
[experiment] Interactions #3092

Open · wants to merge 37 commits into main
Conversation

@steveruizok (Collaborator) commented Mar 10, 2024

This PR introduces an "interactions" API for wrapping up interactions.

It adds a "brushing" interaction to demonstrate the API.

Background

In v1, we had interactions for rotating, translating, resizing, etc., which were essentially state machines designed to handle different user interactions. The editor could only have one interaction at a time and would update it when keys were pressed, when the pointer moved, etc.

This abstraction, while useful, was incomplete for handling the complexity of the canvas.

In v2, I replaced interactions with a general state chart for handling interactions. This was much more flexible; however, it also led to confusion when multiple states needed to share the same code. For example, when resizing a shape we use the select.resizing state node, but we also use that same state node when creating certain shapes. There are other concepts (like the "tool mask") that are awkward patches for this fact.

Why not both?

This PR re-introduces interactions as an abstraction that sits alongside the state chart.

interactions (2.0)

An interaction is a class instance that has several methods: start, update, complete, cancel, interrupt, and dispose. When an interaction is created, it will update automatically on each tick, whenever keys are pressed, and when the user cancels or completes their interaction (e.g. by calling editor.complete()).
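As a minimal sketch of that lifecycle (the method names match the list above, but the constructor shape, fields, and generics are assumptions, not the actual API in this PR):

```ts
import { Editor } from '@tldraw/editor'

// Illustrative sketch only: method names come from the description above;
// everything else is an assumption.
export abstract class Interaction<Info = object> {
	constructor(
		public editor: Editor,
		public info: Info
	) {}

	// Called once when the interaction begins.
	start() {}
	// Called on every tick and whenever keys are pressed or released.
	update() {}
	// Called when the user completes the interaction, e.g. via editor.complete().
	complete() {}
	// Called when the user cancels, e.g. by pressing Escape.
	cancel() {}
	// Called when something external interrupts the interaction.
	interrupt() {}
	// Clean up any listeners or temporary state.
	dispose() {}
}
```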

interactions are purely additive. They solve the problem of "sharing code". In the case of the geo shape, the geo.resizing state would create and manage a resizing interaction. The select.resizing state would also create and manage a resizing interaction.
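A rough sketch of that reuse, building on the Interaction sketch above (ResizingInteraction, the info shape, and the state node body are hypothetical): both select.resizing and geo.resizing could own an instance of the same class rather than carrying their own copy of the resize logic.

```ts
import { StateNode } from '@tldraw/editor'

// Hypothetical shared interaction; the actual resizing logic would live here.
class ResizingInteraction extends Interaction<{ handle: string }> {
	// ...resize the selected (or partially-created) shapes on each update()...
}

// Both select.resizing and geo.resizing could look roughly like this.
export class Resizing extends StateNode {
	static override id = 'resizing'

	private interaction?: ResizingInteraction

	override onEnter = (info: { handle: string }) => {
		this.interaction = new ResizingInteraction(this.editor, info)
		this.interaction.start()
	}

	override onExit = () => {
		this.interaction?.dispose()
		this.interaction = undefined
	}
}
```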

Multi-interactions?

There's no reason why we can't have multiple interactions running at the same time, which makes them ideal for multi-touch and other multiple input scenarios. If anything, the constraint here has to do with the way our editor manages inputs. We could add multiple inputs later and use interactions to keep track of their changes.
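Purely as a sketch of that direction (the per-pointer routing and the constructor arguments below are assumptions; only the brushing interaction itself exists in this PR), concurrent interactions could be tracked per pointer:

```ts
// Speculative sketch: one interaction per active pointer. BrushingInteraction
// stands in for the interaction added in this PR; its info shape is made up.
class BrushingInteraction extends Interaction<{ pointerId: number }> {}

const activeInteractions = new Map<number, Interaction>()

function handlePointerDown(editor: Editor, e: PointerEvent) {
	const interaction = new BrushingInteraction(editor, { pointerId: e.pointerId })
	interaction.start()
	activeInteractions.set(e.pointerId, interaction)
}

function handlePointerUp(e: PointerEvent) {
	activeInteractions.get(e.pointerId)?.complete()
	activeInteractions.delete(e.pointerId)
}
```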

Change Type

  • major

Test Plan

  1. Use the brush tool.
  • Unit Tests

Release Notes

  • [dev] Adds interactions.

@huppy-bot added the minor ("Increment the minor version when merged") label on Mar 10, 2024

@steveruizok (Collaborator, Author)

This work follows some discussions that I'd had with @SomeHats on a similar API for gestures. I believe the designs she has around gestures are more oriented toward a replacement for the state chart. This PR is a step toward extracting an existing pattern "hiding" in many of our state nodes, where private "complete / cancel / update" methods were used.

@steveruizok (Collaborator, Author)

Next steps here would be to:

  • move logic from several state nodes into sessions
  • add state nodes for shared sessions (e.g. arrow.dragging_handle)

@steveruizok (Collaborator, Author)

The design in this PR is definitely early, but as I've been going through the different interactions I'm starting to see some patterns.

Gesture / interaction patterns

> This is a cool idea! I have two main thoughts:
>
> 1: Is there a better name than "session"? session feels like a very loaded term in the world of frontend, and we're using it for something very different here. Some other ideas:
>
>   • DragGesture
>   • DragInteraction
>   • Interaction
>
> I think DragInteraction or DragGesture would be most appropriate - are any of these not a drag?

Not all of them. For example, the "nudging crop" or "nudging shapes" are keyboard-based interactions.

For the most part, it looks like this:

[image]

For the keyboard interactions, it looks like this:

[image]

It's worth continuing in order to see if more patterns emerge. Maybe you see some patterns here?
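Since the screenshots above don't survive here, a rough guess at the two shapes being compared, again building on the Interaction sketch from earlier (the class names, the fields, and the logic are all hypothetical):

```ts
// Pointer-driven (hypothetical): reads the current pointer position each tick.
class TranslatingInteraction extends Interaction {
	override update() {
		const { originPagePoint, currentPagePoint } = this.editor.inputs
		const dx = currentPagePoint.x - originPagePoint.x
		const dy = currentPagePoint.y - originPagePoint.y
		// ...move the selected shapes by (dx, dy)...
	}
}

// Keyboard-driven (hypothetical): reads which keys are held each tick.
class NudgingInteraction extends Interaction {
	override update() {
		const { keys } = this.editor.inputs
		const step = keys.has('ShiftLeft') ? 10 : 1
		// ...nudge the selection by `step` in the direction of the held arrow key...
	}
}
```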

Other benefits

> 2: this is a big new concept. it's a nice solution to code-reuse in the state chart, but doesn't solve the main problem we have with the state chart right now: how to modify some of our existing interactions, or add your own tied to certain on-canvas triggers. I think there's potential for this to help with the first of those (we could pass in a sessions object similar to components) but I don't know if this would help with the second. i'm not sure if the code-reuse thing on its own justifies such a big new concept though (we could e.g. duplicate certain state nodes across the tree instead)

There are a few more benefits here:

  • code reuse (removes a TON of complexity / fragility)
  • testable separately
  • easier separation between interactions (ie multiple handle dragging sessions rather than one state that handles them all)
  • the ability to spin up sessions without changing the state (our separate shape rotation code could be replaced with a session that the editor launches, updates, and completes; see the sketch after this list)
  • many sessions are generic and can be moved to the @tldraw/editor package, rather than being embedded in @tldraw/tldraw
  • is a step toward multiple concurrent interactions, ie multitouch or multi-selection
  • makes the state tree significantly lighter
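For the "launches, updates, and completes" point, a hypothetical usage sketch (RotatingInteraction and its options are made up for illustration; only the lifecycle methods come from this PR):

```ts
// Hypothetical subclass of the Interaction sketch above.
class RotatingInteraction extends Interaction<{ snapToNearestDegree: boolean }> {}

// An existing Editor instance drives the whole lifecycle directly,
// with no state-chart transition involved.
declare const editor: Editor

const rotating = new RotatingInteraction(editor, { snapToNearestDegree: false })
rotating.start()
// ...the editor would call rotating.update() on each tick while the gesture is live...
rotating.complete() // or rotating.cancel()
rotating.dispose()
```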

Eat the state tree?

It's possible that a pattern like this could eat the state tree from the bottom up, for example by combining pointing / dragging states, and then tools, etc. I've done some of this already (for example, pointing rotate corner / rotating), though it may be better to presume that the session begins when the interaction begins, i.e. when dragging has already occurred, so that they can be better used in isolation.

I haven't looked at nested sessions, but that's possible. The hard part is tracking events as they're passed down from parent to child and handled in order at every step in the tree. That's something that the state tree does very well; this model is more ambiguous as it relates to the order of events.

The updates in sessions are driven on tick, however, so events like pointer-up or key-down will occur "between" ticks.

UI elements

Another issue is that the user interface relies on the state path to decide which UI elements to display. If we did bless sessions (or whatever we end up calling them), then the editor could register them in a similar way to how it registers open menus, allowing the UI to switch off of that. I'm not sure I would want to bundle this logic with the UI, though. This would be another big question as we explore the architecture around interactions.

@SomeHats (Contributor)

i'm sold on the benefits! I'd really recommend reading through the ios gesture recogniser docs tho: https://developer.apple.com/documentation/uikit/touches_presses_and_gestures (hard to point to just one section here as there's a lot of concepts and cool stuff, but take a look at the gesture recogniser, the APIs they take, and the way you establish relationships between different potential gestures).

re names: how about just interaction then? i'd really like to avoid session if we can as it's so overloaded.

honestly these get pretty close to solving a lot more of our problems around customisation too. Right now the state chart does a couple things:

  1. keep track of what actual state we're in (selected tool, cropping or not, etc)
  2. when a new interaction begins, decide where to route it (ie a big ol switch statement on event.target)
  3. handle the interaction itself

sessions take 3 away from this. 1 I think is actually what I'd like the state chart to evolve to doing exclusively. 2 is sort of the glue in the middle, which i think is where a lot of my exploration has been focused - using a single central gesture recogniser with a bunch of potential session-spawning "targets" to handle the routing part more declaratively
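A rough, purely speculative sketch of that "targets" idea (none of these types exist in the codebase; TLPointerEventInfo is the editor's existing pointer event payload, and Interaction refers to the sketch earlier in this thread):

```ts
import { Editor, TLPointerEventInfo } from '@tldraw/editor'

// Speculative: each target declares when it applies and which interaction it
// spawns, replacing the big switch on event.target with data the editor can
// consult (and that users could extend).
interface InteractionTarget {
	hitTest(editor: Editor, event: TLPointerEventInfo): boolean
	spawn(editor: Editor, event: TLPointerEventInfo): Interaction
}

function routePointerDown(
	editor: Editor,
	event: TLPointerEventInfo,
	targets: InteractionTarget[]
) {
	// First matching target wins; relationships between competing gestures
	// (as in UIKit's gesture recognisers) would need something richer.
	const target = targets.find((t) => t.hitTest(editor, event))
	return target?.spawn(editor, event)
}
```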

@steveruizok changed the title from "[experiment] Sessions" to "[experiment] Interactions" on Mar 15, 2024
@huppy-bot added the major ("Increment the major version when merged") label on Mar 15, 2024
@steveruizok (Collaborator, Author)

> i'm sold on the benefits! I'd really recommend reading through the ios gesture recogniser docs tho: https://developer.apple.com/documentation/uikit/touches_presses_and_gestures (hard to point to just one section here as there's a lot of concepts and cool stuff, but take a look at the gesture recogniser, the APIs they take, and the way you establish relationships between different potential gestures).
>
> re names: how about just interaction then? i'd really like to avoid session if we can as it's so overloaded.

Sounds good to me, let's use interaction for now.

There's clearly a relationship here between the particular interaction and the gesture occurring simultaneously, but I'd like to have everything on the table before we start finding the patterns. There are still one or two things I need to move over, but once we have all the interactions covered we can continue with that work. I may, like, print out all these different interactions so that we can compare them.

There are a few places I'd like to clean up:

  • some of these interactions set or update the cursor at the start of, during, or at the end of the interaction; maybe these should get lifted out to the parent state? Some of them are different for interactions that occur while creating a shape, where we stick with the cross cursor

> honestly these get pretty close to solving a lot more of our problems around customisation too. Right now the state chart does a couple things:
>
>   1. keep track of what actual state we're in (selected tool, cropping or not, etc)
>   2. when a new interaction begins, decide where to route it (ie a big ol switch statement on event.target)
>   3. handle the interaction itself
>
> sessions take 3 away from this. 1 I think is actually what I'd like the state chart to evolve to doing exclusively. 2 is sort of the glue in the middle, which i think is where a lot of my exploration has been focused - using a single central gesture recogniser with a bunch of potential session-spawning "targets" to handle the routing part more declaratively

Yeah, if we can separate our interactions logic/lifecycles from the state chart, then that gives us better room to explore alternative ways of triggering those interactions. One of my favorite things about the current system is the degree to which we can test / drive it separately from the UI.

As general as the state chart is, it's been able to handle almost everything that we've wanted from it. There's still a lot of logic in the state chart, for example handling the "shift+click" interactions around the draw or line shape when using those tools, and of course the select.idle state where we switch based on what we're hovering. Having targets would solve the second part but not the first? Anyway, big frog, many meals

@steveruizok (Collaborator, Author)

We'll need to compare with some of the changes that Mitja worked on, and maybe diff against the original branch that this work started from; I'm not sure I merged these correctly.
