Hi Andy,

Thanks for the quick response!

Let me see if I can clarify some of your questions.


Thank you, John, for a detailed writeup.

Before going into details, I would like to ask you to clarify some of the aspects of the alternative proposal:

1. What is the API for setting user event handlers vs. behavior event handlers?

User handlers stay backwards compatible, so no change there; they can be installed as usual.  I only want to prevent mixing them up with FX internals, as that makes the call order unpredictable, depending on when exactly your event handler was installed/reinstalled and what type of events it is interested in.

Handlers for behaviors have a lot of freedom in how they can be dealt with, as it will all be internal code (once users can create behaviors, a small API can be exposed for this when installing the behavior).  The event handler system could be extended with some kind of priority system, or we could create separate lists for behavioral handlers.  These can use the same EventTypes, but marked as behavioral -- no need to create new ones; a method on EventType to derive a behavioral (or lower priority) EventType from the current one should suffice.

The separate list proposal seems easiest; as the event system already maintains lists per type, adding new types will separate them easily.  The only thing the event system then needs to do is treat behavioral event handlers as lowest priority (either for a complete capture/bubble phase or per Node, not sure yet what would be best here).
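
To illustrate that last idea, a minimal sketch using existing API (the asBehavioral helper below is purely hypothetical; in the real API it could be an instance method on EventType itself):

    // Hypothetical sketch: derive a lower-priority "behavioral" variant from an
    // existing EventType; the event system would keep handlers for such types in
    // a separate list and invoke them after all user handlers.
    static <T extends Event> EventType<T> asBehavioral(EventType<T> type) {
        return new EventType<>(type, type.getName() + "_BEHAVIORAL");
    }

    // usage (sketch):
    EventType<KeyEvent> behavioralKeyPressed = asBehavioral(KeyEvent.KEY_PRESSED);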

2. Is there a way to conditionally invoke the default (behavior) event handler if the user event handler decides to do so?

I see two options:

1. The user handler mimics some behavior and fires the same event (i.e. control.fireEvent(new ButtonEvent(ButtonEvent.BUTTON_FIRE)))
2. The user can install an event handler/filter to block the behavior events when they do not pass a condition
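
A sketch of the second option (ButtonEvent / BUTTON_FIRE are the proposed, not existing, types, and allowFiring() is a hypothetical application check):

    // Sketch: a filter that blocks the behavioral BUTTON_FIRE event unless an
    // application condition holds; consuming it prevents the default handling.
    button.addEventFilter(ButtonEvent.BUTTON_FIRE, e -> {
        if (!allowFiring()) {
            e.consume();
        }
    });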

3. How are the key bindings registered / unregistered / reverted to default?

Initially, you would do this by registering your own event handler that overrides an existing key binding; consuming the event will block the default behavior.  Removing your event handler will revert to the default.  The only thing that may need additional work is when you want to block the event from being used by the behavior, but still want to let it bubble up.  My other post had a suggestion to be able to mark the event in some way.
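
A sketch, assuming user handlers get priority over behavior handlers as argued in the original post (doCustomAction() is a hypothetical replacement action):

    // Sketch: overriding the SPACE binding on a control; consuming the KeyEvent
    // prevents the default behavior from acting on it, and removing this handler
    // reverts to the default binding.
    control.addEventHandler(KeyEvent.KEY_PRESSED, e -> {
        if (e.getCode() == KeyCode.SPACE) {
            doCustomAction();
            e.consume();
        }
    });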

More control can be gained by subclassing or composing an existing behavior; see below.

4. How are the functions mapped to key bindings registered / unregistered / reverted?

Functions can be blocked by consuming the relevant ButtonEvent; removing that handler will revert it to defaults.
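
As a sketch (ButtonEvent again being the proposed type):

    // Sketch: block the "fire" function entirely by consuming its event...
    EventHandler<ButtonEvent> block = Event::consume;
    control.addEventFilter(ButtonEvent.BUTTON_FIRE, block);

    // ...and revert to the default later by removing the filter again.
    control.removeEventFilter(ButtonEvent.BUTTON_FIRE, block);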

Influencing existing behaviors directly should, I think, be done by subclassing the behavior or composing it (if behaviors become public).  I have some ideas here, where you are passed a context object:

    interface Behavior<C extends Control> {
        void install(BehaviorContext<C> context);
    }

The BehaviorContext has primarily a method to install an event handler:

    <E extends Event> void addEventHandler(EventType<E> type, BiConsumer<C, E> consumer);

...which is only slightly different from what InputMap offers.

The context can further have methods that are more convenient:

    void addKeyMapping(KeyCode keyCode, EventType<KeyEvent> type, BiConsumer<C, KeyEvent> consumer);

The context can also offer methods to remove mappings, so subclassed or composed behaviors can remove mappings they want to specifically disable.  Other options include providing a predicate to make them conditional in a subclass/composition.
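
A fuller sketch of what such a context could offer (all method names beyond the first two are hypothetical):

    // Hypothetical sketch of BehaviorContext; addEventHandler and addKeyMapping
    // are the methods mentioned above, the others illustrate removing and
    // conditionalizing mappings for subclassed/composed behaviors.
    interface BehaviorContext<C extends Control> {
        <E extends Event> void addEventHandler(EventType<E> type, BiConsumer<C, E> consumer);

        void addKeyMapping(KeyCode keyCode, EventType<KeyEvent> type, BiConsumer<C, KeyEvent> consumer);

        // remove a mapping added earlier, for example by a base behavior
        void removeKeyMapping(KeyCode keyCode, EventType<KeyEvent> type);

        // only run an existing mapping when the predicate passes
        void addCondition(KeyCode keyCode, EventType<KeyEvent> type, Predicate<C> condition);
    }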

A subclassed or composed behavior could then look something like:

      class MyBehavior implements Behavior<Button> {
          public void install(BehaviorContext<Button> context) {
              // call the behavior you wish to base your behavior on:
              ButtonBehavior.getInstance().install(context);

              // call methods on context to add/remove/remap/conditionalize things that ButtonBehavior did
              // call methods on context to add your own custom mappings
          }
      }

Installing the custom behavior is then a matter of passing it to a Control:

      control.setBehavior(new MyBehavior());  // can be a singleton, but not static as it implements interface

The `install` method can keep state if needed by associating it with the callbacks it installs:

      State state = new State();
      BiConsumer<Button, KeyEvent> bc = (button, event) -> { /* access state here */ };

Alternatively, the State class can have methods like:

    class State {
        boolean keyDown;  // some state

        void keyPressed(Button control, KeyEvent event) {
            if (!control.isPressed() && !control.isArmed()) {
                keyDown = true;
                control.fireEvent(new ButtonEvent(ButtonEvent.BUTTON_ARM));
            }
        }
    }

And when installing, the handler can be referred to as "state::keyPressed".
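
Putting these pieces together, an install method could look like this (sketch, using the hypothetical context methods from above):

    class MyButtonBehavior implements Behavior<Button> {
        @Override
        public void install(BehaviorContext<Button> context) {
            State state = new State();  // per-install state, captured by the handler below

            // state::keyPressed matches BiConsumer<Button, KeyEvent>
            context.addKeyMapping(KeyCode.SPACE, KeyEvent.KEY_PRESSED, state::keyPressed);
        }
    }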

5. Propagating the new events (TextInputControl.SELECT_ALL) up to unsuspecting parents represents a bit of a departure; how will that impact all the existing applications?  Probably not much since they will be ignored, with a bit of overhead due to memory allocation, right?

It's pretty innocuous; the new event types will be ignored.  There is some overhead associated with using the event system for this purpose (although I think it is not outside its purpose), but as it happens in the context of other event processing, it's not an order of magnitude difference.  Some memory is indeed allocated for the event, as it already is for the events we're reacting to.  The event system is, I think, reasonably optimized to skip controls that did not install handlers for a given type; most of the time I'd expect, say, a ButtonEvent to travel from the root node immediately to the Button control, skipping all parents.

It may also bring some unexpected bonuses, as the events can be interacted with at higher levels as well (a group of Buttons could have handlers that do something with ARM/DISARM/FIRE).  It may also enable logging of events at a more semantic level (a button was fired, some text was selected); perhaps it even has applications in an undo/redo system.  I certainly see some testing applications: behaviors can be tested to send out the right events when they're interacted with correctly, while controls can use the more semantic events directly for testing purposes instead of having to simulate clicks/keypresses.
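
For example (sketch, using the proposed ButtonEvent type; buttonBar is a hypothetical container), a parent could observe semantic events bubbling up from any of its children:

    // Sketch: log every fired Button inside this container, regardless of
    // whether it was triggered by keyboard, mouse or a programmatic fireEvent.
    buttonBar.addEventHandler(ButtonEvent.BUTTON_FIRE, e ->
        System.out.println("Button fired: " + e.getTarget()));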

6. If I read it right, it is impossible to redefine the behavior of SomeControl.someFunction() except by subclassing, correct?

I'm not entirely sure what you mean by that.  Buttons have methods like `arm`, `disarm` and `fire`.  Changing how those work would require subclassing.

7. I wonder if this will require a more drastic re-write of all the skins?

If Behaviors become public, I think it would be best that Skins are not reliant on them (I think some are?).  As long as we're only toying with the event handlers, I think Skins are unaffected (unless of course some Skins are doing behavioral type stuff that they shouldn't be doing, that would be part of a clean up then).

Skins probably shouldn't be accessing the behaviors anyway, as I can imagine user controls without skins may still want to use behaviors.  In other words, I think Skins and Behaviors should be completely separate things that don't interact with each other, unless it is via the Control.  That will definitely help to keep things untangled.

I think this alternative proposal can also be done one control, and one behavior, at a time.  I've primarily looked at ButtonBehavior so far, and that seems pretty trivial to change.

Thanks.

--John

Thank you

-andy

From: openjfx-dev <openjfx-dev-r...@openjdk.org> on behalf of John Hendrikx <john.hendr...@gmail.com>
Date: Monday, October 16, 2023 at 04:51
To: openjfx-dev@openjdk.org <openjfx-dev@openjdk.org>
Subject: Alternative approach for behaviors, leveraging existing event system

Hi Andy, hi list,

I've had the weekend to think about the proposal made by Andy Goryachev
to make some of the API's surrounding InputMap / Behaviors public.

I'm having some nagging doubts if that proposal is really the way
forward, and I'd like to explore a different approach which leverages
more of FX's existing event infrastructure.

First, let me repeat an earlier observation: I think event handlers
installed by users should always have priority over handlers installed
by FX behaviors. The reasoning here is that the user (the developer in
this case) should be in control.  Just like CSS will back off when the
user changes values directly, so should default behaviors.  For this
proposal to have merit, this needs to be addressed.

One thing that I think Andy's proposal addresses very nicely is the need
for an indirection between low level key and mouse events and their
associated behavior. Depending on the platform, or even platform
configuration, certain keys and mouse events will result in certain high
level actions.  Which keys and mouse events these are is platform specific.  A
user wishing to change this behavior should not need to be aware of how
these key and mouse events are mapped to a behavior.

I however think this can be addressed in a different way, and I will use
the Button control to illustrate this, as it is already doing something
similar out of the box.

The Button control will trigger itself when a specific combination of
key/mouse events occurs.  In theory, a user could install event handlers
to check if the mouse was released over the button, and then perform
some kind of action that the button is supposed to perform.  In practice
however, this is tricky, and would require mimicking the whole process to
ensure the mouse was also first **pressed** on that button, that it wasn't
moved outside the clickable area, etc.

Obviously expecting a user to install the necessary event handlers to
detect button presses based on key and mouse events is a ridiculous
expectation, and so Button offers a much simpler alternative: the
ActionEvent; this is a high level event that encapsulates several other
events and translates them into a new concept.  It is triggered when all
the criteria to fire the button have been met without the user needing
to be aware of what those are.

I think the strategy of translating low level events to high level
events is a really good one, and suitable for reuse for other purposes.

One such purpose is converting platform dependent events into platform
independent ones. Instead of needing to know the exact key press that
would fire a Button, there can be an event that can fire a button.  Such
a specific event can be filtered and listened for as usual, it can be
redirected, blocked and it can be triggered by anyone for any reason.

For a Button, the sequence of events is normally this:

- User presses SPACE, resulting in a KeyEvent
- Behavior receives KeyEvent and arms the button
- User releases SPACE, resulting in a KeyEvent
- Behavior receives KeyEvent, disarms and fires the button
- Control fires an ActionEvent

What I'm proposing is to change it to:

- User presses SPACE, resulting in a KeyEvent
- Behavior receives KeyEvent, and sends out ButtonEvent.BUTTON_ARM
- Control receives BUTTON_ARM, and arms the button
- User releases SPACE, resulting in a KeyEvent
- Behavior receives KeyEvent and sends out ButtonEvent.BUTTON_FIRE
- Control receives BUTTON_FIRE, disarms the button and fires an ActionEvent

The above basically adds an event based indirection. Normally it is
KeyEvent -> ActionEvent, but now it would be KeyEvent -> ButtonEvent ->
ActionEvent. The user now has the option of hooking into the mechanics
of a Button at several different levels:

- The "raw" level, listening for raw key/mouse events, useful for
creating custom behavior that can be platform specific
- The "interpreted" level, listening for things like ARM, DISARM, FIRE,
SELECT_NEXT_WORD, SELECT_ALL, etc...; these are platform independent
- The "application" level, primarily action type events

There is sufficient precedent for such a system.  Action events are a
good example, but another example is the DnD events, which are created
by looking at raw mouse events, effectively interpreting magic mouse
movements and presses into more useful DnD events.

The event based indirection here is very similar to the FunctionTag
indirection in Andy's proposal.  Instead of FunctionTags, there would be
new events defined:

     class ButtonEvent extends Event {
         public static final EventType<ButtonEvent> ANY = ... ;
         public static final EventType<ButtonEvent> BUTTON_ARM = ... ;
         public static final EventType<ButtonEvent> BUTTON_DISARM = ... ;
         public static final EventType<ButtonEvent> BUTTON_FIRE = ... ;
     }

     class TextFieldEvent extends Event {
         public static final EventType<TextFieldEvent> ANY = ... ;
         public static final EventType<TextFieldEvent> SELECT_ALL = ... ;
         public static final EventType<TextFieldEvent> SELECT_NEXT_WORD = ... ;
     }

These events are similarly publicly accessible and static as
FunctionTags would be.

The internal Behavior classes would shift from translating + executing a
behavior to only translating it.  The Control would be actually
executing the behavior.
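
In code, the split could look roughly like this (sketch; ButtonEvent is
the proposed type and the wiring is illustrative only):

    // Inside the (internal) ButtonBehavior: translate only.
    control.addEventHandler(KeyEvent.KEY_RELEASED, e -> {
        if (e.getCode() == KeyCode.SPACE) {
            control.fireEvent(new ButtonEvent(ButtonEvent.BUTTON_FIRE));
            e.consume();
        }
    });

    // Inside Button itself: execute.
    addEventHandler(ButtonEvent.BUTTON_FIRE, e -> {
        disarm();
        fire();  // fires the ActionEvent as before
    });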

This also simplifies the role of Behaviors, and maybe even clarifies it;
a Behavior's purpose is to translate platform dependent to platform
independent events, but not to act on those events. Acting upon the
events will be squarely the domain of the control.  As this pinpoints
better what a Behavior's purpose is, and as it simplifies their
implementation (event translation only), it may be the way that leads to
them becoming public as well.

---

I've used a similar mechanism as described above in one of my FX
Applications; key bindings are defined in a configuration file:

     BACKSPACE: navigateBack
     LEFT: player.position:subtract(10000)
     RIGHT: player.position:add(10000)
     P: player.paused:toggle
     SPACE: player.paused:toggle
     I:
         - overlayVisible:toggle
         - showInfo:trigger

When the right key is pressed (and it is not consumed by anything), it
is translated to a new higher level event by a generic key binding
system.  This event is fired to the same target (the focused node).  If
the high level event is consumed, the action was successfully triggered;
if not, and a key has more than one mapping, another event is sent out
that may get consumed or not.  If none of the high level events were
consumed, the low level event that triggered it is allowed to propagate
as usual.

The advantage of this system is obvious; the controls involved can keep
the action that needs to be performed separate from the exact key (or
something else) that may trigger it.  For "navigateBack" for example, it
is also an option to use the mouse; controls need not be aware of this
at all.  These events also bubble up; a nested control that has several
states may consume "navigateBack" until it has reached its local "top
level", and only then let it bubble up for one of its parents to act on.

--John
