Under Wayland, the first case will work as usual, since the compositor has control over the input. The second case would ideally be split between the first one (for launching stuff) and a non-problem, see below. Also, the average user doesn't need such tweaks in the first place. The media player case is mostly solved in DEs by having the compositor handle the media keys and send, e.g., the matching commands to the player. We're left with the fourth case (and half of the second one), which DEs had, until now, little interest in.

To have both, I went with an action-based protocol. I sent a proposal a few years ago and, more recently, I made a cleaner one to allow for global action bindings. To work as expected, clients (or toolkits) wanting to support global bindings would have to implement it. On the other side, compositors would have to implement the protocol and provide their users with a way to link (key, mouse, touch, mind-control) bindings to said actions. The actions are namespaced, and you are expected to use fallbacks. For Mumble, it means you would ask for the "mumble/push-to-talk voip/push-to-talk" action. The user could then have a Mumble-specific binding (or not) and a generic binding. Let's say TeamSpeak is running too: pressing the key for voip/push-to-talk would lead to the action event being sent to either Mumble or TeamSpeak (for example, based on the last focused one). (I should probably write all that to the mailing list, for the record if nothing else.)

I'm not sure it's enough in its current incarnation. If there is any interest from Mumble developers in this solution, I am willing to implement the compositor side for Weston (and all libweston-based compositors) as well as WLC-based compositors (at least via an LD_PRELOAD hack), and I may convince the wlroots developers too (to be used by Sway and way-cooler, two important tiling compositors).

At least it doesn't map fully onto the way global shortcuts work in Mumble currently. (We're willing to use a different UI if we need to, for different platforms.) It seems like, if we were to use the current API, Mumble would simply bind to "mumble/push-to-talk", "mumble/volume-up", "mumble/volume-down", etc. That means the UI for shortcuts would be less than ideal for users, and we wouldn't be able to show the actual bound key to the user, because that part is handled by the compositor. Perhaps we need a way to query which keys/events are bound to an action, so we can show that to the user? How would the flow work from a user perspective? Do you configure the actions outside the app itself? Otherwise, how do you map a keypress/event to an action using the exposed API? Kind of ties into my previous comment, but I suppose the current API requires us to bind to the actions on startup, correct? If we don't, we won't receive notifications when the action is triggered?

Not allowing sniffing on evdev directly is also a goal (something most OSes now do right, because the device nodes are root-owned, as you noticed). For now, there is no code behind my proposal, because nobody actually had (code-backed) interest in it.
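The flow discussed above can be modeled as a toy sketch, to make the fallback and dispatch semantics concrete. This is not the actual protocol; `ToyCompositor`, `bind`, and `key_press` are invented names, and the "deliver to the last focused client" rule is just the example policy from the thread. A client registers a space-separated action list at startup, most specific first; the user's configuration maps keys to action names; on a key press, the compositor hands the action event to the most recently focused client that asked for that action, directly or via fallback.

```python
# Hypothetical sketch of the action-based binding flow (not the real
# protocol; all names here are invented for illustration).

class ToyCompositor:
    def __init__(self, key_to_action):
        self.key_to_action = key_to_action  # user config: key -> bound action name
        self.clients = []                   # (client name, requested action list)
        self.focus_order = []               # most recently focused client last

    def bind(self, client, actions):
        """Register a client's actions with fallbacks, e.g.
        'mumble/push-to-talk voip/push-to-talk'.  Done at startup;
        otherwise the client never receives the action events."""
        self.clients.append((client, actions.split()))
        self.focus_order.append(client)

    def focus(self, client):
        """Mark a client as the most recently focused one."""
        self.focus_order.remove(client)
        self.focus_order.append(client)

    def key_press(self, key):
        """Return the (client, action) pair that receives the event,
        or None if the key is not bound to any requested action."""
        action = self.key_to_action.get(key)
        if action is None:
            return None
        for client in reversed(self.focus_order):   # last focused wins
            for name, wanted in self.clients:
                if name == client and action in wanted:
                    return client, action
        return None

# The user only configured the generic voip action, not the
# application-specific ones, so both clients match via fallback.
comp = ToyCompositor({"F13": "voip/push-to-talk"})
comp.bind("mumble", "mumble/push-to-talk voip/push-to-talk")
comp.bind("teamspeak", "teamspeak/push-to-talk voip/push-to-talk")

comp.focus("mumble")
print(comp.key_press("F13"))   # -> ('mumble', 'voip/push-to-talk')
comp.focus("teamspeak")
print(comp.key_press("F13"))   # -> ('teamspeak', 'voip/push-to-talk')
```

A binding table keyed on the application-specific name (e.g. `{"F13": "mumble/push-to-talk"}`) would instead route the key to Mumble only, which is the point of namespacing: one generic user binding, with per-application overrides when wanted.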