UI Redesign - interactive devices' spec

[size=150]== Goals ==[/size]


Monitor:
  • minimum monitor size 1440x900 (?)
  • multiple monitors supported

Keyboard:
  • standard keyboard
  • number pad included (?)

Mouse:
  • three buttons (LMB, MMB, RMB)
  • scroll wheel available (?)

Tablet:
  • pen (stylus, eraser)
  • additional buttons (?)

[size=150]== Non Goals ==[/size]

Touch screen

What are your opinions, especially on the items marked with a “?”?

I only use the touch pad on my laptop. I would REALLY like scroll wheel support so I can scroll the layers list. Perhaps the wheel could also control the zoom on the work area?


I often do animation work on my tablet PC, which gives me more direct control than working through a Wacom tablet (which I also use, on my main machine).
The tablet pc does have a smaller screen size though, 1280x800. This is also the resolution of the smallest 12" Cintiq which I guess is also fairly common in graphics work.
So even if larger or multiple monitors are better, I think smaller displays are quite common. Even if it is not optimal, we need to make sure Synfig works on those displays and nothing breaks. (I’ve seen horrible examples of software where menus go outside the screen real estate, etc.)

For multiple monitors we would need a tear-off option for panels.

Also, I don’t think we should assume number pads. Blender does this: it relies heavily on the number pad for switching views, and that cripples you quite a bit when working on a laptop (or tablet PC) without one.

For tablet buttons, I think we can leave that to the system, where you can assign your own custom shortcuts to them.

If we evolve the shortcuts system with a GUI settings window for it, then we won’t have to worry too much about input devices. Users can set up their system however they like.
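To make the idea concrete, here is a minimal sketch of such a remappable shortcut table. This is illustrative only: the key strings and action names (e.g. "edit.undo", "canvas.zoom_in") are assumptions, not real Synfig identifiers.

```python
# Hypothetical default bindings; action names are illustrative,
# not real Synfig Studio identifiers.
DEFAULT_SHORTCUTS = {
    "Ctrl+Z": "edit.undo",
    "Ctrl+Shift+Z": "edit.redo",
    "Ctrl+Plus": "canvas.zoom_in",
}

class ShortcutMap:
    """A user-editable mapping from key combinations to action names."""

    def __init__(self, defaults=None):
        self.bindings = dict(defaults or DEFAULT_SHORTCUTS)

    def rebind(self, key, action):
        # A GUI settings window would call this when the user edits a row.
        self.bindings[key] = action

    def action_for(self, key):
        # Returns None for unbound keys, so the caller can ignore them.
        return self.bindings.get(key)
```

Because the map is just data, a settings dialog only needs to list and edit rows; no device-specific code is involved.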

I wouldn’t limit the Synfig UI in this area in any way. I would just rely on whatever the window manager offers you at any time.

Why discard touch screens? If the input gives you one action (click here, zoom there, drag from here to there, …), why should the response to that action depend on which device sent it?

I imagine Synfig Studio as a remote device that just receives action requests with their own arguments. There should be an intermediate interface, a UI adapter for each situation, that receives inputs from different devices and interprets them into Synfig’s action language.
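The adapter idea above can be sketched roughly as follows. Each device adapter turns raw events into the same abstract action tuples, so the core never knows which device sent them. All names here (the event fields, "canvas.zoom") are hypothetical, not real Synfig APIs.

```python
def mouse_adapter(event):
    """Translate a raw mouse event into an abstract (action, args) tuple."""
    # Example event: {"type": "wheel", "delta": -1}
    if event["type"] == "wheel":
        return ("canvas.zoom", {"amount": event["delta"]})
    return None  # event not mapped to any action

def touch_adapter(event):
    """Translate a raw touch gesture into the same abstract action language."""
    # Example event: {"type": "pinch", "scale": 1.5}
    if event["type"] == "pinch":
        return ("canvas.zoom", {"amount": event["scale"] - 1.0})
    return None

def dispatch(action, handlers):
    """The core only sees action names and arguments, never devices."""
    name, args = action
    handlers[name](**args)
```

A Kinect (or any future device) would then just be one more adapter emitting the same action tuples.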

I would like to use Synfig Studio with Kinect one of those days :mrgreen:

None of the Ctrl+ functions seem to work on my machine. They may just not work with the touch pad at all.