On Wednesday, 2008-02-06, 09:02 +0100, Richard Spindler wrote:
> I've made a little Picture about how I think the User Interface of a
> yet to be named Video Editor could be presented.
> 
> http://propirate.net/oracle/zipfiles/UI-concepts-CineTris.png
 
> cehteh, ichthyo, do you think that the backend could be used with
> such a frontend?

Hi Richard,

the structure displayed in your drawing is to a large degree identical to the
"high level model" I am currently developing in the Proc-Layer (the middle
layer of the application). Besides, our design incorporates a Builder, which
creates a uniform network of render nodes that will be implemented/driven by
the backend.
The intention, however, is to shield the user from the low-level model and
present him a view of the high-level model instead.

So, largely, you can take it as agreement. In the following I'll concentrate on
differences in detail, or on things I handle a bit more flexibly than in your
proposal:


*Tracks and Busses*: in the Proc-Layer, material on one track can
connect to several different Busses. I want to get rid of this
often very limiting "leftover" from the old analogue hardware,
where each track corresponds to one reading/writing head and
thus to one bus. Even in the simplest case, for me a "Clip"
contains a video and a stereo audio stream and thus makes a
connection to two busses simultaneously.
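
To make that concrete, here is a minimal C++ sketch (hypothetical names,
not actual Proc-Layer code) of a clip whose streams each connect to their
own bus:

    #include <string>
    #include <vector>

    // hypothetical types, for illustration only
    enum class StreamKind { Video, AudioStereo };

    struct Bus { std::string name; };

    struct Stream
    {
        StreamKind kind;
        Bus*       target;            // each stream connects to its own bus
    };

    struct Clip
    {
        std::vector<Stream> streams;  // even a "simple" clip carries two
    };

    int main()
    {
        Bus video{"master video"}, audio{"master audio"};
        Clip clip{ {{StreamKind::Video,       &video},
                    {StreamKind::AudioStereo, &audio}} };
        // the clip connects to two busses simultaneously,
        // independently of the track it is placed on
    }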

*Tracks are a Tree*: Tracks are nested right from the start. You can
attach properties at every level. The most important property is of course
the output bus connection; further properties are layer ordering,
sound pan position, and MIDI instrument/channel assignments. Each clip
searches for the next applicable properties it needs. (Here the rule-based
approach comes into play, but from the user's view this is not immediately
obvious, because the application will ship with a set of pretty much
conventional rules.)
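
To illustrate the "next applicable property" search, a tiny sketch
(hypothetical code, not the actual rule engine): the clip simply walks up
the track tree until some level defines the requested property:

    #include <map>
    #include <optional>
    #include <string>

    // hypothetical track tree node, for illustration only
    struct Track
    {
        Track* parent = nullptr;
        std::map<std::string, std::string> properties;  // e.g. "bus", "pan"
    };

    // a clip asks for a property and gets the next applicable
    // definition found while walking towards the root of the track tree
    std::optional<std::string>
    lookupProperty (Track const* track, std::string const& key)
    {
        for ( ; track; track = track->parent)
        {
            auto it = track->properties.find (key);
            if (it != track->properties.end())
                return it->second;
        }
        return std::nullopt;  // nothing found: the rules may supply a default
    }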

*One or multiple Timelines?*: The Proc-Layer has N distinct EDLs in
the Session. Each EDL is a collection of objects placed in some way, but
from the user's view it can appear as a separate timeline. Obviously, each
timeline can either be anchored at an absolute time or just use relative
coordinates (it makes no difference). An EDL can be placed in
the form of a meta-clip in another EDL -- that's similar to the
"sub-timeline" in your drawing.

*basic building block = Pipeline*: We are using various names for the
same concept: Node Graph, Bus, Scratch Bus in your drawing, "Port" in
my design and implementation. I keep trying to find a catchy name for this
entity, and currently I am inclined to call it just "Pipeline".
Other opinions or proposals? Some facts to further the discussion
(see also the sketch after this list):
- remember it's the high-level model. It's an abstraction the user works
  with. You can expect this "Pipeline" to be translated into a chain of
  connected nodes in the low-level model, but it is not identical to
  this chain.
- each clip is built around a Pipeline, containing the effects directly
  attached to the clip. At the source end, it will be connected either
  to a source reader + codec, or receive input from some other Pipeline.
- we have global Pipelines ("Busses") forming a matrix similar to the
  mixer strips of a sound mixing console. As said, we can have
  attachment points to these global Pipelines at various levels in
  the tree of tracks, and in several EDLs simultaneously, so each global
  pipeline can receive input from different and varying sources.
- an EDL which is a meta-clip can be configured either to connect
  to the global Pipelines directly, or to connect to the local
  Pipeline of the meta-clip, where we could attach further effects
  and transitions. This configuration can even be mixed: e.g. route
  the sound directly but send the video through the meta-clip.
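
As promised, a small sketch of this "Pipeline" notion (purely hypothetical
code, meant to further the naming/structure discussion): a Pipeline has a
source end, an ordered set of effects, and an exit point which can feed a
global Pipeline:

    #include <string>
    #include <vector>

    // hypothetical high-level "Pipeline", for discussion only
    struct Effect { std::string name; };

    struct Pipeline
    {
        // source end: either a source reader + codec, or another Pipeline
        Pipeline* input = nullptr;       // null means: read from source media
        std::vector<Effect> effects;     // effects attached to this Pipeline
        // exit point: typically routed to a global Pipeline ("Bus")
        Pipeline* output = nullptr;
    };

    int main()
    {
        Pipeline masterVideo;                            // a global Pipeline
        Pipeline clipPipe;                               // the clip's own one
        clipPipe.effects = { {"colour correction"}, {"crop"} };
        clipPipe.output  = &masterVideo;                 // clip feeds the bus
    }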

*limit the Pipelines to linear chains?*: I want to hear your opinion
about limiting the individual Pipeline to a strictly linear chain of
nodes! The only exception would be "send nodes", which can make
arbitrary connections to the input side of other Pipelines. The rationale
is code simplicity. Having a tree or even a network within each Pipeline
makes building, addressing, processing and managing much more complicated,
without providing anything you could not achieve similarly by connecting a
"send node" or the exit point of a Pipeline to another Pipeline (which
could be a global one, or just appear as a clip somewhere in some EDL,
thus even adding the possibility of changing the configuration
in a time-varying fashion).
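
To spell out what "strictly linear plus send nodes" would mean, another
hypothetical sketch: within a Pipeline the nodes are just an ordered list,
and only a send node holds a reference to the input side of some other
Pipeline:

    #include <string>
    #include <vector>

    struct Pipeline;

    // within one Pipeline, the nodes form a strictly linear chain
    struct Node
    {
        std::string effect;             // the processing step at this position
        Pipeline*   sendTo = nullptr;   // set only on "send nodes": taps the
                                        // signal and feeds it to the input
                                        // side of another Pipeline
    };

    struct Pipeline
    {
        std::vector<Node> chain;        // order in the vector = order in the
                                        // chain; no branching inside
        Pipeline* exitPoint = nullptr;  // where the end of the chain is routed
    };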


> There is no central notion of the camera/projector Model, which I think
> could be added anywhere in the Graphs as a Node with an unlimited Number
> of inputs.

Agreed, I am following the same line of thought. Besides, I plan to
shape the default configuration rules such that they insert a "camera" node
in case the source resolution differs from what can be derived for
the clip Pipeline, and similarly insert a "projector" node automatically
when the desired output canvas differs from what can be derived for
the global Pipelines to be output.
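
A hypothetical illustration of such a default rule (not the real rule
system, just the idea): insert the "camera" node only when source and
Pipeline resolution actually differ:

    #include <string>
    #include <vector>

    struct Resolution { int width = 0, height = 0; };

    inline bool operator== (Resolution a, Resolution b)
    {
        return a.width == b.width && a.height == b.height;
    }

    struct Node { std::string kind; };

    // hypothetical default rule: scale/translate only when necessary
    void applyDefaultRule (std::vector<Node>& clipPipeline,
                           Resolution source, Resolution pipeline)
    {
        if (!(source == pipeline))
            clipPipeline.insert (clipPipeline.begin(), Node{"camera"});
        // the analogous rule at the output side inserts a "projector" node
        // when the desired output canvas differs from the bus format
    }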

*Automation* will appear as separate objects (containing some curve,
defined either mathematically or by a dataset). They will be connected
and used by the same approach as any other connection: they can
either be placed/attached directly or be discovered in the context
by a rule-based approach. Meaning you could have local automation
following an individual clip, as well as global automation fixed
relative to the timeline.
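
A minimal sketch of such an automation object (hypothetical, just to
illustrate a curve that is either computed or sampled from a dataset):

    #include <functional>
    #include <map>

    // hypothetical automation object: a curve over time
    struct Automation
    {
        std::function<double(double)> curve;     // defined mathematically ...
        std::map<double, double>      samples;   // ... or by a dataset

        double valueAt (double t) const
        {
            if (curve) return curve (t);
            auto it = samples.lower_bound (t);   // nearest sample at or after t
            return it != samples.end() ? it->second : 0.0;
        }
    };

    // attached to a clip it moves along with the clip ("local automation"),
    // placed in the EDL it stays fixed relative to the timeline ("global").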

Cheers,
and have fun...
        Hermann V.

