Thanks for your comments, Hermann.

It seems to me that using only a float RGBA color model would be quite a
good solution. Plugins would become simpler and should behave
coherently for any video, and precision should also be fine. The
drawback I see is that float requires four times more memory than
"YUVA 8 bits per channel", and computation would probably be slower
for floats as well.

> - additionally, float is the most precise and desirable model,
>  but lacks support in (consumer) hardware

What did you mean by this? If the float model were used only as the
"inner" model in Cinelerra's pipeline, and converted to 8-bit YUVA (or
RGBA) only just before a frame is output to the compositor window or
to the rendered output, it should not require any special hardware.
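That boundary conversion can be as simple as scaling and clamping each channel; a sketch (not actual Cinelerra code):

```c
/* Convert one float channel value (nominal range 0.0..1.0) to 8 bits,
 * clamping out-of-range values instead of letting them wrap around.  */
static unsigned char to_u8(float v)
{
    if (v <= 0.0f) return 0;
    if (v >= 1.0f) return 255;
    return (unsigned char)(v * 255.0f + 0.5f);   /* round to nearest */
}
```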

When I looked at the frei0r plugin implementations, they use almost
exclusively the RGBA colorspace (8 bits per channel), so from a plugin
programmer's perspective it is quite simple, because each plugin has to
cope with only one colorspace. When I tried their saturat0r plugin in
the LiVES video editor, it didn't produce the artifacts that Cinelerra
did, even though everything is converted to 8-bit RGBA.
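For reference, a frei0r plugin sees each frame as a flat array of packed 8-bit RGBA pixels, so its work reduces to one per-pixel loop. This is a simplified sketch of such a loop (not the actual saturat0r source; a real plugin implements the full frei0r entry points, and little-endian byte order is assumed here):

```c
#include <stdint.h>
#include <stddef.h>

/* One frei0r-style pass over a packed 8-bit RGBA frame (one uint32_t per
 * pixel, as in frei0r's RGBA8888 model).  Here: simple desaturation using
 * approximate integer luma weights, leaving the alpha channel untouched. */
static void desaturate(const uint32_t *in, uint32_t *out, size_t npixels)
{
    for (size_t i = 0; i < npixels; ++i) {
        uint32_t p = in[i];
        uint32_t r = p & 0xff, g = (p >> 8) & 0xff, b = (p >> 16) & 0xff;
        uint32_t a = (p >> 24) & 0xff;
        uint32_t grey = (r * 77 + g * 150 + b * 29) >> 8;  /* approx. luma */
        out[i] = grey | (grey << 8) | (grey << 16) | (a << 24);
    }
}
```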

Michal

>
>
> > How will Lumiera cope with color space selection?
>
> First, let me answer this question for *Cinelerra*
>
> On colour space selection, Cinelerra will configure the frame buffer for your
> resulting video to hold data in that colour model. Then, for each plug-in,
> it will pick the variant of the algorithm coded for that model. If that
> variant of the algorithm happens to be buggy, you've lost. We have known for
> several years that some models are coded erroneously in some plug-ins,
> especially when combined with an alpha channel. Unfortunately, fixing that
> requires really intense and systematic work; often it's not even clear how
> the "correct" implementation should work, so it would additionally require
> some research and study of theory.
> We, as a community, simply didn't manage to get that done.
>
> This was one of the core problems which led the Lumiera developers to use a
> more elaborate approach right from the start. Which unfortunately has the
> downside of making the internals of *Lumiera* somewhat intricate and
> difficult to understand: in Lumiera, we completely separate the "Session"
> (the clips, tracks, effects and further objects you as a user interact with
> while editing) and the "render graph" (what the engine processes). We put a
> transformation step in between, which translates the objects in the session
> into a low-level pipeline.
>
> Clearly, in Lumiera our goal is *not* to have any fixed colour model.
> Similarly, we do *not* have a fixed framerate for the whole session.
>
> Rather, these properties are controlled by the *output configuration* used,
> which in Lumiera becomes part of the timeline; but you can use your
> edited sequences within multiple timelines. Thus, when an edited sequence
> is used within a timeline, we get an output connection with a colour model
> and a framerate. OTOH, the source material also has its own framerate and
> colour model. We then try to keep as much of the pipeline as possible
> running with the same model, and at some point we'll insert a conversion
> node.
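(If I understand the conversion-node idea correctly, it amounts to something like the following sketch; none of these names exist in Lumiera, they are made up for illustration:)

```c
/* Hypothetical sketch of "insert a conversion node wherever adjacent
 * pipeline nodes disagree on the colour model".  The new node produces
 * the model its downstream neighbour expects. */
typedef enum { MODEL_YUVA8, MODEL_RGBA_FLOAT } colour_model;

typedef struct node {
    colour_model model;   /* model this node produces          */
    struct node *next;    /* downstream neighbour, NULL at end */
} node;

/* Splice conversion nodes (taken from 'pool') into the pipeline.
 * Returns the number of nodes used, or -1 if the pool ran out. */
static int insert_conversions(node *head, node *pool, int pool_size)
{
    int used = 0;
    for (node *n = head; n && n->next; n = n->next) {
        if (n->model != n->next->model) {
            if (used == pool_size) return -1;  /* out of spare nodes */
            node *conv = &pool[used++];
            conv->model = n->next->model;
            conv->next  = n->next;
            n->next     = conv;
            n = conv;                          /* skip past new node */
        }
    }
    return used;
}
```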
>
> That is the plan. But, honestly, at the moment we're targeting the goals of
> building such pipelines (partially done) and running them in a multi-core
> aware engine (also partially done). We haven't gotten to the point of
> worrying about plug-in metadata, or about the rules that determine where to
> insert that conversion.
>
> Clearly, our approach contains some "complexity bombs":
> - allowing multiple timelines/outputs at the same time
> - allowing unlimited nesting (a sequence can be used as a virtual clip
>   in another sequence)
> - supporting various kinds of relative "placement" for the clips
> - having no limitations on the number of channels or the kind and mix
>   of media
> - not "taking sides" for one fixed media handling framework (ffmpeg,
>   gstreamer, MLT, or writing our own, like Cinelerra). We want just
>   plug-ins and metadata.
>
> But frankly, I don't know of any other approach to tackle the problem of
> professional editing in the current media landscape without cheating, and
> without creating the nasty impediments and technologically unnecessary
> limitations found in many existing editing solutions.
>
> Cheers,
> Hermann Vosseler
> (aka "Ichthyo")
>
> _______________________________________________
> Cinelerra mailing list
> Cinelerra@skolelinux.no
> https://init.linpro.no/mailman/skolelinux.no/listinfo/cinelerra
