Hi,

that's the problem I ran into with filmic. Once the wall of non-linearity
is crossed, there is no coming back, so all you can do is move your
module to the right place in the pipe.

Basically, every filter dealing with physical phenomena (diffusion,
refraction) should be put in the linear part of the pipe, early. I
strongly think the signal-processing part of the pipe should be kept
fully separate from the artistic retouching, which is not the case now
in dt, with non-local denoising performed in Lab space late in the pipe,
or base curves applied early.

Be aware that RGB spaces are always linear. The gamma thing in RGB
spaces is only an encoding trick to avoid quantization artifacts when
dealing with (rounded) integers in files, and color management systems
always revert/decode it before converting from one RGB space to another
(the conversion usually happens in floating-point precision).
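
To make that concrete, here is a minimal sketch in C of that decoding
step for the sRGB case (the constants are the standard sRGB ones):

#include <math.h>

/* Decode one sRGB-encoded value back to linear, as a CMS does before
 * any RGB-to-RGB matrix conversion. */
static float srgb_to_linear(float c)
{
  return (c <= 0.04045f) ? c / 12.92f
                         : powf((c + 0.055f) / 1.055f, 2.4f);
}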

Camera RGB values are supposed to be more or less linear, as they are
three colorimeter readings (filter the light with coloured glass,
convert photons to electrons, count them). Camera profiles are usually
simple 3×3 matrices converting from camera RGB to XYZ (a linear change
of vector base). You can also use tone-curve profiles (one tone curve
per RGB channel) or LUTs (dt now has Lab and RGB LUT modules), but doing
the calibration this way can badly backfire if your calibration shot is
not 100% clean (evenly lit, no parasitic color cast from wall
reflections, white light with a full daylight spectrum).
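
For reference, such a matrix profile boils down to the sketch below;
the structure is the whole point, the real coefficients come from the
camera's input profile:

/* Convert one linear camera-RGB pixel to XYZ through a 3x3 profile
 * matrix (a linear change of vector base). */
static void camera_rgb_to_xyz(const float rgb[3], const float m[3][3],
                              float xyz[3])
{
  for(int i = 0; i < 3; i++)
    xyz[i] = m[i][0] * rgb[0] + m[i][1] * rgb[1] + m[i][2] * rgb[2];
}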

The camera-independent RGB representation is the best-case scenario: it
happens after the input profile module, provided the camera profile is
accurate and applied on correct data. In practice, I suspect the camera
RGB space is the cleanest one in which to perform physically-bounded
transformations, because it is the closest to photon counts you can get.
The exception is when you need some estimate of the luminance, which is
a linear combination of the RGB components and needs a proper white
balance correction plus a profiling.
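
As an illustration, once you have white-balanced, profiled linear RGB,
that luminance estimate is just a weighted sum (the second row of the
RGB-to-XYZ matrix). The sketch below uses the Rec. 709 / sRGB weights;
a camera matrix would supply its own:

/* Luminance estimate from white-balanced, profiled linear RGB: the dot
 * product of the matrix's Y row with the pixel. Rec. 709 / sRGB
 * weights shown only as an illustration. */
static float luminance(const float rgb[3])
{
  return 0.2126f * rgb[0] + 0.7152f * rgb[1] + 0.0722f * rgb[2];
}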

So, in practice, you need to open each IOP source file, analyse what can
break your assumptions and what messes up colors, and find the right
trade-off for your module. Since signal-processing and artistic
transforms are mixed up, you will probably end up issuing warnings in
the doc to prevent users from using some ill-placed modules along with
yours, and then get a shit-load of emails from users complaining that
your module doesn't work while they have done everything you advised
them not to do.

Good luck!

Aurélien.

On 23/05/2019 at 01:02, Heiko Bauke wrote:
> Hi,
>
> On 23.05.19 at 00:32, Moritz Mœller wrote:
>> each module that relies on a certain color
>> space must take into consideration everything in the pipe before (and
>> possibly undo it, which is not always really possible) to push stuff
>> back into linear, if necessary.
>
> I completely agree.
>
> My question is how can one "take into consideration everything in the
> pipe before" in practice?  My mental picture of the pixel pipe was
> that it starts from a highly non-linear, camera-dependent RGB profile
> and reaches, somewhere along the pixel pipeline, a camera-independent
> color representation before switching to Lab space.  Once the RGB
> color representation is linear or has some specific (known) gamma
> encoding, switching to linear RGB becomes trivial.  However, I am not
> sure if my picture is right.
>
>
> Heiko
>
