Hello,

The exposure fusion feature released with darktable 2.2.0 is interesting. I tested it, and as expected it brightened pictures somewhat like the Drago tone mapping operator, but with more natural colors and a different style of controls. Great!

LWN wrote about it in "A look at darktable 2.2.0" <https://lwn.net/Articles/710496/> (emphasis mine):

In scenarios where the dynamic range of a scene is too wide to be captured in a single shot, the photographer can shoot *multiple exposures* (e.g., one to capture the highlights and one for the shadows). Those exposure*s* can then be combined <https://www.darktable.org/2016/08/compressing-dynamic-range-with-exposure-fusion/> via darktable's new "exposure fusion" module. In essence, *the two frames (or however many were taken)* are stacked together, […]

The "combined" link in that quote points to https://www.darktable.org/2016/08/compressing-dynamic-range-with-exposure-fusion/

I'm somewhat confused, because that darktable.org article describes processing only one picture at a time.

Please tell me whether the following assertions are right or wrong, and explain why:

* Darktable basecurve fusion always considers only one image at a time. It never combines "two frames" or several input files (be they bracketed exposures, flash/no-flash pairs, etc.).

* Darktable basecurve fusion implements http://web.stanford.edu/class/cs231m/project-1/exposure-fusion.pdf in the restricted case where the "sequence" consists of copies of the same input data with digitally boosted exposure (see the first sketch after this list).

* Darktable applies the "traditional" basecurve upstream of the Mertens/Kautz/Van Reeth algorithm (i.e. the curve is applied first, and its output is then fed into the fusion).

* In the traditional darktable basecurve, the output value of any pixel in the output image depends only on the value of that same pixel in the source image, not on any surrounding pixels.

* Darktable basecurve fusion is not reducible to an overall "meta-basecurve" because, following the Mertens/Kautz/Van Reeth algorithm, it considers the neighborhood of each pixel when deciding which exposure each output pixel is taken from, a kind of operation that the traditional basecurve does not perform (see the second sketch after this list).

* As a consequence, the darktable implementation provides the benefit of the algorithm in terms of perceived rendering (natural colours are preserved, etc.), but not the reduced noise in dark areas offered by the flash/no-flash variant, since there is only one input image. Getting that would require either running the whole algorithm as a preprocessing step before darktable, or feeding several pictures into darktable so that the whole algorithm can operate.
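
To make the question concrete, here is a minimal numpy sketch of how I picture assertions 2 to 4: a purely per-pixel basecurve applied to digitally pushed copies of the single input image. The function names, the gamma curve and the EV step are only my own illustration of my reading, not darktable's actual code:

import numpy as np

def basecurve(x, gamma=1.0 / 2.2):
    # Toy stand-in for a per-pixel base curve: a pure point operation,
    # so each output pixel depends only on the same pixel of the input.
    return np.clip(x, 0.0, 1.0) ** gamma

def synthetic_exposures(img, n_exposures=3, ev_step=1.0):
    # Build the "sequence" for fusion from a single image: digitally
    # push the exposure by k * ev_step EV, then run each copy through
    # the base curve (curve first, fusion downstream).
    return [basecurve(img * 2.0 ** (k * ev_step)) for k in range(n_exposures)]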
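
And here is how I understand the neighborhood part of assertion 5: a compact Mertens-style fusion of those synthetic exposures, blending Laplacian pyramids of the images with Gaussian pyramids of per-pixel "well-exposedness" weights, so every fused value mixes a neighborhood of the inputs. Again this is only my own sketch (grayscale image in [0, 1], scipy for the resampling, a simplified weight formula), not the darktable implementation:

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(img, levels):
    # Successively blurred and downsampled copies of the image.
    pyr = [img]
    for _ in range(levels - 1):
        img = zoom(gaussian_filter(img, sigma=1.0), 0.5, order=1)
        pyr.append(img)
    return pyr

def laplacian_pyramid(img, levels):
    # Band-pass details: each level minus the upsampled next level.
    gauss = gaussian_pyramid(img, levels)
    lap = []
    for k in range(levels - 1):
        factor = np.array(gauss[k].shape) / np.array(gauss[k + 1].shape)
        lap.append(gauss[k] - zoom(gauss[k + 1], factor, order=1))
    lap.append(gauss[-1])
    return lap

def fuse(exposures, levels=4):
    # Per-pixel "well-exposedness" weights favouring values near middle
    # grey (my own simplified weight, not darktable's exact formula).
    weights = [np.exp(-((e - 0.5) ** 2) / (2 * 0.2 ** 2)) for e in exposures]
    norm = np.sum(weights, axis=0) + 1e-12
    weights = [w / norm for w in weights]

    # Blend Laplacian pyramids of the exposures with Gaussian pyramids
    # of the weights: every fused coefficient mixes a neighborhood of
    # the inputs, which a purely per-pixel curve cannot do.
    fused = None
    for e, w in zip(exposures, weights):
        contrib = [l * g for l, g in
                   zip(laplacian_pyramid(e, levels), gaussian_pyramid(w, levels))]
        fused = contrib if fused is None else [a + b for a, b in zip(fused, contrib)]

    # Collapse the fused pyramid back into a single image.
    out = fused[-1]
    for lev in reversed(fused[:-1]):
        factor = np.array(lev.shape) / np.array(out.shape)
        out = zoom(out, factor, order=1) + lev
    return np.clip(out, 0.0, 1.0)

If my reading is right, fuse(synthetic_exposures(img)) is roughly what the basecurve fusion mode does to a single image, whereas the paper's full benefit would need genuinely different input frames.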

Thank you in advance for clarification! Probably a number of people will benefit.


--
Stéphane Gourichon

