Hi,

I have done some experiments on that matter and took the opportunity to
further correct and test my code.

So here are my attempts to code a noise mask and a sharpness mask with
total variation and Laplacian norms:
https://github.com/aurelienpierre/Image-Cases-Studies/blob/master/notebooks/Total%20Variation%20masking.ipynb

Performance benchmarks are at the end.
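
For readers who don't want to open the notebook, here is a minimal NumPy
sketch of the idea (not the notebook code itself; the forward differences
and function names are only illustrative):

    import numpy as np
    from scipy import ndimage

    def tv_norm(u):
        # anisotropic total variation: L1 norm of the forward-difference gradient
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        return np.abs(gx) + np.abs(gy)

    def laplacian_norm(u):
        # absolute value of the discrete Laplacian, used as a sharpness cue
        return np.abs(ndimage.laplace(u))

    # u is one channel of the image, as a 2D float array scaled to [0, 1]:
    # tv_norm(u)        -> high where gradients are strong (noise and edges)
    # laplacian_norm(u) -> high on edges and fine detail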

Cheers,

Aurélien.


On 17/06/2018 at 15:03, rawfiner wrote:
>
>
> On Sunday, 17 June 2018, Aurélien Pierre
> <rese...@aurelienpierre.com> wrote:
>
>
>
>     On 13/06/2018 at 17:31, rawfiner wrote:
>>
>>
>>     On Wednesday, 13 June 2018, Aurélien Pierre
>>     <rese...@aurelienpierre.com> wrote:
>>
>>
>>>
>>>             On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
>>>             <rese...@aurelienpierre.com> wrote:
>>>             > Hi,
>>>             >
>>>             > The problem of a two-pass denoising method involving two
>>>             > different algorithms, the latter applied where the former
>>>             > failed, could be that the grain structure (the shape of
>>>             > the noise) would differ across the picture, which is very
>>>             > unpleasant.
>>>
>>>
>>>         I agree that the grain structure could be different, but my
>>>         feeling (which may be wrong) is that it would still be better
>>>         than no further processing at all, which leaves some pixels
>>>         unprocessed (they could form grain structures far from uniform
>>>         if we are not lucky).
>>>         If you think it is only due to the change of algorithm, I
>>>         guess we could apply non-local means again on the pixels where
>>>         the first pass failed, but with different parameters, to be
>>>         quite confident that the second pass will work.
>>         That sounds better to me… but practice will have the last word.
>>
>>
>>     Ok :-) 
>>
>>>          
>>>
>>>             >
>>>             > I thought maybe we could instead create some sort of
>>>             > total variation threshold on other denoising modules:
>>>             >
>>>             > - compute the total variation of each channel of each
>>>             >   pixel as the divergence divided by the L1 norm of the
>>>             >   gradient; we then obtain a "heatmap" of the gradients
>>>             >   over the picture (contours and noise)
>>>             > - let the user define a total variation threshold and
>>>             >   form a mask where the weights above the threshold are
>>>             >   the total variation and the weights below the threshold
>>>             >   are zeros (sort of a high-pass filter, actually)
>>>             > - apply the bilateral filter according to this mask.
>>>             >
>>>             > This way, if the user wants to stack several denoising
>>>             modules, he could
>>>             > protect the already-cleaned areas from further denoising.
>>>             >
>>>             > What do you think?
>>>
>>>
>>>         That sounds interesting.
>>>         This would maybe make it possible to keep some small
>>>         variations/details that are not due to noise or not disturbing,
>>>         while denoising the other parts.
>>>         Also, it may be computationally interesting (depending on the
>>>         complexity of the total variation computation, which I don't
>>>         know), as it could reduce the number of pixels to process.
>>>         I guess the user could also use something like that the other
>>>         way around: to protect highly detailed zones and apply the
>>>         denoising only on fairly smooth zones, in order to be able to
>>>         use stronger denoising on zones that are supposed to be
>>>         background blur.
>>
>>         The noise is high frequency, so the TV (total variation)
>>         threshold will have to be high-pass only. The hypothesis
>>         behind the TV thresholding is that noisy pixels have
>>         abnormally higher gradients than true details, so you isolate
>>         them this way. Selecting noise in low-frequency areas would
>>         additionally require something like a guided filter, which I
>>         believe is what is used in the dehaze module. The complexity
>>         of the TV computation depends on the order of accuracy you
>>         expect.
>>
>>         A classic approximation of the gradient is a convolution
>>         with Sobel or Prewitt operators (3×3 arrays, very efficient,
>>         fairly accurate for edges, probably less accurate for
>>         isolated noise). I have developed optimized methods using 2,
>>         4, and 8 neighbouring pixels that give higher-order accuracy,
>>         given the sparsity of the data, at the expense of computing
>>         cost:
>>         
>> https://github.com/aurelienpierre/Image-Cases-Studies/blob/947fd8d5c2e4c3384c80c1045d86f8cf89ddcc7e/lib/deconvolution.pyx#L342
>>         (ignore the variable ut in the code, only u is relevant for
>>         us here).
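
For illustration, the 3×3 operator route can be sketched with SciPy's stock
Sobel filter (this is only the cheap approximation mentioned above, not the
higher-order scheme from the linked deconvolution.pyx):

    import numpy as np
    from scipy import ndimage

    def gradient_l1_sobel(u):
        # 3x3 Sobel approximation of the gradient: cheap and edge-oriented,
        # but probably less faithful for isolated (punctual) noise
        gx = ndimage.sobel(u, axis=1)
        gy = ndimage.sobel(u, axis=0)
        return np.abs(gx) + np.abs(gy)  # L1 norm, usable as a TV heatmap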
>>
>>     Great, thanks for the explanations.
>>     Looking at the code for the 8 neighbouring pixels, I wonder if it
>>     would make sense to compute something like that on raw data,
>>     considering only neighbouring pixels of the same color?
>
>     The raw data are even more sparse, so the gradient can't be
>     computed this way. One would have to adapt Taylor's theorem to
>     find an expression of the gradient for sparse data. And that would
>     be different for Bayer and X-Trans patterns. It's a bit of a
>     conundrum.
>
>  
> OK, thank you for these explanations.
>
>
>>
>>     Also, when talking about the mask formed from the heat map, do
>>     you mean that the "heat" would give each pixel a weight to use
>>     between input and output? (i.e. a mask that is not only ones and
>>     zeros, but that controls how much of the input and output is used
>>     for each pixel)
>>     If so, I think it is a good idea to explore!
>     Yes, exactly: think of it as an opacity mask where you remap the
>     user-input TV threshold and lower values to 0, the maximum TV
>     magnitude to 1, and all the values in between accordingly.
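
In other words, something along these lines (a sketch; the linear remap and
the variable names are mine):

    import numpy as np

    def tv_opacity_mask(tv, threshold):
        # remap: values at or below the user threshold -> 0,
        # the maximum TV magnitude -> 1, linear in between
        span = max(tv.max() - threshold, 1e-15)  # avoid division by zero
        return np.clip((tv - threshold) / span, 0.0, 1.0)

    # per-pixel blend between the module's input and its denoised output:
    # result = mask * denoised + (1.0 - mask) * original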
>
>
> Ok that is really cool! It seems a good idea to try to use that!
>
> rawfiner
>  
>
>
>>
>>     rawfiner
>>
>>>
>>>          
>>>
>>>             >
>>>             > Aurélien.
>>>             >
>>>             >
>>>             > On 13/06/2018 at 03:16, rawfiner wrote:
>>>             >
>>>             > Hi,
>>>             >
>>>             > I don't have the feeling that increasing K is the best
>>>             way to improve noise
>>>             > reduction anymore.
>>>             > I will upload the raw next week (if I don't forget
>>>             to), as I am not at home
>>>             > this week.
>>>             > My feeling is that doing non-local means on raw data
>>>             > gives a much bigger improvement than that.
>>>             > I still have work to do on it, though.
>>>             > I am currently testing some raw downsizing ideas to
>>>             > allow a fast execution of the algorithm.
>>>             >
>>>             > Apart from that, I also think that to improve noise
>>>             > reduction, such as denoise (profiled) in NLM mode and
>>>             > denoise (non-local means), we could do a two-pass
>>>             > algorithm, with non-local means applied first and then a
>>>             > bilateral filter (or median filter or something else)
>>>             > applied only on pixels where non-local means failed to
>>>             > find suitable patches (i.e. pixels where the sum of
>>>             > weights was close to 0).
>>>             > The user would have a slider to adjust this setting.
>>>             > I think that it would make it easier to have a "uniform"
>>>             > output (i.e. an output where noise has been reduced quite
>>>             > uniformly).
>>>             > I have not tested this idea yet.
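
A rough sketch of what such a second pass could look like (the weight-sum
buffer, the threshold slider and the median fallback are placeholders here,
not darktable code):

    import numpy as np
    from scipy import ndimage

    def second_pass(original, nlm_out, weight_sum, eps=1e-3):
        # nlm_out:    result of the non-local means pass
        # weight_sum: per-pixel sum of the NLM weights accumulated in that pass
        # eps:        user-controlled threshold below which NLM is considered
        #             to have failed to find suitable patches
        failed = weight_sum < eps
        fallback = ndimage.median_filter(original, size=3)  # stand-in for a bilateral/median filter
        out = nlm_out.copy()
        out[failed] = fallback[failed]
        return out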
>>>             >
>>>             > Cheers,
>>>             > rawfiner
>>>             >
>>>             > On Monday, 11 June 2018, johannes hanika
>>>             > <hana...@gmail.com> wrote:
>>>             >>
>>>             >> hi,
>>>             >>
>>>             >> i was playing with noise reduction presets again and
>>>             tried the large
>>>             >> neighbourhood search window. on my shots i could very
>>>             rarely spot a
>>>             >> difference at all increasing K above 7, and even less
>>>             so going above
>>>             >> 10. the image you posted earlier did show quite a
>>>             substantial
>>>             >> improvement however. i was wondering whether you'd be
>>>             able to share
>>>             >> the image so i can evaluate on it? maybe i just
>>>             haven't found the
>>>             >> right test image yet, or maybe it's camera dependent?
>>>             >>
>>>             >> (and yes, automatic and adaptive would be better but
>>>             if we can ship a
>>>             >> simple slider that can improve matters, maybe we should)
>>>             >>
>>>             >> cheers,
>>>             >>  jo
>>>             >>
>>>             >>
>>>             >>
>>>             >> On Mon, Jan 29, 2018 at 2:05 AM, rawfiner
>>>             <rawfi...@gmail.com> wrote:
>>>             >> > Hi
>>>             >> >
>>>             >> > Yes, the patch size is set to 1 from the GUI, so it
>>>             is not a bilateral
>>>             >> > filter, and I guess it corresponds to a patch
>>>             window size of 3x3 in the
>>>             >> > code.
>>>             >> > The runtime difference is near the expected
>>>             quadratic slowdown:
>>>             >> > 1,460 secs (8,379 CPU) for 7 and 12,794 secs
>>>             (85,972 CPU) for 25, which
>>>             >> > means about 10.26x slowdown
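
As a quick sanity check of these numbers (the expected factor is the ratio
of search-window areas):

    # search window is (2*K + 1)^2 pixels
    expected = (2 * 25 + 1) ** 2 / (2 * 7 + 1) ** 2   # 51^2 / 15^2 ≈ 11.56

    # measured ratios from the timings above
    cpu_ratio  = 85972 / 8379    # ≈ 10.26 (CPU time)
    wall_ratio = 12794 / 1460    # ≈ 8.76  (wall-clock time)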
>>>             >> >
>>>             >> > If you want to make up your own mind about it, I have
>>>             >> > pushed a branch here that integrates the K parameter
>>>             >> > in the GUI:
>>>             >> > https://github.com/rawfiner/darktable.git
>>>             >> > The branch is denoise-profile-GUI-K
>>>             >> >
>>>             >> > I think it may be worth seeing whether an automated
>>>             >> > approach for the choice of K could work, so that the
>>>             >> > parameter does not have to be exposed in the GUI.
>>>             >> > I may try to implement the approach of Kervrann and
>>>             >> > Boulanger (the reference from the darktable blog post)
>>>             >> > to see how it performs.
>>>             >> >
>>>             >> > cheers,
>>>             >> > rawfiner
>>>             >> >
>>>             >> >
>>>             >> > 2018-01-27 13:50 GMT+01:00 johannes hanika
>>>             <hana...@gmail.com>:
>>>             >> >>
>>>             >> >> heya,
>>>             >> >>
>>>             >> >> thanks for the reference! interesting
>>>             >> >> interpretation of how the blotches
>>>             >> >> form. not sure i'm entirely convinced by that
>>>             argument.
>>>             >> >> your image does look convincing though. let me get
>>>             this right.. you
>>>             >> >> ran with radius 1 which means patch window size
>>>             3x3? not 1x1 which
>>>             >> >> would be a bilateral filter effectively?
>>>             >> >>
>>>             >> >> also what was the run time difference? is it near
>>>             the expected
>>>             >> >> quadratic slowdown from 7 (i.e. 15x15) to 25
>>>             (51x51) so about 11.56x
>>>             >> >> slower with the large window size? (test with
>>>             darktable -d perf)
>>>             >> >>
>>>             >> >> since nlmeans isn't the fastest thing, even with
>>>             this coalesced way of
>>>             >> >> implementing it, we should certainly keep an eye
>>>             on this.
>>>             >> >>
>>>             >> >> that being said if we can often times get much
>>>             better results we
>>>             >> >> should totally expose this in the gui, maybe with
>>>             a big warning that
>>>             >> >> it really severely impacts speed.
>>>             >> >>
>>>             >> >> cheers,
>>>             >> >>  jo
>>>             >> >>
>>>             >> >> On Sat, Jan 27, 2018 at 7:34 AM, rawfiner
>>>             <rawfi...@gmail.com> wrote:
>>>             >> >> > Thank you for your answer
>>>             >> >> > I perfectly agree with the fact that the GUI
>>>             should not become
>>>             >> >> > overcomplicated.
>>>             >> >> >
>>>             >> >> > As far as I understand, the pixels within a
>>>             small zone may suffer
>>>             >> >> > from
>>>             >> >> > correlated noise, and there is a risk of noise
>>>             to noise matching.
>>>             >> >> > That's why this paper suggests not taking pixels
>>>             >> >> > that are too close to the zone we are correcting,
>>>             >> >> > but taking them a little farther away (see the
>>>             >> >> > caption of Figure 2 for a quick explanation):
>>>             >> >> >
>>>             >> >> >
>>>             >> >> >
>>>             >> >> >
>>>             
>>> https://pdfs.semanticscholar.org/c458/71830cf535ebe6c2b7656f6a205033761fc0.pdf
>>>             >> >> > (in case you ask, unfortunately there is a
>>>             patent associated with
>>>             >> >> > this
>>>             >> >> > approach, so we cannot implement it)
>>>             >> >> >
>>>             >> >> > Increasing the neighborhood parameter results in
>>>             >> >> > proportionally fewer problems of correlation between
>>>             >> >> > surrounding pixels, and decreases the size of the
>>>             >> >> > visible spots.
>>>             >> >> > See for example the two attached pictures: one with
>>>             >> >> > size 1, force 1, and K 7, and the other with size 1,
>>>             >> >> > force 1, and K 25.
>>>             >> >> >
>>>             >> >> > I think that the best approach would probably be to
>>>             >> >> > adapt K automatically, in order not to affect the
>>>             >> >> > GUI, and because we may have different levels of
>>>             >> >> > noise in different parts of an image.
>>>             >> >> > In this post
>>>             >> >> > (https://www.darktable.org/2012/12/profiling-sensor-and-photon-noise/),
>>>             >> >> > this paper is cited:
>>>             >> >> >
>>>             >> >> > [4] Charles Kervrann and Jérôme Boulanger: Optimal
>>>             >> >> > Spatial Adaptation for Patch-Based Image Denoising.
>>>             >> >> > IEEE Trans. Image Process., vol. 15, no. 10, 2006.
>>>             >> >> >
>>>             >> >> > As far as I understand, it gives a way to choose an
>>>             >> >> > adapted window size for each pixel, but I don't see
>>>             >> >> > anything related to that in the code.
>>>             >> >> >
>>>             >> >> > Is this paper perhaps related to the TODOs in the
>>>             >> >> > code?
>>>             >> >> >
>>>             >> >> > Was it planned to implement such a variable window
>>>             >> >> > approach?
>>>             >> >> >
>>>             >> >> > Or if it is already implemented, could you point me
>>>             >> >> > to where?
>>>             >> >> >
>>>             >> >> > Thank you
>>>             >> >> >
>>>             >> >> > rawfiner
>>>             >> >> >
>>>             >> >> >
>>>             >> >> >
>>>             >> >> >
>>>             >> >> > 2018-01-26 9:05 GMT+01:00 johannes hanika
>>>             <hana...@gmail.com>:
>>>             >> >> >>
>>>             >> >> >> hi,
>>>             >> >> >>
>>>             >> >> >> if you want, absolutely do play around with K.
>>>             in my tests it did
>>>             >> >> >> not
>>>             >> >> >> lead to any better denoising. to my surprise a
>>>             larger K often led to
>>>             >> >> >> worse results (for some reason often the
>>>             relevance of discovered
>>>             >> >> >> patches decreases with distance from the
>>>             current point). that's why
>>>             >> >> >> K
>>>             >> >> >> is not exposed in the gui, no need for another
>>>             irrelevant and
>>>             >> >> >> cryptic
>>>             >> >> >> parameter. if you find a compelling case where
>>>             this indeed leads to
>>>             >> >> >> better denoising we could rethink that.
>>>             >> >> >>
>>>             >> >> >> in general NLM is a 0-th order denoising
>>>             scheme, meaning the prior
>>>             >> >> >> is
>>>             >> >> >> piecewise constant (you claim the pixels you
>>>             find are trying to
>>>             >> >> >> express /the same/ mean, so you average them).
>>>             if you let that
>>>             >> >> >> algorithm do what it would really like to,
>>>             it'll create unpleasant
>>>             >> >> >> blotches of constant areas. so for best results
>>>             we need to tone it
>>>             >> >> >> down one way or another.
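
For context, the 0-th order / piecewise-constant prior corresponds to the
plain weighted average NLM computes per pixel; a schematic version
(simplified notation, not darktable's implementation):

    import numpy as np

    def nlm_pixel(u, p, candidates, patch, h):
        # u:        image array; p: pixel to denoise (index tuple)
        # candidates: pixel indices in the search window
        # patch(x): flattened patch around x; h: filtering strength
        w = np.array([np.exp(-np.sum((patch(p) - patch(q)) ** 2) / h ** 2)
                      for q in candidates])
        values = np.array([u[q] for q in candidates])
        # weighted mean: a piecewise-constant (0-th order) estimate
        return np.sum(w * values) / np.sum(w)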
>>>             >> >> >>
>>>             >> >> >> cheers,
>>>             >> >> >>  jo
>>>             >> >> >>
>>>             >> >> >>
>>>             >> >> >>
>>>             >> >> >> On Fri, Jan 26, 2018 at 7:36 AM, rawfiner
>>>             <rawfi...@gmail.com>
>>>             >> >> >> wrote:
>>>             >> >> >> > Hi
>>>             >> >> >> >
>>>             >> >> >> > I am surprised to see that we cannot control
>>>             the neighborhood
>>>             >> >> >> > parameter
>>>             >> >> >> > for
>>>             >> >> >> > the NLM algorithm (neither for the denoise
>>>             non local mean, nor for
>>>             >> >> >> > the
>>>             >> >> >> > denoise profiled) from the GUI.
>>>             >> >> >> > I see in the code (denoiseprofile.c) this
>>>             TODO that I don't
>>>             >> >> >> > understand:
>>>             >> >> >> > "//
>>>             >> >> >> > TODO: fixed K to use adaptive size trading
>>>             variance and bias!"
>>>             >> >> >> > And just some lines after that: "// TODO:
>>>             adaptive K tests here!"
>>>             >> >> >> > (K is the neighborhood parameter of the NLM
>>>             algorithm).
>>>             >> >> >> >
>>>             >> >> >> > In practice, I think that being able to change
>>>             >> >> >> > the neighborhood parameter allows better noise
>>>             >> >> >> > reduction for a given image.
>>>             >> >> >> > For example, choosing a bigger K helps reduce the
>>>             >> >> >> > spotted aspect that one can get on high-ISO
>>>             >> >> >> > images.
>>>             >> >> >> >
>>>             >> >> >> > Of course, increasing K increases computational
>>>             >> >> >> > time, but I think we could find an acceptable
>>>             >> >> >> > range that would still be useful.
>>>             >> >> >> >
>>>             >> >> >> >
>>>             >> >> >> > Is there any reason for not letting the user
>>>             >> >> >> > control the neighborhood parameter in the GUI?
>>>             >> >> >> > Also, do you understand the TODOs?
>>>             >> >> >> > I feel that we would probably get better
>>>             >> >> >> > denoising by fixing these, but I don't understand
>>>             >> >> >> > them.
>>>             >> >> >> >
>>>             >> >> >> > I can spend some time on these TODOs, or on
>>>             >> >> >> > adding the K parameter to the interface if you
>>>             >> >> >> > think it is worth it (I think so, but that is
>>>             >> >> >> > only my personal opinion); however, I have to
>>>             >> >> >> > understand what the TODOs mean first.
>>>             >> >> >> >
>>>             >> >> >> > Thank you for your help
>>>             >> >> >> >
>>>             >> >> >> > rawfiner
>>>             >> >> >> >
>>>             >> >> >> >
>>>             >> >> >> >
>>>             >> >> >> >
>>>             >> >> >> >
>>>             >> >> >>
>>>             >> >> >>
>>>             >> >> >>
>>>             >> >> >>
>>>             >> >> >>
>>>             >> >> >
>>>             >> >>
>>>             >> >>
>>>             >> >>
>>>             >> >>
>>>             >> >
>>>             >
>>>             >
>>>             >
>>>             >
>>>             >
>>>             >
>>>             >
>>>
>>>
>>
>>
>>
>>      
>>
>
>
>  
>
>  
>

