[darktable-dev] denoise profile non local means: neighborhood parameter

2018-06-13 Thread rawfiner
Hi,

I no longer have the feeling that increasing K is the best way to improve
noise reduction.
I will upload the raw next week (if I don't forget), as I am not at home
this week.
My feeling is that doing non-local means on raw data gives a much bigger
improvement than that.
I still have work to do on it, though.
I am currently testing some raw downsizing ideas to allow fast execution
of the algorithm.

Apart from that, I also think that to improve noise reduction in modules
such as denoise (profiled) in non-local means mode and denoise (non-local
means), we could use a two-pass algorithm: non-local means applied first,
then a bilateral filter (or median filter, or something else) applied only
on pixels where non-local means failed to find suitable patches (i.e.
pixels where the sum of weights was close to 0).
The user would have a slider to adjust this setting.
I think it would make it easier to get a "uniform" output (i.e. an output
where noise has been reduced quite uniformly).
I have not tested this idea yet.
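
Something like this, say (a rough NumPy sketch of the idea, not darktable
code; the names are hypothetical and eps would map to the user slider):

  import numpy as np
  from scipy.ndimage import median_filter

  def two_pass_denoise(nlm_output, weight_sums, eps=1e-3):
      # Pixels where non-local means found no suitable patches:
      # the sum of patch weights stays close to 0 there.
      failed = weight_sums < eps
      # Fallback pass: here a simple 3x3 median on the nlm output;
      # a bilateral filter could be used instead.
      fallback = median_filter(nlm_output, size=3)
      # Keep the nlm result where it worked, the fallback elsewhere.
      return np.where(failed, fallback, nlm_output)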

Cheers,
rawfiner

On Monday, June 11, 2018, johannes hanika wrote:

> hi,
>
> i was playing with noise reduction presets again and tried the large
> neighbourhood search window. on my shots i could very rarely spot a
> difference at all increasing K above 7, and even less so going above
> 10. the image you posted earlier did show quite a substantial
> improvement however. i was wondering whether you'd be able to share
> the image so i can evaluate on it? maybe i just haven't found the
> right test image yet, or maybe it's camera dependent?
>
> (and yes, automatic and adaptive would be better but if we can ship a
> simple slider that can improve matters, maybe we should)
>
> cheers,
>  jo
>
>
>
> On Mon, Jan 29, 2018 at 2:05 AM, rawfiner  wrote:
> > Hi
> >
> > Yes, the patch size is set to 1 from the GUI, so it is not a bilateral
> > filter, and I guess it corresponds to a patch window size of 3x3 in the
> > code.
> > The runtime difference is near the expected quadratic slowdown:
> > 1,460 secs (8,379 CPU) for K=7 and 12,794 secs (85,972 CPU) for K=25,
> > which is about a 10.26x slowdown in CPU time.
> >
> > If you want to make up your mind about it, I have pushed a branch that
> > integrates the K parameter in the GUI:
> > https://github.com/rawfiner/darktable.git
> > The branch is denoise-profile-GUI-K
> >
> > I think it may be worth seeing whether an automated approach to the
> > choice of K could work, so that the parameter need not be integrated
> > into the GUI.
> > I may try to implement the approach of Kervrann and Boulanger (the
> > reference from the darktable blog post) to see how it performs.
> >
> > cheers,
> > rawfiner
> >
> >
> > 2018-01-27 13:50 GMT+01:00 johannes hanika :
> >>
> >> heya,
> >>
> >> thanks for the reference! interesting interpretation of how the
> >> blotches form. not sure i'm entirely convinced by that argument.
> >> your image does look convincing though. let me get this right.. you
> >> ran with radius 1 which means patch window size 3x3? not 1x1 which
> >> would be a bilateral filter effectively?
> >>
> >> also what was the run time difference? is it near the expected
> >> quadratic slowdown from 7 (i.e. 15x15) to 25 (51x51) so about 11.56x
> >> slower with the large window size? (test with darktable -d perf)
> >>
> >> since nlmeans isn't the fastest thing, even with this coalesced way of
> >> implementing it, we should certainly keep an eye on this.
> >>
> >> that being said, if we can often get much better results we should
> >> totally expose this in the gui, maybe with a big warning that it
> >> really severely impacts speed.
> >>
> >> cheers,
> >>  jo
> >>
> >> On Sat, Jan 27, 2018 at 7:34 AM, rawfiner  wrote:
> >> > Thank you for your answer
> >> > I fully agree that the GUI should not become overcomplicated.
> >> >
> >> > As far as I understand, the pixels within a small zone may suffer from
> >> > correlated noise, and there is a risk of noise-to-noise matching.
> >> > That's why this paper suggests not taking pixels that are too close to
> >> > the zone we are correcting, but taking them a little farther (see the
> >> > caption of Figure 2 for a quick explanation):
> >> > https://pdfs.semanticscholar.org/c458/71830cf535ebe6c2b7656f6a205033761fc0.pdf
> >> > (in case you ask, unfortunately there is a patent associated with this
> >> > approach, so we cannot implement it)
> >> >
> >> > Increasing the neighborhood parameter proportionally reduces the
> >> > problem of correlation between surrounding pixels, and decreases the
> >> > size of the visible spots.
> >> > See for example the two attached pictures: one with size 1, force 1,
> >> > and K 7, and the other with size 1, force 1, and K 25.
> >> >
> >> > I think that the best would probably be to adapt K automatically, both
> >> > to avoid affecting the GUI and because we may have different levels of
> >> > noise in different parts of an image.
Re: [darktable-dev] denoise profile non local means: neighborhood parameter

2018-06-13 Thread Aurélien Pierre
Hi,

The problem with a two-pass denoising method involving two different
algorithms, the latter applied where the former failed, could be that the
grain structure (the shape of the noise) would differ across the picture,
which would be very unpleasing.

I thought maybe we could instead create some sort of total variation
threshold on other denoising modules:

 1. compute the total variation of each channel of each pixel as the
    divergence divided by the L1 norm of the gradient - we then obtain a
    "heatmap" of the gradients over the picture (contours and noise)
 2. let the user define a total variation threshold and form a mask where
    the weights above the threshold are the total variation and the
    weights below the threshold are zeros (sort of a highpass filter,
    actually)
 3. apply the bilateral filter according to this mask.

This way, if the user wants to stack several denoising modules, he could
protect the already-cleaned areas from further denoising.
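
As a rough NumPy sketch of steps 1 and 2 (my assumptions: plain finite
differences for the gradient, and an eps to avoid dividing by zero; step 3
would be the existing bilateral filter driven by this mask):

  import numpy as np

  def tv_heatmap(img, eps=1e-8):
      # Step 1: per-channel gradients by finite differences.
      gy, gx = np.gradient(img, axis=(0, 1))
      # Divergence of the gradient field (second derivatives).
      div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
      # "Total variation" as described above: divergence divided by
      # the L1 norm of the gradient.
      return np.abs(div) / (np.abs(gx) + np.abs(gy) + eps)

  def tv_mask(img, threshold):
      # Step 2: keep the TV value above the threshold, zero below.
      tv = tv_heatmap(img)
      return np.where(tv > threshold, tv, 0.0)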

What do you think?

Aurélien.


Re: [darktable-dev] denoise profile non local means: neighborhood parameter

2018-06-13 Thread johannes hanika
hi,

that doesn't sound like a bad idea at all. for what it's worth, in
practice the nlmeans doesn't let any grain at all through due to the
piecewise constant prior that it's based on. well, only in regions
where it finds enough other patches that is. in the current
implementation with a radius of 7 that is not always the case.

also, i usually use some blending to add the input buffer back on top
of the output. this essentially leaves the grain alone but tones it
down.
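
as a formula, that blend is just a convex combination (a sketch, with
alpha standing for the blend opacity):

  def blend_back(nlm_output, input_buffer, alpha=0.8):
      # alpha = 1.0 keeps the full nlmeans result; lower values let a
      # toned-down version of the original grain back through.
      return alpha * nlm_output + (1.0 - alpha) * input_buffer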

cheers,
 jo



[darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-13 Thread rawfiner
On Wednesday, June 13, 2018, johannes hanika wrote:

> hi,
>
> that doesn't sound like a bad idea at all. for what it's worth, in
> practice the nlmeans doesn't let any grain at all through due to the
> piecewise constant prior that it's based on. well, only in regions
> where it finds enough other patches that is. in the current
> implementation with a radius of 7 that is not always the case.


That's precisely the type of grain that I was thinking of tackling with a
second pass.
When the image is very noisy, it is quite common to have pixels without
enough matching patches.
It sometimes forces me to raise the strength sliders, resulting in an
overly smoothed image.
The idea is to give the user the choice of how to handle these pixels:
either leave them as they are, or use another denoising algorithm so that
they integrate better with their surroundings.
Anyway, I guess I may try that and come back with some results to discuss
whether it's worth it or not ;-)


>
> also, i usually use some blending to add the input buffer back on top
> of the output. this essentially leaves the grain alone but tones it
> down.


I do the same ;-)


>
> cheers,
>  jo
>
>
> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
>  wrote:
> > Hi,
> >
> > The problem with a two-pass denoising method involving two different
> > algorithms, the latter applied where the former failed, could be that
> > the grain structure (the shape of the noise) would differ across the
> > picture, which would be very unpleasing.


I agree that the grain structure could be different, but my feeling (which
may be wrong) is that it would still be better than no further processing
at all, which leaves some pixels unprocessed (if we are not lucky, they
could form grain structures that are far from uniform).
If you think the problem is only due to the change of algorithm, I guess we
could apply non-local means again on the pixels where the first pass
failed, but with different parameters chosen so that we can be quite
confident the second pass will work.


> >
> > I thought maybe we could instead create some sort of total variation
> > threshold on other denoising modules:
> >
> > compute the total variation of each channel of each pixel as the
> > divergence divided by the L1 norm of the gradient - we then obtain a
> > "heatmap" of the gradients over the picture (contours and noise)
> > let the user define a total variation threshold and form a mask where
> > the weights above the threshold are the total variation and the weights
> > below the threshold are zeros (sort of a highpass filter, actually)
> > apply the bilateral filter according to this mask.
> >
> > This way, if the user wants to stack several denoising modules, he could
> > protect the already-cleaned areas from further denoising.
> >
> > What do you think?


That sounds interesting.
It would maybe allow keeping some small variations/details that are not due
to noise, or not disturbing, while denoising the other parts.
Also, it may be computationally interesting (depending on the complexity of
the total variation computation, which I don't know), as it could reduce
the number of pixels to process.
I guess the user could also use something like that the other way around:
to protect highly detailed zones and apply the denoising only on fairly
smooth zones, in order to be able to use stronger denoising on zones that
are supposed to be background blur.

rawfiner




Re: [darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-13 Thread Aurélien Pierre

On 13/06/2018 at 14:48, rawfiner wrote:
>
> On Wednesday, June 13, 2018, johannes hanika wrote:
> hi,
>
> that doesn't sound like a bad idea at all. for what it's worth, in
> practice the nlmeans doesn't let any grain at all through due to the
> piecewise constant prior that it's based on. well, only in regions
> where it finds enough other patches that is. in the current
> implementation with a radius of 7 that is not always the case.
>
>
> That's precisely the type of grain that I was thinking of tackling with
> a second pass.
> When the image is very noisy, it is quite common to have pixels without
> enough matching patches.
> It sometimes forces me to raise the strength sliders, resulting in an
> overly smoothed image.
> The idea is to give the user the choice of how to handle these pixels:
> either leave them as they are, or use another denoising algorithm so
> that they integrate better with their surroundings.
> Anyway, I guess I may try that and come back with some results to
> discuss whether it's worth it or not ;-)
>  
>
>
> also, i usually use some blending to add the input buffer back on top
> of the output. this essentially leaves the grain alone but tones it
> down.
>
>
> I do the same ;-)
Me too
>  
>
>
> cheers,
>  jo
>
>
> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre wrote:
> > Hi,
> >
> > The problem with a two-pass denoising method involving two different
> > algorithms, the latter applied where the former failed, could be that
> > the grain structure (the shape of the noise) would differ across the
> > picture, which would be very unpleasing.
>
>
> I agree that the grain structure could be different, but my feeling
> (which may be wrong) is that it would still be better than no further
> processing at all, which leaves some pixels unprocessed (if we are not
> lucky, they could form grain structures that are far from uniform).
> If you think the problem is only due to the change of algorithm, I guess
> we could apply non-local means again on the pixels where the first pass
> failed, but with different parameters chosen so that we can be quite
> confident the second pass will work.
That sounds better to me… but practice will have the last word.
>  
>
> >
> > I thought maybe we could instead create some sort of total variation
> > threshold on other denoising modules:
> >
> > compute the total variation of each channel of each pixel as the
> > divergence divided by the L1 norm of the gradient - we then obtain a
> > "heatmap" of the gradients over the picture (contours and noise)
> > let the user define a total variation threshold and form a mask where
> > the weights above the threshold are the total variation and the weights
> > below the threshold are zeros (sort of a highpass filter, actually)
> > apply the bilateral filter according to this mask.
> >
> > This way, if the user wants to stack several denoising modules, he
> > could protect the already-cleaned areas from further denoising.
> >
> > What do you think?
>
>
> That sounds interesting.
> It would maybe allow keeping some small variations/details that are not
> due to noise, or not disturbing, while denoising the other parts.
> Also, it may be computationally interesting (depending on the complexity
> of the total variation computation, which I don't know), as it could
> reduce the number of pixels to process.
> I guess the user could also use something like that the other way
> around: to protect highly detailed zones and apply the denoising only on
> fairly smooth zones, in order to be able to use stronger denoising on
> zones that are supposed to be background blur.

The noise is high-frequency, so the TV (total variation) threshold will
have to be high-pass only. The hypothesis behind TV thresholding is that
noisy pixels should have abnormally higher gradients than true details, so
you can isolate them this way. Selecting noise in low-frequency areas would
additionally require something like a guided filter, which I believe is
what is used in the dehaze module. The complexity of the TV computation
depends on the order of accuracy you expect.

A classic approximation of the gradient is a convolution product with
Sobel or Prewitt operators (3×3 arrays, very efficient, fairly accurate
for edges, probably less accurate for point-like noise). I have myself
developed optimized methods using 2, 4, and 8 neighbouring pixels that
give higher-order accuracy, given the sparsity of the data, at the expense
of computing cost:
https://github.com/aurelienpierre/Image-Cases-Studies/blob/947fd8d5c2e4c3384c80c1045d86f8cf89ddcc7e/lib/deconvolution.pyx#L342
(ignore the variable ut in the code; only u is relevant for us here).
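
For instance, the Sobel variant could replace the finite differences in
the earlier sketch (again hypothetical code, using scipy rather than the
optimized routines from the link above):

  import numpy as np
  from scipy.ndimage import sobel

  def sobel_tv(channel, eps=1e-8):
      # Gradient components from the 3x3 Sobel operators.
      gx = sobel(channel, axis=1)
      gy = sobel(channel, axis=0)
      # Divergence of the gradient field, again via Sobel.
      div = sobel(gx, axis=1) + sobel(gy, axis=0)
      # Same definition as before: divergence / L1 norm of the gradient.
      return np.abs(div) / (np.abs(gx) + np.abs(gy) + eps)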


[darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-13 Thread rawfiner
On Wednesday, June 13, 2018, Aurélien Pierre wrote:

> > I guess we could apply non-local means again on the pixels where the
> > first pass failed, but with different parameters chosen so that we can
> > be quite confident the second pass will work.
>
> That sounds better to me… but practice will have the last word.

Ok :-)

> The noise is high-frequency, so the TV (total variation) threshold will
> have to be high-pass only. The hypothesis behind TV thresholding is that
> noisy pixels should have abnormally higher gradients than true details,
> so you can isolate them this way. Selecting noise in low-frequency areas
> would additionally require something like a guided filter, which I
> believe is what is used in the dehaze module. The complexity of the TV
> computation depends on the order of accuracy you expect.
>
> A classic approximation of the gradient is a convolution product with
> Sobel or Prewitt operators (3×3 arrays, very efficient, fairly accurate
> for edges, probably less accurate for point-like noise). I have myself
> developed optimized methods using 2, 4, and 8 neighbouring pixels that
> give higher-order accuracy, given the sparsity of the data, at the
> expense of computing cost:
> https://github.com/aurelienpierre/Image-Cases-Studies/blob/947fd8d5c2e4c3384c80c1045d86f8cf89ddcc7e/lib/deconvolution.pyx#L342
> (ignore the variable ut in the code; only u is relevant for us here).

Great, thanks for the explanations.
Looking at the code of the 8-neighbouring-pixels version, I wonder whether
it would make sense to compute something like that on raw data, considering
only neighbouring pixels of the same color?
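
On a Bayer mosaic that could look something like this (a rough sketch
under my assumptions, not tied to darktable's raw pipeline; the nearest
same-color photosite along a row or column is 2 pixels away):

  import numpy as np

  def bayer_same_color_gradient(raw):
      # Stride-2 central differences: for a pixel at column c, the
      # same-color neighbours sit at c-2 and c+2 (4 photosites apart).
      raw = raw.astype(np.float32)
      gx = np.zeros_like(raw)
      gy = np.zeros_like(raw)
      gx[:, 2:-2] = (raw[:, 4:] - raw[:, :-4]) / 4.0
      gy[2:-2, :] = (raw[4:, :] - raw[:-4, :]) / 4.0
      return gx, gy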

Also, when talking about the mask formed from the heat map, do you mean
that the "heat" would give, for each pixel, a weight used to mix input and
output? (i.e. a mask that is not only ones and zeros, but that controls how
much of the input and output is used for each pixel)
If so, I think it is a good idea to explore!
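
In other words something like this (a sketch, names hypothetical):

  import numpy as np

  def heatmap_blend(denoised, original, heatmap, eps=1e-8):
      # Normalize the heat map to [0, 1] so it acts as a per-pixel
      # opacity: 1 where gradients look like noise, 0 where clean.
      w = np.clip(heatmap / (heatmap.max() + eps), 0.0, 1.0)
      return w * denoised + (1.0 - w) * original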

rawfiner
