I went through Aurélien's study again.
I wonder why the result of the TV is divided by 4 (in the case of 8 neighbors, "out[i, j, k] /= 4.").
I guess it is kind of a normalisation.
But since we divide the differences along the diagonals by sqrt(2), the
maximum achievable values (supposing the image values are in [0,1], thus
taking a difference of 1 along each direction) are:
sqrt(1 + 1) + sqrt(1 + 1) + sqrt(1/2 + 1/2) + sqrt(1/2 + 1/2) = 2*sqrt(2) + 2
in the case of the L2 norm
2 + 2 + 2*1/sqrt(2) + 2*1/sqrt(2) = 4 + 2*sqrt(2) in the case of the L1 norm
So why this 4, and not 4.83 and 6.83 for the L2 and L1 norms respectively?
Or is it just a division by the number of directions? (If so, why are the
diagonal differences divided by sqrt(2)?)
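
To check that I am reading the code right, here is a rough sketch of the
8-neighbor TV as I understand it (NumPy, with my own function and variable
names, not the notebook's):

import numpy as np

def tv_8(im, norm="L2"):
    # For each of the 4 directions (horizontal, vertical, 2 diagonals),
    # take the forward and backward differences, scale the diagonal ones
    # by 1/sqrt(2), combine them with an L2 or L1 norm, sum over the
    # directions and divide by 4.
    out = np.zeros_like(im)
    directions = [((0, 1), 1.0),                # horizontal
                  ((1, 0), 1.0),                # vertical
                  ((1, 1), 1.0 / np.sqrt(2)),   # diagonal
                  ((1, -1), 1.0 / np.sqrt(2))]  # anti-diagonal
    for shift, scale in directions:
        fwd = scale * (np.roll(im, shift, axis=(0, 1)) - im)
        bwd = scale * (np.roll(im, (-shift[0], -shift[1]), axis=(0, 1)) - im)
        if norm == "L2":
            out += np.sqrt(fwd ** 2 + bwd ** 2)
        else:  # L1
            out += np.abs(fwd) + np.abs(bwd)
    out /= 4.  # the normalisation I am asking about
    return out

With image values in [0,1] and a difference of 1 in every direction, this
gives the 2*sqrt(2) + 2 and 4 + 2*sqrt(2) maxima above, hence my question
about the division by 4.
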
Thanks!
rawfiner
2018-07-02 21:34 GMT+02:00 rawfiner :
> Thank you for all these explanations!
> Seems promising to me.
>
> Cheers,
>
> rawfiner
>
> 2018-07-01 21:26 GMT+02:00 Aurélien Pierre :
>
>> You're welcome ;-)
>>
>> That's true: the multiplication is equivalent to an "AND" operation; the
>> resulting mask has non-zero values where both the TV AND the Laplacian
>> masks have non-zero values, which - from my tests - is where the real
>> noise is.
>>
>> That is because TV alone is too sensitive: when the image is noisy, it
>> works fine, but whenever the image is clean or barely noisy, it detects
>> edges as well, giving false positives for noise detection.
>>
>> The TV × Laplacian is a safety jacket that allows the TV to work as
>> expected on noisy images (see the example) but protects sharp edges on
>> clean images (in the example, the mask barely grabs a few pixels in the
>> in-focus area).
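>>
>> Schematically, the combination is just an element-wise product (a rough
>> sketch, not the exact notebook code; tv_mask and laplacian_mask stand for
>> the per-pixel TV and Laplacian norms already computed):
>>
>> import numpy as np
>>
>> def combine_masks(tv_mask, laplacian_mask):
>>     # Element-wise product: non-zero only where BOTH detectors fire,
>>     # which damps the TV false positives on clean, sharp edges.
>>     return tv_mask * laplacian_mask
>>
>> # An OR-like alternative would be np.maximum(tv_mask, laplacian_mask),
>> # which keeps every pixel that either detector flags.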
>>
>> I have found that the only way we could overcome the oversensitivity of
>> TV alone is by setting a window (like a band-pass filter) instead of a
>> threshold (high-pass filter), because, in a noisy picture, depending on
>> the noise level, the TV values of noisy and edgy pixels are very close.
>> From an end-user perspective, this is tricky.
>>
>> Using TV × Laplacian, given that the noise stats should not vary much for
>> a given sensor at a given ISO, lets us confidently set a simple threshold
>> as a factor of the standard deviation. It gives more reproducibility and
>> allows building presets/styles for a given camera/ISO. Assuming Gaussian
>> noise, if you set your threshold factor to X (which means "unmask
>> everything above the mean (TV × Laplacian) + X standard deviations"), you
>> know beforehand how many high-frequency pixels will be affected, no matter
>> what (a rough sketch follows the list):
>>
>> - X = -1 => 84 %,
>> - 0 => 50 %,
>> - 1 => 16 %,
>> - 2 => 2.5 %,
>> - 3 => 0.15 %
>> - …
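>>
>> As a sketch of that thresholding (my own names, under the Gaussian
>> assumption above; detector is the per-pixel TV × Laplacian map and X the
>> user-facing factor):
>>
>> def noise_mask(detector, X):
>>     # "Unmask everything above the mean + X standard deviations":
>>     # with Gaussian stats, X = -1, 0, 1, 2, 3 keeps roughly
>>     # 84 %, 50 %, 16 %, 2.5 % and 0.15 % of the pixels respectively.
>>     threshold = detector.mean() + X * detector.std()
>>     return detector > threshold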
>>
>> On 01/07/2018 at 14:13, rawfiner wrote:
>>
>> Thank you for this study Aurélien
>>
>> As far as I understand, TV and the Laplacian are complementary, as they
>> detect noise in different regions of the image (noise on sharp edges for
>> the Laplacian, noise elsewhere for TV).
>> Though, I do not understand why you multiply the TV and Laplacian results
>> to get the mask.
>> Multiplying them results in a mask containing non-zero values only for
>> pixels that are detected as noise by both TV and the Laplacian.
>> Is there a particular reason for multiplying (or did I misunderstand
>> something?), or could we take the maximum value among TV and the Laplacian
>> for each pixel instead?
>>
>> Thanks again
>>
>> Cheers,
>> rawfiner
>>
>>
>> 2018-07-01 3:45 GMT+02:00 Aurélien Pierre :
>>
>>> Hi,
>>>
>>> I have done experiments on that matter and took the opportunity to
>>> correct/test further my code.
>>>
>>> So here are my attempts to code a noise mask and a sharpness mask with
>>> total variation and Laplacian norms:
>>> https://github.com/aurelienpierre/Image-Cases-Studies/blob/master/notebooks/Total%20Variation%20masking.ipynb
>>>
>>> Performance benchmarks are at the end.
>>>
>>> Cheers,
>>>
>>> Aurélien.
>>>
>>> On 17/06/2018 at 15:03, rawfiner wrote:
>>>
>>>
>>>
>>> On Sunday, 17 June 2018, Aurélien Pierre wrote:
>>>
On 13/06/2018 at 17:31, rawfiner wrote:
On Wednesday, 13 June 2018, Aurélien Pierre wrote:
>
>
>> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
>> wrote:
>> > Hi,
>> >
>> > The problem of a 2-pass denoising method involving 2 different
>> > algorithms, the latter applied where the former failed, could be that
>> > the grain structure (the shape of the noise) would differ across the
>> > picture, thus being very unpleasing.
>
>
> I agree that the grain structure could be different. Still, my feeling
> (which may be wrong) is that it would still be better than no further
> processing, which leaves some pixels unprocessed (they could form grain
> structures far from uniform if we are unlucky).
> If you think it is only due to a change of algorithm, I guess we could
> apply non-local means again on pi