Re: [darktable-dev] unused feature "blend only lightness"

2020-02-01 Thread rawfiner
Hi
Only a guess, but maybe it is used when we display some channels of the
parametric masks, or when we display the masks?
rawfiner


On Sat. 1 Feb. 2020 at 13:28, Heiko Bauke  wrote:

> Hi,
>
> all blend modes that operate in Lab space have some special treatment
> for the case that a module sets the flag IOP_FLAGS_BLEND_ONLY_LIGHTNESS.
> However, I cannot find any module that actually sets this flag.
> Furthermore, it seems unreasonable to me that the behavior of a blend
> mode should depend on a module flag.
>
> Can we get rid of this?  Is there any good reason why we have this in
> darktable?
>
> To give an example,
>
> >   if(flag == 0)
> >   {
> >     tb[1] = clamp_range_f(ta[1]*(1.0f-fabsf(tbo-tb[0]))+0.5f*(ta[1]+tb[1])*fabsf(tbo-tb[0]), min[1], max[1]);
> >     tb[2] = clamp_range_f(ta[2]*(1.0f-fabsf(tbo-tb[0]))+0.5f*(ta[2]+tb[2])*fabsf(tbo-tb[0]), min[2], max[2]);
> >   }
> >   else
> >   {
> >     tb[1] = ta[1];
> >     tb[2] = ta[2];
> >   }
>
> in _blend_darken would simplify to
>
> >   tb[1] = clamp_range_f(ta[1]*(1.0f-fabsf(tbo-tb[0]))+0.5f*(ta[1]+tb[1])*fabsf(tbo-tb[0]), min[1], max[1]);
> >   tb[2] = clamp_range_f(ta[2]*(1.0f-fabsf(tbo-tb[0]))+0.5f*(ta[2]+tb[2])*fabsf(tbo-tb[0]), min[2], max[2]);
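As a standalone illustration (an editor's sketch, not darktable source: clamp_range_f is reimplemented here and the variable roles are inferred from the snippet above), the simplified branch computes:

```c
#include <math.h>

/* Stand-in for darktable's internal clamp helper: clamp x into [mn, mx]. */
static inline float clamp_range_f(const float x, const float mn, const float mx)
{
  return x < mn ? mn : (x > mx ? mx : x);
}

/* One chroma channel of the Lab darken blend: ta = input chroma,
   tb = blend-layer chroma, and the mix weight is how far the blended
   lightness tb0 moved away from the reference lightness tbo. */
static float blend_chroma(const float ta, const float tb, const float tbo,
                          const float tb0, const float mn, const float mx)
{
  const float w = fabsf(tbo - tb0);
  return clamp_range_f(ta * (1.0f - w) + 0.5f * (ta + tb) * w, mn, mx);
}
```

Note that with w == 0 the formula already reduces to tb = ta, i.e. the same result the flag-guarded else branch forces unconditionally.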
>
>
> Heiko
>
>
> --
> -- Number Crunch Blog @ https://www.numbercrunch.de
> --  Cluster Computing @ https://www.clustercomputing.de
> --  Social Networking @ https://www.researchgate.net/profile/Heiko_Bauke
> ___
> darktable developer mailing list
> to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org



Re: [darktable-dev] Changelog

2019-12-13 Thread rawfiner
There are several resources on discuss.pixls.us about some of the changes.
See:
https://discuss.pixls.us/t/new-interface-in-darktable-2-7-dev/12106
https://discuss.pixls.us/t/tagging-improvements-in-darktable-2-7/13571
https://discuss.pixls.us/t/3d-lut-module-in-darktable-2-7-dev/12341
https://discuss.pixls.us/t/changes-in-noise-reduction-for-darktable-2-7-3-0/14672
https://discuss.pixls.us/t/a-tone-equalizer-in-darktable/10678/102
https://discuss.pixls.us/t/the-darktable-3-0-video-series/14837

As you can see in the last link, Aurélien Pierre has made some (great)
video tutorials to explain some of the changes.
Also, I will make a video about noise reduction (as usual), hopefully in late
December or early January, to explain the changes and how to reduce noise
with darktable 3.0.

Cheers,
rawfiner

On Fri. 13 Dec. 2019 at 20:56, Bruce Williams  wrote:

> Thanks for the links, guys, much appreciated!
>
> Cheers,
> Bruce Williams.
>
> -- Forwarded message -
> From: Jochen Keil 
> Date: Fri., 13 Dec. 2019, 22:53
> Subject: Re: [darktable-dev] Changelog
> To: Bruce Williams 
> Cc: darktable 
>
>
> Hi Bruce,
>
> I'm not a developer, but allow me to step in:
>
> Here you can find a pretty comprehensive summary:
>
> https://github.com/darktable-org/darktable/blob/master/RELEASE_NOTES.md
>
> I think it's the base for this post:
>
> https://www.darktable.org/2019/11/darktable-300rc2-released/
>
> HTH. :)
>
> Cheers,
>
>   Jochen
>
>
> On Fri, Dec 13, 2019 at 11:51 AM Bruce Williams 
> wrote:
>
>> Hi Devs,
>> Just wanted to ask, where is the best location to see/read up on all of
>> the new features for 3.0?
>> I'm starting to jot down ideas for new videos, and don't want to
>> accidentally omit anything!
>> Cheers,
>> Bruce Williams
>> --
>>
>> audio2u.com
>> brucewilliamsphotography.com
>> shuttersincpodcast.com
>> sinelanguagepodcast.com
>>
>> e-mail  | Twitter <http://twitter.com/@audio2u> |
>> LinkedIn <http://au.linkedin.com/pub/bruce-williams/1/318/489> | Facebook
>> <http://www.facebook.com/audio2u> | Soundcloud
>> <http://www.soundcloud.com/audio2u> | Quora
>> <https://www.quora.com/profile/Bruce-Williams-5>
>> --
>>
>>
>>
>> ___
>> darktable developer mailing list to unsubscribe send a mail to
>> darktable-dev+unsubscr...@lists.darktable.org
>>
>
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org



Re: [darktable-dev] Fujifilm H1 treatment differences between jpeg and raw files

2019-12-07 Thread rawfiner
Do you have a DR setting enabled on the camera (like DR200, DR400, or DR
auto)?
This setting results in underexposed raw images.
You can try adding 1EV or 2EV in the exposure module to compensate for that.

rawfiner

On Sat. 7 Dec. 2019 at 08:12, Axel Gerber  wrote:

> Is it possible that the wrong base curve is chosen? What do you see when
> you open the base curve module and open the selector for the different
> base curve styles?
>
> Sent from my mobile phone
>
>
>  Original message 
> Subject: [darktable-dev] Fujifilm H1 treatment differences between jpeg
> and raw files
> From: Lorenzo Fontanella
> To: darktable-dev@lists.darktable.org
> Cc:
>
> Good morning,
> I submit a problem that has cost me sleep without my ever finding a
> valid explanation; maybe you can help me understand whether it is due to a
> bug, a wrong setting, or something else.
> When opening my shots, darktable renders the .jpeg and .raf files in
> totally different ways.
> From the screenshots you can find at this link, you can evaluate for
> yourself.
>
>
> https://drive.google.com/drive/folders/1jhPJJmv08gg8jzsyGnvgzwXiymAHiOMh?usp=sharing
>
> The "clearest" images are the jpegs; these are also identical to
> how the camera shows them during shooting and preview.
>
> Some thoughts:
> 1. With other software the difference, which exists anyway since we are
> talking about jpeg files already processed and compressed by the camera
> versus untreated raw files, is not so marked.
> 2. The settings applied in the camera are minimal and the profile is
> Provia / Standard.
> 3. This behavior is present, even if in a different way, even with shots
> taken at ISO 100 (i.e. without sensor amplification).
> 4. Even using the CLUT tables created to emulate the same effect (Provia /
> Standard), the difference is huge; in fact with those it is not clear how
> colors are changed, so they are simply useless.
>
> I write to you because my goal is to get similar jpeg / raw files,
> allowing for the natural and physiological differences between
> these two output files.
>
> Now, if I start working on a .raf file, it's because I want to improve on
> the in-camera jpeg for various reasons. Currently, working on the .raf
> file, I have to work just to get results similar to the in-camera jpeg,
> which makes the work of developing the raw files pointless.
>
> Thanks
> Lorenzo Fontanella
>
>
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org



Re: [darktable-dev] Current dt master very slow

2019-09-29 Thread rawfiner
@Terry, you can try with current master (which now includes the patch to
make the non-local means preview faster).

Cheers,
rawfiner

On Sun. 29 Sep. 2019 at 06:45, Terry Duell  wrote:

> On Sun, 29 Sep 2019 09:59:43 +1000, jys 
> wrote:
>
> >
> >
> > On Sat, Sep 28, 2019, at 16:40, Terry Duell wrote:
> >
> >>RawSpeed submodule not found.  You probably want to run:
> >>
> >>$ git submodule init
> >>
> >>and then
> >>
> >>$ git submodule update
> >
> > That.
> >
> > You need to run those commands in a new local darktable git repo to
> pull
> > in the rawspeed stuff. It's not a bad idea to run the update command
> > again after each pull from upstream, just to stay in sync.
> >
> > Full build instructions in the readme:
> > https://github.com/darktable-org/darktable/blob/master/README.md
> >
>
> Sorry, a reply seems to have got away without my response.
> Thanks for your info.
> I understand those requirements for a git clone; it is something that I
> do automatically here via a script, to keep my git repository clone up to
> date.
> I wanted to avoid having to create another repository clone to test
> rawfiner's changes to his local repository, by grabbing a zip archive
> (an option that was provided), thinking this would suffice for a
> local test build. It escaped me that a git clone, and all that it
> entailed, would be required.
>
>
> Cheers,
> --
> Regards,
> Terry Duell
> ___
> darktable developer mailing list
> to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org



Re: [darktable-dev] Current dt master very slow

2019-09-28 Thread rawfiner
Please, could you test the branch rawfiner-faster-nlmeans-preview of my git
repository https://github.com/rawfiner/darktable ?
I tried to make the non-local means preview faster while keeping the
preview accuracy.

Cheers,
rawfiner

On Sat. 28 Sep. 2019 at 09:26, rawfiner  wrote:

>
>
> On Sat. 28 Sep. 2019 at 00:59, Terry Duell  wrote:
>
>> Hello All,
>>
>> On Fri, 27 Sep 2019 09:42:54 +1000, Terry Duell 
>> wrote:
>>
>> > Hello,
>> > I've just built the current master (1886-ga2f84373b) on Fedora 30, using
>> > my usual rpmbuild spec file, and find that it is very slow, particularly
>> > noticeable when zooming the image or applying noise reduction.
>>
>> Some more info that may help with this issue.
>> I just downgraded to a build of dt that I made on 25 Sep, and
>> performance
>> is back to what I have been used to, so I am guessing that something
>> changed in dt source since that version that impinges on performance, or
>> something has changed on my system that has affected subsequent builds??
>> I am unsure if there is a way to identify the specific version of the
>> master that was used for my build on the 25th.
>> Hope this helps.
>>
>
> Thanks for narrowing the search :-)
> So, it may be the addition of extra wavelet scales, but that should only
> make dt slower in wavelets mode (and is necessary to correctly denoise
> high ISO images).
> Or the commit that makes the denoise profile non-local means preview more
> accurate. For this one, you should only see speed differences when zoomed
> out; at 100% it should not make any difference. Anyway, I can try to make
> it a bit faster when zoomed out while trying not to lower the preview
> accuracy.
>
> Cheers,
> rawfiner
>
>
>>
>> Cheers,
>> --
>> Regards,
>> Terry Duell
>>
>> ___
>> darktable developer mailing list
>> to unsubscribe send a mail to
>> darktable-dev+unsubscr...@lists.darktable.org
>>
>>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org



Re: [darktable-dev] Current dt master very slow

2019-09-28 Thread rawfiner
On Sat. 28 Sep. 2019 at 00:59, Terry Duell  wrote:

> Hello All,
>
> On Fri, 27 Sep 2019 09:42:54 +1000, Terry Duell 
> wrote:
>
> > Hello,
> > I've just built the current master (1886-ga2f84373b) on Fedora 30, using
> > my usual rpmbuild spec file, and find that it is very slow, particularly
> > noticeable when zooming the image or applying noise reduction.
>
> Some more info that may help with this issue.
> I just downgraded to a build of dt that I made on 25 Sep, and performance
> is back to what I have been used to, so I am guessing that something
> changed in dt source since that version that impinges on performance, or
> something has changed on my system that has affected subsequent builds??
> I am unsure if there is a way to identify the specific version of the
> master that was used for my build on the 25th.
> Hope this helps.
>

Thanks for narrowing the search :-)
So, it may be the addition of extra wavelet scales, but that should only
make dt slower in wavelets mode (and is necessary to correctly denoise
high ISO images).
Or the commit that makes the denoise profile non-local means preview more
accurate. For this one, you should only see speed differences when zoomed
out; at 100% it should not make any difference. Anyway, I can try to make
it a bit faster when zoomed out while trying not to lower the preview
accuracy.

Cheers,
rawfiner


>
> Cheers,
> --
> Regards,
> Terry Duell
> ___
> darktable developer mailing list
> to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org



Re: [darktable-dev] Current dt master very slow

2019-09-28 Thread rawfiner
On Sat. 28 Sep. 2019 at 09:00, Terry Duell  wrote:

> More grist for the mill...
>
> On Sat, 28 Sep 2019 08:58:33 +1000, Terry Duell 
> wrote:
>
> > Hello All,
> >
> > On Fri, 27 Sep 2019 09:42:54 +1000, Terry Duell 
> > wrote:
>
> > Some more info that may help with this issue.
> > I just downgraded to a build of dt that I made on 25 Sep, and
> > performance is back to what I have been used to, so I am guessing that
> > something changed in dt source since that version that impinges on
> > performance, or something has changed on my system that has affected
> > subsequent builds??
>
> My build from 25th Sep (with OpenMP on) is now locking up, and I have to
> force a quit, so downgraded again to a build from 23rd Sep and that is
> doing the same.
>

Just to be sure: when downgrading like this, do you start from files
without XMP sidecars and a fresh database?

> I never had any of these issues with those builds previously, so it is
> beginning to look like something has changed on my system that is
> affecting this.
> I'll do a new build of the current master, with OpenMP off, and see how
> that behaves.
>
> Cheers,
> --
> Regards,
> Terry Duell
> ___
> darktable developer mailing list
> to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org



Re: [darktable-dev] Current dt master very slow

2019-09-27 Thread rawfiner
On Fri. 27 Sep. 2019 at 09:50, thokster  wrote:

> Hi,
> on my Ubuntu system it seems to be both. It's less slow when using the
> button but still slow with denoiseprofile.
> darktable -d opencl says:
>
> 369,710244 [opencl_denoiseprofile] couldn't enqueue kernel! -52, devid 0
> 369,710270 [opencl_pixelpipe] could not run module 'denoiseprofile' on
> gpu. falling back to cpu path
>

I unfortunately can't reproduce this issue on my side...
Could you tell me if it is with wavelets mode, non-local means mode, or
both?

rawfiner


> (once when using the button, twice with mouse-wheel)
>
> Am 27.09.19 um 08:34 schrieb rawfiner:
> > If it is only zooming, I guess this may be related to the problem that
> > computations are triggered twice when zooming with the mouse wheel,
> > which doubles the processing time. If you zoom with the button in the
> > upper-left part of the darkroom, do you experience the same issues?
> >
> > rawfiner
> >
> >
> > On Fri. 27 Sep. 2019 at 08:17, Terry Duell <mailto:tdu...@iinet.net.au> wrote:
> >
> > Hello Andreas,
> >
> > On Fri, 27 Sep 2019 16:03:54 +1000, Andreas Schneider
> > mailto:a...@cryptomilk.org>>
> > wrote:
> >
> > > On Friday, 27 September 2019 01:42:54 CEST Terry Duell wrote:
> > >> Hello,
> > >
> > > Hi,
> > >
> > >> I've just built the current master (1886-ga2f84373b) on Fedora
> > 30, using
> > >> my usual rpmbuild spec file, and find that it is very slow,
> > particularly
> > >> noticeable when zooming the image or applying noise reduction,
> > it takes
> > >> much longer than previous builds to finish 'working'.
> > >> It's possible I have a local issue, but not seen anything
> > obvious thus
> > >> far.
> > >
> > > OpenMP is turned off in the Fedora spec file.
> >
> > I agree that would be the first best guess, but OpenMP is turned
> > on in my
> > spec file.
> > I searched the rpmbuild output and OpenMP is found.
> >
> > Cheers,
> > --
> > Regards,
> > Terry Duell
> >
>  ___
> > darktable developer mailing list
> > to unsubscribe send a mail to
> > darktable-dev+unsubscr...@lists.darktable.org
> >
> >
> >
> ___
> > darktable developer mailing list to unsubscribe send a mail to
> > darktable-dev+unsubscr...@lists.darktable.org
>
>
> ___
> darktable developer mailing list
> to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org



Re: [darktable-dev] Current dt master very slow

2019-09-27 Thread rawfiner
If it is only zooming, I guess this may be related to the problem that
computations are triggered twice when zooming with the mouse wheel, which
doubles the processing time. If you zoom with the button in the upper-left
part of the darkroom, do you experience the same issues?

rawfiner


On Fri. 27 Sep. 2019 at 08:17, Terry Duell  wrote:

> Hello Andreas,
>
> On Fri, 27 Sep 2019 16:03:54 +1000, Andreas Schneider 
>
> wrote:
>
> > On Friday, 27 September 2019 01:42:54 CEST Terry Duell wrote:
> >> Hello,
> >
> > Hi,
> >
> >> I've just built the current master (1886-ga2f84373b) on Fedora 30, using
> >> my usual rpmbuild spec file, and find that it is very slow, particularly
> >> noticeable when zooming the image or applying noise reduction, it takes
> >> much longer than previous builds to finish 'working'.
> >> It's possible I have a local issue, but not seen anything obvious thus
> >> far.
> >
> > OpenMP is turned off in the Fedora spec file.
>
> I agree that would be the first best guess, but OpenMP is turned on in my
> spec file.
> I searched the rpmbuild output and OpenMP is found.
>
> Cheers,
> --
> Regards,
> Terry Duell
> ___
> darktable developer mailing list
> to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org



Re: [darktable-dev] DT bad on skin tones?

2019-05-28 Thread rawfiner
It is not a matter of digital development choices here, nor of one's own
tastes; it is a matter of trying to know whether the input color
profiles are correctly applied in darktable.
We can't compare colors if we use modules that mess up the colors (and
we know the base curve does that).
So this has nothing to do with approaches to digital development.
The only question is: with only color-accurate modules, are the colors OK
with the standard color matrix, or is the colorin module broken?

Cheers,
rawfiner

On Tue. 28 May 2019 at 10:05, François Tissandier <
francois.tissand...@gmail.com> wrote:

> The base curve can still be used with the standard one instead of the
> camera one; colours are quite fine then. I was doing that before the
> arrival of filmic. So the base curve can be kept. And indeed it's good to
> have the choice.
>
> On Tue. 28 May 2019 at 10:00, Florian W  wrote:
>
>> Not everyone has the same approach to digital development (e.g. film-like
>> response vs. more creative curve editing, with its disadvantages) and one of
>> the strong advantages of darktable is allowing all these use cases. Starting
>> a war about this won't get us anywhere with the issue at hand here.
>>
>>
>> On Tue. 28 May 2019, 09:33, Aurélien Pierre 
>> wrote:
>>
>>> For the last time :
>>>
>>> *BASE CURVES ARE EVIL, CRAP, GARBAGE, NO-GO, DON'T TOUCH, BIO HAZARD,
>>> KEEP AWAY, HUN HUN, SURVIVORS WILL BE SHOT AGAIN.*
>>>
>>> I wouldn't have taken 2 months of my life to develop filmic if base
>>> curves had worked as expected. Base curves are a broken design and will
>>> always destroy colors. I have repeated that multiple times in the past
>>> years, it would be great if people started to listen.
>>>
>>> In darktable 2.8, there will be a global preference to have the base
>>> curves disabled by default because they really harm, especially for the
>>> newest HDR cameras. Until then, the first thing you need to do while
>>> opening a raw picture is to disable that god-forsaken module manually.
>>>
>>> Thanks for confirming it has nothing to do with matrices though. That
>>> means everything works as expected.
>>>
>>> Aurélien.
>>> On 28/05/2019 at 09:00, Florian Hühn wrote:
>>>
>>>
>>>> If RawTherapee is really using the same matrices, it would be
>>>> interesting to find out what's being done differently (or additionally)...
>>>>
>>>> RawTherapee uses dcraw for import. I took the  A7RIII testchart raw and
>>> ran it through  'dcraw -v -w -o 1 -T DSC00157.ARW', then imported the .ARW
>>> and the TIFF created by dcraw into DarkTable. The TIFF looks more natural
>>> to me. Especially the skin color of the guy on the right looks somehow a
>>> bit yellowish / ill in the .ARW but more natural in the TIFF from dcraw.
>>> BUT: When importing the TIFF no base curve is applied. When I disable
>>> base curve on the .ARW and instead use levels and tone curve manually i can
>>> get a look that is closer to the TIFF (i.e. the dcraw variant).
>>> Maybe it comes down to different default settings in DarkTable importing
>>> vs. dcraw. At some point I'd like to double-check that the matrix
>>> calculations done by DT are indeed carried out as intended, but so far I
>>> didn't find a way to artificially create a raw-file for this purpose.
>>>
>>>
>>> ___
>>> darktable developer mailing list to unsubscribe send a mail to
>>> darktable-dev+unsubscr...@lists.darktable.org
>>>
>>>
>>> ___
>>> darktable developer mailing list to unsubscribe send a mail to
>>> darktable-dev+unsubscr...@lists.darktable.org
>>>
>>
>> ___
>> darktable developer mailing list to unsubscribe send a mail to
>> darktable-dev+unsubscr...@lists.darktable.org
>>
>
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org



Re: [darktable-dev] Raw import using color matrices

2019-04-30 Thread rawfiner
Hi,
Please write a bug report on github to discuss the issue. You can attach
the files there, or put them on a drive if necessary.
Thanks
rawfiner

On Tue. 30 Apr. 2019 at 10:28, Florian Hühn  wrote:

> Hi,
>
> I am currently facing an issue where the red channel of imported images in
> darktable contains corruptions (black areas and ghost images) when I use
> "standard color matrix" (camera raw) or "embedded matrix" (dng file).
> Color profiles like Rec709 work as expected. Also, converting the raw file
> using dcraw and importing the resulting tiff in darktable (sRGB color
> profile) works fine. I already made sure that dcraw and adobe_coeff.c of
> darktable use the same color matrix for my camera, so I expected similar
> results. I cross-checked with dcraw, RawTherapee and my camera
> manufacturer's raw software - all show the picture as expected; only
> darktable shows corruptions.
>
> What would be the best place to discuss this issue? Attaching huge raw
> files to a mailing list might not be the best option, I guess.
> Should I open a bug report for this? Currently it seems to me as if
> darktable somehow handles color matrices differently from all the other
> programs, but I'm not an expert on this topic. Maybe I am missing something
> critical?
>
> Cheers,
> Florian
>
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org



[darktable-dev] where to discuss big changes

2019-04-16 Thread rawfiner
Dear all,

Currently, the place where big changes should be discussed is IRC
(according to houz; I have not seen this information written down anywhere,
which is problematic, as new contributors may make big changes and so need
at least to know where the place to discuss things is).
I think IRC is not convenient at all.

I would like to discuss why I think having these discussions on IRC is bad,
and what alternatives we have.

IRC:
pros: honestly, I don't see any (it is nice for casual talk, but discussing
important changes is very different from casual talk...)
cons:
- no logs. Basically, if you are not connected at the right time to discuss
the changes, you cannot catch up, and you cannot even know what
decision was taken!
- instantaneous chatting. While it is nice for casual talk, it forces us to
write answers quickly. For making important decisions, we should at least
have a platform where anyone can take the time to write well-argued
answers. In addition, answering quickly does not leave enough time to think
about the way to say things, which can result in misunderstandings,
under-thought answers, and potentially even anger.
- people in different places on the planet just can't all be connected at
the same time
- all topics are mixed up in the discussion, so even if we had a logging
system it would be hard to find a particular topic easily

Existing dev mailing list:
pros:
- it is logical for new devs to join the mailing list (way more logical
than being connected 24/7 to an IRC channel, in my opinion)
- people can see the message when they want, and reply when they want,
wherever they are on the planet
- we can have conversations organised by topics
cons:
- the mailing list is used for several purposes (help requests for
developing a module, bugs, interaction between users and devs, and
information about ongoing developments). This con could be compensated for
by using a tag in subjects related to big changes, like [big-changes] or
whatever you prefer. This way, even devs who are overwhelmed with emails
would have a way to filter emails that may involve important decisions.

Creating another mailing list:
Basically the same pros as using the existing mailing list. The only
difference I see is that we wouldn't need to put a tag in the email subject.

Using discuss.pixls.us with a category with limited access:
pros:
- we could benefit from the forum's math support to discuss while showing
math stuff if needed
- people can see the message when they want, and reply when they want,
wherever they are on the planet
- we can have conversations organised by topics
cons:
- limited access would exclude new contributors from the conversation,
whereas in my opinion any contributor should be able to defend their ideas

Using github by commenting directly on PRs
pros:
- we can comment directly near the code, and we can comment code details
- people can see the message when they want, and reply when they want,
wherever they are on the planet
- we can have conversations organised by topics (PRs)
cons:
- devs will have to check new PRs regularly to give their opinion, and a
big-change PR may "hide" in between small ones. However, we could easily
have a tag "big-change" to request devs to pay attention to particular PRs,
or use the PR names to indicate such big changes
- big changes should be discussed before they are made. Yet, I think this
drawback can be compensated for by opening PRs really early, which is
already done by several of us (see PRs with [WIP] in the title)

What do you think?
Please, give your opinion on all options, and select at least 2 solutions
that could be ok for you, so that we can make a decision at the end.
Also, if you don't like a solution, please explain why.

I personally prefer using the current mailing list with tags in the subject
lines, or using github and commenting directly on PRs. It would be OK for
me as well if we use discuss.pixls.us. I think creating a new mailing list
is a bit overkill, but I would be OK with it if others prefer that. Last, I
think continuing to use IRC for this purpose would be a big mistake that
would lead to more communication issues, so I am totally against this
solution.

I hope we will find a solution so that amazing changes to this amazing
software can be peacefully discussed in the future.

Cheers,
rawfiner

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] An algorithm to downscale raw data BEFORE demosaic by whatever scale factor

2019-02-27 Thread rawfiner
Hi Johannes
For now, I am working on denoise profile's Anscombe transform, not one for
rawdenoise (that may come later). Sorry if that was unclear.

Basically, I'd like to have a variance closer to 1 for shadows.
The basic idea I have is to use a linear approximation of the sqrt for
shadows, while using the real sqrt for the rest of data.

Let me go into some details.
I will simplify a bit and stay in the case of a simple Poisson
distribution, but the principles are the same for the generalized Anscombe
transform.
Also, note that I did not know at all how the Anscombe transform worked a
month ago, so if there are mistakes in the reasoning, feel free to comment.

The Anscombe transform is built on a Taylor expansion of the variance:
if we call f the transform:
var(f(X)) is approximately equal to:
var(f(u)+f'(u)*(X-u)) which is equal to:
f'(u)^2*var(X)
where u is the mean.
Since for Poisson noise u = var, we get:
var(f(X)) is approximately equal to f'(u)^2*u, and we want that to equal 1.
Taking f(x) = 2*sqrt(x), we get f'(x) = 1/sqrt(x)
Thus f'(x)^2 = 1/x
Thus f'(u)^2*u = (1/u)*u = 1
The principle for the generalized Anscombe transform is the same, except we
have var = a*u + b.

Ok, now, why do we get variance higher than 1 for values close to 0?
Well, as said before var(f(X)) is approximately equal to
var(f(u)+f'(u)*(X-u)).
But it seems that the other terms of the Taylor expansion have an influence
when x is small.
Yet, considering them makes the formula way harder to develop:
var(f(u)+f'(u)*(X-u)+f''(u)/2*(X-u)^2+...)=?
A possible solution is to keep f = sqrt where it works, i.e. for values
significantly higher than 0, and take f(x) = dx+e for values close to 0.
With f=dx+e, the nice thing is that f'', f''', etc are all null, so they
cannot impact the variance and make it higher near 0.
Thus, var(f(X)) = var(f(u)+f'(u)*(X-u))
The less nice thing is that:
var(f(X))=f'(u)^2*var(X)=d*d*u
whereas u will vary depending on the darkness of the zone of the image we
are considering.
Thus, we need to find the values of d and e at a point which is close
enough to zero that the approximation remains good enough. We choose d and
e to ensure continuity with the sqrt function and its first derivative at
this particular point x0.

Our transform becomes:
f(x) = dx+e if x < x0
or sqrt(x) if x >= x0

For the back-transform, we first apply the unbiased inverse of sqrt; then,
if the result is lower than x0, we replace it by the algebraic inverse of
dx+e (which is unbiased, as this function is linear. That is also why I
considered a linear approximation, even if a second-order approximation
seems to work in practice too: I have not yet understood how to find an
unbiased inverse for non-linear formulas.)

In practice, it seems to work well, but the x0 point has to be carefully
chosen (not too high, or the dark areas will be denoised way too much; not
too low, or we will lose the purpose of having this).
It seems that the position of the x0 we should take depends on the "a"
parameter of the generalized Anscombe transform.
And... I don't know yet how; this is the current state I have ;-)

Cheers,
rawfiner

On Wed. 27 Feb. 2019 at 09:57, johannes hanika  wrote:

> hi,
>
> let me know how you go with the anscombe transform, that sounds
> important. in principle we should run it before black point
> subtraction, to be able to correctly extract the zero-mean values for
> black. the current transform wouldn't work with such data, however.
>
> cheers,
>  jo
>
> On Thu, Feb 21, 2019 at 8:49 AM rawfiner  wrote:
> >
> > Hi Andreas
> >
> > This is currently paused. I may come back to this in some time ;-)
> > I have found various ways to improve denoising performance without
> > requiring this, which is why I have not worked on it for a few months.
> >
> > Basically, currently my priority is now to improve denoiseprofile.
> > For instance, you may have noticed the coarse grain slider for non local
> means in master.
> > I have some other changes ongoing, you can test some of them on branch
> rawfiner-denoise-profile-updates on my git
> https://github.com/rawfiner/darktable/tree/rawfiner-denoise-profile-updates
> > Basically I use it like this :
> > - I put the patch size to 4
> > - I increase the coarse grain noise slider until no very coarse chroma
> noise remains (some fine grain chrominance noise can remain)
> > - then I increase the details slider to my taste
> > - I fix the remaining chroma noise with the equalizer
> > Note that with these changes, I am mostly using only one instance of
> denoiseprofile, without any particular blending mode.
> > It works very well for medium and high iso, but is not perfect for very
> high iso. In such case, it can be combined with denoise bilateral to get
> nice results.
> >
> > I am also currently working on improving the ansco

Re: [darktable-dev] An algorithm to downscale raw data BEFORE demosaic by whatever scale factor

2019-02-20 Thread rawfiner
Hi Andreas

This is currently paused. I may come back to this in some time ;-)
I have found various ways to improve denoising performance without
requiring this, which is why I have not worked on it for a few months.

Basically, currently my priority is now to improve denoiseprofile.
For instance, you may have noticed the coarse grain slider for non local
means in master.
I have some other changes ongoing, you can test some of them on
branch rawfiner-denoise-profile-updates on my git
https://github.com/rawfiner/darktable/tree/rawfiner-denoise-profile-updates
Basically I use it like this:
- I set the patch size to 4
- I increase the coarse grain noise slider until no very coarse chroma
noise remains (some fine-grain chrominance noise can remain)
- then I increase the details slider to my taste
- I fix the remaining chroma noise with the equalizer
Note that with these changes, I am mostly using only one instance of
denoiseprofile, without any particular blending mode.
It works very well for medium and high ISO, but is not perfect for very
high ISO. In such cases, it can be combined with denoise bilateral to get
nice results.

I am also currently working on improving the Anscombe transform associated
with the profile, to get closer to our goal of having a variance of 1
everywhere (currently we have some big spikes above 1 in the shadows, which
is a problem as shadows are usually what we want to denoise the most)
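For reference, here is a hedged sketch of a generalized Anscombe transform for a noise model var(x) = a*x + b, with a quick Monte-Carlo check of the "variance of 1" goal (the parameter names a and b are my own; darktable's actual parametrization may differ):

```python
import math
import random
import statistics

# Generalized Anscombe transform for mixed Poisson-Gaussian noise with
# var(x) = a*x + b (illustrative sketch, not darktable's code).
def gat(x, a, b):
    arg = a * x + (3.0 / 8.0) * a * a + b
    return (2.0 / a) * math.sqrt(max(arg, 0.0))

# Monte-Carlo check that the output standard deviation is close to 1 for
# a mid-tone value (noise approximated as Gaussian for simplicity).
random.seed(0)
a, b, u = 0.5, 0.01, 4.0
noisy = [u + random.gauss(0.0, math.sqrt(a * u + b)) for _ in range(20000)]
s = statistics.pstdev(gat(x, a, b) for x in noisy)
assert 0.9 < s < 1.1  # variance roughly stabilized at this brightness
```

The stabilization degrades as the signal approaches zero, which is consistent with the spikes in the shadows described above.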

Cheers,
rawfiner

On Thu, Feb 21, 2019 at 08:03, Andreas Schneider  wrote:

> On Wednesday, 5 September 2018 21:34:21 CET rawfiner wrote:
> > Hi!
>
> Hi rawfiner,
>
> > Some of you may now that I am working on a raw denoising algorithm.
> > One of the hard thing was that prior to demosaic, the algorithms are
> > computed on unscaled data, while after demosaic the algorithms can
> compute
> > a preview on a downscaled image, which is easier in terms of speed.
>
> what happened to your work, any news?
>
>
> Thanks,
>
>
> Andreas
>
>
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org



[darktable-dev] Update usermanual on website

2019-01-08 Thread rawfiner
Hi,
Is there any plan to update the user manual to 2.6 on darktable.org soon?
(both the online version and the PDFs)
It would be nice to have documentation for the new modules and other
changes available :-)
Cheers,
rawfiner


Re: [darktable-dev] Preview & focus detection

2018-12-17 Thread rawfiner
Hello

This is explained here:
https://www.darktable.org/2013/11/determining-focus-in-lighttable/

cheers,
rawfiner

On Sat, Dec 15, 2018 at 14:48, FF  wrote:

> Hello,
>
> Could someone explain what criteria define the area of sharp focus?
>
> Thanks,
>
> Jack.
>
>
> ___
> darktable developer mailing list
> to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>
>




Re: [darktable-dev] Denoise profile's anscombe transform

2018-12-12 Thread rawfiner
Thanks a lot for your answer!

On Wed, Dec 12, 2018 at 22:24, johannes hanika  wrote:

> heya,
>
> sorry for the late answer, usual madness. but i think you're raising a
> few important points here. let me try to answer some of it:
>
> No problem for the delay, this is clearly not an urgent matter ;-)


> On Fri, Dec 7, 2018 at 4:04 AM rawfiner  wrote:
> > For wavelet codes, there is a 2x multiplier on B and R channels, while
> it is not the case for the anscombe transform of non local means:
> >   const float wb[3] = { // twice as many samples in green channel:
> >     2.0f * piece->pipe->dsc.processed_maximum[0] * d->strength * (in_scale * in_scale),
> >     piece->pipe->dsc.processed_maximum[1] * d->strength * (in_scale * in_scale),
> >     2.0f * piece->pipe->dsc.processed_maximum[2] * d->strength * (in_scale * in_scale)
>
> yes, that is the bayer pattern. the code is from pre-xtrans days.
> also, we're only ever using the variance measured in the green channel
> when profiling noise. that's also a lazy coder thing, because we don't
> measure noise on the raw, but on the input to the denoising module
> (i.e. it holds the statistics that are relevant to the denoising at
> this step).
>
>
I have seen that only variance measured in the green channel is used, but I
do not think it is an issue. All channels should behave more or less the
same, as long as their values are not amplified.


> > Why is there this multiplier? I understand from the comment that it is
> related to the fact that we have 2 times more green pixels than R or B
> pixels on a bayer sensor (note that this is not perfectly valid on xtrans
> sensor). Yet, I do not see the link between this, and the distribution of
> the poisson noise, and thus of the anscombe transform to be done.
>
> what do you mean? the noise model shows the standard deviation per
> brightness input level. if you have 1/4th of the samples variance goes
> up by a factor of two. not sure the formulas there are correct though.
>

I do not have enough background in probability, so maybe I am wrong.
Yet, I can't get the intuition of why the variance would be lower with
more samples:
- with a higher brightness on all samples, I agree the variance will be
lower.
- with simply more samples, my intuition is that the noise characteristic
is unchanged: for instance, if we take a picture of a smooth surface to see
only the noise, I don't see why having more samples would change the
histogram that we get.
Maybe my intuition is wrong?
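A toy simulation of this intuition (my own illustration, approximating Poisson noise by a Gaussian of matching variance): the per-sample histogram is indeed unaffected by how many samples we record, but a value obtained by *averaging* two samples, as demosaic interpolation effectively does for green, has half the variance:

```python
import random
import statistics

# Flat, uniformly lit surface: photon noise with mean == variance == lam.
random.seed(1)
lam = 100.0

# 40k independent single samples vs 40k averages of two samples each.
single = [random.gauss(lam, lam ** 0.5) for _ in range(40000)]
averaged = [(random.gauss(lam, lam ** 0.5) + random.gauss(lam, lam ** 0.5)) / 2.0
            for _ in range(40000)]

v_single = statistics.pvariance(single)   # close to lam
v_avg = statistics.pvariance(averaged)    # close to lam / 2
assert 90.0 < v_single < 110.0
assert 45.0 < v_avg < 55.0
```

So sample count alone does not change the histogram; averaging does, which might be one reading of the factor-of-two in the code.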


>
> > In addition, I do not understand why this multiplier is only here in the
> case of the wavelets process, and not here in the case of the nlmeans
> process.
>
> now that sounds like a bug to me.. on the other hand it may be the
> product of careful engineering which i also very carefully forgot:
> nlmeans just computes a full tile distances and then splats the thing
> over your center pixel no matter what the colour channel. quite
> possibly the noise per colour doesn't matter to the distance metric at
> all.
>
>
The noise per color, or at least the multipliers associated with it,
matters indirectly in the distance metric:
the Anscombe transform performed will be different from one channel to
another, so the values obtained will not be in the same range. As such,
when we take the squared difference to compare patches, a channel that has
a wider range will probably lead to larger differences than a channel that
has a narrower one. Thus, it would have "more weight" in the similarity
measure.
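A tiny numeric illustration of this point (made-up values, not darktable data): with the same 10% relative difference per pixel, the channel spanning the wider range dominates a squared-difference patch distance:

```python
# Two 2-pixel "patches" with a wide-range channel L and a narrow-range
# channel a; each pixel differs by the same 10% relative amount.
p1 = {"L": [10.0, 12.0], "a": [1.0, 1.2]}
p2 = {"L": [11.0, 13.2], "a": [1.1, 1.32]}

dist = {c: sum((x - y) ** 2 for x, y in zip(p1[c], p2[c])) for c in p1}
# dist["L"] is two orders of magnitude larger than dist["a"], so the
# wide-range channel alone decides which patches look "similar".
assert dist["L"] > 50 * dist["a"]
```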


> > The second thing I noticed, is that the "processed_maximum" are all
> equal if highlight reconstruction is activated. Basically, they are equal
> to the maximum multiplier of the white balance. Thus, the anscombe
> transform is the same for R and B for instance, even though one may be much
> more "amplified" than the other.
>
> right. that sounds strange but may actually be like this for the same
> reason: the noise statistics are only used for the green channel, and
> the others just follow? it's possible that this is an oversight that
> gives suboptimal results, but it may be possible that the two others
> (RB) don't really affect the result at all, i'd need to carefully
> check the code.
>

White balance is basically the same as a per-channel relative exposure: it
just amplifies the values.
As such, if the red channel has a white-balance multiplier of 2 and the
image ISO is 6400, it would be as if we had an image with a perfectly
exposed green channel at ISO 6400 and a perfectly exposed red channel at
ISO 12800 (assuming the noise is mainly photon noise).
That's why I think we should use the white-balance multipliers for the
Anscombe transform.
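A numeric sketch of this argument (toy numbers, assuming pure photon noise so that variance equals the mean in raw units): multiplying a channel by a white-balance coefficient c scales its noise variance by c^2, just as a higher effective ISO would:

```python
import math
import random
import statistics

# White balance as per-channel amplification: var(c*X) = c**2 * var(X).
random.seed(2)
mean = 400.0                        # photon noise: variance == mean
raw = [random.gauss(mean, math.sqrt(mean)) for _ in range(30000)]

wb = 2.0                            # hypothetical red-channel multiplier
balanced = [wb * x for x in raw]

ratio = statistics.pvariance(balanced) / statistics.pvariance(raw)
assert abs(ratio - wb ** 2) < 1e-6  # noise variance scaled by wb**2
```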


>
> > If highlight re

Re: [darktable-dev] Denoise profile's anscombe transform

2018-12-12 Thread rawfiner
On Wed, Dec 12, 2018 at 09:34, Björn Sozumschein  wrote:

> Hi,
>
> that's a really good question! I cannot understand that also, as the
> comments are not very useful unfortunately
> Is it in some way related to issue
> https://redmine.darktable.org/issues/10704
>
>
It is possible that it is related to this issue indeed, thanks for
spotting that!

rawfiner


> best,
> Bjoern
>
> On Thu, Dec 6, 2018 at 16:05, rawfiner  wrote:
>
>> Hi,
>>
>> Looking at the code of denoise profile, I noticed a few things that seems
>> strange to me concerning the anscombe transform.
>>
>> For wavelet codes, there is a 2x multiplier on B and R channels, while it
>> is not the case for the anscombe transform of non local means:
>>   const float wb[3] = { // twice as many samples in green channel:
>>     2.0f * piece->pipe->dsc.processed_maximum[0] * d->strength * (in_scale * in_scale),
>>     piece->pipe->dsc.processed_maximum[1] * d->strength * (in_scale * in_scale),
>>     2.0f * piece->pipe->dsc.processed_maximum[2] * d->strength * (in_scale * in_scale)
>>
>> Why is there this multiplier? I understand from the comment that it is
>> related to the fact that we have 2 times more green pixels than R or B
>> pixels on a bayer sensor (note that this is not perfectly valid on xtrans
>> sensor). Yet, I do not see the link between this, and the distribution of
>> the poisson noise, and thus of the anscombe transform to be done.
>> In addition, I do not understand why this multiplier is only here in the
>> case of the wavelets process, and not here in the case of the nlmeans
>> process.
>>
>> The second thing I noticed, is that the "processed_maximum" are all equal
>> if highlight reconstruction is activated. Basically, they are equal to the
>> maximum multiplier of the white balance. Thus, the anscombe transform is
>> the same for R and B for instance, even though one may be much more
>> "amplified" than the other.
>> If highlight reconstruction is turned off, the processed_maximum values
>> are equal to the white balance multipliers, so we don't get this effect.
>> On images were some white balance multipliers are very different, turning
>> off the highlight reconstruction results in a big change in the denoising
>> (more or less equivalent to a big reduction of the force factor).
>> I guess we should use piece->pipe->dsc.temperature.coeffs instead of
>> piece->pipe->dsc.processed_maximum in this code.
>>
>> Doing this correction will allow to "copy-paste" more reliably the
>> settings from one image to another, even across images that have very
>> different white balance.
>> Otherwise, a setting which works well on a picture with a white balance
>> of (1,1,1) for instance may not work well on a picture with a white balance
>> of (1, 1, 2) for instance.
>> Though, correcting this will break backward compatibility.
>>
>> What do you think about it?
>> Thanks! :-)
>> rawfiner
>>
>>
>> ___
>> darktable developer mailing list to unsubscribe send a mail to
>> darktable-dev+unsubscr...@lists.darktable.org
>>
>




[darktable-dev] Denoise profile's anscombe transform

2018-12-06 Thread rawfiner
Hi,

Looking at the code of denoise profile, I noticed a few things that seem
strange to me concerning the Anscombe transform.

For wavelet codes, there is a 2x multiplier on B and R channels, while it
is not the case for the anscombe transform of non local means:
  const float wb[3] = { // twice as many samples in green channel:
    2.0f * piece->pipe->dsc.processed_maximum[0] * d->strength * (in_scale * in_scale),
    piece->pipe->dsc.processed_maximum[1] * d->strength * (in_scale * in_scale),
    2.0f * piece->pipe->dsc.processed_maximum[2] * d->strength * (in_scale * in_scale)

Why is there this multiplier? I understand from the comment that it is
related to the fact that we have twice as many green pixels as R or B
pixels on a Bayer sensor (note that this is not exactly valid on X-Trans
sensors). Yet, I do not see the link between this and the distribution of
the Poisson noise, and thus the Anscombe transform to be done.
In addition, I do not understand why this multiplier is present only in
the wavelets process and not in the nlmeans process.

The second thing I noticed is that the "processed_maximum" values are all
equal if highlight reconstruction is activated. Basically, they are equal
to the maximum multiplier of the white balance. Thus, the Anscombe
transform is the same for R and B, for instance, even though one may be
much more "amplified" than the other.
If highlight reconstruction is turned off, the processed_maximum values are
equal to the white-balance multipliers, so we don't get this effect.
On images where some white-balance multipliers are very different, turning
off highlight reconstruction results in a big change in the denoising
(more or less equivalent to a big reduction of the force factor).
I guess we should use piece->pipe->dsc.temperature.coeffs instead of
piece->pipe->dsc.processed_maximum in this code.

Doing this correction would allow copy-pasting settings more reliably from
one image to another, even across images that have very different white
balance.
Otherwise, a setting which works well on a picture with a white balance of
(1,1,1), for instance, may not work well on a picture with a white balance
of (1,1,2).
Correcting this will, however, break backward compatibility.

What do you think about it?
Thanks! :-)
rawfiner


Re: [darktable-dev] local contrast -> laplacian filter seems to be broken

2018-11-16 Thread rawfiner
Hi
It seems that Aurélien reported this issue:
https://redmine.darktable.org/issues/12349
rawfiner

On Friday, November 16, 2018, Pascal Obry  wrote:

> Hi Andreas,
>
> > the local contrast laplacian filter doesn't work anymore, it produces
> halos
> > and strange artifacts. Here is just one example image showing the issue:
> >
> > https://xor.cryptomilk.org/darktable/dt_local_contrast_laplacian.jpg
> >
> > Was perfectly fine some days ago.
>
> I cannot reproduce! Do you have this with all pictures or some only?
> What settings you're using? This module has not been changed recently.
> Any other module activated that could create bad interaction?
>
> Thanks,
>
> --
>   Pascal Obry /  Magny Les Hameaux (78)
>
>   The best way to travel is by means of imagination
>
>   http://www.obry.net
>
>   gpg --keyserver keys.gnupg.net --recv-key F949BD3B
>
> 
> ___
> darktable developer mailing list
> to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>
>




Re: Fwd: [darktable-dev] advanced mask adjustments

2018-11-11 Thread rawfiner
Hi Heiko
I do not have very detailed feedback to give for now, as I have only
played with it a little (maybe more later) ;-)
Still, I wanted to say that I am very impressed, as I obtained great
results very easily.
This is awesome!
Thanks a lot!
rawfiner

On Sat, Nov 10, 2018 at 20:07, Heiko Bauke  wrote:

> Hi Björn,
>
> many thanks for your feedback.
>
> On 10.11.18 at 11:24, Björn Sozumschein wrote:
> > I also believe that a proper explanation would prevent confusion
> > regarding the inversion behavior.
> > However, I have concerns with respect to the usability, based on my
> > initial experience:
> > In most cases, I use the masks to apply a module either to my subject or
> > to the background individually.
> > Let's assume, for instance, that there's a portrait shoot where I like
> > to apply a tone curve to the subject and I also want to use color
> > correction on the background.
> > In order to achieve this, I usually create a mask for the subject first,
> > because it is easier and more reliable to create a mask for the subject
> > than for the background.
> > This is due to the fact that, when the colored mask overlay is
> > activated, it seems just easier for human vision to classify whether the
> > subject is covered by the mask than if a mask of the background does not
> > cover parts of the subject.
> > At this step, I can draw a coarse mask and then use feathering to obtain
> > a great result and apply the tone curve.
> > Then, however, in order to perform color correction of the background, I
> > like to reuse the mask for the subject, apply the same contrast and
> > brightness parameters and simply invert it in order to obtain a mask for
> > the background that is complementary to the subject mask.
> > This is not possible with the current implementation, as brightness and
> > contrast have to be adjusted.
> >
> > So, aside from the better conformity with the user's intuitive
> > understanding, maybe inverting the mask at the end of the pipeline would
> > benefit usability.
>
> I completely agree.
>
> > There is a second point I noticed:
> > Especially when using the mask with hair, after proper adjustment of
> > brightness and saturation in order to match the edges well, the mask is
> > rather sharp and thus, for most modules, the edges of small structures
> > as well as soft edges do not look good.
> > I would like to apply a gaussian blur to the mask after feathering.
> > Also, I am not sure whether the brightness and contrast provide a real
> > benefit for the gaussian blur.
> > Hence, I wonder whether it could be useful to not have either gaussian
> > blur or feathering, but simply have the feathering with its options
> > first, followed by a slider for gaussian blur?
>
> There are many possible options for how to integrate the new feathering
> algorithm into the existing mask refinement facilities.  This is the
> reason why I am looking for feedback.  Giving the option to apply both a
> Gaussian filter and a guided filter to the mask is just one possible
> direction to go.  With this option, however, the question of order
> appears.  Which filter comes first, the Gaussian or the guided filter.
> Furthermore, more options and more flexibility require more UI elements
> and add more complexity.  We have to find the right balance.
>
> In addition to the existing integration of the guided filter one might
> give the user the possibility to adjust the following parameters:
>
> * Choose which image is used as a guide to feather the mask, the
> module's input or the module's output (before blending).  Currently it's
> always the input.  In most cases, the feathering result is not
> significantly affected by this choice.  It might be relevant, however,
> for blurring or sharpening modules.
>
> * Allow applying both a Gaussian filter and a guided filter.  Possibly
> even a Gaussian filter before and after the guided filter with different
> parameters.
>
> * One might give the user the ability to determine when the mask
> tone-curve is applied, before or after feathering, before or after
> Gaussian blur etc.  One might even allow to apply several tone-curves at
> different stages.
>
> * One might add further parameters to adjust the mask tone-curve, e.g.,
> white and black points.
>
> I definitely do not want to go into the direction of implementing all
> these options.  I just want to sketch the rich possibilities.  I think
> we have to find a minimalistic solution that keeps complexity low but
> allows flexible mask adjustments.
>
> Currently I think the following approach is reasonable:
>
> * There are two sliders in t

Re: [darktable-dev] Online help → dt crashing

2018-10-29 Thread rawfiner
Thank you.
I think I have all I need to fix this.
Have a good day
rawfiner

On Monday, October 29, 2018, Timur Irikovich Davletshin <
timur.davlets...@gmail.com> wrote:

> English both in OS and dt.
>
> On Mon, 2018-10-29 at 10:34 +0100, rawfiner wrote:
> > I mean, is it in english, or another language? (And same question
> > concerning your operating system)
> > rawfiner
> >
> > On Oct 29, 2018 10:29 AM, "Timur Irikovich Davletshin"  > h...@gmail.com> wrote:
> > > LANG=C.UTF-8
> > >
> > > On Mon, 2018-10-29 at 10:19 +0100, rawfiner wrote:
> > > > Could you also tell me the language you use inside darktable
> > > please?
> > > > Is the language of your system supported by darktable?
> > > > Thanks
> > > > rawfiner
> > > >




Re: [darktable-dev] Online help → dt crashing

2018-10-29 Thread rawfiner
I mean, is it in English, or another language? (And the same question
concerning your operating system.)
rawfiner

On Oct 29, 2018 10:29 AM, "Timur Irikovich Davletshin" <
timur.davlets...@gmail.com> wrote:

> LANG=C.UTF-8
>
> On Mon, 2018-10-29 at 10:19 +0100, rawfiner wrote:
> > Could you also tell me the language you use inside darktable please?
> > Is the language of your system supported by darktable?
> > Thanks
> > rawfiner
> >




Re: [darktable-dev] Online help → dt crashing

2018-10-29 Thread rawfiner
Could you also tell me the language you use inside darktable please?
Is the language of your system supported by darktable?
Thanks
rawfiner

On Monday, October 29, 2018, rawfiner  wrote:

> Ok. Thank you for the updated backtrace, this one has information I can
> use. I will see if I manage to find the source of the bug with this.
> rawfiner
>
> On Oct 29, 2018 10:04 AM, "Timur Irikovich Davletshin" <
> timur.davlets...@gmail.com> wrote:
>
> Actually I'm not building it myself, I use official OBS packages.
>
> I've installed *-dbgsym package, and... and see output in the
> attachment.
>
> I tried to start dt with emptied profile and it works as expected. So
> problem might be in my settings/database/cache somewhere.
>
> Timur.
>
> On Mon, 2018-10-29 at 09:42 +0100, rawfiner wrote:
> > The backtrace has very little information unfortunately.
> > Did you build the code in "release" mode?
> > If so, could you please build the code in "debug" mode instead, and
> > give the backtrace obtained in this mode?
> > Thanks
> > rawfiner
> >
>
>
>




Re: [darktable-dev] Online help → dt crashing

2018-10-29 Thread rawfiner
Ok. Thank you for the updated backtrace, this one has information I can
use. I will see if I manage to find the source of the bug with this.
rawfiner

On Oct 29, 2018 10:04 AM, "Timur Irikovich Davletshin" <
timur.davlets...@gmail.com> wrote:

Actually I'm not building it myself, I use official OBS packages.

I've installed *-dbgsym package, and... and see output in the
attachment.

I tried to start dt with emptied profile and it works as expected. So
problem might be in my settings/database/cache somewhere.

Timur.

On Mon, 2018-10-29 at 09:42 +0100, rawfiner wrote:
> The backtrace has very little information unfortunately.
> Did you build the code in "release" mode?
> If so, could you please build the code in "debug" mode instead, and
> give the backtrace obtained in this mode?
> Thanks
> rawfiner
>




Re: [darktable-dev] Online help → dt crashing

2018-10-29 Thread rawfiner
The backtrace has very little information unfortunately.
Did you build the code in "release" mode?
If so, could you please build the code in "debug" mode instead, and give
the backtrace obtained in this mode?
Thanks
rawfiner

On Oct 28, 2018 9:29 PM, "Timur Irikovich Davletshin" <
timur.davlets...@gmail.com> wrote:

Debian stable, 64 bit, I don't know what items are supplied with online
help (I know that import was the first) but it crashed every time I
clicked (~10 times). I did not clean my profile, just jumped from
2.4.4.

Timur.

On Sun, 2018-10-28 at 21:24 +0100, rawfiner wrote:
> Hello
> Could you give more details about this?
> Does this happen whatever you click on, or does it happen only when
> you click on something that has a help URL?
> Do you have anything special activated (lua scripts, some
> preferences, etc)?
> Any other information that could help me to reproduce?
> Thanks
> rawfiner
>
> On Sunday, October 28, 2018, Timur Irikovich Davletshin  s...@gmail.com> wrote:
> > Hello!
> >
> > I don't know is it just me or dt's fault but when I click online
> > help
> > icon and then click something in dt (e.g. import tab) I get crash.
> >
> > Timur.
> >
> > P.S. Crash report attached.
>
>




Re: [darktable-dev] Online help → dt crashing

2018-10-28 Thread rawfiner
Hello
Could you give more details about this?
Does this happen whatever you click on, or does it happen only when you
click on something that has a help URL?
Do you have anything special activated (lua scripts, some preferences, etc)?
Any other information that could help me to reproduce?
Thanks
rawfiner

On Sunday, October 28, 2018, Timur Irikovich Davletshin <
timur.davlets...@gmail.com> wrote:

> Hello!
>
> I don't know is it just me or dt's fault but when I click online help
> icon and then click something in dt (e.g. import tab) I get crash.
>
> Timur.
>
> P.S. Crash report attached.




Re: [darktable-dev] error on compiling dt

2018-10-28 Thread rawfiner
On Oct 28, 2018 1:58 PM, "Andreas Schneider"  wrote:

On Sunday, 28 October 2018 13:54:01 CET rawfiner wrote:
> Hello
>
> I see it is code from one of my pull requests, though it compiled well
> on my computer.
> I learned afterwards that we should use gboolean instead of bool.
> I will correct this.
> Sorry for the mistake

I've already created https://github.com/darktable-org/darktable/pull/1780


Perfect! Thanks a lot :-)

rawfiner


-- 
Andreas Schneider a...@cryptomilk.org
GPG-ID: 8DFF53E18F2ABC8D8F3C92237EE0FC4DCC014E3D




Re: [darktable-dev] error on compiling dt

2018-10-28 Thread rawfiner
Hello

I see it is code from one of my pull requests, though it compiled well on
my computer.
I learned afterwards that we should use gboolean instead of bool.
I will correct this.
Sorry for the mistake

rawfiner

On Oct 28, 2018 1:45 PM, "André Felipe Carvalho" 
wrote:

Hello,
This morning I am not able to compile dt (latest build). What should I do?
Have a very good Sunday ;-) This can surely wait ;-)

The error is:

[ 22%] Building C object src/CMakeFiles/lib_darktable.dir/common/colorspaces.c.o
In file included from /home/andre/compilar/darktable/src/common/collection.c:26:0:
/home/andre/compilar/darktable/src/control/control.h:149:3: error: unknown type name ‘bool’
   bool lock_cursor_shape;
   ^
src/CMakeFiles/lib_darktable.dir/build.make:244: recipe for target 'src/CMakeFiles/lib_darktable.dir/common/collection.c.o' failed
make[2]: *** [src/CMakeFiles/lib_darktable.dir/common/collection.c.o] Error 1
make[2]: *** Waiting for unfinished jobs....
In file included from /home/andre/compilar/darktable/src/bauhaus/bauhaus.h:23:0,
                 from /home/andre/compilar/darktable/src/bauhaus/bauhaus.c:20:
/home/andre/compilar/darktable/src/control/control.h:149:3: error: unknown type name ‘bool’
   bool lock_cursor_shape;
   ^
[ 22%] Building C object src/CMakeFiles/lib_darktable.dir/common/curve_tools.c.o
src/CMakeFiles/lib_darktable.dir/build.make:124: recipe for target 'src/CMakeFiles/lib_darktable.dir/bauhaus/bauhaus.c.o' failed
make[2]: *** [src/CMakeFiles/lib_darktable.dir/bauhaus/bauhaus.c.o] Error 1
[ 22%] Built target darktable-rs-identify
In file included from /home/andre/compilar/darktable/src/common/colorlabels.c:24:0:
/home/andre/compilar/darktable/src/control/control.h:149:3: error: unknown type name ‘bool’
   bool lock_cursor_shape;
   ^
src/CMakeFiles/lib_darktable.dir/build.make:292: recipe for target 'src/CMakeFiles/lib_darktable.dir/common/colorlabels.c.o' failed
make[2]: *** [src/CMakeFiles/lib_darktable.dir/common/colorlabels.c.o] Error 1
In file included from /home/andre/compilar/darktable/src/common/colorspaces.c:26:0:
/home/andre/compilar/darktable/src/control/control.h:149:3: error: unknown type name ‘bool’
   bool lock_cursor_shape;
   ^
src/CMakeFiles/lib_darktable.dir/build.make:316: recipe for target 'src/CMakeFiles/lib_darktable.dir/common/colorspaces.c.o' failed
make[2]: *** [src/CMakeFiles/lib_darktable.dir/common/colorspaces.c.o] Error 1
CMakeFiles/Makefile2:1432: recipe for target 'src/CMakeFiles/lib_darktable.dir/all' failed
make[1]: *** [src/CMakeFiles/lib_darktable.dir/all] Error 2
Makefile:149: recipe for target 'all' failed
make: *** [all] Error 2


-- 
André Felipe





[darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-10-16 Thread rawfiner
Hello
Here is a little update on what I have done recently on denoising.
My work on a "new" raw denoise module is still ongoing, but it takes a lot
of time (as expected), as I have to try various things before finding ones
that work. So no news on that side.

Yet, I found a quicker and easier way to improve darktable's denoising
capabilities.
In fact, it does not even change the algorithm!

The idea is to provide an equalizer-like GUI for all wavelet-based modules,
and to allow the user to change the force for the red, green and blue
channels, as these channels usually suffer from different levels of noise
(especially after demosaic, where the red and blue channels have coarser
noise because errors propagate during demosaicing).
This way, the user can reduce coarse-grain noise while keeping fine-grain
noise if they want, or whatever fits their needs.

I have implemented this idea both for denoiseprofile and rawdenoise
modules, and I have just opened two pull requests for them:
https://github.com/darktable-org/darktable/pull/1752
https://github.com/darktable-org/darktable/pull/1753

Using these updated GUIs, I personally found that the existing algorithm
had plenty of hidden power (especially on high-ISO images, where
coarse-grain noise is more prominent)!
I hope you will enjoy this as much as I do.
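To make the idea concrete, here is a hedged sketch (hypothetical names and values, not darktable's actual implementation) of what a per-channel, per-scale force amounts to in a wavelet denoiser: the usual soft-thresholding, with the threshold modulated by an equalizer-like curve for each of R, G and B:

```python
# Soft-threshold one band of wavelet detail coefficients; 'force' holds
# hypothetical per-channel, per-scale multipliers a user could set so
# that coarse scales of R/B are denoised harder than G.
def soft_threshold(coef, thr):
    mag = abs(coef) - thr
    if mag < 0.0:
        return 0.0
    return mag if coef > 0.0 else -mag

force = {
    "R": [0.5, 1.0, 2.0],  # fine -> coarse scales
    "G": [0.3, 0.5, 1.0],  # green is cleaner after demosaic
    "B": [0.5, 1.0, 2.0],
}

def denoise_band(coeffs, channel, scale, base_sigma):
    thr = base_sigma * force[channel][scale]
    return [soft_threshold(c, thr) for c in coeffs]

# Coarse red scale: a threshold of 2.0 kills small coefficients entirely.
assert denoise_band([3.0, -0.2, 1.5], "R", 2, 1.0) == [1.0, 0.0, 0.0]
```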

rawfiner


On Sun, Jul 22, 2018 at 20:50, rawfiner  wrote:

> Thank you Aurélien, that is a great answer.
> I think I will try to incorporate this in the weight computation of non
> local means to use only "non noisy" pixels in the computations of the
> weights, in addition to trying to use this as a (parametric?) mask.
>
> rawfiner
>
>
> On Saturday, July 21, 2018, Aurélien Pierre  wrote:
>
>> The TV is the norm (L1, L2, or something else) of the gradient along the
>> dimensions. Here, we have TV = || du/dx ; du/dy||. The discretized gradient
>> of a function u along a direction x is a simple forward or backward finite
>> difference such as du/dx = [u(i) - u(i-1)] / [x(i) - x(i-1)] (backward) or
>> du/dx = [u(i +1) - u(i)] / [x(i+1) - x(i)] (forward).
>>
>> For contiguous pixels on main directions, the distance between 2 pixels
>> is x(i) - x(i-1) = 1 (I don't divide explicitly by 1 in the code though),
>> on diagonals it is sqrt(2) (a result of Pythagoras' theorem). Hence the
>> division by sqrt(2).
>>
>> Now, imagine a 2D problem where we have an inconsistent pixel in a smooth
>> sub-area of a picture with 0 all around:
>>
>> [0 ; 0 ; 0]
>> [0 ; 1 ; 0]
>> [0 ; 0 ; 0]
>>
>> That is the matrix of a 2D Dirac delta function (impulse). Computing the
>> TV L1 in forward difference leads to :
>>
>> ([0.0 ; 0.5 ; 0.0]
>>  [0.5 ; 1.0 ; 0.0]
>>  [0.0 ; 0.0 ; 0.0])*2
>>
>> Doing the same backwards leads to :
>>
>> ([0.0 ; 0.0 ; 0.0]
>>  [0.0 ; 1.0 ; 0.5]
>>  [0.0 ; 0.5 ; 0.0])*2
>>
>> So what happens is in both cases, the immediate neighbours of the noisy
>> pixel are detected as somewhat noisy as well because of the first order
>> discretization, but they are not noise. That's a limit of the discrete
>> computation. Also the derivative of a Dirac delta function is supposed to
>> be an even function, obviously that property is broken here. If you compute
>> the L2 norm of these arrays, you get 1.22. A delta function should have a
>> L2 norm = 1. Actually, the best approximation of the TV of the delta
>> function would be the original delta function itself.
>>
>> If we average both TV norms, we get :
>>
>> ([0.00 ; 0.25 ; 0.00]
>>   [0.25 ; 1.00 ; 0.25]
>>   [0.00 ; 0.25 ; 0.00])*4
>>
>> So, now, we have an error on more neighbours, but smaller in magnitude
>> and the TV map is now even. Also, the L2 norm of the array is now 1.12,
>> which is closer to 1. So we have a better approximation of the delta
>> derivative.
>>
>> With that in mind, on the 8 neighbours variant, we also compute the TV L1
>> norms (average of backward and forward) on diagonals, meaning :
>>
>> ([0.25 ; 0.00 ; 0.25]
>>   [0.00 ; 1.00 ; 0.00]
>>   [0.25 ; 0.00 ; 0.25])*4/sqrt(2)
>>
>> And… you are right, there is a problem of normalization because we should
>> divide by 4*(1 + 1/sqrt(2)) instead of 4. Then, our TV L1 map will be :
>>
>> [0.1036 ; 0.1464 ; 0.1036]
>> [0.1464 ; 1. ; 0.1464]
>> [0.1036 ; 0.1464 ; 0.1036]
>>
>> That's an even better approximation to the Dirac delta. Now, the L2 norm
>> is 1.06. And now that I see it, that could lead to a separable kernel to
>> compute the TV L1 with two 1D convolutions…
>>
>> I didn't plan on going full math her

Re: [darktable-dev] Darkroom UI refactoring

2018-10-12 Thread rawfiner
Hi

I strongly agree that the order of modules should be more clear in the UI,
and that the UI should guide the user more. I like the suggestion Aurélien
made for this.

Trying to follow the module order in the pipeline gives the best
performance, as computations are done once.
In addition, not following the module order turns into a nightmare if you
use parametric masks: as soon as you modify a module which is earlier in
the pipeline, you have to redo your parametric masks.
Currently, I handle that by learning by heart which module comes after
which module, which is not an ideal solution.

Cheers,
rawfiner

Le jeudi 11 octobre 2018, Jason Polak  a écrit :

> I have given a lot of thought about your idea, which is obviously very
> well thought out. Thanks for having this discussion; at the very least,
> it is making me examine editing carefully. Of course I am not a dev so I
> don't make any decisions for darktable, so feel free to ignore this but
> I have some thoughts on your process:
>
> 1. I don't think denoising should happen before sharpening/local
> constrast. Here's why: I take a lot of noisy shots (typically an APS-C
> camera at ISO3200 in the forest will make some noise). I have tried an
> experiment of denoising before the sharpening. What happens here is that
> if I denoise first, the later sharpening stage sometimes can enhance the
> noise, or make the OOF area worse. It seems much more logical to me to
> do the sharpening/equalizer enhancement before denoising. Only then can
> I see what kind and how much denoising to apply. Moreover, your
> suggested experiment with local contrast and denoising does not seem to
> have much effect in a real-life scenario.
>
> 2.  However, to your credit, the use of the color balance module as you
> suggested DOES work pretty well for portraits.
>
> To me it seems then that the fundamental problem then is: what is the
> most efficient way to process a photo so that it goes from the flat Raw
> image to something with the correct dynamic range and correct colours at
> the same time with a minimal amount of editing? Is this the same for all
> types of shots? And will changing the user interface help with this
> process? Well I'm not sure...but at least I learned something new in
> this discussion :)
>
> Jason
>
> On 2018-10-09 07:17 PM, Aurélien Pierre wrote:
> > What I call "signal-processing" here are all the module intended to
> > clean the data and turn an (always) damaged picture into what it is
> > supposed to look like in theory. That is :
> >
> >  1. reconstructing missing parts (clipped highlights)
> >  2. recovering the dynamic range (tonemapping)
> >  3. reconstructing the damaged parts (denoising)
> >  4. reverting the optical artefacts (vignette, CA, distortion),
> >  5. reverting the color inaccuracies (white balance and ICC color
> > correction).
> >
> > You think you can waltz around modules and do the retouch in the order
> > you like. Well, you can, but that is asking for trouble.
> >
> > Take 2 examples :
> >
> > 1. Open a noisy image, turn on the laplacian local contrast, save a
> > snapshot, then enable a heavy denoising, and compare the 2 outputs : in
> > some case, the local contrast output will look harsher with denoising.
> > That means you should fix the noise before setting the local contrast.
> >
> > 2. On a portrait photo done with a camera for which you have an enhanced
> > matrix (basecurve = OFF), tweak the exposure until you get a nice
> > contrast (Lmax = 100, Lmin = 0). Then, in the color balance, tweak the
> > gamma factor to get the L_average on the face = 50. Save the snapshot.
> > Now, disable the color balance, tweak the exposure again to get a dull
> > image (fix Lmax = 96, Lmin = 18). Then, in the color balance, tweak the
> > gain factor to get Lmax = 100, the lift factor to get Lmin = 0 and the
> > gamma factor to get L_average on the face = 50. Which skin tones look
> > the more natural and which has less out-of-gamut issues ? (spoiler alert
> > : #2)
> >
> > Nobody will think of crushing the contrast first in the exposure module,
> > then bring it up later in the pixelpipe, in order to get better colors,
> > until he has seen the math going on inside… In fact, the autoexposure
> > tool even lures you into doing the opposite.
> >
> > Because darktable is modular by nature, modules are fully independent
> > and don't share data, but that leads to a fair amount of inconsistency.
> > You can tweak the contrast and lightness in 8 different modules
> > (exposure, contrast/saturation/lightness, tone curve, base curve, zone
> > system, color balance, unbreak input profile, levels)

Re: [darktable-dev] roi_in, piece in iop module

2018-10-04 Thread rawfiner
Hi
I asked similar questions some time ago, please find below the answers
that Johannes gave me.
Regards,
rawfiner

About scales:

> What is the difference between:
> -roi_out->scale

that is the current output region of interest scale, relating what
would happen when you had processed the full resolution image vs. what
is actually being processed in the pipe now. sorry i forget which way
around scale = a/b or = b/a it is.

> -piece->iscale

this is the input scale factor, i.e. how much did the pipeline where
the current piece belongs to downsize the real input (raw image)
before throwing it into the pipeline. this should be 1.0 for the full
and export pipelines and may be different for the preview. again i
forgot whether this should be < or > 1 in that case.

> -self->dev->preview_pipe->iscale

if the piece is a piece of the preview pipeline, these last two should
be the same (the piece copies iscale and iwidth/iheight for
convenience. also there were places at least in the past where you
would have the piece but not the whole pipe)

> And the difference between:
> -roi_out->width

the size of the output buffer that is passed to process() for
instance. note that these may be different for each module, because
some require some padding or need to run on full res bayer data, do
distortions etc.

> -piece->iwidth

see above. width of the prescaled input. should be == raw resolution
unless you're running a preview pipeline.


> -self->dev->preview_pipe->processed_width

that's the output size of the fully processed image, i.e. what would
be stored on disk if you were running an export pipeline. this is
needed up front to get scale to fit work correctly for instance. it's
computed by a chain of modify_roi_out() which is run in a first pass
with the full input buffer resolution and full region of interest.

> -self->dev->width

that is the size of the develop view. i.e. if you need to determine
the scale factor for the pipeline, you would relate this view window
size to the processed_width above, and then run the pipeline.
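That last relation can be sketched as follows (names are illustrative, not actual darktable identifiers): the fit-to-screen scale is the view window size over the fully processed size, on the tighter axis.

```c
/* Illustrative: zoom scale relating the develop view window to the fully
 * processed output size; the whole image must fit inside the window, so
 * take the smaller of the two per-axis ratios. */
static float scale_to_fit(int processed_width, int processed_height,
                          int view_width, int view_height)
{
  const float sx = (float)view_width / (float)processed_width;
  const float sy = (float)view_height / (float)processed_height;
  return sx < sy ? sx : sy;
}
```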


About roi:

1) a pass of modify_roi_out() from raw to screen output is performed.
this is done full resolution, full region of interest, to determine
the hypothetical size of the output image when processed in full.

2) given the size and region of interest of the view window, the
develop module requests a certain input to be able to render the
output. this is done by calling a chain of modify_roi_in() from view
window back to raw image. this is only done on the exact pixels that
are needed on screen right now, i.e. scaled and cropped.

3) process() is called with about exactly the ROI that were computed
in pass number 2. i think there are some minor sanity checks done, so
you shouldn't rely on what you asked for in modify_roi_in but use what
you get in process().



On Wed, Oct 3, 2018 at 10:58,  wrote:

> Hello!
>
> I want to rewrite the module "shadhi.c". In every iop module are described
> function process() with input arguments "struct dt_dev_pixelpipe_iop_t
> *piece" and "const struct dt_iop_roi_t *const roi_in".
>
> Could you explain pls how variables roi_in and piece are chosen? These
> values are constantly changing every time. I can't find it in the code.
>
> What does "roi_in->scale" and "piece->iscale" mean?
>
> Thank you.
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org



Re: [darktable-dev] Code reviews requested : Profile gamma correction

2018-09-12 Thread rawfiner
Sorry for the spelling mistake, didn't mean "Fist" but "First"
My apologies
cheers,
rawfiner

On Wed, Sep 12, 2018 at 18:59, rawfiner  wrote:

> Hi Aurélien
>
> Fist, thank you for showing me this interesting video.
> I just compiled your branch.
>
> My first question is, is it possible to find shift power slope values that
> reproduce the result we had before with linear and gamma?
> If yes, I think you should compute the new parameter values from the old
> ones.
> You can take a look at function "legacy_param" in denoiseprofile.c to see
> an example.
> If that is not possible, we could imagine to have a "mode" selector in the
> GUI to switch between "linear and gamma" and "shift power slope".
>
> Considering opencl, I cannot help you here as I have never coded in opencl
> and I do not have a GPU.
> Yet, even without opencl, code seems already quite fast.
>
> Considering the code itself, my only remarks are for this line:
>   for(size_t k = 1; k < (size_t)ch * roi_out->width * roi_out->height;
> k++)
> First, is there a reason why you are using a size_t type? int or unsigned
> would be fine I think, and you wouldn't need a cast.
> Second, in C, array indexes start at 0, so the red value of the pixel at
> the top left corner is not processed by your loop (you can see it on
> the exported image)
>
> So I guess you want the for loop to be:
>  for(unsigned k = 0; k < ch * roi_out->width * roi_out->height; k++)
>
> I know that C is hard to learn, so congratulations Aurélien! :-)
>
> rawfiner
>
>
> On Wed, Sep 12, 2018 at 14:46, Aurélien Pierre  wrote:
>
>> Hi everyone,
>>
>> when working with color profiles, the main historic issue was the
>> non-linearity of the sensors/films. Now, it is rather that the color
>> profile is performed on a chart having 6-7 EV of dynamic range while modern
>> cameras have 12-15 EV. Simple gamma corrections (invented for CRT screens)
>> don't work anymore, and video editors have invented a new standard able to
>> remap the dynamic range and to fix the mid-tones at once :
>> https://www.youtube.com/watch?v=kVKnhJN-BrQ=7=PLa1F2ddGya_9XER0wnFS6Mgnp3T-hgSZO
>>
>> I have embedded the formula used in Blender (
>> https://en.wikipedia.org/wiki/ASC_CDL) into the profile correction
>> module of dt (using the same parameters for each RGB channel). The result
>> looks more natural than the current version, without gamut or saturation
>> issues in the highlights. It also speeds up the workflow, since all that is
>> needed is this module to adjust the dynamic range, then a tone curve in
>> auto RGB mode shaped as a stiff S to bring back the contrast. The result is
>> much better than with the tonemapping modules, with less color fixes.
>>
>> I'm a newbie at C and it's the first time I've achieved something inside dt,
>> so I could use some reviews on my code and also some help on the OpenCL
>> part (the kernel does not load, I don't know why) :
>> https://github.com/aurelienpierre/darktable/tree/color-grading
>>
>> Thanks a lot !
>>
>> Aurélien.
>>
>> ___
>> darktable developer mailing list to unsubscribe send a mail to
>> darktable-dev+unsubscr...@lists.darktable.org
>>
>




Re: [darktable-dev] Code reviews requested : Profile gamma correction

2018-09-12 Thread rawfiner
Hi Aurélien

Fist, thank you for showing me this interesting video.
I just compiled your branch.

My first question is, is it possible to find shift power slope values that
reproduce the result we had before with linear and gamma?
If yes, I think you should compute the new parameter values from the old
ones.
You can take a look at function "legacy_param" in denoiseprofile.c to see
an example.
If that is not possible, we could imagine to have a "mode" selector in the
GUI to switch between "linear and gamma" and "shift power slope".

Considering opencl, I cannot help you here as I have never coded in opencl
and I do not have a GPU.
Yet, even without opencl, code seems already quite fast.

Considering the code itself, my only remarks are for this line:
  for(size_t k = 1; k < (size_t)ch * roi_out->width * roi_out->height;
k++)
First, is there a reason why you are using a size_t type? int or unsigned
would be fine I think, and you wouldn't need a cast.
Second, in C, array indexes start at 0, so the red value of the pixel at
the top left corner is not processed by your loop (you can see it on
the exported image)

So I guess you want the for loop to be:
 for(unsigned k = 0; k < ch * roi_out->width * roi_out->height; k++)

I know that C is hard to learn, so congratulations Aurélien! :-)

rawfiner


On Wed, Sep 12, 2018 at 14:46, Aurélien Pierre  wrote:

> Hi everyone,
>
> when working with color profiles, the main historic issue was the
> non-linearity of the sensors/films. Now, it is rather that the color
> profile is performed on a chart having 6-7 EV of dynamic range while modern
> cameras have 12-15 EV. Simple gamma corrections (invented for CRT screens)
> don't work anymore, and video editors have invented a new standard able to
> remap the dynamic range and to fix the mid-tones at once :
> https://www.youtube.com/watch?v=kVKnhJN-BrQ=7=PLa1F2ddGya_9XER0wnFS6Mgnp3T-hgSZO
>
> I have embedded the formula used in Blender (
> https://en.wikipedia.org/wiki/ASC_CDL) into the profile correction module
> of dt (using the same parameters for each RGB channel). The result looks
> more natural than the current version, without gamut or saturation issues
> in the highlights. It also speeds up the workflow, since all that is needed is
> this module to adjust the dynamic range, then a tone curve in auto RGB mode
> shaped as a stiff S to bring back the contrast. The result is much better
> than with the tonemapping modules, with less color fixes.
>
> I'm a newbie at C and it's the first time I've achieved something inside dt,
> so I could use some reviews on my code and also some help on the OpenCL
> part (the kernel does not load, I don't know why) :
> https://github.com/aurelienpierre/darktable/tree/color-grading
>
> Thanks a lot !
>
> Aurélien.
>
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>




Re: [darktable-dev] An algorithm to downscale raw data BEFORE demosaic by whatever scale factor

2018-09-11 Thread rawfiner
hi,

On Mon, Sep 10, 2018 at 14:47, johannes hanika  wrote:

> hi!
>
> nice, your downscaled images look impeccable :) maybe they jump up or
> down by one pixel or so?
>

I think this is partly due to rounding errors, as we multiply the width by a
float factor to find the new width.
Also, as we get fewer and fewer lines and columns, some details may appear
slightly shifted, due to the algorithm itself.


>
> do you have any example result images for the denoising already? are
> you running before or after black point subtraction? if you can, i
> think you should run before.
>

For the moment, I develop the new algorithm as part of the raw denoise
module, so I guess it runs after black point subtraction.
I will consider making it run before.

You can find two before/after examples here:
https://drive.google.com/open?id=18UU3AAHqyE0oE_zAvTdQbFL2i7zBAgCV
Images that end with _01 are the "after".
Note that only raw non local means was used to denoise these images,
nothing else. It reduces both luminance and chrominance noise.

Yet, this is only a preliminary result, with a basic nlm algorithm, and I
have many ideas for improvements:
- use anscombe transformation (and maybe improve it: see
https://ieeexplore.ieee.org/document/7812567/)
- use downscaling and upscaling to get a multiscale denoising. This should
relax the constraints on the neighborhood parameter of the nlm algorithm,
and would allow better denoising for very noisy images
- adapt total variation to raw data so that total variation could be used
to get a better patch comparison (MSE+TV), and to avoid denoising pixels
which are not noisy
- use MSE+TV only to get a list of 8-16 most similar patches, and use "two
direction non local model" with these patches (
https://ieeexplore.ieee.org/abstract/document/6307863/). This should
completely solve the "rare patch" issue that we encounter with nlm.
- and of course, make this run as fast as possible for preview!

So basically, a lot of work remaining, but very exciting work ;-)

cheers,
rawfiner


>
> exciting stuff :)
>
> cheers,
>  jo
>
>
> On Wed, Sep 5, 2018 at 9:34 PM rawfiner  wrote:
> >
> > Hi!
> > Some of you may know that I am working on a raw denoising algorithm.
> > One of the hard things was that prior to demosaic, the algorithms are
> computed on unscaled data, while after demosaic the algorithms can compute
> a preview on a downscaled image, which is easier in terms of speed.
> >
> > So I tried to downscale the raw data, to be able to use that for preview.
> > There were 2 existing algorithms in darktable to do that (thank you
> Johannes for showing me these algorithms!):
> dt_iop_clip_and_zoom_mosaic_half_size_f for bayer files and
> dt_iop_clip_and_zoom_mosaic_third_size_xtrans_f for xtrans files
> > My problem was that they only allow to downscale the images by a fixed
> factor (2 in the case of bayer, and 3 in the case of xtrans).
> >
> > So I designed an algorithm that works on both bayer and xtrans, and that
> can be used with any scale factor.
> > The source code is available here:
> > https://github.com/rawfiner/darktable/tree/rawfiner-fix-downscale-algo
> > (commit 9992cf66fc8510f637e5e5f8ae26c49c2cba2eaa)
> > The graphic interface in raw denoise module is just here to be able to
> see the effect of the algorithm in "fit to screen" zoom mode, and to
> activate or desactivate the algorithm. It allows to compare what we get by
> downscaling the picture before demosaic to what we would obtain without
> this downscaling.
> > The first slider controls the downscaling factor (0.25 means that width
> is multiplied by 0.25, thus divided by 4)
> > The second slider is useless for now.
> >
> > I made a quick video to compare the algorithm with the existing ones,
> and to explain how the algorithm work:
> > https://youtu.be/oE38w1YOhNQ
> > Sorry for the slow speed of speech, I am not yet used to making videos in
> English ;-)
> >
> > You can also find some examples here:
> > https://drive.google.com/open?id=19xveG0EeF2RUjlRjDTs1AA9f-TnFZe2W
> >
> > cheers,
> > rawfiner
> >
> >
> >
> ___
> darktable developer mailing list to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
> ___
> darktable developer mailing list
> to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>
>




Re: [darktable-dev] How do you feel about code bounties ?

2018-09-08 Thread rawfiner
Hi everyone
In my opinion, it would be nice to be able to fund work on such small
feature requests.
Yet, I think that it should be done in a particular way, to ensure that:
- one can only fund development for bugs/features that were approved by
the main developers or by the community first
- the developed code is of high quality (for instance, funded developers
may be paid after the code review, to ensure some quality of code)

rawfiner

On Saturday, September 8, 2018, Andreas Schneider  wrote:

> On Saturday, 8 September 2018 07:24:51 CEST Aurélien Pierre wrote:
> > Hi everyone,
>
> Hi Aurélien,
>
> > Shouldn't we merge Github issues and Redmine bugs/FR, and promote
> > bountysource ?
>
> It doesn't really make sense to start with bounties while github merge
> requests
> are bit rotting. Even small changes which are easy to review ...
>
>
> Andreas
>
>
> 
> ___
> darktable developer mailing list
> to unsubscribe send a mail to
> darktable-dev+unsubscr...@lists.darktable.org
>
>




[darktable-dev] An algorithm to downscale raw data BEFORE demosaic by whatever scale factor

2018-09-05 Thread rawfiner
Hi!
Some of you may know that I am working on a raw denoising algorithm.
One of the hard things was that prior to demosaic, the algorithms are
computed on unscaled data, while after demosaic the algorithms can compute
a preview on a downscaled image, which is easier in terms of speed.

So I tried to downscale the raw data, to be able to use that for preview.
There were 2 existing algorithms in darktable to do that (thank you Johannes
for showing me these algorithms!): dt_iop_clip_and_zoom_mosaic_half_size_f
for bayer files and dt_iop_clip_and_zoom_mosaic_third_size_xtrans_f for
xtrans files
My problem was that they only allow to downscale the images by a fixed
factor (2 in the case of bayer, and 3 in the case of xtrans).

So I designed an algorithm that works on both bayer and xtrans, and that
can be used with any scale factor.
The source code is available here:
https://github.com/rawfiner/darktable/tree/rawfiner-fix-downscale-algo
(commit 9992cf66fc8510f637e5e5f8ae26c49c2cba2eaa)
The graphic interface in raw denoise module is just here to be able to see
the effect of the algorithm in "fit to screen" zoom mode, and to activate
or deactivate the algorithm. It allows comparing what we get by
downscaling the picture before demosaic to what we would obtain without
this downscaling.
The first slider controls the downscaling factor (0.25 means that width is
multiplied by 0.25, thus divided by 4)
The second slider is useless for now.

I made a quick video to compare the algorithm with the existing ones, and
to explain how the algorithm works:
https://youtu.be/oE38w1YOhNQ
Sorry for the slow speed of speech, I am not yet used to making videos in
English ;-)

You can also find some examples here:
https://drive.google.com/open?id=19xveG0EeF2RUjlRjDTs1AA9f-TnFZe2W

cheers,
rawfiner


[darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-07-22 Thread rawfiner
Thank you Aurélien, that is a great answer.
I think I will try to incorporate this in the weight computation of non
local means to use only "non noisy" pixels in the computations of the
weights, in addition to trying to use this as a (parametric?) mask.

rawfiner


On Saturday, July 21, 2018, Aurélien Pierre  wrote:

> The TV is the norm (L1, L2, or something else) of the gradient along the
> dimensions. Here, we have TV = || du/dx ; du/dy||. The discretized gradient
> of a function u along a direction x is a simple forward or backward finite
> difference such as du/dx = [u(i) - u(i-1)] / [x(i) - x(i-1)] (backward) or
> du/dx = [u(i +1) - u(i)] / [x(i+1) - x(i)] (forward).
>
> For contiguous pixels on main directions, the distance between 2 pixels is
> x(i) - x(i-1) = 1 (I don't divide explicitly by 1 in the code though), on
> diagonals it is sqrt(2) (a result of Pythagoras' theorem). Hence the
> division by sqrt(2).
>
> Now, imagine a 2D problem where we have an inconsistent pixel in a smooth
> sub-area of a picture with 0 all around:
>
> [0 ; 0 ; 0]
> [0 ; 1 ; 0]
> [0 ; 0 ; 0]
>
> That is the matrix of a 2D Dirac delta function (impulse). Computing the
> TV L1 in forward difference leads to :
>
> ([0.0 ; 0.5 ; 0.0]
>  [0.5 ; 1.0 ; 0.0]
>  [0.0 ; 0.0 ; 0.0])*2
>
> Doing the same backwards leads to :
>
> ([0.0 ; 0.0 ; 0.0]
>  [0.0 ; 1.0 ; 0.5]
>  [0.0 ; 0.5 ; 0.0])*2
>
> So what happens is in both cases, the immediate neighbours of the noisy
> pixel are detected as somewhat noisy as well because of the first order
> discretization, but they are not noise. That's a limit of the discrete
> computation. Also the derivative of a Dirac delta function is supposed to
> be an even function, obviously that property is broken here. If you compute
> the L2 norm of these arrays, you get 1.22. A delta function should have a
> L2 norm = 1. Actually, the best approximation of the TV of the delta
> function would be the original delta function itself.
>
> If we average both TV norms, we get :
>
> ([0.00 ; 0.25 ; 0.00]
>   [0.25 ; 1.00 ; 0.25]
>   [0.00 ; 0.25 ; 0.00])*4
>
> So, now, we have an error on more neighbours, but smaller in magnitude and
> the TV map is now even. Also, the L2 norm of the array is now 1.12, which
> is closer to 1. So we have a better approximation of the delta derivative.
>
> With that in mind, on the 8 neighbours variant, we also compute the TV L1
> norms (average of backward and forward) on diagonals, meaning :
>
> ([0.25 ; 0.00 ; 0.25]
>   [0.00 ; 1.00 ; 0.00]
>   [0.25 ; 0.00 ; 0.25])*4/sqrt(2)
>
> And… you are right, there is a problem of normalization because we should
> divide by 4*(1 + 1/sqrt(2)) instead of 4. Then, our TV L1 map will be :
>
> [0.1036 ; 0.1464 ; 0.1036]
> [0.1464 ; 1. ; 0.1464]
> [0.1036 ; 0.1464 ; 0.1036]
>
> That's an even better approximation to the Dirac delta. Now, the L2 norm
> is 1.06. And now that I see it, that could lead to a separable kernel to
> compute the TV L1 with two 1D convolutions…
>
> I didn't plan on going full math here, but, here we are…
>
> I will correct my code soon.
>
> On 16/07/2018 at 01:51, rawfiner wrote:
>
> I went through Aurélien's study again
> I wonder why the result of TV is divided by 4 (in case of 8 neighbors, "
> out[i, j, k] /= 4.")
>
> I guess it is kind of a normalisation.
> But as we divided the differences along diagonals by sqrt(2), the maximum
> achievable (supposing the values of the image are in [0,1], thus taking a
> difference of 1 along each direction) are:
> sqrt(1 + 1) + sqrt(1 + 1) + sqrt(1/2+1/2) + sqrt(1/2+1/2) = 2*sqrt(2) + 2
> in case of L2 norm
> 2 + 2 + 2*1/sqrt(2) + 2*1/sqrt(2) = 4 + 2*sqrt(2) in case of L1 norm
>
> So why this 4 and not a 4.83 and a 6.83 for L2 norm and L1 norm
> respectively?
> Or is it just a division by the number of directions? (if so, why are the
> diagonal differences divided by sqrt(2)?)
>
> Thanks!
>
> rawfiner
>
>
> 2018-07-02 21:34 GMT+02:00 rawfiner :
>
> Thank you for all these explanations!
> Seems promising to me.
>
> Cheers,
>
> rawfiner
>
> 2018-07-01 21:26 GMT+02:00 Aurélien Pierre :
>
> You're welcome ;-)
>
> That's true : the multiplication is equivalent to an "AND" operation, the
> resulting mask has non-zero values where both TV AND Laplacian masks have
> non-zero values, which - from my tests - is where the real noise is.
>
> That is because TV alone is too sensitive : when the image is noisy, it
> works fine, but whenever the image is clean or barely noisy, it detects
> edges as well, giving false positives in the case of noise detection.
>
> The TV × Laplacian is a safety jacket that al

Re: [darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-07-15 Thread rawfiner
I went through Aurélien's study again
I wonder why the result of TV is divided by 4 (in the case of 8 neighbors,
"out[i, j, k] /= 4.")

I guess it is kind of a normalisation.
But as we divided the differences along diagonals by sqrt(2), the maximum
achievable (supposing the values of the image are in [0,1], thus taking a
difference of 1 along each direction) are:
sqrt(1 + 1) + sqrt(1 + 1) + sqrt(1/2+1/2) + sqrt(1/2+1/2) = 2*sqrt(2) + 2
in case of L2 norm
2 + 2 + 2*1/sqrt(2) + 2*1/sqrt(2) = 4 + 2*sqrt(2) in case of L1 norm

So why this 4 and not a 4.83 and a 6.83 for L2 norm and L1 norm
respectively?
Or is it just a division by the number of directions? (if so, why are the
diagonal differences divided by sqrt(2)?)

Thanks!

rawfiner


2018-07-02 21:34 GMT+02:00 rawfiner :

> Thank you for all these explanations!
> Seems promising to me.
>
> Cheers,
>
> rawfiner
>
> 2018-07-01 21:26 GMT+02:00 Aurélien Pierre :
>
>> You're welcome ;-)
>>
>> That's true : the multiplication is equivalent to an "AND" operation, the
>> resulting mask has non-zero values where both TV AND Laplacian masks have
>> non-zero values, which - from my tests - is where the real noise is.
>>
>> That is because TV alone is too sensitive : when the image is noisy, it
>> works fine, but whenever the image is clean or barely noisy, it detects
>> edges as well, giving false positives in the case of noise detection.
>>
>> The TV × Laplacian is a safety jacket that allows the TV to work as
>> expected on noisy images (see the example) but will protect sharp edges on
>> clean images (on the example, the masks barely grabs a few pixels in the
>> in-focus area).
>>
>> I have found that the only way we could overcome the oversensitivity of
>> the TV alone is by setting a window (like a band-pass filter) instead of a
>> threshold (high-pass filter) because, in a noisy picture, depending on the
>> noise level, the TV values of noisy and edgy pixels are very close. From an
>> end-user perspective, this is tricky.
>>
>> Using TV × Laplacian, given that the noise stats should not vary much for
>> a given sensor at a given ISO, lets us confidently set a simple threshold
>> as a factor of the standard deviation. It gives more reproducibility and
>> lets us build presets/styles for a given camera/ISO. Assuming Gaussian
>> noise, if you set your threshold factor to X (which means "unmask
>> everything above the mean (TV × Laplacian) + X standard deviations"), you
>> know beforehand how many high-frequency pixels will be affected, no matter
>> what:
>>
>>- X = -1 =>  84 %,
>>- 0 => 50 %,
>>- 1 =>  16 % ,
>>- 2 =>  2.5 %,
>>- 3 => 0.15 %
>>- …
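Those percentages are just Gaussian upper-tail probabilities; a minimal standard-library sketch reproducing them:

```python
import math

def fraction_above(x):
    """Fraction of a Gaussian lying above mean + x standard deviations."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

for x in (-1, 0, 1, 2, 3):
    print(f"X = {x:+d} -> {100 * fraction_above(x):.2f}%")
# X = -1 -> 84.13%, X = +0 -> 50.00%, X = +1 -> 15.87%,
# X = +2 -> 2.28%, X = +3 -> 0.13%
```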
>>
>> Le 01/07/2018 à 14:13, rawfiner a écrit :
>>
>> Thank you for this study Aurélien
>>
>> As far as I understand, TV and Laplacians are complementary as they
>> detect noise in different regions of the image (noise on sharp edges for
>> Laplacian, noise elsewhere for TV).
>> Though, I do not understand why you multiply the TV and Laplacian results
>> to get the mask.
>> Multiplying them would result in a mask containing non-zero values only
>> for pixels that are detected as noise both by TV and Laplacian.
>> Is there a particular reason for multiplying (or did I misunderstand
>> something?), or could we take the maximum value among TV and Laplacian for
>> each pixel instead?
>>
>> Thanks again
>>
>> Cheers,
>> rawfiner
>>
>>
>> 2018-07-01 3:45 GMT+02:00 Aurélien Pierre :
>>
>>> Hi,
>>>
>>> I have done experiments on that matter and took the opportunity to
>>> correct/test further my code.
>>>
>>> So here are my attempts to code a noise mask and a sharpness mask with
>>> total variation and laplacian norms:
>>> https://github.com/aurelienpierre/Image-Cases-Studies/blob/master/notebooks/Total%20Variation%20masking.ipynb
>>>
>>> Performance benchmarks are at the end.
>>>
>>> Cheers,
>>>
>>> Aurélien.
>>>
>>> Le 17/06/2018 à 15:03, rawfiner a écrit :
>>>
>>>
>>>
>>> Le dimanche 17 juin 2018, Aurélien Pierre 
>>> a écrit :
>>>
>>>>
>>>>
>>>> Le 13/06/2018 à 17:31, rawfiner a écrit :
>>>>
>>>>
>>>>
>>>> Le mercredi 13 juin 2018, Aurélien Pierre 
>>>> a écrit :
>>>>
>>>>>
>>>>>
>>>>>> On Thu, Jun 14, 2018 at 12:23 AM, A

Re: [darktable-dev] How to get image preview zoom factor information

2018-07-11 Thread rawfiner
Again, thank you for all this information!
This does clear up a lot of things :-)

rawfiner

2018-07-11 10:18 GMT+02:00 johannes hanika :

> heya,
>
> On Fri, Jul 6, 2018 at 12:06 AM, rawfiner  wrote:
> > Hi,
> >
> > I am still trying to resize raw before demosaicing to speed up raw
> > denoising.
> >
> > I now get the zoom level using the following code:
> >   float scale = 1.0;
> >   int closeup = dt_control_get_dev_closeup();
> >   if (piece->pipe->type == DT_DEV_PIXELPIPE_FULL)
> > scale = dt_dev_get_zoom_scale(self->dev, zoom, closeup ? 2.0 : 1.0,
> 0);
> >   else if (piece->pipe->type == DT_DEV_PIXELPIPE_PREVIEW)
> > scale = dt_dev_get_zoom_scale(self->dev, zoom, closeup ? 2.0 : 1.0,
> 1);
>
> as of lately, the obscure closeup became more obscure. it should now
> be 1< pixels).
>
> > This works fine in modify_roi_out, but sometimes gives 1 instead of what
> I
> > expect (the zoom factor) when called from modify_roi_in.
> > My problem is to manage to get the scale and use it correctly.
>
> okay. the way the pipeline works is in three stages:
>
> 1) a pass of modify_roi_out() from raw to screen output is performed.
> this is done full resolution, full region of interest, to determine
> the hypothetical size of the output image when processed in full.
>
> 2) given the size and region of interest of the view window, the
> develop module requests a certain input to be able to render the
> output. this is done by calling a chain of modify_roi_in() from view
> window back to raw image. this is only done on the exact pixels that
> are needed on screen right now, i.e. scaled and cropped.
>
> 3) process() is called with about exactly the ROI that were computed
> in pass number 2. i think there are some minor sanity checks done, so
> you shouldn't rely on what you asked for in modify_roi_in but use what
> you get in process().
>
> so in your case i think messing with modify_roi_in() is the more
> important case. you can just request the full image as passed through
> 1) by asking for piece->buf_in or piece->buf_out (these are stored
> when running 1) ).
>
> hope that clears up some things!
>
> cheers,
>  jo
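The three passes described above can be mocked in a few lines; everything here is a hypothetical sketch of the call ordering only, not the actual darktable API:

```python
# Hypothetical sketch (these class/method names are NOT the real darktable
# API) of the three-pass ordering described above: modify_roi_out runs
# forward at full size, modify_roi_in runs backward requesting only the
# pixels needed on screen, then process runs forward on the settled ROIs.
calls = []

class Module:
    def __init__(self, name):
        self.name = name

    def modify_roi_out(self):   # pass 1: raw -> screen, declares output size
        calls.append((self.name, "roi_out"))

    def modify_roi_in(self):    # pass 2: screen -> raw, scaled/cropped request
        calls.append((self.name, "roi_in"))

    def process(self):          # pass 3: runs on (about) the ROIs from pass 2
        calls.append((self.name, "process"))

pipe = [Module("rawprepare"), Module("demosaic"), Module("sharpen")]
for m in pipe:            # 1) forward pass over modify_roi_out
    m.modify_roi_out()
for m in reversed(pipe):  # 2) backward pass over modify_roi_in
    m.modify_roi_in()
for m in pipe:            # 3) forward pass over process
    m.process()

print(calls[0], calls[3], calls[-1])
# ('rawprepare', 'roi_out') ('sharpen', 'roi_in') ('sharpen', 'process')
```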
>
> > My module would resize the raw image.
> > Thus, input and output have different dimensions.
> > I tried to set the roi_out width and height with modify_roi_out, this
> works
> > fine.
> >
> > However, even after trying various things in modify_roi_in, I don't
> manage
> > to get the full image as ivoid.
> > First thing that I don't understand is where roi_in is modified between
> > modify_roi_out and modify_roi_in, as at the beginning of modify_roi_in,
> the
> > roi_in width is equal to roi_out width, while they were different at the
> > end of modify_roi_out?
> >
> > Also, in modify_roi_out I tried to save the scale in roi_out->scale, but
> in
> > modify_roi_in if I try to print roi_out->scale, I always get 1.
> > Is the roi_out variable used in modify_roi_out different from the one in
> > modify_roi_in?
> >
> > Maybe am I not catching something about the role of these passes.
> >
> > Thanks for any help!
> >
> > Cheers,
> > rawfiner
> >
> > 2018-05-10 23:13 GMT+02:00 rawfiner :
> >>
> >> Thank you for your answer.
> >>
> >> 2018-05-09 13:22 GMT+02:00 johannes hanika :
> >>>
> >>> heya,
> >>>
> >>> On Wed, May 9, 2018 at 12:27 PM, rawfiner  wrote:
> >>> > 2018-05-08 17:16 GMT+02:00 johannes hanika :
> >>> >> i'm guessing you want to detect whether you are running a
> >>> >> DT_DEV_PIXELPIPE_FULL pipe in darkroom mode (as opposed to
> >>> >> DT_DEV_PIXELPIPE_PREVIEW or _EXPORT) and then do this downscaling
> >>> >> yourself before running your algorithm on reduced resolution.
> >>> >>
> >>> >
> >>> > Yes, and I would like to know the zoom factor in case of
> >>> > DT_DEV_PIXELPIPE_PREVIEW , in order to downscale only if the image is
> >>> > sufficiently zoomed out (for example, I don't want to downscale the
> >>> > image if
> >>> > the zoom is at 90%, but I want to downscale if it is below 50%).
> >>>
> >>> right. to determine the total scale factor, you would need to do
> >>> something like for instance in sharpen.c:
> >>>
> >>> const int rad = MIN(MAXR, ceilf(d->radius * roi_in->scale /
> >>> piece->iscale));
> >>>

Re: [darktable-dev] How to get image preview zoom factor information

2018-07-10 Thread rawfiner
Dear all,

I think that I do not manage to differentiate correctly the roles of some
variables.
What is the difference between:
-roi_out->scale
-piece->iscale
-self->dev->preview_pipe->iscale

And the difference between:
-roi_out->width
-piece->iwidth
-self->dev->preview_pipe->processed_width
-self->dev->width

Which variable has to be used for which purpose?

What I currently have:
before downscaling:
https://drive.google.com/open?id=12Zj_Pcyhnm1yt_kgSS6F01q979TLMVMO
after downscaling, we are too zoomed out:
https://drive.google.com/open?id=1eMQ-wj8DznKT7GNoZy7pRtOvkUB-_vk1
I would like to "tell the engine" that we are downscaling the image for the
preview, and that the 29% zoom is going to become a 100% zoom (as the image
is smaller), but without affecting the information visible by the user (the
user will see that he is at 29% zoom).
Basically, the user should see no difference whether the downscaling is
activated or not

Thank you for any help

rawfiner



2018-07-06 0:06 GMT+02:00 rawfiner :

> Hi,
>
> I am still trying to resize raw before demosaicing to speed up raw
> denoising.
>
> I now get the zoom level using the following code:
>   float scale = 1.0;
>   int closeup = dt_control_get_dev_closeup();
>   if (piece->pipe->type == DT_DEV_PIXELPIPE_FULL)
> scale = dt_dev_get_zoom_scale(self->dev, zoom, closeup ? 2.0 : 1.0,
> 0);
>   else if (piece->pipe->type == DT_DEV_PIXELPIPE_PREVIEW)
> scale = dt_dev_get_zoom_scale(self->dev, zoom, closeup ? 2.0 : 1.0,
> 1);
>
> This works fine in modify_roi_out, but sometimes gives 1 instead of what I
> expect (the zoom factor) when called from modify_roi_in.
> My problem is to manage to get the scale and use it correctly.
>
> My module would resize the raw image.
> Thus, input and output have different dimensions.
> I tried to set the roi_out width and height with modify_roi_out, this
> works fine.
>
> However, even after trying various things in modify_roi_in, I don't manage
> to get the full image as ivoid.
> First thing that I don't understand is where roi_in is modified between
> modify_roi_out and modify_roi_in, as at the beginning of modify_roi_in, the
> roi_in width is equal to roi_out width, while they were different at the
> end of modify_roi_out?
>
> Also, in modify_roi_out I tried to save the scale in roi_out->scale, but
> in modify_roi_in if I try to print roi_out->scale, I always get 1.
> Is the roi_out variable used in modify_roi_out different from the one in
> modify_roi_in?
>
> Maybe am I not catching something about the role of these passes.
>
> Thanks for any help!
>
> Cheers,
> rawfiner
>
> 2018-05-10 23:13 GMT+02:00 rawfiner :
>
>> Thank you for your answer.
>>
>> 2018-05-09 13:22 GMT+02:00 johannes hanika :
>>
>>> heya,
>>>
>>> On Wed, May 9, 2018 at 12:27 PM, rawfiner  wrote:
>>> > 2018-05-08 17:16 GMT+02:00 johannes hanika :
>>> >> i'm guessing you want to detect whether you are running a
>>> >> DT_DEV_PIXELPIPE_FULL pipe in darkroom mode (as opposed to
>>> >> DT_DEV_PIXELPIPE_PREVIEW or _EXPORT) and then do this downscaling
>>> >> yourself before running your algorithm on reduced resolution.
>>> >>
>>> >
>>> > Yes, and I would like to know the zoom factor in case of
>>> > DT_DEV_PIXELPIPE_PREVIEW , in order to downscale only if the image is
>>> > sufficiently zoomed out (for example, I don't want to downscale the
>>> image if
>>> > the zoom is at 90%, but I want to downscale if it is below 50%).
>>>
>>> right. to determine the total scale factor, you would need to do
>>> something like for instance in sharpen.c:
>>>
>>> const int rad = MIN(MAXR, ceilf(d->radius * roi_in->scale /
>>> piece->iscale));
>>>
>>> which determines the pixel radius scaled by input buffer scaling
>>> (iscale) and region of interest scaling (roi_in->scale).
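In other words, the user-facing radius is converted to pipeline pixels by multiplying by the ROI scale and dividing by the input pre-scale, then clamped. A sketch of that arithmetic (the `MAXR` value here is made up; the real clamp is module-specific):

```python
import math

MAXR = 256  # hypothetical clamp; sharpen.c defines its own MAXR

def effective_radius(user_radius, roi_in_scale, piece_iscale):
    """Convert a user-facing radius into pipeline pixels:
    roi_in_scale is the current region-of-interest zoom factor,
    piece_iscale the downscaling already applied to the input buffer."""
    return min(MAXR, math.ceil(user_radius * roi_in_scale / piece_iscale))

print(effective_radius(8.0, 0.5, 1.0))  # zoomed out to 50% -> radius 4
print(effective_radius(8.0, 1.0, 2.0))  # input pre-scaled by 2 -> radius 4
```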
>>>
>>
>> Yes, I have seen that kind of things in the code of non local means.
>> Yet, if I understand correctly, this allows retrieving the scale factor
>> for an already downscaled image, i.e. when the image was downscaled
>> previously in the pipeline.
>> What I would like is a bit different, as it would be to know if I can
>> downscale the image or not, depending on the zoom level in the darkroom.
>> But I guess that I will find the necessary information in the function
>> dt_iop_clip_and_zoom_mosaic_half_size_f() that you pointed me out!
>>
>>
>>> note that the preview pipe is what fills the whole image but
>>> downs

Re: [darktable-dev] How to get image preview zoom factor information

2018-07-05 Thread rawfiner
Hi,

I am still trying to resize raw before demosaicing to speed up raw
denoising.

I now get the zoom level using the following code:
  float scale = 1.0;
  int closeup = dt_control_get_dev_closeup();
  if (piece->pipe->type == DT_DEV_PIXELPIPE_FULL)
scale = dt_dev_get_zoom_scale(self->dev, zoom, closeup ? 2.0 : 1.0, 0);
  else if (piece->pipe->type == DT_DEV_PIXELPIPE_PREVIEW)
scale = dt_dev_get_zoom_scale(self->dev, zoom, closeup ? 2.0 : 1.0, 1);

This works fine in modify_roi_out, but sometimes gives 1 instead of what I
expect (the zoom factor) when called from modify_roi_in.
My problem is to manage to get the scale and use it correctly.

My module would resize the raw image.
Thus, input and output have different dimensions.
I tried to set the roi_out width and height with modify_roi_out, this works
fine.

However, even after trying various things in modify_roi_in, I don't manage
to get the full image as ivoid.
First thing that I don't understand is where roi_in is modified between
modify_roi_out and modify_roi_in, as at the beginning of modify_roi_in, the
roi_in width is equal to roi_out width, while they were different at the
end of modify_roi_out?

Also, in modify_roi_out I tried to save the scale in roi_out->scale, but in
modify_roi_in if I try to print roi_out->scale, I always get 1.
Is the roi_out variable used in modify_roi_out different from the one in
modify_roi_in?

Maybe am I not catching something about the role of these passes.

Thanks for any help!

Cheers,
rawfiner

2018-05-10 23:13 GMT+02:00 rawfiner :

> Thank you for your answer.
>
> 2018-05-09 13:22 GMT+02:00 johannes hanika :
>
>> heya,
>>
>> On Wed, May 9, 2018 at 12:27 PM, rawfiner  wrote:
>> > 2018-05-08 17:16 GMT+02:00 johannes hanika :
>> >> i'm guessing you want to detect whether you are running a
>> >> DT_DEV_PIXELPIPE_FULL pipe in darkroom mode (as opposed to
>> >> DT_DEV_PIXELPIPE_PREVIEW or _EXPORT) and then do this downscaling
>> >> yourself before running your algorithm on reduced resolution.
>> >>
>> >
>> > Yes, and I would like to know the zoom factor in case of
>> > DT_DEV_PIXELPIPE_PREVIEW , in order to downscale only if the image is
>> > sufficiently zoomed out (for example, I don't want to downscale the
>> image if
>> > the zoom is at 90%, but I want to downscale if it is below 50%).
>>
>> right. to determine the total scale factor, you would need to do
>> something like for instance in sharpen.c:
>>
>> const int rad = MIN(MAXR, ceilf(d->radius * roi_in->scale /
>> piece->iscale));
>>
>> which determines the pixel radius scaled by input buffer scaling
>> (iscale) and region of interest scaling (roi_in->scale).
>>
>
> Yes, I have seen that kind of things in the code of non local means.
> Yet, if I understand correctly, this allows retrieving the scale factor
> for an already downscaled image, i.e. when the image was downscaled
> previously in the pipeline.
> What I would like is a bit different, as it would be to know if I can
> downscale the image or not, depending on the zoom level in the darkroom.
> But I guess that I will find the necessary information in the function
> dt_iop_clip_and_zoom_mosaic_half_size_f() that you pointed me out!
>
>
>> note that the preview pipe is what fills the whole image but
>> downscaled (iscale != 1) in the navigation view in the top left
>> corner. the "full" pipeline fills the pixels in the center view of
>> darkroom mode, at exactly the scale and crop you see on screen (iscale
>> == 1 mostly but the other scale and bounds in roi_in will change with
>> the current view).
>>
>> to find out whether you're running either one of the two you'd write
>> something similar to bilat.c:
>>
>> if(self->dev->gui_attached && g && piece->pipe->type ==
>> DT_DEV_PIXELPIPE_PREVIEW)
>>
>
> Ok, thank you for these explanations
> I think I have everything I need to make some new trials!
>
> Regards,
>
> rawfiner
>
>

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-07-02 Thread rawfiner
Thank you for all these explanations!
Seems promising to me.

Cheers,

rawfiner

2018-07-01 21:26 GMT+02:00 Aurélien Pierre :

> You're welcome ;-)
>
> That's true: the multiplication is equivalent to an "AND" operation; the
> resulting mask has non-zero values where both the TV AND Laplacian masks have
> non-zero values, which - from my tests - is where the real noise is.
>
> That is because TV alone is too sensitive: when the image is noisy, it
> works fine, but whenever the image is clean or barely noisy, it detects
> edges as well, giving false positives for noise detection.
>
> The TV × Laplacian is a safety jacket that allows the TV to work as
> expected on noisy images (see the example) but will protect sharp edges on
> clean images (on the example, the masks barely grabs a few pixels in the
> in-focus area).
>
> I have found that the only way we could overcome the oversensitivity of
> the TV alone is by setting a window (like a band-pass filter) instead of a
> threshold (high-pass filter) because, in a noisy picture, depending on the
> noise level, the TV values of noisy and edgy pixels are very close. From an
> end-user perspective, this is tricky.
>
> Using TV × Laplacian, given that the noise stats should not vary much for
> a given sensor at a given ISO, lets us confidently set a simple threshold
> as a factor of the standard deviation. It gives more reproducibility and
> lets us build presets/styles for a given camera/ISO. Assuming Gaussian
> noise, if you set your threshold factor to X (which means "unmask
> everything above the mean (TV × Laplacian) + X standard deviations"), you
> know beforehand how many high-frequency pixels will be affected, no matter
> what:
>
>- X = -1 =>  84 %,
>- 0 => 50 %,
>- 1 =>  16 % ,
>- 2 =>  2.5 %,
>- 3 => 0.15 %
>- …
>
> Le 01/07/2018 à 14:13, rawfiner a écrit :
>
> Thank you for this study Aurélien
>
> As far as I understand, TV and Laplacians are complementary as they detect
> noise in different regions of the image (noise on sharp edges for Laplacian,
> noise elsewhere for TV).
> Though, I do not understand why you multiply the TV and Laplacian results
> to get the mask.
> Multiplying them would result in a mask containing non-zero values only
> for pixels that are detected as noise both by TV and Laplacian.
> Is there a particular reason for multiplying (or did I misunderstand
> something?), or could we take the maximum value among TV and Laplacian for
> each pixel instead?
>
> Thanks again
>
> Cheers,
> rawfiner
>
>
> 2018-07-01 3:45 GMT+02:00 Aurélien Pierre :
>
>> Hi,
>>
>> I have done experiments on that matter and took the opportunity to
>> correct/test further my code.
>>
>> So here are my attempts to code a noise mask and a sharpness mask with
>> total variation and laplacian norms:
>> https://github.com/aurelienpierre/Image-Cases-Studies/blob/master/notebooks/Total%20Variation%20masking.ipynb
>>
>> Performance benchmarks are at the end.
>>
>> Cheers,
>>
>> Aurélien.
>>
>> Le 17/06/2018 à 15:03, rawfiner a écrit :
>>
>>
>>
>> Le dimanche 17 juin 2018, Aurélien Pierre  a
>> écrit :
>>
>>>
>>>
>>> Le 13/06/2018 à 17:31, rawfiner a écrit :
>>>
>>>
>>>
>>> Le mercredi 13 juin 2018, Aurélien Pierre 
>>> a écrit :
>>>
>>>>
>>>>
>>>>> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
>>>>>  wrote:
>>>>> > Hi,
>>>>> >
>>>>> > The problem of a 2-pass denoising method involving two different
>>>>> > algorithms, the latter applied where the former failed, could be that
>>>>> > the grain structure (the shape of the noise) would be different along
>>>>> > the picture, thus very unpleasing.
>>>>
>>>>
>>>> I agree that the grain structure could be different. Indeed, the grain
>>>> could be different, but my feeling (that may be wrong) is that it would be
>>>> still better than just no further processing, that leaves some pixels
>>>> unprocessed (they could form grain structures far from uniform if we are
>>>> not lucky).
>>>> If you think it is only due to a change of algorithm, I guess we could
>>>> apply non local means again on pixels where a first pass failed, but with
>>>> different parameters to be quite confident that the second pass will work.
>>>>
>>>> That sound

Re: [darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-07-01 Thread rawfiner
Thank you for this study Aurélien

As far as I understand, TV and Laplacians are complementary as they detect
noise in different regions of the image (noise on sharp edges for Laplacian,
noise elsewhere for TV).
Though, I do not understand why you multiply the TV and Laplacian results
to get the mask.
Multiplying them would result in a mask containing non-zero values only for
pixels that are detected as noise both by TV and Laplacian.
Is there a particular reason for multiplying (or did I misunderstand
something?), or could we take the maximum value among TV and Laplacian for
each pixel instead?

Thanks again

Cheers,
rawfiner


2018-07-01 3:45 GMT+02:00 Aurélien Pierre :

> Hi,
>
> I have done experiments on that matter and took the opportunity to
> correct/test further my code.
>
> So here are my attempts to code a noise mask and a sharpness mask with
> total variation and laplacian norms:
> https://github.com/aurelienpierre/Image-Cases-Studies/blob/master/notebooks/Total%20Variation%20masking.ipynb
>
> Performance benchmarks are at the end.
>
> Cheers,
>
> Aurélien.
>
> Le 17/06/2018 à 15:03, rawfiner a écrit :
>
>
>
> Le dimanche 17 juin 2018, Aurélien Pierre  a
> écrit :
>
>>
>>
>> Le 13/06/2018 à 17:31, rawfiner a écrit :
>>
>>
>>
>> Le mercredi 13 juin 2018, Aurélien Pierre  a
>> écrit :
>>
>>>
>>>
>>>> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
>>>>  wrote:
>>>> > Hi,
>>>> >
>>>> > The problem of a 2-pass denoising method involving two different
>>>> > algorithms, the latter applied where the former failed, could be that
>>>> > the grain structure (the shape of the noise) would be different along
>>>> > the picture, thus very unpleasing.
>>>
>>>
>>> I agree that the grain structure could be different. Indeed, the grain
>>> could be different, but my feeling (that may be wrong) is that it would be
>>> still better than just no further processing, that leaves some pixels
>>> unprocessed (they could form grain structures far from uniform if we are
>>> not lucky).
>>> If you think it is only due to a change of algorithm, I guess we could
>>> apply non local means again on pixels where a first pass failed, but with
>>> different parameters to be quite confident that the second pass will work.
>>>
>>> That sounds better to me… but practice will have the last word.
>>>
>>
>> Ok :-)
>>
>>>
>>>
>>>> >
>>>> > I thought maybe we could instead create some sort of total variation
>>>> > threshold on other denoising modules :
>>>> >
>>>> > compute the total variation of each channel of each pixel as the
>>>> divergence
>>>> > divided by the L1 norm of the gradient - we then obtain a "heatmap"
>>>> of the
>>>> > gradients over the picture (contours and noise)
>>>> > let the user define a total variation threshold and form a mask where
>>>> the
>>>> > weights above the threshold are the total variation and the weights
>>>> below
>>>> > the threshold are zeros (sort of a highpass filter actually)
>>>> > apply the bilateral filter according to this mask.
>>>> >
>>>> > This way, if the user wants to stack several denoising modules, he
>>>> could
>>>> > protect the already-cleaned areas from further denoising.
>>>> >
>>>> > What do you think ?
>>>
>>>
>>> That sounds interesting.
>>> This would maybe allow to keep some small variations/details that are
>>> not due to noise or not disturbing, while denoising the other parts.
>>> Also, it may be computationally interesting (depends on the complexity
>>> of the total variation computation, I don't know it), as it could reduce
>>> the number of pixels to process.
>>> I guess the user could use something like that also the other way?: to
>>> protect high detailed zones and apply the denoising on quite smoothed zones
>>> only, in order to be able to use stronger denoising on zones that are
>>> supposed to be background blur.
>>>
>>>
>>> The noise is high frequency, so the TV (total variation) threshold will
>>> have to be high pass only. The hypothesis behind the TV thresholding is
>>> that noisy pixels should have abnormally higher gradients than true
>>> details, so you isolate them this way.

[darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-17 Thread rawfiner
Le dimanche 17 juin 2018, Aurélien Pierre  a
écrit :

>
>
> Le 13/06/2018 à 17:31, rawfiner a écrit :
>
>
>
> Le mercredi 13 juin 2018, Aurélien Pierre  a
> écrit :
>
>>
>>
>>> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
>>>  wrote:
>>> > Hi,
>>> >
>>> > The problem of a 2-pass denoising method involving two different
>>> > algorithms, the latter applied where the former failed, could be that
>>> > the grain structure (the shape of the noise) would be different along
>>> > the picture, thus very unpleasing.
>>
>>
>> I agree that the grain structure could be different. Indeed, the grain
>> could be different, but my feeling (that may be wrong) is that it would be
>> still better than just no further processing, that leaves some pixels
>> unprocessed (they could form grain structures far from uniform if we are
>> not lucky).
>> If you think it is only due to a change of algorithm, I guess we could
>> apply non local means again on pixels where a first pass failed, but with
>> different parameters to be quite confident that the second pass will work.
>>
>> That sounds better to me… but practice will have the last word.
>>
>
> Ok :-)
>
>>
>>
>>> >
>>> > I thought maybe we could instead create some sort of total variation
>>> > threshold on other denoising modules :
>>> >
>>> > compute the total variation of each channel of each pixel as the
>>> divergence
>>> > divided by the L1 norm of the gradient - we then obtain a "heatmap" of
>>> the
>>> > gradients over the picture (contours and noise)
>>> > let the user define a total variation threshold and form a mask where
>>> the
>>> > weights above the threshold are the total variation and the weights
>>> below
>>> > the threshold are zeros (sort of a highpass filter actually)
>>> > apply the bilateral filter according to this mask.
>>> >
>>> > This way, if the user wants to stack several denoising modules, he
>>> could
>>> > protect the already-cleaned areas from further denoising.
>>> >
>>> > What do you think ?
>>
>>
>> That sounds interesting.
>> This would maybe allow to keep some small variations/details that are not
>> due to noise or not disturbing, while denoising the other parts.
>> Also, it may be computationally interesting (depends on the complexity of
>> the total variation computation, I don't know it), as it could reduce the
>> number of pixels to process.
>> I guess the user could use something like that also the other way?: to
>> protect high detailed zones and apply the denoising on quite smoothed zones
>> only, in order to be able to use stronger denoising on zones that are
>> supposed to be background blur.
>>
>>
>> The noise is high frequency, so the TV (total variation) threshold will
>> have to be high pass only. The hypothesis behind the TV thresholding is
>> that noisy pixels should have abnormally higher gradients than true details,
>> so you isolate them this way. Selecting noise in low-frequency areas would
>> require in addition something like a guided filter, which I believe is what
>> is used in the dehaze module. The complexity of the TV computation depends
>> on the order of accuracy you expect.
>>
>> A classic approximation of the gradient is using a convolution product
>> with Sobel or Prewitt operators (3×3 arrays, very efficient, fairly
>> accurate for edges, probably less accurate for punctual noise). I have
>> developed optimized methods myself using 2, 4, and 8 neighbouring pixels
>> that give higher-order accuracy, given the sparsity of the data, at the
>> expense of computing cost:
>> https://github.com/aurelienpierre/Image-Cases-Studies/blob/947fd8d5c2e4c3384c80c1045d86f8cf89ddcc7e/lib/deconvolution.pyx#L342
>> (ignore the variable ut in the
>> code, only u is relevant for us here).
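As a loose illustration of the TV-heatmap-plus-threshold idea discussed in this thread (pure Python, unrelated to the notebook's optimized implementation; the forward-difference gradient and the mean + X·std threshold are deliberate simplifications):

```python
import math

def tv_heatmap(img):
    """Per-pixel total-variation proxy: L1 norm of forward differences."""
    h, w = len(img), len(img[0])
    tv = [[0.0] * w for _ in range(h)]
    for i in range(h - 1):
        for j in range(w - 1):
            tv[i][j] = abs(img[i + 1][j] - img[i][j]) + abs(img[i][j + 1] - img[i][j])
    return tv

def highpass_mask(tv, x=1.0):
    """Keep only pixels whose TV exceeds mean + x * std (high-pass threshold)."""
    flat = [v for row in tv for v in row]
    mean = sum(flat) / len(flat)
    std = math.sqrt(sum((v - mean) ** 2 for v in flat) / len(flat))
    thr = mean + x * std
    return [[v if v > thr else 0.0 for v in row] for row in tv]

# Smooth ramp plus one injected outlier: only the outlier should survive.
img = [[(i + j) / 16.0 for j in range(8)] for i in range(8)]
img[3][3] += 0.8   # inject an outlier ("noise")
mask = highpass_mask(tv_heatmap(img), x=1.0)
print(mask[3][3] > 0.0, mask[0][0] == 0.0)  # -> True True
```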
>>
> Great, thanks for the explanations.
> Looking at the code of the 8 neighbouring pixels, I wonder if we would
> make sense to compute something like that on raw data considering only
> neighbouring pixels of the same color?
>
>
> the RAW data are even more sparse, so the gradient can't be computed this
> way. One would have to tweak the Taylor theorem to find an expression of
> gradient for sparse data. And that would be different for Bayer and X-Trans
> patterns. It's a bit of a conundrum.
>

Ok, thank you for these 

[darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-17 Thread rawfiner
Here are some of the RAW files I use to test the changes I make to
denoising modules (including the one I used as an example in the beginning
of this conversation):
https://drive.google.com/open?id=11LxZWpZbS66m7vFdcoIHNTiG20JnwlJT
The reference-jpg folder contains the JPGs produced by the camera for these
raws (except for 2 of the RAWs for which I don't have the reference JPG).
I also use several other RAW files to test, but unfortunately I cannot
upload them, as either they were not made by me or they are photos of
people.

These are really noisy pictures, as I would like to be able to easily
process such pictures in darktable and to reach levels of quality similar
to or better than the cameras'.
Hope it will help.

If you have noisy photos you would like to share too, I'd like to have them
as my database of noisy pictures is a little biased (majority of photos in
my little "noisy database" are from my own cameras Lumix FZ1000 and Fuji
XT20 and I'd like to have more photos from other cameras)

Thanks!

rawfiner



2018-06-13 23:31 GMT+02:00 rawfiner :

>
>
> Le mercredi 13 juin 2018, Aurélien Pierre  a
> écrit :
>
>>
>>
>>> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
>>>  wrote:
>>> > Hi,
>>> >
>>> > The problem of a 2-pass denoising method involving two different
>>> > algorithms, the latter applied where the former failed, could be that
>>> > the grain structure (the shape of the noise) would be different along
>>> > the picture, thus very unpleasing.
>>
>>
[darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-13 Thread rawfiner
On Wednesday, June 13, 2018, Aurélien Pierre wrote:

>
>
>> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
>>  wrote:
>> > Hi,
>> >
>> > The problem of a 2-pass denoising method involving 2 different
>> > algorithms, the latter applied where the former failed, could be that
>> > the grain structure (the shape of the noise) would be different across
>> > the picture, and thus very unpleasing.
>
>
> I agree that the grain structure could be different. Still, my feeling
> (which may be wrong) is that it would be better than no further
> processing, which leaves some pixels unprocessed (they could form grain
> structures far from uniform if we are unlucky).
> If the issue is only due to a change of algorithm, I guess we could
> apply non local means again on pixels where the first pass failed, but
> with different parameters, to be quite confident that the second pass
> will work.
>
> That sounds better to me… but practice will have the last word.
>

Ok :-)

>
>
>> >
>> > I thought maybe we could instead create some sort of total variation
>> > threshold on other denoising modules:
>> >
>> > compute the total variation of each channel of each pixel as the
>> > divergence divided by the L1 norm of the gradient - we then obtain a
>> > "heatmap" of the gradients over the picture (contours and noise)
>> > let the user define a total variation threshold and form a mask where
>> > the weights above the threshold are the total variation and the
>> > weights below the threshold are zeros (sort of a high-pass filter
>> > actually)
>> > apply the bilateral filter according to this mask.
>> >
>> > This way, if the user wants to stack several denoising modules, he could
>> > protect the already-cleaned areas from further denoising.
>> >
>> > What do you think ?
>
>
> That sounds interesting.
> This would maybe allow keeping some small variations/details that are
> not due to noise, or not disturbing, while denoising the other parts.
> Also, it may be computationally interesting (depending on the complexity
> of the total variation computation, which I don't know), as it could
> reduce the number of pixels to process.
> I guess the user could also use something like that the other way
> around: to protect highly detailed zones and apply the denoising only on
> quite smooth zones, in order to be able to use stronger denoising on
> zones that are supposed to be background blur.
>
>
> The noise is high frequency, so the TV (total variation) threshold will
> have to be high-pass only. The hypothesis behind the TV thresholding is
> that noisy pixels should have abnormally higher gradients than true
> details, so you isolate them this way. Selecting noise in low-frequency
> areas would additionally require something like a guided filter, which I
> believe is what is used in the dehaze module. The complexity of the TV
> computation depends on the order of accuracy you expect.
>
> A classic approximation of the gradient is using a convolution product
> with Sobel or Prewitt operators (3×3 arrays, very efficient, fairly
> accurate for edges, probably less accurate for punctual noise). I have
> developed optimized methods myself using 2, 4, and 8 neighbouring pixels
> that give higher-order accuracy, given the sparsity of the data, at the
> expense of computing cost:
> https://github.com/aurelienpierre/Image-Cases-Studies/blob/947fd8d5c2e4c3384c80c1045d86f8cf89ddcc7e/lib/deconvolution.pyx#L342
> (ignore the variable ut in the code, only u is relevant for us here).
>
Great, thanks for the explanations.
Looking at the code of the 8-neighbouring-pixels version, I wonder if it
would make sense to compute something like that on raw data, considering
only neighbouring pixels of the same color?
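A minimal sketch of that idea (illustrative Python in the spirit of the prototypes linked in this thread, with invented names, not darktable code): in an RGGB Bayer mosaic the nearest photosites of the same colour lie two steps away horizontally and vertically, so a same-colour central difference simply uses a stride of 2.

```python
def bayer_gradient(raw, x, y):
    """Gradient estimate at interior photosite (x, y), using only
    neighbours of the same CFA colour (assumes 2 <= x, y < size - 2).
    Stride-2 central differences stay on the same Bayer colour plane."""
    gx = (raw[y][x + 2] - raw[y][x - 2]) / 4.0
    gy = (raw[y + 2][x] - raw[y - 2][x]) / 4.0
    return gx, gy
```

On a horizontal ramp (raw[y][x] = x) this returns (1.0, 0.0) at any interior site, regardless of the CFA colour there.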

Also, when talking about the mask formed from the heat map, do you mean
that the "heat" would give for each pixel a weight to use between input and
output? (i.e. a mask that is not only ones and zeros, but that controls how
much input and output are used for each pixel)
If so, I think it is a good idea to explore!
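To make the heat-map/mask discussion concrete, here is a rough sketch (illustrative Python with invented names; darktable's pipeline code is C): an L1 gradient magnitude as a crude per-pixel total-variation estimate, and a continuous weight that blends a module's input and denoised output per pixel instead of a binary ones-and-zeros mask.

```python
def tv_heatmap(img):
    """Crude per-pixel total-variation estimate: L1 norm of the central
    finite-difference gradient (left at zero on the 1-pixel border)."""
    h, w = len(img), len(img[0])
    heat = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = 0.5 * (img[y][x + 1] - img[y][x - 1])
            gy = 0.5 * (img[y + 1][x] - img[y - 1][x])
            heat[y][x] = abs(gx) + abs(gy)
    return heat

def blend_by_heat(inp, out, heat, threshold):
    """Continuous mask: weight 0 keeps the input pixel, weight 1 takes the
    denoised output; the weight ramps up with the TV 'heat'."""
    h, w = len(inp), len(inp[0])
    res = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            t = min(1.0, heat[y][x] / threshold)
            res[y][x] = (1.0 - t) * inp[y][x] + t * out[y][x]
    return res
```

The linear ramp from 0 to the threshold is just one possible mapping; a hard threshold or a smoothstep would slot in the same way.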

rawfiner

>
>
>
>> >
>> > Aurélien.
>> >
>> >
>> > Le 13/06/2018 à 03:16, rawfiner a écrit :
>> >
>> > Hi,
>> >
>> > I don't have the feeling that increasing K is the best way to improve
>> noise
>> > reduction anymore.
>> > I will upload the raw next week (if I don't forget to), as I am not at
>> home
>> > this week.
>> > My feeling is that doing non local mean

[darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-06-13 Thread rawfiner
On Wednesday, June 13, 2018, johannes hanika wrote:

> hi,
>
> that doesn't sound like a bad idea at all. for what it's worth, in
> practice the nlmeans doesn't let any grain at all through due to the
> piecewise constant prior that it's based on. well, only in regions
> where it finds enough other patches that is. in the current
> implementation with a radius of 7 that is not always the case.


That's precisely the type of grain that I thought of trying to tackle
with a second pass.
When the image is very noisy, it is quite frequent to have pixels without
enough other patches.
It sometimes forces me to raise the strength sliders, resulting in an
overly smoothed image.
The idea is to give the user the choice of how to handle these pixels,
either by leaving them as they are, or by using another denoising
algorithm so that they integrate better with their surroundings.
Anyway, I guess I may try that and come back with some results to discuss
whether it's worth it or not ;-)
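The fallback scheme described here could be sketched like this (toy Python, not darktable's implementation; the real second pass would be another NLM or a bilateral run, for which the plain 3×3 mean below is only a stand-in). The first pass is assumed to report, per pixel, the sum of patch weights it accumulated:

```python
def fallback_mean3(img, x, y):
    """Plain 3x3 mean, standing in for a looser second denoising pass."""
    h, w = len(img), len(img[0])
    vals = [img[j][i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))]
    return sum(vals) / len(vals)

def second_pass(first_out, img, weight_sum, eps=1e-3):
    """Re-process only the pixels where the first NLM pass accumulated a
    near-zero sum of weights (i.e. found no usable patches)."""
    out = [row[:] for row in first_out]
    for y in range(len(img)):
        for x in range(len(img[0])):
            if weight_sum[y][x] < eps:
                out[y][x] = fallback_mean3(img, x, y)
    return out
```

A user-facing slider could then simply scale `eps`, i.e. how aggressively "failed" pixels are handed to the fallback filter.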


>
> also, i usually use some blending to add the input buffer back on top
> of the output. this essentially leaves the grain alone but tones it
> down.


I do the same ;-)


>
> cheers,
>  jo
>
>
> On Thu, Jun 14, 2018 at 12:23 AM, Aurélien Pierre
>  wrote:
> > Hi,
> >
> > The problem of a 2-pass denoising method involving 2 different
> > algorithms, the latter applied where the former failed, could be that
> > the grain structure (the shape of the noise) would be different across
> > the picture, and thus very unpleasing.


I agree that the grain structure could be different. Still, my feeling
(which may be wrong) is that it would be better than no further
processing, which leaves some pixels unprocessed (they could form grain
structures far from uniform if we are unlucky).
If the issue is only due to a change of algorithm, I guess we could apply
non local means again on pixels where the first pass failed, but with
different parameters, to be quite confident that the second pass will
work.


> >
> > I thought maybe we could instead create some sort of total variation
> > threshold on other denoising modules:
> >
> > compute the total variation of each channel of each pixel as the
> > divergence divided by the L1 norm of the gradient - we then obtain a
> > "heatmap" of the gradients over the picture (contours and noise)
> > let the user define a total variation threshold and form a mask where
> > the weights above the threshold are the total variation and the weights
> > below the threshold are zeros (sort of a high-pass filter actually)
> > apply the bilateral filter according to this mask.
> >
> > This way, if the user wants to stack several denoising modules, he could
> > protect the already-cleaned areas from further denoising.
> >
> > What do you think ?


That sounds interesting.
This would maybe allow keeping some small variations/details that are not
due to noise, or not disturbing, while denoising the other parts.
Also, it may be computationally interesting (depending on the complexity
of the total variation computation, which I don't know), as it could
reduce the number of pixels to process.
I guess the user could also use something like that the other way around:
to protect highly detailed zones and apply the denoising only on quite
smooth zones, in order to be able to use stronger denoising on zones that
are supposed to be background blur.

rawfiner



> >
> > Aurélien.
> >
> >
> > Le 13/06/2018 à 03:16, rawfiner a écrit :
> >
> > Hi,
> >
> > I don't have the feeling that increasing K is the best way to improve
> noise
> > reduction anymore.
> > I will upload the raw next week (if I don't forget to), as I am not at
> home
> > this week.
> > My feeling is that doing non local means on raw data gives much bigger
> > improvement than that.
> > I still have to work on it yet.
> > I am currently testing some raw downsizing ideas to allow a fast
> execution
> > of the algorithm.
> >
> > Apart of that, I also think that to improve noise reduction such as the
> > denoise profile in nlm mode and the denoise non local means, we could do
> a 2
> > passes algorithm, with non local means applied first, and then a
> bilateral
> > filter (or median filter or something else) applied only on pixels where
> non
> > local means failed to find suitable patches (i.e. pixels where the sum of
> > weights was close to 0).
> > The user would have a slider to adjust this setting.
> > I think that it would make easier to have a "uniform" output (i.e. an
> output
> > where noise has been reduced quite uniformly)
> > I have not 

Re: [darktable-dev] How to get image preview zoom factor information

2018-05-10 Thread rawfiner
Thank you for your answer.

2018-05-09 13:22 GMT+02:00 johannes hanika <hana...@gmail.com>:

> heya,
>
> On Wed, May 9, 2018 at 12:27 PM, rawfiner <rawfi...@gmail.com> wrote:
> > 2018-05-08 17:16 GMT+02:00 johannes hanika <hana...@gmail.com>:
> >> i'm guessing you want to detect whether you are running a
> >> DT_DEV_PIXELPIPE_FULL pipe in darkroom mode (as opposed to
> >> DT_DEV_PIXELPIPE_PREVIEW or _EXPORT) and then do this downscaling
> >> yourself before running your algorithm on reduced resolution.
> >>
> >
> > Yes, and I would like to know the zoom factor in case of
> > DT_DEV_PIXELPIPE_PREVIEW, in order to downscale only if the image is
> > sufficiently zoomed out (for example, I don't want to downscale the
> > image if the zoom is at 90%, but I want to downscale if it is below 50%).
>
> right. to determine the total scale factor, you would need to do
> something like for instance in sharpen.c:
>
> const int rad = MIN(MAXR, ceilf(d->radius * roi_in->scale /
> piece->iscale));
>
> which determines the pixel radius scaled by input buffer scaling
> (iscale) and region of interest scaling (roi_in->scale).
>

Yes, I have seen that kind of thing in the code of non local means.
Yet, if I understand correctly, this allows retrieving the scale factor
for an already downscaled image, i.e. when the image was downscaled
earlier in the pipeline.
What I would like is a bit different, as it would be to know whether I can
downscale the image or not, depending on the zoom level in the darkroom.
But I guess that I will find the necessary information in the function
dt_iop_clip_and_zoom_mosaic_half_size_f() that you pointed me to!


> note that the preview pipe is what fills the whole image but
> downscaled (iscale != 1) in the navigation view in the top left
> corner. the "full" pipeline fills the pixels in the center view of
> darkroom mode, at exactly the scale and crop you see on screen (iscale
> == 1 mostly but the other scale and bounds in roi_in will change with
> the current view).
>
> to find out whether you're running either one of the two you'd write
> something similar to bilat.c:
>
> if(self->dev->gui_attached && g && piece->pipe->type ==
> DT_DEV_PIXELPIPE_PREVIEW)
>

Ok, thank you for these explanations.
I think I have everything I need to make some new trials!

Regards,

rawfiner

___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] How to get image preview zoom factor information

2018-05-09 Thread rawfiner
Thank you for the detailed answer

2018-05-08 17:16 GMT+02:00 johannes hanika <hana...@gmail.com>:

> heya,
>
> for modules that work on raw data, the full pipeline is unscaled
> (hence your constant scale factors). all we do here is provide input
> cropped to the region of interest your module requested during the
> modify_roi_in() pass that is run before process() is called (there is
> a default implementation of modify_roi_in).
>

> we have some experimental/working code that downsizes raw data so that
> the preview pipeline can be run on raw/mosaic input, yet downscaled
> buffers. look for functions like
> dt_iop_clip_and_zoom_mosaic_half_size_f().
>

Cool, I will look at this!


>
> i'm guessing you want to detect whether you are running a
> DT_DEV_PIXELPIPE_FULL pipe in darkroom mode (as opposed to
> DT_DEV_PIXELPIPE_PREVIEW or _EXPORT) and then do this downscaling
> yourself before running your algorithm on reduced resolution.
>
>
Yes, and I would like to know the zoom factor in case of
DT_DEV_PIXELPIPE_PREVIEW , in order to downscale only if the image is
sufficiently zoomed out (for example, I don't want to downscale the image
if the zoom is at 90%, but I want to downscale if it is below 50%).


> note that there are some problems with this when it comes to aliasing
> or the treatment of filtered colours, some of which may be above the
> raw clipping threshold.
>

I can imagine that downscaling raw is far from easy. I think (hope?) that
the denoising step *may* reduce the artifacts that result from the
downscaling. Anyway, I will have to test this to figure out whether it is
an acceptable solution or not!
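As a rough illustration of what a mosaic-preserving downscale does (toy Python with invented names; dt_iop_clip_and_zoom_mosaic_half_size_f itself is more careful about area weighting and alignment): each output photosite averages the four nearest input photosites of the same CFA colour, so the half-size buffer is still a valid Bayer mosaic.

```python
def mosaic_half_size(raw):
    """Halve a Bayer mosaic while preserving the 2x2 CFA layout.
    Assumes dimensions are multiples of 4 (no border handling)."""
    h, w = len(raw) // 2, len(raw[0]) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            py, px = y % 2, x % 2              # position inside the CFA cell
            # top-left same-colour source site for this output photosite
            y0, x0 = (y - py) * 2 + py, (x - px) * 2 + px
            out[y][x] = sum(raw[y0 + 2 * dy][x0 + 2 * dx]
                            for dy in (0, 1) for dx in (0, 1)) / 4.0
    return out
```

Because the stride-2 averaging never mixes CFA colours, the output can be fed to the same mosaic-aware denoising code as the full-size raw.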


>
> let me know how you go with this, sounds very interesting!
> cheers,
>  jo
>

Ok I'll do that!
Thank you again for your help.

rawfiner


>
> On Tue, May 8, 2018 at 8:28 AM, rawfiner <rawfi...@gmail.com> wrote:
> > Dear all,
> >
> > I am currently working on a module for denoising at raw level.
> > The first results are very promising in terms of denoising quality, but
> > the speed of the module is not as fast as it should be (processing the
> > full image takes about 20 seconds, which is decent for export but too
> > slow for processing in the darkroom).
> >
> > The denoising modules that work on the image after demosaic can display
> > previews in the darkroom quickly, as they work on a downscaled version
> > of the image.
> >
> > I would like to investigate if simple methods of downscaling could work
> > decently at raw level, so that raw denoising could compute a fast
> > preview on the downscaled image.
> >
> > My question is, how (if it is possible) can I get the zoom factor of the
> > darkroom within the raw denoise module ?
> > I tried to look into the data available in "piece", but did not find
> > anything that could give me the zoom factor.
> > (piece->iscale is constantly equal to 1, no matter the zoom level, and
> > the same holds for piece->buf_in.scale)
> >
> > Thank you !
> >
> > Regards,
> >
> > rawfiner
> >
> > 
> ___
> > darktable developer mailing list to unsubscribe send a mail to
> > darktable-dev+unsubscr...@lists.darktable.org
>


[darktable-dev] How to get image preview zoom factor information

2018-05-07 Thread rawfiner
Dear all,

I am currently working on a module for denoising at raw level.
The first results are very promising in terms of denoising quality, but
the speed of the module is not as fast as it should be (processing the
full image takes about 20 seconds, which is decent for export but too
slow for processing in the darkroom).

The denoising modules that work on the image after demosaic can display
previews in the darkroom quickly, as they work on a downscaled version of
the image.

I would like to investigate if simple methods of downscaling could work
decently at raw level, so that raw denoising could compute a fast preview
on the downscaled image.

My question is, how (if it is possible) can I get the zoom factor of the
darkroom within the raw denoise module ?
I tried to look into the data available in "piece", but did not find
anything that could give me the zoom factor.
(piece->iscale is constantly equal to 1, no matter the zoom level, and the
same holds for piece->buf_in.scale)

Thank you !

Regards,

rawfiner


Re: [darktable-dev] Re: Regularized NL Means Denoise for poisson noise

2018-02-23 Thread rawfiner
Hi !

I am currently trying various things to improve noise reduction in
darktable.
It seems that NLM on RAW is a very good idea !
On my early tests it works really well.
I have made an (ugly and slow) prototype (without sensor profiling),
available here:
https://github.com/rawfiner/darktable
(branch is draft-denoise-NLM-raw, commit
10064058d56650900cacaf8323dc86abcdd2b66f)

To test it, you only have to activate raw denoise module.
The threshold parameter controls the strength of the non local means algorithm.
The GUI is not updated yet, there are plenty of things remaining to do, and
the code is ugly and slow.
Consider activating the module only at 100% zoom level so that it doesn't
take too long to execute.

Please also note that I slightly modified the algorithm compared to
normal NLM: you have to use the opacity of the module to balance the
original pixel value against the value computed from OTHER pixels
(excluding the original pixel).
It seems that using an opacity of 70% works quite well.
The reason I did this is that I don't like when denoise modules give
non-uniform results (i.e. some noise remains in some parts of the
image). This is (maybe) temporary, as I may test other ways to ensure
some uniformity.
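The modification can be sketched as follows (toy Python with invented names; the actual code lives in the branch above): the NLM-style estimate is built from the weighted values of OTHER pixels only, and the opacity then mixes the original pixel back in.

```python
def nlm_exclude_center(values, weights, center_value, opacity=0.7):
    """NLM-style estimate from candidate pixels (the center pixel is
    already excluded from values/weights), blended with the original
    value through a fixed opacity."""
    wsum = sum(weights)
    if wsum <= 0.0:
        return center_value   # no usable patches: keep the original pixel
    others = sum(v * w for v, w in zip(values, weights)) / wsum
    return opacity * others + (1.0 - opacity) * center_value
```

With opacity = 0.7, a pixel whose matched neighbours average to 3.0 but whose own value is 10.0 ends up at 0.7 * 3.0 + 0.3 * 10.0 = 5.1.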

Also note that in the current state, I do not handle the borders of the
image, as I did not want to make the code too complicated before being
confident about the interest of the approach.
Again, this is a prototype, it will evolve pretty soon !

Best regards,
rawfiner

2018-02-21 23:18 GMT+01:00 Björn Sozumschein <wilecoyote2...@gmail.com>:

> Hi,
>
> thank you for the information!
> I have tried out a simple approach by profiling the color filters in the
> Sensor pattern separately, and use only patches for each pixel that are
> centered around a pixel that has the same position in the repeating filter
> pattern. The results are quite interesting, as the algorithm seems to
> perform very well on flat surfaces, and texture is also rendered quite
> nicely. Maybe it is worth investigating whether that may be developed into
> a more sophisticated approach.
>
> I've uploaded the code I used and a sample image to try it out here:
> https://github.com/wilecoyote2015/NLMeans_Raw_Test
> One can use profile_camera.py to profile the camera and nlm_raw_profiled
> to denoise a raw image. The code isn't polished and is slow as hell, but it
> provides an idea of the results. It only works with Bayer sensors.
>
> Best,
> Bjoern
>
> 2018-02-18 13:33 GMT+01:00 johannes hanika <hana...@gmail.com>:
>
>> hi!
>>
>> sure, everything that improves noise reduction will be a welcomed
>> addition. glancing over the paper you mention, it is based on
>> non-local means and a global minimisation/total variation step. keep
>> in mind that for interactive usage we need to render several
>> megapixels through all necessary modules in a very short time frame.
>> this usually means a couple 10 milliseconds per module.
>>
>> currently we run a generalised anscombe transform (gaussian and
>> poissonian noise), which can be combined with wavelet or non-local
>> means. the closest they have as comparison is in table II in
>> conjunction with BM3D, which seems to be the winning combination.
>>
>> working on raw data seems like a good idea and i had started that
>> years ago in some unfinished branch. one thing about raw data is also
>> that the black point has not been subtracted yet and that you can
>> effectively better filter gaussian noise near zero using the negative
>> values still contained in the data. designing denoising algorithms in
>> this space is a bit painful to implement and needs to consider the
>> different colour filter array layouts, which is tedious and can result
>> in slow algorithms. also the anscombe transform doesn't work very well
>> near zero, so this branch was based on a fisz transform and wavelets.
>> in the end the results were usually not all that different except in
>> one or two extreme oddball images. in this context a specialised
>> nlmeans approach may be useful (but the gaussian part of the noise is
>> important).
>>
>> cheers,
>>  jo
>>
>> On Sun, Feb 18, 2018 at 1:43 AM, Björn Sozumschein
>> <wilecoyote2...@gmail.com> wrote:
>> > Hello all,
>> >
>> > the paper "Adaptive regularization of the NL-means: Application to
>> > image and video denoising" by Sutour et al., 2014 provides a nice
>> > overview regarding methods to adapt the NL means algorithm for Poisson
>> > noise and introduces regularization in order to remove typical
>> > artifacts. As far as I have read from the documentation, the 

Re: [darktable-dev] denoise profile non local means: neighborhood parameter

2018-01-28 Thread rawfiner
Hi

Yes, the patch size is set to 1 from the GUI, so it is not a bilateral
filter, and I guess it corresponds to a patch window size of 3x3 in the
code.
The runtime difference is near the expected quadratic slowdown:
1.460 s (8.379 s CPU) for K=7 and 12.794 s (85.972 s CPU) for K=25, which
means about a 10.26x slowdown in CPU time.
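For reference, the expected quadratic factor is just arithmetic on the search-window sizes (simple Python sketch): NLM compares each pixel against every candidate in a (2K+1) × (2K+1) window.

```python
def window_cost_ratio(k_new, k_old):
    """Ratio of NLM search-window areas: cost grows as (2K+1)^2."""
    return (2 * k_new + 1) ** 2 / (2 * k_old + 1) ** 2
```

Going from K=7 to K=25 gives 51² / 15² = 2601 / 225 = 11.56, in the same ballpark as the measured 10.26x CPU-time slowdown.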

If you want to make your mind on it, I have pushed a branch here that
integrates the K parameter in the GUI:
https://github.com/rawfiner/darktable.git
The branch is denoise-profile-GUI-K

I think it may be worth seeing whether an automated approach to choosing
K could work, so that the parameter need not be integrated into the GUI.
I may try to implement the approach of Kervrann and Boulanger (the
reference from the darktable blog post) to see how it performs.

cheers,
rawfiner


2018-01-27 13:50 GMT+01:00 johannes hanika <hana...@gmail.com>:

> heya,
>
> thanks for the reference! interesting interpretation how the blotches
> form. not sure i'm entirely convinced by that argument.
> your image does look convincing though. let me get this right.. you
> ran with radius 1 which means patch window size 3x3? not 1x1 which
> would be a bilateral filter effectively?
>
> also what was the run time difference? is it near the expected
> quadratic slowdown from 7 (i.e. 15x15) to 25 (51x51) so about 11.56x
> slower with the large window size? (test with darktable -d perf)
>
> since nlmeans isn't the fastest thing, even with this coalesced way of
> implementing it, we should certainly keep an eye on this.
>
> that being said if we can often times get much better results we
> should totally expose this in the gui, maybe with a big warning that
> it really severely impacts speed.
>
> cheers,
>  jo
>
> On Sat, Jan 27, 2018 at 7:34 AM, rawfiner <rawfi...@gmail.com> wrote:
> > Thank you for your answer
> > I perfectly agree with the fact that the GUI should not become
> > overcomplicated.
> >
> > As far as I understand, the pixels within a small zone may suffer from
> > correlated noise, and there is a risk of noise-to-noise matching.
> > That's why this paper suggests not taking pixels that are too close to
> > the zone we are correcting, but taking them a little farther away (see
> > the caption of Figure 2 for a quick explanation):
> >
> > https://pdfs.semanticscholar.org/c458/71830cf535ebe6c2b7656f6a205033761fc0.pdf
> > (in case you ask, unfortunately there is a patent associated with this
> > approach, so we cannot implement it)
> >
> > Increasing the neighborhood parameter results in proportionally fewer
> > problems with correlation between surrounding pixels, and decreases
> > the size of the visible spots.
> > See for example the two attached pictures: one with size 1, force 1, and
> K 7
> > and the other with size 1, force 1, and K 25.
> >
> > I think that the best would probably be to adapt K automatically, in
> > order not to affect the GUI, and because we may have different levels
> > of noise in different parts of an image.
> > In this post
> > (https://www.darktable.org/2012/12/profiling-sensor-and-photon-noise/),
> > this paper is cited:
> >
> > [4] Charles Kervrann and Jérôme Boulanger: Optimal spatial adaptation
> > for patch-based image denoising. IEEE Trans. Image Process., vol. 15,
> > no. 10, 2006
> >
> > As far as I understand, it gives a way to choose an adapted window size
> > for each pixel, but I don't see anything related to that in the code.
> >
> > Maybe is this paper related to the TODOs in the code ?
> >
> > Was it planned to implement such a variable window approach ?
> >
> > Or if it is already implemented, could you point me where ?
> >
> > Thank you
> >
> > rawfiner
> >
> >
> >
> >
> > 2018-01-26 9:05 GMT+01:00 johannes hanika <hana...@gmail.com>:
> >>
> >> hi,
> >>
> >> if you want, absolutely do play around with K. in my tests it did not
> >> lead to any better denoising. to my surprise a larger K often led to
> >> worse results (for some reason often the relevance of discovered
> >> patches decreases with distance from the current point). that's why K
> >> is not exposed in the gui, no need for another irrelevant and cryptic
> >> parameter. if you find a compelling case where this indeed leads to
> >> better denoising we could rethink that.
> >>
> >> in general NLM is a 0-th order denoising scheme, meaning the prior is
> >> piecewise constant (you claim the pixels you find are trying to
> >> express /the same/ mean, so you average them). if you

[darktable-dev] Re: denoise profile non local means: neighborhood parameter

2018-01-26 Thread rawfiner
Oh ok sorry for that...
rawfiner

On Friday, January 26, 2018, Terry Duell <tdu...@iinet.net.au> wrote:

> On Sat, 27 Jan 2018 05:34:24 +1100, rawfiner <rawfi...@gmail.com> wrote:
>
>> Thank you for your answer. I perfectly agree with the fact that the GUI
>> should not become overcomplicated.
>>
>
> ...and neither should large attachments (9 MB) be sent directly to a
> mailing list.
> Please use a link to large attached files, not everyone wants or needs to
> get it.
>
> Cheers,
> --
> Regards,
> Terry Duell
> 
>




[darktable-dev] denoise profile non local means: neighborhood parameter

2018-01-25 Thread rawfiner
Hi

I am surprised to see that we cannot control the neighborhood parameter for
the NLM algorithm (neither for the denoise non local mean, nor for the
denoise profiled) from the GUI.
I see in the code (denoiseprofile.c) this TODO that I don't understand: "//
TODO: fixed K to use adaptive size trading variance and bias!"
And just some lines after that: "// TODO: adaptive K tests here!"
(K is the neighborhood parameter of the NLM algorithm).

In practice, I think that being able to change the neighborhood parameter
allows better noise reduction for a given image.
For example, choosing a bigger K reduces the spotted aspect that one can
get on high-ISO images.

Of course, increasing K increases computational time, but I think we could
find an acceptable range that would still be useful.


Is there any reason for not letting the user control the neighborhood
parameter in the GUI ?
Also, do you understand the TODOs ?
I feel that we would probably get better denoising by fixing these, but I
don't understand them.

I can spend some time on these TODOs, or on adding the K parameter to the
interface if you think it is worth it (I think so, but it is only my
personal opinion), but I have to understand what the TODOs mean first.
Thank you for your help

rawfiner
