Re: [darktable-dev] Using darktable with Go to convert raw to jpeg

2018-11-11 Thread Heiko Bauke
Moin Moin,

On Monday, 12 November 2018, Michael Mayer wrote:
> Moin Moin, 
> We are grateful for every little piece of advice concerning XMP
> sidecar files and you are welcome to write in our wiki. I still don't
> understand how compatible Darktable is with Adobe Lightroom and other
> applications supporting XMP. Would you always get a similar JPEG file
> with the same XMP file? I guess it depends on which filters are
> supported/used and if they have the same name?

although I have never investigated these compatibility issues, I suspect one 
cannot expect much compatibility here.  Different raw converters may work very 
differently internally; in particular, closed-source programs are a black box.

Heiko

-- 
-- Number Crunch Blog @ https://www.numbercrunch.de
--  Cluster Computing @ https://www.clustercomputing.de
--  Social Networking @ https://www.researchgate.net/profile/Heiko_Bauke
___
darktable developer mailing list
to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org

Re: [darktable-dev] Using darktable with Go to convert raw to jpeg

2018-11-11 Thread Michael Mayer
Moin Moin,

after I had a detailed look into cgo and libdarktable yesterday, we
decided to continue using darktable-cli as this doesn't interfere with
our build process on other operating systems and leaves the option
open to use alternative raw converters, should darktable not be
available (sips, Photivo, RawTherapee, digiKam,...):

  https://github.com/photoprism/photoprism/wiki/Converting-RAW-to-JPEG

We are grateful for every little piece of advice concerning XMP
sidecar files and you are welcome to write in our wiki. I still don't
understand how compatible Darktable is with Adobe Lightroom and other
applications supporting XMP. Would you always get a similar JPEG file
with the same XMP file? I guess it depends on which filters are
supported/used and if they have the same name?

You've probably seen this before, but for us it was quite impressive
to see Darktable running in a browser using the Broadway display
server (we couldn't resist experimenting after cloning your repo):
https://twitter.com/browseyourlife/status/1061673399741759489

The exit code for darktable-cli --version is already fixed; the pull
request was accepted quickly. Thank you!

Michael


On Sat, Nov 10, 2018 at 4:17 PM, Michael Mayer  wrote:
> Thank you!
>
> Some users were asking for a single binary they can download and run.
> I already figured that's going to be complicated after I checked the
> dependencies of libdarktable.so; also, there is no statically linkable
> version of libtensorflow.so yet.
>
> At least we want to provide a small docker image with only the files
> that are actually needed - are there parts of a default darktable
> installation we can safely delete?
>
> Michael
>
> On Sat, Nov 10, 2018 at 3:16 PM, johannes hanika  wrote:
>> hi!
>>
>> looks like an interesting project you're working on.
>>
>> if you have a look at the source of darktable-cli, it's really short:
>>
>> https://github.com/darktable-org/darktable/blob/master/src/cli/main.c
>>
>> you could probably link your code directly to libdarktable.so the same
>> way. i don't understand your build constraints or why you'd prefer
>> static linkage. i guess linking against some form of libdarktable.a
>> statically might be doable, but we heavily depend on loadable modules
>> which are dlopen()ed by us manually (that includes our image
>> operations but also the system's opencl libraries). i doubt linking
>> all that statically is possible without major changes to the codebase
>> (or a good idea, for that matter).
>>
>> cheers,
>>  jo
>> On Sat, Nov 10, 2018 at 11:16 AM Michael Mayer  
>> wrote:
>>>
>>> Hello everyone,
>>>
>>> I'm the maintainer of https://photoprism.org/, a server-based photo
>>> management application based on Go and TensorFlow.
>>>
>>> We use darktable-cli to convert RAW files to JPEG:
>>>
>>> https://github.com/photoprism/photoprism/blob/develop/internal/photoprism/converter.go#L87
>>>
>>> While this works, it would be amazing if we could (statically?) link
>>> against darktable using cgo. I'm wondering if anyone has experience
>>> with that, or whether you would recommend continuing to use darktable-cli.
>>>
>>> Also, it seems I found a bug (which I haven't yet figured out how to
>>> report properly):
>>>
>>> # darktable-cli --version
>>> this is darktable-cli 2.4.4
>>> # echo $?
>>> 1
>>>
>>> -> Should be 0
>>>
>>> Thank you for the hard work you put into darktable!
>>>
>>> Michael



Re: [darktable-dev] advanced mask adjustments

2018-11-11 Thread Heiko Bauke

Hi Björn,

On 10 Nov 2018 at 11:22, Björn Sozumschein wrote:

So, aside from the better conformity with the user's intuitive 
understanding, maybe inverting the mask at the end of the pipeline would 
benefit usability.


for a better user experience I revised the code that constructs the mask 
accordingly, i.e., mask inversion now happens at the very end.


While doing so I also found a bug, which has been fixed.  (The 
possible/rare case that the input and output ROIs have different sizes was 
not taken into account.)  Currently these changes have been applied only 
to the CPU code path.  Fixes for OpenCL will follow as soon as 
possible.  See https://github.com/rabauke/darktable/tree/guided_filter



Heiko

--
-- Number Crunch Blog @ https://www.numbercrunch.de
--  Cluster Computing @ https://www.clustercomputing.de
--  Social Networking @ https://www.researchgate.net/profile/Heiko_Bauke



Re: Fwd: [darktable-dev] advanced mask adjustments

2018-11-11 Thread rawfiner
Hi Heiko
I do not have very detailed feedback to give for now, as I have only played
with it a little (maybe later) ;-)
Still, I wanted to say that I am very impressed, as I obtained great
results very easily.
This is awesome!
Thanks a lot!
rawfiner

On Sat, 10 Nov 2018 at 20:07, Heiko Bauke  wrote:

> Hi Björn,
>
> many thanks for your feedback.
>
> On 10 Nov 2018 at 11:24, Björn Sozumschein wrote:
> > I also believe that a proper explanation would prevent confusion
> > regarding the inversion behavior.
> > However, I have concerns with respect to the usability, based on my
> > initial experience:
> > In most cases, I use the masks to apply a module either to my subject or
> > to the background individually.
> > Let's assume, for instance, that there's a portrait shoot where I like
> > to apply a tone curve to the subject and I also want to use color
> > correction on the background.
> > In order to achieve this, I usually create a mask for the subject first,
> > because it is easier and more reliable to create a mask for the subject
> > than for the background.
> > This is due to the fact that, when the colored mask overlay is
> > activated, it seems just easier for human vision to classify whether the
> > subject is covered by the mask than if a mask of the background does not
> > cover parts of the subject.
> > At this step, I can draw a coarse mask and then use feathering to obtain
> > a great result and apply the tone curve.
> > Then, however, in order to perform color correction of the background, I
> > like to reuse the mask for the subject, apply the same contrast and
> > brightness parameters and simply invert it in order to obtain a mask for
> > the background that is complementary to the subject mask.
> > This is not possible with the current implementation, as brightness and
> > contrast have to be adjusted.
> >
> > So, aside from the better conformity with the user's intuitive
> > understanding, maybe inverting the mask at the end of the pipeline would
> > benefit usability.
>
> I completely agree.
>
> > There is a second point I noticed:
> > Especially when using the mask with hair, after proper adjustment of
> > brightness and saturation in order to match the edges well, the mask is
> > rather sharp and thus, for most modules, the edges of small structures
> > as well as soft edges do not look good.
> > I would like to apply a gaussian blur to the mask after feathering.
> > Also, I am not sure whether the brightness and contrast provide a real
> > benefit for the gaussian blur.
> > Hence, I wonder whether it could be useful to not have either gaussian
> > blur or feathering, but simply have the feathering with its options
> > first, followed by a slider for gaussian blur?
>
> There are many possible options for integrating the new feathering
> algorithm into the existing mask refinement facilities.  This is the
> reason why I am looking for feedback.  Giving the option to apply both a
> Gaussian filter and a guided filter to the mask is just one possible
> direction to go.  With this option, however, the question of order
> appears: which filter comes first, the Gaussian or the guided filter?
> Furthermore, more options and more flexibility require more UI elements
> and add more complexity.  We have to find the right balance.
>
> In addition to the existing integration of the guided filter one might
> give the user the possibility to adjust the following parameters:
>
> * Choose which image is used as a guide to feather the mask, the
> module's input or the module's output (before blending).  Currently it's
> always the input.  In most cases, the feathering result is not
> significantly affected by this choice.  It might be relevant, however,
> for blurring or sharpening modules.
>
> * Allow applying both a Gaussian filter and a guided filter.  Possibly
> even a Gaussian filter before and after the guided filter, with
> different parameters.
>
> * One might give the user the ability to determine when the mask
> tone-curve is applied: before or after feathering, before or after
> Gaussian blur, etc.  One might even allow applying several tone-curves
> at different stages.
>
> * One might add further parameters to adjust the mask tone-curve, e.g.,
> white and black points.
>
> I definitely do not want to go in the direction of implementing all
> these options.  I just want to sketch the rich possibilities.  I think
> we have to find a minimalistic solution that keeps complexity low but
> allows flexible mask adjustments.
>
> Currently I think the following approach is reasonable:
>
> * There are two sliders in the UI, one for a Gaussian blur radius, one
> for the guided filter radius.
>
> * The toggle box to choose the filter is removed.
>
> * A new toggle box is added to choose the guide (module input or
> output).  This would be consistent with the fact that for parametric
> masks we have two sliders for each channel.
>
> * Both filters (Gaussian and guided filter) are applied if the
> r