Re: [darktable-dev] Code reformatting...

2023-01-23 Thread Matthias Andree

On 23.01.23 at 08:40, Pascal Obry wrote:

Hello devs,

As we are not ready to have an automatic reformatting of the code I
have started at least making the function headers a bit more readable.

From:

void dt_gui_presets_show_edit_dialog(const char *name_in, const char *module_name, int rowid,
                                     GCallback final_callback, gpointer data, gboolean allow_name_change,
                                     gboolean allow_desc_change,
                                     gboolean allow_remove, GtkWindow *parent)

To:

void dt_gui_presets_show_edit_dialog(const char *name_in,
                                     const char *module_name,
                                     const int rowid,
                                     GCallback final_callback,
                                     gpointer data,
                                     const gboolean allow_name_change,
                                     const gboolean allow_desc_change,
                                     const gboolean allow_remove,
                                     GtkWindow *parent)

This is to be done only if the function header does not fit in a single
line of 80 characters.

When I work on a file I'll try to do this change in a separate commit
"Minor reformatting" and I encourage all devs to do the same.

This will make the code a bit more readable and let the type/name of the
parameters stand out a bit more.

Thanks,


While the idea is sound, 80-character-wide lines seem so... 1980s. Do
people still need to read and edit darktable source code on 640x480
displays?




Re: [darktable-dev] OBS packages for xUbuntu

2023-01-04 Thread Matthias Andree

On 04.01.23 at 16:43, Mica Semrick wrote:

You're making a lot of assumptions here. Seems like you have some
deeper issue than someone asking a simple question about support.
Maybe a break from the computer is in order.


You are considering my earlier messages rude and now you are insinuating
I had a "deeper issue"?

Have I just broken your delusions about Ubuntu LTS, or what's up? Why do you
feel, and give in to, an urge to resort to ad-hominem attacks?

The solution is "you want to run up-to-date software, you get to upgrade
your distro first".
In many more words, that was meant to convey "Ubuntu LTS is not what
many desktop users mistake it for".




Re: [darktable-dev] OBS packages for xUbuntu

2023-01-04 Thread Matthias Andree

On 04.01.23 at 15:58, Mica Semrick wrote:

This answer is a bit rude and doesn't answer the original query.


It may be rude if you consider "who cares" rude, but it prevents people
from wasting their time while pointing out the actual issue: an "old
distro" that is too old to build darktable 4.2.


There is an unmet dependency in Ubuntu 20.04 and the latest release
can no longer be built. See
https://discuss.pixls.us/t/what-happened-with-the-obs-builds/33588/2?u=darix for
more information.


Thanks for mass-confirming what I was writing.

And scared users in that thread posted in November 2022, 7 months after
release, that they still considered Ubuntu "new", when 22.04.1 was out
and from-LTS-to-next-LTS upgrades had been enabled. Exactly the kind of
support open-source maintainers want to be distracted with. I haven't
even looked whether the OBS people are the same as the darktable people,
but you'd think it best to move things forward rather than tying them up
in the past.

The thing is, you can't have your cake and eat it too, so everyone please
stop pretending they could.

Ubuntu 20.04 (code-named Focal Fossa) shipped darktable 3.0, and darktable
sits in the "universe" package set, which Canonical does not maintain...
so being stuck with an older darktable is a choice that people made by NOT
upgrading their Ubuntu LTS in the past three months.
https://packages.ubuntu.com/search?suite=focal&searchon=names&keywords=darktable

It also comes down to a choice: either you pick an Ubuntu LTS distro and
live with whatever unmaintained ("universe") package came with it, and stay
stuck with it; or you pick something that installs an app and all its
distro dependencies redundantly alongside the distro (Snap or Flatpak, if
available), with all the drawbacks of its isolation and bulk; or, if your
interest is "new software", you move to a distro that is up to speed and
integrates such software quickly. Rolling or frequently released distros
exist, but that's not Ubuntu LTS, and possibly no Debian-based distro at all.

Having said that, Fedora 37 or FreeBSD 13.1 built darktable 4.2 nicely
for me.

I wonder why the whole world can expect everyone to maintain every new
package for their museum piece of a desktop distro install and NOT be
considered rude. Expecting someone to maintain software, or packages
thereof, for older distros, on a voluntary basis and free of charge, is
what I consider egotistic and rude. It is an enormous waste of resources.





Re: [darktable-dev] OBS packages for xUbuntu

2023-01-04 Thread Matthias Andree

On 04.01.23 at 04:51, Bob Tregilus wrote:

Hi -

I'm not sure who I should alert to this issue - someone on this dev
list, or should I write to OBS support?

On the openSUSE contributors' OBS they list the following four 4.2.0
darktable builds for Ubuntu-based distros (I added the support
information):

xUbuntu 22.10 is supported to 2023-07.

xUbuntu 22.04 (LTS) is supported to 2027-04-21.

xUbuntu 21.10 support *ended* 2022-07-14.

xUbuntu 21.04 support *ended* 2022-01-20.

But now missing is a 4.2.0 build for the older:

xUbuntu 20.04 (LTS) which is supported to 2025-04-23.


Who cares?

The proper answer is: do not use older Ubuntu distros on desktops. Ubuntu
only maintains a very small subset of packages in the LTS context
(understandably so, because it's redundant effort), so please do not
encourage people to shoot themselves in the foot by providing new packages
for older distros. Instead, teach desktop users to stay updated.


$ ubuntu-security-status
989 packages installed, of which:
859 receive package updates with LTS until 4/2025


Meaning 130 packages without any security updates (this is Xubuntu 20.04,
which I use as a mostly-headless build server for mail-related software),
and on typical desktop installs it's usually much worse. Ubuntu's "LTS"
tag, for desktops, is window dressing.

Only main/restricted, i.e. the base packages, receive "support"
(https://ubuntu.com/about/release-cycle), but not many of the community
packages, which I find make up a considerable part of desktop installs.
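
For reference, a quick way to check this on any Ubuntu install (the package
name is only an example, and output details vary between releases):

$ apt-cache policy darktable      # the origin lines show the component, e.g. ".../focal/universe"
$ ubuntu-security-status          # summarizes how many installed packages still receive updates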




Re: [darktable-dev] dt git master crashes at export

2021-03-17 Thread Matthias Andree
On 17.03.21 at 11:05, Peter Harde wrote:
> Hi Matthias,
>
> about 25 minutes ago Pascal created an issue on GitHub; he could
> reproduce it, too. So I think it's not necessary to provide a git bisect.
> Thank you.

Oh, then I made a dupe...




Re: [darktable-dev] dt git master crashes at export

2021-03-17 Thread Matthias Andree
On 17.03.21 at 10:29, Peter Harde wrote:
> On 17.03.21 at 10:19, Pascal Obry wrote:
>> Hi Peter,
>>
>> Please report to GitHub.
>>
>> https://github.com/darktable-org/darktable/issues/new/choose
>>
>> Thanks.
> Hi Pascal,
>
> I would do this with pleasure, but unfortunately it's not possible. I
> can't create an account there, because github massively violates the
> privacy settings of my browser.


Because I can reproduce this, use

https://github.com/darktable-org/darktable/issues/8490





Re: [darktable-dev] dt git master crashes at export

2021-03-17 Thread Matthias Andree
On 17.03.21 at 10:29, Peter Harde wrote:
> Hi Pascal,
>
> I would do this with pleasure, but unfortunately it's not possible. I
> can't create an account there, because github massively violates the
> privacy settings of my browser.
>
> On 17.03.21 at 10:19, Pascal Obry wrote:
>> Hi Peter,
>>
>> Please report to GitHub.
>>
>> https://github.com/darktable-org/darktable/issues/new/choose
>>
Can you at least Git bisect this failure?
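
If it helps, a minimal bisect run could look roughly like this (the good
revision is just a placeholder; build and run the export test case at each
step):

git bisect start
git bisect bad                                   # current master crashes on export
git bisect good <last-known-good-commit-or-tag>
# build, try the export, then mark the result and repeat until it converges:
git bisect good                                  # or: git bisect bad
git bisect reset                                 # return to the original branch when finished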




Re: [darktable-dev] dt git master crashes at export -> two backtraces

2021-03-17 Thread Matthias Andree
On 17.03.21 at 09:04, Peter Harde wrote:
>
> Dear developers,
>
> dt 3.5.0+1428~gdf1271bfb, linux Ubuntu 20.04
>
> The development version reproducibly crashes at export with "double
> free or corruption (fasttop)". To reproduce:
>
>   * select some images (tried with 2, 4, 30) of a collection, tried
>     with ARW and JPG images
>   * select "hierarchical tags" in the "edit metadata exportation" dialog
>     (see red marker in attached screenshot export-crash.png)
>   * for further export dialog settings see the second screenshot
>     (export-parameters.png)
>   * click "export"
>
> Export works fine with the same images if "hierarchical tags" is not selected.
>
I had to find the dialog mentioned... so in the lighttable view, RIGHT-click
on the hamburger (three-dashes) symbol to the right of the export module to
reconfigure it per Peter's screenshot.

I can then reproduce this with darktable Git 2ec5c0cb4, and this is the
backtrace of the crashing thread:

free(): double free detected in tcache 2
...

> #0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:49
> #1  0x7796e8a4 in __GI_abort () at abort.c:79
> #2  0x779c8177 in __libc_message
> (action=action@entry=do_abort, fmt=fmt@entry=0x77ada3a7 "%s\n")
>     at ../sysdeps/posix/libc_fatal.c:155
> #3  0x779cfe6c in malloc_printerr
> (str=str@entry=0x77adc7a8 "free(): double free detected in tcache 2")
>     at malloc.c:5389
> #4  0x779d193c in _int_free (av=0x7fffd020,
> p=0x7fffd00611d0, have_lock=0) at malloc.c:4232
> #5  0x776c570d in g_free (mem=0x7fffd00611e0) at
> ../glib/gmem.c:199
> #6  0x776b0e40 in g_list_foreach (list=,
>     list@entry=0x28ed540 = {...}, func=0x776c5700 ,
> user_data=user_data@entry=0x0) at ../glib/glist.c:1090
> #7  0x776bb88f in g_list_free_full (list=0x28ed540 = {...},
> free_func=) at ../glib/glist.c:244
> #8  0x77c40e27 in _exif_xmp_read_data_export
> (metadata=0x7fffe1692140, imgid=971, xmpData=...)
>     at ../src/common/exif.cc:3682
> #9  dt_exif_xmp_attach_export(int, char const*, void*)
>     (imgid=971, filename=filename@entry=0x7fffe168f030
> "/some/path/darktable_exported/_DSC7009_01.jpg",
> metadata=metadata@entry=0x7fffe1692140) at ../src/common/exif.cc:3891
> #10 0x77c6434c in dt_imageio_export_with_flags
> (imgid=,
>     imgid@entry=971, filename=filename@entry=0x7fffe168f030
> "/some/path/darktable_exported/_DSC7009_01.jpg",
> format=format@entry=0x29b60c0,
> format_params=format_params@entry=0x7fffd00744a0,
> ignore_exif=ignore_exif@entry=0,
> display_byteorder=display_byteorder@entry=0, high_quality= out>, upscale=0, thumbnail_export=0, filter=,
> copy_metadata=1, export_masks=0, icc_type=DT_COLORSPACE_SRGB,
> icc_filename=0x3e6f1b0 "", icc_intent=DT_INTENT_PERCEPTUAL,
> storage=0x29dfec0, storage_params=0x4115d00, num=1, total=2,
> metadata=0x7fffe1692140) at ../src/common/imageio.c:1046
> #11 0x77c64ed6 in dt_imageio_export
>     (imgid=971, filename=0x7fffe168f030
> "/some/path/darktable_exported/_DSC7009_01.jpg", format=0x29b60c0,
> format_params=0x7fffd00744a0, high_quality=1, upscale=0,
> copy_metadata=1, export_masks=0, icc_type=DT_COLORSPACE_SRGB,
> icc_filename=0x3e6f1b0 "", icc_intent=DT_INTENT_PERCEPTUAL,
> storage=0x29dfec0, storage_params=0x4115d00, num=1, total=2,
> metadata=0x7fffe1692140) at ../src/common/imageio.c:644
> #12 0x7fffc8305068 in store
>     (self=0x29dfec0, sdata=, imgid=,
> format=0x29b60c0, fdata=0x7fffd00744a0, num=1, total=2,
> high_quality=1, upscale=0, export_masks=0,
> icc_type=DT_COLORSPACE_SRGB, icc_filename=0x3e6f1b0 "",
> icc_intent=DT_INTENT_PERCEPTUAL, metadata=0x7fffe1692140) at
> ../src/imageio/storage/disk.c:334
> #13 0x77cc83e3 in dt_control_export_job_run (job=0x40d1e60) at
> ../src/control/jobs/control_jobs.c:1403
> #14 0x77cc4659 in dt_control_job_execute
> (job=job@entry=0x40d1e60) at ../src/control/jobs.c:300
> #15 0x77cc4f18 in dt_control_run_job (control=0x444dc0) at
> ../src/control/jobs.c:319
> #16 dt_control_work (ptr=) at ../src/control/jobs.c:564
> #17 0x7792f3f9 in start_thread (arg=0x7fffe16a2640) at
> pthread_create.c:463
> #18 0x77a49b53 in clone () at
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Retrying this twice with other files, I get a different crash:

Thread 9 "worker 3" received signal SIGSEGV, Segmentation fault.
...

> #0  0x75543473 in
> std::auto_ptr::operator=(std::auto_ptr_ref)
>     (__ref=..., this=0x7fffb8004bd0) at
> /usr/include/c++/10/backward/auto_ptr.h:274
> #1  Exiv2::Xmpdatum::Impl::Impl(Exiv2::Xmpdatum::Impl const&)
>     (this=0x7fffb8004bd0, rhs=..., this=,
> rhs=)
>     at /usr/src/debug/exiv2-0.27.3-4.fc33.x86_64/src/xmp.cpp:143
> #2  0x75543855 in Exiv2::Xmpdatum::Xmpdatum(Exiv2::Xmpdatum
> const&)
>     (this=0x7fffb80a0840, rhs=..., this=,
> rhs=)
>     at /usr/src/debug/exiv2-0.27.3-4.fc33.x86_64/src/xmp.cpp:163
> #3  0x755438c3 in
> __gnu_cxx::new_allocator::c

Re: [darktable-dev] Fwd: Welcome to darktable-dev@lists.darktable.org

2020-05-15 Thread Matthias Andree
On 15.05.20 at 15:51, Thomas Weigert wrote:
>
>
> I observe that while the
> documentation https://www.darktable.org/resources/camera-support/ states
> that newer Fuji cameras, such as X-E3, X-T30, or X-T3 are provided at
> least base level support, darktable does not find the information
> based on the EXIF data and also, in the lens correction selection,
> these cameras do not appear, see screen shot attached.
>
> Maybe this was just not added to the UI?

Run lensfun-update-data and restart darktable.
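
lensfun-update-data downloads the new database into the per-user update
directory, so in case it is unclear whether the update took, something along
these lines should show whether e.g. the X-T3 is now known (the path is what
current lensfun releases use; it may differ on your install):

lensfun-update-data
grep -rl "X-T3" ~/.local/share/lensfun/updates/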




[darktable-dev] cmake compatibility patch on 3.0.x branch

2020-05-08 Thread Matthias Andree
Greetings,

on FreeBSD with CMake 3.17.2, I found I needed to rename a variable in
data/kernels/CMakeLists.txt in the 3.0.2 release - "IN" does not seem to
work any longer and appears to be treated specially. Renaming IN to i in
two places fixes a cmake "configure" abort.

Patch attached - please consider for inclusion.

Thanks
Matthias


--- data/kernels/CMakeLists.txt.orig	2020-04-15 07:10:53 UTC
+++ data/kernels/CMakeLists.txt
@@ -31,8 +31,8 @@ macro (testcompile_opencl_kernel IN)
 endmacro (testcompile_opencl_kernel)
 
 if (TESTBUILD_OPENCL_PROGRAMS)
-  foreach(IN ${DT_OPENCL_KERNELS})
-    testcompile_opencl_kernel(${IN})
+  foreach(i ${DT_OPENCL_KERNELS})
+    testcompile_opencl_kernel(${i})
   endforeach()
 endif()
 


Re: [darktable-dev] darktable 3.0.0rc0 released

2019-11-08 Thread Matthias Andree
On 06.11.19 at 19:33, François Tissandier wrote:
> Well, look at Sony RX100. Sony keeps producing and selling several
> generations at the same time, right ?

Indeed - the only replaced model is the RX100-V, which got replaced by the
RX100-VA. The others (RX100, RX100-II, -III, -IV, -VI, -VII) remain
available.

The reason why I find renaming "new" to be the same as "old" inappropriate
is this: different behaviour requires different names, and you will want to
remain able to open processing profiles from old versions, so I guess
darktable will also need to keep old modules available unless they were
massively faulty in their design.





Re: [darktable-dev] DT bad on skin tones?

2019-05-30 Thread Matthias Andree
On 29.05.19 at 12:28, Aurélien Pierre wrote:
>
> I guess I will have to record video tutorials in English then…
>
Or find someone to translate French to English.

Subtitles/captions have been proposed by Florian, too.





Re: [darktable-dev] Re: clang vs. gcc: dramatic performance difference

2018-09-15 Thread Matthias Andree
On 15.09.18 at 09:40, Matthias Bodenbinder wrote:
> On 13.09.18 at 07:38, Matthias Bodenbinder wrote:
>> export CC=/usr/bin/clang
>> export CXX=/usr/bin/clang++
>> INSTALL_PREFIX_DEFAULT="/opt/darktable-clang"
>>
> This is working now. I had to remove the build directory after editing 
> build.sh
>
> I have the following packages installed on Manjaro:
>
> clang 6.0.1-2
> lib32-llvm-libs 6.0.1-1
> llvm 6.0.1-4
> llvm-libs 6.0.1-4
> openmp 6.0.1-1
>
> And the result is: The DT performance is the same! pixelpipe time differences are less than 3 %.
>
> 22# ./bench-script-clang-vs-gcc.sh
> 3 runs no opencl
> run clang 1: 16,704839 [dev_process_export] pixel pipeline processing took 16,401 secs (128,498 CPU)
> run gcc   1: 16,537099 [dev_process_export] pixel pipeline processing took 16,205 secs (124,564 CPU)
> run clang 2: 17,087163 [dev_process_export] pixel pipeline processing took 16,798 secs (130,706 CPU)
> run gcc   2: 16,566993 [dev_process_export] pixel pipeline processing took 16,240 secs (124,351 CPU)
> run clang 3: 16,728366 [dev_process_export] pixel pipeline processing took 16,440 secs (128,643 CPU)
> run gcc   3: 16,588298 [dev_process_export] pixel pipeline processing took 16,260 secs (124,372 CPU)
>
> 3 runs with opencl
> run clang 1: 7,668977 [dev_process_export] pixel pipeline processing took 7,298 secs (20,387 CPU)
> run gcc   1: 7,533379 [dev_process_export] pixel pipeline processing took 7,129 secs (17,671 CPU)
> run clang 2: 7,599311 [dev_process_export] pixel pipeline processing took 7,235 secs (20,150 CPU)
> run gcc   2: 7,516472 [dev_process_export] pixel pipeline processing took 7,115 secs (17,731 CPU)
> run clang 3: 7,639706 [dev_process_export] pixel pipeline processing took 7,275 secs (20,923 CPU)
> run gcc   3: 7,662320 [dev_process_export] pixel pipeline processing took 7,257 secs (17,884 CPU)
>
> But the binaries are a lot smaller:
>
Are you looking at stripped files, or what does "size" print? Are
optimization options similar?
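
Something like the following would rule out the stripped-vs-unstripped
question (the paths are only illustrative):

size bin-gcc/darktable bin-clang/darktable    # compare text/data/bss section sizes
strip -o /tmp/dt-gcc   bin-gcc/darktable
strip -o /tmp/dt-clang bin-clang/darktable
ls -l /tmp/dt-gcc /tmp/dt-clang               # file sizes with symbols removed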






Re: [darktable-dev] possible data loss scenario

2017-10-25 Thread Matthias Andree
On 13.10.2017 at 02:12, Jonathan Richards wrote:
> On 12/10/17 22:57, Tobias Ellinghaus wrote:
>> On Thursday, 12 October 2017, 17:24:30 CEST, Marcello Mamino wrote:
>>> I can reproduce the bug on Debian stable, under Xfce, master branch
>>> just compiled, every setting default. The steps to follow are
>>> *exactly* these
>>>
>>> 1. Open the "export selected" tab
>>> 2. Click in the "max size" field.
>>> 3. Slowly move the pointer *downwards*
>>> 4. As soon as the pointer reaches the "allow upscaling" label just
>>> below, the input field loses focus
>>> 5. Hover on an image, press a number, and the star rating changes
> Ah, yes.  This behaviour is identical on the KDE build that I reported
> above.  I did not move the cursor downward before.
> Jonathan
>> We are aware of this and know why it's happening. We are not sure how to 
>> proceed with this though, as grabbing the focus itself is intended (so you 
>> can 
>> use the arrow keys to change the dropdowns and sliders), but the implication 
>> is unwanted. So we have to decide what eggs to break and what omelette to 
>> make. Or something like that. :-)
> What about a confirmation dialog before DT changes the star rating on a
> large number of selected images?  Or an undo stack that remembers star
> ratings and can restore them after a mistaken commit?  Alexander's
> original report was about losing many decisions on star rating, after all.

I'd indeed also wish that ratings (such as star ratings, reject, or
similar) were undoable, because every once in a while the mouse pointer
hovers over the film strip in darkroom mode, I press 'r' without having
noticed that the mouse had moved down from the main image, and I
accidentally reject a different image...

(And I've stopped using Wayland/Mir stuff because they'd sometimes let
an event such as a mouse click get noticed by one popup and the
underlying control at the same time, which I do not see when using Xorg).




Re: [darktable-dev] Darktable on Zesty

2017-04-17 Thread Matthias Andree
On 15.04.2017 at 00:35, François Tissandier wrote:
> Thanks for your answer !
>
> I'll have a look at the CSS. I should be able to fix it temporarily.
>
> François
>
> On 14 Apr 2017 at 10:03, Roman Lebedev wrote:
>
>
>
>     On Fri, Apr 14, 2017 at 10:47 AM, François Tissandier wrote:
>
> Hi guys !
>
> Just upgraded my Ubuntu Gnome to Zesty, and Darktable looks a
> bit... different :
>
> [ IMAGE CUT ]
>

Oh, can you please stop quoting the image for a 1.7 MB mail to hundreds
of subscribers?
Thank you.




Re: [darktable-dev] OpenCL scheduling profiles

2017-04-09 Thread Matthias Andree
On 09.04.2017 at 18:38, Ulrich Pegelow wrote:
> On 09.04.2017 at 17:29, Matthias Andree wrote:
>>> What's your number of background threads (fourth entry in core
>>> options)?
>>
>> It's currently set to 2, and if removed from the configuration file with
>> darktable stopped,
>> will revert to 2 when darktable gets restarted and closed next time.
>>
>> Note I see this quite often, but I don't see where that time comes from:
>>
>> [dev] took 4,787 secs (5,388 CPU) to load the image.
>> [dev] took 4,787 secs (5,388 CPU) to load the image.
>>
>
> You might try higher values like six or eight. Main advantage of many
> background threads is hiding I/O latency and that might be a main
> issue here.

Copying from USB3 HDD (2 TB, NTFS formatted) to internal SATA Samsung
SSD 830 transferred 40...60 MB/s, the latter reads back >200 MB/s.

Creating a second copy on the same SSD partition managed 105 MB/s (read
+ write, so actually read 105 + write 105), reading from a raw partition
is ~250 MB/s. Old hardware... :-o

> Might easily be that the main issue on your system is stalling I/O
> (for whatever reason). Please make some experiments from a very fast
> storage medium (SSD, ram disk) to find out if this is the main cause.

...that, and 6 threads, speeds things up noticeably, nearly maxes out the
CPU, and takes ~40 s to generate 136 thumbnails, a few of which use
aggressive CPU-only IOPs like raw denoise. 2 threads take longer (c. 1 min).

> There are some modules where no OpenCL code is available (Amaze
> demosaic, raw denoise, color input/output profile with LittleCMS2) but
> I cannot say if this is the main cause here. At least several of the
> modules from the output below have OpenCL support. Please try further
> to isolate if slow CPU processing correlates with specific images and
> their history stacks.

This needs some more time. Some might use amaze, raw denoise is part of
a few, color profile should not have happened TTBOMK.




Re: [darktable-dev] OpenCL scheduling profiles

2017-04-09 Thread Matthias Andree
On 09.04.2017 at 16:38, Ulrich Pegelow wrote:
> On 09.04.2017 at 11:00, Matthias Andree wrote:
>> On 08.04.2017 at 14:29, Ulrich Pegelow wrote:
>> 2. What bothers me, though, are the timeouts and their defaults. In
>> practice the darkroom works ok-ish, but the lighttable does not. When
>> a truckload of small thumbnails (say, the lighttable zoomed out to show
>> 10 columns of images) needs to be regenerated for the lighttable, it
>> *appears* (not yet corroborated with measurements) that bumping up the
>> timeouts considerably helps to avoid latencies, as though things were
>> deadlocking and waiting for the timer to break the lock. It might be an
>> internal issue with the synchronization though - how fine-grained is
>> the re-attempt? Is it sleep-and-retry, or does it use some form of
>> semaphores and signalling at the system level between threads?
>>
>
> What's your number of background threads (fourth entry in core options)? 

It's currently set to 2, and if removed from the configuration file with
darktable stopped, it will revert to 2 when darktable gets restarted and
closed next time.

Note I see this quite often, but I don't see where that time comes from:

[dev] took 4,787 secs (5,388 CPU) to load the image.
[dev] took 4,787 secs (5,388 CPU) to load the image.

Looking at iotop it appears that the prime concern however is that it
maxes out the external USB3 HDD reading from NTFS...
reducing to 1 thread stalled the UI at first but came back with some 30
thumbnails all at once.

I sometimes see modules like highlight reconstruction, CA correction, or
demosaic ("Entrastern") still being dispatched to the CPU, which is very
slow, when they are normally dispatched to the GPU. Statistics below. It
seems the only module that is supposed to be on the CPU is Gamma, and
it's so blazingly fast that we don't need to care. Sorry for the German,
but you get the idea. This is only from launching darktable in
lighttable view:

$ grep 'on CPU' /tmp/dt-perf-opencl.log | sort -k7 | uniq -f6 -c | sort -nr
    124 [dev_pixelpipe] took 0,000 secs (0,000 CPU) processed `Gamma' on CPU, blended on CPU [thumbnail]
      6 [dev_pixelpipe] took 0,026 secs (0,076 CPU) processed `Entrastern' on CPU, blended on CPU [thumbnail]
      5 [dev_pixelpipe] took 0,276 secs (0,832 CPU) processed `Chromatische Aberration' on CPU, blended on CPU [thumbnail]
      5 [dev_pixelpipe] took 0,019 secs (0,060 CPU) processed `Spitzlicht-Rekonstruktion' on CPU, blended on CPU [thumbnail]
      2 [dev_pixelpipe] took 0,118 secs (0,348 CPU) processed `Raw-Schwarz-/Weißpunkt' on CPU, blended on CPU [thumbnail]
      2 [dev_pixelpipe] took 0,052 secs (0,140 CPU) processed `Weißabgleich' on CPU, blended on CPU [thumbnail]
      2 [dev_pixelpipe] took 0,023 secs (0,036 CPU) processed `Tonemapping' on CPU, blended on CPU [thumbnail]
      2 [dev_pixelpipe] took 0,008 secs (0,016 CPU) processed `Objektivkorrektur' on CPU, blended on CPU [thumbnail]
      2 [dev_pixelpipe] took 0,001 secs (0,004 CPU) processed `Ausgabefarbprofil' on CPU, blended on CPU [thumbnail]
      2 [dev_pixelpipe] took 0,001 secs (0,000 CPU) processed `Eingabefarbprofil' on CPU, blended on CPU [thumbnail]
      2 [dev_pixelpipe] took 0,000 secs (0,000 CPU) processed `Schärfen' on CPU, blended on CPU [thumbnail]
      2 [dev_pixelpipe] took 0,000 secs (0,000 CPU) processed `Basiskurve' on CPU, blended on CPU [thumbnail]
      1 [dev_pixelpipe] took 3,126 secs (9,444 CPU) processed `Raw-Entrauschen' on CPU, blended on CPU [thumbnail]
      1 [dev_pixelpipe] took 0,000 secs (0,000 CPU) processed `Drehung' on CPU, blended on CPU [thumbnail]





Re: [darktable-dev] OpenCL scheduling profiles

2017-04-09 Thread Matthias Andree
On 08.04.2017 at 14:29, Ulrich Pegelow wrote:
> Hi,
>
> I added a bit more flexibility concerning OpenCL device scheduling
> into master. There is a new selection box in preferences (core
> options) that allows to choose among a few typical presets.
>
> The main target are modern systems with very fast GPUs. By default and
> "traditionally" darktable distributes work between CPU and GPU in the
> darkroom: the GPU processes the center (full) view and the CPU is
> responsible for the preview (navigation) panel. Now that GPUs get
> faster and faster there are systems where the GPU so strongly
> outperforms the CPU that it makes more sense to process preview and
> full pixelpipe on the GPU sequentially.
>
> For that reason the "OpenCL scheduling profile" parameter has three
> options:
>
> * "default" describes the old behavior: work is split between GPU and
> CPU and works best for systems where CPU and GPU performance are on a
> similar level.
>
> * "very fast GPU" tackles the case described above: in darkroom view
> both pixelpipes are sequentially processed by the GPU. This is meant
> for GPUs which strongly outperform the CPU on that system.
>
> * "multiple GPUs" is meant for systems with more than one OpenCL
> device so that the full and the preview pixelpipe get processed by
> separate GPUs.
>
> At first startup darktable tries to find the best suited profile based
> on some benchmarking. You may at any time change the profile, this
> takes effect immediately.
>
> I am interested in your experience, both in terms of automatic
> detection of the best suited profile and in terms of overall
> performance. Please note that this is all about system latency and
> perceived system responsiveness in the darkroom view. Calling
> darktable with '-d perf' will only give you limited insights so you
> need to mostly rely on your own judgement.
>

Hi Ulrich,

1. gorgeous, thank you very much!

For me, the benchmarking seems to DTRT™ (do the right thing): it picks
the "very fast GPU" profile with a 2016 NVidia GeForce GTX 1060 6 GB and
an old 2009 AMD Phenom II X4 2.5 GHz 65 W quad-core. The code is compiled
with -O2 -march=native, with OpenMP and OpenCL enabled, and I get this:

[opencl_init] here are the internal numbers and names of OpenCL devices
available to darktable:
[opencl_init]   0   'GeForce GTX 1060 6GB'
[opencl_init] FINALLY: opencl is AVAILABLE on this system.
[opencl_init] initial status of opencl enabled flag is ON.
[opencl_create_kernel] successfully loaded kernel `zero' (0) for device 0
[...]
[opencl_init] benchmarking results: 0.029428 seconds for fastest GPU
versus 0.382860 seconds for CPU.
[opencl_init] set scheduling profile for very fast GPU.
[opencl_priorities] these are your device priorities:
[opencl_priorities] image   preview export  thumbnail
[opencl_priorities] 0   0   0   0
[opencl_priorities] show if opencl use is mandatory for a given pixelpipe:
[opencl_priorities] image   preview export  thumbnail
[opencl_priorities] 1   1   1   1
[opencl_synchronization_timeout] synchronization timout set to 0

2. What bothers me, though, are the timeouts and their defaults. In
practice the darkroom works ok-ish, but the lighttable does not. When
a truckload of small thumbnails (say, the lighttable zoomed out to show
10 columns of images) needs to be regenerated for the lighttable, it
*appears* (not yet corroborated with measurements) that bumping up the
timeouts considerably helps to avoid latencies, as though things were
deadlocking and waiting for the timer to break the lock. It might be an
internal issue with the synchronization though - how fine-grained is
the re-attempt? Is it sleep-and-retry, or does it use some form of
semaphores and signalling at the system level between threads?

I am running with these - possibly ridiculously high - timeout settings
(15 s). This is normally enough to process an entire export including a
few CPU segments (say, raw denoise - I need it on some high-ISO images,
ISO 6400+, to avoid black blotches or green stipples, but I have some
concerns about its quality altogether which don't belong in this thread).

opencl_mandatory_timeout=3000
pixelpipe_synchronization_timeout=3000

3. Would it be sensible to set one of these timeouts considerably higher
than the other?

4. Can we have -d perf log when timeouts occur that change the
scheduling decision (i. e. if a timeout causes a job to be dispatched to
a different device, with original intent, and dispatch target), and
4b. possibly a complete scheduler trace including all dispatch attempts?
Might help debug in the long run.





Re: [darktable-dev] Lens correction based on EXIF data on Sony cameras

2017-04-08 Thread Matthias Andree
Greetings,

I have created a related ticket with the lensfun project:

https://sourceforge.net/p/lensfun/bugs/78/


This really only matters for VignettingCorrection:

1. If the assessments in your references are true that TCA
(LateralChromaticAberration) is always baked into the ARW, then the
lensfun data will have been measured with TCA pre-corrected by the
camera, and the lensfun data will only compensate for the difference
between the in-camera precorrection and hugin's (lensfun's) idea of how
much TCA should be compensated.

2. Distortion is never baked into the ARW, so the lensfun corrections
can safely be applied to undistort images.

3. I have some images where shading compensation was on in-camera and
then applying the full correction in darktable based on lensfun data
would lead to over-correction. The workaround is either to fake a
smaller aperture setting in the lensfun module, or to turn vignetting
compensation off for now, which is a nuisance.


What needs to happen on darktable's end, code-wise, to:
1. expose the 0x2011 EXIF tag contents to lensfun to assist it in
picking the right of two alternate correction sets?
2. until the day that lensfun supports this (see ticket URL above),
automatically flip the switch for vignetting compensation to "off" in
the lens correction IOP?

I'm willing to help with code on both lensfun's and darktable's end, and
am proposing a Git feature branch tracking the development branches.
I've been programming C for 25+ years and to some limited extent C++ w/
STL, and Python 3 for a few years, too.


I think as a workaround for the time being the only way I have is to
alter the lens names (add "VignPreComp" to the name, or similar) and
manually choose them, to go along with datasets that compensate for the
delta between the in-camera pre-compensated vignetting, and the full
compensation. This can then be re-written properly once lensfun exposes
a way to choose from alternate data sets for the same lens.

Cheers,
Matthias




Re: [darktable-dev] Lens correction based on EXIF data on Sony cameras

2017-04-07 Thread Matthias Andree
On 07.04.2017 at 09:38, Heiko Bauke wrote:
> Hi,
>
> On 05.04.2017 at 07:33, Kelvie Wong wrote:
>> I just realized that on my Sony a7R II (and this probably applies to all
>> of the cameras in this series, probably even the NEX), if you have
>> certain camera settings, some lens corrections are baked into the ARW
>> files (Sony's raw format) -- that is the bit values appear to change,
>> not just the metadata.
>
> just for the record: This seems to be a common feature of Sony
> cameras. If I understand the settings menu correctly, also the Sony
> Alpha 6300 can apply in-camera vignetting correction to raw files. 
> This feature is turned on, when default factory settings are applied. 
> The pictures, which have been utilized to generate lens correction
> data in liblensfun, have actually been taken with Sony Alpha cameras. 
> Thus, it would be interesting to know, if liblensfun data is based on
> raw raw files or on raw files with in-camera vignetting correction. 

The lensfun profiling instructions tell people to switch those
corrections off, but it might be necessary to re-profile everything
paying heed to the settings; there may have to be two settings for each.
I will recheck the images I've used for my profile parts in the lensfun
database.

OTOH, some people expect "RAW" images to be free of all preprocessing.

And, finally, if the camera always bakes a certain correction into the
raw images regardless of settings (or because it won't permit you to
switch a certain correction "off") then of course the lensfun data will
have been produced based on pre-processed RAW.



Re: [darktable-dev] determining global image features in the pixelpipe

2017-02-18 Thread Matthias Andree
On 18.02.2017 at 12:38, Tobias Ellinghaus wrote:
> On Saturday, 18 February 2017, 12:29:08 CET, Matthias Andree wrote:

> > PR1441? Is that a typo?
> No: https://github.com/darktable-org/darktable/pull/1441

Having been part of FreeBSD for a while, and, um, being used to GNATS,
which we only "recently" replaced with Bugzilla, I misread PR as "problem
report" (i.e. "issue"), not pull request. Thanks.




Re: [darktable-dev] determining global image features in the pixelpipe

2017-02-18 Thread Matthias Andree
On 17.02.2017 at 14:56, Ulrich Pegelow wrote:
> I also suggest to use the globaltonemap module as a guiding example.
> Please beware that the current implementation has an issue if the
> preview pixelpipe runs slower than the full (center) one - a case that
> frequently happens when darktable runs with OpenCL support.
>
> To address this issue there is currently some code in PR1441 which is
> currently in review. As soon as it gets merged I suggest that you
> apply the same principle in your code.

PR1441? Is that a typo?

I generally also propose to review the OpenCL pipeline defaults - with a
mid-range card I frequently observe that the preview pipe is not
permitted to render using OpenCL, and that it is then far slower (a
factor of 5) on a quad-core CPU at 2.5 GHz.





Re: [darktable-dev] Darktable in Ubuntu on Windows 10

2016-12-01 Thread Matthias Andree
On 29.11.2016 at 10:45, Blandyna Bogdol wrote:
> Hi,
>
> yes, we speak about this:
> https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux
>
> With Ubuntu 14.04 (at the moment)... I will try the upgrade and hope I
> can use the new darktable.
>
> But my question was: how can I use darktable 2.2 rc2 on this system?
> I cannot build the software from the sources.

Marketing blurb (that makes the article questionable) on Wikipedia aside:

Ubuntu 14.04 is - unlike 14.10, 15.04, and 15.10 - still what Ubuntu
would call "supported", but Ubuntu "support" does not pertain to all
packages. Be sure to install the latest ubuntu-support-status package,
and then run ubuntu-support-status with various options to see which
packages have never been maintained, or have fallen out of support.

https://www.ubuntu.com/info/release-end-of-life
https://wiki.ubuntu.com/Releases

Note that on an average usable Ubuntu Desktop, a considerable part of
packages is NOT supported. This is a flaw in the Ubuntu distribution
concept. On my system, Ubuntu 16.04.1 LTS, env LANGUAGE=en
ubuntu-support-status gets me this:

> You have 14 packages (0.2%) supported until Dezember 2021 (5y)
> You have 2768 packages (40.8%) supported until April 2021 (5y)
> You have 1012 packages (14.9%) supported until Januar 2017 (9m)
> You have 1098 packages (16.2%) supported until April 2019 (3y)
>
> You have 29 packages (0.4%) that can not/no-longer be downloaded
> You have 1857 packages (27.4%) that are unsupported

So the conclusion is that Ubuntu "support" is smoke and mirrors, because
only half of the packages are supported for a reasonable amount of time,
and a quarter aren't supported at all.




Re: [darktable-dev] Darktable in Ubuntu on Windows 10

2016-11-29 Thread Matthias Andree
On 28.11.2016 at 20:34, Blandyna Bogdol wrote:
> For a long time I used darktable on Ubuntu. Yet now I have Windows 10 Home
> (I need some Windows tools) and I am using the developer mode of Windows 10
> with Ubuntu 14.10.

Ubuntu 14.10 is no longer supported by its vendor. Upgrade.




Re: [darktable-dev] darktable-cli multiple files

2016-10-13 Thread Matthias Andree
On 11.10.2016 at 02:20, Ben Suttor wrote:
> Hi all,
> I'm working on a small tool for coloring videos (frames) using
> darktable. In order to do that I use darktable-cli to render frame by
> frame. Unfortunately this is a quite slow process compared to what
> darktable itself can do. Exporting the same files within darktable
> gives me an export rate of 1.1 fps compared to 0.25 fps using
> darktable-cli. Is there any way to give darktable-cli a list of frames
> to render? If not, is it planned to implement such functionality?
As a workaround, does it help to parallelize the work /externally/, for
instance with GNU parallel (which itself uses Perl)?
https://www.gnu.org/software/parallel/
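
Untested sketch of what I mean (frame names and the sidecar file are made
up; concurrent darktable-cli instances may also need separate --configdir
directories so they do not fight over the library lock):

mkdir -p out
ls frames/*.png | parallel -j4 \
  darktable-cli {} grade.xmp out/{/.}.jpg --core --configdir /tmp/dt-{%}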




[darktable-dev] Sony RX10 noise profiles available -> https://www.darktable.org/redmine/issues/11091

2016-09-03 Thread Matthias Andree
Greetings,


since redmine was acting up on me when trying to upload, I wanted to
drop a quick note to check if anyone is aware of the Sony RX-10 noise
profiles I created:

https://www.darktable.org/redmine/issues/11091

The necessary files could not be uploaded and are available at




It'd be good to see these included.

Thanks.

Cheers,
Matthias
