Re: [darktable-user] Weird artifacts with color zones module in V3

2020-01-03 Thread Ulrich Pegelow
I confirm that the color zones module easily produces artifacts. This 
happens especially on changes to the lightness curve, and with 
"process mode = strong" you may get artifacts almost immediately.


As far as I can see, 2.6 was not much different. In the old version 
everything was processed as if "process mode = smooth" had been 
selected, but even then you would usually see artifacts on close inspection.


I think this needs some more care in the module. The root cause seems to 
be the granular nature of the intermediate mask in that module. That is 
not uncommon when selecting by hue, which tends to have marked 
variations from pixel to pixel. It is also logical that a change to the 
lightness curve triggers more problems: lightness artifacts are much 
more easily seen than small color inconsistencies.


A possible solution could be to offer a blurring option for the mask 
(probably best done with a bilateral filter). That's something to 
consider for 3.2.


For now I suggest using "process mode = smooth" in any case where you 
want to adjust lightness. Another suggestion is not to make too narrow a 
selection in the lightness curve (on the hue axis). The new module 
version together with the range-selecting color picker invites you to 
produce very narrow, specific peaks in the curves, but then you end up 
with a very granular mask. If you make the peak broader (as you probably 
did in 2.6), most of the artifacts are gone.



On 30.12.19 at 23:28, Giulio Roman wrote:

Hi,

since upgrading to v3 I have been struggling with the color zones module.

With 2.6 I had to really push it to see artifacts. Now they appear much more easily.

I have a basic landscape picture with light blue sky and subtle clouds. I'm 
trying to increase the blues and darken them.

If I use the defaults of the color zones module, I see artifacts as soon as I 
move the lightness curve slightly downwards in the blues (after increasing 
saturation around the same hue values).

If, instead, I choose the preset "black and white film", then move the whole 
saturation curve back to 0 (keeping the points scattered along the hue axis), and 
increase the saturation in the blues, I can then drop the lightness much more and get a 
better result. It basically seems that having more points scattered along the hue axis 
helps avoid artifacts on the clouds.

Have there been changes to the module's inner workings as well, or just to the 
UI?

Or am I totally missing something?

I don't remember whether "process mode = strong" was already present in 2.6; 
changing it to "smooth" helps a little but does not solve the problem, it just 
weakens the effect.

I'm usually a person who welcomes change, and even when I initially find 
something new difficult, I keep trying until I succeed before judging negatively. But 
this time I'm having serious issues using a module which was one of my 
default go-tos before...

Please someone shine a light on this :-)

Thanks in advance
Giulio


darktable user mailing list
to unsubscribe send a mail to darktable-user+unsubscr...@lists.darktable.org



Re: [darktable-user] OpenCL and Memory optimization (6G GPU, 16-32G system)

2019-12-23 Thread Ulrich Pegelow

On 23.12.19 at 16:37, Аl Воgnеr wrote:


7,393968 [dev_process_export] pixel pipeline processing took 0,608 secs
(0,915 CPU)


This is blazingly fast.


Interesting is:
Spitzlicht-Rekonstruktion' on GPU, blended on GPU [export]

With bench.srw it was calculated by the CPU. So the use of the CPU
depends on something other than the coding of the module itself.


Probably due to selecting color reconstruction in that module, which has 
no GPU implementation.




So my question is whether some of these values should be changed, because the
new system is a lot more powerful:

cache_memory=1073741824
maximum_number_tiles=1
metadata/resolution=300
opencl_checksum=3732205163
opencl_device_priority=*/!0,*/*/*
opencl_mandatory_timeout=200
opencl_memory_requirement=768
opencl_micro_nap=1000
opencl_number_event_handles=25
opencl_size_roundup=16
pixelpipe_synchronization_timeout=200
plugins/map/max_images_drawn=100
plugins/map/max_outline_nodes=1




Looks good. Make sure to select "very fast GPU" under OpenCL scheduling 
profile in preferences->core options.





Re: [darktable-user] OpenCL and Memory optimization (6G GPU, 16-32G system)

2019-12-23 Thread Ulrich Pegelow

On 23.12.19 at 10:22, Аl Воgnеr wrote:

I tried "opencl_memory_headroom=100"

10,485934 [dev_process_export] pixel pipeline processing took 3,299
secs (8,980 CPU)

So there is no big difference.


Assuming that the history you provided is a typical one, there is only 
one area where OpenCL memory optimization could help: avoiding the need 
for tiling in the contrast equalizer module.


But there is not much to gain. With tiling that module needs 0.43 secs; 
the pure GPU processing time is 0.3 secs (eaw_decompose + 
eaw_synthesize). Without tiling there is less overhead and less overlap, 
so you might save about 0.2 secs in total. Judge for yourself whether 
this is worth the effort.






Re: [darktable-user] OpenCL and Memory optimization (6G GPU, 16-32G system)

2019-12-23 Thread Ulrich Pegelow
Also note that in your case there are a few modules which are 
processed on the CPU:


6,983092 [dev_pixelpipe] took 0,096 secs (0,997 CPU) processed 
`Spitzlicht-Rekonstruktion' on CPU, blended on CPU [export]
7,069714 [dev_pixelpipe] took 0,087 secs (1,365 CPU) processed 
`Entrastern' on CPU, blended on CPU [export]
7,464816 [dev_pixelpipe] took 0,395 secs (3,275 CPU) processed 
`Tonemapping' on CPU, blended on CPU [export]


So you should probably subtract an offset of 0.6 secs when comparing 
timings of different OpenCL settings.
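
For comparisons like that, the offset can be computed directly from a 
'-d perf' log. The log file name (darktable.log) and the awk field 
position are assumptions about how the output was captured; the comma 
decimal separator matches the locale of the log quoted above.

```shell
# Sum the "took X secs" wall times of all pixelpipe lines processed on
# the CPU; the result is the offset to subtract from total timings.
# darktable's perf log uses a comma decimal separator here, hence gsub.
if [ -f darktable.log ]; then
  grep 'on CPU' darktable.log |
    awk '{ gsub(",", ".", $4); sum += $4 } END { printf "%.3f\n", sum }'
fi
```

For the three modules quoted above this sums 0,096 + 0,087 + 0,395, i.e. 
roughly the 0.6 secs mentioned.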





Re: [darktable-user] GPU recommendation

2019-12-22 Thread Ulrich Pegelow

On 22.12.19 at 23:10, Аl Воgnеr wrote:

You might need to adjust parameter opencl_memory_headroom in
darktablerc.


Thanks for reminding me to do this. Could you please tell me the exact
name of the variable(s) to change?


As previously written: opencl_memory_headroom



$ nvidia-smi
Sun Dec 22 23:04:06 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 435.21       Driver Version: 435.21       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1660    Off  | :2F:00.0          On |                  N/A |
|  0%   39C    P0    23W / 130W |    676MiB /  5941MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      2482      G   /usr/lib/xorg/Xorg                           586MiB |
|    0      3298      G   xfwm4                                          4MiB |
|    0      4933      C   /usr/bin/darktable                            73MiB |
+-----------------------------------------------------------------------------+



Your nvidia-smi output could indicate leaking memory. About 700MB of 
VRAM is in use, which seems high to me. However, I don't know the 
requirements of xfwm4; maybe it's normal. If this memory usage is stable 
you will need to set opencl_memory_headroom to 700 or 800.






Re: [darktable-user] GPU recommendation

2019-12-22 Thread Ulrich Pegelow

On 22.12.19 at 19:32, Jochen Keil wrote:
However, both denoise modules, which used to run perfectly on the GPU, 
now fall back to the CPU and add around 14s each!


According to the log there's a tiling issue, but I don't understand why. 
It's still the same 6GB GPU that used to work with 2.6.x.




You might need to adjust the parameter opencl_memory_headroom in 
darktablerc. In your case it's set at its default of 300MB. The parameter 
tells darktable how much of the total VRAM it should assume to be 
reserved by other processes; darktable then takes all the rest. If this 
parameter is set too low, darktable will fail in its GPU memory allocations.


Your system should have the nvidia-smi tool. It will tell you how much 
of the memory is currently in use. That would be a first starting point 
for opencl_memory_headroom.
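
As a sketch of that starting point: take the usage nvidia-smi reports 
and round it up a bit. The 100 MiB margin and the nvidia-smi query 
options are my own assumptions, not darktable recommendations.

```shell
# Suggest an opencl_memory_headroom value from current VRAM usage (MiB).
suggest_headroom() {          # $1 = VRAM currently in use, in MiB
  echo $(( $1 + 100 ))        # add ~100 MiB as a safety margin
}

# With a live NVIDIA driver you could feed in the real number, e.g.:
#   suggest_headroom "$(nvidia-smi --query-gpu=memory.used \
#                         --format=csv,noheader,nounits | head -n1)"
suggest_headroom 676          # the 676MiB reported by nvidia-smi earlier
```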


I recently learned that on a typical KDE system one of the 
components (plasmashell, krunner, ...) is notorious for leaking GPU 
memory. The longer the system runs, the less VRAM is free. You should be 
able to see this if you watch nvidia-smi over a longer period.


It is said that running this script from time to time frees the GPU 
memory again:


#!/bin/bash

# Restart plasmashell to release the GPU memory it has leaked.
kquitapp5 plasmashell

kstart5 plasmashell &






Re: [darktable-user] GPU recommendation

2019-12-20 Thread Ulrich Pegelow

On 20.12.19 at 09:37, Jochen Keil wrote:

*Now* I'm really looking forward to Christmas 😄



There is one caveat. The main performance improvement in this context 
has been the implementation of the guided filter on the GPU. The guided 
filter has huge memory requirements; in your example it needs about 5GB 
of VRAM for export. As there is no support for tiling in the GPU code, 
it will fall back to slow CPU processing if memory is too tight. You 
should be fine with your 6GB card, though.





Re: [darktable-user] GPU recommendation

2019-12-16 Thread Ulrich Pegelow

On 17.12.19 at 07:30, Jochen Keil wrote:
However, I usually 
make broad use of parametric masks with feathering. 


I would be very surprised if feathering was the bottleneck. I have a 
1060 myself and feathering adds less than about 100ms per module on 
export (7k x 4.5k image). That's with 3.0.rc2.





Re: [darktable-user] GPU recommendation

2019-12-16 Thread Ulrich Pegelow

On 17.12.19 at 00:26, Holger Wünsche wrote:
The most expensive modules are the exposure 1+2 and tone curve 3. These 
are the three modules with masks. When removing them the time is down to 
6s.


Drawn mask rendering is really slow and does not profit from the GPU. 
There is currently work underway to improve rendering speed (PR 3739, 
needs rebase).





Re: [darktable-user] GPU recommendation

2019-12-16 Thread Ulrich Pegelow

On 17.12.19 at 00:38, Michael Rasmussen wrote:


The reason for such a long time is that the GPU ran out of memory
and processing had to be done by the CPU.



Then you should see error messages when running with '-d opencl', and 
something would effectively be really wrong.


Is there any indication of tiling? If GPU memory is too tight to process 
the image in one go, darktable will run the module tile-wise. This is 
also quite expensive in terms of performance, but I doubt that it 
explains the numbers reported here.


Are these modules extensively using drawn masks (e.g. the brush)? The 
module names indicate something like that. Shape rendering in drawn 
masks is slow and does not profit from the GPU.






Re: [darktable-user] crash in dt 2.4.4

2019-11-20 Thread Ulrich Pegelow
Your crash report shows a critical fault in Lua. Maybe an incompatible 
update of the Lua libraries on your system? Or maybe some change to your 
luarc file (in darktable's config directory)?


On 20.11.19 at 18:06, Bernhard wrote:

Hi,

I'd been working happily with dt 2.4.4 on Mint 18 for the time being 
(simply didn't have the time to upgrade that Mint).


But for some days now darktable crashes while scrolling through a large 
collection in lighttable.


~ darktable -d all reports






Re: [darktable-user] darktable 3.0rc1 openCl

2019-11-18 Thread Ulrich Pegelow

Error number -6 means CL_OUT_OF_HOST_MEMORY.

Your driver tried to allocate memory on the computer (not the graphics 
card) and failed.


Happened to me once with NVIDIA and a system with 8GB RAM. There were a 
lot of processes running in the background. After terminating them it 
worked.


There is not much darktable can do here, as this happens at an early 
stage of OpenCL initialization which is fully under the driver's control.





Re: [darktable-user] OpenCL error starting dt

2019-08-17 Thread Ulrich Pegelow
You don't give details on your HW setup. However, from past experience 
this error code looks like an NVIDIA-specific one. It tends to be 
generated when the program is not able to access the required device 
special files:


crw-rw+ 1 root video 195,   0 17. Aug 08:19 /dev/nvidia0
crw-rw+ 1 root video 195, 255 17. Aug 08:19 /dev/nvidiactl
crw-rw+ 1 root video 195, 254 17. Aug 08:19 /dev/nvidia-modeset
crw-rw+ 1 root video 239,   0 17. Aug 08:19 /dev/nvidia-uvm

Either the access permissions do not fit: the user running darktable 
needs write access. In my case the user needs to be a member of the 
video group. Easy to fix.


Or those device special files are not generated at all. This may happen 
if your X11 system is started on a secondary graphics system like Intel. 
In that case the NVIDIA drivers and device special files might not be 
loaded/generated by default. NVIDIA OpenCL tries to take care of this by 
calling /usr/bin/nvidia-modprobe. But as this happens in user space, the 
permissions to load the kernel module and generate the device special 
files are not sufficient unless nvidia-modprobe is SUID root; and 
typically the SUID flag is not set for nvidia-modprobe in most distros. 
Simple test: run darktable-cltest as root. If you can then use OpenCL in 
darktable as a normal user, the issue is confirmed. You will need to set 
the SUID flag on nvidia-modprobe manually.
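
The permission part of this can be sketched as a small check; the device 
paths and the video group are the typical NVIDIA ones from above and may 
differ on your distro.

```shell
# Report the first required device node the current user cannot write to;
# darktable's OpenCL init fails if any of them is inaccessible.
check_dev_access() {
  for dev in "$@"; do
    [ -w "$dev" ] || { echo "no write access to $dev"; return 1; }
  done
  echo "all device nodes writable"
}

# "|| true" keeps the shell alive on machines without an NVIDIA card.
check_dev_access /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm || true
```

If this reports a missing permission, add the user to the video group; 
if the nodes do not exist at all, the darktable-cltest SUID test above 
applies.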


Ulrich


On 16.08.19 at 22:50, David Vincent-Jones wrote:
I am getting this error message despite all indications that OpenCL 
is in place.


0.231688 [opencl_init]
0.231905 [opencl_init] found opencl runtime library 'libOpenCL'
0.231929 [opencl_init] opencl library 'libOpenCL' found on your system 
and loaded

0.232108 [opencl_init] could not get platforms: -1001
0.232116 [opencl_init] FINALLY: opencl is NOT AVAILABLE on this system.
0.232119 [opencl_init] initial status of opencl enabled flag is OFF.

darktable 2.7.0+1613~gdd7ed32b1 ... Manjaro/Arch

I have tried with and without my secondary monitor in order to avoid 
memory conditions.







Re: [darktable-user] color contrast

2019-04-24 Thread Ulrich Pegelow
IMHO that module is a bit basic while being easy to handle with just two 
sliders. If you want more control have a look at the a and b panels in 
the tone curve module (switch to Lab, independent channels). Increasing 
the steepness of the curve (without shifting its center point) is 
equivalent to a color contrast enhancement. The advantage of the tone 
curve module: you can adjust positive and negative a/b values independently.


On 24.04.19 at 18:51, David Vincent-Jones wrote:
Thank you Ulrich ... that gives me a better understanding of why some 
changes in the sliders are less intuitive (to me). It is a module that, 
without my fully understanding it, I have found fairly useful strangely 
enough.


On 2019-04-24 9:19 a.m., Ulrich Pegelow wrote:
The module enhances color contrast, i.e. the two sliders enhance the 
separation between positive and negative a and b values, respectively. 
In this sense it works similarly to a tonal contrast that acts on 
lighter and darker gray tones.


The overall effect depends on your image. Example: if your image has 
substantial red/magenta colors and no green ones a green-magenta 
enhancement will push red/magenta colors. On a balanced image you 
should observe effects in both color directions.


On 24.04.19 at 06:33, David Vincent-Jones wrote:
My question firstly is: am I misunderstanding the function? If the 
labels are in fact switched then I will certainly file on RedMine







Re: [darktable-user] color contrast

2019-04-24 Thread Ulrich Pegelow
The module enhances color contrast, i.e. the two sliders enhance the 
separation between positive and negative a and b values, respectively. 
In this sense it works similarly to a tonal contrast that acts on 
lighter and darker gray tones.


The overall effect depends on your image. Example: if your image has 
substantial red/magenta colors and no green ones a green-magenta 
enhancement will push red/magenta colors. On a balanced image you should 
observe effects in both color directions.


On 24.04.19 at 06:33, David Vincent-Jones wrote:
My question firstly is: am I misunderstanding the function? If the 
labels are in fact switched then I will certainly file on RedMine


On 2019-04-23 7:22 p.m., Patrick Shanahan wrote:

* David Vincent-Jones  [04-23-19 21:46]:

green vs magenta  moving the mark towards magenta adds magenta (as
expected)

blue vs yellow . moving the mark towards yellow adds blue (not as
expected)

did you file issue at GitHub?






Re: [darktable-user] Colour reconstruction

2019-01-15 Thread Ulrich Pegelow

Good question :)

darktable (mostly) uses an unbounded color workflow, which means that L 
values outside their normal definition range, i.e. above 100, are 
handled like any other values.


The threshold parameter tells the module above which L value a pixel 
should be subject to color reconstruction. Due to the unbounded nature 
of the pixelpipe there is no reason to set this to 100 as a hard upper 
limit. In real life you will probably find few cases with threshold 
values above 100.


Ulrich

On 15.01.19 at 13:11, Bruce Williams wrote:

Hi all,
I've been questioned over something I said in ep022 of my video series, 
regarding the threshold slider in the Colour reconstruction module.
Can anyone educate me on why the slider can be set to values higher than 
100%?

Thanks in advance.
Cheers,
Bruce Williams





Re: [darktable-user] Supporting Heidelberg Newcolor 16bit LAB Color TIFF files for high-end film scans

2019-01-15 Thread Ulrich Pegelow

On 15.01.19 at 15:33, Christian Stromberg wrote:

Hi,

thanks for the hints! It took me some time to fill in all the missing 
dependencies. Because it's the first time I am compiling a whole 
program, I wasn't familiar with the specific library names for Linux 
Mint. Looking them up and installing them while repeatedly running the 
build script to see if a missing component had been successfully 
installed took the largest part of the time.


I've got the new version running and from my point of view, I can 
happily report that the display of the files is absolutely identical 
(see attachments). Thank you very much for your efforts!


Nice to hear!



Is sRGB now just used for display, or is the export output also 
restricted to sRGB?


Export is not restricted; you may choose the output gamut from some 
pre-defined profiles and you may also use third-party profiles. However, 
right now PR1996 would be limited to sRGB on the *input* side when it 
comes to CIELAB or ICCLAB TIFF files. Color dynamics outside of sRGB are 
lost for these files.




The color profiles of the scanners I sent you have been created using 
IT8 targets. So the range of colors included in these profiles is not 
theoretical but a practical measurement using real slide films showing 
IT8 color tables and correcting the scanners colors by using lab 
measurements that are provided alongside the IT8 slide film targets. 
This is also the reason why, using the same Fuji Provia 100F IT8 target, 
the range of colors within the ICC profile of the Tango is much larger 
than that of the Nexscan: the Tango is a drum scanner while the Nexscan 
is a CCD based flatbed scanner.




I've checked your profiles and they are significantly larger than sRGB 
or even AdobeRGB. In order to cope with these colors we would need to 
internally convert to a wide-gamut profile. Rec2020 and ProPhoto come 
to mind. As darktable already has Rec2020 defined, this would be my 
profile of choice in this case.


I've just enhanced PR1996 accordingly.

Ulrich

But if export is not restricted to sRGB and it is just the displaying 
part that uses sRGB, I would think that most users can live with that 
just fine.


Best wishes,

Christian







Re: [darktable-user] Supporting Heidelberg Newcolor 16bit LAB Color TIFF files for high-end film scans

2019-01-14 Thread Ulrich Pegelow

Hi,

you will need to compile from source. Best you follow the procedure 
described as "git version" further down on 
https://www.darktable.org/install/. This will give you darktable in its 
most recent development version.


In order to test the PR I suggest:

cd $HOME/darktable
git checkout -b Lab_tiff master
git pull https://github.com/upegelow/darktable.git Lab_tiff

Then compile again with ./build.sh.

Best wishes

Ulrich


On 14.01.19 at 17:55, Christian Stromberg wrote:
I found the PR on github and the changes in the C code. Can anybody 
point me in the direction how I can test these? Do I have to compile 
something? This is probably a very general question and I don't want to 
pollute this thread with it, so a link to further information would be 
sufficient. I just have trouble finding this kind of information even 
after spending some time googling.


Best wishes,

Christian





Re: [darktable-user] Supporting Heidelberg Newcolor 16bit LAB Color TIFF files for high-end film scans

2019-01-13 Thread Ulrich Pegelow

On 12.01.19 at 17:42, johannes hanika wrote:

thanks for this! it works fine on my machine.

the conversion is done in lcms2, right? they do support unbounded
transforms now. if we're not clipping to [0,1] the colour space should
not matter at all. i'm not sure i understand the difference between
icclab and cielab. 


To my understanding this is just a different convention for how a/b 
values are stored: either as signed integers or as unsigned values with 
a 128 offset.
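
A minimal sketch of that convention for 8-bit data (my reading of the 
TIFF spec; 16-bit ICCLAB data uses a correspondingly larger offset of 
32768):

```shell
# ICCLAB stores 8-bit a/b channels as unsigned values with a +128 offset;
# CIELAB stores them as signed values. Converting back is a subtraction.
icclab_to_signed() { echo $(( $1 - 128 )); }

icclab_to_signed 156    # unsigned 156 is signed +28 (a reddish a* value)
icclab_to_signed 100    # unsigned 100 is signed -28 (greenish)
```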



but if we could just keep it in lab (cie, d50), we
would need to select lab as input colour profile, too. this will
result in disabling the colorin module and thus faster processing by
handing down the buffer that is already lab.


But then we would need to make sure that all modules up to colorin can 
handle Lab correctly. Those modules have been developed with RGB input 
in mind. Even simple cases like perspective correction make this 
assumption when it comes to detecting line structures in the image. 
Other modules like haze removal, denoise, and graduated density depend 
even more strongly on RGB data.


Ulrich





Re: [darktable-user] Supporting Heidelberg Newcolor 16bit LAB Color TIFF files for high-end film scans

2019-01-12 Thread Ulrich Pegelow
There is now a PR on GitHub that makes darktable read LAB TIFF files 
(PR1996). Please give it a try.


FYI, the TIFF file gets internally converted to sRGB when opened. sRGB 
represents a much smaller gamut than Lab. Please check whether this 
causes problems like much too low color dynamics. A larger gamut like 
AdobeRGB could be an option in that case but would require a bit more work.


Ulrich

On 09.01.19 at 15:22, Christian Stromberg wrote:
With the latest update that includes the new spot healing tool, 
darktable is now a serious choice for scanner operators as well. 
However, it still fails to correctly interpret 16bit TIFF files in LAB 
color space that are output by Heidelberg's scanner software Newcolor. 
This software supports Heidelberg's high-end flatbed and drum scanners 
that are still in use worldwide in considerable numbers and its output 
in LAB color space allows for the most flexible editing of the scans. 
Photoshop and Lightroom do read it correctly while darktable screws up 
the colors, making everything look neon colored, which btw. is also the 
case for many other tools. I still haven't found out what the cause is.


What would be needed to get that fixed? Does it make sense to upload 
such a TIFF file to rawpixls?


Best wishes,

Christian






Re: [darktable-user] Supporting Heidelberg Newcolor 16bit LAB Color TIFF files for high-end film scans

2019-01-09 Thread Ulrich Pegelow

On 09.01.19 at 15:43, Michael Below wrote:

Hi,
I remember similar issues some years ago, iirc only imagemagick was able to 
process similar tiff images correctly. I think most other programs were using a 
certain library (gtk?) that caused these issues.
Good luck!


Do you know whether GraphicsMagick is able to read these files correctly?

Background of my question: darktable has two(*) places where it reads 
TIFF files. The first one is its internal routine in 
common/imageio_tiff.c. If this fails (and the file does not get 
processed by rawspeed) we give it a last try with GraphicsMagick.


I am quite sure that our imageio_tiff is not able to handle the 
Heidelberg files correctly (and we will probably not have the capacity 
to change that). However, it might be easier to properly detect them as 
not processable, fail at that step, and thereby delegate them to 
GraphicsMagick.


Ulrich

(*) Well, in fact three places, as many raw files are also TIFF files 
internally, which implies that rawspeed might also try to read the files 
in question and fail. That's t.b.c.





Re: [darktable-user] Collections don't work after update to 2.6.0

2018-12-27 Thread Ulrich Pegelow

On 27.12.18 at 14:10, FK wrote:

Hi Ulrich,

thanks for your advice! Is this a quickfix, meaning with the next update
/ upgrade the problem pops up again or is this the best way to go?
If this is a known bug - can I help in any way to get it solved?



Please post the lines starting with "plugins/lighttable/collect/" from 
your darktablerc.


Ulrich





Re: [darktable-user] Collections don't work after update to 2.6.0

2018-12-27 Thread Ulrich Pegelow

Hi,

there have been sporadic reports of this issue before. As a quick fix 
you can go into ~/.config/darktable/darktablerc and manually delete all 
lines starting with "plugins/lighttable/collect/".
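
Assuming the default config location, the same quick fix can be done in 
one shell command. Back up the file first, and only edit it while 
darktable is not running.

```shell
RC=~/.config/darktable/darktablerc
if [ -f "$RC" ]; then
  cp "$RC" "$RC.bak"                              # keep a backup
  # Delete every stored collection rule; darktable recreates them.
  sed -i '\|^plugins/lighttable/collect/|d' "$RC"
fi
```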


When you start darktable again you will once more see all your images, 
but now you should be able to define a new collection, e.g. based on one 
of your filmrolls.


Ulrich

On 26.12.18 at 22:29, FK wrote:

I just upgraded to 2.6.0 from 2.4.4 under Ubuntu 18.04 and started up
darktable again after the upgrade. Seemed normal, but I wondered, why
I'm facing way old pictures and not the collection I was working on
before the upgrade. So I clicked on Collections -> Folders -> 2018 -> ...
wait a moment, nothing changed. Still says 21617 pictures in the recent
collection (that's obviously ALL my pictures). Strange - no folder was
working. Tried filmrolls. Tags. Cameras. Nothing. Restarted DT.
Rebooted. No change. I can't access any of my collections - always
showing all of them pictures. Only thing working is "show all", "show 1
star", ... to narrow things down.

Any suggestions what I can do?!?!






Re: [darktable-user] "Inconsistent Output" Message

2018-12-09 Thread Ulrich Pegelow

On 09.12.18 at 08:59, Stéphane Gourichon wrote:
My guess is a memory corruption error in some part of darktable. Of 
course it would be nice if this could somehow be traced. (I'm willing to, 
but currently not using darktable much, and quite busy with other projects.)




Well, no. The error message appears when darktable is not able to 
synchronize the two pixelpipes in darkroom mode: the preview one for the 
navigation window and the "full" one for the center view. Both 
pixelpipes normally run asynchronously as separate processes. Some of 
darktable's modules require synchronization: the full pixelpipe needs 
to wait at certain points until the preview pixelpipe produces the 
needed data. Without that data, the module in the full pixelpipe would 
produce wrong results, and the output of the two pixelpipes would no 
longer be consistent.


In some cases synchronization fails, e.g. when the full pixelpipe needs 
to wait too long and a pre-defined timeout value is exceeded (config 
variable pixelpipe_synchronization_timeout). If that happens, the above 
warning message is displayed. However, this is of no big concern. 
Typically with the next processing step all is good again; any zooming 
in or out, panning, (de)activating of modules or any value change is 
sufficient.


Ulrich





Re: [darktable-user] DT on iMac with Mojave: no openCL

2018-12-03 Thread Ulrich Pegelow
I don't see anything specifically broken here. Let's take for example 
the snippet from the quote below.


Time spent in OpenCL kernels including memory I/O is 0.055 seconds. 
Total time in the pixelpipe is 0.123 seconds. There is a bit of overhead 
here, but that's not dramatic. You may try setting 
opencl_use_pinned_memory=TRUE, which might improve this a bit.


Overall, the time spent in this section lies between timestamps 130,658 
and 130,781, which matches the figures above.


I'd say if you are losing performance, it's probably not in the 
pixelpipe and likely not OpenCL-related. That also fits with your 
observation that you don't see a difference with/without OpenCL.


Ulrich


On 03.12.18 at 18:20, Volker Lenhardt wrote:

130,658562 [pixelpipe_process] [full] using device 0
130,660026 [dev_pixelpipe] took 0,001 secs (0,002 CPU) initing base 
buffer [full]
130,680802 [dev_pixelpipe] took 0,021 secs (0,020 CPU) processed 
`Raw-Schwarz-/Weißpunkt' on GPU, blended on GPU [full]
130,682650 [dev_pixelpipe] took 0,002 secs (0,002 CPU) processed 
`Weißabgleich' on GPU, blended on GPU [full]
130,684399 [dev_pixelpipe] took 0,002 secs (0,001 CPU) processed 
`Spitzlicht-Rekonstruktion' on GPU, blended on GPU [full]
130,700017 [dev_pixelpipe] took 0,016 secs (0,013 CPU) processed 
`Entrastern' on GPU, blended on GPU [full]
130,704482 [dev_pixelpipe] took 0,004 secs (0,004 CPU) processed 
`Basiskurve' on GPU, blended on GPU [full]
130,708880 [dev_pixelpipe] took 0,004 secs (0,004 CPU) processed 
`Eingabefarbprofil' on GPU, blended on GPU [full]
130,717590 [dev_pixelpipe] took 0,009 secs (0,007 CPU) processed 
`Schärfen' on GPU, blended on GPU [full]
130,724571 [dev_pixelpipe] took 0,007 secs (0,006 CPU) processed 
`Ausgabefarbprofil' on GPU, blended on GPU [full]
130,781679 [dev_pixelpipe] took 0,057 secs (0,078 CPU) processed `Gamma' 
on CPU, blended on CPU [full]
130,781716 [opencl_profiling] profiling device 0 ('AMD Radeon Pro 570 
Compute Engine'):
130,781720 [opencl_profiling] spent  0,0005 seconds in [Write Image 
(from host to device)]

130,781723 [opencl_profiling] spent  0,0001 seconds in rawprepare_1f
130,781740 [opencl_profiling] spent  0,0002 seconds in whitebalance_1f
130,781742 [opencl_profiling] spent  0,0002 seconds in highlights_1f_clip
130,781744 [opencl_profiling] spent  0,0004 seconds in ppg_demosaic_green
130,781746 [opencl_profiling] spent  0,0005 seconds in ppg_demosaic_redblue
130,781749 [opencl_profiling] spent  0,0002 seconds in border_interpolate
130,781764 [opencl_profiling] spent  0,0120 seconds in 
interpolation_resample

130,781766 [opencl_profiling] spent  0,0022 seconds in basecurve_lut
130,781768 [opencl_profiling] spent  0,0017 seconds in colorin_unbound
130,781770 [opencl_profiling] spent  0,0019 seconds in sharpen_hblur
130,781772 [opencl_profiling] spent  0,0022 seconds in sharpen_vblur
130,781774 [opencl_profiling] spent  0,0025 seconds in sharpen_mix
130,781777 [opencl_profiling] spent  0,0036 seconds in colorout
130,781779 [opencl_profiling] spent  0,0270 seconds in [Read Image (from 
device to host)]
130,781781 [opencl_profiling] spent  0,0550 seconds totally in command 
queue (with 0 events missing)
130,781789 [dev_process_image] pixel pipeline processing took 0,123 secs 
(0,137 CPU)






Re: [darktable-user] DT on iMac with Mojave: no openCL

2018-12-03 Thread Ulrich Pegelow

Two observations:

1) the total time that darktable reports per pixelpipe lies in the range 
of 0.15s. That's not particularly fast, given your very undemanding 
history stack, but it's not extremely slow either. What puzzles me is 
your observation that the time spent in the pixelpipe does not sum up to 
your wall time, with a discrepancy of a factor of 10. There might be 
other elements eating up performance on your system...


2) you don't report profiling info per OpenCL kernel. This is normally 
printed along with the other information when running with '-d opencl -d 
perf'. Did you happen to set opencl_number_event_handles to zero? You 
should set this to a reasonably high value like 25.


Ulrich

Am 03.12.18 um 14:40 schrieb Volker Lenhardt:

Am 03.12.18 um 12:11 schrieb Volker Lenhardt:

Am 02.12.18 um 17:12 schrieb Volker Lenhardt:


There's still the problem of too slow response with shifting a zoomed 
in image to have e.g. a quick look at the corners to detect chromatic 
aberration. And cropping an image is slow, too. All of this rating is 
compared to what I was used to under Linux with less confined equipment.




I made another test using the command+arrow_keys to scroll once around a 
zoomed in image. It is a good deal faster than using the trackpad.


I scrolled from the middle to the upper border, then to the left border, 
down to the bottom, then to the right and up again and back to the 
middle (i.e. once around the outer part of the image). Counted by the 
output of "darktable -d perf" it took about 5 (6 CPU) seconds. In real 
time I measured 50 seconds.


That's no fun.






Re: [darktable-user] DT on iMac with Mojave: no openCL

2018-12-02 Thread Ulrich Pegelow

Am 02.12.18 um 15:52 schrieb Volker Lenhardt:


Activated openCL
Next image:
1118,913519 [dev_process_image] pixel pipeline processing took 0,434 
secs (1,555 CPU)

Profiled denoise:
1219,319104 [dev_process_image] pixel pipeline processing took 18,669 
secs (72,306 CPU)


There's practically no difference. BTW: the images are cr2 files with 
sizes of about 23 MB.


What next?


For this case I would like to see the full output of 'darktable -d 
opencl -d perf' with activated OpenCL from program start to where you 
process the image with profiled denoise.






Re: [darktable-user] DT on iMac with Mojave: no openCL

2018-12-02 Thread Ulrich Pegelow
Looks like OpenCL is properly loaded. To better analyse the slow 
response time of your system run with 'darktable -d opencl -d perf'. 
After each processing step you get profiling output. For a start, only 
look at lines like


17,607291 [dev_process_image] pixel pipeline processing took 0,212 secs 
(1,374 CPU)


You will see the same output if you switch off OpenCL, so this should 
give you a first estimate of how much the GPU is able to boost 
performance on your system. Note that you need to activate some of the 
more demanding modules to really see a difference. Try for example 
profiled denoise.
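If the '-d perf' log gets long, the relevant lines can be pulled out with a small script. A sketch, not an official tool; note that darktable prints decimal commas under locales like German, which the helper normalizes:

```python
import re

# Matches lines like:
# "17,607291 [dev_process_image] pixel pipeline processing took 0,212 secs (1,374 CPU)"
PAT = re.compile(r"\[dev_process_image\] pixel pipeline processing took ([\d.,]+) secs")

def pipeline_times(log_text: str) -> list[float]:
    """Collect pixelpipe wall times, converting decimal commas to points."""
    return [float(m.group(1).replace(",", ".")) for m in PAT.finditer(log_text)]

sample = "17,607291 [dev_process_image] pixel pipeline processing took 0,212 secs (1,374 CPU)"
print(pipeline_times(sample))  # [0.212]
```

Running darktable's stderr through such a filter makes it easy to compare timings with and without OpenCL enabled.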


Ulrich

Am 02.12.18 um 13:43 schrieb Volker Lenhardt:
My output looks much the same as yours and Michael's. So I am now 
convinced that I was wrong to put the blame for my DT's shortcomings on 
OpenCL.


There seems to be some flaw in the graphics. But the output concerning 
the graphics card looks ok:


0.069794 [opencl_init] found opencl runtime library 
'/System/Library/Frameworks/OpenCL.framework/Versions/Current/OpenCL'
0.069829 [opencl_init] opencl library 
'/System/Library/Frameworks/OpenCL.framework/Versions/Current/OpenCL' 
found on your system and loaded

0.069832 [opencl_init] found 1 platform
0.077176 [opencl_init] found 2 devices
0.077218 [opencl_init] discarding CPU device 0 `Intel(R) Core(TM) 
i5-7500 CPU @ 3.40GHz'.
0.077246 [opencl_init] device 1 `AMD Radeon Pro 570 Compute Engine' 
supports image sizes of 16384 x 16384
0.077250 [opencl_init] device 1 `AMD Radeon Pro 570 Compute Engine' 
allows GPU memory allocations of up to 1024MB

[opencl_init] device 1: AMD Radeon Pro 570 Compute Engine
  GLOBAL_MEM_SIZE:  4096MB
  MAX_WORK_GROUP_SIZE:  256
  MAX_WORK_ITEM_DIMENSIONS: 3
  MAX_WORK_ITEM_SIZES:  [ 256 256 256 ]
  DRIVER_VERSION:   1.2 (Oct 16 2018 21:18:14)
  DEVICE_VERSION:   OpenCL 1.2
0.077944 [opencl_init] options for OpenCL compiler: 
-cl-fast-relaxed-math  -DUNKNOWN=1 
-I/Applications/darktable.app/Contents/Resources/share/darktable/kernels

...
0.085891 [opencl_init] kernel loading time: 0.0078
0.085897 [opencl_init] OpenCL successfully initialized.
0.085899 [opencl_init] here are the internal numbers and names of OpenCL 
devices available to darktable:

0.085901 [opencl_init]    0    'AMD Radeon Pro 570 Compute Engine'
0.085904 [opencl_init] FINALLY: opencl is AVAILABLE on this system.
0.085906 [opencl_init] initial status of opencl enabled flag is ON.

I think I should either reinstall DT or start a new subject request. Or 
do you have an idea?


Volker
 






Re: [darktable-user] DT on iMac with Mojave: no openCL

2018-12-01 Thread Ulrich Pegelow

Am 01.12.18 um 20:32 schrieb Volker Lenhardt:


This is one more riddle. I had tried so and have repeated it just now 
from the terminal. I get "-bash: darktable: command not found". It seems 
the best I can do is to reinstall Darktable. What do you think?


You probably need to give the full path of the darktable executable. I 
have no idea how to get the path on MacOS, maybe some MacOS user knows.


Ulrich






Re: [darktable-user] DT on iMac with Mojave: no openCL

2018-12-01 Thread Ulrich Pegelow
You are probably aware that you can find out more about OpenCL problems 
by starting darktable from a shell with option 'darktable -d opencl'. 
Likewise you may try 'darktable -d perf' for more info on what (module) 
makes darktable feel sluggish.


Ulrich


Am 01.12.18 um 19:38 schrieb Volker Lenhardt:



Am 01.12.18 um 19:11 schrieb Archie Macintosh:
On Sat, 1 Dec 2018 at 17:45, Archie Macintosh  
wrote:

Sorry, I just realised I'm on a 2017 iMac. Is yours 2018?
See https://support.apple.com/en-gb/HT202823


Doh! Just remembered there aren't any 2018 iMacs yet. So you should
have OpenCL on yours.


I bought my iMac 4 weeks ago. It was shipped with macOS High Sierra. I 
updated to Mojave. Of course I had activated OpenCL in the DT options.


My trackpad troubles could be a hint that a different installation 
problem is the cause of both.


The sluggishness is very prominent, e.g. when I try to find image faults 
in the corners of a highly zoomed-in image. Moving the image around 
takes far too much time, with many pauses in between.


But nice to hear from you that it should work.





Re: [darktable-user] Lenscorrection with known lens not working

2018-11-20 Thread Ulrich Pegelow
Maybe a problem with your installation of lensfun, on which darktable's 
lens correction relies. I would try to update the lensfun database by 
running lensfun-update-data.


Ulrich

Am 20.11.18 um 20:59 schrieb kneops:

I can't. The dropdown only shows Nikon, and then only this lens.
I'm using the D850.







Re: [darktable-user] Lenscorrection with known lens not working

2018-11-20 Thread Ulrich Pegelow
Looks like your camera has not been detected in the first place (see the 
empty field above the lens name). Please try to select the camera manually.


Best wishes

Ulrich

Am 20.11.18 um 20:40 schrieb kneops:

I'm puzzled why this lens correction is not working.

The lens as noted in the lens correction module is correct, I also get 
the same information when using exiv2.
But correction is not working. This lens is the well known 'older' 
version, so not the newer VR version.
Also, I cannot choose another lens from the list. It only shows Nikon, 
and then only one lens.







Re: [darktable-user] Re: opencl memory issue with GUI vs. commandline

2018-05-25 Thread Ulrich Pegelow

Am 25.05.2018 um 17:48 schrieb Matthias Bodenbinder:

Am 25.05.2018 um 14:35 schrieb Peter McD:

Is there a rule of thumb for headroom settings?


Very good question.

Specifically because I do not see a performance difference between
opencl_memory_headroom=1000 and opencl_memory_headroom=400. The pipeline 
processing always takes the same time (with a 3 % variance). So why should I 
bother with small values of opencl_memory_headroom?



In fact that depends a lot on your total amount of GPU memory.

When darktable's opencl support was implemented, systems with 1GB were 
the norm. Typically only about 700MB of that would be available to 
darktable, the rest needed to be left alone for system purposes (which 
we found out by trial and error).


There were two corner cases to be taken into account:

* a too small value of opencl_memory_headroom would lead to out of 
memory situations in darktable, processing would fall back to the CPU 
which tends to be much slower.


* a too high value of opencl_memory_headroom would force darktable to go 
into tile-wise processing much too often. This also costs performance.


As a reasonable compromise we now have a setting of 400 as default. That 
should reserve enough space for the system and still prevent tiling in 
most cases.
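The trade-off between the two corner cases can be sketched with a toy calculation (not darktable's actual tiling logic; the numbers and the helper are illustrative):

```python
def opencl_plan(total_mb: int, headroom_mb: int, required_mb: int) -> str:
    """Toy model of the opencl_memory_headroom trade-off described above."""
    usable = total_mb - headroom_mb  # darktable assumes this much is free
    if required_mb <= usable:
        return "single pass on GPU"
    return "tile-wise processing (slower)"

# A 700 MB job on a 1 GB card with the 400 MB default: only 624 MB usable.
print(opencl_plan(1024, 400, 700))    # tile-wise processing (slower)
# A 4 GB peak on a 6 GB card, even with headroom relaxed to 2000:
print(opencl_plan(6144, 2000, 4096))  # single pass on GPU
```

Too little headroom would instead risk the out-of-memory fallback to the CPU path, which is why the default errs toward the safe side.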


If your system has lots of GPU memory the opencl_memory_headroom setting 
can be much more relaxed, i.e. higher. When doing some tests in the course 
of this thread on my computer I reached a peak memory usage of about 4GB 
during export of 5Ds images without any tiling (I have 6GB total GPU 
memory). In my case I may therefore set opencl_memory_headroom to 2000 
without seeing any performance difference.


Even on a 4GB system a setting of 1000 might be totally fine. Tiling 
will happen a bit more frequently, but in the end this will only affect 
few modules with high memory demand (e.g. profiled denoise) and large 
images. On a system with 2GB or less the situation will probably look a 
bit different, though.


Ulrich




Re: [darktable-user] Re: opencl memory issue with GUI vs. commandline

2018-05-24 Thread Ulrich Pegelow
I'll have a deeper look into this. In the meantime it would be helpful 
to learn from you in more detail what nvidia-smi tells you about the GPU 
memory usage in the different steps of your test, especially before and 
after you get your test run failing for the first time.


For your information here on my system it typically looks like this:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.111                Driver Version: 384.111                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 106...  Off  | :01:00.0          On |                  N/A |
|  3%   50C    P8    10W / 120W |    173MiB /  6065MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      3563      G   /usr/bin/X                                  93MiB   |
|    0      3977      G   kwin_x11                                    19MiB   |
|    0      3992      G   /usr/bin/krunner                             1MiB   |
|    0      3994      G   /usr/bin/plasmashell                        53MiB   |
|    0      4035      G   /usr/bin/kgpg                                2MiB   |
+-----------------------------------------------------------------------------+

When the darktable GUI is running and idle it has an additional permanent 
requirement of about 63MB. During image processing darktable's GPU memory 
usage goes steeply up but always returns to the idle value 
afterwards. When terminating darktable there should be no traces of it 
left in the output of nvidia-smi.


Ulrich

Am 25.05.2018 um 07:34 schrieb Matthias Bodenbinder:

Am 25.05.2018 um 07:10 schrieb Ulrich Pegelow:

I have put all relevant files on dropbox: 
https://www.dropbox.com/sh/yarghqgncirjd0w/AADUNyFGaGUpyBTDil_qaxsUa?dl=0

There you will find a script test-CR2.sh which infinitely calls darktable-cli 
with the big 5Ds file. And it creates a log file for this. I have attached 2 of 
those log files.

The first log file 5Ds-infite-30762-15-runs-no-issue.log.gz is from the start 
of the test, where darktable-cli is running just fine 15 times. Then I enter 
the darktable GUI and export 26 of my Canon 6D files. I exit the GUI and start 
test-CR2.sh again. I had to repeat this cycle 4 times before the issue 
occurs. The result is shown in the second logfile: 
5Ds-infite-2750-3-runs-with-issue.log.gz

I have been using opencl_memory_headroom=500 for this test. I am using DT 
2.4.3. In addition I have firefox 60.0.1, thunderbird 52.8 and three 
gnome-terminals open. My system is running on Manjaro testing with kernel 
4.16.11.





Re: [darktable-user] Re: opencl memory issue with GUI vs. commandline

2018-05-24 Thread Ulrich Pegelow

Am 25.05.2018 um 06:21 schrieb Matthias Bodenbinder:

But for me the issue is more: Why does it work with opencl_memory_headroom=400 
for the first couple of cycles and then I get the issue. Then I increase the 
value to opencl_memory_headroom=500 which fixes it for a while before it 
happens again and I need to go to opencl_memory_headroom=600 and so forth. That 
means that the memory problem is getting worse over time. But why? Is that a 
memory leak?



Well, never say never. But at least here I cannot detect a leak. I'll do 
some further tests with a 5Ds raw. Is it one of the two XMPs that you 
have attached which shows the problem most reproducibly?


Ulrich




Re: [darktable-user] Re: opencl memory issue with GUI vs. commandline

2018-05-24 Thread Ulrich Pegelow

Am 24.05.2018 um 18:15 schrieb Matthias Bodenbinder:

Am 24.05.2018 um 17:43 schrieb Matthias Bodenbinder:

37,787315 [opencl_summary_statistics] device 'GeForce GTX 1050 Ti' (0): peak 
memory usage 3807805440 bytes
37,787326 [opencl_summary_statistics] device 'GeForce GTX 1050 Ti' (0): 499 out 
of 500 events were successful and 1 events lost



Peak memory usage is at 3632MB, which is 400MB below your GPU's total of 
4032MB, as is to be expected due to your setting of 
opencl_memory_headroom=400.


Intermediately you have

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.24                 Driver Version: 396.24                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 105...  Off  | :01:00.0          On |                  N/A |
| 21%   47C    P0    N/A /  75W |    365MiB /  4032MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

which is really close to 400MB. Note that darktable (like any program 
using OpenCL) has no means to detect the free amount of GPU memory at 
any time, so it relies on having access to the total minus opencl_memory_headroom.


I suggest setting a significantly higher value for opencl_memory_headroom 
like 800 or even higher in your case. You have a lot of memory, so there 
is no problem with being a bit more permissive.


Ulrich




Re: [darktable-user] opencl memory issue with GUI vs. commandline

2018-05-24 Thread Ulrich Pegelow

Am 24.05.2018 um 14:10 schrieb Matthias Bodenbinder:

Can it be that the DT GUI is not releasing all the GPU memory, and the 
command line then fails? And the worst thing is that the system is not 
recovering from that. Even with bench.SRW, which is a lot smaller, it is 
not working anymore. The GPU memory is gone.



You should check on your system with nvidia-smi (typically comes as part 
of the nvidia-compute package). The tool will tell you the amount of 
used GPU memory. You should be able to see if it differs before and 
after running darktable.


You may also try 'darktable -d opencl -d memory' which gives you a log 
of darktable's OpenCL memory usage. At the end it should go back down to 
zero and also tell you the peak memory usage.


Ulrich





Re: [darktable-user] How to manually adjust perspective

2018-05-18 Thread Ulrich Pegelow

Am 14.05.2018 um 02:07 schrieb David Vincent-Jones:

I use this module extensively and find it very useful.

If I have any thoughts regarding change it would be the ability to use a 
variable sized brush eraser rather than the current picker. A brush 
would allow large areas to be 'swept' more efficiently  


I have just implemented that in the master branch.

Ulrich





Re: [darktable-user] GPU advice 2GB or 4GB RAM.

2018-04-18 Thread Ulrich Pegelow

Am 18.04.2018 um 21:40 schrieb frieder:

Am Mittwoch, den 18.04.2018, 18:13 +0200 schrieb Ulrich Pegelow:

No, that's not what I would want to see (which version of darktable are
you using?). The final lines of the output should look something like:


It is darktable 2.2.1 on Debian Linux, the highest version I can get at
the moment.


OK, then '-d opencl -d memory' does not give information on GPU peak 
usage. That feature was introduced with 2.4.


My recommendation: if performance during export plays any role for you 
then you should go for the higher amount of GPU memory.


Ulrich






Re: [darktable-user] GPU advice 2GB or 4GB RAM.

2018-04-18 Thread Ulrich Pegelow
No, that's not what I would want to see (which version of darktable are 
you using?). The final lines of the output should look something like:


8,290702 [opencl_summary_statistics] device 'GeForce GTX 1060 6GB' (0): 
peak memory usage 375014720 bytes
8,290722 [opencl_summary_statistics] device 'GeForce GTX 1060 6GB' (0): 
164 out of 164 events were successful and 0 events lost


Did you combine '-d opencl' with '-d memory'?

Am 18.04.2018 um 14:07 schrieb frieder:

Am Mittwoch, den 18.04.2018, 06:51 +0200 schrieb Ulrich Pegelow:
This is what I did, and this is the output of the last lines with the 
highest values:
 > [pixelpipe_process] [full] using device 0
[memory] before pixelpipe process
[memory] max address space (vmpeak): 3066560 kB
[memory] cur address space (vmsize): 2920820 kB
[memory] max used memory   (vmhwm ):  656232 kB
[memory] cur used memory   (vmrss ):  511540 kB
[pixelpipe_process] [preview] using device -1
[memory] before pixelpipe process
[memory] max address space (vmpeak): 3066560 kB
[memory] cur address space (vmsize): 2920820 kB
[memory] max used memory   (vmhwm ):  656232 kB
[memory] cur used memory   (vmrss ):  511540 kB
[pixelpipe_process] [thumbnail] using device 0
[memory] before pixelpipe process
[memory] max address space (vmpeak): 3420044 kB
[memory] cur address space (vmsize): 3415376 kB
[memory] max used memory   (vmhwm ):  656232 kB
[memory] cur used memory   (vmrss ):  523032 kB
[pixelpipe_process] [export] using device 0
[memory] before pixelpipe process
[memory] max address space (vmpeak): 3423124 kB
[memory] cur address space (vmsize): 3415376 kB
[memory] max used memory   (vmhwm ):  656232 kB
[memory] cur used memory   (vmrss ):  523108 kB
  
I have 6 GB main memory and 2 GB on the GPU, so it seems darktable is using 
a good half of main memory (~3.2GB) and ~640MB on the GPU. Is this a 
correct interpretation? I was using a 12-bit raw of 16MB from an Olympus 
camera.
Btw. does "using device -1" for preview indicate that the CPU instead of 
the GPU is used?

Yes, device -1 stands for your CPU.



Thanks,
F.






Re: [darktable-user] GPU advice 2GB or 4GB RAM.

2018-04-17 Thread Ulrich Pegelow
You may also run darktable with -d opencl -d memory, do some typical 
stuff and then close darktable at which point it will tell you the peak 
usage of GPU memory.


Peak usage will strongly depend on your usage scheme. It will be high 
during export of large images, and reasonably low and independent of 
image size during interactive work.


So you need to consider your main intentions for using OpenCL. You might 
either go for the lower amount of GPU memory (fast interactive work, not 
so fast export) or for the higher amount (fast interactive work and fast 
export).


Ulrich



Am 18.04.2018 um 03:35 schrieb Robert William Hutton:
More memory is good because it allows you to process images through 
opencl without tiling them, which is a lot faster.  Of course this 
depends on the resolution of the images you need to process, but I'd 
always try to get the most video ram possible.


I think you can do darktable -d opencl to see if your images are 
currently being tiled.


Regards,

Rob


On 18/04/18 11:10, Frieder wrote:

Hallo,
I need to buy a new graphics adapter mostly for darktable and open CL.
I'm thinking about a Radeon RX 560 card, which is offered with 2 or 4
GB Video-RAM.
Currently I'm using an R7 card with 2GB which seems fine memory-wise,
but has no more OpenCL driver support with later distributions.

Has somebody experience that 4GB GPU RAM is/can be an advantage over
2GB RAM? Or is it just waste of money and current?
Thanks
F.





Re: [darktable-user] What does the "subtract" blend mode do?

2018-03-07 Thread Ulrich Pegelow

Am 07.03.2018 um 17:10 schrieb Matthieu Moy:

Now, what "subtract" should do with colors is debatable, especially in Lab.
On the L channel, something like max(in - out, 0) would be rather natural.
On the a and b channels, I'm not sure what to do. Just doing in-out would
mean the final image would be black&white if the input and the output pixels
have the same colors (the same a and b). They'd keep the same colors if the
output is black & white. But for example, if the input is magenta (positive a)
and the output is even more magenta (greater a), do we expect the output to
be black&white (a clipped to 0)? Or green (negative a)?

One way to avoid the question would be to do a difference only on the L
channel, but just copy a and b from the output.
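The candidate behaviours described above can be sketched for a single Lab triple (purely illustrative, not darktable's implementation; function names are made up for the sketch):

```python
def subtract_L_only(inp, out):
    """Subtract on L with clamping; a and b copied from the output."""
    L = max(inp[0] - out[0], 0.0)
    return (L, out[1], out[2])

def subtract_all(inp, out):
    """Naive per-channel subtract; equal colors collapse to a = b = 0."""
    return (max(inp[0] - out[0], 0.0), inp[1] - out[1], inp[2] - out[2])

# Input magenta (a > 0), output even more magenta (greater a):
# the naive subtract flips a negative, i.e. toward green.
print(subtract_all((50.0, 30.0, 0.0), (40.0, 45.0, 0.0)))  # (10.0, -15.0, 0.0)
```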


Well, that's probably not a good way to go. We had this issue with the 
"difference" mode in the past. This way you would get images with, let's 
say, L=0 and a=-50 ("highly blue-saturated pitch black"). Due to the 
practical way these "unbounded" Lab values are converted into 
monitor RGB you get some strange color artifacts, not the "black" that 
one would normally expect.




Or perhaps another option would be to just deprecate and remove "difference"
if we assume no one's using it.

Regards,







Re: [darktable-user] What does the "subtract" blend mode do?

2018-03-06 Thread Ulrich Pegelow
You obviously have a point here: the current implementation does not do 
what the word "subtract" implies.


Before considering a fix: any idea how the "subtract" operator should 
act in the Lab color space?


Ulrich

Am 05.03.2018 um 22:46 schrieb Matthieu Moy:

- Original Message -

There is a link in the documentation to a gimp doc page that explains it, and
some modes are explained in the darktable docs.



https://www.darktable.org/usermanual/en/blending_operators.html
https://docs.gimp.org/2.8/en/gimp-concepts-layer-modes.html


Yes, I did read both documents. But the "subtract" operator of darktable does 
not do what the Gimp page describes, and there's no specific doc for it in dt's doc.

My question comes from the fact that "subtract" is suspiciously closer to "add" 
than what I'd expect from the name (and from Gimp's doc).







Re: [darktable-user] Checking for errors/debugging my openCL

2018-03-05 Thread Ulrich Pegelow
Additionally you may want to play with opencl_number_event_handles. 
Start with the extreme setting of zero. If that solves your issues you 
can then try different values in between.


Ulrich

Am 05.03.2018 um 11:37 schrieb KOVÁCS István:
The only memory-transfer related setting that I know of 
is opencl_use_pinned_memory. The manual 
(https://www.darktable.org/usermanual/en/darktable_and_opencl_optimization.html) 
says for NVidia it should be set to false. You may want to check - for 
me, it was set to true by default, although it did not cause problems 
apart from being slow.


Kofa








Re: [darktable-user] Heavily disappointed about openCL

2018-02-26 Thread Ulrich Pegelow
What are your opencl related settings in darktablerc (i.e. all config 
parameters of the form opencl_*) ?


Ulrich

Am 26.02.2018 um 23:28 schrieb Bernhard:

I was told yesterday that this card doesn't have that sensor.
Updating to newer driver shows N/A instead:

~ $ nvidia-smi
Sun Feb 25 10:57:52 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.25                 Driver Version: 390.25                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 105...  Off  | :01:00.0          On |                  N/A |
| 20%   39C    P0    N/A /  75W |    166MiB /  4036MiB |      2%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1428      G   /usr/lib/xorg/Xorg                         129MiB   |
|    0      2032      G   cinnamon                                    34MiB   |
+-----------------------------------------------------------------------------+


--






Re: [darktable-user] Heavily disappointed about openCL

2018-02-20 Thread Ulrich Pegelow
With today's typical amounts of graphics card memory we should probably 
increase the default setting of that parameter to maybe 400 or 450.


In "the old days", when we only had about 1GB, a too high value would 
have forced darktable into useless tiling, but with more GPU memory 
that's really not an issue any longer.


Ulrich

Am 20.02.2018 um 20:46 schrieb Peter Mc Donough:

Am 20.02.2018 um 20:09 schrieb Ulrich Pegelow:
That's an out-of-resources problem on your graphics card. Try to 
increase darktable's config variable opencl_memory_headroom (in file 
darktablerc) to something like 400.


Shouldn't that be configured in darktable GUI settings options?

e.g. With graphics cards with less than "whatever" memory set headroom 
to "whatever" and avoid running other graphics card loads.


cu
Peter





Re: [darktable-user] Heavily disappointed about openCL

2018-02-20 Thread Ulrich Pegelow
That's an out-of-resources problem on your graphics card. Try to 
increase darktable's config variable opencl_memory_headroom (in file 
darktablerc) to something like 400.


Please also make sure that no other application uses substantial amounts 
of GPU memory. You can use program nvidia-smi to find out. Here on a 
GTX1060 with 6GB it looks like below, indicating that only about 200MB 
of GPU memory are in use by the system or any other apps.


Best wishes

Ulrich


Tue Feb 20 20:07:11 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.111                Driver Version: 384.111                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 106...  Off  | :01:00.0          On |                  N/A |
|  4%   50C    P8    10W / 120W |    182MiB /  6065MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      3534      G   /usr/bin/X                                 107MiB   |
|    0      4201      G   kwin_x11                                    16MiB   |
|    0      4216      G   /usr/bin/krunner                             1MiB   |
|    0      4217      G   /usr/bin/plasmashell                        51MiB   |
|    0      4261      G   /usr/bin/kgpg                                2MiB   |
+-----------------------------------------------------------------------------+


Am 20.02.2018 um 20:01 schrieb Bernhard:

Hi,

I ran into another problem with my new GeForce GTX 1050 Ti in my system 
regarding openCL.
Every now and then I see a message about darktable finding problems with 
OpenCL and disabling it "for this session".
I then have to reboot the complete machine to get this working again, 
but it doesn't last long before I see the same message again.



System:    Host: benutzer Kernel: 4.13.0-32-generic x86_64 (64 bit gcc: 
5.4.0)

    Desktop: Cinnamon 3.4.6 (Gtk 3.18.9-1ubuntu3.3)
    Distro: Linux Mint 18.2 Sonya
Machine:   Mobo: ASUSTeK model: P8Z77-M v: Rev 1.xx
    Bios: American Megatrends v: 2003 date: 05/09/2013
CPU:   Quad core Intel Core i5-3570 (-MCP-) cache: 6144 KB
    flags: (lm nx sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx) bmips: 
27280

    clock speeds: max: 3800 MHz 1: 3410 MHz 2: 3410 MHz 3: 3410 MHz
    4: 3410 MHz
Graphics:  Card: Intel Xeon E3-1200 v2/3rd Gen Core processor Graphics 
Controller

    bus-ID: 00:02.0
    Display Server: X.Org 1.18.4 driver: intel
    Resolution: 1920x1200@59.95hz
    GLX Renderer: Mesa DRI Intel Ivybridge Desktop
    GLX Version: 3.0 Mesa 17.2.4 Direct Rendering: Yes

~ $ darktable -d opencl

reports the following while opening some pictures in darkroom mode:

(...)
[pixelpipe_process] [thumbnail] using device 0
[pixelpipe_process] [full] using device -1
[opencl_pixelpipe] couldn't copy image to opencl device for module 
rawprepare
[opencl_pixelpipe] could not run module 'rawprepare' on gpu. falling 
back to cpu path
[opencl_pixelpipe (b)] late opencl error detected while copying back to 
cpu buffer: -5

[pixelpipe_process] [thumbnail] falling back to cpu path
[pixelpipe_process] [full] using device 0
[pixelpipe_process] [preview] using device -1
[pixelpipe_process] [full] using device 0
[pixelpipe_process] [preview] using device -1
[pixelpipe_process] [thumbnail] using device 0
[opencl_pixelpipe] couldn't copy image to opencl device for module 
rawprepare
[opencl_pixelpipe] could not run module 'rawprepare' on gpu. falling 
back to cpu path
[opencl_pixelpipe (b)] late opencl error detected while copying back to 
cpu buffer: -5
[opencl] frequent opencl errors encountered; disabling opencl for this 
session!

[pixelpipe_process] [thumbnail] falling back to cpu path
[pixelpipe_process] [full] using device -1
[pixelpipe_process] [full] using device -1
[pixelpipe_process] [preview] using device -1

Does anyone have an idea what I could look for?




darktable user mailing list
to unsubscribe send a mail to darktable-user+unsubscr...@lists.darktable.org



Re: [darktable-user] problems with parametric mask

2018-01-29 Thread Ulrich Pegelow

Hi,

are we talking about the highlights or the shadows here?

From time to time I have experienced problems selecting shadows with 
sufficient accuracy. Therefore I contemplated adding a further virtual 
channel on a log_L scale, a bit like the approach we have in the 
basecurve module. Not sure if this would be of any help here.
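
To illustrate the idea, here is a minimal sketch of what such a log-scaled virtual channel could look like (the function name, epsilon, and normalization are my assumptions, not darktable code):

```python
import math

def log_L(L: float, eps: float = 1e-3) -> float:
    """Map lightness L in [0, 1] onto [0, 1] on a log scale.

    A small eps avoids log(0); the normalization keeps the endpoints fixed,
    while shadow values get spread out over a much wider range.
    """
    return (math.log(L + eps) - math.log(eps)) / (math.log(1.0 + eps) - math.log(eps))
```

With eps = 1e-3, an input of L = 0.01 maps to roughly 0.35, so a narrow shadow range becomes much easier to pick with a slider.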


Ulrich

Am 29.01.2018 um 10:44 schrieb ternaryd:

Hi,

I'm trying to adjust a larger set of images
which have a rather high dynamic range.
Trying to avoid HDR, I've used the exposure
module to move EV and black such that there is
no more over/under exposure, but midtones get
very dark. The idea was to use a parametric
mask, protecting the bright parts of the image,
and raise the EV of the rest in a second
instance.

The problem is that often the range of high
luminosity is very narrow, sometimes even less
than 1 unit. As right-click doesn't open the
normal darktable interface for adjustments, I
find it extremely difficult to adjust the
markers of the sliders at this position.
Sometimes it slips down to zero, overlapping
both markers. Additionally, with the markers
that tight, it is often impossible to establish
a feathering zone, leading to very visible
artifacts, for instance at the border between
an out-of-focus roof and the sky.

Is that really that hard or am I doing it
wrong? Is there another method (besides HDR) to
adjust such images?

Thanks,

Cris





Re: [darktable-user] Parametric mask preview shortcut?

2017-11-28 Thread Ulrich Pegelow
There is a feature as you suggested in the upcoming 2.4.0. When holding 
down the CTRL key while entering one of the parametric mask sliders you 
see the mask preview as long as you stay in the slider. On leaving, the 
view goes back to normal (with a 1 second delay).


Likewise, if you press the SHIFT key while entering the slider you get a 
false color view of that specific color channel. You can also combine 
SHIFT and CTRL to see a mask preview on top of the color channel display.


Ulrich

Am 22.11.2017 um 21:15 schrieb Marek Vančo:

Hello folks,

Please, do you know if there is a possibility to preview the yellow
parametric mask with keyboard shortcuts?

It's very tedious to switch the small square icon to preview the mask every time.


Idea:
A behaviour like "zoom with Z in lighttable" would be very interesting:
when, for example, the 'ALT' key is pressed, the yellow mask could be visualised.

Second possible behaviour (even simpler to use):
the parametric mask preview turns on automatically when someone moves one of the
cursors of the parametric mask.

What do you think about this idea?

Thank you very much!
Marek






Re: [darktable-user] OpenCL -- did I solve my problem?

2017-09-22 Thread Ulrich Pegelow

Am 22.09.2017 um 16:12 schrieb Howard Helsinger:

yes, as suggested, I'm running two displays at 1920.

problems seem to derive from the demosaic module

Well, that's not fully correct. The problem derives from the fact that 
there is less GPU memory available than darktable thinks it has. The 
demosaic module might just be the first one to hit that limit.


darktable's OpenCL system has been designed to be robust against these 
situations. The worst thing that should happen is darktable aborting the 
OpenCL pixelpipe and falling back to CPU. You should get your output 
image, albeit a bit slower.


Ulrich




Re: [darktable-user] OpenCL -- did I solve my problem?

2017-09-21 Thread Ulrich Pegelow

Am 21.09.2017 um 22:00 schrieb Howard Helsinger:


However, I don't quite understand why.

I attach the output of '$ darktable -d opencl'. It says my GeForce 
GTS 450 "allows GPU memory allocations of up to 239 MB".


I think I don't understand the numbers.


Let me try to explain; there are quite a few numbers relating to GPU memory.

1) the total amount of physical GPU memory that your graphics card can 
supply to OpenCL is reported by something like:


GLOBAL_MEM_SIZE:  6068MB

This typically represents the total amount of VRAM on your card.

2) there is an upper limit on the size of any *individual* allocated 
buffer under OpenCL. This upper limit is implementation dependent so it 
may be different from GPU to GPU and may also change with drivers. 
darktable reports this like:


[opencl_init] device 0 `GeForce GTX 1060 6GB' allows GPU memory 
allocations of up to 1516MB


It's no problem to allocate several buffers of that size, but each 
individual buffer must not exceed this maximum size limit.


3) the actual amount of free GPU memory that darktable can freely use. 
As the driver needs to reserve some GPU memory for its own purposes, 
darktable cannot assume that it can use all of the available physical 
memory (see 1). Unfortunately there is no way in the OpenCL specs to 
query the amount of available, non-allocated memory at any given time. 
Therefore we assume that we can use all physical memory minus the size 
given by opencl_memory_headroom.


By default darktable assumes that it's sufficient to leave 300MB 
untouched for driver and display purposes. In your case it seems that 
this was not sufficient and you needed to change it to 350MB. With the 
amount of installed GPU memory constantly increasing in modern graphics 
cards, we should consider increasing the default to 400MB in order to 
avoid user frustration.
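
The arithmetic described above can be sketched as follows (a simplified illustration; the function name is mine, and darktable's real bookkeeping additionally honors the per-buffer limit from point 2):

```python
def usable_gpu_mem_mb(global_mem_mb: int, headroom_mb: int = 300) -> int:
    """GPU memory darktable assumes it may use: physical VRAM minus a fixed headroom."""
    return max(global_mem_mb - headroom_mb, 0)

# a 6068MB card with the default 300MB headroom
print(usable_gpu_mem_mb(6068))       # 5768
# the same card after raising opencl_memory_headroom to 350MB
print(usable_gpu_mem_mb(6068, 350))  # 5718
```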


Ulrich





Re: [darktable-user] PSD files

2017-09-20 Thread Ulrich Pegelow
OK, looks like things have changed. The place to define the supported 
extensions is now in src/CMakeLists.txt close to if(USE_GRAPHICSMAGICK).


Am 20.09.2017 um 17:22 schrieb darkta...@911networks.com:

Hi,

DT 2.2.5

Is there a way of importing PSD files?

I understand that you can't process them, but I'd like to tag and
archive them.







Re: [darktable-user] PSD files

2017-09-20 Thread Ulrich Pegelow
darktable makes use of GraphicsMagick to read certain non-RAW file 
formats. However, we only accept those formats that are whitelisted in 
imageio_gm.c:_supported_image(). According to 
http://www.graphicsmagick.org/formats.html GraphicsMagick is able to 
read PSD files, but I have heard that the support is not reliable. 
Therefore the PSD format is not whitelisted. If you are willing to 
experiment, you could add PSD to the supported files for your own 
tests. Don't forget to alter image.c:dt_image_is_raw() as well. And of 
course, all this does not change the fact that darktable is not able to 
write PSD files.


Ulrich

Am 20.09.2017 um 17:22 schrieb darkta...@911networks.com:

Hi,

DT 2.2.5

Is there a way of importing PSD files?

I understand that you can't process them, but I'd like to tag and
archive them.







Re: [darktable-user] opencl .. a fresh install

2017-08-06 Thread Ulrich Pegelow

Am 07.08.2017 um 04:03 schrieb David Vincent-Jones:

I have just moved my openSUSE version from 42.2 to 42.3 and now dt does
not appear to recognize the installed openCL.

[opencl_init] could not find opencl runtime library 'libOpenCL'
[opencl_init] could not find opencl runtime library 'libOpenCL.so'
[opencl_init] found opencl runtime library 'libOpenCL.so.1'
[opencl_init] opencl library 'libOpenCL.so.1' found on your system and
loaded


This tells us that libOpenCL.so.1 is present on your system and 
darktable has been able to load it.



[opencl_init] could not get platforms: -1001


That tells us that libOpenCL.so.1 failed when checking for available GPUs.

I assume you have some NVIDIA card.

Please try the following. As root, call the program clinfo (should be 
supplied by package clinfo-2.0.15.03.24-2.3.x86_64 or the like). Then 
start darktable again as a normal user. If darktable now works with 
OpenCL, this is an indication of a permission issue on your system.


NVIDIA relies on a program /usr/bin/nvidia-modprobe that needs to be 
set SUID root. I have found that distributions frequently remove the 
SUID bit for security reasons. It's up to you whether you set it 
manually (chmod +s /usr/bin/nvidia-modprobe).
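
You can verify the bit with `ls -l /usr/bin/nvidia-modprobe` (look for an 's' in the owner execute position). The same check as a small Python sketch (the path is the one mentioned above and may differ between distributions):

```python
import os
import stat

def has_suid(path: str) -> bool:
    """Return True if the SUID bit is set on the file at path."""
    return bool(os.stat(path).st_mode & stat.S_ISUID)

# e.g. has_suid("/usr/bin/nvidia-modprobe") should be True on a working NVIDIA setup
```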


Ulrich





Re: [darktable-user] Which config for Darktable

2017-06-24 Thread Ulrich Pegelow

Am 24.06.2017 um 19:48 schrieb Mark Heieis:
So in conclusion, pick the HW you are comfortable with, which flavour of 
linux you want, and what it requires to provide the functionality you 
want, and go from there, knowing that if chosen correctly, you can get 
the functionality desired. Because at the end of the day, the history 
doesn't matter to me as I need to work, which requires OpenCL. If a 
better solution avails itself in the future, I will pursue that but 
until then I have a working system and that's what I'm sharing. Isn't 
that what it's all about?




If you are lucky and your HW is supported, then this is certainly a 
way to go.


But the AMD story is not complete without mentioning that they have 
dropped support for several mid-aged GPUs. In my case it's the HD7950 
(= R9 280 = Tahiti), which I had bought primarily to be able to keep an 
eye on both major platforms. I used a dual-GPU system with the Tahiti 
plus an old NVIDIA GTS450.


The Tahiti is gone now and my investment is lost. Others have reported 
similar issues with AMD/ATI drivers in the past. So dropping support 
does not seem to be an exceptional case on AMD's side.


I am now back to NVIDIA with a GeForce 1060, and for me it means I will 
no longer care about AMD compatibility, simply because I am factually 
not able to do so. And I certainly have no intention of investing in 
AMD again.


Ulrich





Re: [darktable-user] Unusual Message

2017-05-26 Thread Ulrich Pegelow
Anyhow, those error messages should be transient. Any further action 
will typically bring things back to normal.


Am 26.05.2017 um 20:51 schrieb David Vincent-Jones:
Ulrich; The problem occurred during a zoom ... I will go back and try to 
find the specific image but I am not sure what the specific function was 
being used.


David






Re: [darktable-user] Unusual Message

2017-05-26 Thread Ulrich Pegelow

Hi David,

see PR1441 (https://github.com/darktable-org/darktable/pull/1441) and 
redmine ticket #11497 to which it refers.


Inconsistent output has been a long-standing issue with a few modules 
where the center view would temporarily be processed incorrectly. 
PR1441 tries to prevent those situations and, as a last resort, emits a 
warning message. Normally the latter should happen only very rarely. It 
would be interesting to learn under which conditions you can provoke 
that warning message.


Ulrich

Am 26.05.2017 um 12:12 schrieb David Vincent-Jones:
I am getting an "inconsistent output" message on one of my images ... I 
have never seen this message before. What is it indicating?



David







Re: [darktable-user] Does OpenCL require the AMDGPU-PRO driver? Yes, but does not require installation, just some libs

2017-03-27 Thread Ulrich Pegelow

Am 28.03.2017 um 07:55 schrieb Pascal Obry:

That is simply disgraceful!

For my side it means that I can still live with fglrx as long as it
is supported by OpenSUSE. This support will end on May 16th 2017,
which is the announced end of life for Leap 42.1.


Do you mean that my card should still be supported by fglrx?


Certainly. Only that fglrx development has been stopped by AMD. The 
official support of that driver ends with Linux kernel 3.19. Some 
enthusiasts have been able to prolong support up to kernel 4.5 (e.g. 
Sebastian Siebert for openSUSE). That effort allows me to run my card 
with fglrx under Leap 42.1. However, as written: the clock is ticking. 
By May 16th at the latest I will need to move.



Issues will need to be fixed by those who are still keeping with AMD.


Agreed! We can't fight AMD's plan to mess up Linux support.


The sad thing is that I had been warned before. AMD already had a bad 
reputation when it comes to Linux drivers. I didn't take this seriously 
enough. Now I have to dump a 300-bucks GPU after only three years.


Ulrich






Re: [darktable-user] Does OpenCL require the AMDGPU-PRO driver? Yes, but does not require installation, just some libs

2017-03-27 Thread Ulrich Pegelow

Am 27.03.2017 um 19:58 schrieb Pascal Obry:

01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Venus 
XTX [Radeon HD 8890M / R9 M275X/M375X] (rev 83) (prog-if 00 [VGA controller])
Subsystem: Dell Venus XTX [Radeon HD 8890M / R9 M275X/M375X]
Flags: bus master, fast devsel, latency 0, IRQ 142
Memory at c000 (64-bit, prefetchable) [size=256M]
Memory at dfe0 (64-bit, non-prefetchable) [size=256K]
I/O ports at e000 [size=256]
Expansion ROM at 000c [disabled] [size=128K]
Capabilities: 
Kernel driver in use: radeon
Kernel modules: radeon


That's a Sea Islands generation GPU from AMD. amdgpu-pro supports 
neither Sea Islands nor its predecessor Southern Islands. The latter 
affects my HD7950 (Tahiti).


So what AMD currently does is:

* hard-deprecate the working Catalyst driver (fglrx): no longer 
supported with recent kernels


* develop a new version of driver amdgpu-pro which is only 
half-heartedly brought into all relevant distributions


* dump support for two complete generations of GPUs still heavily in use

That is simply disgraceful!

For my side it means that I can still live with fglrx as long as it is 
supported by OpenSUSE. This support will end on May 16th 2017, which is 
the announced end of life for Leap 42.1.


After this date I will likely invest into a new GPU. This will be an 
NVIDIA and I will give up on AMD.


Concerning OpenCL support in darktable for AMD: from my side AMD will be 
half-deprecated by then. Currently I run alongside the AMD and a seven 
years old NVIDIA (fully supported by NVIDIA drivers from day one till 
today) to have an eye on both platforms. In future I will focus all new 
developments only on NVIDIA. AMD platforms might or might not work. 
Issues will need to be fixed by those who are still keeping with AMD.


Ulrich





Re: [darktable-user] Re: Apply Mask to Single LAB Channel

2017-03-11 Thread Ulrich Pegelow

Hi,

so far there is no such option. What we have are blend modes that limit 
a module's effect to only Lab lightness (= L channel) or Lab color, but 
there is no option to apply the effect to the individual a and b 
channels. But please bear in mind that darktable is fundamentally 
different from PS. So even if we had such a blend mode, it would not 
work like in PS.


Concerning viewing individual channels: not possible at the moment.

Ulrich



Am 11.03.2017 um 20:11 schrieb Andrew Martin:

Any responses on this? Is there a way to view individual LAB channels in
Darktable?

On 25 February 2017 at 14:03, Andrew Martin <andrew.s.mar...@gmail.com> wrote:

Hello,

I have been reading Photoshop LAB Color by Dan Margulis and learning
about various techniques in the LAB colorspace. Consequently, I'm
excited that so many Darktable modules operate in LAB! One thing I
haven't figured out how to do is to apply a module to a particular
channel (either L, A, or B) - how can I do that?

I tried creating a parametric mask, choosing either L, a, or b, and
adjusting the Input and Output sliders, but I was not able to select
just a single channel.

Moreover, is there a way I can view a single LAB channel by itself?
This is useful for example for checking which channel has more noise
(A or B).

Thanks,

Andrew






Re: [darktable-user] softproofing & icc profiles

2017-03-08 Thread Ulrich Pegelow

https://www.darktable.org/usermanual/ch03s03s09.html.php

Am 09.03.2017 um 06:40 schrieb darkta...@911networks.com:

DT 2.2.3 on arch

I want to softproof metallic prints at whcc. I have the icc profiles.

The only thing I found to install them is a very old writeup from 2012
by Christoph Glaubitz:

https://chrigl.de/posts/2012/05/13/darktable-install-additional-color-profiles.html

Does this still apply?

In the doc I only see:

https://www.darktable.org/usermanual/ch03s02s10.html.php


or into any other output color space that the user supplies to
darktable as an ICC profile.








Re: [darktable-user] Parametric Mask Question

2017-02-16 Thread Ulrich Pegelow

These are offered in modules that work in the RGB color space.

The grayscale value g is calculated as 0.3*R + 0.59*G + 0.11*B - that's 
a simple and rather common way of weighting the individual channels to 
get a grey image.


OTOH the L value is calculated from an RGB -> HSL color space 
conversion. We could have left L out of the offered channels, but as we 
wanted to allow users to select on hue and saturation anyhow, the L 
channel came as a cheap add-on.
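
The two values can be compared directly; a quick illustration using Python's standard colorsys module (note that colorsys uses HLS ordering, so lightness is the second element):

```python
import colorsys

def grayscale(r, g, b):
    # the 'g' channel: simple fixed weighting of the RGB channels
    return 0.3 * r + 0.59 * g + 0.11 * b

def hsl_lightness(r, g, b):
    # the 'L' channel: lightness from an RGB -> HSL conversion, (max + min) / 2
    return colorsys.rgb_to_hls(r, g, b)[1]

r, g, b = 0.2, 0.4, 0.6
print(round(grayscale(r, g, b), 3))      # 0.362
print(round(hsl_lightness(r, g, b), 3))  # 0.4
```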


Ulrich

Am 16.02.2017 um 17:38 schrieb David Vincent-Jones:

In some instances (for instance the channel mixer) the parametric masks
offer scales in both 'g' (gray-scale) and 'L' (lightness). How do these
scales differ?







Re: [darktable-user] Very slow 2.2.1

2017-01-07 Thread Ulrich Pegelow

Am 06.01.2017 um 22:20 schrieb darkta...@911networks.com:

I just switched from 2.0.7 to 2.2.1 (archlinux) and I have some very significant
slowdowns.

Things that would take only a second or two now take 4, 5, 6 seconds.

[ ... a lot of output that tells us OpenCL is working ... ]

Anything I could to speed it up, short of buying a new video card?


Run darktable with '-d opencl -d perf' to get profiling output of your 
GPU. Then - if possible - compare the output of 2.0.7 with 2.2.1 to find 
out the differences.


Ulrich





Re: [darktable-user] Re: Strange artifacts

2016-12-22 Thread Ulrich Pegelow

Am 22.12.2016 um 22:22 schrieb Lorenzo Bossi:

Thank you for the quick reply. The color clipping works well for b&w.

Just to understand the problem better: is it related to the fact that
the blue LEDs emit some light to which camera sensors are overly
sensitive?


Mostly yes. darktable uses the Lab color space for the larger part of 
its processing pipeline. That color space is designed to represent the 
typical range of colors that human beings are able to see. A technical 
device like a camera sensor may exceed that range and produce unbounded 
values when converted to Lab. That seems to be the case for blue LED 
light and various modern camera sensors.
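
Conceptually, handling such unbounded values means pulling them back into the representable range. darktable's gamut clipping option does this via color profiles, but the basic idea can be shown with a naive per-channel clamp (a simplification for illustration, not darktable's actual algorithm):

```python
def gamut_clip_rgb(rgb, lo=0.0, hi=1.0):
    """Naive gamut clip: clamp every channel into [lo, hi] before conversion to Lab."""
    return tuple(min(max(c, lo), hi) for c in rgb)

# an out-of-range camera value gets pulled back into [0, 1]
print(gamut_clip_rgb((1.4, -0.1, 0.7)))  # (1.0, 0.0, 0.7)
```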


Ulrich






Re: [darktable-user] Strange artifacts

2016-12-21 Thread Ulrich Pegelow

Hi,

blue LED light sources. They put them now in all places, giving us 
photographers a severe headache. Horrible.


The manual tells you what to do: 
http://www.darktable.org/usermanual/ch03s02s10.html.php (see "3.2.10.5. 
Unbounded colors" and "3.2.10.6. Possible color artifacts"). In essence, 
you need to activate the gamut clipping option in the input color 
profile module to deal with those situations.


Ulrich


Am 21.12.2016 um 23:23 schrieb Lorenzo Bossi:

Hello,
I started using darktable a few months ago, and I'm very happy about it.
But there are still some problems that I cannot understand how to solve.

For example, in this image
https://dl.dropboxusercontent.com/u/2155571/artifacts.CR2
I see a lot of blue and purple artifacts in the lake. I thought they
were highlight spikes, but if I convert the image in black and white
they are rendered black.

I don't understand if this is a bug or just some default options which I
should understand how to tune.

The photo is a long exposure RAW taken with a Canon 6D (details in the
exif).

Thank you for any help :)

  Lorenzo







Re: [darktable-user] Fujifilm styles

2016-11-30 Thread Ulrich Pegelow
No Acros, but the other ones on a Fuji X-T1. The basis has been the 
darktable-chart output, with some manual tweaking referring to a set of 
images.


Ulrich

Am 30.11.2016 um 19:54 schrieb darkta...@911networks.com:

DT 2.0.7.

Has anybody created fuji style that match mostly the fujifilm
emulations, especially the acros? (I've tried and I'm not even close)






Re: [darktable-user] OpenCL support on Fedora 25 using AMD R5

2016-11-29 Thread Ulrich Pegelow

Am 30.11.2016 um 07:47 schrieb Mark Heieis:

On 2016-11-29 21:50, Ulrich Pegelow wrote:


Am 30.11.2016 um 00:23 schrieb Mark Heieis:

some darktable-cltest  output:

[opencl_init] found opencl runtime library 'libOpenCL.so.1'
[opencl_init] opencl library 'libOpenCL.so.1' found on your system
and loaded
[opencl_init] found 1 platform
[opencl_init] found 1 device


Your system does in fact support OpenCL.


[opencl_init] discarding device 0 `AMD CAICOS (DRM 2.46.0 /
4.8.8-300.fc25.x86_64, LLVM 3.8.0)' due to missing image support.


But your device lacks an important OpenCL feature (image support).
Without that feature the corresponding device is of no use for darktable.

Fair enough, but I'm not familiar enough with video/gpu to know where to
look.  What lib, driver, package, etc does the image support come from?


A hardware/driver interplay. I am not familiar with that specific 
system either. Maybe others know better.





[opencl_init] no suitable devices found.
[opencl_init] FINALLY: opencl is NOT AVAILABLE on this system.
[opencl_init] initial status of opencl enabled flag is OFF.
[iop_load_module] failed to open operation `OpenCL':
'dt_module_dt_version': /lib64/libOpenCL.so.1: undefined symbol:
dt_module_dt_version


Please double-check your installation. The last line indicates a
problem. Could there be a stray dynamic library of an older install
lying around?

It's a fresh, clean F25 install using wayland, so there shouldn't be any
old residual libs around.


The message indicates that your install of darktable tries to load 
/lib64/libOpenCL.so.1 as if it were one of darktable's modules. Please 
run 'darktable -d control' and attach the output.


Ulrich






Re: [darktable-user] OpenCL support on Fedora 25 using AMD R5

2016-11-29 Thread Ulrich Pegelow

Am 30.11.2016 um 00:23 schrieb Mark Heieis:

some darktable-cltest  output:

[opencl_init] found opencl runtime library 'libOpenCL.so.1'
[opencl_init] opencl library 'libOpenCL.so.1' found on your system and
loaded
[opencl_init] found 1 platform
[opencl_init] found 1 device


Your system does in fact support OpenCL.


[opencl_init] discarding device 0 `AMD CAICOS (DRM 2.46.0 /
4.8.8-300.fc25.x86_64, LLVM 3.8.0)' due to missing image support.


But your device lacks an important OpenCL feature (image support). 
Without that feature the corresponding device is of no use for darktable.



[opencl_init] no suitable devices found.
[opencl_init] FINALLY: opencl is NOT AVAILABLE on this system.
[opencl_init] initial status of opencl enabled flag is OFF.
[iop_load_module] failed to open operation `OpenCL':
'dt_module_dt_version': /lib64/libOpenCL.so.1: undefined symbol:
dt_module_dt_version


Please double-check your installation. The last line indicates a 
problem. Could there be a stray dynamic library of an older install 
lying around?




I'm using a AMD Radeon R5 230, which supports OpenCL 1.2 on Fedora 25 (I
jumped to F25 yesterday due to blowing my F24 system yesterday trying to
install fglrx)

So it appears that Darktable requires OpenCL 1.2


darktable (small letter d) works well with any OpenCL 1.x and OpenCL 
2.x version. As written above, we do have some minimum requirements for 
the devices, though.




This leaves me with a number of questions:

1) why is the activate OpenCL support ticked as enabled, when opencl is
clearly not functioning?


This is a known UI glitch.

Ulrich



2) I thought mesa had OpenCL 1.2 support, but apparently not?

3) how does one get OpenCL 1.2 on fedora using Radeon cards as there is
no specific fedora driver on AMD site?

or

4) what I'm I missing?

I did try and find some specific answers but wasn't successful, so
thanks for your patience and support.

Regards.






Re: [darktable-user] Chosing the optimal CPU for darktable

2016-11-27 Thread Ulrich Pegelow

Am 27.11.2016 um 16:25 schrieb Rico Heil:

It should be grayed out already.


It's not on my machine:


At least I am not able to see any difference.



Actually, it's not really visible. See the attached file for how it 
looks here. The difference is not very big, though. That's a style 
issue. I'm not even sure if the relevant settings are within darktable.


Ulrich





Re: [darktable-user] Chosing the optimal CPU for darktable

2016-11-27 Thread Ulrich Pegelow

Am 27.11.2016 um 16:04 schrieb Rico Heil:

Am 27.11.2016 um 15:38 schrieb Christian Kanzian:

Am Sonntag, 27. November 2016, 15:32:24 schrieb Rico Heil:

This discussion made me check the OpenCL paramters in my current
darktable installation.
"activate opencl support" is checked and I cannot uncheck it.
Does this mean I am forced to use OpenCL or does it mean my GPU does not
support OpenCL at all and the disabled control shows a default value
that's irrelevant for me?

It has been disabled. Newer dt versions do a quick test at startup for OpenCL.
If the test fails, you can't enable OpenCL.


Actually, I can't disable it.
Seems like a cosmetic bug to me: the checkmark should not be set


True, that's a minor UI issue. Fixing it requires a bit of thought...


and there should be some visual representation of the fact that the
control is disabled (i.e. it should be grayed).


It should be grayed out already.

Ulrich



At least I noticed the tooltip now, which states quite clearly "auf
diesem System nicht verfügbar" - "not available on this system".

Rico






Re: [darktable-user] Chosing the optimal CPU for darktable

2016-11-27 Thread Ulrich Pegelow

Am 27.11.2016 um 11:00 schrieb Rico Heil:

Am 26.11.2016 um 18:11 schrieb Niccolò Belli:

You will probably get better performance saving some bucks on the CPU
and buying a very fast GPU for OpenCL acceleration. Something like the
RADEON RX 480 would be an optimal solution because of FOSS drivers,
but you will have to use the AMDGPU-PRO driver until Clover Image
support gets mainlined (hopefully soon).


I was planning to use Intel chipset graphics without buying an
additional GPU at all.
Can darktable use OpenCL with those?


Intel graphics does not have a working OpenCL implementation for Linux 
systems. The free beignet driver has been around for quite some time, but 
it is still buggy and so far nobody has been able to get it working with 
darktable.



If yes: how much performance improvement would be expected if I add an
additional graphics board?


I doubt that integrated graphics chips will come close to the 
performance win you can expect from a decent NVIDIA or AMD graphics 
card. A modern graphics card in the cost range of 200 to 300€ will 
probably give you a speed-up of 3 to 10 times during export, compared to 
a modern CPU. Of course this depends on the modules you are using: the 
more computationally expensive they are, the bigger the gap. IMHO the 
most important point is the latency you experience during interactive 
work when you change module settings. YMMV, but here it makes a quite 
noticeable difference between waiting a bit with each parameter change 
and a mostly fluent way of working.


Ulrich






Re: [darktable-user] darktable 2.2.0 rc0 released

2016-11-13 Thread Ulrich Pegelow
Just a thought: we have encountered problems with the lens module when 
it comes to tiling. The root cause is a set of issues with the 
modify_roi_in() function in lens.c. Tiling doesn't play a role here, but 
the size of the buffer that modify_roi_in() calculates has a direct 
effect on processing time.


I haven't dug into all the details back then, but I assume that this part 
could be relevant:


// LensFun can return NAN coords, so we need to handle them carefully.
if(!isfinite(xm) || !(0 <= xm && xm < orig_w)) xm = 0;
if(!isfinite(xM) || !(1 <= xM && xM < orig_w)) xM = orig_w;
if(!isfinite(ym) || !(0 <= ym && ym < orig_h)) ym = 0;
if(!isfinite(yM) || !(1 <= yM && yM < orig_h)) yM = orig_h;

So in effect we request the whole image buffer for input if a NAN is 
detected. This is a safe fallback but it will lead to a longer 
processing time. Please also note that the section


xm = MIN(xm, bufptr[k + 0]);
xM = MAX(xM, bufptr[k + 0]);
ym = MIN(ym, bufptr[k + 1]);
yM = MAX(yM, bufptr[k + 1]);

is not robust when it comes to NaN: if there is a NaN in bufptr, the 
final output is implementation-dependent. fminf() and fmaxf() would 
be better alternatives.


As a quick check, the OP could test whether PR #1338 has an effect on the timing.

Ulrich


On 12.11.2016 at 23:01, junkyardspar...@yepmail.net wrote:



On Sat, Nov 12, 2016, at 03:52, Roman Lebedev wrote:


Please try adding 2 following lines:
fprintf(stderr, "%s roi in %d %d %d %d\n", self->name(), roi_in->x,
roi_in->y, roi_in->width, roi_in->height);
fprintf(stderr, "%s roi out %d %d %d %d\n", self->name(), roi_out->x,
roi_out->y, roi_out->width, roi_out->height);

For 2.0.x, here:
https://github.com/darktable-org/darktable/blob/darktable-2.0.x/src/iop/denoiseprofile.c#L1384

And for master, here
https://github.com/darktable-org/darktable/blob/master/src/iop/denoiseprofile.c#L1789
AND here 
https://github.com/darktable-org/darktable/blob/master/src/iop/denoiseprofile.c#L1800

And repeat that -d perf, zoom to 1:1 and move around.


Logfiles attached. I just patched 2.0.7 and the release candidate, if I need to 
actually pull from current master, let me know, but it might not happen until 
after the weekend.







Re: [darktable-user] OpenCL trouble, NVidia driver 367.44 "could not create context"

2016-10-03 Thread Ulrich Pegelow

Hi,

googling for "nvidia opencl error 209" gives a few hits. This one might 
give some indications:


https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=839193

Ulrich

On 03.10.2016 at 12:11, Michael Below wrote:

Hi,

today I did the upgrade to the nvidia driver version 367.44, coming
from 361.xx

After the upgrade, darktable can't enable OpenCL anymore, it says it
"could not create context for device", whatever that means.

Any ideas what to do?

I am running darktable 2.0.6 on Debian testing.

Cheers
Michael







Re: [darktable-user] open CL 2.0.6

2016-09-18 Thread Ulrich Pegelow

Hi,

the issue should now be fixed in master and in darktable-2.0.x branch.

Ulrich




Re: [darktable-user] open CL 2.0.6

2016-09-17 Thread Ulrich Pegelow

Hi,

short update from my side. Looks like I have found a way to restore the 
original OpenCL performance of NVIDIA devices with recent driver versions.


Currently we have some other issues with the OpenCL codepath in master 
which prevent me from working there. If this gets sorted out soon, I 
will apply the changes there. If the problems there persist longer, I'll 
make a dedicated patch into the darktable-2.0.x branch by tomorrow at 
the latest.


Ulrich

On 13.09.2016 at 06:12, I. Ivanov wrote:

Hi Guys,

Did anybody notice a slowdown since DT 2.0.6? I am on Ubuntu 16.04 64
bit with 8 GB RAM. I noticed somewhat slower performance of DT. I
experimented with turning off OpenCL and it looks like the speed
improved. My security patches are up to date as released by Ubuntu.

Is it only me?

Regards,

B






Re: [darktable-user] open CL 2.0.6

2016-09-16 Thread Ulrich Pegelow
Thanks. As in another case discussed before, your OpenCL system does not 
seem to be limited by memory transfer: host<->device transfers account 
for below 50% of the time spent in the pixelpipe, and I regard this as a 
healthy value. Changes in the memory transfer method don't help here, 
and you probably already get the maximum you can expect from that device.


(On a sidenote: for OpenCL benchmarking please run 'darktable -d opencl 
-d perf' rather than 'darktable -d all' because the latter produces too 
much junk output).


Ulrich



On 17.09.2016 at 03:21, I. Ivanov wrote:

I am attaching my logs

this is darktable 2.0.6
copyright (c) 2009-2016 johannes hanika
darktable-...@lists.darktable.org

compile options:
  bit depth is 64 bit
  normal build
  OpenMP support enabled
  OpenCL support enabled
  Lua support enabled, API version 3.0.0
  Colord support enabled
  gPhoto2 support enabled
  GraphicsMagick support enabled

CPU~Quad core Intel Core i7-2630QM (-HT-MCP-) speed/max~800/2900 MHz
Kernel~4.4.0-36-generic x86_64 Up~3 days Mem~3038.2/7877.1MB
HDD~500.1GB(73.3% used) Procs~289 Client~Shell inxi~2.2.35

Graphics:  Card-1: Intel 2nd Generation Core Processor Family Integrated
Graphics Controller
   Card-2: NVIDIA GF108M [GeForce GT 525M]
   Display Server: X.Org 1.18.3 driver: nvidia Resolution:
1920x1080@60.00hz, 1366x768@60.06hz
   GLX Renderer: GeForce GT 525M/PCIe/SSE2 GLX Version: 4.5.0
NVIDIA 361.42

https://drive.google.com/open?id=0B-ibE69DzumKSXh2VmtyQUJRV2c

clocked results for exporting 50 images.

open cl on pinned true - 10 min 30s
open cl on pinned false - 9 min 13s
open cl off - 8 min 15s

For me - it appears that *open cl off* is the fastest. I have no
explanation why I perceived that pinned=true is faster. It certainly
"looked" faster to me when in darktable mode. But the numbers are above.
I "think" what happened is that I increased the complexity of how many
modules I activate, and this somehow convinced me that DT slowed down. In
fact, what did happen is that my images became more complex and it simply
takes more time for DT to deal with them.

Hope this info is useful.

Thank you,

B





On 2016-09-16 01:08 PM, Michael Below wrote:

Hi,

another example from me. As far as I can see, pinning has a slightly
worse performance than the default.

My system:
CPU~Quad core AMD Phenom II X4 810 (-MCP-) speed~2600 MHz (max)
Kernel~4.6.0-1-amd64 x86_64 Up~3:45 Mem~2389.4/5956.0MB
HDD~3250.7GB(17.9% used) Procs~300 Client~Shell inxi~2.3.1

I think it would improve my use case most if the "atrous" module would
run on GPU. There seems to be some issue with tile size that makes the
equalizer module take e.g. 13 seconds on some images.

Cheers
Michael


On Fri 16 Sep 2016 07:37:45 CEST, Ulrich Pegelow wrote:


Thanks for sharing. Yours is a good example of an OpenCL system that
is not limited by host<->device memory transfers. In a typical export
job your system spends about 30% of its time in memory transfer, the
rest is pure computing. That's a very good situation in which pinned
memory does not give advantages - maybe even slow down a bit.

Others have systems which are purely limited by memory transfer. We
have reports of insane cases where over 95% of the OpenCL pixelpipe
is used by memory transfers. Those are the ones where
opencl_use_pinned_memory makes a real difference.

Ulrich

On 15.09.2016 at 22:11, KOVÁCS István wrote:

Hi,

Core2-Duo E6550 @ 2.33GHz +Nvidia GeForce GTX 650 / 2 GB, driver
361.42, OpenCL 1.2 CUDA, darktable 2.0.6 from PPA.
With pinned memory, performance is slightly (about 10%?) worse.
There are lines like
[opencl_profiling] spent  0,3774 seconds in [Map Buffer]
that are only seen in the 'pinned' log.
One notable difference after exporting 114 photos:
pinned = false gives
[opencl_summary_statistics] device 'GeForce GTX 650': 8960 out of
8960 events were successful and 0 events lost

pinned = true gives
[opencl_summary_statistics] device 'GeForce GTX 650': 9933 out of
9933 events were successful and 0 events lost

as one of the last lines in the output.
My opencl-related darktablerc entries:
opencl=TRUE
opencl_async_pixelpipe=false
opencl_avoid_atomics=false
opencl_checksum=2684983341
opencl_device_priority=*/!0,*/*/*
opencl_library=
opencl_memory_headroom=300
opencl_memory_requirement=768
opencl_micro_nap=1000
opencl_number_event_handles=25
opencl_omit_whitebalance=
opencl_size_roundup=16
opencl_synch_cache=false
opencl_use_cpu_devices=false
opencl_use_pinned_memory=false

The logs are at:
http://tech.kovacs-telekes.org/files/darktable-opencl-pinned-memory/

Thanks,
Kofa












Re: [darktable-user] open CL 2.0.6

2016-09-15 Thread Ulrich Pegelow
Thanks for sharing. Yours is a good example of an OpenCL system that is 
not limited by host<->device memory transfers. In a typical export job 
your system spends about 30% of its time on memory transfer; the rest is 
pure computing. That's a very good situation in which pinned memory does 
not give an advantage - it may even slow things down a bit.


Others have systems which are purely limited by memory transfer. We have 
reports of insane cases where over 95% of the OpenCL pixelpipe is used 
by memory transfers. Those are the ones where opencl_use_pinned_memory 
makes a real difference.


Ulrich

On 15.09.2016 at 22:11, KOVÁCS István wrote:

Hi,

Core2-Duo E6550 @ 2.33GHz +Nvidia GeForce GTX 650 / 2 GB, driver
361.42, OpenCL 1.2 CUDA, darktable 2.0.6 from PPA.
With pinned memory, performance is slightly (about 10%?) worse.
There are lines like
[opencl_profiling] spent  0,3774 seconds in [Map Buffer]
that are only seen in the 'pinned' log.
One notable difference after exporting 114 photos:
pinned = false gives
[opencl_summary_statistics] device 'GeForce GTX 650': 8960 out of 8960
events were successful and 0 events lost

pinned = true gives
[opencl_summary_statistics] device 'GeForce GTX 650': 9933 out of 9933
events were successful and 0 events lost

as one of the last lines in the output.
My opencl-related darktablerc entries:
opencl=TRUE
opencl_async_pixelpipe=false
opencl_avoid_atomics=false
opencl_checksum=2684983341
opencl_device_priority=*/!0,*/*/*
opencl_library=
opencl_memory_headroom=300
opencl_memory_requirement=768
opencl_micro_nap=1000
opencl_number_event_handles=25
opencl_omit_whitebalance=
opencl_size_roundup=16
opencl_synch_cache=false
opencl_use_cpu_devices=false
opencl_use_pinned_memory=false

The logs are at:
http://tech.kovacs-telekes.org/files/darktable-opencl-pinned-memory/

Thanks,
Kofa







Re: [darktable-user] open CL 2.0.6

2016-09-15 Thread Ulrich Pegelow

Thanks. That's a small advantage for opencl_use_pinned_memory=TRUE.

Still, even with the flag set to TRUE we lose quite some time in the 
host_memory->device_memory step. I expect that I can change our code to 
get some further improvements there in the next few days.


Ulrich

On 15.09.2016 at 16:10, Chester wrote:

ULrich: here are my config and the two files:

darktable --version
this is darktable 2.0.6
copyright (c) 2009-2016 johannes hanika
darktable-...@lists.darktable.org

compile options:
  bit depth is 64 bit
  normal build
  OpenMP support enabled
  OpenCL support enabled
  Lua support enabled, API version 3.0.0
  Colord support enabled
  gPhoto2 support enabled
  GraphicsMagick support enabled

inxi
CPU~Quad core Intel Core i7 930 (-HT-MCP-) speed/max~1600/2801 MHz
Kernel~4.4.0-36-generic x86_64 Up~1:50 Mem~2072.5/20069.5MB
HDD~3768.8GB(3.6% used) Procs~268 Client~Shell inxi~2.2.35

inxi -G
Graphics:  Card: NVIDIA GF106 [GeForce GTS 450]
   Display Server: X.Org 1.18.3 drivers: nvidia (unloaded:
fbdev,vesa,nouveau)
   Resolution: 1680x1050@59.88hz, 1920x1200@59.95hz
   GLX Renderer: GeForce GTS 450/PCIe/SSE2
   GLX Version: 4.5.0 NVIDIA 361.42







Re: [darktable-user] open CL 2.0.6

2016-09-15 Thread Ulrich Pegelow

On 15.09.2016 at 09:31, Tobias Ellinghaus wrote:

With a speed difference like that, couldn't we run a small benchmark at init
time (we already compare the speed to the CPU) and set the flag accordingly at
runtime?


Probably we should.

Ulrich




Re: [darktable-user] open CL 2.0.6

2016-09-14 Thread Ulrich Pegelow
Well, in your case I see the differences as only marginal - the time 
spent in the OpenCL pixelpipe differs by only 2% between the two settings 
(in favor of TRUE). I'm not sure the differences would persist if you 
repeated the profiling several times to average out fluctuations.


So it seems that your combination of GPU and driver does not profit from 
the opencl_use_pinned_memory flag. But in your case it would not harm 
either to change the default to TRUE.


To others: I am interested to see if there are systems where 
opencl_use_pinned_memory=TRUE gives a heavy negative impact on performance.


Ulrich

On 15.09.2016 at 06:00, Jack Bowling wrote:

On 09/14/2016 09:56 AM, Ulrich Pegelow wrote:

Well, there obviously is an issue with OpenCL and NVIDIA. However, a
quick check reveals that this is not related to 2.0.6 versus 2.0.5.

In fact it seems that NVIDIA made some changes to their drivers in the
way they handle memory transfers over the PCIe interface.

There is a quick fix for that in darktable. You can switch the config
variable opencl_use_pinned_memory to TRUE (it can be found in darktablerc).
At least here on my machine this makes a difference of up to a factor of 30
(oldish GeForce GTS 450 and 367.35 driver).



Setting pinned_memory=true leads to slower render times on my box. Here
is system info on my fully updated Ubuntu 16.04 box:

$ darktable --version
this is darktable 2.0.6
copyright (c) 2009-2016 johannes hanika
darktable-...@lists.darktable.org

compile options:
  bit depth is 64 bit
  normal build
  OpenMP support enabled
  OpenCL support enabled
  Lua support enabled, API version 3.0.0
  Colord support enabled
  gPhoto2 support enabled
  GraphicsMagick support enabled

$ inxi
CPU~Octa core AMD FX-8300 Eight-Core (-MCP-) speed/max~1400/3300 MHz
Kernel~4.4.0-36-generic x86_64 Up~8 days Mem~2495.3/32090.4MB
HDD~23734.6GB(33.4% used) Procs~340 Client~Shell inxi~2.2.35

$ inxi -G
Graphics:  Card: NVIDIA GK107 [GeForce GT 740]
   Display Server: X.Org 1.18.3 drivers: nvidia (unloaded:
fbdev,vesa,nouveau)
   Resolution: 2560x1440@59.95hz
   GLX Renderer: GeForce GT 740/PCIe/SSE2
   GLX Version: 4.5.0 NVIDIA 361.42

Here is the relevant paste from my darktable config:

opencl=TRUE
opencl_async_pixelpipe=false
opencl_avoid_atomics=false
opencl_checksum=4188966525
opencl_device_priority=*/!0,*/*/*
opencl_library=
opencl_memory_headroom=1000
opencl_memory_requirement=768
opencl_micro_nap=1000
opencl_number_event_handles=25
opencl_omit_whitebalance=
opencl_size_roundup=16
opencl_synch_cache=false
opencl_use_cpu_devices=false
opencl_use_pinned_memory=false

Note the high headroom necessary to prevent atrous dumping to CPU.

Attached are two text files of "darktable -d opencl -d perf" output, one
with pinned_memory=true and one with pinned_memory=false.

Jack




darktable user mailing list
to unsubscribe send a mail to darktable-user+unsubscr...@lists.darktable.org



Re: [darktable-user] open CL 2.0.6

2016-09-14 Thread Ulrich Pegelow
The flag will mostly affect situations where tiling comes into play. 
That's the case for GPUs with low memory and for modules with high 
memory demand (e.g. equalizer).


On 14.09.2016 at 19:39, Colin Adams wrote:

Yes, it's working now. I guess the crash must have been a coincidence.
Slower if anything, but there isn't much in it. I can't be sure.
Running with NVIDIA 370.28 driver.







Re: [darktable-user] open CL 2.0.6

2016-09-14 Thread Ulrich Pegelow
Do I understand correctly that you can run with the flag set to TRUE? 
What are your findings in terms of speed improvements (if any)?


On 14.09.2016 at 19:34, Colin Adams wrote:

No.
Doesn't happen anymore.

On Wed, 14 Sep 2016 at 18:26 Ulrich Pegelow <ulrich.pege...@tongareva.de> wrote:

Any backtrace?

On 14.09.2016 at 19:12, Colin Adams wrote:
> It causes darktable 2.0.5 (Fedora) to crash. Switching back to false
> cures the problem. So please don't change.






Re: [darktable-user] open CL 2.0.6

2016-09-14 Thread Ulrich Pegelow

Any backtrace?

On 14.09.2016 at 19:12, Colin Adams wrote:

It causes darktable 2.0.5 (Fedora) to crash. Switching back to false
cures the problem. So please don't change.

On Wed, 14 Sep 2016 at 17:56 Ulrich Pegelow <ulrich.pege...@tongareva.de> wrote:

Well, there obviously is an issue with OpenCL and NVIDIA. However, a
quick check reveals that this is not related to 2.0.6 versus 2.0.5.

In fact it seems that NVIDIA made some changes to their drivers in the
way they handle memory transfers over the PCIe interface.

There is a quick fix for that in darktable. You can switch the config
variable opencl_use_pinned_memory to TRUE (it can be found in darktablerc).
At least here on my machine this makes a difference of up to a factor of 30
(oldish GeForce GTS 450 and 367.35 driver).

Background: that switch controls the way memory is transferred between host
and the OpenCL device, namely the use of pre-pinned memory. When the flag
was introduced it only brought improvements on AMD/ATI devices, while at
that time NVIDIA devices showed no effect or a slightly negative one.
Therefore the flag is set to FALSE by default. It seems that newer NVIDIA
drivers get extremely slow if the default non-pinned memory transfer
method is used.

If my findings are confirmed we will change the default setting of that
flag for new installations. Users of existing installations will need to
change the config flag manually.

Please check and report back.

Ulrich







Re: [darktable-user] open CL 2.0.6

2016-09-14 Thread Ulrich Pegelow
Well, there obviously is an issue with OpenCL and NVIDIA. However, a 
quick check reveals that this is not related to 2.0.6 versus 2.0.5.


In fact it seems that NVIDIA made some changes to their drivers in the 
way they handle memory transfers over the PCIe interface.


There is a quick fix for that in darktable. You can switch the config 
variable opencl_use_pinned_memory to TRUE (it can be found in darktablerc). 
At least here on my machine this makes a difference of up to a factor of 30 
(oldish GeForce GTS 450 and 367.35 driver).


Background: that switch controls the way memory is transferred between host 
and the OpenCL device, namely the use of pre-pinned memory. When the flag 
was introduced it only brought improvements on AMD/ATI devices, while at 
that time NVIDIA devices showed no effect or a slightly negative one. 
Therefore the flag is set to FALSE by default. It seems that newer NVIDIA 
drivers get extremely slow if the default non-pinned memory transfer 
method is used.
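For readers wondering what the flag changes under the hood: pinned (page-locked) host memory lets the driver DMA directly to the device instead of staging through pageable memory. A rough sketch of the two transfer paths in plain OpenCL follows; this is not darktable's actual code, and ctx, queue, dev_buf, host_ptr and size are assumed to exist (with <CL/cl.h> and <string.h> included):

```c
cl_int err;

/* Path 1 (opencl_use_pinned_memory=false): write straight from ordinary
 * pageable host memory; the driver may copy through an internal staging
 * area first, which appears to have become slow on newer drivers. */
err = clEnqueueWriteBuffer(queue, dev_buf, CL_TRUE, 0, size,
                           host_ptr, 0, NULL, NULL);

/* Path 2 (opencl_use_pinned_memory=TRUE): allocate a host-visible buffer
 * the driver can pin, map it, fill it, and transfer from the pinned
 * region. */
cl_mem pinned = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
                               size, NULL, &err);
void *staging = clEnqueueMapBuffer(queue, pinned, CL_TRUE, CL_MAP_WRITE,
                                   0, size, 0, NULL, NULL, &err);
memcpy(staging, host_ptr, size);              /* fill pinned staging area */
err = clEnqueueWriteBuffer(queue, dev_buf, CL_FALSE, 0, size,
                           staging, 0, NULL, NULL);
clEnqueueUnmapMemObject(queue, pinned, staging, 0, NULL, NULL);
```

Which path is faster depends entirely on the driver, which is why the flag exists rather than one path being hard-coded.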


If my findings are confirmed we will change the default setting of that 
flag for new installations. Users of existing installations will need to 
change the config flag manually.


Please check and report back.

Ulrich

On 14.09.2016 at 00:47, I. Ivanov wrote:

I find it strange...

When I upgraded from 14.04 to 16.04 - DT was at version 2.0.5 and nvidia
361. I actually experienced speed "gain" - did not clock it but it was
very noticeable. I worked in this state for several weeks - all happy,
no changes in settings to DT.

2.0.6 was installed on 2016-09-06. I noticed it works but did not test
any further. Then I didn't use the computer till 11th. Installed the
following updates

2016-09-11 19:56:16 status installed gnome-menus:amd64 3.13.3-6ubuntu3.1
2016-09-11 19:56:16 status installed desktop-file-utils:amd64 0.22-1ubuntu5
2016-09-11 19:56:17 status installed mime-support:all 3.59ubuntu1
2016-09-11 19:56:17 status installed bamfdaemon:amd64
0.5.3~bzr0+16.04.20160701-0ubuntu1
2016-09-11 19:56:17 status installed man-db:amd64 2.7.5-1
2016-09-11 19:56:18 status installed libc-bin:amd64 2.23-0ubuntu3
2016-09-11 19:56:18 status installed dbus:amd64 1.10.6-1ubuntu3
2016-09-11 19:56:18 status installed gconf2:amd64 3.2.6-3ubuntu6
2016-09-11 19:56:18 status installed hicolor-icon-theme:all 0.15-0ubuntu1
2016-09-11 19:56:18 status installed libglib2.0-0:i386
2.48.1-1~ubuntu16.04.1
2016-09-11 19:56:18 status installed libglib2.0-0:amd64
2.48.1-1~ubuntu16.04.1
2016-09-11 19:56:18 status installed sgml-base:all 1.26+nmu4ubuntu1
2016-09-11 19:56:19 status installed google-chrome-stable:amd64
53.0.2785.101-1
2016-09-11 19:56:19 status installed libp11-kit0:amd64
0.23.2-5~ubuntu16.04.1
2016-09-11 19:56:19 status installed libp11-kit0:i386 0.23.2-5~ubuntu16.04.1
2016-09-11 19:56:19 status installed p11-kit-modules:amd64
0.23.2-5~ubuntu16.04.1
2016-09-11 19:56:19 status installed libaccountsservice0:amd64
0.6.40-2ubuntu11.2
2016-09-11 19:56:19 status installed accountsservice:amd64
0.6.40-2ubuntu11.2
2016-09-11 19:56:19 status installed file-roller:amd64 3.16.5-0ubuntu1.2
2016-09-11 19:56:19 status installed gnome-font-viewer:amd64 3.16.2-1ubuntu1
2016-09-11 19:56:19 status installed libappstream-glib8:amd64
0.5.13-1ubuntu3
2016-09-11 19:56:19 status installed libimlib2:amd64 1.4.7-1ubuntu0.1
2016-09-11 19:56:19 status installed metacity-common:all 1:3.18.7-0ubuntu0.1
2016-09-11 19:56:19 status installed libmetacity-private3a:amd64
1:3.18.7-0ubuntu0.1
2016-09-11 19:56:19 status installed libnm-gtk-common:all
1.2.0-0ubuntu0.16.04.4
2016-09-11 19:56:19 status installed libnm-gtk0:amd64 1.2.0-0ubuntu0.16.04.4
2016-09-11 19:56:19 status installed libnma-common:all
1.2.0-0ubuntu0.16.04.4
2016-09-11 19:56:19 status installed libnma0:amd64 1.2.0-0ubuntu0.16.04.4
2016-09-11 19:56:19 status installed network-manager-gnome:amd64
1.2.0-0ubuntu0.16.04.4
2016-09-11 19:56:21 status installed snapd:amd64 2.14.2~16.04
2016-09-11 19:56:21 status installed p11-kit:amd64 0.23.2-5~ubuntu16.04.1
2016-09-11 19:56:21 status installed libc-bin:amd64 2.23-0ubuntu3

and noticed a drop in performance - mainly when using the darkroom (not
so much the lighttable). I took the chance to turn off OpenCL and the
performance improved. After reading the thread

https://www.mail-archive.com/darktable-dev@lists.darktable.org/msg01176.html

Tried to compare with export of a single image.
without open CL - 23s for about 20 MB RAW.
Same image - no change
with open CL - 41s

The OS and the images are stored on SSD so the networking does not come
into play.
I can work without open CL - it is not a deal breaker but the behavior
is surprising.

Regards,
B






Re: [darktable-user] OpenCL and multiple GPUs

2016-08-14 Thread Ulrich Pegelow

Hi,

recent versions of darktable only export one image at a time. Therefore 
only the first GPU gets used. In your case it's probably the faster one, 
so all is good.


In the past darktable allowed a user-selectable number of parallel 
export jobs. However, this has been deprecated for two reasons. Firstly, 
no speed gain on the CPU is to be expected, as darktable already uses 
parallel processing heavily within each export job. Secondly, each 
export job "instance" consumes a high amount of system memory, and we 
frequently saw users getting into out-of-memory situations.
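For reference, which device serves which pixelpipe is controlled by the opencl_device_priority entry in darktablerc. A short annotated sketch of its syntax as I understand it (check the darktable OpenCL documentation for your version):

```
# Four '/'-separated groups, one per pixelpipe type, in the order:
#   <center view (image)>/<preview>/<export>/<thumbnail>
# '*' matches any device, '!N' excludes device number N, and
# comma-separated entries are tried in order.
# The value below allows any device for the center view, export and
# thumbnails, but keeps device 0 away from the preview pipe:
opencl_device_priority=*/!0,*/*/*
```

With only one export job at a time, the export group never reaches the second device regardless of this setting.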


Ulrich

On 14.08.2016 at 21:00, Aleksey Kunitskiy wrote:

Hi all,

I'm trying to utilize both of my two GPUs on the system where darktable
is installed. While I see that they're somehow both used while I work in
the lighttable, I can't catch any activity of the second GPU while
exporting a series of images:

[opencl_init] here are the internal numbers and names of OpenCL devices
available to darktable:
[opencl_init]   0   'GeForce GTX 550 Ti'
[opencl_init]   1   'GeForce GTS 450'
[opencl_init] these are your device priorities:
[opencl_init]   image   preview export thumbnail
[opencl_init]   0   1   0   0
[opencl_init]   1   -1  1   1
[opencl_init] FINALLY: opencl is AVAILABLE on this system.

[pixelpipe_process] [export] using device 0
[pixelpipe_process] [export] using device 0
[pixelpipe_process] [export] using device 0
[pixelpipe_process] [export] using device 0
[opencl_summary_statistics] device 'GeForce GTX 550 Ti': 7240 out of 7240 
events were successful and 0 events lost
[opencl_summary_statistics] device 'GeForce GTS 450': NOT utilized


Does anybody know what is wrong with my setup?
My configuration of opencl is as follows:

opencl=TRUE
opencl_async_pixelpipe=TRUE
opencl_avoid_atomics=false
opencl_checksum=1643215135
opencl_device_priority=*/!0,*/*/*
opencl_library=
opencl_memory_headroom=512
opencl_memory_requirement=768
opencl_micro_nap=1000
opencl_number_event_handles=25
opencl_omit_whitebalance=false
opencl_runtime=
opencl_size_roundup=16
opencl_synch_cache=false
opencl_use_cpu_devices=false
opencl_use_events=true
opencl_use_pinned_memory=false
parallel_export=4







Re: [darktable-user] OpenCL on Intel Core i7-6770HQ with Iris Pro 580

2016-05-26 Thread Ulrich Pegelow

On 26.05.2016 at 21:58, Peter Mc Donough wrote:

Is there any information available somewhere on which OpenCL
implementations/Linux kernels presently support darktable?
AMD/Nvidia/Intel.


We have good experience with AMD and Nvidia. The proprietary drivers of 
both vendors run well with a noticeable speed-up (depending on hardware, 
of course). As said, experience with Intel is not good due to the lack 
of working OpenCL drivers on Linux platforms.


Ulrich






Re: [darktable-user] OpenCL on Intel Core i7-6770HQ with Iris Pro 580

2016-05-26 Thread Ulrich Pegelow

Hi,

only your CPU has been detected by your OpenCL driver. We do not use 
OpenCL on CPU as it would bring a severe slow-down of darktable compared 
to our hand-optimized code.


Your OpenCL setup does not seem to take notice of the Iris Pro. But even 
if it did: our experience with the (free) Beignet OpenCL implementation 
for Intel GPUs in its current state is discouraging. There is little 
chance that it will work.


Ulrich

On 26.05.2016 at 19:16, Christian von Kietzell wrote:

Hi,

this is the output:

[opencl_init] opencl related configuration options:
[opencl_init]
[opencl_init] opencl: 1
[opencl_init] opencl_library: ''
[opencl_init] opencl_memory_requirement: 768
[opencl_init] opencl_memory_headroom: 300
[opencl_init] opencl_device_priority: '*/!0,*/*/*'
[opencl_init] opencl_size_roundup: 16
[opencl_init] opencl_async_pixelpipe: 0
[opencl_init] opencl_synch_cache: 0
[opencl_init] opencl_number_event_handles: 25
[opencl_init] opencl_micro_nap: 1000
[opencl_init] opencl_use_pinned_memory: 0
[opencl_init] opencl_use_cpu_devices: 0
[opencl_init] opencl_avoid_atomics: 0
[opencl_init] opencl_omit_whitebalance: 0
[opencl_init]
[opencl_init] found opencl runtime library 'libOpenCL'
[opencl_init] opencl library 'libOpenCL' found on your system and loaded
[opencl_init] found 1 platform
[opencl_init] found 1 device
[opencl_init] discarding CPU device 0 `Intel(R) Core(TM) i7-6770HQ CPU @
2.60GHz'.
[opencl_init] no suitable devices found.
[opencl_init] FINALLY: opencl is NOT AVAILABLE on this system.
[opencl_init] initial status of opencl enabled flag is OFF.

Any hints?


Cheers,
   Chris







Re: [darktable-user] OpenCL on Intel Core i7-6770HQ with Iris Pro 580

2016-05-26 Thread Ulrich Pegelow

Hi,

please post what 'darktable -d opencl' says.

Ulrich

On 26.05.2016 at 11:06, Christian von Kietzell wrote:

Hello,

I tried to get darktable to use OpenCL on my shiny new system. It sports
an Intel Core i7-6770HQ CPU with an Iris Pro 580 GPU.

Darktable recognised the device after I installed Intel's drivers. But
running darktable with "-d opencl" shows that it discards the device. Is
that because it wouldn't provide any performance gain? Darktable didn't
give a specific reason.

If not, is there some other way I can get more information on why dt
doesn't want to use OpenCL on this system?


Cheers,
   Chris






Re: [darktable-user] Fwd: OpenCL not using all NVidia Optimus Memory?

2016-03-21 Thread Ulrich Pegelow

Hi,

your problem is related to this here:

[opencl_profiling] spent  6.8647 seconds in [Write Image (from host to 
device)]


(taken from one of the OpenCL runs in your attached debug output).

This figure represents the time needed to transfer data from host memory 
(your main RAM) to the graphics card memory. This processing step is 
inherently slow on all OpenCL systems, as the data has to travel through 
the comparatively slow PCI bus. However, in your case it's really slow.


I am seeing something similar, albeit not to that extent, with an older 
GeForce GTS 450 which I use here in a dual-GPU system (about 1.5s in 
Write Image...). My primary GPU, an AMD HD7950, runs at high speed 
(about 0.02s in Write Image...). Not sure if this is related to the 
dual-GPU setup, though.


Maybe some of the other users of some low to mid-end Nvidia GPUs could 
report their figures.


Ulrich


On 21.03.2016 at 13:32, Jamie Kitson wrote:

Hi,

I have an Asus UX31VD with 10GB RAM, an i7-3517U, and both Intel HD 4000
and NVidia GeForce GT 620M GPUs. When I switch OpenCL on in Darktable it
runs much slower; I think you can see that from these numbers:

[dev_pixelpipe] took 24.632 secs (22.407 CPU) processed `lens
correction' on GPU with tiling, blended on CPU [export]

[dev_pixelpipe] took 4.648 secs (17.720 CPU) processed `lens correction'
on CPU with tiling, blended on CPU [export]


[dev_pixelpipe] took 0.264 secs (0.320 CPU) processed `shadows and
highlights' on GPU, blended on GPU [full]

[dev_pixelpipe] took 0.050 secs (0.167 CPU) processed `shadows and
highlights' on CPU, blended on CPU [full]


Reading the Darktable OpenCL post [1], it seems that this could be caused
by a lack of graphics memory. According to various sources (Windows
included) my machine has 2GB of graphics RAM, and according to the
Darktable OpenCL post, Darktable itself won't try to use a GPU with less
than 1GB(?) of memory. However, the only ways I know of checking the
amount of graphics memory in Linux show less than 2GB, e.g.:

clinfo:
   Global memory size                  1073479680 (1024MiB)
   Max memory allocation               268369920 (255.9MiB)
   Unified memory for Host and Device  No
   Integrated memory (NV)              No
   Global Memory cache type            Read/Write
   Global Memory cache size            32768
   Global Memory cache line            128 bytes
   Local memory type                   Local
   Local memory size                   49152 (48KiB)
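As a back-of-the-envelope sketch, the "Max memory allocation" figure above caps how many pixels fit in a single GPU buffer, which is one reason large images end up processed "with tiling". The 4 x float32 pixel format used below is an assumption about darktable's internal pipeline, not something stated in this thread:

```python
max_alloc = 268_369_920       # "Max memory allocation" from clinfo, in bytes
bytes_per_pixel = 4 * 4       # assumed: 4 x float32 per pixel
max_pixels = max_alloc // bytes_per_pixel
print(f"largest single buffer: ~{max_pixels / 1e6:.1f} MPix")
# A module needs at least input and output buffers (plus extra margins
# for modules like lens correction), so the practical limit is lower.
```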

lspci:
 Memory at f600 (32-bit, non-prefetchable) [size=16M]
 Memory at e000 (64-bit, prefetchable) [size=256M]
 Memory at f000 (64-bit, prefetchable) [size=32M]

lshw:
resources: irq:16 memory:f600-f6ff
memory:e000-efff memory:f000-f1ff
ioport:e000(size=128) memory:f700-f707

Darktable:
[opencl_init] device 0 `GeForce GT 620M' allows GPU memory allocations
of up to 255MB
[opencl_init] device 0: GeForce GT 620M
  GLOBAL_MEM_SIZE:  1024MB
  MAX_WORK_GROUP_SIZE:  1024
  MAX_WORK_ITEM_DIMENSIONS: 3
  MAX_WORK_ITEM_SIZES:  [ 1024 1024 64 ]
  DRIVER_VERSION:   361.28
  DEVICE_VERSION:   OpenCL 1.1 CUDA

Actually dmesg does report 2GB:
[1.965859] [drm] Memory usable by graphics device = 2048M

So my questions are:
Am I right in thinking that the Darktable OpenCL slowness is likely down
to a lack of video memory?

From what I've read on the internet it seems that many people run
Darktable using OpenCL on Optimus systems without this issue; can anyone
vouch for that?
Does anyone have any idea how I can either a) get Linux/OpenCL to
recognise how much video memory I really have, or b) speed up Darktable
with OpenCL?
Would the AMD/ATI instructions [2] help in my case?

Thanks, Jamie Kitson

