Ulrich Pegelow wrote:
Hi!
> Forgot to mention. If you have other applications which consume
> significant amounts of GPU memory, this could also cause OpenCL in
> darktable to fail. Unfortunately there is no way to find out at any
> time how much GPU memory is still available. Therefore
>
Forgot to mention. If you have other applications which consume
significant amounts of GPU memory, this could also cause OpenCL in
darktable to fail. Unfortunately there is no way to find out at any time
how much GPU memory is still available. Therefore darktable
assumes it can have all m
Hi,
then the only remaining option I can see is to test another driver
version, assuming the driver is the root cause. I am currently running
version 346.59 successfully (although my GPU is an ancient GTS 450).
Ulrich
BTW here are my settings:
[opencl_init] opencl: 1
[opencl_init]
Hi,
thanks. 500 leads to
...
[opencl_pixelpipe] couldn't copy image to opencl device for module colorin
[opencl_pixelpipe] failed to run module 'colorin'. fall back to cpu path
[opencl_pixelpipe (b)] late opencl error detected while copying back to
cpu buffer: -5
[opencl] frequent opencl errors e
You could try an even higher value of opencl_memory_headroom, e.g. 500.
If this does not help, please try lower settings for
opencl_event_handles.
Ulrich
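For reference, a minimal sketch of making both changes from a shell, assuming
the keys appear under exactly those names in $HOME/.config/darktable/darktablerc
and that darktable is closed while you edit:

  # back up the config first; darktable should not be running
  cp ~/.config/darktable/darktablerc ~/.config/darktable/darktablerc.bak
  sed -i 's/^opencl_memory_headroom=.*/opencl_memory_headroom=500/' ~/.config/darktable/darktablerc
  # check the current value before picking a lower one
  grep opencl_event_handles ~/.config/darktable/darktablerc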
On 31.05.2015 at 23:37, joeni wrote:
> Hi,
>
> thanks. Tried that. Same, or similar, result (and I'm seeing colour
> effects where parts o
Hi,
thanks. Tried that. Same, or similar, result (and I'm seeing colour
effects where parts of some lines of pixels anywhere on the screen turn
strange colours)
(350)
...
dev] took 0.000 secs (-0.000 CPU) to load the image.
[pixelpipe_process] [full] using device 0
[dev_pixelpipe] took 0.010 se
Hi,
the first thing you should try is increasing opencl_memory_headroom to 350 or 400.
Ulrich
On 31.05.2015 at 14:33, joeni wrote:
> Hi,
>
> not sure this is the right place to send this to: I'm having trouble
> with OpenCL and my new GPU (darktable 1.6.6, GeForce GT 730, Ubuntu
> 15.04). What shall
Hi,
not sure this is the right place to send this to: I'm having trouble
with OpenCL and my new GPU (darktable 1.6.6, GeForce GT 730, Ubuntu
15.04). What shall I do? Thanks a lot!
darktable -d opencl
[opencl_init] opencl related configuration options:
[opencl_init]
[opencl_init] opencl: 1
[open
Hi,
> But now I have updated my GNOME version (from 3.12 to 3.14) and
> gnome-shell was always crashing with the ati-drivers,
> so I am now using the radeon (free) drivers in X11 (with kms).
I'm afraid you have to decide: either use the free radeon
driver, or use fglrx (the "ati-drivers"). You can't use b
On 2015-03-19 19:59, Wolfgang Goetz wrote:
> Taahir Ahmed wrote:
>
>> I recall having a similar error, unrelated to darktable. The problem
>> may be with one of the /dev/ati* devices -- it either needs to be
>> world readable/writable, or the user accessing needs to have a
>> certain capability [1
Taahir Ahmed wrote:
> I recall having a similar error, unrelated to darktable. The problem
> may be with one of the /dev/ati* devices -- it either needs to be
> world readable/writable, or the user accessing needs to have a
> certain capability [1]. Unfortunately, I've forgotten precisely which
>
I recall having a similar error, unrelated to darktable. The problem may be
with one of the /dev/ati* devices -- it either needs to be world
readable/writable, or the user accessing needs to have a certain capability
[1]. Unfortunately, I've forgotten precisely which capability it was.
Taahir
Hi,
I had been using darktable for more than a year without any OpenCL issues (on
my ATI 7950 XT with 2 GB of RAM).
I was using the ati-driver.
But now I have updated my GNOME version (from 3.12 to 3.14) and gnome-shell
was always crashing with the ati-drivers,
so I am now using the radeon (free) drivers in X11.
...gentoo!
default/linux/amd64/13.0/desktop/gnome/systemd
=media-gfx/darktable- **
=x11-drivers/nvidia-drivers-346.22
HINT: do *not* run other x11 programs besides darktable(-cli)!
running:
time darktable-cli t.nef t.jpg --core -d opencl
logfiles attached:
dt-cli-good.log: only gnome-ter
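For reference, a sketch of capturing such a run to a log file (the log file name
is only illustrative; darktable-cli prints its -d output to the terminal, so both
streams are redirected):

  time darktable-cli t.nef t.jpg --core -d opencl 2>&1 | tee dt-cli.log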
I don't have any access to the system you describe, so all I can do is
guess. Most likely the problems are caused by the OpenCL compiler
implementation for the specific hardware. At least the unusual number of
compiler warnings points in that direction - neither AMD nor NVIDIA
show these warni
Somewhat. There were some efforts put into OpenCL on OS X, but it seems
they weren't enough. OpenCL is disabled by default on OS X for that
reason. If you're interested in helping fix it, I suggest creating a
bug report in redmine and attaching output of
/Applications/darktable.app/Contents/MacOS/d
... doesn't seem to work. If I switch this on, some of my thumbnails
turn green and the images are totally blurred in Darkroom. This is an
early 2014 retina MacBookPro with Intel Iris graphics.
Is this expected?
On 05.11.2014 at 19:22, parafin wrote:
> Try disabling option "Use GPU acceleration via Clutter library" in
> Edit->Preferences->Preferences...->Image
First of all I have to correct myself: The issue happens even with geeqie idle.
At least when geeqie is idle in the sense that I only have opene
Try disabling option "Use GPU acceleration via Clutter library" in
Edit->Preferences->Preferences...->Image
On Wed, 05 Nov 2014 16:49:39 +0100
Matthias Bodenbinder wrote:
> On 03.11.2014 at 21:09, Dariusz J. Garbowski wrote:
> > Shot in the dark -- try and disable "Preload next image" and make
OK, thanks for testing.
Then I would say that - given the inherent limitations of GPU memory -
darktable does a pretty decent job :)
We cannot influence other applications claiming GPU memory. All we can
do is deal with this situation - fall back to CPU but continue trying to
use the GPU if po
On 03.11.2014 at 21:09, Dariusz J. Garbowski wrote:
> Shot in the dark -- try and disable "Preload next image" and make "Decoded
> image cache size" a small
> value in Preferences -> General in Geeqie?
I did that and it does not help. I also checked what an idle geeqie will do,
as suggested
Interesting observation! Does your issue only happen while you are
actively working with geeqie or does it also occur when geeqie is idle?
Ulrich
On 03.11.2014 at 12:30, Matthias Bodenbinder wrote:
> I made some more tests also with newest NVIDIA driver 343.22. What I found so
> far is that the
Shot in the dark -- try and disable "Preload next image" and make "Decoded
image cache size" a small
value in Preferences -> General in Geeqie?
Regards,
Dariusz
On 03/11/14 11:30 AM, Matthias Bodenbinder wrote:
> I made some more tests also with newest NVIDIA driver 343.22. What I found so
>
I don't know ristretto and gwenview, but this might be related to OpenGL.
Do you know which of these three might use OpenGL for rendering?
On Mon, Nov 3, 2014 at 12:43 PM, Matthias Bodenbinder <
[email protected]> wrote:
> With gwenview I can not even provoke the error with
> opencl_memory_h
With gwenview I can not even provoke the error with opencl_memory_headroom: 300
Matthias
I made some more tests also with newest NVIDIA driver 343.22. What I found so
far is that the picture viewer geeqie seems to play an important role in that.
When watching JPG pictures with geeqie while exporting from DT I get the memory
issue as soon as geeqie has opened a few pictures. When I q
Hi Matthias,
unfortunately I did not find anything helpful in there. I had the faint
hope that you might be using some seldom-used module that might have a
memory leak in its OpenCL code - this would have explained your issues.
However, nothing like that is the case. You have the same modules
On 21.10.2014 at 07:12, Ulrich Pegelow wrote:
> @Matthias:
>
> In order to better understand the issues on your system I'd like you to
> generate a full debug output with 'darktable -d opencl -d perf'. I would
> like to see everything from the start and all output during export until
> the pro
@Matthias:
In order to better understand the issues on your system I'd like you to
generate a full debug output with 'darktable -d opencl -d perf'. I would
like to see everything from the start and all output during export until
the problem occurs.
Best wishes
Ulrich
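A minimal way to capture that full output into a file for attaching (assuming
the debug messages go to the terminal; the log file name is only illustrative):

  darktable -d opencl -d perf 2>&1 | tee darktable-debug.log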
On 20.10.2014 at 18:23
* Patrick Shanahan [10-20-14 09:59]:
> * Matthias Bodenbinder [10-20-14 01:24]:
> > On 19.10.2014 at 21:32, Patrick Shanahan wrote:
> > >
> > > just installed 340.46 but on openSUSE-Factory and using darktable from
> > > git. Will report anomalies if observed.
> > >
> > Can you please export
Some more input from my side. I tried to reproduce your issue on my AMD
Radeon HD7950. For simplicity I exported 340 duplicates of the
same image. All with profiled denoise active. I took an image big enough
to force tiling in that module.
All images were exported without any problems.
In terms of numbers this fits. The tiling code splits this image
processing step as the full image would be too big. The tile to be
processed is 5172 x 3666 pixels in size. This requires an image buffer
of 289MB. We need four full buffers (in, out, tmp, U2) and we need
four additional small buf
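As a quick sanity check of those numbers (assuming 4 float channels of 4 bytes
per pixel, which is what the figure above implies):

  echo $((5172 * 3666 * 4 * 4))           # 303368832 bytes per full buffer
  echo $((5172 * 3666 * 4 * 4 / 1048576)) # ~289 MiB, so four full buffers need ~1.1 GiB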
On 20.10.2014 at 07:39, Ulrich Pegelow wrote:
> What denoising method did you chose in that module? Non-local means or
> wavelet?
>
> Best wishes
>
> Ulrich
>
I am always using non-local means. I have attached an example dtstyle file.
Here is the output of "darktable -d opencl -d perf" when it
* Matthias Bodenbinder [10-20-14 01:24]:
> On 19.10.2014 at 21:32, Patrick Shanahan wrote:
> >
> > just installed 340.46 but on openSUSE-Factory and using darktable from
> > git. Will report anomalies if observed.
> >
> Can you please export several hundred pictures in one shot and observe
> C
What denoising method did you choose in that module? Non-local means or
wavelet?
Best wishes
Ulrich
On 20.10.2014 at 07:22, Matthias Bodenbinder wrote:
> On 19.10.2014 at 21:32, Patrick Shanahan wrote:
>>
>> just installed 340.46 but on openSUSE-Factory and using darktable from
>> git. Will r
On 19.10.2014 at 21:32, Patrick Shanahan wrote:
>
> just installed 340.46 but on openSUSE-Factory and using darktable from
> git. Will report anomalies if observed.
>
Hello Patrick
Can you please export several hundred pictures in one shot and observe the CPU
load as well as the console output of "darkta
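One hedged way to script a comparable bulk export from the command line instead
of the GUI (the file names and the .NEF extension are only illustrative, and a
running darktable GUI may hold the library lock):

  for f in *.NEF; do
      darktable-cli "$f" "${f%.NEF}.jpg" --core -d opencl
  done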
* Patrick Shanahan [10-19-14 15:32]:
> * Ulrich Pegelow [10-19-14 14:11]:
> > I don't know for sure but most likely neither of the two uses OpenCL.
> > Concerning that graphics driver version maybe some other people using it
> > can tell us their experiences.
> >
> > Ulrich
> >
> >
> > On 19
* Ulrich Pegelow [10-19-14 14:11]:
> I don't know for sure but most likely neither of the two uses OpenCL.
> Concerning that graphics driver version maybe some other people using it
> can tell us their experiences.
>
> Ulrich
>
>
> On 19.10.2014 at 19:26, Matthias Bodenbinder wrote:
> > On 1
I've not had any problems, but I have only been exporting a few pictures at
a time, so perhaps not a useful datum.
On 19 October 2014 19:09, Ulrich Pegelow
wrote:
> I don't know for sure but most likely neither of the two uses OpenCL.
> Concerning that graphics driver version maybe some other pe
I don't know for sure but most likely neither of the two uses OpenCL.
Concerning that graphics driver version maybe some other people using it
can tell us their experiences.
Ulrich
On 19.10.2014 at 19:26, Matthias Bodenbinder wrote:
> On 19.10.2014 at 15:55, Ulrich Pegelow wrote:
>> I don't
On 19.10.2014 at 15:55, Ulrich Pegelow wrote:
> I don't think that values as high as 600 really make sense as I can't
> imagine that typical gui and driver related requirements are much higher
> than 300MB. I see two possibilities: either there is another application
> or background job on your
I don't think that values as high as 600 really make sense as I can't
imagine that typical gui and driver related requirements are much higher
than 300MB. I see two possibilities: either there is another application
or background job on your system that uses OpenCL memory, or the
graphics drive
On 19.10.2014 at 11:34, Matthias Bodenbinder wrote:
> On 18.10.2014 at 14:42, KOVÁCS István wrote:
>> http://www.darktable.org/2012/03/darktable-and-opencl/
>> If you get “-4” errors, go into file
>> $HOME/.config/darktable/darktablerc, where DT stores its configuration
>> parameters and look for
On 18.10.2014 at 14:42, KOVÁCS István wrote:
> http://www.darktable.org/2012/03/darktable-and-opencl/
> If you get “-4” errors, go into file
> $HOME/.config/darktable/darktablerc, where DT stores its configuration
> parameters and look for opencl_memory_headroom.
Thanks for this hint. I tried it
Hi,
the error code -4 in opencl means that a memory object could not be
allocated. It is not uncommon for the whole memory management on your
graphics card to have issues once this happens, so that only a reboot helps.
My best guess would be that your setting of opencl_memory_headroom
should be in
http://www.darktable.org/2012/03/darktable-and-opencl/
If you get “-4” errors, go into file
$HOME/.config/darktable/darktablerc, where DT stores its configuration
parameters and look for opencl_memory_headroom.
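For instance, the current value can be checked without opening an editor
(assuming the default config location mentioned above):

  grep opencl_memory_headroom ~/.config/darktable/darktablerc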
On 18 October 2014 14:27, Matthias Bodenbinder wrote:
> I made some additional tests.
I made some additional tests.
1) DT is not recovering from the loss of opencl functionality. A restart of DT
does not help. OpenCL performance is still not available although it is
activated in the GUI. Only a reboot of the computer brings back the opencl
performance.
2) The issue is difficult
Hello Developers,
I just want to share my first opencl experiences with you.
The positive things to start with:
I have a core i7-2600k. I used to run it with onboard graphics (HD3000). With
this setup I get a JPG export speed of ca. 1.5 pictures per minute.
Today I got a Geforce GTX750TI. With
On 13.6.2014 at 7:22 AM Ulrich Pegelow wrote:
> Hi Sven,
>
> nice to hear of you. I think we talked a bit in the breaks of Pat's
> portrait retouching session.
>
Yes, we did ;-)
Thank you for your valuable tips!
To have consistent OpenCL support in GEGL I think one single version
there saves ou
Hi Sven,
nice to hear of you. I think we talked a bit in the breaks of Pat's
portrait retouching session.
When we started with OpenCL in darktable we could only rely on OpenCL
1.0, as drivers for higher versions were not widespread. Therefore even
today we only use the basic set of 1.0 command
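As an aside, one way to check which OpenCL version a given device/driver
actually reports (assumes the clinfo utility is installed; it is not part of
darktable):

  clinfo | grep -i version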
Hi,
I'm from GIMP/GEGL (you might know me from LGM)
and am thinking how we can support OpenCL in GEGL
better.
- What OpenCL version does Darktable require?
- Are there any hardware or software recommendations
for testing OpenCL in the continuous integration process?
If you already discussed this
On 16/05/14 17:18, Gonçalo Marrafa wrote:
> I've tried the mesa opencl implementation with DT. I get a segfault but can't
> make anything of it, so maybe one of you
> guys can. What I would like to know is if the problem is within DT or my own
> setup.
You need either the nVidia or ATI binary dr
Hi.
I've tried the mesa opencl implementation with DT. I get a segfault but
can't make anything of it, so maybe one of you guys can. What I would like
to know is if the problem is within DT or my own setup.
I've attached the resulting backtrace.
Thanks in advance.
Gonçalo Marrafa
this is darkt
Hi
> Modern GPUs are usually way faster than CPUs. Only nvidia cards/drivers have a
> problem on profiled denoise as discussed on IRC yesterday. In general nvidia
> cards seem to have a bad OpenCL performance if you look at [1]. And still my
> low budget passive cooled GT 640 outperforms my i7-260
> If you have two GPUs with the same speed, it makes a lot of sense to set
> parallel_export to 2. However, in most cases users have only one GPU
> that is often much faster than the CPU. Or few people have two GPUs with
> a big performance difference. In these cases you only profit from
> multipl
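To see how that is configured on a given system, a quick check (assuming the
setting is stored in darktablerc under the name parallel_export used above):

  grep parallel_export ~/.config/darktable/darktablerc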
On Thu, Mar 27, 2014 at 7:14 PM, Ulrich Pegelow wrote:
> On 26.03.2014 at 22:33, Christian Kanzian wrote:
> > On Wednesday, 26 March 2014, jerome wrote:
> >
> >> So is this normal ?
> >> I mean, core use and the fact I need export only one at time ?
> >>
> >> Sorry for long story but it help to un
On 26.03.2014 at 22:33, Christian Kanzian wrote:
> On Wednesday, 26 March 2014, jerome wrote:
>
>> So is this normal ?
>> I mean, core use and the fact I need export only one at time ?
>>
>> Sorry for long story but it help to understand how and why.
>>
>> And sorry for my bad english ;)
>>
>> Thank
On Wednesday, 26 March 2014, jerome wrote:
> So is this normal ?
> I mean, core use and the fact I need export only one at time ?
>
> Sorry for long story but it help to understand how and why.
>
> And sorry for my bad english ;)
>
> Thanks to all
Within OpenCL the GPU does the parallel thing
Hi all
I discovered darktable some time ago and quickly switched to version 1.2.3
in debian backports. All was fine and I was very happy.
Then I took the opportunity to buy an nVidia GTX-660 card. As I enjoy
neat things, I set up my system to use my ATI card with the open-source driver
for display and t
Regarding OpenCL, thanks for the responses :/
On 11 March 2014 21:20, Ulrich Pegelow wrote:
> In short: currently there is no open source OpenCL driver, neither for
> AMD nor for NVIDIA.
>
> Ulrich
>
> On 11.03.2014 at 12:58, Dave wrote:
>> Hello all.
>>
>> I may get shot for this question, but he
In short: currently there is no open source OpenCL driver, neither for
AMD nor for NVIDIA.
Ulrich
On 11.03.2014 at 12:58, Dave wrote:
> Hello all.
>
> I may get shot for this question, but here goes:
>
> Is it possible to have darktable use libOpenCL without installing
> proprietary graphics driv
Hello all.
I may get shot for this question, but here goes:
Is it possible to have darktable use libOpenCL without installing
proprietary graphics drivers?
I have googled a bit and found this
http://streamcomputing.eu/blog/2011-06-24/install-opencl-on-debianubuntu-orderly/
but I haven't tried it
Ulrich;
I still have the problem running Nvidia version 331.20 on the latest
Fedora-20
David
On 01/08/2014 10:21 PM, Ulrich Pegelow wrote:
> Hi Mark,
>
> it's a known bug in NVIDIA's OpenCL compiler in the 304.xx driver series.
> Please try to upgrade your driver to a later version like 319.xx
Hi Mark,
it's a known bug in NVIDIA's OpenCL compiler in the 304.xx driver series.
Please try to upgrade your driver to a later version like 319.xx
or better.
Ulrich
On 09.01.2014 at 02:05, Mark Garrow wrote:
> As I'm on IRC when most are sleeping I thought I'd post.
> I just got a new card i
As I'm on IRC when most are sleeping I thought I'd post.
I just got a new card in the hopes of getting OpenCL working. It's a msi
gt640 2gig pcie 3.0. I've tried darktable-cltest and darktable -d cltest
both with the same results. It appears blendop.cl is not compiling.
Everything else looks goo
After updating to the latest Catalyst 13.12 driver [*] I have not been able to
trigger this screen corruption so far. So the problem seems to be solved
for me.
Thank you for your feedback that your "similar" cards are working. I
wonder if they are really similar, as there are for example HD7950s
with turbo and
David,
attached is a patch that tries another workaround for NVIDIA's OpenCL
bug. Maybe you can give it a try.
Ulrich
On 02.12.2013 at 20:26, David Vincent-Jones wrote:
After installing the latest Nvidia drivers my machine appears to go a
long way towards fully implementing the compiles neede
On Tuesday, 3 December 2013, David Vincent-Jones wrote:
> On 13-12-03 01:10 AM, Christian Kanzian wrote:
> > I don't know what system your are using, but on Debian you can do:
> >
> > #dpkg -l | grep opencl
> >
> > and look for:
> > nvidia-opencl-icd
> > nvidia-opencl-common
> > nvidia-libopen
On 13-12-03 01:10 AM, Christian Kanzian wrote:
> I don't know what system your are using, but on Debian you can do:
>
> #dpkg -l | grep opencl
>
> and look for:
> nvidia-opencl-icd
> nvidia-opencl-common
> nvidia-libopencl1
I do not find those files that you report!
I am only seeing from my sear
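As a distribution-agnostic cross-check (a sketch; exact paths can differ per
distro), one can also list the installed ICD files and query the linker cache:

  ls /etc/OpenCL/vendors/
  ldconfig -p | grep -i opencl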
Once more.
> Sent: Tuesday, 03 December 2013, 07:12
> From: "Ulrich Pegelow"
> To: [email protected]
> Subject: Re: [darktable-devel] OpenCL Question
>
> This is a bug in NVIDIA's OpenCL compiler. I reported it to them a year
>
This is a bug in NVIDIA's OpenCL compiler. I reported it to them a year
ago and they confirmed it's a bug. Seems that they did not fix it in the
304.xx legacy series of their driver.
I guess there are only two chances for you to get OpenCL running. Either
you switch to a newer driver version an
After installing the latest Nvidia drivers my machine appears to go a
long way towards fully implementing the compiles needed for OpenCL but
then fails trying to compile 'blendop.cl'.
I have attached the terminal info and would appreciate if somebody can
tell me if this is a simple adjustment
On 23.11.2013 at 16:21, Michael Born wrote:
> On 23.11.2013 at 15:09, Ulrich Pegelow wrote:
>
> Nobody would be surprised about driver bugs, but I have to say that
> this problem isn't there all the time.
> So, I have to find something to trigger/reproduce the bug :-(
>
Please also take hardware is
On 23.11.2013 at 15:09, Ulrich Pegelow wrote:
> Looks like a driver bug. I am running the same HD7950 device with
> Catalyst 13.8.beta1 without any issues. Maybe you should try that one.
>
> Ulrich
Nobody would be surprised about driver bugs, but I have to say that
this problem isn't there all t
On 23.11.2013 at 16:04, Togan Muftuoglu wrote:
>> "Ulrich" == Ulrich Pegelow writes:
>
> Ulrich> Looks like a driver bug. I am running the same HD7950 device with
> Ulrich> Catalyst 13.8.beta1 without any issues. Maybe you should try that
> Ulrich> one.
>
> You want him to downgr
> "Ulrich" == Ulrich Pegelow writes:
Ulrich> Looks like a driver bug. I am running the same HD7950 device with
Ulrich> Catalyst 13.8.beta1 without any issues. Maybe you should try that
Ulrich> one.
You want him to downgrade, hmm; nevertheless, that sure is an option.
I am using C
Looks like a driver bug. I am running the same HD7950 device with
Catalyst 13.8.beta1 without any issues. Maybe you should try that one.
Ulrich
On 23.11.2013 at 13:11, Michael Born wrote:
> With the GIT version of yesterday (I had this with older GIT versions,
> too) I had massive picture corrupt
With the GIT version of yesterday (I had this with older GIT versions,
too) I had massive picture corruptions when zooming with the mouse wheel
in darkroom mode. The content of the current picture or from pictures of
the current folder gets mixed together. The corrupted result also gets
written to
On 12.08.2013 at 18:57, Robert William Hutton wrote:
> Thanks. From what I can tell by looking at the link in the comments at the
> top of the file, which seems to be the
> source of the card identifiers:
>
> https://developer.nvidia.com/cuda-gpus
>
> The asterisks refer to a note at the bottom of
On 12/08/13 17:40, Ulrich Pegelow wrote:
> On 12.08.2013 at 18:22, Robert William Hutton wrote:
>> I have an old nvidia graphics card that self-identifies as a "GeForce GT
>> 330". The opencl build fails (see attached
>> log). However, if I modify src/common/nvidia_gpus.h line 168 to read:
>>
>>
On 12.08.2013 at 18:22, Robert William Hutton wrote:
> Hi All
>
> I have an old nvidia graphics card that self-identifies as a "GeForce GT
> 330". The opencl build fails (see attached
> log). However, if I modify src/common/nvidia_gpus.h line 168 to read:
>
>"GeForce GT 330","1.0",
>
> Instea
Hi All
I have an old nvidia graphics card that self-identifies as a "GeForce GT 330".
The opencl build fails (see attached
log). However, if I modify src/common/nvidia_gpus.h line 168 to read:
"GeForce GT 330","1.0",
Instead of:
"GeForce GT 330*","1.0",
Then the build works. Does this
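For reference, the local change described above could be applied with a one-liner
from the source tree (a sketch, assuming the entry still reads exactly as quoted;
it only drops the trailing asterisk):

  sed -i 's/"GeForce GT 330\*","1.0",/"GeForce GT 330","1.0",/' src/common/nvidia_gpus.h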
On 17.06.2013 at 21:13, David Vincent-Jones wrote:
> Attached is full dt/opencl output from terminal:
> I have ensured now that:
> /etc/OpenCL/vendors/nvidia.icd .. looks correct and
> /usr/lib/nvidia-current/libnvidia-opencl.so.1 also looks correct
>
> The attached appears to indicate that some
Attached is full dt/opencl output from terminal:
I have ensured now that:
/etc/OpenCL/vendors/nvidia.icd .. looks correct and
/usr/lib/nvidia-current/libnvidia-opencl.so.1 also looks correct
The attached appears to indicate that something in 'blendop' is now the
cause of the problem. Is this
On 17.06.2013 at 00:37, David Vincent-Jones wrote:
> I have just made a fresh install of dt and despite OpenCL apparently
> being correctly found it was unable to be initiated.
>
> What does "could not get platforms" indicate?
Hi David,
it's clearly related to a buggy OpenCL installation. Your lib
I have just made a fresh install of dt and despite OpenCL apparently
being correctly found it was unable to be initiated.
What does "could not get platforms" indicate?
darktable version 1.3+427~gb527ed7
[opencl_init] trying to load opencl library: ''
[opencl_init] opencl library 'libOpenCL' foun
On 11.06.2013 at 20:22, Roumano wrote:
> Hi,
>
> Two examples created.
Hi Roumano,
both of your cases hardly need to go into tiling. In order to force
heavy tiling on a Tahiti you will need to go for images with 5000x5000
pixels or higher. If the images in your example are of a typical size
for y
Hi Roumano,
thanks for giving my code changes a try!
I should mention that the isolated profiling data from OpenCL does not always
give a good indication. Some devices/drivers, for example, do not account
for the full host<->device transfer timings. It's better to rely on the
total time spent in the
Hi Ulrich,
Tested on an ATI Tahiti XT (7870 XT, 2 GB) with a picture of 5196 x 3462
pixels (about 18 MPix).
For me, I can't see any difference in the picture (but the file size differs by
2 KB; every time I export a file I never get exactly the same file size).
For performance:
yes, it's better for me
Hi,
I just pushed an update to darktable's opencl tiling code. On some
devices (namely AMD/ATI) I found a severe performance penalty for
host<->device memory transfers. This mainly hits our tiling code, where
we keep the full input and output images in host memory and repeatedly
process small
For 2) the answer is probably that when you enable opencl you process the
image faster, but you need to move your image to/from the CPU, which
takes quite long.
When opencl-enabled modules are next to each other in the pipe we don't need
to copy the image around, but when there is a non-opencl module, w
Hi,
OK, thanks for this response, but I still don't understand two problems with that:
1) the watermark module is 3 times slower in the dev version than in
the stable version:
stable darktable, module watermark with opencl: 0.476 secs
dev darktable, module watermark with opencl: 1.351 secs
On Wed, Jun 5, 2013 at 1:08 AM, Roumano wrote:
> Hi,
>
> (Only) just now (with the new ati-drivers 13.6), I can enable the
> opencl on darktable
>
> [opencl_init] device 0 `Tahiti' supports image sizes of 16384 x 16384
> [opencl_init] device 0 `Tahiti' allows GPU memory allocations of up to
> 10
Hi,
(Only) just now (with the new ati-drivers 13.6), I can enable the
opencl on darktable
[opencl_init] device 0 `Tahiti' supports image sizes of 16384 x 16384
[opencl_init] device 0 `Tahiti' allows GPU memory allocations of up to
1024MB
[opencl_init] device 0: Tahiti
GLOBAL_MEM_SIZE:
For reference: https://bugs.launchpad.net/bugs/1169695
However, the chances of someone fixing this bug are quite low; my
experiences with Launchpad bug reports are rather bad.
Maybe you had better set up a script to create the required symlinks for
you; running ldconfig as suggested by Ulrich did no
On 16.04.2013 at 21:27, David Vincent-Jones wrote:
> Ulrich;
>
> I have moved the 2 missing files into place except do not know how to
> create the link file.
>
> David
Normally it should be sufficient to run the program ldconfig as root.
Ulrich
-
Thanks Markus ... this is all a bit over my head and a tad frustrating.
My system may be a bit complicated as I am running Bumblebee in an
effort to achieve good battery performance as well as having the ability
to use OpenCL as needed.
My nvidia.icd appears to have not even been located in the
Ulrich;
I have moved the 2 missing files into place except do not know how to
create the link file.
David
On 13-04-16 08:28 PM, Ulrich Pegelow wrote:
> On 16.04.2013 at 19:29, David Vincent-Jones wrote:
>> I have copies of the link libnvidia-opencl.so.1 located in both
>> /usr/lib/nvidia-current
I am filing a bug report against the ubuntu package right now. It
deletes the nvidia.icd symlink in /etc/ all the time and now it does not
even create sane symlinks for libnvidia-opencl. I just "fixed" the issue
by manually creating a nvidia.icd containing the line
"libnvidia-opencl.so.304.88"
Reg
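A sketch of that manual workaround (304.88 is just the driver version mentioned
in this message; substitute whatever libnvidia-opencl.so.* your driver package
actually ships):

  # recreate the ICD file the package should have provided
  echo "libnvidia-opencl.so.304.88" | sudo tee /etc/OpenCL/vendors/nvidia.icd
  # restore the missing library symlink and refresh the linker cache
  sudo ln -sf /usr/lib/nvidia-current/libnvidia-opencl.so.304.88 /usr/lib/nvidia-current/libnvidia-opencl.so.1
  sudo ldconfig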
On 16.04.2013 at 19:29, David Vincent-Jones wrote:
> I have copies of the link libnvidia-opencl.so.1 located in both
> /usr/lib/nvidia-current as well as in /usr/lib32/nvidia-current. Both are
> broken links that were linked to libnvidia-opencl.so.304.84; this
> library no longer exists
> The