[darktable-dev] Fwd: Bug or feature report / feature suggestion for drawn masks
Hello!

First of all, let me thank all you developers for your great work developing darktable! I've been using DT for about two years now, learning a lot about how to use it and the individual modules, and I'm quite happy with what I can achieve with it by now.

Recently I noticed a behaviour which I find odd, but maybe it's a feature which I simply don't understand yet. When drawing a mask with the brush, the opacity can be adjusted before the mask is drawn. Once the mask is drawn, the opacity can still be changed, but only to values between zero and the value at the time of drawing. In other words: drawing a mask at less than 100% opacity defines the maximum opacity for that mask. The reduced opacity at drawing time is treated as 100% when editing the mask later, so the opacity cannot be raised above this maximum.

When adjusting the opacity before drawing the mask, the opacity of the brush visibly changes, but the opacity value in the info line above doesn't change. Maybe this is the origin of the restricted range when editing the opacity later. It would also be nice to know, in numbers, how high the opacity is when the mask is originally drawn.

I hope this makes sense to you.

Regards, Stefan

___ darktable developer mailing list to unsubscribe send a mail to darktable-dev+unsubscr...@lists.darktable.org
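The reported behaviour is consistent with the opacity chosen at drawing time acting as a fixed multiplier on the later 0-100% adjustment. A minimal sketch of that hypothetical model (this is only a guess at the semantics, not darktable's actual implementation, which is in C):

```python
def effective_opacity(drawn_opacity: float, slider: float) -> float:
    """Hypothetical model of the reported behaviour: the opacity chosen
    at drawing time caps what the later 0-100% adjustment can reach.

    drawn_opacity, slider: fractions in [0, 1].
    """
    return drawn_opacity * slider

# A mask drawn at 60% opacity can later only be varied within 0-60%:
print(effective_opacity(0.6, 1.0))  # 0.6 -> later "100%" yields only 60%
print(effective_opacity(0.6, 0.5))  # 0.3
```

Under this model, raising the stroke above its original opacity is impossible by construction, which matches the report.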
Re: [darktable-dev] Automatically select the most focused photo in a burst of photos
I would love to have that feature. I find the current focus preview functionality useless.

Br, Jørn

On 06/10/2019 18:36, Aurélien Pierre wrote:

Ok, let's go for a "sharpness score" + focus peaking feature. That way, users will have a way to check which area is in focus, and to compare images between them with a score. It won't make it into the next release, but maybe into the first minor update.

On 06/10/2019 at 17:05, Robert Krawitz wrote:

On Sun, 6 Oct 2019 16:40:37 +0200, Aurélien Pierre wrote: argh. Tales of over-engineering…

I don't really disagree with you, I just want to point out that getting it anywhere near correct (i.e. without a huge number of false positives and false negatives) is a difficult problem.

Just overlay the Euclidean norm of the 2D Laplacian on top of the pictures (some cameras call that focus peaking), and let the photographer eyeball them. That will do for subjects at large aperture, when the subject is supposed to pop out of the background. For small apertures, the L2 norm will do a fair job. And it's a Saturday afternoon job, hence a very realistic project given our current resources.

That's fair, I just think that this kind of algorithm will likely select a lot of photos that are badly out of focus (because the focus locked on a much more expansive background) and miss ones where it's the relatively small subject that's in focus.

What you ask for is AI; it's a big project for a specialist, and it's almost certain we will never make it work reliably. The drawback of AIs, even when they work, is that they fail inconsistently and need to be double-checked anyway. So, better to give users meaningful scopes and let them take their responsibility, rather than rely on witchcraft that works only in Nvidia's papers on carefully curated samples.

Or maybe just implement focus peaking, as you say, but with a UI similar to the camera's UI (flashing regions that are in best focus).
Then it's up to the user to select the best photos based on their knowledge of the desired subject.

On 06/10/2019 at 16:18, Robert Krawitz wrote:

On Sun, 6 Oct 2019 15:02:39 +0200, Aurélien Pierre wrote: That can easily be done by computing the L2 norm of the Laplacian of the pictures, or the L2 norm of the first level of the wavelet decomposition (which is used in the focus preview), and taking the maximum. As usual, it will be more work to wire the UI to the functionality than to write the core image processing.

Consider the case where the AF locks onto the background. This will likely result in a very large fraction of the image being in focus, but this will be exactly the wrong photo to select. Perhaps center-weighting, luminosity-weighting (if an assumption is made that the desired subject is usually brighter than the background, but not extremely light), skin tone recognition (with all of the attendant problems of what constitutes "skin tone"), and face recognition would have to feed into it.
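The focus-peaking overlay discussed above boils down to flagging pixels where the magnitude of the discrete 2D Laplacian exceeds a threshold. A minimal pure-Python illustration of the principle (darktable itself would do this in C over the real pixel pipeline; the threshold value here is arbitrary):

```python
def laplacian_peaking(img, threshold=0.1):
    """Return a boolean mask marking pixels whose discrete 2D Laplacian
    magnitude exceeds `threshold` -- the flashing 'in focus' highlight
    of focus peaking. `img` is a 2D list of floats (grayscale, row-major).
    Border pixels are left unmarked for simplicity."""
    h, w = len(img), len(img[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian: sum of neighbours minus 4x the centre
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4.0 * img[y][x])
            mask[y][x] = abs(lap) > threshold
    return mask

# A sharp edge triggers peaking; a flat region does not.
flat = [[0.5] * 5 for _ in range(5)]
edge = [[0.0, 0.0, 1.0, 1.0, 1.0] for _ in range(5)]
print(any(any(row) for row in laplacian_peaking(flat)))  # False
print(any(any(row) for row in laplacian_peaking(edge)))  # True
```

As the thread notes, this tells the user *where* high-frequency detail is; deciding whether that detail sits on the intended subject remains the photographer's call.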
Re: [darktable-dev] Automatically select the most focused photo in a burst of photos
Ok, let's go for a "sharpness score" + focus peaking feature. That way, users will have a way to check which area is in focus, and compare images between them with a score. It won't make it into the next release, but maybe into the first minor update. On 06/10/2019 at 17:05, Robert Krawitz wrote: > On Sun, 6 Oct 2019 16:40:37 +0200, Aurélien Pierre wrote: >> argh. Tales of over-engineering… > I don't really disagree with you, just want to point out that getting > it anywhere near correct (i.e. without a huge number of false > positives and false negatives) is a difficult problem. > >> Just overlay the Euclidean norm of the 2D Laplacian on top of the >> pictures (some cameras call that focus peaking), and let the >> photographer eyeball them. That will do for subjects at large aperture, >> when the subject is supposed to pop out of the background. For small >> apertures, the L2 norm will do a fair job. And it's a Saturday afternoon >> job, hence a very realistic project given our current resources. > That's fair, I just think that this kind of algorithm will likely > select a lot of photos that are badly out of focus (because the focus > locked on a much more expansive background) and miss ones where it's > the relatively small subject that's in focus. > >> What you ask for is AI, it's a big project for a specialist, and it's >> almost certain we will never make it work reliably. The drawback of AIs, >> even when they work, is that they fail inconsistently and need to be >> double-checked anyway. >> >> So, better to give users meaningful scopes and let them take their >> responsibility, rather than rely on witchcraft that works only in >> Nvidia's papers on carefully curated samples. > Or maybe just implement focus peaking, as you say, but with a UI > similar to the camera's UI (flashing regions that are in best focus). > Then it's up to the user to select the best photos based on their > knowledge of the desired subject.
> >> On 06/10/2019 at 16:18, Robert Krawitz wrote: >>> On Sun, 6 Oct 2019 15:02:39 +0200, Aurélien Pierre wrote: That can easily be done by computing the L2 norm of the Laplacian of the pictures, or the L2 norm of the first level of the wavelet decomposition (which is used in the focus preview), and taking the maximum. As usual, it will be more work to wire the UI to the functionality than to write the core image processing. >>> Consider the case where the AF locks onto the background. This will >>> likely result in a very large fraction of the image being in focus, >>> but this will be exactly the wrong photo to select. >>> >>> Perhaps center-weighting, luminosity-weighting (if an assumption is >>> made that the desired subject is usually brighter than the background, >>> but not extremely light), skin tone recognition (with all of the >>> attendant problems of what constitutes "skin tone"), and face >>> recognition would have to feed into it.
Re: [darktable-dev] Automatically select the most focused photo in a burst of photos
On Sun, 6 Oct 2019 16:40:37 +0200, Aurélien Pierre wrote: > argh. Tales of over-engineering… I don't really disagree with you, just want to point out that getting it anywhere near correct (i.e. without a huge number of false positives and false negatives) is a difficult problem. > Just overlay the Euclidean norm of the 2D Laplacian on top of the > pictures (some cameras call that focus peaking), and let the > photographer eyeball them. That will do for subjects at large aperture, > when the subject is supposed to pop out of the background. For small > apertures, the L2 norm will do a fair job. And it's a Saturday afternoon > job, hence a very realistic project given our current resources. That's fair, I just think that this kind of algorithm will likely select a lot of photos that are badly out of focus (because the focus locked on a much more expansive background) and miss ones where it's the relatively small subject that's in focus. > What you ask for is AI, it's a big project for a specialist, and it's > almost certain we will never make it work reliably. The drawback of AIs, > even when they work, is that they fail inconsistently and need to be > double-checked anyway. > > So, better to give users meaningful scopes and let them take their > responsibility, rather than rely on witchcraft that works only in > Nvidia's papers on carefully curated samples. Or maybe just implement focus peaking, as you say, but with a UI similar to the camera's UI (flashing regions that are in best focus). Then it's up to the user to select the best photos based on their knowledge of the desired subject. > On 06/10/2019 at 16:18, Robert Krawitz wrote: >> On Sun, 6 Oct 2019 15:02:39 +0200, Aurélien Pierre wrote: >>> That can easily be done by computing the L2 norm of the Laplacian of the >>> pictures, or the L2 norm of the first level of the wavelet decomposition >>> (which is used in the focus preview), and taking the maximum.
>>> >>> As usual, it will be more work to wire the UI to the functionality than >>> to write the core image processing. >> Consider the case where the AF locks onto the background. This will >> likely result in a very large fraction of the image being in focus, >> but this will be exactly the wrong photo to select. >> >> Perhaps center-weighting, luminosity-weighting (if an assumption is >> made that the desired subject is usually brighter than the background, >> but not extremely light), skin tone recognition (with all of the >> attendant problems of what constitutes "skin tone"), and face >> recognition would have to feed into it. -- Robert Krawitz *** MIT Engineers A Proud Tradition http://mitathletics.com *** Member of the League for Programming Freedom -- http://ProgFree.org Project lead for Gutenprint -- http://gimp-print.sourceforge.net "Linux doesn't dictate how I work, I dictate how Linux works." --Eric Crampton
Re: [darktable-dev] Automatically select the most focused photo in a burst of photos
Hi guys, What would be really nice would be a tool to let the user choose since, from what I read, even with an AI this can never be fully reliable. I remember when I was using AfterShot (I think that was the name; before that it was Bibble something, I think): it had a REALLY great comparison tool. You select up to 6 photos in your list, then it displays them together (all next to each other), and when you zoom on one, it instantly zooms on all the others, at the same location. For a burst, it's great: you can very quickly tell which one is fine and which one is not. If I want to do that with darktable, of course it's not impossible, but it's significantly slower. A similar tool would be great. Maybe without applying all the modules, since we don't need a fully edited photo for the sharpness comparison. François On Sun, 6 Oct 2019 at 16:41, Aurélien Pierre wrote: > argh. Tales of over-engineering… > > Just overlay the Euclidean norm of the 2D Laplacian on top of the pictures > (some cameras call that focus peaking), and let the photographer eyeball > them. That will do for subjects at large aperture, when the subject is > supposed to pop out of the background. For small apertures, the L2 norm > will do a fair job. And it's a Saturday afternoon job, hence a very > realistic project given our current resources. > > What you ask for is AI, it's a big project for a specialist, and it's > almost certain we will never make it work reliably. The drawback of AIs, even > when they work, is that they fail inconsistently and need to be double-checked > anyway. > > So, better to give users meaningful scopes and let them take their > responsibility, rather than rely on witchcraft that works only in Nvidia's > papers on carefully curated samples.
> On 06/10/2019 at 16:18, Robert Krawitz wrote: > > On Sun, 6 Oct 2019 15:02:39 +0200, Aurélien Pierre wrote: > > That can easily be done by computing the L2 norm of the Laplacian of the > pictures, or the L2 norm of the first level of the wavelet decomposition > (which is used in the focus preview), and taking the maximum. > > As usual, it will be more work to wire the UI to the functionality than > to write the core image processing. > > > Consider the case where the AF locks onto the background. This will > likely result in a very large fraction of the image being in focus, > but this will be exactly the wrong photo to select. > > Perhaps center-weighting, luminosity-weighting (if an assumption is > made that the desired subject is usually brighter than the background, > but not extremely light), skin tone recognition (with all of the > attendant problems of what constitutes "skin tone"), and face > recognition would have to feed into it. > > > On 06/10/2019 at 14:14, Germano Massullo wrote: > > On Sun, 6 Oct 2019 at 13:32, Moritz Mœller wrote: > > Define 'most focused'. > I give you an example to understand this request better. [...] > > > Yes, you are right, but in your case the couple is the main thing that > is moving in the picture. For my use case, imagine I am taking photos > of people who are giving a talk. Some photos of the burst may be > blurred because I moved the camera while shooting, while some other > shots of the same burst could have less blur because my hands > were not moving during the exposure time. > It would be great if an algorithm could detect the best shots
Re: [darktable-dev] Automatically select the most focused photo in a burst of photos
argh. Tales of over-engineering… Just overlay the Euclidean norm of the 2D Laplacian on top of the pictures (some cameras call that focus peaking), and let the photographer eyeball them. That will do for subjects at large aperture, when the subject is supposed to pop out of the background. For small apertures, the L2 norm will do a fair job. And it's a Saturday afternoon job, hence a very realistic project given our current resources. What you ask for is AI, it's a big project for a specialist, and it's almost certain we will never make it work reliably. The drawback of AIs, even when they work, is that they fail inconsistently and need to be double-checked anyway. So, better to give users meaningful scopes and let them take their responsibility, rather than rely on witchcraft that works only in Nvidia's papers on carefully curated samples. On 06/10/2019 at 16:18, Robert Krawitz wrote: > On Sun, 6 Oct 2019 15:02:39 +0200, Aurélien Pierre wrote: >> That can easily be done by computing the L2 norm of the Laplacian of the >> pictures, or the L2 norm of the first level of the wavelet decomposition >> (which is used in the focus preview), and taking the maximum. >> >> As usual, it will be more work to wire the UI to the functionality than >> to write the core image processing. > Consider the case where the AF locks onto the background. This will > likely result in a very large fraction of the image being in focus, > but this will be exactly the wrong photo to select. > > Perhaps center-weighting, luminosity-weighting (if an assumption is > made that the desired subject is usually brighter than the background, > but not extremely light), skin tone recognition (with all of the > attendant problems of what constitutes "skin tone"), and face > recognition would have to feed into it. > >> On 06/10/2019 at 14:14, Germano Massullo wrote: >>> On Sun, 6 Oct 2019 at 13:32, Moritz Mœller wrote: Define 'most focused'.
I give you an example to understand this request better. [...] >>> Yes, you are right, but in your case the couple is the main thing that >>> is moving in the picture. For my use case, imagine I am taking photos >>> of people who are giving a talk. Some photos of the burst may be >>> blurred because I moved the camera while shooting, while some other >>> shots of the same burst could have less blur because my hands >>> were not moving during the exposure time. >>> It would be great if an algorithm could detect the best shots
Re: [darktable-dev] Automatically select the most focused photo in a burst of photos
On Sun, 6 Oct 2019 15:02:39 +0200, Aurélien Pierre wrote: > That can easily be done by computing the L2 norm of the Laplacian of the > pictures, or the L2 norm of the first level of the wavelet decomposition > (which is used in the focus preview), and taking the maximum. > > As usual, it will be more work to wire the UI to the functionality than > to write the core image processing. Consider the case where the AF locks onto the background. This will likely result in a very large fraction of the image being in focus, but this will be exactly the wrong photo to select. Perhaps center-weighting, luminosity-weighting (if an assumption is made that the desired subject is usually brighter than the background, but not extremely light), skin tone recognition (with all of the attendant problems of what constitutes "skin tone"), and face recognition would have to feed into it. > On 06/10/2019 at 14:14, Germano Massullo wrote: >> On Sun, 6 Oct 2019 at 13:32, Moritz Mœller wrote: >>> Define 'most focused'. >>> I give you an example to understand this request better. [...] >> >> Yes, you are right, but in your case the couple is the main thing that >> is moving in the picture. For my use case, imagine I am taking photos >> of people who are giving a talk. Some photos of the burst may be >> blurred because I moved the camera while shooting, while some other >> shots of the same burst could have less blur because my hands >> were not moving during the exposure time. >> It would be great if an algorithm could detect the best shots -- Robert Krawitz *** MIT Engineers A Proud Tradition http://mitathletics.com *** Member of the League for Programming Freedom -- http://ProgFree.org Project lead for Gutenprint -- http://gimp-print.sourceforge.net "Linux doesn't dictate how I work, I dictate how Linux works."
--Eric Crampton
Re: [darktable-dev] GCC version, optimisation options, split-loops
Hi, We support GCC 8 and 9; GCC 6 is quite old already. The commit you refer to affects only Clang. Cheers, Aurélien. On 06/10/2019 at 15:26, Marco Tedaldi wrote: > Hi everyone > After a long time away from this list (but still regularly working > with git master) I'm back here... > > I've just tried to compile dt master again and it failed on me. > The reason is that my GCC doesn't recognize the option split-loops. > > Error: > /home/marco/build/darktable/src/iop/toneequal.c:1312:1: error: > unrecognized command line option ‘-fsplit-loops’ > > My GCC version: > marco@schwipschwap:~/build/darktable$ gcc --version > gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516 > > Interestingly, the last time I compiled dt it worked. It was: dt > 2.7.0+1709~g580bf49da > > As a workaround I've just removed the option "split-loops" from > the following files: > src/common/fast_guided_filter.h > src/common/luminance_mask.h > src/iop/choleski.h > src/iop/filmicrgb.c > src/iop/toneequal.c > > So my question is: what version of gcc is required to compile it? > > Could it be that commit 50742fa02bdf511e62f3bbe10b11c61c2036e4e5 > https://github.com/darktable-org/darktable/commit/50742fa02bdf511e62f3bbe10b11c61c2036e4e5#diff-b93b6846a64705e34a1eb02a9d620317 > made my version of gcc stumble? > > best regards > > Marco
[darktable-dev] GCC version, optimisation options, split-loops
Hi everyone,

After a long time away from this list (but still regularly working with git master) I'm back here...

I've just tried to compile dt master again and it failed on me. The reason is that my GCC doesn't recognize the option split-loops.

Error: /home/marco/build/darktable/src/iop/toneequal.c:1312:1: error: unrecognized command line option ‘-fsplit-loops’

My GCC version: marco@schwipschwap:~/build/darktable$ gcc --version gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516

Interestingly, the last time I compiled dt it worked. It was: dt 2.7.0+1709~g580bf49da

As a workaround I've just removed the option "split-loops" from the following files:
src/common/fast_guided_filter.h
src/common/luminance_mask.h
src/iop/choleski.h
src/iop/filmicrgb.c
src/iop/toneequal.c

So my question is: what version of gcc is required to compile it? Could it be that commit 50742fa02bdf511e62f3bbe10b11c61c2036e4e5 https://github.com/darktable-org/darktable/commit/50742fa02bdf511e62f3bbe10b11c61c2036e4e5#diff-b93b6846a64705e34a1eb02a9d620317 made my version of gcc stumble?

best regards

Marco
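For context: `-fsplit-loops` appeared, as far as I can tell, in GCC 7, which is consistent with GCC 6.3 rejecting it. One quick sanity check is to parse the first line of `gcc --version`; a small Python sketch (the helper functions are illustrative, not part of darktable's build system, and a real build system would probe the flag with a test compile instead):

```python
import re

def gcc_major_version(version_output: str) -> int:
    """Extract the major version from `gcc --version` output.
    The first line looks like: 'gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516'."""
    first_line = version_output.splitlines()[0]
    # Version triple follows the closing parenthesis of the vendor string.
    match = re.search(r'\)\s+(\d+)\.(\d+)\.(\d+)', first_line)
    if not match:
        raise ValueError("unrecognized gcc --version output")
    return int(match.group(1))

def supports_split_loops(version_output: str) -> bool:
    # Assumption: -fsplit-loops was introduced in GCC 7.
    return gcc_major_version(version_output) >= 7

# The exact version string from the report above:
print(supports_split_loops("gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516"))  # False
```

A more robust approach, which CMake-style feature checks take, is to attempt compiling an empty translation unit with the flag and see whether the compiler errors out, rather than trusting version strings.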
Re: [darktable-dev] Automatically select the most focused photo in a burst of photos
That can easily be done by computing the L2 norm of the Laplacian of the pictures, or the L2 norm of the first level of the wavelet decomposition (which is used in the focus preview), and taking the maximum. As usual, it will be more work to wire the UI to the functionality than to write the core image processing. On 06/10/2019 at 14:14, Germano Massullo wrote: > On Sun, 6 Oct 2019 at 13:32, Moritz Mœller wrote: >> Define 'most focused'. >> I give you an example to understand this request better. [...] > > Yes, you are right, but in your case the couple is the main thing that > is moving in the picture. For my use case, imagine I am taking photos > of people who are giving a talk. Some photos of the burst may be > blurred because I moved the camera while shooting, while some other > shots of the same burst could have less blur because my hands > were not moving during the exposure time. > It would be great if an algorithm could detect the best shots
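The proposed sharpness score can be sketched directly: take the L2 norm of the discrete Laplacian of each image, and the burst winner is the image with the maximum score. A pure-Python illustration of the principle (darktable would do this in C over the real pixel pipeline; the toy images are only for demonstration):

```python
import math

def sharpness_score(img):
    """L2 norm of the discrete 2D Laplacian of a grayscale image
    (2D list of floats) -- higher means more high-frequency detail."""
    h, w = len(img), len(img[0])
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian at (y, x)
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4.0 * img[y][x])
            total += lap * lap
    return math.sqrt(total)

def pick_sharpest(burst):
    """Index of the burst image with the highest sharpness score."""
    return max(range(len(burst)), key=lambda i: sharpness_score(burst[i]))

# A checkerboard (lots of edges) vs. a completely flat gray frame:
sharp = [[(x + y) % 2 * 1.0 for x in range(6)] for y in range(6)]
blurry = [[0.5] * 6 for _ in range(6)]
print(pick_sharpest([blurry, sharp]))  # 1
```

This also makes the objection later in the thread concrete: the score is global, so a sharply focused *background* scores at least as well as a sharply focused small subject.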
Re: [darktable-dev] Darktable for ARM
Hi,

Image processing filters on raw pictures (even more so now, with 24-52 Mpx images) are really demanding on computational power (especially the nicest ones, unsurprisingly), and darktable uses 32-bit floating-point arithmetic to perform them, in order to avoid most of the numerical issues you would get with integer processing. Although these operations can theoretically be performed on any 32-bit CPU, they are optimized at a relatively low level only for x86_64 architectures and will most probably run too slowly to be practically usable on ARM and 32-bit architectures.

Optimizing for performance is already a challenge (and a burden) to support all the existing x86_64 generations of SIMD instructions (desktop users being the core market of photographers), plus GPU offloading through OpenCL, plus ensuring consistent behaviour between GPU and CPU code paths and between different vendors' (Intel / Nvidia / AMD) OpenCL. For this reason, there is no active support for 32-bit platforms in darktable, especially since most Linux distributions have dropped 32-bit kernels; so it might or might not compile and work, but don't expect bug fixes for it at this point (unless someone steps up to do it).

For ARM CPUs, in any case, you might want to get rid of all the GTK UI + third-party-library bloat in darktable and start fresh with an embedded/lightweight approach, instead of force-fitting a GTK desktop app into something that will never be fluid enough to be practically usable by a photographer, outside of a geek playground/proof of concept. Any denoising module, or even local contrast enhancement, will bring your ARM to its knees, whether 64- or 32-bit, and even if it's enough to shoot a YouTube video to prove to open-source zealots that you did it and FOSS rocks, it's unrealistic for daily use by today's standards.

Cheers, Aurélien.

On 06/10/2019 at 12:43, Holger Klemm wrote: > Hello, > I installed Raspbian Buster on a Raspberry Pi 3B+ and tried to compile > darktable 2.6.2.
> > cmake aborts with the error message "not supported platform". > Is this a bug, or is it due to the 32-bit operating system? > > Cheers > Holger > > On Saturday, 5 October 2019, 13:41:46 CEST, you wrote: >> On Saturday, 5 October 2019 08:59:57 CEST Holger Klemm wrote: >>> Hello, >> Hi, >> >>> is an ARM version planned for darktable 3.0.0? >>> The current Raspberry Pi 4, Rock Pi 4 and NanoPi M4 are available with 4GB >>> of RAM and should be powerful enough to handle small tasks. >>> In particular, with camera control, new applications would arise. >>> >>> I would be very happy about an ARM version for Raspbian / Armbian. >> there is one for openSUSE >> >> http://download.opensuse.org/repositories/graphics:/darktable/ >> openSUSE_Tumbleweed_ARM/ >> >> and Fedora also has aarch64 >> >> https://koji.fedoraproject.org/koji/buildinfo?buildID=1322464 >> >> Cheers, >> >> Andreas
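The point above about 32-bit float avoiding the numerical issues of integer processing can be shown with a toy pipeline: two mutually inverse scaling steps that round-trip cleanly in float but lose information permanently in integer arithmetic (the values are arbitrary, purely for illustration):

```python
# Toy two-step pipeline: scale a pixel value by 3/7, then by 7/3.
# In exact arithmetic this is the identity transform.

x_int = 130                 # 8-bit-style integer pixel value
x_int = (x_int * 3) // 7    # 55  (integer rounding discards information)
x_int = (x_int * 7) // 3    # 128 (not 130: the loss is permanent)

x_f = 130.0                 # float pixel value
x_f = x_f * 3.0 / 7.0
x_f = x_f * 7.0 / 3.0       # back to ~130.0, within float rounding error

print(x_int)                     # 128
print(abs(x_f - 130.0) < 1e-9)   # True
```

With dozens of modules chained in a real pixel pipeline, this kind of quantization error compounds at every step, which is one reason a float pipeline is worth its computational cost.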
Re: [darktable-dev] Automatically select the most focused photo in a burst of photos
On Sun, 6 Oct 2019 at 13:32, Moritz Mœller wrote: > > Define 'most focused'. > I give you an example to understand this request better. [...] Yes, you are right, but in your case the couple is the main thing that is moving in the picture. For my use case, imagine I am taking photos of people who are giving a talk. Some photos of the burst may be blurred because I moved the camera while shooting, while some other shots of the same burst could have less blur because my hands were not moving during the exposure time. It would be great if an algorithm could detect the best shots
Re: [darktable-dev] Darktable for ARM
On Sunday, 6 October 2019 12:43:16 CEST Holger Klemm wrote: > Hello, > I installed Raspbian Buster on a Raspberry Pi 3B+ and tried to compile > darktable 2.6.2. > > cmake aborts with the error message "not supported platform". > Is this a bug, or is it due to the 32-bit operating system? I think so; you need aarch64... Andreas -- Andreas Schneider a...@cryptomilk.org GPG-ID: 8DFF53E18F2ABC8D8F3C92237EE0FC4DCC014E3D
Re: [darktable-dev] Automatically select the most focused photo in a burst of photos
Define 'most focused'. I'll give you an example to understand this request better. I shoot (tango) dancers in low light. A lot of shots are shoulders and heads only. Because I shoot in low light, I use a very fast manual lens at full aperture (f/0.95). In the burst sequence of seven shots that my Sony produces, one will have the closest eye of the couple in focus. That only means a few pixels of the focus plane intersecting the face somehow. That's usually the best picture; the rest are for the bin. However, an adjacent picture that is completely useless may have the chest in focus, which covers many more pixels than the face. How should darktable decide which is the 'most focused'? I think the only way to solve this well is with ML, and possibly user input about the intent. No magic bullet exists for this type of problem. .mm
Re: [darktable-dev] Darktable for ARM
Hello, I installed Raspbian Buster on a Raspberry Pi 3B+ and tried to compile darktable 2.6.2. cmake aborts with the error message "not supported platform". Is this a bug, or is it due to the 32-bit operating system? Cheers Holger On Saturday, 5 October 2019, 13:41:46 CEST, you wrote: > On Saturday, 5 October 2019 08:59:57 CEST Holger Klemm wrote: > > Hello, > Hi, > > is an ARM version planned for darktable 3.0.0? > > The current Raspberry Pi 4, Rock Pi 4 and NanoPi M4 are available with 4GB > > of RAM and should be powerful enough to handle small tasks. > > In particular, with camera control, new applications would arise. > > > > I would be very happy about an ARM version for Raspbian / Armbian. > > there is one for openSUSE > > http://download.opensuse.org/repositories/graphics:/darktable/ > openSUSE_Tumbleweed_ARM/ > > and Fedora also has aarch64 > > https://koji.fedoraproject.org/koji/buildinfo?buildID=1322464 > > Cheers, > > Andreas
[darktable-dev] Automatically select the most focused photo in a burst of photos
When you shoot a burst of photos, in darktable you will then select the one that is most focused (least blurred). It would be great if darktable were able to select the least blurred photo automatically. Related ticket: https://redmine.darktable.org/issues/12712