I guess it's a bit hard to define what the "natural" look of the bright sun is, because we usually do not look directly at the bright sun (since that could damage our eyes).

On 2020-09-01 17:26, Top Rock Photography wrote:
Do not get caught up on the Sharpen module. I remember reading an article which claimed that ALL digital images NEED sharpening, and justified it by claiming that ALL digital cameras blur the image with the OLPF, a.k.a. the AA filter. This is not true. My Pentax (and every Pentax since the K-5 IIs, as well as models from several other manufacturers) no longer puts an OLPF on sensors with more than 16 Mpx.

The truth which Pentax realised is that there is no point in making something unsharp just to try to sharpen it again with an unsharp mask. One gets more detail by simply never blurring it in the first place; no OLPF. Then why was it put there in the first place??? To combat moiré. Then why did Pentax get rid of it? Because on high-Mpx sensors, moiré is less of a problem, and there are other ways to combat moiré than blurring the image (and if one needed to, they have a simulated AA filter with two strength options).
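For reference, the classic unsharp-mask operation is just "original plus a fraction of (original minus blurred)". A minimal numpy sketch, assuming a single-channel float image and made-up radius/amount values (this is not darktable's code):

import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=1.5, amount=0.5):
    # Subtract a blurred copy to isolate fine detail, then add a fraction of it back.
    # It can only re-emphasise edges that some blur (e.g. an OLPF) already softened;
    # it cannot restore detail that was never recorded.
    blurred = gaussian_filter(img, sigma=radius)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)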

Fact is, unless one has an OLPF in their camera, one probably does not need an unsharp mask, except for what it was invented for. Back in the emulsion days, an unsharp mask was not a normal part of the workflow; it was a creative tool for a creative effect. (It also took advantage of “depletion zones” in emulsions to bring about the effect. CMOS/CCD sensors do not have these depletion zones.)

But if one insists on using the Sharpen module: whenever I used it (on the pre-3.2.1 releases), I did not use the default method but the other method, and got better results. (I just opened up Dt to look at what the two methods were called, and realised that Dt ver 3.2.1 has only one sharpening method, or at least my build has only one. I did not realise that until now, as I rarely use it.)

Furthermore, if one will be scaling the image down to a lower resolution, then sharpening is not only unnecessary, but whatever sharpening is done on the high-Mpx original will probably be lost anyway (depending on how much one resizes down).
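One can convince oneself of this with a toy numpy sketch (made-up image, made-up settings, a plain 4x box-average resize as a stand-in for export resizing; none of this is darktable's pipeline):

import numpy as np
from scipy.ndimage import gaussian_filter

def downscale(a, f=4):
    # Simple area-average downscale by an integer factor, standing in for export resizing.
    h, w = a.shape
    return a[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.random((512, 512))                                            # stand-in for a detailed photo
sharp = np.clip(img + 0.5 * (img - gaussian_filter(img, 1.0)), 0, 1)    # unsharp-masked copy

# Most of the sharpening difference is averaged away by the downscale, because the
# boosted fine detail is finer than anything the smaller image can represent.
full = np.abs(sharp - img).mean()
small = np.abs(downscale(sharp) - downscale(img)).mean()
print(f"sharpening difference at full size: {full:.4f}")
print(f"after a 4x downscale:               {small:.4f}  ({small / full:.0%} survives)")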

Regarding highlight/shadow recovery, no real image processor can recover what was never there. If the sensor has hit saturation, the highlight detail is forever lost. If the light level was very low, then shot noise overwhelms the detail, which can never truly be brought back. Some image processors use machine learning/AI to create what was never there in the first place, and sometimes, just sometimes, the results look acceptable. Usually, they do not look natural.
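On the shot-noise point, a quick back-of-the-envelope check (a simplified, assumed camera model; the photon counts are made up):

import numpy as np

# Shot noise is Poisson, so its relative size grows as the photon count drops;
# in deep shadows it becomes comparable to the signal itself, and no later
# processing can reliably tell noise from detail.
rng = np.random.default_rng(1)
for photons in (10000, 100, 4):                      # bright, mid-tone, deep-shadow pixel
    sample = rng.poisson(photons, 100_000)
    print(f"{photons:6d} photons per pixel -> relative noise {sample.std() / sample.mean():.0%}")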

In a video where Aurélien was demonstrating how to do professional edits under time constraints, someone submitted an awful image with a grossly under-exposed subject and a heavily clipped sky (sun behind clouds). One person asked, when Aurélien was done, why the sun was not yellow. Well, the sun is NOT yellow, so it ought not to be. However, some other programs, when they reconstruct such images, make the bright area quite yellow.

This is because, if one has a bright sun in a blue sky, then as one approaches the sun, the blue channel is the first to clip and the red is the last. As the AI tries to figure out what was supposed to be there, it concludes that what was there was obviously mostly blue and least red, and reproduces a mostly blue-plus-green area (the sensor is mostly green), making the sun yellow. This fading into yellow may possibly look better than fading into a big white blob, but it is nevertheless unnatural. This is what some are used to seeing when they “reconstruct/recover” the highlights.
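Here is a toy numpy illustration of that premise (the sky colour and intensities are made up): in a blue sky the blue channel carries the most signal, so it saturates first as the scene brightens toward the sun and red saturates last, leaving only the still-unclipped channels with any gradient for a recovery algorithm to extrapolate from.

import numpy as np

sky_rgb = np.array([0.4, 0.7, 1.0])           # assumed linear colour of the blue sky
intensity = np.linspace(0.5, 4.0, 8)          # scene brightening toward the sun
true = intensity[:, None] * sky_rgb           # what the scene actually was
recorded = np.minimum(true, 1.0)              # the sensor clips each channel at 1.0

for t, rec in zip(intensity, recorded):
    clipped = [c for c, v in zip("RGB", rec) if v >= 1.0]
    print(f"intensity {t:4.1f}  recorded RGB {np.round(rec, 2)}  clipped: {clipped or 'none'}")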

Similarly for shadows: what the AI thinks is there may look better than what was actually there, but it is nevertheless unnatural.

What one may think is a “better job” may just be what one is used to getting from badly coded AI algorithms.

Sincerely,

Karim Hosein
Top Rock Photography
754.999.1652



On Tue, 1 Sep 2020 at 04:39, Kneops <kne...@gmail.com <mailto:kne...@gmail.com>> wrote:

    DT is great :), the only two things that currently hold me back are that
    I still think LR overall does a better job in highlight / shadow
    recovery and detail/sharpness. I just played with an image of wooden
    beach poles and LR gives me more detail and edge sharpness, even when
    adding sharpness + local contrast + the highpass filter in DT.

    I'm a Linux user and have 2 PCs, and there is no doubt in my head that
    I would turn my newest and really fast PC back into a Linux machine too
    if I can get those two things right, because I currently consider that
    investment a waste of money if I only use it for LR. DT in Linux on that
    new PC is so fast I could not even see the thumbnails being created
    during import; they were just there in a blink of an eye :).



    On 31-08-2020 at 21:28, Matt Maguire wrote:
     > Darktable is great, it has taught me so much about image processing,
     > and I haven't looked back since leaving Lightroom.


____________________________________________________________________________
darktable user mailing list
to unsubscribe send a mail to darktable-user+unsubscr...@lists.darktable.org
