On Monday, 5 January 2015 at 23:39:17 UTC, Manu via Digitalmars-d wrote:
> I'm finding myself at a constant struggle between speed and
> maximizing-precision. I feel like a lib should maximise precision, but
> the trouble then is that it's not actually useful to me...

If you create a "pixel" converter that aims for speed, the programmer might also want it to generate a shader (as a text string) with exactly the same properties. It makes less and less sense to create a performant imaging library that is CPU-only.
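
To illustrate the idea (only a rough sketch, not a proposed API): a single description of a transform, say a 3x3 matrix, can drive both the CPU conversion and the generated shader text. The names and layout below are placeholders.

import std.format : format;

struct Mat3
{
    double[3][3] m;

    // Apply the matrix to one colour triple on the CPU.
    double[3] apply(double[3] c) const
    {
        double[3] r;
        foreach (i; 0 .. 3)
            r[i] = m[i][0] * c[0] + m[i][1] * c[1] + m[i][2] * c[2];
        return r;
    }

    // Emit the same transform as a GLSL constant (column-major mat3).
    string toGlsl(string name) const
    {
        return format("const mat3 %s = mat3(%f, %f, %f, %f, %f, %f, %f, %f, %f);",
            name,
            m[0][0], m[1][0], m[2][0],
            m[0][1], m[1][1], m[2][1],
            m[0][2], m[1][2], m[2][2]);
    }
}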

I suggest reducing the scope to:

1. Provide generic, accurate conversions and iterators for colours (or, more generally, for arrays of spectral values). Useful for batch-style processing or initialization.

2. Provide fast colour support for transforms that are simple enough not to warrant GPU processing, but where you accept the cost of building lookup tables before processing. (Build the tables using (1); a sketch follows below.)
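
As a rough sketch of (2), assuming an sRGB-style transfer curve as the example transform (the function names are mine, not a proposed API): compute the table once with the precise double path, then process pixels with cheap ubyte lookups.

import std.math : pow;

// Precise reference conversion in double (the "(1)" path).
double srgbToLinear(double s)
{
    return s <= 0.04045 ? s / 12.92 : pow((s + 0.055) / 1.055, 2.4);
}

// Build a 256-entry table mapping 8-bit sRGB to 8-bit linear.
ubyte[256] buildLut()
{
    ubyte[256] lut;
    foreach (i; 0 .. 256)
        lut[i] = cast(ubyte)(srgbToLinear(i / 255.0) * 255.0 + 0.5);
    return lut;
}

// Fast path: one table lookup per component.
void applyLut(ubyte[] pixels, ref const ubyte[256] lut)
{
    foreach (ref p; pixels)
        p = lut[p];
}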

> Very few applications care about colour precision beyond ubyte, so I
> feel like using double for much of the processing is overkill :/
> I'm not sure what the right balance would look like exactly.

I think a precise reference implementation using double is a good start. People creating PDFs, SVGs, or other applications without real-time requirements probably want that. It is also useful for building LUTs.
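
For instance (only an illustration, the names are made up): when writing a colour into an SVG, you can afford to do the transfer-function math in double and only quantise to ubyte at the very end.

import std.algorithm : clamp;
import std.format : format;
import std.math : pow;

// Precise linear-light -> sRGB in double.
double linearToSrgb(double l)
{
    return l <= 0.0031308 ? l * 12.92 : 1.055 * pow(l, 1.0 / 2.4) - 0.055;
}

// Format a linear RGB colour as an SVG/CSS hex string.
string toSvgHex(double r, double g, double b)
{
    ubyte q(double v)
    {
        return cast(ubyte)(clamp(linearToSrgb(v), 0.0, 1.0) * 255.0 + 0.5);
    }
    return format("#%02X%02X%02X", q(r), q(g), q(b));
}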

One thing to consider is that you might also want to handle colour components that have negative values or values larger than 1.0 (see the sketch after this list):

- it is useful in non-realistic rendering as "darklights" ( http://www.glassner.com/wp-content/uploads/2014/04/Darklights.pdf )

- with negative values you can have a unique representation of a single colour in CIE terms (the theoretical basis for RGB, developed in the 1930s).

- it allows the programmer to do their own gamut compression after conversion
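
As a rough sketch of what that enables, assuming double components (the type and the compression strategy here are one illustrative choice, not a proposal): the colour type stays unclamped, and gamut compression is an explicit step the programmer applies afterwards.

import std.algorithm : clamp, max, min;

struct RGB
{
    double r, g, b;   // unclamped: darklights and out-of-gamut values allowed
}

// Naive gamut compression: pull the colour towards its luminance until the
// components fit in [0, 1], then clip whatever is still out of range.
RGB compressGamut(RGB c)
{
    const lum = 0.2126 * c.r + 0.7152 * c.g + 0.0722 * c.b;
    const lo = min(c.r, c.g, c.b);
    const hi = max(c.r, c.g, c.b);

    double t = 1.0;  // how much of the original saturation we keep
    if (lo < 0.0 && lum > 0.0)
        t = min(t, lum / (lum - lo));
    if (hi > 1.0 && lum < 1.0)
        t = min(t, (1.0 - lum) / (hi - lum));

    const o = RGB(lum + (c.r - lum) * t,
                  lum + (c.g - lum) * t,
                  lum + (c.b - lum) * t);
    return RGB(clamp(o.r, 0.0, 1.0), clamp(o.g, 0.0, 1.0), clamp(o.b, 0.0, 1.0));
}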
