I'm blowing the dust off my colour proposal implementation.
I want to get it into a state where it may be accepted into
std.experimental, but I'm having trouble deciding on the scope of the
initial offering. Some people are arguing for 'complete', and I tend
to argue for minimal/un-contentious. It can grow in the future as
desired.

There is a slightly blurry line separating a colour lib from an image
lib, and I want to sharpen those boundaries.

My design propositions are these:

1. Colour types definitely belong in the std library;
  - Colour types and conversions are hard to implement; non-experts
almost always get them wrong.
  - The implementation is more-or-less un-contentious; I don't think
there's much room for debate in terms of API design.
  - It's an enabler for a lot of further library work.
2. Image processing probably doesn't belong in the std library, at
least, not right now;
  - There are so many ways the API could look, no particular one is
objectively 'correct'.
  - We need time for conventions to proliferate before API decisions
for the std lib can reasonably be decided.
3. I am kinda drawing the line between 'colour' and 'image' at the
point where you find yourself working with buffers of data. That is,
'colour' is strictly @nogc.

So, I guess that means the scope clearly encompasses colour types,
primitive operations, and typical functions like interpolation and
conversion. I'm happy with that definition, but it raises some
questions...
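
To make that scope concrete, here's the rough shape of a type inside
the boundary (a sketch only; the names `RGB8` and `toGrey` are mine,
not the proposal's API):

```d
import std.algorithm.comparison : clamp;

// Sketch of a strictly-@nogc colour type. Conversion is where
// non-experts go wrong; even greyscale needs the right weights
// (these are the Rec.601 luma coefficients).
struct RGB8
{
    ubyte r, g, b;

    ubyte toGrey() const pure nothrow @nogc @safe
    {
        return cast(ubyte)clamp(0.299 * r + 0.587 * g + 0.114 * b, 0.0, 255.0);
    }
}
```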

Colour defined this way, without buffered enhancements, will be
inefficient; performing single-colour operations in series is not
cool. Image operations tend to want SIMD, loop unrolling, etc., for
efficient image library implementations. I don't really feel it's an
image library's place to re-define these efficient array versions of
colour operations... so, should they be in the base colour library?
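
To illustrate the concern, the batched form an image library would
want looks something like this (a hypothetical `lerpBatch`, not part
of the proposal):

```d
// Batched lerp over slices. Correct, but a serious implementation
// would want to vectorise the inner loop (SIMD, unrolling), which
// a scalar colour type can't express on its own.
void lerpBatch(T, F)(const(T)[] a, const(T)[] b, F t, T[] result)
{
    assert(a.length == b.length && a.length == result.length);
    foreach (i; 0 .. a.length)
        result[i] = cast(T)(a[i] * (1 - t) + b[i] * t);
}
```

The compiler may auto-vectorise a loop like this for simple element
types, but data layout and per-channel types make that unreliable for
real colour structs.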

This problem actually extends beyond colours... D positions ranges as
a core language mechanic, and I've often struggled with the problem
that defining an element type and its behaviour, then using it
element-by-element in a range, is frequently not an efficient
implementation. It's the classic OOP design fail, but dressed up
differently.
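
For contrast, the idiomatic range version of the same operation:
correct and composable, but it hands the optimiser one element at a
time (again a sketch, with my own names):

```d
import std.algorithm.iteration : map;
import std.range : zip;

// Lazy, element-at-a-time interpolation over two ranges. The whole
// pipeline must be inlined end-to-end before the compiler can even
// attempt to vectorise it.
auto lerpRange(R1, R2, F)(R1 a, R2 b, F t)
{
    return zip(a, b).map!(pair => pair[0] * (1 - t) + pair[1] * t);
}
```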

What is the current wisdom with respect to implementing efficient
batch implementations for various element types?

I'm feeling like I should not attempt to address this problem in the
initial colour library, and should keep it as simple as possible: just
the colour type. It can be used for inefficient purposes for now (and
produce *correct* results), and we can worry about efficient batch
processing in a follow-up, or when an image library comes along?


I have another question too; some of the operations are algorithmic in
nature. Take lerp, for example. Interpolation may appear in an
algorithm, which means we should have a standard API for interpolation
across all applicable types, with modules providing implementations
for the types they introduce. There should be default implementations
in Phobos for built-in types.

Where does this belong? (I haven't found an existing one.)
I feel like `lerp(T, F)(T a, T b, F t) { return a*(1-t) + b*t; }`
belongs kinda near min/max... but std.algorithm.comparison doesn't
feel right. I suspect a small suite of functions like this belongs
together.
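
Fleshed out slightly, that free function might look like this
(placement and name still open; attributes would be inferred since
it's a template):

```d
// Generic linear interpolation; user modules could overload this
// for their own types, with this as the default for built-ins.
T lerp(T, F)(T a, T b, F t)
{
    return cast(T)(a * (1 - t) + b * t);
}
```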
