On Wednesday, 18 February 2015 at 16:10:30 UTC, Pasqui23 wrote:
> No. The main strength of the range API is its ability to
> preserve range categories. The loss of range categories is a
> *price* we pay for lazy evaluation, categories which can be
> restored by .array in exchange for the usual price of dynamic
> allocation.

The argument naturally extends to ContiguousRanges:
The loss of contiguity is the price paid for lazy evaluation (note that retro is lazily evaluated), and the result of .array is always contiguous.

> you are describing ContiguousRange as
> a type implicitly convertible to T[]

Not implicitly, but otherwise this may be a much nicer way to define ContiguousRange than what I had proposed.

I see a ContiguousRange as an underlying T[] with user-defined operations and constraints, e.g. a Bitmap or an AudioClip. This would enable a generic function to operate on custom ranges without losing information by decaying the argument into a T[]. For example, you could perform some generic search operation on an AudioClip and return a slice while preserving the AudioClip's sampling rate.
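A hypothetical sketch of that idea (the AudioClip type and its members are illustrative, not an existing API): a wrapper around a contiguous float[] whose slices remain views into the same buffer and carry the sampling rate along.

```d
// Illustrative only: an AudioClip wrapping contiguous float[] storage
// plus metadata that generic slicing operations should not discard.
struct AudioClip
{
    float[] samples;   // the underlying contiguous storage
    uint sampleRate;

    // Slicing preserves both contiguity and the clip's metadata.
    AudioClip opSlice(size_t lo, size_t hi)
    {
        return AudioClip(samples[lo .. hi], sampleRate);
    }

    size_t length() { return samples.length; }
    float opIndex(size_t i) { return samples[i]; }
}

void main()
{
    auto clip = AudioClip([0.0f, 0.5f, 1.0f, 0.5f], 44_100);
    auto hit = clip[1 .. 3];          // e.g. the result of some generic search
    assert(hit.sampleRate == 44_100); // metadata survives the slice
    assert(hit.samples.ptr is clip.samples.ptr + 1); // still a contiguous view
}
```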

> for an array r, is r.retro contiguous or not?

> I would say yes. In general, a good rule of thumb would be: can this
> range be copied via memcpy? Then preserve contiguousness.

The problem with this rule is illustrated with an example:

  import std.algorithm : min;
  import std.range : ElementType;
  import core.stdc.string : memcpy;

  void copy (R1, R2) (R1 src, R2 tgt)
  out {
    foreach (i; 0..min (src.length, tgt.length))
      assert (tgt[i] == src[i]);
  }
  body {
    memcpy (tgt.ptr, src.ptr,
      ElementType!R1.sizeof * min (src.length, tgt.length));
  }

  auto s = [0,1,2,3,4,5];
  auto r = [9,8,7,6,5,4].retro;
  s.copy (r);

Originally, r == [4,5,6,7,8,9].
What's the value of r at the end of this program?

If *r.ptr == 4, then we have
  r == [0,5,6,7,8,9]
and a segfault if we're lucky.

If *r.ptr == 9, then we have
  r == [5,4,3,2,1,0]
and a failing postcondition.

There is no scalable, generic way to satisfy the postcondition and get our expected behavior, i.e.
  r == [0,1,2,3,4,5]

> Optimized data transfers don't care at all that the
> bytes are in order, only that the data are in the slice you gave
> them.

I have to disagree with this. While the transfer itself is free to occur out-of-order, if the result is out of order, then the program is most likely incorrect.

> My proposal allows simple extension to multiple dimensions.
> If r is an n-dimensional range, then the following must
> hold true:
>
>   void[] s = r.toContiguous;
>   foreach (i1; r) foreach (i2; i1) ... foreach (in; inprev)
>     assert (s.ptr <= &in && &in < s.ptr + s.length);

So, the catch here is illustrated with another example:

Consider a 2D array in row-major order (where [i,j] is adjacent to [i,j+1]). Here, slicing along the rows will yield a ContiguousRange r[x..y, 0..$], but r[x..y, z..w] where z > 0 || w < r.length will be non-contiguous at the column boundaries. There's contiguity hiding in there, but it can't be leveraged.
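The boundary effect can be seen directly with a flat row-major buffer (the layout here, index [i, j] mapping to i*cols + j, is the assumption, not a library convention): a full-width row slice is one contiguous chunk, while a column-restricted slice leaves a stride gap between its row pieces.

```d
void main()
{
    enum rows = 4, cols = 3;
    auto data = new int[rows * cols];          // flat row-major storage
    foreach (i; 0 .. rows * cols) data[i] = cast(int) i;

    // Full-width slice r[1 .. 3, 0 .. $]: one contiguous chunk.
    auto fullRows = data[1 * cols .. 3 * cols];
    assert(fullRows.ptr is data.ptr + cols);   // contiguous within the parent

    // Column-restricted slice r[1 .. 3, 1 .. 3]: each row piece is
    // contiguous, but a gap of (cols - 2) elements separates the pieces.
    auto row1 = data[1 * cols + 1 .. 1 * cols + 3];
    auto row2 = data[2 * cols + 1 .. 2 * cols + 3];
    assert(row2.ptr - (row1.ptr + row1.length) == cols - 2);
}
```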

This could be solved by considering r[x..y, z..w] as a RandomAccessRange of ContiguousRanges, which would imply that r[a,b] is equivalent to r[a][b]. On the other hand, it could be considered a RandomAccessRange which yields ContiguousRanges under 1-dimensional slicing... more verbose, but avoids the former interpretation's implications.
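A rough sketch of the first interpretation, with all names purely illustrative: a row-major 2D view acting as a RandomAccessRange whose elements (full rows) are contiguous slices, defined so that r[a, b] and r[a][b] coincide.

```d
// Illustrative only: a RandomAccessRange of ContiguousRanges.
struct Matrix2D
{
    int[] data;      // flat row-major storage
    size_t cols;

    // r[a]: one full row -- a contiguous slice of the parent buffer.
    int[] opIndex(size_t a)
    {
        return data[a * cols .. (a + 1) * cols];
    }

    // r[a, b]: defined to be exactly r[a][b].
    int opIndex(size_t a, size_t b)
    {
        return this[a][b];
    }

    size_t length() { return data.length / cols; }
}

void main()
{
    auto m = Matrix2D([0, 1, 2, 3, 4, 5], 3);  // 2 rows x 3 cols
    assert(m[1, 2] == m[1][2]);                // the two indexings agree
    assert(m[1].ptr is m.data.ptr + 3);        // each row is contiguous
}
```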

But, like I said, extending range concepts to multiple dimensions is tricky. I've been working on the problem on my own for the last few months, and so far my solution has been to define linear spaces which convert to ranges under certain conditions.
