On Mon, 07 Jun 2010 23:02:48 -0400, Graham Fawcett <fawc...@uwindsor.ca> wrote:
Hi folks,
This program works as expected in D2:
import std.stdio;
import std.algorithm;

T largestSubelement(T)(T[][] lol) {
    alias reduce!"a>b?a:b" max;
    return cast(T) max(map!max(lol)); // the cast matters...
}

void main() {
    auto a = [[1,2,3],[4,5,6],[8,9,7]];
    assert (largestSubelement(a) == 9);
    auto b = ["howdy", "pardner"];
    assert (largestSubelement(b) == 'y');
    auto c = [[1u, 3u, 45u, 2u], [29u, 1u]];
    assert (largestSubelement(c) == 45u);
}
But if I leave out the 'cast(T)' in line 7, then this program will not
compile:
lse.d(6): Error: cannot implicitly convert expression (reduce(map(lol))) of type dchar to immutable(char)
lse.d(14): Error: template instance lse.largestSubelement!(immutable(char)) error instantiating
Where did the 'dchar' come from? And why does the cast resolve the issue?
In a recent update, Andrei changed char[] and wchar[] to be bi-directional ranges of dchar rather than straight arrays, in the eyes of the range primitives (at least, I think that was the change). I think this is where the dchar comes from.
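You can see this directly: a sketch (assuming the std.range/std.traits introspection primitives) showing that narrow strings report dchar as their element type, which is why reduce over them yields a dchar:

```d
import std.range;
import std.traits;

void main() {
    // Narrow strings are presented as ranges of dchar,
    // not of their code-unit type (char/wchar):
    static assert(is(ElementType!string == dchar));
    static assert(is(ElementType!(char[]) == dchar));

    // So 'front' decodes a full code point:
    string s = "hi";
    assert(s.front == 'h'); // s.front is a dchar
}
```

That is why reduce!"a>b?a:b" over a string produces a dchar, which does not implicitly convert back to immutable(char).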
If you had a char[], and the 'max' element was a code point encoded as two or more code units, how would you return a single char for that result?
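A concrete case: a minimal sketch (the string literal is my own example, not from the original post) where the maximum element of a char[] occupies two UTF-8 code units, so no single char could hold the answer:

```d
import std.algorithm;

void main() {
    // '\u00e9' (é) is one code point but two UTF-8 code units:
    string s = "caf\u00e9";
    assert(s.length == 5); // 5 code units, 4 code points

    // reduce sees the string as a range of dchar, so the
    // maximum is the full decoded code point:
    auto m = reduce!"a>b?a:b"(s);
    assert(m == '\u00e9');
    // Casting m to char would truncate the code point.
}
```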
-Steve