On 21/09/11 10:11 PM, Andrei Alexandrescu wrote:
On 9/21/11 3:34 PM, Peter Alexander wrote:
The problem is that it's simply intractable to do lazy computation in D while
maintaining the 'correct' static typing.

For example, in Haskell, map (correctly) has the signature:

map :: (a -> b) -> [a] -> [b]

but in D, std.algorithm's map has the signature (expressed in some Haskell/D
pseudocode)

map :: (a -> b) -> [a] -> Map!((a -> b), [a])

I think it really is

map :: (a -> b) -> Range1!a -> Map!((a -> b), Range1!a)

i.e. the type of the input range is not fixed to be a built-in list.
That makes D's map more general than Haskell's, but also more difficult
to implement. For example, Haskell's implementation does not need to
worry about O(1) access to the nth element of the result.
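
For concreteness, a small sketch of what that extra generality means in practice (the wrapper type's name has varied across Phobos versions, so it is left unnamed here):

import std.algorithm : map;
import std.range : iota;

void main()
{
    // The result is not an int[] but a lazy wrapper parameterised on
    // both the function and the input range type.
    auto ys = map!"2*a"([1, 2, 3]);
    static assert(!is(typeof(ys) == int[]));

    // Any input range is accepted, not just built-in arrays...
    auto zs = map!"2*a"(iota(1, 4));

    // ...and over a random-access source, length and indexing are
    // forwarded, which a list-only map never has to provide.
    assert(ys.length == 3);
    assert(ys[1] == 4);
}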

True, I oversimplified it.


To get the same kind of signature as Haskell, you'd need to use eager
computation in D, but that's horrendously inefficient in most cases.
Alternatively, you can use interfaces, but then you lose type
information in a lot of situations.

The problem with the D situation is that if I have a function:

void foo(int[])

then it can't be called with:

foo(map!"2*a"([1, 2, 3]));

I'm forced to either:
(a) use eager computation by wrapping it in array(...)
(b) change foo to be a template
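
A minimal sketch of both options, with hypothetical names fooEager and fooLazy:

import std.algorithm : map;
import std.array : array;

void fooEager(int[] xs) { /* needs an actual array */ }

void fooLazy(Range)(Range r) { /* templated: accepts any range */ }

void main()
{
    auto doubled = map!"2*a"([1, 2, 3]);
    // fooEager(doubled);      // error: the lazy wrapper is not an int[]
    fooEager(array(doubled));  // (a) force eager computation
    fooLazy(doubled);          // (b) make foo a template instead
}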

This is exactly as it should be. For whatever reason, foo wants an
array. For a good reason, map returns a lazy sequence that depends upon
the type of its input. You necessarily need to force computation into an
array because foo needs an array and because D arrays are eager.

This is tantamount to saying that D is not as good as Haskell at lazy
computation. Fine - lazy computation is Haskell's turf.

Well yes, but that's the whole problem: std.algorithm and std.range present lazy, functional programming as if it were easy and expressive in D, but it isn't, except in small self-contained examples.


What if foo is a virtual member function? Those can't be templates.

Then it would need to take a dynamic Range as a parameter, parameterized
with the element type.
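
Roughly like this, using the class-based range wrappers in std.range (the class name Consumer is just for illustration, and the element type is assumed to be int):

import std.algorithm : map;
import std.range : InputRange, inputRangeObject;

class Consumer
{
    // A virtual method cannot be a template, but it can accept the
    // runtime range interface, parameterised on the element type.
    void foo(InputRange!int r)
    {
        foreach (x; r)
        {
            // use x
        }
    }
}

void main()
{
    auto c = new Consumer;
    c.foo(inputRangeObject(map!"2*a"([1, 2, 3])));
}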

What if foo recursively maps to itself? e.g.

auto foo(Range)(Range r)
{
    if (r.length == 1)
        return r.front;
    r.popFront();
    return map!"2*a"(r);
}

Hm, this is not recursive (and should work).

Apologies, the last line should read:

return foo(map!"2*a"(r));


Someone in D.learn tried to write quickSort in a similar way, but it
obviously doesn't work because of the infinite number of template
instantiations. To simulate lazy computation in D, you require a type
transformation, which doesn't work with recursion. Your only choice is
to abstract away to some sort of IRange!T interface to remove the type
divergence, but then you lose performance, and in some cases, type
information.
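
Roughly, that interface-based workaround looks like this (a sketch only, with a simplified stopping condition; the point is that the parameter and return types stay fixed, so the recursion never instantiates a new template):

import std.algorithm : map;
import std.range : InputRange, inputRangeObject;

// Recursing through map with a template parameter would wrap the range
// type once per call and diverge at compile time.  Erasing the static
// type to InputRange!int keeps it to a single instantiation.
InputRange!int foo(InputRange!int r)
{
    if (r.empty)
        return r;
    r.popFront();
    return foo(inputRangeObject(map!"2*a"(r)));
}

void main()
{
    auto r = foo(inputRangeObject([1, 2, 3]));
}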

But Haskell has lost the performance argument right off the bat.

Maybe.

As I mentioned later on, Haskell can do deforestation optimisations (among many other things) that D cannot, because they rely on Haskell's overall design (absolute purity).

I'm curious if you have benchmarks to back it up (I don't, but I'm not making the claim). In D it's easy to predict performance because there's a (relatively) simple mapping of D code to machine code. The same isn't true of Haskell.


You are mixing tradeoffs here. There are pretty immutable rules of what
can and cannot be done, many of which are consequences of a handful of
simple facts. You want Haskell capability with D performance. You can't
have that.

Well, as much as I would like that, I'm not arguing for it.

My post was in reply to Timothy saying:

"If I write functional style code in D I unfortunately usually get feedback that it is 'extremely hard to read'."

and

"std.algorithm/std.range are also considerably (and as I understand, deliberately) underpowered for FP when compared to eg Haskell's standard set of libraries"

I understand the tradeoffs D makes; I'm just agreeing with and elaborating on what Tim said: Phobos' functional offering is underpowered and much more difficult to understand compared to Haskell's. As you said, Haskell is more capable than D in this area.


Of course, if every function that accepts a range becomes a template
then you have the practical problem of managing the number of template
instantiations. Code bloat is still a real problem with heavy template
usage.

Finally, map in Haskell just makes sense. You have a list of a's, a
function of 'a to b' and you get a list of b's out the other end. It's
simple, it's exactly what you want, and it just works. You can't say the
same in D.

D's map is superior to Haskell's. There is no contest.

It's only superior if you measure superiority on two very specific properties:

- Applicability to different data structures
- Performance in simple cases

There are other important properties of Haskell's map that I'm sure Haskellites would use to argue that theirs is superior:

- Simplicity (a couple of lines of code to define with only basic language features)

- Gracefully works everywhere (you don't need to change style or interface for virtual functions or recursive functions; it just works)


Again, Haskell's map forces only ONE range abstraction to fit
everybody. D's map allows anyone to choose their own map abstraction,
and is clever enough to define another map abstraction based on it that
offers maximum capability and maximum efficiency.

(In addition, Haskell's map forces ONE higher-order function
representation, whereas D works with a function alias, function, or
delegate.)
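
For illustration, a quick sketch of those forms (the names twice and factor are just placeholders, and the string shorthand used earlier in the thread is included for comparison):

import std.algorithm : map;

int twice(int a) { return 2 * a; }

void main()
{
    auto a = map!twice([1, 2, 3]);             // alias to a free function
    auto b = map!(x => 2 * x)([1, 2, 3]);      // lambda alias
    int factor = 2;
    auto c = map!(x => factor * x)([1, 2, 3]); // delegate capturing a local
    auto d = map!"2*a"([1, 2, 3]);             // string shorthand via unaryFun
}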

Yes, D wins here.

To summarise my position:

I understand there are tradeoffs. Neither Haskell nor D is better on every axis of comparison. D's functional offering probably has better performance, and definitely has broader applicability to different data structures, no question. On the other hand, Haskell is easier to read, write, and understand, and is generally more expressive if you don't mind being limited to one data structure.
