On 08/03/2010 04:31 PM, bearophile wrote:
BCS:

The video is up: http://www.youtube.com/watch?v=RlVpPstLPEc

Thank you for the link and thank you to Andrei for the very nice
talk. I think the link can be put on Reddit too, if not already
present. When Andrei talks he seems a little different from the
Andrei that writes in this newsgroup. The talk was quite interactive,
and the idea of giving away books for good questions was a good one.
The people in the room seem to know how to program.

Thanks. I hope the difference is in the "good" direction. Somehow I find it difficult to strike the right tone on Usenet.

Regarding this talk, I was so nervous during it that I was (and am after watching the recording) unable to assess the talk's quality. I was mostly on autopilot throughout; for example I seem to vaguely recall at some point there were some people standing in the back, but I have no idea when they appeared.

27.50, "transactional file copy": this example was script-like and kept
as short as possible to fit on a single slide, so in this case I think
using enforce() is OK. But in real programs I suggest that D programmers
use Design by Contract with assert() more and use enforce() less. Among
other things, enforce() kills inlining opportunities and bloats code. In
D.learn I have seen people use enforce() in situations that DbC is
designed for. I think the D community needs to learn a bit more DbC, and
in my opinion the leaders have to lead the way.

I agree with Walter that enforce() is the only right choice there.
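For readers following along, here is a minimal sketch of the distinction being debated, assuming present-day Phobos where enforce() lives in std.exception (in 2010 it was std.contracts); the function names are invented for illustration:

```d
import std.exception : enforce;

// enforce(): validates runtime conditions the caller cannot guarantee
// (user input, file system state). It throws on failure and remains
// active in release builds -- which is why it can't be optimized away.
void copyFile(string from, string to)
{
    enforce(from != to, "source and target must differ");
    // ... perform the copy ...
}

// assert() in a contract: documents an internal invariant the
// programmer guarantees. Compiled out with -release, so it costs
// nothing in production and doesn't block inlining there.
int midpoint(int lo, int hi)
in { assert(lo <= hi); }
do  // spelled `body` in 2010-era D
{
    return lo + (hi - lo) / 2;
}
```

The rule of thumb both sides seem to share: contracts catch bugs in your own code; enforce() rejects bad data from outside. The slide's file-copy example deals with external state, hence enforce().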

42.00: I don't know what Big-O() encapsulation is. I think you explain
it a bit more later, but I lost the meaning.

Sorry. By that I mean an abstraction that uses information hiding to make complexity an implementation detail instead of an interface requirement. Consider e.g. at(int) for a sequence container, which makes no claim about complexity.
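To make the at(int) example concrete, here is a hypothetical sketch (the Sequence interface and both classes are invented for illustration, not Phobos types):

```d
// The interface promises an element at index i -- it says nothing
// about the cost of getting it. That silence is the "encapsulation".
interface Sequence(T)
{
    T at(size_t i);
}

// Array-backed: at() is O(1).
class ArraySeq(T) : Sequence!T
{
    private T[] data;
    this(T[] d) { data = d; }
    T at(size_t i) { return data[i]; }
}

// Singly linked list: same interface, but at() is O(i). A generic
// algorithm written against Sequence that calls at() in a loop
// silently goes from linear to quadratic -- the hidden cost that
// "Big-O encapsulation" sweeps under the rug.
class ListSeq(T) : Sequence!T
{
    private static struct Node { T value; Node* next; }
    private Node* head;
    T at(size_t i)
    {
        auto n = head;
        while (i--) n = n.next;
        return n.value;
    }
}
```

This is why the talk argues complexity should be part of the interface (e.g. encoded in range categories) rather than hidden behind it.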

57.36: The Wikipedia Levenshtein entry can be improved a bit to remove
the random-access requirement, if that hasn't been done already :-) In
practice you often have to pay some performance price for genericity.
So an interesting question is: with the current DMD compiler, how much
slower is the generic Phobos function than an array-based Levenshtein
function when the input is, for example, a couple of ubyte arrays? If
it's only about two or three times slower, it's probably good enough.

The implementation using only forward access is not slower in theory because it never needs to seek to an index. It is a bit slower on ASCII strings because it needs one extra test per character to assess its width, but on UTF strings it saves time by avoiding copying and by working correctly :o).
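The reason forward access suffices can be seen from the dynamic-programming recurrence: both inputs are only ever traversed front to back, row by row. Here is a sketch under that observation (my own single-row formulation, not the Phobos implementation; random access is used only on the internal DP row, never on the inputs):

```d
import std.algorithm.comparison : min;
import std.array : array;
import std.range : iota, walkLength;

// Levenshtein distance over forward ranges: s is consumed once,
// t is re-traversed from the front for each row via .save.
size_t levenshtein(R1, R2)(R1 s, R2 t)
{
    // row[j] holds the distance from the current prefix of s
    // to the first j elements of t.
    auto row = iota(size_t(0), walkLength(t.save) + 1).array;
    size_t i = 1;
    foreach (sc; s)              // forward pass over s
    {
        size_t prevDiag = row[0];
        row[0] = i++;
        size_t j = 1;
        foreach (tc; t.save)     // forward pass over t
        {
            immutable cost = (sc == tc) ? 0 : 1;
            immutable cur = min(row[j] + 1,        // deletion
                                row[j - 1] + 1,    // insertion
                                prevDiag + cost);  // substitution
            prevDiag = row[j];
            row[j] = cur;
            ++j;
        }
    }
    return row[$ - 1];
}
```

Because no indexing into s or t ever happens, a variable-width UTF-8 string decoded on the fly works directly, at the cost of the per-character width test mentioned above.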


Thanks,

Andrei
