Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
Andrej Mitrovic wrote: I've noticed you have "Version Control with Git" listed in your list of books. Did you just buy that recently, or were you secretly planning to switch to Git at the instant someone mentioned it? :p I listed it recently.
Re: How many HOFs in Phobos?
On 2/1/11 7:17 PM, bearophile wrote: But there are other HOFs that may be useful (they are in dlibs1 too): - "nest" (or iterate), to apply one function many times: nest(sin, 0.2, 4) === sin(sin(sin(sin(0.2)))) I'd be glad to include such a function if there were good use cases for it. Fixed-point iteration is interesting in math, but in practice you don't just run it naively; e.g. you run it once and check for a termination condition (which may be quite involved when e.g. doing linear algebra; my dissertation uses a lot of fixed-point iteration). To wit, the sin example sucks. When in your life have you wanted not only to run a few iterations of sin against its own result, but to express that very notion as a standalone function? If at least you had chosen cos, I would have been more sympathetic, because fixed-point iteration is a way of solving cos(x) = x. SICP has, if I remember correctly, a couple of nice examples of iterations for computing pi and sqrt, but those are recurrences, not fixed-point iterations. For those we already have std.range.recurrence, which is more general. Doing fixed point with recurrence is trivial: auto s = recurrence!"sin(a[n-1])"(1.0); The main point is that there are very strong examples for recurrences. - Something similar, that keeps all the intermediate results. It's sometimes named nestList, but in D it may be lazy. I think better examples could be found for that one, but... still tenuous. What's wrong with array, take, and recurrence? You _already_ have in D better abstractions than those you think you want to bring over from functional languages. You have edit distance that laughs Haskell's out the door, you have recurrence that makes iterate look like a useless oddity, you have range refinements that bring you the best of foldr without the cost... -- There is another problem, shown by std.functional.compose.
See the part about D here: http://rosettacode.org/wiki/First-class_functions#D This task asks things like (more details on the rosettacode page):
- Create new functions from preexisting functions at run-time
- Store functions in collections
- Use functions as arguments to other functions
- Use functions as return values of other functions
To do it "well enough" the D implementation doesn't want to use std.functional.compose and defines a more dynamic one, able to use run-time delegates:

private import std.math;
import std.stdio;

T delegate(S) compose(T, U, S)(T delegate(U) f, U delegate(S) g) {
    return (S s) { return f(g(s)); };
}

void main() {
    // wrappers needed as not all built-in real functions
    // have the same signature, e.g. pure/nothrow
    auto sin  = delegate (real x) { return std.math.sin(x); };
    auto asin = delegate (real x) { return std.math.asin(x); };
    auto cos  = delegate (real x) { return std.math.cos(x); };
    auto acos = delegate (real x) { return std.math.acos(x); };
    auto cube = delegate (real x) { return x * x * x; };
    auto cbrt = delegate (real x) { return std.math.cbrt(x); };

    // built-in: sin/cos/asin/acos/cbrt  user: cube
    auto fun = [sin, cos, cube];
    auto inv = [asin, acos, cbrt];
    foreach (i, f; fun)
        writefln("%6.3f", compose(f, inv[i])(0.5));
}

You are able to write a similar program with std.functional.compose too, but it uses tuples instead of arrays, which is less flexible:

import std.stdio, std.typetuple, std.functional;
private import std.math;

void main() {
    // wrappers needed as not all built-in functions
    // have the same signature, e.g. pure/nothrow
    auto sin  = (real x) { return std.math.sin(x); };
    auto asin = (real x) { return std.math.asin(x); };
    auto cos  = (real x) { return std.math.cos(x); };
    auto acos = (real x) { return std.math.acos(x); };
    auto cube = (real x) { return x ^^ 3; };
    auto cbrt = (real x) { return std.math.cbrt(x); };

    alias TypeTuple!(sin, cos, cube) dir;
    alias TypeTuple!(asin, acos, cbrt) inv;
    foreach (i, f; dir)
        writefln("%6.3f", compose!(f, inv[i])(0.5));
}

This questions the design of std.functional.compose. More like it spurs the language to allow better local instantiation. Andrei
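Andrei's recurrence one-liner can be expanded into a small, self-contained sketch. This is an illustration, not Phobos code: the lambda syntax is modern D rather than the string template from the post, and the explicit while-loop termination check is my addition to make his point about fixed-point iteration concrete, using his cos(x) = x example:

```d
import std.stdio, std.math, std.range;

void main() {
    // Fixed-point iteration for cos(x) = x, with an explicit
    // termination condition rather than a fixed iteration count.
    real x = 1.0;
    while (fabs(cos(x) - x) > 1e-12)
        x = cos(x);
    writefln("cos(x) = x at x = %.12f", x);  // roughly 0.739085

    // The same process expressed lazily with std.range.recurrence;
    // the first 5 iterates play the role of nest(cos, 1.0, 4).
    auto s = recurrence!((a, n) => cos(a[n - 1]))(1.0L);
    foreach (v; s.take(5))
        writefln("%.6f", v);
}
```

The point stands either way: the interesting part in practice is the termination condition, which recurrence leaves to the consumer of the range.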
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
On 2/2/11, Andrej Mitrovic wrote: > On 2/2/11, Walter Bright wrote: >> > > ...listed in your list... > Crap.. I just made a 2-dimensional book list by accident. My bad.
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
Andrej Mitrovic wrote: Is this why you've made your own version of make and microemacs for Windows? I honestly can't blame you. :) Microemacs floated around the intarnets for free back in the 80's, and I liked it because it was very small, fast, and customizable. Having an editor that fit in 50k was just the ticket for a floppy based system. Most code editors of the day were many times larger, took forever to load, etc. I wrote my own make because I needed one to sell and so couldn't use someone else's.
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
On 2/2/11, Walter Bright wrote: > I've noticed you have "Version Control with Git" listed in your list of books. Did you just buy that recently, or were you secretly planning to switch to Git at the instant someone mentioned it? :p
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
On 2/1/2011 7:55 PM, Andrej Mitrovic wrote: > On 2/2/11, Walter Bright wrote: >> Andrej Mitrovic wrote: >>> I don't know what to say.. >> >> Git is a Linux program and will never work right on Windows. The problems >> you're >> experiencing are classic ones I find whenever I attempt to use a Linux >> program >> that has been "ported" to Windows. >> > > Yeah, I know what you mean. "Use my app on Windows too, it works! But > you have to install this Linux simulator first, though". > > Is this why you've made your own version of make and microemacs for > Windows? I honestly can't blame you. :) Of course, it forms a nice vicious circle. Without users, there's little incentive to fix and chances are there's fewer users reporting bugs. Sounds.. familiar. :)
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
On 2/2/11, Walter Bright wrote: > Andrej Mitrovic wrote: >> I don't know what to say.. > > Git is a Linux program and will never work right on Windows. The problems > you're > experiencing are classic ones I find whenever I attempt to use a Linux > program > that has been "ported" to Windows. > Yeah, I know what you mean. "Use my app on Windows too, it works! But you have to install this Linux simulator first, though". Is this why you've made your own version of make and microemacs for Windows? I honestly can't blame you. :)
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
Andrej Mitrovic wrote: I don't know what to say.. Git is a Linux program and will never work right on Windows. The problems you're experiencing are classic ones I find whenever I attempt to use a Linux program that has been "ported" to Windows.
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
On 2/1/2011 6:17 PM, Andrej Mitrovic wrote: > Bleh. I tried to use Git to update some of the doc files, but getting > the thing to work will be a miracle. > > git can't find the public keys unless I use msysgit. Great. How > exactly do I cd to D:\ ? > > So I try git-gui. Seems to work fine, I clone the forked repo and make > a few changes. I try to commit, it says I have to update first. So I > do that. *Error: crash crash crash*. I try to close the thing, it just > keeps crashing. CTRL+ALT+DEL time.. > > Okay, I try another GUI package, GitExtensions. I make new > public/private keys and add it to github, I'm about to clone but then > I get this "fatal: The remote end hung up unexpectedly". > > I don't know what to say.. I use cygwin for all my windows work (which I try to keep to a minimum). Works just fine in that environment.
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
Bleh. I tried to use Git to update some of the doc files, but getting the thing to work will be a miracle. git can't find the public keys unless I use msysgit. Great. How exactly do I cd to D:\ ? So I try git-gui. Seems to work fine, I clone the forked repo and make a few changes. I try to commit, it says I have to update first. So I do that. *Error: crash crash crash*. I try to close the thing, it just keeps crashing. CTRL+ALT+DEL time.. Okay, I try another GUI package, GitExtensions. I make new public/private keys and add it to github, I'm about to clone but then I get this "fatal: The remote end hung up unexpectedly". I don't know what to say..
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
Brad Roberts wrote: I.e., essentially negligible. Yeah, and I caught myself worrying about the disk usage from having two clones of the git repository (one for D1, the other for D2).
Re: How many HOFs in Phobos?
Jonathan M Davis: >The issue is that if you want something in Phobos, it _does_ need to be >designed with performance in mind. Anything which isn't efficient needs to >have a very good reason for its existence which balances out its lack of >efficiency. If the Haskell implementation isn't performant enough, then it >doesn't cut it for Phobos, even if it's a good fit for Haskell.< I think you have misunderstood the discussion, or maybe I just don't understand you. My discussion was about HOFs, not about Levenshtein distance (I've shown a fast one, but it's probably not usable for Phobos because of license issues: http://codepad.org/s0ezEojU ). - Andrei: >The fact that foldl and foldr are only one letter apart is a design mistake.< I agree, mostly :-) >But something like foldr that uses head recursion would be indeed rather >dangerous to include.< >std.algorithm.reduce implements foldl, as it should. Simulating foldr on >bidirectional ranges is as easy as reduce(retro(range)). Defining >Haskell-style foldr on forward ranges would be difficult because it needs lazy >evaluation through and through, and is dangerous because people may use it >naively.< Python 2 has only the foldl (its built-in reduce), probably to keep the language simpler & safer to use. Python 3 has moved reduce from the language to the std library. So far I have needed both kinds of folds in D only to translate some small programs from Haskell to D (and in this case I have even once confused the two folds, as you have noted), so probably I will be able to survive without a (better renamed) foldr in Phobos, if you don't want it for "safety" for naive programmers. -- But there are other HOFs that may be useful (they are in dlibs1 too): - "nest" (or iterate), to apply one function many times: nest(sin, 0.2, 4) === sin(sin(sin(sin(0.2)))) - Something similar, that keeps all the intermediate results. It's sometimes named nestList, but in D it may be lazy. -- There is another problem, shown by std.functional.compose.
See the part about D here: http://rosettacode.org/wiki/First-class_functions#D This task asks things like (more details on the rosettacode page):
- Create new functions from preexisting functions at run-time
- Store functions in collections
- Use functions as arguments to other functions
- Use functions as return values of other functions
To do it "well enough" the D implementation doesn't want to use std.functional.compose and defines a more dynamic one, able to use run-time delegates:

private import std.math;
import std.stdio;

T delegate(S) compose(T, U, S)(T delegate(U) f, U delegate(S) g) {
    return (S s) { return f(g(s)); };
}

void main() {
    // wrappers needed as not all built-in real functions
    // have the same signature, e.g. pure/nothrow
    auto sin  = delegate (real x) { return std.math.sin(x); };
    auto asin = delegate (real x) { return std.math.asin(x); };
    auto cos  = delegate (real x) { return std.math.cos(x); };
    auto acos = delegate (real x) { return std.math.acos(x); };
    auto cube = delegate (real x) { return x * x * x; };
    auto cbrt = delegate (real x) { return std.math.cbrt(x); };

    // built-in: sin/cos/asin/acos/cbrt  user: cube
    auto fun = [sin, cos, cube];
    auto inv = [asin, acos, cbrt];
    foreach (i, f; fun)
        writefln("%6.3f", compose(f, inv[i])(0.5));
}

You are able to write a similar program with std.functional.compose too, but it uses tuples instead of arrays, which is less flexible:

import std.stdio, std.typetuple, std.functional;
private import std.math;

void main() {
    // wrappers needed as not all built-in functions
    // have the same signature, e.g. pure/nothrow
    auto sin  = (real x) { return std.math.sin(x); };
    auto asin = (real x) { return std.math.asin(x); };
    auto cos  = (real x) { return std.math.cos(x); };
    auto acos = (real x) { return std.math.acos(x); };
    auto cube = (real x) { return x ^^ 3; };
    auto cbrt = (real x) { return std.math.cbrt(x); };

    alias TypeTuple!(sin, cos, cube) dir;
    alias TypeTuple!(asin, acos, cbrt) inv;
    foreach (i, f; dir)
        writefln("%6.3f", compose!(f, inv[i])(0.5));
}

This questions the design of std.functional.compose. Bye, bearophile
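For concreteness, the two helpers bearophile describes could be sketched roughly like this. The names nest and nestList are his (from dlibs1); the eager implementations below are my illustration of the idea, not dlibs1 or Phobos code:

```d
import std.stdio, std.math;

// Apply f to x n times: nest(f, x, 4) == f(f(f(f(x)))).
T nest(T)(T delegate(T) f, T x, size_t n) {
    foreach (i; 0 .. n)
        x = f(x);
    return x;
}

// Like nest, but keep every intermediate result. This version is
// eager; bearophile suggests a lazy variant would fit D better.
T[] nestList(T)(T delegate(T) f, T x, size_t n) {
    auto result = [x];
    foreach (i; 0 .. n) {
        x = f(x);
        result ~= x;
    }
    return result;
}

void main() {
    auto sinD = delegate (real x) { return sin(x); };
    writefln("%.6f", nest(sinD, 0.2L, 4));   // sin(sin(sin(sin(0.2))))
    writeln(nestList(sinD, 0.2L, 4).length); // 5 values: x plus 4 iterates
}
```

As Andrei notes, both are expressible with recurrence and take, which is part of his argument against adding them.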
Re: monitor.d and critical.d?
> > Please add as a patch to bug 4332. > Cool, I added the attachments! On second thought, this is a bit trickier than I'd thought, since it's not working without additional modifications that I at first thought were unnecessary. Did the C version run any sort of "static constructors" during the loading of the runtime on Windows? For some reason, when I replace the C version with the D version, I am forced to remove the version statement from here:

void _d_criticalInit()
{
    version (Posix)
    {
        _STI_monitor_staticctor();
        _STI_critical_init();
    }
}

in order to run the static constructors, even though it doesn't seem like the C code would've been any different in this regard. Does anyone know why this is needed?
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
On Tue, 1 Feb 2011, Walter Bright wrote: > Bruno Medeiros wrote: > > A more serious issue that I learned (or rather forgotten about before and > > remembered now) is the whole DVCSes keep the whole repository history > > locally aspect, which has important ramifications. If the repository is big, > > although disk space may not be much of an issue, > > I still find myself worrying about disk usage, despite being able to get a 2T > drive these days for under a hundred bucks. Old patterns of thought die hard. For what it's worth, the sizes of the key git dirs on my box: dmd.git == 4.4 - 5.9M (depends on whether the gc has run recently to re-pack new objects), druntime.git == 1.4 - 3.0M, phobos.git == 5.1 - 6.7M. The checked-out copy of each of those is considerably more than the packed full history. The size, inclusive of full history and the checked-out copy, after a make clean:

dmd       15M
druntime   4M
phobos    16M

I.e., essentially negligible. Later, Brad
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
On Tuesday, February 01, 2011 15:07:58 Walter Bright wrote: > Bruno Medeiros wrote: > > A more serious issue that I learned (or rather forgotten about before > > and remembered now) is the whole DVCSes keep the whole repository > > history locally aspect, which has important ramifications. If the > > repository is big, although disk space may not be much of an issue, > > I still find myself worrying about disk usage, despite being able to get a > 2T drive these days for under a hundred bucks. Old patterns of thought die > hard. And some things will likely _always_ make disk usage a concern. Video would be a good example. If you have much video, even with good compression, it's going to take up a lot of space. Granted, there are _lots_ of use cases which just don't take up enough disk space to matter anymore, but you can _always_ find ways to use up disk space. Entertainingly, a fellow I know had a friend who joked that he could always hold all of his data in a shoebox. Originally, it was punch cards. Then it was 5 1/4" floppy disks. Then it was 3 1/2" floppy disks. Then it was CDs. Etc. Storage devices keep getting bigger and bigger, but we keep finding ways to fill them... - Jonathan M Davis
Re: How many HOFs in Phobos?
Am 01.02.2011 22:30, schrieb Andrei Alexandrescu: > On 2/1/11 2:58 PM, Daniel Gibson wrote: >> Am 01.02.2011 21:53, schrieb Jonathan M Davis: >>> On Tuesday 01 February 2011 12:27:32 bearophile wrote: Walter: > Its exponentially bad performance makes it short, not useful. A program with high complexity is not a problem if you run it only on a few very short examples. There is a place to care for performance (like when you design a function for Phobos) and there are places where you care for other things. I suggest to stop focusing only on a fault of a program that was not designed for performance (and, if you want, to start looking at the numerous good things present in Haskell; Haskell language and its implementation contain tens of good ideas). >>> >>> The issue is that if you want something in Phobos, it _does_ need to be >>> designed >>> with performance in mind. Anything which isn't efficient needs to have a >>> very >>> good >>> reason for its existence which balances out its lack of efficiency. If the >>> Haskell >>> implementation isn't performant enough, then it doesn't cut it for Phobos, >>> even >>> if it's a good fit for Haskell. >>> >>> - Jonathan M Davis >> >> Well, he didn't want the slow Levenshtein implementation in Phobos (if I >> understood correctly), but more higher order functions like fold*. These are >> not >> inherently slow and are most probably useful to implement fast functions as >> well ;) > > The fact that foldl and foldr are only one letter apart is a design mistake. > They have very different behavior, yet many functional programmers consider > them > largely interchangeable and are genuinely surprised when they hear the > relative > tradeoffs. > > std.algorithm.reduce implements foldl, as it should. Simulating foldr on > bidirectional ranges is as easy as reduce(retro(range)).
> Defining Haskell-style foldr on forward ranges would be difficult because it needs lazy evaluation > through and through, and is dangerous because people may use it naively. > > For more info see the best answer at > http://stackoverflow.com/questions/3429634/haskell-foldl-vs-foldr-question. > > Andrei Thanks for the link :-) I haven't used Haskell (or any other functional language) in a few years, so I forgot these details (to be honest, I don't think I understood the implications explained in the stackoverflow post back then - I was happy when my Haskell programs did what the exercises demanded). I think that reduce as a non-lazy foldl is what people mostly need/want when they use folding. But I'm not sure whether a lazy foldr (with whatever name to prevent people from using it accidentally) may be useful. I guess it's only useful when a list (or, in D, a range) is returned, i.e. the input list/range isn't really reduced like when just calculating a sum or minimum or whatever. I'm trying to think of use cases for this, but none (that aren't covered by map) come to (my) mind - but this may be just because my brain isn't used to functional programming anymore. Cheers, - Daniel
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
Bruno Medeiros wrote: A more serious issue that I learned (or rather forgotten about before and remembered now) is the whole DVCSes keep the whole repository history locally aspect, which has important ramifications. If the repository is big, although disk space may not be much of an issue, I still find myself worrying about disk usage, despite being able to get a 2T drive these days for under a hundred bucks. Old patterns of thought die hard.
Re: Having fun making tutorials
Andrej Mitrovic wrote: I've just uploaded this page: http://prowiki.org/wiki4d/wiki.cgi?D__Tutorial/CompilingLinkingD Thanks for doing these. Such tutorials are a nice help.
Re: How many HOFs in Phobos?
Jonathan M Davis wrote: The issue is that if you want something in Phobos, it _does_ need to be designed with performance in mind. Yup, because if it isn't, it gets ridicule heaped upon it, and deservedly.
Re: How many HOFs in Phobos?
On Tuesday, February 01, 2011 13:37:44 Andrei Alexandrescu wrote: > On 2/1/11 2:53 PM, Jonathan M Davis wrote: > > On Tuesday 01 February 2011 12:27:32 bearophile wrote: > >> Walter: > >>> Its exponentially bad performance makes it short, not useful. > >> > >> A program with high complexity is not a problem if you run it only on > >> a few very short examples. There is a place to care for performance (like > >> when you design a function for Phobos) and there are places where you > >> care for other things. > >> > >> I suggest to stop focusing only on a fault of a program that was not > >> designed for performance (and, if you want, to start looking at the > >> numerous good things present in Haskell; Haskell language and its > >> implementation contain tens of good ideas). > > > > The issue is that if you want something in Phobos, it _does_ need to be > > designed with performance in mind. Anything which isn't efficient needs > > to have a very good reason for its existence which balances out its lack > > of efficiency. If the Haskell implementation isn't performant enough, > > then it doesn't cut it for Phobos, even if it's a good fit for Haskell. > > > > - Jonathan M Davis > > I think this is a bit much, though probably a good principle to live > by. For example Phobos does include linear search routines that are > "inefficient" - i.e. O(m * n). It also has many abstractions that are > arguably not as efficient as they could be, either at high level, low > level, or both. But something like foldr that uses head recursion would > be indeed rather dangerous to include. Okay, perhaps I said it a bit too strongly, but I think that the gist of what I said is sound. Inefficient algorithms need to bring something to the table that is worth their inefficiency. And generally, if we can reasonably make algorithms more efficient, that's something that we want to do.
That's not to say that we don't have or never will have less efficient algorithms in Phobos, but they're there because they're worth what they bring, and just because an algorithm would be considered reasonable for Haskell does not necessarily mean that it would be considered reasonable for Phobos. - Jonathan M Davis
Having fun making tutorials
I've just uploaded this page: http://prowiki.org/wiki4d/wiki.cgi?D__Tutorial/CompilingLinkingD It's a small guide on using DMD and Optlink, and the usual confusion with linker errors when using the import switch. Still, it's too Windows-specific and it doesn't discuss DLLs. I think they're special enough to warrant a new page, and I might be writing about them soon. I'm no expert though. A few days ago I also wrote a small guide on how a typical D template works from the ground up (in this case unaryFun): http://prowiki.org/wiki4d/wiki.cgi?D__Tutorial/D2Templates I'm trying not to duplicate the effort and re-explain everything that TDPL already explains perfectly. What other areas are newbies typically struggling with when it comes to D? Actually, I could use a good tutorial or two for myself! :)
Re: How many HOFs in Phobos?
On 2/1/11 2:53 PM, Jonathan M Davis wrote: On Tuesday 01 February 2011 12:27:32 bearophile wrote: Walter: Its exponentially bad performance makes it short, not useful. A program with high complexity is not a problem if you run it only on a few very short examples. There is a place to care for performance (like when you design a function for Phobos) and there are places where you care for other things. I suggest to stop focusing only on a fault of a program that was not designed for performance (and, if you want, to start looking at the numerous good things present in Haskell; Haskell language and its implementation contain tens of good ideas). The issue is that if you want something in Phobos, it _does_ need to be designed with performance in mind. Anything which isn't efficient needs to have a very good reason for its existence which balances out its lack of efficiency. If the Haskell implementation isn't performant enough, then it doesn't cut it for Phobos, even if it's a good fit for Haskell. - Jonathan M Davis I think this is a bit much, though probably a good principle to live by. For example Phobos does include linear search routines that are "inefficient" - i.e. O(m * n). It also has many abstractions that are arguably not as efficient as they could be, either at high level, low level, or both. But something like foldr that uses head recursion would be indeed rather dangerous to include. Andrei
Re: How many HOFs in Phobos?
On 2/1/11 2:58 PM, Daniel Gibson wrote: Am 01.02.2011 21:53, schrieb Jonathan M Davis: On Tuesday 01 February 2011 12:27:32 bearophile wrote: Walter: Its exponentially bad performance makes it short, not useful. A program with high complexity is not a problem if you run it only on a few very short examples. There is a place to care for performance (like when you design a function for Phobos) and there are places where you care for other things. I suggest to stop focusing only on a fault of a program that was not designed for performance (and, if you want, to start looking at the numerous good things present in Haskell; Haskell language and its implementation contain tens of good ideas). The issue is that if you want something in Phobos, it _does_ need to be designed with performance in mind. Anything which isn't efficient needs to have a very good reason for its existence which balances out its lack of efficiency. If the Haskell implementation isn't performant enough, then it doesn't cut it for Phobos, even if it's a good fit for Haskell. - Jonathan M Davis Well, he didn't want the slow Levenshtein implementation in Phobos (if I understood correctly), but more higher order functions like fold*. These are not inherently slow and are most probably useful to implement fast functions as well ;) The fact that foldl and foldr are only one letter apart is a design mistake. They have very different behavior, yet many functional programmers consider them largely interchangeable and are genuinely surprised when they hear the relative tradeoffs. std.algorithm.reduce implements foldl, as it should. Simulating foldr on bidirectional ranges is as easy as reduce(retro(range)). Defining Haskell-style foldr on forward ranges would be difficult because it needs lazy evaluation through and through, and is dangerous because people may use it naively. For more info see the best answer at http://stackoverflow.com/questions/3429634/haskell-foldl-vs-foldr-question. Andrei
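The foldl/foldr asymmetry Andrei describes is easiest to see with a non-commutative operation. A small sketch using reduce and retro as he suggests (modern-D lambda syntax; note that reduce over retro(range) processes elements right-to-left but still combines them left-fold style, so it only simulates foldr rather than reproducing Haskell's argument order):

```d
import std.stdio, std.algorithm, std.range;

void main() {
    auto words = ["a", "b", "c"];

    // Left fold: (("a" ~ "b") ~ "c")
    writeln(reduce!((x, y) => x ~ y)(words));         // abc

    // Right-to-left processing via retro: (("c" ~ "b") ~ "a").
    // A true Haskell foldr would instead compute "a" ~ ("b" ~ "c"),
    // keeping the original argument order -- the caveat Andrei raises.
    writeln(reduce!((x, y) => x ~ y)(retro(words)));  // cba
}
```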
Re: monitor.d and critical.d?
> > Hi, > > I was wondering, is there any particular reason why critical.c and monitor.c aren't written in D? > > I've attached the D versions... > Please add as a patch to bug 4332. Cool, I added the attachments! (I have no idea how to use git, or if I have the upload permissions (probably not), so I just uploaded the patches to the bug report.)
Re: How many HOFs in Phobos?
On 2/1/11 2:27 PM, bearophile wrote: Walter: Its exponentially bad performance makes it short, not useful. A program with high complexity is not a problem if you run it only on a few very short examples. There is a place to care for performance (like when you design a function for Phobos) and there are places where you care for other things. I suggest to stop focusing only on a fault of a program that was not designed for performance (and, if you want, to start looking at the numerous good things present in Haskell; Haskell language and its implementation contain tens of good ideas). Bye, bearophile I agree in spirit but only weakly. Cute and complexity-corrupt examples such as the exponential Fibonacci, the linear-space factorial, and the quicksort that is not quick and not quicksort have long misrepresented what's good about functional programming. Andrei
Re: How many HOFs in Phobos?
Am 01.02.2011 21:53, schrieb Jonathan M Davis: > On Tuesday 01 February 2011 12:27:32 bearophile wrote: >> Walter: >>> Its exponentially bad performance makes it short, not useful. >> >> A program with high complexity is not a problem if you run it only on a few >> very short examples. There is a place to care for performance (like when >> you design a function for Phobos) and there are places where you care for >> other things. >> >> I suggest to stop focusing only on a fault of a program that was not >> designed for performance (and, if you want, to start looking at the numerous >> good things present in Haskell; Haskell language and its implementation >> contain tens of good ideas). > > The issue is that if you want something in Phobos, it _does_ need to be > designed > with performance in mind. Anything which isn't efficient needs to have a very > good > reason for its existence which balances out its lack of efficiency. If the > Haskell > implementation isn't performant enough, then it doesn't cut it for Phobos, > even > if it's a good fit for Haskell. > > - Jonathan M Davis Well, he didn't want the slow Levenshtein implementation in Phobos (if I understood correctly), but more higher order functions like fold*. These are not inherently slow and are most probably useful to implement fast functions as well ;)
Re: How many HOFs in Phobos?
On Tuesday 01 February 2011 12:27:32 bearophile wrote: > Walter: > > Its exponentially bad performance makes it short, not useful. > > A program with high complexity is not a problem if you run it only on a few > very short examples. There is a place to care for performance (like when > you design a function for Phobos) and there are places where you care for > other things. > > I suggest to stop focusing only on a fault of a program that was not > designed for performance (and, if you want, to start looking at the numerous > good things present in Haskell; Haskell language and its implementation > contain tens of good ideas). The issue is that if you want something in Phobos, it _does_ need to be designed with performance in mind. Anything which isn't efficient needs to have a very good reason for its existence which balances out its lack of efficiency. If the Haskell implementation isn't performant enough, then it doesn't cut it for Phobos, even if it's a good fit for Haskell. - Jonathan M Davis
Re: std.unittests [updated] for review
On Tuesday 01 February 2011 11:32:08 Andrei Alexandrescu wrote: > On 2/1/11 11:29 AM, Jonathan M Davis wrote: > > On Tuesday 01 February 2011 09:12:16 Andrei Alexandrescu wrote: > >> On 2/1/11 10:51 AM, Michel Fortin wrote: > >>> On 2011-02-01 11:31:54 -0500, Andrei Alexandrescu > >>> said: > >>> TypeInfo holds a pointer to the toString function, so if the compiler > >>> passes the two operands as D-style variadic arguments to the assert > >>> handler, the assert handler can use toString to print them. The > >>> operator should be passed as a string. > >> > >> In that case problem solved. Don, if you arrange things such that this > >> user-level code: > >> > >> int a = 42; > >> double b = 3.14; > >> assert(a <= b, "Something odd happened"); > >> > >> ultimately calls this runtime function: > >> > >> assertCmpFailed("<=", "42", "3.14", "Something odd happened"); > >> > >> I promise I'll discuss with Sean and implement what it takes in druntime > >> to get that completed. > >> > >> We need to finalize that before Feb 7 though because on that date the > >> vote for Jonathan's library closes. If you do implement that, probably > >> we'll need to reject the library in the current form and propose back an > >> amended version. > > > > You do need to remember to take into account though that expressions can > > be quite a bit more complex than a <= b, and you still want to be able to > > print out something useful. > > > > It's been discussed a bit already, but IIRC what appeared to be the best > > solution was that when an expression evaluated to false, you take the > > expression being evaluated and stop evaluating it when all that was left is > > operators or functions which resulted in bool. Then the value of those > > operands would be printed. e.g. > > > > assert(min(5 + 2, 4) < 2 && max(5, 7) < 10); > > > > would print out the values > > > > 4, 2, 7, and 10. > > It's all about the top-level AST node.
In this case I think it's > reasonable to print "false && __unevaluated__" or simply "false". If the > user wants the separate values, they can always write: > > assert(min(5 + 2, 4) < 2); > assert(max(5, 7) < 10); > > > and > > > > assert(canFind("hello world", "goodbye")); > > > > would print out the values > > > > "hello world" and "goodbye". > > I guess again taking the function call as the top node, assert could > print the arguments passed into the function. Well, what the best way to handle it is, and what exactly is possible, I don't know. However, it needs to print out the actual values of the expression and be reasonably close to what the programmer would be looking for in terms of which values would be printed. The simple examples are obvious. The complex ones aren't as obvious. And it may be that we can't get it to quite print out what your average programmer would want, but if we're close - particularly if we get the simple expressions right - then it may be good enough. Whatever assert can be made to do, it's unlikely to be as flexible as assertPred, but if we can get it to do most of what assertPred does, then the remainder could be broken out into other functions for that purpose, or maybe those functions don't need to make it into Phobos. Regardless, I think that we definitely need better error reporting on unit test failure. assertPred does that. But if the built-in assert could do that, then we'd have the best assert out of any language that I've ever seen. - Jonathan M Davis
Re: How many HOFs in Phobos?
Walter: > Its exponentially bad performance makes it short, not useful. A program with high complexity is not a problem if you run it only on few very short examples. There is a place to care for performance (like when you design a function for Phobos) and there are places where you care for other things. I suggest to stop focusing only on a fault of a program that was not designed for performance, and, if you want, to start looking at the numerous good things present in Haskell. The Haskell language and its implementation contain tens of good ideas. Bye, bearophile
Re: Decision on container design
On Tue, 01 Feb 2011 14:26:26 -0500, Simon Buerger wrote: On 01.02.2011 18:08, Steven Schveighoffer wrote: swap isn't the problem. foreach(s; myVectorSet) { // if s is by value, it must be copied for each iteration in the loop } Just to note: the "correct" solution for the last problem is foreach(const ref s; myVectorSet) which is working in current D. In a more value-based language you may even want to default to "const ref" for foreach-loop-values, and even for function-parameters. I suggested that a while ago, but it wasn't liked much for D, for good reasons. This is not a "solution". I cannot enforce that someone uses foreach with ref semantics. Unless the type is a reference type, like a class. Note that const ref isn't necessarily what you want; you might want to mutate the elements of the vector, meaning you really want ref. The whole point is, it's too easy to get it wrong, and the wrong thing looks innocuous and does not look like it will perform horribly. Compare this to someone who actually *wants* value semantics when iterating a reference type (which should be very rare): foreach(s; myVectorSet) { auto setIAmGoingToUse = s.dup; // clear intentions: "yes, I want to do an expensive copy" } -Steve
Re: How much time you spend daily?
== Quote from Nick Sabalausky (a@a.a)'s article > "Gary Whatmore" wrote in message > news:ii970e$1nis$1...@digitalmars.com... > > Recently Bruno M. wrote: > > > >> I may be spending too much time on the NG (especially for someone who > >> doesn't skip the 8 hours of sleep) > > > > A quick look at my daily routines revealed that I spend 7 hours studying > > the dmd and phobos diffs, Debian, Ubuntu, and Arch linux packages status, > > bug reports and comments, planet d, running bearophile's benchmarks, > > dsource & project news, all reddit articles about D, all slides and posts > > by Walter in various forums. > > > > This is all too exciting. If I didn't go to work, there would be even more > > to learn. The downside is, this leaves no time to really contribute. > > Problem #2 is I'm not very good at coming up with ideas on how to improve > > D or the tools. I just spend all time reading, spreading the word, and > > manipulating reddit votes. I hope I could help D and use D some day. Any > > thoughts? > DDMD could use more activity: > http://www.dsource.org/projects/ddmd > LLVM needs exception support on Windows. Does LLVM still need 64bit calling convention support?
Re: How much time you spend daily?
== Quote from Nick Sabalausky (a@a.a)'s article > "Gary Whatmore" wrote in message > news:ii970e$1nis$1...@digitalmars.com... > > Recently Bruno M. wrote: > > > >> I may be spending too much time on the NG (especially for someone who > >> doesn't skip the 8 hours of sleep) > > > > A quick look at my daily routines revealed that I spend 7 hours studying > > the dmd and phobos diffs, Debian, Ubuntu, and Arch linux packages status, > > bug reports and comments, planet d, running bearophile's benchmarks, > > dsource & project news, all reddit articles about D, all slides and posts > > by Walter in various forums. > > > > This is all too exciting. If I didn't go to work, there would be even more > > to learn. The downside is, this leaves no time to really contribute. > > Problem #2 is I'm not very good at coming up with ideas on how to improve > > D or the tools. I just spend all time reading, spreading the word, and > > manipulating reddit votes. I hope I could help D and use D some day. Any > > thoughts? > DDMD could use more activity: > http://www.dsource.org/projects/ddmd > LLVM needs exception support on Windows. > I *think* LDC and GDC need more work on their D2 versions. The Druntime and Phobos teams have already done 70% of the work. AFAIK, missing D2 features are: - TLS symbols: Need to be more than just stub variables, but still just about usable as they are. - DbC: In/Out Contract Inheritance not implemented. Rest is either ironing out the corner cases of codegen, or working on some fancy GCC addition.
Re: How many HOFs in Phobos?
On 2/1/11 12:34 PM, Walter Bright wrote: bearophile wrote: The Haskell implementation doesn't scale. I was quite aware that the Haskell version is designed for being short, not fast. Its exponentially bad performance makes it short, not useful. I'm not sure whether it's exponential, polynomial greater than quadratic, or simply quadratic (as it should be) with large inefficiencies attached. Maybe a Haskell expert could clarify that. Andrei
Re: How much time you spend daily?
"Gary Whatmore" wrote in message news:ii970e$1nis$1...@digitalmars.com... > Recently Bruno M. wrote: > >> I may be spending too much time on the NG (especially for someone who >> doesn't skip the 8 hours of sleep) > > A quick look at my daily routines revealed that I spend 7 hours studying > the dmd and phobos diffs, Debian, Ubuntu, and Arch linux packages status, > bug reports and comments, planet d, running bearophile's benchmarks, > dsource & project news, all reddit articles about D, all slides and posts > by Walter in various forums. > > This is all too exciting. If I didn't go to work, there would be even more > to learn. The downside is, this leaves no time to really contribute. > Problem #2 is I'm not very good at coming up with ideas on how to improve > D or the tools. I just spend all time reading, spreading the word, and > manipulating reddit votes. I hope I could help D and use D some day. Any > thoughts? DDMD could use more activity: http://www.dsource.org/projects/ddmd LLVM needs exception support on Windows. I *think* LDC and GDC need more work on their D2 versions.
Re: How much time you spend daily?
On 2/1/11 11:57 AM, Gary Whatmore wrote: Recently Bruno M. wrote: I may be spending too much time on the NG (especially for someone who doesn't skip the 8 hours of sleep) A quick look at my daily routines revealed that I spend 7 hours studying the dmd and phobos diffs, Debian, Ubuntu, and Arch linux packages status, bug reports and comments, planet d, running bearophile's benchmarks, dsource& project news, all reddit articles about D, all slides and posts by Walter in various forums. This is all too exciting. If I didn't go to work, there would be even more to learn. The downside is, this leaves no time to really contribute. Problem #2 is I'm not very good at coming up with ideas on how to improve D or the tools. I just spend all time reading, spreading the word, and manipulating reddit votes. I hope I could help D and use D some day. Any thoughts? Go to a park. :-P Seriously, some years later you will regret it. If you work at least 6 hours a day and you spend 7 hours extra in the computer, it's too much for your eyes, brain and body...
Re: How much time you spend daily?
On 2/1/11 12:19 PM, Jacob Carlborg wrote: On 2011-02-01 15:57, Gary Whatmore wrote: Recently Bruno M. wrote: I may be spending too much time on the NG (especially for someone who doesn't skip the 8 hours of sleep) A quick look at my daily routines revealed that I spend 7 hours studying the dmd and phobos diffs, Debian, Ubuntu, and Arch linux packages status, bug reports and comments, planet d, running bearophile's benchmarks, dsource& project news, all reddit articles about D, all slides and posts by Walter in various forums. This is all too exciting. If I didn't go to work, there would be even more to learn. The downside is, this leaves no time to really contribute. Problem #2 is I'm not very good at coming up with ideas on how to improve D or the tools. I just spend all time reading, spreading the word, and manipulating reddit votes. I hope I could help D and use D some day. Any thoughts? Help to port DWT to D2. Or update to newer versions of SWT. http://dsource.org/projects/dwt And in short bits of free time, go through bugzilla and propose pull requests that fix issues. Andrei
Re: Decision on container design
On 01.02.2011 20:01, Michel Fortin wrote: On 2011-02-01 12:07:55 -0500, Andrei Alexandrescu said: With this, the question becomes a matter of choosing the right default: do we want values most of the time and occasional references, or vice versa? I think most of the time you need references, as witnessed by the many '&'s out there in code working on STL containers. What exactly is "most of the time"? In C++, you pass containers by '&' for function parameters; using '&' elsewhere is rare. One thing I proposed some time ago to address this problem (and to which no one replied) was this: ref struct Container { ... } // new "ref struct" concept void func(Container c) { // c is implicitly a "ref Container" } Container a; // by value func(a); // implicitly passed by ref Containers would be stored by value, but always passed by ref in function parameters. That would not be "per-value" as most understand it. Container globalC; func(globalC); void func(Container paramC) { paramC.add(42); // modifies globalC in reference-semantics // leaves globalC as it was in value-semantics } Your idea would only be truly value-based if you defaulted not just to "ref" but to "const ref", because then the function would not be able to alter globalC. But making parameters default-const was not considered the right way for D. - Krox
Re: std.unittests [updated] for review
On 2/1/11 11:29 AM, Jonathan M Davis wrote: On Tuesday 01 February 2011 09:12:16 Andrei Alexandrescu wrote: On 2/1/11 10:51 AM, Michel Fortin wrote: On 2011-02-01 11:31:54 -0500, Andrei Alexandrescu said: TypeInfo holds a pointer to the toString function, so if the compiler passes the two operands as D-style variadic arguments to the assert handler, the assert handler can use toString to print them. The operator should be passed as a string. In that case problem solved. Don, if you arrange things such that this user-level code: int a = 42; double b = 3.14; assert(a <= b, "Something odd happened"); ultimately calls this runtime function: assertCmpFailed("<=", "42", "3.14", "Something odd happened"); I promise I'll discuss with Sean and implement what it takes in druntime to get that completed. We need to finalize that before Feb 7 though because on that date the vote for Jonathan's library closes. If you do implement that, probably we'll need to reject the library in the current form and propose back an amended version. You do need to remember to take into account though that expressions can be quite a bit more complex than a <= b, and you still want to be able to print out something useful. It's been discussed a bit already, but IIRC what appeared to be the best solution was that when an expression evaluated to false, you take the expression to be evaluated and stop evaluating it when all that was left is operators or functions which resulted in bool. Then the value of those operands would be printed. e.g. assert(min(5 + 2, 4) < 2 && max(5, 7) < 10); would print out the values 4, 2, 7, and 10. It's all about the top-level AST node. In this case I think it's reasonable to print "false && __unevaluated__" or simply "false". If the user wants the separate values, they can always write: assert(min(5 + 2, 4) < 2); assert(max(5, 7) < 10); and assert(canFind("hello world", "goodbye")); would print out the values "hello world" and "goodbye". 
I guess again taking the function call as the top node, assert could print the arguments passed into the function. Andrei
Re: Decision on container design
On 01.02.2011 18:08, Steven Schveighoffer wrote: On Tue, 01 Feb 2011 11:44:36 -0500, Michel Fortin wrote: On 2011-02-01 11:12:13 -0500, Andrei Alexandrescu said: On 1/28/11 8:12 PM, Michel Fortin wrote: On 2011-01-28 20:10:06 -0500, "Denis Koroskin" <2kor...@gmail.com> said: Unfortunately, this design has big issues: void fill(Appender appender) { appender.put("hello"); appender.put("world"); } void test() { Appender appender; fill(appender); // Appender is supposed to have reference semantics assert(appender.length != 0); // fails! } Asserting above fails because at the time you pass the appender object to the fill method it isn't initialized yet (lazy initialization). As such, a null is passed, creating an instance at first appending, but the result isn't seen by the caller. That's indeed a problem. I don't think it's a fatal flaw however, given that the idiom already exists in AAs. That said, the nice thing about my proposal is that you can easily reuse the Impl to build a new container wrapper with the semantics you like with no loss of efficiency. As for the case of Appender... personally in the case above I'd be tempted to use Appender.Impl directly (value semantics) and make fill take a 'ref'. There's no point in having an extra heap allocation, especially if you're calling test() in a loop or if there's a good chance fill() has nothing to append to it. I've been thinking of making Appender.Impl public or at least its own type. A stack-based appender makes a lot of sense when you are using it temporarily to build an array. But array-based containers really are in a separate class from node-based containers. It's tempting to conflate the two because they are both 'containers', but arrays allow many more optimizations/features that node-based containers simply can't do. Yep, yep, I found myself wrestling with the same issues. All good points. On one hand containers are a target for optimization because many will use them. 
On the other hand you'd want to have reasonably simple and idiomatic code in the container implementation because you want people to understand them easily and also to write their own. I thought for a while of a layered approach in which you'd have both the value and the sealed reference version of a container... it's just too much aggravation. But are you not just pushing the aggravation elsewhere? If I need a by value container for some reason (performance or semantics) I'll have to write my own, and likely others will write their own too. foo(container.dup) ; // value semantics I'm sure some template guru can make a wrapper type for this for the rare occasions that you need it. We can work on solving the "auto-initialization" issue (where a nested container 'just works'), I think there are ways to do it. If that still doesn't help for your issues, then writing your own may be the only valid option. Using classes for containers is just marginally better than making them by-value structs: you can use 'new' with a by-value struct if you want it to behave as a class-like by-reference container: struct Container { ... } auto c = new Container(); The only noticeable difference from a class container is that now c is now a Container*. And doesn't get cleaned up by the GC properly. Plus, each member call must check if the container is 'instantiated', since we can have no default ctors. Yes, it's a trade-off, and I think by far class-based containers win for the common case. Personally, I'm really concerned by the case where you have a container of containers. Class semantics make things really complicated as you always have to initialize everything in the container explicitly; value semantics makes things semantically easier but quite inefficient as moving elements inside of the outermost container implies copying the containers. 
Making containers auto-initialize themselves on first use solves the case where containers are references-types; making containers capable of using move semantics solves the problem for value-type containers. Neither values nor references are perfect indeed. For example, someone mentioned, hey, in STL I write set< vector > and it Just Works(tm). On the other hand, if you swap the two names it still seems to work but it's awfully inefficient (something that may trip even experienced developers). Isn't that solved by C++0x, using move semantics in swap? swap isn't the problem. foreach(s; myVectorSet) { // if s is by value, it must be copied for each iteration in the loop } > > -Steve Just to note: the "correct" solution for the last problem is foreach(const ref s; myVectorSet) which is working in current D. In a more value-based language you may even want to default to "const ref" for foreach-loop-values, and even for function-parameters. I suggested that a while ago, but wasn't liked much for D, for good reasons. - Krox
Re: Decision on container design
On 2011-02-01 12:07:55 -0500, Andrei Alexandrescu said: With this, the question becomes a matter of choosing the right default: do we want values most of the time and occasional references, or vice versa? I think most of the time you need references, as witnessed by the many '&'s out there in code working on STL containers. What exactly is "most of the time"? In C++, you pass containers by '&' for function parameters; using '&' elsewhere is rare. One thing I proposed some time ago to address this problem (and to which no one replied) was this: ref struct Container { ... } // new "ref struct" concept void func(Container c) { // c is implicitly a "ref Container" } Container a; // by value func(a); // implicitly passed by ref Containers would be stored by value, but always passed by ref in function parameters. -- Michel Fortin michel.for...@michelf.com http://michelf.com/
Re: C# Interop
On Tue, 01 Feb 2011 13:33:20 -0500, Rainer Schuetze wrote: Robert Jacques wrote: On Tue, 01 Feb 2011 03:05:13 -0500, Rainer Schuetze wrote: XP TLS support with dynamically loaded DLLs is fixed for some time now with a workaround implemented in druntime. Also, DLLs can be used in multi-threading environments. Yes, I pointed out in another thread that D loading D DLLs can work around this issue, but the original post was about calling a D DLL from another language, specifically C#, where the limitation in XP still exists. (Of course, you might be able to port the work around to C#. Hmm...) The workaround is not about D loading a D DLL. Visual D lives happily in the C++/C# world of Visual Studio, even on XP. It's the magic inside dll_process_attach() that sets up TLS for existing threads and patches the loader structures to make XP think the DLL was loaded at startup (where implicit TLS works). The downside is that the DLL cannot be unloaded, though. Thanks, again. Though the pros and cons of this should be listed in the docs somewhere. > I've listed some example code from my project below: [snip] This DLLMain code is a bit outdated (is it D1?), the current proposed version is here: http://www.digitalmars.com/d/2.0/dll.html Thanks. It was D2, but it was forked a while ago. Given that the recommended way of doing this might change in the future, a string mixin in core.dll_helper might be appropriate. I don't like mixins too much, but a standard_DllMain that you can forward to from DllMain might be a good idea to include into the runtime library. Yes, on second thought, a standard_DllMain is the better solution.
Re: How many HOFs in Phobos?
bearophile wrote: The Haskell implementation doesn't scale. I was quite aware that the Haskell version is designed for being short, not fast. Its exponentially bad performance makes it short, not useful.
Re: C# Interop
Robert Jacques wrote: On Tue, 01 Feb 2011 03:05:13 -0500, Rainer Schuetze wrote: XP TLS support with dynamically loaded DLLs is fixed for some time now with a workaround implemented in druntime. Also, DLLs can be used in multi-threading environments. Yes, I pointed out in another thread that D loading D DLLs can work around this issue, but the original post was about calling a D DLL from another language, specifically C#, where the limitation in XP still exists. (Of course, you might be able to port the work around to C#. Hmm...) The workaround is not about D loading a D DLL. Visual D lives happily in the C++/C# world of Visual Studio, even on XP. It's the magic inside dll_process_attach() that sets up TLS for existing threads and patches the loader structures to make XP think the DLL was loaded at startup (where implicit TLS works). The downside is that the DLL cannot be unloaded, though. > I've listed some example code from my project below: [snip] This DLLMain code is a bit outdated (is it D1?), the current proposed version is here: http://www.digitalmars.com/d/2.0/dll.html Thanks. It was D2, but it was forked a while ago. Given that the recommended way of doing this might change in the future, a string mixin in core.dll_helper might be appropriate. I don't like mixins too much, but a standard_DllMain that you can forward to from DllMain might be a good idea to include into the runtime library.
Re: How much time you spend daily?
On 2011-02-01 15:57, Gary Whatmore wrote: Recently Bruno M. wrote: I may be spending too much time on the NG (especially for someone who doesn't skip the 8 hours of sleep) A quick look at my daily routines revealed that I spend 7 hours studying the dmd and phobos diffs, Debian, Ubuntu, and Arch linux packages status, bug reports and comments, planet d, running bearophile's benchmarks, dsource& project news, all reddit articles about D, all slides and posts by Walter in various forums. This is all too exciting. If I didn't go to work, there would be even more to learn. The downside is, this leaves no time to really contribute. Problem #2 is I'm not very good at coming up with ideas on how to improve D or the tools. I just spend all time reading, spreading the word, and manipulating reddit votes. I hope I could help D and use D some day. Any thoughts? Help to port DWT to D2. Or update to newer versions of SWT. http://dsource.org/projects/dwt -- /Jacob Carlborg
Re: Bus error w/combined writeln(int) and uniform
On 2011-02-01 10:16, Magnus Lie Hetland wrote: On 2011-01-31 17:00:57 +0100, Jacob Carlborg said: On 2011-01-31 10:18, Lars T. Kyllingstad wrote: [snip] I'm not sure if it's related: http://d.puremagic.com/issues/show_bug.cgi?id=4854 -Lars Can it be this problem: http://d.puremagic.com/issues/show_bug.cgi?id=4854 ? That's the same one, no? (Or did you mean some other problem? :-) No, I meant the same one; sorry for the additional post and confusion. -- /Jacob Carlborg
Re: Decision on container design
Andrei Alexandrescu wrote: I do something similar with RefCounted. There are problems - you need to know in advance which functions you can implement on a null container (empty and length are obvious candidates, but there could be others). Static functions can safely be called. Hence, this template:

import std.traits;

template isStaticFunc(T, string fn) {
    enum isStaticFunc = is(typeof({
        mixin("alias T."~fn~" func;");
        ParameterTypeTuple!func args;
        mixin("T."~fn~"(args);");
    }));
}

Now, Ref can look like this:

struct Ref(Impl) {
    private Impl* _impl;

    @property ref Impl impl() {
        return *(_impl = (_impl ? _impl : new Impl));
    }

    //alias impl this;
    // Apparently, alias this takes precedence over opDispatch, so
    // using both doesn't work. Well, unless you put opDispatch in Impl.

    auto opDispatch(string name, T...)(T args) if (isStaticFunc!(Impl, name)) {
        // Dispatch through the type rather than through impl(), so a
        // null _impl is passed along instead of being allocated eagerly.
        mixin("return Impl."~name~"(_impl, args);");
    }
}

struct ExampleImpl {
    static int length(ExampleImpl* that) {
        return that ? that.actualLength : 0;
    }
    int actualLength() { return 42; }
}

import std.stdio;

void main() {
    Ref!ExampleImpl a;
    writeln(a.length); // prints 0: nothing has been allocated yet
}

-- Simen
Re: Decision on container design
Andrei: > A better solution is to define something like > > auto c = new Classify!Container; > > which transforms a value into a class object. > > With this, the question becomes a matter of choosing the right default: > do we want values most of the time and occasional references, or vice > versa? I think most of the time you need references, as witnessed by the > many '&'s out there in code working on STL containers. I agree that most times a reference is better. This brings back the need for a very good (efficient, syntactically readable, RAII-safe, and, even if not fully safe, able to spot the most common errors) way to allocate class instances in-place, on the stack or inside another struct/class. Bye, bearophile
Re: std.unittests [updated] for review
Andrei: > Don, if you arrange things such that this user-level code: > > int a = 42; > double b = 3.14; > assert(a <= b, "Something odd happened"); > > ultimately calls this runtime function: > > assertCmpFailed("<=", "42", "3.14", "Something odd happened"); > > I promise I'll discuss with Sean and implement what it takes in druntime > to get that completed. This makes me happier. Bye, bearophile
Re: std.unittests [updated] for review
On Tuesday 01 February 2011 09:12:16 Andrei Alexandrescu wrote: > On 2/1/11 10:51 AM, Michel Fortin wrote: > > On 2011-02-01 11:31:54 -0500, Andrei Alexandrescu > > said: > > TypeInfo holds a pointer to the toString function, so if the compiler > > passes the two operands as D-style variadic arguments to the assert > > handler, the assert handler can use toString to print them. The operator > > should be passed as a string. > > In that case problem solved. Don, if you arrange things such that this > user-level code: > > int a = 42; > double b = 3.14; > assert(a <= b, "Something odd happened"); > > ultimately calls this runtime function: > > assertCmpFailed("<=", "42", "3.14", "Something odd happened"); > > I promise I'll discuss with Sean and implement what it takes in druntime > to get that completed. > > We need to finalize that before Feb 7 though because on that date the > vote for Jonathan's library closes. If you do implement that, probably > we'll need to reject the library in the current form and propose back an > amended version. You do need to remember to take into account though that expressions can be quite a bit more complex than a <= b, and you still want to be able to print out something useful. It's been discussed a bit already, but IIRC what appeared to be the best solution was that when an expression evaluated to false, you take the expression to be evaluated and stop evaluating it when all that was left is operators or functions which resulted in bool. Then the value of those operands would be printed. e.g. assert(min(5 + 2, 4) < 2 && max(5, 7) < 10); would print out the values 4, 2, 7, and 10. and assert(canFind("hello world", "goodbye")); would print out the values "hello world" and "goodbye". Regardless, trying to do it in assert complicates things a fair bit. assertPred gives you more control, because you can choose how to break up the expression and arguments, and then those arguments are the ones which get printed. 
assert has to be much smarter about figuring out what it should print. Done correctly, it would be fantastic for assert to be that smart, but I wouldn't expect it to be easy. - Jonathan M Davis
Re: Decision on container design
On 02/01/2011 05:00 PM, Andrei Alexandrescu wrote: Regarding the general issue that someone makes an informal proposal (either here, as a DIP, or on the Phobos mailing list), followed by a thundering silence: I believe that a good technique is to formalize the proposal review process, which has been a homerun for Boost. The disadvantage of that is that almost without exception this is very taxing to library submitters. This means the submitter must put a lot of thought and a lot of work into motivating, polishing, and documenting an artifact without any guarantee that it would lead to inclusion in the target library. I've seen very, VERY elaborate Boost submissions fail - literally months of work gone to waste. An alternative, or a complementary approach, may be to delegate part of your responsibility. In this case, I'm thinking of a pool of people whose "mission" would be to visibly show interest (or lack thereof) in proposals made on the mailing list -- whatever their advancement, formality, or code quality. This would provide a valuable indicator while removing some load from your shoulders, I guess. Such people may be chosen by co-optation. Note this approach is not exclusive of a formal & heavy adoption process like Boost's; instead it can be a complementary or preliminary way of judging interest in proposals. A similar principle may indeed be used for other purposes: specification & evolution of D-the-language, implementation & bug removal in the reference compiler, ... Denis -- _ vita es estrany spir.wikidot.com
Re: Decision on container design
On Tuesday 01 February 2011 09:07:55 Andrei Alexandrescu wrote: > On 2/1/11 10:44 AM, Michel Fortin wrote: > > On 2011-02-01 11:12:13 -0500, Andrei Alexandrescu > > > > said: > >> On 1/28/11 8:12 PM, Michel Fortin wrote: > >>> On 2011-01-28 20:10:06 -0500, "Denis Koroskin" <2kor...@gmail.com> said: > Unfortunately, this design has big issues: > > > void fill(Appender appender) > { > appender.put("hello"); > appender.put("world"); > } > > void test() > { > Appender appender; > fill(appender); // Appender is supposed to have reference semantics > assert(appender.length != 0); // fails! > } > > Asserting above fails because at the time you pass appender object to > the fill method it isn't initialized yet (lazy initialization). As > such, a null is passed, creating an instance at first appending, but > the result isn't seen to the caller. > >>> > >>> That's indeed a problem. I don't think it's a fatal flaw however, given > >>> that the idiom already exists in AAs. > >>> > >>> That said, the nice thing about my proposal is that you can easily > >>> reuse the Impl to create a new container to build a new container > >>> wrapper with the semantics you like with no loss of efficiency. > >>> > >>> As for the case of Appender... personally in the case above I'd be > >>> tempted to use Appender.Impl directly (value semantics) and make fill > >>> take a 'ref'. There's no point in having an extra heap allocation, > >>> especially if you're calling test() in a loop or if there's a good > >>> chance fill() has nothing to append to it. > >>> > >>> That's the issue with containers. The optimal semantics always change > >>> depending on the use case. > >> > >> Yep, yep, I found myself wrestling with the same issues. All good > >> points. On one hand containers are a target for optimization because > >> many will use them. 
On the other hand you'd want to have reasonably > >> simple and idiomatic code in the container implementation because you > >> want people to understand them easily and also to write their own. I > >> thought for a while of a layered approach in which you'd have both the > >> value and the sealed reference version of a container... it's just too > >> much aggravation. > > > > But are you not just pushing the aggravation elsewhere? If I need a by > > value container for some reason (performance or semantics) I'll have to > > write my own, and likely others will write their own too. > > If semantics are the primary concern, you could (and in fact Phobos > could) provide a Value!C template that automatically calls dup in > this(this) etc. > > For performance I agree there is stuff that class containers leave on > the table. > > > Using classes for containers is just marginally better than making them > > by-value structs: you can use 'new' with a by-value struct if you want > > it to behave as a class-like by-reference container: > > > > struct Container { > > ... > > } > > > > auto c = new Container(); > > > > The only noticeable difference from a class container is that now c is > > now a Container*. > > Well one problem now is that if you have a Container* you don't know > whether it's dynamically allocated or the address of some > stack-allocated object. This is pretty big; a major issue that I believe > C++ has is that you can seldom reason modularly about functions because > C++ makes it impossible to represent reference semantics with > local/remote/shared/no ownership without resorting to convention. > > A better solution is to define something like > > auto c = new Classify!Container; > > which transforms a value into a class object. > > With this, the question becomes a matter of choosing the right default: > do we want values most of the time and occasional references, or vice > versa? 
I think most of the time you need references, as witnessed by the > many '&'s out there in code working on STL containers. Java implements containers as classes (not that it really has any other choice), so all containers in Java have reference semantics, and I've _never_ found that to be a problem. I do think that there are rare cases where it makes sense for a container to be a value type, but I really do think that it's a rare case for the average programmer. On the other hand, having containers be value types in C++ is _frequently_ a problem. - Jonathan M Davis
Re: std.unittests [updated] for review
On 2/1/11 10:51 AM, Michel Fortin wrote: On 2011-02-01 11:31:54 -0500, Andrei Alexandrescu said: TypeInfo holds a pointer to the toString function, so if the compiler passes the two operands as D-style variadic arguments to the assert handler, the assert handler can use toString to print them. The operator should be passed as a string. In that case problem solved. Don, if you arrange things such that this user-level code: int a = 42; double b = 3.14; assert(a <= b, "Something odd happened"); ultimately calls this runtime function: assertCmpFailed("<=", "42", "3.14", "Something odd happened"); I promise I'll discuss with Sean and implement what it takes in druntime to get that completed. We need to finalize that before Feb 7 though because on that date the vote for Jonathan's library closes. If you do implement that, probably we'll need to reject the library in the current form and propose back an amended version. Andrei
Re: std.unittests [updated] for review
On Tuesday 01 February 2011 09:05:18 Jens Mueller wrote: > Michel Fortin wrote: > > On 2011-02-01 10:34:26 -0500, Andrei Alexandrescu > > > > said: > > >On 2/1/11 9:21 AM, Don wrote: > > >>Jonathan M Davis wrote: > > >>>Do you really find > > >>> > > >>>assertPred!"=="(min(5, 7), 5); > > >>> > > >>>to be all that harder to understand than > > >>> > > >>>assert(min(5, 7) == 5); > > >> > > >>I do. *Much* harder. Factor of two, at least. > > >>In absolute terms, not so much, because it was the original assert is > > >>very easy to understand. But the relative factor matters enormously. > > >>Much as comparing: > > >>a.add(b); > > >>a += b; > > >> > > >>And I think this is a very important issue. > > >> > > >> >I don't see how these functions could be anything but an improvement. > > >> > > > >> > But even if they get into Phobos, you obviously don't have to use > > >> > them. > > >> > > >>This is not true. Including them in Phobos gives a legitimacy to that > > >>style of programming. It's a role model. > > >> > > >>Including stuff like this could give D a reputation for lack of > > >>readability. My belief is that right now, the #1 risk for Phobos is > > >>that it becomes too clever and inaccessible. > > >> > > >>IMHO, something simple which has any appearance of being complicated, > > >>needs a VERY strong justification. > > > > > >Does this count as a vote against the submission? > > > > To me this is a compelling argument against. That and the fact that > > it can't really mimic the true behaviour of assert in some > > situation. assertPred won't work correctly for 'in' contracts with > > inheritance, where the compiler generate the assertion code > > differently. > > > > In my view, the correct way to improve assertion error messages is > > to improve how the compiler handle assertions (it should output > > messages like it does for static asserts). > > > > assertPred might be fine as a stopgap solution, but personally I'd > > not make it part of the public API. 
We can't tell people to use > > assert() everywhere and then tell them they should use > > assertPred!op() if they want a useful error message except for 'in' > > contracts of virtual functions; that'd just be too confusing. > > Actually we say use assertPred/etc. when writing unittests, don't we? To > me that is not complicated. There used to be a version(unittest). I > don't know for what reason it got removed. If it was still there an else > could help a bit. Like > version(unittest) { > } > else { > static assert(false, "assertPred/etc. are only available within > unittests."); } > Now every code compiled without -unittest and using assertPred/etc will > fail to compile. But of course with -unittest it still seems that it > will work in contracts. I mean at least we can make the problem explicit > in most cases. Not sure how likely it is that people will distribute > software that they only compiled with unittests. The version(unittest) block got removed, because various people thought that it should be usable in normal code. It wouldn't be hard to put a warning on assertPred about contract inheritance if that were necessary, though I have to wonder whether contract inheritance is implemented well if it has to use special asserts like that. It makes it harder to use helper functions with that sort of restriction, and I frequently create helper functions to check invariants and the like (though usually, they return a bool, and then they can be used with either assert or enforce). - Jonathan M Davis
Re: Decision on container design
On Tue, 01 Feb 2011 11:44:36 -0500, Michel Fortin wrote: On 2011-02-01 11:12:13 -0500, Andrei Alexandrescu said: On 1/28/11 8:12 PM, Michel Fortin wrote: On 2011-01-28 20:10:06 -0500, "Denis Koroskin" <2kor...@gmail.com> said: Unfortunately, this design has big issues: void fill(Appender appender) { appender.put("hello"); appender.put("world"); } void test() { Appender appender; fill(appender); // Appender is supposed to have reference semantics assert(appender.length != 0); // fails! } Asserting above fails because at the time you pass appender object to the fill method it isn't initialized yet (lazy initialization). As such, a null is passed, creating an instance at first appending, but the result isn't seen to the caller. That's indeed a problem. I don't think it's a fatal flaw however, given that the idiom already exists in AAs. That said, the nice thing about my proposal is that you can easily reuse the Impl to create a new container to build a new container wrapper with the semantics you like with no loss of efficiency. As for the case of Appender... personally in the case above I'd be tempted to use Appender.Impl directly (value semantics) and make fill take a 'ref'. There's no point in having an extra heap allocation, especially if you're calling test() in a loop or if there's a good chance fill() has nothing to append to it. I've been thinking of making Appender.Impl public or at least its own type. A stack-based appender makes a lot of sense when you are using it temporarily to build an array. But array-based containers really are in a separate class from node-based containers. It's tempting to conflate the two because they are both 'containers', but arrays allow many more optimizations/features that node-based containers simply can't do. Yep, yep, I found myself wrestling with the same issues. All good points. On one hand containers are a target for optimization because many will use them. 
On the other hand you'd want to have reasonably simple and idiomatic code in the container implementation because you want people to understand them easily and also to write their own. I thought for a while of a layered approach in which you'd have both the value and the sealed reference version of a container... it's just too much aggravation. But are you not just pushing the aggravation elsewhere? If I need a by value container for some reason (performance or semantics) I'll have to write my own, and likely others will write their own too. foo(container.dup) ; // value semantics I'm sure some template guru can make a wrapper type for this for the rare occasions that you need it. We can work on solving the "auto-initialization" issue (where a nested container 'just works'), I think there are ways to do it. If that still doesn't help for your issues, then writing your own may be the only valid option. Using classes for containers is just marginally better than making them by-value structs: you can use 'new' with a by-value struct if you want it to behave as a class-like by-reference container: struct Container { ... } auto c = new Container(); The only noticeable difference from a class container is that now c is now a Container*. And doesn't get cleaned up by the GC properly. Plus, each member call must check if the container is 'instantiated', since we can have no default ctors. Yes, it's a trade-off, and I think by far class-based containers win for the common case. Personally, I'm really concerned by the case where you have a container of containers. Class semantics make things really complicated as you always have to initialize everything in the container explicitly; value semantics makes things semantically easier but quite inefficient as moving elements inside of the outermost container implies copying the containers. 
Making containers auto-initialize themselves on first use solves the case where containers are references-types; making containers capable of using move semantics solves the problem for value-type containers. Neither values nor references are perfect indeed. For example, someone mentioned, hey, in STL I write set< vector > and it Just Works(tm). On the other hand, if you swap the two names it still seems to work but it's awfully inefficient (something that may trip even experienced developers). Isn't that solved by C++0x, using move semantics in swap? swap isn't the problem. foreach(s; myVectorSet) { // if s is by value, it must be copied for each iteration in the loop } -Steve
Re: Decision on container design
On 2/1/11 10:44 AM, Michel Fortin wrote: On 2011-02-01 11:12:13 -0500, Andrei Alexandrescu said: On 1/28/11 8:12 PM, Michel Fortin wrote: On 2011-01-28 20:10:06 -0500, "Denis Koroskin" <2kor...@gmail.com> said: Unfortunately, this design has big issues: void fill(Appender appender) { appender.put("hello"); appender.put("world"); } void test() { Appender appender; fill(appender); // Appender is supposed to have reference semantics assert(appender.length != 0); // fails! } Asserting above fails because at the time you pass appender object to the fill method it isn't initialized yet (lazy initialization). As such, a null is passed, creating an instance at first appending, but the result isn't seen to the caller. That's indeed a problem. I don't think it's a fatal flaw however, given that the idiom already exists in AAs. That said, the nice thing about my proposal is that you can easily reuse the Impl to create a new container to build a new container wrapper with the semantics you like with no loss of efficiency. As for the case of Appender... personally in the case above I'd be tempted to use Appender.Impl directly (value semantics) and make fill take a 'ref'. There's no point in having an extra heap allocation, especially if you're calling test() in a loop or if there's a good chance fill() has nothing to append to it. That's the issue with containers. The optimal semantics always change depending on the use case. Yep, yep, I found myself wrestling with the same issues. All good points. On one hand containers are a target for optimization because many will use them. On the other hand you'd want to have reasonably simple and idiomatic code in the container implementation because you want people to understand them easily and also to write their own. I thought for a while of a layered approach in which you'd have both the value and the sealed reference version of a container... it's just too much aggravation. But are you not just pushing the aggravation elsewhere? 
If I need a by value container for some reason (performance or semantics) I'll have to write my own, and likely others will write their own too. If semantics are the primary concern, you could (and in fact Phobos could) provide a Value!C template that automatically calls dup in this(this) etc. For performance I agree there is stuff that class containers leave on the table. Using classes for containers is just marginally better than making them by-value structs: you can use 'new' with a by-value struct if you want it to behave as a class-like by-reference container: struct Container { ... } auto c = new Container(); The only noticeable difference from a class container is that now c is now a Container*. Well one problem now is that if you have a Container* you don't know whether it's dynamically allocated or the address of some stack-allocated object. This is pretty big; a major issue that I believe C++ has is that you can seldom reason modularly about functions because C++ makes it impossible to represent reference semantics with local/remote/shared/no ownership without resorting to convention. A better solution is to define something like auto c = new Classify!Container; which transforms a value into a class object. With this, the question becomes a matter of choosing the right default: do we want values most of the time and occasional references, or vice versa? I think most of the time you need references, as witnessed by the many '&'s out there in code working on STL containers. Personally, I'm really concerned by the case where you have a container of containers. Class semantics make things really complicated as you always have to initialize everything in the container explicitly; value semantics makes things semantically easier but quite inefficient as moving elements inside of the outermost container implies copying the containers. 
Making containers auto-initialize themselves on first use solves the case where containers are references-types; making containers capable of using move semantics solves the problem for value-type containers. Neither values nor references are perfect indeed. For example, someone mentioned, hey, in STL I write set< vector > and it Just Works(tm). On the other hand, if you swap the two names it still seems to work but it's awfully inefficient (something that may trip even experienced developers). Isn't that solved by C++0x, using move semantics in swap? This particular incarnation yes, but that doesn't automatically fix user code that forgets the cost of copying. But that took a large language change. My point was that values by default is not automatically a good choice. Andrei
Re: std.unittests [updated] for review
Michel Fortin wrote: > On 2011-02-01 10:34:26 -0500, Andrei Alexandrescu > said: > > >On 2/1/11 9:21 AM, Don wrote: > >>Jonathan M Davis wrote: > >>>Do you really find > >>> > >>>assertPred!"=="(min(5, 7), 5); > >>> > >>>to be all that harder to understand than > >>> > >>>assert(min(5, 7) == 5); > >> > >>I do. *Much* harder. Factor of two, at least. > >>In absolute terms, not so much, because it was the original assert is > >>very easy to understand. But the relative factor matters enormously. > >>Much as comparing: > >>a.add(b); > >>a += b; > >> > >>And I think this is a very important issue. > >> > >> >I don't see how these functions could be anything but an improvement. > >> > But even if they get into Phobos, you obviously don't have to use them. > >> > >>This is not true. Including them in Phobos gives a legitimacy to that > >>style of programming. It's a role model. > >> > >>Including stuff like this could give D a reputation for lack of > >>readability. My belief is that right now, the #1 risk for Phobos is that > >>it becomes too clever and inaccessible. > >> > >>IMHO, something simple which has any appearance of being complicated, > >>needs a VERY strong justification. > > > >Does this count as a vote against the submission? > > To me this is a compelling argument against. That and the fact that > it can't really mimic the true behaviour of assert in some > situation. assertPred won't work correctly for 'in' contracts with > inheritance, where the compiler generate the assertion code > differently. > > In my view, the correct way to improve assertion error messages is > to improve how the compiler handle assertions (it should output > messages like it does for static asserts). > > assertPred might be fine as a stopgap solution, but personally I'd > not make it part of the public API. 
We can't tell people to use > assert() everywhere and then tell them they should use > assertPred!op() if they want a useful error message except for 'in' > contracts of virtual functions; that'd just be too confusing. Actually we say use assertPred/etc. when writing unittests, don't we? To me that is not complicated. There used to be a version(unittest). I don't know for what reason it got removed. If it were still there, an else clause could help a bit. Like version(unittest) { } else { static assert(false, "assertPred/etc. are only available within unittests."); } Now, any code compiled without -unittest and using assertPred/etc. will fail to compile. But of course with -unittest it still seems that it will work in contracts. I mean at least we can make the problem explicit in most cases. Not sure how likely it is that people will distribute software that they only compiled with unittests. Jens
Re: std.unittests [updated] for review
On 2011-02-01 11:31:54 -0500, Andrei Alexandrescu said: On 2/1/11 9:21 AM, Don wrote: Jonathan M Davis wrote: On Sunday 30 January 2011 05:28:36 SHOO wrote: To be frank, I don't think that such a helper is necessary. I think these helpers will harm intuitive readability of unittest code. For unittest code, it is necessary to be able to understand easily even if without the document. Do you really find assertPred!"=="(min(5, 7), 5); to be all that harder to understand than assert(min(5, 7) == 5); I do. *Much* harder. Factor of two, at least. In absolute terms, not so much, because it was the original assert is very easy to understand. But the relative factor matters enormously. Much as comparing: a.add(b); a += b; And I think this is a very important issue. >I don't see how these functions could be anything but an improvement. > But even if they get into Phobos, you obviously don't have to use them. This is not true. Including them in Phobos gives a legitimacy to that style of programming. It's a role model. Including stuff like this could give D a reputation for lack of readability. My belief is that right now, the #1 risk for Phobos is that it becomes too clever and inaccessible. IMHO, something simple which has any appearance of being complicated, needs a VERY strong justification. One more thought. Don, you are in the unique position to (a) be strongly opinionated about this and (b) have insider knowledge about the compiler. You could, therefore, modify the definition of assert to automatically rewrite failed calls of the form assert(expr == expr), assert(expr < expr) etc. etc. as discussed in this thread. The rewritten forms would give the runtime the compared values to do whatever with. There is the sensitive issue of converting values of arbitrary types to strings at the runtime level, but we may be able to find a good solution to that. 
TypeInfo holds a pointer to the toString function, so if the compiler passes the two operands as D-style variadic arguments to the assert handler, the assert handler can use toString to print them. The operator should be passed as a string. -- Michel Fortin michel.for...@michelf.com http://michelf.com/
Re: How much time you spend daily?
Iain Buclaw Wrote: > The Haiku person in me says to instead install Haiku. ;) Did you try it?
Re: Decision on container design
On 2011-02-01 11:12:13 -0500, Andrei Alexandrescu said: On 1/28/11 8:12 PM, Michel Fortin wrote: On 2011-01-28 20:10:06 -0500, "Denis Koroskin" <2kor...@gmail.com> said: Unfortunately, this design has big issues: void fill(Appender appender) { appender.put("hello"); appender.put("world"); } void test() { Appender appender; fill(appender); // Appender is supposed to have reference semantics assert(appender.length != 0); // fails! } Asserting above fails because at the time you pass appender object to the fill method it isn't initialized yet (lazy initialization). As such, a null is passed, creating an instance at first appending, but the result isn't seen to the caller. That's indeed a problem. I don't think it's a fatal flaw however, given that the idiom already exists in AAs. That said, the nice thing about my proposal is that you can easily reuse the Impl to create a new container to build a new container wrapper with the semantics you like with no loss of efficiency. As for the case of Appender... personally in the case above I'd be tempted to use Appender.Impl directly (value semantics) and make fill take a 'ref'. There's no point in having an extra heap allocation, especially if you're calling test() in a loop or if there's a good chance fill() has nothing to append to it. That's the issue with containers. The optimal semantics always change depending on the use case. Yep, yep, I found myself wrestling with the same issues. All good points. On one hand containers are a target for optimization because many will use them. On the other hand you'd want to have reasonably simple and idiomatic code in the container implementation because you want people to understand them easily and also to write their own. I thought for a while of a layered approach in which you'd have both the value and the sealed reference version of a container... it's just too much aggravation. But are you not just pushing the aggravation elsewhere? 
If I need a by value container for some reason (performance or semantics) I'll have to write my own, and likely others will write their own too. Using classes for containers is just marginally better than making them by-value structs: you can use 'new' with a by-value struct if you want it to behave as a class-like by-reference container: struct Container { ... } auto c = new Container(); The only noticeable difference from a class container is that c is now a Container*. Personally, I'm really concerned by the case where you have a container of containers. Class semantics make things really complicated as you always have to initialize everything in the container explicitly; value semantics makes things semantically easier but quite inefficient as moving elements inside of the outermost container implies copying the containers. Making containers auto-initialize themselves on first use solves the case where containers are reference types; making containers capable of using move semantics solves the problem for value-type containers. Neither values nor references are perfect indeed. For example, someone mentioned, hey, in STL I write set< vector > and it Just Works(tm). On the other hand, if you swap the two names it still seems to work but it's awfully inefficient (something that may trip even experienced developers). Isn't that solved by C++0x, using move semantics in swap? -- Michel Fortin michel.for...@michelf.com http://michelf.com/
Re: (Was: On 80 columns should (not) be enough for everyone)
Adam Ruppe Wrote: > Steven Schveighoffer wrote: > > It does help, but I was kind of hoping for something that shows the > > structure. > > Those relationships are in the HTML too try it now: > http://arsdnet.net/d-web-site/std_algorithm.html > > (I know it needs some work still, I'm just sick of Javascript after > spending 20 minutes tracking down a bug caused by me using the > same variable name twice! Gah! And wow do I miss foreach.) var foo = [bar, baz]; foo.forEach(function (elem) { elem.doSomething(); }); Available since JavaScript 1.6
Re: On 80 columns should (not) be enough for everyone
Russel Winder Wrote: > Just because anyone over 50 (like me) has worsening eyesight doesn't > mean they can't work quite happily with 110 character lines using 8pt > fonts. I like 110 character lines in smaller fonts, and I like 2 space > indents. And proportional fonts -- Ocean Sans MT rules -- why all this > monospace font obsession (*). I use Verdana 15pt (monospaced fonts don't really scale to this extent and usually don't have characters beyond ASCII). Pixels are small and displays are big nowadays; my editor can accommodate 150 columns in windowed mode.
Re: (Was: On 80 columns should (not) be enough for everyone)
On Mon, 31 Jan 2011 18:08:53 -0500, Adam Ruppe wrote: Steven Schveighoffer wrote: It does help, but I was kind of hoping for something that shows the structure. Those relationships are in the HTML too try it now: http://arsdnet.net/d-web-site/std_algorithm.html (I know it needs some work still, I'm just sick of Javascript after spending 20 minutes tracking down a bug caused by me using the same variable name twice! Gah! And wow do I miss foreach.) Yes, it's in the right direction, but it does need work. BTW, foreach is available, but you can't use it on arrays (HAHAHAHA!). It's one of the reasons I use objects in JS most of the time instead of arrays: for (var key in obj) { var elem = obj[key]; /* use elem */ } Of course, I'd only recommend this on objects you use purely as arrays. What I mean by that is: no methods or prototype additions, because those will also be iterated. https://developer.mozilla.org/en/JavaScript/Reference/Statements/for...in -Steve
Re: std.unittests [updated] for review
On 2/1/11 9:21 AM, Don wrote: Jonathan M Davis wrote: On Sunday 30 January 2011 05:28:36 SHOO wrote: To be frank, I don't think that such a helper is necessary. I think these helpers will harm intuitive readability of unittest code. For unittest code, it is necessary to be able to understand easily even if without the document. Do you really find assertPred!"=="(min(5, 7), 5); to be all that harder to understand than assert(min(5, 7) == 5); I do. *Much* harder. Factor of two, at least. In absolute terms, not so much, because the original assert is very easy to understand. But the relative factor matters enormously. Much as comparing: a.add(b); a += b; And I think this is a very important issue. >I don't see how these functions could be anything but an improvement. > But even if they get into Phobos, you obviously don't have to use them. This is not true. Including them in Phobos gives a legitimacy to that style of programming. It's a role model. Including stuff like this could give D a reputation for lack of readability. My belief is that right now, the #1 risk for Phobos is that it becomes too clever and inaccessible. IMHO, something simple which has any appearance of being complicated, needs a VERY strong justification. One more thought. Don, you are in the unique position to (a) be strongly opinionated about this and (b) have insider knowledge about the compiler. You could, therefore, modify the definition of assert to automatically rewrite failed calls of the form assert(expr == expr), assert(expr < expr) etc. etc. as discussed in this thread. The rewritten forms would give the runtime the compared values to do whatever with. There is the sensitive issue of converting values of arbitrary types to strings at the runtime level, but we may be able to find a good solution to that. Andrei
Re: std.unittests [updated] for review
On 2/1/11 9:21 AM, Don wrote: Including stuff like this could give D a reputation for lack of readability. My belief is that right now, the #1 risk for Phobos is that it becomes too clever and inaccessible. I think this is also an argument in favor of making containers straight classes. Andrei
Re: std.unittests [updated] for review
On 2011-02-01 10:34:26 -0500, Andrei Alexandrescu said: On 2/1/11 9:21 AM, Don wrote: Jonathan M Davis wrote: Do you really find assertPred!"=="(min(5, 7), 5); to be all that harder to understand than assert(min(5, 7) == 5); I do. *Much* harder. Factor of two, at least. In absolute terms, not so much, because the original assert is very easy to understand. But the relative factor matters enormously. Much as comparing: a.add(b); a += b; And I think this is a very important issue. >I don't see how these functions could be anything but an improvement. > But even if they get into Phobos, you obviously don't have to use them. This is not true. Including them in Phobos gives a legitimacy to that style of programming. It's a role model. Including stuff like this could give D a reputation for lack of readability. My belief is that right now, the #1 risk for Phobos is that it becomes too clever and inaccessible. IMHO, something simple which has any appearance of being complicated, needs a VERY strong justification. Does this count as a vote against the submission? To me this is a compelling argument against. That and the fact that it can't really mimic the true behaviour of assert in some situations. assertPred won't work correctly for 'in' contracts with inheritance, where the compiler generates the assertion code differently. In my view, the correct way to improve assertion error messages is to improve how the compiler handles assertions (it should output messages like it does for static asserts). assertPred might be fine as a stopgap solution, but personally I'd not make it part of the public API. We can't tell people to use assert() everywhere and then tell them they should use assertPred!op() if they want a useful error message except for 'in' contracts of virtual functions; that'd just be too confusing. -- Michel Fortin michel.for...@michelf.com http://michelf.com/
Re: Decision on container design
On 1/28/11 8:12 PM, Michel Fortin wrote: On 2011-01-28 20:10:06 -0500, "Denis Koroskin" <2kor...@gmail.com> said: Unfortunately, this design has big issues: void fill(Appender appender) { appender.put("hello"); appender.put("world"); } void test() { Appender appender; fill(appender); // Appender is supposed to have reference semantics assert(appender.length != 0); // fails! } Asserting above fails because at the time you pass appender object to the fill method it isn't initialized yet (lazy initialization). As such, a null is passed, creating an instance at first appending, but the result isn't seen to the caller. That's indeed a problem. I don't think it's a fatal flaw however, given that the idiom already exists in AAs. That said, the nice thing about my proposal is that you can easily reuse the Impl to create a new container to build a new container wrapper with the semantics you like with no loss of efficiency. As for the case of Appender... personally in the case above I'd be tempted to use Appender.Impl directly (value semantics) and make fill take a 'ref'. There's no point in having an extra heap allocation, especially if you're calling test() in a loop or if there's a good chance fill() has nothing to append to it. That's the issue with containers. The optimal semantics always change depending on the use case. Yep, yep, I found myself wrestling with the same issues. All good points. On one hand containers are a target for optimization because many will use them. On the other hand you'd want to have reasonably simple and idiomatic code in the container implementation because you want people to understand them easily and also to write their own. I thought for a while of a layered approach in which you'd have both the value and the sealed reference version of a container... it's just too much aggravation. An explicit initialization is needed to work around this design issue. 
The worst thing is that in many cases it would work fine (you might have already initialized it indirectly), but sometimes you get an unexpected result. I got hit by this in the past, and it wasn't easy to trace down. As such, I strongly believe containers either need to have copy semantics or be classes. However, copy semantics contradicts the "cheap copy ctor" idiom because you need to copy all the elements from the source container. Personally, I'm really concerned by the case where you have a container of containers. Class semantics make things really complicated as you always have to initialize everything in the container explicitly; value semantics makes things semantically easier but quite inefficient, as moving elements inside the outermost container implies copying the containers. Making containers auto-initialize themselves on first use solves the case where containers are reference types; making containers capable of using move semantics solves the problem for value-type containers. Indeed, neither values nor references are perfect. For example, someone mentioned: hey, in STL I write set< vector > and it Just Works(tm). On the other hand, if you swap the two names it still seems to work but it's awfully inefficient (something that may trip up even experienced developers). Andrei
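The pitfall and the 'ref' workaround discussed in this sub-thread can be sketched as follows. This is a hypothetical illustration of a lazily allocated, reference-style wrapper (LazyAppender and its members are invented names for illustration, not the actual Phobos Appender):

```d
// Sketch of a container whose payload is allocated lazily on first use.
struct LazyAppender
{
    private static struct Data { string[] items; }
    private Data* data;               // null until first put() (lazy init)

    void put(string s)
    {
        if (data is null) data = new Data;  // allocation invisible to other copies
        data.items ~= s;
    }

    @property size_t length() const
    {
        return data is null ? 0 : data.items.length;  // null shortcut
    }
}

void fill(LazyAppender app)        // by value: copies the (still null) pointer
{
    app.put("hello");
    app.put("world");
}

void fillRef(ref LazyAppender app) // by ref: the caller sees the allocation
{
    app.put("hello");
    app.put("world");
}

void main()
{
    LazyAppender a;
    fill(a);
    assert(a.length == 0);   // the copy allocated its own payload -- the pitfall

    LazyAppender b;
    fillRef(b);
    assert(b.length == 2);   // passing by ref sidesteps the lazy-init problem
}
```

The same effect can be had by explicitly initializing the wrapper before handing it out, which is the workaround mentioned above.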
Re: std.unittests [updated] for review
On Tuesday 01 February 2011 07:21:54 Don wrote: > Jonathan M Davis wrote: > > On Sunday 30 January 2011 05:28:36 SHOO wrote: > >> To be frank, I don't think that such a helper is necessary. > >> I think these helpers will harm intuitive readability of unittest code. > >> For unittest code, it is necessary to be able to understand easily even > >> if without the document. > > > > Do you really find > > > > assertPred!"=="(min(5, 7), 5); > > > > to be all that harder to understand than > > > > assert(min(5, 7) == 5); > > I do. *Much* harder. Factor of two, at least. > In absolute terms, not so much, because the original assert is > very easy to understand. But the relative factor matters enormously. > Much as comparing: > a.add(b); > a += b; > > And I think this is a very important issue. Well, it's quite common for unit testing frameworks to have stuff like assertEqual, assertNotEqual, assertLessThan, etc. assertPred!"==" isn't all that different, except it embeds the actual operator in it via a template argument, so if anything, it's arguably easier to read than those. And honestly, I think that the added debugging benefit of assertPred far outweighs any potential readability issues. assert as it stands is quite poor in comparison. > >I don't see how these functions could be anything but an improvement. > > > > But even if they get into Phobos, you obviously don't have to use them. > > This is not true. Including them in Phobos gives legitimacy to that > style of programming. It's a role model. > > Including stuff like this could give D a reputation for lack of > readability. My belief is that right now, the #1 risk for Phobos is that > it becomes too clever and inaccessible. > > IMHO, something simple which has any appearance of being complicated > needs a VERY strong justification. Valid point. 
However, I do think that the added usefulness of assertPred far outweighs any readability issues. And honestly, I don't think that it's all that hard to read - especially since it's very typical for unit testing frameworks to have functions such as assertEqual and assertGreaterThan. - Jonathan M Davis
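For readers who haven't seen the proposal under review, a minimal sketch of what an assertPred-style helper could look like follows. This is a hypothetical reimplementation for illustration only, not the code being reviewed:

```d
import core.exception : AssertError;
import std.functional : binaryFun;
import std.string : format;

// Evaluates "lhs op rhs"; on failure, throws an AssertError that
// reports both operand values, unlike a plain assert.
void assertPred(string op, L, R)(L lhs, R rhs,
                                 string file = __FILE__, size_t line = __LINE__)
{
    if (!binaryFun!("a " ~ op ~ " b")(lhs, rhs))
        throw new AssertError(
            format(`assertPred!"%s" failed: lhs = %s, rhs = %s`, op, lhs, rhs),
            file, line);
}

unittest
{
    import std.algorithm : min;
    assertPred!"=="(min(5, 7), 5);   // passes silently
    // assertPred!"=="(min(5, 7), 6) would fail and, unlike plain assert,
    // name the operator and print the values of both operands.
}
```

The debugging benefit argued for above comes from the last point: the failure message carries the operand values, which assert alone does not provide.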
Re: Decision on container design
On 1/29/11 3:36 PM, dsimcha wrote: I've uploaded the documentation to http://cis.jhu.edu/~dsimcha/randaasealed.html and mentioned it again on the mailing list. The documentation is pretty sparse because interface-wise it's just a standard hash table. More generally, though, are we still interested in sealed/ref counted containers? Sorry for being slow in continuing this thread. Regarding the general issue that someone makes an informal proposal (either here, as a DIP, or on the Phobos mailing list), followed by a thundering silence: I believe that a good technique is to formalize the proposal review process, which has been a homerun for Boost. The disadvantage of that is that almost without exception this is very taxing to library submitters. This means the submitter must put a lot of thought and a lot of work into motivating, polishing, and documenting an artifact without any guarantee that it would lead to inclusion in the target library. I've seen very, VERY elaborate Boost submissions fail - literally months of work gone to waste. I'm not sure how to save people from doing work up front in hope of an uncertain outcome in the future. I do know what does _not_ work: the take it or leave it approach: "Hey, I have this code for abstraction XYZ that I extracted from a project of mine and I think it may be of general interest. It's at http://site.com/path/to/code.{d,html}. It needs polishing here and there, it's largely undocumented, but I'm sure the ideas shine through. Eh?" The doc at http://cis.jhu.edu/~dsimcha/randaasealed.html is somewhere in between. It is clear you have a good understanding of sealing and hash containers, but let me ask you this - if you wanted to sell this to someone, what would you do? Probably you'd show some relevant benchmarks putting the built-in hashes to shame. Maybe you'd have some good examples - yes, we know it's a hash, but it doesn't hurt to see some code pulp over there. 
Maybe you'd explain sealing and discuss its relative advantages and disadvantages (which have not yet been documented anywhere - a great opportunity). Maybe you'd even show some numbers showing how sealing does as well/better/worse than reference leaking. This is a good amount of upfront work for little promise. Again, I don't know yet how to optimize for minimizing that. What I did see works on Boost is a request for interest in the form of a discussion (usually with NO source, only USAGE examples) asking if there are people who are interested in such a notion. In this particular case, you'd need numbers to make a strong case which means that code must be already written. For something like e.g. "how about a Finite State Automaton library?" perhaps upfront code wouldn't be necessary for gauging interest. Andrei
Re: Decision on container design
On 1/29/11 5:01 AM, Simen kjaeraas wrote: Tomek Sowiński wrote: Michel Fortin napisał: > Is there anything implementation specific in the outer struct that provides > ref semantics to Impl? If not, Container could be generic, parametrized by > Impl type. You could provide an implementation-specific version of some functions as an optimization. For instance there is no need to create the Impl when asking for the length: if the pointer is null, the length is zero. Typically, const functions can be implemented in the outward container with a shortcut checking for null. I think the reference struct can still be orthogonal to the container.

    struct Ref(Impl) {
        private Impl* _impl;
        ref Impl impl() @property {
            if (!_impl) _impl = new Impl;
            return *_impl;
        }
        static if (hasLength!Impl) {
            auto length() @property {
                return _impl ? _impl.length : 0;
            }
        }
        alias impl this;
    }

Now, other functions may also exploit such a shortcut. How do you plan to implement that? Actually, by adopting another idiom, it can be done:

    struct Ref(Impl) {
        private Impl* _impl;
        @property ref Impl impl() {
            return *(_impl = (_impl ? _impl : new Impl));
        }
        alias impl this;
        auto opDispatch(string name, T...)(T args) {
            // Call the static Impl function, passing the (possibly null)
            // pointer so it can implement the null shortcut itself.
            mixin("return Impl." ~ name ~ "(_impl, args);");
        }
    }

    struct ExampleImpl {
        static int length(ExampleImpl* that) {
            return that ? that.actualLength() : 0;
        }
        int actualLength() { return 42; }
    }

Of course, I did not say it was a good idiom. :p I do something similar with RefCounted. There are problems - you need to know in advance which functions you can implement on a null container (empty and length are obvious candidates, but there could be others). Andrei
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
Bruno Medeiros Wrote: > On 29/01/2011 10:02, "Jérôme M. Berger" wrote: > > Michel Fortin wrote: > >> On 2011-01-28 11:29:49 -0500, Bruno Medeiros > >> said: > >> > >>> I've also been mulling over whether to try out and switch away from > >>> Subversion to a DVCS, but never went ahead cause I've also been > >>> undecided about Git vs. Mercurial. So this whole discussion here in > >>> the NG has been helpful, even though I rarely use branches, if at all. > >>> > >>> However, there is an important issue for me that has not been > >>> mentioned ever, I wonder if other people also find it relevant. It > >>> annoys me a lot in Subversion, and basically it's the aspect where if > >>> you delete, rename, or copy a folder under version control in a SVN > >>> working copy, without using the SVN commands, there is a high > >>> likelihood your working copy will break! It's so annoying, especially > >>> since sometimes no amount of svn revert, cleanup, unlock, override and > >>> update, etc. will fix it. I just had one recently where I had to > >>> delete and re-checkout the whole project because it was that broken. > >>> Other situations also seem to cause this, even when using SVN tooling > >>> (like partially updating from a commit that deletes or moves > >>> directories, or something like that). It's just so brittle. > >>> I think it may be a consequence of the design aspect of SVN where each > >>> subfolder of a working copy is a working copy as well (and each > >>> subfolder of a repository is a repository as well). > >>> > >>> Anyways, I hope Mercurial and Git are better at this, I'm definitely > >>> going to try them out with regards to this. > >> > >> Git doesn't care how you move your files around. It tracks files by their > >> content. If you rename a file and most of the content stays the same, > >> git will see it as a rename. If most of the file has changed, it'll see > >> it as a new file (with the old one deleted). 
There is 'git mv', but it's > >> basically just a shortcut for moving the file, doing 'git rm' on the old > >> path and 'git add' on the new path. > >> > >> I don't know about Mercurial. > >> > > Mercurial can record renamed or copied files after the fact (simply > > pass the -A option to "hg cp" or "hg mv"). It also has the > > "addremove" command which will automatically remove any missing > > files and add any unknown non-ignored files. Addremove can detect > > renamed files if they are similar enough to the old file (the > > similarity level is configurable) but it will not detect copies. > > > > Jerome > > Indeed, that's what I found out now that I tried Mercurial. So that's > really nice (especially the "addremove" command); it's actually > motivation enough for me to switch to Mercurial or Git, as it's a major > annoyance in SVN. > > I've learned a few more things recently: there's a minor issue with Git > and Mercurial in that they both are not able to record empty > directories. A very minor annoyance (it's workaround-able), but still > conceptually lame, I mean, directories are resources too! It's curious > that the wiki pages for both Git and Mercurial on this issue are exactly > the same, word for word most of them: > http://mercurial.selenic.com/wiki/MarkEmptyDirs > https://git.wiki.kernel.org/index.php/MarkEmptyDirs > (I guess it's because they were written by the same guy) > > A more serious issue that I learned (or rather had forgotten about before > and remembered now) is the whole "DVCSes keep the whole repository > history locally" aspect, which has important ramifications. If the > repository is big, although disk space may not be much of an issue, it's > a bit annoying when copying the repository locally(*), or cloning it > from the internet and thus having to download large amounts of data. 
> For example in the DDT Eclipse IDE I keep the project dependencies > (https://svn.codespot.com/a/eclipselabs.org/ddt/trunk/org.dsource.ddt-build/target/) > > on source control, which is 141Mb total on a single revision, and they > might change every semester or so... > I'm still not sure what to do about this. I may split this part of the > project into a separate Mercurial repository, although I do lose some > semantic information because of this: a direct association between each > revision in the source code projects, and the corresponding revision in > the dependencies project. Conceptually I would want this to be a single > repository. > > (*) Yeah, I know Mercurial and Git may use hardlinks to speed up the > cloning process, even on Windows, but that solution is not suitable for > me, as my workflow is usually to copy entire Eclipse workspaces when I > want to "branch" on some task. Doesn't happen that often though. > > -- > Bruno Medeiros - Software Engineer You raised a valid concern regarding the local copy issue and it has already been taken care of in DVCSes: 1. git stores all the actual data in "blobs" which are compressed wherea
Re: Imprecise running time for topN?
On 2011-02-01 16:29:56 +0100, Andrei Alexandrescu said: On 2/1/11 8:12 AM, Magnus Lie Hetland wrote: [snip] I'm not objecting to the use of algorithm -- it's a good choice in practice -- but the docs should probably specify that the linear guarantee does not hold in the worst case? You're right (and randomization should be there, too). Could you please add a bugzilla entry so we don't forget about this? http://d.puremagic.com/issues. Thanks! Will do. :) Andrei -- Magnus Lie Hetland http://hetland.org
Re: std.unittests [updated] for review
On 2/1/11 9:21 AM, Don wrote: Jonathan M Davis wrote: On Sunday 30 January 2011 05:28:36 SHOO wrote: To be frank, I don't think that such a helper is necessary. I think these helpers will harm intuitive readability of unittest code. For unittest code, it is necessary to be able to understand easily even if without the document. Do you really find assertPred!"=="(min(5, 7), 5); to be all that harder to understand than assert(min(5, 7) == 5); I do. *Much* harder. Factor of two, at least. In absolute terms, not so much, because the original assert is very easy to understand. But the relative factor matters enormously. Much as comparing: a.add(b); a += b; And I think this is a very important issue. >I don't see how these functions could be anything but an improvement. > But even if they get into Phobos, you obviously don't have to use them. This is not true. Including them in Phobos gives legitimacy to that style of programming. It's a role model. Including stuff like this could give D a reputation for lack of readability. My belief is that right now, the #1 risk for Phobos is that it becomes too clever and inaccessible. IMHO, something simple which has any appearance of being complicated needs a VERY strong justification. Does this count as a vote against the submission? Andrei
Re: Imprecise running time for topN?
On 2/1/11 8:12 AM, Magnus Lie Hetland wrote: I was reading the docs for std.algorithm when I came across topN. This is, of course, a highly useful problem, with several solutions; I was a bit surprised to see the claim that it runs in linear time. As far as I know, the only ways of achieving that would be (1) using the super-elegant, but highly inefficient, algorithm of Blum, Floyd, Pratt, Rivest and Tarjan, often known as Select, or (2) using soft heaps. (The latter, I know less about.) Checking the source, I found that -- as I suspected -- it uses the more common Randomized-Select (without actual randomization here, though), which only has an *expected* (or average-case) linear running time. It suffers the same worst-case problems as Quicksort. I'm not objecting to the use of the algorithm -- it's a good choice in practice -- but the docs should probably specify that the linear guarantee does not hold in the worst case? You're right (and randomization should be there, too). Could you please add a bugzilla entry so we don't forget about this? http://d.puremagic.com/issues. Thanks! Andrei
Re: std.unittests [updated] for review
Jonathan M Davis wrote: On Sunday 30 January 2011 05:28:36 SHOO wrote: To be frank, I don't think that such a helper is necessary. I think these helpers will harm intuitive readability of unittest code. For unittest code, it is necessary to be able to understand easily even if without the document. Do you really find assertPred!"=="(min(5, 7), 5); to be all that harder to understand than assert(min(5, 7) == 5); I do. *Much* harder. Factor of two, at least. In absolute terms, not so much, because the original assert is very easy to understand. But the relative factor matters enormously. Much as comparing: a.add(b); a += b; And I think this is a very important issue. >I don't see how these functions could be anything but an improvement. > But even if they get into Phobos, you obviously don't have to use them. This is not true. Including them in Phobos gives legitimacy to that style of programming. It's a role model. Including stuff like this could give D a reputation for lack of readability. My belief is that right now, the #1 risk for Phobos is that it becomes too clever and inaccessible. IMHO, something simple which has any appearance of being complicated needs a VERY strong justification.
Re: How much time you spend daily?
== Quote from Gour (g...@atmarama.net)'s article > On Tue, 01 Feb 2011 09:57:18 -0500 > Gary Whatmore wrote: > > A quick look at my daily routines revealed that I spend 7 hours > > studying the dmd and phobos diffs, Debian, Ubuntu, and Arch linux > > packages status, > Here is something which I plan to do: install Free(PC)BSD stable > (there will be an 8.2 release soon) and stop worrying about all those > packages for the above Linux distros and just update your OS when the > new Free(PC)BSD release becomes available. ;) > Sincerely, > Gour The Haiku person in me says to instead install Haiku. ;) Regards Iain
Re: How much time you spend daily?
On Tue, 01 Feb 2011 09:57:18 -0500 Gary Whatmore wrote: > A quick look at my daily routines revealed that I spend 7 hours > studying the dmd and phobos diffs, Debian, Ubuntu, and Arch linux > packages status, Here is something which I plan to do: install Free(PC)BSD stable (there will be an 8.2 release soon) and stop worrying about all those packages for the above Linux distros and just update your OS when the new Free(PC)BSD release becomes available. ;) Sincerely, Gour -- Gour | Hlapicina, Croatia | GPG key: CDBF17CA
Re: Would user polls be useful? (Was: Re: std.unittests [updated] for review)
On Tuesday 01 February 2011 06:44:56 Jens Mueller wrote: > Jonathan M Davis wrote: > > On Monday 31 January 2011 15:49:11 Jens Mueller wrote: > > > spir wrote: > > > > On 01/30/2011 01:13 PM, Jens Mueller wrote: > > > > >I do not like putting it in std.exception. Maybe the name > > > > >std.unittest is also not good. I would propose std.assert if assert > > > > >wasn't a keyword. [...] > > > > > > > > > I would_not_ expect helpers for writing > > > > > > > > >assertions (Assert_Error_) in a module named std.exception. > > > > > > > > Same for me. Find it strange. Would never search assertion helper > > > > funcs inside std.exception. Why not std.assertion? std.unittests > > > > would be fine if there were some more stuff in there, I mean not > > > > only assertions. Else, the obvious name imo is std.assertion. > > > > > > Nice. I just wonder what others think. > > > I'd like to start a poll > > > http://doodle.com/vn2ceenuvfwtx38e > > > In general are those polls useful? I mean there were some > > > discussions lately where a poll may help. git vs. hg. 80 vs. > > > 90 characters per line. If all arguments are on the table it can be > > > useful to have an opinion poll to finally settle the discussion. > > > > > > It may even be that I'm totally wrong here. But I think the module > > > naming needs to be done very carefully. It's what a D newcomer needs to > > > grasp as easily as possible. Am I too picky? Somehow I care (too?) much > > > about names. > > > > I wouldn't actually be looking for "assertion helpers" anywhere. I might > > be looking for unit test helpers, but I wouldn't be thinking about > > assertions at all, even if the unit test helpers threw AssertError like > > they do. But truth be told, I don't generally look for modules with a > > particular name unless I know what I'm looking for. I look at the list > > of modules and see which names seem like they'd have what I'd want. 
As > > such, std.unittests would make me think that the module held unit test > > stuff, whereas I really wouldn't know what to expect in std.assertion or > > std.exception at all - _especially_ std.assertion. > > When I hear std.assertion and std.exception then I assume that these > modules contain stuff for working with assertions and exceptions > respectively. Just the fact that I want to throw an exception makes me > go to std.exception to check out what's in there. > > > However, given the small number of unit test functions, it makes no sense > > for them to be their own module. So, I really don't think that > > std.unittests makes sense at this point. std.exception may not be the > > best name, but given what's currently in it, the unit testing functions > > fit in there reasonably well. So, the issue then is what to name > > std.exception if that name is not good enough. std.assertion makes no > > sense - particularly with enforce in there - and std.unittests doesn't > > work, because it's not just unit testing stuff in there. > > I really dislike the size argument. Because as a programmer I do not > care how big a module is in the first place. First I care about how to > find it. > If I have found it I may say: "Oh, that's a little module." But that > usually doesn't bother me much. > And putting something in a module where it does not belong is not an > option. I think we agree on that, since you would like to have a better > name for std.exception. > > Just checked: > std.bigint is quite small looking at its documentation. std.complex as > well. std.demangle is very small. std.functional has seven functions. > std.getopt has one big function in it. std.uni is very small. There are > more, I guess. It's not the size of a module that comes first when > deciding where to put something. 
> > > So, unless you can come up with a better name for std.exception with the > > idea that it's going to be holding my new functions along with what's > > already in there, I don't think that discussing names is going to mean > > much. From a contents standpoint, it does make sense that the stuff in > > std.exception and my stuff would be lumped together, and it does not > > make sense for my stuff to be in its own module at this point. There > > isn't enough of it, and since it fits well in std.exception, there's > > even less reason to create a new module for it. So, if a change is to be > > made, I think that it's going to have to be to the name of > > std.exception, and I can't think of a better name nor have I seen a > > better name suggested. > > The poll is less about a good name. It should be more about where do you > expect things to be. And since I do not have a good name for a module > that contains exception and error handling I don't like putting both > things in a module with a bad name. Because it will be very difficult to > fix this later. Even adding functions to modules may cause compile > problems. I may have defined an assertPred already in one of my > modules. I created the poll in the h
How much time you spend daily?
Recently Bruno M. wrote: > I may be spending too much time on the NG (especially for someone who doesn't > skip the 8 hours of sleep) A quick look at my daily routines revealed that I spend 7 hours studying the dmd and phobos diffs, Debian, Ubuntu, and Arch linux packages status, bug reports and comments, planet d, running bearophile's benchmarks, dsource & project news, all reddit articles about D, all slides and posts by Walter in various forums. This is all too exciting. If I didn't go to work, there would be even more to learn. The downside is, this leaves no time to really contribute. Problem #2 is I'm not very good at coming up with ideas on how to improve D or the tools. I just spend all time reading, spreading the word, and manipulating reddit votes. I hope I could help D and use D some day. Any thoughts?
Re: C# Interop
On Tue, 01 Feb 2011 03:05:13 -0500, Rainer Schuetze wrote: Robert Jacques wrote: On Mon, 31 Jan 2011 16:25:11 -0500, Eelco Hoogendoorn wrote: [...] Lastly, D DLLs will only work on Vista/Windows 7/later. They will not work on XP. This is due to a long-known bug with DLLs and thread-local storage in general on XP. Also, you'll have to use 32-bit C# currently, as DMD isn't 64-bit compatible yet. (Walter is hard at work on a 64-bit version of DMD, but it will be Linux only at first, with Windows following sometime later.) XP TLS support with dynamically loaded DLLs has been fixed for some time now with a workaround implemented in druntime. Also, DLLs can be used in multi-threading environments. Yes, I pointed out in another thread that D loading D DLLs can work around this issue, but the original post was about calling a D DLL from another language, specifically C#, where the limitation on XP still exists. (Of course, you might be able to port the workaround to C#. Hmm...) > I've listed some example code from my project below: [snip] This DLLMain code is a bit outdated (is it D1?); the current proposed version is here: http://www.digitalmars.com/d/2.0/dll.html Thanks. It was D2, but it was forked a while ago. Given that the recommended way of doing this might change in the future, a string mixin in core.dll_helper might be appropriate.
Re: Would user polls be useful? (Was: Re: std.unittests [updated] for review)
Jonathan M Davis wrote: > On Monday 31 January 2011 15:49:11 Jens Mueller wrote: > > spir wrote: > > > On 01/30/2011 01:13 PM, Jens Mueller wrote: > > > >I do not like putting it in std.exception. Maybe the name std.unittest > > > >is also not good. I would propose std.assert if assert wasn't a keyword. > > > >[...] > > > > > > > I would_not_ expect helpers for writing > > > > > > >assertions (Assert_Error_) in a module named std.exception. > > > > > > Same for me. Find it strange. Would never search assertion helper > > > funcs inside std.exception. Why not std.assertion? std.unittests > > > would be fine if there were some more stuff in there, I mean not > > > only assertions. Else, the obvious name imo is std.assertion. > > > > Nice. I just wonder what others think. > > I'd like to start a poll > > http://doodle.com/vn2ceenuvfwtx38e > > In general are those polls useful? I mean there were some > > discussions lately where a poll may help. git vs. hg. 80 vs. > > 90 characters per line. If all arguments are on the table it can be > > useful to have an opinion poll to finally settle the discussion. > > > > It may even be that I'm totally wrong here. But I think the module > > naming needs to be done very carefully. It's what a D newcomer needs to > > grasp as easily as possible. Am I too picky? Somehow I care (too?) much > > about names. > > I wouldn't actually be looking for "assertion helpers" anywhere. I might be > looking for unit test helpers, but I wouldn't be thinking about assertions at > all, even if the unit test helpers threw AssertError like they do. But truth > be > told, I don't generally look for modules with a particular name unless I know > what I'm looking for. I look at the list of modules and see which names seem > like they'd have what I'd want. 
As such, std.unittests would make me think > that > the module held unit test stuff, whereas I really wouldn't know what to > expect in > std.assertion or std.exception at all - _especially_ std.assertion. When I hear std.assertion and std.exception then I assume that these modules contain stuff for working with assertions and exceptions respectively. Just the fact that I want to throw an exception makes me go to std.exception to check out what's in there. > However, given the small number of unit test functions, it makes no sense for > them to be their own module. So, I really don't think that std.unittests > makes > sense at this point. std.exception may not be the best name, but given what's > currently in it, the unit testing functions fit in there reasonably well. So, > the > issue then is what to name std.exception if that name is not good enough. > std.assertion makes no sense - particularly with enforce in there - and > std.unittests doesn't work, because it's not just unit testing stuff in there. I really dislike the size argument. Because as a programmer I do not care how big a module is in the first place. First I care about how to find it. If I have found it I may say: "Oh, that's a little module." But that usually doesn't bother me much. And putting something in a module where it does not belong is not an option. I think we agree on that, since you would like to have a better name for std.exception. Just checked: std.bigint is quite small looking at its documentation. std.complex as well. std.demangle is very small. std.functional has seven functions. std.getopt has one big function in it. std.uni is very small. There are more, I guess. It's not the size of a module that comes first when deciding where to put something.
From a > contents standpoint, it does make sense that the stuff in std.exception and > my > stuff would be lumped together, and it does not make sense for my stuff to be > in > its own module at this point. There isn't enough of it, and since it fits > well in > std.exception, there's even less reason to create a new module for it. So, if > a > change is to be made, I think that it's going to have to be to the name of > std.exception, and I can't think of a better name nor have I seen a better > name > suggested. The poll is less about a good name. It should be more about where do you expect things to be. And since I do not have a good name for a module that contains exception and error handling I don't like putting both things in a module with a bad name. Because it will be very difficult to fix this later. Even adding functions to modules may cause compile problems. I may have defined an assertPred already in one of my modules. I created the poll in the hope of getting more feedback on how others relate to this issue. I'm very unsure how important these module naming and hierarchy issues are. But I fear that it may be too late to fix these later. But
Re: Purity
Bruno Medeiros wrote: But for immutable data (like the contents of the elements of a string[]), that doesn't matter, does it? Maybe it won't matter for the *contents of the elements* of the string array, but the whole result value has to be /the same/ as if the optimization was not applied. Otherwise the optimization is invalid, even if for most uses of the result value it would not make a difference for the program. I admit to still not understanding this. The data can't be changed, so the contents do not matter. The array structs (ptr/length) would not be the same as those fed to the function in any case, so I really cannot see how those would matter. If others do understand, please elucidate. -- Simen
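The question can be made concrete with a hypothetical example (squares is an invented function): a pure function returning a freshly allocated array whose contents are immutable, where folding two identical calls into one would change pointer identity but never contents:

```d
// Pure function that allocates on every call; contents are immutable
// once returned.
pure immutable(int)[] squares(int n)
{
    auto a = new int[n];
    foreach (i, ref e; a) e = cast(int)(i * i);
    return cast(immutable) a;
}

void main()
{
    auto x = squares(4);
    auto y = squares(4);
    assert(x == y);   // element contents are always equal
    // Whether x.ptr == y.ptr depends on whether the compiler folded the
    // two calls into one. Code that compares identity (x is y) could
    // observe the difference -- which is the crux of the validity
    // question debated in this thread.
}
```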
Imprecise running time for topN?
I was reading the docs for std.algorithm when I came across topN. This is, of course, a highly useful problem, with several solutions; I was a bit surprised to see the claim that it runs in linear time. As far as I know, the only ways of achieving that would be (1) using the super-elegant, but highly inefficient, algorithm of Blum, Floyd, Pratt, Rivest and Tarjan, often known as Select, or (2) using soft heaps. (The latter, I know less about.) Checking the source, I found that -- as I suspected -- it uses the more common Randomized-Select (without actual randomization here, though), which only has an *expected* (or average-case) linear running time. It suffers the same worst-case problems as Quicksort. I'm not objecting to the use of the algorithm -- it's a good choice in practice -- but the docs should probably specify that the linear guarantee does not hold in the worst case? -- Magnus Lie Hetland http://hetland.org
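For reference, the expected-linear scheme being discussed looks roughly like the following generic quickselect sketch (an illustration of the technique, not std.algorithm.topN's actual implementation):

```d
import std.algorithm : swap;

// Moves the k-th smallest element of a into position k, with smaller
// elements to its left and larger ones to its right.
// Expected O(n); worst case O(n^2) on adversarial input, which is
// exactly the caveat raised above.
void quickSelect(T)(T[] a, size_t k)
{
    assert(k < a.length);
    size_t lo = 0, hi = a.length - 1;
    while (lo < hi)
    {
        // Lomuto partition around the last element; no randomization,
        // mirroring the non-randomized pivot choice being discussed.
        T pivot = a[hi];
        size_t store = lo;
        foreach (i; lo .. hi)
            if (a[i] < pivot)
                swap(a[i], a[store++]);
        swap(a[store], a[hi]);

        if (store == k) return;
        if (store < k) lo = store + 1;
        else           hi = store - 1;
    }
}

unittest
{
    auto a = [9, 1, 7, 3, 5];
    quickSelect(a, 2);
    assert(a[2] == 5);   // the median of the five values
}
```

Picking the pivot uniformly at random (or via median-of-medians for a true worst-case linear bound) is what the "randomization should be there" remark refers to.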
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
On 2/1/11 2:44 PM, Bruno Medeiros wrote: […] a direct association between each revision in the source code projects, and the corresponding revision in the dependencies project. […] With Git, you could use submodules for that task – I don't know if something similar exists for Mercurial. David
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
On 29/01/2011 10:02, "Jérôme M. Berger" wrote: Michel Fortin wrote: On 2011-01-28 11:29:49 -0500, Bruno Medeiros said: I've also been mulling over whether to try out and switch away from Subversion to a DVCS, but never went ahead because I've also been undecided about Git vs. Mercurial. So this whole discussion here in the NG has been helpful, even though I rarely use branches, if at all. However, there is an important issue for me that has not been mentioned yet, and I wonder if other people also find it relevant. It annoys me a lot in Subversion: if you delete, rename, or copy a folder under version control in an SVN working copy without using the SVN commands, there is a high likelihood your working copy will break! It's so annoying, especially since sometimes no amount of svn revert, cleanup, unlock, override and update, etc. will fix it. I just had one recently where I had to delete and re-checkout the whole project because it was that broken. Other situations also seem to cause this, even when using SVN tooling (like partially updating from a commit that deletes or moves directories, or something like that). It's just so brittle. I think it may be a consequence of the design aspect of SVN where each subfolder of a working copy is a working copy as well (and each subfolder of a repository is a repository as well). Anyways, I hope Mercurial and Git are better at this; I'm definitely going to try them out with regard to this. Git doesn't care how you move your files around. It tracks files by their content. If you rename a file and most of the content stays the same, git will see it as a rename. If most of the file has changed, it'll see it as a new file (with the old one deleted). There is 'git mv', but it's basically just a shortcut for moving the file, doing 'git rm' on the old path and 'git add' on the new path. I don't know about Mercurial.
Mercurial can record renamed or copied files after the fact (simply pass the -A option to "hg cp" or "hg mv"). It also has the "addremove" command, which will automatically remove any missing files and add any unknown non-ignored files. Addremove can detect renamed files if they are similar enough to the old file (the similarity level is configurable), but it will not detect copies. Jerome Indeed, that's what I found out now that I tried Mercurial. So that's really nice (especially the "addremove" command); it's actually motivation enough for me to switch to Mercurial or Git, as it's a major annoyance in SVN. I've learned a few more things recently: there's a minor issue with Git and Mercurial in that neither is able to record empty directories. A very minor annoyance (it's workaround-able), but still conceptually lame; I mean, directories are resources too! It's curious that the wiki pages for both Git and Mercurial on this issue are exactly the same, word for word for most of them: http://mercurial.selenic.com/wiki/MarkEmptyDirs https://git.wiki.kernel.org/index.php/MarkEmptyDirs (I guess it's because they were written by the same guy) A more serious issue that I learned (or rather had forgotten about before and remembered now) is that DVCSes keep the whole repository history locally, which has important ramifications. If the repository is big, although disk space may not be much of an issue, it's a bit annoying when copying the repository locally(*), or cloning it from the internet and thus having to download large amounts of data. For example, in the DDT Eclipse IDE I keep the project dependencies (https://svn.codespot.com/a/eclipselabs.org/ddt/trunk/org.dsource.ddt-build/target/) under source control, which is 141Mb total for a single revision, and they might change every semester or so... I'm still not sure what to do about this.
I may split this part of the project into a separate Mercurial repository, although I do lose some semantic information because of this: a direct association between each revision in the source code projects and the corresponding revision in the dependencies project. Conceptually I would want this to be a single repository. (*) Yeah, I know Mercurial and Git may use hardlinks to speed up the cloning process, even on Windows, but that solution is not suitable for me, as my workflow is usually to copy entire Eclipse workspaces when I want to "branch" on some task. Doesn't happen that often, though. -- Bruno Medeiros - Software Engineer
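The content-similarity rename detection Jérôme describes (what "hg addremove -s" and Git's rename heuristic do conceptually) can be sketched as follows. This is a toy Python illustration, not Mercurial's or Git's actual algorithm; the file names, contents, and the 0.75 threshold are made up for the example.

```python
from difflib import SequenceMatcher

def detect_renames(removed, added, threshold=0.75):
    """Pair removed files with added files whose contents are similar enough.

    `removed` and `added` map paths to file contents.  Each removed file is
    matched to the most similar still-unmatched added file whose similarity
    ratio meets the threshold, and the pair is reported as a rename.
    """
    renames = []
    unmatched = dict(added)
    for old_path, old_text in removed.items():
        best = None
        for new_path, new_text in unmatched.items():
            score = SequenceMatcher(None, old_text, new_text).ratio()
            if score >= threshold and (best is None or score > best[1]):
                best = (new_path, score)
        if best:
            renames.append((old_path, best[0]))
            del unmatched[best[0]]  # each added file matches at most once
    return renames

removed = {"util.d": "module util;\nint add(int a, int b) { return a + b; }\n"}
added = {"math.d": "module math;\nint add(int a, int b) { return a + b; }\n",
         "io.d": "module io;\nvoid log(string s) {}\n"}
print(detect_renames(removed, added))  # util.d was most likely renamed to math.d
```

Real tools compare hashed chunks rather than running a quadratic diff over every pair, but the principle -- rename as "delete plus similar add" -- is the same.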
Re: What are we missing, in terms of tool support?
On 14/01/2011 09:49, %fil wrote: I for one fully agree with you on this. Having spent a lot of my time in recent years coding in C#, with the tool support (from an IDE perspective) that comes along with programming in .Net, I agree that coding productivity in bigger applications receives a good boost from an IDE with the features you describe. To an extent, I'm actually surprised that there is no good cross-platform IDE written in D(2) already, as it would be a very good showcase for the language and help lower the barrier for other people to adopt it (definitely if it were to support a GUI designer (QtD, GtkD or something else) of some sort directly from the IDE, so people feel they have a complete package to create D(2) applications easily). I would even personally happily pay for such a tool (if it were cross-platform at least), even if it were only available under a commercial license... fil. A good cross-platform IDE written in D? There is not even a good cross-platform *D compiler* written in D... how about we start there first, no? (even if just in terms of a wishlist) -- Bruno Medeiros - Software Engineer
Re: DSource (Was: Re: Moving to D )
On 28/01/2011 21:14, retard wrote: Fri, 28 Jan 2011 15:03:24 +, Bruno Medeiros wrote: I know, I know. :) (I am up-to-date on D.announce, just not on "D" and "D.bugs") I still wanted to make that point though. First, for retrospection, but also because it may still apply to a few other DSource projects (current or future ones). You don't need to read every post here. Reading every bug report is just stupid.. but it's not my problem. It just means that the rest of us have less competition in everyday situations (getting women, work offers, and so on) I don't read every bug report, I only (try to) read the titles and see if it's something interesting, for example something that might impact the design of the language and is just not a pure implementation issue. Still, yes, I may be spending too much time on the NG (especially for someone who doesn't skip the 8 hours of sleep), but the bottleneck at the moment is writing posts, especially those that involve arguments. They are an order of magnitude more "expensive" than reading posts. -- Bruno Medeiros - Software Engineer
Re: Purity
On 28/01/2011 20:25, Simen kjaeraas wrote: Bruno Medeiros wrote: On 27/01/2011 21:05, Simen kjaeraas wrote: Bruno Medeiros wrote: string[] func(string arg) pure { string elem2 = "blah".idup; return [ arg, elem2 ]; } The compiler *cannot* know (well, looking at the signature only of course) how to properly deepdup the result from the first return value, so as to give the exact same result as if func was called again. Could you please elucidate, as I am unsure of your reasoning for saying the compiler cannot know how to deepdup the result. string str = "blah"; string[] var1 = func(str); string[] var2 = func(str); How can the compiler optimize the second call to func, the one that is assigned to var2, such that he deepdups var1 instead of calling func again? Which code would be generated? The compiler can't do that because of all the transitive data of var1, the compiler doesn't know which of it was newly allocated by func, and which of it was reused from func's parameters or some other global inputs. But for immutable data (like the contents of the elements of a string[]), that doesn't matter, does it? Maybe it won't matter for the *contents of the elements* of the string array, but the whole result value has to be /the same/ as if the optimization was not applied. Otherwise the optimization is invalid, even if for most uses of the result value it would not make a difference for the program. -- Bruno Medeiros - Software Engineer
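The observability question being debated here can be made concrete with a small sketch (in Python, for the sake of a runnable illustration; the memoization decorator stands in for the compiler's hypothetical caching of pure calls). A cached call hands back the very same object, while repeated plain calls return equal but distinct objects -- exactly the difference Simen argues should be unobservable when the contents are immutable, and Bruno argues still changes the result's identity.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def func_memoized(arg):
    # Analogous to caching the pure D function's result: the same
    # immutable pair object is reused for a repeated argument.
    elem2 = "blah"
    return (arg, elem2)

def func_plain(arg):
    # Analogous to actually re-running the pure function: a fresh
    # pair object is allocated on every call.
    elem2 = "blah"
    return (arg, elem2)

a = func_memoized("x")
b = func_memoized("x")
c = func_plain("x")
d = func_plain("x")

print(a == b, a is b)  # True True  -- cached: equal AND the same object
print(c == d, c is d)  # True False -- plain: equal but distinct objects
```

Whether aliasing two results that would otherwise be distinct (but immutable and equal) counts as "the same result" is precisely what the optimization's validity hinges on.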
Re: Patterns of Bugs
On 28/01/2011 12:41, Daniel Gibson wrote: Am 28.01.2011 13:33, schrieb Bruno Medeiros: On 08/01/2011 09:14, Walter Bright wrote: Jonathan M Davis wrote: On Saturday 08 January 2011 00:16:13 Walter Bright wrote: Jérôme M. Berger wrote: When I built my latest PC, I saw in the MB manual that it would use speech synthesis on the PC speaker to report errors. So I tried to power on the PC without having plugged in either CPU or RAM, and it started to say "NO CPU FOUND! NO CPU FOUND!" in a loop with a hilarious Asian accent and the kind of rasping voice that used to characterize old DOS games. Pretty fun ;) That's a heckuva lot better than an undocumented beep pattern, which is what I got. LOL. The beeps for mine are documented in the motherboard manual, but the beeps are so hard to distinguish from one another that it borders on useless. A voice would certainly be better. Yes, what is the difference between a "slow beep" and a "fast beep"? While I'm ranting, does anyone else have trouble remembering which of O and | is on, and which is off? What's the matter with "on" and "off"? Hum, I never had problems with that: I always assumed the | meant a closed electrical circuit (ie, you closed the circuit with the switch), thus naturally it meant "on". O looks like a *closed* circle to me, so this isn't that helpful IMHO ;) In my mental model (where the "|" means a closed circuit, or rather the part of the circuit that closes it and makes electricity pass through), the "O" doesn't fit well into that model; indeed, it makes no sense. But just for the purposes of remembering which one is on or off, that doesn't matter at all: I just think of "is the | pressed or not?". -- Bruno Medeiros - Software Engineer
Re: const(Object)ref is here!
On 28/01/2011 15:19, Andrei Alexandrescu wrote: On 1/28/11 5:37 AM, Bruno Medeiros wrote: You mean to say that there would be three possible signatures for toText (for char[], wchar[], dchar[]), that the class coder can choose? But of course, the coder would only need to define one, right? (otherwise that would be the awful idea) Probably standardizing on one width is a good idea. Andrei Indeed. -- Bruno Medeiros - Software Engineer
Re: Would user polls be useful? (Was: Re: std.unittests [updated] for review)
On Monday 31 January 2011 15:49:11 Jens Mueller wrote: spir wrote: On 01/30/2011 01:13 PM, Jens Mueller wrote: I do not like putting it in std.exception. Maybe the name std.unittest is also not good. I would propose std.assert if assert wasn't a keyword. [...] I would _not_ expect helpers for writing assertions (AssertError) in a module named std.exception. Same for me. Find it strange. Would never search assertion helper funcs inside std.exception. Why not std.assertion? std.unittests would be fine if there were some more stuff in there, I mean not only assertions. Else, the obvious name imo is std.assertion. Nice. I just wonder what others think. I'd like to start a poll http://doodle.com/vn2ceenuvfwtx38e In general, are those polls useful? I mean there were some discussions recently where a poll may help. git vs. hg. 80 vs. 90 characters per line. If all arguments are on the table, it can be useful to have an opinion poll to finally settle the discussion. It may even be that I'm totally wrong here. But I think the module naming needs to be done very carefully. It's what a D newcomer needs to grasp as easily as possible. Am I too picky? Somehow I care (too?) much about names. I wouldn't actually be looking for "assertion helpers" anywhere. I might be looking for unit test helpers, but I wouldn't be thinking about assertions at all, even if the unit test helpers threw AssertError like they do. But truth be told, I don't generally look for modules with a particular name unless I know what I'm looking for. I look at the list of modules and see which names seem like they'd have what I'd want. As such, std.unittests would make me think that the module held unit test stuff, whereas I really wouldn't know what to expect in std.assertion or std.exception at all - _especially_ std.assertion.
However, given the small number of unit test functions, it makes no sense for them to be their own module. So, I really don't think that std.unittests makes sense at this point. std.exception may not be the best name, but given what's currently in it, the unit testing functions fit in there reasonably well. So, the issue then is what to name std.exception if that name is not good enough. std.assertion makes no sense - particularly with enforce in there - and std.unittests doesn't work, because it's not just unit testing stuff in there. So, unless you can come up with a better name for std.exception with the idea that it's going to be holding my new functions along with what's already in there, I don't think that discussing names is going to mean much. From a contents standpoint, it does make sense that the stuff in std.exception and my stuff would be lumped together, and it does not make sense for my stuff to be in its own module at this point. There isn't enough of it, and since it fits well in std.exception, there's even less reason to create a new module for it. So, if a change is to be made, I think that it's going to have to be to the name of std.exception, and I can't think of a better name, nor have I seen a better name suggested. - Jonathan M Davis
Re: Would user polls be useful? (Was: Re: std.unittests [updated] for review)
On 02/01/2011 12:49 AM, Jens Mueller wrote: spir wrote: On 01/30/2011 01:13 PM, Jens Mueller wrote: I do not like putting it in std.exception. Maybe the name std.unittest is also not good. I would propose std.assert if assert wasn't a keyword. [...] I would _not_ expect helpers for writing assertions (AssertError) in a module named std.exception. Same for me. Find it strange. Would never search assertion helper funcs inside std.exception. Why not std.assertion? std.unittests would be fine if there were some more stuff in there, I mean not only assertions. Else, the obvious name imo is std.assertion. Nice. I just wonder what others think. I'd like to start a poll http://doodle.com/vn2ceenuvfwtx38e In general, are those polls useful? I mean there were some discussions recently where a poll may help. git vs. hg. 80 vs. 90 characters per line. If all arguments are on the table, it can be useful to have an opinion poll to finally settle the discussion. It may even be that I'm totally wrong here. But I think the module naming needs to be done very carefully. It's what a D newcomer needs to grasp as easily as possible. Am I too picky? Somehow I care (too?) much about names. Jens I care a great deal about names, in general (as people may have noticed). Even more, as you say, for standard modules, whose names should be as obvious as possible for newcomers. This also requires a clear policy for stdlib structure / organisation; the present one, if any, is a mystery for me. I doubt, though, that a poll is a good decision-making means; at best, it may help give a rough review of which names /not/ to choose. (voted for your poll anyway, just for fun ;-) Denis -- _ vita es estrany spir.wikidot.com
Re: On 80 columns should (not) be enough for everyone
On 31/01/2011 17:54, Ulrik Mikaelsson wrote: One special case which often causes problems is function calls, especially "method" calls. Roughly lines like: (note 3-level leading indent) otherObj1.doSomethingSensible(otherObj2.internalVariable, this.config, this.context); At this point, I can see two obvious alternatives; otherObj1.doSomethingSensible(otherObj2.internalVariable, this.config, this.context); vs. otherObj1.doSomethingSensible(otherObj2.internalVariable, this.config, this.context); If only your newsreader were also set to wrap at 90, it would be clearer. Why align the continuation lines with the open bracket? I think the shortness of lines resulting therefrom is the root cause of what you say next: Both have advantages and problems. In the first alternative, you might miss the second argument if reading too fast, and in the second alternative, the vertical space can be quickly wasted, especially if the line gets just slightly too long due to many small arguments. If OTOH you stick to a standard number of spaces by which to indent, which generally allows multiple arguments to fit on one line otherObj1.doSomethingSensible(otherObj2.internalVariable, this.config, this.context); you don't lead people into the trap of seeing one argument per line. That said, I've probably in my time done something similar to your example. And at other times, I might do otherObj1.doSomethingSensible( otherObj2.internalVariable, this.config, this.context ); Stewart.
Re: On 80 columns should (not) be enough for everyone
Here's SDC, just for kicks: [SDC]$ find src/sdc -name "*.d" -print0 | xargs --null wc -l | sort -rn | head -n 1 12545 total [SDC]$ find src/sdc -name "*.d" -print0 | xargs --null grep '.\{81,\}' | cut -f1 -d:| uniq -c | sort -nr 81 src/sdc/gen/value.d 44 src/sdc/gen/expression.d 35 src/sdc/lexer.d 26 src/sdc/gen/base.d 24 src/sdc/parser/declaration.d 24 src/sdc/gen/declaration.d 19 src/sdc/gen/sdctemplate.d 16 src/sdc/gen/statement.d 13 src/sdc/global.d 12 src/sdc/gen/sdcfunction.d 11 src/sdc/sdc.d 9 src/sdc/gen/sdcpragma.d 8 src/sdc/parser/expression.d 8 src/sdc/parser/base.d 8 src/sdc/gen/sdcmodule.d 7 src/sdc/parser/conditional.d 7 src/sdc/gen/attribute.d 6 src/sdc/parser/sdcimport.d 6 src/sdc/parser/attribute.d 6 src/sdc/extract/base.d 4 src/sdc/parser/sdctemplate.d 4 src/sdc/parser/enumeration.d 4 src/sdc/gen/sdcimport.d 4 src/sdc/gen/enumeration.d 3 src/sdc/sdc4de.d 2 src/sdc/token.d 2 src/sdc/gen/type.d 2 src/sdc/gen/cfg.d 2 src/sdc/gen/aggregate.d 1 src/sdc/tokenstream.d 1 src/sdc/terminal.d 1 src/sdc/source.d 1 src/sdc/parser/statement.d 1 src/sdc/parser/sdcpragma.d 1 src/sdc/parser/sdcclass.d 1 src/sdc/parser/aggregate.d 1 src/sdc/ast/statement.d 1 src/sdc/ast/expression.d
Re: Bus error w/combined writeln(int) and uniform
On 2011-01-31 17:00:57 +0100, Jacob Carlborg said: On 2011-01-31 10:18, Lars T. Kyllingstad wrote: [snip] I'm not sure if it's related: http://d.puremagic.com/issues/show_bug.cgi?id=4854 -Lars Can it be this problem: http://d.puremagic.com/issues/show_bug.cgi?id=4854 ? That's the same one, no? (Or did you mean some other problem?) -- Magnus Lie Hetland http://hetland.org
Re: Bus error w/combined writeln(int) and uniform
On 2011-01-31 17:03:50 +0100, Jacob Carlborg said: To begin with, are you using Mac OS X 10.5 or 10.6? If you're using 10.6 we can rule out that bug. I'm using 10.5.8. -- Magnus Lie Hetland http://hetland.org
Re: C# Interop
Robert Jacques wrote: On Mon, 31 Jan 2011 16:25:11 -0500, Eelco Hoogendoorn wrote: [...] Lastly, D DLLs will only work on Vista/Windows 7/later. They will not work on XP. This is due to a long-known bug with DLLs and thread-local storage in general on XP. Also, you'll have to use 32-bit C# currently, as DMD isn't 64-bit compatible yet. (Walter is hard at work on a 64-bit version of DMD, but it will be Linux-only at first, with Windows following sometime later) XP TLS support with dynamically loaded DLLs has been fixed for some time now with a workaround implemented in druntime. Also, DLLs can be used in multi-threading environments.

> I've listed some example code from my project below:
>
> // Written in the D Programming Language (www.digitalmars.com/d)
> /// Basic DLL setup and teardown code. From D's/bugzilla's public domain
> /// example code.
> module dll;
>
> import std.c.windows.windows;
> import std.c.stdlib;
> import core.runtime;
> import core.memory;
>
> extern (Windows)
> BOOL DllMain(HINSTANCE hInstance, ULONG ulReason, LPVOID pvReserved) {
>     switch (ulReason) {
>     case DLL_PROCESS_ATTACH:
>         Runtime.initialize();
>         break;
>     case DLL_PROCESS_DETACH:
>         Runtime.terminate();
>         break;
>     case DLL_THREAD_ATTACH:
>     case DLL_THREAD_DETACH:
>         return false;
>     }
>     return true;
> }

This DllMain code is a bit outdated (is it D1?); the current proposed version is here: http://www.digitalmars.com/d/2.0/dll.html Unfortunately, there is a regression in the latest dmd release (2.051): http://d.puremagic.com/issues/show_bug.cgi?id=5382 that causes TLS not to be initialized for new threads created by the host application. Rainer