Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On Sat, 01 Jun 2013 04:06:06 +0100, Juan Manuel Cabo juanmanuel.c...@gmail.com wrote: On 05/31/2013 05:18 PM, Nick Sabalausky wrote: On Fri, 31 May 2013 15:29:40 +0100 Regan Heath re...@netmail.co.nz wrote: I have old SHA etc hashing routines in old style D, this makes me want to spend some time bringing them up to date... http://dlang.org/phobos/std_digest_sha.html Since 2.061, IIRC. Funny.. the module listing on the left is not in alpha ordering, so I completely missed them. The sha digest in phobos is SHA1. SHA256 and SHA512 are still missing. This too.. I have those, plus a few others. R -- Using Opera's revolutionary email client: http://www.opera.com/mail/
DConf 2013 Day 2 Talk 4: Web Development in D by Vladimir Panteleev
On reddit: http://www.reddit.com/r/programming/comments/1fkr5s/dconf_2013_day_2_talk_4_web_development_in_d_by/ On hackernews: https://news.ycombinator.com/item?id=5812723 On facebook: https://www.facebook.com/dlang.org/posts/650767074936977 On twitter: https://twitter.com/D_Programming/status/341527862815367168 Andrei
Re: DConf 2013 Day 2 Talk 4: Web Development in D by Vladimir Panteleev
On Monday, 3 June 2013 at 12:14:48 UTC, Andrei Alexandrescu wrote: On reddit: http://www.reddit.com/r/programming/comments/1fkr5s/dconf_2013_day_2_talk_4_web_development_in_d_by/ On hackernews: https://news.ycombinator.com/item?id=5812723 On facebook: https://www.facebook.com/dlang.org/posts/650767074936977 On twitter: https://twitter.com/D_Programming/status/341527862815367168 Andrei http://dconf.org/2013/talks/panteleev.pdf 404 error
Re: DConf 2013 Day 2 Talk 4: Web Development in D by Vladimir Panteleev
Andrei Alexandrescu: http://www.reddit.com/r/programming/comments/1fkr5s/dconf_2013_day_2_talk_4_web_development_in_d_by/ On Reddit they seem to suggest the idea of good stack traces for fibers... Bye, bearophile
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On Mon, 03 Jun 2013 02:16:45 -0700, Regan Heath re...@netmail.co.nz wrote: On Sat, 01 Jun 2013 04:06:06 +0100, Juan Manuel Cabo juanmanuel.c...@gmail.com wrote: On 05/31/2013 05:18 PM, Nick Sabalausky wrote: On Fri, 31 May 2013 15:29:40 +0100 Regan Heath re...@netmail.co.nz wrote: I have old SHA etc hashing routines in old style D, this makes me want to spend some time bringing them up to date... http://dlang.org/phobos/std_digest_sha.html Since 2.061, IIRC. Funny.. the module listing on the left is not in alpha ordering, so I completely missed them. The sha digest in phobos is SHA1. SHA256 and SHA512 are still missing. This too.. I have those, plus a few others. R Any chance of getting those merged into Phobos? -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Re: DConf 2013 Day 2 Talk 4: Web Development in D by Vladimir Panteleev
On Monday, 3 June 2013 at 12:14:48 UTC, Andrei Alexandrescu wrote: On reddit: http://www.reddit.com/r/programming/comments/1fkr5s/dconf_2013_day_2_talk_4_web_development_in_d_by/ On hackernews: https://news.ycombinator.com/item?id=5812723 On facebook: https://www.facebook.com/dlang.org/posts/650767074936977 On twitter: https://twitter.com/D_Programming/status/341527862815367168 Andrei Great talk! Would love to see the suggested improvements to Phobos. An idea for the virtual address space problem on 32-bit:
- Assuming each stack has a marker page at the end to prevent overflow, simply exchanging the stack memory with a separate block of memory reduces the number of marker pages, saving up to one page per fiber
- Could use some fast compression method to save a not-recently-used stack in memory
- As a last resort, can save a stack to a file and read it in again when it is required. Since events are queued, the event loop can easily peek ahead in the queue and start loading a stack early so that it is ready by the time that event gets to the front.
Re: DConf 2013 Day 2 Talk 4: Web Development in D by Vladimir Panteleev
On Monday, 3 June 2013 at 18:07:57 UTC, Diggory wrote: Great talk! Would love to see the improvements to phobos suggested. An idea for the virtual address space problem on 32-bit: - Assuming each stack has a marker page at the end to prevent overflow, by simply exchanging the stack memory with a separate block of memory you can reduce the number of marker pages saving up to one page per fiber - Could use some fast compression method to save a not recently used stack in memory - As a last resort can save stack to a file and read it in again when it is required. Since events are queued the event loop can easily peek ahead in the queue and start loading in a stack early so that it is ready by the time that event gets to the front. Thanks! One of the main advantages of fibers over threads is the low overhead for switching - basically you save registers and change ESP. By comparison, switching to another thread requires an OS kernel system call. Copying entire stacks might negate this performance benefit. It's worth noting that it looks like the average stack size will grow for D programs in the future, as the push is made to minimize heap allocations throughout Phobos and use the stack more.
Re: DConf 2013 Day 2 Talk 4: Web Development in D by Vladimir Panteleev
vibe.d really puts cgi.d to shame when it comes to scalability. With 100 concurrent connections, cgi.d (especially in embedded_http and fastcgi modes; scgi and cgi are a little behind) can hold its own. Not great (I got 5000 requests per second on my box, the same ballpark as C# in the video), but not too bad. In the question section though he started to talk about event loops. I've been thinking about that too. It isn't tied into cgi.d yet, but I've written a Linux loop built on epoll: https://github.com/adamdruppe/misc-stuff-including-D-programming-language-web-stuff/blob/master/eventloop.d It uses a pipe back to itself for the injection of arbitrary events. Works something like this:

// this helps listening on file descriptors
FileEventDispatcher dispatcher;

// args are fd, on read ready, on write ready, on error
dispatcher.addFile(0, (int fd) {
    ubyte[100] buffer;
    auto got = unix.read(fd, buffer.ptr, buffer.length);
    if(got == -1) throw new Exception("wtf");
    if(got == 0) exit(0);
    else writeln(fd, " sent ", cast(string) buffer[0 .. got]);
}, null, null);

// you can also listen to events identified by type
addListener(delegate void(int a) { writeln("got ", a); });
addListener(delegate void(File a) { writeln("got ", a); });

send(20); // calls the listener above
send(stdin); // works with fancier types too

loop(); // enters the loop

I really like the idea of dispatching things based on type; it is so convenient and lets you plug anything in. The FileEventDispatcher is different because all file descriptors are the same type, and we might want to differentiate based on descriptor number. But you could also do addListener(FdReady) or something like that; I haven't used this for a while. Anyway, my goal here is to prove that we can have one generic event loop that all libraries can use. My implementation probably isn't Phobos-bound; I'm settling for "works for me and proves it can work", but maybe the experience or some code will help when it is time to do a standard one.
I've plugged in my terminal.d to it (optionally, use -version=with_eventloop), but that's it so far. Eventually I'll probably put simpledisplay and cgi.d in there as well and really see how it is coming together. If all goes well, we can have one program, one event loop, and all those various inputs handled.
Re: DConf 2013 Day 2 Talk 4: Web Development in D by Vladimir Panteleev
03-Jun-2013 22:07, Diggory wrote: On Monday, 3 June 2013 at 12:14:48 UTC, Andrei Alexandrescu wrote: On reddit: http://www.reddit.com/r/programming/comments/1fkr5s/dconf_2013_day_2_talk_4_web_development_in_d_by/ On hackernews: https://news.ycombinator.com/item?id=5812723 On facebook: https://www.facebook.com/dlang.org/posts/650767074936977 On twitter: https://twitter.com/D_Programming/status/341527862815367168 Andrei Great talk! Would love to see the improvements to phobos suggested. Indeed. An idea for the virtual address space problem on 32-bit: - Assuming each stack has a marker page at the end to prevent overflow, by simply exchanging the stack memory with a separate block of memory you can reduce the number of marker pages saving up to one page per fiber - Could use some fast compression method to save a not recently used stack in memory - As a last resort can save stack to a file and read it in again when it is required. Since events are queued the event loop can easily peek ahead in the queue and start loading in a stack early so that it is ready by the time that event gets to the front. Copying to disk is certainly strange, and raising the cost of a context switch by on-the-fly compression is even more so. So is copying memory. Since we know there is plenty of RAM but limited address space, we can go for a memory-mapped file and have some of it, say 512M, mapped at any given time (to have multiple smaller windows). Think of a memory window as a slot for a fiber, i.e. any fiber is mapped to one of X fixed addresses. Then the only requirement is that when it wakes up it's mapped to the same address it was born with. It's a sort of hash table where the fixed addresses are slots (collision chains) and the items are the fiber contexts that got mapped there. The amount of pages actually used would be fairly low, and thus it may never have to pull them off the disk. In fact, the moment it starts paging it turns into your idea of writing contexts to disk.
Now the question is the relative latency of MapViewOfFile in this setting. It's definitely something to measure; if it's fast enough, we are basically done here. If not (and I guess not; at least it's a syscall), it makes sense to manage the placement of fibers with some kind of good strategy. Ideally, ones that wait on the same resource (say, an updated index.html to read off disk) should wake up together, and thus they had better be in the same window (so you can map a pack of them at once). -- Dmitry Olshansky
Re: DConf 2013 Day 2 Talk 4: Web Development in D by Vladimir Panteleev
On Monday, 3 June 2013 at 19:27:02 UTC, Adam D. Ruppe wrote: vibe.d really puts cgi.d to shame when it comes to scalability. With 100 concurrent connections [...] I forgot to finish this thought! But you read that right: at a puny 100 concurrent connections it holds its own, but if you go up to 1000 or 5000 like the vibe.d benchmarks, my code starts dropping connections fast. My view has always been "I'll cross that bridge when I come to it", though. The sites I work on do ok, but they don't have that kind of traffic!
Re: DConf 2013 Day 2 Talk 4: Web Development in D by Vladimir Panteleev
On Monday, 3 June 2013 at 18:51:45 UTC, Vladimir Panteleev wrote: An idea for the virtual address space problem on 32-bit: [snip] Another problem is passing objects on the stack across fibers. This might even be implicit (e.g. due to delegates).
Re: DConf 2013 Day 2 Talk 4: Web Development in D by Vladimir Panteleev
Torrents and links: http://semitwist.com/download/misc/dconf2013/
Re: DConf 2013 Day 2 Talk 4: Web Development in D by Vladimir Panteleev
Referring to the last question: Hibernate-D is *not* based on Vibe.d. But I have already been looking into the idea of using Hibernate-D and Vibe.d together. In fact, my recent commits to mysql-native adding support for Phobos sockets was a big part of that. The main issue is that Vibe.d programs should be using Vibe.d sockets instead of ordinary sockets (I don't know what would happen if you don't use Vibe.d sockets. My guess is it just wouldn't happen asynchronously, but Sonke could answer that better). But Hibernate-D aims to be usable even without Vibe.d, so it uses Phobos sockets for MySQL (via a modified fork of an older mysql-native), and for PostgreSQL and SQLite it just uses the C libs. I've already converted mysql-native to support both Vibe.d and Phobos sockets, and I've already created a branch of Hibernate-D that makes Hibernate-D use the new official mysql-native and therefore automatically switch to Vibe.d sockets whenever Vibe.d is being used (detected by -version=Have_vibe_d, which is automatically added by DUB if you're using both Vibe.d and DUB). But G^DF*^@CKD*#MMIT I *just now* noticed that commit (and the pull request I could have sworn I made) seems to have completely disappeared without a trace... Shit, I gotta figure out what happened and where the hell it went... Ugh, anyway, I've been digging through Hibernate-D's source the last couple days checking out what else might be needed. As long as I get my magical disappearing commit resurrected, it *looks* to me like the only other thing that might be needed is to bypass Hibernate-D's built-in connection pool. Even that might still work as-is (I haven't tried), but it's not really necessary for Vibe.d users since Vibe.d has its own fiber-safe connection pool system. But it's pretty easy to bypass Hibernate-D's connection pool in favor of Vibe.d's connection pool in user-code without even patching Hibernate-D. 
AFAICT so far, it looks like everything else in Hibernate-D should work fine with Vibe.d. tl;dr: Hibernate-D does not use Vibe.d, but I have personal interest in using them together and I've been checking into it. Not sure what can be done about PostgreSQL and SQLite (I *think* they'll work but just not asynchronously - not sure what else can/should be done, I'd have to ask Sonke). But for MySQL, all you *should* need is a patch or two that I've been working on.
Re: DConf 2013 Day 2 Talk 4: Web Development in D by Vladimir Panteleev
On Mon, 3 Jun 2013 18:11:37 -0400 Nick Sabalausky seewebsitetocontac...@semitwist.com wrote: Referring to the last question: Hibernate-D is *not* based on Vibe.d. But I have already been looking into the idea of using Hibernate-D and Vibe.d together. In fact, my recent commits to mysql-native adding support for Phobos sockets was a big part of that. The main issue is that Vibe.d programs should be using Vibe.d sockets instead of ordinary sockets (I don't know what would happen if you don't use Vibe.d sockets. My guess is it just wouldn't happen asynchronously, but Sonke could answer that better). But Hibernate-D aims to be usable even without Vibe.d, so it uses Phobos sockets for MySQL (via a modified fork of an older mysql-native), and for PostgreSQL and SQLite it just uses the C libs. I've already converted mysql-native to support both Vibe.d and Phobos sockets, and I've already created a branch of Hibernate-D that makes Hibernate-D use the new official mysql-native and therefore automatically switch to Vibe.d sockets whenever Vibe.d is being used (detected by -version=Have_vibe_d, which is automatically added by DUB is you're using both Vibe.d and DUB). But G^DF*^@CKD*#MMIT I *just now* noticed that commit (and the pull request I could have sworn I made) seems to have completely disappeared without a trace...Shit, I gotta figure out what happened and where the hell it went... Ugh, anyway, I've been digging through Hibernate-D's source the last couple days checking out what else might be needed. As long as I get my magical disappearing commit resurrected, it *looks* to me like the only other thing that might be needed is to bypass Hibernate-D's built-in connection pool. Even that might still work as-is (I haven't tried), but it's not really necessary for Vibe.d users since Vibe.d has its own fiber-safe connection pool system. 
But it's pretty easy to bypass Hibernate-D's connection pool in favor of Vibe.D's connection pool in user-code without even patching Hibernate-D. AFAICT so far, it looks like everything else in Hibernate-D should work fine with Vibe.d. tl;dr: Hibernate-D does not use Vibe.d, but I have personal interest in using them together and I've been checking into it. Not sure what can be done about PostgreSQL and SQLite (I *think* they'll work but just not asynchronously - not sure what else can/should be done, I'd have to ask Sonke). But for MySQL, all you *should* need is a patch or two that I've been working on. Oh wait, I totally forgot: DDBC was moved out of HibernateD into a separate project, and *that's* where the missing commit was. I was looking for it in the wrong project. Anyway, here are my branches: https://github.com/Abscissa/hibernated/commits/misc https://github.com/Abscissa/ddbc/commits/misc And here's the DDBC pull request to make DDBC (the low-level DB abstraction lib used by Hibernate-D) use the official Vibe.d- and Phobos-compatible mysql-native: https://github.com/buggins/ddbc/pull/1 So all you *should* need (for MySQL anyway) is that pull request (or my full misc branches above), and then bypass Hibernate-D's connection pool in favor of Vibe.d's by changing this part of your code from:

SessionFactory factory = new SessionFactoryImpl(mySchema, myDialect, myDataSource);

to something like:

class MyDataSource : DataSource {
    static mysql.db.MysqlDB vibePool;
    override Connection getConnection() {
        if(!vibePool)
            vibePool = new mysql.db.MysqlDB(/+ connection info +/);
        return vibePool.lockConnection();
    }
}

SessionFactory factory = new SessionFactoryImpl(mySchema, myDialect, new MyDataSource());
Re: DConf 2013 Day 2 Talk 4: Web Development in D by Vladimir Panteleev
On Monday, 3 June 2013 at 22:11:39 UTC, Nick Sabalausky wrote: Referring to the last question: Hibernate-D is *not* based on Vibe.d. But I have already been looking into the idea of using Hibernate-D and Vibe.d together. In fact, my recent commits to mysql-native adding support for Phobos sockets was a big part of that. The main issue is that Vibe.d programs should be using Vibe.d sockets instead of ordinary sockets (I don't know what would happen if you don't use Vibe.d sockets. My guess is it just wouldn't happen asynchronously, but Sonke could answer that better). But Hibernate-D aims to be usable even without Vibe.d, so it uses Phobos sockets for MySQL (via a modified fork of an older mysql-native), and for PostgreSQL and SQLite it just uses the C libs. I've already converted mysql-native to support both Vibe.d and Phobos sockets, and I've already created a branch of Hibernate-D that makes Hibernate-D use the new official mysql-native and therefore automatically switch to Vibe.d sockets whenever Vibe.d is being used (detected by -version=Have_vibe_d, which is automatically added by DUB is you're using both Vibe.d and DUB). But G^DF*^@CKD*#MMIT I *just now* noticed that commit (and the pull request I could have sworn I made) seems to have completely disappeared without a trace...Shit, I gotta figure out what happened and where the hell it went... Ugh, anyway, I've been digging through Hibernate-D's source the last couple days checking out what else might be needed. As long as I get my magical disappearing commit resurrected, it *looks* to me like the only other thing that might be needed is to bypass Hibernate-D's built-in connection pool. Even that might still work as-is (I haven't tried), but it's not really necessary for Vibe.d users since Vibe.d has its own fiber-safe connection pool system. But it's pretty easy to bypass Hibernate-D's connection pool in favor of Vibe.D's connection pool in user-code without even patching Hibernate-D. 
AFAICT so far, it looks like everything else in Hibernate-D should work fine with Vibe.d. tl;dr: Hibernate-D does not use Vibe.d, but I have personal interest in using them together and I've been checking into it. Not sure what can be done about PostgreSQL and SQLite (I *think* they'll work but just not asynchronously - not sure what else can/should be done, I'd have to ask Sonke). But for MySQL, all you *should* need is a patch or two that I've been working on. Regarding PostgreSQL, keep in mind that it has an async API, which would be handy in building a Vibe-friendly wrapper: http://www.postgresql.org/docs/devel/static/libpq-async.html Graham
Re: DConf 2013 Day 2 Talk 4: Web Development in D by Vladimir Panteleev
On Monday, 3 June 2013 at 18:51:45 UTC, Vladimir Panteleev wrote: On Monday, 3 June 2013 at 18:07:57 UTC, Diggory wrote: Great talk! Would love to see the improvements to phobos suggested. An idea for the virtual address space problem on 32-bit: - Assuming each stack has a marker page at the end to prevent overflow, by simply exchanging the stack memory with a separate block of memory you can reduce the number of marker pages saving up to one page per fiber - Could use some fast compression method to save a not recently used stack in memory - As a last resort can save stack to a file and read it in again when it is required. Since events are queued the event loop can easily peek ahead in the queue and start loading in a stack early so that it is ready by the time that event gets to the front. Thanks! One of the main advantages of fibers over threads is the low overhead for switching - basically you save registers and change ESP. By comparison, switching to another thread requires an OS kernel system call. Copying entire stacks might negate this performance benefit. It's worth noting that it looks like the average stack size will grow for D programs in the future, as the push is made to minimize heap allocations throughout Phobos and use the stack more. Yes, although it would theoretically be possible to swap the stack by swapping the mappings rather than the memory itself, though I doubt many OSes would support that kind of functionality... I guess it's not too much to ask to use a 64-bit OS when thousands of connections need to be handled!
Re: Feature request: Attribute with which to enable the requirement of explicit-initialization of enum variables
On Monday, 3 June 2013 at 02:23:18 UTC, Andrej Mitrovic wrote: Let's say you define an enum, which is to be used as a variable: ... Thoughts? I think it is simpler to set the first enum member as invalid. However, I like the idea of supporting an analogue of the @disable this() mark for any user-defined type, not just structs (I mean it would be pretty good if such a feature, applied to classes, could stop the creation of null references; it's actually not adding a new feature, but increasing the scope of an existing feature).
Re: What's up with pull request buildbots?
On Monday, 3 June 2013 at 05:49:22 UTC, Adam Wilson wrote: On Sun, 02 Jun 2013 22:27:12 -0700, Dylan Knutson tcdknut...@gmail.com wrote: Hello, I'm a bit confused as to how the DMD buildbot is supposed to work: it seems like 50% of the time (ballparked from the first 15 or so pull requests), the buildbot just doesn't report failures or successes. This bugs (heh) me a little bit because there are tons of months old pull requests just waiting in the pipeline, stuck at Determining merge status. I don't know if this is part of the review process or caused by the yellow status, but for instance I've opened up a bug report a week or so prior: http://d.puremagic.com/issues/show_bug.cgi?id=10113 and a few days afterwards, a pull request was submitted: https://github.com/D-Programming-Language/dmd/pull/2080 which, turns out, was more or less a dup of another pull request, submitted 6 months ago: https://github.com/D-Programming-Language/dmd/pull/1358 I'll see if I can't give you a stab at an explanation. Note that I did not write the Auto-Tester, that is the work of Brad Anderson. Here are some facts about the AT as I understand them: I wish I could take credit for that. It's the work of the awesome Brad Roberts though.
Re: Error after installing DMD v2.063
On Sunday, 2 June 2013 at 21:00:57 UTC, Russel Winder wrote: On Sun, 2013-06-02 at 13:11 -0700, Jonathan M Davis wrote: On Sunday, June 02, 2013 15:47:36 Gary Willoughby wrote: I've just run: sudo ln -s /usr/lib/x86_64-linux-gnu/libphobos2.so /usr/lib/x86_64-linux-gnu/libphobos2.so.0.63 for now but I've never had to do that before. Is this a problem with the installer? If you had to do that, then yes. I didn't have to do that on my Debian Unstable, but maybe I was lucky. However the structure:
libphobos2.so (the file)
libphobos2.so.0.63 (a symbolic link to libphobos2.so)
is non-standard and not compliant. The standard structure should be:
libphobos2.so.0.63 (the file)
libphobos2.so.0 (a symbolic link to libphobos2.so.0.63)
libphobos2.so (a symbolic link to libphobos2.so.0)
And the symlinks are created automagically by tooling.
Re: Ten Things I Like about D
On 6/2/13 11:28 PM, Daren Scot Wilson wrote: What I like about D... [snip] Would be great if you pasted that all in a blog entry. Though I suggest that comparisons against The Bard, Bhagavad Gita etc. shouldn't make the editorial pass. (Also, an Amazon review would be awesome.) Anyhow, this was a great read. Thanks! Andrei
Re: Slow performance compared to C++, ideas?
On 3 June 2013 01:53, Roy Obena ro...@gmail.com wrote: On Sunday, 2 June 2013 at 14:34:43 UTC, Manu wrote: On 2 June 2013 21:46, Joseph Rushton Wakeling Well this is another classic point actually. I've been asked by my friends at Cambridge to give their code a once-over for them on many occasions, and while I may not understand exactly what their code does, I can often spot boat-loads of simple functional errors. Like basic programming bugs; out-by-ones, pointer logic fails, clear lack of understanding of floating point, or logical structure that will clearly lead to incorrect/unexpected edge cases. And it blows my mind that they then run this code on their big sets of data, write some big analysis/conclusions, and present this statistical data in some journal somewhere, and are generally accepted as an authority and taken seriously! You're making this up. I'm sure they do a lot of data-driven tests or simulations that make most errors detectable. They may not be savvy programmers, and their programs may not be error-free, but boat-loads of errors? C'mon. I'm really not. I mean, this won't all appear in the same function, but I've seen all these sorts of errors on more than one occasion. I suspect that in most cases it will just increase their perceived standard deviation, otherwise I'm sure they'd notice it's all wrong and look for their bugs. But it's sad if a study shows higher than true standard deviation because of code errors, or worse, if it does influence the averages slightly, but they feel the result is plausible within their expected tolerance. The scariest state is the idea that their code is *almost correct*. Clearly, they should be using D ;)
Re: A Small Contribution to Phobos
On Monday, 3 June 2013 at 02:31:00 UTC, Andrei Alexandrescu wrote: On 6/2/13 2:43 PM, monarch_dodra wrote: I think I just had a good idea. First, we introduce cached: cached will take the result of front, but only evaluate it once. This is a good idea in and of itself, and should take the place of .array() in UFCS chains. Yah, cached() (better cache()?) should be nice. It may also offer lookahead, e.g. cache(5) would offer a non-standard lookahead(size_t n) up to 5 elements ahead. Hum... That'd be a whole different ballpark in terms of power, as opposed to the simple-minded cached I had in mind. But I think both can coexist anyway, so I see no problem with adding extra functionality. From there, tee is nothing more than "calls funs on the front element every time front is called, then returns front". From there, users can use either of: MyRange.tee!foo(): This calls foo on every front element, and several times if front gets called several times. MyRange.tee!foo().cached(): This calls foo on every front element, but only once, and guaranteed at least once, if it gets iterated. I kinda dislike that tee() is hardly useful without cache. Andrei I disagree. One thing a user could expect out of tee is to print on every access, just to see which elements get pushed down the pipe, and in which order, as opposed to just printing my range. In particular, I don't see why tee would not mix with random access. For example, with this program:

auto r = [4, 3, 2, 1].tee!writeln();
writeln("first sort (not sorted)");
r.sort();
writeln("second sort (already sorted)");
r.sort();

I can see the output as: first sort (not sorted) 2 1 1 2 1 3 1 1 3 2 2 1 2 4 1 1 4 2 2 1 3 3 2 3 2 1 3 2 4 3 second sort (already sorted) 3 4 3 2 3 2 1 2 1 2 1 3 2 4 3 which gives me a good idea of how costly the sort algorithm is. It's a good way to find out if cache(d) or array should be inserted in my chain.
Re: Labels as values and threaded-code interpretation
On 2013-06-02 19:44, Walter Bright wrote: The curious question is why it never gets into the newer C and C++ Standards. It's like inline assembly. I think basically every compiler supports it but it's not in the standard. At least there's a compiler for each platform supporting it. -- /Jacob Carlborg
Re: Slow performance compared to C++, ideas?
On 3 June 2013 02:37, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: On 6/2/13 9:59 AM, Manu wrote: I've never said that virtuals are bad. The key function of a class is polymorphism. But the reality is that in non-tool or container/foundational classes (which are typically write-once, use-lots; you don't tend to write these daily), a typical class will have a couple of virtuals, and a whole bunch of properties. I've argued if no dispatch is needed just make those free functions. You're not going to win many friends, and probably not many potential D users, by insisting people completely change their coding patterns that they've probably held for decades on a trivial matter like this. And if that's to be realistic, there are some problems that need to be addressed with that. The template issue for a start. _Everything_ in a class is supposed to be overridable, unless inherited and explicitly finalized. Who says? I certainly wouldn't say that. Sounds like a horrible idea to me. C++ and C# users would never say that. It sounds like a disaster waiting to happen to me. There are functions that the author intended to be overridden, and functions that have no business being overridden, that the author probably never imagined anyone would override. What if someone does come along and override one of these, and it was never designed to work under that circumstance in the first place? At very least, it will have never been tested. That's not a very robust API offering if you ask me. It's sort of a historical accident that things got the way they are. But in D we know better because we have the module-level privacy model and UFCS. So we should break clean from history. Why? Can you actually justify what's bad about a class containing its properties/accessors? (I've read the article you pointed me at once, it was interesting, but didn't convince me) Interestingly, you didn't actually disagree with my point about the common case here, and I don't buy the Java doctrine.
Re: Error after installing DMD v2.063
On 2013-06-03 04:33, Andrei Alexandrescu wrote: I quoted all of the above because I so much agree with it. Packaging all OSs in a zip is really disingenuous and now it's come to a head. Let's fix this for 2.064. DVM relies on DMD being packaged as a single zip. As long as that is kept, you can do whatever you want. There are probably other tools that relies on this as well. -- /Jacob Carlborg
Re: Error after installing DMD v2.063
On 2013-06-03 02:15, Jonathan M Davis wrote: I don't think that much of anyone around here thinks that the zip should contain all of the OSes. DVM relies on DMD being packaged as a single zip. As long as that is kept, you can do whatever you want. -- /Jacob Carlborg
Re: Error after installing DMD v2.063
On 2013-06-03 00:25, Nick Sabalausky wrote: Yea, I'm working on a replacement. Please keep the existing zip packages as well, we don't want to break DVM :) -- /Jacob Carlborg
Re: Slow performance compared to C++, ideas?
On Monday, 3 June 2013 at 07:06:05 UTC, Manu wrote: There are functions that the author intended to be overridden, and functions that have no business being overridden, that the author probably never imagined anyone would override. What if someone does come along and override one of these, and it was never designed to work under that circumstance in the first place? At very least, it will have never been tested. That's not a very robust API offering if you ask me. This is something just as important as the performance issues. Most of the time people will leave functions to simply use whatever the default is for virtual/final. If it's final, this works fairly well. The author hasn't put in the effort to decide how to handle people overriding their function. But with virtual by default, you don't know if the author actually considered that people will be overriding the function or if it's simply that they didn't bother specifying. I know the vast majority of my code is virtual, simply because I didn't specify the 'final' keyword 500 times, and didn't think about that I'd need to do it. The resulting code is unsafe because I didn't consider those functions actually being overridden. A standard example is the addRange vs add functions. If these are left as default, you have no clue if the author considered that someone would be overriding it. Will addRange call add, or do you need to override both? Is this an implementation detail that may change at any moment? Or when a different class overrides addRange and makes it call add? By forcing the author to write 'virtual', you know that they at least *considered* that someone may override it, and hopefully thought about the consequences or documented whether addRange calls add. Or they may think about it later and realize that it's a mistake to have left it default and make addRange final. Going from final to virtual is fine; going from virtual to final breaks code. 
And of course, the vast majority of functions should *not* be virtual. So why is the default such that you have to specify 'final' for all of these functions? I think that there are substantial benefits to final being the default, but breaking every single program that uses inheritance is a pretty big issue. Still, fixing the code would be quite trivial if you can see a list of the functions that need to be made virtual (either by specifying --transition=virtual or just looking at the compiler errors that pop up when you build). It would even be possible to write a tool to automatically update code using the results of --transition, if it was implemented in a way that gave enough details. Unfortunately this would only solve the issue of making your code work again; you would still have to go through everything and decide whether each function should be virtual.
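Kapps' addRange/add scenario is easy to sketch in D. (The Container and CountingContainer names below are hypothetical, purely for illustration.)

```d
// With virtual-by-default, nothing tells a subclass author whether
// addRange is *guaranteed* to call add, or merely happens to today.
class Container
{
    private int[] items;

    void add(int item) { items ~= item; }

    // Implementation detail: loops over add(). If the base author never
    // documented this, an override of add() may or may not see every
    // insertion, depending on how addRange is implemented internally.
    void addRange(int[] range)
    {
        foreach (item; range)
            add(item); // dispatches to any override of add()
    }
}

class CountingContainer : Container
{
    int count;

    override void add(int item)
    {
        ++count; // counts every insertion...
        super.add(item);
    }
}

void main()
{
    auto c = new CountingContainer;
    c.addRange([1, 2, 3]);
    // ...but this only holds because addRange happens to call add.
    assert(c.count == 3);
}
```

With final-by-default, the base author would have had to write 'virtual' on add and addRange explicitly, which is exactly the "they at least considered it" signal described above.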
Re: Error after installing DMD v2.063
On Monday, June 03, 2013 09:20:58 Jacob Carlborg wrote: On 2013-06-03 02:15, Jonathan M Davis wrote: I don't think that much of anyone around here thinks that the zip should contain all of the OSes. DVM relies on DMD being packaged as a single zip. As long as that is kept, you can do whatever you want. Except that that's _exactly_ what we want to get rid of. It's ridiculous to put them all in one zip. It just wastes bandwidth, and it doesn't work with symlinks, and now that we're adding shared libraries, we need the *nix packages to have symlinks in them. I understand that DVM currently relies on there being a single zip, but aside from trying not to break DVM, I see zero reason to leave it as a single zip. It's just causing us problems. - Jonathan M Davis
Re: Official D Grammar
As threatened at DConf, I've started filing bugs against the grammar specification. Anyone interested can track bug 10233[1], which I've marked as blocked by the various issues I've been finding. As usual, my best guess at D's actual grammar[2] is located here[3]. Top secret project is hinted at here[4]. [1] http://d.puremagic.com/issues/show_bug.cgi?id=10233 [2] I disclaim all responsibility for injuries sustained while looking at the functionDefinition rule. [3] https://github.com/Hackerpilot/DGrammar/blob/master/D.g4 [4] http://hackerpilot.github.io/experimental/std_lexer/phobos/parser.html
Re: Slow performance compared to C++, ideas?
On Monday, 3 June 2013 at 07:30:56 UTC, Kapps wrote: On Monday, 3 June 2013 at 07:06:05 UTC, Manu wrote: There are functions that the author intended to be overridden, and functions that have no business being overridden, that the author probably never imagined anyone would override. What if someone does come along and override one of these, and it was never designed to work under that circumstance in the first place? At very least, it will have never been tested. That's not a very robust API offering if you ask me. This is something just as important as the performance issues. Most of the time people will leave functions to simply use whatever the default is for virtual/final. If it's final, this works fairly well. The author hasn't put in the effort to decide how to handle people overriding their function. But with virtual by default, you don't know if the author actually considered that people will be overriding the function or if it's simply that they didn't bother specifying. I know the vast majority of my code is virtual, simply because I didn't specify the 'final' keyword 500 times, and didn't think about that I'd need to do it. The resulting code is unsafe because I didn't consider those functions actually being overridden. The whole concept of OOP revolves around the fact that a given class and users of that class don't need to know about its subclasses (Liskov's substitution principle). It is the subclass's responsibility to decide what it overrides or not, not the superclass's to decide what is overridden by subclasses. If you want to create a class with customizable parts, pass parameters to the constructor. This isn't what OOP is about. The performance concern is only here because things have been smashed together in an inconsistent way (as is often done in D). In Java, for instance, only overridden functions are actually virtual. Everything else is finalized at link time. 
Which is great, because you are able to override everything when testing, to create mocks for instance, while keeping good performance when actually running the application.
Re: Slow performance compared to C++, ideas?
On Monday, June 03, 2013 10:11:26 deadalnix wrote: The whole concept of OOP revolve around the fact that a given class and users of the given class don't need to know about its subclasses (Liskov's substitution principle). It is subclass's responsibility to decide what it override or not, not the upper class to decide what is overriden by subclasses. It's the base class' job to define the API that derived classes will be overriding and the derived classes' choice as to exactly which functions they override (assuming that they're not abstract and therefore have to be overridden). That doesn't mean that the base class can't or shouldn't have other functions which are not intended to be overridden. Nothing about Liskov's substitution principle requires that the entire API of the base class be polymorphic. - Jonathan M Davis
Re: Slow performance compared to C++, ideas?
On 2013-06-03 10:11, deadalnix wrote: The whole concept of OOP revolve around the fact that a given class and users of the given class don't need to know about its subclasses (Liskov's substitution principle). It is subclass's responsibility to decide what it override or not, not the upper class to decide what is overriden by subclasses. If you want to create a class with customizable parts, pass parameters to the constructor. This isn't OOP what OOP is about. The performance concern is only here because things has been smashed together in a inconsequent way (as it is often done in D). In Java for instance, only overriden function are actually virtual. Everything else is finalized at link time. Which is great because you are able to override everything when testing to create mock for instance, while keeping good performance when actually running the application. I've read a book, Effective Java, which says something like: if you don't intend your class to be subclassed, make it final; otherwise document how to subclass it and which methods to override. -- /Jacob Carlborg
Re: Error after installing DMD v2.063
On 2013-06-03 09:37, Jonathan M Davis wrote: Except that that's _exactly_ what we want to get rid of. It's ridiculous to put them all in one zip. It just wastes bandwidth, and it doesn't work with symlinks, and now that we're adding shared libraries, we need the *nix packages to have symlinks in them. I understand that DVM currently relies on there being a single zip, but aside from trying not to break DVM, I see zero reason to leave it as a single zip. It's just causing us problems. If it was not clear, I would like to keep the cross-platform zip in addition to the platform specific zip/tarball/packages. -- /Jacob Carlborg
Re: Error after installing DMD v2.063
On Monday, June 03, 2013 10:23:10 Jacob Carlborg wrote: On 2013-06-03 09:37, Jonathan M Davis wrote: Except that that's _exactly_ what we want to get rid of. It's ridiculous to put them all in one zip. It just wastes bandwidth, and it doesn't work with symlinks, and now that we're adding shared libraries, we need the *nix packages to have symlinks in them. I understand that DVM currently relies on there being a single zip, but aside from trying not to break DVM, I see zero reason to leave it as a single zip. It's just causing us problems. If it was not clear, I would like to keep the cross-platform zip in addition to the platform specific zip/tarball/packages. Well, part of the problem is that the zip is inherently broken for *nix systems due to the fact that symlinks don't work properly. So, now that we have a .so version of Phobos, the zip just isn't going to work properly anymore. And while we aren't really looking to break DVM, AFAIK, there would be no other reason to keep the zip around other than for DVM. And since the zip isn't going to work properly on *nix systems anyway, I'm not sure that keeping it around for DVM really solves much. - Jonathan M Davis
We need to define the semantics of block initialization of arrays
DMD has always accepted this initializer syntax for static arrays: float [50] x = 1.0; If this declaration happens inside a function, or in global scope, the compiler sets all members of x to 1.0. That is, it's the same as: float [50] x = void; x[] = 1.0; In my DMD pull requests, I've called this 'block initialization', since there was no standard name for it. A lot of code relies on this behaviour, but the spec doesn't mention it!!! The problem is not simply that this is unspecified. A long time ago, if this same declaration was a member of a struct declaration, the behaviour was completely different. It used to set x[0] to 1.0, and leave the others at float.init. I'll call this first-element-initialization, and it still applies in many cases, for example when you use a struct static initializer. Ie, it's the same as: float [50] x; x[0] = 1.0; Note however that this part of the compiler has historically been very bug-prone, and the behaviour has changed several times. I didn't know about first-element-initialization when I originally did the CTFE code, so when CTFE is involved, it always does block initialization instead. Internally, the compiler has two functions, defaultInit() and defaultInitLiteral(). The first does first-element-init, the second does block-init. There are several other situations which do block initialization (not just CTFE). There are a greater number of situations where first-init can happen, but the most frequently encountered situations use block-init. There are even some foul cases, like bug 10198, where due to a bug in CTFE, you currently get a bizarre mix of both first-init and block-init! So, we have a curious mix of the two behaviours. Which way is correct? Personally I'd like to just use block-init everywhere. I personally find first-element-init rather unexpected, but maybe that's just me. I don't know when it would be useful. But regardless, we need to get this sorted out. It's a blocker for my CTFE work. 
Here's an example of some of the oddities:

struct S { int [3] x; }
struct T { int [3] x = 8; }
struct U { int [3][3] y; }

void main()
{
    int [3][4] w = 7;
    assert( w[2][2] == 7);  // Passes, it was block-initialized

    S s = { 8 };   // OK, struct static initializer. first-element-init
    S r = S( 8 );  // OK, struct literal, block-init.
    T t;           // Default initialized, block-init

    assert( s.x[2] == 8);  // Fails; it was first-element-initialized
    assert( r.x[2] == 8);  // Passes; all elements are 8. Block-init.
    assert( t.x[2] == 8);  // Passes; all elements are 8. Block-init.

    U u = { 9 };   // Does not compile
    // Error: cannot implicitly convert expression (9) of type int to int[3LU][3LU]
}
Rust moving away from GC into reference counting
Even as a GC fanboy, I have to admit that reference counting is the trend for systems languages. Rust developers are thinking of moving GC support to the language library while keeping reference counting as the main way to deal with memory management. http://pcwalton.github.io/blog/2013/06/02/removing-garbage-collection-from-the-rust-language/ Quite in sync with the latest discussions going on. -- Paulo
Re: We need to define the semantics of block initialization of arrays
On Monday, 3 June 2013 at 09:06:25 UTC, Don wrote: Personally I'd like to just use block-init everywhere. I personally find first-element-init rather unexpected, but maybe that's just me. I don't know when it would be useful. +1 I see no point in just initialising the first member. If you want that, just default init then set the first member.
Re: Error after installing DMD v2.063
On 2013-06-03 10:34, Jonathan M Davis wrote: Well, part of the problem is that the zip is inherently broken for *nix systems due to the fact that symlinks don't work properly. So, now that we have an so version of Phobos, the zip just isn't going to work properly anymore. And while we aren't really looking to break DVM, AFAIK, there would be no other reason to keep the zip around other than for DVM. And since the zip isn't going to work properly on *nix systems anyway, I'm not sure that keeping it around for DVM really solves much. *nix is really not correct to say. Currently only 64-bit Linux supports shared libraries. It's not like it's broken on all platforms, just one. Sure, it will break once we get support for shared libraries on additional platforms, but if static linking is the default, nothing is broken. Which formats can we use for these platform-specific packages? For which of these are there existing bindings or libraries? -- /Jacob Carlborg
Re: Ten Things I Like about D
Daren Scot Wilson: 3a) Unicode for identifiers. I saw the discussion recently (May 30th or so) and cast my vote in favor of allowing unicode characters for identifiers (and of course comments and string literals.) I think that currently Unicode in identifiers becomes almost tolerable if your programming language has ASCII equivalents for all the Unicode symbols, like Fortress. Bye, bearophile
Re: The state of core.simd
On 3 June 2013 06:38, Benjamin Thaut c...@benjamin-thaut.de wrote: On 01.06.2013 12:18, Benjamin Thaut wrote: I've taken a look at core.simd and I have to say it is unusable. In a very small test program I already found 3 bugs. I've responded in the bugs, but I'll post here too. 1) Using debug symbols together with core.simd will cause an ICE: http://d.puremagic.com/issues/show_bug.cgi?id=10224 Yup, this has bugged me a few times, but I hadn't pestered Walter yet. I usually debug SIMD code with -O -release though, and I don't think -g is compatible with those flags anyway in DMD(?). I just use Visual Studio's asm debugging to see what's going on. 2) The STOUPS instruction is not correctly implemented: http://d.puremagic.com/issues/show_bug.cgi?id=10225 True. I never use unaligned vectors ;) 3) The XMM register allocation is catastrophic: http://d.puremagic.com/issues/show_bug.cgi?id=10226 What do you get when you remove the explicit mov's? float4 result = [1,2,3,4]; result = __simd(XMM.ADDPS, result, result); writefln("%s", result.array); What's the current state of core.simd? Is it still being worked on? Because in its current state it's pretty much unusable. I find it 'usable', but there are still some holes, and cases where it's not efficient. I've been working on std.simd (but was afk for the start of this year), mostly against GDC. Once I'm happy with the API and it's producing the correct code in GDC/LDC, I planned to log a bunch of DMD bugs to get that up to scratch. But I needed a solid goal-post and unit tests first. I'm back on std.simd now (although I haven't had anywhere near as much time as I'd like lately). Hopefully I'll show some significant progress soon.
Re: Feature request: Attribute with which to enable the requirement of explicit-initialization of enum variables
On Monday, 3 June 2013 at 05:56:42 UTC, Maxim Fomin wrote: On Monday, 3 June 2013 at 02:23:18 UTC, Andrej Mitrovic wrote: Let's say you define an enum, which is to be used as a variable: ... Thoughts? I think it is simpler to set a first enum member as invalid. However, I like an idea of supporting analogue of @disable this() mark for any user-defined types, not structs (I mean it would be pretty good if such feature applied on classes could stop creating null references - it's actually not adding new feature, but increasing scope of existing feature). It's completely meaningless on classes: it's already impossible to create an instance of a class which is null, because if it's null it's not an instance of the class in the first place.
Re: File compare/merge
On 4/2/13 3:53, Walter Bright wrote: Life has gotten a lot easier for me trying to manage multiple branches of D since I've been using file compare/merge tools. I use winmerge for Windows, and meld for Linux. They are both free, and work great. What do you use? Under Windows I use TortoiseMerge, part of TortoiseGit/TortoiseSVN but also works standalone. Under OSX/linux I use git diff --color --word-diff :S L.
Re: We need to define the semantics of block initialization of arrays
On 6/3/13, Don turnyourkidsintoc...@nospam.com wrote: A lot of code relies on this behaviour, but the spec doesn't mention it!!! I didn't know about it until Walter mentioned the syntax to me. I've found it quite useful since then. E.g.: char[100] buffer = 0; Without this, the buffer is normally initialized with 0xFF (char.init), and that could break C functions when you pass a pointer to such an array. Personally I'd like to just use block-init everywhere. Me too. You get my vote.
Re: We need to define the semantics of block initialization of arrays
On 2013-06-03, 11:06, Don wrote: Personally I'd like to just use block-init everywhere. I personally find first-element-init rather unexpected, but maybe that's just me. I don't know when it would be useful. But regardless, we need to get this sorted out. It's a blocker for my CTFE work. Votes++; -- Simen
Re: Feature request: Attribute with which to enable the requirement of explicit-initialization of enum variables
On Monday, 3 June 2013 at 11:12:10 UTC, Diggory wrote: On Monday, 3 June 2013 at 05:56:42 UTC, Maxim Fomin wrote: On Monday, 3 June 2013 at 02:23:18 UTC, Andrej Mitrovic wrote: Let's say you define an enum, which is to be used as a variable: ... Thoughts? I think it is simpler to set a first enum member as invalid. However, I like an idea of supporting analogue of @disable this() mark for any user-defined types, not structs (I mean it would be pretty good if such feature applied on classes could stop creating null references - it's actually not adding new feature, but increasing scope of existing feature). It's completely meaningless on classes: it's already impossible to create an instance of a class which is null, because if it's null it's not an instance of the class in the first place. This is again using the wrong terminology, moving meaning from the type to the pointed-to data (if any), as happened recently with dynamic arrays. Nothing promises that because a class type means allocated memory in one language it must in another, and if it doesn't, there is no reason hordes of programmers should stick to the first language's naming convention. Consult the spec on what a class type is in D, and please do not confuse D with other languages. Anyway, this is irrelevant here, because what I mean is:

class A
{
    @disable this(); // or @RequireInit
}

A a; // does not work

Currently @disable prevents allocation with the specified ctor, but does not stop you from creating a null-initialized object. Given the demand for non-nullable classes, it is probably a good idea to support this feature, either by broadening @disable this() in the context of classes or by creating a similar feature from scratch, like @RequireInit. This issue is more important for classes than for enums, and if such a feature is implemented, I see no reason for it not to work with enums as with other user-defined types. And if the consensus is that the feature is not needed for classes, then it is likely even less needed for enums.
Re: Error after installing DMD v2.063
On Mon, 2013-06-03 at 08:18 +0200, deadalnix wrote: […] libphobos2.so.0.63 the file, libphobos2.so.0 a symbolic link to libphobos2.so.0.63, libphobos2.so a symbolic link to libphobos2.so.0. And symlinks are created automagically by tooling. For Debian the symlinks are created by the post-install script for a shared library. So yes, by the tooling. The moral of this story is that the current mechanism for creating the DMD deb file is not compliant with the correct tool chain for creating debs, i.e. it is wrong. There should be a Git repository holding the DMD deb metadata, and then just use all the git-buildpackage stuff. -- Russel. = Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.win...@ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: rus...@winder.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Re: Will I try again? and also C header files.
On Monday, 3 June 2013 at 03:36:52 UTC, SeanVN wrote: Thanks for all the information. It seems that D2 meets all my requirements. I have been reading some of the documentation. It is an extensive language and there is definitely a learning curve for me to overcome. That is manageable though. I looked at the Go programming language. The core ideas in that language are good but it is practically useless for designing desktop applications. The specifics of the standard libraries and the rejection of shared libraries mean it is consigned to being a web-server scripting language only. I am happy that D2 allows me to use 79/80 bit reals, unlike almost every other programming language these days, which only allow access to 64 bit floating point numbers. That's great to hear :) Feel free to post for help with any problems in http://forum.dlang.org/group/digitalmars.D.learn
Re: Error after installing DMD v2.063
On Monday, 3 June 2013 at 12:19:49 UTC, Russel Winder wrote: ... In its most basic form, there should just be a set of instructions for packagers to conform their post-install hooks to. Especially when it comes to the main repos, stuff is built from SVN/Git whenever possible. Providing .deb and .rpm may be convenient sometimes but is not the right way in general.
Re: Feature request: Attribute with which to enable the requirement of explicit-initialization of enum variables
On Sun, 02 Jun 2013 20:03 -0700, Jonathan M Davis jmdavisp...@gmx.com wrote: Your suggestion for an invalid value for the first enum value was a good one and should be enough IMHO if you don't want the default enum value to be valid. Actually, I just figured out that we already have support for what I've asked for:

enum_mod.d:
----
module enum_mod;

private enum MachineEnum
{
    X86,
    X86_64,
}

struct InitEnum(E) if (is(E == enum))
{
    @disable this();

    this(E e)
    {
        value = e;
    }

    E value;
    alias value this;
}

alias Machine = InitEnum!MachineEnum; // fake enum
----

test.d:
----
import enum_mod;

void main()
{
    // Machine machine;          // compile-time error
    Machine machine = Machine.X86; // ok
}
----

How damn cool is that?
Re: Feature request: Attribute with which to enable the requirement of explicit-initialization of enum variables
On 6/3/13, Andrej Mitrovic andrej.mitrov...@gmail.com wrote: Actually, I just figured out that we already have support for what I've asked for. Well, apparently switch/final switch doesn't work with subtyping; I'll try to file this as a bug.
Re: Rust moving away from GC into reference counting
Is it an official position or just a blog post / proposal from one of the developers? Anyway, given the recent findings by Adam, D is not _that_ far away here. Add configurable global allocators, finally implement scope, move the GC to the library and reduce the runtime a bit - and the result may be pretty awesome. The hardest part is still the global allocator and the assumptions the compiler makes about the runtime. However, the fact that Rust developers are already thinking about this, while in the D community most real attention to such stuff only came with Manu, gives them some advantage. On Monday, 3 June 2013 at 09:10:14 UTC, Paulo Pinto wrote: Even as GC fanboy, I have to admit that reference counting is in trend for system languages. Rust developers are thinking to move GC support to the language library while keeping reference counting as the main way to deal with memory management. http://pcwalton.github.io/blog/2013/06/02/removing-garbage-collection-from-the-rust-language/ Quite in sync with the latest discussions going on. -- Paulo
Re: Rust moving away from GC into reference counting
I think we could do the whole owned pointer concept in D too, by making a struct with a disabled copy constructor. I don't know much about Rust's specifics though so not sure if that's an exact match. But I'm already running with the idea that all slices are borrowed, so we could say the same thing about pointers, if you have one, assume it is borrowed and you shouldn't store it nor free it.
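A rough sketch of that idea in D, with a hypothetical Owned name and assuming a malloc/free-backed payload for illustration. The disabled postblit is what makes the ownership unique: the wrapper cannot be copied, only borrowed.

```d
import core.stdc.stdlib : malloc, free;

// Hypothetical owned-pointer wrapper: the unique reference cannot be
// copied, so there is never more than one owner to free the payload.
struct Owned(T)
{
    private T* ptr;

    this(T value)
    {
        ptr = cast(T*) malloc(T.sizeof);
        *ptr = value;
    }

    @disable this(this); // no copies: ownership stays unique

    ~this()
    {
        if (ptr) free(ptr);
    }

    // Lend the raw pointer out without giving up ownership; by the
    // convention described above, callees must not store or free it.
    T* borrow() { return ptr; }
}

void useBorrowed(int* p) { *p += 1; }

void main()
{
    auto o = Owned!int(41);
    useBorrowed(o.borrow());
    assert(*o.borrow() == 42);
    // auto o2 = o; // error: Owned is not copyable
}
```

As Adam notes, there is no refcount to maintain here: since copying is impossible, the count would always be 1.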
Re: Rust moving away from GC into reference counting
On Monday, 3 June 2013 at 14:04:23 UTC, Adam D. Ruppe wrote: I think we could do the whole owned pointer concept in D too, by making a struct with a disabled copy constructor. I don't know much about Rust's specifics though so not sure if that's an exact match. I don't see how a struct with a disabled copy constructor is relevant to owned pointers. The most similar thing D has is the scope qualifier concept (and that is why I want it so badly :)) - a hard guarantee that no pointer to your data will live longer than the data itself. All normal pointers become managed pointers in that sense.
Re: Rust moving away from GC into reference counting
On Monday, 3 June 2013 at 14:20:06 UTC, Dicebot wrote: I don't see how struct with a disabled copy constructor is relevant to owned pointers. My thought is then you can control access to the inner pointer better that way. You couldn't even pass this struct to a function without calling some kind of method, which could return a different type to indicate that it is a lent pointer. It also wouldn't need to be reference counted, since the refcount is always going to be 1 because copying it is impossible. Most similar thing D has is scope qualifier concept (and that is why I do want it so hard :)) - hard guarantee that no pointer to your data will live longer than data itself. All normal pointers become managed pointers in that sense. Yes, I'm just trying to do what we can with the language today.
Re: What's up with pull request buildbots?
- If many pulls are merged into [DMD/DRuntime/Phobos] master in a given period of, for example a day, the AT will restart prior to completing a full test pass. - With the current number of open pulls the AT can complete a full testing pass in roughly 8 hours. - Each pull takes a rough average of 15 minutes to test. Thanks for the explanation. I suppose the fix for my issue would be for reviewers to merge the small bugfixes a bit faster. I don't want to sound like I'm rushing the reviewers; as it is the project has amazing community contribution, and I'm constantly blown away at how much a group of volunteers can accomplish. On a side note, there are a few (lots) of pull requests that are several months old, and master has changed too much for them to be compatible and/or relevant. Perhaps those should just be closed? But consider the effort required to move all of those bugs to a new system, automated tooling or no, it wouldn't be easy or quick. Sorry; let me clarify: I'd suggest that BZ stop accepting new issues, and use the GHI tracker for new bugs. Then, just use BZ as needed until all resolvable issues there have been resolved. As for loss of meta information on Github, eh, I suppose so, but GHI offers issue referencing and tagging, so that's something I guess? I don't expect to convince anyone to switch, just see why it isn't used in the first place. Thank you, Dylan Knutson
Re: Rust moving away from GC into reference counting
On Monday, 3 June 2013 at 14:33:12 UTC, Adam D. Ruppe wrote: My thought is then you can control access to the inner pointer better that way. You couldn't even pass this struct to a function without calling some kind of method, which could return a different type to indicate that it is a lent pointer. Better - sure. But without type system support it will always be inferior. One issue that immediately comes to my mind is that this is yet another case where you either have the right qualifier as the default or need to resort to automatic inference (owning pointers/references are useless if the majority of Phobos does not accept borrowed ones). Another one is template bloat, of course. If we need to resort to workarounds instead of providing a good solid approach for interested users, it just won't work. This will be interesting to hack around as a proof-of-concept experiment but won't be suitable for production usage, IMHO.
Statement is unreachable error
I reach this error all the time. When debugging, for instance, I may want to stick a throw in the middle of some code, or whatever, and to get at it I have to go back to the code, comment out a bunch of stuff, then go back and compile. The check is obviously important to have, but having it on by default is really annoying. In addition, it tends to kick in when using static if. See the sample code below:

uint foo()
{
    static if (condition)
    {
        return 0;
    }

    // Do some computation.
    return value;
}

Now, if you want that piece of code to compile, you have to stick the whole function body into an else, and that really makes the code unreadable sometimes, when you have static ifs within other static ifs. Can we at least disable the feature for termination of control flow that belongs to a different compile-time scope?
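The else-workaround deadalnix describes looks like this, with condition and value as placeholder compile-time values standing in for whatever the real code uses:

```d
enum condition = true; // placeholder compile-time flag
enum value = 42u;      // placeholder computed value

uint foo()
{
    static if (condition)
    {
        return 0;
    }
    else
    {
        // The whole remaining body must live inside the else,
        // or the compiler flags "statement is not reachable".
        return value;
    }
}

void main()
{
    assert(foo() == 0);
}
```

Nesting further static ifs forces yet more else blocks, which is exactly the readability complaint above.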
Re: Inability to dup/~ for const arrays of class objects
On Fri, 31 May 2013 21:04:47 -0400, Peter Williams pwil3...@bigpond.net.au wrote: On 31/05/13 23:58, Steven Schveighoffer wrote: On Fri, 31 May 2013 00:48:47 -0400, Peter Williams That makes programming much easier, doesn't it. I'll just avoid it by using: a = a ~ b; instead of: a ~= b; This is a conservative always reallocate methodology, it should work just like you allocated a new array to hold a and b. That's what I assumed. I'm still getting used to the idea that a op= b isn't just a shorthand for a = a op b. It is if you don't care about where it lands :) I understand that in some cases, it's important or desirable to dictate whether extension-in-place is used or not, but in most cases, you don't care, you just want to change an array to contain more data. If a is frequently large, and b is frequently small, you will kill your performance vs. a ~= b. I doubt that I'll be doing it often enough (i.e. only when I think that it's an issue) for it to matter. The only time I have two variables for the same array is when I pass one to a function as a parameter and if I'm intending to modify it in the function I'll pass it by reference so that there's no gotchas. The only real gotchas come if you append *and then* modify the original data. If you append *after* modifying the original data, or don't modify the original data, then there is no issue. If you only ever have one reference to the array data, there is no issue. There really are very few cases where appending can cause curious behavior. For the most part, it's undetected. I do like the idea that ~= is generally cheap as it potentially makes building lists easy (is there any need for the singly linked list in D?) and I may modify some of my code. I've been allocating arrays using new array[size] where I know that size will be the max needed but that it may be too much (i.e. throwing away duplicates) inserting into the array and then adjusting length to whatever I used. 
In the case where it's highly likely that the whole array will fit in a page, I might as well allocate an empty array and use ~=. NB there's only one copy of the array. You can .reserve the space that you need ahead of time. Then appending will always deterministically go into the reserved block, and won't reallocate. This should be relatively quick. It's not as quick as pre-allocating the entire array and then writing the data directly -- you still need calls into the runtime for appending. The appending feature of D arrays/slices is intended to be good enough for most usages: not horrendously slow, but also not super-optimized for specific purposes. And yes, we still need linked lists; arrays are good for appending, but not inserting :) -Steve
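The reserve-then-append pattern Steve describes, as a minimal sketch:

```d
void main()
{
    int[] a;
    a.reserve(1000);            // one allocation up front
    immutable cap = a.capacity; // at least 1000

    foreach (i; 0 .. 1000)
        a ~= i;                 // appends fill the reserved block

    assert(a.length == 1000);
    assert(a.capacity == cap);  // no reallocation happened
}
```

Each ~= still calls into the runtime to check the block's capacity, which is why this is slower than writing directly into a preallocated new int[1000], but the appends themselves are deterministic: they never move the data.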
Re: Statement is unreachable error
On Monday, 3 June 2013 at 14:53:23 UTC, deadalnix wrote: I reach this error all the time. When debugging for instance, I may want to stick a throw in the middle of some code I have the same problem, sometimes for testing purposes (exactly for debugging) I'd like to return early or throw, but then I get compiler errors.
Re: Garbage collection, and practical strategies to avoid allocation
On Sat, 01 Jun 2013 07:10:07 -0400, Michel Fortin michel.for...@michelf.ca wrote: On 2013-06-01 02:02:53 +, Manu turkey...@gmail.com said: * find a solution for deterministic embedded garbage collection I think reference counting while still continuing to use the current GC to release cycles is the way to go. It wouldn't be too hard to implement. +1 I was going to write this exact post, but you already did :) -Steve
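What exists today along those lines is std.typecons.RefCounted, which covers the acyclic part; cycles are exactly what the backing GC would still have to break in the hybrid scheme Michel describes. A sketch (Payload is a made-up type):

```d
import std.typecons : RefCounted;

struct Payload
{
    int value;
}

void main()
{
    auto a = RefCounted!Payload(42); // heap-allocates, refcount = 1
    auto b = a;                      // copy bumps the count; no GC involved
    assert(b.value == 42);
}   // count drops to 0 here; the payload is freed deterministically
```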
Re: Statement is unreachable error
On Monday, 3 June 2013 at 14:57:13 UTC, Andrej Mitrovic wrote: On Monday, 3 June 2013 at 14:53:23 UTC, deadalnix wrote: I reach this error all the time. When debugging for instance, I may want to stick a throw in the middle of some code I have the same problem, sometimes for testing purposes (exactly for debugging) I'd like to return early or throw, but then I get compiler errors. It would be nice to have a compiler switch that turns off these errors.
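One workaround people use today to keep -w happy (a sketch, not an endorsement; the exact diagnostic text varies between compiler versions):

```d
import std.stdio;

int compute(int x)
{
    // Sticking an unconditional throw in here makes the compiler
    // (with -w) reject the function: the return becomes unreachable.
    //     throw new Exception("bail out early for debugging");
    //     return x * 2;

    // Workaround: route the throw through a runtime value the compiler
    // does not constant-fold, so the return stays "reachable".
    bool debugging = true;
    if (debugging)
        throw new Exception("bail out early for debugging");
    return x * 2;
}

void main()
{
    try
        writeln(compute(21));
    catch (Exception e)
        writeln("caught: ", e.msg);
}
```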
Re: Slow performance compared to C++, ideas?
On 6/3/13 3:05 AM, Manu wrote: On 3 June 2013 02:37, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: On 6/2/13 9:59 AM, Manu wrote: I've never said that virtuals are bad. The key function of a class is polymorphism. But the reality is that in non-tool or container/foundational classes (which are typically write-once, use-lots; you don't tend to write these daily), a typical class will have a couple of virtuals, and a whole bunch of properties. I've argued if no dispatch is needed just make those free functions. You're not going to win many friends, and probably not many potential D users, by insisting people completely change coding patterns that they've probably held for decades over a trivial matter like this. This is actually part of the point. You keep on discussing as if we design the language now, when in fact there's a lot of code out there that relies on the current behavior. We won't win many friends if we break every single method that has ever been overridden in D, over a trivial matter. Andrei
Re: D Ranges in C#
On Sat, 01 Jun 2013 04:11:59 -0400, bearophile bearophileh...@lycos.com wrote: David Piepgrass: In fact, most STL algorithms require exactly two iterators--a range--and none require only a single iterator I think there are some C++ data structures that store many single iterators. If you instead store ranges you double the data amount. This is true. In dcollections, I have the concept of cursors. These are 1- or 0-element ranges (they are 0 after the first popFront). For almost all cases, they require an extra boolean to designate the empty state. So while not consuming 2x, it's more like 1.5x-ish. Because of alignment, it pretty much requires 2x. There have been suggestions on utilizing perhaps-unused bits in the pointer to designate the empty flag, but I'm a bit apprehensive about that, especially if the node is GC-managed. But besides the space requirements, the *concept* of a single-element pointer is very much needed. And cursors fill that role. As an example, std.algorithm.find consumes a range until it finds the specific value. It then returns a range of the remaining data. But what if you wanted the range of the data UNTIL the first value? In std.container, we have three functions to deal with this: upperBound, lowerBound, and equalRange. But even with these, it can be difficult to construct intersections of these two ranges. In dcollections, we have one function, find, which returns a cursor. Then using the cursor, you can construct the range you need: auto r = mymap[mymap.find(5)..$]; // opDollar not supported yet, but will be. auto rbefore = mymap[mymap.begin()..mymap.find(5)]; All of this is safe and correct, and I think it reads rather well. There are other good places where single-element ranges are useful. It is important to note that a cursor is NOT equivalent to a range of one element. Such a range in a node-based container has two node pointers. A cursor MUST only point at exactly one element.
This is important when considering cursor invalidation -- removing an element unreferenced by a cursor should not invalidate that cursor (depending on how the container is structured). For example, adding a new value to a hash set, or removing an unrelated value from a hash set may invalidate all ranges, but not cursors. -Steve
Re: Any plans to fix Issue 9044? aka Language stability question again
25.05.2013 14:07, Denis Shelomovskij wrote: As those of you who write non-toy projects in D know, from time to time your projects become unbuildable because of Issue 9044 [1], and you have to juggle files and randomly copy/move functions from one library to another to untrigger the issue, creating a mess marked 'Issue 9044 workaround'. It becomes really annoying when your one-file project using an external library fails, as it forces you to juggle that library's files (e.g. VisualD's `cpp2d` project, which triggers the issue randomly). I'd never complain about such things, but the language tends to be called stable by its main maintainers, and I'd like to finally see an official definition of this stability, as it obviously contradicts my personal, very loyal definition (e.g. I have nothing against breaking changes if they go in a good direction). [1] http://d.puremagic.com/issues/show_bug.cgi?id=9044 So now the issue is marked as a duplicate of Issue 6461 [1]. The issue has no votes. There is no official answer about when it will be fixed. Am I the only one who hits it in almost every project once it's about a dozen days into development? [1] http://d.puremagic.com/issues/show_bug.cgi?id=6461 -- Денис В. Шеломовский Denis V. Shelomovskij
Re: Slow performance compared to C++, ideas?
On Monday, 3 June 2013 at 15:27:58 UTC, Andrei Alexandrescu wrote: This is actually part of the point. You keep on discussing as if we design the language now, when in fact there's a lot of code out there that relies on the current behavior. We won't win many friends if we break every single method that has ever been overridden in D, over a trivial matter. Agreed. But can you please consider the export proposal? It would allow finalization via LTO, so that we can avoid virtual dispatch when it is unneeded.
Re: Error after installing DMD v2.063
On Sun, 2013-06-02 at 17:50 -0700, Walter Bright wrote: […] The complaint from Russel was about the .deb file. In any case, anyone is free to create a script to build whatever combination they want, in any format they want, and submit it as a pull request to installer. Nobody has to wait on me to do it. https://github.com/D-Programming-Language/installer The use of a script like this is just totally the wrong way of building debs. I can create the debian directory to replace the script (*) but it requires the tarball of the release to be available. Is there a tarball, or only this infamous zipfile? (*) Not immediately, but sometime in late August. -- Russel. = Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.win...@ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: rus...@winder.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Re: Rust moving away from GC into reference counting
Haha, wow. Indeed, isn't that well timed with respect to recent discussions! ;) Post-DConf, I gave it some serious thought, and I'm convincing myself more and more each day that it's the way to go. On 3 June 2013 19:10, Paulo Pinto pj...@progtools.org wrote: Even as a GC fanboy, I have to admit that reference counting is the trend for systems languages. The Rust developers are thinking of moving GC support to the language library while keeping reference counting as the main way to deal with memory management. http://pcwalton.github.io/blog/2013/06/02/removing-garbage-collection-from-the-rust-language/ Quite in sync with the latest discussions going on. -- Paulo
Re: Rust moving away from GC into reference counting
On Mon, Jun 03, 2013 at 04:33:10PM +0200, Adam D. Ruppe wrote: On Monday, 3 June 2013 at 14:20:06 UTC, Dicebot wrote: I don't see how struct with a disabled copy constructor is relevant to owned pointers. My thought is then you can control access to the inner pointer better that way. You couldn't even pass this struct to a function without calling some kind of method, which could return a different type to indicate that it is a lent pointer. This is an old idea. It's the same as C++'s auto_ptr, and the same as an old idea I independently came up with many years ago. Basically, you have two kinds of pointers: a reference pointer and an owner pointer. An owner pointer can be freely copied into a reference pointer, but never the other way around. An owner pointer is unique, so it has destructive copy semantics (passing it into a function invalidates it in the caller's scope, for example -- if a function doesn't need ownership of the passed object, it should take a reference argument, to which an owner pointer implicitly converts). When it goes out of scope, it's freed. Reference pointers are never freed. Of course, this scheme doesn't correctly deal with dangling reference pointers, but in a manual memory management system, you have to deal with that manually anyway, so it doesn't really matter. Distinguishing between these two kinds of pointers for the most part eliminates a lot of pointer-related bugs. For the most part, this scheme is sufficient to handle a majority of pointer uses. The remaining cases involve multiple references to objects with no clear ownership designation or lifetime. IME, this is relatively confined to specific areas of usage, for which the GC is a better memory management scheme anyway. It also wouldn't need to be reference counted, since the refcount is always going to be 1 because copying it is impossible. 
One should always be aware that there's a rough hierarchy of memory management schemes:
- direct malloc/free: maximum control, most error-prone
- auto_ptr (the above scheme): less error-prone
- reference counting: more convenient
- GC: very little control, least error-prone
The further down you go, the less control you have over the memory management process, but also the more convenient it is to write code. The further up you go, the more control you have, but also the harder it is to write correct code (and the more error-prone it is). Given D's design of correctness-first, it would seem that GC by default is the correct choice. But it should definitely also allow moving up the hierarchy for applications that require finer control over memory management. Currently D allows this in theory, but in practice, much of Phobos assumes the GC, which greatly reduces its usefulness in such cases. The most similar thing D has is the scope qualifier concept (and that is why I want it so badly :)) - a hard guarantee that no pointer to your data will live longer than the data itself. All normal pointers become managed pointers in that sense. Yes, I'm just trying to do what we can with the language today. auto_ptr can be easily implemented with the language today. It's more a question of Phobos / certain language constructs *using* it. T -- Some ideas are so stupid that only intellectuals could believe them. -- George Orwell
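A minimal sketch of the owner-pointer idea in today's D, using the disabled-postblit trick mentioned upthread (Owned, borrow, and bump are made-up names; this models only the no-copy property, not destructive move-on-pass):

```d
struct Owned(T)
{
    private T* ptr;

    @disable this(this);    // unique ownership: copying is a compile error

    this(T value)
    {
        import core.stdc.stdlib : malloc;
        ptr = cast(T*) malloc(T.sizeof);
        *ptr = value;
    }

    ~this()
    {
        import core.stdc.stdlib : free;
        free(ptr);          // the owner frees when it goes out of scope
    }

    // "lend" a non-owning reference pointer; never free through this
    T* borrow() { return ptr; }
}

void bump(int* p) { *p += 1; }  // takes a reference, not ownership

void main()
{
    auto o = Owned!int(41);
    bump(o.borrow());
    assert(*o.borrow() == 42);
    // auto copy = o;  // error: struct Owned is not copyable
}
```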
Re: Slow performance compared to C++, ideas?
On 3 June 2013 18:20, Jacob Carlborg d...@me.com wrote: On 2013-06-03 10:11, deadalnix wrote: The whole concept of OOP revolves around the fact that a given class and users of that class don't need to know about its subclasses (Liskov's substitution principle). It is the subclass's responsibility to decide what it overrides or not, not the superclass's to decide what is overridden by subclasses. If you want to create a class with customizable parts, pass parameters to the constructor. This isn't what OOP is about. The performance concern is only here because things have been smashed together in an inconsequential way (as it is often done in D). In Java for instance, only overridden functions are actually virtual. Everything else is finalized at link time. Which is great because you are able to override everything when testing, to create mocks for instance, while keeping good performance when actually running the application. I've read a book, Effective Java, where it says something like: If you don't intend your class to be subclassed, make it final; otherwise document how to subclass and which methods to override. Sounds like even they know the truth I speak, but they must enforce this by convention/documentation rather than offering strict guarantees ;) It's interesting (but not at all surprising) that C#, which is much more modern, decided to go the C++ way rather than the Java way.
Re: Error after installing DMD v2.063
On Sun, 2013-06-02 at 17:47 -0700, Jonathan M Davis wrote: […] All you should have to do is set the PATH so that it has dmd in it. Everything else should just work. There is also the issue of whether the compiled stuff has the correct soname in it. I think the correct solution for the debs is to build from a pure source-only tarball. -- Russel.
Re: Slow performance compared to C++, ideas?
On 3 June 2013 18:11, deadalnix deadal...@gmail.com wrote: On Monday, 3 June 2013 at 07:30:56 UTC, Kapps wrote: On Monday, 3 June 2013 at 07:06:05 UTC, Manu wrote: There are functions that the author intended to be overridden, and functions that have no business being overridden, that the author probably never imagined anyone would override. What if someone does come along and override one of these, and it was never designed to work under that circumstance in the first place? At the very least, it will never have been tested. That's not a very robust API offering if you ask me. This is something just as important as the performance issues. Most of the time people will leave functions to simply use whatever the default is for virtual/final. If it's final, this works fairly well. The author hasn't put in the effort to decide how to handle people overriding their function. But with virtual by default, you don't know if the author actually considered that people will be overriding the function or if it's simply that they didn't bother specifying. I know the vast majority of my code is virtual, simply because I didn't specify the 'final' keyword 500 times, and didn't think about needing to. The resulting code is unsafe because I didn't consider those functions actually being overridden. The whole concept of OOP revolves around the fact that a given class and users of that class don't need to know about its subclasses (Liskov's substitution principle). It is the subclass's responsibility to decide what it overrides or not, not the superclass's to decide what is overridden by subclasses. Then OOP is fundamentally unsafe, because the author will never consider all the possibilities! If you want to create a class with customizable parts, pass parameters to the constructor. This isn't what OOP is about. Eh? The performance concern is only here because things have been smashed together in an inconsequential way (as it is often done in D).
In Java for instance, only overridden functions are actually virtual. Everything else is finalized at link time. Java is not compiled ahead of time; if you compile Java code, all functions are always virtual. It's impossible in D with separate compilation, and dynamic libraries seal the deal. Which is great because you are able to override everything when testing, to create mocks for instance, while keeping good performance when actually running the application. I'm not taking away your ability to make everything virtual, you can type 'virtual:' as much as you like.
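For reference, this is what spelling it out looks like in today's D, where virtual is the default and a final: section flips the rest of the class (Widget and Button are made-up classes, purely to illustrate the attribute syntax):

```d
class Widget
{
    private int w, h;

    void draw() { }              // virtual: intended to be overridden

final:                           // everything from here down is non-virtual
    int width()  { return w; }
    int height() { return h; }
}

class Button : Widget
{
    override void draw() { }     // fine: draw is virtual
    // override int width() { return 0; } // error: cannot override final function
}
```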
Re: What exactly does @safe mean?
On Sun, 02 Jun 2013 03:59:08 -0400, monarch_dodra monarchdo...@gmail.com wrote: On Saturday, 1 June 2013 at 22:15:00 UTC, Jonathan M Davis wrote: Well, given that the safety of the operation relies on what's being passed in, the operation itself can't reasonably be marked as @safe, because you can't guarantee that the operation isn't going to corrupt memory. But isn't that exactly the same as my void foo(int* p) @safe { *p = 0; } example? That relies on what is being passed in to guarantee safety :/ @confused Provably-@safe code depends on the precondition that its parameters are valid and @safe. The easiest way to do this is to mark main as @safe. Then you can't go unsafe. As people have pointed out, there are bugs/holes. They need to be fixed. @trusted should be used VERY cautiously. It basically says I know this is @safe, but the compiler can't prove it. These situations should be very very rare. Think of @safe functions as bricks. By themselves, they are solid and will hold up a building well. But if you put them on top of garbage, they will be as useless as cardboard. -Steve
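Steve's bricks metaphor, in code (a contrived sketch; sum and deref are made-up names):

```d
@safe int sum(int[] a)
{
    int s = 0;
    foreach (x; a)
        s += x;            // bounds-checked slice access: a solid brick
    return s;
}

@trusted int deref(int* p)
{
    // @trusted says: "I know this is @safe, but the compiler can't prove it."
    // The promise only holds if every caller passes a valid pointer.
    return *p;
}

void main()                // deliberately not @safe
{
    assert(sum([1, 2, 3]) == 6);

    int x = 7;
    assert(deref(&x) == 7);    // solid here; a dangling pointer would
                               // put the brick on top of garbage
}
```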
Re: Error after installing DMD v2.063
On Sun, 2013-06-02 at 19:09 -0700, Ellery Newcomer wrote: […] hey! the rpm behaves the same way! Maybe building a fedora package on ubuntu is in fact a terrible idea! Build a Fedora package on Fedora or don't build it at all. Question: Fedora 17, 18, 19… I could currently help with 18, but as soon as 19 is released (as opposed to alpha, beta, RC) I will upgrade. -- Russel.
Re: Slow performance compared to C++, ideas?
On 4 June 2013 01:28, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: On 6/3/13 3:05 AM, Manu wrote: On 3 June 2013 02:37, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: On 6/2/13 9:59 AM, Manu wrote: I've never said that virtuals are bad. The key function of a class is polymorphism. But the reality is that in non-tool or container/foundational classes (which are typically write-once, use-lots; you don't tend to write these daily), a typical class will have a couple of virtuals, and a whole bunch of properties. I've argued if no dispatch is needed just make those free functions. You're not going to win many friends, and probably not many potential D users, by insisting people completely change coding patterns that they've probably held for decades over a trivial matter like this. This is actually part of the point. You keep on discussing as if we design the language now, when in fact there's a lot of code out there that relies on the current behavior. We won't win many friends if we break every single method that has ever been overridden in D, over a trivial matter. You won't break every single method; they already went through that recently when override was made a requirement. It will only break the base declarations, which are far less numerous. How can you justify the change to 'override' with a position like that? We have already discussed that we know PRECISELY the magnitude of breakage that will occur. It is: magnitude_of_breakage_from_override / total_number_of_derived_classes. A much smaller number than the breakage which was gladly accepted recently. And the matter is far from trivial. In fact, if you think this is trivial, then how did the override change ever get accepted? That is most certainly trivial by contrast, and far more catastrophic in terms of breakage.
Re: Error after installing DMD v2.063
On Sun, 2013-06-02 at 18:20 -0700, Jonathan M Davis wrote: […] I don't believe that it's not an ldconfig problem. It's the fact that there's a libphobos2.so and not a libphobos2.so.0.63. It's the exact same problem that the rpm and deb files are having. dmd.conf already makes it so that the linker looks in the right place. Building the deb from a pure source-only tarball using the proper Debian deb-building toolchain will sort all this out. I suspect the same goes for the Fedora RPM -- use the distribution-specific toolchain. It is 12 years since I built any RPMs, so I declare myself lacking in knowledge on that front. Debs I can deal with though. All that needs to be known is: where is the official source release tarball for 2.063? -- Russel.
Re: Rust moving away from GC into reference counting
On Monday, 3 June 2013 at 16:01:29 UTC, H. S. Teoh wrote: C++11 deprecated auto_ptr in favor of unique_ptr, but it's basically the same concept, and it works very well in cases where you prefer to manage your own memory. D should have the same thing in the std lib; it's not difficult to implement. Ref-counted pointers are another similar thing, a bit more difficult to implement correctly, and there's a circular-reference issue with them. The hierarchy of memory management schemes is something we should embrace instead of shun in favor of the GC. I dislike being forced to use the GC, or having to jump through hoops to avoid it, and it's insane to have no control over it even when I really do want to make use of it. The GC can of course be made a lot better, and at least some important manual control can be given to the programmer; for example, we previously discussed ideas like specifying a maximum time limit for each GC run, and also specifying when the GC gets called. Currently we have virtually zero control over the GC (other than enable and disable, which is far too trivial), but I see no reason at all why this must be the case. Allowing significant control over the GC should be independent of the GC implementation, so having a better GC design should in no way reduce or remove the requirement for having that control. Also, a better GC in no way invalidates the need for other memory management schemes, because there will always be situations where a GC is not an appropriate solution, at least not until someone invents the perfect one-size-fits-all GC. --rt
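For reference, the "far too trivial" control available today lives in core.memory; notably, there is no time-budget knob of the kind proposed above:

```d
import core.memory : GC;

void main()
{
    GC.disable();      // no collections may run in this window
    // ... latency-sensitive, allocation-heavy section ...
    GC.enable();

    GC.collect();      // explicitly trigger a collection at a good moment
    GC.minimize();     // return unused pools to the OS where possible
}
```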
Re: Feature request: Attribute with which to enable the requirement of explicit-initialization of enum variables
On Monday, 3 June 2013 at 12:13:30 UTC, Maxim Fomin wrote: On Monday, 3 June 2013 at 11:12:10 UTC, Diggory wrote: On Monday, 3 June 2013 at 05:56:42 UTC, Maxim Fomin wrote: On Monday, 3 June 2013 at 02:23:18 UTC, Andrej Mitrovic wrote: Let's say you define an enum, which is to be used as a variable: ... Thoughts? I think it is simpler to set the first enum member as invalid. However, I like the idea of supporting an analogue of the @disable this() mark for any user-defined type, not just structs (I mean it would be pretty good if such a feature, applied to classes, could stop the creation of null references - it's actually not adding a new feature, but increasing the scope of an existing one). It's completely meaningless on classes: it's already impossible to create an instance of a class which is null, because if it's null it's not an instance of the class in the first place. This is again using the wrong terminology, moving meaning from the type to the pointed-to data (if any), as happened recently with dynamic arrays. Nothing on Earth promises that if in one language a class type is allocated memory, then in another language a class must be so as well; and if it is not, hordes of programmers shouldn't have to follow the first language's naming convention for no reason. Consult the spec for what a class type is in D, and please do not confuse D with other languages. My point is completely applicable to D - it applies to any form of polymorphic type. In D the type of a class variable is determined at runtime, not at compile time, so what you're saying makes no sense. The feature you want is exactly what NotNull!T does; the way you are suggesting to implement it doesn't work.
Re: Slow performance compared to C++, ideas?
On 6/3/13 12:25 PM, Manu wrote: You won't break every single method, they already went through that recently when override was made a requirement. It will only break the base declarations, which are far less numerous. That's what I meant. How can you justify the change to 'override' with a position like that? We have already discussed that we know PRECISELY the magnitude of breakage that will occur. It is: magnitude_of_breakage_from_override / total_number_of_derived_classes. A much smaller number than the breakage which was gladly accepted recently. Well it's kinda too much relativism that the number of breakages is considered small because it's smaller than another number. And the matter is far from trivial. It is trivial. To paraphrase a classic: I'm not taking away your ability to make everything final, you can type 'final:' as much as you like. In fact, if you think this is trivial, then how did the override change ever get accepted? That is most certainly trivial by contrast, and far more catastrophic in terms of breakage. That's a completely different issue, so this part of the argument can be considered destroyed. Andrei
Re: Slow performance compared to C++, ideas?
On Monday, 3 June 2013 at 16:25:24 UTC, Manu wrote: You won't break every single method, they already went through that recently when override was made a requirement. […] A much smaller number than the breakage which was gladly accepted recently. […] how did the override change ever get accepted […] It appears as if either you have an interesting definition of recently, or you are deliberately misleading people by bringing up that point over and over again. According to http://dlang.org/changelog.html, omitting override has produced a warning since D 2.004, which was released back in September 2007! Granted, it was only actually turned from a deprecation warning into an actual deprecation in 2.061 (if my memory serves me right), but it's mostly a flaw in the handling of that particular deprecation that it stayed at the first level for so long. The actual language change was made – and user-visible – almost six (!) years ago, which is a lot on the D time scale. You are also ignoring the fact that, in contrast to requiring override, there is no clean deprecation path for your proposal, at least as far as I can see: Omitting the keyword started out as a warning, and IIRC is still allowed when you enable deprecated features via the compiler switch. How would a similar process look for virtual-by-default? As far as an isolated module with only a base class is concerned, this is not a question of valid vs. invalid code, but a silent change in language semantics. From DConf I know that you actually are a friendly, reasonable person, but in this discussion, you really come across as a narrow-minded zealot to me. So, please, let's focus on finding an actually practical solution! For example, if we had !pure/!nothrow/!final or something along those lines, just mandate that final: is put at the top of everything in your style guide (easily machine-enforceable too) – problem solved?
And maybe it would even catch on in the whole D community and lead to a language change in D3 or a future iteration of the language. David
Re: Slow performance compared to C++, ideas?
On 6/3/13 1:06 PM, David Nadlinger wrote: On Monday, 3 June 2013 at 16:25:24 UTC, Manu wrote: You won't break every single method, they already went through that recently when override was made a requirement. […] A much smaller number than the breakage which was gladly accepted recently. […] how did the override change ever get accepted […] It appears as if either you have an interesting definition of recently, or you are deliberately misleading people by bringing up that point over and over again. According to http://dlang.org/changelog.html, omitting override has produced a warning since D 2.004, which was released back in September 2007! Granted, it was only actually turned from a deprecation warning into an actual deprecation in 2.061 (if my memory serves me right), but it's mostly a flaw in the handling of that particular deprecation that it stayed at the first level for so long. The actual language change was made – and user-visible – almost six (!) years ago, which is a lot on the D time scale. You are also ignoring the fact that, in contrast to requiring override, there is no clean deprecation path for your proposal, at least as far as I can see: Omitting the keyword started out as a warning, and IIRC is still allowed when you enable deprecated features via the compiler switch. How would a similar process look for virtual-by-default? As far as an isolated module with only a base class is concerned, this is not a question of valid vs. invalid code, but a silent change in language semantics. [snip] There's one more issue with the comparison that must be clarified (I thought it was fairly obvious, so I didn't make it explicit in my previous note): override is not comparable, because it improves code correctness and maintainability, for which there is ample prior evidence. It's also a matter for which, unlike virtual/final, there is no reasonable recourse. So invoking the cost of imposing explicit override vs. imposing virtual as an argument for the latter is fallacious.
Andrei
Re: Error after installing DMD v2.063
On 6/2/2013 3:05 PM, Jonathan M Davis wrote: It's done entirely by Walter on his own systems, and I suspect that the deb and rpm files are created from the zip file (though I'm not sure if he creates those or someone else does). We need to change it so that the process for generating them is automated and does not rely on Walter. Work is being done in that area, but it does not appear to be a priority for Walter. Creating debs and rpms is done from the zip file, and the scripts are here: https://github.com/D-Programming-Language/installer I know I'm sounding like a broken record, but those scripts are not a mystery, and anyone can produce pull requests to add to them or fix them.
Re: Feature request: Attribute with which to enable the requirement of explicit-initialization of enum variables
On Monday, 3 June 2013 at 16:46:15 UTC, Diggory wrote: On Monday, 3 June 2013 at 12:13:30 UTC, Maxim Fomin wrote: On Monday, 3 June 2013 at 11:12:10 UTC, Diggory wrote: On Monday, 3 June 2013 at 05:56:42 UTC, Maxim Fomin wrote: On Monday, 3 June 2013 at 02:23:18 UTC, Andrej Mitrovic wrote: Let's say you define an enum, which is to be used as a variable: ... Thoughts? I think it is simpler to set the first enum member as invalid. However, I like the idea of supporting an analogue of the @disable this() mark for any user-defined type, not just structs (I mean it would be pretty good if such a feature, applied to classes, could stop the creation of null references - it's actually not adding a new feature, but increasing the scope of an existing one). It's completely meaningless on classes: it's already impossible to create an instance of a class which is null, because if it's null it's not an instance of the class in the first place. This is again using the wrong terminology, moving meaning from the type to the pointed-to data (if any), as happened recently with dynamic arrays. Nothing on Earth promises that if in one language a class type is allocated memory, then in another language a class must be so as well; and if it is not, hordes of programmers shouldn't have to follow the first language's naming convention for no reason. Consult the spec for what a class type is in D, and please do not confuse D with other languages. My point is completely applicable to D - it applies to any form of polymorphic type. In D the type of a class variable is determined at runtime, not at compile time, so what you're saying makes no sense. No, this is completely wrong. D has a static type system, and the type of an expression is determined at compile time.
import std.stdio;

class A {}
class B : A {}

// belongs to class types irrespective of allocation and polymorphic type
void foo(T)(T t) if (is(T == class)) {}

void main()
{
    A a;
    foo(a);
    a = new B;
    pragma(msg, typeof(a));       // static type system - prints A
    writeln(typeof(a).stringof);  // static type system - prints A
}

in addition to supporting runtime polymorphism. You completely confuse the language's type system with polymorphism and allocation state, a misunderstanding that the official language spec would clear up.
Re: Slow performance compared to C++, ideas?
Am 03.06.2013 18:19, schrieb Manu: On 3 June 2013 18:20, Jacob Carlborg d...@me.com mailto:d...@me.com wrote: On 2013-06-03 10:11, deadalnix wrote: The whole concept of OOP revolves around the fact that a given class and users of the given class don't need to know about its subclasses (Liskov's substitution principle). It is the subclass's responsibility to decide what it overrides or not, not the superclass's to decide what is overridden by subclasses. If you want to create a class with customizable parts, pass parameters to the constructor. This isn't what OOP is about. The performance concern is only here because things have been smashed together in an inconsistent way (as is often done in D). In Java, for instance, only overridden functions are actually virtual. Everything else is finalized at link time. Which is great because you are able to override everything when testing, to create mocks for instance, while keeping good performance when actually running the application. I've read a book, Effective Java, where it says something like: If you don't intend your class to be subclassed, make it final; otherwise document how to subclass and which methods to override. Sounds like even they know the truth I speak, but they must enforce this by convention/documentation rather than offering strict guarantees ;) It's interesting (but not at all surprising) that C#, which is much more modern, decided to go the C++ way rather than the Java way. C# just followed the Object Pascal/Delphi model, which is based on C++. That's why. You have to thank Anders for it. -- Paulo
Re: Rust moving away from GC into reference counting
Am 03.06.2013 18:00, schrieb Manu: Haha, wow. Indeed, isn't that well timed with respect to recent discussions! ;) Post-DConf, I gave it some serious thought and I'm kinda convincing myself more and more each day that it's the way to go. As I mentioned, I prefer GC-based solutions, but then again I live in the JVM/.NET world, so I don't have the memory/timing pressure you have to deal with. But when looking at systems programming languages that offer reference counting as the main memory management scheme (ATS, ParaSail, Objective-C, and now Rust), all of them share a common feature: - Compiler support to remove extra increment/decrement operations. I guess in D's case this support would also be required. -- Paulo
Re: Slow performance compared to C++, ideas?
Am 03.06.2013 10:11, schrieb deadalnix: On Monday, 3 June 2013 at 07:30:56 UTC, Kapps wrote: On Monday, 3 June 2013 at 07:06:05 UTC, Manu wrote: There are functions that the author intended to be overridden, and functions that have no business being overridden, that the author probably never imagined anyone would override. What if someone does come along and overrides one of these, and it was never designed to work under that circumstance in the first place? At the very least, it will never have been tested. That's not a very robust API offering, if you ask me. This is something just as important as the performance issues. Most of the time people will leave functions to simply use whatever the default is for virtual/final. If it's final, this works fairly well. The author hasn't put in the effort to decide how to handle people overriding their function. But with virtual by default, you don't know if the author actually considered that people will be overriding the function, or if it's simply that they didn't bother specifying. I know the vast majority of my code is virtual, simply because I didn't specify the 'final' keyword 500 times, and didn't think about needing to do so. The resulting code is unsafe because I didn't consider those functions actually being overridden. The whole concept of OOP revolves around the fact that a given class and users of the given class don't need to know about its subclasses (Liskov's substitution principle). It is the subclass's responsibility to decide what it overrides or not, not the superclass's to decide what is overridden by subclasses. If you want to create a class with customizable parts, pass parameters to the constructor. This isn't what OOP is about. The performance concern is only here because things have been smashed together in an inconsistent way (as is often done in D). In Java, for instance, only overridden functions are actually virtual. Everything else is finalized at link time.
Which is great because you are able to override everything when testing, to create mocks for instance, while keeping good performance when actually running the application. While this is true for most OO languages, you should not forget the fragile base class problem. But this applies to both cases, regardless of what the default is. -- Paulo
Re: Slow performance compared to C++, ideas?
Am 03.06.2013 18:16, schrieb Manu: On 3 June 2013 18:11, deadalnix deadal...@gmail.com mailto:deadal...@gmail.com wrote: On Monday, 3 June 2013 at 07:30:56 UTC, Kapps wrote: On Monday, 3 June 2013 at 07:06:05 UTC, Manu wrote: There are functions that the author intended to be overridden, and functions that have no business being overridden, that the author probably never imagined anyone would override. What if someone does come along and overrides one of these, and it was never designed to work under that circumstance in the first place? At the very least, it will never have been tested. That's not a very robust API offering, if you ask me. This is something just as important as the performance issues. Most of the time people will leave functions to simply use whatever the default is for virtual/final. If it's final, this works fairly well. The author hasn't put in the effort to decide how to handle people overriding their function. But with virtual by default, you don't know if the author actually considered that people will be overriding the function, or if it's simply that they didn't bother specifying. I know the vast majority of my code is virtual, simply because I didn't specify the 'final' keyword 500 times, and didn't think about needing to do so. The resulting code is unsafe because I didn't consider those functions actually being overridden. The whole concept of OOP revolves around the fact that a given class and users of the given class don't need to know about its subclasses (Liskov's substitution principle). It is the subclass's responsibility to decide what it overrides or not, not the superclass's to decide what is overridden by subclasses. Then OOP is fundamentally unsafe, because the author will never consider all the possibilities! If you want to create a class with customizable parts, pass parameters to the constructor. This isn't what OOP is about. Eh?
The performance concern is only here because things have been smashed together in an inconsistent way (as is often done in D). In Java, for instance, only overridden functions are actually virtual. Everything else is finalized at link time. Java is not compiled. If you compile Java code, all functions are always virtual. That depends: http://www.excelsior-usa.com/jet.html
Re: Slow performance compared to C++, ideas?
The virtual vs. non-virtual thing doesn't really matter that much. We're talking about a 5% performance difference here, at most. You just type 'final' a little bit here and there when you're near the end of writing out your class hierarchy and get a little performance boost.
Re: Rust moving away from GC into reference counting
On Monday, 3 June 2013 at 16:47:35 UTC, Rob T wrote: On Monday, 3 June 2013 at 16:01:29 UTC, H. S. Teoh wrote: C++11 deprecated auto_ptr in favor of unique_ptr, but it's basically the same concept, and it works very well in cases where you prefer to manage your own memory. D should have the same thing in the std lib; it's not difficult to implement. The ref-counted pointers are another similar thing, a bit more difficult to implement correctly, and there's a circular reference issue with them. There is this: http://dlang.org/phobos/std_typecons.html#.Unique I can't comment on how good it is, though.
Re: Slow performance compared to C++, ideas?
On Monday, 3 June 2013 at 16:16:48 UTC, Manu wrote: Java is not compiled. If you compile Java code, all functions are always virtual. Java is JIT compiled, and functions that aren't overridden are finalized automatically by the JIT compiler. It's even able to virtualize at runtime if an override is eventually linked in, or, the other way around, refinalize. We could do that in D as LTO, as long as we have a way to tell the compiler whether a function can be overridden in a shared object.
Re: Error after installing DMD v2.063
On 06/03/2013 01:30 AM, Ellery Newcomer wrote: On 06/02/2013 04:12 PM, Russel Winder wrote: On Sun, 2013-06-02 at 16:03 -0700, Ellery Newcomer wrote: […] $ objdump -p libphobos2.so | grep SONAME SONAME libphobos2.so.0.63 Exactly, the actual file should have the fully qualified soname and all other filenames should be symbolic links to that file. Currently the DMD deb reverses this and therefore violates the standard for deb installation. Actually, your resource above says that the soname should have the format lib{lib}.so.X and the real name should have the format lib{lib}.so.X.Y.Z, where X = version number, Y = minor version number, Z = release number, so the generated .so itself violates the standard. Currently the Phobos makefile generates libphobos2.so.0.63.0, creates two symlinks, libphobos2.so and libphobos2.so.0.63, and sets the soname to libphobos2.so.0.63. The soname currently includes the minor version number because compatibility currently breaks every release; when the Phobos ABI is more stable, it should be removed from the soname. -- Mike Wey
Re: Rust moving away from GC into reference counting
On Monday, 3 June 2013 at 18:10:50 UTC, Kagamin wrote: On Monday, 3 June 2013 at 16:47:35 UTC, Rob T wrote: Also a better GC in no way invalidates the need for other memory management schemes because there will always be situations where a GC is not an appropriate solution, at least not until someone invents the perfect one size fits all GC. phobos has Unique http://dlang.org/phobos/std_typecons.html#.Unique and RefCounted http://dlang.org/phobos/std_typecons.html#.RefCounted RefCounted notably does not work with classes yet (just because no one has taken the time to add support).
Re: Rust moving away from GC into reference counting
On Monday, 3 June 2013 at 16:47:35 UTC, Rob T wrote: Also a better GC in no way invalidates the need for other memory management schemes because there will always be situations where a GC is not an appropriate solution, at least not until someone invents the perfect one size fits all GC. phobos has Unique http://dlang.org/phobos/std_typecons.html#.Unique and RefCounted http://dlang.org/phobos/std_typecons.html#.RefCounted
Re: Slow performance compared to C++, ideas?
On 2013-06-03, 20:00, w0rp wrote: The virtual vs. non-virtual thing doesn't really matter that much. We're talking about a 5% performance difference here, at most. You just type 'final' a little bit here and there when you're near the end of writing out your class hierarchy and get a little performance boost. 5%? Not when the function you call in an inner loop is virtual. I see up to a 100% time increase for simple functions. -- Simen
Re: Slow performance compared to C++, ideas?
On Monday, 3 June 2013 at 16:59:43 UTC, Andrei Alexandrescu wrote: It is trivial. To paraphrase a classic: I'm not taking away your ability to make everything final, you can type 'final:' as much as you like. Won't we at least need 'virtual:' for the inverse of 'final:'?
Re: Feature request: Attribute with which to enable the requirement of explicit-initialization of enum variables
On Monday, 3 June 2013 at 17:36:13 UTC, Maxim Fomin wrote: On Monday, 3 June 2013 at 16:46:15 UTC, Diggory wrote: On Monday, 3 June 2013 at 12:13:30 UTC, Maxim Fomin wrote: On Monday, 3 June 2013 at 11:12:10 UTC, Diggory wrote: On Monday, 3 June 2013 at 05:56:42 UTC, Maxim Fomin wrote: On Monday, 3 June 2013 at 02:23:18 UTC, Andrej Mitrovic wrote: Let's say you define an enum, which is to be used as a variable: ... Thoughts? I think it is simpler to set the first enum member as invalid. However, I like the idea of supporting an analogue of the @disable this() mark for any user-defined type, not just structs (I mean it would be pretty good if such a feature, applied to classes, could stop the creation of null references - it's not actually adding a new feature, but increasing the scope of an existing one). It's completely meaningless on classes: it's already impossible to create an instance of a class which is null, because if it's null it's not an instance of the class in the first place. This is again using the wrong terminology, moving meaning from the type to the pointed-to data (if any), as happened recently with dynamic arrays. Nothing on Earth promises that if in one language a class type is an allocated object, then in another language a class must be the same, and that if it is not, hordes of programmers should adopt the first language's naming convention for no reason. Consult the spec for what a class type is in D and please do not confuse D with other languages. My point is completely applicable to D - it applies to any form of polymorphic type. In D the type of a class variable is determined at runtime, not at compile time, so what you're saying makes no sense. No, this is completely wrong. D has a static type system and the type of an expression is determined at compile time. No, that's wrong: the static type only determines the highest class in the class hierarchy that can be stored; it does not determine the actual runtime type of the expression.
If you try to implement @disable this() using the static type, it breaks all the rules of covariance and contravariance that classes are expected to follow, and thus breaks the type system. That's why it's implemented as NotNull!T.
Re: Error after installing DMD v2.063
On Mon, 2013-06-03 at 20:09 +0200, Mike Wey wrote: […] Currently the Phobos makefile generates libphobos2.so.0.63.0, creates two symlinks, libphobos2.so and libphobos2.so.0.63, and sets the soname to libphobos2.so.0.63. The soname currently includes the minor version number because compatibility currently breaks every release; when the Phobos ABI is more stable, it should be removed from the soname. Uuurrr… this isn't what is in the Debian deb file. :-( Also, there should be a symbolic link libphobos2.so.0, shouldn't there? -- Russel. = Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.win...@ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: rus...@winder.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Re: Error after installing DMD v2.063
On Mon, 2013-06-03 at 10:29 -0700, Walter Bright wrote: On 6/2/2013 3:05 PM, Jonathan M Davis wrote: It's done entirely by Walter on his own systems, and I suspect that the deb and rpm files are created from the zip file (though I'm not sure if he creates those or someone else does). We need to change it so that the process for generating them is automated and does not rely on Walter. Work is being done in that area, but it does not appear to be a priority for Walter. Creating debs and rpms is done from the zip file, and the scripts are here: https://github.com/D-Programming-Language/installer I know I'm sounding like a broken record, but those scripts are not a mystery and anyone can produce pull requests to add to them or fix them. As I noted earlier, the deb creation scripts in that repository are just fundamentally the wrong way of creating debs. Also as noted earlier, I can't do anything about this till August. The result will not be a pull request on the script in the repository, but a new Git repository specifically designed to use the standard Debian deb creation toolchain, and a request to delete the script mentioned above from the whole release toolset. Sorry guv, but you have to use the right tool for the job, and deb creation should use the deb creation tools. The start point has to be a tarball of the source. If this is not part of the distribution release then we need to agree an officially acceptable process for creating a release source tarball. Thanks. -- Russel.
Re: Error after installing DMD v2.063
On 6/3/13 2:30 PM, Russel Winder wrote: On Mon, 2013-06-03 at 10:29 -0700, Walter Bright wrote: On 6/2/2013 3:05 PM, Jonathan M Davis wrote: It's done entirely by Walter on his own systems, and I suspect that the deb and rpm files are created from the zip file (though I'm not sure if he creates those or someone else does). We need to change it so that the process for generating them is automated and does not rely on Walter. Work is being done in that area, but it does not appear to be a priority for Walter. Creating debs and rpms is done from the zip file, and the scripts are here: https://github.com/D-Programming-Language/installer I know I'm sounding like a broken record, but those scripts are not a mystery and anyone can produce pull requests to add to them or fix them. As I noted earlier, the deb creation scripts in that repository are just fundamentally the wrong way of creating debs. Also as noted earlier, I can't do anything about this till August. The result will not be a pull request on the script in the repository, but a new Git repository specifically designed to use the standard Debian deb creation toolchain, and a request to delete the script mentioned above from the whole release toolset. Sorry guv, but you have to use the right tool for the job, and deb creation should use the deb creation tools. The start point has to be a tarball of the source. If this is not part of the distribution release then we need to agree an officially acceptable process for creating a release source tarball. Thanks. Instead of planning to work on it, one alternative would be to post bits and pieces of information in a bug report and guide others how to do it. Just a thought. Andrei
Re: Error after installing DMD v2.063
On Mon, 2013-06-03 at 14:32 -0400, Andrei Alexandrescu wrote: […] Instead of planning to work on it, one alternative would be to post bits and pieces of information in a bug report and guide others how to do it. Just a thought. OK. How about trying: http://www.debian.org/doc/manuals/maint-guide/build.en.html http://wiki.debian.org/PackagingWithGit http://lpenz.org/articles/debgit/index.html http://honk.sigxcpu.org/projects/git-buildpackage/manual-html/gbp.html The core of this is having the source tarball and a debian directory with all the appropriate files so that git-buildpackage can do its stuff. I am not sure a bug report helps; what is needed is action and the creation of the Git repository to do the stuff. My problem is that I have Groovy, Python and Scala stuff to do for the next two months :-( I hope this helps. -- Russel.