Re: dmd 2.063 released with 260 bugfixes and enhancements
On Friday, 31 May 2013 at 00:28:58 UTC, Jesse Phillips wrote: Perfect chance to try out the new release process. Patch 2.063 and release 2.063.1. Actually v2.063.1 is the current one, check the dmd tags ;) It will be v2.063.2 P.S. It has made the life of Linux packagers SOOO much easier ^_^
Re: dmd 2.063 released with 260 bugfixes and enhancements
On Thursday, 30 May 2013 at 22:41:08 UTC, Rob T wrote: Prior to issuing a release like this, it should instead be made public as a stable release candidate with a full installer on the downloads page for review by anyone. After the bugs are worked out and some time has elapsed, the stable RC is simply declared stable final and re-released as such with the usual big announcement. --rt I disagree. Anything made public is treated as a release. It does not matter what you call it. And the new release scheme with minor version numbers has been adopted recently, so there is no issue in fixing those regressions and updating the release with a v2.063.2 tag now - it essentially amounts to the same thing.
Re: dmd 2.063 released with 260 bugfixes and enhancements
On 2013-05-30 17:16, Andrei Alexandrescu wrote: Hello, We are pleased to announce that dmd 2.063, the reference compiler of the D programming language, is now available for download for OSX, Windows, and a variety of Unixen: http://dlang.org/download.html The -transition=field flag seems to be undocumented. -- /Jacob Carlborg
Re: dmd 2.063 released with 260 bugfixes and enhancements
I want to express my sincere gratitude to everyone who has been involved in making this release. It is a major breakthrough in the D development and release process and a solid step towards a truly mature project. Really, a lot of small but important changes have just happened that make this release extra awesome: 1) It was a great pleasure to see real project authors coming to the beta list and calling out problems found by their code before the release was made. It is how it was intended to work, and it finally works. 2) Judging by tags in the D repos, the new release versioning scheme is making its way into being adopted and used. That means that those few regressions not caught by the beta can be fixed now and released as 2.063.2 - no need to wait for 2.064 and suffer. Awesome. 3) The final decision about the major breaking bug fix is surprisingly wise and close to the well-defined transition process I have been asking for for so long. (If you have missed it, it looks like this: release n: warning; release n+1: deprecation; release n+2: new behavior.) The resolution of this problem alone is a major step towards the proud title of a stable and mature project, and I hope it won't be a rare exception ;) 4) The changelog. It rocks. It fulfills my deeply hidden desires. Andrej, you have done an astonishing job here. Yay!
Re: dmd 2.063 released with 260 bugfixes and enhancements
Nick Sabalausky, on May 30 at 22:47 you wrote: On Fri, 31 May 2013 03:50:51 +0200 Rob T al...@ucora.com wrote: Yes, but because there's no link on the main page and no installer, the RCs are effectively closed to the public, because only people in the know will go through the trouble to get the RCs and install them. I'm only talking about doing something like this when the next release gets very close to release. It'll make installing a new release far less risky business when it goes final. It'll improve confidence in the final product and further reduce or completely eliminate nasty surprises. IMO doing this will have a positive effect similar to what we've seen with the improved release log. --rt Yea, greater visibility for the betas could probably still help. Yeah, and that's exactly what I suggested here several times, and ultimately at DConf :). A step forward has been made in this release; as you said, betas were announced in this NG for the first time, whereas before they were announced only in the beta ML. Now we need to put version numbers on the different betas, put a link to them on the main page (with the complete changelog, so people also know what's new and try the new stuff), and wait a little longer before releasing the final version. Also, to avoid a lot of release burden, I don't think there is a need to fix a regression and instantly pseudo-release a new beta, ending up with 6 or 7 betas. It's better to wait a little longer between releases and get more fixes into each release (unless there is a very bad regression that makes the compiler almost unusable). Maybe having something like a fixed weekly release of betas would be a good idea. If one week there are no more bug reports against the beta, then release that as the final version. Betas are really release candidates; I think it might be a good idea to just start calling them what they are, so people are more tempted to download them and try them.
I hope next time it is DMD 2.064rc1, 2.064rc2, etc... -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ -- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) -- Fantasy is as important as wisdom
Re: dmd 2.063 released with 260 bugfixes and enhancements
Dicebot, on May 31 at 10:01 you wrote: On Thursday, 30 May 2013 at 22:41:08 UTC, Rob T wrote: Prior to issuing a release like this, it should instead be made public as a stable release candidate with a full installer on the downloads page for review by anyone. After the bugs are worked out and some time has elapsed, the stable RC is simply declared stable final and re-released as such with the usual big announcement. --rt I disagree. Anything made public is treated as a release. This is just plain and completely wrong. I don't know many big-ish open-source projects that don't have release candidates, and I haven't seen any distribution targeted at end users using release candidates. Have you ever seen a Linux distribution shipping an rc kernel (that is not only installable by explicit user action), for example? -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ -- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) -- JUGAR COMPULSIVAMENTE ES PERJUDICIAL PARA LA SALUD. -- Casino de Mar del Plata
Re: dmd 2.063 released with 260 bugfixes and enhancements
On Friday, 31 May 2013 at 00:28:58 UTC, Jesse Phillips wrote: On Thursday, 30 May 2013 at 22:04:07 UTC, Andrej Mitrovic wrote: On 5/30/13, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: Hello, We seem to have a regression affecting the zipped release: http://d.puremagic.com/issues/show_bug.cgi?id=10215 But I can't recreate this in git-head. It must have been a specific commit the release is based on that introduced this behavior. I don't know how it went through the autotester unnoticed, this is a pretty major bug. Perfect chance to try out the new release process. Patch 2.063 and release 2.063.1. That would be good. The problem is a bit annoying.
Re: dmd 2.063 released with 260 bugfixes and enhancements
Damn you D - I'm using up a large chunk of my free time reading the improved and very-readable Change Log. A great update to D and Log. -=mike=-
Re: dmd 2.063 released with 260 bugfixes and enhancements
Great work all :-) Many thanks to everyone involved, it really is appreciated. Stewart
Re: dmd 2.063 released with 260 bugfixes and enhancements
On Friday, 31 May 2013 at 09:08:17 UTC, Leandro Lucarella wrote: This is just plain and completely wrong. I don't know many big-ish open-source projects that don't have release candidates, and I haven't seen any distribution targeted at end users using release candidates. Have you ever seen a Linux distribution shipping an rc kernel (that is not only installable by explicit user action), for example? Oh, I meant it completely the other way around - if some release is made available through common channels, it does not matter whether it is called beta or RC; people will just start using it. Remember the issue with UDA syntax? In mature projects an RC does not differ that much from the actual release other than by extra regression fixes. But for D the process is not THAT smooth yet, and it will take some time to settle things down.
DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
http://www.reddit.com/r/programming/comments/1feem1/dconf_2013_day_2_talk_3_from_c_to_d_by_adam_wilson/ {Enj,Destr}oy! Andrei
Re: dmd 2.063 released with 260 bugfixes and enhancements
Dicebot, on May 31 at 10:11 you wrote: I want to express my sincere gratitude to everyone who has been involved in making this release. It is a major breakthrough in the D development and release process and a solid step towards a truly mature project. Really, a lot of small but important changes have just happened that make this release extra awesome: 1) It was a great pleasure to see real project authors coming to the beta list and calling out problems found by their code before the release was made. It is how it was intended to work, and it finally works. 2) Judging by tags in the D repos, the new release versioning scheme is making its way into being adopted and used. That means that those few regressions not caught by the beta can be fixed now and released as 2.063.2 - no need to wait for 2.064 and suffer. Awesome. About this, AFAIK 2.063.1 is really what's in the release, but the binary version number (and the zip name) say only 2.063. I think that should be fixed, and the real version number should be present in both the downloadables and the binary. Also a micro changelog should be provided, with only the regressions that were fixed. And I don't mean to minimize the incredible breakthrough concerning the release process in this cycle, just pointing out places where we can still do better :) -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ -- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) -- Your success is measured by your ability to finish things
Re: dmd 2.063 released with 260 bugfixes and enhancements
Dicebot, on May 31 at 13:44 you wrote: On Friday, 31 May 2013 at 09:08:17 UTC, Leandro Lucarella wrote: This is just plain and completely wrong. I don't know many big-ish open-source projects that don't have release candidates, and I haven't seen any distribution targeted at end users using release candidates. Have you ever seen a Linux distribution shipping an rc kernel (that is not only installable by explicit user action), for example? Oh, I meant it completely the other way around - if some release is made available through common channels, it does not matter whether it is called beta or RC; people will just start using it. Remember the issue with UDA syntax? The UDAs issue was completely different: there were no betas including UDAs. People using it were just using a development snapshot. In mature projects an RC does not differ that much from the actual release other than by extra regression fixes. But for D the process is not THAT smooth yet, and it will take some time to settle things down. This is pretty much how it is now. Only minor regressions are usually found in a beta/rc. There are no changes in behaviour or new features. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ -- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) -- All fathers are intimidating. They're intimidating because they are fathers. Once a man has children, for the rest of his life, his attitude is, To hell with the world, I can make my own people. I'll eat whatever I want. I'll wear whatever I want, and I'll create whoever I want. -- Jerry Seinfeld
Re: dmd 2.063 released with 260 bugfixes and enhancements
On Friday, 31 May 2013 at 14:08:18 UTC, Leandro Lucarella wrote: In mature projects an RC does not differ that much from the actual release other than by extra regression fixes. But for D the process is not THAT smooth yet, and it will take some time to settle things down. This is pretty much how it is now. Only minor regressions are usually found in a beta/rc. There are no changes in behaviour or new features. Erm, I remember you taking a substantial part in the const initialization discussion, with all the semantics changes and compiler flags added until the final decision was set in stone. That is something better done in a semi-closed beta, in my opinion.
Re: dmd 2.063 released with 260 bugfixes and enhancements
On Friday, 31 May 2013 at 14:08:17 UTC, Leandro Lucarella wrote: And I don't mean to minimize the incredible breakthrough concerning the release process in this cycle, just pointing out places where we can still do better :) Btw, I have included the minor version number in the Arch Linux package version, and may suggest other packagers do the same. The version string shown by the DMD front-end itself is not that important, as the spec shouldn't change within minor versions. It may still be useful though.
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On Fri, 31 May 2013 13:33:21 +0100, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: http://www.reddit.com/r/programming/comments/1feem1/dconf_2013_day_2_talk_3_from_c_to_d_by_adam_wilson/ Excellent talk. It gives us a good idea of the things which are missing for C# conversion (and, to a lesser degree, in general), and good ideas on where to concentrate our efforts. I have old SHA etc. hashing routines in old-style D; this makes me want to spend some time bringing them up to date... R -- Using Opera's revolutionary email client: http://www.opera.com/mail/
Re: dmd 2.063 released with 260 bugfixes and enhancements
On Friday, May 31, 2013 10:17:07 Leandro Lucarella wrote: Yeah, and that's exactly what I suggested here several times, and ultimately at DConf :). A step forward has been made in this release, as you said, betas were announced in this NG for the first time, before they were announced only in the beta ML. http://d.puremagic.com/issues/show_bug.cgi?id=10153 http://d.puremagic.com/issues/show_bug.cgi?id=10154 - Jonathan M Davis
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On Fri, May 31, 2013 at 6:03 PM, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: {Enj,Destr}oy! Sorry I'm new to D so can anyone please explain that Destroy joke to me? (And I have to say that this is the first -announce list I've seen where all subscribers can post! How come it's allowed here?) -- Shriramana Sharma ஶ்ரீரமணஶர்மா श्रीरमणशर्मा
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On Friday, May 31, 2013 19:06:24 Shriramana Sharma wrote: On Fri, May 31, 2013 at 6:03 PM, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: {Enj,Destr}oy! Sorry I'm new to D so can anyone please explain that Destroy joke to me? It's not a D thing. It's an Andrei thing. He likes to tell people to destroy his proposals when he makes them (with the intention that people would point out any problems in them). - Jonathan M Davis
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On 5/31/13 8:33 AM, Andrei Alexandrescu wrote: http://www.reddit.com/r/programming/comments/1feem1/dconf_2013_day_2_talk_3_from_c_to_d_by_adam_wilson/ Hi-def video is now online: http://archive.org/details/dconf2013-day02-talk03 Andrei
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On 05/31/2013 09:33 AM, Andrei Alexandrescu wrote: http://www.reddit.com/r/programming/comments/1feem1/dconf_2013_day_2_talk_3_from_c_to_d_by_adam_wilson/ {Enj,Destr}oy! Andrei Just watched it over lunch and I liked this talk very much. For transforming pieces of code I very often write a Vim regex (multiline is supported with a flag), and when that is not enough, writing a Vim function does the trick. About streams: there is some Phobos support for streams, though it seems not finalized. I wish something were done about the containers. Note that it is very easy to write C# containers in an OOP style, based on T[] and V[K] internally (though a concurrent hash map with read/write locking would need to be done from scratch without using AAs). It is not true that Array!T is equivalent to List<T>. Array!T wants to own its items (because it manages its own memory), so it is only practically usable with structs. Even duplicating the array is unsafe if the element type is a class:

    import std.stdio, std.container;

    class A
    {
        int val;
        this(int v) { val = v; }
        ~this() { writeln("A destroyed"); }
    }

    void func(Array!A list) { }

    void main()
    {
        A a = new A(3);
        Array!A list;
        list ~= a;
        writeln(a.val);   // prints 3
        func(list.dup);   // prints "A destroyed"
        // The object cannot be used anymore, though it
        // is still present in 'list'.
        writeln(a.val);   // prints 0
    }

And one cannot use RefCounted!A because RefCounted doesn't work with classes. I guess that RedBlackTrees suffer the same problem. --jm
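Since Array!T eagerly destroys the class references it owns, the simplest workaround in this situation is to fall back to a plain GC-managed slice, which copies references without running destructors. A minimal sketch, with a class A mirroring the example above:

```d
// Sketch (D ~2.063 era): a plain GC slice copies references without
// running destructors, so the object stays valid after dup.
import std.stdio;

class A
{
    int val;
    this(int v) { val = v; }
}

void func(A[] list) { }   // the duplicated slice is simply collected later

void main()
{
    A a = new A(3);
    A[] list;
    list ~= a;
    func(list.dup);       // dup copies the references only
    writeln(a.val);       // still 3; no destructor has run
}
```

The trade-off is that the slice no longer manages its own memory deterministically, which is exactly the ownership behavior Array!T was designed to provide for structs.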
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On Friday, May 31, 2013 13:59:24 Juan Manuel Cabo wrote: About streams: there is some phobos support for streams, though it seems not finalized. Everything stream-related which is currently in Phobos is outdated and unacceptable, so it will be replaced. A replacement is in the works, but it's not ready yet. - Jonathan M Davis
Re: dmd 2.063 released with 260 bugfixes and enhancements
Dicebot, on May 31 at 16:21 you wrote: On Friday, 31 May 2013 at 14:08:17 UTC, Leandro Lucarella wrote: And I don't mean to minimize the incredible breakthrough concerning the release process in this cycle, just pointing out places where we can still do better :) Btw, I have included the minor version number in the Arch Linux package version, and may suggest other packagers do the same. The version string shown by the DMD front-end itself is not that important, as the spec shouldn't change within minor versions. It may still be useful though. For users it is. I want to know whether the compiler I'm using is the latest with all critical bugfixes included or not. Remember that if we are going massive, we can't count on users installing their compilers themselves anymore. People will start just doing an apt-get install dmd, or even have it preinstalled, and it should be easy for them to know exactly what release they are using. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ -- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) -- Y serán tiempos de vanos encuentros entre humano y humano; en que las fieras se comerán entre ellas y después del final; en que se abríran las tierras y los cielos... y en el medio de la nada Racing saldrá campeón. -- Ricardo Vaporeso
Re: dmd 2.063 released with 260 bugfixes and enhancements
On Friday, 31 May 2013 at 14:08:17 UTC, Leandro Lucarella wrote: About this, AFAIK 2.063.1 is really what's in the release, but the binary version number (and the zip name) say only 2.063. I think that should be fixed and the real version number should be present in both the downloadables and the binary. Also a micro changelog should be provided, with only the regressions that were fixed. Of course, that absolutely makes sense and should be implemented by the next release if possible. And I don't mean to minimize the incredible breakthrough concerning the release process in this cycle, just pointing out places where we can still do better :) Agreed. Looking back just a couple of releases ago, the situation has improved considerably, but as always there are a lot more improvements that can and should be done. As for the comment that RCs will be treated as stable releases, that's hard to swallow, especially when you consider what's going on now. The current release is worse than an RC because it's not labeled for what it is; people will think it's stable when in fact it's not. I think that it is far more professional and responsible to explicitly state that the version on the download page is a release candidate rather than not saying anything at all. Otherwise people will get the wrong impression and think that it is a well-tested and honed stable release. To reduce potential confusion, we can place RCs on a separate download page. The RC can also be a reasonably well-tested version that is near completion, to minimize the amount of re-work and bug potential. Even if it is misused by people who should know better, it'll still perform reasonably well, and the rest of us tinkerers will greatly benefit from having it. Finally, making RCs available to the public will greatly help increase the quality of the final product and increase confidence in it for production use. It'll be a win-win for everyone, no question in my mind. --rt
Re: dmd 2.063 released with 260 bugfixes and enhancements
Dicebot, on May 31 at 16:18 you wrote: On Friday, 31 May 2013 at 14:08:18 UTC, Leandro Lucarella wrote: In mature projects an RC does not differ that much from the actual release other than by extra regression fixes. But for D the process is not THAT smooth yet, and it will take some time to settle things down. This is pretty much how it is now. Only minor regressions are usually found in a beta/rc. There are no changes in behaviour or new features. Erm, I remember you taking a substantial part in the const initialization discussion, with all the semantics changes and compiler flags added until the final decision was set in stone. That is something better done in a semi-closed beta, in my opinion. Well, that case could really be considered just a regression: something that used to work one way was changed in the beta (but was wrong), and during the beta process it was restored (with a better migration/deprecation plan). I still think it is quite different from introducing new features or behaviour changes *intentionally*. You are never covered 100%. But anyway, I'm not against having a first iteration with less exposure (i.e. targeted only at DMD devels), but I don't think we even need a release for that. It's enough to say in the MLs "hey, we are starting with the release process, everyone check the current master and report any problems" and freeze new-feature merges from there. Once everything is lightly tested and at least the devels agree the master is in good shape, we can start shipping proper public release candidates, with the proper changelog, version number, etc. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ -- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) -- - Tata Dios lo creó a usté solamente pa despertar al pueblo y fecundar las gayinas. - Otro constrasentido divino... Quieren que yo salga de joda con las hembras y después quieren que madrugue. -- Inodoro Pereyra y un gallo
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On Fri, 31 May 2013 05:33:21 -0700, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: http://www.reddit.com/r/programming/comments/1feem1/dconf_2013_day_2_talk_3_from_c_to_d_by_adam_wilson/ {Enj,Destr}oy! Andrei I want to apologize for the glaring technical error in the talk. I knew about T.init() when I was writing this, but I was so focused on finding an analog for C#'s default keyword that it completely slipped my mind. -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On Friday, 31 May 2013 at 16:33:45 UTC, Shriramana Sharma wrote: (And I have to say that this is the first -announce list I've seen where all subscribers can post! How come it's allowed here?) The mailing list is actually an interface to the newsgroup, where discussion has always been encouraged.
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On 31.05.2013 19:05, Jonathan M Davis wrote: On Friday, May 31, 2013 13:59:24 Juan Manuel Cabo wrote: About streams: there is some Phobos support for streams, though it seems not finalized. Everything stream-related which is currently in Phobos is outdated and unacceptable, so it will be replaced. A replacement is in the works, but it's not ready yet. Do you know any time frame for when it will probably happen? I would be very grateful.
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On Fri, 31 May 2013 14:13:30 -0400, Piotr Szturmaj bncr...@jadamspam.pl wrote: On 31.05.2013 19:05, Jonathan M Davis wrote: On Friday, May 31, 2013 13:59:24 Juan Manuel Cabo wrote: About streams: there is some Phobos support for streams, though it seems not finalized. Everything stream-related which is currently in Phobos is outdated and unacceptable, so it will be replaced. A replacement is in the works, but it's not ready yet. Do you know any time frame for when it will probably happen? I would be very grateful. I would love to say that I have set aside enough time to do it, but it's very difficult to find the time :( I hate to commit to a certain time frame; I have done that here in the past and have been very wrong with my expectations. That being said, my lack of effort on D stuff is really pissing me off, and I want to spend more time on it. DConf really has yanked me back into D, and I want to finish all the loose ends I've started, including dcollections, this streaming stuff, and some other little bits. -Steve
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On Friday, 31 May 2013 at 17:41:42 UTC, Adam Wilson wrote: On Fri, 31 May 2013 05:33:21 -0700, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: http://www.reddit.com/r/programming/comments/1feem1/dconf_2013_day_2_talk_3_from_c_to_d_by_adam_wilson/ {Enj,Destr}oy! Andrei I want to apologize for the glaring technical error in the talk. I knew about T.init() when I was writing this but I was so focused on finding a analog for C#'s default keyword that it completely slipped my mind. should just be init, it's not a function
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On Fri, 31 May 2013 11:41:04 -0700, John Colvin john.loughran.col...@gmail.com wrote: On Friday, 31 May 2013 at 17:41:42 UTC, Adam Wilson wrote: On Fri, 31 May 2013 05:33:21 -0700, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: http://www.reddit.com/r/programming/comments/1feem1/dconf_2013_day_2_talk_3_from_c_to_d_by_adam_wilson/ {Enj,Destr}oy! Andrei I want to apologize for the glaring technical error in the talk. I knew about T.init() when I was writing this, but I was so focused on finding an analog for C#'s default keyword that it completely slipped my mind. should just be init, it's not a function And I will forever be remembered as the guy who got .init wrong, twice. :-D -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Re: dmd 2.063 released with 260 bugfixes and enhancements
On 05/30/2013 08:16 AM, Andrei Alexandrescu wrote: Hello, We are pleased to announce that dmd 2.063, the reference compiler of the D programming language, is now available for download for OSX, Windows, and a variety of Unixen: The rpm package doesn't make the appropriate links in /usr/lib, so when I try to build a shared library, at runtime it issues ./test1d: error while loading shared libraries: libphobos2.so.0.63: cannot open shared object file: No such file or directory I would assume the deb package has the same shortcoming.
Re: dmd 2.063 released with 260 bugfixes and enhancements
On Fri, 2013-05-31 at 12:19 -0700, Ellery Newcomer wrote: […] I would assume the deb package has the same shortcoming I have not seen this with the deb on Debian Unstable. -- Russel. = Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.win...@ekiga.net 41 Buckmaster Roadm: +44 7770 465 077 xmpp: rus...@winder.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder signature.asc Description: This is a digitally signed message part
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On Fri, 31 May 2013 15:29:40 +0100 Regan Heath re...@netmail.co.nz wrote: I have old SHA etc hashing routines in old style D, this makes me want to spend some time bringing them up to date... http://dlang.org/phobos/std_digest_sha.html Since 2.061, IIRC.
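For reference, a minimal example of the std.digest.sha API the link above points to (SHA1 being the variant available in this release; the "abc" digest is the standard FIPS test vector):

```d
// Minimal example of the std.digest.sha API (in Phobos since ~2.061).
import std.digest.sha;

void main()
{
    // sha1Of returns a ubyte[20]; toHexString renders it for display
    auto digest = sha1Of("abc");
    assert(toHexString(digest) == "A9993E364706816ABA3E25717850C26C9CD0D89D");
}
```

This convenience-function style replaces the older class-based std.crypto/etc.c.sha approaches that the "old-style D" routines mentioned above typically used.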
Re: dmd 2.063 released with 260 bugfixes and enhancements
On 05/31/2013 12:32 PM, Russel Winder wrote: On Fri, 2013-05-31 at 12:19 -0700, Ellery Newcomer wrote: […] I would assume the deb package has the same shortcoming I have not seen this with the deb on Debian Unstable. Just tried it on Ubuntu 12.10, and it does the same. Are you using -defaultlib=libphobos2.so?
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On Fri, 31 May 2013 08:33:21 -0400 Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: http://www.reddit.com/r/programming/comments/1feem1/dconf_2013_day_2_talk_3_from_c_to_d_by_adam_wilson/ {Enj,Destr}oy! Torrents and links: http://semitwist.com/download/misc/dconf2013/
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
Mike Parker, on May 31 at 20:03 you wrote: On Friday, 31 May 2013 at 16:33:45 UTC, Shriramana Sharma wrote: (And I have to say that this is the first -announce list I've seen where all subscribers can post! How come it's allowed here?) The mailing list is actually an interface to the newsgroup, where discussion has always been encouraged. I think at some point it could be good to add a read-only *real* announce list, so people only interested in knowing when a new release is available don't have to deal with a group like this that has a lot of traffic. This could also be easily covered by an RSS feed, though. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ -- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) -- Reflexionar con hondo tazón, y verdadero teflón, para elevarnos y alcanzar el estado de Shaquira Yaquira del latin Ya: ahora; Quira: que quira Quiquiriquira: que canta con un gallo y hace feliz a unos pocos. -- Peperino Pómoro
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On 05/31/2013 05:18 PM, Nick Sabalausky wrote: On Fri, 31 May 2013 15:29:40 +0100 Regan Heath re...@netmail.co.nz wrote: I have old SHA etc hashing routines in old style D, this makes me want to spend some time bringing them up to date... http://dlang.org/phobos/std_digest_sha.html Since 2.061, IIRC. The SHA digest in Phobos is SHA1. SHA256 and SHA512 are still missing. --jm
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On 05/31/2013 03:42 PM, Steven Schveighoffer wrote: [..] I would love to say that I have set aside enough time to do it, but it's very difficult to find the time :( I hate to commit to a certain time frame, I have done that here in the past and have been very wrong with my expectations. That being said, my lack of effort on D stuff is really pissing me off, and I want to spend more time on it. Dconf really has yanked me back into D, and I want to finish all the loose ends I've started, including dcollections, this streaming stuff, and some other little bits. -Steve I'm very happy to read this. It would be awesome to have the power of dcollections in phobos!! I would definitely appreciate it and a lot of people too!!! Streams and collections are very important building blocks. --jm
Re: DConf 2013 Day 2 Talk 3: C# to D by Adam Wilson
On Saturday, June 01, 2013 00:15:47 Juan Manuel Cabo wrote: On 05/31/2013 03:42 PM, Steven Schveighoffer wrote: [..] I would love to say that I have set aside enough time to do it, but it's very difficult to find the time :( I hate to commit to a certain time frame; I have done that here in the past and have been very wrong with my expectations. That being said, my lack of effort on D stuff is really pissing me off, and I want to spend more time on it. DConf really has yanked me back into D, and I want to finish all the loose ends I've started, including dcollections, this streaming stuff, and some other little bits. -Steve I'm very happy to read this. It would be awesome to have the power of dcollections in phobos!! I would definitely appreciate it and a lot of people too!!! Streams and collections are very important building blocks. He's working on std.io, which would replace std.stdio and provide streams. That is likely to get into Phobos after a full review. dcollections, on the other hand, will never be in Phobos. Anyone is free to take the guts of its containers and submit them to Phobos, but the API that dcollections has does not match that of std.container, and while std.container does need a few tweaks, it's never going to have the same design as dcollections. Andrei and Steven disagree on some things, such that what Steve did with dcollections' API is incompatible with std.container (in particular with regards to ranges). The two things holding std.container back are getting custom allocators sorted out (which Andrei is working on) and people submitting new containers. They're not going to just magically appear. dcollections has some solid implementations of containers which can be adapted to std.container (that's where RedBlackTree came from), but nothing is going to be able to come from dcollections exactly as-is due to the differences in API. - Jonathan M Davis
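For readers who haven't used it, here is RedBlackTree (the one std.container piece adapted from dcollections) in ordinary use - a short sketch, not a full tour of the module:

```d
// Short sketch of std.container.RedBlackTree in ordinary use.
import std.container;

void main()
{
    auto t = redBlackTree(3, 1, 4, 1, 5);   // duplicates ignored by default
    t.insert(2);
    assert(t.front == 1);                   // iteration is in sorted order
    assert(4 in t);                         // membership test
    t.removeKey(4);
    assert(!(4 in t));
}
```

Note the range-based API (front, removeKey) rather than dcollections' cursor-based one, which is exactly the design difference discussed above.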
[Somewhat OT] Textadept 6.6 released
Textadept 6.6 has been released. Changelog located here[1]. Textadept is a cross-platform text editor written using Scintilla and GTK/ncurses, with Lua as its scripting engine. This version has an updated D LPeg[2] lexer which does a few nice things like highlighting certain identifiers when they appear in version and __traits expressions. The syntax highlighting has also been updated for the 2.063 language changes such as the addition of __MODULE__ and __vector.[3] [1] http://foicica.com/textadept/CHANGELOG.html [2] http://www.inf.puc-rio.br/~roberto/lpeg/lpeg.html [3] Of course it doesn't properly support HEREDOC strings, but then again, nothing does.
Re: Slow performance compared to C++, ideas?
I managed to get it even faster. [raz@d3 tmp]$ ./a.out rendering time 282 ms [raz@d3 tmp]$ ./test 202 ms, 481 μs, and 8 hnsecs So the D version is 1.4x faster than the C++ version. At least on my computer. Same compiler flags etc. Final code: http://dpaste.dzfl.pl/61626e88 I guess there is still more room for improvements. On Friday, 31 May 2013 at 05:49:55 UTC, finalpatch wrote: Thanks Nazriel, It is very cool you are able to narrow the gap to within 1.5x of c++ with a few simple changes. I checked your version, there are 3 changes (correct me if i missed any): * Change the (float) constructor from v= [x,x,x] to v[0] = x; v[1] = x; v[2] = x; Correct * Get rid of the (float[]) constructor and use 3 floats instead It was just for debugging so the compiler would yell at me if I use an array literal * Change class methods to final Correct The first change alone shaved off 220ms off the runtime, the 2nd one cuts 130ms and the 3rd one cuts 60ms. Lesson learned: be very, very careful about dynamic arrays. Yeah, it is currently a problem with array literals. They're always allocated on the heap even if they shouldn't be. Putting final on methods is something that needs to be remembered
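For readers following along, here is a minimal sketch of the two changes discussed above (avoiding the array-literal assignment in the constructor, and marking class methods final). The type and method names are illustrative, not the actual raytracer code:

```d
// Hypothetical Vec3 sketch; names are illustrative only.
struct Vec3
{
    float[3] v;

    // Avoids "v = [x, x, x]": that form builds a dynamic array
    // literal on the GC heap before copying into the static array.
    this(float x)
    {
        v[0] = x; v[1] = x; v[2] = x;
    }
}

class Sphere
{
    private float r = 1.0f;

    // D class methods are virtual by default; "final" lets the
    // compiler devirtualize and potentially inline the call.
    final float radius() { return r; }
}
```

The third change (a (float[]) constructor replaced by three float parameters) follows the same principle: it keeps array literals out of the hot path.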
Re: Slow performance compared to C++, ideas?
You guys are awesome! I am happy to know that D can indeed offer comparable speed to C++. But it also shows there is room for the compiler to improve as the C++ version also makes heavy use of loops (or STL algorithms) but they get inlined or unrolled automatically. On Friday, 31 May 2013 at 05:35:58 UTC, Juan Manuel Cabo wrote: You might also try changing: float[3] t = mixin("v[]" ~ op ~ "rhs.v[]"); return Vec3(t[0], t[1], t[2]); for: Vec3 t; t.v[0] = mixin("v[0]" ~ op ~ "rhs.v[0]"); t.v[1] = mixin("v[1]" ~ op ~ "rhs.v[1]"); t.v[2] = mixin("v[2]" ~ op ~ "rhs.v[2]"); return t; and so on, avoiding the float[3] and the v[] operations (which would loop, unless the compiler/optimizer unrolls them (didn't check)). I tested this change (removing v[] ops) in Vec3 and in normalize(), and it made your version slightly faster with DMD (didn't check with ldmd2). --jm
Re: Why UTF-8/16 character encodings?
On 5/30/2013 5:00 PM, Peter Williams wrote: On 31/05/13 05:07, Walter Bright wrote: On 5/30/2013 4:24 AM, Manu wrote: We don't all know English. Plenty of people don't. I've worked a lot with Sony and Nintendo code/libraries, for instance, it almost always looks like this: { // E: I like cake. // J: ケーキが好きです。 player.eatCake(); } Clearly someone doesn't speak English in these massive codebases that power an industry worth 10s of billions. Sure, but the code itself is written using ASCII! Because they had no choice. Not true, D supports Unicode identifiers.
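To make Walter's point concrete: D identifiers may contain Unicode letters, so non-English text is not confined to comments. A small illustrative example (the identifiers are invented for demonstration):

```d
// D permits Unicode letters in identifiers, so code need not be
// restricted to ASCII names (illustrative example only).
void main()
{
    int ケーキの数 = 3;          // "number of cakes"
    auto gâteaux = ケーキの数;   // accented identifiers work too
    assert(gâteaux == 3);
}
```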
Re: Why UTF-8/16 character encodings?
On 5/30/2013 5:04 PM, Manu wrote: Currently, D offers a unique advantage; leave it that way. I am going to leave it that way based on the comments here, I only wanted to point out that the example didn't support Unicode identifiers.
Re: D on next-gen consoles and for game development
On 30.05.2013 22:59, Benjamin Thaut wrote: One possible complication: memory block operations would have to treat pointer fields differently somehow. Would they? Shouldn't it be possible to make this part of the post-blit constructor? Not in general, e.g. reference counting needs to know the state before and after the copy.
Re: A different tuple syntax
Then the proposal with {} arises: auto {x, y} = foo(); {int x1, string y1} = foo(); {int, string} foo() { {int tmp, string tmp2} = {3, "dsa"}; return {42, "dsa"}; } This is a recipe for disaster. {} are already used way too much, sometimes in ways that are hard to disambiguate. I agree.
Re: Slow performance compared to C++, ideas?
On Friday, 31 May 2013 at 05:59:00 UTC, finalpatch wrote: You guys are awesome! I am happy to know that D can indeed offer comparable speed to C++. But it also shows there is room for the compiler to improve as the C++ version also makes heavy use of loops (or STL algorithms) but they get inlined or unrolled automatically. Have you tried GDC or LDC? They use the same optimizers and code generators as GCC and Clang, so they should be able to do it as well.
Re: Slow performance compared to C++, ideas?
On Friday, 31 May 2013 at 05:59:00 UTC, finalpatch wrote: You guys are awesome! I am happy to know that D can indeed offer comparable speed to C++. But it also shows there is room for the compiler to improve as the C++ version also makes heavy use of loops (or STL algorithms) but they get inlined or unrolled automatically. Agree. I feel a big hammer coming towards my head from the Walter/Andrei side, but IMHO abandoning DMD in the first place would be the best idea. Focusing on LDC or GDC would bring far more benefits than trying to make anything out of DMD. Version compiled with LDC runs in 202 ms and 192 μs. DMD... 1 sec, 891 ms, 571 μs, and 1 hnsec On Friday, 31 May 2013 at 05:35:58 UTC, Juan Manuel Cabo wrote: You might also try changing: float[3] t = mixin("v[]" ~ op ~ "rhs.v[]"); return Vec3(t[0], t[1], t[2]); for: Vec3 t; t.v[0] = mixin("v[0]" ~ op ~ "rhs.v[0]"); t.v[1] = mixin("v[1]" ~ op ~ "rhs.v[1]"); t.v[2] = mixin("v[2]" ~ op ~ "rhs.v[2]"); return t; and so on, avoiding the float[3] and the v[] operations (which would loop, unless the compiler/optimizer unrolls them (didn't check)). I tested this change (removing v[] ops) in Vec3 and in normalize(), and it made your version slightly faster with DMD (didn't check with ldmd2). --jm
Re: Slow performance compared to C++, ideas?
On 31.05.2013 08:11, deadalnix wrote: On Friday, 31 May 2013 at 05:59:00 UTC, finalpatch wrote: You guys are awesome! I am happy to know that D can indeed offer comparable speed to C++. But it also shows there is room for the compiler to improve as the C++ version also makes heavy use of loops (or STL algorithms) but they get inlined or unrolled automatically. Have you tried GDC or LDC? They use the same optimizers and code generators as GCC and Clang, so they should be able to do it as well. He only used GDC and LDC
Re: Slow performance compared to C++, ideas?
On 31 May 2013 11:26, finalpatch fen...@gmail.com wrote: Recently I ported a simple ray tracer I wrote in C++11 to D. Thanks to the similarity between D and C++ it was almost a line by line translation, in other words, very very close. However, the D version runs much slower than the C++11 version. On Windows, with MinGW GCC and GDC, the C++ version is twice as fast as the D version. On OSX, I used Clang++ and LDC, and the C++11 version was 4x faster than the D version. Since the comparisons were between compilers that share the same codegen backends I suppose that's a relatively fair comparison. (flags used for GDC: -O3 -fno-bounds-check -frelease, flags used for LDC: -O3 -release) I really like the features offered by D but it's the raw performance that's worrying me. From what I read D should offer similar performance when doing similar things but my own test results are not consistent with this claim. I want to know whether this slowness is inherent to the language or it's something I was not doing right (very possible because I have only a few days of experience with D). Below is the link to the D and C++ code, in case anyone is interested to have a look. https://dl.dropboxusercontent.com/u/974356/raytracer.d https://dl.dropboxusercontent.com/u/974356/raytracer.cpp Can you paste the disassembly of the inner loop (trace()) for each of G++/GDC or LDC/Clang++? That said, I can see almost innumerable red flags (on basically every line). The fact that it takes 200ms to render a frame in C++ (I would expect 10ms) suggests that your approach is amazingly slow to begin with, at which point I would start looking for much higher level problems. Once you have an implementation that's approaching optimal, then we can start making comparisons. Here are some thoughts at first glance: * The fact that you use STL makes me immediately concerned. 
Generic code for this sort of work will never run well. That said, STL has decades more time spent optimising, so it stands to reason that the C++ compiler will be able to do more to improve the STL code. * Your vector class both in C++/D are pretty nasty. Use 4d SIMD vectors. * So many integer divisions! * There are countless float-to-int casts. * Innumerable redundant loads/stores. * I would have raised the virtual-by-default travesty, but Andrei did it for me! ;) * intersect() should be __forceinline. * intersect() is full of if's (it's hard to predict if the optimiser can work across those if's. maybe it can...) What's taking the most time? The lighting loop is so template-tastic, I can't get a feel for how fast that loop would be. I believe the reason for the difference is not going to be so easily revealed. It's probably hidden largely in the fact that C++ has had a good decade of optimisation spent on STL over D. It's also possible that the C++ compiler hooks many of those STL functions as compiler intrinsics with internalised logic. Frankly, this is a textbook example of why STL is the spawn of satan. For some reason people are TAUGHT that it's reasonable to write code like this.
Re: Slow performance compared to C++, ideas?
On 31 May 2013 15:49, finalpatch fen...@gmail.com wrote: Thanks Nazriel, It is very cool you are able to narrow the gap to within 1.5x of c++ with a few simple changes. I checked your version, there are 3 changes (correct me if i missed any): * Change the (float) constructor from v= [x,x,x] to v[0] = x; v[1] = x; v[2] = x; * Get rid of the (float[]) constructor and use 3 floats instead * Change class methods to final The first change alone shaved off 220ms off the runtime, the 2nd one cuts 130ms and the 3rd one cuts 60ms. Lesson learned: be very, very careful about dynamic arrays. Yeah, I've actually noticed this too on a few occasions. It would be nice if array operations would unroll for short arrays. Particularly so for static arrays!
Will I try again? and also C header files.
I looked at the D programming language a few years ago and thought it was good. Then I ran into trouble. The language was in a state of flux. I would write code and with the next version of D it would no longer work. The same thing was happening to people who were writing tools such as IDEs for D and I guess most of them just gave up. That was then. I hope now things have settled down. I will look at the language for a couple of days. I presume I now only have to look at D2 and Phobos and not the previous 4-way split of D1/D2/Phobos/Tango. I have a specific engineering application I want to develop. I want to pick a programming language that will give me the cleanest and most maintainable code possible. I want to use the IUP GUI library. That is a shared library that has C headers. I presume I will be able to call the dll functions easily in D2? Maybe I will only have to make some minimal changes to the C header files to get them to work with D2?
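Calling a C shared library from D boils down to declaring the C functions with extern (C) and linking against the library. A minimal sketch for the IUP case; the three IUP signatures below are transcribed from its C headers from memory, so verify them against the actual headers before relying on them:

```d
// Minimal sketch of binding a C shared library from D.
// IupOpen/IupMessage/IupClose signatures are assumed from the
// IUP C API; check iup.h for the authoritative declarations.
extern (C)
{
    int  IupOpen(int* argc, char*** argv);
    void IupClose();
    void IupMessage(const(char)* title, const(char)* msg);
}

void main()
{
    IupOpen(null, null);                       // initialize the toolkit
    IupMessage("Hello", "Called a C library from D");
    IupClose();                                // shut it down
}
```

Linking works the same as from C: pass the import library (Windows) or the .so (Linux) to the compiler/linker.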
Re: A different tuple syntax
On Friday, 31 May 2013 at 04:49:41 UTC, deadalnix wrote: On Friday, 31 May 2013 at 03:07:22 UTC, nazriel wrote: Now this: auto #(x, y) = foo(); #(int x1, string y1) = foo(); #(int, string) foo() { #(int tmp, string tmp2) = #(3, "dsa"); return #(42, "dsa"); } This has the advantage of not having issues with one-element tuples. My 2 cents :)) bearophile what do you think? I also do agree that the 3rd one is probably the best. Smalltalk (Also Objective C) went this way long ago, and it worked. http://en.wikipedia.org/wiki/Smalltalk#Literals
Re: A different tuple syntax
On Friday, 31 May 2013 at 07:32:42 UTC, w0rp wrote: On Friday, 31 May 2013 at 04:49:41 UTC, deadalnix wrote: On Friday, 31 May 2013 at 03:07:22 UTC, nazriel wrote: Now this: auto #(x, y) = foo(); #(int x1, string y1) = foo(); #(int, string) foo() { #(int tmp, string tmp2) = #(3, "dsa"); return #(42, "dsa"); } This has the advantage of not having issues with one-element tuples. My 2 cents :)) bearophile what do you think? I also do agree that the 3rd one is probably the best. Smalltalk (Also Objective C) went this way long ago, and it worked. http://en.wikipedia.org/wiki/Smalltalk#Literals Except that in Smalltalk's case, the language is quite similar to Lisp in the sense that its syntax is quite minimalist, with most language constructs being achieved by messages. In D you will be adding even more constructs to the grammar.
Re: I was wrong
On 2013-05-30 21:44, Timothee Cour wrote: shall we have both the current changelog for all releases + individual changelogs per release changelog.html // always latest version only; people go there by default changelog_all.html //same as current changelog.html changelog_2063.html changelog_2062.html Don't know if the changelog_all is necessary. Just have links to the older versions from changelog.html. -- /Jacob Carlborg
Re: 2.063 release
On Thursday, 30 May 2013 at 17:19:51 UTC, Walter Bright wrote: On 5/30/2013 9:04 AM, Russel Winder wrote: It seems that download speed is about 500b/s, so about 3 weeks to download :-( Some of the downloads have been moved to S3 (Thanks, Brad!). The download links are here: http://digitalmars.com/d/download.html Why not, for example, mega.co.nz? Blazing fast and free. Am I missing something?
Re: I was wrong
On Friday, 31 May 2013 at 08:38:11 UTC, Jacob Carlborg wrote: Don't know if the changelog_all is necessary. Just have links to the older versions from changelog.html. It is not necessary, but it is a minor extra convenience for users who don't need those 60+ extra links when they just want to check the latest changes.
Re: I was wrong
On Thursday, 30 May 2013 at 18:06:03 UTC, Walter Bright wrote: about the changelog. Andrej Mitrovic has done a super awesome job with the changelog, and it is paying off big time. I am very happy to be proven wrong about it. It is so good I could not have even expected to see something like that. Andrej, awesome!
Re: 2.063 release
On Friday, 31 May 2013 at 08:42:38 UTC, Dicebot wrote: On Friday, 31 May 2013 at 08:38:47 UTC, Andrea Fontana wrote: Why not, for example, mega.co.nz? Blazing fast and free. Am I missing something? Using common file share services to distribute an official release is bad for reputation. As a mirror at most. Of course I mean that. A mirror link for forum readers, not official link :)
Re: Will I try again? and also C header files.
On 2013-05-31 09:18, SeanVn wrote: I looked at the D programming language a few years ago and though it was good. Then I ran into trouble. The language was in a state of flux. I would write code and with the next version of D it would no longer work. The same thing was happening to people who were writing tools such as IDE's for D and I guess most of them just gave up. That was then. I hope now things have settled down. I will look at the language for a couple of days. I presume I now only have to look at D2 and Phobos and not the previous 4 way split of D1/D2/Phobos/Tango. I have a specific engineering application I want to develop. I want to pick a programming language that will give me the cleanest and most maintainable code possible. I want to use the IUP GUI library. That is a shared library that has C headers. I presume I will be able to call the dll functions easily in D2? Maybe I will only have to make some minimal changes to the C header files to get them to work with D2? You need to convert the C header files to D modules. Or the actual declarations you want to use. There's a tool available to do that: https://github.com/jacob-carlborg/dstep -- /Jacob Carlborg
Re: 2.063 release
On Friday, 31 May 2013 at 08:38:47 UTC, Andrea Fontana wrote: Why not, for example, mega.co.nz? Blazing fast and free. Am I missing something? Using common file share services to distribute an official release is bad for reputation. As a mirror at most.
Re: Slow performance compared to C++, ideas?
On 2013-05-31 06:06, deadalnix wrote: - Introduce a virtual keyword. Virtual by default isn't such a big deal if you can do final: and reverse the default behavior. However, once you are in final land, you are trapped there; you can't get out. Introducing a virtual keyword would allow for aggressive final: declarations. I guess this would be the least intrusive change. -- /Jacob Carlborg
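The trap being described can be sketched in a few lines. As of this discussion D has no virtual keyword, so once a class body switches to final: there is no way to re-enable virtuality for a later method (class and method names below are invented for illustration):

```d
// Illustrative only: shows why a "virtual" keyword was requested.
class Widget
{
final:
    // Everything from here down is final: devirtualized calls,
    // good for hot paths.
    void draw() { }
    void layout() { }

    // There is no keyword to make just this one method virtual
    // again below the "final:" label -- you are trapped:
    // void onClick() { }   // still final, overriding it is an error
}
```

A virtual keyword would let the final: label stay aggressive while opting individual methods back in.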
Re: Slow performance compared to C++, ideas?
On Friday, 31 May 2013 at 05:49:55 UTC, finalpatch wrote: Thanks Nazriel, It is very cool you are able to narrow the gap to within 1.5x of c++ with a few simple changes. I checked your version, there are 3 changes (correct me if i missed any): * Change the (float) constructor from v= [x,x,x] to v[0] = x; v[1] = x; v[2] = x; * Get rid of the (float[]) constructor and use 3 floats instead I thought GDC or LDC have something like: float[$] v = [x, x, x]; which is converted to float[3] v = [x, x, x]; Am I wrong? DMD needs something like this too.
Re: I was wrong
On Friday, 31 May 2013 at 08:45:08 UTC, Dicebot wrote: On Thursday, 30 May 2013 at 18:06:03 UTC, Walter Bright wrote: about the changelog. Andrej Mitrovic has done a super awesome job with the changelog, and it is paying off big time. I am very happy to be proven wrong about it. It is so good I could not have even expected to see something like that. Andrej, awesome! I agree the changelog is awesometastic, but the run button could use some tweaks...
Re: Slow performance compared to C++, ideas?
On 31 May 2013 14:06, deadalnix deadal...@gmail.com wrote: On Friday, 31 May 2013 at 02:56:25 UTC, Andrei Alexandrescu wrote: On 5/30/13 9:26 PM, finalpatch wrote: https://dl.dropboxusercontent.com/u/974356/raytracer.d https://dl.dropboxusercontent.com/u/974356/raytracer.cpp Manu's gonna love this one: make all methods final. I don't think going as far as making things final by default makes sense at this point. But we sure need a way to be able to finalize methods. We had an extensive discussion with Don and Manu at DConf, here are some ideas that came out: - Final by default This one is really a plus when it comes to performance code. However, virtual by default has proven itself very useful when performance isn't that big of a deal (and that is the case for 90% of a program's code usually), and it allows patterns like decorator (which have also been proven to be useful). This is also huge breakage. The correct choice :) virtual by default has proven itself very useful when performance isn't that big of a deal Has it? I'm not sure it's ever proven itself useful, can you point to any such proof, or evidence? I think it's only *proven* itself to be a massive mistake; you've all heard my arguments a million times, Don will also give a massive rant, and others. People already have to type 'override' in every derived class, and they're happy to do that. Requiring to type 'virtual' in the base is hardly an inconvenience by contrast. Actually, it's quite orthogonal. D tends to prefer being explicit. Why bend the rules in this case, especially considering the counterpart (override) is expected to be explicit? Surely both being explicit is what people would expect? 
There's another detail that came up late at night at dconf; I was talking to Daniel Murphy about the extern(C++) class work he's working on to write the D-DMD front end, and apparently it would make the problem a lot easier if virtual was explicit there too. So perhaps it also offers improved interoperability with C++, which I think most of us have agreed is of heightened importance recently. and it is the case for 90% of a program's code usually Can you justify this? I believe it's more like 5% of methods, or less. In my experience, virtuals account for 1-2 functions, and trivial properties (which should avoid being virtual at all costs) account for almost all of them. int getPrivateMember() { return privateMember; } // - most methods are like this; should NEVER be virtual This is also huge breakage. I think this is another fallacy. The magnitude of the breakage is already known. It's comparable to the recent change where 'override' became an explicit requirement, which was hardly catastrophic. In that case, the breakage occurred in *every derived class*, which is a many:1 relationship to base classes. Explicit virtual would only affect _base_ classes, the magnitude of which is far smaller than what was recently considered acceptable, precisely: magnitudeOfPriorBreakage / numberOfDerivedClasses. So considering explicit override was an argument won, I'd like to think that explicit virtual deserves the same treatment for even more compelling reasons, and at a significantly lesser cost in breakage. - Introduce a virtual keyword. Virtual by default isn't such a big deal if you can do final: and reverse the default behavior. However, once you are in final land, you are trapped there; you can't get out. Introducing a virtual keyword would allow for aggressive final: declarations. This needs to be done either way. - Require explicit export when you want to create shared objects. This one is an enabler for an optimizer to finalize virtual methods. 
With this mechanism in place, the compiler knows all the overrides and can finalize many calls during LTO. I especially like that one as it allows for stripping many symbols at link time and allows for other LTO in general (for instance, the compiler can choose custom calling conventions for some methods, knowing all call sites). The explicit export one has my preference; however, it requires that symbols used in shared libs are explicitly declared export. I think we shouldn't break the virtual-by-default behavior, but we still need to figure out a way to make things more performant on this point. You're concerned about the magnitude of breakage introducing an explicit virtual requirement. This would seem to be a much much bigger breakage to me, and I don't think it's intuitive.
Re: Why UTF-8/16 character encodings?
On Fri, 31 May 2013 07:57:37 +0200, Walter Bright newshou...@digitalmars.com wrote: On 5/30/2013 5:00 PM, Peter Williams wrote: On 31/05/13 05:07, Walter Bright wrote: On 5/30/2013 4:24 AM, Manu wrote: We don't all know English. Plenty of people don't. I've worked a lot with Sony and Nintendo code/libraries, for instance, it almost always looks like this: { // E: I like cake. // J: ケーキが好きです。 player.eatCake(); } Clearly someone doesn't speak English in these massive codebases that power an industry worth 10s of billions. Sure, but the code itself is written using ASCII! Because they had no choice. Not true, D supports Unicode identifiers. I doubt Sony and Nintendo use D extensively. -- Simen
Re: Slow performance compared to C++, ideas?
On Friday, 31 May 2013 at 04:06:58 UTC, deadalnix wrote: I don't think going as far as making things final by default makes sense at this point. But we sure need a way to be able to finalize methods. We had an extensive discussion with Don and Manu at DConf, here are some ideas that came out: - Final by default [..] - Introduce a virtual keyword. [..] C# has final by default and a mandatory virtual keyword. Barely anyone complained. Mostly it is considered a good thing to be explicit. As for D, since the override keyword is mandatory, it would be a compiler error to omit a virtual keyword on the base method, and thus easily fixable. As for the methods that are supposed to be virtual but are never actually overridden - there you must know which ones should be, and make that explicit. This applies only to libraries, and only the ones that use a lot of methods intended to be overridden (generally questionable design) have some work to do on being explicit about virtual.
Re: Template args to UDA's
On 05/28/2013 05:45 PM, Kenji Hara wrote: It looks reasonable, but in the general case it would introduce non-trivial semantic issues. Based on the current D language spec, a prefix attribute is just rewritten to a blocked attribute. @attribute(target, T) void func(string T)() {} to: @attribute(target, T) { void func(string T)() {} } It is my understanding as well, but where is this actually specified? And a block attribute can contain other declarations. @attribute(target, T) { enum str = T.stringof; void func(string T)() {} } Well, if the enhancement is implemented, T would be deduced by each call of the template function func. Then the enum value would become nondeterministic. I think it is not implementable. ... This does not follow. @attribute(target, T) void func(string T)() {} would simply need to be treated like: template func(string T){ @attribute(target, T) void func() {} } (The same would then be done for other attributes.) I think it makes a difference only for UDA's and pragmas.
Re: D on next-gen consoles and for game development
On 2013-05-31 06:02:20 +, Rainer Schuetze r.sagita...@gmx.de said: On 30.05.2013 22:59, Benjamin Thaut wrote: One possible complication: memory block operations would have to treat pointer fields differently somehow. Would they? Shouldn't it be possible to make this part of the post-blit constructor? Not in general, e.g. reference counting needs to know the state before and after the copy. No. Reference counting would work with post-blit: you have the pointer, you just need to increment the reference count once. Also, if you're moving instead of copying there's no post-blit called but there's also no need to change the reference count so it's fine. What wouldn't work with post-blit (I think) is a concurrent GC, as the GC will likely want to be notified when pointers are moved. Post-blit doesn't help there, and the compiler currently assumes it can move things around without calling any function. -- Michel Fortin michel.for...@michelf.ca http://michelf.ca/
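Michel's argument can be sketched concretely: after the compiler's bitwise copy, the post-blit runs on the freshly copied value, which already holds the shared pointer and only has to bump the count. A minimal illustrative sketch (names invented, not any real library):

```d
// Sketch of reference counting via post-blit; illustrative only.
struct RcHandle
{
    private int* refs;   // shared reference count

    this(this)           // post-blit: runs on the new copy after the blit
    {
        if (refs) ++*refs;
    }

    ~this()
    {
        if (refs && --*refs == 0)
        {
            // last owner: release the payload here
        }
    }
}
```

A move elides both the post-blit and one destructor call, so the count stays balanced, which is why moves need no fix-up; a moving GC, by contrast, relocates values without running any hook, which is the gap Michel points out.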
Re: Template args to UDA's
On 31 May 2013 20:47, Timon Gehr timon.g...@gmx.ch wrote: On 05/28/2013 05:45 PM, Kenji Hara wrote: It looks reasonable, but in the general case it would introduce non-trivial semantic issues. Based on the current D language spec, a prefix attribute is just rewritten to a blocked attribute. @attribute(target, T) void func(string T)() {} to: @attribute(target, T) { void func(string T)() {} } It is my understanding as well, but where is this actually specified? And a block attribute can contain other declarations. @attribute(target, T) { enum str = T.stringof; void func(string T)() {} } Well, if the enhancement is implemented, T would be deduced by each call of the template function func. Then the enum value would become nondeterministic. I think it is not implementable. ... This does not follow. @attribute(target, T) void func(string T)() {} would simply need to be treated like: template func(string T){ @attribute(target, T) void func() {} } (The same would then be done for other attributes.) I think it makes a difference only for UDA's and pragmas. Or fully expanded: template func(string T) { @attribute(target, T) { void func() { } } } This seems to fit the existing semantics rather nicely.
Re: Template args to UDA's
* fix = fit On 31 May 2013 20:56, Manu turkey...@gmail.com wrote: On 31 May 2013 20:47, Timon Gehr timon.g...@gmx.ch wrote: On 05/28/2013 05:45 PM, Kenji Hara wrote: It looks reasonable, but in the general case it would introduce non-trivial semantic issues. Based on the current D language spec, a prefix attribute is just rewritten to a blocked attribute. @attribute(target, T) void func(string T)() {} to: @attribute(target, T) { void func(string T)() {} } It is my understanding as well, but where is this actually specified? And a block attribute can contain other declarations. @attribute(target, T) { enum str = T.stringof; void func(string T)() {} } Well, if the enhancement is implemented, T would be deduced by each call of the template function func. Then the enum value would become nondeterministic. I think it is not implementable. ... This does not follow. @attribute(target, T) void func(string T)() {} would simply need to be treated like: template func(string T){ @attribute(target, T) void func() {} } (The same would then be done for other attributes.) I think it makes a difference only for UDA's and pragmas. Or fully expanded: template func(string T) { @attribute(target, T) { void func() { } } } This seems to fix the existing semantics rather nicely.
Re: Slow performance compared to C++, ideas?
On 05/31/2013 08:34 AM, Manu wrote: What's taking the most time? The lighting loop is so template-tastic, I can't get a feel for how fast that loop would be. Hah, I found this out the hard way recently -- have been doing some experimental reworking of code where some key inner functions were templatized, and it had a nasty effect on performance. I'm guessing it made it impossible for the compilers to inline these functions :-(
Re: Will I try again? and also C header files.
On Friday, 31 May 2013 at 07:18:38 UTC, SeanVn wrote: I hope now things have settled down. They have, considerably. I will look at the language for a couple of days. I presume I now only have to look at D2 and Phobos and not the previous 4 way split of D1/D2/Phobos/Tango. Correct I have a specific engineering application I want to develop. I want to pick a programming language that will give me the cleanest and most maintainable code possible. I want to use the IUP GUI library. That is a shared library that has C headers. I presume I will be able to call the dll functions easily in D2? Yes, shouldn't be any problems there. Maybe I will only have to make some minimal changes to the C header files to get them to work with D2? As Jacob mentioned, there are tools to do this for you. Even without them though, simple header files are pretty trivial to convert.
Re: Slow performance compared to C++, ideas?
On 05/31/2013 12:58 PM, Joseph Rushton Wakeling wrote: On 05/31/2013 08:34 AM, Manu wrote: What's taking the most time? The lighting loop is so template-tastic, I can't get a feel for how fast that loop would be. Hah, I found this out the hard way recently -- have been doing some experimental reworking of code where some key inner functions were templatized, and it had a nasty effect on performance. I'm guessing it made it impossible for the compilers to inline these functions :-( That wouldn't make any sense though, since after template expansion there is no difference between the generated version and a particular handwritten version.
Aftershock of 2.063 release
Given the release of 2.063, it would be good to upgrade. Clearly I could download the deb and rpm files and put them in my local repository. However, there is the D APT repository and it seems good to use this instead for Debian. I wonder if it would be a good idea for people interested in Debian stuff to get together, led by Jordi, and turn https://code.google.com/p/d-apt/ into something that is clearly an integral part of the D core activity, based on a dlang.org domain URL? Something similar to get Fedora 18 (and 19?) packages in place as well? The alternative is to inject directly into Debian Unstable and into RPM Fusion, which would of course be even better. -- Russel. = Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.win...@ekiga.net m: +44 7770 465 077 xmpp: rus...@winder.org.uk 41 Buckmaster Road, London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Re: Slow performance compared to C++, ideas?
On 05/31/2013 01:05 PM, Timon Gehr wrote: That wouldn't make any sense though, since after template expansion there is no difference between the generated version and a particular handwritten version. That's what I'd assumed too, but there _is_ a speed difference. I'm open to suggestions as to why. Compare these profile results for the core inner function -- the original (even though there's a 'New' in there somewhere):

  %   cumulative   self              self     total
 time   seconds   seconds    calls   s/call   s/call  name
 93.33     4.69      4.69  4084637     0.00     0.00  _D8infected5model115__T13NewSimulationS248infected5model8StateSISS238infected5model7SeedSISS388infected5model21NewUpdateMeanFieldSISTdZ13NewSimulation19__T11UpdateStateTdZ6updateMFKC8infected5model15__T8StateSISTdZ8StateSISKxC8infected5model14__T7SeedSISTdZ7SeedSISZd

... and the newer version:

  %   cumulative   self              self     total
 time   seconds   seconds    calls   s/call   s/call  name
 92.73     5.23      5.23  4078287     0.00     0.00  _D8infected5model292__T10SimulationS358infected5model18UpdateMeanFieldSISTC8infected5model15__T8StateSISTdZ8StateSISTC8infected5model164__T7SeedSISTdTAyAS3std8typecons50__T5TupleTmVAyaa2_6964TdVAyaa9_696e666c75656e6365Z5TupleTyS3std8typecons50__T5TupleTmVAyaa2_6964TdVAyaa9_696e666c75656e6365Z5TupleZ7SeedSISTydZ10Simulation17__T11UpdateStateZ163__T6updateTdTAyAS3std8typecons50__T5TupleTmVAyaa2_6964TdVAyaa9_696e666c75656e6365Z5TupleTyS3std8typecons50__T5TupleTmVAyaa2_6964TdVAyaa9_696e666c75656e6365Z5TupleZ6updateMFNbNfKC8infected5model15__T8StateSISTdZ8StateSISKxC8infected5model164__T7SeedSISTdTAyAS3std8typecons50__T5TupleTmVAyaa2_6964TdVAyaa9_696e666c75656e6365Z5TupleTyS3std8typecons50__T5TupleTmVAyaa2_6964TdVAyaa9_696e666c75656e6365Z5TupleZ7SeedSISZd

I'm not sure what, other than a change of template design, could be responsible here.
The key bits of code follow -- the original version:

mixin template UpdateState(T)
{
    T update(ref StateSIS!T st, const ref SeedSIS!T sd)
    {
        T d = to!T(0);
        static T[] sick;
        sick.length = st.infected.length;
        sick[] = st.infected[];
        foreach (i; 0 .. sick.length)
        {
            T noTransmission = to!T(1);
            foreach (link; sd.network[i])
                noTransmission *= (to!T(1) - sick[link.id] * link.influence);
            T getSick = (to!T(1) - sick[i])
                * (sd.susceptible[i]
                   + (to!T(1) - sd.susceptible[i]) * (to!T(1) - noTransmission));
            T staySick = sick[i] * (to!T(1) - sd.recover[i]);
            st.infected[i] = (to!T(1) - sd.immune[i]) * (getSick + staySick);
            assert(to!T(0) <= st.infected[i]);
            assert(st.infected[i] <= to!T(1));
            d = max(abs(st.infected[i] - sick[i]), d);
        }
        return d;
    }
}

... and for clarity, the StateSIS and SeedSIS classes:

class StateSIS(T)
{
    T[] infected;
    this() {}
    this(T[] inf) { infected = inf; }
    auto size() @property pure const nothrow { return infected.length; }
    T infection() @property pure const nothrow
    {
        return reduce!"a+b"(to!T(0), infected);
    }
}

class SeedSIS(T)
{
    T[] immune;
    T[] susceptible;
    T[] recover;
    Link!T[][] network;
    this() {}
    this(T[] imm, T[] sus, T[] rec, Link!T[][] net)
    {
        immune = imm;
        susceptible = sus;
        recover = rec;
        network = net;
    }
    auto size() @property pure const nothrow
    in
    {
        assert(immune.length == susceptible.length);
        assert(immune.length == recover.length);
        assert(immune.length == network.length);
    }
    body
    {
        return immune.length;
    }
}

... and the Link template:

template Link(T)
{
    alias Tuple!(size_t, "id", T, "influence") Link;
}

... and now for comparison the new versions:

mixin template UpdateState()
{
    T update(T, N : L[][], L)(ref StateSIS!T st, const ref SeedSIS!(T, N, L) sd)
Re: Slow performance compared to C++, ideas?
On 31 May 2013 20:58, Joseph Rushton Wakeling joseph.wakel...@webdrake.net wrote: On 05/31/2013 08:34 AM, Manu wrote: What's taking the most time? The lighting loop is so template-tastic, I can't get a feel for how fast that loop would be. Hah, I found this out the hard way recently -- have been doing some experimental reworking of code where some key inner functions were templatized, and it had a nasty effect on performance. I'm guessing it made it impossible for the compilers to inline these functions :-( I find that using templates actually makes it more likely for the compiler to properly inline. But I think the totally generic expressions produce cases where the compiler is considering too many possibilities that inhibit many optimisations. It might also be that the optimisations get a lot more complex when the code fragments span across a complex call tree with optimisation dependencies on non-deterministic inlining. One of the most important jobs for the optimiser is code re-ordering. Generic code is often written in such a way that makes it hard/impossible for the optimiser to reorder the flattened code properly. Hand-written code can have branches and memory accesses carefully placed at the appropriate locations. Generic code will usually package those sorts of operations behind little templates that often flatten out in a different order. The optimiser is rarely able to re-order code across if statements, or pointer accesses. __restrict is very important in generic code to allow the optimiser to reorder across any indirection, otherwise compilers typically have to be conservative and presume that something somewhere may have changed the destination of a pointer, and leave the order as the template expanded. Sadly, D doesn't even support __restrict, and nobody ever uses it in C++ anyway. I've always had better results with writing precisely what I intend the compiler to do, and using __forceinline where it needs a little extra encouragement.
Re: Slow performance compared to C++, ideas?
On 31 May 2013 21:05, Timon Gehr timon.g...@gmx.ch wrote: On 05/31/2013 12:58 PM, Joseph Rushton Wakeling wrote: On 05/31/2013 08:34 AM, Manu wrote: What's taking the most time? The lighting loop is so template-tastic, I can't get a feel for how fast that loop would be. Hah, I found this out the hard way recently -- have been doing some experimental reworking of code where some key inner functions were templatized, and it had a nasty effect on performance. I'm guessing it made it impossible for the compilers to inline these functions :-( That wouldn't make any sense though, since after template expansion there is no difference between the generated version and a particular handwritten version. Assuming that you would hand-write exactly the same code as the template expansion... Typically template expansion leads to countless temporary redundancies, which you expect the compiler to try and optimise away, but it's not always able to do so, especially if there is an if() nearby, or worse, a pointer dereference.
Re: The stately := operator feature proposal
On Friday, 31 May 2013 at 00:50:56 UTC, bearophile wrote: Manu: I've raised the topic of multiple-return-values a whole heap of times. It's usually shot down because it would create ambiguities in existing syntax. Solving only that small problem is a bad idea. A language meant to support some functional programming should be able to support tuples well enough. Your problem is a special case of tuple usage. Don't you agree? Bye, bearophile The question is which is more optimal for the MRV style of programming:

// here the compiler can decide the best way to return the two ints,
// probably in two registers, maybe even better for inlining
(int, int) positionMRV() { return 1, 2; }

// here the compiler is making a tuple and returning it, which may not be optimal
#(int, int) positionTuple() { return #(1, 2); } // assuming #() for tuples

I agree tuples cover more cases, but they may be hard to optimize for MRV. As a side note, := could be used to extract tuples as well. It would be nice if _ were not a valid identifier; it could have been used for value skipping:

x, _ := positionMRV(); // only care about x value, optimize away
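For reference, the tuple-returning flavour sketched above is already expressible today with std.typecons (only the dedicated MRV syntax is hypothetical):

```d
import std.typecons : Tuple, tuple;

// Returning two ints packed in a Tuple: D's current stand-in
// for multiple return values.
Tuple!(int, int) positionTuple()
{
    return tuple(1, 2);
}

void main()
{
    auto p = positionTuple();
    // fields are accessed by compile-time index
    assert(p[0] == 1 && p[1] == 2);
}
```

Whether the compiler returns this in registers as efficiently as a hand-written two-out-parameter version is exactly the open question in the post.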
Re: Slow performance compared to C++, ideas?
On 05/31/2013 01:48 PM, Manu wrote: I find that using templates actually makes it more likely for the compiler to properly inline. But I think the totally generic expressions produce cases where the compiler is considering too many possibilities that inhibit many optimisations. It might also be that the optimisations get a lot more complex when the code fragments span across a complex call tree with optimisation dependencies on non-deterministic inlining. Thanks for the detailed advice. :-) There are two particular things I noted about my own code. One is that whereas in the original the template variables were very simple (just a floating-point type) in the new version they are more complex structures that are indeed more generic (the idea was to enable the code to handle both mutable and immutable forms of one particular data structure). The second is that the templatization gets moved from the mixin to the functions themselves. I guess that the mixin has the effect of copy-pasting _as if_ I was just writing precisely what I intended.
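The copy-paste effect of mixin templates mentioned above can be seen in a toy case like this (hypothetical names, just to illustrate the mechanism):

```d
// A mixin template's declarations are pasted into the scope that
// mixes it in, so `bump` compiles as if it were written directly
// inside Counter -- no extra call or template indirection remains.
mixin template Bumper(T)
{
    T bump(T x) { return x + 1; }
}

struct Counter
{
    mixin Bumper!int;
}

void main()
{
    Counter c;
    assert(c.bump(41) == 42);
}
```

This is consistent with the observation that the mixin-based original behaved like hand-written code.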
Re: The stately := operator feature proposal
On Thursday, 30 May 2013 at 23:50:40 UTC, Jonathan M Davis wrote: There are orders of magnitude of difference between providing a new abstraction like a class and simply rewriting auto i = foo; as i := foo; _All_ it does is save you 4 characters and shift where in the statement the piece is that tells the compiler to infer the type. - Jonathan M Davis +1 I really hope such stuff will _never ever_ get into the official D spec. It is just going to be a disaster for a language that aims to be general-purpose and doesn't want to die of minor-detail overload complexity, like C++ did. That syntax obsession makes me afraid. Shorter lambdas at least had some rationale.
Re: Slow performance compared to C++, ideas?
On Friday, 31 May 2013 at 11:49:05 UTC, Manu wrote: I find that using templates actually makes it more likely for the compiler to properly inline. But I think the totally generic expressions produce cases where the compiler is considering too many possibilities that inhibit many optimisations. It might also be that the optimisations get a lot more complex when the code fragments span across a complex call tree with optimisation dependencies on non-deterministic inlining. One of the most important jobs for the optimiser is code re-ordering. Generic code is often written in such a way that makes it hard/impossible for the optimiser to reorder the flattened code properly. Hand-written code can have branches and memory accesses carefully placed at the appropriate locations. Generic code will usually package those sorts of operations behind little templates that often flatten out in a different order. The optimiser is rarely able to re-order code across if statements, or pointer accesses. __restrict is very important in generic code to allow the optimiser to reorder across any indirection, otherwise compilers typically have to be conservative and presume that something somewhere may have changed the destination of a pointer, and leave the order as the template expanded. Sadly, D doesn't even support __restrict, and nobody ever uses it in C++ anyway. I've always had better results with writing precisely what I intend the compiler to do, and using __forceinline where it needs a little extra encouragement. Thanks for the valuable input. I have never had the pleasure of actually trying templates in performance-critical code, and this is good stuff to remember. Added to my notes.
Re: Slow performance compared to C++, ideas?
I actually have some experience with C++ template meta-programming in HD video codecs. My experience is that it is possible for generic code through TMP to match or even beat hand-written code. Modern C++ compilers are very good, able to optimize away most of the temporary variables, resulting in very compact object code, provided you can avoid branches and keep the arguments const refs as much as possible. A real example is my TMP generic codec, which beat the original hand-optimized C/asm version (both use SSE intrinsics) by as much as 30% with only a fraction of the lines of code. Another example is the Eigen linear algebra library, which through template meta-programming is able to match the speed of Intel MKL. D is very strong at TMP; it provides a lot more tools specifically designed for TMP and is vastly superior to C++, which relies on abusing templates. This is actually the main reason drawing me to D: TMP in a more pleasant way. IMO one thing D needs to address is fewer surprises, e.g. innocent-looking code like v[] = [x,x,x] shouldn't cause a major performance hit. In C++ memory allocation is explicit, either operator new or malloc, or indirectly through a method call; otherwise the language would not do heap allocation for you. On Friday, 31 May 2013 at 11:51:04 UTC, Manu wrote: Assuming that you would hand-write exactly the same code as the template expansion... Typically template expansion leads to countless temporary redundancies, which you expect the compiler to try and optimise away, but it's not always able to do so, especially if there is an if() nearby, or worse, a pointer dereference.
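On the v[] = [x, x, x] point: a sketch of the allocation-free spelling, assuming a fixed-size float vector, is broadcast slice assignment:

```d
// v[] = [x, x, x] first builds a heap-allocated array literal and
// then copies it; broadcast assignment fills the elements in place
// with no GC allocation at all.
void setAll(ref float[3] v, float x)
{
    v[] = x;
}

void main()
{
    float[3] v;
    setAll(v, 2.5f);
    assert(v == [2.5f, 2.5f, 2.5f]);
}
```

When the elements differ, assigning them individually likewise avoids the literal's hidden allocation.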
Re: I was wrong
On Fri, 31 May 2013 04:38:10 -0400, Jacob Carlborg d...@me.com wrote: On 2013-05-30 21:44, Timothee Cour wrote: shall we have both the current changelog for all releases + individual changelogs per release changelog.html // always latest version only; people go there by default changelog_all.html //same as current changelog.html changelog_2063.html changelog_2062.html Don't know if the changelog_all is necessary. Just have links to the older versions from changelog.html. I use the all changelog to search for when certain features were added, or bugs were fixed. It's actually a handy page to use for that. -Steve
Re: Slow performance compared to C++, ideas?
On 31 May 2013 23:07, finalpatch fen...@gmail.com wrote: I actually have some experience with C++ template meta-programming in HD video codecs. My experience is that it is possible for generic code through TMP to match or even beat hand written code. Modern C++ compilers are very good, able to optimize away most of the temporary variables resulting very compact object code, provides you can avoid branches and keep the arguments const refs as much as possible. A real example is my TMP generic codec beat the original hand optimized c/asm version (both use sse intrinsics) by as much as 30% with only a fraction of the line of code. Another example is the Eigen linear algebra library, through template meta-programming it is able to match the speed of Intel MKL. Just to clarify, I'm not trying to say templates are slow because they're templates. There's no reason carefully crafted template code couldn't be identical to hand-crafted code. What I am saying is that it introduces the possibility for countless subtle details to get in the way. If you want maximum performance from templates, you often need to be really good at expanding the code in your mind, and visualising it all in expanded context, so you can then reason whether anything is likely to get in the way of the optimiser or not. A lot of people don't possess this skill, and for good reason, it's hard! It usually takes considerable time to optimise template code, and optimised template code may often only be optimal in the context you tested against. At some point, depending on the complexity of your code, it might just be easier/less time consuming to write the code directly. It's a fine line, but I've seen so much code that takes it WAY too far. There's always the unpredictable element too. Imagine a large-ish template function, and one very small detail inside is customised in otherwise identical functions.
Let's say 2 routines are generated for int and long; the cost of casting int -> long and calling the long function in both cases is insignificant, but using templates, your exe just got bigger, branches less predictable, icache got more noisy, and there's no way to profile for loss of performance introduced this way. In fact, the profiler will typically erroneously lead you to believe your code is FASTER, but it results in code that may be slower overall. I'm attracted to D for the power of its templates too, but that attraction is all about simplicity and readability. In D, you can do more with less. The goal is not to use more and more templates, but to make the few templates I use more readable and maintainable. D is very strong at TMP; it provides a lot more tools specifically designed for TMP and is vastly superior to C++, which relies on abusing templates. This is actually the main reason drawing me to D: TMP in a more pleasant way. IMO one thing D needs to address is fewer surprises, e.g. innocent-looking code like v[] = [x,x,x] shouldn't cause a major performance hit. In C++ memory allocation is explicit, either operator new or malloc, or indirectly through a method call; otherwise the language would not do heap allocation for you. Yeah well... I have a constant inner turmoil with this in D. I want to believe the GC is the future, but I'm still trying to convince myself of that (and I think the GC is losing the battle at the moment). Fortunately you can avoid the GC fairly effectively (if you forego large parts of phobos!). But things like the array initialisation are inexcusable. Array literals should NOT allocate, this desperately needs to be fixed. And scope/escape analysis, so local dynamic arrays can be lowered onto the stack in self-contained situations. That's the biggest source of difficult-to-control allocations in my experience.
On Friday, 31 May 2013 at 11:51:04 UTC, Manu wrote: Assuming that you would hand-write exactly the same code as the template expansion... Typically template expansion leads to countless temporary redundancies, which you expect the compiler to try and optimise away, but it's not always able to do so, especially if there is an if() nearby, or worse, a pointer dereference.
Re: Inability to dup/~ for const arrays of class objects
On Fri, 31 May 2013 00:48:47 -0400, Peter Williams pwil3...@bigpond.net.au wrote: On 31/05/13 12:07, Steven Schveighoffer wrote: On Thu, 30 May 2013 20:05:59 -0400, Peter Williams pwil3...@bigpond.net.au wrote: On 30/05/13 16:21, Ali Çehreli wrote: On 05/29/2013 06:54 PM, Peter Williams wrote: I find the mechanism described in the article a little disconcerting and it certainly needs more publicity as it's a bug in waiting for the unwary. It certainly is disconcerting. Performance have played a big role in the current semantics of slices. I should have added that it was the non determinism that disconcerted me. It doesn't really affect me personally as a programmer now that I know about it as I can just avoid it. But it blows out of the water any hopes of having proveably correct non trivial code. I think this is an overstatement. It depends heavily on what you are doing, and most usages will be correct. All uses have to be correct if you want provably correct otherwise you just get mostly correct. All *your* uses have to be correct. What I meant was, you have to know the pitfalls and avoid them. Because there are pitfalls, this doesn't mean you can't prove correctness. And the pitfalls are quite few. You can achieve deterministic behavior depending on what you are looking for. For certain, you can tell without any additional tools that an append will not reallocate if the capacity is large enough. That makes programming much easier, doesn't it. I'll just avoid it by using: a = a ~ b; instead of: a ~= b; If you care nothing for performance, this certainly is a way to go. where I think it might be an issue or is that broken too? This is a conservative always reallocate methodology, it should work just like you allocated a new array to hold a and b. If a is frequently large, and b is frequently small, you will kill your performance vs. a ~= b. 
I toy in my mind with the idea that the difference between dynamic arrays and slices should be that slices are read-only, and if you write to them they get reallocated and promoted to a dynamic array (kind of like copy on write with hard-linked files). But I'm sure that would just create another set of problems. Also I imagine that it's already been considered and discarded. BTW the slice notation could still be used for assigning to sections of an array. This was a proposed feature (not the copy on write, but copy on append). It was so complex to explain that we simply didn't implement it. Instead, we improved array appending performance and semantics. The two largest differences between slices and proper dynamic arrays are that a slice does not own its viewed data (read: is not responsible for the lifetime), and its 'view' is passed by value. -Steve
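The deterministic side of appending discussed above can be checked directly; a small sketch using reserve (the guarantee being that appends within reserved capacity don't move the array, while ~ always allocates):

```d
void main()
{
    int[] a;
    a.reserve(100);            // guarantee capacity for at least 100 elements
    auto p = a.ptr;
    foreach (i; 0 .. 100)
        a ~= i;                // stays within capacity: no reallocation
    assert(a.ptr is p);        // still the same memory block

    auto b = a ~ 1000;         // ~ always builds a fresh array
    assert(b.ptr !is a.ptr);   // so the original is never disturbed
    assert(a.length == 100 && b.length == 101);
}
```

This is the trade-off in the `a = a ~ b` versus `a ~= b` choice: the former is always a copy, the latter reuses capacity when it safely can.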
A simple way to do compile time loop unrolling
Just want to share a new way I just discovered to do loop unrolling.

import std.string : format;

template Unroll(alias CODE, alias N)
{
    static if (N == 1)
        enum Unroll = format(CODE, 0);
    else
        enum Unroll = Unroll!(CODE, N-1) ~ format(CODE, N-1);
}

after that you can write stuff like

mixin(Unroll!("v[%1$d]" ~ op ~ "=rhs.v[%1$d];", 3));

and it gets expanded to

v[0]+=rhs.v[0];v[1]+=rhs.v[1];v[2]+=rhs.v[2];

I find this method simpler than with foreach() and a tuple range, and also faster because it's identical to hand unrolling.
Re: Template args to UDA's
On Fri, 31 May 2013 06:47:07 -0400, Timon Gehr timon.g...@gmx.ch wrote:

@attribute("target", T) void func(string T)() {}

would simply need to be treated like:

template func(string T)
{
    @attribute("target", T) void func() {}
}

In fact, today's current semantics suggest this is exactly what happens:

import std.stdio;
@("a") void func(T)(T t) {}
void main()
{
    writefln("func attributes: %s", [__traits(getAttributes, func)]);
    writefln("func!int attributes: %s", [__traits(getAttributes, func!int)]);
}

Output:

func attributes: []
func!int attributes: ["a"]

If the attributes applied only to the template symbol, then you would think func would have the attribute "a". Plus, the idea that you must actually instantiate the template to get an attribute to apply REALLY suggests the attribute should be aware of the template parameters. -Steve
Re: A simple way to do compile time loop unrolling
Minor improvement:

template Unroll(alias CODE, alias N, alias SEP="")
{
    static if (N == 1)
        enum Unroll = format(CODE, 0);
    else
        enum Unroll = Unroll!(CODE, N-1, SEP) ~ SEP ~ format(CODE, N-1);
}

So vector dot product can be unrolled like this:

mixin(Unroll!("v1[%1$d]*v2[%1$d]", 3, "+"));

which becomes:

v1[0]*v2[0]+v1[1]*v2[1]+v1[2]*v2[2]

On Friday, 31 May 2013 at 14:06:19 UTC, finalpatch wrote: Just want to share a new way I just discovered to do loop unrolling.

template Unroll(alias CODE, alias N)
{
    static if (N == 1)
        enum Unroll = format(CODE, 0);
    else
        enum Unroll = Unroll!(CODE, N-1) ~ format(CODE, N-1);
}

after that you can write stuff like

mixin(Unroll!("v[%1$d]" ~ op ~ "=rhs.v[%1$d];", 3));

and it gets expanded to

v[0]+=rhs.v[0];v[1]+=rhs.v[1];v[2]+=rhs.v[2];

I find this method simpler than with foreach() and a tuple range, and also faster because it's identical to hand unrolling.
Re: A simple way to do compile time loop unrolling
On 31.05.2013 16:06, finalpatch wrote: Just want to share a new way I just discovered to do loop unrolling.

template Unroll(alias CODE, alias N)
{
    static if (N == 1)
        enum Unroll = format(CODE, 0);
    else
        enum Unroll = Unroll!(CODE, N-1) ~ format(CODE, N-1);
}

after that you can write stuff like

mixin(Unroll!("v[%1$d]" ~ op ~ "=rhs.v[%1$d];", 3));

and it gets expanded to

v[0]+=rhs.v[0];v[1]+=rhs.v[1];v[2]+=rhs.v[2];

I find this method simpler than with foreach() and a tuple range, and also faster because it's identical to hand unrolling. The advantage of foreach unrolling is that the compiler can optimally choose the unrolling depth, as different depths may be faster or slower on different CPU targets. It is also an opportunity to do loop vectorization. But I doubt that either is available in DMD; not sure about GDC and LDC.
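The foreach variant being compared looks roughly like this: iterating a compile-time tuple makes the front end expand the body per index (using std.typetuple.TypeTuple, known as std.meta.AliasSeq in later releases; the function name is illustrative):

```d
import std.typetuple : TypeTuple; // std.meta.AliasSeq in later releases

// foreach over a compile-time tuple is expanded during compilation:
// the body is emitted once per index, with no runtime loop left over,
// and the backend is free to re-roll or vectorize it.
void addAssign(ref double[3] v, const ref double[3] rhs)
{
    foreach (i; TypeTuple!(0, 1, 2))
        v[i] += rhs[i];
}

void main()
{
    double[3] a = [1, 2, 3];
    double[3] b = [10, 20, 30];
    addAssign(a, b);
    assert(a == [11.0, 22.0, 33.0]);
}
```

The string-mixin Unroll forces exact hand-unrolled output, while this form leaves the final shape to the optimizer, which is precisely the trade-off discussed above.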
hello world in D
I just downloaded dmd 2.063, and compiled a simple hello world program:

// hello.d
import std.stdio;
int main()
{
    writeln("hello world");
    return 0;
}

with the -O -release -inline -noboundscheck flags. And the size of the resulting 'hello' output file is 1004.1 KByte !!! Why is the size so big? I'm using Fedora 14, 32-bit. Regards, Khurshid.
Re: hello world in D
On Friday, 31 May 2013 at 14:33:48 UTC, khurshid wrote: And size of result output file 'hello' equal to 1004.1 Kbyte Whoa, that's up like several times from the last dmd release; you can get down to 600 KB or so by not using the flags. Strange that combining all those flags increases the size by 50%. This is probably some kind of bug in the new release. But even without bugs, writeln actually pulls in a lot of code. If you use printf instead of std.stdio, you'll save about 150 KB in the executable:

import core.stdc.stdio;
void main() { printf("hello\n"); }

$ dmd test2.d
$ ls -lh test2
-rwxr-xr-x 1 me users 287K 2013-05-31 10:40 test2
$ strip test2
$ ls -lh test2
-rwxr-xr-x 1 me users 195K 2013-05-31 10:41 test2

D programs don't have any dependencies outside the operating system by default, so they carry the parts of the standard library they use with them. That 195K test2 program is mostly druntime's code. If you pull in std.stdio it also grabs much of the phobos standard library to support printing, conversion of numbers to strings, and much more. A 1 MB hello world is just strange though; I'm sure that's some kind of bug, but the relatively harmless kind, since you can still use it.
Re: hello world in D
On Fri, 31 May 2013 15:33:46 +0100, khurshid khurshid.normura...@gmail.com wrote: I just download dmd 2.063, and compile simple hello world program:

// hello.d
import std.stdio;
int main()
{
    writeln("hello world");
    return 0;
}

with -O -release -inline -noboundscheck flags. And size of result output file 'hello' equal to 1004.1 Kbyte !!! Why size is big? I'm using fedora 14, 32-x. Phobos, the standard library, is currently statically linked. You will get a similar size (or greater) if you statically link the C standard library. Eventually D will support dynamically linking to Phobos. R -- Using Opera's revolutionary email client: http://www.opera.com/mail/
Re: Slow performance compared to C++, ideas?
On 5/31/13 9:07 AM, finalpatch wrote: D is very strong at TMP, it provides a lot more tools specifically designed for TMP, that is vastly superior than C++ which relies on abusing the templates. This is actually the main reason drawing me to D: TMP in a more pleasant way. IMO one thing D needs to address is less surprises, eg. innocent looking code like v[] = [x,x,x] shouldn't cause major performance hit. In c++ memory allocation is explicit, either operator new or malloc, or indirectly through a method call, otherwise the language would not do heap allocation for you. It would be great if we addressed that in 2.064. I'm sure I've seen the report in bugzilla, but the closest I found were: http://d.puremagic.com/issues/show_bug.cgi?id=9335 http://d.puremagic.com/issues/show_bug.cgi?id=8449 Andrei
Template expansion bug?
While doing some testing, I came across this behavior: template foo(T) { void foo() {} } void main() { foo!(int).foo(); //foo!(int)(); // this works } Error: testbug.d(7): Error: template testbug.foo does not match any function template declaration. Candidates are: testbug.d(1):testbug.foo(T)() testbug.d(7): Error: template testbug.foo(T)() cannot deduce template function from argument types !()(void) This happens on 2.063 and 2.061. From D spec: http://dlang.org/template.html If a template has exactly one member in it, and the name of that member is the same as the template name, that member is assumed to be referred to in a template instantiation: template Foo(T) { T Foo; // declare variable Foo of type T } void test() { Foo!(int) = 6; // instead of Foo!(int).Foo } ... I would have expected that foo!(int).foo() is equivalent to foo!(int)(). Is this no longer the case? Is there some issue here with DMD aggressively evaluating foo!(int) to be a call because of the no-arg function? Actually, it does seem that way, if I change foo to accept a parameter, and call: foo!(int).foo(1); I get: testbug.d(7): Error: foo (int t) is not callable using argument types () testbug.d(7): Error: foo (int t) is not callable using argument types () (and yes, double the error). I can't at the moment think of a reason why anyone would ever use the full syntax, but I would expect it to be accessible, and for the compiler to treat foo as a template first, function call second. -Steve
Re: Slow performance compared to C++, ideas?
Namespace: I thought GDC or LDC have something like: float[$] v = [x, x, x]; which is converted to float[3] v = [x, x, x]; Am I wrong? DMD needs something like this too. Right. Vote (currently only 6 votes): http://d.puremagic.com/issues/show_bug.cgi?id=481 Bye, bearophile
Re: A simple way to do compile time loop unrolling
On 5/31/13 10:06 AM, finalpatch wrote: Just want to share a new way I just discovered to do loop unrolling.

template Unroll(alias CODE, alias N)
{
    static if (N == 1)
        enum Unroll = format(CODE, 0);
    else
        enum Unroll = Unroll!(CODE, N-1) ~ format(CODE, N-1);
}

after that you can write stuff like

mixin(Unroll!("v[%1$d]" ~ op ~ "=rhs.v[%1$d];", 3));

and it gets expanded to

v[0]+=rhs.v[0];v[1]+=rhs.v[1];v[2]+=rhs.v[2];

I find this method simpler than with foreach() and a tuple range, and also faster because it's identical to hand unrolling. Hehe, first shot is always a trip, isn't it. Welcome aboard. We should have something like that in phobos. Andrei
Re: hello world in D
On Friday, 31 May 2013 at 14:48:02 UTC, Adam D. Ruppe wrote: If you use printf instead of std.stdio, you'll save about 150 KB in the executable:

import core.stdc.stdio;
void main() { printf("hello\n"); }

$ dmd test2.d
$ ls -lh test2
-rwxr-xr-x 1 me users 287K 2013-05-31 10:40 test2
$ strip test2
$ ls -lh test2
-rwxr-xr-x 1 me users 195K 2013-05-31 10:41 test2

I tried your code; result:

-rwxrwxr-x. 1 khurshid khurshid 299K May 31 19:53 hello

i.e. 299 KByte. Also, when I type dmd -v:

DMD32 D Compiler v2.063
Copyright (c) 1999-2012 by Digital Mars written by Walter Bright
Documentation: http://dlang.org/

Why is the copyright 2012 and not 2013?