Re: version() abuse! Note of library writers.
Travis Boucher wrote: ... May I suggest to put these notes in Wiki4D so they don't get lost in the flood of postings?
Re: version() abuse! Note of library writers.
Travis Boucher wrote: The use of version(...) in D has the potential for some very elegant portable code design. However, from most of the libraries I have seen, it is abused and misused turning it into a portability nightmare. It has done this for years, so it's already turned that way. Usually it's version(Win32) /*Windows*/; else /*linux*/;... Anything that accesses standard libc functions, standard unix semantics (eg. signals, shm, etc) should use version(Posix) or version(unix). Nice rant, but it's version(Unix) in GCC and we're probably stuck with the horrible version(linux) and version(OSX) forever. Build systems and scripts that are designed to run on unix machines should not assume the locations of libraries and binaries, and refer to posix standards for their locations. For example, bash in /bin/bash or the assumption of the existence of bash at all. If you need a shell script, try writing it with plain bourne syntax without all of the bash extensions to the shell, and use /bin/sh. Also avoid using the GNU extensions to standard tools (sed and awk for example). If you really want to do something fancy, do it in D and use the appropriate {g,l}dmd -run command. I rewrote my shell scripts in C++ for wxD, to work on Windows. Tried to use D (mostly for DSSS), but it wasn't working right. A few things to keep in mind about linux systems vs. pretty much all other unix systems: Nice list, you should put it on a web page somewhere (Wiki4D ?) Usually one also ends up using runtime checks or even autoconf. --anders PS. Some people even think that /usr/bin/python exists. :-) Guess they were confusing it with standard /usr/bin/perl
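As a footnote to the bourne-vs-bash point above, here is a minimal sketch of staying within POSIX sh: suffix stripping via parameter expansion instead of a bashism like ${var/…/…} or a GNU sed extension. The filename is made up for illustration.

```shell
#!/bin/sh
# Bourne/POSIX-only: no bash extensions, no GNU sed/awk extensions.
# Strip a ".d" suffix with POSIX parameter expansion.
src="foo.d"
base="${src%.d}"   # works in any POSIX /bin/sh, including the BSDs
echo "$base"       # -> foo
```

The same `${var%suffix}` form replaces many common uses of sed in build scripts.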
Should we make DMD1.051 the recommended stable version?
The standard download still points to DMD1.030 (May 2008). A couple of hundred serious bugs have been fixed since that time. Some of the intermediate releases had regressions which prevented many people from using them, but I don't think that's true of this one. I think it's a great release. The known regressions of DMD1.051 compared to DMD1.030 are: 2393 IFTI regression on (T:char)(T[]) vs (T:dchar)(T[]) 370 Compiler stack overflow on recursive typeof in function declaration. 3469 ICE(func.c): Regression. Calling non-template function as a template, from another module but in my opinion these are not serious enough to prevent 1.051 from being recommended. (BTW I've already sent Walter patches for those second two bugs). I'd like to protect newbies from encountering internal compiler errors which have already been fixed, and from experiencing frustration with CTFE. If anyone has a reason that they have to use 1.030 instead of 1.051, now would be a great time to say why.
Re: alignment on stack-allocated arrays/structs
OpenCL requires all types to be naturally aligned. /* * Vector types * * Note: OpenCL requires that all types be naturally aligned. * This means that vector types must be naturally aligned. * For example, a vector of four floats must be aligned to * a 16 byte boundary (calculated as 4 * the natural 4-byte * alignment of the float). The alignment qualifiers here * will only function properly if your compiler supports them * and if you don't actively work to defeat them. For example, * in order for a cl_float4 to be 16 byte aligned in a struct, * the start of the struct must itself be 16-byte aligned. */ http://d.puremagic.com/issues/show_bug.cgi?id=2278
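For concreteness, a speculative D sketch of the cl_float4 case. Whether align() is actually honored for stack-allocated variables is exactly what the linked bug report is about, so treat the commented-out runtime check as aspirational rather than guaranteed.

```d
import std.stdio : writeln;

// Hypothetical D mirror of OpenCL's cl_float4: four floats that the
// spec wants on a 16-byte boundary.
align(16) struct cl_float4
{
    float[4] v;
}

void main()
{
    writeln("claimed alignment: ", cl_float4.alignof);
    cl_float4 x;
    writeln("stack address mod 16: ", cast(size_t)&x % 16);
    // On a compiler with working stack alignment this would hold:
    // assert(cast(size_t)&x % 16 == 0);
}
```

The size is fixed by the four floats either way; it is only the placement of the variable that is in question.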
Re: Developing a browser (Firefox) extension with D
Frank Benoit wrote: Justin Johansson schrieb: I'm just wondering for a moment if D might be a good vehicle for developing a browser extension for Firefox, or any other browser for that matter. Has anyone either considered doing, or has actually done, any browser extension development with D and have some thoughts or experience to share? Thanks for all feedback, Justin Johansson As an extension, there might be more than one extension running in one Firefox process. Each extension written in D must bring in its own GC, here comes the problem. The GC is implemented in D using signals, which are global to the process. So if there are two D extensions, they will get confused by each other. This is also the reason, i give up about running D and Java in the same process. Also Java uses those signals for its own GC. But nevertheless, if you want to give it a try, the DWT firefox bindings might be of interest for you. Hi Frank, Thanks for this very valuable information. I was soon to post about D interoperability with JNI (Java Native Interface), but sounds like as you say D extensions for Java could also be problematic. Okay if you are the only guy in there but big trouble if your extension is trying to co-exist with other vendor extensions in D. I am left wondering though, is this a permanent show-stopper for the future (technically) or could something, at least in theory, be worked out to overcome the GC issue? beers/commiserations Justin
Re: version() abuse! Note of library writers.
Anders F Björklund wrote: Travis Boucher wrote: The use of version(...) in D has the potential for some very elegant portable code design. However, from most of the libraries I have seen, it is abused and misused turning it into a portability nightmare. It has done this for years, so it's already turned that way. Usually it's version(Win32) /*Windows*/; else /*linux*/;... I'm fairly new to D, and one thing I really love about it is the removal of the preprocessor in favor of specific conditional compilation (version, debug, unittest, static if, CTFE, etc). Nothing was worse than trying to decode a massive #ifdef tree supporting different features from different OSes. I don't expect things to change right now, but I think that there should be some standard version() statements that are not only implementation defined. I'd also like people to start thinking about the OS hierarchies with version statements (as an example...):

Windows
  Win32
  Win64
  WinCE
Posix (or Unix, I don't care which one)
  BSD
    FreeBSD
    OpenBSD
    NetBSD
    Darwin
  Linux
  Solaris

The problem with version(Win32) /*Windows*/; else /*linux*/; is fairly subtle, but I have run into it a lot with bindings to C libraries that use the dlopen() family and try to link against libdl. Anything that accesses standard libc functions, standard unix semantics (e.g. signals, shm, etc.) should use version(Posix) or version(unix). Nice rant, but it's version(Unix) in GCC and we're probably stuck with the horrible version(linux) and version(OSX) forever. On my install (FreeBSD) version(Unix) and version(Posix) are both defined. Build systems and scripts that are designed to run on unix machines should not assume the locations of libraries and binaries, and refer to posix standards for their locations. For example, bash in /bin/bash or the assumption of the existence of bash at all. If you need a shell script, try writing it with plain bourne syntax without all of the bash extensions to the shell, and use /bin/sh. 
Also avoid using the GNU extensions to standard tools (sed and awk for example). If you really want to do something fancy, do it in D and use the appropriate {g,l}dmd -run command. I rewrote my shell scripts in C++ for wxD, to work on Windows. Tried to use D (mostly for DSSS), but it wasn't working right. Yeah, I can understand in some cases using D itself could be a major bootstrapping hassle. This issue isn't D specific, and exists in a lot of packages. I've even gotten to the point of expecting most third party packages won't work with FreeBSD's make, and always make sure GNU make is available. A few things to keep in mind about linux systems vs. pretty much all other unix systems: Nice list, you should put it on a web page somewhere (Wiki4D?) Usually one also ends up using runtime checks or even autoconf. I haven't registered on Wiki4D yet, I might soon once I take the time to clean up this ranty post into something a little more useful. PS. Some people even think that /usr/bin/python exists. :-) Guess they were confusing it with standard /usr/bin/perl I won't even go into my feelings about python. Sadly perl is slowly becoming extinct. It would be nice for people to remember that perl started as a replacement for sed and awk, and still works well for that purpose. At least people don't assume ruby exists. The bad thing is when a build system breaks because of something non-critical failing. A good example of this is the gtkd demoselect.sh script. It used to assume /bin/bash, which would trigger a full build failure. Since it was changed to /bin/sh, it doesn't work correctly on FreeBSD (due to, I think, some GNU extensions used in sed), but it doesn't cause a build failure. It just means the default demos are built.
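The version() hierarchy discussed above might be sketched in D like this. The broad/narrow ordering and the loud static assert fallback are my own convention, not anything the compilers mandate; the identifiers themselves (Windows, Posix, linux, FreeBSD, OSX) are ones real compilers predefine.

```d
// Prefer broad version identifiers first, narrow ones only where needed,
// and fail loudly instead of silently assuming "everything else is linux".
version (Windows)
    enum platform = "windows";
else version (Posix)
{
    // Code here may use libc, signals, shm, dlopen(), etc.
    version (linux)
        enum platform = "linux";    // e.g. needs libdl for dlopen
    else version (FreeBSD)
        enum platform = "freebsd";
    else version (OSX)
        enum platform = "osx";
    else
        enum platform = "posix";    // generic unix fallback
}
else
    static assert(false, "unsupported platform: please add a version block");

void main() {}
```

The point of the final static assert is that porting to a new OS becomes a compile error to fix, rather than a silent fall-through into linux-only code.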
Re: Should we make DMD1.051 the recommended stable version?
Don wrote: The standard download still points to DMD1.030 (May 2008). A couple of hundred serious bugs have been fixed since that time. Some of the intermediate releases had regressions which prevented many people from using them, but I don't think that's true of this one. I think it's a great release. [...] I'd like to protect newbies from encountering internal compiler errors which have already been fixed, and from experiencing frustration with CTFE. If anyone has a reason that they have to use 1.030 instead of 1.051, now would be a great time to say why. Not saying have to, but it was matching the GDC version I had: svn co https://dgcc.svn.sourceforge.net/svnroot/dgcc/trunk/ gdc Updating would mean getting the patches from the unofficial tree: hg clone http://bitbucket.org/goshawk/gdc/ But as long as it is working properly, I could do some installers along with the build patches already needed for Vista and Leopard. They would probably have been at DMD 1.020 - had it not been for the issue with Tango not working with that version (i.e. GDC 0.24) http://gdcwin.sourceforge.net/ http://gdcmac.sourceforge.net/ --anders
Re: Developing a browser (Firefox) extension with D
Justin Johansson wrote: Hi Frank, Thanks for this very valuable information. I was soon to post about D interoperability with JNI (Java Native Interface), but sounds like as you say D extensions for Java could also be problematic. Okay if you are the only guy in there but big trouble if your extension is trying to co-exist with other vendor extensions in D. There might also be a problem if Firefox or something else in the same process uses signals. I am left wondering though, is this a permanent show-stopper for the future (technically) or could something, at least in theory, be worked out to overcome the GC issue? Perhaps start a child process and do inter-process communication. For one of my applications I use a Java GUI and then I start a D process and use stdin+stdout for JSON communication.
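Frank's child-process approach could be sketched with present-day Phobos like this. The ./worker executable and its one-JSON-message-per-line protocol are hypothetical; the era's D1 libraries would have used different APIs.

```d
import std.json : parseJSON;
import std.process : pipeProcess, wait, Redirect;
import std.stdio : writeln;

void main()
{
    // Spawn the helper process and talk to it over stdin/stdout,
    // keeping its GC safely in a separate address space.
    auto pipes = pipeProcess(["./worker"], Redirect.stdin | Redirect.stdout);
    scope (exit) wait(pipes.pid);

    pipes.stdin.writeln(`{"cmd":"ping"}`);  // one JSON message per line
    pipes.stdin.flush();
    pipes.stdin.close();                    // let the child see end-of-input

    auto reply = pipes.stdout.readln();
    writeln("worker replied: ", parseJSON(reply));
}
```

The cost is serialization overhead on every call, but neither side's GC or signal handling can step on the other's.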
Re: Should the comma operator be removed in D2?
On Tue, Nov 17, 2009 at 9:05 PM, Yigal Chripun yigal...@gmail.com wrote: to clarify what I meant regarding function args list lets look at a few ML examples: fun f1() = () f1 is unit - unit fun f2 (a) = a f2 `a - `a fun f3 (a, b, (c, d)) = a + b + c + d f3 is (`a, `a, (`a, `a)) - `a it doesn't auto flatten the tuples but the list of parameters is equivalent to a tuple. Ok, but I think retard is right that tuple==parameter list equivalence is going to be hard to make happen in a language like D, since D has all these storage classes for parameters. regarding unit type, it has by definition exactly one value, so a function that is defined now in D to return void would return that value and than it's perfectly legal to have foo(bar()) when bar returns a unit type. I see. That might come in handy sometimes. Thanks for explaining. But it seems like something we could make happen regardless of tuples. In C/C++ you can declare foo as void foo(void); It makes sense that a function returning void should be allowed to chain with a function taking void. --bb
Re: version() abuse! Note of library writers.
Another note, something I see in tango and I don't know why I didn't think about it before. If you want to require bash, use: #!/usr/bin/env bash instead of #!/bin/bash or #!/usr/bin/bash
:? in templates
Didn't this used to work? template factorial(int i) { enum factorial = (i==0) ? 1 : i*factorial!(i-1); } With DMD 2.036 I'm getting: Error: template instance factorial!(-495) recursive expansion Seems like it expands both branches regardless of the condition. And seems to me like it shouldn't. --bb
Re: Should the comma operator be removed in D2?
Wed, 18 Nov 2009 02:36:35 -0800, Bill Baxter wrote: On Tue, Nov 17, 2009 at 9:05 PM, Yigal Chripun yigal...@gmail.com regarding unit type, it has by definition exactly one value, so a function that is defined now in D to return void would return that value and than it's perfectly legal to have foo(bar()) when bar returns a unit type. I see. That might come in handy sometimes. Thanks for explaining. But it seems like something we could make happen regardless of tuples. In C/C++ you can declare foo as void foo(void); It makes sense that a function returning void should be allowed to chain with a function taking void. Aye. It doesn't really matter what you call it. Another difference is the implicit type conversions. The () value cannot be coerced to some other type without shedding immense amounts of blood and sweat.
Re: :? in templates
Wed, 18 Nov 2009 03:10:57 -0800, Bill Baxter wrote: Didn't this used to work? template factorial(int i) { enum factorial = (i==0) ? 1 : i*factorial!(i-1); } With DMD 2.036 I'm getting: Error: template instance factorial!(-495) recursive expansion Seems like it expands both branches regardless of the condition. And seems to me like it shouldn't. There's probably a confusion here. It evaluates the value of factorial!() lazily, but its type (which happens to be infinitely recursive) must be evaluated eagerly in order to infer the type of the ternary op.
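The usual workaround for the ternary problem described above is static if, which semantically analyzes only the branch that is taken, so the recursion actually terminates:

```d
// factorial via static if: the else branch is never even compiled
// when i == 0, so there is no infinite recursive expansion.
template factorial(int i)
{
    static if (i == 0)
        enum factorial = 1;
    else
        enum factorial = i * factorial!(i - 1);
}

static assert(factorial!(0) == 1);
static assert(factorial!(5) == 120);

void main() {}
```

With ?: both branches must be typed to compute the common type of the expression; static if sidesteps that entirely.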
Re: Should we make DMD1.051 the recommended stable version?
Don nos...@nospam.com wrote in message news:he0d7l$34...@digitalmars.com... The standard download still points to DMD1.030 (May 2008). A couple of hundred serious bugs have been fixed since that time. Some of the intermediate releases had regressions which prevented many people from using them, but I don't think that's true of this one. I think it's a great release. The known regressions of DMD1.051 compared to DMD1.030 are: 2393 IFTI regression on (T:char)(T[]) vs (T:dchar)(T[]) 370 Compiler stack overflow on recursive typeof in function declaration. 3469 ICE(func.c): Regression. Calling non-template function as a template, from another module but in my opinion these are not serious enough to prevent 1.051 from being recommended. (BTW I've already sent Walter patches for those second two bugs). I'd like to protect newbies from encountering internal compiler errors which have already been fixed, and from experiencing frustration with CTFE. If anyone has a reason that they have to use 1.030 instead of 1.051, now would be a great time to say why. The only potential problem I see with that is that if you want to use tango, any DMD beyond 1.043 would force you to go with tango trunk, which wouldn't make much sense for anyone who is trying to stick with stable releases of things.
Re: Should we make DMD1.051 the recommended stable version?
On Wed, 18 Nov 2009 14:15:47 +0300, Nick Sabalausky a...@a.a wrote: Don nos...@nospam.com wrote in message news:he0d7l$34...@digitalmars.com... The standard download still points to DMD1.030 (May 2008). A couple of hundred serious bugs have been fixed since that time. Some of the intermediate releases had regressions which prevented many people from using them, but I don't think that's true of this one. I think it's a great release. The known regressions of DMD1.051 compared to DMD1.030 are: 2393 IFTI regression on (T:char)(T[]) vs (T:dchar)(T[]) 370 Compiler stack overflow on recursive typeof in function declaration. 3469 ICE(func.c): Regression. Calling non-template function as a template, from another module but in my opinion these are not serious enough to prevent 1.051 from being recommended. (BTW I've already sent Walter patches for those second two bugs). I'd like to protect newbies from encountering internal compiler errors which have already been fixed, and from experiencing frustration with CTFE. If anyone has a reason that they have to use 1.030 instead of 1.051, now would be a great time to say why. The only potential problem I see with that is that if you want to use tango, any DMD beyond 1.043 would force you to go with tango trunk, which wouldn't make much sense for anyone who is trying to stick with stable releases of things. Recent poll has shown that most people use Tango trunk anyway. Perhaps, it's time for another Tango release?
Re: Should we make DMD1.051 the recommended stable version?
On Wed, 18 Nov 2009 14:19:11 +0300, Denis Koroskin wrote: On Wed, 18 Nov 2009 14:15:47 +0300, Nick Sabalausky a...@a.a wrote: Don nos...@nospam.com wrote in message news:he0d7l$34...@digitalmars.com... The standard download still points to DMD1.030 (May 2008). A couple of hundred serious bugs have been fixed since that time. Some of the intermediate releases had regressions which prevented many people from using them, but I don't think that's true of this one. I think it's a great release. The known regressions of DMD1.051 compared to DMD1.030 are: 2393 IFTI regression on (T:char)(T[]) vs (T:dchar)(T[]) 370 Compiler stack overflow on recursive typeof in function declaration. 3469 ICE(func.c): Regression. Calling non-template function as a template, from another module but in my opinion these are not serious enough to prevent 1.051 from being recommended. (BTW I've already sent Walter patches for those second two bugs). I'd like to protect newbies from encountering internal compiler errors which have already been fixed, and from experiencing frustration with CTFE. If anyone has a reason that they have to use 1.030 instead of 1.051, now would be a great time to say why. The only potential problem I see with that is that if you want to use tango, any DMD beyond 1.043 would force you to go with tango trunk, which wouldn't make much sense for anyone who is trying to stick with stable releases of things. Recent poll has shown that most people use Tango trunk anyway. Perhaps, it's time for another Tango release? 1.051 looks like a good choice for a stable dmd version. I think that a new Tango release is underway already.
Re: :? in templates
On Wed, Nov 18, 2009 at 3:16 AM, retard r...@tard.com.invalid wrote: Wed, 18 Nov 2009 03:10:57 -0800, Bill Baxter wrote: Didn't this used to work? template factorial(int i) { enum factorial = (i==0) ? 1 : i*factorial!(i-1); } With DMD 2.036 I'm getting: Error: template instance factorial!(-495) recursive expansion Seems like it expands both branches regardless of the condition. And seems to me like it shouldn't. There's probably a confusion here. It evaluates lazily the value of factorial!(), but its type (which happens to be infinitely recursive must be evaluated eagerly in order to infer the type of the ternary op. That makes sense. I guess the ?: op is defined to do that in all cases. Might be nice though if it didn't do that in cases where the condition was statically known. Or if we just had a separate static if version of ?: --bb
Re: Should the comma operator be removed in D2?
Tue, 17 Nov 2009 20:23:35 -0500, Robert Jacques wrote: Also, all those well known optimizations don't magically work for structs: I've seen modern compilers do some pretty stupid things when structs and temporary values are involved. Are you talking about dmc/dmd now? Have you tried gcc 4.4 or 4.5, llvm dev version or latest visual c++ ? Temporary values are often used with e.g. expression templates and the compilers have generated decently performing code for ages now. dmd is the only stupid compiler which cannot inline e.g. expression templates for matrix operations.
Re: :? in templates
On Wed, 18 Nov 2009 03:10:57 -0800, Bill Baxter wbax...@gmail.com wrote: Didn't this used to work? template factorial(int i) { enum factorial = (i==0) ? 1 : i*factorial!(i-1); } With DMD 2.036 I'm getting: Error: template instance factorial!(-495) recursive expansion Seems like it expands both branches regardless of the condition. And seems to me like it shouldn't. --bb While we are at it, binary logical operators have the same issue: static if (foo!() && bar!()) { } else { } bar is instantiated even if 'foo!()' results in false. To work around it, you have to add another 'static if' and duplicate the 'else' block: static if (foo!()) { static if (bar!()) { } else { } } else { } I've encountered the problem several times and would be happy to have it fixed, if possible.
Re: :? in templates
Wed, 18 Nov 2009 03:31:11 -0800, Bill Baxter wrote: On Wed, Nov 18, 2009 at 3:16 AM, retard r...@tard.com.invalid wrote: Wed, 18 Nov 2009 03:10:57 -0800, Bill Baxter wrote: Didn't this used to work? template factorial(int i) { enum factorial = (i==0) ? 1 : i*factorial!(i-1); } With DMD 2.036 I'm getting: Error: template instance factorial!(-495) recursive expansion Seems like it expands both branches regardless of the condition. And seems to me like it shouldn't. There's probably a confusion here. It evaluates lazily the value of factorial!(), but its type (which happens to be infinitely recursive) must be evaluated eagerly in order to infer the type of the ternary op. That makes sense. I guess the ?: op is defined to do that in all cases. Might be nice though if it didn't do that in cases where the condition was statically known. so int foo = (1==1) ? 6 : "haha"; would work, too? I think it would still need to check that the types of both branches match.
Re: Should we make DMD1.051 the recommended stable version?
Denis Koroskin 2kor...@gmail.com wrote in message news:op.u3k8d9i9o7c...@dkoroskin.saber3d.local... On Wed, 18 Nov 2009 14:15:47 +0300, Nick Sabalausky a...@a.a wrote: Don nos...@nospam.com wrote in message news:he0d7l$34...@digitalmars.com... The standard download still points to DMD1.030 (May 2008). A couple of hundred serious bugs have been fixed since that time. Some of the intermediate releases had regressions which prevented many people from using them, but I don't think that's true of this one. I think it's a great release. The known regressions of DMD1.051 compared to DMD1.030 are: 2393 IFTI regression on (T:char)(T[]) vs (T:dchar)(T[]) 370 Compiler stack overflow on recursive typeof in function declaration. 3469 ICE(func.c): Regression. Calling non-template function as a template, from another module but in my opinion these are not serious enough to prevent 1.051 from being recommended. (BTW I've already sent Walter patches for those second two bugs). I'd like to protect newbies from encountering internal compiler errors which have already been fixed, and from experiencing frustration with CTFE. If anyone has a reason that they have to use 1.030 instead of 1.051, now would be a great time to say why. The only potential problem I see with that is that if you want to use tango, any DMD beyond 1.043 would force you to go with tango trunk, which wouldn't make much sense for anyone who is trying to stick with stable releases of things. Recent poll has shown that most people use Tango trunk anyway. Perhaps, it's time for another Tango release? I don't think anyone would disagree that it's long past time for another Tango release ;) But, I would venture to guess very few people stick with DMD stable either, probably even fewer than Tango 0.99.8. Heck, DMD's stable gets updated less often than Tango's stable releases. Personally, I don't see much of a reason for D1/Tango users not to use DMD 1.051 / Tango trunk, at least until Tango 0.99.9 comes out. 
But I just felt that for anyone who does want to stick with DMD's stable for whatever reason, it's likely they may want to stick with the latest stable for Tango too. (Assuming, of course, that they want to use tango...not that that's a very large assumption for a D1 user).
Re: :? in templates
On Wed, Nov 18, 2009 at 3:36 AM, retard r...@tard.com.invalid wrote: Wed, 18 Nov 2009 03:31:11 -0800, Bill Baxter wrote: On Wed, Nov 18, 2009 at 3:16 AM, retard r...@tard.com.invalid wrote: Wed, 18 Nov 2009 03:10:57 -0800, Bill Baxter wrote: Didn't this used to work? template factorial(int i) { enum factorial = (i==0) ? 1 : i*factorial!(i-1); } With DMD 2.036 I'm getting: Error: template instance factorial!(-495) recursive expansion Seems like it expands both branches regardless of the condition. And seems to me like it shouldn't. There's probably a confusion here. It evaluates lazily the value of factorial!(), but its type (which happens to be infinitely recursive must be evaluated eagerly in order to infer the type of the ternary op. That makes sense. I guess the ?: op is defined to do that in all cases. Might be nice though if it didn't do that in cases where the condition was statically known. so int foo = (1==1) ? 6 : haha; would work, too? I think it would still need to check that the types of both branches match. Yeh, that could be confusing. Actually it would break a lot of code, too, now that I think of it. People use typeof(true?a:b) to get the common type. That's why it probably needs to be a distinct thing, a static ?:. --bb
Re: Should we make DMD1.051 the recommended stable version?
On Wed, Nov 18, 2009 at 12:43 PM, Nick Sabalausky a...@a.a wrote: Denis Koroskin 2kor...@gmail.com wrote in message news:op.u3k8d9i9o7c...@dkoroskin.saber3d.local... On Wed, 18 Nov 2009 14:15:47 +0300, Nick Sabalausky a...@a.a wrote: Don nos...@nospam.com wrote in message news:he0d7l$34...@digitalmars.com... The standard download still points to DMD1.030 (May 2008). A couple of hundred serious bugs have been fixed since that time. Some of the intermediate releases had regressions which prevented many people from using them, but I don't think that's true of this one. I think it's a great release. The known regressions of DMD1.051 compared to DMD1.030 are: 2393 IFTI regression on (T:char)(T[]) vs (T:dchar)(T[]) 370 Compiler stack overflow on recursive typeof in function declaration. 3469 ICE(func.c): Regression. Calling non-template function as a template, from another module but in my opinion these are not serious enough to prevent 1.051 from being recommended. (BTW I've already sent Walter patches for those second two bugs). I'd like to protect newbies from encountering internal compiler errors which have already been fixed, and from experiencing frustration with CTFE. If anyone has a reason that they have to use 1.030 instead of 1.051, now would be a great time to say why. The only potential problem I see with that is that if you want to use tango, any DMD beyond 1.043 would force you to go with tango trunk, which wouldn't make much sense for anyone who is trying to stick with stable releases of things. Recent poll has shown that most people use Tango trunk anyway. Perhaps, it's time for another Tango release? I don't think anyone would disagree that it's long past time for another Tango release ;) But, I would venture to guess very few people stick with DMD stable either, probably even fewer than Tango 0.99.8. Heck, DMD's stable gets updated less often than Tango's stable releases. 
Personally, I don't see much of a reason for D1/Tango users not to use DMD 1.051 / Tango trunk, at least until Tango 0.99.9 comes out. But I just felt that for anyone who does want to stick with DMD's stable for whatever reason, it's likely they may want to stick with latest stable for Tango too. (Assuming, of course, that they want to use tango...not that that's a vary large assumption for a D1 user). It would also be possible to just release tango 0.99.8.1 (or something), LDC has a patch against 0.99.8 that probably fixes it for the latest dmd as well.
Re: :? in templates
On Wed, 18 Nov 2009 14:50:42 +0300, Bill Baxter wbax...@gmail.com wrote: On Wed, Nov 18, 2009 at 3:36 AM, retard r...@tard.com.invalid wrote: Wed, 18 Nov 2009 03:31:11 -0800, Bill Baxter wrote: On Wed, Nov 18, 2009 at 3:16 AM, retard r...@tard.com.invalid wrote: Wed, 18 Nov 2009 03:10:57 -0800, Bill Baxter wrote: Didn't this used to work? template factorial(int i) { enum factorial = (i==0) ? 1 : i*factorial!(i-1); } With DMD 2.036 I'm getting: Error: template instance factorial!(-495) recursive expansion Seems like it expands both branches regardless of the condition. And seems to me like it shouldn't. There's probably a confusion here. It evaluates lazily the value of factorial!(), but its type (which happens to be infinitely recursive must be evaluated eagerly in order to infer the type of the ternary op. That makes sense. I guess the ?: op is defined to do that in all cases. Might be nice though if it didn't do that in cases where the condition was statically known. so int foo = (1==1) ? 6 : haha; would work, too? I think it would still need to check that the types of both branches match. Yeh, that could be confusing. Actually it would break a lot of code, too, now that I think of it. People use typeof(true?a:b) to get the common type. That's why it probably needs to be a distinct thing, a static ?:. --bb The simplest solution is probably to give the compiler a hint: enum foo = someStaticCondition ? cast(Foo)SomeTemplate!() : cast(Foo)SomeOtherTemplate!(); Looks a bit hackish (and doesn't currently work), though.
Re: [OT] Re: D: at Borders soon?
Chris Nicholson-Sauls wrote: Tim Matthews wrote: A fool thinks himself to be wise, but a wise man knows himself to be a fool. -- William Shakespeare. I know that I know nothing. -- Socrates, c. 399 BCE Shakespeare was such a plagiarist. Billy Shakes, Billy Gates, what's the diff? nothing .. both plagiarists
Re: Should the comma operator be removed in D2?
Your remark about function chaining reminded me of a nice feature that a few OOP languages provide: // pseudo syntax auto obj = new Object(); obj foo() ; bar() ; goo() foo, bar and goo above are three messages (methods) that are sent to the same object. i.e. it's the same as doing: obj.foo(); obj.bar(); obj.goo(); this means the functions can return void instead of returning this like you'd do in C++/D. I think it provides a cleaner conceptual separation between multiple messages sent to one object and real chaining, where foo returns obj2 which then receives message bar and so on.
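For contrast with the cascade syntax above, the D/C++ style of "real" chaining has each method return this. Obj and its methods are made up for illustration; the log string just makes the call order observable.

```d
// A hypothetical fluent class: every method returns `this` so calls chain.
class Obj
{
    string log;
    Obj foo() { log ~= "foo;"; return this; }
    Obj bar() { log ~= "bar;"; return this; }
    Obj goo() { log ~= "goo;"; return this; }
}

void main()
{
    auto obj = new Obj;
    obj.foo().bar().goo();   // same effect as three separate statements on obj
    assert(obj.log == "foo;bar;goo;");
}
```

This is exactly the conflation the cascade syntax avoids: here the return type is forced to be Obj even though the methods conceptually return nothing.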
Re: Should the comma operator be removed in D2?
yigal chripun wrote: You're remark of function chaining reminded me of a nice feture that a few OOP languages provide: // pseudo syntax auto obj = new Object(); obj foo() ; bar() ; goo() foo, bar and goo above are three mesages (methods) that are sent to the same object. i.e. it's the same as doing: obj.foo(); obj.bar(); obj.goo(); this means the functions can return void instead of returning this like you'd do in C++/D. I think it provides a cleaner conceptual separation between multiple messages sent to one object and real chaining when foo returns obj2 which then receives message bar and so on. This has to be flame-bait for sure! :-)
Re: D array expansion and non-deterministic re-allocation
Andrei Alexandrescu wrote, on November 17 at 18:45: 3. If you **really** care about performance, you should only append when you don't know the length in advance. If you know the length, you should always pre-allocate. We will have collections and all those good things, but I don't see how the proposal follows from the feedback. My perception is that this is a group of people who use D. Bartosz' concern didn't cause quite a riot, so as far as I can tell there is no big issue at stake. I didn't say anything (until now) because this was discussed already. Dynamic arrays/slices appending is horribly broken (I know it's well defined, and deterministic, but you are just condemned to make mistakes; it just doesn't work as one would expect, and even when you know how it works you have to keep fighting your intuition all the time). But it's a little pointless to keep making riots, when we were so close to finally fixing this with T[new]/whatever and it all went back. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ -- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) -- Hey you, out there on your own Sitting naked by the phone Would you touch me?
Re: alignment on stack-allocated arrays/structs
Don schrieb: http://d.puremagic.com/issues/show_bug.cgi?id=2278 Isn't this a distinct problem or am I wrong? This is not only about 8-byte boundaries.
Re: Should the comma operator be removed in D2?
yigal chripun wrote: Your remark about function chaining reminded me of a nice feature that a few OOP languages provide: // pseudo syntax auto obj = new Object(); obj foo() ; bar() ; goo() foo, bar and goo above are three messages (methods) that are sent to the same object, i.e. it's the same as doing: obj.foo(); obj.bar(); obj.goo(); This means the functions can return void instead of returning this like you'd do in C++/D. I think it provides a cleaner conceptual separation between multiple messages sent to one object and real chaining, where foo returns obj2 which then receives message bar and so on. with (auto obj = new Object) { foo; bar; goo; } Behold. ;)
Re: Go: A new system programing language
If you need something more, then it would be great if you could explain it. I haven't looked at this extensively. Suffice it to say that most of the useful C++ classes are inside a namespace.
Re: Go: A new system programing language
Mixing D's gc world with manually managed memory isn't hard, as long as the following rules are followed: 1. don't allocate in one language and expect to free in another 2. keep a 'root' to all gc allocated data in the D side of the fence (otherwise it may get collected) Yes, it's the second that's the tough part. For instance, consider the case of passing a callback (delegate) to the RPC system written in C++. How do you keep the associated data rooted without causing leaks? You'd need to remember to manually add it to the GC roots when the callback object is created and then unroot it when it's invoked. So this needs some kind of glue/binding system. I'm not saying it's impossible or even hard. Just that I've seen such things done before and they were non-trivial.
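For the delegate case, the glue amounts to rooting the closure's context before crossing the fence; a minimal sketch in D, assuming druntime's `GC.addRoot`/`GC.removeRoot` calls and a hypothetical C++ RPC layer on the other side:

```d
import core.memory : GC;

// Sketch: hand a D delegate to a C++ RPC layer without it being collected.
// The delegate's context pointer (dg.ptr) is what the collector must not lose.
void registerCallback(void delegate() dg)
{
    GC.addRoot(dg.ptr);          // pin the closure's context
    // ... pass dg.funcptr and dg.ptr through the C++ side ...
}

// Invoked by the (hypothetical) C++ layer when the RPC completes:
void invokeAndRelease(void delegate() dg)
{
    dg();
    GC.removeRoot(dg.ptr);       // callback fired; drop the root
}
```

The leak hazard Mike describes is exactly the case where the C++ side drops the callback without ever invoking it, so `removeRoot` never runs.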
Re: version() abuse! Note of library writers.
Travis Boucher boucher.tra...@gmail.com wrote in message news:hdvoke$1qs...@digitalmars.com... [posix/unix/linux/bsd version info and tips] As a mainly-windows person who just knows enough *nix to get by and to maintain ports of their own software, this is great stuff to know! Some of it I was already aware of, but there was a lot I wasn't. So much of that is the kind of information that can be very difficult for a *nix non-expert (to say nothing of novices) to find, or even think to look for. Consider your post much appreciated :) Looking forward to a wiki4d version. And I'll be making sure to check through my D tools/libs for any such misuses. Next time I do linux builds of my Goldie and SemiTwistDTools projects (I normally do most of my dev on Win, so the linux binaries get old, and I may occasionally break linux without knowing), I just may take you up on that offer to do a FreeBSD check on it.
Re: Go: A new system programing language
Socket IO in Phobos kind of stinks right now so that would need an overhaul first. We wouldn't be using raw socket IO but rather binding to the C++ RPC implementation. So the state of socket IO in Phobos (or Tango) is neither here nor there for the purposes of doing network communications inside Google. Anyway, perhaps I'll see if anyone is interested in playing around with this when Andrei's book is out and D2 is finalized.
Re: Go: A new system programing language
Walter Bright Wrote: Sean Kelly wrote: Walter Bright Wrote: Mixing D's gc world with manually managed memory isn't hard, as long as the following rules are followed: 1. don't allocate in one language and expect to free in another 2. keep a 'root' to all gc allocated data in the D side of the fence (otherwise it may get collected) This may actually work in D 2.0. core.thread has thread_attachThis() to make the D GC aware of an external thread, and gc_malloc() is an extern (C) function. I haven't tested this extensively however, so if you're keen to try it, please let me know if there are any problems. There's a more fundamental problem. There is simply no reliable way to find all the static data segments in a program. I talked with Hans Boehm about this, he uses some horrific kludges to try and do it in the Boehm gc. This problem is on every OS I've checked. Yeah, that's a problem. I've looked at this part of the Boehm collector too and it's absolutely horrifying. You'd think with the popularity (and history) of GC that there would be some API-level way to do this.
Re: D array expansion and non-deterministic re-allocation
Leandro Lucarella wrote: Andrei Alexandrescu, el 17 de noviembre a las 18:45 me escribiste: 3. If you **really** care about performance, you should only append when you don't know the length in advance. If you know the length, you should always pre-allocate. We will have collections and all those good things, but I don't see how the proposal follows from the feedback. My perception is that this is a group of people who use D. Bartosz' concern didn't cause quite a riot, so as far as I can tell there is no big issue at stake. I didn't say anything (until now) because this was discussed already. Dynamic arrays/slices appending is horribly broken (I know it's well defined, and deterministic, but you are just condemned to make mistakes, it just doesn't work as one would expect, even when you know how it works you have to keep fighting your intuition all the time). In which ways do you think arrays horribly broken? Same as Bartosz mentions? One question is whether you actually have had bugs and problems coding, or if arrays stopped you from getting work done. But it's a little pointless to keep making riots, when we were so close to finally fix this with T[new]/whatever and it all went back. H... resignation is not a trait of this group :o). Andrei
Re: version() abuse! Note of library writers.
Travis Boucher wrote: Another note, something I see in tango and I don't know why I didn't think about it before. If you want to require bash, use: #!/usr/bin/env bash instead of #!/bin/bash #!/usr/bin/bash Sadly, env may be in /bin/ or /usr/bin/. Andrei
Re: version() abuse! Note of library writers.
Travis Boucher Wrote: The problem I run into is the assumption that linux == unix/posix. This assumption is not correct. The use of version(linux) should be limited to code that: 1. Declares externals from sys/*.h 2. Accesses /proc or /sys (or other Linux specific pseudo filesystems) 3. Things that interface directly with the dynamic linker (eg. linking against libdl) 4. Other things that are linux specific Anything that accesses standard libc functions, standard unix semantics (eg. signals, shm, etc) should use version(Posix) or version(unix). It may help to think about weird configurations like targeting Cygwin on Windows (where Windows, Win32, and Posix may theoretically be defined) or WINE on Linux (where linux and Win32 may be defined). I haven't really considered the latter situation, but the former is handled correctly in D2. As you've said, the best idea is to version for the API you're targeting rather than the OS you're on. For the most part, a version also corresponds to a package in core.sys in D2. core.sys.posix contains pretty much everything *nix folks are used to, with core.sys.osx and core.sys.linux containing kernel APIs, etc.
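Sean's "version by API, not OS" advice might be sketched like this in D2 (the version identifiers are the real predefined ones; the bodies are placeholder examples):

```d
// Prefer the API-level version for anything that is plain POSIX...
version (Posix)
{
    import core.sys.posix.signal;   // signals, shm, etc.: works on Linux, OSX, the BSDs
}
else version (Windows)
{
    import core.sys.windows.windows;
}

// ...and reserve version (linux) for genuinely Linux-only interfaces.
version (linux)
{
    // e.g. code that reads /proc or /sys, or links against libdl
}
```

Structured this way, a port to a new *nix only has to touch the (small) `version (linux)` blocks instead of auditing every `version (linux) ... else` pair that silently assumed "not Linux means Windows".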
Re: version() abuse! Note of library writers.
Sean Kelly Wrote: or WINE on Linux (where linux and Win32 may be defined) Oops, add Posix to that list.
Re: version() abuse! Note of library writers.
Sean Kelly wrote: Sean Kelly Wrote: or WINE on Linux (where linux and Win32 may be defined) Oops, add Posix to that list. FWIW, Wine is one of the platforms used for testing Phobos. Andrei
Re: alignment on stack-allocated arrays/structs
Trass3r wrote: Don schrieb: http://d.puremagic.com/issues/show_bug.cgi?id=2278 Isn't this a distinct problem or am I wrong? This is not only about 8-byte boundaries. Well, sort of. It's impossible to align stack-allocated structs with any alignment greater than the alignment of the stack itself (which is 4 bytes). Anything larger than that and you HAVE to use the heap or alloca(). Since D2.007, static items use align(16); before that, they were also limited to align(4). Nothing on x86 benefits from more than 16 byte alignment, AFAIK, and it's never mandatory to use more than 8 byte alignment. I don't know so much about the recent GPUs, though -- do they really require 16 byte alignment or more?
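The heap/alloca workaround Don mentions boils down to over-allocating and rounding the pointer up to the boundary; a minimal sketch in D (not a library API, just the arithmetic):

```d
// Round a pointer up to the next 16-byte boundary.
void* align16(void* p)
{
    return cast(void*)((cast(size_t)p + 15) & ~cast(size_t)15);
}

void example()
{
    // Over-allocate by 15 bytes so there is room to slide up to the boundary.
    ubyte[64 + 15] buf;
    void* a = align16(buf.ptr);
    assert((cast(size_t)a & 15) == 0);   // now 16-byte aligned
}
```

The same trick works with `alloca(n + 15)` or a heap block; what it cannot do is make the compiler emit aligned SSE loads for a struct it still believes is only 4-byte aligned, which is the core of bug 2278.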
Re: version() abuse! Note of library writers.
Anders F Björklund wrote: PS. Some people even think that /usr/bin/python exists. :-) Guess they were confusing it with standard /usr/bin/perl What else should you use for Python scripts?
Re: alignment on stack-allocated arrays/structs
On Wed, 18 Nov 2009 11:03:19 -0500, Don nos...@nospam.com wrote: Trass3r wrote: Don schrieb: http://d.puremagic.com/issues/show_bug.cgi?id=2278 Isn't this a distinct problem or am I wrong? This is not only about 8-byte boundaries. Well, sort of. It's impossible to align stack-allocated structs with any alignment greater than the alignment of the stack itself (which is 4 bytes). Anything larger than that and you HAVE to use the heap or alloca(). Since D2.007, static items use align(16); before that, they were also limited to align(4). Nothing on x86 benefits from more than 16 byte alignment, AFAIK, and it's never mandatory to use more than 8 byte alignment. I don't know so much about the recent GPUs, though -- do they really require 16 byte alignment or more? NVIDIA only requires 16-byte alignment.
Quicker GC group allocations
Can something similar be done to the D GC, which shares the same problem? http://bugs.python.org/issue4074 Bye, bearophile
Re: Should we make DMD1.051 the recommended stable version?
Tomas Lindquist Olsen: It would also be possible to just release tango 0.99.8.1 (or something), LDC has a patch against 0.99.8 that probably fixes it for the latest dmd as well. Just a note: after 0.99 there is 0.100 then 0.101, etc. It's not a real number, it's a concatenation of natural numbers in a tree. Bye, bearophile
Re: alignment on stack-allocated arrays/structs
Don schrieb: Well, sort of. It's impossible to align stack-allocated structs with any alignment greater than the alignment of the stack itself (which is 4 bytes). Anything larger than that and you HAVE to use the heap or alloca(). So how do other compilers supporting that alignment syntax do it? Nothing on x86 benefits from more than 16 byte alignment, AFAIK, and it's never mandatory to use more than 8 byte alignment. I don't know so much about the recent GPUs, though -- do they really require 16 byte alignment or more? I'm not sure how exactly this works and why they require alignment. Couldn't find anything about that in the clEnqueueWriteBuffer description where data gets written into GPU memory. The specification for the OpenCL C language itself only states: A data item declared to be a data type in memory is always aligned to the size of the data type in bytes. For example, a float4 variable will be aligned to a 16-byte boundary, a char2 variable will be aligned to a 2-byte boundary. A built-in data type that is not a power of two bytes in size must be aligned to the next larger power of two. This rule applies to built-in types only, not structs or unions. They also strangely state: The components of vector data types with 1 ... 4 components can be addressed as vector_data_type.xyzw. float4 c, a, b; c.xyzw = (float4)(1.0f, 2.0f, 3.0f, 4.0f); c.z = 1.0f; // is a float c.xy = (float2)(3.0f, 4.0f); // is a float2 So I wonder why they used arrays in the headers and not structs to be consistent with this.
Re: D array expansion and non-deterministic re-allocation
Andrei Alexandrescu Wrote: My concern is the semantics of the language. As it is defined right now, a conforming implementation is free to use a quantum random number generator to decide whether to re-allocate or not. Is it likely? I don't think so; but the non-determinism is part of the semantics of D arrays. It would not be difficult to specify in the language definition (and TDPL) that behavior is deterministic for a given platform. I think this has some impact on the freedom of the memory allocator, but probably not major. Actually this wouldn't fix the problem. Although this would make the program deterministic, it would still exhibit chaotic behavior (and chaos is a pretty good simulator of non-determinism--see random number generators). An input string that is one character longer than in the previous run in one part of the program could cause change in allocation in a completely different part of the program (arbitrary long distance coupling). Theoretically, the heap is deterministic, but in practice no program should depend on all pointers having exactly the same values from run to run. For all intents and purposes the heap should be treated as non-deterministic. This is why no language bothers to impose determinism on the heap. Neither should D.
Re: Go: A new system programing language
Mike Hearn wrote: Mixing D's gc world with manually managed memory isn't hard, as long as the following rules are followed: 1. don't allocate in one language and expect to free in another 2. keep a 'root' to all gc allocated data in the D side of the fence (otherwise it may get collected) Yes it's the second that's the tough part. For instance consider the case of passing a callback (delegate) to the RPC system written in c++. How do you keep the associated data rooted without causing leaks? You'd need to remember to manually add it to the gc roots when the callback object is created and then unroot them when it's invoked. So this needs some kind of glue/binding system. I'm not saying it's impossible or even hard. Just that I've seen such things done before and they were non-trivial. Most of the time, nothing needs to be done because the reference is on the parameter stack that calls the C function. For the callback object, manually adding/removing the root (there are calls to the gc to do this) shouldn't be any more onerous than manually managing memory for it, which is done in C/C++ anyway.
Re: D array expansion and non-deterministic re-allocation
Bartosz Milewski wrote: Andrei Alexandrescu Wrote: My concern is the semantics of the language. As it is defined right now, a conforming implementation is free to use a quantum random number generator to decide whether to re-allocate or not. Is it likely? I don't think so; but the non-determinism is part of the semantics of D arrays. It would not be difficult to specify in the language definition (and TDPL) that behavior is deterministic for a given platform. I think this has some impact on the freedom of the memory allocator, but probably not major. Actually this wouldn't fix the problem. Although this would make the program deterministic, it would still exhibit chaotic behavior (and chaos is a pretty good simulator of non-determinism--see random number generators). I am glad you have also reached that conclusion. An input string that is one character longer than in the previous run in one part of the program could cause change in allocation in a completely different part of the program (arbitrary long distance coupling). Then you must restart your argument which was centered around non-determinism. It starts like that: I read Andrei's chapter on arrays and there's one thing that concerns me. When a slice is extended, the decision to re-allocate, and therefore to cut its connection to other slices, is non-deterministic. Andrei
Re: D array expansion and non-deterministic re-allocation
Andrei Alexandrescu Wrote: Leandro Lucarella wrote: Andrei Alexandrescu, el 17 de noviembre a las 18:45 me escribiste: 3. If you **really** care about performance, you should only append when you don't know the length in advance. If you know the length, you should always pre-allocate. We will have collections and all those good things, but I don't see how the proposal follows from the feedback. My perception is that this is a group of people who use D. Bartosz' concern didn't cause quite a riot, so as far as I can tell there is no big issue at stake. I didn't say anything (until now) because this was discussed already. Dynamic arrays/slices appending is horribly broken (I know it's well defined, and deterministic, but you are just condemned to make mistakes, it just doesn't work as one would expect, even when you know how it works you have to keep fighting your intuition all the time). In which ways do you think arrays horribly broken? Same as Bartosz mentions? One question is whether you actually have had bugs and problems coding, or if arrays stopped you from getting work done. What Leandro points out is that using D arrays is not straightforward and may present a steep learning curve. In particular, even the beginner would be required to understand section 4.1.9 of TDPL (Expanding). For many people arrays' behavior might be counter-intuitive and a source of mistakes. In fact, even after reading 4.1.9 I don't know what to expect in some cases. Here's an example: int[] a = [0]; auto b = a; a ~= 1; b ~= 2; What is a[1]? Is this considered stomping and requiring a re-allocation?
Re: version() abuse! Note of library writers.
grauzone wrote: PS. Some people even think that /usr/bin/python exists. :-) Guess they were confusing it with standard /usr/bin/perl What else should you use for Python scripts? #!/usr/bin/env python, or be prepared that it might be installed in /usr/local/bin/python or something. --anders
Re: version() abuse! Note of library writers.
Sean Kelly wrote: It may help to think about weird configurations like targeting Cygwin on Windows (where Windows, Win32, and Posix may theoretically be defined) or WINE on Linux (where linux and Win32 may be defined). Something is broken if Windows is declared in Cygwin, or if linux is declared when running under Wine... --anders
Re: D array expansion and non-deterministic re-allocation
Andrei Alexandrescu Wrote: One thing that Bartosz didn't mention is that this unique is different than the other unique, which I think reveals the half-bakedness of the proposal. The other unique that was intensively discussed before is transitive, meaning that the root reference points to a graph of objects having no other connection to the outer world. This unique is very different because it's shallow - only the array chunk is unaliased, but the members of the array don't need to be. This is a very important distinction. Actually I was thinking of deep unique, but understandably I neither described full details of my proposal, nor have I completely baked it, which would require some serious work and a lot more discussion. One of these days I will blog about uniqueness in more detail. The problem of transitivity with respect to arrays has been discussed before in the context of const and immutable. We realized that shallow constness/immutability would be useful in some scenarios but, because of the added complexity, we punted it. My feeling is that, similarly, in most cases the restriction that a unique array may only hold unique objects is not unreasonable.
Re: Quicker GC group allocations
bearophile Wrote: Can something similar be done to the D GC, that share the same problem? http://bugs.python.org/issue4074 Seems like that would require a generational GC? If so, that pretty much means a new GC. If performance is an issue for a known allocation scheme, it may also be worth calling GC.reserve() beforehand. This will pre-allocate a bunch of free memory within the GC and keep it from needing to collect, assuming you're just growing your data pool. In casual tests, if I simply looked at the high-water mark for my app's memory usage and called GC.reserve() with that number at the beginning of main() my app performance was double or better than it was without the reserve() line.
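The reserve trick Sean describes is a one-liner with D2's druntime (`core.memory.GC.reserve`); the size below is a placeholder for the app's measured high-water mark:

```d
import core.memory : GC;

void main()
{
    // Pre-allocate free space inside the GC so a growing data pool
    // doesn't trigger intermediate collections. 64 MB is an assumed
    // figure; substitute your application's observed peak usage.
    GC.reserve(64 * 1024 * 1024);

    // ... allocation-heavy work ...
}
```

Since `reserve` only grows the GC's free pool, over-reserving wastes address space but is otherwise harmless; under-reserving just means some collections still happen.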
Re: version() abuse! Note of library writers.
Anders F Björklund Wrote: Sean Kelly wrote: It may help to think about weird configurations like targeting Cygwin on Windows (where Windows, Win32, and Posix may theoretically be defined) or WINE on Linux (where linux and Win32 may be defined). Something is broken if Windows is declared in Cygwin, or if linux is declared when running under Wine... Why? That's the OS the app is being compiled on/for. Let's consider another example. Windows Services for Unix Applications (I think that's what it's called now) is a POSIX subsystem built into Vista and Windows 7. There, both Windows and Posix would definitely be defined, and I *think* Win32 would be defined as well. I suppose one could argue that the OS version should be Interix instead of Windows, but it amounts to the same thing.
Re: version() abuse! Note of library writers.
Sean Kelly wrote: It may help to think about weird configurations like targeting Cygwin on Windows (where Windows, Win32, and Posix may theoretically be defined) or WINE on Linux (where linux and Win32 may be defined). Something is broken if Windows is declared in Cygwin, or if linux is declared when running under Wine... Why? That's the OS the app is being compiled on/for. Let's consider another example. Windows Services for Unix Applications (I think that's what it's called now) is a POSIX subsystem built into Vista and Windows 7. There, both Windows and Posix would definitely be defined, and I *think* Win32 would be defined as well. I suppose one could argue that the OS version should be Interix instead of Windows, but it amounts to the same thing. I guess it'll get fixed when it moves to API over OS... Just that I had been surprised if D1 was changed like that. --anders
Re: Developing a browser (Firefox) extension with D
On 11/18/09 05:03, Justin Johansson wrote: I'm just wondering for a moment if D might be a good vehicle for developing a browser extension for Firefox, or any other browser for that matter. Has anyone either considered doing, or has actually done, any browser extension development with D and have some thoughts or experience to share? Thanks for all feedback, Justin Johansson I think someone tried to do (or did) an extension for Internet Explorer using DWT.
Re: static interface
In order to achieve good completion support by editors (like IntelliSense) with duck-typed variables, concept-checking code should contain more type information than the code now used in Phobos. Example: static interface InputRange(T) { const bool empty(); T front(); void popFront(); } or in the current D syntax: template isInputRange(T, R) { enum isInputRange = is(typeof(const(R).init.empty()) == bool) && is(typeof(R.init.front()) == T) && is(typeof(R.init.popFront()) == void); } What do you think about this?
Re: Should the comma operator be removed in D2?
Yigal Chripun wrote: snip the only use case that will break is if the two increments are dependent on the order (unless tuples are also evaluated from left to right): e.g. a + 5, b + a // snip If you're overloading the + operator to have an externally visible side effect, you're probably obfuscating your code whether you use the comma operator or not. Moreover, how can you prove that nothing that uses the operator's return value can constitute a use case? Stewart.
Re: Should the comma operator be removed in D2?
Robert Jacques wrote: snip However, I imagine tuple(a++,b++) would have some overhead, which is exactly what someone is trying to avoid by using custom for loops. snip Even if tuples do have an overhead, it would be a naive compiler that generates it in cases where nothing is done with it. Stewart.
Re: :? in templates
Bill Baxter wrote: On Wed, Nov 18, 2009 at 3:16 AM, retard r...@tard.com.invalid wrote: snip There's probably a confusion here. It evaluates lazily the value of factorial!(), but its type (which happens to be infinitely recursive) must be evaluated eagerly in order to infer the type of the ternary op. That makes sense. I guess the ?: op is defined to do that in all cases. Might be nice though if it didn't do that in cases where the condition was statically known. That would lead to expressions changing type in unexpected circumstances. But what if we made it depend on whether the context requires a compile-time constant? Or if we just had a separate static if version of ?: Maybe make ?? a compile-time version, which selects the type as well as the value? (static ? would also fit into the grammar without adding ambiguity, but I'm not sure that it looks as nice.) Stewart.
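The eager-typing problem under discussion can be illustrated with the recursive factorial template itself; a sketch (the commented-out line is the version that fails):

```d
template factorial(int n)
{
    // With ?:, the false branch's type is computed even when the
    // condition is a compile-time constant, so factorial!(n - 1) is
    // instantiated without bound and compilation never terminates:
    //   enum factorial = n <= 1 ? 1 : n * factorial!(n - 1);

    // static if only instantiates the branch that is taken:
    static if (n <= 1)
        enum factorial = 1;
    else
        enum factorial = n * factorial!(n - 1);
}

static assert(factorial!5 == 120);
```

A `static ?` / `??` operator as proposed above would essentially fold the `static if` pattern back into expression form.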
Re: Should the comma operator be removed in D2?
Ellery Newcomer wrote: foo(a, b) is identical to foo(t); does ML have any equivalent of template parameters? eg foo!(1,int); I'd suggest reading the Wikipedia page about ML. in short, ML is a strongly, statically typed language much like D, but doesn't require type annotations. it uses the Hindley-Milner type inference algorithm (named after its creators) which infers the types at compile-time. here's a naive factorial implementation in ML: fun f (0 : int) : int = 1 | f (n : int) : int = n * f (n-1) you can provide type annotations as above if you want to specify explicit types. here's another function: fun foo (n) = n + n if you use foo(3.5) the compiler would use a version of foo with signature: real -> real but if you use foo(4) the compiler will use a version of foo with signature int -> int note that I didn't need to specify the type as parameter. foo's signature is actually: 'a -> 'a which is like doing in D: T foo(T) (T n) { return n + n; } but unlike ML, in D/C++ you need to provide the type parameter yourself. does that answer your question?
Re: Should the comma operator be removed in D2?
Stewart Gordon wrote: Yigal Chripun wrote: snip the only use case that will break is if the two increments are dependent on the order (unless tuples are also evaluated from left to right): e.g. a + 5, b + a // snip If you're overloading the + operator to have an externally visible side effect, you're probably obfuscating your code whether you use the comma operator or not. Moreover, how can you prove that nothing that uses the operator's return value can constitute a use case? Stewart. I don't follow you. What I said was that if you have the above in a for loop with a comma expression, you'd expect to *first* add 5 to a and *then* add the new a to b (comma operator defines left to right order of evaluation). tuples in general do not have to require a specific order since you keep all the results anyway, so the above could break. by defining tuples to be evaluated with the same order, the problem would be solved.
Compile-time DSL to D compilation?
Someone was working on this a while back. Was it BCS? What's the status of the project? Thanks, --bb
Re: D loosing the battle
Don wrote: Sean Kelly wrote: Andrei Alexandrescu Wrote: I have two more that compliment that. Some people think there writing complement and they're correctly but they aren't. I see what you did there! It peeked my interest, but it was a mute point. Well, that point is seperate from mine. Its just that the clarity of the point is marred by it's being prolix. Andrei
Short list with things to finish for D2
We're entering the finale of D2 and I want to keep a short list of things that must be done and integrated in the release. It is clearly understood by all of us that there are many things that could and probably should be done. 1. Currently Walter and Don are diligently fixing the problems marked on the current manuscript. 2. User-defined operators must be revamped. Fortunately Don already put in an important piece of functionality (opDollar). What we're looking at is a two-pronged attack motivated by Don's proposal: http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel/DIPs/DIP7 The two prongs are: * Encode operators by compile-time strings. For example, instead of the plethora of opAdd, opMul, ..., we'd have this: T opBinary(string op)(T rhs) { ... } The string is +, *, etc. We need to design what happens with read-modify-write operators like += (should they be dispatched to a different function? etc.) and also what happens with index-and-modify operators like []=, []+= etc. Should we go with proxies? Absorb them in opBinary? Define another dedicated method? etc. * Loop fusion that generalizes array-wise operations. This idea of Walter is, I think, very good because it generalizes and democratizes magic. The idea is that, if you do a = b + c; and b + c does not make sense but b and c are ranges for which a.front = b.front + c.front does make sense, to automatically add the iteration paraphernalia. 3. It was mentioned in this group that if getopt() does not work in SafeD, then SafeD may as well pack and go home. I agree. We need to make it work. Three ideas discussed with Walter: * Allow taking addresses of locals, but in that case switch allocation from stack to heap, just like with delegates. If we only do that in SafeD, behavior will be different than with regular D. In any case, it's an inefficient proposition, particularly for getopt() which actually does not need to escape the addresses - just fills them up. 
* Allow @trusted (and maybe even @safe) functions to receive addresses of locals. Statically check that they never escape an address of a parameter. I think this is very interesting because it enlarges the common ground of D and SafeD. * Figure out a way to reconcile ref with variadics. This is the actual reason why getopt chose to traffic in addresses, and fixing it is the logical choice and my personal favorite. 4. Allow private members inside a template using the eponymous trick: template wyda(int x) { private enum geeba = x / 2; alias geeba wyda; } The names inside an eponymous template are only accessible to the current instantiation. For example, wyda!5 cannot access wyda!(4).geeba, only its own geeba. That way we elegantly avoid the issue of where this symbol is looked up. 5. Chain exceptions instead of having a recurrent exception terminate the program. I'll dedicate a separate post to this. 6. There must be many things I forgot to mention, or that cause grief to many of us. Please add to/comment on this list. Andrei
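Item 2's string-encoded operators might look like this in practice (a sketch of the proposal, not settled syntax; Num is a made-up type):

```d
struct Num
{
    int value;

    // One template replaces the plethora of opAdd, opMul, ...;
    // the operator token arrives as a compile-time string.
    Num opBinary(string op)(Num rhs)
    {
        static if (op == "+")
            return Num(value + rhs.value);
        else static if (op == "*")
            return Num(value * rhs.value);
        else
            static assert(0, "operator " ~ op ~ " not supported for Num");
    }
}

unittest
{
    assert((Num(2) + Num(3)).value == 5);
    assert((Num(2) * Num(3)).value == 6);
}
```

One attraction is that a single `static if` chain (or a string mixin over `op`) can cover whole families of operators at once, which is exactly what the open questions about +=, []= etc. are probing.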
Chaining exceptions
Consider: void fun() { try { throw new Exception("a"); } finally { throw new Exception("b"); } } Currently this function would unceremoniously terminate the program. I think it shouldn't. What should happen is that the "a" exception should be propagated unabated, and the "b" exception should be appended to it. The Exception class should have a property next that returns a reference to the next exception thrown (in this case "b"), effectively establishing an arbitrarily long singly-linked list of exceptions. A friend told me that that's what Java does, with the difference that the last exception thrown takes over, so the chain comes reversed. I strongly believe "a" is the main exception and "b" is a contingent exception, so we shouldn't do what Java does. But Java must have some good reason to go the other way. Please chime in with (a) a confirmation/infirmation of Java's mechanism above; (b) links to motivations for Java's approach, (c) any comments about all of the above. Thanks, Andrei
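In code, the proposed behavior would read like this (a sketch assuming the hypothetical `next` property; today this program simply terminates):

```d
void fun()
{
    try
    {
        throw new Exception("a");
    }
    finally
    {
        // Under the proposal this does not kill the program;
        // "b" is appended to "a"'s chain instead.
        throw new Exception("b");
    }
}

void main()
{
    try
    {
        fun();
    }
    catch (Exception e)
    {
        // "a" stays the primary exception; "b" hangs off the chain.
        assert(e.msg == "a");
        assert(e.next !is null && e.next.msg == "b");
    }
}
```

Under the Java-style ordering described above, the asserts would be reversed: the catch would see "b" first, with "a" reachable through its cause chain.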
Re: Should the comma operator be removed in D2?
On Wed, 18 Nov 2009 15:01:10 -0500, Stewart Gordon smjg_1...@yahoo.com wrote: Robert Jacques wrote: snip However, I imagine tuple(a++,b++) would have some overhead, which is exactly what someone is trying to avoid by using custom for loops. snip Even if tuples do have an overhead, it would be a naive compiler that generates it in cases where nothing is done with it. Stewart. Actually, as the current compiler knows nothing about tuples, I'd have to call it very naive with respect to tuple optimization. :)
Re: Short list with things to finish for D2
What about the business with signed/unsigned integer comparisons?
Re: D array expansion and non-deterministic re-allocation
Andrei Alexandrescu, el 18 de noviembre a las 07:22 me escribiste: Leandro Lucarella wrote: Andrei Alexandrescu, el 17 de noviembre a las 18:45 me escribiste: 3. If you **really** care about performance, you should only append when you don't know the length in advance. If you know the length, you should always pre-allocate. We will have collections and all those good things, but I don't see how the proposal follows from the feedback. My perception is that this is a group of people who use D. Bartosz' concern didn't cause quite a riot, so as far as I can tell there is no big issue at stake. I didn't say anything (until now) because this was discussed already. Dynamic arrays/slices appending is horribly broken (I know it's well defined, and deterministic, but you are just condemned to make mistakes, it just doesn't work as one would expect, even when you know how it works you have to keep fighting your intuition all the time). In which ways do you think arrays horribly broken? Same as Bartosz mentions? Yes, I think dynamic arrays are grouping 2 separate things: a proper dynamic array able to efficiently appending stuff (without hacks like the size cache) and a slice (a read-only view of a piece of memory); a random range if you want to put it in terms of D. A dynamic array should be a reference type, a slice a value type. I explained my view before. I wont explain it with much detail again (I hope you remember the thread). One question is whether you actually have had bugs and problems coding, or if arrays stopped you from getting work done. I had many bugs because of the reallocation when appending. But it's a little pointless to keep making riots, when we were so close to finally fix this with T[new]/whatever and it all went back. H... resignation is not a trait of this group :o). It is after a good fight :) From time to time this kind of issues (that are present in the group since I can remember, which is about 4 years maybe) keep popping in the NG. 
I get tired of repeating myself over and over again (like many others, I guess). -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ -- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) -- Hey, it's George. I got nothing to say. -- George Costanza, leaving a message on Jerry's answering machine
Re: Short list with things to finish for D2
Andrei Alexandrescu: * Encode operators by compile-time strings. For example, instead of the plethora of opAdd, opMul, ..., we'd have this: T opBinary(string op)(T rhs) { ... } The string is "+", "*", etc. Can you show an example of defining an operator, like a minus, with that? In my set data structure I'd like to define <= among two sets as "is subset". Can that design allow me to overload just <= and >= ? (opCmp is not enough here). Bye, bearophile
Re: Should the comma operator be removed in D2?
On Wed, 18 Nov 2009 06:34:59 -0500, retard r...@tard.com.invalid wrote: Tue, 17 Nov 2009 20:23:35 -0500, Robert Jacques wrote: Also, all those well known optimizations don't magically work for structs: I've seen modern compilers do some pretty stupid things when structs and temporary values are involved. Are you talking about dmc/dmd now? Have you tried gcc 4.4 or 4.5, the llvm dev version or the latest visual c++? Temporary values are often used with e.g. expression templates and the compilers have generated decently performing code for ages now. dmd is the only stupid compiler which cannot inline e.g. expression templates for matrix operations. The bug I'm thinking of was in the Open64 compiler, and I think it's been squashed now. But this happened in the last 2-3 years, and was only caught because a major vendor really stressed the performance of struct operations on a particular backend (Nvidia and GPUs). The point is that the actual optimization rules in the compiler for free variables and for structs are sometimes completely separate entities. It doesn't mean it can't be done, it just means it might have to be coded. Basically, Tuples introduce a level of obfuscation, which might (or might not) hamper the current optimizer.
Re: Short list with things to finish for D2
Ellery Newcomer wrote: What about the business with signed/unsigned integer comparisons? I think that's somewhere in the TDPL non-working snippets, but I added your message to my worklist to make sure. Andrei
Re: Short list with things to finish for D2
bearophile wrote: Andrei Alexandrescu: * Encode operators by compile-time strings. For example, instead of the plethora of opAdd, opMul, ..., we'd have this: T opBinary(string op)(T rhs) { ... } The string is "+", "*", etc. Can you show an example of defining an operator, like a minus, with that? struct BigInt { BigInt opBinary(string op)(BigInt rhs) if (op == "-") { ... } } In my set data structure I'd like to define <= among two sets as "is subset". Can that design allow me to overload just <= and >= ? (opCmp is not enough here). It could if we decide to deprecate opCmp. I happen to like it; if you define a <= b for inclusion, people will think it's natural to also allow a < b for strict inclusion. But that's up for debate. I'm not sure what the best way is. Classes have opEquals and opCmp so the question is - do we want structs to be somewhat compatible with classes or not? My personal favorite choice would be to go full bore with compile-time strings. Andrei
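For illustration, here is a sketch of bearophile's use case under the scheme being debated. It assumes comparison operators would also be rewritten through opBinary, which is exactly the open question in this exchange (in the end that may not be how D resolves it), and the Set type is invented for the example:

```d
struct Set
{
    bool[int] items;

    // Overload only <= ("is subset") and >= ("is superset").
    // A total order via opCmp would be wrong for sets, since two
    // sets can be incomparable.
    bool opBinary(string op)(Set rhs) if (op == "<=" || op == ">=")
    {
        static if (op == ">=")
            return rhs.opBinary!"<="(this); // superset = reversed subset
        else
        {
            foreach (k, v; items)
                if (k !in rhs.items)
                    return false;
            return true;
        }
    }
}
```

The point of the design question: opCmp forces all four comparisons into one total-order function, while the string-based scheme would let a type opt into just the operators that make sense for it.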
Re: Short list with things to finish for D2
bearophile wrote: Andrei Alexandrescu: * Encode operators by compile-time strings. For example, instead of the plethora of opAdd, opMul, ..., we'd have this: T opBinary(string op)(T rhs) { ... } The string is "+", "*", etc. Can you show an example of defining an operator, like a minus, with that? T opBinary(string op)(T rhs) { static if (op == "-") return data - rhs.data; static if (op == "+") return data + rhs.data; // ... maybe this would work too ... mixin("return data " ~ op ~ " rhs.data;"); } I love this syntax over the tons of different operation functions. Makes it so much nicer, especially when supporting a bunch of different parameter types (vectors are a good example of this). T opBinary(string op)(T rhs) T opBinary(string op)(float[3] rhs) T opBinary(string op)(float rx, float ry, float rz) In my set data structure I'd like to define <= among two sets as "is subset". Can that design allow me to overload just <= and >= ? (opCmp is not enough here). Bye, bearophile
Re: Short list with things to finish for D2
Andrei Alexandrescu wrote: Ellery Newcomer wrote: What about the business with signed/unsigned integer comparisons? I think that's somewhere in the TDPL non-working snippets, but I added your message to my worklist to make sure. Andrei Cool, thanks
Re: Chaining exceptions
Andrei Alexandrescu Wrote: Consider: void fun() { try { throw new Exception("a"); } finally { throw new Exception("b"); } } Currently this function would unceremoniously terminate the program. I think it shouldn't. Have you tried it? I think the call to terminate() is commented out. It was like this before I ever started mucking with the runtime many moons ago. What I think currently happens is that the exception in the finally clause will replace the one being thrown. That or it's discarded, I really can't remember which. Either way, the current behavior is a bit weird. What should happen is that the "a" exception should be propagated unabated, and the "b" exception should be appended to it. The Exception class should have a property next that returns a reference to the next exception thrown (in this case "b"), effectively establishing an arbitrarily long singly-linked list of exceptions. The Exception class already has a next property, but I've always seen this as a way to nest exceptions. For example: try { throw new Exception; } catch( Exception e ) { throw new MyException( e ); } So a network API might throw a NetworkException that references a SocketException with more detailed info about the exact problem, etc. The reason I'm unsure about chaining related exceptions as opposed to using chaining as a means of repackaging exceptions is that the catch handler that ultimately executes will be the one that matches the first exception in the chain, not the first that matches any exception in the chain. I haven't thought about this too carefully, but it seems like it might be difficult to write correct code with this model. A friend told me that that's what Java does, with the difference that the last exception thrown takes over, so the chain comes reversed. I strongly believe "a" is the main exception and "b" is a contingent exception, so we shouldn't do what Java does. But Java must have some good reason to go the other way. 
When a dozen holes appear in a dike, I'm not sure it matters which one you try to plug first :-) Unless there's some way to start with the biggest one, I suppose. Seems like with the suggested model, the correct approach may be to always catch Exception and walk the whole chain to figure out what to do. But that sounds awfully close to C-style error handling.
Re: Short list with things to finish for D2
== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article 6. There must be many things I forgot to mention, or that cause grief to many of us. Please add to/comment on this list. Andrei I assume we're mostly talking about spec stuff, not implementation stuff. Nonetheless, to the extent that the GC API is considered part of the language spec, I think we need to fix that. 1. My precise heap scanning patch has some ripple effects into the GC API and could affect what is done with the `new` operator. Therefore, we might not want to postpone this until after D2 is final. 2. We should probably tighten up the spec to make sure that a moving, fully precise, etc. GC can be implemented at some point in the future without modifying the spec after D2 is declared gold.
Re: Short list with things to finish for D2
Travis Boucher wrote: bearophile wrote: Andrei Alexandrescu: * Encode operators by compile-time strings. For example, instead of the plethora of opAdd, opMul, ..., we'd have this: T opBinary(string op)(T rhs) { ... } The string is "+", "*", etc. Can you show an example of defining an operator, like a minus, with that? T opBinary(string op)(T rhs) { static if (op == "-") return data - rhs.data; static if (op == "+") return data + rhs.data; // ... maybe this would work too ... mixin("return data " ~ op ~ " rhs.data;"); } I love this syntax over the tons of different operation functions. Makes it so much nicer, especially when supporting a bunch of different parameter types (vectors are a good example of this). Indeed, the advantage of this is that you can use string mixins to implement many operations at once, instead of laboriously defining many functions. Andrei
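A sketch of the "many operations at once" payoff Andrei mentions, using a vector type since Travis brought vectors up (Vec3 is an invented example type, not from the thread):

```d
struct Vec3
{
    double x, y, z;

    // One template covers +, -, * and / component-wise; the string
    // mixin pastes the operator token into each expression, so no
    // per-operator function needs to be written.
    Vec3 opBinary(string op)(Vec3 rhs)
        if (op == "+" || op == "-" || op == "*" || op == "/")
    {
        mixin("return Vec3(x " ~ op ~ " rhs.x, y " ~ op ~ " rhs.y, z " ~ op ~ " rhs.z);");
    }
}

unittest
{
    auto a = Vec3(1, 2, 3), b = Vec3(4, 5, 6);
    assert(a + b == Vec3(5, 7, 9));
    assert(b - a == Vec3(3, 3, 3));
}
```

Adding a fifth operator here is a one-token change to the constraint, which is the laborious part the old opAdd/opSub/opMul/opDiv quartet forced you to repeat.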
Re: Short list with things to finish for D2
dsimcha wrote: == Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article 6. There must be many things I forgot to mention, or that cause grief to many of us. Please add to/comment on this list. Andrei I assume we're mostly talking about spec stuff, not implementation stuff. Nonetheless, to the extent that the GC API is considered part of the language spec, I think we need to fix that. 1. My precise heap scanning patch has some ripple effects into the GC API and could affect what is done with the `new` operator. Therefore, we might not want to postpone this until after D2 is final. 2. We should probably tighten up the spec to make sure that a moving, fully precise, etc. GC can be implemented at some point in the future without modifying the spec after D2 is declared gold. I think you can safely work on that asynchronously. TDPL won't include a GC API Reference - for that kind of stuff I think it's ok to refer to the online documentation. Thanks for working on this! Andrei
Re: Chaining exceptions
Sean Kelly wrote: Andrei Alexandrescu Wrote: Consider: void fun() { try { throw new Exception("a"); } finally { throw new Exception("b"); } } Currently this function would unceremoniously terminate the program. I think it shouldn't. Have you tried it? I think the call to terminate() is commented out. It was like this before I ever started mucking with the runtime many moons ago. What I think currently happens is that the exception in the finally clause will replace the one being thrown. That or it's discarded, I really can't remember which. Either way, the current behavior is a bit weird. Guilty as charged. Haven't tried. I don't have D installed at work, but I tried a Java example and indeed it looks like the last exception thrown just takes over. class Test { public static void main(String[] args) { System.out.println("Hello World!"); try { (new Test()).fun(); } catch (Exception e) { System.out.print(e); System.out.print(e.getCause()); System.out.print("\n"); e.printStackTrace(); } } void fun() throws Exception { try { throw new Exception("a"); } finally { throw new Exception("b"); } } } Even more interestingly, calling printStackTrace() does not acknowledge the originating exception. Calling getCause() returns null. So essentially at the catch point I'm not sure there's a way to get to the "a" exception. What should happen is that the "a" exception should be propagated unabated, and the "b" exception should be appended to it. The Exception class should have a property next that returns a reference to the next exception thrown (in this case "b"), effectively establishing an arbitrarily long singly-linked list of exceptions. The Exception class already has a next property, but I've always seen this as a way to nest exceptions. For example: try { throw new Exception; } catch( Exception e ) { throw new MyException( e ); } So a network API might throw a NetworkException that references a SocketException with more detailed info about the exact problem, etc. 
The reason I'm unsure about chaining related exceptions as opposed to using chaining as a means of repackaging exceptions is that the catch handler that ultimately executes will be the one that matches the first exception in the chain, not the first that matches any exception in the chain. I haven't thought about this too carefully, but it seems like it might be difficult to write correct code with this model. A friend told me that that's what Java does, with the difference that the last exception thrown takes over, so the chain comes reversed. I strongly believe "a" is the main exception and "b" is a contingent exception, so we shouldn't do what Java does. But Java must have some good reason to go the other way. When a dozen holes appear in a dike, I'm not sure it matters which one you try to plug first :-) Unless there's some way to start with the biggest one, I suppose. Seems like with the suggested model, the correct approach may be to always catch Exception and walk the whole chain to figure out what to do. But that sounds awfully close to C-style error handling. Well I'm not sure about the metaphor. What I can tell from my code is that exceptions are often contingent one upon another (not parallel and independent). A typical example: writing to a file fails, but then closing it also fails, and possibly attempting to remove the partial file off disk also fails. In that case, the important message is that the file couldn't be written to; the rest is aftermath. If the doctor says "You have a liver problem, which causes your nails to have a distinctive shape", you don't mind the nails as much as the liver. There's also the opposite flow, for example an exception in some validation prevents a database update. But I don't think code regularly does essential work in destructors, finally blocks, and scope statements. The essential work is done on the straight path, and the important error is happening on the straight path. 
In fact, what I just wrote tilted me a bit more in favor of the master exception + contingent camarilla model. That model also suggests that most of the time you only need to look at the top exception thrown to figure out the root of the problem; talking to the camarilla is optional. Andrei
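A sketch of what client code would look like under the model Andrei favors, assuming the proposed semantics (the "a" exception propagates and the collateral one is appended via a next property); this is a hypothetical illustration of the proposal, not a description of the runtime at the time of the thread:

```d
import std.stdio;

void risky()
{
    try
        throw new Exception("write failed");  // the main exception
    finally
        throw new Exception("close failed");  // appended as contingent
}

void main()
{
    try
        risky();
    catch (Exception e)
    {
        // The head of the chain is the root cause...
        writeln("error: ", e.msg);
        // ...inspecting the "camarilla" of follow-on failures is optional.
        for (Throwable t = e.next; t !is null; t = t.next)
            writeln("  while handling it: ", t.msg);
    }
}
```

The key property is the one Andrei points out: a handler that only cares about the root problem can ignore the chain entirely and still catch the right exception.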
Re: Short list with things to finish for D2
bearophile wrote: Andrei Alexandrescu: * Encode operators by compile-time strings. For example, instead of the plethora of opAdd, opMul, ..., we'd have this: T opBinary(string op)(T rhs) { ... } The string is "+", "*", etc. I thought the problem with this was that the lexer/parser would have to know about semantics, which is against the goals of the language. Would the operator actually be inside quotes? Anyway, do we _really_ want to make it possible that valid D code will look like ASCII art?
Re: Short list with things to finish for D2
grauzone wrote: bearophile wrote: Andrei Alexandrescu: * Encode operators by compile-time strings. For example, instead of the plethora of opAdd, opMul, ..., we'd have this: T opBinary(string op)(T rhs) { ... } The string is "+", "*", etc. I thought the problem with this was that the lexer/parser would have to know about semantics, which is against the goals of the language. Would the operator actually be inside quotes? It's simpler than that. a + b will be rewritten into a.opBinary!("+")(b). The rewrite is done long after lexing, so no low-level problems there. Anyway, do we _really_ want to make it possible that valid D code will look like ASCII art? We want to improve the state of the art. If the attempt risks ending up doing the opposite, I'm all ears. Andrei
Re: Short list with things to finish for D2
Andrei Alexandrescu Wrote: 6. There must be many things I forgot to mention, or that cause grief to many of us. Please add to/comment on this list. Uniform function call syntax.
Re: Short list with things to finish for D2
Kyle wrote: Andrei Alexandrescu Wrote: 6. There must be many things I forgot to mention, or that cause grief to many of us. Please add to/comment on this list. Uniform function call syntax. It's in the book. I'm adding this message as a reminder to add a test case. Thanks! Andrei
Re: Short list with things to finish for D2
Jesse Phillips wrote: Andrei Alexandrescu Wrote: 6. There must be many things I forgot to mention, or that cause grief to many of us. Please add to/comment on this list. Andrei Well, there is this page on Wiki4D http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel#FutureDirections which has what the community is trying to track for changes. It also has some things that haven't been officially declined or committed to. Great, thanks. List looks very to the point and up to date. Andrei
static foreach is deferred
Walter and I agreed that static foreach, although present in TDPL, poses enough new problems to warrant its deferral to post-D2. Andrei
Re: static foreach is deferred
On Wed, Nov 18, 2009 at 5:15 PM, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: Walter and I agreed that static foreach, although present in TDPL, poses enough new problems to warrant its deferral to post-D2. Andrei Is it trouble with scopes and hygienic variable naming? --bb
Re: static foreach is deferred
Bill Baxter wrote: On Wed, Nov 18, 2009 at 5:15 PM, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: Walter and I agreed that static foreach, although present in TDPL, poses enough new problems to warrant its deferral to post-D2. Andrei Is it trouble with scopes and hygienic variable naming? Yah. I know we all think it's an interesting path to pursue, but we'd rather do a good design instead of hastily planting something we'll be sorry about later. Andrei
Re: Short list with things to finish for D2
Andrei Alexandrescu Wrote: 6. There must be many things I forgot to mention, or that cause grief to many of us. Please add to/comment on this list. Static function parameters
Re: Short list with things to finish for D2
Kyle wrote: Andrei Alexandrescu Wrote: 6. There must be many things I forgot to mention, or that cause grief to many of us. Please add to/comment on this list. Static function parameters Walter tried to implement them and ran into a number of odd semantic issues. Andrei
Re: static foreach is deferred
Andrei Alexandrescu wrote: Bill Baxter wrote: On Wed, Nov 18, 2009 at 5:15 PM, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: Walter and I agreed that static foreach, although present in TDPL, poses enough new problems to warrant its deferral to post-D2. Andrei Is it trouble with scopes and hygienic variable naming? Yah. I know we all think it's an interesting path to pursue, but we'd rather do a good design instead of hastily planting something we'll be sorry about later. Andrei If this is the reason, thanks for prioritizing good design over feature creep. : )
Re: Short list with things to finish for D2
== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article dsimcha wrote: == Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article 6. There must be many things I forgot to mention, or that cause grief to many of us. Please add to/comment on this list. Andrei I assume we're mostly talking about spec stuff, not implementation stuff. Nonetheless, to the extent that the GC API is considered part of the language spec, I think we need to fix that. 1. My precise heap scanning patch has some ripple effects into the GC API and could affect what is done with the `new` operator. Therefore, we might not want to postpone this until after D2 is final. 2. We should probably tighten up the spec to make sure that a moving, fully precise, etc. GC can be implemented at some point in the future without modifying the spec after D2 is declared gold. I think you can safely work on that asynchronously. TDPL won't include a GC API Reference - for that kind of stuff I think it's ok to refer to the online documentation. Thanks for working on this! Andrei Can you clarify the higher level development model, then? 1. Are we departing from the D1 model and allowing changes to Phobos after the language spec is declared final? If so, can those changes be breaking, at least at the binary level? 2. Are we punting on the GC API and allowing it to be implementation defined? I thought this API was supposed to be stable and allow for swapping GC implementations at link time. Then again, I think it's actually a bad idea to create such a stable API, since different GC implementations will require different configuration, meta-data, etc. and this is just a fact of life. Most user code will not interact directly with the GC, but will use `new`, builtin arrays, etc.
Re: Short list with things to finish for D2
== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article 3. It was mentioned in this group that if getopt() does not work in SafeD, then SafeD may as well pack and go home. I agree. We need to make it work. Three ideas discussed with Walter: * Allow taking addresses of locals, but in that case switch allocation from stack to heap, just like with delegates. If we only do that in SafeD, behavior will be different than with regular D. In any case, it's an inefficient proposition, particularly for getopt() which actually does not need to escape the addresses - just fills them up. IMHO this is a terrible solution. SafeD should not cause major ripple effects for pieces of code that don't want to use it. I'm all for safe defaults even if they're less efficient or less flexible, but if D starts sacrificing performance or flexibility for safety **even when the programmer explicitly asks it not to**, then it will officially have become a bondage and discipline language. Furthermore, as you point out, having the semantics of something vary in subtle ways between SafeD and unsafe D is probably a recipe for confusion. * Allow @trusted (and maybe even @safe) functions to receive addresses of locals. Statically check that they never escape an address of a parameter. I think this is very interesting because it enlarges the common ground of D and SafeD. This is a great idea if it can be implemented. Isn't escape analysis a pretty hard thing to get right, though, especially when you might not have the source code to the function being called? * Figure out a way to reconcile ref with variadics. This is the actual reason why getopt chose to traffic in addresses, and fixing it is the logical choice and my personal favorite. This should be done eventually regardless of what happens with taking addresses of locals, though I'm not sure it still makes the short list if we solve the addresses of locals thing some other way.
The VELOX research project
Introduction: This is the third of three blog articles describing how AMD's Operating System Research Center (OSRC) became involved in the development of the Advanced Synchronization Facility (ASF), how we are evaluating ASF, and how this and other activities fit into the EU-funded VELOX project aiming at improving the state of the art for software-transactional-memory systems. See: http://forums.amd.com/devblog/blogpost.cfm?catid=317&threadid=122183&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+AmdDeveloperBlogs+%28AMD+Developer+Blogs%29 cheers Nick B
Re: Chaining exceptions
Andrei Alexandrescu Wrote: Sean Kelly wrote: When a dozen holes appear in a dike, I'm not sure it matters which one you try to plug first :-) Unless there's some way to start with the biggest one, I suppose. Seems like with the suggested model, the correct approach may be to always catch Exception and walk the whole chain to figure out what to do. But that sounds awfully close to C-style error handling. Well I'm not sure about the metaphor. What I can tell from my code is that exceptions are often contingent one upon another (not parallel and independent). A typical example: writing to a file fails, but then closing it also fails and possibly attempting to remove the partial file off disk also fails. In that case, the important message is that the file couldn't be written to; the rest is aftermath. If the doctor says You have a liver problem, which causes your nails to have a distinctive shape you don't mind the nails as much as the liver. Upon reflection, I'm inclined to agree. This is pretty much a nonexistent case for me anyway (my experience with C++ has me following the no exceptions from dtors ever mantra for the most part), so it's difficult to come up with counterexamples. I also like that your chaining method represents a timeline of what happened, with the most likely cause of the whole mess at the head of the list. There's also the opposite flow, for example an exception in some validation prevents a database update. But I don't think code regularly does essential work in destructors, finally blocks, and scope statements. The essential work is done on the straight path, and the important error is happening on the straight path. Yeah, I think you're right.
Re: Chaining exceptions
On Wed, 18 Nov 2009 15:24:11 -0800, Andrei Alexandrescu wrote: Consider: void fun() { try { throw new Exception("a"); } finally { throw new Exception("b"); } } Currently this function would unceremoniously terminate the program. I think it shouldn't. What should happen is that the "a" exception should be propagated unabated, and the "b" exception should be appended to it. The Exception class should have a property next that returns a reference to the next exception thrown (in this case "b"), effectively establishing an arbitrarily long singly-linked list of exceptions. A friend told me that that's what Java does, with the difference that the last exception thrown takes over, so the chain comes reversed. I strongly believe "a" is the main exception and "b" is a contingent exception, so we shouldn't do what Java does. But Java must have some good reason to go the other way. Please chime in with (a) a confirmation/infirmation of Java's mechanism above; (b) links to motivations for Java's approach, (c) any comments about all of the above. Thanks, Andrei Best as I can tell, the Java compiler doesn't do the chaining automatically. It is up to the one throwing the exception to make the chain. The exception class just provides a specification that requires all exceptions to support chaining. This explains why it is not the root cause that is at the head of the chain. try { stmt.executeUpdate(sql); } catch (SQLException ex) { throw new EmployeeLookupException("Query failure", ex); // ex is passed to the constructor of the class } Example from: http://java.sys-con.com/node/36579 http://www.developer.com/tech/article.php/1431531/Chained-Exceptions-in-Java.htm
Re: Short list with things to finish for D2
dsimcha wrote: == Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article 3. It was mentioned in this group that if getopt() does not work in SafeD, then SafeD may as well pack and go home. I agree. We need to make it work. Three ideas discussed with Walter: * Allow taking addresses of locals, but in that case switch allocation from stack to heap, just like with delegates. If we only do that in SafeD, behavior will be different than with regular D. In any case, it's an inefficient proposition, particularly for getopt() which actually does not need to escape the addresses - just fills them up. IMHO this is a terrible solution. SafeD should not cause major ripple effects for pieces of code that don't want to use it. I'm all for safe defaults even if they're less efficient or less flexible, but if D starts sacrificing performance or flexibility for safety **even when the programmer explicitly asks it not to**, then it will officially have become a bondage and discipline language. Furthermore, as you point out, having the semantics of something vary in subtle ways between SafeD and unsafe D is probably a recipe for confusion. * Allow @trusted (and maybe even @safe) functions to receive addresses of locals. Statically check that they never escape an address of a parameter. I think this is very interesting because it enlarges the common ground of D and SafeD. This is a great idea if it can be implemented. Isn't escape analysis a pretty hard thing to get right, though, especially when you might not have the source code to the function being called? Escape analysis is difficult when you don't have information about the functions you're passing the pointer to. For example: void fun(int* p) { if (condition) gun(p); } Now the problem is that fun's escape-or-not behavior depends on flow (i.e. condition) and on gun's escaping behavior. 
If we use @safe and @trusted to indicate unequivocally no escape, then there is no analysis to be done - the hard part of the analysis has already been done manually by the user. * Figure out a way to reconcile ref with variadics. This is the actual reason why getopt chose to traffic in addresses, and fixing it is the logical choice and my personal favorite. This should be done eventually regardless of what happens with taking addresses of locals, though I'm not sure it still makes the short list if we solve the addresses of locals thing some other way. I agree. Andrei
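For context, this is roughly the getopt pattern the whole getopt-in-SafeD discussion revolves around: the function receives addresses of locals, fills them in, and never stores the pointers, which is the behavior a no-escape guarantee on @safe/@trusted parameters would have to certify. A sketch using std.getopt (option names here are invented):

```d
import std.getopt;
import std.stdio;

void main(string[] args)
{
    int verbosity;
    string output;

    // getopt traffics in addresses of locals purely to fill them;
    // the pointers do not escape, so under idea two this call could
    // be accepted in SafeD without any heap allocation.
    getopt(args,
        "verbosity", &verbosity,
        "output",    &output);

    writeln("verbosity = ", verbosity, ", output = ", output);
}
```

This also shows why idea three (ref variadics) is attractive: with ref parameters the `&` would disappear from the call site entirely, and the question of escaping addresses would never arise.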
Re: Chaining exceptions
Jesse Phillips wrote: On Wed, 18 Nov 2009 15:24:11 -0800, Andrei Alexandrescu wrote: Consider: void fun() { try { throw new Exception("a"); } finally { throw new Exception("b"); } } Currently this function would unceremoniously terminate the program. I think it shouldn't. What should happen is that the "a" exception should be propagated unabated, and the "b" exception should be appended to it. The Exception class should have a property next that returns a reference to the next exception thrown (in this case "b"), effectively establishing an arbitrarily long singly-linked list of exceptions. A friend told me that that's what Java does, with the difference that the last exception thrown takes over, so the chain comes reversed. I strongly believe "a" is the main exception and "b" is a contingent exception, so we shouldn't do what Java does. But Java must have some good reason to go the other way. Please chime in with (a) a confirmation/infirmation of Java's mechanism above; (b) links to motivations for Java's approach, (c) any comments about all of the above. Thanks, Andrei Best as I can tell, the Java compiler doesn't do the chaining automatically. It is up to the one throwing the exception to make the chain. The exception class just provides a specification that requires all exceptions to support chaining. This explains why it is not the root cause that is at the head of the chain. try { stmt.executeUpdate(sql); } catch (SQLException ex) { throw new EmployeeLookupException("Query failure", ex); // ex is passed to the constructor of the class } Example from: http://java.sys-con.com/node/36579 http://www.developer.com/tech/article.php/1431531/Chained-Exceptions-in-Java.htm Thanks! Question - is there a way to fetch the current Throwable from within a finally clause? Andrei
Re: Short list with things to finish for D2
Andrei Alexandrescu wrote: The rewrite is done long after lexing, so no low-level problems there. Oh, I thought it would let you introduce new operators. But it's only about the existing ones. I find the idea of identifying the operator using a string very sloppy and silly, just like using string mixins for small delegates in std.algorithm etc.; but you'd probably say it works and is useful and it's short and it solves the current problem, so... whatever.