Re: dmd 1.046 and 2.031 releases
Steven Schveighoffer:

> Does this compile:
>
> class C {}
>
> ubyte foo(C n)
> {
>     return true ? 255 : n;
> }
>
> (don't have the latest compiler installed yet, so I couldn't check it
> myself)

It doesn't compile (DMD v2.031):
temp.d(5): Error: incompatible types for ((255) ? (n)): 'int' and 'temp.C'

Bye,
bearophile
Re: dmd 1.046 and 2.031 releases
On Fri, 17 Jul 2009 09:46:11 -0400, Don wrote: Steven Schveighoffer wrote: On Fri, 17 Jul 2009 08:08:23 -0400, Don wrote: In this case, I think bearophile's right: it's just a problem with range propagation of the ?: operator. I think the compiler should be required to do the semantic analysis for single expressions. Not more, not less. Why? What is the benefit of keeping track of the range of integral variables inside an expression, to eliminate a cast? I don't think it's worth it. As far as I know, the ?: is the only expression where this can happen. You will get cries of inconsistency when the compiler doesn't allow: ubyte foo(uint x) { if(x < 256) return x; return 0; } -Steve Already happens. This works: ubyte foo(uint n) { return true ? 255 : n; } And this fails: ubyte boo(uint n) { if (true) return 255; else return n; } Does that require range propagation? That is, when the compiler sees: return true ? 255 does it even look at the type or range of the other branch? Does this compile: class C {} ubyte foo(C n) { return true ? 255 : n; } (don't have the latest compiler installed yet, so I couldn't check it myself) I think the situation is different because the compiler isn't forced to consider the other branch; it can be optimized out (I'm surprised it doesn't do that in the general if(true) case anyway, even with optimization turned off). -Steve
Re: dmd 1.046 and 2.031 releases
Steven Schveighoffer wrote: On Fri, 17 Jul 2009 08:08:23 -0400, Don wrote: In this case, I think bearophile's right: it's just a problem with range propagation of the ?: operator. I think the compiler should be required to do the semantic analysis for single expressions. Not more, not less. Why? What is the benefit of keeping track of the range of integral variables inside an expression, to eliminate a cast? I don't think it's worth it. As far as I know, the ?: is the only expression where this can happen. You will get cries of inconsistency when the compiler doesn't allow: ubyte foo(uint x) { if(x < 256) return x; return 0; } -Steve Already happens. This works: ubyte foo(uint n) { return true ? 255 : n; } And this fails: ubyte boo(uint n) { if (true) return 255; else return n; }
Re: dmd 1.046 and 2.031 releases
On Fri, 17 Jul 2009 08:08:23 -0400, Don wrote: In this case, I think bearophile's right: it's just a problem with range propagation of the ?: operator. I think the compiler should be required to do the semantic analysis for single expressions. Not more, not less. Why? What is the benefit of keeping track of the range of integral variables inside an expression, to eliminate a cast? I don't think it's worth it. As far as I know, the ?: is the only expression where this can happen. You will get cries of inconsistency when the compiler doesn't allow: ubyte foo(uint x) { if(x < 256) return x; return 0; } -Steve
Re: dmd 1.046 and 2.031 releases
BCS wrote: Reply to bearophile, John C: Did you not read the change log? "Implicit integral conversions that could result in loss of significant bits are no longer allowed." This was the code: ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n)); That last n is guaranteed to fit inside a ubyte I'm going with Steven on this one. Making the legality of code dependent on its semantics is risky, because it then ends up with bizarre portability issues or requires that the scope of the semantic analysis engine be part of the language spec. For the record, Nice has a form of automatic downcasting that works something like this, though not AFAIK on numerical comparisons. To take an example from http://nice.sourceforge.net/safety.html#id2488356 :

--
Component c = ...;
?List children;
if (c instanceof ContainerComponent)
    children = c.getChildren();
else
    children = null;
--

getChildren is a method of ContainerComponent, but not of general Component. The test performed in the condition of the if statement has the additional effect of casting c to a ContainerComponent within the if statement's body. Nice also has nullable and non-nullable types (note the ?) and, in the same way, it forces you to check that it isn't null before you try to dereference it. The principle could be applied to if statements and ?: expressions alike (as it would appear Nice does), and even && and || expressions. And it could be extended to arithmetic comparisons. A possible way is to spec that, if n is an int, and k is a compile-time constant >= 0, then given n >= k ? expr1 : expr2 any occurrence of n in expr1 is treated as cast(uint) n. And similarly for the other relational operators and other signed integer types. And then that, if u is of some unsigned integer type, and k is a compile-time constant within the range of u's type, then given u <= k ? expr1 : expr2 any occurrence of u in expr1 is treated as cast to the smallest unsigned integer type that u will fit into.
And similarly for the other relational operators. Then your example would compile. However:
- if we're going to do this, then for consistency we probably ought to define all literals to be of the smallest type they'll fit into, and prefer unsigned over signed, unless overridden with a suffix;
- we could go on defining rules like this for more complicated conditions, and it could get complicated;
- I'm not sure if this kind of automatic casting is desirable from a generic programming POV.
Stewart.
Re: dmd 1.046 and 2.031 releases
BCS wrote: Reply to bearophile, John C: Did you not read the change log? "Implicit integral conversions that could result in loss of significant bits are no longer allowed." This was the code: ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n)); That last n is guaranteed to fit inside a ubyte (yes, I understand the compiler is not smart enough yet to understand it, but from the things explained by Andrei I thought it was. So I am wrong, and I have shown this to other people who may be interested. I have also encouraged making the compiler smarter to avoid a cast in such cases, because this is a single expression, so range propagation is probably not too hard to implement given the current design of the front-end. You have missed most of the purposes of my post). Bye, bearophile I'm going with Steven on this one. Making the legality of code dependent on its semantics is risky, because it then ends up with bizarre portability issues or requires that the scope of the semantic analysis engine be part of the language spec. In this case, I think bearophile's right: it's just a problem with range propagation of the ?: operator. I think the compiler should be required to do the semantic analysis for single expressions. Not more, not less.
Re: dmd 1.046 and 2.031 releases
On Thu, Jul 16, 2009 at 6:43 PM, Jason House wrote:
> bearophile Wrote:
>
>> I'm playing with the new D2 a bit, this comes from some real D1 code:
>>
>> void main(string[] args) {
>>     int n = args.length;
>>     ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n));
>> }
>>
>> At compile-time the compiler says:
>> temp.d(3): Error: cannot implicitly convert expression (n <= 0 ? 0 : n >= 255 ? 255 : n) of type int to ubyte
>>
>> You have to add a silly cast:
>>
>> void main(string[] args) {
>>     int n = args.length;
>>     ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : cast(ubyte)n));
>> }
>>
>> In theory if the compiler gets smarter, such a cast can be unnecessary.
>>
>> Bye,
>> bearophile
>
> add it to bugzilla.

Bearophile has never reported anything in Bugzilla. It's inexplicable. He constantly complains about D and does nothing to help it.
Re: dmd 1.046 and 2.031 releases
bearophile Wrote:

> I'm playing with the new D2 a bit, this comes from some real D1 code:
>
> void main(string[] args) {
>     int n = args.length;
>     ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n));
> }
>
> At compile-time the compiler says:
> temp.d(3): Error: cannot implicitly convert expression (n <= 0 ? 0 : n >= 255 ? 255 : n) of type int to ubyte
>
> You have to add a silly cast:
>
> void main(string[] args) {
>     int n = args.length;
>     ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : cast(ubyte)n));
> }
>
> In theory if the compiler gets smarter, such a cast can be unnecessary.
>
> Bye,
> bearophile

add it to bugzilla.
Re: dmd 1.046 and 2.031 releases
Steven Schveighoffer wrote: On Thu, 16 Jul 2009 08:49:14 -0400, bearophile wrote: I'm playing with the new D2 a bit, this comes from some real D1 code: void main(string[] args) { int n = args.length; ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n)); } At compile-time the compiler says: temp.d(3): Error: cannot implicitly convert expression (n <= 0 ? 0 : n >= 255 ? 255 : n) of type int to ubyte You have to add a silly cast: void main(string[] args) { int n = args.length; ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : cast(ubyte)n)); } In theory if the compiler gets smarter, such a cast can be unnecessary. I don't see how; doesn't this require semantic analysis to determine whether implicit casting is allowed? I think you are asking too much of the compiler. What if the expression was instead a function call; should the compiler look at the function source to determine whether it can fit in a ubyte? Where do you draw the line? I think the current behavior is fine. The D1 code probably works not because the compiler is 'smarter' but because it blindly truncates data. Perhaps if it were an optimization it could be implemented, but the result of an optimization cannot change the validity of the code... In other words, it couldn't be a compiler feature; it would have to be part of the spec, which would mean all compilers must implement it. BTW, I think the cast is a perfect requirement here -- you are saying: yes, I know the risks and I'm casting anyway. -Steve He's saying the cast shouldn't be required, as the code entails that n will fit into a ubyte without loss of information. Perhaps it's too much to ask. I'm not sure. I don't think he's sure. But if he doesn't ask, he won't find out. (And it sure would be nice to avoid casts in situations analogous to that.)
Re: dmd 1.046 and 2.031 releases
Reply to bearophile, John C: Did you not read the change log? "Implicit integral conversions that could result in loss of significant bits are no longer allowed." This was the code: ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n)); That last n is guaranteed to fit inside a ubyte (yes, I understand the compiler is not smart enough yet to understand it, but from the things explained by Andrei I thought it was. So I am wrong, and I have shown this to other people who may be interested. I have also encouraged making the compiler smarter to avoid a cast in such cases, because this is a single expression, so range propagation is probably not too hard to implement given the current design of the front-end. You have missed most of the purposes of my post). Bye, bearophile I'm going with Steven on this one. Making the legality of code dependent on its semantics is risky, because it then ends up with bizarre portability issues or requires that the scope of the semantic analysis engine be part of the language spec.
Re: dmd 1.046 and 2.031 releases
On Thu, 16 Jul 2009 08:49:14 -0400, bearophile wrote: I'm playing with the new D2 a bit, this comes from some real D1 code: void main(string[] args) { int n = args.length; ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n)); } At compile-time the compiler says: temp.d(3): Error: cannot implicitly convert expression (n <= 0 ? 0 : n >= 255 ? 255 : n) of type int to ubyte You have to add a silly cast: void main(string[] args) { int n = args.length; ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : cast(ubyte)n)); } In theory if the compiler gets smarter, such a cast can be unnecessary. I don't see how; doesn't this require semantic analysis to determine whether implicit casting is allowed? I think you are asking too much of the compiler. What if the expression was instead a function call; should the compiler look at the function source to determine whether it can fit in a ubyte? Where do you draw the line? I think the current behavior is fine. The D1 code probably works not because the compiler is 'smarter' but because it blindly truncates data. Perhaps if it were an optimization it could be implemented, but the result of an optimization cannot change the validity of the code... In other words, it couldn't be a compiler feature; it would have to be part of the spec, which would mean all compilers must implement it. BTW, I think the cast is a perfect requirement here -- you are saying: yes, I know the risks and I'm casting anyway. -Steve
Re: dmd 1.046 and 2.031 releases
John C:
> Did you not read the change log?
> "Implicit integral conversions that could result in loss of significant bits
> are no longer allowed."

This was the code:
ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n));
That last n is guaranteed to fit inside a ubyte (yes, I understand the compiler is not smart enough yet to understand it, but from the things explained by Andrei I thought it was. So I am wrong, and I have shown this to other people who may be interested. I have also encouraged making the compiler smarter to avoid a cast in such cases, because this is a single expression, so range propagation is probably not too hard to implement given the current design of the front-end. You have missed most of the purposes of my post).

Bye,
bearophile
Re: dmd 1.046 and 2.031 releases
bearophile Wrote:

> I'm playing with the new D2 a bit, this comes from some real D1 code:
>
> void main(string[] args) {
>     int n = args.length;
>     ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n));
> }
>
> At compile-time the compiler says:
> temp.d(3): Error: cannot implicitly convert expression (n <= 0 ? 0 : n >= 255 ? 255 : n) of type int to ubyte
>
> You have to add a silly cast:
>
> void main(string[] args) {
>     int n = args.length;
>     ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : cast(ubyte)n));
> }
>
> In theory if the compiler gets smarter, such a cast can be unnecessary.
>
> Bye,
> bearophile

Did you not read the change log?
"Implicit integral conversions that could result in loss of significant bits are no longer allowed."
Re: dmd 1.046 and 2.031 releases
I'm playing with the new D2 a bit, this comes from some real D1 code:

void main(string[] args) {
    int n = args.length;
    ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n));
}

At compile-time the compiler says:
temp.d(3): Error: cannot implicitly convert expression (n <= 0 ? 0 : n >= 255 ? 255 : n) of type int to ubyte

You have to add a silly cast:

void main(string[] args) {
    int n = args.length;
    ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : cast(ubyte)n));
}

In theory if the compiler gets smarter, such a cast can be unnecessary.

Bye,
bearophile
Re: dmd 1.046 and 2.031 releases
On Sun, 05 Jul 2009 22:05:10 -0700, Walter Bright wrote:
> Something for everyone here.
>
> http://www.digitalmars.com/d/1.0/changelog.html
> http://ftp.digitalmars.com/dmd.1.046.zip
>
> http://www.digitalmars.com/d/2.0/changelog.html
> http://ftp.digitalmars.com/dmd.2.031.zip

Nice release. Thanks! I wonder if expression tuples were considered for use in the multiple case statement? And if so, what was the reason they were discarded? Some examples:

case InclusiveRange!('a', 'z'):
case StaticTuple!(1, 2, 5, 6):
case AnEnum.tupleof[1..3]:
Re: dmd 1.046 and 2.031 releases
Leandro Lucarella wrote: Walter Bright, el 5 de julio a las 22:05 me escribiste: Something for everyone here. http://www.digitalmars.com/d/1.0/changelog.html http://ftp.digitalmars.com/dmd.1.046.zip http://www.digitalmars.com/d/2.0/changelog.html http://ftp.digitalmars.com/dmd.2.031.zip I incidentally went through all the D2 bug reports that had been fixed in this release and I was really surprised about how many of them had patches by Don (the vast majority!). Thanks Don! I think it's great that more people are becoming major D contributors. Thanks! Yeah, I did a major assault on the segfault/internal compiler error bugs. I figured that right now, the most useful thing I could do was to make the compiler stable. I have a few more to give to Walter, but in general it should be quite difficult to crash the compiler now. A couple of my other bug patches -- 1994 and 3010 -- appear to be fixed in this release, though they are not in the changelog. Also the ICE from 339 is fixed.
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu wrote: Robert Jacques wrote: On Tue, 07 Jul 2009 03:33:24 -0400, Andrei Alexandrescu wrote: Robert Jacques wrote: That's really cool. But I don't think that's actually happening (or are these the bugs you're talking about?): byte x,y; short z; z = x+y; // Error: cannot implicitly convert expression (cast(int)x + cast(int)y) of type int to short // Repeat for ubyte, bool, char, wchar and *, -, / http://d.puremagic.com/issues/show_bug.cgi?id=3147 You may want to add to it. Added. In summary, + * - / % >> >>> don't work for types 8-bits and under. << is inconsistent (x << 1 errors, but x <<= 1 compiles), and the op-assigns (+= -= *= /= %= >>= <<= >>>=) and pre/post increments (++ --) compile, which is maddeningly inconsistent, particularly when the spec defines ++x as sugar for x = x + 1, which doesn't compile. And by that logic shouldn't the following happen? int x,y; int z; z = x+y; // Error: cannot implicitly convert expression (cast(long)x + cast(long)y) of type long to int No. Int remains "special", i.e. arithmetic operations on it don't automatically grow to become long. i.e. why the massive inconsistency between byte/short and int/long? (This is particularly a pain for generic, i.e. templated, code.) I don't find it a pain. It's a practical decision. Andrei, I have a short vector template (think vec!(byte,3), etc.) where I've had to wrap the majority of lines of code in cast(T)( ... ), because I support bytes and shorts. I find that both a kludge and a pain. Well, suggestions for improving things are welcome. But I don't think it will fly to make int+int yield a long. BTW: this means byte and short are not closed under arithmetic operations, which drastically limits their usefulness. I think they shouldn't be closed, because they overflow for relatively small values. Andrei, consider anyone who wants to do image manipulation (or computer vision, video, etc). Since images are one of the few areas that use bytes extensively, and have to map back into themselves, they are basically out of luck.
I understand, but also keep in mind that making small integers closed is the less safe option. So we'd be hurting everyone for the sake of the image manipulation folks. Andrei You could add modular arithmetic types. They are frequently useful... though I admit that of the common 2^n bases, bytes are the most useful; I've often needed base three or others. (Probably not worth the effort, but modular arithmetic on 2^n for n from 1 to, say, 64 would be reasonably easy.)
Re: dmd 1.046 and 2.031 releases
Jérôme M. Berger wrote: Andrei Alexandrescu wrote: Jérôme M. Berger wrote: Andrei Alexandrescu wrote: Jérôme M. Berger wrote: Andrei Alexandrescu wrote: Jérôme M. Berger wrote: Andrei Alexandrescu wrote: Derek Parnell wrote: It seems that D would benefit from having a standard syntax format for expressing various range sets:

a. Include begin Include end, i.e. []
b. Include begin Exclude end, i.e. [)
c. Exclude begin Include end, i.e. (]
d. Exclude begin Exclude end, i.e. ()

I'm afraid this would majorly mess with pairing of parens. I think Derek's point was to have *some* syntax to mean this, not necessarily the one he showed (which he showed because I believe that's the "standard" mathematical way to express it for English speakers). For example, we could say that [] is always inclusive and have another character which makes it exclusive like:

a. Include begin Include end, i.e. [ a .. b ]
b. Include begin Exclude end, i.e. [ a .. b ^]
c. Exclude begin Include end, i.e. [^ a .. b ]
d. Exclude begin Exclude end, i.e. [^ a .. b ^]

I think Walter's message really rendered the whole discussion moot. Post of the year: "I like: a .. b+1 to mean inclusive range." Consider "+1]" a special symbol that means the range is to be closed to the right :o). Ah, but: - This is inconsistent between the left and right limit; - This only works for integers, not for floating point numbers. How does it not work for floating point numbers? Is that a trick question? Depending on the actual value of b, you might have b+1 == b (if b is large enough). Conversely, range a .. b+1 may contain a lot of extra numbers I may not want to include (like b+0.5)... It wasn't a trick question, or it was of sorts. If you iterate with e.g. foreach through a floating-point range that has b == b + 1, you're bound to get in a lot of trouble because the running variable will be incremented.
Well: - A floating point range should allow you to specify the iteration step, or else it should allow you to iterate through all numbers that can be represented with the corresponding precision; - The second issue remains: what if I want to include b but not b+ε for any ε>0? Jerome I'd say that a floating point range requires a lazy interpretation, and should only get evaluated on an as-needed basis. But clearly open, half-open, and closed intervals aren't the same kind of thing as ranges. They are more frequently used for making assertions about when something is true (or false). I.e., they're used as an integral part of standard mathematics, but not at all in computer science (except in VERY peculiar cases). In math one makes an assertion that, say, a particular equation holds for all members of an interval, and open or closed is only a statement about whether the end-points are included in the interval. Proof isn't usually by exhaustive calculation, but rather by more abstract reasoning. It would be nice to be able to express mathematical reasoning as part of a computer program, but it's not something that's likely to be efficiently implementable, and certainly not executable. Mathematica can do that kind of thing, I believe, but it's a bit distant from a normal computer language.
Re: dmd 1.046 and 2.031 releases
Walter Bright wrote: grauzone wrote: I oriented this on the syntax of array slices. Which work that way. Not inconsistent at all. It's also consistent with foreach(_; x..y). It would look consistent, but it would behave very differently. x..y for foreach and slices is exclusive of the y, while case x..y is inclusive. Creating such an inconsistency would sentence programmers to forever thinking "which way is it this time". To avoid such confusion an obviously different syntax is required. This isn't a matter that's very important to me, as I rarely use case statements, but the suggestion made elsewhere of allowing restricted pattern matching of some sort, or concatenated logical tests, is appealing. Being able to test for (e.g.) case (< 5 & > j): would be very appealing. (I read that as case less than 5 and greater than j.) OTOH, I'm not at all sure that such a thing could be implemented efficiently. The places that I've usually found such things were in languages interpreted at run time.
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu wrote: Walter Bright wrote: Andrei Alexandrescu wrote: P.S. With the help of a dictionary I think I figured out most of this joke: MP: Cómo está, estimado Bellini? B: Muy bien, Mario, astrologando. MP: Qué tengo? B: Un balcón-terraza. MP: No, en mi mano, Bellini... B: Un secarropas! MP: No, escuche bien, eh. Tiene números. B: El circo de Moscú. MP: No Bellini. Toma medidas. B: Un ministro. MP: No Bellini, eh! Algunas son de plástico y otras de madera. B: Una modelo, Mario! MP: No, Bellini, no y no! -- El Gran Bellini (Mario Podestá con una regla) Translation for the lazy: A donkey, a horse, and a fish walk into a bar. And the bartender asks the horse: "Why the long face?" http://www.youtube.com/watch?v=KZ-Okkpgeh4
Re: dmd 1.046 and 2.031 releases
Walter Bright wrote: Andrei Alexandrescu wrote: P.S. With the help of a dictionary I think I figured out most of this joke: MP: Cómo está, estimado Bellini? B: Muy bien, Mario, astrologando. MP: Qué tengo? B: Un balcón-terraza. MP: No, en mi mano, Bellini... B: Un secarropas! MP: No, escuche bien, eh. Tiene números. B: El circo de Moscú. MP: No Bellini. Toma medidas. B: Un ministro. MP: No Bellini, eh! Algunas son de plástico y otras de madera. B: Una modelo, Mario! MP: No, Bellini, no y no! -- El Gran Bellini (Mario Podestá con una regla) Translation for the lazy: A donkey, a horse, and a fish walk into a bar. And the bartender asks the horse: "Why the long face?" Andrei
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu wrote: P.S. With the help of a dictionary I think I figured out most of this joke: MP: Cómo está, estimado Bellini? B: Muy bien, Mario, astrologando. MP: Qué tengo? B: Un balcón-terraza. MP: No, en mi mano, Bellini... B: Un secarropas! MP: No, escuche bien, eh. Tiene números. B: El circo de Moscú. MP: No Bellini. Toma medidas. B: Un ministro. MP: No Bellini, eh! Algunas son de plástico y otras de madera. B: Una modelo, Mario! MP: No, Bellini, no y no! -- El Gran Bellini (Mario Podestá con una regla) Translation for the lazy: A donkey, a horse, and a fish walk into a bar.
Re: dmd 1.046 and 2.031 releases
Leandro Lucarella wrote: I incidentally went through all the D2 bug reports that had been fixed in this release and I was really surprised about how many of them had patches by Don (the vast majority!). Don's an awesome contributor. I and the rest of the D community are very much indebted to him. Thanks Don! I think it's great that more people are becoming major D contributors.
[OT] Magazine For Fai [was: dmd 1.046 and 2.031 releases]
Andrei Alexandrescu, el 8 de julio a las 11:46 me escribiste:

I'm sorry about the spanish taglines, they are selected randomly =) And most (in spanish) are pretty local (argentine) jokes.

> P.S. With the help of a dictionary I think I figured out most of this joke:
>
> MP: Cómo está, estimado Bellini?
> B: Muy bien, Mario, astrologando.
> MP: Qué tengo?
> B: Un balcón-terraza.
> MP: No, en mi mano, Bellini...
> B: Un secarropas!
> MP: No, escuche bien, eh. Tiene números.
> B: El circo de Moscú.
> MP: No Bellini. Toma medidas.
> B: Un ministro.
> MP: No Bellini, eh! Algunas son de plástico y otras de madera.
> B: Una modelo, Mario!
> MP: No, Bellini, no y no!
> -- El Gran Bellini (Mario Podestá con una regla)
>
> It's about wild and funny semantic confusions made by Bellini in
> attempting to guess with hints, due to homonymy and, heck, polysemy
> I guess :o). But what does the secarropas (tumble-dryer according to
> http://www.spanishdict.com/translate/secarropas) have to do with
> anything?

It doesn't; it's just absurd to have a tumble-dryer in your hand. This quote is from a sketch in a cable (cult) TV show from Argentina called "Magazine For Fai". It was mostly sketch-based absurd humor (a format similar to Monty Python's Flying Circus) with the particularity of being performed by children (except for the creator). This sketch is about a not-so-good mentalist ("El Gran Bellini" or "The Great Bellini"), who tries to guess what is in the hand of Mario Podestá (the creator) with his eyes covered.
If you manage to understand spoken Spanish, you can see this sketch video on YouTube: http://www.youtube.com/watch?v=dANeOdBX6QM or read the Wikipedia article about the show: http://es.wikipedia.org/wiki/Magazine_For_Fai -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) Que importante, entonces en estos días de globalización refregar nuestras almas, pasarle el lampazo a nuestros corazones para alcanzar un verdadero estado de babia peperianal. -- Peperino Pómoro
Re: dmd 1.046 and 2.031 releases
Leandro Lucarella wrote: Walter Bright, el 5 de julio a las 22:05 me escribiste: Something for everyone here. http://www.digitalmars.com/d/1.0/changelog.html http://ftp.digitalmars.com/dmd.1.046.zip http://www.digitalmars.com/d/2.0/changelog.html http://ftp.digitalmars.com/dmd.2.031.zip I incidentally went through all the D2 bug reports that had been fixed in this release and I was really surprised about how many of them had patches by Don (the vast majority!). Thanks Don! I think it's great that more people are becoming major D contributors. Don is awesome and a good example to follow! Andrei P.S. With the help of a dictionary I think I figured out most of this joke: MP: Cómo está, estimado Bellini? B: Muy bien, Mario, astrologando. MP: Qué tengo? B: Un balcón-terraza. MP: No, en mi mano, Bellini... B: Un secarropas! MP: No, escuche bien, eh. Tiene números. B: El circo de Moscú. MP: No Bellini. Toma medidas. B: Un ministro. MP: No Bellini, eh! Algunas son de plástico y otras de madera. B: Una modelo, Mario! MP: No, Bellini, no y no! -- El Gran Bellini (Mario Podestá con una regla) It's about wild and funny semantic confusions made by Bellini in attempting to guess with hints, due to homonymy and, heck, polysemy I guess :o). But what does the secarropas (tumble-dryer according to http://www.spanishdict.com/translate/secarropas) have to do with anything?
Re: dmd 1.046 and 2.031 releases
Walter Bright, el 5 de julio a las 22:05 me escribiste:
> Something for everyone here.
>
> http://www.digitalmars.com/d/1.0/changelog.html
> http://ftp.digitalmars.com/dmd.1.046.zip
>
> http://www.digitalmars.com/d/2.0/changelog.html
> http://ftp.digitalmars.com/dmd.2.031.zip

I incidentally went through all the D2 bug reports that had been fixed in this release and I was really surprised about how many of them had patches by Don (the vast majority!). Thanks Don! I think it's great that more people are becoming major D contributors.

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)

MP: Cómo está, estimado Bellini?
B: Muy bien, Mario, astrologando.
MP: Qué tengo?
B: Un balcón-terraza.
MP: No, en mi mano, Bellini...
B: Un secarropas!
MP: No, escuche bien, eh. Tiene números.
B: El circo de Moscú.
MP: No Bellini. Toma medidas.
B: Un ministro.
MP: No Bellini, eh! Algunas son de plástico y otras de madera.
B: Una modelo, Mario!
MP: No, Bellini, no y no!
-- El Gran Bellini (Mario Podestá con una regla)
Re: dmd 1.046 and 2.031 releases
Jesse Phillips, el 8 de julio a las 01:27 me escribiste:
> On Tue, 07 Jul 2009 18:43:41 -0300, Leandro Lucarella wrote:
>
>> (BTW, nice job with the Wiki for whoever did it, I don't remember who
>> was putting a lot of work on improving the Wiki, but it's really much
>> better organized now)
>
> Hi, thanks.
>
>> I think we can add a DIP (D Improvement Proposal =) section in the
>> "Language Development" section:
>> http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel
>
> I was reusing the Discussion and Ideas for these things, but DIP could be
> for those brought forward by the involved few of accepting ideas, since
> Ideas and Discussion will likely end up with a lot of old or less thought
> out ideas.
>
> http://www.prowiki.org/wiki4d/wiki.cgi?IdeaDiscussion

Oops! I'm sorry I missed that. =/ Anyway, I think that page serves as a way to index interesting discussions in the NG. The idea of DIPs is to be the other way around. You first present the idea as a DIP, then it's discussed. When you get sufficient input, you update the DIP with a new revision number, then you put it up for discussion again in the NG. You repeat that until it's Accepted or Rejected (or you give up and Withdraw it =).

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)

Pitzulino! Pitzulino!
Todos a cantar por el tubo!
Pitzulino! Pitzulino!
Todos a cantar por el codo!
Re: dmd 1.046 and 2.031 releases
"Lionello Lunesu" wrote in message news:h30vss$pm...@digitalmars.com... > > Walter, since the lib/include folders were split according to OS, the dmd2 > zip consistently has an extensionless "lib" file in the dmd2 folder. It's also in D1.
Re: dmd 1.046 and 2.031 releases
On Wed, 08 Jul 2009 00:08:13 -0400, Brad Roberts wrote: Walter Bright wrote: Robert Jacques wrote: On Tue, 07 Jul 2009 23:01:58 -0400, Walter Bright wrote: Robert Jacques wrote: (Caveat: most 32-bit compilers probably defaulted integer to int, though 64-bit compilers are probably defaulting integer to long.) All 32 bit C compilers defaulted int to 32 bits. 64 bit C compilers are setting int at 32 bits for sensible compatibility reasons. But are the 64-bit compilers setting the internal "integer" type to 32 or 64 bits? (I'm not running any 64-bit OSes at the moment to test this) Not that I've seen. I'd be very surprised if any did. From Wikipedia: http://en.wikipedia.org/wiki/64-bit

Model    short  int  long  llong  ptrs  Sample operating systems
LLP64     16    32    32    64     64   Microsoft Win64 (X64/IA64)
LP64      16    32    64    64     64   Most UNIX and UNIX-like systems (Solaris, Linux, etc)
ILP64     16    64    64    64     64   HAL
SILP64    64    64    64    64     64   ?

Thanks, but what we're looking for is what format the data is in in registers. For example, in 32-bit C, bytes/shorts are computed as ints and truncated back down. I've found some references to 64-bit native integers in the CLI spec, but nothing definitive. The question boils down to whether b == 0 or not: int a = 2147483647; long b = a+a+2; // or long long depending on platform
Re: dmd 1.046 and 2.031 releases
Walter Bright wrote: > Robert Jacques wrote: >> On Tue, 07 Jul 2009 23:01:58 -0400, Walter Bright >> wrote: >>> Robert Jacques wrote: (Caveat: most 32-bit compilers probably defaulted integer to int, though 64-bit compilers are probably defaulting integer to long.) >>> >>> All 32 bit C compilers defaulted int to 32 bits. 64 bit C compilers >>> are setting int at 32 bits for sensible compatibility reasons. >> >> But are the 64-bit compilers setting the internal "integer" type to 32 >> or 64 bits? (I'm not running any 64-bit OSes at the moment to test this) > > Not that I've seen. I'd be very surprised if any did. From Wikipedia: http://en.wikipedia.org/wiki/64-bit

Model    short  int  long  llong  ptrs  Sample operating systems
LLP64     16    32    32    64     64   Microsoft Win64 (X64/IA64)
LP64      16    32    64    64     64   Most UNIX and UNIX-like systems (Solaris, Linux, etc)
ILP64     16    64    64    64     64   HAL
SILP64    64    64    64    64     64   ?
Re: dmd 1.046 and 2.031 releases
Robert Jacques wrote: On Tue, 07 Jul 2009 23:01:58 -0400, Walter Bright wrote: Robert Jacques wrote: (Caveat: most 32-bit compilers probably defaulted integer to int, though 64-bit compilers are probably defaulting integer to long.) All 32 bit C compilers defaulted int to 32 bits. 64 bit C compilers are setting int at 32 bits for sensible compatibility reasons. But are the 64-bit compilers setting the internal "integer" type to 32 or 64 bits? (I'm not running any 64-bit OSes at the moment to test this) Not that I've seen. I'd be very surprised if any did.
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 23:01:58 -0400, Walter Bright wrote: Robert Jacques wrote: (Caveat: most 32-bit compilers probably defaulted integer to int, though 64-bit compilers are probably defaulting integer to long.) All 32 bit C compilers defaulted int to 32 bits. 64 bit C compilers are setting int at 32 bits for sensible compatibility reasons. But are the 64-bit compilers setting the internal "integer" type to 32 or 64 bits? (I'm not running any 64-bit OSes at the moment to test this)
Re: dmd 1.046 and 2.031 releases
Thanks.
Re: dmd 1.046 and 2.031 releases
Robert Jacques wrote: So by the spec (and please correct me if I'm reading this wrong) g = e + f => g = cast(long)( cast(integer)e + cast(integer)f ); where integer is unbounded in bits (and therefore has no overflow) therefore g = e + f; => d = cast(long) e + cast(long) f; is more in keeping with the spec than g = cast(long)(e+f); in terms of a practical implementation, since there's less possibility for overflow error. The spec leaves a lot of room for implementation defined behavior. But still, there are common definitions for those implementation defined behaviors, and C programs routinely rely on them. Just like the C standard supports 32 bit "bytes", but essentially zero C programs will port to such a platform without major rewrites. Silently changing the expected results is a significant problem. The guy who does the translation is hardly likely to be the guy who wrote the program. When he notices the program failing, I guarantee he'll write it off as "D sux". He doesn't have the time to debug what looks like a fault in D, and frankly I would agree with him. I have a lot of experience with people porting C/C++ programs to Digital Mars compilers. They run into some implementation-defined issue, or rely on some bug in B/M/X compilers, and yet it's always DM's problem, not B/M/X or the code. There's no point in fighting that, it's just the way it is, and to deal with reality means that DM must follow the same implementation-defined behavior and bugs as B/M/X compilers do. For a C integer expression, D must either refuse to compile it or produce the same results. (Caveat: most 32-bit compilers probably defaulted integer to int, though 64-bit compilers are probably defaulting integer to long.) All 32 bit C compilers defaulted int to 32 bits. 64 bit C compilers are setting int at 32 bits for sensible compatibility reasons.
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 11:05:31 -0400, bearophile wrote: > KennyTM~ Wrote: >> Maybe http://msdn.microsoft.com/en-us/vcsharp/aa336815.aspx . > > That compromise design looks good to be adopted by D too :-) > > Bye, > bearophile For which we have, case 1, 2, 3: writeln("I believe");
Re: dmd 1.046 and 2.031 releases
"Walter Bright" wrote in message news:h2s0me$30f...@digitalmars.com... Something for everyone here. http://www.digitalmars.com/d/1.0/changelog.html http://ftp.digitalmars.com/dmd.1.046.zip http://www.digitalmars.com/d/2.0/changelog.html http://ftp.digitalmars.com/dmd.2.031.zip Great release, thanks to all those that have contributed to it! Walter, since the lib/include folders were split according to OS, the dmd2 zip consistently has an extensionless "lib" file in the dmd2 folder. This is because of the 'install' target in win32.mak that would previously copy phobos.lib and gcstub.obj to the lib folder, but now copies their contents to a file called "lib" instead. I've made a patch and attached it to http://d.puremagic.com/issues/show_bug.cgi?id=3153 L.
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 21:05:45 -0400, Walter Bright wrote: Andrei Alexandrescu wrote: Robert Jacques wrote: long g; g = e + f; => d = cast(long) e + cast(long) f; Works today. Wrong. I just tested this and what happens today is: g = cast(long)(e+f); And this is (I think) correct behavior according to the new rules and not a bug. In the new rules int is special, in this suggestion, it's not. I think this is a good idea that would improve things. I think, however, it would be troublesome to implement because expressions are typed bottom-up. The need here is to "teleport" type information from the assignment node to the addition node, which is downwards. And I'm not sure how this would generalize to other operators beyond "=". It's also troublesome because it would silently produce different answers than C would. Please, correct me if I'm wrong, but it seems C works by promoting byte/short/etc to int and then casting back down if need be. (Something tells me this wasn't always true) So (I think) the differences would be limited to integer expressions assigned to longs. Also, doing this 'right' might be important to 64-bit platforms. Actually, after finding and skimming the C spec (from http://frama-c.cea.fr/download/acsl_1.4.pdf via wikipedia) " 2.2.3 Typing The language of logic expressions is typed (as in multi-sorted first-order logic). Types are either C types or logic types defined as follows: 'mathematical' types: integer for unbounded, mathematical integers, real for real numbers, boolean for booleans (with values written \true and \false); logic types introduced by the specification writer (see Section 2.6). There are implicit coercions for numeric types: C integral types char, short, int and long, signed or unsigned, are all subtypes of type integer; integer is itself a subtype of type real; C types float and double are subtypes of type real. ... 
2.2.4 Integer arithmetic and machine integers The following integer arithmetic operations apply to mathematical integers: addition, subtraction, multiplication, unary minus. The value of a C variable of an integral type is promoted to a mathematical integer. As a consequence, there is no such thing as "arithmetic overflow" in logic expressions. Division and modulo are also mathematical operations, which coincide with the corresponding C operations on C machine integers, thus following the ANSI C99 conventions. In particular, these are not the usual mathematical Euclidean division and remainder. Generally speaking, division rounds the result towards zero. The results are not specified if divisor is zero; otherwise if q and r are the quotient and the remainder of n divided by d then:" " So by the spec (and please correct me if I'm reading this wrong) g = e + f => g = cast(long)( cast(integer)e + cast(integer)f ); where integer is unbounded in bits (and therefore has no overflow) therefore g = e + f; => d = cast(long) e + cast(long) f; is more in keeping with the spec than g = cast(long)(e+f); in terms of a practical implementation, since there's less possibility for overflow error. (Caveat: most 32-bit compilers probably defaulted integer to int, though 64-bit compilers are probably defaulting integer to long.)
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 18:26:36 -0700, Walter Bright wrote: > All the messages from the dawn of time are online and available at > http://www.digitalmars.com/d/archives/digitalmars/D/ and are searchable > from the search box in the upper left. Okaaayy ... I see that this (checking for integer overflow) has been an issue since at least 2003. http://www.digitalmars.com/d/archives/19850.html At this rate, D v2 will be released some time after C++0X :-) -- Derek Parnell Melbourne, Australia skype: derek.j.parnell
Re: dmd 1.046 and 2.031 releases
Robert Jacques wrote: On Tue, 07 Jul 2009 21:21:47 -0400, Andrei Alexandrescu wrote: Robert Jacques wrote: On Tue, 07 Jul 2009 20:48:50 -0400, Andrei Alexandrescu wrote: Robert Jacques wrote: long g; g = e + f; => d = cast(long) e + cast(long) f; Works today. Wrong. I just tested this and what happens today is: g = cast(long)(e+f); And this is (I think) correct behavior according to the new rules and not a bug. In the new rules int is special, in this suggestion, it's not. I think this is a good idea that would improve things. I think, however, it would be troublesome to implement because expressions are typed bottom-up. The need here is to "teleport" type information from the assignment node to the addition node, which is downwards. And I'm not sure how this would generalize to other operators beyond "=". Andrei Hmm... why can't multiple expressions be built simultaneously and then the best chosen once the assignment/function call/etc is reached? This would also have the benefit of paving the way for polysemous values & expressions. Anything can be done... in infinite time with infinite resources. :o) Andrei :) Well, weren't polysemous expressions already in the pipeline somewhere? I'm afraid they didn't get wings. We have incidentally found different ways to address the issues they were supposed to address. Andrei
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 21:21:47 -0400, Andrei Alexandrescu wrote: Robert Jacques wrote: On Tue, 07 Jul 2009 20:48:50 -0400, Andrei Alexandrescu wrote: Robert Jacques wrote: long g; g = e + f; => d = cast(long) e + cast(long) f; Works today. Wrong. I just tested this and what happens today is: g = cast(long)(e+f); And this is (I think) correct behavior according to the new rules and not a bug. In the new rules int is special, in this suggestion, it's not. I think this is a good idea that would improve things. I think, however, it would be troublesome to implement because expressions are typed bottom-up. The need here is to "teleport" type information from the assignment node to the addition node, which is downwards. And I'm not sure how this would generalize to other operators beyond "=". Andrei Hmm... why can't multiple expressions be built simultaneously and then the best chosen once the assignment/function call/etc is reached? This would also have the benefit of paving the way for polysemous values & expressions. Anything can be done... in infinite time with infinite resources. :o) Andrei :) Well, weren't polysemous expressions already in the pipeline somewhere?
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu wrote: You can implement that as a library. In fact I wanted to do it for Phobos for a long time. I've discussed it in this group too (to an unusual consensus), but I forgot the thread's title and stupid Thunderbird "download 500 headers at a time forever even long after have changed that idiotic default option" won't let me find it. All the messages from the dawn of time are online and available at http://www.digitalmars.com/d/archives/digitalmars/D/ and are searchable from the search box in the upper left.
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 18:43:41 -0300, Leandro Lucarella wrote: > > (BTW, nice job with the Wiki for whoever did it, I don't remember who > was putting a lot of work on improving the Wiki, but it's really much > better organized now) Hi, thanks. > I think we can add a DIP (D Improvement Proposal =) section in the > "Language Development" section: > http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel I was reusing Discussion and Ideas for these things, but DIP could be for those brought forward by the few involved in accepting ideas, since Ideas and Discussion will likely end up with a lot of old or less thought out ideas. http://www.prowiki.org/wiki4d/wiki.cgi?IdeaDiscussion
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 20:13:40 -0500, Andrei Alexandrescu wrote: > Derek Parnell wrote: >> Here is where I propose having a signal to the compiler about which >> specific variables I'm worried about, and if I code an assignment to one of >> these that can potentially overflow, then the compiler must issue a >> message. > > You can implement that as a library. In fact I wanted to do it for > Phobos for a long time. What does "implement that as a library" actually mean? Does it mean that a Phobos module could be written that defines a struct template (presumably) that holds the data and implements opAssign, etc... to issue a message if required. I assume it could do some limited compile-time value tests so it doesn't always have to issue a message.
Re: dmd 1.046 and 2.031 releases
Robert Jacques wrote: On Tue, 07 Jul 2009 20:48:50 -0400, Andrei Alexandrescu wrote: Robert Jacques wrote: long g; g = e + f; => d = cast(long) e + cast(long) f; Works today. Wrong. I just tested this and what happens today is: g = cast(long)(e+f); And this is (I think) correct behavior according to the new rules and not a bug. In the new rules int is special, in this suggestion, it's not. I think this is a good idea that would improve things. I think, however, it would be troublesome to implement because expressions are typed bottom-up. The need here is to "teleport" type information from the assignment node to the addition node, which is downwards. And I'm not sure how this would generalize to other operators beyond "=". Andrei Hmm... why can't multiple expressions be built simultaneously and then the best chosen once the assignment/function call/etc is reached? This would also have the benefit of paving the way for polysemous values & expressions. Anything can be done... in infinite time with infinite resources. :o) Andrei
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 20:48:50 -0400, Andrei Alexandrescu wrote: Robert Jacques wrote: long g; g = e + f; => d = cast(long) e + cast(long) f; Works today. Wrong. I just tested this and what happens today is: g = cast(long)(e+f); And this is (I think) correct behavior according to the new rules and not a bug. In the new rules int is special, in this suggestion, it's not. I think this is a good idea that would improve things. I think, however, it would be troublesome to implement because expressions are typed bottom-up. The need here is to "teleport" type information from the assignment node to the addition node, which is downwards. And I'm not sure how this would generalize to other operators beyond "=". Andrei Hmm... why can't multiple expressions be built simultaneously and then the best chosen once the assignment/function call/etc is reached? This would also have the benefit of paving the way for polysemous values & expressions.
Re: dmd 1.046 and 2.031 releases
Derek Parnell wrote: Here is where I propose having a signal to the compiler about which specific variables I'm worried about, and if I code an assignment to one of these that can potentially overflow, then the compiler must issue a message. You can implement that as a library. In fact I wanted to do it for Phobos for a long time. I've discussed it in this group too (to an unusual consensus), but I forgot the thread's title and stupid Thunderbird "download 500 headers at a time forever even long after have changed that idiotic default option" won't let me find it. Andrei
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu, on July 7 at 16:54 you wrote: > Leandro Lucarella wrote: > >Andrei Alexandrescu, on July 7 at 15:12 you wrote: > >>Leandro Lucarella wrote: > >>>Andrei Alexandrescu, on July 7 at 10:56 you wrote: > Leandro Lucarella wrote: > >This seems nice. I think it would be nice if these kinds of things were > >commented on in the NG before a compiler release, to allow community input > >and discussion. > Yup, that's what happened to case :o). > > >I think these kinds of things are the ones that deserve some kind of RFC > >(like Python PEPs) like someone suggested a couple of days ago. > I think that's a good idea. Who has the time and resources to set that up? > >>>What's wrong with the Wiki? > >>Where's the link? > >I mean the D Wiki! > >http://prowiki.org/wiki4d/wiki.cgi > >(BTW, nice job with the Wiki for whoever did it, I don't remember who was > >putting a lot of work on improving the Wiki, but it's really much better > >organized now) > >I think we can add a DIP (D Improvement Proposal =) section in the > >"Language Development" section: > >http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel > > Great idea. I can only hope the technical level will be much higher than > the two threads related to switch. I think proposals should be published there but discussed here, so be ready for all kinds of discussions (the ones you like and the ones you don't =). From time to time, when there is some kind of agreement, the proposal should be updated (with a new "revision number"). I just went wild and added a DIP index[1] and the first DIP (DIP1), a template for creating new DIPs[2]. These are just rough drafts, but I think they are good enough to start with. Comments are appreciated. I will post a "formal" announcement too. 
[1] http://www.prowiki.org/wiki4d/wiki.cgi?DiPs [2] http://www.prowiki.org/wiki4d/wiki.cgi?DiP1
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu wrote: Robert Jacques wrote: long g; g = e + f; => d = cast(long) e + cast(long) f; Works today. Wrong. I just tested this and what happens today is: g = cast(long)(e+f); And this is (I think) correct behavior according to the new rules and not a bug. In the new rules int is special, in this suggestion, it's not. I think this is a good idea that would improve things. I think, however, it would be troublesome to implement because expressions are typed bottom-up. The need here is to "teleport" type information from the assignment node to the addition node, which is downwards. And I'm not sure how this would generalize to other operators beyond "=". It's also troublesome because it would silently produce different answers than C would.
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 19:39:55 -0500, Andrei Alexandrescu wrote: > Nick Sabalausky wrote: >> "bearophile" wrote in message >> news:h3093m$2mu...@digitalmars.com... >>> Before adding a feature X let's discuss them, ... If not enough people >>> like a solution then let's not add it. >> >> Something like that was attempted once before. Andrei didn't like what we >> had to say, got huffy, and withdrew from the discussion. Stay tuned for the >> exciting sequel where the feature goes ahead as planned anyway, and our >> protagonists get annoyed that people still have objections to it. > > Put yourself in my place. What would you do? Honest. Sometimes I find it > difficult to find the right mix of being honest, being technically > accurate, being polite, and not wasting too much time explaining myself. > > Andrei Ditto. We know that the development of the D language is not a democratic process, and that's fine. Really, it is. However, clear rationale for decisions made would go a long way to helping reduce dissent, as would some pre-announcements to avoid surprises. By the way, I appreciate that you guys are now closing off bugzilla issues before the release of their fix implementation. It's a good heads-up and demonstrates activity in between releases. Well done.
Re: dmd 1.046 and 2.031 releases
Robert Jacques wrote: long g; g = e + f; => d = cast(long) e + cast(long) f; Works today. Wrong. I just tested this and what happens today is: g = cast(long)(e+f); And this is (I think) correct behavior according to the new rules and not a bug. In the new rules int is special, in this suggestion, it's not. I think this is a good idea that would improve things. I think, however, it would be troublesome to implement because expressions are typed bottom-up. The need here is to "teleport" type information from the assignment node to the addition node, which is downwards. And I'm not sure how this would generalize to other operators beyond "=". Andrei
Re: dmd 1.046 and 2.031 releases
Nick Sabalausky wrote: "bearophile" wrote in message news:h3093m$2mu...@digitalmars.com... Before adding a feature X let's discuss them, ... If not enough people like a solution then let's not add it. Something like that was attempted once before. Andrei didn't like what we had to say, got huffy, and withdrew from the discussion. Stay tuned for the exciting sequel where the feature goes ahead as planned anyway, and our protagonists get annoyed that people still have objections to it. Put yourself in my place. What would you do? Honest. Sometimes I find it difficult to find the right mix of being honest, being technically accurate, being polite, and not wasting too much time explaining myself. Andrei
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 18:10:24 -0400, Robert Jacques wrote: > On Tue, 07 Jul 2009 18:05:26 -0400, Derek Parnell wrote: > >> On Tue, 07 Jul 2009 14:05:33 -0400, Robert Jacques wrote: >> >> >>> Well, how often does everyone else use bytes? >> >> Cryptography, in my case. >> > > Cool. If you don't mind, what's your take on the new rules? (As different use > cases and points of view are very valuable) By new rules you mean the ones implemented in D 2.031? I'm not sure yet. I need to use them more in practice to see how they sort themselves out. It seems that what they are trying to do is predict runtime behaviour at compile time and make the appropriate (as defined by Walter) steps to avoid runtime errors. Anyhow, and be warned that I'm just thinking out loud here, we could have a scheme where the coder explicitly tells the compiler that, in certain specific sections of code, the coder would like to have runtime checking of overflow situations added by the compiler. Something like ... byte a,b,c; try { a = b + c; } catch (OverflowException e) { ... } and in this situation the compiler would not give a message, because I've instructed the compiler to generate runtime checking. The problem we would now have though is balancing the issuing-of-messages with the ease-of-coding. It seems that the most common kind of assignment is where the LHS type is the same as the RHS type(s), so we don't want to make that any harder to code. But clearly, this is also the most common source of potential overflows. Ok, let's assume that we don't want the D compiler to be our nanny; that we are adults and understand stuff. This now leads me to think that unless the coder says differently, the compiler should be silent about potential overflows. The "try .. catch" example above is verbose, however it does scream "run-time checking" to me so it is probably worth the effort. 
The only remaining issue for me is how to catch accidental overflows in the special cases where I, as a responsible coder, knowingly wish to avoid. Here is where I propose having a signal to the compiler about which specific variables I'm worried about, and if I code an assignment to one of these that can potentially overflow, then the compiler must issue a message. Nota bene: For the purposes of these examples, I use the word "guard" as the signal for the compiler to guard against overflows. I don't care so much about which specific signalling method could be adopted. This is still conceptual stuff, okay? guard byte a; // I want this byte guarded. byte b,c; // I don't care about these bytes. a = 3 + 29; // No message 'cos 32 fits into a byte. a = b + c; // Message 'cos it could overflow. a = cast(byte)(b + c); // No message 'cos cast overrides messages. a++; // Message - overflow is possible. a += 1; // Message - overflow is possible. a = a + 1 // Message - overflow is possible. a = cast(byte)a + 1; // No message 'cos cast overrides messages. And for a really smart compiler ... a = 0; a++; // No message as it can determine that the run time value // at this point in time is okay. for (a = 'a'; a <= 'z'; a++) // Still no message. Additionally, I'm pretty certain that I think ... auto x = y + z; should ensure that 'x' is a type that will always be able to hold any value from (y.min + z.min) to (y.max + z.max) inclusive.
Re: dmd 1.046 and 2.031 releases
Nick Sabalausky wrote: "Andrei Alexandrescu" wrote in message news:h30907$2lk...@digitalmars.com... Nick Sabalausky wrote: "Andrei Alexandrescu" wrote in message news:h2vprn$1t7...@digitalmars.com... This is a different beast. We simply couldn't devise a satisfactory scheme within the constraints we have. No simple solution we could think of has worked, nor have a number of sophisticated solutions. Ideas would be welcome, though I need to warn you that the devil is in the details so the ideas must be fully baked; too many good sounding high-level ideas fail when analyzed in detail. I assume then that you've looked at something lke C#'s checked/unchecked scheme and someone's (I forget who) idea of expanding that to something like unchecked(overflow, sign)? What was wrong with those sorts of things? An unchecked-based approach was not on the table. Our focus was more on checking things properly, instead of over-checking and then relying on "unchecked" to disable that. C#'s scheme supports the opposite as well. Not checking for the stuff where you mostly don't care, and then "checked" to enable the checks in the spots where you do care. And then there's been the suggestions for finer-graned control for whevever that's needed. Well unfortunately that all wasn't considered. If properly championed, it would. I personally consider the current approach superior because it's safe and unobtrusive. Andrei
Re: dmd 1.046 and 2.031 releases
"Andrei Alexandrescu" wrote in message news:h30907$2lk...@digitalmars.com... > Nick Sabalausky wrote: >> "Andrei Alexandrescu" wrote in message >> news:h2vprn$1t7...@digitalmars.com... >>> This is a different beast. We simply couldn't devise a satisfactory >>> scheme within the constraints we have. No simple solution we could think >>> of has worked, nor have a number of sophisticated solutions. Ideas would >>> be welcome, though I need to warn you that the devil is in the details >>> so the ideas must be fully baked; too many good sounding high-level >>> ideas fail when analyzed in detail. >>> >> >> I assume then that you've looked at something lke C#'s checked/unchecked >> scheme and someone's (I forget who) idea of expanding that to something >> like unchecked(overflow, sign)? What was wrong with those sorts of >> things? > > An unchecked-based approach was not on the table. Our focus was more on > checking things properly, instead of over-checking and then relying on > "unchecked" to disable that. > C#'s scheme supports the opposite as well. Not checking for the stuff where you mostly don't care, and then "checked" to enable the checks in the spots where you do care. And then there's been the suggestions for finer-graned control for whevever that's needed.
Re: dmd 1.046 and 2.031 releases
"bearophile" wrote in message news:h3093m$2mu...@digitalmars.com... > Before adding a feature X let's discuss them, ... If not enough people > like a solution then let's not add it. Something like that was attempted once before. Andrei didn't like what we had to say, got huffy, and withdrew from the discussion. Stay tuned for the exciting sequel where the feature goes ahead as planned anyway, and our protagonists get annoyed that people still have objections to it.
Re: dmd 1.046 and 2.031 releases
Robert Jacques wrote: The new rules are definitely an improvement over C, but they make byte/ubyte/short/ushort second class citizens, because practically every assignment requires a cast: byte a,b,c; c = cast(byte) a + b; They've always been second class citizens, as their types keep getting promoted to int. They've been second class on the x86 CPUs, too, as short operations tend to be markedly slower than the corresponding int operations. And if it weren't for compatibility issues, it would almost be worth it to remove them completely. Shorts and bytes are very useful in arrays and data structures, but aren't worth much as local variables. If I see a: short s; as a local, it always raises an eyebrow with me that there's a lurking bug.
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu wrote: Nick Sabalausky wrote: I assume then that you've looked at something lke C#'s checked/unchecked scheme and someone's (I forget who) idea of expanding that to something like unchecked(overflow, sign)? What was wrong with those sorts of things? An unchecked-based approach was not on the table. Our focus was more on checking things properly, instead of over-checking and then relying on "unchecked" to disable that. We also should be careful not to turn D into a "bondage and discipline" language that nobody will use unless contractually forced to.
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 21:20:42 +0200, "Jérôme M. Berger" wrote: > Andrei Alexandrescu wrote: >> Jérôme M. Berger wrote: >>> Andrei Alexandrescu wrote: Jérôme M. Berger wrote: > Andrei Alexandrescu wrote: >> Derek Parnell wrote: >>> It seems that D would benefit from having a standard syntax format >>> for >>> expressing various range sets; >>> a. Include begin Include end, i.e. [] >>> b. Include begin Exclude end, i.e. [) >>> c. Exclude begin Include end, i.e. (] >>> d. Exclude begin Exclude end, i.e. () >> >> I'm afraid this would majorly mess with pairing of parens. >> > I think Derek's point was to have *some* syntax to mean this, > not necessarily the one he showed (which he showed because I believe > that's the "standard" mathematical way to express it for English > speakers). For example, we could say that [] is always inclusive and > have another character which makes it exclusive like: > a. Include begin Include end, i.e. [ a .. b ] > b. Include begin Exclude end, i.e. [ a .. b ^] > c. Exclude begin Include end, i.e. [^ a .. b ] > d. Exclude begin Exclude end, i.e. [^ a .. b ^] I think Walter's message really rendered the whole discussion moot. Post of the year: = I like: a .. b+1 to mean inclusive range. = Consider "+1]" a special symbol that means the range is to be closed to the right :o). >>> Ah, but: >>> - This is inconsistent between the left and right limit; >>> - This only works for integers, not for floating point numbers. >> >> How does it not work for floating point numbers? >> > Is that a trick question? Depending on the actual value of b, you > might have b+1 == b (if b is large enough). Conversely, range a .. > b+1 may contain a lot of extra numbers I may not want to include > (like b+0.5)... > > Jerome If Andrei is not joking (the smiley notwithstanding) the "+1" doesn't mean add one to the previous expression, instead it means that the previous expression's value is the last value in the range set. Subtle, no? 
-- Derek Parnell Melbourne, Australia skype: derek.j.parnell
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 13:16:14 -0500, Andrei Alexandrescu wrote: > Safe D is concerned with memory safety only. That's a pity. Maybe it should be renamed to Partially-Safe D, or Safe-ish D, Memory-Safe D, or ... well you get the point. Could be misleading for the great unwashed.
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 14:16:12 -0500, Andrei Alexandrescu wrote: > Bill Baxter wrote: >> 2009/7/7 Andrei Alexandrescu : >>> I think Walter's message really rendered the whole discussion moot. Post of >>> the year: >>> >>> = >>> I like: >>> >>> a .. b+1 >>> >>> to mean inclusive range. >>> = >> >> Not everything is an integer. > > Works with pointers too. A pointer is an integer because the byte it is referring to always has an integral address value. Pointers do not point to partial bytes.
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 18:05:26 -0400, Derek Parnell wrote: On Tue, 07 Jul 2009 14:05:33 -0400, Robert Jacques wrote: Well, how often does everyone else use bytes? Cryptography, in my case. Cool. If you don't mind, what's your take on the new rules? (As different use cases and points of view are very valuable)
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 20:13:45 +0200, "Jérôme M. Berger" wrote: > Andrei Alexandrescu wrote: >> Derek Parnell wrote: >>> It seems that D would benefit from having a standard syntax format for >>> expressing various range sets; >>> a. Include begin Include end, i.e. [] >>> b. Include begin Exclude end, i.e. [) >>> c. Exclude begin Include end, i.e. (] >>> d. Exclude begin Exclude end, i.e. () >> >> I'm afraid this would majorly mess with pairing of parens. >> > I think Derek's point was to have *some* syntax to mean this, not > necessarily the one he showed Thank you, Jérôme. I got too frustrated to explain it well enough.
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 14:05:33 -0400, Robert Jacques wrote: > Well, how often does everyone else use bytes? Cryptography, in my case.
Re: dmd 1.046 and 2.031 releases
Leandro Lucarella wrote: Andrei Alexandrescu, el 7 de julio a las 15:12 me escribiste: Leandro Lucarella wrote: Andrei Alexandrescu, el 7 de julio a las 10:56 me escribiste: Leandro Lucarella wrote: This seems nice. I think it would be nice if this kind of things are commented in the NG before a compiler release, to allow community input and discussion. Yup, that's what happened to case :o). I think this kind of things are the ones that deserves some kind of RFC (like Python PEPs) like someone suggested a couple of days ago. I think that's a good idea. Who has the time and resources to set that up? What's wrong with the Wiki? Where's the link? I mean the D Wiki! http://prowiki.org/wiki4d/wiki.cgi (BTW, nice job with the Wiki for whoever did it, I don't remember who was putting a lot of work on improving the Wiki, but it's really much better organized now) I think we can add a DIP (D Improvement Proposal =) section in the "Language Development" section: http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel Great idea. I can only hope the technical level will be much higher than the two threads related to switch. Andrei
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu, el 7 de julio a las 15:12 me escribiste: > Leandro Lucarella wrote: > >Andrei Alexandrescu, el 7 de julio a las 10:56 me escribiste: > >>Leandro Lucarella wrote: > >>>This seems nice. I think it would be nice if this kind of things are > >>>commented in the NG before a compiler release, to allow community input > >>>and discussion. > >>Yup, that's what happened to case :o). > >> > >>>I think this kind of things are the ones that deserves some kind of RFC > >>>(like Python PEPs) like someone suggested a couple of days ago. > >>I think that's a good idea. Who has the time and resources to set that up? > >What's wrong with the Wiki? > > Where's the link? I mean the D Wiki! http://prowiki.org/wiki4d/wiki.cgi (BTW, nice job with the Wiki for whoever did it, I don't remember who was putting a lot of work on improving the Wiki, but it's really much better organized now) I think we can add a DIP (D Improvement Proposal =) section in the "Language Development" section: http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) Ya ni el cielo me quiere, ya ni la muerte me visita Ya ni el sol me calienta, ya ni el viento me acaricia
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 14:16:14 -0400, Andrei Alexandrescu wrote: Robert Jacques wrote: On Tue, 07 Jul 2009 11:36:26 -0400, Andrei Alexandrescu wrote: Robert Jacques wrote: Andrei, I have a short vector template (think vec!(byte,3), etc) where I've had to wrap the majority of lines of code in cast(T)( ... ), because I support bytes and shorts. I find that both a kludge and a pain. Well suggestions for improving things are welcome. But I don't think it will fly to make int+int yield a long. Suggestion 1: Loft the right hand of the expression (when lofting is valid) to the size of the left hand. i.e. What does loft mean in this context? Sorry. loft <=> up-casting. i.e. byte => short => int => long => cent? => bigInt? byte a,b,c; c = a + b; => c = a + b; Unsafe. So is int + int or long + long. Or float + float for that matter. My point is that if a programmer is assigning a value to a byte (or short or int or long) then they are willing to accept the associated overflow/underflow errors of that type. short d; d = a + b; => d = cast(short) a + cast(short) b; Should work today modulo bugs. int e, f; e = a + b; => e = cast(short) a + cast(short) b; Why cast to short? e has type int. Oops. You're right. (I was thinking of the new rules, not my suggestion) Should be: e = a + b; => e = cast(int) a + cast(int) b; e = a + b + d; => e = cast(int)(cast(short) a + cast(short) b) + cast(int) d; Or e = cast(int) a + (cast(int) b + cast(int) d); I don't understand this. Same "Oops. You're right." as above. e = a + b + d; => e = cast(int) a + cast(int) b + cast(int) d; long g; g = e + f; => g = cast(long) e + cast(long) f; Works today. Wrong. I just tested this and what happens today is: g = cast(long)(e+f); And this is (I think) correct behavior according to the new rules and not a bug. In the new rules int is special, in this suggestion, it's not. 
When choosing operator overloads or auto, prefer the ideal lofted interpretation (as per the new rules, but without the exception for int/long), over truncated variants. i.e. auto h = a + b; => short h = cast(short) a + cast(short) b; This would yield semantics incompatible with C expressions. How so? The auto rule is identical to the "new rules". The overload rule is identical to the "new rules", except when no match can be found, in which case it tries to "relax" the expression to a smaller number of bits. This would also properly handle some of the corner/inconsistent cases with the current rules: ubyte i; ushort j; j = -i; => j = -cast(short)i; (This currently evaluates to j = cast(short)(-i); That should not compile, sigh. Walter wouldn't listen... And a += a; is equivalent to a = a + a; Well not quite equivalent. In D2 they aren't. The former clarifies that you want to reassign the expression to a, and no cast is necessary. The latter would not compile if a is shorter than int. I understand, but that dichotomy increases the cognitive load on the programmer. Also, there's the issue of byte x; ++x; which is defined in the spec as being equivalent to x = x + 1; and is logically consistent with byte[] k,l,m; m[] = k[] + l[]; Essentially, instead of trying to prevent overflows, except for those from int and long, this scheme attempts to minimize the risk of overflows, including those from int (and long, once cent exists. Maybe long+long=>bigInt?) But if you close operations for types smaller than int, you end up with a scheme even more error-prone than C! Since C (IIRC) always evaluates "x+x" in the manner most prone to causing overflows, no matter the type, a scheme can't be more error-prone than C (at the instruction level). However, it can be less consistent, which I grant can lead to higher level logic errors. 
(BTW, operations for types smaller than int are closed (by my non-mathy definition) in C) The new rules are definitely an improvement over C, but they make byte/ubyte/short/ushort second-class citizens, because practically every assignment requires a cast: byte a,b,c; c = cast(byte)(a + b); And if it weren't for compatibility issues, it would almost be worth it to remove them completely.
Re: dmd 1.046 and 2.031 releases
Walter Bright wrote: Andrei Alexandrescu wrote: Bill Baxter wrote: 2009/7/7 Andrei Alexandrescu : I think Walter's message really rendered the whole discussion moot. Post of the year: = I like: a .. b+1 to mean inclusive range. = Not everything is an integer. Works with pointers too. It works for the cases where an inclusive range makes sense. Doesn't work with floats, which *do* make sense too... Jerome -- mailto:jeber...@free.fr http://jeberger.free.fr Jabber: jeber...@jabber.fr
Re: dmd 1.046 and 2.031 releases
Jérôme M. Berger wrote: Andrei Alexandrescu wrote: Jérôme M. Berger wrote: - A floating point range should allow you to specify the iteration step, or else it should allow you to iterate through all numbers that can be represented with the corresponding precision; We don't have that, so you'd need to use a straight for statement. struct FloatRange { float begin, end, step; bool includeBegin, includeEnd; int opApply (int delegate (ref float) dg) { whatever; } whatever; } - The second issue remains: what if I want to include b but not b+ε for any ε>0? real a, b; ... for (real f = a; f <= b; update(f)) { } I'd find it questionable to use ranged for with floats anyway. So would I. But a range of floats is useful for more than iterating over it. Think interval arithmetic for example. Cool. I'm positive that open ranges will not prevent you from implementing such a library (and from subsequently proposing it to Phobos :o)). Andrei
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu wrote: Jérôme M. Berger wrote: - A floating point range should allow you to specify the iteration step, or else it should allow you to iterate through all numbers that can be represented with the corresponding precision; We don't have that, so you'd need to use a straight for statement. struct FloatRange { float begin, end, step; bool includeBegin, includeEnd; int opApply (int delegate (ref float) dg) { whatever; } whatever; } - The second issue remains: what if I want to include b but not b+ε for any ε>0? real a, b; ... for (real f = a; f <= b; update(f)) { } I'd find it questionable to use ranged for with floats anyway. So would I. But a range of floats is useful for more than iterating over it. Think interval arithmetic for example. Jerome
Re: dmd 1.046 and 2.031 releases
Leandro Lucarella wrote: Andrei Alexandrescu, el 7 de julio a las 10:56 me escribiste: Leandro Lucarella wrote: This seems nice. I think it would be nice if this kind of things are commented in the NG before a compiler release, to allow community input and discussion. Yup, that's what happened to case :o). I think this kind of things are the ones that deserves some kind of RFC (like Python PEPs) like someone suggested a couple of days ago. I think that's a good idea. Who has the time and resources to set that up? What's wrong with the Wiki? Where's the link? Andrei
Re: dmd 1.046 and 2.031 releases
bearophile wrote: Andrei Alexandrescu: How often did you encounter that issue? Please, let's be serious, and let's stop adding special cases to D, or they will kill the language. Don't get me going about what could kill the language. Lately I have seen too many special cases. For example the current design of the rules of integral seems bad. It has bugs and special cases from the start. Bugs don't imply that the feature is bad. The special cases are well understood and are present in all of C, C++, C#, and Java. Value range propagation as defined in D is principled and puts D on the right side of both safety and speed. It's better than all other languages mentioned above: safer than C and C++, and requiring much fewer casts than C# and Java. Andrei
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu, el 7 de julio a las 10:56 me escribiste: > Leandro Lucarella wrote: > >This seems nice. I think it would be nice if this kind of things are > >commented in the NG before a compiler release, to allow community input > >and discussion. > > Yup, that's what happened to case :o). > > >I think this kind of things are the ones that deserves some kind of RFC > >(like Python PEPs) like someone suggested a couple of days ago. > > I think that's a good idea. Who has the time and resources to set that up? What's wrong with the Wiki?
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu: > How often did you encounter that issue? Please, let's be serious, and let's stop adding special cases to D, or they will kill the language. Lately I have seen too many special cases. For example the current design of the integral rules seems bad. It has bugs and special cases from the start. The .. used in case is another special case, even if Andrei is blind regarding that, and doesn't see its problem. Why don't people here, for a change, stop implementing things, and start implementing a feature only after 55-60+% of the people think it's a good idea? Languages like C# and Scala show several features worth copying, let's copy them, and let's not add any more half-baked things. Before adding a feature X let's discuss it, let's create a forum or place to keep a thread for each feature plus a wiki-based text of the best solution found, etc. If not enough people like a solution then let's not add it. Better to not have a feature than to have a bad one, see Python that even today misses basic things like a switch/case. Bye, bearophile
Re: dmd 1.046 and 2.031 releases
Nick Sabalausky wrote: "Andrei Alexandrescu" wrote in message news:h2vprn$1t7...@digitalmars.com... This is a different beast. We simply couldn't devise a satisfactory scheme within the constraints we have. No simple solution we could think of has worked, nor have a number of sophisticated solutions. Ideas would be welcome, though I need to warn you that the devil is in the details so the ideas must be fully baked; too many good-sounding high-level ideas fail when analyzed in detail. I assume then that you've looked at something like C#'s checked/unchecked scheme and someone's (I forget who) idea of expanding that to something like unchecked(overflow, sign)? What was wrong with those sorts of things? An unchecked-based approach was not on the table. Our focus was more on checking things properly, instead of over-checking and then relying on "unchecked" to disable that. Andrei
Re: dmd 1.046 and 2.031 releases
Jérôme M. Berger wrote: - A floating point range should allow you to specify the iteration step, or else it should allow you to iterate through all numbers that can be represented with the corresponding precision; We don't have that, so you'd need to use a straight for statement. - The second issue remains: what if I want to include b but not b+ε for any ε>0? real a, b; ... for (real f = a; f <= b; update(f)) { } I'd find it questionable to use ranged for with floats anyway. Andrei
Re: dmd 1.046 and 2.031 releases
"Andrei Alexandrescu" wrote in message news:h2vprn$1t7...@digitalmars.com... > > This is a different beast. We simply couldn't devise a satisfactory scheme > within the constraints we have. No simple solution we could think of has > worked, nor have a number of sophisticated solutions. Ideas would be > welcome, though I need to warn you that the devil is in the details so the > ideas must be fully baked; too many good sounding high-level ideas fail > when analyzed in detail. > I assume then that you've looked at something lke C#'s checked/unchecked scheme and someone's (I forget who) idea of expanding that to something like unchecked(overflow, sign)? What was wrong with those sorts of things?
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu wrote: Jérôme M. Berger wrote: Andrei Alexandrescu wrote: Jérôme M. Berger wrote: Andrei Alexandrescu wrote: Jérôme M. Berger wrote: Andrei Alexandrescu wrote: Derek Parnell wrote: It seems that D would benefit from having a standard syntax format for expressing various range sets; a. Include begin Include end, i.e. [] b. Include begin Exclude end, i.e. [) c. Exclude begin Include end, i.e. (] d. Exclude begin Exclude end, i.e. () I'm afraid this would majorly mess with pairing of parens. I think Derek's point was to have *some* syntax to mean this, not necessarily the one he showed (which he showed because I believe that's the "standard" mathematical way to express it for English speakers). For example, we could say that [] is always inclusive and have another character which makes it exclusive like: a. Include begin Include end, i.e. [ a .. b ] b. Include begin Exclude end, i.e. [ a .. b ^] c. Exclude begin Include end, i.e. [^ a .. b ] d. Exclude begin Exclude end, i.e. [^ a .. b ^] I think Walter's message really rendered the whole discussion moot. Post of the year: = I like: a .. b+1 to mean inclusive range. = Consider "+1]" a special symbol that means the range is to be closed to the right :o). Ah, but: - This is inconsistent between the left and right limit; - This only works for integers, not for floating point numbers. How does it not work for floating point numbers? Is that a trick question? Depending on the actual value of b, you might have b+1 == b (if b is large enough). Conversely, range a .. b+1 may contain a lot of extra numbers I may not want to include (like b+0.5)... It wasn't a trick question, or it was of sorts. If you iterate with e.g. foreach through a floating-point range that has b == b + 1, you're bound to get in a lot of trouble because the running variable will be incremented. 
Well: - A floating point range should allow you to specify the iteration step, or else it should allow you to iterate through all numbers that can be represented with the corresponding precision; - The second issue remains: what if I want to include b but not b+ε for any ε>0? Jerome
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu wrote: Bill Baxter wrote: 2009/7/7 Andrei Alexandrescu : I think Walter's message really rendered the whole discussion moot. Post of the year: = I like: a .. b+1 to mean inclusive range. = Not everything is an integer. Works with pointers too. It works for the cases where an inclusive range makes sense.
Re: dmd 1.046 and 2.031 releases
Leandro Lucarella wrote: Andrei Alexandrescu, el 7 de julio a las 13:18 me escribiste: Jérôme M. Berger wrote: Andrei Alexandrescu wrote: Derek Parnell wrote: It seems that D would benefit from having a standard syntax format for expressing various range sets; a. Include begin Include end, i.e. [] b. Include begin Exclude end, i.e. [) c. Exclude begin Include end, i.e. (] d. Exclude begin Exclude end, i.e. () I'm afraid this would majorly mess with pairing of parens. I think Derek's point was to have *some* syntax to mean this, not necessarily the one he showed (which he showed because I believe that's the "standard" mathematical way to express it for English speakers). For example, we could say that [] is always inclusive and have another character which makes it exclusive like: a. Include begin Include end, i.e. [ a .. b ] b. Include begin Exclude end, i.e. [ a .. b ^] c. Exclude begin Include end, i.e. [^ a .. b ] d. Exclude begin Exclude end, i.e. [^ a .. b ^] I think Walter's message really rendered the whole discussion moot. Post of the year: = I like: a .. b+1 to mean inclusive range. = Consider "+1]" a special symbol that means the range is to be closed to the right :o). What about bearophile response: what about x..uint.max+1? How often did you encounter that issue? Andrei
Re: dmd 1.046 and 2.031 releases
Jérôme M. Berger wrote: Andrei Alexandrescu wrote: Jérôme M. Berger wrote: Andrei Alexandrescu wrote: Jérôme M. Berger wrote: Andrei Alexandrescu wrote: Derek Parnell wrote: It seems that D would benefit from having a standard syntax format for expressing various range sets; a. Include begin Include end, i.e. [] b. Include begin Exclude end, i.e. [) c. Exclude begin Include end, i.e. (] d. Exclude begin Exclude end, i.e. () I'm afraid this would majorly mess with pairing of parens. I think Derek's point was to have *some* syntax to mean this, not necessarily the one he showed (which he showed because I believe that's the "standard" mathematical way to express it for English speakers). For example, we could say that [] is always inclusive and have another character which makes it exclusive like: a. Include begin Include end, i.e. [ a .. b ] b. Include begin Exclude end, i.e. [ a .. b ^] c. Exclude begin Include end, i.e. [^ a .. b ] d. Exclude begin Exclude end, i.e. [^ a .. b ^] I think Walter's message really rendered the whole discussion moot. Post of the year: = I like: a .. b+1 to mean inclusive range. = Consider "+1]" a special symbol that means the range is to be closed to the right :o). Ah, but: - This is inconsistent between the left and right limit; - This only works for integers, not for floating point numbers. How does it not work for floating point numbers? Is that a trick question? Depending on the actual value of b, you might have b+1 == b (if b is large enough). Conversely, range a .. b+1 may contain a lot of extra numbers I may not want to include (like b+0.5)... It wasn't a trick question, or it was of sorts. If you iterate with e.g. foreach through a floating-point range that has b == b + 1, you're bound to get in a lot of trouble because the running variable will be incremented. Andrei
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu, el 7 de julio a las 13:18 me escribiste: > Jérôme M. Berger wrote: > >Andrei Alexandrescu wrote: > >>Derek Parnell wrote: > >>>It seems that D would benefit from having a standard syntax format for > >>>expressing various range sets; > >>> a. Include begin Include end, i.e. [] > >>> b. Include begin Exclude end, i.e. [) > >>> c. Exclude begin Include end, i.e. (] > >>> d. Exclude begin Exclude end, i.e. () > >> > >>I'm afraid this would majorly mess with pairing of parens. > >> > >I think Derek's point was to have *some* syntax to mean this, not > >necessarily the one he showed (which he showed because I believe that's the > >"standard" mathematical way to express it for English speakers). For > >example, we could say that [] is always inclusive and have another character > >which makes it exclusive like: > > a. Include begin Include end, i.e. [ a .. b ] > > b. Include begin Exclude end, i.e. [ a .. b ^] > > c. Exclude begin Include end, i.e. [^ a .. b ] > > d. Exclude begin Exclude end, i.e. [^ a .. b ^] > > I think Walter's message really rendered the whole discussion moot. Post of the year: > > = > I like: > > a .. b+1 > > to mean inclusive range. > = > > Consider "+1]" a special symbol that means the range is to be closed to the right :o). What about bearophile's response: what about x..uint.max+1?
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu wrote: Jérôme M. Berger wrote: Andrei Alexandrescu wrote: Jérôme M. Berger wrote: Andrei Alexandrescu wrote: Derek Parnell wrote: It seems that D would benefit from having a standard syntax format for expressing various range sets; a. Include begin Include end, i.e. [] b. Include begin Exclude end, i.e. [) c. Exclude begin Include end, i.e. (] d. Exclude begin Exclude end, i.e. () I'm afraid this would majorly mess with pairing of parens. I think Derek's point was to have *some* syntax to mean this, not necessarily the one he showed (which he showed because I believe that's the "standard" mathematical way to express it for English speakers). For example, we could say that [] is always inclusive and have another character which makes it exclusive like: a. Include begin Include end, i.e. [ a .. b ] b. Include begin Exclude end, i.e. [ a .. b ^] c. Exclude begin Include end, i.e. [^ a .. b ] d. Exclude begin Exclude end, i.e. [^ a .. b ^] I think Walter's message really rendered the whole discussion moot. Post of the year: = I like: a .. b+1 to mean inclusive range. = Consider "+1]" a special symbol that means the range is to be closed to the right :o). Ah, but: - This is inconsistent between the left and right limit; - This only works for integers, not for floating point numbers. How does it not work for floating point numbers? Is that a trick question? Depending on the actual value of b, you might have b+1 == b (if b is large enough). Conversely, range a .. b+1 may contain a lot of extra numbers I may not want to include (like b+0.5)... Jerome
Re: dmd 1.046 and 2.031 releases
Jérôme M. Berger wrote: Andrei Alexandrescu wrote: Jérôme M. Berger wrote: Andrei Alexandrescu wrote: Derek Parnell wrote: It seems that D would benefit from having a standard syntax format for expressing various range sets; a. Include begin Include end, i.e. [] b. Include begin Exclude end, i.e. [) c. Exclude begin Include end, i.e. (] d. Exclude begin Exclude end, i.e. () I'm afraid this would majorly mess with pairing of parens. I think Derek's point was to have *some* syntax to mean this, not necessarily the one he showed (which he showed because I believe that's the "standard" mathematical way to express it for English speakers). For example, we could say that [] is always inclusive and have another character which makes it exclusive like: a. Include begin Include end, i.e. [ a .. b ] b. Include begin Exclude end, i.e. [ a .. b ^] c. Exclude begin Include end, i.e. [^ a .. b ] d. Exclude begin Exclude end, i.e. [^ a .. b ^] I think Walter's message really rendered the whole discussion moot. Post of the year: = I like: a .. b+1 to mean inclusive range. = Consider "+1]" a special symbol that means the range is to be closed to the right :o). Ah, but: - This is inconsistent between the left and right limit; - This only works for integers, not for floating point numbers. How does it not work for floating point numbers? Andrei
Re: dmd 1.046 and 2.031 releases
Bill Baxter wrote: 2009/7/7 Andrei Alexandrescu : I think Walter's message really rendered the whole discussion moot. Post of the year: = I like: a .. b+1 to mean inclusive range. = Not everything is an integer. Works with pointers too. Andrei
Re: dmd 1.046 and 2.031 releases
2009/7/7 Andrei Alexandrescu : > I think Walter's message really rendered the whole discussion moot. Post of > the year: > > = > I like: > > a .. b+1 > > to mean inclusive range. > = Not everything is an integer. --bb
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu: > Safe D is concerned with memory safety only. And hopefully you will understand that is wrong :-) Bye, bearophile
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu: > I think Walter's message really rendered the whole discussion moot. Post > of the year: > = > I like: > a .. b+1 > to mean inclusive range. That was my preferred solution, starting from months ago. Bye, bearophile
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu wrote: Jérôme M. Berger wrote: Andrei Alexandrescu wrote: Derek Parnell wrote: It seems that D would benefit from having a standard syntax format for expressing various range sets; a. Include begin Include end, i.e. [] b. Include begin Exclude end, i.e. [) c. Exclude begin Include end, i.e. (] d. Exclude begin Exclude end, i.e. () I'm afraid this would majorly mess with pairing of parens. I think Derek's point was to have *some* syntax to mean this, not necessarily the one he showed (which he showed because I believe that's the "standard" mathematical way to express it for English speakers). For example, we could say that [] is always inclusive and have another character which makes it exclusive like: a. Include begin Include end, i.e. [ a .. b ] b. Include begin Exclude end, i.e. [ a .. b ^] c. Exclude begin Include end, i.e. [^ a .. b ] d. Exclude begin Exclude end, i.e. [^ a .. b ^] I think Walter's message really rendered the whole discussion moot. Post of the year: = I like: a .. b+1 to mean inclusive range. = Consider "+1]" a special symbol that means the range is to be closed to the right :o). Ah, but: - This is inconsistent between the left and right limit; - This only works for integers, not for floating point numbers. Jerome
Re: dmd 1.046 and 2.031 releases
Robert Jacques wrote: On Tue, 07 Jul 2009 03:33:24 -0400, Andrei Alexandrescu wrote: Robert Jacques wrote: BTW: this means byte and short are not closed under arithmetic operations, which drastically limits their usefulness. I think they shouldn't be closed because they overflow for relatively small values. Andrei, consider anyone who wants to do image manipulation (or computer vision, video, etc). Since images are one of the few areas that use bytes extensively, and have to map back into themselves, they are basically out of luck. Wrong example: in most cases, when doing image manipulations, you don't want the overflow to wrap but instead to be clipped. Having the compiler notify you when there is a risk of an overflow and require you to be explicit in how you want it to be handled is actually a good thing IMO. Jerome
Re: dmd 1.046 and 2.031 releases
Robert Jacques wrote: On Tue, 07 Jul 2009 11:36:26 -0400, Andrei Alexandrescu wrote: Robert Jacques wrote: Andrei, I have a short vector template (think vec!(byte,3), etc) where I've had to wrap the majority of lines of code in cast(T)( ... ), because I support bytes and shorts. I find that both a kludge and a pain. Well suggestions for improving things are welcome. But I don't think it will fly to make int+int yield a long. Suggestion 1: Loft the right hand of the expression (when lofting is valid) to the size of the left hand. i.e. What does loft mean in this context? byte a,b,c; c = a + b; => c = a + b; Unsafe. short d; d = a + b; => d = cast(short) a + cast(short) b; Should work today modulo bugs. int e, f; e = a + b; => e = cast(short) a + cast(short) b; Why cast to short? e has type int. e = a + b + d; => e = cast(int)(cast(short) a + cast(short) b) + cast(int) d; Or e = cast(int) a + (cast(int) b + cast(int) d); I don't understand this. long g; g = e + f; => g = cast(long) e + cast(long) f; Works today. When choosing operator overloads or auto, prefer the ideal lofted interpretation (as per the new rules, but without the exception for int/long), over truncated variants. i.e. auto h = a + b; => short h = cast(short) a + cast(short) b; This would yield semantics incompatible with C expressions. This would also properly handle some of the corner/inconsistent cases with the current rules: ubyte i; ushort j; j = -i; => j = -cast(short)i; (This currently evaluates to j = cast(short)(-i); That should not compile, sigh. Walter wouldn't listen... And a += a; is equivalent to a = a + a; Well not quite equivalent. In D2 they aren't. The former clarifies that you want to reassign the expression to a, and no cast is necessary. The latter would not compile if a is shorter than int. 
and is logically consistent with byte[] k,l,m; m[] = k[] + l[]; Essentially, instead of trying to prevent overflows, except for those from int and long, this scheme attempts to minimize the risk of overflows, including those from int (and long, once cent exists. Maybe long+long=>bigInt?) But if you close operations for types smaller than int, you end up with a scheme even more error-prone than C! Suggestion 2: Enable the full rules as part of SafeD and allow non-promotion in un-safe D. Note this could be synergistically combined with Suggestion 1. Safe D is concerned with memory safety only. Andrei
Re: dmd 1.046 and 2.031 releases
Jérôme M. Berger wrote: Andrei Alexandrescu wrote: Derek Parnell wrote: It seems that D would benefit from having a standard syntax format for expressing various range sets; a. Include begin Include end, i.e. [] b. Include begin Exclude end, i.e. [) c. Exclude begin Include end, i.e. (] d. Exclude begin Exclude end, i.e. () I'm afraid this would majorly mess with pairing of parens. I think Derek's point was to have *some* syntax to mean this, not necessarily the one he showed (which he showed because I believe that's the "standard" mathematical way to express it for English speakers). For example, we could say that [] is always inclusive and have another character which makes it exclusive like: a. Include begin Include end, i.e. [ a .. b ] b. Include begin Exclude end, i.e. [ a .. b ^] c. Exclude begin Include end, i.e. [^ a .. b ] d. Exclude begin Exclude end, i.e. [^ a .. b ^] I think Walter's message really rendered the whole discussion moot. Post of the year: "I like: a .. b+1 to mean inclusive range." Consider "+1]" a special symbol that means the range is to be closed to the right :o). Andrei
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu wrote: Derek Parnell wrote: It seems that D would benefit from having a standard syntax format for expressing various range sets; a. Include begin Include end, i.e. [] b. Include begin Exclude end, i.e. [) c. Exclude begin Include end, i.e. (] d. Exclude begin Exclude end, i.e. () I'm afraid this would majorly mess with pairing of parens. I think Derek's point was to have *some* syntax to mean this, not necessarily the one he showed (which he showed because I believe that's the "standard" mathematical way to express it for English speakers). For example, we could say that [] is always inclusive and have another character which makes it exclusive like: a. Include begin Include end, i.e. [ a .. b ] b. Include begin Exclude end, i.e. [ a .. b ^] c. Exclude begin Include end, i.e. [^ a .. b ] d. Exclude begin Exclude end, i.e. [^ a .. b ^] Jerome PS: If you *really* want messed parens pairing, try it with the French convention: [] [[ ]] ][ ;)
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 11:36:26 -0400, Andrei Alexandrescu wrote: Robert Jacques wrote: Andrei, I have a short vector template (think vec!(byte,3), etc) where I've had to wrap the majority of lines of code in cast(T)( ... ), because I support bytes and shorts. I find that both a kludge and a pain. Well suggestions for improving things are welcome. But I don't think it will fly to make int+int yield a long. Suggestion 1: Loft the right hand of the expression (when lofting is valid) to the size of the left hand. i.e. byte a,b,c; c = a + b; => c = a + b; short d; d = a + b; => d = cast(short) a + cast(short) b; int e, f; e = a + b; => e = cast(short) a + cast(short) b; e = a + b + d; => e = cast(int)(cast(short) a + cast(short) b) + cast(int) d; Or e = cast(int) a + (cast(int) b + cast(int) d); long g; g = e + f; => g = cast(long) e + cast(long) f; When choosing operator overloads or auto, prefer the ideal lofted interpretation (as per the new rules, but without the exception for int/long), over truncated variants. i.e. auto h = a + b; => short h = cast(short) a + cast(short) b; This would also properly handle some of the corner/inconsistent cases with the current rules: ubyte i; ushort j; j = -i; => j = -cast(short)i; (This currently evaluates to j = cast(short)(-i).) And a += a; is equivalent to a = a + a; and is logically consistent with byte[] k,l,m; m[] = k[] + l[]; Essentially, instead of trying to prevent overflows, except for those from int and long, this scheme attempts to minimize the risk of overflows, including those from int (and long, once cent exists. Maybe long+long=>bigInt?) Suggestion 2: Enable the full rules as part of SafeD and allow non-promotion in un-safe D. Note this could be synergistically combined with Suggestion 1. BTW: this means byte and short are not closed under arithmetic operations, which drastically limits their usefulness. I think they shouldn't be closed because they overflow for relatively small values. 
Andrei, consider anyone who wants to do image manipulation (or computer vision, video, etc). Since images are one of the few areas that use bytes extensively, and have to map back into themselves, they are basically sorry out of luck. I understand, but also keep in mind that making small integers closed is the less safe option. So we'd be hurting everyone for the sake of the image manipulation folks. Andrei Well, how often does everyone else use bytes?
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 08:53:49 +0200, Lars T. Kyllingstad wrote: > Ary Borenszweig wrote: >> のしいか (noshiika) wrote: >>> Thank you for the great work, Walter and all the other contributors. >>> >>> But I am a bit disappointed with the CaseRangeStatement syntax. Why is >>> it >>>case 0: .. case 9: >>> instead of >>>case 0 .. 9: >>> >>> With the latter notation, ranges can be easily used together with >>> commas, for example: >>>case 0, 2 .. 4, 6 .. 9: >>> >>> And CaseRangeStatement, being inconsistent with other syntaxes using >>> the .. operator, i.e. slicing and ForeachRangeStatement, includes the >>> endpoint. >>> Shouldn't D make use of another operator to express ranges that >>> include the endpoints as Ruby or Perl6 does? >> >> I agree. >> >> I think this syntax is yet another one of those things people looking >> at D will say "ugly" and turn their heads away. > > > When the discussion first came up in the NG, I was a bit sceptical about > Andrei's suggestion for the case range statement as well. Now, I > definitely think it's the best choice, and it's only because I realised > it can be written like this: > > case 1: > .. > case 4: > // do stuff > [snip] I think it looks much better that way and users are more likely to be comfortable with the syntax. I hope it will be displayed in the examples that way. Still, the syntax overall looks a bit alien because it's a new syntax addition.
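For reference, here is the accepted D2 syntax under discussion, laid out the way Lars suggests; the surrounding function is an invented example, not code from the thread:

```d
// CaseRangeStatement: "case 1: .. case 4:" matches 1 through 4,
// with both endpoints included (unlike slicing's half-open "..").
string describe(int n)
{
    switch (n)
    {
        case 0:
            return "zero";
        case 1:
        ..
        case 4:
            return "one through four";
        default:
            return "something else";
    }
}
```

Written across three lines like this, the two case labels read as the start and end of the range, which is what makes the syntax grow on people.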
Re: dmd 1.046 and 2.031 releases
On Tue, Jul 7, 2009 at 11:33 AM, Andrei Alexandrescu wrote: > > Well 32-bit architectures may be a historical relic but I don't think 32-bit > integers are. And I think it would be too disruptive a change to promote > results of arithmetic operation between integers to long. > > ... > > This is a different beast. We simply couldn't devise a satisfactory scheme > within the constraints we have. No simple solution we could think of has > worked, nor have a number of sophisticated solutions. Ideas would be > welcome, though I need to warn you that the devil is in the details so the > ideas must be fully baked; too many good sounding high-level ideas fail when > analyzed in detail. Hm. Just throwing this out there, as a possible solution for both problems. Suppose you kept the current set of integer types, but made all of them "open" (i.e. byte+byte=short, int+int=long etc.). Furthermore, you made it impossible to implicitly convert between the signed and unsigned types of the same size (the int<>uint hole disappears). But then you introduce two new native-size integer types. Well, we already have them - ptrdiff_t and size_t - but give them nicer names, like word and uword. Unlike the other integer types, these would be implicitly convertible to one another. They'd more or less take the place of 'int' and 'uint' in most code, since most of the time, the size of the integer isn't that important.
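The int<>uint hole Jarrett mentions is easy to demonstrate in today's D; under his proposal the first assignment below would require an explicit cast, while the word/uword pair (sketched here with the existing aliases) would keep converting freely:

```d
void demo()
{
    // Today int converts to uint implicitly, silently reinterpreting
    // negative values as large unsigned ones.
    int  n = -1;
    uint u = n;              // accepted without a cast
    assert(u == uint.max);   // -1 reinterpreted as 4294967295

    // ptrdiff_t and size_t already play the role of the proposed
    // word/uword native-size types, just with uglier names.
    size_t    s = 42;
    ptrdiff_t p = s;         // these convert between each other too
    assert(p == 42);
}
```

This is a sketch of the status quo the proposal targets, not of the proposal itself, which would need compiler changes.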
Re: dmd 1.046 and 2.031 releases
Leandro Lucarella wrote: This seems nice. I think it would be nice if this kind of thing were commented on in the NG before a compiler release, to allow community input and discussion. Yup, that's what happened to case :o). I think these are the kinds of things that deserve some kind of RFC (like Python PEPs), as someone suggested a couple of days ago. I think that's a good idea. Who has the time and resources to set that up? Andrei
Re: dmd 1.046 and 2.031 releases
Andrei Alexandrescu, on July 7 at 00:48, you wrote: > Robert Jacques wrote: > >On Mon, 06 Jul 2009 01:05:10 -0400, Walter Bright > > > >wrote: > >>Something for everyone here. > >> > >> > >>http://www.digitalmars.com/d/1.0/changelog.html > >>http://ftp.digitalmars.com/dmd.1.046.zip > >> > >> > >>http://www.digitalmars.com/d/2.0/changelog.html > >>http://ftp.digitalmars.com/dmd.2.031.zip > >Thanks for another great release. > >Also, I'm not sure if this is a bug or a feature with regard to the new > >integer rules: > > byte x,y,z; > > z = x+y;// Error: cannot implicitly convert expression (cast(int)x + > >cast(int)y) of type int to byte > >which makes sense, in that a byte can overflow, but also doesn't make sense, > >since integer behaviour is different. > > Walter has implemented an ingenious scheme for disallowing narrowing > conversions while at the same time minimizing the number of casts > required. He hasn't explained it, so I'll sketch an explanation here. > > The basic approach is "value range propagation": each expression is > associated with a minimum possible value and a maximum possible value. > As complex expressions are assembled out of simpler expressions, the > ranges are computed and propagated. > > For example, this code compiles: > > int x = whatever(); > bool y = x & 1; > > The compiler figures that the range of x is int.min to int.max, the > range of 1 is 1 to 1, and (here's the interesting part), the range of > x & 1 is 0 to 1. So it lets the code go through. However, it won't allow > this: > > int x = whatever(); > bool y = x & 2; > > because x & 2 has range between 0 and 2, which won't fit in a bool. > > The approach generalizes to arbitrary complex expressions. Now here's the > trick > though: the value range propagation is local, i.e. all ranges are forgotten > beyond one expression. So as soon as you move on to the next statement, the > ranges have been forgotten. > > Why? 
Simply put, increased implementation difficulties and increased > compiler memory footprint for diminishing returns. Both Walter and > I noticed that expression-level value range propagation gets rid of all > dangerous cases and the vast majority of required casts. Indeed, his > test suite, Phobos, and my own codebase required surprisingly few > changes with the new scheme. Moreover, we both discovered bugs due to > the new feature, so we're happy with the status quo. > > Now consider your code: > > byte x,y,z; > z = x+y; > > The first line initializes all values to zero. In an intra-procedural > value range propagation, these zeros would be propagated to the next > statement, which would range-check. However, in the current approach, > the ranges of x, y, and z are forgotten at the first semicolon. Then, > x+y has range byte.min+byte.min up to byte.max+byte.max as far as the > type checker knows. That would fit in a short (and by the way I just > found a bug with that occasion) but not in a byte. This seems nice. I think it would be nice if this kind of thing were commented on in the NG before a compiler release, to allow community input and discussion. I think these are the kinds of things that deserve some kind of RFC (like Python PEPs), as someone suggested a couple of days ago. -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
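Andrei's examples, collected into one compilable sketch; the lines that fail under dmd 2.031 are kept as comments so the file compiles:

```d
int whatever() { return 12345; }  // stand-in for an arbitrary int source

void demo()
{
    int x = whatever();

    // Expression-level value range propagation:
    bool  y = x & 1;    // ok: (x & 1) has range 0..1
    ubyte u = x & 0xFF; // ok: (x & 0xFF) has range 0..255
    // bool z = x & 2;  // error: (x & 2) has range 0..2, too wide for bool

    // Ranges are forgotten at each semicolon:
    byte a = 0, b = 0;
    // byte c = a + b;  // error: a + b is typed with the full byte+byte
                        // range, even though a and b are provably zero here
}
```

The `ubyte u` line is my extrapolation from the stated rule (a masked value whose range fits the target), not an example from the post.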
Re: dmd 1.046 and 2.031 releases
Robert Jacques wrote: On Tue, 07 Jul 2009 03:33:24 -0400, Andrei Alexandrescu wrote: Robert Jacques wrote: That's really cool. But I don't think that's actually happening (Or are these the bugs you're talking about?): byte x,y; short z; z = x+y; // Error: cannot implicitly convert expression (cast(int)x + cast(int)y) of type int to short // Repeat for ubyte, bool, char, wchar and *, -, / http://d.puremagic.com/issues/show_bug.cgi?id=3147 You may want to add to it. Added. In summary, + * - / % >> >>> don't work for types 8-bits and under. << is inconsistent (x<<1 errors, but x<<=1 compiles). The op-assigns (+= *= -= /= %= >>= <<= >>>=) and pre/post increments (++ --) compile, which is maddeningly inconsistent, particularly when the spec defines ++x as sugar for x = x + 1, which doesn't compile. And by that logic shouldn't the following happen? int x,y; int z; z = x+y; // Error: cannot implicitly convert expression (cast(long)x + cast(long)y) of type long to int No. Int remains "special", i.e. arithmetic operations on it don't automatically grow to become long. i.e. why the massive inconsistency between byte/short and int/long? (This is particularly a pain for generic i.e. templated code) I don't find it a pain. It's a practical decision. Andrei, I have a short vector template (think vec!(byte,3), etc) where I've had to wrap the majority of lines of code in cast(T)( ... ), because I support bytes and shorts. I find that both a kludge and a pain. Well suggestions for improving things are welcome. But I don't think it will fly to make int+int yield a long. BTW: this means byte and short are not closed under arithmetic operations, which drastically limits their usefulness. I think they shouldn't be closed because they overflow for relatively small values. Andrei, consider anyone who wants to do image manipulation (or computer vision, video, etc). Since images are one of the few areas that use bytes extensively, and have to map back into themselves, they are basically sorry out of luck. 
I understand, but also keep in mind that making small integers closed is the less safe option. So we'd be hurting everyone for the sake of the image manipulation folks. Andrei
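A sketch of the kind of short vector template Robert describes; the name Vec and its layout are my invention, but it shows why every arithmetic line needs a cast(T) when T is byte or short:

```d
struct Vec(T, int N)
{
    T[N] data;

    // 2009-era (opAdd-style) operator overload for elementwise addition.
    Vec opAdd(Vec rhs)
    {
        Vec r;
        foreach (i; 0 .. N)
            // data[i] + rhs.data[i] is typed int, so for byte/short
            // elements the result must be forced back down by hand.
            r.data[i] = cast(T)(data[i] + rhs.data[i]);
        return r;
    }
}

unittest
{
    Vec!(byte, 3) a, b;
    a.data = [1, 2, 3];
    b.data = [4, 5, 6];
    auto c = a + b;
    assert(c.data == [5, 7, 9]);
}
```

Without the cast, the assignment to `r.data[i]` is exactly the "cannot implicitly convert expression of type int" error quoted earlier in the thread.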
Re: dmd 1.046 and 2.031 releases
Jarrett Billingsley wrote: The only thing is: why doesn't _this_ fail, then? int x, y, z; z = x + y; I'm sure it's out of convenience, but what about in ten, fifteen years when 32-bit architectures are a historical relic and there's still this hole in the type system? Well 32-bit architectures may be a historical relic but I don't think 32-bit integers are. And I think it would be too disruptive a change to promote results of arithmetic operation between integers to long. The same argument applies for the implicit conversions between int and uint. If you're going to do that, why not have implicit conversions between long and ulong on 64-bit platforms? This is a different beast. We simply couldn't devise a satisfactory scheme within the constraints we have. No simple solution we could think of has worked, nor have a number of sophisticated solutions. Ideas would be welcome, though I need to warn you that the devil is in the details so the ideas must be fully baked; too many good sounding high-level ideas fail when analyzed in detail. Andrei
Re: dmd 1.046 and 2.031 releases
On Tue, 07 Jul 2009 03:33:24 -0400, Andrei Alexandrescu wrote: Robert Jacques wrote: That's really cool. But I don't think that's actually happening (Or are these the bugs you're talking about?): byte x,y; short z; z = x+y; // Error: cannot implicitly convert expression (cast(int)x + cast(int)y) of type int to short // Repeat for ubyte, bool, char, wchar and *, -, / http://d.puremagic.com/issues/show_bug.cgi?id=3147 You may want to add to it. Added. In summary, + * - / % >> >>> don't work for types 8-bits and under. << is inconsistent (x<<1 errors, but x<<=1 compiles). The op-assigns (+= *= -= /= %= >>= <<= >>>=) and pre/post increments (++ --) compile, which is maddeningly inconsistent, particularly when the spec defines ++x as sugar for x = x + 1, which doesn't compile. And by that logic shouldn't the following happen? int x,y; int z; z = x+y; // Error: cannot implicitly convert expression (cast(long)x + cast(long)y) of type long to int No. Int remains "special", i.e. arithmetic operations on it don't automatically grow to become long. i.e. why the massive inconsistency between byte/short and int/long? (This is particularly a pain for generic i.e. templated code) I don't find it a pain. It's a practical decision. Andrei, I have a short vector template (think vec!(byte,3), etc) where I've had to wrap the majority of lines of code in cast(T)( ... ), because I support bytes and shorts. I find that both a kludge and a pain. BTW: this means byte and short are not closed under arithmetic operations, which drastically limits their usefulness. I think they shouldn't be closed because they overflow for relatively small values. Andrei, consider anyone who wants to do image manipulation (or computer vision, video, etc). Since images are one of the few areas that use bytes extensively, and have to map back into themselves, they are basically sorry out of luck.