Re: the traits trap
On Friday, 21 November 2014 at 04:08:52 UTC, Steven Schveighoffer wrote: This is one area that D's templates are very user-unfriendly. -Steve +1, Well said! --- Paolo
Re: Overload using nogc
"Jonathan Marler" wrote in message news:huzhibjpuvqjxqnub...@forum.dlang.org... Has the idea of function overloading via nogc been explored? void func() @nogc { // logic that does not use GC } void func() { // logic that uses GC } void main(string[] args) // @nogc { // if main is @nogc, then the @nogc version of func // will be called, otherwise, the GC version will be func(); } This could be useful for the standard library to expose different implementations based on whether or not the application is using the GC. I think in most cases either the @nogc version would use different types (or it's templated), and therefore you wouldn't need to overload on @nogc, or you would simply have only a @nogc version.
Re: Type Inference Bug?
"Meta" wrote in message news:wzczhiwokauvkkevt...@forum.dlang.org... shared const int i; static if (is(typeof(i) T == shared U, U)) { //Prints "shared(const(int))" pragma(msg, U); } This seems like subtly wrong behaviour to me. If T == shared U, for some U, then shouldn't U be unshared? If T is shared(const(int)), and T is the same as the type U with the 'shared' qualifier applied to it, then U should be of type const(int), not shared(const(int)). I'm bringing this up partially because it seems wrong to me, and partially because we currently don't have a good why of "shaving" the outermost qualifier off a type, and this seemed the natural way to do it to me (I was surprised when it didn't work). It doesn't print anything for me. This code seems to have the desired effect: shared const int i; void main() { static if (is(typeof(i) : shared(U), U)) { //Prints "const(int)" pragma(msg, U); } }
Re: the traits trap
On Friday, 21 November 2014 at 04:08:52 UTC, Steven Schveighoffer wrote: Can anyone figure out a good solution to this problem? I like template constraints, but they are just too black-boxy. Would we have to signify that some enum is actually a trait so the compiler would know to spit out the errors from the failed compile? Would it make sense to add some __traits function that allows one to signify that this is a special trait thing? This is one area where D's templates are very user-unfriendly. -Steve

I would second this. Personally, I have had the same "not very pleasant" experience debugging template constraints. Since more often than not the constraints have the form:

if (clause1 && clause2 && clause3 ...)

my naive proposal would be to make the error message show which clause was the first to be false. However, I have no idea whether this could be implemented easily.
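In the meantime, the clause-by-clause check the post asks the compiler for can be done by hand (hypothetical trait names standing in for clause1/clause2): test each clause separately with pragma(msg) so the first false clause identifies itself.

```d
// Hypothetical traits standing in for the clauses of a constraint.
enum hasLength(T) = __traits(compiles, (T t) { size_t n = t.length; });
enum hasFront(T)  = __traits(compiles, (T t) { auto f = t.front; });

struct S { size_t length; }   // has length, lacks front

// The combined constraint only reports "does not match":
//   void foo(T)(T t) if (hasLength!T && hasFront!T) { ... }
// Evaluating each clause separately shows which one failed:
void main()
{
    pragma(msg, "hasLength!S = ", hasLength!S); // true
    pragma(msg, "hasFront!S  = ", hasFront!S);  // false -> the culprit
}
```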
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On Friday, 21 November 2014 at 04:53:38 UTC, Walter Bright wrote: BTW, granted the 0x7FFF problems exhibit the bugs less often, but paradoxically this can make the bug worse, because then it only gets found much, much later in supposedly tested & robust code. 0 crossing bugs tend to show up much sooner, and often immediately.

Yes, I have to say the current design has some issues, but the alternative seems worse.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/20/2014 7:11 PM, Walter Bright wrote: On 11/20/2014 3:25 PM, bearophile wrote: Walter Bright: If that is changed to a signed type, then you'll have a same-only-different set of subtle bugs, This is possible. Can you show some of the bugs, we can discuss them, and see if they are actually worse than the current situation. All you're doing is trading 0 crossing for 0x7FFF crossing issues, and pretending the problems have gone away. BTW, granted the 0x7FFF problems exhibit the bugs less often, but paradoxically this can make the bug worse, because then it only gets found much, much later in supposedly tested & robust code. 0 crossing bugs tend to show up much sooner, and often immediately.
Re: the traits trap
On Friday, 21 November 2014 at 04:08:52 UTC, Steven Schveighoffer wrote:

OK, so I'm writing some traits that I'd like my objects to satisfy. And I'm having the worst time debugging them. Most of the traits in D look like this:

enum isSomeType(T) = __traits(compiles, (T t){
    // some statements using t
    // some asserts
    // some static asserts
});

All good. Now, let's test my object:

unittest
{
    static assert(isSomeType!SomeObject);
}

Nope. Now, how the hell do I figure out why? I have found the following technique most valuable:

1. Create a function called "testSomeType(T)(T t)", and make its body the same as the trait
2. Instead of static asserting the trait, call the function

Much better results! Whichever part of the trait doesn't work shows up as a legitimate error, and I can fix the object or the trait. Now, this idiom of using __traits(compiles, ...) is used everywhere in Phobos. Often you see things like:

void foo(T)(T t)
if (hasSomeTrait!T && hasSomeOtherTrait!T && alsoHasThisOne!T)
{ ...

If this doesn't compile, the compiler says "Error template instance blah blah does not match template declaration blah blah blah". Useless... Now, even if I want to use my cool technique to figure out where the issue is, I have to apply it one trait at a time, and I may have to temporarily comment out some code to avoid triggering an error before I get to that point. When I first came to write this post, I wanted to ask if anyone thought it was a good idea to replace the __traits(compiles, someLiteral) with __traits(compiles, someFunctionTemplate!T) somehow, so if one couldn't do it, you had some easy way to debug by calling someFunctionTemplate. But I hate that idea. It means you have all these do-nothing functions whose sole existence is to debug traits, when the traits themselves can just do it for you. Can anyone figure out a good solution to this problem? I like template constraints, but they are just too black-boxy. Would we have to signify that some enum is actually a trait so the compiler would know to spit out the errors from the failed compile? Would it make sense to add some __traits function that allows one to signify that this is a special trait thing? This is one area where D's templates are very user-unfriendly. -Steve

There has been a bit of promising work done by Shammah Chancellor. It's a bit more heavyweight than a template returning true or false, but it's also more powerful and makes for better error messages. http://forum.dlang.org/thread/m219bj$fpa$1...@digitalmars.com
Re: GSOC Summer 2015 - Second call for Proposals
I've done a tiny bit of research on past GSOC projects, and based on that my guess is we have been successful on 5 of 6 projects - but I am not really sure about the unsuccessful project.

Successes

1. Linear Algebra Library based on SciD (2011) Student: Cristi Cobzarenco Mentor: David Simcha I am claiming success based on: https://github.com/cristicbz/scid The readme for which states "Most of the code from the original project by Lars Tandle Kyllingstad has been rewritten during the 2011 Google Summer of Code by Cristi Cobzarenco".

2. An Apache Thrift Implementation for D (2011) Student: David Nadlinger Mentor: Nitay Joffe Claim of success based on: http://klickverbot.at/blog/2012/03/thrift-now-officially-supports-d/

3. Enhance regular Expressions (2011) Student: Dmitry Olshansky Mentor: Fawzi Mohamed Claim of success based on: https://github.com/DmitryOlshansky/FReD (and recent DConf talks if need be).

4. Mono-D (2012) Student: Alex Bothe Mentor: LightBender Claim of success based on: https://github.com/aBothe/Mono-D

5. Extended Unicode support (2012) Student: Dmitry Olshansky Mentor: Andrei Alexandrescu Claim of success based on: Dmitry's continued involvement in Phobos development, though I do not have anything on how this project actually progressed.

Unsure of Status (2012)

1. Removing the global GC lock from common allocations in D (2012) Student: Antti-Ville Tuunainen Mentor: David Simcha I haven't found much info on this one. Can either David or Antti-Ville comment on how this project went?
the traits trap
OK, so I'm writing some traits that I'd like my objects to satisfy. And I'm having the worst time debugging them. Most of the traits in D look like this:

enum isSomeType(T) = __traits(compiles, (T t){
    // some statements using t
    // some asserts
    // some static asserts
});

All good. Now, let's test my object:

unittest
{
    static assert(isSomeType!SomeObject);
}

Nope. Now, how the hell do I figure out why? I have found the following technique most valuable:

1. Create a function called "testSomeType(T)(T t)", and make its body the same as the trait
2. Instead of static asserting the trait, call the function

Much better results! Whichever part of the trait doesn't work shows up as a legitimate error, and I can fix the object or the trait. Now, this idiom of using __traits(compiles, ...) is used everywhere in Phobos. Often you see things like:

void foo(T)(T t)
if (hasSomeTrait!T && hasSomeOtherTrait!T && alsoHasThisOne!T)
{ ...

If this doesn't compile, the compiler says "Error template instance blah blah does not match template declaration blah blah blah". Useless... Now, even if I want to use my cool technique to figure out where the issue is, I have to apply it one trait at a time, and I may have to temporarily comment out some code to avoid triggering an error before I get to that point. When I first came to write this post, I wanted to ask if anyone thought it was a good idea to replace the __traits(compiles, someLiteral) with __traits(compiles, someFunctionTemplate!T) somehow, so if one couldn't do it, you had some easy way to debug by calling someFunctionTemplate. But I hate that idea. It means you have all these do-nothing functions whose sole existence is to debug traits, when the traits themselves can just do it for you. Can anyone figure out a good solution to this problem? I like template constraints, but they are just too black-boxy. Would we have to signify that some enum is actually a trait so the compiler would know to spit out the errors from the failed compile? Would it make sense to add some __traits function that allows one to signify that this is a special trait thing? This is one area where D's templates are very user-unfriendly. -Steve
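A compilable sketch of the technique in steps 1 and 2 (hypothetical trait and type names): the trait body is duplicated into a plain function template, and calling that function surfaces the real compile error instead of a bare static assert failure.

```d
// Hypothetical trait: requires a readable `front` and a `popFront`.
enum isSomeType(T) = __traits(compiles, (T t) {
    auto f = t.front;
    t.popFront();
});

// Step 1: the same body as a real function template.
void testSomeType(T)(T t)
{
    auto f = t.front;
    t.popFront();          // if this line is the problem, the compiler
                           // now points straight at it
}

struct SomeObject { int front; } // popFront is missing

void main()
{
    // Step 2: call the function instead of static-asserting the trait.
    // static assert(isSomeType!SomeObject); // just says "assert failed"
    SomeObject s;
    // testSomeType(s); // uncomment: the error names the missing popFront
}
```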
Overload using nogc
Has the idea of function overloading via @nogc been explored?

void func() @nogc
{
    // logic that does not use GC
}

void func()
{
    // logic that uses GC
}

void main(string[] args) // @nogc
{
    // if main is @nogc, then the @nogc version of func
    // will be called; otherwise, the GC version will be
    func();
}

This could be useful for the standard library to expose different implementations based on whether or not the application is using the GC.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/19/2014 8:22 AM, David Gileadi wrote: 2. That there's a need for it at all, which requires knowing that length is unsigned. I did know this, but I bet in the heat of programming I'd easily forget it. In a semi-complex algorithm the bug could easily hide for a long time before biting. If it was signed, you'd just have different issues hiding.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/20/2014 3:25 PM, bearophile wrote: Walter Bright: If that is changed to a signed type, then you'll have a same-only-different set of subtle bugs, This is possible. Can you show some of the bugs, we can discuss them, and see if they are actually worse than the current situation. All you're doing is trading 0 crossing for 0x7FFF crossing issues, and pretending the problems have gone away.
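A minimal sketch of the unsigned "0 crossing" bug being traded against the signed 0x7FFF... overflow: decrementing an unsigned length past zero wraps to a huge value instead of going negative, which is why these bugs tend to show up immediately.

```d
void main()
{
    size_t zero = 0;
    assert(zero - 1 == size_t.max); // wraps instead of becoming -1

    // Classic form of the bug: `i >= 0` is always true for an
    // unsigned i, so this loop header never terminates on its own:
    // for (size_t i = len - 1; i >= 0; --i) { ... }
}
```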
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/20/2014 3:37 PM, FrankLike wrote: What about: uint x; auto z = x - 1; ? If mixing signed and unsigned were treated as signed, maybe there would be no mistake.

I think you missed my question - is that legal code under H.S. Teoh's proposal?
Re: Why is `scope` planned for deprecation?
On 11/20/14 5:09 PM, Walter Bright wrote: On 11/20/2014 3:10 PM, "Ola Fosheim Grøstad" " wrote: On Thursday, 20 November 2014 at 22:47:27 UTC, Walter Bright wrote: On 11/20/2014 1:55 PM, deadalnix wrote: All of this is beautiful until you try to implement a quicksort in, haskell. […] Monads! I think Deadalnix meant that you cannot do in-place quicksort easily in Haskell. That's correct. Non-mutating quicksort is easy, no need for monads:

quicksort [] = []
quicksort (p:xs) = (quicksort lesser) ++ [p] ++ (quicksort greater)
    where
        lesser  = filter (< p) xs
        greater = filter (>= p) xs

https://www.haskell.org/haskellwiki/Introduction#Quicksort_in_Haskell

Except that isn't really quicksort. Monads are the workaround functional languages use to deal with things that need mutation.

As I like to say, this troika has inflicted a lot of damage on both FP and those beginning to learn it:

* Linear-space factorial
* Doubly exponential Fibonacci
* (Non)Quicksort

These losers appear with depressing frequency in FP introductory texts. Andrei
Re: AA rehash threshold
On 11/20/14 8:42 PM, Jerry Quinn wrote: Steven Schveighoffer writes: On 11/20/14 5:30 PM, Jerry Quinn wrote: This works nicely for small types, but has gotchas. For example, if you've got an AA of ints, what value indicates that this is a value folded into the bucket entry vs actually being a pointer? You'll need extra space to make sure it's safe. Alignment is another concern. Let's say you have bool for the entry/element flag. With ints you've doubled the size of the bucket array. Hm.. the bucket entry will simply be what a node is now. You are actually saving space, because you don't need to store that extra pointer to the first node. Almost. You do need a way to indicate that the bucket is empty. So you need another flag though it can be smaller than a pointer (for 64 bit). But the overhead is low except for things like ints. It's easy enough to set the "next" pointer to some invalid-but-not-null value. Hackish, but it would work ;) Where it gets dicey is when you rehash. Some of the nodes will be moved into the bucket space, some will move out of it. But I don't think it's a big deal, as a rehash doesn't happen very often. True. Umm, that does raise a question - are addresses of AA contents required to be stable? C++11 requires that of its hashtables. But it ties your hands when trying to write customized hash tables. I don't think we make any guarantees about that. As it stands now, whatever is implemented seems to be what people think is the spec. But it's not always true. We have so many more options when AA's become a library type. We could say "the builtin AA is this flavor of hashtable", but provide options for many different flavors with the same template if you want specific ones. -Steve
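A rough sketch of the bucket layout being discussed (hypothetical names, not the druntime implementation): each bucket stores a full node inline, and an invalid-but-not-null `next` value marks an empty bucket, as suggested above.

```d
struct Node(K, V)
{
    K key;
    V value;
    Node* next;   // null ends a chain; EMPTY marks an unused bucket
}

// The "invalid-but-not-null" sentinel suggested in the post;
// never a real address, so it cannot collide with a chained node.
enum size_t EMPTY = 1;

bool bucketEmpty(K, V)(ref Node!(K, V) b)
{
    return cast(size_t) b.next == EMPTY;
}

void main()
{
    Node!(string, int) b;
    b.next = cast(Node!(string, int)*) EMPTY; // mark bucket unused
    assert(bucketEmpty(b));

    b.next = null;                            // now: occupied, chain ends
    assert(!bucketEmpty(b));
}
```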
Re: Why is `scope` planned for deprecation?
On 11/20/2014 5:27 PM, "Ola Fosheim Grøstad" " wrote: On Friday, 21 November 2014 at 01:09:27 UTC, Walter Bright wrote: Except that isn't really quicksort. Monads are the workaround functional languages use to deal with things that need mutation. Yes, at least in Haskell, but I find monads in Haskell harder to read than regular imperative code. Exactly my point (and I presume deadalnix's, too).
Re: AA rehash threshold
Steven Schveighoffer writes: > On 11/20/14 5:30 PM, Jerry Quinn wrote: >> This works nicely for small types, but has gotchas. For example, if >> you've got an AA of ints, what value indicates that this is a value >> folded into the bucket entry vs actually being a pointer? You'll need >> extra space to make sure it's safe. Alignment is another concern. >> Let's say you have bool for the entry/element flag. With ints you've >> doubled the size of the bucket array. > > Hm.. the bucket entry will simply be what a node is now. You are actually > saving space, because you don't need to store that extra pointer to the first > node. Almost. You do need a way to indicate that the bucket is empty. So you need another flag though it can be smaller than a pointer (for 64 bit). But the overhead is low except for things like ints. > Where it gets dicey is when you rehash. Some of the nodes will be moved into > the bucket space, some will move out of it. But I don't think it's a big deal, > as a rehash doesn't happen very often. True. Umm, that does raise a question - are addresses of AA contents required to be stable? C++11 requires that of its hashtables. But it ties your hands when trying to write customized hash tables.
Re: Why is `scope` planned for deprecation?
On Friday, 21 November 2014 at 01:09:27 UTC, Walter Bright wrote: Except that isn't really quicksort. Monads are the workaround functional languages use to deal with things that need mutation.

Yes, at least in Haskell, but I find monads in Haskell harder to read than regular imperative code. You can apparently cheat a little using libraries; this doesn't look too bad (from slashdot):

import qualified Data.Vector.Generic as V
import qualified Data.Vector.Generic.Mutable as M

qsort :: (V.Vector v a, Ord a) => v a -> v a
qsort = V.modify go
  where
    go xs
      | M.length xs < 2 = return ()
      | otherwise = do
          p <- M.read xs (M.length xs `div` 2)
          j <- M.unstablePartition (< p) xs
          let (l, pr) = M.splitAt j xs
          k <- M.unstablePartition (== p) pr
          go l
          go $ M.drop k pr

http://stackoverflow.com/questions/7717691/why-is-the-minimalist-example-haskell-quicksort-not-a-true-quicksort
Re: Why is `scope` planned for deprecation?
On 11/20/2014 3:10 PM, "Ola Fosheim Grøstad" " wrote: On Thursday, 20 November 2014 at 22:47:27 UTC, Walter Bright wrote: On 11/20/2014 1:55 PM, deadalnix wrote: All of this is beautiful until you try to implement a quicksort in, haskell. […] Monads! I think Deadalnix meant that you cannot do in-place quicksort easily in Haskell. That's correct. Non-mutating quicksort is easy, no need for monads: quicksort [] = [] quicksort (p:xs) = (quicksort lesser) ++ [p] ++ (quicksort greater) where lesser = filter (< p) xs greater = filter (>= p) xs https://www.haskell.org/haskellwiki/Introduction#Quicksort_in_Haskell Except that isn't really quicksort. Monads are the workaround functional languages use to deal with things that need mutation.
Re: Why is `scope` planned for deprecation?
On Thursday, 20 November 2014 at 23:22:40 UTC, deadalnix wrote: You are a goalpost shifting champion, aren't you?

Nope, it follows from your line of argument, but the screwdriver/hammer metaphor is not a good one. You can implement your hammer and your screwdriver at the top if you have lower-level screwdriver/hammer components at the bottom. That is the good part. Piling together hammers and screwdrivers and hoping that nobody is going to miss the remaining 35% involving glue and tape… that is bad.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
What about: uint x; auto z = x - 1; ? If mixing signed and unsigned were treated as signed, maybe there would be no mistake. Here is a small test; add 'cast(long)' before the - operator. If it were added automatically, maybe that would be fine.

import std.stdio;

void main()
{
    size_t width = 10;
    size_t height = 20;
    writeln("before width is ", width, " ,height is ", height);
    height -= 15;
    width -= cast(long)height;
    writeln("after width is ", width, " ,height is ", height);
}

result: after width is 5. It's OK.

Frank
Re: 'int' is enough for 'length' to migrate code from x86 to x64
Walter Bright: If that is changed to a signed type, then you'll have a same-only-different set of subtle bugs, This is possible. Can you show some of the bugs, we can discuss them, and see if they are actually worse than the current situation. Bye, bearophile
Re: Why is `scope` planned for deprecation?
You are a goalpost shifting champion, aren't you?
Re: Why is `scope` planned for deprecation?
On Thursday, 20 November 2014 at 22:47:27 UTC, Walter Bright wrote: On 11/20/2014 1:55 PM, deadalnix wrote: All of this is beautiful until you try to implement a quicksort in, haskell. […] Monads!

I think Deadalnix meant that you cannot do in-place quicksort easily in Haskell. Non-mutating quicksort is easy, no need for monads:

quicksort [] = []
quicksort (p:xs) = (quicksort lesser) ++ [p] ++ (quicksort greater)
    where
        lesser  = filter (< p) xs
        greater = filter (>= p) xs

https://www.haskell.org/haskellwiki/Introduction#Quicksort_in_Haskell
Re: Why is `scope` planned for deprecation?
On 11/20/2014 1:55 PM, deadalnix wrote: All of this is beautiful until you try to implement a quicksort in Haskell. It is not that functional programming is bad (I actually like it a lot), but there are problems where it is simply the wrong tool. Once you acknowledge that, you have two roads forward: - You create bizarre features to implement quicksort in a functional way. The concept becomes more complex, but some expert gurus will secure their jobs. - Keep your functional features as they are, but allow for other styles, which cope better with quicksort. Situation 2 is the practical one. There is no point in creating an awkward hammer that can also screw things in if I can have a hammer and a screwdriver. Obviously, this has a major drawback in the fact that you cannot tell everybody that your favorite style is the one true thing that everybody must use. That is a real bummer for religious zealots, but actual engineers understand that this is a feature, not a bug.

Monads!
Re: AA rehash threshold
On 11/20/14 5:30 PM, Jerry Quinn wrote: Steven Schveighoffer writes: On 11/18/14 9:46 PM, deadalnix wrote: After all, unless your hash table is very small, the first hit is most likely a cache miss, meaning ~300 cycles. at this point the cache line is in L1, with a 3 cycle access time. So accessing several element in the same cache line is often preferable. In the case of AAs, all bucket elements are pointers, so this point is moot for our purposes -- one may have to jump outside the cache to find the first element. A good improvement (this is likely to come with a full library type) is to instead store an inline element for each bucket entry instead of just a pointer to an element. I recall when writing dcollections, this added a significant speedup. This works nicely for small types, but has gotchas. For example, if you've got an AA of ints, what value indicates that this is a value folded into the bucket entry vs actually being a pointer? You'll need extra space to make sure it's safe. Alignment is another concern. Let's say you have bool for the entry/element flag. With ints you've doubled the size of the bucket array. Hm.. the bucket entry will simply be what a node is now. You are actually saving space, because you don't need to store that extra pointer to the first node. Where it gets dicey is when you rehash. Some of the nodes will be moved into the bucket space, some will move out of it. But I don't think it's a big deal, as a rehash doesn't happen very often. If you have large objects as the elements, you'll waste space. Probably the implementation needs to be switchable depending on the size of the object. Yes, this is also a concern. This almost necessitates a minimum load as well as a maximum. As an aside, I find it invaluable to have statistics about hash table buckets, ala C++11. When building custom hash functions and hash table implementations, it's almost necessary. I'd like to see bucket stats available in some manner for built-in AAs. 
If they're going away for a library type, we should have such stat info as part of the API. Absolutely all of this could be added to a library type. -Steve
Type Inference Bug?
shared const int i;

static if (is(typeof(i) T == shared U, U))
{
    // Prints "shared(const(int))"
    pragma(msg, U);
}

This seems like subtly wrong behaviour to me. If T == shared U, for some U, then shouldn't U be unshared? If T is shared(const(int)), and T is the same as the type U with the 'shared' qualifier applied to it, then U should be of type const(int), not shared(const(int)). I'm bringing this up partially because it seems wrong to me, and partially because we currently don't have a good way of "shaving" the outermost qualifier off a type, and this seemed the natural way to do it to me (I was surprised when it didn't work).
Re: Why is `scope` planned for deprecation?
On Thursday, 20 November 2014 at 21:55:16 UTC, deadalnix wrote: All of this is beautiful until you try to implement a quicksort in Haskell. It is not that functional programming is bad (I actually like it a lot), but there are problems where it is simply the wrong tool.

Sure, I am not arguing in favour of functional programming. But it is possible to define a tight core language (or VM) with the understanding that all other "high level" constructs have to be expressed within that core language in the compiler internals. Then you can do all the analysis on that small critical subset of constructs. With this approach you can create/modify all kinds of convenience features without affecting the core semantics that keep it sound and clean. Take for instance the concept of isolates, which I believe we both think can be useful. If the concept of an isolated-group-of-objects is taken to the abstract level and married to a simple core language (or VM) in a sound way, then the more complicated stuff can hopefully be built on top of it. So you get a bottom-up approach to the language that meets the end user. Rather than what happens now, where feature requests seem to be piled on top-down, they ought to be "digested" into something that can grow bottom-up. I believe this is what you try to do with your GC proposal. Obviously, this has a major drawback in the fact that you cannot tell everybody that your favorite style is the one true thing that everybody must use. That is a real bummer for religious zealots, but actual engineers understand that this is a feature, not a bug. Well, I think this holds:

1. Good language creation goes bottom-up.
2. Good language evaluation goes top-down.
3. Good language design is a circular process between 1 and 2.

In essence having a tight "engine" is important (the bottom), but you also need to understand the use context and how it will be used (the top).
In D the bottom-part is not so clear and could need a cleanup, but then the community would have to accept the effects of that propagate to the top. Without defining some use contexts for the language I think the debates get very long, because without "data" you cannot do "analysis" and then you end up with "feels right to me" and that is not engineering, it is art or what you are used to. And, there are more good engineers in the world than good artists… If one can define a single use scenario that is demanding enough to ensure that an evaluation against that scenario also will work for the other less demanding scenarios, then maybe some more rational discussions about the direction of D as language could be possible and you could leave out say the 10% that are less useful. When everybody argues out from their own line of work and habits… then they talk past each other.
Re: AA rehash threshold
Steven Schveighoffer writes: > On 11/18/14 9:46 PM, deadalnix wrote: >> >> After all, unless your hash table is very small, the first hit is >> most likely a cache miss, meaning ~300 cycles. at this point the >> cache line is in L1, with a 3 cycle access time. So accessing >> several element in the same cache line is often preferable. > > In the case of AAs, all bucket elements are pointers, so this point is moot > for our purposes -- one may have to jump outside the cache to find the first > element. A good improvement (this is likely to come with a full library type) > is to instead store an inline element for each bucket entry instead of just a > pointer to an element. I recall when writing dcollections, this added a > significant speedup. This works nicely for small types, but has gotchas. For example, if you've got an AA of ints, what value indicates that this is a value folded into the bucket entry vs actually being a pointer? You'll need extra space to make sure it's safe. Alignment is another concern. Let's say you have bool for the entry/element flag. With ints you've doubled the size of the bucket array. If you have large objects as the elements, you'll waste space. Probably the implementation needs to be switchable depending on the size of the object. As an aside, I find it invaluable to have statistics about hash table buckets, ala C++11. When building custom hash functions and hash table implementations, it's almost necessary. I'd like to see bucket stats available in some manner for built-in AAs. If they're going away for a library type, we should have such stat info as part of the API. Jerry
Re: Why is `scope` planned for deprecation?
On Thursday, 20 November 2014 at 21:26:18 UTC, Ola Fosheim Grøstad wrote: On Thursday, 20 November 2014 at 20:15:03 UTC, deadalnix wrote: Many languages make the mistake of thinking something is the holy grail, be it OOP, functional programming or linear types. I do think that it is a better engineering solution to provide decent support for all of these, and in doing so we don't need them to handle 100% of the cases, as we have other language constructs/paradigms that suit the difficult cases better anyway. FWIW, among language designers it is usually considered a desirable trait to have orthogonality between constructs and let them be combinable in expressive ways. This reduces the burden on the user, who then only has to truly understand the key concepts to build a clear mental image of the semantic model. Then you can figure out ways to add syntactical sugar if needed. Having a smaller set of basic constructs makes it easier to prove correctness, which in turn is important for optimization (which depends on the ability to prove equivalence over the pre/post semantics). It makes it easier to prove properties such as "@(un)safe". It also makes it easier to later extend the language. Just think about all the areas "fibers" in D affect. It affects garbage collection and memory handling. It affects the ability to do deep semantic analysis. It affects implementation of fast multi-threaded ADTs. One innocent feature can have a great impact. Providing a 70% solution like Go is fine, as they have defined a narrow domain for the language, servers, thus as a programmer you don't hit the 30% they left out. But D has not defined a narrow use domain, so as a designer you cannot make up a good rationale for which 15-30% to leave out. Design is always related to a specific use scenario. (I like the uncanny valley metaphor, had not thought about using it outside 3D. Cool association!)

All of this is beautiful until you try to implement a quicksort in Haskell. It is not that functional programming is bad (I actually like it a lot), but there are problems where it is simply the wrong tool. Once you acknowledge that, you have two roads forward:

- You create bizarre features to implement quicksort in a functional way. The concept becomes more complex, but some expert gurus will secure their jobs.
- Keep your functional features as they are, but allow for other styles, which cope better with quicksort.

Situation 2 is the practical one. There is no point in creating an awkward hammer that can also screw things in if I can have a hammer and a screwdriver. Obviously, this has a major drawback in the fact that you cannot tell everybody that your favorite style is the one true thing that everybody must use. That is a real bummer for religious zealots, but actual engineers understand that this is a feature, not a bug.
Re: Why is `scope` planned for deprecation?
On Thursday, 20 November 2014 at 20:15:03 UTC, deadalnix wrote: Many languages make the mistake of thinking something is the holy grail, be it OOP, functional programming or linear types. I do think that it is a better engineering solution to provide decent support for all of these, and in doing so we don't need them to handle 100% of the cases, as we have other language constructs/paradigms that suit the difficult cases better anyway.

FWIW, among language designers it is usually considered a desirable trait to have orthogonality between constructs and let them be combinable in expressive ways. This reduces the burden on the user, who then only has to truly understand the key concepts to build a clear mental image of the semantic model. Then you can figure out ways to add syntactical sugar if needed. Having a smaller set of basic constructs makes it easier to prove correctness, which in turn is important for optimization (which depends on the ability to prove equivalence over the pre/post semantics). It makes it easier to prove properties such as "@(un)safe". It also makes it easier to later extend the language. Just think about all the areas "fibers" in D affect. It affects garbage collection and memory handling. It affects the ability to do deep semantic analysis. It affects implementation of fast multi-threaded ADTs. One innocent feature can have a great impact. Providing a 70% solution like Go is fine, as they have defined a narrow domain for the language, servers, thus as a programmer you don't hit the 30% they left out. But D has not defined a narrow use domain, so as a designer you cannot make up a good rationale for which 15-30% to leave out. Design is always related to a specific use scenario. (I like the uncanny valley metaphor, had not thought about using it outside 3D. Cool association!)
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/20/2014 7:52 AM, H. S. Teoh via Digitalmars-d wrote: What *could* be improved, is the prevention of obvious mistakes in *mixing* signed and unsigned types. Right now, D allows code like the following with no warning: uint x; int y; auto z = x - y; BTW, this one is the same in essence as an actual bug that I fixed in druntime earlier this year, so downplaying it as a mistake people make 'cos they confound computer math with math math is fallacious. What about: uint x; auto z = x - 1; ?
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/20/2014 6:22 AM, Ary Borenszweig wrote: Nobody is saying to remove unsigned types from the language. They have their uses. It's just that using them for an array's length leads to subtle bugs. That's all. If that is changed to a signed type, then you'll have a same-only-different set of subtle bugs, plus you'll break the intuition about these things from everyone who has used C/C++ a lot.
Re: Ranges and Exception handling PR 2724
Hm, the thing is that there are ranges that will throw. Making them nothrow is of course a very good idea, but some will still throw, e.g. map!(a => throw ...). These handleXXX ranges deal with them.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On Thursday, 20 November 2014 at 15:55:21 UTC, H. S. Teoh via Digitalmars-d wrote: Using unsigned types for array length doesn't necessarily lead to subtle bugs, if the language was stricter about mixing signed and unsigned values. Yes, I think that this is the real issue.
Re: Why is `scope` planned for deprecation?
On Thursday, 20 November 2014 at 10:24:30 UTC, Max Samukha wrote: On Sunday, 16 November 2014 at 03:27:54 UTC, Walter Bright wrote: On 11/14/2014 4:32 PM, deadalnix wrote: To quote the guy from the PL for video games video series, an 85% solution often is preferable. Spoken like a true engineer! 85% often means being at the bottom of the uncanny valley. 65% or 95% are preferable. 85% is an image rather than an exact number. The point being, every construct is good at some things and bad at others. Making them capable of doing everything comes at a great complexity cost, so it is preferable to aim for a solution that copes well with most use cases, and provide alternative solutions for the horrible cases. Many languages make the mistake of thinking something is the holy grail, be it OOP, functional programming or linear types. I do think that it is a better engineering solution to provide decent support for all of these, and in doing so we don't need to make them handle 100% of the cases, as we have other language constructs/paradigms that suit the difficult cases better anyway.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On Thursday, 20 November 2014 at 08:18:24 UTC, Don wrote: ... It's particularly challenging in D because of the widespread use of 'auto': auto x = foo(); auto y = bar(); auto z = baz(); if (x - y > z) { ... } This might be a bug, if one of these functions returns an unsigned type. Good luck finding that. Note that if all functions return unsigned, there isn't even any signed-unsigned mismatch. ... I personally think this code is bad style. If the function requires a signed integer type, then `auto` with no qualifications at all is clearly too loose; if the programmer had specified what he needed to begin with, the error would have been caught at compile time. You can replace `auto` with an explicit signed integer type like `long`. If foo and bar are template parameters and you don't know the precise return type, then a static assert that x and y are signed will do the trick. If it is known that x > y and the function does not require a signed integer type, then an assert should be used. Frankly, that snippet just illustrates the sort of constraints that should be put on generic code.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/20/14 7:40 AM, Araq wrote: It's not only his "opinion", it's his *experience* Yes, there's some good anecdotal evidence too. IMHO not enough to trigger a change to other solutions that have their own issues. -- Andrei
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/20/14 7:29 AM, Sean Kelly wrote: On Thursday, 20 November 2014 at 00:08:08 UTC, Andrei Alexandrescu wrote: I think we're in good shape with unsigned. I'd actually prefer signed. Index-based algorithms can be tricky to write correctly with unsigned index values. The most difficult pattern that comes to mind is the "long arrow" operator seen in backward iteration: void fun(int[] a) { for (auto i = a.length; i --> 0; ) { // use i } } Andrei
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On Thursday, 20 November 2014 at 15:40:40 UTC, Araq wrote: Most of the statements I disagreed with were opinions. "unsigned" means "I want to use modulo 2^^n arithmetic". It does not mean, "this is an integer which cannot be negative". Opinion. Using modulo 2^^n arithmetic is *weird*. Opinion. If you are using uint/ulong to represent a non-negative integer, you are using the incorrect type. Opinion. I believe that bugs caused by unsigned calculations are subtle and require an extraordinary level of diligence. Opinion (correctly qualified as belief). It's not only his "opinion", it's his *experience* and if we want to play the "argument by authority" game: he most likely wrote more production quality code in D than you did. Urrmmm, really? Andrei has written a hell of a lot of production quality code. I use it every day, in production, as do many others.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On Thursday, 20 November 2014 at 15:40:40 UTC, Araq wrote: Most of the statements I disagreed with were opinions. "unsigned" means "I want to use modulo 2^^n arithmetic". It does not mean, "this is an integer which cannot be negative". Opinion. Using modulo 2^^n arithmetic is *weird*. Opinion. If you are using uint/ulong to represent a non-negative integer, you are using the incorrect type. Opinion. I believe that bugs caused by unsigned calculations are subtle and require an extraordinary level of diligence. Opinion (correctly qualified as belief). It's not only his "opinion", it's his *experience*, and if we want to play the "argument by authority" game: he most likely wrote more production quality code in D than you did. Here are some more "opinions": http://critical.eschertech.com/2010/04/07/danger-unsigned-types-used-here/ My experience is totally the opposite of his. I have been using unsigned for lengths, widths, and heights for the past 15 years in C, C++, C# and more recently in D with great success. I don't pretend to be any kind of authority though. The article you point to is totally flawed and kinda wasteful in terms of having to read it; the very first code snippet is obviously buggy. You can't purposefully write buggy code and then comment on the dangers of this or that! size_t i; for (i = size - 1; i >= 0; --i) { If that's subtle to you then yes, use signed!
Re: Ranges and Exception handling PR 2724
H. S. Teoh: Unfortunately, it looks like people are more interested in arguing about signed vs. unsigned instead of reviewing new Phobos features. *sigh* :-( Both kind of discussions are important. Regarding this Phobos feature, I suggested something less general and more efficient (no exceptions are involved): https://issues.dlang.org/show_bug.cgi?id=6840 See also: https://issues.dlang.org/show_bug.cgi?id=6843 Bye, bearophile
Re: Ranges and Exception handling PR 2724
On Saturday, 15 November 2014 at 01:43:07 UTC, Robert burner Schadek wrote: This PR https://github.com/D-Programming-Language/phobos/pull/2724 adds a generic way of handling Exceptions in Range processing. quickfur and Dicebot asked me to start a thread here so the concept could be discussed. It's a small thing, but it might be nice if it were possible to provide a handler that takes only the exception as an argument, or maybe even just a default value with no arguments at all. I like the general idea though.
Re: Ranges and Exception handling PR 2724
On Thu, Nov 20, 2014 at 11:57:41AM +, Robert burner Schadek via Digitalmars-d wrote: > On Wednesday, 19 November 2014 at 05:49:55 UTC, H. S. Teoh via Digitalmars-d > wrote: > >From what I understand, this PR is proposing to add a range wrapper > >that catches exceptions thrown from range primitives and passes them > >to a user-specified handler. Seems to be a promising idea, but it's > >probably > > It is exactly that: > > auto s = "12,1337z32,54,2,7,9,1z,6,8"; > > auto r = s.splitter(',') > .map!(a => to!int(a)) > .handleBack!(ConvException, (e, r) => 0) > .array; > > assert(equal(r, [12, 0, 54, 2, 7, 9, 0, 6, 8])); Unfortunately, it looks like people are more interested in arguing about signed vs. unsigned instead of reviewing new Phobos features. *sigh* :-( T -- PNP = Plug 'N' Pray
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On Thu, Nov 20, 2014 at 08:18:23AM +, Don via Digitalmars-d wrote: > On Wednesday, 19 November 2014 at 17:55:26 UTC, Andrei Alexandrescu wrote: > >On 11/19/14 6:04 AM, Don wrote: > >>Almost everybody seems to think that unsigned means positive. It > >>does not. > > > >That's an exaggeration. With only a bit of care one can use D's > >unsigned types for positive numbers. Please let's not reduce the > >matter to black and white. > > > >Andrei > > Even in the responses in this thread indicate that about half of the > people here don't understand unsigned. > > "unsigned" means "I want to use modulo 2^^n arithmetic". It does not > mean, "this is an integer which cannot be negative". > > Using modulo 2^^n arithmetic is *weird*. If you are using uint/ulong > to represent a non-negative integer, you are using the incorrect type. [...] By that logic, using an int to represent an integer is also using the incorrect type, because a signed type is *also* subject to modulo 2^^n arithmetic -- just a different form of it where the most negative value wraps around to the most positive values. Fixed-width integers in computing are NOT the same thing as unrestricted integers in mathematics. No matter how you try to rationalize it, as long as you use hardware fixed-width "integers", you're dealing with modulo arithmetic in one form or another. Pretending you're not, is the real source of said subtle bugs. T -- Why waste time learning, when ignorance is instantaneous? -- Hobbes, from Calvin & Hobbes
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On Thu, Nov 20, 2014 at 11:22:00AM -0300, Ary Borenszweig via Digitalmars-d wrote: > On 11/20/14, 5:02 AM, Walter Bright wrote: [...] > >As for me personally, I like having a complete set of signed and > >unsigned integral types at my disposal. It's like having a full set > >of wrenches that are open end on one end and boxed on the other :-) > >Most of the time either end will work, but sometimes only one will. > > > >Now, if D were a non-systems language like Basic, Go or Java, > >unsigned types could be reasonably dispensed with. But D is a systems > >programming language, and it ought to have available types that match > >what the hardware supports. > > > > Nobody is saying to remove unsigned types from the language. They have > their uses. It's just that using them for an array's length leads to > subtle bugs. That's all. Using unsigned types for array length doesn't necessarily lead to subtle bugs, if the language was stricter about mixing signed and unsigned values. T -- Recently, our IT department hired a bug-fix engineer. He used to work for Volkswagen.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On Thu, Nov 20, 2014 at 12:02:42AM -0800, Walter Bright via Digitalmars-d wrote: > On 11/19/2014 5:03 PM, H. S. Teoh via Digitalmars-d wrote: > >If this kind of unsafe mixing wasn't allowed, or required explicit > >casts (to signify "yes I know what I'm doing and I'm prepared to face > >the consequences"), I suspect that bearophile would be much happier > >about this issue. ;-) > > Explicit casts are worse than the problem - they can easily cause > bugs. Not any worse bugs than are currently *silently accepted* by the compiler! > As for me personally, I like having a complete set of signed and > unsigned integral types at my disposal. It's like having a full set of > wrenches that are open end on one end and boxed on the other :-) Most > of the time either end will work, but sometimes only one will. > > Now, if D were a non-systems language like Basic, Go or Java, unsigned > types could be reasonably dispensed with. But D is a systems > programming language, and it ought to have available types that match > what the hardware supports. Please note that I never suggested anywhere that we get rid of unsigned types. In fact, I think it was a right decision to include unsigned types in the language and to use an unsigned type for array length. What *could* be improved is the prevention of obvious mistakes in *mixing* signed and unsigned types. Right now, D allows code like the following with no warning: uint x; int y; auto z = x - y; BTW, this one is the same in essence as an actual bug that I fixed in druntime earlier this year, so downplaying it as a mistake people make 'cos they confound computer math with math math is fallacious. T -- He who laughs last thinks slowest.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
Most of the statements I disagreed with were opinions. "unsigned" means "I want to use modulo 2^^n arithmetic". It does not mean, "this is an integer which cannot be negative". Opinion. Using modulo 2^^n arithmetic is *weird*. Opinion. If you are using uint/ulong to represent a non-negative integer, you are using the incorrect type. Opinion. I believe that bugs caused by unsigned calculations are subtle and require an extraordinary level of diligence. Opinion (correctly qualified as belief). It's not only his "opinion", it's his *experience* and if we want to play the "argument by authority" game: he most likely wrote more production quality code in D than you did. Here are some more "opinions": http://critical.eschertech.com/2010/04/07/danger-unsigned-types-used-here/
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On Thursday, 20 November 2014 at 00:08:08 UTC, Andrei Alexandrescu wrote: I think we're in good shape with unsigned. I'd actually prefer signed. Index-based algorithms can be tricky to write correctly with unsigned index values. The reason size_t is unsigned in Druntime is because I felt that half the memory range on 32-bit was potentially too small a maximum size in a systems language, and it's unsigned on 64-bit for the sake of consistency.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/20/14 6:20 AM, Ary Borenszweig wrote: On 11/20/14, 6:47 AM, Andrei Alexandrescu wrote: On 11/20/14 12:18 AM, Don wrote: On Wednesday, 19 November 2014 at 17:55:26 UTC, Andrei Alexandrescu wrote: On 11/19/14 6:04 AM, Don wrote: Almost everybody seems to think that unsigned means positive. It does not. That's an exaggeration. With only a bit of care one can use D's unsigned types for positive numbers. Please let's not reduce the matter to black and white. Andrei Even the responses in this thread indicate that about half of the people here don't understand unsigned. "unsigned" means "I want to use modulo 2^^n arithmetic". It does not mean, "this is an integer which cannot be negative". Using modulo 2^^n arithmetic is *weird*. If you are using uint/ulong to represent a non-negative integer, you are using the incorrect type. "With only a bit of care one can use D's unsigned types for positive numbers." I do not believe that statement to be true. I believe that bugs caused by unsigned calculations are subtle and require an extraordinary level of diligence. I showed an example at DConf that I had found in production code. It's particularly challenging in D because of the widespread use of 'auto': auto x = foo(); auto y = bar(); auto z = baz(); if (x - y > z) { ... } This might be a bug, if one of these functions returns an unsigned type. Good luck finding that. Note that if all functions return unsigned, there isn't even any signed-unsigned mismatch. I believe the correct statement is "With only a bit of care one can use D's unsigned types for positive numbers and believe that one's code is correct, even though it contains subtle bugs." Well I'm sorry but I quite disagree. -- Andrei I don't think disagreeing without a reason (like the one Don gave above) is good. Most of the statements I disagreed with were opinions. "unsigned" means "I want to use modulo 2^^n arithmetic". It does not mean, "this is an integer which cannot be negative". Opinion. Using modulo 2^^n arithmetic is *weird*. Opinion. If you are using uint/ulong to represent a non-negative integer, you are using the incorrect type. Opinion. I believe that bugs caused by unsigned calculations are subtle and require an extraordinary level of diligence. Opinion (correctly qualified as belief). Andrei
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/20/14, 5:02 AM, Walter Bright wrote: On 11/19/2014 5:03 PM, H. S. Teoh via Digitalmars-d wrote: If this kind of unsafe mixing wasn't allowed, or required explicit casts (to signify "yes I know what I'm doing and I'm prepared to face the consequences"), I suspect that bearophile would be much happier about this issue. ;-) Explicit casts are worse than the problem - they can easily cause bugs. As for me personally, I like having a complete set of signed and unsigned integral types at my disposal. It's like having a full set of wrenches that are open end on one end and boxed on the other :-) Most of the time either end will work, but sometimes only one will. Now, if D were a non-systems language like Basic, Go or Java, unsigned types could be reasonably dispensed with. But D is a systems programming language, and it ought to have available types that match what the hardware supports. Nobody is saying to remove unsigned types from the language. They have their uses. It's just that using them for an array's length leads to subtle bugs. That's all.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/20/14, 6:47 AM, Andrei Alexandrescu wrote: On 11/20/14 12:18 AM, Don wrote: On Wednesday, 19 November 2014 at 17:55:26 UTC, Andrei Alexandrescu wrote: On 11/19/14 6:04 AM, Don wrote: Almost everybody seems to think that unsigned means positive. It does not. That's an exaggeration. With only a bit of care one can use D's unsigned types for positive numbers. Please let's not reduce the matter to black and white. Andrei Even the responses in this thread indicate that about half of the people here don't understand unsigned. "unsigned" means "I want to use modulo 2^^n arithmetic". It does not mean, "this is an integer which cannot be negative". Using modulo 2^^n arithmetic is *weird*. If you are using uint/ulong to represent a non-negative integer, you are using the incorrect type. "With only a bit of care one can use D's unsigned types for positive numbers." I do not believe that statement to be true. I believe that bugs caused by unsigned calculations are subtle and require an extraordinary level of diligence. I showed an example at DConf that I had found in production code. It's particularly challenging in D because of the widespread use of 'auto': auto x = foo(); auto y = bar(); auto z = baz(); if (x - y > z) { ... } This might be a bug, if one of these functions returns an unsigned type. Good luck finding that. Note that if all functions return unsigned, there isn't even any signed-unsigned mismatch. I believe the correct statement is "With only a bit of care one can use D's unsigned types for positive numbers and believe that one's code is correct, even though it contains subtle bugs." Well I'm sorry but I quite disagree. -- Andrei I don't think disagreeing without a reason (like the one Don gave above) is good. You could show us the benefits of unsigned types over signed types (possibly considering that not every program in the world needs an array with 2^64 elements).
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On Thursday, 20 November 2014 at 13:26:23 UTC, FrankLike wrote: auto x = foo(); auto y = bar(); auto z = baz(); if (x - y > z) { ... } This might be a bug, if one of these functions returns an unsigned type. Good luck finding that. Note that if all functions return unsigned, there isn't even any signed-unsigned mismatch. I believe the correct statement is "With only a bit of care one can use D's unsigned types for positive numbers and believe that one's code is correct, even though it contains subtle bugs." Well I'm sorry but I quite disagree. -- Andrei This might be a bug. 'Length' always needs to compare sizes; 'Width' and 'Height' are like it. dfl/drawing.d, lines 185-218:

///
Size opAdd(Size sz)
{
    Size result;
    result.width = width + sz.width;
    result.height = height + sz.height;
    return result;
}

///
Size opSub(Size sz)
{
    Size result;
    result.width = width - sz.width;
    result.height = height - sz.height;
    return result;
}

///
void opAddAssign(Size sz)
{
    width += sz.width;
    height += sz.height;
}

///
void opSubAssign(Size sz)
{
    width -= sz.width;
    height -= sz.height;
}

If the type of width and height is size_t, then their values will be wrong. Small test:

import std.stdio;

void main()
{
    size_t width = 10;
    size_t height = 20;
    writeln("before width is ", width, " ,height is ", height);
    height -= 1;
    width -= height;
    writeln("after width is ", width, " ,height is ", height);
}

"after width is " ERROR. I get "after width is 18446744073709551607 ,height is 19", which looks mathematically correct to me.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
auto x = foo(); auto y = bar(); auto z = baz(); if (x - y > z) { ... } This might be a bug, if one of these functions returns an unsigned type. Good luck finding that. Note that if all functions return unsigned, there isn't even any signed-unsigned mismatch. I believe the correct statement is "With only a bit of care one can use D's unsigned types for positive numbers and believe that one's code is correct, even though it contains subtle bugs." Well I'm sorry but I quite disagree. -- Andrei This might be a bug. 'Length' always needs to compare sizes; 'Width' and 'Height' are like it. dfl/drawing.d, lines 185-218:

///
Size opAdd(Size sz)
{
    Size result;
    result.width = width + sz.width;
    result.height = height + sz.height;
    return result;
}

///
Size opSub(Size sz)
{
    Size result;
    result.width = width - sz.width;
    result.height = height - sz.height;
    return result;
}

///
void opAddAssign(Size sz)
{
    width += sz.width;
    height += sz.height;
}

///
void opSubAssign(Size sz)
{
    width -= sz.width;
    height -= sz.height;
}

If the type of width and height is size_t, then their values will be wrong. Small test:

import std.stdio;

void main()
{
    size_t width = 10;
    size_t height = 20;
    writeln("before width is ", width, " ,height is ", height);
    height -= 1;
    width -= height;
    writeln("after width is ", width, " ,height is ", height);
}

"after width is " ERROR.
Re: Ranges and Exception handling PR 2724
On Wednesday, 19 November 2014 at 05:49:55 UTC, H. S. Teoh via Digitalmars-d wrote: From what I understand, this PR is proposing to add a range wrapper that catches exceptions thrown from range primitives and passes them to a user-specified handler. Seems to be a promising idea, but it's probably It is exactly that:

auto s = "12,1337z32,54,2,7,9,1z,6,8";

auto r = s.splitter(',')
    .map!(a => to!int(a))
    .handleBack!(ConvException, (e, r) => 0)
    .array;

assert(equal(r, [12, 0, 54, 2, 7, 9, 0, 6, 8]));
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On Thursday, 20 November 2014 at 01:05:51 UTC, H. S. Teoh via Digitalmars-d wrote: However, the fact that you can freely mix signed and unsigned types in unsafe ways without any warning, is a fly that spoils the soup. If this kind of unsafe mixing wasn't allowed, or required explicit casts (to signify "yes I know what I'm doing and I'm prepared to face the consequences"), I suspect that bearophile would be much happier about this issue. ;-) If usage of unsigned types is not controlled, they will systematically mix with signed types, and the mix becomes the normal flow of the code. Disallowing the normal flow of the code is even worse.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On Thursday, 20 November 2014 at 08:14:41 UTC, Walter Bright wrote: Computer math is not math math. It is its own beast, and if you're going to write in a systems programming language it is very important to learn how it works, or you'll be nothing but frustrated. Understanding how it works doesn't mean error prone practices must be forced everywhere. It's not like D can't work with signed types. Rust made the same mistake and now a couple of times I've seen bugs like these being reported. Never seen them in Java or .Net though. I wonder why... D is meant to be easily used by C and C++ programmers. It follows the same model of signed/unsigned arithmetic and integral promotions. This is very, very deliberate. To change this would be a disaster. If unsigned types exist, it doesn't mean they must be forced everywhere. For example, in America we drive on the right. In Australia, they drive on the left. When I visit Australia, I know this, but when stepping out into the road I instinctively check my left for cars, step into the road, and my foot gets run over by a car coming from the right. I've had to be very careful as a pedestrian there, as my intuition would get me killed. Don't mess with systems programmers' intuitions. It'll cause more problems than it solves. Bad things can happen, but why make them more probable instead of trying to make them less probable?
Re: Why is `scope` planned for deprecation?
On Sunday, 16 November 2014 at 03:27:54 UTC, Walter Bright wrote: On 11/14/2014 4:32 PM, deadalnix wrote: To quote the guy from the PL for video games video series, an 85% solution often is preferable. Spoken like a true engineer! 85% often means being at the bottom of the uncanny valley. 65% or 95% are preferable.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/20/14 12:18 AM, Don wrote: On Wednesday, 19 November 2014 at 17:55:26 UTC, Andrei Alexandrescu wrote: On 11/19/14 6:04 AM, Don wrote: Almost everybody seems to think that unsigned means positive. It does not. That's an exaggeration. With only a bit of care one can use D's unsigned types for positive numbers. Please let's not reduce the matter to black and white. Andrei Even the responses in this thread indicate that about half of the people here don't understand unsigned. "unsigned" means "I want to use modulo 2^^n arithmetic". It does not mean, "this is an integer which cannot be negative". Using modulo 2^^n arithmetic is *weird*. If you are using uint/ulong to represent a non-negative integer, you are using the incorrect type. "With only a bit of care one can use D's unsigned types for positive numbers." I do not believe that statement to be true. I believe that bugs caused by unsigned calculations are subtle and require an extraordinary level of diligence. I showed an example at DConf that I had found in production code. It's particularly challenging in D because of the widespread use of 'auto': auto x = foo(); auto y = bar(); auto z = baz(); if (x - y > z) { ... } This might be a bug, if one of these functions returns an unsigned type. Good luck finding that. Note that if all functions return unsigned, there isn't even any signed-unsigned mismatch. I believe the correct statement is "With only a bit of care one can use D's unsigned types for positive numbers and believe that one's code is correct, even though it contains subtle bugs." Well I'm sorry but I quite disagree. -- Andrei
Re: Why is `scope` planned for deprecation?
On Wednesday, 19 November 2014 at 01:35:19 UTC, Alo Miehsof Datsørg wrote: On Tuesday, 18 November 2014 at 23:48:27 UTC, Walter Bright wrote: On 11/18/2014 1:23 PM, "Ola Fosheim Grøstad" " wrote: I am arguing against the position that it was a design mistake to keep the semantic model simple and with few presumptions. On the contrary, it was the design goal. Another goal for a language like C is ease of implementation so that you can easily port it to new hardware. The proposals I made do not change that in any way, and if K&R had designed C without those mistakes, it would not have made C more complex in the slightest. VLAs have been available in gcc for a long time. They are not useless, I've used them from time to time. I know you're simply being argumentative when you defend VLAs, a complex and useless feature, and denigrate simple ptr/length pairs as complicated. Argumentative ?!! More like a fucking gaping fucking asshole. His posts are the blight of this group. Wow, that's uncalled for. I don't always agree with Ola but his posts are rarely uninformed and often backed up with actual code examples or links supporting his arguments. They generally lead to very interesting discussions on the forum. Cheers, uri
Re: Why is `scope` planned for deprecation?
On 11/18/2014 5:35 PM, "Alo Miehsof Datsørg" " wrote: Argumentative ?!! More like a fucking gaping fucking asshole. His posts are the blight of this group. Rude posts are not welcome here.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On Wednesday, 19 November 2014 at 17:55:26 UTC, Andrei Alexandrescu wrote: On 11/19/14 6:04 AM, Don wrote: Almost everybody seems to think that unsigned means positive. It does not. That's an exaggeration. With only a bit of care one can use D's unsigned types for positive numbers. Please let's not reduce the matter to black and white. Andrei Even the responses in this thread indicate that about half of the people here don't understand unsigned. "unsigned" means "I want to use modulo 2^^n arithmetic". It does not mean, "this is an integer which cannot be negative". Using modulo 2^^n arithmetic is *weird*. If you are using uint/ulong to represent a non-negative integer, you are using the incorrect type. "With only a bit of care one can use D's unsigned types for positive numbers." I do not believe that statement to be true. I believe that bugs caused by unsigned calculations are subtle and require an extraordinary level of diligence. I showed an example at DConf that I had found in production code. It's particularly challenging in D because of the widespread use of 'auto': auto x = foo(); auto y = bar(); auto z = baz(); if (x - y > z) { ... } This might be a bug, if one of these functions returns an unsigned type. Good luck finding that. Note that if all functions return unsigned, there isn't even any signed-unsigned mismatch. I believe the correct statement is "With only a bit of care one can use D's unsigned types for positive numbers and believe that one's code is correct, even though it contains subtle bugs."
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/19/2014 10:09 AM, Ary Borenszweig wrote: I agree. An array's length makes sense as an unsigned ("an array can't have a negative length, right?") but it leads to the bugs you say. For example:

~~~
import std.stdio;

void main()
{
    auto a = [1, 2, 3];
    auto b = [1, 2, 3, 4];
    if (a.length - b.length > 0)
    {
        writeln("Can you spot the bug that easily?"); // Yes.
    }
}
~~~

Yes, it makes sense, but at the same time it leads to super unintuitive math operations being involved. Computer math is not math math. It is its own beast, and if you're going to write in a systems programming language it is very important to learn how it works, or you'll be nothing but frustrated. Rust made the same mistake and now a couple of times I've seen bugs like these being reported. Never seen them in Java or .Net though. I wonder why... D is meant to be easily used by C and C++ programmers. It follows the same model of signed/unsigned arithmetic and integral promotions. This is very, very deliberate. To change this would be a disaster. For example, in America we drive on the right. In Australia, they drive on the left. When I visit Australia, I know this, but when stepping out into the road I instinctively check my left for cars, step into the road, and my foot gets run over by a car coming from the right. I've had to be very careful as a pedestrian there, as my intuition would get me killed. Don't mess with systems programmers' intuitions. It'll cause more problems than it solves.
Re: 'int' is enough for 'length' to migrate code from x86 to x64
On 11/19/2014 5:03 PM, H. S. Teoh via Digitalmars-d wrote: If this kind of unsafe mixing wasn't allowed, or required explicit casts (to signify "yes I know what I'm doing and I'm prepared to face the consequences"), I suspect that bearophile would be much happier about this issue. ;-) Explicit casts are worse than the problem - they can easily cause bugs. As for me personally, I like having a complete set of signed and unsigned integral types at my disposal. It's like having a full set of wrenches that are open end on one end and boxed on the other :-) Most of the time either end will work, but sometimes only one will. Now, if D were a non-systems language like Basic, Go or Java, unsigned types could be reasonably dispensed with. But D is a systems programming language, and it ought to have available types that match what the hardware supports.