Re: layout wiki4d - part 2
On 06.06.2010 19:14, Matthias Pleh wrote: Ok, as already mentioned in the old thread, I have set up a test page with the suggested (and some other minor) changes. So we should decide which layout we should use for the wiki: a) the old one b) the current one http://www.prowiki.org/wiki4d/wiki.cgi c) the test page http://www.prowiki.org/wiki4d/wiki.cgi?MatthiasPleh/TestPage d) some other layout anybody wants to create. NOTE: the changes I have made are public and can be changed by every wiki user without registration! So everybody is welcome to improve not only the content but also the layout of the wiki!! greets Matthias I would go for layout b but with the font definitions from c. Cheers Piotrek
Re: new layout on wiki4d
Adam Ruppe wrote: On 6/5/10, Stewart Gordon smjg_1...@yahoo.com wrote: snip Others claim that some layouts just can't be made fluid. The most annoying thing is the web is fluid by default - you have to fight it to make it fixed width! But, meh, people are stupid. Taken the words out of my mouth there. I once came across this: http://www.wiltshirefarmfoods.com/accessibility.asp "Wiltshire Farm Foods has worked hard to make this site as accessible to as many customers as possible, whether you have a disability or are simply not using the latest technology." I guess this shows how content writers and coders are often not on the same wavelength. I have a feeling I've come across the delusion that websites have to be explicitly programmed to make browsers' built-in text size settings work. But I'm not sure. snip You know what annoys me? alt="image". Ugh. Or another bad one: alt="logo". Gah, these people have obviously never browsed the web without images! Perhaps worst of all, alt="left_rounded_corner". Ew! Sometimes you even see images explicitly mislabelled as being purely decorative (alt=""). You may have noticed from the page I linked to before that this is one of the many things WebPlus likes to do of its own accord. snip Anyway, the CSS transforms that regular link into a button, with a gradient background, rounded corners, an icon, fluid width and height (height isn't ideal if it changes, though, since the gradient is fixed height). snip One possibility is to make the background image tall enough to allow for this: either do the gradient over the default height and pad it with solid colour, or use a sigmoid gradient. But it would be better if only CSS provided a means to scale the background image. Stewart.
Re: new layout on wiki4d
Stewart Gordon smjg_1...@yahoo.com wrote in message news:huj8co$od...@digitalmars.com... Taken the words out of my mouth there. I once came across this: http://www.wiltshirefarmfoods.com/accessibility.asp "Wiltshire Farm Foods has worked hard to make this site as accessible to as many customers as possible, whether you have a disability or are simply not using the latest technology." I absolutely love this part: "or are simply not using the latest technology." One of my [many] huge pet peeves about the web is how there's so many sites out there that feel it's their duty to try to push/shame/scare people into using the alleged "latest and greatest".
Re: layout wiki4d - part 2
Piotrek star...@tlen.pl wrote in message news:huj4su$hv...@digitalmars.com... On 06.06.2010 19:14, Matthias Pleh wrote: Ok, as already mentioned in the old thread, I have set up a test page with the suggested (and some other minor) changes. So we should decide which layout we should use for the wiki: a) the old one b) the current one http://www.prowiki.org/wiki4d/wiki.cgi c) the test page http://www.prowiki.org/wiki4d/wiki.cgi?MatthiasPleh/TestPage d) some other layout anybody wants to create. NOTE: the changes I have made are public and can be changed by every wiki user without registration! So everybody is welcome to improve not only the content but also the layout of the wiki!! greets Matthias I would go for layout b but with the font definitions from c. That's a very good idea. I second this. --- Not sent from an iPhone.
Re: Go Programming talk [OT]
Leandro Lucarella Wrote: It looks like Go now has scope(exit) =) http://golang.org/doc/go_spec.html#DeferStmt And in order to execute a block of statements you must make the compiler happy:

// f returns 1
func f() (result int) {
    defer func() {
        result++
    }()
    return 0
}
Re: Go Programming talk [OT]
On 06/06/2010 05:13 PM, bearophile wrote: A recent talk about Go, Google I/O 2010 - Go Programming; the real talk stops at about 33 minutes: http://www.youtube.com/user/GoogleDevelopers#p/u/9/jgVhBThJdXc At 9:30 you can see the switch used on a type :-) You can see a similar example here: http://golang.org/src/pkg/exp/datafmt/datafmt.go Look for the line switch t := fexpr.(type) { Originally Go looked almost like a toy language - I thought Google was thinking of it as a toy - but I now think Google is getting more serious about it, and I can see Go has developed some serious features to solve/do the basic things. So maybe Andrei was wrong: you can design a good flexible language that doesn't need templates. Which part of the talk conveyed to you that information? Compared to Go, D2 is way more complex. I don't know if people today want to learn a language as complex as D2. Go's target flexibility and performance are not C++-class (but probably it's not too difficult to build a compiler able to produce very efficient Go programs). In the talk they show some interfaces and more things done with free functions; I don't know how those things get compiled to assembly. Bye, bearophile I'm surprised you found the talk compelling, as I'm sure you know better. The talk uses a common technique - cherry-picking examples and avoiding discussion of costs and tradeoffs - to make the language look good. In addition, the talk puts Go in relation to the likes of Java, C++, and Python but ignores the fact that Go's choices have been made by other languages as well, along with the inherent advantages and disadvantages. The reality is that in programming language design, decisions that are all-around wins are few and far between; it's mostly tradeoffs and compromises. Structural conformance and implicit, run-time checked interfaces are well known and have advantages but also disadvantages. Flat, two-level hierarchies with interfaces and implementations are also well known, along with their advantages and disadvantages. I think an honest discussion - as I hope is the tone in TDPL - serves the language and its users better than giving half of the story. Andrei
Re: Containers I'd like to see in std.containers
On Sun, 06 Jun 2010 14:48:27 -0400, Johan Granberg lijat.me...@ovegmail.com wrote: I also think a set would be highly useful, and when defining it please don't let the set operations (union, intersection, maybe complement) be defined. I recently was writing some C++ code and got a nasty performance hit from not finding a fast set union operation. I don't understand this. What better place to put set operations such as intersection and union besides the implementation of the set itself? For example, dcollections includes these operations (union (add) and complement (remove) are essential anyways, intersection is the odd duck) as the quickest possible implementation O(n * lg(n)). -Steve
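For comparison, Python's built-in set type bakes these operations into the container itself, which is roughly the design being argued for here; a quick sketch using only standard Python (the values are arbitrary):

```python
a = {1, 2, 3, 4}
b = {3, 4, 5}

print(a | b)   # union -> {1, 2, 3, 4, 5}
print(a & b)   # intersection -> {3, 4}
print(a - b)   # difference -> {1, 2}

# In-place variants roughly match the add/remove framing above:
a |= b         # union in place; a is now {1, 2, 3, 4, 5}
```

Having the operations on the type means they can exploit the set's internal representation instead of going element by element from outside.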
Re: Go Programming talk [OT]
Andrei Alexandrescu Wrote: I think an honest discussion - as I hope is the tone in TDPL - serves the language and its users better than giving half of the story. An honest advertisement is an unusual thing; I have seen none. You think TDPL is the first one? There are many features in other languages presented with no talk of costs, and every forgotten cost is a lie.
Re: Binary data-structure serialization
On 2010-06-06 21:32, Eric Poggel wrote: On 6/1/2010 5:31 PM, Robert M. Münch wrote: On 2010-06-01 02:13:18 +0200, Eric Poggel said: After having difficulty getting ddbg to work, I decided to write a dump function so I could easily see my data structures at runtime. The biggest part of it is a json encoder which can handle most D structures. If I recall, it only has trouble with enum's and pointers (since there's no length info). It's available in Yage's source here (http://dsource.org/projects/yage/browser/trunk/src/yage/core/json.d) Yage is licensed LGPL but I grant permission to use this under the Boost 1.0 license if you or anyone else needs it. Thanks for this. I will take a look. However, for now it's only one-way and there's no deserialization part to it. Well, this part is left for the reader as an exercise ;-). It seems it also doesn't serialize members of parent classes, which can be a major caveat. I took a look at it yesterday but unfortunately wasn't able to figure it out. You can have a look at my serialization library: http://dsource.org/projects/orange/ it could use some testing. 
* It handles both serializing and deserializing
* It automatically serializes the base classes
* It supports events (before and after (de)serializing)
* It supports non-serialized fields (you can say that some fields in a class should not be serialized)
* It's licensed under the Boost license
* It's fairly std/runtime library independent (hopefully only the XMLArchive is dependent on the runtime library; I've only tested it with Tango)
* It currently only supports XML as the archive type, but the library is built so you can create new archive types and use them with the existing serializer
* If you want to serialize objects through base class references you need to register a serialize function; everything else should be handled automatically

You can have a look at an example of usage at the project site. -- /Jacob Carlborg
Re: Go Programming talk [OT]
On 06/07/2010 06:36 AM, Kagamin wrote: Andrei Alexandrescu Wrote: I think an honest discussion - as I hope is the tone in TDPL - serves the language and its users better than giving half of the story. An honest advertisement is an unusual thing. I saw none. You think, TDPL is the first one. There're many features in other languages no speak of costs, and every forgotten cost is a lie. Agreed. Well, we'll see. I try to discuss costs, tradeoffs, and alternative approaches that were considered and rejected (and why) for all major features of D. For example, few books give a balanced view of dynamic polymorphism; in some you'll find the nice parts, in others you'll find the not-so-nice parts. I tried to discuss both in TDPL: \dee's reference semantics approach to handling class objects is similar to that found in many object-oriented languages. Using reference semantics and \idx{garbage collection} for class objects has both positive and negative consequences, among which the following: \begin{itemize*} \item[+]\index{polymorphism}\emph{\textbf{Polymorphism.}} The level of indirection brought by the consistent use of references enables support for \idx{polymorphism}. All references have the same size, but related objects can have different sizes even though they have ostensibly the same type (through the use of inheritance, which we'll discuss shortly). Because references have the same size regardless of the size of the object they refer to, you can always substitute references to derived objects for references to base objects. Also, arrays of objects work properly even when the actual objects in the array have different sizes. If you've used C++, you sure know about the necessity of using pointers with \idx{polymorphism}, and about the various lethal problems you encounter when you forget to. \item[+]\emph{\textbf{Safety.}}Many of us see \idx{garbage collection} as just a convenience that simplifies coding by relieving the programmer of managing memory. 
Perhaps surprisingly, however, there is a very strong connection between the infinite lifetime model (which \idx{garbage collection} makes practical) and memory safety. Where there's infinite lifetime, there are no dangling references, that is, references to some object that has gone out of existence and has had its memory reused for an unrelated object. Note that it would be just as safe to use value semantics throughout (have \cc{auto a2 = a1;} duplicate the @A@ object that @a1@ refers to and have @a2@ refer to the copy). That setup, however, is hardly interesting because it disallows creation of any referential structure (such as lists, trees, graphs, and more generally shared resources). \item[--]\emph{\textbf{Allocation cost.}} Generally, class objects must reside in the \index{garbage collection}garbage-collected heap, which generally is slower and eats more memory than memory on the stack. The margin has diminished quite a bit lately but is still nonzero. \item[--]\emph{\textbf{Long-range coupling.}} The main risk with using references is undue aliasing. Using reference semantics throughout makes it all too easy to end up with references to the same object residing in different---and unexpected---places. In~Figure~\vref{fig:aliasing}, @a1@ and @a2@ may be arbitrarily far from each other as far as the application logic is concerned, and additionally there may be many other references hanging off the same object. Interestingly, if the referred object is immutable, the problem vanishes---as long as nobody modifies the object, there is no coupling. Difficulties arise when one change effected in a certain context affects surprisingly and dramatically the state as seen in a different part of the application. Another way to alleviate this problem is explicit duplication, often by calling a special method @clone@, whenever passing objects around. 
The downside of that technique is that it is based on discipline and that it could lead to inefficiency if several parts of an application decide to conservatively clone objects ``just to be sure.'' \end{itemize*} Contrast reference semantics with value semantics \`a~la @i...@. Value semantics has advantages, notably equational reasoning: you can always substitute equals for equals in expressions without altering the result. (In contrast, references that use method calls to modify underlying objects do not allow such reasoning.) Speed is also an important advantage of value semantics, but if you want the dynamic generosity of \idx{polymorphism}, reference semantics is a must. Some languages tried to accommodate both, which earned them the moniker
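The long-range coupling described in the excerpt is easy to demonstrate in any reference-semantics language; a minimal Python sketch (the Account class is purely illustrative):

```python
import copy

class Account:
    def __init__(self, balance):
        self.balance = balance

a1 = Account(100)
a2 = a1                 # reference copy: both names alias one object
a2.balance -= 30        # a change made through a2 ...
print(a1.balance)       # ... is visible through a1: prints 70

a3 = copy.deepcopy(a1)  # explicit duplication ("clone") breaks the coupling
a3.balance = 0
print(a1.balance)       # still 70
```

The clone defense works, but as the excerpt notes it rests on discipline: nothing stops a forgotten plain assignment from reintroducing the alias.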
Re: Go Programming talk [OT]
Andrei Alexandrescu Wrote: You get to choose at design time whether you use~OOP for a particular type, in which case you use \kidx{class}; otherwise, you go with @struct@ and forgo the particular~OOP amenities that go hand in hand with reference semantics. Good, but this is about the user's decision. I meant decisions that were made by the language designer, so if you want a feature, you're forced to choose between languages. Well, I'm not sure whether such a book can be about just D.
Re: Go Programming talk [OT]
On 06/07/2010 09:02 AM, Kagamin wrote: Andrei Alexandrescu Wrote: You get to choose at design time whether you use~OOP for a particular type, in which case you use \kidx{class}; otherwise, you go with @struct@ and forgo the particular~OOP amenities that go hand in hand with reference semantics. Good, but this is about user's decision. I meant decisions that were made by the language designer, so if you want a feature, you're forced to choose between languages. Well, I'm not sure whether such book can be about just D. The book includes honest discussions of language design decisions, including merits of approaches D decided to diverge from. Example: Now what happens when the compiler sees the improved definition of @f...@? The compiler faces a tougher challenge compared to the @int[]@ case because now\sbs @T@ is not known yet---it could be just about any type. And different types are stored differently, passed around differently, and sport different definitions o...@==@. Dealing with this challenge is important because type parameters really open up possibilities and multiply reusability of code. When it comes to generating code for type parameterization, two schools of thought are prevalent today~\cite{pizza}: \begin{itemize*} \item\emph{Homogeneous translation:} Bring all data to a common format, which allows compiling only one version of @find@ that will work for everybody. \item\emph{Heterogeneous translation:} Invoking @find@ with various type arguments (e.g., @int@ versus @double@ versus @string@) prompts the compiler to generate as many specialized versions of @f...@. \end{itemize*} In homogeneous translation, the language must offer a uniform access interface to data as a prerequisite to presenting it to @f...@. Heterogeneous translation is pretty much as if you had an assistant writing one special @find@ for each data format you may come up with, all built from the same mold. 
Clearly the two approaches have relative advantages and disadvantages, which are often the subject of passionate debates in various languages' communities. Homogeneous translation favors uniformity, simplicity, and compact generated code. For example, traditional functional languages favor putting everything in list format, and many traditional object-oriented languages favor making everything an object offering a uniform access to its features. However, the disadvantages of homogeneous translation may include rigidity, lack of expressive power, and inefficiency. In contrast, heterogeneous translation favors specialization, expressive power, and speed of generated code. The costs may include bloating of generated code, increases in language complexity, and an awkward compilation model (a frequently aired argument against heterogeneous approaches is that they're glorified macros [gasp]; and ever since~C gave such a bad reputation to macros, the label evokes quite a powerful negative connotation). A detail worth noting is an inclusion relationship: heterogeneous translation includes homogeneous translation for the simple reason that ``many formats'' includes ``one format,'' and ``many implementations'' includes ``one implementation.'' Therefore it can be argued (all other issues left aside) that heterogeneous translation is more powerful than homogeneous translation. If you have heterogeneous translation means at your disposal, at least in principle there's nothing stopping you from choosing one unified data format and one unified function when you so wish. The converse option is simply not available under a homogeneous approach. However, it would be oversimplifying to conclude that heterogeneous approaches are ``better'' because aside from expressive power there are, again, other arguments that need to be taken into consideration. 
\dee~uses heterogeneous translation with (warning, incoming technical terms flak) statically scoped symbol lookup and deferred typechecking. This means that when the~\dee compiler sees the generic @find@ definition, it parses and saves the body, remembers where the function was defined, and does nothing else until @find@ gets called. At that point, the compiler fetches the parsed definition of @find@ and attempts to compile it with the type that the caller chose in lieu of\sbs @t...@. When the function uses symbols, they are looked up in the context in which the function was defined. Andrei
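The two translation schemes can be caricatured in a few lines of Python (all names here are illustrative, not from TDPL): the homogeneous version compiles one find that relies on a uniform == protocol, while the heterogeneous version stamps out one specialization per element type, the way a template instantiation would.

```python
# Homogeneous translation: a single body, relying on a uniform protocol (==).
def find(haystack, needle):
    for i, x in enumerate(haystack):
        if x == needle:
            return i
    return -1

# Heterogeneous translation (caricature): one generated body per element type,
# cached the way a compiler caches template instantiations.
_instances = {}

def find_for(tp):
    if tp not in _instances:
        ns = {}
        exec(
            f"def find_{tp.__name__}(haystack, needle):\n"
            "    for i, x in enumerate(haystack):\n"
            "        if x == needle:\n"
            "            return i\n"
            "    return -1\n",
            ns,
        )
        _instances[tp] = ns[f"find_{tp.__name__}"]
    return _instances[tp]

print(find([1, 2, 3], 2))           # prints 1
print(find_for(int)([1, 2, 3], 3))  # prints 2
```

The inclusion relationship from the excerpt is visible here: the heterogeneous machinery could trivially hand every type the same single body, but the homogeneous version has no way to specialize.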
Re: Go Programming talk [OT]
Adam Ruppe, el 6 de junio a las 21:24 me escribiste: On 6/6/10, Leandro Lucarella llu...@gmail.com wrote: It looks like Go now have scope (exit) =) Not quite the same (defer is apparently only on function level), but definitely good to have. The scope statements are awesome beyond belief. Yes, they are not implemented exactly the same, but the concept is very similar. And I agree that scope is really a life saver, it makes life much easier and code much more readable. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ -- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) -- For a minute there I lost myself, I lost myself. Phew, for a minute there, I lost myself, I lost myself.
Re: Go Programming talk [OT]
On 6/7/10, Leandro Lucarella llu...@gmail.com wrote: Yes, they are not implemented exactly the same, but the concept is very similar. And I agree that scope is really a life saver, it makes life much easier and code much more readable. There is one important difference though: Go doesn't seem to have scope(failure) vs scope(success). I guess it doesn't have exceptions, so it is moot, but it looks to me like suckage. Take some recent code I wrote:

void bid(MySql db, Money amount) {
    db.query("START TRANSACTION");
    scope(success) db.query("COMMIT");
    scope(failure) db.query("ROLLBACK");
    // moderately complex logic of verifying and storing the bid, written as
    // a simple linear block of code, with the faith that the scope guards
    // and exceptions keep everything sane
}

Just beautiful; that scales in complexity and leaves no error unhandled. Looks like in Go, you'd be stuck mucking up the main logic with return value checks, then use a goto fail; like pattern, which is bah. It works reasonably well, but leaves potential for gaps in the checking, uglies up the work code, and might have side effects on variables. (The Go spec says goto isn't allowed to skip a variable declaration... so when I do:

auto result = db.query();
if(result.failed) goto error; // refuses to compile thanks to the next line!
auto otherResult = db.query();
error:

Ew, gross.) That sucks hard. I prefer it to finally{} though, since finally doesn't scale as well in code complexity (it'd do fine in this case, but not if there were nested transactions), but both suck compared to the scalable, beautiful, and *correct* elegance of D's scope guards. That said, of course, Go's defer /is/ better than nothing, and it does have goto, so it is a step up from C. But it is leagues behind D.
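The scope(success)/scope(failure) split discussed here can be approximated in Python with a context manager; this is a sketch only, and FakeDb is a stand-in that just records queries, not a real database API:

```python
from contextlib import contextmanager

@contextmanager
def transaction(db):
    db.query("START TRANSACTION")
    try:
        yield
    except BaseException:
        db.query("ROLLBACK")  # the scope(failure) path
        raise
    else:
        db.query("COMMIT")    # the scope(success) path

class FakeDb:                 # stand-in that just records what was run
    def __init__(self):
        self.log = []
    def query(self, sql):
        self.log.append(sql)

db = FakeDb()
with transaction(db):
    db.query("INSERT ...")
print(db.log)  # ['START TRANSACTION', 'INSERT ...', 'COMMIT']
```

Like D's scope guards, this keeps the success/failure branching out of the main logic; unlike them, each new resource needs its own with nesting or an ExitStack rather than a one-line guard at the point of acquisition.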
Re: Binary data-structure serialization
On 6/7/2010 7:37 AM, Jacob Carlborg wrote: On 2010-06-06 21:32, Eric Poggel wrote: On 6/1/2010 5:31 PM, Robert M. Münch wrote: On 2010-06-01 02:13:18 +0200, Eric Poggel said: After having difficulty getting ddbg to work, I decided to write a dump function so I could easily see my data structures at runtime. The biggest part of it is a json encoder which can handle most D structures. If I recall, it only has trouble with enum's and pointers (since there's no length info). It's available in Yage's source here (http://dsource.org/projects/yage/browser/trunk/src/yage/core/json.d) Yage is licensed LGPL but I grant permission to use this under the Boost 1.0 license if you or anyone else needs it. Thanks for this. I will take a look. However, for now it's only one-way and there's no deserialization part to it. Well, this part is left for the reader as an exercise ;-). It seems it also doesn't serialize members of parent classes, which can be a major caveat. I took a look at it yesterday but unfortunately wasn't able to figure it out. You can have a look at my serialization library: http://dsource.org/projects/orange/ it could use some testing. 
* It handles both serializing and deserializing
* It automatically serializes the base classes
* It supports events (before and after (de)serializing)
* It supports non-serialized fields (you can say that some fields in a class should not be serialized)
* It's licensed under the Boost license
* It's fairly std/runtime library independent (hopefully only the XMLArchive is dependent on the runtime library; I've only tested it with Tango)
* It currently only supports XML as the archive type, but the library is built so you can create new archive types and use them with the existing serializer
* If you want to serialize objects through base class references you need to register a serialize function; everything else should be handled automatically

You can have a look at an example of usage at the project site. Thanks. This looks pretty nice!
Re: Containers I'd like to see in std.containers
Steven Schveighoffer wrote: On Sun, 06 Jun 2010 14:48:27 -0400, Johan Granberg lijat.me...@ovegmail.com wrote: I also think a set would be highly useful, and when defining it please don't let the set operations (union, intersection, maybe complement) be defined. I recently was writing some C++ code and got a nasty performance hit from not finding a fast set union operation. I don't understand this. What better place to put set operations such as intersection and union besides the implementation of the set itself? I think that the complaint is that they _weren't_ included in the implementation itself, and he hopes that D's implementation _does_ include them. - Jonathan M Davis
Re: Marketing of D - article topic ideas?
Nick Sabalausky wrote: I actually find that funny. Something in Java that isn't an Object? I remember Everything's an object! being paraded around as a selling point. Yes, in Java, everything is an object except where that bothered the language designers. There are several such design decisions in Java... Jerome -- mailto:jeber...@free.fr http://jeberger.free.fr Jabber: jeber...@jabber.fr
Re: Marketing of D - article topic ideas?
dsimcha wrote: == Quote from Walter Bright (newshou...@digitalmars.com)'s article D is an extremely powerful language, but when I read complaints and sighs about other languages, few seem to know that these problems are solved with D. Essentially, we have a marketing problem. One great way to address it is by writing articles about various aspects of D and how they solve problems, like http://www.reddit.com/r/programming/comments/cb14j/compiletime_function_execution_in_d/ which was well received on reddit. Anyone have good ideas on topics for D articles? And anyone want to stand up and write an article? They don't have to be comprehensive articles (though of course those are better), even blog entries will do. This probably won't be replied to because I'm starting a new sub-thread in a mature discussion, but I wonder if we could write about the advantages and disadvantages of duck typing vs. static typing, comparing Python vs. Java at first, then bring D into the picture to show how, to a greater extent than C++ templates or C#/Java generics, it solves many of the problems of static typing without introducing the pitfalls of duck typing. Here's a simple example of something that would be awkward to impossible to do efficiently in any other language:

/** Finds the largest element present in any of the ranges passed in. */
CommonType!(staticMap!(ElementType, T)) largestElement(T...)(T args) {
    // Quick and dirty impl ignoring error checking:
    typeof(return) ret = args[0].front();
    foreach(arg; args) {
        foreach(elem; arg) {
            ret = max(elem, ret);
        }
    }
    return ret;
}

Do this in C++ - FAIL because there are no variadics. (Yes, C++1x will have them, but I might die of old age by the time C++1x exists.) Do this in any dynamic language - FAIL because looping is so slow that you might die of old age before it executes. Besides, who wants to do computationally intensive, multithreaded work in a dynamic language?
In python: max(map(max, args)) should have reasonable performance and is *much* more elegant... Jerome -- mailto:jeber...@free.fr http://jeberger.free.fr Jabber: jeber...@jabber.fr
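The one-liner does work for the simple case, give or take splatting the arguments; a runnable sketch (largest_element is an illustrative name, not a library function):

```python
def largest_element(*args):
    # max of the per-range maxima; each argument must be a non-empty iterable
    return max(map(max, args))

print(largest_element([1, 5, 3], (2, 9), range(4)))  # prints 9
```

Whether its performance is "reasonable" is exactly the point under dispute downthread; the elegance claim, at least, is hard to argue with.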
Re: Marketing of D - article topic ideas?
On 06/07/2010 12:57 PM, Jérôme M. Berger wrote: dsimcha wrote: == Quote from Walter Bright (newshou...@digitalmars.com)'s article D is an extremely powerful language, but when I read complaints and sighs about other languages, few seem to know that these problems are solved with D. Essentially, we have a marketing problem. One great way to address it is by writing articles about various aspects of D and how they solve problems, like http://www.reddit.com/r/programming/comments/cb14j/compiletime_function_execution_in_d/ which was well received on reddit. Anyone have good ideas on topics for D articles? And anyone want to stand up and write an article? They don't have to be comprehensive articles (though of course those are better), even blog entries will do. This probably won't be replied to because I'm starting a new sub-thread in a mature discussion, but I wonder if we could write about the advantages and disadvantages of duck typing vs. static typing, comparing Python vs. Java at first, then bring D into the picture to show how, to a greater extent than C++ templates or C#/Java generics, it solves may of the problems of static typing without introducing the pitfalls of duck typing. Here's a simple example of something that would be awkward to impossible to do efficiently in any other language: /**Finds the largest element present in any of the ranges passed in.\ */ CommonType!(staticMap!(ElementType, T)) largestElement(T...)(T args) { // Quick and dirty impl ignoring error checking: typeof(return) ret = args[0].front(); foreach(arg; args) { foreach(elem; arg) { ret = max(elem, ret); } } return ret; } Do this in C++ - FAIL because there are no variadics. (Yes, C++1x will have them, but I might die of old age by the time C++1x exists.) Do this in any dynamic language - FAIL because looping is so slow that you might die of old age before it executes. Besides, who wants to do computationally intensive, multithreaded work in a dynamic language? 
In python: max (map (max, args)) should have reasonable performances and is *much* more elegant... I very much doubt that. Andrei
Re: Marketing of D - article topic ideas?
Jérôme M. Berger jeber...@free.fr wrote in message news:hujboe$tp...@digitalmars.com... Nick Sabalausky wrote: I actually find that funny. Something in Java that isn't an Object? I remember Everything's an object! being paraded around as a selling point. Yes, in Java, everything is an object except where that bothered the language designers. There are several such design decisions in Java... Yea, that's a good example of why I've grown a distaste towards hard-and-fast religious design strategies. The designer inevitably comes across cases where it just doesn't work particularly well, and then they're forced to either stay true to their misguided principles by accepting an awkward problematic design, or contradict their alleged principles and go with a better design. And when they do the latter, that runs the risk of causing problems in other areas that had been relying on the old principle being rigidly followed. --- Not sent from an iPhone.
Re: Marketing of D - article topic ideas?
Andrei Alexandrescu seewebsiteforem...@erdani.org wrote in message news:hujc46$v4...@digitalmars.com... On 06/07/2010 12:57 PM, Jérôme M. Berger wrote: dsimcha wrote: == Quote from Walter Bright (newshou...@digitalmars.com)'s article D is an extremely powerful language, but when I read complaints and sighs about other languages, few seem to know that these problems are solved with D. Essentially, we have a marketing problem. One great way to address it is by writing articles about various aspects of D and how they solve problems, like http://www.reddit.com/r/programming/comments/cb14j/compiletime_function_execution_in_d/ which was well received on reddit. Anyone have good ideas on topics for D articles? And anyone want to stand up and write an article? They don't have to be comprehensive articles (though of course those are better), even blog entries will do. This probably won't be replied to because I'm starting a new sub-thread in a mature discussion, but I wonder if we could write about the advantages and disadvantages of duck typing vs. static typing, comparing Python vs. Java at first, then bring D into the picture to show how, to a greater extent than C++ templates or C#/Java generics, it solves may of the problems of static typing without introducing the pitfalls of duck typing. Here's a simple example of something that would be awkward to impossible to do efficiently in any other language: /**Finds the largest element present in any of the ranges passed in.\ */ CommonType!(staticMap!(ElementType, T)) largestElement(T...)(T args) { // Quick and dirty impl ignoring error checking: typeof(return) ret = args[0].front(); foreach(arg; args) { foreach(elem; arg) { ret = max(elem, ret); } } return ret; } Do this in C++ - FAIL because there are no variadics. (Yes, C++1x will have them, but I might die of old age by the time C++1x exists.) Do this in any dynamic language - FAIL because looping is so slow that you might die of old age before it executes. 
Besides, who wants to do computationally intensive, multithreaded work in a dynamic language? In python: max (map (max, args)) should have reasonable performances and is *much* more elegant... I very much doubt that. Andrei It might be faster than using nested loops in Python. But yea, seems unlikely it would compare to the D version. Plus, can't you still do something like this? (I may not have this exactly right)

CommonType!(staticMap!(ElementType, T)) largestElement(T...)(T args)
{
    static assert( !is(typeof(return) == void) );
    return max( map!max(args) );
}

Assuming, of course, a 'max' that works on a range, which would be easy enough to do. Probably something like:

T max(T range)
{
    return reduce!ordinaryMax(range);
    // Or: return reduce!"a > b ? a : b"(range);
}

--- Not sent from an iPhone.
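The Python one-liner Jérôme proposes can be tried directly; here it is as a runnable snippet (the sample data is invented for illustration):

```python
# max(map(max, args)): take the largest element of each range,
# then the largest of those per-range maxima.
args = [[3, 1, 4], [1, 5, 9, 2], [6, 5]]
largest = max(map(max, args))
assert largest == 9
print(largest)  # 9
```

Whether it is fast is the open question in the thread; that it is short is not in dispute.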
Re: Go Programming talk [OT]
Adam Ruppe wrote: That sucks hard. I prefer it to finally{} though, since finally doesn't scale as well in code complexity (it'd do fine in this case, but not if there were nested transactions), but both suck compared to the scalable, beautiful, and *correct* elegance of D's scope guards. I agree. D's scope statement looks fairly innocuous and one can easily pass it by with blah, blah, another statement, blah, blah but the more I use it the more I realize it is a game changer in how one writes code. For example, here's the D1 implementation of std.file.read:

/*
 * Read file name[], return array of bytes read.
 * Throws:
 *      FileException on error.
 */
void[] read(char[] name)
{
    DWORD numread;
    HANDLE h;

    if (useWfuncs)
    {
        wchar* namez = std.utf.toUTF16z(name);
        h = CreateFileW(namez, GENERIC_READ, FILE_SHARE_READ, null, OPEN_EXISTING,
                FILE_ATTRIBUTE_NORMAL | FILE_FLAG_SEQUENTIAL_SCAN, cast(HANDLE)null);
    }
    else
    {
        char* namez = toMBSz(name);
        h = CreateFileA(namez, GENERIC_READ, FILE_SHARE_READ, null, OPEN_EXISTING,
                FILE_ATTRIBUTE_NORMAL | FILE_FLAG_SEQUENTIAL_SCAN, cast(HANDLE)null);
    }
    if (h == INVALID_HANDLE_VALUE)
        goto err1;

    auto size = GetFileSize(h, null);
    if (size == INVALID_FILE_SIZE)
        goto err2;

    auto buf = std.gc.malloc(size);
    if (buf)
        std.gc.hasNoPointers(buf.ptr);

    if (ReadFile(h, buf.ptr, size, &numread, null) != 1)
        goto err2;
    if (numread != size)
        goto err2;

    if (!CloseHandle(h))
        goto err;

    return buf[0 .. size];

err2:
    CloseHandle(h);
err:
    delete buf;
err1:
    throw new FileException(name, GetLastError());
}

Note the complex logic to recover and unwind from errors (none of the called functions throw exceptions), and the care with which this is constructed to ensure everything is done properly.
Contrast this with D2's version written by Andrei:

void[] read(in char[] name, size_t upTo = size_t.max)
{
    alias TypeTuple!(GENERIC_READ,
            FILE_SHARE_READ, (SECURITY_ATTRIBUTES*).init, OPEN_EXISTING,
            FILE_ATTRIBUTE_NORMAL | FILE_FLAG_SEQUENTIAL_SCAN,
            HANDLE.init)
        defaults;
    auto h = useWfuncs
        ? CreateFileW(std.utf.toUTF16z(name), defaults)
        : CreateFileA(toMBSz(name), defaults);

    cenforce(h != INVALID_HANDLE_VALUE, name);
    scope(exit) cenforce(CloseHandle(h), name);
    auto size = GetFileSize(h, null);
    cenforce(size != INVALID_FILE_SIZE, name);
    size = min(upTo, size);
    auto buf = GC.malloc(size, GC.BlkAttr.NO_SCAN)[0 .. size];
    scope(failure) delete buf;

    DWORD numread = void;
    cenforce(ReadFile(h, buf.ptr, size, &numread, null) == 1
            && numread == size, name);
    return buf[0 .. size];
}

The code is the same logic, but using scope it is dramatically simplified. There's not a single control flow statement in it! Furthermore, it is correct even if functions like CloseHandle throw exceptions.
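For readers who don't write D, the scope-guard discipline Walter describes can be approximated in Python with contextlib.ExitStack; this is only a rough sketch of the idea (function name invented), not a translation of the D code:

```python
from contextlib import ExitStack

def guarded_read(path):
    # Cleanup is registered immediately after acquisition, right next to it,
    # and runs on ANY exit from the block -- normal return or exception --
    # which is the essence of D's scope(exit).
    with ExitStack() as stack:
        f = open(path, "rb")
        stack.callback(f.close)  # analogous to: scope(exit) CloseHandle(h);
        return f.read()
```

The win is the same as in the D2 read(): no goto labels, no duplicated cleanup paths, and the cleanup survives exceptions thrown by later steps.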
Re: Marketing of D - article topic ideas?
Nick Sabalausky a...@a.a wrote in message news:hujd6a$11e...@digitalmars.com... Assuming, of course, a 'max' that works on a range, which would be easy enough to do. Probably something like:

ElementType!T max(T range) // Corrected
{
    return reduce!ordinaryMax(range);
    // Or: return reduce!"a > b ? a : b"(range);
}
Re: Marketing of D - article topic ideas?
Nick Sabalausky wrote: Yea, that's a good example of why I've grown a distaste towards hard-and-fast religious design strategies. The designer inevitably comes across cases where it just doesn't work particularly well, and then they're forced to either stay true to their misguided principles by accepting an awkward problematic design, or contradict their alleged principles and go with a better design. And when they do the latter, that runs the risk of causing problems in other areas that had been relying on the old principle being rigidly followed. D has design principles, but those principles are often contradictory. I don't see a good reason to follow a design principle out of principle if it destroys the utility of the language. For example, consider:

version (unittest)

'unittest' is a keyword, not an identifier. Making this work requires a special case in the grammar. But the alternatives,

version (Unittest)
version (unit_test)
version (unittests)

etc. are all much worse than simply violating a principle and putting the special case in.
Re: Go Programming talk [OT]
Walter Bright newshou...@digitalmars.com wrote in message news:hujd7m$11g...@digitalmars.com... Adam Ruppe wrote: That sucks hard. I prefer it to finally{} though, since finally doesn't scale as well in code complexity (it'd do fine in this case, but not if there were nested transactions), but both suck compared to the scalable, beautiful, and *correct* elegance of D's scope guards. I agree. D's scope statement looks fairly innocuous and one can easily pass it by with blah, blah, another statement, blah, blah but the more I use it the more I realize it is a game changer in how one writes code. For example, here's the D1 implementation of std.file.read: ... Looking at that, if I didn't know better, I would think you were a VB programmer ;)
Re: Marketing of D - article topic ideas?
Walter Bright newshou...@digitalmars.com wrote in message news:hujdg6$125...@digitalmars.com... Nick Sabalausky wrote: Yea, that's a good example of why I've grown a distaste towards hard-and-fast religious design strategies. ... D has design principles, but those principles are often contradictory. I don't see a good reason to follow a design principle out of principle if it destroys the utility of the language. For example, consider: version (unittest) ... But the alternatives ... are all much worse than simply violating a principle and putting the special case in. Great example. Of course, one could argue that we religiously follow pragmatism ;) --- Not sent from an iPhone.
Re: Go Programming talk [OT]
Andrei Alexandrescu: Which part of the talk conveyed to you that information? After thinking well about this question, my conclusion is that I was not just (as usual) wrong, I was trolling: I didn't know what I was talking about. I am sorry. I have not even programmed in Go. Bye, bearophile
Re: Go Programming talk [OT]
On Mon, Jun 7, 2010 at 11:19 AM, Walter Bright newshou...@digitalmars.comwrote: Adam Ruppe wrote: That sucks hard. I prefer it to finally{} though, since finally doesn't scale as well in code complexity (it'd do fine in this case, but not if there were nested transactions), but both suck compared to the scalable, beautiful, and *correct* elegance of D's scope guards. I agree. D's scope statement looks fairly innocuous and one can easily pass it by with blah, blah, another statement, blah, blah but the more I use it the more I realize it is a game changer in how one writes code. For example, here's the D1 implementation of std.file.read: - ... -- Note the complex logic to recover and unwind from errors (none of the called functions throw exceptions), and the care with which this is constructed to ensure everything is done properly. Contrast this with D2's version written by Andrei: ... The code is the same logic, but using scope it is dramatically simplified. There's not a single control flow statement in it! Furthermore, it is correct even if functions like CloseHandle throw exceptions. Hmm, but I can actually understand your code. :-( --bb
Re: Go Programming talk [OT]
On Mon, Jun 7, 2010 at 12:25 PM, Walter Bright newshou...@digitalmars.com wrote: Bill Baxter wrote: Hmm, but I can actually understand your code. :-( Yeah, but how long would it take you to be sure that it is handling all errors correctly and cleaning up properly in case of those errors? It'd probably take me at least 5 intensive minutes. But in the scope version, once you're comfortable with scope and enforce, it wouldn't take half that. Probably so. What's cenforce do anyway? --bb
Re: Go Programming talk [OT]
On 6/7/10, Bill Baxter wbax...@gmail.com wrote: Hmm, but I can actually understand your code. :-( The confusing part is probably cenforce, which is a little helper function in the std.file module. cenforce(condition, filename) is the same as:

if (!condition)
    throw new FileException(filename, __FILE__, __LINE__, GetLastError());

So the new read() does still have control statements, but they are hidden in that helper function template so you don't have to repeat them all over the main code. Then, of course, the scope guards clean up on the event of those exceptions, so you don't have to worry about the special error labels, which is what allows the helper function to actually be useful!
Re: Go Programming talk [OT]
Adam Ruppe wrote: On 6/7/10, Bill Baxter wbax...@gmail.com wrote: Hmm, but I can actually understand your code. :-( The confusing part is probably cenforce, which is a little helper function in the std.file module. cenforce(condition, filename) is the same as The tl;dr version of what cenforce does is convert a C-style error code return into an exception. Hence the 'c' in cenforce.
Re: Go Programming talk [OT]
Bill Baxter wrote: Probably so. What's cenforce do anyway?

private T cenforce(T, string file = __FILE__, uint line = __LINE__)
    (T condition, lazy const(char)[] name)
{
    if (!condition)
    {
        throw new FileException(
            text("In ", file, "(", line, "), data file ", name), .getErrno);
    }
    return condition;
}
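The pattern cenforce embodies - turning a C-style error-code return into an exception while passing the successful value through - looks like this in Python (the function name and exception type are invented for illustration):

```python
class FileError(Exception):
    pass

def cenforce_like(condition, name):
    # If the C-style result is falsy (0 / NULL / False), raise with context;
    # otherwise hand the value back so the call can be used inline.
    if not condition:
        raise FileError(f"operation failed on {name!r}")
    return condition
```

Used inline, e.g. `handle = cenforce_like(open_file(...), "data.txt")`, every call site gets error checking without a visible branch.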
Re: Marketing of D - article topic ideas?
Walter Bright, on June 7 at 11:24, you wrote: Nick Sabalausky wrote: Yea, that's a good example of why I've grown a distaste towards hard-and-fast religious design strategies. ... D has design principles, but those principles are often contradictory. I don't see a good reason to follow a design principle out of principle if it destroys the utility of the language. For example, consider: version (unittest) ... are all much worse than simply violating a principle and putting the special case in. Please, document this! http://d.puremagic.com/issues/show_bug.cgi?id=4230 -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ -- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) -- What is it that makes a compass always point north? - Being a needle, nothing more, and fulfilling its mission. -- Ricardo Vaporeso
Re: Go Programming talk [OT]
Adam Ruppe, on June 7 at 11:30, you wrote: On 6/7/10, Leandro Lucarella llu...@gmail.com wrote: Yes, they are not implemented exactly the same, but the concept is very similar. And I agree that scope is really a life saver, it makes life much easier and code much more readable. There is one important difference though: Go doesn't seem to have scope(failure) vs scope(success). I guess it doesn't have exceptions, so it is moot, but it looks to me like suckage. Go doesn't have exceptions, so scope(failure/success) makes no sense. You can argue about whether not having exceptions is good or bad (I don't have a strong opinion about it, sometimes I feel exceptions are nice, sometimes I think they are evil), though. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ -- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) -- Hope is a friend who lends us illusion.
Re: Marketing of D - article topic ideas?
Mon, 07 Jun 2010 14:06:24 -0400, Nick Sabalausky wrote: Jérôme M. Berger jeber...@free.fr wrote in message news:hujboe$tp...@digitalmars.com... Nick Sabalausky wrote: I actually find that funny. Something in Java that isn't an Object? I remember "Everything's an object!" being paraded around as a selling point. Yes, in Java, everything is an object except where that bothered the language designers. There are several such design decisions in Java... Yea, that's a good example of why I've grown a distaste towards hard-and-fast religious design strategies. The designer inevitably comes across cases where it just doesn't work particularly well, and then they're forced to either stay true to their misguided principles by accepting an awkward problematic design, or contradict their alleged principles and go with a better design. And when they do the latter, that runs the risk of causing problems in other areas that had been relying on the old principle being rigidly followed. Part of the religious feel of Java comes from the fact that it runs on a VM. The safe memory model imposes some restrictions, as you might have noticed in SafeD. The 'everything is an Object' idea is a bit broken, performance-wise. That's why C# added reified types. Their decision to add primitive types has a rationale behind it: it made the VM a useful tool for real enterprise applications.
Re: Marketing of D - article topic ideas?
Andrei Alexandrescu wrote: On 06/07/2010 12:57 PM, Jérôme M. Berger wrote: Do this in any dynamic language - FAIL because looping is so slow that you might die of old age before it executes. Besides, who wants to do computationally intensive, multithreaded work in a dynamic language? In python: max (map (max, args)) should have reasonable performances and is *much* more elegant... I very much doubt that. What do you doubt? That it has reasonable performance or that it is more elegant? Jerome -- mailto:jeber...@free.fr http://jeberger.free.fr Jabber: jeber...@jabber.fr
Re: Go Programming talk [OT]
Leandro Lucarella wrote: Go doesn't have exceptions, so scope(failure/success) makes no sense. You can argue about if not having exceptions is good or bad (I don't have a strong opinion about it, sometimes I feel exceptions are nice, sometimes I think they are evil), though. Just to compare the two styles... Without exceptions, every step of the code must be checked explicitly:

// C code:
int foo()
{
    int err = 0;

    // allocate resources

    err = bar();
    if (err) goto finally;

    err = zar();
    if (err) goto finally;

    err = car();
    if (err) goto finally;

finally:
    // do cleanup
    return err;
}

(Ordinarily, the if (err) checks are hidden inside macros like check_error, check_error_null, etc.) With exceptions, the actual code emerges:

// C++ or D code
void foo()
{
    // allocate resources
    bar();
    zar();
    car();
}

Ali
Re: Marketing of D - article topic ideas?
Leandro Lucarella wrote: Please, document this! http://d.puremagic.com/issues/show_bug.cgi?id=4230 Done.
Re: Marketing of D - article topic ideas?
On 06/07/2010 04:35 PM, Jérôme M. Berger wrote: Andrei Alexandrescu wrote: On 06/07/2010 12:57 PM, Jérôme M. Berger wrote: Do this in any dynamic language - FAIL because looping is so slow that you might die of old age before it executes. Besides, who wants to do computationally intensive, multithreaded work in a dynamic language? In python: max (map (max, args)) should have reasonable performances and is *much* more elegant... I very much doubt that. What do you doubt? That it has reasonable performance or that it is more elegant? That it has reasonable performance. Then, there are a number of things that can't be compared, such as figuring out the tightest static types, something that Python doesn't worry about (at the expense of precision and performance). I see such examples as simple illustrations ("look, if you give up X, you gain Y!") just coming without mentioning X. Andrei
Wide characters support in D
Note: I posted this already on runtime D list, but I think that list was a wrong one for this question. Sorry for duplication :-) Hi. I am new to D. It looks like D supports 3 types of characters: char, wchar, dchar. This is cool, however, I have some questions about it: 1. When we have 2 methods (one with wchar[] and another with char[]), how D will determine which one to use if I pass a string hello world? 2. Many libraries (e.g. tango or phobos) don't provide functions/methods (or have incomplete support) for wchar/dchar e.g. writefln probably assumes char[] for strings like Number %d... 3. Even if they do support, it is kind of annoying to provide methods for all 3 types of chars. Especially, if we want to use native mode (e.g. for Windows wchar is better, for Linux char is better). E.g. Windows has _wopen, _wdirent, _wreaddir, _wopenddir, _wmain(int argc, wchar_t[] argv) and so on, and they should be native (in a sense that no conversion is necessary when we do, for instance, _wopen). Linux doesn't have them as UTF-8 is used widely there. Since D language is targeted on system programming, why not to try to use whatever works better on a particular system (e.g. char will be 2 bytes on Windows and 1 byte on Linux; it can be a compiler switch, and all libraries can be compiled properly on a particular system). It's still necessary to have all 3 types of char for cooperation with C. But in those cases byte, short and int will do their work. For this kind of situation, it would be nice to have some built-in functions for transparent conversion from char to byte/short/int and vice versa (especially, if conversion only happens if needed on a particular platform). In my opinion, to separate notion of character from byte would be nice, and it makes sense as a particular platform uses either UTF-8 or UTF-16 natively. Programmers may write universal code (like TCHAR on Windows). 
Unfortunately, C uses 'char' and 'byte' interchangeably but why D has to make this mistake again? Sorry if my suggestion sounds odd. Anyway, it would be great to hear something from D gurus :-) Ruslan.
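The storage tradeoff behind Ruslan's native-width proposal is easy to quantify. Counting bytes (Python is used here only as a calculator), ASCII-heavy text is half the size in UTF-8, while Cyrillic text comes out the same size in both encodings:

```python
s = "hello"           # ASCII: UTF-8 uses 1 byte per character
t = "\u043f\u0440"    # two Cyrillic characters: 2 bytes each in UTF-8
assert len(s.encode("utf-8")) == 5
assert len(s.encode("utf-16-le")) == 10  # UTF-16 doubles the ASCII text
assert len(t.encode("utf-8")) == 4
assert len(t.encode("utf-16-le")) == 4   # identical for Cyrillic
```

For East Asian scripts UTF-8 is typically larger than UTF-16, which is part of why Windows and Linux made different native choices.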
Re: Marketing of D - article topic ideas?
Nick Sabalausky a...@a.a wrote in message news:hujd9m$11o...@digitalmars.com... Nick Sabalausky a...@a.a wrote in message news:hujd6a$11e...@digitalmars.com... Assuming, of course, a 'max' that works on a range, which would be easy enough to do. Probably something like:

ElementType!T max(T range) // Corrected
{
    return reduce!ordinaryMax(range);
    // Or: return reduce!"a > b ? a : b"(range);
}

Or:

alias reduce!"a > b ? a : b" max;

God, I love D :)
Re: Wide characters support in D
Ruslan Nikolaev nruslan_de...@yahoo.com wrote: 1. When we have 2 methods (one with wchar[] and another with char[]), how D will determine which one to use if I pass a string hello world? String literals in D(2) are of type immutable(char)[] (char[] in D1) by default, and thus will be handled by the char[]-version of the function. Should you want a string literal of a different type, append a c, w, or d to specify char[], wchar[] or dchar[]. Or use a cast. Since D language is targeted on system programming, why not to try to use whatever works better on a particular system (e.g. char will be 2 bytes on Windows and 1 byte on Linux; it can be a compiler switch, and all libraries can be compiled properly on a particular system). Because this leads to unportable code that fails in unexpected ways when moved from one system to another, thus increasing rather than decreasing the cognitive load on the hapless programmer. It's still necessary to have all 3 types of char for cooperation with C. But in those cases byte, short and int will do their work. Absolutely not. One of the things D tries to do is get strings right. For that purpose, all 3 types are needed. In my opinion, to separate notion of character from byte would be nice, and it makes sense as a particular platform uses either UTF-8 or UTF-16 natively. Programmers may write universal code (like TCHAR on Windows). Unfortunately, C uses 'char' and 'byte' interchangeably but why D has to make this mistake again? D has not. A char is a character, a possibly incomplete UTF-8 codepoint, while a byte is a byte, a humble number in the order of -128 to +127. Yes, it is possible to abuse char in D, and byte likewise. D aims to allow programmers to program close to the metal if the programmer so wishes, and thus does not pretend char is an opaque type about which nothing can be known. -- Simen
Re: Wide characters support in D
Ruslan Nikolaev wrote: 1. When we have 2 methods (one with wchar[] and another with char[]), how D will determine which one to use if I pass a string hello world? I asked the same question on the D.learn group recently. Literals like that don't have a particular encoding. The programmer must specify explicitly to resolve ambiguities: "hello world"c or "hello world"w. 3. Even if they do support, it is kind of annoying to provide methods for all 3 types of chars. Especially, if we want to use native mode I think the solution is to take advantage of templates and use template constraints if the template parameter is too flexible. Another approach might be to use dchar within the application and use other encodings on the interfaces. Ali
Re: Wide characters support in D
On 07/06/10 22:48, Ruslan Nikolaev wrote: Note: I posted this already on runtime D list, but I think that list was a wrong one for this question. Sorry for duplication :-) Hi. I am new to D. It looks like D supports 3 types of characters: char, wchar, dchar. This is cool, however, I have some questions about it: 1. When we have 2 methods (one with wchar[] and another with char[]), how D will determine which one to use if I pass a string hello world? If you pass "Hello World", this is always a string (char[] in D1, immutable(char)[] in D2). If you want to specify a type with a string literal, you can use "Hello World"w or "Hello World"d for wstring and dstring respectively. 2. Many libraries (e.g. tango or phobos) don't provide functions/methods (or have incomplete support) for wchar/dchar e.g. writefln probably assumes char[] for strings like Number %d... In tango most, if not all string functions are templated, so work with all string types, char[], wchar[] and dchar[]. I don't know how well phobos supports other string types; I know phobos 1 is extremely limited for types other than char[], I don't know about Phobos 2. 3. Even if they do support, it is kind of annoying to provide methods for all 3 types of chars. Especially, if we want to use native mode (e.g. for Windows wchar is better, for Linux char is better). E.g. Windows has _wopen, _wdirent, _wreaddir, _wopenddir, _wmain(int argc, wchar_t[] argv) and so on, and they should be native (in a sense that no conversion is necessary when we do, for instance, _wopen). Linux doesn't have them as UTF-8 is used widely there. Enter templates! You can write the function once and have it work with all three string types with little effort involved. All the lower level functions that interact with the operating system are abstracted away nicely for you in both Tango and Phobos, so you'll never have to deal with this for basic functions. For your own functions it's a simple matter of templating them in most cases.
Since D language is targeted on system programming, why not to try to use whatever works better on a particular system (e.g. char will be 2 bytes on Windows and 1 byte on Linux; it can be a compiler switch, and all libraries can be compiled properly on a particular system). It's still necessary to have all 3 types of char for cooperation with C. But in those cases byte, short and int will do their work. For this kind of situation, it would be nice to have some built-in functions for transparent conversion from char to byte/short/int and vice versa (especially, if conversion only happens if needed on a particular platform). This is something C did wrong. If compilers are free to choose their own width for the string type you end up with the mess C has where every library introduces their own custom types to make sure they're the expected length, eg uint32_t etc. Having things the other way around makes life far easier - int is always 32bits signed for example, the same applies to strings. You can use version blocks if you want to specify a type which changes based on platform, I wouldn't recommend it though, it just makes life harder in the long run. In my opinion, to separate notion of character from byte would be nice, and it makes sense as a particular platform uses either UTF-8 or UTF-16 natively. Programmers may write universal code (like TCHAR on Windows). Unfortunately, C uses 'char' and 'byte' interchangeably but why D has to make this mistake again? They are different types in D, so I'm not sure what you mean. byte/ubyte have no encoding associated with them, char is always UTF-8, wchar UTF-16 etc. Robert
Re: Wide characters support in D
This doesn't answer all your questions and suggestions, but here goes. In answer to #1, "Hello world" is a literal of type char[] (or string). If you want to use UTF-16 or 32, use "Hello world"w and "Hello world"d respectively. In partial answer to #2 and #3, it's generally pretty easy to adapt a string function to support string, wstring, and dstring by using templating and the fact that D can do automatic conversions for you. For instance:

string blah = "hello world";
foreach (dchar c; blah)  // guaranteed to get a full character
{
    // do something
}
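The point of foreach (dchar c; blah) is that iteration yields whole characters, not UTF-8 code units. The same distinction can be seen in Python by comparing a str (a sequence of code points) with its UTF-8 encoding (a sequence of bytes):

```python
s = "caf\u00e9"            # "café": 4 characters
b = s.encode("utf-8")      # its UTF-8 form
assert len(s) == 4         # 4 code points
assert len(b) == 5         # 5 code units: the 'é' takes two bytes
assert list(s)[-1] == "\u00e9"  # iterating the string yields full characters
```

Indexing the byte form in the middle of the 'é' would split the character, which is exactly what the dchar foreach protects against in D.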
Re: Wide characters support in D
Ok, ok... that was just a suggestion... Thanks, for reply about Hello world representation. Was postfix w and d added initially or just recently? I did not know about it. I thought D does automatic conversion for string literals. Yes, templates may help. However, that unnecessary make code bigger (since we have to compile it for every char type). The other problem is that it allows programmer to choose which one to use. He or she may just prefer char[] as UTF-8 (or wchar[] as UTF-16). That will be fine on platform that supports this encoding natively (e.g. for file system operations, screen output, etc.), whereas it will cause conversion overhead on the other. Not to say that it's a big overhead, but unnecessary one. Having said this, I do agree that there must be some flexibility (e.g. in Java char[] is always 2 bytes), however, I don't believe that this flexibility should be available for application programmer. I don't think there is any problem with having different size of char. In fact, that would make programs better (since application programmers will have to think in terms of characters as opposed to bytes). System programmers (i.e. OS programmers) may choose to think as they expect it to be (since char width option can be added to compiler). TCHAR in Windows is a good example of it. Whenever you need to determine size of element (e.g. for allocation), you can use 'sizeof'. Again, it does not mean that you're deprived of char/wchar/dchar capability. It still can be supported (e.g. via ubyte/ushort/uint) for the sake of interoperability or some special cases. Special string constants (e.g. b, w, d) can be supported, too. My only point is that it would be good to have universal char type that depends on platform. That, in turns, allows to have unified char for all libraries on this platform. In addition, commonly used constants '\n', '\r', '\t' will be the same regardless of char width. Anyway, that was just a suggestion. 
You may disagree with this if you wish. Ruslan.
Re: Wide characters support in D
Ruslan Nikolaev wrote: Note: I posted this already on runtime D list, Although D is designed to be fairly agnostic about character types, in practice I recommend the following: 1. Use the string type for strings, it's char[] on D1 and immutable(char)[] on D2. 2. Use dchar's to hold individual characters. The problem with wchar's is that everyone forgets about surrogate pairs. Most UTF-16 programs in the wild, including nearly all Java programs, are broken with regard to surrogate pairs. The problem with dchar's is strings of them consume memory at a prodigious rate.
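Walter's surrogate-pair point can be demonstrated concretely: any character outside the Basic Multilingual Plane occupies two UTF-16 code units, so code that indexes wchar-by-wchar silently splits it (Python shown purely for illustration):

```python
g_clef = "\U0001D11E"            # MUSICAL SYMBOL G CLEF, outside the BMP
units = g_clef.encode("utf-16-be")
assert len(g_clef) == 1          # one character...
assert len(units) // 2 == 2      # ...but two UTF-16 code units: a surrogate pair
high = int.from_bytes(units[:2], "big")
low = int.from_bytes(units[2:], "big")
assert 0xD800 <= high <= 0xDBFF  # high (lead) surrogate
assert 0xDC00 <= low <= 0xDFFF   # low (trail) surrogate
```

Any UTF-16 code that treats one code unit as one character mishandles this text, which is the bug class Walter says most Java programs carry.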
Re: Wide characters support in D
Just one more addition: it is possible to have built-in function that converts multibyte (or multiword) char sequence (even though in my proposal it can be of different size) to dchar (UTF-32) character. Again, my only point is that it would be nice to have something similar to TCHAR so that all libraries can use it if they choose not to provide functions for all 3 types. 2Walter: Yes, programmers do often ignore surrogate pairs in case of UTF-16. But in case of undetermined char size (1 or 2 bytes) they will have to use special builtin conversion functions to dchar unless they want their code to be completely broken. Thanks, Ruslan. --- On Tue, 6/8/10, Ruslan Nikolaev nruslan_de...@yahoo.com wrote: From: Ruslan Nikolaev nruslan_de...@yahoo.com Subject: Re: Wide characters support in D To: digitalmars.D digitalmars-d@puremagic.com Date: Tuesday, June 8, 2010, 3:16 AM Ok, ok... that was just a suggestion... Thanks, for reply about Hello world representation. Was postfix w and d added initially or just recently? I did not know about it. I thought D does automatic conversion for string literals. Yes, templates may help. However, that unnecessary make code bigger (since we have to compile it for every char type). The other problem is that it allows programmer to choose which one to use. He or she may just prefer char[] as UTF-8 (or wchar[] as UTF-16). That will be fine on platform that supports this encoding natively (e.g. for file system operations, screen output, etc.), whereas it will cause conversion overhead on the other. Not to say that it's a big overhead, but unnecessary one. Having said this, I do agree that there must be some flexibility (e.g. in Java char[] is always 2 bytes), however, I don't believe that this flexibility should be available for application programmer. I don't think there is any problem with having different size of char. 
In fact, that would make programs better (since application programmers will have to think in terms of characters as opposed to bytes). System programmers (i.e. OS programmers) may choose to think as they expect it to be (since a char width option can be added to the compiler). TCHAR in Windows is a good example of this. Whenever you need to determine the size of an element (e.g. for allocation), you can use 'sizeof'. Again, it does not mean that you're deprived of char/wchar/dchar capability. It can still be supported (e.g. via ubyte/ushort/uint) for the sake of interoperability or some special cases. Special string constants (e.g. b, w, d) can be supported, too. My only point is that it would be good to have a universal char type that depends on the platform. That, in turn, allows a unified char for all libraries on that platform. In addition, the commonly used constants '\n', '\r', '\t' will be the same regardless of char width. Anyway, that was just a suggestion. You may disagree with this if you wish. Ruslan.
Re: Go Programming talk [OT]
On Sun, 06 Jun 2010 18:13:36 -0400, bearophile wrote: At 9.30 you can see the switch used on a type type :-) You can see a similar example here: http://golang.org/src/pkg/exp/datafmt/datafmt.go Look for the line "switch t := fexpr.(type) {" ... Bye, bearophile That isn't a type type. Untested D code: void fun(T, U)(T op, U y) { switch(typeof(y)) { case immutable(char)[]: case int: } }
Re: I'm holding it in my hands
On Sun, 06 Jun 2010 08:05:32 -0400, Guillaume B. wrote: Andrei Alexandrescu wrote: http://erdani.com Don't worry, it's SFW :o). Andrei I've already preordered via amazon in Canada but it doesn't seem that it will be shipped before June 14... I'll have to wait! Guillaume Don't feel too bad, Amazon in the US is saying the same thing.
Re: Go Programming talk [OT]
On 06/07/2010 07:44 PM, Jesse Phillips wrote: On Sun, 06 Jun 2010 18:13:36 -0400, bearophile wrote: At 9.30 you can see the switch used on a type type :-) You can see a similar example here: http://golang.org/src/pkg/exp/datafmt/datafmt.go Look for the line "switch t := fexpr.(type) {" ... Bye, bearophile That isn't a type type. Untested D code: void fun(T, U)(T op, U y) { switch(typeof(y)) { case immutable(char)[]: case int: } } Actually the uses are not equivalent. A closer example is: import std.stdio; class A {} void main() { Object a = new A; switch (typeid(a).name) { case "object.Object": writeln("it's an object"); break; case "test.A": writeln("yeah, it's an A"); break; default: writeln("default: ", typeid(a).name); break; } } Go stores the dynamic types together with objects, so what looks like a simple typedef for int is in fact a full-fledged class with one data member. Those objects are stored on the garbage-collected heap. Andrei
Re: Wide characters support in D
On Mon, 07 Jun 2010 17:48:09 -0400, Ruslan Nikolaev nruslan_de...@yahoo.com wrote: Note: I posted this already on runtime D list, but I think that list was a wrong one for this question. Sorry for duplication :-) Hi. I am new to D. It looks like D supports 3 types of characters: char, wchar, dchar. This is cool, however, I have some questions about it: 1. When we have 2 methods (one with wchar[] and another with char[]), how will D determine which one to use if I pass a string "hello world"? 2. Many libraries (e.g. tango or phobos) don't provide functions/methods (or have incomplete support) for wchar/dchar, e.g. writefln probably assumes char[] for strings like "Number %d". 3. Even if they do support it, it is kind of annoying to provide methods for all 3 types of chars. Especially if we want to use native mode (e.g. for Windows wchar is better, for Linux char is better). E.g. Windows has _wopen, _wdirent, _wreaddir, _wopendir, _wmain(int argc, wchar_t[] argv) and so on, and they should be native (in the sense that no conversion is necessary when we do, for instance, _wopen). Linux doesn't have them as UTF-8 is used widely there. Since the D language is targeted at system programming, why not try to use whatever works better on a particular system (e.g. char will be 2 bytes on Windows and 1 byte on Linux; it can be a compiler switch, and all libraries can be compiled properly on a particular system)? It's still necessary to have all 3 types of char for cooperation with C. But in those cases byte, short and int will do the work. For this kind of situation, it would be nice to have some built-in functions for transparent conversion from char to byte/short/int and vice versa (especially if conversion only happens if needed on a particular platform). In my opinion, it would be nice to separate the notion of character from byte, and it makes sense as a particular platform uses either UTF-8 or UTF-16 natively. Programmers may write universal code (like TCHAR on Windows).
Unfortunately, C uses 'char' and 'byte' interchangeably but why D has to make this mistake again? One thing that may not be clear from your interpretation of D's docs, all strings representable by one character type are also representable by all the other character types. This means that a function that takes a char[] can also take a dchar[] if it is sent through a converter (i.e. toUtf8 on Tango I think). So D's char is decidedly not like byte or ubyte, or C's char. In general, I use char (utf8) because I am used to C and ASCII (which is exactly represented in utf-8). But because char is utf-8, it could potentially accept any unicode string. -Steve
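For reference, in D2's Phobos the conversion described above can be done with std.conv.to (std.utf's toUTF8/toUTF16/toUTF32 do the same job at a lower level). A minimal sketch, not from the thread:

```d
import std.conv : to;

void main()
{
    string  a = "hello";        // UTF-8
    wstring b = to!wstring(a);  // re-encoded as UTF-16
    dstring c = to!dstring(a);  // re-encoded as UTF-32

    // All three spell the same text; only the encoding differs.
    assert(b == "hello"w && c == "hello"d);
}
```

So a char[] value is always a valid Unicode string, convertible losslessly to the other two widths, unlike a C char* which may hold arbitrary bytes.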
Re: Wide characters support in D
Ruslan Nikolaev nruslan_de...@yahoo.com wrote in message news:mailman.122.1275952601.24349.digitalmar...@puremagic.com... Ok, ok... that was just a suggestion... Thanks, for reply about Hello world representation. Was postfix w and d added initially or just recently? I did not know about it. I thought D does automatic conversion for string literals. The postfixes 'c', 'w' and 'd' have been in there a long time. But D does have a little bit of automatic conversion. Let me try to clarify: "hello"c // string, UTF-8 "hello"w // wstring, UTF-16 "hello"d // dstring, UTF-32 "hello" // Depends how you use it Suppose I have a function that takes a UTF-8 string, and I call it: void cfoo(string a) {} cfoo("hello"c); // Works cfoo("hello"w); // Error, wrong type cfoo("hello"d); // Error, wrong type cfoo("hello"); // Works, assumed to be UTF-8 string If I make a different function that takes a UTF-16 wstring instead: void wfoo(wstring a) {} wfoo("hello"c); // Error, wrong type wfoo("hello"w); // Works wfoo("hello"d); // Error, wrong type wfoo("hello"); // Works, assumed to be UTF-16 wstring And then, a UTF-32 dstring version would be similar: void dfoo(dstring a) {} dfoo("hello"c); // Error, wrong type dfoo("hello"w); // Error, wrong type dfoo("hello"d); // Works dfoo("hello"); // Works, assumed to be UTF-32 dstring As you can see, the literals with postfixes are always the exact type you specify. If you have no postfix, then you get whatever the compiler expects it to be. But, then the question is, what happens if any of those types can be used? Which does the compiler choose? void Tfoo(T)(T a) { // When compiling, display the type used. pragma(msg, T.stringof); } Tfoo("hello"); (Normally you'd want to add in a constraint that T must be one of the string types, so that no one tries to pass in an int or float or something. I skipped that in there.) In that, Tfoo isn't expecting any particular type of string, it can take any type. And "hello" doesn't have a postfix, so the compiler uses the default: UTF-8 string.
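The constraint mentioned above ("T must be one of the string types") can be written with std.traits; a sketch, assuming D2 Phobos:

```d
import std.traits : isSomeString;

// Restrict T to string/wstring/dstring: Tfoo("hello") still compiles,
// but Tfoo(42) is rejected at compile time instead of instantiating.
void Tfoo(T)(T a) if (isSomeString!T)
{
    // When compiling, display the type used.
    pragma(msg, T.stringof);
}
```

With the constraint in place, a bad call produces a clean "no match" error at the call site rather than an error deep inside the template body.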
Yes, templates may help. However, that unnecessary make code bigger (since we have to compile it for every char type). It only generates code for the types that are actually needed. If, for instance, your program never uses anything except UTF-8, then only one version of the function will be made - the UTF-8 version. If you don't use every char type, then it doesn't generate it for every char type - just the ones you choose to use. The other problem is that it allows programmer to choose which one to use. He or she may just prefer char[] as UTF-8 (or wchar[] as UTF-16). That will be fine on platform that supports this encoding natively (e.g. for file system operations, screen output, etc.), whereas it will cause conversion overhead on the other. I don't think there is any problem with having different size of char. In fact, that would make programs better (since application programmers will have to think in terms of characters as opposed to bytes). Not to say that it's a big overhead, but unnecessary one. Having said this, I do agree that there must be some flexibility (e.g. in Java char[] is always 2 bytes), however, I don't believe that this flexibility should be available for application programmer. That's not good. First of all, UTF-16 is a lousy encoding, it combines the worst of both UTF-8 and UTF-32: It's multibyte and non-word-aligned like UTF-8, but it still wastes a lot of space like UTF-32. So even if your OS uses it natively, it's still best to do most internal processing in either UTF-8 or UTF-32. (And with templated string functions, if the programmer actually does want to use the native type in the *rare* cases where he's making enough OS calls that it would actually matter, he can still do so.) Secondly, the programmer *should* be able to use whatever type he decides is appropriate.
If he wants to stick with native, he can do so, but he shouldn't be forced into choosing between "use the native encoding" and "abuse the type system by pretending that an int is a character". For instance, complex low-level text processing *relies* on knowing exactly what encoding is being used and coding specifically to that encoding. As an example, I'm currently working on a generalized parser library ( http://www.dsource.org/projects/goldie ). Something like that is complex enough already that implementing the internal lexer natively for each possible native text encoding is just not worthwhile, especially since the text hardly ever gets passed to or from any OS calls that expect any particular encoding. Or maybe you're on a fancy OS that can handle any encoding natively. Or maybe the programmer is in a low-memory (or very-large-data) situation and needs the space savings of UTF-8 regardless of OS and doesn't care about speed. Or maybe they're actually *writing*
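The earlier point that templates only generate code for the instantiations actually used can be checked with a minimal sketch (countSpaces is a made-up example function, not from the thread):

```d
// A templated string function: one body in the source...
size_t countSpaces(Char)(const(Char)[] s)
{
    size_t n;
    foreach (c; s)
        if (c == ' ')
            ++n;
    return n;
}

void main()
{
    // ...but the compiler emits only countSpaces!char here, because
    // no wchar or dchar instantiation is ever requested.
    assert(countSpaces("one two three") == 2);
}
```

Inspecting the object file (e.g. with nm) should show a single instantiation; the wchar and dchar versions exist only as source until some call asks for them.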
Re: Marketing of D - article topic ideas?
Hello Nick, Nick Sabalausky a...@a.a wrote in message news:hujd9m$11o...@digitalmars.com... Nick Sabalausky a...@a.a wrote in message news:hujd6a$11e...@digitalmars.com... Assuming, of course, a 'max' that works on a range, which would be easy enough to do. Probably something like: ElementType!T max(T range) // Corrected { return reduce!ordinaryMax(range); // Or return reduce!"a>b?a:b"(range); } Or: alias reduce!"a>b?a:b" max; God, I love D :) so we have: alias reduce!"a>b?a:b" range_max; CommonType!(staticMap!(ElementType, T)) largestElement(T...)(T args) { static assert( !is(typeof(return) == void) ); return max( map!max(args) ); } Why isn't that just one line, like: alias polyMapReduce!(reduce!"a>b?a:b", "a>b?a:b") largestElement; I'm sure a better one could be written but I think this would do it: auto polyMapReduce(alias map, string reduce, T...)(T t) { static assert(T.length > 0); static if(T.length > 1) { auto a = map(t[0]); auto b = polyMapReduce!(map,reduce)(t[1..$]); return mixin(reduce); } else return map(t[0]); } -- ... IXOYE
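For comparison, the single-range reduction above can also be spelled with Phobos' std.algorithm directly (a sketch; reduce and max are the D2 std.algorithm names):

```d
import std.algorithm : max, reduce;

void main()
{
    int[] a = [3, 1, 4];
    int[] b = [1, 5, 9];

    // reduce!max collapses each range to its largest element;
    // the outer max then picks the larger of the two results.
    auto largest = max(reduce!max(a), reduce!max(b));
    assert(largest == 9);
}
```

Passing the binary function max as an alias parameter is equivalent to the string lambda "a>b?a:b" used in the thread, just without the mixin machinery.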
Re: Marketing of D - article topic ideas?
Walter Bright wrote, on June 7 at 14:42: Leandro Lucarella wrote: Please, document this! http://d.puremagic.com/issues/show_bug.cgi?id=4230 Done. Thanks =) -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ -- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) -- It's better to taste a toad and find out it's foul than never to try it and believe it's a great pear gummy. -- Dr Ricardo Vaporesso, Malta 1951
Re: Go Programming talk [OT]
Ali Çehreli wrote, on June 7 at 14:41: Leandro Lucarella wrote: Go doesn't have exceptions, so scope(failure/success) makes no sense. You can argue about whether not having exceptions is good or bad (I don't have a strong opinion about it; sometimes I feel exceptions are nice, sometimes I think they are evil), though. Just to compare the two styles... Without exceptions, every step of the code must be checked explicitly: // C code: int foo() { int err = 0; // allocate resources err = bar(); if (err) goto finally; err = zar(); if (err) goto finally; err = car(); if (err) goto finally; finally: // do cleanup return err; } (Ordinarily, the if(err) checks are hidden inside macros like check_error, check_error_null, etc.) With exceptions, the actual code emerges: // C++ or D code void foo() { // allocate resources bar(); zar(); car(); } You are right, but when I see the former code, I know exactly what is going on, and when I see the latter code I don't have a clue how errors are handled, or if they are handled at all. And try adding the try/catch statements; the code is even more verbose than the code without exceptions. It's a trade-off. When you don't handle the errors, exceptions might be a win, but when you do handle them, I'm not so sure. And again, I'm not saying I particularly like one more than the other, I don't have a strong opinion =) -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ -- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) -- What did Galileo know about astronomy, Mendieta! The thing is, in this country anybody gets to talk. -- Inodoro Pereyra
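For what it's worth, D's scope guards (where this subthread started) sit between the two styles: cleanup stays visible at the point of acquisition, but no err variable is threaded through every call. A sketch with hypothetical acquire/release/bar/zar/car helpers, not from the thread:

```d
int  acquire()      { return 42; }  // hypothetical resource handle
void release(int r) { }             // hypothetical cleanup
void bar() { }                      // any of these could throw...
void zar() { }
void car() { }

void foo()
{
    auto r = acquire();
    scope(exit) release(r);   // runs on normal return *and* on throw
    bar();
    zar();
    car();                    // cleanup stays one visible line, like the
}                             // C goto-finally, but with no if (err) checks
```

scope(failure) and scope(success) refine this further, running only on the throwing or non-throwing path respectively.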
Re: Wide characters support in D
Ruslan Nikolaev wrote: Just one more addition: it is possible to have built-in function that converts multibyte (or multiword) char sequence (even though in my proposal it can be of different size) to dchar (UTF-32) character. Again, my only point is that it would be nice to have something similar to TCHAR so that all libraries can use it if they choose not to provide functions for all 3 types. 2Walter: Yes, programmers do often ignore surrogate pairs in case of UTF-16. But in case of undetermined char size (1 or 2 bytes) they will have to use special builtin conversion functions to dchar unless they want their code to be completely broken. The nice thing about char[] is that you'll find out real fast if your multibyte code is broken. With surrogate pairs in wchar[], the bug may lurk undetected for a decade.
Re: Wide characters support in D
It only generates code for the types that are actually needed. If, for instance, your program never uses anything except UTF-8, then only one version of the function will be made - the UTF-8 version. If you don't use every char type, then it doesn't generate it for every char type - just the ones you choose to use. Not quite right. If we create system dynamic libraries or commonly used dynamic libraries, we will have to compile every instance unless we want to burden the user with this. Otherwise, the same code will be duplicated in users' programs over and over again. That's not good. First of all, UTF-16 is a lousy encoding, it combines the worst of both UTF-8 and UTF-32: It's multibyte and non-word-aligned like UTF-8, but it still wastes a lot of space like UTF-32. So even if your OS uses it natively, it's still best to do most internal processing in either UTF-8 or UTF-32. (And with templated string functions, if the programmer actually does want to use the native type in the *rare* cases where he's making enough OS calls that it would actually matter, he can still do so.) First of all, UTF-16 is not a lousy encoding. It requires 2 bytes for most characters (not such big wastage, especially if you consider other languages). Only for REALLY rare chars do you need 4 bytes. Whereas UTF-8 will require from 1 to 3 bytes for the same common characters, and also 4 bytes for REALLY rare ones. In UTF-16 a surrogate is an exception whereas in UTF-8 it is the rule (when something is an exception, it won't affect performance in most cases; when something is the rule, it will). Finally, UTF-16 is used by a variety of systems/tools: Windows, Java, C#, Qt and many others. Developers of these systems chose to use UTF-16 even though some of them (e.g. Java, C#, Qt) were developed in the era of UTF-8. Secondly, the programmer *should* be able to use whatever type he decides is appropriate. If he wants to stick with native, he can do Why?
He/She can just use conversion to UTF-32 (dchar) whenever a better understanding of characters is needed. At least, that's what should be done anyway. You can have that easily: version(Windows) alias wstring tstring; else alias string tstring; See, that's my point. Nobody is going to do this unless the above is standardized by the language. Everybody will stick to something particular (either char or wchar). With templated text functions, there is very little benefit to be gained from having a unified char. Just wouldn't serve any real see my comment above about templates and dynamic libraries Ruslan
Re: Wide characters support in D
Steven Schveighoffer wrote: a function that takes a char[] can also take a dchar[] if it is sent through a converter (i.e. toUtf8 on Tango I think). In Phobos, there are text, wtext, and dtext in std.conv: /** Convenience functions for converting any number and types of arguments into _text (the three character widths). Example: assert(text(42, ' ', 1.5, ": xyz") == "42 1.5: xyz"); assert(wtext(42, ' ', 1.5, ": xyz") == "42 1.5: xyz"w); assert(dtext(42, ' ', 1.5, ": xyz") == "42 1.5: xyz"d); */ Ali
Is the declaration grammar definition of 'Parameter' correct?
http://www.digitalmars.com/d/2.0/declaration.html So, cut down:

Decl
    BasicType Declarators ;
BasicType
    int
    ...
BasicType2
    * [] and co
    function Parameters
Parameter
    Declarator
    ...
Declarator
    BasicType2 Identifier DeclaratorSuffixes

(the suffixes are [], [assignexpr], [type] and a template parameter list) So given all that, I can't see how this: int function(int, int) a; can be parsed with that grammar. Additionally, Declarator requires an identifier, so wouldn't that make this: int function(* a, [] b) c; a valid Decl according to that grammar? I think this is seriously incorrect, but I would be open to correction! :D
Re: Wide characters support in D
On Mon, 07 Jun 2010 19:26:02 -0700, Ruslan Nikolaev wrote: It only generates code for the types that are actually needed. If, for instance, your program never uses anything except UTF-8, then only one version of the function will be made - the UTF-8 version. If you don't use every char type, then it doesn't generate it for every char type - just the ones you choose to use. Not quite right. If we create system dynamic libraries or dynamic libraries commonly used, we will have to compile every instance unless we want to burden user with this. Otherwise, the same code will be duplicated in users program over and over again. I think you really need to look more into what templates are and do. There is also going to be very little performance gain from using the system type for strings, considering that most of the work is likely not going to be in the system calls you mentioned, but within D itself.
Re: Wide characters support in D
--- On Tue, 6/8/10, Jesse Phillips jessekphillip...@gmail.com wrote: I think you really need to look more into what templates are and do. Excuse me? Unless templates are something different in D (I can't be 100% sure since I am new to D), it should be the case. At least in C++, that would be the case. As I said, for libraries you need to compile every commonly used instance, so that the user will not be burdened with this overhead. http://www.digitalmars.com/d/2.0/template.html There is also going to be very little performance gain by using the system type for strings. Considering that most of the work is not likely going be to the system commands you mentioned, but within D itself. It depends. For instance, if you work with files, write to console output, use system functions, use the Win32 API, DFL, there can be overhead.
Re: Go Programming talk [OT]
Thanks, the important thing to note is that D can do what Go was doing in the example. Sorry, bearophile. On Mon, 07 Jun 2010 19:55:06 -0500, Andrei Alexandrescu wrote: On 06/07/2010 07:44 PM, Jesse Phillips wrote: On Sun, 06 Jun 2010 18:13:36 -0400, bearophile wrote: At 9.30 you can see the switch used on a type type :-) You can see a similar example here: http://golang.org/src/pkg/exp/datafmt/datafmt.go Look for the line "switch t := fexpr.(type) {" ... Bye, bearophile That isn't a type type. Untested D code: void fun(T, U)(T op, U y) { switch(typeof(y)) { case immutable(char)[]: case int: } } Actually the uses are not equivalent. A closer example is: import std.stdio; class A {} void main() { Object a = new A; switch (typeid(a).name) { case "object.Object": writeln("it's an object"); break; case "test.A": writeln("yeah, it's an A"); break; default: writeln("default: ", typeid(a).name); break; } } Go stores the dynamic types together with objects, so what looks like a simple typedef for int is in fact a full-fledged class with one data member. Those objects are stored on the garbage-collected heap. Andrei
Re: Is the declaration grammar definition of 'Parameter' correct?
Yeah, it's wrong. (close reads of parse.c are much more useful than reading the spec. heh.) A peek in my grammar and... Parameter: ... BasicType Declarator BasicType Declarator = AssignExpression BasicType Declarator ... Type Type ... I probably should have filed bug reports back when I was going through the grammar. Oh well.
Re: Is the declaration grammar definition of 'Parameter' correct?
On 08/06/10 16:00, Ellery Newcomer wrote: Yeah, it's wrong. (close reads of parse.c are much more useful than reading the spec. heh.) A peek in my grammar and... Parameter: ... BasicType Declarator BasicType Declarator = AssignExpression BasicType Declarator ... Type Type ... I probably should have filed bug reports back when I was going through the grammar. Oh well. Hmm. On the same page, Declarator has an identifier in it. Which means I still couldn't parse int function(int, int) with it, no?
Re: Wide characters support in D
Yes, to clarify what I suggest, I can put it as follows (2 possibilities): 1. Have special standardized types tchar and tstring. Then, system libraries as well as users can use these types unless they want to do something special. There can be a compiler switch to change tchar width (essentially, to assign tchar to char, wchar or dchar), so that for each platform it can be used accordingly. In addition, tmain(tstring[] args) can be used as the entry point; _topen, _treaddir, _tfopen, etc. can be added to the binding. Adv: doesn't break existing code. Disadv: tchar and tstring may look weird to users. 2. Rename the current char to bchar or schar, or something similar. Then 'char' can be used as the type described above. Adv: users are likely to use this type. Disadv: may break existing code, especially bindings. I think having something (at least (1)) would be a nice feature and addition to D. Although, I do admit that there can be different opinions about it. However, TCHAR in Windows DOES work fine. In the case described above it's even better since we always work with Unicode (UTF-8/16/32), unlike Windows (which uses ANSI for its 1-byte char), thus everything should be more or less transparent. It would be cool to hear something from D, phobos and tango developers. P.S. For commonly used characters (e.g. '\n') the size of char will never make any difference. Problems should not occur in good code, or should occur really rarely (which can be adjusted by the programmer). Thanks, Ruslan Nikolaev
Re: Wide characters support in D
Hello Ruslan, --- On Tue, 6/8/10, Jesse Phillips jessekphillip...@gmail.com wrote: I think you really need to look more into what templates are and do. As I said, for libraries you need to compile every commonly used instance, so that user will not be burdened with this overhead. You only need to do that where you are shipping closed source, and for that, it should be trivial to get the compiler to generate all three versions. There is also going to be very little performance gain by using the system type for strings. Considering that most of the work is not likely going be to the system commands you mentioned, but within D itself. It depends. For instance, if you work with files, write on the console output, use system functions, use Win32 api, DFL, there can be overhead. You're right: it depends. In the few cases I can think of where more of the D code will be interacting with non-D code than just processing the text, you could almost use void[] as your type. Where would you care about the encoding but not do much work with it? Also, unless you have large amounts of text, you are going to have to work hard to get perf problems. If you do have large amounts of text, you are going to be I/O bound (cache misses etc.) and at that point, the cost of any operation is its I/O. From that, reading in some data, doing a single pass of processing on it and writing it back out would only take 2/3 longer with translations on both sides. -- ... IXOYE
[ot] D users at Google
IIRC there are a few D users who work for Google (I know there is now at least one :D ) but I don't remember who. For that matter, are there other D users in the Mountain View/San Jose area? -- ... IXOYE
Re: Is the declaration grammar definition of 'Parameter' correct?
On 06/07/2010 11:06 PM, Bernard Helyer wrote: On 08/06/10 16:00, Ellery Newcomer wrote: Yeah, it's wrong. (close reads of parse.c are much more useful than reading the spec. heh.) A peek in my grammar and... Parameter: ... BasicType Declarator BasicType Declarator = AssignExpression BasicType Declarator ... Type Type ... I probably should have filed bug reports back when I was going through the grammar. Oh well. Hmm. On the same page, Declarator has an identifier in it. Which means I still couldn't parse int function(int, int) with it, no? Eh? Parameter |= Type |= BasicType Declarator2 |= int Declarator2 |= int wait, are you talking about the params inside the function type, or the whole thing as a param? I'm pretty sure it works either way.
Re: Wide characters support in D
You only need to do that where you are shipping closed source and for that, it should be trivial to get the compiler to generate all three versions. You will also need to do it in open source projects if you want to include generated template code in a dynamic library as opposed to the user's program (read as: an unnecessary space burden where code is repeated over and over again across user programs). But, yes, closed source programs are a good particular example. True, you can compile all 3 versions. But the whole argument was about additional generated code, which someone claimed would not happen. Your, right: it depends. In the few cases I can think of where more of the D code will be interacting with non D code than just processing the text, you could almost use void[] as your type. Where would you care about the encoding but not do much worth it? Also unless you have large amounts of text, you are going to have to work hard to get perf problems. If you do have large amounts of text, you are going to be I/O bound (cache misses etc.) and at that point, the cost of any operation, is it's I/O. From that, Reading in some date, doing a single pass of processing on it and writing it back out would only take 2/3 long with translations on both side. True. But even simple string handling is faster for UTF-16. The time required to read 2 bytes from a UTF-16 string is the same as for 1 byte from a UTF-8 string. Generally, we have to read one code point after another (not more than this), since data is guaranteed to be aligned on a 2-byte boundary for wchar and 1 byte for char. Not to mention that converting 2 code points takes less time in UTF-16. And why not use this opportunity if the system already natively supports this? In addition, I want to mention that reading/writing a file in text mode is very transparent. For instance, in Windows, the conversion from multibyte to Unicode will happen automatically for open, fopen, etc. when text mode is specified.
In general, it is good practice since 1-byte char text is not necessarily UTF-8 anyway and can be ANSI as well. Also, some other OSes use 2-byte UTF-16 natively, so it's not just Windows. If I am not wrong, Symbian should be one such example.
Re: Is the declaration grammar definition of 'Parameter' correct?
On 08/06/10 17:19, Ellery Newcomer wrote: On 06/07/2010 11:06 PM, Bernard Helyer wrote: On 08/06/10 16:00, Ellery Newcomer wrote: Yeah, it's wrong. (close reads of parse.c are much more useful than reading the spec. heh.) A peek in my grammar and... Parameter: ... BasicType Declarator BasicType Declarator = AssignExpression BasicType Declarator ... Type Type ... I probably should have filed bug reports back when I was going through the grammar. Oh well. Hmm. On the same page, Declarator has an identifier in it. Which means I still couldn't parse int function(int, int) with it, no? Eh? Parameter |= Type |= BasicType Declarator2 |= int Declarator2 |= int wait, are you talking about the params inside the function type, or the whole thing as a param? I'm pretty sure it works either way. Parameter doesn't resolve to Type, not that I can see...
Re: Wide characters support in D
Ruslan Nikolaev nruslan_de...@yahoo.com wrote in message news:mailman.124.1275963971.24349.digitalmar...@puremagic.com... Nick wrote: It only generates code for the types that are actually needed. If, for instance, your progam never uses anything except UTF-8, then only one version of the function will be made - the UTF-8 version. If you don't use every char type, then it doesn't generate it for every char type - just the ones you choose to use. Not quite right. If we create system dynamic libraries or dynamic libraries commonly used, we will have to compile every instance unless we want to burden user with this. Otherwise, the same code will be duplicated in users program over and over again. That's a rather minor issue. I think you're overestimating the amount of bloat that occurs from having one string type versus three string types. Absolute worst case scenario would be a library that contains nothing but text-processing functions. That would triple in size, but what's the biggest such lib you've ever seen anyway? And for most libs, only a fraction is going to be taken up by text processing, so the difference won't be particularly large. In fact, the difference would likely be dwarfed anyway by the bloat incurred from all the other templated code (ie which would be largely unaffected by number of string types), and yes, *that* can get to be a problem, but it's an entirely separate one. That's not good. First of all, UTF-16 is a lousy encoding, it combines the worst of both UTF-8 and UTF-32: It's multibyte and non-word-aligned like UTF-8, but it still wastes a lot of space like UTF-32. So even if your OS uses it natively, it's still best to do most internal processing in either UTF-8 or UTF-32. (And with templated string functions, if the programmer actually does want to use the native type in the *rare* cases where he's making enough OS calls that it would actually matter, he can still do so.) First of all, UTF-16 is not a lousy encoding. 
It requires for most characters 2 bytes (not so big wastage especially if you consider other languages). Only for REALLY rare chars do you need 4 bytes. Whereas UTF-8 will require from 1 to 3 bytes for the same common characters. And also 4 chars for REALLY rare ones. In UTF-16 surrogate is an exception whereas in UTF-8 it is a rule (when something is an exception, it won't affect performance in most cases; when something is a rule - it will affect). Maybe "lousy" is too strong a word, but aside from compatibility with other libs/software that use it (which I'll address separately), UTF-16 is not particularly useful compared to UTF-8 and UTF-32: Non-latin-alphabet language: UTF-8 vs UTF-16: The real-world difference in sizes is minimal. But UTF-8 has some advantages: The nature of the encoding makes backwards-scanning cheaper and easier. Also, as Walter said, bugs in the handling of multi-code-unit characters become fairly obvious. Advantages of UTF-16: None. Latin-alphabet language: UTF-8 vs UTF-16: All the same UTF-8 advantages for non-latin-alphabet languages still apply, plus there's a space savings: Under UTF-8, *most* characters are going to be 1 byte. Yes, there will be the occasional 2+ byte character, but they're so much less common that the overhead compared to ASCII (I'm only using ASCII as a baseline here, for the sake of comparisons) would only be around 0% to 15% depending on the language. UTF-16, however, has a consistent 100% overhead (slightly more when you count surrogate pairs, but I'll just leave it at 100%). So, depending on language, UTF-16 would be around 70%-100% larger than UTF-8. That's not insignificant. Any language: UTF-32 vs UTF-16: Using UTF-32 takes up extra space, but when that matters, UTF-8 already has the advantage over UTF-16 anyway regardless of whether or not UTF-8 is providing a space savings (see above), so the question of UTF-32 vs UTF-16 becomes useless.
The rest of the time, UTF-32 has these advantages: a guaranteed one code unit per character, and a code-unit size that is faster on typical CPUs (typical CPUs generally handle 32 bits faster than they handle 8 or 16 bits). Advantages of UTF-16: none. So compatibility with certain tools/libs is really the only reason ever to choose UTF-16.

Finally, UTF-16 is used by a variety of systems/tools: Windows, Java, C#, Qt and many others. Developers of these systems chose to use UTF-16 even though some of them (e.g. Java, C#, Qt) were developed in the era of UTF-8.

First of all, it's not exactly unheard of for big projects to make a sub-optimal decision. Secondly, Java and Windows adopted 16-bit encodings back when many people were still under the mistaken impression that this would allow them to hold any character in one code unit. If that had been true, then it would indeed have had at least certain advantages over UTF-8. But by the time the programming world at large knew better, it was too late for Java
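For reference, a minimal D sketch of the size differences discussed above; the byte counts assume the particular literal shown (with two non-ASCII characters):

```d
import std.stdio;

void main()
{
    // The same text encoded as D's three string types.
    string  s8  = "résumé";    // UTF-8:  char code units
    wstring s16 = "résumé"w;   // UTF-16: wchar code units
    dstring s32 = "résumé"d;   // UTF-32: dchar code units

    // .length counts code units, so multiply by the unit size.
    writeln(s8.length  * char.sizeof);   // 8 bytes (each é takes 2)
    writeln(s16.length * wchar.sizeof);  // 12 bytes
    writeln(s32.length * dchar.sizeof);  // 24 bytes
}
```

For latin-alphabet text the UTF-8 figure approaches 6 bytes while the UTF-16 figure stays at 12, which is the ~100% overhead described above.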
Re: delegates with C linkage
Simen kjaeraas wrote: dennis luehring dl.so...@gmx.net wrote: D still won't accept a delegate in an extern(C) declaration because this type does not exist in the C world. Nor do classes, and those certainly can be passed to a C-linkage function.

Yes, but I think that's a bug too. Quite a horrible one, in fact, since the class may get GC'd. On the interfaceToC page, class, type[], type[type] and delegate() are listed as having no C equivalent. They should all fail to compile.
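For what it's worth, the usual workaround is a trampoline: a C API that takes a plain function pointer plus a void* context can still drive a D delegate. A minimal sketch, where c_api_register is a hypothetical C function assumed only for illustration:

```d
// Hypothetical C-side registration function (assumption, not a real API).
extern(C) void c_api_register(void* cb, void* context);

// extern(C) trampoline that recovers the delegate from the context slot.
extern(C) int trampoline(int value, void* context)
{
    auto dg = *cast(int delegate(int)*) context;
    return dg(value);
}

void useWithC(int delegate(int) dg)
{
    // The delegate (and anything it closes over) must stay alive
    // for as long as the C side may invoke the callback.
    c_api_register(cast(void*) &trampoline, cast(void*) &dg);
}
```

This keeps the delegate itself out of the extern(C) signature, which is consistent with the view that delegates should be rejected there.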
Re: Handy templates
Simen kjaeraas simen.kja...@gmail.com wrote:

Another few that showed up now with my work on combinatorial products of ranges:

/**
  Determines whether a template parameter is a type or a value (alias).
  Example:
    template foo( T... ) if ( allSatisfy!( isAlias, T ) ) {...}
*/
template isAlias( alias T ) { enum isAlias = true; }
template isAlias( T ) { enum isAlias = false; }

/**
  Switches between template instantiations depending on the parameters passed.
  Example:
    alias staticSwitch!( foo, 1, 2, 3 ).With callFoo;
    callFoo( 2 ); // Actually calls foo!(2)( )
*/
template staticSwitch( alias F, T... ) if ( allSatisfy!( isAlias, T ) )
{
    auto With( CommonType!T index, ParameterTypeTuple!( F!( T[0] ) ) args )
    {
        switch ( index )
        {
            foreach ( i, e; T )
            {
                mixin( Format!( q{case %s:}, e ) );
                return F!( e )( args );
                break;
            }
        }
        assert( false );
    }
}

version( unittest ) { int foo( int n ) { return n; } }
unittest
{
    assert( staticSwitch!( foo, 1, 2 ).With( 2 ) == 2 );
}

The latter does not currently work, due to bug 4292, but a patch has been submitted. Granted, a simple template would work around that problem, but better to remove it at the root.

Philippe, if you or anyone else want to add any of these templates to your dranges or their own collection of templates, I would be pleased to allow it. But please do give credit. -- Simen
Tuple to tuple conversion
Sounds stupid, don't it?

Carrying in my hat my trusty foo, a std.typecons.Tuple!(float), I want to use it as a parameter to a function taking non-tuple parameters, i.e. a single float.

foo.tupleof gives me an unwieldy conglomerate of tuple((Tuple!(float))._field_field_0, (Tuple!(float))._0). First, I'm not sure what all of this means; second, I'm completely sure it does not mean what I want.

foo.field seems much closer to what I want, returning a nice and clean (float) when I ask for it. However, doing so in the context of being a function parameter yields other problems, in the form of:

src\phobos\std\typecons.d(424): Error: static assert (is(Tuple!(string,float) == Tuple!(string,float))) is false
src\phobos\std\typecons.d(413): instantiated from here: Tuple!(string,float)
src\phobos\std\typecons.d(423): instantiated from here: slice!(1,3)
problem.d(15): 3 recursive instantiations from here: Tuple!(float)

Especially interesting might be line 424, as that assert ought to be true in most cases. I guess what I'm asking for here is, is there a way to do what I want? -- Simen
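For readers hitting the same question: the Tuple's members are exposed as a built-in type tuple that expands in argument lists. A minimal sketch; note that in the Phobos of this thread the member was spelled .field, while current Phobos spells it .expand:

```d
import std.typecons : tuple;

void takesFloat(float f) { }

void main()
{
    auto foo = tuple(1.5f);
    // .expand (formerly .field) yields the members as a type tuple,
    // which expands into the argument list:
    takesFloat(foo.expand);
    // Indexing also yields the bare value:
    takesFloat(foo[0]);
}
```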
Re: Tuple to tuple conversion
Simen kjaeraas simen.kja...@gmail.com wrote: I guess what I'm asking for here is, is there a way to do what I want? Hm, it seems the problem was not where I thought it was. However, this is getting curiouser and curiouser. -- Simen
why is this cast necessary?
Hi folks, This program works as expected in D2:

import std.stdio;
import std.algorithm;

T largestSubelement(T)(T[][] lol)
{
    alias reduce!"a>b ? a : b" max;
    return cast(T) max(map!max(lol)); // the cast matters...
}

void main()
{
    auto a = [[1,2,3],[4,5,6],[8,9,7]];
    assert (largestSubelement(a) == 9);
    auto b = ["howdy", "pardner"];
    assert (largestSubelement(b) == 'y');
    auto c = [[1u, 3u, 45u, 2u], [29u, 1u]];
    assert (largestSubelement(c) == 45u);
}

But if I leave out the 'cast(T)' in line 7, then this program will not compile:

lse.d(6): Error: cannot implicitly convert expression (reduce(map(lol))) of type dchar to immutable(char)
lse.d(14): Error: template instance lse.largestSubelement!(immutable(char)) error instantiating

Where did the 'dchar' come from? And why does the cast resolve the issue?

Best, Graham
Re: why is this cast necessary?
On Mon, 07 Jun 2010 23:02:48 -0400, Graham Fawcett fawc...@uwindsor.ca wrote:

Hi folks, This program works as expected in D2:

import std.stdio;
import std.algorithm;

T largestSubelement(T)(T[][] lol)
{
    alias reduce!"a>b ? a : b" max;
    return cast(T) max(map!max(lol)); // the cast matters...
}

void main()
{
    auto a = [[1,2,3],[4,5,6],[8,9,7]];
    assert (largestSubelement(a) == 9);
    auto b = ["howdy", "pardner"];
    assert (largestSubelement(b) == 'y');
    auto c = [[1u, 3u, 45u, 2u], [29u, 1u]];
    assert (largestSubelement(c) == 45u);
}

But if I leave out the 'cast(T)' in line 7, then this program will not compile:

lse.d(6): Error: cannot implicitly convert expression (reduce(map(lol))) of type dchar to immutable(char)
lse.d(14): Error: template instance lse.largestSubelement!(immutable(char)) error instantiating

Where did the 'dchar' come from? And why does the cast resolve the issue?

In a recent update, Andrei changed char[] and wchar[] to bi-directional ranges of dchar instead of straight arrays (at least, I think that was the change) in the eyes of the range types. I think this is where the dchar comes from. If you had a char[], and the 'max' element was a character encoded as 2 code units, how would you return a single char for that result? -Steve
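The behavior Steve describes can be checked directly: the range machinery reports the element type of char[] and wchar[] as the decoded dchar, not the raw code unit. A small compile-time sketch:

```d
import std.range : ElementType;

// char[] and wchar[] are presented to ranges as sequences of
// decoded dchar, not of their raw code units:
static assert(is(ElementType!string  == dchar));
static assert(is(ElementType!wstring == dchar));
static assert(is(ElementType!dstring == dchar));
// Non-string arrays keep their element type:
static assert(is(ElementType!(int[]) == int));
```

So reduce over a string produces a dchar, which explains the error message and why the cast back to T is needed.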
[Issue 4175] linux.mak doesn't declare sufficient dependencies to support parallel builds
http://d.puremagic.com/issues/show_bug.cgi?id=4175

Walter Bright bugzi...@digitalmars.com changed:

           What       |Removed    |Added
           -----------+-----------+---------------------------
           Status     |NEW        |RESOLVED
           CC         |           |bugzi...@digitalmars.com
           Resolution |           |FIXED

--- Comment #1 from Walter Bright bugzi...@digitalmars.com 2010-06-06 22:56:45 PDT ---
http://www.dsource.org/projects/dmd/changeset/524

--
Configure issuemail: http://d.puremagic.com/issues/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
[Issue 3634] return value not passed to out contracts of private methods
http://d.puremagic.com/issues/show_bug.cgi?id=3634

Don clugd...@yahoo.com.au changed:

           What       |Removed    |Added
           -----------+-----------+---------------------------
           Status     |NEW        |RESOLVED
           CC         |           |clugd...@yahoo.com.au
           Resolution |           |DUPLICATE

--- Comment #3 from Don clugd...@yahoo.com.au 2010-06-07 00:11:57 PDT ---
This was a regression in 2.037. Fixed in the beta of 2.047.

*** This issue has been marked as a duplicate of issue 3667 ***
[Issue 3667] Regression(D2 only): broken out(result) in contracts
http://d.puremagic.com/issues/show_bug.cgi?id=3667

--- Comment #8 from Don clugd...@yahoo.com.au 2010-06-07 00:11:57 PDT ---
*** Issue 3634 has been marked as a duplicate of this issue. ***
[Issue 4291] New: Pure functions cannot access mixed in variables
http://d.puremagic.com/issues/show_bug.cgi?id=4291

           Summary: Pure functions cannot access mixed in variables
           Product: D
           Version: 2.041
          Platform: All
        OS/Version: All
            Status: NEW
          Keywords: rejects-valid
          Severity: normal
          Priority: P2
         Component: DMD
        AssignedTo: nob...@puremagic.com
        ReportedBy: rsi...@gmail.com

--- Comment #0 from Shin Fujishiro rsi...@gmail.com 2010-06-07 02:40:13 PDT ---
DMD raises a compiler error when a mixed-in variable is used in a pure function.

void test() pure
{
    mixin declareVariable;
    var = 42;   // Error: pure nested function 'test' cannot access
                // mutable data 'var'
}
template declareVariable() { int var; }

The mixed-in variable var should be treated as if it were directly declared in test()'s scope. So the above code should be correct and accepted.
[Issue 4291] Pure functions cannot access mixed in variables
http://d.puremagic.com/issues/show_bug.cgi?id=4291

Shin Fujishiro rsi...@gmail.com changed:

           What       |Removed    |Added
           -----------+-----------+---------------------------
           Keywords   |           |patch

--- Comment #1 from Shin Fujishiro rsi...@gmail.com 2010-06-07 02:41:25 PDT ---
Patch against DMD r524:

--- src/expression.c
+++ src/expression.c
@@ -4406,7 +4406,7 @@ Expression *VarExp::semantic(Scope *sc)
             error("pure function '%s' cannot access mutable static data '%s'",
                 sc->func->toChars(), v->toChars());
         }
-        else if (sc->func->isPure() && sc->parent != v->parent &&
+        else if (sc->func->isPure() && sc->parent->pastMixin() != v->parent->pastMixin() &&
             !v->isImmutable() && !(v->storage_class & STCmanifest))
         {

The patched code also deals with the function's scope (sc->parent->pastMixin()) so that this valid code is accepted:

void test() pure
{
    mixin declareVariable;
    mixin declareFunction;
    readVar();
}
template declareVariable() { int var; }
template declareFunction() { int readVar() { return var; } }
[Issue 3822] alloca() can return the same address inside a function
http://d.puremagic.com/issues/show_bug.cgi?id=3822

--- Comment #3 from bearophile_h...@eml.cc 2010-06-07 04:04:04 PDT ---
Maybe the alloca() used by dmd frees memory as soon as the current scope is left, instead of deferring all deallocation until function exit. See: http://compilers.iecc.com/comparch/article/91-12-079

The D documentation has to explain how exactly its alloca() works.
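The two candidate behaviors can be told apart with a small probe. A D sketch under the assumption that alloca is the one declared in core.stdc.stdlib (older compilers exposed it from std.c.stdlib):

```d
import core.stdc.stdlib : alloca;  // declaration site is an assumption

void probe()
{
    void*[2] seen;
    foreach (i; 0 .. 2)
    {
        // Each loop body is its own scope. If the implementation
        // releases alloca'd memory at end of scope, both iterations
        // may receive the same address; if it releases only at
        // function exit, they cannot.
        seen[i] = alloca(16);
    }
    // Compare seen[0] and seen[1] to observe which behavior you get.
}
```

With C semantics (release at function exit), seen[0] and seen[1] must differ; equal addresses would demonstrate the end-of-scope behavior the report describes.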
[Issue 3822] Memory allocated with alloca() is freed at end of scope instead at end of function
http://d.puremagic.com/issues/show_bug.cgi?id=3822

nfx...@gmail.com changed:

           What       |Removed                    |Added
           -----------+---------------------------+---------------------------
           Keywords   |                           |wrong-code
           CC         |                           |nfx...@gmail.com
           Summary    |alloca() can return the    |Memory allocated with
                      |same address inside a      |alloca() is freed at end of
                      |function                   |scope instead at end of
                      |                           |function

--- Comment #4 from nfx...@gmail.com 2010-06-07 04:17:53 PDT ---
C code that compiles in D without modification should work exactly as it does in C. This means this is a rather bad code gen bug.
[Issue 4192] Regression (1.061, D1 only): Certain CTFs can't be evaluated anymore
http://d.puremagic.com/issues/show_bug.cgi?id=4192

Don clugd...@yahoo.com.au changed:

           What       |Removed    |Added
           -----------+-----------+---------------------------
           Status     |NEW        |RESOLVED
           Resolution |           |DUPLICATE

--- Comment #1 from Don clugd...@yahoo.com.au 2010-06-07 04:42:01 PDT ---
Fixed in svn commit 498.

*** This issue has been marked as a duplicate of issue 4210 ***
[Issue 4210] Random crashes / heisenbugs caused by dmd commit 478: compiler messes up vtables
http://d.puremagic.com/issues/show_bug.cgi?id=4210

Don clugd...@yahoo.com.au changed:

           What       |Removed    |Added
           -----------+-----------+---------------------------
           CC         |           |aziz.koek...@gmail.com

--- Comment #7 from Don clugd...@yahoo.com.au 2010-06-07 04:42:01 PDT ---
*** Issue 4192 has been marked as a duplicate of this issue. ***
[Issue 3562] Static Array ops create duplicate method definitions
http://d.puremagic.com/issues/show_bug.cgi?id=3562

Don clugd...@yahoo.com.au changed:

           What       |Removed    |Added
           -----------+-----------+---------------------------
           CC         |           |clugd...@yahoo.com.au
           Severity   |regression |normal

--- Comment #1 from Don clugd...@yahoo.com.au 2010-06-07 06:53:13 PDT ---
To reproduce:

dmd -c hello
dmd -c goodbye
dmd hello.obj goodbye.obj
Error 1: Previous Definition Different : __arraySliceNegSliceAssign_f

This isn't a regression. It behaved in exactly the same way on DMD 2.020.
[Issue 3569] DMD Stack Overflow with a struct member function inside a C-style struct initializer
http://d.puremagic.com/issues/show_bug.cgi?id=3569

Don clugd...@yahoo.com.au changed:

           What       |Removed        |Added
           -----------+---------------+---------------------------
           Keywords   |rejects-valid  |ice-on-valid-code

--- Comment #5 from Don clugd...@yahoo.com.au 2010-06-07 06:59:26 PDT ---
This is causing a stack overflow again. I don't know why I thought it was fixed.
[Issue 3420] [PATCH] Allow string import of files using subdirectories
http://d.puremagic.com/issues/show_bug.cgi?id=3420

Don clugd...@yahoo.com.au changed:

           What       |Removed    |Added
           -----------+-----------+---------------------------
           Keywords   |patch      |
           CC         |           |clugd...@yahoo.com.au

--- Comment #18 from Don clugd...@yahoo.com.au 2010-06-07 09:08:52 PDT ---
Removing the 'patch' keyword -- there's no patch for the systems for which the issue still applies.
[Issue 3569] DMD Stack Overflow with a struct member function inside a C-style struct initializer
http://d.puremagic.com/issues/show_bug.cgi?id=3569

Don clugd...@yahoo.com.au changed:

           What       |Removed    |Added
           -----------+-----------+---------------------------
           Keywords   |           |patch

--- Comment #6 from Don clugd...@yahoo.com.au 2010-06-07 13:24:35 PDT ---
The fact that struct initializers are evaluated at compile time is bug 3809; only the stack overflow is unique to this bug.

Some tough cases for the test suite:

template Compileable(int z) { bool OK; }

struct Bug3569 { int bar() { return 7; } }

struct Bug3569b
{
    Bug3569 foo;
    void crash()
    {
        static assert(!is(typeof( Compileable!( foo.bar() ) )));
        static assert(!is(typeof( Compileable!( (foo = Bug3569.init).bar() ) )));
    }
}

PATCH:

Index: interpret.c
===================================================================
--- interpret.c (revision 524)
+++ interpret.c (working copy)
@@ -1110,8 +1110,9 @@
 Expression *ThisExp::interpret(InterState *istate)
 {
-    if (istate->localThis)
+    if (istate && istate->localThis)
         return istate->localThis->interpret(istate);
+    error("value of 'this' is not known at compile time");
     return EXP_CANT_INTERPRET;
 }
@@ -2105,6 +2106,11 @@
 #endif
     Expression *e = EXP_CANT_INTERPRET;
     Expression *e1 = this->e1;
+    if (!istate)
+    {
+        error("value of %s is not known at compile time", e1->toChars());
+        return e;
+    }
     if (fp)
     {
[Issue 4230] version(unittest)
http://d.puremagic.com/issues/show_bug.cgi?id=4230

Walter Bright bugzi...@digitalmars.com changed:

           What       |Removed    |Added
           -----------+-----------+---------------------------
           Status     |NEW        |RESOLVED
           CC         |           |bugzi...@digitalmars.com
           Resolution |           |FIXED

--- Comment #5 from Walter Bright bugzi...@digitalmars.com 2010-06-07 14:42:30 PDT ---
This is for D2 only. changeset 1580
[Issue 3831] writeln of a delegate typeid
http://d.puremagic.com/issues/show_bug.cgi?id=3831

--- Comment #2 from bearophile_h...@eml.cc 2010-06-07 14:45:46 PDT ---
Another example, with functions too:

import std.stdio: writeln;

int foo(float x) { return 0; }

void main()
{
    writeln(typeid(typeof(foo)));   // int()

    int bar(float x) { return 0; }
    writeln(typeid(typeof(bar)));   // int()

    int delegate(float) baz = (float x) { return 0; };
    writeln(typeid(typeof(baz)));   // int delegate()
}
[Issue 4292] New: [PATCH] CommonType fails for singular alias value
http://d.puremagic.com/issues/show_bug.cgi?id=4292

           Summary: [PATCH] CommonType fails for singular alias value
           Product: D
           Version: unspecified
          Platform: All
        OS/Version: All
            Status: NEW
          Severity: normal
          Priority: P2
         Component: Phobos
        AssignedTo: nob...@puremagic.com
        ReportedBy: simen.kja...@gmail.com

--- Comment #0 from Simen Kjaeraas simen.kja...@gmail.com 2010-06-07 16:35:07 PDT ---
std.traits.CommonType does not correctly handle the situation of one single value (non-type) parameter, i.e. CommonType!3. Solution here:

template CommonType(T...)
{
    static if (!T.length)
        alias void CommonType;
    else static if (T.length == 1)
    {
        static if (is(typeof(T[0])))
            alias typeof(T[0]) CommonType;
        else
            alias T[0] CommonType;
    }
    else static if (is(typeof(true ? T[0].init : T[1].init) U))
        alias CommonType!(U, T[2 .. $]) CommonType;
    else
        alias void CommonType;
}
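For context, the existing multi-argument behavior of CommonType, which the report leaves unchanged, can be checked at compile time:

```d
import std.traits : CommonType;

// Multiple arguments resolve via the ?: common-type rule:
static assert(is(CommonType!(int, long)   == long));
static assert(is(CommonType!(int, double) == double));
// Types with no common type yield void:
static assert(is(CommonType!(int, int[])  == void));
// The report concerns the single value (non-type) argument case,
// e.g. CommonType!3, which with the fix should give int
// (the type of the literal).
```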
[Issue 4293] New: Wrong line number with @disable
http://d.puremagic.com/issues/show_bug.cgi?id=4293

           Summary: Wrong line number with @disable
           Product: D
           Version: 2.041
          Platform: Other
        OS/Version: Linux
            Status: NEW
          Severity: normal
          Priority: P2
         Component: DMD
        AssignedTo: nob...@puremagic.com
        ReportedBy: jason.james.ho...@gmail.com

--- Comment #0 from Jason House jason.james.ho...@gmail.com 2010-06-07 19:16:47 PDT ---
$ cat test.d
struct x{
    @disable this(); // line 2
    this(bool b){ _b = b; }
    bool _b;
}
int main(){
    x; // line 7, the real source of the error
    return cast(int) x._b;
}
$ dmd test.d
test.d(2): Error: constructor test.x.this default constructor not allowed for structs
[Issue 3086] TypeInfo opEquals returns incorrect results
http://d.puremagic.com/issues/show_bug.cgi?id=3086

nfx...@gmail.com changed:

           What       |Removed    |Added
           -----------+-----------+---------------------------
           Keywords   |           |wrong-code

--- Comment #1 from nfx...@gmail.com 2010-06-07 20:18:41 PDT ---
Attributes such as pure are also ignored.
[Issue 3831] writeln of a delegate typeid
http://d.puremagic.com/issues/show_bug.cgi?id=3831

nfx...@gmail.com changed:

           What       |Removed    |Added
           -----------+-----------+---------------------------
           CC         |           |nfx...@gmail.com

--- Comment #3 from nfx...@gmail.com 2010-06-07 20:18:07 PDT ---
This has the same cause as issue 3086. If you don't think it deserves to be marked as a duplicate, undo it. (But I think it's a good idea to keep bug reports caused by the same thing to a minimum.)
[Issue 3086] TypeInfo opEquals returns incorrect results
http://d.puremagic.com/issues/show_bug.cgi?id=3086

nfx...@gmail.com changed:

           What       |Removed    |Added
           -----------+-----------+---------------------------
           CC         |           |bearophile_h...@eml.cc

--- Comment #2 from nfx...@gmail.com 2010-06-07 20:20:14 PDT ---
*** Issue 3831 has been marked as a duplicate of this issue. ***
[Issue 3831] writeln of a delegate typeid
http://d.puremagic.com/issues/show_bug.cgi?id=3831

nfx...@gmail.com changed:

           What       |Removed    |Added
           -----------+-----------+---------------------------
           Status     |NEW        |RESOLVED
           Resolution |           |DUPLICATE

--- Comment #4 from nfx...@gmail.com 2010-06-07 20:20:13 PDT ---
*** This issue has been marked as a duplicate of issue 3086 ***