Re: dmd 2.057 release
Jonathan M Davis: On Friday, December 16, 2011 22:37:50 Christian Manning wrote: ubyte[4] a; auto x() { return a; } void main() { auto b = x()[1..$]; } ... Regardless, the compiler shouldn't be ICEing though. Is it in Bugzilla? Bye, bearophile
Re: dmd 2.057 release
On Saturday, 17 December 2011 at 11:02:41 UTC, bearophile wrote: Jonathan M Davis: On Friday, December 16, 2011 22:37:50 Christian Manning wrote: ubyte[4] a; auto x() { return a; } void main() { auto b = x()[1..$]; } ... Regardless, the compiler shouldn't be ICEing though. Is it in Bugzilla? Bye, bearophile Looks to be the same issue as http://d.puremagic.com/issues/show_bug.cgi?id=4414
Re: New homepage design of d-p-l.org is now live. eom
On 17/12/2011 06:35, Nick Sabalausky wrote: <snip> But if it's *just* ordinary text that simply needs to be **bolded** or *italicized*, then handling it in any roundabout way like that is just *ridiculous* (and self-documenting would be completely inapplicable). You miss the point - why would you need to bold or italicise ordinary text? If the point is to illustrate what bold looks like, or what italics look like, _then_ it might make sense to use presentational markup In such a situation, replacing hardcoded bold or italic with some vague concept of emphasis (old-school example: the <em> tag) <em> isn't really an old-school example. It's the proper semantic markup for emphasis. or extra-emphasis, etc., is not only a useless abstraction merely for the sake of abstraction, it ***can*** subtly change the meaning/interpretation of the actual *content*, because only the *author*, not the stylist, is able to look at the final result and know whether the result ***correctly*** depicts the amount/type of emphasis intended. It seems to me that the essence of what you're saying is that the choice of <em> and <strong> is too coarse-grained for your purposes. I'm not sure how best to deal with this either. Moreover, what markup are you going to use so that it looks/sounds/feels right in non-graphical browsers? Additionally, how does the stylist know if a given styling is going to cause too much visual noise? Or be too visually monotone? They *can't*, because it's *completely* dependent on the text that the ***author*** writes. It might be too much visual stuff for one article and just right for another. Only the text's author can know what's appropriate, not the stylesheet. If the author is overusing emphasis, manually setting font weights and stuff to compensate seems to me to be trying to fix the wrong problem. Stewart.
Re: New homepage design of d-p-l.org is now live. eom
On Saturday, 17 December 2011 at 13:09:40 UTC, Stewart Gordon wrote: What built-in support does HTML/JS/CSS have for dragging of elements? http://dev.w3.org/html5/spec/dnd.html It started as an IE5 feature, and is now being expanded to everyone else. In the old IE, it only worked on some text, links, and images, but the new standard says you can set it on whatever you want. Still, if you want to support them all, an <a> is the way to do it. Moreover, dummy hrefs are an abomination. Not just compatibility when JS is disabled - this link is also the one followed when you open a link in a new window/tab. This regularly bites me. Yea, I hate them too. In practice, I try to put them somewhere useful, if I can. (In my app with the drag and drop, the links lead to the contact profile page; this is a mailing list/CRM app.) But I am made to wonder why. What will happen when HTML6 comes out? I guess the idea is there won't be an HTML6; instead they'll just keep <s>breaking</s> evolving the current thing and expect everyone to keep up. No, because in order to determine whether it's well-formed, one must know whether it's meant to be in SGML-based HTML, HTML5 or XHTML. Meh, it works anyway. One reason is websites tend to be so poorly written that if you tried to be strict, you'd just break most of them! Anyway, this said, if dpl.org wanted to validate, I don't think it'd be a *bad* thing. (I'd say go with XHTML; I feel dirty saying this, but I almost like XML for this kind of thing.) So it's something that web authors can use to store custom data in an element for scripting purposes, but browsers aren't supposed to have any built-in handling of them? Right. You aren't even supposed to use them with other third-party tools; the idea is that area is completely open for the page author and his scripts to do with as he pleases.
Re: New homepage design of d-p-l.org is now live. eom
Stewart Gordon smjg_1...@yahoo.com wrote in message news:jci2bj$225s$1...@digitalmars.com... On 17/12/2011 06:35, Nick Sabalausky wrote: <snip> But if it's *just* ordinary text that simply needs to be **bolded** or *italicized*, then handling it in any roundabout way like that is just *ridiculous* (and self-documenting would be completely inapplicable). You miss the point - why would you need to bold or italicise ordinary text? To be clear, I didn't mean that as in plaintext...if that's what you meant...? I meant like the examples in that paragraph (not all of which were literal examples of bold/italic). If the point is to illustrate what bold looks like, or what italics look like, _then_ it might make sense to use presentational markup Only might? ;) In such a situation, replacing hardcoded bold or italic with some vague concept of emphasis (old-school example: the <em> tag) <em> isn't really an old-school example. It's the proper semantic markup for emphasis. Ok. It was a dedicated HTML tag instead of a span/div with a class attribute. Seems like most of those are non-kosher these days. or extra-emphasis, etc., is not only a useless abstraction merely for the sake of abstraction, it ***can*** subtly change the meaning/interpretation of the actual *content*, because only the *author*, not the stylist, is able to look at the final result and know whether the result ***correctly*** depicts the amount/type of emphasis intended. It seems to me that the essence of what you're saying is that the choice of <em> and <strong> is too coarse-grained for your purposes. Yes. Well, too vague, really. I'm not sure how best to deal with this either. It's easy to deal with: You just say Fuck dat 'purity' booshit, I'm usin' <b> and <i>!! :) And as far as inferring semantic meaning, I think it's pretty obvious that <b> and <i> imply this text is emphasised. (Not that I can imagine any realistic use for being able to identify what text is emphasised.)
Moreover, what markup are you going to use so that it looks/sounds/feels right in non-graphical browsers? Non-graphical browsers are going to result in a *lot* of difference from the original style/layout anyway. There's a lot of stuff that's going to be wrong. If you're using one, it's just understood that you're merely viewing an approximation. Additionally, how does the stylist know if a given styling is going to cause too much visual noise? Or be too visually monotone? They *can't*, because it's *completely* dependent on the text that the ***author*** writes. It might be too much visual stuff for one article and just right for another. Only the text's author can know what's appropriate, not the stylesheet. If the author is overusing emphasis, manually setting font weights and stuff to compensate seems to me to be trying to fix the wrong problem. Not necessarily. Imagine a paragraph that uses a fair amount of italic, but not quite an overuse of italic, so it still looks fine. If that's done with, say, <em>, and the stylist changes <em> from italic to either bold or bold+italic, it's suddenly going to look like shit. It'll *become* an overuse, and the only way for the stylist to fix it is to just let the author choose bold/italic/etc. on their own. Maybe I'm just atypical as an author, but when I write something and use emphasis, I take into account things like bold/italic and how it'll look when I decide what to emphasise, how, and how much. If I *do* use things like <em>, I inevitably end up choosing them based *not* on level of emphasis but on whether they end up being bold/italic/underline/etc...which obviously defeats the whole damn point of <em>, etc. I'd be surprised if most people do it any differently from that. Heck, I almost always end up changing my emphasis/bold/italic/etc. after writing+previewing it because it never looks right until I've tweaked it *taking into account* the final presentation.
Honestly, I can't imagine how anyone could do it effectively without having direct control over such things (even if it's by abusing levels of emphasis as euphemisms for more specific stylings). I think there's good reason wiki markups invariably have syntax for bold and italic rather than emphasis. There are two basic problems with the idealistic separation of presentation from content: 1. (X)HTML and CSS are just simply not very good at it, with (X)HTML as content and CSS as presentation. You can get by in *some* cases, but in general they're just poorly suited for it. I think that *part* of the problem may be that it's like ColdFusion: A mediocre Model and a mediocre View hooked directly together with basically no Controller. 2. Content and presentation *are not always separable*. There *is* interplay. And this makes a strict and complete separation of content and presentation nothing more than yet another example in programming's long history of idealistic dreams (like Java's everything must be OO
Re: New homepage design of d-p-l.org is now live. eom
Adam D. Ruppe destructiona...@gmail.com wrote in message news:xjikejblvuxulhoqx...@dfeed.kimsufi.thecybershadow.net... On Saturday, 17 December 2011 at 13:09:40 UTC, Stewart Gordon wrote: But I am made to wonder why. What will happen when HTML6 comes out? I guess the idea is there won't be an HTML6; instead they'll just keep <s>breaking</s> evolving the current thing and expect everyone to keep up. And why not? That's what they've been doing all along. *cough* object *cough* ;) No, because in order to determine whether it's well-formed, one must know whether it's meant to be in SGML-based HTML, HTML5 or XHTML. Meh, it works anyway. One reason is websites tend to be so poorly written that if you tried to be strict, you'd just break most of them! Anyway, this said, if dpl.org wanted to validate, I don't think it'd be a *bad* thing. (I'd say go with XHTML; I feel dirty saying this, but I almost like XML for this kind of thing.) Yea, HTML looks, acts and feels like XML, so it may as well actually *be* XML. Plus, transformations to/from HTML are one of the main reasons for XML anyway. So they *should* be compatible. ('Course there's *technically* SGML too, but honestly, HTML is the only reason anyone's ever cared about or even known about SGML. It may as well not exist.)
Re: New homepage design of d-p-l.org is now live. eom
On 2011-12-17 13:09:35 +0000, Stewart Gordon smjg_1...@yahoo.com said: Strange. I don't recall ever seeing <!DOCTYPE html> before HTML5 came along. But I am made to wonder why. What will happen when HTML6 comes out? Or have they decided that validators are just going to update themselves to the new standard rather than keeping separate HTML5/HTML6 DTDs (or whatever the HTML5+ equivalent of a DTD is)? Thing is, if they could have removed the doctype completely they would have done so. The doctype doesn't tell anything meaningful to a browser, except that today's browsers use the presence of a doctype to switch between a quirks mode and a standards mode. <!DOCTYPE html> was the shortest thing that'd make every browser use standards mode. The problem was that forcing everyone to specify one HTML version or another is just an exercise in pointlessness. Most people get the doctype wrong, either initially or over time when someone updates the site to add some new content. If you're interested in validating your web page, you likely know which version you want to validate against and you can tell the validator. Stuff like improperly closed tags or bad entity encoding can break, but that's pretty well independent of doctype validation. That's simply a matter of the document being well-formed. No, because in order to determine whether it's well-formed, one must know whether it's meant to be in SGML-based HTML, HTML5 or XHTML. Perhaps it matters for validation if you don't say which spec to validate against, but validating against a spec doesn't always reflect reality either. There is no SGML-based-HTML-compliant parser used by a browser out there. Browsers have two parsers: one for HTML and one for XML (and sometimes the HTML parser behaves slightly differently in quirks mode, but that's not part of any spec).
And whether a browser uses the HTML or the XML parser has nothing to do with the doctype at the top of the file: it depends on the MIME type given in the Content-Type HTTP header, or the file extension if it is a local file. HTML 5 doesn't change that. Almost all web pages declared as XHTML out there are actually parsed using the HTML parser because they are served with the text/html content type and not application/xhtml+xml. A lot of them are not well-formed XML and wouldn't be viewable anyway if parsed according to their doctype. -- Michel Fortin michel.for...@michelf.com http://michelf.com/
D bindings for openssl added to Deimos
Courtesy of David Nadlinger. https://github.com/D-Programming-Deimos/openssl
Re: Export and Protected Scoping in Dynamic Libraries
On 12/16/2011 9:01 PM, Adam Wilson wrote: On Fri, 16 Dec 2011 01:34:32 -0800, Walter Bright newshou...@digitalmars.com wrote: On 12/14/2011 11:41 AM, Adam Wilson wrote: Hello Everyone, I want to start this conversation by pointing out that I come from a C/C++/C# background and my ideas and frustrations in this post will be colored by that history. When I first approached D, the idea of an 'export' confused me. I've since figured out, at least in libraries, that in C# terms D's 'export' means 'public'. I'm not too familiar with C#'s public, but what D's 'export' means is that a function is an entry point for a DLL. In the Windows world, that means it gets an extra level of indirection when calling it, and it corresponds to __declspec(dllexport) in Windows compilers. Huh, I didn't know that. It makes sense though; I recall using __declspec(dllexport) in MSVC and hating the unwieldy syntax. Does it perform a similar function on other OSes? No, as such magic isn't necessary. Any public symbol will be automatically accessible from a shared library. C#'s public means this function can be used by anyone, including other libraries or executables, which, according to the D docs, is what D's export means. C# also has 'protected' and 'internal protected'. 'protected' is available to any subclass, even those outside of the library, and 'internal protected' is only available to subclasses within the same library. It's actually quite useful to have that distinction. For example, in WPF you end up subclassing the system classes (which are in separate DLLs) repeatedly, and it is an encouraged pattern to extend existing functionality with your own. The reason I went with the syntax I did is that it doesn't add any new keywords and is, I think, even clearer about the programmer's intent than C#'s scoping model. If 'export' really is just the equivalent of __declspec(dllexport) then it makes even more sense as an attribute and not a scope in its own right.
However, this raises a problem: specifically, how do I export a protected member from a dynamic library? Do you mean how does one directly call a protected function inside a DLL from outside that DLL, without going through the virtual table? What I am after is a member that can be overridden in a subclass outside of the DLL but is otherwise only reachable within the module it's defined in. Mr. Carlborg's code example is spot on. It isn't necessary to export a protected symbol in order to override it.
Re: LLVM talks 1: Clang for Chromium
The back end will evaluate them in different orders, as it is more efficient to evaluate varargs functions from right-to-left, and others from left-to-right. It's not an insurmountable problem, it just needs to be worked on. Are you talking about push vs mov? By default gcc preallocates space for arguments, evaluates them from left to right and `mov`es them to the stack. `push`es take more cycles, `mov`s take more space. Which of them is more efficient?
Re: LLVM talks 1: Clang for Chromium
On 12/17/2011 2:04 AM, Kagamin wrote: The back end will evaluate them in different orders, as it is more efficient to evaluate varargs functions from right-to-left, and others from left-to-right. It's not an insurmountable problem, it just needs to be worked on. Are you talking about push vs mov? Yes. By default gcc preallocates space for arguments, evaluates them from left to right and `mov`es them to the stack. `push`es take more cycles, `mov`s take more space. Which of them is more efficient? Depends on the processor.
Re: SDC ddmd
Bernard Helyer Wrote: We intend to be compatible with DMD to a point. Where we are not, it will be through omission. Off the top of my head: *delete will not be implemented. *scope classes will not be implemented. *complex numbers will not be implemented. *version/debug levels will not be implemented, as their semantics are poorly documented, and I've never seen them used in any code anywhere. *C-style function pointers will not be supported. *The D calling convention will not be matched, at least not in 1.0 (extern (D) is currently fastcall) *D's forward reference and module order bugs will not be supported. And others I've probably forgotten. In terms of release date, I can't really say, sorry. Currently working on the D interpreter for CTFE and constant expressions, if you're interested. :D -Bernard. With so many features unimplemented, can it still be compatible with dmd2? Also, what state is the current SDC in? Is the feature list up to date? When is version 0.1 expected to be completed? Good luck! dolive
Re: LLVM talks 1: Clang for Chromium
Walter: x = x++; Define order of evaluation as rvalue, then lvalue. So I presume your desire is to define the semantics of all code like that, and to statically refuse none of it. This is an acceptable solution. The advantage of this solution is that it puts no restrictions on acceptable D code, and it avoids the complexities of choosing which cases to refuse statically. Some of its disadvantages are that programmers are free to write very unreadable D code that essentially only the compiler is able to understand, and that such code is hard to port to C/C++. Bye, bearophile
Re: Second Round CURL Wrapper Review
On Thursday, 15 December 2011 at 07:46:56 UTC, Jonathan M Davis wrote: On Friday, December 02, 2011 23:26:10 dsimcha wrote: I volunteered ages ago to manage the review for the second round of Jonas Drewsen's CURL wrapper. After the first round it was decided that, after a large number of minor issues were fixed, a second round would be necessary. Significant open issues: 1. Should libcurl be bundled with DMD on Windows? I'd argue that it should be a separate download but that it should definitely be provided. 2. etc.curl, std.curl, or std.net.curl? (We had a vote a while back but it was buried deep in a thread and a lot of people may have missed it: http://www.easypolls.net/poll.html?p=4ebd3219011eb0e4518d35ab ) If we're going to follow a policy that wrappers around 3rd party libraries go in etc, then it should be etc.curl. Otherwise, I think that it should be std.net.curl. I'd argue that AutoConnection should be AutoConnect, because it's shorter and just as informative. AutoConnect sounds like a command to connect automatically, which might be confusing since that is not what it does. Therefore I went with AutoConnection, which I still believe is better. I'd argue that the acronyms should be in all caps if camelcasing would require that the first letter of the acronym be capitalized, and all lower case if camelcasing would require that the first letter of the acronym be lower case. We're not completely consistent in how we use acronyms in names in Phobos, but I believe that we primarily follow that rule when putting them in symbol names. So, for instance, it would be HTTP, FTP, and SMTP rather than Http, Ftp, and Smtp. I like Http better as a matter of personal taste. But if HTTP is preferred then I'll do that. Actually, I think I will make a pull request to extend the dlang.org/dstyle.html doc with an example that we can point to when someone asks about styling.
I've spent too much time restyling because there is no such style doc already, and people are complaining about style in reviews anyway. I'd rename T to C or Char in the template parameters, since it's a character type rather than a generic type. Not a huge deal, but I think that it makes the code quicker to understand. I've named it T because it can also be ubyte. In that case C for Char is confusing. Why are some parameters const(char)[] and others string? Why not make them all const(char)[] or even actually templatize them on character type (one per parameter which is a string)? The parameters that are const(char)[] will accept both string and char[], which is what I want because libcurl makes an internal copy of it anyway. If I typed it as string then you would have to idup a char[] for the parameter without any reason. The parameters that are string are internally passed to functions (in other std modules) that only accept strings. I could accept const(char)[] and idup it in order to have consistent parameter types, but that adds unnecessary overhead. Shorten the URL in your examples. Its long length increases the risk of making the examples too wide. It's a made-up URL, so there's no need for it to be that long. Something like foo.bar.com is probably more than enough. Actually the long URLs point to a google appspot app that fits the example. I've done it this way so that people can just copy/paste the example and have something working. Not too many people have a test HTTP POST server set up for a quick test AFAIK. Why does byLine use '\x0a' instead of '\n'? '\n' is clear, whereas most people will have to look up what '\x0a' is, so I really think that it should be '\n'. I looked at how it was done elsewhere in Phobos and did it the same way: http://dlang.org/phobos/std_stdio.html#byLine But I agree with you that \n is better and will change it.
Also, if byLine is going to return an internal range type instead of having an externally defined one, its documentation needs to explain what Char is used for. As it stands, it looks like a pointless template parameter. To do that, the return section should probably say something like A range of Char[]... All of this goes for byLineAsync as well. Ok. In general, the functions should do a better job of documenting their parameters. Many of them don't document their parameters at all. For instance, while it's fairly easy to guess what keepTerminator and terminator are used for, there is no explanation about them whatsoever in either byLine or byLineAsync's documentation. Again, I looked at Phobos to see what the current style was: http://dlang.org/phobos/std_stdio.html#byLine I'll include some more detail though. If any of the range return types of these functions have non-standard functions on them, they need to be properly documented. e.g. wait(Duration) is mentioned in passing in byLineAsync's documentation, but no explanation for its purpose is given. The documentation needs to explain everything the
Re: 64-bit DMD for windows?
On 17.12.2011 04:26, Trass3r wrote: My girlfriend is interviewing for a job at a major government company here in Norway, and was told that she'd need to use DOS at work. Likely some ancient software that no-one's ever wanted to try and upgrade. What is wrong with this world? ;) DOS software can be more productive, since it's often keyboard-only. It all depends, of course. Might be a FoxPro app or something.
Re: Program size, linking matter, and static this()
On Sat, 17 Dec 2011 01:50:51 +0200, Jonathan M Davis jmdavisp...@gmx.com wrote: On Friday, December 16, 2011 17:13:49 Andrei Alexandrescu wrote: Maybe there's an issue with the design. Maybe Singleton (the most damned of all patterns) is not the best choice here. Or maybe the use of an inheritance hierarchy with a grand total of 4 classes. Or maybe the encapsulation could be rethought. The general point is, a design lives within a language. Any language is going to disallow a few designs or make them unsuitable for particular situations. This is, again, multiplied by the context: it's the standard library. I don't know what's wrong with singletons. It's a great pattern in certain circumstances. I don't like patterns much, but when it comes to singleton I absolutely hate it. Just ask yourself what it does to earn that fancy name. NOTHING. It is nothing but hype from those who want to rule everything with one paradigm. Generic solutions/rules/paradigms are our final target WHEN they are elegant. If you are using singleton in your C++/D (or any other M-P language) code, do yourself a favor and trash the book you learned it from. --- class A { static A make(); } class B; B makeB(); --- What can A.make do that makeB cannot? (Other than creating objects of two different types :P )
Re: auto testing
On 17/12/2011 06:40, Brad Roberts wrote: On 12/16/2011 1:29 PM, Brad Anderson wrote: On Thu, Dec 15, 2011 at 6:43 PM, Brad Roberts bra...@puremagic.com wrote: Left to do: 1) deploy changes to the tester hosts (it's on 2 already) done 2) finish the ui very ugly but minimally functional: http://d.puremagic.com/test-results/pulls.ghtml 3) trigger pull rebuilds when trunk is updated partly implemented, but not being done yet Idea: I noticed most pull requests were failing when I looked at it, due to the main build failing - that's a lot of wasted computing time. Perhaps it would be a good idea to refuse to test pulls if dmd HEAD isn't compiling? This would be problematic for the 1/100 pull requests designed to fix this, but would save a lot of testing. An alternative method could be to test all of them, but if the pull request previously passed, then dmd HEAD broke, then the pull broke, stop testing until dmd HEAD is fixed. -- Robert http://octarineparrot.com/
Re: Double Checked Locking
No. The most efficient thing would be to use core.atomic atomicLoad!msync.acq() for the read and atomicStore!msync.rel() for the write. Use a temporary to construct the instance, etc. I think Andrei outlined the proper approach in a series of articles a while back. Sent from my iPhone On Dec 16, 2011, at 11:47 PM, Andrew Wiley wiley.andre...@gmail.com wrote: I was looking through Jonathan Davis's pull request to remove static constructors from std.datetime, and I realized that I don't know whether Double Checked Locking is legal under D's memory model, and what the requirements for it to work would be. (if you're not familiar with the term, check out http://en.wikipedia.org/wiki/Double-checked_locking - it's a useful but problematic programming pattern that can cause subtle concurrency bugs) It seems like it should be legal as long as the variable tested and initialized is flagged as shared so that the compiler enforces proper fences, but is this actually true?
Re: Second Round CURL Wrapper Review
On Sat, Dec 17, 2011 at 5:56 AM, jdrewsen jdrew...@nospam.com wrote: On Thursday, 15 December 2011 at 07:46:56 UTC, Jonathan M Davis wrote: ... I'd argue that the acronyms should be in all caps if camelcasing would require that the first letter of the acronym be capitalized, and all lower case if camelcasing would require that the first letter of the acronym be lower case. We're not completely consistent in how we use acronyms in names in Phobos, but I believe that we primarily follow that rule when putting them in symbol names. So, for instance, it would be HTTP, FTP, and SMTP rather than Http, Ftp, and Smtp. I like Http better as a matter of personal taste. But if HTTP is preferred then I'll do that. Actually, I think I will make a pull request to extend the dlang.org/dstyle.html doc with an example that we can point to when someone asks about styling. I've spent too much time restyling because there is no such style doc already, and people are complaining about style in reviews anyway. I absolutely agree that the D Style page should be revised and extended to reflect the current standards that new Phobos modules are expected to adhere to. I found an open pull request that indicates a past effort in this regard: https://github.com/D-Programming-Language/d-programming-language.org/pull/16 (I thought that another document was also floating around that was a proposed new Style Guide for Phobos, but I don't recall who authored it and I don't remember where it was located.) jcc7
D Style Guide (was Re: Second Round CURL Wrapper Review )
== Quote from jdrewsen (jdrew...@nospam.com)'s article On Thursday, 15 December 2011 at 07:46:56 UTC, Jonathan M Davis wrote: ... I'd argue that the acronyms should be in all caps if camelcasing would require that the first letter of the acronym be capitalized, and all lower case if camelcasing would require that the first letter of the acronym be lower case. We're not completely consistent in how we use acronyms in names in Phobos, but I believe that we primarily follow that rule when putting them in symbol names. So, for instance, it would be HTTP, FTP, and SMTP rather than Http, Ftp, and Smtp. I like Http better as a matter of personal taste. But if HTTP is preferred then I'll do that. Actually, I think I will make a pull request to extend the dlang.org/dstyle.html doc with an example that we can point to when someone asks about styling. I've spent too much time restyling because there is no such style doc already, and people are complaining about style in reviews anyway. (I'm sorry if this comes across as a repost, but I guess I don't know how to post using the mailing list.) I absolutely agree that the D Style page should be revised and extended to reflect the current standards that new Phobos modules are expected to adhere to. I found an open pull request that indicated a past effort in this regard: https://github.com/D-Programming-Language/d-programming-language.org/pull/16 (I thought that another document was also floating around that was a proposed new Style Guide for Phobos, but I don't recall who authored it and I don't remember where it was located.) jcc7
Re: Double Checked Locking
On 12/17/11 1:56 AM, Andrew Wiley wrote: On Sat, Dec 17, 2011 at 1:47 AM, Andrew Wiley wiley.andre...@gmail.com wrote: I was looking through Jonathan Davis's pull request to remove static constructors from std.datetime, and I realized that I don't know whether Double Checked Locking is legal under D's memory model, and what the requirements for it to work would be. (if you're not familiar with the term, check out http://en.wikipedia.org/wiki/Double-checked_locking - it's a useful but problematic programming pattern that can cause subtle concurrency bugs) It seems like it should be legal as long as the variable tested and initialized is flagged as shared so that the compiler enforces proper fences, but is this actually true? This entry in the FAQ makes me suspicious:

```
What does shared have to do with memory barriers? Reading/writing shared data
emits memory barriers to ensure sequential consistency (not implemented).
```

So DCL should be alright with data flagged as shared, but it's not implemented in the compiler? That is correct. Andrei
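For readers unfamiliar with the pattern, here is a minimal sketch of double-checked locking in D using explicit atomics from core.atomic (the current interface, not the 2011 one). `Cache`, `theCache`, and `getCache` are invented names; explicit acquire/release is used precisely because, as noted above, `shared` alone does not yet emit the barriers the pattern needs.

```d
// Minimal double-checked locking sketch with explicit atomics.
import core.atomic;

class Cache { int hits; }

shared Cache theCache;       // lazily initialized instance
__gshared Object cacheLock;  // guards the slow path

shared static this() { cacheLock = new Object; }

Cache getCache()
{
    // Fast path: unlocked check. The acquire load pairs with the
    // release store below, so a non-null result is fully constructed.
    auto c = atomicLoad!(MemoryOrder.acq)(theCache);
    if (c is null)
    {
        synchronized (cacheLock)
        {
            // Slow path: re-check under the lock; another thread may
            // have initialized the instance while we waited.
            c = atomicLoad!(MemoryOrder.acq)(theCache);
            if (c is null)
            {
                auto fresh = cast(shared) new Cache;
                // Release store publishes the fully built object.
                atomicStore!(MemoryOrder.rel)(theCache, fresh);
                c = fresh;
            }
        }
    }
    return cast(Cache) c;
}

void main()
{
    assert(getCache() !is null);
    assert(getCache() is getCache()); // always the same instance
}
```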
Re: 64-bit DMD for windows?
On 17.12.2011 16:37, Trass3r wrote: DOS software can be more productive, since it's often keyboard-only. How is that different from a Windows console app? From an interface point of view, it's basically the same thing. They both support character graphics (like ncurses). Internally, they wouldn't have anything in common at all.
Re: 64-bit DMD for windows?
On 17.12.2011 17:59, torhu wrote: On 17.12.2011 16:37, Trass3r wrote: DOS software can be more productive, since it's often keyboard-only. How is that different from a Windows console app? From an interface point of view, it's basically the same thing. They both support character graphics (like ncurses). Internally, they wouldn't have anything in common at all. The only commercial application I can think of that runs in the Windows console and uses character graphics is Far Manager. 20 years ago there were lots of applications like that, but they ran on top of DOS instead of Windows.
Re: 64-bit DMD for windows?
Trass3r Wrote: DOS software can be more productive, since it's often keyboard-only. How is that different from a Windows console app? No Solitaire, Facebook... much more productive!
Re: 64-bit DMD for windows?
Windows still ships with edit, which has more features than notepad. Heheh. cmd.exe /c edit
Re: Program size, linking matter, and static this()
On 17/12/2011 00:18, maarten van damme wrote: how did other languages solve this issue? I can't imagine D being the only language with static constructors, do they have that problem too? AFAIK, like in D, it's best practice to avoid static constructors as much as possible in Java, Python and I imagine C# as well, even though the running order is well-defined. The dependency injection design pattern seems to help here.
Re: Program size, linking matter, and static this()
On 16/12/2011 22:45, Andrei Alexandrescu wrote: On 12/16/11 3:38 PM, Trass3r wrote: A related issue is phobos being an intermodule dependency monster. A simple hello world pulls in almost 30 modules! And std.stdio is supposed to be just a simple wrapper around C FILE. In fact it doesn't (after yesterday's commit). The std code in hello, world is a minuscule 3KB. The rest of 218KB is druntime. Once we solve the static constructor issue, function-level linking should take care of pulling only the minimum needed. One interesting fact is that a lot of issues that I tended to take non-critically (templates cause bloat, intermodule dependencies cause bloat, static linking creates large programs) looked a whole lot differently when I looked closer at causes and effects. Andrei Fantastic! :)
Re: Program size, linking matter, and static this()
On 17/12/2011 02:39, Andrei Alexandrescu wrote: On 12/16/11 6:54 PM, Jonathan M Davis wrote: By contrast, we could have a simple feature that was explained in the documentation along with static constructors which made it easy to tell the compiler that the order doesn't matter - either by saying that it doesn't matter at all or that it doesn't matter in regards to a specific module. e.g. @nodepends(std.file) static this() { } Now the code doesn't have to be redesigned to get around the fact that the compiler just isn't smart enough to figure it out on its own. Sure, the feature is potentially unsafe, but so are plenty of other features in D. That is hardly a good argument in favor of the feature :o). One issue that you might have not considered is that this is more brittle than it might seem. Even though the dependency pattern is painfully obvious to the human at a point in time, maintenance work can easily change that, and in very non-obvious ways (e.g. dependency cycles spanning multiple modules). I've seen it happening in C++, and when you realize it it's quite mind-boggling. The best situation would be if the compiler was smart enough to figure it out for itself, but barring that this definitely seems like a far cleaner solution than having to try and figure out how to break up some of the initialization code for a module into a separate module, especially when features such as immutable and pure tend to make such separation impossible without some nasty casts. It would just be way simpler to have a feature which allowed you to tell the compiler that there was no dependency. I think the only right approach to this must be principled - either by CTFEing the constructor or by guaranteeing it calls no functions that may close a dependency cycle. Even without that, I'd say we're in very good shape. Andrei Very good point. 
CTFE is improving with each version of dmd and is a real alternative to static this(). It should be considered when appropriate; it has many benefits.
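To make the CTFE alternative mentioned above concrete, here is a small hedged sketch (the names are made up): a module-level immutable whose initializer is evaluated at compile time needs no static this(), so there is no initialization-order cycle at all.

```d
// Hypothetical example: build a lookup table via CTFE instead of
// filling it in a static this() module constructor.
uint[16] makeSquares()
{
    uint[16] t;
    foreach (i; 0 .. t.length)
        t[i] = cast(uint)(i * i);
    return t;
}

// The initializer is forced to compile time: no module constructor
// runs at startup, so no ordering concerns between modules.
immutable uint[16] squares = makeSquares();

void main()
{
    assert(squares[3] == 9);
    assert(squares[15] == 225);
}
```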
Re: 64-bit DMD for windows?
On 17.12.2011 19:05, Andrej Mitrovic wrote: Windows still ships with edit, which has more features than notepad. Heheh. cmd.exe /c edit Makes me wonder what it's for, can you run a Windows server without the GUI?
Re: Any plans to make it possible to call C++ functions that reside in a namespace?
On Thursday, 15 December 2011 at 08:16:52 UTC, deadalnix wrote: That would be great. ATM, I just create dumb functions in the global namespace and those functions forward to the functions within the namespace. That is what I do as well, but when I have thousands of functions to add it takes time... ;)
Re: Any plans to make it possible to call C++ functions that reside in a namespace?
On Friday, 16 December 2011 at 00:44:07 UTC, Walter Bright wrote: I hadn't planned to, but it's a good idea. I suggest adding it as an enhancement request on bugzilla. I will do as you suggested. Thanks Walter.
Re: 64-bit DMD for windows?
On 17.12.2011 18:21, Bane wrote: Trass3r Wrote: DOS software can be more productive, since it's often keyboard-only. How is that different from a Windows console app? No Solitaire, Facebook... much more productive! Most likely they're running the DOS app in a window in Windows, but that's a good point.
Re: Program size, linking matter, and static this()
On 12/17/11 6:34 AM, so wrote: If you are using singleton in your C++/D (or any other M-P language) code, do yourself a favor and trash that book you learned it from. --- class A { static A make(); } class B; B makeB(); --- What A.make can do makeB can not? (Other than creating objects of two different types :P ) Singleton has two benefits. One, you can't accidentally create more than one instance. The second, which is often overlooked, is that you still benefit from polymorphism (as opposed to making its state global). Andrei
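A hedged sketch of the second benefit in D (Logger and ConsoleLogger are invented names): clients only ever name the base class, so the concrete type behind the singleton can change without touching client code.

```d
// Hypothetical illustration: a singleton accessor returning a
// polymorphic instance, which plain global state cannot do.
abstract class Logger
{
    private __gshared Logger instance_;

    static Logger instance()
    {
        if (instance_ is null)             // not thread-safe; minimal sketch
            instance_ = new ConsoleLogger; // concrete type chosen here
        return instance_;
    }

    abstract string log(string msg);
}

class ConsoleLogger : Logger
{
    override string log(string msg) { return "console: " ~ msg; }
}

void main()
{
    // Swapping ConsoleLogger for another subclass would not change
    // this calling code.
    assert(Logger.instance.log("hi") == "console: hi");
    assert(Logger.instance is Logger.instance); // one instance
}
```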
Re: Export and Protected Scoping in Dynamic Libraries
On Sat, 17 Dec 2011 01:33:33 -0800, Walter Bright newshou...@digitalmars.com wrote: On 12/16/2011 9:01 PM, Adam Wilson wrote: On Fri, 16 Dec 2011 01:34:32 -0800, Walter Bright newshou...@digitalmars.com wrote: On 12/14/2011 11:41 AM, Adam Wilson wrote: Hello Everyone, I want to start this conversation by pointing out that I come from a C/C++/C# background and my ideas and frustrations in this post will be colored by that history. When I first approached D, the idea of an 'export' confused me. I've since figured out, at least in libraries, that in C# terms D's 'export' means 'public'. I'm not too familiar with C#'s public, but what D 'export' means is a function is an entry point for a DLL. In the Windows world, that means it gets an extra level of indirection when calling it, and it corresponds to: __declspec(dllexport) in Windows compilers. Huh, I didn't know that. It makes sense though, I recall using __declspec(dllexport) in MSVC and hating the unwieldy syntax. Does it perform a similar function on other OS'es? No, as such magic isn't necessary. Any public symbol will be automatically accessible from a shared library. So my understanding then is that, for dynamic libs on OS'es like Linux/OSX, DI files also need to include members marked public? And how does the compiler treat the export keyword on systems that don't need it? I am assuming they are treated as publics? C#'s public means this function can be used by anyone, including other libraries or executables, which according to the D docs, is what D's export means. C# also has 'protected' and 'internal protected'. 'protected' is available to any subclass, even those outside of the library, and 'internal protected' is only available to subclasses within the same library. It's actually quite useful to have that distinction. For example, in WPF you end up subclassing the system classes (which are in separate DLL's) repeatedly and it is an encouraged pattern to extend existing functionality with your own. 
The reason I went with the syntax I did is that it doesn't add any new keywords and is, I think, even more clear about the programmer's intentions than C#'s scoping model. If 'export' really is just the equivalent of __declspec(dllexport) then it makes even more sense as an attribute and not a scope in its own right. However, this raises a problem, specifically, how do I export a protected member from a dynamic library? Do you mean how does one directly call a protected function inside a DLL from outside that DLL, without going through the virtual table? What I am after is a member that can be overridden in a subclass outside of the DLL but is otherwise only reachable within the module it's defined in. Mr. Carlborg's code example is spot on. It isn't necessary to export a protected symbol in order to override it. I see, that's a neat little trick. +1 for D! So for DI file generation I should just include the protected members in the file and let the compiler sort it out? I know DLL's are relatively new to D so not much documentation exists about how they work in D. I think this would be something good to document, especially the special behaviors for Windows. I have to admit that, coming from a C# and C++ background, I've been a little confused by how the scope system works in relation to dynamic libraries and have been shooting in the dark trying to figure it out. Thanks for the clarifications! Hopefully this lesson will become part of the D lore surrounding dynamic libs. :-) -- Adam Wilson Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Re: 64-bit DMD for windows?
On Sat, 17 Dec 2011 11:14:51 -0800, torhu no@spam.invalid wrote: On 17.12.2011 19:05, Andrej Mitrovic wrote: Windows still ships with edit, which has more features than notepad. Heheh. cmd.exe /c edit Makes me wonder what it's for, can you run a Windows server without the GUI? Starting with Windows Server 2008 there is something called the Server Core role, which has no GUI. And they've been improving it ever since. MS is having a back-to-the-basics push internally right now. -- Adam Wilson Project Coordinator The Horizon Project http://www.thehorizonproject.org/
CURL Wrapper: Vote Thread
The time has come to vote on the inclusion of Jonas Drewsen's CURL wrapper in Phobos. Code: https://github.com/jcd/phobos/blob/curl-wrapper/etc/curl.d Docs: http://freeze.steamwinter.com/D/web/phobos/etc_curl.html For those of you on Windows, a libcurl binary built by DMC is available at http://gool.googlecode.com/files/libcurl_7.21.7.zip. Voting lasts one week and ends on 12/24.
Re: auto testing
On 12/17/2011 4:56 AM, Robert Clipsham wrote: On 17/12/2011 06:40, Brad Roberts wrote: On 12/16/2011 1:29 PM, Brad Anderson wrote: On Thu, Dec 15, 2011 at 6:43 PM, Brad Roberts bra...@puremagic.com wrote: Left to do: 1) deploy changes to the tester hosts (it's on 2 already) done 2) finish the ui very ugly but minimally functional: http://d.puremagic.com/test-results/pulls.ghtml 3) trigger pull rebuilds when trunk is updated partly implemented, but not being done yet Idea: I noticed most pull requests were failing when I looked at it, due to the main build failing - that's a lot of wasted computing time. Perhaps it would be a good idea to refuse to test pulls if dmd HEAD isn't compiling? This would be problematic for the 1/100 pull requests designed to fix this, but would save a lot of testing. An alternative method could be to test all of them, but if the pull request previously passed, then dmd HEAD broke, then the pull broke, stop testing until dmd HEAD is fixed. Yeah. I know I need to do something in that space and just haven't yet. This whole thing is only a few evenings old and is just now starting to really work. I'm still focused on making sure it's grossly functional enough to be useful. Optimizations and polish will wait a little longer. Thanks, Brad
Re: Export and Protected Scoping in Dynamic Libraries
On 12/17/2011 11:22 AM, Adam Wilson wrote: On Sat, 17 Dec 2011 01:33:33 -0800, Walter Bright newshou...@digitalmars.com wrote: On 12/16/2011 9:01 PM, Adam Wilson wrote: On Fri, 16 Dec 2011 01:34:32 -0800, Walter Bright newshou...@digitalmars.com wrote: On 12/14/2011 11:41 AM, Adam Wilson wrote: Hello Everyone, I want to start this conversation by pointing out that I come from a C/C++/C# background and my ideas and frustrations in this post will be colored by that history. When I first approached D, the idea of an 'export' confused me. I've since figured out, at least in libraries, that in C# terms D's 'export' means 'public'. I'm not too familiar with C#'s public, but what D 'export' means is a function is an entry point for a DLL. In the Windows world, that means it gets an extra level of indirection when calling it, and it corresponds to: __declspec(export) in Windows compilers. Huh, I didn't know that. It makes sense though, I recall using __declspec(dllexport) in MSVC and hating the unwieldy syntax. Does it perform a similar function on other OS'es? No, as such magic isn't necessary. Any public symbol will be automatically accessible from a shared library. So my understanding then is that, for dynamic libs on OS'es like Linux/OSX, DI files also need to include members marked public? The member function will be called through the vtbl[]. But for the compiler to know that member even exists, and where it is in the vtbl[], it needs to be in the .di file. And how does the compiler treat the export keyword on systems that don't need it? I am assuming they are treated as publics? Yes. C#'s public means this function can be used anyone, including other libraries or executables, which according to the D docs, is what D's export means. C# also has 'protected' and 'internal protected'. 'protected' is available to any subclass, even those outside of the library, and 'internal protected' is only available to subclasses within the same library. 
It's actually quite a useful to have that distinction. For example, in WPF you end up subclassing the system classes (which are in separate DLL's) repeatedly and it is an encouraged pattern to extend existing functionality with your own. The reason I went with the syntax I did is that it doesn't add any new keywords and is, I think, even more clear about the programmers intentionality than C#'s scoping model. If 'export' really is just the equivalent of __declspec(dllexport) then it makes even more sense as an attribute and not a scope in it's own right. However, this raise a problem, specifically, how do I export a protected member from a dynamic library? Do you mean how does one directly call a protected function inside a DLL from outside that DLL, without going through the virtual table? What I am after is a member that can be overridden in a subclass outside of the DLL but is otherwise only reachable within the module it's defined in. Mr. Carlborg's code example is spot on. It isn't necessary to export a protected symbol in order to override it. I see, that's a neat little trick. +1 for D! C++ works the same way. So for DI file generation I should just include the protected members in the file and let the compiler sort it out? Yes. I know DLL's are relatively new to D so not much documentation exists about how they work in D. I think this would be something good to document, especially the special behaviors for Windows. I have to admit that, coming from a C# and C++ background, I've been a little confused by how the scope system works in relation to dynamic libraries and have been shooting the dark trying to figure it out. Thanks for the clarifications! Hopefully this lesson will become part of the D lore surrounding dynamic libs. :-) DLL's have been supported in D forever, but hardly anyone uses them, and sometimes they get bit rotted. I strongly recommend Jeffrey Richter's book Advanced Windows for a low level and lucid explanation of how DLLs work.
Re: Second Round CURL Wrapper Review
The docs for onReceiveHeader, http://freeze.steamwinter.com/D/web/phobos/etc_curl.html#onReceiveHeader explicitly says that the const string parameters are not valid after the function returns. This is all well and good; but a minor improvement would be to change the signature to: @property void onReceiveHeader(void delegate(in char[] key, in char[] value) callback); to add `scope` to the parameters. This documents the fact that the constant strings shouldn't be escaped as-is in both code and documentation. I suspect many other instances of `const(char)[]` could be changed to `in char[]` to better document how they use the parameters (I also personally think it looks better as it is more concise).
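To make the suggestion concrete, here is a hedged sketch; `HttpLike` and `deliver` are invented stand-ins, not the wrapper's real API. The `in char[]` (i.e. const scope) parameters document in the signature itself that the buffers are transient and must be copied if retained:

```d
// Hypothetical stand-in demonstrating `in char[]` callback parameters.
struct HttpLike
{
    private void delegate(in char[] key, in char[] value) onReceiveHeader_;

    @property void onReceiveHeader(
        void delegate(in char[] key, in char[] value) callback)
    {
        onReceiveHeader_ = callback;
    }

    // Simulate libcurl handing us a transient header buffer, split at
    // index `sep` (the position of the ':').
    void deliver(char[] buf, size_t sep)
    {
        if (onReceiveHeader_ !is null)
            onReceiveHeader_(buf[0 .. sep], buf[sep + 1 .. $]);
    }
}

void main()
{
    string seen;
    HttpLike h;
    h.onReceiveHeader = (in char[] k, in char[] v) {
        // Anything that must outlive the callback is copied with idup.
        seen = k.idup ~ "=" ~ v.idup;
    };
    char[] buffer = "Content-Type:text/html".dup;
    h.deliver(buffer, 12);
    assert(seen == "Content-Type=text/html");
}
```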
Re: Second Round CURL Wrapper Review
On Saturday, 17 December 2011 at 11:56:04 UTC, jdrewsen wrote: AutoConnect sounds like a command to connect automatically which might be confusing since that is not what it does. Therefore I went with AutoConnection which I still believe is better. How about something completely different then, like URLInfer? Also, some instances in the docs of url in lowercase should probably be changed to uppercase.
Re: Program size, linking matter, and static this()
On Sat, 17 Dec 2011 21:20:33 +0200, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: On 12/17/11 6:34 AM, so wrote: If you are using singleton in your C++/D (or any other M-P language) code, do yourself a favor and trash that book you learned it from. --- class A { static A make(); } class B; B makeB(); --- What A.make can do makeB can not? (Other than creating objects of two different types :P ) Singleton has two benefits. One, you can't accidentally create more than one instance. The second, which is often overlooked, is that you still benefit of polymorphism (as opposed to making its state global). Andrei Now I am puzzled; makeB does both and does better. (better as it doesn't expose any detail to the user)
Re: Program size, linking matter, and static this()
On Saturday, 17 December 2011 at 21:02:58 UTC, so wrote: On Sat, 17 Dec 2011 21:20:33 +0200, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: On 12/17/11 6:34 AM, so wrote: If you are using singleton in your C++/D (or any other M-P language) code, do yourself a favor and trash that book you learned it from. --- class A { static A make(); } class B; B makeB(); --- What A.make can do makeB can not? (Other than creating objects of two different types :P ) Singleton has two benefits. One, you can't accidentally create more than one instance. The second, which is often overlooked, is that you still benefit of polymorphism (as opposed to making its state global). Andrei Now i am puzzled, makeB does both and does better. (better as it doesn't expose any detail to user) Both of your examples are the singleton pattern if `make` returns the same instance every time, and arguably (optionally?) A or B shouldn't be instantiable in any other way. I suspect that the reason a static member function is prevalent is because it's easy to just make the constructor private (and not have to mess with things like C++'s `friend`). In D, there's no real difference because you can still use private members as long as you're in the same module. The only difference between them I can see is that the module-level function doesn't expose the class name directly when using the function, which is but a minor improvement.
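A hedged sketch of the point above (Config, theConfig, and config are invented names): because `private` in D is module-scoped, a module-level factory can guard single construction with no static member and no `friend` machinery.

```d
// Hypothetical module-level singleton factory.
class Config
{
    private this() {} // constructible only within this module

    string name = "default";
}

private __gshared Config theConfig;

Config config()
{
    if (theConfig is null)   // not thread-safe; minimal sketch
        theConfig = new Config;
    return theConfig;
}

void main()
{
    assert(config() is config());     // always the same instance
    assert(config().name == "default");
    // `new Config` outside this module would be a compile error,
    // since the constructor is private to the module.
}
```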
Re: CURL Wrapper: Vote Thread
On 2011-12-17 22:36:15 +0300, dsimcha said: The time has come to vote on the inclusion of Jonas Drewsen's CURL wrapper in Phobos. Code: https://github.com/jcd/phobos/blob/curl-wrapper/etc/curl.d Docs: http://freeze.steamwinter.com/D/web/phobos/etc_curl.html For those of you on Windows, a libcurl binary built by DMC is available at http://gool.googlecode.com/files/libcurl_7.21.7.zip. Voting lasts one week and ends on 12/24. Yes.
Re: CURL Wrapper: Vote Thread
The time has come to vote on the inclusion of Jonas Drewsen's CURL wrapper in Phobos. yes.
Re: Second Round CURL Wrapper Review
On Thursday, 15 December 2011 at 08:51:29 UTC, Jonathan M Davis wrote: Line# 235 is identical to line# 239. Shouldn't line# 235 be creating an Http object, not an Ftp object? That mistake definitely makes it look like download hasn't been properly tested. It has been tested as you can see by the unittest just below the function. It just did not fail because libcurl requires the same setup for this download situation for both ftp and http. Will fix it of course. You should create a template similar to template isCurlConn(Conn) { enum isCurlConn = is(Conn : Http) || is(Conn : Ftp) || is(Conn : AutoConnection); } It would reduce code duplication. ok The functions which have a template parameter T = char really need to have that parameter properly explained in the documentation. From the documentation, I would have thought that it was any character type, but it's char or ubyte. There is no hint whatsoever in the documentation that ubyte would work. All of the parameters need to be made clear. That goes for all functions. ok Is there a reason that the functions which have a template argument T which can be either a char or a ubyte can't work with immutable char? Glancing at _basicHttp, I don't see any reason why T couldn't be immutable char. Yes, it would require casting to immutable(char)[], but you're already casting to char[], and the data being returned appears to be unique such that it could be safely cast to immutable. That being the case, I'd encourage you to not only make it work with immutable char but to make immutable char the default instead of char. It is indeed unique and can be cast to immutable. I'll add that option. Why do you think immutable char as the default is better than char? I know that the return type in that case would be string and not char[] - but why is that better? 
Once you've fixed the exception types like I requested in my first review, you can and should use enforceEx!CurlException(cond, msg) instead of enforce(cond, new CurlException(msg)). ok it may not matter, since we're dealing with strings here, but I'd argue that !empty should be used rather than length == 0 (e.g. line# 475). In stuff other than strings it definitely can be more efficient, and even with strings, it may be, since I think that checking whether the length is 0 (as empty does) is slightly more efficient than checking whether it's greater than 0. ok Please make sure that opening braces are on their own line. That's the way that the rest of Phobos does it and it's one of the few formatting rules that we've been trying to use consistently throughout Phobos. For the most part, you get it right, but not everywhere - nested functions in particular seem to have braces on the same line for some reason. When I look through the code it seems ok with regards to braces on own line. Maybe by nested functions you mean delegates? The delegates I use do indeed have braces on same line which is ok afaik. It's not a big deal, but you should use auto more. For instance, lines #626, #638, and #640 don't use auto when they could. I'm a bit in doubt. On one hand it is great to make everything auto in order to make it easy to change type and to remove redundancy. On the other hand it is very convenient to be able to see the type when you read the code. I know that a clever editor would be able to figure out the type for me. But I also read code in normal editors for diffs etc. or on github. Anyway - I've changed as much as possible to auto now. I have no idea why you keep putting parens around uses of the in operator, but it's not necessary and makes the code harder to read IMHO. It's certainly not required that you change that, but I'd appreciate it if you did. Bad habit. Will remove. I see that regexes are used in the module. 
Please make sure that they still work correctly with the new std.regex. They probably do, but it's not 100% backwards compatible. ok byLine's template constraint needs to verify that the types of Terminator and Char are valid. As it is, I could try and pass it something like an Http struct if I wanted to. In general, _all_ templates need to verify that their arguments are of the appropriate type for _all_ of their arguments. ok It's odd that popFront (line #780) is not @safe when empty and front are. Does findSplit or findSplitAfter prevent it? exactly With a name like Char on byLine, I'd expect it to take any char type, but not only do you not verify the type (as previously mentioned), but you instantiate _basicHttp with it, which works with char and ubyte, not any char. If Char is going to take char and ubyte specifically, then Char is a bad name for it. I'll verify the type. For byLine it will only accept char types. I'd suggest changing line# 823 to auto result = get(url, isFtpUrl(url) ? FTP() : Http()); Not possible. get() is a template function where the template parameter is the connection type.
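A hedged sketch of the kind of constraint the review asks for; `byLineSketch` and its body are invented stand-ins (the real byLine streams from libcurl), but the constraint pattern is the point:

```d
// Hypothetical constraint restricting Char to the supported types.
import std.algorithm : splitter;
import std.array : array;

enum bool isSupportedChar(Char) = is(Char == char) || is(Char == ubyte);

auto byLineSketch(Char = char, Terminator = char)
                 (string url, Terminator terminator = '\n')
    if (isSupportedChar!Char && is(Terminator : dchar))
{
    // Stand-in body: split the input instead of performing a request.
    return url.splitter(terminator);
}

void main()
{
    assert(byLineSketch("a\nb").array == ["a", "b"]);
    // Invalid instantiations are now rejected at compile time:
    static assert(!__traits(compiles, byLineSketch!int("x")));
}
```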
Re: Second Round CURL Wrapper Review
On Saturday, 17 December 2011 at 20:38:46 UTC, Jakob Ovrum wrote: On Saturday, 17 December 2011 at 11:56:04 UTC, jdrewsen wrote: AutoConnect sounds like a command to connect automatically which might be confusing since that is not what it does. Therefore I went with AutoConnection which I still believe is better. How about something completely different then, like URLInfer? Also, some instances in the docs of url in lowercase should probably be changed to uppercase. Will change case. Don't know if the URLInfer name is better though. /Jonas
Re: Second Round CURL Wrapper Review
On 12/2/11 10:26 PM, dsimcha wrote: I volunteered ages ago to manage the review for the second round of Jonas Drewsen's CURL wrapper. After the first round it was decided that, after a large number of minor issues were fixed, a second round would be necessary. Significant open issues: 1. Should libcurl be bundled with DMD on Windows? Yes. 2. etc.curl, std.curl, or std.net.curl? (We had a vote a while back but it was buried deep in a thread and a lot of people may have missed it: http://www.easypolls.net/poll.html?p=4ebd3219011eb0e4518d35ab ) std.net.curl Code: https://github.com/jcd/phobos/blob/curl-wrapper/etc/curl.d Docs: http://freeze.steamwinter.com/D/web/phobos/etc_curl.html I have a few comments. They are not critical for the library's admission (apologies I didn't make the deadline) and could be integrated before the commit. 1. Tables must be centered (this is a note to self) 2. Replace d-p-l.org in examples with dlang.org 3. Compared to using libcurl directly this module provides a simpler API for performing common tasks. - Compared to using libcurl directly this module allows simpler client code for common uses, requires no unsafe operations, and integrates better with the rest of the language. 4. From the high level ops it seems there's no way to issue a PUT/POST and then direct the result to a file, or read the result with a foreach. 5. Also there seems to be no way to issue a PUT/POST from a range. 6. s/Examples:/Example:/ 7. s/HTTP http = HTTP(www.d-p-l.org);/auto http = HTTP(www.d-p-l.org);/ 8. receives a header or a data buffer - receives a header and a data buffer, respectively 9. In this simple example, the headers are written to stdout and the data is ignored. How can we make it stop the request? A sentence about that would be great. 10. Finally the HTTP request is started by calling perform(). - Finally the HTTP request is effected by calling perform(), which is synchronous. 11. Use InferredProtocol instead of AutoConnect(ion). 
The type is exactly what it claims. If inferred is too long, AutoProtocol would be fine too. But it's not the connection that is automated. The word must not be a verb, i.e. InferProtocol/DeduceProtocol would not be good. 12. The example for InferredProtocol nee AutoConnection does not show any interesting example, e.g. ftp. 13. I think download() and upload() are good convenience functions but are very limited. Such operations can last arbitrarily long and it's impossible to stop them. Therefore, they will foster poor program design (e.g. program hangs until you kill it etc). We should keep these for convenience in scripts, and we already have byXxx() for downloading, but we don't have anything for uploading. We should offer output ranges for uploading sync/async. Maybe in the next minor release. 14. Parameter doc for put(), post(), del() etc. is messed up. 15. I don't think connection is the right term. For example we have T[] get(Conn = AutoConnection, T = char)(const(char)[] url, Conn conn = Conn()); The connection is established only during the call to get(), so conn is not really a connection - more like a sort of a bundle of connection parameters. But then it does clean up the connection once established, so I'm unsure... 16. For someone who's not versed in HTTP options, the example string content = options(d-programming-language.appspot.com/testUrl2, something); isn't terribly informative. 17. In byLine: A range of lines is returned when the request is complete. This suggests the last byte has been read as the client gets to the first, which is not the case. Data will be read on demand as the foreach loop makes progress. Similar for byChunk. 18. In byXxxAsync wait(Duration) should be cross-referenced to its definition. = Overall: I think this is a valuable addition to Phobos, but I have the feeling we don't have a good scenario for interrupting connections. 
For example, if someone wants to offer a Cancel button, the current API does not give them a robust option to do so: all functions that transfer data are potentially blocking, and there's no thread-shared way to interrupt a connection in a thread from another thread. Such functionality may be a pure addition later, so I think we can commit this as is. Please use std.net.curl. Thanks, Andrei
Re: CURL Wrapper: Vote Thread
Yes
Re: Export and Protected Scoping in Dynamic Libraries
On 12/17/11 1:22 PM, Adam Wilson wrote: I know DLL's are relatively new to D so not much documentation exists about how they work in D. I think this would be something good to document, especially the special behaviors for Windows. I have to admit that, coming from a C# and C++ background, I've been a little confused by how the scope system works in relation to dynamic libraries and have been shooting the dark trying to figure it out. Thanks for the clarifications! Hopefully this lesson will become part of the D lore surrounding dynamic libs. :-) The best thing you'd do is to write an article - an RFC - about it, which we publish on the website. While in draft mode, the article documents what needs be done, and after the implementation of shared libs is done (and there's a lot of momentum behind it), the article will become a great description of it. Andrei
Re: CURL Wrapper: Vote Thread
On 12/17/11 1:36 PM, dsimcha wrote: The time has come to vote on the inclusion of Jonas Drewsen's CURL wrapper in Phobos. Code: https://github.com/jcd/phobos/blob/curl-wrapper/etc/curl.d Docs: http://freeze.steamwinter.com/D/web/phobos/etc_curl.html For those of you on Windows, a libcurl binary built by DMC is available at http://gool.googlecode.com/files/libcurl_7.21.7.zip. Voting lasts one week and ends on 12/24. yes Andrei
Re: CURL Wrapper: Vote Thread
yes
Re: Second Round CURL Wrapper Review
On Saturday, 17 December 2011 at 22:25:10 UTC, jdrewsen wrote: On Saturday, 17 December 2011 at 20:38:46 UTC, Jakob Ovrum wrote: On Saturday, 17 December 2011 at 11:56:04 UTC, jdrewsen wrote: AutoConnect sounds like a command to connect automatically, which might be confusing since that is not what it does. Therefore I went with AutoConnection, which I still believe is better. How about something completely different then, like URLInfer? Also, some instances in the docs of url in lowercase should probably be changed to uppercase. Will change case. Don't know if the URLInfer name is better though. /Jonas Andrei is right, a noun is better. I'm with InferredProtocol.
Re: Double Checked Locking
On Saturday, December 17, 2011 09:50:26 Andrei Alexandrescu wrote: On 12/17/11 1:56 AM, Andrew Wiley wrote: On Sat, Dec 17, 2011 at 1:47 AM, Andrew Wiley <wiley.andre...@gmail.com> wrote: I was looking through Jonathan Davis's pull request to remove static constructors from std.datetime, and I realized that I don't know whether Double Checked Locking is legal under D's memory model, and what the requirements for it to work would be. (if you're not familiar with the term, check out http://en.wikipedia.org/wiki/Double-checked_locking - it's a useful but problematic programming pattern that can cause subtle concurrency bugs) It seems like it should be legal as long as the variable tested and initialized is flagged as shared so that the compiler enforces proper fences, but is this actually true? This entry in the FAQ makes me suspicious: ``` What does shared have to do with memory barriers? Reading/writing shared data emits memory barriers to ensure sequential consistency (not implemented). ``` So DCL should be alright with data flagged as shared, but it's not implemented in the compiler? That is correct. Well, you learn something new every day I guess. I'd never even heard of double-checked locking before this. I came up with it on my own in an attempt to reduce how much the mutex was used. Is the problem with it that the write isn't actually atomic? Wikipedia makes it sound like the problem might be that the object might be partially initialized but not fully initialized, which I wouldn't have thought possible, since I would have thought that the object would be fully initialized and _then_ the reference would be assigned to it. And it's my understanding that a pointer assignment like that would be atomic. Or is there more going on than that, making it so that the assignment itself really isn't atomic? - Jonathan M Davis
Re: Double Checked Locking
On 12/17/11 5:03 PM, Jonathan M Davis wrote: Well, you learn something new every day I guess. I'd never even heard of double-checked locking before this. I came up with it on my own in an attempt to reduce how much the mutex was used. Is the problem with it that the write isn't actually atomic? Wikipedia makes it sound like the problem might be that the object might be partially initialized but not fully initialized, which I wouldn't have thought possible, since I would have thought that the object would be fully initialized and _then_ the reference would be assigned to it. And it's my understanding that a pointer assignment like that would be atomic. Or is there more going on than that, making it so that the assignment itself really isn't atomic? There's so much going on about double-checked locking, it's not even funny. Atomic assignments have the least to do with it. Check this out: http://goo.gl/f0VQG Andrei
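For readers unfamiliar with the pattern, a minimal sketch of double-checked locking in D might look like the following. This assumes core.atomic's atomicLoad/atomicStore supply the acquire/release fences that plain reads and writes of shared data do not yet emit (per the FAQ entry quoted above); the shared-qualifier plumbing is simplified:

```d
// Sketch of double-checked locking, assuming explicit atomics provide
// the fences that bare shared accesses do not (yet) emit.
import core.atomic : atomicLoad, atomicStore, MemoryOrder;

class Resource { int value; }

shared Resource cached;

Resource getResource()
{
    // First check without taking the lock ("double-checked").
    auto r = atomicLoad!(MemoryOrder.acq)(cached);
    if (r is null)
    {
        synchronized
        {
            // Re-check under the lock: another thread may have won.
            r = atomicLoad!(MemoryOrder.acq)(cached);
            if (r is null)
            {
                auto fresh = new shared(Resource);
                // The release store must happen *after* construction
                // finishes; without it, another thread could observe
                // the reference before the object's fields are written
                // - the "partially initialized" hazard discussed above.
                atomicStore!(MemoryOrder.rel)(cached, fresh);
                r = fresh;
            }
        }
    }
    return cast(Resource) r;
}
```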
Re: Export and Protected Scoping in Dynamic Libraries
On Sat, 17 Dec 2011 14:27:12 -0800, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: On 12/17/11 1:22 PM, Adam Wilson wrote: I know DLL's are relatively new to D so not much documentation exists about how they work in D. I think this would be something good to document, especially the special behaviors for Windows. I have to admit that, coming from a C# and C++ background, I've been a little confused by how the scope system works in relation to dynamic libraries and have been shooting in the dark trying to figure it out. Thanks for the clarifications! Hopefully this lesson will become part of the D lore surrounding dynamic libs. :-) The best thing you could do is write an article - an RFC - about it, which we publish on the website. While in draft mode, the article documents what needs to be done, and after the implementation of shared libs is done (and there's a lot of momentum behind it), the article will become a great description of it. Andrei This is a fantastic idea, Andrei; I'll see what I can come up with. It shouldn't be that hard: mostly I'll just have to document the process, talk about the limitations of dynamic libraries, and make sure that the scoping issues are clearly defined and explained. -- Adam Wilson Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Re: Export and Protected Scoping in Dynamic Libraries
On 12/17/11 5:14 PM, Adam Wilson wrote: This is a fantastic idea, Andrei; I'll see what I can come up with. It shouldn't be that hard: mostly I'll just have to document the process, talk about the limitations of dynamic libraries, and make sure that the scoping issues are clearly defined and explained. Great. Please format your RFC drafts as pull requests for d-p-l.org (use any existing page as a template, e.g. https://github.com/D-Programming-Language/d-programming-language.org/blob/master/abi.dd), and I'll upload them as we make progress. Andrei
Re: CURL Wrapper: Vote Thread
YES! Finally an easy way to do SMTP.
Re: std.container and classes
On 12/13/11 9:08 PM, Jonathan M Davis wrote: Is the plan for std.container still to have all of its containers be final classes (classes so that they're reference types and final so that their functions are inlinable)? Or has that changed? I believe that Andrei said something recently about discussing reference counting and containers with Walter. The reason that I bring this up is that Array and SList are still structs, and the longer that they're structs, the more code that will break when they get changed to classes. Granted, some level of code breakage may occur when we add custom allocators to them, but since that would probably only affect the constructor (and preferably wouldn't affect anything if you want to simply create a container with the GC heap as you would now were Array and SList classes), the breakage for that should be minimal. Is there any reason for me to not just go and make Array and SList final classes and create a pull request for it? - Jonathan M Davis Apologies for being slow on this. It may be a fateful time to discuss that right now, after all the discussion of what's appropriate for stdlib vs. application code etc. As some of you know, Walter and I went back and forth several times on this. First, there was the issue of making containers value types vs. reference types. Making containers value types would be in keeping with the STL approach. However, Walter noted that copying entire containers by default is most often NOT desirable and there's significant care and adornments in C++ programs to make sure that that default behavior is avoided (e.g. adding const to function parameters). So we decided to make containers reference types, and that seemed to be a good choice. The second decision is classes vs. structs. Walter correctly pointed out that the obvious choice for defining a reference type in D - whether the type is monomorphic or polymorphic - is making it a class.
If containers aren't classes, the reasoning went, it means we took a wrong step somewhere; it might mean our flagship abstraction for reference types is not suitable for, well, defining a reference type. Fast forward a couple of months, a few unslept nights, and a bunch of newsgroup and IRC conversations. Several additional pieces came together. The most important thing I noticed is that people expect standard containers to have sophisticated memory management. Many ask not about containers as much as containers with custom allocators. Second, containers tend to be large memory users by definition. Third, containers are self-contained (heh) and relatively simple in terms of what they model, meaning that they _never_ suffer from circular references, like general entity types might. All of these arguments very strongly suggest that many want containers to be types with deterministic control over memory that accept configurable allocation strategies (regions, heap, malloc, custom). So that would mean containers should be reference counted structs. This cycle of thought has happened twice, and the evidence coming in the second time has been stronger. The first time around I went about and started implementing std.container with reference counting in mind. The code is not easy to write, and is not to be recommended for most types, hence my thinking (at the end of the first cycle) that we should switch to class containers. One fear I have is that people would be curious, look at the implementation of std.container, and be like "so am I expected to do all this to define a robust type?" I start to think that the right answer to that is to improve library support for good reference counted types, and define reference counted struct containers that are deterministic. Safety is also an issue. I was hoping I'd provide safety as a policy, e.g. one may choose for a given container whether they want safe or not (and presumably fast).
I think it's best to postpone that policy and focus for now on defining safe containers with safe ranges. This precludes e.g. using T[] as a range for Array!T. Please discuss. Andrei
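As a rough illustration of what a reference counted struct container involves (a simplified sketch only; the real std.container code must additionally handle allocators, safety, and ranges):

```d
// Simplified sketch of a reference counted container struct. Copies
// share one payload and bump a count; the last copy to be destroyed
// releases it - which is where a custom allocator would plug in.
struct RCArray(T)
{
    private static struct Impl
    {
        T[] data;
        size_t refs;
    }
    private Impl* impl;

    this(T[] data)
    {
        impl = new Impl(data, 1);
    }

    this(this) // postblit: copying shares the payload
    {
        if (impl !is null) ++impl.refs;
    }

    ~this()
    {
        if (impl !is null && --impl.refs == 0)
        {
            // Deterministic release point; a configurable allocation
            // strategy (region, malloc, ...) would free impl here.
            impl = null;
        }
    }

    @property size_t length() const
    {
        return impl is null ? 0 : impl.data.length;
    }

    ref T opIndex(size_t i) { return impl.data[i]; }
}
```

Even this toy version shows why the code "is not easy to write": every copy, assignment, and destruction path has to keep the count consistent.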
Re: Export and Protected Scoping in Dynamic Libraries
On Sat, 17 Dec 2011 11:50:39 -0800, Walter Bright newshou...@digitalmars.com wrote: On 12/17/2011 11:22 AM, Adam Wilson wrote: On Sat, 17 Dec 2011 01:33:33 -0800, Walter Bright newshou...@digitalmars.com wrote: On 12/16/2011 9:01 PM, Adam Wilson wrote: On Fri, 16 Dec 2011 01:34:32 -0800, Walter Bright newshou...@digitalmars.com wrote: On 12/14/2011 11:41 AM, Adam Wilson wrote: Hello Everyone, I want to start this conversation by pointing out that I come from a C/C++/C# background and my ideas and frustrations in this post will be colored by that history. When I first approached D, the idea of an 'export' confused me. I've since figured out, at least in libraries, that in C# terms D's 'export' means 'public'. I'm not too familiar with C#'s public, but what D 'export' means is a function is an entry point for a DLL. In the Windows world, that means it gets an extra level of indirection when calling it, and it corresponds to: __declspec(export) in Windows compilers. Huh, I didn't know that. It makes sense though, I recall using __declspec(dllexport) in MSVC and hating the unwieldy syntax. Does it perform a similar function on other OS'es? No, as such magic isn't necessary. Any public symbol will be automatically accessible from a shared library. So my understanding then is that, for dynamic libs on OS'es like Linux/OSX, DI files also need to include members marked public? The member function will be called through the vtbl[]. But for the compiler to know that member even exists, and where it is in the vtbl[], it needs to be in the .di file. And how does the compiler treat the export keyword on systems that don't need it? I am assuming they are treated as publics? Yes. Understood. I'll update my DI gen changes to bring those in. C#'s public means this function can be used by anyone, including other libraries or executables, which according to the D docs, is what D's export means. C# also has 'protected' and 'internal protected'.
'protected' is available to any subclass, even those outside of the library, and 'internal protected' is only available to subclasses within the same library. It's actually quite useful to have that distinction. For example, in WPF you end up subclassing the system classes (which are in separate DLL's) repeatedly and it is an encouraged pattern to extend existing functionality with your own. The reason I went with the syntax I did is that it doesn't add any new keywords and is, I think, even more clear about the programmer's intentions than C#'s scoping model. If 'export' really is just the equivalent of __declspec(dllexport) then it makes even more sense as an attribute and not a scope in its own right. However, this raises a problem, specifically, how do I export a protected member from a dynamic library? Do you mean how does one directly call a protected function inside a DLL from outside that DLL, without going through the virtual table? What I am after is a member that can be overridden in a subclass outside of the DLL but is otherwise only reachable within the module it's defined in. Mr. Carlborg's code example is spot on. It isn't necessary to export a protected symbol in order to override it. I see, that's a neat little trick. +1 for D! C++ works the same way. I may have known that at some point, but for the most part in the classes I took, they said it just works and left it at that without any further explanation. So for DI file generation I should just include the protected members in the file and let the compiler sort it out? Yes. I know DLL's are relatively new to D so not much documentation exists about how they work in D. I think this would be something good to document, especially the special behaviors for Windows. I have to admit that, coming from a C# and C++ background, I've been a little confused by how the scope system works in relation to dynamic libraries and have been shooting in the dark trying to figure it out.
Thanks for the clarifications! Hopefully this lesson will become part of the D lore surrounding dynamic libs. :-) DLL's have been supported in D forever, but hardly anyone uses them, and sometimes they get bit rotted. Well, as long as I'm around I'll be sure to pester someone when it starts to get stale. Dynamic libraries are fundamental to my projects. I strongly recommend Jeffrey Richter's book Advanced Windows for a low level and lucid explanation of how DLLs work. I'll look into it ... my bookshelf is already creaking under the strain of all the programming books I should have. :-) -- Adam Wilson Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Re: CURL Wrapper: Vote Thread
On Sat, 17 Dec 2011 14:36:15 -0500, dsimcha wrote: The time has come to vote on the inclusion of Jonas Drewsen's CURL wrapper in Phobos. Code: https://github.com/jcd/phobos/blob/curl-wrapper/etc/curl.d Docs: http://freeze.steamwinter.com/D/web/phobos/etc_curl.html For those of you on Windows, a libcurl binary built by DMC is available at http://gool.googlecode.com/files/libcurl_7.21.7.zip. Voting lasts one week and ends on 12/24. Yes.
Re: Double Checked Locking
Both concurrent execution and a compiler that assumes a single threaded execution model can do really weird things in the name of optimization. Sent from my iPhone On Dec 17, 2011, at 3:03 PM, Jonathan M Davis jmdavisp...@gmx.com wrote: On Saturday, December 17, 2011 09:50:26 Andrei Alexandrescu wrote: On 12/17/11 1:56 AM, Andrew Wiley wrote: On Sat, Dec 17, 2011 at 1:47 AM, Andrew Wileywiley.andre...@gmail.com wrote: I was looking through Jonathan Davis's pull request to remove static constructors from std.datetime, and I realized that I don't know whether Double Checked Locking is legal under D's memory model, and what the requirements for it to work would be. (if you're not familiar with the term, check out http://en.wikipedia.org/wiki/Double-checked_locking - it's a useful but problematic programming pattern that can cause subtle concurrency bugs) It seems like it should be legal as long as the variable tested and initialized is flagged as shared so that the compiler enforces proper fences, but is this actually true? This entry in the FAQ makes me suspicious: ``` What does shared have to do with memory barriers? Reading/writing shared data emits memory barriers to ensure sequential consistency (not implemented). ``` So DCL should be alright with data flagged as shared, but it's not implemented in the compiler? That is correct. Well, you learn something new every day I guess. I'd never even heard of double-checked locking before this. I came up with it on my own in an attempt to reduce how much the mutex was used. Is the problem with it that the write isn't actually atomic? Wikipedia makes it sound like the problem might be that the object might be partially initialized but not fully initialized, which I wouldn't have thought possible, since I would have thought that the object would be fully initialized and _then_ the reference would be assigned to it. And it's my understanding that a pointer assignment like that would be atomic. 
Or is there more going on than that, making it so that the assignment itself really isn't atomic? - Jonathan M Davis
Re: LLVM talks 1: Clang for Chromium
On 12/17/2011 2:54 AM, bearophile wrote: Walter: x = x++; Define order of evaluation as rvalue, then lvalue. So I presume your desire is to define the semantics of all code like that, and statically refuse no cases like that. This is an acceptable solution. The advantage of this solution is that it gives no restrictions on the acceptable D code, and it avoids complexities caused by choosing what are the cases to refuse statically. I think you overlook the real advantage - how it works is guaranteed. Keep in mind that it is IMPOSSIBLE for any static analyzer to prove there are no order dependencies when the order is implementation defined. It's trivial to write code that'll pass the static analyzer but will rely on implementation defined behavior. You'll always be relying on faith-based programming with the C/C++ approach. Some of its disadvantages are that programmers are free to write very unreadable D code that essentially only the compiler is able to understand, D's approach on this allows one to reason reliably about how the code works - this does NOT contribute to making code unreadable. and that is hard to port to C/C++. D isn't designed to be ported to C/C++. D is full of features that are not reasonably portable to C/C++. It's a completely irrelevant point.
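The class of code under discussion can be as small as this (the comments reflect the two positions above):

```d
void main()
{
    int i = 0;
    int[2] a;
    // Under C/C++ rules it is unspecified whether i++ runs before or
    // after the lvalue a[i] is computed, so this line may write to a[0]
    // or a[1] depending on the compiler. Walter's proposal - evaluate
    // the rvalue, then the lvalue - would guarantee one meaning: the
    // increment happens first, so the store goes to a[1].
    a[i] = i++;
}
```

The point is not that such code is good style, but that its behavior becomes guaranteed rather than faith-based.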
Re: d future or plans for d3
On 12/18/2011 01:09 AM, Ruslan Mullakhmetov wrote: Hi all, I want to ask you about D's future, I mean the next big iteration of D, and propose a new feature: agent-based programming. Currently, after the introduction of C++11, I see the only advantages of D over C++11, except syntactic sugar, being the garbage collector and modules. C++11 does not change the relation between D and C++ a lot. Why do you think it does? I recently attended a student school (workshop) on multi-agent systems (MAS) and self-organizing systems. I was really impressed and thought that this probably is the silver bullet which Brooks declared to be absent. I mean agent-based programming as a foundation of self-organizing systems. If you are interested you can find a lot of information by googling. So I would like to get your feedback on introducing a new paradigm, the paradigm of agent programming, into D. Actually, I'm not deep into MAS, but as far as I know an agent is just an autonomous class, i.e. a class that has its own independent context of execution, that can communicate with other parties (agents) and can affect the environment if any (like an ant). So it would be nice to have this in the language core/library. There is Erlang, which already satisfies all requirements (as far as I know) of a MAS language. So the question is: does D need to take this paradigm? - Or concentrate on its current paradigms? The only advantage over Erlang I see is that D positions itself as an embedded programming language, a role which Erlang does not satisfy (am I right?). So I need your feedback on the following: (i) do you think that D needs to absorb the agent-programming paradigm The language does not have to be changed to get that to work. (ii) can it benefit D Yes. (iii) comparing to other modern languages IMO that is not a very important question. It is not a contest.
Re: std.container and classes
On 18/12/11 12:06 AM, Jesse Phillips wrote: But as to whether this should be what is implemented in the standard library, I don't know. You make mention of custom allocators and such. Is this interest going to be of benefit, or is it just something people are used to having from C++? If it makes sense to have such, then the containers should support it. Don't classes allow for custom allocators? Is there a reason that can't be used? Do we need to improve on that? Allocators are very commonly used in C++, although people rarely use them as they are implemented in the standard library because they were poorly designed. We can't repeat the same mistakes.
Re: Second Round CURL Wrapper Review
On Saturday, December 17, 2011 12:56:02 jdrewsen wrote: On Thursday, 15 December 2011 at 07:46:56 UTC, Jonathan M Davis I'd argue that the acronyms should be in all caps if camelcasing would require that the first letter of the acronym be capitalized and all lower case if camelcasing would require that the first letter of the acronym be lower case. We're not completely consistent in how we use acronyms in names in Phobos, but I believe that we primarily follow that rule when putting them in symbol names. So, for instance, it would be HTTP, FTP, and SMTP rather than Http, Ftp, and Smtp. I like Http better as personal taste. But if HTTP is preferred then I'll do that. Actually I think I will make a pull request to extend the dlang.org/dstyle.html doc with an example that we can point to when someone asks about styling. I've spent too much time restyling because there is no such style doc already and people are complaining about style in reviews anyway. I actually have such a pull request, but it's been languishing because there are some items in it that need to be discussed. I keep forgetting to bring them up in the Phobos news group so that they can be decided and we can move on with that. https://github.com/D-Programming-Language/d-programming-language.org/pull/16 I'd rename T to C or Char in the template parameters, since it's a character type rather than a generic type. Not a huge deal, but I think that it makes the code quicker to understand. I've named it T because it can also be ubyte. In that case C for Char is confusing. Yeah. I got that after reading the source, but that needs to be clearer in the documentation. Normally, when something is templated the way that that is, it's going to be templated on character type. And something which can be either ubyte or char but nothing else is rather abnormal. It needs to be clearer.
And it's possible that at some point, it really should be changed to be any character type and ubyte rather than just char or ubyte (depending on how that affects efficiency - basically if it's more efficient to make it work with dchar from the get-go rather than get a string and convert it, then it's more desirable to make it work with wchar and dchar, but if you're just going to have to duplicate it or convert it anyway, then you might as well just restrict it to char and ubyte). Why are some parameters const(char)[] and others string? Why not make them all const(char)[] or even actually templatize them on character type (one per parameter which is a string)? The parameters that are const(char)[] will accept both string and char[] which is what I want, because libcurl makes an internal copy of it anyway. If I typed it as string then you would have to idup a char[] for the parameter without any reason. The parameters that are string are internally passed to functions (in other std modules) that only accept strings. I could accept const(char)[] and idup it in order to have consistent parameter types, but that gives an unnecessary overhead. Okay. In general, functions should either take string or take a range of dchar. Taking a range of dchar is the most flexible, so it's generally the best (though this often results in specializations within the template for narrow strings - std.algorithm does that a lot). If you need a string specifically (e.g. all of the std.string functions specifically operate on strings), then taking an array that's templated on character type is generally better, because it's more flexible. If you need a string internally, then do _not_ use const(char)[] or const(C)[]. When you do that, you're forced to idup the string (or to!string is forced to idup it). It's far better to either take string (in which case the caller will idup if it's necessary) or to not use const. That way, the string is copied only when it actually needs to be copied.
That does sometimes result in functions which take string for some parameters and const(char)[] for others, which looks odd to the programmer calling it, but that's life I guess. It's less of an issue when ranges are used, however, because then they're all ranges and it's just how they're treated internally, I guess. I don't know how well you've done with this in general without looking over the code again. I know that you've generally taken const(char)[] instead of string, and sometimes you've iduped that. And those parameters need to be fixed to take string and use to!string so that the iduping isn't always necessary. Also, since in most cases, you're ultimately passing the strings to the C curl API, the benefit of operating on ranges of dchar instead of strings is debatable, so it's not necessarily a problem that you're not taking ranges of dchar. You do, however, need to make sure that you use string rather than const(char)[] in places where you're iduping so that we can avoid such unnecessary allocations. Shorten the URL
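The forced-copy point can be shown with a small sketch (takesString, viaConst, and viaString are hypothetical names, not part of the wrapper under review):

```d
// Hypothetical sketch: why a parameter that is iduped internally
// should be typed string rather than const(char)[].
void takesString(string s) { /* stores or forwards s */ }

// const(char)[] forces a copy on *every* call - even when the caller
// already had an immutable string in hand.
void viaConst(const(char)[] s)
{
    takesString(s.idup);
}

// string pushes the copy to the call site, so it is paid only when the
// caller actually starts from mutable data.
void viaString(string s)
{
    takesString(s);
}

void demo()
{
    string ready = "already immutable";
    viaConst(ready);  // allocates a needless duplicate
    viaString(ready); // no allocation

    char[] buf = "mutable data".dup;
    viaString(buf.idup); // the caller copies only in this case
}
```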
Re: Second Round CURL Wrapper Review
On Saturday, December 17, 2011 23:10:00 jdrewsen wrote: On Thursday, 15 December 2011 at 08:51:29 UTC, Jonathan M Davis wrote: Line# 235 is identical to line# 239. Shouldn't line# 235 be creating an Http object, not an Ftp object? That mistake definitely makes it look like download hasn't been properly tested. It has been tested as you can see by the unittest just below the function. It just did not fail because libcurl requires the same setup for this download situation for both ftp and http. Will fix it of course. Well, when there's code like that that is clearly wrong, it makes it look like it's either not being tested by the unit tests or that the unit tests haven't been run. I guess that this is just a case where the unit tests are unable to catch the problem. Is there a reason that the functions which have a template argument T which can be either a char or a ubyte can't work with immutable char? Glancing at _basicHttp, I don't see any reason why T couldn't be immutable char. Yes, it would require casting to immutable(char)[], but you're already casting to char[], and the data being returned appears to be unique such that it could be safely cast to immutable. That being the case, I'd encourage you to not only make it work with immutable char but to make immutable char the default instead of char. It is indeed unique and can be cast to immutable. I'll add that option. Why do you think immutable char as the default is better than char? I know that the return type in that case would be string and not char[] - but why is that better? strings can be sliced with impunity, because their elements are immutable. With char[] and const(char)[], you have to worry about the elements changing, so you're often forced to duplicate the string rather than simply slice it. As such, string is far preferable to char[] or const(char)[] in the general case.
It's even better when there's a choice between them, but if you have to choose, you should almost always choose string (there are exceptions though - e.g. buffers like in std.stdio.ByLine where it reuses its char[] buffer rather than allocating a new one). Please make sure that opening braces are on their own line. That's the way that the rest of Phobos does it and it's one of the few formatting rules that we've been trying to use consistently throughout Phobos. For the most part, you get it right, but not everywhere - nested functions in particular seem to have braces on the same line for some reason. When I look through the code it seems ok with regards to braces on own line. Maybe by nested functions you mean delegates? The delegates I use do indeed have braces on the same line which is ok afaik. In general, just make sure that braces are on their own line unless you're dealing with a one line lambda or something similar. You do have nested functions or delegates which don't do that. It's not a big deal, but you should use auto more. For instance, lines #626, #638, and #640 don't use auto when they could. I'm a bit in doubt. On one hand it is great to make everything auto in order to make it easy to change type and to remove redundancy. On the other hand it is very convenient to be able to see the type when you read the code. I know that a clever editor would be able to figure out the type for me. But I also read code in normal editors for diffs etc. or on github. Anyway - I've changed as much as possible to auto now. Just in general, it's best practice to use auto unless you can't. There _are_ exceptions to that rule, but that's almost always the way that it should be. I'd suggest changing line# 823 to auto result = get(url, isFtpUrl(url) ? FTP() : Http()); Not possible. get() is a template function where the template parameter is the connection type, i.e. one of FTP or HTTP. isFtpUrl(url) is evaluated at runtime and not compile time. Yeah. I missed that.
Stuff like that makes me wish that we had some kind of static ternary operator, but it would probably complicate the language too much. In some cases though, you might be able to use alias to fix the problem (static if the alias and then use the alias in the function call so that the entire function call isn't duplicated). - Jonathan M Davis
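For completeness, the alias workaround mentioned above applies when the choice *is* known at compile time (useFtp, Conn, and the empty FTP/HTTP structs here are illustrative stand-ins, not the wrapper's real types):

```d
// Sketch of the "static if + alias" trick: pick the template argument
// once, then write the call a single time. This only works when the
// condition is a compile-time constant, unlike isFtpUrl(url).
struct FTP { }
struct HTTP { }

enum useFtp = false; // must be known at compile time

static if (useFtp)
    alias Conn = FTP;
else
    alias Conn = HTTP;

// auto result = get!Conn(url); // single call site, no duplication
static assert(is(Conn == HTTP));
```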
Re: Second Round CURL Wrapper Review
On Saturday, December 17, 2011 23:51:08 Jakob Ovrum wrote: On Saturday, 17 December 2011 at 22:25:10 UTC, jdrewsen wrote: On Saturday, 17 December 2011 at 20:38:46 UTC, Jakob Ovrum wrote: On Saturday, 17 December 2011 at 11:56:04 UTC, jdrewsen wrote: AutoConnect sounds like a command to connect automatically, which might be confusing since that is not what it does. Therefore I went with AutoConnection, which I still believe is better. How about something completely different then, like URLInfer? Also, some instances in the docs of url in lowercase should probably be changed to uppercase. Will change case. Don't know if the URLInfer name is better though. /Jonas Andrei is right, a noun is better. I'm with InferredProtocol. Yeah, InferredProtocol is better. Still annoyingly long, but it's a clearer name. - Jonathan M Davis
Re: Second Round CURL Wrapper Review
On Saturday, December 17, 2011 09:21:42 Justin C Calvarese wrote: On Sat, Dec 17, 2011 at 5:56 AM, jdrewsen jdrew...@nospam.com wrote: On Thursday, 15 December 2011 at 07:46:56 UTC, Jonathan M Davis wrote: ... I'd argue that the acronyms should be in all caps if camelcasing would require that the first letter of the acronym be capitalized and all lower case if camelcasing would require that the first letter of the acronym be lower case. We're not completely consistent in how we use acronyms in names in Phobos, but I believe that we primarily follow that rule when putting them in symbol names. So, for instance, it would be HTTP, FTP, and SMTP rather than Http, Ftp, and Smtp. I like Http better as personal taste. But if HTTP is preferred then I'll do that. Actually I think I will make a pull request to extend the dlang.org/dstyle.html doc with an example that we can point to when someone asks about styling. I've spent too much time restyling because there is no such style doc already and people are complaining about style in reviews anyway. I absolutely agree that the D Style page should be revised and extended to reflect the current standards that new Phobos modules are expected to adhere to. I found an open pull request that indicates a past effort in this regard: https://github.com/D-Programming-Language/d-programming-language.org/pull/16 (I thought that another document was also floating around that was a proposed new Style Guide for Phobos, but I don't recall who authored it and I don't remember where it was located.) Yeah. I need to get back to that. Portions of it were completely agreed upon and others need further discussion, and I keep forgetting to get those discussed among the Phobos devs and get those questions settled. - Jonathan M Davis
Re: std.container and classes
On 17/12/11 11:31 PM, Andrei Alexandrescu wrote: The most important thing I noticed is that people expect standard containers to have sophisticated memory management. Many ask not about containers as much as containers with custom allocators. Second, containers tend to be large memory users by definition. Third, containers are self-contained (heh) and relatively simple in terms of what they model, meaning that they _never_ suffer from circular references, like general entity types might. All of these arguments very strongly suggest that many want containers to be types with deterministic control over memory and accept configurable allocation strategies (regions, heap, malloc, custom). So that would mean containers should be reference counted structs. My thoughts on this: Allocators are a must, and need to be handled better than they were in C++. Classes are simpler than ref-counted structs, and ref-counting is a massive performance drain and code-bloater anyway, so just go with final classes. Safety is also an issue. I was hoping I'd provide safety as a policy, e.g. one may choose for a given container whether they want safe or not (and presumably fast). I think it's best to postpone that policy and focus for now on defining safe containers with safe ranges. This precludes e.g. using T[] as a range for Array!T. IMO they should just be safe and not provide the option of non-safe. It just complicates things to cater to people who won't use them anyway. C++ standard library containers are unsafe and relatively lean, yet are rarely used when performance really matters.
Re: d future or plans for d3
Ruslan Mullakhmetov wrote: Currently, after the introduction of C++11, I see the only advantages of D over C++11, aside from syntactic sugar, as being the garbage collector and modules. So you are saying that sane templates, a range-based standard library, and concurrency improvements (thread-local variables, immutable, message passing) are all just syntactic sugar?
Re: d future or plans for d3
On 12/18/2011 02:42 AM, a wrote: Ruslan Mullakhmetov wrote: Currently, after the introduction of C++11, I see the only advantages of D over C++11, aside from syntactic sugar, as being the garbage collector and modules. So you are saying that sane templates, a range-based standard library, and concurrency improvements (thread-local variables, immutable, message passing) are all just syntactic sugar? And you didn't even mention CTFE and code generation yet.
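For readers comparing with C++11, a small illustration of what CTFE plus string-mixin code generation look like in practice. The names (`factorial`, `makeConstants`, `factN`) are made up for this sketch:

```d
import std.conv : to;

// An ordinary function — no special annotation needed for CTFE.
int factorial(int n)
{
    int r = 1;
    foreach (i; 2 .. n + 1)
        r *= i;
    return r;
}

// Code generation: build declarations as a string at compile time
// and mix them into the module.
string makeConstants(int n)
{
    string code;
    foreach (i; 1 .. n + 1)
        code ~= "enum fact" ~ to!string(i)
              ~ " = factorial(" ~ to!string(i) ~ ");\n";
    return code;
}

mixin(makeConstants(5)); // generates fact1 .. fact5

// Evaluated entirely at compile time; no runtime cost at all.
static assert(fact5 == 120);

void main()
{
    assert(fact4 == 24);
}
```

The same function serves both runtime and compile-time callers, which is the part C++11 `constexpr` only partially covers.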
Re: Program size, linking matter, and static this()
On Saturday, December 17, 2011 19:44:28 deadalnix wrote: Very good point. CTFE is improving with each version of dmd, and is a real alternative to static this(). It should be considered when appropriate; it has many benefits. I think that in general, the uses for static this fall into one of two categories: 1. Initializing stuff that can't be initialized at compile time. This includes stuff like classes or AAs as well as stuff which needs to be initialized with a value which isn't known until runtime (e.g. when the program started running). 2. Calling functions which need to be called at the beginning of the program (e.g. a function which does something to the environment that the program is running in). As CTFE improves, #1 should become smaller and smaller, and static this should be needed less and less, but #2 will always remain. It _is_ however the far rarer of the two use cases. So, ultimately static this may become very rare. - Jonathan M Davis
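The two categories above can be sketched side by side. The names (`genSquares`, `squares`, `programStart`) are illustrative only:

```d
import std.datetime : Clock, SysTime;

// Category 1: table initialization, once a job for static this(),
// now doable directly via CTFE of an ordinary function.
int[16] genSquares()
{
    int[16] t;
    foreach (i, ref e; t)
        e = cast(int)(i * i);
    return t;
}

// Module-level initializer: forced through CTFE, no static this() needed.
immutable int[16] squares = genSquares();

// Category 2: a value only known at runtime (program start time)
// still genuinely needs a static constructor.
immutable SysTime programStart;

shared static this()
{
    programStart = cast(immutable) Clock.currTime();
}

void main()
{
    assert(squares[3] == 9);
}
```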
Re: Program size, linking matter, and static this()
On Saturday, December 17, 2011 13:20:33 Andrei Alexandrescu wrote: On 12/17/11 6:34 AM, so wrote: If you are using singleton in your C++/D (or any other M-P language) code, do yourself a favor and trash that book you learned it from. --- class A { static A make(); } class B; B makeB(); --- What can A.make do that makeB cannot? (Other than creating objects of two different types :P ) Singleton has two benefits. One, you can't accidentally create more than one instance. The second, which is often overlooked, is that you still benefit from polymorphism (as opposed to making its state global). Yes. There are occasions when singleton is very useful and makes perfect sense. There's every possibility that it's a design pattern which is overused, and if you don't need it, you probably shouldn't use it, but there _are_ cases where it's useful. In the case of std.datetime, the UTC and LocalTime classes are singletons because there's absolutely no point in ever allocating multiple of them. It would be a waste of memory. Imagine if auto time = Clock.currTime(); had to allocate a LocalTime object every time. That's a lot of useless heap allocation. By making it a singleton, it's far more efficient. Currently, it does _no_ heap allocation, and once the singleton becomes lazy, it'll only allocate on the first call. I don't see a valid reason _not_ to use a singleton in this case - certainly not as long as time zones are classes, and I think that they make the most sense as classes considering what they have to do and how they have to behave. - Jonathan M Davis
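A minimal sketch of the lazy singleton shape being described. This is illustrative only, not std.datetime's actual implementation, and it uses a coarse lock on every access rather than double-checked locking (see the adjacent thread on why the latter is subtle):

```d
final class TimeZoneLike
{
    // One process-global instance, shared across threads.
    private static __gshared TimeZoneLike _instance;

    /// Lazily allocate on first use; every later call returns the same object.
    static TimeZoneLike instance()
    {
        synchronized // coarse, correct-by-construction lock
        {
            if (_instance is null)
                _instance = new TimeZoneLike;
        }
        return _instance;
    }

    private this() {} // private ctor: can't accidentally create more instances
}

void main()
{
    // Exactly one heap allocation no matter how many times this is called.
    assert(TimeZoneLike.instance is TimeZoneLike.instance);
}
```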
Re: Double Checked Locking
On Saturday, December 17, 2011 17:10:19 Andrei Alexandrescu wrote: On 12/17/11 5:03 PM, Jonathan M Davis wrote: Well, you learn something new every day I guess. I'd never even heard of double-checked locking before this. I came up with it on my own in an attempt to reduce how much the mutex was used. Is the problem with it that the write isn't actually atomic? Wikipedia makes it sound like the problem might be that the object might be partially initialized but not fully initialized, which I wouldn't have thought possible, since I would have thought that the object would be fully initialized and _then_ the reference would be assigned to it. And it's my understanding that a pointer assignment like that would be atomic. Or is there more going on than that, making it so that the assignment itself really isn't atomic? There's so much going on about double-checked locking, it's not even funny. Atomic assignments have the least to do with it. Check this out: http://goo.gl/f0VQG Looks interesting. I'll have to give it a read. I really like the subtitle though: "Multithreading is just one damn thing after, before, or simultaneous with another." In any case, I obviously need to learn more about some of the issues with multi-threading. - Jonathan M Davis
Re: std.container and classes
On Saturday, 17 December 2011 at 23:31:47 UTC, Andrei Alexandrescu wrote: Safety is also an issue. I was hoping I'd provide safety as a policy, e.g. one may choose for a given container whether they want safe or not (and presumably fast). I think it's best to postpone that policy and focus for now on defining safe containers with safe ranges. This precludes e.g. using T[] as a range for Array!T. Since containers are templated, why not use asserts for this, and let the user choose with -release (just as with built-in arrays)?
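The suggestion above can be sketched concretely: since the containers are templated, bounds checks expressed as asserts compile away under -release, mirroring the behavior of built-in arrays. The container and names here are simplified stand-ins, not std.container's actual code:

```d
struct SimpleArray(T)
{
    private T[] data; // backing store; a real container would use an allocator

    /// Checked indexing: the assert vanishes when compiled with -release.
    ref T opIndex(size_t i)
    {
        assert(i < data.length, "SimpleArray index out of bounds");
        return data[i];
    }

    void insertBack(T value) { data ~= value; }

    @property size_t length() const { return data.length; }
}

void main()
{
    SimpleArray!int a;
    a.insertBack(42);
    assert(a[0] == 42);
    assert(a.length == 1);
}
```

The trade-off is that safety then depends on the user's compile flags rather than being a guarantee of the type itself, which is presumably why it was raised as a question.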
Re: CURL Wrapper: Vote Thread
On Saturday, December 17, 2011 14:36:15 dsimcha wrote: The time has come to vote on the inclusion of Jonas Drewsen's CURL wrapper in Phobos. Code: https://github.com/jcd/phobos/blob/curl-wrapper/etc/curl.d Docs: http://freeze.steamwinter.com/D/web/phobos/etc_curl.html For those of you on Windows, a libcurl binary built by DMC is available at http://gool.googlecode.com/files/libcurl_7.21.7.zip. Voting lasts one week and ends on 12/24. Yes. And as dsimcha points out, my suggestions are primarily with regards to the implementation and relatively minor changes to the API, so they shouldn't affect the vote (though they should definitely be addressed before the code gets merged into Phobos). - Jonathan M Davis
Re: d future or plans for d3
On Sunday, December 18, 2011 04:09:21 Ruslan Mullakhmetov wrote: I want to ask you about D future It will be years before we seriously start looking at D3, and while there are ideas for what we might like to do with it, it's far too early to say what's likely to happen with it. D2 needs to be fully stabilized and be in general use for a while before we really look at expanding it into D3. We really need to work at making D2 a success before we worry about where we're going next. Also, many of the best changes to D3 won't be evident until D2 has been used enough for its problems to become evident. D1 and D2 manage to improve on C++'s problems as well as they do precisely because we know what they are. We don't really know what D2's primary problems are yet, and that will take time. Also, while D3 may be years off, it could be that after D2 has stabilized more, we'll add new features that are backwards compatible. So, just because D3 is years off does not necessarily mean that D2 is static. In addition, many things can be done in libraries without needing to add anything to the language, so what you can do with D2 will continue to improve, even if the language itself doesn't change much. Regardless, the main focus right now is in stabilizing dmd and fleshing out the standard library, not in creating a new version of D with new features. It's too early for that. - Jonathan M Davis
Re: std.container and classes
On Saturday, December 17, 2011 17:31:46 Andrei Alexandrescu wrote: On 12/13/11 9:08 PM, Jonathan M Davis wrote: Is the plan for std.container still to have all of its containers be final classes (classes so that they're reference types and final so that their functions are inlinable)? Or has that changed? I believe that Andrei said something recently about discussing reference counting and containers with Walter. The reason that I bring this up is that Array and SList are still structs, and the longer that they're structs, the more code that will break when they get changed to classes. Granted, some level of code breakage may occur when we add custom allocators to them, but since that would probably only affect the constructor (and preferably wouldn't affect anything if you want to simply create a container with the GC heap as you would now were Array and SList classes), the breakage for that should be minimal. Is there any reason for me to not just go and make Array and SList final classes and create a pull request for it? - Jonathan M Davis Apologies for being slow on this. It may be a fateful time to discuss that right now, after all the discussion of what's appropriate for stdlib vs. application code etc. As some of you know, Walter and I went back and forth several times on this. First, there was the issue of making containers value types vs. reference types. Making containers value types would be in keeping with the STL approach. However, Walter noted that copying entire containers by default is most often NOT desirable, and there's significant care and adornments in C++ programs to make sure that that default behavior is avoided (e.g. passing containers by const reference). So we decided to make containers reference types, and that seemed to be a good choice. The second decision is classes vs. structs. Walter correctly pointed out that the obvious choice for defining a reference type in D - whether the type is monomorphic or polymorphic - is making it a class. 
If containers aren't classes, the reasoning went, it means we took a wrong step somewhere; it might mean our flagship abstraction for reference types is not suitable for, well, defining a reference type. Fast forward a couple of months, a few unslept nights, and a bunch of newsgroup and IRC conversations. Several additional pieces came together. The most important thing I noticed is that people expect standard containers to have sophisticated memory management. Many ask not about containers as much as containers with custom allocators. Second, containers tend to be large memory users by definition. Third, containers are self-contained (heh) and relatively simple in terms of what they model, meaning that they _never_ suffer from circular references, like general entity types might. All of these arguments very strongly suggest that many want containers to be types with deterministic control over memory and accept configurable allocation strategies (regions, heap, malloc, custom). So that would mean containers should be reference counted structs. This cycle of thought has happened twice, and the evidence coming the second time has been stronger. The first time around I went about and started implementing std.container with reference counting in mind. The code is not easy to write, and is not to be recommended for most types, hence my thinking (at the end of the first cycle) that we should switch to class containers. One fear I have is that people would be curious, look at the implementation of std.container, and be like "so am I expected to do all this to define a robust type?" I start to think that the right answer to that is to improve library support for good reference counted types, and define reference counted struct containers that are deterministic. Safety is also an issue. I was hoping I'd provide safety as a policy, e.g. one may choose for a given container whether they want safe or not (and presumably fast). 
I think it's best to postpone that policy and focus for now on defining safe containers with safe ranges. This precludes e.g. using T[] as a range for Array!T. Please discuss. The only reason that I can think of to use a reference-counted struct instead of a class is because then it's easier to avoid the GC heap entirely. Almost all of a container's memory is going to end up on the heap regardless, because the elements almost never end up in the container itself. They're in a dynamic array or in nodes or something similar. So, whether the container is ref-counted or a class is almost irrelevant. If anything, making it a class makes more sense, because then it doesn't have to worry about the extra cost of ref-counting. However, once we bring allocators into the picture, the situation changes slightly. In both cases, the elements themselves end up where the allocator puts them. However, in the case of a struct, the member
Re: Double Checked Locking
On 2011-12-17 23:10:19 +, Andrei Alexandrescu seewebsiteforem...@erdani.org said: On 12/17/11 5:03 PM, Jonathan M Davis wrote: Well, you learn something new every day I guess. I'd never even heard of double-checked locking before this. I came up with it on my own in an attempt to reduce how much the mutex was used. Is the problem with it that the write isn't actually atomic? Wikipedia makes it sound like the problem might be that the object might be partially initialized but not fully initialized, which I wouldn't have thought possible, since I would have thought that the object would be fully initialized and _then_ the reference would be assigned to it. And it's my understanding that a pointer assignment like that would be atomic. Or is there more going on than that, making it so that the assignment itself really isn't atomic? There's so much going on about double-checked locking, it's not even funny. Atomic assignments have the least to do with it. Check this out: http://goo.gl/f0VQG Shouldn't a properly implemented double-checked locking pattern be part of the standard library? This way people will have a better chance of not screwing up. I think the pattern is common enough to warrant it. -- Michel Fortin michel.for...@michelf.com http://michelf.com/
Re: Java Scala
On Fri, Dec 2, 2011 at 2:19 AM, Russel Winder rus...@russel.org.uk wrote: Java is the main language of development just now. D is a tiny little backwater in the nether regions of obscurity. If any language is a joke here, it is D since it is currently unable to claim any serious market share in the world of development. The sooner you accept this, the sooner you can discuss the shortcomings of a language you have no experience of, by your own admission. Your point about how languages become popular has some merit, albeit stated in an overly bigoted fashion. That's like saying people should take Coke and Pepsi more seriously because they have bigger market shares when in reality all you need is water. Money isn't real, you know? D is already a success, a BIG success. Walter and Andrei (and the amazing community, of course) have created a programming language that is light years ahead of C++, Java and Go. I don't think you know this, but every high school student who takes a computer science course is required to learn Java. It doesn't stop there: in college and university it's all Java, too, and this has been going on for almost two decades. And before Java it was mostly C++, but it was phased out. Unless the course specifically requires a different programming language (which is rare), you have to beg to use a different programming language (which I did). Sometimes professors do allow other programming languages, but they mostly limit it to C/C++. In most cases students either have to accept it and do what they are told to do, or fail the course. If that's not indoctrination, I don't know what is. Also, the reason they restrict education to things like Java and C++ has very little to do with the fact that those languages have claimed big market share; rather, it's because corporations have had a vested interest in universities in the first place and they receive what they order. 
Just look at what Microsoft has been doing in universities: everything from free gifts such as free copies of Windows OS and Visual Studio Ultimate that cost thousands of dollars to sponsoring various kinds of events. The students who are influenced by such tactics, to whom do you think they are going to be loyal? The _main point_ here is that if students had been given the choice to learn a programming language of their choosing, many of the so-called successful programming languages would not have been so successful today. So next time you decide to lecture someone on how popular or successful Java is, just remember how it got to be so successful. Your point about exploitation should be aimed at the entirety of the economic systems of the world. The systems in the USA, India and China (the three main economies of the world) rest completely and solely on exploitation. It's called capitalism. I do see the entirety of the economic system of the world, and, no, it's NOT called capitalism. It's called the Monetary System. Capitalism, Socialism, Communism, etc.: they are all inherently the same because they are all based on the Monetary System. Money is created out of debt, and money is inherently scarce. Differential advantage and exploitation is the name of the game, regardless of the form of government you have. And as far as I know, India isn't even in the top five; USA, China, and Japan are in the top three. [...] So far I have competed in the ACM ICPC regional programming contests twice. I've met many students there and I've had many teammates, most if not all of them Java programmers. Besides me (I've never actually done any Java), I don't know any other C++ programmer there. I have seen countless problems solved in Java and C++, with Java always being 10-20 times slower: same problem, same algorithms and/or data structures. Whenever I find an article that talks about Java being faster than C++, I know it's BS. 
You can find fair comparisons at http://www.spoj.pl/ If you have never used Java or never actually investigated the issues as to when Java is significantly slower than C++ and when it is as fast as C++ then clearly you have no grounds on which to express any opinion based on facts, it is just prejudice and bigotry. Such comments have no place in any discussion. I choose to ignore Java for technical and non-technical reasons. Unlike you, I don't need to spend years of my life doing Java programming to realize what a joke it is, and I have never seen a case where Java was just as fast as C++. This is one of those myths, or corporate propaganda, that's been propagated by educated idiots. I and the teams I've been a member of have solved countless CS problems that have required every kind of data structure and algorithm, and not once have I seen Java come close to C/C++. On average, Java has been about 20 times slower than C++ and required on average 50 times more memory when it came to solving those problems. If you honestly believe that
Re: dfeed/gravatar issue
On Sun, Dec 11, 2011 at 3:03 PM, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: I'm finding myself unable to set up my own Gravatar picture with dfeed. This is because I post on this newsgroup as seewebsiteforem...@erdani.org, which is a non-existent email address. Gravatar, however, requires the email to point to an attended mailbox because they send you the activation link to it. I already have a Gravatar account set up with my actual email address. What steps could I take to associate my newsgroup post email with my actual Gravatar account? Thanks, Andrei OT, but what is your email address for TDPL Errata? I wanted to send in a correction, but the link 'contact Andrei' is broken.
Re: std.container and classes
On Saturday, December 17, 2011 22:57:46 Caligo wrote: On Tue, Dec 13, 2011 at 9:08 PM, Jonathan M Davis jmdavisp...@gmx.com wrote: Is the plan for std.container still to have all of its containers be final classes (classes so that they're reference types and final so that their functions are inlinable)? Or has that changed? I believe that Andrei said something recently about discussing reference counting and containers with Walter. The reason that I bring this up is that Array and SList are still structs, and the longer that they're structs, the more code that will break when they get changed to classes. Granted, some level of code breakage may occur when we add custom allocators to them, but since that would probably only affect the constructor (and preferably wouldn't affect anything if you want to simply create a container with the GC heap as you would now were Array and SList classes), the breakage for that should be minimal. Is there any reason for me to not just go and make Array and SList final classes and create a pull request for it? - Jonathan M Davis What happens in cases where you need to extend functionality of those containers by inheritance? With structs you'll have to rely on composition. You have to rely on composition in either case. If std.container uses classes, they're going to be final. The reason is that if you don't do that, its functions are virtual and can't be inlined. For containers which get used all over the place and need to be as efficient as possible, that's not acceptable. And it's fairly rare to need to derive from a container class anyway. Yes, it's useful sometimes, but that occasional use case does not merit the cost to the general case. So, _all_ extending of containers will require composition, regardless of whether we use structs or classes. - Jonathan M Davis
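To make the composition point concrete, here is a sketch of extending a container by wrapping it and forwarding the rest via alias this. `CountingArray` is a made-up name and a plain `T[]` stands in for the wrapped container:

```d
struct CountingArray(T)
{
    private T[] impl;  // stand-in for the wrapped (final) container
    size_t insertions; // the added functionality

    /// Intercept one operation to add behavior...
    void insertBack(T value)
    {
        ++insertions;
        impl ~= value;  // ...then forward to the wrapped container
    }

    // ...and forward everything not overridden here.
    alias impl this;
}

void main()
{
    CountingArray!int a;
    a.insertBack(1);
    a.insertBack(2);
    assert(a.insertions == 2);
    assert(a.length == 2); // forwarded via alias this
}
```

This works identically whether the wrapped container is a struct or a final class, which is the point being made above.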
Re: Java Scala
On Saturday, December 17, 2011 22:45:51 Caligo wrote: On Fri, Dec 2, 2011 at 2:19 AM, Russel Winder rus...@russel.org.uk wrote: Java is the main language of development just now. D is a tiny little backwater in the nether regions of obscurity. If any language is a joke here, it is D since it is currently unable to claim any serious market share in the world of development. The sooner you accept this, the sooner you can discuss the shortcomings of a language you have no experience of, by your own admission. Your point about how languages become popular has some merit, albeit stated in an overly bigoted fashion. That's like saying people should take Coke and Pepsi more seriously because they have bigger market shares when in reality all you need is water. Money isn't real, you know? D is already a success, a BIG success. Walter and Andrei (and the amazing community, of course) have created a programming language that is light years ahead of C++, Java and Go. I don't think you know this, but every high school student who takes a computer science course is required to learn Java. It doesn't stop there: in college and university it's all Java, too, and this has been going on for almost two decades. And before Java it was mostly C++, but it was phased out. Unless the course specifically requires a different programming language (which is rare), you have to beg to use a different programming language (which I did). Sometimes professors do allow other programming languages, but they mostly limit it to C/C++. In most cases students either have to accept it and do what they are told to do, or fail the course. If that's not indoctrination, I don't know what is. Also, the reason they restrict education to things like Java and C++ has very little to do with the fact that those languages have claimed big market share; rather, it's because corporations have had a vested interest in universities in the first place and they receive what they order. 
Just look at what Microsoft has been doing in universities: everything from free gifts such as free copies of Windows OS and Visual Studio Ultimate that cost thousands of dollars to sponsoring various kinds of events. The students who are influenced by such tactics, to whom do you think they are going to be loyal? The _main point_ here is that if students had been given the choice to learn a programming language of their choosing, many of the so-called successful programming languages would not have been so successful today. So next time you decide to lecture someone on how popular or successful Java is, just remember how it got to be so successful. In my experience, it's the professors who get to choose what they're teaching, and the main reason that Java is used is a combination of its simplicity and the fact that it's heavily used in the industry. C and C++ have a lot more pitfalls which make learning harder for newbie programmers. Java does more for you (like garbage collection instead of manual memory management) and has fewer ways to completely screw yourself over, so it makes more sense as a teaching language than C or C++. And since the primary focus is teaching the principles of computer science rather than a particular programming language, the exact language usually doesn't matter much. Now, this _does_ have the effect that the majority of college students are most familiar and comfortable with Java, and so that's what they're generally going to use (so there _is_ a lot of indoctrination in that sense), but that's pretty much inevitable. You use what you know. Ultimately though, that's what's likely to happen with most any university simply because teaching programming languages is not the focus - teaching computer science is. And for most of that, the language isn't particularly relevant. And Java was successful before they started using it in universities, or it likely wouldn't have been used much in them in the first place. 
It's just that that has a feedback effect, since the increased use in universities tends to increase its use in industry, which then tends to make the universities more likely to select it or to stick with it as long as they don't have a solid reason to switch. But I believe that the initial switch was a combination of the fact that its popularity in industry was increasing and the fact that it works much better as a teaching language than C or C++. It's not because of anything that corporations did (aside from saying that they used it), since Java isn't a language where donating stuff or discounting really helps (unlike C++), since almost all of the tools for Java are free. - Jonathan M Davis
Re: Double Checked Locking
On Saturday, December 17, 2011 22:16:38 Michel Fortin wrote: On 2011-12-17 23:10:19 +, Andrei Alexandrescu seewebsiteforem...@erdani.org said: On 12/17/11 5:03 PM, Jonathan M Davis wrote: Well, you learn something new every day I guess. I'd never even heard of double-checked locking before this. I came up with it on my own in an attempt to reduce how much the mutex was used. Is the problem with it that the write isn't actually atomic? Wikipedia makes it sound like the problem might be that the object might be partially initialized but not fully initialized, which I wouldn't have thought possible, since I would have thought that the object would be fully initialized and _then_ the reference would be assigned to it. And it's my understanding that a pointer assignment like that would be atomic. Or is there more going on than that, making it so that the assignment itself really isn't atomic? There's so much going on about double-checked locking, it's not even funny. Atomic assignments have the least to do with it. Check this out: http://goo.gl/f0VQG Shouldn't a properly implemented double-checked locking pattern be part of the standard library? This way people will have a better chance of not screwing up. I think the pattern is common enough to warrant it. Well, from the sounds of it, the basic double-checked locking pattern would work just fine with a shared variable if shared were fully implemented, but since it's not, it doesn't work right now. So, I don't know that we need to do anything other than finish implementing shared. - Jonathan M Davis
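For reference, a sketch of how double-checked locking can be written with explicit atomics from core.atomic rather than relying on shared's (then-incomplete) implementation. This is an illustration under stated assumptions, not vetted library code; `Expensive`, `_cached`, and `getCached` are made-up names:

```d
import core.atomic : atomicLoad, atomicStore, MemoryOrder;

final class Expensive
{
    int value = 42;
}

shared Expensive _cached;

Expensive getCached()
{
    // First check: lock-free fast path. The acquire load is what prevents
    // observing a published-but-partially-constructed object.
    auto p = atomicLoad!(MemoryOrder.acq)(_cached);
    if (p is null)
    {
        synchronized
        {
            // Second check, under the lock, in case another thread won the race.
            p = atomicLoad!(MemoryOrder.acq)(_cached);
            if (p is null)
            {
                p = cast(shared) new Expensive;
                // Release store: construction happens-before publication.
                atomicStore!(MemoryOrder.rel)(_cached, p);
            }
        }
    }
    return cast(Expensive) p;
}

void main()
{
    assert(getCached() is getCached());
    assert(getCached().value == 42);
}
```

The acquire/release pairing, not the atomicity of the pointer assignment, is the part the Wikipedia description glosses over, which matches the article's point above.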
Re: Java Scala
On Sat, 2011-12-17 at 22:45 -0600, Caligo wrote: [...] I thought this thread had finished, but... That's like saying people should take Coke and Pepsi more seriously because they have bigger market shares when in reality all you need is water. Money isn't real, you know? Taking that paragraph out of the context of the previous emails in the thread leads to misinterpretation of what was being said. D is already a success, a BIG success. Walter and Andrei (and the amazing community, of course) have created a programming language that is light years ahead of C++, Java and Go. I don't think you know this, but every high school student who takes a computer science course is required to learn Java. It doesn't stop there: I didn't know this, but I guess it is only a factor in the USA. In the rest of the world, it is almost certainly not the case. It definitely isn't in the UK. in college and university it's all Java, too, and this has been going on for almost two decades. And before Java it was mostly C++, but it was phased out. Unless the course specifically requires a different Well 15 anyway :-) Up until 1985 or thereabouts all educational institutions used Pascal. The global mindset was that Pascal was THE language for teaching. If you weren't teaching and learning using Pascal, you were deemed improperly educated. At UCL, Paul Otto and I set about revamping the teaching of programming, initially using Scheme + C++ and later using Miranda + C++. This proved massively effective. Students were forced early to appreciate different paradigms of computation as well as learning a language that was increasingly important in the real world. We used SICP for Scheme which was great. C++ had no books so I wrote one, and that turned out well. Miranda had no books originally so a couple of colleagues at UCL wrote one, and that turned out well. 
Then came 1995 and The Great Revolution: Java hit the streets, Web browsers got funky, and the international educators' mindset spontaneously switched to "virtual machines are great" with a smattering of "programming using applets is so cool". Almost overnight all educators switched to Java. I had moved to KCL which was still using Modula-2, and revamped the programming. I offered the folks the choice of C++, Ada or Java as the first programming language, knowing that Clean was used later in the course. They chose Java. Meanwhile at UCL they switched to Java in 1997. Many people were writing many books on educating people using Java. Graham Roberts at UCL and I at KCL needed a book different to the rubbish that was being published in the switch-to-Java fashion world-wide so ended up writing our own book, which proved very successful. The problem here is that educators forgot the importance of learning multiple languages and especially multiple paradigms. Java was used for all teaching and students suffered. If they had used Java and Haskell and Prolog things would be much better. I exited academia in 2000 as it was fairly obvious that the sector was going to be ruined, at least in the UK, due to the dreadful rhetoric all political parties were issuing. Graham stayed at UCL though and has managed to switch the early programming teaching to use Groovy and Prolog followed by Java. This produces far better programmers than any other sequence I know of just now. Obviously the Hawthorne effect matters: students are enthused and made better by having enthusiastic and knowledgeable teachers who inspire. programming language (which is rare), you have to beg to use a different programming language (which I did). Sometimes professors do allow other programming languages, but they mostly limit it to C/C++. In most cases students either have to accept it and do what they are told to do, or fail the course. If that's not indoctrination, I don't know what is. 
In a sense all education is indoctrination, but we probably don't want to go there. I suspect the major problem with most educators -- only in the USA and Italy are all educators called professors; in places like the UK, France and Germany, professor is a title that has to be earned and is a matter of status within the system -- is that they are themselves under-educated. Far too many educators teaching programming cannot themselves program. Their defensive reaction is to enforce certain choices on their students. Sometimes there is a reasonable rationale -- you don't write device drivers in Prolog, well, unless you are using ICL CAFS -- but generally the restriction is because the educator doesn't know any other programming language than the one they enforce. Also, the reason they restrict education to things like Java and C++ has very little to do with the fact that those languages have claimed big market share; rather, it's because corporations have had a vested interest in universities in the first place and they receive what they order. Just look at what Microsoft has
Re: std.container and classes
On 12/17/11 7:52 PM, Jonathan M Davis wrote: The only reason that I can think of to use a reference-counted struct instead of a class is because then it's easier to avoid the GC heap entirely. Almost all of a container's memory is going to end up on the heap regardless, because the elements almost never end up in the container itself. Being on the heap is not the main issue. The main issue is the data becoming garbage once all references are gone. They're in a dynamic array or in nodes or something similar. So, whether the container is ref-counted or a class is almost irrelevant. I think this argument is starting off from the wrong premise, and is already wrong by this point, so I skipped the rest. Am I right? Andrei
Re: Java Scala
On Sat, 2011-12-17 at 21:01 -0800, Jonathan M Davis wrote: [...] In my experience, it's the professors who get to choose what they're teaching and the main reason that Java is used is a combination of its simplicity and the fact that it's heavily used in the industry. C and C++ have a lot more pitfalls which make learning harder for newbie programmers. Java does more for you (like garbage collection instead of manual memory management) and has fewer ways to completely screw yourself over, so it makes more sense as a teaching language than C or C++. And since the primary focus is teaching the principles of computer science rather than a particular programming language, the exact language usually doesn't matter much. Not sure about the USA, but in the UK and other places I have been an examiner, educators (not all of whom are professors, since that is a high-level position in many places, not just a synonym for educator) do not have a totally free choice. There are processes in place through which choices have to be put and hence validated by more than just the individual. I suspect far too many educators use Java because everyone else does, and that they don't actually think about the choice they are making. Increasingly, from what I hear, educators are moving away from Java as a first programming language. I think this is a good move. Using languages like Python makes for much easier learning of programming and the principles of programming. UCL looked at this, but had the constraint of having to teach Java in the second year, so went with Groovy rather than Python. Followed by Prolog. It is using more than one language that makes for the best education. Now, this _does_ have the effect that the majority of college students are most familiar and comfortable with Java, and so that's what they're generally going to use (so there _is_ a lot of indoctrination in that sense), but that's pretty much inevitable. You use what you know. 
Ultimately though, that's what's likely to happen with most any university simply because teaching programming languages is not the focus - teaching computer science is. And for most of that, the language isn't particularly relevant. Sadly too little computer science gets taught in most universities and colleges, as well as too little programming. And yes, if these institutions teach with Java then the bias is towards Java in the job market. Corollary to all this is that everyone on this list should go into academia and start teaching all the introductory programming courses using D. Except at Texas A&M, where C++ will continue to be used. :-) And Java was successful before they started using it in universities, or it likely wouldn't have been used much in them in the first place. It's just that that has a feedback effect, since the increased use in universities tends to increase its use in industry, which tends to then make the universities more likely to select it or to stick with it as long as they don't have a solid reason to switch. But I believe that the initial switch was a combination of the fact that its popularity in industry was increasing and the fact that it works much better as a teaching language than C or C++. It's not because of anything that corporations did (aside from saying that they used it), since Java isn't a language where donating stuff or discounting really helps (unlike C++), since almost all of the tools for Java are free. At UCL and KCL we had switched to using Java long before it was an industry requirement for jobs. Overall though I think this is a chicken and egg situation. In all of this, the issue of portability of code has seemingly been missed. One of the main reasons for Java in 1995 (other than the trendiness of Web browser programming) was portability across all platforms. This made the sysadmin burden of provisioning resources for programming classes significantly lower than it was. C, C++ and D cannot match this even today. 
Back then it was a Big Win (tm). Interesting to note how Intel put much marketing and sales resource into C++ and associated tools. It's all about lock in. Which is fine if portability is not an issue. Someone once coined the term WORA (write once run anywhere) -- which really is a (tm) phrase -- and yet this is a total lie with respect to Java. It sort of worked when there was only Java 1.0, but already when 1.2 came out it was clearly a fib. Now with Java 5, 6, 7, ... it is a clear lie. Hence OSGi and Project Jigsaw. The problem is basically the same as with dynamic linking in C, C++ and D, you have to know exactly the right soname for the library you use. The Go folk have got round this by ignoring dynamic linking and insisting on static linking of all code. -- Russel. = Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.win...@ekiga.net 41 Buckmaster
Re: Java Scala
On Sat, Dec 17, 2011 at 11:01 PM, Jonathan M Davis jmdavisp...@gmx.comwrote: In my experience, it's the professors who get to choose what they're teaching and the main reason that Java is used is a combination of its simplicity and the fact that it's heavily used in the industry. C and C++ have a lot more pitfalls which make learning harder for newbie programmers. Java does more for you (like garbage collection instead of manual memory management) and has fewer ways to completely screw yourself over, so it makes more sense as a teaching language than C or C++. And since the primary focus is teaching the principles of computer science rather than a particular programming language, the exact language usually doesn't matter much. In my experience professors only get to choose what to wear to class, lol. It's interesting how many professors choose the same exact text book for the same courses they teach. And it's also interesting how those textbooks cost 10 times more than the equivalent book covering the same material. Some professors even give out the same exams as other professors in different universities. So, no, I don't think professors get to choose either. It's as if they are given a script, and they have to follow it pretty closely (ABET might have something to do with this, idk). I've had many professors who severely rejected the idea of using something else besides Java for a given project, and I never understood why (even in junior and senior years). Python is just as simplistic as Java, used heavily in the industry, and a more elegant language. So, what's the excuse for not allowing something like Python? oh, maybe because it's an open source project and no corporation has direct control over it, no? It's also interesting to see how the choice of Java in schools and universities has NOT produced better computer scientists and software engineers. 
I've lost count of people I've worked with in group projects that had no freaking clue as to what they were doing. I've even had TAs working on their PhDs who couldn't compile 200 lines of code written in something else besides Java or C#. *sigh* Maybe I should have gone to a private school. Now, this _does_ have the effect that the majority of college students are most familiar and comfortable with Java, and so that's what they're generally going to use (so there _is_ a lot of indoctrination in that sense), but that's pretty much inevitable. You use what you know. Ultimately though, that's what's likely to happen with most any university simply because teaching programming languages is not the focus - teaching computer science is. And for most of that, the language isn't particularly relevant. And Java was successful before they started using it in universities, or it likely wouldn't have been used much in them in the first place. It's just that that has a feedback effect, since the increased use in universities tends to increase its use in industry, which tends to then make the universities more likely to select it or to stick with it as long as they don't have a solid reason to switch. But I believe that the initial switch was a combination of the fact that its popularity in industry was increasing and the fact that it works much better as a teaching language than C or C++. It's not because of anything that corporations did (aside from saying that they used it), since Java isn't a language where donating stuff or discounting really helps (unlike C++), since almost all of the tools for Java are free. - Jonathan M Davis Well, I disagree because Java in the beginning was a complete failure as a language, and they looked for ways to market it. To them it was a product, rather than a programming language, that was going to help them make money and have control over the industry. 
Nearly the same exact thing happened with Microsoft Windows: an inferior OS that suddenly became popular and has helped generate billions of dollars of profit and control over 90% of the desktop market share. Java being a great teaching language is something that not everyone will accept. Allowing diversity in schools so that students and professors get to choose what programming language they want to learn and teach, without pressure from the industry, is something that I think most will agree needs to happen.
Re: Java Scala
On 12/17/2011 10:36 PM, Russel Winder wrote: In all of this, the issue of portability of code has seemingly been missed. One of the main reasons for Java in 1995 (other than the trendiness of Web browser programming) was portability across all platforms. This made the sys admin of provision of resources for programming classes significantly less than it was. C, C++ and D cannot match this even today. Back then it was a Big Win (tm). I find this an odd statement because the Java VM is written in C, so therefore it is on the same or fewer platforms than C. BTW, if I was King of the World, universities would teach assembler programming first. I learned BASIC first, then FORTRAN, then I learned assembler (6800) and it was like someone turned the lights on. I liken it to trying to teach kids algebra first, give them a calculator, and never bother teaching them arithmetic. A programmer who doesn't know assembler is never going to write better than second rate programs.
Re: Java Scala
On Sun, 2011-12-18 at 00:38 -0600, Caligo wrote: [...] In my experience professors only get to choose what to wear to class, lol. :-) It's interesting how many professors choose the same exact text book for the same courses they teach. And it's also interesting how those textbooks cost 10 times more than the equivalent book covering the same material. Some professors even give out the same exams as other professors in different universities. So, no, I don't think professors get to choose either. It's as if they are given a script, and they have to follow it pretty closely (ABET might have something to do with this, idk). I've had many professors who severely rejected the idea of using something else besides Java for a given project, and I never understood why (even in junior and senior years). Most students no longer buy textbooks at all. The bottom has fallen out of the market. Python is just as simplistic as Java, used heavily in the industry, and a more elegant language. So, what's the excuse for not allowing something like Python? oh, maybe because it's an open source project and no corporation has direct control over it, no? Python is simple but not simplistic. Many educators are now turning to using Python as the first teaching language. Python is also used in industry and commerce, so it is not just a teaching language. Almost all post-production software uses C++ and Python. Most HPC is now Fortran, C++ and Python. This latter would be a great area for D to try and break into, but sadly I don't think it would now be possible. [...] Well, I disagree because Java in the beginning was a complete failure as a language, and they looked for ways to market it. To them it was a product rather than a programming language that was going to help them make money and have control over the industry. 
Nearly the same exact thing happened with Microsoft Windows: an inferior OS that suddenly became popular and has helped generate billions of dollars of profit and control over 90% of the desktop market share. I am not sure this analysis works, certainly Java was not a failure from the outset. If it were Oracle in 1995 then yes the market-driven analysis might work, but Sun didn't really work that way at that time. They were then still a hardware company that did software to sell more hardware. Later things changed, cf. JavaCard. HotJava was certainly an innovation, but it ultimately failed. Java switched to traditional client and, more effectively, server. Though you can trace the effects of HotJava through all the browsers and to HTML5. Java being a great teaching language is something that not everyone will accept. Allowing diversity in schools so that students and professors get to choose what programming language they want to learn and teach, without pressure from the industry, is something that I think most will agree needs to happen. Having been in the vanguard of using Java as a first teaching language in 1995--1996, I am now very much of the view that to use Java as a first teaching language now is a gross error. Second or third language, no problem, just not the first.
Re: Java Scala
On Sat, 2011-12-17 at 23:09 -0800, Walter Bright wrote: [...] I find this an odd statement because the Java VM is written in C, so therefore it is on the same or fewer platforms than C. It's the indirection thing again: rather than provide a C toolchain for each platform, you load Java (or Python, Ruby, ...) which is already precompiled for the platform, which then allows a single toolchain across all platforms. BTW, if I was King of the World, universities would teach assembler programming first. I think that sort of worked in the 1980s when computers were (relatively) simple, but I don't think it works now. Clearly any self-respecting programmer should be able to work with assembly language, so it needs to be taught, but these days it comes as the link between hardware and software rather than being the language of software. I learned BASIC first, then FORTRAN, then I learned assembler (6800) and it was like someone turned the lights on. It's all about the operational semantics. Some people are happy with very abstract semantics and so can work with the likes of Fortran very well without knowing assembly language. For others the link to how the computer actually works is critically important. I liken it to trying to teach kids algebra first, give them a calculator, and never bother teaching them arithmetic. A programmer who doesn't know assembler is never going to write better than second rate programs. I am not sure I'd go quite that far but I agree that all programmers really ought to have worked with assembly language at least once in their lives.
Re: Java Scala
On Sunday, December 18, 2011 06:17:22 Russel Winder wrote: The problem here is that educators forgot the importance of learning multiple languages and especially multiple paradigms. Java was used for all teaching and students suffered. If they had used Java and Haskell and Prolog things would be much better. In my experience, it's fairly common for there to be _one_ required class which is intended to teach about other paradigms - primarily functional languages - so I think that it's fairly typical for students to be exposed to such languages, but given how foreign they are, I think that the typical reaction is that the students don't want to touch such languages again unless they have to. C (and maybe C++) stand a good chance of being used for classes like those on networking and operating systems, so a fair number of students will have some exposure to those, but I think that their typical reaction is to dislike those languages and avoid them unless they have to use them (though I think that they're more likely to use them than a functional language). Naturally, every student is unique, but most of them seem to prefer what they know best - and that's Java. For most classes though, the focus is on the concepts, not the language, which on the whole is exactly where it should be. Any halfway decent programmer should be able to learn a new language, and the concepts of computer science apply to all of them. So, on the whole, that approach is a solid one IMHO. The problem is that it does lead to programmers who are versed primarily in one language rather than being familiar with several, unless they take the initiative and learn them on their own. - Jonathan M Davis
Re: std.container and classes
On Sunday, December 18, 2011 00:15:40 Andrei Alexandrescu wrote: On 12/17/11 7:52 PM, Jonathan M Davis wrote: The only reason that I can think of to use a reference-counted struct instead of a class is because then it's easier to avoid the GC heap entirely. Almost all of a container's memory is going to end up on the heap regardless, because the elements almost never end up in the container itself. Being on the heap is not the main issue. The main issue is the data becoming garbage once all references are gone. They're in a dynamic array or in nodes or something similar. So, whether the container is ref-counted or a class is almost irrelevant. I think this argument is starting off from the wrong premise, and is already wrong by this point, so I skipped the rest. Am I right? My initial take on this is that the primary difference between a final class and a ref-counted struct is the fact that the class is on the heap and needs no ref-counting, whereas the struct is on the stack and has to do ref-counting, and everything else is the same regardless, since the memory for the container has to go on the heap regardless (be it the GC heap or wherever the allocator puts it). But what you're bringing up is that in the case of the class, everything in the container stays around until it's garbage collected, whereas in the case of the struct, it goes away as soon as the ref-count hits zero? I hadn't thought of that. But if we're talking about the GC heap, most of the internals are going to be on the heap in either case, with the only difference being the container itself and its value-type member variables. However, once you use a custom allocator (say one that uses malloc and free), then in the struct case, everything gets cleaned up as soon as the ref-count hits zero, whereas with the class it sits around until the container itself is collected - assuming that the class itself is on the GC heap. 
If it was using the malloc-free allocator, then it would be around until it was manually freed by the allocator. I don't know. That is certainly food for thought. My natural inclination is to just make it a class, since it's a reference type, and then you don't have to worry about the cost of ref-counting or have any confusion over whether the container is a reference type or not. But if there's any real performance advantage in using ref-counted structs due to something to do with memory management, then that would be highly valuable. If no custom allocators are used, then I'd expect the struct to be worse off, since it has the cost of being copied as well as the cost of ref-counting, whereas the only cost in passing the class around is copying its reference, and other than the few member variables that the container has, there's no difference in how long the memory in the container sticks around. It has to wait for the GC in either case. It's only when a custom allocator which could free the memory as soon as the ref-count hits zero comes into play that it makes a difference, in which case the struct would probably be more efficient. Unless I'm missing something? It certainly doesn't seem like an easy call. - Jonathan M Davis
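To make the trade-off in this subthread concrete, here is a minimal, hypothetical sketch -- not std.container's actual design, and all names are illustrative -- contrasting a ref-counted struct that frees its malloc'd storage deterministically when the last copy goes away with a class whose storage waits for a GC collection:

```d
// Hypothetical sketch of the two container strategies discussed above.
import core.stdc.stdlib : malloc, free;

struct RefCountedArray
{
    private int* payload;   // malloc'd storage, freed deterministically
    private size_t* count;  // shared reference count

    this(size_t n)
    {
        payload = cast(int*) malloc(n * int.sizeof);
        count = cast(size_t*) malloc(size_t.sizeof);
        *count = 1;
    }

    this(this)  // postblit: copying the struct bumps the count
    {
        if (count) ++*count;
    }

    ~this()  // destructor: the last owner frees immediately
    {
        if (count && --*count == 0)
        {
            free(payload);
            free(count);
        }
    }
}

class GcArray
{
    int[] payload;  // GC-managed; reclaimed only at collection time
    this(size_t n) { payload = new int[n]; }
}
```

With the struct, the payload dies as soon as the ref-count hits zero; with the class, both the object and its payload linger until the collector runs, which is the distinction Andrei raises.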
Re: Alias/Ref Tuples ?
On Sat, Dec 17, 2011 at 01:23, Simen Kjærås simen.kja...@gmail.com wrote: A few small tests later: import std.typetuple; import std.typecons; import std.stdio; void main() { int a, b; TypeTuple!(a, b) = tuple(4,5); assert(a == 4 && b == 5); } In other words, the language already has this. Wow. How does that work? I'd understand: TypeTuple!(a,b) = tuple(a,b).expand; // Or .tupleof, even. but not your example... Does that mean TypeTuple!() = does some destructuring? Let's do some tests: struct Pair1 { TypeTuple!(int,int) value; } struct Pair2 { TypeTuple!(int,int) value; alias value this; } void main() { int a, b; a = 1; b = 2; // TypeTuple!(a, b) = Pair1(b, a+b); // boom! TypeTuple!(a, b) = Pair2(b, a+b); // works! writeln(a, ", ", b); } So it's a side effect of alias this and tuples...
How to using opCmp
I have defined an opCmp function to overload comparison operators, then how can I use it correctly? I can use it like this: int b = t1.opCmp(info); and is it right like this: int b = t1 < info; In my test code, I get different values for b. What can I do? Thanks for the help. //= //Test code: //= import std.stdio; import std.range; public final class LogLevel { @property public static LogLevel Trace() { if( m_Trace is null ) m_Trace = new LogLevel("Trace", 0); return m_Trace; } private static LogLevel m_Trace; @property public static LogLevel Info() { if( m_Info is null ) m_Info = new LogLevel("Info", 2); return m_Info; } private static LogLevel m_Info; public string getName() { return name; } private int ordinal; private string name; public static LogLevel FromString(string levelName) { if (levelName.empty()) { throw new Exception(levelName); } return Trace; } public int opCmp(LogLevel level1) { int result = 0; if( this.Ordinal > level1.Ordinal) result = 1; else if( this.Ordinal == level1.Ordinal) result = 0; else result = -1; writefln("result == %d", result); return result; } @property package int Ordinal() { return this.ordinal; } private this(string name, int ordinal) { this.name = name; this.ordinal = ordinal; } } int main(string[] args) { LogLevel t1 = LogLevel.Trace; LogLevel t2 = LogLevel.Trace; LogLevel info = LogLevel.Info; int a = t1 < t2; writefln("a == %d", a); int b = t1.opCmp(info); writefln("b == %d", b); b = t1 < info; writefln("b == %d", b); b = t1 > info; writefln("b == %d", b); return 0; } //=
Re: How to using opCmp
Heromyth: I have defined an opCmp function to overload comparison operators, then how can I use it correctly? I don't have an answer yet, but I suggest you look at the D docs that describe how to use opCmp. Probably you need to accept a const Object, and to use override. I suggest stripping your code of all the non-essential things, so you will see the problems better. Bye, bearophile
Re: How to using opCmp
Heromyth wrote: I can use it like this: int b = t1.opCmp(info); and is it right like this: int b = t1 < info; In my test code, I get different values for b. What can I do? Nothing, because the call of the operator `<` interprets the result of the call of `opCmp`, which is constant in this case! Remember that the result of `opCmp` is used for at least the operators `<`, `>`, `<=` and `>=`. -manfred
Re: How to using opCmp
On Sat, 17 Dec 2011 14:06:48 +, Heromyth wrote: I have defined an opCmp function to overload comparison operators, then how can I use it correctly? I can use it like this: int b = t1.opCmp(info); and is it right like this: int b = t1 < info; Consider the code more to the point of: int b = t1.opCmp(info); bool c = t1 < info;
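The replies above boil down to the rewrite rule for comparison operators: `a < b` is lowered to `a.opCmp(b) < 0`, so the raw int returned by opCmp and the bool produced by the operator are different values. A minimal sketch (the `Level` class is hypothetical, not the poster's LogLevel):

```d
// Illustrates that `<` interprets opCmp's result rather than returning it.
import std.stdio;

class Level
{
    int ordinal;
    this(int n) { ordinal = n; }

    override int opCmp(Object o)
    {
        auto rhs = cast(Level) o;
        assert(rhs !is null);
        return ordinal - rhs.ordinal;  // negative, zero, or positive
    }
}

void main()
{
    auto trace = new Level(0);
    auto info  = new Level(2);

    writeln(trace.opCmp(info));  // -2: the raw comparison result
    writeln(trace < info);       // true: lowered to trace.opCmp(info) < 0
}
```

Assigning the bool from `t1 < info` to an int yields 0 or 1, which is why it differs from the value `t1.opCmp(info)` returns.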
Re: Array concat quiz
On 12/17/2011 02:43 AM, bearophile wrote: A small quiz. This Python2 code: m1 = [['A', 'B']] print m1 m2 = m1 + [[]] print m2 Prints: [['A', 'B']] [['A', 'B'], []] What does this D2 program print? import std.stdio; void main() { string[][] m1 = [["A", "B"]]; writeln(m1); string[][] m2 = m1 ~ [[]]; writeln(m2); } Bye, bearophile I do not think that this is very sensible (array-array appends are more natural than array-element appends, so disambiguation should go the other way). Is it documented anywhere or is it just an implementation artefact?