Re: inlining or not inlining...
so wrote: You cannot force inlining in C(++) either. The inline keyword is only a suggestion. I'm not understanding your last comment that a .lib would be required. That's not correct: since you're supplying the full source anyway (needed for inlining), just compile that source in from the command line. No lib step is needed. Hinting wasn't enough, so every major implementation has a force-inline extension now; that way you know whether something you asked to be inlined actually was. There are all kinds of extensions popular in C++, but they are not part of Standard C++.
Re: inlining or not inlining...
Jonathan M Davis wrote: On Thursday 10 February 2011 22:35:34 Walter Bright wrote: Stewart Gordon wrote: On 09/02/2011 12:14, spir wrote: Hello, Walter states that inline annotations are useless, since programmers cannot generally know which function /should/ be inlined -- depending on a variety of factors, inlining may in fact be counter-productive. snip I hate not being able to force functions to be inline. A consequence is that you can't fully interface certain APIs without an extra .lib over what would be needed in C(++). You cannot force inlining in C(++) either. The inline keyword is only a suggestion. True. However, IIRC -O3 in gcc forces inlining, so in some cases you _can_ force it (though that's obviously compiler-specific); but forcing inlining with -O3 does it for _everything_, so it's not exactly a precision instrument. Regardless, I would _hope_ that the compiler would be smart enough to make intelligent choices about inlining. That's probably one of those areas that can always be improved, however. I also think that this decision should be left to the compiler. The inline keyword was deemed useful for the same reason that symbols had to be declared before their use (causing the C/C++ header hell) -- it's easier to implement such a compiler.
Re: inlining or not inlining...
Jonathan M Davis wrote: Regardless, I would _hope_ that the compiler would be smart enough to make intelligent choices about inlining. That's probably one of those areas that can always be improved however. I agree completely. All compilers could use better register allocation algorithms, too.
Re: inlining or not inlining...
so wrote: While in isolation that's a good idea, how far should it be taken? Should the compiler emit information on which variables wound up in which registers, and why? What about the myriad of other compiler optimizations? Isn't inlining by far the most important (most practical) optimization among those that we can actually control? No, not even close. The first step is to figure out where your program is slow, and then why it is slow. For example, if it is slow because foo() is being called 1,000,000 times, you'll get a one-thousand-times speedup if you can tweak your algorithms so that it is only called 1,000 times. A few times I have seen comparisons here to similar languages, and in most of them inlining was the only reason for the inferior performance. I agree it would be awesome if compilers had the ability to choose the best method, but comparisons sometimes show the opposite; I don't know, maybe they are hand-picked for some reason. Certainly, the inliner in dmd can be improved.
Re: Stupid little iota of an idea
On 2011-02-11 04:15, Nick Sabalausky wrote: Andrej Mitrovic <andrej.mitrov...@gmail.com> wrote in message news:mailman.1476.1297391467.4748.digitalmar...@puremagic.com... What the hell does to! have to do with anything? Disregard my last post, it's obviously 3 AM and I'm talking gibberish. I just meant that iota looks a lot like (spaces added for clarity) "i to a". In other words, the first time I ever saw iota, I confused it with the old C function that converts an integer to an ASCII string. It may very well have been 3 AM for me at the time ;) I thought that as well the first time I saw it. -- /Jacob Carlborg
Re: Stupid little iota of an idea
On 2011-02-10 23:05, Andrei Alexandrescu wrote: On 2/10/11 9:47 AM, spir wrote: Even then, no one forces D2 to blindly reproduce stupid naming from APL/C++, I guess. Or what? I don't find the name iota stupid. Andrei Of course you don't think it's stupid, you named it. It's starting to look more and more like you are the only one who likes it. How about we vote on it? -- /Jacob Carlborg
Re: More on Rust
On 2011-02-11 08:39, Jim wrote: Jacob Carlborg wrote: On 2011-02-10 20:15, Walter Bright wrote: Nick Sabalausky wrote: bearophile <bearophileh...@lycos.com> wrote in message news:iivb5n$na3$1...@digitalmars.com... auto x; if (localtime().hours >= 8) { x = "awake!"; } else { x = "asleep, go away."; } log "I'm " + x; That would be really nice to have in D. auto x = (localtime().hours >= 8) ? "awake!" : "asleep, go away."; For this simple if statement it works, but as soon as you have a few lines in the if statement it will become really ugly. But one could wrap the if statement in a function instead. In other languages, where statements really are expressions, this works: auto x = if (localtime().hours >= 8) "awake!"; else "asleep, go away."; log "I'm " + x; Other languages may have bloated syntaxes, with no particular benefit. auto x = localtime().hours >= 8 ? "awake!" : "asleep, go away."; log("I'm " ~ x); If the expressions are complex I put them in functions. 1) It hides and isolates details, which allows you to focus on the more abstract aspects. 2) It gives the expression a name and facilitates reuse. And that was the first thing I said one could do. -- /Jacob Carlborg
Re: More on Rust
Jim wrote: Jacob Carlborg wrote: On 2011-02-10 20:15, Walter Bright wrote: Nick Sabalausky wrote: bearophile <bearophileh...@lycos.com> wrote in message news:iivb5n$na3$1...@digitalmars.com... auto x; if (localtime().hours >= 8) { x = "awake!"; } else { x = "asleep, go away."; } log "I'm " + x; That would be really nice to have in D. auto x = (localtime().hours >= 8) ? "awake!" : "asleep, go away."; For this simple if statement it works, but as soon as you have a few lines in the if statement it will become really ugly. But one could wrap the if statement in a function instead. In other languages, where statements really are expressions, this works: auto x = if (localtime().hours >= 8) "awake!"; else "asleep, go away."; log "I'm " + x; Other languages may have bloated syntaxes, with no particular benefit. auto x = localtime().hours >= 8 ? "awake!" : "asleep, go away."; log("I'm " ~ x); You're right. ?: is a masterpiece among syntaxes. Maybe a bit too terse, but it does exactly what people expect it to do, and it has existed since C was born.
Re: inlining or not inlining...
On 2/11/2011 12:37 AM, Walter Bright wrote: so wrote: While in isolation that's a good idea, how far should it be taken? Should the compiler emit information on which variables wound up in which registers, and why? What about the myriad of other compiler optimizations? Isn't inlining by far the most important (most practical) optimization among those that we can actually control? No, not even close. The first step is to figure out where your program is slow, and then why it is slow. For example, if it is slow because foo() is being called 1,000,000 times, you'll get a one-thousand-times speedup if you can tweak your algorithms so that it is only called 1,000 times. A few times I have seen comparisons here to similar languages, and in most of them inlining was the only reason for the inferior performance. I agree it would be awesome if compilers had the ability to choose the best method, but comparisons sometimes show the opposite; I don't know, maybe they are hand-picked for some reason. Certainly, the inliner in dmd can be improved. Improving the inliner is one of the many itches I intend to scratch at some point. I did some work on it a while back to get my feet wet; I'll get back to it again at some point. Currently it only does the really easy stuff, and that's clearly not good enough in the long run.
Re: inlining or not inlining...
No, not even close. The first step is to figure out where your program is slow, and then why it is slow. For example, if it is slow because foo() is being called 1,000,000 times, you'll get a one-thousand-times speedup if you can tweak your algorithms so that it is only called 1,000 times. I think we are talking about two different things; I don't mean locating the cause of the bottleneck, which is of course the most logical thing to do. Assume we know the problem: a function that has been reduced to the simplest case, yet the compiler for some reason still didn't inline it, and we need every bit. Wrappers and frequent matrix and vector operations are serious examples where inlining is a must. Now, it doesn't matter how easy or hard it is: how could we get around this? This is a great excuse for an annotation.
Re: inlining or not inlining...
Wrappers and frequent matrix, vector operations are -a- very serious examples that inlining is must. Now, it doesn't matter how easy or hard, -have- +how+ could we get around this? This is a great +excuse+ for an annotation. duh... how hard to synchronize brain, hands and eyes...
Re: How will we fix opEquals?
On 02/11/2011 07:13 AM, Jonathan M Davis wrote: We _must_ have it there, so anyone overriding those functions _must_ use it for those functions. They could create non-const versions in addition to the const ones. That is the whole point: they can't. Hmm. You're right (I just tried it). In the worst case, people who really, really want it non-const are left with: struct S { ... bool equals(ref S s) {...} } ... if (s1.equals(s2)) {...} I do not find it /that/ terrible (esp. compared to many other workarounds we are commonly forced to use for various reasons). The one bad case is if S belongs to the interface to client code; in other words, this is not an acceptable solution for library public types. But since const opEquals (and the same reasoning goes for other methods) is only annoying for structs that (1) need a custom equality predicate, (2) call other funcs than member opEquals, (3) which themselves may be called by yet other funcs (else the virality does not spread), then I guess very few cases remain. What do you think? Denis -- _ vita es estrany spir.wikidot.com
Re: inlining or not inlining...
On 02/11/2011 09:33 AM, Jim wrote: Regardless, I would _hope_ that the compiler would be smart enough to make intelligent choices about inlining. That's probably one of those areas that can always be improved, however. I also think that this decision should be left to the compiler. The inline keyword was deemed useful for the same reason that symbols had to be declared before their use (causing the C/C++ header hell) -- it's easier to implement such a compiler. Agreed; but what about having the compiler tell you, on demand, "func 'f' at line #l in module 'm' was not inlined"? Denis -- _ vita es estrany spir.wikidot.com
Re: inlining or not inlining...
Walter: While in isolation that's a good idea, how far should it be taken? Should the compiler emit information on which variables wound up in which registers, and why? What about the myriad of other compiler optimizations? Inlining is an important optimization, so giving this information to the programmer is a good start. With the Common Lisp compiler, when you compile at the maximum optimization levels, the compiler gives many comments that explain why it isn't optimizing something, including some forms of inlining; see for example: http://shootout.alioth.debian.org/u32/program.php?test=fasta&lang=sbcl&id=3 Part of the comments: ; -- CEILING MULTIPLE-VALUE-BIND MULTIPLE-VALUE-CALL FUNCTION IF VALUES 1+ ; == ; (+ SB-C::TRU 1) ; ; note: forced to do GENERIC-+ (cost 10) ; unable to do inline fixnum arithmetic (cost 1) because: ; The first argument is a INTEGER, not a FIXNUM. ; The result is a (VALUES INTEGER OPTIONAL), not a (VALUES FIXNUM REST T). ; unable to do inline fixnum arithmetic (cost 2) because: ; The first argument is a INTEGER, not a FIXNUM. ; The result is a (VALUES INTEGER OPTIONAL), not a (VALUES FIXNUM REST T). ; etc. Another similar kind of useful note from the compiler: http://d.puremagic.com/issues/show_bug.cgi?id=5070 Bye, bearophile
Re: More on Rust
On 02/11/2011 08:39 AM, Jim wrote: Jacob Carlborg wrote: On 2011-02-10 20:15, Walter Bright wrote: Nick Sabalausky wrote: bearophile <bearophileh...@lycos.com> wrote in message news:iivb5n$na3$1...@digitalmars.com... auto x; if (localtime().hours >= 8) { x = "awake!"; } else { x = "asleep, go away."; } log "I'm " + x; That would be really nice to have in D. auto x = (localtime().hours >= 8) ? "awake!" : "asleep, go away."; For this simple if statement it works, but as soon as you have a few lines in the if statement it will become really ugly. But one could wrap the if statement in a function instead. In other languages, where statements really are expressions, this works: auto x = if (localtime().hours >= 8) "awake!"; else "asleep, go away."; log "I'm " + x; Other languages may have bloated syntaxes, with no particular benefit. auto x = localtime().hours >= 8 ? "awake!" : "asleep, go away."; log("I'm " ~ x); If the expressions are complex I put them in functions. 1) It hides and isolates details, which allows you to focus on the more abstract aspects. 2) It gives the expression a name and facilitates reuse. Agreed. But in practice I often end up being dubious about seemingly nice features like, precisely, the ternary operator. In languages that do not have such a nicety, people end up writing e.g.: if (localtime().hours >= 8) x = "awake!"; else x = "asleep, go away."; Apart from the very mild annoyance, I guess, of writing "x =" twice, this is all gain: the code is both more compact and legible. Precisely because the ternary operator is a bit weird syntactically (and not that commonly needed and used in real code), people feel, just like you, the need to clarify it in code, finally using more vertical space. Note that the case here is the simplest possible, the expressions being plain literal constants. I personally only consume 3 lines: auto x = (localtime().hours >= 8) ? "awake!" : "asleep, go away."; What do you think? Denis -- _ vita es estrany spir.wikidot.com
Re: Stupid little iota of an idea
On 02/11/2011 03:06 AM, Jonathan M Davis wrote: I feel pretty much the same way. iota seems like a horrible name as far as figuring out what the function does from its name goes. I don't know what a good name would be though (genSequence?) why not interval? (not obvious enough ;-) denis -- _ vita es estrany spir.wikidot.com
Re: Stupid little iota of an idea
On 02/11/2011 02:38 AM, Nick Sabalausky wrote: Max Samukha <maxsamu...@spambox.com> wrote in message news:ij10n7$25p0$1...@digitalmars.com... On 02/10/2011 05:18 PM, Andrei Alexandrescu wrote: On 2/10/11 12:30 AM, Olivier Pisano wrote: On 09/02/2011 21:08, Ary Manzana wrote: On 2/9/11 3:54 PM, bearophile wrote: - There is no need to learn to use a function with a weird syntax like iota, coming from APL. This makes Phobos and learning D a bit simpler. I would recommend we stop using weird names for functions. Sorry if this sounds a little harsh, but the only reason I see this function is called iota is to demonstrate knowledge (or to sound cool). But programmers using a language don't care whether another programmer demonstrates knowledge behind a function name; they just want to get things done, fast. I mean, if I want to create a range of numbers I would search for "range"; iota will never, ever come to my mind. D has to be more open to the public, not only to people who programmed in APL or Go, or who are mathematics freaks. Guess how a range is called in Ruby? That's right, Range. Another example: retro. The documentation says "iterates a bidirectional range backwards". Hm, where does retro appear in that text? If I want to iterate it backwards, or to reverse the order, the first thing I would write is reverse(range) or backwards(range); retro would never come to my mind. (And no, replies like "you can always alias xxx" are not accepted :-P) Hi, I agree iota is a bad name. Fifth result of simply googling the entire Web for iota: http://www.sgi.com/tech/stl/iota.html Andrei Google search takes your preferences into account. They must be tracking your search history, peeking into your gmail accounts, etc. I searched for 'iota' and couldn't find the STL link in the first 5 pages. Yeah, it's definitely user-specific. It's on the third page for me. Result #38 for me. Denis -- _ vita es estrany spir.wikidot.com
Re: How will we fix opEquals?
On Friday 11 February 2011 02:43:11 spir wrote: On 02/11/2011 07:13 AM, Jonathan M Davis wrote: We _must_ have it there, so anyone overriding those functions _must_ use it for those functions. They could create non-const versions in addition to the const ones. That is the whole point: they can't. Hmm. You're right (I just tried it). In the worst case, people who really, really want it non-const are left with: struct S { ... bool equals(ref S s) {...} } ... if (s1.equals(s2)) {...} I do not find it /that/ terrible (esp. compared to many other workarounds we are commonly forced to use for various reasons). The one bad case is if S belongs to the interface to client code; in other words, this is not an acceptable solution for library public types. But since const opEquals (and the same reasoning goes for other methods) is only annoying for structs that (1) need a custom equality predicate, (2) call other funcs than member opEquals, (3) which themselves may be called by yet other funcs (else the virality does not spread), then I guess very few cases remain. What do you think? structs are a different beast. The issue under discussion here is classes, where polymorphism gets involved. There, the function signatures have to match. Now, structs are still problematic, because the compiler currently insists that their signature for opEquals look something like bool opEquals(const ref S s) const; Structs shouldn't be so picky. Pretty much anything named opEquals which takes one argument and returns bool should work. But that hasn't been fixed yet. Regardless, it's a separate issue from classes, where polymorphism puts much stricter requirements on opEquals' signature. - Jonathan M Davis
Re: inlining or not inlining...
But I'm sure this sort of thing is also highly variable based on the type of application, code style, language, etc. Indeed it is. For example, you won't hear many complaints from game developers, because they rely on the GPU for most of the computation these days; but there are other areas where the CPU is used intensively, and you can be sure that just because of this simple issue they won't use D. And the funny part is that having this doesn't hurt anyone, given the specific features D already has (annotations); it is a win-win. Also, I am not talking about the C++ inline keyword here: if you go check a few open-source CPU-heavy projects, they mostly use compiler-specific forced inlines. @inline // either inline this or give me an error why you can't / won't.
Re: inlining or not inlining...
On 02/11/2011 07:32 AM, Walter Bright wrote: spir wrote: Thus, at best, we would need to know a bit about the criteria used by the compiler for deciding whether to inline or not; provided a doc explaining this is at all readable by people who do not have the compiler-writer gene. Aside from that, let us imagine an inline annotation being, not a request for inlining, but a request for a compiler warning to be emitted when inlining would not be applied to a given annotated func. Then programmers would at least know, being thus able to choose on an informed basis. A complement to that may be a little (and hopefully clear) how-to guide on the best chances of getting a func inlined. This how-to would start by describing the most common and/or most critical criteria for the compiler to /not/ inline a given func, then a short set of negative/positive examples actually generating the fatal warning or not. As a nice side effect, such a doc may help make clear some schemes of (in)efficiency in general, even for an inlined piece of code. (*) While in isolation that's a good idea, how far should it be taken? Should the compiler emit information on which variables wound up in which registers, and why? What about the myriad of other compiler optimizations? Also, if you're willing to look at the assembler output of the compiler, it's pretty trivial to see whether a function was inlined or not. If you're interested in optimizing at that level, I think it would make sense to get familiar with the asm output. People possibly interested in the question of inlining (or more generally in factors of (in)efficiency) must start somehow, granted. But making it even more difficult than necessary, while we all know it is inherently a very complex topic, does not bring much, don't you think? In this case, I guess emitting such information is very easy, since the compiler already needs to compute whether or not to inline. Or am I wrong and overlooking a relevant point?
On the other side, the feedback brought is extremely valuable; it allows learning by trial and error, and/or guided by a little how-to as evoked above. Both count, and personal experience comes first (I guess). I have actually programmed some pieces of code in ASM (a very long time ago), so I know it is possible for normal people. But the barrier is still very high; and anyway one approach does not prevent the other; instead, compiler feedback is very complementary to asm contemplation ;-) Don't you think so? Even more, the compiler routine deciding on inlining probably has, at least partly, a form of checklist, so that the compiler could even say /why/... which would help much when decoding asm, by giving some hint on /what/ to look for. Denis -- _ vita es estrany spir.wikidot.com
Re: inlining or not inlining...
On 02/11/2011 07:53 AM, so wrote: While in isolation that's a good idea, how far should it be taken? Should the compiler emit information on which variables wound up in which registers, and why? What about the myriad of other compiler optimizations? Isn't inlining by far the most important (most practical) optimization among those that we can actually control? A few times I have seen comparisons here to similar languages, and in most of them inlining was the only reason for the inferior performance. I agree it would be awesome if compilers had the ability to choose the best method, but comparisons sometimes show the opposite; I don't know, maybe they are hand-picked for some reason. I recently read a study using a dozen test cases to compare optimisations performed by three C compilers (IIRC: gcc, a Windows product, and an LLVM one). Very instructive, and even more surprising to me: in every case, some optimisation was done by one compiler that the others did not do, or conversely. This let me think for a while... how come? Don't compiler authors know, more or less, what kinds of optimisation tactics *exist* in given situations, and thus are performed by others? Strange. If this is the case, then the world of programming definitely needs a public knowledge base dedicated to compiler techniques, esp. optimisation. A wiki, indeed. denis -- _ vita es estrany spir.wikidot.com
Re: inlining or not inlining...
spir: People possibly interested in the question of inlining (or more generally in factors of (in)efficiency) must start somehow, granted. But making it even more difficult than necessary, while we all know it is inherently a very complex topic, does not bring much, don't you think? In this case, I guess emitting such information is very easy, since the compiler already needs to compute whether or not to inline. Or am I wrong and overlooking a relevant point? On the other side, the feedback brought is extremely valuable; it allows learning by trial and error, and/or guided by a little how-to as evoked above. Both count, and personal experience comes first (I guess). I have added an enhancement request, where you are able to add more comments like those: http://d.puremagic.com/issues/show_bug.cgi?id=5563 Bye, bearophile
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
On 09/02/2011 23:02, Ulrik Mikaelsson wrote: 2011/2/9 Bruno Medeiros <brunodomedeiros+spam@com.gmail>: It's unlikely you will see converted repositories with a lot of changing blob data. DVCS, at least in the way they work currently, simply kill this workflow/organization pattern. I very much suspect this issue will become more important as time goes on -- a lot of people are still new to DVCS and they don't yet realize the full implications of that architecture with regard to repo size. Any file you commit will add to the repository size *FOREVER*. I'm pretty sure we haven't heard the last word in the VCS battle, in that in a few years' time people will *again* be talking about and switching to another VCS :( . Mark these words. (The only way this is not going to happen is if Git or Mercurial are able to address this issue in a satisfactory way, which I'm not sure is possible or easy.) You don't happen to know about any projects of this kind in any other VCS that can be practically tested, do you? You mean a project like that, hosted in Subversion or CVS (so that you can convert it to Git/Mercurial and see how it is in terms of repo size)? I don't know any off the top of my head, except the one at my job, but naturally it is commercial and closed-source so I can't share it. I'm cloning the Mozilla Firefox repo right now; I'm curious how big it is. ( https://developer.mozilla.org/en/Mozilla_Source_Code_%28Mercurial%29 ) But other than that, what exactly do you want to test? There is no specific thing to test: if you add a binary file (in a format that is already compressed, like zip, jar, jpg, etc.) of size X, you will increase the repo size by X bytes forever. There is no way around it. (Unless on Git you rewrite the history of the repo, which will doubtfully ever be allowed on central repositories.) -- Bruno Medeiros - Software Engineer
Re: Stupid little iota of an idea
Andrei Alexandrescu wrote: I don't find the name iota stupid. Andrei Of course _you_ don't. However, practically all the users _do_ find it poorly named, including other developers in the project. This is the umpteenth time this has come up in the NG, and incidentally that is the only reason I know what the function does. If the users think the name is stupid, then it really is. That's how usability works, and the fact that you think otherwise, or that the name might be more accurate mathematically, is really not relevant. If you want D/Phobos to be used by other people besides yourself, you need to cater for their requirements.
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
On 09/02/2011 14:27, Michel Fortin wrote: On 2011-02-09 07:49:31 -0500, Bruno Medeiros brunodomedeiros+spam@com.gmail said: On 04/02/2011 20:11, Michel Fortin wrote: On 2011-02-04 11:12:12 -0500, Bruno Medeiros brunodomedeiros+spam@com.gmail said: Can Git really have an usable but incomplete local clone? Yes, it's called a shallow clone. See the --depth switch of git clone: http://www.kernel.org/pub/software/scm/git/docs/git-clone.html I was about to say Cool!, but then I checked the doc on that link and it says: A shallow repository has a number of limitations (you cannot clone or fetch from it, nor push from nor into it), but is adequate if you are only interested in the recent history of a large project with a long history, and would want to send in fixes as patches. So it's actually not good for what I meant, since it is barely usable (you cannot push from it). :( Actually, pushing from a shallow repository can work, but if your history is not deep enough it will be a problem when git tries determine the common ancestor. Be sure to have enough depth so that your history contains the common ancestor of all the branches you might want to merge, and also make sure the remote repository won't rewrite history beyond that point and you should be safe. At least, that's what I understand from: http://git.661346.n2.nabble.com/pushing-from-a-shallow-repo-allowed-td2332252.html Interesting. But it still feels very much like a second-class functionality, not something they really have in mind to support well, at least not yet. Ideally, if one wants to do push but the ancestor history is incomplete, the VCS would download from the central repository whatever revision/changeset information was missing. Before someone says, oh but that defeats some of the purposes of a distributed VCS, like being able to work offline. I know, and I personally don't care that much, in fact I find this benefit of DVCS has been overvalued way out of proportion. 
Does anyone do any serious coding while being offline for an extended period of time? Some people mentioned coding on the move, with laptops, but seriously, even if I am connected to the Internet I cannot code with my laptop only, I need it connected to a monitor, as well as a mouse, (and preferably a keyboard as well). -- Bruno Medeiros - Software Engineer
Re: High performance XML parser
On Mon, 07 Feb 2011 10:37:46 -0500, Robert Jacques wrote: On Mon, 07 Feb 2011 07:40:30 -0500, Steven Schveighoffer <schvei...@yahoo.com> wrote: On Fri, 04 Feb 2011 17:36:50 -0500, Tomek Sowiński <j...@ask.me> wrote: Steven Schveighoffer wrote: Here is how I would approach it (without doing any research). First, we need a buffered I/O system where you can easily access and manipulate the buffer. I proposed one a few months ago in this NG. Second, I'd implement the XML lib as a range where front() gives you an XMLNode. If the XMLNode is an element, it will have eager access to the element tag and lazy access to the attributes and the sub-nodes. Each XMLNode will provide a forward range for the child nodes. Thus you can skip whole elements in the stream by popFront'ing a range, and dive deeper via accessing the nodes of the range. I would consider a tokenizer which can be used for SAX-style parsing to be a key feature of std.xml. I know it was considered very important when I was gathering requirements for my std.JSON rewrite. XML parser /dsource/xmlp/trunk/std I have experimented with various means to balance efficiency and flexibility in XML parsing. The core parsing uses an ItemReturn struct. This returns transient pointers to reused buffers, so that there is reduced memory buffer reallocation for just churning through the XML document. struct ItemReturn { ItemType type = ItemType.RET_NULL; char[] scratch; char[][] names; char[][] values; } The central parse method fills the ItemReturn with transient tag names and pointers to names and values, somewhat like a SAX parser. To measure performance components, take the throughput of making an XML document using a linked DOM tree structure, with validation and attribute normalisation, as 100%. With this implementation, using buffers for file or string sources, I get the breakdown of processing below. This is done on books.xml. Other examples of documents, with different structure, entities, DTD, schema, or namespaces, will differ.
Input overhead: the throughput of examining each single Unicode character in the document, as a dchar value, accounts for about 12-15% of the time. So there is not a relatively great cost in the input buffering. Tag, attribute and content throughput: parsing and filling the ItemReturn struct for each parse method call, called for every identifiable XML token in the document sequence, takes about 60% of the time, without doing anything with the result. This also includes the input overhead. No DOM structure is assumed, and there is no recursion. The general sort of work done is keeping track of state, and assembling and returning the various types of tokens. Actually building a full DOM, without doing much in the way of validation and attribute normalisation, takes up to about 86% of the total time. This includes converting the returned transient buffered values of tags, attribute names, values and content into strings, and the creation and linking of DOM nodes. It involves some recursive method calls that mirror the XML document structure. Some time and memory seem to be saved by aliasing tag and attribute names using an AA; with that, it takes about 85% of the full job. Additional validation and attribute normalisation takes more time.
Re: d-programming-language.org
On 30/01/2011 08:03, Andrei Alexandrescu wrote: I've had some style updates from David Gileadi rotting in a zip file in my inbox for a good while. It took me the better part of today to manually merge his stale files with the ones in the repository, which have in the meantime undergone many changes. The result is in http://d-programming-language.org. It has (or at least should have) no new content, only style changes. I added a simple site index, see http://d-programming-language.org/siteindex.html. It's not linked from anywhere but gives a good entry point for all pages on the site. One other link of possible interest is http://d-programming-language.org/phobos-prerelease/phobos.html which will contain the latest and greatest Phobos committed to github. I've included build targets to synchronize /phobos-prerelease/ and /phobos/. (Right now both contain the prerelease version; don't let that confuse you.) In agreement with Walter, I removed the Digitalmars reference. The message is simple - D has long become an entity independent from the company that created it. (However, this makes the page header look different and probably less visually appealing.) Anyway, this all is not done in relation or in response to the recent related activity on redesigning the homepage. I just wanted to make sure that we have a clean basis to start from, and am looking with interest at the coming developments. Cheers, Andrei I gave a few comments on this some time ago, I'm not sure if they were seen (the post was way after the thread creation). It regards the search button and functionality, it goes like this: The search section looks fugly, IMO. The textbutton itself is not bad, but the dropdown is ugly, and not just on aspect, but also functionality. I'm surprised no else commented likewise. :( My suggestion is to remove the drop-down altogether. Let the more refined search scope options be available elsewhere, perhaps on the search results page itself. 
Also, we should use Google Custom Search. Just linking to raw Google looks amateurish. That's because (amongst other things) the search page shows up with all the Google personalized homepage stuff (if you enable it for google.com). Compare: http://oi55.tinypic.com/350mmxc.jpg to: http://www.google.com/cse?q=foobar&cx=013598269713424429640%3Ag5orptiw95w&ie=UTF-8&sa=Search Here's an example of what I'm suggesting for the search functionality, try it out: http://svn.dsource.org/projects/descent/downloads/temp/dwebpage.htm (obviously the layout and colors are broken, I just want to demo the functionality, especially using Google Custom Search. Also please try it *with Firefox*, with Chrome it's broken) An alternative is to maintain the current behavior and have the search page be presented on its own, instead of contained in the D programming language site: http://www.google.com/cse?cx=016833344392370455076%3Afjy38cei55c&ie=UTF-8&q=foobar&sa=Search However I don't know how to customize the CSS for this hosted page; plus, when you click the scope labels, the search query changes: you get an annoying extra more:library_reference keyword on it. Meh. Yet another alternative is to put the search text box and button as a section in the navigation leftbar, and put the three search scopes as 3 radio button options, each on their own line... but please, no dropdown in a header! :S -- Bruno Medeiros - Software Engineer
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
Bruno Medeiros Wrote: On 09/02/2011 23:02, Ulrik Mikaelsson wrote: 2011/2/9 Bruno Medeirosbrunodomedeiros+spam@com.gmail: It's unlikely you will see converted repositories with a lot of changing blob data. DVCS, at the least in the way they work currently, simply kill this workflow/organization-pattern. I very much suspect this issue will become more important as time goes on - a lot of people are still new to DVCS and they still don't realize the full implications of that architecture with regards to repo size. Any file you commit will add to the repository size *FOREVER*. I'm pretty sure we haven't heard the last word on the VCS battle, in that in a few years time people are *again* talking about and switching to another VCS :( . Mark these words. (The only way this is not going to happen is if Git or Mercurial are able to address this issue in a satisfactory way, which I'm not sure is possible or easy) You don't happen to know about any projects of this kind in any other VCS that can be practically tested, do you? You mean a project like that, hosted in Subversion or CVS (so that you can convert it to Git/Mercurial and see how it is in terms of repo size)? I don't know any of the top of my head, except the one in my job, but naturally it is commercial and closed-source so I can't share it. I'm cloning the Mozilla Firefox repo right now, I'm curious how big it is. ( https://developer.mozilla.org/en/Mozilla_Source_Code_%28Mercurial%29) But other than that, what exactly do you want to test? There is no specific thing to test, if you add a binary file (from a format that is already compressed, like zip, jar, jpg, etc.) of size X, you will increase the repo size by X bytes forever. There is no other way around it. 
(Unless on Git you rewrite the history on the repo, which doubtfully will ever be allowed on central repositories) One thing we've done at work with game asset files is we put them in a separate repository and to conserve space we use a cleaned branch as a base for work repository. The graph below shows how it works initial state - alpha1 - alpha2 - beta1 - internal rev X - internal rev X+1 - internal rev X+2 - ... - internal rev X+n - beta2 Now we have a new beta2. What happens next is we take a snapshot copy of the state of beta2, go back to beta1, create a new branch and paste the snapshot there. Now we move the old working branch with internal revisions to someplace safe and start using this as a base. And the work continues with this: initial state - alpha1 - alpha2 - beta1 - beta2 internal rev X+n+1 - ... The repository size won't become a problem with text / source code. Since you're a SVN advocate, please explain how well it works with 2500 GB of asset files?
Re: std.xml should just go
On 04/02/2011 21:07, Steven Schveighoffer wrote: On Fri, 04 Feb 2011 15:44:46 -0500, Jeff Nowakowski j...@dilacero.org wrote: On 02/03/2011 10:07 PM, Walter Bright wrote: The way to get a high performance string parser in D is to take advantage of one of D's unique features - slices. Java, C++, C#, etc., all rely on copying strings. With D you can just use slices into the original XML source text. If you're copying the text, you're doing it wrong. Java's substring() does not copy the text, at least in the official JDK implementation. Unfortunately, it doesn't specify this behavior as part of the String API. Yes, but Java's strings are immutable. Typically a buffered I/O stream has a mutable buffer used to read data. This necessitates a copy. At the very least, you need to continue allocating more memory to hold all the strings. -Steve True, but in this case you will have the exact same problem with any other language as well. So it doesn't seem like D will have any particular advantage over Java, with regards to slicing and strings. -- Bruno Medeiros - Software Engineer
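For readers following this slicing argument, here is a minimal sketch of the idea (hand-computed offsets stand in for an actual parser): a D slice is just a pointer/length pair into the original buffer, so extracting the element content allocates nothing.

```d
void main() {
    string xml = "<name>Walter</name>";

    // "Parse" by hand: the body starts after "<name>" (6 chars)
    // and ends before "</name>" (7 chars).
    string content = xml[6 .. $ - 7];

    assert(content == "Walter");
    // The slice aliases the source text: same memory, zero copies.
    assert(content.ptr == xml.ptr + 6);
}
```

Steven's point stands, though: this only works if the whole source text stays alive and immutable; with a reused mutable I/O buffer, anything kept must still be copied out.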
Re: More on Rust
spir Wrote: On 02/11/2011 08:39 AM, Jim wrote: Jacob Carlborg Wrote: On 2011-02-10 20:15, Walter Bright wrote: Nick Sabalausky wrote: bearophile bearophileh...@lycos.com wrote in message news:iivb5n$na3$1...@digitalmars.com... auto x; if (localtime().hours >= 8) { x = "awake!"; } else { x = "asleep, go away."; } log("I'm " + x); That would be really nice to have in D. auto x = (localtime().hours >= 8) ? "awake!" : "asleep, go away."; For this simple if statement it works, but as soon as you have a few lines in the if statement it will become really ugly. But one could wrap the if statement in a function instead. In other languages where statements really are expressions this works: auto x = if (localtime().hours >= 8) "awake!"; else "asleep, go away."; log("I'm " + x); Other languages may have bloated syntaxes, with no particular benefit. auto x = localtime().hours >= 8 ? "awake!" : "asleep, go away."; log("I'm " ~ x); If the expressions are complex I put them in functions. 1) It hides and isolates details, which allows you to focus on the more abstract aspects. 2) It gives the expression a name and facilitates reuse. Agreed. But in practice I often end up being dubious about seemingly nice features like, precisely, the ternary operator. In languages that do not have such a nicety people end up writing e.g.: if (localtime().hours >= 8) x = "awake"; else x = "asleep, go away."; Apart from the very mild annoyance, I guess, of writing "x =" twice, this is all gain: code is both more compact and legible. Precisely because the ternary operator is a bit weird syntactically (and not that commonly needed and used in real code), people feel, just like you, the need to clarify it in code, finally using more vertical space. Note that the case here is the simplest possible, the expressions being plain literal constants. I personally only consume 3 lines: auto x = (localtime().hours >= 8) ? "awake!" : "asleep, go away."; What do you think? I always make a conscious effort to write succinct expressions.
My rule is to only use the ternary operator when it actually makes the code cleaner. Why? Branches should be conspicuous because they increase the number of code paths. Exceptions are the other big source of code paths. This is, btw, why I think that scope statements in D are just superb.
Re: std.xml should just go
On 06/02/2011 21:30, Jacob Carlborg wrote: On 2011-02-06 20:59, Walter Bright wrote: Jacob Carlborg wrote: On 2011-02-04 20:33, Walter Bright wrote: so wrote: It doesn't matter what signature you use for the function, compiler is aware and will output an error when you do the opposite of the signature. If this is the case, why do we need that signature? Examine the API of a function in a library. It says it doesn't modify anything reachable through its arguments, but is that true? How would you know? And how would you know if the API doc doesn't say? You'd fall back to const by convention, and that is not reliable and does not scale. This is quite interesting, I generally agree with this but on the other hand Ruby on Rails is basically built on conventions, it works out very well and I love it. I'm not tapped into the ruby community, but I've heard some scuttlebutt that usage of ruby is declining in large systems because ruby seems to have problems with large systems due to monkey patching and other cowboying that ruby encourages. Maybe, I have no idea. Although I noticed myself that I wanted to have static typing in Ruby a couple of times. Problems with large systems? Wanting to use static typing? Well, there is a solution to that, but it is even a more radical kind of a monkey patch: basically you remove 100% of the Ruby runtime and install and use Java and Java frameworks instead... ;) -- Bruno Medeiros - Software Engineer
Re: inlining or not inlining...
spir Wrote: On 02/11/2011 09:33 AM, Jim wrote: Regardless, I would _hope_ that the compiler would be smart enough to make intelligent choices about inlining. That's probably one of those areas that can always be improved however. I also think that this decision should be left to the compiler. The inline keyword was deemed useful for the same reason that symbols had to be declared before their use (causing the C/C++ header hell) -- it's easier to implement such a compiler. Agreed; but what about having the compiler tell you, on demand, func 'f' at line #l in module 'm' was not inlined ? I rarely need to go that low-level. My hope is that the compiler will sort this out in the end. Give it some time, or effort to have these optimizations implemented in the compiler.
Re: std.xml should just go
On Fri, 11 Feb 2011 08:19:51 -0500, Bruno Medeiros brunodomedeiros+spam@com.gmail wrote: On 04/02/2011 21:07, Steven Schveighoffer wrote: On Fri, 04 Feb 2011 15:44:46 -0500, Jeff Nowakowski j...@dilacero.org wrote: On 02/03/2011 10:07 PM, Walter Bright wrote: The way to get a high performance string parser in D is to take advantage of one of D's unique features - slices. Java, C++, C#, etc., all rely on copying strings. With D you can just use slices into the original XML source text. If you're copying the text, you're doing it wrong. Java's substring() does not copy the text, at least in the official JDK implementation. Unfortunately, it doesn't specify this behavior as part of the String API. Yes, but Java's strings are immutable. Typically a buffered I/O stream has a mutable buffer used to read data. This necessitates a copy. At the very least, you need to continue allocating more memory to hold all the strings. -Steve True, but in this case you will have the exact same problem with any other language as well. So it doesn't seem like D will have any particular advantage over Java, with regards to slicing and strings. I think D can do it without copying out of the buffer. You just have to avoid using immutable strings. -Steve
Re: Purity
On 18/12/2010 12:46, Don wrote: spir wrote: On Sat, 18 Dec 2010 01:08:20 -0800 Jonathan M Davis jmdavisp...@gmx.com wrote: Thank you for the explanation about strongly pure funcs calling weakly pure ones --this fully makes sense. I would like weakly pure to include output funcs, and exclude all possibilities to modify (non-local) state. The problem is that output is accessing global variables - which weakly pure functions _cannot_ do. Why? What is the rationale for excluding output (I don't mean I/O, only O)? You're correct in saying that it doesn't affect the operation of the program. But in practice, program output is almost always important. For example, suppose we allowed output to be pure. Then consider: writeln("Hello, world!"); Since it returns nothing, and has no influence on the future execution of the program, the writeln can be dropped from the program. Hmmm Hum, it might still be useful to have something like a compiler switch that disables pure altogether; then people could use I/O and other non-pure operations for debugging purposes. One could wrap such code with a version statement: void myPurefunc(Foo foo) pure { version(pure_disabled) { writeln("some debug info: ", foo); } //... -- Bruno Medeiros - Software Engineer
std.regex
Recently I tried std.regex, and must say I'm very satisfied with the new interface. Just one small objection: I think the captures range doesn't need match.hit at the front. But I can live with it :)
Re: Stupid little iota of an idea
On 2/11/11 12:15 AM, Nick Sabalausky wrote: Andrej Mitrovicandrej.mitrov...@gmail.com wrote in message news:mailman.1476.1297391467.4748.digitalmar...@puremagic.com... What the hell does to! have to do with anything. Disregard my last post, it's obviously 3 AM and I'm talking gibberish. I just meant that iota looks a lot like (spaces added for clarity) i to a. In other words, the first time I ever saw iota, I confused it for the old C function that converts an integer to an ASCII string. It may very well have been 3am for me at the time ;) You are the second one who confuses iota with itoa. Actually, the third, I confused it too. According to the book The Design of Everyday Things the design of that function name is wrong, it's not your fault and it's not because it was 3am. When many people make mistakes with regards to the design of something it's *always* the design's fault, never the human's fault.
Re: Stupid little iota of an idea
On Fri, 11 Feb 2011 09:06:06 -0500, Ary Manzana a...@esperanto.org.ar wrote: On 2/11/11 12:15 AM, Nick Sabalausky wrote: Andrej Mitrovic andrej.mitrov...@gmail.com wrote in message news:mailman.1476.1297391467.4748.digitalmar...@puremagic.com... What the hell does to! have to do with anything. Disregard my last post, it's obviously 3 AM and I'm talking gibberish. I just meant that iota looks a lot like (spaces added for clarity) i to a. In other words, the first time I ever saw iota, I confused it for the old C function that converts an integer to an ASCII string. It may very well have been 3am for me at the time ;) You are the second one who confuses iota with itoa. Actually, the third, I confused it too. According to the book The Design of Everyday Things the design of that function name is wrong, it's not your fault and it's not because it was 3am. When many people make mistakes with regards to the design of something it's *always* the design's fault, never the human's fault. Also, C code is callable from D. Consider a seasoned D coder who sees code like: auto x = itoa(5); What is he going to think x is? -Steve
Re: Stupid little iota of an idea
On Fri, 11 Feb 2011 09:06:06 -0500, Ary Manzana a...@esperanto.org.ar wrote: On 2/11/11 12:15 AM, Nick Sabalausky wrote: Andrej Mitrovicandrej.mitrov...@gmail.com wrote in message news:mailman.1476.1297391467.4748.digitalmar...@puremagic.com... What the hell does to! have to do with anything. Disregard my last post, it's obviously 3 AM and I'm talking gibberish. I just meant that iota looks a lot like (spaces added for clarity) i to a. In other words, the first time I ever saw iota, I confused it for the old C function that converts an integer to an ASCII string. It may very well have been 3am for me at the time ;) You are the second one who confuses iota with itoa. Actually, the third, I confused it too. Me 2. According to the book The Design of Everyday Things the design of that function name is wrong, it's not your fault and it's not because it was 3am. When many people make mistakes with regards to the design of something it's *always* the design's fault, never the human's fault. I love that book, I wish more software engineers used it. One of my favorite classes at college. -Steve
Re: std.xml should just go
On 11/02/2011 13:48, Steven Schveighoffer wrote: On Fri, 11 Feb 2011 08:19:51 -0500, Bruno Medeiros brunodomedeiros+spam@com.gmail wrote: On 04/02/2011 21:07, Steven Schveighoffer wrote: On Fri, 04 Feb 2011 15:44:46 -0500, Jeff Nowakowski j...@dilacero.org wrote: On 02/03/2011 10:07 PM, Walter Bright wrote: The way to get a high performance string parser in D is to take advantage of one of D's unique features - slices. Java, C++, C#, etc., all rely on copying strings. With D you can just use slices into the original XML source text. If you're copying the text, you're doing it wrong. Java's substring() does not copy the text, at least in the official JDK implementation. Unfortunately, it doesn't specify this behavior as part of the String API. Yes, but Java's strings are immutable. Typically a buffered I/O stream has a mutable buffer used to read data. This necessitates a copy. At the very least, you need to continue allocating more memory to hold all the strings. -Steve True, but in this case you will have the exact same problem with any other language as well. So it doesn't seem like D will have any particular advantage over Java, with regards to slicing and strings. I think D can do it without copying out of the buffer. You just have to avoid using immutable strings. -Steve The data that you want to keep afterwards you will have to copy, that much is obvious. As for the data you don't want to keep (but just guide you through the parsing), yes in D you can look at it without copying it out of the buffer. But you can do the same in Java, there is this core interface CharSequence that is roughly equivalent to a D slice for chars (http://download.oracle.com/javase/1.4.2/docs/api/java/lang/CharSequence.html) -- Bruno Medeiros - Software Engineer
Re: Stupid little iota of an idea
On 10.02.2011 12:40, spir wrote: Certainly, because it's /highly/ important for a community of programmers to share the same culture. And names are the main support vehicle for this culture. Denis (For this reason, I stopped aliasing size_t and size_diff_t to Ordinal and Cardinal ;-) I use uint everywhere) This will cause trouble on 64-bit systems, because there size_t is ulong. Cheers, - Daniel
Re: std.xml should just go
On 04/02/2011 16:14, Eric Poggel wrote: On 2/3/2011 10:20 PM, Andrei Alexandrescu wrote: At this point there is no turning back from ranges, unless we come about with an even better idea (I discussed one with Walter but we're not pursuing it yet). Care to elaborate on the new idea? Or at least a quick summary so we're not all left wondering? That comment left me curious as well... -- Bruno Medeiros - Software Engineer
Re: std.xml should just go
On 2011-02-11 09:29:03 -0500, Bruno Medeiros brunodomedeiros+spam@com.gmail said: On 11/02/2011 13:48, Steven Schveighoffer wrote: I think D can do it without copying out of the buffer. You just have to avoid using immutable strings. -Steve The data that you want to keep afterwards you will have to copy, that much is obvious. In fact, if the amount of data you want to keep is greater than the amount you want to throw away, it might be better to make the buffer immutable and allocate new buffers as you go forward. One allocation per buffer is likely going to be less wasteful than one allocation + one copy per string (+ some space wasted after each memory block). If you don't intend to keep most of the data after parsing, however, then you should go for a mutable buffer and copy what you need. What I like very much about Andrei's proposal for a buffered input range is that it supports mutable and immutable buffers equally well. -- Michel Fortin michel.for...@michelf.com http://michelf.com/
Re: D vs Go on reddit
On 09.02.2011 23:56, Ulrik Mikaelsson wrote: Maybe it's [JavaScript] the ASM of the next decade. Bringing the performance of the systems of the decade before last to the hardware of the next decade. Hooray \o/
Re: D vs Go on reddit
On 02/10/2011 10:08 PM, Nick Sabalausky wrote: Regarding Java 1.3/1.4: They may very well have been closer to 1.2 than they were to 1.5/1.6 (I wouldn't know), but IIRC 1.3 was when it finally started to give people little bits of sugar (ex: foreach). 1.5 was when Java got foreach, generics, and enum. I don't think there were any syntactical language changes before that. Previous version updates were mostly about libraries and frameworks.
Re: std.regex
On 02/11/2011 03:05 PM, jovo wrote: Recently I tried std.regex, and must say I’m very satisfied with new interface. Just one small objection, I think captures range don’t need match.hit at front. ??? Denis -- _ vita es estrany spir.wikidot.com
Re: Purity
Bruno Medeiros: Hum, it might still be useful to have something like a compiler switch that disables pure altogether; then people could use I/O and other non-pure operations for debugging purposes. One could wrap such code with a version statement: void myPurefunc(Foo foo) pure { version(pure_disabled) { writeln("some debug info: ", foo); } //... This seems an interesting idea to help debug pure functions. Bye, bearophile
Re: Stupid little iota of an idea
On 02/11/2011 03:32 PM, Daniel Gibson wrote: On 10.02.2011 12:40, spir wrote: Certainly, because it's /highly/ important for a community of programmers to share the same culture. And names are the main support vehicle for this culture. Denis (For this reason, I stopped aliasing size_t and size_diff_t to Ordinal and Cardinal ;-) I use uint everywhere) This will cause trouble on 64-bit systems, because there size_t is ulong. Yes and no. For pointers and mem diffs, yes. But for 99% of uses of cardinals and ordinals uint is by far big enough. Denis -- _ vita es estrany spir.wikidot.com
Re: inlining or not inlining...
Jim: I rarely need to go that low-level. Two times I have had D1 code that was much too slow compared to equivalent C code. After profiling and some changes I understood that the cause was an important missing inline. With a list of the inlined functions (as done by some Common Lisp compilers; see the enhancement request in Bugzilla), this search becomes quicker. My hope is that the compiler will sort this out in the end. Give it some time, or effort to have these optimizations implemented in the compiler. The LLVM back-end of LDC is able to inline much more, but even here a list of inlined/not-inlined functions helps. D is almost a system language, so sometimes you need to go lower level (or you just need a program that's not too slow). Bye, bearophile
Re: Stupid little iota of an idea
spir: But for 99% uses of cardinals and ordinals uint is by far big enough. Then in D2 for those use an int. Unsigned values are _very_ bug-prone in D2. Bye, bearophile
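A small example of the kind of bug bearophile is alluding to; unsigned arithmetic silently wraps instead of going negative:

```d
void main() {
    int[] a;

    // a.length is size_t (unsigned), so 0 - 1 wraps around to
    // size_t.max instead of becoming -1.
    auto last = a.length - 1;
    assert(last == size_t.max);

    // This is why loops like `for (i = 0; i <= a.length - 1; i++)`
    // effectively never terminate on an empty array. A signed copy
    // behaves as expected:
    long n = cast(long) a.length - 1;
    assert(n == -1);
}
```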
Re: inlining or not inlining...
On 02/11/2011 07:08 PM, bearophile wrote: Jim: I rarely need to go that low-level. Two times I have had D1 code that was much too slow compared to equivalent C code. After profiling and some changes I understood that the cause was an important missing inline. With a list of the inlined functions (as done by some Common Lisp compilers; see the enhancement request in Bugzilla), this search becomes quicker. My hope is that the compiler will sort this out in the end. Give it some time, or effort to have these optimizations implemented in the compiler. The LLVM back-end of LDC is able to inline much more, but even here a list of inlined/not-inlined functions helps. D is almost a system language, so sometimes you need to go lower level (or you just need a program that's not too slow). To me the relevant aspect is not so much the practical effect, but understanding how/why/what is inlined by (hopefully good) compilers. Learning about that, even if not much of it is put into practice (I don't intend to write the next big language's compiler ;-) can only improve coding skills and, say... help stop shooting in the dark. Denis -- _ vita es estrany spir.wikidot.com
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
On 2011-02-11 08:05:27 -0500, Bruno Medeiros brunodomedeiros+spam@com.gmail said: On 09/02/2011 14:27, Michel Fortin wrote: On 2011-02-09 07:49:31 -0500, Bruno Medeiros brunodomedeiros+spam@com.gmail said: I was about to say Cool!, but then I checked the doc on that link and it says: A shallow repository has a number of limitations (you cannot clone or fetch from it, nor push from nor into it), but is adequate if you are only interested in the recent history of a large project with a long history, and would want to send in fixes as patches. So it's actually not good for what I meant, since it is barely usable (you cannot push from it). :( Actually, pushing from a shallow repository can work, but if your history is not deep enough it will be a problem when git tries determine the common ancestor. Be sure to have enough depth so that your history contains the common ancestor of all the branches you might want to merge, and also make sure the remote repository won't rewrite history beyond that point and you should be safe. At least, that's what I understand from: http://git.661346.n2.nabble.com/pushing-from-a-shallow-repo-allowed-td2332252.html Interesting. But it still feels very much like a second-class functionality, not something they really have in mind to support well, at least not yet. Ideally, if one wants to do push but the ancestor history is incomplete, the VCS would download from the central repository whatever revision/changeset information was missing. Actually, there's no central repository in Git. But I agree with your idea in general: one of the remotes could be designated as being a source to look for when encountering a missing object, probably the one from which you shallowly cloned from. All we need is someone to implement that. -- Michel Fortin michel.for...@michelf.com http://michelf.com/
Re: Stupid little iota of an idea
On 2/10/11 5:29 PM, Sean Kelly wrote: Andrei Alexandrescu Wrote: I don't find the name iota stupid. I never entirely understood the name choice. I suppose iota could be related to a small step so iota(1,5) is a series of small steps from 1 to 5? Pretty much, with the note that the smallest nondegenerated integral step is 1 :o). The name iota makes perfect sense to me, I knew what it does in the STL from its signature before reading its definition. There are people who don't like it, there are people who do, and there are people who simply pick up the name and use it. I don't see how to improve global happiness. Andrei
Re: Stupid little iota of an idea
On 2/10/11 8:28 PM, Andrej Mitrovic wrote: What the hell does to! have to do with anything. Disregard my last post, it's obviously 3 AM and I'm talking gibberish. In any case, alias iota range; Problem solved for me! Aside from the fact that range has another meaning in D, the word does not convey the notion that iota adds incremental steps to move from one number to another. Iota does convey that notion. Andrei
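For readers who haven't met it yet, iota's behaviour in brief (two of the std.range signatures):

```d
import std.range : iota;
import std.algorithm : equal;

void main() {
    // iota(begin, end) lazily produces begin, begin+1, ..., end-1.
    assert(equal(iota(1, 5), [1, 2, 3, 4]));

    // An explicit step is also supported: iota(begin, end, step).
    assert(equal(iota(0, 10, 3), [0, 3, 6, 9]));
}
```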
Re: Unilink - alternative linker for win32/64, DMD OMF extensions?
Ok, bumping this up with the latest news from UniLink developers: quote Ok, we release it's as D extension in next release. Best regards, UniLink /quote That's just plain awesome ;) -- Dmitry Olshansky
Re: inlining or not inlining...
bearophile wrote: While in isolation that's a good idea, how far should it be taken? Should the compiler emit information on which variables wound up in which registers, and why? What about other of the myriad of compiler optimizations? Inlining is an important optimization, so giving this information to the programmer is a good start. Register allocation is far more important than inlining. Why not give information about why a variable was not enregistered?
Re: D vs Go on reddit
Jeff Nowakowski wrote: 1.5 was when Java got foreach, generics, and enum. I don't think there were any syntactical language changes before that. Previous version updates were mostly about libraries and frameworks. Inner classes were added earlier.
Re: Unilink - alternative linker for win32/64, DMD OMF extensions?
On 11.02.2011 19:56, Dmitry Olshansky wrote: Ok, bumping this up with the latest news from UniLink developers: quote Ok, we release it's as D extension in next release. Best regards, UniLink /quote That's just plain awesome ;) Great :)
Re: unsigned >= 0
== Quote from bearophile (bearophileh...@lycos.com)'s article To avoid troubles in generic code you need a little workaround: if (__traits(isUnsigned, x) || x >= 0) { ... That's not good enough yet. The first part of the test needs to be done in a static if. Bye, bearophile You also need to watch out for code like this too: if (T.min < 0) { ... As that could possibly trigger unsigned >= 0 warnings too. Regards Iain
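A sketch of the static-if form bearophile is asking for, using std.traits.isUnsigned in place of the __traits test (the helper name nonNegative is invented for illustration):

```d
import std.traits : isUnsigned;

bool nonNegative(T)(T x) {
    // The unsigned branch must be compiled out with static if;
    // a plain runtime `x >= 0` on an unsigned T is the tautological
    // comparison that triggers the warning under discussion.
    static if (isUnsigned!T)
        return true;
    else
        return x >= 0;
}

void main() {
    assert(nonNegative(3u));   // uint: always true, no comparison emitted
    assert(nonNegative(3));    // int: actual check
    assert(!nonNegative(-3));
}
```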
Re: Stupid little iota of an idea
On 2/11/11, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: On 2/10/11 8:28 PM, Andrej Mitrovic wrote: What the hell does to! have to do with anything. Disregard my last post, it's obviously 3 AM and I'm talking gibberish. In any case, alias iota range; Problem solved for me! Aside from the fact that range has another meaning in D, the word does not convey the notion that iota adds incremental steps to move from one number to another. Iota does convey that notion. Andrei So why does Python use it? It seems Go uses iota, but for something different: http://golang.org/doc/go_spec.html#Iota That's rather ugly imo. But that's Go. :)
0nnn octal notation considered harmful
Hello, Just had a strange bug --in a test func!-- caused by this notation. This is due in my case to the practice (common, I guess) of pretty-printing int numbers using a %0nd or %0ns format, to get a nice alignment. Then, if one feeds the results back into D code, they are interpreted as octal... Now I know it: I will pad with spaces instead ;-) Copying a stringified integer back is indeed not the only way this notation is bug-prone: prefixing a number with '0' should not change its value (!). Several programming languages have switched to another notation, like 0onnn, which is consistent with the common hex/bin notations and cannot lead to misinterpretation. Such a change would be, I guess, backward compatible; and would not be misleading for C coders. Denis -- _ vita es estrany spir.wikidot.com
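The trap in miniature. (Note that later versions of D did in fact drop 0-prefixed octal literals and added std.conv.octal, which is essentially the kind of fix proposed above.)

```d
import std.conv : octal;

void main() {
    // In 2011-era D, the literal 0031 was octal: 3*8 + 1 == 25,
    // not 31 -- exactly the pretty-printing feedback bug described
    // above. Later compilers reject 0-prefixed literals outright
    // and offer std.conv.octal instead:
    assert(octal!"31" == 25);
    assert(octal!31 == 25); // the template also accepts an int literal
}
```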
Re: inlining or not inlining...
bearophile Wrote: The LLVM back-end of LDC is able to inline much more, but even here a list of inlined/not inlined functions helps. D is almost a system language, so sometimes you need to go lower level (or you just need a program that's not too much slow). If forced inlining is to be supported, I think it would be a good idea to also let the _caller_ decide whether to inline a function. The compiler could simply find the function definition, perhaps parameterize it, and then insert it. Should it not be able to inline almost any function if asked to?
Re: Stupid little iota of an idea
bearophile Wrote: Then in D2 for those use an int. Unsigned values are _very_ bug-prone in D2. May I ask why?
Re: inlining or not inlining...
On 02/11/2011 08:11 PM, Walter Bright wrote: bearophile wrote: While in isolation that's a good idea, how far should it be taken? Should the compiler emit information on which variables wound up in which registers, and why? What about other of the myriad of compiler optimizations? Inlining is an important optimization, so give this information to the programmer is a good start. Register allocation is far more important than inlining. Why not give information about why a variable was not enregistered? Because we do not ask for it ;-) Joking apart, I would love that too, on tiny test apps indeed. Why not? Since the decision-making exists (and is certainly well structured around its criteria), why not let it write out its reasoning? About inline, note that no-one asks for information on every potentially inlinable func, blindly. But having a way to know that about /this/ func one is wondering about would be great: just append @inline to it, recompile, et voilà! you know :-) (provided you can interpret the output, but it's another story) side-note If I were a compiler writer, no doubt my code would hold snippets dedicated to spitting out such information, on need. For myself, as a testing tool to check the code really does what I mean. This is an integral part of my coding style. Else, how can I know? Checking the ASM can indeed tell you, but it seems to me far heavier and more complicated than following the trace of a reasoning, and it tells nothing about where/why/how the encoded logic fails. Then, if this can be useful to users... side-note Denis -- _ vita es estrany spir.wikidot.com
Re: 0nnn octal notation considered harmful
spir Wrote: Hello, Just had a strange bug --in a test func!-- caused by this notation. This is due in my case to the practice (common, I guess) of pretty printing int numbers using %0nd or %0ns format, to get a nice alignment. Then, if one feeds back results into D code, they are interpreted as octal... Now, i know it: will pad with spaces instead ;-) Copying a string'ed integer is indeed not the only way this notation is bug-prone: prefixing a number with '0' should not change its value (!). Several programming languages switched to another notation; like 0onnn, which is consistent with common hex bin notations and cannot lead to misinterpretation. Such a change would be, I guess, backward compatible; and would not be misleading for C coders. Fine with me. I've never used octal numbers (intentionally!). I guess some notation is needed for them, if only for historical reasons.
Re: inlining or not inlining...
Jim Wrote: bearophile Wrote: The LLVM back-end of LDC is able to inline much more, but even here a list of inlined/not inlined functions helps. D is almost a system language, so sometimes you need to go lower level (or you just need a program that's not too much slow). If forced inlining is to be supported I think it would be good idea to also let the _caller_ decide whether to inline a function. The compiler could simply find the function definition, perhaps parameterize it, and then insert it. Should it not be able to inline almost any function if asked to? Just had another idea.. A function would know statically whether it has been inlined or not. If inlined it could choose to propagate this attribute to any of its own callees. Not sure it would be useful though, just thinking aloud..
Re: Stupid little iota of an idea
Jim: bearophile Wrote: Then in D2 for those use an int. Unsigned values are _very_ bug-prone in D2. May I ask why? Because: - D unsigned numbers are fixed-sized bitfields, they overflow. (Multi-precision values are not built-in, they are currently slow if you need a 30 or 50 or 70 bit long value, and generally they feel like grafted on the language). - There are no run-time overflow errors, as in C#/Delphi/etc (this is ridiculous for any language that hopes to make safety one of its strong points. Delphi has had this feature since ages ago. Not having this in D is like going back to 1980 or before. It gives a peculiar stone-age style to the whole D language). - D copies the weird/bad C signed-unsigned conversion rules, that cause plenty of troubles. - D doesn't have warnings like GCC's that give a bit of help against the C signed-unsigned conversion rules, nor against things like unsigned >= 0. In Delphi using unsigned numbers is safer, but in D it's actually safer to use signed values :-) All this is compounded by the design choice of using unsigned values for array lengths and indexes in D. One even less bright design decision was to use unsigned longs for array positions, etc: http://d.puremagic.com/issues/show_bug.cgi?id=5452 Generally in the current D the best advice is to limit the usage of unsigned values as much as possible, and use them only in the uncommon situations where they are needed, like: - When you need the full range of 8, 16, 32 or 64 bits. This is uncommon, but it happens. Example: you really want to save memory to store indexes and you know you will have no more than about 40_000 items. Then use an ushort. - To store bitfields, like an array of 50_000 bits, to implement a bit set, some kind of bitmap, bloom filter, etc. - When you need to deserialize or receive data from some channel or memory, that you know is for example a 32 bit unsigned int or a 16 bit unsigned int, an unsigned 8 bit digital signal from some instrument, etc.
In most other cases it's better to use signed values, for example you will avoid several bugs if in your code you use lines of code like: int len = array.length; and then you use len in the rest of your function. Bye, bearophile
Re: 0nnn octal notation considered harmful
We actually have a library replacement for octal literals: http://dpldocs.info/octal But until the C style syntax is disallowed, it doesn't change anything. And Walter is resistant to the change, last I knew.
Re: 0nnn octal notation considered harmful
spir: like 0onnn, which is consistent with common hex bin notations and cannot lead to misinterpretation. Such a change would be, I guess, backward compatible; and would not be misleading for C coders. The 0nnn octal syntax is bug-prone and not explicit; it's out of place in a language like D that's designed to be a bit safer than C. The 0onnn syntax adopted by Python3 is safer, more explicit, it's good enough. But the leading zero syntax can't be given a new meaning in D, for backwards C compatibility, so it needs to be just disallowed statically... Walter likes the C octal syntax as a personal thing. Andrei prefers a syntax like octal! that's less compact, even more explicit, library-defined, and it has a corner case (when numbers become very large you need a string): http://www.digitalmars.com/d/2.0/phobos/std_conv.html#octal Bye, bearophile
Re: Stupid little iota of an idea
Andrei: Aside from the fact that range has another meaning in D, the word does not convey the notion that iota adds incremental steps to move from one number to another. Iota does convey that notion. I have accepted the iota name, it's short, easy to remember, it has one historical usage in APL, and Range has another meaning in D (but it's weird, and it's something you need to learn, it's not something a newbie is supposed to know before reading the D2 docs well. The name interval is better, simpler to understand, but it's longer for such a common function). But this answer of yours is stepping outside the bounds of reasonableness :-) If you ask a pool of 20 programmers what range(10,20) or iota(10,20) means, I'm sure more people will guess range() correctly than iota(). The word range() does convey a complete enumeration of the values in an interval; iota() does not convey that. Having said all this, I suggest introducing a first-class a..b interval syntax in D (or even a..b:c); this would be able to remove most (all?) usages of iota(). Bye, bearophile
Re: inlining or not inlining...
Jim: If forced inlining is to be supported spir was asking for a list of functions that the compiler has inlined, not for a forced inlining functionality. Bye, bearophile
Re: More on Rust
On 02/10/11 13:49, Andrej Mitrovic wrote: On 2/10/11, Walter Bright newshou...@digitalmars.com wrote: auto x = (localtime().hours >= 8) ? "awake!" : "asleep, go away."; Aye, a one liner! I hate seeing things like this: if (funcall()) { var = "foo"; } else { var = "bar"; } So much clutter instead of using the simple: var = funcall() ? "foo" : "bar"; I also see this sometimes: auto var = funcall(); if (var == "foo" || var == "bar" || var == "foobar" || var == "barfoo") { // some complex code } else if (var == "blue" || var == "green") { // some complex code } else if ()// more and more code.. { } But not many people seem to know that D supports strings in case statements: switch(funcall()) { case "foo": case "bar": case "foobar": case "barfoo": { // complex code } break; case "blue": case "green": { // complex code } break; default: } Even better: switch( funcall() ) { case "foo", "bar", "foobar", "barfoo": { // complex code break; } case "blue", "green": { // complex code break; } default: // do nothing -- i like to comment defaults } Also often forgotten: 'case' clauses take an argument list, not just an expression. And yeah, in this case at least... it still fits in 80 columns. (I prefer 90 myself, but it's moot.) -- Chris N-S
Re: inlining or not inlining...
Walter: bearophile wrote: While in isolation that's a good idea, how far should it be taken? Should the compiler emit information on which variables wound up in which registers, and why? What about other of the myriad of compiler optimizations? Inlining is an important optimization, so give this information to the programmer is a good start. Register allocation is far more important than inlining. Why not give information about why a variable was not enregistered? Some answers: - I am not asking for this information because it is harder to use for me. Whether a function was inlined or not is simpler to use for me. - In my D1 code I have found two or more problems caused by failed inlining. So this is of interest to me. - If you want to optionally give the register information to the programmer, then do it. I am not going to stop you :-) Some people need this information, like when you implement the little kernels of a very fast FFT. - Register allocation on 32 bit CPUs is not the same as on 64 bit ones. On 64 bit you have many more registers (and you have SSE registers, and now AVX too), so in many situations the register pressure is lower. - There are two groups of register allocation algorithms. The very fast ones, and the more precise ones. You even have perfect ones. Experience has shown that the difference in runtime performance between the precise algorithms and the perfect ones is often about 5% (this was measured on LLVM). This means that with LLVM you will probably never see a significant improvement of automatic register allocation, because it's as good as it gets, it's kind of a solved problem. In JITs like the JavaVM you have register allocation algorithms that are less precise but faster. Here there is space for possible future improvements. And then, there are the special situations, like implementing those little FFT kernels, or when you want to compile a functional language like Haskell into assembly.
In such situations even the very good automatic register allocation algorithms are not good enough. In this case information about register allocation is useful, but this is a very specialized usage. The need to know about inlining is in my opinion more common. Bye, bearophile
Re: unsigned >= 0
Iain Buclaw: You also need to watch out for code like this too: if (T.min >= 0) { ... As that could possibly trigger off unsigned >= 0 warnings too. I was not talking about warnings. I was talking about changing the D language, turning that into an _error_ if x is unsigned. Bye, bearophile
Re: inlining or not inlining...
spir: About inline, note that no-one asks for information on every potentially inlinable func, blindly. But having a way to know that about /this/ func one is wondering about would be great: just append @inline to it, recompile, et voilà! you know :-) (provided you can interpret the output, but it's another story) If that's your purpose then I suggest a name like @isInlinable :-) Bye, bearophile
Re: inlining or not inlining...
On 02/11/11 14:26, Jim wrote: Jim Wrote: bearophile Wrote: The LLVM back-end of LDC is able to inline much more, but even here a list of inlined/not inlined functions helps. D is almost a system language, so sometimes you need to go lower level (or you just need a program that's not too much slow). If forced inlining is to be supported I think it would be good idea to also let the _caller_ decide whether to inline a function. The compiler could simply find the function definition, perhaps parameterize it, and then insert it. Should it not be able to inline almost any function if asked to? Just had another idea.. A function would know statically whether it has been inlined or not. If inlined it could choose to propagate this attribute to any of its own callees. Not sure it would be useful though, just thinking aloud.. And at this point what once seemed a simple thing starts to show its complexity teeth. Suddenly there are all these adjunct features to be provided in order to make it properly useful. Don't get me wrong, I'd actually like having all this... but I'm not sure of the cost in compiler complexity (and likely slowdown) and language bloat. But here are some notions: == I really want foo() to be inlined, if even remotely possible! == pragma( inline ) int foo () { ... } == I'll be calling foo(), and I'd like it inlined if possible == int bar () { pragma( inline, foo ); // ... auto x = foo(); } == I'm foo(), and I'd like to know if I am being inlined == int foo () { pragma( inline, true ) { // inline code } pragma( inline, false ) { // ordinary code } } -- or if we ever get that 'meta' namespace some of us want -- int foo () { static if ( meta.inlined ) { // inline code } else { // ordinary code } } My chief complaint with my own notions is that 'pragma(inline' ends up with three different forms. This just isn't typical of a pragma. -- Chris N-S
Re: 0nnn octal notation considered harmful
spir denis.s...@gmail.com wrote in message news:mailman.1504.1297453559.4748.digitalmar...@puremagic.com... Hello, Just had a strange bug --in a test func!-- caused by this notation. This is due in my case to the practice (common, I guess) of pretty printing int numbers using %0nd or %0ns format, to get a nice alignment. Then, if one feeds back results into D code, they are interpreted as octal... Now, i know it: will pad with spaces instead ;-) Copying a string'ed integer is indeed not the only this notation is bug-prone: prefixing a number with '0' should not change its value (!). Several programming languages switched to another notation; like 0onnn, which is consistent with common hex bin notations and cannot lead to misinterpretation. Such a change would be, I guess, backward compatible; and would not be misleading for C coders. Yea, octal!nnn has already made the exceedingly rare uses of octal literals completely obsolete *long* ago. I know I for one am getting really tired of this completely unnecessary landmine in the language continuing to exist. The heck with std.xml, if anything, *this* needs nuked. If silently changed behavior is a problem, then just make it an error. Done. Minefield cleared.
Re: Stupid little iota of an idea
Steven Schveighoffer schvei...@yahoo.com wrote in message news:op.vqqsdxcaeav7ka@steve-laptop... According to the book The Design of Everyday Things the design of that function name is wrong, it's not your fault and it's not because it was 3am. When many people make mistakes with regards to the design of something it's *always* the design's fault, never the human's fault. I love that book, I wish more software engineers used it. One of my favorite classes at college. I probably would have liked that class if my college hadn't degenerated it into nothing more than a group VB6 project. Bleh. I always hated group projects. /me Does not play well with others. I really should actually read that book. I've heard a lot about what it says, and I like everything I've heard. Even without having read it, it's still influenced me a fair amount.
Re: inlining or not inlining...
On 02/11/2011 09:49 PM, bearophile wrote: Jim: If forced inlining is to be supported spir was asking for a list of functions that the compiler has inlined, not for a forced inlining functionality. You are (nearly) right, Bearophile. More precisely, I rather wish @inline on a given func to output a compiler message if said func is *not* inlined, due to some criterion the compiler uses to decide; at best, some hint about said criterion. I certainly do /not/ ask for forced inlining. (But others take the thread and speak of what they wish...) Denis -- _ vita es estrany spir.wikidot.com
Re: More on Rust
On 2/11/11, Christopher Nicholson-Sauls ibisbase...@gmail.com wrote: Even better: switch( funcall() ) { case "foo", "bar", "foobar", "barfoo": { // complex code break; } case "blue", "green": { // complex code break; } default: // do nothing -- i like to comment defaults } Also often forgotten: 'case' clauses take an argument list, not just an expression. And yeah, in this case at least... it still fits in 80 columns. (I prefer 90 myself, but it's moot.) -- Chris N-S Damn I didn't know that! Thanks.
Re: Stupid little iota of an idea
bearophile bearophileh...@lycos.com wrote in message news:ij473k$1tfn$1...@digitalmars.com... Andrei: Aside from the fact that range has another meaning in D, the word does not convey the notion that iota adds incremental steps to move from one number to another. Iota does convey that notion. I have accepted the iota name, it's short, easy to remember, it has one historical usage in APL, and Range has another meaning in D (but it's weird, and it's something you need to learn, it's not something a newbie is supposed to know before reading D2 docs well. The name interval is better, simpler to understand, but it's longer for a so common function). But this answer of yours is stepping outside the bounds of reasonableness :-) If you ask a pool of 20 programmers what range(10,20) or iota(10,20) means, I'm sure more people will guess range() correctly than iota(). The word range() do convey a complete enumeration of values in an interval. iota() does not convey that. Said all this, I suggest to introduce the first-class a..b interval syntax in D (or even a..b:c), this is able to remove most (all?) usage of iota(). I like interval, too. I do think the name iota is a nice extra reason to just use a..b or a..b:c like you say. It also makes it clear that it's a series of discrete values rather than a true mathematical range, since that's exactly how foreach already uses a..b: as a series of discrete values.
Re: inlining or not inlining...
On 02/11/2011 10:22 PM, Christopher Nicholson-Sauls wrote: On 02/11/11 14:26, Jim wrote: Jim Wrote: bearophile Wrote: The LLVM back-end of LDC is able to inline much more, but even here a list of inlined/not inlined functions helps. D is almost a system language, so sometimes you need to go lower level (or you just need a program that's not too much slow). If forced inlining is to be supported I think it would be good idea to also let the _caller_ decide whether to inline a function. The compiler could simply find the function definition, perhaps parameterize it, and then insert it. Should it not be able to inline almost any function if asked to? Just had another idea.. A function would know statically whether it has been inlined or not. If inlined it could choose to propagate this attribute to any of its own callees. Not sure it would be useful though, just thinking aloud.. And at this point what once seemed a simple thing starts to show its complexity teeth. Suddenly there are all this adjunct features to be provided in order to make it properly useful. Don't get me wrong, I'd actually like having all this... but I'm not sure of the cost in compiler complexity (and likely slowdown) and language bloat. But, here's some notions: == I really want foo() to be inlined, if even remotely possible! == pragma( inline ) int foo () { ... } == I'll be calling foo(), and I'd like it inlined if possible == int bar () { pragma( inline, foo ); // ... auto x = foo(); } == I'm foo(), and I'd like to know if I am being inlined == int foo () { pragma( inline, true ) { // inline code } pragma( inline, false ) { // ordinary code } } -- or if we ever get that 'meta' namespace some of us want -- int foo () { static if ( meta.inlined ) { // inline code } else { // ordinary code } } My chief complaint with my own notions is that 'pragma(inline' ends up with three different forms. This just isn't typical of a pragma. 
-- Chris N-S All of this is hardly related to the simple feature I initially asked for: string escString(string s) @tellmeifnotinlined { auto s2 = s.replace("\n", "\\n"); s2 = s2.replace("\t", "\\t"); return s2; } void show (X x) { // ... use escString ... } == Warning: function 'escString' in module 'foo' (line 123) was not inlined. (or else it was actually inlined) Which (I guess) is not that big a deal since the compiler needs to decide anyway. I just wish to be informed of the result of the decision procedure, only in case of 'no'. Denis -- _ vita es estrany spir.wikidot.com
Re: inlining or not inlining...
On 02/11/2011 10:08 PM, bearophile wrote: spir: About inline, note that no-one asks for information on every potentially inlinable func, blindly. But having a way to know that about /this/ func one is wondering about would be great: just append @inline to it, recompile, et voilà! you know :-) (provided you can interpret the output, but it's another story) If that's your purpose then I suggest a name as @isInlinable :-) Fine :-) denis -- _ vita es estrany spir.wikidot.com
Re: Stupid little iota of an idea
On Fri, 11 Feb 2011 17:03:13 -0500, Nick Sabalausky a@a.a wrote: Steven Schveighoffer schvei...@yahoo.com wrote in message news:op.vqqsdxcaeav7ka@steve-laptop... According to the book The Design of Everyday Things the design of that function name is wrong, it's not your fault and it's not because it was 3am. When many people make mistakes with regards to the design of something it's *always* the design's fault, never the human's fault. I love that book, I wish more software engineers used it. One of my favorite classes at college. I probably would have liked that class if my college hadn't degenerated it into nothing more than group VB6 project. Bleh. I always hated group projects. /me Does not play well with others. From what I can remember, I did not have to write any code in that class. I had to pick an item in my household and write a paper on what features were well designed and which ones were poor. I have a feeling you would have loved that ;) -Steve
Re: Stupid little iota of an idea
Nick Sabalausky: I really should actually read that book. Donald Norman is not a genius, he seems to lack both engineering knowledge and classic literary culture, but despite this he has the right mindset to explore the world, and he does look a lot at the world and its things, so he ends up saying many interesting things. I suggest reading all the books written by him :-) Bye, bearophile
Re: Stupid little iota of an idea
Steven Schveighoffer schvei...@yahoo.com wrote in message news:op.vqrewtcfeav7ka@steve-laptop... On Fri, 11 Feb 2011 17:03:13 -0500, Nick Sabalausky a@a.a wrote: Steven Schveighoffer schvei...@yahoo.com wrote in message news:op.vqqsdxcaeav7ka@steve-laptop... According to the book The Design of Everyday Things the design of that function name is wrong, it's not your fault and it's not because it was 3am. When many people make mistakes with regards to the design of something it's *always* the design's fault, never the human's fault. I love that book, I wish more software engineers used it. One of my favorite classes at college. I probably would have liked that class if my college hadn't degenerated it into nothing more than group VB6 project. Bleh. I always hated group projects. /me Does not play well with others. From what I can remember, I did not have to write any code in that class. I had to pick an item in my household and write a paper on what features were well designed and which ones were poor. I have a feeling you would have loved that ;) lol, that's a good point, I probably could have had a lot of fun with that :)
Re: Stupid little iota of an idea
foobar f...@bar.com wrote in message news:ij3cal$cee$1...@digitalmars.com... Andrei Alexandrescu Wrote: I don't find the name iota stupid. Andrei Of course _you_ don't. However practically all the users _do_ find it poorly named, including other developers in the project. This is the umpteenth time this comes up in the NG, and incidentally this is the only reason I know what the function does. If the users think the name is stupid then it really is. That's how usability works, and the fact that you think otherwise or that it might be more accurate mathematically is really not relevant. If you want D/Phobos to be used by other people besides yourself you need to cater for their requirements. I'd bet that most of the people looking at D's filtering even numbers example over at ( https://gist.github.com/817504 ) are thinking, "WTF is 'iota'?" (Either that or "What the hell is he converting to ASCII for?")
Re: Stupid little iota of an idea
Ary Manzana Wrote: On 2/11/11 12:15 AM, Nick Sabalausky wrote: Andrej Mitrovicandrej.mitrov...@gmail.com wrote in message news:mailman.1476.1297391467.4748.digitalmar...@puremagic.com... What the hell does to! have to do with anything. Disregard my last post, it's obviously 3 AM and I'm talking gibberish. I just meant that iota looks a lot like (spaces added for clarity) i to a. In other words, the first time I ever saw iota, I confused it for the old C function that converts an integer to an ASCII string. It may very well have been 3am for me at the time ;) You are the second one who confuses iota with itoa. Actually, the third, I confused it too. According to the book The Design of Everyday Things the design of that function name is wrong, it's not your fault and it's not because it was 3am. When many people make mistakes with regards to the design of something it's *always* the design's fault, never the human's fault. Thanks for this, I'm adding this book to my read list. :)
Re: 0nnn octal notation considered harmful
spir wrote: Just had a strange bug --in a test func!-- caused by this notation. This is due in my case to the practice (common, I guess) of pretty printing int numbers using %0nd or %0ns format, to get a nice alignment. Then, if one feeds back results into D code, they are interpreted as octal... Now, i know it: will pad with spaces instead ;-) Copying a string'ed integer is indeed not the only way this notation is bug-prone: prefixing a number with '0' should not change its value (!). Several programming languages switched to another notation; like 0onnn, which is consistent with common hex bin notations and cannot lead to misinterpretation. Such a change would be, I guess, backward compatible; and would not be misleading for C coders. This has been discussed before. There's octal!123 in Phobos if you don't like these confusing literals, but they stay because Walter likes them. -- Tomek
Re: Purity
Bruno Medeiros brunodomedeiros+spam@com.gmail wrote: In this code sample, if the optimization is applied on the second call to func, it would cause different code to be executed: the else clause instead of the then clause. Obviously this is not acceptable for an optimization, even if such code would be rare in practice. Ah yes, I see now. I could argue that 'is' is cheating. :p I believe actually, that pure is only said to promise that the returned values should be such that func(args) == func(args), not accounting for overloaded operators. -- Simen
Re: Should we have an Unimplemented Attribute?
Bruno Medeiros brunodomedeiros+spam@com.gmail wrote: I think he means that any use of an @unimplemented class should give a warning/error/other message. I think you definitely get an error when trying to use a commented out class/struct... :) Absolutely. But such a class/struct would also not show up in documentation, allMembers, and the like. Not that I mind, just saying it's not quite the same. -- Simen
Re: inlining or not inlining...
spir wrote: On 02/11/2011 08:11 PM, Walter Bright wrote: bearophile wrote: While in isolation that's a good idea, how far should it be taken? Should the compiler emit information on which variables wound up in which registers, and why? What about other of the myriad of compiler optimizations? Inlining is an important optimization, so give this information to the programmer is a good start. Register allocation is far more important than inlining. Why not give information about why a variable was not enregistered? Because we do not ask for it ;-) Actually, that is a good reason. Joke apart, I would love that too, on tiny test apps indeed. Why not? Since the decision-taking exist (and is certainly well structured around its criteria), why not allow it writing out its reasoning? About inline, note that no-one asks for information on every potentially inlinable func, blindly. But having a way to know that about /this/ func one is wondering about would be great: just append @inline to it, recompile, et voilà! you know :-) (provided you can interpret the output, but it's another story) side-note If I were a compiler writer, no doubt my code would hold snippets dedicated to spitting out such information, on need. For myself, as a testing tool to check the code really does what I mean. This is integral part of my coding style. Else, how can I know? Checking the ASM indeed can tell you, but it seems to me far more heavy complicated than following the trace of a reasoning, and tells nothing about where/why/how the encoded logic fails. Then, if this can be useful to users... side-note Denis
Re: 0nnn octal notation considered harmful
On 02/11/2011 10:54 PM, Nick Sabalausky wrote: spir denis.s...@gmail.com wrote in message news:mailman.1504.1297453559.4748.digitalmar...@puremagic.com... Hello, Just had a strange bug --in a test func!-- caused by this notation. This is due in my case to the practice (common, I guess) of pretty printing int numbers using %0nd or %0ns format, to get a nice alignment. Then, if one feeds back results into D code, they are interpreted as octal... Now, i know it: will pad with spaces instead ;-) Copying a string'ed integer is indeed not the only way this notation is bug-prone: prefixing a number with '0' should not change its value (!). Several programming languages switched to another notation; like 0onnn, which is consistent with common hex bin notations and cannot lead to misinterpretation. Such a change would be, I guess, backward compatible; and would not be misleading for C coders. Yea, octal!nnn has already made the exceedingly rare uses of octal literals completely obsolete *long* ago. I know I for one am getting really tired of this completely unnecessary landmine in the language continuing to exist. The heck with std.xml, if anything, *this* needs nuked. If silently changed behavior is a problem, then just make it an error. Done. Minefield cleared. Thank you (and bearophile, IIRC) for the tip about octal!nnn. Useless for me, unfortunately, since my problem (as you suggest) is not with how to write them, but the sheer existence of this $%*£µ#! notation ;-) What we need is a time bomb sent to ~ 1973. Denis -- _ vita es estrany spir.wikidot.com
Re: inlining or not inlining...
bearophile wrote: There are two groups of register allocation algorithms: the very fast ones, and the more precise ones. You even have perfect ones. Experience has shown that the difference in runtime performance between the precise algorithms and the perfect ones is often about 5% (this was measured on LLVM). This means that with LLVM you will probably never see a significant improvement in automatic register allocation, because it's as good as it gets; it's kind of a solved problem. I've seen those papers on precise or even perfect register allocation. They're only precise within a certain set of assumptions the compiler makes about usage patterns. Those assumptions are just that: assumptions. For example, assumptions are made about how many times this loop executes relative to that loop. An asm programmer worth his salt can do a better job, because he knows what the usage patterns are. Of course it's only rarely worth his while to do so, but nevertheless, calling such an algorithm perfect is misleading.
Re: DVCS vs. Subversion brittleness (was Re: Moving to D)
Bruno Medeiros wrote: But seriously, even if I am connected to the Internet I cannot code with my laptop alone; I need it connected to a monitor, as well as a mouse (and preferably a keyboard as well). I found I can't code on my laptop anymore; I am too used to, and too needful of, a large screen.
Re: Stupid little iota of an idea
On 2/11/11 2:46 AM, Jacob Carlborg wrote: On 2011-02-10 23:05, Andrei Alexandrescu wrote: On 2/10/11 9:47 AM, spir wrote: Even then, no one forces D2 to blindly reproduce stupid naming from APL/C++, I guess. Or what? I don't find the name iota stupid. Andrei Of course you don't think it's stupid, you named it. It's starting to look more and more like you are the only one who likes it. How about we vote on it? Sure. Have at it! I'll be glad to comply with the vote. Andrei
Re: new documentation format for std.algorithm
On 2/2/11, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: Following ideas and advice from this newsgroup, I have a draft at http://d-programming-language.org/cutting-edge/phobos/std_algorithm.html Andrei You know, we could use the same thing for the language reference. E.g.: http://www.digitalmars.com/d/2.0/template.html We could have a table at the top linking to these sections: Explicit Template Instantiation; Instantiation Scope; Argument Deduction; Template Type Parameters; Specialization; Template This Parameters; Template Value Parameters; Template Alias Parameters; Template Tuple Parameters; Template Parameter Default Values; Implicit Template Properties; Template Constructors; Class Templates; Struct, Union, and Interface Templates; Function Templates; Function Templates with Auto Ref Parameters; Recursive Templates; Template Constraints; Limitations. That is a lot of sections, and they're not being linked to right now, so they're hard to find without a search.
Re: Stupid little iota of an idea
On 2/11/11 7:07 AM, foobar wrote: Andrei Alexandrescu Wrote: I don't find the name iota stupid. Andrei Of course _you_ don't. However, practically all the users _do_ find it poorly named, including other developers in the project. This is the umpteenth time this comes up in the NG, and incidentally it is the only reason I know what the function does. If the users think the name is stupid then it really is. That's how usability works, and the fact that you think otherwise, or that it might be more accurate mathematically, is really not relevant. If you want D/Phobos to be used by other people besides yourself you need to cater to their requirements. Not all users dislike iota, and besides, arguments ad populum are fallacious. Iota rocks. But have at it - vote away, and I'll be glad if a better name for iota comes about. Andrei
Re: inlining or not inlining...
Register allocation is far more important than inlining. Why not give information about why a variable was not enregistered? I am sorry, Walter, but your stance on this is more political than practical; it is not like you. Sounds like you secured a professorship! :)
Re: Stupid little iota of an idea
On 2/11/11 8:32 AM, Daniel Gibson wrote: On 10.02.2011 12:40, spir wrote: Certainly, because it's /highly/ important for a community of programmers to share the same culture. And names are the main support vehicle for this culture. Denis (For this reason, I stopped aliasing size_t and size_diff_t to Ordinal and Cardinal ;-) I use uint everywhere) This will cause trouble on 64-bit systems, because there size_t is ulong. Cheers, - Daniel Yah... big problems in Phobos' 64-bit port. Andrei
Re: std.xml should just go
On 2/11/11 8:31 AM, Bruno Medeiros wrote: On 04/02/2011 16:14, Eric Poggel wrote: On 2/3/2011 10:20 PM, Andrei Alexandrescu wrote: At this point there is no turning back from ranges, unless we come up with an even better idea (I discussed one with Walter but we're not pursuing it yet). Care to elaborate on the new idea? Or at least a quick summary so we're not all left wondering? That comment left me curious as well... The discussed idea went as follows. Currently we have r.front and r.back for accessing the first and last elements, and r[n] for an arbitrary element. Plus, r[n] is extremely flexible (opIndex, opIndexAssign, opIndexOpAssign... an awesome level of control... just perfect). So then I thought, how about unifying everything? Imagine we gave up on r.front and r.back. Poof. They disappeared. Now we define two entities, first and last, such that r[first] and r[last] refer to the first and last elements in the range. Now we have this situation: input and forward ranges statically allow only r[first]; bidirectional ranges allow r[first] and r[last]; random-access ranges allow r[first], r[last], and r[n] for integral n. Now we have a unified way of referring to elements in ranges. Walter's excellent follow-up is that the compiler could use lowering such that you don't even need to use first and last: you'd just use r[0] and r[$ - 1] and the compiler would take care of these special cases. Advantages: unified syntax, increased flexibility with opIndexAssign and opIndexOpAssign. Disadvantages: breaks all range-oriented code out there. Andrei