Re: VLERange: a range in between BidirectionalRange and RandomAccessRange
On 18/01/11 16:46, Andrei Alexandrescu wrote:
> On 1/17/11 9:48 PM, Michel Fortin wrote:
>> On 2011-01-17 17:54:04 -0500, Michel Fortin michel.for...@michelf.com said:
>>> More seriously, you have four choices: 1. code unit; 2. code point; 3. grapheme; 4. require the client to state explicitly which kind of 'character' he wants; 'character' being an overloaded word, it's reasonable to ask for disambiguation.
>> This makes me think of what I did with my XML parser after you made code points the element type for strings. Basically, the parser now uses 'front' and 'popFront' whenever it needs to get the next code point, but most of the time it uses 'frontUnit' and 'popFrontUnit' instead (which I had to add) when testing for or skipping an ASCII character is sufficient. This way I avoid a lot of unnecessary decoding of code points.
>> For this to work, the same range must let you skip either a unit or a code point. If I were using a separate range with a call to toDchar or toCodeUnit (or toGrapheme if I needed to check graphemes), it wouldn't have helped much, because the new range would essentially become a new slice independent of the original, so you can't interleave "I want to advance by one unit" with "I want to advance by one code point". So perhaps the best interface for strings would be to provide multiple range-like interfaces that you can use at the level you want. I'm not sure if this is a good idea, but I thought I should at least share my experience.
> Very insightful. Thanks for sharing. Code it up and make a solid proposal!
> Andrei

How does this differ from Steve Schveighoffer's string_t, minus the indexing and slicing of code points, plus a bidirectional grapheme range?
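For readers who haven't seen Michel's parser: the idea he describes can be sketched as a single range that exposes both a code-unit view and a code-point view over the same underlying string. This is a minimal illustration, not his actual code; the names frontUnit/popFrontUnit are the additions he mentions:

```d
// Sketch of a string range with both code-unit and code-point primitives.
import std.utf : decode, stride;

struct DualRange
{
    string s;

    bool empty() { return s.length == 0; }

    // Code-point view, as in Phobos' string ranges.
    dchar front() { size_t i = 0; return decode(s, i); }
    void popFront() { s = s[stride(s, 0) .. $]; }

    // Code-unit view: the proposed frontUnit/popFrontUnit additions.
    char frontUnit() { return s[0]; }
    void popFrontUnit() { s = s[1 .. $]; }
}

unittest
{
    auto r = DualRange("<é>");
    assert(r.frontUnit == '<'); // cheap ASCII test, no decoding
    r.popFrontUnit();           // advance by one unit
    assert(r.front == 'é');     // full decode only when needed
    r.popFront();               // skips both UTF-8 code units of 'é'
    assert(r.frontUnit == '>');
}
```

Because both views advance the same slice `s`, unit-level and point-level stepping can be freely interleaved, which is exactly what a separate converted range cannot do.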
Re: repeat
On 18.01.2011 01:24, spir wrote:
> On 01/17/2011 07:57 PM, Daniel Gibson wrote:
>> IMHO * (multiply) is not good, because in theoretical computer science multiplication is used to concatenate two words, and thus concatenating a word with itself n times is word^n (pow(word, n) in mathematical terms).
> Weird. Excuse my ignorance, but how can multiply even mean concat? How is this written concretely (example welcome)? Do theoretical computer science people find this syntax a Good Thing?
> Denis
> -- vita es estrany spir.wikidot.com

Dunno, I'm not a theoretical computer science person, but everyone who has studied computer science and hasn't completely forgotten the theoretical classes may be confused if multiply is used for repetition :)
Cheers, - Daniel
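To answer Denis's question with a concrete example (mine, not from the thread): in formal language theory the "product" of two words is their concatenation, so the n-th "power" of a word is just iterated concatenation. In D that convention would read like this (power is a hypothetical helper, not a standard function):

```d
// "Power" notation from formal language theory, sketched in D:
// w^0 is the empty word, and w^(n+1) = w^n ~ w  (~ is D's concatenation).
string power(string w, uint n)
{
    string result;
    foreach (i; 0 .. n)
        result ~= w; // repetition is just iterated concatenation
    return result;
}

unittest
{
    // "ab"^3 = "ababab": the word "multiplied" by itself three times.
    assert(power("ab", 3) == "ababab");
    assert(power("ab", 0) == "");
}
```

This is why a `*` operator for string repetition can confuse people with that background: to them, `word * word` already means `wordword`.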
Re: Google Summer of Code and the like
On 18.01.2011 06:14, Gary Whatmore wrote:
> new2d Wrote:
>> Trass3r Wrote:
>>> new2d's recent post made me think about this. Couldn't we try to get D development sponsored by Google SoC or something similar?
>> You should seriously consider whether D is the next big language. It's hard to believe D works without major funding. All the other high-performance languages and projects are getting multimillion-dollar grants. There are also half a dozen books about D, but when I tried to find papers about D on CiteSeer, I came back empty-handed. Was the conference in the past associated with ACM?
> Probably answering to a troll, but you see, D is a pragmatic language. We don't write academic nonsense. All the papers are there on the web. See the Dr. Dobb's page and the links in the left bar on the digitalmars.com site. See Bartosz's blog. Agile software development produces very small amounts of document deliverables. What you see is rapid prototyping in action. A single man made an earth-shattering new language for serious computing tasks. If we spent all day writing papers, there simply would be no D. Later, when other co-operative members found the language, a git repository was made for distributed teamwork. There is also a D wiki project. These all provide an equal position to all community members. We could write as much documentation as we want together, but we've chosen developing code instead. I don't know about the conference; it took place before I found D. They probably had connections to Amazon. You might find some old (outdated) slides on the web.

AFAIK Andrei used D in his doctoral thesis; there was also that guy who wrote an advanced garbage collector for D as his master's thesis... so D *is* used in academia as well.
Cheers, - Daniel
Re: Google Summer of Code and the like
On 18.01.2011 00:45, Trass3r wrote:
> new2d's recent post made me think about this. Couldn't we try to get D development sponsored by Google SoC or something similar?

One problem may be that DMD's back end is not open source. But for GDC/LDC/Phobos, or maybe even only the front end that is then used in all compilers, it may be possible?
Cheers, - Daniel
Re: Google Summer of Code and the like
Gary Whatmore wrote:
> new2d Wrote:
>> Trass3r Wrote:
>>> Couldn't we try to get D development sponsored by Google SoC or something similar?
>> You should seriously consider whether D is the next big language. It's hard to believe D works without major funding. [...] Was the conference in the past associated with ACM?
> Probably answering to a troll, but you see, D is a pragmatic language. We don't write academic nonsense. [...] You might find some old (outdated) slides on the web.

Definitely not a troll! Here's the reality of D: it's a very ambitious language, with a small development team. We have no large-scale corporate backing. We believe we have very strong fundamentals, but the language implementation is essentially in an advanced beta stage. The standard library is about halfway through the beta stage. The toolchain is far from maturity. So, I agree, we need to attract a major sponsor. Until then, we're doing the best we can.
We've come a very long way in the last year.
Re: What Makes A Programming Language Good
On Tue, 18 Jan 2011 07:20:56 +0200, Walter Bright newshou...@digitalmars.com wrote:
> http://urbanhonking.com/ideasfordozens/2011/01/18/what-makes-a-programming-language-good/

So, why do users still get a scary linker error when they try to compile a program with more than one module? IMO, sticking to the C-ism of one object file at a time and depending on external build tools / makefiles is the biggest mistake DMD made in this regard. Practically everyone to whom I recommended trying D hit this obstacle. rdmd is nice, but I see no reason why this shouldn't be in the compiler. Think of the time wasted by build-tool authors (bud, rebuild, xfbuild and others, and now rdmd), which could have been put to better use if this were handled by the compiler, which could do it much more easily (until relatively recently it was very hard to track dependencies correctly).

-- Best regards, Vladimir <vladi...@thecybershadow.net>
Re: What Makes A Programming Language Good
Vladimir Panteleev wrote:
> On Tue, 18 Jan 2011 07:20:56 +0200, Walter Bright newshou...@digitalmars.com wrote:
>> http://urbanhonking.com/ideasfordozens/2011/01/18/what-makes-a-programming-language-good/
> So, why do users still get a scary linker error when they try to compile a program with more than one module?

What is that message?

> IMO, sticking to the C-ism of one object file at a time and dependency on external build tools / makefiles is the biggest mistake DMD made in this regard. Practically everyone to whom I recommended trying D hit this obstacle. rdmd is nice, but I see no reason why this shouldn't be in the compiler. [...]

dmd can build entire programs with one command:

dmd file1.d file2.d file3.d ...etc...
Re: repeat
Andrei Alexandrescu wrote:
> I want to generalize the functionality in string's repeat and move it outside std.string. There is an obvious semantic clash here. If you say repeat("abc", 3), did you mean one string "abcabcabc" or three strings "abc", "abc", and "abc"?

Just a thought: concat(repeat("abc", 3)) yields "abcabcabc"?
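Walter's concat-of-repeat suggestion is essentially what the lazy range approach gives you: one primitive yields the three-strings reading, and a join on top yields the one-string reading. A sketch using the Phobos names that eventually won out (std.range.repeat and std.array.join, which postdate this thread):

```d
import std.array : array, join;
import std.range : repeat;

void main()
{
    // Lazily yield the string "abc" three times...
    auto r = repeat("abc", 3);
    assert(r.array == ["abc", "abc", "abc"]); // the three-strings reading

    // ...and concatenate to get the single-string reading.
    assert(r.join == "abcabcabc");
}
```

Keeping repeat lazy and element-producing means the ambiguity Andrei points out is resolved by the caller, not by the primitive.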
Re: What Makes A Programming Language Good
On Tue, 18 Jan 2011 11:11:01 +0200, Vladimir Panteleev vladi...@thecybershadow.net wrote:
> a) does not indicate what exactly is wrong (module not passed to linker, not that the linker knows that)

By the way, disregarding extern(C) declarations et cetera, the compiler has the ability to detect when such linker errors will appear and take appropriate measures (e.g. suggest using the -c flag, passing the appropriate .d or .obj file on its command line, or using a build tool).

-- Best regards, Vladimir <vladi...@thecybershadow.net>
Re: repeat
On 18.01.2011 10:07, Walter Bright wrote:
> Andrei Alexandrescu wrote:
>> I want to generalize the functionality in string's repeat and move it outside std.string. There is an obvious semantic clash here. If you say repeat("abc", 3), did you mean one string "abcabcabc" or three strings "abc", "abc", and "abc"?
> Just a thought: concat(repeat("abc", 3)) yields "abcabcabc"?

:-)
Re: What Makes A Programming Language Good
On Tue, 18 Jan 2011 11:05:34 +0200, Walter Bright newshou...@digitalmars.com wrote:
>> So, why do users still get a scary linker error when they try to compile a program with more than one module?
> What is that message?

C:\Temp\D\Build> dmd test1.d
OPTLINK (R) for Win32  Release 8.00.8
Copyright (C) Digital Mars 1989-2010  All rights reserved.
http://www.digitalmars.com/ctg/optlink.html
test1.obj(test1)
 Error 42: Symbol Undefined _D5test21fFZv
--- errorlevel 1

1) The error message is very technical:
   a) it does not indicate what exactly is wrong (module not passed to the linker, not that the linker knows that);
   b) it does not give any indication of what the user has to do to fix it.
2) OPTLINK doesn't demangle D mangled names, when it could, and it would improve the readability of its error messages considerably. (I know not all mangled names are demangleable, but it'd be a great improvement regardless.)

> dmd can build entire programs with one command: dmd file1.d file2.d file3.d ...etc...

That doesn't scale anywhere. What if you want to use a 3rd-party library with a few dozen modules?

-- Best regards, Vladimir <vladi...@thecybershadow.net>
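The mangled name in that error is mechanically recoverable: druntime ships core.demangle, which a linker (or the user) can apply to turn the symbol back into a readable signature. A small sketch of what demangling that exact symbol looks like:

```d
import core.demangle : demangle;
import std.stdio : writeln;

void main()
{
    // _D 5test2 1f F Z v  =>  module test2, symbol f, a function (F)
    // taking no parameters (Z) and returning void (v).
    writeln(demangle("_D5test21fFZv")); // void test2.f()
}
```

Printing "void test2.f()" next to (or instead of) the raw mangled name would address point 2) directly.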
Re: What Makes A Programming Language Good
Vladimir Panteleev:
> IMO, sticking to the C-ism of one object file at a time and dependency on external build tools / makefiles is the biggest mistake DMD made in this regard.

A Unix philosophy is to create tools that do only one thing well, and rdmd uses DMD to do its job of helping compile small projects automatically. Yet the D compiler does not follow that philosophy in many situations, because it does a lot of stuff besides compiling D code: profiler, code-coverage analyser, unit tester, docs generator, JSON summary generator, and more. The D1 compiler used to have a cute literate programming feature too, of the kind that's often used by Haskell blogs. Here Walter is pragmatic: the docs generator happens to be quicker to create and maintain if it's built inside the compiler. So is it right to fold this rdmd functionality into the compiler? Is it practically useful, as in, is it going to increase rdmd speed? Folding rdmd functionality into the compiler may risk freezing the future evolution of D build tools, so it has risks/costs too.
Bye, bearophile
Too much flexibility is dangerous for large systems
Found through Reddit; similar things can be said about D2: http://kirkwylie.blogspot.com/2011/01/scala-considered-harmful-for-large.html
Bye, bearophile
Re: What Makes A Programming Language Good
Vladimir Panteleev vladi...@thecybershadow.net wrote:
>> I must stress that having a shared community-wide style to write D code helps a lot when you want to use in your program modules written by other people. Otherwise your program looks like a patchwork of wildly different styles.
> I assume you mean naming conventions and not actual code style (indentation etc.)

Likely he meant more than that. At least such is the impression I've had before. I am not vehemently opposed to such an idea, and I definitely agree that naming conventions should be observed, but I have at times had the impression that bearophile wants all aspects of code to be controlled by such a coding style.
-- Simen
Re: What Makes A Programming Language Good
Vladimir Panteleev:
> Forcing a code repository is bad.

In this case I was not suggesting to force things :-) But having a place to find reliable modules is very good.

> This is not practical.

It works in Python, Ruby and often in Perl too, so I don't agree.

> I assume you mean naming conventions and not actual code style (indentation etc.)

I meant that it's better if D code written by different people looks similar, where possible. C/C++ programmers have too much freedom where freedom is not necessary. Reducing some of that useless freedom helps improve the code ecosystem.

>> - Probably the D package system needs to be improved. Some Java people are even talking about introducing means to create superpackages. Some module system theory from ML-like languages may help here.
> Why?

- Currently D packages are not working well yet; there are bug reports on this.
- Something higher-level than packages is useful when you build very large systems.
- Module-system theory from ML-like languages offers ideas many years old that otherwise will need to be painfully re-invented, half-broken, by D language developers. Sometimes "wasting" three days reading saves you some years of pain.

> I don't think this is practical until someone writes a D interpreter.

The CTFE interpreter is already there :-)

> How would DMD become even more IDE-friendly than it already is?

- error messages that give column numbers
- folding annotations?
- less usage of string mixins and more of delegates and normal D code
- more introspection
- etc.

> I have to agree that named arguments are awesome, they make the code much more readable and maintainable in many instances.

I have not already written an enhancement request on this because until a few weeks ago I thought that named arguments merely improve the usage of functions with many arguments, so they may encourage D programmers to create more functions like this one from the Windows API:

HWND CreateWindow(
    LPCTSTR lpClassName, LPCTSTR lpWindowName, DWORD style,
    int x, int y, int width, int height,
    HWND hWndParent, HMENU hMenu, HANDLE hInstance, LPVOID lpParam);

but lately I have understood that this is not the whole truth: named arguments are useful even when your functions have just 3 arguments. They make code more readable in little script-like programs, and help avoid some mistakes in larger programs too.
Bye, bearophile
Re: What Makes A Programming Language Good
Vladimir Panteleev wrote:
> [...]
> test1.obj(test1)
>  Error 42: Symbol Undefined _D5test21fFZv
> --- errorlevel 1
>
> 1) The error message is very technical: a) does not indicate what exactly is wrong (module not passed to linker, not that the linker knows that)

There could be many reasons for the error; see:
http://www.digitalmars.com/ctg/OptlinkErrorMessages.html#symbol_undefined
which is linked from the URL listed:
http://www.digitalmars.com/ctg/optlink.html
and more directly from the FAQ:
http://www.digitalmars.com/faq.html

> b) does not give any indication of what the user has to do to fix it

The link above does give such suggestions, depending on what the cause of the error is.

> 2) OPTLINK doesn't demangle D mangled names, when it could, and it would improve the readability of its error messages considerably. (I know not all mangled names are demangleable, but it'd be a great improvement regardless)

The odd thing is that Optlink did demangle the C++ mangled names, and people actually didn't like it that much.

>> dmd can build entire programs with one command: dmd file1.d file2.d file3.d ...etc...
> That doesn't scale anywhere. What if you want to use a 3rd-party library with a few dozen modules?

Just type the filenames and library names on the command line. You can put hundreds if you like. If you do blow up the command-line processor (nothing dmd can do about that), you can put all those files in a file, say cmd, and invoke with:

dmd @cmd

The only limit is the amount of memory in your system.
Re: What Makes A Programming Language Good
On Tue, 18 Jan 2011 10:32:53 +0000 (UTC), Trass3r u...@known.com wrote:
> We must avoid having the same disastrous situation as C/C++, where everyone uses a different system: CMake, make, scons, blabla.

I agree (planning not to use a blabla build system, but waf). Otoh, I hope D2 will also be able to avoid things like this: http://cdsmith.wordpress.com/2011/01/16/haskells-own-dll-hell/

However, for now I'm more concerned to see 64-bit DMD, complete QtD (or some other workable GUI bindings), some database bindings etc. first...

Sincerely, Gour
-- Gour | Hlapicina, Croatia | GPG key: CDBF17CA
Re: What Makes A Programming Language Good
Vladimir Panteleev wrote:
> On Tue, 18 Jan 2011 12:10:25 +0200, bearophile bearophileh...@lycos.com wrote:
>> Walter: http://urbanhonking.com/ideasfordozens/2011/01/18/what-makes-a-programming-language-good/
>> It's a cute blog post. It suggests that it will be good to: Getting Code: 1) Have a central repository for D modules that is easy to use for both submitters and users.
> Forcing a code repository is bad. Let authors use anything that they're comfortable with. The repository must be nothing more than a database of metadata (general information about a package, and how to download it).

I'm pretty happy that my Fedora repositories are just a handful, most of which are set up out of the box. It's a big time saver, one of its best features. I would use / evaluate much less software if I had to read instructions and download each package manually.

>> - D code in such a repository must Just Work.
> This is not practical. The only practical way is to put that responsibility on the authors, and to encourage forking and competition.

True, though one of the cool things Gregor did back in the day with dsss was automagically run unit tests for each package in the repository and publish the results. It wasn't perfect, but it gave at least some indication.
Re: Too much flexibility is dangerous for large systems
bearophile wrote:
> Found through Reddit; similar things can be said about D2: http://kirkwylie.blogspot.com/2011/01/scala-considered-harmful-for-large.html

I think that article is bunk. Large Java programs (as related to me by corporate Java programmers) tend to be excessively complex because the language is too simple. Too often people think that with a simple language, the programs created with it must be simple. This is dead wrong. Simple languages lead to complex, incomprehensible programs.
Re: What Makes A Programming Language Good
On Tue, 18 Jan 2011 13:28:32 +0200, Walter Bright newshou...@digitalmars.com wrote:
>>> What is that message?
>> [...]
>>  Error 42: Symbol Undefined _D5test21fFZv
>> 1) The error message is very technical: a) does not indicate what exactly is wrong (module not passed to linker, not that the linker knows that)
> There could be many reasons for the error, see:

Sorry, you're missing the point. The toolchain has the ability to output a much more helpful error message (or just do the right thing and compile the whole project, which is obviously what the user intends to do 99% of the time).

> http://www.digitalmars.com/ctg/OptlinkErrorMessages.html#symbol_undefined
> which is linked from the URL listed: http://www.digitalmars.com/ctg/optlink.html
> and more directly from the FAQ: http://www.digitalmars.com/faq.html
>> b) does not give any indication of what the user has to do to fix it
> The link above does give such suggestions, depending on what the cause of the error is.

This is not nearly good enough. I can bet you that over 95% of users will Google for the error message instead. Furthermore, that webpage is very technical. Some D users (those wanting a high-performance, high-level programming language) don't even need to know what a linker is or does.

>> 2) OPTLINK doesn't demangle D mangled names, when it could, and it would improve the readability of its error messages considerably. (I know not all mangled names are demangleable, but it'd be a great improvement regardless)
> The odd thing is that Optlink did demangle the C++ mangled names, and people actually didn't like it that much.

I think we can agree that there is a significant difference between the two audiences (users of your C++ toolchain who need a high-end, high-performance C++ compiler, vs. people who want to try a new programming language). You can make it an option, or just print both the mangled and demangled names.

>>> dmd can build entire programs with one command: dmd file1.d file2.d file3.d ...etc...
>> That doesn't scale anywhere. What if you want to use a 3rd-party library with a few dozen modules?
> Just type the filenames and library names on the command line. You can put hundreds if you like. If you do blow up the command line processor (nothing dmd can do about that), you can put all those files in a file, say cmd, and invoke with: dmd @cmd. The only limit is the amount of memory in your system.

That's not what I meant - I meant it doesn't scale as far as user effort is concerned. There is no reason why D should force users to maintain response files, makefiles, etc. D (the language) doesn't need them, and nor should the reference implementation.

-- Best regards, Vladimir <vladi...@thecybershadow.net>
Re: What Makes A Programming Language Good
On Tue, 18 Jan 2011 13:27:56 +0200, bearophile bearophileh...@lycos.com wrote:
>> Forcing a code repository is bad.
> In this case I was not suggesting to force things :-) But having a place to find reliable modules is very good.
>> This is not practical.
> It works in Python, Ruby and often in Perl too, so I don't agree.

I think we have a misunderstanding, then? Who ensures that the modules "just work"? If someone breaks something, are they thrown out of The Holy Repository?

>> I assume you mean naming conventions and not actual code style (indentation etc.)
> I meant that it's better if D code written by different people looks similar, where possible. C/C++ programmers have too much freedom where freedom is not necessary. Reducing some of that useless freedom helps improve the code ecosystem.

It also demotivates and alienates programmers.

> - Currently D packages are not working well yet; there are bug reports on this.
> - Something higher-level than packages is useful when you build very large systems.
> - Module-system theory from ML-like languages offers ideas many years old that otherwise will need to be painfully re-invented, half-broken, by D language developers. Sometimes wasting three days reading saves you some years of pain.

I'm curious (not arguing), can you provide examples? I can't think of any drastic improvements to the package system.

>> I don't think this is practical until someone writes a D interpreter.
> The CTFE interpreter is already there :-)

So you think the subset of D that's CTFE-able is good enough to make an interactive console that's actually useful?

-- Best regards, Vladimir <vladi...@thecybershadow.net>
Re: What Makes A Programming Language Good
On Tue, 18 Jan 2011 13:35:34 +0200, Lutger Blijdestijn lutger.blijdest...@gmail.com wrote:
> I'm pretty happy that my Fedora repositories are just a handful, most of which are set up out of the box. It's a big time saver, one of its best features. I would use / evaluate much less software if I had to read instructions and download each package manually.

I don't see how this relates to code libraries. Distribution repositories simply repackage and distribute software others have written. Having something like that for D is unrealistic.

> True, though one of the cool things Gregor did back in the day with dsss was automagically run unit tests for each package in the repository and publish the results. It wasn't perfect, but it gave at least some indication.

I think that idea is taken from CPAN. CPAN refuses to install a package if it fails its unit tests (unless you force it to).

-- Best regards, Vladimir <vladi...@thecybershadow.net>
Re: Too much flexibility is dangerous for large systems
Walter:
> Large Java programs (as related to me by corporate Java programmers) tend to be excessively complex because the language is too simple.

I agree, the original Java was too simple; the lack of things like generics and delegates increases code complexity and size. But I think that too much flexibility is also dangerous for large projects, as the article says. So I presume that for large commercial systems some intermediate point is the optimum. I think C# sits very well on this middle point :-) (C# and D purposes are not the same, so I don't expect D and C# to be equal.)
Bye, bearophile
Re: repeat
I would prefer it to work like this:

repeat("abc", 3)   -> "abcabcabc"
repeat(["abc"], 3) -> ["abc", "abc", "abc"]
repeat([1,2,3], 3) -> [1,2,3,1,2,3,1,2,3]

-- Dil D compiler: http://code.google.com/p/dil/
Re: What Makes A Programming Language Good
Vladimir Panteleev:
> I think we have a misunderstanding, then? Who ensures that the modules just work? If someone breaks something, are they thrown out of The Holy Repository?

There is no single solution to such problems. It's a matter of creating rules and a lot of work to enforce them as years pass. If you talk about Holy things you are pushing this discussion in a stupid direction.

> It also demotivates and alienates programmers.

Many programmers are able to understand the advantages of removing some unnecessary freedoms. Python has shown me that brace wars are not productive :-)

> I'm curious (not arguing), can you provide examples? I can't think of any drastic improvements to the package system.

I was talking about fixing bugs, improving robustness, maybe later adding super-packages, and generally taking a good look at the literature about the damn ML-style module systems and their theory.

> So you think the subset of D that's CTFE-able is good enough to make an interactive console that's actually useful?

The built-in interpreter needs some improvements in its memory management, and eventually it may support exceptions and some other missing things. Currently functions can't access global mutable state in the compile-time execution path, even though they don't need to be wholly pure. But in a REPL you may want to do almost everything, like mutating global variables, importing modules and opening a GUI window on the fly, etc. So currently the D CTFE interpreter is not good enough for a console, but I think it's already better than nothing (I'd like right now a console able to run D code with the current limitations of the CTFE interpreter); it will be improved, and it may even be made more flexible, to be usable both for CTFE with pure-ish functions and in a different modality for the console. This allows having a single interpreter for two purposes. Most modern video games are partially written with a scripting language, like Lua. So a third possible purpose is to allow run-time execution of code (so the program needs to compile at run time too), avoiding the need for a Lua/Python/MiniD interpreter.
Bye, bearophile
Re: repeat
On 01/18/2011 01:10 PM, Aziz K. wrote:
> I would prefer it to work like this:
> repeat("abc", 3)   -> "abcabcabc"
> repeat(["abc"], 3) -> ["abc", "abc", "abc"]
> repeat([1,2,3], 3) -> [1,2,3,1,2,3,1,2,3]

I find this consistent in that the operation keeps the nesting level constant. But then we need a good name for an operation that nests into a higher-level array ;-) What about arrayOf(T)(T something, uint count = 1)?

arrayOf("abc", 3)   -> ["abc", "abc", "abc"]
arrayOf(["abc"], 3) -> [["abc"], ["abc"], ["abc"]]
arrayOf([1,2,3], 3) -> [[1,2,3], [1,2,3], [1,2,3]]

Denis
-- vita es estrany spir.wikidot.com
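Denis's arrayOf is easy to sketch; the following is a minimal version of his proposal (the name and signature are his suggestion, not an actual Phobos API), showing that "nest one level deeper" is just "fill a fresh array with copies of the argument":

```d
// Sketch of the proposed arrayOf: wrap `something` in a new array of
// length `count`, raising the nesting level by exactly one.
T[] arrayOf(T)(T something, uint count = 1)
{
    auto result = new T[count];
    result[] = something; // slice assignment fills every slot
    return result;
}

unittest
{
    assert(arrayOf("abc", 3) == ["abc", "abc", "abc"]);
    assert(arrayOf([1, 2, 3], 2) == [[1, 2, 3], [1, 2, 3]]);
    assert(arrayOf(42) == [42]); // default count of 1: plain boxing
}
```

Note that for reference-type elements (arrays, classes) each slot holds the same reference, which may or may not be what a caller wants.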
Re: What Makes A Programming Language Good
Vladimir Panteleev wrote:
> On Tue, 18 Jan 2011 13:35:34 +0200, Lutger Blijdestijn lutger.blijdest...@gmail.com wrote:
>> I'm pretty happy that my Fedora repositories are just a handful, most of which are set up out of the box. It's a big time saver, one of its best features. I would use / evaluate much less software if I had to read instructions and download each package manually.
> I don't see how this relates to code libraries. Distribution repositories simply repackage and distribute software others have written. Having something like that for D is unrealistic.

Why? It works quite well for Ruby as well as other languages.
Re: What Makes A Programming Language Good
On Tue, 18 Jan 2011 14:30:53 +0200, bearophile bearophileh...@lycos.com wrote:
>> I think we have a misunderstanding, then? Who ensures that the modules just work? If someone breaks something, are they thrown out of The Holy Repository?
> There is no single solution to such problems. It's a matter of creating rules and a lot of work to enforce them as years pass. If you talk about Holy things you are pushing this discussion in a stupid direction.

If a single entity controls the inclusion of submissions into an important set, then there will inevitably be conflicts. Also, I still have no idea what you meant when you said that Python, Ruby and Perl do it. AFAIK their repositories are open and anyone can submit their project.

>> I'm curious (not arguing), can you provide examples? I can't think of any drastic improvements to the package system.
> I was talking about fixing bugs, improving robustness, maybe later adding super-packages, and generally taking a good look at the literature about the damn ML-style module systems and their theory.

I meant examples of why this is useful for D. (Why are you damning the ML-style module systems?)

-- Best regards, Vladimir <vladi...@thecybershadow.net>
Re: What Makes A Programming Language Good
On Tue, 18 Jan 2011 14:36:43 +0200, Lutger Blijdestijn lutger.blijdest...@gmail.com wrote:
> Vladimir Panteleev wrote:
>> I don't see how this relates to code libraries. Distribution repositories simply repackage and distribute software others have written. Having something like that for D is unrealistic.
> Why? It works quite well for Ruby as well as other languages.

Um? Maybe I don't know enough about RubyGems (I don't use Ruby, but used it once or twice for a Ruby app), but AFAIK it isn't maintained by a group of people who select and package libraries from authors' web pages; rather, it is the authors who publish their libraries directly on RubyGems.

-- Best regards, Vladimir <vladi...@thecybershadow.net>
Re: Too much flexibility is dangerous for large systems
On 01/18/2011 12:41 PM, Walter Bright wrote: bearophile wrote: Found through Reddit, similar things can be said about D2: http://kirkwylie.blogspot.com/2011/01/scala-considered-harmful-for-large.html I think that article is bunk. Large Java programs (as related to me by corporate Java programmers) tend to be excessively complex because the language is too simple. Too often people think that with a simple language, the programs created with it must be simple. This is dead wrong. Simple languages lead to complex, incomprehensible programs. Oh, how true! Think of Lisp, for instance, probably one of the simplest languages ever. This simplicity, precisely, forces one to create tons of abstraction levels just to define notions not present in the language --due to that simplicity-- but absolutely needed to escape too-low-level programming. This is on the semantic side. On the syntactic one, all of these custom notions look the same, namely (doSomethingWith args...) (*), instead of each having a distinct look helping the reader decode the code. Great! (if (> IQ 150) findAnotherPL haveFunWithLISP) On the other hand: which notions and distinctions should be defined in a language? D2 may have far too many in my opinion, but which ones should stay, and why? Denis (*) and if you're happy, actual collection lists look like [e1 e2 e3], not (e1 e2 e3) _ vita es estrany spir.wikidot.com
Re: What Makes A Programming Language Good
dmd @cmd The only limit is the amount of memory in your system. That's not what I meant - I meant it doesn't scale as far as user effort is concerned. There is no reason why D should force users to maintain response files, make files, etc. D (the language) doesn't need them, and nor should the reference implementation. I have to second that. Your main.d imports abc.d which, in turn, imports xyz.d. Why can't the compiler traverse this during compilation in order to find all relevant modules and compile them if needed? I imagine such a compiler could also do some interesting optimisations based on its greater perspective. The single file as a compilation unit seems a little myopic to me. Its reasons are historic, I bet.
Re: What Makes A Programming Language Good
On Tue, 18 Jan 2011 14:47:29 +0200, Jim bitcir...@yahoo.com wrote: I imagine such a compiler could also do some interesting optimisations based on its greater perspective. Compiling the entire program at once opens the door to much more than just optimizations. You could have virtual templated methods, for one. -- Best regards, Vladimir <vladi...@thecybershadow.net>
Re: VLERange: a range in between BidirectionalRange and RandomAccessRange
On 01/18/2011 04:48 AM, Michel Fortin wrote: On 2011-01-17 17:54:04 -0500, Michel Fortin michel.for...@michelf.com said: More seriously, you have four choice: 1. code unit 2. code point 3. grapheme 4. require the client to state explicitly which kind of 'character' he wants; 'character' being an overloaded word, it's reasonable to ask for disambiguation. This makes me think of what I did with my XML parser after you made code points the element type for strings. Basically, the parser now uses 'front' and 'popFront' whenever it needs to get the next code point, but most of the time it uses 'frontUnit' and 'popFrontUnit' instead (which I had to add) when testing for or skipping an ASCII character is sufficient. This way I avoid a lot of unnecessary decoding of code points. For this to work, the same range must let you skip either a unit or a code point. If I were using a separate range with a call to toDchar or toCodeUnit (or toGrapheme if I needed to check graphemes), it wouldn't have helped much because the new range would essentially become a new slice independent of the original, so you can't interleave I want to advance by one unit with I want to advance by one code point. So perhaps the best interface for strings would be to provide multiple range-like interfaces that you can use at the level you want. I'm not sure if this is a good idea, but I thought I should at least share my experience. This looks like a very interesting approach. And clear. I guess range synchronisation would be based on an internal lowest-level (code unit) index. Then, you also need internal validity-checking and/or offsetting routines when a higher-level range is used after a lower-level one has been used. (I mean, e.g., to ensure start-of-code-point after a code-unit popFront, or throw an error.) Also, how to avoid duplicating many operational functions (e.g. find a given slice) for each level? Denis _ vita es estrany spir.wikidot.com
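Michel's dual-level scheme can be sketched with free functions over a string slice. The frontUnit/popFrontUnit names are the ones he describes; frontPoint/popFrontPoint are hypothetical stand-ins for the decoding level, built on std.utf.decode (a sketch, not his parser's actual code):

```d
import std.utf : decode;

// Unit level: no decoding, O(1).
char frontUnit(string s) { assert(s.length > 0); return s[0]; }
void popFrontUnit(ref string s) { assert(s.length > 0); s = s[1 .. $]; }

// Code-point level, over the very same slice.
dchar frontPoint(string s) { size_t i = 0; return decode(s, i); }
void popFrontPoint(ref string s) { size_t i = 0; decode(s, i); s = s[i .. $]; }

void main()
{
    // Interleave unit-level skipping (ASCII markup) with
    // code-point-level reading (text content), as in the XML parser case.
    string s = "<é>";
    assert(s.frontUnit == '<');
    s.popFrontUnit();             // skip the ASCII '<' without decoding
    assert(s.frontPoint == 'é');  // decode only where actually needed
    s.popFrontPoint();
    assert(s.frontUnit == '>');
    s.popFrontUnit();
    assert(s.length == 0);
}
```

The key property is that both levels advance the same slice, so unit-level and code-point-level steps can be mixed freely.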
Re: VLERange: a range in between BidirectionalRange and RandomAccessRange
On 01/18/2011 06:48 AM, Jonathan M Davis wrote: On Monday 17 January 2011 15:13:42 spir wrote: See range bug evoked above. opApply is the only workaround AFAIK. Also, ranges cannot yet provide indexed iteration like foreach(i, char c; text) {...} While it would be nice at times to be able to have an index with foreach when using ranges, I would point out that it's trivial to just declare a variable which you increment each iteration, so it's easy to get an index even when using foreach with ranges. Certainly, I wouldn't consider the lack of index with foreach and ranges a good reason to use opApply instead of ranges. There may be other reasons which make it worthwhile, but it's so trivial to get an index that the loss of range abilities (particularly the ability to use such ranges with std.algorithm) dwarfs it in importance. You are right. I fully agree, in fact. On the other hand, think of the expectations of users of a library providing iteration on naturally sequential thingies. The point is that D makes indexed iteration available elsewhere. Denis _ vita es estrany spir.wikidot.com
Re: VLERange: a range in between BidirectionalRange and RandomAccessRange
On 01/18/2011 07:11 AM, Andrei Alexandrescu wrote: On 1/17/11 11:48 PM, Jonathan M Davis wrote: On Monday 17 January 2011 15:13:42 spir wrote: See range bug evoked above. opApply is the only workaround AFAIK. Also, ranges cannot yet provide indexed iteration like foreach(i, char c; text) {...} While it would be nice at times to be able to have an index with foreach when using ranges, I would point out that it's trivial to just declare a variable which you increment each iteration, so it's easy to get an index even when using foreach with ranges. Certainly, I wouldn't consider the lack of index with foreach and ranges a good reason to use opApply instead of ranges. There may be other reasons which make it worthwhile, but it's so trivial to get an index that the loss of range abilities (particularly the ability to use such ranges with std.algorithm) dwarfs it in importance. - Jonathan M Davis It's a bit more difficult than that. When iterating a variable-length encoded range, what you need more than the current item being iterated is the physical offset reached inside the range. That's not all that difficult either as the range can always provide an extra primitive, but a bit annoying (e.g. because it makes iteration with foreach impossible if you want the index, unless you return a tuple with each step). This is a very valid point: a range's logical offset is not necessarily equal to its physical (hum) offset, even on a plain sequence. But for the case of Text it is, in fact, precisely because code points have been grouped in piles each representing a true character (grapheme). This is actually one third of the purpose of Text (the others being to ensure unique representation of each character, and to provide users with a clear interface). Thus, Jonathan's point simply applies to Text. At any rate, I agree with two things - one, we need to fix the foreach situation. 
Two, even before we find a fix, at this point committing to iteration with opApply essentially commits the iteratee to an island where all basic algorithms need to be reinvented from first principles. I agree. The situation would be different if D had not proposed indexed iteration already, and programmers would routinely count manually and/or call an extra range primitive, as you say. Upon using opApply: it works fine nevertheless, at least for a first rough implementation like in the case of Text. Reinventing basic algos is not an issue at this stage, as long as they are simple enough, and mainly for testing. (Actually, it can be an advantage in avoiding integration issues, possibly due to D's current beta stage --I mean bugs that pop up only when combining given features-- like we had e.g. with range formatValue). Andrei Denis _ vita es estrany spir.wikidot.com
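Jonathan's manual-counter workaround, as a minimal illustrative sketch (std.range.stride is just a stand-in for any non-array range, for which foreach cannot supply an index):

```d
import std.range : stride;
import std.stdio : writeln;

void main()
{
    auto r = stride([10, 20, 30, 40], 2); // yields 10, 30

    // foreach (i, e; r) does not compile for a range,
    // so maintain the index by hand:
    size_t i = 0;
    foreach (e; r)
    {
        writeln(i, ": ", e);
        ++i;
    }
}
```

As Andrei notes, this gives the logical index only; the physical offset inside a variable-length encoded range would still need an extra primitive.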
Re: What Makes A Programming Language Good
Vladimir Panteleev wrote: On Tue, 18 Jan 2011 14:36:43 +0200, Lutger Blijdestijn lutger.blijdest...@gmail.com wrote: Vladimir Panteleev wrote: On Tue, 18 Jan 2011 13:35:34 +0200, Lutger Blijdestijn lutger.blijdest...@gmail.com wrote: I'm pretty happy that my Fedora repositories are just a handful, most of which are setup out of the box. It's a big time saver, one of it's best features. I would use / evaluate much less software if I had to read instructions and download each package manually. I don't see how this relates to code libraries. Distribution repositories simply repackage and distribute software others have written. Having something like that for D is unrealistic. Why? It works quite well for Ruby as well as other languages. Um? Maybe I don't know enough about RubyGems (I don't use Ruby but used it once or twice for a Ruby app) but AFAIK it isn't maintained by a group of people who select and package libraries from authors' web pages, but it is the authors who publish their libraries directly on RubyGems. Aha, I've been misunderstanding you all this time, thinking you were arguing against the very idea of standard repository and package *format*. Then I agree, I also prefer something more decentralized.
Re: VLERange: a range in between BidirectionalRange and RandomAccessRange
On 2011-01-18 01:16:13 -0500, Andrei Alexandrescu seewebsiteforem...@erdani.org said: On 1/17/11 9:48 PM, Michel Fortin wrote: On 2011-01-17 17:54:04 -0500, Michel Fortin michel.for...@michelf.com said: More seriously, you have four choice: 1. code unit 2. code point 3. grapheme 4. require the client to state explicitly which kind of 'character' he wants; 'character' being an overloaded word, it's reasonable to ask for disambiguation. This makes me think of what I did with my XML parser after you made code points the element type for strings. Basically, the parser now uses 'front' and 'popFront' whenever it needs to get the next code point, but most of the time it uses 'frontUnit' and 'popFrontUnit' instead (which I had to add) when testing for or skipping an ASCII character is sufficient. This way I avoid a lot of unnecessary decoding of code points. For this to work, the same range must let you skip either a unit or a code point. If I were using a separate range with a call to toDchar or toCodeUnit (or toGrapheme if I needed to check graphemes), it wouldn't have helped much because the new range would essentially become a new slice independent of the original, so you can't interleave I want to advance by one unit with I want to advance by one code point. So perhaps the best interface for strings would be to provide multiple range-like interfaces that you can use at the level you want. I'm not sure if this is a good idea, but I thought I should at least share my experience. Very insightful. Thanks for sharing. Code it up and make a solid proposal! What I use right now is this (see below). I'm not sure what would be a good name for it though. The expectation is that I'll get either an ASCII char or something out of ASCII range if it isn't ASCII. 
The abstraction doesn't seem very 'solid' to me, in the sense that I can't see how it'd apply to ranges other than strings, so it's only useful for strings (the character array kind), and it's only useful as a workaround since you made ElementType!(char[]) a dchar. Well, any range returning char, dchar, wchar could map frontUnit to front and popFrontUnit to popFront to keep things working, but it makes the optimization rather pointless. I don't really have an idea where to go from here.

char frontUnit(string input) {
    assert(input.length > 0);
    return input[0];
}

wchar frontUnit(wstring input) {
    assert(input.length > 0);
    return input[0];
}

dchar frontUnit(dstring input) {
    assert(input.length > 0);
    return input[0];
}

void popFrontUnit(ref string input) {
    assert(input.length > 0);
    input = input[1 .. $];
}

void popFrontUnit(ref wstring input) {
    assert(input.length > 0);
    input = input[1 .. $];
}

void popFrontUnit(ref dstring input) {
    assert(input.length > 0);
    input = input[1 .. $];
}

version (unittest) {
    import std.string : front, popFront;
}

unittest {
    string test = "été";
    assert(test.length == 5);

    string test2 = test;
    assert(test2.front == 'é');
    test2.popFront();
    assert(test2.length == 3); // removed 'é', which is two UTF-8 code units

    string test3 = test;
    assert(test3.frontUnit == "é"c[0]);
    test3.popFrontUnit();
    assert(test3.length == 4); // removed first half of 'é', which is one UTF-8 code unit
}

-- Michel Fortin michel.for...@michelf.com http://michelf.com/
Re: VLERange: a range in between BidirectionalRange and RandomAccessRange
On 01/18/2011 03:52 AM, Andrei Alexandrescu wrote: On 1/17/11 5:13 PM, spir wrote: On 01/17/2011 07:57 PM, Andrei Alexandrescu wrote: * Line 130: representing a text as a dchar[][] has its advantages but major efficiency issues. To be frank I think it's a disaster. I think a representation building on UTF strings directly is bound to be vastly better. I don't understand your point. Where is the difference with D's builtin types, then? Unfortunately I won't have much time to discuss all these points, but this is a simple one: using dchar[][] wastes memory and time. You need to build on a flatter representation. Don't confuse the abstraction you are building with its underlying representation. The difference between your abstraction and char[]/wchar[]/dchar[] (which I strongly recommend you to build on) is that the abstractions offer different, higher-level primitives that the representation doesn't. I think it is needed to repeat again the following: Text in my view (or whatever variant solution to work correctly with universal text) is _not_ intended as a basic string type, even less a default one. If programmers can guarantee all their app's input will ever hold single-codepoint characters only, _or_ if they just pass pieces of text around without manipulation, then such a tool is big overkill. It has a time cost at Text construction time, which I consider as an investment. It also has some space and time cost for operations, which should be only slightly relevant compared to the speed offered by the simple fact that routines can then operate just (actually, nearly) like with historic charsets. Indexing is just normal O(1) indexing, possibly plus producing the result. Not O(n) across the source with building piles along the way. (1000X slower, 100X slower?) Counting is just O(n) with mini-array compares, not building normalising piles across the whole code sequence. (10X, 100X slower?) 
Let me repeat again: if anyone in this community wants to put work in a forward range that iterates one grapheme at a time, that work would be very valuable because it will allow us to experiment with graphemes in a non-disruptive way while benefiting of a host of algorithms. ByGrapheme and friends will help more than defining new string types. Right. I understand your point-of-view, esp non-disruptive. But then, how to avoid the possibly huge inefficiency evoked above? We have no true perf numbers yet, right, for any alternative to Text's approach. But for this reason we also should not randomly speak of this approach's space time costs. Compared to what? Denis _ vita es estrany spir.wikidot.com
Re: repeat
Okay, so your arrayOf() function could be written as:

auto arrayOf(T)(T x, uint n) {
    return repeat([x], n);
}

It only needs to wrap the argument inside another array literal.
Re: What Makes A Programming Language Good
Jim wrote: Why can't the compiler traverse this during compilation in order to find all relevant modules and compile them if needed? How will it find all the modules? Since modules and files don't have to have matching names, it can't assume import foo; will necessarily be found in foo.d. I use this fact a lot to get all a program's dependencies in one place. The modules don't necessarily have to be under the current directory either. It'd have a lot of files to search, which might be brutally slow. ... but, if you do want that behavior, you can get it today somewhat easily: dmd *.d, which works quite well if all the things are in one folder anyway.
Re: What Makes A Programming Language Good
Interestingly, my own experience with Ruby, a few years ago, was almost 180 degrees opposite of the blogger's. The two most frustrating aspects were documentation and deployment. The documents were sparse and useless and deployment was the hugest headache I've ever experienced, in great part due to Rubygems not working properly! They've probably improved it a lot since then, but it reinforced my long-standing belief that third party libraries are, more often than not, more trouble than they're worth anyway.
Re: What Makes A Programming Language Good
On Tue, 18 Jan 2011 15:51:58 +0200, Adam Ruppe destructiona...@gmail.com wrote: Jim wrote: Why can't the compiler traverse this during compilation in order to find all relevant modules and compile them if needed? How will it find all the modules? Since modules and files don't have to have matching names, it can't assume import foo; will necessarily be found in foo.d. I use this fact a lot to get all a program's dependencies in one place. I think this is a misfeature. I suppose you avoid using build tools and prefer makefiles/build scripts for some reason? The modules don't necessarily have to be under the current directory either. It'd have a lot of files to search, which might be brutally slow. Not if the compiler knows the file name based on the module name. ... but, if you do want that behavior, you can get it today somewhat easily: dmd *.d, which works quite well if all the things are in one folder anyway. ...which won't work on Windows, for projects with packages, and if you have any unrelated .d files (backups, test programs) in your directory (which I almost always do). -- Best regards, Vladimir <vladi...@thecybershadow.net>
Re: Proposal: Multidimensional opSlice solution
Hi All, I'm still learning D(2) and was toying around with creating a matrix class, wanting to overload opSlice for this purpose. As I can only find very limited documentation on multidimensional arrays in the TDPL book (to my big surprise, given that D should be an attractive language for numerical coding), I stumbled across this post while looking elsewhere. What was the result of this discussion in the end? Are there plans to allow opSlice to work for multidimensional arrays, or will this be done in Phobos? Many thanks, and sorry to bring up such an old post. fil Norbert Nemec Wrote: Don wrote: Multidimensional slices normally result in appallingly inefficient use of caches. Indeed, cache usage is a challenge. My general approach would be fairly conservative: give the user full control over memory layout, but do this as comfortably as possible. Provide good performance for straightforward code, but allow the user to tweak the details to improve performance. A library function that takes several arrays as input and output should allow arbitrary memory layouts, but it should also specify which memory layout is most efficient. In any case, I think that the expressiveness of multidimensional slices is worth having them even if the performance is not optimal in every case with the first generation of libraries.
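Note that multidimensional element access (as opposed to multidimensional slicing, which is what the thread asks about) already works through a multi-argument opIndex. A minimal sketch, where the Matrix type and its row-major layout are illustrative assumptions:

```d
struct Matrix
{
    double[] data;      // row-major storage
    size_t rows, cols;

    this(size_t r, size_t c)
    {
        rows = r;
        cols = c;
        data = new double[r * c];
    }

    // m[i, j] lowers to m.opIndex(i, j)
    ref double opIndex(size_t i, size_t j)
    {
        assert(i < rows && j < cols);
        return data[i * cols + j];
    }
}

void main()
{
    auto m = Matrix(2, 3);
    m[1, 2] = 4.5;
    assert(m[1, 2] == 4.5);
}
```

Slicing syntax like m[0 .. 2, 1] is the part that the discussion below concerns; element indexing like the above is uncontroversial.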
Re: What Makes A Programming Language Good
Vladimir Panteleev: I think [file/module name mismatches] is a misfeature. Maybe. 9/10 times they match anyway, but I'd be annoyed if the package names had to match the containing folder. Here's what I think might work: just use the existing import path rule. If it gets a match, great. If not, the user can always manually add the other file to the command line anyway. I suppose you avoid using build tools and prefer makefiles/build scripts for some reason? Yeah, makefiles and build scripts are adequately fit already. That is, they don't suck enough to justify the effort of getting something new. I've thought about making an automatic build+download thing myself in the past, but the old way has been good enough for me. (If I were to do it, I'd take rdmd and add a little http download facility to it. If you reference a module that isn't already there, it'd look up the path to download it from a config file, grab it, and try the compile. If the config file doesn't exist, it can grab one automatically from a central location. That way, it'd be customizable and extensible by anyone, but still just work out of the box. But, like I said, it stalled out because my classic makefile and simple scripts have been good enough for me.) ...which won't work on Windows, for projects with packages, and if you have any unrelated .d files (backups, test programs) in your directory (which I almost always do). Indeed.
Re: What Makes A Programming Language Good
On 1/18/11, Walter Bright newshou...@digitalmars.com wrote: You can put hundreds if you like. DMD can, but Optlink can't handle long arguments.
Re: What Makes A Programming Language Good
On 1/18/11, Andrej Mitrovic andrej.mitrov...@gmail.com wrote: On 1/18/11, Walter Bright newshou...@digitalmars.com wrote: You can put hundreds if you like. DMD can, but Optlink can't handle long arguments. Although now that I've read the error description I might have passed a wrong argument somehow. I'll take a look.
Re: DVCS (was Re: Moving to D)
On 18/01/11 01:09, Brad Roberts wrote: On Mon, 17 Jan 2011, Walter Bright wrote: Robert Clipsham wrote: Speaking of which, are you able to remove the The Software was not designed to operate after December 31, 1999 sentence at all, or does that require you to mess around contacting Symantec? Not that anyone reads it, it is kind of off-putting to see that over a decade later, though, for anyone who bothers reading it :P Consider it like the DNA we all still carry around for fish gills! In all seriousness, the backend license makes dmd look very strange. It threw the lawyers I consulted for a serious loop. At a casual glance it gives the impression of software that's massively out of date and out of touch with the real world. I know that updating it would likely be very painful, but is it just painful or impossible? Is it something that money could solve? I'd chip in to a fund to replace the license with something less... odd. Later, Brad Make that a nice open source license and I'm happy to throw some money at it too :) -- Robert http://octarineparrot.com/
Re: What Makes A Programming Language Good
On Tue, 18 Jan 2011 16:58:31 +0200, Adam Ruppe destructiona...@gmail.com wrote: Yeah, makefiles and build scripts are adequately fit already. Then the question is: does the time you spent writing and maintaining makefiles and build scripts exceed the time it would take you to set up a build tool? -- Best regards, Vladimir <vladi...@thecybershadow.net>
Re: VLERange: a range in between BidirectionalRange and RandomAccessRange
On 1/18/11 1:58 AM, Steven Wawryk wrote: On 18/01/11 16:46, Andrei Alexandrescu wrote: On 1/17/11 9:48 PM, Michel Fortin wrote: On 2011-01-17 17:54:04 -0500, Michel Fortin michel.for...@michelf.com said: More seriously, you have four choice: 1. code unit 2. code point 3. grapheme 4. require the client to state explicitly which kind of 'character' he wants; 'character' being an overloaded word, it's reasonable to ask for disambiguation. This makes me think of what I did with my XML parser after you made code points the element type for strings. Basically, the parser now uses 'front' and 'popFront' whenever it needs to get the next code point, but most of the time it uses 'frontUnit' and 'popFrontUnit' instead (which I had to add) when testing for or skipping an ASCII character is sufficient. This way I avoid a lot of unnecessary decoding of code points. For this to work, the same range must let you skip either a unit or a code point. If I were using a separate range with a call to toDchar or toCodeUnit (or toGrapheme if I needed to check graphemes), it wouldn't have helped much because the new range would essentially become a new slice independent of the original, so you can't interleave I want to advance by one unit with I want to advance by one code point. So perhaps the best interface for strings would be to provide multiple range-like interfaces that you can use at the level you want. I'm not sure if this is a good idea, but I thought I should at least share my experience. Very insightful. Thanks for sharing. Code it up and make a solid proposal! Andrei How does this differ from Steve Schveighoffer's string_t, subtract the indexing and slicing of code-points, plus a bidirectional grapheme range? There's no string, only range... Andrei
Re: repeat
On 1/18/11 3:07 AM, Walter Bright wrote: Andrei Alexandrescu wrote: I want to generalize the functionality in string's repeat and move it outside std.string. There is an obvious semantic clash here. If you say repeat("abc", 3), did you mean one string "abcabcabc" or three strings "abc", "abc", and "abc"? Just a thought: concat(repeat("abc", 3)) yields "abcabcabc"? Actually we already have join. Fortunately repeat in std.range is lazy, which avoids multiple memory allocations. I only need to make join accept general ranges. Question is, do we deprecate std.string.repeat? Andrei
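The two meanings can be kept apart quite naturally with today's std.range and std.array, assuming join accepts a general range of strings (which is the change mentioned above). A sketch:

```d
import std.array : array, join;
import std.range : repeat;

void main()
{
    // Three strings "abc" -- lazy, no allocation yet:
    auto three = repeat("abc", 3);
    assert(three.array == ["abc", "abc", "abc"]);

    // One string "abcabcabc", by joining them:
    string one = three.join;
    assert(one == "abcabcabc");
}
```

The laziness matters: the range of three "abc" elements costs nothing until it is materialized by array or join.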
Re: What Makes A Programming Language Good
On 1/18/11 4:32 AM, Trass3r wrote: Then I would expect the library vendor provides either a pre-compiled binary library As soon as you provide templates in your library this isn't sufficient anymore. or the means to readily generate same -- whether that means a Makefile, a script, or what have you. We must avoid having the same disastrous situation like C/C++ where everyone uses a different system, CMake, make, scons, blabla. Makefiles aren't portable (imo stuff like msys is no solution, it's a hack) and especially for small or medium-sized projects it's often enough to compile a single main file and all of its dependencies. We really need a standard, portable way to compile D projects, be it implemented in the compiler or in some tool everyone uses. dsss was kind of promising but as you know it's dead. You may add to bugzilla the features that rdmd needs to acquire. Andrei
Re: What Makes A Programming Language Good
On 1/18/11 6:36 AM, Lutger Blijdestijn wrote: Vladimir Panteleev wrote: On Tue, 18 Jan 2011 13:35:34 +0200, Lutger Blijdestijn lutger.blijdest...@gmail.com wrote: I'm pretty happy that my Fedora repositories are just a handful, most of which are setup out of the box. It's a big time saver, one of it's best features. I would use / evaluate much less software if I had to read instructions and download each package manually. I don't see how this relates to code libraries. Distribution repositories simply repackage and distribute software others have written. Having something like that for D is unrealistic. Why? It works quite well for Ruby as well as other languages. Package management is something we really need to figure out for D. Question is, do we have an expert on board (apt-get architecture, cpan, rubygems...)? Andrei
Re: VLERange: a range in between BidirectionalRange and RandomAccessRange
On 1/18/11 7:25 AM, spir wrote: On 01/18/2011 03:52 AM, Andrei Alexandrescu wrote: On 1/17/11 5:13 PM, spir wrote: On 01/17/2011 07:57 PM, Andrei Alexandrescu wrote: * Line 130: representing a text as a dchar[][] has its advantages but major efficiency issues. To be frank I think it's a disaster. I think a representation building on UTF strings directly is bound to be vastly better. I don't understand your point. Where is the difference with D's builtin types, then? Unfortunately I won't have much time to discuss all these points, but this is a simple one: using dchar[][] wastes memory and time. You need to build on a flatter representation. Don't confuse the abstraction you are building with its underlying representation. The difference between your abstraction and char[]/wchar[]/dchar[] (which I strongly recommend you to build on) is that the abstractions offer different, higher-level primitives that the representation doesn't. I think it is needed to repeat again the following: Text in my view (or whatever variant solution to work correctly with universal text) is _not_ intended as a basic string type, even less default. If programmers can guarantee all their app's input will ever hold single-codepoint characters only, _or_ if they jst pass pieces of text around without manipulation, then such a tool is big overkill. It has a time cost a Text construction time, which I consider as an investment. It has also some space time cost for operations that should be only slightly relevant compared to speed offered by the simple facts routines can then operate just (actualy nearly) like with historic charsets. Indexing is just normal O(1) indexing, possibly plus producing the result. Not O(n) across the source with building piles along the way. (1000X slower, 100X slower?) Counting is just O(n) with mini-array compares, not building normalising piles across the whole code sequence. (10X, 100X slower?) You don't provide O(n) indexing. Andrei
Re: VLERange: a range in between BidirectionalRange and RandomAccessRange
On 1/18/11 7:17 AM, Michel Fortin wrote: On 2011-01-18 01:16:13 -0500, Andrei Alexandrescu seewebsiteforem...@erdani.org said: On 1/17/11 9:48 PM, Michel Fortin wrote: On 2011-01-17 17:54:04 -0500, Michel Fortin michel.for...@michelf.com said: More seriously, you have four choice: 1. code unit 2. code point 3. grapheme 4. require the client to state explicitly which kind of 'character' he wants; 'character' being an overloaded word, it's reasonable to ask for disambiguation. This makes me think of what I did with my XML parser after you made code points the element type for strings. Basically, the parser now uses 'front' and 'popFront' whenever it needs to get the next code point, but most of the time it uses 'frontUnit' and 'popFrontUnit' instead (which I had to add) when testing for or skipping an ASCII character is sufficient. This way I avoid a lot of unnecessary decoding of code points. For this to work, the same range must let you skip either a unit or a code point. If I were using a separate range with a call to toDchar or toCodeUnit (or toGrapheme if I needed to check graphemes), it wouldn't have helped much because the new range would essentially become a new slice independent of the original, so you can't interleave I want to advance by one unit with I want to advance by one code point. So perhaps the best interface for strings would be to provide multiple range-like interfaces that you can use at the level you want. I'm not sure if this is a good idea, but I thought I should at least share my experience. Very insightful. Thanks for sharing. Code it up and make a solid proposal! What I use right now is this (see below). I'm not sure what would be a good name for it though. The expectation is that I'll get either an ASCII char or something out of ASCII range if it isn't ASCII. 
The abstraction doesn't seem very 'solid' to me, in the sense that I can't see how it'd apply to ranges other than strings, so it's only useful for strings (the character array kind), and it's only useful as a workaround since you made ElementType!(char[]) a dchar. Well, any range returning char, dchar, wchar could map frontUnit to front and popFrontUnit to popFront to keep things working, but it makes the optimization rather pointless. I don't really have an idea where to go from here. [snip] I was thinking along the lines of:

struct Grapheme {
    private string support_;
    ...
}

struct ByGrapheme {
    private string iteratee_;
    bool empty();
    Grapheme front();
    void popFront();
    // Additional funs
    dchar frontCodePoint();
    void popFrontCodePoint();
    char frontCodeUnit();
    void popFrontCodeUnit();
    ...
}

// helper function
ByGrapheme byGrapheme(string s);

// usage
string s = ...;
size_t i;
foreach (g; byGrapheme(s)) {
    writeln("Grapheme #", i, " is ", g);
    ++i;
}

We need this range in Phobos. Andrei
Re: FP equality tests
double x = 0.0; // ... code that may modify x if (x == 0.0) { ... So I test x to have the original exact value I have put inside it, but this kind of code is not so safe. Having == among FP values in C isn't a strong enough justification. What are the use cases to keep allowing == between built-in FP values in D2? It is perfectly safe if you are aware of what you are asking. We can't just judge it here with two lines, hard to see your intention. Say you have a precomputed value and after some point you want to check if it is the value you are after, you need an exact comparison here. Problem is using floating point without knowing that it is not accurate, and expecting mathematically correct results. Even if we know it, there are still unsolved problems with it. High quality collision detection and response is one example.
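The distinction drawn here can be shown concretely. In the sketch below (Python floats are IEEE-754 doubles, the same representation as D's double), comparing against an exactly stored value is reliable, while exact comparison of computed results is not:

```python
x = 0.0
# ... code that may (or may not) modify x ...
assert x == 0.0  # safe: x still holds the exact value we stored

# but exact comparison of *computed* values is fragile:
a = 0.1 + 0.2
assert not (a == 0.3)          # both sides carry rounding error
assert abs(a - 0.3) < 1e-9     # a tolerance-based test is usually what you want
```

This is the "perfectly safe if you are aware of what you are asking" case: the first comparison asks whether a stored bit pattern is unchanged, the second asks whether two rounded computations happened to round identically.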
Re: VLERange: a range in between BidirectionalRange and RandomAccessRange
On 2011-01-18 11:38:45 -0500, Andrei Alexandrescu seewebsiteforem...@erdani.org said: On 1/18/11 7:17 AM, Michel Fortin wrote: On 2011-01-18 01:16:13 -0500, Andrei Alexandrescu seewebsiteforem...@erdani.org said: On 1/17/11 9:48 PM, Michel Fortin wrote: On 2011-01-17 17:54:04 -0500, Michel Fortin michel.for...@michelf.com said: More seriously, you have four choices: 1. code unit 2. code point 3. grapheme 4. require the client to state explicitly which kind of 'character' he wants; 'character' being an overloaded word, it's reasonable to ask for disambiguation. This makes me think of what I did with my XML parser after you made code points the element type for strings. Basically, the parser now uses 'front' and 'popFront' whenever it needs to get the next code point, but most of the time it uses 'frontUnit' and 'popFrontUnit' instead (which I had to add) when testing for or skipping an ASCII character is sufficient. This way I avoid a lot of unnecessary decoding of code points. For this to work, the same range must let you skip either a unit or a code point. If I were using a separate range with a call to toDchar or toCodeUnit (or toGrapheme if I needed to check graphemes), it wouldn't have helped much because the new range would essentially become a new slice independent of the original, so you can't interleave "I want to advance by one unit" with "I want to advance by one code point". So perhaps the best interface for strings would be to provide multiple range-like interfaces that you can use at the level you want. I'm not sure if this is a good idea, but I thought I should at least share my experience. Very insightful. Thanks for sharing. Code it up and make a solid proposal! What I use right now is this (see below). I'm not sure what would be a good name for it though. The expectation is that I'll get either an ASCII char or something out of ASCII range if it isn't ASCII.
The abstraction doesn't seem very 'solid' to me, in the sense that I can't see how it'd apply to ranges other than strings, so it's only useful for strings (the character array kind), and it's only useful as a workaround since you made ElementType!(char[]) a dchar. Well, any range returning char, dchar or wchar could map frontUnit to front and popFrontUnit to popFront to keep things working, but it makes the optimization rather pointless. I don't really have an idea where to go from here. [snip] I was thinking along the lines of: struct Grapheme { private string support_; ... } struct ByGrapheme { private string iteratee_; bool empty(); Grapheme front(); void popFront(); // Additional funs dchar frontCodePoint(); void popFrontCodePoint(); char frontCodeUnit(); void popFrontCodeUnit(); ... } // helper function ByGrapheme byGrapheme(string s); // usage string s = ...; size_t i; foreach (g; byGrapheme(s)) { writeln("Grapheme #", i, " is ", g); } We need this range in Phobos. Yes, we need a grapheme range. But that's not what my thing was about. It was about shortcutting code point decoding when it isn't necessary while still keeping the ability to decode to code points when iterating on the same range. For instance, here's a simple made up example: string s = "<hello>"; if (!s.empty && s.frontUnit == '<') s.popFrontUnit(); // skip while (!s.empty && s.frontUnit != '>') s.popFront(); // do something with each code point if (!s.empty && s.frontUnit == '>') s.popFrontUnit(); // skip assert(s.empty); Here, since I know I'm testing and skipping for '<', an ASCII character, decoding the code point is wasted time, so I skip that decoding. The problem is that this optimization can't happen with a range that abstracts things at the code point level. I can do it with strings because strings still allow you to access code units through the indexing operators, but this can't really apply to ranges of code points in general.
And parsing with a range of code units would also be a pain, because even if I'm testing for '<' for the first character, sometimes I really need to advance by code point and test for code points. One thing that might be interesting is benchmarking my XML parser by replacing every instance of frontUnit and popFrontUnit with front and popFront. That won't change the results, but it'd give us an idea of the overhead of the unnecessarily decoded code points. -- Michel Fortin michel.for...@michelf.com http://michelf.com/
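The optimization Michel describes relies on a property of UTF-8: an ASCII byte never appears inside a multi-byte sequence, so one can scan at the code-unit level and decode only what is actually needed. A rough Python sketch of the same idea (the helper name extract_quoted is made up for illustration):

```python
def extract_quoted(data: bytes) -> str:
    """Find the substring between two ASCII '"' delimiters by scanning
    raw UTF-8 bytes, decoding code points only for the payload."""
    # byte-level scan: safe because no UTF-8 multi-byte sequence
    # ever contains an ASCII byte such as 0x22 ('"')
    start = data.index(b'"') + 1
    end = data.index(b'"', start)
    # decode to code points only for the part we actually need
    return data[start:end].decode("utf-8")

assert extract_quoted('"héllo"'.encode("utf-8")) == "héllo"
```

The byte-level index calls play the role of frontUnit/popFrontUnit here; decode() is the only place code points are materialized.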
Re: repeat
On Mon, 17 Jan 2011 20:53:33 +0200, Adam Ruppe destructiona...@gmail.com wrote: It seems to me that you actually want two separate functions: repeat("abc", 3) = ["abc", "abc", "abc"] join(repeat("abc", 3)) = "abcabcabc" A join function already exists in std.string and does this, although it expects a second argument for the word separator. I'd be ok with adding a default arg to it, the empty string, so this works. Indeed, we could do lazy the same way... have both return the lazy ranges, and if you want the array, just call array() on it. Though I could see that being annoying with strings - the array probably is the common case, but this would be more conceptually pure. As always, Adam has the solution! Or maybe overload it like: join("abc", 3)
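For comparison, Python's itertools already splits the two concerns exactly the way Adam suggests: a lazy repeat plus an explicit join:

```python
from itertools import repeat

# lazy: repeat() builds no list up front, analogous to a lazy range
assert list(repeat("abc", 3)) == ["abc", "abc", "abc"]

# separatorless join gives the "abcabcabc" meaning
assert "".join(repeat("abc", 3)) == "abcabcabc"

# and with a separator, as std.string.join expects one
assert ", ".join(repeat("abc", 3)) == "abc, abc, abc"
```

Keeping repetition and concatenation as separate primitives resolves the semantic clash: repeat answers "three strings", join(repeat) answers "one string".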
Re: What Makes A Programming Language Good
the features that rdmd needs to acquire Well something that's also missing in xfBuild is a proper way to organize different build types: (debug, release) x (x86, x64) x ... But that would require config files similar to dsss' ones I think.
Re: What Makes A Programming Language Good
Adam Ruppe Wrote: Maybe. 9/10 times they match anyway, but I'd be annoyed if the package names had to match the containing folder. This is enforced in some languages, and I like it. It'd be confusing if they didn't match when I would go to look for something. I think it would be a good idea for D to standardise this. Not only so that the compiler can traverse and compile but for all dev tools (static analysers, package managers, etc). Standardisation makes it easier to create toolchains, which I believe are essential for the growth of any language use.
Re: What Makes A Programming Language Good
On 01/18/2011 06:33 PM, Jim wrote: Adam Ruppe Wrote: Maybe. 9/10 times they match anyway, but I'd be annoyed if the package names had to match the containing folder. This is enforced in some languages, and I like it. It'd be confusing if they didn't match when I would go to look for something. I think it would be a good idea for D to standardise this. Not only so that the compiler can traverse and compile but for all dev tools (static analysers, package managers, etc). Standardisation makes it easier to create toolchains, which I believe are essential for the growth of any language use. The D styleguide requires on one hand capitalised names for types, and lowercase for filenames on the other. How are we supposed to make them match? Denis _ vita es estrany spir.wikidot.com
Re: What Makes A Programming Language Good
spir: The D styleguide requires on one hand capitalised names for types, and lowercase for filenames on the other. How are we supposed to make them match? Why do you want them to match? Bye, bearophile
Re: What Makes A Programming Language Good
Am 18.01.2011 18:41, schrieb spir: On 01/18/2011 06:33 PM, Jim wrote: Adam Ruppe Wrote: Maybe. 9/10 times they match anyway, but I'd be annoyed if the package names had to match the containing folder. This is enforced in some languages, and I like it. It'd be confusing if they didn't match when I would go to look for something. I think it would be a good idea for D to standardise this. Not only so that the compiler can traverse and compile but for all dev tools (static analysers, package managers, etc). Standardisation makes it easier to create toolchains, which I believe are essential for the growth of any language use. The D styleguide requires on one hand capitalised names for types, and lowercase for filenames on the other. How are we supposed to make them match? Denis _ vita es estrany spir.wikidot.com Filenames should match with the module they contain, not with the contained class(es).
Re: What Makes A Programming Language Good
Vladimir Panteleev wrote: Then the question is: does the time you spent writing and maintaining makefiles and build scripts exceed the time it would take you to set up a build tool? I never spent too much time on it anyway, but this thread prompted me to write my own build thing. It isn't 100% done yet, but it does basically work in just 100 lines of code: http://arsdnet.net/dcode/build.d Also depends on these: http://arsdnet.net/dcode/exec.d http://arsdnet.net/dcode/curl.d The exec.d is Linux only, so this program is Linux only too. When the new std.process gets into Phobos, exec.d will be obsolete and we'll be cross platform. I borrowed some code from rdmd, so thanks to Andrei for that. I didn't use rdmd directly though since it seems more script oriented than I wanted. The way it works: build somefile.d It uses dmd -v (same as rdmd) to get the list of files it tries to import. It watches dmd's error output for files it can't find. It then tries to fetch those files from my dpldocs.info http folder and tries again (http://dpldocs.info/repository/FILE). If dmd -v completes without errors, it moves on to run the actual compile. All of build's arguments are passed straight to dmd. In my other post, I talked about a configuration file. That would be preferred over just using my own http server so we can spread out our efforts. I just wanted something simple now to see if it actually works well. It worked on my simple program, but on my more complex program, the linker failed... something about the stupid associative array opApply. Usually my hack to add object_.d from druntime fixes that, but not here. I don't know why. undefined reference to `_D6object30__T16AssociativeArrayTAyaTyAaZ16AssociativeArray7opApplyMFMDFKAyaKyAaZiZi' Meh, I should get to my real work anyway, maybe I'll come back to it. The stupid AAs give me more linker errors than anything else, and they are out of my control!
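For the curious, the discovery loop Adam describes (run dmd -v, spot imports the compiler can't find, fetch them, retry) could be sketched roughly as below. This is a hypothetical illustration, not Adam's actual code; the error-message pattern is an assumption about dmd's output format:

```python
import re
import subprocess
import urllib.request

REPO = "http://dpldocs.info/repository/"  # the HTTP folder mentioned above

def parse_missing(stderr: str):
    """Extract file paths for modules the compiler reports it cannot find.
    Assumes messages shaped like:
    module bar is in file 'foo/bar.d' which cannot be read"""
    return re.findall(r"module \S+ is in file '([^']+)' which cannot be read",
                      stderr)

def build(entry_point, max_rounds=10):
    """Hypothetical sketch of the discovery loop: run `dmd -v`, fetch any
    unreadable imports from the repository, and try again."""
    for _ in range(max_rounds):
        result = subprocess.run(["dmd", "-v", entry_point],
                                capture_output=True, text=True)
        missing = parse_missing(result.stderr)
        if not missing:
            return result.returncode == 0  # clean run: proceed to real compile
        for path in missing:
            with urllib.request.urlopen(REPO + path) as r, open(path, "wb") as f:
                f.write(r.read())
    return False
```

The interesting design point is that the compiler itself acts as the dependency scanner, so the tool never has to parse D source.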
Re: What Makes A Programming Language Good
On 2011-01-18 10:22, bearophile wrote: Vladimir Panteleev: IMO, sticking to the C-ism of one object file at a time and dependency on external build tools / makefiles is the biggest mistake DMD did in this regard. A Unix philosophy is to create tools that are able to do only one thing well, and rdmd uses DMD to do its job of helping compile small projects automatically. Yet the D compiler is not following that philosophy in many situations because it is doing a lot of stuff besides compiling D code, like profiler, code coverage analyser, unittester, docs generator, JSON summary generator, and more. The D1 compiler used to have a cute literate programming feature too, that's often used by Haskell blogs. Here Walter is pragmatic: the docs generator happens to be quicker to create and maintain if it's built inside the compiler. So is it right to fold this rdmd functionality inside the compiler? Is this practically useful, like is this going to increase rdmd speed? Folding rdmd functionality inside the compiler may risk freezing the future evolution of D build tools, so it has risks/costs too. Bye, bearophile I would say that in this case the LLVM/Clang approach would be the best. Build a solid compiler library that other tools can be built upon, including the compiler. -- /Jacob Carlborg
Re: What Makes A Programming Language Good
Trass3r u...@known.com wrote in message news:ih4ij7$1g01$1...@digitalmars.com... the features that rdmd needs to acquire Well something that's also missing in xfBuild is a proper way to organize different build types: (debug, release) x (x86, x64) x ... But that would require config files similar to dsss' ones I think. FWIW, stbuild (part of semitwist d tools) exists to do exactly that: http://www.dsource.org/projects/semitwist/browser/trunk/src/semitwist/apps/stmanage/stbuild http://www.dsource.org/projects/semitwist/browser/trunk/bin Although I'm thinking of replacing it with something more rake-like.
Re: VLERange: a range in between BidirectionalRange and RandomAccessRange
On 01/18/2011 06:14 PM, Michel Fortin wrote: On 2011-01-18 11:38:45 -0500, Andrei Alexandrescu seewebsiteforem...@erdani.org said: I was thinking along the lines of: struct Grapheme { private string support_; ... } struct ByGrapheme { private string iteratee_; bool empty(); Grapheme front(); void popFront(); // Additional funs dchar frontCodePoint(); void popFrontCodePoint(); char frontCodeUnit(); void popFrontCodeUnit(); ... } // helper function ByGrapheme byGrapheme(string s); // usage string s = ...; size_t i; foreach (g; byGrapheme(s)) { writeln("Grapheme #", i, " is ", g); } We need this range in Phobos. Yes, we need a grapheme range. But that's not what my thing was about. It was about shortcutting code point decoding when it isn't necessary while still keeping the ability to decode to code points when iterating on the same range. For instance, here's a simple made up example: string s = "<hello>"; if (!s.empty && s.frontUnit == '<') s.popFrontUnit(); // skip while (!s.empty && s.frontUnit != '>') s.popFront(); // do something with each code point if (!s.empty && s.frontUnit == '>') s.popFrontUnit(); // skip assert(s.empty); Here, since I know I'm testing and skipping for '<', an ASCII character, decoding the code point is wasted time, so I skip that decoding. The problem is that this optimization can't happen with a range that abstracts things at the code point level. I can do it with strings because strings still allow you to access code units through the indexing operators, but this can't really apply to ranges of code points in general. And parsing with a range of code units would also be a pain, because even if I'm testing for '<' for the first character, sometimes I really need to advance by code point and test for code points. This means a single string type that exposes various _synchronous_ range levels (code unit, code point, grapheme), doesn't it? As opposed to Andrei's approach of ranges being structures external to string types, IIUC, which thus move on independently?
One thing that might be interesting is benchmarking my XML parser by replacing every instance of frontUnit and popFrontUnit with front and popFront. That won't change the results, but it'd give us an idea of the overhead of the unnecessarily decoded code points. Yes, would you have time to do it? I would be interested in such perf measurements. (cf. your idea about a Text variant, for which I would like to know whether it's worth still decoding systematically.) Denis _ vita es estrany spir.wikidot.com
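A quick way to get the kind of measurement Denis asks for, without touching the XML parser, is a micro-benchmark comparing a raw code-unit scan against decode-then-scan. The Python sketch below only illustrates the shape of such a benchmark; absolute numbers say nothing about D's decoding costs:

```python
import timeit

# a mixed ASCII/non-ASCII workload, encoded to UTF-8 code units
data = ("héllo wörld " * 1000).encode("utf-8")

def scan_units(b):
    # byte-level scan for an ASCII byte: no decoding at all
    return sum(1 for u in b if u == 0x20)

def scan_points(b):
    # decode everything to code points first, then scan
    return sum(1 for c in b.decode("utf-8") if c == " ")

# both strategies must agree on the answer
assert scan_units(data) == scan_points(data) == 2000

unit_t = timeit.timeit(lambda: scan_units(data), number=200)
point_t = timeit.timeit(lambda: scan_points(data), number=200)
print(f"unit scan: {unit_t:.4f}s  decode-and-scan: {point_t:.4f}s")
```

The assertion works because, in UTF-8, the space byte 0x20 can never occur inside a multi-byte sequence, which is the same property the frontUnit trick exploits.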
join
I implemented a simple separatorless joiner as follows: auto joiner(RoR)(RoR r) if (isInputRange!RoR && isInputRange!(ElementType!RoR)) { static struct Result { private: RoR _items; ElementType!RoR _current; void prime() { for (;; _items.popFront()) { if (_items.empty) return; if (!_items.front.empty) break; } _current = _items.front; _items.popFront(); } public: this(RoR r) { _items = r; prime(); } @property auto empty() { return _current.empty; } @property auto ref front() { assert(!empty); return _current.front; } void popFront() { assert(!_current.empty); _current.popFront(); if (_current.empty) prime(); } static if (isForwardRange!RoR && isForwardRange!(ElementType!RoR)) { @property auto save() { Result copy; copy._items = _items.save; copy._current = _current.save; return copy; } } } return Result(r); } The code has a few properties that I'd like to discuss a bit: 1. It doesn't provide bidirectional primitives, although it often could. The rationale is that implementing back and popBack incurs size and time overheads that are difficult to justify. Most of the time people just want to join stuff and go through it forward. The counterargument is that providing those primitives would make join more interesting and opens the door to other idioms. What say you? 2. joiner uses an idiom that I've experimented with in the past: it defines a local struct and returns it. As such, joiner's type is impossible to express without auto. I find that idiom interesting for many reasons, among which the simplest is that the code is terse, compact, and doesn't pollute the namespace. I'm thinking we should do the same for Appender - it doesn't make much sense to create an Appender except by calling the appender() function. 3. Walter, Don, kindly please fix ddoc so it works with auto. What used to be an annoyance becomes a disabler for the idiom above. Currently it is impossible to document joiner (aside from unsavory tricks). 4. 
I found the prime() idiom quite frequent when defining ranges. Essentially prime() positions the troops by the border. Both the constructor and popFront() call prime(). 5. auto and auto ref rock - they allow simple, correct definitions. 6. Currently joiner() has a bug: it will not work as expected for certain ranges. Which ranges are those, and how can the bug be fixed? Andrei
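For readers following along in another language: the joiner above corresponds to a flattening generator, where the prime() step (skipping empty inner ranges) falls out of the iteration protocol for free. A minimal Python analogue:

```python
def joiner(ranges):
    """Lazily yield the elements of each inner iterable in sequence,
    mirroring the separatorless joiner above."""
    for r in ranges:
        # empty inner iterables contribute nothing, so the "prime"
        # skip-empties step happens implicitly
        yield from r

assert list(joiner(["ab", "", "cd"])) == ["a", "b", "c", "d"]
assert "".join(joiner(["ab", "", "cd"])) == "abcd"
```

As for question 6, one plausible answer (speculation, not confirmed here) involves ranges whose front is invalidated by popFront or whose copies share mutable state: Result stores _current by value and pops _items eagerly, which assumes copying an element range yields an independent iteration.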
Re: What Makes A Programming Language Good
Nick Sabalausky a@a.a wrote in message news:ih4p4o$1r1o$1...@digitalmars.com... Trass3r u...@known.com wrote in message news:ih4ij7$1g01$1...@digitalmars.com... the features that rdmd needs to acquire Well something that's also missing in xfBuild is a proper way to organize different build types: (debug, release) x (x86, x64) x ... But that would require config files similar to dsss' ones I think. FWIW, stbuild (part of semitwist d tools) exists to do exactly that: http://www.dsource.org/projects/semitwist/browser/trunk/src/semitwist/apps/stmanage/stbuild http://www.dsource.org/projects/semitwist/browser/trunk/bin Oh, and an example of the config file: http://www.dsource.org/projects/semitwist/browser/trunk/stbuild.conf Although I'm thinking of replacing it with something more rake-like.
Re: What Makes A Programming Language Good
Vladimir Panteleev wrote: IMO, sticking to the C-ism of one object file at a time and dependency on external build tools / makefiles is the biggest mistake DMD did in this regard. You don't need such a tool with dmd until your project exceeds a certain size. Most of my little D projects' build tool is a one line script that looks like: dmd foo.d bar.d There's just no need to go farther than that.
Re: What Makes A Programming Language Good
Jim wrote: Adam Ruppe Wrote: Maybe. 9/10 times they match anyway, but I'd be annoyed if the package names had to match the containing folder. This is enforced in some languages, and I like it. It'd be confusing if they didn't match when I would go to look for something. I think it would be a good idea for D to standardise this. Not only so that the compiler can traverse and compile but for all dev tools (static analysers, package managers, etc). Standardisation makes it easier to create toolchains, which I believe are essential for the growth of any language use. Forcing the module name to match the file name sounds good, but in practice it makes it hard to debug modules. What I like to do is to copy a suspicious module to foo.d (or whatever.d) and link it in explicitly, which will override the breaking one. Then, I hack away at it until I discover the problem, then fix the original.
Re: What Makes A Programming Language Good
Andrej Mitrovic wrote: On 1/18/11, Walter Bright newshou...@digitalmars.com wrote: You can put hundreds if you like. DMD can, but Optlink can't handle long arguments. Example?
Re: join
2. joiner uses an idiom that I've experimented with in the past: it defines a local struct and returns it. As such, joiner's type is impossible to express without auto. I find that idiom interesting for many reasons, among which the simplest is that the code is terse, compact, and doesn't pollute the namespace. I'm thinking we should do the same for Appender - it doesn't make much sense to create an Appender except by calling the appender() function. Didn't know there was a solution to namespace pollution. This one is a very nice idea, are you planning to use it in phobos in general? Retro, Stride... there should be many.
Re: What Makes A Programming Language Good
Walter Bright Wrote: Forcing the module name to match the file name sounds good, but in practice it makes it hard to debug modules. What I like to do is to copy a suspicious module to foo.d (or whatever.d) and link it in explicitly, which will override the breaking one. Then, I hack away at it until I discover the problem, then fix the original. This would admittedly impose some constraints, but I think it would ultimately be worth it. It makes everything much clearer and creates a bunch of opportunities for further development. I'd create a branch (in Git or Mercurial) for that task, it's quick and dirt cheap, very easy to switch to and from, and you get the diff for free.
Re: Too much flexibility is dangerous for large systems
On 01/18/2011 06:46 AM, spir wrote: On 01/18/2011 12:41 PM, Walter Bright wrote: bearophile wrote: Found through Reddit, similar things can be said about D2: http://kirkwylie.blogspot.com/2011/01/scala-considered-harmful-for-large.html I think that article is bunk. Large Java programs (as related to me by corporate Java programmers) tend to be excessively complex because the language is too simple. Too often people think that with a simple language, the programs created with it must be simple. This is dead wrong. Simple languages lead to complex, incomprehensible programs. Oh, how true! Think at Lisp, for instance, probably one of the most simple languages ever. This simplicity, precisely, forces to create tons of abstraction levels just to define notions not present in the language --due to simplicity-- but absolutely needed to escape too low-level programming. please elucidate?
Re: repeat
Christopher Nicholson-Sauls wrote: On 01/18/11 03:07, Walter Bright wrote: Andrei Alexandrescu wrote: I want to generalize the functionality in string's repeat and move it outside std.string. There is an obvious semantic clash here. If you say repeat("abc", 3) did you mean one string "abcabcabc" or three strings "abc", "abc", and "abc"? Just a thought: concat(repeat("abc", 3)) yields "abcabcabc" ? Nah. Too obvious. On a more serious note, I have no issue with join(repeat("abc", 3)). Eh, I forgot about join!
Re: Too much flexibility is dangerous for large systems
On 1/18/11 3:01 PM, Ellery Newcomer wrote: On 01/18/2011 06:46 AM, spir wrote: On 01/18/2011 12:41 PM, Walter Bright wrote: bearophile wrote: Found through Reddit, similar things can be said about D2: http://kirkwylie.blogspot.com/2011/01/scala-considered-harmful-for-large.html I think that article is bunk. Large Java programs (as related to me by corporate Java programmers) tend to be excessively complex because the language is too simple. Too often people think that with a simple language, the programs created with it must be simple. This is dead wrong. Simple languages lead to complex, incomprehensible programs. Oh, how true! Think at Lisp, for instance, probably one of the most simple languages ever. This simplicity, precisely, forces to create tons of abstraction levels just to define notions not present in the language --due to simplicity-- but absolutely needed to escape too low-level programming. please elucidate? I think Lisp is in fact rather complex. Andrei
Re: join
On 1/18/11 2:55 PM, so wrote: 2. joiner uses an idiom that I've experimented with in the past: it defines a local struct and returns it. As such, joiner's type is impossible to express without auto. I find that idiom interesting for many reasons, among which the simplest is that the code is terse, compact, and doesn't pollute the namespace. I'm thinking we should do the same for Appender - it doesn't make much sense to create an Appender except by calling the appender() function. Didn't know there was a solution to namespace pollution. This one is a very nice idea, are you planning to use it in phobos in general? Retro, Stride... there should be many. I plan to, albeit cautiously. Sometimes people would want e.g. to store a member of that type in a class. They still can by saying typeof(joiner(...)) but we don't want to make it awkward for them. Andrei
Re: repeat
On 1/18/11 3:24 PM, Walter Bright wrote: Christopher Nicholson-Sauls wrote: On 01/18/11 03:07, Walter Bright wrote: Andrei Alexandrescu wrote: I want to generalize the functionality in string's repeat and move it outside std.string. There is an obvious semantic clash here. If you say repeat("abc", 3) did you mean one string "abcabcabc" or three strings "abc", "abc", and "abc"? Just a thought: concat(repeat("abc", 3)) yields "abcabcabc" ? Nah. Too obvious. On a more serious note, I have no issue with join(repeat("abc", 3)). Eh, I forgot about join! It's Porsche-uh (and join-er). Andrei
Re: What Makes A Programming Language Good
On 18/01/11 20:26, Walter Bright wrote: Jim wrote: Adam Ruppe Wrote: Maybe. 9/10 times they match anyway, but I'd be annoyed if the package names had to match the containing folder. This is enforced in some languages, and I like it. It'd be confusing if they didn't match when I would go to look for something. I think it would be a good idea for D to standardise this. Not only so that the compiler can traverse and compile but for all dev tools (static analysers, package managers, etc). Standardisation makes it easier to create toolchains, which I believe are essential for the growth of any language use. Forcing the module name to match the file name sounds good, but in practice it makes it hard to debug modules. What I like to do is to copy a suspicious module to foo.d (or whatever.d) and link it in explicitly, which will override the breaking one. Then, I hack away at it until I discover the problem, then fix the original. Couldn’t you do exactly the same thing by just copying the file? cp suspicious.d suspicious.orig edit suspicious.d
Re: What Makes A Programming Language Good
Jim Wrote: Walter Bright Wrote: Forcing the module name to match the file name sounds good, but in practice it makes it hard to debug modules. What I like to do is to copy a suspicious module to foo.d (or whatever.d) and link it in explicitly, which will override the breaking one. Then, I hack away at it until I discover the problem, then fix the original. This would admittedly impose some constraints, but I think it would ultimately be worth it. It makes everything much clearer and creates a bunch of opportunities for further development. I don't see such benefit. First off, I don't see file/module names not matching very often. Tools can be developed to assume such structure exists which means more incentive to keep such structure; I believe rdmd already makes this assumption. It also wouldn't be hard to make a program that takes a list of files, renames them and places them into the proper structure. I'd create a branch (in Git or Mercurial) for that task, it's quick and dirt cheap, very easy to switch to and from, and you get the diff for free. Right, using such tools is great. But what if you are like me and don't have a dev environment set up for Phobos, but I want to fix some module? Do I have to set up such an environment, or throw the file in a folder std/ and just do some work on it? I don't really know how annoying I would find such a change, but I don't think I would ever see it as a feature.
Potential patent issues
I spotted some patents that can threaten the current DMD implementation. Wanted to clarify things. http://www.freepatentsonline.com/6185728.pdf - this patent describes a method pointer implementation (delegates) http://www.freepatentsonline.com/5628016.pdf - describes compiler support for SEH; this is also the reason the GNU toolchain does not support SEH right now Both patents were owned by Borland; right now, I believe Microsoft owns them. Walter, could you give some comments about this? Does dmd violate anything?
Re: Potential patent issues
BlazingWhitester max.kl...@gmail.com wrote in message news:ih55mp$2gtl$1...@digitalmars.com... I spotted some patents that can theaten current DMD implementation. Wanted to clarify things. http://www.freepatentsonline.com/6185728.pdf - this patent describes method pointers implementation (delegates) http://www.freepatentsonline.com/5628016.pdf - describes compiler support for SEH, also this is the reason GNU toolchain does not support SEH right now Both patents were owned by Borland, right now, I believe Microsoft owns them. Walter, could you give some comments about this? Does dmd violate anything? I don't think there's a piece of software in existence that doesn't violate about ten different software patents.
Re: What Makes A Programming Language Good
Thias v...@invalid.com wrote in message news:ih52a8$2bba$1...@digitalmars.com... On 18/01/11 20:26, Walter Bright wrote: Jim wrote: Adam Ruppe Wrote: Maybe. 9/10 times they match anyway, but I'd be annoyed if the package names had to match the containing folder. This is enforced in some languages, and I like it. It'd be confusing if they didn't match when I would go to look for something. I think it would be a good idea for D to standardise this. Not only so that the compiler can traverse and compile but for all dev tools (static analysers, package managers, etc). Standardisation makes it easier to create toolchains, which I believe are essential for the growth of any language use. Forcing the module name to match the file name sounds good, but in practice it makes it hard to debug modules. What I like to do is to copy a suspicious module to foo.d (or whatever.d) and link it in explicitly, which will override the breaking one. Then, I hack away at it until I discover the problem, then fix the original. Couldn't you do exactly the same thing by just copying the file? cp suspicious.d suspicious.orig edit suspicious.d That's what I do. Works fine. (Although I keep the .d extension, and do like suspicious_orig.d)
Re: Potential patent issues
Am 18.01.2011 23:52, schrieb BlazingWhitester: I spotted some patents that can theaten current DMD implementation. Wanted to clarify things. http://www.freepatentsonline.com/6185728.pdf - this patent describes method pointers implementation (delegates) This is trivial, what idiot grants these kind of patents? And were there really no delegates before Jan. 31 1996 when this was filed? http://www.freepatentsonline.com/5628016.pdf - describes compiler support for SEH, also this is the reason GNU toolchain does not support SEH right now Both patents were owned by Borland, right now, I believe Microsoft owns them. Walter, could you give some comments about this? Does dmd violate anything?
Re: What Makes A Programming Language Good
Jesse Phillips Wrote: It makes everything much clearer and creates a bunch of opportunities for further development. I don't see such benefit. It's easier for the programmer to find the module if it shares the name with the file. Especially true when faced with other people's code, or code that's more than 6 months old, or just large projects. The same goes for packages and directories. The relationship is clear: each file defines a module. The natural thing would be to have them bear the same name. It lets the compiler traverse dependencies by itself. This is good for the following reasons: 1) You don't need build tools, makefiles. Just dmd myApp.d. Do you know how many build tools there are, each trying to do the same thing? They are at a disadvantage to the compiler because the compiler can do conditional compiling and generally understands the code better than other programs. There's also extra work involved in keeping makefiles current. They are just like header files are for C/C++ -- an old solution. 2) The compiler can do more optimisation, inlining, reduction and refactoring. The compiler also knows which code interacts with other code and can use that information for cache-specific optimisations. Vladimir suggested it would open the door to new language features (like virtual templated methods). Generally I think it would be good for templates, mixins and the like. In the TDPL book Andrei makes hints about future AST-introspection functionality. Surely access to the source would benefit from this. It would simplify error messages now caused by the linker. Names within a program wouldn't need to be mangled. More information about the caller / callee would also be available at the point of error. It would also be of great help to third-party developers. Static code analysers (for performance, correctness, bugs, documentation etc), package managers... They could all benefit from the simpler structure. 
They wouldn't have to guess what code is used or built (by matching names themselves or trying to interpret makefiles). It would be easier for novices. The simpler it is to build a program the better. It could be good for the community of D programmers. Download some code and it would fit right in. Naming is a little bit of a Wild West now. Standardised naming makes it easier to sort, structure and reuse code. I'd create a branch (in git or Mercurial) for that task; it's quick and dirt cheap, very easy to switch to and from, and you get the diff for free. Right, using such tools is great. But what if you are like me and don't have a dev environment set up for Phobos, but I want to fix some module? Do I have to set up such an environment, or throw the file in a folder std/ and just do some work on it? You have compilers, linkers and editors but no version control system? They are generally very easy to install and use. When you have used one for a while you wonder how you ever got along without it before. In git, for example, creating a feature branch is one command (or two clicks with a gui). There you can tinker and experiment all you want without causing any trouble with other branches. I usually create a new branch for every new feature. I do some coding on one, switch to another branch and fix something else. They are completely separate. When they are done you merge them into your mainline.
Re: Potential patent issues
On 2011-01-19 01:15:03 +0200, Daniel Gibson said: This is trivial; what idiot grants these kinds of patents? And were there really no delegates before Jan. 31 1996, when this was filed? If I'm not mistaken, Oberon-2 implemented pointers to record-bound procedures as fat pointers.
Re: VLERange: a range in between BidirectionalRange and RandomAccessRange
On 19/01/11 02:40, Andrei Alexandrescu wrote: On 1/18/11 1:58 AM, Steven Wawryk wrote: On 18/01/11 16:46, Andrei Alexandrescu wrote: On 1/17/11 9:48 PM, Michel Fortin wrote: This makes me think of what I did with my XML parser after you made code points the element type for strings. Basically, the parser now uses 'front' and 'popFront' whenever it needs to get the next code point, but most of the time it uses 'frontUnit' and 'popFrontUnit' instead (which I had to add) when testing for or skipping an ASCII character is sufficient. This way I avoid a lot of unnecessary decoding of code points. For this to work, the same range must let you skip either a unit or a code point. If I were using a separate range with a call to toDchar or toCodeUnit (or toGrapheme if I needed to check graphemes), it wouldn't have helped much because the new range would essentially become a new slice independent of the original, so you can't interleave 'I want to advance by one unit' with 'I want to advance by one code point'. So perhaps the best interface for strings would be to provide multiple range-like interfaces that you can use at the level you want. I'm not sure if this is a good idea, but I thought I should at least share my experience. Very insightful. Thanks for sharing. Code it up and make a solid proposal! Andrei How does this differ from Steve Schveighoffer's string_t, subtract the indexing and slicing of code-points, plus a bidirectional grapheme range? There's no string, only range... Which is exactly what I asked you about. I understand that you must be very busy, but how do I get you to look at the actual technical content of something? Is there something in the way I phrase things that makes you dismiss my introductory motivation without looking into the content? I don't mean this as a criticism. I really want to know because I'm considering a proposal on a different topic but wasn't sure it was worth it, as there seems to be a barrier to getting things considered.
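Michel Fortin's dual-level interface can be sketched as a single D range that exposes both code-unit and code-point operations over the same underlying slice. This is a hypothetical illustration of the idea, not his actual parser code; the type name UnitOrPointRange is invented here, and frontUnit/popFrontUnit follow his description:

```d
import std.utf : decode, stride;

/// Sketch: one range over a UTF-8 string that can advance by either
/// a code unit or a code point, so the two can be interleaved.
struct UnitOrPointRange
{
    string s;  // the underlying code units

    @property bool empty() { return s.length == 0; }

    // code-point level, as with the built-in string ranges
    @property dchar front()
    {
        size_t i = 0;
        return decode(s, i);  // decodes the first code point
    }
    void popFront() { s = s[stride(s, 0) .. $]; }

    // code-unit level: cheap ASCII tests and skips, no decoding
    @property char frontUnit() { return s[0]; }
    void popFrontUnit() { s = s[1 .. $]; }
}
```

With this shape a parser can test `r.frontUnit == '<'` and popFrontUnit its way through ASCII markup, falling back to front/popFront only where non-ASCII text can actually occur.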
Re: Too much flexibility is dangerous for large systems
On 1/18/2011 7:46 AM, spir wrote: Think of Lisp, for instance, probably one of the simplest languages ever. This simplicity, precisely, forces you to create tons of abstraction levels just to define notions not present in the language --due to simplicity-- but absolutely needed to escape overly low-level programming. This is on the semantic side. On the syntactic one, all of these custom notions look the same, namely (doSomethingWith args...) (*) instead of each having a distinct outlook helping the reader decode the code. Great! (if (> IQ 150) findAnotherPL haveFunWithLISP) Ironically, from what I understand (correct me if I'm wrong since I've never used Lisp beyond the toying stage), Common Lisp does most of the work for you here, by putting all this abstraction in the standard library. This works because, while the core language is simple, it's extremely flexible. However, the standard library is for many purposes part of the language and therefore Common Lisp has a reputation for being bloated and complex. On the other hand: which notions and distinctions should be defined in a language? D2 may have far too many in my opinion, but which ones should stay, and why? IMHO anything that is clearly useful and cannot be defined well in a library should be defined in the language. Of course the notion of "well" is somewhat subjective. Furthermore, when deciding how good is good enough for a library implementation, it's important to consider how widely used the feature is. Thus, we have arrays in the core language even though they could be done fairly well in a library. IMHO D2 does not have too much in the core language. The nice thing about it is that the more complex parts are only important for generic code, doing low level/performance critical work and providing automatically checkable guarantees about code. None of these can be implemented well in a library.
If you're just writing run-of-the-mill application code and don't care about fancy compiler-checked guarantees or writing generic, low-level or super-efficient code, it's pretty obvious what subset of the language to use. This subset is also extremely easy to use. Of course performing deeper magic requires deeper knowledge, but when has that ever not been true?
Re: VLERange: a range in between BidirectionalRange and RandomAccessRange
On 1/18/11 6:00 PM, Steven Wawryk wrote: On 19/01/11 02:40, Andrei Alexandrescu wrote: On 1/18/11 1:58 AM, Steven Wawryk wrote: On 18/01/11 16:46, Andrei Alexandrescu wrote: On 1/17/11 9:48 PM, Michel Fortin wrote: This makes me think of what I did with my XML parser after you made code points the element type for strings. Basically, the parser now uses 'front' and 'popFront' whenever it needs to get the next code point, but most of the time it uses 'frontUnit' and 'popFrontUnit' instead (which I had to add) when testing for or skipping an ASCII character is sufficient. This way I avoid a lot of unnecessary decoding of code points. For this to work, the same range must let you skip either a unit or a code point. If I were using a separate range with a call to toDchar or toCodeUnit (or toGrapheme if I needed to check graphemes), it wouldn't have helped much because the new range would essentially become a new slice independent of the original, so you can't interleave 'I want to advance by one unit' with 'I want to advance by one code point'. So perhaps the best interface for strings would be to provide multiple range-like interfaces that you can use at the level you want. I'm not sure if this is a good idea, but I thought I should at least share my experience. Very insightful. Thanks for sharing. Code it up and make a solid proposal! Andrei How does this differ from Steve Schveighoffer's string_t, subtract the indexing and slicing of code-points, plus a bidirectional grapheme range? There's no string, only range... Which is exactly what I asked you about. I understand that you must be very busy, but how do I get you to look at the actual technical content of something? Is there something in the way I phrase things that makes you dismiss my introductory motivation without looking into the content? I don't mean this as a criticism. 
I really want to know because I'm considering a proposal on a different topic but wasn't sure it was worth it, as there seems to be a barrier to getting things considered. One simple fact is that I'm not the only person who needs to look at a design. If you want to propose something for inclusion in Phobos, please put the code in good shape, document it properly, and make a submission in this newsgroup following the Boost model. I get one vote and everyone else gets a vote. Looking back at our exchanges in search of a perceived dismissive attitude on my part (apologies if it seems that way - it was unintentional), I infer your annoyance stems from my answer to this: How does this differ from Steve Schveighoffer's string_t, subtract the indexing and slicing of code-points, plus a bidirectional grapheme range? I happen to have discussed at length my beef with Steve's proposal. Now in one sentence you change the proposed design on the fly without fleshing out the consequences, add to it again without substantiation, and presumably expect me to come up with a salient analysis of the result. I don't think it's fair to characterize my answer to that as dismissive, nor to pressure me into expanding on it. Finally, let me say again what I have already said a few times: in order to experiment with grapheme-based processing, we need a byGrapheme range. There is no need for a new string class. We need a range over the existing string types. That would allow us to play with graphemes, assess their efficiency and ubiquity, and would ultimately put us in a better position when it comes to deciding whether it makes sense to make grapheme a character type or the default character type. Andrei
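The byGrapheme range Andrei asks for is a wrapper over an existing string, not a new string type. A minimal sketch of the shape such a range could take (Phobos did later gain std.uni.byGrapheme and Grapheme along these lines; this assumes decodeGrapheme consumes one grapheme cluster from the front of its input, which is how the std.uni function behaves):

```d
import std.uni : Grapheme, decodeGrapheme;

/// Sketch: iterate an existing string one grapheme cluster at a time.
/// No new string type is introduced; `s` is an ordinary string.
struct ByGrapheme
{
    string s;          // remaining input, consumed as we decode
    Grapheme current;  // most recently decoded cluster
    bool primed;       // has `current` been decoded yet?

    @property bool empty() { return !primed && s.length == 0; }

    @property ref Grapheme front()
    {
        if (!primed)
        {
            current = decodeGrapheme(s);  // consumes the cluster from s
            primed = true;
        }
        return current;
    }

    void popFront()
    {
        if (!primed)
            current = decodeGrapheme(s);  // skip without keeping the value
        primed = false;
    }
}
```

Such a wrapper lets grapheme-level processing be benchmarked against code-point iteration without touching the language or the default string type, which is exactly the low-disruption experiment Andrei argues for.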
Re: Too much flexibility is dangerous for large systems
On 01/19/2011 01:26 AM, dsimcha wrote: On 1/18/2011 7:46 AM, spir wrote: Think of Lisp, for instance, probably one of the simplest languages ever. This simplicity, precisely, forces you to create tons of abstraction levels just to define notions not present in the language --due to simplicity-- but absolutely needed to escape overly low-level programming. This is on the semantic side. On the syntactic one, all of these custom notions look the same, namely (doSomethingWith args...) (*) instead of each having a distinct outlook helping the reader decode the code. Great! (if (> IQ 150) findAnotherPL haveFunWithLISP) Ironically, from what I understand (correct me if I'm wrong since I've never used Lisp beyond the toying stage), Common Lisp does most of the work for you here, by putting all this abstraction in the standard library. This works because, while the core language is simple, it's extremely flexible. However, the standard library is for many purposes part of the language and therefore Common Lisp has a reputation for being bloated and complex. You are right. The core is simple, or rather 'flat' ;-) What I actually meant is that the myth simplicity = ease is _very_ wrong, especially in programming. But for some reason, in everyday speech (not only in English), simple tends to be used as a synonym of easy, I guess. Very misleading. I think the notions that should exist in a PL ('s core) are the ones we humans use to think, more precisely to model. Engineers, designers, artists, screenwriters, researchers... programmers model. Thus, PL design may be a field of applied cognitive science. But cognitive science is very far from being there :-( (from being able to tell us anything sensibly useful about how we model). In the absence of any such foundation, we are left with absurd choices, wild statements and irrational rationalisations (in the common case) or plain pragmatics (in the best case). [Sorry for the OT] Denis _ vita es estrany spir.wikidot.com
Re: VLERange: a range in between BidirectionalRange and RandomAccessRange
On 19/01/11 11:37, Andrei Alexandrescu wrote: On 1/18/11 6:00 PM, Steven Wawryk wrote: Which is exactly what I asked you about. I understand that you must be very busy, but how do I get you to look at the actual technical content of something? Is there something in the way I phrase things that makes you dismiss my introductory motivation without looking into the content? I don't mean this as a criticism. I really want to know because I'm considering a proposal on a different topic but wasn't sure it was worth it, as there seems to be a barrier to getting things considered. One simple fact is that I'm not the only person who needs to look at a design. If you want to propose something for inclusion in Phobos, please put the code in good shape, document it properly, and make a submission in this newsgroup following the Boost model. I get one vote and everyone else gets a vote. Ok, thanks for this suggestion. But if developing a proposal as concrete code is a lot of work that may be rejected, is there a way to sound out the idea first before deciding to commit to developing it? Looking back at our exchanges in search of a perceived dismissive attitude on my part (apologies if it seems that way - it was unintentional), I infer your annoyance stems from my answer to this: How does this differ from Steve Schveighoffer's string_t, subtract the indexing and slicing of code-points, plus a bidirectional grapheme range? No, this was just a summary. Here is the post that you answered dismissively: news://news.digitalmars.com:119/ih030g$1ok1$1...@digitalmars.com In the interest of moving this on, would it become acceptable to you if: 1. indexing and slicing of the code-point range were removed? 2. any additional ranges were exposed to the user according to decisions made about graphemes, etc.? 3. other constructive criticisms were accommodated? Steve On 15/01/11 03:33, Andrei Alexandrescu wrote: On 1/14/11 5:06 AM, Steven Schveighoffer wrote: I respectfully disagree. 
A stream built on fixed-sized units, but with variable length elements, where you can determine the start of an element in O(1) time given a random index absolutely provides random-access. It just doesn't provide length. I equally respectfully disagree. I think random access is defined as accessing the ith element in O(1) time. That's not the case here. Andrei I happen to have discussed at length my beef with Steve's proposal. Now in one sentence you change the proposed design on the fly without fleshing out the consequences, add to it again without substantiation, and presumably expect me to come with a salient analysis of the result. I don't think it's fair to characterize my answer to that as dismissive, nor to pressure me into expanding on it. Sorry, I could have given more context. But you didn't discuss what I asked, based on the observation that your detailed criticisms of Steve's proposal all related to a single aspect of it. Steve
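The property Steven Schveighoffer appeals to (finding the start of a variable-length element from a random index in O(1)) does hold for UTF-8, because continuation bytes are self-identifying: every non-lead byte matches the bit pattern 10xxxxxx. A minimal sketch of that fix-up step, assuming well-formed input:

```d
/// Given an arbitrary byte index into well-formed UTF-8, return the
/// index of the first byte of the code point containing it. This is
/// O(1): at most 3 continuation bytes precede the lead byte.
size_t codePointStart(const(char)[] s, size_t i)
{
    while (i > 0 && (s[i] & 0xC0) == 0x80)  // 10xxxxxx => continuation byte
        --i;
    return i;
}
```

Note this also shows where Andrei's objection bites: you reach the i-th code *unit* and snap to an element boundary in O(1), but you cannot reach the i-th code *point* in O(1), which is the definition of random access he is using.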
Re: DVCS (was Re: Moving to D)
On 1/16/2011 5:07 PM, Walter Bright wrote: We'll be moving dmd, phobos, druntime, and the docs to Github shortly. The accounts are set up, it's just a matter of getting the svn repositories moved and figuring out how it all works. I know very little about git and github, but the discussions about it here and elsewhere online have thoroughly convinced me (and the other devs) that this is the right move for D. I'm sure you've already seen this, but Pro Git is probably the best guide for git. http://progit.org/book/ Once you understand what a commit is, what a tree is, what a merge is, what a branch is, etc., it's actually really simple (Chapter 9 in Pro Git). Definitely a radical departure from svn, and a good one for D.
Re: VLERange: a range in between BidirectionalRange and RandomAccessRange
On 1/18/11 7:48 PM, Steven Wawryk wrote: On 19/01/11 11:37, Andrei Alexandrescu wrote: On 1/18/11 6:00 PM, Steven Wawryk wrote: Which is exactly what I asked you about. I understand that you must be very busy, but how do I get you to look at the actual technical content of something? Is there something in the way I phrase things that makes you dismiss my introductory motivation without looking into the content? I don't mean this as a criticism. I really want to know because I'm considering a proposal on a different topic but wasn't sure it was worth it, as there seems to be a barrier to getting things considered. One simple fact is that I'm not the only person who needs to look at a design. If you want to propose something for inclusion in Phobos, please put the code in good shape, document it properly, and make a submission in this newsgroup following the Boost model. I get one vote and everyone else gets a vote. Ok, thanks for this suggestion. But if developing a proposal as concrete code is a lot of work that may be rejected, is there a way to sound out the idea first before deciding to commit to developing it? This is the best place as far as I know. Looking back at our exchanges in search of a perceived dismissive attitude on my part (apologies if it seems that way - it was unintentional), I infer your annoyance stems from my answer to this: How does this differ from Steve Schveighoffer's string_t, subtract the indexing and slicing of code-points, plus a bidirectional grapheme range? No, this was just a summary. Here is the post that you answered dismissively: news://news.digitalmars.com:119/ih030g$1ok1$1...@digitalmars.com My response of Sun, 16 Jan 2011 20:58:43 -0600 was a fair attempt at a response. If you found that dismissive, I'd be hard pressed to improve it. To quote myself: I believe the proposed scheme: 1. Changes the language in a major way; 2. Is highly disruptive; 3. Improves the status quo in only minor ways. 
I'd be much more willing to improve things by e.g. defining the representation() function I talked about a bit ago, and other less disruptive additions. That took into consideration your amendments. In the interest of moving this on, would it become acceptable to you if: 1. indexing and slicing of the code-point range were removed? 2. any additional ranges are exposed to the user according to decisions made about graphemes, etc? 3. other constructive criticisms were accommodated? Steve On 15/01/11 03:33, Andrei Alexandrescu wrote: On 1/14/11 5:06 AM, Steven Schveighoffer wrote: I respectfully disagree. A stream built on fixed-sized units, but with variable length elements, where you can determine the start of an element in O(1) time given a random index absolutely provides random-access. It just doesn't provide length. I equally respectfully disagree. I think random access is defined as accessing the ith element in O(1) time. That's not the case here. Andrei I happen to have discussed at length my beef with Steve's proposal. Now in one sentence you change the proposed design on the fly without fleshing out the consequences, add to it again without substantiation, and presumably expect me to come with a salient analysis of the result. I don't think it's fair to characterize my answer to that as dismissive, nor to pressure me into expanding on it. Sorry, I could have given more context. But you didn't discuss what I asked, based on the observation that your detailed criticisms of Steve's proposal all related to a single aspect of it. I really don't know what to add to make my answer more meaningful. Andrei
Re: VLERange: a range in between BidirectionalRange and RandomAccessRange
On 19/01/11 13:53, Andrei Alexandrescu wrote: My response of Sun, 16 Jan 2011 20:58:43 -0600 was a fair attempt at a response. If you found that dismissive, I'd be hard pressed to improve it. To quote myself: I believe the proposed scheme: 1. Changes the language in a major way; 2. Is highly disruptive; 3. Improves the status quo in only minor ways. I'd be much more willing to improve things by e.g. defining the representation() function I talked about a bit ago, and other less disruptive additions. That took into consideration your amendments. I don't think that it did. I proposed no language change, nor anything disruptive. The change in status quo I proposed was essentially the same one you encouraged here, about a type that gives the user the choice of what kind of range to be operated on. It appears to me that you were responding to some perception you had about Steve's full proposal (that may have been triggered by something I said in the introduction), not what I actually said in the content. So, I would still be interested to know how to sound out this newsgroup with an idea (before coding commitment) and have the suggestions considered on something more than a superficial level. Is the newsgroup too busy? Should there be people nominated to screen ideas that are worth looking at? Should I use a completely different approach? Your suggestions so far I will take into account, but it still looks like there's a barrier to me. Sorry, I could have given more context. But you didn't discuss what I asked, based on the observation that your detailed criticisms of Steve's proposal all related to a single aspect of it. I really don't know what to add to make my answer more meaningful. Andrei
Re: What Makes A Programming Language Good
Jim Wrote: Jesse Phillips Wrote: It makes everything much clearer and creates a bunch of opportunities for further development. I don't see such benefit. It's easier for the programmer to find the module if it shares its name with the file. Especially true when faced with other people's code, or code that's more than 6 months old, or just large projects. The same goes for packages and directories. The relationship is clear: each file defines a module. The natural thing would be to have them bear the same name. Like I said, I haven't seen this as an issue. People don't go around naming their files completely differently from the module name. There are just too many benefits for them to do otherwise; I believe the include path makes use of this. It lets the compiler traverse dependencies by itself. This is good for the following reasons: 1) You don't need build tools or makefiles. Just dmd myApp.d. Do you know how many build tools there are, each trying to do the same thing? They are at a disadvantage to the compiler because the compiler can do conditional compilation and generally understands the code better than other programs. There's also extra work involved in keeping makefiles current. They are just like header files are for C/C++ -- an old solution. This is what the Open Scalable Language Toolchains talk is about: http://vimeo.com/16069687 The idea is that the compiler has the job of compiling the program and providing information about the program, to allow other tools to make use of that information without their own lex/parse/analysis work. Meaning the compiler should not have an advantage. Lastly, Walter has completely different reasons for not wanting auto-find in the compiler. It will become yet another magic black box that will still confuse people when it fails. 2) The compiler can do more optimisation, inlining, reduction and refactoring. The compiler also knows which code interacts with other code and can use that information for cache-specific optimisations. 
Vladimir suggested it would open the door to new language features (like virtual templated methods). Generally I think it would be good for templates, mixins and the like. In the TDPL book Andrei hints at future AST-introspection functionality. Surely access to the source would benefit from this. No, you do not get optimization benefits from how the files are stored on the disk. What Vladimir was talking about was the restriction that the compilation unit is the module. DMD already provides many of these benefits if you just list all the files you want compiled on the command line. It would simplify error messages now caused by the linker. Names within a program wouldn't need to be mangled. More information about the caller / callee would also be available at the point of error. Nope, because the module you are looking for could be in a library somewhere, and if you forget to point the linker to it, you'll still get linker errors. It would also be of great help to third-party developers. Static code analysers (for performance, correctness, bugs, documentation etc), package managers... They could all benefit from the simpler structure. They wouldn't have to guess what code is used or built (by matching names themselves or trying to interpret makefiles). As I said, have all these tools assume such a structure. If people aren't already using the layout, they will if they want to use these tools. I believe that is how using the import path already works in dmd. It would be easier for novices. The simpler it is to build a program the better. It could be good for the community of D programmers. Download some code and it would fit right in. Naming is a little bit of a Wild West now. Standardised naming makes it easier to sort, structure and reuse code. rdmd is distributed with the compiler... do you have examples of poorly chosen module names which have caused issues? Right, using such tools is great. 
But what if you are like me and don't have a dev environment set up for Phobos, but I want to fix some module? Do I have to set up such an environment, or throw the file in a folder std/ and just do some work on it? You have compilers, linkers and editors but no version control system? They are generally very easy to install and use. When you have used one for a while you wonder how you ever got along without it before. In git, for example, creating a feature branch is one command (or two clicks with a gui). There you can tinker and experiment all you want without causing any trouble with other branches. I usually create a new branch for every new feature. I do some coding on one, switch to another branch and fix something else. They are completely separate. When they are done you merge them into your mainline. No no no, having git installed on the system is completely different from having a dev environment for Phobos. You'd have
Re: What Makes A Programming Language Good
On 1/18/2011 10:31 AM, Vladimir Panteleev wrote: Then the question is: does the time you spent writing and maintaining makefiles and build scripts exceed the time it would take you to set up a build tool? For D, no. When I tried to get started with D2, there were a lot of pointers to kewl build utilities on d-source. None of them worked. None of them that needed to self-build were capable of it. (Some claimed to 'just run', which was also false.) So I wound up pissing away about two days (spread out here and there as one library or another would proudly report 'this uses build tool Z - isn't it cool?!' and I'd chase down another failure). On the other hand, GNU Make works. And Perl works. And the dmd2 compiler spits out a dependency list that, with a little bit of perl foo, turns into a makefile fragment nicely. So now I have a standard makefile package that knows about parsing D source to figure out all the incestuous little details about what calls what. And I'm able, thanks to the miracle of 'start here and recurse', to move this system from project to project with about 15 minutes of tweaking. Sometimes more, if there's a whole bunch of targets getting built. What's more, of course, my little bit of Makefile foo is portable. I can use make with C, D, Java, C++, Perl, XML, or whatever language-of-the-week I'm playing with. Which is certainly not true of L33+ build tool Z. And make is pretty much feature-complete at this point, again unlike any of the D build tools. Which means that investing in knowing how to tweak make pays off way better than time spent learning BTZ. =Austin
Re: What Makes A Programming Language Good
On Tue, 18 Jan 2011 22:17:08 +0200, Walter Bright newshou...@digitalmars.com wrote: Vladimir Panteleev wrote: IMO, sticking to the C-ism of one object file at a time and dependency on external build tools / makefiles is the biggest mistake DMD made in this regard. You don't need such a tool with dmd until your project exceeds a certain size. Most of my little D projects' build tool is a one-line script that looks like: dmd foo.d bar.d There's just no need to go farther than that. Let's review the two problems discussed in this thread: 1) Not passing all modules to the compiler results in a nearly-incomprehensible (for some) linker error. 2) DMD's inability (or rather, unwillingness) to build the whole program when it's in the position to, which creates the dependency on external build tools (or solutions that require unnecessary human effort). Are you saying that there's no need to fix either of these because they don't bother you personally? -- Best regards, Vladimir mailto:vladi...@thecybershadow.net
Re: DVCS (was Re: Moving to D)
Vladimir Panteleev wrote: On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright newshou...@digitalmars.com wrote: Yeah, I could spend an afternoon doing that. sudo apt-get build-dep meld wget http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2 tar jxf meld-1.5.0.tar.bz2 cd meld-1.5.0 make sudo make install You're welcome ;) (Yes, I just tested it on a Ubuntu install, albeit 10.10. No, no ./configure needed. For anyone else who tries this and didn't already have meld, you may need to apt-get install python-gtk2 manually.) It doesn't work: walter@mercury:~$ ./buildmeld [sudo] password for walter: Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to find a source package for meld --2011-01-18 21:35:07-- http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2%0D Resolving ftp.gnome.org... 130.239.18.163, 130.239.18.173 Connecting to ftp.gnome.org|130.239.18.163|:80... connected. HTTP request sent, awaiting response... 404 Not Found 2011-01-18 21:35:08 ERROR 404: Not Found. tar: meld-1.5.0.tar.bz2\r: Cannot open: No such file or directory tar: Error is not recoverable: exiting now tar: Child returned status 2 tar: Error exit delayed from previous errors : No such file or directoryld-1.5.0 : command not found: make '. Stop. No rule to make target `install
Re: VLERange: a range in between BidirectionalRange and RandomAccessRange
On 1/18/11 9:46 PM, Steven Wawryk wrote: On 19/01/11 13:53, Andrei Alexandrescu wrote: My response of Sun, 16 Jan 2011 20:58:43 -0600 was a fair attempt at a response. If you found that dismissive, I'd be hard pressed to improve it. To quote myself: I believe the proposed scheme: 1. Changes the language in a major way; 2. Is highly disruptive; 3. Improves the status quo in only minor ways. I'd be much more willing to improve things by e.g. defining the representation() function I talked about a bit ago, and other less disruptive additions. That took into consideration your amendments. I don't think that it did. I proposed no language change, nor anything disruptive. Adding a new string type would be disruptive. Unless I misunderstood, there is still a new string type in Steve's proposal, and one that would be the default one, even after the amendments you mentioned. That is a problem because people write this: auto s = "hello"; and the question is, what is the type of s. The change in status quo I proposed was essentially the same one you encouraged here, about a type that gives the user the choice of what kind of range to be operated on. It appears to me that you were responding to some perception you had about Steve's full proposal (that may have been triggered by something I said in the introduction), not what I actually said in the content. If that's what it is, great. To clarify: no new string type, only a range that iterates over existing strings one grapheme at a time. So, I would still be interested to know how to sound out this newsgroup with an idea (before coding commitment) and have the suggestions considered on something more than a superficial level. Is the newsgroup too busy? Should there be people nominated to screen ideas that are worth looking at? Should I use a completely different approach? Your suggestions so far I will take into account, but it still looks like there's a barrier to me. 
My perception is that you want to minimize risks before starting to invest work into this. I'm not sure how you can do that. Andrei
Re: What Makes A Programming Language Good
On Wed, 19 Jan 2011 07:16:40 +0200, Austin Hastings ah0801...@yahoo.com wrote: None of them worked. Most of those build utilities do exactly what make + your perl-foo do. -- Best regards, Vladimir mailto:vladi...@thecybershadow.net