Re: Running D in the Java VM
On Friday, 15 November 2013 at 07:13:34 UTC, Jeremy DeHaan wrote:

Hey everyone! I have been experimenting for the past couple of days with an idea I had, and since I recently made a little progress I thought I would share some of what I have been doing with you. What I have done, in a nutshell, is begin the process of a language converter that takes D source files, converts them into Java source files, and then compiles them as Java class files so that they can be run on Java's VM. It is extremely limited in what it can do right now, only being able to convert/compile a simple Hello World program, but I was proud of myself for getting even that far so I wanted to brag. :P

You may want to ask, "Hey, man. D is a great language. Why would I ever want to convert it to Java?" Normally, you wouldn't. Java blows. What I am envisioning for this project is something quite magical in my opinion. If we can take D code and have it compile into Java class files, we can then compile them into Android dex files. This would make D able to build and run on Android devices through its VM. Sure, people are working on getting D to compile to ARM binaries, but this could (eventually) be another option as a Java alternative on Android.

Unfortunately I do not know much about compilers, but even in the last couple of days I feel like I have learned a great deal about what goes into them. Eventually I'd like to make a full-blown compiler that takes D code and goes right to dex files, but that would be something way down the road. In order to get D working on Android sooner, I figured a language converter would be the easier route.

I can, and would love to, go into more detail about this, but it is getting late and this post is already quite long. Maybe I should start a blog about my D escapades? Anyway, I would love to hear feedback on this idea! Thanks for your time!
It is an impossible task, because many of the D semantics cannot be expressed in JVM/Dalvik bytecode. -- Paulo
Re: DDT 0.9.0 released - GDB debugging integration
On 15.11.2013 00:54, Bruno Medeiros wrote: DDT 0.9.0 (Debugging is Magic) is out, see post: https://groups.google.com/d/msg/ddt-ide/VwA7ifYt9c0/wBcvUSVKNqMJ

I installed the 4.2 version but get an error again:

  Cannot complete the install because of a conflicting dependency.
  Software being installed: DDT - D Development Tools 0.9.0.v201311141659 (org.dsource.ddt.feature.group 0.9.0.v201311141659)
  Software currently installed: Eclipse Platform 4.2.0.I20120608-1400 (org.eclipse.platform.ide 4.2.0.I20120608-1400)
  Only one of the following can be installed at once:
    Eclipse Workbench 3.105.0.v20130529-1406 (org.eclipse.ui.workbench 3.105.0.v20130529-1406)
    Eclipse Workbench 3.103.0.v20120530-1824 (org.eclipse.ui.workbench 3.103.0.v20120530-1824)
    Eclipse Workbench 3.103.1.v20120906-120042 (org.eclipse.ui.workbench 3.103.1.v20120906-120042)
    Eclipse Workbench 3.105.1.v20130821-1411 (org.eclipse.ui.workbench 3.105.1.v20130821-1411)
    Eclipse Workbench 3.104.0.v20130204-164612 (org.eclipse.ui.workbench 3.104.0.v20130204-164612)
  Cannot satisfy dependency:
    From: DDT - D Development Tools 0.9.0.v201311141659 (org.dsource.ddt.feature.group 0.9.0.v201311141659)
    To: org.dsource.ddt.ide.debug [0.1.0.v201311141659]
  Cannot satisfy dependency:
    From: DDT Debug support (DSF) 0.1.0.v201311141659 (org.dsource.ddt.ide.debug 0.1.0.v201311141659)
    To: bundle org.eclipse.ui 3.105.0

My Eclipse version is the latest; what more should I do?
Re: Running D in the Java VM
Surely the main requirements would be GUI and networking? Which would be completely possible.

On 15 Nov 2013 12:05, John Colvin john.loughran.col...@gmail.com wrote: On Friday, 15 November 2013 at 08:50:14 UTC, Paulo Pinto wrote: On Friday, 15 November 2013 at 07:13:34 UTC, Jeremy DeHaan wrote: [snip]

It is an impossible task, because many of the D semantics cannot be expressed in JVM/Dalvik bytecode. -- Paulo

Like what? Barring low-level device access, they are both Turing-complete languages, and given the necessary transformations a complete program in one can always be statically* translated into the other. However, efficient Java code would be a different matter.

*very important. There are many code-level concepts in D that are not expressible on the JVM, but a static translation doesn't need to honor those: it simply needs to produce a program that has the same observable effects for any given input. This is definitely possible, although quite possibly monstrously involved when done in detail.
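To make the discussion concrete, here is the kind of minimal translation being talked about: a D hello world and the Java a converter might plausibly emit. The Java output is a hypothetical sketch; the thread does not show the actual output of Jeremy's converter.

```d
// D input: the kind of program the converter can currently handle.
import std.stdio;

void main()
{
    writeln("Hello, World!");
}

/* A hypothetical Java translation such a converter might emit:

public final class HelloWorld
{
    public static void main(String[] args)
    {
        System.out.println("Hello, World!");
    }
}

Even at this size the mapping is not one-to-one: D's module-level
main has to be wrapped in a class, and writeln's variadic,
template-based signature has no direct JVM counterpart, which hints
at why a full translation gets involved quickly.
*/
```

This is exactly the scope of John Colvin's point above: the observable behavior (printing one line) is easy to preserve, while the code-level constructs need nontrivial rewriting.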
Re: DDT 0.9.0 released - GDB debugging integration
On 11/14/13 2:54 PM, Bruno Medeiros wrote: DDT 0.9.0 (Debugging is Magic) is out, see post: https://groups.google.com/d/msg/ddt-ide/VwA7ifYt9c0/wBcvUSVKNqMJ Awesome. I like your solution for the debugger (instead of writing something from scratch). Congratulations!
Re: Running D in the Java VM
On Fri, 2013-11-15 at 09:44 +0200, Rory McGuire wrote: [...] Nice one, I have to use Java at work; it would be awesome if I didn't have to. It would be cool if you made the outputs to Java just transforms of the AST; that way people could write other types of output, such as C.

Just use Scala, Ceylon, Kotlin or statically compiled Groovy instead of Java. Everyone I know who has ever tried it has never looked back. Management can be handled easily since all these languages interwork well with Java, so changing over to a new language is incremental; it does not require a revolution. Java is dead, long live the JVM.

-- Russel.
=
Dr Russel Winder     t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Road   m: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK  w: www.russel.org.uk  skype: russel_winder
Re: DDT 0.9.0 released - GDB debugging integration
On 15/11/2013 08:56, Alexandr Druzhinin wrote: [snip] My Eclipse version is the latest; what more should I do?

My bad. The effective minimum Eclipse Platform version is 4.3.0, not 4.2.0 (I've updated the DDT Installation wiki). Also, it seems that as things stand, Eclipse cannot update the Platform and install DDT *at the same time*, even within the same Platform major version number. So just do it in two steps: first update Eclipse (Check for Updates), then update/install DDT. -- Bruno Medeiros - Software Engineer
Facebook puts bounties on bugs in the D programming language implementation
Hello, As part of a larger program to support the open source community, Facebook is putting bounties on bugs in the D programming language implementation. The initial budget is small, but if we find that it accelerates development we may add to it. As a sample we've added $80 bounties to three regressions: http://goo.gl/JvajFP http://goo.gl/LLhRIw http://goo.gl/2crX4V We're still tweaking the best allocation of sums to bugs depending on their importance and difficulty. I will update this list once I've allocated the entire budget, likely before the end of the day today. Beyond the money involved, this is a gesture of good faith, confidence, and investment in the future of D. Let's respond in kind! Andrei
Re: Facebook puts bounties on bugs in the D programming language implementation
On 11/15/13 9:51 AM, Andrei Alexandrescu wrote: [snip] http://www.reddit.com/r/programming/comments/1qpdil/facebook_puts_bounties_on_bugs_in_the_d/ Andrei
Re: Facebook puts bounties on bugs in the D programming language implementation
On 11/15/13 9:51 AM, Andrei Alexandrescu wrote: [snip] https://news.ycombinator.com/item?id=6740999 Andrei
Re: Facebook puts bounties on bugs in the D programming language implementation
On Friday, 15 November 2013 at 17:51:54 UTC, Andrei Alexandrescu wrote: Hello, As part of a larger program to support the open source community, Facebook is putting bounties on bugs in the D programming language implementation. The initial budget is small, but if we find that it accelerates development we may add to it. Anyone interested had better hurry before Kenji wakes up in a few hours. :P
Re: DDT 0.9.0 released - GDB debugging integration
On 15/11/2013 13:29, Ary Borenszweig wrote: On 11/14/13 2:54 PM, Bruno Medeiros wrote: DDT 0.9.0 (Debugging is Magic) is out, see post: https://groups.google.com/d/msg/ddt-ide/VwA7ifYt9c0/wBcvUSVKNqMJ Awesome. I like your solution for the debugger (instead of writing something from scratch). Congratulations!

Yeah, it looks great, although it's not that I've done that much myself; in large part I was lucky to have this kind of support available and to be able to integrate it this way. At first I thought I might have had to copy CDT's debug source code (like Descent did with JDT) and adapt it to DDT, but fortunately that was not necessary. -- Bruno Medeiros - Software Engineer
Re: Facebook puts bounties on bugs in the D programming language implementation
On Friday, 15 November 2013 at 17:51:54 UTC, Andrei Alexandrescu wrote: [snip]

The D Programming Language? $80? Ha... Fail.
Re: Facebook puts bounties on bugs in the D programming language implementation
On Friday, 15 November 2013 at 21:05:56 UTC, Joseph Frank wrote: On Friday, 15 November 2013 at 17:51:54 UTC, Andrei Alexandrescu wrote: [snip] The D Programming Language? $80? Ha... Fail.

It is a start! What IDE is Facebook using for D development? Would it be an option to support the development of that too?
Re: DCD 0.2.0 Released
Hi, I noticed the vim setup requires Vundle, whereas I know a lot of folks use pathogen to manage their vim plugins. Is there any particular reason for Vundle over pathogen? Can the two co-exist? Thanks
Re: DCD 0.2.0 Released
On Friday, 15 November 2013 at 23:25:31 UTC, Jacek Furmankiewicz wrote: Hi, I noticed the vim setup requires Vundle, whereas I know a lot of folks use pathogen to manage their vim plugins. is there any particular reason for Vundle over pathogen? Can the two co-exist together? thanks I installed it with pathogen just fine. Just manually create a symlink.
Re: DCD 0.2.0 Released
Thanks. Maybe it would be worth adding a pathogen section to the docs to show how to set it up besides Vundle.
Re: DCD 0.2.0 Released
On Friday, 15 November 2013 at 23:25:31 UTC, Jacek Furmankiewicz wrote: is there any particular reason for Vundle over pathogen? A Vundle user created a pull request. Can the two co-exist together? Vundle is available, but not required. If anyone has suggestions for improving the installation process or documentation for the Vim plugin, please create a pull request.
Re: Facebook puts bounties on bugs in the D programming language implementation
On 11/15/13 1:05 PM, Joseph Frank wrote: The D Programming Language? $80? Ha... Fail. How do you mean that? The budget is of course much larger than that. I'd just started assigning it. Andrei
Re: Facebook puts bounties on bugs in the D programming language implementation
On Saturday, 16 November 2013 at 00:45:18 UTC, Andrei Alexandrescu wrote: How do you mean that? The budget is of course much larger than that. I'd just started assigning it. Andrei What's going to be the bounty for getting a precise GC in D? Is it even fair to say there's a single bounty hunter? OK, OK, just kidding! I think it's great that FB does this. -- Brian
Re: Facebook puts bounties on bugs in the D programming language implementation
On 11/15/2013 5:40 PM, Brian Rogoff wrote: On Saturday, 16 November 2013 at 00:45:18 UTC, Andrei Alexandrescu wrote: How do you mean that? The budget is of course much larger than that. I'd just started assigning it. Andrei What's going to be the bounty for getting a precise GC in D? Is it even fair to say there's a single bounty hunter? OK, OK, just kidding! I think it's great that FB does this. It's not restricted to FB. Using the site https://www.bountysource.com/trackers/383571-d-programming-language any outfit (or individual!) using D that wants to encourage the community to fix a particular bug can post any bounty they like there. Sometimes companies do ask me if there is a way they can give back to the D community (since we make free software), and this is a great way to do it.
Re: Look and think good things about D
On 2013-11-15 02:09, bearophile wrote: I have created two interesting D entries for this Rosetta Code task; is someone willing to create a Reddit entry for this? They show very different kinds of code in D. http://rosettacode.org/wiki/Look-and-say_sequence#D

What does this show, that ranges are slow in D? -- /Jacob Carlborg
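For readers who don't want to click through, the task itself is small; this is a minimal, unoptimized sketch (not one of the two Rosetta Code entries bearophile linked):

```d
import std.array : appender;
import std.conv : to;

/// One look-and-say step: "1211" -> "111221"
/// (read the runs aloud: "one 1, one 2, two 1s").
string lookAndSay(string s)
{
    auto result = appender!string();
    size_t i = 0;
    while (i < s.length)
    {
        // Find the end of the current run of equal digits.
        size_t j = i;
        while (j < s.length && s[j] == s[i])
            j++;
        result.put((j - i).to!string); // run length
        result.put(s[i]);              // the repeated digit
        i = j;
    }
    return result.data;
}
```

A range-based version would express the inner loop with grouping primitives from std.algorithm instead of explicit indices; comparing the two styles is presumably what prompted the question about range performance.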
Re: DIP 45 - approval discussion
Walter Bright newshou...@digitalmars.com wrote in message news:l64imh$27na$1...@digitalmars.com... Also, at least on Windows, you can call functions in a DLL without saying dllimport on them and suffering a layer of indirection. The magic happens in the import library, which provides the relevant thunk. It's about 15 years since I worked on this stuff, so I might be a bit fuzzy on the details.

The symbol in the import library just translates to an import table indirection. The trick you may have been thinking of is that system DLLs (kernel32.dll, user32.dll etc.) are always loaded at the same address, so a little hackery will let you bypass the import table.
Re: DIP 45 - approval discussion
On 11/15/2013 12:00 AM, Daniel Murphy wrote: [snip] The symbol in the import library just translates to an import table indirection.

Yes, meaning the compiler doesn't have to do it if the import library is set up correctly. (implib.exe should do it.)
Re: Ehem, ARM
On Friday, 15 November 2013 at 07:22:07 UTC, Paulo Pinto wrote: On Friday, 15 November 2013 at 06:18:00 UTC, Joakim wrote: As Kai says, has anyone worked on getting D running on Android before? I've been thinking about attempting an Android port for years. I thought I'd spin up some x86 VMs this weekend and take a crack at getting D working on Android/x86 (http://www.android-x86.org/) as a first step. If anyone has started on this already, I could chip in on their branch. Yes, have a look at an initial port of GDC to Android https://bitbucket.org/goshawk/gdc/wiki/GDC%20on%20Android Thanks for the link. It looks like Johannes Pfau took a stab at getting some minimal Android support into gdc a couple years back, with the handful of small patches listed here: https://bitbucket.org/jpf/gdc/branch/android Anyone get any farther than that? Would it make sense to use dmd for linux/x86 to cross-compile to Android/x86 or is this a job for ldc/gdc only?
Re: Build Master: Scheduling
tl;dr thread. What about a release cycle like the Debian project has?

Stable (LTS) release: for example, every year or every three versions, with backports available.
Testing release: for example, every month.
Unstable (head revision): every week?

Naming:
dmd-2.065.0-2013.11.15-unstable.zip
dmd-2.065.0-2013.11.11-testing.zip
dmd-2.065.2.zip (stable release with 2 updates)
dmd-unstable.zip (latest head revision build)
dmd-testing.zip (latest testing/feature build)
dmd-stable.zip (latest stable/LTS build)
Re: Ehem, ARM
On Friday, 15 November 2013 at 08:24:55 UTC, Joakim wrote: [snip] Would it make sense to use dmd for linux/x86 to cross-compile to Android/x86 or is this a job for ldc/gdc only?

I would say ldc/gdc only, as LLVM/gcc are the supported NDK toolchains and dmd lacks an ARM backend.
Re: Build Master: Scheduling
On 14.11.2013 10:55, Jacob Carlborg wrote: On 2013-11-14 09:39, luka8088 wrote: Just a wild thought... Maybe we can have monthly releases and still keep them stable. Imagine this kind of release schedule:

Month #  11          12          1           2           3
Stable   2.064       2.065       2.066       2.067       2.068
RC 2     2.065rc2    2.066rc2    2.067rc2    2.068rc2    2.069rc2
RC 1     2.066rc1    2.067rc1    2.068rc1    2.069rc1    2.070rc1
Beta 2   2.067b2     2.068b2     2.069b2     2.070b2     2.071b2
Beta 1   2.068b1     2.069b1     2.070b1     2.071b1     2.072b1
Alpha    2.069alpha  2.070alpha  2.071alpha  2.072alpha  2.073alpha

Where new features are only added to the alpha release, and bug fixes are added to all releases. This way new bug fixes and new features would be released every month, but there would be a 5-month delay between the time that feature A is added to the alpha release and the time feature A is propagated to the stable release. During this period feature A would be propagated through the releases, and there would be plenty of opportunity to test it and clear it of bugs. I am not a fan of such a delay, but I don't see any other way new features could be added without a higher risk of bugs. Also, a vote up for daily snapshots.

Are you saying we should have six releases going on at the same time?

Yes. For example, if you have versions 0.1, 0.2 and 0.3, and you find and fix a bug in 0.3 but you still wish to support 0.2 and 0.1 with backports, then you indeed need to make 3 releases: 0.1.1, 0.2.1 and 0.3.1. But then again, having LTS releases as others have mentioned seems better, so that only each nth release gets 2.x.1, 2.x.2, 2.x.3.

From my perspective, not separating releases with improvements + bug fixes from releases with only bug fixes is an issue, because every new improvement implies a risk of new bugs, and some users just want one version that is as stable as possible.

What do you all think about http://semver.org/ ? We use this kind of versioning notation at work and it turns out to be very good.
Re: Build Master: Scheduling
On Thursday, 14 November 2013 at 00:37:38 UTC, Tyro[17] wrote: Bugfix releases (2.064.1) Based on the previous major release or bugfix; contains only bugfixes and perhaps documentation corrections. Your thoughts and concerns please.

Do you happen to work with me? We have two dozen such release branches :) And customers still tend to prefer the slightly bleeding-edge ones. While this effectively _does_ work in creating more stable releases, I think that it puts all the burden on the developers in a way that is difficult to ignore. Most of the time backporting is little work, but sometimes you need to redo a fix you already did, for a branch which only exists to be phased out a bit later, in a codebase slightly different in ways nobody can tell anymore. Sometimes it's even very hard to backport a fix, but if you don't do it, how do you tell which branch has which bugs? Coupled with being forced to backport every bugfix to every branch, this can make a compelling enough reason never to contribute. With a typical bad-fix injection rate of ~5%, this also means regressions will crop up in release branches and never be noticed, since they are not introduced at HEAD level but by the very act of backporting. I'm not sure it can be done in a way that feels right. I prefer the N-month release cycle we have, taking as much time and as many Release Candidates as needed.
Db cursor as range
Check this original post: http://forum.dlang.org/thread/bqimjjyjnjnzmkyak...@forum.dlang.org?page=1 Now they have added a function to save the db cursor, so I can implement it as a struct or add a .save() method. I don't know if it works well. The data could change from outside even during traversal, of course. If I save the cursor, next time it could give different results. I wonder if this is acceptable for a range implementation...
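To make the question concrete, here is a sketch of what wrapping such a cursor as a range might look like. The `Cursor` type below is a hypothetical stand-in for the real DB API (backed by an in-memory array so the example is self-contained); the interesting part is what `save()` promises.

```d
// Hypothetical stand-in for a DB cursor; a real one would fetch rows
// lazily from the database instead of holding an array.
struct Cursor
{
    int[] rows;
    size_t pos;

    bool done() const { return pos >= rows.length; }
    int current() const { return rows[pos]; }
    void advance() { pos++; }
    Cursor snapshot() const { return Cursor(rows.dup, pos); }
}

struct CursorRange
{
    Cursor cursor;

    bool empty() const { return cursor.done(); }
    int front() const { return cursor.current(); }
    void popFront() { cursor.advance(); }

    // save() is what promotes this from an input range to a forward
    // range: it must return an independent copy that replays the same
    // elements. If the underlying data can change between traversals,
    // that guarantee cannot be kept, which is exactly the concern in
    // the post above; in that case it is safer to expose the cursor
    // as an input range only, i.e. without save().
    CursorRange save() const { return CursorRange(cursor.snapshot()); }
}
```

The compiler can check that the forward-range interface is present (std.range's isForwardRange), but it cannot check save()'s semantic guarantee; that is the part that depends on whether the database rows can change underneath the cursor.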
Re: Ehem, ARM
On Friday, 15 November 2013 at 08:54:21 UTC, Paulo Pinto wrote: On Friday, 15 November 2013 at 08:24:55 UTC, Joakim wrote: Would it make sense to use dmd for linux/x86 to cross-compile to Android/x86 or is this a job for ldc/gdc only? I would say ldc/gdc only, as LLVM/gcc are the supported NDK toolchains and dmd lacks an ARM backend. Yeah, I'm aware of these facts, but I don't think they matter. For one, dmd not having an ARM backend doesn't impact me since I'm targeting Android/x86 for now, :) as stated earlier. I don't think it's relevant what toolchains are integrated into the NDK as, for example, the Free Pascal compiler can now compile to Android and it isn't based on gcc or llvm: http://wiki.freepascal.org/Android I think the bigger issue is that llvm/gcc and therefore ldc/gdc have support for cross-compilation, but I don't know the status of cross-compiling support with dmd.
Re: Build Master: Scheduling
On 15.11.2013 0:22, Xavier Bigand wrote: On 14/11/2013 09:39, luka8088 wrote: On 14.11.2013 5:29, Tyro[17] wrote: On 11/13/13, 11:06 PM, Brad Roberts wrote: On 11/13/13 7:13 PM, Tyro[17] wrote: On 11/13/13, 9:46 PM, Brad Roberts wrote: On 11/13/13 4:37 PM, Tyro[17] wrote: I'm of the opinion, however, that the cycle should be six months long. This particular schedule is not of my own crafting, but I believe it to be sound and worthy of emulation:

I think 6 months between releases is entirely too long. I'd really like us to be back closer to a release every month or two rather than only twice a year. The pace of change is high and increasing (which is a good thing). Releasing early and often yields a smoother rate of introducing those changes to the non-bleeding-edge part of the community. The larger the set of changes landing in a release, the more likely it is to be a painful, breaking experience.

Surely for those of us that live on the edge, it is fun to be able to use the latest and greatest. Hence the reason for monthly releases of betas. Within a month (sometimes shorter) of any new feature being implemented in the language, you'll be able to download the binaries for your favorite distro and begin testing it. The side effect is that there is more time to flesh out a particular implementation and get it corrected prior to it becoming an irreversible eyesore in the language. You also play a greater part in filing bug reports, helping minimize the number of bugs that make it into the final release. Unlike us adventurers, however, companies require a more stable environment to work in. As such, the six-month cycle provides a dependable time frame in which they can expect to see only bugfixes to the current release in use. I think this release plan is a reasonable compromise for both parties.

Companies that don't want frequent changes can just skip releases, using whatever update frequency meets their desires. Companies do this already all the time. The only issue there is how long we continue to backport fixes into past releases. So far we've done very minimal backporting.

And what I am proposing is that we start backporting to stable releases with subsequent bugfix releases. I'm also suggesting that people interested in a more frequent release will have at least five, if not more, such releases (betas) prior to the official release. Live on the edge... use the beta. That's what we do now. At the moment there's nothing that makes dmd.2.064.2 any more bug-free than its previously released counterparts. Very few people participated in the review of the betas, which were released arbitrarily (when the time seemed right). We simply called one of those betas dmd.2.064.2 and moved on. It still has a slew of bugs, and more are discovered daily as people finally start to use the so-called release. I'm saying we go about it a little differently. We get more people involved in the testing process by providing more frequent releases of betas and getting many of the identified bugs fixed before designating a release. To me, you get what you are after: a faster turnaround on fixes (monthly). And the broader customer base gets a more stable product with fewer bugs.

[snip: luka8088's proposed monthly release schedule, quoted in full in another message in this thread]

As a pure D user, I am not really comfortable with having to wait 6 months between releases. I like the idea of getting small new features regularly (Phobos API enhancements, ...), and on some releases getting a long-awaited huge feature like allocators. Small releases will help the community live with D. It's important to make noise regularly around the product. Users will also gain experience more progressively in their development. There is a lot of development I am waiting
Re: Ehem, ARM
On Friday, 15 November 2013 at 09:20:18 UTC, Joakim wrote: On Friday, 15 November 2013 at 08:54:21 UTC, Paulo Pinto wrote: On Friday, 15 November 2013 at 08:24:55 UTC, Joakim wrote: [snip] I think the bigger issue is that llvm/gcc and therefore ldc/gdc have support for cross-compilation, but I don't know the status of cross-compiling support with dmd.

Yes, but FreePascal already had an ARM backend before they started with Android, as well as cross-compiling support infrastructure. As far as I know dmd does not support cross compiling. -- Paulo
Re: Build Master: Scheduling
On 2013-11-15 00:22, Xavier Bigand wrote: There is a lot of development I am waiting for my project, I'll really be happy to see it come before next summer:

- std.logger
- std.serialization
- Weak Ptr
- New signals/slots
- RPC (Remote procedure call)
- AST Macro

I'm wondering if we can manage to release the compiler and Phobos independently. -- /Jacob Carlborg
Re: Ehem, ARM
On 15 November 2013 09:45, Paulo Pinto pj...@progtools.org wrote: On Friday, 15 November 2013 at 09:20:18 UTC, Joakim wrote: [snip] As far as I know dmd does not support cross compiling. -- Paulo

-m32/-m64 is the closest you'll get to cross-compilation in dmd. ;-)

-- Iain Buclaw
*(p < e ? p++ : p) = (c & 0x0f) + '0';
Re: Build Master: Scheduling
On 2013-11-15 10:16, luka8088 wrote: Yes. For example, if you have versions 0.1, 0.2 and 0.3, and you find and fix a bug in 0.3 but you still wish to support backports for 0.2 and 0.1, then you indeed need to make 3 releases: 0.1.1, 0.2.1 and 0.3.1. There's a difference between still supporting old releases and working on five different releases at the same time that haven't been released at all. But then again, having an LTS like others have mentioned seems better, so that only each nth release gets 2.x.1, 2.x.2, 2.x.3. From my perspective, not separating releases with improvements + bug fixes from releases with only bug fixes is an issue, because every new improvement implies a risk of new bugs and some users just want to have one version that is as stable as possible. What do you all think about http://semver.org/ ? We use this kind of versioning notation at work and it turns out to be very good. I like it but I'm not sure how it applies to applications. It's clear to see how it works for libraries but not for applications. I mean, what is considered an API change for an application? Changing the command line flags? -- /Jacob Carlborg
Re: Ehem, ARM
On 2013-11-15 10:45, Paulo Pinto wrote: As far as I know dmd does not support cross compiling. Only for 32bit/64bit on the same architecture. -- /Jacob Carlborg
Re: Qt5 and D
On 15/11/13 00:37, Xavier Bigand wrote: It's only a prototype for the moment, and it will certainly never be compatible with QML files. But if you are only interested in property binding and the composition system of QML, DQuick will make you happy. First of all -- thanks for your work on DQuick :-) However, I think we have to acknowledge that Qt5 in general is a really important toolkit that needs first-class D support. This is only going to become more true as more and more development moves onto mobile platforms. An effective QtD may be a hard problem but it's one that needs a solution.
Re: Build Master: Scheduling
On Friday, November 15, 2013 11:03:37 Jacob Carlborg wrote: On 2013-11-15 00:22, Xavier Bigand wrote: There is a lot of development I am waiting on for my project, I'll really be happy to see it come before next summer: - std.logger - std.serialization - Weak Ptr - New signals/slots - RPC (Remote procedure call) - AST Macro I'm wondering if we can manage to release the compiler and Phobos independently. I don't think so. It's still far too often the case that something has to be tweaked in Phobos because of a compiler change (generally for bug fixes, but that arguably doesn't matter much - it prevents separating their releases regardless). - Jonathan M Davis
Re: Ehem, ARM
On 2013-11-15 00:35, Manu wrote: Very good point. I wonder if there's room to make a push for this in 2.065. Note, I'm willing to work on syncing my branches to upstream if Walter is interested in this. Getting support for 64bit and the modern runtime would be too far away for this release. -- /Jacob Carlborg
Re: Ehem, ARM
On Friday, 15 November 2013 at 06:18:00 UTC, Joakim wrote: On Friday, 15 November 2013 at 00:18:50 UTC, Martin Nowak wrote: On 11/14/2013 05:14 PM, Kai Nacke wrote: But this is only half of the story. My target is Linux/ARM which is already supported by druntime/phobos. If you target a smartphone then you also have to add Android or iOS support to druntime/phobos. Currently version (linux) in druntime is equivalent to glibc. Porting druntime to bionic or newlib might be quite an effort but I have no concrete idea what issues to expect. As Kai says, has anyone worked on getting D running on Android before? I've been thinking about attempting an Android port for years. I thought I'd spin up some x86 VMs this weekend and take a crack at getting D working on Android/x86 (http://www.android-x86.org/) as a first step. If anyone has started on this already, I could chip in on their branch. Also, does dmd have any support for cross-compilation or is it better to stick to ldc/gdc when cross compiling to Android? On a related note, it looks like Rust was ported to Android/ARM earlier this year: https://github.com/mozilla/rust/wiki/Doc-building-for-android A month ago I tried to cross compile a Hello World for Android with ldc on Debian7 x64 with android_ndk_r9 but failed with lots of link errors. One of those issues revealed that qsort is absent from the Android stdlib; I got over it by grabbing a qsort.c and it worked. The other issues are beyond my knowledge and I'm too busy to continue; I hope I can take some time to hack on this later this month.
Re: Look and think good things about D
Jacob Carlborg: What does this show, that ranges are slow in D? It shows that D is awesome. Do you know other languages that allow you to write two programs to solve that problem as short/simple and as fast as those two? :-) Bye, bearophile
Re: Build Master: Scheduling
On 15.11.2013. 11:01, Jacob Carlborg wrote: On 2013-11-15 10:16, luka8088 wrote: Yes. For example, if you have versions 0.1, 0.2 and 0.3, and you find and fix a bug in 0.3 but you still wish to support backports for 0.2 and 0.1, then you indeed need to make 3 releases: 0.1.1, 0.2.1 and 0.3.1. There's a difference between still supporting old releases and working on five different releases at the same time that haven't been released at all. Yeah. I agree. Bad idea. But then again, having an LTS like others have mentioned seems better, so that only each nth release gets 2.x.1, 2.x.2, 2.x.3. From my perspective, not separating releases with improvements + bug fixes from releases with only bug fixes is an issue, because every new improvement implies a risk of new bugs and some users just want to have one version that is as stable as possible. What do you all think about http://semver.org/ ? We use this kind of versioning notation at work and it turns out to be very good. I like it but I'm not sure how it applies to applications. It's clear to see how it works for libraries but not for applications. I mean, what is considered an API change for an application? Changing the command line flags? I think an API change could be analogous to a feature change (and the way features are interfaced). So the version consists of x.y.z: z increments only on bug fixes; y increments when new features are added, but they are backwards compatible (incrementing y resets z to 0); x increments when backwards-incompatible changes are made (incrementing x resets y and z to 0).
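For illustration, luka8088's x.y.z rules can be sketched as a tiny helper (the function name and change labels here are hypothetical, just to make the bump logic concrete):

```python
# Hypothetical sketch of the x.y.z bump rules described above:
# z bumps on bug fixes; y bumps on backwards-compatible features
# (resetting z); x bumps on backwards-incompatible changes
# (resetting y and z).
def bump(version, change):
    x, y, z = map(int, version.split("."))
    if change == "fix":
        z += 1
    elif change == "feature":
        y, z = y + 1, 0
    elif change == "breaking":
        x, y, z = x + 1, 0, 0
    return "%d.%d.%d" % (x, y, z)

print(bump("2.64.1", "fix"))       # 2.64.2
print(bump("2.64.1", "feature"))   # 2.65.0
print(bump("2.64.1", "breaking"))  # 3.0.0
```

This matches semver's convention for libraries; Jacob's question still stands for applications, where "breaking" has no obvious definition.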
Re: Look and think good things about D
qznc: http://rosettacode.org/wiki/Look-and-say_sequence#D The fast version is nearly functional, too. Since memory management is not considered a side effect, there are only the printfs. If the printfs are wrapped into debug, the function is pure. In a system that allows a more fine-grained management of effects through an algebra, memory allocation is an effect. It's easy to show D functions annotated with pure that aren't pure at all because of memory allocation. Bye, bearophile
Re: Ehem, ARM
On Friday, 15 November 2013 at 09:45:42 UTC, Paulo Pinto wrote: As far as I know dmd does not support cross compiling. I started skimming the dmd source to see how it handled porting to new platforms and I found the following: * Linux Version * - * There are two main issues: hosting the compiler on linux, * and generating (targetting) linux executables. * The linux and __GNUC__ macros control hosting issues * for operating system and compiler dependencies, respectively. * To target linux executables, use ELFOBJ for things specific to the * ELF object file format, and TARGET_LINUX for things specific to * the linux memory model. * If this is all done right, one could generate a linux object file * even when compiling on win32, and vice versa. * The compiler source code currently uses these macros very inconsistently * with these goals, and should be fixed. https://github.com/D-Programming-Language/dmd/blob/master/src/backend/cdef.h#L71 So it appears that the dmd backend has some support for cross-compiling, although likely incomplete.
Re: Review of std.signal
On Thursday, 7 November 2013 at 13:45:57 UTC, Dicebot wrote: Our current victim is std.signal by Robert Klotzner which is supposed to deprecate and supersede the existing (and broken) std.signals Robert, it would be great to see some more documentation for `std.signal`. For example, you could: - add more usage examples for each public function - add a thread-safe example for multi-threaded applications - add a better signal description and more external links (for example, to Wikipedia - http://en.wikipedia.org/wiki/Signals_and_slots). Look at the old std.signals - http://dlang.org/phobos/std_signals.html
Re: Vibe.d DUB
On Thursday, 14 November 2013 at 17:32:14 UTC, Jordi Sayol wrote: El 14/11/13 17:48, Chris ha escrit: On Thursday, 14 November 2013 at 16:42:05 UTC, Chris wrote: On Thursday, 14 November 2013 at 16:32:59 UTC, Sönke Ludwig wrote: Am 14.11.2013 16:54, schrieb Chris: Excuse my ignorance. I haven't used DUB for a while now. Don't know what's wrong. Found no hint on the (h)internet. Is it that the latest version of dmd is too high for dub? Should I use dmd 2.063 instead? $ dub upgrade Upgrading project in /home/path/to/project Triggering update of package vibe-d Getting package metadata for vibe-d failed, exception: object.Exception@source/dub/project.d(460): Could not find package candidate for vibe-d >=0.7.12 (...) Judging by the line number of the exception, you are using DUB 0.9.13. There have been a number of changes since then, including how version numbers are validated. It's possible that the reason for the error is the latest vibe.d version 0.7.18-beta.1. I'd recommend to upgrade to 0.9.19 - everything is working there AFAICS. Thanks for the answer. I tried to update dub, but it says that "dub is already the newest version", which is hard for me to believe. Ubuntu 12.04 LTS What does the last line of $ dub help say? dub's last words were: Install options: --version Use the specified version/branch instead of the latest --system Install system wide instead of user local --local Install as in a sub folder of the current directory And that's it. I couldn't find a simple version switch that just tells me "dub version 0.XYZ"
Re: Ehem, ARM
Thanks for all your answers. I see that it's still a big big issue. I believe we really have to push this, because ARM support is vital. If we want people to use D, there will have to be a port to ARM, else it will put people off. The code I've been working on runs fine on Windows/Mac/Linux. But users will ask for effin iPhone or Android support. Everyone wants it and if D can't cater for it ...
Re: Ehem, ARM
On Friday, 15 November 2013 at 09:20:18 UTC, Joakim wrote: For one, dmd not having an ARM backend doesn't impact me since I'm targeting Android/x86 for now, :) as stated earlier. Interesting, then you'll mostly focus on druntime and glibc vs. bionic issues. The linux/ELF support of dmd should mostly work.
Re: Build Master: Scheduling
On 2013-11-15 11:51, luka8088 wrote: I think API change could be analog to features change (and the way they are interfaced). So the version consists of x.y.z z increments only on bug fixes. y increments when new features are added, but they are backwards compatable. Incrementing y resets z to 0. x increments when backwards incompatible change are made. Incrementing x resets y and z to 0. The problem is that it's very easy to break code, even with bug fixes. -- /Jacob Carlborg
Re: Look and think good things about D
On 11/15/13 7:54 AM, bearophile wrote: Jacob Carlborg: What does this show, that ranges are slow in D? It shows that D is awesome. Do you know other languages that allow you to write two programs to solve that problem as short/simple and as fast as those two? :-) Bye, bearophile Probably Crystal. Here's what I was able to do in some minutes:

---
if ARGV.length != 1
  puts "missing argument: n"
  exit 1
end

n = ARGV[0].to_i
str = "1"
buffer = String::Buffer.new(20)
n.times do
  puts str.length
  str.each_chunk do |digit, count|
    buffer << '0' + count
    buffer << digit
  end
  str = buffer.to_s
  buffer.clear
end
---

With n=70 it takes about 4.89s. With n=45 it takes about 0.012s. And with Crystal you could do the second version as well, because you have access to low level stuff like pointers. And also, the language is pretty new so there's still a lot of optimizations to be done. I also thought ranges were pretty fast because of their nature. Why are they slow in this example?
Re: Ehem, ARM
On Friday, 15 November 2013 at 11:46:29 UTC, Chris wrote: I see that it's still a big big issue. It's not that much effort. Build gdc for ARM and fix druntime. I believe we really have to push this, because ARM support is vital. Well somebody has to do it, so if you have so much interest in this...
Re: Ehem, ARM
On Friday, 15 November 2013 at 11:18:06 UTC, Joakim wrote: On Friday, 15 November 2013 at 09:45:42 UTC, Paulo Pinto wrote: As far as I know dmd does not support cross compiling. I started skimming the dmd source to see how it handled porting to new platforms and I found the following: * Linux Version * - * There are two main issues: hosting the compiler on linux, * and generating (targetting) linux executables. * The linux and __GNUC__ macros control hosting issues * for operating system and compiler dependencies, respectively. * To target linux executables, use ELFOBJ for things specific to the * ELF object file format, and TARGET_LINUX for things specific to * the linux memory model. * If this is all done right, one could generate a linux object file * even when compiling on win32, and vice versa. * The compiler source code currently uses these macros very inconsistently * with these goals, and should be fixed. https://github.com/D-Programming-Language/dmd/blob/master/src/backend/cdef.h#L71 So it appears that the dmd backend has some support for cross-compiling, although likely incomplete. Hi Joakim! Yes, there is some support, but not too much. The existence of the TARGET_* macros means that you can't have one compiler with 2 or more platform targets. But there should be no real problem to create a dmd executable on Linux/ARM producing object files for Windows/x86. (Well - no problem except for the real data type. :-) ) But who needs that kind of cross-compiling? To be useful for producing ARM binaries, you need an ARM backend. This is already available for LDC and GDC. IMHO it is easier to pick one of those compilers and think about and create a cross-compiling environment instead of starting from zero. (For LDC, this is issue #490: https://github.com/ldc-developers/ldc/issues/490) Regards, Kai
Re: Look and think good things about D
On Friday, 15 November 2013 at 01:59:27 UTC, bearophile wrote: The first program doesn't even compile with ldc2 v.2.063.2, and Does it compile with dmd 2.063.2? (Is it a D version problem or a LDC bug?) Regards, Kai
Re: Ehem, ARM
Bionic even got [dl_iterate_phdr](https://github.com/android/platform_bionic/commit/24053a461e7a20f34002262c1bb122023134989) which we need for shared library support. Write me a mail if you hit any druntime issues during the port.
Re: Look and think good things about D
Kai Nacke: Does it compile with dmd 2.063.2? (Is it a D version problem or a LDC bug?) Regards, Kai If I find a ldc2 bug I show it in the right newsgroup, so don't worry. It's a matter of updates (purity) in a Phobos function. Bye, bearophile
Re: Ehem, ARM
On 2013-11-15 07:26:56, Jacob Carlborg d...@me.com said: On 2013-11-15 00:35, Manu wrote: Very good point. I wonder if there's room to make a push for this in 2.065. Highly unlikely. It seems like Walter wanted us to first implement ARC, to not be worse than Objective-C currently is. But we haven't been able to come to an agreement on how to do that. Honestly, what I'd do is implement ARC for Objective-C types in the compiler without waiting for Walter to decide on anything. There's almost nothing to decide when it comes to how D/Objective-C does it: you have to do it the same way as clang. And you can't reuse anything Walter will come up with without much tinkering because Objective-C ARC has to manage autoreleased objects. On the other hand, once you have implemented Objective-C ARC it should be easy to retrofit the mechanics of it to other parts of D. I find it funny how I thought about implementing ARC for D/Objective-C even before clang came with it. Another idea involved making the D GC capable of tracking pointers to external blocks of memory (Objective-C objects in this case) and making it call a release function when no longer referenced by D code. Some remnants of that: https://github.com/michelf/druntime/blob/d-objc/src/objc/dobjc.d#L159 -- Michel Fortin michel.for...@michelf.ca http://michelf.ca
Re: Ehem, ARM
On 2013-11-15 07:33:40, Jacob Carlborg d...@me.com said: On 2013-11-15 02:50, Michel Fortin wrote: And since the DMD backend won't emit ARM code, if I were still working on this the first thing I'd do is rebase everything to work on top of LDC. I think that would be quite difficult. Although it would probably be easier to get 64bit and modern runtime support. An advantage of doing it in DMD (except that it's already in DMD) is that there's probably a bigger chance of it being officially added to D and not just an extension to LDC. If it's added to DMD it will automatically be merged by both LDC and GDC. That was my idea too initially: put it in the reference implementation and other implementations will follow, and it'll become part of the language. That'd be great. But it's hard when you have to fix the backend to emit what you need. I have some fears about that for the exception handling stuff in the modern runtime, perhaps they're unjustified. Another idea would be to add it to DMD with the current platform support. Then when it's merged with LDC, let them add support for ARM, 64bit and the modern runtime. I wonder if Walter will approve the merge, despite his stated intention to do so eventually. The surface area of that patch is huge, it'll take him many hours for an initial review, and probably several iterations of that review process will be required to get it to pass. I remember my smaller-scale pull request #3 that never got reviewed... but maybe (hopefully?) things have changed since then. BTW, do you automatically scan the newsgroups for "http://michelf.ca/projects/d-objc/"? :) Haha. No. I skim by topic of interest. But I generally play the passive observer. Replying generally brings other replies, begging for more followup. I have a couple of replies that were written but which I never posted because I anticipated writing the eventual followups wouldn't be worth my time.
It doesn't help that I tend to spend too much time carefully writing anything too, proofreading and weighing every word. But if you're talking about one of my projects there's more chance I'll pop in the conversation. -- Michel Fortin michel.for...@michelf.ca http://michelf.ca
Re: Look and think good things about D
Ary Borenszweig: Here's what I was able to do in some minutes: --- if ARGV.length != 1 puts "missing argument: n" exit 1 end n = ARGV[0].to_i str = "1" buffer = String::Buffer.new(20) n.times do puts str.length str.each_chunk do |digit, count| buffer << '0' + count buffer << digit end str = buffer.to_s buffer.clear end With n=70 it takes about 4.89s. With n=45 it takes about 0.012s. This program is much longer in number of tokens than the first D program. You can write a D program about as fast as this in about the same number of tokens. Perhaps I should add an intermediate third version that shows code that's not as extreme as the two versions there. Thank you for the implicit suggestion. And with Crystal you could do the second version as well, because you have access to low level stuff like pointers. In Crystal do you have final switches, gotos, etc too? And also, the language is pretty new so there's still a lot of optimizations to be done. And LDC2 will improve in the meantime. I also thought ranges were pretty fast because of their nature. It also matters a lot how you use them, this is normal in computer programming. Why are they slow in this example? Just because the first example is not written for speed, I didn't even add run-time timings for it at first. And it's not that slow. Bye, bearophile
Re: Ehem, ARM
On 2013-11-15 13:42, Michel Fortin wrote: Honestly, what I'd do is implement ARC for Objective-C types in the compiler without waiting for Walter to decide on anything. There's almost nothing to decide when it comes to how D/Objective-C does it: you have to do it the same way as clang. And you can't reuse anything Walter will come with without much tinkering because Objective-C ARC has to manage autoreleased objects. On the other hand once you have implemented Objective-C ARC it should be easy to retrofit the mechanics of it to other parts of D. The question is if this is something that Walter would accept to be included. -- /Jacob Carlborg
Re: Build Master: Scheduling
On Friday, 15 November 2013 at 09:17:45 UTC, ponce wrote: Do you happen to work with me? We have two dozen such release branches :) And customers still tend to prefer the slightly bleeding edge ones. While this effectively _does_ work in creating more stable releases, I think that it puts all the burden on the developers in a way that is difficult to ignore. Most of the time backporting is little work, but sometimes you need to redo a fix you already did, for a branch which only exists to be phased out a bit later, in a codebase slightly different in ways nobody can tell anymore. Sometimes it's even very hard to backport a fix, but if you don't do it, how do you tell which branch has which bugs? Coupled with being forced to backport every bugfix to every branch, this can make a compelling enough reason never to contribute. With a typical bad-fix injection rate of ~5%, this also means regressions will crop up in release branches and never be noticed, since they are not introduced at HEAD level but by the very act of backporting. I'm not sure it can be done in a way that feels right. I prefer the N-month release cycle we have, and take as much time and Release Candidates as needed. That's the exact problem with most of the release ideas proposed here: they are terribly inefficient. The schedule proposed by Andrew only requires one maintenance branch (point releases) besides the regular beta releases from master. Backporting to a single stable branch should be within our budget.
Re: Build Master: Scheduling
I'm having a hard time requiring my users to use anything that is not a release (that is, a beta). The point is, there has never been a really stable dmd release. Using it requires to read the mailing list just like with most beta software. We won't be able to release a stable compiler every month anytime soon.
Re: Look and think good things about D
On Friday, 15 November 2013 at 12:47:21 UTC, bearophile wrote: Ary Borenszweig: Here's what I was able to do in some minutes: --- if ARGV.length != 1 puts "missing argument: n" exit 1 end n = ARGV[0].to_i str = "1" buffer = String::Buffer.new(20) n.times do puts str.length str.each_chunk do |digit, count| buffer << '0' + count buffer << digit end str = buffer.to_s buffer.clear end With n=70 it takes about 4.89s. With n=45 it takes about 0.012s. This program is much longer in number of tokens than the first D program. You can write a D program about as fast as this in about the same number of tokens. Perhaps I should add an intermediate third version that shows code that's not as extreme as the two versions there. Thank you for the implicit suggestion. And with Crystal you could do the second version as well, because you have access to low level stuff like pointers. In Crystal do you have final switches, gotos, etc too? And also, the language is pretty new so there's still a lot of optimizations to be done. And LDC2 will improve in the meantime. I also thought ranges were pretty fast because of their nature. It also matters a lot how you use them, this is normal in computer programming. Why are they slow in this example? Just because the first example is not written for speed, I didn't even add run-time timings for it at first. And it's not that slow. Bye, bearophile Slightly OT: Why do languages like Ruby (and now Crystal) have to state the obvious in an awkward way? (2...max).each do Of course you _do_ _each one_ from 2 to max. Is it to make it more human? If anything, human languages and thinking get rid of the superfluous (i.e. the obvious). Just like x = x + 1 (Pythronizing) x++; (obvious, concise, all you need). Buy one, get one free. (= If you buy one (of these), you will get one (another one of these) for free.) Sorry that was just a rant-om comment :)
Re: Look and think good things about D
On 2013-11-15 14:07, Chris wrote: Slightly OT: Why do languages like Ruby (and now Crystal) have to state the obvious in an awkward way? (2...max).each do You can do this as well in Ruby: for e in 2 ... max end But Ruby follows the philosophy that everything is an object. So you invoke the each method on the range object. Of course you _do_ _each one_ from 2 to max. Is it to make it more human? If anything, human languages and thinking get rid of the superfluous (i.e. the obvious). Just like x = x + 1 (Pythronizing) x++; (obvious, concise, all you need). This middle ground is possible in Ruby: x += 1 -- /Jacob Carlborg
Re: Ehem, ARM
On 2013-11-15 13:41, Michel Fortin wrote: That was my idea too initially: put it in the reference implementation and other implementations will follow, and it'll become part of the language. That'd be great. But it's hard when you have to fix the backend to emit what you need. I have some fears about that for the exception handling stuff in the modern runtime, perhaps they're unjustified. Yeah, that's the biggest advantage of doing it in LDC. They already have the correct back end parts in place. I wonder if Walter will approve the merge, despite his stated intention to do so eventually. The surface area of that patch is huge, it'll take him many hours for an initial review, and probably several iterations of that review process will be required to get it to pass. I remember my smaller-scale pull request #3 that never got reviewed... but maybe (hopefully?) things have changed since then. Hopefully things are better now. There seems to be more people now that have a greater knowledge of DMD. Haha. No. I skim by topic of interest. But I generally play the passive observer. Replying generally brings other replies, begging for more followup. I have a couple of replies that were written but which I never posted because I anticipated writing the eventual followups wouldn't be worth my time. It doesn't help that I tend to spend too much time carefully writing anything too, proofreading and weighing every word. Understandable. Like my post about AST macros. It's always a hot topic. But if you're talking about one of my projects there's more chance I'll pop in the conversation. -- /Jacob Carlborg
Re: Look and think good things about D
On 11/15/13 9:47 AM, bearophile wrote: Ary Borenszweig: Here's what I was able to do in some minutes: --- if ARGV.length != 1 puts "missing argument: n" exit 1 end n = ARGV[0].to_i str = "1" buffer = String::Buffer.new(20) n.times do puts str.length str.each_chunk do |digit, count| buffer << '0' + count buffer << digit end str = buffer.to_s buffer.clear end With n=70 it takes about 4.89s. With n=45 it takes about 0.012s. This program is much longer in number of tokens than the first D program. You can write a D program about as fast as this in about the same number of tokens. No. I just counted the tokens and D has more tokens: http://pastebin.com/XZFf8dsj It's even longer if I keep all the program's argument checking (ARGV). If I remove that, the program has 45 tokens. The D version has 81 tokens. And without the imports, D still has 68 tokens. (well, I didn't count newlines as tokens, so maybe that's unfair because newlines in Crystal are significant) Yes, it's not as functional as D. But only because the language is still young. Perhaps I should add an intermediate third version that shows code that's not as extreme as the two versions there. Thank you for the implicit suggestion. And with Crystal you could do the second version as well, because you have access to low level stuff like pointers. In Crystal do you have final switches, gotos, etc too? No. In this, D will always be better, if you really want your program to have gotos. And also, the language is pretty new so there's still a lot of optimizations to be done. And LDC2 will improve in the meantime. And so will LLVM, which is what Crystal uses as a backend. I also thought ranges were pretty fast because of their nature. It also matters a lot how you use them, this is normal in computer programming. But I thought ranges were meant to be fast. No allocations and all of that. In fact, I was kind of sad that Crystal doesn't have a similar concept so it could never get as fast as D ranges. 
But if D ranges are not fast, what's the point of having them and making everyone use them? Why are they slow in this example? Just because the first example is not written for speed, I didn't even add run-time timings for it at first. And it's not that slow. Ah, I thought you did. So I misunderstood your timings. Sorry. Bye, bearophile
Re: Look and think good things about D
On 11/15/13 10:07 AM, Chris wrote: On Friday, 15 November 2013 at 12:47:21 UTC, bearophile wrote: Ary Borenszweig: Here's what I was able to do in some minutes: --- if ARGV.length != 1 puts "missing argument: n" exit 1 end n = ARGV[0].to_i str = "1" buffer = String::Buffer.new(20) n.times do puts str.length str.each_chunk do |digit, count| buffer << '0' + count buffer << digit end str = buffer.to_s buffer.clear end With n=70 it takes about 4.89s. With n=45 it takes about 0.012s. This program is much longer in number of tokens than the first D program. You can write a D program about as fast as this in about the same number of tokens. Perhaps I should add an intermediate third version that shows code that's not as extreme as the two versions there. Thank you for the implicit suggestion. And with Crystal you could do the second version as well, because you have access to low level stuff like pointers. In Crystal do you have final switches, gotos, etc too? And also, the language is pretty new so there's still a lot of optimizations to be done. And LDC2 will improve in the meantime. I also thought ranges were pretty fast because of their nature. It also matters a lot how you use them, this is normal in computer programming. Why are they slow in this example? Just because the first example is not written for speed, I didn't even add run-time timings for it at first. And it's not that slow. Bye, bearophile Slightly OT: Why do languages like Ruby (and now Crystal) have to state the obvious in an awkward way? (2...max).each do Of course you _do_ _each one_ from 2 to max. Is it to make it more human? Absolutely. You are a human and you spend a lot of time reading code. The more human the code looks to you, the better, I think, as long as it doesn't become too long or too annoying to read, like: for every number between 2 and max do ... end :-P
Re: Build Master: Scheduling
On 2013-11-15 14:06, Martin Nowak wrote: The point is, there has never been a really stable dmd release. Using it requires to read the mailing list just like with most beta software. We won't be able to release a stable compiler every month anytime soon. True, but a beta would be even less stable. -- /Jacob Carlborg
Re: Vibe.d DUB
El 15/11/13 12:49, Chris ha escrit: On Thursday, 14 November 2013 at 17:32:14 UTC, Jordi Sayol wrote: El 14/11/13 17:48, Chris ha escrit: On Thursday, 14 November 2013 at 16:42:05 UTC, Chris wrote: On Thursday, 14 November 2013 at 16:32:59 UTC, Sönke Ludwig wrote: Am 14.11.2013 16:54, schrieb Chris: Excuse my ignorance. I haven't used DUB for a while now. Don't know what's wrong. Found no hint on the (h)internet. Is it that the latest version of dmd is too high for dub? Should I use dmd 2.063 instead? $ dub upgrade Upgrading project in /home/path/to/project Triggering update of package vibe-d Getting package metadata for vibe-d failed, exception: object.Exception@source/dub/project.d(460): Could not find package candidate for vibe-d >=0.7.12 (...) Judging by the line number of the exception, you are using DUB 0.9.13. There have been a number of changes since then, including how version numbers are validated. It's possible that the reason for the error is the latest vibe.d version 0.7.18-beta.1. I'd recommend to upgrade to 0.9.19 - everything is working there AFAICS. Thanks for the answer. I tried to update dub, but it says that "dub is already the newest version", which is hard for me to believe. Ubuntu 12.04 LTS What does the last line of $ dub help say? dub's last words were: Install options: --version Use the specified version/branch instead of the latest --system Install system wide instead of user local --local Install as in a sub folder of the current directory And that's it. I couldn't find a simple version switch that just tells me "dub version 0.XYZ" You can set up d-apt http://d-apt.sourceforge.net/ on your Ubuntu 12.04 LTS. This will allow you to easily install the latest dub release: $ sudo apt-get install dub -- Jordi Sayol
Re: Look and think good things about D
Ary Borenszweig: And so LLVM, which is what Crystal uses as a backend. LDC2 uses the same back end :-) But I thought ranges were meant to be fast. No allocations and all of that. In fact, I was kind of sad that Crystal doesn't have a similar concept so it could never get as fast as D ranges. But if D ranges are not fast, what's the point of having them and making everyone use them? If you use ranges badly you will get a slow program; if you use them well with a good back-end, you will have a fast program. I have written a third intermediate program; it's longer than yours, and it seems much slower than your code: void main(in string[] args) { import std.stdio, std.conv, std.algorithm, std.array; immutable n = (args.length == 2) ? args[1].to!uint : 10; if (n == 0) return; auto seq = "1"; writefln("%2d: n. digits: %d", 1, seq.length); foreach (immutable i; 2 .. n + 1) { Appender!string result; foreach (immutable digit, immutable count; seq.group) { result ~= "123"[count - 1]; result ~= digit; } seq = result.data; writefln("%2d: n. digits: %d", i, seq.length); } } On my system it runs in 0.34 seconds for n=50. Could you compare some of the timings of the various D/Crystal versions on your system (using ldc2 for D)? Bye, bearophile
Re: Ehem, ARM
On 2013-11-15 12:53:18 +0000, Jacob Carlborg d...@me.com said: On 2013-11-15 13:42, Michel Fortin wrote: Honestly, what I'd do is implement ARC for Objective-C types in the compiler without waiting for Walter to decide on anything. There's almost nothing to decide when it comes to how D/Objective-C does it: you have to do it the same way as clang. And you can't reuse anything Walter will come up with without much tinkering, because Objective-C ARC has to manage autoreleased objects. On the other hand, once you have implemented Objective-C ARC it should be easy to retrofit the mechanics of it to other parts of D. The question is if this is something that Walter would accept to be included. You mean if Walter would accept D/Objective-C without ARC? No idea. Ask him, or submit a pull request just to gauge the reaction. People have been manually managing memory with retain/release/autorelease for more than a decade and it worked pretty well, much better than any other manual reference counting scheme out there. One problem with introducing ARC later is that you'll need a compiler flag to disable or enable ARC to support both legacy and new code. Personally, I'd be more bothered by the lack of 64-bit than the lack of ARC, but that might be just because I'm good with retain/release/autorelease. -- Michel Fortin michel.for...@michelf.ca http://michelf.ca
Re: Vibe.d DUB
On Friday, 15 November 2013 at 11:49:29 UTC, Chris wrote: dub's last words were: Install options: --version Use the specified version/branch instead of the latest --system Install system wide instead of user local --local Install as in a sub folder of the current directory And that's it. I couldn't find a simple version switch that just tells me dub version 0.XYZ That indicates a rather old version. All recent ones print a version statement at the end of the help output.
Re: Build Master: Scheduling
On Friday, 15 November 2013 at 09:23:27 UTC, luka8088 wrote: Yes, but not having a delay between the time a new feature is implemented and the time it is released is very risky in terms of bugs, because the new feature has not been tested properly. And that is a fact. Common misconception. We have relatively few breaking changes and bugs introduced by new features; much more common are regressions introduced by routine bug fixing (because of all the hidden relations). What is bad about calling those fast releases betas is that they will be unusable with the current approach to betas. Right now a beta is just some snapshot from master which can contain any amount of regressions. The point in time when a beta is released is the point when people start actively fixing all those issues, and that is the key property. If it becomes just another snapshot with no implied effort to fix all regressions in a relatively short term, it gains us nothing over nightly master builds - other than delaying actual releases even longer.
Re: Ehem, ARM
On 2013-11-15 14:38, Michel Fortin wrote: You mean if Walter would accept D/Objective-C without ARC? No idea. Ask him, or submit a pull request just to gauge the reaction. No, I was referring to just implementing ARC like it's done in Objective-C. People have been manually managing memory with retain/release/autorelease for more than a decade and it worked pretty well, much better than any other manual reference counting scheme out there. One problem with introducing ARC later is that you'll need a compiler flag to disable or enable ARC to support both legacy and new code. That's true. Personally, I'd be more bothered by the lack of 64-bit than the lack of ARC, but that might be just because I'm good with retain/release/autorelease. Both are important. Although I personally don't have a need for 64-bit, it's nice to have the modern runtime. -- /Jacob Carlborg
Re: Build Master: Scheduling
On Friday, 15 November 2013 at 12:58:47 UTC, Martin Nowak wrote: That's the exact problem with most of the release ideas proposed here, they are terribly inefficient. The schedule proposed by Andrew only requires one maintenance branch (point releases) besides the regular beta releases from master. Backporting to a single stable branch should be within our budget. It is no different from supporting one LTS backporting branch and the current actual release branch. The only difference from the proposed model is that by calling it a release you take on a certain responsibility to get it into at least a somewhat usable shape at the point of release. DMD releases may have never been stable, but they are _much_ more stable than using git master HEAD.
Re: Vibe.d DUB
On Friday, 15 November 2013 at 13:27:25 UTC, Jordi Sayol wrote: On 15/11/13 12:49, Chris wrote: On Thursday, 14 November 2013 at 17:32:14 UTC, Jordi Sayol wrote: On 14/11/13 17:48, Chris wrote: On Thursday, 14 November 2013 at 16:42:05 UTC, Chris wrote: On Thursday, 14 November 2013 at 16:32:59 UTC, Sönke Ludwig wrote: On 14.11.2013 16:54, Chris wrote: Excuse my ignorance. I haven't used DUB for a while now. Don't know what's wrong. Found no hint on the (h)internet. Is it that the latest version of dmd is too high for dub? Use dmd 2.063 instead? $ dub upgrade Upgrading project in /home/path/to/project Triggering update of package vibe-d Getting package metadata for vibe-d failed, exception: object.Exception@source/dub/project.d(460): Could not find package candidate for vibe-d >=0.7.12 (...) Judging by the line number of the exception, you are using DUB 0.9.13. There have been a number of changes since then, including how version numbers are validated. It's possible that the reason for the error is the latest vibe.d version 0.7.18-beta.1. I'd recommend upgrading to 0.9.19 - everything is working there AFAICS. Thanks for the answer. I tried to update dub, but it says that dub is already the newest version, which is hard for me to believe. Ubuntu 12.04 LTS What does the last line of $ dub help say? dub's last words were: Install options: --version Use the specified version/branch instead of the latest --system Install system wide instead of user local --local Install as in a sub folder of the current directory And that's it. I couldn't find a simple version switch that just tells me dub version 0.XYZ You can set up d-apt http://d-apt.sourceforge.net/ on your Ubuntu 12.04 LTS. This will allow you to easily install the latest dub release: $ sudo apt-get install dub Ok, thanks a million. It works now. Everything is up to date. Now I can start serious work on a vibe-d project. Sönke, the vibed.org homepage has a beautiful layout. Very tasteful design.
Re: Vibe.d DUB
On Thursday, 14 November 2013 at 15:54:09 UTC, Chris wrote: Excuse my ignorance. I haven't used DUB for a while now. Don't know what's wrong. Found no hint on the (h)internet. Is it that the latest version of dmd is too high for dub? Use dmd 2.063 instead? $ dub upgrade Upgrading project in /home/path/to/project Triggering update of package vibe-d Getting package metadata for vibe-d failed, exception: object.Exception@source/dub/project.d(460): Could not find package candidate for vibe-d >=0.7.12 dub(pure @safe bool std.exception.enforce!(bool).enforce(bool, lazy const(char)[], immutable(char)[], uint)+0x2c) [0x81bd8f4] dub(bool dub.project.Project.gatherMissingDependencies(dub.packagesupplier.PackageSupplier[], dub.dependency.DependencyGraph).int __foreachbody4035(ref immutable(char)[], ref dub.dependency.RequestedDependency)+0x20f) [0x81b852f] dub(_aaApply2+0x4f) [0x8232f8f] dub(bool dub.project.Project.gatherMissingDependencies(dub.packagesupplier.PackageSupplier[], dub.dependency.DependencyGraph)+0xe0) [0x81b8288] dub(dub.project.Action[] dub.project.Project.determineActions(dub.packagesupplier.PackageSupplier[], int)+0x9f) [0x81b7877] dub(bool dub.dub.Dub.update(dub.project.UpdateOptions)+0x54) [0x82262c4] dub(_Dmain+0x735) [0x81a0d41] dub(extern (C) int rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])*).void runMain()+0x10) [0x82341d4] dub(extern (C) int rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])*).void tryExec(scope void delegate())+0x18) [0x8233e80] dub(extern (C) int rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])*).void runAll()+0x32) [0x8234212] dub(extern (C) int rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])*).void tryExec(scope void delegate())+0x18) [0x8233e80] dub(_d_run_main+0x121) [0x8233e51] dub(main+0x14) [0x8233d24] /lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0xb74cc4d3] Could not resolve dependencies The dependency graph could not be filled. 
The following changes could be performed: Failure vibe-d >=0.7.12, projectLocal Issued by: TTSServer: >=0.7.12 Seeing as you're having bootstrap problems, just clone the dub git repo and build dub from source. It's completely trivial and is 99% sure to solve the problem.
Re: Qt5 and D
On Fri, 2013-11-15 at 00:37 +0100, Xavier Bigand wrote: […] What do you mean exactly by same UI? Same QML files directly? Indeed. I have one QML file (well, maybe more than one given the way QML works, but basically the UI is entirely in QML) and this can be used from PyQt5 or go-qml to provide multiple implementations behind the same UI. Because it's hard to port/wrap Qt to D, I am working on a project that is based on QML principles (property binding, components, ...). It's called DQuick. D has always claimed to have excellent linkage to C, and hence C++ (sort of) support, so what is it about the Qt library that causes hassle? It's only a prototype for the moment, and it will certainly never be compatible with QML files. But if you are only interested in property binding and the composition system of QML, DQuick will make you happy. Sadly I really do need to work with the same UI file. I'll take a look at DQuick though and see what it's like. Thanks. -- Russel. = Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.win...@ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: rus...@winder.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Re: Qt5 and D
On Fri, 2013-11-15 at 11:10 +0100, Joseph Rushton Wakeling wrote: […] However, I think we have to acknowledge that Qt5 in general is a really important toolkit that needs first-class D support. This is only going to become more true as more and more development moves onto mobile platforms. Agreed. We have GtkD, but Gtk doesn't have the same acceptance as a cross-platform system as Qt and wx. An effective QtD may be a hard problem, but it's one that needs a solution. There is the QtD_Experimental Git repository, but it seems there is no consistent resource available to keep the project moving at a good pace. :-( -- Russel. = Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.win...@ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: rus...@winder.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Re: Look and think good things about D
On Friday, 15 November 2013 at 13:13:51 UTC, Ary Borenszweig wrote: On 11/15/13 10:07 AM, Chris wrote: On Friday, 15 November 2013 at 12:47:21 UTC, bearophile wrote: Ary Borenszweig: Here's what I was able to do in some minutes: --- if ARGV.length != 1 puts "missing argument: n" exit 1 end n = ARGV[0].to_i str = "1" buffer = String::Buffer.new(20) n.times do puts str.length str.each_chunk do |digit, count| buffer << '0' + count buffer << digit end str = buffer.to_s buffer.clear end With n=70 it takes about 4.89s. With n=45 it takes about 0.012s. This program is much longer in number of tokens than the first D program. You can write a D program about as fast as this in about the same number of tokens. Perhaps I should add an intermediate third version that shows code that's not as extreme as the two versions there. Thank you for the implicit suggestion. And with Crystal you could do the second version as well, because you have access to low-level stuff like pointers. In Crystal do you have final switches, gotos, etc. too? And also, the language is pretty new so there are still a lot of optimizations to be done. And LDC2 will improve in the meantime. I also thought ranges were pretty fast because of their nature. It also matters a lot how you use them; this is normal in computer programming. Why are they slow in this example? Just because the first example is not written for speed - I didn't even add run-time timings for it at first. And it's not that slow. Bye, bearophile Slightly OT: Why do languages like Ruby (and now Crystal) have to state the obvious in an awkward way? (2...max).each do Of course you _do_ _each one_ from 2 to max. Is it to make it more human? Absolutely. You are a human and you spend a lot of time reading code. The more human the code looks to you, the better, I think, as long as it doesn't become too long or too annoying to read, like: for every number between 2 and max do ... end :-P Well, that was exactly my point. 
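Since Crystal borrows Ruby's syntax, the look-and-say loop above can be approximated in plain Ruby for readers without a Crystal toolchain. This is my own illustrative sketch, not code from the thread: `each_chunk` and `String::Buffer` are Crystal-specific APIs, so the standard `chunk_while` and string building stand in for them here.

```ruby
# Illustrative look-and-say step in plain Ruby (not from the thread).
# Crystal's each_chunk yields (digit, count) runs; chunk_while rebuilds
# the same runs by grouping consecutive equal characters.
def look_and_say(str)
  str.chars
     .chunk_while { |a, b| a == b }            # group runs of equal digits
     .map { |run| "#{run.size}#{run.first}" }  # emit "<count><digit>" per run
     .join
end

seq = "1"
6.times do
  puts "#{seq.length} digits: #{seq}"
  seq = look_and_say(seq)
end
```

Successive terms are 1, 11, 21, 1211, 111221, 312211 - the same sequence whose lengths the D and Crystal versions print.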
As a human being you don't need the patronizing (and highly annoying) "for every number". This is what you say when you explain it to a newbie. But there is no need to spell this out in the syntax. Syntax of programming languages is (or should be) like road signs, or any other signs: concise and expressive. Else, what's the point? I know that languages like Lua have the philosophy that non-programmers should be able to use it. But every human being is capable of abstracting things. There is no need for this terrible syntax: (2..max).each do: end It doesn't add anything to the code except for useless characters. Humans have used signs and acronyms for ages. We can cope with it. I once saw the most beautiful encrypted message in Arabic, which when read properly unfolds into an array of characters and meaning. We humans can deal with it. I still don't see why x++; is a problem and has to be spelled out as x = x + 1, or even x += 1 (slightly better). If Ruby programmers had invented spelling, you would Double U Ar I Tee Ee like this. Ha ha ha! :-)
Re: Ehem, ARM
On 2013-11-15 14:02:20 +0000, Jacob Carlborg d...@me.com said: On 2013-11-15 14:38, Michel Fortin wrote: You mean if Walter would accept D/Objective-C without ARC? No idea. Ask him, or submit a pull request just to gauge the reaction. No, I was referring to just implementing ARC like it's done in Objective-C. I honestly don't think it can work much differently. You call retain and release at the right places. Extending it to other D types is just a matter of calling different functions depending on the type. Objective-C has a couple of special cases for pointers to autoreleased objects, but that's just a couple more rules to add, and those should be kept specific to Objective-C types. Weak pointers are a more difficult topic, but a separate one, I'd say. -- Michel Fortin michel.for...@michelf.ca http://michelf.ca
Re: Look and think good things about D
On 11/15/13 10:33 AM, bearophile wrote: Ary Borenszweig: And so LLVM, which is what Crystal uses as a backend. LDC2 uses the same back end :-) But I thought ranges were meant to be fast. No allocations and all of that. In fact, I was kind of sad that Crystal doesn't have a similar concept so it could never get as fast as D ranges. But if D ranges are not fast, what's the point of having them and making everyone use them? If you use ranges badly you will get a slow program, if you use them well with a good back-end, you will have a fast program. I have written a third intermediate program, it's longer than yours, and it seems much slower than your code: void main(in string[] args) { import std.stdio, std.conv, std.algorithm, std.array; immutable n = (args.length == 2) ? args[1].to!uint : 10; if (n == 0) return; auto seq = "1"; writefln("%2d: n. digits: %d", 1, seq.length); foreach (immutable i; 2 .. n + 1) { Appender!string result; foreach (immutable digit, immutable count; seq.group) { result ~= "123"[count - 1]; result ~= digit; } seq = result.data; writefln("%2d: n. digits: %d", i, seq.length); } } On my system it runs in 0.34 seconds for n=50. Could you compare some of the timings of the various D/Crystal versions on your system (using ldc2 for D)? Sure. This last version you wrote, compiling it with -enable-inlining -release -O3, takes 0.054s (please tell me if I'm missing some flags; in Crystal I just used --release). The Crystal version takes 0.031s. I also tried with n=70. In D: 9.265s. In Crystal: 4.863s. I couldn't compile the first version written in D because I get: Error: pure function '__lambda1' cannot call impure function 'join' (I think you mentioned this in another post in this thread) The super-optimized D version, with n=70, takes 1.052s. This is the fastest. However, I'm starting to think that all those immutable, final switches and gotos are useless if they don't give a performance benefit (well, final switches do give you more safety). 
Maybe it's just that D/ldc doesn't use the immutability information and everything else to do aggressive optimizations?
Re: Look and think good things about D
On 11/15/13 11:39 AM, Chris wrote: Well, that was exactly my point. As a human being you don't need the patronizing (and highly annoying) "for every number". This is what you say when you explain it to a newbie. But there is no need to spell this out in the syntax. Syntax of programming languages is (or should be) like road signs, or any other signs: concise and expressive. Else, what's the point? I know that languages like Lua have the philosophy that non-programmers should be able to use it. But every human being is capable of abstracting things. There is no need for this terrible syntax: (2..max).each do: end No need to do that. You can, if you want to. I would have done: 2.upto(max) do ... end It doesn't add anything to the code except for useless characters. What do you mean by useless characters? How do you do it in D?
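For what it's worth, the spellings being debated are interchangeable in Ruby (and Crystal follows suit). A tiny sketch of my own showing that `(2..max).each`, `2.upto(max)`, and a `for` loop all visit exactly the same integers - the disagreement is purely about readability:

```ruby
# Three equivalent ways to iterate over 2..max in Ruby; a pure style choice.
max = 5

a = []
(2..max).each { |i| a << i }   # range + each, the form Chris dislikes

b = []
2.upto(max) { |i| b << i }     # upto, the form Ary prefers

c = []
for i in 2..max do c << i end  # for-loop form, closest to C style

puts a.inspect  # all three arrays hold [2, 3, 4, 5]
```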
Re: Backtraces on Linux 64-bit
On Sunday, 10 November 2013 at 07:01:57 UTC, yazd wrote: I've implemented a simple thing using addr2line. It uses dwarf debug information to resolve the addresses provided by the backtrace to line numbers. https://github.com/yazd/backtrace-d Very cool, thanks, will check it out later today and report back. Wondering if there any chances of something like that being provided by the standard D library in the foreseeable future? Because without it, effectively debugging even small-sized projects is nigh impossible and painful.
Re: Look and think good things about D
Ary Borenszweig: Your timings are good enough for me, I have updated the rosettacode page with the third D version. However, I'm starting to think that all those immutable, final switches and gotos are useless if they don't give a performance benefit (well, final switches do give you more safety). In the second D program if you compile with ldc2 the final switch gives a significant performance increase :-) Maybe it's just that D/ldc doesn't use the immutability information and everything else to do aggressive optimizations? This was discussed some time ago. Probably there are ways for ldc2 to use a little better the static information of well annotated D code. Bye, bearophile
Re: Build Master: Scheduling
On Friday, 15 November 2013 at 07:50:28 UTC, Jacob Carlborg wrote: The opposite of an LTS release is _not_ a beta release. It's a regular release. It seems it would be better if the beta releases were regular releases and the releases were LTS releases. I'm having a hard time requiring my users to use anything that is not a release (that is, a beta). On the other hand, supporting builds with 2 versions, the latest beta and the latest LTS, is not a big burden imo (unless you expose bleeding-edge features in the API).
Re: Look and think good things about D
On Friday, 15 November 2013 at 15:20:20 UTC, Ary Borenszweig wrote: On 11/15/13 11:39 AM, Chris wrote: Well, that was exactly my point. As a human being you don't need the patronizing (and highly annoying) "for every number". This is what you say when you explain it to a newbie. But there is no need to spell this out in the syntax. Syntax of programming languages is (or should be) like road signs, or any other signs: concise and expressive. Else, what's the point? I know that languages like Lua have the philosophy that non-programmers should be able to use it. But every human being is capable of abstracting things. There is no need for this terrible syntax: (2..max).each do: end No need to do that. You can, if you want to. I would have done: 2.upto(max) do ... end It doesn't add anything to the code except for useless characters. What do you mean by useless characters? How do you do it in D? I prefer the C-style syntax: foreach (whatever) { // ... } // closes block All this "end" stuff is superfluous, but at least it closes blocks. My pet hate as far as syntax is concerned is Python. The indentation (t)error. Change, copy, paste, run: Indentation error on line ... WTF?! In C style it doesn't matter if you have if (a < b) { return; } if (a < b) { return; } if (a < b) { return; } or in loops foreach (i; array) { if (i == "Hello!") { break; } } Cleaning up indentation is the programmer's business, not the language's.
Re: DIP 45 - approval discussion
On 15/11/2013 08:32, Walter Bright wrote: It's not that bad. Phobos can be built by specifying all the files on the command line. Also, at least on Windows, you can call functions in a DLL without saying dllimport on them and suffering a layer of indirection. The magic happens in the import library, which provides the relevant thunk. It's about 15 years since I worked on this stuff, so I might be a bit fuzzy on the details. I know that, and we are using that fact in DIP 45. For data symbols we did suggest a similar behaviour, but it has to be implemented by us first. And yes, we need data symbols, because of all the implicit data symbols the D programming language generates: TypeInfos, vtables, etc. I suggest that you read all the links I gave for further reading in DIP 45, and then DIP 45 again, because you are pretty close to what we suggested in DIP 45 without actually realizing it. Kind Regards Benjamin Thaut
Re: DIP 45 - approval discussion
On 15/11/2013 08:27, Walter Bright wrote: On 11/14/2013 3:37 AM, Benjamin Thaut wrote: On 14.11.2013 11:28, Walter Bright wrote: On 11/12/2013 2:23 PM, Martin Nowak wrote: One possibility is that modules listed on the command line are regarded as export==dllexport, and other modules as export==dllimport. This of course means that functions may wind up going through the dllimport indirection even for calling functions in the same DLL, but it should work. That doesn't work for the case where a DLL A uses a DLL B. In that case export has to mean dllexport for all modules of A but dllimport for all modules of B. I don't follow. If you're compiling A, you're specifying A modules on the command line, and those will regard the B modules as dllimport. Ok, now I understand what you suggest. So basically you want to do the exact same thing as DIP 45, just giving the compiler parameter a different name. But you still didn't give a solution to the problem that the compiler does not know which modules are part of a shared library, which are part of a static library, and which are part of the executable. And please also consider single-file compilation. Kind Regards Benjamin Thaut
Re: Build Master: Scheduling
On Friday, 15 November 2013 at 15:25:29 UTC, QAston wrote: On Friday, 15 November 2013 at 07:50:28 UTC, Jacob Carlborg wrote: On the other hand, supporting builds with 2 versions, the latest beta and the latest LTS, is not a big burden imo (unless you expose bleeding-edge features in the API). I like the Ubuntu release model. Translated into D it would be: - a (regular) release every 2 months * supported until the next (regular) release gets out * point releases will follow every 2 weeks, with bug fixes only (better a short list of bugs well fixed, that is without regressions, than a longer list with plenty of regressions) - every 3rd (regular) release, that is every 6 months, will be an LTS release * supported until the next LTS release gets out (possibly longer) * point releases will follow every 2 weeks, with bug fixes only (unless causing serious code breakage) The first 2 (regular) releases will introduce bug fixes as well as new features, while the LTS release will (aim to) provide the new features from the above regular releases, plus bug fixes. So, for creating an LTS release, the first four months will be for features and bug fixes, while the latter 2 months will be just bug fixes (features could also appear here if judged stable enough).
Re: DIP 45 - approval discussion
Walter Bright newshou...@digitalmars.com wrote in message news:l64lji$2buh$1...@digitalmars.com... On 11/15/2013 12:00 AM, Daniel Murphy wrote: Walter Bright newshou...@digitalmars.com wrote in message news:l64imh$27na$1...@digitalmars.com... Also, at least on Windows, you can call functions in a DLL without saying dllimport on them and suffering a layer of indirection. The magic happens in the import library, which provides the relevant thunk. It's about 15 years since I worked on this stuff, so I might be a bit fuzzy on the details. The symbol in the import library just translates to an import table indirection. Yes, meaning the compiler doesn't have to do it if the import library is set up correctly. (implib.exe should do it.) Right, I was saying the indirection still exists.
Re: Look and think good things about D
On Fri, 15 Nov 2013 02:09:52 +0100, bearophile bearophileh...@lycos.com wrote: I have created two interesting D entries for this Rosetta Code task; is someone willing to create a Reddit entry for this? They show very different kinds of code in D. http://rosettacode.org/wiki/Look-and-say_sequence#D Bye, bearophile APL is awesome! -- Marco
Re: std.templatecons ready for comments
On 12.11.2013 2:19, John Colvin wrote: On Monday, 11 November 2013 at 22:18:27 UTC, John Colvin wrote: On Monday, 11 November 2013 at 20:33:37 UTC, Denis Shelomovskij wrote: On 10.11.2013 19:30, Ilya Yaroshenko wrote: Hello, all! std.templatecons: functional-style template constructors. Documentation: http://9il.github.io/phobosx/std.templatecons.html Source: https://github.com/9il/phobosx/blob/master/std/templatecons.d Note: dmd >= 2.64 required Please destroy! I am sorry for my English in sources/docs. Best Regards, Ilya No more plain modules, please. Call it `std.meta.something`. I'm against including a few range-like (i.e. like `std.range` and `std.algorithm` stuff) templates like `RepeatExactly`, `templateStaticMap`, and `templateFilter`. First, these are generic tuple manipulation templates that belong in `std.meta.generictuple` (or however it's called now). Second, I'd like to see a one-to-one analog of all (most) applicable range operation functions for generic tuples in Phobos, by merging in e.g. [1]. As for `std.functional`-like templates I can't tell much, as the only ones I know of are at http://denis-sh.bitbucket.org/unstandard/unstd.templates.html [1] http://denis-sh.bitbucket.org/unstandard/unstd.generictuple.html How would you feel about me cannibalising parts of that for my attempt at making a proper std.meta module? Conveniently it seems that we use similar core ideas, but have implemented disjoint sets of functionality. *somewhat disjoint Use it as you wish. Also feel free to e-mail me with questions/functionality requests (or open issues for the Unstandard project). Just in case you want, we could also make `unstd.meta` and then push it to Phobos. -- Денис В. Шеломовский Denis V. Shelomovskij
Re: DIP 45 - approval discussion
On 15.11.2013 08:02, Benjamin Thaut wrote: On 15.11.2013 00:38, Rainer Schuetze wrote: Maybe. Another rule might be that only the declarations actually annotated with export get exported with the instantiation, so you could add export to the whole class or only to some declarations. I don't think this is a good idea. It should be possible to put export: on top of a file and just export everything. If you limit it to declarations, the following would work: export __gshared int g_var; but the following wouldn't: export __gshared int g_var = 0; Although it would really produce equivalent code. I don't follow. What does this have to do with template instances? I was referring to your example where you wanted to export just one symbol from a template class. compiling c and d as single files will silently generate different code, because when compiling d, the export alias is never seen. (this cannot happen with standard variables, only when declared multiple times, but differently, with extern(C/C++/System)). And do you already have an idea how we could work around this problem? It might be possible to produce some linker errors: e.g. the dllimport version drags in the __imp_var symbol, which also provides a _var definition record that produces a link error (e.g. by referring to a non-existing symbol). If it links to a non-dllimport version that actually refers to _var, it bails out. Generated COMDAT records that define the _var symbol as well might cause problems here, though, because the current COMDAT selection strategy is "pick any".
Re: DIP 45 - approval discussion
On 15/11/2013 18:04, Rainer Schuetze wrote: I don't follow. What does this have to do with template instances? I was referring to your example where you wanted to export just one symbol from a template class. Then I misunderstood your use of the word declaration. By declarations did you mean template declarations only?
Re: DIP 45 - approval discussion
On 15.11.2013 18:18, Benjamin Thaut wrote: On 15/11/2013 18:04, Rainer Schuetze wrote: I don't follow. What does this have to do with template instances? I was referring to your example where you wanted to export just one symbol from a template class. Then I misunderstood your use of the word declaration. By declarations did you mean template declarations only? I actually meant declarations inside template definitions, as in your example: class WeakReferenced(T) { export __gshared WeakReferenced!T[] m_weakTable; } The proposal was that export alias WeakReferenced!int exported_weak_int_array; would only export the symbol for m_weakTable, not any other class symbols like the class info. I'm not sure if this is a good idea; I'm just exploring possibilities here.
Re: DIP 45 - approval discussion
On 15/11/2013 18:41, Rainer Schuetze wrote: I actually meant declarations inside template definitions, as in your example: class WeakReferenced(T) { export __gshared WeakReferenced!T[] m_weakTable; } The proposal was that export alias WeakReferenced!int exported_weak_int_array; would only export the symbol for m_weakTable, not any other class symbols like the class info. I'm not sure if this is a good idea; I'm just exploring possibilities here. Sounds good to me. It gives the ability to precisely select what gets exported and what not. It basically also matches what C++ does.
Re: Ehem, ARM
On Friday, 15 November 2013 at 12:07:19 UTC, Martin Nowak wrote: On Friday, 15 November 2013 at 09:20:18 UTC, Joakim wrote: For one, dmd not having an ARM backend doesn't impact me since I'm targeting Android/x86 for now, :) as stated earlier. Interesting, then you'll mostly focus on druntime and glibc vs. bionic issues. The linux/ELF support of dmd should mostly work. Yes, I thought that would be easier, to split the effort into two parts. First, get D working on Android/x86, then, linux/ARM. Some fine day, we combine the two into Android/ARM. :) On Friday, 15 November 2013 at 12:18:20 UTC, Kai Nacke wrote: Hi Joakim! Yes, there is some support, but not too much. The existence of the TARGET_* macros means that you can't have one compiler with 2 or more platform targets. I think what I'll try initially is to hack the linux target on dmd to produce an Android/ELF/x86 executable, by tying into the Android NDK and linker. That might be the shortest path to something that works. But there should be no real problem to create a dmd executable on Linux/ARM producing object files for Windows/x86. (Well - no problem except for the real data type. :-) ) But who needs that kind of cross-compiling? Maybe not linux/ARM, but if D ever takes off, I could see it being convenient someday to cross-compile Windows/x64 executables on linux/x64 servers, perhaps by tying into Wine. I have compiled a small Windows utility using dmd under Wine on a FreeBSD host before: it worked. :) You wouldn't need to license Windows for a bunch of build servers and could cut costs that way. Obviously not a concern today, but we should get cross-compiling working as much as possible, and cross-compiling to Android might be a good first step. Hell, by then, linux/ARM might be prevalent in the data center. ;) To be useful for producing ARM binaries, you need an ARM backend. This is already available for LDC and GDC.
IMHO it is easier to pick one of those compilers and think about and create a cross-compiling environment instead of starting from zero. (For LDC, this is issue #490: https://github.com/ldc-developers/ldc/issues/490) I'm only focusing on Android/x86 for now: why does everyone keep bringing up ARM? It's almost as though Android/x86 gets translated into Android/ARM in their heads. ;) Perhaps it is easier to cross-compile with ldc/gdc, or perhaps the overall port will be harder because I would have to dive into those larger llvm/gcc codebases, whereas the dmd backend looks simpler to me, at least so far. I'll give it a whirl with dmd first and then move to ldc if I get stuck. Thanks for the advice, I may end up following it.
[OT] Best algorithm for extremely large hashtable?
This isn't directly related to D (though the code will be in D), and I thought this would be a good place to ask. I'm trying to implement an algorithm that traverses a very large graph, and I need some kind of data structure to keep track of which nodes have been visited, that (1) allows reasonably fast lookups (preferably O(1)), and (2) doesn't require GBs of storage (i.e., some kind of compression would be nice). The graph nodes can be represented in various ways, but possibly the most convenient representation is as n-dimensional vectors of (relatively small) integers. Furthermore, graph edges are always between vectors that differ only by a single coordinate; so the edges of the graph may be thought of as a subset of the edges of an n-dimensional grid. The hashtable, therefore, needs to represent some connected subset of this grid in a space-efficient manner that still allows fast lookup times. The naïve approach of using an n-dimensional bit array is not feasible because n can be quite large (up to 100 or so), and the size of the grid itself can get up to about 10 in each direction, so we're looking at a potential maximum size of 10^100, clearly impractical to store explicitly. Are there any known good algorithms for tackling this problem? Thanks! T -- Creativity is not an excuse for sloppiness.
Re: Ehem, ARM
On 15 November 2013 18:40, Joakim joa...@airpost.net wrote: On Friday, 15 November 2013 at 12:07:19 UTC, Martin Nowak wrote: On Friday, 15 November 2013 at 09:20:18 UTC, Joakim wrote: For one, dmd not having an ARM backend doesn't impact me since I'm targeting Android/x86 for now, :) as stated earlier. Interesting, then you'll mostly focus on druntime and glibc vs. bionic issues. The linux/ELF support of dmd should mostly work. Yes, I thought that would be easier, to split the effort into two parts. First, get D working on Android/x86, then, linux/ARM. Some fine day, we combine the two into Android/ARM. :) GNU/Linux on ARM will come first... it's now only a matter of time. ;) -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Re: Look and think good things about D
Ary Borenszweig: This last version you wrote, compiling it with -enable-inlining -release -O3, takes 0.054s (please tell me if I'm missing some flags, in Crystal I just used --release). The Crystal version takes 0.031s. I also tried with n=70. In D: 9.265s. In Crystal: 4.863s. This version is more than twice as fast, because group() avoids decoding UTF:

void main(in string[] args) {
    import std.stdio, std.conv, std.algorithm, std.array, std.string;
    immutable n = (args.length == 2) ? args[1].to!uint : 10;
    if (n == 0) return;
    auto seq = ['1'];
    writefln("%2d: n. digits: %d", 1, seq.length);
    foreach (immutable i; 2 .. n + 1) {
        Appender!(typeof(seq)) result;
        foreach (const digit, const count; seq.representation.group) {
            result ~= "123"[count - 1];
            result ~= digit;
        }
        seq = result.data;
        writefln("%2d: n. digits: %d", i, seq.length);
    }
}

I couldn't compile the first version written in D because I get: Error: pure function '__lambda1' cannot call impure function 'join' Just remove the pure for ldc2. However, I'm starting to think that all those immutable, final switches and gotos are useless if they don't give a performance benefit Adding immutable to variables sometimes helps catch some troubles in your code. In a modern language variables should be immutable by default, and functions should be pure by default, because the default choice should be the faster one (if it's equally safe) and because when you read code it's simpler to reason about things that have fewer capabilities (like the capability to mutate, for a value). Bye, bearophile