Re: Start of dmd 2.064 beta program
On 17 Oct 2013 00:40, bearophile bearophileh...@lycos.com wrote: Walter Bright: I'll go have myself flogged, then. But please be gentle and use something soft, like a fake snow leopard tail. Surely having to deal with C++ whenever Walter works on dmd is punishment enough :D.
Re: Facebook is using D in production starting today
On Saturday, 12 October 2013 at 12:08:03 UTC, Todor wrote: On Friday, 11 October 2013 at 05:11:49 UTC, Walter Bright wrote: On 10/10/2013 10:05 PM, Nick Sabalausky wrote: Awesome! Great bragging rights for D :) It's the first battle signaling the end of Middle Earth, and the rise of the Age of D. The old guard will be sailing to the Grey Havens soon. They're taking the Hobbits to Isengard! Actually, I think this development is akin to the March of the Ents. They spend a long time thinking and are slow to rouse ... but when they are roused ... :-P https://www.youtube.com/watch?v=h5YwMpSN6CU
Re: Start of dmd 2.064 beta program
On 2013-10-16 23:16, Jonathan M Davis wrote: Yes, but after Andrej did the great changelog for 2.063, Walter publicly admitted that he had been wrong about the changelog. Andrej showed Walter that it _is_ worth doing something more than just a list of bugzilla issues. So, I would assume that whatever Andrej is unhappy with Walter for is something else. Andrej wrote: I'm wondering whether there will be the nifty changelog like it was for 2.063? Andrej? :D We'll see if someone else volunteers to do it. I'm not doing it out of protest. http://forum.dlang.org/thread/l3chnd$1mvs$1...@digitalmars.com?page=4#post-mailman.2221.1381889714.1719.digitalmars-d-announce:40puremagic.com I interpreted that as he originally created the changelog in protest of Walter's claim that it wasn't necessary. -- /Jacob Carlborg
Re: Mono-D 0.5.4.1 - Build, completion other fixes + Unittests via rdmd
On Wed, Oct 16, 2013 at 2:21 PM, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote: On 10/16/13 5:38 AM, Bruno Medeiros wrote: On 08/10/2013 14:18, Alexander Bothe wrote: Are there any plans/tricks/hacks on how to get programs built with dmd debuggable with gdb? Then we could also release the addin for Windows as well! (Afaik I asked the same question some time ago, but well, perhaps something did change over time :-)) I was wondering the same as well... But from the lack of answers I think not much can be done? :/ What are the issues involved? I did get basic debugging sessions working, but I forgot whether it was dmd or gdc. Andrei On OS X, lldb has better support than gdb.
Re: code.dlang.org now supports categories and search - license information now required
On Thursday, 17 October 2013 at 09:33:46 UTC, Sönke Ludwig wrote: There has been another important change that requires existing packages to be updated: All packages must now have the fields description and license present to be published. The license field has to be set according to the specification [1]. All existing branches and version tags stay unaffected by this requirement and are still available. This change has been done to prepare for an automated validation of license terms in complex dependency hierarchies. This may be an important feature as the number of available packages grows, which is why this requirement has been introduced now as early as possible. [1]: http://code.dlang.org/package-format#licenses A little addition: allow using the full license name, not only the short name: `BSL-1.0` or `Boost Software License 1.0` `AFL-3.0` or `Academic Free License 3.0` It simplifies the creation of human-readable license names. Add a `public domain` license. Add the ability to specify an array of licenses: license: [BSL-1.0, AFL-3.0, public domain] I think it's better than license: BSL-1.0 or AFL-3.0 or public domain
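For reference, the proposed array form might look like this inside a package's dub configuration. This is a hypothetical sketch of the poster's suggestion only, not the format actually accepted by the registry at the time; the package name and description are made up:

```json
{
    "name": "my-package",
    "description": "An example package illustrating the proposed license array",
    "license": ["BSL-1.0", "AFL-3.0", "public domain"]
}
```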
Re: code.dlang.org now supports categories and search - license information now required
Am 17.10.2013 11:55, schrieb ilya-stromberg: On Thursday, 17 October 2013 at 09:33:46 UTC, Sönke Ludwig wrote: There has been another important change that requires existing packages to be updated: All packages must now have the fields description and license present to be published. The license field has to be set according to the specification [1]. All existing branches and version tags stay unaffected by this requirement and are still available. This change has been done to prepare for an automated validation of license terms in complex dependency hierarchies. This may be an important feature as the number of available packages grows, which is why this requirement has been introduced now as early as possible. [1]: http://code.dlang.org/package-format#licenses A little addition: allow using the full license name, not only the short name: `BSL-1.0` or `Boost Software License 1.0` `AFL-3.0` or `Academic Free License 3.0` It simplifies the creation of human-readable license names. How about letting the registry display the full name, but keeping the short name for package descriptions? Having a single compact name reduces the chances for errors or ambiguities and reduces the amount of mapping code that is needed when reasoning about licenses. My initial idea was to fuzzy match licenses and also allow alternatives like GPLv2 instead of GPL-2.0, but in the end it just increases the potential for mistakes. Add a `public domain` license. Will do. Add the ability to specify an array of licenses: license: [BSL-1.0, AFL-3.0, public domain] I think it's better than license: BSL-1.0 or AFL-3.0 or public domain There will still be the need to specify 'or later', so this will only make it partially more structured. I'm a little undecided on this one.
Re: code.dlang.org now supports categories and search - license information now required
On Thursday, 17 October 2013 at 10:07:40 UTC, Sönke Ludwig wrote: Am 17.10.2013 11:55, schrieb ilya-stromberg: On Thursday, 17 October 2013 at 09:33:46 UTC, Sönke Ludwig wrote: There has been another important change that requires existing packages to be updated: All packages must now have the fields description and license present to be published. The license field has to be set according to the specification [1]. All existing branches and version tags stay unaffected by this requirement and are still available. This change has been done to prepare for an automated validation of license terms in complex dependency hierarchies. This may be an important feature as the number of available packages grows, which is why this requirement has been introduced now as early as possible. [1]: http://code.dlang.org/package-format#licenses A little addition: allow using the full license name, not only the short name: `BSL-1.0` or `Boost Software License 1.0` `AFL-3.0` or `Academic Free License 3.0` It simplifies the creation of human-readable license names. How about letting the registry display the full name, but keeping the short name for package descriptions? Having a single compact name reduces the chances for errors or ambiguities and reduces the amount of mapping code that is needed when reasoning about licenses. My initial idea was to fuzzy match licenses and also allow alternatives like GPLv2 instead of GPL-2.0, but in the end it just increases the potential for mistakes. OK, maybe you are right. Add a `public domain` license. Will do. Add the ability to specify an array of licenses: license: [BSL-1.0, AFL-3.0, public domain] I think it's better than license: BSL-1.0 or AFL-3.0 or public domain There will still be the need to specify 'or later', so this will only make it partially more structured. I'm a little undecided on this one. We can use `+` to indicate 'or later': license: [BSL-1.0+, AFL-3.0+, public domain] I think it will be clear.
Re: code.dlang.org now supports categories and search - license information now required
Am 17.10.2013 12:14, schrieb ilya-stromberg: Add the ability to specify an array of licenses: license: [BSL-1.0, AFL-3.0, public domain] I think it's better than license: BSL-1.0 or AFL-3.0 or public domain There will still be the need to specify 'or later', so this will only make it partially more structured. I'm a little undecided on this one. We can use `+` to indicate 'or later': license: [BSL-1.0+, AFL-3.0+, public domain] I think it will be clear. Fair enough, that should work. But one potential issue just occurred to me. What if a product is licensed under multiple licenses that must _all_ apply? That would basically be MPL-2.0 _and_ Apache-1.0 instead of _or_. This is something that happens quite frequently when code is taken from multiple projects or when the license was changed, but some files were under foreign copyright.
Re: code.dlang.org now supports categories and search - license information now required
But one potential issue just occurred to me. What if a product is licensed under multiple licenses that must _all_ apply? That would basically be MPL-2.0 and Apache-1.0 instead of or. This is something that happens quite frequently when code is taken from multiple projects or when the license was changed, but some files were under foreign copyright. A lot of projects have per-file license specifics.
Re: code.dlang.org now supports categories and search - license information now required
On Thursday, 17 October 2013 at 10:39:45 UTC, Sönke Ludwig wrote: But one potential issue just occurred to me. What if a product is licensed under multiple licenses that must _all_ apply? That would basically be MPL-2.0 _and_ Apache-1.0 instead of _or_. This is something that happens quite frequently when code is taken from multiple projects or when the license was changed, but some files were under foreign copyright. It's impossible. For example, GPL-2.0 and GPL-3.0 are incompatible. So, if a user requires both GPL-2.0 AND GPL-3.0, it means that we have invalid license terms. So, we should provide only an OR modifier, not AND. And I agree with ponce: we should provide per-file license specifics; that should solve your problem.
Re: code.dlang.org now supports categories and search - license information now required
Am 17.10.2013 13:42, schrieb ilya-stromberg: On Thursday, 17 October 2013 at 10:39:45 UTC, Sönke Ludwig wrote: But one potential issue just occurred to me. What if a product is licensed under multiple licenses that must _all_ apply? That would basically be MPL-2.0 _and_ Apache-1.0 instead of _or_. This is something that happens quite frequently when code is taken from multiple projects or when the license was changed, but some files were under foreign copyright. It's impossible. For example, GPL-2.0 and GPL-3.0 are incompatible. So, if a user requires both GPL-2.0 AND GPL-3.0, it means that we have invalid license terms. So, we should provide only an OR modifier, not AND. And I agree with ponce: we should provide per-file license specifics; that should solve your problem. If you have per-file differences, then this in fact means that both licenses need to be obeyed when using the package. If those licenses are incompatible, that's a problem of the package combining them - it's basically unusable then. But going a per-file way is by far too detailed. It's hard enough to ensure proper per-package licensing and keep license comments up to date, but this will IMO just result in chaos. Also, while GPL 2 and 3 may not be compatible, there are other licenses which are compatible, but one is not a superset of the other.
Re: code.dlang.org now supports categories and search - license information now required
On Thursday, 17 October 2013 at 12:06:49 UTC, Sönke Ludwig wrote: If you have per-file differences, then this in fact means that both licenses need to be obeyed when using the package. If those licenses are incompatible, that's a problem of the package combining them - it's basically unusable then. But going a per-file way is by far too detailed. It's hard enough to ensure proper per-package licensing and keep license comments up to date, but this will IMO just result in chaos. Also, while GPL 2 and 3 may not be compatible, there are other licenses which are compatible, but one is not a superset of the other. OK, I understand your position. Maybe just provide a special syntax for these cases, for example: license: [{BSL-1.0, MIT}]
Re: code.dlang.org now supports categories and search - license information now required
On 10/17/13, Sönke Ludwig slud...@outerproduct.org wrote: Having a single compact name reduces the chances for errors Speaking of which, if I forget to add the license to a package file, is there any way to get this information from the server? I mean like a page saying that my package was rejected because it's missing X or Y, rather than having to guess whether the package file is bad or the server is just temporarily overloaded. Personally I think it would be better if we had a dub publish command, which would then error back if the server rejects the package, rather than making this whole process automated based on searching GitHub (I assume this is how the dub server works now).
Re: code.dlang.org now supports categories and search - license information now required
Am 17.10.2013 14:13, schrieb ilya-stromberg: On Thursday, 17 October 2013 at 12:06:49 UTC, Sönke Ludwig wrote: If you have per-file differences, then this in fact means that both licenses need to be obeyed when using the package. If those licenses are incompatible, that's a problem of the package combining them - it's basically unusable then. But going a per-file way is by far too detailed. It's hard enough to ensure proper per-package licensing and keep license comments up to date, but this will IMO just result in chaos. Also, while GPL 2 and 3 may not be compatible, there are other licenses which are compatible, but one is not a superset of the other. OK, I understand your position. Maybe just provide a special syntax for these cases, for example: license: [{BSL-1.0, MIT}] It would still have to be valid JSON. So something like license: {or: [{and: [BSL-1.0, MIT]}, GPL-2.0]} would work. But that is hardly more practical than license: BSL-1.0 and MIT or GPL-2.0 With the advantage of not requiring operator precedence, though.
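Written out in full, the nested encoding under discussion would look roughly like this as valid JSON. This is hypothetical syntax from the thread, not part of the actual package format:

```json
{
    "license": {
        "or": [
            { "and": ["BSL-1.0", "MIT"] },
            "GPL-2.0"
        ]
    }
}
```

The flat string form, "license": "BSL-1.0 and MIT or GPL-2.0", encodes the same expression more compactly, but only once the reader and the tooling agree on operator precedence.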
Re: code.dlang.org now supports categories and search - license information now required
Am 17.10.2013 14:25, schrieb Andrej Mitrovic: Personally I think it would be better if we had a dub publish command, which would then error back if the server rejects the package, rather than make this whole process automated based on searching github (I assume this is how the dub server works now). dub publish sounds like something that may considerably increase the complexity of the command line tool, especially in the long term, and it also increases the coupling to the public registry, whereas now it just needs a very small HTTP API that can be fulfilled by any HTTP file server. So I'd rather want to avoid that if possible.
Re: code.dlang.org now supports categories and search - license information now required
Am 17.10.2013 14:25, schrieb Andrej Mitrovic: On 10/17/13, Sönke Ludwig slud...@outerproduct.org wrote: Having a single compact name reduces the chances for errors Speaking of which, if I forget to add the license to a package file is there any way to get this information from the server? I mean like a page saying that my package was rejected because it's missing X or Y, rather than having to guess whether the package file is bad or the server is just temporarily overloaded. Personally I think it would be better if we had a dub publish command, which would then error back if the server rejects the package, rather than make this whole process automated based on searching github (I assume this is how the dub server works now). When you log in on the website and then go to My packages, you'll see a table of all packages along with excerpts of possible errors. You can then click on each package to see the full list of errors. There is also a button to trigger a manual update after changes now, so that the usual uncertain wait is not necessary anymore.
Re: code.dlang.org now supports categories and search - license information now required
On Thursday, 17 October 2013 at 12:27:02 UTC, Sönke Ludwig wrote: Am 17.10.2013 14:13, schrieb ilya-stromberg: On Thursday, 17 October 2013 at 12:06:49 UTC, Sönke Ludwig wrote: If you have per-file differences, then this in fact means that both licenses need to be obeyed when using the package. If those licenses are incompatible, that's a problem of the package combining them - it's basically unusable then. But going a per-file way is by far too detailed. It's hard enough to ensure proper per-package licensing and keep license comments up to date, but this will IMO just result in chaos. Also, while GPL 2 and 3 may not be compatible, there are other licenses which are compatible, but one is not a superset of the other. OK, I understand your position. Maybe just provide a special syntax for these cases, for example: license: [{BSL-1.0, MIT}] It would still have to be valid JSON. So something like license: {or: [{and: [BSL-1.0, MIT]}, GPL-2.0]} would work. But that is hardly more practical than license: BSL-1.0 and MIT or GPL-2.0 With the advantage of not requiring operator precedence, though. We can use or as the default. So, your example: license: [{and: [BSL-1.0, MIT]}, GPL-2.0] Yes, the example license: BSL-1.0 and MIT or GPL-2.0 looks better, but what about more complex cases: license: BSL-1.0 and MIT or GPL-2.0 and BSL-1.0 What does it mean?
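One common way to resolve that ambiguity is to give "and" higher precedence than "or", as most expression grammars do. A minimal sketch in JavaScript (the function name and the output shape are invented for illustration; this is not dub code):

```javascript
// Hypothetical sketch: parse a license expression where "and" binds
// tighter than "or", so "A and B or C and D" reads as (A and B) or (C and D).
function parseLicense(expr) {
    // Split on "or" first (lowest precedence), then split each term on "and".
    return expr.split(/\s+or\s+/).map(function (term) {
        var parts = term.split(/\s+and\s+/);
        return parts.length === 1 ? parts[0] : { and: parts };
    });
}

var alts = parseLicense("BSL-1.0 and MIT or GPL-2.0 and BSL-1.0");
// alts[0] is { and: ["BSL-1.0", "MIT"] }
// alts[1] is { and: ["GPL-2.0", "BSL-1.0"] }
console.log(JSON.stringify(alts));
```

Under that rule the ambiguous example would mean (BSL-1.0 and MIT) or (GPL-2.0 and BSL-1.0), and a single-license entry like "public domain" parses unchanged.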
Re: code.dlang.org now supports categories and search - license information now required
On 2013-10-17 14:33, Sönke Ludwig wrote: dub publish sounds like something that may considerably increase the complexity of the command line tool, especially in the long term, and it also increases the coupling to the public registry, whereas now it just needs a very small HTTP API that can be fulfilled by any HTTP file server. So I'd rather want to avoid that if possible. You could have something like this: dub publish <git-tag> Shouldn't be much different compared to how it works now. It would just trigger the server to look for that tag, instead of doing it automatically. -- /Jacob Carlborg
Re: code.dlang.org now supports categories and search - license information now required
On 2013-10-17 11:33, Sönke Ludwig wrote: There has been another important change that requires existing packages to be updated: All packages must now have the fields description and license present to be published. The license field has to be set according to the specification [1]. All existing branches and version tags stay unaffected by this requirement and are still available. Perhaps add the license: Apple Public Source License. This can be useful for creating bindings to Apple specific libraries. Is there a corresponding license for Microsoft? -- /Jacob Carlborg
Re: code.dlang.org now supports categories and search - license information now required
Am 17.10.2013 15:26, schrieb Jacob Carlborg: On 2013-10-17 14:06, Sönke Ludwig wrote: If you have per-file differences, then this in fact means that both licenses need to be obeyed when using the package. Not necessarily. There can be two completely separated targets, that don't share any code. I don't know if that's possible in Dub, but in theory it would be. Not necessarily, but possibly, so it probably has to cope with it. One possibility to handle your example would be to make different sub packages for the two targets.
Re: code.dlang.org now supports categories and search - license information now required
On Thursday, 17 October 2013 at 13:31:06 UTC, Jacob Carlborg wrote: On 2013-10-17 11:33, Sönke Ludwig wrote: There has been another important change that requires existing packages to be updated: All packages must now have the fields description and license present to be published. The license field has to be set according to the specification [1]. All existing branches and version tags stay unaffected by this requirement and are still available. Perhaps add the license: Apple Public Source License. This can be useful for creating bindings to Apple specific libraries. Is there a corresponding license for Microsoft? Yes: Microsoft Public License (Ms-PL) Microsoft Reciprocal License (Ms-RL) http://en.wikipedia.org/wiki/Shared_source
Re: code.dlang.org now supports categories and search - license information now required
Am 17.10.2013 15:28, schrieb Jacob Carlborg: On 2013-10-17 14:33, Sönke Ludwig wrote: dub publish sounds like something that may considerably increase the complexity of the command line tool, especially in the long term, and it also increases the coupling to the public registry, whereas now it just needs a very small HTTP API that can be fulfilled by any HTTP file server. So I'd rather want to avoid that if possible. You could have something like this: dub publish <git-tag> Shouldn't be much different compared to how it works now. It would just trigger the server to look for that tag, instead of doing it automatically. Well, the other issue with that is that there is no guarantee that the server can fulfill the request in a timely manner. It may be busy getting information about other packages/tags/branches, which makes it impossible to get direct feedback. What about an e-mail notification, though? Seems like the most natural channel.
Re: code.dlang.org now supports categories and search - license information now required
The only license JSON that looks valid is the string. Simple bracketing will suffice for complex licenses. On 17 Oct 2013 16:05, Sönke Ludwig slud...@outerproduct.org wrote: Am 17.10.2013 15:28, schrieb Jacob Carlborg: On 2013-10-17 14:33, Sönke Ludwig wrote: dub publish sounds like something that may considerably increase the complexity of the command line tool, especially in the long term, and it also increases the coupling to the public registry, whereas now it just needs a very small HTTP API that can be fulfilled by any HTTP file server. So I'd rather want to avoid that if possible. You could have something like this: dub publish <git-tag> Shouldn't be much different compared to how it works now. It would just trigger the server to look for that tag, instead of doing it automatically. Well, the other issue with that is that there is no guarantee that the server can fulfill the request in a timely manner. It may be busy getting information about other packages/tags/branches, which makes it impossible to get direct feedback. What about an e-mail notification, though? Seems like the most natural channel.
Re: code.dlang.org now supports categories and search - license information now required
On Thursday, 17 October 2013 at 13:26:06 UTC, Jacob Carlborg wrote: On 2013-10-17 14:06, Sönke Ludwig wrote: If you have per-file differences, then this in fact means that both licenses need to be obeyed when using the package. Not necessarily. There can be two completely separated targets that don't share any code. I don't know if that's possible in Dub, but in theory it would be. I have an example of such a thing, but honestly I don't think dub should go that far. Just providing the superset of what licenses a package _might_ fall under is already useful.
Re: code.dlang.org now supports categories and search - license information now required
On 2013-10-17 15:44, Sönke Ludwig wrote: Not necessarily, but possibly, so it probably has to cope with it. One possibility to handle your example would be to make different sub packages for the two targets. What happens then with the main/super package, with regard to licensing? -- /Jacob Carlborg
Re: code.dlang.org now supports categories and search - license information now required
On 2013-10-17 15:53, Sönke Ludwig wrote: Added APSL-2.0 (Apple Public Source License) and MS-PL (Microsoft Public License). Cool, thanks. -- /Jacob Carlborg
Re: code.dlang.org now supports categories and search - license information now required
Am 17.10.2013 16:59, schrieb Jacob Carlborg: On 2013-10-17 15:44, Sönke Ludwig wrote: Not necessarily, but possibly, so it probably has to cope with it. One possibility to handle your example would be to make different sub packages for the two targets. What's happens then with the main/super package, in regards to licensing? It's independent as long as it doesn't explicitly add the submodules as dependencies. If it does add them, it would have to add both licenses. But other packages can still only reference a sub package if they want.
Re: code.dlang.org now supports categories and search - license information now required
Am 17.10.2013 17:02, schrieb Sönke Ludwig: Am 17.10.2013 16:59, schrieb Jacob Carlborg: On 2013-10-17 15:44, Sönke Ludwig wrote: Not necessarily, but possibly, so it probably has to cope with it. One possibility to handle your example would be to make different sub packages for the two targets. What's happens then with the main/super package, in regards to licensing? It's independent as long as it doesn't explicitly add the submodules as dependencies. If it does add them, it would have to add both licenses. But other packages can still only reference a sub package if they want. s/only reference a/just reference a single/
Re: code.dlang.org now supports categories and search
Am 16.10.2013 21:01, schrieb Sönke Ludwig: The DUB package registry [1] has finally gained support for the text and category based search of packages. There is also a category for D standard library candidate modules, as has been suggested recently. If you already have any registered packages, please log in and add the proper categories to each of them (My packages - click on package name). Should there be no exact category match, and that specific category is likely to have multiple entries in the future, please make a corresponding pull request against the category file [2] on GitHub. It's still all a little rough around the edges. Any bugs can be reported on the issue tracker [3] or discussed in the forum [4]. [1]: http://code.dlang.org [2]: https://github.com/rejectedsoftware/dub-registry/blob/master/categories.json [3]: https://github.com/rejectedsoftware/dub-registry/issues [4]: http://forum.rejectedsoftware.com/groups/rejectedsoftware.dub/ Now also with JavaScript support for switching categories and alphabetic sorting.
Re: Start of dmd 2.064 beta program
On 10/16/13, Brad Roberts bra...@puremagic.com wrote: That's not a what, that's a who. - We do not have any vision or major plans ahead for the language. Currently we're stuck in a bug-driven development environment, where bugs are arbitrarily picked off of bugzilla and fixed. But there are no major plans ahead, e.g. let's plan to fix these X major bugs for some upcoming release. We can't force people to work on X or Y, but if they're in a visible place and marked important and scheduled to be fixed, this will give an incentive for contributors to work on these problems. - We do not have any defined release timeline. When is it time to start prepping for a release? It's up to Walter's arbitrary decision when this happens; he doesn't even talk to the community or contributors about whether it's a good time for a beta phase (maybe there's a huge or disruptive new pull request that's planned to be merged and a beta should be delayed). - We do not have a defined timeline for the beta testing period. How long until we decide that the beta has been tested long enough and a release can be made? Again, it's up to Walter's decision. Having a defined release cycle and a beta testing period will ultimately be beneficial for everyone. People who are busy can rearrange their schedule to make free time and ensure they have enough time to test a beta compiler on their projects, and contributors to D can prepare for a cycle of days or weeks where they can rapidly work on reducing regressions and polish everything up for the release (e.g. writing an elaborate changelog). - We do not have a proper release system. Nick Sabalausky has been working hard on one[1], but Walter seems to have completely ignored it. It would have been a terrific opportunity to try and make it work for this release. What better way to test it than to test it on a release? - The betas are still not visible. 
The homepage makes no note of a new beta being available, the download page does not list the betas either, and AFAICT there's no RSS feed either. The social media groups are not notified either (neither Andrei's nor Walter's twitter feed makes a mention of a new beta being out; the same applies to https://twitter.com/D_Programming or the #dlang hash tags). To top it all off, you cannot post to the dmd-beta newsgroups from the D Forums; you have to separately register to this mailing list. If we want user feedback on betas we absolutely must make the betas visible and give an opportunity for people to post feedback. - Walter is still not tagging the beta releases by the file name (it's always dmd2beta.zip). I've complained about this several times and IIRC someone else did as well at DConf (maybe I'm remembering wrong though). They should at least be named dmd2_064_beta1.zip, dmd2_064_beta2.zip. And all of them should always be available for download (including visibility on the download page), so people who do not use Git or build manually from master can quickly check whether a regression was introduced in a specific beta version. - I still sigh when I hear about Walter and Andrei having private phone conversations, or when any kind of decision is made behind the scenes rather than publicly where the community has a say in it. Walter's implementation of UDAs directly in master, which led to having a deprecated syntax even though nobody used this syntax, is what comes to mind. - Both Walter and Andrei are not following the rules and are making novice mistakes w.r.t. Git and GitHub. Walter still seems to struggle with basic usage of git, where people continually have to re-explain what's wrong and how to fix an issue. I'm sorry, but if someone bought the Git book years ago and is still struggling with *concepts*, then no amount of hand-holding is going to help. 
And Andrei doing things like merging a dozen pull requests at once, with complete disregard for the fact that merging to master means other pulls could easily break (and so master can be broken). You cannot make so many merges unless you're absolutely sure each pull request does not interfere with another. - Back to Walter: a few weeks ago he merged a pull request overnight, without regard to the pull request not being fully tested by the test machines. The result? Master was broken **for the entire next day**. Nobody knew which commit broke master, so nothing was done until Walter came back to GitHub the next day and started reverting his pulls. In the meantime the entire day was wasted because nobody's pull request could get green. Luckily it was a Sunday, so there weren't too many complaints. But I could have easily merged a few pulls that day (as it happens I like to do things on a Sunday), and as a result we would have a smaller pull queue. -- There are just many things that are going plain wrong here, and a lot of promises are always made but ultimately never delivered (whether it's about breaking changes or an improved development process -- again, think about scheduling bug fixes
Re: Start of dmd 2.064 beta program
On Wednesday, 16 October 2013 at 17:41:37 UTC, deadalnix wrote: Hum, I have several regressions in SDC's test suite. I have to investigate more to fix the code or submit a bug report. It looks related to AA. What are the changes that affect AA in this new release? It turns out that it is a closure bug. Sadly, this involves compiling SDC completely and running it on some test data. I can repro consistently, but it seems really hard to get a small repro. I just moved to the US, and am quite busy, especially since the government shutdown has complicated things quite a bit. Anyway, it is unlikely that I'll have a reduced case in the next ~2 weeks. I'm not sure how we should proceed here, but the bug seems serious to me (the worst kind: everything compiles fine, but the codegen is bogus).
Re: Eloquently sums up my feelings about the disadvantages of dynamic typing
On Wed, Oct 16, 2013 at 11:07:20PM -0400, Jonathan M Davis wrote: On Thursday, October 17, 2013 04:49:29 growler wrote: On Thursday, 17 October 2013 at 02:37:35 UTC, H. S. Teoh wrote: On Wed, Oct 16, 2013 at 10:16:17PM -0400, Jonathan M Davis [...] I can't possibly like any language where the type of a variable could change based on whether the condition in an if statement is true (because a variable gets assigned a completely different type depending on the branch of the if statement). auto func(alias condition)() { static if (condition()) int t; else float t; return t; } ;-) But if you make a mistake it is very likely that you'll see it at compile time, not runtime. Plus D has very explicit casting, which also helps. The key difference is that the type of a variable won't change on you in D. Sure, the return type of a function could change depending on the types of its arguments or the value of its template arguments, but it's all known at compile time, and you'll get an error for any type mismatch. Yes, I know that. :) I was only being half-serious. D actually does it right in this case: if for whatever reason the resulting type from the static if is incompatible with the surrounding code, then as you say the static typing system will throw up its hands at compile time, rather than at runtime on a customer's production environment. In contrast, with a dynamically typed language, the type of a variable can actually change while your program is running, resulting in function calls being wrong due to the fact that they don't work with the new type. If you're dealing with static typing, the type of every variable is fixed, and the legality of code doesn't suddenly change at runtime. bool func(Variant x, Variant y) { return x < y; } func(1, 2); // ok func(1.0, 2.0); // ok func("a", 1); // hmmm... ;-) And it's not like scripting requires dynamic typing (e.g. 
you can write scripts in D, which is statically typed), so as far as I'm concerned, there's no excuse for using dynamic typing. It just causes bugs - not only that, but the bugs that it causes are trivially caught with a statically typed language. I pretty much outright hate dynamic typing and expect that I will never heavily use a language that has it. [...] Dynamic typing has its place... database query results, for example. However, the trouble with today's so-called dynamic languages is that you're forced to use dynamic typing everywhere, even where it doesn't make sense (and TBH, most of the time it's not appropriate). It's great for writing truly generic functions like: // Javascript function add(x,y) { return x+y; } but seriously, how much of *real*-world JS code is *that* generic? Most actual JS code looks like this: function checkInput(data) { if (data.name == 'myname' || data.age > 10 || ...) return 1; return 0; } which presumes the existence of specific fields in the 'data' parameter, which implies that 'data' is an object type (as opposed to, say, an int), and which will fail miserably if 'data' just so happens to be a non-object, or an object without the presumed fields, or an object with those exact fields that just happen to be of the wrong type, etc. There are just so many levels of wrong with using dynamic typing for this kind of code that being *forced* to do it is simply asking for bugs. Not to mention that the name 'data' says absolutely nothing about exactly what structure it must have in order for the function to work; you have to read the function body to find out (and even then, it's not always obvious exactly what is expected). 
You end up wasting so much time trying to deduce what type(s) the function must take (where in a statically-typed language you just read the type name and look it up) or otherwise working around the type system (or lack thereof) in similar ways that it completely negates the purported productivity benefit of dynamic typing. (And yes I know there are ways to improve code readability -- by parameter naming conventions, for example, which are essentially reinventing type annotations poorly -- you still won't have the benefit of compile-time type checking.) Static typing with automatic type inference is a far superior solution in most cases. And in D, you even have std.variant for those occasions where you *do* want dynamic typing (whereas in languages like C, you have to use tricky type casts and void* dereferences, which are very prone to bugs). Being forced one way or another never ends well. T -- Just because you survived after you did it, doesn't mean it wasn't stupid!
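The std.variant route mentioned above might look like this (a sketch; the point is that mixing types only fails at run time once you explicitly opt into Variant):

```d
import std.variant : Variant;

// Comparison is dispatched at run time; incompatible operand types
// throw instead of being rejected at compile time.
bool less(Variant x, Variant y)
{
    return x < y;
}

void main()
{
    assert(less(Variant(1), Variant(2)));     // ok: both int
    assert(less(Variant(1.0), Variant(2.0))); // ok: both double
    // less(Variant("a"), Variant(1));        // compiles, but fails at run time
}
```

Everywhere else, the types stay static and mismatches are caught by the compiler.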
Re: Help needed testing automatic win64 configuration
Attempt four: http://gnuk.net/dmd-2.064-beta-newsc-lib64-4.exe This one has a couple more changes to the dmd2beta.zip it is downloading from my server. 1. dmd.exe has been replaced with one built from 2.064 branch HEAD (so I didn't have to use LINKCMD64). 2. lib64 has been added and phobos64.lib and gcstub64.lib have been moved into there from lib.
Re: Help needed testing automatic win64 configuration
On Thursday, 17 October 2013 at 05:27:15 UTC, Brad Anderson wrote: I'm switching to a simpler approach for this next iteration which I will post shortly. http://gnuk.net/dmd-2.064-beta-newsc-lib64-4.exe
Re: Help needed testing automatic win64 configuration
Before I tried x64 I was happy to see it's finally coming to the masses, but now, *cough*, excuse me: how can we develop x64 apps if we can't hook up a debugger? Is this something with my configuration, or something on dmd's part which denies any possibility of using debuggers at this moment?
Re: Help needed testing automatic win64 configuration
On Thursday, 17 October 2013 at 06:15:43 UTC, evilrat wrote: Before I tried x64 I was happy to see it's finally coming to the masses, but now, *cough*, excuse me: how can we develop x64 apps if we can't hook up a debugger? Is this something with my configuration, or something on dmd's part which denies any possibility of using debuggers at this moment? This is above my pay grade. Perhaps Walter or Rainer can answer.
Re: Help needed testing automatic win64 configuration
On Thursday, 17 October 2013 at 06:17:57 UTC, Brad Anderson wrote: On Thursday, 17 October 2013 at 05:27:15 UTC, Brad Anderson wrote: I'm switching to a simpler approach for this next iteration which I will post shortly. http://gnuk.net/dmd-2.064-beta-newsc-lib64-4.exe Now it builds and links right out of the box (tested with a console dmd build). Will test with Visual D later.
Re: Debugging support for D - wiki
On 16.10.2013 14:33, Bruno Medeiros wrote: * How complete is the debugging info for DMD-Win64? Is it fully implemented, and/or are there any issues or limitations? (Rainer, you are likely the best to answer this one) The stock compiler does not replace '.' with '@' in symbol names, which confuses the Visual Studio debugger when inspecting class members. The debug info emitted for Win64 should be more or less the same as for Win32 (which is only basic to begin with).
Re: Debugging support for D - wiki
On Thursday, 17 October 2013 at 06:42:58 UTC, Rainer Schuetze wrote: On 16.10.2013 14:33, Bruno Medeiros wrote: * How complete is the debugging info for DMD-Win64? Is it fully implemented, and/or are there any issues or limitations? (Rainer you are likely the best to answer this one) The stock compiler does not do the replacement '@' for '.' which confuses the Visual Studio debugger when inspecting class members. The debug info emitted for win64 should more or less be the same as for Win32 (which is only basic to begin with). i need halp!11 During testing of Brad Anderson's dmd beta x64 setup (http://forum.dlang.org/thread/ebrbcdmsothbutakr...@forum.dlang.org) I've encountered a serious problem: compiled x64 programs work, but I'm totally unable to debug them. What's the problem? (Visual D says something like cannot launch debugger on 'program path', hr=89710016)
Re: Help needed testing automatic win64 configuration
On 16.10.2013 13:13, Manu wrote: On 16 October 2013 17:16, Rainer Schuetze r.sagita...@gmx.de wrote: We are trying to talk Walter into doing this but it seems there are topics that fail to gain traction. Cool, well if it gets there, I should add that it would be nice to lose the '64' suffix too. No reason for them to have different filenames if they're in lib64/. Just creates extra annoying logic in build scripts. I agree. [...] The installer tries to pick the latest version of both VS and SDK installations. I see there is a problem when selecting a different C runtime than what your C/C++ code is assuming. Is the Windows SDK a problem, too? The files used are just import libraries, so the latest should be fine, as long as you don't need linker errors when you build an application to be run on XP but are calling Win8-only functions. You're probably right about the system library path. I haven't had any issues of this sort, but I just tend to behave conservatively when it comes to this sort of thing. There are so many unexpected ways that linking goes wrong in the Windows ecosystem. The runtime libraries are definitely a problem. The 'select most recent' policy is incorrect in my case. VS2010 is the environment I do 99% of my work in, and I have experienced issues when C and D projects are together in the same solution. I'm not sure what the best solution is here. My feeling is that the installer should prompt for which version to hook up as the default, with VisualD overriding these variables somehow. There's no way for VisualD to override this variable when invoking the compiler? You mention below that it would only be possible with separate linkage; why is that? [...] It seems the installer failed to replace two occurrences of %WindowsSdkDir%. WindowsSdkDir is set by batch files vsvars32.bat and friends. I see conflicting goals here: 1. the installer expands the variables WindowsSdkDir and VCINSTALLDIR in sc.ini to work without running vsvars32.bat. 
It has to make decisions on what versions to pick up. 2. when running dmd by Visual D you want to select settings according to the current Visual Studio, which means it needs the unexpanded variables. The current option to allow both is to not run the linker through dmd, but invoke it manually. What do you mean by 'manually' exactly? Is there anything that can be done in VisualD to override these variables when invoking the compiler? By manually I mean that the linker is not run through dmd, but is called directly from the batch generated by Visual D. This means Visual D has to extract all the settings from sc.ini and rebuild the command line that dmd would generate. In addition, it needs to know which settings have to be replaced and which have been set deliberately by the user. There's one other detail that I forgot in my prior email; I think it would be really handy to include the DirectX lib path by default. It's a very standard MS lib package, and anyone who does multimedia development will surely have it on their system, and require it to be hooked up. My DX libs are here: C:\Program Files (x86)\Microsoft DirectX SDK (June 2010)\Lib\x64 It seems I have an environment variable: DXSDK_DIR=C:\Program Files (x86)\Microsoft DirectX SDK (June 2010)\ It also seems to register a presence in the registry at: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\DirectX\Microsoft DirectX SDK (June 2010)\InstallPath I usually have more faith in the registry, but the env variable is surely going to be present on everyone's machine. I'm not sure we should add too many special cases, everybody has his own set of favorite libraries (I haven't touched DirectX for more than 10 years). Considering that you probably have to make your own imports for the respective declarations, I think it is ok to add an appropriate library path to your project as well. 
It seems the DX-SDK does not end up in the LIB environment variable for the VS command prompt either, though I see it added in the Visual Studio settings.
Re: Help needed testing automatic win64 configuration
On 16.10.2013 18:05, evilrat wrote: also, does anyone knows why it fails to start debugger on x64 binary using VisualD? Are you using VS2012 shell? I was experiencing the same problem, that is a bug in the shell installer. The newer Visual D installers fix this problem, the latest is currently http://www.dsource.org/projects/visuald/browser/downloads/VisualD-v0.3.37rc4.exe If you try to use mago as the debugger: it does not (yet) support x64, you have to switch to the VS debugger.
Re: Help needed testing automatic win64 configuration
On 17.10.2013 01:49, Manu wrote: On 17 October 2013 02:51, dnewbie r...@myopera.com wrote: 1. 64-bit link.exe: C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\Bin\amd64\link.exe 2. 64-bit C Runtime libraries: C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\lib\amd64 3. 64-bit Windows import libraries: C:\Program Files\Microsoft SDKs\Windows\v7.0\Lib\x64 That looks like VS2008, does VisualD work under VS2008? Either way, the DMD installer should support detection of those paths too. VS2008 is supported, still my preferred version of VS ;-)
Re: Help needed testing automatic win64 configuration
On Thursday, 17 October 2013 at 07:13:20 UTC, Rainer Schuetze wrote: On 16.10.2013 18:05, evilrat wrote: also, does anyone knows why it fails to start debugger on x64 binary using VisualD? Are you using VS2012 shell? I was experiencing the same problem, that is a bug in the shell installer. The newer Visual D installers fix this problem, the latest is currently http://www.dsource.org/projects/visuald/browser/downloads/VisualD-v0.3.37rc4.exe If you try to use mago as the debugger: it does not (yet) support x64, you have to switch to the VS debugger. Yes, I have the VS2012 shell on my machine. Thanks, it works now 0_0
Re: Debugging support for D - wiki
On 17.10.2013 08:47, evilrat wrote: On Thursday, 17 October 2013 at 06:42:58 UTC, Rainer Schuetze wrote: On 16.10.2013 14:33, Bruno Medeiros wrote: * How complete is the debugging info for DMD-Win64? Is it fully implemented, and/or are there any issues or limitations? (Rainer you are likely the best to answer this one) The stock compiler does not do the replacement '@' for '.' which confuses the Visual Studio debugger when inspecting class members. The debug info emitted for win64 should more or less be the same as for Win32 (which is only basic to begin with). i need halp!11 During testing of Brad Anderson's dmd beta x64 setup (http://forum.dlang.org/thread/ebrbcdmsothbutakr...@forum.dlang.org) I've encountered a serious problem: compiled x64 programs work, but I'm totally unable to debug them. What's the problem? (Visual D says something like cannot launch debugger on 'program path', hr=89710016) Please see my reply there.
Re: Early review of std.logger
On Thursday, 17 October 2013 at 02:13:12 UTC, Eric Anderton wrote: The strength of this is that it would allow us to freely integrate D libraries that use std.logger, yet filter their log output from *outside* the library through the std.logger API. This is one of the most important aspects in my opinion. Std.logger should be easy to use, so library writers are encouraged to use it. Compare this with the unittest keyword, which makes it easy to write some simple tests. Of course, flexibility to use complex machinery for using the messages/tests is necessary. Just like we can hook up more advanced unit testing frameworks, we should be able to hook up more advanced logging machinery. The advanced stuff is not for Phobos though. Advanced stuff for unittests is for example, parallel execution and graphical reports. Advanced stuff for logging is for example log rotation and networking.
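The unittest comparison is apt: the built-in blocks make the simple case nearly free, for example (compile with dmd -unittest -main):

```d
// Count the lowercase vowels in a string.
size_t countVowels(string s)
{
    size_t n;
    foreach (c; s)
        if (c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u')
            ++n;
    return n;
}

// Tests live right next to the code they exercise.
unittest
{
    assert(countVowels("hello") == 2);
    assert(countVowels("xyz") == 0);
}
```

A logging module with a comparably low barrier to entry would presumably see the same adoption by library writers.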
Re: Reflection/Introspection?
On 2013-10-17 03:47, DDD wrote: Does D have any reflection/introspection? I'd like to do something like this: class Contact { string first, last; int number; } contact = json_deserialized!Contact(json_string); Is something like that possible? If you need something more advanced you can try Orange: https://github.com/jacob-carlborg/orange Soon (hopefully) to be integrated into Phobos as std.serialization. -- /Jacob Carlborg
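A rough sketch of what such a deserializer can look like with D's compile-time reflection (the helper name jsonDeserialize is hypothetical, with no error handling; Orange and the proposed std.serialization are far more complete):

```d
import std.json : parseJSON;

// Hypothetical helper: fills a class's string/int fields from a JSON object
// by iterating the type's members at compile time.
T jsonDeserialize(T)(string json)
{
    auto root = parseJSON(json);
    auto result = new T;
    // __traits(allMembers, T) is a compile-time sequence; the foreach unrolls.
    foreach (member; __traits(allMembers, T))
    {
        static if (is(typeof(__traits(getMember, result, member)) == string))
            __traits(getMember, result, member) = root[member].str;
        else static if (is(typeof(__traits(getMember, result, member)) == int))
            __traits(getMember, result, member) = cast(int) root[member].integer;
    }
    return result;
}

class Contact
{
    string first, last;
    int number;
}

void main()
{
    auto c = jsonDeserialize!Contact(`{"first":"Ada","last":"Lovelace","number":42}`);
    assert(c.first == "Ada" && c.last == "Lovelace" && c.number == 42);
}
```

The static-if tests also quietly skip non-field members inherited from Object, since their types are not plain string or int.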
Re: Early review of std.logger
On Thursday, 17 October 2013 at 02:13:12 UTC, Eric Anderton wrote: On Tuesday, 15 October 2013 at 15:16:44 UTC, Andrei Alexandrescu wrote: Eric, could you please enumerate a short list of features of log4j that you think would be really missed if absent? Certainly. Here's my top 3, with some background to explain why I think they'd be missed. - Hierarchical logging with a centralized (singleton) logging facility. The strength of this is that it would allow us to freely integrate D libraries that use std.logger, yet filter their log output from *outside* the library through the std.logger API. This can be accomplished by tagging each log event with a category name. Log events are then filtered by prefix matching of the category name, as well as by log level. Without this feature, library authors would have to provide explicit API calls to manipulate their library's logging, or require API users to pass logging contexts forward to the library. - Configurable output strategies for each log category. In log4cpp, these are known as appenders. Appenders need not be registered explicitly for each category, but can be registered by category name prefix match, just like the filtering for the hierarchical system (above). The idea is to allow for different formatting strategies and output targets, including the logrotate issue I mentioned earlier. This provides a nice integration point to tackle basic capabilities today, like file logging and syslog support, and more advanced features by 3rd party authors. - Nested Diagnostic Context support (Mapped Diagnostic Context in log4j). The NDC facility in log4cpp/log4cxx is incredibly handy for cutting down on the amount of data one usually puts into a given log event. The gist of an NDC is just a way to bracket a series of log calls with a prefix that is emitted with the rest of the log line. 
These contexts are maintained on a thread-specific stack, such that each log event is prefixed with all the information in the entire stack, at the time of the event. Without this, one winds up re-inventing the concept (usually poorly) to forward the same information to each call to emit a log message. It also eliminates the need for a stack trace in a log message in most cases, which is something that people who use SIEM software (e.g. Splunk) would appreciate. There are other things that would be nice - configuration file support, lazy evaluation of log event arguments, custom output formats - but I think the above is really the core of what's needed. For what it's worth: my opinions mostly come from my experience in integrating with log4j and log4cpp. Log4j is a very heavyweight library - I don't think we need anything anywhere close to that nuanced. In contrast, log4cpp is a very small library and worth a look as something that may be a good model for what std.logger could accomplish. - Eric +1 I used log4net some years ago and really liked the 'context logging' feature (Nested Diagnostic Context support). I've also used the www.panteios.org logging API, and liked that approach too.
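The NDC idea can be sketched in a few lines of D; the names here are illustrative only, not a proposed std.logger API. Each thread keeps a stack of context strings, and every log line is prefixed with the joined stack:

```d
import std.array : join;

string[] ndcStack; // module-level variables are thread-local in D

// RAII guard: pushes a context on construction, pops it on scope exit.
struct NdcScope
{
    this(string ctx) { ndcStack ~= ctx; }
    ~this() { ndcStack = ndcStack[0 .. $ - 1]; }
}

// Prefix a message with every context currently on this thread's stack.
string formatLine(string msg)
{
    return ndcStack.length ? ndcStack.join(".") ~ ": " ~ msg : msg;
}

void main()
{
    auto request = NdcScope("request42");
    {
        auto db = NdcScope("db");
        assert(formatLine("connect") == "request42.db: connect");
    }
    assert(formatLine("done") == "request42: done");
}
```

The point is exactly what Eric describes: call sites emit only the message, and the surrounding scopes contribute the repeated context automatically.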
Re: Eloquently sums up my feelings about the disadvantages of dynamic typing
On 2013-10-16 22:55, H. S. Teoh wrote: Even *with* developer tools, where would you even start? I mean, the blank page could have resulted from any one point of about 5kloc worth of JS initialization code (which BTW dynamically loads in a whole bunch of other JS code, each of which need to run their own initialization which includes talking to a remote backend server and processing the response data, all before anything even gets displayed on the page -- don't ask me why it was designed this way, this is what happens when you take the browser-as-a-platform concept too far). I think (relatively) recently Opera's Dragonfly added a feature to break into the debugger as soon as an error is thrown (rather than only when it's unhandled), but even that doesn't seem to catch all of the errors. If you get an error the developer tools will show you where. At least it's a start. -- /Jacob Carlborg
Re: Eloquently sums up my feelings about the disadvantages of dynamic typing
On 2013-10-16 23:08, Nick Sabalausky wrote: Compiling it shouldn't be a problem: http://xkcd.com/224/ So, it's written in Perl. That's why we haven't figured out how the universe works: You shoot yourself in the foot, but nobody can understand how you did it. Six months later, neither can you http://www.fullduplex.org/humor/2006/10/how-to-shoot-yourself-in-the-foot-in-any-programming-language/ -- /Jacob Carlborg
Re: Early review of std.logger
On 10/17/2013 09:34 AM, qznc wrote: On Thursday, 17 October 2013 at 02:13:12 UTC, Eric Anderton wrote: The strength of this is that it would allow us to freely integrate D libraries that use std.logger, yet filter their log output from *outside* the library through the std.logger API. This is one of the most important aspects in my opinion. Std.logger should be easy to use, so library writers are encouraged to use it. Compare this with the unittest keyword, which makes it easy to write some simple tests. Of course, flexibility to use complex machinery for using the messages/tests is necessary. Just like we can hook up more advanced unit testing frameworks, we should be able to hook up more advanced logging machinery. The advanced stuff is not for Phobos though. Advanced stuff for unittests is for example, parallel execution and graphical reports. Advanced stuff for logging is for example log rotation and networking. +1
Re: [Proposal] Weak reference implementation for D
On 10/16/2013 12:45 AM, Walter Bright wrote: http://d.puremagic.com/issues/show_bug.cgi?id=4151 does not contain the info in your post starting this thread, nor does it contain any link to this thread. Yeah, more cross references please. I personally dislike the DIP proliferation for anything but big changes.
Re: Help needed testing automatic win64 configuration
That one's working for me. It still looks a little funny though:

    ; default to 32-bit linker (can generate 64-bit code) that has a common path
    ; for VS2010, VS2012, and VS2013. This will be overridden below if the
    ; installer detects VS.
    LINKCMD=%VCINSTALLDIR%\bin\link.exe
    ; -
    ; This enclosed section is specially crafted to be activated by the Windows
    ; installer when it detects the actual paths to VC and SDK installations so
    ; modify this in the default sc.ini within the git repo with care
    ;
    ; End users: You can fill in the path to VC and Windows SDK and uncomment
    ; the appropriate LINKCMD to manually enable support
    ; Windows installer replaces the following lines with the actual paths
    VCINSTALLDIR=C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\
    WindowsSdkDir=C:\Program Files (x86)\Windows Kits\8.0\

Notice that it refers to LINKCMD=%VCINSTALLDIR%\... at the top, but VCINSTALLDIR is not set until further down. Then later in the file:

    ; Platform libraries (Windows SDK 8)
    LIB=%LIB%;%WindowsSdkDir%\Lib\win8\um\x64
    ; Platform libraries (Windows SDK 7)
    LIB=%LIB%;%WindowsSdkDir%\Lib\x64

The first one (Win8 SDK) is correct, but the second path (Win7 SDK) doesn't exist. The Win7 SDK is at Microsoft SDKs\Windows\v7.0A on my machine (installed by VS2010). None of this seems to cause DMD to fail, but it may be confusing to have technically erroneous settings. On 17 October 2013 16:16, Brad Anderson e...@gnuk.net wrote: Attempt four: http://gnuk.net/dmd-2.064-beta-newsc-lib64-4.exe This one has a couple more changes to the dmd2beta.zip it is downloading from my server. 1. dmd.exe has been replaced with one built from 2.064 branch HEAD (so I didn't have to use LINKCMD64). 2. lib64 has been added and phobos64.lib and gcstub64.lib have been moved into there from lib.
Re: [Proposal] Weak reference implementation for D
On 10/13/2013 09:47 AM, Denis Shelomovskij wrote: * Alex's one from MCI: https://github.com/lycus/mci/blob/f9165c287f92e4ef70674828fbadb33ee3967547/src/mci/core/weak.d I remember talking about this with Alex. He wanted to add some functions to the GC and this is what I came up with based on the current implementation. It uses the synchronized GC.addrOf to check whether the loaded pointer is still valid. Still looks correctly synchronized to me. https://gist.github.com/dawgfoto/2852438 In fact the load!(msync.acq) could be made load!(msync.raw) too.
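The core idea behind the linked gist can be sketched as follows. This is a simplification that ignores the synchronization concerns discussed in the thread, so it is illustrative only, not a safe implementation:

```d
import core.memory : GC;

// Stores the referent's address as an integer so the GC does not treat the
// field as a root; GC.addrOf reports whether the address still belongs to a
// live GC allocation. Caveat: a new object allocated at the same address
// would produce a false positive, which is one reason real weak-reference
// support needs help from the GC itself.
struct Weak(T) if (is(T == class))
{
    private size_t addr;

    this(T obj) { addr = cast(size_t) cast(void*) obj; }

    T get()
    {
        auto p = cast(void*) addr;
        return GC.addrOf(p) is null ? null : cast(T) p;
    }
}

class Foo { int x = 7; }

void main()
{
    auto keepAlive = new Foo; // strong reference keeps the object live
    auto w = Weak!Foo(keepAlive);
    assert(w.get() !is null && w.get().x == 7);
}
```

The race the gist guards against is a collection running between the addrOf check and the use of the pointer, which is why the loads need the memory-ordering annotations being discussed.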
Re: Help needed testing automatic win64 configuration
On 17 October 2013 17:08, Rainer Schuetze r.sagita...@gmx.de wrote: On 16.10.2013 13:13, Manu wrote: It seems the installer failed to replace two occurrences of %WindowsSdkDir%. WindowsSdkDir is set by batch files vsvars32.bat and friends. I see conflicting goals here: 1. the installer expands variables WindowsSdkDir and VCINSTALLDIR in sc.ini to work without running vsvars32.bat. It has to make decisions on what versions to pick up. 2. when running dmd by Visual D you want to select settings according to the current Visual Studio, which means it needs the unexpanded variables. The current option to allow both is to not run the linker through dmd, but invoke it manually. What do you mean by 'manually' exactly? Is there anything that can be done in VisualD to override these variables when invoking the compiler? By manually I mean that the linker is not run through dmd, but is called directly from the batch generated by Visual D. This means Visual D has to extract all the settings from sc.ini and rebuild the command line that dmd would generate. In addition, it needs to know which settings have to be replaced and which have been set deliberately by the user. Hmmm, I tend to think that sc.ini should be ignored/overridden entirely under VisualD. Visual Studio has all its own places to configure paths and options. Anyone who runs more than one version of Visual Studio has to micro-manage sc.ini, which is really annoying. As a VisualD user, I expect to be able to access all settings and paths in Visual Studio, and they should be relevant for the version of Visual Studio in use. At least that's my take on it, from an end-user perspective. On a side note, Visual Studio tends to maintain its default settings in property sheets (you can access the x64 defaults under Microsoft.Cpp.x64.user under the Property Manager). 
I wonder if VisualD should also store defaults in the same place, but then I noticed that VisualD projects don't seem to have any presence in the Property Manager... I guess it's a special project type, and therefore subverts MSBuild? I don't really know how all that stuff fits together. You know, thinking on it, it's kinda strange in a sense that D should have completely distinct library paths at all. It might be useful in VisualD to add all the C/C++ x64 library paths as standard link paths as well. Surely it's reasonable as a Visual Studio end-user to assume that any libs available to C/C++ should also be available to D too? These are 'system libs' after all. At least, they've been registered with VS as if they are. There's one other detail that I forgot in my prior email; I think it would be really handy to include the DirectX lib path by default. It's a very standard MS lib package, and anyone who does multimedia development will surely have it on their system, and require it to be hooked up. My DX libs are here: C:\Program Files (x86)\Microsoft DirectX SDK (June 2010)\Lib\x64 It seems I have an environment variable: DXSDK_DIR=C:\Program Files (x86)\Microsoft DirectX SDK (June 2010)\ It also seems to register a presence in the registry at: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\DirectX\Microsoft DirectX SDK (June 2010)\InstallPath I usually have more faith in the registry, but the env variable is surely going to be present on everyone's machine. I'm not sure we should add too many special cases, everybody has his own set of favorite libraries (I haven't touched DirectX for more than 10 years). Considering that you probably have to make your own imports for the respective declarations, I think it is ok to add an appropriate library path to your project as well. It seems the DX-SDK does not end up in the LIB environment variable for the VS command prompt either, though I see it added in the Visual Studio settings. 
I only suggest the DXSDK lib in particular for a few reasons: 1. It's a really standard Microsoft lib, not just some 3rd party thing. 2. Being a Microsoft lib, it integrates into Visual Studio automatically when installed, and it's necessary for basically any multimedia work on Windows. 3. It's been integrated into the Windows 8 SDK from VS2012 onward (that's why the stand-alone package is quite old), but for the sake of 'it just works', for prior versions of Visual Studio (which we do support), the path needs to be added. I.e., there's a risk of VS2012 users saying "well, it works for me!", but the VS2010 users complaining that it doesn't seem to work for them, and then scratching heads why it works for some but not others.
Re: Early review of std.logger
On 10/17/2013 04:13 AM, Eric Anderton wrote: On Tuesday, 15 October 2013 at 15:16:44 UTC, Andrei Alexandrescu wrote: Eric, could you please enumerate a short list of features of log4j that you think would be really missed if absent? Certainly. Here's my top 3, with some background to explain why I think they'd be missed. - Hierarchical logging with a centralized (singleton) logging facility. The strength of this is that it would allow us to freely integrate D libraries that use std.logger, yet filter their log output from *outside* the library through the std.logger API. This can be accomplished by tagging each log event with a category name. Log events are then filtered by prefix matching of the category name, as well as by log level. Without this feature, library authors would have to provide explicit API calls to manipulate their library's logging, or require API users to pass logging contexts forward to the library. With the new MultiLogger and names for Logger, I consider this done. - Configurable output strategies for each log category. In log4cpp, these are known as appenders. Appenders need not be registered explicitly for each category, but can be registered by category name prefix match, just like the filtering for the hierarchical system (above). The idea is to allow for different formatting strategies and output targets, including the logrotate issue I mentioned earlier. This provides a nice integration point to tackle basic capabilities today, like file logging and syslog support, and more advanced features by 3rd party authors. - Nested Diagnostic Context support (Mapped Diagnostic Context in log4j). The NDC facility in log4cpp/log4cxx is incredibly handy for cutting down on the amount of data one usually puts into a given log event. The gist of an NDC is just a way to bracket a series of log calls with a prefix that is emitted with the rest of the log line. 
These contexts are maintained on a thread-specific stack, such that each log event is prefixed with all the information in the entire stack, at the time of the event. Without this, one winds up re-inventing the concept (usually poorly) to forward the same information to each call to emit a log message. It also eliminates the need for a stack trace in a log message in most cases, which is something that people who use SIEM software (e.g. Splunk) would appreciate. There are other things that would be nice - configuration file support, lazy evaluation of log event arguments, custom output formats - but I think the above is really the core of what's needed. IMO, this is too heavyweight and would water down the current design too much. That being said, I think it is easy to subclass Logger or MultiLogger to achieve this. But I don't think that this should be part of the default feature set.
Re: Eloquently sums up my feelings about the disadvantages of dynamic typing
On Thursday, 17 October 2013 at 07:42:22 UTC, Jacob Carlborg wrote: On 2013-10-16 23:08, Nick Sabalausky wrote: Compiling it shouldn't be a problem: http://xkcd.com/224/ So, it's written in Perl. That's why we haven't figured out how the universe works: You shoot yourself in the foot, but nobody can understand how you did it. Six months later, neither can you http://www.fullduplex.org/humor/2006/10/how-to-shoot-yourself-in-the-foot-in-any-programming-language/ So what's the D equivalent? You're only allowed to shoot yourself in the foot if you use @system.
Re: Eloquently sums up my feelings about the disadvantages of dynamic typing
On 2013-10-17 11:15, Meta wrote: So what's the D equivalent? You're only allowed to shoot yourself in the foot if you use @system. From the comments: "D: You shoot yourself in the foot in two lines using a builtin Gun and Bullet[]. The experience is so enjoyable you shoot yourself again." -- /Jacob Carlborg
Re: Eloquently sums up my feelings about the disadvantages of dynamic typing
On Thursday, 17 October 2013 at 09:49:37 UTC, Jacob Carlborg wrote: From the comments: I had to laugh at this one: ".Net: Microsoft hands you a gun and swears blind it's a toenail clipper. Someone throws a fucking chair at you."
Re: [Proposal] Weak reference implementation for D
17.10.2013 12:09, Martin Nowak wrote: On 10/13/2013 09:47 AM, Denis Shelomovskij wrote: * Alex's one from MCI: https://github.com/lycus/mci/blob/f9165c287f92e4ef70674828fbadb33ee3967547/src/mci/core/weak.d I remember talking about this with Alex. He wanted to add some functions to the GC and this is what I came up with based on the current implementation. It uses the synchronized GC.addrOf to check whether the loaded pointer is still valid. Still looks correctly synchronized to me. https://gist.github.com/dawgfoto/2852438 In fact the load!(msync.acq) could be made load!(msync.raw) too. The only thing we need from `GC.addrOf` here is a GC barrier, i.e. a `lock`/`unlock` pair, so runtime changes are necessary for performance. -- Денис В. Шеломовский Denis V. Shelomovskij
Re: Eloquently sums up my feelings about the disadvantages of dynamic typing
+1 What can I say? For the web I have to use JavaScript, PHP and Python. Imagine the amount of stupid-yet-hard-to-find bugs I've had to deal with: bugs that you only become aware of at runtime. I'm much happier with D (or Java, Objective-C). As for the arguments concerning compile time and the extra typing that typing requires, c'mon, they must be kidding. Not to mention increased execution speed, not only in terms of script vs. binary, but also in terms of known type vs. dynamically assigned type. Another issue I've come across is that languages like JS and PHP lend themselves to quick and dirty stuff, or rather seduce programmers into quick and dirty solutions. Languages like D guide you towards cleaner and better structured code.
Re: Eloquently sums up my feelings about the disadvantages of dynamic typing
On Thursday, 17 October 2013 at 07:43:07 UTC, Jacob Carlborg wrote: On 2013-10-16 22:55, H. S. Teoh wrote: Even *with* developer tools, where would you even start? I mean, the blank page could have resulted from any one point of about 5kloc worth of JS initialization code (which BTW dynamically loads in a whole bunch of other JS code, each of which need to run their own initialization which includes talking to a remote backend server and processing the response data, all before anything even gets displayed on the page -- don't ask me why it was designed this way, this is what happens when you take the browser-as-a-platform concept too far). I think (relatively) recently Opera's Dragonfly added a feature to break into the debugger as soon as an error is thrown (rather than only when it's unhandled), but even that doesn't seem to catch all of the errors. If you get an error the developer tools will show you where. At least it's a start. Unless you are developing a hybrid application targeting mobiles. No debugger there to talk to the corresponding native browser widgets. :( :( -- Paulo
Re: Optimizing a raytracer
@Jacob Carlborg I would say use structs. For the compiler I would go with LDC or GDC. Both of these are faster for floating point calculations than DMD. You can always benchmark. Thank you for the advice! I installed LDC and used ldmd2. The benchmarks are amazing! :O DMD: compile = 2503, run = 26210. LDMD: compile = 3953, run = 8935. These are in milliseconds, benchmarked with the time command. Both were compiled with the same flags: -O -inline -release -noboundscheck @finalpatch I find it critical to ensure all loops are unrolled in basic vector ops (copy/arithmetic/dot etc.) In these crucial parts I don't use loops; I wrote these operations out by hand. They are simply 3 named doubles. But thanks for the advice. @ponce If you are on x86, SSE 4.1 introduced an instruction called DPPS which performs a dot product. Maybe you can force it into doing a cross-product with clever swizzles and masks. Could you give me a hint how it could be implemented in D to use that dot product? I am not experienced with such low-level programming. And would you suggest trying to use SIMD double4 for 3D vectors? It would take some time to change the code.
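For context, the "three named doubles with hand-unrolled ops" style described above might look like this sketch (Vec3 and its method names are illustrative; using DPPS itself would require core.simd intrinsics and is a separate experiment):

```d
// A 3-component vector as three named doubles; dot and cross are
// written out explicitly, so there are no loops for the compiler
// to unroll in the hot path.
struct Vec3
{
    double x = 0, y = 0, z = 0;

    double dot(in Vec3 o) const
    {
        return x * o.x + y * o.y + z * o.z;
    }

    Vec3 cross(in Vec3 o) const
    {
        return Vec3(y * o.z - z * o.y,
                    z * o.x - x * o.z,
                    x * o.y - y * o.x);
    }
}
```

With -O on LDC/GDC, straight-line code like this typically vectorizes or schedules well without any explicit SIMD types.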
Re: Eloquently sums up my feelings about the disadvantages of dynamic typing
On Thursday, 17 October 2013 at 13:54:34 UTC, PauloPinto wrote: No debugger there to talk to the corresponding native browser widgets. :( :( Hm, some mobile browsers (e.g. Chrome on Android) come with pretty tight remote debugging integration, maybe something like that is available for embedded widgets as well? David
Re: Eloquently sums up my feelings about the disadvantages of dynamic typing
On 2013-10-17 15:54, PauloPinto wrote: Unless you are developing a f hybrid application targeting to mobiles. No debugger there to talk to the corresponding native browser widgets. :( :( You missed my other post about Firebug Lite: http://forum.dlang.org/thread/l3keqg$e31$1...@digitalmars.com?page=4#post-l3moi7:24ak5:241:40digitalmars.com -- /Jacob Carlborg
Re: Eloquently sums up my feelings about the disadvantages of dynamic typing
Am 17.10.2013 16:26, schrieb David Nadlinger: On Thursday, 17 October 2013 at 13:54:34 UTC, PauloPinto wrote: No debugger there to talk to the corresponding native browser widgets. :( :( Hm, some mobile browsers (e.g. Chrome on Android) come with pretty tight remote debugging integration, maybe something like that is available for embedded widgets as well? David This is a custom enterprise framework. :( -- Paulo
Re: Eloquently sums up my feelings about the disadvantages of dynamic typing
Am 17.10.2013 17:03, schrieb Jacob Carlborg: On 2013-10-17 15:54, PauloPinto wrote: Unless you are developing a f hybrid application targeting to mobiles. No debugger there to talk to the corresponding native browser widgets. :( :( You missed my other post about Firebug Lite: http://forum.dlang.org/thread/l3keqg$e31$1...@digitalmars.com?page=4#post-l3moi7:24ak5:241:40digitalmars.com Not when you have a custom built Java/Objective-C framework that controls how the requests are handled, with a few hooks for special behaviours. The wonderful world of enterprise NIH frameworks. :( -- Paulo
Re: Optimizing a raytracer
Róbert László Páli: And would you suggest to try to use SIMD double4 for 3D vectors? It would take some time to change code. Using a double4 could improve the performance of your code, but it must be used wisely. (One general tip is to avoid mixing SIMD and serial code. if you want to use SIMD code, then it's often better to keep using SIMD registers even if you have one value). Bye, bearophile
Re: [Proposal] Weak reference implementation for D
On Thursday, 17 October 2013 at 08:09:24 UTC, Martin Nowak wrote: On 10/13/2013 09:47 AM, Denis Shelomovskij wrote: * Alex's one from MCI: https://github.com/lycus/mci/blob/f9165c287f92e4ef70674828fbadb33ee3967547/src/mci/core/weak.d I remember talking about this with Alex. He wanted to add some functions to the GC and this is what I came up with based on the current implementation. It uses the synchronized GC.addrOf to check whether the loaded pointer is still valid. I'm afraid this is insufficient. If a same-sized block is allocated before the dispose event is triggered, the WeakRef could end up pointing to something else. It's a rare case (in the current GC, a finalizer would have to do the allocation), but possible. This is what I referred to as the ABA problem the other day. Not strictly accurate, but the effect is similar.
Re: draft proposal for ref counting in D
On Oct 16, 2013, at 1:05 PM, Benjamin Thaut c...@benjamin-thaut.de wrote: Am 16.10.2013 21:05, schrieb Sean Kelly: On Oct 16, 2013, at 11:54 AM, Benjamin Thaut c...@benjamin-thaut.de wrote: The problem is not that there are no GCs around in other languages which satisfy certain requirements. The problem is actually implementing them in D. I suggest that you read The Garbage Collection Handbook which explains this in deep detail. I'm currently reading it, and I might write an article about the entire D GC issue once I'm done with it. I think the short version is that D being able to directly call C code is a huge problem here. Incremental GCs all rely on the GC being notified when pointers are changed. We might be able to manage it for SafeD, but then SafeD would basically be its own language. I think an even bigger problem is structs: if you need write barriers for pointers on the heap, you are going to have a problem with structs, because you will never know whether they are located on the heap or the stack. Additionally, making the stack precisely scannable and adding GC-points will require a lot of compiler support. And even if this is doable in respect to DMD, it's going to be a big problem for GDC or LDC to change the codegen. Yes, any pointer anywhere. I recall someone posting a doc about a compromise solution a few years back, but I'd have to do some digging to figure out what the approach was.
Re: draft proposal for ref counting in D
On Thursday, 17 October 2013 at 17:11:06 UTC, Sean Kelly wrote: And even if this is doable in respect to DMD its going to be a big problem for GDC or LDC to change the codegen. Yes, any pointer anywhere. I recall someone posting a doc about a compromise solution a few years back, but I'd have to do some digging to figure out what the approach was. LLVM actually comes with quite extensive GC support infrastructure: http://llvm.org/docs/GarbageCollection.html. As far as I'm aware, it is not widely used by the top-tier LLVM projects, so there might be quite a bit of work involved in getting it to run. David
Re: draft proposal for ref counting in D
Am 17.10.2013 19:16, schrieb David Nadlinger: On Thursday, 17 October 2013 at 17:11:06 UTC, Sean Kelly wrote: And even if this is doable in respect to DMD its going to be a big problem for GDC or LDC to change the codegen. Yes, any pointer anywhere. I recall someone posting a doc about a compromise solution a few years back, but I'd have to do some digging to figure out what the approach was. LLVM actually comes with quite extensive GC support infrastructure: http://llvm.org/docs/GarbageCollection.html. As far as I'm aware, it is not widely used by the top-tier LLVM projects, so there might be quite a bit of work involved in getting it to run. David Uhhh, this sounds really good. They in fact have everything needed to implement a generational garbage collector. This would improve the D GC situation a lot. But reading the part about the shadow stack really lowers my expectations. That's really something you don't want. The performance impact is going to be so big that it doesn't make sense to use the better GC in the first place. Kind Regards Benjamin Thaut
Re: Target hook - compiler specific pragmas.
On Thursday, 10 October 2013 at 12:39:00 UTC, Iain Buclaw wrote: As part of the front-end harmonising work I'm doing, will be adding in a Target hook to handle compiler-specific pragmas. […] David, you OK with this? :-) I can't speak for the other LDC devs, but I think this should be fine. ;) We still need some way to actually tag the marked up declarations as such internally. Currently, we have Dsymbol::llvmInternal for this – god, that name is awful –, but of course it would be nice if you could find a nicely generic solution for that as well. David
Re: draft proposal for ref counting in D
On Thursday, 17 October 2013 at 17:28:17 UTC, Benjamin Thaut wrote: Am 17.10.2013 19:16, schrieb David Nadlinger: But reading the part about the shadow stack really lowers my expectations. Thats really something you don't want. The performance impact is so going to be so big, that it doesn't make sense to use the better GC in the first place. There's always a tradeoff. If an app is very delay-sensitive, then making the app run slower in general might be worthwhile if the delay could be eliminated.
Re: draft proposal for ref counting in D
Am 17.10.2013 19:47, schrieb Sean Kelly: On Thursday, 17 October 2013 at 17:28:17 UTC, Benjamin Thaut wrote: Am 17.10.2013 19:16, schrieb David Nadlinger: But reading the part about the shadow stack really lowers my expectations. That's really something you don't want. The performance impact is going to be so big that it doesn't make sense to use the better GC in the first place. There's always a tradeoff. If an app is very delay-sensitive, then making the app run slower in general might be worthwhile if the delay could be eliminated. Well, I just read it again, and it appears to me that the shadow stack is something they already have implemented and that can be used for GC prototyping, but if you want you can write your own code generator plugin and generate your own stack maps. It actually sounds more feasible to implement a generational GC with LLVM than with what we have in DMD. Kind Regards Benjamin Thaut
I don't like slices in D
I expected slices in D (http://dlang.org/arrays.html) to work like they do in Go (http://blog.golang.org/go-slices-usage-and-internals). But they don't. Why does the array have to be reallocated after the length of a slice is changed? It makes slices useless. Here is a little example (it fails):

void testSlices() {
    int[] dArr = [10, 11, 12];
    int[] dSlice = dArr[0..2];
    dSlice.length++;
    assert(dArr[2] == dSlice[2]); // failure
}
Re: I don't like slices in D
On Thursday, 17 October 2013 at 18:00:20 UTC, Vitali wrote: I expected slices to be in D (http://dlang.org/arrays.html) like they are in Go (http://blog.golang.org/go-slices-usage-and-internals). But they are not. Why the array have to be reallocated after the length of a slice is changed? It makes slices useless. Here a little example (it fails): void testSlices() { int[] dArr = [10, 11, 12]; int[] dSlice = dArr[0..2]; dSlice.length++; assert(dArr[2]==dSlice[2]); // failure } Change your slice to int[] dSlice = dArr[0..$] (or dArr[0..3]); the way you are doing it only takes the first 2 elements. Modified code:

import std.stdio : writeln;

void main() {
    int[] dArr = [10, 11, 12];
    int[] dSlice = dArr[0..$];
    assert(dArr[2] is dSlice[2]); // passes
    dSlice.length++;
    assert(dArr[2] == dSlice[2]); // passes
}
Re: I don't like slices in D
On Thursday, 17 October 2013 at 18:00:20 UTC, Vitali wrote: I expected slices to be in D (http://dlang.org/arrays.html) like they are in Go (http://blog.golang.org/go-slices-usage-and-internals). But they are not. Why the array have to be reallocated after the length of a slice is changed? It makes slices useless. Here a little example (it fails): void testSlices() { int[] dArr = [10, 11, 12]; int[] dSlice = dArr[0..2]; dSlice.length++; assert(dArr[2]==dSlice[2]); // failure } What's the use case for this? I haven't found myself ever needing something like that so far, but i'd be open to seeing an example.
D and event-based programming
Hello friends, how would you do event-based programming in D? What support does D have for it? Thanks.
Re: I don't like slices in D
On Thursday, 17 October 2013 at 18:21:30 UTC, David Eagen wrote: On Thursday, 17 October 2013 at 18:00:20 UTC, Vitali wrote: I expected slices to be in D (http://dlang.org/arrays.html) like they are in Go (http://blog.golang.org/go-slices-usage-and-internals). But they are not. Why the array have to be reallocated after the length of a slice is changed? It makes slices useless. Here a little example (it fails): void testSlices() { int[] dArr = [10, 11, 12]; int[] dSlice = dArr[0..2]; dSlice.length++; assert(dArr[2]==dSlice[2]); // failure } Change your slice to int[] dSlice = dArr[0..$] or [0..3]; The way you are doing it only takes the first 2 elements. Modified code: import std.stdio : writeln; void main() { int[] dArr = [10, 11, 12]; int[] dSlice = dArr[0..$]; assert(dArr[2] is dSlice[2]); // passes dSlice.length++; assert(dArr[2] == dSlice[2]); // passes } This doesn't answer my question. I repeat my question: Why does a reallocation occur AFTER a resize of the length of a slice, although there is still capacity?
Re: I don't like slices in D
On 10/17/2013 11:00 AM, Vitali wrote: I expected slices to be in D (http://dlang.org/arrays.html) There is also this article: http://dlang.org/d-array-article.html like they are in Go (http://blog.golang.org/go-slices-usage-and-internals). But they are not. Every design comes with pros and cons. Go's slices can do that because they consist of three members: pointer, length, and capacity. Apparently they also know the actual array that they provide access to. D slices have only the first two of those members. (However, the capacity can still be obtained by the function capacity(), which is ordinarily called with the property syntax.) Why the array have to be reallocated after the length of a slice is changed? The effect of incrementing length is adding an element to the end. Since slices don't own the underlying array elements, in order to preserve the potential element beyond their current end, the entire slice gets relocated. As described in the article above, this does not happen every time. It makes slices useless. Slices have been very useful so far. Slices do have some gotchas, none of which have been showstoppers. Here a little example (it fails): void testSlices() { int[] dArr = [10, 11, 12]; int[] dSlice = dArr[0..2]; dSlice.length++; That operation has a potential of relocating the slice. You can check whether that will be the case:

if (slice.capacity == 0) {
    /* Its elements would be relocated if one more element
     * is added to this slice. */
    // ...
} else {
    /* This slice may have room for new elements before
     * needing to be relocated. Let's calculate how many: */
    auto howManyNewElements = slice.capacity - slice.length;
    // ...
}

assert(dArr[2]==dSlice[2]); // failure } Ali
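The relocation behavior explained above can be checked directly. A minimal sketch, relying on the runtime's documented append rules (a slice that does not end at the used end of its block reports capacity 0 and relocates when grown):

```d
void main()
{
    int[] dArr = [10, 11, 12];
    int[] dSlice = dArr[0 .. 2];

    // dSlice ends inside dArr's used block, so it cannot grow in place:
    assert(dSlice.capacity == 0);

    auto before = dSlice.ptr;
    dSlice.length++;               // forced to relocate to protect dArr[2]
    assert(dSlice.ptr != before);  // moved to a fresh block
    assert(dArr == [10, 11, 12]);  // original array untouched
    assert(dSlice == [10, 11, 0]); // appended element is int.init
}
```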
Re: I don't like slices in D
On Thursday, 17 October 2013 at 18:29:47 UTC, John Colvin wrote: On Thursday, 17 October 2013 at 18:00:20 UTC, Vitali wrote: I expected slices to be in D (http://dlang.org/arrays.html) like they are in Go (http://blog.golang.org/go-slices-usage-and-internals). But they are not. Why the array have to be reallocated after the length of a slice is changed? It makes slices useless. Here a little example (it fails): void testSlices() { int[] dArr = [10, 11, 12]; int[] dSlice = dArr[0..2]; dSlice.length++; assert(dArr[2]==dSlice[2]); // failure } What's the use case for this? I haven't found myself ever needing something like that so far, but i'd be open to seeing an example. The use case is:

void appendElement(ref int[] arr, int x) {
    arr ~= x;
}

void removeElement(ref int[] arr, int index) {
    arr = arr[0..index] ~ arr[index+1..$];
}

void main() {
    int[] arr = [1, 2, 3];
    arr.reserve(7);       // Reserve capacity.
    arr.appendElement(4); // Here there should be
    arr.removeElement(1); // no reallocation of the array,
    arr.appendElement(5); // but there is.
    assert(arr[1] == 3);
    assert(arr[2] == 4);
    assert(arr[3] == 5);
}

But maybe I don't understand what slices are for in D. Anyway, in Go this works without reallocation.
Re: D bindings for OpenCV
On Wednesday, 16 October 2013 at 08:56:26 UTC, John Colvin wrote: On Tuesday, 15 October 2013 at 21:51:06 UTC, Timothee Cour wrote: I've done it using swig, and using C++ api (not C api), as well as for other libs (sfml etc). it requires a bit of tweaking the '.i' file but is doable. Much better than hand maintaining c wrappers. Link? Or at least a how-to? This would be a really valuable asset, the C++ api is a LOT nicer to work with than the C one. Plus IIRC new features are no longer always available via the C API. I'm going to second this. I need to program computer vision a lot, but I'd like to do it in something a bit better than C++.
Re: I don't like slices in D
On Thursday, 17 October 2013 at 18:41:53 UTC, Vitali wrote: arr = arr[0..index] ~ arr[index+1..$]; This line is where the reallocation happens. a ~ b on built-in arrays and slices ALWAYS allocates to ensure it doesn't stomp over some other in-use memory. ~= can extend if there's capacity, but ~ always makes a new array without modifying the input. Ali linked to the array article that explains the implementation in more detail. The remove function should copy the data itself instead of using the ~ operator if you want it to operate in place.
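A sketch of the copy-in-place approach suggested above, assuming std.algorithm's remove is acceptable: it shifts the tail elements left within the same block and returns the shortened slice, so no new allocation occurs.

```d
import std.algorithm : remove;

// Remove arr[index] by shifting the tail left in place; the slice
// keeps its storage and merely gets shorter.
void removeElementInPlace(ref int[] arr, size_t index)
{
    arr = arr.remove(index);
}

void main()
{
    int[] arr = [1, 2, 3];
    auto p = arr.ptr;
    arr.removeElementInPlace(1);
    assert(arr == [1, 3]);
    assert(arr.ptr == p); // same storage, no reallocation
}
```

Note the trade-off this implies: the shifted data is also visible through any other slice of the same block, which is exactly why the ~ operator refuses to overwrite it.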
Re: I don't like slices in D
On 10/17/2013 11:41 AM, Vitali wrote: On Thursday, 17 October 2013 at 18:29:47 UTC, John Colvin wrote: On Thursday, 17 October 2013 at 18:00:20 UTC, Vitali wrote: I expected slices to be in D (http://dlang.org/arrays.html) like they are in Go (http://blog.golang.org/go-slices-usage-and-internals). But they are not. Why the array have to be reallocated after the length of a slice is changed? It makes slices useless. Here a little example (it fails): void testSlices() { int[] dArr = [10, 11, 12]; int[] dSlice = dArr[0..2]; dSlice.length++; assert(dArr[2]==dSlice[2]); // failure } What's the use case for this? I haven't found myself ever needing something like that so far, but i'd be open to seeing an example. The use case is: void appendElement(ref int[] arr, int x) { arr ~= x; That will not allocate if there is capacity. } void removeElement(ref int[] arr, int index) { arr = arr[0..index] ~ arr[index+1..$]; It must necessarily allocate, right? How does Go deal with that case? However, I see that the capacity of arr after that operation is just 3! Since a new array is allocated by the runtime, the capacity of arr could be larger as well. Perhaps that's the only problem here. (?) } void main() { int[] arr = [1, 2, 3]; arr.reserve(7); // Reserve capacity. arr.appendElement(4); // Here should be arr.removeElement(1); // no realocation of array, arr.appendElement(5); // but it is. assert(arr[1] == 3); assert(arr[2] == 4); assert(arr[3] == 5); } But maybe I don't understand what slices are for in D. Anyway in Go this works whithout reallocation. Ali
Re: I don't like slices in D
On Thursday, 17 October 2013 at 18:00:20 UTC, Vitali wrote: I expected slices to be in D (http://dlang.org/arrays.html) like they are in Go (http://blog.golang.org/go-slices-usage-and-internals). But they are not. Why the array have to be reallocated after the length of a slice is changed? It makes slices useless. Here a little example (it fails): void testSlices() { int[] dArr = [10, 11, 12]; int[] dSlice = dArr[0..2]; dSlice.length++; assert(dArr[2]==dSlice[2]); // failure } As David Eagen said, your problem is that in D, dArr[0..2] is not inclusive, it's exclusive, so you get dArr[0] and dArr[1]. Changing it to dSlice = dArr[0..3] will work (or better, dArr[0..$]). However, there's something else going on here that's fishy:

void testSlices() {
    int[] dArr = [10, 11, 12];
    int[] dSlice = dArr[0..2];
    writeln(dArr.ptr, " ", dArr.capacity, " ", dArr.length);
    writeln(dSlice.ptr, " ", dSlice.capacity, " ", dSlice.length);
    dSlice.length++;
    writeln(dSlice.ptr, " ", dSlice.capacity, " ", dSlice.length);
    writeln(dArr);
    writeln(dSlice);
}

4002DFF0 3 3
4002DFF0 0 2
4002DFE0 3 3
[10, 11, 12]
[10, 11, 0]

dSlice says that it has length 2, but accessing dSlice[2] does not produce a RangeError... Likewise, printing it out prints 3 elements, not 2. This looks like a bug to me.
Re: I don't like slices in D
dSlice says that it has length 2, That's before appending an element. but accessing dSlice[2] does not produce a RangeError... That's after appending an element. Ali
Re: I don't like slices in D
On Thursday, October 17, 2013 20:31:26 Vitali wrote: I repeat my question: Why does a reallocation occur AFTER a resize of the length of a slice, although there is still capacity? Because it _doesn't_ have the capacity. If you do writeln(dSlice.capacity); before incrementing dSlice's length, it'll print out 0. dSlice can't grow in place, because if it did, its new elements would overlap with those of dArr:

dArr   - [10, 11, 12]
dSlice - [10, 11]

++dSlice.length;

dArr   - [10, 11, 12]
dSlice - [10, 11, 0]

You can't make a slice cover more of another array without reslicing that array. e.g. dSlice = dArr[0 .. $]

dArr   - [10, 11, 12]
dSlice - [10, 11, 12]

Appending to a slice or increasing its length (which is just appending to it with the init value of its element type) is adding a new element which is not part of any other array. If there is capacity available, then that will be done in place, and the slice will still be a slice of the array that it was sliced from. But if there isn't any capacity available (and there isn't in your example, because the original array has claimed that space), then the runtime is forced to reallocate the slice in order to make space, and then it's no longer a slice of the original array. - Jonathan M Davis
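The reslicing approach described above can be shown in a few lines; a minimal sketch:

```d
void main()
{
    int[] dArr = [10, 11, 12];
    int[] dSlice = dArr[0 .. 2];

    // Grow the view by slicing dArr again, not by appending to dSlice:
    dSlice = dArr[0 .. 3];

    assert(dSlice.ptr == dArr.ptr); // still the same storage
    assert(dSlice[2] == dArr[2]);   // no copy, no reallocation
}
```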
Re: I don't like slices in D
On Thursday, 17 October 2013 at 18:51:08 UTC, Meta wrote: As David Eagen said, your problem is that in D, dArr[0..2] is not inclusive, it's exclusive, so you get dArr[0] and dArr[1]. Changing it to dSlice = dArr[0..3] will work (or better, dArr[0..$]). However, there's something else going on here that's fishy: void testSlices() { int[] dArr = [10, 11, 12]; int[] dSlice = dArr[0..2]; writeln(dArr.ptr, , dArr.capacity, , dArr.length); writeln(dSlice.ptr, , dSlice.capacity, , dSlice.length); dSlice.length++; writeln(dSlice.ptr, , dSlice.capacity, , dSlice.length); writeln(dArr); writeln(dSlice); } 4002DFF0 3 3 4002DFF0 0 2 4002DFE0 3 3 [10, 11, 12] [10, 11, 0] dSlice says that it has length 2, but accessing dSlice[2] does not produce a RangeError... Likewise, printing it out prints 3 elements, not 2. This looks like a bug to me. Never mind, that was extremely silly of me.
Re: D and event-based programming
vibed.org
Re: Help needed testing automatic win64 configuration
On Thursday, 17 October 2013 at 08:08:08 UTC, Manu wrote: That one's working for me. It still looks a little funny though: This is all behaving as intended. I'll explain below.

; default to 32-bit linker (can generate 64-bit code) that has a common path
; for VS2010, VS2012, and VS2013. This will be overridden below if the
; installer detects VS.
LINKCMD=%VCINSTALLDIR%\bin\link.exe

; -
; This enclosed section is specially crafted to be activated by the Windows
; installer when it detects the actual paths to VC and SDK installations so
; modify this in the default sc.ini within the git repo with care
;
; End users: You can fill in the path to VC and Windows SDK and uncomment out
; the appropriate LINKCMD to manually enable support
;
; Windows installer replaces the following lines with the actual paths
VCINSTALLDIR=C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC\
WindowsSdkDir=C:\Program Files (x86)\Windows Kits\8.0\

Notice that it refers to LINKCMD=%VCINSTALLDIR%\... at the top, but VCINSTALLDIR is not set until down lower. There are two goals with the new sc.ini. 1. Work as is with all supported versions of VS so long as the user is within the Visual Studio Command Prompt (I take it you gamedev guys never had to build early versions of Boost? :P). I know you don't use it but many people do. 2. Work outside of the VS Command Prompt if the installer can detect a VS installation. To satisfy both goals I define a LINKCMD that works with all versions of Visual Studio (the 32-bit linker, which can compile 64-bit code, has the same tail path for every VS version) and add to LIB the VS/SDK tail paths for all versions of the VS/SDK. Then in the installer I modify sc.ini to define VCINSTALLDIR and WindowsSdkDir based on the detected VS/SDK installation and override the LINKCMD with a better version (the 64-bit linker). This is why it's not a problem that VCINSTALLDIR is defined below its first use in LINKCMD. The value is just replaced if VCINSTALLDIR gets defined.
Then later in the file:

; Platform libraries (Windows SDK 8)
LIB=%LIB%;%WindowsSdkDir%\Lib\win8\um\x64

; Platform libraries (Windows SDK 7)
LIB=%LIB%;%WindowsSdkDir%\Lib\x64

The first one (Win8 SDK) is correct, but the second path (Win7 SDK) doesn't exist. The Win7 SDK is at Microsoft SDKs\Windows\v7.0A on my machine (installed by VS2010). None of this seems to cause DMD to fail, but it may be confusing to have technically erroneous settings. Sticking all the possible lib paths in there is a bit unhygienic but is extremely unlikely to create a problem in practice, so I think it's worth it so we can have a much better user experience. We can just tell users to use the VS Command Prompt or the installer rather than having them modify sc.ini themselves, which, from the many NG posts about trying and failing to get 64-bit working, we know is error prone. This also makes the installer less brittle to sc.ini changes than it would be if I did much more complicated enabling/disabling.
Re: I don't like slices in D
I didn't expect that you would doubt that the array gets reallocated. Here is code to test:

void appendElement(ref int[] arr, int x) {
    arr ~= x;
}

void removeElement(ref int[] arr, int index) {
    arr = arr[0..index] ~ arr[index+1..$];
}

void main() {
    int* arrPtr1;
    int* arrPtr2;
    int[] arr = [1, 2, 3];
    arr.reserve(7);             // Reserve capacity.
    arr.appendElement(4);       // I don't want a reallocation
    arrPtr1 = arr.ptr;
    assert(arr.capacity == 7);
    arr.removeElement(1);       // to happen here,
    assert(arr.capacity == 3);
    arr.appendElement(5);       // but it happens.
    arrPtr2 = arr.ptr;
    assert(arr[1] == 3);
    assert(arr[2] == 4);
    assert(arr[3] == 5);
    assert(arrPtr1 != arrPtr2); // different arrays - here
}

I'm not accusing D of having a bug here. I'm saying that in my eyes a reallocation of the array referenced by the slice is not useful when capacity is still available. @Ali Çehreli: You are right, Go's slices consist of three members. I have read it, although I didn't test the code. In http://blog.golang.org/go-slices-usage-and-internals it is said: "Slicing does not copy the slice's data. It creates a new slice value that points to the original array. This makes slice operations as efficient as manipulating array indices." I repeat: "[...] as efficient as manipulating array indices." In D this is not the case, and I can't imagine what other purpose it can serve. @Meta and @David Eagen: My question is not about creating slices, but about making them shrink and grow without reallocating the referenced array.
Re: Help needed testing automatic win64 configuration
On Thursday, 17 October 2013 at 07:15:48 UTC, Rainer Schuetze wrote: On 17.10.2013 01:49, Manu wrote: On 17 October 2013 02:51, dnewbie r...@myopera.com mailto:r...@myopera.com wrote: 1. 64-bit link.exe: C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\Bin\amd64\link.exe 2. 64-bit C Runtime libraries: C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\lib\amd64 3. 64-bit Windows import libraries: C:\Program Files\Microsoft SDKs\Windows\v7.0\Lib\x64 That looks like VS2008, does VisualD work under VS2008? Either way, the DMD installer should support detection of those paths too. VS2008 is supported, still my preferred version of VS ;-) Does VS2008 and SDK 6 work for 64-bit?
Re: I don't like slices in D
On Thursday, October 17, 2013 21:23:36 Vitali wrote: @Meta and @David Eagen: My question is not about creating slices, but about make them shrink and grow without reallocating the referenced array. The only way to make a slice grow and slice more of the array that it was a slice of is to reslice the original array and have the new slice be bigger. There isn't really any difference between a slice of an array and an array in D. They're both slices of a block of memory that the runtime owns. It's just that when you create an array by slicing an existing array, they end up referring to the same elements until one of them gets reallocated (which generally occurs when you append to one of them, and there is no available capacity for that array to grow - either because it's at the end of that block of memory or because another array refers to the memory past the end of that array). So, while a slice of an array is effectively a window into that array, it's really that both arrays are windows into a block of memory that the runtime owns, and so the runtime isn't going to treat an array differently just because it was created from slicing another array instead of by explicitly allocating it. In general, when you slice arrays, and you want the slice to continue to refer to the same elements, you only shrink the slice (generally by slicing it further, though you can also decrement its length), and if you want to increase the slice and still have it refer to the same elements as the original array rather than having it reallocate, then you need to reslice the original array with a greater length rather than manipulating the length of the current slice. - Jonathan M Davis
Re: I don't like slices in D
On Thursday, 17 October 2013 at 19:23:37 UTC, Vitali wrote: I'm not accusing D of having a bug here. I'm saying that in my eyes a reallocation of the array referenced by the slice is not useful when capacity is still available.

There is no capacity available for a binary concatenation like that. Given this:

arr = arr[0..index] ~ arr[index+1..$];

arr[index] is, as far as this function can tell anyway, still in use. The concat operator never overwrites in-use data. Consider this example:

int[] a = [1, 2, 3, 4];

void foo(int[] b) {
    int[] c = b ~ 4;
}

foo(a[0 .. 2]);

Surely you wouldn't expect a == [1, 2, 4, 4] now. But that's what would happen if your remove function didn't allocate! a[0 .. 2] would become [1, 2] with a capacity of 4, because that's the original length. Then b ~ 4 sees that there's capacity and appends in place, writing b.length++; b[2] = 4; but assert(b[2] is a[2]), so that just overwrote a's data. And since b and c are both local to foo, that's certainly not what you intended.

Your example is a little less clear, since it takes the slice by reference, but it is still the same idea: there might be other slices to that same data that should not be overwritten. A new allocation is the only way to be sure. If you want to overwrite it though, your remove function can just copy the data itself without reallocation.

Slicing does not copy the slice's data. It creates a new slice value that points to the original array. This makes slice operations as efficient as manipulating array indices. D does slicing the same way. Your problem is with appending/concatenating.
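The "copy the data itself without reallocation" alternative can be sketched with std.algorithm's remove, which shifts the tail elements left in place and returns the shortened slice (removeElementInPlace is an illustrative name, not from the thread):

```d
import std.algorithm : remove;

void removeElementInPlace(ref int[] arr, size_t index)
{
    // remove() overwrites arr[index] by shifting the tail left,
    // then returns the shortened slice; no new allocation, but any
    // other slice of the same memory will see the shifted data.
    arr = arr.remove(index);
}

void main()
{
    int[] a = [1, 2, 3, 4];
    auto oldPtr = a.ptr;

    a.removeElementInPlace(2);
    assert(a == [1, 2, 4]);
    assert(a.ptr == oldPtr); // same storage, just shorter
}
```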
Re: I don't like slices in D
On Thursday, 17 October 2013 at 18:00:20 UTC, Vitali wrote: int[] dArr = [10, 11, 12]; int[] dSlice = dArr[0..2]; Oh, and there is no difference between dynamic arrays and slices of them in D (as shown by the fact that the type is the same). David
Re: I don't like slices in D
On Thursday, 17 October 2013 at 18:00:20 UTC, Vitali wrote: Why does the array have to be reallocated after the length of a slice is changed? It makes slices useless.

No, it doesn't. They are extremely helpful, particularly for high-performance applications.

Here is a little example (it fails):

void testSlices() {
    int[] dArr = [10, 11, 12];
    int[] dSlice = dArr[0..2];
    dSlice.length++;
    assert(dArr[2] == dSlice[2]); // failure
}

Of course it does. Increasing the length of an array causes the new elements to be default-initialized per the language spec, and as dArr and dSlice share the same memory, not reallocating dSlice would clobber the third element of dArr.
Re: I don't like slices in D
On Thursday, 17 October 2013 at 18:41:53 UTC, Vitali wrote:

void removeElement(ref int[] arr, int index) {
    arr = arr[0..index] ~ arr[index+1..$];
}

If you know that 'arr' is the only reference to this piece of data, you can use arr.assumeSafeAppend() to enable re-use of the remaining storage: http://dlang.org/phobos/object.html#.assumeSafeAppend

David
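Putting the two suggestions together, a removeElement that reuses the existing storage might look like the following. This is only a sketch, and it assumes arr really is the only slice referencing that data - otherwise assumeSafeAppend lets later appends stomp on the other slices.

```d
void removeElement(ref int[] arr, size_t index)
{
    // Shift the tail left over the removed element...
    foreach (i; index .. arr.length - 1)
        arr[i] = arr[i + 1];
    arr = arr[0 .. $ - 1];

    // ...and tell the runtime it may reuse the trailing capacity
    // for future appends. Only safe if no other slice references
    // the elements past the new length.
    arr.assumeSafeAppend();
}

void main()
{
    int[] a = [1, 2, 3, 4];
    auto oldPtr = a.ptr;

    a.removeElement(1);
    assert(a == [1, 3, 4]);

    a ~= 99;                  // fits in the original allocation,
    assert(a.ptr == oldPtr);  // so it appends in place
    assert(a == [1, 3, 4, 99]);
}
```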
Re: I don't like slices in D
On Thursday, 17 October 2013 at 19:05:32 UTC, Jonathan M Davis wrote: before incrementing dSlice's length, it'll print out 0. dSlice can't grow in place, because if it did, its new elements would overlap with those of dArr:

dArr   - [10, 11, 12]
dSlice - [10, 11]

Well, this is the one part that makes no sense in my eyes. I thought the slices should be overlapping. Here, step by step:

void main() {
    int* arrPtr1;
    int* arrPtr2;
    int[] arr = [1, 2, 3];
    arrPtr1 = arr.ptr;

    arr.reserve(5);                // reserve capacity
    arrPtr2 = arr.ptr;
    assert(arrPtr1 != arrPtr2);    // ok, this makes sense
    assert(arr.capacity == 7);     // this makes less sense, but it's
                                   // bigger than the 5 I reserved;
                                   // I got more than I needed, but ok

    arr ~= 4;                      // append an element
    assert(arr[3] == 4);
    arrPtr1 = arr.ptr;
    assert(arrPtr1 == arrPtr2);    // no reallocation,
    assert(arr.capacity == 7);     // good - I had enough capacity to
                                   // append an element; everything
                                   // went fine

    arr.length++;
    assert(arr[4] == 0 && arr.length == 5);
    arrPtr1 = arr.ptr;
    assert(arrPtr1 == arrPtr2);    // still no reallocation,
    assert(arr.capacity == 7);     // very good - direct manipulation
                                   // of the length also works, as long
                                   // as the new value is bigger than
                                   // the old length

    arr.length--;
    arrPtr1 = arr.ptr;
    assert(arrPtr1 == arrPtr2);    // good, but..
    assert(arr.capacity == 0);     // <- WHY ??
                                   // after the length is reduced, the
                                   // capacity is set to ZERO; this will
                                   // cause the array to be reallocated
                                   // the next time the length is
                                   // increased - but what is the
                                   // purpose of this?

    arr.length++;
    arrPtr1 = arr.ptr;
    assert(arrPtr1 != arrPtr2);    // different arrays now!
    assert(arr.capacity == 7);     // yes, as expected
}
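The zero capacity is the runtime being conservative: after shrinking, it can no longer prove that the trailing element isn't referenced by some other slice, so in-place appending (which would stomp that element) is disabled until assumeSafeAppend asserts otherwise. A small sketch of that round trip, assuming the array is the only reference to its data:

```d
void main()
{
    int[] arr = [1, 2, 3];

    arr.length--;                // shrink: capacity now reports 0,
    assert(arr.capacity == 0);   // because appending in place could
                                 // overwrite arr[2], which another
                                 // slice might still reference

    // If we know arr is the only reference, reclaim the space:
    arr.assumeSafeAppend();
    auto p = arr.ptr;

    arr ~= 42;                   // appends in place again
    assert(arr.ptr == p);
    assert(arr == [1, 2, 42]);
}
```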
Re: I don't like slices in D
On Thursday, 17 October 2013 at 19:23:37 UTC, Vitali wrote: I repeat [...] as efficient as manipulating array indices. In D this is not the case, and I can't imagine what other purpose it can serve.

This is also the case for D. Without knowing the Go design in detail, I'd say the major difference between them is that in D, you can always be sure that *extending* a slice never leads to overwriting some memory you don't know about:

---
void appendAnswer(int[] data) {
    data ~= 42;
}

void main() {
    auto a = [1, 2, 3, 4];
    auto b = a[0 .. $ - 1];

    void dump() {
        import std.stdio;
        writefln("a = %s, b = %s (%s)", a, b, b.capacity);
    }

    dump();

    // Now let's assume b would still have all the capacity.
    b.assumeSafeAppend();
    dump();

    appendAnswer(b);
    dump();
}
---

The D design makes sure that if your piece of code hands off a slice into some buffer, other parts can (by default) never end up actually modifying anything but that slice. There is quite a benefit to that, design-wise.

David
Re: I don't like slices in D
On Thursday, 17 October 2013 at 20:01:37 UTC, David Nadlinger wrote: On Thursday, 17 October 2013 at 18:41:53 UTC, Vitali wrote: void removeElement(ref int[] arr, int index) { arr = arr[0..index] ~ arr[index+1..$]; } If you know that 'arr' is the only reference to this piece of data, you can use arr.assumeSafeAppend() to enable re-use of the remaining storage: http://dlang.org/phobos/object.html#.assumeSafeAppend David

This is maybe what I have missed. I will take a closer look at it. Thank you.

Thanks to everyone who has posted here. I think the function mentioned by David Nadlinger will solve my problems. That's all for today ^^.