Re: [OT Security PSA] Shellshock: Update your bash, now!
On 1 October 2014 06:09, Nick Sabalausky via Digitalmars-d-announce digitalmars-d-announce@puremagic.com wrote: Don't mean to be alarmist, but I'm posting this in case anyone else is like me and hasn't been paying attention since this news broke (AIUI) about a week ago. Apparently bash has its own heartbleed now, dubbed shellshock. Warm fuzzy flashbacks of TMNT: The Arcade Game aside, this appears to be pretty nasty *and* it affects pretty much every version of bash ever released. And of course bash exists on practically everything, so... pretty big deal. Security sites, blogs-o'-spheres, cloudosphere, etc. are all over this one. (Don't know how I managed to miss it until now.) Patches have been issued (and likely more to come, from what I gather), so: Go update bash on all your computers and servers, NOW. No, don't hit reply, do it now. Personally, I'd keep updating fairly frequently until the whole matter settles down a bit. At work we do two things: 1) Subscribe our main email to the Debian Security ML, so we tend to know about any vulnerabilities that need patching at least 24 hours before they hit the media. 2) Use an automated configuration management system, such as Puppet. By the time we read the initial email, the fix had already been applied to all servers without manual intervention. ;) Of course, merely updating your packages is not enough to keep you safe. You must also consider which front-facing applications are using the now-patched software, and restart them. For example, to find processes still mapping a replaced (deleted) library: grep libvulnerable /proc/*/maps | grep deleted Iain
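A quick way to check whether a given bash is affected, before and after patching, is the widely circulated one-liner for the original CVE-2014-6271 vector (the variable name x is arbitrary; the follow-up CVEs need separate checks):

```shell
# Shellshock check: a vulnerable bash executes the command smuggled in
# after a function definition passed via an environment variable.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
# A patched bash prints only "this is a test"; a vulnerable one prints
# "vulnerable" first.
```

Re-run it after updating to confirm the patch actually took effect in the bash binary your shell resolves to.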
Re: [OT Security PSA] Shellshock: Update your bash, now!
On 10/1/14 1:09 AM, Nick Sabalausky wrote: Patches have been issued (and likely more to come from what I gather), so: FWIW, MacOS X now has an update for bash that fixes the bug, apparently came out last night. http://support.apple.com/kb/HT6495 -Steve
Re: [OT Security PSA] Shellshock: Update your bash, now!
On Wednesday, 1 October 2014 at 05:09:45 UTC, Nick Sabalausky wrote: Other OSes/distros are likely equally easy. Please, reply with examples to help ensure other people on the same OS/distro as you have no excuse not to update! I find it ironic that it's another big global security hole that Windows users don't even have to be concerned about.
Re: [OT Security PSA] Shellshock: Update your bash, now!
On Wednesday, 1 October 2014 at 13:41:43 UTC, JN wrote: On Wednesday, 1 October 2014 at 05:09:45 UTC, Nick Sabalausky wrote: I find it ironic that it's another big global security hole that Windows users don't even have to be concerned about. That's of course very true, since Windows runs on no serious servers.
Re: [OT Security PSA] Shellshock: Update your bash, now!
On Wednesday, 1 October 2014 at 13:58:25 UTC, eles wrote: On Wednesday, 1 October 2014 at 13:41:43 UTC, JN wrote: On Wednesday, 1 October 2014 at 05:09:45 UTC, Nick Sabalausky wrote: I find it ironic that it's another big global security hole that Windows users don't even have to be concerned about. That's of course very true, since Windows runs on no serious servers. You would be surprised how some Fortune 500 companies are doing their serious work on 100% Windows servers. Sadly, I need to comply with NDAs. -- Paulo
Re: [OT Security PSA] Shellshock: Update your bash, now!
On Wednesday, 1 October 2014 at 14:29:16 UTC, Paulo Pinto wrote: You would be surprised how some Fortune 500 companies are doing their serious work in 100% Windows servers. Sadly I need to comply with NDAs. Isn't NASDAQ enough?
Re: [OT Security PSA] Shellshock: Update your bash, now!
On Wednesday, 1 October 2014 at 05:09:45 UTC, Nick Sabalausky wrote: Apparently bash has its own heartbleed now, dubbed shellshock. Does it affect dash? Also, how does one update software on Linux? Last I checked, when a new version is out, the repository for the previous version becomes utterly abandoned. A pity; on Windows one can roll out new software versions for as long as they are maintained.
Re: [OT Security PSA] Shellshock: Update your bash, now!
On Wednesday, 1 October 2014 at 14:44:06 UTC, Kagamin wrote: Also, how does one update software on Linux? Last I checked, when a new version is out, the repository for the previous version becomes utterly abandoned. A pity; on Windows one can roll out new software versions for as long as they are maintained. This claim is so strange I can't even understand what it is about. Which repositories get abandoned?
Re: [OT Security PSA] Shellshock: Update your bash, now!
On Wednesday, 1 October 2014 at 14:44:06 UTC, Kagamin wrote: On Wednesday, 1 October 2014 at 05:09:45 UTC, Nick Sabalausky wrote: Does it affect dash? No. It is a bashism, i.e. an extension specific to Bash. Busybox users are not affected either. A pity; on Windows one can roll out new software versions for as long as they are maintained. It depends on the software (many abandoned Windows XP while it was still officially supported), and you shouldn't ask about the quality of this software either. The same effort that goes into newer versions does not go into legacy versions. BTW, updating software on Windows is the biggest PITA of them all (except maybe some medieval tortures). You have to install software manually, one program after another. The first thing that I love about Linux is the centralized update.
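On a Debian/Ubuntu system, for example, that centralized update amounts to two commands plus a sanity check (apt front-end and package name assumed; other package managers such as yum or pacman have equivalents):

```shell
# Refresh package metadata, then upgrade only the bash package:
#   sudo apt-get update
#   sudo apt-get install --only-upgrade bash
# Confirm which version is now installed:
bash --version | head -n 1
```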
Re: Digger 1.0
On Tuesday, 30 September 2014 at 09:35:20 UTC, Marco Leise wrote: So why would Apple be able to get away with 1GB on its just-released iPhone 6? Maybe 1048576 kilobytes is enough for everyone? ARC is more memory-efficient than a mark-and-sweep GC like the one JavaScript uses. Though a lot of it is just that iOS developers are simply very careful about memory use. Writing a performant game on iOS is really quite hard because of the memory constraints.
Re: [OT Security PSA] Shellshock: Update your bash, now!
On 10/1/14 12:57 PM, Kagamin wrote: On Wednesday, 1 October 2014 at 15:48:58 UTC, Dicebot wrote: This claim is so strange I can't even understand what it is about. Which repositories get abandoned? Repositories for anything but the latest version of the OS, because only the latest version receives development. That is, if the OS doesn't have rolling updates. https://wiki.ubuntu.com/LTS -Steve
Re: [OT Security PSA] Shellshock: Update your bash, now!
On Wednesday, 1 October 2014 at 16:57:07 UTC, Kagamin wrote: On Wednesday, 1 October 2014 at 15:45:26 UTC, eles wrote: Repositories for anything but the latest version of the OS, because only the latest version receives development. That is, if the OS doesn't have rolling updates. What is the difference wrt Microsoft phasing out a Windows version? Except that upgrading from one Windows version to the next is such a PITA that even the Brazen Bull seems like a nice couch.
Re: [OT Security PSA] Shellshock: Update your bash, now!
On 1 October 2014 18:12, Steven Schveighoffer via Digitalmars-d-announce digitalmars-d-announce@puremagic.com wrote: On 10/1/14 12:57 PM, Kagamin wrote: On Wednesday, 1 October 2014 at 15:48:58 UTC, Dicebot wrote: This claim is so strange I can't even understand what it is about. Which repositories get abandoned? Repositories for anything but the latest version of the OS, because only the latest version receives development. That is, if the OS doesn't have rolling updates. https://wiki.ubuntu.com/LTS One nice thing about Ubuntu is that they even give you access to future kernel versions through what they call HWE (hardware enablement). In short, I can run a 14.04 LTS kernel on a 12.04 server, so that I'm able to use modern hardware and take advantage of software that uses features of Linux that are actively worked on (like LXC) on an older software stack. Iain.
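For reference, enabling an HWE stack is itself just a package install - e.g. on 12.04, pulling in the 14.04 ("trusty") kernel via Ubuntu's lts-&lt;codename&gt; package naming convention (the exact package name here is an assumption; check the Ubuntu wiki for your release):

```shell
# Install the trusty hardware-enablement kernel on a 12.04 machine:
#   sudo apt-get install --install-recommends linux-generic-lts-trusty
# After rebooting, verify which kernel is actually running:
uname -r
```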
Re: [OT Security PSA] Shellshock: Update your bash, now!
On Wednesday, 1 October 2014 at 16:57:07 UTC, Kagamin wrote: On Wednesday, 1 October 2014 at 15:45:26 UTC, eles wrote: The first thing that I love about Linux is the centralized update. The downside is that it's taken down centrally too, while distributed Windows software continues to work independently of each other. On Wednesday, 1 October 2014 at 15:48:58 UTC, Dicebot wrote: This claim is so strange I can't even understand what it is about. Which repositories get abandoned? Repositories for anything but the latest version of the OS, because only the latest version receives development. That is, if the OS doesn't have rolling updates. This is simply telling lies, sorry. All distros that don't have a rolling release model provide LTS versions that get all important updates (including security updates, of course) for years. For example, Ubuntu LTS support lasts for 4 years, during which one can count on fast updates. And even after that period your distro does not magically disappear; you are simply forced to install necessary updates manually (as opposed to a one-click / one-command update from the repo), basically getting you back to Windows' _default_ state of things.
Re: [OT Security PSA] Shellshock: Update your bash, now!
On Wednesday, 1 October 2014 at 18:42:41 UTC, Kagamin wrote: I have a Linux Mint 12 installation with mint4win (wubi); on the Linux Mint forums I was told that updating from the latest repository won't work. I would be grateful if you could explain how to upgrade it to the latest version. Yeah, theoretically it should be able to just overwrite files on disk without paying much attention to the nature of the disk. Linux Mint 12 is not an LTS release (and is _insanely_ old). You are supposed to do regular full upgrades with non-LTS releases; this is why the bash update was not propagated to its repositories. However, you can simply go to http://packages.linuxmint.com/search.php?keyword=bash&release=any&section=any and download a .deb package of a more recent release from there to install manually. It may or may not work depending on how compatible the dependencies are. This is a very unpleasant experience compared to sticking to an LTS or up-to-date distro, but pretty much on the same level as what you normally get on Windows all the time. And with a little time investment it is miles and miles ahead of any possible Windows experience you can get even theoretically (speaking exclusively about the upgrade/update process here).
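A sketch of that manual route (the file name and pool URL below are hypothetical; take the real link from the package search page mentioned above):

```shell
# Pick the .deb matching your architecture from the search results.
deb="bash_4.3-7ubuntu1_amd64.deb"   # hypothetical file name
echo "fetch and install: $deb"
# In practice:
#   wget http://packages.linuxmint.com/pool/main/b/bash/$deb
#   sudo dpkg -i "$deb"       # dpkg refuses cleanly on a dependency mismatch
#   sudo apt-get -f install   # repair things if dependencies conflict
```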
Re: [OT Security PSA] Shellshock: Update your bash, now!
On 10/01/2014 03:19 PM, Brad Roberts via Digitalmars-d-announce wrote: On 10/1/2014 6:41 AM, JN via Digitalmars-d-announce wrote: On Wednesday, 1 October 2014 at 05:09:45 UTC, Nick Sabalausky wrote: Other OSes/distros are likely equally easy. Please, reply with examples to help ensure other people on the same OS/distro as you have no excuse not to update! I find it ironic that it's another big global security hole that Windows users don't even have to be concerned about. False. All of my Windows boxes needed to be updated. One of the first things I do on any new Windows box is install Cygwin to get a saner development environment with bash as my shell. Yea. I've been very tempted to put bash on my Win desktops as well. Heck, I may even have some old installation of msys/mingw bash still lying around somewhere. I wouldn't be shocked at all if other Windows apps bundle bash for one reason or another too. It might not come as part of the base install (though given the huge pile of stuff that gets installed, I wouldn't put huge bets on it not lurking off in a dark corner somewhere), but that's not the end of the story. Yup, Git comes to mind. (Or at least Git GUI?) Don't know whether that actually exposes any attack vectors, but I guess that's kinda the big question everyone's trying to find out, isn't it? What are all the possible attack vectors of this flaw? Some of them have been discovered, but who knows what else there may be.
Re: [OT Security PSA] Shellshock: Update your bash, now!
On 10/01/2014 02:42 PM, Kagamin wrote: I have a Linux Mint 12 installation with mint4win (wubi); on the Linux Mint forums I was told that updating from the latest repository won't work. I sympathize: http://www.linuxquestions.org/questions/linux-software-2/how-to-install-enlightenment-on-mint-15-a-4175492936/ That annoyance is why (aside from servers) I've switched to rolling-release distros. In my case, Debian Testing (which, as I've been told by others here, and can personally confirm, is much more stable than its unfortunately chosen name would suggest). I picked that one since I'm most familiar with the general Debian family of distros (apt-get and all). But I've heard good things about Arch too and may look into it. FWIW, I don't think all release-based distros are quite as aggressive as Mint about abandoning older releases. Even the super-outdated Debian 6 apparently still has some support via its LTS repos. I suspect Mint may need to do things that way simply as a manpower issue. Mint's a popular distro, but I get the impression its development is a relatively small grassroots effort with much more limited resources than, say, Debian or Ubuntu. (Of course, I could be wrong.)
Re: [OT Security PSA] Shellshock: Update your bash, now!
On 10/01/2014 01:38 PM, Iain Buclaw via Digitalmars-d-announce wrote: One nice thing about Ubuntu is that they even give you access to future kernel versions through what they call HWE. In short, I can run a 14.04 LTS kernel on a 12.04 server, so that I'm able to use modern hardware and take advantage of software that uses features of Linux that are actively worked on (like LXC) on an older software stack. Is there anything similar in Debian?
Re: [OT Security PSA] Shellshock: Update your bash, now!
On Wednesday, 1 October 2014 at 20:45:14 UTC, Nick Sabalausky wrote: I suspect Mint may need to do things that way just as a manpower issue. Mint's a popular distro, but I get the impression its development is a relatively small grassroots thing with much more limited resources than say Debian or Ubuntu. (Of course, I could be wrong.) This matches my observations too. It gained a lot of popularity when Ubuntu switched to Unity as the default desktop environment and Fedora moved to GNOME 3 - quite a few users started looking for a distro with more conservative defaults. However, its development/maintenance team does not seem to have grown to match that popularity burst.
Re: RFC: moving forward with @nogc Phobos
On Monday, 29 September 2014 at 10:49:53 UTC, Andrei Alexandrescu wrote: Back when I've first introduced RCString I hinted that we have a larger strategy in mind. Here it is. Slightly related :) https://github.com/D-Programming-Language/phobos/pull/2573
How to build phobos docs for dlang.org
I am in the process of working on some documentation improvements for Phobos. I am running into an issue while testing. Namely, I do not know how to build the ddocs for Phobos in quite the way that dlang.org does. I can build them with: make -f posix.mak DMD=TheRightOne html But everything is poorly formatted, and more importantly, there's some wizardry going on to make std.container look like one file on dlang.org, and I therefore cannot find out how to preview my changes to the several files that actually compose that package. In other words, if I go to the page for std_string.html, it works perfectly, but if I try to go to std_container.html, it does not exist because there is no container.d file. If I build dlang.org separately, I cannot follow the library reference link. The makefile for dlang.org includes rules for phobos-release and phobos-prerelease, but as far as I can tell, this does not generate the content I need (or I am not able to easily find it). If I copy the fully-built Phobos html build into dlang.org/web/phobos, then I can see the pages with the familiar dlang.org color scheme and layouts, but std_container.html still does not exist, and that is my fundamental problem. This should really be documented somewhere. If nowhere else, this file seems appropriate: https://github.com/D-Programming-Language/dlang.org/blob/master/CONTRIBUTING.md I hereby volunteer to document whatever answer I am given.
Re: So what exactly is coming with extended C++ support?
On Tuesday, 30 September 2014 at 21:19:44 UTC, Ethan wrote: On Tuesday, 30 September 2014 at 08:48:19 UTC, Szymon Gatner wrote: Considering how many games (and I don't mean indie anymore, but for example Blizzard's Hearthstone) are now created in Unity, which not only uses a GC but runs on Mono, I am very skeptical of anybody claiming GC is a no-go for games. *Especially* since a native executable is being built in the case of D. I realize AAAs have their reasons against GC, but in that case one should probably just get a UE4 license anyway. Hello. AAA developer (Remedy) here using D. Custom tech, with a custom binding solution written originally by Manu and continued by myself. A GC itself is not a bad thing. The implementation, however, is. With a codebase like ours (mostly C++, some D), there are a few things we need. Deterministic garbage collection is a big one - when our C++ object is being destroyed, we need the D object to be destroyed at the same time in most cases. This can be handled by calling GC.collect() often, but that's where the next thing comes in - the time the GC needs. If the time isn't being scheduled at object destruction, then it all gets lumped together in the GC collect. It automatically moves the time cost to a place where we may not want it. ARC garbage collection would certainly be beneficial there. I looked into adding support at a language level and at a library level for it, but the time it would have taken for me to learn both of those well enough to not muck it up is not feasible. Writing a garbage collector that we have greater control over would also take up too much time. The simpler solution is to enforce coding standards that avoid triggering the GC. It's something I will look at again in the future, to be sure. And also to be sure, nothing is being done in Unity on the scale we do stuff in our engine (at least, nothing in Unity that also doesn't use a ton of native code to bypass Unity's limitations).
GC.free() can be used to manually delete GC-allocated data. (destroy() must be called first to run the destructor, though.) delete does both but is deprecated. You could write a simple RAII pointer wrapper if you don't want to always call destroy()+GC.free() manually. Or do you need something else?
Re: So what exactly is coming with extended C++ support?
On Tuesday, 30 September 2014 at 21:19:44 UTC, Ethan wrote: [same post, quoted in full earlier in the thread]
Thanks for the feedback, quite interesting.
Re: @safe pure nothrow compiler inference
On 9/29/2014 7:40 AM, Daniel N wrote: On Monday, 29 September 2014 at 14:32:16 UTC, Atila Neves wrote: So somehow I missed that for template functions the attributes can be inferred. From what I can tell it has to do with having the body available. But when not using .di files, why can't it be done for regular functions? Atila It can be done, Walter wanted to do it, but there was large resistance, mainly because library APIs would become unstable, possibly changing between every release. I wanted to do it for auto returning functions, since they require a function body. IMHO it would be safe to use inference for private functions... Not a bad idea. Please file as an enhancement request. The more attribute inference we can do, the better.
Re: RFC: moving forward with @nogc Phobos
On 9/30/14, 9:10 AM, Sean Kelly wrote: On Monday, 29 September 2014 at 10:49:53 UTC, Andrei Alexandrescu wrote: The policy is a template parameter to functions in Phobos (and elsewhere), and informs the functions e.g. what types to return. Consider: auto setExtension(MemoryManagementPolicy mmp = gc, R1, R2)(R1 path, R2 ext) if (...) { static if (mmp == gc) alias S = string; else alias S = RCString; S result; ... return result; } Is this for exposition purposes or actually how you expect it to work? That's pretty much what it would take. The key here is that RCString is almost a drop-in replacement for string, so the code using it is almost identical. There will be places where code needs to be replaced, e.g. auto s = literal; would need to become S s = literal; So creation of strings will change a bit, but overall there's not a lot of churn. Quite honestly, I can't imagine how I could write a template function in D that needs to work with this approach. You mean write a function that accepts a memory management policy, or a function that uses one? As much as I hate to say it, this is pretty much exactly what C++ allocators were designed for. They handle allocation, sure, but they also hold aliases for all relevant types for the data being allocated. If the MemoryManagementPolicy enum were replaced with an alias to a type that I could use to at least obtain relevant aliases, that would be something. But even that approach dramatically complicates code that uses it. I think making MemoryManagementPolicy a meaningful type is a great idea. It would e.g. define the string type, so the code becomes: auto setExtension(alias MemoryManagementPolicy = gc, R1, R2)(R1 path, R2 ext) if (...) { MemoryManagementPolicy.string result; ... return result; } This is a lot more general and extensible. Thanks! Why do you think there'd be dramatic complication of code? (Granted, at some point we must acknowledge that some egg breaking is necessary for the proverbial omelette.) 
Having written standards-compliant containers in C++, I honestly can't imagine the average user writing code that works this way. Once you assert that the reference type may be a pointer or it may be some complex proxy to data stored elsewhere, a lot of composability pretty much flies right out the window. The thing is, again, we must make some changes if we want D to be usable without a GC. One of them is e.g. to not allocate built-in slices all over the place. For example, I have an implementation of C++ unordered_map/set/etc designed to be a customizable cache, so one of its template arguments is a policy type that allows eviction behavior to be chosen at declaration time. Maybe the cache is size-limited, maybe it's age-limited, maybe it's a combination of the two or something even more complicated. The problem is that the container defines all the aliases relating to the underlying data, but the policy, which needs to be aware of these, is passed as a template argument to this container. To make something that's fully aware of C++ allocators then, I'd have to define a small type that takes the container template arguments (the contained type and the allocator type) and generates the aliases, and pass this to the policy, which in turn passes the type through to the underlying container so it can declare its public aliases and whatever else in true standards-compliant fashion (or let the container derive this itself, but then you run into the potential for disagreement). And while this is possible, doing so would complicate the creation of the cache policies to the point where it subverts their intent, which was to make it easy for the user to tune the behavior of the cache to their own particular needs by defining a simple type which implements a few functions.
Ultimately, I decided against this approach for the cache container and decided to restrict the allocators to those which defined a pointer to T as T* so the policies could be coded with basically no knowledge of the underlying storage. That sounds like a rather involved artifact. Hopefully we can leverage D's better expressiveness to make building such complex libraries easier. So... while I support the goal you're aiming at, I want to see a much more comprehensive example of how this will work and how it will affect code written by D *users*. Agreed. Because it isn't enough for Phobos to be written this way. Basically all D code will have to take this into account for the strategy to be truly viable. Simply outlining one of the most basic functions in Phobos, which already looks like it will have a static conditional at the beginning and *need to be aware of the fact that an RCString type exists* makes me terrified of what a realistic example will look like. That would be overreacting :o). Andrei
Re: RFC: moving forward with @nogc Phobos
On 9/29/14, 11:44 AM, Shammah Chancellor wrote: I don't like the idea of having to pass in template parameters everywhere -- even for allocators. Is there some way we could have allocator contexts? E.g. with (auto allocator = ReferenceCounted()) { auto foo = setExtension("hello", "txt"); } ReferenceCounted() could replace a thread-local new delegate with something it has, and when it goes out of scope, it would reset it to whatever it was before. This would create some runtime overhead -- but I'm not sure how much more than already exists. I'm not sure whether we can do this within D's type system. -- Andrei
Re: So what exactly is coming with extended C++ support?
On Tuesday, 30 September 2014 at 23:31:36 UTC, Cliff wrote: Not a GC specialist here, so maybe the thought arises - why not turn off automatic GC until such times in the code where you can afford the cost of it, then call GC.collect explicitly - essentially eliminating the opportunity for the GC to run at random times and forcing it to run at deterministic times? Is memory usage so constrained that failing to execute runs in between those deterministic blocks could lead to OOM? Does such a strategy have other nasty side effects which make it impractical? The latter. If you want a game to run at 60 fps, you have about 16 ms for each frame, during which time you need to make all the necessary game and graphics updates. There's no upper limit to the amount of time a GC run can take, so it can easily exceed the few ms you have left for it. There are however GC algorithms that support incremental collection, meaning that you can give the GC a deadline. If it can't finish before this deadline, it will have to interrupt its work and continue on the next run. Unfortunately, these GCs usually require special compiler support (barriers, and distinguishing GC from non-GC pointers), which we don't have. But there is CDGC, written by Leandro Lucarella for D1, which uses forking to achieve the same effect, and which Dicebot is currently porting to D2: http://forum.dlang.org/thread/exfrifcfczgjwkudq...@forum.dlang.org
Re: How to build phobos docs for dlang.org
On Wednesday, 1 October 2014 at 06:29:46 UTC, Mark Isaacson wrote: I hereby volunteer to document whatever answer I am given. Already done: http://wiki.dlang.org/Building_DMD#Building_the_Docs
Re: RFC: moving forward with @nogc Phobos
On 9/29/14, 3:11 PM, Freddy wrote: Internally we should have something like: --- template String(MemoryManagementPolicy mmp = gc){ /++ ... +/ } auto setExtension(MemoryManagementPolicy mmp = gc, R1, R2)(R1 path, R2 ext) if (...) { auto result = String!mmp(); /++ ... +/ } --- or maybe even allowing user types in the template argument (the original purpose of templates): --- auto setExtension(String = string, R1, R2)(R1 path, R2 ext){ /++ ... +/ } --- Good idea, and it seems Sean's is even better because it groups everything related to memory management where it belongs - in the memory management policy. -- Andrei
Re: RFC: moving forward with @nogc Phobos
On 9/29/14, 1:07 PM, Uranuz wrote: 1. As far as I understand, allocation and memory management of entities like class (Object), dynamic arrays and associative arrays is part of the language/runtime. What is proposed here is a *fix* to the standard library. But allocation and MM happening via the GC is not the *fault* of the standard library; it is predefined behaviour of the D language itself and its runtime. The standard library becomes a `hostage` of the runtime library in this situation. Are you really sure that we should fix the standard library in that way? For me it looks like building struts for the standard lib (which is not broken yet ;) ) in order to compensate for the behaviour of the runtime lib. The change will be to both the runtime and the standard library. 2. The second question is slightly offtopic, but I still want to put it here. What I dislike about ranges and the standard library is that it's hard to understand what the returned value of a library function is. I have some *pedals* (front, popFront) to push that do some magic. Of course this was done for the purpose of writing universal algorithms. But the more I use ranges and *auto*, the less I believe that I am using a statically typed language. What makes code clear is having a distinct variable declaration with a specification of its type. With all of these autos the logic of a program becomes unclear, because the data structures are unclear. So I came to the question: is the memory management or allocation policy syntactically part of the declaration, or is it an inner implementation detail that should not be shown in the decl? Sadly this is the way things are going (not only in D, but other languages such as C++, Haskell, Scala, etc). Type proliferation has costs, but also a ton of benefits. Most often the memory management policy will be part of function signatures because it affects data type definitions. Should rc and gc strings look similar or not?
string str1 = makeGCString("test"); string str2 = makeRCString("test"); // --- vs --- GCString str1 = "test"; RCString str2 = "test"; // --- or --- String!GC str1 = "test"; String!RC str2 = "test"; // --- or even --- @gc string str1 = "test"; @rc string str2 = "test"; As far as I understand, currently we will have: string str1 = "test"; RCString str2 = "test"; Per Sean's idea things would go GC.string vs. RC.string, where GC and RC are two memory management policies (simple structs defining aliases and probably a few primitives). So another question is why the same object, string, is implemented as different types. Array and struct (class)? A reference counted string has a different layout than immutable(char)[]. 3. Should algorithms based on the range interface care about allocation? Range is about iteration and access to elements, not about allocation and memory management. Most don't. I would like to have attributes @rc, @gc (or the like) to switch the MM policy, versus *String!RC* or *RCString*, but we cannot apply attributes to a literal. Passing to an algorithm something like this: find( @rc "test", @rc "t" ) is syntactically incorrect. But we can use this form: find( RCString("test"), RCString("t") ) But the above form is more verbose. As a continuation of this question I have the next question. If language changes are necessary, we will make language changes. I'm trying first to explore solutions within the language. 4. How to deal with literals? How to make them ref-counted? I don't know yet. I ask this because even when writing RCString("test"), syntactically the expression "test" is still a GC-managed literal. I pass a GC-managed literal into a struct to make it RC-managed. Why just not make it RC from the start? Adding some additional template parameter to an algorithm will not fix this. It is a problem of D itself and its runtime library. I understand.
The problem is actually worse with array literals, which are silently dynamically allocated on the garbage-collected heap: auto s = "hello"; // at least there's no allocation auto a = [1, 2, 3]; // dynamic allocation A language-based solution would change array literal syntax. A library-based solution would leave array literals with today's syntax and semantics and offer a controlled alternative a la: auto a = MyMemPolicy.array(1, 2, 3); // cool So I assume that the std lib is not broken this way and we should not try to fix it this way. Thanks for your attention. And thanks for your great points. Andrei
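A rough sketch of what that library-based alternative might look like; `MyMemPolicy` is the hypothetical name from the post, and the plain GC allocation inside is only a stand-in for whatever a real policy would do:

```d
import std.traits : CommonType;

// Hypothetical policy-scoped array factory, per the post's sketch.
// A real policy would route this through its own allocator; plain
// GC `new` keeps the example self-contained.
struct MyMemPolicy
{
    static auto array(T...)(T values)
    {
        alias E = CommonType!T;               // unified element type
        auto result = new E[](values.length);
        foreach (i, v; values)
            result[i] = v;                    // no built-in array literal involved
        return result;
    }
}
```

With this in place, `auto a = MyMemPolicy.array(1, 2, 3);` allocates under the policy's control rather than through the built-in literal.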
Re: RFC: moving forward with @nogc Phobos
On 9/30/14, 7:07 AM, John Colvin wrote: Instead of adding a new template parameter to every function (which won't necessarily play nicely with existing IFTI and variadic templates), why not allow template modules? Nice idea, but let's try and explore possibilities within the existing rich language. If a need for new language features arises, I trust we'll see it. -- Andrei
Re: RFC: moving forward with @nogc Phobos
On 9/30/14, 9:49 AM, Johannes Pfau wrote: I guess my point is that although RC is useful in some cases, output ranges / sink delegates / pre-allocated buffers are still necessary in other cases, and RC is not the solution for _everything_. Agreed. As Manu often pointed out, sometimes you do not want any dynamic allocation (toStringz in games is a good example) and here RC doesn't help. Another example is format, which can already write to output ranges and uses sink delegates internally. That's a much better abstraction than simply returning a reference counted string (allocated with a thread local allocator). Using sink delegates internally is also more efficient than creating temporary RCStrings. And sometimes there's no allocation at all this way (directly writing to a socket/file). Agreed. What if I don't want automated memory _management_? What if I want a function to use a stack buffer? Or if I want to free manually? If I want std.string.toStringz to put the result into a temporary stack buffer, your solution doesn't help at all. Passing an output range, allocator, or buffer would all solve this. Correct. The output of toStringz would be either a GC string or an RC string. But why not provide 3 overloads then: toStringz(OutputRange), string toStringz(Policy) (char*, actually), and RCString toStringz(Policy)? The notion I got from some of your posts is that you're opposed to such overloads, or did I misinterpret that? I'm not opposed. Here's what I think. As an approach to using Phobos without a GC, it's been suggested that we supplement garbage-creating functions with new functions that use output ranges everywhere, or lazy ranges everywhere. I think a better approach is to make memory management a policy that makes convenient use of reference counting possible. So instead of garbage there'd be reference counted stuff. 
Of course, to the extent using lazy computation and/or output ranges is a good thing to have for various reasons, they remain valid techniques that are and will continue being used in Phobos. My point is that acknowledging and systematically using reference counted types is an essential part of the entire approach. Andrei
Re: RFC: moving forward with @nogc Phobos
On 9/30/14, 10:33 AM, H. S. Teoh via Digitalmars-d wrote: Yeah, this echoes my concern. This looks not that much different, from a user's POV, from C++ containers' allocator template parameters. Yes, I know we're not talking about *allocators* per se but about *memory management*, but I'm talking about the need to explicitly pass mmp to *every* *single* *function* if you desire anything but the default. How many people actually *use* the allocator parameter in STL? Certainly, many people do... but the code is anything but readable / maintainable. The parallel with STL allocators is interesting, but I'm not worried about it that much. I don't want to go off on a tangent but I'm fairly certain std::allocator is hard to use for entirely different reasons than the intended use patterns of MemoryManagementPolicy. Not only that, but every single function will have to handle this parameter somehow, and if static if's at the top of the function is what we're starting with, I fear seeing what we end up with. Apparently Sean's idea would take care of that. Furthermore, in order for this to actually work, it has to be percolated throughout the entire codebase -- any D library that even remotely uses Phobos for anything will have to percolate this parameter throughout its API -- at least, any part of the API that might potentially use a Phobos function. Yes, but that's entirely expected. We're adding genuinely new functionality to Phobos. Otherwise, you still have the situation where a given D library doesn't allow the user to select a memory management scheme, and internally calls Phobos functions with the default settings. Correct. So this still doesn't solve the problem that today, people who need to use @nogc can't use a lot of existing libraries because the library depends on the GC, even if it doesn't assume anything about the MM scheme, but just happens to call some obscure Phobos function with the default MM parameter. 
The only way this could work was if *every* D library author voluntarily rewrote a lot of code in order to percolate this MM parameter through to the API, on the off-chance that some obscure user somewhere might need to use it. I don't see much likelihood of this actually happening. A simple way to put this is "Libraries that use the GC will continue to use the GC." There's no way around that unless we choose to break them all. Then there's the matter of functions like parseJSON() that need to allocate nodes and return a tree (or whatever) of these nodes. Note that they need to *allocate*, not just know what kind of memory management model is to be used. So how do you propose to address this? Via another parameter (compile-time or otherwise) to specify which allocator to use? So how does the memory management parameter solve anything then? And how would such a thing be implemented? Using a 3-way static-if branch at every single point in parseJSON where it needs to allocate nodes? We could just as well write it in C++, if that's the case. parseJSON() would get a memory management policy parameter, and will use the currently installed memory allocator for allocation. This proposal has many glaring holes that need to be fixed before it can be viable. Affirmative. That's why it's an RFC, very far from a proposal. I'm glad I got a bunch of good ideas. Andrei
Re: RFC: moving forward with @nogc Phobos
On 9/30/14, 12:10 PM, Marc Schütz schue...@gmx.net wrote: I would argue that GC is at its core _only_ a memory management strategy. It just so happens that the one in D's runtime also comes with an allocator, with which it is tightly integrated. In theory, a GC can work with any (and multiple) allocators, and you could of course also call GC.free() manually, because, as you say, management and allocation are entirely distinct topics. I'm not very sure. A GC might need to interoperate closely with the allocator. -- Andrei
Re: RFC: moving forward with @nogc Phobos
On 9/30/14, 11:06 AM, Dmitry Olshansky wrote: 29-Sep-2014 14:49, Andrei Alexandrescu пишет: auto setExtension(MemoryManagementPolicy mmp = gc, R1, R2)(R1 path, R2 ext) if (...) { static if (mmp == gc) alias S = string; else alias S = RCString; S result; ... return result; } Incredible code bloat? Boilerplate in each function for the win? I'm at loss as to how it would make things better. Sean's idea to make string an alias of the policy takes care of this concern. -- Andrei
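As a sketch of how Sean's policy-carries-the-alias idea removes that boilerplate (the `RCString` stub and the policy names here are illustrative, not actual Phobos code):

```d
// Stub standing in for the reference-counted string under discussion.
struct RCString { /* refcount, payload, ... */ }

// Each policy defines the types the algorithms need.
struct GC { alias String = string; }
struct RC { alias String = RCString; }

// No static if in the body: the policy supplies the result type.
auto setExtension(Policy = GC, R1, R2)(R1 path, R2 ext)
{
    Policy.String result;
    // ... build result from path and ext ...
    return result;
}
```

Callers pick the policy explicitly (`setExtension!RC(path, ext)`) or get the GC default; the function body is written once.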
Re: RFC: moving forward with @nogc Phobos
On 9/30/14, 6:53 PM, Manu via Digitalmars-d wrote: I generally like the idea, but my immediate concern is that it implies that every function that may deal with allocation is a template. This interferes with C/C++ compatibility in a pretty big way. Or more generally, the idea of a lib. Does this mean that a lib will be required to produce code for every permutation of functions according to memory management strategy? Usually libs don't contain code for uninstantiated templates. If a lib chooses one specific memory management policy, it can of course be non-templated with regard to that. If it wants to offer its users the choice, it would probably have to offer some templates. With this in place, I worry that traditional use of libs, separate compilation, external language linkage, etc, all become very problematic. Pervasive templates can only work well if all code is D code, and if all code is compiled together. Most non-OSS industry doesn't ship source, they ship libs. And if libs are to become impractical, then dependencies become a problem; instead of linking libphobos.so, you pretty much have to compile phobos together with your app (already basically true for phobos, but it's fairly unique). What if that were a much larger library? What if you have 10s of dependencies all distributed in this manner? Does it scale? I guess this doesn't matter if this is only a proposal for phobos... but I suspect the pattern will become pervasive if it works, and yeah, I'm not sure where that leads. Thanks for the point. I submit that Phobos has and will be different from other D libraries; as the standard library, it has the role of supporting widely varying needs, and as such it makes a lot of sense to make it highly generic and configurable. Libraries that are for specific domains can avail themselves of a narrower design scope. Andrei
Re: RFC: moving forward with @nogc Phobos
On 9/30/14, 10:46 PM, Nordlöw wrote: On Monday, 29 September 2014 at 10:49:53 UTC, Andrei Alexandrescu wrote: Back when I first introduced RCString, I hinted that we had a larger strategy in mind. Here it is. Slightly related :) https://github.com/D-Programming-Language/phobos/pull/2573 Nice, thanks! -- Andrei
Re: How to build phobos docs for dlang.org
On 10/1/14, 2:00 AM, Robert burner Schadek wrote: On Wednesday, 1 October 2014 at 06:29:46 UTC, Mark Isaacson wrote: I hereby volunteer to document whatever answer I am given. already done http://wiki.dlang.org/Building_DMD#Building_the_Docs Shall we link or copy that to CONTRIBUTING.md? -- Andrei
Re: How to build phobos docs for dlang.org
On Wednesday, 1 October 2014 at 10:03:33 UTC, Andrei Alexandrescu wrote: On 10/1/14, 2:00 AM, Robert burner Schadek wrote: On Wednesday, 1 October 2014 at 06:29:46 UTC, Mark Isaacson wrote: I hereby volunteer to document whatever answer I am given. already done http://wiki.dlang.org/Building_DMD#Building_the_Docs Shall we link or copy that to CONTRIBUTING.md? -- Andrei I will create a PR with a link to the Building DMD wiki right now.
std.utf.decode @nogc please
lately when working on std.string I ran into problems making stuff @nogc, as std.utf.decode is not @nogc. https://issues.dlang.org/show_bug.cgi?id=13458 Also I would like a version of decode that takes the string not as ref. Something like: bool decode2(S,C)(S str, out C ret, out size_t strSliceIdx) if(isSomeString!S && isSomeChar!C) {} where true is returned if the decode worked and false otherwise. Ideas, suggestions ... ? any takers?
Re: How to build phobos docs for dlang.org
On Wednesday, 1 October 2014 at 10:14:26 UTC, Robert burner Schadek wrote: Shall we link or copy that to CONTRIBUTING.md? -- Andrei I will create a PR with a link to the Building DMD wiki right now. https://github.com/D-Programming-Language/phobos/pull/2575
Re: std.utf.decode @nogc please
On 10/1/2014 3:10 AM, Robert burner Schadek wrote: Ideas, Suggestions ... ? any takers? You can use .byDchar instead, which is nothrow @nogc.
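For the record, a minimal illustration of the suggestion (`countDchars` is just a made-up helper): byDchar lazily decodes and substitutes the replacement character for invalid sequences instead of throwing, which is what makes it nothrow @nogc.

```d
import std.utf : byDchar;

// Count code points with neither GC allocation nor exceptions.
size_t countDchars(const(char)[] s) @nogc nothrow
{
    size_t n;
    foreach (dchar c; s.byDchar)
        ++n;
    return n;
}
```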
Re: std.experimental.logger formal review round 3
I haven't tested it yet, but have two questions anyway: 1. I did not see any reference to the use of Clock.currTime(), which on the last round accounted for about 90% of the total time spent in a log call. Reference: https://issues.dlang.org/show_bug.cgi?id=13433 . (This is the difference between logging-and-filtering ~100k logs/sec and ~1M logs/sec for loggers that use criteria other than logLevel for filtering messages.) Same question for this cycle: Does std.logger API need a method for clients or subclasses to change/defer/omit the call to Clock.currTime? Or defer for a change in std.datetime? 2. We have Tid in the API. What about Fiber and Thread? If we can only pick one, I would vote for Thread rather than Tid, as Tid's currently have no way to be uniquely identified in a logging message. Reference: https://issues.dlang.org/show_bug.cgi?id=6989 General comment: very nice to see continued progress!
Re: RFC: moving forward with @nogc Phobos
On Wednesday, 1 October 2014 at 09:52:46 UTC, Andrei Alexandrescu wrote: On 9/30/14, 12:10 PM, Marc Schütz schue...@gmx.net wrote: I would argue that GC is at its core _only_ a memory management strategy. It just so happens that the one in D's runtime also comes with an allocator, with which it is tightly integrated. In theory, a GC can work with any (and multiple) allocators, and you could of course also call GC.free() manually, because, as you say, management and allocation are entirely distinct topics. I'm not very sure. A GC might need to interoperate closely with the allocator. -- Andrei It needs to know what to scan (ideally with type info), and which allocator to release memory with, but it doesn't need to be an allocator itself. It certainly helps with the implementation, but ideally there would be a well defined interface between allocators and GCs, so that both can be plugged in as desired, even with multiple GCs in parallel.
Re: std.utf.decode @nogc please
On Wednesday, 1 October 2014 at 10:51:25 UTC, Walter Bright wrote: On 10/1/2014 3:10 AM, Robert burner Schadek wrote: Ideas, Suggestions ... ? any takers? You can use .byDchar instead, which is nothrow @nogc. thanks, I will try that.
Re: Creeping Bloat in Phobos
On Sunday, 28 September 2014 at 12:09:50 UTC, Andrei Alexandrescu wrote: On 9/27/14, 4:31 PM, H. S. Teoh via Digitalmars-d wrote: On Sat, Sep 27, 2014 at 11:00:16PM +, bearophile via Digitalmars-d wrote: H. S. Teoh: If we can get Andrei on board, I'm all for killing off autodecoding. Killing auto-decoding for std.algorithm functions will break most of my D2 code... perhaps we can do that in a D3 language. [...] Well, obviously it's not going to be done in a careless, drastic way! Stuff that's missing: * Reasonable effort to improve performance of auto-decoding; * A study of the matter revealing either new artifacts and idioms, or the insufficiency of such; * An assessment of the impact on compilability of existing code * An assessment of the impact on correctness of existing code (that compiles and runs in both cases) * An assessment of the improvement in speed of eliminating auto-decoding I think there's a very strong need for this stuff, because claims that current alternatives to selectively avoid auto-decoding use the throwing of hands (and occasional chairs out windows) without any real investigation into how library artifacts may help. This approach to justifying risky moves is frustratingly unprincipled. As far as I see, backward compatibility is fairly easy. Extract autodecoding modules into `autodecoding` dub package and clean up phobos modules into non-decoding behavior. The phobos code will be simplified: it will deal with ranges as is without specialization, the `autodecoding` dub package will be simple: just wraps strings into dchar range and invokes non-decoding function from phobos, preserves current module interface to keep legacy D code working. Run dfix on your sources, it will replace `import std.algorithm` with `import autodecoding.algorithm` - then the code should work. What do you think? Worth a DIP?
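For one symbol, the proposed wrapper package might look roughly like this sketch (the package layout, and the premise that a non-decoding Phobos `find` would treat strings as code-unit ranges, are my assumptions; no such package exists):

```d
// Would live in the hypothetical `autodecoding` dub package,
// e.g. as module autodecoding.algorithm.
import std.algorithm : stdFind = find;
import std.traits : isSomeString;
import std.utf : byDchar;

// Restore today's auto-decoding behaviour on top of a non-decoding
// Phobos: wrap strings into a dchar range, forward everything else.
auto find(alias pred = "a == b", R, E)(R haystack, E needle)
{
    static if (isSomeString!R)
        return stdFind!pred(haystack.byDchar, needle);
    else
        return stdFind!pred(haystack, needle);
}
```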
Re: Program logic bugs vs input/environmental errors
On Sun, 28 Sep 2014 13:14:43 -0700, Walter Bright newshou...@digitalmars.com wrote: On 9/28/2014 12:33 PM, Jacob Carlborg wrote: On 2014-09-28 19:36, Walter Bright wrote: I suggest removal of stack trace for exceptions, but leaving them in for asserts. If you don't like the stack trace, just wrap the main function in a try-catch block, catch all exceptions and print the error message. That's what the runtime that calls main() is supposed to do. Guys, a druntime flag could settle matters in 10 minutes. But this topic is clearly about the right school of thought. I use contracts to check for logical errors, like when an argument must not be null or a value less than the length of some data structure. I use exceptions to check for invalid input and the return values of external libraries. External libraries can be anything from my own code in the same project to OpenGL from vendor XY. They could error out on valid input (if we leave out-of-memory aside for now), because of bugs or incorrect assumptions of the implementation. If that happens and all I get is: Library XY Exception: code 0x13533939 (Invalid argument). I'm at a loss as to where the library might have had a hiccup. Did some function internally handle a uint as an int and wrap around? Maybe with std.logger we will see single-line messages on the terminal and multi-line exception traces in the logs (which by default print to stderr as well). And then this discussion can be resolved. -- Marco
Re: std.experimental.logger formal review round 3
On Wednesday, 1 October 2014 at 10:50:54 UTC, Kevin Lamonte wrote: I haven't tested it yet, but have two questions anyway: 1. I did not see any reference to the use of Clock.currTime(), which on the last round accounted for about 90% of the total time spent in a log call. Reference: https://issues.dlang.org/show_bug.cgi?id=13433 . (This is the difference between logging-and-filtering ~100k logs/sec and ~1M logs/sec for loggers that use criteria other than logLevel for filtering messages.) Same question for this cycle: Does std.logger API need a method for clients or subclasses to change/defer/omit the call to Clock.currTime? Or defer for a change in std.datetime? maybe I should add a disableGetSysTime switch 2. We have Tid in the API. What about Fiber and Thread? If we can only pick one, I would vote for Thread rather than Tid, as Tid's currently have no way to be uniquely identified in a logging message. Reference: https://issues.dlang.org/show_bug.cgi?id=6989 General comment: very nice to see continued progress! I'm gonna take a closer look
Re: std.utf.decode @nogc please
On Wednesday, 1 October 2014 at 10:51:25 UTC, Walter Bright wrote: On 10/1/2014 3:10 AM, Robert burner Schadek wrote: Ideas, Suggestions ... ? any takers? You can use .byDchar instead, which is nothrow @nogc. Being forced out of using exceptions just to be able to have the magic @nogc tag is the real issue here... The original request was mostly for @nogc, not necessarily for nothrow.
Re: std.utf.decode @nogc please
On Wednesday, 1 October 2014 at 10:10:51 UTC, Robert burner Schadek wrote: lately when working on std.string I ran into problems making stuff @nogc, as std.utf.decode is not @nogc. https://issues.dlang.org/show_bug.cgi?id=13458 Also I would like a version of decode that takes the string not as ref. Something like: bool decode2(S,C)(S str, out C ret, out size_t strSliceIdx) if(isSomeString!S && isSomeChar!C) {} where true is returned if the decode worked and false otherwise. Ideas, suggestions ... ? any takers? Kind of like the non-throwing std.conv.to: I'm pretty sure that if you wrote your tryDecode function, then you could backwards implement the old decode in terms of the new tryDecode: dchar decode(S)(ref S str) { dchar ret; size_t idx; enforce(tryDecode(str, ret, idx)); str = str[idx .. $]; return ret; } The implementation of tryDecode would be pretty much the old one's implementation, with exceptions replaced in favor of returning false.
Re: Program logic bugs vs input/environmental errors
On 28/09/2014 23:00, Walter Bright wrote: I can't get behind the notion of "reasonably certain". I certainly would not use such techniques in any code that needs to be robust, On 29/09/2014 04:04, Walter Bright wrote: I know I'm hardcore and uncompromising on this issue, but that's where I came from (the aviation industry). Walter, you do understand that not all software has to be robust - in the critical systems sense - to be quality software? And that in fact, the majority of software is not critical systems software?... I was under the impression that D was meant to be a general purpose language, not a language just for critical systems. Yet, on language design issues, you keep making a series of arguments and points that apply *only* to critical systems software. -- Bruno Medeiros https://twitter.com/brunodomedeiros
Re: Program logic bugs vs input/environmental errors
On 29/09/2014 10:06, Walter Bright wrote: On 9/29/2014 1:27 AM, Johannes Pfau wrote: In a daemon which logs to syslog or in a GUI application or a game an uncaught 'disk full exception' would go completely unnoticed and that's definitely a bug. Failure to respond properly to an input/environmental error is a bug. But the input/environmental error is not a bug. If it was, then the program should assert on the error, not throw. I agree. And isn't that exactly what Teoh said then: That's why I said, an uncaught exception is a BUG. I think people should be more careful with the term uncaught exception because it's not very precise. -- Bruno Medeiros https://twitter.com/brunodomedeiros
Re: std.experimental.logger formal review round 3
On Wednesday, 1 October 2014 at 10:50:54 UTC, Kevin Lamonte wrote: 2. We have Tid in the API. What about Fiber and Thread? If we can only pick one, I would vote for Thread rather than Tid, as Tid's currently have no way to be uniquely identified in a logging message. Reference: https://issues.dlang.org/show_bug.cgi?id=6989 In my opinion only solution that scales is to provide same ID as one used by std.concurrency - it can be thread, fiber, process or pretty much anything. There are many possible threading abstractions and all can't be easily supported, makes sense to stick to one considered standard.
Re: RFC: moving forward with @nogc Phobos
On Wednesday, 1 October 2014 at 08:55:55 UTC, Andrei Alexandrescu wrote: On 9/30/14, 9:10 AM, Sean Kelly wrote: Is this for exposition purposes or actually how you expect it to work? That's pretty much what it would take. The key here is that RCString is almost a drop-in replacement for string, so the code using it is almost identical. There will be places where code needs to be replaced, e.g. auto s = literal; would need to become S s = literal; So creation of strings will change a bit, but overall there's not a lot of churn. I'm confused. Is this a general-purpose solution or just one that switches between string and RCString?
Re: @safe pure nothrow compiler inference
I wanted to do it for auto returning functions, since they require a function body. Is there any good reason to _not_ do it for auto return functions? Atila
Re: Program logic bugs vs input/environmental errors
On 10/1/14 9:47 AM, Bruno Medeiros wrote: On 29/09/2014 19:58, Steven Schveighoffer wrote: Any uncaught exceptions are BY DEFINITION programming errors. Not necessarily. For some applications (for example simple console apps), you can consider the D runtime's default exception handler to be an appropriate way to respond to the exception. No, this is lazy/incorrect coding. You don't want your user to see an indecipherable stack trace on purpose. -Steve
Re: Program logic bugs vs input/environmental errors
On 29/09/2014 19:58, Steven Schveighoffer wrote: Any uncaught exceptions are BY DEFINITION programming errors. Not necessarily. For some applications (for example simple console apps), you can consider the D runtime's default exception handler to be an appropriate way to respond to the exception. -- Bruno Medeiros https://twitter.com/brunodomedeiros
Re: RFC: moving forward with @nogc Phobos
On Wednesday, 1 October 2014 at 08:55:55 UTC, Andrei Alexandrescu wrote: On 9/30/14, 9:10 AM, Sean Kelly wrote: Quite honestly, I can't imagine how I could write a template function in D that needs to work with this approach. You mean write a function that accepts a memory management policy, or a function that uses one? Both, I suppose? A static if block at the top of each function that must be aware of every RC type the user may expect? What if it's a user-defined RC type and this function is in Phobos? As much as I hate to say it, this is pretty much exactly what C++ allocators were designed for. They handle allocation, sure, but they also hold aliases for all relevant types for the data being allocated. If the MemoryManagementPolicy enum were replaced with an alias to a type that I could use to at least obtain relevant aliases, that would be something. But even that approach dramatically complicates code that uses it. I think making MemoryManagementPolicy a meaningful type is a great idea. It would e.g. define the string type, so the code becomes: auto setExtension(alias MemoryManagementPolicy = gc, R1, R2)(R1 path, R2 ext) if (...) { MemoryManagementPolicy.string result; ... return result; } This is a lot more general and extensible. Thanks! Why do you think there'd be dramatic complication of code? (Granted, at some point we must acknowledge that some egg breaking is necessary for the proverbial omelette.) From my experience with C++ containers. Having an alias for a type is okay, but a bank of aliases where one is a pointer to the type, one is a const pointer to the type, etc., makes writing the involved code feel really unnatural. The thing is, again, we must make some changes if we want D to be usable without a GC. One of them is e.g. to not allocate built-in slices all over the place. So let the user supply a scratch buffer that will hold the result? With the RC approach we're still allocating, they just aren't built-in slices, correct? 
That would be overreacting :o). I hope it is :-)
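The scratch-buffer variant discussed above could be sketched like this (`toStringzBuf` is a made-up name, not a Phobos API; it just shows the shape of a buffer-accepting overload):

```d
// Hypothetical buffer-accepting toStringz: writes into caller-supplied
// storage and returns null when the buffer is too small. No allocation,
// no exceptions.
const(char)* toStringzBuf(const(char)[] s, char[] buf) @nogc nothrow
{
    if (buf.length < s.length + 1)
        return null;                // caller decides how to recover
    buf[0 .. s.length] = s[];
    buf[s.length] = '\0';           // C-style terminator
    return buf.ptr;
}
```

A stack array works as the buffer, so the function composes with `@nogc` call sites without any policy machinery.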
Re: Program logic bugs vs input/environmental errors
On 28/09/2014 00:15, Walter Bright wrote: This issue comes up over and over, in various guises. I feel like Yosemite Sam here: https://www.youtube.com/watch?v=hBhlQgvHmQ0 In that vein, Exceptions are for either being able to recover from input/environmental errors, or report them to the user of the application. When I say "They are NOT for debugging programs", I mean they are NOT for debugging programs. This is incorrect. Yes, the primary purpose of Exceptions is not for debugging, but to report exceptional state to the user (or some other component of the system). But they also have a purpose for debugging, particularly the stack traces of exceptions. Take what you said: Failure to respond properly to an input/environmental error is a bug. But the input/environmental error is not a bug. If it was, then the program should assert on the error, not throw. So, some component/function Foo detects an environmental error, and throws an Exception accordingly. Foo is not responsible for handling these errors, but some other component is. Component/function Bar is the one that should handle such an error (for example, it should display a dialog to the user, and continue the application). But due to a bug, it doesn't do so, and the Exception goes all the way through main(). The programmer notices this happening, and clearly recognizes it's a bug (but doesn't know where the bug is, doesn't know that it's Bar that should be handling it). Now, which is best: to just have the Exception message (something like "File not found") displayed to the programmer - or even an end-user that could report a bug -, or to have the stack trace of the Exception so that the programmer can more easily look at which function should be handling it? -- Bruno Medeiros https://twitter.com/brunodomedeiros
Re: Program logic bugs vs input/environmental errors
On 29/09/2014 05:03, Sean Kelly wrote: I recall Toyota got into trouble with their computer controlled cars because of their idea of handling of inevitable bugs and errors. It was one process that controlled everything. When something unexpected went wrong, it kept right on operating, any unknown and unintended consequences be damned. The way to get reliable systems is to design to accommodate errors, not pretend they didn't happen, or hope that nothing else got affected, etc. In critical software systems, that means shut down and restart the offending system, or engage the backup. My point was that it's often more complicated than that. There have been papers written on self-repairing systems, for example, and ways to design systems that are inherently durable when it comes to even internal errors. I think what I'm trying to say is that simply aborting on error is too brittle in some cases, because it only deals with one vector--memory corruption that is unlikely to reoccur. But I've watched always-on systems fall apart from some unexpected ongoing situation, and simply restarting doesn't actually help. Sean, I fully agree with the points you have been making so far. But if Walter is fixated on thinking that all the practical uses of D will be critical systems, or simple (ie, single-use, non-interactive) command-line applications, it will be hard for him to comprehend the whole point that simply aborting on error is too brittle in some cases. PS: Walter, what browser do you use? -- Bruno Medeiros https://twitter.com/brunodomedeiros
Re: [Semi OT] Language for Game Development talk
Max Klyga: https://www.youtube.com/watch?v=TH9VCN6UkyQ A third talk (from another person) about related matters: https://www.youtube.com/watch?v=rX0ItVEVjHc He doesn't use RTTI, exceptions, multiple inheritance, the STL, templates, and a lot of other C++ stuff. On the other hand he writes data-oriented code manually, the compiler and language give him only very limited help, and the code he writes looks twiddly and bug-prone. So why aren't they designing a language without most of the C++ stuff they don't use, but with features that help them write the data-oriented code they need? Bye, bearophile
Re: std.experimental.logger formal review round 3
Am Wed, 01 Oct 2014 12:49:29 + schrieb Robert burner Schadek rburn...@gmail.com: maybe I should add a disableGetSysTime switch CLOCK_REALTIME_COARSE / CLOCK_REALTIME_FAST should be explored. On Linux you can't expect finer resolution than the kernel Hz, for FreeBSD I only found mention of 10ms resolution. If you format log messages with 2 digits time precision anyway, you don't need the precise version. If you disable time completely, what would the LogEntry contain as the time stamp? SysTime.init? -- Marco
Re: Program logic bugs vs input/environmental errors (checked exceptions)
On 29/09/2014 20:28, Sean Kelly wrote: Checked exceptions are good in theory but they failed utterly in Java. I'm not interested in seeing them in D. That is the conventional theory, the established wisdom. But the more experienced I become with Java, over the years, the more I've become convinced otherwise. What has failed is not the concept of checked exceptions per se, but mostly the failure of Java programmers to use checked exceptions effectively and properly design their code around this paradigm. Like Jeremy mentioned, if one puts catch blocks right around the function that throws an exception, and just swallows/forgets it there without doing anything else, then it's totally the programmer's fault for being lazy. If one is annoyed that, often, adding a throws clause to a function will require adding the same throws clause to several other functions, well, that is editing work you have to accept for the sake of more correctness. But one should also understand there are ways to mitigate this editing work: The first point is that in a lot of code, it is better to have a function throw just one generic (but checked) exception, which can wrap any other specific errors/exceptions. If you are doing an operation that can throw File-Not-Found, Invalid-Path, No-Permissions, IO-Exception, etc., then often all of these will be handled in the same user-reporting code, so they could be wrapped under a single exception that would be used in the throws clause. And so the whole function call chain doesn't need to be modified every time a new exception is added or removed. If you're thinking that means adding a "throws Exception" to such functions in Java, then no. Because this will catch RuntimeExceptions too (the unchecked exceptions of Java), and these you often want to handle elsewhere than where you handle the checked exceptions. In this regard, Java does have a design fault, IMO, which is that there is no common superclass for checked Exceptions. 
(there is only for unchecked exceptions) The second point, is that even adding (or modifying) the throws clause of function signatures cause be made much easier with an IDE, and in particular Eclipse JDT helps a lot. If you have an error in the editor about a checked exception that is not caught or thrown, you can just press Ctrl-1 to automatically add either a throws clause, or a surrounding try-catch block. -- Bruno Medeiros https://twitter.com/brunodomedeiros
Re: So what exactly is coming with extended C++ support?
On Tuesday, 30 September 2014 at 22:32:26 UTC, Sean Kelly wrote: Would delete on the D side work here? Or the more current destroy()? ie. is release of the memory a crucial part of the equation, or merely finalization? Destruction of an object is *far* more important than releasing memory. Our D code's memory usage is a drop in the ocean, but it can potentially hold on to resources that need to be destroyed in special ways depending on middleware/threading usage. Object.destroy() would do the job, but there's also a fragmentation problem that creeps into a GC solution like the default D implementation the longer you have your application running. We already encounter plenty of cache-incoherent code in other areas of the codebase, and since one of my roles is to evangelise D (so to speak) I'm aiming to keep it running fast and avoiding as many stalls as possible. If I avoid the current implementation's garbage collection, then memory allocated stays in roughly the same region (some work that I did manage to do, and intend on submitting a pull request for, allows a user to specify a custom set of allocation functions, so all memory from core and phobos goes through our supplied memory allocator). Either way, it still comes down to a function call to free your object, which means you're stepping outside of the way the GC natively does things.
Re: Program logic bugs vs input/environmental errors
Bruno Medeiros: But if Walter is fixated on thinking that all the practical uses of D will be critical systems, or simple (ie, single-use, non-interactive) command-line applications, There's still some way to go for D's design to make it well fit for high-integrity systems (some people even use a restricted subset of C for such purposes, but it's a bad language for it). Bye, bearophile
Re: [GDC] Evaluation order: Please update the dmd backend
Now I'm working to fix issue 6620 https://issues.dlang.org/show_bug.cgi?id=6620 https://github.com/D-Programming-Language/dmd/pull/4035 Kenji Hara 2014-04-01 20:49 GMT+09:00 Johannes Pfau nos...@example.com: I started fixing GDC bug #8 (*) which is basically that array op evaluation order currently depends on the target architecture. Consider this example: a()[] = b()[] + c()[]; The order in which c,a,b are called is currently architecture specific. As stated in that bug report by Andrei we want this to evaluate LTR, so a() first, then b(), then c(). These operations are actually rewritten to calls to extern(C) functions. Arguments to C function should be evaluated LTR as well, but dmd currently evaluates them RTL (GDC: architecture dependent). In order to fix the array op bug in gdc we have to define the evaluation order for extern(C) function parameters. So I've changed extern(C) functions to evaluate LTR in GDC and then had to change the array op code, cause that assumed extern(C) function evaluate RTL. Now I'd like to push these array op changes into dmd as we want to keep as few gdc specific changes as possible and dmd (and ldc) will need these changes anyway as soon as they implement extern(C) functions as LTR. This is required by dmd issue #6620 (**) and the language spec (***). However, if we apply only these changes the array op order reverses for DMD as it evaluates extern(C) function arguments RTL. So I need someone with dmd backend knowledge to fix the evaluation order of extern(C) function parameters to be LTR. Evaluation order of assignments should also be fixed to be LTR in the dmd backend. 
Although not strictly required for the array op changes, it'd be inconsistent to have array op assignments execute LTR but normal assignments RTL: a()[] = b()[] + c()[]; //Array op assignment a() = b() + c(); //Normal assignment (intended LTR order: a() = 1, b() = 2, c() = 3) The frontend changes for dmd are here: https://github.com/jpf91/dmd/tree/fixOrder Frontend: https://github.com/jpf91/dmd/commit/5d61b812977dbdc1f99100e2fbaf1f45e9d25b03 Test cases: https://github.com/jpf91/dmd/commit/82bffe0862b272f02c27cc428b22a7dd113b4a07 Druntime changes (need to be applied at the same time as dmd changes) https://github.com/jpf91/druntime/tree/fixOrder https://github.com/jpf91/druntime/commit/f3f6f49c595d4fb25fb298e435ad1874abac516d (*) http://bugzilla.gdcproject.org/show_bug.cgi?id=8 (**) https://d.puremagic.com/issues/show_bug.cgi?id=6620 (***) https://github.com/D-Programming-Language/dlang.org/pull/6
Re: Program logic bugs vs input/environmental errors
On 01/10/2014 14:55, Steven Schveighoffer wrote: On 10/1/14 9:47 AM, Bruno Medeiros wrote: On 29/09/2014 19:58, Steven Schveighoffer wrote: Any uncaught exceptions are BY DEFINITION programming errors. Not necessarily. For some applications (for example simple console apps), you can consider the D runtime's default exception handler to be an appropriate way to respond to the exception. No, this is lazy/incorrect coding. You don't want your user to see an indecipherable stack trace on purpose. -Steve Well, at the very least it's bad UI design for sure (textual UI is still UI). But it's only a *bug* if it's not the behavior the programmer intended. -- Bruno Medeiros https://twitter.com/brunodomedeiros
Re: Program logic bugs vs input/environmental errors
On 10/1/14 10:36 AM, Bruno Medeiros wrote: On 01/10/2014 14:55, Steven Schveighoffer wrote: On 10/1/14 9:47 AM, Bruno Medeiros wrote: On 29/09/2014 19:58, Steven Schveighoffer wrote: Any uncaught exceptions are BY DEFINITION programming errors. Not necessarily. For some applications (for example simple console apps), you can consider the D runtime's default exception handler to be an appropriate way to respond to the exception. No, this is lazy/incorrect coding. You don't want your user to see an indecipherable stack trace on purpose. -Steve Well, at the very least it's bad UI design for sure (textual UI is still UI). But it's only a *bug* if it's not the behavior the programmer intended. Sure, one could also halt a program by reading a null pointer on purpose. This is a grey area that I think reasonable people can correctly call a bug if they so wish, despite the intentions of the developer. -Steve
Re: Program logic bugs vs input/environmental errors
On 10/1/14, Steven Schveighoffer via Digitalmars-d digitalmars-d@puremagic.com wrote: No, this is lazy/incorrect coding. You don't want your user to see an indecipherable stack trace on purpose. So when they file a bug report are you going to also ask them to run the debugger so they capture the stack trace and file that to you? Come on.
Re: std.experimental.logger formal review round 3
On Wednesday, 1 October 2014 at 14:24:52 UTC, Marco Leise wrote: Am Wed, 01 Oct 2014 12:49:29 + schrieb Robert burner Schadek rburn...@gmail.com: maybe I should add a disableGetSysTime switch CLOCK_REALTIME_COARSE / CLOCK_REALTIME_FAST should be explored. good pointer, but what about Win and Mac If you disable time completely, what would the LogEntry contain as the time stamp? SysTime.init? That was my first idea.
Re: @safe pure nothrow compiler inference
On Monday, 29 September 2014 at 14:40:34 UTC, Daniel N wrote: It can be done, Walter wanted to do it, but there was large resistance, mainly because library APIs would become unstable, possibly changing between every release. Huh? Templates are part of library API too, see std.algorithm. So what's the difference if the API consists of templated or non-templated functions? Why for one part of API it's ok to change with every release and for the other not ok?
Re: [Semi OT] Language for Game Development talk
On Wednesday, 1 October 2014 at 14:16:38 UTC, bearophile wrote: Max Klyga: https://www.youtube.com/watch?v=TH9VCN6UkyQ A third talk (from another person) about related matters: https://www.youtube.com/watch?v=rX0ItVEVjHc He doesn't use RTTI, exceptions, multiple inheritance, STL, templates, and lot of other C++ stuff. On the other hand he writes data-oriented code manually, the compiler and language give him only very limited help, and the code he writes looks twiddly and bug-prone. So why aren't they designing a language without most of the C++ stuff they don't use, but with features that help them write the data-oriented code they need? Bye, bearophile He (deliberately, I'd guess) conflates cache-friendly data structures and access patterns with his particular preference for a C style. It is a fallacy that he presents as fact. The key to this type of fast code isn't the C style; it is the contiguous data layout and cache-friendly access patterns, both of which are easy enough to achieve in modern C++.
Re: Program logic bugs vs input/environmental errors
On 10/1/14 11:00 AM, Andrej Mitrovic via Digitalmars-d wrote: On 10/1/14, Steven Schveighoffer via Digitalmars-d digitalmars-d@puremagic.com wrote: No, this is lazy/incorrect coding. You don't want your user to see an indecipherable stack trace on purpose. So when they file a bug report are you going to also ask them to run the debugger so they capture the stack trace and file that to you? Come on. No what I mean is: ./niftyapp badfilename.txt Result should be: Error: Could not open badfilename.txt, please check and make sure the file exists and is readable. Not: std.exception.ErrnoException@std/stdio.d(345): Cannot open file `badfilename.txt' in mode `rb' (No such file or directory) 5 testexception 0x000104fad02d ref std.stdio.File std.stdio.File.__ctor(immutable(char)[], const(char[])) + 97 6 testexception 0x000104f8d735 _Dmain + 69 7 testexception 0x000104f9f771 void rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])*).runAll().void __lambda1() + 33 8 testexception 0x000104f9f6bd void rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])*).tryExec(scope void delegate()) + 45 9 testexception 0x000104f9f71d void rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])*).runAll() + 45 10 testexception 0x000104f9f6bd void rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])*).tryExec(scope void delegate()) + 45 11 testexception 0x000104f9f639 _d_run_main + 449 12 testexception 0x000104f8d75c main + 20 13 libdyld.dylib 0x7fff8fb2a5fd start + 1 14 ??? 0x0001 0x0 + 1 If it's an error due to *user input*, you should not rely on the exception handling of the runtime, you should have a more user-friendly message. Obviously, if you fail to handle it, the full trace happens, and then you must fix that in your code. It's for your benefit too :) This way you get less nuisance troubleshooting calls since the error message is clearer. -Steve
Re: [Semi OT] Language for Game Development talk
On Wednesday, 1 October 2014 at 15:17:33 UTC, po wrote: On Wednesday, 1 October 2014 at 14:16:38 UTC, bearophile wrote: Max Klyga: https://www.youtube.com/watch?v=TH9VCN6UkyQ A third talk (from another person) about related matters: https://www.youtube.com/watch?v=rX0ItVEVjHc He doesn't use RTTI, exceptions, multiple inheritance, STL, templates, and lot of other C++ stuff. On the other hand he writes data-oriented code manually, the compiler and language give him only very limited help, and the code he writes looks twiddly and bug-prone. So why aren't they designing a language without most of the C++ stuff they don't use, but with features that help them write the data-oriented code they need? Bye, bearophile He (deliberately, I'd guess) conflates cache-friendly data structures and access patterns with his particular preference for a C style. It is a fallacy that he presents as fact. The key to this type of fast code isn't the C style; it is the contiguous data layout and cache-friendly access patterns, both of which are easy enough to achieve in modern C++. OOP style and AoS in general do cause cache-unfriendly data access. You can separate out your hot and cold data, but at that point you're not really programming in an OO style. He doesn't use RTTI, templates, exceptions, etc. for reasons other than cache friendliness. At what point does he say it's difficult to code in an SoA style in C++? He clearly states he sees no advantage to C++ over C.
Re: std.experimental.logger formal review round 3
Am Wed, 01 Oct 2014 15:05:53 + schrieb Robert burner Schadek rburn...@gmail.com: On Wednesday, 1 October 2014 at 14:24:52 UTC, Marco Leise wrote: Am Wed, 01 Oct 2014 12:49:29 + schrieb Robert burner Schadek rburn...@gmail.com: maybe I should add a disableGetSysTime switch CLOCK_REALTIME_COARSE / CLOCK_REALTIME_FAST should be explored. good pointer, but what about Win and Mac Windows 2000 had some function that returns 4ms-accurate time; I hope it is implemented like CLOCK_REALTIME_COARSE. OS X ... oh well. Don't know. Just declare the fast timer a hint, I guess. Like when you ask for anti-aliasing in OpenGL and the implementation is free to decide if it can or wants to deliver. So it turns into: I need a sub-second timestamp, but make it as fast as possible on the target OS. Maybe some day Apple will copy CLOCK_REALTIME_FAST from FreeBSD. If you disable time completely, what would the LogEntry contain as the time stamp? SysTime.init? That was my first idea. -- Marco
Re: [Semi OT] Language for Game Development talk
I certainly believe C++ style and its community promote the idea of zero-overhead abstractions and the kind of OOP style which _does_ cause cache misses.
Re: RFC: moving forward with @nogc Phobos
On Tuesday, 30 September 2014 at 19:10:19 UTC, Marc Schütz wrote: [...] I'm convinced this isn't necessary. Let's take `setExtension()` as an example, standing in for any of a class of similar functions. This function allocates memory, returns it, and abandons it; it gives up ownership of the memory. The fact that the memory has been freshly allocated means that it is (head) unique, and therefore the caller (= library user) can take over the ownership. This, in turn, means that the caller can decide how she wants to manage it. Bingo. Have some way to mark the function return type as a unique pointer. This does not imply full-fledged unique pointer type support in the language - just enough to have the caller ensure continuity of memory management policy from there. One problem with actually implementing this is that using reference counting as a memory management policy requires extra space for the reference counter in the object, just as garbage collection requires support for scanning and identification of interior object memory ranges. While allocation and memory management may be quite independent in theory, practical high-performance implementations tend to be intimately related. (I'll try to make a sketch on how this can be implemented in another post.) Do elaborate! As a conclusion, I would say that APIs should strive for the following principles, in this order: 1. Avoid allocation altogether, for example by laziness (ranges), or by accepting sinks. 2. If allocations are necessary (or desirable, to make the API more easily usable), try hard to return a unique value (this of course needs to be expressed in the return type). 3. If both of the above fail, only then return a GCed pointer, or alternatively provide several variants of the function (though this shouldn't be necessary often). An interesting alternative: Instead of passing a flag directly describing the policy, pass the function a type that it should wrap its return value in.
As for the _allocation_ strategy: It indeed needs to be configurable, but here, the same objections against a template parameter apply. As the allocator doesn't necessarily need to be part of the type, a (thread) global variable can be used to specify it. This lends itself well to idioms like with(MyAllocator alloc) { // ... } Assuming there is some dependency between the allocator and the memory management policy, I guess this would be initialized on thread start and could not be modified later. All code running inside the thread would need to either match the configured policy, not handle any kind of pointers, or use a limited subset of unique pointers. Another way to ensure that code can run on either RC or GC is to make certain objects (specifically, Exceptions) always allocate a reference counter, regardless of the currently configured policy.
Re: [Semi OT] Language for Game Development talk
currysoup: At what point does he say it's difficult to code in a SoA style in C++? Perhaps a (part of) language more fit/helpful/nice for that purpose/use can be invented. Bye, bearophile
Re: RFC: moving forward with @nogc Phobos
Oren Tirosh: Bingo. Have some way to mark the function return type as a unique pointer. This does not imply full-fledged unique pointer type support in the language Let's have full-fledged memory zones tracking in the D type system :-) Bye, bearophile
Re: [Semi OT] Language for Game Development talk
On Wednesday, 1 October 2014 at 14:16:38 UTC, bearophile wrote: Max Klyga: https://www.youtube.com/watch?v=TH9VCN6UkyQ A third talk (from another person) about related matters: https://www.youtube.com/watch?v=rX0ItVEVjHc He doesn't use RTTI, exceptions, multiple inheritance, STL, templates, and lot of other C++ stuff. On the other hand he writes data-oriented code manually, the compiler and language give him only very limited help, and the code he writes looks twiddly and bug-prone. So why aren't they designing a language without most of the C++ stuff they don't use, but with features that help them write the data-oriented code they need? Probably because C++ is good enough and already has mature infrastructure.
Re: std.experimental.logger formal review round 3
On 10/1/14 11:53 AM, Marco Leise wrote: Am Wed, 01 Oct 2014 15:05:53 + schrieb Robert burner Schadek rburn...@gmail.com: On Wednesday, 1 October 2014 at 14:24:52 UTC, Marco Leise wrote: Am Wed, 01 Oct 2014 12:49:29 + schrieb Robert burner Schadek rburn...@gmail.com: maybe I should add a disableGetSysTime switch CLOCK_REALTIME_COARSE / CLOCK_REALTIME_FAST should be explored. good pointer, but what about Win and Mac Windows 2000 had some function that returns 4ms accurate time, I hope it is implemented like CLOCK_REALTIME_COARSE. I think it wouldn't be prudent to explore Windows options until we prove the windows currTime is slow like the Linux version. On Mac, gettimeofday is used. I don't know that it's necessarily slow, but I don't know of a better way to get the wall clock time. -Steve
Re: FOSDEM'15 - let us propose a D dev room!!!
Unfortunately, D developer room was rejected: Like every year, we received quite a lot more proposals than we have rooms at our disposal. Unfortunately, we were not able to schedule your proposed room this year. The list of accepted rooms can be found on our website. We hope you'll agree that we have an interesting lineup despite the absence of the room you proposed. If the content you intended to schedule in a dedicated developer room fits in one of the accepted rooms, please submit it there when the CFP is announced. List of accepted developer rooms: https://fosdem.org/2015/news/2014-09-30-accepted-devrooms/ Don't know if something related to D may fit one of accepted devrooms.
Re: [Semi OT] Language for Game Development talk
Am 01.10.2014 17:40, schrieb currysoup: I certainly believe C++ style and its community promote the idea of zero overhead abstractions and the kind of OOP style which _does_ cause cache misses. C++ does not imply OOP. And the zero-overhead abstractions are a cultural heritage from C, as it was the only way to sell C++ to most C developers. -- Paulo
Re: RFC: moving forward with @nogc Phobos
On 10/1/14, 6:52 AM, Sean Kelly wrote: On Wednesday, 1 October 2014 at 08:55:55 UTC, Andrei Alexandrescu wrote: On 9/30/14, 9:10 AM, Sean Kelly wrote: Is this for exposition purposes or actually how you expect it to work? That's pretty much what it would take. The key here is that RCString is almost a drop-in replacement for string, so the code using it is almost identical. There will be places where code needs to be replaced, e.g. auto s = literal; would need to become S s = literal; So creation of strings will change a bit, but overall there's not a lot of churn. I'm confused. Is this a general-purpose solution or just one that switches between string and RCString? General purpose since your suggested change. -- Andrei
Re: RFC: moving forward with @nogc Phobos
On 10/1/14, 7:03 AM, Sean Kelly wrote: So let the user supply a scratch buffer that will hold the result? With the RC approach we're still allocating, they just aren't built-in slices, correct? Correct. -- Andrei
Re: How to build phobos docs for dlang.org
On Wednesday, 1 October 2014 at 09:00:51 UTC, Robert burner Schadek wrote: On Wednesday, 1 October 2014 at 06:29:46 UTC, Mark Isaacson wrote: I hereby volunteer to document whatever answer I am given. already done http://wiki.dlang.org/Building_DMD#Building_the_Docs I saw this yesterday and followed the instructions but was unable to get it to work. At a minimum, the first 'make' does not do anything and the 'generated/linux/default' directory does not exist. I see 'generated/linux/64' instead if I use the posix.mak file. If I proceed using that directory instead, I end up with the same problem: Now when I run the phobos html build it does format everything in the dlang.org colors, but I'm still missing std_container.html.
Re: RFC: moving forward with @nogc Phobos
On 10/1/14, 8:48 AM, Oren Tirosh wrote: On Tuesday, 30 September 2014 at 19:10:19 UTC, Marc Schütz wrote: [...] I'm convinced this isn't necessary. Let's take `setExtension()` as an example, standing in for any of a class of similar functions. This function allocates memory, returns it, and abandons it; it gives up ownership of the memory. The fact that the memory has been freshly allocated means that it is (head) unique, and therefore the caller (= library user) can take over the ownership. This, in turn, means that the caller can decide how she wants to manage it. Bingo. Have some way to mark the function return type as a unique pointer. I'm skeptical about this approach (though clearly we need to explore it for e.g. passing ownership of data across threads). For strings and other casual objects I think we should focus on GC/RC strategies. This is because people do things like: auto s = setExtension(s1, s2); and then attempt to use s as a regular variable (copy it etc). Making s unique would make usage quite surprising and cumbersome. Andrei
Re: How to build phobos docs for dlang.org
On 10/1/14, 10:11 AM, Mark Isaacson wrote: On Wednesday, 1 October 2014 at 09:00:51 UTC, Robert burner Schadek wrote: On Wednesday, 1 October 2014 at 06:29:46 UTC, Mark Isaacson wrote: I hereby volunteer to document whatever answer I am given. already done http://wiki.dlang.org/Building_DMD#Building_the_Docs I saw this yesterday and followed the instructions but was unable to get it to work. At a minimum, the first 'make' does not do anything and the 'generated/linux/default' directory does not exist. I think I fixed that a while ago; default was a mistake. I see 'generated/linux/64' instead if I use the posix.mak file. If I proceed using that directory instead, I end up with the same problem: Now when I run the phobos html build it does format everything in the dlang.org colors, but I'm still missing std_container.html. I think for a while now Phobos documentation has been produced as part of building dlang.org (which itself is right now difficult to build without babysitting, for unrelated reasons). So building Phobos docs straight from Phobos might have some bit rot. We should sort this out when I get back, in a Facebook chat. I recommend you start with phobos/posix.mak and build from there. Take a white box approach - open posix.mak in an editor and see what happens when you build html. You'll see where the generated docs go (I think in a dir called web, configurable by choosing DOC_OUTPUT_DIR). Doing so is easy from the outside: make html DOC_OUTPUT_DIR=/tmp/wtf Andrei
Re: FOSDEM'15 - let us propose a D dev room!!!
On 1 October 2014 17:21, Dicebot via Digitalmars-d digitalmars-d@puremagic.com wrote: Unfortunately, D developer room was rejected: Like every year, we received quite a lot more proposals than we have rooms at our disposal. Unfortunately, we were not able to schedule your proposed room this year. The list of accepted rooms can be found on our website. We hope you'll agree that we have an interesting lineup despite the absence of the room you proposed. If the content you intended to schedule in a dedicated developer room fits in one of the accepted rooms, please submit it there when the CFP is announced. List of accepted developer rooms: https://fosdem.org/2015/news/2014-09-30-accepted-devrooms/ Don't know if something related to D may fit one of accepted devrooms. I could always gate crash Go's dev room. :o)
Re: RFC: moving forward with @nogc Phobos
On Wednesday, 1 October 2014 at 17:13:38 UTC, Andrei Alexandrescu wrote: On 10/1/14, 8:48 AM, Oren Tirosh wrote: On Tuesday, 30 September 2014 at 19:10:19 UTC, Marc Schütz wrote: [...] I'm convinced this isn't necessary. Let's take `setExtension()` as an example, standing in for any of a class of similar functions. This function allocates memory, returns it, and abandons it; it gives up ownership of the memory. The fact that the memory has been freshly allocated means that it is (head) unique, and therefore the caller (= library user) can take over the ownership. This, in turn, means that the caller can decide how she wants to manage it. Bingo. Have some way to mark the function return type as a unique pointer. I'm skeptical about this approach (though clearly we need to explore it for e.g. passing ownership of data across threads). For strings and other casual objects I think we should focus on GC/RC strategies. This is because people do things like: auto s = setExtension(s1, s2); and then attempt to use s as a regular variable (copy it etc). Making s unique would make usage quite surprising and cumbersome. The idea is that the unique property is very short-lived: the caller immediately assigns it to a pointer of the appropriate policy: either RC or GC. This keeps the callee agnostic of the chosen policy and does not require templating multiple versions of the code. The allocator configured for the thread must match the generated code at the call site i.e. if the caller uses RC pointers the allocator must allocate space for the reference counter (at negative offset to keep compatibility).
Re: FOSDEM'15 - let us propose a D dev room!!!
I think I will attend anyway as a casual visitor, can have an informal D meetup at least.
Re: RFC: moving forward with @nogc Phobos
On 10/1/14, 10:25 AM, Oren T wrote: The idea is that the unique property is very short-lived: the caller immediately assigns it to a pointer of the appropriate policy: either RC or GC. This keeps the callee agnostic of the chosen policy and does not require templating multiple versions of the code. The allocator configured for the thread must match the generated code at the call site i.e. if the caller uses RC pointers the allocator must allocate space for the reference counter (at negative offset to keep compatibility). This all... looks arcane. I'm not sure how it can even made to work if user code just uses auto. -- Andrei
Re: RFC: moving forward with @nogc Phobos
On Wed, Oct 01, 2014 at 02:51:08AM -0700, Andrei Alexandrescu via Digitalmars-d wrote: On 9/30/14, 11:06 AM, Dmitry Olshansky wrote: 29-Sep-2014 14:49, Andrei Alexandrescu пишет: auto setExtension(MemoryManagementPolicy mmp = gc, R1, R2)(R1 path, R2 ext) if (...) { static if (mmp == gc) alias S = string; else alias S = RCString; S result; ... return result; } Incredible code bloat? Boilerplate in each function for the win? I'm at loss as to how it would make things better. Sean's idea to make string an alias of the policy takes care of this concern. -- Andrei But Sean's idea only takes strings into account. Strings aren't the only allocated resource Phobos needs to deal with. So extrapolating from that idea, each memory management struct (or whatever other aggregate we end up using), say call it MMP, will have to define MMP.string, MMP.jsonNode (since parseJSON() need to allocate not only strings but JSON nodes), MMP.redBlackTreeNode, MMP.listNode, MMP.userDefinedNode, ... Nope, still don't see how this could work. Please clarify, kthx. T -- Sometimes the best solution to morale problems is just to fire all of the unhappy people. -- despair.com
Re: How to build phobos docs for dlang.org
On Wed, Oct 01, 2014 at 05:11:47PM +, Mark Isaacson via Digitalmars-d wrote: On Wednesday, 1 October 2014 at 09:00:51 UTC, Robert burner Schadek wrote: On Wednesday, 1 October 2014 at 06:29:46 UTC, Mark Isaacson wrote: I hereby volunteer to document whatever answer I am given. already done http://wiki.dlang.org/Building_DMD#Building_the_Docs I saw this yesterday and followed the instructions but was unable to get it to work. At a minimum, the first 'make' does not do anything and the 'generated/linux/default' directory does not exist. I see 'generated/linux/64' instead if I use the posix.mak file. If I proceed using that directory instead, I end up with the same problem: Now when I run the phobos html build it does format everything in the dlang.org colors, but I'm still missing std_container.html. Here's what I do: cd /path/to/dlang.org make -f posix.mak html cp -rf web /path/to/webdir/ cd /path/to/phobos make -f posix.mak html cp -rf ../web /path/to/webdir/ This installs the Phobos docs in /path/to/webdir/web/phobos-prerelease/* (note, by default it does NOT install to ../web/phobos/*). So assuming your webserver root points to /path/to/webdir, you can then point your browser to: http://your.web.server/web/phobos-prerelease/std_container.html Let me know if this still doesn't help. T -- There are three kinds of people in the world: those who can count, and those who can't.
Re: RFC: moving forward with @nogc Phobos
On Wednesday, 1 October 2014 at 17:53:43 UTC, H. S. Teoh via Digitalmars-d wrote: On Wed, Oct 01, 2014 at 02:51:08AM -0700, Andrei Alexandrescu via Digitalmars-d wrote: On 9/30/14, 11:06 AM, Dmitry Olshansky wrote: 29-Sep-2014 14:49, Andrei Alexandrescu пишет: auto setExtension(MemoryManagementPolicy mmp = gc, R1, R2)(R1 path, R2 ext) if (...) { static if (mmp == gc) alias S = string; else alias S = RCString; S result; ... return result; } Incredible code bloat? Boilerplate in each function for the win? I'm at loss as to how it would make things better. Sean's idea to make string an alias of the policy takes care of this concern. -- Andrei But Sean's idea only takes strings into account. Strings aren't the only allocated resource Phobos needs to deal with. So extrapolating from that idea, each memory management struct (or whatever other aggregate we end up using), say call it MMP, will have to define MMP.string, MMP.jsonNode (since parseJSON() need to allocate not only strings but JSON nodes), MMP.redBlackTreeNode, MMP.listNode, MMP.userDefinedNode, ... Nope, still don't see how this could work. Please clarify, kthx. T MMP.Ref!redBlackTreeNode ? (where Ref is e.g. a ref-counted pointer type (like RefCounted but with class support) for RC MMP but plain GC reference for GC MMP, etc.) I kinda like this idea, since it might possibly allow user-defined memory management policies (which wouldn't get special compiler treatment that e.g. RC may need, though).
Re: RFC: moving forward with @nogc Phobos
On Wednesday, 1 October 2014 at 17:33:34 UTC, Andrei Alexandrescu wrote: On 10/1/14, 10:25 AM, Oren T wrote: The idea is that the unique property is very short-lived: the caller immediately assigns it to a pointer of the appropriate policy, either RC or GC. This keeps the callee agnostic of the chosen policy and does not require templating multiple versions of the code. The allocator configured for the thread must match the generated code at the call site, i.e. if the caller uses RC pointers the allocator must allocate space for the reference counter (at a negative offset to keep compatibility). This all... looks arcane. I'm not sure how it can even be made to work if user code just uses auto. -- Andrei At the moment, @nogc code can't call any function returning a pointer. Under this scheme, @nogc code is allowed to call either code that returns an explicitly RC type (Exception, RCString) or code returning an agnostic unique pointer that may be used from either @gc or @nogc code. I already see some holes and problems, but I wonder if something along these lines may be made to work.
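A sketch of the short-lived-unique idea, with hypothetical names (`Unique`, `RC`, `makeInt` are illustrations): the callee returns a `Unique!T` that carries no policy, and the caller immediately converts it to its chosen representation, so only the call site, not the callee, knows whether the pointer ends up ref-counted or GC-managed.

```d
// Hypothetical policy-agnostic handle: exists only between return and
// assignment at the call site.
struct Unique(T)
{
    T* ptr;
}

// Toy RC wrapper adopting a Unique; the counter is allocated separately
// here, not at a negative offset as the proposal suggests.
struct RC(T)
{
    T* ptr;
    size_t* count;
    this(Unique!T u)
    {
        ptr = u.ptr;
        count = new size_t(1);
    }
}

// Policy-agnostic callee: knows nothing about RC vs GC.
Unique!int makeInt(int v)
{
    return Unique!int(new int(v));
}

void main()
{
    auto rc = RC!int(makeInt(42));   // caller chooses RC
    int* gcRef = makeInt(7).ptr;     // or hands the pointer to the GC
    assert(*rc.ptr == 42 && *gcRef == 7);
}
```

The hole Andrei points at is visible even in the sketch: `auto x = makeInt(42);` leaves `x` as a bare `Unique!int` with no policy attached, so something must force the immediate conversion.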
Re: RFC: moving forward with @nogc Phobos
On Wednesday, 1 October 2014 at 17:53:43 UTC, H. S. Teoh via Digitalmars-d wrote: But Sean's idea only takes strings into account. Strings aren't the only allocated resource Phobos needs to deal with. So extrapolating from that idea, each memory management struct (or whatever other aggregate we end up using), say call it MMP, will have to define MMP.string, MMP.jsonNode (since parseJSON() needs to allocate not only strings but JSON nodes), MMP.redBlackTreeNode, MMP.listNode, MMP.userDefinedNode, ... Nope, still don't see how this could work. Please clarify, kthx. Assuming you're willing to take the memoryModel type as a template argument, I imagine we could do something where the user can specialize the memoryModel for their own types, a bit like how information is derived for iterators in C++. The problem is that this still means passing the memoryModel in as a template argument. What I'd really want is for it to be a global, except that templated virtuals are logically impossible. I guess something could maybe be sorted out via a factory design, but that's not terribly D-like. I'm at a loss for how to make this memoryModel thing work the way I'd actually want it to if I were to use it.
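The C++-iterator-traits analogy can be sketched in D with template specialization, using hypothetical names throughout (`NodeFor`, `SomePolicy`, `MyThing` are illustrations): the library provides a primary mapping from payload type to allocated node type, and a user "specializes" it for their own types without touching library code.

```d
// User-defined payload type.
struct MyThing { string name; }

// Primary mapping: by default a policy allocates T directly.
template NodeFor(MMP, T)
{
    alias NodeFor = T;
}

// User specialization: MyThing gets a custom node layout,
// much like specializing iterator_traits in C++.
struct MyThingNode { MyThing payload; uint flags; }
template NodeFor(MMP, T : MyThing)
{
    alias NodeFor = MyThingNode;
}

// Stand-in memory management policy.
struct SomePolicy {}

void main()
{
    static assert(is(NodeFor!(SomePolicy, int) == int));
    static assert(is(NodeFor!(SomePolicy, MyThing) == MyThingNode));
}
```

This sidesteps the MMP.jsonNode/MMP.redBlackTreeNode explosion, but it does not remove the objection in the post: the policy still travels as a template argument rather than the global Sean would prefer.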
Re: RFC: moving forward with @nogc Phobos
On Wednesday, 1 October 2014 at 18:37:50 UTC, Sean Kelly wrote: On Wednesday, 1 October 2014 at 17:53:43 UTC, H. S. Teoh via Digitalmars-d wrote: But Sean's idea only takes strings into account. Strings aren't the only allocated resource Phobos needs to deal with. So extrapolating from that idea, each memory management struct (or whatever other aggregate we end up using), say call it MMP, will have to define MMP.string, MMP.jsonNode (since parseJSON() needs to allocate not only strings but JSON nodes), MMP.redBlackTreeNode, MMP.listNode, MMP.userDefinedNode, ... Nope, still don't see how this could work. Please clarify, kthx. Assuming you're willing to take the memoryModel type as a template argument, I imagine we could do something where the user can specialize the memoryModel for their own types, a bit like how information is derived for iterators in C++. The problem is that this still means passing the memoryModel in as a template argument. What I'd really want is for it to be a global, except that templated virtuals are logically impossible. I guess something could maybe be sorted out via a factory design, but that's not terribly D-like. I'm at a loss for how to make this memoryModel thing work the way I'd actually want it to if I were to use it. If you were to forget D's restrictions for a moment and consider an idealized language, how would you express this? Providing that might trigger ideas from people beyond what we have seen so far, by removing implied restrictions.
Re: Program logic bugs vs input/environmental errors
On Wednesday, 1 October 2014 at 14:46:50 UTC, Steven Schveighoffer wrote: On 10/1/14 10:36 AM, Bruno Medeiros wrote: This is a grey area that I think reasonable people can correctly call a bug if they so wish, despite the intentions of the developer. Correctly? It's amazing how difficult it is, in a discussion, to agree even on the meaning of simple words: is an _intentional programmer behaviour_ a bug? Whah ;-P --- /Paolo