Re: shared - i need it to be useful
On Monday, 15 October 2018 at 20:53:32 UTC, Manu wrote: On Mon, Oct 15, 2018 at 1:05 PM Peter Alexander via Digitalmars-d wrote: On Monday, 15 October 2018 at 18:46:45 UTC, Manu wrote: 1. A single producer, single consumer (SPSC) queue is necessarily shared, but is only safe if there is one writing thread and one reading thread. Is it ok if shared also requires user discipline and/or runtime checks to ensure correct usage? I think you can model this differently... perhaps rather than a single object, it's a coupled pair. That's a nice design. Your swap function is plain broken; it doesn't do what the API promises. You can write all sorts of broken code, and this is a good example of just plain broken code. If it is broken, then why allow it? Why would we need to cast shared away if the members weren't atomic, and why do we allow it if they are? I understand that shared can't magically tell you when code is thread safe or not. It does make sense to disallow almost everything and require casts. I'm just not seeing the value of allowing shared methods to access shared members if that isn't thread safe. Make it require casts.
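Manu's "coupled pair" suggestion could be sketched roughly like this — all names and details here are my own guess at the design, not code from the thread. The shared state is only touched through atomics, while each endpoint is an unshared handle owned by exactly one thread:

```d
import core.atomic;

// Hypothetical sketch: the buffer and indices are shared, but each
// endpoint struct is owned by exactly one thread, so enforcing "one
// producer, one consumer" becomes a matter of handle ownership.
struct SpscState(T, size_t N)
{
    size_t head, tail; // consumer advances head, producer advances tail
    T[N] items;
}

struct Producer(T, size_t N)
{
    shared(SpscState!(T, N))* s;

    bool tryPush(T value)
    {
        immutable t = atomicLoad!(MemoryOrder.raw)(s.tail);
        immutable h = atomicLoad!(MemoryOrder.acq)(s.head);
        if (t - h == N) return false;                 // queue full
        auto items = cast(T[N]*) &s.items;            // unshared view
        (*items)[t % N] = value;
        atomicStore!(MemoryOrder.rel)(s.tail, t + 1); // publish
        return true;
    }
}

struct Consumer(T, size_t N)
{
    shared(SpscState!(T, N))* s;

    bool tryPop(ref T value)
    {
        immutable h = atomicLoad!(MemoryOrder.raw)(s.head);
        immutable t = atomicLoad!(MemoryOrder.acq)(s.tail);
        if (t == h) return false;                     // queue empty
        auto items = cast(T[N]*) &s.items;            // unshared view
        value = (*items)[h % N];
        atomicStore!(MemoryOrder.rel)(s.head, h + 1); // release slot
        return true;
    }
}
```

A factory would create the state once and hand the Producer to one thread and the Consumer to another; correctness still rests on that ownership discipline, which is exactly the concern raised in point 1.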
Re: shared - i need it to be useful
On Monday, 15 October 2018 at 18:46:45 UTC, Manu wrote: Destroy... What you describe sounds better than what we currently have. I have at least two concerns: 1. A single producer, single consumer (SPSC) queue is necessarily shared, but is only safe if there is one writing thread and one reading thread. Is it ok if shared also requires user discipline and/or runtime checks to ensure correct usage? 2. In your scheme (as I understand it), a struct composed entirely of atomics would be able to implement shared methods without any casts, but also be completely thread *unsafe*. Is this okay? Example of #2:

struct TwoInts {
    Atomic!int x, y;
    void swap() shared {
        int z = x.load;
        x.store(y.load);
        y.store(z);
    }
}
Re: shared - i need it to be useful
On Monday, 15 October 2018 at 18:46:45 UTC, Manu wrote: 2. object may have shared methods; such methods CAN be called on shared instances. such methods may internally implement synchronisation to perform their function. perhaps methods of a lock-free queue structure for instance, or operator overloads on `Atomic!int`, etc. Just checking my understanding: are you saying here that shared methods can effectively do anything and the burden of correctness is on the author? Or do you still have to cast the shared away first?
Re: Interesting Observation from JAXLondon
On Friday, 12 October 2018 at 07:13:33 UTC, Russel Winder wrote: On Thu, 2018-10-11 at 13:00 +, bachmeier via Digitalmars-d wrote: […] Suggestions? My guess is that the reason they've heard of those languages is because their developers were writing small projects using Go and Rust, but not D. I fear it may already be too late. [...] I don't think it's ever too late. Python was stagnant for a long time but exploded in popularity in recent years due to Pandas, TensorFlow, SciPy etc. Similar things have happened in other languages, and it can happen for D. The technical differences between languages are mostly immaterial as well, IMO. Plenty of awful languages have succeeded despite being really terrible for the domain they won. As long as there are libraries and integrations that help solve a problem, people will use them. PHP is the obvious example here. R is another. As long as D continues to be a nice language to work in for hobbyists, there will always be potential for a killer use case to come along. D just needs to make sure it doesn't piss off its fans. vibe.d happened because a single person was a fan of D. You don't need a lot of marketing for that to happen. Maybe vibe.d hasn't been the killer app for D, but the next thing might be, so you just need fans. I do believe in "Build it and they will come", but "it" needs to be something of value. At the moment, the "it" of D on its own just isn't valuable enough. Lots of marketing without a strong value proposition will just be a waste of effort.
Re: Walter's Guide to Translating Code From One Language to Another
On Friday, 21 September 2018 at 06:00:33 UTC, Walter Bright wrote: I've learned this the hard way, and I've had to learn it several times because I am a slow learner. I've posted this before, and repeat it because it bears repeating. I find this is a great procedure for any sort of large refactoring -- minimal changes at each step, and ensure tests are passing after every change. Thanks for sharing!
Re: Debugging mixins - we need to do something
On Saturday, 8 September 2018 at 22:01:23 UTC, Manu wrote: As I and others have suggested before; mixin expansion should emit a `[sourcefile].d.mixin` file to the object directory, this file should accumulate mixin instantiations, and perhaps it would be ideal to also emit surrounding code for context. This implies also emitting all template instantiation instances, right? I agree this is an important problem.
Re: Small @nogc experience report
On Saturday, 8 September 2018 at 08:32:58 UTC, Guillaume Piolat wrote: On Saturday, 8 September 2018 at 08:07:07 UTC, Peter Alexander wrote: I'd love to know if anyone is making good use of @nogc in a larger code base and is happy with it. Weka.io? Not Weka, but we are happy with @nogc, and without @nogc our job would be impossible. If you don't like it, fine. But I can guarantee it has its uses. There is no other choice when the runtime is disabled but to have @nogc. It's fantastic peace of mind for high-performance code to be able to _enforce_ that something will not allocate. If anything, that's superior to C++, where copying a std::vector will trigger an allocation, etc. Thanks for chiming in. That's good to know.
Re: Small @nogc experience report
On Friday, 7 September 2018 at 17:01:09 UTC, Meta wrote: You are allowed to call "@gc" functions inside @nogc functions if you prefix them with a debug statement, e.g.: Thanks! I was aware that debug is an escape hatch for pure, but didn't consider it for @nogc. I've been thinking lately that @nogc may have been going too far, and -vgc was all that was actually needed. -vgc gives you the freedom to remove or ignore GC allocations as necessary, instead of @nogc's all-or-nothing approach. I was thinking the same thing. The type system is a very heavyweight and intrusive way to enforce something. I'd love to know if anyone is making good use of @nogc in a larger code base and is happy with it. Weka.io?
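For reference, the escape hatch Meta describes looks like this (a minimal sketch; compile with -debug to enable the statement):

```d
import std.stdio;

void compute() @nogc
{
    int x = 42;
    // writeln allocates, so it can't be called here directly,
    // but debug statements are exempt from @nogc checking:
    debug writeln("x = ", x);
}
```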
Small @nogc experience report
I recently wrote a small program of ~600 lines of code to solve an optimisation puzzle. Profiling showed that GC allocations were using non-trivial CPU, so I decided to try to apply @nogc to remove allocations. This is a small experience report of my efforts.

1. My program does some initialisation before the main solver. I don't care about allocations in the initialisation. Since not all of my code needed to be @nogc, I couldn't add `@nogc:` to the top of the file, and instead had to refactor my code into initialisation parts and main loop parts and wrap the latter in @nogc { ... }. This wasn't a major issue, but it was inconvenient.

2. For my code the errors were quite good. I was immediately able to see where GC allocations were occurring and fix them.

3. It was really frustrating that I had to make the compiler happy before I was able to run anything again. Due to point #1 I had to move code around to restructure things, and wanted to make sure everything continued working before all GC allocations were removed.

4. I used std.algorithm.topNCopy, which is not @nogc. The error just says "cannot call non-@nogc function [...]". I know there are efforts to make Phobos more @nogc friendly, but seeing this error is like hitting a brick wall. I wouldn't expect topNCopy to use the GC, but as a user, what do I do with the error? Having to dig into the Phobos source is unpleasant. Should I file a bug? What if it is intentionally not @nogc for some subtle reason? Do I rewrite topNCopy?

5. Sometimes I wanted to add writeln to my code to debug things, but writeln is not @nogc, so I could not. In hindsight I could have used printf, but I was too frustrated to continue.

6. In general, peppering my code with @nogc annotations was just unpleasant.

7. In the end I just gave up and used the -vgc flag, which worked great. I had to ignore allocations from initialisation, but that was easy. It might be nice to have some sort of `ReportGC` RAII struct to scope where -vgc reports the GC.
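The restructuring in point 1 looks roughly like this (function names made up for illustration): a `@nogc:` at the top of the file would cover everything that follows, so the main-loop code goes in an attribute block instead.

```d
// GC use is fine during initialisation:
int[] buildTables()
{
    return new int[](1024);
}

// ...but the hot solver code must not allocate:
@nogc
{
    int solve(const(int)[] tables)
    {
        int best = 0;
        foreach (t; tables)
            if (t > best) best = t;
        return best;
    }
}
```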
Re: Is @safe still a work-in-progress?
On Thursday, 30 August 2018 at 16:57:05 UTC, Maksim Fomin wrote: My point is somewhat different from what Jonathan has already expressed here. He speaks about whitelisting/blacklisting and how the former model will always contain loopholes. In my view, the reason is that memory safety in the (essentially good old C) memory model depends on the runtime memory type, which is unrelated to the static type. Taking a random variable - it can be allocated on the stack, heap, GC-heap, or thread-local storage, and there is no way at compile time to determine this in the general case. In other words, static type rules are a bad approach to determining memory safety. They can be used to detect some obvious bugs, but do not work across compilation boundaries. The better approach in my view is to insert runtime code which performs some tests (at least this is how C# works). I think it is understood that any compile time checking must be conservative in the general case to be sound. The same is true for compile time type checking, which must reject type safe code that can only be determined type safe at runtime. Adding runtime checks does sound reasonable, and D already does this for array bounds.
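D's array bounds checking is an existing instance of this: an access that can't be proven in range at compile time is guarded by an inserted runtime test.

```d
void main() @safe
{
    int[] xs = [1, 2, 3];
    size_t i = 5; // in general, not knowable at compile time
    // The compiler inserts a bounds check here; instead of reading
    // out-of-bounds memory, this throws core.exception.RangeError.
    auto x = xs[i];
}
```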
Re: Dicebot on leaving D: It is anarchy driven development in all its glory.
On Sunday, 26 August 2018 at 08:40:32 UTC, Andre Pany wrote: In the whole discussion I miss 2 really important things. If your product compiles fine with a dmd version, no one forces you to update to the next dmd version. In the company I work for, we set for each project the DMD version in the build settings. The speed of DMD releases or breaking changes doesn't affect us at all. If your product is a library then your customers dictate which dmd version you build with.
D on Twitter
I follow a number of official programming language accounts on Twitter. It is a good way to keep up to date with what's happening in those communities, and I imagine many people do the same thing. Something I've noticed is that D is relatively silent on this front. At least on Twitter, it gives an impression that the D community is less active than it is. For comparison:

@rustlang: 32.7k followers, 12.7k tweets
@D_Programming: 10.1k followers, 1k tweets

10,000 is a lot of people to reach, and 1k tweets over 8 years is too little to seem engaging. Some suggestions:

* Post everything that happens on the Announce forum. https://twitter.com/dlang_ng does this, but it has effectively no followers. A bot could do this.
* Subscribe to #dlang on Twitter and retweet anything good. Not only does this highlight interesting D-related content, but it also gives others an incentive to discuss #dlang there.

I don't know who runs the account, but I think both of these should be quite easy to achieve with little effort. Of course, it is easy to be generous with others' time :-)
wiki.dlang.org confirmation emails marked as spam by gmail
gmail gives the reason: "Lots of messages from k3.1azy.net were identified as spam in the past." Not sure what can be done. Just an FYI.
Re: Engine of forum
On Monday, 20 August 2018 at 08:39:38 UTC, Andrey wrote: On Sunday, 19 August 2018 at 11:11:56 UTC, rikki cattermole wrote: This is a newsgroup, not a forum. The web interface is driven by DFeed and is written in D. It has been designed to be very fast (quite a notable feature). I see this address: https://forum.dlang.org. It is a forum. Ok, even if it isn't a forum, will the dlang community someday have a real forum? Are there any movements in this direction? What are the specific problems solved or opportunities realised by moving to a real forum?
Is @safe still a work-in-progress?
import std.algorithm, std.stdio;

@safe:

auto foo() {
    int[6] xs = [0, 1, 2, 3, 4, 5];
    return xs[].map!(x => x);
}

void main() {
    writeln(foo());
}

https://run.dlang.io/is/qC7HUR For me this gives: [0, 0, -2132056872, 22008, 0, 0] Which looks like it's just reading arbitrary memory. I've filed https://issues.dlang.org/show_bug.cgi?id=19175 My question is: what is the status of @safe? I am quite surprised to see such a simple case fail. Is @safe believed to be fully implemented (modulo bugs), and this is just an unfortunate corner case, or is it known work-in-progress?
Re: High-level vision for 2018 H2?
On Saturday, 28 July 2018 at 08:37:25 UTC, Peter Alexander wrote: The wiki still links to high-level vision for 2018 H1. We're now nearly one month into H2. Is a H2 document in progress? Bump
code.dlang.org is down
https://downforeveryoneorjustme.com/code.dlang.org 502 Bad Gateway
High-level vision for 2018 H2?
The wiki still links to high-level vision for 2018 H1. We're now nearly one month into H2. Is a H2 document in progress?
Re: Bug? opIn with associative array keyed on static arrays
On Sunday, 22 July 2018 at 19:42:45 UTC, Peter Alexander wrote:

void main() {
    int[int[1]] aa;
    aa[[2]] = 1;
    assert([2] in aa);
}

This assertion fails in 2.081.1. Is this a bug? https://dpaste.dzfl.pl/d4c0d4607482 https://issues.dlang.org/show_bug.cgi?id=19112
Bug? opIn with associative array keyed on static arrays
void main() {
    int[int[1]] aa;
    aa[[2]] = 1;
    assert([2] in aa);
}

This assertion fails in 2.081.1. Is this a bug? https://dpaste.dzfl.pl/d4c0d4607482
Re: Where will D sit in the web service space?
On Sunday, 12 July 2015 at 12:14:31 UTC, Ola Fosheim Grøstad wrote: Yet, D is currently not in a strong position for mobile apps or web servers. Mobile apps: I agree. Web servers: Why not? What do you think about the future for D in the web service space? It seems like a space where D has a lot of potential to succeed. Compared to mobile and desktop, server-side integration should be easy for D since (at least for Linux servers) most of the important integration points are through C APIs, which D supports well. Server side is generally more forgiving to new technology since it is generally all open source, and there aren't really any hoops to jump through.
Re: std.allocator.allocate(0) -> return null or std.allocator.allocate(1)?
On Friday, 15 May 2015 at 16:36:29 UTC, Andrei Alexandrescu wrote: This is a matter with some history behind it. In C, malloc(0) always returns a new, legit pointer that can be subsequently reallocated, freed etc. Is the invariant malloc(0) != malloc(0) the only thing that makes 0 a special case here?
Re: std.allocator.allocate(0) -> return null or std.allocator.allocate(1)?
On Sunday, 17 May 2015 at 20:31:50 UTC, deadalnix wrote: On Sunday, 17 May 2015 at 14:13:03 UTC, Peter Alexander wrote: On Friday, 15 May 2015 at 16:36:29 UTC, Andrei Alexandrescu wrote: This is a matter with some history behind it. In C, malloc(0) always returns a new, legit pointer that can be subsequently reallocated, freed etc. Is the invariant malloc(0) != malloc(0) the only thing that makes 0 a special case here? Doesn't need to be; the spec only says it must be passable to free. So here's my question: can we just make allocate(0) do nothing special? i.e. allocate a non-null, but still 0-length buffer?
Re: A few thoughts on std.allocator
On Monday, 11 May 2015 at 15:45:38 UTC, Andrei Alexandrescu wrote: On 5/10/15 5:58 AM, Timon Gehr wrote: Keep in mind that currently, entire regions of memory can change from mutable to immutable implicitly when returned from pure functions. Furthermore, as Michel points out, the ways 'immutable' can be leveraged are constrained by the fact that it implies 'shared'. After sleeping on this for a bit, it seems to me pure functions need to identify their allocations in some way to the caller. The simplest way is to have them conservatively use the most conservative heap. We get to refine these later. For now, here's a snapshot of flags that the allocation primitives should know about:

enum AllocOptions {
    /// Allocate an array, not an individual object
    array,
    /// Allocate a string of characters
    string,
    /// Plan to let the GC take care of this object
    noFree,
    /// This object will be shared between threads
    forSharing,
    /// This object will be moved between threads
    forThreadTransfer,
    /// This object will be mutable after initialization
    mutableTarget,
    /// The caller is a pure function, so result may be immutable
    fromPureFunction,
    /// Object allocated has pointers
    hasPointers,
    /// Typical (default) options
    typical = array | noFree | forSharing | mutableTarget | hasPointers
}

Anything to add to this? Would it be better to name these after their interpretation rather than their expected use cases? For example, the allocator doesn't care whether something is an array; it cares about resizing and large block allocations. Perhaps s/array/expectRealloc/ (as an example). Similar for the other flags. The benefit here is that if, for example, I have some non-array use case for frequent reallocation, then I can express that directly rather than having to pretend it's like an array.
Re: Memory safety depends entirely on GC ?
On Sunday, 22 February 2015 at 14:49:37 UTC, Marc Schütz wrote: On Sunday, 22 February 2015 at 14:41:43 UTC, Peter Alexander wrote: On Sunday, 22 February 2015 at 04:19:32 UTC, deadalnix wrote: On Saturday, 21 February 2015 at 22:13:09 UTC, Peter Alexander wrote: malloc+free can be trusted if wrapped in something like a ref counted pointer, no?

Foo bazoom;
class Foo {
    void bar() { bazoom = this; }
}
void foo() {
    RefCounted!Foo f = ...
    f.bar(); // bazoom is now a dangling pointer.
}

I see, thanks. Is assigning 'this' from a member function the only problem case? No. There's also returning the reference from a member function, storing it in a passed-in reference (pointer, ref, out or slice), and passing it to other functions that in turn leak the reference, as well as throwing it. And leaking closures containing the reference. That's all that I can think of now... Sorry, I meant things other than moving 'this' from a member function to somewhere else, i.e. the RefCounted shouldn't leak the pointer itself in any other way. Just wondering if there are ways to avoid the problem by making 'this' unescapable in certain situations.
Re: Memory safety depends entirely on GC ?
On Sunday, 22 February 2015 at 04:19:32 UTC, deadalnix wrote: On Saturday, 21 February 2015 at 22:13:09 UTC, Peter Alexander wrote: malloc+free can be trusted if wrapped in something like a ref counted pointer, no?

Foo bazoom;
class Foo {
    void bar() { bazoom = this; }
}
void foo() {
    RefCounted!Foo f = ...
    f.bar(); // bazoom is now a dangling pointer.
}

I see, thanks. Is assigning 'this' from a member function the only problem case?
Re: Please tell me this is a bug?
On Sunday, 22 February 2015 at 07:11:24 UTC, deadalnix wrote: On Sunday, 22 February 2015 at 02:27:30 UTC, Peter Alexander wrote: On Sunday, 22 February 2015 at 01:24:09 UTC, Almighty Bob wrote: a += b; // Compiles with no ERROR! Please tell me that's a bug? Not a bug. From spec: http://dlang.org/expression.html#AssignExpression Assignment operator expressions, such as: a op= b are semantically equivalent to: a = cast(typeof(a))(a op b) Seems questionable to me. Anyone know the rationale? If a = b; is disallowed, I don't see why a += b; should be more acceptable. The rationale makes sense for things like:

byte a;
a += 1;

Here, because of type promotion, a + 1 is an int, and if the VRP of a is unknown, you can't implicitly cast back to byte. If VRP is unknown then it should be disallowed for precisely that reason! It can't implicitly cast back to a byte, so it shouldn't. If you know that the int fits in a byte then do the cast yourself; that's what it's for. It is true that this creates questionable side effects for float, but in the end, that is consistent, and it would be very annoying to not be able to increment/decrement integrals smaller than int. Even if you forget about float, it's still inconsistent:

byte a;
int b;
a = a + b;
a += b;

These do the same thing, and have the same pitfalls if the int is too big. Both should be disallowed.
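The inconsistency is easy to demonstrate (a minimal sketch):

```d
void main()
{
    byte a;
    int b = 1000;
    // a = a + b; // Error: cannot implicitly convert int to byte
    a += b;       // compiles: lowered to a = cast(byte)(a + b)
    assert(a == -24); // 1000 silently truncated to a byte
}
```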
Re: Please tell me this is a bug?
On Sunday, 22 February 2015 at 02:35:02 UTC, Adam D. Ruppe wrote: On Sunday, 22 February 2015 at 02:27:30 UTC, Peter Alexander wrote: Seems questionable to me. Anyone know the rationale? I actually agree it is a bit questionable, but I think the difference is += is, conceptually at least, atomic - it is a single function, append() instead of two calls, set(calculate()). I can see where you are coming from, but the fact of the matter is that it is evaluated no differently than a = a + b. Why, in the a += b case, does it somehow become acceptable for the compiler to irresponsibly insert in an implicit narrowing conversion?
Re: Please tell me this is a bug?
On Sunday, 22 February 2015 at 02:15:29 UTC, Almighty Bob wrote: "Assign Expressions: The right operand is implicitly converted to the type of the left operand" which quite clearly is not the case since... a=b; // causes an error. but.. a+=b; // does not. float does not *implicitly* convert to int. It requires explicit coercion, which is what the op-assign expression performs.
Re: Please tell me this is a bug?
On Sunday, 22 February 2015 at 01:24:09 UTC, Almighty Bob wrote: a += b; // Compiles with no ERROR! Please tell me that's a bug? Not a bug. From spec: http://dlang.org/expression.html#AssignExpression Assignment operator expressions, such as: a op= b are semantically equivalent to: a = cast(typeof(a))(a op b) Seems questionable to me. Anyone know the rationale? If a = b; is disallowed, I don't see why a += b; should be more acceptable.
Re: Memory safety depends entirely on GC ?
On Saturday, 21 February 2015 at 20:13:26 UTC, deadalnix wrote: On Saturday, 21 February 2015 at 19:38:02 UTC, Peter Alexander wrote: @safe @nogc :-) (I rewrote the post a few times. Originally I just wrote "mark main @safe @nogc and you're fine", but I think it's a bit misleading since @nogc is still difficult to use, so I wrote about that instead and forgot to mention @safe at all. Thanks for pointing out.) free is an unsafe operation. Unless you don't allocate at all or choose to leak everything, you won't be able to be safe and nogc. The only way out that I know of is an ownership system. malloc+free can be trusted if wrapped in something like a ref counted pointer, no?
Re: Memory safety depends entirely on GC ?
On Saturday, 21 February 2015 at 18:42:54 UTC, deadalnix wrote: On Saturday, 21 February 2015 at 18:06:57 UTC, Peter Alexander wrote: On Saturday, 21 February 2015 at 10:00:07 UTC, FrankLike wrote: Now,some people think D is a 'Memory safety depends entirely on GC' system Language,what do you think? It's kind of right, at the moment, since @nogc D is still quite difficult to use, mostly due to exceptions and library artifacts. Both of these can be solved though. That wouldn't make @nogc safe in any way. @safe @nogc :-) (I rewrote the post a few times. Originally I just wrote "mark main @safe @nogc and you're fine", but I think it's a bit misleading since @nogc is still difficult to use, so I wrote about that instead and forgot to mention @safe at all. Thanks for pointing out.)
Re: Memory safety depends entirely on GC ?
On Saturday, 21 February 2015 at 10:00:07 UTC, FrankLike wrote: Now,some people think D is a 'Memory safety depends entirely on GC' system Language,what do you think? It's kind of right, at the moment, since @nogc D is still quite difficult to use, mostly due to exceptions and library artifacts. Both of these can be solved though.
Re: groupBy/chunkBy redux
On Saturday, 14 February 2015 at 19:39:44 UTC, Andrei Alexandrescu wrote: Peter, could you please take this? Yep, I have some time. https://issues.dlang.org/show_bug.cgi?id=14183
Re: groupBy/chunkBy redux
On Friday, 13 February 2015 at 18:32:35 UTC, Andrei Alexandrescu wrote: * Perhaps rename groupBy to chunkBy. People coming from SQL and other languages might expect groupBy to do hash-based grouping. Agreed. * The unary function implementation must return for each group a tuple consisting of the key and the lazy range of values. The binary function implementation should continue to only return the lazy range of values. Is the purpose of this just to avoid the user potentially needing to evaluate the key function twice? * SortedRange should add a method called group(). Invoked with no predicate, group() should do what chunkBy does, using the sorting predicate. It will need to be called something else, since there may be existing code trying to call std.algorithm.group using UFCS; this would change its behaviour. * aggregate() should detect the two kinds of results per group (well, chunk) and process them accordingly: for unary-predicate chunks, pass the key through and only process the lazy range. Meaning:

auto data = [
    tuple("John", 100),
    tuple("John", 35),
    tuple("Jane", 200),
    tuple("Jane", 87),
];
auto r = data.chunkBy!(x => x[0]).aggregate!sum;

yields a range of tuples: tuple("John", 135), tuple("Jane", 187). Not sure I understand how this is meant to work. With your second bullet implemented, data.chunkBy!(x => x[0]) will return:

tuple("John", [tuple("John", 100), tuple("John", 35)]),
tuple("Jane", [tuple("Jane", 200), tuple("Jane", 87)])

(here [...] denotes the sub-range, not an array). So aggregate will ignore the key part, but how does it know to ignore the name in the sub-ranges?
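For what it's worth, this is essentially the shape std.algorithm's chunkBy ended up with: given a unary key function, each element of the outer range is a tuple of the key and a lazy sub-range of the original elements.

```d
import std.algorithm.iteration : chunkBy;
import std.typecons : tuple;

void main()
{
    auto data = [
        tuple("John", 100),
        tuple("John", 35),
        tuple("Jane", 200),
        tuple("Jane", 87),
    ];
    foreach (pair; data.chunkBy!(x => x[0]))
    {
        auto key = pair[0];   // "John", then "Jane"
        auto chunk = pair[1]; // lazy sub-range of the original tuples
    }
}
```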
Re: Is NRVO part of the spec?
On Saturday, 7 February 2015 at 15:02:43 UTC, Andrei Alexandrescu wrote: On 2/7/15 6:35 AM, Daniel Murphy wrote: "Peter Alexander" wrote in message news:uiqnamficseklfowm...@forum.dlang.org... I'm writing a blog post about why we don't need rvalue references in D. It seems that we rely on NRVO being performed, not just as an optimization, but for correct semantics (e.g. for objects without destructors or postblits). This doesn't appear to be documented anywhere. Is it meant to be part of the spec? NRVO isn't required for correct semantics, as structs can be moved with bitcopy. It is required for structs that disable postblit. -- Andrei NRVO specifically means that a pointer to the destination object is passed to the function, and the returned object is constructed in place. The in-place construction isn't required. What is required is that the local is moved. e.g.

S foo() {
    S s;
    return s;
}
S s = foo();

With NRVO this becomes:

void foo(ref S dst) {
    dst = S();
}
S s = void;
foo(s);

But this isn't necessary. It would also be valid to just do:

void foo(ref S dst) {
    S s;
    move(dst, s); // do the memcpys
}
S s;
foo(s);

This distinction matters because NRVO cannot be performed when foo may return two different objects, but we can still move and avoid the postblit.
Re: Is NRVO part of the spec?
On Saturday, 7 February 2015 at 14:46:55 UTC, Daniel Murphy wrote: NRVO isn't required for correct semantics, as structs can be moved with bitcopy. Yes, you're right. I suppose what I mean is that it should be guaranteed that returning a local lvalue by value always moves it to the caller's destination, rather than copying and then destroying it.

S foo() {
    S s;
    return s;
}
S s = foo(); // no destructors or postblits should be called here

The spec needs to guarantee this, otherwise unary std.algorithm.move isn't guaranteed to work for non-copyable types.
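A non-copyable type makes the requirement concrete — if the compiler copied here rather than moved, this couldn't work at all (a minimal sketch):

```d
struct S
{
    int x;
    @disable this(this); // non-copyable
}

S foo()
{
    S s = S(42);
    return s; // must be moved to the caller; there is no copy to make
}

void main()
{
    S s = foo(); // ok: no postblit is (or could be) called
    assert(s.x == 42);
}
```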
Is NRVO part of the spec?
I'm writing a blog post about why we don't need rvalue references in D. It seems that we rely on NRVO being performed, not just as an optimization, but for correct semantics (e.g. for objects without destructors or postblits). This doesn't appear to be documented anywhere. Is it meant to be part of the spec? Relevant issues: See: https://issues.dlang.org/show_bug.cgi?id=10372 https://issues.dlang.org/show_bug.cgi?id=12180
Re: DIP56 - inlining
On Tuesday, 3 February 2015 at 23:23:35 UTC, deadalnix wrote: We have an attribute system, why make this a pragma ? Rationale is in the DIP: "These are not attributes because they should not affect the semantics of the function. In particular, the function signature must not be affected."
Re: sortUniq
On Saturday, 24 January 2015 at 14:53:02 UTC, Andrei Alexandrescu wrote: On 1/24/15 3:50 AM, Peter Alexander wrote: On Friday, 23 January 2015 at 18:22:08 UTC, zeljkog wrote: On 23.01.15 19:13, Andrei Alexandrescu wrote: On 1/23/15 10:05 AM, zeljkog wrote: On 23.01.15 18:48, H. S. Teoh via Digitalmars-d wrote: I think what he's trying to do is to call a function that returns a delegate, and use that delegate to instantiate the filter template. AFAIK I've never seen code like this before, and it looks like the compiler isn't prepared to handle this. Yes, I tried to use filter for unique, need closure. I think there are many applications for this pattern. Please post a complete snippet then. Thanks! -- Andrei

import std.stdio, std.algorithm;

auto unique()
{
    bool[int] c;
    return (int a) {
        if (a in c) return false;
        else {
            c[a] = true;
            return true;
        }
    };
}

void main()
{
    [1, 5, 5, 2, 1, 5, 6, 6].filter!(unique()).writeln;
}

auto f = unique();
[1, 5, 5, 2, 1, 5, 6, 6].filter!(f).writeln; // [1, 5, 2, 6]

Filter needs an alias, and you cannot alias an R-value (it has no symbol). Hmmm... we do allow rvalues sometimes (e.g. for strings). I think we could and should relax the rule to allow rvalues in this case, too. -- Andrei I was wrong. R-values work, as long as they are compile time constants. The problem here is that closures don't yet work in CTFE (as the error says). Pulling it out as a local variable works because the alias then binds to the symbol rather than the value.
Re: sortUniq
On Friday, 23 January 2015 at 18:22:08 UTC, zeljkog wrote: On 23.01.15 19:13, Andrei Alexandrescu wrote: On 1/23/15 10:05 AM, zeljkog wrote: On 23.01.15 18:48, H. S. Teoh via Digitalmars-d wrote: I think what he's trying to do is to call a function that returns a delegate, and use that delegate to instantiate the filter template. AFAIK I've never seen code like this before, and it looks like the compiler isn't prepared to handle this. Yes, I tried to use filter for unique, need closure. I think there are many applications for this pattern. Please post a complete snippet then. Thanks! -- Andrei

import std.stdio, std.algorithm;

auto unique()
{
    bool[int] c;
    return (int a) {
        if (a in c) return false;
        else {
            c[a] = true;
            return true;
        }
    };
}

void main()
{
    [1, 5, 5, 2, 1, 5, 6, 6].filter!(unique()).writeln;
}

auto f = unique();
[1, 5, 5, 2, 1, 5, 6, 6].filter!(f).writeln; // [1, 5, 2, 6]

Filter needs an alias, and you cannot alias an R-value (it has no symbol).
Re: Improving http://dlang.org/library/index.html
On Sunday, 11 January 2015 at 23:54:16 UTC, Andrei Alexandrescu wrote: On 1/11/15 3:38 PM, Robert burner Schadek wrote: what about making it multi column like on http://en.cppreference.com/w/ A nice possibility, though I do like the entity + blurb layout. -- Andrei Most of those blurbs add little value beyond what the name of the module already provides. I'd prefer if they were ditched and instead the modules were categorized into larger groups: core - std.algorithm, std.range, std.array, etc. io - std.file, std.csv, std.mmfile, etc. strings - std.string, std.uni, std.utf, etc. math - std.bigint, std.math, std.mathspecial, std.numeric, etc. etc. The purpose of that page (as I see it) is for people to find what they need quickly. I think categorization would be a better format to achieve that.
Re: Discussion on groupBy
On Saturday, 10 January 2015 at 20:19:14 UTC, Andrei Alexandrescu wrote: groupBy is an important primitive for relational algebra queries on data. Soon to follow are operators such as aggregate(), which is a sort of reduce() but operating on ranges of ranges. GroupBy is a very important range. It's one of the first ranges I wrote when starting to write D code (being unsatisfied with the existing "group" offering). I agree with separating out the non-equivalence-relation groupBy. Having both under the same name just confuses matters and makes for complex implementations.
Copy only frame pointer between objects of nested struct
I asked in D.learn, but didn't get a satisfactory answer. I think this may be unachievable in the current language. Consider: auto foo(T)(T a) { T b; // Error: cannot access frame pointer of main.X b.data[] = 1; return b; } void main() { struct X { this(int) {} int[4096] data; } foo(X()); } Note the error is because you cannot construct the main.X object without a frame pointer. You could do `T b = a` here to get a's frame pointer, but it would also copy all of a's data, which is expensive and unnecessary. Is there a way to only copy a's frame pointer into b? (Note: this is just an illustrative example, real problem here: https://issues.dlang.org/show_bug.cgi?id=13935)
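The workaround mentioned above (`T b = a`) does compile, at the cost of copying all of a's data along with the frame pointer; a minimal sketch, using `X(0)` to satisfy the constructor:

```d
auto foo(T)(T a)
{
    T b = a;       // copies a's frame pointer, but also all 16KB of data
    b.data[] = 1;
    return b;
}

void main()
{
    struct X { this(int) {} int[4096] data; }
    auto r = foo(X(0));
    assert(r.data[0] == 1 && r.data[4095] == 1);
}
```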
Re: Another init() bug, can we deprecate yet?
On Wednesday, 7 January 2015 at 23:31:30 UTC, H. S. Teoh via Digitalmars-d wrote: On Wed, Jan 07, 2015 at 10:03:00PM +, Peter Alexander via Digitalmars-d wrote: https://issues.dlang.org/show_bug.cgi?id=13806 For the lazy: BitArray has an init() method, which hides the property BitArray.init This, or something similar, appears every few months. Walter has said in the past that the ability to override init is a feature. As far as I can tell, no one is using it as a feature; it only seems to cause trouble. Can we just deprecate it? At the very least deprecate functions named init(). https://github.com/D-Programming-Language/phobos/pull/2854 Destroy! Thanks. Just to be clear, I'm advocating deprecating all user-defined init functions, not just BitArray (so we don't get into this situation again).
Another init() bug, can we deprecate yet?
https://issues.dlang.org/show_bug.cgi?id=13806 For the lazy: BitArray has an init() method, which hides the property BitArray.init This, or something similar, appears every few months. Walter has said in the past that the ability to override init is a feature. As far as I can tell, no one is using it as a feature; it only seems to cause trouble. Can we just deprecate it? At the very least deprecate functions named init().
Re: Constant template arguments
In C++, head const is stripped for IFTI (implicit function template instantiation), but we can't do that in general in D due to transitivity. I'd like for it to happen when it can though, particularly for scalar types.
Re: Phobos colour module?
On Thursday, 1 January 2015 at 06:38:41 UTC, Manu via Digitalmars-d wrote: I've been working on a pretty comprehensive module for dealing with colours in various formats and colour spaces and conversions between all of these. It seems like a hot area for duplicated effort, since anything that deals with multimedia will need this, and I haven't seen a really comprehensive implementation. Does it seem like something we should see added to phobos? I think it would be a nice addition, but might seem a bit lonely on its own, without an image library. Maybe just put it on code.dlang for now, and then add it together with an image library later?
Re: const Propagation
You need to overload on const, and also pass in a correctly typed function as the argument (you can't call a function with a mutable parameter with a const object). import std.stdio; class Hugo { public int x = 42; void blah(void function(Hugo h) f) { f(this); } // OVERLOAD void blah(void function(const Hugo h) f) const { f(this); } } void main() { Hugo hugo = new Hugo(); void function(Hugo h) f = function(Hugo h) { h.x = 99; }; hugo.blah(f); const Hugo inge = hugo; // CHANGE TYPE HERE void function(const Hugo h) g = function(const Hugo h) { writeln("foobar"); }; inge.blah(g); }
Re: http://wiki.dlang.org/DIP25
On Sunday, 28 December 2014 at 18:16:04 UTC, Andrei Alexandrescu wrote: Very little breakage I can think of. Ranges usually don't own their payload. I'm thinking more about higher order ranges, e.g. take, filter, cycle, retro; over a mutable range with ref front. Even if the underlying range (e.g. an array) has the inout, the higher order range will need the inout as well, so that it is propagated, no? auto ref foo(ref int x) { return x; } // non-ref due to lack of inout on x? "auto" has no meaning there. It does here: auto ref foo(auto ref int x) { return x; } This wouldn't compile anymore - inout is needed for x as well. Ah, yep. That's what I meant :-) Thanks for the clarification.
Re: http://wiki.dlang.org/DIP25
On Sunday, 28 December 2014 at 03:09:20 UTC, Andrei Alexandrescu wrote: Please comment: http://wiki.dlang.org/DIP25 This seems like it may be painful (in terms of breaking existing code): "Member functions of structs must qualify this with inout if they want to return a result by ref that won't outlive this." This breaks all ranges that return ref front, no? (assuming they aren't qualified inout, which appears to be the case in the majority of ranges in std.algorithm/range). Clarification: how does this DIP play with auto ref returns? Infer non-ref if not qualified inout? auto ref foo(ref int x) { return x; } // non-ref due to lack of inout on x?
Re: What's missing to make D2 feature complete?
On Saturday, 20 December 2014 at 17:40:06 UTC, Martin Nowak wrote: Just wondering what the general sentiment is. For me it's these 3 points. - tuple support (DIP32, maybe without pattern matching) - working import, protection and visibility rules (DIP22, 313, 314) - finishing non-GC memory management In my mind there are a few categories of outstanding issues. First, there are cases where the language just does not work as advertised. Imports are an example of this. Probably scope as well and maybe shared (although I'm not sure what the situation with that is). Second, there are cases where the language works as designed, but the design makes it difficult to get work done. For example, @nogc and exceptions, or const with templates (or const altogether). Order of conditional compilation needs to be defined (see deadalnix's DIP). And finally there's the things we would really like for D to be successful. Tuple support and memory management are examples of those. This category is essentially infinite. I really think the first two categories need to be solved before anything is frozen.
Re: Help with d_language subreddit on Reddit
On Friday, 5 December 2014 at 23:25:11 UTC, Walter Bright wrote: https://www.reddit.com/r/d_language/ It's the default, and is kinda boring. Compare with the rust subreddit: http://www.reddit.com/r/rust/ While not great, it's much better than ours. We don't need the subreddit. We have these forums. Rust has their own forum, but it's for implementers. Most of their discussions/announcements happen at reddit. That's why it is more active and maintained. We already have an active forum here for everything, so we don't need another one. No point splitting the community. Just leave the qznc bot there to cross-post announcements, but I don't think there's any value in trying to have, or promote, two separate forums.
Re: Testing lazy ranges in post-conditions
On Monday, 24 November 2014 at 14:20:02 UTC, bearophile wrote: Peter Alexander: Chunks.save should also be const, so result.save.{...} should work. But it doesn't. Should I have to file two bug reports (ERs) on iota and chunks? I suppose chunks should be inout, because you might want mutable chunks. You could file bug reports, but you can't really add const/inout manually in templates. The dependencies on the const-ness of template parameters make it unmanageable. You need it to be inferred. See: https://issues.dlang.org/show_bug.cgi?id=7521 https://issues.dlang.org/show_bug.cgi?id=8407
Re: Testing lazy ranges in post-conditions
On Monday, 24 November 2014 at 12:20:40 UTC, bearophile wrote: Peter Alexander: Should be able to do: assert(result.save.all!(x => x < 10)); But iota's save isn't const, so you can't (that's a bug). Mine was just an example of the general problem, another example: import std.range, std.algorithm; auto foo() out(result) { assert(result.all!(b => b.length == 2)); } body { auto a = new int[10]; return a.chunks(2); } void main() {} Chunks.save should also be const, so result.save.{...} should work. It becomes a real problem with input ranges, because you can't save them. That makes sense though, as there is no way to examine the result in a post-condition check without consuming it. That's just a fact of life and a limitation of trying to verify mutable data.
Re: Testing lazy ranges in post-conditions
Should be able to do: assert(result.save.all!(x => x < 10)); But iota's save isn't const, so you can't (that's a bug).
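Outside of contracts, the .save pattern under discussion works fine on forward ranges such as iota; a minimal sketch:

```d
import std.algorithm : all, equal;
import std.range : iota;

void main()
{
    auto result = iota(0, 10);
    // Check a saved copy so the check doesn't advance the original.
    assert(result.save.all!(x => x < 10));
    // The original range is still intact afterwards.
    assert(result.equal(iota(0, 10)));
}
```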
Re: Algorithms, term rewriting and compile time reflection
On Wednesday, 22 October 2014 at 11:10:35 UTC, Ola Fosheim Grøstad wrote: [snip] These kinds of optimizations are very difficult to achieve in a low level backend, but you really need them in order to do generic programming properly. A simple start would be to not provide term rewriting as a language feature, but rather define a vocabulary that is useful for phobos and hardcode term rewriting for that vocabulary. I think this is feasible. Term rewriting is very interesting, and I believe some work has been done for this in Haskell. I don't believe anything has been done with this propositions and inference approach you describe. I see a number of problems: First, annotating this, in the presence of templates, is very hard. Consider: auto f(alias g, T)(T x) { return g(x); } We cannot possibly annotate this function with any of propositions you described because we know nothing about g or T. Like purity and nothrow, we'd have to deduce these properties, but most escape deduction in all but the most trivial cases. Suppose we could deduce a large subset of useful propositions, how does the programmer know what has been deduced? How can I tell what has been deduced and applied without having to disassemble the code to see what's actually going on? And even if everything is deduced correctly, and I know what's deduced, what if it does a transformation that's undesirable? For example, you changed linear_search to binary_search. What if I knew the element was likely to be at the front and would prefer linear_search to binary_search? If you have any, I'd love to see some papers on this kind of work.
Re: Consistent bugs with dmd -O -inline in a large project
On Thursday, 16 October 2014 at 08:45:18 UTC, Chris wrote: I think there is no easy way of finding out where the optimization goes wrong. But should this happen at all, i.e. does it point to a flaw in my program or is it a compiler bug? I like to think it's the latter, after all the program works perfectly without -O. On the other hand, it's scary because I have no clue where to look for the offender. It could be either. Sometimes, if your program relies on undefined behaviour, enabling optimizations might be what uncovers the bug, manifesting as a crash. On the other hand, it could be just a compiler bug. It has happened several times to me with DMD, so it's not entirely unlikely. These things happen. Run Dustmite, reduce, and if you still think your program is right, file a bug against DMD.
UFCS in C++
Looks like Bjarne has proposed UFCS for C++ http://isocpp.org/files/papers/N4174.pdf No mention of D though...
Re: Worse is better?
On Friday, 10 October 2014 at 21:11:20 UTC, Ola Fosheim Grostad wrote: On Friday, 10 October 2014 at 09:00:17 UTC, Peter Alexander wrote: You can't have simple, expressive, and low level control. Why not? It's just something I believe from experience. The gist of my reasoning is that to get low level control you need to specify things. When those things are local and isolated, all is good, but often the things you specify bleed across interfaces and affect either all the implementations (making things more complex) or all the users (making things less expressive). For example, consider the current memory allocation/management debate. I cannot think of a possible way to handle this that simultaneously: (a) gives users full control over how every function allocates/manages memory (control). (b) makes the implementation of those functions easy (simple). (c) makes it easy to compose functions with different management policies (expressive). There are trade-offs on every axis. I'm sure we'll be able to find something reasonable, that maybe does a good job on each axis, but I don't think it's possible to get 10/10 on all of them. Maybe there's a way to do it, but if there is I imagine that language and programming experience is going to be vastly different from what we have now (in any language).
Re: Worse is better?
On Friday, 10 October 2014 at 00:36:45 UTC, deadalnix wrote: Is this the politically correct wy to say "we don't care about simplicity anymore!" ? Heh. I don't think so. We've just rebalanced our priorities. You can't have simple, expressive, and low level control. D1 was simple but lacking in expressiveness and control. D2 had traded some simplicity in to improve the situation. I think it has been worthwhile (modulo the inevitable hiccups and warts).
Re: Worse is better?
On Wednesday, 8 October 2014 at 19:44:04 UTC, Joakim wrote: What does this have to do with D? Well, the phenomenon he describes probably has a big effect on D's adoption even today, as he was talking about the spread of programming languages, ones we use to this day. Certainly worth thinking about, as we move forward with building D. That ship has sailed for D. It is no longer a simple language. It now tries to do The Right Thing. I found the turning point: https://github.com/D-Programming-Language/dlang.org/commit/67e5f0d8b59aa0ce26b2be9bd79c93d1127b2db6#diff-b6ac8bc22fdbb33f7266c9422db97c2bL212 :-)
Re: On Phobos GC hunt
On Wednesday, 8 October 2014 at 20:15:51 UTC, Steven Schveighoffer wrote: On 10/8/14 4:10 PM, Andrei Alexandrescu wrote: On 10/8/14, 1:01 PM, Andrei Alexandrescu wrote: That's a bummer. Can we get the compiler to remove the "if (__ctfe)" code after semantic checking? Or would "static if (__ctfe)" work? -- Andrei Please don't ask me to explain why, because I still don't know. But _ctfe is a normal runtime variable :) It has been explained to me before, why it has to be a runtime variable. I think Don knows the answer. Well, the contents of the static if expression have to be evaluated at compile time, so static if (__ctfe) would always be true. Also, if it were to somehow work as imagined then you'd have nonsensical things like this: static if (__ctfe) class Wat {} auto foo() { static if (__ctfe) return new Wat(); return null; } static wat = foo(); wat now has a type at runtime that only exists at compile time.
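For reference, the supported runtime form, plain if (__ctfe), lets a single function body pick a CTFE-safe path while sharing everything else; a minimal sketch:

```d
size_t count(const(char)[] s)
{
    if (__ctfe)
    {
        // CTFE branch: a plain loop the compile-time interpreter can run.
        size_t n;
        foreach (c; s) ++n;
        return n;
    }
    // Runtime branch: free to use anything, e.g. intrinsics.
    return s.length;
}

static assert(count("hello") == 5); // forces the CTFE branch

void main()
{
    assert(count("hello") == 5);    // takes the runtime branch
}
```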
Re: On Phobos GC hunt
On Tuesday, 7 October 2014 at 20:13:32 UTC, Jacob Carlborg wrote: I didn't look at any source code to see what "new" is actually allocating, for example. I did some random sampling, and it's 90% exceptions, with the occasional array allocation. I noticed that a lot of the ~ and ~= complaints are in code that only ever runs at compile time (generating strings for mixin). I wonder if there's any way we can silence these false positives.
Re: On Phobos GC hunt
On Tuesday, 7 October 2014 at 16:23:19 UTC, grm wrote: 2.) There seems to be a problem with repeated alarms: When viewing the page source, this link shows up numerous times. See https://github.com/D-Programming-Language//phobos/blob/d4d98124ab6cbef7097025a7cfd1161d1963c87e/std/conv.d#L688 That's because of multiple template instantiations of the same function. These should probably be filtered for this use case.
Re: [Semi OT] Language for Game Development talk
On Wednesday, 1 October 2014 at 14:16:38 UTC, bearophile wrote: Max Klyga: https://www.youtube.com/watch?v=TH9VCN6UkyQ A third talk (from another person) about related matters: https://www.youtube.com/watch?v=rX0ItVEVjHc He doesn't use RTTI, exceptions, multiple inheritance, STL, templates, and lot of other C++ stuff. On the other hand he writes data-oriented code manually, the compiler and language give him only very limited help, and the code he writes looks twiddly and bug-prone. So why aren't they designing a language without most of the C++ stuff they don't use, but with features that help them write the data-oriented code they need? Probably because C++ is good enough and already has mature infrastructure.
Re: RFC: moving forward with @nogc Phobos
On Tuesday, 30 September 2014 at 08:34:26 UTC, Johannes Pfau wrote: What if I don't want automated memory _management_? What if I want a function to use a stack buffer? Or if I want to free manually? Agreed. This is the common case we need to solve for, but this is memory allocation, not management. I'm not sure where manual management fits into Andrei's scheme. Andrei, could you give an example of, e.g. how toStringz would work with a stack buffer in your proposed scheme? Another thought: if we use a template parameter, what's the story for virtual functions (e.g. Object.toString)? They can't be templated.
Re: Creeping Bloat in Phobos
On Sunday, 28 September 2014 at 00:13:59 UTC, Andrei Alexandrescu wrote: On 9/27/14, 3:40 PM, H. S. Teoh via Digitalmars-d wrote: If we can get Andrei on board, I'm all for killing off autodecoding. That's rather vague; it's unclear what would replace it. -- Andrei No autodecoding ;-) Specifically: 1. ref T front(T[] r) always returns r[0] 2. popFront(ref T[] r) always does { ++r.ptr; --r.length; } 3. Narrow string will be hasLength, hasSlicing, isRandomAccessRange (i.e. they are just like any other array). Also: 4. Disallow implicit conversions, comparisons, or any operation among char, wchar, dchar. This makes things like "foo".find('π') compile time errors (or better, errors until we specialize it to do "foo".find("π"), as it should) 5. Provide byCodePoint for narrow strings (although I suspect this will be rarely used). The argument is as follows: * First, this is a hell of a lot simpler for the implementation. * People rarely ever search for single, non-ASCII characters in strings, and #4 makes it an error if they do (until we specialize to make it work). * Searching, comparison, joining, and splitting functions will be fast and correct by default. One possible counter argument is that this makes it easier to corrupt strings (since you could, e.g. insert a substring into the middle of a multi-byte code point). To that I say that it's unlikely. When inserting into a string, you're either doing it at the front or back (which is safe), or to some point that you've found by some other means (e.g. using find). I can't imagine a scenario where you could find a point in the middle of a string, that is in the middle of a code point. Of course, I'd probably say this change isn't practical right now, but this is how I'd do things if I were to start over.
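For context, the code unit vs. code point split being argued about here is easy to demonstrate with today's auto-decoding behaviour; a minimal sketch:

```d
import std.range : walkLength;

void main()
{
    string s = "πd";            // 'π' is two UTF-8 code units, 'd' is one
    assert(s.length == 3);      // array view: code units
    assert(s.walkLength == 2);  // range view: auto-decoded code points
}
```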
Re: Messaging
On Sunday, 28 September 2014 at 00:58:00 UTC, Walter Bright wrote: On 9/27/2014 4:27 PM, Peter Alexander wrote: I've now filed a bug. https://issues.dlang.org/show_bug.cgi?id=13547 Thanks for filing the bug report. I was going to raise its priority, and found you'd already done so! Yeah, I consider anything that's actively, demonstrably, and unnecessarily driving people away to be high priority. Any takers? Andrei, wanna put a bounty on it? I don't think a bounty is necessary since it should be fairly easy. Bounties are for motivating people to do difficult things. If no one has taken it by the time I'm done with the benchmark shootout stuff then I'll do it.
Messaging
Just had an unfortunate exchange on Twitter https://twitter.com/bitshifternz/status/515998608009601024 Him: "isn't D garbage collected? That would make it a non-starter for me." Me: "it's optional. malloc/free are available and we'll have allocators soon that can hook into std lib." Him: "if that's the case the D website does a poor job of spelling it out. Nothing in FAQ or here http://dlang.org/garbage.html" Him: "'D is a fully garbage collected language' doesn't sound very optional!" Him: "now I'm going to have to look at D properly!" I've now filed a bug. https://issues.dlang.org/show_bug.cgi?id=13547 I've literally had 3 or 4 conversations in just the past few days, both online and in person with people that believe D is only garbage collected. We have to fix this perception. It's literally scaring people away from even looking at anything else the language has to offer.
Re: Creeping Bloat in Phobos
On Saturday, 27 September 2014 at 23:04:00 UTC, Walter Bright wrote: On 9/27/2014 3:52 PM, bearophile wrote: There is no char auto decoding in this program, right? Notice the calls to autodecoding 'front' in the assembler dump. I think you're imagining things Walter! There's no auto-decoding in my example, it's just adding up the lengths.
Re: Creeping Bloat in Phobos
On Saturday, 27 September 2014 at 20:57:53 UTC, Walter Bright wrote: From time to time, I take a break from bugs and enhancements and just look at what some piece of code is actually doing. Sometimes, I'm appalled. Me too, and yes it can be appalling. It's pretty bad for even simple range chains, e.g. import std.algorithm, std.stdio; int main(string[] args) { return cast(int)args.map!("a.length").reduce!"a+b"(); } Here's what LDC produces (with -O -inline -release -noboundscheck) __Dmain: 00011480pushq %r15 00011482pushq %r14 00011484pushq %rbx 00011485movq%rsi, %rbx 00011488movq%rdi, %r14 0001148b callq 0x10006df10 ## symbol stub for: __D3std5array14__T5emptyTAyaZ5emptyFNaNbNdNfxAAyaZb 00011490xorb$0x1, %al 00011492movzbl %al, %r9d 00011496 leaq _.str12(%rip), %rdx ## literal pool for: "/Users/pja/ldc2-0.14.0-osx-x86_64/bin/../import/std/algorithm.d" 0001149d movq 0xcbd2c(%rip), %r8 ## literal pool symbol address: __D3std9algorithm24__T6reduceVAyaa3_612b62Z124__T6reduceTS3std9algorithm85__T9MapResultS633std10functional36__T8unaryFunVAyaa8_612e6c656e677468Z8unaryFunTAAyaZ9MapResultZ6reduceFNaNfS3std9algorithm85__T 000114a4movl$0x2dd, %edi 000114a9movl$0x3f, %esi 000114aexorl%ecx, %ecx 000114b0 callq 0x10006e0a2 ## symbol stub for: __D3std9exception14__T7enforceTbZ7enforceFNaNfbLAxaAyamZb 000114b5movq(%rbx), %r15 000114b8leaq0x10(%rbx), %rsi 000114bcleaq-0x1(%r14), %rdi 000114c0 callq 0x10006df10 ## symbol stub for: __D3std5array14__T5emptyTAyaZ5emptyFNaNbNdNfxAAyaZb 000114c5testb $0x1, %al 000114c7jne 0x114fa 000114c9addq$-0x2, %r14 000114cdaddq$0x20, %rbx 000114d1nopw%cs:(%rax,%rax) 000114e0addq-0x10(%rbx), %r15 000114e4movq%r14, %rdi 000114e7movq%rbx, %rsi 000114ea callq 0x10006df10 ## symbol stub for: __D3std5array14__T5emptyTAyaZ5emptyFNaNbNdNfxAAyaZb 000114efdecq%r14 000114f2addq$0x10, %rbx 000114f6testb $0x1, %al 000114f8je 0x114e0 000114famovl%r15d, %eax 000114fdpopq%rbx 000114fepopq%r14 00011500popq%r15 00011502ret and for: import std.algorithm, std.stdio; 
int main(string[] args) { int r = 0; foreach (i; 0..args.length) r += args[i].length; return r; } __Dmain: 000115c0xorl%eax, %eax 000115c2testq %rdi, %rdi 000115c5je 0x115de 000115c7nopw(%rax,%rax) 000115d0movl%eax, %eax 000115d2addq(%rsi), %rax 000115d5addq$0x10, %rsi 000115d9decq%rdi 000115dcjne 0x115d0 000115deret (and sorry, don't even bother looking at what dmd does...) I'm not complaining about LDC here (although I'm surprised array.empty isn't inlined). The way ranges are formulated make them difficult to optimize. I think there's things we can do here in the library. Maybe I'll write up something about that at some point. I think the takeaway here is that people should be aware of (a) what kind of instructions their code is generating, (b) what kind of instructions their code SHOULD be generating, and (c) what is practically possible for present-day compilers. Like you say, it helps to look at the assembled code once in a while to get a feel for this kind of thing. Modern compilers are good, but they aren't magic.
Re: [Semi OT] Language for Game Development talk
On Thursday, 25 September 2014 at 22:52:39 UTC, Sean Kelly wrote: On Saturday, 20 September 2014 at 02:25:31 UTC, po wrote: As a fellow game dev: I don't agree with him about RAII, I find it useful He kind of has a point about exceptions, I'm not big on them ... He goes on about "freeing free'd memory", this is never something that would happen in modern C++, so he is basically proposing an inferior language design. It happens when you don't use RAII. Sounds like he should review his concepts. This is really missing the point. He knows RAII is useful and he knows RAII solves freeing free'd memory. Maybe it's time to re-watch the video.
Re: [Semi OT] Language for Game Development talk
On Thursday, 25 September 2014 at 13:57:17 UTC, Andrei Alexandrescu wrote: On 9/25/14, 4:33 AM, Kagamin wrote: On Thursday, 25 September 2014 at 09:16:27 UTC, Walter Bright wrote: So how do you tell it to call myfree(p) instead of free(p) ? Maybe stock malloc/free is enough for him? That kind of commitment shouldn't be baked into the language. That's why RAII and scope are better than his notation. -- Andrei Only if you accept "The Right Way" philosophy. A "Worse is Better" person may disagree. There is no "better", it's all tradeoffs.
Re: [Semi OT] Language for Game Development talk
On Thursday, 25 September 2014 at 09:58:31 UTC, Walter Bright wrote: On 9/25/2014 2:27 AM, Walter Bright wrote: Looks like he reinvented D dynamic arrays at 1:02. Sigh. At 1:16 he gives credit to Go for the -> and . becomes . thing. Says its great. And he wants: And ... ... he's specified D! Almost, but he also wants a language with the "Worse is Better" philosophy. D does not have this, and I don't think we want it.
Re: Local imports hide local symbols
On Tuesday, 23 September 2014 at 18:52:13 UTC, H. S. Teoh via Digitalmars-d wrote: 1) Change lookup rules so that symbols pulled in by local import are found last. Walter has stated that he disagrees with this approach because it complicates symbol lookup rules. This.
Re: RFC: reference counted Throwable
On Sunday, 21 September 2014 at 19:36:01 UTC, Nordlöw wrote: On Friday, 19 September 2014 at 15:32:38 UTC, Andrei Alexandrescu wrote: Please chime in with thoughts. Why don't we all focus our efforts on upgrading the current GC to a state-of-the-art GC making use of D's strongly typed memory model before discussing these things? GC improvements are critical, but... "As discussed, having exception objects being GC-allocated is clearly a large liability that we need to address. They prevent otherwise careful functions from being @nogc so they affect even apps that otherwise would be okay with a little litter here and there." No improvements to the GC can fix this. @nogc needs to be usable, whether you are a GC fan or not.
Re: Identifier resolution, the great implementation defined mess.
On Sunday, 21 September 2014 at 20:05:57 UTC, Walter Bright wrote: I don't know what mental model people have for how lookups work, but the above algorithm is how it actually works. My mental model for local imports is "it's the same as module level imports, except the symbols are only available in this scope". I wouldn't expect a module symbol to shadow a local symbol.
Re: Identifier resolution, the great implementation defined mess.
On Wednesday, 17 September 2014 at 22:42:27 UTC, deadalnix wrote: On Wednesday, 17 September 2014 at 16:25:57 UTC, Dicebot wrote: I had the impression that the general rule is "most inner scope takes priority" (with base classes being one "imaginary" scope above the current one). Are there any actual inconsistencies you have noticed, or is it just a matter of a lacking matching spec entry? There are no inconsistencies because there is no spec. Maybe in this case it is best to just look at what dmd does and add that to the spec (assuming what dmd does is sound, and makes sense).
Re: Increasing D's visibility
On Wednesday, 17 September 2014 at 18:30:37 UTC, David Nadlinger wrote: On Wednesday, 17 September 2014 at 14:59:48 UTC, Andrei Alexandrescu wrote: Awesome. Suggestion in order to leverage crowdsourcing: first focus on setting up the test bed such that adding benchmarks is easy. Then you and others can add a bunch of benchmarks. On a somewhat related note, I've been working on a CI system to keep tabs on the compile-time/run-time performance, memory usage and file size for our compilers. It's strictly geared towards executing the same test case on different compiler configurations, though, so it doesn't really overlap with what is proposed here. Right now, it's continually building DMD/GDC/LDC from Git and measuring some 40 mostly small benchmarks, but I need to improve the web UI a lot before it is ready for public consumption. Just thought I would mention it here to avoid scope creep in what Peter Alexander (and others) might be working on. That sounds great. I'm not planning anything grand with this. I'm just going to get the already existing benchmark framework working with dmd, ldc, and gdc; and put it on github so people can contribute implementations. I imagine what you have could probably be extended to do comparisons with other languages, but I think there's still value in getting these benchmarks working because they are so well known and respected.
Re: Increasing D's visibility
On Wednesday, 17 September 2014 at 14:59:48 UTC, Andrei Alexandrescu wrote: Awesome. Suggestion in order to leverage crowdsourcing: first focus on setting up the test bed such that adding benchmarks is easy. Then you and others can add a bunch of benchmarks. Yep, sounds like a plan.
Re: Increasing D's visibility
On Wednesday, 17 September 2014 at 06:59:40 UTC, bearophile wrote: Andrei Alexandrescu: https://issues.dlang.org/show_bug.cgi?id=13487 If the upload conditions and site are sufficiently good I am willing to offer some implementations in D and to keep them updated. I suggest to add two D versions for some benchmarks, one that shows short high level code, and one that shows longer hairier fast code. In some cases I'd even like to show a third "safe" version (that tries to be more correct), but most Shootout/ComputerGame benchmarks are not very fit for this (you can see some examples of this on Rosettacode). This is what I intend to do (time permitting) * Direct translation from the C++ version. * High-level version using standard library, particularly ranges (this should be @safe!) * Low-level hand optimized using core.simd (when applicable). * CTFE version! (I imagine this will choke on most benchmarks though...) Of course, I'll test across dmd, gdc, and ldc2. Aside from being PR to show the speed of D, hopefully these benchmarks will serve as test beds for potential optimizations. If anyone already has translations of the benchmark programs then please send them to me (or just reply to the bug with an attachment).
Re: Increasing D's visibility
On Tuesday, 16 September 2014 at 22:26:48 UTC, Isaac Gouy wrote: On Tuesday, 16 September 2014 at 21:04:59 UTC, Peter Alexander wrote: -snip- I'll take a stab at it. Will give me something to do on my commute :-) (assuming his scripts work, or can be made to work on OS X). It'll be interesting to see which linux stuff is missing: -- without libgtop2 you could still get cpu and elapsed times (but not resident memory or CPU load) -- without highlight you could still get gzip source code size (but the source would include comments and whitespace) When you have questions, please ask in the benchmarks game discussion forum -- http://benchmarksgame.alioth.debian.org/play.html#misc Thanks Isaac. I think we can live without the resident memory, CPU load, and source size for now. I'll focus on getting some CPU time benchmarks first.
Re: Increasing D's visibility
On Tuesday, 16 September 2014 at 17:32:39 UTC, Andrei Alexandrescu wrote: On 9/16/14, 9:44 AM, Kagamin wrote: I'd say, run the damned benchmark for C and D. C would setup performance scale. What would be interesting is to see, how compiler switches affect performance, especially assert vs release mode and bounds checking on/off. I agree that C and D should be enough. Perhaps C++ and one more near the top (Ada, Fortran) would be good for context. Who wants to do this? Isaac made his setup publicly available. I'll take a stab at it. Will give me something to do on my commute :-) (assuming his scripts work, or can be made to work on OS X).
Re: Stroustrup's slides about c++11 and c++14
On Sunday, 14 September 2014 at 09:42:28 UTC, deadalnix wrote: • Specifies how things are done (implementation) I'm not sure what this one means precisely. The way I interpret it is that (for template constraints) they are quite clumsy for specifying preferences between overloads, e.g.
void foo(R)(R r) if (isInputRange!R && !isRandomAccessRange!R && !isSomeString!R);
void foo(R)(R r) if (isRandomAccessRange!R && !isSomeString!R);
void foo(R)(R r) if (isSomeString!R);
It would be nice to have something like this instead:
void foo(InputRange R)(R r);
void foo(RandomAccessRange R)(R r);
void foo(SomeString R)(R r);
along with a way to specify that one "concept" is more refined than another (either explicitly or implicitly).
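The clumsiness can be seen in a runnable sketch of the overload set above (`category` is an illustrative name, and the return values are just labels): each more specific overload must manually exclude itself from the more general constraints, because there is no refinement relation between them.

```d
import std.algorithm : filter;
import std.range : isInputRange, isRandomAccessRange;
import std.traits : isSomeString;

// the most general overload must explicitly reject everything more specific
string category(R)(R r)
    if (isInputRange!R && !isRandomAccessRange!R && !isSomeString!R)
{
    return "input range";
}

string category(R)(R r) if (isRandomAccessRange!R && !isSomeString!R)
{
    return "random access";
}

string category(R)(R r) if (isSomeString!R)
{
    return "string";
}

void main()
{
    int[] a = [1, 2, 3];
    assert(category(a) == "random access");                    // arrays are random access
    assert(category(a.filter!(x => x > 1)) == "input range");  // filter yields no random access
    assert(category("hello") == "string");
}
```

With concept refinement, the compiler could pick the most refined matching overload and the negated conditions would disappear.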
Re: Stroustrup's slides about c++11 and c++14
On Saturday, 13 September 2014 at 22:25:57 UTC, Walter Bright wrote: Yeah, well, we have many years of experience with "static if" and no apocalypse has yet happened. Well, we have yet to define "static if" when it comes to tricky cases, i.e. cases where static ifs and mixins have interdependencies. http://wiki.dlang.org/DIP31 It would be good to have a resolution on this.
Re: alias two froms
On Thursday, 11 September 2014 at 19:08:27 UTC, eles wrote: See this: http://forum.dlang.org/post/kfdkkwikrfvaukhct...@forum.dlang.org "alias" supports two syntaxes, one of them specifically to address writing things like alias A this. That's inconsistent. I agree not the most urgent thing in the world, but while the fixing things happens (see the @property), why not address this too? So? Deprecate the old syntax? This was discussed recently. The problem is that the new syntax is only a few versions old, so deprecating the old syntax means breaking all D code that's more than a year old. If D had no existing customers then yeah, we'd remove it, but I think it's too early to start deprecation.
Re: Which patches/mods exists for current versions of the DMD parser?
On Monday, 8 September 2014 at 15:25:11 UTC, Timon Gehr wrote: On 09/08/2014 10:51 AM, Ola Fosheim Grøstad wrote: What kind of syntactical sugar do you feel is missing in D? int square(int x)=>x*x; Unfortunately we still can't just write: alias square = x => x * x; but you can do this: alias id(alias A) = A; alias square = id!(x => x * x);
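A self-contained check that the identity-template workaround behaves like the desired direct alias:

```d
// `alias square = x => x * x;` is rejected because a lambda literal can't
// be aliased directly, but routing it through an identity template works:
alias id(alias A) = A;
alias square = id!(x => x * x);

void main()
{
    assert(square(4) == 16);     // instantiates the lambda with int
    assert(square(2.5) == 6.25); // and independently with double
}
```

Since `square` aliases an uninstantiated lambda template, it still works generically across argument types, just like the `=>` shorthand would.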
Re: [Article] D's Garbage Collector Problem
On Wednesday, 10 September 2014 at 20:01:40 UTC, Joakim wrote: Orvid, where's that new GC when we need it? ;) Andrei posted this in the reddit thread: "OK that does it. I'm going to redesign and implement D's tracing garbage collector using the core allocator I wrote a short while ago. Fatefully Walter Bright and I were talking about the GC over dinner last night (I'm at cppcon in Seattle!) and I figured if this is what matters for D, I'll have to do it. And it does matter. It's actually not that difficult especially given that we have a solid allocator backend."
Re: Destroying structs (literally)
On Friday, 29 August 2014 at 02:21:07 UTC, Andrei Alexandrescu wrote: Dear community, are you ready for this? Yes. This is a significant change of behavior. Should we provide a temporary flag or attribute to disable it? I don't think so, it will just hinder adoption. If people don't want it they can stay with 2.066.
Re: Relaxing the definition of isSomeString and isNarrowString
On Sunday, 24 August 2014 at 01:06:31 UTC, Andrei Alexandrescu wrote: I'm thinking of relaxing the definitions to all types that fulfill the following requirements: * are random access ranges * element type is some character * offer .ptr as a @system property that offers a pointer to the first character Hmm, you also need .length, and it needs to return the length of the encoding (in code units), not the number of code points. There might be custom string types where .length returns the number of code points, but where .ptr points to code units. Narrow-string-optimized functions will also be looking for an opSlice that is indexed by code units.
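The proposed requirements can be sketched as a trait (`isStringLike` is a hypothetical name, not a Phobos trait). Note one wrinkle: because narrow strings autodecode, `char[]` itself would fail the `isRandomAccessRange` check, which is part of what makes the relaxation tricky; `dchar[]` passes cleanly.

```d
import std.range : ElementType, hasLength, isRandomAccessRange;
import std.traits : isSomeChar;

// a sketch of the relaxed definition, including the .length caveat
enum isStringLike(T) = isRandomAccessRange!T
    && isSomeChar!(ElementType!T)     // element type is some character
    && hasLength!T                    // must count code units, not code points
    && is(typeof(T.init.ptr));        // pointer to the first code unit

void main()
{
    static assert(isStringLike!(dchar[])); // random access, char element, has .ptr
    static assert(!isStringLike!(int[]));  // element type is not a character
}
```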
Re: Interfacing D with C and C++
Welcome!
1. What is the current support for calling C/C++ free functions from D? What level of mangling is supported? What data types can be passed without translation from D to C/C++?
3. How can a C++ object be used from D? Can C++ methods be called from D? The question applies to value types - no virtuals - and polymorphic types with virtuals, inheritance etc. And of course simple C structs.
5. How about the other way? Can a C/C++ function call a D function?
http://dlang.org/cpp_interface.html
2. How about template functions? Is it possible to call a C++ template function from D?
4. How about template objects? One issue is that many C++ interfaces pass std::string and std::map<..., ...> as parameters. How feasible is it to manipulate such objects in D?
Not yet. http://forum.dlang.org/thread/lslofn$2iro$1...@digitalmars.com
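The free-function mangling mechanism can be shown self-contained in D (`cppAdd` and `Point` are hypothetical names): `extern (C++)` on a definition gives it C++ mangling, and an `extern (C++)` declaration in another module links against it exactly as it would against a real C++ free function compiled with a C++ compiler.

```d
// would normally be declared here and defined in a C++ object file:
//   int cppAdd(int a, int b) { return a + b; }
extern (C++) int cppAdd(int a, int b) { return a + b; }

// plain C structs map directly to D structs with matching layout
extern (C) struct Point { int x, y; }

void main()
{
    assert(cppAdd(2, 3) == 5);
    auto p = Point(4, 5);
    assert(p.x + p.y == 9);
}
```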
Re: auto ref deduction and common type deduction inconsistency
On Thursday, 21 August 2014 at 05:24:13 UTC, Artur Skawina via Digitalmars-d wrote: While D's `ref` is a hack, it's /already/ part of the function type/signature. The return type of a function is /already/ (ie in the D dialects supported by recent frontend releases) determined from *all* returned expressions. What would be the advantage of propagating/inferring only the type, but not the lvalueness?... I think I understand the issue better now. D doesn't always deduce a common return type, e.g.
class A {} class B {}
auto foo() { return new A(); return new B(); }
This fails to compile with "mismatched function return type", even though it could easily return Object. However, it seems to do some deduction of sorts with integral types, e.g. this deduces a return type of double:
auto foo() { return 0; return 0.0; return 0UL; }
I'm not sure what logic it uses to do common type deduction; I haven't investigated fully. The problem comes with recursion, which we don't handle at the moment for auto or auto ref functions, but handling that becomes much easier when you just assume the return type is the return type of the first return statement, so I see the value in the described approach.
Re: auto ref deduction and common type deduction inconsistency
On Wednesday, 20 August 2014 at 14:52:59 UTC, ketmar via Digitalmars-d wrote: On Wed, 20 Aug 2014 14:44:40 + Peter Alexander via Digitalmars-d wrote: Well, the return type is already the common type of all return paths no, it's not. the return type will be taken from the first return statement in the code.
auto foo() { if (1) return 1; return 2.0; }
This returns double. Try it for yourself. That doesn't help at all. I want return by ref when possible, not always return by value. If I wanted return by value, I'd just return by value!! you can't return ref and non-ref simultaneously from one function. Of course, what I want is: 1. If both returns are lvalues, return by ref. 2. Otherwise, return by rvalue (regardless of whether one is an lvalue).
Re: auto ref deduction and common type deduction inconsistency
On Tuesday, 19 August 2014 at 23:56:12 UTC, ketmar via Digitalmars-d wrote: first: compilation speed. compiler can stop looking at function just after the first 'return'. No, it still has to check the other returns for errors anyway. second: it's easier for a human to determine the actual return type this way. Well, the return type is already the common type of all return paths, so you need to look anyway. This is just about whether the return is by ref or by value. In any case, I'd argue correct semantics are preferable to a slight convenience when reading. just add something like "if (0) return 42;" to foo(). compiler will eliminate dead code, but will use 'return 42' to determine the function return type. That doesn't help at all. I want return by ref when possible, not always return by value. If I wanted return by value, I'd just return by value!!
auto ref deduction and common type deduction inconsistency
Consider these two functions:
auto ref foo(ref int x) { if (condition) return x; return 3; }
auto ref bar(ref int x) { return condition ? x : 3; }
At first glance they appear to be equivalent; however, foo is a compile-time error ("constant 3 is not an lvalue") while bar compiles fine and returns an rvalue int. The rule in the spec is: "The lexically first ReturnStatement determines the ref-ness of [an auto ref] function". Why is this? I think it would be more consistent and convenient to be: "An auto ref function returns by ref if all return paths return an lvalue, else it returns by value". Am I missing something? I don't see why foo should be rejected at compile time when it can happily return by value. It is especially problematic in generic code where you opportunistically want to return by ref when possible, e.g.:
auto ref f(alias g, alias h)() { if (condition) return g(); return h(); }
If g returns by ref while h returns by value then this fails to instantiate. It would be nice if it just returned by value (as return condition ? g() : h() would).
Re: FOSDEM'15 - let us propose a D dev room!!!
On Tuesday, 19 August 2014 at 14:40:44 UTC, Andrei Alexandrescu wrote: If there's a strong D community in Europe I could look into holding the next DConf at Facebook London. A kind of "proof" would be awesome. -- Andrei I like this idea :-)
Re: Why does D rely on a GC?
On Monday, 18 August 2014 at 12:06:27 UTC, Kagamin wrote: On Monday, 18 August 2014 at 10:01:59 UTC, maik klein wrote: Does a GC still have advantages over heap allocations that do not need to be reference counted such as the unique_ptr in c++? Isn't unique_ptr unique? What to do when the object is non-unique? Yes, unique_ptr is unique :-) It is not reference counted -- it just destroys the owned object when it goes out of scope. The neat thing about unique_ptrs is that you can move them around, transferring ownership. If the object is non-unique, then typically C++ programmers will use shared_ptr (+ weak_ptr). I'm not sure what the status of std.typecons.Unique is. Last I heard it had some issues, but I haven't tried it much myself.
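The two key unique_ptr behaviours (destruction at scope exit, movable but not copyable) can be sketched in D; MyUnique is a hypothetical minimal type, not std.typecons.Unique:

```d
import core.stdc.stdlib : free, malloc;

struct MyUnique(T)
{
    private T* p;

    static MyUnique make(T value)
    {
        auto u = MyUnique(cast(T*) malloc(T.sizeof));
        *u.p = value;
        return u;
    }

    @disable this(this);         // no copies: ownership is unique

    ~this()                      // owned object destroyed at scope exit
    {
        if (p) free(p);
    }

    // transfer ownership, analogous to std::move on a unique_ptr
    MyUnique release()
    {
        auto r = MyUnique(p);
        p = null;
        return r;
    }

    ref T get() { return *p; }
}

void main()
{
    auto a = MyUnique!int.make(42);
    assert(a.get == 42);
    auto b = a.release;          // a no longer owns the int
    assert(b.get == 42);
}                                // b's destructor frees the allocation once
```

Disabling the postblit is what gives the "unique" guarantee: any attempt to copy a MyUnique is a compile-time error, so ownership can only move.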
Re: C++'s std::rotate
On Monday, 11 August 2014 at 03:29:56 UTC, Andrei Alexandrescu wrote: [...] can be implemented generically for ranges that offer front as a reference: bool sameFront(R1, R2)(R1 r1, R2 r2) { return &r1.front == &r2.front; } This doesn't work for ranges that visit the same element twice, e.g. cycle(arr).take(arr.length + 1) or [0, 0].map!(i => arr[i]). I suspect most ranges will have to implement the sameFront primitive manually, usually forwarding to the underlying range. Related: most mutating algorithms won't work for these kinds of ranges, as we usually presume lvalue ranges never visit the same lvalue twice. Perhaps this needs to be mentioned on the affected algorithms?
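The aliasing in the cycle example can be shown concretely. A sketch, assuming Cycle and Take propagate the underlying array's ref front: two ranges at "different" positions nevertheless see the same lvalue, which is exactly what breaks an address-based sameFront.

```d
import std.range : cycle, popFrontN, take;

void main()
{
    auto arr = [1, 2, 3];
    auto r1 = arr.cycle.take(arr.length + 1);
    auto r2 = r1.save;
    r2.popFrontN(3);         // r2 has wrapped around back to arr[0]
    r2.front = 99;           // writing through one position...
    assert(r1.front == 99);  // ...is visible at the "other" position
}
```

An address-based sameFront would report these two ranges as equal here even though they are logically at different positions, and a mutating algorithm that assumes distinct lvalues would silently corrupt its own input.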
Re: C++ template name mangling
On Saturday, 16 August 2014 at 02:23:45 UTC, Tofu Ninja wrote: Would this only be usable for templates already instantiated in the C++ object file you were linking with? Yes, it has to be. You would need to use C++'s explicit template instantiation to create an object file with all the requisite symbols.