Re: Escaping the Tyranny of the GC: std.rcstring, first blood
On 23 September 2014 16:19, deadalnix via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On Tuesday, 23 September 2014 at 03:03:49 UTC, Manu via Digitalmars-d wrote:
>> I still think most of those users would accept RC instead of GC. Why not
>> support RC in the language, and make all of this library noise redundant?
>> Library RC can't really optimise well, RC requires language support to
>> elide ref fiddling.
>
> I think a library solution + intrinsic for increment/decrement (so they
> can be better optimized) would be the best option.

Right, that's pretty much how I imagined it too. Like ranges, where foreach makes implicit calls to contractual methods, there would also be a contract for refcounted objects, and the compiler would emit implicit calls to inc/dec if they exist. That should eliminate 'RefCounted'; you would only need to provide opInc()/opDec(), and the rc fiddling calls would be generated automatically. Then we can preserve the type of things, rather than obscuring them in layers of wrapper templates...
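A rough sketch of what the proposed contract might look like. Note that opInc()/opDec() do not exist in D; the operator names follow Manu's proposal, and the compiler-inserted calls are written out by hand here purely for illustration:

```d
// Hypothetical: a refcounting "contract" analogous to the range contract
// that foreach relies on. opInc/opDec are the proposed names, not real
// D operators; today the calls below would have to be written manually.
struct RCData
{
    int* payload;
    size_t* count;

    void opInc() { if (count) ++*count; }
    void opDec()
    {
        if (count && --*count == 0)
        {
            import core.stdc.stdlib : free;
            free(payload);
            free(count);
        }
    }
}

void consumer(RCData d)
{
    d.opInc();              // the compiler would insert this on copy-in...
    scope(exit) d.opDec();  // ...and this on scope exit
    // use d.payload; note the static type stays RCData, no wrapper involved
}
```

The point of the sketch is the last comment: the object keeps its own type, so generic code sees `RCData`, not a `RefCounted!RCData` wrapper.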
Re: Escaping the Tyranny of the GC: std.rcstring, first blood
23-Sep-2014 03:11, Andrei Alexandrescu writes:
> On 9/22/14, 12:34 PM, Dmitry Olshansky wrote:
>> 22-Sep-2014 01:45, Ola Fosheim Grostad writes:
>>> On Sunday, 21 September 2014 at 17:52:42 UTC, Dmitry Olshansky wrote:
>>>> to use non-atomic ref-counting and have far less cache pollution (the
>>>> set of fibers to switch over is consistent).
>>>
>>> Caches are not a big deal when you wait for io. Go also checks fiber
>>> stack size... But maybe Go should not be considered a target.
>>
>> ??? Just reserve more space. Even Go dropped segmented stacks. What does
>> Go have to do with this discussion at all, BTW?
>>
>>> Because that is what you are competing with in the webspace.
>>
>> E-hm, Go is hardly the top dog in the web space. Java and the JVM crowd
>> (Scala etc.) are apparently very sexy (and performant) in the web space.
>> They try to sell it as if it was all the rage though. IMO Go is hardly
>> an interesting opponent to compete against. In pretty much any use case
>> I see, Go is somewhere down at 4th-or-lower place to look at.
>
> I agree. It does have legs however. We should learn a few things from it,
> such as green threads, dependency management, networking libraries.

Well, in the short term that would mean:

green threads --> better support for fibers (see the std.concurrency pull by Sean)
dependency management --> package dub with dmd releases, use it to build e.g. Phobos? ;)
networking libraries --> there are plenty of good inspirational libraries out there in different languages. vibe.d is cool, but we ought to explore more and propagate stuff to std.net.*

> Also Go shows that good quality tooling makes a lot of difference. And of
> course the main lesson is that templates are good to have :o).

Agreed.

-- 
Dmitry Olshansky
Re: Escaping the Tyranny of the GC: std.rcstring, first blood
On 23 September 2014 15:37, Andrei Alexandrescu via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 9/22/14, 9:53 PM, Manu via Digitalmars-d wrote:
>> On 23 September 2014 14:41, Andrei Alexandrescu via Digitalmars-d
>> <digitalmars-d@puremagic.com> wrote:
>>> On 9/22/14, 8:03 PM, Manu via Digitalmars-d wrote:
>>>> I still think most of those users would accept RC instead of GC. Why not
>>>> support RC in the language, and make all of this library noise redundant?
>>>
>>> A combo approach language + library delivers the most punch.
>>
>> How so? In what instances are complicated templates superior to a
>> language RC type?
>
> It just works out that way. I don't know exactly why. In fact I have an
> idea why, but conveying it requires building a bunch of context.

The trouble with library types like RefCounted! is that they appear to be conceptually backwards to me. RefCounted!T suggests that T is a parameter to RefCounted, i.e. RefCounted is the significant object, not 'T', which is what I actually want. T is just some parameter... I want a ref-counted T, not a T RefCounted, if that makes sense. When we have T* or T[], we don't lose the 'T'-ness of the object, we're just appending a certain type of pointer, and I really think that RC should be applied the same way. All these library solutions make T into something else, and that has a tendency to complicate generic code in my experience. In most cases, templates are used to capture some type of thing, but in these RefCounted-style cases it's backwards: it effectively obscures the type. We end up with inevitable code like is(T == RefCounted!U, U) to get U from T, which is the thing we typically want to know about, and every instance of a template like this must be special-cased; they can't be rolled into PointerTarget!T, and other patterns like Unqual!T can't affect these cases (not applicable here, but the reliable pattern is what I refer to).

I guess I'm saying, RC should be a type of pointer, not a type of thing... otherwise generic code that deals with particular things always seems to run into complications when it expects particular things, and gets something that looks like a completely different sort of thing.

>>>> Library RC can't really optimise well, RC requires language support to
>>>> elide ref fiddling.
>>>
>>> For class objects that's what's going to happen indeed.
>>
>> Where is this discussion? Last time I raised it, it was fiercely shut
>> down and dismissed.
>
> Consider yourself vindicated! (Not really, the design will be different
> from what you asked.) The relevant discussion is entitled "RFC: reference
> counted Throwable", and you've already participated to it :o).

I see. I didn't really get that from that thread, but I only skimmed it quite quickly, since I missed most of the action. I also don't think I ever insisted on a particular design, I asked to have it *explored* (I think I made that point quite clearly), and suggested a design that made sense to me. The idea was shut down in principle, no competing designs explored. I'm very happy to see renewed interest in the topic :)
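The special-casing being described can be made concrete with a small hypothetical `Unwrap` helper. The `RefCounted` struct here is a stripped-down stand-in, not std.typecons.RefCounted:

```d
// A minimal stand-in for a RefCounted-style wrapper, to show the
// introspection burden: generic code must name each wrapper explicitly
// to recover the wrapped type, whereas pointer types keep it visible.
struct RefCounted(T)
{
    T* payload;
    size_t* count;
}

template Unwrap(T)
{
    static if (is(T == RefCounted!U, U))
        alias Unwrap = U;   // the wrapper must be special-cased by name
    else static if (is(T : U*, U))
        alias Unwrap = U;   // pointers: the target type is right there
    else
        alias Unwrap = T;
}

struct S { int x; }
static assert(is(Unwrap!(RefCounted!S) == S));
static assert(is(Unwrap!(S*) == S));
static assert(is(Unwrap!int == int));
```

Every new wrapper template in the wild needs another `static if` branch like the first one; `T*` and `T[]` never do.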
Re: Escaping the Tyranny of the GC: std.rcstring, first blood
On Tuesday, 23 September 2014 at 03:03:49 UTC, Manu via Digitalmars-d wrote:
> I still think most of those users would accept RC instead of GC. Why not
> support RC in the language, and make all of this library noise redundant?
> Library RC can't really optimise well, RC requires language support to
> elide ref fiddling.

I think a library solution + intrinsic for increment/decrement (so they can be better optimized) would be the best option.
Re: Escaping the Tyranny of the GC: std.rcstring, first blood
On 9/22/14, 9:53 PM, Manu via Digitalmars-d wrote:
> On 23 September 2014 14:41, Andrei Alexandrescu via Digitalmars-d
> <digitalmars-d@puremagic.com> wrote:
>> On 9/22/14, 8:03 PM, Manu via Digitalmars-d wrote:
>>> I still think most of those users would accept RC instead of GC. Why not
>>> support RC in the language, and make all of this library noise redundant?
>>
>> A combo approach language + library delivers the most punch.
>
> How so? In what instances are complicated templates superior to a
> language RC type?

It just works out that way. I don't know exactly why. In fact I have an idea why, but conveying it requires building a bunch of context.

>>> Library RC can't really optimise well, RC requires language support to
>>> elide ref fiddling.
>>
>> For class objects that's what's going to happen indeed.
>
> Where is this discussion? Last time I raised it, it was fiercely shut
> down and dismissed.

Consider yourself vindicated! (Not really, the design will be different from what you asked.) The relevant discussion is entitled "RFC: reference counted Throwable", and you've already participated to it :o).

Andrei
Re: Escaping the Tyranny of the GC: std.rcstring, first blood
On 23 September 2014 14:41, Andrei Alexandrescu via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 9/22/14, 8:03 PM, Manu via Digitalmars-d wrote:
>> I still think most of those users would accept RC instead of GC. Why not
>> support RC in the language, and make all of this library noise redundant?
>
> A combo approach language + library delivers the most punch.

How so? In what instances are complicated templates superior to a language RC type?

>> Library RC can't really optimise well, RC requires language support to
>> elide ref fiddling.
>
> For class objects that's what's going to happen indeed.

Where is this discussion? Last time I raised it, it was fiercely shut down and dismissed.
Re: Escaping the Tyranny of the GC: std.rcstring, first blood
On 9/22/14, 8:03 PM, Manu via Digitalmars-d wrote:
> I still think most of those users would accept RC instead of GC. Why not
> support RC in the language, and make all of this library noise redundant?

A combo approach language + library delivers the most punch.

> Library RC can't really optimise well, RC requires language support to
> elide ref fiddling.

For class objects that's what's going to happen indeed.

Andrei
At the language-level support for Micro-thread?
I hope D can have micro-threads at the language level. Like goroutines, maybe.
Re: Escaping the Tyranny of the GC: std.rcstring, first blood
On 16 September 2014 00:51, Andrei Alexandrescu via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 9/15/14, 3:30 AM, bearophile wrote:
>> Andrei Alexandrescu:
>>> Walter, Brad, myself, and a couple of others have had a couple of quite
>>> exciting ideas regarding code that is configurable to use the GC or
>>> alternate resource management strategies.
>>
>> An alternative design solution is to follow the Java way, leave the D
>> strings as they are, and avoid making a mess of user D code. Java GC and
>> runtime contain numerous optimizations for the management of strings,
>> like the recently introduced string de-duplication at run-time:
>> https://blog.codecentric.de/en/2014/08/string-deduplication-new-feature-java-8-update-20-2
>
> Again, it's become obvious that a category of users will simply refuse to
> use a GC, either for the right or the wrong reasons. We must make D
> eminently usable for them.

I still think most of those users would accept RC instead of GC. Why not support RC in the language, and make all of this library noise redundant? Library RC can't really optimise well; RC requires language support to elide ref fiddling.
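As an illustration of the "ref fiddling" a library type cannot reliably remove, here is a toy counted struct (not std.typecons.RefCounted); the comments mark the inc/dec pair that language support could elide:

```d
// Toy reference count (not std.typecons.RefCounted). Every by-value copy
// runs the postblit, every scope exit runs the destructor; the optimizer
// sees opaque calls and generally cannot prove the pair cancels out.
struct Rc
{
    size_t* count;

    this(this) { if (count) ++*count; }   // copy: increment
    ~this()
    {
        if (count && --*count == 0)
        {
            import core.stdc.stdlib : free;
            free(count);                  // last owner: release
        }
    }
}

void callee(Rc r) { /* uses r, drops its copy on return */ }

void caller(Rc r)
{
    // The copy passed to callee triggers an increment, and callee's
    // scope exit triggers a decrement. A compiler that understands the
    // refcounting semantics could see that r outlives the call and
    // elide the matched inc/dec pair ("borrow" the reference).
    callee(r);
}
```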
Re: What are the worst parts of D?
On Tue, 23 Sep 2014 01:45:31 + deadalnix via Digitalmars-d wrote:
> If you hate C++, you shouldn't have too much trouble to understand that
> offering a way out for people using C++ is key.

but there is! D is perfectly able to replace c++. ah, i know, there is alot of legacy c++ code and people can't just rewrite it in D. so... so bad for that people then.
Re: What are the worst parts of D?
On Mon, 22 Sep 2014 19:16:27 -0700 "H. S. Teoh via Digitalmars-d" wrote:
> For a moment, I read that as you'll destroy any traces of C++, so the
> first thing that would go is the DMD source code. :-P

but we have magicport! well, almost... i'll postpone c++ destruction until magicport will be complete and working. ;-)
Re: RFC: reference counted Throwable
On 23 September 2014 00:50, Andrei Alexandrescu via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 9/22/14, 1:56 AM, ixid wrote:
>>> Fasten your seatbelt, it's gonna be a bumpy ride! :o)
>>>
>>> Andrei
>>
>> The fundamentalness of the changes seems to be sufficient that one could
>> argue it's D3.
>
> Let's aim for not.
>
>> If you're going to make major changes wouldn't it be worth a fuller
>> break to address some of the other unresolved and seemingly pretty major
>> issues such as const/immutable and ref?
>
> What are the major issues with const/immutable and ref?

This is precisely why I've mostly given up on this NG...
Re: What are the worst parts of D?
On Tue, Sep 23, 2014 at 04:38:51AM +0300, ketmar via Digitalmars-d wrote:
> On Mon, 22 Sep 2014 16:14:28 -0700
> Andrei Alexandrescu via Digitalmars-d wrote:
> > > D is not c++-compatible anyway.
> > D is ABI- and mangling-compatible with C++.
> but we were talking about syntactic compatibility.
>
> > Well what can I say? I'm glad you're not making the decisions.
> i HATE c++. i want it to DIE, to disappear completely, with all the
> code written in it. so yes, it's good to D that i can't freely mess
> with mainline codebase. 'cause the first thing i'll do with it is
> destroying any traces of c++ interop. the world will be a better place
> without c++.

For a moment, I read that as you'll destroy any traces of C++, so the first thing that would go is the DMD source code. :-P

T

-- 
Shin: (n.) A device for finding furniture in the dark.
Re: Thread GC non "stop-the-world"
On Tuesday, 23 September 2014 at 00:15:51 UTC, Oscar Martin wrote:
> The cost of using the current GC in D, although beneficial for many types
> of programs, is unaffordable for programs such as games, etc. that need
> to perform repetitive tasks every short period of time. The fact that a
> GC.malloc/realloc on any thread can trigger a memory collection that
> stops ALL threads of the program for a variable time prevents it.
>
> Conversations in the forum such as "RFC: reference counted Throwable",
> "Escaping the Tyranny of the GC: std.rcstring, first blood" and the
> @nogc attribute show that this is increasingly perceived as a problem.
>
> Besides the ever-recurring "reference counting", many people propose to
> improve the current implementation of the GC. Rainer Schuetze developed
> a concurrent GC on Windows:
> http://rainers.github.io/visuald/druntime/concurrentgc.html
>
> With some/a lot of work and a little help from the compiler (currently
> it indicates by a flag whether a class/struct contains pointers or
> references to other classes/structs; it could extend this support to
> indicate which fields are pointers/references) we could implement a
> semi-conservative incremental-generational-copying GC like:
> http://www.hboehm.info/gc/ or http://www.ravenbrook.com/project/mps/
>
> Being incremental, they try to minimize the "stop-the-world" phase. But
> even with an advanced GC, as programs become more complex and use more
> memory, pause time also increases. See for example (I know it's not the
> normal case, but in a few years...)
> http://blog.mgm-tp.com/2014/04/controlling-gc-pauses-with-g1-collector
>
> (*) What if:
> - It were forbidden for "__gshared" to hold references/pointers to
>   objects allocated by the GC (if the compiler can help with this
>   prohibition, perfect; if not, the developer has to know what he is
>   doing)
> - "shared" types were not allocated by the GC (they could be reference
>   counted or manually released or ...)
> - "immutable" types were no longer implicitly "shared"
>
> In short, the memory accessible from multiple threads is not managed by
> the GC. With these restrictions each thread would have its own
> "I_Allocator", whose default implementation would be an
> incremental-generational-semi-conservative-copying GC, with no
> interference with any of the other program threads (it would be
> responsible only for the memory reserved for that thread). Other
> implementations of "I_Allocator" could be based on Andrei's allocators.
> With "setThreadAllocator" (similar to the current gc_setProxy) you could
> switch between the different implementations if you need to. Threads
> with critical time requirements could work with an implementation of
> "I_Allocator" not based on the GC.
>
> It would be possible to simulate scoped classes:
>
>     {
>         setThreadAllocator(I_Allocator_pseudo_stack);
>         scope(exit)
>         {
>             I_Allocator_pseudo_stack.deleteAll();
>             setThreadAllocator(I_Allocator_gc);
>         }
>         auto obj = MyClass();
>         ...
>     } // Destructors are called and memory released
>
> Obviously the changes (*) break compatibility with existing code, and
> therefore maybe they are not appropriate for D2. Also these are general
> ideas; surely these changes lead to other problems. But the point I want
> to convey is that, in my opinion, while these problems are solvable, a
> language for "system programming" is incompatible with shared data
> managed by a GC.
>
> Thoughts?

Short: I dislike pretty much all changes to __gshared/shared. Breaks too many things. At least with Cmsed (I'm evil here), where I use __gshared essentially as a read-only variable that is modifiable when starting up (to modify needs synchronized, to read doesn't).

I have already suggested before in threads something similar to what you're suggesting with regards to setting the allocator, except: the memory manager is in a stack. Default is the GC, e.g. the current one. The compiler knows which pointers escape. Can pass to pure functions however.

    with(myAllocator) { // myAllocator.opWithIn
        // ... allocate
    } // myAllocator.opWithCanFree
      // myAllocator.opWithOut

    class MyAllocator : Allocator {
        override void opWithIn(string func = __FUNCTION__, int line = __LINE__) {
            GC.pushAllocator(this);
        }

        override void opWithCanFree(void** freeablePointers) {
            // ...
        }

        override void opWithOut(string func = __FUNCTION__, int line = __LINE__) {
            GC.popAllocator();
        }

        void* alloc(size_t amount) {
            return ...;
        }

        void free(void*) {
            // ...
        }
    }

You may have something about thread allocators though. Hmm, druntime would already need changes, so maybe. Ehh, this really needs a DIP instead of me whining. If I do it, ETA December.
Re: What are the worst parts of D?
On Tuesday, 23 September 2014 at 01:39:00 UTC, ketmar via Digitalmars-d wrote:
> On Mon, 22 Sep 2014 16:14:28 -0700
> Andrei Alexandrescu via Digitalmars-d wrote:
>>> D is not c++-compatible anyway.
>> D is ABI- and mangling-compatible with C++.
> but we were talking about syntactic compatibility.
>
>> Well what can I say? I'm glad you're not making the decisions.
> i HATE c++. i want it to DIE, to disappear completely, with all the code
> written in it. so yes, it's good to D that i can't freely mess with
> mainline codebase. 'cause the first thing i'll do with it is destroying
> any traces of c++ interop. the world will be a better place without c++.

If you hate C++, you shouldn't have too much trouble to understand that offering a way out for people using C++ is key.
Re: What are the worst parts of D?
On Mon, 22 Sep 2014 16:14:28 -0700
Andrei Alexandrescu via Digitalmars-d wrote:
> > D is not c++-compatible anyway.
> D is ABI- and mangling-compatible with C++.

but we were talking about syntactic compatibility.

> Well what can I say? I'm glad you're not making the decisions.

i HATE c++. i want it to DIE, to disappear completely, with all the code written in it. so yes, it's good to D that i can't freely mess with mainline codebase. 'cause the first thing i'll do with it is destroying any traces of c++ interop. the world will be a better place without c++.
Re: Library Typedefs are fundamentally broken
On Mon, 22 Sep 2014 16:06:42 -0700
Andrei Alexandrescu via Digitalmars-d wrote:
> On 9/22/14, 11:52 AM, ketmar via Digitalmars-d wrote:
>> seems that Andrei talking about 'idiomatic D' and we are talking about
>> 'hacky typedef replacement'. that's why we can't settle the issue: we
>> are both right! ;-)
> That I'd agree with.

yeah, and then your resistance to turn 'idiomatic D' into 'hacky typedef' is completely understandable. it took me a while to realise that we are talking about different things here.

>> and that's why we need 'typedef' revived, methinks.
> Sorry, no.

"Typedef" is elegant and "typedef" is handy. why can't we have both? at least until AST macros arrive (and then we'll make "typedef", "deftype" and "gimmeacheese" ;-). let's limit "typedef" to simple numeric types and pointers to simple numeric types to ease implementation. sure, we'll be able to multiply money by money but not money by float then, but i find this acceptable; we can always write a free function which multiplies our money. ;-)
Re: Library Typedefs are fundamentally broken
On Mon, 22 Sep 2014 16:05:38 -0700
Andrei Alexandrescu via Digitalmars-d wrote:
> Best one was a triple "u" in "ugly".

there were four "u"s, but my mail program makes uuugly line breaks with four "u"s.
Re: Library Typedefs are fundamentally broken
On Tue, Sep 23, 2014 at 12:13:19AM +, Adam D. Ruppe via Digitalmars-d wrote:
> On Tuesday, 23 September 2014 at 00:01:51 UTC, Shammah Chancellor wrote:
> >What exactly was wrong with the original typedef statement that was
> >deprecated?
>
> I personally find it too inflexible; it virtually never does exactly
> what I need. BTW I've never used std.typecons.Typedef for the same
> reason.
>
> But what are some examples of typedef? HANDLE and HMENU are close to
> useful indeed, I've used typedef for that kind of thing before. But it
> isn't perfect: do you ever want to do *handle? That's allowed with
> typedef. Do you want it to explicitly cast to void*? That's allowed
> with typedef.
>
> People couldn't agree on whether typedef should be a subtype or a
> separate type either, which fed the deprecation. Personally though I
> think that's silly; if what it does is well defined, we can keep it and
> do other things for the other cases. But it does need to be compelling
> over another feature to be kept, and I'm not sure it is.
>
> So I tend to prefer struct and a little alias this and disabled
> overloads to a typedef. A wee bit more verbose, but more flexible and
> correct too.

Yeah, I think struct + alias this + function/operator overloading pretty much does whatever typedef may have done, except better. So I personally don't miss typedef at all (nor Typedef, for that matter).

T

-- 
Freedom: (n.) Man's self-given right to be enslaved by his own depravity.
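A minimal sketch of the struct + alias this pattern being described. `Meters` is an illustrative name, not a library type; the choice of which operations to allow is exactly the flexibility the pattern buys over the old typedef:

```d
// Strong-typedef sketch via struct + alias this. Conversion *out* to
// double is implicit via alias this; conversion *in* and mixed
// arithmetic are rejected unless explicitly defined.
struct Meters
{
    double value;
    alias value this;   // lets a Meters be used where a double is expected

    Meters opBinary(string op : "+")(Meters rhs)
    {
        return Meters(value + rhs.value);
    }
}

void takesDouble(double d) {}

void demo()
{
    auto a = Meters(2.0), b = Meters(3.0);
    Meters c = a + b;     // Meters + Meters: defined above
    takesDouble(c);       // implicit out-conversion via alias this
    // Meters e = 5.0;    // error: no implicit conversion into Meters
}
```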
Thread GC non "stop-the-world"
The cost of using the current GC in D, although beneficial for many types of programs, is unaffordable for programs such as games, etc. that need to perform repetitive tasks every short period of time. The fact that a GC.malloc/realloc on any thread can trigger a memory collection that stops ALL threads of the program for a variable time prevents it.

Conversations in the forum such as "RFC: reference counted Throwable", "Escaping the Tyranny of the GC: std.rcstring, first blood" and the @nogc attribute show that this is increasingly perceived as a problem.

Besides the ever-recurring "reference counting", many people propose to improve the current implementation of the GC. Rainer Schuetze developed a concurrent GC on Windows: http://rainers.github.io/visuald/druntime/concurrentgc.html

With some/a lot of work and a little help from the compiler (currently it indicates by a flag whether a class/struct contains pointers or references to other classes/structs; it could extend this support to indicate which fields are pointers/references) we could implement a semi-conservative incremental-generational-copying GC like: http://www.hboehm.info/gc/ or http://www.ravenbrook.com/project/mps/

Being incremental, they try to minimize the "stop-the-world" phase. But even with an advanced GC, as programs become more complex and use more memory, pause time also increases. See for example (I know it's not the normal case, but in a few years...) http://blog.mgm-tp.com/2014/04/controlling-gc-pauses-with-g1-collector

(*) What if:
- It were forbidden for "__gshared" to hold references/pointers to objects allocated by the GC (if the compiler can help with this prohibition, perfect; if not, the developer has to know what he is doing)
- "shared" types were not allocated by the GC (they could be reference counted or manually released or ...)
- "immutable" types were no longer implicitly "shared"

In short, the memory accessible from multiple threads is not managed by the GC.

With these restrictions each thread would have its own "I_Allocator", whose default implementation would be an incremental-generational-semi-conservative-copying GC, with no interference with any of the other program threads (it would be responsible only for the memory reserved for that thread). Other implementations of "I_Allocator" could be based on Andrei's allocators. With "setThreadAllocator" (similar to the current gc_setProxy) you could switch between the different implementations if you need to. Threads with critical time requirements could work with an implementation of "I_Allocator" not based on the GC.

It would be possible to simulate scoped classes:

    {
        setThreadAllocator(I_Allocator_pseudo_stack);
        scope(exit)
        {
            I_Allocator_pseudo_stack.deleteAll();
            setThreadAllocator(I_Allocator_gc);
        }
        auto obj = MyClass();
        ...
    } // Destructors are called and memory released

Obviously the changes (*) break compatibility with existing code, and therefore maybe they are not appropriate for D2. Also these are general ideas; surely these changes lead to other problems. But the point I want to convey is that, in my opinion, while these problems are solvable, a language for "system programming" is incompatible with shared data managed by a GC.

Thoughts?
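Neither I_Allocator nor setThreadAllocator exists in druntime; a minimal sketch of the interface being proposed might look like this (the names follow the post, everything else is assumed):

```d
// Hypothetical sketch only: neither I_Allocator nor setThreadAllocator
// exists in druntime. Names taken from the post; the rest is assumed.
interface I_Allocator
{
    void* allocate(size_t size);
    void deallocate(void* p);
    void deleteAll();                 // bulk release for region-style use
}

I_Allocator threadAllocator;          // module-level = thread-local in D

void setThreadAllocator(I_Allocator a) { threadAllocator = a; }

void timingCriticalWork(I_Allocator pseudoStack, I_Allocator gcBacked)
{
    setThreadAllocator(pseudoStack);
    scope(exit)
    {
        pseudoStack.deleteAll();      // everything from this scope released
        setThreadAllocator(gcBacked); // restore the default allocator
    }
    // allocations on this thread now bypass the shared GC entirely
}
```

Because D module-level variables are thread-local by default, each thread naturally gets its own `threadAllocator` without extra bookkeeping, which is the crux of the "no shared GC heap" idea.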
Re: Library Typedefs are fundamentally broken
On Tuesday, 23 September 2014 at 00:01:51 UTC, Shammah Chancellor wrote:
> What exactly was wrong with the original typedef statement that was
> deprecated?

I personally find it too inflexible; it virtually never does exactly what I need. BTW I've never used std.typecons.Typedef for the same reason.

But what are some examples of typedef? HANDLE and HMENU are close to useful indeed, I've used typedef for that kind of thing before. But it isn't perfect: do you ever want to do *handle? That's allowed with typedef. Do you want it to explicitly cast to void*? That's allowed with typedef.

People couldn't agree on whether typedef should be a subtype or a separate type either, which fed the deprecation. Personally though I think that's silly; if what it does is well defined, we can keep it and do other things for the other cases. But it does need to be compelling over another feature to be kept, and I'm not sure it is.

So I tend to prefer struct and a little alias this and disabled overloads to a typedef. A wee bit more verbose, but more flexible and correct too.
Re: Library Typedefs are fundamentally broken
What exactly was wrong with the original typedef statement that was deprecated?
Re: Escaping the Tyranny of the GC: std.rcstring, first blood
On 9/22/14, 12:34 PM, Dmitry Olshansky wrote:
> 22-Sep-2014 01:45, Ola Fosheim Grostad writes:
>> On Sunday, 21 September 2014 at 17:52:42 UTC, Dmitry Olshansky wrote:
>>> to use non-atomic ref-counting and have far less cache pollution (the
>>> set of fibers to switch over is consistent).
>>
>> Caches are not a big deal when you wait for io. Go also checks fiber
>> stack size... But maybe Go should not be considered a target.
>
> ??? Just reserve more space. Even Go dropped segmented stacks. What does
> Go have to do with this discussion at all, BTW?
>
>> Because that is what you are competing with in the webspace.
>
> E-hm, Go is hardly the top dog in the web space. Java and the JVM crowd
> (Scala etc.) are apparently very sexy (and performant) in the web space.
> They try to sell it as if it was all the rage though. IMO Go is hardly an
> interesting opponent to compete against. In pretty much any use case I
> see, Go is somewhere down at 4th-or-lower place to look at.

I agree. It does have legs however. We should learn a few things from it, such as green threads, dependency management, networking libraries. Also Go shows that good quality tooling makes a lot of difference. And of course the main lesson is that templates are good to have :o).

Andrei
Re: What are the worst parts of D?
On 9/22/14, 1:44 PM, ketmar via Digitalmars-d wrote:
> On Mon, 22 Sep 2014 14:28:47 +
> AsmMan via Digitalmars-d wrote:
>> It's really needed to keep as C++-compatible as possible, otherwise too
>> few people are going to use it. If C++ wasn't C-compatible, do you think
>> it would be the successful language it is today? I don't think so.
> D is not c++-compatible anyway.

D is ABI- and mangling-compatible with C++.

> and talking about compatibility: it's what made c++ such a monster. if
> someone wants c++ he knows where to download c++ compiler. the last
> thing D should look at is c++.

Well what can I say? I'm glad you're not making the decisions.

Andrei
Re: Library Typedefs are fundamentally broken
On 9/22/14, 11:52 AM, ketmar via Digitalmars-d wrote:
> seems that Andrei talking about 'idiomatic D' and we are talking about
> 'hacky typedef replacement'. that's why we can't settle the issue: we are
> both right! ;-)

That I'd agree with.

> and that's why we need 'typedef' revived, methinks.

Sorry, no.

Andrei
Re: Escaping the Tyranny of the GC: std.rcstring, first blood
On 9/22/14, 12:18 PM, "Nordlöw" wrote:
> On Monday, 15 September 2014 at 02:26:19 UTC, Andrei Alexandrescu wrote:
>> http://dpaste.dzfl.pl/817283c163f5
>
> Your implementation seems to hold water, at least in my tests, and saves
> memory at https://github.com/nordlow/justd/blob/master/conceptnet5.d

Awesome, thanks for doing this. How did you measure and what results did you get?

-- 
Andrei
Re: Library Typedefs are fundamentally broken
On 9/22/14, 11:56 AM, Daniel Murphy wrote:
> "Vladimir Panteleev" wrote in message news:oadjpzibjneyfutoy...@forum.dlang.org...
>> What if you *want* a Typedef instantiation to be the same for all
>> instantiations of a parent template?
>
> Declare it outside the template and provide an alias inside. Like you
> would with any other declaration you wanted common to all instantiations.
>
> I think you can have both if Typedef simply takes an "ARGS...", which
> defaults to TypeTuple!(__FILE__, __LINE__, __COLUMN__), but in this case
> can be overridden to TypeTuple!(Foo, T).
>
> Yeah. If it wasn't for the syntax overhead, the ideal args is something
> like this:
>
>     struct MyTypedef_Tag;
>     alias MyTypedef = Typedef!(basetype, init, MyTypedef_Tag);

    struct MyTypedef_Tag;
    alias MyTypedef = Typedef!(basetype, init, MyTypedef_Tag.mangleof);

should get you off the ground :o).

Andrei
Re: Library Typedefs are fundamentally broken
On 9/22/14, 11:35 AM, bearophile wrote:
> Andrei Alexandrescu:
>> I find the requirement for the cookie perfect.
>
> So far you're the only one, it seems. And you have admitted you have not
> tried to use them significantly in your code.

Well, there seems to be a sore need of arguments to convince me otherwise. Best one was a triple "u" in "ugly".

-- 
Andrei
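For context on what the cookie buys, a small example with std.typecons.Typedef, whose third parameter is a string cookie that makes otherwise-identical instantiations distinct types:

```d
// Why the cookie matters: without distinct cookies, two Typedefs of the
// same base type are the very same instantiation, hence the same type.
import std.typecons : Typedef;

alias Money  = Typedef!(double, double.init, "Money");
alias Weight = Typedef!(double, double.init, "Weight");

static assert(!is(Money == Weight));   // distinct, thanks to the cookies

alias A = Typedef!double;
alias B = Typedef!double;

static assert(is(A == B));             // no cookie: indistinguishable
```

The complaint in the thread is precisely that the cookie (or a tag type, per the sibling post) has to be spelled out by hand at every declaration site.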
Re: Identifier resolution, the great implementation defined mess.
On Mon, Sep 22, 2014 at 02:58:33PM -0700, Walter Bright via Digitalmars-d wrote: > On 9/22/2014 1:27 PM, deadalnix wrote: > >On Monday, 22 September 2014 at 09:17:16 UTC, Walter Bright wrote: > >>It is depth first. It starts at the innermost scope, which is the > >>current scope. Somehow, we don't seem to be talking the same > >>language :-( > >> > > > >Depth first in the sense, go from the inner to the outer scope and > >look for local symbols. If that fails, go from the inner to the outer > >and look for imported symbols. > > I would find that to be surprising behavior, as I'd expect inner > declarations to override outer ones. I think what makes the situation under discussion stick out like a sore thumb, is the fact that a local import statement inserts symbols into the current inner scope "implicitly". That is to say, when you write: struct S { int x; void method(int y) { foreach (z; 0..10) { import somemodule; } } } the statement "import somemodule" can potentially pull in arbitrary symbols into the inner scope, even though none of these symbols are explicitly named in the code. If we were to write, instead: struct S { int x; void method(int y) { foreach (z; 0..10) { import somemodule : x, y, z; } } } then it would not be surprising that x, y, z, refer to the symbols from somemodule, rather than the loop variable, method parameter, or member variable. However, in the former case, the user is left at the mercy of somemodule (keep in mind that this can be a 3rd party module over which the user has little control) what symbols will be introduced into the inner scope. If somemodule defines a symbol 'x', then it silently overrides S.x in the inner scope. But since 'x' is implicit from the import statement, a casual perusal of the code would give the wrong impression that 'x' refers to S.x, whereas it actually refers to somemodule.x. 
This problem is intrinsically the same problem exhibited by shadowing of local variables:

    void func(int x) {
        int x;
        {
            int x;
            x++;
        }
    }

which is rejected by the compiler. Both result in code that is difficult to reason about and error-prone because of symbol ambiguity. Arguably, we should reject such import statements if they would introduce this kind of symbol shadowing. But regardless of whether we decide to do that or not, it's becoming clear that unqualified local imports are a bad idea, and I, for one, will avoid using them for the above reasons. Instead, I'd recommend only using qualified local imports:

    void func(int x, int z) {
        import submodule : x, y;
        // I explicitly want submodule.x to shadow parameter x;
        // I also name submodule.y so that in the event that
        // submodule actually contains a symbol 'z', it does not
        // accidentally shadow parameter z.
    }

leaving unqualified imports to the module global scope. In an ideal world, the language would enforce this usage. T -- Life would be easier if I had the source code. -- YHL
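To make the hijacking concrete, here is a small sketch (the module and symbol names are hypothetical, and the comments describe the hijacking behaviour under discussion, not settled compiler semantics):

```d
// somemodule.d (hypothetical third-party module)
module somemodule;
int x = 42;

// app.d
module app;
import somemodule;  // harmless at module scope: app's own symbols win

struct S
{
    int x = 1;
    int method()
    {
        import somemodule;  // implicitly pulls somemodule.x into this scope
        return x;           // under the "inner scope wins" lookup being
                            // debated, this silently reads somemodule.x
                            // instead of S.x
    }
}
```

If somemodule later gains a symbol named `x`, the meaning of `method` changes without any edit to app.d, which is exactly the maintenance hazard the post describes.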
Re: Identifier resolution, the great implementation defined mess.
On 9/22/2014 1:27 PM, deadalnix wrote: On Monday, 22 September 2014 at 09:17:16 UTC, Walter Bright wrote: It is depth first. It starts at the innermost scope, which is the current scope. Somehow, we don't seem to be talking the same language :-( Depth first in the sense, go from the inner to the outer scope and look for local symbols. If that fails, go from the inner to the outer and look for imported symbols. I would find that to be surprising behavior, as I'd expect inner declarations to override outer ones.
Re: Identifier resolution, the great implementation defined mess.
On 09/22/2014 10:27 PM, deadalnix wrote: On Monday, 22 September 2014 at 09:17:16 UTC, Walter Bright wrote: It is depth first. It starts at the innermost scope, which is the current scope. Somehow, we don't seem to be talking the same language :-( Depth first in the sense, go from the inner to the outer scope and look for local symbols. If that fails, go from the inner to the outer and look for imported symbols. That sounds almost right, but it still suffers from hijacking issues, because more nested (non-explicitly!) imported identifiers would hide less nested ones. What about: Go from nested to outer scopes and look for local symbols. If that fails, look up the symbol simultaneously in all modules that are imported in scopes enclosing the lookup. I currently think this would be the most sane behaviour for imports. It would need to be determined what to do about template mixins.
Re: What are the worst parts of D?
On Mon, 22 Sep 2014 14:28:47 + AsmMan via Digitalmars-d wrote: > It's really needed to keep as C++-compatible as possible, otherwise > too few people are going to use it. If C++ wasn't C-compatible, do > you think it would be the successful language it is today? I > don't think so. D is not c++-compatible anyway. and talking about compatibility: it's what made c++ such a monster. if someone wants c++ he knows where to download a c++ compiler. the last thing D should look at is c++.
Re: Identifier resolution, the great implementation defined mess.
On Monday, 22 September 2014 at 09:17:16 UTC, Walter Bright wrote: It is depth first. It starts at the innermost scope, which is the current scope. Somehow, we don't seem to be talking the same language :-( Depth first in the sense, go from the inner to the outer scope and look for local symbols. If that fails, go from the inner to the outer and look for imported symbols.
Re: if still there
On 22.09.2014 15:24, Daniel Murphy wrote: "Rainer Schuetze" wrote in message news:lvmjqr$h13$1...@digitalmars.com... The branch is still in the doIt function: Yes. dmd didn't do any inlining at all. It is very restrained with inlining; you'll get much better results with GDC or LDC. Check again, it inlined doIt into main. It still writes out doIt even though it's now unreferenced, but that doesn't have much to do with the inliner. Yeah, my bad. I must have been confused by your quote of the non-inlining main. Sorry to dmd for blaming its inliner, again ;-)
Re: Escaping the Tyranny of the GC: std.rcstring, first blood
22-Sep-2014 13:45, Ola Fosheim Grostad wrote: Locking fibers to threads will cost you more than using threadsafe features. One 300ms request can then starve waiting fibers even if you have 7 free threads. This statement doesn't make any sense taken in isolation. It lacks way too much context to be informative. For instance, "locking a thread for 300ms" is easily averted if all I/O and blocking sys-calls are managed in a separate thread pool (which may grow far beyond the fiber-scheduled "web" thread pool). And if "locked" means CPU-bound locked, then it's:

a) hard to fix without help from the OS: re-scheduling a fiber without an explicit yield ain't possible (it's cooperative; preemption is in the domain of the OS). Something like Windows User-Mode Scheduling is required, or user-mode threads à la FreeBSD (haven't checked in a while?).

b) If CPU-bound work is happening more often than once in a while, then fibers are a poor fit anyway - threads (and pools of 'em) do exactly what's needed in this case by being natively preemptive and well suited for running multiple CPU-intensive tasks.

That's bad for latency, because then all fibers on that thread will get 300+ms in latency. E-hm, locking threads to fibers and arbitrary latency figures have very little to do with each other. The nature of that latency is extremely important. How anyone can disagree with this is beyond me. IMHO poorly formed problem statements are not going to prove your point. Pardon me making a personal statement, but for instance showing how Go avoids your problem and clearly specifying the exact conditions that cause it would go a long way to demonstrating whatever you wanted to. -- Dmitry Olshansky
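The separate-thread-pool idea can be sketched roughly as follows. This is a hypothetical illustration, not a real library API: only core.thread.Fiber is real; WorkerPool, blockingRead and scheduleResume are stand-in names for the event loop's machinery.

```d
import core.thread : Fiber;

// Sketch: the calling fiber parks itself while a worker thread performs
// the blocking call, so the fiber-scheduling ("web") thread keeps
// switching other fibers instead of stalling for 300ms.
void fiberFriendlyRead(WorkerPool pool, int fd, ubyte[] buf)
{
    auto self = Fiber.getThis();   // the currently running fiber
    pool.submit({
        // runs on a worker thread from the (growable) blocking pool;
        // may sleep in the kernel without touching the fiber scheduler
        auto n = blockingRead(fd, buf);
        scheduleResume(self, n);   // event loop will call self.call() later
    });
    Fiber.yield();                 // cooperatively hand the CPU back now
}
```

The point is that only cooperative yields move control between fibers; anything that might block for a long time is pushed to preemptive threads, which is exactly the division of labour described in points a) and b) above.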
Re: RFC: reference counted Throwable
On Monday, 22 September 2014 at 15:18:00 UTC, bearophile wrote: Piotrek: No, it's not a good idea. Tweaking memory management shouldn't require the language branching. IMHO, this would be a suicide. I didn't mean the advancement as a language branching, but as a successive version that is (mostly) backwards compatible. Likewise, C#6.0 is not a branching of C#5. Bye, bearophile I'm not sure how you define a branch, but I look at it from an SCM PoV. E.g. assuming C#6 is the master/trunk/development branch, then C#5 is a maintenance branch of it (same for other C# versions). That means MS has to keep fixing all versions in parallel. Piotrek
Re: Escaping the Tyranny of the GC: std.rcstring, first blood
22-Sep-2014 01:45, Ola Fosheim Grostad wrote: On Sunday, 21 September 2014 at 17:52:42 UTC, Dmitry Olshansky wrote: to use non-atomic ref-counting and have far less cache pollution (the set of fibers to switch over is consistent). Caches are not a big deal when you wait for io. Go also check fiber stack size... But maybe Go should not be considered a target. ??? Just reserve more space. Even Go dropped segmented stacks. What does Go have to do with this discussion at all, BTW? Because that is what you are competing with in the webspace. E-hm, Go is hardly the top dog in the web space. Java and the JVM crowd (Scala etc.) are apparently very sexy (and performant) in the web space. They try to sell it as if it was all the rage though. IMO Go is hardly an interesting opponent to compete against. In pretty much any use case I see, Go is somewhere down in 4th+ place to look at. Go checks and extends stacks. Since 1.2 or 1.3, i.e. relatively new stuff. -- Dmitry Olshansky
Re: RFC: reference counted Throwable
On 21.9.2014. 22:57, Peter Alexander wrote: On Sunday, 21 September 2014 at 19:36:01 UTC, Nordlöw wrote: On Friday, 19 September 2014 at 15:32:38 UTC, Andrei Alexandrescu wrote: Please chime in with thoughts. Why don't we all focus our efforts on upgrading the current GC to a state-of-the-art GC making use of D's strongly typed memory model before discussing these things? GC improvements are critical, but... "As discussed, having exception objects being GC-allocated is clearly a large liability that we need to address. They prevent otherwise careful functions from being @nogc, so they affect even apps that otherwise would be okay with a little litter here and there." No improvements to the GC can fix this. @nogc needs to be usable, whether you are a GC fan or not. I think that what is being suggested is that upgrading the GC would broaden the view on what can and should be done. For example, now that ranges and mixins exist, great ideas come to mind, and without them we could only guess. I think that the GC is in the same position.
Re: Escaping the Tyranny of the GC: std.rcstring, first blood
On Monday, 15 September 2014 at 02:26:19 UTC, Andrei Alexandrescu wrote: http://dpaste.dzfl.pl/817283c163f5 Your implementation seems to hold water, at least in my tests, and saves memory at https://github.com/nordlow/justd/blob/master/conceptnet5.d Thanks :) I'm however struggling with fast serialization with msgpack. FYI: https://github.com/msgpack/msgpack-d/issues/43
Re: Library Typedefs are fundamentally broken
"Vladimir Panteleev" wrote in message news:oadjpzibjneyfutoy...@forum.dlang.org... What if you *want* a Typedef instantiation to be the same for all instantiations of a parent template? Declare it outside the template and provide an alias inside, like you would with any other declaration you wanted common to all instantiations. I think you can have both if Typedef simply takes an "ARGS...", which defaults to TypeTuple!(__FILE__, __LINE__, __COLUMN__), but in this case can be overridden to TypeTuple!(Foo, T). Yeah. If it wasn't for the syntax overhead, the ideal arguments would be something like this:

    struct MyTypedef_Tag;
    alias MyTypedef = Typedef!(basetype, init, MyTypedef_Tag);
Re: Library Typedefs are fundamentally broken
On Mon, 22 Sep 2014 16:21:42 + Wyatt via Digitalmars-d wrote: > On Saturday, 20 September 2014 at 04:52:58 UTC, Andrei > Alexandrescu wrote: > > > > alias A = Typedef!float; > > alias B = Typedef!float; > > > > By basic language rules, A and B are identical. Making them > > magically distinct would be surprising... > > > Hold up. See, "Making them magically distinct would be > surprising" is really the sticking point for me because in my > experience it rings false. it's abusing 'alias' which makes it all a mess. if we look at the code above as 'idiomatic D code', then A and B should be similar. but if we look at the code as a 'hacky replacement for real typedef', A and B should not be similar. seems that Andrei is talking about 'idiomatic D' and we are talking about 'hacky typedef replacement'. that's why we can't settle the issue: we are both right! ;-) and that's why we need 'typedef' revived, methinks.
Re: Library Typedefs are fundamentally broken
On Monday, 22 September 2014 at 17:21:50 UTC, Andrei Alexandrescu wrote: I find the requirement for the cookie perfect. There is one thing I like about it and wish was available elsewhere: two modules can define the same type for interoperability without needing to import each other. My simpledisplay.d and image.d modules both used to be standalone. Both defined struct Color {}. Identical layout. When I added optional integration, these two structs clashed. The solution was to introduce module color, which held that struct. But now both modules require that import. Works fine but I think it would have been great if I could have kept the two separate definitions and just told the compiler it's ok, these two types are indeed identically compatible despite coming from independent modules. kinda like pragma(mangle) but for types and preferably less hacky. Another potential use of this concept would be separate interface and implementation files. But that's all pretty different than the Typedef cookie. I'm just saying it because the concept of matching types in different modules IS something that I've wanted in the past.
Re: Library Typedefs are fundamentally broken
Andrei Alexandrescu: I find the requirement for the cookie perfect. So far you're the only one, it seems. And you have admitted you have not tried to use them significantly in your code. Bye, bearophile
Re: RFC: reference counted Throwable
What are the major issues with const/immutable and ref? Const and immutable seem to be difficult to work with, Maxime wrote a piece about how difficult to use they were in practice. Ref is difficult to combine properly with generic templates, Manu covers that in another thread here. Maxime's blog: http://pointersgonewild.wordpress.com/2014/07/11/the-constness-problem/ Manu's post about the combinatorial complexity of ref for templates (which you've responded to earlier in the chain): http://forum.dlang.org/post/mailman.1381.1411381413.5783.digitalmar...@puremagic.com
Re: Library Typedefs are fundamentally broken
On 9/22/14, 9:21 AM, Wyatt wrote: On Saturday, 20 September 2014 at 04:52:58 UTC, Andrei Alexandrescu wrote: alias A = Typedef!float; alias B = Typedef!float; By basic language rules, A and B are identical. Making them magically distinct would be surprising... Hold up. See, "Making them magically distinct would be surprising" is really the sticking point for me because in my experience it rings false. When I reach for a typedef, I expect these things to NOT be identical. More to the point, I'm explicitly stating: "This thing is not like any other thing." That's the fundamental reason any sort of typedef exists in my world. That the idiomatic library Typedef doesn't actually give these semantics unless I do extra stuff? _That_ is surprising and inconvenient. I'm personally having difficulty coming up with a situation where the current default behaviour is even useful. I would agree with that. If I'd do it over again I'd probably make the string the second argument with no default. You can make the argument that it's not that much of a burden. And on a cursory read, sure, that makes enough sense. It's a good argument. At some point some RTFM is necessary; I think it's reasonable to assume that whoever is in the market for using Typedef would spend a minute with the documentation. But it's still there and still acts as positive punishment. We tend to tout type safety as a major feature, which builds the expectation that getting it for the common case is _completely trivial_ and in-line with the pragmatic approach taken by the rest of the language. Adding a gotcha like this makes it less likely to be used correctly. Type safety is not the problem here. I do agree that surprising behavior for those who don't RTFM is possible. And I think it needs to be stressed: No one is arguing that the current behaviour shouldn't be possible at all; just that it's an unusual special case that makes for a warty default. I'd agree with that. 
(Again if I could do things over again there'd be no default for the cookie.) But my understanding is that there's quite a bit of blowing this out of proportion. If you want two distinctly-named types to hash the same, give them a common cookie and be on your merry way. Meanwhile, the issues with e.g. Typedef in templates disappear because it makes introducing a unique type the trivial default. Anecdotally, I was explaining how neat it is that D has library Typedef to an engineer on my team and he commented that he'd never expect a word like "alias" to be associated with defining a distinct type, suggesting its use here is a misfeature. He also called the cookie parameter a wart (unprompted). It's an anecdote. How you explained matters matters a lot :o). I find the requirement for the cookie perfect. Andrei
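The cookie behaviour being debated can be shown with a short sketch using std.typecons.Typedef (the "acme.*" cookie strings are made-up examples):

```d
import std.typecons : Typedef;

// Same instantiation, so A and B are one and the same type -- the
// "basic language rules" Andrei refers to:
alias A = Typedef!float;
alias B = Typedef!float;
static assert(is(A == B));

// Distinct cookie strings produce genuinely distinct types; note the
// init value must be supplied before the cookie in the parameter list:
alias Money  = Typedef!(double, double.init, "acme.Money");
alias Weight = Typedef!(double, double.init, "acme.Weight");
static assert(!is(Money == Weight));

void pay(Money m) {}

void main()
{
    pay(Money(3.50));
    // pay(Weight(3.50));  // error: Weight is not Money
}
```

This is the trade-off both sides describe: without the cookie, aliasing behaves like ordinary template instantiation; with it, you get the unique-type semantics a typedef is usually reached for.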
Re: What are the worst parts of D?
On Sunday, 21 September 2014 at 22:17:59 UTC, H. S. Teoh via Digitalmars-d wrote: On Sun, Sep 21, 2014 at 08:49:38AM +, via Digitalmars-d wrote: On Sunday, 21 September 2014 at 00:07:36 UTC, Vladimir Panteleev wrote: >On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja >wrote: >>What do you think are the worst parts of D? > >The regressions! > >https://issues.dlang.org/buglist.cgi?bug_severity=regression&list_id=106988&resolution=--- > >I filed over half of those... I guess you found them using your own code base? Maybe it would make sense to add one or more larger projects to the autotester, in addition to the unit tests. They don't necessarily need to be blocking, just a notice "hey, your PR broke this and that project" would surely be helpful to detect the breakages early on. This has been suggested before. The problem is resources. If you're willing to donate equipment for running these tests, it would be greatly appreciated, I believe. For my part, I regularly try compiling my own projects with git HEAD, and filing any regressions I find. T What is needed?
Re: Library Typedefs are fundamentally broken
On Monday, 22 September 2014 at 16:21:43 UTC, Wyatt wrote: On Saturday, 20 September 2014 at 04:52:58 UTC, Andrei Alexandrescu wrote: alias A = Typedef!float; alias B = Typedef!float; By basic language rules, A and B are identical. Making them magically distinct would be surprising... Hold up. See, "Making them magically distinct would be surprising" is really the sticking point for me because in my experience it rings false. When I reach for a typedef, I expect these things to NOT be identical. More to the point, I'm explicitly stating: "This thing is not like any other thing." That's the fundamental reason any sort of typedef exists in my world. IIRC, in the beginning, typedef was (assumed to be) just that. The problem was that the conversion rules were lacking and, in the end, nobody wrote them. Typedef itself became deprecated and merely a synonym for "alias". Too bad, since the "strong typedef" was one of the things that dragged me into D from the C world...
Re: Library Typedefs are fundamentally broken
On Saturday, 20 September 2014 at 04:52:58 UTC, Andrei Alexandrescu wrote: alias A = Typedef!float; alias B = Typedef!float; By basic language rules, A and B are identical. Making them magically distinct would be surprising... Hold up. See, "Making them magically distinct would be surprising" is really the sticking point for me because in my experience it rings false. When I reach for a typedef, I expect these things to NOT be identical. More to the point, I'm explicitly stating: "This thing is not like any other thing." That's the fundamental reason any sort of typedef exists in my world. That the idiomatic library Typedef doesn't actually give these semantics unless I do extra stuff? _That_ is surprising and inconvenient. I'm personally having difficulty coming up with a situation where the current default behaviour is even useful. You can make the argument that it's not that much of a burden. And on a cursory read, sure, that makes enough sense. But it's still there and still acts as positive punishment. We tend to tout type safety as a major feature, which builds the expectation that getting it for the common case is _completely trivial_ and in-line with the pragmatic approach taken by the rest of the language. Adding a gotcha like this makes it less likely to be used correctly. And I think it needs to be stressed: No one is arguing that the current behaviour shouldn't be possible at all; just that it's an unusual special case that makes for a warty default. If you want two distinctly-named types to hash the same, give them a common cookie and be on your merry way. Meanwhile, the issues with e.g. Typedef in templates disappear because it makes introducing a unique type the trivial default. Anecdotally, I was explaining how neat it is that D has library Typedef to an engineer on my team and he commented that he'd never expect a word like "alias" to be associated with defining a distinct type, suggesting its use here is a misfeature. 
He also called the cookie parameter a wart (unprompted). -Wyatt
Re: RFC: scope and borrowing
On 23 September 2014 01:00, via Digitalmars-d wrote: > On Monday, 22 September 2014 at 12:37:47 UTC, Manu via Digitalmars-d wrote: > >> On 22 September 2014 22:14, via Digitalmars-d < >> digitalmars-d@puremagic.com> >> wrote: >> >> On Monday, 22 September 2014 at 11:45:39 UTC, Manu via Digitalmars-d >>> wrote: >>> >>> Application to scope will be identical to ref. A function that returns or receives scope that is inserted into generic code must have that property cascaded outwards appropriately. If typeof() or alias loses 'scope', then it will all go tits-up. >>> For receiving it's not necessary, because whether or not the argument is >>> scoped, the function can always borrow it. The lifetime of its parameter >>> is >>> narrower than what it gets passed. >>> >>> >> It's particularly common in D to produce templates that wrap functions. >> If the wrapper doesn't propagate scope outwards, then it can no longer be >> called by a caller who borrowed arguments which are to be forwarded to the >> function being called. Likewise for return values. >> > > You have a point there. > It's massive. Trust me, when you're fabricating functions from introspecting other functions, you NEED all these details in the type. If they're not part of the types, then you need to craft lots of junk code to detect that information explicitly, and then you need to branch out n^^2 distinct paths (massive DRY violation) to handle all the combinations. Imagine if 'const' was a storage class in the context of generic code... the practicality of that would be identical to 'ref', except that const appears everywhere, and ref appears rarely (probably because people tend to avoid it, because it's broken). For return values, the situation is a bit different: They can of course not >> >>> be assigned to non-scoped variables. But the solution for this simple: >>> the >>> generic code needs to use scope, too. >>> >> >> >> This is precisely the problem with ref... 
>> Are you saying that ALL generic code needs to be 'scope' always? That's >> not >> semantically correct. >> >> > To be clear, I am referring to the implementation, the actual code of the > generic functions, not to its signature. The signature of course needs to > match the semantics of the generic function. > So...? I also over-generalized when I said that the return value cannot be > assigned to non-scope. It can theoretically depend on the input, though I'm > not sure whether it's a good idea to allow this: > > scope!a T scopeFunc(scope T a, scope T b); > > T* genericFunc(T)(T* input1, T* input2) { > ... > // this is fine in theory: input1 points to GC or global data > // (because it's not designated as scope) > string temp = scopeFunc(input1, input2); > ... > return temp; > } > > Evidently, this generic function cannot accept scoped pointers, thus it > can't take advantage of the fact that scopeFunc() does. It's therefore a > good idea, to make any generic (and non-generic, too) function take its > parameters by scope if at all possible: > > scope!input1 T* genericFunc(T)(scope T* input1, scope T* input2) { > ... > scope temp = scopeFunc(input1, input2); > ... > return temp; > } > > This second version of the function will work with scope and non-scope > inputs alike. More importantly, it doesn't depend on whether it's allowed > to assign a scope return value to non-scope if its owners aren't scoped > (which I'd like to avoid). > We arrive at yet another case of "it should have been that way from the start" wrt 'scope'. The baggage of annotation, and the lack of annotation to existing code is a pretty big pill to swallow. If it were just part of the type, there would be no problem, T would already be 'scope T' in the cases where you expect. I can't see any disadvantages there. Now, `genericFunc()` in turn returns a scoped reference, so any other > generic code that calls it must again be treated in the same way. > Everything else would be unsafe, after all. 
But note that this only goes as > far as an actual scoped value is returned up the call-chain. Once you stop > doing so (because you only need to call the scope-returning functions > internally for intermediate results, for example), returning scope would no > longer be necessary. It still makes sense for these higher-up functions to > _accept_ scope, of course, if it's possible. > > Of course, this is only true as long as the generic function knows about > the semantics of `scopeFunc()`. Once you're trying to wrap functions (as > alias predicates, opDispatch), there needs to be another solution. I'm not > sure what this could be though. I see now why you mentioned ref. But the > problem is not restricted to ref and scope, it would also apply to UDAs. > Maybe, because it is a more general problem independent of scope, the > solution needs to be a more general one, too. > I thin
Re: RFC: reference counted Throwable
Piotrek: No, it's not a good idea. Tweaking memory management shouldn't require the language branching. IMHO, this would be a suicide. I didn't mean the advancement as a language branching, but as a successive version that is (mostly) backwards compatible. Likewise, C#6.0 is not a branching of C#5. Bye, bearophile
Re: RFC: scope and borrowing
On 22 September 2014 23:38, Kagamin via Digitalmars-d < digitalmars-d@puremagic.com> wrote: > On Monday, 22 September 2014 at 11:20:57 UTC, Manu via Digitalmars-d wrote: > >> It is a useful tool, but you can see how going to great lengths to write >> this explosion of paths is a massive pain in the first place, let alone >> additional overhead to comprehensively test that it works... it should >> never have been a problem to start with. >> > > Hmm... even if the code is syntactically succinct, it doesn't necessarily > mean lower complexity or that it requires less testing. You provided an > example yourself: you have generic code, which works for values, but not > for references. You need a lot of testing not because the features have > different syntax, but because they work differently, so code, which works > for one thing, may not work for another. > Eliminating static branches containing different code yields a very significant reduction in complexity. It's also DRY. I don't think I provided that example... although it's certainly true that there are semantic differences that may lead to distinct code paths, it is my experience that in the majority of cases, if I just had the ref-ness as part of the type, the rest would follow naturally. I have never encountered a situation where I would feel hindered by ref as part of the type. I think it's also easier to get from ref in the type to the raw type than the reverse (which we must do now); we are perfectly happy with Unqual!T and things like that.
Re: RFC: reference counted Throwable
On Monday, 22 September 2014 at 09:13:49 UTC, bearophile wrote: ixid: The fundamentalness of the changes seem to be sufficient that one could argue it's D3. This seems a good idea. No, it's not a good idea. Tweaking memory management shouldn't require the language branching. IMHO, this would be a suicide. Piotrek
Re: RFC: scope and borrowing
On Monday, 22 September 2014 at 12:37:47 UTC, Manu via Digitalmars-d wrote: On 22 September 2014 22:14, via Digitalmars-d wrote: On Monday, 22 September 2014 at 11:45:39 UTC, Manu via Digitalmars-d wrote: Application to scope will be identical to ref. A function that returns or receives scope that is inserted into generic code must have that property cascaded outwards appropriately. If typeof() or alias loses 'scope', then it will all go tits-up. For receiving it's not necessary, because whether or not the argument is scoped, the function can always borrow it. The lifetime of its parameter is narrower than what it gets passed. It's particularly common in D to produce templates that wrap functions. If the wrapper doesn't propagate scope outwards, then it can no longer be called by a caller who borrowed arguments which are to be forwarded to the function being called. Likewise for return values. You have a point there. For return values, the situation is a bit different: They can of course not be assigned to non-scoped variables. But the solution for this is simple: the generic code needs to use scope, too. This is precisely the problem with ref... Are you saying that ALL generic code needs to be 'scope' always? That's not semantically correct.

To be clear, I am referring to the implementation, the actual code of the generic functions, not to its signature. The signature of course needs to match the semantics of the generic function. I also over-generalized when I said that the return value cannot be assigned to non-scope. It can theoretically depend on the input, though I'm not sure whether it's a good idea to allow this:

    scope!a T scopeFunc(scope T a, scope T b);

    T* genericFunc(T)(T* input1, T* input2) {
        ...
        // this is fine in theory: input1 points to GC or global data
        // (because it's not designated as scope)
        T* temp = scopeFunc(input1, input2);
        ...
        return temp;
    }

Evidently, this generic function cannot accept scoped pointers, thus it can't take advantage of the fact that scopeFunc() does. It's therefore a good idea to make any generic (and non-generic, too) function take its parameters by scope if at all possible:

    scope!input1 T* genericFunc(T)(scope T* input1, scope T* input2) {
        ...
        scope temp = scopeFunc(input1, input2);
        ...
        return temp;
    }

This second version of the function will work with scope and non-scope inputs alike. More importantly, it doesn't depend on whether it's allowed to assign a scope return value to non-scope if its owners aren't scoped (which I'd like to avoid). Now, `genericFunc()` in turn returns a scoped reference, so any other generic code that calls it must again be treated in the same way. Everything else would be unsafe, after all. But note that this only goes as far as an actual scoped value is returned up the call-chain. Once you stop doing so (because you only need to call the scope-returning functions internally for intermediate results, for example), returning scope would no longer be necessary. It still makes sense for these higher-up functions to _accept_ scope, of course, if it's possible. Of course, this is only true as long as the generic function knows about the semantics of `scopeFunc()`. Once you're trying to wrap functions (as alias predicates, opDispatch), there needs to be another solution. I'm not sure what this could be though. I see now why you mentioned ref. But the problem is not restricted to ref and scope, it would also apply to UDAs. Maybe, because it is a more general problem independent of scope, the solution needs to be a more general one, too. As far as I can see, there's always a variadic template parameter involved (which is actually a list of aliases in most cases, right?). Would it work if aliases forwarded their storage classes, too? Thinking about it, this seems natural, because aliases mean "pass by name".
> A function that returns scope does so for a reason after all. And the generic code can't know what it is. That knowledge must be encoded in the type system. This will work even if the return value of the called function turns out not to be scoped for this particular instantiation. And all this is an implementation detail of the generic code; it won't bleed outside, unless the generic code wants to return the scoped value. In this case, simply apply the same technique, just one level higher. I can't see the solution you're trying to illustrate, can you demonstrate? I hope that the examples above illustrate what I mean. Of course, this doesn't solve the "perfect forwarding" problem, which should maybe be treated separately. Maybe you can give counter examples too, if you think it doesn't work.
Re: Library Typedefs are fundamentally broken
On 9/22/14, 2:39 AM, Don wrote: On Sunday, 21 September 2014 at 18:09:26 UTC, Andrei Alexandrescu wrote: On 9/21/14, 8:29 AM, ketmar via Digitalmars-d wrote: On Sun, 21 Sep 2014 08:15:29 -0700 Andrei Alexandrescu via Digitalmars-d wrote: alias Int1 = Typedef!(int, "a.Int1"); alias Int2 = Typedef!(int, "b.Int2"); ah, now that's cool. module system? wut? screw it, we have time-proven manual prefixing! Use __MODULE__. -- Andrei Yes, but you're advocating a hack. Oh but I very much disagree. The original premise does seem to be correct: library Typedefs are fundamentally broken. The semantics of templates do not match what one expects from a typedef: ie, declaring a new, unique type. If you have to pass __MODULE__ in, it's not really a library solution. The user code needs to pass in a nasty implementation detail in order to get a unique type. How many libraries did you use that came with no idioms for their usage? And it does seem to me that because it isn't possible to do a proper library typedef, you've attempted to redefine what a Typedef is supposed to do. And sure, if you remove the requirement to create a unique type, Typedef isn't broken. You're two paragraphs away from "library Typedefs are fundamentally broken". Now which one is it? But then it isn't very useful, either. You can't, for example, use it to define the various Windows HANDLEs (HMENU, etc), which was one of the most successful use cases for D1's typedef. alias HMENU = Typedef!(void*, __MODULE__ ~ ".HMENU"); So please s/can't/can't the same exact way built-in typedef would have done it/. Having said that, though, the success of 'alias this' does raise some interesting questions about how useful the concept of a typedef is. Certainly it's much less useful than when Typedef was created. My feeling is that almost every time you want to create a new type from an existing one, you actually want to restrict the operations which can be performed on it. 
(Eg if you have typedef money = double; then money*money doesn't make much sense). For most typedefs I think you're better off with 'alias this'. When control is needed, yah. I had some thoughts on adding policies to Typedef (convert to base type etc) but as it seems it's already unusable, I won't bring them up :o). Andrei
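As a hedged sketch of the __MODULE__ cookie idiom discussed above (using std.typecons.Typedef's actual parameter order, where the cookie follows the initializer):

```d
import std.typecons : Typedef;

// The cookie string makes each alias a distinct template instantiation,
// so the two "int typedefs" below are different, incompatible types.
// (Typedef's signature is Typedef!(T, T init, string cookie).)
alias Int1 = Typedef!(int, int.init, __MODULE__ ~ ".Int1");
alias Int2 = Typedef!(int, int.init, __MODULE__ ~ ".Int2");

static assert(!is(Int1 == Int2)); // distinct despite the same base type
```

Without the cookie, Typedef!int instantiated in two different modules would be the very same type, which is the cross-module hijacking concern raised elsewhere in this thread.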
Re: RFC: reference counted Throwable
On 9/22/14, 1:56 AM, ixid wrote: Fasten your seatbelt, it's gonna be a bumpy ride! :o) Andrei The fundamentalness of the changes seems to be sufficient that one could argue it's D3. Let's aim for not. If you're going to make major changes, wouldn't it be worth a fuller break to address some of the other unresolved and seemingly pretty major issues, such as const/immutable and ref? What are the major issues with const/immutable and ref? Andrei
Re: What are the worst parts of D?
On Saturday, 20 September 2014 at 14:22:32 UTC, Tofu Ninja wrote: On Saturday, 20 September 2014 at 13:30:24 UTC, Ola Fosheim Grostad wrote: On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote: What do you think are the worst parts of D? 1. The whining in the forums. 2. Lacks focus on a dedicated application area. 3. No strategy for getting more people on board. 4. No visible roadmap. Not really a problem with the language. Just problems. 5. Too much focus on retaining C semantics (go does a bit better) 6. Inconsistencies and hacks (too many low hanging fruits) 7. More hacks are being added rather than removing existing ones. Definitely can agree, I think it has to do with the sentiment that it is "too much like C++" It really needs to stay as C++-compatible as possible, otherwise too few people are going to use it. If C++ wasn't C-compatible, do you think it would be the successful language it is today? I don't think so.
Re: What are the worst parts of D?
On 09/22/2014 03:26 PM, Daniel Murphy wrote: "Timon Gehr" wrote in message news:lvmh5b$eo9$1...@digitalmars.com... When was int x(T)=2; introduced? At the same time as enum x(T) = 2; I think. ... Is this documented? Also, C-style array syntax would actually be string results(T)[] = "";. Nah, array type suffix goes before the template argument list. It is results!T[2], not results[2]!T.
Re: RFC: scope and borrowing
On Monday, 22 September 2014 at 11:20:57 UTC, Manu via Digitalmars-d wrote: It is a useful tool, but you can see how going to great lengths to write this explosion of paths is a massive pain in the first place, let alone additional overhead to comprehensively test that it works... it should never have been a problem to start with. Hmm... even if the code is syntactically succinct, it doesn't necessarily mean lower complexity or that it requires less testing. You provided an example yourself: you have generic code which works for values, but not for references. You need a lot of testing not because the features have different syntax, but because they work differently, so code that works for one thing may not work for another.
Re: What are the worst parts of D?
"Timon Gehr" wrote in message news:lvmh5b$eo9$1...@digitalmars.com... When was int x(T)=2; introduced? At the same time as enum x(T) = 2; I think. Also, C-style array syntax would actually be string results(T)[] = "";. Nah, array type suffix goes before the template argument list.
Re: if still there
"Rainer Schuetze" wrote in message news:lvmjqr$h13$1...@digitalmars.com... The branch is still in the doIt function: Yes. dmd didn't do any inlining at all. It is very restrained with inlining, you'll get much better results with GDC or LDC. Check again, it inlined doIt into main. It still writes out doIt even though it's now unreferenced, but that doesn't have much to do with the inliner.
Re: Escaping the Tyranny of the GC: std.rcstring, first blood
On Sunday, 21 September 2014 at 21:42:03 UTC, Ola Fosheim Grostad wrote: On Sunday, 21 September 2014 at 19:28:13 UTC, Kagamin wrote: Only an isolated cluster can safely migrate between threads. D has no means to check isolation, you should check it manually, and in addition check that the logic doesn't depend on TLS. This can easily be borked if built-in RC does not provide thread safety. Isolated data is single-threaded w.r.t. concurrent access. What thread safety do you miss? You should only check for environmental dependencies, which are not strictly related to concurrency.
Re: Library Typedefs are fundamentally broken
On Monday, 22 September 2014 at 09:39:29 UTC, Don wrote: My feeling is that almost every time when you want to create a new type from an existing one, you actually want to restrict the operations which can be performed on it. (Eg if you have typedef money = double; then money*money doesn't make much sense). For most typedefs I think you're better off with 'alias this'. `alias this` doesn't restrict what operations can be performed on the supertype.

struct Money
{
    this(double d) { amount = d; }
    double amount;
    alias amount this;
}

void main()
{
    // This doesn't compile without a constructor defined
    // that takes a double... I thought alias this took
    // care of that, but apparently not
    Money m = 2.0;
    Money n = m * m;
    assert(n == 4.0);
}
Re: RFC: scope and borrowing
On 22 September 2014 22:14, via Digitalmars-d wrote: > On Monday, 22 September 2014 at 11:45:39 UTC, Manu via Digitalmars-d wrote: > >> Application to scope will be identical to ref. A function that returns or >> receives scope that is inserted into generic code must have that property >> cascaded outwards appropriately. If typeof() or alias loses 'scope', then >> it will all go tits-up. >> > > For receiving it's not necessary, because whether or not the argument is > scoped, the function can always borrow it. The lifetime of its parameter is > narrower than what it gets passed. > It's particularly common in D to produce templates that wrap functions. If the wrapper doesn't propagate scope outwards, then it can no longer be called by a caller who borrowed arguments which are to be forwarded to the function being called. Likewise for return values. For return values, the situation is a bit different: They can of course not > be assigned to non-scoped variables. But the solution for this is simple: the > generic code needs to use scope, too. This is precisely the problem with ref... Are you saying that ALL generic code needs to be 'scope' always? That's not semantically correct. A function that returns scope does so for a reason after all. And the generic code can't know what it is. That knowledge must be encoded in the type system. This will work even if the return value of the called function turns out > not to be scoped for this particular instantiation. And all this is an > implementation of the generic code, it won't bleed outside, unless the > generic code wants to return the scoped value. In this case, simply apply > the same technique, just one level higher. > I can't see the solution you're trying to illustrate; can you demonstrate?
Re: RFC: scope and borrowing
On Monday, 22 September 2014 at 11:45:39 UTC, Manu via Digitalmars-d wrote: Application to scope will be identical to ref. A function that returns or receives scope that is inserted into generic code must have that property cascaded outwards appropriately. If typeof() or alias loses 'scope', then it will all go tits-up. For receiving it's not necessary, because whether or not the argument is scoped, the function can always borrow it. The lifetime of its parameter is narrower than what it gets passed. For return values, the situation is a bit different: They can of course not be assigned to non-scoped variables. But the solution for this is simple: the generic code needs to use scope, too. A function that returns scope does so for a reason after all. This will work even if the return value of the called function turns out not to be scoped for this particular instantiation. And all this is an implementation of the generic code, it won't bleed outside, unless the generic code wants to return the scoped value. In this case, simply apply the same technique, just one level higher. I don't see this as a problem for (new) code written with scope in mind. For existing code, of course some adjustments are necessary, but the same is true if you change existing code to be const correct, for example, or to be compatible with `shared`.
Re: RFC: scope and borrowing
On 22 September 2014 19:22, via Digitalmars-d wrote: > On Sunday, 21 September 2014 at 11:37:19 UTC, Manu via Digitalmars-d wrote: > >> On 21 September 2014 16:02, deadalnix via Digitalmars-d < >> digitalmars-d@puremagic.com> wrote: >> >> On Sunday, 21 September 2014 at 03:48:36 UTC, Walter Bright wrote: >>> >>> On 9/12/2014 6:48 PM, Manu via Digitalmars-d wrote: What happens when a scope() thing finds its way into generic code? If > the type > doesn't carry that information, then you end up in a situation like > ref. > Have > you ever had to wrestle with ref in generic code? > ref is the biggest disaster zone in D, and I think all its problems > will > translate straight to scope if you do this. > > I'm unaware of this disaster zone. >>> Well it is very real. I had to duplicate a bunch of code in my visitor >>> generator recently because of it. Getting generic code ref correct is >>> very >>> tedious, error prone, and guarantees code duplication and/or various >>> static >>> ifs all over the place. >>> >>> >> It's also extremely hard to unittest; explodes the number of static if >> paths exponentially. I'm constantly finding bugs appear a year after >> writing some code because I missed some static branch paths when >> originally >> authoring. >> > > If I understand you right, your problems come from the fact that sometimes > in a template you want ref, and sometimes you don't. > > But I think this mostly doesn't apply to scope: either you borrow things, > or you don't. In particular, when you do borrow something, you're not > interested in the owner your parameter has inside the caller, you just take > it by scope (narrowing the lifetime). Thus there needs to be no information > about it inside the callee, and you don't need different instantiations > depending on it. > > One special case where scope deduction might be desirable are template > functions that apply predicates (delegates, lambdas) to passed-in > parameters, like map and filter. 
For these, the scope-ness of the input > range can depend on whether the predicates are able to take their > parameters as scope. > Application to scope will be identical to ref. A function that returns or receives scope that is inserted into generic code must have that property cascaded outwards appropriately. If typeof() or alias loses 'scope', then it will all go tits-up.
Re: RFC: reference counted Throwable
Nick Treleaven: - It is slow to compile. Surely that's not an inherent property of Rust? We don't know yet. Perhaps Rust's type inference needs a lot of computation. Bye, bearophile
Re: RFC: reference counted Throwable
On 21/09/2014 03:35, deadalnix wrote: - It is slow to compile. Surely that's not an inherent property of Rust? - It constrains the dev too much in some paradigms, which obviously won't fit all areas of programming. Absolutely. The unique mutable borrow rule too often prevents even reading a variable, this is a big pain IMO. I wonder if this is partly because Rust doesn't have const, only immutable. - The macro system is plain horrible, The syntax is not great (I heard it may change), but I think the capabilities are actually pretty good. > and the only way to do code generation. I think you're wrong, you can add syntax extensions such as the compile-time regex support: http://blog.burntsushi.net/rust-regex-syntax-extensions "Luckily, the second way to write a macro is via a syntax extension (also known as a “procedural macro”). This is done by a compile time hook that lets you execute arbitrary Rust code, rewrite the abstract syntax to whatever you want and opens up access to the Rust compiler’s parser. In short, you get a lot of power. And it’s enough to implement native regexes. The obvious drawback is that they are more difficult to implement."
Re: RFC: scope and borrowing
On 22 September 2014 13:19, Walter Bright via Digitalmars-d < digitalmars-d@puremagic.com> wrote: > On 9/21/2014 4:27 AM, Manu via Digitalmars-d wrote: > >> It's also extremely hard to unittest; explodes the number of static if >> paths >> exponentially. I'm constantly finding bugs appear a year after writing >> some code >> because I missed some static branch paths when originally authoring. >> > > If you throw -cov while running unittests, it'll give you a report on > which code was executed and which wasn't. Very simple and useful. > It is a useful tool, but you can see how going to great lengths to write this explosion of paths is a massive pain in the first place, let alone additional overhead to comprehensively test that it works... it should never have been a problem to start with. You may argue that I didn't test my code effectively. I argue that my code should never have existed in the first place. It's wildly unsanitary, and very difficult to maintain; I can rarely understand the complexity of ref handling code looking back after some months. This was my very first complaint about D, on day 1... 6 years later, I'm still struggling with it on a daily basis. Here's some code I've been struggling with lately, tell me you think this is right: https://github.com/FeedBackDevs/LuaD/blob/master/luad/conversions/functions.d#L286 <- this one only deals with the return value. 
is broken, need to special-case properties that receive arguments by ref, since you can't pass rvalue->ref https://github.com/FeedBackDevs/LuaD/blob/master/luad/conversions/functions.d#L115 <- Ref!RT workaround, because I need a ref local https://github.com/FeedBackDevs/LuaD/blob/master/luad/conversions/functions.d#L172 <- Ref!T again, another instance where I need a ref local https://github.com/FeedBackDevs/LuaD/blob/master/luad/conversions/functions.d#L180 https://github.com/FeedBackDevs/LuaD/blob/master/luad/stack.d#L129 <- special case handling for Ref!T that I could eliminate if ref was part of the type. Note: 3rd party code never has this concession... https://github.com/FeedBackDevs/LuaD/blob/master/luad/stack.d#L448 <- more invasion of Ref!, because getValue may or may not return ref, which would be lost beyond this point, and it's not practical to static-if duplicate this entire function with another version that returns ref. https://github.com/FeedBackDevs/LuaD/blob/master/luad/stack.d#L157 <- special-case function, again because ref isn't part of the type There are many more instances of these throughout this code. This is just one example, and not even a particularly bad one (I've had worse), because there are never multiple args involved. These sorts of things come up on most projects I've worked on. The trouble with these examples, is that you can't try and imagine a direct substitution of the static branches and Ref!T with 'ref T' if ref were part of the type. This problem has deeply interfered with the code, and API concessions have been made throughout to handle it. If ref were part of the type, this code would be significantly re-worked and simplified. There would probably be one little part somewhere that did logic on T, alias with or without 'ref' as part of the type, and the cascading noise would mostly disappear. Add to all of that that I still can't pass an rvalue to a ref function (5 years later!!) 
>_< It's also worth noting that, with regard to 'ref', the above code only _just_ suits my needs. It's far from bug-free; there are places in there where I was dealing with ref, but it got so complicated, and I didn't actually make front-end use of the case in my project, that I gave up and ignored those cases... which is not really ideal considering this is a fairly popular library. I have spent days trying to get this right. If I were being paid hourly, I think I would have burned something like $2000.
Re: RFC: reference counted Throwable
On 9/21/2014 3:12 PM, Andrei Alexandrescu via Digitalmars-d wrote: On 9/21/14, 12:35 PM, "Nordlöw" wrote: On Friday, 19 September 2014 at 15:32:38 UTC, Andrei Alexandrescu wrote: Please chime in with thoughts. Why don't we all focus our efforts on upgrading the current GC to a state-of-the GC making use of D's strongly typed memory model before discussing these things? Not sure I understand, but: an important niche is apps that COMPLETELY disallow garbage collection -- Andrei It's also far from being a zero sum game. Work on one is not necessarily taking away from the other. There needs to be more 'doing' that follows the discussions than there typically is. But progress can and needs to be made on both (and many many other) fronts.
Re: RFC: scope and borrowing
On 22 September 2014 01:10, Andrei Alexandrescu via Digitalmars-d < digitalmars-d@puremagic.com> wrote: > On 9/21/14, 4:27 AM, Manu via Digitalmars-d wrote: > >> On 21 September 2014 16:02, deadalnix via Digitalmars-d >> mailto:digitalmars-d@puremagic.com>> wrote: >> >> On Sunday, 21 September 2014 at 03:48:36 UTC, Walter Bright wrote: >> >> On 9/12/2014 6:48 PM, Manu via Digitalmars-d wrote: >> >> What happens when a scope() thing finds its way into >> generic code? If the type >> doesn't carry that information, then you end up in a >> situation like ref. Have >> you ever had to wrestle with ref in generic code? >> ref is the biggest disaster zone in D, and I think all its >> problems will >> translate straight to scope if you do this. >> >> >> I'm unaware of this disaster zone. >> >> >> Well it is very real. I had to duplicate a bunch of code in my visitor >> generator recently because of it. Getting generic code ref correct >> is very tedious, error prone, and guarantees code duplication and/or >> various static ifs all over the place. >> >> >> It's also extremely hard to unittest; explodes the number of static if >> paths exponentially. I'm constantly finding bugs appear a year after >> writing some code because I missed some static branch paths when >> originally authoring. >> > > Is this because of problems with ref's definition, or a natural > consequence of supporting ref parameters? -- Andrei > It's all because ref is not part of the type. You can't capture ref with typeof() or templates, you can't make ref locals, it's hard to find if something is ref or not (detection is different than everything else), etc. The nature of it not being a type leads to static if's in every template that ref appears in, which must detect if things are ref (which is awkward), and produce multiple paths for a ref and not-ref version. If we're dealing with arguments, this might lead to num-arguments^^2 paths. 
The only practical conclusion I (and others too) have reached, is to eventually give up and invent Ref!T, but then we arrive at a new world of problems. It's surprisingly hard to write a transparent Ref template which interacts effectively with other generic code, and no 3rd party library will support it.
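The Ref!T workaround mentioned above can be sketched as follows; this is a hypothetical minimal version for illustration, not LuaD's actual implementation:

```d
// A minimal sketch of the Ref!T idiom: wrap a pointer so that "ref-ness"
// becomes part of the type and can survive typeof(), locals, and returns.
// Real versions need considerably more plumbing (const handling, safety).
struct Ref(T)
{
    private T* ptr;
    this(ref T value) { ptr = &value; }
    ref T get() { return *ptr; }
    alias get this; // forward most operations to the referenced value
}

unittest
{
    int x = 10;
    auto r = Ref!int(x); // a "ref local", which plain ref can't express
    r.get = 42;          // writes through to x
    assert(x == 42);
}
```

The pain point described in the post is exactly that third-party generic code knows nothing about this wrapper, so every boundary needs special-case unwrapping.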
Re: FOSDEM'15 - let us propose a D dev room!!!
On Monday, 22 September 2014 at 08:01:43 UTC, Iain Buclaw via Digitalmars-d wrote: On 19 August 2014 19:22, Andrei Alexandrescu via Digitalmars-d wrote: On 8/19/14, 9:25 AM, Kai Nacke wrote: On Tuesday, 19 August 2014 at 14:08:30 UTC, Andrei Alexandrescu wrote: On 8/18/14, 11:18 PM, Kai Nacke wrote: I think we should propose a D dev room for FOSDEM'15. Sounds like a great initiative!! -- Andrei Any chance you have some scientific work to do in Europe by end of January? ;-) I'll make time for traveling for any event that's important to our community. -- Andrei Me too. And it would be nice to go somewhere close to home for once. ;) Same here. Dusseldorf is quite close and I attended FOSDEM a few times. I am just not sure if I will be available by then. -- Paulo
Re: Escaping the Tyranny of the GC: std.rcstring, first blood
On Monday, 22 September 2014 at 02:34:00 UTC, Googler Lurker wrote: Go fizzled inside Google but granted has traction outside of Google. Paulo stop feeding the troll for Pete's sake. Don't be such a coward, show your face and publish your real name. Your style and choice of words reminds me of A.A. Do the man a favour and clear up this source of confusion. Locking fibers to threads will cost you more than using threadsafe features. One 300ms request can then starve waiting fibers even if you have 7 free threads. That's bad for latency, because then all fibers on that thread will get 300+ms in latency. How anyone can disagree with this is beyond me.
Re: Library Typedefs are fundamentally broken
On Sunday, 21 September 2014 at 18:09:26 UTC, Andrei Alexandrescu wrote: On 9/21/14, 8:29 AM, ketmar via Digitalmars-d wrote: On Sun, 21 Sep 2014 08:15:29 -0700 Andrei Alexandrescu via Digitalmars-d wrote: alias Int1 = Typedef!(int, "a.Int1"); alias Int2 = Typedef!(int, "b.Int2"); ah, now that's cool. module system? wut? screw it, we have time-proven manual prefixing! Use __MODULE__. -- Andrei Yes, but you're advocating a hack. The original premise does seem to be correct: library Typedefs are fundamentally broken. The semantics of templates does not match what one expects from a typedef: ie, declaring a new, unique type. If you have to pass __MODULE__ in, it's not really a library solution. The user code needs to pass in a nasty implementation detail in order to get a unique type. And it does seem to me, that because it isn't possible to do a proper library typedef, you've attempted to redefine what a Typedef is supposed to do. And sure, if you remove the requirement to create a unique type, Typedef isn't broken. But then it isn't very useful, either. You can't, for example, use it to define the various Windows HANDLEs (HMENU, etc), which was one of the most successful use cases for D1's typedef. Having said that, though, the success of 'alias this' does raise some interesting questions about how useful the concept of a typedef is. Certainly it's much less useful than when Typedef was created. My feeling is that almost every time when you want to create a new type from an existing one, you actually want to restrict the operations which can be performed on it. (Eg if you have typedef money = double; then money*money doesn't make much sense). For most typedefs I think you're better off with 'alias this'.
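A hedged sketch of the "restrict the operations" alternative (hypothetical code, not from the thread): instead of forwarding everything via alias this, whitelist only the operators that make sense for the new type.

```d
// Hypothetical Money type: addition and subtraction are allowed, but
// money * money fails to compile because '*' is not whitelisted.
struct Money
{
    private double amount;
    this(double a) { amount = a; }

    Money opBinary(string op)(Money rhs)
        if (op == "+" || op == "-") // template constraint as whitelist
    {
        return Money(mixin("amount " ~ op ~ " rhs.amount"));
    }
}

unittest
{
    auto total = Money(2.0) + Money(3.0);        // fine
    // auto nonsense = Money(2.0) * Money(3.0);  // would not compile
}
```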
Re: RFC: scope and borrowing
On Sunday, 21 September 2014 at 11:37:19 UTC, Manu via Digitalmars-d wrote: On 21 September 2014 16:02, deadalnix via Digitalmars-d < digitalmars-d@puremagic.com> wrote: On Sunday, 21 September 2014 at 03:48:36 UTC, Walter Bright wrote: On 9/12/2014 6:48 PM, Manu via Digitalmars-d wrote: What happens when a scope() thing finds its way into generic code? If the type doesn't carry that information, then you end up in a situation like ref. Have you ever had to wrestle with ref in generic code? ref is the biggest disaster zone in D, and I think all its problems will translate straight to scope if you do this. I'm unaware of this disaster zone. Well it is very real. I had to duplicate a bunch of code in my visitor generator recently because of it. Getting generic code ref correct is very tedious, error prone, and guarantees code duplication and/or various static ifs all over the place. It's also extremely hard to unittest; explodes the number of static if paths exponentially. I'm constantly finding bugs appear a year after writing some code because I missed some static branch paths when originally authoring. If I understand you right, your problems come from the fact that sometimes in a template you want ref, and sometimes you don't. But I think this mostly doesn't apply to scope: either you borrow things, or you don't. In particular, when you do borrow something, you're not interested in the owner your parameter has inside the caller, you just take it by scope (narrowing the lifetime). Thus there needs to be no information about it inside the callee, and you don't need different instantiations depending on it. One special case where scope deduction might be desirable is template functions that apply predicates (delegates, lambdas) to passed-in parameters, like map and filter. For these, the scope-ness of the input range can depend on whether the predicates are able to take their parameters as scope.
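The static-if path explosion being described can be sketched with a hypothetical toy wrapper (not code from the thread):

```d
// Because ref is not part of the type, a generic wrapper that must
// preserve ref-ness ends up with one static branch per parameter:
// with N parameters, that is 2^N combinations to write and test.
auto wrap(alias fun, T)(auto ref T arg)
{
    static if (__traits(isRef, arg))
    {
        return fun(arg);    // lvalue path: pass the original through
    }
    else
    {
        auto copy = arg;    // rvalue path: needs its own code and tests
        return fun(copy);
    }
}
```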
Re: Identifier resolution, the great implementation defined mess.
On 9/22/2014 12:02 AM, deadalnix wrote: On Monday, 22 September 2014 at 06:59:14 UTC, Walter Bright wrote: On 9/21/2014 11:09 PM, deadalnix wrote: We should simply do a lookup for local symbols, and if that fails, imported symbols. That's what it does now, i.e. lookup in the current scope, and if that fails, look in imports, if that fails, go to the enclosing scope. Can't this be made depth-first? That would seem more sensible to me, and apparently to others in this thread. After all, it seems legitimate to resolve what is under your nose before what is imported (and with lazy imports, it may even allow the compiler to avoid processing some imports). It is depth first. It starts at the innermost scope, which is the current scope. Somehow, we don't seem to be talking the same language :-( In that case, a should resolve as the parameter, all the time. Parameters are in an uplevel enclosing scope. Yes, I know there is a good reason for that. But from the programmer's perspective, not the implementer's, that does not look right. I don't know of a better rule.
Re: RFC: reference counted Throwable
ixid: The fundamentalness of the changes seems to be sufficient that one could argue it's D3. This seems a good idea. If you're going to make major changes wouldn't it be worth a fuller break to address some of the other unresolved and seemingly pretty major issues such as const/immutable and ref? This seems a bad idea, because GC and scope tracking is a sufficiently large design & implementation space. You can't face all the large problems at the same time. Bye, bearophile
Re: RFC: reference counted Throwable
Fasten your seatbelt, it's gonna be a bumpy ride! :o) Andrei The fundamentalness of the changes seems to be sufficient that one could argue it's D3. If you're going to make major changes, wouldn't it be worth a fuller break to address some of the other unresolved and seemingly pretty major issues, such as const/immutable and ref?
Re: What are the worst parts of D?
On 21 September 2014 15:54, Andrei Alexandrescu via Digitalmars-d wrote: > On 9/21/14, 1:27 AM, ponce wrote: >> >> On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote: >>> >>> >>> What do you think are the worst parts of D? >>> >> >> Proper D code is supposed to have lots of attributes (pure const nothrow >> @nogc) that brings little and makes it look bad. > > > No because deduction. -- Andrei Agreed. The time when you want to explicitly use these attributes is if you want to enforce @nogc, pure ... As it turns out, it is a good idea to enforce these from the start, rather than after you've written your program. Iain
Re: Library Typedefs are fundamentally broken
On Sunday, 21 September 2014 at 23:00:09 UTC, ketmar via Digitalmars-d wrote: On Sun, 21 Sep 2014 22:07:21 + Ola Fosheim Grostad via Digitalmars-d wrote: I am waiting for a patch... i believe that we should revive 'typedef' keyword, but i'm not fully convinced yet. so i'll wait a little more. but you guessed it right, i'm thinking about another patch. ;-) Why not introduce a std.typecons.Newtype(T) with the desired semantics in addition to Typedef(T)? Btw, there is an argument to be made _for_ the current Typedef: If it generates an unpredictable cookie every time it is used (and I would count __LINE__ as one, as it can change easily), it's very hard to keep binary compatibility. Maybe taking only __MODULE__ in consideration would be a good compromise, because it at least avoids accidental cross module hijacking.
Re: Identifier resolution, the great implementation defined mess.
On Monday, 22 September 2014 at 08:06:11 UTC, Andrei Alexandrescu wrote: D lookup rules are logical and relatively simple. If you look at my first post, you'll notice that the discussion so far touched only a fraction of the issue (and I forgot to mention opDispatch in there). That being said, I'm fairly sure we can come up with something logical. The problem right now, if we put aside the surprise of the local import, is that the various methods of resolution are defined independently, but not how they interact.
Re: Identifier resolution, the great implementation defined mess.
On 9/22/14, 12:09 AM, Walter Bright wrote: On 9/21/2014 11:36 PM, Jacob Carlborg wrote: You better write down the scope rules as well. It gets complicated with base classes, template mixins and all features available in D. Sure, I had thought they were. BTW, template mixins work exactly like imports. See "Mixin Scope" here: https://dlang.org/template-mixin D lookup rules are logical and relatively simple. In the case of local imports however, there's definitely an element of surprise, and also an issue akin to hijacking. Consider:

void main(string[] args)
{
    import some_module;
    ... use args ...
}

Let's assume module "some_module" does not define a name "args". All works fine. Time goes by, some_module gets updated to also define the name "args". Now this application is recompiled and may actually compile successfully, but the semantics has changed - it doesn't use the "args" in the parameter list, but instead the symbol exported by some_module. That's clearly something difficult to ignore. It's one of those cases in which logical and relatively simple doesn't fit the bill the same way that a pair of pants built out of simple shapes like cylinders and hemispheres won't be a good fit. We must look into this. Andrei
Re: FOSDEM'15 - let us propose a D dev room!!!
On 19 August 2014 19:22, Andrei Alexandrescu via Digitalmars-d wrote: > On 8/19/14, 9:25 AM, Kai Nacke wrote: >> >> On Tuesday, 19 August 2014 at 14:08:30 UTC, Andrei Alexandrescu wrote: >>> >>> On 8/18/14, 11:18 PM, Kai Nacke wrote: I think we should propose a D dev room for FOSDEM'15. >>> >>> >>> Sounds like a great initiative!! -- Andrei >> >> >> Any chance you have some scientific work to do in Europe by end of >> January? ;-) > > > I'll make time for traveling for any event that's important to our community. > -- Andrei > > Me too. And it would be nice to go somewhere close to home for once. ;)
Re: Identifier resolution, the great implementation defined mess.
On 9/21/2014 11:36 PM, Jacob Carlborg wrote: You better write down the scope rules as well. It gets complicated with base classes, template mixins and all features available in D. Sure, I had thought they were. BTW, template mixins work exactly like imports. See "Mixin Scope" here: https://dlang.org/template-mixin
Re: Identifier resolution, the great implementation defined mess.
On 9/21/2014 11:44 PM, Marco Leise wrote: But quite understandable that people expect them to be in the same scope, seeing as there is only one set of {}. { } introduce a new nested scope, they do not extend an existing one. Adding some shadowing warnings should deal with that, so that the earlier example with `text` being hijacked somehow errors out. Is how the current lookup rules work (for better or worse) clear to you, or are they still mysterious?
Re: Identifier resolution, the great implementation defined mess.
On Monday, 22 September 2014 at 06:59:14 UTC, Walter Bright wrote: On 9/21/2014 11:09 PM, deadalnix wrote: We should simply do a lookup for local symbols, and if that fails, imported symbols. That's what it does now, i.e. lookup in the current scope, and if that fails, look in imports, if that fails, go to the enclosing scope. Can't this be made depth-first? That would seem more sensible to me, and apparently to others in this thread. After all, it seems legitimate to resolve what is under your nose before what is imported (and with lazy imports, it may even allow the compiler to avoid processing some imports). In that case, a should resolve as the parameter, all the time. Parameters are in an uplevel enclosing scope. Yes, I know there is a good reason for that. But from the programmer's perspective, not the implementer's, that does not look right.
Re: Identifier resolution, the great implementation defined mess.
On 9/21/2014 11:09 PM, deadalnix wrote: We should simply do a lookup for local symbols, and if that fails, imported symbols. That's what it does now, i.e. lookup in the current scope, and if that fails, look in imports, if that fails, go to the enclosing scope. In that case, a should resolve as the parameter, all the time. Parameters are in an uplevel enclosing scope.
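A hedged sketch of the lookup order being described (the module some_module and its symbol are hypothetical):

```d
// Lookup proceeds: current scope, then that scope's imports, then the
// enclosing scope. So an import in the innermost scope wins over a
// parameter, which is exactly the surprise being debated here.
void f(int a) // the parameter lives in a scope enclosing the body
{
    {
        import some_module; // suppose some_module also defines "a"
        // auto x = a; // resolves to some_module.a, not the parameter,
        //             // because this scope's imports are searched
        //             // before the enclosing (parameter) scope
    }
}
```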