Re: dcollections 1.0 and 2.0a beta released
superdan Wrote:
> == Quote from Steven Schveighoffer (schvei...@yahoo.com)'s article
>> Andrei Alexandrescu Wrote:
>>> To get back to one of my earlier points, the fact that the container interfaces are unable to express iteration is a corollary of the design's problem and a climactic disharmony. My vision, in very brief, is to foster a federation of independent containers abiding by identical names for similar functionality. Then a few concept checks (a la std.range checks) can easily express what capabilities a given client function needs from a container.
>> This might have a simple answer. Dcollections implementations are not a hierarchy; just the interfaces are. I.e., there aren't many kinds of HashMaps that derive from each other. But the interfaces are not detrimental to your ideas. The only thing interfaces require is that the entities implementing them are classes and not structs. As long as you agree that classes are the right call, then interfaces can co-exist with your other suggestions without interference.
> classes suck ass. structs give ye freedom 2 define copy ctors n shit. haven't seen andre agreein' classes are da way 2 go and i hope he don't. anyway u put together some cool shit. hope andre u do a pow-wow n shit and adjust shit fer puttin' into phobos.

I think classes are the right move.

First, a collection makes more sense as a reference type. Note that both arrays and associative arrays are reference types. If collections are value types, as in C++, then copying a node-based collection means duplicating all of its nodes. Copy construction is still available through a function -- dup. Value semantics make it too easy to copy large amounts of heap data, hurting performance. Many inexperienced C++ coders pass a std::set by value, not realizing why their code is so ridiculously slow. I think one of the things that makes D so fast is that large data structures such as arrays and AA's are always passed by reference.

Second, since reference types are the right thing to do, classes are much easier to deal with. I know AA's are reference types that are structs, but the code needed to perform this feat is not trivial. The AA has only one member, a reference to the data struct, which is allocated on the heap. Any member function or property used on the AA must first check whether the implementation has been allocated yet. The only benefit this gives you, IMO, is not having to use 'new' on it. And even that has some drawbacks. For example, pass an empty AA by value to a function, and if that function adds any data to it, the data is lost. But pass an AA by value with one element in it, and the new data sticks.

A class gives you much more in terms of options -- interfaces, built-in synchronization, runtime comparison, etc. And it forces full reference semantics by default. I think regardless of whether interfaces are defined for dcollections, classes give a better set of options than structs. Also note that I intend to make all dcollections classes final, so there will be no virtual calls as long as you have a reference to the concrete type.

Is there some other reason to use structs besides copy construction?

-Steve
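Steve's argument can be condensed into a sketch. Everything here is illustrative, not dcollections' actual API: a hypothetical class-based IntSet whose only deep copy is the explicit dup, so passing it to a function copies just a class reference.

```d
// Hypothetical container; `add`, `contains`, and `dup` are made-up names
// for illustration, not dcollections' API.
class IntSet
{
    private bool[int] data;        // backing storage (an AA, itself a reference type)

    void add(int x) { data[x] = true; }
    bool contains(int x) const { return (x in data) !is null; }

    // Explicit copy: the only way to duplicate the heap data.
    IntSet dup() const
    {
        auto copy = new IntSet;
        foreach (k, v; data)
            copy.data[k] = v;
        return copy;
    }
}

void addSome(IntSet s) { s.add(42); }  // cheap: copies only the class reference

void main()
{
    auto s = new IntSet;
    addSome(s);
    assert(s.contains(42));  // caller sees the mutation -- full reference semantics
    auto t = s.dup();        // deep copy is explicit, never accidental
    t.add(7);
    assert(!s.contains(7));  // the copy is independent
}
```

Contrast with C++, where passing a std::set by value silently performs the node-by-node copy that dup makes explicit here.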
Re: dcollections 1.0 and 2.0a beta released
== Quote from Steven Schveighoffer (schvei...@yahoo.com)'s article
> I think classes are the right move. First, a collection makes more sense as a reference type. [snip]
> Is there some other reason to use structs besides copy construction?
> -Steve

memory management n shit. with a struct u can use refcounting n malloc n realloc n shit. still stays a reference type. nothing gets fucked up.

den there's all that null ref shit. with a class u have

    void foo(container!shit poo)
    {
        poo.addElement(Shit(diarrhea));
    }

dat works with struct but don't work with motherfucking classes. u need to write:

    void foo(container!shit poo)
    {
        if (!poo) poo = new container!shit; // fuck dat shit
        poo.addElement(Shit(diarrhea));
    }

u feel me?
Re: dcollections 1.0 and 2.0a beta released
superdan Wrote:
> == Quote from Steven Schveighoffer (schvei...@yahoo.com)'s article
>> Is there some other reason to use structs besides copy construction?
>> -Steve
> memory management n shit. with a struct u can use refcounting n malloc n realloc n shit. still stays a reference type. nothing gets fucked up.

This is not necessary with purely memory-based constructs -- the GC is your friend. The custom allocator ability in dcollections should provide plenty of freedom for memory allocation schemes.

> den there's all that null ref shit. with a class u have
>
>     void foo(container!shit poo)
>     {
>         poo.addElement(Shit(diarrhea));
>     }
>
> dat works with struct but don't work with motherfucking classes. u need to write:
>
>     void foo(container!shit poo)
>     {
>         if (!poo) poo = new container!shit; // fuck dat shit
>         poo.addElement(Shit(diarrhea));
>     }
>
> u feel me?

It doesn't work. Just with your example, you won't get what you expect. Try it with AA's. Even your class example doesn't make any sense.

-Steve
Re: dcollections 1.0 and 2.0a beta released
== Quote from Steven Schveighoffer (schvei...@yahoo.com)'s article
> [snip]
> This is not necessary with purely memory-based constructs -- the GC is your friend. The custom allocator ability in dcollections should provide plenty of freedom for memory allocation schemes.

how do u set up yer custom allocator to free memory? u cant tell when its ok. copying refs iz under da radar. dats my point.

> It doesn't work.

wut? it don't work? whaddaya mean it dun work? is you crazy? what dun work? maybe therez sum misundercommunication. repeating. if container is struct this shit works:

    void foo(container!shit poo)
    {
        poo.addElement(Shit(diarrhea));
    }

dun tell me it dun work. i dun explain shit again. it works coz a struct cant be null. but a struct can be a ref if it only haz one pointer inside. methinks the builtin hash iz dat way. if container iz class dat shit dun work. u need to write dis shit:

    void foo(container!shit poo)
    {
        if (!poo) poo = new container!shit; // fuck dat shit
        poo.addElement(Shit(diarrhea));
    }

dat sucks bull ballz.
Re: dcollections 1.0 and 2.0a beta released
superdan Wrote:
> == Quote from Steven Schveighoffer (schvei...@yahoo.com)'s article
> [snip]
> how do u set up yer custom allocator to free memory? u cant tell when its ok. copying refs iz under da radar. dats my point.

It frees an element's memory when the element is removed from the container. The container itself is managed by the GC.

> [snip]
>> It doesn't work.
> wut? it don't work? whaddaya mean it dun work? is you crazy? what dun work? maybe therez sum misundercommunication.

    void foo(int[int] x)
    {
        x[5] = 5;
    }

    void main()
    {
        int[int] x;
        foo(x);
        assert(x[5] == 5); // fails
    }

-Steve
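Spelling out both halves of Steve's earlier observation (an empty AA loses additions, a non-empty one keeps them) -- a sketch assuming the AA semantics of the time, where the single implementation pointer inside the AA handle is allocated lazily on first insertion:

```d
void foo(int[int] x)
{
    x[5] = 5;  // x is a by-value copy of the one-pointer AA handle
}

void main()
{
    int[int] empty;
    foo(empty);
    // The implementation block was allocated inside foo's *copy* of the
    // handle, so the caller never sees the insertion.
    assert(5 !in empty);

    int[int] nonEmpty;
    nonEmpty[1] = 1;  // allocation happens here, in the caller
    foo(nonEmpty);
    // Now both copies of the handle point at the same implementation,
    // so the new data sticks.
    assert(nonEmpty[5] == 5);
}
```

This is the "partially-nullable" behavior Ellery complains about below: reference semantics that only kick in once the AA is non-empty.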
Re: dcollections 1.0 and 2.0a beta released
On 05/21/2010 09:14 AM, Steven Schveighoffer wrote:
> Second, since reference types are the right thing to do, classes are much easier to deal with. I know AA's are reference types that are structs, but the code needed to perform this feat is not trivial. [snip] For example, pass an empty AA by value to a function, and if that function adds any data to it, it is lost. But pass an AA by value with one element in it, and the new data sticks. [snip]

Wow. A partially-nullable type. Great. Now I have to review everywhere I ever used an AA. Thanks, D.

Is there any serious drawback to something like

    (int[int]).init = InitializedAA!(int,int)

?
Re: dcollections 1.0 and 2.0a beta released
On Fri, May 21, 2010 at 9:43 AM, Steven Schveighoffer schvei...@yahoo.com wrote:
> [snip]
>
>     void foo(int[int] x)
>     {
>         x[5] = 5;
>     }
>
>     void main()
>     {
>         int[int] x;
>         foo(x);
>         assert(x[5] == 5); // fails
>     }

And with arrays at least it's even more insidious, because sometimes it will seem to work, and sometimes it won't.

    void foo(int[] x)
    {
        x ~= 10;
    }

The caller's .length will never get updated by that, but it won't crash, so it may take a while to find the bug. Very easy bug to get caught by in D. I'm pretty sure that one has zapped me three or four times at least. Probably because I started out thinking I wasn't going to modify the length of an array in a particular function, then later decided to (or some function that function calls does).

--bb
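Bill's array case, sketched with the "sometimes seems to work" part made explicit. A D slice is a (pointer, length) pair copied by value into the callee:

```d
void foo(int[] x)
{
    x ~= 10;   // may grow in place or reallocate; either way, x is a local
               // copy of the slice, so the caller's length never changes
    x[0] = -1; // visible to the caller only if ~= did NOT relocate the data
}

void main()
{
    int[] a = [1, 2, 3];
    foo(a);
    assert(a.length == 3); // the appended element is never visible here
    // Whether a[0] is -1 at this point depends on whether the append
    // relocated the block -- exactly why the bug is so insidious.
}
```

The length bug is deterministic; it is the aliasing of the element data that comes and goes with reallocation.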
Re: dcollections 1.0 and 2.0a beta released
On 05/19/2010 07:21 PM, bearophile wrote:
> Andrei Alexandrescu:
>> Destroy me :o).
> Your ideas are surely interesting, but I don't think there's a simple way to change his code according to your ideas. People can use dcollections in the following weeks and months, and when you have implemented your ideas the people that like them can switch to using your collections.
> Bye,
> bearophile

I actually believe there is a simple transition path. In essence the interfaces in model/ should be rewritten as isXxx tests and the containers should have all of their methods final.

Andrei
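A rough sketch of what such isXxx tests might look like, modeled on std.range's isInputRange idiom. All the names here (isAddable, add, ElementType) are hypothetical, not dcollections' or Phobos' actual API:

```d
// Capability test: does C support add(element)?  Modeled on the
// is(typeof({...})) idiom used by std.range's isInputRange.
template isAddable(C)
{
    enum isAddable = is(typeof({
        C c = C.init;
        c.add(C.ElementType.init);  // assumes a C.ElementType member
    }));
}

// A client function states the capability it needs via a template
// constraint -- no interface, hence no virtual call, and the container
// can be a final class or even a struct.
void fill(C, E)(C c, E[] items...)
    if (isAddable!C)
{
    foreach (item; items)
        c.add(item);
}
```

The point of the transition path: each model/ interface becomes one such test, and client code that formerly took `Addable!(int)` takes `(C)(C c) if (isAddable!C)` instead.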
Re: dcollections 1.0 and 2.0a beta released
On 05/21/2010 11:55 AM, Ellery Newcomer wrote:
> On 05/21/2010 09:14 AM, Steven Schveighoffer wrote:
>> [snip] For example, pass an empty AA by value to a function, and if that function adds any data to it, it is lost. But pass an AA by value with one element in it, and the new data sticks. [snip]
> Wow. A partially-nullable type. Great. Now I have to review everywhere I ever used an AA. Thanks, D.
> Is there any serious drawback to something like
>     (int[int]).init = InitializedAA!(int,int)
> ?

Or should one just always give an AA param either a const or ref modifier?
Re: dcollections 1.0 and 2.0a beta released
On 05/19/2010 07:57 PM, Bill Baxter wrote:
> On Wed, May 19, 2010 at 4:01 PM, Andrei Alexandrescu seewebsiteforem...@erdani.org wrote:
>> My vision, in very brief, is to foster a federation of independent containers abiding by identical names for similar functionality. Then a few concept checks (a la std.range checks) can easily express what capabilities a given client function needs from a container. Destroy me :o).
> So instead of STL's concept hierarchy, you have essentially concept tags. Very Web 2.0. :-) I agree that there doesn't seem to be any coding benefit to STL's concepts being hierarchical. If you need a push_back(), you've got to check for push_back(). The main benefit seems to be for documentation purposes, allowing you to say things like "bidirectional_iterator has this and that, plus everything in forward_iterator". But that could easily be rephrased as "it has backward_iteration plus forward_iteration", with two pages describing those two tags. So I like the sound of it. But it seems actually a pretty small departure from the STL approach, in practice.

Well, in fact STL has a concept hierarchy for iterators (which D also has for ranges), and a flat, unstructured approach to containers. I don't mind keeping what STL does if there's no good reason to change. One change I do think is beneficial is making containers reference types by default.

Andrei
Re: dcollections 1.0 and 2.0a beta released
On 05/19/2010 08:42 PM, Steven Schveighoffer wrote:
> Andrei Alexandrescu Wrote:
>> To get back to one of my earlier points, the fact that the container interfaces are unable to express iteration is a corollary of the design's problem and a climactic disharmony. [snip]
> This might have a simple answer. Dcollections implementations are not a hierarchy, just the interfaces are.

Without final, they are the roots of a hierarchy. But I understand you are making containers final, which is great.

> I.e. there aren't many kinds of HashMaps that derive from each other. But the interfaces are not detrimental to your ideas. The only thing interfaces require is that the entities implementing them are classes and not structs. As long as you agree that classes are the right call, then interfaces can co-exist with your other suggestions without interference.

This brings back a discussion I had with Walter a while ago, with echoes in the newsgroup. Basically the conclusion was as follows: if a container never escapes the addresses of its elements, it can manage its own storage. That forces, however, the container to be a struct, because copying references to a class container would break that encapsulation. I called those "perfectly encapsulated" containers, and I think they are good candidates for manual memory management because they tend to deal in relatively large chunks. I noticed that your collections return things by value, so they are good candidates for perfect encapsulation.

> Yes, if you want to define "this function needs something that is both addable and purgeable", I don't have an interface for that. But a concept can certainly define that generically (which is what you want anyway), or you could just say "I need a List" and get those functions also. It also does not force entities other than dcollections objects to be classes; they could be structs and implement the correct concepts. I myself don't really use the interface aspect of the classes; it is mostly a carryover from the Java/Tango inspirations.

I don't know Tango, but Java's containers are a terrible example to follow. Java's container library is an ill-advised design on top of an underpowered language, patched later with some half-understood seeming of genericity. I think Java containers are a huge disservice to the programming community because they foster bad design.

> But I can see one good reason to keep them -- binary interoperability. For example, it might be the case some day, when D has good support for dynamic libraries, that a library exposes some piece of itself as a Map or List interface.

I need to disagree with that. I've done and I do a ton of binary interoperability stuff. You never expose a generic container interface! Interoperable objects always embody high-level logic that is specific to the application. They might use containers inside, but they invariably expose high-level, application-specific functionality.

> So my answer is -- go ahead and define these concepts and required names, and you can ignore the interfaces if they don't interest you. They do not subtract from the possibilities, and others may find good use for them. Does that make sense?

I understand I could ignore the interfaces and call it a day, but it seems that at this point we are both convinced they are not quite good at anything: you only put them in because you suffered the Stockholm syndrome with Java, and I hate them with a passion. Why would we keep in the standard library bad design with the advice that if you don't like it, ignore it?

Andrei
Re: dcollections 1.0 and 2.0a beta released
== Quote from Steven Schveighoffer (schvei...@yahoo.com)'s article
> [snip]
>
>     void foo(int[int] x)
>     {
>         x[5] = 5;
>     }
>
>     void main()
>     {
>         int[int] x;
>         foo(x);
>         assert(x[5] == 5); // fails
>     }
>
> -Steve

wrote a long post but it got lost. shit. bottom line dats a bug in dmd or phobos.
Re: dcollections 1.0 and 2.0a beta released
On 05/19/2010 09:59 PM, Robert Jacques wrote:
> On Wed, 19 May 2010 21:42:35 -0400, Steven Schveighoffer schvei...@yahoo.com wrote:
>> [snip] So my answer is -- go ahead and define these concepts and required names, and you can ignore the interfaces if they don't interest you. They do not subtract from the possibilities, and others may find good use for them. Does that make sense?
>> -Steve
> Yes and no. I understand where you're coming from, but I think it's a bad idea. First, I think it needlessly expands the radius of comprehension needed to understand and use the library. (See "Tangled up in tools", http://www.pragprog.com/magazines/2010-04/tangled-up-in-tools)

For the record, I strongly agree with this.

> Second, I think designing a library to be flexible enough to meet some future, anticipated need (e.g. dlls) is a good idea, but actually implementing vaporous future needs is fraught with peril; it's too easy to guess wrong.
> Third, interface-based design is viral; if library X uses interfaces then I have to use interfaces to interface with it. And if another library Y uses classes, then I'm going to have to write a (needless) wrapper around one of them.

That's a good argument as well. I like to put it a different way: you can get the advantages of an interface by wrapping a struct, but you can't get the advantages of a struct by wrapping an interface.

Andrei
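Andrei's asymmetry ("wrap a struct to get an interface, but not the reverse") can be put in code. IntList, ArrayList, and ListWrapper are made-up names for illustration:

```d
// The interface, needed only at a boundary that genuinely requires
// runtime polymorphism (e.g. across a dynamic-library edge).
interface IntList
{
    void add(int x);
    size_t length();
}

// Value-type implementation: no vtable, calls are direct and inlinable,
// and it can manage its own storage.
struct ArrayList
{
    int[] data;
    void add(int x) { data ~= x; }
    size_t length() { return data.length; }
}

// The adapter: virtual dispatch is paid for only where it is needed.
// Going the other way -- recovering direct, inlinable calls from an
// IntList reference -- is not possible.
class ListWrapper : IntList
{
    ArrayList impl;
    void add(int x) { impl.add(x); }
    size_t length() { return impl.length; }
}

void main()
{
    IntList l = new ListWrapper;
    l.add(1);
    l.add(2);
    assert(l.length == 2);
}
```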
Re: dcollections 1.0 and 2.0a beta released
On 05/20/2010 08:22 AM, Steven Schveighoffer wrote:
> Michel Fortin Wrote:
>> On 2010-05-20 06:34:42 -0400, Steven Schveighoffer schvei...@yahoo.com said:
>>> I understand these points, but I'm already using interfaces to copy data between containers. I don't have to, I could have used generic code, but this way, only one function is instantiated to copy data from all the other containers. The problem with using generic code is that the compiler will needlessly duplicate functions that are identical.
>> One question. Have you calculated the speed difference between using an interface and using generic code? Surely going through all those virtual calls slows things down a lot. I do like interfaces in principle, but I fear it'll make things much slower when people implement things in terms of interfaces. That's why I'm not sure it's a good idea to offer container interfaces in the standard library.
> It's not that much slower. You get a much higher speedup from things that can be inlined than from virtual vs. non-virtual. However, I should probably make all the functions in the concrete implementations final. I made several of them final, but I should do it across the board.

Yup. Even Java does that. I forget whether it was a recanting of a former stance (as was the case with synchronized) or whether things were like that from the get-go.

> One thing I just thought of -- in dcollections, similar types can be compared to one another. For example, you can check to see if a HashSet is equal to a TreeSet. But that would not be possible without interfaces.

Of course it would be possible. You write a generic function that takes two generic containers and constrain the inputs such that at least one has element lookup capability. (Complexity-oriented design for the win!)

Andrei
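The generic alternative Andrei describes might look like the sketch below; `contains`, `length`, and iterability via foreach are assumed capabilities, not a specific dcollections API:

```d
// Cross-container equality as a free generic function: works for any
// pair of containers where A is iterable and B supports element lookup,
// with no shared interface required.
bool setEqual(A, B)(A a, B b)
    if (is(typeof({ foreach (e; a) { bool found = b.contains(e); } })))
{
    if (a.length != b.length)
        return false;
    foreach (e; a)
        if (!b.contains(e))   // B's lookup cost drives overall complexity
            return false;
    return true;
}
```

Because the constraint names a capability rather than a type, a HashSet/TreeSet comparison and a TreeSet/HashSet comparison can dispatch to whichever side has the cheaper lookup -- the "complexity-oriented design" being alluded to.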
Re: dcollections 1.0 and 2.0a beta released
On 05/20/2010 06:17 AM, Michel Fortin wrote:
> On 2010-05-20 06:34:42 -0400, Steven Schveighoffer schvei...@yahoo.com said:
>> I understand these points, but I'm already using interfaces to copy data between containers. [snip] The problem with using generic code is that the compiler will needlessly duplicate functions that are identical.
> One question. Have you calculated the speed difference between using an interface and using generic code? Surely going through all those virtual calls slows things down a lot. I do like interfaces in principle, but I fear it'll make things much slower when people implement things in terms of interfaces. That's why I'm not sure it's a good idea to offer container interfaces in the standard library.

There will be differences, but let's keep in mind that that's one of several arguments against container interfaces.

Andrei
Re: dcollections 1.0 and 2.0a beta released
On 05/20/2010 09:14 AM, Pelle wrote:
> On 05/20/2010 03:22 PM, Steven Schveighoffer wrote:
>> One thing I just thought of -- in dcollections, similar types can be compared to one another. For example, you can check to see if a HashSet is equal to a TreeSet. But that would not be possible without interfaces.
>> -Steve
> I'm sorry, but I think that's a misfeature. In my opinion, a tree is not equal to a hash table, ever.

Yes. By the way, TDPL's dogma imposes that a == b for classes means they have the same dynamic type.

Andrei
Re: dcollections 1.0 and 2.0a beta released
On 05/20/2010 09:41 AM, Michel Fortin wrote:
> On 2010-05-20 08:30:59 -0400, bearophile bearophileh...@lycos.com said:
>> Michel Fortin:
>>> Surely going through all those virtual calls slows things down a lot.
>> Right. But the purpose of a good compiler is to kill those, devirtualizing. LLVM devs are working on this too. See:
>> http://llvm.org/bugs/show_bug.cgi?id=6054
>> http://llvm.org/bugs/show_bug.cgi?id=3100
> Devirtualization is only possible in certain cases: when the function knows exactly which type it'll get. But downcasting to a more generic type and passing it around function calls strips it of this precise information required for devirtualization.

(I assume you meant upcasting.) Even then, it's possible to accelerate calls by doing method casing, type casing, class hierarchy analysis...

> The only way to propagate the exact type is to either instantiate a new version of the function you call for that specific type (which is what a template does) or inline it (because inlining also creates a new instance of the function, inline inside the caller). For instance:
>
>     void test1(List list)
>     {
>         list.clear(); // can't devirtualize, since we do not know which kind of list we'll get
>     }
>
>     void test2()
>     {
>         List list = new ArrayList;
>         list.clear(); // now the compiler can see we'll always have an ArrayList; can devirtualize
>     }
>
>     void test3(L)(L list)
>     {
>         list.clear(); // the caller's type is propagated; can devirtualize if the caller can
>     }
>
>> Steven Schveighoffer:
>>> The problem with using generic code is that the compiler will needlessly duplicate functions that are identical.
>> See the -mergefunc compiler switch of LDC, to merge identical functions (like ones produced by template instantiations). This feature is not very powerful yet, but it's present and I have seen it work.
> Indeed. I'm no expert in linkers, but in my opinion this is one of the most basic optimizations a linker should perform. And C++ has pushed linkers to do that for years now, so I'd expect most linkers to do it already. The problem with templates is more the multiple slightly different instantiations. In general this is good for performance, but it's only needed for the code paths that need to be fast.

I think generic containers should be fast. There's been talk that Walter's slow porting of the linker to C and then D will put him in the position to introduce such an optimization.

Andrei
Re: dcollections 1.0 and 2.0a beta released
On 05/20/2010 02:47 PM, bearophile wrote: Michel Fortin: Devirtualization is only possible in certain cases: when the function knows exactly which type it'll get.

You are wrong: in most cases there are ways to devirtualize even when the runtime type isn't exactly known, but sometimes doing so takes too much work. This is probably why the C# .NET runtime doesn't perform this optimization.

If we get deeper into this branch, we'll forget where we started from (are interfaces sensible for this design?) and we'll reframe the interfaces vs. no interfaces decision into a speed loss vs. no speed loss decision. There are other arguments to look at besides performance. Andrei
Re: dcollections 1.0 and 2.0a beta released
On Fri, May 21, 2010 at 10:50 AM, superdan su...@dan.org wrote:

    void foo(int[int] x) {
        x[5] = 5;
    }

    void main() {
        int[int] x;
        foo(x);
        assert(x[5] == 5); // fails
    }

-Steve

wrote a long post but it got lost. shit. bottom line dats a bug in dmd or phobos.

Unfortunately it works exactly as designed. --bb
Re: dcollections 1.0 and 2.0a beta released
Andrei Alexandrescu wrote: I wrote a solution to the problem in native D. It goes like this:

    alias Container!(int, addable | purgeable) Messerschmidt;

    void messWith(Messerschmidt i) {
        ... use i's capabilities to add and purge ...
    }

I agree with Michel Fortin that the | is questionable. I'd like to suggest instead that it should be a variadic list of names, like:

    alias Container!(int, addable, purgeable) Messerschmidt;

Perhaps the names should follow a naming convention:

    alias Container!(int, ContainerAddable, ContainerPurgeable) Messerschmidt;

The problem with using scoped names, like Container.Addable, is that scoped names cannot be added to.
Re: dcollections 1.0 and 2.0a beta released
Andrei Alexandrescu wrote: On 05/19/2010 09:59 PM, Robert Jacques wrote: Yes and No. I understand where you're coming from, but I think it's a bad idea. First, I think it needlessly expands the radius of comprehension needed to understand and use the library. (See Tangled up in tools http://www.pragprog.com/magazines/2010-04/tangled-up-in-tools) For the record, I strongly agree with this. I do too, but that's the easy part. Living up to those ideals is extremely hard, usually because most designers think they have designed simple interfaces when everyone else thinks they didn't. In this whole container discussion, I'd like to point out something Andrei pointed out to me. The one extremely successful example of pluggable components is the unix filter model. Essentially, one program pipes its output to the next, which pipes its output to the next, etc. The unix console is designed around that paradigm. If we can get anywhere close to that level of success with ranges and containers, we should all be well pleased.
Re: dcollections 1.0 and 2.0a beta released
Walter Bright wrote: If we can get anywhere close to that level of success with ranges and containers, we should all be well pleased.

Mike Taylor has a phrase for that which I think is well-coined: impedance matching, defined as the work necessary to get one library module to work with another library module. One example of bad impedance matching is C++ iostreams' attempt to make a memory buffer look like a file. Phobos propagated that mistake in its own streams.
Re: dcollections 1.0 and 2.0a beta released
Hello superdan,

dun tell me it dun work. i dun explain shit again. it works coz a struct cant be null. but a struct can be a ref if it only haz one pointer inside. methinks the builtin hash iz dat way.

    void foo(container!shit poo) {
        if(!poo) poo = new container!shit; // fuck dat shit
        poo.addElement(Shit(diarrhea));
    }

dat sucks bull ballz.

That code is broken: if poo is of a class type, that just new's a container, adds an element, and then drops the reference to get GC'ed. To make it do anything generally useful, it would need another level of indirection.

-- ... IXOYE
Re: dcollections 1.0 and 2.0a beta released
Hello Vladimir, On Thu, 20 May 2010 04:42:35 +0300, Steven Schveighoffer schvei...@yahoo.com wrote: interfaces Does that imply that the most important methods are virtual? If so, say good-bye to inlining, and hello to an additional level of dereferencing. From a technical standpoint there is no reason that a method needs to be called virtually from a class reference just because the same method gets called virtually from an interface reference. -- ... IXOYE