Re: DConf 2013 Day 2 Talk 5: A Precise Garbage Collector for D by Rainer Schütze
On 27.06.2013 19:33, bearophile wrote: Andrei Alexandrescu: http://www.reddit.com/r/programming/comments/1fpw2r/dconf_2013_day_2_talk_5_a_precise_garbage/ Another thing to take into account while designing a more precise garbage collector is possible special-casing for Algebraic (and Variant, and more generally for some standardized kind of tagged union): http://d.puremagic.com/issues/show_bug.cgi?id=5057 In an Algebraic there is run-time information for the GC to decide whether there are pointers inside it to follow or not. It's mostly a matter of letting the GC recognize and use such information. In the proposed implementation, the gc_emplace function can be used to pass this information to the GC. This would need to be called whenever the location of pointers changes, so it's not high-performance.
Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu
On Sunday, 30 June 2013 at 19:45:06 UTC, Joakim wrote: OK, glad to hear that you wouldn't be against it. You'd be surprised how many who use permissive licenses still go nuts when you propose to do exactly what the license allows, i.e. close up parts of the source. Because people don't just care about the strict legal constraints, but also about the social compact around software. Often people choose permissive licenses because they want to ensure other free software authors can use their software without encountering the licensing incompatibilities that can result from the various forms of copyleft. Closing up their software is rightly seen as an abuse of their goodwill. In other cases there may be a broad community consensus that builds up around a piece of software, that this work should be shared and contributed to as a common good (e.g. X.org). Attempts to close it up violate those social norms and are rightly seen as an attack on that community and the valuable commons they have cultivated. Community anger against legal but antisocial behaviour is hardly limited to software, and is a fairly important mechanism for ensuring that people behave well towards one another. Since you have been so gracious to use such permissive licenses for almost all of D, I'm sure someone will try the closed/paid experiment someday and see which of us is right. :) Good luck with that :-) By the way, you mentioned a project of your own where you employed the short-term open core model you describe. Want to tell us more about that? Regardless of differences of opinion, it's always good to hear about someone's particular experience with a project.
Re: DConf 2013 Day 2 Talk 5: A Precise Garbage Collector for D by Rainer Schütze
Rainer Schuetze: In the proposed implementation, the gc_emplace function can be used to pass this information to the GC. This would need to be called whenever the location of pointers changes, so it's not high-performance. Thank you for the answer. Let me see if I understand what you are saying. If I have a use of Algebraic like this, at line 4 x contains no GC-managed pointers, while at line 6 it contains a pointer to the array data:

import std.variant: Algebraic;
void main() {
    alias T = Algebraic!(ulong, int[]);
    T x = 1UL;           // line 4
    auto items = [1, 2];
    x = items;           // line 6
    x = 2UL;             // line 7
    x = items;           // line 8
}

You say that every time you change the _type_ of the contents of x you have to call gc_emplace. But isn't the D GC called only when an allocation occurs? So what's the point of calling gc_emplace (at lines 7 and 8) if no garbage collection happens? Isn't it possible for the GC to use (and update) this information lazily, only when a collection occurs? (Extra note: Algebraic is meant for functional-style programming, where data is mostly immutable. This means you don't change the contents and the type of an algebraic variable once it is assigned, so usually you call gc_emplace only once for such a variable.) Bye, bearophile
Re: Announcing bottom-up-build - a build system for C/C++/D
Hooo, a self-contained build tool? That's cool.

1. Are arbitrary make-style commands supported? For example, on Windows one may want to compile resources. Resources consist of a declaration .rc file, icons, and manifest files, which are compiled into a .res file; only the .rc file is passed to the resource compiler, but the other files are still dependencies, of course.

2. How does one do heterogeneous linking? Again, on Windows an executable can be linked from code .obj's, a resource .res and a module definition .def: gcc main.o r.res mytool.def -o mytool.exe

3. Probably not relevant for big projects: can the build workspace be created automatically by bub, instead of explicitly by bub-config, for some default mode?

4. If a build server does builds from scratch, wouldn't it be better for performance to compile several source files in one command? This also applies to user builds: when one wants to install a project from source, one usually does it only once and from scratch. Do you have any idea how this affects compilation speed for C, C++ and D? Though the DMD frontend can run out of memory.
Re: Bugfix release 0.9.16
Am 01.07.2013 11:43, schrieb MrSmith: Hello, is it possible to build a package consisting of a few subpackages using dub? Or do I need to build all subpackages manually? You should be able to do that by adding all sub-packages as dependencies in the parent package and then building the parent package as a library. Derelict is configured that way: https://github.com/aldacron/Derelict3/blob/master/package.json
Re: Bugfix release 0.9.16
I have tried to build a test project consisting of 2 libraries.

{
    "name": "project",
    "description": "An example project skeleton",
    "homepage": "http://example.org",
    "copyright": "Copyright © 2000, Your Name",
    "targetType": "library",
    "authors": ["Your Name"],
    "dependencies": {
        "project:lib1": "~master",
        "project:lib2": "~master"
    },
    "subPackages": [
        {
            "name": "lib1",
            "targetPath": "lib",
            "targetType": "library",
            "sourcePaths": ["source/lib1"],
            "targetName": "app1"
        },
        {
            "name": "lib2",
            "targetPath": "lib",
            "dependencies": { "project:lib1": "~master" },
            "targetType": "library",
            "sourcePaths": ["source/lib2"],
            "targetName": "app1"
        }
    ]
}

Here is the error log: http://pastebin.com/MBhzeuwZ
Re: DConf 2013 Closing Keynote: Quo Vadis by Andrei Alexandrescu
On 7/1/2013 2:04 PM, Brad Roberts wrote: On 7/1/13 11:42 AM, Walter Bright wrote: On 7/1/2013 10:45 AM, Joakim wrote: Then they should choose a mixed license like the Mozilla Public License or CDDL, which keeps OSS files open while allowing linking with closed source files within the same application. If they instead chose a license that allows closing all source, one can only assume they're okay with it. In any case, I could care less if they're okay with it or not, I was just surprised that they chose the BSD license and then were mad when someone was thinking about closing it up. I should point out that the Boost license was chosen for Phobos specifically because it allowed people to copy it and use it for whatever purpose, including making closed source versions, adapting them for use with Go :-), whatever. Actually, Boost was specifically chosen because it didn't require attribution when redistributing. If BSD hadn't had that clause we probably would be using it instead. That was indeed another important reason for it. But we were well aware of and approved of the idea that people could take it and make closed source versions.
Re: D vs C++ - Where are the benchmarks?
A while ago I opened http://versusit.org, a site about comparing languages and technologies. I have some material about D, but it is not yet directly compared with C++. If anybody wants to help me with articles and with organizing the material, I would be very thankful.
Re: D vs C++ - Where are the benchmarks?
Am 01.07.2013 03:07, schrieb Kapps: If you're concerned about performance, I'd recommend against using DMD for your release builds. GDC and LDC will give much better performance, and GDC works perfectly fine on Windows. LDC has some problems with exception handling AFAIK on Windows. GDC has the same exception problems as LDC: no support for SEH, but exceptions do work. Only the transition from Windows-internal exceptions to D exceptions doesn't work properly. But even using Visual Studio you need to add special flags, or use __try/__except, to catch those, so normally it's not a problem.
Re: D vs C++ - Where are the benchmarks?
On Monday, 1 July 2013 at 02:53:24 UTC, Jonathan M Davis wrote: On Monday, July 01, 2013 04:37:43 Mehrdad wrote: On Sunday, 30 June 2013 at 20:49:28 UTC, Peter Alexander wrote: sometimes faster Would love an example that demonstrates it! Anything involving taking a lot of substrings is likely to be faster in D thanks to slices (which is one of the main reasons that Tango's xml parser is so lightning fast). You could write the same code in C++, but it's harder, because slices aren't built-in, and you have no GC, probably forcing you to create your own string type that supports slices and does reference counting if you want a similar effect. - Jonathan M Davis Well... in C++, a slice is called an iterator pair. If you just:

typedef std::pair<std::string::const_iterator, std::string::const_iterator> string_slice;

then there is no reason you can't do it... The only problem is that it is not a standard idiom in C++, so nobody ever thinks about doing this, and much less actually ever does it. There is a *little* bit of a barrier to entry too. I've done this once, about two years ago (before I knew about D), because I needed a subview of a vector. My typedef's name was shallow_vector. It was a fun experience, given that I didn't know about the range concept back then :) In any case, if you *do* want to go there, it doesn't really require you to create that much new stuff, especially not your own string/vector type.
Re: D vs C++ - Where are the benchmarks?
On Monday, July 01, 2013 08:28:54 monarch_dodra wrote: On Monday, 1 July 2013 at 02:53:24 UTC, Jonathan M Davis wrote: [...] Well... in C++, a slice is called an iterator pair. If you just:

typedef std::pair<std::string::const_iterator, std::string::const_iterator> string_slice;

then there is no reason you can't do it... The only problem is that it is not a standard idiom in C++, so nobody ever thinks about doing this, and much less actually ever does it. [...] In any case, if you *do* want to go there, it doesn't really require you to create that much new stuff, especially not your own string/vector type. It does if you don't want to code your stuff in a manner where there's a specific piece of code that owns the string, since you've now separated the string from the slice. Sure, it's feasible, but it's not the same thing, and it requires you to code differently than you'd do it in D. Regardless, it requires you to code very differently from how you'd normally do it in C++, so while it's quite possible to do something similar to slices in C++, pretty much no one does. - Jonathan M Davis
Re: Bug in readln interface ?
On Sunday, 30 June 2013 at 23:10:07 UTC, Steven Schveighoffer wrote: On Sun, 30 Jun 2013 19:08:35 -0400, Steven Schveighoffer schvei...@yahoo.com wrote: On Sun, 30 Jun 2013 15:12:40 -0400, monarch_dodra monarchdo...@gmail.com wrote: So my question is: Do we *want* to keep this? Should we deprecate it? I think we should deprecate it. Thoughts? Fully agree. The fact that the condition explicitly disallows enums should clue us in that it was not intended to simply play nice with types; it really wants a mutable buffer. If you want an immutable result, use the version that gives you the result as a return value. BTW, this should simply be changed, not deprecated, IMO. It is an accepts-invalid bug. The function description specifically says it reuses the buffer and extends it as necessary. Since it can't possibly reuse it, this is truly a bug. -Steve Alright, thanks. Officially filed: http://d.puremagic.com/issues/show_bug.cgi?id=10517 And under correction.
Re: Automatic typing
On Monday, 1 July 2013 at 04:19:51 UTC, Timon Gehr wrote: On 07/01/2013 05:44 AM, JS wrote: On Monday, 1 July 2013 at 01:56:22 UTC, Timon Gehr wrote: ... The described strategy can easily result in non-termination, and which template instantiations it performs can be non-obvious.

auto foo(T)(T arg){ static if(is(T==int)) return 1.0; else return 1; }
void main(){ auto x; x = 1; x = foo(x); }

Sorry, that's fine. It only results in non-termination if you don't check all return types out of a function. Why is this relevant? I was specifically responding to the method outlined in the post I was answering. There have not been any other attempts to formalize the proposal so far. It is a rather easy case to handle by just following all the return types and choosing the largest one. That neither handles the above case in a sensible way, nor is it a solution for the general issue. (Hint: D's type system is Turing complete.) No big deal... any other tries? That's not how it goes. The proposed inference method has to be completely specified for all instances, not only for those instances that I can be bothered to provide to you as counterexamples. Well duh, but it is quite a simple mathematical problem, and your counterexample is not one at all. For a statically typed language all types must be known at compile time... so you can't come up with any valid counterexample. Just because you come up with some convoluted example that seems to break the algorithm does not prove anything. Do you agree that a function's return type must be known at compile time in a statically typed language? If not then we have nothing more to discuss... (Just because you allow a function to be compile-time polymorphic doesn't change anything, because each type that a function can possibly return must be known.)
Re: Automatic typing
On Monday, 1 July 2013 at 06:38:20 UTC, JS wrote: well duh, but it is quite a simple mathematical problem and your counterexample is not one at all. For a statically typed language all types must be known at compile time... so you can't come up with any valid counterexample. Just because you come up with some convoluted example that seems to break the algorithm does not prove anything. Do you agree that a function's return type must be known at compile time in a statically typed language? If not then we have nothing more to discuss... (Just because you allow a function to be compile-time polymorphic doesn't change anything, because each type that a function can possibly return must be known) As a compiler implementer, Timon is probably way more competent than you are on this question. You won't add anything interesting by assuming you know better. The type of problems he mentions are already present in many aspects of D and make it really hard to compile in a consistent way across implementations. Adding new ones is a really bad idea. If you don't understand what the problem is, I suggest you study the question or ask questions rather than try to make a point.
Re: D vs C++ - Where are the benchmarks?
On Monday, 1 July 2013 at 04:50:29 UTC, Jonathan M Davis wrote: On Monday, July 01, 2013 06:27:15 Marco Leise wrote: Am Sun, 30 Jun 2013 22:55:26 +0200 schrieb Gabi galim...@bezeqint.net: I wonder why that is.. Why would deleting 1 million objects in C++ (using std::shared_ptr, for example) have to be slower than the garbage collector freeing a big chunk of a million objects all at once? I have no numbers, but I think especially when you have complex graph structures linked with pointers, the GC needs a while to follow all the links and mark the referenced objects as still in use. And this will be done every now and then, when you allocate N new objects. The other thing to consider is that when the GC runs, it has to figure out whether anything needs to be collected. And regardless of whether anything actually needs to be collected, it has to go through all of the various references to mark them and then to sweep them. With deterministic destruction, you don't have to do that. If you have a fairly small number of heap allocations in your program, it's generally not a big deal. But if you're constantly allocating and deallocating small objects, then the GC is going to run a lot more frequently, and it'll have a lot more objects to examine. So, having lots of small objects which are frequently created and destroyed is pretty much guaranteed to tank your performance if they're allocated by the GC. You really want reference counting for those sorts of situations. This is only true of the current D GC. Modern parallel compacting GCs don't suffer from this. -- Paulo
Re: D vs C++ - Where are the benchmarks?
On Jul 1, 2013 7:16 AM, dennis luehring dl.so...@gmx.net wrote: Am 01.07.2013 03:07, schrieb Kapps: If you're concerned about performance, I'd recommend against using DMD for your release builds. GDC and LDC will give much better performance, and GDC works perfectly fine on Windows. LDC has some problems with exception handling AFAIK on Windows. GDC has the same exception problems as LDC: no support for SEH, but exceptions do work. Only the transition from Windows-internal exceptions to D exceptions doesn't work properly. But even using Visual Studio you need to add special flags, or use __try/__except, to catch those, so normally it's not a problem. Right, gcc (thus, gdc) uses sjlj (setjmp/longjmp) exceptions on Windows. AFAIK, structured exception handling support in gcc is being developed to overcome the weaknesses of both dw2 and sjlj. Regards -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Re: UDP enhancement
On 2013-07-01 03:43, JS wrote: But yet absolutely useless and does nothing over using a field directly. The advantage is that you get virtual methods. I also think it should be possible to manually implement just the setter, or getter. The compiler would only generate what's not already present. -- /Jacob Carlborg
Re: D vs C++ - Where are the benchmarks?
On 2013-07-01 01:14, Steven Schveighoffer wrote: I think memory usage is still important. Many people don't consider that their computer is running hundreds of programs at a time. If each one of those didn't care about memory usage, the one that you are currently interested in would not have any breathing room. I quite often run out of memory at work on my machine with 6GB of RAM when coding Ruby on Rails. I don't know if it's something I do with the code, but sometimes something happens in the Ruby code that makes the Rails server grow from 300MB to over 1GB of RAM. The same thing happens with the web browser at the same time: suddenly it decides to eat 2GB of extra RAM. -- /Jacob Carlborg
Re: Automatic typing
On Monday, 1 July 2013 at 06:51:53 UTC, deadalnix wrote: On Monday, 1 July 2013 at 06:38:20 UTC, JS wrote: [...] As a compiler implementer, Timon is probably way more competent than you are on this question. You won't add anything interesting by assuming you know better. The type of problems he mentions are already present in many aspects of D and make it really hard to compile in a consistent way across implementations. Adding new ones is a really bad idea. If you don't understand what the problem is, I suggest you study the question or ask questions rather than try to make a point. You can't be as smart as you think, or you would know that proof by authority is a fallacy.
Re: Automatic typing
On Monday, 1 July 2013 at 09:31:04 UTC, JS wrote: On Monday, 1 July 2013 at 06:51:53 UTC, deadalnix wrote: On Monday, 1 July 2013 at 06:38:20 UTC, JS wrote: [...] As a compiler implementer, Timon is probably way more competent than you are on this question. You won't add anything interesting by assuming you know better. The type of problems he mentions are already present in many aspects of D and make it really hard to compile in a consistent way across implementations. Adding new ones is a really bad idea. If you don't understand what the problem is, I suggest you study the question or ask questions rather than try to make a point. You can't be as smart as you think, or you would know that proof by authority is a fallacy. Authority is not proof, but many years of experience provide a perspective that is worth serious consideration, which is what deadalnix said.
Re: D vs C++ - Where are the benchmarks?
On Monday, 1 July 2013 at 06:11:20 UTC, dennis luehring wrote: Am 01.07.2013 03:07, schrieb Kapps: If you're concerned about performance, I'd recommend against using DMD for your release builds. GDC and LDC will give much better performance, and GDC works perfectly fine on Windows. LDC has some problems with exception handling AFAIK on Windows. GDC has the same exception problems as LDC: no support for SEH, but exceptions do work. Only the transition from Windows-internal exceptions to D exceptions doesn't work properly. But even using Visual Studio you need to add special flags, or use __try/__except, to catch those, so normally it's not a problem. Could you please elaborate on this? What should I beware of or avoid when using GDC under Windows?
Re: Regarding warnings
On Saturday, 29 June 2013 at 20:26:15 UTC, Jonathan M Davis wrote: http://d.puremagic.com/issues/show_bug.cgi?id=10147 This problem arises because the compiler processes warnings too early. If the is expression issues only warnings, it should probably succeed, because those warnings don't escape the is expression (just like errors). Only if they would actually end up being reported should they be turned into errors and stop compilation.
Re: D vs C++ - Where are the benchmarks?
On 1 July 2013 11:18, Gabi galim...@bezeqint.net wrote: On Monday, 1 July 2013 at 06:11:20 UTC, dennis luehring wrote: [...] Could you please elaborate on this? What should I beware of or avoid when using GDC under Windows? Mixing MSVC and GCC when C++ linkage is involved. :) -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Re: D vs C++ - Where are the benchmarks?
Am 01.07.2013 10:14, schrieb Iain Buclaw: On Jul 1, 2013 7:16 AM, dennis luehring dl.so...@gmx.net wrote: [...] Right, gcc (thus, gdc) uses sjlj (setjmp/longjmp) exceptions on Windows. AFAIK, structured exception handling support in gcc is being developed to overcome the weaknesses of both dw2 and sjlj. "...is being developed" - so that means gcc has got Windows SEH support now?
Re: D vs C++ - Where are the benchmarks?
On 1 July 2013 12:02, dennis luehring dl.so...@gmx.net wrote: Am 01.07.2013 10:14, schrieb Iain Buclaw: [...] Right, gcc (thus, gdc) uses sjlj (setjmp/longjmp) exceptions on Windows. AFAIK, structured exception handling support in gcc is being developed to overcome the weaknesses of both dw2 and sjlj. "...is being developed" - so that means gcc has got Windows SEH support now? No, it hasn't. If there are any patches, I can't see them after a cursory look. There's a wiki for discussion at least: http://gcc.gnu.org/wiki/WindowsGCCImprovements -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Re: D vs C++ - Where are the benchmarks?
On 1 July 2013 12:38, Iain Buclaw ibuc...@ubuntu.com wrote: [...] No, it hasn't. If there are any patches, I can't see them after a cursory look. There's a wiki for discussion at least: http://gcc.gnu.org/wiki/WindowsGCCImprovements Which after a few clicks brings you to this page: http://gcc.gnu.org/wiki/WindowsGCCImprovementsGSoC2008#General_Information Though, that was support as of 2008... there might have been a few changes since then to improve it. =) -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Re: D repl
bearophile bearophileh...@lycos.com wrote in message news:aknmrnhodledtmfyp...@forum.dlang.org... Dicebot: Can you link his comment? That sounds weird. It was years ago, maybe more than three years ago. It's not easy to find. I have found discussions about this topic, but no comments from Walter: http://www.digitalmars.com/d/archives/digitalmars/D/D_Compiler_as_a_Library_164027.html http://www.digitalmars.com/d/archives/digitalmars/D/Compiler_as_a_service_in_C_4.0_104405.html Later I found this (from 2009), where Walter gave me an answer about the D compiler as a DLL: http://www.digitalmars.com/d/archives/digitalmars/D/Compiler_as_dll_82715.html Things have changed a lot since 2009. This is definitely the way things are headed. First step: port the compiler to D.
Re: D vs C++ - Where are the benchmarks?
On Monday, 1 July 2013 at 09:14:07 UTC, Jacob Carlborg wrote: On 2013-07-01 01:14, Steven Schveighoffer wrote: I think memory usage is still important. Many people don't consider that their computer is running hundreds of programs at a time. If each one of those didn't care about memory usage, the one that you are currently interested in would not have any breathing room. I quite often run out of memory at work on my machine with 6GB of RAM when coding Ruby on Rails. I don't know if it's something I do with the code, but sometimes something happens in the Ruby code that makes the Rails server grow from 300MB to over 1GB of RAM. The same thing happens with the web browser at the same time: suddenly it decides to eat 2GB of extra RAM. My 4 gig machine needs swap space... just to surf the internet. I think one of the biggest culprits is not the programs, but today's *shiny* and *exciting* websites...
Re: Automatic typing
On 6/30/13 7:39 PM, JS wrote: On Saturday, 29 June 2013 at 19:18:13 UTC, Ary Borenszweig wrote: On 6/27/13 9:34 PM, JS wrote: Would it be possible for a language(specifically d) to have the ability to automatically type a variable by looking at its use cases without adding too much complexity? It seems to me that most compilers already can infer type mismatchs which would allow them to handle stuff like: main() { auto x; auto y; x = 3; // x is an int, same as auto x = 3; y = f(); // y is the same type as what f() returns x = 3.9; // x is really a float, no mismatch with previous type(int) } in this case x and y's type is inferred from future use. The compiler essentially just lazily infers the variable type. Obviously ambiguity will generate an error. What you are asking is essentially what Crystal does for all variables (and types): https://github.com/manastech/crystal/wiki/Introduction#type-inference Your example would be written like this: x = 3 y = f() x = 3.9 But since Crystal transforms your code to SSA (http://en.wikipedia.org/wiki/Static_single_assignment_form) you actually have *two* x variables in your code. The first one is of type Int32, the second of type Float64. The above solves the problem mentioned by Steven Schveighoffer, where you didn't know what overloaded version you was calling: x = 3 f(x) # always calls f(Int32), because at run-time # x will always be an Int32 at this point x = 3.9 But to have this in a language you need some things: 1. Don't have a different syntax for declaring and updating variables 2. Transform your code to SSA (maybe more?) So this is not possible in D right now, and I don't think it will ever be because it requires a huge change to the whole language. This is not what I am talking about and it seems quite dangerous to have one variable name masquerade as multiple variables. Why dangerous? I've been programming in Ruby for quite a time and never found it to be a problem, but an advantage. 
Now I'm programming in Crystal and it's the same, but the compiler can catch some errors too. Show me an example where this is dangerous (the pointer example given by Walter is not valid anymore since it has a fix).
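The SSA-based rebinding Ary describes can be sketched in Python, which rebinds names much like Ruby/Crystal. This is an illustrative model only; the SSA version names in the comments are hypothetical, not anything a compiler emits:

```python
def f():
    # Stand-in for the f() in the example above; returns some int.
    return 10

# Each assignment conceptually introduces a fresh SSA version of the
# name, with its own static type; the name "x" is reused, but there
# is no single storage location whose type changes.
x = 3                        # x_1 : Int32 in Crystal terms
assert isinstance(x, int)

y = f()                      # y's type is whatever f() returns
assert isinstance(y, int)

x = 3.9                      # x_2 : Float64 -- a new version, not a retype
assert isinstance(x, float)
```

At any given program point the type of `x` is fixed, which is why an overloaded call such as `f(x)` stays unambiguous under this scheme.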
Re: Automatic typing
On 6/30/13 10:30 PM, JS wrote: On Monday, 1 July 2013 at 01:08:49 UTC, Kenji Hara wrote: 2013/7/1 JS js.m...@gmail.com I am simply talking about having the compiler enlarge the type if needed. (this is mainly for built in types since the type hierarchy is explicitly known) Just a simple matter, it would *drastically* increase compilation time. void foo() { auto elem; auto arr = [elem]; elem = 1; elem = 2.0; // typeof(elem) change should modify the result of typeof(arr) } Such type dependencies between multiple variables are common in the realistic program. When `elem = 2.0;` is found, compiler should run semantic analysis of the whole function body of foo _once again_, because the setting type of elem ignites the change of typeof(arr), and it would affect the code meaning. If another variable type would be modified, it also ignites the whole function body semantic again. After all, semantic analysis repetition would drastically increase. I can easily imagine that the compilation cost would not be worth the small benefits. Kenji Hara No, this would be a brute force approach. Only one preprocessing pass of (#lines) would be required. Since parsing statement by statement already takes place, it should be an insignificant cost. Believe me, it's not. Look at this: --- int foo(int elem) { return 1; } char foo(float elem) { return 'a'; } auto elem; elem = 1; auto other = foo(elem); elem = other + 2.5; --- Explain to me how the compiler would work in this case, step by step.
Re: Automatic typing
On 6/30/13 10:56 PM, Timon Gehr wrote: On 07/01/2013 03:08 AM, Kenji Hara wrote: 2013/7/1 JS js.m...@gmail.com mailto:js.m...@gmail.com I am simply talking about having the compiler enlarge the type if needed. (this is mainly for built in types since the type hierarchy is explicitly known) Just a simple matter, it would *drastically* increase compilation time. void foo() { auto elem; auto arr = [elem]; elem = 1; elem = 2.0; // typeof(elem) change should modify the result of typeof(arr) } Such type dependencies between multiple variables are common in the realistic program. When `elem = 2.0;` is found, compiler should run semantic analysis of the whole function body of foo _once again_, because the setting type of elem ignites the change of typeof(arr), and it would affect the code meaning. If another variable type would be modified, it also ignites the whole function body semantic again. After all, semantic analysis repetition would drastically increase. I can easily imagine that the compilation cost would not be worth the small benefits. Kenji Hara The described strategy can easily result in non-termination, and which template instantiations it performs can be non-obvious. auto foo(T)(T arg){ static if(is(T==int)) return 1.0; else return 1; } void main(){ auto x; x = 1; x = foo(x); } Just tried it in Crystal and it ends alright. It works like this: 1. x is an Int 2. you call foo(x), it returns a float so x is now a float (right now in Crystal that's a union of int and float, but that will soon change). 3. Since x is a float, foo returns an int, but assigning it to x, which is already a float, gives back a float. 4. No type changed, so we end. Crystal also supports recursive and mutually recursive functions. The compiler is always guaranteed to finish. (I'm just using Crystal as an example to have a proof that it can be done)
Re: D vs C++ - Where are the benchmarks?
My 4 gig machine needs swap space... just to surf on internet. I think one of the biggest culprits are not the programs, but today's *shiny* and *exciting* websites... Wow! Then you must be running Windows... I currently have 1.5GB used, with ~30 tabs open, KDevelop, a few terminals and Thunderbird (and Thunderbird takes the most RAM!)
Re: D vs C++ - Where are the benchmarks?
On Monday, 1 July 2013 at 14:11:50 UTC, David wrote: My 4 gig machine needs swap space... just to surf on internet. I think one of the biggest culprits are not the programs, but today's *shiny* and *exciting* websites... Wow! Then you must be running Windows... I currently have 1.5GB used, with ~30 tabs open, KDevelop, a few terminals and Thunderbird (and Thunderbird takes the most RAM!) Firefox with ~70 tabs including an hour-long YouTube video, MATLAB and IPython all running at once: used memory + swap still under 4GB. It's worth bearing in mind that just because you've got swap space being used doesn't mean you're actually short on memory.
Re: Automatic typing
On 07/01/2013 03:44 PM, Ary Borenszweig wrote: On 6/30/13 10:56 PM, Timon Gehr wrote: On 07/01/2013 03:08 AM, Kenji Hara wrote: 2013/7/1 JS js.m...@gmail.com mailto:js.m...@gmail.com I am simply talking about having the compiler enlarge the type if needed. (this is mainly for built in types since the type hierarchy is explicitly known) Just a simple matter, it would *drastically* increase compilation time. void foo() { auto elem; auto arr = [elem]; elem = 1; elem = 2.0; // typeof(elem) change should modify the result of typeof(arr) } Such type dependencies between multiple variables are common in the realistic program. When `elem = 2.0;` is found, compiler should run semantic analysis of the whole function body of foo _once again_, because the setting type of elem ignites the change of typeof(arr), and it would affect the code meaning. If another variable type would be modified, it also ignites the whole function body semantic again. After all, semantic analysis repetition would drastically increase. I can easily imagine that the compilation cost would not be worth the small benefits. Kenji Hara The described strategy can easily result in non-termination, and which template instantiations it performs can be non-obvious. auto foo(T)(T arg){ static if(is(T==int)) return 1.0; else return 1; } void main(){ auto x; x = 1; x = foo(x); } Just tried it in Crystal Using overloaded functions, I guess? It is not really the same thing, because those need to be type checked in any case. and it ends alright. (Note that I was specifically addressing the method Kenji Hara outlined, which appears to completely restart type checking every time a type changes.) It works like this: 1. x is an Int 2. you call foo(x), it returns a float so x is now a float (right now in Crystal that's a union of int and float, but that will soon change). 3. Since x is a float, foo returns an int, but assigning it to x, which is already a float, gives back a float. 4. No type changed, so we end. ...
This kind of fixed-point iteration will terminate in D in most relevant cases (it is possible to create an infinitely ascending chain of types, but then, type checking failing implicit conversions won't terminate anyway). But note that now x is a double even though it is only assigned ints. Furthermore, this approach still implicitly instantiates template versions that are not referred to in the final type checked code.
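The fixed-point iteration sketched in steps 1-4 above can be modeled in Python. This is a hypothetical toy, not any real compiler's code: the `WIDEN` table and `foo_return_type` function are invented here to mirror the `foo` template in the example.

```python
# Toy model of fixed-point type inference for:
#   x = 1; x = foo(x);
# where foo(int) returns float and foo(anything else) returns int.

WIDEN = {
    ("int", "int"): "int",
    ("int", "float"): "float",    # assigning a float widens an int variable
    ("float", "int"): "float",    # a float variable absorbs a later int
    ("float", "float"): "float",
}

def foo_return_type(arg_type):
    # Mirrors: static if(is(T==int)) return 1.0; else return 1;
    return "float" if arg_type == "int" else "int"

def infer_type_of_x():
    x = "int"                     # after: x = 1;
    while True:
        before = x
        ret = foo_return_type(x)  # type of foo(x)
        x = WIDEN[(x, ret)]       # effect of: x = foo(x);
        if x == before:           # no type changed: fixed point reached
            return x

print(infer_type_of_x())          # settles on "float" after two iterations
```

Termination of this loop relies on the widening lattice having finite height; an infinitely ascending chain of types would defeat it, which is exactly the caveat raised above.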
Re: Automatic typing
On 7/1/2013 6:39 AM, Ary Borenszweig wrote: This is not what I am talking about and it seems quite dangerous to have one variable name masquerade as multiple variables. Why dangerous? D already disallows: int x; { float x; } as an error-prone construct, so why should it allow: int x; float x; ? I've been programming in Ruby for quite a time and never found it to be a problem, but an advantage. What advantage? Does Ruby have a shortage of names for variables? (Early versions of BASIC only allowed variable names with one letter, leading to some pretty awful workarounds.) Show me an example where this is dangerous (the pointer example given by Walter is not valid anymore since it has a fix). I'm actually rather sure that one can come up with rule after rule to 'fix' each issue, but good luck with trying to fit all those rules into some sort of comprehensible framework. Consider what happened to C++ when they tried to fit new things into the overloading rules - the rules now span multiple pages of pretty arbitrary rules, and practically nobody understands the whole. What people do is randomly try things until it seems to do the right thing for them, and then they move on. And all this for what - nobody has come up with a significant reason to support this proposal. I see post after post in this thread trying to make it work, and nothing about what problem it solves.
Re: UDP enhancement
On 06/30/2013 08:54 PM, JS wrote: On Monday, 1 July 2013 at 02:17:24 UTC, Ali Çehreli wrote: I have the complete opposite view: Seeing m_data explicitly in the code would be simpler than reading code to see that data.value would mean implicit storage. huh? There is absolutely no semantic difference between the two. Agreed. I find implicit storage making code more complex. The proposed case is easier because the field can't be hidden away somewhere making it hard to find. @property T x() { } represents a function and possibly a variable of type T. You know that by looking at the property. It is not a hard leap to understand. Agreed but I was talking about understanding the implementation, not the API. When a function returns data.value, it returns the 'value' member of a variable 'data'. Where is 'data'? Not a local variable. Not a member? A global? Oh! I wonder? Yes, it is an implicit member that is created by the compiler. Note the old proposal that Jonathan has reminded us about does not have that problem. It is obvious that we are looking at a property. The old way: @property T x() { } T _x; Is more verbose, and verbose is not always better. Agreed in general but not in this case. If your class has many variables and some are hidden then it could be difficult to know where the variable is. That is always possible and requires discipline and coding guidelines. The programmers must know to communicate ideas and designs. It's no different than writing separate setters and getters... no difference... just they are more verbose. If you are against my suggestion you should be against properties in general because they are a simplification of such. I am not against how they make syntax easier. I don't need to prefix function names by get_ or set_ and I don't need to use parentheses. (if propertyname.value is used then there needs to be an internal variable, else not), Where would the compiler make room for that variable in relation to the other members?
With programming languages, explicit is almost always better than implicit. Ali huh? The exact same place it does so if the programmer explicitly adds it. How can the compiler put it in *the exact spot* if I am not adding it explicitly? Are you suggesting that such functions be inserted between other member variables? struct Foo { int m; @property int data() { return data.value; } // read property @property int data(int value) { return data.value = value; } // write property double d; } What if there is another member between these special functions? Compiler error? Its location in the class may not be the same but that is, in general, irrelevant unless you are messing with the bits of the class. I was thinking about structs. Ali
Re: Automatic typing
On 7/1/13 1:15 PM, Walter Bright wrote: On 7/1/2013 6:39 AM, Ary Borenszweig wrote: This is not what I am talking about and it seems quite dangerous to have one variable name masquerade as multiple variables. Why dangerous? D already disallows: int x; { float x; } as an error-prone construct, so why should it allow: int x; float x; ? Well, those constructs don't even make sense because in the examples I gave I never say what type I want my variables to be. I let the compiler figure it out. I've been programming in Ruby for quite a time and never found it to be a problem, but an advantage. What advantage? Does Ruby have a shortage of names for variables? (Early versions of BASIC only allowed variable names with one letter, leading to some pretty awful workarounds.) I'll give you an example: # var can be an Int or String def foo(var) var = var.to_s # do something with var, which is now guaranteed to be a string end I can call it like this: foo(1) foo("hello") If I had to put types, I would end up doing one of these: 1. void foo(int var) { foo(to!string(var)); } void foo(string var) { // do something with var } 2. void foo(T)(T var) { string myVar; static if (is(T == string)) { myVar = var; } else static if (is(T == int)) { myVar = to!string(var); } else { static assert(false); } // do something with myVar } Both examples are ugly and verbose (or, at least, the example in Ruby/Crystal is much shorter and cleaner). The example I give is very simple, I can reuse a var which *has the same meaning* for me when I'm coding and I don't need to come up with a new name. It's not that Ruby has a shortage of names. It's just that I don't want to spend time thinking up new, similar names, just to satisfy the compiler.
(And if you are worried about efficiency, the method #to_s of String just returns itself, so in the end it compiles to the same code you could have written manually like I showed in D) Show me an example where this is dangerous (the pointer example given by Walter is not valid anymore since it has a fix). I'm actually rather sure that one can come up with rule after rule to 'fix' each issue, but good luck with trying to fit all those rules into some sort of comprehensible framework. Consider what happened to C++ when they tried to fit new things into the overloading rules - the rules now span multiple pages of pretty arbitrary rules, and practically nobody understands the whole. What people do is randomly try things until it seems to do the right thing for them, and then they move on. And all this for what - nobody has come up with a significant reason to support this proposal. I see post after post in this thread trying to make it work, and nothing about what problem it solves. Exactly, because in D you need to specify the types of variables. And that goes against inferring a variable's type from its usage (which is different from inferring it from its initializer). I'm also against this proposal. I'm just saying that in D it's not feasible, and if you want to make it work you'll have to change so many things that you'll end up with a different language.
Re: Automatic typing
On 7/1/13 1:45 PM, John Colvin wrote: void foo(T)(T var) { auto myVar = var.to!string; //do something with myVar string } Ah, that's also ok. But then you have to remember to use myVar instead of var.
Re: Automatic typing
On Monday, 1 July 2013 at 16:33:56 UTC, Ary Borenszweig wrote: I'll give you an example: # var can be an Int or String def foo(var) var = var.to_s # do something with var, which is now guaranteed to be a string end I can call it like this: foo(1) foo("hello") If I had to put types, I would end up doing one of these: 1. void foo(int var) { foo(to!string(var)); } void foo(string var) { // do something with var } 2. void foo(T)(T var) { string myVar; static if (is(T == string)) { myVar = var; } else static if (is(T == int)) { myVar = to!string(var); } else { static assert(false); } // do something with myVar } Why not this? void foo(T)(T var) { auto myVar = var.to!string; //do something with myVar string }
Re: D vs C++ - Where are the benchmarks?
On Monday, 1 July 2013 at 06:28:57 UTC, monarch_dodra wrote: On Monday, 1 July 2013 at 02:53:24 UTC, Jonathan M Davis wrote: On Monday, July 01, 2013 04:37:43 Mehrdad wrote: On Sunday, 30 June 2013 at 20:49:28 UTC, Peter Alexander wrote: sometimes faster Would love an example that demonstrates it! Anything involving taking a lot of substrings is likely to be faster in D thanks to slices (which is one of the main reasons that Tango's XML parser is so lightning fast). You could write the same code in C++, but it's harder, because slices aren't built-in, and you have no GC, probably forcing you to create your own string type that supports slices and does reference counting if you want a similar effect. - Jonathan M Davis Well... in C++, a slice is called an iterator pair. If you just: typedef std::pair<std::string::const_iterator, std::string::const_iterator> string_slice; Then there is no reason you can't do it... The only problem is that it is not a standard semantic in C++, so nobody ever thinks about doing this, and much less actually ever does it. There is a *little* bit of barrier to entry too. I've done this once about two years ago (before I knew about D) because I needed a subview of a vector. My typedef's name was shallow_vector. It was a fun experience given I didn't know about the range concept back then :) In any case, if you *do* want to go there, it doesn't really require you creating that much new stuff, especially not your own string/vector type. Boost recently added string_ref, a non-owning reference to a string: http://www.boost.org/doc/libs/1_53_0/libs/utility/doc/html/string_ref.html Gets you something similar to slices but is inherently more dangerous to use than GC-backed slices.
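The shared idea behind D slices, C++ iterator pairs, and string_ref can be roughly illustrated in Python using memoryview; this is only an analogy (the buffer contents below are made up for the example):

```python
# A slice is essentially (base, offset, length): the memoryview below
# references the original buffer rather than copying it.
data = bytearray(b"<tag>hello</tag>")
view = memoryview(data)

inner = view[5:10]                  # "substring" without copying
assert inner.tobytes() == b"hello"

# Mutating the underlying buffer is visible through the slice,
# proving no copy was taken -- and also showing why non-owning
# views are risky if the buffer is freed or moved under them,
# which is the "more dangerous than GC-backed slices" point above.
data[5:10] = b"HELLO"
assert inner.tobytes() == b"HELLO"
```

With a GC, the slice keeps the buffer alive; with string_ref or an iterator pair, keeping the buffer alive is entirely the programmer's problem.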
Re: Automatic typing
On Monday, 1 July 2013 at 16:15:04 UTC, Walter Bright wrote: On 7/1/2013 6:39 AM, Ary Borenszweig wrote: This is not what I am talking about and it seems quite dangerous to have one variable name masquerade as multiple variables. Why dangerous? D already disallows: int x; { float x; } as an error-prone construct, so why should it allow: int x; float x; ? I'm not proposing this. There is only one variable. I think there is big confusion in what I'm suggesting (it's not a proposal because I don't expect anyone to take the time to prove its validity... and you can't know how useful it could be if you don't have any way to test it out). They are two distinct concepts: allowing a multi-variable versus one that can be up-typed by inference (the compiler automatically up-types). I've been programming in Ruby for quite a time and never found it to be a problem, but an advantage. What advantage? Does Ruby have a shortage of names for variables? (Early versions of BASIC only allowed variable names with one letter, leading to some pretty awful workarounds.) Show me an example where this is dangerous (the pointer example given by Walter is not valid anymore since it has a fix). I'm actually rather sure that one can come up with rule after rule to 'fix' each issue, but good luck with trying to fit all those rules into some sort of comprehensible framework. Consider what happened to C++ when they tried to fit new things into the overloading rules - the rules now span multiple pages of pretty arbitrary rules, and practically nobody understands the whole. What people do is randomly try things until it seems to do the right thing for them, and then they move on. And all this for what - nobody has come up with a significant reason to support this proposal. I see post after post in this thread trying to make it work, and nothing about what problem it solves.
I'm more interested in a true counterexample where my concept (which I've not seen in any language before) results in an invalid context, and not a naive example that uses a half-baked implementation of the algorithm (which I've already outlined). Just because someone comes up with an example and says this will produce non-termination doesn't mean it will, except in the most naive implementations. The problem is when such an idea is present you get people who are automatically against it for various irrational fears and they won't take any serious look at it to see if it has any merit... If you jump to the conclusion that something is useless without any real thought on it then it obviously is... but the same type of mentality has been used to prove just about anything was useless at one time or another. (and if that mentality ruled we'd still be using 640K of memory) I have a very basic question for you and would like a simple answer: In some programming languages, one can do the following type of code: var x; // x is some type of variable that holds data. Its type is not statically defined and can change at run time. x = 3; // x holds some type of number... usually an integer but the language may store all numbers as doubles or even strings. now, suppose we have a program that contains essentially the following: var x; x = 3; Is it possible that the compiler can optimize such code to find the least amount of data to represent x without issue? Yes or no? Is this a good thing? Yes or no? (I don't need and don't want any explanation) (the above example is at the heart of the matter... regardless of whether it is a valid semantic in D or easy to implement (since no one knows, and most don't care because they think it won't benefit them (just like how Bill Gates thought all anyone needed was 640K)))
Re: Automatic typing
On 7/1/2013 9:46 AM, Ary Borenszweig wrote: On 7/1/13 1:45 PM, John Colvin wrote: void foo(T)(T var) { auto myVar = var.to!string; //do something with myVar string } Ah, that's also ok. But then you have to remember to use myVar instead of var. Heck, why bother with different variable names at all? We can just use x for all variables, t for all types, and f for all functions! t f(t x) { t x; t x = x & 0xFF; if (x == x) x = 0; else if ((x & 0xFFFD00) == 0x0F3800) x = x[(x >> 8) & 0xFF]; else if ((x & 0xFF00) == 0x0F00) x = x[x]; else x = x[x]; return x & x; } Sorry for the sarcasm, but I just don't get the notion that it's a burden to use a different name for a variable that has a different type and a different purpose. I'd go further and say it is a bad practice to use the same name for such.
Re: Automatic typing
On Monday, 1 July 2013 at 16:46:57 UTC, Ary Borenszweig wrote: On 7/1/13 1:45 PM, John Colvin wrote: void foo(T)(T var) { auto myVar = var.to!string; //do something with myVar string } Ah, that's also ok. But then you have to remember to use myVar instead of var. Personally I like the explicit use of a new variable. If you're changing the type of a variable then you want it to be explicit. I spend far too many hours a month chasing down accidental type changes in Python. A convenience feature is only a feature if it helps *stop* you shooting yourself in the foot, not if it actively encourages it. auto a; //loads of code, with function calls to all sorts of unfamiliar libraries //do something with a. How do I know what type a is to work with? I have to either read and understand all the code in between, try and write something generic, or put a pragma(msg, ...) in to show it for me. Either way I have to pray that nobody changes it.
Re: Automatic typing
On 07/01/2013 06:15 PM, Walter Bright wrote: On 7/1/2013 6:39 AM, Ary Borenszweig wrote: This is not what I am talking about and it seems quite dangerous to have one variable name masquerade as multiple variables. Why dangerous? D already disallows: int x; { float x; } as an error-prone construct, ... module b; int x; module a; void main(){ int x; { import b; x = 2; } import std.stdio; writeln(x); // prints 0 }
Re: Automatic typing
On Monday, 1 July 2013 at 16:46:57 UTC, Ary Borenszweig wrote: Ah, that's also ok. But then you have to remember to use myVar instead of var. I have wanted to remove a variable from scope before. I think it would be kinda cool if we could do something like __undefine(x), and subsequent uses of it would be an error. (It could also prohibit redeclaration of it if we wanted.) D doesn't have that, but we can get reasonably close using other functions when we change param types (just forward it to the other overload) and functions and/or scopes for local variables: void foo() { { int a; /* use a */ } // a is now gone, so you don't accidentally use it later in the function }
Re: SIMD on Windows
Thanks Manu, I think I understand. Quick question: I've updated my test to allow for loop unrolling http://dpaste.dzfl.pl/12933bc8 as the calculation is done over an array of elements and does not depend on the last operation. My problem is that the program reports using 0 time. However, as soon as I start printing out elements the time then jumps to looking more realistic. However, even if I print the elements of the list after I print the calculation operation, I still get zero seconds. Like:
1: calc time
2: do operations
3: print time delta (result: 0 time)
4: print all values from operation

versus:

1: calc time
2: do operations
3: print all values from operation
4: print time delta (result: large time delta actually shown)

Is D performing operations lazily by default or am I missing something?
Re: UDP enhancement
On Monday, 1 July 2013 at 16:24:40 UTC, Ali Çehreli wrote: On 06/30/2013 08:54 PM, JS wrote: On Monday, 1 July 2013 at 02:17:24 UTC, Ali Çehreli wrote: I have the complete opposite view: Seeing m_data explicitly in the code would be simpler than reading code to see that data.value would mean implicit storage. huh? There is absolutely no semantic difference between the two. Agreed. I find implicit storage making code more complex. (Well, I'm sure some will find it useful in some cases and I don't think such a case could hurt much but I'd probably never use it) The proposed case is easier because the field can't be hidden away somewhere making it hard to find. @property T x() { } represents a function and possibly a variable of type T. You know that by looking at the property. It is not a hard leap to understand. Agreed but I was talking about understanding the implementation, not the API. When a function returns data.value, it returns the 'value' member of a variable 'data'. Where is 'data'? Not a local variable. Not a member? A global? Oh! I wonder? Yes, it is an implicit member that is created by the compiler. Note the old proposal that Jonathan has reminded us about does not have that problem. It is obvious that we are looking at a property. Well, I personally don't care what symbols or syntax you want to use (well, within reason). I used a common syntax because it is something people are familiar with. To me it is nitpicking because it has nothing to do with the real issue. It's not the syntax that is under question but the concept/implementation. What's important to me is to not have to create a private field every time I want to create a property. It seems like a waste of time and is verbose for no reason. It doesn't confuse me one bit to hide the field in the property because essentially that's what properties do (to the user of the property)... So it doesn't change anything from the outside and only goes to reduce your code size.
The old way: @property T x() { } T _x; Is more verbose, and verbose is not always better. Agreed in general but not in this case. If your class has many variables and some are hidden then it could be difficult to know where the variable is. That is always possible and requires discipline and coding guidelines. The programmers must know to communicate ideas and designs. Yes, but any programming language is there to simplify... If we had infinite memories and intelligence then direct machine language (hex) would be just fine. IMO removing excess and essentially useless text in source code makes it easier to follow and maintain. Almost all programming constructs do this... sometimes it's their sole purpose (a macro, function, struct, etc...). (encapsulation of data/code is mainly to simplify complexity and not for security/safety) It's no different than writing separate setters and getters... no difference... just they are more verbose. If you are against my suggestion you should be against properties in general because they are a simplification of such. I am not against how they make syntax easier. I don't need to prefix function names by get_ or set_ and I don't need to use parentheses. (if propertyname.value is used then there needs to be an internal variable, else not), Where would the compiler make room for that variable in relation to the other members? With programming languages, explicit is almost always better than implicit. Ali huh? The exact same place it does so if the programmer explicitly adds it. How can the compiler put it in *the exact spot* if I am not adding it explicitly? Are you suggesting that such functions be inserted between other member variables? I'm not sure we are talking about the same thing: struct Foo { int m; @property int data() { return data.value; } // read property @property int data(int value) { return data.value = value; } // write property double d; } What if there is another member between these special functions? Compiler error?
Its location in the class may not be the same but that is, in general, irrelevant unless you are messing with the bits of the class. I was thinking about structs. Ali struct Foo { // int data.value; inserted here int m; // int data.value; or here @property int data() { return data.value; } // read property // int data.value; or here @property int data(int value) { return data.value = value; } // write property // int data.value; or here double d; // int data.value; or here } vs struct Foo { int m; @property int data() { return val; } // read property @property int data(int value) { return val = value; } // write property double d; private int val; } It will almost never matter where the compiler inserts the hidden variable for us except when hacking the struct (which, as long as it's consistent, it
Re: Automatic typing
Don't feed the troll.
Re: Automatic typing
On 7/1/2013 9:57 AM, JS wrote: The problem is when such an idea is present you get people who are automatically against it for various irrational fears and they won't take any serious look at it to see if it has any merit... If you jump to the conclusion that something is useless without any real thought on it then it obviously is... but the same type of mentality has been used to prove just about anything was useless at one time or another. It's up to you to demonstrate your idea has merit. Throwing ideas out and asking others to find the merit for you is not going to work. It's even worse when you insult them for not finding the merit that you didn't find. Once you demonstrate merit then go about finding ways to make it work. Not the other way around. There are famous cases in business history where a solution was found before anyone identified a problem - 3M's not-very-sticky adhesive that was eventually turned into the hugely profitable Post-it notes is an example - but it languished for nearly a decade before someone thought of a use for it. And 3M certainly didn't productize it before the problem was discovered.
Re: Automatic typing
On Monday, 1 July 2013 at 16:57:53 UTC, JS wrote: (the above example is at the heart of the matter... regardless of whether it is a valid semantic in D or easy to implement (since no one knows, and most don't care because they think it won't benefit them (just like how Bill Gates thought all anyone needed was 640K))) For the record, this quote is plain wrong: https://groups.google.com/forum/#!msg/alt.folklore.computers/mpjS-h4jpD8/9DW_VQVLzpkJ But if you want a very stupid one, I declared in the late 90s that a phone with a tactile screen on its whole surface was a stupid idea and that it would never work. If you look hard enough, I guess we have all said something stupid at some point. We've got to admit it and not repeat the mistake.
Re: Automatic typing
On 7/1/2013 10:07 AM, Timon Gehr wrote: module b; int x; module a; void main(){ int x; { import b; x = 2; I'd encourage you to submit an enhancement request that would produce the message: Error: import b.x hides local declaration of x } import std.stdio; writeln(x); // prints 0 }
Re: Automatic typing
On 7/1/2013 10:51 AM, deadalnix wrote: But if you one very stupid one, I declared in the late 90s that a phone with a tactile screen on its whole surface was a stupid idea and that it would never work. I you look hard enough, I guess we all said the most stupid thing at some point. We got to admit it and not repeat the mistake. None of us have to look very hard at ourselves to find such, if we're being remotely honest. I don't much care for the popular gotcha practice of digging up something someone did or said decades ago. It presumes that we are all born wise, and offers no hope for learning from our mistakes. Sadly, the internet and the surveillance state are going to make life difficult for anyone trying to live down something stupid. Makes me glad I grew up before the internet. Makes me glad that most of the drivel I posted to Usenet back in the 80's has been hopefully lost :-)
Re: Automatic typing
On Monday, 1 July 2013 at 17:51:02 UTC, deadalnix wrote: On Monday, 1 July 2013 at 16:57:53 UTC, JS wrote: (the above example is at the heart of the matter... regardless of whether it is probably a valid semantic in D or easy to implement (since no one knows and most don't care because they think it won't benefit them (just like how Bill Gates thought all anyone needed was 640k))) For the record, this quote is plain wrong: https://groups.google.com/forum/#!msg/alt.folklore.computers/mpjS-h4jpD8/9DW_VQVLzpkJ I liked this answer: QUESTION: I read in a newspaper that in 1981 you said, 640K should be enough for anybody. I always thought he was talking about his monthly bonus, not computer memory... :) Matheus.
Re: SIMD on Windows
On Monday, 1 July 2013 at 17:19:02 UTC, Jonathan Dunlap wrote: Thanks Manu, I think I understand. Quick questions, so I've updated my test to allow for loop unrolling http://dpaste.dzfl.pl/12933bc8 The loop body in testSimd doesn't do anything. This line: auto di = d[i]; copies the vector, it does not reference it.
Re: Compiler could elide many more postblit constructor calls
Yet one small observation: This optimization would mean that a lot of the use cases of auto ref const MyType parameters (the upcoming non-templated auto ref feature... although I don't know if that's the syntax for it) could be replaced by using const MyType parameters. Or, if you look at it from the other side of the coin: if you always took function arguments by auto ref const MyType, there wouldn't be any functions to apply this optimization for.
Re: Automatic typing
On 07/01/2013 10:51 AM, deadalnix wrote: I declared in the late 90s that a phone with a tactile screen on its whole surface was a stupid idea and that it would never work. I still think so. :D Ali
Re: SIMD on Windows
Thanks Jerro, I went ahead and used a pointer reference to ensure it's being saved back into the array (http://dpaste.dzfl.pl/52710926). Two things: 1) still showing a zero time delta; 2) on Windows 7 x64, using a SAMPLE_AT size of 3 or higher will cause the program to immediately quit with no output at all. Even the first writeln statement in the constructor doesn't execute.
Re: Automatic typing
On 7/1/13 9:59 AM, Walter Bright wrote: Sorry for the sarcasm, but I just don't get the notion that it's a burden to use a different name for a variable that has a different type and a different purpose. I'd go further and say it is a bad practice to use the same name for such. Reducing the number of names seems worthless, but increasing it can be quite annoying. Andrei
Re: Automatic typing
On 7/1/13 9:57 AM, JS wrote: I think there is big confusion in what I'm suggesting (it's not a proposal because I don't expect anyone to take the time to prove its validity... and you can't know how useful it could be if you don't have any way to test it out). It's two distinctly different concepts when you allow a multi-variable and one that can be up-typed by inference (the compiler automatically up-types). To me the basic notion was very clear from day one. Changing the type of a variable is equivalent to unaliasing the existing variable (i.e. destroying it and forcing it out of the symbol table) and defining an entirely different variable, with its own lifetime. It just so happens it has the same name. It's a reasonable feature to have -- a nice cheat that brings a statically-typed language closer to the look-and-feel of dynamic languages. Saves on names, which is more helpful than one might think. In D, things like overloading and implicit conversions would probably make it too confusing to be useful. I'm more interested in a true counterexample where my concept (which I've not seen in any language before) results in an invalid context. It's obvious to me that the concept is sound within reasonable use bounds. The problem is when such an idea is presented you get people who are automatically against it for various irrational fears and they won't take any serious look at it to see if it has any merit... If you jump to the conclusion that something is useless without any real thought on it then it obviously is... but the same type of mentality has been used to prove just about anything was useless at one time or another. (and if that mentality ruled we'd still be using 640k of memory) I think this is an unfair characterization. The discussion was pretty good and gave the notion a fair shake. 
I have a very basic question for you and would like a simple answer: In some programming languages, one can do the following type of code: var x; // x is some type of variable that holds data. It's type is not statically defined and can change at run time. x = 3; // x holds some type of number... usually an integer but the language may store all numbers as doubles or even strings. now, suppose we have a program that contains essentially the following: var x; x = 3; Is it possible that the compiler can optimize such code to find the least amount of data to represent x without issue? Yes or no? Yes, and in fact it's already done. Consider: if (expr) { int a; ... } else { int b; ... } In some C implementations, a and b have the same physical address. In some others, they have distinct addresses. This appears to not be related, but it is insofar as a and b have non-overlapping lifetimes. Is this a good thing? Yes or no? It's marginally good - increases stack locality and makes it simpler for a register allocator. (In fact all register allocators do that already, otherwise they'd suck.) (I don't need and don't want any explanation) Too late :o). Andrei
Re: Automatic typing
On 7/1/2013 4:30 PM, Andrei Alexandrescu wrote: Yes, and in fact it's already done. Consider: if (expr) { int a; ... } else { int b; ... } In some C implementations, a and b have the same physical address. In some others, they have distinct addresses. This appears to not be related, but it is insofar as a and b have non-overlapping lifetimes. What is happening with (modern) compilers is that the live range of each variable is computed. A live range is nothing more than a bitmap across the instructions for a function, with a bit set meaning the variable is in play at this point. The compiler then uses a Tetris-style algorithm to try to fit as many variables as possible into the limited register set, and to use as little stack space as possible. The usual algorithms do not use scoping to determine the live range, but look at actual usage. A variable that, for example, has no usage is considered 'dead' and is removed. The proposal here neither adds nor subtracts from this.
Re: Automatic typing
On Monday, 1 July 2013 at 23:30:19 UTC, Andrei Alexandrescu wrote: On 7/1/13 9:57 AM, JS wrote: I think there is big confusion in what I'm suggesting(it's not a proposal because I don't expect anyone to take the time to prove its validity... and you can't know how useful it could be if you don't have any way to test it out). It's two distinctly different concepts when you allow a multi-variable and one that can be up-typed by inference(the compiler automatically up-types). To me the basic notion was very clear from day one. Changing the type of a variable is equivalent with unaliasing the existing variable (i.e. destroy it and force it out of the symbol table) and defining an entirely different variable, with its own lifetime. It just so happens it has the same name. It's a reasonable feature to have -- a nice cheat that brings a statically-typed language closer to the look-and-feel of dynamic languages. Saves on names, which is more helpful than one might think. In D things like overloading and implicit conversions would probably make it too confusing to be useful. I'm more interested in a true counterexample where my concept(which I've not seen in any language before) results in an invalid context It's obvious to me that the concept is sound within reasonable use bounds. The problem is when such a idea is present you get people who are automatically against it for various irrational fears and they won't take any serious look at it to see if it has any merit... If you jump to the conclusion that something is useless without any real thought on it then it obviously is... but the same type of mentality has been used to prove just about anything was useless at one time or another. (and if that mentality ruled we'd still be using 640k of memory) I think this is an unfair characterization. The discussion was pretty good and gave the notion a fair shake. 
I have a very basic question for you and would like a simple answer: In some programming languages, one can do the following type of code: var x; // x is some type of variable that holds data. It's type is not statically defined and can change at run time. x = 3; // x holds some type of number... usually an integer but the language may store all numbers as doubles or even strings. now, suppose we have a program that contains essentially the following: var x; x = 3; Is it possible that the compiler can optimize such code to find the least amount of data to represent x without issue? Yes or no? Yes, and in fact it's already done. Consider: if (expr) { int a; ... } else { int b; ... } In some C implementations, a and b have the same physical address. In some others, they have distinct addresses. This appears to not be related, but it is insofar as a and b have non-overlapping lifetimes. Is this a good thing? Yes or no? It's marginally good - increases stack locality and makes it simpler for a register allocator. (In fact all register allocators do that already, otherwise they'd suck.) (I don't need and don't want any explanation) Too late :o). Andrei To be honest, your reply seems to be the only one that attempts to discuss exactly what I asked. Nothing more, nothing less. I do realize there was some confusion between what Crystal does and what I'm talking about... I still think the two are confused by some and I'm not sure if anyone quite gets exactly what I am talking about (which is not re-aliasing any variables, using a sort of variant type (directly at least), or having a multi-variable (e.g., Crystal)). What would be nice is an experimental version of D where we could easily extend the language to try out such concepts, to see if they truly are useful and how difficult they are to implement. E.g., I could attempt to add said feature, it could be merged with the experimental compiler, and those interested could download the compiler and test the feature out... 
all without negatively affecting D directly. If such features could be implemented dynamically then it would probably be pretty powerful. The example I gave was sort of the reverse. Instead of expanding the type into a supertype we are reducing it. float x; x = 3; x could be stored as a byte which would potentially be an increase in performance. Reducing the type can be pretty dangerous though unless it is verifiable. I'm somewhat convinced that expanding the type is almost always safe(at least in safe code) although not necessarily performant. IMO it makes auto more powerful in most cases but only having a test bed can really say how much.
Re: Automatic typing
On 7/1/13 6:29 PM, JS wrote: What would be nice is an experimental version of D where we could easily extend the language to try out such concepts to see if they truly are useful and how difficult they are to implement. E.g., I could attempt to add said feature, it could be merged with the experimental compiler, and those interested could download the compiler and test the feature out... all without negatively affecting D directly. If such features could be implemented dynamically then it would probably be pretty powerful. I don't think such a feature would make it in D, even if the implementation cost was already sunk (i.e. an implementation was already done and one pull request away). Ascribing distinct objects to the same symbol is a very core feature that affects and is affected by everything else. We'd design a lot of D differently if that particular feature were desired, and now the fundamentals of the design are long frozen. For a very simple example, consider: auto a = 2.5; // fine, a is double ... a = 3; By the proposed rule a will become an entirely different variable of type int, and the previous double variable would disappear. But current rules dictate that the type stays double. So we'd either have an unthinkably massive breakage, or we'd patch the language with a million exceptions. Even so! If the feature were bringing amazing power, there may still be a case in its favor. But fundamentally it doesn't bring anything new - it's just alpha renaming; it doesn't enable doing anything that couldn't be done without it. Andrei
Re: Automatic typing
On Tuesday, 2 July 2013 at 02:15:09 UTC, Andrei Alexandrescu wrote: On 7/1/13 6:29 PM, JS wrote: What would be nice is an experimental version of D where we could easily extend the language to try out such concepts to see if they truly are useful and how difficult they are to implement. E.g., I could attempt to add said feature, it could be merged with the experimental compiler, and those interested could download the compiler and test the feature out... all without negatively affecting D directly. If such features could be implemented dynamically then it would probably be pretty powerful. I don't think such a feature would make it in D, even if the implementation cost was already sunk (i.e. an implementation was already done and one pull request away). Ascribing distinct objects to the same symbol is a very core feature that affects and is affected by everything else. We'd design a lot of D differently if that particular feature were desired, and now the fundamentals of the design are long frozen. For a very simple example, consider: auto a = 2.5; // fine, a is double ... a = 3; No, not under what I am talking about. You can't downgrade a type, only upgrade it. After a = 3, a is still a float. Using the concept I am talking about, your example does nothing new. But reverse the numbers: auto a = 3; a = 2.5; and a is now a float, and your logic then becomes correct EXCEPT a is expanded, which is safe. I really don't know how to make it any clearer, but I'm not sure if anyone understands what I'm talking about ;/ By the proposed rule a will become an entirely different variable of type int, and the previous double variable would disappear. But current rules dictate that the type stays double. So we'd either have an unthinkably massive breakage, or we'd patch the language with a million exceptions. Even so! If the feature were bringing amazing power, there may still be a case in its favor. 
But fundamentally it doesn't bring anything new - it's just alpha renaming; it doesn't enable doing anything that couldn't be done without it. Expanding a type is always valid because it just consumes more memory. A double can always masquerade as an int without issue because one just wastes 4 bytes. An int can't masquerade as a double because any function that uses it as a double will cause corruption of 4 bytes of memory. (I'm ignoring that a double and int use different CPU instructions. This is irrelevant unless we are hacking stuff up.) The simplest example I can give is: auto x = 2; x = 2.5; x IS a double, regardless of the fact that auto x = 2; makes it look like an int BECAUSE that is how auto currently is defined (which might be the confusion). The reason is that the compiler looked at the scope for all assignments to x and was able to determine automatically that x needed to be a double. I'll give one more way to look at this, that is a sort of in-between but necessary logical step: We have currently that auto looks at the immediate assignment after its keyword to determine the type, correct? e.g., auto x = 3; What if we allow auto to look at the first assignment to x, not necessarily the immediate assignment, e.g.: auto x; x = 3; (should be identical to above) or auto x; (no assignments to x) x = 3; All this should be semantically equivalent, correct? To me, the last case is more powerful since it is more general. Of course, one could argue that it makes it more difficult to know the type of x, but I doubt this would be a huge issue.
Re: Automatic typing
On 07/01/2013 07:35 PM, JS wrote: On Tuesday, 2 July 2013 at 02:15:09 UTC, Andrei Alexandrescu wrote: auto a = 2.5; // fine, a is double ... a = 3; No, not under what I am talking about. You can't downgrade a type, only upgrade it. a = 3, a is still a float. Using the concept I am talking about, your example does nothing new. but reverse the numbers: auto a = 3; a = 2.5; and a is now a float, and your logic then becomes correct EXCEPT a is expanded, which is safe. I really don't know how to make it any clearer but I'm not sure if anyone understands what I'm talking about ;/ I think I understand. I think I heard either on this or your other thread that function overloading may produce confusing results. Consider the following program: void foo(int i) {} void foo(double d) {} void main() { auto a = 3; foo(a); // Some time later somebody adds the following line: a = 2.5; } If the type of 'a' would suddenly be double from that point on, foo(a) would silently go to a different function. It may be that calling the 'double' overload is the right thing to do but it may as well be that it would be the completely the wrong thing to do. The difference is, today the compiler warns me about the incompatible types. With the proposed feature, the semantics of the program might be different without any warning. Of course one may argue that every line must be added very carefully and the unit tests must be comprehensive, etc. Of course I agree but I am another person who does not see the benefit of this proposal. It is never a chore to modify the type of a variable when the compiler warns me about an incompatibility. Ali
Re: Automatic typing
On Tuesday, 2 July 2013 at 03:17:58 UTC, Ali Çehreli wrote: On 07/01/2013 07:35 PM, JS wrote: On Tuesday, 2 July 2013 at 02:15:09 UTC, Andrei Alexandrescu wrote: auto a = 2.5; // fine, a is double ... a = 3; No, not under what I am talking about. You can't downgrade a type, only upgrade it. a = 3, a is still a float. Using the concept I am talking about, your example does nothing new. but reverse the numbers: auto a = 3; a = 2.5; and a is now a float, and your logic then becomes correct EXCEPT a is expanded, which is safe. I really don't know how to make it any clearer but I'm not sure if anyone understands what I'm talking about ;/ I think I understand. I think I heard either on this or your other thread that function overloading may produce confusing results. Consider the following program: void foo(int i) {} void foo(double d) {} void main() { auto a = 3; foo(a); // Some time later somebody adds the following line: a = 2.5; } If the type of 'a' would suddenly be double from that point on, foo(a) would silently go to a different function. It may be that calling the 'double' overload is the right thing to do but it may as well be that it would be the completely the wrong thing to do. Yes, basically. If one coded for integers then decided to change it to doubles and was doing some weird stuff then it could completely change the semantics. The difference is, today the compiler warns me about the incompatible types. With the proposed feature, the semantics of the program might be different without any warning. Yes, this is the downside I see. But there doesn't otherwise seem to be any inherent reason why it's a bad idea. Of course one may argue that every line must be added very carefully and the unit tests must be comprehensive, etc. Of course I agree but I am another person who does not see the benefit of this proposal. It is never a chore to modify the type of a variable when the compiler warns me about an incompatibility. 
No one says that the compiler can't warn you still. One could use a different keyword from the start. Say *autoscope*. If you use that from the get-go then you should know full well that strange things are possible (and D can still do strange things without such a feature). Although I am of a different mindset than you are, in that I like to have more control instead of less. You can't adapt to change if it never happens. Such a feature would, probably in 98% of cases, result in something beneficial. Again, after all, if you don't like it don't use it... I do think it would simplify a few things though. Actually, as I mentioned before, there are some use cases where it does reduce code complexity. If one is doing something like this: auto foo(double x) { if(typeof(x) == int) return ; else return 2; } then they are asking for trouble. auto x; x = foo(1); x = 2.5 or x = 3 result in x becoming a very different type. (and maybe this is essentially the objection of people... but note it's not because of x... auto x = foo(); does the exact same thing.) I believe that having the best tool for the job is what is important. But the tools should be available to be used. (This is a general statement, I'm not talking about this feature.) When you need a bazooka you need a bazooka... to not have one really sucks... your pea shooter might do the job 99% of the time and that's fine, that's what should be used. But trying to take out a tank with a pea shooter isn't going to cut it.
Re: SIMD on Windows
Maybe make the arrays public? It's conceivable the optimiser could eliminate all that code, since it can prove the results are never referenced... I doubt that's the problem though, just a first guess. On 2 July 2013 09:14, Jonathan Dunlap jad...@gmail.com wrote: Thanks Jerro, I went ahead and used a pointer reference to ensure it's being saved back into the array (http://dpaste.dzfl.pl/52710926). Two things: 1) still showing zero time delta 2) On Windows 7 x64, using a SAMPLE_AT size of 3 or higher will cause the program to immediately quit with no output at all. Even the first statement of writeln in the constructor doesn't execute.
Re: UDP enhancement
Its location in the class may not be the same, but that is, in general, irrelevant unless you are messing with the bits of the class. Actually we do this a lot in C++ where I work to ensure proper alignment. We are also starting to do this in D where we have C++-to-D bindings, so we can make our D structs exactly match our C++ structs in memory. Personally I see less benefit over: public @property int value; This approach is nice. It can be used both when layout is important and when it doesn't matter, and it is clearer. I can look at the struct and immediately read its memory footprint. Your suggested proposal cannot be used when layout is important, as it is left to the compiler. It would require a workaround to coerce the compiler into submission, or additional compiler circuitry, making it even more complex and slowing it down.
Assert failures in threads
I've noticed that when an assert fails inside a thread, no error message is printed and the program/thread just hangs. Is there any way to ensure that an assertion failure inside a thread does output a message? For the purposes of my current needs, it's fine if it also brings down the whole program, just so long as I'm alerted to what generated the error.
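One common workaround is to catch Throwable at the top of the thread's entry function and report the failure yourself, since an AssertError propagating out of a thread function may be swallowed silently. This is a hedged sketch (the worker function and message are illustrative, not from the original post):

```d
import core.thread;
import std.stdio;

void worker()
{
    try
    {
        assert(false, "something went wrong in the worker");
    }
    catch (Throwable t) // AssertError derives from Throwable, not Exception
    {
        stderr.writeln("thread died: ", t.msg);
        // optionally rethrow, or abort the whole program here if desired
    }
}

void main()
{
    auto t = new Thread(&worker);
    t.start();
    t.join();
}
```

With the try/catch in place the failure is at least printed to stderr before the thread exits, which addresses the "just so long as I'm alerted" requirement.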
Handling different types gracefully
I'm asking because I'm doing some game development in D and I've come upon the entity component architecture, and it looks like a good way to handle the complexity and interdependency that games seem to have. The components wind up being plain old data types (I currently use structs), with systems that actually modify and interact with them on a per-entity basis. The basic idea is that you compose entities (basically just an id) with various components to give the specific functionality you need for that particular game object. The whole entity system is just a database for the different subsystems to query for entities (just an id) which have components that fulfill their criteria (for example a physics subsystem might only be interested in entities with the components position, movement, and collision, while a render subsystem might only be interested in entities with components position, mesh, sprite). There's also the reverse where the subsystems register which components they are looking for and then the entity system serves up entities that match the criteria. The standard way to store components is just to subclass them from a base class Component and then store a pointer to them in a Component[] in an overall entity manager. But then you have to do things like type casting the pointers when you return the queries, which seems a bit rough. It also feels wrong to make them inherit a class for the SOLE reason of getting around the type system. Anyway, I think I'm just rubber ducking a bit here, but I'm wondering if there's another way to do this, if someone has any experience with this sort of system. As an example of what I'm looking for, say there is a class Entities, with some type of container that could hold arrays of different component types (this is the part that really stumps me). I'd like to be able to do something like: ... 
auto entities = new Entities(); auto entity_id = entities.createEntity(); entities.addComponent!(position)(entity_id, pos); entities.addComponent!(movement)(entity_id, mov); entities.addComponent!(collision)(entity_id, col); auto physics_data = entities.getEntitiesWithComponents!(position, movement, collision)(); The two big requirements are some kind of regular, queryable structure to hold the components (of different types), and the ability to filter by type. Is anything like that remotely possible?
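One possible sketch of that interface uses a templated store: one associative array per component type, keyed by entity id, so no common base class or casting is needed. The names (Store, addComponent, entitiesWith) mirror the hypothetical API above rather than any existing library, and this ignores performance concerns like cache-friendly layout:

```d
import std.stdio;

struct Position  { float x, y; }
struct Movement  { float dx, dy; }
struct Collision { float radius; }

// One static associative array per component type C, keyed by entity id.
struct Store(C)
{
    static C[size_t] data;
}

size_t nextId;
size_t createEntity() { return nextId++; }

void addComponent(C)(size_t id, C c) { Store!C.data[id] = c; }

bool hasComponent(C)(size_t id) { return (id in Store!C.data) !is null; }

// All entities that have every one of the listed component types.
size_t[] entitiesWith(Cs...)()
{
    size_t[] result;
    outer: foreach (id; 0 .. nextId)
    {
        foreach (C; Cs) // unrolled at compile time over the type list
            if (!hasComponent!C(id))
                continue outer;
        result ~= id;
    }
    return result;
}

void main()
{
    auto e = createEntity();
    addComponent(e, Position(0, 0));
    addComponent(e, Movement(1, 1));
    addComponent(e, Collision(0.5));

    auto wall = createEntity();
    addComponent(wall, Position(5, 5));

    // only e has all three components
    writeln(entitiesWith!(Position, Movement, Collision)());
}
```

The per-type static data member means each instantiation Store!Position, Store!Movement, etc. gets its own map, which is what makes the "database per component type" queryable without run-time type information.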
Re: Handling different types gracefully
Roderick Gibson: auto entities = new Entities(); auto entity_id = entities.createEntity(); entities.addComponent!(position)(entity_id, pos); entities.addComponent!(movement)(entity_id, mov); entities.addComponent!(collision)(entity_id, col); auto physics_data = entities.getEntitiesWithComponents!(position, movement, collision)(); The two big requirements are some kind of regular, queryable structure to hold the components (of different types), and the ability to filter by type. Is anything like that remotely possible? If the possible types are known, then there's std.variant.Algebraic; otherwise there is a free Variant or VariantN. They are not perfect, but maybe they are good enough for you. Bye, bearophile
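As a minimal sketch of the Variant route (the Position/Mesh structs are invented for illustration): a Variant[] can hold mixed component types and be filtered by run-time type.

```d
import std.stdio;
import std.variant;

struct Position { float x, y; }
struct Mesh     { string name; }

void main()
{
    Variant[] components;
    components ~= Variant(Position(1, 2));
    components ~= Variant(Mesh("cube"));

    // Filter by run-time type, then extract the typed value.
    foreach (c; components)
        if (c.type == typeid(Position))
            writeln("position.x = ", c.get!Position.x);
}
```

Algebraic works the same way but restricts the element to a closed set of types declared up front, which catches mistakes at compile time.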
Re: Handling different types gracefully
On Monday, 1 July 2013 at 12:03:25 UTC, Roderick Gibson wrote: I'm asking because I'm doing some game development in D and I've come upon the entity component architecture, and it looks like a good way to handle the complexity and interdependency that games seem to have. The components wind up being plain old data types (I currently use structs), with systems that actually modify and interact with them on a per-entity basis. The basic idea is that you compose entities (basically just an id) with various components to give the specific functionality you need for that particular game object. The whole entity system is just a database for the different subsystems to query for entities (just an id) which have components that fulfill their criteria (for example a physics subsystem might only be interested in entities with the components position, movement, and collision, while a render subsystem might only be interested in entities with components position, mesh, sprite). There's also the reverse where the subsystems register which components they are looking for and then the entity system serves up entities that match the criteria. The standard way to store components is just to subclass them from a base class Component and then store a pointer to them in a Component[] in an overall entity manager. But then you have to do things like type casting the pointers when you return the queries, which seems a bit rough. It also feels wrong to make them inherit a class for the SOLE reason of getting around the type system. Anyway, I think I'm just rubber ducking a bit here, but I'm wondering if there's another way to do this, if someone has any experience with this sort of system. As an example of what I'm looking for, say there is a class Entities, with some type of container that could hold arrays of different component types (this is the part that really stumps me). I'd like to be able to do something like: ... 
auto entities = new Entities(); auto entity_id = entities.createEntity(); entities.addComponent!(position)(entity_id, pos); entities.addComponent!(movement)(entity_id, mov); entities.addComponent!(collision)(entity_id, col); auto physics_data = entities.getEntitiesWithComponents!(position, movement, collision)(); The two big requirements are some kind of regular, queryable structure to hold the components (of different types), and the ability to filter by type. Is anything like that remotely possible? Waiting for multiple alias this ?? And opDispatch to respond to unimplemented components.
Re: GC dead-locking ?
Am Tue, 18 Jun 2013 19:12:06 -0700 schrieb Sean Kelly s...@invisibleduck.org: On Jun 18, 2013, at 7:01 AM, Marco Leise marco.le...@gmx.de wrote: Am Mon, 17 Jun 2013 10:46:19 -0700 schrieb Sean Kelly s...@invisibleduck.org: On Jun 13, 2013, at 2:22 AM, Marco Leise marco.le...@gmx.de wrote: Here is an excerpt from a stack trace I got while profiling with OProfile:

#0 sem_wait () from /lib64/libpthread.so.0
#1 thread_suspendAll () at core/thread.d:2471
#2 gc.gcx.Gcx.fullcollect() (this=...) at gc/gcx.d:2427
#3 gc.gcx.Gcx.bigAlloc() (this=..., size=16401, poolPtr=0x7fc3d4bfe3c8, alloc_size=0x7fc3d4bfe418) at gc/gcx.d:2099
#4 gc.gcx.GC.mallocNoSync (alloc_size=0x7fc3d4bfe418, bits=10, size=16401, this=...) gc/gcx.d:503
#5 gc.gcx.GC.malloc() (this=..., size=16401, bits=10, alloc_size=0x7fc3d4bfe418) gc/gcx.d:421
#6 gc.gc.gc_qalloc (ba=10, sz=optimized out) gc/gc.d:203
#7 gc_qalloc (sz=optimized out, ba=10) gc/gc.d:198
#8 _d_newarrayT (ti=..., length=4096) rt/lifetime.d:807
#9 sequencer.algorithm.gzip.HuffmanTree.__T6__ctorTG32hZ.__ctor() (this=..., bitLengths=...) sequencer/algorithm/gzip.d:444

Two more threads are alive, but waiting on a condition variable (i.e. in pthread_cond_wait()), but from my own and not from druntime code. Is there some obvious way I could have dead-locked the GC? Or is there a bug? I assume you're running on Linux, which uses signals (SIGUSR1, specifically) to suspend threads for a collection. So I imagine what's happening is that your thread is trying to suspend all the other threads so it can collect, and those threads are ignoring the signal for some reason. I would expect pthread_cond_wait to be interrupted if a signal arrives, though. Have you overridden the signal handler for SIGUSR1? No, I have not overridden the signal handler. I'm aware of the fact that signals make pthread_cond_wait() return early and put them in a while loop as one would expect, that is all. Hrm... 
Can you trap this in a debugger and post the stack traces of all threads? That stack above is a thread waiting for others to say they're suspended so it can collect. I could do that (with a little work setting the scenario up again), but it won't help. As I said, the other two threads were paused in pthread_cond_wait() in my own code. There was nothing special about their stack trace. -- Marco
C standard libraries
Is there some header/module that includes declaration for all C standard libraries? I'm wondering both in general for future reference, and for the specific case of wanting to time a function and not knowing what in D--even after looking through the docs--would do something equivalent to clock and CLOCKS_PER_SEC in the C standard library time.h.
Re: C standard libraries
On Monday, 1 July 2013 at 16:32:32 UTC, CJS wrote: Is there some header/module that includes declaration for all C standard libraries? It is in core.stdc. For example: import core.stdc.stdio; // stdio.h import core.stdc.stdlib; // stdlib.h etc. what in D--even after looking through the docs--would do something equivalent to clock and CLOCKS_PER_SEC in the C standard library time.h. import core.stdc.time; import std.stdio; // for writeln writeln(CLOCKS_PER_SEC); The C headers in D aren't much documented, but they have all the same stuff as in C itself, so if you translate the include to import, the rest should continue to just work. If you want to get to more OS-specific stuff, outside the C standard but still typical C libs, it is core.sys.posix.unistd; /* unistd.h */ core.sys.windows.windows; /* windows.h */ and so on. The windows.h translation is *horribly* incomplete though, so if you want to do a serious win32 program you'll probably want to get something else. There are win32 bindings somewhere on the net; if you need them I can find the link.
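To answer the original timing question via the C route, the clock/CLOCKS_PER_SEC idiom from time.h translates directly; a minimal sketch (the summation loop is just a stand-in workload):

```d
import core.stdc.time; // clock(), clock_t, CLOCKS_PER_SEC, as in C's time.h
import std.stdio;

void main()
{
    clock_t start = clock();

    // stand-in workload to have something measurable
    long sum;
    foreach (i; 0 .. 10_000_000)
        sum += i;

    clock_t end = clock();
    writeln("elapsed: ", cast(double)(end - start) / CLOCKS_PER_SEC, " s");
}
```

As in C, clock() measures processor time consumed by the program, not wall-clock time, so it can differ noticeably from a stopwatch measurement on a loaded or multi-threaded system.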
Slices and arrays problems?
void testref(ref int[] arr) { arr[0] = 1; } void test(int[] arr) { arr[0] = 1; } void main() { //int[] buffer1 = new int[4]; // This works int[4] buffer1; // This doesn't int[4] buffer2; testref(buffer1); test(buffer2); assert(buffer1[0] == 1); assert(buffer2[0] == 1); } I'm not sure why my code doesn't work?? Isn't the buffer just an array with a fixed length? DMD is telling me 'buffer1 is not an lvalue'. The non ref version works fine?!
Re: Slices and arrays problems?
On 07/01/2013 10:34 AM, Damian wrote: void testref(ref int[] arr) { arr[0] = 1; } void test(int[] arr) { arr[0] = 1; } void main() { //int[] buffer1 = new int[4]; // This works int[4] buffer1; // This doesn't int[4] buffer2; testref(buffer1); When that call is made, a slice would have to be created to represent all of the elements of the fixed-length array buffer1. A slice would be needed because buffer1 is not a slice but testref() takes a slice. By the simplest definition, that slice is an rvalue because it is not defined as a variable in the program. And rvalues cannot be bound to non-const references. (If I am not mistaken not even to const references yet, if ever.) test(buffer2); Similarly, when that call is made, a slice is created. The difference is, because the parameter is by-value, the slice gets copied to test(). Now there is no problem because 'arr' is just a local variable of test(). (Note that when I say a slice is created or a slice is copied, they are very cheap operations. A slice is nothing but the number of elements and a pointer to the first one of those elements. Just a size_t and a pointer.) assert(buffer1[0] == 1); assert(buffer2[0] == 1); } I'm not sure why my code doesn't work?? Isn't the buffer just an array with a fixed length? DMD is telling me 'buffer1 is not an lvalue'. The non ref version works fine?! Ali
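A small sketch of the two working alternatives that follow from Ali's explanation: pass the slice by value, or take a ref to the exact fixed-length type. The function names here are made up for illustration:

```d
void testrefStatic(ref int[4] arr) { arr[0] = 1; }  // ref to the exact fixed-length type
void testByValue(int[] arr)        { arr[0] = 1; }  // by-value slice: copies ptr+length only

void main()
{
    int[4] fixedBuf;
    testrefStatic(fixedBuf);   // fine: the types match exactly, no slice rvalue is needed
    assert(fixedBuf[0] == 1);

    int[4] otherBuf;
    testByValue(otherBuf);     // fine: the implicit slice is copied into the parameter
    assert(otherBuf[0] == 1);  // the elements themselves are still shared, so the write sticks

    int[] dynBuf = new int[4];
    // A dynamic array variable is an int[] lvalue, so it also binds to ref int[]:
    // testrefDynamic(dynBuf) would compile, matching the commented-out line in the question.
}
```

The key point is that only the slice header (pointer and length) is an rvalue in the failing call; the elements it refers to are the same either way.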
Re: C standard libraries
On Monday, July 01, 2013 18:32:30 CJS wrote: Is there some header/module that includes declarations for all C standard libraries? I'm wondering both in general for future reference, and for the specific case of wanting to time a function and not knowing what in D--even after looking through the docs--would do something equivalent to clock and CLOCKS_PER_SEC in the C standard library time.h. If you want to time a function, check out std.datetime.StopWatch: http://dlang.org/phobos/std_datetime.html#StopWatch As for C standard library functions in general, as Adam pointed out, they're in the core.stdc.* modules. - Jonathan M Davis
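A minimal sketch of the StopWatch usage being recommended, against the std.datetime API of that era (in newer compilers the type lives in std.datetime.stopwatch and peek() returns a Duration instead of a TickDuration, so the .msecs access would change accordingly):

```d
import std.datetime : StopWatch;  // std.datetime.stopwatch in later releases
import std.stdio : writeln;

void main()
{
    StopWatch sw;
    sw.start();

    // ... code to time; a throwaway loop stands in here ...
    long total = 0;
    foreach (i; 0 .. 1_000_000)
        total += i;

    sw.stop();
    writeln("elapsed: ", sw.peek().msecs, " ms");
}
```

Unlike clock() from core.stdc.time, StopWatch uses a monotonic high-resolution timer, which is usually what you want for benchmarking.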
Eponymous template with full template syntax
I think main's second line used to work: template isSmall(T) { enum isSmall = (T.sizeof < 12345); } void main() { static assert(isSmall!int); // -- the usual syntax works static assert(isSmall!int.isSmall); // -- compilation ERROR } Error: template deneme.isSmall does not match any function template declaration. Candidates are: deneme.isSmall(T) Error: template deneme.isSmall(T) cannot deduce template function from argument types !()(bool) Am I imagining it? I don't have a problem with it. :) Was the change intentional? Ali
Is this a bug in the concurrency lib or am i using it incorrectly?
I was hoping the below example would display 'hello world' but it only displays 'hello'. Is this a bug in the concurrency lib or am i using it incorrectly? import std.stdio; import std.concurrency; void writer() { try { while (true) { receive((string s){ writefln(s); }); } } catch (OwnerTerminated ex) { // die. } } void sender(Tid writer) { send(writer, "world"); } void main(string[] args) { auto writer = spawn(&writer); send(writer, "hello"); spawn(&sender, writer); }
Re: Is this a bug in the concurrency lib or am i using it incorrectly?
try { while (true) { receive((string s){ writefln(s); }); } } catch (OwnerTerminated ex) { // die. } If you remove the try..catch you will notice that OwnerTerminated is thrown. Whether this is the intended behaviour, I don't know. Probably it is, because otherwise this would be a pretty obvious bug.
Re: Eponymous template with full template syntax
On Monday, July 01, 2013 11:15:04 Ali Çehreli wrote: I think main's second line used to work: template isSmall(T) { enum isSmall = (T.sizeof < 12345); } void main() { static assert(isSmall!int); // -- the usual syntax works static assert(isSmall!int.isSmall); // -- compilation ERROR } Error: template deneme.isSmall does not match any function template declaration. Candidates are: deneme.isSmall(T) Error: template deneme.isSmall(T) cannot deduce template function from argument types !()(bool) Am I imagining it? I don't have a problem with it. :) Was the change intentional? I'm not aware of it ever having worked, but given that the whole point of eponymous templates is that they be replaced with the symbol carrying their name and _everything_ else in the template is hidden, I would think that what you're seeing is correct behavior. isSmall!int is replaced with the isSmall within isSmall!int, and isSmall.isSmall makes no sense, so if that was allowed before, I'd definitely argue that disallowing it was a bug fix. - Jonathan M Davis
Re: Eponymous template with full template syntax
On Monday, 1 July 2013 at 18:15:06 UTC, Ali Çehreli wrote: I think main's second line used to work: template isSmall(T) { enum isSmall = (T.sizeof < 12345); } void main() { static assert(isSmall!int); // -- the usual syntax works static assert(isSmall!int.isSmall); // -- compilation ERROR } Error: template deneme.isSmall does not match any function template declaration. Candidates are: deneme.isSmall(T) Error: template deneme.isSmall(T) cannot deduce template function from argument types !()(bool) Am I imagining it? I don't have a problem with it. :) Was the change intentional? Ali I think that this probably worked as early as the end of 2011, but I may be wrong as I don't remember exactly. It seems that dmd recognizes isSmall!int.isSmall as a potential UFCS property, converts isSmall!int to bool and tries to issue the call isSmall(bool), and fails, because that template does not define any function.
Re: Is this a bug in the concurrency lib or am i using it incorrectly?
If you remove the try..catch you will notice that OwnerTerminated is thrown; if this is the intended behaviour, I don't know. Probably it is, because otherwise this would be a pretty obvious bug. Ah right, so I guess the main thread is finishing and throwing the exception to writer before sender has sent anything?
.Exe file how to embed in resource after start program
Hi. Sorry for my bad english. I want to embed a .exe file in my program's resources. E.g. embed Y.JAR into DProgram.Exe, so that when DProgram.exe starts, the Y.Jar program starts. :) I'm very, very sorry, because my english is very bad. Thanks :)
Re: Slices and arrays problems?
Thanks Ali and Adam for the good explanations I understand now.
Re: .Exe file how to embed in resource after start program
On 07/01/2013 12:28 PM, Ali GOREN wrote: Sorry for my bad english. Please don't say that. :) Thank you very much for writing in English so that we can understand you. I don't have an answer to your question though... :-/ Ali
Re: Eponymous template with full template syntax
On Mon, 01 Jul 2013 14:15:04 -0400, Ali Çehreli acehr...@yahoo.com wrote: I think main's second line used to work: template isSmall(T) { enum isSmall = (T.sizeof < 12345); } void main() { static assert(isSmall!int); // -- the usual syntax works static assert(isSmall!int.isSmall); // -- compilation ERROR } Error: template deneme.isSmall does not match any function template declaration. Candidates are: deneme.isSmall(T) Error: template deneme.isSmall(T) cannot deduce template function from argument types !()(bool) Am I imagining it? I don't have a problem with it. :) Was the change intentional? I think it used to work, and I think the change was intentional. I also discovered this not too long ago. -Steve
Re: Eponymous template with full template syntax
On 07/01/2013 12:03 PM, Maxim Fomin wrote: I think that this probably worked as early as the end of 2011 but I can be wrong as I don't remember exactly. To answer Jonathan's question as well, it must have worked because I see it in code that is definitely tested when it was written. It seems that dmd recognizes isSmall!int.isSmall as potential UFCS property, converts isSmall!int to bool and tries to issue call isSmall(bool) and fails, because that template does not define any function. That explains it. :) Let's play with it a little: import std.stdio; template isSmall(T) { enum isSmall = (T.sizeof < 12345); struct S { T m; } } struct S { int[10] i; } void main() { writeln(isSmall!int); writeln(isSmall!int.S.init); writeln(isSmall!int.S); } First of all, apparently a template can include a definition with the same name but I still cannot type isSmall!int.isSmall. I guess the above is still an eponymous template and isSmall!int still means isSmall!int.isSmall. Now guess what the last two lines print. :) isSmall!int.S is *not* the S that is included in the template! Here is the output: true S([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) S([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) The last line is actually an anonymous struct object of type S (the S that is defined at module level). That is confusing. Ali
Re: Is this a bug in the concurrency lib or am i using it incorrectly?
On Monday, 1 July 2013 at 19:15:45 UTC, Gary Willoughby wrote: If you remove the try..catch you will notice that OwnerTerminated is thrown, if this is the intended behaviour, I don't know. Probably is, because this would be a pretty obvious bug. Ah right, so i guess the main thread is finishing and throwing the exception to writer before sender has sent anything? An easy way of dealing with this would be to have main wait for a message from another thread telling it to terminate. My way of imagining threads in the std.concurrency model (for some reason it helps me not forget about these problems): It's a tree structure, where main is the master node and all other threads are - directly or indirectly - owned by main (main is owned by the OS):

        OS
        |
       main        gravity
      / | \          ||
     0  1  2         ||
    / \    |\        ||
   3   4   5 6       ||
  / \                \/
 7   8

If any thread lets go of its parent for any reason, all the children below it fall to their deaths. Hmm... Concurrency Tree Diagrams. Is this already a thing? With some coloured arrows showing message pathways it could be a really nice visualisation of a complex multi-threaded program.
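A minimal sketch of the "main waits for a message telling it to terminate" idea suggested above, applied to the original example. The Done sentinel type is made up for illustration:

```d
import std.concurrency;
import std.stdio;

struct Done {}  // illustrative sentinel message type

void writer()
{
    // Print strings until the sentinel arrives, then tell the owner we're finished.
    bool running = true;
    Tid owner = ownerTid;
    while (running)
    {
        receive(
            (string s) { writeln(s); },
            (Done _)   { running = false; }
        );
    }
    send(owner, Done());
}

void main()
{
    auto w = spawn(&writer);
    send(w, "hello");
    send(w, "world");
    send(w, Done());
    // Block here until writer confirms it has drained its mailbox,
    // so main cannot terminate first and trigger OwnerTerminated.
    receiveOnly!Done();
}
```

With this handshake both "hello" and "world" are printed, because main only exits once the writer thread has processed everything.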
Re: .Exe file how to embed in resource after start program
On Monday, 1 July 2013 at 20:11:40 UTC, Ali Çehreli wrote: On 07/01/2013 12:28 PM, Ali GOREN wrote: Sorry for my bad english. Please don't say that. :) Thank you very much for writing in English so that we can understand you. I don't have an answer to your question though... :-/ Ali Hmm thank you very much :) I want to embed it because it is a high-security program :(
Re: Eponymous template with full template syntax
On Monday, 1 July 2013 at 20:28:28 UTC, Ali Çehreli wrote: On 07/01/2013 12:03 PM, Maxim Fomin wrote: I think that this probably worked as early as the end of 2011 but I can be wrong as I don't remember exactly. To answer Jonathan's question as well, it must have worked because I see it in code that is definitely tested when it was written. It seems that dmd recognizes isSmall!int.isSmall as potential UFCS property, converts isSmall!int to bool and tries to issue call isSmall(bool) and fails, because that template does not define any function. That explains it. :) Let's play with it a little: import std.stdio; template isSmall(T) { enum isSmall = (T.sizeof < 12345); struct S { T m; } } struct S { int[10] i; } void main() { writeln(isSmall!int); writeln(isSmall!int.S.init); writeln(isSmall!int.S); } First of all, apparently a template can include a definition with the same name but I still cannot type isSmall!int.isSmall. I guess the above is still an eponymous template and isSmall!int still means isSmall!int.isSmall. Now guess what the last two lines print. :) isSmall!int.S is *not* the S that is included in the template! Here is the output: true S([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) S([1, 1, 1, 1, 1, 1, 1, 1, 1, 1]) The last line is actually an anonymous struct object of type S (the S that is defined at module level). I thought UFCS wasn't possible with constructors? *That* very use case is one of the reasons why. Shouldn't that be an accepts-invalid? That is confusing. UFCS construction: Yes. The rest, not so much: The idea is that once a template is eponymous, it *fully* becomes the eponymous function/type/value(s). Every other function, regardless of public/private*, simply ceases to exist to the outside world. You can't make a qualified call to an eponymous template, because the qualification is already the call.
Long story short, it's not mix and match: Either you have a normal template, or you have something eponymous, not a bit of both. *What qualifies as an eponymous template is kind of buggy, since what actually qualifies is not exactly what the spec says. Still, *once* something is considered qualified by the implementation, then it is fully eponymous.
Re: memcpy in D
I'd just like to say thanks for your suggestions and quite complete solutions. I particularly liked the alternate implementation presented by Marco. I must admit that I did not understand what was happening at first. So just to confirm my understanding: Because the address of memory is cast to ushort* in defining base, every iteration of the foreach loop advances the pointer two bytes into the array? Thanks, Andrew
Re: memcpy in D
On 07/01/2013 03:23 PM, Tyro[17] wrote: So just to confirm my understanding: Because the address of memory is cast to ushort* in defining base, every iteration of the foreach loop advances the pointer two bytes into the array? Yes. The increment operator on a pointer advances the pointer to point at the next element. Automatic! :) Same with arithmetic operations. (ptr + 1) is the address value that points at the next element. Ali
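A minimal sketch of that pointer stepping, reinterpreting a byte buffer as ushorts (the byte values here assume a little-endian machine, as on x86):

```d
void main()
{
    // 8 bytes encoding the little-endian ushorts 1, 2, 3, 4.
    ubyte[8] memory = [1, 0, 2, 0, 3, 0, 4, 0];
    ushort* base = cast(ushort*) memory.ptr;

    foreach (i; 0 .. memory.length / ushort.sizeof)
    {
        // base + i advances in ushort-sized (2-byte) steps, not byte steps,
        // because pointer arithmetic scales by the pointed-to element size.
        assert(*(base + i) == i + 1);
    }
}
```

The same scaling rule applies to ++ on the pointer: each increment moves it ushort.sizeof bytes forward.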
Re: Eponymous template with full template syntax
On 07/01/2013 02:10 PM, monarch_dodra wrote: That is confusing. UFCS construction: Yes. I *think* I did not know it but I can't be sure. :) struct S { int i; } void main() { static assert (S(42) == 42.S); } It works with 2.063 (v2.064-devel-a1a1537 too). The rest, not so much: The idea is that once a template is eponymous, it *fully* becomes the eponymous function/type/value (s). Every other function, regardless of public/private*, simply seizes to exist to the outside world. You can't make a qualified call to an eponymous template, because the qualification is already the call. A single definition with the same name makes it an eponymous template. I used to think that the template should also have a single definition. So, currently other definitions act as implementation details of the template. The following template sees the local S, not the module-level one: struct S { int[10] i; } template epo(T) { size_t epo() { return S.sizeof; // -- epo.S, not .S } struct S { int i; } double foo() { return 1.5; } } void main() { assert(epo!int() == int.sizeof); // -- yes, epo.S mixin epo!int; assert(foo() == 1.5); } Also note that mixing in the template is still possible but it is an orthogonal feature anyway. Ali
Template constraints and opAdd
I'm getting conflicting templates in this struct and I'm not sure how. I specifically excluded the second definition of opAdd from using type T in place of O but the compiler still tells me I'm getting template conflicts. Compiler error using Mass!(double,string): Error: template mass.Mass!(double,string).Mass.opAdd(O) if ((typeof(O)) != (typeof(T))) conflicts with function mass.Mass!(double,string).Mass.opAdd at src\mass.d(38) I have a struct: struct Mass(T, S) { ... Mass!(T,S) opAdd(Mass!(T,S) other) { return op!+(other); } Mass!(O,S) opAdd(O)(Mass!(O,S) other) if (typeof(O) != typeof(T)) { return op!+(other); } ... } And I'm trying to do something like: Mass!(double,string) first = ... Mass!(double,string) second = ... auto result = first + second; I'm trying to add a Mass!(double,string) + Mass!(double,string), which should mean the second template gets ignored since T=double and O=double. What am I missing?
Re: Template constraints and opAdd
John: Mass!(T,S) opAdd(Mass!(T,S) other) { If you are using D2 then don't use opAdd, use opBinary: http://dlang.org/operatoroverloading.html#Binary Bye, bearophile
opDispatch and UFCS
import std.conv, std.stdio, std.algorithm; struct S { void opDispatch(string s, T...)(T t) if (s.startsWith("foo")) { writeln(s); } } void main() { S s; s.foo(); auto p = s.to!string(); // Error: s.opDispatch!("to") isn't a template } Shouldn't the constraint on opDispatch allow the UFCS call on S?
Re: Template constraints and opAdd
On Tuesday, 2 July 2013 at 00:01:48 UTC, bearophile wrote: John: Mass!(T,S) opAdd(Mass!(T,S) other) { If you are using D2 then don't use opAdd, use opBinary: http://dlang.org/operatoroverloading.html#Binary Bye, bearophile Thanks, I switched over to using the new function and after fiddling with the functions I came up with a pair that seem to work. Mass!(T,S) opBinary(alias operator)(Mass!(T,S) other) { Mass!(T,S) opBinary(alias operator, O)(Mass!(O,S) other) if (!is(O == T)) {
Re: Template constraints and opAdd
On Monday, 1 July 2013 at 23:36:27 UTC, John wrote: struct Mass(T, S) { ... Mass!(T,S) opAdd(Mass!(T,S) other) { You can't overload non-templates with templates, yet. It's supposed to work, but not implemented. The workaround is simple enough: Mass!(T,S) opAdd()(Mass!(T,S) other) { // note the () after opAdd return op!+(other); } Mass!(O,S) opAdd(O)(Mass!(O,S) other) if (typeof(O) != typeof(T)) { That constraint should be: if (!is(O == T)) return op!+(other); } ... }
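Putting the workaround and the corrected constraint together, here is a minimal compilable sketch. The op!"+" helper from the original code is not shown in the thread, so an inline body and a single value member stand in for it here:

```d
struct Mass(T, S)
{
    T value;

    // Zero-parameter template: the empty () lets it overload
    // with the templated version below.
    Mass!(T, S) opBinary(string op : "+")(Mass!(T, S) other)
    {
        return Mass!(T, S)(cast(T)(value + other.value));
    }

    // Cross-type addition; the is() constraint excludes O == T
    // so the two overloads never conflict.
    Mass!(O, S) opBinary(string op : "+", O)(Mass!(O, S) other)
        if (!is(O == T))
    {
        return Mass!(O, S)(cast(O)(value + other.value));
    }
}

void main()
{
    auto first  = Mass!(double, string)(1.5);
    auto second = Mass!(double, string)(2.5);
    auto result = first + second;  // picks the first overload, since O would equal T
    assert(result.value == 4.0);
}
```

Note that `typeof(O) != typeof(T)` from the original cannot work: typeof takes an expression, not a type, which is why the `!is(O == T)` form is needed.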
Re: Template constraints and opAdd
John: Mass!(T,S) opBinary(alias operator)(Mass!(T,S) other) { Mass!(T,S) opBinary(alias operator, O)(Mass!(O,S) other) if (!is(O == T)) { Isn't operator better as string? -- anonymous: You can't overload non-templates with templates, I think it was recently fixed in Git. Bye, bearophile