Poll regarding delete removal
Just to satisfy my curiosity about how many D users think that the delete operator really is evil, I launched a poll on micropoll.com: http://www.micropoll.com/akira/mpview/979834-265542
[100% OT]
OK, only 98%: http://www.hulu.com/initial-d
dflplot/Plot2Kill, Most Mature *nix GUI For D2
I've refactored my dflplot lib to the point where the GUI-specific stuff is well abstracted from the GUI-agnostic stuff, in preparation for a port to a GUI lib that supports rotated fonts, saving bitmaps, and/or *nix. The plan is to support multiple GUI libs, including DFL (already working except for rotated fonts and saving) and at least one or two more.

I started trying to do a port to gtkD, but found the API to be intolerable in that it's poorly documented and requires you to use the low-level C APIs (read: raw pointers and functions_named_like_this_to_prevent_naming_collisions()) for basic stuff like constructing a drawing context. It might be a good choice once it matures, the docs improve, and the C cruft is wrapped better, but in the short run I don't think a gtkD port is happening.

OTOH, I realize that much, possibly the majority, of the D community consists of *nix users, and my plotting lib is useless to them as long as it's DFL-only. I also want to be able to create plots on some *nix machines I SSH into. Therefore, I want to get a *nix port working in the near future. What is the most mature GUI lib for D2 that supports *nix? By mature, I mean:

1. Any low-level C APIs are well wrapped in nice D OO APIs, to the point where you don't need to use the C APIs, at least in the common cases.

2. It compiles out of the box on 2.047.

3. Preferably the documentation is decent, though I got by without this for the original DFL version.

Also, I've tentatively named the multiple-GUI-lib version of this plotting lib Plot2Kill. Is this reasonable, or do we want to keep the names of our scientific libs more boring and politically correct?
Re: Getting # Physical CPUs
Georg Wrede wrote: On 07/14/2010 08:55 PM, dsimcha wrote: == Quote from eris (jvbur...@gmail.com)'s article This is a relatively difficult problem in general to do portably due to hardware differences, topology differences, changes to hardware, OS variations. Even the pthreads library doesn't reliably implement it in a portable manner. I came to the conclusion that the people most motivated to keep up to date on portable CPU core topology detection are the national supercomputing labs. INRIA and various US labs came up with the "Portable Hardware Locality" (hwloc) library. It gives you *everything* you need to discover the number of CPU sockets, memory architecture, number of cores per socket, control CPU affinity, etc. The HWLoc C libraries are maintained by the Open MPI project here: http://www.open-mpi.org/projects/hwloc/ I appreciate the help, but honestly, if detecting this properly requires adding dependencies to my projects, I'm happier with the simple workaround of having a manual command line switch to specify the number of CPUs. The projects in question are internal research projects, not things that are going to be released on the computer-illiterate masses. It would be nice to not have to manually specify such a parameter on every run, but not nice enough to be worth introducing a dependency. I can't imagine how this would not be a required part of the core library. For a language that claims to be thread-savvy, knowing the number of CPUs and the number of cores is simply obligatory homework. An extra point: the code that identifies them should not ever assume that all cores are identical, nor that they have identical access to machine resources. The day someone invents the 'unequal cores paradigm', where cores of dissimilar power are included in the same computer, should not catch us with our pants down. It really depends on what the purpose is. If you want to determine the precise core topology, the available information is heavily OS-dependent.
Note that there's potentially a large difference between the number of cores in the machine and the number of cores which the OS makes available to your app. Generally the second number is the one which matters. (A case in point: at bootup, the Linux kernel already enumerates and evaluates each found core individually.) Of course it does. It's trivial when you're an OS and have unrestricted access to the machine. An app is severely limited to what it can get from the OS. Currently core.cpuid doesn't make any OS calls at all. I think std.cpuid should be replaced with a new module std.sysinfo, which determines more features (such as available RAM).
Re: Overloading property vs. non-property
On 16.07.2010 01:46, BCS wrote: Hello dsimcha, Histogram(someData, 10) .barColor(getColor(255, 0, 0)) .histType(HistType.Probability) .toFigure.title("A Histogram") .xLabel("Stuff").showAsMain(); With a little metaprogramming you might be able to make a type that generates a fluent interface for any type. Using opDispatch you pass the args into a contained type and return self. The only difference from the user's standpoint is that you need one more function call in the chain.

Great idea. I figured a fancy solution wouldn't be worth it, but if it could be fully generic...

---
import std.stdio;

class Foo {
    int a;
    string b;

    @property int propA(int v) { return a = v; }
    @property int propA() { return a; }
    @property string propB(string v) { return b = v; }
    @property string propB() { return b; }

    void foo() { writeln("foo"); }
}

struct PSet(T) {
    T _obj;

    typeof(this) opDispatch(string prop, T...)(T val) {
        mixin("_obj." ~ prop ~ "=val;");
        return this;
    }
}

PSet!T pset(T)(T obj) { return PSet!T(obj); }

void main() {
    Foo f = new Foo;

    // Not sure why putting those foo()'s in there just like that works.
    // Compiler bug?
    pset(f).propA(1).foo().propB("something").foo();
    assert(f.a == 1);
    assert(f.b == "something");
}
---
Re: Getting # Physical CPUs
On 07/14/2010 08:55 PM, dsimcha wrote: == Quote from eris (jvbur...@gmail.com)'s article This is a relatively difficult problem in general to do portably due to hardware differences, topology differences, changes to hardware, OS variations. Even the pthreads library doesn't reliably implement it in a portable manner. I came to the conclusion that the people most motivated to keep up to date on portable CPU core topology detection are the national supercomputing labs. INRIA and various US labs came up with the "Portable Hardware Locality" (hwloc) library. It gives you *everything* you need to discover the number of CPU sockets, memory architecture, number of cores per socket, control CPU affinity, etc. The HWLoc C libraries are maintained by the Open MPI project here: http://www.open-mpi.org/projects/hwloc/ I appreciate the help, but honestly, if detecting this properly requires adding dependencies to my projects, I'm happier with the simple workaround of having a manual command line switch to specify the number of CPUs. The projects in question are internal research projects, not things that are going to be released on the computer-illiterate masses. It would be nice to not have to manually specify such a parameter on every run, but not nice enough to be worth introducing a dependency. I can't imagine how this would not be a required part of the core library. For a language that claims to be thread-savvy, knowing the number of CPUs and the number of cores is simply obligatory homework. An extra point: the code that identifies them should not ever assume that all cores are identical, nor that they have identical access to machine resources. The day someone invents the 'unequal cores paradigm', where cores of dissimilar power are included in the same computer, should not catch us with our pants down. (A case in point: at bootup, the Linux kernel already enumerates and evaluates each found core individually.)
Re: One case of careless opDispatch :)
"Dmitry Olshansky" wrote in message news:i1nns8$d4...@digitalmars.com... > > The tricky part is that *any* class with unconstrained (or loosely > constrained) opDispatch is also a Range, and at least a bidirectional one, > since it "provides" all the primitives: front, popFront etc. > In fact such classes could penetrate almost any attempts at C++ trait-like > stuff and should be avoided. > > The moral: unconstrained opDispatch == TROUBLE. > Hope that helps! > Duck typing == TROUBLE
Re: Overloading property vs. non-property
Hello dsimcha, Histogram(someData, 10) .barColor(getColor(255, 0, 0)) .histType(HistType.Probability) .toFigure.title("A Histogram") .xLabel("Stuff").showAsMain(); With a little metaprogramming you might be able to make a type that generates a fluent interface for any type. Using opDispatch you pass the args into a contained type and return self. The only difference from the user's standpoint is that you need one more function call in the chain.
Re: Getting # Physical CPUs
Walter Bright Wrote: > > > > $ make -flinux.mak > > make --no-print-directory -f OS=posix BUILD=release > > make[1]: OS=posix: No such file or directory > > make[1]: *** No rule to make target `OS=posix'. Stop. > > make: *** [release] Error 2 > > The "OS=posix" sets the macro OS to the value posix, it does not set the > target. > This has been a feature of make since at least the 1980's, earlier than Linux > even existed. So I'm astonished you're seeing this error. Looks to me like a macro isn't being set. In the first output line, the argument is '-f ' with nothing after it - the makefile name that should follow -f is missing. So the file name make sees is the next word, OS=posix.
Re: State of and plans for the garbage collector
== Quote from Leandro Lucarella (l...@llucax.com.ar)'s article > dsimcha, on 15 July at 19:23 you wrote: > > == Quote from Bane (branimir.milosavlje...@gmail.com)'s article > > > Anyway, I'm here bitching myself :) Just want to say that idea to have > > > more than > > one GC type to choose when compiling would be very interesting thing, if > > single > > implementation can't be good for all cases. > > > > If I had to choose one topic with most bitchin' on this newsgroup I have > > impression it would be the one about GC. They usually go from 'GC managed > > programs are slow, D ain't good enough', to 'language X has better GC than > > D', to > > ' GC that D has is bad at Z'. > > > > > > > > Why not make D summer of code - write your own GC optimized for special > > > > case > > of XYZ, send it, bundle all up in D with compiler switch '--useGC=XYZ'. > > That is > > the only way to really compare what is best for special case. > > > > > > > > If/when we have enough manpower to write/maintain multiple GC's, here are > > some GC > > designs that can be either optimizations or pessimizations depending on use > > case: > > > > 1. Precise heap scanning (i.e. in the absence of copying/heap compaction). > > If > > you allocate mostly small objects, you're probably very seldom bitten by > > false > > pointers and the O(1) overhead per block needed to store type info may be > > significant. If you allocate mostly large objects, you've probably been > > eaten > > alive by false pointers and the O(1) per block overhead is negligible to > > you. > > > > 2. Concurrent collection. If you use threads for concurrency/latency > > reasons, > > this can be a boon. If you use threads for parallelism/throughput reasons, > > or > > write single-threaded code, this is a complete waste. > Not completely, if you have a multi-core machine, suddenly your program becomes > multi-threaded and uses more than 1 core.
> In that case, a concurrent GC could improve the overall performance of the application. In theory. I thought that in practice concurrent GCs (I'm not talking about the case of thread-local heaps) are always slower throughput-wise than stop-the-world.
Re: State of and plans for the garbage collector
dsimcha, on 15 July at 19:23 you wrote: > == Quote from Bane (branimir.milosavlje...@gmail.com)'s article > > Anyway, I'm here bitching myself :) Just want to say that idea to have more > > than > one GC type to choose when compiling would be very interesting thing, if single > implementation can't be good for all cases. > > > If I had to choose one topic with most bitchin' on this newsgroup I have > impression it would be the one about GC. They usually go from 'GC managed > programs are slow, D ain't good enough', to 'language X has better GC than > D', to > ' GC that D has is bad at Z'. > > > > > > Why not make D summer of code - write your own GC optimized for special > > > case > of XYZ, send it, bundle all up in D with compiler switch '--useGC=XYZ'. That > is > the only way to really compare what is best for special case. > > > > > If/when we have enough manpower to write/maintain multiple GC's, here are > some GC > designs that can be either optimizations or pessimizations depending on use > case: > > 1. Precise heap scanning (i.e. in the absence of copying/heap compaction). > If > you allocate mostly small objects, you're probably very seldom bitten by false > pointers and the O(1) overhead per block needed to store type info may be > significant. If you allocate mostly large objects, you've probably been eaten > alive by false pointers and the O(1) per block overhead is negligible to you. > > 2. Concurrent collection. If you use threads for concurrency/latency reasons, > this can be a boon. If you use threads for parallelism/throughput reasons, or > write single-threaded code, this is a complete waste. Not completely, if you have a multi-core machine, suddenly your program becomes multi-threaded and uses more than 1 core. In that case, a concurrent GC could improve the overall performance of the application.
-- Leandro Lucarella (AKA luca) http://llucax.com.ar/ -- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) -- - God created you only to wake the town and fertilize the hens. - Another divine contradiction... They want me to go out partying with the females and then they want me to get up early. -- Inodoro Pereyra and a rooster
Re: Overloading property vs. non-property
On 15.07.2010 17:42, dsimcha wrote: == Quote from torhu (n...@spam.invalid)'s article In case the answer is no, that example of yours is the perfect opportunity to dust off the almost-forgotten with statement :)

with (Histogram(someData, 10)) {
    barColor = getColor(255, 0, 0);
    histType = HistType.Probability;
    toFigure.title = "A Histogram";
    xLabel = "Stuff";
    showAsMain();
}

A bit more typing, but I'd say that it's easier to read. But toFigure returns a Figure, not this. The idea is that you'd set all the properties for the Plot, then put toFigure somewhere in your chain, then set all the properties for the Figure. Oops, guess I should have waited until after my nap before posting :) You could nest the with statements, but then it's getting more verbose. Might be better to add a convenience constructor or two to Figure that takes care of the most common cases, and have toFigure forward to that.

with (Histogram(someData, 10)) {
    barColor = getColor(255, 0, 0);
    histType = HistType.Probability;
    toFigure("A Histogram", "Stuff").showAsMain();
}

Other options include having a factory function that returns a probability histogram, or even making it a template parameter and having a ProbabilityHistogram alias, etc. A few small changes could help a lot for the common use cases. It would never be quite as flexible as what you have now, but you might get close enough.
Re: TDPL notes, part 2
Weren't you leaving this list for good? On Thu, 15 Jul 2010 09:10:03 -0500, retard wrote: Thu, 15 Jul 2010 07:51:55 -0400, bearophile wrote: P 61: this is so hard to read that I don't want to see anything similar even in small script-like programs. The D compiler can even disallow such long chains: int c = (a = b, b = 7, 8); I suppose this mostly explains why real tuples aren't coming to D. Both of the authors love the C/C++ style comma operator for "code generation purposes", as you can see above. Another advantage is that your time won't be wasted when you switch from D to C++ to do some real world programming. -- Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
Re: State of and plans for the garbage collector
Bane, on 15 July at 14:34 you wrote: > If I had to choose one topic with most bitchin' on this newsgroup > I have impression it would be the one about GC. They usually go from > 'GC managed programs are slow, D ain't good enough', to 'language > X has better GC than D', to ' GC that D has is bad at Z'. > > Why not make D summer of code - write your own GC optimized for > special case of XYZ, send it, bundle all up in D with compiler switch > '--useGC=XYZ'. That is the only way to really compare what is best for > special case. The GC I'm working on is configurable at startup (runtime) via environment variables. The idea is to be able to tune the GC for different programs without even recompiling. I already made the compile-time options for memory stomping and sentinel configurable at runtime, and it works great; there is no noticeable performance penalty when you don't use them either. I think this is really the way to go. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ -- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) -- The poor seek their destiny. Here it is; don't you see it? -- Emilio Vaporeso. March 1914
Re: Getting # Physical CPUs
Walter Bright, on 15 July at 11:40 you wrote: > dsimcha wrote: > >Here's the error message I'm getting. I know basically nothing about make > >except > >that it's a build system and that it almost never works, so I can't even > >begin to > >debug this. Here's the error message I've been getting, on a freshly > >unpacked > >2.047 directory on some ancient Linux distro that my sys admin insists on > >using: > > > >$ make -flinux.mak > >make --no-print-directory -f OS=posix BUILD=release > >make[1]: OS=posix: No such file or directory > >make[1]: *** No rule to make target `OS=posix'. Stop. > >make: *** [release] Error 2 > > The "OS=posix" sets the macro OS to the value posix, it does not set > the target. This has been a feature of make since at least the > 1980's, earlier than Linux even existed. So I'm astonished you're > seeing this error. This is even standard POSIX: http://www.opengroup.org/onlinepubs/009695399/utilities/make.html (see the OPERANDS section) -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ -- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) -- - What are you up to, little mouse? - Waiting a little while...
Re: One case of careless opDispatch :)
On Thu, 15 Jul 2010 15:34:23 -0400, Dmitry Olshansky wrote: As a practical habit, once I stumble upon a very tricky error, I usually share the valuable knowledge of "when you do this ... and get that ... it's probably because ...". Damn, sometimes they can even become cool quizzes... So to warn those oblivious to the dangers of opDispatch, here is my nightmare from yesterday; the stripped-down code is below.

import std.algorithm;
import std.array;

class Widget {
    string _name;
    Widget[] _children;

    this(in string name) { _name = name.idup; }

    Widget opDispatch(string nm)() {
        auto r = find!((Widget c){ return c._name == nm; })(_children);
        return r.front();
    }
}

void main() {
    Widget g = new Widget("G");
    Widget[] warr = [new Widget("W"), g];
    find(warr, g);
}

produces:

Error1Error: template std.algorithm.startsWith(alias pred = "a == b",Range,Ranges...) if (isInputRange!(Range) && Ranges.length > 0 && is(typeof(binaryFun!(pred)(doesThisStart.front,withOneOfThese[0].front
startsWith(alias pred = "a == b",Range,Ranges...) if (isInputRange!(Range) && Ranges.length > 0 && is(typeof(binaryFun!(pred)(doesThisStart.front,withOneOfThese[0].front
matches more than one template declaration,
C:\dmd2\windows\bin\..\..\src\phobos\std\algorithm.d(1892): startsWith(alias pred = "a == b",Range,Ranges...) if (isInputRange!(Range) && Ranges.length > 0 && is(typeof(binaryFun!(pred)(doesThisStart.front,withOneOfThese[0].front
and
C:\dmd2\windows\bin\..\..\src\phobos\std\algorithm.d(1980): startsWith(alias pred = "a == b",Range,Elements...) if (isInputRange!(Range) && Elements.length > 0 && is(typeof(binaryFun!(pred)(doesThisStart.front,withOneOfThese[0]
C:\dmd2\windows\bin\..\..\src\phobos\std\algorithm.d(1488)

The tricky part is that *any* class with unconstrained (or loosely constrained) opDispatch is also a Range, and at least a bidirectional one, since it "provides" all the primitives: front, popFront, etc. In fact such classes could defeat almost any attempt at C++ trait-like stuff and should be avoided. The moral: unconstrained opDispatch == TROUBLE. Hope that helps! P.S. Strangely enough, that problem hadn't shown up until the update to the 2.047 release, so it was probably triggered by some changes to Phobos. I guess better sooner than later. :)

I've run into this before, with other compile-time tests such as isAssociativeArray. Often, the real bug is that the tests themselves are too permissive.
One case of careless opDispatch :)
As a practical habit, once I stumble upon a very tricky error, I usually share the valuable knowledge of "when you do this ... and get that ... it's probably because ...". Damn, sometimes they can even become cool quizzes... So to warn those oblivious to the dangers of opDispatch, here is my nightmare from yesterday; the stripped-down code is below.

import std.algorithm;
import std.array;

class Widget {
    string _name;
    Widget[] _children;

    this(in string name) { _name = name.idup; }

    Widget opDispatch(string nm)() {
        auto r = find!((Widget c){ return c._name == nm; })(_children);
        return r.front();
    }
}

void main() {
    Widget g = new Widget("G");
    Widget[] warr = [new Widget("W"), g];
    find(warr, g);
}

produces:

Error1Error: template std.algorithm.startsWith(alias pred = "a == b",Range,Ranges...) if (isInputRange!(Range) && Ranges.length > 0 && is(typeof(binaryFun!(pred)(doesThisStart.front,withOneOfThese[0].front
startsWith(alias pred = "a == b",Range,Ranges...) if (isInputRange!(Range) && Ranges.length > 0 && is(typeof(binaryFun!(pred)(doesThisStart.front,withOneOfThese[0].front
matches more than one template declaration,
C:\dmd2\windows\bin\..\..\src\phobos\std\algorithm.d(1892): startsWith(alias pred = "a == b",Range,Ranges...) if (isInputRange!(Range) && Ranges.length > 0 && is(typeof(binaryFun!(pred)(doesThisStart.front,withOneOfThese[0].front
and
C:\dmd2\windows\bin\..\..\src\phobos\std\algorithm.d(1980): startsWith(alias pred = "a == b",Range,Elements...) if (isInputRange!(Range) && Elements.length > 0 && is(typeof(binaryFun!(pred)(doesThisStart.front,withOneOfThese[0]
C:\dmd2\windows\bin\..\..\src\phobos\std\algorithm.d(1488)

The tricky part is that *any* class with unconstrained (or loosely constrained) opDispatch is also a Range, and at least a bidirectional one, since it "provides" all the primitives: front, popFront, etc. In fact such classes could defeat almost any attempt at C++ trait-like stuff and should be avoided. The moral: unconstrained opDispatch == TROUBLE.

Hope that helps! P.S. Strangely enough, that problem hadn't shown up until the update to the 2.047 release, so it was probably triggered by some changes to Phobos. I guess better sooner than later. -- Dmitry Olshansky
Re: State of and plans for the garbage collector
== Quote from Bane (branimir.milosavlje...@gmail.com)'s article > Anyway, I'm here bitching myself :) Just want to say that idea to have more > than one GC type to choose when compiling would be very interesting thing, if single implementation can't be good for all cases. > > If I had to choose one topic with most bitchin' on this newsgroup I have impression it would be the one about GC. They usually go from 'GC managed programs are slow, D ain't good enough', to 'language X has better GC than D', to ' GC that D has is bad at Z'. > > > > Why not make D summer of code - write your own GC optimized for special case of XYZ, send it, bundle all up in D with compiler switch '--useGC=XYZ'. That is the only way to really compare what is best for special case. > >

If/when we have enough manpower to write/maintain multiple GCs, here are some GC designs that can be either optimizations or pessimizations depending on use case:

1. Precise heap scanning (i.e. in the absence of copying/heap compaction). If you allocate mostly small objects, you're probably very seldom bitten by false pointers, and the O(1) overhead per block needed to store type info may be significant. If you allocate mostly large objects, you've probably been eaten alive by false pointers, and the O(1) per block overhead is negligible to you.

2. Concurrent collection. If you use threads for concurrency/latency reasons, this can be a boon. If you use threads for parallelism/throughput reasons, or write single-threaded code, this is a complete waste.

3. Thread-local allocators. Quite simply, it's a space-speed tradeoff, and it depends which is more important to you.
Re: State of and plans for the garbage collector
On Thu, 15 Jul 2010 21:34:36 +0300, Bane wrote: Why not make D summer of code - write your own GC optimized for special case of XYZ, send it, bundle all up in D with compiler switch '--useGC=XYZ'. That is the only way to really compare what is best for special case. In D1, the garbage collector is actually compiled to a stand-alone library, but then it's statically linked into Phobos. If you edit the Phobos makefile a bit, you should then be able to specify the GC .lib to link to on the compiler/linker command-line. In D2 it looks like the GC is clumped together with the runtime. -- Best regards, Vladimir mailto:vladi...@thecybershadow.net
Re: Getting # Physical CPUs
== Quote from Walter Bright (newshou...@digitalmars.com)'s article > > And here's the error I get when I try on a different machine w/ a more > > modern > > distro (this one is probably due to lack of 64 bit libs): > > > > $ make -flinux.mak > > make --no-print-directory -f linux.mak OS=posix BUILD=release > > cc -c -m32 -O3 etc/c/zlib/adler32.c -ogenerated/posix/release/etc/c/zlib/adler32.o > > cc -c -m32 -O3 etc/c/zlib/compress.c -ogenerated/posix/release/etc/c/zlib/compress.o > > cc -c -m32 -O3 etc/c/zlib/crc32.c > > -ogenerated/posix/release/etc/c/zlib/crc32.o > > In file included from /usr/include/features.h:378, > > from /usr/include/string.h:26, > > from etc/c/zlib/zutil.h:23, > > from etc/c/zlib/crc32.c:29: > > /usr/include/gnu/stubs.h:7:27: error: gnu/stubs-32.h: No such file or > > directory > > make[1]: *** [generated/posix/release/etc/c/zlib/crc32.o] Error 1 > > make: *** [release] Error 2 > This is most likely because you have not gotten the 32 bit dev system > installed > on your 64 bit system (it usually is not installed by the default linux > install). Which one do you have? > BTW, using a script rather than make wouldn't have helped you with the second > issue. Yeah, in my original post I meant to say it was probably due to lack of 32-bit libs, which I don't have root privileges to install on the machine in question. It can see the same file systems via NFS as some machines that do have 32-bit libs installed, so my usual kludge is to always use a machine w/ 32-bit libs for building. Even so, I only figured this out b/c I was previously aware of the problem. The point isn't that make failed here, it's that it failed with an absolutely inscrutable error message. I wouldn't have even been able to begin guessing what was wrong if I didn't already know about the 32-bit lib issue.
Re: State of and plans for the garbage collector
Anyway, I'm here bitching myself :) Just want to say that the idea of having more than one GC type to choose from when compiling would be a very interesting thing, if a single implementation can't be good for all cases. > If I had to choose one topic with most bitchin' on this newsgroup I have > impression it would be the one about GC. They usually go from 'GC managed > programs are slow, D ain't good enough', to 'language X has better GC than > D', to ' GC that D has is bad at Z'. > > Why not make D summer of code - write your own GC optimized for special case > of XYZ, send it, bundle all up in D with compiler switch '--useGC=XYZ'. That > is the only way to really compare what is best for special case. >
Re: Getting # Physical CPUs
dsimcha wrote: Here's the error message I'm getting. I know basically nothing about make except that it's a build system and that it almost never works, so I can't even begin to debug this. Here's the error message I've been getting, on a freshly unpacked 2.047 directory on some ancient Linux distro that my sys admin insists on using: $ make -flinux.mak make --no-print-directory -f OS=posix BUILD=release make[1]: OS=posix: No such file or directory make[1]: *** No rule to make target `OS=posix'. Stop. make: *** [release] Error 2 The "OS=posix" sets the macro OS to the value posix, it does not set the target. This has been a feature of make since at least the 1980's, earlier than Linux even existed. So I'm astonished you're seeing this error. And here's the error I get when I try on a different machine w/ a more modern distro (this one is probably due to lack of 64 bit libs): $ make -flinux.mak make --no-print-directory -f linux.mak OS=posix BUILD=release cc -c -m32 -O3 etc/c/zlib/adler32.c -ogenerated/posix/release/etc/c/zlib/adler32.o cc -c -m32 -O3 etc/c/zlib/compress.c -ogenerated/posix/release/etc/c/zlib/compress.o cc -c -m32 -O3 etc/c/zlib/crc32.c -ogenerated/posix/release/etc/c/zlib/crc32.o In file included from /usr/include/features.h:378, from /usr/include/string.h:26, from etc/c/zlib/zutil.h:23, from etc/c/zlib/crc32.c:29: /usr/include/gnu/stubs.h:7:27: error: gnu/stubs-32.h: No such file or directory make[1]: *** [generated/posix/release/etc/c/zlib/crc32.o] Error 1 make: *** [release] Error 2 This is most likely because you have not gotten the 32 bit dev system installed on your 64 bit system (it usually is not installed by the default linux install). Which one do you have? BTW, using a script rather than make wouldn't have helped you with the second issue.
Re: Getting # Physical CPUs
Philippe Sigaud wrote: As a side note, why is there both a std.cpuid and a core.cpuid? Does std use core? No. In that case, why not import std.cpuid? std.cpuid is deprecated.
Re: State of and plans for the garbage collector
If I had to choose one topic with the most bitchin' on this newsgroup, I have the impression it would be the one about GC. They usually go from 'GC managed programs are slow, D ain't good enough', to 'language X has better GC than D', to 'GC that D has is bad at Z'.

Why not make a D summer of code - write your own GC optimized for the special case of XYZ, send it, bundle it all up in D with compiler switch '--useGC=XYZ'. That is the only way to really compare what is best for a special case.

> Okay. I really don't know much about garbage collectors, how they work, or > what > makes one particularly good or bad (other than the fact that it needs to be > efficient execution-wise and manage memory wisely so that you don't use too > much > of it or do anything else that would be an overall negative for performance). > However, from the comments here - both recent and in the past - it's pretty > clear that D's garbage collector is fairly outdated. I would assume that that > would be negative for performance - certainly it would mean that significant > improvements could be made. > > So, my question is this: what are the plans for the garbage collector? Is the > intention to continue to improve it bit by bit, to give it a major overhaul > at > some point, to outright replace it at a later date, or something else > entirely? > > If D is going to compete with C and C++, it needs to be highly efficient, and > if > the garbage collector isn't up to snuff, that's going to be a big problem. > I'm > not looking to complain about the current garbage collector - I really don't > know how good or bad it is - but if it is rather poor (as I've gotten the > impression that it is - at least in some respects - from various discussions > on > it here), then I'd assume that it needs a major overhaul or replacement at > some > point. So, are there any specific plans with regards to that, or is that just > something that may be considered in the future? > > - Jonathan M Davis
Re: Getting # Physical CPUs
As a side note, why is there both a std.cpuid and a core.cpuid? Does std use core? In that case, why not import std.cpuid?
Re: Extending deprecated
On Jul 16, 10 01:18, Mafi wrote: Hi folks, When porting some D1 code to D2, I had the idea to add an alternative syntax for deprecated: deprecated ( string ), where string is any valid string or char. The sense of this would be 'deprecated by', so the compiler should then output "xy is deprecated by ..." instead of "xy is deprecated". What do you think about that? Mafi Proposed already: http://article.gmane.org/gmane.comp.lang.d.dmd.devel/360
Re: TDPL notes, part 1
On 07/14/2010 07:28 AM, bearophile wrote: I have finally received my copy of The D Programming Language :-) This is a first post of notes that I am writing while I read this text for the first time. Thanks. Though the effort is definitely to be appreciated, there is a high risk that such long streams of consciousness come and go and are forgotten. There are more persistent places for proposing changes to the language or the book: - for bug reports and enhancement proposals for D, use our Bugzilla repository (http://d.puremagic.com/issues) - for book errata use http://erdani.com/tdpl/errata I'd like to know how many copies have been sold so far; it can give a starting idea of how many people are interested in D2. I receive a statement from the publisher every 6 months. My understanding is that there are significant seasonal variations (summer being generally a slow time) and that the #1 influencing factor of sales is good reviews. I have seen the PDF on the site, but I'd like to see colored code in the electronic text. Source code colorization improves code readability (an offline PDF version that allows copy&paste would let me give better quotations here). Unfortunately my choice in the matter is pretty limited. For the publisher a different version than the one printed is a red flag - in theory there could be differences in content too. I'll see what I can do. P 8, first program: - Unsigned integers are dangerous in D, their usage must be discouraged as much as possible, use them only when they are strictly necessary, they are premature optimization, don't use them in the first few hundred pages of the book. And assigning length to a uint loses precision anyway, because length is a size_t that can be 64 bits too, so in this early example it is *much* better to use just one int value. That's indeed a bug, I primed the errata on your behalf. Andrei
Extending deprecated
Hi folks, When porting some D1 code to D2, I had the idea to add an alternative syntax for deprecated: deprecated ( string ), where string is any valid string or char. The sense of this would be 'deprecated by', so the compiler should then output "xy is deprecated by ..." instead of "xy is deprecated". What do you think about that? Mafi
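For illustration, the proposed extension might read like this (a hypothetical sketch of the syntax described above; the exact diagnostic wording and the names used are assumptions, not part of any implementation):

```d
// Hypothetical syntax: attach a "deprecated by" string to the attribute.
deprecated("std.algorithm.sort")  // assumed replacement named here
int[] mySort(int[] arr) { return arr; }

void main() {
    auto a = mySort([3, 1, 2]);
    // The idea is that the compiler would then report something like
    // "mySort is deprecated by std.algorithm.sort"
    // instead of the bare "mySort is deprecated".
}
```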
Re: TDPL notes, part 2
On 07/15/2010 09:10 AM, retard wrote: Thu, 15 Jul 2010 07:51:55 -0400, bearophile wrote: P 61: this is so hard to read that I don't want to see anything similar even in small script-like programs. The D compiler can even disallow such long chains: int c = (a = b, b = 7, 8); I suppose this mostly explains why the real tuples aren't coming to D. Both of the authors love the C/C++ style comma operator for "code generation purposes" as you can see above. Another advantage is that your time won't be wasted when you switch from D to C++ to do some real world programming. For what it's worth - I don't care much about the comma operator, and I extremely strongly believe Walter's argument involving code generation has no validity whatsoever. The contrived example mentioned above is given as an illustration for the section on the comma operator. Andrei
Re: Why is array.reverse a property and not a method?
On 12 July 2010 21:28, bearophile wrote: > Andrei Alexandrescu: > > > sort is all but deprecated, since std.algorithm.sort exists. > > > > > > reverse could even more easily be implemented as a library function > than > > > sort, it should be removed as well. > > > > http://www.digitalmars.com/d/2.0/phobos/std_algorithm.html#reverse > > The D site could use a page that lists the deprecated D2 features (and > maybe for each of them lists what to use instead of it), so current > D2 programmers can avoid what will be removed. > > Yes, please. This is not only useful for current D2 programmers, but also for newcomers (such as myself). Often I use trial and error alongside the reading of references to learn new languages. If the compiler allows many language constructs/features that are deprecated, it would be very useful to have such a list. Otherwise I'm afraid that I will be writing D2 code that will work today, but will break with a newer, more strict version of the compiler when it arrives. Not that I'm planning on writing huge amounts of D2 code right now ;-) but I wouldn't like having to unlearn things later. At least I'd like to keep that to a minimum. Groet, Tim
Re: Getting # Physical CPUs
== Quote from Don (nos...@nospam.com)'s article > dsimcha wrote: > > == Quote from Don (nos...@nospam.com)'s article > >> dsimcha wrote: > >>> == Quote from Don (nos...@nospam.com)'s article > [snip] > Thanks, that's definitely a bug. The code in core.cpuid has not been > tested on the most recent CPUs (Intel added a totally new method) and > their documentation is quite convoluted. It's hard to get it right > without an actual machine. > >>> Bug 4462. http://d.puremagic.com/issues/show_bug.cgi?id=4462 > >> Please check if the latest druntime commit fixes this. > > > > Thanks. Unfortunately I can't test this because the Linux build script for > > Phobos > > is broken on my machine in some inscrutable way. Frankly, my success rate > > at > > building stuff from other people's make files is well under 50%. Make is > > just a > > horrible technology that needs to die a horrible death. We should be > > eating our > > own dogfood and using rdmd for build scripts. > I agree. > In this case, core.cpuid is completely stand-alone. So you could just > copy it into another directory and change the module statement. Great idea. Unfortunately that still doesn't fix it. I get different wrong information on some machines, but it's still wrong, and the specific example I posted to Bugzilla hasn't changed at all.
Re: Getting # Physical CPUs
dsimcha wrote: == Quote from Don (nos...@nospam.com)'s article dsimcha wrote: == Quote from Don (nos...@nospam.com)'s article [snip] Thanks, that's definitely a bug. The code in core.cpuid has not been tested on the most recent CPUs (Intel added a totally new method) and their documentation is quite convoluted. It's hard to get it right without an actual machine. Bug 4462. http://d.puremagic.com/issues/show_bug.cgi?id=4462 Please check if the latest druntime commit fixes this. Thanks. Unfortunately I can't test this because the Linux build script for Phobos is broken on my machine in some inscrutable way. Frankly, my success rate at building stuff from other people's make files is well under 50%. Make is just a horrible technology that needs to die a horrible death. We should be eating our own dogfood and using rdmd for build scripts. I agree. In this case, core.cpuid is completely stand-alone. So you could just copy it into another directory and change the module statement.
Re: Overloading property vs. non-property
== Quote from torhu (n...@spam.invalid)'s article > In case the answer is no, that example of yours is the perfect > opportunity to dust off the almost-forgotten with statement :) > with (Histogram(someData, 10)) { > barColor = getColor(255, 0, 0); > histType = HistType.Probability; > toFigure.title = "A Histogram"; > xLabel = "Stuff"; > showAsMain(); > } > A bit more typing, but I'd say that it's easier to read. But toFigure returns a Figure, not this. The idea is that you'd set all the properties for the Plot, then put toFigure somewhere in your chain, then set all the properties for the Figure.
Re: State of and plans for the garbage collector
On Thu, 15 Jul 2010 04:28:43 -0400, Vladimir Panteleev wrote: On Thu, 15 Jul 2010 10:18:38 +0300, Jonathan M Davis wrote: Okay. I really don't know much about garbage collectors, how they work, or what makes one particularly good or bad (other than the fact that it needs to be efficient execution-wise and manage memory wisely so that you don't use too much of it or do anything else that would be an overall negative for performance). However, from the comments here - both recent and in the past - it's pretty clear that D's garbage collector is fairly outdated. I would assume that that would be negative for performance - certainly it would mean that significant improvements could be made. IMO the D GC isn't bad, it's just mediocre. (It could have been way worse.) I would like to use this opportunity to bring up a GC implementation written by Jeremie Pelletier, which seems to have gone mostly unnoticed when it was posted as a reply to D.announce: http://pastebin.com/f7a3b4c4a Aside from multiple optimizations across the board, this GC employs an interesting different strategy. The gist of it is that it iteratively destroys only objects that have no immediate references. In the case of long linked lists, this trades destruction complexity with scan complexity, which is a very good change - most times deeply-nested structures such as linked lists survive multiple generational cycles. Wouldn't that mean it can't handle cycles? Jeremie, if you're reading this: how goes your D2 runtime project? (I also have an unfinished generational GC lying around, which is still unknown if it's viable performance-wise - I should really try to finish it one day.)
Re: Overloading property vs. non-property
On 15.07.2010 15:16, dsimcha wrote: Once property syntax is fully enforced (not necessarily recommended) will it be possible to overload properties against non-properties? My use case is that I'm thinking about API improvements for my dflplot lib and one thing that I would really like is to give a fluent interface to everything to further cut back on the amount of boilerplate needed to generate simple plots. For example: Histogram(someData, 10) .barColor(getColor(255, 0, 0)) .histType(HistType.Probability) .toFigure.title("A Histogram") .xLabel("Stuff").showAsMain(); The problem is that I also want things like barColor and title to be settable via normal property syntax, using the equals sign. Right now, this "just works" because D's current non-analness about enforcing @property-ness is awesome 99% of the time even if it leads to a few weird corner cases. Will there be a way to express such an interface to be provided (calling a setter as either a member function or a property at the user's choice) once @property is fully implemented? In case the answer is no, that example of yours is the perfect opportunity to dust off the almost-forgotten with statement :) with (Histogram(someData, 10)) { barColor = getColor(255, 0, 0); histType = HistType.Probability; toFigure.title = "A Histogram"; xLabel = "Stuff"; showAsMain(); } A bit more typing, but I'd say that it's easier to read.
Re: Why will the delete keyword be removed?
== Quote from Max Samukha (spam...@d-coding.com)'s article > Not that I fiercely disagree but ideally I'd want it to be obliterated > to an invalid but easily recognizable state. It might help to discover > dangling pointer errors early. Otherwise, leaving a destroyed object in > a perfectly valid state may make debugging more fun than it needs to be. You could use: class A { int guard = 0xdeadbeef; this () { guard = 0; } invariant () { assert (guard == 0); } } Really annoying, though, and it makes more sense to set the guard in the destructor. (A compiler option to do something like this automatically would be nice.)
Re: TDPL notes, part 2
Thu, 15 Jul 2010 07:51:55 -0400, bearophile wrote: > P 61: this is so hard to read that I don't want to see anything similar > even in small script-like programs. The D compiler can even disallow > such long chains: int c = (a = b, b = 7, 8); I suppose this mostly explains why the real tuples aren't coming to D. Both of the authors love the C/C++ style comma operator for "code generation purposes" as you can see above. Another advantage is that your time won't be wasted when you switch from D to C++ to do some real world programming.
Re: Getting # Physical CPUs
== Quote from Don (nos...@nospam.com)'s article > dsimcha wrote: > > == Quote from Don (nos...@nospam.com)'s article > >> [snip] > >> Thanks, that's definitely a bug. The code in core.cpuid has not been > >> tested on the most recent CPUs (Intel added a totally new method) and > >> their documentation is quite convoluted. It's hard to get it right > >> without an actual machine. > > > > Bug 4462. http://d.puremagic.com/issues/show_bug.cgi?id=4462 > Please check if the latest druntime commit fixes this. Thanks. Unfortunately I can't test this because the Linux build script for Phobos is broken on my machine in some inscrutable way. Frankly, my success rate at building stuff from other people's make files is well under 50%. Make is just a horrible technology that needs to die a horrible death. We should be eating our own dogfood and using rdmd for build scripts. Here's the error message I'm getting. I know basically nothing about make except that it's a build system and that it almost never works, so I can't even begin to debug this. Here's the error message I've been getting, on a freshly unpacked 2.047 directory on some ancient Linux distro that my sys admin insists on using: $ make -flinux.mak make --no-print-directory -f OS=posix BUILD=release make[1]: OS=posix: No such file or directory make[1]: *** No rule to make target `OS=posix'. Stop. 
make: *** [release] Error 2 And here's the error I get when I try on a different machine w/ a more modern distro (this one is probably due to lack of 64 bit libs): $ make -flinux.mak make --no-print-directory -f linux.mak OS=posix BUILD=release cc -c -m32 -O3 etc/c/zlib/adler32.c -ogenerated/posix/release/etc/c/zlib/adler32.o cc -c -m32 -O3 etc/c/zlib/compress.c -ogenerated/posix/release/etc/c/zlib/compress.o cc -c -m32 -O3 etc/c/zlib/crc32.c -ogenerated/posix/release/etc/c/zlib/crc32.o In file included from /usr/include/features.h:378, from /usr/include/string.h:26, from etc/c/zlib/zutil.h:23, from etc/c/zlib/crc32.c:29: /usr/include/gnu/stubs.h:7:27: error: gnu/stubs-32.h: No such file or directory make[1]: *** [generated/posix/release/etc/c/zlib/crc32.o] Error 1 make: *** [release] Error 2
Re: Overloading property vs. non-property
On Thu, 15 Jul 2010 09:16:47 -0400, dsimcha wrote: Once property syntax is fully enforced (not necessarily recommended) will it be possible to overload properties against non-properties? My use case is that I'm thinking about API improvements for my dflplot lib and one thing that I would really like is to give a fluent interface to everything to further cut back on the amount of boilerplate needed to generate simple plots. For example: Histogram(someData, 10) .barColor(getColor(255, 0, 0)) .histType(HistType.Probability) .toFigure.title("A Histogram") .xLabel("Stuff").showAsMain(); The problem is that I also want things like barColor and title to be settable via normal property syntax, using the equals sign. Right now, this "just works" because D's current non-analness about enforcing @property-ness is awesome 99% of the time even if it leads to a few weird corner cases. Will there be a way to express such an interface to be provided (calling a setter as either a member function or a property at the user's choice) once @property is fully implemented? I would say no. A property is not meant to be a function or vice versa. Also, a property setter should either return void or the type it's setting. I would suggest the following model: @property int x(int i); typeof(this) setX(int i); This looks good IMO when used: int m = c.x = 5; c.setX(5).setY(6); I used this in tango.sys.Process to set various parameters for process creation. -Steve
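The model suggested above might be sketched like this (a minimal hypothetical class, not taken from tango.sys.Process; the names `setX`/`setY` are illustrative):

```d
// Sketch of the suggested split: a @property setter whose result is the
// value being set, plus a separate fluent setter returning typeof(this).
class Config {
    private int _x, _y;

    @property int x(int i) { return _x = i; }    // property form: c.x = 5
    Config setX(int i) { _x = i; return this; }  // fluent form, chainable
    Config setY(int i) { _y = i; return this; }
}

void main() {
    auto c = new Config;
    int m = c.x = 5;       // property setter yields the value, so m == 5
    c.setX(5).setY(6);     // fluent chain, as in the Process example
}
```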
Re: State of and plans for the garbage collector
Jonathan M Davis, el 15 de julio a las 00:18 me escribiste: > Okay. I really don't know much about garbage collectors, how they work, or what makes one particularly good or bad (other than the fact that it needs to be efficient execution-wise and manage memory wisely so that you don't use too much of it or do anything else that would be an overall negative for performance). However, from the comments here - both recent and in the past - it's pretty clear that D's garbage collector is fairly outdated. I would assume that that would be negative for performance - certainly it would mean that significant improvements could be made. > > So, my question is this: what are the plans for the garbage collector? Is the intention to continue to improve it bit by bit, to give it a major overhaul at some point, to outright replace it at a later date, or something else entirely? > > If D is going to compete with C and C++, it needs to be highly efficient, and if the garbage collector isn't up to snuff, that's going to be a big problem. I'm not looking to complain about the current garbage collector - I really don't know how good or bad it is - but if it is rather poor (as I've gotten the impression that it is - at least in some respects - from various discussions on it here), then I'd assume that it needs a major overhaul or replacement at some point. So, are there any specific plans with regards to that, or is that just something that may be considered in the future? I'm working on a concurrent GC; things are going really slowly, but I plan to give it more attention this month, and I hope it will be finished (finished in my own terms, as it is my thesis) by the end of the year. I would like to explore merging the precise scanning patch and some other optimizations and collection strategies, but I'm not sure I will have the time (probably not).
I'm not sure how it would turn out, even though I'm doing regular benchmarks to ensure, at least, that the current performance is not degraded (I haven't started doing the collection concurrently yet). One more note: I'm working with D1, but using the Tango runtime, so I guess it should not be too hard to port to D2. -- Leandro Lucarella (AKA luca) http://llucax.com.ar/ -- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) -- Every day 21 new born babies will be given to the wrong parents
Re: TDPL notes, part 1
On Thu, 15 Jul 2010 12:13:27 +0200, Tim Verweij wrote: > On 14 July 2010 14:28, bearophile wrote: > >> (...) >> P 10: >> In this line of code: >> while (!input.empty) { >> There is not so much need of using an external function plus a >> negation: while (input.length) { >> (...) >> >> > I like writing: > while (!input.empty) { > To me, it better shows the meaning of the condition. I completely agree. I have started using the range interface for arrays whenever possible. It makes code a lot more self-explanatory, easier to read, and easier to reason about. -Lars
Overloading property vs. non-property
Once property syntax is fully enforced (not necessarily recommended) will it be possible to overload properties against non-properties? My use case is that I'm thinking about API improvements for my dflplot lib and one thing that I would really like is to give a fluent interface to everything to further cut back on the amount of boilerplate needed to generate simple plots. For example: Histogram(someData, 10) .barColor(getColor(255, 0, 0)) .histType(HistType.Probability) .toFigure.title("A Histogram") .xLabel("Stuff").showAsMain(); The problem is that I also want things like barColor and title to be settable via normal property syntax, using the equals sign. Right now, this "just works" because D's current non-analness about enforcing @property-ness is awesome 99% of the time even if it leads to a few weird corner cases. Will there be a way to express such an interface to be provided (calling a setter as either a member function or a property at the user's choice) once @property is fully implemented?
Re: Why will the delete keyword be removed?
On 07/14/2010 10:43 PM, Andrei Alexandrescu wrote: There is no early failure with dangling pointers. There is: class A { void foo() {} } class B : A { override void foo() {} } A a = new B; A a2 = a; clear(a); a2.foo(); If you reset the object state to .init, foo will succeed and the program will happily crawl on. If you zero out the object's memory, the call to foo will fail. I'd prefer the latter. The same with an interface: interface I { void foo(); } class A : I { void foo() {} } void main() { A a = new A; I i = a; clear(a); i.foo(); // would segfault } One more (rare but possible): void bar() { } class A { void function() p = &bar; void foo() { p(); } } void main() { A a = new A; A a2 = a; clear(a); a2.foo; // would segfault } I'd probably want clear() to run the destructor and zero the object's memory. No, because: class A { int x = -1; float y = 0; ... } You'd want that guy to be obliterated with the proper initializers for x and y. Not that I fiercely disagree but ideally I'd want it to be obliterated to an invalid but easily recognizable state. It might help to discover dangling pointer errors early. Otherwise, leaving a destroyed object in a perfectly valid state may make debugging more fun than it needs to be. In my world, clear() would run the dispose handlers, unset the memory block's BlkAttr.FINALIZE (if the object is on GC heap), call the object's destructor, destroy the monitor and zero the memory. I guess a call to rt_finalize could be used as part of this procedure. Or something like that.
TDPL notes, part 2
I hope Andrei appreciated my efforts :-) - The non-alphabetical index page 439 is a good idea. - The page thickness is OK for me. More comments on Chapter 1: Page 17: using null to represent empty arrays is not good. In D there is [], which is better for this. P 18: "foo in associativeArray" returns a pointer. So can this work in SafeD too? Maybe it can be accepted by SafeD if the pointer is not used and just tested against null. P 22: In this code: Object.factory("stats." ~ arg); This code is OK, but in my code I'd like to apply the DRY principle and use something similar to (that keeps working even if I change the name of the module): Object.factory(__traits(thisModuleName) ~ arg); This currently works, but it's not nice: Object.factory(split(to!string({class C {}; return new C;}()), ".")[0] ~ ".Foo"); (Probably there are already better ways to do it, but all of them, including the __traits one, can't be used in the first pages of the book, so the book page is OK). Chapter 2: P 35: adjacent string concatenation: this is bug-prone, see: http://d.puremagic.com/issues/show_bug.cgi?id=3827 P 38, first code example: the D strings contain immutable chars as the text says, but the book has to show this gotcha, which truly immutable strings like Python's don't have (this code runs): void main() { string s = "Hello"; s.length += 1; } P 49: the explanations on is() don't seem complete: is ( Type Identifier : TypeSpecialization, TemplateParameterList ) is ( Type Identifier == TypeSpecialization, TemplateParameterList ) But I think this is for the better; that syntax becomes unreadable. P 50: The part 2.3.5.3 on function calls is good. I hope dmd will eventually do what is written here.
P 55: part 2.3.9: there's an error: the concat (~) is not an addition because it's not commutative; generally: (s1 ~ s2) != (s2 ~ s1) P 56, usage examples of 'in': - LDC has been shown to be able to optimize away two nearby associative array lookups in all situations (and storing the pointer for much later usage is not a common usage pattern), and the usage of pointers in SafeD is not easy to do. So consider returning a cleaner boolean and improving the compiler instead. - I have just filed this bug: http://d.puremagic.com/issues/show_bug.cgi?id=4463 - As a usage example it is OK. In D1 I have avoided that if/else using: typedef double Double1 = 0.0; So you can just increment the default double initialization. Now you can probably use: static struct Double2 { double d = 0.0; alias d this; } But there's a bug and Double2 can't yet be used as an associative array value. P 57: a small note could be added on equality among associative arrays. P 58 part 2.3.12.2: Thanks to Don, those FP exceptions can be loud; this helps debug code (I would even appreciate severeExceptions being switched on by default in non-release builds): import std.math: FloatingPointControl; void main() { FloatingPointControl fpctrl; fpctrl.enableExceptions(FloatingPointControl.severeExceptions); double x; double y = x * 2.0; } P 59: using void with && and || is an ugly hack; if I find a line like this in production code I kill it with fire. This is not a good example to be present in the D2 reference book: line == "#\n" && writeln("..."); P 60: this is another bad example; it's a cute trick, but it's bad for production code: (predicate ? x : y) += 5 P 61: this is so hard to read that I don't want to see anything similar even in small script-like programs. The D compiler could even disallow such long chains: int c = (a = b, b = 7, 8); Bye, bearophile
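As a reminder of the P 56 point, the `in` expression's pointer result can be sketched like this (a minimal example of current D2 semantics; the variable names are illustrative):

```d
// `in` on an associative array yields a pointer to the value, or null
// if the key is absent -- the very pointer that worries SafeD above.
void main() {
    int[string] counts = ["a": 1];
    if (auto p = "a" in counts) {  // p has type int*
        ++*p;                      // reuse the lookup; no second hash
    }
    assert(counts["a"] == 2);
}
```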
Re: TDPL notes, part 1
On 14 July 2010 14:28, bearophile wrote: > (...) > P 10: > In this line of code: > while (!input.empty) { > There is not so much need of using an external function plus a negation: > while (input.length) { > (...) > I like writing: while (!input.empty) { To me, it better shows the meaning of the condition. Groet, Tim
Re: Getting # Physical CPUs
dsimcha wrote: == Quote from Don (nos...@nospam.com)'s article [snip] Thanks, that's definitely a bug. The code in core.cpuid has not been tested on the most recent CPUs (Intel added a totally new method) and their documentation is quite convoluted. It's hard to get it right without an actual machine. Bug 4462. http://d.puremagic.com/issues/show_bug.cgi?id=4462 Please check if the latest druntime commit fixes this.
Re: State of and plans for the garbage collector
On Thu, 15 Jul 2010 10:18:38 +0300, Jonathan M Davis wrote: Okay. I really don't know much about garbage collectors, how they work, or what makes one particularly good or bad (other than the fact that it needs to be efficient execution-wise and manage memory wisely so that you don't use too much of it or do anything else that would be an overall negative for performance). However, from the comments here - both recent and in the past - it's pretty clear that D's garbage collector is fairly outdated. I would assume that that would be negative for performance - certainly it would mean that significant improvements could be made. IMO the D GC isn't bad, it's just mediocre. (It could have been way worse.) I would like to use this opportunity to bring up a GC implementation written by Jeremie Pelletier, which seems to have gone mostly unnoticed when it was posted as a reply to D.announce: http://pastebin.com/f7a3b4c4a Aside from multiple optimizations across the board, this GC employs an interesting different strategy. The gist of it is that it iteratively destroys only objects that have no immediate references. In the case of long linked lists, this trades destruction complexity for scan complexity, which is a very good change - most times deeply-nested structures such as linked lists survive multiple generational cycles. Jeremie, if you're reading this: how goes your D2 runtime project? (I also have an unfinished generational GC lying around, for which it's still unknown if it's viable performance-wise - I should really try to finish it one day.) -- Best regards, Vladimir <vladi...@thecybershadow.net>
Re: Why will the delete keyword be removed?
On Thu, 15 Jul 2010 11:03:07 +0300, Rory McGuire wrote: On Thu, 15 Jul 2010 09:08:24 +0200, Vladimir Panteleev wrote: On Thu, 15 Jul 2010 04:00:49 +0300, Jonathan M Davis wrote: Ideally, you'd want things to blow up when such an object was used, with it clearly indicating that it was because you used an object which isn't supposed to exist anymore. I suggested this as well, by stomping on the object's memory in debug builds. Andrei has different goals. Surely you can't just stomp on the memory? You'd have to keep it allocated so nothing else ends up being allocated there, and you get weird inconsistent errors, debug mode or not. If you want to keep the stomped-on object allocated to prevent that, there is no problem doing it - just don't do immediate deallocation in debug builds, and let the GC collect the object when it sees no references. However, if you want to catch dangling pointer bugs with absolute certainty, the best solution is to use tools such as Valgrind, which keep track of which memory is safe to read, etc. -- Best regards, Vladimir <vladi...@thecybershadow.net>
Re: Why will the delete keyword be removed?
On Thu, 15 Jul 2010 09:08:24 +0200, Vladimir Panteleev wrote: On Thu, 15 Jul 2010 04:00:49 +0300, Jonathan M Davis wrote: Ideally, you'd want things to blow up when such an object was used, with it clearly indicating that it was because you used an object which isn't supposed to exist anymore. I suggested this as well, by stomping on the object's memory in debug builds. Andrei has different goals. Surely you can't just stomp on the memory? You'd have to keep it allocated so nothing else ends up being allocated there, and you get weird inconsistent errors, debug mode or not.
State of and plans for the garbage collector
Okay. I really don't know much about garbage collectors, how they work, or what makes one particularly good or bad (other than the fact that it needs to be efficient execution-wise and manage memory wisely so that you don't use too much of it or do anything else that would be an overall negative for performance). However, from the comments here - both recent and in the past - it's pretty clear that D's garbage collector is fairly outdated. I would assume that that would be negative for performance - certainly it would mean that significant improvements could be made. So, my question is this: what are the plans for the garbage collector? Is the intention to continue to improve it bit by bit, to give it a major overhaul at some point, to outright replace it at a later date, or something else entirely? If D is going to compete with C and C++, it needs to be highly efficient, and if the garbage collector isn't up to snuff, that's going to be a big problem. I'm not looking to complain about the current garbage collector - I really don't know how good or bad it is - but if it is rather poor (as I've gotten the impression that it is - at least in some respects - from various discussions on it here), then I'd assume that it needs a major overhaul or replacement at some point. So, are there any specific plans with regards to that, or is that just something that may be considered in the future? - Jonathan M Davis
Re: Why will the delete keyword be removed?
On Thu, 15 Jul 2010 04:00:49 +0300, Jonathan M Davis wrote: Ideally, you'd want things to blow up when such an object was used, with it clearly indicating that it was because you used an object which isn't supposed to exist anymore. I suggested this as well, by stomping on the object's memory in debug builds. Andrei has different goals. -- Best regards, Vladimir <vladi...@thecybershadow.net>
Re: Manual memory management in D2
Andrei Alexandrescu Wrote: > And how would you use such a feature effectively? I've seen such > "optional implementation" policies in standards such as SQL > (compatibility levels) and C++ (export). They _always_ fare disastrously. Just like we do it now: write code for the garbage-collected environment of your choice. > It's not about difficulty as much as constraining GC implementers > unnecessarily. Again: use a heap tuned for manual management to manage > memory manually, and a heap tuned for automatic management to manage > memory automatically. I think it's a very reasonable stance. Yes, the heap is used by the language expressions new and delete. That's exactly what I want to say: whether deallocation is supported or not is a feature of the chosen runtime and programming style.