Re: avgtime - Small D util for your everyday benchmarking needs
On 23 March 2012 17:53, Juan Manuel Cabo juanmanuel.c...@gmail.com wrote: But I think the most important change is that I'm now showing the 95% and 99% confidence intervals. (For the confidence intervals to mean anything, please everyone, remember to control your variables (don't defrag and benchmark :-) !!) so that apples are still apples and don't become oranges, and make sure N > 30.) More info on histograms and confidence intervals in the usage help. Dude, this is awesome. I tend to just use time, but if I was doing anything more complicated, I'd use this. I would suggest changing the name while you still can. avgtime is not that informative a name given that it now does more than just average times. -- James Miller
Re: avgtime - Small D util for your everyday benchmarking needs
On Friday, 23 March 2012 at 05:16:20 UTC, Andrei Alexandrescu wrote: [.] (man, the gaussian curve is everywhere, it never ceases to perplex me). I'm actually surprised. I'm working on benchmarking lately and the distributions I get are very concentrated around the minimum. Andrei Well, the shape of the curve depends a lot on how the random noise gets inside the measurement. I like 'ls -lR' because the randomness comes from everywhere, and it's quite bell-shaped. I guess there is a lot of I/O mess (even if I/O is all cached, there are lots of opportunities for kernel mutexes to mess everything up, I guess). When testing /bin/sleep 0.5, the histogram is pretty boring. And I guess that when testing something that's only CPU bound and doesn't make too many syscalls, the shape is more concentrated in a few values. On the other hand, I'm getting some weird bimodal (two peaks) curves sometimes, like the one I put in the README.md. It's definitely because of my laptop's CPU throttling, because it went away when I disabled it (for the curious ones, in ubuntu 64bit, here is a way to disable throttling (WARNING: might get hot until you undo or reboot): echo 160 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq echo 160 > /sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq (yes my cpu is 1.6GHz, but it rocks). --jm
Re: avgtime - Small D util for your everyday benchmarking needs
On Thursday, 22 March 2012 at 17:13:58 UTC, Manfred Nowak wrote: Juan Manuel Cabo wrote: like the unix 'time' command `version linux' is missing. -manfred Linux only for now. Will make it work in windows this weekend. I hope that's what you meant. --jm
Re: avgtime - Small D util for your everyday benchmarking needs
On Friday, 23 March 2012 at 06:51:48 UTC, James Miller wrote: Dude, this is awesome. I tend to just use time, but if I was doing anything more complicated, I'd use this. I would suggest changing the name while you still can. avgtime is not that informative a name given that it now does more than just Average times. -- James Miller Dude, this is awesome. Thanks!! I appreciate your feedback! I would suggest changing the name while you still can. Suggestions welcome!! --jm
Re: avgtime - Small D util for your everyday benchmarking needs
On Friday, 23 March 2012 at 05:51:40 UTC, Manfred Nowak wrote: | For samples, if it is known that they are drawn from a symmetric | distribution, the sample mean can be used as an estimate of the | population mode. I'm not printing the population mode, I'm printing the 'sample mode'. It has a very clear meaning: most frequent value. To have frequency, I group into 'bins' by precision: 12.345 and 12.3111 will both go to the 12.3 bin. and the program computes the variance as if the values of the sample follow a normal distribution, which is symmetric. This program doesn't compute the variance. Maybe you are talking about another program. This program computes the standard deviation of the sample. The sample doesn't need to be of any distribution to have a standard deviation. It is not a distribution parameter, it is a statistic. Therefore the mode of the sample is of interest only, when the variance is calculated wrongly. ??? The 'sample mode', 'median' and 'average' can quickly tell you something about the shape of the histogram, without looking at it. If the three coincide, then maybe you are in normal distribution land. The only place where I assume normal distribution is for the confidence intervals. And it's in the usage help. If you want to support estimating weird probability distribution parameters, forking and pull requests are welcome. Rewrites too. Good luck detecting distribution shapes ;-) -manfred PS: I should use Student's t to make the confidence intervals, and for computing that I should use the sample standard deviation (/(n-1)), but that is a completely different story. The z normal with n > 30 approximation is quite good. (I would have to embed a table for the Student's t tail factors, pull reqs welcome). PS2: I now fixed the confusion with the confidence interval of the variable and the confidence interval of the mu average, I simply now show both. (release 0.4). PS3: Statistics estimate distribution parameters. --jm
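For reference, the z-based interval described in the PS can be sketched in a few lines of D. This is a hedged illustration, not avgtime's actual code; it uses the sample standard deviation (divide by n-1) as the post suggests, and is only reasonable for n > 30:

```d
import std.algorithm : map, sum;
import std.math : sqrt;

// Sketch: two-sided 95% confidence interval for the mean,
// using the normal (z) approximation. Assumes n > 30.
double[2] ci95(double[] xs)
{
    immutable n = xs.length;
    immutable mean = xs.sum / n;
    // Sample variance: divide by n-1, as noted in the PS above.
    immutable var = xs.map!(x => (x - mean) ^^ 2).sum / (n - 1);
    immutable sd = sqrt(var);
    immutable z = 1.96; // 97.5% quantile of N(0,1)
    immutable half = z * sd / sqrt(cast(double) n);
    return [mean - half, mean + half];
}
```

Swapping the fixed 1.96 for a Student's t quantile with n-1 degrees of freedom would give the small-sample version discussed later in the thread.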
Re: avgtime - Small D util for your everyday benchmarking needs
On 23 March 2012 21:37, Juan Manuel Cabo juanmanuel.c...@gmail.com wrote: PS: I should use Student's t to make the confidence intervals, and for computing that I should use the sample standard deviation (/(n-1)), but that is a completely different story. The z normal with n > 30 approximation is quite good. (I would have to embed a table for the Student's t tail factors, pull reqs welcome). If it's possible to calculate it, then you can generate a table at compile-time using CTFE. Less error-prone, and controllable accuracy. -- James Miller
Re: avgtime - Small D util for your everyday benchmarking needs
On 23/03/12 09:37, Juan Manuel Cabo wrote: [snip] PS: I should use Student's t to make the confidence intervals, and for computing that I should use the sample standard deviation (/(n-1)), but that is a completely different story. The z normal with n > 30 approximation is quite good. (I would have to embed a table for the Student's t tail factors, pull reqs welcome). No, it's easy. Student t is in std.mathspecial.
Re: avgtime - Small D util for your everyday benchmarking needs
On 23/03/12 11:20, Don Clugston wrote: [snip] No, it's easy. Student t is in std.mathspecial. Aargh, I didn't get around to copying it in. But this should do it.
/** Inverse of Student's t distribution
 *
 * Given probability p and degrees of freedom nu,
 * finds the argument t such that the one-sided
 * studentsDistribution(nu,t) is equal to p.
 *
 * Params:
 *  nu = degrees of freedom. Must be >1
 *  p  = probability. 0 <= p <= 1
 */
real studentsTDistributionInv(int nu, real p )
in {
    assert(nu > 0);
    assert(p >= 0.0L && p <= 1.0L);
}
body {
    if (p == 0) return -real.infinity;
    if (p == 1) return  real.infinity;

    real rk, z;
    rk = nu;

    if ( p > 0.25L && p < 0.75L ) {
        if ( p == 0.5L ) return 0;
        z = 1.0L - 2.0L * p;
        z = betaIncompleteInv( 0.5L, 0.5L*rk, fabs(z) );
        real t = sqrt( rk*z/(1.0L-z) );
        if ( p < 0.5L )
            t = -t;
        return t;
    }
    int rflg = -1; // sign of the result
    if (p >= 0.5L) {
        p = 1.0L - p;
        rflg = 1;
    }
    z = betaIncompleteInv( 0.5L*rk, 0.5L, 2.0L*p );
    if (z < 0) return rflg * real.infinity;
    return rflg * sqrt( rk/z - rk );
}
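Assuming the function above (and its betaIncompleteInv dependency) is available, using it for the confidence intervals discussed earlier is a one-liner. A hedged sketch, with illustrative names:

```d
import std.math : sqrt;
// Assumes studentsTDistributionInv from Don's post above is in scope
// (it later shipped in std.mathspecial).

// Sketch: half-width of a two-sided 95% CI using Student's t
// instead of the fixed z = 1.96, which matters for small n.
double halfWidth95(double sampleSd, int n)
{
    // 97.5% quantile of t with n-1 degrees of freedom.
    immutable t = cast(double) studentsTDistributionInv(n - 1, 0.975L);
    return t * sampleSd / sqrt(cast(double) n);
}
```

For n > 30 the t quantile is close to 1.96, which is why the z approximation in avgtime 0.4 works well in practice.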
Re: avgtime - Small D util for your everyday benchmarking needs
On 3/23/12 12:51 AM, Manfred Nowak wrote: Andrei Alexandrescu wrote: You may want to also print the mode of the distribution, nontrivial but informative In case of this implementation and according to the given link: trivial and noninformative, because | For samples, if it is known that they are drawn from a symmetric | distribution, the sample mean can be used as an estimate of the | population mode. and the program computes the variance as if the values of the sample follow a normal distribution, which is symmetric. Therefore the mode of the sample is of interest only, when the variance is calculated wrongly. Again, benchmarks I've seen are always asymmetric. Not sure why those shown here are symmetric. The mode should be very close to the minimum (and in fact I think taking the minimum is a pretty good approximation of the sought-after time). Andrei
Re: avgtime - Small D util for your everyday benchmarking needs
On 3/23/12 3:02 AM, Juan Manuel Cabo wrote: On Friday, 23 March 2012 at 05:16:20 UTC, Andrei Alexandrescu wrote: [.] (man, the gaussian curve is everywhere, it never ceases to perplex me). I'm actually surprised. I'm working on benchmarking lately and the distributions I get are very concentrated around the minimum. Andrei Well, the shape of the curve depends a lot on how the random noise gets inside the measurement. [snip] Hmm, well the way I see it, the observed measurements have the following composition: X = T + Q + N where T > 0 (a constant) is the real time taken by the processing, Q >= 0 is the quantization noise caused by the limited resolution of the clock (can be considered 0 if the resolution is much smaller than the actual time), and N is noise caused by a variety of factors (other processes, throttling, interrupts, networking, memory hierarchy effects, and many more). The challenge is estimating T given a bunch of X samples. N can probably be approximated to a Gaussian, although for short timings I noticed it's more like bursts that just cause outliers. But note that N is always positive (therefore not 100% Gaussian), i.e. there's no way to insert some noise that makes the code seem artificially faster. It's all additive. Taking the mode of the distribution will estimate T + mode(N), which is informative because after all there's no way to eliminate noise. However, if the focus is improving T, we want an estimate as close to T as possible. In the limit, taking the minimum over infinitely many measurements of X would yield T. Andrei
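The X = T + Q + N decomposition suggests a trivial estimator: since the noise is additive and non-negative, the minimum over many runs converges on T. A hedged sketch in D (using today's std.datetime.stopwatch module; at the time of the thread, StopWatch lived in std.datetime):

```d
import std.datetime.stopwatch : AutoStart, StopWatch;

// Sketch: estimate T by taking the minimum observed time over many
// runs. Because N >= 0 and additive, no sample can be below T, so
// the minimum is a consistent estimator of T as runs grows.
long estimateMinNsecs(void delegate() work, size_t runs = 100)
{
    long best = long.max;
    foreach (_; 0 .. runs)
    {
        auto sw = StopWatch(AutoStart.yes);
        work();
        sw.stop();
        immutable t = sw.peek.total!"nsecs";
        if (t < best) best = t;
    }
    return best;
}
```

Note this estimates only T; as the rest of the thread argues, whether discarding N is desirable depends on whether the noise sources are part of what you are benchmarking.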
Re: avgtime - Small D util for your everyday benchmarking needs
On 3/23/12 5:51 AM, Don Clugston wrote: No, it's easy. Student t is in std.mathspecial. Aargh, I didn't get around to copying it in. But this should do it. [snip] Shouldn't we put this stuff in std.numeric, or create a std.stat module? I think also some functions for t-tests would be useful. Andrei
Walter on reddit with an older article
http://www.reddit.com/r/programming/comments/r9p4c/walter_bright_on_c_compilation_speed/ Andrei
GSoC: Linear Algebra and the SciD library
Hello, I'm a third year undergraduate at the University of Chicago majoring in mathematics. I'm very interested in working on the Matrix library through Google Summer of Code. The ideas page mentions that progress has already been made but that goals weren't completely met. What kind of support is already in place? Are there any specific types of functions that you would like to see added to the library? Although I'm relatively new to coding, I have a strong background in mathematics (including linear algebra). I've coded mainly in C but also in Java, Python, and very little in Racket. Is this project appropriate for an enthusiastic participant who is not yet an expert hacker? Thanks for your time, Cullen Seaton University of Chicago Class of 2013
Re: avgtime - Small D util for your everyday benchmarking needs
On Friday, 23 March 2012 at 15:33:18 UTC, Andrei Alexandrescu wrote: [snip] Taking the mode of the distribution will estimate T + mode(N), which is informative because after all there's no way to eliminate noise. However, if the focus is improving T, we want an estimate as close to T as possible. In the limit, taking the minimum over infinitely many measurements of X would yield T. Andrei In general, I agree with your reasoning. And I appreciate you taking the time to put it so eloquently!! But I think that your considering T as a constant, and preferring the minimum, misses something.
This might work very well for benchmarking mostly CPU bound processes, but all those other things that you consider noise (disk I/O, network, memory hierarchy, etc.) are part of the elements that make an algorithm or program faster than another, and I would consider them inside T for some applications. Consider the case depicted in this wonderful (ranty) article that was posted elsewhere in this thread: http://zedshaw.com/essays/programmer_stats.html In a part of the article, the guy talks about a system that worked fast most of the time, but would halt for a good 1 or 2 minutes sometimes. The minimum time for such a system might be a few ms, but the standard deviation would be big. This properly shifts the average time away from the minimum. If programA does the same task as programB with less I/O, or with better memory layout, etc., its average will be better, and maybe its timings won't be so spread out. But the minimum will be the same. So, in the end, I'm just happy that I could share this little avgtime with you all, and as usual there is no one-answer-fits-all. For some applications, the minimum will be enough. For others, it's essential to look at how spread out the sample is. On the symmetry/asymmetry of the distribution topic: I realize as you said that T never gets faster than a certain point. But, depending on the nature of the program under test, the good utilization of disk I/O, network, memory, motherboard buses, etc. is what you want inside the test too, and those come with gaussian-like noises which might dominate over T or not. A program that avoids that other big noise is a better program (all else the same), so I would tend to consider the whole. Thanks for the eloquence/insightfulness in your post! I'll consider adding chi-squared confidence intervals in the future. (and open to more info or if another distribution might be better). --jm
Re: avgtime - Small D util for your everyday benchmarking needs
On Friday, 23 March 2012 at 10:51:37 UTC, Don Clugston wrote: No, it's easy. Student t is in std.mathspecial. Aargh, I didn't get around to copying it in. But this should do it. /** Inverse of Student's t distribution * [.] Great!!! Thank you soo much Don!!! --jm
Re: avgtime - Small D util for your everyday benchmarking needs
On Friday, 23 March 2012 at 05:26:54 UTC, Nick Sabalausky wrote: Wow, that's just fantastic! Really, this should be a standard system tool. I think this guy would be proud: http://zedshaw.com/essays/programmer_stats.html Thanks for the good vibes! Hahahhah, that article is hilarious! I love the maddox tone. --jm
Re: avgtime - Small D util for your everyday benchmarking needs
Andrei Alexandrescu wrote: In the limit, taking the minimum over infinitely many measurements of X would yield T. True, if the theoretical variance of the distribution of T is close to zero. But horribly wrong, if T depends on an algorithm that is fast only under amortized analysis, because the worst-case scenario will be hidden. -manfred
Re: avgtime - Small D util for your everyday benchmarking needs
Juan Manuel Cabo juanmanuel.c...@gmail.com wrote in message news:bqrlhcggehbrzyuhz...@forum.dlang.org... [snip] I would suggest changing the name while you still can. Suggestions welcome!! timestats?
Re: Wrong lowering for a[b][c]++
On 3/23/12, H. S. Teoh hst...@quickfur.ath.cx wrote: WAT?! What on earth is cast() supposed to mean?? I've no idea. It's probably a front-end bug and the cast forces the compiler to.. come to its senses?
Re: Wrong lowering for a[b][c]++
On 23 March 2012 19:15, Andrej Mitrovic andrej.mitrov...@gmail.com wrote: I've no idea. It's probably a front-end bug and the cast forces the compiler to.. come to its senses? `cast()` is the compiler equivalent to a slap with a wet fish? -- James Miller
Re: Wrong lowering for a[b][c]++
On 03/23/2012 01:24 AM, H. S. Teoh wrote: WAT?! What on earth is cast() supposed to mean?? I think it removes one level of const or immutable from the type of its argument. Why that helps in this case, I don't know. --Ed
Re: Wrong lowering for a[b][c]++
H. S. Teoh hst...@quickfur.ath.cx wrote in message news:mailman.1036.1332480215.4860.digitalmar...@puremagic.com... On Fri, Mar 23, 2012 at 06:11:05AM +0100, Andrej Mitrovic wrote: [...] Btw, want to see a magic trick? Put this into your hash: this(AA)(AA aa) if (std.traits.isAssociativeArray!AA && is(KeyType!AA == keytype) && is(ValueType!AA == valuetype)) { foreach (key, val; aa) this[key] = val; } And then. *drumroll*: AA!(string,int) bb = cast()["abc":123]; badoom-tshhh. LOL! WAT?! What on earth is cast() supposed to mean?? That's just screwed up. Well anyway, I pushed the change to github, since it at least makes the literal sliiightly more usable. Have fun! :-) My guess is that the cast forces it to be parsed as an ExpInitializer instead of an ArrayInitializer. Associative array literals as initializers are parsed as array initializers, then reinterpreted during semantic. And cast() is like cast(const) or cast(shared).
Re: Proposal: user defined attributes
On Thursday, 22 March 2012 at 23:48:03 UTC, deadalnix wrote: Inference isn't possible with an attribute system. It isn't possible simply because it wasn't implemented yet. nothing prevents us to add such a feature in the future. Could be a worthwhile enhancement for D3 or D4, given we actually have attributes implemented before then.
Re: virtual-by-default rant
On Sun, 18 Mar 2012 04:49:12 +0100, F i L witte2...@gmail.com wrote: On Sunday, 18 March 2012 at 03:27:40 UTC, bearophile wrote: F i L: I'm a bit confused. Reading through the virtual function docs (http://dlang.org/function.html#virtual-functions) it says: All non-static non-private non-template member functions are virtual. This may sound inefficient, but since the D compiler knows all of the class hierarchy when generating code, all functions that are not overridden can be optimized to be non-virtual. This is so theoretical that I think this should be removed from the D docs. And be put back when one DMD compiler is able to do this. Otherwise it's just false advertising :-) Bye, bearophile Dammit, I was afraid someone would say something like that. Well at least it's a good goal. It is a bit of false advertising though; honestly it should just be marked "implementation in progress" or something like that. the D compiler knows all of the class hierarchy when generating code This is just wrong, and if that was the basis for deciding on virtual as default, I believe it is natural to think about it again. Otherwise it should read: if you deal with non-exported classes and don't use incremental compilation, as well as refrain from compiling your code into static libraries, a D compiler can optimize methods to be non-virtual. As of the time of writing [...] no such compiler exists. Now I feel better :) -- Marco
Re: Wrong lowering for a[b][c]++
On 03/23/2012 07:15 AM, Andrej Mitrovic wrote: On 3/23/12, H. S. Teohhst...@quickfur.ath.cx wrote: WAT?! What on earth is cast() supposed to mean?? I've no idea. It's probably a front-end bug and the cast forces the compiler to.. come to its senses? That part is not a bug, it is specified. http://dlang.org/expression.html#CastExpression Casting with no Type or CastQual removes any top level const, immutable, shared or inout type modifiers from the type of the UnaryExpression. What is a bug is that array initializers cannot be used to initialize structs through associative array alias this. Maybe you are also missing that this is valid code: int[] a = [1 : 2, 3 : 4]; This only works for initializers of the form [ ... ]. The cast() removes that possibility. This is why the compiler gets confused.
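The qualifier-stripping behavior Timon cites from the spec is easy to see in isolation. A small sketch:

```d
void main()
{
    immutable int x = 42;

    // cast() with no type removes the top-level qualifier:
    // typeof(cast() x) is plain int, not immutable(int).
    auto y = cast() x;
    static assert(is(typeof(y) == int));

    // With an aggregate, only the *top level* is stripped;
    // an immutable(char)[] stays immutable(char)[] elementwise.
    immutable(char)[] s = "abc";
    auto t = cast() s;
    static assert(is(typeof(t) == immutable(char)[]));
}
```

Which is why, in the AA-literal trick above, the side effect of forcing ExpInitializer parsing matters more than the (mostly vacuous) qualifier removal itself.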
Re: Proposal: __traits(code, ...) and/or .codeof
On 03/23/2012 12:33 AM, F i L wrote: Timon Gehr wrote: We have the macro keyword. I envision something like: macro replaceAggregate(macro newAggregate, macro loop : foreach(x; aggr){statements}, macro x, macro aggr, macro statements) { foreach(x; newAggregate){statements} } void main(){ int[] a = [1,2,3]; int[] b = [2,3,4]; replaceAggregate(b, foreach(x;a){writeln(x);}); } (The syntax looks horrible, but you get the idea: AST walking by pattern matching) This looks substantially more complicated than what I had in mind. I think it's a great idea, but something that could be added after the initial functionality was there. macro until(macro condition, string str){ while(!condition){ mixin(str); // string and mixin not strictly necessary, // maybe enable the feature on macro params too } } void main(){ bool done = false; int x; until(done){ done = foo(x++); } } This is just a very rough sketch though, we would need a much more refined design. I think getting the symbol scoping right is most important. I'm a bit confused about what's actually going on here, but it certainly looks interesting. What exactly is being passed to string str in the macro? The idea is that if you have something of the form: identifier(arguments) { body } It would get transformed into: identifier(arguments, q{ body });
Re: Proposal for a MessageQueue (was Re: public MessageBox)
On Friday, 23 March 2012 at 01:35:05 UTC, Nathan M. Swan wrote: Used to work, and std.concurrency doesn't even use std.utf. Not sure what's going on there. Weird :( Are you trying to build std.concurrency from Git master against Phobos 2.058 or something like that? David
Re: virtual-by-default rant
Marco Leise wrote: the D compiler knows all of the class hierarchy when generating code This is just wrong, and if that was the base for deciding on virtual as default, I believe it is natural to think about it again. Yes, further reading has led me to believe that Manu is right in his request for a virtual keyword (at least). Final by default is a great concept on paper, but unless it's possible across Lib boundaries (which I'm not sure it is) then to me it does seem a bit backwards given that efficiency is a key feature of D and most programmers are already used to fixed by default anyways.
parallel optimizations based on number of memory controllers vs cpus
I believe the current std.parallelism default threadpool count is number of cpus-1, according to some documentation. When I was testing some concurrent vs threadpool parallel implementations I was seeing improvements on the concurrent operation up to about 14 threads. I didn't try to figure out how to change the threadpool. While reading this article I noticed someone who reported similar improvements up to 14 threads on memory related operations, and explained it by the number of memory controllers being the limiting issue. See his item number 4 where significant gains were made in memory processing up to 14 threads. So, I wonder if it wouldn't be good to have a couple of different built-in threadpool types ... one meant for memory operations, and one primarily for cpu crunching ... with different sizes. http://stackoverflow.com/questions/4260602/how-to-increase-performance-of-memcpy
Re: virtual-by-default rant
On 03/23/2012 02:47 PM, F i L wrote: ... and most programmers are already used to fixed by default anyways. This assertion is unjustified.
Re: parallel optimizations based on number of memory controllers vs cpus
On 03/23/2012 02:46 PM, Jay Norwood wrote: I believe the current std.parallelism default threadpool count is number of cpus-1, according to some documentation. When I was testing some concurrent vs threadpool parallel implementations I was seeing improvements on the concurrent operation up to about 14 threads. I didn't try to figure out how to change the threadpool. While reading this article I noticed someone who reported similar improvements up to 14 threads on memory related operations, and explained it by the number of memory controllers being the limiting issue. See his item number 4 where significant gains were made in memory processing up to 14 threads. So, I wonder if it wouldn't be good to have a couple of different built-in threadpool types ... one meant for memory operations, and one primarily for cpu crunching ... with different sizes. http://stackoverflow.com/questions/4260602/how-to-increase-performance-of-memcpy On program startup, do: ThreadPool.defaultPoolThreads(14); // or 13
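For reference, in the std.parallelism API the knob is a module-level property rather than a method on a ThreadPool type; a hedged sketch of both ways to get a 14-thread pool:

```d
import std.parallelism;

void main()
{
    // Module-level property; takes effect if set before the lazily
    // initialized global taskPool is first used.
    defaultPoolThreads = 13; // 13 workers + the main thread = 14

    // Alternatively, a dedicated pool of an explicit size, e.g. one
    // tuned for memory-bound work as suggested above:
    auto memPool = new TaskPool(13);
    scope(exit) memPool.finish();
}
```

Having two differently sized pools (one for memory-bound, one for CPU-bound work) is already expressible this way, without new built-in pool types.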
Re: virtual-by-default rant
On Friday, 23 March 2012 at 13:58:00 UTC, Timon Gehr wrote: On 03/23/2012 02:47 PM, F i L wrote: ... and most programmers are already used to fixed by default anyways. This assertion is unjustified. Given that the four most popular languages today (Java, C, C++, and C#) all function this way, I'd say it's fairly accurate. But I also didn't mean to say final by default should be the default in D (though I wouldn't really disagree with that direction either); I do think D should have a virtual keyword.
Re: virtual-by-default rant
Given that the four most popular languages today (Java, C, C++, and C#) all function this way, I'd say it's fairly accurate. But I also didn't to say Final by default should be default in D (though I wouldn't really disagree with that direction either), I do think D should have a virtual keyword. Whoops, that's wrong. Java is virtual by default. So I guess you're right, my statements aren't really justified.
Re: virtual-by-default rant
On 3/18/12 9:23 AM, Manu wrote: The virtual model is broken. I've complained about it lots, and people always say stfu, use 'final:' at the top of your class. That sounds tolerable in theory, except there's no 'virtual' keyword to keep the virtual-ness of those 1-2 virtual functions I have... so it's no good (unless I rearrange my class, breaking the logical grouping of stuff in it). So I try that, and when I do, it complains: Error: variable demu.memmap.MemMap.machine final cannot be applied to variable, allegedly a D1 remnant. So what do I do? Another workaround? Tag everything as final individually? My minimum recommendation: D needs an explicit 'virtual' keyword, and to fix that D1 bug, so putting final: at the top of your class works, and everything from there works as it should. Is virtual-ness your performance bottleneck?
Re: Understanding Templates: why can't anybody do it?
On Mar 22, 2012, at 10:31 PM, H. S. Teoh wrote: On Fri, Mar 23, 2012 at 01:16:13AM -0400, Nick Sabalausky wrote: Nick Sabalausky a@a.a wrote in message news:jk2ro7$6dl$1...@digitalmars.com... Here's a little templates primer, I hope it helps: [...] I've cleaned this up, added an intro and outro, and posted it on my website here: https://www.semitwist.com/articles/article/view/template-primer-in-d [...] +1. Good introduction to D templates. Pity you didn't get into fancy stuff like recursive templates, but I suppose that's out of the scope of an intro. :-) The chapter on templates in Learn to Tango with D gets into the basics of template metaprogramming, but in article form it would probably be part 3 of a series. I think that chapter is ~20 pages long.
Re: Understanding Templates: why can't anybody do it?
On Mar 22, 2012, at 10:35 PM, Nick Sabalausky wrote: And some of that fancier template stuff (like template fibonacci) is better done as CTFE anyway ;) It is, but I think it should be covered anyway because people may still encounter this code and should be able to grasp what it's doing.
Re: Three Unlikely Successful Features of D
On Friday, 23 March 2012 at 04:07:53 UTC, bearophile wrote: I suggest to compile all your D2 code with -wi (or -w) and -property. Already using -w, and I thought I was using -property. I am now, thanks. And one bug of UFCS will probably be fixed by Hara (http://d.puremagic.com/issues/show_bug.cgi?id=7722 ), so map and filter will require an ending () (I have closed my http://d.puremagic.com/issues/show_bug.cgi?id=7723 ). So your last line is better written like this: data.map!somefunc().filter!q{a > 0}().array() Yes, I agree. I had to work around a bug a few months ago that using map!some_nested_func wouldn't compile, and I still haven't gotten out of the habit of using delegate literals for all but the simplest maps.
Implicit conversions for AA keys
Currently my AA implementation supports automatic key conversion (as suggested by Andrei), for example: AA!(string,int) aa; char[] key = "abc".dup; aa[key] = 1; // automatically converts char[] to string via .idup The way this is implemented is by allowing any input key type that can implicitly convert to the actual key type, or types that can be converted via .idup or slicing (to support using dynamic arrays for static array keys). While this is all nice and general, it is also *too* general: AA!(double,int) aa; int x = 1; aa[x] = 1; // <--- PROBLEM The catch here is that int implicitly converts to double, *but* the underlying representation is different, so int.toHash() != double.toHash(). Currently the above code compiles, but computes the wrong hash value for the key, so that aa[1u] and aa[1.0f] are distinct entries, which is nonsensical. So the question is, how do we restrict input key types so that we only allow input keys that have the same representation as the AA key type? A more advanced solution is to perform representation conversions (e.g., int -> double) first, and *then* compute the hash, and *then* use .idup or slicing if the input key needs to be duplicated (in the first example above, the char[] is not .idup'd until a new entry actually needs to be created). However, I don't know how to check for such cases using function/template signature constraints, besides hard-coding all known conversions (which is ugly, fragile, and hard to maintain). IOW, if is(InputType : KeyType) is true, then how do I tell whether the implicit conversion involves a representation conversion, or merely a const conversion (e.g., immutable -> const or unqualified -> const)? T -- This is a tpyo.
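[Editorial sketch: the representation mismatch described above can be seen directly with druntime's default hashing. On typical implementations the two hashes disagree because TypeInfo.getHash hashes the raw bit pattern, so no inequality is asserted here, only printed:]

```d
import std.stdio;

void main()
{
    int i = 1;
    double d = 1.0;
    assert(i == d); // implicit conversion: the values compare equal

    // TypeInfo.getHash hashes the raw representation, which differs:
    // int 1 is 0x00000001, double 1.0 is 0x3FF0000000000000.
    auto hi = typeid(i).getHash(&i);
    auto hd = typeid(d).getHash(&d);
    writeln(hi == hd ? "same hash" : "different hashes");
}
```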
Re: Implicit conversions for AA keys
Why do you not just do the conversion and then compute the hash, even if the representation is the same?
Re: Proposal for a MessageQueue (was Re: public MessageBox)
On Friday, 23 March 2012 at 12:36:42 UTC, David Nadlinger wrote: Are you trying to build std.concurrency from Git master against Phobos 2.058 or something like that? David I cloned from git://github.com/D-Programming-Language/phobos.git NMS
Re: Implicit conversions for AA keys
On 03/23/2012 07:10 PM, H. S. Teoh wrote: On Fri, Mar 23, 2012 at 07:01:46PM +0100, Timon Gehr wrote: Why do you not just do the conversion and then compute the hash, even if the representation is the same? Because Andrei says that the .idup shouldn't be performed unless it's necessary (e.g., you should be able to lookup char[] in a string-keyed AA without incurring the overhead of an .idup each time). The conversion is not needed if the hash computation doesn't change and we don't need to create a new entry. T That does not apply to your example with double and int. (I'd argue that actually the other overload should be chosen in that case, because the conversion is implicit) For implicit .idup, one solution would be to compare immutable(Key) and immutable(T). If they are the same, then the representation is the same.
Re: Implicit conversions for AA keys
On Fri, Mar 23, 2012 at 07:18:05PM +0100, Timon Gehr wrote: On 03/23/2012 07:10 PM, H. S. Teoh wrote: On Fri, Mar 23, 2012 at 07:01:46PM +0100, Timon Gehr wrote: Why do you not just do the conversion and then compute the hash, even if the representation is the same? Because Andrei says that the .idup shouldn't be performed unless it's necessary (e.g., you should be able to lookup char[] in a string-keyed AA without incurring the overhead of an .idup each time). The conversion is not needed if the hash computation doesn't change and we don't need to create a new entry. [...] That does not apply to your example with double and int. (I'd argue that actually the other overload should be chosen in that case, because the conversion is implicit) Sorry, I didn't understand that... which other overload? For implicit .idup, one solution would be to compare immutable(Key) and immutable(T). If they are the same, then the representation is the same. Excellent idea! I didn't think about using qualifier collapsing to check for representation equivalence. Thanks! :-) I think this issue is now solvable: if a key is implicitly convertible to the AA key but it's *not* equivalent, then convert it first. Otherwise, convert it later via .idup or some analogous mechanism.

template isEquiv(T,U) { enum isEquiv = is(immutable(T)==immutable(U)); }
...
static if (is(InputKey : Key) && !isEquiv!(InputKey,Key))
    // convert now
else
    // convert later

T -- The irony is that Bill Gates claims to be making a stable operating system and Linus Torvalds claims to be trying to take over the world. -- Anonymous
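[Editorial sketch: the qualifier-collapsing test above can be exercised with a few static asserts, assuming the isEquiv template exactly as posted:]

```d
template isEquiv(T, U)
{
    enum isEquiv = is(immutable(T) == immutable(U));
}

// char[] and string collapse to the same immutable type...
static assert( isEquiv!(char[], string));
static assert( isEquiv!(string, string));
// ...while int and double convert implicitly but are not equivalent.
static assert(!isEquiv!(int, double));

void main() {}
```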
Re: Implicit conversions for AA keys
On 3/23/12 12:54 PM, H. S. Teoh wrote: Currently my AA implementation supports automatic key conversion (as suggested by Andrei), for example: AA!(string,int) aa; char[] key = "abc".dup; aa[key] = 1; // automatically converts char[] to string via .idup The way this is implemented is by allowing any input key type that can implicitly convert to the actual key type, or types that can be converted via .idup or slicing (to support using dynamic arrays for static array keys). While this is all nice and general, it is also *too* general: AA!(double,int) aa; int x = 1; aa[x] = 1; // <--- PROBLEM [snip] Let's see what requirements need to be satisfied by []. Say k is a value of the key type and x is a value being looked up. First, we need to be able to evaluate k == x. So the types must be comparable. Second, we need: if x == k, then hash(k) == hash(x). This is tricky in general, but let's say we can approximate it with the compile-time requirement that the hash function resolves to the same entity for both typeof(x) and typeof(k). This would rule out e.g. int and double but would leave char[] and string. To include int and double correctly, we'd amend the second rule as follows. If typeof(x) converts implicitly to typeof(k), then use hash(cast(typeof(k)) x) instead of hash(x). This makes it possible to look up an int in a hash of doubles, but not vice versa, which is great. These two are sufficient for lookup. For store, we also need the third rule, which is that to!(typeof(k))(x) must compile and run. Andrei
Re: Proposal for a MessageQueue (was Re: public MessageBox)
On 23.03.2012 22:17, Nathan M. Swan wrote: On Friday, 23 March 2012 at 12:36:42 UTC, David Nadlinger wrote: Are you trying to build std.concurrency from Git master against Phobos 2.058 or something like that? David I cloned from git://github.com/D-Programming-Language/phobos.git NMS replace the original phobos after rebuilding it? -- Dmitry Olshansky
Re: Implicit conversions for AA keys
On Fri, Mar 23, 2012 at 01:31:28PM -0500, Andrei Alexandrescu wrote: [...] Let's see what requirements need to be satisfied by []. Say k is a value of the key type and x is a value being looked up. First, we need to be able to evaluate k == x. So the types must be comparable. Second, we need if x == k, then hash(k) == hash(x). This is tricky in general, but let's say we can approximate to the compile-time requirement that the hash function resolves to the same entity for both typeof(x) and typeof(k). This would rule out e.g. int and double but would leave char[] and string. How do we check at compile time whether the hash function resolves to the same entity? To include int and double correctly, we'd amend the second rule as follows. If typeof(x) converts implicitly to typeof(k), then use hash(cast(typeof(k)) x) instead of hash(x). This makes it possible to look up for an int in a hash of doubles, but not vice versa, which is great. OK. These two are sufficient for lookup. For store, we also need the third rule, which is to!(typeof(k))(x) must compile and run. [...] Isn't this already required by the hash lookup? Or is casting different from to!X, in which case it might be messy to import the relevant parts of phobos into druntime. :-/ T -- A man's wife has more power over him than the state has. -- Ralph Emerson
Re: Proposal for a MessageQueue (was Re: public MessageBox)
On Mar 23, 2012, at 11:17 AM, Nathan M. Swan nathanms...@gmail.com wrote: On Friday, 23 March 2012 at 12:36:42 UTC, David Nadlinger wrote: Are you trying to build std.concurrency from Git master against Phobos 2.058 or something like that? David I cloned from git://github.com/D-Programming-Language/phobos.git If you're running Phobos from git and something doesn't work, you likely need to use the latest DMD from git as well.
Re: virtual-by-default rant
On 23 March 2012 17:24, Ary Manzana a...@esperanto.org.ar wrote: On 3/18/12 9:23 AM, Manu wrote: The virtual model is broken. I've complained about it lots, and people always say stfu, use 'final:' at the top of your class. That sounds tolerable in theory, except there's no 'virtual' keyword to keep the virtual-ness of those 1-2 virtual functions I have... so it's no good (unless I rearrange my class, breaking the logical grouping of stuff in it). So I try that, and when I do, it complains: Error: variable demu.memmap.MemMap.machine final cannot be applied to variable, allegedly a D1 remnant. So what do I do? Another workaround? Tag everything as final individually? My minimum recommendation: D needs an explicit 'virtual' keyword, and to fix that D1 bug, so putting final: at the top of your class works, and everything from there works as it should. Is virtual-ness your performance bottleneck? Frequently. It's often the most expensive 'trivial' operation many processors can be asked to do. Senior programmers (who have much better things to waste their time on considering their pay bracket) frequently have to spend late nights mitigating this even in C++ where virtual isn't default. In D, I'm genuinely concerned by this prospect. Now I can't just grep for virtual and fight them off, which is time consuming alone; I will need to take every single method, one by one, prove it is never overridden anywhere (hard to do), before I can even begin the normal process of de-virtualising it like you do in C++. The problem is compounded by the fact that many programmers are taught in university that virtual functions are okay. They come to the company, write code how they were taught in university, and then we're left to fix it up on build night when we can't hold our frame rate. Virtual functions and scattered/redundant memory access are usually the first things you go hunting for.
Fixing virtuals is annoying when the system was designed to exploit them, it often requires some extensive refactoring, much harder to fix than a bad memory access pattern, which might be as simple as rearranging a struct.
Re: Implicit conversions for AA keys
On 3/23/12 1:48 PM, H. S. Teoh wrote: How do we check at compile time whether the hash function resolves to the same entity? int fun(T)(T x) { return 42; } void main() { static assert(fun!int != fun!double); } This actually reveals a compiler bug: Assertion failed: (d->purity != PUREfwdref), function typeMerge, file cast.c, line 1909. A cast would be needed anyway because they have different types, too. Anyway upon more thinking maybe this is too restrictive a rule. It won't catch e.g. functions that are, in fact, identical, but come from distinct instantiations. So perhaps you need some conservative approximation, i.e. look if the two types are qualified versions of the same type and then assume they hash the same. To include int and double correctly, we'd amend the second rule as follows. If typeof(x) converts implicitly to typeof(k), then use hash(cast(typeof(k)) x) instead of hash(x). This makes it possible to look up for an int in a hash of doubles, but not vice versa, which is great. OK. These two are sufficient for lookup. For store, we also need the third rule, which is to!(typeof(k))(x) must compile and run. [...] Isn't this already required by the hash lookup? Or is casting different from to!X, in which case it might be messy to import the relevant parts of phobos into druntime. :-/ Casting is very different from to, and useless for your purposes. You must use to. Andrei
Re: virtual-by-default rant
Something that *might* help is to do unit tests. Yeah, that's kinda ass, but it would catch a stray virtual early. Do a unit test that does a traits check for virtuals: http://dlang.org/traits.html#getVirtualFunctions if the name isn't on a list of approved virtuals, static assert fail. You'd then maintain the list of approved virtuals in the unit test, where it is easier to check over. idk though, I've never worked on a project like this.
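[Editorial sketch of the idea, with a made-up class for illustration. Note that per the traits documentation, __traits(getVirtualMethods, ...) — unlike getVirtualFunctions — excludes final functions that don't override anything, which is what a stray-virtual check wants:]

```d
class Renderer
{
    void draw() {}          // virtual by default in D
    final void flush() {}   // explicitly final, overrides nothing
}

unittest
{
    // draw shows up among the virtual methods...
    static assert(__traits(getVirtualMethods, Renderer, "draw").length == 1);
    // ...while the final, non-overriding flush does not. A real check
    // would loop over __traits(allMembers, ...) against a whitelist.
    static assert(__traits(getVirtualMethods, Renderer, "flush").length == 0);
}

void main() {}
```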
Re: Implicit conversions for AA keys
On Fri, Mar 23, 2012 at 02:06:15PM -0500, Andrei Alexandrescu wrote: [...] A cast would be needed anyway because they have different types, too. Anyway upon more thinking maybe this is too restrictive a rule. It won't catch e.g. functions that are, in fact, identical, but come from distinct instantiations. So perhaps you need some conservative approximation, i.e. look if the two types are qualified versions of the same type and then assume they hash the same. OK. Would the following be enough? template isEquiv(T,U) { enum isEquiv = is(immutable(T)==immutable(U)); } [...] Isn't this already required by the hash lookup? Or is casting different from to!X, in which case it might be messy to import the relevant parts of phobos into druntime. :-/ Casting is very different from to, and useless for your purposes. You must use to. [...] Wouldn't that require moving std.conv into druntime? And std.conv does depend on std.traits as well... T -- Without outlines, life would be pointless.
Re: Implicit conversions for AA keys
On 03/23/2012 08:06 PM, Andrei Alexandrescu wrote: Casting is very different from to, and useless for your purposes. You must use to. Andrei druntime mustn't depend on Phobos, and I don't see why it is necessary. What kind of functionality do you want to provide that depends on std.conv.to ?
Re: Implicit conversions for AA keys
On 3/23/12 2:30 PM, H. S. Teoh wrote: Wouldn't that require moving std.conv into druntime? And std.conv does depend on std.traits as well... Not sure how it's best to address this. Andrei
Re: Implicit conversions for AA keys
On 3/23/12 2:28 PM, Timon Gehr wrote: On 03/23/2012 08:06 PM, Andrei Alexandrescu wrote: Casting is very different from to, and useless for your purposes. You must use to. Andrei druntime mustn't depend on Phobos, and I don't see why it is necessary. What kind of functionality do you want to provide that depends on std.conv.to ? Casting from char[] to string is not what you want, and .idup is specific to arrays. There must be one coherent method of truly converting across types, and std.conv.to is the closest I can think of. Andrei
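[Editorial sketch of the cast-vs-to distinction Andrei is making: a cast of char[] to string would merely reinterpret the reference (unsafe aliasing), while std.conv.to performs a real conversion, making an independent immutable copy:]

```d
import std.conv : to;

void main()
{
    char[] m = "abc".dup;
    string s = to!string(m); // a true conversion: copies the data
    m[0] = 'x';
    assert(s == "abc");      // s is unaffected by mutating m
}
```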
Re: Wrong lowering for a[b][c]++
On 3/23/12, Timon Gehr timon.g...@gmx.ch wrote: Maybe you are also missing that this is valid code: int[] a = [1 : 2, 3 : 4]; What is this syntax for and how is it used? It creates '[0, 2, 0, 4]', which is puzzling to me.
Re: Wrong lowering for a[b][c]++
On 03/23/2012 09:10 PM, Andrej Mitrovic wrote: On 3/23/12, Timon Gehrtimon.g...@gmx.ch wrote: Maybe you are also missing that this is valid code: int[] a = [1 : 2, 3 : 4]; What is this syntax for and how is it used? It creates '[0, 2, 0, 4]', which is puzzling to me. It creates an array from key-value pairs. a[1] will be 2 and a[3] will be 4. Unspecified entries are default-initialized. It can be quite useful for building lookup tables. Another reason why array initializers are different from array literals: struct S{int a,b,c;} S[] sarr = [{a: 1, b: 2, c: 3}, {a: 4, b: 5, c: 6}];
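[Editorial sketch: the dynamic-array form discussed in this thread, int[] a = [1 : 2, 3 : 4], was accepted by dmd of the era; the static-array form below, from the spec's section on static initialization, illustrates the same index:value behavior:]

```d
void main()
{
    // Index:value pairs; unspecified entries get the default (0 for int).
    int[5] b = [1 : 2, 3 : 4];
    assert(b[] == [0, 2, 0, 4, 0]);
}
```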
Re: Implicit conversions for AA keys
On 03/23/2012 09:05 PM, Andrei Alexandrescu wrote: On 3/23/12 2:28 PM, Timon Gehr wrote: On 03/23/2012 08:06 PM, Andrei Alexandrescu wrote: Casting is very different from to, and useless for your purposes. You must use to. Andrei druntime mustn't depend on Phobos, and I don't see why it is necessary. What kind of functionality do you want to provide that depends on std.conv.to ? Casting from char[] to string is not what you want, and .idup is specific to arrays. There must be one coherent method of truely converting across types, and std.conv.to is the closest I can think of. Andrei This will statically allow looking up an int in a T[string]. I don't think that is desirable. I even think implicit .idup may be overkill.
Re: Implicit conversions for AA keys
On 3/23/12 3:23 PM, Timon Gehr wrote: On 03/23/2012 09:05 PM, Andrei Alexandrescu wrote: On 3/23/12 2:28 PM, Timon Gehr wrote: On 03/23/2012 08:06 PM, Andrei Alexandrescu wrote: Casting is very different from to, and useless for your purposes. You must use to. Andrei druntime mustn't depend on Phobos, and I don't see why it is necessary. What kind of functionality do you want to provide that depends on std.conv.to ? Casting from char[] to string is not what you want, and .idup is specific to arrays. There must be one coherent method of truely converting across types, and std.conv.to is the closest I can think of. Andrei This will statically allow looking up an int in a T[string]. No, because of the other rules. Andrei
Re: Wrong lowering for a[b][c]++
On 3/23/12, Timon Gehr timon.g...@gmx.ch wrote: It creates an array from key-value pairs. a[1] will be 2 and a[3] will be 4. Unspecified entries are default-initialized. It can be quite useful for building lookup tables. Interesting. It's documented under Static Initialization of Statically Allocated Arrays, but I guess it works for dynamic arrays too. I could use these for sure. :) Another reason why array initializers are different from array literals: struct S{int a,b,c;} S[] sarr = [{a: 1, b: 2, c: 3}, {a: 4, b: 5, c: 6}]; Yeah, I knew about those, although IIRC these might be deprecated?
Re: Implicit conversions for AA keys
On 03/23/2012 09:43 PM, Andrei Alexandrescu wrote: On 3/23/12 3:23 PM, Timon Gehr wrote: On 03/23/2012 09:05 PM, Andrei Alexandrescu wrote: On 3/23/12 2:28 PM, Timon Gehr wrote: On 03/23/2012 08:06 PM, Andrei Alexandrescu wrote: Casting is very different from to, and useless for your purposes. You must use to. Andrei druntime mustn't depend on Phobos, and I don't see why it is necessary. What kind of functionality do you want to provide that depends on std.conv.to ? Casting from char[] to string is not what you want, and .idup is specific to arrays. There must be one coherent method of truely converting across types, and std.conv.to is the closest I can think of. Andrei This will statically allow looking up an int in a T[string]. No, because of the other rules. Andrei I see. An alternative solution (one that does not make AAs depend on Phobos and is more slick) would be to use the const qualified key type for lookup (that is what const is for) and to have immutable keys for stores. For types that define .idup, there would be another overload of opIndexAssign that can take a const qualified key.
Re: Implicit conversions for AA keys
On 03/23/2012 10:07 PM, Timon Gehr wrote: I see. An alternative solution (one that does not make AAs depend on Phobos and is more slick) would be to use the const qualified key type for lookup (that is what const is for) and to have immutable keys for stores. For types that define .idup, there would be another overload of opIndexAssign that can take a const qualified key. Proof of concept:

// ctfe-able simple and stupid replace
string replace(string str, string from, string to){
    string r = "";
    foreach(i; 0..str.length){
        if(i+from.length<=str.length && str[i..i+from.length]==from){
            r~=to;
            i+=from.length-1;
        }else r~=str[i];
    }
    return r;
}

template getConstQual(T){ // hack
    static if(is(T==string)) alias const(char)[] getConstQual;
    else alias const(typeof(mixin(`(`~T.stringof.
        replace("immutable","const")~`).init`))) getConstQual;
}

int numidup = 0;

struct AA(Key, Value) if(is(Key : immutable(Key))){
    Value[Key] payload;
    auto opIndex(getConstQual!Key k){return payload[cast(immutable)k];}
    auto opIndexAssign(Value v, Key k){return payload[cast(immutable)k]=v;}
    static if(is(typeof(getConstQual!Key.init.idup))){
        auto opIndexAssign(Value v, getConstQual!Key k){
            if(auto p = (cast(immutable)k) in payload) return *p=v;
            numidup++;
            return payload[k.idup]=v;
        }
    }
}

void main()
{
    AA!(string, int) aa;
    aa["123"] = 123;
    char[3] ch = "123";
    assert(aa[ch] == 123);
    ch[1]='3';
    assert(numidup == 0);
    aa[ch]=133;
    assert(numidup == 1);
    assert(aa["133"]==133);
    ch[0]='3';
    assert(aa["133"]==133);
    assert(numidup == 1);
}
Re: Wrong lowering for a[b][c]++
Andrej Mitrovic: On 3/23/12, Timon Gehr timon.g...@gmx.ch wrote: Maybe you are also missing that this is valid code: int[] a = [1 : 2, 3 : 4]; What is this syntax for and how is it used? It creates '[0, 2, 0, 4]', which is puzzling to me. See: http://d.puremagic.com/issues/show_bug.cgi?id=4703 This is why I was unnerved when Walter recently said that we should reduce the amount of breaking changes in D. There are several D problems like this one that should be fixed. This is an example of a bug report that needs to be addressed sooner rather than later. Bye, bearophile
Re: Implicit conversions for AA keys
On Fri, Mar 23, 2012 at 10:53:10PM +0100, Timon Gehr wrote: On 03/23/2012 10:07 PM, Timon Gehr wrote: I see. An alternative solution (one that does not make AAs depend on Phobos and is more slick) would be to use the const qualified key type for lookup (that is what const is for) and to have immutable keys for stores. For types that define .idup, there would be another overload of opIndexAssign that can take a const qualified key. Proof of concept: [...] Hmm. I decided that perhaps the full-fledged std.conv.to is a bit of an overkill, so I revised the AA code to compromise between needing std.conv.to and still delivering what Andrei wants. Basically, I have a template that defines AA key compatibility, where compatibility means that given an AA with key type Key and a key k of type K, k is considered compatible if:

- k == K.init is valid (i.e. they can be compared);
- (At least) one of the following holds:
  - is(immutable(K) == immutable(Key))
  - is(typeof(k.idup) == Key)
  - Key is a static array of length N, and k[0..N] is valid
  - is(K : Key)

For the first case (is(immutable(K) == immutable(Key)), which means K and Key have the same representation) and the second case (K.idup yields Key), we can basically assume that K.toHash() is consistent with Key.toHash(). When creating a new entry, we just assign K to Key, or K.idup to Key, as necessary. For the third case, we can just slice the input array when comparing or assigning to a new entry (this will throw an Error if the input array has the wrong length). I decided to be permissive and compute the hash on the entire array; if the length doesn't match it will fail anyway, so it's OK to look up an array of mismatching length in an AA with static array keys, as long as you don't try to store the key into it. Lastly, if is(K : Key) holds but none of the others do, then convert the key before computing the hash:

Key key = k;    // implicit conversion
return key.toHash();

This ensures the int-to-double conversion works correctly.
Creating a new entry can just use straight assignment, due to the implicit conversion. I've added these changes on github in a branch: https://github.com/quickfur/New-AA-implementation/tree/keyconv Andrei, please try it out and see if it works on the cases you have in mind. :-) T -- Try to keep an open mind, but not so open your brain falls out. -- theboz
Re: New hash
I thought I'd open this topic for discussion of issues with the new hash implementation. Anyways, this doesn't seem to work: AA!(string,int[]) hash;

D:\dev\projects\New-AA-implementation> rdmd -ID:\DMD\dmd2\src\druntime\src newAATest.d
newAA.d(581): Error: template newAA.AA!(string,int[]).AssociativeArray.Slot.__ctor(K) if (keyCompat!(K)) cannot deduce template function from argument types !()(const(uint),const(immutable(char)[]),const(int[]),Slot*)
newAA.d(581): Error: no constructor for Slot
newAATest.d(33): Error: template instance newAA.AA!(string,int[]) error instantiating
Failed: dmd -ID:\DMD\dmd2\src\druntime\src -v -o- newAATest.d -I. Exit code: 1
Re: New hash
On Sat, Mar 24, 2012 at 02:39:35AM +0100, Andrej Mitrovic wrote: I thought I'd open this topic for discussion of issues with the new hash implementation. Thanks for taking the time to test the code! Anyways, this doesn't seem to work: AA!(string,int[]) hash; D:\dev\projects\New-AA-implementation> rdmd -ID:\DMD\dmd2\src\druntime\src newAATest.d newAA.d(581): Error: template newAA.AA!(string,int[]).AssociativeArray.Slot.__ctor(K) if (keyCompat!(K)) cannot deduce template function from argument types !()(const(uint),const(immutable(char)[]),const(int[]),Slot*) newAA.d(581): Error: no constructor for Slot newAATest.d(33): Error: template instance newAA.AA!(string,int[]) error instantiating Failed: dmd -ID:\DMD\dmd2\src\druntime\src -v -o- newAATest.d -I. Exit code: 1 Argh. Looks like it's caused by more const madness. :-( Try this diff:

diff --git a/newAA.d b/newAA.d
index a513b31..ac91e0c 100644
--- a/newAA.d
+++ b/newAA.d
@@ -576,7 +576,7 @@ public:
         return this;
     }

-    @property auto dup() const @safe
+    @property auto dup() @safe
     {
         AssociativeArray!(Key,Value) result;
         if (impl !is null)
@@ -584,7 +584,7 @@ public:
             result.impl = new Impl();
             result.impl.slots = alloc(findAllocSize(impl.nodes));

-            foreach (const(Slot)* slot; impl.slots)
+            foreach (Slot* slot; impl.slots)
             {
                 while (slot)
                 {
@@ -946,6 +946,11 @@ unittest {
     assert(aa[key1] == 123);
 }

+// Bug found by Andrej Mitrovic: can't instantiate AA!(string,int[]).
+unittest {
+    AA!(string,int[]) aa;
+}
+
 // Issues 7512 7704
 unittest {
     AA!(dstring,int) aa;

---snip---

This is just a rough hack that removes the const from dup(). I need to look at it more carefully to figure out how to copy stuff over without violating constness. :-( T -- English has the lovely word defenestrate, meaning to execute by throwing someone out a window, or more recently to remove Windows from a computer and replace it with something useful. :-) -- John Cowan
Re: New hash
On Sat, Mar 24, 2012 at 02:39:35AM +0100, Andrej Mitrovic wrote: I thought I'd open this topic for discussion of issues with the new hash implementation. Anyways, this doesn't seem to work: AA!(string,int[]) hash; [...] Argh... in the process of trying to fix this issue, I ran into a major compiler bug: inout isn't recognized on a parameter when the same parameter is also lazy. :-( I filed a new issue: http://d.puremagic.com/issues/show_bug.cgi?id=7757 This means I probably have to copy-n-paste AA.get() in order to make things work correctly. :-( T -- Political correctness: socially-sanctioned hypocrisy.
Re: New hash
On 3/24/12, H. S. Teoh hst...@quickfur.ath.cx wrote: Argh. Looks like it's caused by more const madness. :-( Try this diff: Ahaha: patching file newAA.d Assertion failed: hunk, file ../patch-2.5.9-src/patch.c, line 354 This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information. Anywho, I'll do it manually. :) P.S. for anyone wondering the new implementation is at https://github.com/quickfur/New-AA-implementation and needs DMD 2.059.
Re: New hash
On Sat, Mar 24, 2012 at 03:11:32AM +0100, Andrej Mitrovic wrote: On 3/24/12, H. S. Teoh hst...@quickfur.ath.cx wrote: Argh. Looks like it's caused by more const madness. :-( Try this diff: Ahaha: patching file newAA.d Assertion failed: hunk, file ../patch-2.5.9-src/patch.c, line 354 This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information. Heh. Never knew I could do that. :-P Anywho, I'll do it manually. :) P.S. for anyone wondering the new implementation is at https://github.com/quickfur/New-AA-implementation and needs DMD 2.059. Actually, I've pushed the workaround to github, you can just pull. I discovered that to make .dup const, I'll need to think about some rather complicated const-related issues... I'll post more later. T -- Why waste time learning, when ignorance is instantaneous? -- Hobbes, from Calvin Hobbes
Re: New hash
On 3/24/12, Andrej Mitrovic andrej.mitrov...@gmail.com wrote: Anyways, this doesn't seem to work: AA!(string,int[]) hash; Ok it works now w/ your patch.
Re: New hash
On Sat, Mar 24, 2012 at 03:15:49AM +0100, Andrej Mitrovic wrote: On 3/24/12, Andrej Mitrovic andrej.mitrov...@gmail.com wrote: Anyways, this doesn't seem to work: AA!(string,int[]) hash; Ok it works now w/ your patch. You probably want to git pull, 'cos I found 2-3 other places where I forgot about inout, so many operations on your AA will fail. Anyway, about .dup... what's the current AA's behaviour? Are const/immutable data .dup-able? Or should I just static if the whole thing out unless it doesn't need more insane const hacks? Someday, we have to revisit this whole const thing and how to make it work nicely with containers... I found that it is causing 40% of my troubles with the AA implementation. :-( T -- LINUX = Lousy Interface for Nefarious Unix Xenophobes.
Re: New hash
On Sat, Mar 24, 2012 at 02:39:35AM +0100, Andrej Mitrovic wrote: I thought I'd open this topic for discussion of issues with the new hash implementation. [...] Another issue:

AA!(string,int[]) aa;
auto x = aa.get("abc", []);

This fails to compile, because the compiler deduces the type of [] as void[], and aa.get() doesn't know how to assign void[] to int[]. The problem is that aa.get() is a template function, so the usual implicit type deduction doesn't work. Here's a reduced version of the same problem:

int[] func1(int dummy, int[] x) { return x; }
int[] func2(T)(T dummy, int[] x) { return x; }

void main()
{
    auto x = func1(1, []); // OK: [] deduced as int[]
    auto y = func2(1, []); // Error: [] deduced as void[],
                           // no matching templates found.
}

T -- Look after your clothes while they're new, and your health while you're young. (Russian proverb)
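[Editorial sketch of possible workarounds for the deduction failure above, using the names from the reduced example: give the empty literal an explicit element type, or pass null, which converts to any array type:]

```d
int[] func2(T)(T dummy, int[] x) { return x; }

void main()
{
    auto a = func2(1, cast(int[])[]); // explicit element type
    auto b = func2(1, (int[]).init);  // empty int[]
    auto c = func2(1, null);          // null converts to int[]
    assert(a.length == 0 && b.length == 0 && c.length == 0);
}
```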
Re: Mono-D@GSoC - Mentor needed
On Tuesday, 20 March 2012 at 22:52:13 UTC, James Miller wrote: A bit of a side note, but is there any way that some of this work could be made more standalone, even if somebody else has to take up the work to finish it and make it truly standalone. I personally can't stand fully integrated environments, but I do like things like code completion and the like, so it would be nice to be able to use these features in, for example, vim. I don't know how feasible this is, but it's worth mentioning. -- James Miller Yes!! I want a standalone version too. I like Mono-D very much; however, not being able to type the ~ key in MonoDevelop is really annoying.
Re: New hash
On 3/24/12, H. S. Teoh hst...@quickfur.ath.cx wrote: On Sat, Mar 24, 2012 at 02:39:35AM +0100, Andrej Mitrovic wrote: I thought I'd open this topic for discussion of issues with the new hash implementation. [...] Another issue: AA!(string,int[]) aa; auto x = aa.get("abc", []); Yeah, templates are unforgiving sometimes. How do you nest your hash type? int[int][int] to AA? I've tried: alias AA!(int, AA!(int, int)) Assoc; but then I get: newAA.d(461): Error: safe function 'opIndexAssign' cannot call system function 'opAssign' newAA.d(330): Error: template instance newAA.AA!(int,AA!(int,int)).AssociativeArray.opIndexAssign!(int) error instantiating
Re: New hash
On Sat, Mar 24, 2012 at 04:21:21AM +0100, Andrej Mitrovic wrote: On 3/24/12, H. S. Teoh hst...@quickfur.ath.cx wrote: [...] Another issue: AA!(string, int[]) aa; auto x = aa.get("abc", []); Yeah, templates are unforgiving sometimes. Well, I wish the compiler magic that does these inferences either worked better, or were somehow accessible to user code so that I don't have to keep creating workarounds for it. :-/ How do you nest your hash type, int[int][int] to AA? I've tried: alias AA!(int, AA!(int, int)) Assoc; That should be correct. but then I get: newAA.d(461): Error: safe function 'opIndexAssign' cannot call system function 'opAssign' [...] That's a bug. Lemme take a look and see... Actually, that's a bug introduced by the AA literal workaround that we did before (see line 313). It's missing a @trusted tag. Man, these things are fidgety. Fix pushed to github. :-) T -- Why are you blatanly misspelling blatant? -- Branden Robinson
Re: New hash
On 3/24/12, H. S. Teoh hst...@quickfur.ath.cx wrote: Fix pushed to github. :-) Okay. Then I have some eye-candy for you:

    import std.traits;
    import std.typetuple;

    template Assoc(T) if (std.traits.isAssociativeArray!T)
    {
        alias AA!(GetTypesTuple!T) Assoc;
    }

    template GetTypesTuple(T) if (std.traits.isAssociativeArray!T)
    {
        static if (std.traits.isAssociativeArray!(ValueType!T))
        {
            alias TypeTuple!(KeyType!T, AA!(GetTypesTuple!(ValueType!T))) GetTypesTuple;
        }
        else
        {
            alias TypeTuple!(KeyType!T, ValueType!T) GetTypesTuple;
        }
    }

With this, you can now use this syntax:

    Assoc!(int[string]) aasi;
    Assoc!(dstring[int]) aaid;

I think this can make testing easier.
Re: New hash
On 3/24/12, Andrej Mitrovic andrej.mitrov...@gmail.com wrote: Anyways, this doesn't seem to work. Another bug:

    struct Foo { string s; }
    AA!(string, Foo) test;

    newAA.d(378): Error: template newAA.toHash(T) if (is(T == char) || is(T == const(char)) || is(T == immutable(char))) matches more than one template declaration,
    newAA.d(1276): toHash(T) if (!is(T U : U[]) && T.sizeof > (int).sizeof) and
    newAA.d(1284): toHash(S) if (is(S == struct))
Re: New hash
On Sat, Mar 24, 2012 at 05:12:00AM +0100, Andrej Mitrovic wrote: On 3/24/12, H. S. Teoh hst...@quickfur.ath.cx wrote: Fix pushed to github. :-) Okay. Then I have some eye-candy for you: I have a much simpler version:

    template Assoc(T)
    {
        static if (is(T K : V[K], V))
            alias AA!(K, V) Assoc;
        else
            static assert(0);
    }

:-) [...] With this, you can now use this syntax: Assoc!(int[string]) aasi; Assoc!(dstring[int]) aaid; I think this can make testing easier. Cool. Should I just replace the current AA alias with Assoc? I think this is far more concise, and much easier to convert to real AA syntax when we integrate with druntime/dmd. T -- Written on the window of a clothing store: No shirt, no shoes, no service.
Re: New hash
On 3/24/12, Andrej Mitrovic andrej.mitrov...@gmail.com wrote: snip I've managed to test the hashes on a small closed-source project (9K lines) which uses hashes a lot. I've found no issues so far (no memory corruption or anything). Performance did drop a bit, from 812 ms to 898 ms. I can't extensively test this yet because I can't serialize the hashes (a serialization library doesn't want to work with the new hashes, but I'll fix that), and so I need a 30-second run that first fills the hashes before doing work on them. But it does seem to be a tiny bit slower.
Re: New hash
On 3/24/12, H. S. Teoh hst...@quickfur.ath.cx wrote: I have a much simpler version: template Assoc(T) { static if (is(T K : V[K], V)) alias AA!(K,V) Assoc; else static assert(0); } :-) Yeah, but with a type string[int][int] this will store an AA!(int, string[int]); IOW, you will be using a druntime hash inside of your new hash. Mine converts this to AA!(int, AA!(string, int)) to properly test the new AA's nested hashes. Also, your version won't work, since AA!(int, string[int]) currently fails (that same toHash error as in my last post).
Re: New hash
On 3/24/12, H. S. Teoh hst...@quickfur.ath.cx wrote: Cool. Should I just replace the current AA alias with Assoc? I think this is far more concise, and much easier to convert to real AA syntax when we integrate with druntime/dmd. Sure. I kept wondering whether AA!(int, string) is a hash with an int or a string key. :)
Re: New hash
On Sat, Mar 24, 2012 at 05:47:55AM +0100, Andrej Mitrovic wrote: [...] I've managed to test the hashes on a small closed-source project (9K lines) which uses hashes a lot. I've found no issues so far (no memory corruption or anything). Performance did drop a bit, from 812 ms to 898 ms. I can't extensively test this yet because I can't serialize the hashes (a serialization library doesn't want to work with the new hashes, but I'll fix that), and so I need a 30-second run that first fills the hashes before doing work on them. But it does seem to be a tiny bit slower. OK, good to know. I thought I'd weeded out the inefficient parts of the code, but I guess one never knows until you run a profiler on it. Note that if hash literals are used, they can be inefficient because of the current hack of copying from the current AA (so it will involve two copies: one from the compiler's native array-of-keys and array-of-values representation, another from the current AA to the new AA). T -- Don't modify spaghetti code unless you can eat the consequences.
Re: New hash
On 3/24/12, H. S. Teoh hst...@quickfur.ath.cx wrote: Note that if hash literals are used, then they can be inefficient because of the current hack of copying from the current AA (so it will involve two copies [...] Okie. I was mostly measuring lookups though. I can't accurately measure writes because I'm taking data from several hundred XML files (so file I/O comes into play), which are then stored to the hashes via simple string assignments (there's no copying from old hashes to new ones). After they're stored, I do mostly reads 99% of the time.
Re: MessagePack
On 3/21/12, Andrej Mitrovic andrej.mitrov...@gmail.com wrote: Hi, it seems MessagePack won't compile with 2.058: Oops sorry about that. I've used this old repository: https://bitbucket.org/repeatedly/msgpack4d/ instead of the new one: https://github.com/msgpack/msgpack-d I think you should delete the old one if it's outdated. :)
Re: New hash
On Sat, Mar 24, 2012 at 06:08:40AM +0100, Andrej Mitrovic wrote: On 3/24/12, H. S. Teoh hst...@quickfur.ath.cx wrote: Note that if hash literals are used, then they can be inefficient because of the current hack of copying from the current AA (so it will involve two copies Okie. I was mostly measuring lookups though. I can't accurately measure writes because I'm taking data from several hundred XML files (so File IO comes to play), which are then stored to the hashes via simple string assignments (there's no copying from old hashes to new ones). After they're stored I do mostly reads 99% of the time. Hmm OK. So there's another bottleneck somewhere that I don't know about. Maybe hash computation? That area may still have some issues that need fixing. T -- A bend in the road is not the end of the road unless you fail to make the turn. -- Brian White
Re: MessagePack
On 3/24/12, Andrej Mitrovic andrej.mitrov...@gmail.com wrote: https://github.com/msgpack/msgpack-d Btw, thank you very much for your hard work Masahiro Nakagawa! Your serialization library has reduced my waiting time by over 20 seconds (I used Json before even though I didn't need a human-readable format).
Re: New hash
On Sat, Mar 24, 2012 at 05:56:27AM +0100, Andrej Mitrovic wrote: On 3/24/12, H. S. Teoh hst...@quickfur.ath.cx wrote: Cool. Should I just replace the current AA alias with Assoc? I think this is far more concise, and much easier to convert to real AA syntax when we integrate with druntime/dmd. Sure. I kept wondering whether AA!(int, string) is a hash with an int or a string key. :) OK, I found that your template doesn't handle the case where the key type is an AA. :-) It's not hard to fix, though, so I cleaned up my unittests to use the new AA template. However, one case remains unsolved: either I can't find the right way to express this, or the new AA template needs fixing: AA!(string[const AA!(int[int])]) meta; I can't seem to get rid of the inner AA!() without the type changing on me and ending up with const conversion errors (which need to be fixed, but the types *are* different from what was intended in the unittest). Anyway, the changes have been pushed to github. T -- They pretend to pay us, and we pretend to work. -- Russian saying
Re: New hash
On 3/24/12, H. S. Teoh hst...@quickfur.ath.cx wrote: OK, I found that your template doesn't handle the case where the key type is an AA. Heh, yeah, I've never used hashes as key types before. However, one case remains unsolved: either I can't find the right way to express this, or the new AA template needs fixing: AA!(string[const AA!(int[int])]) meta; Not sure why you're combining old and new syntax in there? Is this what you wanted?:

    AA!(string[const int[int]]) meta;
Freeing memory allocated at C function
I'm using some C functions like these:

    char *str = allocateNewString();

And this:

    Object *obj = constructObject();
    // etc
    freeObject(obj);

Do I need to free the memory in both cases? Can I somehow register them with the GC?
Re: Freeing memory allocated at C function
On 03/22/2012 11:27 PM, Pedro Lacerda wrote: I'm using some C functions like these: char *str = allocateNewString(); And this: Object *obj = constructObject(); // etc freeObject(obj); Do I need to free the memory in both cases? Can I someway register them on GC? You can register them with the GC if you wrap the resources in a class; the class object's destructor would then call the cleanup code. The problem is, it is nondeterministic when the destructor will be called, or whether it will be called at all! Or you can wrap them in a struct, which has deterministic destruction when leaving scopes, like in C++. A better thing to do in this case is to use the scope() statement. Depending on when you want the cleanup to happen:

    scope(success): if the scope is being exited successfully
    scope(failure): if the scope is being exited with an exception
    scope(exit):    when exiting the scope, regardless

For example, if you want the cleanup only if there is an exception:

    int allocate() { return 42; }
    void deallocate(int) {}

    void foo()
    {
        int resource = allocate();
        scope(failure) deallocate(resource);
        // ... an exception may be thrown here ...
    }

    void main()
    {
        foo();
    }

Ali
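Ali's struct suggestion can be sketched like this; allocateNewString/constructObject from the question are hypothetical, so malloc/free from the C runtime stand in for them to keep the example self-contained:

```d
import core.stdc.stdlib : malloc, free;

int liveBuffers; // demonstration only: counts outstanding allocations

// Deterministic cleanup via a struct wrapper (RAII, as in C++):
// the destructor runs when the struct goes out of scope.
struct CBuffer
{
    char* ptr;

    this(size_t n)
    {
        ptr = cast(char*) malloc(n);
        ++liveBuffers;
    }

    ~this()
    {
        if (ptr)
        {
            free(ptr);
            ptr = null;
            --liveBuffers;
        }
    }

    @disable this(this); // forbid copies, so no double free
}

void useBuffer()
{
    auto buf = CBuffer(64);
    // ... use buf.ptr ...
} // buf is destroyed here, exception or not

void main()
{
    useBuffer();
    assert(liveBuffers == 0); // everything was freed deterministically
}
```

The disabled postblit is what makes the pattern safe: without it, a copy of the struct would free the same pointer twice.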
Re: Vector operations optimization.
On 23 March 2012 18:57, Comrad comrad.karlov...@googlemail.com wrote: On Thursday, 22 March 2012 at 10:43:35 UTC, Trass3r wrote: What is the status at the moment? What compiler and which compiler flags should I use to achieve maximum performance? In general gdc or ldc. Not sure how good vectorization is though, esp. auto-vectorization. On the other hand, the so-called vector operations like a[] = b[] + c[]; are lowered to hand-written SSE assembly even in dmd. I had such a snippet to test:

    import std.stdio;
    void main()
    {
        double[2] a  = [1., 0.];
        double[2] a1 = [1., 0.];
        double[2] a2 = [1., 0.];
        double[2] a3 = [0., 0.];
        foreach (i; 0 .. 10)
            a3[] += a[] + a1[] * a2[];
        writeln(a3);
    }

And I compared with the following D code:

    import std.stdio;
    void main()
    {
        double[2] a  = [1., 0.];
        double[2] a1 = [1., 0.];
        double[2] a2 = [1., 0.];
        double[2] a3 = [0., 0.];
        foreach (i; 0 .. 10)
        {
            a3[0] += a[0] + a1[0] * a2[0];
            a3[1] += a[1] + a1[1] * a2[1];
        }
        writeln(a3);
    }

And with the following C code:

    #include <stdio.h>
    int main()
    {
        double a[2]  = {1., 0.};
        double a1[2] = {1., 0.};
        double a2[2] = {1., 0.};
        double a3[2];
        unsigned i;
        for (i = 0; i < 10; ++i)
        {
            a3[0] += a[0] + a1[0] * a2[0];
            a3[1] += a[1] + a1[1] * a2[1];
        }
        printf("%f %f\n", a3[0], a3[1]);
        return 0;
    }

The last one I compiled with gcc, the two previous with dmd and ldc. The C code with -O2 was the fastest, and as fast as the D version without slicing compiled with ldc. The D code with slicing was 3 times slower (ldc compiler). I tried to compile with different optimization flags; that didn't help. Maybe I used the wrong ones. Can someone comment on this? The flags you want are -O, -inline, -release. If you don't have those, that might explain some of the slowdown on slicing, since -release drops a ton of runtime checks. Otherwise, I'm not sure why it's so much slower; the druntime array ops are written using SIMD instructions where available, so it should be fast. -- James Miller
Template constraint and specializations
Is there a way to write a template constraint that matches any specialization of a given type? For example, can the following be done without having to write out every combination of feature1 and feature2?

    class Foo(bool feature1, bool feature2) { ... }

    void useFoo(T)(T foo)
        if (is(T == Foo!(false, false)) ||
            is(T == Foo!(false, true)) ||
            is(T == Foo!(true, false)) ||
            is(T == Foo!(true, true)))
    {
        // call methods of foo that don't change based on feature1/feature2
    }

Thanks, --Ed
Re: Template constraint and specializations
On 3/23/12, Ed McCardell edmcc...@hotmail.com wrote: Is there a way to write a template constraint that matches any specialization of a given type? Nope. But there are simple workarounds:

    class Foo(bool feature1, bool feature2)
    {
        enum _isFoo = true;
    }

    template isFoo(T)
    {
        enum bool isFoo = __traits(hasMember, T, "_isFoo");
    }

    void useFoo(T)(T foo) if (isFoo!T)
    {
        // call methods of foo that don't change based on feature1/feature2
    }

    void main()
    {
        Foo!(true, false) foo;
        useFoo(foo);
    }
Re: Template constraint and specializations
On 03/23/2012 04:14 AM, Andrej Mitrovic wrote: On 3/23/12, Ed McCardell edmcc...@hotmail.com wrote: Is there a way to write a template constraint that matches any specialization of a given type? Nope. But there are simple workarounds: class Foo(bool feature1, bool feature2) { enum _isFoo = true; } template isFoo(T) { enum bool isFoo = __traits(hasMember, T, "_isFoo"); } Thanks! I was tempted to try something hacky for the constraint, like if (T.stringof == "Foo"), but tagging the type with an enum works better all around. --Ed
Re: Vector operations optimization.
On 23.03.2012 9:57, Comrad wrote: On Thursday, 22 March 2012 at 10:43:35 UTC, Trass3r wrote: What is the status at the moment? What compiler and which compiler flags should I use to achieve maximum performance? In general gdc or ldc. Not sure how good vectorization is though, esp. auto-vectorization. On the other hand, the so-called vector operations like a[] = b[] + c[]; are lowered to hand-written SSE assembly even in dmd. I had such a snippet to test:

    import std.stdio;
    void main()
    {
        double[2] a  = [1., 0.];
        double[2] a1 = [1., 0.];
        double[2] a2 = [1., 0.];
        double[2] a3 = [0., 0.];

Here is the culprit: the array ops [] are tuned for arbitrarily long(!) arrays; they are not a plain single SIMD SSE op. They are handcrafted loops(!) over SSE ops, cool and fast for arrays in general, not for fixed pairs/trios/etc. I believe this might change in the future, if the compiler is able to deduce that the size is fixed and use more optimal code for small sizes.

        foreach (i; 0 .. 10)
            a3[] += a[] + a1[] * a2[];
        writeln(a3);
    }

And I compared with the following D code: [... the unrolled D and C versions from the previous post ...] The last one I compiled with gcc, the two previous with dmd and ldc. The C code with -O2 was the fastest, and as fast as the D version without slicing compiled with ldc. The D code with slicing was 3 times slower (ldc compiler). I tried to compile with different optimization flags; that didn't help. Maybe I used the wrong ones. Can someone comment on this? -- Dmitry Olshansky
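Until the compiler learns that specialization, the practical workaround Dmitry hints at is to unroll by hand for the known fixed size (a sketch of the idea, not a tuned kernel):

```d
import std.stdio;

// Manually unrolled for length 2: no general-purpose array-op
// loop machinery, just two scalar multiply-adds.
void axpy2(ref double[2] a3, const ref double[2] a,
           const ref double[2] a1, const ref double[2] a2)
{
    a3[0] += a[0] + a1[0] * a2[0];
    a3[1] += a[1] + a1[1] * a2[1];
}

void main()
{
    double[2] a  = [1.0, 0.0];
    double[2] a1 = [1.0, 0.0];
    double[2] a2 = [1.0, 0.0];
    double[2] a3 = [0.0, 0.0];
    foreach (i; 0 .. 10)
        axpy2(a3, a, a1, a2);
    writeln(a3); // prints [20, 0]
}
```

This mirrors the second snippet in the thread; wrapping it in a function just makes the fixed-size version reusable without losing the speed advantage over the slice ops.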
Re: Vector operations optimization.
The flags you want are -O, -inline, -release. If you don't have those, that might explain some of the slowdown on slicing, since -release drops a ton of runtime checks. The -noboundscheck option can also speed things up.
Re: Template constraint and specializations
Andrej Mitrovic: Nope. But there are simple workarounds: Why isn't something similar to this working?

    import std.traits: Unqual;

    class Foo(bool feature1, bool feature2) {}

    template isFoo(T)
    {
        static if (is(Unqual!T Unused : Foo!Features, Features...))
        {
            enum isFoo = true;
        }
        else
        {
            enum isFoo = false;
        }
    }

    void main()
    {
        auto f1 = new Foo!(true, false)();
        static assert(isFoo!(typeof(f1)));
    }

Bye, bearophile
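For comparison, a variant of the same idea that does compile for me (support for is-expressions with a template parameter list has varied across compiler releases, which may be what bearophile is hitting; later Phobos versions also ship std.traits.isInstanceOf for exactly this job):

```d
class Foo(bool feature1, bool feature2) {}

template isFoo(T)
{
    // Match any instantiation of Foo; Args is deduced by the
    // is-expression's template parameter list.
    enum isFoo = is(T U : Foo!Args, Args...);
}

void main()
{
    auto f1 = new Foo!(true, false)();
    static assert(isFoo!(typeof(f1)));
    static assert(!isFoo!int);
}
```

Unlike the enum-tag workaround earlier in the thread, this needs no cooperation from Foo itself, so it also works for types you don't control.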
Calling c shared library
Forgive my programming 101 question :) I want to call a function from a precompiled shared library:

    // C header
    void f(void);

    // my D file
    extern(C) void f();
    void main() {}

    $ dmd mydfile.d
    libphobos2.a(deh2_33a_525.o): In function `_D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable':
    src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0x4): undefined reference to `_deh_beg'
    src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0xc): undefined reference to `_deh_beg'
    src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0x13): undefined reference to `_deh_end'
    src/rt/deh2.d:(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh213DHandlerTable+0x37): undefined reference to `_deh_end'
    collect2: ld returned 1 exit status
    --- errorlevel 1

Is there a way to do the above, or do I have to manually load the shared library and use aliases for the functions?
Re: Vector operations optimization.
On Friday, 23 March 2012 at 10:48:55 UTC, Dmitry Olshansky wrote: Here is the culprit: the array ops [] are tuned for arbitrarily long(!) arrays; they are not a plain single SIMD SSE op. They are handcrafted loops(!) over SSE ops, cool and fast for arrays in general, not for fixed pairs/trios/etc. I believe this might change in the future, if the compiler is able to deduce that the size is fixed and use more optimal code for small sizes. So currently no such optimization exists in any D compiler?
Re: Vector operations optimization.
On Friday, 23 March 2012 at 11:20:59 UTC, Trass3r wrote: The flags you want are -O, -inline, -release. If you don't have those, that might explain some of the slowdown on slicing, since -release drops a ton of runtime checks. The -noboundscheck option can also speed things up. dmd is very slow anyway; ldc2 was better, but still not fast enough.
Re: Calling c shared library
On Fri, 23 Mar 2012 15:04:48 +0100, simendsjo simend...@gmail.com wrote: Forgive my programming 101 question :) I want to call a function from a precompiled shared library: // C header: void f(void); // my D file: extern(C) void f(); void main() {} [... linker errors elided ...] Is there a way to do the above, or do I have to manually load the shared library and use aliases for the functions? Stupidity has a new name, and it's simendsjo! I actually had extern(C): at the top of my file, so main got C linkage and a D main was nowhere to be found :)
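For the record, the basic static-linking pattern that was intended here: declare the C function with extern(C) and let the linker resolve the symbol. Since the questioner's library isn't available, this sketch declares two libc functions instead so it actually links:

```d
// Hand-written C declarations; the symbols resolve from libc at
// link time, exactly as they would from a third-party shared
// library (pass the library to the linker, e.g. dmd app.d -L-lfoo).
extern(C) int puts(const char* s);
extern(C) size_t strlen(const char* s);

void main() // main keeps D linkage: don't put it under extern(C):
{
    assert(strlen("hello".ptr) == 5); // C function, called from D
    puts("hello from C linkage");
}
```

An extern(C): block attribute at the top of the file is fine too, as long as main stays outside it, which was exactly the bug in this thread.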