On Monday, 4 March 2013 at 12:28:25 UTC, J wrote:
On Monday, 4 March 2013 at 08:02:46 UTC, J wrote:
That's a really good point. I wonder if there is a canonical
matrix that would be preferred?
I'm not sure if they are the recommended/best practice for
matrix handling in D at the moment
On 03/04/2013 04:46 PM, jerro wrote:
A bit better version:
http://codepad.org/jhbYxEgU
I think this code is good compared to the original (there are better
algorithms).
You can make it much faster even without really changing the algorithm.
Just by reversing the order of inner two loops like
On Monday, 4 March 2013 at 05:07:10 UTC, Manu wrote:
Using dynamic arrays of dynamic arrays that way is pretty poor
form regardless of the language.
You should really use single dimensional array:
int[SIZE * SIZE] matrix;
And index via:
matrix[y*SIZE+x]
(You can pretty this up in various
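Manu's flat-array layout can be sketched like this in D (a minimal illustration; the SIZE value and row-major convention are assumptions, not his exact code):

```d
import std.stdio;

enum size_t SIZE = 4;

void main()
{
    // One contiguous allocation instead of SIZE separate row arrays.
    auto matrix = new int[SIZE * SIZE];

    // Element (x, y) lives at matrix[y * SIZE + x] in row-major order.
    foreach (y; 0 .. SIZE)
        foreach (x; 0 .. SIZE)
            matrix[y * SIZE + x] = cast(int)(y * SIZE + x);

    writeln(matrix[2 * SIZE + 3]); // row 2, column 3
}
```

A single block keeps every row adjacent in memory, so the multiply loops stream through it without chasing per-row pointers.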
On Monday, 4 March 2013 at 08:02:46 UTC, J wrote:
That's a really good point. I wonder if there is a canonical
matrix that would be preferred?
I'm not sure if they are the recommended/best practice for matrix
handling in D at the moment (please advise if they are not), but
with a little
J wrote:
@bearophile: Thank you! Unfortunately the
http://codepad.org/B5b4uyBM code runs a bit *slower* than the
original D code. Yikes!
$ gdmd -O -inline -release -noboundscheck -m64 bear.d -ofdbear
$ time ./dbear
-1015380632 859379360 -367726792 -1548829944
real    2m36.971s
user
On Monday, 4 March 2013 at 14:59:21 UTC, bearophile wrote:
Manu:
Does D support proper square arrays this way? Or does it just
automate allocation of the inner arrays?
Does it allocate all the associated memory in one block?
Maybe you should take a look at druntime code.
Bye,
bearophile
On Monday, 4 March 2013 at 04:15:41 UTC, bearophile wrote:
John Colvin:
First things first:
You're not just timing the multiplication, you're timing the
memory allocation as well. I suggest using
http://dlang.org/phobos/std_datetime.html#StopWatch to do some
proper timings in D
Nope, what
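For reference, the StopWatch approach John Colvin suggests can be sketched as follows (a minimal example; in 2013-era D StopWatch lived directly in std.datetime, in current D it is in std.datetime.stopwatch):

```d
import std.datetime.stopwatch : StopWatch, AutoStart;
import std.stdio;

void main()
{
    auto m = new int[1_000_000];        // allocation happens outside the timed region

    auto sw = StopWatch(AutoStart.yes); // start the clock after allocating
    foreach (i; 0 .. m.length)
        m[i] = cast(int) i;
    sw.stop();

    writefln("loop took %s msecs", sw.peek.total!"msecs");
}
```

This separates the cost of the loop itself from the cost of allocation, which is the distinction the two posters are debating.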
On Monday, 4 March 2013 at 14:59:21 UTC, bearophile wrote:
A bit better version:
http://codepad.org/jhbYxEgU
Bye,
bearophile
http://dpaste.dzfl.pl/ is back online btw
John Colvin:
The performance of the multiplication loops and the performance
of the allocation are separate issues and should be measured as
such, especially if one wants to make meaningful optimisations.
If you want to improve the D compiler, druntime, etc, then I
agree you have to
A bit better version:
http://codepad.org/jhbYxEgU
I think this code is good compared to the original (there are
better algorithms).
You can make it much faster even without really changing the
algorithm. Just by reversing the order of inner two loops like
this:
void matrixMult2(in int[][]
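The snippet above is cut off; jerro's idea of reversing the inner two loops can be sketched like this (a reconstruction of the described technique, not his exact code):

```d
import std.stdio;

// With k as the middle loop, the innermost j loop streams through
// m2[k][] and m3[i][] contiguously, which is much friendlier to the cache.
void matrixMult2(in int[][] m1, in int[][] m2, int[][] m3)
{
    foreach (i; 0 .. m1.length)
    {
        m3[i][] = 0;
        foreach (k; 0 .. m2.length)
        {
            const a = m1[i][k];
            foreach (j; 0 .. m2[k].length)
                m3[i][j] += a * m2[k][j];
        }
    }
}

void main()
{
    auto m1 = [[1, 2], [3, 4]];
    auto m2 = [[5, 6], [7, 8]];
    auto m3 = [new int[2], new int[2]];
    matrixMult2(m1, m2, m3);
    writeln(m3); // [[19, 22], [43, 50]]
}
```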
On Monday, 4 March 2013 at 15:46:50 UTC, jerro wrote:
A bit better version:
http://codepad.org/jhbYxEgU
I think this code is good compared to the original (there are
better algorithms).
You can make it much faster even without really changing the
algorithm. Just by reversing the order of
On 3/3/2013 8:50 PM, J wrote:
Dump of assembler code for function _D6matrix5mmultFAAiAAiAAiZv:
Using obj2asm will get you much nicer looking assembler.
On Monday, 4 March 2013 at 15:44:40 UTC, bearophile wrote:
John Colvin:
The performance of the multiplication loops and the
performance of the allocation are separate issues and should
be measured as such, especially if one wants to make
meaningful optimisations.
If you want to improve the
On 3/3/2013 7:48 PM, J wrote:
void mmult(int[][] m1, int[][] m2, int[][] m3)
{
    foreach (int i, int[] m1i; m1)
    {
        foreach (int j, ref int m3ij; m3[i])
        {
            int val;
            foreach (int k, int[] m2k; m2)
            {
                val += m1i[k] * m2k[j];
            }
            m3ij = val;
        }
    }
}
On 3/4/2013 9:00 AM, John Colvin wrote:
On Monday, 4 March 2013 at 15:44:40 UTC, bearophile wrote:
John Colvin:
The performance of the multiplication loops and the performance of the
allocation are separate issues and should be measured as such, especially if
one wants to make meaningful
On Monday, 4 March 2013 at 15:57:42 UTC, jerro wrote:
matrixMul2() takes 2.6 seconds on my machine and
matrixMul() takes 72 seconds (both compiled with gdmd -O
-inline -release -noboundscheck -mavx).
Thanks Jerro. You made me realize that help from the experts
could be quite useful. I
Dear D pros,
As a fan of D, I was hoping to be able to get similar results as
this fellow on stack overflow, by noting his tuning steps:
http://stackoverflow.com/questions/5142366/how-fast-is-d-compared-to-c
Sadly however, when I pull out a simple matrix multiplication
benchmark from the old
On Monday, 4 March 2013 at 03:48:45 UTC, J wrote:
Dear D pros,
As a fan of D, I was hoping to be able to get similar results
as this fellow on stack overflow, by noting his tuning steps:
http://stackoverflow.com/questions/5142366/how-fast-is-d-compared-to-c
Sadly however, when I pull out a
Your benchmark code updated to D2:
http://codepad.org/WMgu6XQG
Bye,
bearophile
On Monday, 4 March 2013 at 04:12:18 UTC, bearophile wrote:
Your benchmark code updated to D2:
http://codepad.org/WMgu6XQG
Sorry, this line:
enum size_t SIZE = 200;
Should be:
enum size_t SIZE = 2_000;
Bye,
bearophile
John Colvin:
First things first:
You're not just timing the multiplication, you're timing the
memory allocation as well. I suggest using
http://dlang.org/phobos/std_datetime.html#StopWatch to do some
proper timings in D
Nope, what matters is the total program runtime.
Bye,
bearophile
So this should be better:
http://codepad.org/B5b4uyBM
Bye,
bearophile
Generally for such matrix benchmarks, if you choose the compilation
flags really well (including link-time optimization!) I've seen
that with LDC you get good enough timings.
Bye,
bearophile
On 3/3/13 10:48 PM, J wrote:
Dear D pros,
As a fan of D, I was hoping to be able to get similar results as this
fellow on stack overflow, by noting his tuning steps:
http://stackoverflow.com/questions/5142366/how-fast-is-d-compared-to-c
Sadly however, when I pull out a simple matrix
I suggest that you move this line
GC.disable;
to the first line.
I don't see how you are doing your timings so that part is a wild
card.
Also note that when the GC is re-enabled it can add a significant
amount of time to the tests. You are not explicitly re-enabling
the GC, but I don't
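The GC.disable advice above can be sketched as follows (illustrative only; using scope(exit) to re-enable collection outside the benchmark body):

```d
import core.memory : GC;
import std.stdio;

void main()
{
    GC.disable();            // first thing: no collections during the benchmark
    scope(exit) GC.enable(); // re-enable afterwards, outside the timed region

    int sum;
    foreach (i; 0 .. 1_000)
        sum += i;
    writeln(sum);
}
```

Re-enabling via scope(exit) makes the pending collection cost run after the measured work, which is the effect the poster is warning about.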
On Monday, 4 March 2013 at 04:22:01 UTC, bearophile wrote:
So this should be better:
http://codepad.org/B5b4uyBM
Bye,
bearophile
@bearophile: Thank you! Unfortunately the
http://codepad.org/B5b4uyBM code runs a bit *slower* than the
original D code. Yikes!
$ gdmd -O -inline -release
On 4 March 2013 14:50, J priv...@private-dont-email-dont-spam.com wrote:
On Monday, 4 March 2013 at 04:22:01 UTC, bearophile wrote:
So this should be better:
http://codepad.org/B5b4uyBM
Bye,
bearophile
@bearophile: Thank you! Unfortunately the http://codepad.org/B5b4uyBM code
runs a
On Monday, 4 March 2013 at 04:49:20 UTC, Andrei Alexandrescu
wrote:
You're measuring the speed of a couple of tight loops. The
smallest differences in codegen between them will be on the
radar. Use straight for loops or foreach (i; 0 .. limit) for
those loops...
Thanks Andrei!
I validated
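Andrei's suggestion of straight foreach (i; 0 .. limit) loops, rather than foreach with index/element pairs over the arrays, can look roughly like this (a hedged sketch, not the code J validated):

```d
import std.stdio;

// Plain index loops: the simplest codegen target for the compiler.
void mmult(in int[][] m1, in int[][] m2, int[][] m3)
{
    const n = m1.length;
    foreach (i; 0 .. n)
        foreach (j; 0 .. n)
        {
            int val;
            foreach (k; 0 .. n)
                val += m1[i][k] * m2[k][j];
            m3[i][j] = val;
        }
}

void main()
{
    auto m1 = [[1, 2], [3, 4]];
    auto m2 = [[5, 6], [7, 8]];
    auto m3 = [new int[2], new int[2]];
    mmult(m1, m2, m3);
    writeln(m3); // [[19, 22], [43, 50]]
}
```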