On Sun, Mar 23, 2008 at 6:41 AM, Francesc Altet <[EMAIL PROTECTED]> wrote:

> On Sunday, 23 March 2008, Charles R Harris wrote:
> > gcc --version: gcc (GCC) 4.1.2 20070925 (Red Hat 4.1.2-33)
> > cpu:  Intel(R) Core(TM)2 CPU          6600  @ 2.40GHz
> >
> >         Problem size              Simple              Intrin              Inline
> >                  100   0.0002ms (100.0%)   0.0001ms ( 68.7%)   0.0001ms ( 74.8%)
> >                 1000   0.0015ms (100.0%)   0.0011ms ( 72.0%)   0.0012ms ( 80.4%)
> >                10000   0.0154ms (100.0%)   0.0111ms ( 72.1%)   0.0122ms ( 79.1%)
> >               100000   0.1081ms (100.0%)   0.0759ms ( 70.2%)   0.0811ms ( 75.0%)
> >              1000000   2.7778ms (100.0%)   2.8172ms (101.4%)   2.7929ms (100.5%)
> >             10000000  28.1577ms (100.0%)  28.7332ms (102.0%)  28.4669ms (101.1%)
>
> I'm mystified that your machine needs just 28s to complete the
> 10 million test, while most of the other, similar processors (some faster
> than yours) in this thread fall pretty far from your figure.  What
> sort of memory subsystem are you using?
>

Yeah, I noticed that ;) The CPU is an E6600, which was the low end of the
performance Core 2 Duo line before the recent Intel releases, the north
bridge (memory controller) is a P35, and the memory is DDR2 running at 800
MHz with 4-4-4-12 timings. The only things I tweaked were the memory voltage
and timings. Raising the memory speed from 667 to 800 MHz made a noticeable
difference in my perception of speed, which is remarkable in itself. The
motherboard was cheap; it goes for $70 these days.

I've seen folks overclock the E6600 as high as 3.8 GHz, and over 3 GHz is
common. Sometimes it's almost tempting...

Chuck
_______________________________________________
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion
