On Mar 3, 2:07 pm, Bill Hart <goodwillh...@googlemail.com> wrote:
> Here are figures for a 2.66 GHz Xeon (Core 2) under linux:
>
> ***** MPIRbench version 0.1 *****
> Using default CFLAGS = "-O2 -fomit-frame-pointer -I/home/wbhart/mpir-core2/"
> Using default CC = "gcc"
> Using default LIBS = "-static -L/home/wbhart/mpir-core2/.libs/ -lmpir"
> Using compilation command: gcc -O2 -fomit-frame-pointer
> -I/home/wbhart/mpir-core2/ foo.c -o foo -static
> -L/home/wbhart/mpir-core2/.libs/ -lmpir
> You may want to override CC, CFLAGS, and LIBS
> Using MPIR version: 0.9.0
> Compiling benchmarks
> Running benchmarks
>   Category base
>     Program multiply
>       multiply 128 128
>       MPIRbench.base.multiply.128,128 result: 36290755
>       multiply 512 512
>       MPIRbench.base.multiply.512,512 result: 6974234
>       multiply 8192 8192
>       MPIRbench.base.multiply.8192,8192 result: 64000
>       multiply 131072 131072
>       MPIRbench.base.multiply.131072,131072 result: 928
>       multiply 2097152 2097152
>       MPIRbench.base.multiply.2097152,2097152 result: 36.4
>     MPIRbench.base.multiply result: 55927
>     Program divide
>       divide 8192 32
>       MPIRbench.base.divide.8192,32 result: 603601
>       divide 8192 64
>       MPIRbench.base.divide.8192,64 result: 620812
>       divide 8192 128
>       MPIRbench.base.divide.8192,128 result: 336842
>       divide 8192 4096
>       MPIRbench.base.divide.8192,4096 result: 109596
>       divide 8192 8064
>       MPIRbench.base.divide.8192,8064 result: 1404355
>       divide 131072 8192
>       MPIRbench.base.divide.131072,8192 result: 2292
>       divide 131072 65536
>       MPIRbench.base.divide.131072,65536 result: 1212
>       divide 8388608 4194304
>       MPIRbench.base.divide.8388608,4194304 result: 3.34
>     MPIRbench.base.divide result: 25526
>   MPIRbench.base result: 37784
>   Category app
>     Program rsa
>       rsa 512
>       MPIRbench.app.rsa.512 result: 15277
>       rsa 1024
>       MPIRbench.app.rsa.1024 result: 3134
>       rsa 2048
>       MPIRbench.app.rsa.2048 result: 476
>     MPIRbench.app.rsa result: 2835.2
>   MPIRbench.app result: 2835.2
> MPIRbench result: 10350
>
> Bill.
>
> 2009/3/3 Bill Hart <goodwillh...@googlemail.com>:
>
> > As the Sage project is currently investing a lot into Windows porting,
> > I asked on the sage-devel list if anyone has a suitable Core 2 machine
> > with Windows.
>
> > Bill.
>
> > 2009/3/3 Bill Hart <goodwillh...@googlemail.com>:
> >> I should add that I don't have access to Windows on Core 2 anywhere.
> >> In fact I do not know anyone who has this.
>
> >> Bill.
>
> >> 2009/3/3 Bill Hart <goodwillh...@googlemail.com>:
> >>> I finished some work I was doing on my other project FLINT 5 days
> >>> early!! So I will spend a little time on mpir again. I'll work up some
> >>> timings for you on a core 2.
>
> >>> Bill.
>
> >>> 2009/3/3 Cactus <rieman...@googlemail.com>:
>
> >>>> On Mar 3, 12:42 pm, Jeff Gilchrist <jeff.gilchr...@gmail.com> wrote:
> >>>>> On Mon, Mar 2, 2009 at 3:10 PM, Cactus <rieman...@googlemail.com> wrote:
> >>>>> > I have added batch files in the vc9.build directory - to_gmp.bat and
> >>>>> > to_mpir.bat - for name conversion.
>
> >>>>> Great, thanks, it seems to work well for the build.vc9 directory.
> >>>>> When you eventually convert your support for gmp-ecm and others to
> >>>>> MPIR, you will notice that it looks for "gmp.h" in the root of the
> >>>>> gmp/ directory so the batch file to convert from mpir to gmp misses
> >>>>> changing the root mpir.h to gmp.h.
>
> >>>> Thanks, I'll fix that.
>
> >>>>> > I have also added the new assembler code.
>
> >>>>> I will be testing it soon.
>
> >>>> I am confident in the AMD64 stuff.  But I am NOT yet confident in the
> >>>> performance of the Core2 code.
>
> >>>> I need x86_64 results for the Core2 code on both Windows and Linux/gcc/
> >>>> gas to know where I am with this architecture in performance terms.
>
> >>>>   Brian

Thanks Bill, this is just what I needed to understand whether the
Windows Core2 stuff is OK - and sadly it isn't :-(

Scaling up my 2.13 GHz Core2 result for the overall benchmark to 2.66
GHz gives 9150 - 13% slower.  But more importantly, scaling my multiply
figure to 2.66 GHz gives 50251 - 11% slower.  Since my K8 multiply
figures are essentially the same as the Linux figures, it seems that
the code has not moved from the K8 to the Core2 unscathed.  This is a
pain since the changes are very minor and should have moved over
without too much trouble.
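
(The scaling is just a linear clock adjustment - scaled = raw * 2.66/2.13 -
and since my raw 2.13 GHz totals aren't quoted above, the following throwaway
snippet only checks the ratios against the scaled figures; it reproduces the
roughly 13% and 11% gaps:

#include <stdio.h>

int main(void)
{
    /* Linux figures from Bill's run, and my Windows figures scaled to 2.66 GHz */
    double linux_overall  = 10350.0, win_overall_scaled  = 9150.0;
    double linux_multiply = 55927.0, win_multiply_scaled = 50251.0;

    /* percentage gap = (linux / scaled - 1) * 100 */
    printf("overall:  %.0f%% slower\n",
           (linux_overall / win_overall_scaled - 1.0) * 100.0);   /* ~13% */
    printf("multiply: %.0f%% slower\n",
           (linux_multiply / win_multiply_scaled - 1.0) * 100.0); /* ~11% */
    return 0;
}
)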

All very frustrating, I'm afraid.  Jason gave figures earlier for the
cycle time of the assembler code for each routine.  It would help me to
identify problems if you or another kind soul could duplicate these
figures for Core2.
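
For concreteness, a quick-and-dirty way to get a per-routine cycle figure is
something like the sketch below - a rough rdtsc loop, not MPIR's tune/speed
harness; mpn_addmul_1 just stands in for whichever routine is of interest,
and the include/library paths on the compile line are placeholders:

/* gcc -O2 cycles.c -I/path/to/mpir -L/path/to/mpir/.libs -lmpir -o cycles */
#include <stdio.h>
#include <stdlib.h>
#include <x86intrin.h>   /* __rdtsc() */
#include <mpir.h>        /* or gmp.h if the headers have been renamed */

#define LIMBS 1000
#define ITERS 10000

int main(void)
{
    mp_limb_t *s = malloc(LIMBS * sizeof(mp_limb_t));
    mp_limb_t *r = malloc(LIMBS * sizeof(mp_limb_t));
    mp_limb_t carry = 0;
    unsigned long long t0, t1;
    long i;

    mpn_random(s, LIMBS);
    mpn_random(r, LIMBS);

    t0 = __rdtsc();
    for (i = 0; i < ITERS; i++)
        carry += mpn_addmul_1(r, s, LIMBS, 123456789);  /* r += s * 123456789 */
    t1 = __rdtsc();

    /* crude cycles/limb; assumes a fixed clock and keeps carry so the
       call isn't optimised away */
    printf("mpn_addmul_1: %.2f cycles/limb (carry %lu)\n",
           (double)(t1 - t0) / ITERS / LIMBS, (unsigned long)carry);

    free(s);
    free(r);
    return 0;
}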

    Brian
