For interest, the same benchmarks run natively on Mac OS X under
SoyLatte with the -server option and the large data set are:

SciGMark 1.0 - Java - specialized
FFT (1048576): 66.06220438347867
SOR (1000x1000):   788.5439976789686
Monte Carlo : 241.39879385124996
Sparse matmult (N=100000, nz=1000000): 425.0726583860497
LU (1000x1000): 414.8516922439953
PolyMult (N=100): 531.6603292535906

Composite Score: 411.2649459662221

java.vendor: Sun Microsystems Inc.
java.version: 1.6.0_03-p3
os.arch: amd64
os.name: Darwin
os.version: 9.2.2

This native run is a little faster, so the overhead of Parallels plus
Windows is small compared to native 10.5.2.

It would be interesting to see some other benchmarks that are more
like a typical OO application. The SciMark tests, whilst typical of
scientific applications, are not like typical OO applications: they
make few virtual calls, generate little garbage, and so on.
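
To make that concrete, here is a minimal, hypothetical sketch (all
class and method names invented for illustration) of the kind of code
a typical OO application spends its time in: interface dispatch and
lots of short-lived allocations, neither of which SciMark exercises
much.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch: a workload dominated by interface dispatch and
    // short-lived allocations, unlike SciMark's tight numeric loops.
    public class OoStyleWorkload {

        interface Shape {
            double area();                 // a virtual call at every use site
        }

        static final class Circle implements Shape {
            final double r;
            Circle(double r) { this.r = r; }
            public double area() { return Math.PI * r * r; }
        }

        static final class Square implements Shape {
            final double s;
            Square(double s) { this.s = s; }
            public double area() { return s * s; }
        }

        public static void main(String[] args) {
            double total = 0.0;
            for (int iter = 0; iter < 1000; iter++) {
                // Many small, short-lived objects: this is what keeps the GC busy.
                List<Shape> shapes = new ArrayList<Shape>();
                for (int i = 0; i < 10000; i++) {
                    shapes.add((i % 2 == 0) ? new Circle(i * 0.001)
                                            : new Square(i * 0.001));
                }
                for (Shape s : shapes) {
                    total += s.area();     // polymorphic call the JIT must devirtualize
                }
            }
            System.out.println("total area = " + total);
        }
    }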

Sorry to the Fan people for hijacking their thread.

On Apr 26, 6:54 am, John Rose <[EMAIL PROTECTED]> wrote:
> On Apr 25, 2008, at 4:08 AM, Attila Szegedi wrote:
>
> > On 2008.04.25., at 12:54, hlovatt wrote:
>
> >> All, Java, C#, & C, tests are on Windows XP running under
> >> Parallels on
> >> a Mac Book Pro., 2.33 GHz Intel Core 2 Duo, 2 GB 667 MHz DDR2 SDRAM.
>
> > Benchmarking under a virtualized OS? Kirk just wrote recently about
> > that: <http://kirk.blog-city.com/can_i_bench_with_virtualization.htm>
>
> A Java-to-Java comparison (VMWare/Windows to Mac OS X) would be
> interesting for VMWare aficionados.
>
> Scimark is a good benchmark for basic CPU/FPU use.  It is sensitive
> to loop optimizations and array usage patterns, as well as to stray
> oddities like how your random number generator is designed.  The JVM
> does well on loop opts., and there is always more to do (current
> bleeding edge is SIMD).
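
By contrast, the numeric kernels look more like the following minimal
sketch (an SOR/Jacobi-style relaxation over a double[][], written here
for illustration rather than copied from SciMark): performance hinges
almost entirely on how well the JIT handles the loops, the bounds
checks, and the row-by-row array access, not on dispatch or GC.

    // Illustrative SOR-style kernel: tight floating-point loops over a
    // double[][] with no virtual dispatch and essentially no allocation
    // once the grid exists.
    public class SorStyleKernel {

        static void relax(double omega, double[][] g, int iterations) {
            int m = g.length;
            int n = g[0].length;
            for (int it = 0; it < iterations; it++) {
                for (int i = 1; i < m - 1; i++) {
                    double[] gi = g[i];        // hoist the row references
                    double[] gim1 = g[i - 1];
                    double[] gip1 = g[i + 1];
                    for (int j = 1; j < n - 1; j++) {
                        gi[j] = omega * 0.25 * (gim1[j] + gip1[j] + gi[j - 1] + gi[j + 1])
                                + (1.0 - omega) * gi[j];
                    }
                }
            }
        }

        public static void main(String[] args) {
            int size = 1000;
            double[][] grid = new double[size][size];
            for (int i = 0; i < size; i++)
                for (int j = 0; j < size; j++)
                    grid[i][j] = (i * 31 + j) % 7;   // arbitrary deterministic fill
            long start = System.nanoTime();
            relax(1.25, grid, 10);
            long elapsed = System.nanoTime() - start;
            System.out.println("relax: " + (elapsed / 1e6) + " ms, g[1][1] = " + grid[1][1]);
        }
    }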
>
> A couple of scimark benchmarks use 2-D arrays (not surprising!) and
> the JVM is a little weak there because of the lack of true 2-D
> arrays.  We have long known how to fix this under the covers, but as
> we soberly prioritize our opportunities, we've chosen to work on
> other things.  An excellent outcome of the OpenJDK is that the
> community can now vote with code about which optimizations are most
> important.
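
The usual hand-written workaround for this (and roughly the effect an
under-the-covers fix would aim for) is to flatten the matrix into a
single 1-D array with explicit index arithmetic, giving contiguous
storage and no per-row pointer chase. A minimal sketch:

    // Minimal sketch: a logical rows x cols matrix stored in one double[],
    // the usual workaround for Java's arrays-of-arrays representation.
    public class FlatMatrix {
        final int rows, cols;
        final double[] data;               // one contiguous block, row-major

        FlatMatrix(int rows, int cols) {
            this.rows = rows;
            this.cols = cols;
            this.data = new double[rows * cols];
        }

        double get(int i, int j)           { return data[i * cols + j]; }
        void   set(int i, int j, double v) { data[i * cols + j] = v; }

        public static void main(String[] args) {
            FlatMatrix m = new FlatMatrix(1000, 1000);
            m.set(3, 4, 42.0);
            System.out.println(m.get(3, 4));   // prints 42.0
        }
    }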
>
> At best, this sort of small benchmark will reach C++ levels of
> performance on the JVM.  (At least until we do really aggressive task
> decomposition and use our virtualization freedom to lie about data
> structure layouts.  But at present the state of the art is to require
> heavy input from the programmer for such things.)
>
> At the risk of prolonging the benchmark battle, I have to admit that
> scimark is not the sort of app. I had in mind when I was bragging
> about the JVM earlier on this thread.  (Sorry Fan guys.  Major thread
> hijack here.  Your stuff looks cool, esp. the library agnostic part.)
>
> The JVM's most sophisticated tricks (as enumerated elsewhere) have to
> do with optimistic profile-driven optimizations, with deoptimization
> backoffs.  These show up when the system is large and decoupled
> enough that code from point A is reused at point B in a type context
> that is unknown at A.
>
> At that point, the JVM (following Smalltalk and Self traditions) can
> fill in missing information accumulated during warm-up, which can
> drive optimization of point A in terms of specific use cases at point B.
>
> All of this works best when the optimizations are allowed to fail
> when the use cases at B change (due to app. phase changes, e.g.) or
> when points C and D show up and cause the compilation of A's use
> cases to be reconsidered.  Key methods get recompiled multiple times
> as the app. finds its hot spot.
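
As a hypothetical illustration of that A/B/C story (all names below
invented): a reusable method, point A, calls through an interface;
during warm-up only one implementing class, point B's, ever reaches
that call site, so the JIT can inline it speculatively; when a second
class, point C's, later arrives at the same site, the assumption fails
and the VM deoptimizes and recompiles A with the richer profile.

    // Hypothetical sketch of an optimistic, profile-driven optimization.
    public class SpeculativeInlining {

        interface Step {
            long apply(long x);
        }

        static final class AddOne implements Step {   // the type seen at point B
            public long apply(long x) { return x + 1; }
        }

        static final class Twice implements Step {    // the type that appears later, at point C
            public long apply(long x) { return x * 2; }
        }

        // Point A: the receiver type of s.apply() is unknown when this is
        // written.  After warm-up with only AddOne, the JIT can speculate
        // that the call site is monomorphic, inline AddOne.apply(), and
        // optimize across the call boundary.
        static long run(Step s, long n) {
            long acc = 0;
            for (long i = 0; i < n; i++) {
                acc += s.apply(i);
            }
            return acc;
        }

        public static void main(String[] args) {
            // Warm-up: the call site in run() only ever sees AddOne.
            long warm = run(new AddOne(), 5000000L);

            // Phase change: a second receiver type shows up at the same site,
            // so the earlier speculative compilation of run() must be thrown
            // away (deoptimized) and eventually redone with the new profile.
            long later = run(new Twice(), 5000000L);

            System.out.println(warm + " " + later);
        }
    }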
>
> It is these sorts of optimistic, online optimizations that make the
> JVM run faster than C++, when it does.  (It does, e.g., when it
> inlines hot interface calls and optimizes across the call boundary.)
> Microsoft could do so with C# also, but probably not as long as the
> C# JIT runs at application load time, which (as I am told by friendly
> Microsoft colleagues) it does.
>
> A final note about C# vs. Java on Intel chips.  We have noticed that
> the Intel (and AMD) chips are remarkably tolerant of junky object
> code.  Part of the challenge of JVM engineering is to find
> optimizations that tend to make code run better across a range of
> platforms with other core competencies (like many-core SPARC,
> obviously for Sun).
>
> I speculate that Hotspot has been driven to work harder on clever
> optimizations not only because we have competed with other excellent
> implementations (IBM J9, BEA JRockit), but also because Java needs to
> run on a wider range of chips than C#; some of them are less
> forgiving than x86.  A way to quantify the "chip factor" would be to
> compare the gap between server and client JITs on a range of Java
> apps., especially simpler more "static" ones like scimark.  More
> forgiving chips would narrow the gap.
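
A minimal sketch of such a comparison, assuming nothing more than the
standard -client and -server launcher flags: run the same small
numeric harness once under each JIT and compare the reported times;
the smaller the gap on a given chip, the more forgiving that chip is
of less-optimized code. The class name and kernel below are invented
for illustration.

    // Run twice and compare, e.g.:
    //   java -client JitGapHarness
    //   java -server JitGapHarness
    public class JitGapHarness {

        // A simple numeric kernel standing in for a scimark-style workload.
        static double kernel(int n) {
            double sum = 0.0;
            for (int i = 1; i <= n; i++) {
                sum += 1.0 / ((double) i * i);
            }
            return sum;
        }

        public static void main(String[] args) {
            // Warm up so both JITs get a chance to compile the kernel.
            for (int i = 0; i < 10; i++) {
                kernel(1000000);
            }
            long start = System.nanoTime();
            double result = kernel(50000000);
            long elapsed = System.nanoTime() - start;
            System.out.println("result = " + result + ", time = " + (elapsed / 1e6) + " ms");
        }
    }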
>
> Best wishes,
> -- John