"Martin Siegert" <[EMAIL PROTECTED]> wrote:
In that respect the SPEC MPI2007 benchmark appears to be ideal. Alas,
it does not appear to be open source (please correct me if I am wrong)

SPEC MPI2007 will not be open source, but the price should not be very high either, and that cost could be left to the vendors bidding on your procurement to pay.

- so far I have not even been able to figure out which applications are
being used in that benchmark suite.

The names of the SPEC MPI2007 benchmarks have not been made public yet.

SPEC confidentiality rules do not allow that until the suite is
released.  One reason for that is that it is still possible to drop a
candidate code from the suite (though unlikely) and SPEC wants to spare
the author of such a code possible embarrassment.  The expected release
date is late June this year.

To give more of a hint about what MPI2007 will be like: it will consist of real applications that have been made more portable and for which correctness tests have been defined. The types of applications in the CPU2006 floating point suite ( http://www.spec.org/cpu2006/CFP2006/ ) are similar to what will be in the MPI2007 suite, though only a few codes are common to both suites.

I don't think I'm allowed to reveal the price of the MPI2007 suite yet, but it should be somewhat similar to the price structure for CPU2006 (hint, hint).

From: "Brian D. Ropers-Huilman" <[EMAIL PROTECTED]>
To: "Martin Siegert" <[EMAIL PROTECTED]> Cc: [email protected]

On 5/4/07, Martin Siegert <[EMAIL PROTECTED]> wrote:
> We will be purchasing a shared cluster for a wide community (currently
> more than 1000 users). Thus, the common response on this list to evaluate
> hardware - "use your own application as benchmark" - does not work:
> users change, users' applications change, etc., etc. Thus, I need a
> benchmark suite that tests a wide spectrum of properties.

My answer is still to "use your own application(s)." Poll your users
and find out what they have and what they are going to run. Find some
who already have codes that scale well (>1000 cores) and ask them to
participate. Many vendors will allow you to run your own codes on
systems they have at their own sites before you decide to purchase.
These vendor-hosted systems are typically only 256 cores or fewer, but
they give you some idea of how your codes might run.

I also suggest picking some representative synthetic benchmarks to
test floating point and integer operations, memory bandwidth, MPI
ping-pongs (the SPEC MPI2007, among others, would fit here),

SPEC MPI2007 will fit in the class of what I call standard benchmarks, but not "synthetic" benchmarks. SPEC starts with the real applications as you would download them, and then makes as few changes as possible so that the codes can be built, and the results validated, on a wide range of compilers and platforms.

the HPC
Challenge codes, and the like. Many sites will then take all of these
results (synthetic + their own applications) and aggregate the
results, possibly with weighting factors, into a single number.

Agreed.  This is a good strategy that is used quite often.
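A minimal sketch of that aggregation strategy, assuming a weighted geometric mean as the combining rule (a common choice for benchmark ratios; the benchmark names, ratios, and weights below are purely illustrative, not from any real procurement):

```python
import math

# Hypothetical per-benchmark performance ratios (system vs. reference),
# higher is better -- names and numbers are illustrative only.
ratios = {
    "stream_triad": 1.8,
    "mpi_pingpong": 2.1,
    "user_app_a":   1.4,
    "user_app_b":   1.6,
}

# Site-chosen weights reflecting the expected workload mix; they sum to 1.0.
weights = {
    "stream_triad": 0.2,
    "mpi_pingpong": 0.1,
    "user_app_a":   0.4,
    "user_app_b":   0.3,
}

def weighted_geomean(ratios, weights):
    """Aggregate per-benchmark ratios into a single figure of merit."""
    return math.exp(sum(w * math.log(ratios[name])
                        for name, w in weights.items()))

score = weighted_geomean(ratios, weights)
```

A geometric mean keeps any single benchmark from dominating the score the way an arithmetic mean of ratios would, which is why the heavier weights here go to the user applications rather than the micro-benchmarks.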

Benchmarks like SPEC CPU2006 are often added to various synthetic benchmarks and user applications as benchmark requirements for an RFP. We hope that the same will happen with SPEC MPI2007. Hopefully, having a suite of applications available will mean you don't have to develop quite as large a (portable) suite of your own applications as you would otherwise, or field as many questions from vendors as they try to get your codes to run.
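As an aside on the MPI ping-pong tests mentioned earlier, the measurement pattern itself is simple to sketch. The following uses Python's multiprocessing pipes rather than real MPI, purely to illustrate the round-trip timing loop; in practice you would use mpi4py or an established micro-benchmark such as the latency tests in HPC Challenge:

```python
import time
from multiprocessing import Process, Pipe

def pong(conn, iters):
    # Echo every message straight back, like the partner rank in an
    # MPI ping-pong benchmark.
    for _ in range(iters):
        conn.send_bytes(conn.recv_bytes())

def pingpong_latency(msg_size=8, iters=1000):
    """Return the mean round-trip time in seconds for msg_size-byte messages."""
    parent, child = Pipe()
    p = Process(target=pong, args=(child, iters))
    p.start()
    payload = b"x" * msg_size
    start = time.perf_counter()
    for _ in range(iters):
        parent.send_bytes(payload)
        parent.recv_bytes()   # wait for the echo before sending again
    elapsed = time.perf_counter() - start
    p.join()
    return elapsed / iters

if __name__ == "__main__":
    print(f"mean round-trip time: {pingpong_latency() * 1e6:.1f} us")
```

Real MPI ping-pongs sweep the message size to map out both latency (small messages) and bandwidth (large messages) of the interconnect; the loop structure is the same.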

-Tom Elken
QLogic Corp.  and the SPEC High Performance Group.

_______________________________________________
Beowulf mailing list, [email protected]
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
