Mark Hahn wrote:
Sigh. I thought I could avoid that response. Our own code (due to the number
of users who all believe that their code is the most important and
therefore must be benchmarked) is so massive that any potential RFP
respondent would have to work a year to run the code. Thus, we have to
Sure, the suggestion is only useful if the cluster is dedicated to a
single purpose or two. For anything else, I really think that
microbenchmarks are the only way to go. After all, your code probably
doesn't do anything which is truly unique, but rather is some
combination of a theoretical microbenchmark "basis set". No, I don't
know how to establish the factor weights, or whether this approach
really provides a good predictor. But isn't it the obvious way,
even the only tractable way?
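The "basis set" idea could be sketched roughly like this: score a candidate
machine on a handful of microbenchmarks, then predict each application's
performance as a weighted combination. All benchmark names, scores, and
weights below are invented for illustration; finding the real weights is
exactly the open problem mentioned above.

```python
# Hypothetical normalized microbenchmark results for a candidate machine,
# relative to a reference system (higher = better). Names are made up.
micro_scores = {
    "stream_bw": 1.4,    # memory bandwidth
    "mpi_latency": 0.9,  # small-message latency (inverted so higher = better)
    "mpi_bw": 1.2,       # large-message bandwidth
    "flops": 1.1,        # dense floating-point compute
}

# Guessed factor weights for one application: how sensitive it is to
# each microbenchmark. These must sum to 1.
app_weights = {
    "stream_bw": 0.5,
    "mpi_latency": 0.1,
    "mpi_bw": 0.1,
    "flops": 0.3,
}

def predicted_speedup(scores, weights):
    """Predict relative application performance as a weighted sum
    of microbenchmark scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * scores[k] for k in weights)

print(round(predicted_speedup(micro_scores, app_weights), 2))  # 1.24
```

Whether a simple linear combination is the right model is itself part of the
question; it is just the most tractable starting point.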
Agreed. On one hand you need micro-benchmarks. OTOH, you need your users
to specify the sensitive points of their applications. First of
all, I suppose their applications are parallel, but are they BW-bound or
latency-bound? How much time do the applications spend on communication?
Are the apps capable of running in mixed mode (MPI combined with
multithreading), ...
Why don't you make a list of multiple-choice questions in the style
described above and ask your users to fill it in? This also solves the
'weighting factor' problem, because the users who respond to your questions
_care_ about the machine being suitable, while the others care less.
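One way to turn such a questionnaire into weights is simply to tally the
responses; the question and answer categories below are made up for
illustration.

```python
# Rough sketch: derive per-resource weights from multiple-choice answers.
# Each respondent picks the resource their application is most sensitive
# to; non-respondents contribute nothing, which matches the point above:
# the users who answer self-select as the ones who care.
from collections import Counter

responses = ["bandwidth", "latency", "bandwidth", "compute", "bandwidth"]

counts = Counter(responses)
total = sum(counts.values())
weights = {k: v / total for k, v in counts.items()}
print(weights)  # {'bandwidth': 0.6, 'latency': 0.2, 'compute': 0.2}
```

A real questionnaire would have several questions (communication fraction,
message sizes, mixed-mode capability, ...) and combine them per user before
aggregating, but the self-weighting effect is the same.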
_______________________________________________
Beowulf mailing list, [email protected]
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf