On Wed, Dec 27, 2006 at 09:52:02AM -0800, Martin Knoblauch wrote:
> For sizing purposes, doing benchmarks is the only way. For the purpose
> of Ganglia the sockets/cores/threads info is purely for inventory. And
> we are likely going to add the new information to our metrics.
>
> But - we still
In article <[EMAIL PROTECTED]> you wrote:
> once your program (and many others) have such a check, then the next
> step will be pressure on the kernel code to "fake" the old situation
> when there is a processor where it no longer
> holds. It's basically a road to madness :-(
I agree that for HPC si
>
> actually I wanted to write that "HT as implemented on XEONs did not
> help a lot for HPC workloads in the past"
btw this is exactly the problem I am trying to point out: ".. as
implemented in generation XYZ model ABC of processor DEF".
that's going to be really fragile and in fact won'
>
> this is a real interesting question. Ganglia is coming [originally]
> from the HPC side of computing. At least in the past HT as implemented
> on XEONs did help a lot. Running two CPU+memory-bandwidth intensive
> processes on the same physical CPU would at best result in a 50/50
> performance
--- Gleb Natapov <[EMAIL PROTECTED]> wrote:
> If I run two threads that are doing only calculations and very little
> or no IO at all on the same socket, will modern HT and dual core be the
> same (or close) performance wise?
actually I wanted to write that "HT as implemented on XEONs did not
help a lot for HPC workloads in the past"
--- Gleb Natapov <[EMAIL PROTECTED]> wrote:
> On Wed, Dec 27, 2006 at 04:13:00PM +0100, Arjan van de Ven wrote:
> > The original p4 HT to a large degree suffered from a too small
> > cache that now was shared. SMT in general isn't per se all that
> > different in performance than dual core, a
> If I run two threads that are doing only calculations and very little or no
> IO at all on the same socket, will modern HT and dual core be the same
> (or close) performance wise?
it depends on how cache/memory-bandwidth sensitive your calculation
is. If your calculation is memory-bandwidth s
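To make that distinction concrete, here is a rough illustration (my own sketch, not code from the thread): two micro-kernels whose scaling under HT should differ, because the cache-resident one contends mainly for the execution units HT siblings share, while the streaming one contends for the memory bandwidth that even two separate cores on one socket share.

```python
import array

def compute_bound(iters=100_000):
    # ~2 KB working set: stays cache-resident, so two simultaneous
    # copies on HT siblings compete mostly for the shared execution
    # units, not for memory.
    buf = array.array("d", range(256))
    total = 0.0
    for _ in range(iters):
        total += sum(buf)
    return total

def bandwidth_bound(size=4_000_000, passes=10):
    # ~32 MB working set: every pass streams from main memory, so two
    # copies compete for bus bandwidth whether they run on HT siblings
    # or on two real cores of the same socket.
    buf = array.array("d", range(size))
    total = 0.0
    for _ in range(passes):
        total += sum(buf)
    return total
```

Timing two simultaneous copies pinned (e.g. with `taskset`) to sibling logical CPUs versus two distinct cores should show the compute-bound pair scaling much better on real cores, while the bandwidth-bound pair degrades in both placements.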
On Wed, Dec 27, 2006 at 04:13:00PM +0100, Arjan van de Ven wrote:
> The original p4 HT to a large degree suffered from a too small cache
> that now was shared. SMT in general isn't per se all that different in
> performance than dual core, at least not on a fundamental level, it's
> all a matter of
>
> one piece of information that Ganglia collects for a node is the
> "number of CPUs", originally meaning "physical CPUs".
Ok I was afraid of that.
> With the
> introduction of HT and multi-core things are a bit more complex now. We
> have decided that HT siblings do not qualify as "real"
--- Arjan van de Ven <[EMAIL PROTECTED]> wrote:
> On Wed, 2006-12-27 at 06:16 -0800, Martin Knoblauch wrote:
> > Hi, (please CC on replies, thanks)
> >
> > for the ganglia project (http://ganglia.sourceforge.net/) we are
> > trying to find a heuristics to determine the number of physical CPU
>
On Dec 27 2006 06:16, Martin Knoblauch wrote:
>
> So far it seems that looking at the "physical id", "core id" and "cpu
> cores" of /proc/cpuinfo is the way to go.
Possibly, but it does not catch all cases.
$ grep '"physical id' /erk/kernel/linux-2.6.20-rc2/ -r
returns exactly three lines, for
On Wed, 2006-12-27 at 06:16 -0800, Martin Knoblauch wrote:
> Hi, (please CC on replies, thanks)
>
> for the ganglia project (http://ganglia.sourceforge.net/) we are
> trying to find a heuristics to determine the number of physical CPU
> "cores" as opposed to virtual processors added by enabling HT
Hi, (please CC on replies, thanks)
for the ganglia project (http://ganglia.sourceforge.net/) we are
trying to find a heuristics to determine the number of physical CPU
"cores" as opposed to virtual processors added by enabling HT. The
method should work on 2.4 and 2.6 kernels.
So far it seems that looking at the "physical id", "core id" and "cpu
cores" of /proc/cpuinfo is the way to go.
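A minimal sketch of that heuristic (my own; the function name and the fallback behaviour when the fields are absent are assumptions, not something stated in the thread). It counts distinct (physical id, core id) pairs, and degrades to counting `processor` blocks on kernels that lack those fields (e.g. 2.4, some architectures), where HT siblings are then indistinguishable from real cores:

```python
def count_physical_cores(cpuinfo_text):
    """Count physical cores from the contents of /proc/cpuinfo.

    Counts distinct (physical id, core id) pairs; falls back to the
    number of 'processor' blocks when those fields are missing.
    """
    blocks = [b for b in cpuinfo_text.split("\n\n") if b.strip()]
    cores = set()
    for block in blocks:
        fields = {}
        for line in block.splitlines():
            key, sep, value = line.partition(":")
            if sep:
                fields[key.strip()] = value.strip()
        if "physical id" in fields and "core id" in fields:
            cores.add((fields["physical id"], fields["core id"]))
    return len(cores) if cores else len(blocks)

# Hypothetical excerpt: one socket, one core, two HT siblings,
# so the expected answer is 1 physical core.
SAMPLE = """\
processor\t: 0
physical id\t: 0
core id\t: 0
cpu cores\t: 1

processor\t: 1
physical id\t: 0
core id\t: 0
cpu cores\t: 1
"""
```

On a live system you would call `count_physical_cores(open("/proc/cpuinfo").read())`; on SAMPLE it returns 1, and on a fields-less 2.4-style excerpt it simply returns the logical CPU count.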