Hyperthreading's benefit varies with the specific application and whether it has been optimized for hyperthreading.  As a quick workaround, you could change PBS's view so it only assigns 2 procs per node (a sketch follows below).  The kernel is supposed to be smart enough to fill both physical processors before falling back on hyperthreading, so you should be safe there.  However, I've never tested/verified this.
Sean is correct, though, in that the problem is rooted in the OS's detection of the extra CPUs.
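A rough sketch of that workaround, assuming the usual OpenPBS layout (the paths, init script name, and hostnames here are just placeholders for your install): pin np=2 in the server's nodes file and restart pbs_server so it rereads it.

    # $PBS_HOME/server_priv/nodes -- tell PBS each node has only 2 procs
    node01 np=2
    node02 np=2

    # restart so the server picks up the new counts
    /etc/rc.d/init.d/pbs_server restart

Maui should then never schedule more than 2 jobs on a node, regardless of the 4 logical CPUs the kernel reports.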

        Jeremy

At 10:26 AM 1/28/2003 -0800, Jenny Aquilino wrote:
Hi,

Could someone explain when the OS seeing double causes problems for Oscar?  More specifically, is hyperthreading beneficial in a clustered environment?  I'm not sure how scheduling works in Oscar, but it seems like you would constantly run into the problem of two running jobs being scheduled on the same physical CPU in a hyperthreaded environment.

Currently I have a 6-node, dual-Xeon cluster with hyperthreading enabled, so I am also seeing 4 CPUs per node instead of 2.  Typically only one run is done at a time; in this environment, could somebody recommend whether or not to use hyperthreading?  Thank you very much!

-Jenny  =)

Sean Dague wrote:

On Tue, Jan 28, 2003 at 10:09:47AM -0500, Mike Mettke wrote:
 

Hi,

We are using Oscar 1.4 on Redhat 7.3 quite happily. For the compute nodes we're using dual Xeon CPUs on westville motherboards. Now, those CPUs have hyperthreading, making ganglia report those nodes as having 4 CPUs.

Since we're limiting the number of jobs to 2 per node via PBS/maui, this means that the cluster load as displayed by ganglia can never be more than 50%. This has led to questions from management regarding effective resource usage. Is there any way to adjust ganglia so that those calculations are based on 2 CPUs per node?
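For reference, here's roughly what one of these nodes reports (output sketched from memory):

    $ grep -c ^processor /proc/cpuinfo
    4

Each physical Xeon shows up as two logical processors, and ganglia just repeats the count the kernel gives it.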
   

If you have hyperthreading enabled in Linux, the entire OS sees double when it comes to processors, and will schedule processes accordingly.  If you want to continue to limit jobs to 2 per node, I would turn off hyperthreading (there is a kernel boot option to do this I think... though I can't remember it).  Otherwise you may get the Linux scheduler putting both running processes on the same chip, which will be bad for your throughput.  So the issue is to change what the OS sees, not what ganglia sees.  Changing what ganglia sees is just changing a symptom, not the root cause.
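If memory serves, that parameter is "noht" on the i386 2.4 kernels, passed at boot rather than at runtime (treat that as unverified).  With LILO it would look something like this (the kernel image name is just an example):

    # /etc/lilo.conf -- add noht to the kernel's append line
    image=/boot/vmlinuz-2.4.18-3smp
            label=linux
            append="noht"

Rerun /sbin/lilo and reboot for it to take effect; disabling HT in the BIOS gets you the same result if you'd rather not touch the bootloader.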

        -Sean

 