I've done the CPU core pinning thing; it didn't help anything in my case.  I 
don't think core swapping is the problem, it's the inability to get CPU time 
at a stable, consistent rate with so many games being swapped in and out of 
the CPU.  I think fewer games with higher slot counts is the only way to 
maximize an 8-way Xeon setup.
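For anyone else who wants to try the pinning experiment themselves, here is a minimal sketch of that kind of pinning using util-linux's taskset.  The srcds command line at the end is a placeholder for your own server install, not something from my setup:

```shell
# Pin a command to a single core (core 0) and check its affinity.
taskset -c 0 sleep 2 &
pid=$!
taskset -cp "$pid"    # prints something like: pid 1234's current affinity list: 0
wait "$pid"

# Pinning a game server looks the same; this command line is a placeholder:
#   taskset -c 2 ./srcds_run -game cstrike +maxplayers 16
```

You can also re-pin an already-running process with `taskset -cp <core> <pid>`, which is handy if the servers are started by an init script you don't want to touch.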

I have a theory that FB RAM also makes the situation worse by adding latency 
to the overall system.  We see a DPC latency of 3000us on a server board vs 
30-50us on a non-server board.  I have set up a new server just for running 
500-1000FPS CS/CSS games.  It is DDR3 non-FB RAM, 1 CPU, and overclocked to 
the max, basically an enthusiast board setup vs a server setup.  So far in 
testing it seems to be able to run more high-speed servers than a 2x Xeon 
server board.  So we are looking to use the Xeon server board to run the 
lower-end game servers that are not cranked to the max on the tickrate.
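If you want to put a rough number on how consistently a box hands out CPU time, one crude approach is to measure sleep overshoot.  This is nowhere near as precise as a real DPC/latency tool, and the 1 ms interval is an arbitrary choice of mine, so treat the numbers as a relative comparison between boxes only:

```shell
# Crude scheduling-jitter probe: request a 1 ms sleep 50 times and report
# the worst overshoot in microseconds.  The fork overhead of date(1) inflates
# the absolute numbers, so only compare results between machines.
worst=0
i=0
while [ "$i" -lt 50 ]; do
  t0=$(date +%s%N)
  sleep 0.001
  t1=$(date +%s%N)
  over=$(( (t1 - t0) / 1000 - 1000 ))   # microseconds over the requested 1 ms
  [ "$over" -gt "$worst" ] && worst=$over
  i=$((i + 1))
done
echo "worst overshoot: ${worst}us"
```

Run it on the loaded server board and the enthusiast board back to back; a big gap in the worst-case number lines up with what a DPC latency checker would show.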

> No, it's not crazy.  Running more than one CPU means the OS and programs 
> have to keep track of which core they are on, maintain cache coherency, 
> maintain the memory mappings, etc.  If (when) the thread jumps to 
> another CPU, then the whole dataset has to get rearranged.  I would start 
> hard-assigning servers to their own core and see if that helps out.
_______________________________________________
To unsubscribe, edit your list preferences, or view the list archives, please 
visit:
http://list.valvesoftware.com/mailman/listinfo/hlds_linux
