Re: [hlds_linux] fps changes in the last patch
At 04:16 AM 6/28/2011, Henry Goffin wrote:

Hi all - Free to Play brought a huge influx of new users to Team Fortress. To help server counts scale up to match the demand, we are reworking the dedicated server for performance. We want to improve player responsiveness as well as reduce CPU usage so that hosts can run more servers per physical machine. Some of those changes addressing CPU usage went out last night. Server operators should see a big decrease in CPU load and can potentially run more instances per physical box now. However, a side effect that many of you have noticed is that server FPS has an effective cap of 500 instead of the previous 1000, or possibly even lower than 500 depending on your Linux kernel HZ setting. This should not have a noticeable impact on gameplay, as the tick rate is still locked (well, mostly locked) at 66 updates per second, and the frames being dropped are empty frames that do not actually run a server tick. We're going to address this further in another set of performance improvements. Sorry for the temporary confusion, but we wanted to get these CPU load reduction changes out quickly to help with the Free to Play user crush.

How about some native 64-bit binaries? Using shared objects clobbers the ebx register on 32-bit.

Longer term, we want to move away from FPS as a measure of performance and instead show actual load and responsiveness (jitter/latency) statistics. The difference between a tick and a frame is complicated, and fps_max sometimes affects performance in counter-intuitive ways. We would like to retire fps_max for servers and replace it with a more obvious server performance setting. We'll give you all a heads-up before we do so.

Please also add bounds checking to the usleep call to prevent fictitious values from inflating the server's FPS, causing idiots to sell super-high FPS that does nothing. I have some code if you want to demonstrate this behavior.
gary at summit-servers dot com | gary at DragonflyBSD dot org http://leaf.dragonflybsd.org/~gary ___ To unsubscribe, edit your list preferences, or view the list archives, please visit: http://list.valvesoftware.com/mailman/listinfo/hlds_linux
Re: [hlds_linux] Gross over usage of Syscalls.
At 01:32 AM 5/25/2011, Kyle Sanderson wrote:

While this has been discussed a number of times (the atrocious overuse of gettimeofday has been worked around), can something be done regarding the abuse of syscalls? I mean, something like this http://i.imgur.com/PdWTB.png isn't really that great. For instance, 1 out of every 4 futex syscalls fails due to timing out (shown in that picture). This is just expensive and downright silly. Not much can be done on our end without the source to SRCDS, which no doubt will not be released. The CPU usage that SRCDS manages to chug is borderline suicidal for the end result, not to mention the countless places in the engine where memory continues to leak like a sieve. This affects and hurts everyone. The 'serious' people who host servers cannot use Windows due to the lack of symbols.

Abuse of syscalls? No offense, but strace/ktrace/truss et al. only show syscalls, not the flow of code! They measure the number of syscalls and the CPU time spent in them. I could write a program that loops in main() and chews up 99% CPU, and no syscalls would be generated. Maybe you could post what is timing out? It's probably returning EAGAIN over and over:

strace -Ff -s 9 -p whatever
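To expand on the strace invocation above, here is a hedged sketch of how one might quantify the failing futex calls (`<pid>` is a placeholder for the srcds process id; the flag choices are mine, not from the thread — note that a futex that times out returns ETIMEDOUT, while contention returns EAGAIN):

```shell
# Summarize syscall counts, errors, and CPU time for a running process;
# stay attached for ~30 seconds, then hit Ctrl-C to print the table.
strace -c -f -p <pid>

# Watch only futex calls and filter for the ones that time out:
strace -f -e trace=futex -p <pid> 2>&1 | grep ETIMEDOUT | head
```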
Re: [hlds_linux] Questions about minimum CPU for 32 slot TF2 and taskset
At 03:06 AM 5/21/2011, Christoffer Pedersen wrote:

1. It's a good question. Many administrators optimize their kernels to get better performance, which I can recommend. I have an overclocked i7 running at 3.73 GHz, and it had no problems at all running a 64-slot server filled with bots. I would say that 3 GHz would do fine.

Define performance. I think what you are describing is not performance.

2. If you are running more servers than you have cores, I recommend using the load balancer. If you run the same number as, or fewer than, the number of cores, I would use taskset. Taskset is useful to prevent cacheline ping-pong by keeping context switches on the same core.
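A hedged sketch of the taskset usage described above (the game, map, ports, and core ids are illustrative placeholders, not values from the thread):

```shell
# Pin each srcds instance to its own core so the scheduler never
# migrates it between cores (avoids cacheline ping-pong).
taskset -c 0 ./srcds_run -game cstrike +map de_dust2 -port 27015 &
taskset -c 1 ./srcds_run -game cstrike +map de_dust2 -port 27016 &

# Inspect or change the affinity of an already-running process:
taskset -cp <pid>
```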
Re: [hlds_linux] Mandatory TF2 update coming
At 08:42 PM 4/18/2011, Brian Simon wrote: You are a moran.

Are you like 12 years old?

G. Monk Stanley
Re: [hlds_linux] Low srcds FPS
At 10:15 AM 3/30/2011, Claudio Beretta wrote:

Is this trolling or just a naive question? :) The server tickrate controls how many times per second the server updates the world model and sends data to players. A higher tickrate means higher precision in the simulation, lower ping, and more resilience to packet loss (a smaller time interval for the clients to interpolate over in case something goes wrong). Tick 100 servers run noticeably better than tick 66 servers and blow away tick 33 servers.

The server FPS controls how many times per second the server checks whether it should run a tick. Running at exactly fps == tickrate is great for Orange Box, but if you can't guarantee that, you should run at at least 2x the tickrate. (Running at 66 fps with a 66 tickrate means the system wakes srcds every 15.15 ms and computes a tick every time; running at 67 fps means the system wakes srcds every 14.92 ms and computes a tick most of the time, then spends the 67th frame (another 14.92 ms) doing nothing.)

The client tickrate should match the server tickrate, and client FPS should match or exceed the client tickrate. Client FPS is unrelated to server FPS, and may or may not be bound to the monitor refresh rate. This is my (simplified) understanding of how Orange Box works; I'm not claiming it is 100% exact, but at least it makes sense :P

Your understanding was copy-and-pasted from that idiot who runs FPS Meter. You cannot have a simulation rate at precisely 15.00 ms: 15.11 ms is still 15.00 due to rounding, and 14.95 is 15.00 because of rounding. Valve games run just like Quake games; fundamentally they operate in the same manner, even though the particulars differ. It would be nice if everyone would simply play the game at stock settings. Otherwise you'll have 10,000 servers all running differently, causing confusion for everyone and perhaps widely varying results.
Re: [hlds_linux] Low srcds FPS
At 06:25 PM 3/30/2011, Claudio Beretta wrote: Are you saying that the Source/Orange Box engine works the same as Quake games (Quake 1?), and that Valve measures time (in-game time, not Valve Time... that would always overflow) in milliseconds in an integer variable?

I am saying that Valve games are based on Quake code, and even looking at it in a debugger shows similarities in the code. I am not saying it works exactly like Quake 1, but the fundamentals are there, just different.
Re: [hlds_linux] Sense of ServerFPS?
At 08:30 AM 2/4/2011, Andre Müller wrote: Hi all, what is the point of the server FPS in the Orange Box engine (CS:S)? Are there any improvements when you raise the server FPS? At the moment there are many game server providers offering game servers with more than 20,000 FPS. All of them are using a lib hack to break the 1000 FPS limit.

I was one of the first people ever to write a 'lib' to alter usleep data to make it return more often, 'cranking' the FPS, so I know quite a bit about it. After a while I noticed the only thing the libs were doing was calling nanosleep more and more, eating up large amounts of CPU power. There is absolutely NO way a 60,000 FPS server does better than a 60 FPS server. None. Zilch. Nada. The only people who defend high FPS are the people who have clients (and those clients have an electrical fire in their heads).

The following doesn't make you kill people any better:
1.) Real-time kernels
2.) Booster libs
3.) HPET/TSC as a timer
4.) Running processes with SCHED_FIFO
5.) x86_64-based kernels
6.) Stable FPS
7.) Better ethernet card drivers

People who sell high-FPS servers with libs are RIPPING PEOPLE OFF, because the libs alter usleep to return more often. Here is what normally happens when the game runs a frame:

game: gettimeofday();   /* step time in the engine */
game: usleep(1000);
OS:   returns after 1 ms or a little more, due to the scheduler and timers
game: gettimeofday();   /* step time again; compute the usleep/gettimeofday delta and round it up for FPS */

Here's what a cheating lib does:

game: gettimeofday();   /* step time in the engine */
game: usleep(1000);     /* intercepted by the lib */
OS:   returns after far less than 1 ms
game: gettimeofday();   /* smaller delta, so the derived FPS is inflated */

You're shortening the sleep interval so usleep returns MORE often, reducing the delays.
This is cheating because the engine calls usleep(1000); and not usleep(10);
Re: [hlds_linux] Is today's TF2/DODS/CSS update a required server update?
At 09:20 AM 1/28/2011, Emil Larsson wrote: Since it requires a handshake, TCP is impossible to spoof (unlike UDP). It would make it a bit easier to block IPs, since a handshake will fail if a spoofed IP is used. Of course, most DoS problems in SRCDS come from bugs and the lack of packet caching/prioritization.

Errr... you can spoof most of an IP packet, just not a completed handshake. That's why SYN floods mutilate servers: their sheer PPS. Most ISPs don't implement BCP 38, so it's easy for packets with spoofed source IPs to leave their networks. The bottom line is that you cannot protect yourself against DDoS. The only thing you can do is hope you have more transit than the attackers.
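None of this stops a flood that exceeds your transit, but a couple of standard host-side mitigations at least raise the bar. A hedged sketch (the port and rate are illustrative, not from the thread; requires root):

```shell
# SYN cookies let the kernel complete TCP handshakes without keeping
# per-connection state, so spoofed SYNs cannot exhaust the backlog.
sysctl -w net.ipv4.tcp_syncookies=1

# Illustrative rate limit on the game port: drop sources sending more
# than 400 packets/sec (tune the number against your real traffic).
iptables -A INPUT -p udp --dport 27015 \
  -m hashlimit --hashlimit-above 400/sec --hashlimit-mode srcip \
  --hashlimit-name srcds -j DROP
```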
Re: [hlds_linux] HT on or off for HLDS 1.6?
At 03:06 PM 1/19/2011, C Szabo wrote: Hey, I am wondering if Hyper-Threading (HT) is good or bad for HLDS 1.6 servers. We are running Debian.

Dell PowerEdge R410
Chassis: 1U rack
Chipset: Intel 5500
CPU: Dual Intel Xeon X5650 (12M cache, 2.66 GHz, 6.40 GT/s Intel QPI)
Memory: 12 GB DDR3
Hard drives: 2x 250 GB SATA, 3.5-in, 7.2K RPM (RAID 1)
Network card: Broadcom NetXtreme II 5709 Dual Port 1GbE
Bandwidth: Gigabit

HT on old P4 chips was bad, but on newer chips it's fine.
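To see whether HT is enabled and which logical CPUs share a physical core (useful when deciding what to pin with taskset), something like this works on Linux (the sysfs path is standard; actual output varies by machine):

```shell
# "Thread(s) per core: 2" means Hyper-Threading is active.
lscpu | grep -i 'thread(s) per core'

# Logical CPUs sharing physical core 0, e.g. "0,12" on a dual 6-core box:
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
```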
[hlds_linux] Serverside FPS jitter
I guess the new sales pitch is that when a server has FPS jitter (from, say, 100 to 150, or 66 to 90), that is bad and causes all kinds of issues. Can Valve PLEASE PLEASE PLEASE remove FPS from rcon stats, or do something to prevent its behavior from being altered? Or lock it at 1:1 so it scales with the tickrate?
Re: [hlds_linux] Serverside FPS jitter
At 06:37 AM 11/15/2010, Björn Rohlén wrote: Instead of hiding the server_fps, it would be better to explain it in detail. -TheG

From: Alfred Reynolds alf...@valvesoftware.com
To: 'Gary Stanley' g...@velocity-servers.net
Date: Mon, 31 Aug 2009 16:48:29 -0700
Subject: RE: Negative usleep adding to FPS

The server FPS is simply cycles / time, where time is from gettimeofday(), with some bounds on the minimum usleep, so making the usleep actually less will crank up the server FPS (but not the simulation HZ, so the game isn't actually faster, for Source engine games).
Re: [hlds_linux] Serverside FPS jitter
At 01:25 PM 11/15/2010, John wrote: I agree that a better explanation from Valve would be good to squash some of the speculation about what FPS really means (the docs I've seen talk about tickrate, but not FPS). Maybe there's an official one out there and we just need to find it. Gary, I'm not sure you're right that seemingly small amounts of jitter never represent a problem. Imagine a scenario in which a server runs at 10 fps and a tickrate of 5, with this timeline: The realized FPS in this case would be 7, and the realized tickrate would be 4. This means the FPS didn't dip all that much and still exceeds the tickrate, and yet the client would have seen a (very noticeable, at this resolution) glitch in gameplay. Scale this up to higher FPS and tickrate values, and it's quite possible that a dip from 150 to 100, or from 90 to 66, could represent a problem. Does it always, and is it always noticeable? No, I wouldn't say that. But realized FPS is still the best measure of purely server-side performance that we currently have at our disposal. I would like to see a realized tickrate number in addition to, or instead of, FPS. Locking the FPS to the tickrate (as L4D/L4D2 servers do by default) also effectively gives us this, but presumably there is a benefit to having a decoupled, higher FPS, such as splitting some of the network processing into smaller chunks so that ticks take less time. (In the real world, what could cause a tick to take so long? Commonly, a misbehaving plugin or a long disk write. The latter can be caused by very heavy background disk access when the server is flushing out a log.)

-John

Page-fault latency wouldn't really cause huge delays in an application, unless you are running a real-time application and need to eliminate jitter completely from a write() to disk (which goes to the filesystem buffer cache until you call fsync(), IIRC on Linux). You're always going to have jitter from syscalls, and syscalls are exactly what is used to generate what 'FPS' says. (gettimeofday has microsecond precision, and with error and rounding you're going to see variance regardless; the timer is sensitive to its own environment, i.e. the temperature of the PLL/quartz, the motherboard, I/O load, the kernel scheduler, etc.) The point I am trying to make here is that, with all the info you provided above, it's still speculation. Network frames are driven by timers off of nanosleep, and gettimeofday is used to step time inside the engine. I know this because the engine is based off of Quake 3, and it does share parts of it (the network engine is just like it). I am not sure I agree with your statement that FPS is used to measure server-side performance; I thought it was people's latency to the server (lower latency means less prediction error).
Re: [hlds_linux] Serverside FPS jitter
At 03:17 PM 11/15/2010, John wrote: "I know this because the engine is based off of quake 3" - Half-Life predates the release of Quake 3. From what Valve has previously said, the original GoldSrc engine was based on an improved Quake and Quake 2 engine. Source and Orange Box have a significant number of changes from GoldSrc, but what I said applies to all of these. Actually, now that I think about it, the example I previously gave mainly applies to Source/OB. GoldSrc, Quake-based games, and most games in general have an FPS that is directly tied to the tickrate, IIRC. On those, FPS dips would also represent tickrate dips. -John

You're talking about sv_fps on the Quake engines. sv_fps was used as the heartbeat for certain operations; e.g., an sv_fps of 20 means 50 ms heartbeats, and increasing it caused the engine to screw up, because most animations and so on depend on the heartbeat being 50 ms instead of 33.3 ms (for sv_fps 30). I know all about this because I used to be on the OSP team back in the day, so I know the engine quite extensively.
Re: [hlds_linux] Serverside FPS jitter
At 02:48 PM 11/15/2010, John wrote:
>> Page fault latency wouldn't really cause huge delays at all from an application
>
> If you're referring to what I said about log writes, this doesn't relate to page faults. Log lines simply have to be written to disk, and when the OS determines that it shouldn't (or can't) cache these writes and return immediately, it becomes a blocking operation, leading to reduced server performance. I've run extensive tests on this and discussed the situation with Valve.

That is very debatable. I have a hard time believing that writing tiny files affects 'performance', and I emphasized that word because I have no idea what baseline performance is in the context of a game server. Writing anything to disk from cache takes a hit anyway, i.e. TLB hits/misses, etc. You're always going to have jitter from syscalls, and syscalls are exactly what is used to generate what 'FPS' says.

> Syscall latency is generally not enough to make a server drop from 150 to 100 FPS, as in your initial example. If it does, there's a serious OS-side performance issue.

Do you know where 'FPS' gets its number from? The point I am trying to make is that, even with all the info you provided above, it's still speculation.

> By asking that the FPS number be removed from stats output, you seem to be indicating that it is not a valid measure of performance. I don't believe that is the case.

Again, that is debatable. For too long, many people (myself included, at one point) have claimed that high FPS was great and did this and that. After reverse engineering, I have come to the conclusion that FPS only drives the engine's timing for things like snaps (I'm not sure what the term is in Valve games; "snaps" is Quake 3 terminology) and a few other things. All in all, I think people only complain about a server's FPS because they don't realize most things in a game are estimates, thanks to relative timing, interpolation/extrapolation, prediction, etc.

-M
Re: [hlds_linux] Serverside FPS jitter
At 07:22 PM 11/15/2010, John wrote:
>> I have a hard time believing that writing tiny files affects 'performance'
>
> Under the scenario I described, it occurs. Physical media can only handle a certain number of IOPS, and with heavy disk access forcing the write cache to fill and the OS to suspend further writes, every transaction has to wait extra time. If you're curious to know more, check out the documentation on vm.dirty_ratio for Linux (though I mostly see this happen on Windows servers).

You're also missing the design limitations of the actual drives. Assuming IDE/SATA, the disks do not support disconnected writes, which is a significant performance bottleneck when you are writing to the disk; only disconnected reads are supported. That means anything you write through the cache takes a performance hit (and possibly added latency, as you only get one outstanding write per drive; on a RAID array you get multiple writes, one per disk). SAS/SCSI drives support 128 concurrent writes (tagged command queue depth).

>> I have no idea what baseline performance is in the context of a game server.
>
> The baseline performance in that case would be no background disk access.

mlock()? A memory-backed filesystem that doesn't cause faults? Different drives? Sockets? null?

>> Writing anything to disk from cache takes a hit anyways, ie: TLB hits/misses, etc.
>
> I'm not talking about nanosecond-level differences when I talk about delays from disk writes, as TLB hits/misses would cause. I'm talking about multiple-millisecond delays. The typical SATA drive has on the order of ~10ms latency, and having to wait on a log write causes noticeable spikes/delays.

See above.

>> Do you know where the 'FPS' gets its number from?
>
> This was previously discussed.

Roughly (IIRC, it's been a long time):

    gettimeofday(&tv1, NULL);
    usleep(1000);
    gettimeofday(&tv2, NULL);
    usec = (tv2.tv_sec - tv1.tv_sec) * 1000000 + (tv2.tv_usec - tv1.tv_usec);
    fps  = 1000000.0 / usec;
Re: [hlds_linux] Serverside FPS jitter
At 12:32 AM 11/16/2010, John wrote:
>> You're also missing the design limitations of the actual drives. Assuming IDE/SATA, the disks do not support disconnected writes, which is a significant performance bottleneck when you are writing to the disk... SAS/SCSI drives have 128 concurrent writes (tagged command queue depth).
>
> I'm not sure what you mean by "also missing", since I have been spot-on about disk writes potentially causing performance problems, and what you're saying supports what I said before. This is something that I have studied extensively.

I am talking about the physical drive design, not the OS's inability to stop causing huge amounts of latency because of some kernel issue/bug/feature/design.

> You are misinformed about SATA drives. Many do support NCQ, which is the equivalent of TCQ on SAS/SCSI. The OS also maintains its cache and uses a scheduler to try to optimize writes, usually doing a decent job at maintaining a good rate of IOPS. Regardless of the NCQ/TCQ capability, the same performance problem would exist, given heavy enough disk access.

No. SATA drives don't seem to support disconnected writes; writing to disk on SATA is a latency killer. (Read through the SATA spec, it's there, and probably on Wikipedia too.) I didn't know NCQ allowed 32 commands to be stored and executed, I thought it was 8 or 16. I know how OSes work, by the way.

> My comment about log writes listed them as an example of something that can make a tick take longer than anticipated, along with plugins (and the game itself). This is a valid example, but even if it were not valid, the overall assertion stands.
>
>>>> I have no idea what baseline performance is in the context of a game server.
>>> The baseline performance in that case would be no background disk access.
>> mlock()? Memory backed filesystem that doesn't cause faults? different drives? sockets? null?
>
> I think there might be a misunderstanding here. My example was that disk write delays due to logging during periods of heavy disk writes are one factor that I have seen lead to a performance problem and, at the same time, cause FPS dips. The baseline performance case for that particular scenario is very simple and as I described. I was not suggesting that there are no other reasons for FPS dips, or proposing a baseline performance description for all scenarios. This is also a very small piece of what I said as a whole. -John

If logging causes you performance... problems, turn off logging? Stuff your logs into a socket? How about hacking in a fix yourself with reverse engineering and .so shims? I guess they could also have a separate thread, IPI-like, that is used only for disk writes.
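The dedicated-log-thread idea Gary gestures at can be sketched as below: the game thread enqueues lines under a short mutex hold, and a writer thread does the blocking disk I/O, so a slow disk can never stall a tick. This is my illustration, not anything from the SRCDS binary; queue sizes and names are invented:

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define QSIZE   256
#define LINELEN 512

static char queue[QSIZE][LINELEN];
static int head, tail, done;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

/* Called from the game thread: never blocks on disk; drops the line
 * if the queue is full rather than stalling the tick. Returns 1 if
 * the line was queued. */
int log_enqueue(const char *line)
{
    int ok = 0;
    pthread_mutex_lock(&lock);
    int next = (head + 1) % QSIZE;
    if (next != tail) {
        strncpy(queue[head], line, LINELEN - 1);
        queue[head][LINELEN - 1] = '\0';
        head = next;
        ok = 1;
        pthread_cond_signal(&nonempty);
    }
    pthread_mutex_unlock(&lock);
    return ok;
}

/* Writer thread body: the only place that can block on disk I/O.
 * Shut down by setting done=1 and broadcasting on nonempty. */
void *log_writer(void *arg)
{
    FILE *fp = (FILE *)arg;
    pthread_mutex_lock(&lock);
    while (!done || tail != head) {
        while (tail == head && !done)
            pthread_cond_wait(&nonempty, &lock);
        while (tail != head) {
            char line[LINELEN];
            strcpy(line, queue[tail]);
            tail = (tail + 1) % QSIZE;
            pthread_mutex_unlock(&lock);
            fputs(line, fp);           /* blocking write, off the game thread */
            pthread_mutex_lock(&lock);
        }
    }
    pthread_mutex_unlock(&lock);
    return NULL;
}
```

A real server would start `log_writer` once with pthread_create at startup; the trade-off, as with any bounded queue, is that lines are dropped instead of delaying the tick when the disk falls far enough behind.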
Re: [hlds_linux] TF2 server multi-threaded support
At 08:40 AM 9/3/2010, Daniel Vogel wrote:
> The old games were fast because the developers knew that they had to run on a 128 MHz processor, so they made them performant. Nowadays, with fancy 3.66 GHz machines, developers don't think about performance that much anymore. If they need a feature, they just put it in, which wasn't thinkable in the Quake days. I wish everyone still had those 33-66 MHz turbo processors so developers had to think harder about performance...

Old games were faster because they were smaller. Sorry, but this is a fact: small code almost always runs faster.
Re: [hlds_linux] newer xeon / i7 cpus: turbo mode enabled or disabled?
At 05:43 AM 9/2/2010, Nephyrin Zey wrote:
> Could everyone kindly stop spreading false information if you don't know what you're talking about? On some BIOSes, you need to have SpeedStep (which Turbo Boost is a part of) enabled to make use of Turbo Boost. You can enable SpeedStep in the BIOS, and the associated turbo features, without making use of underclocking, which is a kernel/userland-configured facility. In Linux, you'd simply make sure the cpufreq subsystem is loaded with the performance governor (always 100%) or, depending on your distro, handed off to the userland governor with the userland tools set to performance mode. Your CPU will never underclock itself. This is separate from turbo mode, which overclocks active cores. It can do this, basically, because the chip is designed to handle the heat from all cores at 100%. If some cores are not at 100%, the others can be slightly overclocked, as the excess heat won't overheat the chip. (It's slightly more complicated than this, but that's the general idea.) I see no reason to have it disabled, though if your system is running near 100% across the cores, I don't think it'll see much use (I could be wrong).

I think turbo mode may make the TSC drift more: the PLL is calibrated to the quartz crystal on the CPU, and if it runs hotter it may drift more over time. So people who use the TSC as their timecounter may see odd things happen (lots of NTP drift).

G. Monk Stanley
gary at summit-servers dot com | gary at DragonflyBSD dot org
http://leaf.dragonflybsd.org/~gary
Re: [hlds_linux] TF2 server multi-threaded support
At 08:16 PM 9/2/2010, f7 f0rkz wrote:
> Kyle, couldn't agree with you more. It's 2010 and CPU clock speeds aren't going to get any faster; the manufacturers are shipping lower clock speeds with more threaded cores. It's about time we get a better server suite if TF is going to keep bloating the way it is. -f0rkz

From a design point of view, I think it will be difficult to make the gameserver code fully threaded (dispatcher threads) without adding more complexity and expensive userland locking. I suspect inter-thread latency is the reason it hasn't been fully implemented. Personally, I find the older games far superior in terms of 'performance' because they are:

- Lightweight
- Small
- Easy to maintain

Look at Quake 3. It's over 10 years old, and the engine is simple and efficient, without a lot of expensive operations inside. You can run 20+ 64-player Quake 3 servers on modern hardware without using a large amount of CPU.
Re: [hlds_linux] Linux Distribution and Kernel
At 07:36 AM 8/31/2010, Alon Gubkin wrote:
> Okay guys, you convinced me - I will use the 2.6.35.4 kernel without any patches. Note that I changed *timer frequency* to 1000 HZ and disabled *Tickless System (Dynamic Ticks)*. In the original article (http://wiki.fragaholics.de/index.php/EN:Linux_Kernel_Optimization) it was 100 HZ timer frequency with the tickless system enabled.
> 1. Do I really need to mess with the default kernel configuration for a stable 1000fps kernel?
> 2. Should I use 100 HZ with no tickless system instead of 1000 HZ with the tickless system enabled?
> 3. What else would you suggest? (For example, enable x, disable y, or don't do z.)

Why do you need stable FPS? There is no evidence at all that stable FPS does anything. 100 HZ may actually be better for large servers: on newer kernels msleep() is tied to HZ rather than hrtimers, and msleep() is used by a large number of drivers and other things inside the kernel, so 100 HZ may increase throughput slightly.

The wiki you posted was written by a guy who has no idea how the engine works. He posts speculation without any technical proof (external monitoring) and assumes his results are better, as if FPS were tied into everything like ping calculation. It's no different from altering sv_fps on Quake/CoD games and watching your ping drop because of rounding errors, or fixed math that isn't a multiple or divisor of the default number. I would be very careful about believing everything you read about gameservers; only a handful of people have a clue about the internals, and most others just repeat nonsense that has been passed around over the years.

G. Monk Stanley
Re: [hlds_linux] Linux Distribution and Kernel
At 03:17 PM 8/30/2010, Alon Gubkin wrote:
> What Linux distribution and kernel would you suggest for running Source dedicated servers? Currently I use Ubuntu Server 10.04 x86 and 2.6.33.7-rt29. By the way, is there any reason to use Ubuntu Server 10.04 x64 instead of x86? As far as I know, srcds doesn't support 64-bit.

Running realtime kernels is a waste of time, in my opinion. RT kernels more or less trade throughput for latency, and there isn't any solid evidence that realtime kernels 'help' the game; they mostly just consume large amounts of CPU time. I am assuming you're running a realtime kernel for 'stable fps', but in reality there is no evidence that 'stable fps' helps.
Re: [hlds_linux] Anyway to handle many players?
At 10:35 AM 8/9/2010, Christoffer Pedersen wrote:
> Hello. I am trying to figure out how to get my server to handle the many players that visit my servers each day. I have a big 40-slot deathmatch server and a 30-slot RPG server. Before the CS:S update, the servers used to run just fine, but afterwards I'm getting huge lag spikes and FPS drops. I have been looking at the CPU usage and load, but nothing indicates that my server is overloaded. My load is even at 0.60 when the deathmatch server is full. I'm using the vanilla Ubuntu 9.10 x64 kernel, but I have used 2.6.32-ck2... Does anyone here know why this is happening and how I could resolve this issue?

You could try reducing HZ from 1000 to 100 to reduce overhead in certain drivers, etc. But, as others have stated, the new engine consumes a large amount of CPU.
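For reference, the HZ change is a build-time choice; on a 2.6.3x tree the relevant .config fragment looks roughly like this (exact option names can vary between kernel versions):

    # Timer frequency: exactly one of CONFIG_HZ_100/250/300/1000 is set
    CONFIG_HZ_100=y
    # CONFIG_HZ_250 is not set
    # CONFIG_HZ_1000 is not set
    CONFIG_HZ=100
    # Tickless (dynamic ticks), as discussed in the earlier kernel thread
    # CONFIG_NO_HZ is not set

After changing these, rebuild and reboot into the new kernel; `CONFIG_HZ` is not tunable at runtime.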
Re: [hlds_linux] High FPS?
At 06:33 PM 8/2/2010, Steven Hartland wrote:
> Hundreds of thousands would explain it being unable to go above 990, so that clears something up :) I'm not using a plugin, I'm using an OS override to correct the inter-frame sleep time, so it should have the most accurate view of the simulation possible, unless I'm missing something.

I'm assuming you're using one of those LD_PRELOAD hacks to alter what usleep() does.

> If it's using Plat_FloatTime, that could easily explain the issues, as it uses gettimeofday on Linux, which is not guaranteed to be monotonic - you could see the value decrease as NTP nudges the system clock, for example. Obviously, if this happened, it could cause all sorts of strange edge cases.

It does use it. I patched it to use clock_gettime and didn't really see any difference. It doesn't matter what you use; the APIs are only as good as the timecounters driving them.

-M
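The "OS override" being discussed is typically an LD_PRELOAD shim around usleep(). Below is a hedged sketch of what such a hack looks like - my reconstruction, not any actual booster's code; the cap value and names are invented:

```c
#define _GNU_SOURCE
#include <dlfcn.h>
#include <unistd.h>

/* Cap the requested sleep: an "fps booster" shim shortens the engine's
 * inter-frame sleep so more frames run per second. Pure helper so the
 * policy is easy to test separately from the interposition. */
unsigned int cap_usleep(unsigned int requested, unsigned int cap_us)
{
    return requested > cap_us ? cap_us : requested;
}

/* LD_PRELOAD interposer: resolves the real usleep() once via
 * dlsym(RTLD_NEXT, ...) and forwards the capped value to it. */
int usleep(useconds_t usec)
{
    static int (*real_usleep)(useconds_t);
    if (!real_usleep)
        real_usleep = (int (*)(useconds_t))dlsym(RTLD_NEXT, "usleep");
    return real_usleep(cap_usleep((unsigned int)usec, 100));
}
```

Something like `gcc -shared -fPIC -o shim.so shim.c -ldl` and `LD_PRELOAD=./shim.so ./srcds_run ...` injects it. This is also why the FPS number is so easy to inflate: the shim changes how long the server sleeps, not how much work it does per tick.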
Re: [hlds_linux] High FPS?
At 05:33 AM 7/29/2010, Steven Hartland wrote:
> Seems to be a common question; we've collated some information on this here: http://www.multiplaygameservers.com/help/source-engine-performance-guide/ If anyone has anything to add/remove, or general comments on how to improve this, please let us know. Regards, Steve

Without access to the engine's net_sleep code, it's impossible to tell what happens. Nobody can really know; there is no way that you or I could ever document how FPS works. Posting graphs doesn't tell anyone anything about the internals of the game. :) It would take a very long time to reverse engineer all the time functions Valve uses to step time, then figure out what they affect and what they don't. The original poster did make me laugh, though; the guy who runs that FPS meter site is a complete and total moron.
Re: [hlds_linux] CPU Usage increase with OB
At 12:40 AM 6/28/2010, Tony Paloma wrote: Well, I doubt Valve is calling getdents directly. It's probably getting called by some standard function. Place a breakpoint and do a trace a few times and see what function it is.

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------
 35.41    0.014428           5      2784           getdents

It's a wrapper that is probably doing something like #define blah(x,y,z) getdents(x,y,z), or another userland function that has to call it. Thundering herd problem?
[hlds_linux] CPU Usage increase with OB
I am seeing about 30% more usage compared to the old engine. I have a few pubs running and am now going to have to reduce the slot counts to compensate for the excessive usage. Profiling shows getdents() using the most syscall time; compared to the others, getdents is actually surpassing nanosleep() in syscall usage. FWIW, getdents is VERY expensive to call over and over. Anyone else seeing this as well?
Re: [hlds_linux] CPU Usage increase with OB
At 07:03 AM 6/27/2010, AnAkIn . wrote: You are not using that tickrate 100 plugin, are you? No. Why would I do that? That's like trying to get 40 mpg out of a 600 hp engine.
Re: [hlds_linux] CPU Usage increase with OB
At 05:27 PM 6/27/2010, Chris wrote: Possibly because your servers ran at 33 tick, and now they are at 66, which takes more CPU? On 27/06/2010 6:50 AM, Gary Stanley wrote: I am seeing about 30% more usage compared to the old engine. I have a few pubs running and am now going to have to reduce the slot counts to compensate for the excessive usage. Profiling shows getdents() using the most syscall time; getdents is actually surpassing nanosleep() in syscall usage. FWIW, getdents is VERY expensive to call over and over. Anyone else seeing this as well? No. I ran all my previous servers at 99 tickrate with large slot counts; now, at 66 with the same players, they consume about 35% more (baseline analysis). It's possible the engine is just more expensive. It's also possible this is just normal, considering it's a 'different' engine. G. Monk Stanley gary at summit-servers dot com | gary at DragonflyBSD dot org http://leaf.dragonflybsd.org/~gary There currently are 7 different ways to get time from a computer. All of them can't agree on how long a second is supposed to be -Me
Re: [hlds_linux] steam binary fails with Illegal instruction.
At 02:50 PM 12/25/2009, Patrick Palka wrote: chmodding it does nothing. The 'Illegal instruction' error is the kernel telling us that the CPU tried to execute an instruction it doesn't implement. I read somewhere that the CPU must support SSE for the binaries to work; unfortunately, my CPU does not. Is this speculation true, that the steam binary is compiled with SSE? strace output: SIGILLs are caused by bad opcodes. This usually happens when binary code tries to run something the processor doesn't understand. The Steam binaries do not have SSE instructions in them (from briefly looking at objdump -d steam | egrep xmm).
Re: [hlds_linux] Crashing L4D2 fork killing entire machine
At 04:21 AM 11/30/2009, Pavilus Zirovski wrote: It's happening about 3-4 times per month for me on 32bit Gentoo hosting TF2 servers only. I'm not using -debug in the startup line. I am using the default preempt kernel (not realtime), but I was using realtime priority on the srcds processes; I think maybe that is the problem. I've now removed the resched.sh script from crontab, because I thought maybe it happens at specific moments when CPU load is very high and the rescheduling script changes the realtime priority of all srcds processes (chrt -f -p 98 processid), but I'm not sure. Now I change realtime priority manually only. The server hasn't crashed for about 2 weeks now, but I think it might crash at any time. This has been frustrating for me as well, because I lose all remote access to the server: processes start to hang one by one and CPU usage maxes out (all taken by one srcds process) until I lose the connection entirely and have to call my hosting company to physically restart the machine. At first I thought it was some hardware failure, but now, seeing that others have the same problem, I'm not sure. I hope someone can give some more clues. TBH you don't need SCHED_FIFO; you only need SCHED_RR. Gameservers are not time-sensitive enough to justify running them at the same priority as, say, the network packet scheduler.
Re: [hlds_linux] tf2 stable fps
At 06:07 PM 12/1/2009, Nephyrin Zey wrote: I really don't want to get involved in this, but beware basically all the advice you're receiving (including mine). There's tons of BS regarding tickrate, RT kernels, fpsmeter.org, 'hit registration', and a bunch of other useless nonsense from people who are really just echoing what they read in that one old article about high FPS and what that one wiki says about Linux kernel configuration.

I agree. You do not need RT kernels at all; they are for real-time systems, and since gameservers estimate everything anyway, it's pointless to run an RT kernel under something that gives a best guess. The engine can only respond as fast as the user's ping to and from the server, socket latency included.

If I were you: set up a kernel with hi-res timers and a 300hz interrupt; enable HPET if your system doesn't have a stable TSC. Bind each srcds instance to a core. Set it to SCHED_FIFO (sudo chrt -f -p 98 pid). Set fps_max to 0. Use host_profile 1 to watch effective FPS. Fill the server. Play with other tweaks and compare them to your test case. Just because someone says that X or Y will make your server better, don't believe it until you see it. net_graph 4 is the best measure of how well your server is doing: if it's a solid graph with no gaps, getting 66/s updates, with low var, I would say it's near perfect. Others might whine. Decide for yourself. SCHED_RR gives the same latency as SCHED_FIFO in my tests; under load this will be different, though. (Send me a message and I'll send you some code.)

Miscellaneous nonsense:
- On a system with hi-res timers and TSC/HPET, sleep() will return independent of the interrupt timer, enabling 1000 FPS to be hit regardless of the system tickrate. In this case a 1000hz interrupt timer will not have any effect, possibly even a negative one. AFAIK select()/poll() on older kernels do not use hrtimers at all; only nanosleep()/usleep() do. You don't need 1000hz anyway, as it can cause cacheline ping-pongs and hurt NUMA performance.
- On Linux/TF2, the stats command calculates FPS in a very useless manner: a single slow frame will make it show '40fps', while the engine's own internal counter (what you see in the green banner in those Windows srcds windows), as well as host_profile, disagrees.
- fpsmeter.org uses the stats command.
- I've talked to and worked with many people and never seen a Linux TF2 server above 20 slots get 'stable' FPS, much less according to fpsmeter. I've seen many TF2 Linux servers that perform very well and lag-free.
- RT kernels chug CPU like no tomorrow for very little benefit versus FIFO scheduling and hi-res timers.
- If your var is 10ms and your updaterate is a stable 66, to hell with anyone whining about FPS (flamewar, lol). It's worth noting that Windows servers were originally tuned by Valve to run at 66 FPS; the 'booster' came later.
- My Linux TF2 servers are among the most stable in updaterate and var I've seen anywhere, yet many people have 'more stable FPS' than me. See the previous point.
- SourceTV is a massive, buggy resource hog.
- Anyone who brings up 'hit registration' probably doesn't know wtf they're talking about and read some old article about it with questionable logic. As long as you run a low-latency kernel with little interrupt activity, and without cpuspeed/ACPI processor throttling, you should be okay. Realtime kernels chug too much CPU because the scheduler has more overhead.
Re: [hlds_linux] L4D2 fork cpu lock up
At 05:01 PM 10/28/2009, gamead...@127001.org wrote: Had that earlier as well; no cause that I could determine. Run strace on the pid, or attach gdb and run 'bt full' on the thread to see what it's doing.

-Original Message- From: hlds_linux-boun...@list.valvesoftware.com [mailto:hlds_linux-boun...@list.valvesoftware.com] On Behalf Of Saint K. Sent: 28 October 2009 20:40 To: Half-Life dedicated Linux server mailing list Subject: [hlds_linux] L4D2 fork cpu lock up

I just had one of the L4D2 forks sitting in a 100% CPU load lock, obviously without any players active on it. It worries me a bit. OS: Linux Debian 64bit Kernel: 2.6.26-2-amd64 Cheers,
Re: [hlds_linux] L4D and vm.swappiness values?
At 12:44 AM 10/26/2009, Logan Rogers-Follis wrote: Has anyone ever messed around with tweaking the swappiness value for maximum performance on a Linux Left 4 Dead server? I'm running a CentOS 5.x server with 2 Xen guest domains for Left 4 Dead (the 2nd for the soon-to-be-released Left 4 Dead 2) and have been pondering whether tweaking swappiness would give me any better performance. As it is, I never use my swap except for some 32K that randomly shows up on my Cacti graphs; otherwise everything runs in memory. Memory gets used first; only then will it swap. Swapping to disk causes page-fault latency, which will kill any latency-sensitive application. I don't see how you can get better performance by moving things to disk.
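For reference, the knob in question is vm.swappiness (0-100, default 60): it biases the VM between reclaiming page cache and swapping out anonymous pages. A sketch for /etc/sysctl.conf; the value 10 is an illustrative assumption (keep the game heap resident, drop cache first), not a recommendation made in this thread.

```
# /etc/sysctl.conf -- bias the VM away from swapping anonymous pages
# (the srcds heap) and toward dropping page cache instead.
# Range 0-100, kernel default 60. Apply with `sysctl -p`.
vm.swappiness = 10
```

As the reply notes, if the box never touches swap anyway, this setting changes nothing.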
Re: [hlds_linux] Game Server Manager Needed - Hiring
At 06:42 PM 10/9/2009, Mike Zimmermann wrote: You must be the life of all the parties. -Mike Why don't you grow up? Every email you send is full of nonsense / flaming people. Can one of the VALVe guys remove this user from the mailing list? G. Monk Stanley gary at summit-servers dot com | gary at DragonflyBSD dot org | gary at cpanel dot net http://leaf.dragonflybsd.org/~gary There currently are 7 different ways to get time from a computer. All of them can't agree on how long a second is supposed to be -Me
Re: [hlds_linux] Very unstable FPS in hlds
FPS depends on gettimeofday precision and nanosleep latency. Stable FPS is mostly impossible; as the BUGS section of the relevant man page puts it: "Probably not accurate on many machines down to the microsecond. Count on precision only to -4 or maybe -5." If you're getting FPS jumps all over the place, it sounds like something is stalling the bus (SMI interrupts). You could try turning off ACPI, which will remove power management support but should stabilize things.
Re: [hlds_linux] 1000 FPS CentOS Servers?
At 09:05 PM 9/5/2009, Eric Greer wrote: This is all really awesome information, everyone, and I am very appreciative of all your time and knowledge... however... what does this mean to the guy who hasn't recompiled a Linux kernel before? Right now I'm setting fps_max on the command line to 500. Can I get more than 500 FPS without recompiling? What settings would that require? If I do have to recompile, where do I start learning for that? How dangerous is it? Thanks again everyone, Eric The stock CentOS kernels do not have hrtimers, so you aren't able to get low-latency sleeping. The 2.6.18 kernels are very good latency-wise, so unless you want to patch hrtimers in, you're going to have to build something newer. The newer kernels with CFS are pretty much crap, in my opinion, due to the scheduler changes and other things, but mostly you will need HPET support enabled in the BIOS along with hrtimers. As Laws stated earlier, the older kernels are generally better overall, and newer kernels with RT have too much overhead in codepaths the game does not use. -M
Re: [hlds_linux] 1000 FPS CentOS Servers?
At 11:04 PM 9/6/2009, Eric Greer wrote: Thanks everyone, especially Ulrich. You seem to be an expert with 1000 FPS servers. I made the changes short of modifying the kernel and ran some tests, but I am only getting 400 FPS at the highest right now. I'm scared to do anything involving the kernel because blowing it up would be a horrible thing for me. I've checked, and the server processes are running at -99 priority and fps_max is set to 1000. [r...@atom ~]# rpm -qa | grep kernel kernel-PAE-2.6.18-128.el5 kernel-PAE-2.6.18-128.7.1.el5 kernel-PAE-devel-2.6.18-128.el5 kernel-headers-2.6.18-128.7.1.el5 kernel-PAE-devel-2.6.18-128.7.1.el5 [r...@atom ~]# You're not going to be able to get ultra-accurate timers with stock CentOS kernels. You will need something with hrtimers merged in, and hardware that supports high-res timers.
Re: [hlds_linux] 1000 FPS CentOS Servers?
At 02:20 AM 9/5/2009, John wrote: You need high-res timers (HPET), on a newer kernel (2.6.24) -M I hated that kernel version. I'm running 2.6.26.5-rt8. And even with HPET enabled, you'd still want the higher kernel frequency, wouldn't you? I think Gary meant that you'd need a 2.6.24 or later kernel; HPET and hrtimers are a rather new addition to Linux. If you run something beyond 2.6.26 or so, make sure to also flag the server as a real-time process to remove the kernel's built-in SCHED_OTHER timer slack, which defaults to 50 usec and makes the FPS a bit less stable. This can be done with the chrt utility. With high-resolution timers enabled, your machine doesn't need to run at 1000hz, because processes will be woken up at the right times regardless. In fact, a lower HZ like 100 generally works out better; the lower number leads to less flipping of processes between CPUs, fewer unnecessary context switches into the kernel, etc. The only real advantage of a high HZ might be more accurate process accounting. In my testing, the -rt kernel patchset led to an overall reduction in performance, due to the additional context switching. YMMV. AFAIK the scheduler clock uses jiffies, so it's bound by what the clock interrupt is using; running at HZ=100 with SCHED_FIFO makes it perform worse when picking tasks to run than HZ=1000, because jiffies feed sched_clock.
Re: [hlds_linux] 1000 FPS CentOS Servers?
At 05:04 PM 9/5/2009, Gregg Hanpeter wrote:
> So what is the secret to achieving 2000 fps if I dump the real time patch? I've never tried this but am now thinking about it.

Lie to the engine about when sleeping wakeups occur.

-M
Re: [hlds_linux] 1000 FPS CentOS Servers?
At 09:50 AM 9/5/2009, Joseph Laws wrote:
> I've never cared for the RT patches... but the hi-res timers pre 2.6.24 are very solid.

The RT patches try to reduce the latency of a great multitude of things, but the only one that really counts is scheduler latency. The 2.6.22 kernels without CFS are better than the newer ones :) The best mainline kernels are the 2.4 series, because nanosleep will busy-wait.
Re: [hlds_linux] Updating a server using FTP
At 06:44 AM 9/4/2009, Nightbox wrote:
> Is it possible?

ftp site exec script-that-runs-steam.sh
Re: [hlds_linux] Updating a server using FTP
At 04:11 PM 9/4/2009, Dave Williams wrote:
> Nightbox, I really do have to say, from a professional and player's point of view, that if your GSP is restricting SSH on your nix box then they are nothing but morons and you really should switch. It's not difficult to restrict ssh access to certain folders on a per-user basis. If your GSP can't manage this then they really have no idea what they are doing. Of course, if you only have one server hosted on a nix machine, then you will need to ask them to add the -autoupdate switch, as has already been suggested. Even then, IMHO, you will need to refer to point 1. Specifically the moron part. I know I don't post on this list much, but that is because I just don't need to. The saying "too many cooks" springs to mind. Anyway, consider this with great thought, as it can only be your decision.

I've never given anyone ssh access, nor do I plan on it. You don't need ssh access to run a server correctly.

-M
Re: [hlds_linux] 1000 FPS CentOS Servers?
At 11:17 PM 9/4/2009, Eric Greer wrote:
> This goes to you too, Jason :-D So I've recently had some trouble getting CentOS to run 1000 FPS servers. I read online that people seem to think you need a custom kernel compile to make this happen. What kind of adjustments need to be made, and what settings must be in the config or command line to make this feat possible? Thanks everyone, Eric

You need high-res timers (HPET), on a newer kernel (2.6.24 or later).

-M
Re: [hlds_linux] Valve Source Engine Console Message Format String Vulnerability
At 03:36 PM 8/18/2009, Ronny Schedel wrote:
> It's not forbidden to mix different programming languages; I am sure they also use assembler code. The problem can also occur in C++, because they trust the client to send a valid string, but it can send anything.

They only use assembly code in startup, to get the CPU MHz via 2 calls to rdtsc.

> That's not right ;) The programming language is the problem in this case. Why should I write my code with functions that shouldn't be used with C++? C++ works with the stdlib, which means streams, not C stuff. So it's finally up to Valve to write programs which follow C++ standards, not C. As a programmer you can't trust your users. It's up to us to make the source safe, and if the project needs 2 weeks more, you should spend the time.
>
> Ronny Schedel schrieb:
>> The problem is not the programming language; the problem is that Valve trusts their game clients too much.
>
> Well, Valve should start coding C++ with streams ;) Who works with printfs today? I hope Valve will fix the whole source to prevent overflows. C++ is your friend, not old C stuff...
>
> Best regards, Stefan Popp
>
> Claudio Beretta schrieb:
>> Thanks, does anyone know if a workaround is available? BTW: aren't security researchers supposed to contact the developers before releasing 0-day exploits? This is the 2nd 0-day exploit from aluigi in a few weeks -.-
>>
>> On Tue, Aug 18, 2009 at 6:44 PM, Morgan Humes mrh9...@lanaddict.com wrote:
>>> A friend forwarded me this info regarding a vulnerability. I am unable to test this at the moment, but it does look like it is possible. Thought I would get this out to the community before others start using this to cause havoc.
>>>
>>> http://www.vupen.com/english/advisories/2009/2296
>>> http://aluigi.altervista.org/adv/sourcefs-adv.txt
>>>
>>> Morgan Humes
Re: [hlds_linux] OP4 cpu usage question
At 02:17 PM 7/22/2009, Ook wrote:
> So setmaster seems to have fixed the "no server on steam browser" problem. Within 15 seconds of my issuing the setmaster command, it appeared in the steam list - just like that, lickity-split fast. Tnx to all that helped out with that one; I've never needed to deal with setmaster on windows boxes and I would not have thought of doing so.
>
> My next issue/question is cpu usage. I'm running this on a Sempron 2400+ box: Slackware 12.2, 2.6.27.7 kernel, 1GB ram, Foxconn K7S Winfast motherboard, socket A, SIS chipset. I've used these boards for years and found them to be quite stable. hlds_amd with no players runs at about 35% cpu. Right now there are eight players, and cpu is about 50%. The increase is actually not that bad; I'm waiting for the player count to hit 12+ to see how it's really doing. I've run the windows version on this box, and with 12 players it might hit 10% cpu, so I'm observing quite a bit of difference. Is this typical? I haven't had a chance to recompile the kernel and select a simpler cpu scheduler; the default one is... CFQ, I think, and it has a bit of overhead associated with it. Not sure how much of an impact that might have on a slower box like this. Does anyone have any suggestions as to what I can do to optimize this box? It doesn't seem right that hlds sitting idle with no players would chug along at 35% cpu.

That's not a CPU scheduler, that's an I/O scheduler. Deadline is probably the best choice for everyone. There's no magic bullet to reduce CPU usage, as the binaries aren't really optimized for Linux.

-M
Re: [hlds_linux] OP4 cpu usage question
At 03:15 PM 7/22/2009, Ook wrote:
>> That's not a CPU scheduler, that's an I/O scheduler. Deadline is probably the best choice for everyone. There's no magic bullet to reduce CPU usage as the binaries aren't really optimized for linux. -M
>
> 35% cpu at idle? Compared to 1% on Winbloze? That goes beyond simply not being optimized for linux... I guess I'll wait and see how it performs with 12+ players. I would hate to have to go back to Windows simply because of this.

Run a profiler on it to see what it's doing. If you don't need 1000 Hz, remove it from the kernel. I'm sorry to say it, but there are others on this list who know for sure that the binaries just aren't as optimized as their windows counterparts.

-M
Re: [hlds_linux] Ubuntu 2.6.27-14-server, Segmentation fault
At 04:52 PM 6/13/2009, Mark Sebastian Johansen - Support-IT Network ApS wrote:
> Hello everyone. I've been playing around with steam on a VMware-hosted ubuntu server; I will take you through the process and tell you what I've done so far. The first thing I did was download the steam updater and extract the steam file. After that I started installing the games I needed on the server, one after another. At one point I stopped the update, as I had to leave, and logged off my ssh session. When I got home and started the update/install again, it began printing "Segmentation fault" every time I try to install or update any of the installations. Getting a fresh steam file does not help; I have already tried that.

Attach gdb to the steam binary:

gdb ./steam
run

See where it crashes.
Re: [hlds_linux] No such file or directory on amd64 - debian
At 04:01 AM 6/4/2009, vac - Wojtek Gajda wrote:
> Dnia 04-06-2009 o 09:49:14 Ferenc Kovacs i...@tyrael.hu napisał(a):
> 2009/6/3 vac - Wojtek Gajda v...@milowice.net:
>
> A few days ago everything (srcds_amd, srcds_run etc.) was working fine. I think I started having this problem after a small kernel upgrade, 2.6.26-1 -> 2.6.26-2 (but I'm not quite sure - I don't remember when I did the upgrade). Now srcds_run returns:
>
> ./srcds_run
> Auto detecting CPU
> Using AMD Optimised binary.
> Server will auto-restart if there is a crash.
> ./srcds_run: line 362: ./srcds_amd: No such file or directory
>
> Of course I have this file (ls -l srcds*):
>
> -rwxr-xr-x 1 gry lan 183860 2009-02-26 22:31 srcds_amd
> -rwxr-xr-- 1 gry lan 183828 2009-02-26 22:31 srcds_i486
> -rwxr-xr-- 1 gry lan 183828 2009-02-26 22:31 srcds_i686
> -rwxr-xr-x 1 gry lan 10174 2009-02-26 22:31 srcds_run
>
> So I tried to install everything again (in a test directory), first as a normal user (it used to work fine):
>
> g...@milowice2:~/test$ wget http://storefront.steampowered.com/download/hldsupdatetool.bin
> g...@milowice2:~/test$ chmod +x hldsupdatetool.bin
> g...@milowice2:~/test$ ls -l
> -rwxr-xr-x 1 gry lan 3513408 2005-09-02 04:27 hldsupdatetool.bin
> g...@milowice2:~/test$ ./hldsupdatetool.bin
> -bash: ./hldsupdatetool.bin: No such file or directory
> g...@milowice2:~/test$ linux32 ./hldsupdatetool.bin
> linux32: ./hldsupdatetool.bin: No such file or directory
>
> and as root (the same):
>
> milowice2:/home/lan/gry/test# ./hldsupdatetool.bin
> bash: ./hldsupdatetool.bin: No such file or directory
> milowice2:/home/lan/gry/test# linux32 ./hldsupdatetool.bin
> linux32: ./hldsupdatetool.bin: No such file or directory
>
> Some information about my system:
>
> milowice2:/home/lan/gry/test# uname -a
> Linux milowice2 2.6.26-2-amd64 #1 SMP Thu May 28 21:28:49 UTC 2009 x86_64 GNU/Linux
>
> g...@milowice2:~$ cat test.c
> #include <stdio.h>
> int main() {
>     printf("aaa\n");
>     return 0;
> }
> g...@milowice2:~$ gcc test.c
>
> And a normal binary runs fine in the same directory:
>
> g...@milowice2:~$ ./a.out
> aaa
> g...@milowice2:~$ ./hldsupdatetool.bin
> bash: ./hldsupdatetool.bin: No such file or directory
> g...@milowice2:~$ ls -l hldsupdatetool.bin
> -rwxr-xr-x 1 gry lan 3513408 Sep 2 2005 hldsupdatetool.bin

Try:

gcc -m32 test.c -o test

See if it runs then. If it does not, that means your glibc compat libraries are broken.
Re: [hlds_linux] No such file or directory on amd64 - debian
At 04:01 AM 6/4/2009, vac - Wojtek Gajda wrote: Dnia 04-06-2009 o 09:49:14 Ferenc Kovacs i...@tyrael.hu napisa³(a): 2009/6/3 vac - Wojtek Gajda v...@milowice.net: few days ago everything (srcds_amd, srcds_run etc.) was working fine. i think that i have started having this problem after small kernel upgrade 2.6.26-1 - 2.6.26-2 (but i'm not quite sure - dont remeber when i did the upgrade). now srcds_run returns: ./srcds_run Auto detecting CPU Using AMD Optimised binary. Server will auto-restart if there is a crash. ./srcds_run: line 362: ./srcds_amd: No such file or directory of course i have this file: ls -l srcds* : -rwxr-xr-x 1 gry lan 183860 2009-02-26 22:31 srcds_amd -rwxr-xr-- 1 gry lan 183828 2009-02-26 22:31 srcds_i486 -rwxr-xr-- 1 gry lan 183828 2009-02-26 22:31 srcds_i686 -rwxr-xr-x 1 gry lan 10174 2009-02-26 22:31 srcds_run so i tried to install everything again (in test directory) and so: with normal user (it used to work fine): g...@milowice2:~/test$ wget http://storefront.steampowered.com/download/hldsupdatetool.bin g...@milowice2:~/test$ chmod +x hldsupdatetool.bin g...@milowice2:~/test$ ls -l -rwxr-xr-x 1 gry lan 3513408 2005-09-02 04:27 hldsupdatetool.bin g...@milowice2:~/test$ ./hldsupdatetool.bin -bash: ./hldsupdatetool.bin: No such file or directory g...@milowice2:~/test$ linux32 ./hldsupdatetool.bin linux32: ./hldsupdatetool.bin: No such file or directory and as a root (the same): milowice2:/home/lan/gry/test# ./hldsupdatetool.bin bash: ./hldsupdatetool.bin: No such file or directory milowice2:/home/lan/gry/test# linux32 ./hldsupdatetool.bin linux32: ./hldsupdatetool.bin: No such file or directory some informations about my system: milowice2:/home/lan/gry/test# uname -a Linux milowice2 2.6.26-2-amd64 #1 SMP Thu May 28 21:28:49 UTC 2009 x86_64 GNU/Linux g...@milowice2:~$ cat test.c #include stdio.h int main(){ printf(aaa\n); return 0; } g...@milowice2:~$ gcc test.c and i'm running normal binary in the same directory: g...@milowice2:~$ ./a.out aaa 
g...@milowice2:~$ ./hldsupdatetool.bin bash: ./hldsupdatetool.bin: No such file or directory g...@milowice2:~$ ls -l hldsupdatetool.bin -rwxr-xr-x 1 gry lan 3513408 Sep 2 2005 hldsupdatetool.bin Try gcc -m32 test.c -o test See if it runs then. If it does not, that means your glibc compat libaries are broken. ___ To unsubscribe, edit your list preferences, or view the list archives, please visit: http://list.valvesoftware.com/mailman/listinfo/hlds_linux
Re: [hlds_linux] No such file or directory on amd64 - debian
Your last response was mutilated. Anyway, you're going to have to make sure there are glibc libraries installed for x86/i386 binaries.
Re: [hlds_linux] Improving ping
At 09:08 PM 6/3/2009, Oliver Salzburg wrote:
> I run a box with 4 L4D servers on it. The average ping is around 30 and I am wondering if I could further improve it with srcds settings. I never experience any lag when playing on them, but I figured the ping value can never be low enough. I've read a lot about pingboosts and tickrates on this list, so I was wondering what the general recommendation would be to improve the ping. Thanks in advance.

The speed of light in a perfect vacuum is about 186,000 miles per second. Due to fiber's refractive index of 1.49, the speed of light in normal fiber is about 125,000 miles per second. At that speed, every 60 miles of fiber path adds about 1 millisecond of round-trip latency. (And that is under the most ideal conditions; network queuing delays and other things slow it down further.)
Re: [hlds_linux] clock drift causing lag spikes/warping?
At 10:05 PM 5/19/2009, bob dolet wrote:
> Thanks for the reply gary, i appreciate it. I did state that i am not running ntp; i was in the past but figured it was causing issues, so it has not been used for quite some time. Also, i am not currently running a tickless kernel, but i have experienced this issue on tickless and ticked kernels. I just don't understand why a system reboot seems to resolve the issue for a day or two, considering there is nothing substantial running on the machine outside of ssh, screen and game servers.

Here's what could be causing your spikes:

- TSC sync issues (newer Linux attempts to keep the TSCs' timestamps synced up)
- CPU Speed / CPU Idle in the kernel
- Speedstep in the BIOS
- Other types of power management
- CPU errata
- Ethernet cable errors (CRCs)
- Services running like irqbalance
- SMI interrupts firing off and consuming the bus for a few hundred us or ms
- ACPI Processor enabled
- Tickless kernel
- Dynamic ticks
- Buggy kernel

If it doesn't happen on a stock kernel, and happens on a newer one, perhaps it's a bug in the kernel. You should always turn off the power management options, as they can cause problems with things of this nature.