At 01:25 PM 11/15/2010, John wrote:
I agree that a better explanation from Valve would help squash some of the speculation about what FPS really means (the docs I've seen talk about tickrate, but not FPS). Maybe there's an official one out there and we just need to find it.

Gary, I'm not sure that you're right about seemingly small amounts of jitter never representing a problem. Imagine a scenario in which a server runs at 10 FPS with a tickrate of 5, on this timeline:

The realized FPS in this case would be 7, and the realized tickrate would be 4. So the FPS didn't dip all that much, and still exceeds the tickrate, and yet the client would have seen a (very noticeable, at this resolution) glitch in gameplay.
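To make that concrete, here is a tiny simulation. The frame times in it are invented to match the realized figures above, and the "one tick per frame boundary, next tick due a fixed interval after the previous one ran" scheduling model is my simplification, not the engine's actual scheduler. It shows how a single 400 ms stall turns a nominal 10 FPS / 5 tick second into 7 realized frames and 4 realized ticks, with a half-second gap between two of those ticks:

#include <stdio.h>

int main(void)
{
    /* Hypothetical frame durations in ms: a nominal 10 FPS server (100 ms
     * per frame) where one frame stalls for 400 ms.  These numbers are
     * invented for illustration, not taken from a real server. */
    const int frame_ms[] = { 100, 100, 100, 400, 100, 100, 100 };
    const int nframes = (int)(sizeof(frame_ms) / sizeof(frame_ms[0]));
    const int tick_interval_ms = 200;   /* tickrate 5 */

    /* Assumed model (a simplification): a tick can only run at a frame
     * boundary, and the next tick is due a fixed interval after the
     * previous one actually ran. */
    int now = 0, next_tick = 0, ticks = 0, last_tick = 0, worst_gap = 0;

    for (int i = 0; i < nframes; i++) {
        if (now >= next_tick) {
            int gap = now - last_tick;
            if (gap > worst_gap)
                worst_gap = gap;
            last_tick = now;
            ticks++;
            next_tick = now + tick_interval_ms;
        }
        now += frame_ms[i];             /* this frame takes frame_ms[i] to finish */
    }

    printf("realized FPS over %d ms: %d\n", now, nframes * 1000 / now);
    printf("ticks executed: %d (nominal 5), worst gap between ticks: %d ms\n",
           ticks, worst_gap);
    return 0;
}

Running that prints a realized FPS of 7, 4 ticks, and a worst tick-to-tick gap of 500 ms against a nominal 200 ms, which is the glitch the client would feel.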

Scale this up to higher FPS and tickrate values, and it's quite possible that a dip from 150 to 100, or from 90 to 66, could represent a problem. Does it always, and is it always noticeable? No, I wouldn't say that. But realized FPS is still the best measure of purely server-side performance that we currently have at our disposal.

I would like to see a realized tickrate number in addition to, or instead of, FPS. Locking the FPS to the tickrate (as L4D/L4D2 servers do by default) effectively gives us this as well, but presumably there is a benefit to having a decoupled, higher FPS, such as splitting up some of the network processing work into smaller chunks so that the ticks themselves take less time.
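Roughly the shape of loop I have in mind (a sketch only; the helper names are made up and this is not Valve's actual main loop):

#include <stdbool.h>

/* Placeholder helpers, stubbed out so the sketch compiles; they are not real
 * engine functions. */
static void process_pending_network_packets(void) { }
static void run_simulation_tick(void)             { }
static void send_snapshots_to_clients(void)       { }
static void sleep_until_next_frame(void)          { }
static bool server_running(void)                  { return false; }

/* The idea: network input is drained in small pieces every frame, so by the
 * time the simulation tick runs there is less queued work left and the tick
 * itself finishes sooner. */
static void server_loop(void)
{
    const int frames_per_tick = 2;      /* e.g. ~132 FPS against a 66 tickrate */
    int frame = 0;

    while (server_running()) {
        process_pending_network_packets();   /* every frame, in small chunks */

        if (++frame >= frames_per_tick) {    /* simulate only every Nth frame */
            run_simulation_tick();
            send_snapshots_to_clients();
            frame = 0;
        }
        sleep_until_next_frame();
    }
}

int main(void)
{
    server_loop();
    return 0;
}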

(In the real world, what could cause a tick to take so long? Commonly, a misbehaved plugin or a long disk write. The latter can be caused by very heavy background disk access while the server is flushing out a log.)
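A crude way to see that from a plugin or a test program is just to time the flush. This is a sketch only; the file name and the 10 ms threshold are arbitrary, and as the comment notes, a buffered write usually only stalls when the box is already thrashing:

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    /* Hypothetical log file name; the 10 ms threshold is arbitrary. */
    FILE *log = fopen("server.log", "a");
    if (!log)
        return 1;

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);

    fprintf(log, "tick complete\n");
    /* A buffered write normally just lands in the page cache; it only stalls
     * badly when the kernel is already throttling writeback under heavy
     * background I/O. */
    fflush(log);

    gettimeofday(&t1, NULL);
    double ms = (t1.tv_sec - t0.tv_sec) * 1000.0
              + (t1.tv_usec - t0.tv_usec) / 1000.0;
    if (ms > 10.0)
        fprintf(stderr, "log flush took %.2f ms, enough to eat a whole frame\n", ms);
    else
        printf("log flush took %.2f ms\n", ms);

    fclose(log);
    return 0;
}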

-John

Page fault latency wouldn't really cause huge delays for an application, unless you're running a real-time application and need to get rid of jitter completely when doing a write() to disk (which goes straight to the filesystem buffer cache until you call fsync(), IIRC on Linux). You're always going to have jitter from syscalls, and syscalls are exactly what is used to generate that 'FPS' number. (gettimeofday() actually only has microsecond precision; a nanosecond-resolution source like clock_gettime() shows even more variance from error and rounding, because it's more sensitive to its own environment, ie: temperature of the PLL/quartz, the motherboard, I/O load, the kernel scheduler, etc.)
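You can see that syscall jitter for yourself with something like this (just a sketch: it asks for exactly 1 ms of sleep a thousand times and reports how uneven the gettimeofday() deltas actually come back):

#include <stdio.h>
#include <sys/time.h>
#include <time.h>

static long usec_delta(const struct timeval *a, const struct timeval *b)
{
    return (b->tv_sec - a->tv_sec) * 1000000L + (b->tv_usec - a->tv_usec);
}

int main(void)
{
    struct timeval prev, cur;
    long min = 1000000L, max = 0;

    gettimeofday(&prev, NULL);
    for (int i = 0; i < 1000; i++) {
        /* Ask for exactly 1 ms; the wakeup is never exactly 1 ms later. */
        struct timespec req = { 0, 1000000L };
        nanosleep(&req, NULL);

        gettimeofday(&cur, NULL);
        long d = usec_delta(&prev, &cur);
        if (d < min) min = d;
        if (d > max) max = d;
        prev = cur;
    }

    printf("asked for 1000 us per iteration, observed min %ld us, max %ld us\n",
           min, max);
    return 0;
}

That noise ends up in any FPS figure derived from those reads.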

The point I am trying to make here is that, with all the info you provided above, it's still speculation. Network frames are driven by timers built on nanosleep(), and gettimeofday() is used to step time inside the engine. I know this because the engine is based on Quake 3 and still shares parts of it (the network engine is just like it). I am not sure I agree with your statement that FPS is used to measure server-side performance; I thought what mattered was people's latency to the server (lower latency means less prediction error).
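Roughly the Quake 3 style pattern I mean: nanosleep() paces the loop, gettimeofday() steps engine time, and any 'FPS' figure is just derived from those same deltas. This is a sketch, not the actual Source code, and run_one_server_frame() is a placeholder:

#include <stdio.h>
#include <sys/time.h>
#include <time.h>

/* Placeholder for the real per-frame work; not an actual engine function. */
static void run_one_server_frame(int frame_msec)
{
    (void)frame_msec;
}

static long long now_msec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (long long)tv.tv_sec * 1000LL + tv.tv_usec / 1000;
}

int main(void)
{
    const int target_msec = 10;     /* ~100 FPS target, arbitrary */
    long long last = now_msec();

    for (int i = 0; i < 20; i++) {
        long long now = now_msec();
        int frame_msec = (int)(now - last);

        if (frame_msec < target_msec) {
            /* Not time for a frame yet; nanosleep() away the remainder. */
            struct timespec req = { 0, (long)(target_msec - frame_msec) * 1000000L };
            nanosleep(&req, NULL);
            continue;
        }

        run_one_server_frame(frame_msec);   /* engine time advances by frame_msec */
        last = now;

        /* The 'FPS' readout comes from these same gettimeofday() deltas,
         * so it carries all of their jitter with it. */
        printf("frame took %d ms, instantaneous 'fps' %d\n",
               frame_msec, 1000 / frame_msec);
    }
    return 0;
}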
