On 6/9/05, Simon Garner <[EMAIL PROTECTED]> wrote:
> On 8/06/2005 1:38 p.m., James Tucker wrote:
> >
> > 1. I wanted raw output to try to provide some limited proof that I am
> > simply providing the output of the game (some people still seem to
> > think I'm mad).
> > 2. One of the things that has had me confused for some time now is why
> > the default configurations show so much choke, and why cl_smooth feels
> > so utterly horrible to play with set to 1.
>
> I don't see any choke on net_graph, and I had a play with net_channels
> (that one's new to me!) last night and my choke remained at 0.00 most of
> the time, occasionally rising to 0.10 or so. As long as it's <1.00, I
> really don't think that's anything to worry about.

During my extensive empirical testing, though, the apparent 'hit
registration' (I know this is often a very misused term) seemed to be
better when my outgoing choke was minimised, suggesting to me that this
choke may cause some data loss or data misalignment. It is for this
simple reason that I do not run cl_cmdrate 100, as that generates more
choke on the outgoing channel (probably an FPS-related restriction,
although all my attempts to prove this have failed).

> Totally agree about cl_smooth - this cvar seems to be the wrong way
> around. I wonder if it's a bug? With cl_smooth 1, prediction errors
> result in a jerky display, while with it off they appear either to be
> smoothed or to not occur (hard to tell which without setting up an
> sv_cheats 1 server to use cl_showerrors).

As a small aside, I have noticed the harmful effects of cl_smooth 1
are far less pronounced on more powerful computers.

> > 3. net_channels seems to disagree with other things, or is simply
> > presenting different information or information in a different format
> > or level of accuracy.
>
> Well it clearly shows the choke and flow values with more precision. The
> numbers seemed to match what I saw on net_graph though, so I don't see
> any issue with their accuracy.
>
> I don't know what the latency value in net_channels is supposed to mean
> though, as it appears to always show 0.1.

I wonder if certain calculations operate over a range, and in this
case the range is '100ms or less'. I'd be interested to see what value
shows up on a 56k link or an overseas link; I will try a US server
later today and investigate this value.

> > Now, a gentle note - I have experimented extensively (many-hour
> > session tests, as the only good empirical test is an extensive one)
> > with various different client settings, and in my opinion it comes down
> > to minimising that choke value in net_channels as a priority (normally
> > by raising from the defaults or lowering from the commonly (and
> > unjustifiably) praised 101 values). Clearly, if one takes the
> > engineer's approach, minimising choke is the correct thing to do, but
> > the scale appears completely non-linear, so you end up re-enacting a
> > GA anyway. Whilst minimising this choke, increase the values of
> > cl_cmdrate and cl_updaterate to give the highest possible "packets
> > in/out:" value (which maximises at tickrate+1, funnily enough).
> >
>
> I run at 100/100 on a tickrate 100 server with no choke and 70-80
> packets/sec in and out.
>
>
> > Now my request for an explanation received the following response: "we
> > generally pack 2 user commands into a single packet". I do not really
> > know what this means. I understand what a user command is, and I
> > know what a packet might be (I know what a network packet is, but I am
> > doubtful that all mentioned packets are network packets).
> >
>
> I think what that means is that the client would normally have a higher
> fps than the cmdrate, and could generate more than one user command
> (client input) in the interval between each command packet. So multiple
> commands would be queued and sent together in one packet.

This principle is understandable, but does not account for the
difference between the packetrate and the cmdrate/updaterate.
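
For what it's worth, here is a toy sketch (my own, in Python, not
anything taken from the engine) of the relationship that explanation
implies; the numbers are just illustrative:

    # Toy model (my assumption, not engine code): the client generates one
    # user command per rendered frame, but only sends a command packet every
    # 1/cl_cmdrate seconds, so several commands can ride in a single packet.

    def outgoing_packet_rate(fps, cmdrate):
        """Packets actually sent per second: cannot exceed the frame rate."""
        return min(fps, cmdrate)

    def commands_per_packet(fps, cmdrate):
        """Average user commands packed into each outgoing packet."""
        return max(1.0, fps / outgoing_packet_rate(fps, cmdrate))

    # A 150 fps client with cl_cmdrate 60 would send ~60 packets/s, each
    # carrying ~2.5 user commands on average.
    print(outgoing_packet_rate(150, 60), commands_per_packet(150, 60))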

> > Is it possible to predict (with some knowledge of Source) that the
> > system will perform more properly with the values:
> > cl_cmdrate 60
> > cl_updaterate 100
> > Are these even optimal values? They do, at least locally, minimise
> > choke and maximise the packet rate, and the same technique is working
> > very well for everyone I suggest it to.
>
> In my opinion it's just a question of your client bandwidth (assuming
> the server is not a limiting factor). If you're on DSL or better you
> should be able to use maximum rates with no adverse effect.

Indeed, but alas, as stated above, minimising choke seems to give the
best game feel, and this occurs at cmdrate 60. Again, as stated above,
this may be related to a local FPS limitation, except that cmdrate 60
also minimises choke on a near top-spec machine that I have access to.
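
For what it's worth, the manual procedure I keep describing boils down
to something like this (a rough Python sketch; read_choke and
read_packet_rate are hypothetical stand-ins for playing a long session
at the given settings and noting net_channels / net_graph values by
eye, not real console APIs):

    # Hypothetical sketch of the manual tuning loop described above.

    def tune(cmdrates, updaterates, read_choke, read_packet_rate):
        best_score, best_settings = None, None
        for cmd in cmdrates:
            for upd in updaterates:
                choke = read_choke(cmd, upd)      # priority 1: minimise outgoing choke
                pps = read_packet_rate(cmd, upd)  # priority 2: maximise "packets in/out"
                score = (choke, -pps)
                if best_score is None or score < best_score:
                    best_score, best_settings = score, (cmd, upd)
        return best_settings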

> There could be cases where 'bandwidth' is a false measure, however. Be
> aware that most of the game's packets are very small, much smaller than
> your link MTU size probably is. And the 'bandwidth' of a given link only
> applies when transferring packets of the MTU size; 100Mbps ethernet is
> really 8333 packets/sec ethernet. Depending on how the
> bandwidth of the link is limited, a 256kbps link could by the same
> calculation have a throughput of as little as 21 packets/sec.

I have tested this; I can manage 200 packets per second. In general I
am 6 to 10 hops away from most servers (one of the hops sits next to
me). I can also verify that most of the link latency comes from the
first mile and the server itself. The first mile, being ADSL/cable on
my testing platforms, generally adds about 20-30ms of latency, with the
rest of the hops taking under 10ms to reach the target (often less than
a few). The routers in front of the game servers often add a couple of
ms as well.
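
Simon's packets-per-second arithmetic checks out if you assume a
1500-byte MTU, which is what his figures imply; roughly:

    # Packets per second a link can carry if every packet were MTU-sized.
    # The 1500-byte MTU is an assumption, but it is the one implied by the
    # figures quoted above.

    def max_packets_per_second(link_bps, mtu_bytes=1500):
        return link_bps / (mtu_bytes * 8)

    print(max_packets_per_second(100000000))  # 100 Mbps ethernet -> ~8333 packets/s
    print(max_packets_per_second(256000))     # 256 kbps link -> ~21 packets/s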

> > I mean no offence but quite often net_graph 3 is useless to me, as it
> > does not show the choke values at the level of granularity apparent in
> > net_channels, and the packetrate seems to be the rate of command
> > packets and updatepackets, not of "network" or i/o packets as would
> > seem to be the case coming out of net_channels.
>
> The packet rates on net_graph and net_channels are identical for me. And
> I think you're getting too worried about choke values of <1%. I'm pretty
> sure all the packets we are talking about are network packets.

If all the packets are network packets, and choke is present, how is it
measured? Does it mean that certain portions of data are not reaching
the target on time? If so, then empirical testing would agree with
theory: no matter how small the choke, minimising it is always a good
idea.

> > P.S. This is not server or tickrate specific, so please don't start that.
>
> Well, the maximum packet rates you'll achieve on a server depends very
> much on the server's sv_maxupdaterate, tickrate and running FPS. And
> your latency, choke and loss depend very much on the quality of your
> connection to the server. So unless you are testing on a listenserver,
> these are important variables.

Yes, indeed, the packet rates are limited by the tickrate, but this does
not mean that any principles should change as the tickrate is altered.
Unless the algorithms change, the output should be definable by the
same function for different tickrates. I never suffer loss (that I
notice), and I don't play regularly on listen servers. I have tested
all of the above on tickrate 100, tickrate 66 and tickrate 33 servers,
and the effects appear (locally) to be the same, suggesting that the
reasons lie more in FPS and network configuration than in tickrate;
however, that explanation is weakened by its replicability on a faster
machine on a different internet connection medium (cable).
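
To put Simon's point about the server-side caps another way (my own
summary, sketched in Python):

    # The update rate the client actually sees is capped by every link in
    # the chain: what the client asks for, what the server allows, the
    # tickrate, and the server's real FPS.

    def effective_update_rate(cl_updaterate, sv_maxupdaterate, tickrate, server_fps):
        return min(cl_updaterate, sv_maxupdaterate, tickrate, server_fps)

    # Asking for 100 updates/s from a tickrate-66 server still yields at most ~66/s.
    print(effective_update_rate(100, 100, 66, 500))  # -> 66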
