ENet has had a packet throttling mechanism since the beginning that allowed the number of unreliable packets sent over the link to scale up or down with a percentage representing the current quality of the connection. Only that percentage of the queued unreliable packets would get sent through; the rest would actually be discarded. Since the packets are unreliable, it is perfectly legal to drop them, and advantageous to do so when it helps reduce network congestion. This percentage floated up or down depending on whether reliable packets or periodic pings (which are just null reliable packets) took an above- or below-average amount of time to get acknowledged. Keep that in mind.
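
To make that concrete, here is a rough sketch of the kind of feedback loop the throttle uses. This is not the actual ENet source; the names and constants are illustrative, with the throttle expressed as a value out of a fixed scale rather than a literal percentage:

#define THROTTLE_SCALE        32   /* throttle ranges over [0, 32] */
#define THROTTLE_ACCELERATION 2
#define THROTTLE_DECELERATION 2

static unsigned int throttle = THROTTLE_SCALE;   /* start fully open */

/* Called when an acknowledgement arrives: open the throttle if the round
   trip time beat the running mean, close it down if it was worse. */
static void adjust_throttle (unsigned int rtt, unsigned int mean_rtt)
{
    if (rtt < mean_rtt)
    {
        throttle += THROTTLE_ACCELERATION;
        if (throttle > THROTTLE_SCALE)
            throttle = THROTTLE_SCALE;
    }
    else if (rtt > mean_rtt)
    {
        if (throttle > THROTTLE_DECELERATION)
            throttle -= THROTTLE_DECELERATION;
        else
            throttle = 0;
    }
}

/* Decide whether the next queued unreliable packet gets sent or dropped,
   so that roughly throttle/THROTTLE_SCALE of them go through. */
static int should_send_unreliable (void)
{
    static unsigned int counter = 0;
    counter = (counter + 1) % THROTTLE_SCALE;
    return counter < throttle;
}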

But I could never figure out a similarly simple scheme for reliable packets. I always thought I needed good bandwidth capacity estimates of the connection so that I could know how much reliable data to send on any given call to enet_host_service. Then I considered some silly scheme of measuring the average time between enet_host_service calls so I would know how much more data budget to allocate to reliable sends, since I reasoned you need to know the rate at which data was getting sent to fit within the estimated budget. That brought up questions like: over what timescales do you measure the rate and the budget, and how do you fit that into the programmer interface of ENet, which is based on a polling enet_host_service call? And then the mechanism wouldn't cooperate with the unreliable packet throttling very well, since it uses a different metric, unless I changed the metric for unreliable packets too, which would require a crapload of empirical testing to tune right again... I was not looking forward to any of that, so for years the throttle has remained as it is rather than me risking it.

Now, there is a reliable data window size in ENet that affords some static flow control. The window size is constant, so it does not float up or down in response to connection quality at all. There were some provisions in the code to assign a fixed window size based on the user-provided bandwidth numbers, which I don't think anyone (even me) seriously uses at this point. But this also creates nasty antagonism if you send a lot of reliable data that overwhelms the link: suddenly ENet sees your ping times skyrocketing, so the throttle slows unreliable traffic down to almost nothing but keeps flooding the connection with reliable data, which just kinda sucks. But the games I was using it for were based on unreliable traffic with only limited reliable traffic, so it never really mattered except occasionally. Also keep that in mind.
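
For comparison, the static window amounts to something like this hypothetical check (WINDOW_SIZE and the byte counting are made up for illustration), gating new reliable sends purely on how much reliable data is already in flight:

#define WINDOW_SIZE (32 * 1024)   /* fixed, never adapts to the link */

static unsigned int reliable_bytes_in_flight = 0;   /* sent but unacknowledged */

/* New reliable data may go out only while the fixed window has room;
   ping times never factor into this decision. */
static int can_send_reliable (unsigned int packet_size)
{
    return reliable_bytes_in_flight + packet_size <= WINDOW_SIZE;
}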

Then, for one reason or another, I was reading an article on some TCP mechanism, and there was a sentence in it that struck me like a brick in the head: the window size in TCP floats up or down in response to congestion. See where this is going? Ouch, the answer was staring me right in the face the whole time:

 throttle % * window size = throttled window size

The feedback would be more or less perfect: if the throttled window size were still too large to prevent congestion, ping times would end up increasing, and the throttle % would just keep going down until the effective window size is low enough to stabilize the connection again. If ping times improve, the throttle opens up again. And the throttle % should already be well tuned, with no need to mess with it. The window was the only mechanism I ever needed to handle flow control; I just needed to link it to the throttle.

Problem solved in one damned line of code. Oh, how blind I was. :( -> :)
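
As a minimal sketch of what that one line amounts to (again with illustrative names, not ENet's actual fields), the reliable send check just swaps the fixed window for one scaled by the same throttle that already governs unreliable traffic:

#define THROTTLE_SCALE 32   /* same scale as the throttle sketch above */

/* Scaling the window by the current throttle makes reliable flow control
   ride on the existing congestion feedback for free. */
static unsigned int throttled_window_size (unsigned int window_size,
                                           unsigned int throttle)
{
    return window_size * throttle / THROTTLE_SCALE;
}

static int can_send_reliable (unsigned int bytes_in_flight,
                              unsigned int packet_size,
                              unsigned int window_size,
                              unsigned int throttle)
{
    return bytes_in_flight + packet_size
           <= throttled_window_size (window_size, throttle);
}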

Thoughts?

Lee


