Hello,

At 22:48 25/05/2012, you wrote:
Perhaps I am wrong, but I believed the overhead for a few bytes within a packet was almost zero.
Here's my logic:
If the packet is smaller than the 1500-byte maximum packet size, then the packet will simply be enlarged and no additional packets will be sent. 10 bytes = 80 bits, and 80 bits / 10 megabits per second = 8 microseconds of overhead to serialize the bytes.
A very good network latency is 2 milliseconds.
8 microseconds / 2 milliseconds = 0.4% additional overhead in transmission time

The only additional overhead I see in the game program is the time to serialize/deserialize the additional bytes.
I can't estimate those since I don't know how your code does this.

If this is wrong, or overly simplistic, I'd be happy to learn where it's wrong.
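The back-of-envelope arithmetic quoted above can be sketched as a few lines of code. The figures (10 extra bytes, a 10 Mbit/s link, 2 ms baseline latency) are the ones assumed in the quoted message, not measurements:

```python
# Back-of-envelope check of the serialization-overhead argument above.
# Assumed figures from the quoted message: 10 extra payload bytes,
# a 10 Mbit/s link, and 2 ms of baseline network latency.

def serialization_delay_us(extra_bytes: int, link_mbps: float) -> float:
    """Time to clock extra_bytes onto the wire, in microseconds."""
    bits = extra_bytes * 8
    return bits / link_mbps  # bits / (Mbit/s) comes out in microseconds

delay_us = serialization_delay_us(10, 10.0)   # 8.0 microseconds
latency_us = 2_000.0                          # 2 ms baseline latency
overhead_pct = 100.0 * delay_us / latency_us  # 0.4 %

print(f"{delay_us} us extra, {overhead_pct}% of a 2 ms latency")
```

On a slower 128 kbps uplink the same 80 bits take 625 microseconds, which is still small next to typical Internet latencies.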

My game is mainly played over the Internet, and some people still have only 128 kbps of upload; in practice it usually handles even less than that. Even so, the transfer time for 10 bytes isn't large, so the issue isn't really there. But the Internet is a big place with congestion, stalling, and packet loss, and to avoid all of that my guess is that it's better to keep packets as small as possible. So I see no reason to send unneeded bytes, especially when it's not hard to remove them.

Another issue is that some people have a capped download/upload amount per month, so any saved bandwidth is good for them.
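The monthly-cap point can be made concrete with a rough estimate. All the figures here (10 extra bytes per packet, 30 packets per second, 2 hours of play per day) are illustrative assumptions, not numbers from the game:

```python
# Rough monthly cost of a few spare bytes per packet for a capped user.
# All figures are illustrative assumptions: 10 extra bytes per packet,
# 30 packets per second, 2 hours of play per day, 30 days per month.

EXTRA_BYTES = 10
PACKETS_PER_SEC = 30
HOURS_PER_DAY = 2
DAYS = 30

extra_per_month = EXTRA_BYTES * PACKETS_PER_SEC * 3600 * HOURS_PER_DAY * DAYS
print(f"{extra_per_month / 1e6:.1f} MB of extra traffic per month")
```

Under these assumptions that is about 65 MB per month of pure waste per direction, which is noticeable on a small monthly cap even though each individual packet barely grows.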
_______________________________________________
ENet-discuss mailing list
[email protected]
http://lists.cubik.org/mailman/listinfo/enet-discuss
