Thank you for the detailed answer, Lee, it's clear now. As I see it, if it manages to shave even a few percent off relatively tightly packed data, then I might as well switch it on all the time. Especially since my messages are relatively short (even when aggregated), there should be no significant overhead if it decides to send the original anyway.

You could spend a lot of time trying to optimize out redundant data in
your packets, often at the expense of more complicated code, more runtime
cost to create the complicated data, etc...

Or, you can insert a compression step in the stream and let it do what it
does.

Depending on a lot of factors, going for simple data and cheap but
effective compression is a net win.

Yes, I agree. But since data often needs to be serialized anyway (especially if null-terminated strings are involved), it makes sense to compact the data somewhat on the spot (I believe serialization often does that as a side effect), so I'm guessing it's never an either/or situation.
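
For instance, this is the kind of on-the-spot packing I mean (a hypothetical
helper, nothing ENet-specific):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical example: pack small unsigned integers into fewer bytes
   before they hit the wire. The high bit of each byte marks "more
   bytes follow". Returns the number of bytes written (1 to 5 for a
   32-bit value). */
static size_t write_varint(uint8_t *buf, uint32_t v)
{
    size_t n = 0;
    while (v >= 0x80) {
        buf[n++] = (uint8_t)(v | 0x80);
        v >>= 7;
    }
    buf[n++] = (uint8_t)v;
    return n;
}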

Martin

Quoting Brad Roberts <[email protected]>:

Another way to look at it is this:

You could spend a lot of time trying to optimize out redundant data in
your packets, often at the expense of more complicated code, more runtime
cost to create the complicated data, etc...

Or, you can insert a compression step in the stream and let it do what it
does.

Depending on a lot of factors, going for simple data and cheap but
effective compression is a net win.

Later,
Brad

On Thu, 20 May 2010, Lee Salzman wrote:

1. This depends more on the data itself than on its size, since the compressor
can start doing useful work on anything over about 10 bytes. Keep in mind,
however, that it compresses the entire UDP packet, including ENet protocol
headers, so almost every UDP packet sent will be above this limit. The
compressor uses an adaptive scheme that does not send any frequency tables, so
there is no per-packet size overhead, and it can operate well even on packets
only tens of bytes long.

2. Same as for any compressor: redundant data. But if your data is redundant,
you probably did something wrong in your application to begin with. So it's a
catch-22. :)

It managed to squeeze only about 5-10% off of Sauerbraten's physics state,
which is blasted out in extreme volume, but that data was already very well
packed. Compression ratios were much better on the non-physics data I was
sending, but at too low a volume for it to matter. YMMV depending on how
cleverly you encoded your network data in the first place.

3. It doesn't send a compressed packet if the compressed size exceeds the
uncompressed size. So the cost is mostly just the CPU penalty of touching the
data and trying to compress it, even when ENet decides to send the original
packet instead.

4. Yes, both ends of the connection must have this enabled for it to work.
Even to decode a compressed packet, the user has to have supplied the decoder
to use. It works through a callback, so you could substitute any compression
library you want, but the protocol has no way of saying which kind of
compression was used or negotiating that with the other side, so I just kept
it simple here.
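
For reference, enabling the built-in range coder on a host looks roughly like
this; the same call has to be made on the host at the other end:

#include <enet/enet.h>
#include <stdio.h>

/* Create a client host and enable the built-in range coder on it.
   The remote host must call enet_host_compress_with_range_coder()
   too, or its packets will be unreadable on this side. */
static ENetHost * create_compressed_client(void)
{
    ENetHost * host = enet_host_create(NULL, 1, 2, 0, 0);
    if (host == NULL)
        return NULL;

    if (enet_host_compress_with_range_coder(host) < 0) {
        fprintf(stderr, "could not enable range coder\n");
        enet_host_destroy(host);
        return NULL;
    }
    return host;
}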

5. The problem was that I wanted it to be able to compress packet headers, and
keep in mind that ENet aggregates many sends into one UDP packet when it can.
Compressing each small user packet individually would be awkward in that
situation, since I might have to invoke the compressor multiple times for one
UDP packet. Keep in mind the compressor is also adaptive, so it trains its
frequency estimation as it walks through the data: it compresses better the
longer the packet gets, up to a point. So for the moment I decided to always
compress the entire UDP packet and, as long as the result is smaller, send the
compressed version.
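
And for anyone who wants to plug in their own library, the callback shape is
roughly as below (check enet.h for the exact signatures); note the compressor
receives a list of buffers precisely because of the aggregation described
above. This sketch only flattens the buffers; a real one would run them
through an actual compressor before writing outData:

#include <enet/enet.h>
#include <string.h>

static size_t ENET_CALLBACK
my_compress(void * context, const ENetBuffer * inBuffers, size_t inBufferCount,
            size_t inLimit, enet_uint8 * outData, size_t outLimit)
{
    /* context and inLimit are unused in this sketch. */
    size_t total = 0, i;
    for (i = 0; i < inBufferCount; ++ i) {
        if (total + inBuffers[i].dataLength > outLimit)
            return 0; /* 0 makes ENet fall back to the original packet */
        memcpy(outData + total, inBuffers[i].data, inBuffers[i].dataLength);
        total += inBuffers[i].dataLength;
    }
    return total; /* bytes written to outData */
}

static size_t ENET_CALLBACK
my_decompress(void * context, const enet_uint8 * inData, size_t inLimit,
              enet_uint8 * outData, size_t outLimit)
{
    if (inLimit > outLimit)
        return 0;
    memcpy(outData, inData, inLimit);
    return inLimit;
}

/* Install with:
   ENetCompressor c = { NULL, my_compress, my_decompress, NULL };
   enet_host_compress(host, &c); */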

Lee

M. Rijks wrote:
> The 1.3 edition sounds excellent, Lee - it already includes three
> wanna-haves for me. =)
>
> I've tried catching up on range coders using a Wikipedia article, but I have
> to admit the inner workings are entirely unclear to me. :( So I'll go with
> some questions I think I can handle the answers to:
>
> 1. What's the minimum size for a packet for this kind of compression to
> become effective?
> 2. What kind of data is best compressed with it?
> 3. Is there a size penalty for attempting to compress data that can't be
> compressed?
> 4. Do I understand correctly that hosts must be set for compression on both
> ends of a connection for this to work? Or are compressed packets somehow
> flagged so ENet 1.3 can recognize and decompress them when they come in?
> 5. Wouldn't it have been more convenient to decide compression per packet
> (in which case packets *would* need to be flagged, of course)? I expect
> there is little sense in compressing very small packets, especially since
> there may be overhead in terms of size and processing...
>
> Thanks!
>
> Martin
>

_______________________________________________
ENet-discuss mailing list
[email protected]
http://lists.cubik.org/mailman/listinfo/enet-discuss
