You misunderstood me. :)

I mean a *counter* overflow... as in incrementing an unsigned int that's already at its maximum, so it rolls back over to 0.

Lee

Andrew Fenn wrote:
The semantics of these are that they are set to 0 by enet_host_create() but
just increment thereafter. It is the user's responsibility to
reset them to 0 periodically to prevent them from overflowing.

What about people who aren't using these new features? Isn't that
going to trip them up?

Perhaps it would be better for them to just contain the data since the last
time you called enet_host_service(), because I wouldn't want other users
of the lib to suddenly have buffer overflows in their code after
upgrading.

On Sat, May 15, 2010 at 6:53 AM, Lee Salzman <[email protected]> wrote:
Okay, this is what I added, which should hopefully suit your needs:

typedef struct _ENetHost
{
   ... usual stuff here ...
   enet_uint32   totalSentData;          /**< total data sent, user should reset to 0 as needed to prevent overflow */
   enet_uint32   totalSentPackets;       /**< total UDP packets sent, user should reset to 0 as needed to prevent overflow */
   enet_uint32   totalReceivedData;      /**< total data received, user should reset to 0 as needed to prevent overflow */
   enet_uint32   totalReceivedPackets;   /**< total UDP packets received, user should reset to 0 as needed to prevent overflow */
} ENetHost;

The semantics of these are that they are set to 0 by enet_host_create() but
just increment thereafter. It is the user's responsibility to
reset them to 0 periodically to prevent them from overflowing. The suggested
usage pattern would be something like:

myCounter += host -> totalSentData;
host -> totalSentData = 0;
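
For what it's worth, here's a minimal sketch of that pattern in a service loop, assuming the application keeps its own wider 64-bit running totals; the uint64_t accumulators and the helper name below are just illustrative, not part of ENet:

#include <stdint.h>
#include <enet/enet.h>

/* Application-side running totals; 64-bit so they effectively never wrap. */
static uint64_t appTotalSentData     = 0;
static uint64_t appTotalReceivedData = 0;

static void drain_host_counters (ENetHost * host)
{
    /* Fold the 32-bit host counters into the wide totals, then reset them
       so they never get anywhere near their maximum. */
    appTotalSentData     += host -> totalSentData;
    appTotalReceivedData += host -> totalReceivedData;
    host -> totalSentData     = 0;
    host -> totalReceivedData = 0;
}

Calling something like that right after each enet_host_service() call keeps the 32-bit counters far away from ever wrapping.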

Also note that the total*Packets statistics are in terms of raw UDP packets, i.e.
the aggregates of various ENet commands, so they are a better
indicator of actual network performance than what you were using before,
which counted only user packets. Each increment represents
a single enet_socket_send or enet_socket_receive call under the hood. The
total*Data counters likewise reflect the actual size of the buffers sent
or received via those enet_socket_* calls.
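
And to get the "since the last flush" style numbers you mentioned, one rough sketch would be to sample and reset the counters on a fixed interval; the helper and the reporting below are made up for illustration, using enet_time_get() just for the interval timing:

#include <enet/enet.h>
#include <stdio.h>

/* Call this from your main loop; prints throughput roughly once per second. */
static void report_host_rates (ENetHost * host)
{
    static enet_uint32 lastReport = 0;
    enet_uint32 now = enet_time_get ();

    if (now - lastReport < 1000) return;

    double seconds = (now - lastReport) / 1000.0;
    printf ("sent %.1f KiB/s (%.0f UDP packets/s), received %.1f KiB/s (%.0f UDP packets/s)\n",
            host -> totalSentData / 1024.0 / seconds,
            host -> totalSentPackets / seconds,
            host -> totalReceivedData / 1024.0 / seconds,
            host -> totalReceivedPackets / seconds);

    /* Reset so the counters only ever hold one interval's worth of data. */
    host -> totalSentData = host -> totalSentPackets = 0;
    host -> totalReceivedData = host -> totalReceivedPackets = 0;
    lastReport = now;
}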

Lee


Andrew Fenn wrote:
In that case, is there a solution, or could this possibly be
made easier via the API down the line?

On Fri, May 14, 2010 at 9:52 PM, Lee Salzman <[email protected]> wrote:

Keep in mind that reusing those internal ENet statistics is not going to be
very safe at all, except for the roundTripTime stat. Those *Bandwidth vars
don't measure totals and are just limits supplied by the user. And because
of the whole dispatch queue change I made, lastServicedPeer no longer
exists. :)
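
If it helps in the meantime, the roundTripTime can still be read per peer by walking the host's peer array rather than going through lastServicedPeer; something roughly like this (just a sketch, the helper name is made up):

#include <enet/enet.h>
#include <stdio.h>

/* Print the smoothed RTT for every currently connected peer. */
static void print_peer_rtts (ENetHost * host)
{
    size_t i;
    for (i = 0; i < host -> peerCount; ++ i)
    {
        ENetPeer * peer = & host -> peers [i];
        if (peer -> state != ENET_PEER_STATE_CONNECTED) continue;
        printf ("peer %u: %u ms round trip\n",
                (unsigned) i, (unsigned) peer -> roundTripTime);
    }
}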

Lee

Andrew Fenn wrote:

That out of the way, are there any not-too-difficult things people
would like to see in 1.2.2?


- I use the CMake build system for our project. It'd be great if ENET
included a FindENET.cmake script or, better yet, changed over
completely from autotools.

I've attached the ENET scripts I use to statically link it into our
project.

- I'm currently trying to figure out how to get the following data from
ENET:
   - Total data received / sent
   - Data received / sent since the last flush
   - Total packets received / sent

I'm not so sure those things count as features, but I'm sure I'm not the
only one who's confused over how to get that data. So if you could
correct me on the following...

lHost->lastServicedPeer->roundTripTime - I use this for ping time,
however it always says 5 ms even when I kill the server.
lHost->incomingBandwidth/1000 - Data in KiB received (always shows the
same number even when killing the server)
lHost->outgoingBandwidth/1000 - Data in KiB sent (always shows the
same number even when killing the server)
lHost->lastServicedPeer->packetsSent - Gets the packets sent since
the last flush

