Gerhard Reitmayr wrote:
> 
> First I distinguished between VRML update messages and everything
> else, such as chat or control messages. Then I used buffers for each
> client, on the outgoing side, that are read by dedicated threads and
> pushed onto the network. Only VRML updates (by far the most frequent)
> go into these buffers.
> 
> Moreover, if an update message of the same type (i.e. same receiver,
> such as an avatar, and same VRML field type) is already in the buffer,
> it is replaced by the new one.
>
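
 Something like this, I take it (just a sketch of the idea, with
made-up names, not Gerhard's or DeepMatrix's actual code), one buffer
per client on the outgoing side:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Per-client outgoing buffer that coalesces VRML field updates:
    // key = (receiver, field), and the newest update replaces any older one.
    class UpdateBuffer {
        private final Map<String, byte[]> pending = new LinkedHashMap<>();

        synchronized void put(String receiver, String field, byte[] update) {
            pending.put(receiver + "/" + field, update); // overwrite older update
        }

        // called by the dedicated sender thread for this client
        synchronized byte[][] drain() {
            byte[][] out = pending.values().toArray(new byte[0][]);
            pending.clear();
            return out;
        }
    }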

 Hmmm (seems to be the word for the day), it looks like the
vnet client does pretty much exactly this for messages
to the server, and the server does it for messages back
to the client.

 What I observed using a bunch of VnClientTest'ers was that even if
you coalesce the messages as above, once enough clients move at the
same time you still get an overload.


> All other messages are passed directly to the network; therefore the
> system can react faster to control messages, and text is faster too.
> 

 That's pretty much how vnet does it, but the problem (maybe) is that
passing a TCP/IP message to the socket layer doesn't mean it gets sent
right away, as there might be hundreds of other messages waiting to be
sent down inside one of the low-level socket buffers.
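 For what it's worth, a non-blocking write at least makes that backlog
visible instead of burying it in the socket layer. A sketch (host,
port and sizes are made up, nothing vnet-specific):

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;

    public class BackpressureSketch {
        public static void main(String[] args) throws Exception {
            SocketChannel ch =
                SocketChannel.open(new InetSocketAddress("example.org", 4000));
            ch.configureBlocking(false);

            ByteBuffer msg = ByteBuffer.wrap(new byte[1024]); // a pending update
            int sent = ch.write(msg); // copies only what fits in the send buffer
            if (msg.hasRemaining()) {
                // The kernel send buffer is backed up: those are the hundreds
                // of waiting messages, and a good point to drop or coalesce.
                System.out.println("send buffer full after " + sent + " bytes");
            }
            ch.close();
        }
    }
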

 Are you saying that DeepMatrix doesn't ever have any problems
with the sort of hang VNet does? If so, maybe the theory is
wrong, and it's some other random bug causing the problem.

 Of course, running VnClientTest isn't the same as running real
clients, so I'd like to see the results of the experiment I
proposed. Even if the theory is wrong, the experiment might
reveal useful data...



-cks
