Hi Jesse,

> jetty supports the latest versions on or around when they come out, if
> you scan the hybi lists you'll see Greg on there a lot...and he is
> websockets on jetty.  A couple of the rest of us follow it passively
> in the background as it has been evolving.
>

Glad to know Jetty's WebSocket support is so up to date (kudos to Greg).
I just gave it a try - I can't believe how easy and how well it performs
compared to emulating a data upstream with multiple XMLHttpRequests.

However, I still have a question:
I need to stream data to the client as fast as possible; in fact, the amount
of data generated depends on how fast the client can download it (a
request/response scheme is not possible because of latency). On the client
side there is the "bufferedAmount" field, but how can I control the amount
of data written on the server side?

I assume Jetty's WebSocket.Connection will block when too much data is
written, but are there mechanisms to control this? Is the message-based API
powerful enough, or will I have to work with the fragment API?
Reading through WebSocketFactory suggests 64 KB is used for buffering - can
this be controlled?
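To make concrete what I mean by "controlling the amount of data written": this is not Jetty's API, just a plain-JDK sketch of the bounded-buffer behavior I'm hoping the server side provides - generation blocks once a fixed buffer is full, so the producer automatically slows to the consumer's (i.e. the client's) pace.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BackpressureSketch {
    public static void main(String[] args) throws InterruptedException {
        // Bounded queue: at most 4 pending messages, mimicking a fixed send buffer.
        BlockingQueue<String> sendBuffer = new ArrayBlockingQueue<>(4);

        // Consumer thread: drains slowly, simulating the client's download rate.
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    String msg = sendBuffer.take(); // blocks until data is available
                    Thread.sleep(10);               // simulated network/client delay
                    System.out.println("sent " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // Producer: put() blocks whenever the buffer is full, so data generation
        // never runs more than 4 messages ahead of what has been "downloaded".
        for (int i = 0; i < 10; i++) {
            sendBuffer.put("frame-" + i);
        }
        consumer.join();
        System.out.println("done");
    }
}
```

If Connection's send blocks in the same way, that would already be enough for my use case; I just want to know whether (and how) the buffer size driving that blocking can be tuned.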

Thanks, Clemens
_______________________________________________
jetty-users mailing list
[email protected]
https://dev.eclipse.org/mailman/listinfo/jetty-users