We've found that as our exchange volumes have increased, the only protocol
capable of handling a full un-throttled feed is ITCH (over multicast UDP).
For all of our other stream-based TCP feeds (FIX, HTTP) we are moving
toward rate throttling and coalescing events by symbol in all cases -
we already do it in the majority of our connections.  We maintain a buffer
per connection (a Disruptor or a coalescing ring buffer, depending on the
implementation) so that the rate at which one remote connection consumes
does not impact any of the other connections.  With FIX we also maintain
code that, if we detect a ring buffer becoming too full (e.g. >50%),
pro-actively tears down that connection, on the assumption that the
client's connection is not fast enough to handle the full feed, or that it
has disconnected and we never got a FIN packet.  If you have non-blocking
I/O available, then you can be a little smarter with the implementation
(unfortunately not an option with the standardised WebSocket APIs).
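
Sketched in Java, a coalesce-by-symbol buffer with the >50% teardown
heuristic might look roughly like this (class and method names are mine,
not the Disruptor's API, and a real implementation would use a lock-free
ring rather than a synchronized map):

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a per-connection coalescing buffer keyed by symbol.
// A new update for a symbol that is already pending overwrites it in
// place, so a slow consumer always sees the latest price per symbol.
// When the number of distinct pending symbols exceeds the teardown
// threshold (50% of capacity here), offer() returns false to signal
// that the connection should be proactively torn down.
final class CoalescingBuffer<K, V> {
    private final LinkedHashMap<K, V> pending = new LinkedHashMap<>();
    private final int teardownThreshold;

    CoalescingBuffer(int capacity) {
        this.teardownThreshold = capacity / 2; // tear down at >50% fill
    }

    /** Producer side: coalesce by key; false = tear the connection down. */
    synchronized boolean offer(K key, V value) {
        pending.put(key, value); // newer value replaces an older pending one
        return pending.size() <= teardownThreshold;
    }

    /** Consumer side: drain the oldest pending entry, or null if empty. */
    synchronized Map.Entry<K, V> poll() {
        Iterator<Map.Entry<K, V>> it = pending.entrySet().iterator();
        if (!it.hasNext()) return null;
        Map.Entry<K, V> e = it.next();
        Map.Entry<K, V> snapshot = Map.entry(e.getKey(), e.getValue());
        it.remove();
        return snapshot;
    }

    synchronized int size() {
        return pending.size();
    }
}
```

The producer here is the market-data fan-out thread and the consumer is
the per-connection socket writer; because stale updates are overwritten
rather than queued, a slow client costs bounded memory and never blocks
the feed for anyone else.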

Mike.

On 15 April 2017 at 02:01, Greg Young <gregoryyou...@gmail.com> wrote:

> For a price feed? What good is a 30-second-old price update? I would
> prefer the current one and lose the middle in most cases.
>
> If you were doing Level 2 data (order book) this statement would make
> more sense.
>
> On Fri, Apr 14, 2017 at 3:00 PM, Vero K. <vero.ka...@gmail.com> wrote:
> > Thanks, but losing messages won't work for us: either wait and
> > disconnect, or consume all.
> >
> >
> > On Friday, April 14, 2017 at 4:00:07 PM UTC+3, peter royal wrote:
> >>
> >> For a similar problem I will only let one message for a given "key"
> >> remain in the queue to be sent.
> >>
> >> So if a client is slow, they'll receive the most recent message for a
> >> key but lose intermediate ones.
> >>
> >> -pete
> >>
> >> --
> >> peter royal - (on the go)
> >>
> >> On Apr 14, 2017, at 5:03 AM, Vero K. <vero....@gmail.com> wrote:
> >>
> >> Hi, we want to stream FX rates over WebSockets and need to find out
> >> how to do it properly. We open a socket for every connection and it
> >> has a buffer; if the buffer is full it might cause a problem, and on
> >> the other side, if our client is slow, at some point we need to drop
> >> the connection. How would you implement rates streaming over
> >> WebSockets to handle this? Would you consider putting an additional
> >> buffer of some size (for example a Disruptor queue) in front of every
> >> client, picking data up from there and putting it into the socket
> >> buffer, and if the socket buffer is full, keeping the message in the
> >> Disruptor and publishing it to the client once the socket buffer is
> >> free? And if the Disruptor queue is full, disconnect the client? Do
> >> you think this is a good solution, or how is it usually handled? We
> >> use Java for our project.
> >>
> >> --
> >> You received this message because you are subscribed to the Google
> >> Groups "mechanical-sympathy" group.
> >> To unsubscribe from this group and stop receiving emails from it,
> >> send an email to mechanical-sympathy+unsubscr...@googlegroups.com.
> >> For more options, visit https://groups.google.com/d/optout.
> >
>
>
>
> --
> Studying for the Turing test
>
>
