[This message was posted by Joseph Conron of  <jcon...@aegisoft.com> to the 
"General Q/A" discussion forum at http://fixprotocol.org/discuss/22. You can 
reply to it on-line at http://fixprotocol.org/discuss/read/ddf2c1f1 - PLEASE DO 
NOT REPLY BY MAIL.]

Having worked with less complex throttling rules as proposed here, I think 
Joey's suggestion merits further thought.  I suspect some may not appreciate 
the full scope of his suggestion, so I'll elaborate as I understand it:

First, what Joey has suggested is a credit based scheme:

Let’s assume that we have a new Fix Tag 98765 "TAGOrderCredit".

It would work like this:

On the Fix Logon response, the bank adds the tag 98765=L, where L is the max 
number of orders that the bank will accept from the client at this moment.

The client is obligated to set a local variable O = L.

When the client has an order to send, it checks O.  If O > 0, the client sends 
the order and decrements O.  If O == 0, the client either rejects the order 
back to the trader, or queues it for later transmission.

The bank is obligated to send an execution report each time an order changes 
state.  So each execution report will now include the OrderCredit tag, and that 
tag's value updates the client's current value of O.

When O is updated, the client checks its queue; if O > 0 and a queued order 
exists, it sends the order and decrements O.
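The steps above can be sketched in a few lines of client-side pseudocode. This is just an illustration of the credit scheme as I understand it; the tag number 98765 ("TAGOrderCredit") and all class and method names here are made up for the example, not part of the FIX standard:

```python
from collections import deque

class CreditThrottle:
    """Client-side sketch of the credit-based throttle.

    The bank sends its current credit value (tag 98765=L in this
    example) on the Logon response and on every execution report;
    the client tracks it as O and only sends while O > 0.
    """

    def __init__(self):
        self.credit = 0        # O: orders we may still send
        self.queue = deque()   # orders waiting for credit
        self.sent = []         # stand-in for the FIX session send

    def on_logon(self, credit_limit):
        # Logon response carried 98765=L: initialize O = L
        self.credit = credit_limit
        self._drain()

    def submit(self, order):
        # Trader wants to send an order: check O first
        if self.credit > 0:
            self._send(order)
        else:
            # O == 0: queue for later (or reject back to the trader)
            self.queue.append(order)

    def on_execution_report(self, new_credit):
        # Each execution report refreshes O with the bank's value
        self.credit = new_credit
        self._drain()

    def _drain(self):
        # Credit arrived: release queued orders while O > 0
        while self.credit > 0 and self.queue:
            self._send(self.queue.popleft())

    def _send(self, order):
        self.credit -= 1
        self.sent.append(order)
```

For example, a client that logs on with L = 2 and submits three orders would send two immediately, queue the third, and release it as soon as an execution report restores credit.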

The beauty of this is that it allows streaming of orders with no need to worry 
whether the client's clock ticks at the same rate as the bank's.  Under a 
time-window scheme, if the client's clock runs slightly fast, it may send an 
order ever so slightly before the bank's window rolls over, and the bank will 
reject the order.  The time wasted handling that reject adds considerable delay 
before the order can be re-sent.

Moreover, it allows the bank to increase or decrease the throttle limits 
without requiring clients to change code or parameters.

Finally, it keeps orders flowing at the maximum rate since executions are sent 
in parallel (full duplex) with new orders, so credit is updated on the fly.  
This is pipelining: keeping the order pipe filled with orders as much as 
possible.  

I think that some will worry that they will see additional latency if they have 
to wait for a 0 credit state to be refreshed with an execution message.  They 
are correct in that regard.

But, in practice, it will work that way for everybody, so no one should be at a 
disadvantage here, hence the mechanism is fair.

> The proposal is based on specific requirements for throttles received
> from various parties that have implemented or need to implement it. The
> idea of explicit tokens being provided was discussed but discarded for
> high-speed environments where predictability of latency is key. Algos
> having to wait for explicit tokens before they can "continue" will not
> like it. They want to be in control and know upfront at what rate they
> can send requests to the marketplace without getting slowed down.
> 
> > > Considering how much work has been done on QoS and congestion
> > > relief/mitigation on existing network protocols, I was wondering if
> > > this proposal is based on one of those, or if it's entirely ad-hoc?
> >
> > Speaking of network protocols, may I suggest a simpler scheme:
> >
> > The Exchange-side sends a single new field in any message: this new
> > field gives the number of additional orders and/or replaces that the
> > exchange will accept at this time. Optionally, if the exchange-side
> > needs to update this number, it can send an asynchronous (heartbeat or
> > status) message with the new value.
> >
> > Thanks! -Joey

