It would be interesting to try to build up a picture of what exactly
is happening while we wait for a data request to come back with a
response.  We could presumably do this by collecting statistics on one
or more nodes, tracking stuff like:

  - Time from a request entering to exiting
  - Time for a request, after exiting, to generate a response, relative
to its TTL and anything else that might indicate where it is in its
request chain

From this we could narrow down where delays are coming from: is it
throughput, or perhaps just long request chains?
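
A rough sketch of what that per-node bookkeeping might look like (the
class and method names below are made up for illustration, not
Freenet's actual API): record each request as it enters, note when it
exits, and log its time-in-node and response latency against the TTL
it arrived with, so we can later see whether delay correlates with
chain length or with local load.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RequestStats {

    // One record per in-flight request, keyed by a unique request ID.
    private static class Record {
        final long enteredAt;   // when the request arrived at this node
        final int ttlOnEntry;   // TTL (hops remaining) when it arrived
        long exitedAt;          // when we forwarded it onward
        long respondedAt;       // when a response (or timeout) came back

        Record(long enteredAt, int ttlOnEntry) {
            this.enteredAt = enteredAt;
            this.ttlOnEntry = ttlOnEntry;
        }
    }

    private final Map<Long, Record> inFlight = new ConcurrentHashMap<>();

    public void onRequestEntered(long requestId, int ttl) {
        inFlight.put(requestId, new Record(System.currentTimeMillis(), ttl));
    }

    public void onRequestExited(long requestId) {
        Record r = inFlight.get(requestId);
        if (r != null) r.exitedAt = System.currentTimeMillis();
    }

    public void onResponse(long requestId) {
        Record r = inFlight.remove(requestId);
        if (r == null) return;
        r.respondedAt = System.currentTimeMillis();
        long timeInNode = r.exitedAt - r.enteredAt;       // entering to exiting
        long timeToRespond = r.respondedAt - r.exitedAt;  // exit until response
        // Log both times against the TTL the request arrived with.
        System.out.printf("ttl=%d inNode=%dms toRespond=%dms%n",
                r.ttlOnEntry, timeInNode, timeToRespond);
    }
}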

Has anything like this been done yet?

Ian.

-- 
Ian Clarke
CEO, Uprizer Labs
Email: ian at uprizer.com
Ph: +1 512 422 3588
