In article <201005271035.46352.fmgrotep...@yahoo.co.uk>,
 Frans Grotepass <fmgrotep...@yahoo.co.uk> wrote:

> Hi all,
> 
> Sorry for abusing my membership to this forum for this question.
> 
> We are busy with building an embedded application that must retrieve data 
> very fast. 

Please define "very fast" in numbers.  For example, 95% of responses must be 
fully received within 1,000 microseconds, and 100% within 10 milliseconds, or 
the planet will explode.
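
If you don't have those numbers yet, measure rather than guess.  Here is a 
rough sketch in C (assuming POSIX clock_gettime() is available; 
do_one_request() is just a stand-in for whatever your application actually 
does) of collecting a latency distribution and reading off the percentiles:

/* Sketch: turn "very fast" into numbers by timing many requests and
 * reporting the 95th percentile and the worst case.
 * do_one_request() is a placeholder for the real operation. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NSAMPLES 10000

static void do_one_request(void) { /* placeholder for the real request */ }

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    static double us[NSAMPLES];
    struct timespec t0, t1;

    for (int i = 0; i < NSAMPLES; i++) {
        clock_gettime(CLOCK_MONOTONIC, &t0);
        do_one_request();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        us[i] = (t1.tv_sec - t0.tv_sec) * 1e6
              + (t1.tv_nsec - t0.tv_nsec) / 1e3;
    }
    qsort(us, NSAMPLES, sizeof us[0], cmp_double);
    printf("95th percentile: %.1f us\n", us[(int)(0.95 * NSAMPLES)]);
    printf("worst case:      %.1f us\n", us[NSAMPLES - 1]);
    return 0;
}

Run it on the real hardware under realistic load, or the numbers will 
flatter you.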

What does the embedded application do?


>  The choice is to either have the data locally or go to a central 
> server(pool) that contains the data. 

Well, locally is always faster and more predictable than remotely, so why even 
consider remotely?


> In evaluating the network option, I thought that the people here could 
> possibly help me with the expected network latency for a Gb network via a 
> switch. My gut feeling says that with increased load, the switch will bundle 
> the traffic to the different nodes more and this will result in higher 
> latency. 

Big switches can have transit latencies of a few tens of microseconds, but 
there is far more to it than that.  And if there is a choke point somewhere, 
the observed latency will vary wildly with possibly unrelated traffic and 
loading, making it appear that the latency varies randomly.  The farther the 
commands and resulting data travel, the more vulnerable one is to these effects.
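
The only way to pin this down is to measure on your own network, with 
representative background traffic.  Something like this one-shot UDP 
round-trip probe (a sketch only -- the server address, the echo port, and 
the existence of an echo responder on the server are all assumptions, and 
error handling is omitted) will show you the spread once run in a loop:

/* Sketch: one UDP round trip through the switch, timed with
 * CLOCK_MONOTONIC.  Assumes an echo service on the server;
 * 192.168.1.10 and port 7 are placeholders for your own setup. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in srv = { 0 };
    char buf[64] = "ping";
    struct timespec t0, t1;

    srv.sin_family = AF_INET;
    srv.sin_port = htons(7);                             /* placeholder */
    inet_pton(AF_INET, "192.168.1.10", &srv.sin_addr);   /* placeholder */

    clock_gettime(CLOCK_MONOTONIC, &t0);
    sendto(s, buf, sizeof buf, 0, (struct sockaddr *)&srv, sizeof srv);
    recvfrom(s, buf, sizeof buf, 0, NULL, NULL);  /* blocks for the echo */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("round trip: %.1f us\n",
           (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3);
    close(s);
    return 0;
}

Collect a few hundred thousand of those round trips with and without 
background load and you will see the choke-point effect directly.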

Joe Gwinn
