1. Nobody has proven that the SBGK testcase is not causing data loss.
Data loss is possible in an IP network.
2. I agree with SBGK in NOT agreeing with the above statement, that "...we all
agree...", though for other reasons than SBGK. I haven't run the
magnet testcase. ;)
3. The Phil Leigh 20s one-buffer listening testcase I consider
irrelevant.


IP packet networks were never meant for realtime data streaming.
It is a well-known fact that realtime A/V traffic causes
serious and rather inhomogeneous latencies and packet jitter at
the receiving end.
To cope with that problem, special buffers had to be introduced at
the receiving end.
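
To make that a bit more concrete, here is a minimal sketch in plain
Python (nothing to do with the actual Squeezebox firmware, all numbers
made up): playback only starts once a pre-fill level is reached, and the
deeper the pre-fill, the more arrival jitter can be absorbed before an
underrun.

import random
from collections import deque

PACKET_MS = 20          # hypothetical packet duration
PREFILL_PACKETS = 10    # playback starts only after this much is queued

buffer = deque()
playing = False
underruns = 0
clock_ms = 0
next_arrival_ms = 0

for _ in range(500):
    clock_ms += PACKET_MS
    # arrivals are jittered: sometimes early, sometimes late
    while next_arrival_ms <= clock_ms:
        buffer.append(next_arrival_ms)
        next_arrival_ms += PACKET_MS + random.randint(-15, 15)
    if not playing and len(buffer) >= PREFILL_PACKETS:
        playing = True
    if playing:
        if buffer:
            buffer.popleft()    # one packet consumed per playback tick
        else:
            underruns += 1      # buffer ran dry: audible dropout

print("underruns with prefill of", PREFILL_PACKETS, ":", underruns)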

Logitech made that buffer pretty large for a very good reason, I guess.
Though I don't think 20s would be required for networking
jitter/latency alone.

I do see some more reasons for a big buffer. The transport needs to be
able to cope with higher samplerates than 44.1/16.
I guess rew/ffd actions also require more buffer space.
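
A rough back-of-envelope calculation (illustrative only, nothing to do
with the actual firmware buffer layout) shows how quickly the space
requirement grows with sample rate and bit depth:

def buffer_bytes(seconds, rate_hz, bits, channels=2):
    return seconds * rate_hz * (bits // 8) * channels

for rate, bits in [(44100, 16), (96000, 24), (192000, 24)]:
    mb = buffer_bytes(20, rate, bits) / 1e6
    print(f"20 s at {rate}/{bits}: {mb:.1f} MB")

# prints roughly 3.5 MB, 11.5 MB and 23.0 MB respectively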

Run all that wireless and you're scratching the ceiling. 

There are services like QoS or qWave by MS, which are supposed
to ease that well-known network A/V streaming challenge.
I learned the other day that LMS is not using the qWave API.
I'm not sure how Logitech handles these challenges. Perhaps the
buffer gives them plenty of headroom to avoid network streaming
optimizations.
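
Just to illustrate the kind of hint such services give the network -
this is NOT what LMS or qWave actually does, merely a generic sketch in
Python - an application can mark its outgoing packets with a DSCP value
so that DSCP-aware switches/routers queue the audio ahead of bulk
traffic:

import socket

DSCP_EF = 0x2E                  # "Expedited Forwarding" code point
TOS = DSCP_EF << 2              # DSCP sits in the upper 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)
# ... connect and stream as usual; whether the marking is honoured
# depends on the OS and on the network gear in between.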

So far so good.

Back to packet latencies and packet jitter.

Running a large buffer doesn't mean there is less work
for the NIC to do in getting the traffic managed.
Not to forget, buffers usually get continuously refilled
and need to be managed. It's not that everything sits idle for 20s
until the buffer needs a fresh refill, while the server further
upstream still runs a realtime stream.
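
A toy simulation of that refill behaviour (numbers made up for
illustration) shows that once playback runs, the network side stays
busy on every tick instead of waking up once per 20s:

def simulate(ticks=100, high_water_ms=20_000, drain_ms=100, chunk_ms=100):
    buffered = 0
    network_reads = 0
    for _ in range(ticks):
        while buffered < high_water_ms:   # top up whenever below target
            buffered += chunk_ms
            network_reads += 1
        buffered -= drain_ms              # playback drains steadily
    return network_reads

print("network reads over 100 ticks:", simulate())
# the reads are spread over every tick, not bunched into one burst per 20s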

It is well known that this inhomogeneous packet traffic causes
highly inhomogeneous loads on the NIC and related parts of the chain.

Heavy inhomogeneous load conditions on the NIC and related parts, including
software management functions on the transport, could translate into
jitter, additional noise, or power variations.

From there those flaws would add to the rest of the noise and jitter
(all cumulative) of that device and make it all the way through the
transport to the DAC.

All that should be measurable, if somebody intends to do so.

The data and the noise take different routes. They separate somewhere
at the NIC. The data gets filled into the buffer, while the noise
makes it straight to the output without any buffering.

They probably won't meet again at a later stage - at the output. 
The data will always be late.

So far my theory.

Since nobody around here can prove anything in that area, it's rather
useless to have that discussion. 

Of course we can continue to exchange our observations or wild
guesses.


My guess: the better the upstream network and server setup, the less
impact you'll see on the downstream NIC. That would translate into less
noise/jitter/power variation.



The network-related EMI/RFI question (cable shield yes/no) is another
story.


Cheers


-- 
soundcheck

::: ' Touch Toolbox 3.0 and more' (http://soundcheck-audio.blogspot.com)
:::  by soundcheck