I think the primary area where people are concerned about latency is RBD
with 4k block-size access. OTOH, 2.3µs of latency seems to be two orders
of magnitude below what is realistically achievable on a real-world
cluster anyway (
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-July/011731.html),
so I don't really think the basic latency difference between copper and
fiber as listed makes much of a difference at this point.
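A back-of-the-envelope comparison (the ~1 ms write latency is my own
assumption for a typical all-flash cluster, in the spirit of the thread
linked above, not a measured number):

    # How much of a realistic end-to-end 4k RBD write does the
    # switch latency account for?
    switch_latency_us = 2.3    # 10GBASE-T port-to-port delay
    write_latency_us = 1000.0  # ~1 ms per 4k write (assumed)
    print(f"switch share: {switch_latency_us / write_latency_us:.2%}")
    # -> switch share: 0.23%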

On Thu, 22 Mar 2018 at 17:14, Subhachandra Chandra <schan...@grailbio.com>
wrote:

> Latency is a concern if your application is sending one packet at a time
> and waiting for a reply. If you are streaming large blocks of data, the
> first packet is delayed by the network latency, but after that you will
> receive a continuous 10Gbps stream. The latency for jumbo frames vs
> 1500-byte frames depends on the switch type: on a cut-through switch
> there is very little difference, but on a store-and-forward switch it
> will be proportional to packet size. Most modern switching ASICs are
> capable of cut-through operation.
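>
> To make that concrete, a quick sketch (my own numbers, not from any
> datasheet): a store-and-forward hop pays one full serialization delay
> per frame, while a cut-through hop only waits for the header:
>
>     # Time to clock one frame onto a link, in microseconds.
>     def serialization_delay_us(frame_bytes, link_gbps):
>         return frame_bytes * 8 / (link_gbps * 1e3)
>
>     for size in (64, 1500, 9000):
>         print(f"{size:>5}B at 10G: {serialization_delay_us(size, 10):.2f} us")
>     # 64B -> 0.05 us, 1500B -> 1.20 us, 9000B -> 7.20 us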
>
> Subhachandra
>
> On Wed, Mar 21, 2018 at 7:15 AM, Willem Jan Withagen <w...@digiware.nl>
> wrote:
>
>> On 21-3-2018 13:47, Paul Emmerich wrote:
>> > Hi,
>> >
>> > 2.3µs is a typical delay for a 10GBASE-T connection. But fiber or SFP+
>> > DAC connections should be faster: switches are typically in the range of
>> > ~500ns to 1µs.
>> >
>> >
>> > But you'll find that this small difference in latency induced by the
>> > switch will be quite irrelevant in the grand scheme of things when using
>> > the Linux network stack...
>>
>> But I think it does when people start to worry about selecting
>> high-clock-speed CPUs versus packages with more cores...
>>
>> 900ns is quite a lot if you have that mindset.
>> And it is probably 1800ns, because the delay will be at both ends.
>> Or perhaps even 3600ns, because the delay is added at every ethernet
>> connector???
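>>
>> Spelling out those three scenarios as plain arithmetic (whether the
>> penalty really accrues per connector is pure speculation on my part):
>>
>>     penalty_ns = 900  # copper minus fiber, from the table quoted below
>>     print(1 * penalty_ns, "ns - switch traversal only")
>>     print(2 * penalty_ns, "ns - penalty at both ends of the link")
>>     print(4 * penalty_ns, "ns - every connector, NIC-switch-NIC")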
>>
>> But I'm inclined to believe you that the network stack could take quite
>> some time...
>>
>>
>> --WjW
>>
>>
>> > Paul
>> >
>> > 2018-03-21 12:16 GMT+01:00 Willem Jan Withagen <w...@digiware.nl>:
>> >
>> >     Hi,
>> >
>> >     I just ran into this table for a 10G Netgear switch we use:
>> >
>> >     Fiber delays:
>> >     10 Gbps fiber delay (64-byte packets): 1.827 µs
>> >     10 Gbps fiber delay (512-byte packets): 1.919 µs
>> >     10 Gbps fiber delay (1024-byte packets): 1.971 µs
>> >     10 Gbps fiber delay (1518-byte packets): 1.905 µs
>> >
>> >     Copper delays:
>> >     10 Gbps copper delay (64-byte packets): 2.728 µs
>> >     10 Gbps copper delay (512-byte packets): 2.85 µs
>> >     10 Gbps copper delay (1024-byte packets): 2.904 µs
>> >     10 Gbps copper delay (1518-byte packets): 2.841 µs
>> >
>> >     Fiber delays:
>> >     1 Gbps fiber delay (64-byte packets): 2.289 µs
>> >     1 Gbps fiber delay (512-byte packets): 2.393 µs
>> >     1 Gbps fiber delay (1024-byte packets): 2.423 µs
>> >     1 Gbps fiber delay (1518-byte packets): 2.379 µs
>> >
>> >     Copper delays:
>> >     1 Gbps copper delay (64-byte packets): 2.707 µs
>> >     1 Gbps copper delay (512-byte packets): 2.821 µs
>> >     1 Gbps copper delay (1024-byte packets): 2.866 µs
>> >     1 Gbps copper delay (1518-byte packets): 2.826 µs
>> >
>> >     So the difference is serious: 900ns on a total of 1900ns for a 10G
>> >     packet.
>> >     The other strange thing is that 1024-byte packets are slower than
>> >     1518-byte ones.
>> >
>> >     So it might be worth connecting boxes with optics instead of CAT
>> >     cabling if you are trying to squeeze the max out of a setup.
>> >
>> >     The sad thing is that they do not report numbers for jumbo frames,
>> >     and doing these measurements yourself is not easy...
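>> >
>> >     The best I can do is extrapolate: a rough sketch, assuming the
>> >     delay grows linearly with frame size (a pure guess on my part):
>> >
>> >         sizes = [64, 512, 1024, 1518]          # 10G fiber rows above
>> >         delays = [1.827, 1.919, 1.971, 1.905]  # in us
>> >         n = len(sizes)
>> >         mx, my = sum(sizes) / n, sum(delays) / n
>> >         # least-squares slope of delay vs frame size
>> >         num = sum((x - mx) * (y - my) for x, y in zip(sizes, delays))
>> >         den = sum((x - mx) ** 2 for x in sizes)
>> >         slope = num / den
>> >         print(f"9000-byte estimate: {my + slope * (9000 - mx):.2f} us")
>> >         # -> ~2.38 us; far below the ~7.2 us a store-and-forward hop
>> >         # would add, so this switch is presumably cut-through.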
>> >
>> >     --WjW
>> >
>> >
>> >
>> >
>> > --
>> > Paul Emmerich
>> >
>> > croit GmbH
>> > Freseniusstr. 31h
>> > 81247 München
>> > www.croit.io
>> > Tel: +49 89 1896585 90
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
