I second...

Note that even with a "single VL", no endpoint can freeze the fabric -
if a multicast receiver has gone to breakfast it would just lose its
own packets rather than introducing congestion.
The only way an end-node can cause congestion is if its internal buses
don't match the IB link's BW, but this is unrelated to (lack of)
transport-level flow control.
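
For illustration, at the verbs level it looks roughly like this (a
rough sketch, untested, error checking omitted; the MGID/MLID values
below are made up -- in real code they come from the multicast group
join):

#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);

    struct ibv_qp_init_attr qpia = {
        .send_cq = cq, .recv_cq = cq,
        .cap = { .max_send_wr = 1, .max_recv_wr = 16,
                 .max_send_sge = 1, .max_recv_sge = 1 },
        .qp_type = IBV_QPT_UD,
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &qpia);

    struct ibv_qp_attr attr = {
        .qp_state = IBV_QPS_INIT, .pkey_index = 0,
        .port_num = 1, .qkey = 0x11111111,     /* arbitrary qkey */
    };
    ibv_modify_qp(qp, &attr, IBV_QP_STATE | IBV_QP_PKEY_INDEX |
                             IBV_QP_PORT | IBV_QP_QKEY);
    attr.qp_state = IBV_QPS_RTR;
    ibv_modify_qp(qp, &attr, IBV_QP_STATE);    /* RTR suffices to receive */

    union ibv_gid mgid;
    memset(&mgid, 0, sizeof mgid);
    mgid.raw[0] = 0xff;                        /* hypothetical MGID */
    ibv_attach_mcast(qp, &mgid, 0xc001);       /* hypothetical MLID */

    /* The point above: a packet arriving while the RQ is empty is
     * simply dropped at this QP (receiver overrun).  Nothing here can
     * pause the sender or grow a congestion tree -- link-level credits
     * are handled per port, not held hostage by one slow QP. */
    char buf[2048];                            /* 40-byte GRH + payload */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, sizeof buf,
                                   IBV_ACCESS_LOCAL_WRITE);
    struct ibv_sge sge = { .addr = (uintptr_t) buf, .length = sizeof buf,
                           .lkey = mr->lkey };
    struct ibv_recv_wr wr = { .sg_list = &sge, .num_sge = 1 }, *bad;
    ibv_post_recv(qp, &wr, &bad);              /* we only get what we post */

    return 0;
}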
--Liran


-----Original Message-----
From: linux-rdma-ow...@vger.kernel.org
[mailto:linux-rdma-ow...@vger.kernel.org] On Behalf Of Roland Dreier
Sent: Thursday, December 24, 2009 2:47 AM
To: Or Gerlitz
Cc: Or Gerlitz; Liran Liss; Yevgeny Petrilin; Richard Frank; Sean Hefty;
Linux RDMA list; Paul Grun
Subject: Re: RDMAoE / lossless Ethernet (ewg: SC'09 BOF - Meeting notes)


> To start with, no matter how many data VLs are used (e.g. one), all
> the crucial management traffic (SMPs) goes on VL15, which is on the
> one hand lossy and on the other hand not subject to congestion when
> other VLs are. Now how would you manage your Cisco switch
> --remotely-- on a globally paused fabric when some multicast
> receiver hasn't had its breakfast and now slows the sender while
> filling the queues throughout the congestion tree that this switch
> is part of?

There's not really an analog of QP0/VL15 traffic in IBoE (no SM, etc).
The analog of switch management traffic would either be on a separate
management network (and I wouldn't be surprised if many IBoE fabrics
have 100 meg management networks next to the 10/40G data fabric), or
would be QP1 traffic on the same data VL.  Yes, this leads to problems
if the fabric is congested, but many IB production fabrics seem to cope.
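
(The "no SM" point shows up directly in how connections get set up: on
IBoE, librdmacm address/route resolution runs off the IP stack's
neighbour tables instead of SA path record queries.  A rough sketch,
untested, error handling elided, with a made-up peer address and port:

#include <stdio.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <rdma/rdma_cma.h>

static int wait_for(struct rdma_event_channel *ch,
                    enum rdma_cm_event_type expected)
{
    struct rdma_cm_event *ev;
    if (rdma_get_cm_event(ch, &ev))
        return -1;
    int ok = (ev->event == expected);
    rdma_ack_cm_event(ev);
    return ok ? 0 : -1;
}

int main(void)
{
    struct rdma_event_channel *ch = rdma_create_event_channel();
    struct rdma_cm_id *id;
    rdma_create_id(ch, &id, NULL, RDMA_PS_TCP);

    struct sockaddr_in dst = { .sin_family = AF_INET,
                               .sin_port = htons(18515) }; /* made up */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);        /* made up */

    /* IP -> GID mapping via the normal IP stack; no SM/SA involved */
    rdma_resolve_addr(id, NULL, (struct sockaddr *) &dst, 2000);
    wait_for(ch, RDMA_CM_EVENT_ADDR_RESOLVED);

    /* likewise, no SA path record query on IBoE */
    rdma_resolve_route(id, 2000);
    wait_for(ch, RDMA_CM_EVENT_ROUTE_RESOLVED);

    printf("resolved address and route without an SM\n");
    rdma_destroy_id(id);
    rdma_destroy_event_channel(ch);
    return 0;
}

The CM exchange that follows rides QP1 on the same data VL, as above.)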

As I said, DCB is definitely useful for IBoE and also has many
advantages even for non-RDMA deployments, but conversely I think IBoE
may be useful in production, even in non-DCB classical ethernet fabrics.

> To continue with, lossless is good, but to make your cluster usable
> under congestion you need congestion control, that is QCN, which is
> designed/optimized for the case of multiple TCs.

I am not aware of a single production deployment of IB congestion
management.  So clearly it's a "nice to have" but again not a prereq for
production use.

> Also, IBoE can potentially find its way into much more complex
> environments than IB has, specifically to clusters whose hosts are
> acting as hypervisors running many, many VMs, where the underlying
> fabric consolidates many types of traffic.  Globally pausing a port
> can dramatically reduce the efficiency of such a computing center,
> which was probably built to increase efficiency in the first place.

Sure, DCB is very useful in many environments, and maybe even a
requirement sometimes.  I'm simply trying to say that IBoE with
classical ethernet is at least as useful as standard IB in many cases
(IBoE without DCB is roughly equivalent to IB without QoS, and most IB
deployments still don't use QoS).
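
(To make the QoS comparison concrete: on the host side, IB QoS boils
down to one field -- the SL in the address vector, which the SM's
SL2VL tables map onto a VL, and which most apps just leave at 0.  A
rough sketch for an RC QP, assuming a QP already in INIT and remote
parameters exchanged out of band:

#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

static int rc_to_rtr_with_sl(struct ibv_qp *qp, uint16_t remote_lid,
                             uint32_t remote_qpn, uint8_t sl)
{
    struct ibv_qp_attr attr;

    memset(&attr, 0, sizeof attr);
    attr.qp_state           = IBV_QPS_RTR;
    attr.path_mtu           = IBV_MTU_1024;
    attr.dest_qp_num        = remote_qpn;
    attr.rq_psn             = 0;
    attr.max_dest_rd_atomic = 1;
    attr.min_rnr_timer      = 12;
    attr.ah_attr.dlid       = remote_lid;
    attr.ah_attr.sl         = sl;   /* the whole QoS knob, host-side */
    attr.ah_attr.port_num   = 1;

    return ibv_modify_qp(qp, &attr,
                         IBV_QP_STATE | IBV_QP_AV | IBV_QP_PATH_MTU |
                         IBV_QP_DEST_QPN | IBV_QP_RQ_PSN |
                         IBV_QP_MAX_DEST_RD_ATOMIC | IBV_QP_MIN_RNR_TIMER);
}

Everything else -- the SL2VL mapping, VL arbitration -- lives in the SM
configuration, which is exactly the part most deployments never touch.)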

 - R.