At 11:55 AM 8/25/2006, Greg Lindahl wrote:
On Fri, Aug 25, 2006 at 10:00:50AM -0400, Thomas Bachman wrote:

> Not that I have any stance on this issue, but is this the text in the
> spec that is being debated?
>
> (page 269, section 9.5, Transaction Ordering):
> "An application shall not depend upon the order of data writes to
> memory within a message. For example, if an application sets up
> data buffers that overlap, for separate data segments within a
> message, it is not guaranteed that the last sent data will always
> overwrite the earlier."

No. The case we're talking about is different from the example.
There's text elsewhere which says, basically, that you can't access
the data buffer until seeing the completion.


> I'm assuming that the spec authors had reason for putting this in there, so
> maybe they could provide guidance here?

We put that text there to accommodate differing memory controller architectures, coherency protocol capabilities, and so on.  Basically, there is no way to guarantee that memory is in a usable and correct state until the completion is seen.  The text was intended to guide software not to peek at memory but to examine a completion queue entry, so that if memory is updated out of order, silent data corruption does not occur.
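
To make that concrete, here is a rough receive-side sketch of what the text steers software toward (verbs-style C using <infiniband/verbs.h>; cq, recv_buf and consume() are placeholder names assumed to have been set up elsewhere, not anything taken from the spec):

    struct ibv_wc wc;
    int n;

    /* Wait for the completion queue entry; the CQE, not the memory
     * contents, is the only safe signal that the data has landed. */
    do {
            n = ibv_poll_cq(cq, 1, &wc);
    } while (n == 0);

    if (n < 0 || wc.status != IBV_WC_SUCCESS) {
            /* error path: the buffer contents are undefined here */
            return -1;
    }

    /* Only after seeing the successful CQE is it safe to read
     * recv_buf[0 .. wc.byte_len - 1]; before that, the writes into
     * the buffer may have been placed out of order. */
    consume(recv_buf, wc.byte_len);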


I can't speak for the authors, but as an implementor I can say this has a huge impact on implementation.

For example, on an architecture where you need to do work such as flushing the cache before accessing DMAed data, that work is done as part of completion processing. x86 in general is not such an architecture, but such architectures exist. IB is intended to be portable to any CPU architecture.
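
(Roughly what that work looks like in Linux DMA-API terms, as a sketch only; dev, dma_handle, buf and len are placeholders assumed to exist:)

    /* Non-coherent platform: the completion path must invalidate stale
     * cache lines before the CPU looks at the DMAed receive buffer. */
    dma_sync_single_for_cpu(dev, dma_handle, len, DMA_FROM_DEVICE);
    /* ... the CPU may now safely read buf ... */
    dma_sync_single_for_device(dev, dma_handle, len, DMA_FROM_DEVICE);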

Invalidation protocol is one concern.  The other is that a completion notification often acts as a flush of the local I/O fabric as well.  In the case of an RDMA Write, the only way to safely determine complete delivery is to use an RDMA Write / Send (with completion) combination or an RDMA Write / RDMA Read, depending upon which side requires the completion knowledge.
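
An initiator-side sketch of the RDMA Write / Send pattern (verbs-style C; qp, mr, local_buf, len, remote_addr and rkey are placeholders assumed to have been set up elsewhere):

    struct ibv_sge sge = {
            .addr   = (uintptr_t) local_buf,
            .length = len,
            .lkey   = mr->lkey,
    };

    /* A (possibly zero-byte) Send chained after the Write on the same QP.
     * When the target sees the receive completion for this Send, the data
     * from the preceding Write is guaranteed to be visible in its memory. */
    struct ibv_send_wr send_wr = {
            .wr_id      = 2,
            .opcode     = IBV_WR_SEND,
            .send_flags = IBV_SEND_SIGNALED,
    };

    struct ibv_send_wr write_wr = {
            .wr_id   = 1,
            .next    = &send_wr,
            .sg_list = &sge,
            .num_sge = 1,
            .opcode  = IBV_WR_RDMA_WRITE,
            .wr.rdma.remote_addr = remote_addr,
            .wr.rdma.rkey        = rkey,
    };

    struct ibv_send_wr *bad_wr;
    if (ibv_post_send(qp, &write_wr, &bad_wr))
            return -1;   /* post failed; neither WR was queued */

The target still has to post a receive for that Send, and it should only trust the written buffer after it polls the corresponding receive completion, exactly as in the earlier sketch.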


For iWarp, the issue is that packets are frequently reordered.

Neither IP nor Ethernet reorders packets that often in practice.  The same is true for packet drop rates (the real issue with packet drops is the impact on performance and recovery times, which is why IB was not designed to work over long or diverse topologies where intermediate elements may see what might be termed a high packet loss rate).

Mike


-- greg


_______________________________________________
openib-general mailing list
openib-general@openib.org
http://openib.org/mailman/listinfo/openib-general

To unsubscribe, please visit http://openib.org/mailman/listinfo/openib-general