> >so it seems like what we need is a bit in the IP header to indicate
> >that L2 integrity checks are optional, and to specify for various
> >kinds of IP-over-FOO how to implement that bit in FOO.
>
> How would an app know to set this bit? The problem is that different
> L2s will have different likelihoods of corruption; you may decide that
> it's safe to set the bit on Ethernet, but not on 802.11*. And, in
> general, the app doesn't know all of the L2s that may be involved when
> it sends a packet.
I'm not sure that the app needs to know how much lossage to expect, or to specify how much lossage is too much. It just wants the bits, errors included. Depending on its needs, the app might dynamically adapt to varying degrees of error by adding its own FEC, end-to-end retransmission, and/or interleaving; this probably works better than having the app either determine statically how much error to expect or specify to the network how much error is acceptable.

I suppose we could define a maximum error rate (say 15%) that IP-over-FOO should be designed to provide if the "lossage okay" bit is set. But practically speaking, I doubt that's necessary: links designed to carry lossy traffic will already have enough FEC or whatever to suit that kind of traffic.

The biggest questions I have are:

- where to put this bit?
- are there unintended consequences of doing this that we can foresee?

Keith
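[A rough sketch of the kind of app-level FEC adaptation mentioned above, assuming the app detects a corrupted packet with its own checksum and treats it as an erasure. One XOR parity packet per group of k equal-length data packets lets the receiver rebuild any single loss; all names here are hypothetical, not part of any proposal.]

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Byte-wise XOR of two equal-length buffers.
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets: list[bytes]) -> bytes:
    # Parity packet = XOR of all k data packets in the group.
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = xor_bytes(parity, p)
    return parity

def recover(received: dict[int, bytes], parity: bytes, k: int) -> dict[int, bytes]:
    # Rebuild at most one missing packet: XOR the parity with every
    # packet that did arrive, leaving the missing one.
    missing = [i for i in range(k) if i not in received]
    if len(missing) == 1:
        rebuilt = parity
        for p in received.values():
            rebuilt = xor_bytes(rebuilt, p)
        received[missing[0]] = rebuilt
    return received

# e.g. send a group of 4 plus parity; packet 2 arrives corrupted and is
# discarded by the app's own check, then rebuilt from the parity:
group = [b"aaaa", b"bbbb", b"cccc", b"dddd"]
parity = make_parity(group)
arrived = {0: group[0], 1: group[1], 3: group[3]}
fixed = recover(arrived, parity, 4)   # fixed[2] == b"cccc"
```

An app could raise or lower k (or add interleaving) as its observed error rate changes, which is the dynamic adaptation the text describes.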