Gruesse, Carsten;

> > How would an app know to set this bit?  The problem is that different
> > L2s will have different likelihoods of corruption; you may decide that
> > it's safe to set the bit on Ethernet, but not on 802.11*.
>
> Aah, there's the confusion.  The apps we have in mind would think that
> it is pointless (but harmless) to set the bit on Ethernet, but would be
> quite interested in setting it on 802.11*.
If the application is voice and the datalink is SONET, it may be fine
to ignore some errors.  In general, however, the problem is that
corrupted packets often contain so many bit errors that they are
almost meaningless, even when the content is voice.

It may still be possible to design a datalink protocol that ignores a
few bit errors in a packet, but no more.  But then it is so easy to
add ECC that fixes those few errors that no sane person would design
such a protocol.

> in the middle. (Reducing packet sizes to achieve a similar effect is
> highly counterproductive on media with a very high per-packet overhead
> such as 802.11.) Of course, 802.11 has retransmissions, so maybe this
> is a bad example, but it does illustrate the point.

The problem with 802.11 is that packets are often corrupted heavily
by collisions (from hidden terminals).

> *) e.g., in order to salvage half of a video packet that got an error

You need an extra checksum, which consumes bandwidth.  Note that, in
the multicast case, you can't behave adaptively toward each receiver,
so all the receivers suffer the bandwidth loss.

							Masataka Ohta
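To make the ECC point concrete: if a link protocol is only ever going to tolerate a few bit errors per packet, a small error-correcting code can fix that many errors outright. Below is a toy sketch (mine, not anything from the thread) using the classic Hamming(7,4) code, which corrects any single flipped bit in a 7-bit codeword at a cost of 3 parity bits per 4 data bits.

```python
# Toy Hamming(7,4) encoder/decoder: corrects any single bit error,
# illustrating that "tolerate a few bit errors" is cheaper to do with
# ECC than with an error-ignoring link protocol.  Not any real L2.

def hamming74_encode(d):
    # d: list of 4 data bits -> 7-bit codeword.
    # Positions 1..7; parity bits sit at positions 1, 2 and 4.
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    # Recompute the parities; the syndrome is the 1-based position of
    # a single flipped bit (0 means no error detected).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    c = list(c)
    if syndrome:
        c[syndrome - 1] ^= 1  # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
codeword[5] ^= 1                         # one bit error on the "link"
assert hamming74_decode(codeword) == data
```

The bandwidth overhead (3 bits per 4) is paid on every packet, errored or not, which is exactly the trade-off being argued about.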
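The "extra checksum" needed to salvage part of an errored packet can be sketched as a partial-coverage checksum in the style later standardized as UDP-Lite: the checksum protects only the header and the first half of the payload, so errors in the unprotected tail do not force the packet to be dropped. All names below are mine, for illustration only; the coverage field and checksum are the bandwidth cost mentioned above.

```python
# Sketch of partial checksum coverage: protect the header plus the
# first half of the payload, leave the rest unprotected.

def internet_checksum(data: bytes) -> int:
    # Standard 16-bit one's-complement Internet checksum.
    if len(data) % 2:
        data += b"\x00"
    s = 0
    for i in range(0, len(data), 2):
        s += (data[i] << 8) | data[i + 1]
        s = (s & 0xffff) + (s >> 16)  # fold the carry back in
    return ~s & 0xffff

def make_packet(header: bytes, payload: bytes):
    # Coverage = header + first half of payload; the checksum and the
    # coverage length are the extra bits that consume bandwidth.
    coverage = len(header) + len(payload) // 2
    pkt = header + payload
    return pkt, coverage, internet_checksum(pkt[:coverage])

def acceptable(pkt: bytes, coverage: int, cksum: int) -> bool:
    # Deliver the packet as long as the *covered* part is intact.
    return internet_checksum(pkt[:coverage]) == cksum

pkt, cov, ck = make_packet(b"HDR!", b"voice-or-video-payload..")
tail_hit = bytearray(pkt); tail_hit[-1] ^= 0x40  # error in unprotected tail
head_hit = bytearray(pkt); head_hit[1] ^= 0x40   # error in protected part
assert acceptable(bytes(tail_hit), cov, ck)      # salvaged
assert not acceptable(bytes(head_hit), cov, ck)  # dropped
```

Note the receiver behaves identically for every receiver of a multicast group: the coverage is fixed by the sender, which is the non-adaptive multicast limitation raised above.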