Hi all,

It's common wisdom that a datagram that needs to be fragmented between 
endpoints (because it is bigger than the path MTU) will see less 
reliable delivery and reassembly than a datagram that doesn't need to be 
fragmented, because of the math (every fragment has to arrive), firewalls 
that drop fragments, or something else; take your pick.
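
To make the "math" part concrete, here's a back-of-the-envelope Python 
sketch, assuming independent per-fragment loss and plain IPv4 
fragmentation (the 1% loss rate is just an illustrative input, not a 
measured figure):

    import math

    def fragments(datagram_size, mtu):
        # IPv4 fragmentation: each fragment repeats the 20-byte header,
        # and offsets are expressed in 8-byte units, so every fragment
        # but the last carries a multiple of 8 payload bytes
        payload = datagram_size - 20
        per_fragment = (mtu - 20) // 8 * 8
        return math.ceil(payload / per_fragment)

    def delivery_probability(datagram_size, mtu, loss):
        # reassembly needs every fragment, so survival compounds
        n = fragments(datagram_size, mtu)
        return (1 - loss) ** n

    # e.g. a 10,000-byte datagram over a 1500-byte MTU is 7 fragments;
    # at 1% loss that's 0.99 ** 7 ~= 0.93, vs 0.99 unfragmented
    print(delivery_probability(10000, 1500, 0.01))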

Is anybody aware of any wide-scale studies that examine the probability of 
fragmentation of datagrams of different sizes?

For example, I could reasonably expect an IPv4 packet of 576 bytes not to be 
fragmented very often (to choose a size not at random). The probability of a 
10,000-octet IPv4 packet getting fragmented seems likely to be 100%, if we're 
talking about arbitrary paths across the Internet.

What does the curve look like between 576 bytes and 10,000 bytes?

I might expect exciting curve action around 1500 bytes (because Ethernet), 
1492 (PPPoE), 1476 (GRE), etc. But I'm interested in actual data.
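
In the absence of published data, one crude way to find the dip points on 
a single path is to send DF-set pings at increasing sizes and see where 
they start bouncing. A rough sketch using the Linux iputils ping (the 
host is a placeholder, and this measures one path from one vantage 
point, not an Internet-wide distribution):

    import subprocess

    def df_ping(host, payload_size):
        # one ICMP echo with DF set; total IPv4 packet size is
        # payload_size + 28 (20-byte IP header + 8-byte ICMP header)
        result = subprocess.run(
            ["ping", "-c", "1", "-M", "do", "-s", str(payload_size), host],
            capture_output=True,
        )
        return result.returncode == 0

    def probe(host):
        # step across the interesting region; any total size that
        # fails would have needed fragmentation on this path
        for total in range(576, 1501, 4):
            ok = df_ping(host, total - 28)
            print(f"{total}: {'fits' if ok else 'would fragment'}")

    probe("example.net")   # placeholder destination

That only characterizes a single path, of course; the wide-scale 
distribution is exactly what I'm hoping someone has already measured.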

Anybody have any pointers? IPv4 and IPv6 are both interesting.


Joe
