RE: Strange public traceroutes return private RFC1918 addresses
And why 4470 for POS? Did everyone borrow a vendor's FDDI-like default or is there a technical reason? PPP seems able to use 64k packets (as can the frame-based version of GFP, incidentally, POS's likely replacement). According to this URL http://www.columbia.edu/acis/networks/advanced/jumbo/jumbo.html which you have seen before, the number of CRC bits in the protocol header limits the number of bytes you can practically use for the MTU. I expect that we won't go beyond 9000 byte MTUs for a long time. The 4470 for POS probably comes from Token Ring originally. In the original 4 Mbps token ring a device was allowed to hold the token for 9.1 ms which was enough time to transmit 4550 octets. This timing was probably adopted by FDDI which borrowed a lot from the token ring design. No doubt, the designers of POS were also influenced by token ring and just chose the same size. --Michael Dillon
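The token-holding arithmetic above checks out with a one-line calculation (a sketch using the 9.1 ms figure quoted in the post, not a value taken from the 802.5 spec itself):

```python
# Token ring holding-time arithmetic, figures as quoted in the post.
line_rate_bps = 4_000_000   # original 4 Mbps token ring
hold_time_s = 9.1e-3        # maximum token holding time: 9.1 ms

octets = line_rate_bps * hold_time_s / 8
print(octets)  # 4550.0 octets, in the neighborhood of today's 4470/4352 MTUs
```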
Re: Strange public traceroutes return private RFC1918 addresses
How does a 50Mbyte MTU sound? http://www.psc.edu/~mathis/MTU/ ~Hani Mustafa

Re: Strange public traceroutes return private RFC1918 addresses
On 3-feb-04, at 11:47, [EMAIL PROTECTED] wrote: Which (as discussed previously) breaks things like Path MTU Discovery, traceroute, If RFC1918 addresses are used only on interfaces with jumbo MTUs on the order of 9000 bytes then it doesn't break PMTUD in a 1500 byte Ethernet world. And it doesn't break traceroute. You mean if they use 9000 bytes + RFC 1918 for the internal links and 1500 + real addresses for the external links there are no problems, even when people filter the RFC 1918 addresses? That would be correct in the case where this is a single organization network. But if it's a service provider network, there may be customers somewhere that connect over 1500 byte + links. (And never mind the fact that firewall admins are incredibly paranoid and also often filter RFC 1918 sources.) A more important question is what will happen as we move out of the 1500 byte Ethernet world into the jumbo gigE world. Not as much as I'd hoped. My powerbook has gigabit ethernet but it's limited to 1500 byte frames. It's only a matter of time before end users will be running gigE networks and want to use jumbo MTUs on their Internet links. The internet has always been a network with a variable MTU size. Even today under the iron rule of ether there are many systems with MTUs that aren't 1500. And yes, obviously people will want larger MTUs. I had the opportunity to work with a couple of boxes with 10 gigabit ethernet interfaces today. Unfortunately, I was unable to squeeze more than 1.5 gbit out of them over TCP. That's 125000 packets per second at 1500 bytes, which makes no sense any which way you slice it. (And the driver did actually do 125k interrupts per second, which probably explains the poor throughput.) Could we all agree on a hierarchy of jumbo MTU sizes, with the largest sizes in the core and the smallest sizes at the edge? The increment in sizes should allow for a layer or two of encapsulation and peering routers should use the largest size MTU. No need. 
Simply always use the largest possible MTU and make sure path MTU discovery works. If you have a range of maximum MTU sizes that is pretty close (9000 and 9216 are both common) it could make sense to standardize on the lowest in the range to avoid successive PMTUD drops but apart from that there is little to be gained by over-designing. Oh yes: there were some calculations in other postings, which were quite misleading as they only looked at the 20 byte IP overhead. There's also TCP overhead (20 bytes), often a timestamp option (12 bytes) and of course the ethernet overhead which is considerable: 8 byte preamble, 14 byte header, 4 byte FCS and an inter frame gap that is equivalent to 12 bytes. So a 1500 byte IP packet takes up 1538 bytes on the wire while it only has a 1460 byte payload (94.9% efficiency). A 9000 byte IP packet takes up 9038 bytes and delivers a 8960 byte payload (99.1%). 1520 bytes in a single packet would be 95% efficiency but fragmenting this packet would create a new IP packet with a 24 byte payload for a total of 44 bytes which is padded to 46 because of the ethernet minimum packet size, for a total bandwidth use on the wire of 1618 bytes, making for an efficiency rating of 91.5%. (Fragmenting 1520 in 1496 and 44 is pretty stupid by the way, 768 and 772 would be much better, thinking of the reasons why is left as an exercise for the reader.)
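The wire-efficiency arithmetic above is easy to reproduce (a sketch using the overhead figures from the post; TCP options are ignored except where the post counts them):

```python
ETH_OVERHEAD = 8 + 14 + 4 + 12   # preamble + header + FCS + inter-frame gap
ETH_MIN_PAYLOAD = 46             # minimum Ethernet payload; shorter frames are padded

def wire_bytes(ip_len):
    """Bytes consumed on the wire by one IP packet of ip_len bytes."""
    return max(ip_len, ETH_MIN_PAYLOAD) + ETH_OVERHEAD

def efficiency(ip_len):
    """TCP payload delivered per wire byte (20-byte IP + 20-byte TCP headers)."""
    return (ip_len - 20 - 20) / wire_bytes(ip_len)

print(f"{efficiency(1500):.1%}")   # ~94.9%, matching the post
print(f"{efficiency(9000):.1%}")   # ~99.1%

# A 1520-byte packet fragmented into 1496 + 44 (the 44 padded to 46 on the wire):
frag_wire = wire_bytes(1496) + wire_bytes(44)      # 1534 + 84 = 1618 bytes
print(frag_wire, f"{(1520 - 40) / frag_wire:.1%}") # ~91.5% efficiency
```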
Re: Strange public traceroutes return private RFC1918 addresses
[EMAIL PROTECTED] wrote: Search the archives, Comcast and other cable/DSL providers use the 10/8 for their infrastructure. The Internet itself doesn't need to be Internet routable. Only the edges need to be routable. It is common practice to use RFC1918 address space inside the network. Companies like Sprint and Verio use 'real' IPs but don't announce them to their peers on customer edge routes. Are you sure about Sprint? I was told that Sprint DOES announce edge blocks to peers/customers (for uRPF I guess) but blackholes this block at the edge. Thus you can still traceroute the IP up to Sprint's edge, but cannot get into Sprint's network. This is a hot issue for Opentransit since we are considering not announcing some infrastructure blocks. I think that Sprint's way is rather smart: . It prevents/mitigates infrastructure DDoS . It keeps working with uRPF-enabled peers. Vincent, Opentransit - France Telecom
Re: Strange public traceroutes return private RFC1918 addresses
Which (as discussed previously) breaks things like Path MTU Discovery, traceroute, If RFC1918 addresses are used only on interfaces with jumbo MTUs on the order of 9000 bytes then it doesn't break PMTUD in a 1500 byte Ethernet world. And it doesn't break traceroute. We just lose the DNS hint about the router location. A more important question is what will happen as we move out of the 1500 byte Ethernet world into the jumbo gigE world. It's only a matter of time before end users will be running gigE networks and want to use jumbo MTUs on their Internet links. Could we all agree on a hierarchy of jumbo MTU sizes, with the largest sizes in the core and the smallest sizes at the edge? The increment in sizes should allow for a layer or two of encapsulation and peering routers should use the largest size MTU. Thoughts? --Michael Dillon
RE: Strange public traceroutes return private RFC1918 addresses
A more important question is what will happen as we move out of the 1500 byte Ethernet world into the jumbo gigE world. It's only a matter of time before end users will be running gigE networks and want to use jumbo MTUs on their Internet links. The performance gain achieved by using jumbo frames outside of very specific LAN scenarios is highly questionable, and they're still not standardized. Are jumbo Internet MTUs seen as a pressing issue by ISPs and vendors these days? -Terry
Re: Strange public traceroutes return private RFC1918 addresses
A more important question is what will happen as we move out of the 1500 byte Ethernet world into the jumbo gigE world. It's only a matter of time before end users will be running gigE networks and want to use jumbo MTUs on their Internet links. The performance gain achieved by using jumbo frames outside of very specific LAN scenarios is highly questionable, and they're still not standardized. Are jumbo Internet MTUs seen as a pressing issue by ISPs and vendors these days? -Terry for some, yes. running 1ge is fairly common and 10ge is maturing. bleeding edge 40ge is available ... and 1500byte mtu is -not- an option. --bill
Re: Strange public traceroutes return private RFC1918 addresses
bill wrote: for some, yes. running 1ge is fairly common and 10ge is maturing. bleeding edge 40ge is available ... and 1500byte mtu is -not- an option. Me wonders why people ask for 40 byte packets at linerate if the mtu is supposedly larger? Pete
RE: Strange public traceroutes return private RFC1918 addresses
On Tue, 3 Feb 2004, Terry Baranski wrote: A more important question is what will happen as we move out of the 1500 byte Ethernet world into the jumbo gigE world. It's only a matter of time before end users will be running gigE networks and want to use jumbo MTUs on their Internet links. The performance gain achieved by using jumbo frames outside of very specific LAN scenarios is highly questionable, and they're still not standardized. Are jumbo Internet MTUs seen as a pressing issue by ISPs and vendors these days? Being in a position to use a default mtu larger than 1500 would be nice given the number of tunnels of varying varieties that have to fragment because the packets going into them are themselves 1500 bytes... 4352 and 4470 are fairly common in the internet today... edge networks that are currently jumbo enabled for the most part do just fine when talking to the rest of the internet since they can do path mtu discovery... non-jumbo enabled devices on the same subnet with jumbo devices become a big problem since they end up black-holed from the hosts. adoption in the core of networks is likely easier than at the end-user edges... -Terry -- -- Joel Jaeggli Unix Consulting [EMAIL PROTECTED] GPG Key Fingerprint: 5C6E 0104 BAF0 40B0 5BD3 C38B F000 35AB B67F 56B2
RE: Strange public traceroutes return private RFC1918 addresses
A more important question is what will happen as we move out of the 1500 byte Ethernet world into the jumbo gigE world. It's only a matter of time before end users will be running gigE networks and want to use jumbo MTUs on their Internet links. The performance gain achieved by using jumbo frames outside of very specific LAN scenarios is highly questionable, Did I say anything about performance? In any case, a lot of packets on the Internet have to traverse LANs at either edge and if jumbo packets can help on the LAN then people will use them. and they're still not standardized. Precisely my point. We can do jumbo MTUs of various sizes today but we need to discuss some standard ways of setting these MTUs so that we avoid MTU bottlenecks in the core. That way PMTUD will continue to be a non-issue. Are jumbo Internet MTUs seen as a pressing issue by ISPs and vendors these days? Some people like to do forward planning instead of waiting until an issue hits them in the face. By definition, forward planning will never be dealing with pressing issues. --Michael Dillon
Re: Strange public traceroutes return private RFC1918 addresses
[EMAIL PROTECTED] wrote: If RFC1918 addresses are used only on interfaces with jumbo MTUs on the order of 9000 bytes then it doesn't break PMTUD in a 1500 byte Ethernet world. And it doesn't break traceroute. We just lose the DNS hint about the router location. I'm confused about your traceroute comment. You're assuming a packet with an RFC1918 source address won't be dropped. In many cases, it will, and should be. Each organization is permitted to use the RFC1918 address space internally for any purpose they see fit. This often means they don't want people outside the organization to be able to generate packets with source addresses for machines they consider to be internal. It makes sense to drop such packets as they come into your AS. Assuming that a packet with an RFC1918 source address will get dropped as it crosses into a new AS, this will break traceroute hops, Path MTU Discovery, Network/Host unreachable, or any other ICMP that needs to be generated from a router with an RFC1918 address. Is everyone filtering RFC1918 at their edge? No. But my impression is that more and more places are. Certainly anyone who uses either Team Cymru's Bogon services or similar services (doesn't Cisco now do this in IOS as well?) will be blocking them... Bob
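The edge-filtering behaviour described above is simple to sketch: a border filter that drops anything sourced from RFC1918 space (a minimal illustration using Python's ipaddress module, not anyone's actual router configuration):

```python
import ipaddress

# The three RFC 1918 private ranges, as a bogon ingress filter would list them.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def drop_at_edge(src):
    """True if a packet with this source would be dropped by an RFC1918 ingress filter."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in RFC1918)

# ICMP from a router numbered out of 10/8 never makes it back to the sender,
# which is exactly what breaks PMTUD and traceroute hops:
print(drop_at_edge("10.16.255.10"))   # True  -> reply lost at the filtering AS
print(drop_at_edge("12.122.11.209"))  # False -> public hop passes normally
```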
Re: Strange public traceroutes return private RFC1918 addresses
On Tue, 03 Feb 2004 06:39:33 PST, Joel Jaeggli said: edge networks that are currently jumbo enabled for the most part do just fine when talking to the rest of the internet since they can do path mtu discovery... Well, until you hit one of these transit providers that uses 1918 addresses for their links. :)
Re: Strange public traceroutes return private RFC1918 addresses
In a message written on Tue, Feb 03, 2004 at 08:15:13AM -0600, Terry Baranski wrote: The performance gain achieved by using jumbo frames outside of very specific LAN scenarios is highly questionable, and they're still not standardized. Are jumbo Internet MTUs seen as a pressing issue by ISPs and vendors these days? While the rate of request is still very low, I would say we get more and more requests for jumbo frames every day. The pressing application today is larger frames; that is, don't think two hosts talking 9000 MTU frames to each other, but rather think IPSec or other tunneling boxes talking 1600 byte packets to each other so they don't have to split 1500 byte Ethernet packets in half. Since most POS is 4470, adding a jumbo frame GigE edge makes this application work much more efficiently, even if it doesn't enable jumbo (9k) frames end to end. The interesting thing here is it means there absolutely is a PMTU issue, a 9K edge with a 4470 core. There is also a lot of work going on in academic networks that uses jumbo frames. I suspect in a few more years this will make it into more common applications. In a message written on Tue, Feb 03, 2004 at 04:40:15PM +0200, Petri Helenius wrote: Me wonders why people ask for 40 byte packets at linerate if the mtu is supposedly larger? This is a problem that is going to get worse. If you support IP you have to support a 40 byte packet. As long as that exists, DDOS tools will use 40 byte packets, knowing more lookups are harder on the software/hardware in routers. At the same time I suspect software is going to continue to slowly move to larger and larger packets, because at the higher data rates (eg 40 gige) it makes a huge difference in host usage. You can fit six times the data in a 9K packet that you can in a 1500 byte packet, which means 1/6th the interrupts, DMA transfers, ACL checks, etc, etc, etc. 
-- Leo Bicknell - [EMAIL PROTECTED] - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ Read TMBG List - [EMAIL PROTECTED], www.tmbg.org
Re: Strange public traceroutes return private RFC1918 addresses
* [EMAIL PROTECTED] (Petri Helenius) [Tue 03 Feb 2004, 15:42 CET]: Me wonders why people ask for 40 byte packets at linerate if the mtu is supposedly larger? Support for the worst-case scenario. Same as why you spec support for a BIGINT-line ACL without excessive impact on forwarding capacity. -- Niels. -- Blessed are the Watchmakers, for they shall inherit the earth.
Re: Strange public traceroutes return private RFC1918 addresses
Niels Bakker wrote: * [EMAIL PROTECTED] (Petri Helenius) [Tue 03 Feb 2004, 15:42 CET]: Me wonders why people ask for 40 byte packets at linerate if the mtu is supposedly larger? Support for the worst-case scenario. Same as why you spec support for a BIGINT-line ACL without excessive impact on forwarding capacity. Why large MTU then? Most modern ethernet controllers don't care if you're sending 1500 or 9000 byte packets. (with proper drivers taking advantage of the features there) If you're paying for 40 byte packets anyway, there is no incentive to ever go beyond 1500 byte MTU. Pete
Re: Strange public traceroutes return private RFC1918 addresses
Leo Bicknell wrote: because at the higher data rates (eg 40 gige) it makes a huge difference in host usage. You can fit six times the data in a 9K packet that you can in a 1500 byte packet, which means 1/6th the interrupts, DMA transfers, ACL checks, etc, etc, etc. This is wrong. Interrupt moderation has been there for quite a while, DMA is chained and predictive. ACL checks I can agree on, but if you are optimizing the system, what do you need ACLs for anyway because you can make the applications secure in the first place? Pete
Re: Strange public traceroutes return private RFC1918 addresses
In a message written on Tue, Feb 03, 2004 at 08:40:22PM +0200, Petri Helenius wrote: If you're paying for 40 byte packets anyway, there is no incentive to ever go beyond 1500 With a 20 byte IP header: A 40 byte packet is 50% data. A 1500 byte packet is 98.7% data. A 9000 byte packet is 99.7% data. Anyone who pays by the bit should like large packets better than small packets, as you pay for less overhead bandwidth. Note that a 1500 byte IP in IP packet becomes 1520, and then gets fragmented to 1500 and a 40 byte packet (20 data, 20 header). That's only 97.3% efficient, whereas a single 1520 byte packet, if it could be carried, is 98.7% efficient. Obviously talking in smaller numbers, but to a lot of VPN vendors a 1.4% improvement in bandwidth usage, bus usage, or avoiding the path through the device that fragments a packet in the first place is a big win. -- Leo Bicknell - [EMAIL PROTECTED] - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ Read TMBG List - [EMAIL PROTECTED], www.tmbg.org
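These header-only figures reproduce directly (a sketch counting only the 20-byte IPv4 header, as the post does; note the 9000-byte case actually comes out to 99.8% rather than the quoted 99.7%):

```python
IP_HDR = 20  # bytes of IPv4 header, the only overhead counted in this post

def data_fraction(pkt_len):
    """Fraction of an IP packet that is payload, counting only the IP header."""
    return (pkt_len - IP_HDR) / pkt_len

for size in (40, 1500, 9000):
    print(size, f"{data_fraction(size):.1%}")  # 50.0%, 98.7%, 99.8%

# IP-in-IP: a 1500-byte packet grows to 1520 and fragments into 1500 + 40 bytes.
carried = 1500                           # the original packet is the useful payload
print(f"{carried / (1500 + 40):.1%}")    # fragmented: ~97.4%
print(f"{carried / 1520:.1%}")           # carried whole: ~98.7%
```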
Re: Strange public traceroutes return private RFC1918 addresses
Why large MTU then? Most modern ethernet controllers don't care if you're sending 1500 or 9000 byte packets. (with proper drivers taking advantage of the features there) If you're paying for 40 byte packets anyway, there is no incentive to ever go beyond 1500 byte MTU. I think it's partially due to removal of overhead and improvements you get out of TCP (bearing in mind it uses windowing and slow start) Bit of data on this link that I googled up, http://www-iepm.slac.stanford.edu/monitoring/bulk/10ge/20030303/tests.html
Re: Strange public traceroutes return private RFC1918 addresses
Leo Bicknell wrote: because at the higher data rates (eg 40 gige) it makes a huge difference in host usage. You can fit six times the data in a 9K packet that you can in a 1500 byte packet, which means 1/6th the interrupts, DMA transfers, ACL checks, etc, etc, etc. * [EMAIL PROTECTED] (Petri Helenius) [Tue 03 Feb 2004, 19:47 CET]: This is wrong. Interrupt moderation has been there for quite a while, DMA is chained and predictive. Just like the extra chopping up of the data you want to send into more packets, it's things you have to do a few extra times. That takes time. There is no way around this. What Leo wrote is in no way wrong. ACL checks I can agree on, but if you are optimizing the system, what do you need ACLs for anyway because you can make the applications secure in the first place? You're trolling, right? -- Niels. -- Blessed are the Watchmakers, for they shall inherit the earth.
Re: Strange public traceroutes return private RFC1918 addresses
Stephen J. Wilcox wrote: Why large MTU then? Most modern ethernet controllers don't care if you're sending 1500 or 9000 byte packets. (with proper drivers taking advantage of the features there) If you're paying for 40 byte packets anyway, there is no incentive to ever go beyond 1500 byte MTU. I think it's partially due to removal of overhead and improvements you get out of TCP (bearing in mind it uses windowing and slow start) Sure, if you control both endpoints. If you don't and receivers have small (4k, 8k or 16k) window sizes, your performance will suffer. Maybe we should define if we're talking about record breaking attempts or real operationally useful things here. Pete
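The small-window point above is the classic bandwidth-delay product limit: without window scaling, a single TCP stream tops out at window/RTT no matter how fast the link is (a sketch with an assumed 70 ms RTT, not a measurement):

```python
def max_tcp_throughput_mbps(window_bytes, rtt_s):
    """Upper bound on single-stream TCP throughput: one full window per round trip."""
    return window_bytes * 8 / rtt_s / 1e6

# Even a 64 KB receive window caps a coast-to-coast (~70 ms) transfer far below
# gigabit speeds, so jumbo frames alone buy such receivers nothing:
for win in (4 * 1024, 8 * 1024, 16 * 1024, 64 * 1024):
    print(win, f"{max_tcp_throughput_mbps(win, 0.070):.2f} Mbit/s")
```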
Re: Strange public traceroutes return private RFC1918 addresses
Niels Bakker wrote: Just like the extra chopping up of the data you want to send into more packets, it's things you have to do a few extra times. That takes time. There is no way around this. What Leo wrote is in no way wrong. Maybe we need to define what the expression huge difference means in this context. Previously it has been defined as 1.4% difference which in my opinion qualifies as understatement of the day. If we would be talking about 20% or more difference here, the pain from larger MTU might be tolerable. ACL checks I can agree on, but if you are optimizing the system, what do you need ACLs for anyway because you can make the applications secure in the first place? You're trolling, right? No. I'll trust my digital signatures over the source IP filters any day. Pete
Re: Strange public traceroutes return private RFC1918 addresses
In a message written on Tue, Feb 03, 2004 at 09:53:30PM +0200, Petri Helenius wrote: Sure, if you control both endpoints. If you don't and receivers have small (4k,8k or 16k) window sizes, your performance will suffer. Maybe we should define if we're talking about record breaking attempts or real operationally useful things here. Google and Akamai are just two examples of companies with hundreds of thousands of machines where they move large amounts of data between them and have control of both ends. Many corporations are now moving off-site backup data over the Internet, in large volumes between two end points they control. The Internet is not just web servers feeding dial-up clients. -- Leo Bicknell - [EMAIL PROTECTED] - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ Read TMBG List - [EMAIL PROTECTED], www.tmbg.org
Re: Strange public traceroutes return private RFC1918 addresses
On Tue, 3 Feb 2004, Petri Helenius wrote: Stephen J. Wilcox wrote: Why large MTU then? Most modern ethernet controllers don't care if you're sending 1500 or 9000 byte packets. (with proper drivers taking advantage of the features there) If you're paying for 40 byte packets anyway, there is no incentive to ever go beyond 1500 byte MTU. I think it's partially due to removal of overhead and improvements you get out of TCP (bearing in mind it uses windowing and slow start) Sure, if you control both endpoints. If you don't and receivers have small (4k,8k or 16k) window sizes, your performance will suffer. Maybe we should define if we're talking about record breaking attempts or real operationally useful things here. By definition of this discussion about using large MTU we are assuming that packets are arriving 1500 bytes and therefore that we do have control of the endpoints and they are set to use jumbos. Steve
Re: Strange public traceroutes return private RFC1918 addresses
Leo Bicknell wrote: Google and Akamai are just two examples of companies with hundreds of thousands of machines where they move large amounts of data between them and have control of both ends. Many corporations are now moving off-site backup data over the Internet, in large volumes between two end points they control. Makes me wonder if either one of the mentioned want to take the operational and support burden of increasing the MTU across maybe one of the most diverse set of paths in any environment. I would probably never send even a 1500 byte packet if I were either of them, but live somewhere in the low-1400 range. Pete
RE: Strange public traceroutes return private RFC1918 addresses
Leo Bicknell wrote: Since most POS is 4470, adding a jumbo frame GigE edge makes this application work much more efficiently, even if it doesn't enable jumbo (9k) frames end to end. The interesting thing here is it means there absolutely is a PMTU issue, a 9K edge with a 4470 core. This brings up the question of what other MTUs are common on the Internet, as well as which ones are simply defaults (i.e., could easily be increased) and which ones are the result of device/protocol limitations. And why 4470 for POS? Did everyone borrow a vendor's FDDI-like default or is there a technical reason? PPP seems able to use 64k packets (as can the frame-based version of GFP, incidentally, POS's likely replacement). -Terry
Re: Strange public traceroutes return private RFC1918 addresses
From: Terry Baranski [EMAIL PROTECTED] Date: Tue, 3 Feb 2004 16:42:55 -0600 Sender: [EMAIL PROTECTED] Leo Bicknell wrote: Since most POS is 4470, adding a jumbo frame GigE edge makes this application work much more efficiently, even if it doesn't enable jumbo (9k) frames end to end. The interesting thing here is it means there absolutely is a PMTU issue, a 9K edge with a 4470 core. This brings up the question of what other MTUs are common on the Internet, as well as which ones are simply defaults (i.e., could easily be increased) and which ones are the result of device/protocol limitations. And why 4470 for POS? Did everyone borrow a vendor's FDDI-like default or is there a technical reason? PPP seems able to use 64k packets (as can the frame-based version of GFP, incidentally, POS's likely replacement). 4470 was, as you surmised, to allow a full sized FDDI packet to be packed into a single POS packet. At the time FDDI was using larger packets than anything else. Now the recommendation for research and education networks (Abilene, ESnet, NASA, and many Asian and European REs) is 9000 and, within that community, is almost universally adopted when the hardware will support it. -- R. Kevin Oberman, Network Engineer Energy Sciences Network (ESnet) Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab) E-mail: [EMAIL PROTECTED] Phone: +1 510 486-8634
Re: Strange public traceroutes return private RFC1918 addresses
On Tue, Feb 03, 2004 at 11:02:16AM -0500, Leo Bicknell wrote: While the rate of request is still very low, I would say we get more and more requests for jumbo frames every day. The pressing application today is larger frames; that is, don't think two hosts talking 9000 MTU frames to each other, but rather think IPSec or other tunneling boxes talking 1600 byte packets to each other so they don't have to split 1500 byte Ethernet packets in half. Since most POS is 4470, adding a jumbo frame GigE edge makes this application work much more efficiently, even if it doesn't enable jumbo (9k) frames end to end. The interesting thing here is it means there absolutely is a PMTU issue, a 9K edge with a 4470 core. 9k isn't an absolute necessity, especially for x86. I believe the original reason for 9k as picked by Alteon was to support the 8192 byte page size on the Alpha. As long as there is enough to squeeze an x86 memory page (4096 bytes of payload) plus some room for headers, the important goal of jumbo frames (which is NOT to lower the packet/sec count, this is only a mild by-product for those who are still doing things wrong) is achieved. This would also eliminate the problems of IPSec, GRE, and other forms of tunneling which may or may not be applied breaking things where PMTUD is blocked, since the standard payload packet for TCP would only be 4136 octets (leaving plenty for other stuff). The 4470 MTU of POS meets this requirement perfectly, and the world of end to end connectivity would be an infinitely better place if everyone could expect to pass 4470 through the Internet. But alas, there are probably too many people running GigE in the core which doesn't support jumbo frames, let alone a standardized size of jumbo frame, due to various vendor hijinks, to truly make use of POS's MTU these days. -- Richard A Steenbergen [EMAIL PROTECTED] http://www.e-gerbil.net/ras GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
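The page-plus-headers arithmetic above checks out (a sketch; the 4136 figure is one x86 page plus plain 20-byte IP and TCP headers, with the remaining headroom available for tunnel overhead):

```python
PAGE = 4096           # x86 memory page: the payload worth carrying intact
IP_HDR, TCP_HDR = 20, 20

pkt = PAGE + IP_HDR + TCP_HDR
print(pkt)            # 4136 octets, matching the figure in the post
print(4470 - pkt)     # 334 bytes of headroom for IPSec/GRE/tunnel encapsulation
```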
Re: Strange public traceroutes return private RFC1918 addresses
This is quite often used. You can't (d)DoS the routers this way, nor try to do any harm to them as you can't reach them. Regards, Jonas

On Tue, 2004-02-03 at 00:01, Brian (nanog-list) wrote: Any ideas how (or why) the following traceroutes are leaking private RFC1918 addresses back to me when I do a traceroute? Maybe try from your side of the internet and see if you get the same types of responses. It's really strange to see 10/8's and 192.168/16 addresses coming from the public internet. Has this phenomenon been documented anywhere? Connectivity to the end-sites is fine, it's just the traceroutes that are strange. (initial few hops sanitized)

[EMAIL PROTECTED] /]# traceroute www.ibm.com
traceroute: Warning: www.ibm.com has multiple addresses; using 129.42.17.99
traceroute to www.ibm.com (129.42.17.99), 30 hops max, 38 byte packets
 1  (---.---.---.---)  2.481 ms  2.444 ms  2.379 ms
 2  (---.---.---.---)  17.964 ms  17.529 ms  17.632 ms
 3  so-1-2.core1.Chicago1.Level3.net (209.0.225.1)  17.891 ms  17.985 ms  18.026 ms
 4  so-11-0.core2.chicago1.level3.net (4.68.112.194)  18.272 ms  18.109 ms  17.795 ms
 5  so-4-1-0.bbr2.chicago1.level3.net (4.68.112.197)  17.851 ms  17.859 ms  18.094 ms
 6  so-3-0-0.mp1.stlouis1.level3.net (64.159.0.49)  23.095 ms  22.975 ms  22.998 ms
 7  ge-7-1.hsa2.stlouis1.level3.net (64.159.4.130)  23.106 ms  23.237 ms  22.977 ms
 8  unknown.level3.net (63.20.48.6)  24.264 ms  24.099 ms  24.154 ms
 9  10.16.255.10 (10.16.255.10)  24.164 ms  24.108 ms  24.105 ms
10  * * *

[EMAIL PROTECTED] /]# traceroute www.att.net
traceroute: Warning: www.att.net has multiple addresses; using 204.127.166.135
traceroute to www.att.net (204.127.166.135), 30 hops max, 38 byte packets
 1  (---.---.---.---)  2.404 ms  2.576 ms  2.389 ms
 2  (---.---.---.---)  17.953 ms  18.170 ms  17.435 ms
 3  500.pos2-1.gw10.chi2.alter.net (63.84.96.9)  18.077 ms  *  18.628 ms
 4  0.so-6-2-0.xl1.chi2.alter.net (152.63.69.170)  18.238 ms  18.321 ms  18.213 ms
 5  0.so-6-1-0.BR6.CHI2.ALTER.NET (152.63.64.49)  18.269 ms  18.396 ms  18.329 ms
 6  204.255.169.146 (204.255.169.146)  19.231 ms  19.042 ms  18.982 ms
 7  tbr2-p012702.cgcil.ip.att.net (12.122.11.209)  20.530 ms  20.542 ms  23.033 ms
 8  tbr2-cl7.sl9mo.ip.att.net (12.122.10.46)  26.904 ms  27.378 ms  27.320 ms
 9  tbr1-cl2.sl9mo.ip.att.net (12.122.9.141)  27.194 ms  27.673 ms  26.677 ms
10  gbr1-p10.bgtmo.ip.att.net (12.122.4.69)  26.606 ms  28.026 ms  26.246 ms
11  12.122.248.250 (12.122.248.250)  27.296 ms  28.321 ms  28.997 ms
12  192.168.254.46 (192.168.254.46)  28.522 ms  30.111 ms  27.439 ms
13  * * *
14  * * *
Re: Strange public traceroutes return private RFC1918 addresses
Search the archives, Comcast and other cable/DSL providers use the 10/8 for their infrastructure. The Internet itself doesn't need to be Internet routable. Only the edges need to be routable. It is common practice to use RFC1918 address space inside the network. Companies like Sprint and Verio use 'real' IPs but don't announce them to their peers on customer edge routes. -Matt

On Feb 2, 2004, at 6:01 PM, Brian (nanog-list) wrote: Any ideas how (or why) the following traceroutes are leaking private RFC1918 addresses back to me when I do a traceroute? Maybe try from your side of the internet and see if you get the same types of responses. It's really strange to see 10/8's and 192.168/16 addresses coming from the public internet. Has this phenomenon been documented anywhere? Connectivity to the end-sites is fine, it's just the traceroutes that are strange. (initial few hops sanitized)

[EMAIL PROTECTED] /]# traceroute www.ibm.com
traceroute: Warning: www.ibm.com has multiple addresses; using 129.42.17.99
traceroute to www.ibm.com (129.42.17.99), 30 hops max, 38 byte packets
 1  (---.---.---.---)  2.481 ms  2.444 ms  2.379 ms
 2  (---.---.---.---)  17.964 ms  17.529 ms  17.632 ms
 3  so-1-2.core1.Chicago1.Level3.net (209.0.225.1)  17.891 ms  17.985 ms  18.026 ms
 4  so-11-0.core2.chicago1.level3.net (4.68.112.194)  18.272 ms  18.109 ms  17.795 ms
 5  so-4-1-0.bbr2.chicago1.level3.net (4.68.112.197)  17.851 ms  17.859 ms  18.094 ms
 6  so-3-0-0.mp1.stlouis1.level3.net (64.159.0.49)  23.095 ms  22.975 ms  22.998 ms
 7  ge-7-1.hsa2.stlouis1.level3.net (64.159.4.130)  23.106 ms  23.237 ms  22.977 ms
 8  unknown.level3.net (63.20.48.6)  24.264 ms  24.099 ms  24.154 ms
 9  10.16.255.10 (10.16.255.10)  24.164 ms  24.108 ms  24.105 ms
10  * * *

[EMAIL PROTECTED] /]# traceroute www.att.net
traceroute: Warning: www.att.net has multiple addresses; using 204.127.166.135
traceroute to www.att.net (204.127.166.135), 30 hops max, 38 byte packets
 1  (---.---.---.---)  2.404 ms  2.576 ms  2.389 ms
 2  (---.---.---.---)  17.953 ms  18.170 ms  17.435 ms
 3  500.pos2-1.gw10.chi2.alter.net (63.84.96.9)  18.077 ms  *  18.628 ms
 4  0.so-6-2-0.xl1.chi2.alter.net (152.63.69.170)  18.238 ms  18.321 ms  18.213 ms
 5  0.so-6-1-0.BR6.CHI2.ALTER.NET (152.63.64.49)  18.269 ms  18.396 ms  18.329 ms
 6  204.255.169.146 (204.255.169.146)  19.231 ms  19.042 ms  18.982 ms
 7  tbr2-p012702.cgcil.ip.att.net (12.122.11.209)  20.530 ms  20.542 ms  23.033 ms
 8  tbr2-cl7.sl9mo.ip.att.net (12.122.10.46)  26.904 ms  27.378 ms  27.320 ms
 9  tbr1-cl2.sl9mo.ip.att.net (12.122.9.141)  27.194 ms  27.673 ms  26.677 ms
10  gbr1-p10.bgtmo.ip.att.net (12.122.4.69)  26.606 ms  28.026 ms  26.246 ms
11  12.122.248.250 (12.122.248.250)  27.296 ms  28.321 ms  28.997 ms
12  192.168.254.46 (192.168.254.46)  28.522 ms  30.111 ms  27.439 ms
13  * * *
14  * * *
Re: Strange public traceroutes return private RFC1918 addresses
On Feb 2, 2004, at 6:20 PM, Jonas Frey (Probe Networks) wrote: This is quite often used. You can't (D)DoS the routers this way, nor do any other harm to them, as you can't reach them.

Sure you can, easily: attack a router one hop past your real target and spoof your target as the source. The resulting ICMP responses will hammer the target. If the Internet edge actually protected itself against spoofing this would be harder, but it is still very doable now.
Re: Strange public traceroutes return private RFC1918 addresses
Matthew Crocker wrote: Search the archives, Comcast and other cable/DSL providers use the 10/8 for their infrastructure. The Internet itself doesn't need to be Internet routable. Only the edges need to be routable. It is common practice to use RFC1918 address space inside the network. Companies like Sprint and Verio use 'real' IPs but don't announce them to their peers on customer edge routes.

Which (as discussed previously) breaks things like Path MTU Discovery, traceroute, and anything else that depends on the router sending ICMP packets back to the sender, if any ISP along the return path (properly) filters RFC1918 address space as bogus. You can use RFC1918 space on any device that really has no need to communicate with the outside world, but generally, un-NAT'ed routers don't qualify, at least on their transit interfaces. I believe Comcast (and I'm going only on my experience as a customer) is moving or has moved from RFC1918 space to routable IP space for its routers, at least on the interfaces I've been doing traceroutes through.

Bob
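The PMTUD breakage Bob describes can be illustrated with a toy model: the sender only learns the bottleneck MTU if the ICMP "fragmentation needed" message from the bottleneck router survives the return path. Everything below (the parallel hop/MTU lists, the filter flag) is a hypothetical simplification for illustration, not a real PMTUD implementation:

```python
import ipaddress

def pmtud_succeeds(path_mtus, router_srcs, filter_private=True):
    """Simulate Path MTU Discovery: the sender learns the bottleneck MTU
    only if the ICMP 'fragmentation needed' from the bottleneck router
    makes it back.  path_mtus and router_srcs are parallel lists
    describing a hypothetical path; returns the learned MTU or None."""
    bottleneck = min(range(len(path_mtus)), key=lambda i: path_mtus[i])
    src = ipaddress.ip_address(router_srcs[bottleneck])
    if filter_private and src.is_private:
        return None  # ICMP dropped on the return path: connection black-holes
    return path_mtus[bottleneck]

# A smaller-MTU hop numbered out of 10/8, behind a filter that (properly)
# drops RFC 1918 sources -- the sender never learns the path MTU:
print(pmtud_succeeds([1500, 1476, 9000],
                     ["4.68.112.194", "10.16.255.10", "64.159.0.49"]))
```

With `filter_private=False` the same call returns 1476, which is exactly the distinction Bob is making: the private addressing itself is harmless until someone on the return path filters it.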
Re: Strange public traceroutes return private RFC1918 addresses
Using real but announced IPs for routers will make their packets fail unicast-RPF checks, dropping traceroute and PMTUD responses as happens with RFC1918 addresses.

Rubens

----- Original Message -----
From: Matthew Crocker [EMAIL PROTECTED]
To: Brian (nanog-list) [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, February 02, 2004 9:25 PM
Subject: Re: Strange public traceroutes return private RFC1918 addresses
Re: Strange public traceroutes return private RFC1918 addresses
On Tue, 3 Feb 2004, Rubens Kuhl Jr. wrote: Using real but announced IPs for routers will make their packets fail unicast-RPF checks, dropping traceroute and PMTUD responses as happens with RFC1918 addresses.

I guess you meant unannounced. This is the case for those who run uRPF towards their upstream (or transit ISPs peering with them who run uRPF on the peering links). I don't think too many folks do that. But I see very little point in not announcing the addresses. Equally well you could just set up an ACL at the edge which drops or rate-limits the traffic. Well, you might not be able to if you're using a vendor whose implementation doesn't allow you to do that.. :)

--
Pekka Savola                 "You each name yourselves king, yet the
Netcore Oy                    kingdom bleeds."
Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings
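The uRPF behavior Rubens and Pekka are discussing can be sketched as a strict-mode check against a toy FIB. The prefixes, interface names, and two-route table below are all hypothetical, and a real router's FIB lookup is far more involved; the point is only the accept/drop decision:

```python
import ipaddress

# Toy FIB: prefix -> interface the route points out of (hypothetical)
FIB = {
    ipaddress.ip_network("203.0.113.0/24"): "eth0",   # announced customer block
    ipaddress.ip_network("198.51.100.0/24"): "eth1",  # announced peer block
}

def urpf_strict(src: str, in_iface: str) -> bool:
    """Strict uRPF: accept a packet only if the longest-match route for
    its source address points back out the interface it arrived on."""
    ip = ipaddress.ip_address(src)
    routes = [net for net in FIB if ip in net]
    if not routes:
        return False  # no route to the source at all: dropped
    best = max(routes, key=lambda net: net.prefixlen)
    return FIB[best] == in_iface

print(urpf_strict("203.0.113.5", "eth0"))     # route points back: accepted
print(urpf_strict("203.0.113.5", "eth1"))     # spoofed from the wrong side: dropped
print(urpf_strict("192.168.254.46", "eth1"))  # unrouted RFC 1918 source: dropped
```

The last case is Rubens' scenario: router sources with no covering route (unannounced or RFC 1918) fail the check everywhere, which is also why, as Pekka notes, a default route or a plain edge ACL changes the picture considerably.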