Thanks - yes, absolutely, and I can factor that into the equation. I've been reading a lot of discussions in the archives and on Google about this. I want to ensure that however/wherever we deploy this, we can provide a full 1500 MTU *without* having desktops make MTU adjustments, at the expense of fragmentation and CPU (which we can account for). No matter what I've tried so far, though, I can't get a ping larger than 1472 through our pair of test routers yet.
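[A note on the 1472-byte ceiling: it is most likely coming from the test host itself rather than the tunnel. An ICMP echo payload of 1472 bytes plus the 8-byte ICMP header and 20-byte IPv4 header exactly fills a 1500-byte interface MTU, so the desktop's own NIC MTU caps the ping size regardless of what the routers do. A minimal sketch of the arithmetic, using the standard IPv4/ICMP header sizes and the 1500 MTU from the thread:]

```python
# Why 1472 is the largest ping payload through a 1500-byte MTU interface:
# the host must fit payload + ICMP header + IPv4 header into one packet
# (unless its own stack fragments locally).
IP_HEADER = 20    # IPv4 header, no options
ICMP_HEADER = 8   # ICMP echo header

def max_ping_payload(mtu: int) -> int:
    """Largest ICMP echo payload that fits in a single packet of size `mtu`."""
    return mtu - IP_HEADER - ICMP_HEADER

print(max_ping_payload(1500))  # -> 1472
```

[So pings above 1472 would have to be fragmented by the desktop's stack before they ever reach the tunnel; a ping issued without the DF bit set is one way to push a full 1500-byte IP packet across.]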
This avoids websites being unreachable (Microsoft comes to mind) and other MTU annoyances we've encountered over time...

Paul

-----Original Message-----
From: Ge Moua [mailto:[email protected]]
Sent: Thursday, February 26, 2009 11:50 AM
To: Paul Stewart
Cc: [email protected]
Subject: Re: [c-nsp] l2tpv3 config - MTU question

I was tackling a similar issue over here too; I think it may have to do with the fact that the l2tpv3 and ethernet headers are taking some of the MTU allocation.

Regards,
Ge Moua | Email: [email protected]
Network Design Engineer
University of Minnesota | Networking & Telecommunications Services

Paul Stewart wrote:
> Hi folks,
>
> I've set up a pair of 1841s back to back for testing an l2tpv3 deployment
> for a client.
>
> FastE0/0 on each 1841 is connected to the other on 10.0.0.0/24 - each
> router has a loopback of 192.168.254.1 and .2 - OSPF is running and I am
> able to successfully ping each other's loopback, with redistributed
> subnets etc.
>
> Each router is configured to look like this:
>
> pseudowire-class test
>  encapsulation l2tpv3
>  sequencing both
>  ip local interface Loopback0
>
> interface FastEthernet0/0
>  ip address 10.0.0.2 255.255.255.0
>  duplex auto
>  speed auto
>
> interface FastEthernet0/1
>  no ip address
>  duplex auto
>  speed auto
>  no cdp enable
>  xconnect 192.168.254.2 1234 pw-class test
>
> A notebook is hooked up to each FastE0/1 port, assigned 172.16.0.1 and
> .2 respectively. I can ping back and forth, proving connectivity.
>
> My problem/question is how to get a packet of 1500 bytes to traverse the
> link - obviously fragmented, but that's OK. In the real-world deployment
> of this setup we are limited to 1500 MTU in most situations and will
> presume no mini-jumbo support anywhere (from a config perspective at
> least).
>
> In my first config I had Path MTU discovery enabled and could only ping
> up to 1440 bytes.
> With that disabled I can now ping to 1472, but not beyond.
>
> With Path MTU turned on it looked like this:
>
> site2#sh l2tun session all
>
> %No active L2F tunnels
>
> L2TP Session Information Total tunnels 1 sessions 1
>
> Session id 53211 is up, tunnel id 32076
>   Call serial number is 1293300000
>   Remote tunnel name is site1
>     Internet address is 192.168.254.1
>   Session is L2TP signalled
>   Session state is established, time since change 00:26:44
>     114 Packets sent, 116 received
>     30446 Bytes sent, 29032 received
>   Last clearing of "show vpdn" counters never
>   Receive packets dropped:
>     out-of-order: 0
>     total: 0
>   Send packets dropped:
>     exceeded session MTU: 1
>     total: 1
>   Session vcid is 1234
>   Session Layer 2 circuit, type is Ethernet, name is FastEthernet0/1
>     Circuit state is UP
>     Remote session id is 22201, remote tunnel id 12358
>   Session PMTU enabled, path MTU is 1500 bytes
>   DF bit on, ToS reflect disabled, ToS value 0, TTL value 255
>   No session cookie information available
>   UDP checksums are disabled
>   SSS switching enabled
>   Sequencing is on
>     Ns 114, Nr 116, 0 out of order packets received
>   Unique ID is 1
>
> %No active PPTP tunnels
>
> Upon looking further I could see the DF bit on, which I believe would
> explain the 1440 byte limit I hit. But with that disabled, I am puzzled
> or missing something as to why I cannot fragment packets up to a full
> 1500. What am I missing here? Do I need to make MTU adjustments on the
> FastE0/1 interface to force fragmentation before the l2tpv3 tunnel?
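[Ge's header-overhead point can be quantified. Per RFC 3931, L2TPv3 over IP adds a 20-byte outer IPv4 delivery header plus a 4-byte session ID, and `sequencing both` adds a 4-byte L2-specific sublayer (the `show` output above confirms no cookie is in use); tunneling the whole Ethernet frame also carries the 14-byte inner Ethernet header. A back-of-the-envelope sketch, with header sizes assumed from RFC 3931 and 802.3 - the exact IOS accounting may differ slightly:]

```python
# Rough L2TPv3-over-IP encapsulation overhead for the config in this
# thread (no cookie, sequencing enabled). Sizes assumed per RFC 3931 /
# standard Ethernet; IOS bookkeeping may vary by a few bytes.
OUTER_IPV4 = 20      # delivery header
SESSION_ID = 4       # L2TPv3 session header over IP, no cookie
L2_SUBLAYER = 4      # default L2-specific sublayer, present with sequencing
INNER_ETHERNET = 14  # the tunneled frame's own Ethernet header

OVERHEAD = OUTER_IPV4 + SESSION_ID + L2_SUBLAYER + INNER_ETHERNET

def outer_packet_size(inner_ip_packet: int) -> int:
    """Size of the encapsulated packet the core has to carry."""
    return inner_ip_packet + OVERHEAD

print(OVERHEAD)                 # -> 42
print(outer_packet_size(1500))  # -> 1542
```

[So carrying a full 1500-byte customer packet produces roughly a 1542-byte outer packet. On a 1500-byte core that packet either has to be fragmented (DF clear on the outer header) or the session MTU has to come down - which is exactly the trade-off the "exceeded session MTU" drop counter and the PMTU-limited ping sizes reflect.]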
> Thanks in advance,
>
> Paul

_______________________________________________
cisco-nsp mailing list  [email protected]
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
