On Sun, Nov 30, 2008 at 4:18 PM, Templin, Fred L <[EMAIL PROTECTED]> wrote:
> Hi Chris,
>
>>-----Original Message-----
>>From: Christopher Morrow [mailto:[EMAIL PROTECTED]
>>Sent: Sunday, November 30, 2008 11:18 AM
>>To: Templin, Fred L
>>Cc: [EMAIL PROTECTED]; Darrel Lewis (darlewis); Routing Research Group Mailing List
>>Subject: Re: [rrg] Fundamental objections to a host-based scalable routing solution
>>
>>On Sat, Nov 29, 2008 at 5:57 PM, Templin, Fred L
>><[EMAIL PROTECTED]> wrote:
>>> Hi Chris,
>>>
>>>>-----Original Message-----
>>>>From: Christopher Morrow [mailto:[EMAIL PROTECTED]
>>>>Sent: Saturday, November 29, 2008 11:48 AM
>>>>To: Templin, Fred L
>>>>Cc: [EMAIL PROTECTED]; Darrel Lewis (darlewis); Routing Research Group Mailing List
>>>>Subject: Re: [rrg] Fundamental objections to a host-based scalable routing solution
>>>>
>>>>On Sat, Nov 29, 2008 at 2:08 PM, Templin, Fred L
>>>><[EMAIL PROTECTED]> wrote:
>>>>>>|> That implies that the
>>>>>>|> ETR does a mapping lookup on the receipt of a packet, buffers
>>>>>>|> the packet until the lookup succeeds, and then does the
>>>>>>|> compare.
>>>>>>|
>>>>>>|Oh you mean like the IPv6 neighbor discovery process!?
>>>>>>
>>>>>>Two wrongs don't make a right.
>>>>>
>>>>> Why buffer the packet until the lookup succeeds? Why not
>>>>> just accept the first few packets while a lookup is done?
>>>>
>>>>A synflood is a bunch of 1-packet flows :( You lose, I win! Yippee! :(
>>>>Seriously though, if you send through 'some' of the bad packets, all
>>>>the attacker has to know is how many 'some' is... in the worst case
>>>>the answer is 'one'.
>>>
>>> Still, AFAICT performing egress filtering of some sort during
>>> decaps could be used to the ETR's advantage in establishing a
>>> pattern of behavior from certain ITRs. In particular, it could
>>> be used by the ETR to determine which ITRs are not correctly
>>> implementing ingress filtering - right?
>>>
>>
>>With some level of logging/stats-collection on the ETR you might be
>>able to determine that an ITR is improperly encapping, yes... though
>>it's possible that the ITR is performing encap but not decap for the
>>networks behind it, so you may not be able to tell whether this is a
>>problem or not.
>>
>>>>Buffering is bad, really, really bad.
>>>
>>> Buffering on decaps does sound pretty bad, but buffering on
>>> encaps may be a different story. I haven't looked at the
>>
>>The encap buffering problem is 'solved' by encapping only when you
>>have a map for the destination, no? Else you drop the 'first packet'
>>(which may certainly be multiple first packets) while you await the
>>mapping reply.
>>
>>> mapping schemes all that closely, but at least the APT
>>> approach seems to have the ITR send to a "default mapper"
>>> with the side effect of getting a mapping resolution in return.
>>> That sounds great, but it essentially puts "default routers"
>>> in the DFZ. Is that better or worse than buffering?
>>
>>It sounds like default mappers can get really busy, and cost some
>>stretch, while maybe not forcing 'first packet drop'... depending on
>>the network that might not be a bad thing.
>
> But how bad would that be? In normal use, default mappers
> would only have to forward the first packet out - not a
> sustained stream of packets.
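The encap-side behavior described above -- encapsulate only when a mapping is already cached, otherwise drop the packet and kick off a lookup rather than buffer -- can be sketched roughly as follows. All names, and the cache/request machinery, are hypothetical illustrations, not any particular proposal's design:

```python
# Sketch of an ITR's per-packet decision: encap on a map-cache hit,
# drop (and trigger a map request) on a miss. Hypothetical names.

map_cache = {}            # EID -> RLOC (locator) mappings we hold
pending_requests = set()  # lookups already in flight

def send_map_request(eid):
    """Placeholder for the real map-request machinery."""
    pending_requests.add(eid)

def itr_forward(eid, packet):
    rloc = map_cache.get(eid)
    if rloc is not None:
        return ("encap", rloc, packet)   # cache hit: encapsulate and send
    if eid not in pending_requests:
        send_map_request(eid)            # cache miss: start a lookup...
    return ("drop", None, packet)        # ...and drop rather than buffer

# The first packet toward an unmapped EID is dropped while the lookup runs:
print(itr_forward("eid-1", "pkt-a"))     # ('drop', None, 'pkt-a')
map_cache["eid-1"] = "rloc-9"            # mapping reply arrives
print(itr_forward("eid-1", "pkt-b"))     # ('encap', 'rloc-9', 'pkt-b')
```

Note the drop applies to however many packets arrive before the mapping reply, which is exactly the "first packet (which may be multiple first packets)" caveat above.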
This really gets at 'where are the default mappers deployed' and 'what
is the traffic mix for the network deploying the default mapper(s)'.
You can minimize stretch and lookup lag by pushing default mappers to
as many 'pops' as possible (thinking the network hierarchy is something
like: network -> region -> metro -> pop). You can minimize mapping
punts, at steady state, by keeping more map entries 'local', but you
will still have a limited amount of the map you can store on any one
'router', so if the traffic mix is such that a router sees more
destinations than its map space can hold, you'll always be sending
some traffic through the default mapper. For example, say your map is
only 100 items long, yet your network sends to 150 locations per time
period. In that time period you'd be punting roughly a third of your
lookups (50 of the 150 destinations) to the default mapper... If this
default mapper is deployed at the 'network' level, you're going to get
a lot of traffic punted toward it.

There are places where default mappers are going to be the wrong
solution, and there are places where they may fit the bill very
nicely, perhaps even in the same network. Say a network has a sizable
dialup/dsl plant and wants to limit costs on the L3 devices there, but
has sizable routers/traffic through the network core and can spend on
mapping memory in L3 devices at that point; perhaps default mappers
make sense inside the dial/dsl plant, but not for the network as a
whole. Anyway, my point wasn't that one solution is better than the
other, but that each tool has to be used properly, and we may want
more than one tool here.

> I'm wondering if there is a parallel to be drawn between
> default mappers and the root domain name servers. Is there

I would bet only slightly, since the lookup load is much lower on DNS
boxes (I think it is, at least) than it is on routers. (per packet vs
per TTL)

> some comparison to be drawn that could indicate anticipated
> scaling properties?
> Maybe the APT people have thought about
> this more and could comment?
>

Hopefully they can :)

-Chris

_______________________________________________
rrg mailing list
[email protected]
https://www.irtf.org/mailman/listinfo/rrg
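The map-vs-destinations arithmetic in the thread (a 100-entry map cache against 150 active destinations) can be modeled with a tiny cache simulation. The uniform traffic mix, the LRU replacement policy, and the function name are all assumptions for illustration, not anything a particular mapping proposal specifies; under those assumptions roughly one lookup in three misses the cache and punts to the default mapper, matching the (150-100)/150 intuition:

```python
# Toy model: fraction of lookups punted to a default mapper when the
# working set of destinations exceeds the map-cache size.
# Uniform traffic mix and LRU eviction are illustrative assumptions.
from collections import OrderedDict
import random

def punt_fraction(cache_size, destinations, lookups, seed=1):
    random.seed(seed)
    cache = OrderedDict()   # insertion/recency-ordered map cache
    punts = 0
    for _ in range(lookups):
        dest = random.randrange(destinations)   # uniform traffic mix
        if dest in cache:
            cache.move_to_end(dest)             # LRU refresh on a hit
        else:
            punts += 1                          # miss -> punt to mapper
            cache[dest] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)       # evict the LRU entry
    return punts / lookups

# 100-entry map, 150 destinations: prints a value close to 1/3.
print(round(punt_fraction(100, 150, 100_000), 2))
```

A skewed (non-uniform) traffic mix would punt less, since the popular destinations stay cached; the uniform case here is close to the worst case for a given working-set size.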
