Hi Tony and Scott,

In "Re: Generic requirements on mapping mechanisms" you wrote, in part:
>> One way of shifting that cost further is for the advertising site
>> to proactively issue updates based on historical requests.
>
> Are you suggesting that if the advertising site does this, the
> caching site could toss out cache entries more casually, because
> they are likely to be refreshed?  I wonder if proactive updates
> will increase both the cost of caching and the cost of processing
> updates even more, because updates will be sent and processed even
> when they are not needed.

I think you are considering a "proactive" update system on a global scale, since you mention "advertising sites", which I take to mean ETRs or whatever devices are the authoritative source of mapping information in LISP-ALT.

In a pure pull system like ALT, adding a Notify mechanism would have some scaling and reliability problems.  The global distances involved in reaching the requesters are a challenge in terms of overall traffic cost, risk of packet loss, etc.  There is also extra delay.

My interest is in Notify from local query servers, such as APT's Default Mappers or Ivip's full database QSDs.  This way, the distances are short, packet loss will be lower, and the load is spread evenly over tens of thousands of QSDs - rather than a global system in which a mapping update suddenly means one or a few ETRs have to do a lot of work.

The primary benefit of Notify - sending updated mapping information, not just a cache invalidation - is that all ITRs can be made to respond very quickly to the end-user's mapping change.  Yet this can still be done with relatively long caching times, so it doesn't involve a great increase in query and response traffic.
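To make the local-Notify idea concrete, here is a minimal sketch of a full database query server that remembers which caching ITRs hold each micronet's mapping, and pushes the new mapping (echoing the nonce of the original request, as in the Ivip approach) when an update arrives.  The class and field names, the message format, and the fixed caching time are all my invention for illustration - nothing here is a defined Ivip wire format.

```python
from dataclasses import dataclass, field

@dataclass
class Mapping:
    start: str      # micronet start address
    length: int     # micronet length (number of addresses)
    etr: str        # address of the ETR currently handling this micronet

@dataclass
class QSD:
    """Hypothetical full database query server.  It answers queries
    from nearby caching ITRs and records each requester's nonce so a
    later Notify can echo the nonce of the original request."""
    db: dict = field(default_factory=dict)        # micronet id -> Mapping
    cached_by: dict = field(default_factory=dict) # micronet id -> {itr: nonce}
    cache_time: float = 600.0                     # seconds; operator-chosen

    def query(self, micronet, itr, nonce):
        # Remember who cached this mapping, and under which nonce.
        self.cached_by.setdefault(micronet, {})[itr] = nonce
        m = self.db[micronet]
        return (m.start, m.length, m.etr, self.cache_time)

    def update(self, micronet, new_etr, send):
        # An update arrives on the fast-push feed: apply it, then
        # Notify every ITR which currently caches this micronet,
        # echoing each ITR's original nonce.
        self.db[micronet].etr = new_etr
        for itr, nonce in self.cached_by.get(micronet, {}).items():
            send(itr, {"micronet": micronet, "etr": new_etr, "nonce": nonce})
```

Because the QSD is local, the `send` calls travel short distances and the Notify load for any one update is spread over however many QSDs serve ITRs caching that micronet.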
I described Ivip's approach to Notify - using the nonce of the original request, and potentially using intermediate caching query servers - at the end of:

  http://psg.com/lists/rrg/2008/msg00742.html

The future map-encap scheme will grow in ways we can't reliably predict: in the number of end-users, the types of end-users, the number of micronets (EIDs), and in uses beyond our current vision of multihoming, TE and portability.  We can't really predict the update rate, but with mobility, all this stuff will go through the roof.  So the more flexibility we can build into the system the better.

The idea with hybrid push-pull (APT and Ivip) is that over the decades to come, operators of various types of networks (including types we can't yet envisage) will be free to push the whole mapping database to a depth in their network of their choosing - to some number of full database ITRs and Query Servers.  Beyond that, they are choosing to use pull to these local query servers, with Notify.

Ivip mapping contains only an ETR address - no TE stuff, no caching time.  The idea is that the QSD responds to all queries with the micronet's start and length, with the ETR address - and with a caching time.  The operator of the QSD could configure it with a fixed caching time, or the QSD could use some fancy algorithm, based on recent update patterns, to send out replies with different caching times.

A long caching time means:

1 - A lower rate of ITRs re-requesting mapping.

2 - More storage required in caching ITRs - although each such ITR
    could drop the mapping from its cache and re-request it if it
    needs to.

3 - A larger number of Notify messages to be sent if the mapping
    changes.

4 - A larger amount of state to be kept in each QSD.

5 - Likewise, more state and work for any intermediate caching QSCs.

With Ivip, the operator of each ISP or end-user network chooses a number of things:

1 - How far to push and store the full mapping database.
    If this turned out to be onerous, then small networks could rely
    on full database ITRs and query servers in nearby networks, for a
    fee.  There are reasons to put caching ITR functions as close to
    sending hosts as possible, including for free in sending hosts
    via an OS update (not behind NAT).  So I am thinking of end-user
    networks needing access to a full database query server (or
    several for redundancy), not just ISP networks.

2 - How many devices store the full database.

    These can be full database ITRs and typically two or more full
    database query servers.  Ideally, for redundancy, each such query
    server would have a separate pair of mapping feeds.  A caching
    ITR right next to a full database query server is pretty much the
    same as a full database ITR, but doesn't in itself need storage
    or a feed of mapping data.

3 - Where to place caching ITRs and potentially caching query
    servers.

4 - To what extent to build caching ITR functions into sending hosts
    such as web servers, which are not behind NAT.  Also potentially
    building the caching ITR function into the same device as NAT -
    such as in DSL modems.

The update rate due to multihoming service restoration for non-mobile networks is likely to be pretty low in comparison to these two types of update:

1 - TE updates, where people pay per update for the benefits it
    brings in managing incoming traffic, enabling them to run their
    links at an overall higher capacity than if they didn't have
    this real-time method of steering traffic.

2 - Mobility.  The volume depends on so many things - the number of
    mobile nodes (this is the only class of end-user network or host
    which numbers in the billions), the cost of updates, and the
    sophistication of the software which selects TTRs and therefore
    generates updates.  These updates will be paying their way on
    the fast-push system.

The idea is that operators will have great flexibility in how many QSDs they have, how close they are to the caching ITRs which query them, and how long the caching time is.
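One "fancy algorithm" of the kind mentioned above might set each micronet's caching time from its recent update history: stable micronets get long caching times (fewer re-queries), while micronets being updated frequently for TE or mobility get short ones (less Notify traffic and QSD state).  This is purely an illustrative policy; the bounds, window and halving rule below are invented for the sketch.

```python
from collections import deque

class AdaptiveCacheTime:
    """Illustrative per-micronet caching-time policy for a QSD:
    long caching times for micronets with no recent updates, short
    ones for micronets updated often."""

    def __init__(self, min_s=60.0, max_s=3600.0, window_s=3600.0):
        self.min_s = min_s          # floor on the caching time
        self.max_s = max_s          # ceiling on the caching time
        self.window_s = window_s    # how far back to look at updates
        self.updates = {}           # micronet id -> deque of timestamps

    def record_update(self, micronet, now):
        self.updates.setdefault(micronet, deque()).append(now)

    def cache_time(self, micronet, now):
        hist = self.updates.get(micronet, deque())
        # Discard updates older than the observation window.
        while hist and now - hist[0] > self.window_s:
            hist.popleft()
        if not hist:
            return self.max_s   # no recent updates: cache for a long time
        # Aim to cache for about half the mean interval between
        # updates, clamped to the operator-chosen bounds.
        mean_interval = self.window_s / len(hist)
        return max(self.min_s, min(self.max_s, mean_interval / 2))
```

A QSD operator could run something like this, or just configure a fixed caching time; the point is that the trade-off between re-query traffic, Notify volume and QSD state stays under local control.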
This allows operators in the decades ahead to make trade-offs which suit their local circumstances and the technology and traffic costs of the day - none of which we can reliably predict now.

  - Robin

--
to unsubscribe send a message to [EMAIL PROTECTED] with the word
'unsubscribe' in a single line as the message text body.
archive: <http://psg.com/lists/rrg/> & ftp://psg.com/pub/lists/rrg
