Re: IPv6 transition work was RE: NANOG 40 agenda posted
What I guess I have not been clear on is the fact that load balancers, for many people, are an integral (and required) part of the *architecture* (and not just something you need to distribute load), and as such are a component that must support v6 for the *service* to be able to support it (much like basic logging would now need to be v6 capable, etc.). There is simply no easy way of taking one machine and making it mail.ipv6.yahoo.com (as an example), not to mention that nobody is going to invest the time and resources into building a completely different architecture (a single server is a complete one-off) to support a test rollout of v6 (and then having to keep different code trees in sync, etc.), when that time and those resources could be better invested in coming up with a real long-term solution.

Again, we are working on it, it is much harder than it seems, my views are my own, I'm not in any way speaking for my employer, and in fact, I've said all I can.

Thanks,
-igor

On Mon, 4 Jun 2007, JORDI PALET MARTINEZ wrote:
::
:: Agree, and in fact, a quick thought is that as you may expect *much less*
:: IPv6 traffic today, not having load balancing may not be an issue, and you
:: can always actively measure whether the traffic is getting high, etc.
::
:: If the time arrives when the traffic is that high and your preferred vendor
:: still doesn't support IPv6 load balancing alongside the IPv4 kind, then you run no
:: risk, in the sense that you can just delete the records; meanwhile you have a
:: very realistic test environment and motivation to push your vendors, or,
:: considering the traffic, to decide whether to move to other vendors, etc.
::
:: Regards,
:: Jordi
::
:: > From: <[EMAIL PROTECTED]>
:: > Reply-To: <[EMAIL PROTECTED]>
:: > Date: Sun, 3 Jun 2007 23:01:57 +0100
:: > To: <[EMAIL PROTECTED]>
:: > Thread-Topic: IPv6 transition work was RE: NANOG 40 agenda posted
:: > Subject: IPv6 transition work was RE: NANOG 40 agenda posted
:: >
:: >> Without naming any vendors, quite a few features that work
:: >> with hardware assist/fast path in v4 don't have the same
:: >> hardware assist in v6 (or the sheer act of enabling IPv6
:: >> impacts v4 performance drastically).
:: >> Also, quite a few features simply are not supported in v6
:: >> (not to mention that some LB vendors don't support v6 at
:: >> all). Just because it "works" doesn't mean it works
:: >> correctly, or at the right scale. Again, not naming any vendors...
:: >
:: > This just emphasizes the importance of turning on IPv6 today in
:: > some part of your production networks in order to identify the specifics
:: > of these issues and get them out in the open where they can be fixed.
:: >
:: >> Actually, for me 100% feature parity (for the features we use per
:: >> VIP) is a day-1 requirement.
:: >
:: > This doesn't sound like transition as we know it. If you can set up
:: > everything that you need to test in a lab environment and then certify
:: > IPv6 as ready for use, this could work. But I don't believe that the
:: > IPv6 transition can be handled this way. It involves many networks with
:: > services and end-users of all types which interact in interesting ways.
:: > We need everybody to get some IPv6 into live Internet production. The
:: > only way this can work is to take lots of baby steps. Turn on a bit of
:: > v6, test, repeat.
:: >
:: >> My stance is that simply enabling v6 on a server is "not
:: >> interesting"; v6 has to be enabled on the *service*.
:: >
:: > I disagree.
:: > If a company can offer their service using lots of IPv4
:: > in-house with an IPv6 proxy gateway to the Internet, then this is still
:: > valuable and useful in order to support OTHER people's testing. Let's
:: > face it, IPv4 is not going away, and even when the v4 addresses run out,
:: > anybody who has them can keep their services running as long as they
:: > don't need to grow the v4 infrastructure. This is not an issue of
:: > turning on some IPv6 to test it and then evaluating the results. The fact
:: > of IPv4 exhaustion is an imperative that means you and everyone else
:: > must transition to an IPv6 Internet. You turn on some v6, test, adjust,
:: > turn on some more, test, adjust, and repeat until your infrastructure no
:: > longer has a dependency on new IPv4 addresses. Your end game may still
:: > have lots of IPv4 in use, which is OK as long as no new IPv4 addresses
:: > are needed.
:: >
:: >> Like you said, different companies have different approaches,
:: >> but if I'm going to invest my (and a lot of other
:: >> engineers'/developers'/QA's) time in enabling v6, it's not going
:: >> to be by putting a single server behind the mail.ipv6.yahoo.com
:: >> rotation; it's going to be by figuring out how to take
:: >> everything that we use for mail.yahoo.com and making it work
:: >> in v6 (as that is the only way it would be considered a valid
:: >> test), so that at some point in the not-too-distant future it
:: >> could become dual-stack...
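The "IPv6 proxy gateway in front of an IPv4-only backend" idea mentioned above can be sketched in a few lines. This is a toy illustration, not anyone's actual deployment: a listener accepts connections over IPv6 (loopback only here) and relays each one to a v4 backend. All ports and addresses are made up for the sketch, and there are no timeouts, limits, or logging.

```python
import socket
import threading

def pipe(src, dst):
    # Copy bytes one way until the source side hits EOF, then signal
    # EOF to the other side so the relayed connection winds down cleanly.
    try:
        while True:
            chunk = src.recv(4096)
            if not chunk:
                break
            dst.sendall(chunk)
    except OSError:
        pass
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def v6_front_door(listen_port, backend_host, backend_port):
    # Accept connections over IPv6 and relay each one, unmodified, to an
    # IPv4-only backend.  The backend never needs to know v6 exists.
    lsock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lsock.bind(("::1", listen_port))  # loopback only, for the sketch
    lsock.listen(5)
    while True:
        client, _ = lsock.accept()
        backend = socket.create_connection((backend_host, backend_port))
        threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
        threading.Thread(target=pipe, args=(backend, client), daemon=True).start()
```

Of course, this is exactly the kind of single-box one-off the reply argues is uninteresting at Yahoo scale; it only makes the architectural point that the v6 termination and the v4 service can be decoupled.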
Re: Yahoo-hosted phishing sites
[EMAIL PROTECTED] has been fairly responsive lately. [EMAIL PROTECTED] is where all Yahoo-related abuse issues should go (and it will be read and acted upon). If you submit phishing reports through phishing-specific channels (e.g. http://www.castlecops.com/pirt), they might be acted upon faster (some trusted sources have an "express" escalation path). If abuse@ isn't working, please let me know, and I can see about getting it escalated.

Thanks,
-igor (this time speaking for my employer, Yahoo!)
Re: FW: DNS TTL adherence
:: >So, if you, or the original poster, are going to move
:: >${important_resource} around IP-wise, keep in mind that your
:: >${important_thing} may have to answer on more than one IP address for a
:: >period much longer than your tuned TTL :(
::
:: Thanks all for the responses. I do understand we may need to support the
:: old IP addresses for some time. I was hoping someone out there had
:: performed a study to determine what the ratio might be for us supporting
:: an old IP address (I know our traffic profile is unique to us, so it
:: would only give us a general idea).
::
:: For example, if we change IP addresses, will we need to plan on 20% of
:: traffic at the old site on day 1, 10% on day 2, 5% on day 3, and so on?
:: There are also issues related to proxy servers and browser caching,
:: independent of DNS, that we will need to quantify to understand the full
:: risk. The more data we have, the better it will drive our decisions.

In my not-so-scientific "studies" with changing IPs for a fairly large-volume site, I found that 90% of the people will use the new IP within an hour of TTL expiration, 99.999% of the people within 3 days, and that remaining 0.001% may take years. As someone said earlier, some parts of the 'net are just broken beyond your control...

-igor
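The decay pattern described in that anecdote can be sketched as a toy two-population model: most clients sit behind TTL-honoring resolvers and move almost immediately, while a small population behind broken or over-caching resolvers drains very slowly. Every constant below is an illustrative guess, tuned only to echo the numbers in the post (about 90% moved within an hour, roughly 0.001% still lingering after 3 days), not a measurement.

```python
WELL_BEHAVED = 0.999     # assumed fraction of clients behind TTL-honoring resolvers
FAST_HALF_LIFE_H = 0.3   # hours; well-behaved clients re-resolve almost at once
SLOW_HALF_LIFE_H = 12.0  # hours; the broken tail drains far more slowly

def residual_fraction(hours_after_ttl_expiry):
    """Estimated fraction of traffic still arriving at the OLD address."""
    t = hours_after_ttl_expiry
    fast = WELL_BEHAVED * 0.5 ** (t / FAST_HALF_LIFE_H)
    slow = (1.0 - WELL_BEHAVED) * 0.5 ** (t / SLOW_HALF_LIFE_H)
    return fast + slow

# Rough planning table for how long to keep answering on the old IP.
for label, hours in [("1 hour", 1), ("1 day", 24), ("3 days", 72)]:
    print(f"{label:>7}: {residual_fraction(hours) * 100:.4f}% still on old IP")
```

No smooth curve captures the true long tail (the "may take years" crowd), but a model like this is enough to see that the day-1/day-2/day-3 percentages the poster asked about are dominated by the broken-resolver population, not by the TTL itself.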
Re: Multi-6 [WAS: OT - Vint Cerf joins Google]
:: > We also like the fact that we can change our
:: > announcements so others can only use prefix X through transit provider Y
:: > and not transit provider Z, unless transit provider Y goes away (those two
:: > are obviously not the only uses of such policies, but are just examples).
::
:: This also seems achievable via DNS hacks on your side. Again,
:: this seems like it can be done locally.

Wonderful... so now we have to do routing in DNS, a protocol that's not exactly designed for rapid convergence (yes, neither is BGP, but it's a *lot* faster than DNS). Just brilliant.

:: While I realize that the status quo is always the most comfortable, you
:: should also recognize that the status quo is simply not sustainable from
:: an architectural viewpoint. Thus, the charter of multi6/shim6 is to
:: change the model into one that is sustainable, and the fact that certain
:: features and functionality will be lost is an unfortunate necessity.

While the status quo is not sustainable if growth continues for 4+ years, deciding to "fix" the problem by pretending that there was never a good reason for it in the first place, and moving it to a different place, is not a very good architectural solution either.

:: Well, I cannot disagree with you. However, this is the direction that
:: the IETF has chosen after careful and lengthy discussions. Those of us
:: who had alternative ideas have long since lost the battle and are
:: resigned to the inevitable, of which shim6 seems like the best of a bad
:: lot.

And I hope this thread points out why more content isn't v6-enabled. And no, I'm not saying that "the evil greedy bastards" did this on purpose; unfortunately, it's simply yet another example of things being created without operator involvement (and yes, we, the operators, are at fault for that).

See you on [EMAIL PROTECTED]

-igor
Re: Multi-6 [WAS: OT - Vint Cerf joins Google]
:: All in all, site traffic engineering is NOT going to be an easy problem
:: to solve in a hop-by-hop forwarding paradigm based on clever
:: manipulation of L3 locators. Architecturally, what one would really
:: like is to not worry about the traffic engineering problem per se.
:: Rather, what is needed is a mechanism that allows congestion control and
:: other mechanisms to feed into the address selection algorithms, so that
:: when a link does become saturated, some traffic (but not all! ;-) shifts
:: to alternate addresses.

Traffic engineering is not *only* about congestion; in fact, for a large content provider, it's about *policy*. Content providers like the fact that by manipulating the routing policy we can choose to send X amount of traffic to B via peering link Y (provided that prefix is announced by both peers Y and Z). We also like the fact that we can change our announcements so others can only use prefix X through transit provider Y and not transit provider Z, unless transit provider Y goes away (those two are obviously not the only uses of such policies, but are just examples).

For us (and I'm sure not only us) it's about control, and that control is required for financial, political (and where the two intersect), as well as performance engineering reasons: things that are easily done in v4 right now and cannot be done simply in v6 (please correct me if I'm wrong here), unless every datacenter all of a sudden gets a /32. (And if the folks at ARIN have no problem giving a large content provider a /26 of v6 space in order to encourage its adoption, because the current multihoming strategies simply do not work, please do drop me a line.)

Moving everything to the end hosts is simply not a good idea, IMHO.

-igor
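The per-neighbor announcement policy described above ("only use prefix X through transit provider Y and not transit provider Z") can be sketched in a hypothetical IOS-style configuration fragment. Every value here is made up: the prefix is from documentation space, the neighbor addresses and AS numbers are illustrative, and real policies usually also involve communities, local-preference, and prepending.

```
! Announce everything to transit Y, but filter 192.0.2.0/24 toward
! transit Z, so that prefix is reachable only via Y (until the policy
! is changed, e.g. if Y goes away).  Purely illustrative values.
ip prefix-list NOT-VIA-Z seq 5 deny 192.0.2.0/24
ip prefix-list NOT-VIA-Z seq 10 permit 0.0.0.0/0 le 32
!
router bgp 64500
 neighbor 198.51.100.1 remote-as 64501
 neighbor 203.0.113.1 remote-as 64502
 neighbor 203.0.113.1 prefix-list NOT-VIA-Z out
```

The point of the sketch is that this knob lives entirely in the operator's routing policy; there is no obvious shim6 equivalent, since address selection there happens on the end hosts.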
Re: Multi-6 [WAS: OT - Vint Cerf joins Google]
:: > Well, I have no evidence of them doing anything with IPv6 anyway, so I
:: > don't know if this makes a difference.
::
:: I have a very strong feeling that part of the lack of content providers on
:: IPv6 is due to the lack of multihoming.
::
:: Whilst this thread is open... perhaps someone can explain to me how shim6 is
:: as good as multihoming in the case of redundancy when one of the links is
:: down at the time of the initial request, so before any shim-layer negotiation
:: happens.
::
:: I must be missing something, but there's a good chance that the requester is
:: going to have to wait for a timeout on their SYN packets before failing over
:: to another address to try. Or is the requester supposed to send SYNs to all
:: addresses for a hostname and race them off?

Or, on top of that, how traffic engineering can be performed with shim6... And people wonder why more "content" isn't available for v6. Maybe when content providers start asking for a /32 *per datacenter* (i.e. a /26 or so of initial allocation) those issues might get solved... then again, probably not.

-igor (firmly in the "shim6 does not address *most* of the issues" camp)
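The "send SYNs to all addresses and race them off" option raised in the quoted question can be sketched as client-side code. This is a toy illustration of the idea (an approach that was later standardized for clients as Happy Eyeballs, RFC 8305), not a shim6 mechanism: it races full TCP connects to every address a hostname resolves to and keeps the first handshake that completes.

```python
import concurrent.futures
import socket

def racing_connect(host, port, timeout=2.0):
    """Race TCP connects to every address behind `host`; return the
    first socket whose handshake completes.  Real implementations
    stagger the attempts rather than firing all SYNs simultaneously."""
    addrs = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)

    def attempt(info):
        family, socktype, proto, _, sockaddr = info
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(sockaddr)   # raises if this address is down/unreachable
        except OSError:
            s.close()
            raise
        return s

    winner = None
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(addrs)) as pool:
        futures = [pool.submit(attempt, a) for a in addrs]
        # Note: this sketch waits for every attempt to finish (up to
        # `timeout`); a real client would return as soon as one wins.
        for fut in concurrent.futures.as_completed(futures):
            try:
                s = fut.result()
            except OSError:
                continue          # this address lost the race (or was down)
            if winner is None:
                winner = s        # first completed handshake wins
            else:
                s.close()         # slower success; discard it
    if winner is None:
        raise OSError(f"no address for {host}:{port} accepted a connection")
    return winner
```

This sidesteps the SYN-timeout wait for the *initial* connection, at the cost of extra connection attempts toward the content provider, which is exactly why a server operator might be unenthusiastic about every client doing it.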