RE: New tsunami advisory warning - Japan
"*yawn*. A foot and a half isn't going to be all *that* bad" Sorry to continue off topic: try to imagine a temporary, very high tide rather than a cresting wave. In addition to the height, it's the wavelength you have to take into account. Tsunamis rarely become towering breaking waves. [That said, a tsunami can form into a bore - a step-like wave with a steep breaking front - most likely where it moves from deep water into a shallow river or bay.] A foot and a half on top of an existing high tide could easily cause further flooding in the wrong locations (although, as mentioned, not to the levels already experienced). "travels in general at approx 970 kph (600 mph)" True in the deepest parts of the open ocean - upon reaching the shoreline it'll be travelling a lot slower. /off-topic // Gav
Re: Creating an IPv6 addressing plan for end users
On 3/23/11 6:14 AM, Hammer wrote: Nathalie, As an end customer (not a carrier) over in ARIN land I purchased a /48 about a year ago for our future IPv6 needs. We have 4 different Internet touchpoints (two per carrier) all rated at about 1Gbps. Recently, both carriers told us that the minimum advertisement they would accept PER CIRCUIT would be a /48. I was surprised to say the least. Basically a /48 would not be enough for us. The argument was that this was to support all the summarization efforts and blah blah blah. Even though my space would be unique to either carrier. So now I'm contemplating a much larger block. Seems wasteful but I have to for the carriers. Have you heard of this elsewhere or is this maybe just an ARIN/American thing? Both carriers told me that in discussions with their peers they were all doing this. There are providers that will accept more-specific prefixes from their customers for internal use. Since /48 is the minimum ARIN allocation, there is observed to be general consensus on not accepting prefixes longer than /48 into the DFZ. http://www.verizonbusiness.com/Products/networking/internet/ipv6/policy.xml is one such example of a transit provider that will carry longer prefixes internally. -Hammer- I was a normal American nerd. -Jack Herer On Wed, Mar 16, 2011 at 1:52 PM, Schiller, Heather A heather.schil...@verizonbusiness.com wrote: For those who don't like clicking on random bit.ly links: http://www.ripe.net/training/material/IPv6-for-LIRs-Training-Course/IPv6_addr_plan4.pdf --Heather -Original Message- From: Nathalie Trenaman [mailto:natha...@ripe.net] Sent: Wednesday, March 16, 2011 5:05 AM To: nanog@nanog.org Subject: Creating an IPv6 addressing plan for end users Hi all, In our IPv6 courses, we often get the question: I give my customers a /48 (or a /56 or a /52) but they have no idea how to distribute that space in their network.
In December, Sander Steffann and SURFnet wrote a manual explaining exactly that, in clear language with nice graphics. A very useful document, but it was in Dutch, so the RIPE NCC decided to translate it into English. Yesterday we published that document on our website, and we hope it is able to take away some of the fear that end users seem to have of these huge blocks. You can find the document here: http://bit.ly/IPv6addrplan (PDF) I look forward to your feedback, tips and comments. With kind regards, Nathalie Trenaman RIPE NCC Trainer
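[Editor's aside: as a rough illustration of the kind of plan such a document describes, Python's stdlib ipaddress module can show how much room a /48 actually holds. The prefix below is the IPv6 documentation prefix, not a real allocation, and the /56-per-department layout is just one common convention, not the manual's prescription.]

```python
import ipaddress

# A /48 end-site allocation (documentation prefix as a stand-in).
site = ipaddress.ip_network("2001:db8::/48")

# One common plan: hand each building/department a /56 (256 of them
# fit in a /48), then carve each /56 into 256 /64 LAN subnets.
slash56s = list(site.subnets(new_prefix=56))
print(len(slash56s))        # 256 /56s per /48

first_dept = slash56s[0]
lans = list(first_dept.subnets(new_prefix=64))
print(len(lans))            # 256 /64s per /56
print(lans[0])              # 2001:db8::/64
```

Even the "small" /56 that some providers hand out gives an end user 256 standard /64 subnets, which is the point the manual tries to get across.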
Re: Paul Baran, RIP.
- Original Message - From: Roland Dobbins rdobb...@arbor.net http://www.networkworld.com/news/2011/032811-paul-baran-packet-switching-obit.html Oh hell; now we'll *never* lay the ghost of "packet switching was invented to create a nuclear-war-survivable network." [ reads obit ] See? Happy Landings, Dr B. Cheers, -- jra
Re: Paul Baran, RIP.
On Mon, 28 Mar 2011, Jay Ashworth wrote: - Original Message - From: Roland Dobbins rdobb...@arbor.net http://www.networkworld.com/news/2011/032811-paul-baran-packet-switching-obit.html Oh hell; now we'll *never* lay the ghost of "packet switching was invented to create a nuclear-war-survivable network." [ reads obit ] See? The Man Who Shot Liberty Valance (1962): Ransom Stoddard: You're not going to use the story, Mr. Scott? Maxwell Scott: No, sir. This is the West, sir. When the legend becomes fact, print the legend. - Lucy
Re: New tsunami advisory warning - Japan
Gavin Pearce wrote: Try to imagine ... a temporary very high tide, rather than a cresting wave. In addition to the height, it's the wavelength you have to take into account. Tsunamis rarely become towering breaking waves. Quite right. The other part is that the water becomes a very fast-moving river, especially in places where it's not normally one. Watching the footage from the Santa Cruz harbor, it wasn't the height that was a particular problem, but the fact that all of a sudden you had a 5-10 knot current. And this happened at low tide, so it would have been far worse if it had happened at high tide. There was a pretty spectacular photo of the tsunami that appeared to be around the Emeryville flats. Only about 6 or so inches high, but massive. Had it been at high tide, it could probably have done some nasty things... like, oh for example, the sewage treatment plant next to the Bay Bridge comes to mind. Mike
RE: New tsunami advisory warning - Japan
You guys forget a lot of folks on the list are working on cabling ships and offshore platforms; it's not all about what happens on shore in this industry. Valid point ... however, in deep ocean these things are pretty imperceptible. The effect on ships at the surface is nominal, and offshore platforms are (generally) built with these things in mind: http://www.msnbc.msn.com/id/27324535/ns/technology_and_science-innovation/ At the other extreme, Lituya Bay is a good example of a megatsunami (1,720 feet): http://en.wikipedia.org/wiki/1958_Lituya_Bay_megatsunami
Re: New tsunami advisory warning - Japan
On Mar 28, 2011, at 10:57 AM, Gavin Pearce wrote: Valid point ... however in deep ocean, these things are pretty imperceptible. Here is a video of the recent Japanese tsunami from a JCG ship in the open ocean. The waves (@ ~4:20 and 6:40 into the video) caused them no trouble, but they were certainly not imperceptible. Regards Marshall
Re: New tsunami advisory warning - Japan
On Mar 28, 2011, at 11:28 AM, Marshall Eubanks wrote: Here is a video of the recent Japanese tsunami from a JCG ship in the open ocean. The waves (@ ~4:20 and 6:40 into the video) caused them no trouble, but they were certainly not imperceptible. With the video: http://www.youtube.com/watch?v=4XSBrrueVoQ&feature=player_embedded#at=19 Marshall
RE: New tsunami advisory warning - Japan
JCG ship in the open ocean. Impressive video. The wave height and speed would suggest shallower waters, and that the ship was likely close to a land mass when the video was filmed, rather than open ocean (in the sense of being far out to sea). Not being there, of course, I could easily be incorrect. Anyway, we digress :) Gav
Re: New tsunami advisory warning - Japan
On Mar 28, 2011, at 1:03 PM, Marshall Eubanks wrote: With the video: http://www.youtube.com/watch?v=4XSBrrueVoQ&feature=player_embedded#at=19 Didn't show much, and they were near the epicenter. My friend was on her 44' sailboat about halfway between Galapagos and Easter Island when Chile's earthquake happened, which caused a 10' tsunami in the Galapagos. They never noticed a thing. Tom
Re: New tsunami advisory warning - Japan
On 03/28/2011 01:22 PM, Gavin Pearce wrote: Impressive video. The wave height and speed would suggest shallower waters, and that likely the ship was close to land mass when the video was filmed rather than open ocean. Thanks for the link... Very impressive, though strong storm waves get higher. This is not an open-ocean tsunami; it is probably either direct from the quake source or reflected from the nearby coast. (I'm fairly sure it is the latter, though there isn't a good time reference in the video, since there appears to be land visible in the frame. If the white mass is really land, this definitely does not qualify as open-ocean, which for tsunami purposes means an open-ocean wavelength or so from the nearest land or shallow water - 600-800 miles; you wouldn't see the land.)
Cable damage from tsunamis mostly comes from bulk motion up or down a sloping ocean bottom, or from primary or secondary turbidity currents (basically an underwater avalanche) - unless you are unlucky enough to have the fault break itself cut the cable; this isn't too likely, but with this fault geometry it could have happened. Also, this quake (the 8.9 main one, not the foreshocks or aftershocks, several of each of which were strong enough to trigger a tsunami watch in Hawaii by themselves) had a very extended energy-release time; the ground motion went on for several minutes (see the graph in http://earthquake.usgs.gov/earthquakes/eqinthenews/2011/usc0001xgp/finite_fault.php). That can complicate wave generation a lot. Various mechanisms contribute to tsunami generation; the majority of wave generation in the 1960 Chile event came from landslides secondary to the main earthquake (which was very deep and centered under land...). My memory of papers about the 1964 Alaska quake is that both ground motion and landslides were contributors. Note that in this case, the resulting waves can go in different directions from the same quake. Aside: I worked for the U of Hawaii tsunami research program in the 1960s for a while; we were mainly working on very early prototypes of the deep-ocean pressure sensors that are now deployed. Decent embedded microprocessors didn't exist then (for that matter, *any* microprocessors, even the 1802 or 8008, either of which would have been a grand luxury :-( ). Those finally made these sensors practical. (Ours was set up with a write-only 7-track tape drive, using discrete-transistor logic modules - no practical ICs yet either. It was to be placed on Ocean Station November (about 2/3 of the way from San Francisco to Honolulu), kicked overboard on request from the research people (after a suitable earthquake), then retrieved using a low-power radio beacon a few days later when the cable release timer tripped.)
Modern electronics has improved things :-) At the other extreme, Lituya Bay is a good example of a megatsunami (1,720 feet): http://en.wikipedia.org/wiki/1958_Lituya_Bay_megatsunami This one lists landsliding (or perhaps calving) as the generation mechanism, and both the source and the bay outlet were small enough that the wave probably didn't propagate too far once in the open ocean. -- Pete
Re: Paul Baran, RIP.
On 03/28/2011 03:14 AM, Jay Ashworth wrote: - Original Message - From: Roland Dobbins rdobb...@arbor.net http://www.networkworld.com/news/2011/032811-paul-baran-packet-switching-obit.html Oh hell; now we'll *never* lay the ghost of "packet switching was invented to create a nuclear-war-survivable network." [ reads obit ] See? Happy Landings, Dr B. If it's good enough to use as a source for Wikipedia, who's to tell what is and what isn't factual?
Re: Regional AS model
On Mar 28, 2011, at 2:13 PM, Dave Temkin wrote: On 3/27/11 2:53 AM, Patrick W. Gilmore wrote: On Mar 25, 2011, at 3:33 PM, Owen DeLong wrote: Single AS worldwide is fine with or without a backbone. Only if you want to make use of ugly ugly BGP hacks on your routers, or, you don't care about Site A being able to hear announcements from Site B. You are highly confused. Accepting default is not ugly, especially if you don't even have a backbone connecting your sites. And even if we could argue over default's aesthetic qualities (which, honestly, I don't see how we can), there is no rational person who would consider it a hack. You really should stop trying to correct the error you made in your first post. Remember the old adage about when you find yourself in a hole. Another thing to note is the people who actually run multiple discrete network nodes posting here all said it was fine to use a single AS. One even said the additional overhead of managing multiple ASes would be more trouble than it is worth, and I have to agree with that statement. Put another way, there is objective, empirical evidence that it works. In response, you have some nebulous ugly comment. I submit your argument is, at best, lacking sufficient definition to be considered useful. And in reality, is allowas-in *that* horrible of a hack? If used properly, I'd say not. In a network where you really are split up regionally with no backbone there's really little downside, especially versus relying on default only. -Dave I agree that allowas-in is not as bad as default, but, I still think that having one AS per routing policy makes a hell of a lot more sense and there's really not much downside to having an ASN for each independent site. Owen
Re: Regional AS model
On Mar 28, 2011, at 5:40 PM, Owen DeLong wrote: I agree that allowas-in is not as bad as default, but, I still think that having one AS per routing policy makes a hell of a lot more sense and there's really not much downside to having an ASN for each independent site. I'm glad you ignored Woody and others, who actually run a multi-site, single-AS topology. How many multi-site (non)networks have you run with production traffic? -- TTFN, patrick
IPv6 SEO implications?
I'm attempting to find out information on the SEO implications of testing ipv6 out. A couple of concerns that come to mind are: 1) www.domain.com and ipv6.domain.com are serving the exact same content. Typical SEO standards are to only serve good content from a single domain so information isn't watered down and so that the larger search engines won't penalize. So a big concern is having search results take a hit because content is duplicated through two different domains, even though one domain is ipv4 only and the other is ipv6 only. 2) Not running ipv6 natively, or using 6to4. This (potentially) increases hop count and will put content on a slower GRE tunnel and add some additional time for page load times. 3) ??? Any others that I haven't thought of ??? So basically I'd love to set up some sites for ipv6.domain.com via 6to4 as a phase one, and at some point in the near future implement ipv6 natively inside the datacenter, but I'm somewhat concerned about damaging SEO reputation in the process. Thoughts? -wil
Re: Regional AS model
On Mon, Mar 28, 2011 at 5:40 PM, Owen DeLong o...@delong.com wrote: I agree that allowas-in is not as bad as default, but, I still think that having one AS per routing policy makes a hell of a lot more sense and there's really not much downside to having an ASN for each independent site. Well, let's say I'm a medium/large transit network like Hurricane Electric, with a few far-flung POPs that have backup transit. I've got a POP in Miami, Minneapolis, or Toronto which has a single point of backbone failure, e.g. one circuit/linecard/etc might go down while the routers at the POP, and the routers in the rest of the network, remain functional. What happens? 1) With allowas-in, your remote POP will still learn your customers' routes via any transit you might have in place there. 2) With a default route toward transit (breaking uRPF), you would not learn the routes but would still be able to reach everything. 3) With neither of these solutions, your single-homed customers at the broken POP could not reach single-homed customers elsewhere on your backbone, even if you have backup transit in place. I'm not bashing HE for possibly having a SPOF in backbone connectivity to a remote POP. I'm asking why you don't choose to use a different ASN for these remote POPs. After all, you prefer that solution over allowas-in or default routes. Oh, that's right, sometimes you have a business and/or technical need to operate a single global AS. Vendors have given us the necessary knobs to make this work right. There's nothing wrong with using them, except in your mind. Should every organization with a backbone that has an SPOF grab some more ASNs? No. Should every organization with multiple distinct networks and no backbone use a different ASN per distinct network? IMO the answer is probably yes, but I am not going to say it's always yes.
I'll agree with you in a general sense, but if your hard-and-fast rule is that every distinct network should be its own ASN, you had better start thinking about operational failure modes. Alternatively, you could allow for the possibility that allowas-in has plenty of legitimate application. -- Jeff S Wheeler j...@inconcepts.biz Sr Network Operator / Innovative Network Concepts
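[Editor's aside: the three failure modes above come down to BGP's AS-path loop prevention - a speaker rejects any route whose AS_PATH already contains its own ASN, and allowas-in relaxes that check by permitting a bounded number of occurrences. A minimal Python sketch of that rule; this is a hypothetical illustration, not any vendor's implementation, and the ASNs are made up.]

```python
# Sketch of eBGP AS-path loop prevention plus the "allowas-in" knob.
def accept_route(local_as, as_path, allowas_in=0):
    """Accept the route unless our own ASN appears in the AS_PATH
    more times than allowas-in permits (default: not at all)."""
    return as_path.count(local_as) <= allowas_in

# Site A and Site B both use AS 64500 with no backbone between them;
# Site A hears Site B's prefix via transit AS 64496.
path_via_transit = [64496, 64500]

print(accept_route(64500, path_via_transit))                # False: dropped as a loop
print(accept_route(64500, path_via_transit, allowas_in=1))  # True: accepted
```

This is why a single-AS, no-backbone design needs either allowas-in (case 1) or a default route (case 2): without one of them, the loop check silently discards the other site's routes (case 3).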
Re: New tsunami advisory warning - Japan
Michael Thomas wrote: The other part is that the water becomes a very fast moving river, especially in places where it's not normally one. I don't underestimate the power of even a small tsunami. I have friends rendered homeless by the Santa Cruz tsunami (their boat being their only home). Though I can understand being underwhelmed by a mag 6.x earthquake in that region, considering that, I believe, more than twenty mag 6+ quakes have happened there since the first mag 7.2 earthquake 2-3 days before the mag 9 one. Most recent ones: http://earthquake.usgs.gov/earthquakes/recenteqsww/Maps/10/145_40_eqs.php -- http://goldmark.org/jeff/stupid-disclaimers/ http://linuxmafia.com/~rick/faq/plural-of-virus.html
Re: IPv6 SEO implications?
On Mon, 28 Mar 2011 15:18:30 -0700 Wil Schultz wschu...@bsdboy.com wrote: I'm attempting to find out information on the SEO implications of testing ipv6 out. If you are so concerned about SEO, just dual-stack your site. It works well for me. William
Re: Regional AS model
On Mar 28, 2011, at 2:51 PM, Patrick W. Gilmore wrote: On Mar 28, 2011, at 5:40 PM, Owen DeLong wrote: I agree that allowas-in is not as bad as default, but, I still think that having one AS per routing policy makes a hell of a lot more sense and there's really not much downside to having an ASN for each independent site. I'm glad you ignored Woody and others, who actually run a multi-site, single-AS topology. How many multi-site (non)networks have you run with production traffic?
Over the years, about a dozen or so. Owen
Re: IPv6 SEO implications?
On Mar 28, 2011, at 3:18 PM, Wil Schultz wrote: So basically I'd love to set up some sites for ipv6.domain.com via 6to4 as a phase one, and at some point in the near future implement ipv6 natively inside the datacenter, but I'm somewhat concerned about damaging SEO reputation in the process. If you're worried about SEO, go with native IPv6 and then deploy AAAAs for www.domain.foo. It's been working just fine for www.he.net for years. Owen
Re: IPv6 SEO implications?
On Mon, 2011-03-28 at 15:55 -0700, Owen DeLong wrote: If you're worried about SEO, go with native IPv6 and then deploy AAAAs for www.domain.foo. Why is native IPv6 needed? I'd have thought a tunnel would be fine, too. Regards, K. -- ~~~ Karl Auer (ka...@biplane.com.au) +61-2-64957160 (h) http://www.biplane.com.au/kauer/ +61-428-957160 (mob) GPG fingerprint: DA41 51B1 1481 16E1 F7E2 B2E9 3007 14ED 5736 F687 Old fingerprint: B386 7819 B227 2961 8301 C5A9 2EBC 754B CD97 0156
RE: IPv6 SEO implications?
Why is native IPv6 needed? I'd have thought a tunnel would be fine, too. I believe the concern is that the higher latency of a tunnel would impact SEO rankings.
Re: IPv6 SEO implications?
On Mar 28, 2011, at 7:10 PM, Karl Auer wrote: Why is native IPv6 needed? I'd have thought a tunnel would be fine, too. So why does www A 127.0.0.1 / www AAAA ::1 preclude a tunnel? I can't get native here, so my IPv6 is tunneled through HE (thanks, HE), but that doesn't change dual DNS entries. (Note: used loopback as an example.) Tom
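[Editor's aside: Tom's loopback example generalizes - one name carries both an A and a AAAA, and each client uses whichever it can. A toy Python sketch of the (much simplified, RFC 6724-style) preference a dual-stack client applies; real address selection also weighs scope, policy tables and reachability, and the addresses here are just the loopback pair from the example above.]

```python
import ipaddress

# The answers a resolver might return for "www" with both records
# published (Tom's loopback example).
answers = ["127.0.0.1", "::1"]

# Simplest possible dual-stack behaviour: prefer the IPv6 answer;
# an IPv4-only client simply ignores the AAAA.
def prefer_v6(addrs):
    return sorted(addrs, key=lambda a: ipaddress.ip_address(a).version, reverse=True)

print(prefer_v6(answers))   # ['::1', '127.0.0.1']
```

The point for the SEO discussion: whether the AAAA reaches the server natively or over a tunnel is invisible at the DNS layer - the records are the same either way.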
Re: IPv6 SEO implications?
On Mar 28, 2011, at 7:17 PM, Nathan Eisenberg wrote: Why is native IPv6 needed? I'd have thought a tunnel would be fine, too. I believe the concern is that the higher latency of a tunnel would impact SEO rankings. True, but you live with what you can get access to ;-) Tom
Re: IPv6 SEO implications?
On Mar 28, 2011, at 3:55 PM, Owen DeLong wrote: If you're worried about SEO, go with native IPv6 and then deploy AAAAs for www.domain.foo. It's been working just fine for www.he.net for years. So far the consensus is to run dual stack natively. While this definitely is the way things should be set up in the end, I can see some valid reasons to run ipv4 and ipv6 on separate domains for a while before final configuration. For example, if I'm in an area with poor ipv6 connectivity I'd like to be given the option of explicitly going to an ipv4 site vs the ipv6 version. I'd also like to not damage SEO in the process though. ;-) -wil
Re: IPv6 SEO implications?
I would get IPv6 connectivity, add a record under a new name such as ipv6 or www6 (but not www), and do as many comparative IPv4 vs IPv6 traceroutes from as many route servers as possible. Then you will have the data you need to actually make an informed decision, rather than just guessing how it will behave. Remove the temp record and add a real quad-A for www only if you liked what you saw. I assume the name servers are also available over IPv6, including glue? \n On 29/03/2011, at 9:25, Wil Schultz wschu...@bsdboy.com wrote: So far the consensus is to run dual stack natively. While this definitely is the way things should be set up in the end, I can see some valid reasons to run ipv4 and ipv6 on separate domains for a while before final configuration. For example, if I'm in an area with poor ipv6 connectivity I'd like to be given the option of explicitly going to an ipv4 site vs the ipv6 version. I'd also like to not damage SEO in the process though. ;-) -wil
Re: The state-level attack on the SSL CA security model
On 3/25/2011 at 2:21 AM, Florian Weimer fwei...@bfk.de wrote:

* Roland Dobbins: On Mar 24, 2011, at 6:41 PM, Florian Weimer wrote: Disclosure devalues information.

I think this case is different, given the perception of the cert as a 'thing' to be bartered.

Private keys have been traded openly for years. For instance, when your browser tells you that a web site has been verified by Equifax (exact phrasing in the UI may vary), it's just not true. Equifax has sold its private key to someone else long ago, and chances are that the key material has changed hands a couple of times since. I can't see how a practice that is completely acceptable at the root certificate level is a danger so significant that state-secret-like treatment is called for once end-user certificates are involved.

Any large, well-funded national-level intelligence agency almost certainly has keys to a valid CA distributed with any browser or SSL package. It would be trivial for the US Gov't (and by extension, the whole AUSCANNZUKUS intelligence community) to simply form a shell-company CA that could get a trusted cert into the distros, or enlist a legit CA to do their patriotic duty (along with some $$$) and give up a key. Heck, it's so easy, private industry sells this as a product for the law enforcement community. It's an easy recipe:

1) Start your own CA (or buy an existing one, which may be easier, as Florian points out).
2) Get your key put in Windows, Firefox, Opera, etc.
3) Build an appliance that uses your key to do man-in-the-middle (MITM) attacks on the fly.
4) Sell the appliance to law enforcement (or anyone else with the money, maybe a smaller nation's intelligence apparatus?).
5) Profit!

Just Google around for commercial products aimed at LI that have this capability. Commercial SSL/TLS, i.e. using built-in CAs, offers no protection against nation-states at the intelligence or law enforcement level.

-- Crist Clark
Network Security Specialist, Information Systems
Globalstar
408 933 4387
RE: IPv6 SEO implecations?
I would be getting ipv6 connectivity, adding an unknown record such as ipv6 or www6 (but not www), and doing as many comparative ipv4 vs. ipv6 traceroutes from as many route servers as possible. Then you will have the data you need to actually make an informed decision rather than just guessing how it will behave. Remove the temp record and add a real quad-A (AAAA) for www only if you liked what you saw. I assume the name servers are also available over ipv6, including glue?

Why do you even need a record to do that? Just do a traceroute to the v6 address. The temporary record seems to do nothing useful in your proposed procedure. Easiest hack to test site usability: modify your hosts file. Don't even publish the record in DNS until you're ready. Then there are no SEO implications. :)

So far the consensus is to run dual stack natively. While this definitely is the way things should be set up in the end, I can see some valid reasons to run ipv4 and ipv6 on separate domains for a while before final configuration. For example, if I'm in an area with poor ipv6 connectivity I'd like to be given the option of explicitly going to an ipv4 site vs the ipv6 version. I'd also like to not damage SEO in the process though. ;-)

If you're going to expose the site via a separate hostname (v6.bobdole.com), create a v6.robots.txt file that tells Google not to index v6.bobdole.com. Use an .htaccess rule to rewrite requests for robots.txt based on the Host header, so v4 requests get the v4.robots.txt, and v6 requests get the v6.robots.txt, which tells Google not to index things.

Nathan
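Nathan's .htaccess suggestion might look roughly like the sketch below. This is illustrative only: it assumes mod_rewrite is enabled, and v6.bobdole.com plus the two filenames (v4.robots.txt, v6.robots.txt) are the hypothetical names from the message above.

```apache
# Sketch only: serve a different robots.txt depending on the Host header.
# Assumes mod_rewrite is enabled and both files exist in the document root.
RewriteEngine On

# Requests arriving for the IPv6 test hostname get the "deny all" variant...
RewriteCond %{HTTP_HOST} ^v6\.bobdole\.com$ [NC]
RewriteRule ^robots\.txt$ /v6.robots.txt [L]

# ...everyone else gets the normal one.
RewriteRule ^robots\.txt$ /v4.robots.txt [L]
```

The v6.robots.txt itself would simply contain "User-agent: *" followed by "Disallow: /" to keep crawlers off the test hostname.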
Re: IPv6 SEO implecations?
Why do you even need a record to do that? Just do a traceroute to the v6 address. The temporary record seems to do nothing useful in your proposed procedure. Easiest hack to test site usability: modify your hosts file. Don't even publish the record in DNS until you're ready. Then there are no SEO implications. :)

You could go direct to the v6 addy, but using your hosts file for a DNS record isn't going to work for the remote route servers I suggest testing from. Using a temp AAAA doesn't hurt, or lose you anything, and is technically a more accurate test; ultimately I leave it to your discretion.
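The hosts-file hack mentioned above is a one-liner. A sketch, with an illustrative documentation-prefix address and hostname (your server's real IPv6 address goes here):

```
# /etc/hosts (or %SystemRoot%\System32\drivers\etc\hosts on Windows):
# map the production hostname to the not-yet-published IPv6 address, so
# local browsers reach the site over v6 while public DNS still serves
# only the A record.
2001:db8::80    www.domain.com
```

This only affects the machine you edit, which is exactly the point: no AAAA is visible to crawlers until you publish one.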
Re: IPv6 SEO implecations?
On Mar 28, 2011, at 4:10 PM, Karl Auer wrote:

On Mon, 2011-03-28 at 15:55 -0700, Owen DeLong wrote: If you're worried about SEO, go with native IPv6 and then deploy AAAAs for WWW.domain.foo.

Why is native IPv6 needed? I'd have thought a tunnel would be fine, too.

He was worried about the latency of tunnels creating penalties for SEO purposes, but, otherwise, yes, that works too. Since he stated a desire to avoid tunnels as an initial area of concern, I went with his original statement.

Owen
Re: IPv6 SEO implecations?
On Mar 28, 2011, at 4:20 PM, TR Shaw wrote:

On Mar 28, 2011, at 7:10 PM, Karl Auer wrote:

On Mon, 2011-03-28 at 15:55 -0700, Owen DeLong wrote: If you're worried about SEO, go with native IPv6 and then deploy AAAAs for WWW.domain.foo.

Why is native IPv6 needed? I'd have thought a tunnel would be fine, too.

So why does

www A 127.0.0.1
www AAAA ::1

preclude a tunnel? I can't get native here, so my IPv6 is tunneled thru HE (thanks, HE), but that doesn't change dual DNS entries. (Note: used loopback as an example.)

Tom

Well, it's hard to tunnel to a loopback address, but, using a better example:

www IN A 192.0.2.50
    IN AAAA 2001:db8::2:50

would not preclude a tunnel at all. The issue is that he seemed concerned with additional latency from a tunnel resulting in SEO penalties, so I suggested native as a resolution to that concern.

Owen
Re: IPv6 SEO implecations?
In a message written on Mon, Mar 28, 2011 at 03:18:30PM -0700, Wil Schultz wrote: I'm attempting to find out information on the SEO implications of testing ipv6 out.

I don't run a web site where SEO is a top priority, so I don't track such things. Quite simply, who's crawling on IPv6? That is, will any of the search engines even notice?

-- Leo Bicknell - bickn...@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
Re: IPv6 SEO implecations?
On Mar 28, 2011, at 9:50 PM, Leo Bicknell wrote:

In a message written on Mon, Mar 28, 2011 at 03:18:30PM -0700, Wil Schultz wrote: I'm attempting to find out information on the SEO implications of testing ipv6 out.

I don't run a web site where SEO is a top priority, so I don't track such things. Quite simply, who's crawling on IPv6? That is, will any of the search engines even notice?

The only crawling I have seen over IPv6 has come from Google - but I have only seen that on IPv6-only sites, not dual-stack sites:

2001:4860:4801:1302:0:6006:1300:b075 - - [28/Mar/2011:21:54:12 -0400] "GET /p/OWJjZD HTTP/1.1" 200 3790 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
Re: IPv6 SEO implecations?
On Mon, Mar 28, 2011 at 5:18 PM, Wil Schultz wschu...@bsdboy.com wrote:

I'm attempting to find out information on the SEO implications of testing ipv6 out. A couple of concerns that come to mind are: 1) www.domain.com and ipv6.domain.com are serving the exact same content. Typical SEO standards are to only serve good content from a single domain so information isn't watered down and so that the larger search engines won't penalize. So a big concern is having search results take a hit because content is duplicated through two different domains, even though one domain is ipv4 only and the other is ipv6 only.

The real name for SEO is search-engine manipulation. And the moment you indicate "typical SEO standards", the search engine developers have likely already become aware of the existence of the problem/tactic and fiddled with the knobs plenty of times since then.

Sometimes search engines penalize what they see to be duplicate content in the indexes. Spammers sometimes try to include the same content in many domains, or steal content from other sites to enhance page rank. Big search engines offer some method of canonicalization or selection of a preferred domain through sitemaps. Use the tools provided by your search engine to tell them ipv6.domain.com is just domain.com.

If IPv4 and IPv6 are combined in one index, there is a risk that the IPv4 pages could get penalized and only the IPv6 pages show at the top (or vice versa). You could use robots.txt to block access to one of the sites for just the robots that penalized, or a rel=nofollow. If even necessary... I for one am completely unconvinced that major search engines are currently penalizing in this scenario solely because a site was duplicated to an ipv6 subdomain. Keep in mind there is a search engine using this practice for their own domain.

Who knows... in the future they may be penalizing sites that _don't_ have an IPv6 subdomain or v6 dual-stacking (assuming they are not penalizing that / rewarding IPv6-connected sites already). In this case, attempting to put old SEO tactics first may hurt visitor experience more than help.

ipv6.domain.com available over IPv6 and domain.com available over IPv4 are not really different domains; I expect search engines may keep IPv4 and IPv6 indexes separate, at least for a time, since there are IPv4-only nodes who would not be able to access IPv6 hyperlinks in a search results page.

-- -JH
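One concrete form of the canonicalization mentioned above is a rel=canonical link element on every page of the ipv6 mirror. Sketch only: rel=canonical is a hint to search engines rather than a directive, and the hostnames and path here are illustrative.

```html
<!-- Hypothetical sketch: each page served from ipv6.domain.com points
     search engines at the same page on the primary hostname, so the
     mirror host is folded into www.domain.com's ranking instead of
     competing with it as duplicate content. -->
<link rel="canonical" href="http://www.domain.com/some/page.html">
```

The sitemap/preferred-domain tools the message refers to achieve the same end at the site level rather than per page.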
Anyone have info on the Dallas Infomart power outage?
Anyone have details?
Re: Paul Baran, RIP.
On Mon, 28 Mar 2011 09:14:18 -0400 (EDT) Jay Ashworth j...@baylink.com wrote: Oh hell; now we'll *never* lay the ghost of "packet switching was invented to create a nuclear-war-survivable network."

Maybe you're confusing the invention of packet switching with the creation of the ARPANET? Survivability, particularly to enemy attack, was a prime motivator for Baran's original ideas as published in the 1964 IEEE Transactions on Communications paper. ARPANET's motivation was apparently very different. The Network World article looks to be factually accurate to me.

Looks like this was used as a primary source for the article: http://www.rand.org/about/history/baran.list.html

John
Re: IPv6 SEO implecations?
On Mar 29, 2011, at 1:21 AM, Wil Schultz wrote:

So far the consensus is to run dual stack natively. While this definitely is the way things should be set up in the end, I can see some valid reasons to run ipv4 and ipv6 on separate domains for a while before final configuration. For example, if I'm in an area with poor ipv6 connectivity I'd like to be given the option of explicitly going to an ipv4 site vs the ipv6 version. I'd also like to not damage SEO in the process though. ;-)

There has been a discussion of this in v6ops, around

http://tools.ietf.org/html/draft-ietf-v6ops-v6-aaaa-whitelisting-implications
"IPv6 AAAA DNS Whitelisting Implications", Jason Livingood, 22-Feb-11

and

http://tools.ietf.org/html/draft-ietf-v6ops-happy-eyeballs
"Happy Eyeballs: Trending Towards Success with Dual-Stack Hosts", Dan Wing, Andrew Yourtchenko, 14-Mar-11

In that context, you might review http://www.ietf.org/proceedings/80/slides/v6ops-12.pdf

Where you find a name like ipv6.example.com, such as ipv6.google.com and www.v6.facebook.com, it is generally a place where the service is testing the IPv6 configuration prior to listing both the A and the AAAA record under the same name. The upside of giving them the same name is that the same content is viewable using IPv4 and IPv6; being IP-agnostic is a good thing.

Unfortunately, at least right now, there is a side-effect: a temporary network problem (routing loop, etc.) on one technology can be worked around by using the other, but the browsers don't necessarily fall back as one would wish. This works negatively against IPv6 deployment and customer satisfaction; it is not unusual for tech support people to respond to such questions with "turn off IPv6 and you won't have that problem." Hence, content providers often separate the names to ensure that people only get the IPv6 experience if they expect it.

And Google, among others, whitelists people for IPv6 DNS service based on their measurements of the client's path to Google - if a bad experience is likely, they try to prevent it by not offering IPv6 names.

In general, I don't see a lot of difference between A and AAAA accesses, but I have had glitches when there was a network glitch. On one occasion, there was an IPv6 routing loop en route to www.ietf.org, but not on the IPv4 path. The net result was a huge delay - it took nearly two minutes to download a page. The amusing part was that the same routing loop got in the way of reporting the issue to HE; I wound up sending an email rather than filing a case. Once it was fixed, matters returned to normal.
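The "browsers don't necessarily fall back as one would wish" problem can be made concrete with a short sketch. The code below is only an illustration of naive sequential fallback with a short per-attempt timeout, not the Happy Eyeballs algorithm itself (which races the v6 and v4 connection attempts in parallel); the function name and defaults are made up for the example.

```python
import socket

def connect_with_fallback(host, port, timeout=2.0):
    """Illustrative sketch: try each address getaddrinfo returns (IPv6
    typically sorts first on a dual-stack host), moving on to the next
    one after a short timeout. A browser with a long default connect
    timeout and no such fallback is what produces the multi-minute
    stalls described above when the v6 path is broken."""
    last_err = None
    for family, socktype, proto, _canon, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(sockaddr)
            return sock  # first address that answers wins
        except OSError as err:
            last_err = err
            sock.close()
    # No address family worked at all.
    raise last_err if last_err else OSError("no addresses for %r" % host)
```

Real Happy Eyeballs additionally caches which family worked recently, so subsequent connections don't pay the timeout penalty again.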