Fwd: [listname] Fwd: Connectivity status for Egypt

2011-01-29 Thread Owen DeLong
Anonymized because I am forwarding without permission...

 
 Dears,
 
 As per what I know from friends:
 
 1. All Internet is down. Apparently one network is still working, but it seems 
 to be serving only the stock exchange and a few others.
 2. SMS is down.
 3. Landlines are down (at least internationally).
 4. Mobile networks seem to be on and off depending on location. For example, 
 Vodafone is apparently working in Cairo while other networks are not!!
 
 God help them and bring Egypt and Egyptians out of this unharmed, inshallah.
 Regards..

Owen



Re: Need provider suggestions - BGP transit over GRE tunnel

2011-01-29 Thread Franck Martin
Just make sure you don't shoot yourself in the foot by letting the best route 
to the far end of the tunnel be via the tunnel itself...

I use it too: http://www.avonsys.com/blogpost367 but because I have no other 
choice.

- Original Message -
From: Robert Johnson fasterfour...@gmail.com
To: C. Jon Larsen jlar...@richweb.com, nanog@nanog.org
Sent: Saturday, 29 January, 2011 6:48:50 PM
Subject: Re: Need provider suggestions - BGP transit over GRE tunnel

My network spans a multicity geographic area using microwave radio
links. The point of the GRE tunnel is to allow me to establish a BGP
session to another AS using a consumer grade Internet connection
(cheap) over the public Internet. I don't want to build out additional
microwave paths to a new datacenter to become multihomed.

On Fri, Jan 28, 2011 at 5:36 PM, C. Jon Larsen jlar...@richweb.com wrote:

 I have read your email a few times and I don't see how this makes sense.

 Why do you need a public AS and PI space? Your gre tunnel won't need it or be
 able to use it. A gre tunnel is just a replacement for a physical pipe.

 If your datacenter based presence goes down, you will need a pipe at your
 office, or some other location speaking bgp, that can announce your block
 anyway.






Re: Ipv6 for the content provider

2011-01-29 Thread George B.
On Fri, Jan 28, 2011 at 8:04 PM, Owen DeLong o...@delong.com wrote:

 The IPv6 geo databases actually tend to be about on par with the IPv4
 ones from what I have seen so far (which is admittedly limited as I don't
 really use geolocation services). However, I still think it is important
 for
 people considering deploying something as you described to be aware
 of the additional things that may break and factor that into their
 decision about how and what to deploy.

 Owen



I don't believe that will be the case going forward, because our allocation 
from ARIN will likely be used globally, and others are likely to come to the 
same conclusion.  While I had initially considered obtaining regional 
allocations for operations in that region, the overhead of dealing with 
multiple registries, differing policies, multiple fees, etc. didn't seem 
worth the trouble.  The ARIN allocation will likely be used in the EU and 
APAC regions in addition to NA.


Re: [menog] Fwd: Connectivity status for Egypt

2011-01-29 Thread Vesna Manojlovic

Dear all,

On 1/28/11 1:07 AM, Richard Barnes wrote:

Hey all,
Some NANOG participants are seeing and hearing reports of disrupted
communications in Egypt.  Are any of you seeing the same thing?
--Richard


Here is an analysis of the BGP table showing what happened to the 
Internet in Egypt:


http://stat.ripe.net/egypt/

https://labs.ripe.net/Members/akvadrako/live_eqyptian_internet_incident_analysis

Vesna Manojlovic
RIPE NCC Trainer



-- Forwarded message --
From: Danny O'Brien da...@spesh.com
Date: Thu, Jan 27, 2011 at 6:47 PM
Subject: Connectivity status for Egypt
To: NANOG Operators' Group nanog@nanog.org


Around 2236 UTC, we lost all Internet connectivity with our contacts in
Egypt, and I'm hearing reports of (in declining order of confirmability):

1) Internet connectivity loss on major (broadband) ISPs
2) No SMS
3) Intermittent connectivity with smaller (dialup?) ISPs
4) No mobile service in major cities -- Cairo, Alexandria

The working assumption here is that the Egyptian government has made the
decision to shut down all external, and perhaps internal electronic
communication as a reaction to the ongoing protests in that country.

If anyone can provide more details as to what they're seeing, the extent,
plus times and dates, it would be very useful. In moments like this there
are often many unconfirmed rumors: I'm seeking concrete reliable
confirmation which I can pass onto the press and those working to bring some
communications back up (if you have a ham radio license, there is some very
early work to provide emergency connectivity. Info at:
http://pastebin.com/fHHBqZ7Q )

Thank you,

--
dobr...@cpj.org
Danny O'Brien, Committee to Protect Journalists
gpg key: http://www.spesh.com/danny/crypto/dannyobrien-key20091106.txt





Re: DSL options in NYC for OOB access

2011-01-29 Thread Andy Ashley

On 29/01/2011 00:16, Bill Stewart wrote:


How much bandwidth do you need?  Is a dialup modem fast enough?

Hi,

Not much at all. Just enough for a telnet/ssh session.
A dialup modem would likely do the trick, but that raises other issues about 
dialing up from the UK-based NOC, so I think DSL will be a little more 
flexible for us in this case.
If we must have a telephone line installed we may as well get DSL service over 
that.
Point taken though about reliability of DSL service vs plain PSTN.

I have had some offers from the right sort of companies.
One in particular has everything we need (low speed, static ip, no red tape & a 
clue) at half the price of the others (ask me off list if you want the name).
Also suggested to me was doing a swap with another provider in the facility, but 
it seems as if cross connects may be prohibitively expensive between 
suites/floors there.
I'm going to wait for pricing on this and make a choice then.

Thanks to all who responded.

Regards,
Andy.









Re: DSL options in NYC for OOB access

2011-01-29 Thread Randy McAnally
On Sat, 29 Jan 2011 13:35:01 +, Andy Ashley wrote

 if you want the name). Also suggested to me was doing a swap with 
 another provider in the facility but it seems as if cross connects 
 may be prohibitively expensive between suites/floors there. I'm going 
 to wait for pricing on this and make a choice then.

Have you looked into the cross connect cost for your DSL line?  They typically
aren't very cheap either.

~Randy




Re: Bogons

2011-01-29 Thread Patrick W. Gilmore
On Jan 28, 2011, at 8:41 PM, Matthew Palmer wrote:
 On Fri, Jan 28, 2011 at 12:35:43PM -0800, Jacob Broussard wrote:
 Static bogons are the bane of my existence...  The pain of trying to explain
 to someone for MONTHS that they haven't updated their reference, with
 traceroutes to back it up, and they continue to say that it has something to
 do with my network.
 
 THey're right -- your network is using an address range they've chosen to
 configure their equipment not to accept... grin

The RIRs have decided to hand out the polluted space to large providers to 
ensure the greatest damage to those who are too stupid to update their filters.

If you know anyone who runs a network, tell them to remove their static bogon 
filters today.  Seriously.
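To make the failure mode concrete, here is a minimal IOS-style sketch of the 
kind of static bogon filter that goes stale (ACL number and interface are 
hypothetical). 1.0.0.0/8 sat unallocated for years and was issued to APNIC in 
January 2010, at which point a line like this silently starts blackholing 
legitimate networks:

  ! stale bogon filter -- 1/8 was allocated to APNIC in January 2010
  access-list 110 deny ip 1.0.0.0 0.255.255.255 any
  access-list 110 permit ip any any
  !
  interface GigabitEthernet0/0
   ip access-group 110 in

The safer pattern is to remove such static lists entirely, or feed them from 
an automatically updated source (e.g. the Team Cymru bogon feed) rather than 
a hand-maintained config.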

-- 
TTFN,
patrick




Re: [arin-announce] ARIN Resource Certification Update

2011-01-29 Thread Alex Band
John,

Thanks for the update. With regards to offering a hosted solution, as you know 
that is the only thing the RIPE NCC currently offers. We're developing support 
for the up/down protocol as I write this.

To give you some perspective: one month after launching the hosted RIPE NCC 
Resource Certification service, 216 LIRs are using it in the RIPE region and 
have created 169 ROAs covering 467 prefixes. This means 40151 /24 IPv4 prefixes 
and 7274499 /48 IPv6 prefixes now have a valid ROA associated with them.
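(For readers checking the arithmetic: a ROA on a prefix shorter than /24 covers 
2^(24 - prefixlen) "/24 equivalents", which is how a few hundred prefixes 
expand into tens of thousands of /24s. A minimal Python sketch of that 
calculation, with an illustrative prefix:

  import ipaddress

  def slash24_equivalents(prefix: str) -> int:
      # a /16 covers 2**(24-16) = 256 /24s; prefixes longer than /24 count as 0 here
      net = ipaddress.ip_network(prefix)
      return 2 ** (24 - net.prefixlen) if net.prefixlen <= 24 else 0

  print(slash24_equivalents("193.0.0.0/16"))  # 256

The same logic with 48 in place of 24 yields the /48 figure for IPv6.)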

I realize a hosted solution is not ideal; we're very open about that. But at 
least in our region, it seems there are quite a number of organizations who 
understand and accept the security trade-off of not being the owner of the 
private key for their resource certificate, and trust their RIR to run a 
properly secured and audited service. So the question is: if the RIPE NCC had 
required everyone to run their own certification setup using the open 
source tool-sets Randy mentions, would there be this much certified address 
space now? 

Looking at the depletion of IPv4 address space, it's going to be crucially 
important to have validatable proof of who is the legitimate holder of Internet 
resources. I fear that by not offering a hosted certification solution, real 
world adoption rates will rival those of IPv6 and DNSSEC. Can the Internet 
community afford that?

Alex Band
Product Manager, RIPE NCC

P.S. For those interested in which prefixes and ASs are in the RIPE NCC ROA 
Repository, here is the latest output in CSV format:
http://lunimon.com/valid-roas-20110129.csv



On 24 Jan 2011, at 21:33, John Curran wrote:

 Copy to NANOG for those who aren't on ARIN lists but may be interested in 
 this info.
 FYI.
 /John
 
 Begin forwarded message:
 
 From: John Curran jcur...@arin.net
 Date: January 24, 2011 2:58:52 PM EST
 To: arin-annou...@arin.net
 Subject: [arin-announce] ARIN Resource Certification Update
 
 ARIN continues its preparations for offering production-grade resource 
 certification
 services for Internet number resources in the region.  ARIN recognizes the 
 importance
 of Internet number resource certification in the region as a key element of 
 further
 securing Internet routing, and plans to roll out Resource Public Key 
 Infrastructure (RPKI)
 at the end of the second quarter of 2011 with support for the Up/Down 
 protocol for those
 ISPs who wish to certify their subdelegations via their own RPKI 
 infrastructure.
 
 ARIN continues to evaluate offering a Hosted Resource Certification service 
 for this
 purpose (as an alternative to organizations having to run their own RPKI 
 infrastructure),
 but at this time it remains under active consideration and is not committed.  
  We look
 forward to discussing the need for this type of service and the organizational 
 implications at our upcoming ARIN Members Meeting in April in San Juan, PR.
 
 FYI,
 /John
 
 John Curran
 President and CEO
 ARIN
 
 
 





Re: Need provider suggestions - BGP transit over GRE tunnel

2011-01-29 Thread Valdis . Kletnieks
On Sun, 30 Jan 2011 00:49:34 +1300, Franck Martin said:
 Just make sure you don't shoot yourself in the foot by letting the best route
 to the far end of the tunnel be via the tunnel itself...

Did you mean routing *your* end of the tunnel to the tunnel itself, or
announcing to the entire world that The Internet was best reached via your
tunnel?  I think we've seen spectacular failures in both modes...





Re: [arin-announce] ARIN Resource Certification Update

2011-01-29 Thread John Curran
On Jan 29, 2011, at 10:26 AM, Alex Band wrote:
 John,
 
 Thanks for the update. With regards to offering a hosted solution, as you 
 know that is the only thing the RIPE NCC currently offers. We're developing 
 support for the up/down protocol as I write this.

Alex - Yes, congrats on rolling out that offering!  Also, I wish the folks at 
the RIPE NCC the very best on the up/down protocol work, since (as you're 
likely aware) ARIN is planning to leverage that effort in our up/down service 
development.  :-)

 I realize a hosted solution is not ideal, we're very open about that. But at 
 least in our region, it seems there are quite a number of organizations who 
 understand and accept the security trade-off of not being the owner of the 
 private key for their resource certificate and trust their RIR to run a 
 properly secured and audited service. So the question is, if the RIPE NCC 
 would have required everyone to run their own certification setup using the 
 open source tool-sets Randy mentions, would there be this much certified 
 address space now?

For many organizations, a hosted service offers the convenience that would make 
deployment likely.  The challenge that ARIN faces isn't with respect to whether 
our community trusts us to run a properly secured and audited service, but the 
potential implied liability to ARIN if a party alleges that the hosted service 
performs incorrectly.  It is rather challenging to show that a relying party 
is legally bound to the terms of service in a certificate practices statement, 
and this means that there are significant risks in offering the service 
(even with it performing perfectly), since many of the normal contractual 
protections are not available.

Imagine an organization that incorrectly enters its AS number during ROA 
generation, and succeeds in taking itself off the air for a prolonged period. 
Depending on the damages the organization suffered as a result, it may want to 
claim that ARIN's Hosted RPKI system performed incorrectly, as may those 
folks who were impacted by not being able to reach the organization.  While 
ARIN's hosted system would be performing perfectly, the risk and costs to ARIN 
in trying to defend against such (spurious) claims could be very serious.  
Ultimately, the ARIN Board needs to weigh such matters of benefit and 
risk in full against the mission and determine the appropriate direction.

 Looking at the depletion of IPv4 address space, it's going to be crucially 
 important to have validatable proof who is the legitimate holder of Internet 
 resources. I fear that by not offering a hosted certification solution, real 
 world adoption rates will rival those of IPv6 and DNSSEC. Can the Internet 
 community afford that?


The RPKI information regarding the valid address holder is effectively the same 
as that contained in WHOIS, so readily available evidence of resource holdership 
exists today.  Parties already use information from the RIRs, via WHOIS and 
routing registries, to do various forms of resource & route validation; resource 
certification simply provides a clearer, more secure & more consistent model 
for this information.  I'm not saying that resource certification isn't 
important, but do not think that characterizing its need as "crucial" 
specifically due to IPv4 depletion is the complete picture.  

ARIN recognizes the importance of resource certification, hence its 
commitment to supporting resource certification for resources in the region via 
the Up/Down protocol. There is no decision on a hosted RPKI offering at this 
time, but that is because we want to be able to discuss the benefits and risks 
with the community at our upcoming April meeting, to make sure there is 
significant demand for the service as well as appropriate mechanisms for safely 
managing the risks involved.  I hope this clarifies the update message that I 
sent out earlier, and provides some insight into the considerations that have 
led to ARIN's position on resource certification.

Thanks!
/John

John Curran
President and CEO
ARIN




Re: Need provider suggestions - BGP transit over GRE tunnel

2011-01-29 Thread C. Jon Larsen


On Sun, 30 Jan 2011, Franck Martin wrote:


Just make sure you don't shoot yourself in the foot by letting the best route 
to the far end of the tunnel be via the tunnel itself...


Right, nail up a /32 static route for the remote gre tunnel endpoint on 
each side. That /32 is nailed up to the next hop that you want the gre tunnel 
to always traverse. If that next hop becomes unavailable, the tunnel will 
go down, which is what you want rather than the tunnel trying to come up 
across some other path it can find.
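A minimal IOS-style sketch of that arrangement (all addresses hypothetical; 
adjust to your platform):

  ! pin the far tunnel endpoint to the path it must always take
  ip route 203.0.113.2 255.255.255.255 198.51.100.1
  !
  interface Tunnel0
   ip address 10.0.0.1 255.255.255.252
   tunnel source 192.0.2.1
   tunnel destination 203.0.113.2
   keepalive 5 3
  !
  ! BGP then runs over 10.0.0.0/30; never let a route for 203.0.113.2
  ! be learned via Tunnel0 itself, or the tunnel recurses and flaps

GRE keepalives (the 'keepalive 5 3' line) make the tunnel interface go down 
when the far end stops answering, so routing fails over cleanly instead of 
blackholing traffic into a dead tunnel.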



I use it too: http://www.avonsys.com/blogpost367 but because I have no other 
choice.

- Original Message -
From: Robert Johnson fasterfour...@gmail.com
To: C. Jon Larsen jlar...@richweb.com, nanog@nanog.org
Sent: Saturday, 29 January, 2011 6:48:50 PM
Subject: Re: Need provider suggestions - BGP transit over GRE tunnel

My network spans a multicity geographic area using microwave radio
links. The point of the GRE tunnel is to allow me to establish a BGP
session to another AS using a consumer grade Internet connection
(cheap) over the public Internet. I don't want to build out additional
microwave paths to a new datacenter to become multihomed.

On Fri, Jan 28, 2011 at 5:36 PM, C. Jon Larsen jlar...@richweb.com wrote:


I have read your email a few times and I don't see how this makes sense.

Why do you need a public AS and PI space? Your gre tunnel won't need it or be
able to use it. A gre tunnel is just a replacement for a physical pipe.

If your datacenter based presence goes down, you will need a pipe at your
office, or some other location speaking bgp, that can announce your block
anyway.












Re: [menog] Fwd: Connectivity status for Egypt

2011-01-29 Thread Ingo Flaschberger


Here is an analysis of the BGP table showing what happened to the Internet in 
Egypt:


http://stat.ripe.net/egypt/

https://labs.ripe.net/Members/akvadrako/live_eqyptian_internet_incident_analysis


The CIDR Report (http://www.cidr-report.org) also shows this very well:

Recent Table History

  Date        Prefixes    CIDR Agg
  26-01-11    345293      201663
  27-01-11    344858      200621
  28-01-11    342381      201194

Top 20 Net Decreased Routes per Originating AS

  Prefixes    Change    ASnum     AS Description
  -102        102-0     AS5536    Internet-Egypt


Kind regards,
Ingo Flaschberger





Re: Found: Who is responsible for no more IP addresses

2011-01-29 Thread Joly MacFie
Thanks for the link to the vid. I see Geoff Huston spoke too. I've embedded
both on
http://www.isoc-ny.org/p2/?p=1713

FWIW Vint has been using this as an intro to IPv6 for years.. in fact I've
got some video to edit of him speaking in 1999 - I'll look for it..

j

On Sat, Jan 29, 2011 at 2:43 AM, Ben McGinnes b...@adversary.org wrote:

 On 28/01/11 7:03 AM, Jay Ashworth wrote:
  Let me clarify:
 
  The original question was (so far as I could see): Was Fox making up the
  quote where Vint took the blame for IPv4 exhaustion?
 
  The answer, of course, was no, they didn't; lots of people have the
 quote.

 If you want to see and hear footage of him repeating this and
 explaining, his keynote address to Linux Conf Australia is here:

 http://linuxconfau.blip.tv/file/4683393/


 Regards,
 Ben




-- 
---
Joly MacFie  218 565 9365 Skype:punkcast
WWWhatsup NYC - http://wwwhatsup.com
 http://pinstand.com - http://punkcast.com
  VP (Admin) - ISOC-NY - http://isoc-ny.org
---


help needed - state of california needs a benchmark

2011-01-29 Thread Mike

Hello,

    My company is a small CLEC / broadband provider serving rural communities 
in northern California, and we are the recipient of a small grant from 
the state thru our public utilities commission. We went out to the 'middle 
of nowhere' and deployed adsl2+ (chalk one up for the good guys!), and now 
that we're done, our state PUC wants to gather performance data to evaluate 
the result of our project and ensure we delivered what we said we were going 
to. Bigger picture, our state is actively attempting to map broadband 
availability and service levels, and this data will factor into that overall 
picture, to be used for future grant/loan programs and other support 
mechanisms, so this really is going to touch every provider who serves end 
users in the state.


    The rub is that they want to legislate that the web-based 'speedtest.com' 
is the ONLY and MOST AUTHORITATIVE metric, trumping all other considerations, 
and that the provider is 100% at fault and responsible for making fraudulent 
claims if speedtest.com doesn't agree. No discussion is allowed or permitted 
about sync rates, packet loss, internet congestion, provider route diversity, 
end user computer performance problems, far end congestion issues, far end 
server issues or cpu loading, latency/rtt, or the like. They are going to 
decide that the quality of any provider's service rests solely and exclusively 
on the numbers returned from 'speedtest.com' alone, period.


    All of you in this audience, I think, probably immediately understand 
the various problems with such an assertion. It's one of those situations 
where - to the uninitiated - it SEEMS LIKE this is the right way to do 
this, and it SEEMS LIKE there's some validity to what's going on - but in 
practice, we engineering types know it's a far different animal and 
should not be used for real live benchmarking of any kind where there is 
a demand for statistical validity.


    My feeling is that if there is a need for the state to do 
benchmarking, then it ought to be using statistically sound 
methodologies, along the same lines as any other benchmark or 
test done by government agencies and national standards bodies, 
that are reproducible and dependable. The question is, as a hot-button 
issue: how do we go about getting 'the message' across, how do we go 
about engineering something that could be considered statistically 
relevant, and most importantly, how do we get this accepted by 
non-technical legislators and regulators?
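One concrete, minimal way to frame "statistically relevant" for non-technical 
readers: repeated measurements over time, reported as a mean with a confidence 
interval rather than a single number. A toy Python sketch (sample values 
invented purely for illustration):

  import statistics

  # repeated throughput samples for one subscriber, in Mbit/s (made-up numbers)
  samples = [18.2, 17.9, 16.5, 18.8, 17.1, 15.9, 18.4, 17.6]

  mean = statistics.mean(samples)
  sd = statistics.stdev(samples)
  n = len(samples)
  # ~95% confidence interval via the normal approximation
  # (a t-distribution value would be more defensible for small n)
  half_width = 1.96 * sd / n ** 0.5
  print(f"{mean:.1f} Mbit/s +/- {half_width:.1f} (n={n})")

A benchmark stated that way ("17.6 +/- 0.7 Mbit/s over 8 samples across a 
week") is reproducible and auditable in a way a single speedtest screenshot 
is not.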


Mike-



Re: help needed - state of california needs a benchmark

2011-01-29 Thread Dan White

On 29/01/11 10:00 -0800, Mike wrote:
	The rub is, that they want to legislate that web based 
'speedtest.com' is the ONLY and MOST AUTHORITATIVE metric that trumps 
all other considerations and that the provider is 100% at fault and 
responsible for making fraudulent claims if speedtest.com doesn't 
agree. No discussion is allowed or permitted about sync rates, packet 
loss, internet congestion, provider route diversity, end user 
computer performance problems, far end congestion issues, far end 
server issues or cpu loading, latency/rtt, or the like. They are 
going to decide that the quality of any provider service, is solely 
and exclusively resting on the numbers returned from 'speedtest.com' 
alone, period.


If you license the software with Ookla, you can install it on a local
server and, with your permission, be listed on the speedtest.net site. When
your customers visit speedtest.net, your server is, or is close to, the
default server that your customers land at.

You could try to convince the state that their metric is suboptimal and X
is superior, but if your *customers* are anything like ours, it's even
harder to educate them as to why remote speed tests aren't always an accurate
measurement of the service you're providing.

We've learned to pick our fights, and this isn't one of them.

--
Dan White



Re: help needed - state of california needs a benchmark

2011-01-29 Thread Jeff Richmond
Mike, nothing is perfect, so let's just start with that. What the FCC has done 
to measure this is to partner with Sam Knows and then have friendly DSL subs 
for the participating telcos to run modified CPE firmware to test against their 
servers. We have been collecting data for this for the past couple of months, 
actually. More can be found here:

http://www.samknows.com/broadband/fcc_and_samknows

While even that I have issues with, it certainly is better than hitting that 
speedtest site where anything at all problematic on the customer LAN side of 
the CPE can cause erroneous results.

Good luck,
-Jeff


On Jan 29, 2011, at 10:00 AM, Mike wrote:

 Hello,
 
   My company is small clec / broadband provider serving rural communities 
 in northern California, and we are the recipient of a small grant from the 
 state thru our public utilities commission. We went out to 'middle of 
 nowhere' and deployed adsl2+ in fact (chalk one up for the good guys!), and 
 now that we're done, our state puc wants to gather performance data to 
 evaluate the result of our project and ensure we delivered what we said we 
 were going to. Bigger picture, our state is actively attempting to map 
 broadband availability and service levels available and this data will factor 
 into this overall picture, to be used for future grant/loan programs and 
 other support mechanisms, so this really is going to touch every provider who 
 serves end users in the state.
 
   The rub is, that they want to legislate that web based 'speedtest.com' 
 is the ONLY and MOST AUTHORITATIVE metric that trumps all other 
 considerations and that the provider is 100% at fault and responsible for 
 making fraudulent claims if speedtest.com doesn't agree. No discussion is 
 allowed or permitted about sync rates, packet loss, internet congestion, 
 provider route diversity, end user computer performance problems, far end 
 congestion issues, far end server issues or cpu loading, latency/rtt, or the 
 like. They are going to decide that the quality of any provider service, is 
 solely and exclusively resting on the numbers returned from 'speedtest.com' 
 alone, period.
 
   All of you in this audience, I think, probably immediately understand 
 the various problems with such an assertion. It's one of those situations 
 where - to the uninitiated - it SEEMS LIKE this is the right way to do this, 
 and it SEEMS LIKE there's some validity to what's going on - but in practice, 
 we engineering types know it's a far different animal and should not be used 
 for real live benchmarking of any kind where there is a demand for 
 statistical validity.
 
   My feeling is that - if there is a need for the state to do 
 benchmarking, then it ought to be using statistically sound methodologies 
 for same along the same lines as any other benchmark or test done by other 
 government agencies and national standards bodies that are reproducible and 
 dependable. The question is, as a hotbutton issue, how do we go about getting 
 'the message' across, how do we go about engineering something that could be 
 considered statistically relevant, and most importantly, how do we get this 
 to be accepted by non-technical legislators and regulators?
 
 Mike-
 




Re: help needed - state of california needs a benchmark

2011-01-29 Thread Patrick W . Gilmore
I think the big deal here is the 100% thing.  If Speedtest is one of many 
tests, then I don't particularly see the problem.

It shouldn't be any more difficult to convince politicians that any system 
(testing or otherwise) can have problems than it is to convince them of any 
other hard fact.  (IOW: Nearly impossible, but you have to try. :)

-- 
TTFN,
patrick

On Jan 29, 2011, at 1:29 PM, Jeff Richmond wrote:

 Mike, nothing is perfect, so let's just start with that. What the FCC has 
 done to measure this is to partner with Sam Knows and then have friendly DSL 
 subs for the participating telcos to run modified CPE firmware to test 
 against their servers. We have been collecting data for this for the past 
 couple of months, actually. More can be found here:
 
 http://www.samknows.com/broadband/fcc_and_samknows
 
 While even that I have issues with, it certainly is better than hitting that 
 speedtest site where anything at all problematic on the customer LAN side of 
 the CPE can cause erroneous results.
 
 Good luck,
 -Jeff
 
 
 On Jan 29, 2011, at 10:00 AM, Mike wrote:
 
 Hello,
 
  My company is small clec / broadband provider serving rural communities 
 in northern California, and we are the recipient of a small grant from the 
 state thru our public utilities commission. We went out to 'middle of 
 nowhere' and deployed adsl2+ in fact (chalk one up for the good guys!), and 
 now that we're done, our state puc wants to gather performance data to 
 evaluate the result of our project and ensure we delivered what we said we 
 were going to. Bigger picture, our state is actively attempting to map 
 broadband availability and service levels available and this data will 
 factor into this overall picture, to be used for future grant/loan programs 
 and other support mechanisms, so this really is going to touch every 
 provider who serves end users in the state.
 
  The rub is, that they want to legislate that web based 'speedtest.com' 
 is the ONLY and MOST AUTHORITATIVE metric that trumps all other 
  considerations and that the provider is 100% at fault and responsible for 
 making fraudulent claims if speedtest.com doesn't agree. No discussion is 
 allowed or permitted about sync rates, packet loss, internet congestion, 
 provider route diversity, end user computer performance problems, far end 
 congestion issues, far end server issues or cpu loading, latency/rtt, or the 
 like. They are going to decide that the quality of any provider service, is 
 solely and exclusively resting on the numbers returned from 'speedtest.com' 
 alone, period.
 
  All of you in this audience, I think, probably immediately understand 
  the various problems with such an assertion. It's one of those situations 
  where - to the uninitiated - it SEEMS LIKE this is the right way to do this, 
  and it SEEMS LIKE there's some validity to what's going on - but in practice, 
 we engineering types know it's a far different animal and should not be used 
 for real live benchmarking of any kind where there is a demand for 
 statistical validity.
 
  My feeling is that - if there is a need for the state to do 
  benchmarking, then it ought to be using statistically sound methodologies 
 for same along the same lines as any other benchmark or test done by other 
 government agencies and national standards bodies that are reproducible and 
 dependable. The question is, as a hotbutton issue, how do we go about 
 getting 'the message' across, how do we go about engineering something that 
 could be considered statistically relevant, and most importantly, how do we 
 get this to be accepted by non-technical legislators and regulators?
 
 Mike-
 
 
 




RE: help needed - state of california needs a benchmark

2011-01-29 Thread Nathan Eisenberg
 We've learned to pick our fights, and this isn't one of them.
 
 --
 Dan White

The most effective mechanism I've seen for explaining the problem is latency 
and VOIP.  Set up an artificially latency-ridden, high bandwidth connection, 
then connect to a PBX using a softphone.  One call is generally sufficient 
proof of the issue.
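For reproducing that demonstration, Linux netem is a quick way to add 
artificial latency on a test box (interface name is an assumption; run as 
root):

  # add 200ms +/- 50ms of delay to everything leaving eth0
  tc qdisc add dev eth0 root netem delay 200ms 50ms
  # remove it when the demonstration is over
  tc qdisc del dev eth0 root netem

Through that impairment the bandwidth numbers can still look respectable 
while a softphone call becomes unusable, which is exactly the point.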

Ookla does offer another metric, at http://www.pingtest.net/, which provides 
some valuable additional information.  You can therefore construct an argument 
using speedtest.net itself:

Gov: Speedtest.net is an authoritative location for all testing.
Speedtest.net: Anyone can host our test application, so that is clearly false.

Gov: The only important factor in certification is bandwidth to speedtest.net.
Speedtest.net: We offer other connection quality tests that don't rely on 
bandwidth.

I often find that statements people make rely on half-truths gleaned from other 
people, and that generally, the fastest way to conclude an argument is to go to 
the source, extract the complete truth, and then present it in contrast.  It is 
difficult to argue with your own source.  :-)

Nathan




Re: help needed - state of california needs a benchmark

2011-01-29 Thread Roy

On 1/29/2011 10:00 AM, Mike wrote:

Hello,

My company is small clec / broadband provider serving rural 
communities in northern California, and we are the recipient of a 
small grant from the state thru our public utilities commission. We 
went out to 'middle of nowhere' and deployed adsl2+ in fact (chalk one 
up for the good guys!), and now that we're done, our state puc wants 
to gather performance data to evaluate the result of our project and 
ensure we delivered what we said we were going to. Bigger picture, our 
state is actively attempting to map broadband availability and service 
levels available and this data will factor into this overall picture, 
to be used for future grant/loan programs and other support 
mechanisms, so this really is going to touch every provider who serves 
end users in the state.


The rub is, that they want to legislate that web based 
'speedtest.com' is the ONLY and MOST AUTHORITATIVE metric that trumps 
 all other considerations and that the provider is 100% at fault and 
responsible for making fraudulent claims if speedtest.com doesn't 
agree. No discussion is allowed or permitted about sync rates, packet 
loss, internet congestion, provider route diversity, end user computer 
performance problems, far end congestion issues, far end server issues 
or cpu loading, latency/rtt, or the like. They are going to decide 
that the quality of any provider service, is solely and exclusively 
resting on the numbers returned from 'speedtest.com' alone, period.


All of you in this audience, I think, probably immediately 
understand the various problems with such an assertion. It's one of 
those situations where - to the uninitiated - it SEEMS LIKE this is 
the right way to do this, and it SEEMS LIKE there's some validity to 
what's going on - but in practice, we engineering types know it's a far 
different animal and should not be used for real live benchmarking of 
any kind where there is a demand for statistical validity.


My feeling is that - if there is a need for the state to do 
benchmarking, then it ought to be using statistically sound 
methodologies for same along the same lines as any other benchmark or 
test done by other government agencies and national standards bodies 
that are reproducible and dependable. The question is, as a hotbutton 
issue, how do we go about getting 'the message' across, how do we go 
about engineering something that could be considered statistically 
relevant, and most importantly, how do we get this to be accepted by 
non-technical legislators and regulators?


Mike-





You took the state's money, so you are stuck with their dumb rules.  
Furthermore, the CPUC people aren't stupid.  They have highly paid 
consultants, as well as professors from colleges in California, advising 
them.  Unless you have some plan for a very inexpensive alternative, 
don't think you are going to make any headway.





Re: help needed - state of california needs a benchmark

2011-01-29 Thread Michael Painter

Mike wrote:


The rub is, that they want to legislate that web based 'speedtest.com'
is the ONLY and MOST AUTHORITATIVE metric that trumps all other
 considerations and that the provider is 100% at fault and responsible
for making fraudulent claims if speedtest.com doesn't agree. 


speedtest.net?



Re: help needed - state of california needs a benchmark

2011-01-29 Thread Chuck Anderson
On Sat, Jan 29, 2011 at 10:00:36AM -0800, Mike wrote:
 issue, how do we go about getting 'the message' across, how do we go  
 about engineering something that could be considered statistically  
 relevant, and most importantly, how do we get this to be accepted by  
 non-technical legislators and regulators?

How about this analogy:

Using speedtest.com as the sole benchmark is like trying to test the 
speed and throughput of the city streets in Sacramento by seeing how 
long it takes to drive to New York City and back.  Oh, and why should 
we be responsible for the speeds on the Interstate portions of that 
route when we only control the city streets and local secondary roads?



Re: DSL options in NYC for OOB access

2011-01-29 Thread Andy Ashley

On 29/01/2011 14:56, Randy McAnally wrote:


Have you looked into the cross connect cost for your DSL line?  They typically
aren't very cheap either.

~Randy

I'm still waiting for the quote to come back from L3.
I figured a copper pair would be cheaper than fiber, but who knows?

Andy.






Re: [arin-announce] ARIN Resource Certification Update

2011-01-29 Thread Arturo Servin

    I agree with Alex that without a hosted solution the RIPE NCC wouldn't have 
so many ROAs today; for us, even with it, it has been more difficult to roll 
out RPKI among our ISPs. Like many, I do not think that a hosted solution 
suits everybody, and it has some disadvantages, but at least it could help to 
lower the entry barrier for some.


    Speaking of RPKI stats, here is the ROA evolution in various TAs (the 
data from ARIN is from their beta test; the rest are production systems):

http://www.labs.lacnic.net/~rpki/rpki-evolution-report_EN.txt

And visually:

http://www.labs.lacnic.net/~rpki/rpki-heatmaps/latest/global-roa-heatmap.png

and

http://www.labs.lacnic.net/~rpki/rpki-heatmaps/latest/

To see each region.

http://www.labs.lacnic.net/~rpki/rpki-heatmaps

    Also, bgpmon has a nice whois interface for humans to see ROAs (not 
sure if this link was shared here or on twitter, sorry if I am duplicating):

http://bgpmon.net/blog/?p=414


Best regards,
-as



On 29 Jan 2011, at 13:26, Alex Band wrote:

 John,
 
 Thanks for the update. With regards to offering a hosted solution, as you 
 know that is the only thing the RIPE NCC currently offers. We're developing 
 support for the up/down protocol as I write this.
 
 To give you some perspective, one month after launching the hosted RIPE NCC 
 Resource Certification service, 216 LIRs are using it in the RIPE Region and 
 created 169 ROAs covering 467 prefixes. This means 40151 /24 IPv4 prefixes 
 and 7274499 /48 IPv6 prefixes now have a valid ROA associated with them.
 
 I realize a hosted solution is not ideal, we're very open about that. But at 
 least in our region, it seems there are quite a number of organizations who 
 understand and accept the security trade-off of not being the owner of the 
 private key for their resource certificate and trust their RIR to run a 
 properly secured and audited service. So the question is, if the RIPE NCC 
 would have required everyone to run their own certification setup using the 
 open source tool-sets Randy mentions, would there be this much certified 
 address space now? 
 
 Looking at the depletion of IPv4 address space, it's going to be crucially 
 important to have validatable proof who is the legitimate holder of Internet 
 resources. I fear that by not offering a hosted certification solution, real 
 world adoption rates will rival those of IPv6 and DNSSEC. Can the Internet 
 community afford that?
 
 Alex Band
 Product Manager, RIPE NCC
 
 P.S. For those interested in which prefixes and ASs are in the RIPE NCC ROA 
 Repository, here is the latest output in CSV format:
 http://lunimon.com/valid-roas-20110129.csv
 
 
 
 On 24 Jan 2011, at 21:33, John Curran wrote:
 
 Copy to NANOG for those who aren't on ARIN lists but may be interested in 
 this info.
 FYI.
 /John
 
 Begin forwarded message:
 
 From: John Curran jcur...@arin.net
 Date: January 24, 2011 2:58:52 PM EST
 To: arin-annou...@arin.net
 Subject: [arin-announce] ARIN Resource Certification Update
 
 ARIN continues its preparations for offering production-grade resource 
 certification
 services for Internet number resources in the region.  ARIN recognizes the 
 importance
 of Internet number resource certification in the region as a key element of 
 further
 securing Internet routing, and plans to roll out Resource Public Key 
 Infrastructure (RPKI)
 at the end of the second quarter of 2011 with support for the Up/Down 
 protocol for those
 ISPs who wish to certify their subdelegations via their own RPKI 
 infrastructure.
 
 ARIN continues to evaluate offering a Hosted Resource Certification service 
 for this
 purpose (as an alternative to organizations having to run their own RPKI 
 infrastructure),
 but at this time it remains under active consideration and is not committed. 
   We look
 forward to discussing the need for this type of service and the organizational 
 implications at our upcoming ARIN Members Meeting in April in San Juan, PR.
 
 FYI,
 /John
 
 John Curran
 President and CEO
 ARIN
 
 
 
 



Re: help needed - state of california needs a benchmark

2011-01-29 Thread Christopher Morrow
On Sat, Jan 29, 2011 at 1:29 PM, Jeff Richmond jeff.richm...@gmail.com wrote:
 Mike, nothing is perfect, so let's just start with that. What the FCC has 
 done to measure this is to partner with Sam Knows and then have friendly DSL 
 subs for the participating telcos to run modified CPE firmware to test 
 against their servers. We have been collecting data for this for the past 
 couple of months, actually. More can be found here:

 http://www.samknows.com/broadband/fcc_and_samknows

note that samknows has some deficiencies in their platform, at least:
  o no IPv6 support
  o the home-routers/gateways/aps randomly reboot with completely
    non-functional setups

their customer support ... isn't really available, ever.

 While even that I have issues with, it certainly is better than hitting that 
 speedtest site where anything at all problematic on the customer LAN side of 
 the CPE can cause erroneous results.


the above aside, how about using the network test suite from Mlabs?
  http://www.measurementlab.net/measurement-lab-tools

Or ask the Swedish folk who've deployed 'speedtest' gear for Swedish
ISPs/users to test against (common test infrastructure)?

-chris



Strange L2 failure

2011-01-29 Thread Jack Bates

Has anyone seen issues with IOS where certain MACs fail?

54:52:00 (kvm) fails out of an old 10mbit port on a 7206 running 12.2 SRE. 
I've never seen anything like this. DHCP worked, ARP worked, and arp 
debugging showed responses for arp to the MAC; however, tcpdump on the 
host system showed no unicast or arp responses coming from the router, 
while the switch management ip and other stuff on the local segment 
communicated fine with the vm. This broke for IPv6 as well.


I changed the vm's MAC to 54:51:00 and 50:52:00, and it still failed. Changed 
it to 40:52:00 and it works for both v4 and v6. Was there a change which 
would cause certain hardware to not accept a MAC starting with 50: or 
higher?
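For what it's worth, a quick check of the only two bits in the first octet 
that are supposed to matter (group/multicast and locally-administered), with 
the short MACs above padded out to hypothetical full addresses:

  def mac_bits(mac: str) -> dict:
      # bit 0 (0x01) of the first octet = group/multicast address
      # bit 1 (0x02) of the first octet = locally administered address
      first = int(mac.split(":")[0], 16)
      return {"multicast": bool(first & 0x01),
              "locally_administered": bool(first & 0x02)}

  for mac in ("54:52:00:12:34:56", "50:52:00:12:34:56", "40:52:00:12:34:56"):
      print(mac, mac_bits(mac))
  # all three print multicast=False, locally_administered=False

So none of these addresses should look special to the router, which makes the 
behavior look like an IOS or hardware bug rather than a MAC-format issue.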



Jack



Re: [arin-announce] ARIN Resource Certification Update

2011-01-29 Thread Paul Vixie
 From: Alex Band al...@ripe.net
 Date: Sat, 29 Jan 2011 16:26:55 +0100
 
 ... So the question is, if the RIPE NCC would have required everyone
 to run their own certification setup using the open source tool-sets
 Randy mentions, would there be this much certified address space now?

i don't agree that that question is pertinent.  in deployment scenario
planning i've come up with three alternatives and this question is not
relevant to any of them.  perhaps you know a fourth alternative.  here
are mine.

1. people who receive routes will prefer signed vs. unsigned, and other
people who can sign routes will sign them if it's easy (for example,
hosted) but not if it's too hard (for example, up/down).

2. same as #1 except people who really care about their routes (like
banks or asp's) will sign them even if it is hard (for example, up/down).

3. people who receive routes will ignore any unsigned routes they hear,
and everyone who can sign routes will sign them no matter how hard it is.

i do not expect to live long enough to see #3.  the difference between #1
and #2 depends on the number of validators, not the number of signed routes
(since it's an incentive question).  therefore small differences in the
size of the set of signed routes do not matter very much in 2011, and
the risk:benefit profile of hosted vs. up/down still matters far more.

 Looking at the depletion of IPv4 address space, it's going to be
 crucially important to have validatable proof who is the legitimate
 holder of Internet resources. I fear that by not offering a hosted
 certification solution, real world adoption rates will rival those of
 IPv6 and DNSSEC. Can the Internet community afford that?

while i am expecting a rise in address piracy following depletion, i am
not expecting #3 (see above) and i think most of the piracy will be of
fallow or idle address space that will therefore have no competing route
(signed or otherwise).  this will become more pronounced as address
space holders who care about this and worry about this sign their routes
-- the pirates will go after easier prey.  so again we see no material
difference between hosted and up/down on the deployment side or if there
is a difference it is much smaller than the risk:benefit profile
difference on the provisioning side.

in summary, i am excited about RPKI and i've been pushing hard for it in
both my day job and inside the ARIN BoT, but... let's not overstate the
case for it or kneejerk our way into provisioning models whose business
sense has not been closely evaluated.  as john curran said, ARIN will
look to the community for the guidance he needs on this question.  i
hope to see many of you at the upcoming ARIN public policy meeting in
san juan PR where this is sure to be discussed both at the podium and in
the hallways and bar rooms.

Paul Vixie
Chairman and Chief Scientist, ISC
Member, ARIN BoT



Re: Found: Who is responsible for no more IP addresses

2011-01-29 Thread Franck Martin
You should do a rap song...

IPv6, IPv4, it is all my fault!
Internet was just an experiment

- Original Message -
From: Joly MacFie j...@punkcast.com
To: Ben McGinnes b...@adversary.org
Cc: nanog@nanog.org
Sent: Sunday, 30 January, 2011 6:36:21 AM
Subject: Re: Found: Who is responsible for no more IP addresses

Thanks for the link to the vid. I see Geoff Huston spoke too. I've embedded
both on
http://www.isoc-ny.org/p2/?p=1713

FWIW Vint has been using this as an intro to IPv6 for years.. in fact I've
got some video to edit of him speaking in 1999 - I'll look for it..

j




Re: help needed - state of california needs a benchmark

2011-01-29 Thread Don Gould

Morning Mike,

The *New Zealand Government* don't use speedtest.net as a benchmark.  
Our Government uses a consulting company to provide a range of tests 
that address the issues you're talking about and benchmarks are 
published each year.  http://www.comcom.govt.nz/broadband-reports


The user and network communities are not 100% happy with the way this 
testing is done either:  
http://www.geekzone.co.nz/forums.asp?forumid=49&topicid=73698  Some 
providers are known to fudge the results by putting QoS on the test paths.


http://weathermap.karen.net.nz/ is a New Zealand academic project that 
shows their network performance in real time.  This is a very useful 
site for demonstrating the sort of tools that Governments should be 
looking for when doing performance measuring.


Recent work done by Jared Kells, in Australia, on consumer level network 
performance shows a very interesting picture (pictures are best for 
political people).  
http://forums.whirlpool.net.au/forum-replies.cfm?t=1579142  Kells 
demonstrates that providers deliver very different results for national 
and international sites.  Kells provides a set of Open Source tools to 
do your own testing.


http://www.truenet.co.nz - John Butt - is a commercial start up 
providing another range of testing metrics which the user community at 
www.geekzone.co.nz seem to be much happier with as a proper indication 
of network performance.  I have talked with John personally and can 
attest that the testing is fairly robust and addresses issues that 
you've raised.  http://www.truenet.co.nz/how-does-it-work


The recent upgrade of the www.telstraclear.co.nz HFC network from DOCSIS 2.0 
(25/2 max) to DOCSIS 3.0 (100/10 introductory testing speed) presented a 
range of challenges for John's testing.  HTTP ramp-up at speeds up to 100mbit 
affects test results, so John had to change the way they were 
testing to get a better presentation of performance.
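A crude way to see the ramp-up effect yourself is to time a large download in 
one-second slices rather than as a single average; the early slices run well 
below the line rate while TCP opens its window. A Python sketch (the URL is a 
placeholder; point it at a large file on a suitable server):

  import time
  import urllib.request

  URL = "http://example.net/100MB.bin"  # placeholder test object

  def timed_download(url, slice_secs=1.0):
      resp = urllib.request.urlopen(url)
      start = mark = time.monotonic()
      total = slice_bytes = 0
      while True:
          chunk = resp.read(64 * 1024)
          if not chunk:
              break
          total += len(chunk)
          slice_bytes += len(chunk)
          now = time.monotonic()
          if now - mark >= slice_secs:
              mbps = slice_bytes * 8 / (now - mark) / 1e6
              print(f"{now - start:5.1f}s  {mbps:6.2f} Mbit/s")
              mark, slice_bytes = now, 0
      avg = total * 8 / (time.monotonic() - start) / 1e6
      print(f"average: {avg:.2f} Mbit/s")

  timed_download(URL)

On a short test against a fast service, the ramp-up slices drag the average 
well below what the link actually sustains, which is the effect John had to 
engineer around.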


Internode in Australia have learnt the hard way recently that consumer 
expectation of their new NBN FTTH network needs to be managed 
carefully.  As a result of some very poor media press over the 
performance of an education site recently installed in Tasmania, they 
have engaged in quite a bit of consumer education around network 
performance.  
http://www.theaustralian.com.au/news/nation/governments-broadband-not-up-to-speed-at-tasmanian-school/story-e6frg6nf-1225961150410  
-  http://whrl.pl/Rcyrhz - 
http://forums.whirlpool.net.au/user/6258 - Simon Hackett, Internode CEO, 
responds.


*Speedtest.net* will only provide a BIR/PIR measure, not CIR, so it 
is not an indicator of service quality.


In New Zealand SpeedTest.net is used extensively, with a number of 
hosting servers.  The information is fundamentally flawed, as you have no 
control over what testing the end user performs.  In my case I can 
produce three different tests from a 15/2 HFC service and get 3 
different results.


http://www.speedtest.net/result/1133639492.png - Test 1 - The 
application has identified that I am located in Christchurch, New Zealand, 
so has selected a Christchurch-based server for testing 
(www.snap.co.nz).  As you can see, the results show ~7.5/2.1 Mbit/s.


http://www.speedtest.net/result/1133642520.png - Test 2 - This time I've 
chosen the CityLink (www.citylink.co.nz) server in Wellington, New 
Zealand.  ~6.2/1.97 Mbit/s.


http://www.speedtest.net/result/1052386636.png - Test 3 - from 12/12/10 
shows ~15.1/2.15 Mbit/s.  This was tested to an Auckland, New Zealand server.


I did run a set of tests this morning to the Auckland servers as well; 
however, they are all being limited to the same numbers as the 
Christchurch test (1) now.  None of the servers are on my provider's 
network, and performance is governed by the peering/handovers between 
the networks.


Christchurch - Wellington: ~320km; Christchurch - Auckland: ~750km 
(straight-line distances according to Google Earth).


The HFC service I'm using will deliver a throughput of 15/2 for some 
time, even at peak usage times, when pulling content off the provider's own 
network.


Ok, that's enough for now.  I hope this helps and let me know if you 
need any more assistance.


Cheers Don




RE: help needed - state of california needs a benchmark

2011-01-29 Thread Frank Bulk
Configure your DNS server so that speedtest.net and every variation of it 
point to the Speedtest server that you host...
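(For a lab demonstration of how fragile that "authoritative" test is, a 
one-line dnsmasq override answers speedtest.net and all of its subdomains 
with a host you control; BIND would need a local zone instead. The server IP 
is hypothetical:

  # dnsmasq.conf: resolve speedtest.net and *.speedtest.net to a local server
  address=/speedtest.net/192.0.2.10

Not a recommendation for production resolvers, just an illustration that the 
test endpoint is whatever DNS says it is.)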

Frank

-Original Message-
From: Mike [mailto:mike-na...@tiedyenetworks.com] 
Sent: Saturday, January 29, 2011 12:01 PM
To: NANOG list
Subject: help needed - state of california needs a benchmark

Hello,

My company is small clec / broadband provider serving rural communities 
in northern California, and we are the recipient of a small grant from 
the state thru our public utilities commission. We went out to 'middle 
of nowhere' and deployed adsl2+ in fact (chalk one up for the good 
guys!), and now that we're done, our state puc wants to gather 
performance data to evaluate the result of our project and ensure we 
delivered what we said we were going to. Bigger picture, our state is 
actively attempting to map broadband availability and service levels 
available and this data will factor into this overall picture, to be 
used for future grant/loan programs and other support mechanisms, so 
this really is going to touch every provider who serves end users in the 
state.

The rub is, that they want to legislate that web based 'speedtest.com' 
is the ONLY and MOST AUTHORITATIVE metric that trumps all other 
considerations and that the provider is 100% at fault and responsible 
for making fraudulent claims if speedtest.com doesn't agree. No 
discussion is allowed or permitted about sync rates, packet loss, 
internet congestion, provider route diversity, end user computer 
performance problems, far end congestion issues, far end server issues 
or cpu loading, latency/rtt, or the like. They are going to decide that 
the quality of any provider service, is solely and exclusively resting 
on the numbers returned from 'speedtest.com' alone, period.

All of you in this audience, I think, probably immediately understand 
the various problems with such an assertion. It's one of those situations 
where - to the uninitiated - it SEEMS LIKE this is the right way to do 
this, and it SEEMS LIKE there's some validity to what's going on - but in 
practice, we engineering types know it's a far different animal and 
should not be used for real live benchmarking of any kind where there is 
a demand for statistical validity.

My feeling is that - if there is a need for the state to do 
benchmarking, then it ought to be using statistically sound 
methodologies for same along the same lines as any other benchmark or 
test done by other government agencies and national standards bodies 
that are reproducible and dependable. The question is, as a hotbutton 
issue, how do we go about getting 'the message' across, how do we go 
about engineering something that could be considered statistically 
relevant, and most importantly, how do we get this to be accepted by 
non-technical legislators and regulators?

Mike-





Re: [arin-announce] ARIN Resource Certification Update

2011-01-29 Thread Owen DeLong
I don't understand why you can't have a hosted solution where the private keys
are not held by the host.

Seems to me you should be able to use a Java Applet to do the private key
generation and store the private key on the end-user's machine, passing
objects that need to be signed by the end user down to the applet for
signing.

This could be just as low-entry for the user, but without the host holding
the private keys.
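As a minimal sketch of that split (illustrative only; using the Python 
"cryptography" package rather than a Java applet, and raw signatures rather 
than real RPKI/CMS objects, with a hypothetical payload):

  from cryptography.hazmat.primitives import hashes, serialization
  from cryptography.hazmat.primitives.asymmetric import rsa, padding

  # generated and kept on the end-user's machine; never uploaded
  key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

  # object handed down by the hosted service for signing (hypothetical)
  to_be_signed = b"ROA request payload from the hosted service"
  signature = key.sign(to_be_signed, padding.PKCS1v15(), hashes.SHA256())

  public_pem = key.public_key().public_bytes(
      serialization.Encoding.PEM,
      serialization.PublicFormat.SubjectPublicKeyInfo,
  )
  # only public_pem and signature go back to the host

The hosted side keeps the convenience of managing the repository, while the 
subscriber keeps sole custody of the key, which is the trade being described 
here.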

What am I missing?

Owen

On Jan 29, 2011, at 1:06 PM, Arturo Servin wrote:

 
    I agree with Alex that without a hosted solution the RIPE NCC wouldn't have 
 so many ROAs today; for us, even with it, it has been more difficult to roll 
 out RPKI among our ISPs. Like many, I do not think that a hosted solution 
 suits everybody, and it has some disadvantages, but at least it could help to 
 lower the entry barrier for some.
 
 
   Speaking about RPKI stats, here some ROA evolution in various TAs (the 
 data from ARIN is from their beta test, the rest are production systems):
 
 http://www.labs.lacnic.net/~rpki/rpki-evolution-report_EN.txt
 
   And visually:
 
 http://www.labs.lacnic.net/~rpki/rpki-heatmaps/latest/global-roa-heatmap.png
 
   and
 
 http://www.labs.lacnic.net/~rpki/rpki-heatmaps/latest/
 
   To see each region.
 
 http://www.labs.lacnic.net/~rpki/rpki-heatmaps
 
   Also, bgpmon has a nice whois interface for humans to see ROAs (not 
 sure if this link was share here or in twitter, sorry if I am duplicating):
 
 http://bgpmon.net/blog/?p=414
 
 
 Best regards,
 -as
   
 
 
 On 29 Jan 2011, at 13:26, Alex Band wrote:
 
 John,
 
 Thanks for the update. With regards to offering a hosted solution, as you 
 know that is the only thing the RIPE NCC currently offers. We're developing 
 support for the up/down protocol as I write this.
 
 To give you some perspective, one month after launching the hosted RIPE NCC 
 Resource Certification service, 216 LIRs are using it in the RIPE Region and 
 created 169 ROAs covering 467 prefixes. This means 40151 /24 IPv4 prefixes 
 and 7274499 /48 IPv6 prefixes now have a valid ROA associated with them.
 
 I realize a hosted solution is not ideal, we're very open about that. But at 
 least in our region, it seems there are quite a number of organizations who 
 understand and accept the security trade-off of not being the owner of the 
 private key for their resource certificate and trust their RIR to run a 
 properly secured and audited service. So the question is, if the RIPE NCC 
 would have required everyone to run their own certification setup using the 
 open source tool-sets Randy mentions, would there be this much certified 
 address space now? 
 
 Looking at the depletion of IPv4 address space, it's going to be crucially 
 important to have validatable proof who is the legitimate holder of Internet 
 resources. I fear that by not offering a hosted certification solution, real 
 world adoption rates will rival those of IPv6 and DNSSEC. Can the Internet 
 community afford that?
 
 Alex Band
 Product Manager, RIPE NCC
 
P.S. For those interested in which prefixes and ASes are in the RIPE NCC ROA
Repository, here is the latest output in CSV format:
 http://lunimon.com/valid-roas-20110129.csv
 
 
 
 On 24 Jan 2011, at 21:33, John Curran wrote:
 
 Copy to NANOG for those who aren't on ARIN lists but may be interested in 
 this info.
 FYI.
 /John
 
 Begin forwarded message:
 
From: John Curran jcur...@arin.net
Date: January 24, 2011 2:58:52 PM EST
To: arin-annou...@arin.net
 Subject: [arin-announce] ARIN Resource Certification Update
 
 ARIN continues its preparations for offering production-grade resource 
 certification
 services for Internet number resources in the region.  ARIN recognizes the 
 importance
 of Internet number resource certification in the region as a key element of 
 further
securing Internet routing, and plans to roll out Resource Public Key 
 Infrastructure (RPKI)
 at the end of the second quarter of 2011 with support for the Up/Down 
 protocol for those
 ISPs who wish to certify their subdelegations via their own RPKI 
 infrastructure.
 
ARIN continues to evaluate offering a Hosted Resource Certification
service for this purpose (as an alternative to organizations having to
run their own RPKI infrastructure), but at this time it remains under
active consideration and is not committed. We look forward to discussing
the need for this type of service and the organizational implications
at our upcoming ARIN Members Meeting in April in San Juan, PR.
 
 FYI,
 /John
 
 John Curran
 President and CEO
 ARIN
 
 ___
 ARIN-Announce
 You are receiving this message because you are subscribed to
the ARIN Announce Mailing List (arin-annou...@arin.net).
 Unsubscribe or manage your mailing list subscription at:
 http

Re: Strange L2 failure

2011-01-29 Thread ML

On 1/29/2011 4:24 PM, Jack Bates wrote:

Has anyone seen issues with IOS where certain MACs fail?

54:52:00 (kvm) fails out an old 10mbit port on a 7206 running 12.2 SRE.
I've never seen anything like this. DHCP worked, ARP worked, and ARP
debugging showed responses for ARPs to the MAC; however, tcpdump on the
host system showed no unicast or ARP responses coming from the router,
while the switch management IP and other stuff on the local segment
communicated fine with the VM. This broke for IPv6 as well.

I changed the VM's MAC to 54:51:00 and then 50:52:00 and it still
failed. Changed it to 40:52:00 and it works for both v4 and v6. Was
there a change that would cause certain hardware to not accept a MAC
starting with 50: or higher?


Jack




I just ran into something like this yesterday.  A Belkin router with a
MAC of 9444.52dc. was properly learned at the IDF switch, but the
upstream agg switch/router wouldn't learn it. I even tried to static
the MAC into the CAM; the router refused.


After changing the MAC address, everything worked.

Best I can figure, the router treated it like a broadcast MAC. DHCP
snooping at the IDF and the upstream aggregation switch/router learned
the IP/MAC/interface lease information, and the ARP entry at the router
had the correct IP/MAC binding. But there was nothing in the CAM for
the MAC of that darn Belkin.
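
For what it's worth, whether an address should get group (broadcast or
multicast) treatment hinges on the I/G bit, the least significant bit of
the first octet, and neither 54:... nor 94:... has it set, so both
should be ordinary unicast. A minimal Java sketch of the check (the
address tails below are placeholders, since the real ones were
truncated above):

  public class MacBits {
      static void inspect(String mac) {
          // Parse the first octet; handles both 54:52:... and 9444.52dc. forms.
          int first = Integer.parseInt(mac.split("[:.]")[0].substring(0, 2), 16);
          boolean group = (first & 0x01) != 0; // I/G bit: group address if set
          boolean local = (first & 0x02) != 0; // U/L bit: locally administered if set
          System.out.println(mac + "  group=" + group
                  + "  locally-administered=" + local);
      }

      public static void main(String[] args) {
          inspect("54:52:00:00:00:01"); // placeholder tail for the kvm guest MAC
          inspect("9444.52dc.0000");    // placeholder tail for the Belkin MAC
      }
  }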




Re: Strange L2 failure

2011-01-29 Thread Jack Bates

On 1/29/2011 8:47 PM, ML wrote:
I just ran into something like this yesterday.  A Belkin router with a
MAC of 9444.52dc. was properly learned at the IDF switch, but the
upstream agg switch/router wouldn't learn it. I even tried to static
the MAC into the CAM; the router refused.


What really tripped me out was that the router actually did place an
ARP entry and acted like everything should be working. I'm scheduling
some more direct tests with packet sniffers next week when I get back
in the office.


I'm curious now whether the issue is in IOS or the line card, so we'll
test directly off different cards and compare results.


Jack



Re: ARIN IRR Authentication (was: Re: AltDB?)

2011-01-29 Thread Jeff Wheeler
On Thu, Jan 27, 2011 at 10:00 PM, John Curran jcur...@arin.net wrote:
 Based on the ARIN's IRR authentication thread a couple of weeks ago, there
 were suggestions placed into ARIN's ACSP process for changes to ARIN's IRR
 system. ARIN has looked at the integration issues involved and has scheduled
 an upgrade to the IRR system that will accept PGP and CRYPT-PW authentication
 as well as implementing notification support for both the mnt-nfy and notify
 fields by the end of August 2011.

I'm glad to see that a decision was made to improve the ARIN IRR,
rather than stick to the status quo or abandon it.  However, this
response is essentially what most folks I spoke with off-list expected:
you have an immediate operational security problem which could cause
service impact to ARIN members and others relying on the ARIN IRR
database, and fixing it by allowing passwords or PGP to be used is not
very hard.

As I have stated on this list, I believe ARIN is not organizationally
capable of handling operational issues.  This should make everyone
very worried about any ARIN involvement in RPKI, or anything else that
could possibly have short-term operational impact on networks.  Your
plan to fix the very simple IRR problem within eight months is a very
clear demonstration that I am correct.

How did you arrive at the eight-month time frame for completing this project?

Can you provide more detail on what CRYPT-PW hash algorithm(s) will be
supported?  Specifically, the traditional DES crypt(3) is functionally
obsolete, and its entire key-space can be brute-forced within a few
days on one modern desktop PC.  Will you follow the practice
established by several other IRR databases (including MERIT RADB) and
avoid exposing the hashes by way of whois output and IRR database
dumps?
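
To put numbers on the brute-force claim, a back-of-the-envelope sketch
in Java; the guess rate is an assumption for illustration only, since
real rates depend on the hardware and the cracker used:

  public class CryptKeyspace {
      public static void main(String[] args) {
          // Traditional DES crypt(3) uses at most the first 8 characters
          // of the password, 7 bits each, so the keyspace is bounded by 2^56.
          double upperBound = Math.pow(2, 56); // ~7.2e16 keys
          double printable  = Math.pow(95, 8); // ~6.6e15 printable-ASCII passwords

          // Assumed aggregate guess rate for one modern desktop.
          double guessesPerSecond = 1e10;

          System.out.printf("upper-bound keyspace: %.2e%n", upperBound);
          System.out.printf("printable-ASCII passwords: %.2e%n", printable);
          System.out.printf("exhaustive search at %.0e guesses/s: about %.1f days%n",
                  guessesPerSecond, printable / guessesPerSecond / 86400.0);
      }
  }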

If PGP is causing your delay, why not first address the urgent problem
of having no real authentication mechanism at all by allowing CRYPT-PW
(perhaps with a useful hash algorithm), and then spend the remaining
7.9 months on PGP?

The plan and schedule you have announced are indefensible for an
operational security issue.

-- 
Jeff S Wheeler j...@inconcepts.biz
Sr Network Operator  /  Innovative Network Concepts



RE: DSL options in NYC for OOB access

2011-01-29 Thread Ryan Finnesey
All this out-of-band management talk is making me think there is an
opportunity for a super low-cost DSL offering.  Maybe a good way to get
rid of some capacity we have.
Cheers
Ryan


-Original Message-
From: Andy Ashley [mailto:li...@nexus6.co.za] 
Sent: Saturday, January 29, 2011 3:42 PM
To: nanog@nanog.org
Subject: Re: DSL options in NYC for OOB access

On 29/01/2011 14:56, Randy McAnally wrote:

 Have you looked into the cross connect cost for your DSL line?  They 
 typically aren't very cheap either.

 ~Randy
I'm still waiting for the quote to come back from L3.
Figured a copper pair would be cheaper than fiber, but who knows?

Andy.







Re: help needed - state of california needs a benchmark

2011-01-29 Thread Mikael Abrahamsson

On Sun, 30 Jan 2011, Don Gould wrote:

Ok, that's enough for now.  I hope this helps and let me know if you 
need any more assistance.


In Sweden, Bredbandskollen.se (translates to "Broadband check") rules
supreme. It uses two parallel TCP sessions to measure speed, and the
whole industry has agreed (mostly product managers were involved;
little attention was given to technical arguments) to set a minimum
standard for each service and let people cancel contracts or downgrade
if they didn't get that level.
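
Mechanically, the test is roughly the following; a minimal sketch, where
the test URL is a placeholder and the real service also handles warm-up,
fixed measurement windows and an upload leg:

  import java.io.InputStream;
  import java.net.URL;
  import java.util.concurrent.atomic.AtomicLong;

  public class ParallelTcpProbe {
      public static void main(String[] args) throws Exception {
          // Placeholder test object; a real service serves this itself.
          final URL target = new URL("http://speedtest.example.net/100MB.bin");
          final AtomicLong totalBytes = new AtomicLong();

          Runnable fetch = new Runnable() {
              public void run() {
                  try {
                      InputStream in = target.openStream();
                      byte[] buf = new byte[64 * 1024];
                      int n;
                      while ((n = in.read(buf)) != -1) {
                          totalBytes.addAndGet(n); // count payload bytes received
                      }
                      in.close();
                  } catch (Exception e) {
                      e.printStackTrace();
                  }
              }
          };

          long start = System.nanoTime();
          Thread t1 = new Thread(fetch); // TCP session 1
          Thread t2 = new Thread(fetch); // TCP session 2
          t1.start();
          t2.start();
          t1.join();
          t2.join();
          double seconds = (System.nanoTime() - start) / 1e9;

          double megabits = totalBytes.get() * 8 / 1e6;
          System.out.printf("aggregate throughput: %.1f megabit/s over %.1f s%n",
                  megabits / seconds, seconds);
      }
  }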


For instance, an ADSL2+ connection sold as "up to 24 megabit/s" now lets
you cancel or downgrade if you don't get 12 megabit/s of TCP throughput.
For 100/10 service the lower limit is 40 megabit/s. There is a
requirement that the user not use wireless and plug the computer
directly into the ethernet jack (ETTH) without a NAT router, because
these can heavily affect measured speed. The site also features a guide
to diagnosing why you're getting a lower than expected speed.


Customer complaints are down, so generally this seems to increase
customer satisfaction. My biggest objection is that with a 100/10
service, download speeds can vary a lot depending on which browser
vendor and version one is using; TCP settings of course also play a
role.


The upside is that the service is extremely easy to use.

--
Mikael Abrahamsson    email: swm...@swm.pp.se



DSL network build out

2011-01-29 Thread Santino Codispoti
I am hoping for some recommendations from the group.  We will shortly
be building out a new network for offering DSL access.  We are going
to interface with the Covad network at these locations:

111 8th Avenue, New York (5th Floor)
11 Great Oaks Blvd, San Francisco
427 S La Salle, Chicago


We are going to try to push most of the traffic out to the internet
within the same POPs, but we would also like to have connectivity
between the locations as well.  Who would you recommend we talk with
for fiber?  Who would you recommend we talk with for network gear?

Thank you for your help

Regards
Santino



Re: DSL options in NYC for OOB access

2011-01-29 Thread Joel Jaeggli
On 1/29/11 9:30 PM, Ryan Finnesey wrote:
 All this out-of-band management talk is making me think there is an
 opportunity for a super low-cost DSL offering.  Maybe a good way to get
 rid of some capacity we have.

The key, of course, is that it not be coupled to the physical plant the
other circuits use. I've been in a couple of facilities recently
(though not in NY) where riding into the building on twisted pair was
at best costly and more generally infeasible.

joel

 Cheers
 Ryan
 
 
 -Original Message-
 From: Andy Ashley [mailto:li...@nexus6.co.za] 
 Sent: Saturday, January 29, 2011 3:42 PM
 To: nanog@nanog.org
 Subject: Re: DSL options in NYC for OOB access
 
 On 29/01/2011 14:56, Randy McAnally wrote:
 
 Have you looked into the cross connect cost for your DSL line?  They 
 typically aren't very cheap either.

 ~Randy
 I'm still waiting for the quote to come back from L3.
 Figured a copper pair would be cheaper than a fiber, but who knows?
 
 Andy.
 
 