RE: Creating a crystal clear and pure Internet

2007-11-27 Thread Jerry Pasker




But, if it's not viewed as political then...

Your analogy is flawed, because the Internet is not a pipe system
and ISP's are not your local water utility.


And the internet is not a big truck!  It's a series of tubes!


Sorry, I couldn't resist... with all these things clogging all the tubes.  :-)


-Jerry



Re: Why do we use facilities with EPO's?

2007-07-25 Thread Jerry Pasker


I've always wondered who died or was injured and caused the EPO to 
come into existence.  There have been lots of "EPO caused downtime" 
stories, but does anyone on the NANOG list even have one single 
"Thank God for the EPO" story?  I'll feel better about the general 
state of the world if I know that the EPO actually has a real valid 
use that has been ACTUALLY PROVEN IN PRACTICE rather than just in 
someone's mind.



-Jerry   

Re: ATT <-> Broadwing (Level3) Congestion

2007-05-16 Thread Jerry Pasker


Is anyone else seeing significant congestion between ATT and 
Broadwing in Dallas/Fort Worth?


-brandon


I have no idea if the following is related: (and frankly, I don't 
care, for anyone that wants to flame me on it)  ;-)


I had been seeing it between Qwest and Level 3 since about November(!!!), 
on and off, and getting worse over the months.  You know it's bad 
when residential customers send traceroutes and ping summaries saying 
"Um, 6 hops out, Qwest is broken in Dallas."   When I would call 
Qwest on it, I would get one clueless in-duh-vidual after another. 
Eventually, I did get a clueful chap who summed it up with "Yeah, 
that's a big political finger pointing debacle."  He wouldn't comment 
further, and I don't blame him.


(Sidebar... it's funny that as a PAYING CUSTOMER I have to talk to 
the front line support doofs... if I were a NON-PAYING CUSTOMER 
(peer) with them, I'd get to talk to some clueful NOC person that 
knows what the phrase "IP transit" means.)


A week ago, an admin downed my BGP session with Qwest.  Life was 
instantly 100% better.


Maybe it was Qwest's fault.  Or maybe it was Level3's fault the whole 
time.  I don't care whose fault it is, L3 or Qwest, or the man on 
the moon.    As far as I'm concerned, when two large NSPs can't get a 
peering arrangement to not have packet loss, they both suck equally, 
because they're both being equally STUPID in not solving the problem. 
It's not like they can't afford to upgrade something.  If it's really 
that big of a deal, down the peering session!  Transit  the traffic 
on your own backbones to a peering session somewhere else!  Idiots! 
Argh!!


In this case, by firing Qwest, I had the ability to punish stupid. 
And it felt good.  Because over the months, my customers have done 
the same to me by leaving due to crappy performance.


This week, I'm no longer a Qwest transit customer.  And I'm sleeping 
better at night, and my family life is even better.  Life's good!


Just my two cents.



Re: SaidCom disconnected by Level 3 (former Telcove property)

2007-03-15 Thread Jerry Pasker


Not knowing anything about the case other than what I read in the 
article, my hang-up is that a transit provider can make a phone call 
and destroy a customer's business with 30 minutes notice.  On a DS3 
that has actual real lead time to replace, that's a business killer. 
The argument of "should be multi homed" holds some water, but I've 
never considered multihoming as a typical remedy for a 30-90 day 
outage.  And then it only works if lines are underutilized to the 
point that losing one will consistently have zero effect on network 
performance, even during peak use, and if there's still some level of 
extra redundancy remaining.  (Multiple contingency situations aside)


My opinion is probably somewhat influenced by the fact that I'm a 
small ISP with customers that want the internet to NOT be slow, 
facing that same DS3 lead time problem.  I ordered a DS3 in early 
December (whose local loop was to ride on a preexisting OC3, sounds 
easy, right?) and with dates slipping over and over again, and with 
no firm install date in sight from the company, I finally 
had to cancel and order with a different company last week. 
For the last month, the last thing I wanted to do was "punt" and 
start the process over again, but at some point, one starts to feel 
"choiceless."


"Do you think I placed that order in December just for fun?"


I see talk over and over again on NANOG about "Maybe some provider 
will come in with [insert new technology here] and compete with the 
cable/DSL providers" but as a small provider doing fix broadband 
wireless, I just don't see how even an army of small providers can 
compete against the likes of TENS OF BILLIONS OF DOLLARS of 
cable/telco market capitalization.


After fighting Qwest for ten years, maybe I'm starting to feel a 
little hopeless.






Re: WWPVD (was what the heck do I do know)

2007-02-01 Thread Jerry Pasker



If no one's been sued before because they've wild carded a defunct 
RBL, what's the big deal?  When someone tries their best, goes out to 
an intelligent group to get their opinions, and spends a HUGE amount 
of effort, and incurs measurable monetary damage (bandwidth, time, 
etc) and when the only reasonable answer (dare I say group 
consensus?!?!)  is "shut it off, in a way that could break things to 
get their attention" how can there be grounds for a lawsuit?  That's 
just silly!  Pay service or not, it doesn't matter when that period 
of time has passed.   Paul could be found negligent when a server 
admin was negligent for 6-7 YEARS?!  Seriously?!  I don't buy that 
argument.


Now, if he set up the DNS to wild card 1% of packets on day 1, 2% on 
day 2, 3% on day 3, etc, in an attempt to be less disruptive, then 
perhaps I could see someone being upset about that, because as a 
clueless person (bad admin) trying to troubleshoot some problem like 
that, they'd definitely play a good victim.  And I bet they would 
wait until day 80 to call in a consultant.  The only sane way is to 
pick a date, announce it far in advance, and flip the switch at 
00:00:00 on that day.


I suppose in some universe, it *IS* possible that Paul could be found 
negligent by some jury trial and ordered to pay millions of dollars. 
But that's the same universe where swine routinely fly back and forth 
across the green sky.


Just my humble opinion.


Re: Security of National Infrastructure

2006-12-29 Thread Jerry Pasker



> Why is it that every company out there allows connections through their
> firewalls to their web and mail infrastructure from countries that they
> don't even do business in. Shouldn't it be our default to only allow US
> based IP addresses and then allow others as needed? The only case I can
> think of would be traveling folks that need to VPN or something, which
> could be permitted in the Firewall, but WHY WIDE OPEN ACCESS? We still
> seem to be in the wild west, but no-one has the [EMAIL PROTECTED] to be brave and
> block the unnecessary access.


Most people inherently know the answer to this, but I figure I might 
as well answer the question since it was asked.


It is the way it is, because the internet works when it's open by 
default, and closed off carefully (blacklists, and such).   Would 
email have ever taken off if it were based on white lists of approved 
domains and/or senders? Sure, it might make email better NOW (maybe?) 
but in the beginning?


Block the few bad apples, and generally allow everything else by 
default.  (but allow it carefully)  It works for the web, email, 
airport security, and society in general (mostly open, free... unless 
you're a Bad Guy Criminal Type).


No one is smart enough to be a central planner, and know where the 
bad is, all the time. And no one is smart enough to predict who/where 
the "good" is.  That's why open by default (with careful security to 
screen out the "bad") generally works the best.  Chase down the 
"bad", and assume (correctly so) that the rest is "good."


Same concept applies to why we have police that chase criminals, 
rather than just throwing everyone in prison by default and making 
them prove that they're worthy of being free.



-Jerry




Re: 200K prefixes - Weekly Routing Table Report

2006-10-13 Thread Jerry Pasker


Sorry, I got several questions emailed to me, so I'll save my own 
bandwidth at the expense of everyone else's, and hopefully answer 
some people that didn't take the time/effort to ask...


The Dirty-Thirty is what I called the "Aggregation Summary" list 
in the cidr report (cidr-report.org) that gets posted to the NANOG 
list.  They put the top 30 ASes that have the most to gain through 
aggregation in their report for all to see.  When discussing this in 
the past I referred to it as the dirty thirty.


In the past, I suggested giving out "I'm the dirty thirty" t-shirts 
at NANOG meetings to those attending from the networks listed. 
Require them to be worn to attend.  Put slogans on them like 
"Aggregate is what you put in concrete, right?"  Have a cute picture 
of a stick person on it with a concrete block for a head, next to a 
router or something.


More effective, less funny, and also somewhat discussed in the past, 
was my suggestion of the creation of a route-server style of 
distribution of filters (like the cymru bogon servers) that would 
filter routes to the top 5 people on the list, essentially black 
holing the absolute worst of the worst.


It basically would be similar to email RBL, except that it would 
break the entire net, not just SMTP.  ;-)  While it may be 
sacrilegious to discuss such things like purposely breaking parts of 
the net on the NANOG list, it's for the greater good.  So hear me 
now, and believe me later.



It would work like this:

Step 1) Read the cidr report

2)Contact those top 5 networks with a simple message. 
"Congratulations!  You're in the top 5 of the dirty thirty! 
Aggregate now, because if you're still on the dirty-thirty list 60 
days from now, and your entry can gain more than a 30% reduction in size 
through aggregation, we're going to add you to the black hole server. 
Have a nice day."


3)Do this weekly.
3a)Shrug off threats of lawsuits.

4)In the meantime, a few crazy network operators would actually 
subscribe to the "Aggregation Route Server."  It might be a guy with 
an ASN and a /24 in his apartment, or a small company with an 
underpowered router that's facing an upgrade and wants to try to 
change the world, maybe a small host  or ISP, or whatever.  Or maybe 
a larger organization might actually be insane enough to apply this 
to all of their border routers.


"Crazy" is the key operator here.  And I mean that in a good way. 
:-)  It's crazy that the net even works... just announce some routes, 
and the world accepts them?  Now *THAT'S* crazy!


The whole idea is a terror tactic like weapons of mass destruction 
and mine fields. And email RBLs.  Remember when some thought RBLs 
were crazy?  Who would block email and cause collateral damage for 
themselves just to stop a few spams? Turned out that the answer to 
that question was "Everybody." Getting blacklisted had quite an 
effect on people, and that alone closed a lot of open relays.  Being 
responsible, and working to fight spam wasn't enough.  It took a 
terror weapon like RBLs to get people to close their relays.  I 
maintain that we are at the same point with the routing table.  It 
would provide motivation to aggregate, to stay as far away from that 
top 30 list as possible.   And the rest of the world wouldn't 
actually know WHO is subscribed, or what impact it might actually 
have, or whether, say, a large tier-1 NSP might actually subscribe to it 
just to be belligerent (tired of needing more RAM for their core 
routers, and can make a crazy business case for it [didn't Sprint do 
something like that a long time ago or something?] ) or is actually just 
plain crazy.


Maybe no one would join.  That's OK too.  The dirty thirty 
participants don't get to know that information.  No one would know 
except for the operators of the (free) service.  Because while you 
may have to be crazy to subscribe to it, you'd have to be equally 
crazy to sit on the top of the dirty thirty, and ignore the warnings 
that you might be black holed.  Maybe a single tier-1 nsp decides to 
use it.  That's pretty significant. Fight crazy with more crazy!


5)After 60 days, if the network that was in the top 5 to qualify 
hasn't moved out of the dirty thirty altogether, actually go add 
all their un-aggregated space to the route server.  Because we only 
really want to block the more specifics that are causing the bloat.


5a)Continuously monitor the actual global routing table, in 
somewhat real time... when they get aggregated, stop the madness 
immediately, and automagically.


6)Avoid lawsuits.  Or get sued.  Or fold and comply with the lawyers' 
demands.  Whatever.
(I don't have a solution to this... it's just a general 
requirement... I didn't say this would be easy, or even possible to 
operate in a sustainable manner.  I'm just saying that it is 
technically possible.  Logic would dictate that RBL operators 
*shouldn't* be liable to lawsuits from spammers, but this is a pretty 
messe

Re: 200K prefixes - Weekly Routing Table Report

2006-10-13 Thread Jerry Pasker



On Oct 13, 2006, at 2:02 PM, Routing Analysis Role Account wrote:


Routing Table Report   04:00 +10GMT Sat 14 Oct, 2006

Analysis Summary


BGP routing table entries examined:  200339
Prefixes after maximum aggregation:  108814


Shall we all have a moment of silence for 200K prefixes in the global table.

Maybe reboot all our routers at once or something?

--
TTFN,
patrick


Thanks for reminding me to change my neighbor 
maximum-prefix 25 80 statements to something 
more "reasonable" before I started getting 
warnings to my pager!  I'm still a few thousand 
routes shy of 200K as of today..
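(For the curious, the knob in question is the per-neighbor 
maximum-prefix statement.  A rough IOS-style sketch, with made-up 
AS numbers and peer addresses, that warns at 80% and only drops 
the session well past 200K:

   router bgp 64512
    neighbor 192.0.2.1 remote-as 64513
    ! log a warning at 80% of the limit, tear the session down at 250,000
    neighbor 192.0.2.1 maximum-prefix 250000 80

Exact syntax and sensible limits vary by platform and by how many 
full feeds you take, so treat the numbers as illustrative only.)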


I like that second line that you included. 
Maximum aggregation isn't always possible, but I 
think there are a lot of operators out there that 
don't aggregate as much as they could.  They cite 
various reasons for chewing up router memory ( 
"Oh, it's technically impossible". or my 
favorite "because someone could announce more 
specifics and steal our traffic, so we have to 
announce 842 /24s all separately, ALL THE TIME" ) 
while the rest of the net doesn't seem to have 
those issues, (or they deal with them as they 
happen... "uh, oh, someone's blackholing our 
traffic, let's announce our space as /24s until 
we can get that other operator to correct their 
stupidity... we'll withdraw the /24s as soon as 
it's fixed 22 hours later")


You should have put a difference number there 
too, just so everyone didn't have to get out 
their calculators to figure out how many extra 
routes there are (91525).  So since my calculator 
is out, I did some more numbers. Of those 91,525 
routes that are extra routes in the table, 
14,444 come from the dirty-30.


So those top 30 ASes that I refer to as the 
"Dirty Thirty" represent .13% (POINT ONE THREE 
PERCENT!) of the ASes, but they contribute 15.7% 
of the route-bloat on the net!! 
.13% = 15.7%.


The dirty-thirty is a shameful list.  But 
apparently there isn't enough pressure from 
within the routing community to not be on it.  At 
least not yet.  ;-)


--
"I'll reboot mine, if you reboot yours."


Re: Best practices inquiry: filtering 128/1

2006-07-10 Thread Jerry Pasker




Actually, I take that back.  Why wouldn't you just get a feed from 
Cymru  ??




Because you fear that their routers that distribute the feed could 
become own3d and used to cause a massive DoS by filtering out some 
networks?


You asked.   And I use their route feed.  :-)

I figure if a problem occurs, 1)I won't be the only one that has that 
problem, and 2)I'll hear about it on NANOG.


I figure the minute risk is worth the convenience... the chances of 
their routers getting 0wn3d are probably about the same as my routers 
getting 0wn3d.  The chances of it happening aren't zero, but probably 
pretty small.  Enough so that it sure beats editing the BOGON list 
manually!
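(For anyone curious what taking such a feed involves, it's just a 
multihop eBGP session plus a route-map that points whatever arrives 
at a null-routed next hop.  A rough IOS-style sketch; the peer 
address, AS number, and next-hop below are placeholders, so use the 
values from Cymru's own peering templates rather than these:

   router bgp 64512
    neighbor 192.0.2.222 remote-as 65000
    neighbor 192.0.2.222 ebgp-multihop 255
    neighbor 192.0.2.222 route-map BOGONS-IN in
   !
   route-map BOGONS-IN permit 10
    set ip next-hop 192.0.2.1
   !
   ! anything pointed at this next hop falls into the bit bucket
   ip route 192.0.2.1 255.255.255.255 Null0

Routes learned from the feed get black holed locally, and go away 
on their own when the feed withdraws them.)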


-Jerry



Re: Who wants to be in charge of the Internet today?

2006-06-23 Thread Jerry Pasker


One two three NOT IT!

Sorry, when I saw the subject, I couldn't resist.


RE: insane over-regulation - what not to do

2006-06-21 Thread Jerry Pasker



Could you be more specific? Are you talking about "Part VIII
DOMAIN NAME REGISTAR" or something else?

rsw.



I like Part XIII, Subsection 115.   "Thing." myself.

-Jerry


Re: I never realized so many trains derailed until my Internet kept going out

2006-01-29 Thread Jerry Pasker


Maybe they had a parallel backup route 2mm  diverse from the main 
one.  Do you think their fiber maps showed the same thing?  :-)






Re: The Backhoe: A Real Cyberthreat?

2006-01-19 Thread Jerry Pasker


While it is always fun to call the government stupid, or anyone else 
for that matter, there is a little more to the story.


- For one you do not need a backhoe to cut fiber
- Two, fiber carries a lot more than Internet traffic - cell phone, 
911, financial transactions, etc. etc.
- Three, while it is very unlikely terrorists would only attack 
telecom infrastructure, a case can be made for a telecom attack that 
amplifies a primary conventional attack.  The loss of communications 
would complicate things quite a bit.


I'll agree it is very far-fetched that you could hatch an attack plan 
from FCC outage reports, but I would not call worrying about attacks 
on telecommunications infrastructure stupid.  Enough sobriety 
though, please return to the flaming.


I agree with you on all points except the one you didn't make.  :-)

The point is:  What's more damaging?  Being open with the maps so 
EVERYONE can see where the problem areas are and can design 
around them (or choose not to)?  Or pulling the maps, and reports, and 
sticking our heads in the sand, and hoping that security through 
obscurity works?




Stupidity: A Real Cyberthreat.

2006-01-19 Thread Jerry Pasker


[subject change since this is a change of subject, was "Re: The 
Backhoe: A Real Cyberthreat?"]


The biggest threat to Cyber security is stupidity, followed only by 
indifference.  Period.  There.  Someone was bound to say it, so I 
said it first.


Now, in an attempt to get my NANOG "Header to Content" size ratio to 
1, I'll rant on a little for your entertainment, enjoyment, 
annoyance, or hatred.  :-)


Terrorists want to kill people.   Did anyone die when those two 
fibers were cut?  Did it cripple the US Economy?  Did it close the 
stock markets?  When the markets opened the next day, did stock 
prices fall across the board for weeks and months on end?  Not 
exactly.  Will people put bumper stickers on their cars that say 
"Remember 1/9?" or "Remember Buckeye and Reno Junction" No.  Not one 
person will do that.


[most] Religious extremists tend to cite religious verses saying 
things along the lines of it being acceptable to kill those who do 
not believe or who oppose their religion.  [just like Christianity 
during the crusades]  I'm pretty sure there's nothing in the Koran 
that says anything about "taking away their internet and cell phones, 
and knocking out their power." [so they can live like we do]  This is 
something that the DHS knows, but doesn't want to admit too loudly. 
Why? Because it's easy to say "We're doing more to prevent cyber 
attacks.  See?  We took away the fiber maps!  We accomplished 
something!  This is bound to help out!"  [now give us more money so 
we can afford to do more things like that]


They say that, to throw us  [the public, and Congress that pays for 
their department to exist] a bone every now and again. It's nearly 
impossible for them to say "you're safer today than you were 
yesterday!"  Well, they could say it, but it would be laughed at by 
the majority of the population.  [more so than they are now] How are 
they supposed to calm people's fears?  With a statement like:  "See? 
You aren't being attacked by terrorists today!  We must be doing our 
job!"


The graphic in the Wired story from FortiusOne showing fiber optic 
backbones and how they clump also shows just how many other fiber 
routes exist.  It also shows where terrorists should go looking for 
fiber to cut.   Look at THAT map.  Go look for, and follow the signs. 
Failing that, make a few phone calls, and have the stuff marked so it 
can be found and cut.  It's really that easy.  But why even do 
that?  We already cut enough of it without any help from terrorists. 
Just in case no one was paying attention, the score is: Lack of 
information + guy on backhoe = 675,000 cuts per year:  Terrorists = 
ZERO. It's up to carriers to either diversify or feel the wrath of 
the backhoe.   Fortunately [for carriers that have an outage] and 
unfortunately [for long term reliability], the general population is 
forgiving and forgetful enough that when outages do occur and their 
life is back to 'normal' they just don't care enough to want to pay 
higher prices for that extra infrastructure.


The part that wasn't mentioned is something I'm most interested in. 
How much did the outage cost Sprint?  And is it worthwhile for them 
to install or lease different fiber routes to prevent that type 
of revenue loss in the future?  [My guess would be "No"] 
Marketing will make up for lost customers by trying to convince 
people to forget that it ever happened, and rate increases and/or 
insurance will make up for any lost revenue.


-Jerry


Re: Phone networks struggle in Hurricane Katrina's wake

2005-08-30 Thread Jerry Pasker



On 30-aug-2005, at 22:08, Fergie (Paul Ferguson) wrote:

"In this age of cheap commoditized consumer electronics and 
advanced mobile technology, why can't all the people of a city make 
contact during an emergency?


Simple: it's too expensive.

Keep this in mind when trading in your POTS service for VoIP service 
over the internet. Discounting the local loop which is often the 
same in both cases, POTS is extremely reliable while VoIP over the 
public internet, well, isn't. But apparently people that switch to 
VoIP don't mind the reduced likelihood of being able to make calls 
during the next large scale emergency.


Yes!  I agree 100%.   The key words in that above statement were 
"cheap commoditized." The reason satellite phones work in big 
disaster areas (other than the fact  that the entire infrastructure 
in the affected area is comprised of a  solar powered satellite and a 
subscriber's hand set with a remote base station(s) somewhere else in 
the world) is simple: not everyone and their cousin has one to use.


Why?  Because they're too expensive!

Cell phones have trained the public into accepting lower levels of 
phone service.  Low cost equals high market adoption, and in most 
cases, lower QoS.


-Jerry


Re: UUNET connectivity in Minneapolis, MN

2005-08-11 Thread Jerry Pasker





ATT must adhere to some different engineering standards; as well, 
devices we monitor there were all fine, no blips... but all of the 
MCI customers we have in IL, MI, WI, MN all had issues...



Power went out at 4:30 ish and ckts all dumped about 8:30 pm...

Then bounced until 6:30 AM this morning.

Not sure I understand how on earth something like this happens... 
power is not that confusing to make sure it does not stop working.


JD




Maybe they actually *HAVE* standby generators.  I have no sympathy 
for any provider for failure to plan for the inevitable power 
failure.  I only have moderate sympathy for a failed standby 
generator. It's a diesel bolted to an alternator.  The design was 
solidly debugged by the 1930s. exercise it, be obsessive about 
preventative maintenance, keep the fuel polished, have extra filters 
on hand,  and it will rarely let you down.  Having to dish out SLA 
credits isn't punishment enough for failing to have standby power.


On the other hand, if the customers are in contract, and a provider 
can get away with having their network fall down when there's a power 
outage and few if any customers actually go through the hassle of 
seeking SLA reimbursement, then it's really the customers' fault for 
the provider not having a generator.   Yup, this is the screwed up 
world we live in. :-)




Re: UUNET connectivity in Minneapolis, MN

2005-08-11 Thread Jerry Pasker



Hi Chris,

It seems all 800 numbers I have are busy.
I heard that there was fire around home depot in Down Grove area,
and it did hit the power grid, so UUNET/MCI POP lost the power.
UUNET/MCI tech - Fortunately, our Network management center tech has 
the number for him - said he is waiting

for generator coming in, but NO estimated time for recovery.

Hyun




Did the generator at the POP fail, causing this outage?  Or is this a 
POP without standby generation?


-Jerry



Re: More long AS-sets announced

2005-06-20 Thread Jerry Pasker


Hank wrote:




Wrong again.  You used both AS1221 and AS2121:



I logged pretty much the same thing with max-as limit 
blocking/logging the announcements (of course, the paths, and time 
stamp seconds are different, YMMV).   Perhaps the unforeseen 
technical difficulties have something to do with not sticking to the 
original plan?  Announce something different to get around filters, 
and thus detect who put filters in place?  Or more innocent... 
maybe fat fingers, or copy/paste gone horribly wrong?
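(For anyone who wants the same belt and suspenders: on IOS the knob 
is bgp maxas-limit, which drops and logs any update whose AS path is 
longer than the limit.  The AS number and the limit below are just 
placeholders, not a recommendation:

   router bgp 64512
    bgp maxas-limit 75

Other vendors have equivalent knobs under different names.)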


Doesn't really matter what happened though, because a controlled 
announce is a lot better than a malicious one. Since a stupid person 
is the most dangerous type of person (fifth basic law of human 
stupidity), a malicious  announcement is even safer than a clueless 
one.  I'd much rather see this done by someone with clue that's going 
to announce and withdraw them, then check for damage, than by someone 
that might not know what the heck they're doing.  If there is damage, 
we'll all hear about it, and figure out how to stop it in the future 
when someone else tries to be malicious.  Or when someone else is 
just plain clueless.


Apparently that's not the case since this whole experiment was so 
disruptive that it took 16 hours for someone to notice and point out 
on NANOG that it neither did nor did not go off as previously 
announced.


(I'm guilty of looking it over in the logs, and not even noticing the 
difference between 2121 and 1221)


The internet is our playground... can't we just all get along?  If 
someone's going to load 50 kids on a merry-go-round, and get 50 more 
kids to push it I'll just stand over by the monkey bars and try 
to avoid the flying vomit.  :-)


-Jerry


Re: Outage queries and notices (was Re: GBLX congestion in Dallas area)

2005-06-08 Thread Jerry Pasker


It seems like it's taking more time to discuss it than it actually 
would take to create a nanog-outage list.


Maybe it's not being done because doing so would be threatening to a 
lot of people.


Having a large sounding board for outages will make it very difficult 
for larger providers to cover up outages.   Having a web archive of 
postings during the outages will make it harder for people to forget 
those outages after they're corrected.  Seeing lots of 
[reported/speculated] outages on a web archive from a larger NSP/ISP 
might make that network look bad, even though a large network might 
have a small percentage of outages when measured against the number 
of customers it serves.   Furthermore,  when someone does something 
stupid, it will be harder to forget/ignore the lessons learned.  I'm 
sure that's bound to make someone feel threatened.  ;-)


And last but not least: nanog-outage may become more operationally 
relevant than this list. :-)


-Jerry


Re: Battery Maint in LEC equipment

2005-06-07 Thread Jerry Pasker


Even though it is fed with N+1 UPS power, Qwest put N+1 rectifiers & 
batteries in the fiber cabinet they installed for me a few years 
ago.  At the time, batteries were required no matter what, and they 
say they will replace them every 5 years.  A little-town independent 
telco, however, refused to even install a data center fiber shelf 
unless I provided them with DC power.  It just seems to depend on the 
whim of the telco.


As prices fall, so does level of service.  NANOGers all know 
providing uninterruptible power in the current evolving networks is 
hard as the communications infrastructure continues to decentralize. 
Providing non stop power for long term power failures with generators 
scattered all over the place is insanely hard.  Keeping them running 
during a widespread 'event' is even harder.  Everyone wants (expects) 
"always on" dial tone.  And everyone wants cheap calling and cheep 
bandwidth.  Batteries, generators, and their maintenance/operation 
are expensive.  A resilient built network is much more expensive than 
a non resilient one.  Eventually the public will start to realize 
this, and start to demand laws to maintain certain minimum levels of 
service.  It won't happen until some large disaster, or touching news 
story about some preventable tragedy brings it in front of the 
public.  People will have to die for this trend to change.


The non-reliable VOIP as a lifeline, even if it's not intended as 
one, is the tip of this iceberg.


(by the way, like many other forms of regulation, the same goes for 
internet regulation... if some shady network somewhere ever turns 
out to be the root cause of some incident where a number of deaths 
occur, regulation will soon follow)


Aside from human error, right now the weakest link in the net is the 
grid, and that is a link that isn't apparently getting any stronger.


RE: what will all you who work for private isp's be doing in a few years?

2005-05-12 Thread Jerry Pasker

bottom line is that in a few years everything will be virtualized and
consolidation will rule the land.

I've heard this over and over again, and it's just not happened.  I'm 
still one of the few 100% facilities based dial ISPs left in Iowa, 
and if I have to be reduced to being a reseller to survive, I'll just 
close shop.  But I don't see that happening.  Sure, dial up will 
eventually be a niche service, and that's fine, as most of my revenue 
will be from other sources by the time that happens.  If I ever have 
to become a dial up reseller, it will be because my core business has 
moved in another direction.

 there will be single turnkey solutions
for the end user / corporate environment that will be infinitely
configurable to meet the latest trends and needs.
Are you in a marketing department of some BigCo?  "Let's produce a 
single product that 100% of all customers can use, and that can 
change depending on the latest fad of the day, and we'll rule the 
marketplace!"  If it were possible, wouldn't someone have already 
done that?  It sounds like something that would make for a good 
Dilbert comic strip.

there will be no use
for the small time 'innovator' or 'player' except in a purely academic
environment.

You must be new to this game.  :-)
In capitalism, there is always an innovator.  They drive technology 
forward, and then the mainstream follows.   You're under the false 
assumption that there will come a point where there is nothing left to 
innovate in the areas of last mile IP net access, and that consolidation 
will make a single, regulated monopolistic provider.

I think we know that scenario won't be allowed to happen.  By the 
federal government regulators (I suppose that depends on the FCC), by 
state regulators, or competition/capitalism in general.

BigCableCo, and BigTelco can fight over customers all they want, I'll 
be happy with the table scraps.  And since "single miracle product 
that can be everything for everyone, and perfectly meets everyone's 
needs" doesn't exist, and won't ever exist, there will be plenty of 
scraps to be had.

-Jerry


Re: what will all you who work for private isp's be doing in a few years?

2005-05-11 Thread Jerry Pasker

You mean those of us who ARE private isps?
Probably doing what we are doing today, reacting to the
environment.
Amen.
And, might I add, doing it faster and more efficiently (although on a 
smaller scale) than any BigCo can.

(I feel like troll bait... but will elaborate since others have taken 
up this thread.)

In the world of slow moving BigCo dinosaurs, I'm just a little 
quickly adapting rodent looking for scraps.  Right now, the 
efficiencies of big business leave plenty of scraps for the taking. 
If the getting gets too difficult, there are plenty of other things 
that I'm over qualified to do.  Some days, I think those "other 
things" would pay better, and be more satisfying.

But alas, I knew that when I decided to start up this little ISP in 
'96, with 8 modems, a couple of Macs, and a 2511.  I knew that if the 
internet ever got popular and mainstream enough, BigCo would jump 
in, and make it impossible to compete.  I figured "Oh, what the heck, 
I might as well give it a go."  And yes, that's happened on several 
fronts, but at each turn, I find new and different things that I can 
do, and do better, and cheaper than BigCo.  If I'm forced all the way 
out of the market, fine... I'll adapt.  If my company goes away 
because it can't offer what people want, so be it.  I'll find 
something else to do.  So will my employees... they're all smart 
enough to do different things, and knowing them all well, I know 
they'd eventually welcome the change of scenery.

Any company that doesn't adapt will go extinct.  ANY company. 
(Unless it's a monopoly)   Capitalism, and free markets dictate this. 
Living in a small town that recently had a major highway bypass it, 
I've lost some popularity points for stating that.  Just because some 
main street business has been there for 40 years, always doing it the 
same way from when they started, they think they have some 
God-given right to be in business. In reality, it's quite the 
opposite.  Every day a business does the exact same thing that it did 
the day before is one less day that company will be in business. 
That should be the tag line of every small business.

-Jerry


Re: The "not long discussion" thread....

2005-04-27 Thread Jerry Pasker
Steve Sobol allegedly replied to my reply with:
What were the router ACLs doing that the DNS server ACLs weren't/couldn't?
The ACLs were doing it for the entire server network.  Since I prefer 
my job as a  router-rat over everything else I do, I find it easiest 
to use the biggest hammer available to me when dealing with DoS 
attacks.  One router ACL vs. 10 server ACLs?  When I'm under attack 
I'll take the one router ACL.   Then, per their request, I added it 
to the networks that my collocation clients were on.  They were 
getting 0wn3d regularly, and it really simplified my life in a time 
when new BIND 8 exploits were coming out every 4 minutes.  The router 
ACLs made my life easier, not harder.  Besides, it's my ASN, and I 
can do what I want.  ;-)

Christopher L. Morrow allegedly wrote:
This, it seems, was an unfortunate side effect (as I pointed out earlier)
of legacy software and legacy config... if I had  to guess.
You guess wrong.  See the above.  And don't pass judgement. (am I 
being cited for lack of clue?  It kind of feels like it)  It wasn't a 
*BAD* thing, it was a *GOOD* thing.  It made things better, not 
worse.  I still may go back and re-implement port 53 blocks in the 
future if I find a good reason to. I know now that it doesn't really 
cause operational problems.  At least not in a smaller ISP 
environment.  Would I want a transit network to block TCP 53?  Of 
course not.  But my end customers request those types of services 
regularly, so I try to provide what they want.

And don't think I'm coming off as all ticked off and defensive.  I'm 
not ticked off, I'm actually enjoying this.  As for being defensive? 
Maybe.  I'm trying hard not to be though.  I really can't help 
myself... I have this lurking fear that I'm being tossed into 
the "clueless block TCP 53 with an outsourced firewall, and don't 
know what I'm doing beyond that" group that I so despise.  ;-) 
Especially on this list, full of people that I have so much respect 
for.

I knew I was opening myself up a little when I decided to "help out" 
by sharing my worldnic.com experiences, but figured it was for the 
good of the group, and therefore, worth it.  And I still think that.

-Jerry


Re: Schneier: ISPs should bear security burden

2005-04-26 Thread Jerry Pasker

I've been there -- I know how I feel about it -- but I'd love
to know how ISP operations folk feel about this.

It means 10 different things to 10 different people.  The article was 
vague.  "Security" could mean blocking a few ports, simple Proxy/NAT, 
blocking port 25 (or 139... or 53.. heh heh) or a thousand different 
things.  There is a market for this, it's called "managed services." 
ISPs do this type of thing all the time.  And customers pay for it. 
Maybe he means "broadband home users".  News flash... home users will 
get it wherever it's cheap.  And cheap means no managed services.

To the author of the article:  Should ISPs be *REQUIRED* to do it? 
Just try it and see what happens... try to pass a law and regulate 
the internet, I dare you... :-)   (I double-dog-dare you to get the 
law makers to understand it first!)

Every security appliance ven-duh on the planet would be in there, 
trying to have laws written that would require the use of their own 
proprietary solutions to the "problem."  (and the proposed problem 
would differ depending upon the "solutions" that the particular 
ven-duh offered)

Wait a second... this article was FROM security ven-duhs... all 
offering solutions to these problems... uh-oh, this is probably 
their first move in getting a law.  Step 1)  cause a public 
outcry... so it's starting already.

I think we've all seen this act before.
Some days, the world really annoys me.  :-(
-Jerry


The "not long discussion" thread....

2005-04-26 Thread Jerry Pasker
I posted to NANOG:
Jerry Pasker <[EMAIL PROTECTED]> wrote:

 fine. (after a few tries)  I'm using BIND 9.2.4 without the eye pee
 vee six stuff compiled in.  Because I don't want to start something;
 No discussion about me blocking port 53, ok?  I got tired of gobs of
 log files of script kiddies trying to download my domains 5 years
 ago...
Steve Sobol replied with:
I'm not going to enter into a long discussion with you. :)
I'm just curious why you didn't restrict AXFR to certain IPs instead.
And I'm posting back to NANOG:
I did.
And I had router ACLs doing the same thing.  Allow the hosts that 
needed it, deny everyone else.  And I did this for ALL my DNS 
servers.
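(The shape of it, with made-up addresses, was nothing fancy: permit 
TCP 53 from the secondaries that need zone transfers, deny it from 
everyone else, and let the rest through.  Something like:

   access-list 153 permit tcp host 192.0.2.53 host 198.51.100.10 eq domain
   access-list 153 deny tcp any host 198.51.100.10 eq domain
   access-list 153 permit ip any any

...applied inbound on the upstream interface with "ip access-group 
153 in".  UDP 53 was never touched.)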

I was getting DoSed one day, somewhere around 2001, and put in the 
ACLs, immediately expecting it to break things (truncated responses 
needing TCP and/or other things that I didn't foresee).  Much to my 
dismay, it broke nothing.  Despite me looking for problems, and 
asking and pleading with my techies to find trouble tickets related 
to this issue, it didn't happen.  I revisited the issue periodically. 
Every time there was an unexplained DNS issue, I would think "it must 
be the port 53 block!"... but alas, I was disappointed each and every 
time.  I've removed and added the ACLs countless times over the years 
troubleshooting various DNS issues, but this is the first time that 
removing them actually solved anything.

See, I *WANTED* there to be a problem in blocking port 53, I 
*BELIEVED* all the talk that it would cause problems, but that 
problem never showed up.   Over the years, I just slowly 
arrived at the conclusion that all the talk was from people who 
talked, not from people who were brave enough to try it in a 
production environment.

4 years later, I was proved "inconclusive":  Blocking port 53 does 
break things for servers that are already (apparently?) broken.


-Jerry


Re: Problems with NS*.worldnic.com

2005-04-25 Thread Jerry Pasker

something *very* strange is going on.  the worldnic servers have
been giving delayed or no results for days now.  and nsi is hoping
we and the wsj/nyt won't notice.
I agree 100%.

but it's probably time for us all to dump symptoms here and figure
it out as a community, as the dog with the bone ain't 'fessing up.
randy
I'll bite.
I couldn't resolve ns*.worldnic.com domains until I finally bit the 
bullet, and unblocked port 53 TCP from my DNS server.  Then it worked 
fine. (after a few tries)  I'm using BIND 9.2.4 without the eye pee 
vee six stuff compiled in.  Because I don't want to start something; 
No discussion about me blocking port 53, ok?  I got tired of gobs of 
log files of script kiddies trying to download my domains 5 years 
ago... I actually READ my logs... besides, I had to keep the linux 
boxes safe from the tyranny of bind 8 until they got upgraded.  :-)

-Jerry


Re: grrr

2005-04-17 Thread Jerry Pasker

http://rfc-ignorant.org/tools/lookup.php?domain=ebay.com
it's been three years, I don't think they really give a damn.
matto
On Sat, 16 Apr 2005, Scott Grayban wrote:
  If there are any eBay admin here please fix your spoof@ & abuse@
  address because it is denying every spoof complaint sent to it.
  It constantly replies back "Your email has not been delivered"
  I don't understand why this company has to be so hard-headed in
  abuse issues.
[EMAIL PROTECTED]<
  The only thing necessary for the triumph
  of evil is for good men to do nothing. - Edmund Burke
This is the same company that runs Path MTU discovery on their web 
servers, and then blocks ICMP at their border.

-Jerry


Re: New Outage Hits Comcast Subscribers

2005-04-14 Thread Jerry Pasker

On Apr 15, 2005, at 1:13 AM, Jerry Pasker wrote:
Jeff Cole wrote:
Run bind locally on your laptop. There's a Win32 version available 
if you're not running some sort of Unix or Linux on there. It's 
what I do as my ISP's DNS is wonky at times, as is $ork's as they 
choose to use Active Directory for DNS.
For the sake of the root servers, I hope everyone doesn't start doing this.
Well configured laptops will not put that much pressure on the 
roots.  A single misconfigured / broken recursive name server puts a 
lot more pressure on the roots than lots of well-configured laptops.

I guess one could argue that the chances of misconfiguration go up as 
the number of systems goes up.

--
TTFN,
patrick
I didn't say "I hope a few cluefull people don't do this."  I said "I 
hope _everyone_ doesn't start doing this."  Big difference.

For the sake of the net, I hope no one, not even a semi-popular OS 
venduh, gets the idea to build a DNS server into their next OS some 
day.


Re: New Outage Hits Comcast Subscribers

2005-04-14 Thread Jerry Pasker
Jeff Cole wrote:
Run bind locally on your laptop. There's a Win32 version available 
if you're not running some sort of Unix or Linux on there. It's what 
I do as my ISP's DNS is wonky at times, as is $ork's as they choose 
to use Active Directory for DNS.
For the sake of the root servers, I hope everyone doesn't start doing this.
-Jerry


Re: Anyone familiar with the SBC product lingo?

2005-04-14 Thread Jerry Pasker

(Anybody here *NOT* seen cases where the 2 fibers leave the building 
on opposite
sides, go down different streets - and rejoin 2 miles down the way because
there's only one convenient bridge/tunnel/etc over the river, or similar?)
Even if that's not the case, and it's still perfectly separated all 
the way to the CO, the CO is a common point of failure.  Granted, the 
failure modes are very unlikely to occur for a CO, but they do exist. 
Those two separate paths of the ring have a way of always coming 
together somewhere, by design.

The only way to ensure that doesn't happen is to have two sources of 
connectivity to a building, from two separate local carriers that 
have fiber going in two opposite directions (e.g., one carrier to the 
east, one to the west), to two opposite area codes/LATAs that get 
transit from two different transit providers that have POPs in cities 
that are geographically the furthest apart (one to the north, one to 
the south, or east west, or whatever).  As long as everything keeps 
heading in complete opposite directions, it becomes fairly assured that 
the common modes of failure diminish with distance.

This tactic works, and works well with IP using BGP, but it's 
something that would be beyond my scope of expertise to attempt to 
implement with anything else.
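(In BGP terms there's nothing exotic about it: two eBGP sessions to 
two unrelated carriers, with your own space announced out both. 
Addresses and AS numbers below are invented for illustration:

   router bgp 64512
    network 203.0.113.0 mask 255.255.255.0
    ! carrier heading east
    neighbor 198.51.100.1 remote-as 64601
    ! carrier heading west
    neighbor 192.0.2.1 remote-as 64602

The routing part is the easy part; the hard part is verifying that 
the two local loops really do leave the building, and the town, in 
opposite directions.)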

(someone mentioned earlier charging the 2 9's rate for providing 5 
9's service... it was a wake-up call to myself... I'm that guy!)

On a somewhat related, but kind of a little off topic note:
It always makes me chuckle inside to hear data centers tout their 
"dual grid connections" as a way to insure that the power "is hardly 
ever interrupted"  Same basic principal.  Sure they might be separate 
distribution feeders, and they might even come from separate 
distribution substations, and the subtransmission that feeds the 
distribution substations might even come from separate transmission 
substations... but within about a minimum of a 60-100 mile radius, 
it's nearly always connected together by the transmission grid.

Now, if there was a data center that had a power feed connection to 
say, ERCOT, the Eastern Interconnection, and the Western 
Interconnection THAT would be something to brag about.


Re: sorbs.net

2005-03-15 Thread Jerry Pasker

If it *actually* worked right, why do I *ever* encounter people that 
don't even
know what block lists they're using?

Because enough people running networks are idiots.  Why do these 
networks even stay
in business?

Because their competitors are often equally mercifully free of the ravages
of intelligence

I'm sorry, but the correct answer that we're looking for is :
"Customers."   Because they have customers who don't just put up with 
it, but encourage them by *PAYING THEM MONEY*

All "really stupid" companies that make "really stupid" products, 
stay in business because"really stupid" customers pay them them 
"really stupid" money.  So, who's stupid?

This is not only relevant to network operation, but life, as a whole.
It's not my opinion, it's the truth.   (is it not a fun world we live in?)
-Jerry


Re: sorbs.net

2005-03-15 Thread Jerry Pasker

It's just cynicism at its best. I like people who can be smartasses 
without being asses, but this is ridiculous if they want to be a 
serious service, and cute if they are looking to make jokes.

	Gadi.

I totally agree.  Although $50 is a little steep.  I've seen people 
fly into a gargantuan rant -dare I say temper tantrum- over a $5 
parking fine.  One only needs to charge a fine of any type to get 
people worked up about it.  A $5 "you were stupid, now pay here to 
get off the blacklist" fine would probably be much easier to deal 
with for a lot more people, but still be considered "No, I am not 
going to pay your ridiculous fine." (and there's not a darn thing you 
can do about it!  I'm mad as heck, and by gosh, I'm not gonna take 
it any more!) by about the same number of people as before.

The thing about running a dns blacklist, is that one doesn't have to 
be a serious service.  One merely has to operate a blacklist on a 
whim, and certain [equally irresponsible] mail admins, fed up with 
spam, will use it no matter how ridiculous one's listing or delisting 
procedures are.

On the flip side, when one finds their IP on a blacklist, it's nearly 
impossible to know how many servers are actually using the blacklist, 
so it's impossible to gauge the seriousness of the blacklist entry. 
It's blacklist terrorism.

And yes, I'm still kicking around the idea of a bgp route feed style 
"aggregation blacklist."  I wonder if that makes me an "ip routing 
terrorist?"  :-)

-Jerry


Re: Heads up: Long AS-sets announced in the next few days

2005-03-03 Thread Jerry Pasker

On 2 Mar 2005, at 22:30, David Schwartz wrote:
	Please just clarify the following point: do you intend to 
advertise paths
containing AS numbers belonging to other entities on the public Internet
without the permission of the owners of those AS numbers? You admit that you
don't know what the consequences of this injection will be.
Prepending announcements with remote AS numbers has been a 
well-known technique for preventing prefixes from propagating to 
particular ASes for a long time.

The AS_PATH attribute is a loop detection mechanism, and a 
determinant in path selection. What other magic is there in it that 
requires such careful consideration? Why should anybody need to get 
permission from remote operators before deciding what attributes to 
include in their own advertisements?

Do I need to get permission from Sprint before I include 1239:100 as 
a community-string attribute on my own advertisement, too?

It seems to me that there are enough issues with this type of
experimentation *with* the permission of the AS numbers you plan to use. But
the ethical issues with using them without such permission seems to me to be
insurmountable.
The ethical issues seem to be non-existent, to my way of thinking, 
and hence trivial to surmount :-)

Joe

It is my humble little opinion that if joe-public looked at AS paths, 
it might be somewhat of an ethical issue, as some companies wouldn't 
want to be associated with others. ("hey, it says right here that 
Sprint gets its connection from the isp down the street!")  However, 
most who see the AS paths have a clue, and are smart enough to know 
that anyone can prepend pretty much anything pretty much any 
way they want to, so it's really not an issue.
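(For those who haven't seen it done, the mechanics are about one 
line of route-map.  A purely hypothetical sketch, with made-up local 
AS and peer address: to keep a prefix out of AS 1239, poison the 
outbound path with 1239 so that AS's loop detection discards it. 
Note that some implementations may refuse to prepend an AS that 
isn't your own:

   route-map POISON-OUT permit 10
    set as-path prepend 1239
   !
   router bgp 64512
    neighbor 198.51.100.1 remote-as 64513
    neighbor 198.51.100.1 route-map POISON-OUT out

Whether doing that without asking 1239 first is polite is exactly 
the argument above.)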

Those that look at AS paths, and aren't clueful enough to know the 
difference... well, does anyone really care what those people 
think anyway?  ;-)

-Jerry


Re: The Cidr Report

2005-02-12 Thread Jerry Pasker
Until there's deep shame, or a real financial incentive, to not be 
listed as a member of the dirty 30, nothing is going to happen in 
terms of aggregation.

Unfortunately, an automated email going out to each of the dirty 30 
weekly from the Cidr Report saying that their network again made the 
list of top 30 most shameful examples of how to participate as an 
active member in the global routing table would probably have little 
effect.  If they cared, they'd already be doing something about it.

Nothing is going to happen unless enough people (ASNs) take a 
simultaneous, and  UNITED stand, and make it painful for those that 
don't care about the routes they leak to the net.

Here's an idea, it's probably not the best idea, and has a *lot* of 
potential problems, but it's just an idea:

Pick the top 1 or two worst offenders every week, and automatically 
dump them into a route distribution server that would work in the same way 
as the Team Cymru bogon server list.  I bet THAT would get people to 
scramble to aggregate!  Want to make a clear business case for spending 
time to clean up routes?  How about "global routability"?  Every 
week, the top of the list would be singled out, and  they could be 
placed on the server, and anyone that wanted to null route them based 
on that information could do so.  A level of automation would be 
required to quickly remove them from the blacklist as soon as they 
aggregated, and quickly re-add them without warning if they decide to 
deaggregate within a certain time frame of being on the blacklist. 
If the addition/removal was automated, it would be clear cut as to 
why the "victim" was placed on the list.  No favoritism or politics 
would come in to play.

It would get results.  I'm not sure what those results would be, and 
the result might just be a bunch of really mad and aggravated people, 
and a slightly more broken internet, but there'd definitely be 
results.

Or something.;-)
(I bet it would be a lot like the early days of DNS-RBL for mail servers)
I'm sure someone on this list who is wiser than me, has a better 
idea.  I'd love to hear it discussed.

I'm going to repeat what I typed earlier:
Nothing is going to happen unless enough people (ASNs) take a 
simultaneous, and  UNITED stand, and make it painful for those that 
don't care about the routes they leak to the net.

-Jerry


Re: Weekly Routing Table Report

2005-01-07 Thread Jerry Pasker


Analysis Summary

BGP routing table entries examined:  153319
   Prefixes after maximum aggregation:   89967

Should it matter that in six months it's gone from 140k to 153k?
At this rate it might crack 200k in less than two years.
This was about the weekly routing table report, but I'm going to 
bring in some numbers from the CIDR report.

It would be back down to 140k if the "dirty 30"  top offenders in the 
CIDR Report would aggregate their routes.

Someone's going to have to draw a line in the sand at some point, and 
someone thinking locally and acting globally is going to be punished 
by the globe.  Don't ask me how this could work, because I don't have 
an answer.

Maybe "I'm the Dirty 30" T-Shirts could be made up and handed out. 
(I wonder if a couple of major routing vendors, who profit from 
routing table growth, would sponsor the creation of the t-shirts... 
snicker...)

-Jerry


Re: Smallest Transit MTU

2004-12-29 Thread Jerry Pasker

Regardless of this, it's probably a good idea to obsolete the 
original meaning of the DF bit.
So my next question is: Is it safe for the entire internet to ignore 
the DF bit entirely?  Sounds like it would save plenty of router 
manufacturers plenty of time/effort.

Apparently Cisco's official recommendation for solving the problem 
for packets destined to any network with an MTU less than 1500 bytes 
due to ICMP "Fragmentation Needed But DF Set" packets not making it 
back to the original pMTUd server (for whatever reason..) is to 
clear the DF bits with policy routing, and fragment anyway.
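(A rough sketch of that approach, not the official recipe; interface 
name and ACL number are placeholders, and all it does is strip DF 
off matching packets so the router can fragment them:

   access-list 101 permit tcp any any
   !
   route-map CLEAR-DF permit 10
    match ip address 101
    set ip df 0
   !
   interface GigabitEthernet0/0
    ip policy route-map CLEAR-DF

Apply it, and the router happily fragments what the end hosts asked 
it not to.)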

"Let's break the internet some more to fix something that someone 
else* broke!  Fun!"

*as in: an idiot ICMP blocking firewall admin who thinks that "ICMP" 
means ping.

Maybe they think they can use pMTUd to make up the speed lost from 
the possible increase in congestion/dropped packets caused by the 
lack of ICMP source-quench messages reaching their server.

I hate to think how many people-hours were wasted on the 
implementation of anything to do with the DF flag, routers kicking 
back ICMPs when encountering smaller networks,  everything pMTUd, the 
router code to flip DF bits, and the implementation of all of it to 
arrive back at the way life was pre-pMTUd+bad firewall.

/rant.
-Jerry


Smallest Transit MTU

2004-12-29 Thread Jerry Pasker
Operational comment, question:
I've learned that having an MTU smaller than 1500 bytes is a bad 
thing. When encountering networks with MTUs smaller than 1500 bytes, 
path MTU discovery breaks when sites like a computer science college 
my friend is going to .edu, a certain 'us' online bank.com, and the 
world's most popular auction site.com block all ICMP, including the 
ICMP "fragmentation needed but DF bit set" packets.   Despite what 
the RFCs say, the transit internet, in my opinion, generally needs to 
accept and transit packets up to 1500 bytes without packet 
fragmentation.

Is this consistent with what everyone else's operational experiences?
Is there an RFC that clearly states: "The internet needs to transit 
1500 byte packets without fragmentation."??


Re: Dampening considered harmful?

2004-12-28 Thread Jerry Pasker

Back in mid-December someone typed:
 > One reason to be careful with dampening is that flaps can be
 > multiplied. (Connect to routeviews and see the different flap counts
 > under different peers for the same flap at your end to observe this.)
How about in this scenario:
  asA gets transit from asT
  asA gets backup transit (ASpath padding) from asB
asB gets transit from asT
asB gets transit from asJ
  asJ gets transit from asT
asT peers with whole world(*)
Now, as asA flaps to asT, we see "bad things" happen to their routes,
namely an unreasonable amount of flap at even nearest neighbors to asT.
Can this flap magnification be explained by the hierarchy I describe
above?  That is, asT treats all of these ASpaths as customer routes:
  asA
  asB_asA
  asJ_asB_asA
and so we might expect to see multiple flaps as different "best"
routes come into view inside the geographically diverse asT...  right?
Thanks,
-mark
(*)you know what I mean.
I post this response to the list also seeking clarification/further 
education.
The PATHS that flap at each router get damped, right?  So if a path 
to an AS doesn't flap (best path or not), then it doesn't get damped. 
So if a best path gets damped, suddenly it's no longer "best" and 
another non flapping path to that AS becomes the new best.

I had to draw this hypothetical network out on a white board, and 
then it became a little easier to "see".

If the path from A to T flaps into oblivion, the path from A to B 
starts carrying traffic for A<-->B, B<-->J, and J<-->A (via B).  The 
A to T link can flap all day long, and B and J will still have 
undamped connectivity to A.

T will send flaps to J and B, and J and B should damp the paths that 
are flapping while continuing to send traffic to A via the links that 
aren't flapping.  The PATHS that flap at each router get damped, 
right?  If that's the case, then eventually everyone will damp all 
the "A" flap coming from T and start to prefer the paths to A via J 
and B (because those aren't flapping, and are therefore better, more 
stable, and more preferred paths).  If T is the only transit provider 
that's peered with everyone*, then A might have a problem.  Perhaps A 
should choose their transit providers a little more carefully.  If A, 
B, and J are all tier 2 ISPs peered with each other that ultimately 
get transit from T, then none of them is really, truly multihomed. 
If J were also 'nearly peered with everyone', then eventually all 
traffic to A would start to arrive via J through the connection to B.

If all the above is correct, then dampening just made the network 
much more stable in this case, rather than traffic constantly routing 
onto and then re-routing around the flapping link from A to T.

Stability good. Packet loss and latency, bad.  Stability = damp.
-Jerry


Re: Dampening considered harmful? (Was: Re: verizon.net and other email grief)

2004-12-21 Thread Jerry Pasker

Well, a particular router doesn't get to set its dampening according 
to its 'view' today, and that view is going to vary depending on 
prefix.

I would like to argue that how we define flapping today is simply a 
broken concept.  We count up/down/path change transitions, but such 
transitions can exist due to connectivity or implementation 
differences and may have nothing to do with not complying with 
etiquette.
If people were damped for a single up/down, then yes, that's a 
problem.  Dampening someone because they have a single path change is 
not the idea.  That would sort of nullify the spirit of being 
multihomed with BGP, or even using BGP in any production or transit 
network for that matter.

If there's a connection problem or an implementation difference that 
produces a lot of up/down transitions, then dampening could occur 
close to the "problem", but it will be contained close by and won't 
spread to the rest of the internet.

I submit that a better way of measuring flap is to look at the 
period over which a particular prefix is flapping.  Anti-social 
behavior is many changes occurring over long periods of time. 
However, many changes over a short period may well not be a problem.

Regards,
Tony
So configure your route dampening that way.  No one's twisting your 
arm to use the defaults. Want many flaps in the short term to not be 
damped?  Simply increase your suppress limit.  Way way up.   Increase 
or decrease your half-life as you see fit.  Just don't expect the 
rest of the world to do the same.  But do expect to see others damp 
the flapping that you leak to their network.
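
For the archives, the knob in question on IOS looks roughly like 
this.  The numbers shown are the usual defaults, and the AS number is 
just an example:

router bgp 64512
 ! half-life (minutes), reuse, suppress, max-suppress-time (minutes)
 bgp dampening 15 750 2000 60

Each flap adds a penalty of roughly 1000, so with the defaults two 
quick flaps hit the suppress limit of 2000, and with a 15 minute 
half-life it takes about 21 minutes for that penalty to decay back 
below the reuse threshold of 750.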

 If no one on the net ran dampening, routers wouldn't 
get anything done except processing constant churn.  
I'm grateful that routers out there in the world are doing route-damp 
so my router's CPU doesn't have to deal with needless route-churn, 
even if it does somewhat impact my connectivity to (poorly connected) 
end sites.  The stability is worth it.

Another item to consider is this sentence from RFC 2439: "By damping 
their own routing information, providers can reduce their own need to 
make requests of other providers to clear damping state after 
correcting a problem."

Using that logic, I'm also grateful when my transit providers damp 
the advertisements they send to their peers for my flapping transit 
links.  That way, when those links flap (hey, it happens to everyone 
eventually), it doesn't get me damped by the rest of the internet, 
and my connections to different transit providers can do their job of 
carrying the traffic.

Dampening, in its current state, really is a good idea.
-Jerry


Re: Dampening considered harmful? (Was: Re: verizon.net and other email grief)

2004-12-20 Thread Jerry Pasker

An even more important consideration is whether our current paradigm 
of flap dampening actually is the behavior that we want to penalize.

If a single link bounces just once, then thanks to our mesh, 
confederations, differing MRAIs, etc., we can see many, many changes 
to the AS path, resulting in dampening.  Do we really want to 
inflict pain over such an incident?

Tony
Of course not.  Dampening should be set on a router according to that 
router's view.  The more chance a router has of seeing a single event 
multiplied, the more lax dampening should be on that router.

I think Rob Thomas's saying of "know your network" really applies here.
In my book, the threat of dampening to anyone not playing nice is the 
true value of route dampening.  Automatic enforcement of etiquette.
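
If anyone wants the actual knob: IOS will also hang dampening off a 
route-map, so a router can apply different parameters to different 
sets of prefixes according to its own view.  A rough sketch only; the 
names, AS number, and values below are made up for illustration:

router bgp 64512
 bgp dampening route-map selective-damp
!
! long prefixes (/24 and longer) get the standard parameters
ip prefix-list SMALL-NETS seq 5 permit 0.0.0.0/0 ge 24
!
route-map selective-damp permit 10
 match ip address prefix-list SMALL-NETS
 set dampening 15 750 2000 60
!
! everything else decays faster and is harder to suppress
route-map selective-damp permit 20
 set dampening 10 750 4000 30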

-Jerry


Re: New Computer? Six Steps to Safer Surfing

2004-12-19 Thread Jerry Pasker

Sean Donelan wrote:
...the infection rate of Home/SOHO computers with AV/firewalls is 
higher than "naked" computers.

please, where does this information come from?  are you sitting on 
proof that the Home AV/Security industry is *complete* FUD?  :)

What's more interesting is the highest infection rate of all is for homes
with laptop/mobile computers.
is that so?  please, where does *this* information come from?  it 
might seem intuitively correct, but i'd like to see some numbers and 
other data to back these claims up.

thanks
-d
How 'bout this data:  Stupid people get viruses more than smart 
people.  There will always be viruses, there will always be stupid 
people.  What does this have to do with network operations?

Are we, as network operators, supposed to protect people (stupid or 
not) from themselves?

-Jerry


Re: Dampening considered harmful? (Was: Re: verizon.net and other email grief)

2004-12-16 Thread Jerry Pasker

i've been wondering, since most people aren't using a
25xx class router for bgp anymore, and the forwarding planes
are able to cope more when 'bad things(tm)' happen, what the value
of dampening is these days.
ie: does dampening cause more problems than it tries to solve/avoid
these days.
- jared
I don't know which takes more router resources: dampening enabled and 
doing the dampening calculations, or no dampening and constantly 
churning the BGP table.  I would assume dampening generally saves 
router resources, or operators wouldn't choose to enable it.

I don't know about the rest of the network operators, but the threat 
of ever being dampened makes me much more careful with my network in 
general.  My general practice is that if a transit line has problems, 
its BGP session is put in an admin down state ASAP until the problems 
can be isolated and corrected.
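
Concretely, "admin down" here just means something like the 
following; the AS number and neighbor address are examples only:

router bgp 64512
 ! administratively shut the session until the line is trustworthy again
 neighbor 192.0.2.1 shutdown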

It is my humble little opinion that the threat of being damped for 
some period of time is a great hammer to hold over network operators' 
heads.  I think it's a 'good thing(tm)' for automatic penalties to go 
into effect for those who are irresponsible with the global routing 
table.  Even if that means that, through no fault of my own, one of 
my transit lines flaps and gets me somewhat damped for an hour.

-Jerry


Re: Bogon filtering (don't ban me either)

2004-12-03 Thread Jerry Pasker

On Fri, 3 Dec 2004, Hank Nussbacher wrote:
 "Blocks all IANA reserved IP address blocks"
 The actual doc:

Surprise, surprise.  The examples in that document are already out of date
and filtering as bogons perfectly good IP space ARIN is handing out to
members.
The idea of a "default static bogon filter" being made part of IOS is a
horrible idea.  It's bad enough getting the places that went to the
trouble of setting up bogon filters to update them.  If everyone had them
by default, that would likely break the Internet for significant numbers
of people.  How many customer routers do you have on your networks that
were installed years ago and never upgraded?  How out of date would their
default bogon filters be now?
--
 Jon Lewis   |  I route
 Senior Network Engineer |  therefore you are
 Atlantic Net|
_ http://www.lewis.org/~jlewis/pgp for PGP public key_
Isn't the path to hell paved with good intentions?
It's not the first time Cisco routers have shipped with out-of-date 
software in them, or with known bugs/issues that pop up later to 
cause problems.  ;-)  Seriously, I'm not knocking Cisco, I'm just 
telling it like it is.  If someone knows what they're doing, they 
won't get burned by it.  There are a lot of other IOS 
commands/options that can be turned on to screw networks up much 
worse.  I don't fault Cisco for giving people the option.  It should 
display a warning when enabled, though, saying that the list is out 
of date and will break things.

Just thinking out loud here:
If Cisco wanted to do something related to bogon filtering, they 
should make routes that expire/self-delete after a certain date. 
Routes with a time to live.  (NTP optional, but a set clock required 
to use the TTL routes.)

Also, bogon lists, especially the ones that have been prepared by 
hand so they can be cut and pasted into a router, should start with a 
remark line that says something along the lines of **WARNING: DELETE 
AFTER FEB 2005!** (or the current date + 4 months).  I realize a lot 
of things can't be remarked, but any attempt to remark it seems like 
a good idea.  Some people don't read all the stuff on the web page 
before they scroll down and copy the bogon list.  Some people don't 
heed the warnings.  Some people leave their job after they put in 
bogons.  Some people are router consultants, and never see that 
router again.  Some people are too busy putting out fires and forget 
that 8 months have passed since they checked their bogons.

And some people are just stupid.  ;-)
A remark could go a long way toward solving/preventing the problem 
when the next person takes a look at the router's configuration.
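
Something along these lines is what I have in mind.  The entries 
shown are just the never-changing special-use blocks; the real point 
is the warning baked into the configuration itself, because any 
hand-made list of unallocated space will be stale soon enough:

! WARNING: hand-maintained bogon list, DELETE OR RE-VERIFY AFTER APR 2005
ip prefix-list bogons description WARNING stale after Apr 2005, reverify against IANA
ip prefix-list bogons seq 5 deny 0.0.0.0/8 le 32
ip prefix-list bogons seq 10 deny 10.0.0.0/8 le 32
ip prefix-list bogons seq 15 deny 127.0.0.0/8 le 32
ip prefix-list bogons seq 20 deny 169.254.0.0/16 le 32
ip prefix-list bogons seq 25 deny 172.16.0.0/12 le 32
ip prefix-list bogons seq 30 deny 192.168.0.0/16 le 32
! permit everything else
ip prefix-list bogons seq 35 permit 0.0.0.0/0 le 32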

The perfect solution to the bogon issue is constant diligence. 
Getting a route feed is a good second choice.  The third choice is to 
not use bogon filters at all.

In a perfect world, those in charge of allowing routes into the 
global internet wouldn't allow bogons, because they would only allow 
announcements that they've checked out ahead of time.  And just like 
packet ingress filtering, it's a solution that probably won't happen 
any time soon.

-Jerry


RE: "Make love, not spam"....

2004-11-29 Thread Jerry Pasker

 > -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Sent: Monday, November 29, 2004 11:54 AM
 To: [EMAIL PROTECTED]
 Subject: RE: "Make love, not spam"

[ SNIP ]
 The big difference between Lycos Europe, and a script kiddie with
 zombies is that Lycos is mature enough to use restraint and not knock
 down websites with brute force.  They're attempting to use the
 politically correct "grown up" way to attack someone:  economics.
I didn't know there was a politically correct way to create a
BotMonster and rule the Internet by eminent domain.
-M<

Yeah, that's exactly what they're doing!  It's a plot to TAKE OVER 
THE WORLD.  You figured it out!

It's about giving the spammers what they want:  More traffic to their 
websites.  How can it be wrong when they send out 1 million emails 
that all say "click on this link" and 1 million computers actually 
click the link?  Who's in the wrong there?

Besides: "rule the Internet by eminent domain."  Isn't that Verisign's job?
-Jerry


RE: "Make love, not spam"....

2004-11-29 Thread Jerry Pasker

It's a DDOS. The risk of collateral damage is  high. I
won't discuss the RBL aspect of it because it can't be
legitimized past the first sentence.
-M<


From what limited information is available in the articles, it 
doesn't sound that way.  It's not really a DDoS attack, but more of a 
"distributed web surfing bot."  The point isn't to generate a ton of 
false requests to overload the web servers; the point is to send a 
controlled number of requests that cause the target websites to 
generate a lot of HTTP traffic.  One that's not meant to knock the 
sites offline, just to consume their bandwidth through real HTTP use. 
*IF* their screen saver is written correctly, the sites should never 
go down, but at worst just slow down.  That's a big *IF*.

I understand this as more of a Distributed Consumption of Service 
attack.  (Is the acronym DCoS used yet?)  Real requests, downloading 
real data, to real computers.  A lot of them.  The same effect could 
be had by getting Lycos mail users to click a link to each spammer's 
web site themselves, except that would be more prone to cause 
operational problems by overloading the target sites.

Also, if the "target" web servers are set up right, they should 
protect themselves in all the normal ways an HTTP server under load 
does.  If you still think it's a DDoS, then they're only as guilty as 
Slashdot.

The big difference between Lycos Europe, and a script kiddie with 
zombies is that Lycos is mature enough to use restraint and not knock 
down websites with brute force.  They're attempting to use the 
politically correct "grown up" way to attack someone:  economics.

How is giving the spammers what they want (real web site traffic) an 
attack?  That doesn't even qualify!

Would a huge advertising effort to get users to visit every spammer 
web site they're sent, and click "reload" a few times, also qualify 
as an attack?

Remember:  I'm assuming a properly written client.
-Jerry


Re: who gets a /32 [Re: IPV6 renumbering painless?]

2004-11-20 Thread Jerry Pasker

if the ipv6 routing table ever gets as large as the ipv4 routing table is
today (late 2004 if you're going to quote me later), we'll be in deep doo.
--
Paul Vixie
"Nut-uh!"
*WHEN* the ipv6 routing table gets as large as the ipv4 routing table 
is today (late 2004, when you quote me later) it won't be a problem.

As a matter of fact, I would bet that Cisco, Juniper, and every other 
edge/core router manufacturer are banking on this happening.

Today's routing table can be carried on older edge routers very 
effectively (there are many 7500 and 7200 series routers out there), 
and I predict that this will continue to be the case for quite some 
time (at least a few more years).  This is not conducive to the 
business model of Cisco Systems.  *WHEN* the IPv6 routing table is 
the same size the IPv4 table is today, I seriously doubt there will 
be any problem finding a CPU fast enough, RAM with a high enough data 
rate, or CPU-to-memory bandwidth wide enough to handle it.

And when that time comes, I promise that any Cisco salesperson will 
have more than a handful of routers to sell you that'll handle the 
load just fine.

I'm Jerry Pasker, and I approved this message.


Re: Ship seized for cutting Sri Lanka's internet link

2004-08-24 Thread Jerry Pasker


Do you have any idea of the *cost* of such a 'second cable' ?
Or how _long_ it takes to install?
Or how many 'hungry for business' undersea fiber installers would 
line up to bid such a project?  ;-)

(/me assumes there are some undersea fiber installers left.)
-Jerry
--


Re: WashingtonPost computer security stories

2004-08-15 Thread Jerry Pasker

Bad hardware and application software cause a lot more problems than
the operating system itself.
--
Mikael Abrahamssonemail: [EMAIL PROTECTED]

Bad users cause more problems than everything else combined.  It 
doesn't matter if you're running Windows, BSD, Linux, OS X, or 
whatever: when a dumb user does a dumb thing, dumb things happen.

It doesn't really matter whether it's an OS that gets asked to run 
malware and is then blamed for corrupting itself, or an SUV whose 
airbags and crumple zones fail to keep alive a driver who was spacing 
off and talking on a cell phone when he crossed in front of a semi. 
The end result is the same: technology, and the intelligent 
individuals who create it, can only do so much to prevent a stupid 
individual from causing damage to themselves and others.

If anyone wants to argue against this, I beg of them to read
http://www.mentalsoup.com/mentalsoup/basic.htm
first.

-Jerry
--