What could have been done differently?

2003-01-28 Thread Sean Donelan


On Tue, 28 Jan 2003, The New York Times wrote:
 A spokesman for Microsoft, Rick Miller, confirmed that a
 number of the company's machines had gone unpatched, and
 that Microsoft Network services, like many others on the
 Internet, experienced a significant slowdown. "We, like the
 rest of the industry, struggle to get 100 percent
 compliance with our patch management," he said.

Many different companies were hit hard by the Slammer worm, some with
better-than-average reputations for security awareness.  They bought
the finest firewalls, they had two-factor biometric locks on their data
centers, they installed anti-virus software, they paid for SAS70
audits by the premier auditors, they hired the best managed security
consulting firms.  Yet, they still were hit.

It's not as simple as "don't use Microsoft", because worms have hit other
popular platforms too.

Are there practical answers that actually work in the real world with
real users and real business needs?





Re: What could have been done differently?

2003-01-28 Thread Alex Bligh

Sean,

--On 28 January 2003 03:10 -0500 Sean Donelan [EMAIL PROTECTED] wrote:


Are there practical answers that actually work in the real world with
real users and real business needs?


1. Employ clueful staff
2. Make their operating environment (procedures etc.) best able
  to exploit their clue

In the general case this is a people issue. Sure there are piles of
whizzbang technical solutions that address individual problems (some of
which your clueful staff might even think of themselves), but in the final
analysis, having people with clue architect, develop and operate your
systems is far more important than anything CapEx will buy you alone.

Note it is not difficult to envisage how this attack could have been
far, far worse with a few code changes...

Alex Bligh




Re: Level3 routing issues?

2003-01-28 Thread David Howe

at Monday, January 27, 2003 7:50 PM, [EMAIL PROTECTED] [EMAIL PROTECTED]
was seen to say:
 This is not correct. VPN simply extends security policy to a different
 location. A VPN user must make sure that local security policy
 prevents other traffic from entering VPN connection.
This is nice in theory, but in practice it is simply not true. Even
assuming that the most restrictive settings are used (the user may not
install software by admin setting, has no local administration on his
machine, and IP traffic is allowed only via the VPN client), it is
*still* possible that the machine could be compromised by (say) an
email virus that then bypasses security by any one of a dozen routes.




Blocked by msn.com MX, contact for MSN.COM postmaster ?

2003-01-28 Thread Miquel van Smoorenburg

I found out that our outgoing SMTP servers have been blocked by
the msn.com MXes. In a nasty way, too -- no SMTP error, the TCP
connection is simply closed by them immediately after establishing it.
We're not listed on any RBL/DNSBL and have an active abuse desk.

I mailed [EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL PROTECTED],
but didn't get a reply from any of them. Does anyone here know
who to talk to?

Thanks,

Mike.
-- 
"Anyone who is capable of getting themselves made President should
on no account be allowed to do the job" -- Douglas Adams.



Re: What could have been done differently?

2003-01-28 Thread E.B. Dreger

ED Date: Tue, 28 Jan 2003 12:42:41 + (GMT)
ED From: E.B. Dreger


ED Sure, worm authors are to blame for their creations.
ED Software developers are to blame for bugs.  Admins are to

s/Admins/Admins and their management/


Eddy
--
Brotsman  Dreger, Inc. - EverQuick Internet Division
Bandwidth, consulting, e-commerce, hosting, and network building
Phone: +1 (785) 865-5885 Lawrence and [inter]national
Phone: +1 (316) 794-8922 Wichita

~
Date: Mon, 21 May 2001 11:23:58 + (GMT)
From: A Trap [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Please ignore this portion of my mail signature.

These last few lines are a trap for address-harvesting spambots.
Do NOT send mail to [EMAIL PROTECTED], or you are likely to
be blocked.




Re: Level3 routing issues?

2003-01-28 Thread cowie


  Wow, for a minute I thought I was looking at one of our old
  plots, except for the fact that the x-axis says January 2003
  and not September 2001 :) :)
 
 seeing that the etiology and effects of the two events were quite
 different, perhaps eyeglasses which make them look the same are
 not as useful as we might wish?
 
 randy

If you've been watching, you might agree that the interesting thing is 
not that it looked like that in September 2001,  but that we really
haven't seen a signal that looks like that SINCE September 2001.  

The large differences between the worms are exactly what should make us
doubly interested in fingering the common mechanism that connects very
high-speed, high-diversity worm scanning to increased BGP activity.

So far it's been visible as an apparently accidental byproduct of an attack
with other goals.  Are you willing to bet your bifocals that the same 
mechanism can't be weaponized and used against the routing infrastructure 
directly in the future?

--
James Cowie
Renesys Corporation
http://gradus.renesys.com





Re: Level3 routing issues?

2003-01-28 Thread Jack Bates

From:


 So far it's been visible as an apparently accidental byproduct of an
attack
 with other goals.  Are you willing to bet your bifocals that the same
 mechanism can't be weaponized and used against the routing infrastructure
 directly in the future?


Yet the question becomes the reasoning behind it. How much is a direct
result of the worm, and how much is a result of actions taken by the NEs?
The other question is BGP deployment within smaller networks. I've seen a
lot of different BGP configs handed down from reputable NEs to smaller
businesses and ISPs. Unfortunately, the configs are usually comparable to
what you'd use in a network that has peers beneath it versus what a network
that only has two uplinks requires (i.e., AS filtering not really required).

It is quite common that /24 networks listed on connected interfaces are not
null routed, which has its good points and bad. When you lose the interface,
the traffic will stop at the local ISP's BGP routers if using ARIN-assigned
addresses, or it will stop at the upstream provider's routers due to
aggregates if using their IPs. In general, unless cost is an issue, it's
usually good to let the packet come all the way to your network. It makes
external troubleshooting easier and keeps BGP stable so long as the peering
connection isn't lost. Of course, people need to learn to use metrics when
doing null routes. Some people forget they exist. :)
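
(To make that concrete, a minimal sketch, assuming Cisco IOS and the
documentation prefix 192.0.2.0/24 standing in for a customer block, and
reading "metrics" above as administrative distance: a floating static
to Null0 with a high distance keeps the prefix announced but only
starts discarding traffic once the connected route goes away:

   ip route 192.0.2.0 255.255.255.0 Null0 250

The 250 keeps this route from ever beating the real connected or
learned route while it is up.)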

BGP update storms are enough to drop some peering sessions due to
underpowered routers. Some large providers reject updates if the network
goes critical in order to keep traffic manageable while the problem is
determined and rectified. So while I do agree that the worms themselves hold
some sway over the BGP activity, the same lack of preparation that allowed
the worm to run so rampant can also be seen in the networks themselves.

I personally have dealt with enough DoS/DDoS attacks that I have an emergency
plan in place which allows as much control over the network as possible from
remote without depending on the network itself. I have an understanding of
how my network is affected by different loads and which direction cascade
failures will go. Luckily, I have a relatively small network, yet such an
understanding and research should exist for any network regardless of size.
The records of both worms should be indications of the weak points in
people's networks.

Jack Bates
BrightNet Oklahoma




Re: Blocked by msn.com MX, contact for MSN.COM postmaster ?

2003-01-28 Thread Karsten W. Rohrbach

Miquel van Smoorenburg([EMAIL PROTECTED])@2003.01.28 11:49:16 +:
 
 I found out that our outgoing SMTP servers have been blocked by
 the msn.com MXes. In a nasty way, too -- no SMTP error, the TCP
 connection is simply closed by them immediately after establishing it.
 We're not listed on any RBL/DNSBL and have an active abuse desk.

Miquel, does this problem still endure? I had such a thing quite a while
ago (mid-2002) with them, but apparently it was a temporary problem of
their inbound MX servers. I am also not listed in RBLs (due to a pretty
restrictive relaying policy) and the like, and I was also _not_ able to
reach someone at their end ([EMAIL PROTECTED]). After several hours of
closed sockets, everything just worked again.

Right now, our mail to msn.com goes via smtp-gw-4.msn.com(207.46.181.13)
which appears to work:
220 cpimssmtpa19.msn.com Microsoft ESMTP MAIL Service, Version: 5.0.2195.4905 ready at 
 Tue, 28 Jan 2003 06:09:46 -0800

Apparently their service runs on some successor of Win2000, so I
wouldn't be very surprised if it turned out to be a resource shortage on
their end (WRT things like The Worm Of The Week[tm] and the like).
A misconfigured proxy or load balancing device might be another option.

Also, their clock is off by approx. five minutes. Their system
apparently lacks NTP support, or the clocks in Redmond are 5 minutes
behind the rest of the world... :-)
Oh no - not-so-funny - they've got a different clock drift for every
machine (cpimssmtpa[01..40].msn.com) that happens to pop up when
connecting to their best-preference MX.
Looks like they DNS-loadbalance their loadbalancers for SMTP, too. Funny.

Regards,
/k[Ok-I-am-silent-now]arsten

-- 
 Motto of the Electrical Engineer:
 Working computer hardware is a lot like an erect penis: it
 stays up as long as you don't fuck with it.
WebMonster Community Project -- Reliable and quick since 1998 -- All on BSD
http://www.webmonster.de/ - ftp://ftp.webmonster.de/ - http://www.rohrbach.de/
GnuPG:   0xDEC948A6 D/E BF11 83E8 84A1 F996 68B4  A113 B393 6BF4 DEC9 48A6
REVOKED: 0x2964BF46 D/E 42F9 9FFF 50D4 2F38 DBEE  DF22 3340 4F4E 2964 BF46
REVOKED: 0x4C44DA59 RSA F9 A0 DF 91 74 07 6A 1C  5F 0B E0 6B 4D CD 8C 44
Please do not remove my address from To: and Cc: fields in mailing lists. 10x



Re: What could have been done differently?

2003-01-28 Thread Leo Bicknell
In a message written on Tue, Jan 28, 2003 at 03:10:18AM -0500, Sean Donelan wrote:
 They bought the finest firewalls,

A firewall is a tool, not a solution.  Firewall companies advertise
much like Home Depot (Lowe's, etc.): "everything you need to build
a house."

While anyone with 3 brain cells realizes that going into Home Depot
and buying truckloads of building materials does not mean you have
a house, it's not clear to me that many of the decision makers in
companies understand that buying a spiffy firewall does not mean
you're secure.

Even those that do understand, often only go to the next step.
They hire someone to configure the firewall.  That's similar to
hiring the carpenter with your load of tools and building materials.
You're one step closer to the right outcome, but you still have no
plans.  A carpenter without plans isn't going to build something
very useful.

Very few companies get to the final step, hiring an architect.
Actually, the few that get here usually don't do that; they buy
some off-the-shelf plans (see below, managed security) and hope
it's good enough.  If you want something that really fits, you have
to have the architect really understand your needs, and then design
something that fits.

 they had two-factor biometric locks on their data centers,

This is the part that never made sense to me.  Companies are
installing new physical security systems at an amazing pace.  I
know some colos that have had four new security systems in a year.
The thing that fascinates me is that unless someone is covering up
the numbers /people don't break into data centers/.

The common thief isn't too interested.  Too much security/video
already.  People notice when the stuff goes offline.  And most
importantly too hard to fence for the common man.  The thief really
interested in what's in the data center, the data, is going to take
the easiest vector, which until we fix other problems is going to
be the network.

I think far too many people spend money on new security systems
because they don't know what else to do, which may be a sign
that they aren't the people you want to trust with your network
data.

 they installed anti-virus software, 

Which is a completely different problem.  Putting the bio-hazard
in a secure setting where it can't infect anyone and developing an
antidote in case it does are two very different things.  One is
prevention, one is cure.

 they paid for SAS70 audits by the premier auditors,

Which means absolutely nothing.  Those audits are the equivalent
of walking into a doctor's office, making sure he has a working
stethoscope and box of tongue depressors, and maybe, just maybe,
making the doctor use both to verify that he knows how to use
them.

While interesting, that says very little about whether, when you
walk in with a disease, the doctor will cure you.  Just like it
doesn't mean that when the network virus/worm/trojan comes you
will be immune.

 they hired the best managed security consulting firms.

This goes back to my first comment.  Managed security consulting
firms do good work, but what they can't do is specialized work.
To extend the house analogy they are like the spec architects who
make one ok plan and then sell it thousands of times to the people
who don't want to spend money on a custom architect.

It's better than nothing, and in fact for a number of firms it's
probably a really good fit.  What the larger and more complex firms
seem to fail to realize is that as your needs become more complex
you need to step up to the fully customized approach, which no matter
how hard these guys try to sell it to you they are unlikely to be
able to provide.  At some level you need someone on staff who
understands security, but, and here's the hard part, understands
all of your applications as well.

How many people have seen the firewall guy say something like "well,
I opened up port 1234 for xyzsoft for the finance department.  I
have no idea what that program does or how it works, but their support
people told me I needed that port open."  Yeah.  That's security.
Your firewall admin doesn't need to know how to use the finance
software, but he'd better have an understanding of what talks to
what, what platforms it runs on, what is normal traffic and what
is abnormal traffic, and so on.

 Are there practical answers that actually work in the real world with
 real users and real business needs?

I think there are two fundamental problems:

* The people securing networks are very often underqualified
  for the task at hand.  If there is one place you need a generalist
  type network/host understands-it-all type person it's in security
  -- but that's not where you find them.  Far too often network
  security people are cross overs from the physical security world,
  and while they understand security concepts I find much of the
  time they are lost at how to apply them to the network.

* Companies need to hold each other responsible for bad software.
  Ford is being sued right now because Crown Vic gas tanks blow
  up.  Why isn't Microsoft being sued over buffer overflows?  We've
  known about the buffer overflow problem now for what, 5 years?
  The fact that new, recent software is coming out with buffer
  overflows is bad enough; the fact that people are still buying
  it, and not making the companies own up to their mistakes, is
  amazing.  I have to think there's billions of dollars out there
  for class action lawyers.  Right now software companies, and in
  particular Microsoft, can make dangerously unsafe products and
  people buy them like crazy, and then don't even complain that
  much when they break.

RE: What could have been done differently?

2003-01-28 Thread Eric Germann

Not to sound too pro-MS, but if they are going to sue, they should be able to
sue ALL software makers.  And what does that do to open source?  Apache,
MySQL, OpenSSH, etc. have all had their problems.  Should we sue the nail gun
vendor because some moron shoots himself in the head with it?  No.  It was
never designed for flicking flies off his forehead.  And they said don't
use it for anything other than nailing stuff together.  Likewise, MS told
people six months ago to fix the hole.  "Lack of planning on your part does
not constitute an emergency on my part," as a wise man once told me.
At some point, people have to take SOME responsibility for their
organization's deployment of IT assets and systems.  Microsoft is the
convenient target right now because they HAVE assets to take.  Who's going
to pony up when Apache gets sued and loses?  How do you sue Apache, or how
do you sue Perl, because, after all, it has bugs?  Just because you give it
away shouldn't insulate you from liability.

Eric




 * Companies need to hold each other responsible for bad software.
   Ford is being sued right now because Crown Vic gas tanks blow
   up.  Why isn't Microsoft being sued over buffer overflows?  We've
   known about the buffer overflow problem now for what, 5 years?
   The fact that new, recent software is coming out with buffer
   overflows is bad enough; the fact that people are still buying
   it, and not making the companies own up to their mistakes, is
   amazing.  I have to think there's billions of dollars out there
   for class action lawyers.  Right now software companies, and in
   particular Microsoft, can make dangerously unsafe products and
   people buy them like crazy, and then don't even complain that
   much when they break.

 --
Leo Bicknell - [EMAIL PROTECTED] - CCIE 3440
 PGP keys at http://www.ufp.org/~bicknell/
 Read TMBG List - [EMAIL PROTECTED], www.tmbg.org






Re: Level3 routing issues?

2003-01-28 Thread cowie

  So far it's been visible as an apparently accidental byproduct of an
 attack
  with other goals.  Are you willing to bet your bifocals that the same
  mechanism can't be weaponized and used against the routing infrastructure
  directly in the future?
 
 
 Yet the question becomes the reasoning behind it. How much is a direct
 result of the worm and how much is a result of actions based on the NE's?

Good question. Null routing of traffic destined to a network with a BGP
interface on it will cause the session to drop. That is a BGP effect due
to engineers' actions, indirectly triggered by the worm.  

On the other hand, we also know (from private communications and from
other mailing lists.. ahem) that high rate and high src/dst diversity
of scans causes some network devices to fail (devices that cache flows, or
devices that suffer from cpu overload under such conditions). 

Some BGP-speaking routers (not all, by any means, but some subpopulation)
found themselves pegged at 100% CPU on Saturday.  Just one example: 

   http://noc.ilan.net.il/stats/ILAN-CPU/new-gp-cpu.html

Whether you believe anthropogenic explanations for the instability 
depends on how fast you believe NEs can look, think, and type, compared
to the speed with which the BGP announcement and withdrawal rates are 
observed to take off.  For my part, I'd bet that the long slow exponential 
decay (with superimposed spiky noise) is people at work.  But the initial 
blast is not.

--
James Cowie
Renesys Corporation
http://gradus.renesys.com



Re: What could have been done differently?

2003-01-28 Thread Jack Bates

From: Eric Germann


 Not to sound to pro-MS, but if they are going to sue, they should be able
to
 sue ALL software makers.  And what does that do to open source?  Apache,
 MySQL, OpenSSH, etc have all had their problems.  Should we sue the nail
gun
 vendor because some moron shoots himself in the head with it?

With all the resources at their disposal, is MS doing enough to inform its
customers of new fixes? Are the fixes and latest security patches in an easy
to find location that any idiot admin can spot? Have they done due diligence
in ensuring that proper notification is done? I ask because it appears they
didn't tell part of their own company that a patch needed to be applied. If
I want the latest info on Apache, I hit the main website and the first thing
I see is a list of security issues and resolutions. Navigating MS's website
isn't quite so simple. Liability isn't necessarily in the bug but in the
education and notification.

Jack Bates
BrightNet Oklahoma




Re: What could have been done differently?

2003-01-28 Thread Ted Fischer

At 11:13 AM 1/28/03 -0200, Rubens Kuhl Jr. et al postulated:


| Are there practical answers that actually work in the real world with
| real users and real business needs?

Yes, the simple ones that have been known for decades:
- Minimum-privilege networks (access is blocked by default, permitted for
known and required traffic)
- Hardened systems (only needed components are left on the servers)
- Properly coded applications
- Trained personnel


   I would just add, as has been mentioned by others (but bears repeating):

 - A commitment by management


There are no shortcuts.


   Agreed

Ted Fischer



Rubens Kuhl Jr.






Re: Level3 routing issues?

2003-01-28 Thread Jack Bates

From: [EMAIL PROTECTED]

snip
 On the other hand, we also know (from private communications and from
 other mailing lists.. ahem) that high rate and high src/dst diversity
 of scans causes some network devices to fail (devices that cache flows, or
 devices that suffer from cpu overload under such conditions).

 Some BGP-speaking routers (not all, by any means, but some subpopulation)
 found themselves pegged at 100% CPU on Saturday.  Just one example:

http://noc.ilan.net.il/stats/ILAN-CPU/new-gp-cpu.html

Was it not known that under certain conditions the router would flatline?
What precautionary measures were put into place for such an event to limit
the damage?

 Whether you believe anthropogenic explanations for the instability
 depends on how fast you believe NEs can look, think, and type, compared
 to the speed with which the BGP announcement and withdrawal rates are
 observed to take off.  For my part, I'd bet that the long slow exponential
 decay (with superimposed spiky noise) is people at work.  But the initial
 blast is not.

When the crisis is on you, it's too late. You are either prepared and know
exactly what to do at that critical moment or you don't. You either had a
five-minute response time to the crisis or you didn't. We also know (from
private communications and from other mailing lists.. yes, I'm a thief :)
that many NEs were caught with their pants down, a mistake they aren't apt
to make again. It comes down to one's outlook. Do you just configure and
maintain, or do you strive to push the envelope? Do you truly know your
network? Remember, it's a living, breathing thing. The complexity of
variables makes complete predictability impossible, and so we must learn
to understand it and how it reacts.

Then again, perhaps I'm a lunatic. :)

Jack Bates
BrightNet Oklahoma




RE: What could have been done differently?

2003-01-28 Thread Drew Weaver

Would it be that hard to have Windows Update check the version of SQL
Server? It's sad, but I know a lot of MS admins only use Windows Update to
check for updates because a while ago Microsoft pushed it as the premier
method for updating your systems.

I'm just saying that if they included all fixes in one spot instead of
halfway automating it and halfway making it cryptically difficult, it would
benefit everyone.

-Drew



-Original Message-
From: Jack Bates [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, January 28, 2003 10:36 AM
To: [EMAIL PROTECTED]; Leo Bicknell; [EMAIL PROTECTED]
Cc: Eric Germann
Subject: Re: What could have been done differently?



From: Eric Germann


 Not to sound to pro-MS, but if they are going to sue, they should be 
 able
to
 sue ALL software makers.  And what does that do to open source?  
 Apache, MySQL, OpenSSH, etc have all had their problems.  Should we 
 sue the nail
gun
 vendor because some moron shoots himself in the head with it?

With all the resources at their disposal, is MS doing enough to inform its
customers of new fixes? Are the fixes and latest security patches in an easy
to find location that any idiot admin can spot? Have they done due diligence
in ensuring that proper notification is done? I ask because it appears they
didn't tell part of their own company that a patch needed to be applied. If
I want the latest info on Apache, I hit the main website and the first thing
I see is a list of security issues and resolutions. Navigating MS's website
isn't quite so simple. Liability isn't necessarily in the bug but in the
education and notification.

Jack Bates
BrightNet Oklahoma



Re: Level3 routing issues?

2003-01-28 Thread Hank Nussbacher

At 09:47 AM 28-01-03 -0600, Jack Bates wrote:


From: [EMAIL PROTECTED]

snip
 On the other hand, we also know (from private communications and from
 other mailing lists.. ahem) that high rate and high src/dst diversity
 of scans causes some network devices to fail (devices that cache flows, or
 devices that suffer from cpu overload under such conditions).

 Some BGP-speaking routers (not all, by any means, but some subpopulation)
 found themselves pegged at 100% CPU on Saturday.  Just one example:

http://noc.ilan.net.il/stats/ILAN-CPU/new-gp-cpu.html

Was it not known that under certain conditions the router would flatline?


Yes.  And so does Cisco.


What precautionary measures were put into place for such an event to limit
the damage?


A very reactive NOC.  -Hank



 Whether you believe anthropogenic explanations for the instability
 depends on how fast you believe NEs can look, think, and type, compared
 to the speed with which the BGP announcement and withdrawal rates are
 observed to take off.  For my part, I'd bet that the long slow exponential
 decay (with superimposed spiky noise) is people at work.  But the initial
 blast is not.

When the crisis is on you, it's too late. You are either prepared and know
exactly what to do at that critical moment or you don't. You either had a
five-minute response time to the crisis or you didn't. We also know (from
private communications and from other mailing lists.. yes, I'm a thief :)
that many NEs were caught with their pants down, a mistake they aren't apt
to make again. It comes down to one's outlook. Do you just configure and
maintain, or do you strive to push the envelope? Do you truly know your
network? Remember, it's a living, breathing thing. The complexity of
variables makes complete predictability impossible, and so we must learn
to understand it and how it reacts.

Then again, perhaps I'm a lunatic. :)

Jack Bates
BrightNet Oklahoma





Re: What could have been done differently?

2003-01-28 Thread Leo Bicknell
In a message written on Tue, Jan 28, 2003 at 10:23:09AM -0500, Eric Germann wrote:
 Not to sound too pro-MS, but if they are going to sue, they should be able to
 sue ALL software makers.  And what does that do to open source?  Apache,
 MySQL, OpenSSH, etc have all had their problems.  Should we sue the nail gun

IANAL, but I think this is all fairly well worked out, from a legal
sense.  Big companies are held to a higher standard.  Sadly it's
often because lawyers pursue the dollars, but it's also because
they have the resources to test, and they have a larger public
responsibility to do that work.

That is, I think there is a big difference between a company the
size of Microsoft saying "we've known about this problem for 6
months but didn't consider it serious so we didn't do anything
about it," and an open source developer saying "I've known about
it for 6 months, but it's a hard problem to solve, I work on this
in my spare time, and my users know that."

Just like I expect a Ford to pass federal government safety tests,
to have been put through a battery of product tests by Ford, etc.,
and be generally reliable and safe; but when I go to my local custom
shop and have them build me a low-volume or one-off street rod or
chopper, I cannot reasonably expect the same.

The responsibility is the sum total of the number of product units
out in the market, the risk to the end consumer, the company's
ability to foresee the risk, and the steps the company was able to
reasonably take to mitigate the risk.

So, if someone can make a class action lawsuit against OpenSSH, go
right ahead.  In all likelihood, though, there isn't enough money in
it to get the lawyers interested, and even if there was it would
be hard to prove that a couple of guys should have exhaustively
tested the product like a big company should have done.

It was once said, "there is risk in hiring someone to do risk analysis."

 use it for anything other than nailing stuff together.  Likewise, MS told
 people six months ago to fix the hole.  Lack of planning on your part does

It is for this very reason I suspect no one could collect on this
specific problem.  Microsoft, from all I can tell, acted responsibly
in this case.  Sean asked for general ways to solve this type of
problem.  I gave what I thought was the best solution in general.
It doesn't apply very directly to the specific events of the last
few days.

-- 
   Leo Bicknell - [EMAIL PROTECTED] - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
Read TMBG List - [EMAIL PROTECTED], www.tmbg.org





Re: Banc of America Article

2003-01-28 Thread Roger Marquis

[EMAIL PROTECTED] wrote:
  It could be that BoA's network wasn't flooded / servers infected, but that
  the ATM's do not dial BoA directly, and dial somewhere else (ie, maybe some
  kind of ATM Dial Provider, nationwide wholesale, etc), and then tunnel back
  to BoA to get the data.  Could be that the upstream of either the dial
  provider, or BoA was just flooded...

 Again, that design makes nearly no sense. The vast majority of the ATMs that
 banks own and operate directly are located in the LATAs with bank branches.
 Those branches do have good connectivity to the bank processing centers be
 that via dedicated links, VPN or carrier pigeons.

While the exact mechanism of BofA's exposure is important, it is
nowhere near as important as the fact that they were, and presumably
still are, exposed.  My money's on Frame Relay congestion.

Some department at BofA, short on engineers and long on budget-oriented
management, likely made a decision that saving a lot of money was
worth a bit of exposure.  I know that decision has been made at
other banks.

-- 
Roger Marquis
Roble Systems Consulting
http://www.roble.com/



Re: Level3 routing issues?

2003-01-28 Thread Jared Mauch

On Tue, Jan 28, 2003 at 03:34:15PM +, [EMAIL PROTECTED] wrote:
 Some BGP-speaking routers (not all, by any means, but some subpopulation)
 found themselves pegged at 100% CPU on Saturday.  Just one example: 
 
http://noc.ilan.net.il/stats/ILAN-CPU/new-gp-cpu.html

I wonder how much of this was because of packets
destined *TO* the router.  I don't know about you, but I'm not
about to go put access-lists on all 600+ interfaces in some of
my routers.  My push is for Cisco (and I'm sure others agree, as
well as the other vendors who don't have a similar feature today)
to port their ip receive acl to other important platforms.  The
GSR is not the only router that needs to be protected on the Internet,
and they seem to be missing that bit of direction.

http://www.cisco.com/en/US/products/sw/iosswrel/ps1829/products_feature_guide09186a00800a8531.html

Not putting this feature in the next releases of software
would be irresponsible on their part given the critical nature
of this attack, IMHO.
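
	For anyone who hasn't used the feature, a minimal sketch
(Cisco IOS 12.0S on a GSR, per the feature guide above; the peer and
loopback addresses are illustrative only):

   access-list 110 permit tcp host 192.0.2.1 host 192.0.2.2 eq bgp
   access-list 110 permit icmp any any
   access-list 110 deny   ip any any
   !
   ip receive access-list 110

	The ACL is applied once, globally, to traffic punted to the
route processor itself rather than per-interface -- which is exactly
why it scales where 600+ interface ACLs don't.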

- jared

-- 
Jared Mauch  | pgp key available via finger from [EMAIL PROTECTED]
clue++;  | http://puck.nether.net/~jared/  My statements are only mine.



Re: What could have been done differently?

2003-01-28 Thread Andy Putnins

On Tue, 28 Jan 2003 10:42:05 -  Alex Bligh wrote:
  
  Sean,
  
  --On 28 January 2003 03:10 -0500 Sean Donelan [EMAIL PROTECTED] wrote:
  
   Are there practical answers that actually work in the real world with
   real users and real business needs?
  
  1. Employ clueful staff
  2. Make their operating environment (procedures etc.) best able
 to exploit their clue
  
  In the general case this is a people issue. Sure there are piles of
  whizzbang technical solutions that address individual problems (some of
  which your clueful staff might even think of themselves), but in the final
  analysis, having people with clue architect, develop and operate your
  systems is far more important than anything CapEx will buy you alone.
  
  Note it is not difficult to envisage how this attack could have been
  far, far worse with a few code changes...
  
  Alex Bligh

How does one find a clueful person to hire? Can you recognize one by their
hat or badge of office? Is there a guild to which they all belong? If one 
wants to get a clue, how does one find a master to join as an apprentice?

I would argue that sooner or later network security must become an 
engineering discipline whose practitioners can design a security system 
that cost-effectively meets the unique needs of each client.

Engineering requires that well-accepted (best) practices be documented 
and adopted by all practitioners. Over time, there emerges a body of such 
best practices which provides a foundation upon which new technologies and 
practices are adopted as technical consensus emerges among the practitioners. 
Part of the training of an engineer involves learning the existing body of 
best practices. Engineering also is quantitative, which means that design
incorporates measurements and calculations so that the solution is good
enough to do the job required, but no more, albeit with commonly accepted
margins of safety.

Society requires that some kinds of engineers be licensed because they 
are responsible for the safety of others, such as engineers who design 
buildings, bridges, roads, nuclear power plants, sanitation, etc. However, 
some are not (yet?) required to be licensed, like engineers who design cars, 
trucks, buses, ships, airplanes, factory process control systems and the 
computer networks that monitor and control them.

This is therefore a request for all of those who possess this clue to 
write down their wisdom and share it with the rest of us, so we can 
address what clearly is a need for discipline in the design of networks 
and network security, since computer networks are an infrastructure upon 
which people are becoming dependent, even to the point of their personal 
safety.

- Andy




RE: What could have been done differently?

2003-01-28 Thread Ray Burkholder

The SANS Institute [[EMAIL PROTECTED]] www.sans.org is a well-respected
collection of individuals who have provided this 'pool' of knowledge and
regularly disseminate it to inquiring minds.

Ray Burkholder


 -Original Message-
 From: Andy Putnins [mailto:[EMAIL PROTECTED]] 
 Sent: January 28, 2003 12:43
 To: Alex Bligh
 Cc: Sean Donelan; [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Subject: Re: What could have been done differently? 
 

 This is therefore a request for all of those who possess this clue to
 write down their wisdom and share it with the rest of us, so we can
 address what clearly is a need for discipline in the design of networks
 and network security, since computer networks are an infrastructure upon
 which people are becoming dependent, even to the point of their personal
 safety.
 
   - Andy
 
 



VPN clients and security models

2003-01-28 Thread alex

  This is not correct. VPN simply extends security policy to a different
  location. A VPN user must make sure that local security policy
  prevents other traffic from entering VPN connection.

 This is nice in theory, but in practice it is simply not true. Even
 assuming that the most restrictive settings are used (the user may not
 install software by admin setting, has no local administration on his
 machine, and IP traffic is allowed only via the VPN client), it is
 *still* possible that the machine could be compromised by (say) an
 email virus that then bypasses security by any one of a dozen routes.

Welcome to the world of formal security models. If in theory a VPN is
nothing more than a tool for extending the security policy of a site to a
remote location, then it does not matter what kind of things you try to
achieve with it; it *won't* work for anything other than extending the
security model of a site to a remote location. Can one try to use it for
something else? Sure, one can. It may even work for a little bit, as long
as it does not contradict that security model.

Your VPN connection dropped you back into your site. If it is the site's
security model that all mail comes in and goes out via some mail server that
filters out email viruses, and via VPN you are virtually in the footprint of
that site, then why are you not using the site mail server, or why does the
VPN client let you not use it? If it does not enforce the site's security
policy, then it is a BAD VPN client.

Alex




Re: Level3 routing issues?

2003-01-28 Thread Haesu

 http://noc.ilan.net.il/stats/ILAN-CPU/new-gp-cpu.html  Was it not
 known that under certain conditions the router would flatline? What
 precautionary measures were put into place for such an event to limit
 the damage?

scheduler allocate
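
(A minimal sketch of that knob, assuming Cisco IOS; the numbers are
illustrative, not a recommendation:

   scheduler allocate 3000 1000

This caps interrupt-level packet switching at 3,000 microseconds before
the box must spend at least 1,000 microseconds on process-level work,
so a packet flood can't completely starve things like BGP keepalives.)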

-hc





Re: VPN clients and security models

2003-01-28 Thread Valdis . Kletnieks
On Tue, 28 Jan 2003 11:52:39 EST, [EMAIL PROTECTED]  said:

 Welcome to the world of formal security models. If in theory a VPN is
 nothing more than a tool for extending the security policy of a site to a
 remote location, then it does not matter what kind of things you try to
 achieve with it; it *won't* work for anything other than extending the
 security model of a site to a remote location. Can one try to use it for
 something else? Sure, one can. It may even work for a little bit, as long
 as it does not contradict that security model.

Right. In the *formal* sense, this is correct.

But that's not how things work out in the Real World.  As I pointed out
before, you have *USERS* involved, and they'll do stupid things like try
to connect their laptop to the internet.  And as I also pointed out,
if the head of a TLA screws up and Gets This Wrong, why should we expect
untrained, non-security-aware users to Get It Right?

The problem is exacerbated by the fact that these mobile laptops are usually
*NOT* configured like a kiosk, where the user is unable to make any changes.

 that site, then why are you not using the site mail server, or why does the
 VPN client let you not use it? If it does not enforce the site's security
 policy, then it is a BAD VPN client.

And when the VPN client isn't even running, what stops the user from changing
the mail software config to fetch his mail from some other server like AOL or
MSN or whatever?

Remember - users do NOT care about security.  Users care about finishing
whatever task THEY are busy with, which is almost never security.
-- 
Valdis Kletnieks
Computer Systems Senior Engineer
Virginia Tech






WANAL (Re: What could have been done differently?)

2003-01-28 Thread Paul Vixie

[EMAIL PROTECTED] (Eric Germann) writes:

 Not to sound too pro-MS, but if they are going to sue, they should be able
 to sue ALL software makers.  And what does that do to open source?
 Apache, MySQL, OpenSSH, etc have all had their problems.  ...

Don't forget BIND, we've had our problems as well.  Our license says:

/*
 * [Portions] Copyright (c) - by Internet Software Consortium.
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED AS IS AND INTERNET SOFTWARE CONSORTIUM DISCLAIMS
 * ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL INTERNET SOFTWARE
 * CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL
 * DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
 * PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
 * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS
 * SOFTWARE.
 */

I believe that Apache and the others you mention do the same.  Disclaiming
fitness for use, and requiring that the maker be held harmless, only works
when the software is fee-free.  Microsoft can get you to click "Accept" as
often as they want and keep records of the fact that you clicked it, but in
every state I know about, fitness for use is implied by the presence of a fee
and cannot be disclaimed even by explicit agreement from the end user.  B2B
considerations are different -- I'm talking about consumer rights, not overall
business liability.

In any case, all of these makers (including Microsoft) seem to make a very
good faith effort to get patches out when vulnerabilities are uncovered.  I
wish we could have put time bombs in older BINDs to force folks to upgrade,
but that brings more problems than it takes away, so a lot of folks run old
broken software even though our web page tells them not to.

Note: IANAL.
-- 
Paul Vixie



wrt BofA ATM: is it ATM 'automated' or ATM 'async' ?

2003-01-28 Thread Jeff . Hodges

good question. anyone know the answer?


JeffH

--- Forwarded Message

Date: Tue, 28 Jan 2003 02:29:17 -0500
Subject: [IP] is it ATM or ATM  Internet Attack's Disruptions
More Serious Than Many Thought Possible
From: Dave Farber [EMAIL PROTECTED]
To: ip [EMAIL PROTECTED]


- -- Forwarded Message
From: David Devereaux-Weber [EMAIL PROTECTED]
Date: Mon, 27 Jan 2003 23:52:19 -0600
To: [EMAIL PROTECTED]
Subject: Re: [IP] Internet Attack's Disruptions More Serious Than Many
Thought Possible

One interesting aspect of the reporting for this event is related to the
acronym ATM.  The University of Wisconsin-Madison uses Asynchronous
Transfer Mode (ATM) for backbone transport.  We further use LAN Emulation
(LANE) on the Asynchronous Transfer Mode backbone (LANE maps IP addresses
to ATM Virtual Circuits and back to IP at the far end).  The LANE BUS
(Broadcast and Unknown Server) on the network was swamped due to the high
volume of SQLSlammer hits on broadcast and unknown addresses, effectively
denying legitimate traffic.  This BUS saturation did not happen with the
Code Red worm several months back.  We spent several hours thinking our ATM
problems were distinct from the SQLSlammer problems.

My question is, has anyone seen source information about the Bank of
America and Automated Teller Machines?  Is it possible that Bank of America
was reporting Asynchronous Transfer Mode (ATM) problems and not Automated
Teller Machine (ATM) problems?

Dave

- --
David Devereaux-Weber, P.E.
Network Services
Division of Information Technology
The University of Wisconsin - Madison
[EMAIL PROTECTED]  http://cable.doit.wisc.edu


- -- End of Forwarded Message

- -
You are subscribed as [EMAIL PROTECTED]
To unsubscribe or update your address, click
  http://v2.listbox.com/member/?listname=ip

Archives at: http://www.interesting-people.org/archives/interesting-people/

--- End of Forwarded Message






OT: Re: WANAL (Re: What could have been done differently?)

2003-01-28 Thread Rafi Sadowsky

## On 2003-01-28 17:49 - Paul Vixie typed:

PV 
PV In any case, all of these makers (including Microsoft) seem to make a very
PV good faith effort to get patches out when vulnerabilities are uncovered.  I
PV wish we could have put time bombs in older BINDs to force folks to upgrade,
PV but that brings more problems than it takes away, so a lot of folks run old
PV broken software even though our web page tells them not to.
PV 

Hi Paul,

 What do you think of OpenBSD still installing BIND4 as part of the
default base system, and recommending it as secure in the OpenBSD FAQ?
(See Section 6.8.3 in http://www.openbsd.org/faq/faq6.html#DNS )

-- 
Thanks
Rafi




Re: OT: Re: WANAL (Re: What could have been done differently?)

2003-01-28 Thread Paul Vixie

  What do you think of OpenBSD still installing BIND4 as part of the
 default base system, and recommending it as secure in the OpenBSD FAQ?
 (See Section 6.8.3 in http://www.openbsd.org/faq/faq6.html#DNS )

i think that bind4 was relatively easy for them to do a format string
audit on, and that bind9 was comparatively huge, and that their caution
is justified based on bind4/bind8's record in CERT advisories, and that
for feature level reasons they will move to bind9 as soon as they can
complete a security audit on the code.  (although in this case ISC and
others have already completed such an audit, another pass never hurts.)



Re: wrt BofA ATM: is it ATM 'automated' or ATM 'async' ?

2003-01-28 Thread Marshall Eubanks

This makes it pretty clear

http://biz.yahoo.com/rb/030125/tech_virus_boa_1.html

Reuters
Bank of America ATMs Disrupted by Virus
Saturday January 25, 5:33 pm ET

SEATTLE (Reuters) - Bank of America Corp. (NYSE:BAC - News) said on 
Saturday that customers at a majority of its 13,000 automatic teller 
machines were unable to process customer transactions after a malicious 
computer worm nearly froze Internet traffic worldwide.

Bank of America spokeswoman Lisa Gagnon said by phone from the company's 
headquarters in Charlotte, North Carolina, that many, if not a majority, 
of the No. 3 U.S. bank's ATMs were back online and that their automated 
banking network would recover by late Saturday.

snip

"We have been impacted, and for a while customers could not use ATMs and 
customer services could not access customer information," Gagnon said.

snip

On Tuesday, January 28, 2003, at 01:46 PM, [EMAIL PROTECTED] 
wrote:


good question. anyone know the answer?


JeffH

--- Forwarded Message

Date: Tue, 28 Jan 2003 02:29:17 -0500
Subject: [IP] is it ATM or ATM  Internet Attack's Disruptions
	More Serious Than Many Thought Possible
From: Dave Farber [EMAIL PROTECTED]
To: ip [EMAIL PROTECTED]


- -- Forwarded Message
From: David Devereaux-Weber [EMAIL PROTECTED]
Date: Mon, 27 Jan 2003 23:52:19 -0600
To: [EMAIL PROTECTED]
Subject: Re: [IP] Internet Attack's Disruptions More Serious Than Many
Thought Possible

One interesting aspect of the reporting for this event is related to the
acronym ATM.  The University of Wisconsin-Madison uses Asynchronous
Transfer Mode (ATM) for backbone transport.  We further use LAN Emulation
(LANE) on the Asynchronous Transfer Mode backbone (LANE maps IP addresses
to ATM Virtual Circuits and back to IP at the far end).  The LANE BUS
(Broadcast and Unknown Server) on the network was swamped due to the high
volume of SQLSlammer hits on broadcast and unknown addresses, effectively
denying legitimate traffic.  This BUS saturation did not happen with the
Code Red worm several months back.  We spent several hours thinking our ATM
problems were distinct from the SQLSlammer problems.

My question is, has anyone seen source information about the Bank of
America and Automated Teller Machines?  Is it possible that Bank of America
was reporting Asynchronous Transfer Mode (ATM) problems and not Automated
Teller Machine (ATM) problems?

Dave

- --
David Devereaux-Weber, P.E.
Network Services
Division of Information Technology
The University of Wisconsin - Madison
[EMAIL PROTECTED]  http://cable.doit.wisc.edu


- -- End of Forwarded Message

- -
You are subscribed as [EMAIL PROTECTED]
To unsubscribe or update your address, click
  http://v2.listbox.com/member/?listname=ip

Archives at: http://www.interesting-people.org/archives/interesting-people/

--- End of Forwarded Message




 Regards
 Marshall Eubanks

T.M. Eubanks
Multicast Technologies, Inc
10301 Democracy Lane, Suite 410
Fairfax, Virginia 22030
Phone : 703-293-9624   Fax : 703-293-9609
e-mail : [EMAIL PROTECTED]
http://www.multicasttech.com

Test your network for multicast :
http://www.multicasttech.com/mt/
 Status of Multicast on the Web  :
 http://www.multicasttech.com/status/index.html




RE: Banc of America Article

2003-01-28 Thread alex

 I'm familiar with some enforced financial institution requirements; nowhere
 did I find transaction data of ATMs on a dedicated network to be
 _required_.  Is this a common industry practice, or a mandatory standard
 I have not discovered?

It is a common practice. Since the alarm line is permanently connected to
some monitoring company, which is supposed to make up its mind about the ATM
still being there should a nice gentleman with a truck decide to take the
entire machine with him, it is very difficult to get that line used for
something else. The other line is the data line, which, in my experience,
tends to be POTS or ISDN (and sometimes DS0). Maybe I am not being clear on
the terminology - "dedicated" in this case means "non-alarm line". Wherever
the connection terminates is going to depend on the implementation.

 How does this relate to the No Name standalone ATM which normally have
 exposed POTS wires running to the wall?

If you look carefully at those wires (without flipping out mini-mart owners),
you are most likely to notice that there are either two visible POTS lines or
one cable carrying two phone lines.

Alex




Re: Is it time to block all Microsoft protocols in the core?

2003-01-28 Thread Joe Abley


On Monday, Jan 27, 2003, at 14:04 Asia/Katmandu, Sean Donelan wrote:


It's not just a Microsoft thing.  SYSLOG opened the network port by
default, and the user has to remember to disable it for only local
logging.


You're using mixed tense in these sentences, so I can't tell whether 
you think that syslog's network port is open by default on operating 
systems today.

On FreeBSD, NetBSD, OpenBSD and Darwin/Mac OS X (the only xterms I 
happen to have open right now) this is not the case, and has not been 
for some time. I presume, perhaps naïvely, that other operating systems 
have done something similar.

[...]

DESCRIPTION
     syslogd reads and logs messages to the system console, log files,
     other machines and/or users as specified by its configuration file.

     The options are as follows:

[...]

     -u      Select the historical ``insecure'' mode, in which syslogd
             will accept input from the UDP port.  Some software wants
             this, but you can be subjected to a variety of attacks
             over the network, including attackers remotely filling
             logs.

[...]
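
As a concrete illustration -- noting that this flag comes from the
FreeBSD syslogd man page, where the spelling differs from the -u
option excerpted above -- passing -s once makes syslogd ignore remote
messages, and passing it twice keeps the network socket from being
opened at all:

	syslogd_flags="-ss"	# in /etc/rc.conf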


Joe




RE: What could have been done differently?

2003-01-28 Thread Vadim Antonov


On Tue, 28 Jan 2003, Eric Germann wrote:

 
 Not to sound too pro-MS, but if they are going to sue, they should be able to
 sue ALL software makers.  And what does that do to open source?

A law can be crafted in such a way as to create a distinction between
selling for profit (and assuming liability) and giving for free as-is. In
fact, you don't have Goodwill sign papers to the effect that it won't
sue you if they decide later that you've brought junk - because you know
they won't win in court. However, that does not protect you if you bring
them a bomb disguised as a valuable.

The reason for this is: if someone sells you stuff, and it turns out not
to be up to your reasonable expectations, you suffered a demonstrable loss
because the vendor has misled you (_not_ because the stuff is bad).  I.e. the
amount of that loss is the price you paid, and, therefore, this is the
vendor's direct liability.

When someone gives you something for free, his direct liability is,
correspondingly, zero.

So, what you want is a law permitting direct liability (i.e. a lemon
law, like the ones regulating the sale of cars or houses) but setting much
higher standards (i.e. willfully deceptive advertisement, maliciously
dangerous software, etc.) for suing for punitive damages.  Note that in
class actions it is often much easier to prove the malicious intent of a
defendant in cases concerning deceptive advertisement - it is one thing
when someone gets cold feet and claims he's been misled, and quite another
when you have thousands of independent complaints.  Because there's
nothing to gain suing non-profits (unless they're churches:) the
reluctance of class action lawyers to work for free would protect
non-profits from that kind of abuse.

A lemon law for software may actually be a boost for the proprietary
software, as people will realize that the vendors have incentive to
deliver on promises.

--vadim




Re: Is it time to block all Microsoft protocols in the core?

2003-01-28 Thread David Charlap

Joe Abley wrote:


You're using mixed tense in these sentences, so I can't tell whether you 
think that syslog's network port is open by default on operating systems 
today.

On FreeBSD, NetBSD, OpenBSD and Darwin/Mac OS X (the only xterms I 
happen to have open right now) this is not the case, and has not been 
for some time. I presume, perhaps naïvely, that other operating systems 
have done something similar.

Current versions of Linux appear to be safe.  This is from the syslog 
package that ships with RedHat version 8 (sysklogd package version 
1.4.1-10).

	NAME
	sysklogd - Linux system logging utilities.

	...

	OPTIONS
	...
	-rThis option will enable the facility to receive
	  message from the network using an internet domain
	  socket with the syslog service (see  services(5)).
	  The default is to not receive any messages from
	  the network.

	  This option is introduced in version 1.3 of the
	  sysklogd package.   Please note that the default
	  behavior is the opposite of how older versions
	  behave, so you might have to turn this on.

The default RedHat installation does not turn on this option.

Looking through RedHat's FTP server, their 4.2 distribution (the oldest 
one on their server) is at version 1.3-15, and therefore incorporates 
this feature.  This release has a README dated 1997, and the sysklogd 
package on their server is dated December 1996.

I would assume that other Linux distributions from the same era (1997 
through the present) would also have sysklogd version 1.3 or later, and 
therefore have this feature.

-- David



Re: What could have been done differently?

2003-01-28 Thread Iljitsch van Beijnum

Sean Donelan wrote:

Many different companies were hit hard by the Slammer worm, some with
better-than-average reputations for security awareness.  They bought
the finest firewalls, they had two-factor biometric locks on their data
centers, they installed anti-virus software, they paid for SAS70
audits by the premier auditors, they hired the best managed security
consulting firms.  Yet, they still were hit.



It's not as simple as "don't use Microsoft", because worms have hit other
popular platforms too.


As a former boss of mine was fond of saying when someone made a stupid 
mistake: "It can happen to anyone. It just happens more often to some 
people than others."

Are there practical answers that actually work in the real world with
real users and real business needs?


As this is still a network operators forum, let's get this out of the
way: any time you put a 10 Mbps ethernet port in a box, expect that it
has to deal with 14 kpps at some point. 100 Mbps -> 148 kpps, 1000 Mbps
-> 1488 kpps. And each packet is a new flow. There are still routers
being sold that have the interfaces, but can't handle the maximum
traffic. Unfortunately, router vendors like to lure customers to boxes
that can forward these amounts of traffic at wire speed rather than
implement features in their lower-end products that would allow a box
to drop the excess traffic in a reasonable way.
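
(For the curious, the arithmetic behind those numbers, assuming
minimum-size frames: a 64-byte Ethernet frame plus an 8-byte preamble
and a 12-byte inter-frame gap occupies 84 bytes, i.e. 672 bits, on the
wire. 10,000,000 bits/sec / 672 bits = ~14,880 packets/sec, and the
100 Mbps and 1000 Mbps figures are simply that times 10 and 100.)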

But then there is the real source of the problem. Software can't be 
trusted. It doesn't mean anything that 100 lines of code are 
correct; if one line is incorrect, something really bad can happen. 
Since we obviously can't make software do what we want it to do, we 
should focus on making it not do what we don't want it to do. This 
means every piece of software must be encapsulated inside a layer of 
restrictive measures that operate with sufficient granularity. In Unix, 
traditionally this is done per-user. Regular users can do a few things, 
but the super-user can do everything. If a user must do something that 
regular users can't do, the user must obtain super-user privileges and 
then refrain from using these absolute privileges for anything other 
than the intended purpose. This doesn't work. If I want to run a web 
server, I should be able to give a specific piece of web serving 
software access to port 80, and not also to every last bit of memory or 
disk space.

Another thing that could help is to have software ask permission from some 
central authority before it gets to do dangerous things such as run 
services on UDP port 1434. The central authority can then keep track of 
what's going on and revoke permissions when it turns out the server 
software is insecure. Essentially, we should firewall on software 
versions as well as on traditional TCP/IP variables.

And it seems parsing protocols is a very difficult thing to do right 
with today's tools. The SNMP fiasco of not long ago shows as much, as 
does the new worm. It would proably a good thing if the IETF could 
build a good protocol parsing library so implementors don't have to do 
this by hand and skip over all that pesky bounds checking. Generating 
and parsing headers for a new protocol would then no longer require new 
code, but could be done by defining a template of some sort. The 
implementors can then focus on the functionality rather than which bit 
goes where. Obviously there would be a performance impact but the same 
goes for coding in higher languages than assembly. Moore's law and 
optimizers are your friends.
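
As a toy illustration of the template idea, sketched in Python with 
invented field names: the header layout is pure data, and the single 
generic parser is the only place the bounds checking has to be right:

    import struct

    # Declarative template for a made-up protocol header.
    FIELDS = [("version", "B"), ("flags", "B"),
              ("length", "H"), ("conn_id", "I")]
    FMT = ">" + "".join(fmt for _, fmt in FIELDS)
    SIZE = struct.calcsize(FMT)  # 8 bytes for this template

    def parse_header(data):
        # The one and only bounds check, shared by every protocol
        # defined this way.
        if len(data) < SIZE:
            raise ValueError("truncated header")
        return dict(zip([name for name, _ in FIELDS],
                        struct.unpack(FMT, data[:SIZE])))

    print(parse_header(b"\x01\x00\x00\x10\x00\x00\x00\x2a"))
    # {'version': 1, 'flags': 0, 'length': 16, 'conn_id': 42}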



Re: OT: Re: WANAL (Re: What could have been done differently?)

2003-01-28 Thread Mike Lewinski

On 1/28/03 11:57 AM, Paul Vixie [EMAIL PROTECTED] wrote:

 
  What do you think of OpenBSD still installing BIND4 as part of the
 default base system and  recommended as secure by the OpenBSD FAQ ?
 (See Section 6.8.3 in http://www.openbsd.org/faq/faq6.html#DNS )
 
 i think that bind4 was relatively easy for them to do a format string
 audit on, and that bind9 was comparatively huge, and that their caution
 is justified based on bind4/bind8's record in CERT advisories, and that
 for feature level reasons they will move to bind9 as soon as they can
 complete a security audit on the code.  (although in this case ISC and
 others have already completed such an audit, another pass never hurts.)


It is my understanding that this process has been completed, and BIND9
should ship as the default OpenBSD named in the 3.3 release:

http://deadly.org/article.php3?sid=20030121022208&mode=flat

We've been running BIND9 from the ports tree for over two years now and are
*very* happy with performance/stability.

Mike




Aggregate traffic management

2003-01-28 Thread Stanislav Rost

Dear NANOGers,

I have a very hands-on question:
Suppose I am a network operator for a decent-sized ISP, and I decide
that I want to divide aggregate traffic flowing through a router
toward some destination, in order to then send some of it through one
route and the remainder through another route.  Thus, I desire to
enforce some traffic engineering decision.

How would I be able to accomplish this division?  What technologies
(even if vendor-specific) would I use?  

I can think of some methods like prefix-matching classification and
ECMP, but I am still not sure exactly how the latter works in practice
(at the router level) and how one may set them up to achieve such
load-sharing.

Thank you for your expertise and lore,

-- 
Stanislav Rost [EMAIL PROTECTED]
Laboratory for Computer Science, MIT




Re: Aggregate traffic management

2003-01-28 Thread Jack Bates

From: Stanislav Rost


 How would I be able to accomplish this division?  What technologies
 (even if vendor-specific) would I use?

 I can think of some methods like prefix-matching classification and
 ECMP, but I am still not sure exactly how the latter works in practice
 (at the router level) and how one may set them up to achieve such
 load-sharing.

To my knowledge it is somewhat vendor-specific. I've used policy routes and
tag switching to divide and alter traffic flows. I do know that with
some vendors it is important to make sure you activate the optimizations
for things such as policy routing, to save massive amounts of CPU time
(see the sketch below). I'm still a neophyte when it comes to tag
switching; others may be able to help better there.
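
For the policy-route flavor, a minimal Cisco IOS-style sketch (all
addresses invented; the last line is the kind of optimization referred
to above, enabling fast-switched rather than process-switched policy
routing):

    ! Match traffic for one destination block and force a next hop.
    access-list 101 permit ip any 192.0.2.0 0.0.0.255
    !
    route-map SPLIT permit 10
     match ip address 101
     set ip next-hop 10.1.1.1
    !
    interface Serial0/0
     ip policy route-map SPLIT
     ip route-cache policy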

Jack Bates
BrightNet Oklahoma




Re: Is it time to block all Microsoft protocols in the core?

2003-01-28 Thread Joe Abley


On Wednesday, Jan 29, 2003, at 01:25 Asia/Katmandu, Joe Abley wrote:


On FreeBSD, NetBSD, OpenBSD and Darwin/Mac OS X (the only xterms I 
happen to have open right now) this is not the case, and has not been 
for some time. I presume, perhaps naïvely, that other operating 
systems have done something similar.

This is not right. Guess I was typing man in the wrong xterms.

FreeBSD (4.x, 5.x) listens to the network by default (and can be 
persuaded not to with a -s flag). NetBSD (1.6) does the same.

Darwin/Mac OS X and OpenBSD do not listen by default (and can be 
persuaded to listen with a -u flag). (Looks like Darwin ships with 
OpenBSD's syslogd).

Various people mailed me and told me that Linux does not listen by 
default, presumably for commonly-packaged values of Linux.


Joe



Re: What could have been done differently?

2003-01-28 Thread Scott Francis
On Tue, Jan 28, 2003 at 03:10:18AM -0500, [EMAIL PROTECTED] said:
[snip]
 Many different companies were hit hard by the Slammer worm, some with
 better than average reputations for security awareness.  They bought
 finest firewalls, they had two-factor biometric locks on their data
 centers, they installed anti-virus software, they paid for SAS70
 audits by the premier auditors, they hired the best managed security
 consulting firms.  Yet, they still were hit.
 
 Its not as simple as don't use microsoft, because worms have hit other
 popular platforms too.

True. But few platforms have as dismal a record in this regard as MS. Whether
that's due to the number of bugs or to market penetration is a matter for debate.
Personally, I think it's clear that the focus, from MS and many other
vendors, is on time-to-market and feature creep. Security is an afterthought,
at best (regardless of Trustworthy Computing, which is looking to be just
another marketing initiative). The first step towards good security is
choosing vendors/software with a reputation for caring about security. I
realize that for many of us, this is not an option at this stage of the game.
And in some arenas, there just aren't any good choices - the best you can do
is to choose the lesser of multiple evils. Which leads me to the next point:

 Are there practical answers that actually work in the real world with
 real users and real business needs?

I think a good place to start is to have at least one person, if not more,
whose job description includes checking the errata/patch lists daily for the
software in use on the network. This can be semi-automated by just
subscribing to the right mailing lists. Now, deciding whether or not a patch
is worth applying is another story, but there's no excuse for being ignorant
of published security updates for software on one's network. Yes, it's a
hassle wading through the voluminous cross-site scripting posts on BUGTRAQ,
but it's worth it when you do occasionally get that vital bit of information.
Sometimes vendors aren't as quick to release bug information, much less
patches, as forums like BUGTRAQ/VulnWatch/etc.

Stay on top of security releases, and patch anything that is a security
issue. I realize this is problematic for larger networks, in which case I
would add, start with the most critical machines and work your way down. If
this requires downtime, well, better to spend a few hours of rotating
downtime to patch holes in your machines than to end up compromised, or
contributing to the kind of chaos we saw this last weekend.

Simple answer, practical for some folks, maybe less so for others. I know
I've been guilty of not following my own advice in this area before, but that
doesn't make it any less pertinent.
-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
  GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui





Re: Aggregate traffic management

2003-01-28 Thread John Todd

It can be done several ways, but the question is: how are you 
differentiating?

This is an incomplete list of methods for differentiating, each of 
which is supported by one or more vendors or open-source solutions:

 - destination address
   - specific prefix matching
   - prefix length matching (/23, /24, etc.)
   - destination AS #
   - destination AS path length
   - destination (fill in the blank for other BGP or IGP specific tricks)
 - source address (see destination address for list of possible 
match criteria)
 - protocol (UDP, TCP, ICMP)
 - port (source or destination)
 - ToS bits
 - URL for port 80 traffic
 - MAC address of source machine
 - amount of throughput to a particular source/destination path


There are even some methods that don't differentiate, but do random 
traffic distribution (for some highly imperfect value of random, 
which I won't debate here.)

 - round-robin gateway specification
 - DNS round-robin (limited use, usually for servers producing traffic)

If you could be more specific on what your distinguishing method 
would be for choosing one path over the other, some specific examples 
could be distilled from this group, I'm sure.  <sarcasm> Though, of 
course, we here in North America don't have to worry ourselves about 
these issues, and we route everything through our most congested 
peer. </sarcasm>

Is this simply to get a survey of all possible routing decision 
concepts, or are you looking for answers only at one layer of the OSI 
model?  You do say router in your description, but I'm still 
uncertain if that is a limiting factor in your question.

JT


Dear NANOGers,

I have a very hands-on question:
Suppose I am a network operator for a decent-sized ISP, and I decide
that I want to divide aggregate traffic flowing through a router
toward some destination, in order to then send some of it through one
route and the remainder through another route.  Thus, I desire to
enforce some traffic engineering decision.

How would I be able to accomplish this division?  What technologies
(even if vendor-specific) would I use? 

I can think of some methods like prefix-matching classification and
ECMP, but I am still not sure exactly how the latter works in practice
(at the router level) and how one may set them up to achieve such
load-sharing.

Thank you for your expertise and lore,

--
Stanislav Rost [EMAIL PROTECTED]
Laboratory for Computer Science, MIT




Re: Is it time to block all Microsoft protocols in the core?

2003-01-28 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Barney Wolff writes:

On Wed, Jan 29, 2003 at 03:50:34AM +0545, Joe Abley wrote:
 
 On Wednesday, Jan 29, 2003, at 01:25 Asia/Katmandu, Joe Abley wrote:
 
 On FreeBSD, NetBSD, OpenBSD and Darwin/Mac OS X (the only xterms I 
 happen to have open right now) this is not the case, and has not been 
 for some time. I presume, perhaps naïvely, that other operating 
 systems have done something similar.
 
 This is not right. Guess I was typing man in the wrong xterms.
 
 FreeBSD (4.x, 5.x) listens to the network by default (and can be 
 persuaded not to with a -s flag). NetBSD (1.6) does the same.

You were right the first time, at least for FreeBSD.  The -s flag
is applied by default - see /etc/defaults/rc.conf .  Not quite as
idiot-proof as a compiled-in default, but way better than defaulting
to listening.

The same is true of NetBSD 1.6; look in the same place.
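
Concretely, the stanza in question looks roughly like this on FreeBSD
(quoting from memory; check your own /etc/defaults/rc.conf and override
in /etc/rc.conf rather than editing the defaults file):

    syslogd_enable="YES"
    syslogd_flags="-s"   # secure mode: refuse log messages from the network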


--Steve Bellovin, http://www.research.att.com/~smb (me)
http://www.wilyhacker.com (2nd edition of Firewalls book)





wrt BofA ATM: is it ATM 'automated' or ATM 'async' ?

2003-01-28 Thread Stewart, William C (Bill), SALES

Over the last N years, I've often been the 
(Asynchronous Transfer Mode) ATM specialist for the group I'm in,
as well as occasionally doing network designs and proposals for banks.
While some banks use ATM to connect the networks that support their ATMs,
few if any come close to 1000 Asynchronous Transfer Mode connections,
much less 13000, and I haven't seen any equipment from Cisco, Lucent,
Nortel, or Newbridge that could dispense cash (though all of them made
equipment that needed to burn lots of cash on occasion.)
NCR once tried to get people to use their PCs as routers,
but I don't think any of them had ATM interfaces, 
though they make automated teller machines as well as cash registers.

I've been frustrated by this whole "Oh, no, the bank must have
done something terribly wrong" discussion.  Banks really _are_ careful
about potential problems that could let people siphon off money wholesale,
and this doesn't sound at all like that.  I don't know their
current network, but the most likely causes for some generic banks'
problems with teller machines during an event like this are

- Using internet VPNs to connect ATMs to their servers,
either dedicated 56kbps, DSL, ISDN D channel, or maybe dialup,
which got stomped on by ISP traffic loads,
which doesn't require that they accept any of the UDP1434 packets,
just that random machines send enough of them.
Remember that ATMs are very low traffic devices,
so if they're using internet connections, they're small,
and the flooding picks random IP addresses which isn't size-dependent.

- If there actually had been an Asynchronous Transfer Mode problem here,
it would have been something weird like the bank using an
ATM pipe from a network provider (probably a telco)
to deliver internet connections on one PVC and Teller Machine
and Bank Branch traffic on some or many other PVCs,
and getting stomped on by the traffic load.

- Another variation on that theme is that some carriers carry their
internet traffic on their ATM backbones along with customer
frame and ATM traffic, so even if a bank's network was entirely
frame relay or frame/ATM interworking, the carrier could have
trunk congestion if all their internet users started bursting heavily.

- Possibly they're using some teller machine network providers
that deliver to them over an internet VPN as opposed to
private line or frame, and that got flooded, but that's less likely.

- It's possible that a bank that uses VPNs for their ATMs and
also has a publicly-visible internet banking service
might share the same internet access line and firewall,
and be susceptible to flooding cause either by inbound traffic
or (much much less likely) outbound traffic from infected servers,
while still not affecting the VPNs.  It's much more likely
they'd be on separate internet pipes for security and reliability,
but they might be doing something like separate firewalls
for separate servers sharing a pair of access lines.
Besides the normal security concerns, those two kinds of operations
are usually run by different organizations in any large bank,
and BofA is definitely large.




Re: Is it time to block all Microsoft protocols in the core?

2003-01-28 Thread Joe Abley


On Wednesday, Jan 29, 2003, at 04:56 Asia/Katmandu, Steven M. Bellovin 
wrote:

In message [EMAIL PROTECTED], Barney Wolff 
writes:

On Wed, Jan 29, 2003 at 03:50:34AM +0545, Joe Abley wrote:


On Wednesday, Jan 29, 2003, at 01:25 Asia/Katmandu, Joe Abley wrote:


On FreeBSD, NetBSD, OpenBSD and Darwin/Mac OS X (the only xterms I
happen to have open right now) this is not the case, and has not 
been
for some time. I presume, perhaps naïvely, that other operating
systems have done something similar.

This is not right. Guess I was typing man in the wrong xterms.

FreeBSD (4.x, 5.x) listens to the network by default (and can be
persuaded not to with a -s flag). NetBSD (1.6) does the same.


You were right the first time, at least for FreeBSD.  The -s flag
is applied by default - see /etc/defaults/rc.conf .  Not quite as
idiot-proof as a compiled-in default, but way better than defaulting
to listening.


The same is true of NetBSD 1.6; look in the same place.


Serves me right for contradicting myself.




Re: Aggregate traffic management

2003-01-28 Thread Kyle C. Bacon

Take a look at a product called Path Control by RouteScience.

http://www.routescience.com/

I have seen their product in action and it is very slick.  It does exactly
what you want, plus a whole lot more, and does it transparently (so if it
fails you aren't SOL) by manipulating BGP tables and next hops based on a
multitude of criteria.

K



   

From: Stanislav Rost <stanrost@lcs.mit.edu>
Sent by: owner-nanog@merit.edu
To: [EMAIL PROTECTED]
Date: 01/28/2003 04:59 PM
Subject: Aggregate traffic management






Dear NANOGers,

I have a very hands-on question:
Suppose I am a network operator for a decent-sized ISP, and I decide
that I want to divide aggregate traffic flowing through a router
toward some destination, in order to then send some of it through one
route and the remainder through another route.  Thus, I desire to
enforce some traffic engineering decision.

How would I be able to accomplish this division?  What technologies
(even if vendor-specific) would I use?

I can think of some methods like prefix-matching classification and
ECMP, but I am still not sure exactly how the latter works in practice
(at the router level) and how one may set them up to achieve such
load-sharing.

Thank you for your expertise and lore,

--
Stanislav Rost [EMAIL PROTECTED]
Laboratory for Computer Science, MIT






RE: What could have been done differently?

2003-01-28 Thread Eric Germann

XP has autoupdate notifications that nag you.  They could make it automatic,
but then everyone would sue them if it mucked up their system.

And, MS has their HFCHECK program which checks which hotfixes should be
installed.  Again, not automatic because they would like the USER to sign
off on installing it.

On the Open Source side, you sort of have that when you build from source.
Maybe apache should build a util to routinely go out and scan their source
and all the myriad add on modules and build a new version when one of them
has a fix to it, but we leave that to the sysadmin.  Why, because the
permutations are too many.  Which is why we have Windows.  To paraphrase a
phone company line I heard in a sales meeting when reaming them: we may
suck, but we suck less.  It ain't the best, but for the most part, it
does what the user wants and is relatively consistent across a number of
machines.  User learns at home and can operate at work.  No retraining.

Sort of like the person who sued McD's when they dumped their own coffee in
their lap because it was too hot.  Somewhere in the equation, the
sysadmin/enduser, whether Unix or Windows, has to take some responsibility.

To turn the argument around, people don't pay for IIS either, but everyone
would love to sue MS for its vulnerabilities (i.e. CR/Nimda, etc).

As has been said, no one writes perfect software.  And again, sometime, the
user has to share some responsibility.  Maybe if the users get burned
enough, the problem will get solved.  Either they will get fired, the
software will change to another platform, or they'll install the patches.
People only change behaviors through pain, either mental or physical.

Eric


 -Original Message-
 From: Jack Bates [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, January 28, 2003 10:36 AM
 To: [EMAIL PROTECTED]; Leo Bicknell; [EMAIL PROTECTED]
 Cc: Eric Germann
 Subject: Re: What could have been done differently?


 From: Eric Germann

 
  Not to sound to pro-MS, but if they are going to sue, they
 should be able
 to
  sue ALL software makers.  And what does that do to open source?  Apache,
  MySQL, OpenSSH, etc have all had their problems.  Should we sue the nail
 gun
  vendor because some moron shoots himself in the head with it?

 With all the resources at their disposal, is MS doing enough to inform the
 customers of new fixes? Are the fixes and latest security patches
 in an easy
 to find location that any idiot admin can spot? Have they done
 due diligence
 in ensuring that proper notification is done? I ask because it
 appears they
 didn't tell part of their own company that a patch needed to be
 applied. If
 I want the latest info on Apache, I hit the main website and the
 first thing
 I see is a list of security issues and resolutions. Navigating
 MS's website
 isn't quite so simplistic. Liability isn't necessarily in the bug
 but in the
 education and notification.

 Jack Bates
 BrightNet Oklahoma








Re: What could have been done differently?

2003-01-28 Thread Scott Francis
On Tue, Jan 28, 2003 at 07:10:52PM -0500, [EMAIL PROTECTED] said:
[snip]
 As has been said, no one writes perfect software.  And again, sometime, the
 user has to share some responsibility.  Maybe if the users get burned
 enough, the problem will get solved.  Either they will get fired, the
 software will change to another platform, or they'll install the patches.
 People only change behaviors through pain, either mental or physical.

There's a difference between having the occasional bug in one's software
(Apache, OpenSSH) and having a track record of remotely exploitable
vulnerabilities in virtually EVERY revision of EVERY product one ships, on
the client-side, the server side and in the OS itself. Microsoft does not
care about security, regardless of what their latest marketing ploy may be.
If they did, they would not be releasing the same exact bugs in their
software year after year after year.

/rant
-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
  GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui





Re: What could have been done differently?

2003-01-28 Thread Scott Francis
On Tue, Jan 28, 2003 at 11:22:13AM -0500, [EMAIL PROTECTED] said:
[snip]
 That is, I think there is a big difference between a company the
 size of Microsoft saying we've known about this problem for 6
 months but didn't consider it serious so we didn't do anything
 about it, and an open source developer saying I've known about
 it for 6 months, but it's a hard problem to solve, I work on this
 in my spare time, and my users know that.
 
 Just like I expect a Ford to pass federal government safety tests,
 to have been put through a battery of product tests by Ford, etc
 and be generally reliable and safe; but when I go to my local custom
 shop and have them build me a low volume or one off street rod, or
 chopper I cannot reasonably expect the same.
 
 The responsibility is the sum total of the number of product units
 out in the market, the risk to the end consumer, the companies
 ability to foresee the risk, and the steps the company was able to
 reasonably take to mitigate the risk.

*applause*

Very well stated. I've been trying for some time now to express my thoughts
on this subject, and failing - you just expressed _exactly_ what I've been
trying to say.

  use for anything other than nailing stuff together.  Likewise, MS told
  people six months ago to fix the hole.  Lack of planning on your part does
 
 It is for this very reason I suspect no one could collect on this
 specific problem.  Microsoft, from all I can tell, acted responsibly
 in this case.  Sean asked for general ways to solve this type of
 problem.  I gave what I thought was the best solution in general.
 It doesn't apply very directly to the specific events of the last
 few days.

Yes, in this particular case Microsoft did The Right Thing. It's not their
fault (this time) that admins failed to apply patches.

Of course, when one has a handful of new patches every _week_ for all manner
of software from MS, ranging from browsers to mail clients to office software
to OS holes to SMTP and HTTP daemons to databases ... well, one can
understand why the admins might have missed this patch. It doesn't remove
responsibility, but it does make the lack of action understandable. One could
easily hire a full-time position, in any medium enterprise that runs MS gear,
just to apply patches and stay on top of security issues for MS software.

Microsoft is not alone in this - they just happen to be the poster child, and
with the market share they have, if they don't lead the way in making
security a priority, I can't see anybody else in the commercial software biz
taking it seriously.

The problem was not this particular software flaw. The problem here is the
track record, and the attitude, of MANY large software vendors with regards
to security. It just doesn't matter to them, and that will not change until
they have a reason to care about it.
-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
  GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui





Re: OT: Re: WANAL (Re: What could have been done differently?)

2003-01-28 Thread Scott Francis
On Tue, Jan 28, 2003 at 08:53:59PM +0200, [EMAIL PROTECTED] said:
[snip]
 Hi Paul,
 
  What do you think of OpenBSD still installing BIND4 as part of the
 default base system and  recommended as secure by the OpenBSD FAQ ?
 (See Section 6.8.3 in http://www.openbsd.org/faq/faq6.html#DNS )

OpenBSD ships a highly-audited, chrooted version of BIND4 that bears little
resemblance to the original code (I'm sure Paul can correct me here if I'm
off-base). The reasons for the team's decision are well-documented on various
lists and FAQs. Given the choices at hand (use the exhaustively audited,
chrooted BIND4 already in production; go with a newer BIND version that
hasn't been through the wringer yet; write their own dns daemon; use tinydns
(licensing issues); use some other less well-known dns software), I think
they made the right one. I'm sure they'll move to a newer version when
somebody on the team gets a chance to give it a thorough code audit, and run
it through sufficient testing prior to release.
-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
  GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui





Re: Banc of America Article

2003-01-28 Thread Leo Bicknell

FWIW:

http://www.washingtonpost.com/wp-dyn/articles/A57550-2003Jan28.html

About 13,000 Bank of America cash machines had to be shut down. The
bank's ATMs sent encrypted information through the Internet, and when
the data slowed to a crawl, it stymied transactions, according to a
source, who said customer financial information was never in danger of
being stolen.

-- 
   Leo Bicknell - [EMAIL PROTECTED] - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
Read TMBG List - [EMAIL PROTECTED], www.tmbg.org





Re: What could have been done differently?

2003-01-28 Thread David Lesher


 Somewhere in the equation, the sysadmin/enduser, whether Unix
 or Windows, has to take some responsibility.

Hence I loved this:

http://www.nytimes.com/2003/01/28/technology/28SOFT.html

Worm Hits Microsoft, Which Ignored Own Advice
By JOHN SCHWARTZ 

Among the companies that found its computer system under attack
by a rogue program was Microsoft, which has been preaching
the gospel of secure computing.
.



-- 
A host is a host from coast to coast.................[EMAIL PROTECTED]
& no one will talk to a host that's close.........[v].(301) 56-LINUX
Unless the host (that isn't close)........................pob 1433
is busy, hung or dead....................................20915-1433



Re: What could have been done differently?

2003-01-28 Thread Mike Lewinski



On Tue, 28 Jan 2003, Andy Putnins wrote:

 This is therefore a request for all of those who possess this clue to
 write down their wisdom and share it with the rest of us

I can't tell you what clue is, but I know when I don't see it. In some
cases our clients have had Code Red, Nimda, and Sapphire hit the same
friggin machines.

To borrow from the exploding car analogy, if you're the highway dept. and
you notice that only *some* people's cars seem to explode, maybe you build
the equivalent of an HOV lane with concrete dividers, and funnel them all
into it, so at least they don't blow up the more conscientious
drivers/mechanics in the next lane over.

Providers who were negatively affected might want to look at their lists,
compare them with past incident lists, and schedule a maintenance window to
aggregate the repeat offenders' ports where feasible, to isolate the impact
of the next worm.

We've tried to share clue with clients via security announcements,
encouraging everyone to get on their vendors' security lists, follow
BUGTRAQ, and provide relevant signup URLs.

Mike







Re: Banc of America Article

2003-01-28 Thread Steven M. Bellovin

In message [EMAIL PROTECTED], Leo Bicknell writes:



FWIW:

http://www.washingtonpost.com/wp-dyn/articles/A57550-2003Jan28.html

About 13,000 Bank of America cash machines had to be shut down. The
bank's ATMs sent encrypted information through the Internet, and when
the data slowed to a crawl, it stymied transactions, according to a
source, who said customer financial information was never in danger of
being stolen.


OK.  I -- and, I suspect, most other folks on this list -- have been 
predicting for years that the Internet would become *the* data network, 
good for all tasks.  Bank of America believed us.  They learned that 
maybe we were a bit overoptimistic in our time frame...


--Steve Bellovin, http://www.research.att.com/~smb (me)
http://www.wilyhacker.com (2nd edition of Firewalls book)





Re: What could have been done differently?

2003-01-28 Thread Scott Francis
On Tue, Jan 28, 2003 at 08:14:17PM +0100, [EMAIL PROTECTED] said:
[snip]
 restrictive measures that operate with sufficient granularity. In Unix, 
 traditionally this is done per-user. Regular users can do a few things, 
 but the super-user can do everything. If a user must do something that 
 regular users can't do, the user must obtain super-user priviliges and 
 then refrain from using these absolute priviliges for anything else 
 than the intended purpose. This doesn't work. If I want to run a web 
 server, I should be able to give a specific piece of web serving 
 software access to port 80, and not also to every last bit of memory or 
 disk space.

Jeremiah Gowdy gave an excellent presentation at ToorCon 2001 on this very
topic - Fundamental Flaws in Network Operating System Design, I think it
was called. I'm looking around to see if I can find a copy of the lecture,
but so far I'm having little luck. His main thesis was basically that every
OS in common use today, from Windows to UNIX variants, has a fundamental
flaw in the way privileges and permissions are handled - the concept of
superuser/administrator. He argued instead that OSes should be redesigned to
implement the principle of least privilege from the ground up, down to the
architecture they run on. OpenSSH's PrivSep (now making its way into other
daemons in the OpenBSD tree) is a step in the right direction.

I'm still looking for a copy of the presentation, but I was able to find a
slightly older rant he wrote that contains many of the same points:
http://www.bsdatwork.com/reviews.php?op=showcontent&id=2

Good reading, even if it's not very much practical help at this moment. :)

 Another thing that could help is have software ask permission from some 
 central authority before it gets to do dangerous things such as run 
 services on UDP port 1434. The central authority can then keep track of 
 what's going on and revoke permissions when it turns out the server 
 software is insecure. Essentially, we should firewall on software 
 versions as well as on traditional TCP/IP variables.

The problem there is the same as with windowsupdate - if one can spoof the
central authority, one instantly gains unrestricted access to not one, but
myriad computers. Now, if it were possible to implement this central
authority concept on a limited basis in a specific network area, I'd say that
deserved further consideration. So far, the closest thing I've seen to this
concept is the ssh administrative host model: adminhost:~root/.ssh/id_dsa.pub
is copied to every targethost:~root/.ssh/authorized_keys2, such that commands
can be performed network-wide from a single station. While I have used this
model with some success, it does face scalability issues in large
environments, and if your admin box is ever compromised ...
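
Mechanically, that model is just this (host names hypothetical; commands
sketched from the description above):

    # on adminhost, once:
    ssh-keygen -t dsa                  # creates ~/.ssh/id_dsa and id_dsa.pub
    # push the public key to each managed host:
    cat ~/.ssh/id_dsa.pub | ssh root@targethost \
        'cat >> ~/.ssh/authorized_keys2'
    # thereafter, run commands network-wide from the one station:
    for h in host1 host2 host3; do ssh root@$h 'uptime'; done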

 And it seems parsing protocols is a very difficult thing to do right 
 with today's tools. The SNMP fiasco of not long ago shows as much, as 
 does the new worm. It would probably be a good thing if the IETF could 
 build a good protocol parsing library so implementors don't have to do 
 this by hand and skip over all that pesky bounds checking. Generating 
 and parsing headers for a new protocol would then no longer require new 
 code, but could be done by defining a template of some sort. The 
[snip]

It's the trust issue, again - trust is required at some point in most
security models. Defining who you can trust, and to what degree, and how/why,
and knowing when to revoke that trust, is a problem that has been stumping
folks for quite a while now. I certainly don't claim to have an answer to
that question. :)
-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
  GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui





Re: What could have been done differently?

2003-01-28 Thread Scott Francis
On Tue, Jan 28, 2003 at 09:00:48PM -0500, [EMAIL PROTECTED] said:
 In message [EMAIL PROTECTED], Scott Francis writes:
 
 There's a difference between having the occasional bug in one's software
 (Apache, OpenSSH) and having a track record of remotely exploitable
 vulnerabilities in virtually EVERY revision of EVERY product one ships, on
 the client-side, the server side and in the OS itself. Microsoft does not
 care about security, regardless of what their latest marketing ploy may be.
 If they did, they would not be releasing the same exact bugs in their
 software year after year after year.
 
 
 They do have a lousy track record.  I'm convinced, though, that
 they're sincere about wanting to improve, and they're really trying
 very hard.  In fact, I hope that some other vendors follow their
 lead.  My big worry isn't the micro-issues like buffer overflows
 -- it's the meta-issue of an overall too-complex architecture.  I
 don't think they have a handle on that yet.

Quite true - complexity is inversely proportional to security (thanks, Mr.
Schneier). Unfortunately, it seems like the Net as a whole, including the
systems, software and protocols running on it, only gets more complex as time
goes by. How will we reconcile this growing complexity and our increasing
dependency on the global network with the ever-growing need for security and
reliability? They seem to be accelerating at the same rate.
-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
  GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui





Re: What could have been done differently?

2003-01-28 Thread Brian Wallingford

On Tue, 28 Jan 2003, Steven M. Bellovin wrote:

:They do have a lousy track record.  I'm convinced, though, that
:they're sincere about wanting to improve, and they're really trying
:very hard.  In fact, I hope that some other vendors follow their
:lead.  My big worry isn't the micro-issues like buffer overflows
:-- it's the meta-issue of an overall too-complex architecture.  I
:don't think they have a handle on that yet.

Excellent point.  I have been saying this since the dawn of Windows
3.x.  Obviously, software engineering for such a large project as an(y) OS
needs to be distributed.  MS has long been remiss in facilitating 
(mandating?) coordination between project teams pre-market.  You're
absolutely correct that complexity is now the issue, and it could have
been mitigated early on.  (Who knows what?  Is who still
employed?  If not, where are who's notes?  Who knows if who shared
his notes with what?, Who's on third?...)

Now, it's going to cost loads of $$ to get everyone on the same page (or
chapter), if that's even in the cards.  For MS, it's a game of picking the
right fiscal/social/political tradeoff.  It's extremely complex now, as
the project has taken on a life of its own.

Someone let the suits take control early on, and we all know the rest of
the story.

Any further discussion will likely be nothing more than educated
conjecture (as was the above).

cheers,
brian




Re: What could have been done differently?

2003-01-28 Thread Valdis . Kletnieks
On Tue, 28 Jan 2003 19:10:52 EST, Eric Germann [EMAIL PROTECTED]  said:

 Sort of like the person who sued McD's when they dumped their own coffee in
 their lap because it was too hot.  Somewhere in the equation, the
 sysadmin/enduser, whether Unix or Windows, has to take some responsibility.

Bad example. Or at least it's a bad example for your point.  That particular
case has a *LOT* of similarities with the other big-M company we're discussing.
Cross out "hot coffee" and write in "buffer overflow" and see how it reads:

From http://lawandhelp.com/q298-2.htm

1:  For years, McDonald's had known they had a problem with the way they make
their coffee - that their coffee was served much hotter (at least 20 degrees
more so) than at other restaurants.

2:  McDonald's knew its coffee sometimes caused serious injuries - more than
700 incidents of scalding coffee burns in the past decade have been settled by
the Corporation - and yet they never so much as consulted a burn expert
regarding the issue.

3:  The woman involved in this infamous case suffered very serious injuries -
third degree burns on her groin, thighs and buttocks that required skin grafts
and a seven-day hospital stay.

4:  The woman, an 81-year old former department store clerk who had never
before filed suit against anyone, said she wouldn't have brought the lawsuit
against McDonald's had the Corporation not dismissed her request for
compensation for medical bills.

5:  A McDonald's quality assurance manager testified in the case that the
Corporation was aware of the risk of serving dangerously hot coffee and had no
plans to either turn down the heat or to post warning about the possibility of
severe burns, even though most customers wouldn't think it was possible.

6:  After careful deliberation, the jury found McDonald's was liable because
the facts were overwhelmingly against the company. When it came to the punitive
damages, the jury found that McDonald's had engaged in willful, reckless,
malicious, or wanton conduct, and rendered a punitive damage award of 2.7
million dollars. (That is the equivalent of just two days of coffee sales;
McDonald's Corporation generates revenues in excess of 1.3 million dollars
daily from the sale of its coffee, selling 1 billion cups each year.)

7:  On appeal, a judge lowered the award to $480,000, a fact not widely
publicized in the media.

8:  A report in Liability Week, September 29, 1997, indicated that Kathleen
Gilliam, 73, suffered first degree burns when a cup of coffee spilled onto her
lap. Reports also indicate that McDonald's consistently keeps its coffee at 185
degrees, still approximately 20 degrees hotter than at other restaurants. Third
degree burns occur at this temperature in just two to seven seconds, requiring
skin grafting, debridement and whirlpool treatments that cost tens of thousands
of dollars and result in permanent disfigurement, extreme pain and disability
to the victims for many months, and in some cases, years.






Dropouts since Saturday 1/25/03 only affecting web traffic?

2003-01-28 Thread Sean Donelan

According to Matrix Systems (http://average.miq.net/Weekly/markR.html)
there have been two additional dropouts of global Web reachability, on
January 26 and January 28.  These dropouts have lasted for a few hours or
so, but were nearly as large as what we saw from the SQL worm.  However,
it doesn't seem to affect other network services as measured by Matrix --
just the measured web servers.  The most recent was tonight from 3-5pm
and again from 5-7pm EST (http://average.miq.net/).

Any ideas what is causing them?  Measurement artifact?  Are you seeing
something strange on your networks about that time?





Re: Aggregate traffic management

2003-01-28 Thread Serge Maskalik

   Stanislav, 

  It depends on what control mechanism you are using: 

   o routes learned via an IGP - ECMP would work, and if it's a single 
 destination host, per-packet load balancing between the outgoing 
 links is your only practical choice; the rest of the ECMP schemes 
 work by distributing flows or routes amongst the links

   o routes learned via BGP, where the traffic consists of a variety 
 of flows that all use the same reachability information (BGP 
 route); you could de-aggregate the announcement locally if you 
 have an idea of how the per-flow volume maps onto the route; a 
 BGP Multipath feature set exists in most router implementations, 
 but the distribution methods are statistically different
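
  A minimal sketch of the per-flow idea: hash the flow identifiers and 
  take the result modulo the number of equal-cost next hops, so every 
  packet of a given flow stays on one link (addresses hypothetical; 
  real routers use cheaper hardware hashes, but the principle is the 
  same):

    import hashlib

    NEXT_HOPS = ["10.0.0.1", "10.0.1.1"]   # two equal-cost paths

    def pick_next_hop(src_ip, dst_ip, proto, sport, dport):
        # Same 5-tuple -> same digest -> same link, so flows never
        # get reordered across paths.
        key = ("%s|%s|%s|%d|%d"
               % (src_ip, dst_ip, proto, sport, dport)).encode()
        return NEXT_HOPS[hashlib.md5(key).digest()[0] % len(NEXT_HOPS)]

    print(pick_next_hop("192.0.2.7", "198.51.100.9", "tcp", 51515, 80))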

  For the latter, several systems exist in the marketplace that try 
  to automate TE for BGP-learned routes, one of which is ours. These 
  systems require a closed feedback loop between traffic volume per 
  flow and link mappings; this needs to occur close to real time to 
  be effective. 
  
- Serge  

Thus spake Stanislav Rost ([EMAIL PROTECTED]):

 
 Dear NANOGers,
 
 I have a very hands-on question:
 Suppose I am a network operator for a decent-sized ISP, and I decide
 that I want to divide aggregate traffic flowing through a router
 toward some destination, in order to then send some of it through one
 route and the remainder through another route.  Thus, I desire to
 enforce some traffic engineering decision.
 
 How would I be able to accomplish this division?  What technologies
 (even if vendor-specific) would I use?  
 
 I can think of some methods like prefix-matching classification and
 ECMP, but I am still not sure exactly how the latter works in practice
 (at the router level) and how one may set them up to achieve such
 load-sharing.
 
 Thank you for your expertise and lore,
 
 -- 
 Stanislav Rost [EMAIL PROTECTED]
 Laboratory for Computer Science, MIT