Re: OT: Re: WANAL (Re: What could have been done differently?)

2003-02-04 Thread Scott Francis
On Mon, Feb 03, 2003 at 11:27:46AM +0100, [EMAIL PROTECTED] said:
> 
> 
> 
> --On Tuesday, January 28, 2003 18:06:47 -0800 Scott Francis
> <[EMAIL PROTECTED]> wrote:
> 
> > I'm sure
> > they'll move to a newer version when somebody on the team gets a chance
> > to give it a thorough code audit, and run it through sufficient testing
> > prior to release.
> 
> The -current tree now is at BIND 9.2.2rc-whatever, and has been so for
> roughly a month. Thank Jakob Schlyter. 

*nod* Just noticed this when going through misc@ mail earlier. With
sufficient testing, this will probably be in the 3.3 release in May ...
-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
  GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui





Re: OT: Re: WANAL (Re: What could have been done differently?)

2003-02-03 Thread Måns Nilsson



--On Tuesday, January 28, 2003 18:06:47 -0800 Scott Francis
<[EMAIL PROTECTED]> wrote:

> I'm sure
> they'll move to a newer version when somebody on the team gets a chance
> to give it a thorough code audit, and run it through sufficient testing
> prior to release.

The -current tree now is at BIND 9.2.2rc-whatever, and has been so for
roughly a month. Thank Jakob Schlyter. 

-- 
Måns NilssonSystems Specialist
+46 70 681 7204 KTHNOC  MN1334-RIPE

We're sysadmins. To us, data is a protocol-overhead.



Re: What could have been done differently?

2003-02-01 Thread Dave Howe


At least theoretically, the US *is* supposed to have a comparable system.
European privacy law makes it illegal to transfer personal data of any kind
to a country without a comparable system - the US has a voluntary "Safe
Harbor" scheme that is supposed to enable US companies to receive personal
data from Europe without the board of directors of the sending company
being arrested.
Mind you, none of this takes into account the web; being based in the US,
Passport isn't subject to English law (but then, most American courts
assume Internet==American law anyhow).




Re: What could have been done differently?

2003-01-30 Thread Scott Francis
On Thu, Jan 30, 2003 at 10:39:17AM -0800, [EMAIL PROTECTED] said:
> IIRC, MS's patches have been digitally signed by MS, and their patching
> system checks these signatures silently. So, they will claim that
> compromised route info and/or DNS spoofing does not affect their
> correctness.
> 
> Though, I'm not sure what will happen in a key-revocation situation.

Interesting side note ... the top of the page right now at http://www.ntk.net
details a similar problem facing MS in the UK. (Remember when they forgot to
renew hotmail.com, and some kind Linux geek fixed it for them? Well,
apparently their entry in the UK Data Protection Register expired January 8,
which means all personal data they hold in the UK is now held illegally
(Passport, anyone?). I wonder if something like this would be useful (or even
possible) in the US, or if it would just be another opportunity for
bureaucratic bungling ...)

> Koji

-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
  GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui





Re: What could have been done differently?

2003-01-30 Thread David Howe

at Thursday, January 30, 2003 12:01 AM, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> was seen to say:
>> But this worm required external access to an internal server (SQL
>> Servers are not front-end ones); even with a bad or no patch
>> management system, this simply wouldn't happen on a properly
>> configured network. Whoever got slammered has more problems than
>> just this worm. Even with no firewall or screening router, use of
>> RFC1918 private IP addresses on the SQL Server would have prevented
>> this worm attack.
>
> RFC1918 addresses would not have prevented this worm attack.
> RFC1918 != security
Indeed. More accurately, though, "don't have an SQL server port exposed to
the general internet, you bloody fools" might be closer to the correct
advice to customers :)
I have been trying *hard*, but I can't think of a single decent reason a
random visitor to a site needs SQL Server access from the outside.




Re: What could have been done differently?

2003-01-29 Thread Scott Francis
On Tue, Jan 28, 2003 at 11:13:19AM -0200, [EMAIL PROTECTED] said:
[snip]
> But this worm required external access to an internal server (SQL Servers
> are not front-end ones); even with a bad or no patch management system, this
> simply wouldn't happen on a properly configured network. Whoever got
> slammered has more problems than just this worm. Even with no firewall or
> screening router, use of RFC1918 private IP addresses on the SQL Server would
> have prevented this worm attack.

That would only help if the worm's randomly chosen IP addresses were picked
exclusively from valid public IP space (i.e. excluding RFC1918 addresses),
and although I am not sure, I doubt the worm's author(s) were that
conscientious.

Later, on Wed, Jan 29, 2003 at 19:01:25 -0500 (EST), <[EMAIL PROTECTED]>
replied:
> RFC1918 addresses would not have prevented this worm attack.
> RFC1918 != security

All too true. However, using NAT/packet filtering can at least prevent
casual/automated network scans. Of course, if one were implementing proper
filtering, 1434/udp wouldn't be accepting traffic from outside sources,
whether directly or through NAT/port forwarding. But then, this observation
has been made many times already ...
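If you want to check that such filtering actually holds from the outside, a
minimal sketch along these lines will do (the single 0x02 byte is, as far as
I recall, the SQL Server resolution service "ping"; treat it as an
assumption and adjust to taste):

# Minimal sketch: probe 1434/udp from outside the network to verify
# filtering.  A reply means the SQL resolution service is reachable;
# silence means it is filtered, dropped, or simply not running.
import socket
import sys

def probe_sql_resolution(host, timeout=3.0):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(b"\x02", (host, 1434))   # SSRP "ping" byte (assumption)
        data, _ = s.recvfrom(4096)
        return data
    except OSError:                       # timeout, ICMP unreachable, etc.
        return None
    finally:
        s.close()

if __name__ == "__main__":
    reply = probe_sql_resolution(sys.argv[1])
    print("reachable" if reply else "no answer")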
-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
  GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui





Re: What could have been done differently?

2003-01-29 Thread Mike Hogsett


> Similarly, you _pay_ MS for a product. A product which is repeatedly
> vulnerable.



I think this is key.  People (individuals/corporations) keep buying crappy
software.  As long as people keep paying the software vendors for these
broken products, what incentive do the vendors have to actually fix them?

Imagine if your car had to be recalled for problems every week (for years
and years) [and you had to install the fixes yourself].  Do you think that
the manufacturer of that car would still be selling cars, or at least that
model?  Not likely.

Why do we as consumers put up with this for software but not other
products?  It doesn't make any sense.

 - Mike Hogsett







Re: What could have been done differently?

2003-01-29 Thread bdragon

> Not to sound too pro-MS, but if they are going to sue, they should be able to
> sue ALL software makers.  And what does that do to open source?  Apache,
> MySQL, OpenSSH, etc have all had their problems.  Should we sue the nail gun
> vendor because some moron shoots himself in the head with it?  No.  It was
> never designed for flicking flies off his forehead.  And they said, don't
> use it for anything other than nailing stuff together.  Likewise, MS told
> people six months ago to fix the hole.  "Lack of planning on your part does
> not constitute an emergency on my part" was once told to me by a wise man.
> At some point, people have to take SOME responsibility for their
> organizations' deployment of IT assets and systems.  Microsoft is the
> convenient target right now because they HAVE assets to take.  Who's going
> to pony up when Apache gets sued and loses?  How do you sue Apache, or how
> do you sue Perl, because, after all, it has bugs?  Just because you give it
> away shouldn't isolate you from liability.
> 
> Eric

Similarly, you _pay_ MS for a product. A product which is repeatedly
vulnerable. You don't typically pay for Apache. If you pay for a closed-source
product, security should be part of the price you've paid. If you acquire
an open-source product, you either accept the limitations or you pay to
have someone check it over, which is possible, since it is open-source.

Some companies that believe certain open-source products perform better
than certain closed-source products do just this: they pay someone
to support that product.

If you only use open-source, or non-commercial closed-source (probably the
most dangerous) because it is cheap/free, then you get what you pay for.




Re: What could have been done differently?

2003-01-29 Thread bdragon

> But this worm required external access to an internal server (SQL Servers
> are not front-end ones); even with a bad or no patch management system, this
> simply wouldn't happen on a properly configured network. Whoever got
> slammered, has more problems than just this worm. Even with no firewall or
> screening router,  use of RFC1918 private IP address on the SQL Server would
> have prevented this worm attack

RFC1918 addresses would not have prevented this worm attack.
RFC1918 != security




Re: What could have been done differently?

2003-01-29 Thread Scott Francis
On Wed, Jan 29, 2003 at 12:21:50PM -0800, [EMAIL PROTECTED] said:
[snip]
>   >   So far, the closest thing I've seen to this concept is the ssh
>   >   administrative host model: adminhost:~root/.ssh/id_dsa.pub is
>   >   copied to every targethost:~root/.ssh/authorized_keys2, such that
>   >   commands can be performed network-wide from a single station.
>   >
>   > Do you even read what you write? How does a host with root access to
>   > an entire set of hosts exemplify the least privilege principle?
> 
>   Your selections from my post managed to obscure the fact that I was making
>   more than one point. I did _not_ state that the ssh key mgmt system outlined
>   above exemplifies least privilege. I was merely making a comparison between
>   that model and the topic under discussion, central
>   administrative/authenticating authorities.
> 
> So when windowsupdate does it, it's a problem, because they aren't
> using ssh keys? I'm just confused, as they both seem to represent the
> same model in your discussion, however one is a "problem" and the
> other is a suggested practice.

When windowsupdate does it, it's more problematic because I have no way of
knowing what machine that is, who's controlling it ... I'm basically relying
on DNS. There's no strong crypto used for authentication there that I'm aware
of. Perhaps I'm misinformed. I consider the use of ssh keys I generated, from
machines I built, to be more trustworthy than relying on DNS as the
authentication mechanism.

> Is it because windowsupdate requires explicit action on each client
> machine to operate?

That's not necessarily true either. Anyway, my point was that windowsupdate
has been spoofed, and spoofing DNS is easier than trying to spoof or
man-in-the-middle an auth system that uses strong crypto. It's not perfect,
but it's better than relying solely on DNS.

(I can't seem to find the news article I'm thinking of, but I'm pretty sure
it's out there. I'll keep looking.)
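To make the underlying point concrete: what I want is for trust to rest on a
verifiable artifact rather than on a name lookup. A minimal sketch (the
filename and digest are placeholders; the digest is something you would
obtain out of band, e.g. from a signed advisory) of checking a downloaded
patch before installing it:

# Sketch: verify a downloaded patch against a digest obtained out of
# band, instead of trusting whatever host the DNS name resolved to.
# Filename and expected digest below are placeholders.
import hashlib

def sha256_of(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(bufsize)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()

EXPECTED = "replace-with-the-digest-published-out-of-band"   # placeholder

if __name__ == "__main__":
    actual = sha256_of("downloaded-patch.exe")                # placeholder
    if actual != EXPECTED:
        raise SystemExit("digest mismatch - refusing to install")
    print("digest ok")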

> I'm still missing whatever point you were trying to make in your
> original post.

Go read it again then, and spare us all your lack of comprehension.

>   Please do not put words into my mouth.
> 
> I'm not. I'm simply quoting ones coming from it.

You did indeed put words into my mouth - you wrote:

Do you even read what you write? How does a host with root access to
an entire set of hosts exemplify the least privilege principle?


when I had NOT drawn any correlation, AT ALL, between the ssh key admin model
and the principle of least privilege. They were two separate topics that just
happened to be discussed in the same posting.

This is my last post in this thread; further flames should be directed
offlist.
-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
  GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui





Re: What could have been done differently?

2003-01-29 Thread just me

On Wed, 29 Jan 2003, Scott Francis wrote:

  On Wed, Jan 29, 2003 at 10:47:30AM -0800, [EMAIL PROTECTED] said:
  > On Tue, 28 Jan 2003, Scott Francis wrote:
  >
  >   He argued instead that OSes should be redesigned to implement the
  >   principle of least privilege from the ground up, down to the
  >   architecture they run on.
  >
  > [...]
  >
  >   The problem there is the same as with windowsupdate - if one can spoof the
  >   central authority, one instantly gains unrestricted access to not one, but
  >   myriad computers.
  >
  > [...]
  >
  >   So far, the closest thing I've seen to this concept is the ssh
  >   administrative host model: adminhost:~root/.ssh/id_dsa.pub is
  >   copied to every targethost:~root/.ssh/authorized_keys2, such that
  >   commands can be performed network-wide from a single station.
  >
  > Do you even read what you write? How does a host with root access to
  > an entire set of hosts exemplify the least privilege principle?

  Your selections from my post managed to obscure the fact that I was making
  more than one point. I did _not_ state that the ssh key mgmt system outlined
  above exemplifies least privilege. I was merely making a comparison between
  that model and the topic under discussion, central
  administrative/authenticating authorities.

So when windowsupdate does it, it's a problem, because they aren't
using ssh keys? I'm just confused, as they both seem to represent the
same model in your discussion, however one is a "problem" and the
other is a suggested practice.

Is it because windowsupdate requires explicit action on each client
machine to operate?

I'm still missing whatever point you were trying to make in your
original post.

  Please do not put words into my mouth.

I'm not. I'm simply quoting ones coming from it.

matto

[EMAIL PROTECTED]<
   Flowers on the razor wire/I know you're here/We are few/And far
   between/I was thinking about her skin/Love is a many splintered
   thing/Don't be afraid now/Just walk on in. #include 




Re: What could have been done differently?

2003-01-29 Thread Scott Francis
On Wed, Jan 29, 2003 at 10:47:30AM -0800, [EMAIL PROTECTED] said:
> On Tue, 28 Jan 2003, Scott Francis wrote:
> 
>   He argued instead that OSes should be redesigned to implement the
>   principle of least privilege from the ground up, down to the
>   architecture they run on.
> 
> [...]
> 
>   The problem there is the same as with windowsupdate - if one can spoof the
>   central authority, one instantly gains unrestricted access to not one, but
>   myriad computers.
> 
> [...]
> 
>   So far, the closest thing I've seen to this concept is the ssh
>   administrative host model: adminhost:~root/.ssh/id_dsa.pub is
>   copied to every targethost:~root/.ssh/authorized_keys2, such that
>   commands can be performed network-wide from a single station.
> 
> Do you even read what you write? How does a host with root access to
> an entire set of hosts exemplify the least privilege principle?

Your selections from my post managed to obscure the fact that I was making
more than one point. I did _not_ state that the ssh key mgmt system outlined
above exemplifies least privilege. I was merely making a comparison between
that model and the topic under discussion, central
administrative/authenticating authorities. Additionally, the section higher
up regarding least privilege was in connection with OS design, and was quoted
from another author's presentation at ToorCon last year. You're stringing
together statements on disparate subjects and then jumping to conclusions.

Please do not put words into my mouth.

> matto
> 
> [EMAIL PROTECTED]<

-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
  GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui





Re: What could have been done differently?

2003-01-29 Thread just me

On Tue, 28 Jan 2003, Scott Francis wrote:


  He argued instead that OSes should be redesigned to implement the
  principle of least privilege from the ground up, down to the
  architecture they run on.

[...]

  The problem there is the same as with windowsupdate - if one can spoof the
  central authority, one instantly gains unrestricted access to not one, but
  myriad computers.

[...]

  So far, the closest thing I've seen to this concept is the ssh
  administrative host model: adminhost:~root/.ssh/id_dsa.pub is
  copied to every targethost:~root/.ssh/authorized_keys2, such that
  commands can be performed network-wide from a single station.


Do you even read what you write? How does a host with root access to
an entire set of hosts exemplify the least privilege principle?

matto

[EMAIL PROTECTED]<
   Flowers on the razor wire/I know you're here/We are few/And far
   between/I was thinking about her skin/Love is a many splintered
   thing/Don't be afraid now/Just walk on in. #include 




Re: What could have been done differently?

2003-01-29 Thread Iljitsch van Beijnum

On Tue, 28 Jan 2003, Scott Francis wrote:

> I'm still looking for a copy of the presentation, but I was able to find a
> slightly older rant he wrote that contains many of the same points:
> http://www.bsdatwork.com/reviews.php?op=showcontent&id=2

> Good reading, even if it's not very much practical help at this moment. :)

I'm reminded of the two men who were sent out to chop a whole lot of
wood. One judged the amount of work and immediately started, chopping
away until dark. The other stopped to sharpen his blade from time to
time. Despite the fact that he lost valuable chopping time this way, he was
home in time for dinner.

> > Another thing that could help is have software ask permission from some
> > central authority before it gets to do dangerous things such as run
> > services on UDP port 1434. The central authority can then keep track of
> > what's going on and revoke permissions when it turns out the server
> > software is insecure. Essentially, we should firewall on software
> > versions as well as on traditional TCP/IP variables.

> The problem there is the same as with windowsupdate - if one can spoof the
> central authority, one instantly gains unrestricted access to not one, but
> myriad computers.

I didn't mean quite that central, but rather one or two of these boxes
for a small-to-medium-sized organization. If there are different
servers authenticating and authorizing users on the one hand and
software/network services on the other hand, an attacker would have to
compromise both: the network aaa box to bypass the firewalls, and the
user aaa box to actually log on.

> > It would probably be a good thing if the IETF could
> > build a good protocol parsing library so implementors don't have to do
> > this "by hand" and skip over all that pesky bounds checking. Generating
> > and parsing headers for a new protocol would then no longer require new
> > code, but could be done by defining a template of some sort.

> It's the trust issue, again - trust is required at some point in most
> security models.

This isn't a matter of trust, but a matter of well-designed and
well-tested software. If the RFC Editor publishes an RFC with the C
example code for a generic protocol handler library, this code will have
seen a lot of review, especially if people intend to actually use this
code in their products. Since this code will be so important and not
all that big, a formal correctness proof may be possible.
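To sketch what "defining a template of some sort" could look like (the post
imagines a C library from the IETF; this is only an illustration, in Python,
of the shape of the interface): the per-protocol code just declares fields,
and a single shared routine does the length and bounds checking for every
protocol.

# Sketch of a template-driven header parser: protocols are described
# declaratively, and the bounds checking lives in one shared routine.
import struct

def parse_header(template, data):
    """template: list of (field_name, struct_format_char) pairs."""
    fmt = ">" + "".join(f for _, f in template)   # network byte order
    need = struct.calcsize(fmt)
    if len(data) < need:                          # the "pesky bounds checking"
        raise ValueError("truncated header: need %d bytes, got %d"
                         % (need, len(data)))
    values = struct.unpack(fmt, data[:need])
    return dict(zip((n for n, _ in template), values)), data[need:]

# Hypothetical template: a UDP header.
UDP_HEADER = [("src_port", "H"), ("dst_port", "H"),
              ("length", "H"), ("checksum", "H")]

if __name__ == "__main__":
    pkt = struct.pack(">HHHH", 1434, 53, 8, 0) + b"payload"
    fields, payload = parse_header(UDP_HEADER, pkt)
    print(fields, payload)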




Re: What could have been done differently?

2003-01-29 Thread Michael . Dillon

> His main thesis was basically that every
> OS in common use today, from Windows to UNIX variants, has a fundamental
> flaw in the way privileges and permissions are handled - the concept of
> superuser/administrator. He argued instead that OSes should be redesigned
> to implement the principle of least privilege from the ground up, down to
> the architecture they run on. OpenSSH's PrivSep (now making its way into
> other daemons in the OpenBSD tree) is a step in the right direction.

Capability-based systems like EROS are a way of addressing this issue.
Have a look at http://www.eros-os.org/
If you only read one article, pick this summary from IEEE Software
magazine: http://www.eros-os.org/papers/IEEE-Software-Jan-2002.pdf

The slammer worm made its way into some very unexpected places. It seems 
that in many organizations, once the UDP packet made its way to one MS-SQL 
server through one hole, it then acquired all the privileges of the IP 
address that supposedly belonged to a database server. Since traffic from 
the database server was considered to be trustworthy, it was able to 
easily reach and infect many more internal MS-SQL servers that were on 
internal networks unconnected to the Internet. In other words, there were 
MS-SQL servers acting as Application Layer Gateways to transport the worm 
into protected networks. 

The random nature of the addresses chosen by the worm virtually guaranteed 
that every single network path in the world containing MS-SQL servers 
would be infected.
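A rough back-of-the-envelope supports that (the host count and per-host
packet rate below are assumptions for illustration, not measurements): once
the infected population is spraying random 32-bit addresses, the probability
that any given address goes unprobed collapses within minutes.

# Rough estimate: probability that a given address has been probed at
# least once after N uniformly random probes into a 2^32 address space.
# Host count and per-host packet rate are assumed, not measured.
import math

SPACE = 2.0 ** 32

def p_probed(total_probes):
    # 1 - (1 - 1/2^32)^N, computed in a numerically stable way
    return 1.0 - math.exp(total_probes * math.log1p(-1.0 / SPACE))

for hosts, pps, seconds in ((75000, 4000, 60), (75000, 4000, 600)):
    n = hosts * pps * seconds
    print("%d hosts x %d pps x %4ds -> P(probed) = %.3f"
          % (hosts, pps, seconds, p_probed(n)))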

--Michael Dillon




Re: What could have been done differently?

2003-01-28 Thread Valdis . Kletnieks
On Tue, 28 Jan 2003 19:10:52 EST, Eric Germann <[EMAIL PROTECTED]>  said:

> Sort of like the person who sued McD's when they dumped their own coffee in
> their lap because it was "too hot".  Somewhere in the equation, the
> sysadmin/enduser, whether Unix or Windows, has to take some responsibility.

Bad Example. Or at least it's a bad example for your point.  That particular
case has a *LOT* of similarities with the other big-M company we're discussing.
Cross out "hot coffee" and write in "buffer overflow" and see how it reads:

From http://lawandhelp.com/q298-2.htm

1:  For years, McDonald's had known they had a problem with the way they make
their coffee - that their coffee was served much hotter (at least 20 degrees
more so) than at other restaurants.

2:  McDonald's knew its coffee sometimes caused serious injuries - more than
700 incidents of scalding coffee burns in the past decade have been settled by
the Corporation - and yet they never so much as consulted a burn expert
regarding the issue.

3:  The woman involved in this infamous case suffered very serious injuries -
third degree burns on her groin, thighs and buttocks that required skin grafts
and a seven-day hospital stay.

4:  The woman, an 81-year old former department store clerk who had never
before filed suit against anyone, said she wouldn't have brought the lawsuit
against McDonald's had the Corporation not dismissed her request for
compensation for medical bills.

5:  A McDonald's quality assurance manager testified in the case that the
Corporation was aware of the risk of serving dangerously hot coffee and had no
plans to either turn down the heat or to post warnings about the possibility of
severe burns, even though most customers wouldn't think it was possible.

6:  After careful deliberation, the jury found McDonald's was liable because
the facts were overwhelmingly against the company. When it came to the punitive
damages, the jury found that McDonald's had engaged in willful, reckless,
malicious, or wanton conduct, and rendered a punitive damage award of 2.7
million dollars. (That is the equivalent of just two days of coffee sales;
McDonald's Corporation generates revenues in excess of 1.3 million dollars
daily from the sale of its coffee, selling 1 billion cups each year.)

7:  On appeal, a judge lowered the award to $480,000, a fact not widely
publicized in the media.

8:  A report in Liability Week, September 29, 1997, indicated that Kathleen
Gilliam, 73, suffered first degree burns when a cup of coffee spilled onto her
lap. Reports also indicate that McDonald's consistently keeps its coffee at 185
degrees, still approximately 20 degrees hotter than at other restaurants. Third
degree burns occur at this temperature in just two to seven seconds, requiring
skin grafting, debridement and whirlpool treatments that cost tens of thousands
of dollars and result in permanent disfigurement, extreme pain and disability
to the victims for many months, and in some cases, years.






Re: What could have been done differently?

2003-01-28 Thread Brian Wallingford

On Tue, 28 Jan 2003, Steven M. Bellovin wrote:

:They do have a lousy track record.  I'm convinced, though, that
:they're sincere about wanting to improve, and they're really trying
:very hard.  In fact, I hope that some other vendors follow their
:lead.  My big worry isn't the micro-issues like buffer overflows
:-- it's the meta-issue of an overall too-complex architecture.  I
:don't think they have a handle on that yet.

Excellent point.  I have been saying this since the dawn of Windows
3.x.  Obviously, software engineering for a project as large as an(y) OS
needs to be distributed.  MS has long been remiss in facilitating
(mandating?) coordination between project teams pre-market.  You're
absolutely correct that complexity is now the issue, and it could have
been mitigated early on.  (Who knows what?  Is "who" still
employed?  If not, where are "who's" notes?  Who knows if "who" shared
his notes with "what"?  Who's on third? ...)

Now, it's going to cost loads of $$ to get everyone on the same page (or
chapter), if that's even in the cards.  For MS, it's a game of picking the
right fiscal/social/political tradeoff.  It's extremely complex now, as
the project has taken on a life of its own.

Someone let the suits take control early on, and we all know the rest of
the story.

Any further discussion will likely be nothing more than educated
conjecture (as was the above).

cheers,
brian




Re: What could have been done differently?

2003-01-28 Thread Scott Francis
On Tue, Jan 28, 2003 at 09:00:48PM -0500, [EMAIL PROTECTED] said:
> In message <[EMAIL PROTECTED]>, Scott Francis writes:
> 
> >There's a difference between having the occasional bug in one's software
> >(Apache, OpenSSH) and having a track record of remotely exploitable
> >vulnerabilities in virtually EVERY revision of EVERY product one ships, on
> >the client-side, the server side and in the OS itself. Microsoft does not
> >care about security, regardless of what their latest marketing ploy may be.
> >If they did, they would not be releasing the same exact bugs in their
> >software year after year after year.
> 
> 
> They do have a lousy track record.  I'm convinced, though, that
> they're sincere about wanting to improve, and they're really trying
> very hard.  In fact, I hope that some other vendors follow their
> lead.  My big worry isn't the micro-issues like buffer overflows
> -- it's the meta-issue of an overall too-complex architecture.  I
> don't think they have a handle on that yet.

Quite true - complexity is inversely proportional to security (thanks, Mr.
Schneier). Unfortunately, it seems like the Net as a whole, including the
systems, software and protocols running on it, only gets more complex as time
goes by. How will we reconcile this growing complexity and our increasing
dependency on the global network with the ever-growing need for security and
reliability? They seem to be accelerating at the same rate.
-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
  GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui





Re: What could have been done differently?

2003-01-28 Thread Scott Francis
On Tue, Jan 28, 2003 at 08:14:17PM +0100, [EMAIL PROTECTED] said:
[snip]
> restrictive measures that operate with sufficient granularity. In Unix,
> traditionally this is done per-user. Regular users can do a few things,
> but the super-user can do everything. If a user must do something that
> regular users can't do, the user must obtain super-user privileges and
> then refrain from using these absolute privileges for anything else
> than the intended purpose. This doesn't work. If I want to run a web
> server, I should be able to give a specific piece of web serving
> software access to port 80, and not also to every last bit of memory or
> disk space.

Jeremiah Gowdy gave an excellent presentation at ToorCon 2001 on this very
topic - "Fundamental Flaws in Network Operating System Design", I think it
was called. I'm looking around to see if I can find a copy of the lecture,
but so far I'm having little luck. His main thesis was basically that every
OS in common use today, from Windows to UNIX variants, has a fundamental
flaw in the way privileges and permissions are handled - the concept of
superuser/administrator. He argued instead that OSes should be redesigned to
implement the principle of least privilege from the ground up, down to the
architecture they run on. OpenSSH's PrivSep (now making its way into other
daemons in the OpenBSD tree) is a step in the right direction.

I'm still looking for a copy of the presentation, but I was able to find a
slightly older rant he wrote that contains many of the same points:
http://www.bsdatwork.com/reviews.php?op=showcontent&id=2

Good reading, even if it's not very much practical help at this moment. :)

> Another thing that could help is have software ask permission from some 
> central authority before it gets to do dangerous things such as run 
> services on UDP port 1434. The central authority can then keep track of 
> what's going on and revoke permissions when it turns out the server 
> software is insecure. Essentially, we should firewall on software 
> versions as well as on traditional TCP/IP variables.

The problem there is the same as with windowsupdate - if one can spoof the
central authority, one instantly gains unrestricted access to not one, but
myriad computers. Now, if it were possible to implement this central
authority concept on a limited basis in a specific network area, I'd say that
deserved further consideration. So far, the closest thing I've seen to this
concept is the ssh administrative host model: adminhost:~root/.ssh/id_dsa.pub
is copied to every targethost:~root/.ssh/authorized_keys2, such that commands
can be performed network-wide from a single station. While I have used this
model with some success, it does face scalability issues in large
environments, and if your admin box is ever compromised ...
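For concreteness, that model usually boils down to something like the
following (a sketch of the pattern only; the hostnames are made up, and the
strictly serial loop is part of why it scales poorly):

# Sketch of the single-admin-station model: run one command on every
# target host over ssh, authenticating with the admin host's key.
# Hostnames are placeholders; a real list would come from inventory.
import subprocess

TARGETS = ["web1.example.net", "web2.example.net", "db1.example.net"]

def run_everywhere(command, targets=TARGETS, timeout=30):
    results = {}
    for host in targets:
        proc = subprocess.run(
            ["ssh", "-o", "BatchMode=yes", "root@" + host, command],
            capture_output=True, text=True, timeout=timeout)
        results[host] = (proc.returncode, proc.stdout.strip())
    return results

if __name__ == "__main__":
    for host, (rc, out) in run_everywhere("uname -sr").items():
        print("%-20s rc=%d  %s" % (host, rc, out))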

> And it seems parsing protocols is a very difficult thing to do right
> with today's tools. The SNMP fiasco of not long ago shows as much, as
> does the new worm. It would probably be a good thing if the IETF could
> build a good protocol parsing library so implementors don't have to do
> this "by hand" and skip over all that pesky bounds checking. Generating
> and parsing headers for a new protocol would then no longer require new
> code, but could be done by defining a template of some sort. The
[snip]

It's the trust issue, again - trust is required at some point in most
security models. Defining who you can trust, and to what degree, and how/why,
and knowing when to revoke that trust, is a problem that has been stumping
folks for quite a while now. I certainly don't claim to have an answer to
that question. :)
-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
  GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui





Re: What could have been done differently?

2003-01-28 Thread Mike Lewinski



On Tue, 28 Jan 2003, Andy Putnins wrote:

> This is therefore a request for all of those who possess this "clue" to
> write down their wisdom and share it with the rest of us

I can't tell you what clue is, but I know when I don't see it. In some
cases our clients have had Code Red, Nimda, and Sapphire hit the same
friggin machines.

To borrow from the exploding car analogy, if you're the highway dept. and
you notice that only *some* people's cars seem to explode, maybe you build
the equivalent of an HOV lane with concrete dividers, and funnel them all
into it, so at least they don't blow up the more conscientious
drivers/mechanics in the next lane over.

Providers who were negatively affected might want to look at their lists,
compare them with past incident lists, and schedule a maintenance window to
aggregate the repeat offenders' ports where feasible, to isolate the impact
of the next worm.

We've tried to share clue with clients via security announcements,
encouraging everyone to get on their vendors' security lists and follow
BUGTRAQ, and providing relevant signup URLs.

Mike







Re: What could have been done differently?

2003-01-28 Thread David Lesher


> Somewhere in the equation, the sysadmin/enduser, whether Unix
> or Windows, has to take some responsibility.

Hence I loved this:

http://www.nytimes.com/2003/01/28/technology/28SOFT.html

Worm Hits Microsoft, Which Ignored Own Advice
By JOHN SCHWARTZ 

Among the companies that found its computer system under attack
by a rogue program was Microsoft, which has been preaching
the gospel of secure computing.
.



-- 
A host is a host from coast to [EMAIL PROTECTED]
& no one will talk to a host that's close[v].(301) 56-LINUX
Unless the host (that isn't close).pob 1433
is busy, hung or dead20915-1433



Re: OT: Re: WANAL (Re: What could have been done differently?)

2003-01-28 Thread Scott Francis
On Tue, Jan 28, 2003 at 08:53:59PM +0200, [EMAIL PROTECTED] said:
[snip]
> Hi Paul,
> 
>  What do you think of OpenBSD still installing BIND4 as part of the
> default base system, and recommending it as secure in the OpenBSD FAQ?
> (See Section 6.8.3 in  )

OpenBSD ships a highly-audited, chrooted version of BIND4 that bears little
resemblance to the original code (I'm sure Paul can correct me here if I'm
off-base). The reasons for the team's decision are well-documented on various
lists and FAQs. Given the choices at hand (use the exhaustively audited,
chrooted BIND4 already in production; go with a newer BIND version that
hasn't been through the wringer yet; write their own dns daemon; use tinydns
(licensing issues); use some other less well-known dns software), I think
they made the right one. I'm sure they'll move to a newer version when
somebody on the team gets a chance to give it a thorough code audit, and run
it through sufficient testing prior to release.
-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
  GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui





Re: What could have been done differently?

2003-01-28 Thread Steven M. Bellovin

In message <[EMAIL PROTECTED]>, Scott Francis writes:
>

>There's a difference between having the occasional bug in one's software
>(Apache, OpenSSH) and having a track record of remotely exploitable
>vulnerabilities in virtually EVERY revision of EVERY product one ships, on
>the client-side, the server side and in the OS itself. Microsoft does not
>care about security, regardless of what their latest marketing ploy may be.
>If they did, they would not be releasing the same exact bugs in their
>software year after year after year.


They do have a lousy track record.  I'm convinced, though, that
they're sincere about wanting to improve, and they're really trying
very hard.  In fact, I hope that some other vendors follow their
lead.  My big worry isn't the micro-issues like buffer overflows
-- it's the meta-issue of an overall too-complex architecture.  I
don't think they have a handle on that yet.



--Steve Bellovin, http://www.research.att.com/~smb (me)
http://www.wilyhacker.com (2nd edition of "Firewalls" book)





Re: What could have been done differently?

2003-01-28 Thread Scott Francis
On Tue, Jan 28, 2003 at 11:22:13AM -0500, [EMAIL PROTECTED] said:
[snip]
> That is, I think there is a big difference between a company the
> size of Microsoft saying "we've known about this problem for 6
> months but didn't consider it serious so we didn't do anything
> about it", and an open source developer saying "I've known about
> it for 6 months, but it's a hard problem to solve, I work on this
> in my spare time, and my users know that."
> 
> Just like I expect a Ford to pass federal government safety tests,
> to have been put through a battery of product tests by ford, etc
> and be generally reliable and safe; but when I go to my local custom
> shop and have them build me a low volume or one off street rod, or
> chopper I cannot reasonably expect the same.
> 
> The responsibility is the sum total of the number of product units
> out in the market, the risk to the end consumer, the companies
> ability to foresee the risk, and the steps the company was able to
> reasonably take to mitigate the risk.

*applause*

Very well stated. I've been trying for some time now to express my thoughts
on this subject, and failing - you just expressed _exactly_ what I've been
trying to say.

> > use for anything other than nailing stuff together.  Likewise, MS told
> > people six months ago to fix the hole.  "Lack of planning on your part does
> 
> It is for this very reason I suspect no one could collect on this
> specific problem.  Microsoft, from all I can tell, acted responsibly
> in this case.  Sean asked for general ways to solve this type of
> problem.  I gave what I thought was the best solution in general.
> It doesn't apply very directly to the specific events of the last
> few days.

Yes, in this particular case Microsoft did The Right Thing. It's not their
fault (this time) that admins failed to apply patches.

Of course, when one has a handful of new patches every _week_ for all manner
of software from MS, ranging from browsers to mail clients to office software
to OS holes to SMTP and HTTP daemons to databases ... well, one can
understand why the admins might have missed this patch. It doesn't remove
responsibility, but it does make the lack of action understandable. One could
easily justify a full-time position, in any medium enterprise that runs MS
gear, just to apply patches and stay on top of security issues for MS
software.

Microsoft is not alone in this - they just happen to be the poster child, and
with the market share they have, if they don't lead the way in making
security a priority, I can't see anybody else in the commercial software biz
taking it seriously.

The problem was not this particular software flaw. The problem here is the
track record, and the attitude, of MANY large software vendors with regards
to security. It just doesn't matter to them, and that will not change until
they have a reason to care about it.
-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
  GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui





Re: What could have been done differently?

2003-01-28 Thread Scott Francis
On Tue, Jan 28, 2003 at 07:10:52PM -0500, [EMAIL PROTECTED] said:
[snip]
> As has been said, no one writes perfect software.  And again, sometime, the
> user has to share some responsibility.  Maybe if the users get burned
> enough, the problem will get solved.  Either they will get fired, the
> software will change to another platform, or they'll install the patches.
> People only change behaviors through pain, either mental or physical.

There's a difference between having the occasional bug in one's software
(Apache, OpenSSH) and having a track record of remotely exploitable
vulnerabilities in virtually EVERY revision of EVERY product one ships, on
the client-side, the server side and in the OS itself. Microsoft does not
care about security, regardless of what their latest marketing ploy may be.
If they did, they would not be releasing the same exact bugs in their
software year after year after year.


-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
  GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui





RE: What could have been done differently?

2003-01-28 Thread Eric Germann

XP has autoupdate notifications that nag you.  They could make it automatic,
but then everyone would sue them if it mucked up their system.

And, MS has their HFCHECK program which checks which hotfixes should be
installed.  Again, not automatic because they would like the USER to sign
off on installing it.

On the Open Source side, you sort of have that when you build from source.
Maybe Apache should build a util to routinely go out and scan their source
and all the myriad add-on modules and build a new version when one of them
has a fix, but we leave that to the sysadmin.  Why?  Because the
permutations are too many.  Which is why we have Windows.  To paraphrase a
phone company line I heard in a sales meeting when reaming them, "we may
suck, but we suck less ...".  It ain't the best, but for the most part, it
does what the user wants and is relatively consistent across a number of
machines.  The user learns at home and can operate at work.  No retraining.

Sort of like the person who sued McD's when they dumped their own coffee in
their lap because it was "too hot".  Somewhere in the equation, the
sysadmin/enduser, whether Unix or Windows, has to take some responsibility.

To turn the argument around, people don't pay for IIS either, but everyone
would love to sue MS for its vulnerabilities (e.g. Code Red/Nimda, etc).

As has been said, no one writes perfect software.  And again, sometime, the
user has to share some responsibility.  Maybe if the users get burned
enough, the problem will get solved.  Either they will get fired, the
software will change to another platform, or they'll install the patches.
People only change behaviors through pain, either mental or physical.

Eric


> -Original Message-
> From: Jack Bates [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, January 28, 2003 10:36 AM
> To: [EMAIL PROTECTED]; Leo Bicknell; [EMAIL PROTECTED]
> Cc: Eric Germann
> Subject: Re: What could have been done differently?
>
>
> From: "Eric Germann"
>
> > Not to sound too pro-MS, but if they are going to sue, they should be
> > able to sue ALL software makers.  And what does that do to open source?
> > Apache, MySQL, OpenSSH, etc have all had their problems.  Should we sue
> > the nail gun vendor because some moron shoots himself in the head with
> > it?
>
> With all the resources at their disposal, is MS doing enough to inform the
> customers of new fixes? Are the fixes and latest security patches in an
> easy to find location that any idiot admin can spot? Have they done due
> diligence in ensuring that proper notification is done? I ask because it
> appears they didn't tell part of their own company that a patch needed to
> be applied. If I want the latest info on Apache, I hit the main website
> and the first thing I see is a list of security issues and resolutions.
> Navigating MS's website isn't quite so simplistic. Liability isn't
> necessarily in the bug but in the education and notification.
>
> Jack Bates
> BrightNet Oklahoma
>





Re: What could have been done differently?

2003-01-28 Thread Scott Francis
On Tue, Jan 28, 2003 at 03:10:18AM -0500, [EMAIL PROTECTED] said:
[snip]
> Many different companies were hit hard by the Slammer worm, some with
> better than average reputations for security awareness.  They bought the
> finest firewalls, they had two-factor biometric locks on their data
> centers, they installed anti-virus software, they paid for SAS70
> audits by the premier auditors, they hired the best managed security
> consulting firms.  Yet, they still were hit.
> 
> It's not as simple as "don't use Microsoft", because worms have hit other
> popular platforms too.

True. But few platforms have as dismal a record in this regard as MS. Whether
that's due to number of bugs or market penetration is a matter for debate.
Personally, I think it's clear that the focus, from MS and many other
vendors, is on time-to-market and feature creep. Security is an afterthought,
at best (regardless of "Trustworthy Computing", which is looking to be just
another marketing initiative). The first step towards good security is
choosing vendors/software with a reputation for caring about security. I
realize that for many of us, this is not an option at this stage of the game.
And in some arenas, there just aren't any good choices - the best you can do
is to choose the lesser of multiple evils. Which leads me to the next point:

> Are there practical answers that actually work in the real world with
> real users and real business needs?

I think a good place to start is to have at least one person, if not more,
whose job description includes checking errata/patch lists daily for the
software in use on the network. This can be semi-automated by just
subscribing to the right mailing lists. Now, deciding whether or not a patch
is worth applying is another story, but there's no excuse for being ignorant
of published security updates for software on one's network. Yes, it's a
hassle wading through the voluminous cross-site scripting posts on BUGTRAQ,
but it's worth it when you do occasionally get that vital bit of information.
Sometimes vendors aren't as quick to release bug information, much less
patches, as forums like BUGTRAQ/VulnWatch/etc.

Stay on top of security releases, and patch anything that is a security
issue. I realize this is problematic for larger networks, in which case I
would add, start with the most critical machines and work your way down. If
this requires downtime, well, better to spend a few hours of rotating
downtime to patch holes in your machines than to end up compromised, or
contributing to the kind of chaos we saw this last weekend.

Simple answer, practical for some folks, maybe less so for others. I know
I've been guilty of not following my own advice in this area before, but that
doesn't make it any less pertinent.
-- 
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
  GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui





Re: OT: Re: WANAL (Re: What could have been done differently?)

2003-01-28 Thread Mike Lewinski

On 1/28/03 11:57 AM, "Paul Vixie" <[EMAIL PROTECTED]> wrote:

> 
>>  What do you think of OpenBSD still installing BIND4 as part of the
>> default base system and  recommended as secure by the OpenBSD FAQ ?
>> (See Section 6.8.3 in  )
> 
> i think that bind4 was relatively easy for them to do a format string
> audit on, and that bind9 was comparatively huge, and that their caution
> is justified based on bind4/bind8's record in CERT advisories, and that
> for feature level reasons they will move to bind9 as soon as they can
> complete a security audit on the code.  (although in this case ISC and
> others have already completed such an audit, another pass never hurts.)


It is my understanding that this process has been completed, and BIND9
should ship as the default OpenBSD named in the 3.3 release:

http://deadly.org/article.php3?sid=20030121022208&mode=flat

We've been running BIND9 from the ports tree for over two years now and are
*very* happy with performance/stability.

Mike




Re: What could have been done differently?

2003-01-28 Thread Iljitsch van Beijnum

Sean Donelan wrote:

> Many different companies were hit hard by the Slammer worm, some with
> better than average reputations for security awareness.  They bought the
> finest firewalls, they had two-factor biometric locks on their data
> centers, they installed anti-virus software, they paid for SAS70
> audits by the premier auditors, they hired the best managed security
> consulting firms.  Yet, they still were hit.
> 
> It's not as simple as "don't use Microsoft", because worms have hit other
> popular platforms too.


As a former boss of mine was fond of saying when someone made a stupid 
mistake: "It can happen to anyone. It just happens more often to some 
people than others."

> Are there practical answers that actually work in the real world with
> real users and real business needs?


As this is still a network operators' forum, let's get this out of the 
way: any time you put a 10 Mbps Ethernet port in a box, expect that it 
has to deal with 14 kpps at some point. 100 Mbps -> 148 kpps, 1000 Mbps 
-> 1488 kpps. And each packet is a new flow. There are still routers 
being sold that have the interfaces, but can't handle the maximum 
traffic. Unfortunately, router vendors like to lure customers to boxes 
that can forward these amounts of traffic at wire speed rather than 
implement features in their lower-end products that would allow a box 
to drop the excess traffic in a reasonable way.
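(Those figures come straight from minimum-size Ethernet frames; a quick
check of the arithmetic, my own sketch rather than anything from a
datasheet:)

# Wire-rate packets per second for minimum-size Ethernet frames:
# 64-byte frame + 8-byte preamble + 12-byte inter-frame gap = 84 bytes,
# i.e. 672 bits on the wire per packet.
BITS_PER_MIN_FRAME = (64 + 8 + 12) * 8

for mbps in (10, 100, 1000):
    pps = mbps * 1000000.0 / BITS_PER_MIN_FRAME
    print("%5d Mbps -> %9.0f pps" % (mbps, pps))
# 10 Mbps -> ~14,880 pps, 100 Mbps -> ~148,810 pps, 1000 Mbps -> ~1,488,095
# pps, i.e. the 14/148/1488 kpps figures quoted above.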

But then there is the real source of the problem. Software can't be 
trusted. It doesn't mean anything that 100 lines of code are 
correct; if one line is incorrect, something really bad can happen. 
Since we obviously can't make software do what we want it to do, we 
should focus on making it not do what we don't want it to do. This 
means every piece of software must be encapsulated inside a layer of 
restrictive measures that operate with sufficient granularity. In Unix, 
traditionally this is done per-user. Regular users can do a few things, 
but the super-user can do everything. If a user must do something that 
regular users can't do, the user must obtain super-user privileges and 
then refrain from using these absolute privileges for anything else 
than the intended purpose. This doesn't work. If I want to run a web 
server, I should be able to give a specific piece of web serving 
software access to port 80, and not also to every last bit of memory or 
disk space.
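The closest the traditional model comes is the "bind the privileged port as
root, then drop to an unprivileged user" dance; a minimal sketch of that
pattern follows (the user name is an assumption), and it still leaves the
process with far more ambient authority than "port 80 and nothing else".

# Sketch of the usual compromise: acquire the one privileged resource
# (port 80) as root, then permanently drop to an unprivileged user.
# Note this is still much coarser than real least privilege: the process
# keeps full access to that user's files, memory and network.
import os
import pwd
import socket

UNPRIVILEGED_USER = "www"   # assumption: whatever account you use

def bind_and_drop(port=80, user=UNPRIVILEGED_USER):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", port))        # needs root (or a capability) for ports < 1024
    s.listen(16)

    pw = pwd.getpwnam(user)
    os.setgroups([])          # shed supplementary groups while still root
    os.setgid(pw.pw_gid)      # group before user, or setgid will fail
    os.setuid(pw.pw_uid)      # no way back after this
    return s

if __name__ == "__main__":
    sock = bind_and_drop()
    print("listening on :80 as uid", os.getuid())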

Another thing that could help is to have software ask permission from some 
central authority before it gets to do dangerous things such as run 
services on UDP port 1434. The central authority can then keep track of 
what's going on and revoke permissions when it turns out the server 
software is insecure. Essentially, we should firewall on software 
versions as well as on traditional TCP/IP variables.

And it seems parsing protocols is a very difficult thing to do right 
with today's tools. The SNMP fiasco of not long ago shows as much, as 
does the new worm. It would probably be a good thing if the IETF could 
build a good protocol parsing library so implementors don't have to do 
this "by hand" and skip over all that pesky bounds checking. Generating 
and parsing headers for a new protocol would then no longer require new 
code, but could be done by defining a template of some sort. The 
implementors can then focus on the functionality rather than which bit 
goes where. Obviously there would be a performance impact, but the same 
goes for coding in higher languages than assembly. Moore's law and 
optimizers are your friends.



RE: What could have been done differently?

2003-01-28 Thread Vadim Antonov


On Tue, 28 Jan 2003, Eric Germann wrote:

> 
> Not to sound too pro-MS, but if they are going to sue, they should be able to
> sue ALL software makers.  And what does that do to open source?

A law can be crafted in such a way as to create a distinction between
selling for profit (and assuming liability) and giving for free as-is. In
fact, you don't have Goodwill sign papers to the effect that it won't
sue you if they decide later that you've brought them junk - because you know
they wouldn't win in court. However, that does not protect you if you bring
them a bomb disguised as a valuable.

The reason for this is: if someone sells you stuff, and it turns out not
to be up to your reasonable expectations, you suffered a demonstrable loss
because the vendor has misled you (_not_ because the stuff is bad).  I.e. the
amount of that loss is the price you paid, and, therefore, this is the
vendor's direct liability.

When someone gives you something for free, his direct liability is,
correspondingly, zero.

So, what you want is a law permitting direct liability (i.e. a "lemon
law", like the ones regulating the sale of cars or houses) but setting much
higher standards (i.e. willfully deceptive advertisement, maliciously
dangerous software, etc.) for suing for punitive damages.  Note that in
class actions it is often much easier to prove the malicious intent of a
defendant in cases concerning deceptive advertisement - it is one thing
when someone gets cold feet and claims he's been misled, and quite another
when you have thousands of independent complaints.  Because there's
nothing to gain suing non-profits (unless they're churches :) the
reluctance of class action lawyers to work for free would protect
non-profits from that kind of abuse.

A lemon law for software may actually be a boost for the proprietary
software, as people will realize that the vendors have incentive to
deliver on promises.

--vadim




Re: What could have been done differently?

2003-01-28 Thread Alex Bligh



--On 28 January 2003 10:42 -0600 Andy Putnins <[EMAIL PROTECTED]> wrote:


> How does one find a "clueful" person to hire? Can you recognize one by
> their hat or badge of office? Is there a guild to which they all belong?
> If one wants to get a "clue", how does one find a master to join as an
> apprentice?


In the long term one might presume market forces would provide better
answers than speculation & ...


> Society requires that some kinds of engineers be licensed


... economic theory suggests that licensing etc. is only a good idea when
the externalities of failure cases exceed the benefits of licensing by more
than the costs of its imposition (including barriers to entry etc.).

I do not think we have come to the point where this has been
demonstrated yet. Note licensing does not have a 100% success
record in protecting against failure (viz. Andersen).


> This is therefore a request for all of those who possess this "clue" to
> write down their wisdom and share it with the rest of us, so we can


This industry has been pretty good at that, despite recent economic
circumstances militating against it. No argument there.

Alex Bligh




Re: OT: Re: WANAL (Re: What could have been done differently?)

2003-01-28 Thread Paul Vixie

>  What do you think of OpenBSD still installing BIND4 as part of the
> default base system, and recommending it as secure in the OpenBSD FAQ?
> (See Section 6.8.3 in  )

i think that bind4 was relatively easy for them to do a format string
audit on, and that bind9 was comparatively huge, and that their caution
is justified based on bind4/bind8's record in CERT advisories, and that
for feature level reasons they will move to bind9 as soon as they can
complete a security audit on the code.  (although in this case ISC and
others have already completed such an audit, another pass never hurts.)



OT: Re: WANAL (Re: What could have been done differently?)

2003-01-28 Thread Rafi Sadowsky

## On 2003-01-28 17:49 - Paul Vixie typed:

PV> 
PV> In any case, all of these makers (including Microsoft) seem to make a very
PV> good faith effort to get patches out when vulnerabilities are uncovered.  I
PV> wish we could have put time bombs in older BINDs to force folks to upgrade,
PV> but that brings more problems than it takes away, so a lot of folks run old
PV> broken software even though our web page tells them not to.
PV> 

Hi Paul,

 What do you think of OpenBSD still installing BIND4 as part of the
default base system, and recommending it as secure in the OpenBSD FAQ?
(See Section 6.8.3 in  )

-- 
Thanks
Rafi




WANAL (Re: What could have been done differently?)

2003-01-28 Thread Paul Vixie

[EMAIL PROTECTED] ("Eric Germann") writes:

> Not to sound too pro-MS, but if they are going to sue, they should be able
> to sue ALL software makers.  And what does that do to open source?
> Apache, MySQL, OpenSSH, etc have all had their problems.  ...

Don't forget BIND, we've had our problems as well.  Our license says:

/*
 * [Portions] Copyright (c) - by Internet Software Consortium.
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND INTERNET SOFTWARE CONSORTIUM DISCLAIMS
 * ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES
 * OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL INTERNET SOFTWARE
 * CONSORTIUM BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL
 * DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
 * PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
 * ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS
 * SOFTWARE.
 */

I believe that Apache and the others you mention do the same.  Disclaiming
fitness for use, and requiring that the maker be held harmless, only works
when the software is fee-free.  Microsoft can get you to click "Accept" as
often as they want and keep records of the fact that you clicked it, but in
every state I know about, fitness for use is implied by the presence of fee
and cannot be disclaimed even by explicit agreement from the end user.  B2B
considerations are different -- I'm talking about consumer rights not overall
business liability.

In any case, all of these makers (including Microsoft) seem to make a very
good faith effort to get patches out when vulnerabilities are uncovered.  I
wish we could have put time bombs in older BINDs to force folks to upgrade,
but that brings more problems than it takes away, so a lot of folks run old
broken software even though our web page tells them not to.

Note: IANAL.
-- 
Paul Vixie



RE: What could have been done differently?

2003-01-28 Thread Ray Burkholder

The SANS Institute [[EMAIL PROTECTED]] www.sans.org is a well-respected
collection of individuals who have provided this 'pool' of knowledge and
regularly disseminate it to inquiring minds.

Ray Burkholder


> -Original Message-
> From: Andy Putnins [mailto:[EMAIL PROTECTED]] 
> Sent: January 28, 2003 12:43
> To: Alex Bligh
> Cc: Sean Donelan; [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: What could have been done differently? 
> 

> This is therefore a request for all of those who possess this 
> "clue" to 
> write down their wisdom and share it with the rest of us, so we can 
> address what clearly is a need for discipline in the design 
> of networks 
> and network security, since computer networks are an 
> infrastructure upon 
> which people are becoming dependent, even to the point of 
> their personal 
> safety.
> 
>   - Andy
> 
> 



Re: What could have been done differently?

2003-01-28 Thread Andy Putnins

On Tue, 28 Jan 2003 10:42:05 -  Alex Bligh wrote:
 > 
 > Sean,
 > 
 > --On 28 January 2003 03:10 -0500 Sean Donelan <[EMAIL PROTECTED]> wrote:
 > 
 > > Are there practical answers that actually work in the real world with
 > > real users and real business needs?
 > 
 > 1. Employ clueful staff
 > 2. Make their operating environment (procedures etc.) best able
 >to exploit their clue
 > 
 > In the general case this is a people issue. Sure there are piles of
 > whizzbang technical solutions that address individual problems (some of
 > which your clueful staff might even think of themselves), but in the final
 > analysis, having people with clue architect, develop and operate your
 > systems is far more important than anything CapEx will buy you alone.
 > 
 > Note it is not difficult to envisage how this attack could have been
 > far far worse with a few code changes...
 > 
 > Alex Bligh

How does one find a "clueful" person to hire? Can you recognize one by their
hat or badge of office? Is there a guild to which they all belong? If one 
wants to get a "clue", how does one find a master to join as an apprentice?

I would argue that sooner or later network security must become an 
engineering discipline whose practitioners can design a security system 
that cost-effectively meets the unique needs of each client.

Engineering requires that well-accepted ("best") practices be documented 
and adopted by all practitioners. Over time, there emerges a body of such 
best practices which provides a foundation upon which new technologies and 
practices are adopted as technical consensus emerges among the practitioners. 
Part of the training of an engineer involves learning the existing body of 
best practices. Engineering is also quantitative, which means that design
incorporates measurements and calculations so that the solution is good
enough to do the job required, but no more, albeit with commonly accepted
margins of safety.

Society requires that some kinds of engineers be licensed because they 
are responsible for the safety of others, such as engineers who design 
buildings, bridges, roads, nuclear power plants, sanitation, etc. However, 
some are not (yet?) required to be licensed, like engineers who design cars, 
trucks, buses, ships, airplanes, factory process control systems and the 
computer networks that monitor and control them.

This is therefore a request for all of those who possess this "clue" to 
write down their wisdom and share it with the rest of us, so we can 
address what clearly is a need for discipline in the design of networks 
and network security, since computer networks are an infrastructure upon 
which people are becoming dependent, even to the point of their personal 
safety.

- Andy




Re: What could have been done differently?

2003-01-28 Thread Leo Bicknell
In a message written on Tue, Jan 28, 2003 at 10:23:09AM -0500, Eric Germann wrote:
> Not to sound too pro-MS, but if they are going to sue, they should be able to
> sue ALL software makers.  And what does that do to open source?  Apache,
> MySQL, OpenSSH, etc have all had their problems.  Should we sue the nail gun

IANAL, but I think this is all fairly well worked out, from a legal
sense.  Big companies are held to a higher standard.  Sadly it's
often because lawyers pursue the dollars, but it's also because
they have the resources to test, and they have a larger public
responsibility to do that work.

That is, I think there is a big difference between a company the
size of Microsoft saying "we've known about this problem for 6
months but didn't consider it serious so we didn't do anything
about it", and an open source developer saying "I've known about
it for 6 months, but it's a hard problem to solve, I work on this
in my spare time, and my users know that."

Just as I expect a Ford to pass federal government safety tests,
to have been put through a battery of product tests by Ford, etc.,
and to be generally reliable and safe; when I go to my local custom
shop and have them build me a low-volume or one-off street rod or
chopper, I cannot reasonably expect the same.

The responsibility is the sum total of the number of product units
out in the market, the risk to the end consumer, the company's
ability to foresee the risk, and the steps the company was able to
reasonably take to mitigate the risk.

So, if someone can make a class action lawsuit against OpenSSH, go
right ahead.  In all likelihood though there isn't enough money in
it to get the lawyers interested, and even if there was it would
be hard to prove that "a couple of guys" should have exhaustively
tested the product like a big company should have done.

It was once said, "there is risk in hiring someone to do risk analysis."

> use it for anything other than nailing stuff together.  Likewise, MS told
> people six months ago to fix the hole.  "Lack of planning on your part does

It is for this very reason I suspect no one could collect on this
specific problem.  Microsoft, from all I can tell, acted responsibly
in this case.  Sean asked for general ways to solve this type of
problem.  I gave what I thought was the best solution in general.
It doesn't apply very directly to the specific events of the last
few days.

-- 
   Leo Bicknell - [EMAIL PROTECTED] - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
Read TMBG List - [EMAIL PROTECTED], www.tmbg.org



msg08607/pgp0.pgp
Description: PGP signature


RE: What could have been done differently?

2003-01-28 Thread Drew Weaver

Would it be that hard to have Windows Update check the version of SQL
Server? It's sad, but I know a lot of MS admins only use Windows Update to
check for updates, because a while ago Microsoft pushed it as the premier
method by which to update your systems.

I'm just saying that if they included all fixes in one spot, instead of halfway
automating it and halfway making it cryptically difficult, it would benefit
everyone.

-Drew



-Original Message-
From: Jack Bates [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, January 28, 2003 10:36 AM
To: [EMAIL PROTECTED]; Leo Bicknell; [EMAIL PROTECTED]
Cc: Eric Germann
Subject: Re: What could have been done differently?



From: "Eric Germann"

>
> Not to sound too pro-MS, but if they are going to sue, they should be 
> able
to
> sue ALL software makers.  And what does that do to open source?  
> Apache, MySQL, OpenSSH, etc have all had their problems.  Should we 
> sue the nail
gun
> vendor because some moron shoots himself in the head with it?

With all the resources at their disposal, is MS doing enough to inform the
customers of new fixes? Are the fixes and latest security patches in an easy
to find location that any idiot admin can spot? Have they done due diligence
in ensuring that proper notification is done? I ask because it appears they
didn't tell part of their own company that a patch needed to be applied. If
I want the latest info on Apache, I hit the main website and the first thing
I see is a list of security issues and resolutions. Navigating MS's website
isn't quite so simple. Liability isn't necessarily in the bug but in the
education and notification.

Jack Bates
BrightNet Oklahoma



Re: What could have been done differently?

2003-01-28 Thread Ted Fischer

At 11:13 AM 1/28/03 -0200, Rubens Kuhl Jr. et al postulated:


| Are there practical answers that actually work in the real world with
| real users and real business needs?

Yes, the simple ones that are known for decades:
- Minimum-privilege networks (access is blocked by default, permitted to
known and required traffic)
- Hardened systems (only needed components are left on the servers)
- Properly coded applications
- Trained personnel


   I would just add, as has been mentioned by others (but bears repeating):

 - A commitment by management


There are no shortcuts.


   Agreed

Ted Fischer



Rubens Kuhl Jr.






Re: What could have been done differently?

2003-01-28 Thread Jack Bates

From: "Eric Germann"

>
> Not to sound too pro-MS, but if they are going to sue, they should be able
to
> sue ALL software makers.  And what does that do to open source?  Apache,
> MySQL, OpenSSH, etc have all had their problems.  Should we sue the nail
gun
> vendor because some moron shoots himself in the head with it?

With all the resources at their disposal, is MS doing enough to inform the
customers of new fixes? Are the fixes and latest security patches in an easy
to find location that any idiot admin can spot? Have they done due diligence
in ensuring that proper notification is done? I ask because it appears they
didn't tell part of their own company that a patch needed to be applied. If
I want the latest info on Apache, I hit the main website and the first thing
I see is a list of security issues and resolutions. Navigating MS's website
isn't quite so simple. Liability isn't necessarily in the bug but in the
education and notification.

Jack Bates
BrightNet Oklahoma




RE: What could have been done differently?

2003-01-28 Thread Eric Germann

Not to sound too pro-MS, but if they are going to sue, they should be able to
sue ALL software makers.  And what does that do to open source?  Apache,
MySQL, OpenSSH, etc have all had their problems.  Should we sue the nail gun
vendor because some moron shoots himself in the head with it?  No.  It was
never designed for flicking flies off his forehead.  And they said, don't
use it for anything other than nailing stuff together.  Likewise, MS told
people six months ago to fix the hole.  "Lack of planning on your part does
not constitute an emergency on my part" was once told to me by a wise man.
At some point, people have to take SOME responsibility for their
organization's deployment of IT assets and systems.  Microsoft is the
convenient target right now because they HAVE assets to take.  Who's going
to pony up when Apache gets sued and loses?  How do you sue Apache, or how
do you sue Perl, because, after all, it has bugs?  Just because you give it
away shouldn't isolate you from liability.

Eric



>
> * Companies need to hold each other responsible for bad software.
>   Ford is being sued right now because Crown Vic gas tanks blow
>   up.  Why isn't Microsoft being sued over buffer overflows?  We've
>   known about the buffer overflow problem now for what, 5 years?
>   The fact that new, recent software is coming out with buffer
>   overflows is bad enough, the fact that people are still buying
>   it, and also making the companies own up to their mistakes is
>   amazing.  I have to think there's billions of dollars out there
>   for class action lawyers.  Right now software companies, and in
>   particular Microsoft, can make dangerously unsafe products and
>   people buy them like crazy, and then don't even complain that
>   much when they break.
>
> --
>Leo Bicknell - [EMAIL PROTECTED] - CCIE 3440
> PGP keys at http://www.ufp.org/~bicknell/
> Read TMBG List - [EMAIL PROTECTED], www.tmbg.org
>





Re: What could have been done differently?

2003-01-28 Thread Leo Bicknell
In a message written on Tue, Jan 28, 2003 at 03:10:18AM -0500, Sean Donelan wrote:
> They bought the finest firewalls,

A firewall is a tool, not a solution.  Firewall companies advertise
much like Home Depot (Lowe's, etc.): "everything you need to build
a house."

While anyone with 3 brain cells realizes that going into Home Depot
and buying truck loads of building materials does not mean you have
a house, it's not clear to me that many of the decision makers in
companies understand that buying a spiffy firewall does not mean
you're secure.

Even those that do understand, often only go to the next step.
They hire someone to configure the firewall.  That's similar to
hiring the carpenter with your load of tools and building materials.
You're one step closer to the right outcome, but you still have no
plans.  A carpenter without plans isn't going to build something
very useful.

Very few companies get to the final step, hiring an architect.
Actually, the few that get here usually don't do that; they buy
some off-the-shelf plans (see below, managed security) and hope
it's good enough.  If you want something that really fits you have
to have the architect really understand your needs, and then design
something that fits.

> they had two-factor biometric locks on their data centers,

This is the part that never made sense to me.  Companies are
installing new physical security systems at an amazing pace.  I
know some colos that have had four new security systems in a year.
The thing that fascinates me is that unless someone is covering up
the numbers /people don't break into data centers/.

The common thief isn't too interested.  Too much security/video
already.  People notice when the stuff goes offline.  And most
importantly too hard to fence for the common man.  The thief really
interested in what's in the data center, the data, is going to take
the easiest vector, which until we fix other problems is going to
be the network.

I think far too many people spend money on new security systems
because they don't know what else to do, which may be a sign
that they aren't the people you want to trust with your network
data.

> they installed anti-virus software, 

Which is a completely different problem.  Putting the bio-hazard
in a secure setting where it can't infect anyone and developing an
antidote in case it does are two very different things.  One is
prevention, one is cure.

> they paid for SAS70 audits by the premier auditors,

Which means absolutely nothing.  Those audits are the equivalent
of walking into a doctor's office, making sure he has a working
stethoscope and a box of tongue depressors, and maybe, just maybe,
making the doctor use both to verify that he knows how to use
them.

While interesting, that says very little about whether the doctor
will cure you when you walk in with a disease, just as it doesn't
mean you will be immune when the network virus/worm/trojan comes.

> they hired the best managed security consulting firms.

This goes back to my first comment.  Managed security consulting
firms do good work, but what they can't do is specialized work.
To extend the house analogy they are like the spec architects who
make one "ok" plan and then sell it thousands of times to the people
who don't want to spend money on a custom architect.

It's better than nothing, and in fact for a number of firms it's
probably a really good fit.  What the larger and more complex firms
seem to fail to realize is that as your needs become more complex
you need to step up to the fully customized approach, which no matter
how hard these guys try to sell it to you they are unlikely to be
able to provide.  At some level you need someone on staff who
understands security, but, and here's the hard part, understands
all of your applications as well.

How many people have seen the firewall guy say something like "well
I opened up port 1234 for xyzsoft for the finance department.  I
have no idea what that program does or how it works, but their support
people told me I needed that port open".  Yeah.  That's security.
Your firewall admin doesn't need to know how to use the finance
software, but he'd better have an understanding of what talks to
what, what platforms it runs on, what is normal traffic and what
is abnormal traffic, and so on.

> Are there practical answers that actually work in the real world with
> real users and real business needs?

I think there are two fundamental problems:

* The people securing networks are very often underqualified
  for the task at hand.  If there is one place you need a "generalist"
  network/host understands-it-all type of person, it's in security
  -- but that's not where you find them.  Far too often "network"
  security people are crossovers from the physical security world,
  and while they understand security concepts I find much of the
  time they are lost at how to apply them to the network.

* Companies need to hold each other responsible for bad software.
  Ford is

Re: What could have been done differently?

2003-01-28 Thread Rubens Kuhl Jr.

| Many different companies were hit hard by the Slammer worm, some with
| better than average reputations for security awareness.  They bought
| the finest firewalls, they had two-factor biometric locks on their data
| centers, they installed anti-virus software, they paid for SAS70
| audits by the premier auditors, they hired the best managed security
| consulting firms.  Yet, they still were hit.

Because they hired people (staff or outsourced) that made them feel
comfortable, instead of getting the job done.

| It's not as simple as "don't use Microsoft", because worms have hit other
| popular platforms too.

But this worm required external access to an internal server (SQL Servers
are not front-end ones); even with a bad or no patch management system, this
simply wouldn't happen on a properly configured network. Whoever got
slammered has more problems than just this worm. Even with no firewall or
screening router, use of an RFC1918 private IP address on the SQL Server would
have prevented this worm attack.
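
As a rough sketch of the sort of sanity check that catches this (a
hypothetical audit helper, not taken from any product), one can walk the
list of back-end server addresses and flag anything that is not in RFC1918
space:

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

/* Return 1 if the dotted-quad address is in 10/8, 172.16/12 or 192.168/16. */
static int is_rfc1918(const char *dotted)
{
    struct in_addr a;
    uint32_t ip;

    if (inet_pton(AF_INET, dotted, &a) != 1)
        return 0;                                 /* not a valid IPv4 address */
    ip = ntohl(a.s_addr);

    return ((ip & 0xff000000UL) == 0x0a000000UL) ||   /* 10.0.0.0/8     */
           ((ip & 0xfff00000UL) == 0xac100000UL) ||   /* 172.16.0.0/12  */
           ((ip & 0xffff0000UL) == 0xc0a80000UL);     /* 192.168.0.0/16 */
}

int main(int argc, char **argv)
{
    int i;
    for (i = 1; i < argc; i++)          /* server addresses on the command line */
        printf("%-15s %s\n", argv[i],
               is_rfc1918(argv[i]) ? "private (RFC1918)" : "PUBLIC");
    return 0;
}

(Slammer spread over UDP port 1434, so a default-deny filter at the border
would have stopped it just as well; the point is that neither control is
exotic.)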

| Are there practical answers that actually work in the real world with
| real users and real business needs?

Yes, the simple ones that are known for decades:
- Minimum-privilege networks (access is blocked by default, permitted to
known and required traffic)
- Hardened systems (only needed components are left on the servers)
- Properly coded applications
- Trained personnel

There are no shortcuts.

Rubens Kuhl Jr.







Re: What could have been done differently?

2003-01-28 Thread Eliot Lear

Sean,

Ultimately, all mass-distributed software is vulnerable to software 
bugs.  Much as we all like to bash Microsoft, the same problem can occur, 
and has occurred, elsewhere through buffer overruns.

One thing that companies can do to mitigate a failure is to detect it 
faster, and stop the source.  Since you don't know what the failure will 
look like, the best you can do is determine what is ``nominal'' through 
profiling, and use IDSes to report to NOCs for considered action.
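
In its crudest form, "nominal" can be nothing more than a smoothed baseline
per counter with an alarm band around it; a toy sketch of that idea follows
(made-up per-minute flow counts, not a real IDS):

#include <math.h>
#include <stdio.h>

struct profile {
    double mean;    /* smoothed estimate of the nominal value     */
    double dev;     /* smoothed estimate of the typical deviation */
    int    primed;  /* has the baseline been seeded yet?          */
};

/* Update the baseline with one sample; return 1 if it looks anomalous. */
static int update_and_check(struct profile *p, double sample,
                            double alpha, double nsigma)
{
    double err = sample - p->mean;

    if (!p->primed) {                  /* first sample seeds the baseline */
        p->mean = sample;
        p->dev = sample / 4 + 1;
        p->primed = 1;
        return 0;
    }
    p->mean += alpha * err;
    p->dev  += alpha * (fabs(err) - p->dev);

    return fabs(sample - p->mean) > nsigma * p->dev;
}

int main(void)
{
    /* Pretend these are per-minute flow counts on one link. */
    double samples[] = { 100, 105, 98, 110, 102, 97, 104, 3500, 101 };
    struct profile p = { 0, 0, 0 };
    int i;

    for (i = 0; i < (int)(sizeof(samples) / sizeof(samples[0])); i++)
        if (update_and_check(&p, samples[i], 0.1, 5.0))
            printf("interval %d: %.0f flows -- outside nominal profile\n",
                   i, samples[i]);
    return 0;
}

Real deployments need per-port and per-direction baselines, seasonality, and
so on layered on top of this.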

There are two reasons companies don't want to do this:

1.  It's hard (and expensive).  Profiling nominal means installing IDSes 
everywhere in one's environment at a time when you think things are 
actually working, and assuming that any *other* behavior is to be 
reported.  Worse, network behavior is often cyclical, and you need to 
know how that cycle will impact what is nominal.  Indeed you can have a 
daily, weekly, monthly, quarterly, and annual cycle.  Add to this 
ongoing software deployment and you have something of a moving target.

2.  It doesn't solve all attacks.  Only attacks that break the profile 
will be captured.  Those are going to be those that use new or unusual 
ports, existing "bad" signatures, or excessive bandwidth.

On the other hand, in *some* environments, IDS and an active NOC may 
improve predictability by reducing time needed to diagnose the problem. 
 Who knows?  Perhaps some people did benefit through these methods. 
I'm very curious about netmatrix's view of the whole matter, as compared to 
comparable events.  NANOG presentation, Peter?

Eliot



Re: What could have been done differently?

2003-01-28 Thread E.B. Dreger

ED> Date: Tue, 28 Jan 2003 12:42:41 + (GMT)
ED> From: E.B. Dreger


ED> Sure, worm authors are to blame for their creations.
ED> Software developers are to blame for bugs.  Admins are to

s/Admins/Admins and their management/


Eddy
--
Brotsman & Dreger, Inc. - EverQuick Internet Division
Bandwidth, consulting, e-commerce, hosting, and network building
Phone: +1 (785) 865-5885 Lawrence and [inter]national
Phone: +1 (316) 794-8922 Wichita

~
Date: Mon, 21 May 2001 11:23:58 + (GMT)
From: A Trap <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: Please ignore this portion of my mail signature.

These last few lines are a trap for address-harvesting spambots.
Do NOT send mail to <[EMAIL PROTECTED]>, or you are likely to
be blocked.




Re: What could have been done differently?

2003-01-28 Thread E.B. Dreger

SD> Date: Tue, 28 Jan 2003 03:10:18 -0500 (EST)
SD> From: Sean Donelan


[ snip firewalls, audits, et cetera ]

As most people on this list hopefully know, security is a
process... not a product.  Tools are useless if they are not
applied properly.


SD> Are there practical answers that actually work in the real
SD> world with real users and real business needs?

It depends.  If "real business needs" means management ego gets
in the way of letting talented staff do their jobs, having to
form a committee to conduct a feasibility study re whether to
apply a one-hour patch that closes a critical hole, drooling
over paper certs... the answer is no.

Automobiles require periodic maintenance.  Household appliances
require repair from time to time.  People get sick and require
medicine.  Reality is that people need to deal with the need for
proper systems administration.

It might not be exciting or make people feel good, but it's
necessary.  Failure has consequences.  Inactivity is a vote cast
for "it's worth the risk".

Sure, worm authors are to blame for their creations.  Software
developers are to blame for bugs.  Admins are to blame for lack
of administration.  The question is who should take what share,
and absorb the pain when something like this occurs.


Eddy
--
Brotsman & Dreger, Inc. - EverQuick Internet Division
Bandwidth, consulting, e-commerce, hosting, and network building
Phone: +1 (785) 865-5885 Lawrence and [inter]national
Phone: +1 (316) 794-8922 Wichita

~
Date: Mon, 21 May 2001 11:23:58 + (GMT)
From: A Trap <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: Please ignore this portion of my mail signature.

These last few lines are a trap for address-harvesting spambots.
Do NOT send mail to <[EMAIL PROTECTED]>, or you are likely to
be blocked.




Re: What could have been done differently?

2003-01-28 Thread Alex Bligh

Sean,

--On 28 January 2003 03:10 -0500 Sean Donelan <[EMAIL PROTECTED]> wrote:


Are there practical answers that actually work in the real world with
real users and real business needs?


1. Employ clueful staff
2. Make their operating environment (procedures etc.) best able
  to exploit their clue

In the general case this is a people issue. Sure there are piles of
whizzbang technical solutions that address individual problems (some of
which your clueful staff might even think of themselves), but in the final
analysis, having people with clue architect, develop and operate your
systems is far more important than anything CapEx will buy you alone.

Note it is not difficult to envisage how this attack could have been
far far worse with a few code changes...

Alex Bligh




What could have been done differently?

2003-01-28 Thread Sean Donelan


On Tue, 28 Jan 2003, The New York Times wrote:
> A spokesman for Microsoft, Rick Miller, confirmed that a
> number of the company's machines had gone unpatched, and
> that Microsoft Network services, like many others on the
> Internet, experienced a significant slowdown. "We, like the
> rest of the industry, struggle to get 100 percent
> compliance with our patch management," he said.

Many different companies were hit hard by the Slammer worm, some with
better than average reputations for security awareness.  They bought
finest firewalls, they had two-factor biometric locks on their data
centers, they installed anti-virus software, they paid for SAS70
audits by the premier auditors, they hired the best managed security
consulting firms.  Yet, they still were hit.

It's not as simple as "don't use Microsoft", because worms have hit other
popular platforms too.

Are there practical answers that actually work in the real world with
real users and real business needs?