Re: Fed Bill Would Restrict Web Server Logs

2006-02-14 Thread David G. Andersen

On Tue, Feb 14, 2006 at 09:47:50AM -0500, Jon R. Kibler scribed:
  
  http://www.politechbot.com/docs/markey.data.deletion.bill.020806.pdf
  
  to delete information about visitors, including e-mail addresses, if the 
  data is no longer required for a legitimate business purpose.
  
 
 Original posting from Declan McCullagh's PoliTech mailing list. Thought
 NANOGers would be interested since, if this bill passes, it would impact
 almost all of us. Just imagine the impact on security of not being able
to log the IP address and referring page of all web server connections!

Call me weird, but I fail to see where the scary teeth lie in such
a bill.  First of all, it's phrased very abstractly and would hopefully
have its language clarified by the time it escapes a committee.  Second,
the bill is fairly clear about the meaning of personal information, and
it doesn't include things like IP addresses in its examples; the latter
would be a matter for a court to decide, and it's not clear cut at all:

  ... that allows a living person to be identified individually,
   including ... : first and last name, home or physical
   address, ... 

Third, it says nothing at all about restricting what you can log:

  An owner of an Internet website shall destroy, within
   a reasonable period of time, any data containing personal
   information if the information is no longer necessary for
   the purpose for which it was collected or any other legitimate
   business purpose.

If you need IP address logging to ensure the security of your website,
then that sounds like a pretty legitimate business practice.  The more
interesting question is how _long_ you need to keep the personal
information around for your legitimate business purposes.
A week?  A month?  A year?  Ultimately, it would probably boil down to
a dash of best practices and a pinch of CYA.  But there's nothing
in there to freak out about for day to day operations.  The worry
is more that you'd probably have to ensure that your logs get blasted or
sanitized according to a well-defined schedule.  Which, when you
think about it, might not be a bad thing at all.
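In practice, that well-defined schedule could be a cron job that rewrites aged logs. A minimal sketch (my own illustration, nothing from the bill itself): replace client IPs in old access-log lines with a short keyed hash, so the logs stay useful for debugging without identifying anyone. The retention window and key below are placeholders.

```python
import hashlib
import re

RETENTION_DAYS = 30          # the window a cron job would key off; pick your own
SECRET = b"rotate-me-often"  # placeholder key; rotate it along with the logs

IP_RE = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")

def sanitize_line(line):
    """Replace every IPv4 address on a log line with a short keyed digest,
    so aged logs stay useful for correlation without naming anyone."""
    def repl(m):
        digest = hashlib.sha256(SECRET + m.group(0).encode()).hexdigest()
        return "ip-" + digest[:12]
    return IP_RE.sub(repl, line)

# The address is replaced by a stable "ip-<digest>" token:
print(sanitize_line('192.0.2.7 - - [14/Feb/2006] "GET / HTTP/1.0" 200'))
```

Because the digest is deterministic (until you rotate the key), sanitized logs still let you group requests by client.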

  -Dave

-- 
Dave Andersen [EMAIL PROTECTED]
Assistant Professor   412.268.3064
Carnegie Mellon University    http://www.cs.cmu.edu/~dga


Re: multi homing pressure

2005-10-19 Thread David G. Andersen

On Wed, Oct 19, 2005 at 10:19:28PM +, Paul Vixie scribed:
 
 [EMAIL PROTECTED] (Jared Mauch) writes:
 
  it will be interesting to see if this has actual impact on
  ASN allocation rates globally.
 
 i don't think so.  multihoming without bgp isn't as hard as qualifying for
 PI space.  i think we'll finally see enterprise-sized multihoming NAT/proxy
 products.

If you can run Squid, you can multihome your web connections today. 
It's a little bit awkward to configure, but then again, so is
Squid.  People are welcome to poke at, fold, spindle, or mutilate:

http://nms.lcs.mit.edu/ron/ronweb/#code

(Part of my thesis work, Monet is a modification to Squid that causes
it to try to open N TCP connections to a Web server that it wants
to talk to.  It uses the first SYN ACK to return, and closes the
other connections to be a nice neighbor.  It's shockingly effective
at improving availability to Web sites that are themselves multihomed
or otherwise good.  Warning:  Often still leads to annoyance if you find
yourself able to browse the web but not do anything else.  We do have
a NAT version of this that works with arbitrary protocols.  If people
are interested, I'll try to convince my former student to dig up the
code and make it a bit prettier.)
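The core trick is easy to sketch outside of Squid. The following is a rough Python illustration of the idea, not the Monet code: fire off nonblocking connects to several addresses for the same server and keep whichever handshake completes first, closing the rest.

```python
import selectors
import socket

def race_connect(addrs, timeout=5.0):
    """Start a TCP handshake to every address at once and return the first
    socket whose connect() completes, closing the losers.  A toy sketch of
    Monet's approach -- the real code lives inside Squid."""
    sel = selectors.DefaultSelector()
    for addr in addrs:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setblocking(False)
        try:
            s.connect(addr)
        except BlockingIOError:
            pass  # handshake now in progress
        sel.register(s, selectors.EVENT_WRITE)
    winner = None
    try:
        while winner is None and sel.get_map():
            events = sel.select(timeout)
            if not events:
                break  # nobody answered within the timeout
            for key, _ in events:
                err = key.fileobj.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
                if err == 0:
                    winner = key.fileobj  # first completed handshake wins
                    break
                sel.unregister(key.fileobj)
                key.fileobj.close()  # refused or unreachable: drop it
    finally:
        for key in list(sel.get_map().values()):
            sel.unregister(key.fileobj)
            if key.fileobj is not winner:
                key.fileobj.close()  # close the others; be a nice neighbor
        sel.close()
    return winner
```

With a multihomed site, you would pass the same server's address as seen through each upstream and let the fastest working path win.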

  -Dave




Re: multi homing pressure

2005-10-19 Thread David G. Andersen

On Thu, Oct 20, 2005 at 03:18:35AM +0100, Paul Jakma scribed:
 On Wed, 19 Oct 2005, David G. Andersen wrote:
 
 If you can run Squid, you can multihome your web connections today.
 It's a little bit awkward to configure, but then again, so is
 Squid.  People are welcome to poke at, fold, spindle, or mutilate:
 
 http://nms.lcs.mit.edu/ron/ronweb/#code
 
 (Part of my thesis work,
 
 Hehe, google for vixie ifdefault.

Right.  Vix was talking about the inbound path - I'm talking
about the outbound path.  Complementary solutions to the same
problem.

   -Dave


Sites wanted for research boxes

2005-01-24 Thread David G. Andersen

I sent a similar mail out a couple of years ago and greatly appreciate the
response I got.  Time and entropy have done their dirty work, so we're looking
for a few (more) good hosts. 

We've been running a moderate sized (30 node) overlay network and general
network research testbed for the last 4 or 5 years.  The testbed started
as part of the Resilient Overlay Networks project at MIT, and has evolved
to support a variety of research by about 15 network researchers at a
number of institutions.  At this time, the primary institutions involved
in the testbed management are MIT, CMU, NYU, and the University of Utah.

(Regular NANOG attendees may know Nick Feamster from some talks he's given
about routing and automated tools for managing and debugging sets of
router configurations.  Nick's the MIT contact for the testbed these days.)

Our major goal for the testbed is to have access to a realistic set of
Internet paths - we want to make sure that network research actually takes
place in the real world, not a perfect isolated environment.  Which is where
nanog comes in...

What we need is a machine we can place on your network.  We will be
happy to give you the machine (a PC), which you can use as well for
your own work.  Ideally, it is best if you can place it outside your
firewall (if you have one), since that may simplify logistics at your
end.

This PC will run a FreeBSD kernel provided by us; all you need to do
is to configure (or DHCP) an IP address, and we'll be all set.  The
machine will have a 10/100 Ethernet interface.

We have two options for bandwidth usage.  If you'll let us, we'd love
to be able to consume a fair bit of it from time to time.  Some of our
researchers are experimenting with data transfer protocols and more
efficient ways of shipping things (like news) around the 'net.  
However, if you're bandwidth constrained, we do a lot of measurements
and other low-bandwidth experiments that benefit greatly from just having
a fairly non-intrusive presence.

We'd love a BGP feed, or even internal routing feeds, for data collection.

We won't be sniffing packets on the network, etc., and we will work to
ensure that the machine is as secure as we can make it - most services
will be disabled, the running services will be firewalled, and
we'll keep the machine up to date with security patches.  (No problems
yet, fingers crossed...)

What you get in return:
   a)  A locally hosted stratum 1 time source that you're welcome to use
   or let your clients use.  Our machines are CDMA synchronized.

   b)  Our eternal gratitude, love, and acknowledgement (and participation /
   input, if you are interested!) in our research.
   This also involves free beer if you swing by Pittsburgh. :)
   
   c)  One of our goals is to create tools that run on the testbed that
   are useful to their hosts for things like distributed debugging,
   or the aforementioned BGP configuration debugging tools.
   We also collect BGP feeds and have a nice interface for searching
   through the historical data for figuring out what went wrong
   on the 'net at a particular time.

   d)  Ask!  If you have specific network problems/etc., we're always
   looking for more problems to solve.  Your problems are our dinners;
   we'd like to know what they are and try to create solutions.

Please let us know if you can help; we'd appreciate it very much!  If
so, could you please tell us if:

1. You prefer a small (1U) rack-mountable machine for us to send you
   as a package,

OR

2. If you have a spare PC of your own and are willing to use it, we can
   give you a disk image that you can burn, insert, enter an IP address,
   and be good to go.

   -Dave

-- 
work: dga at cs.cmu.edu   me:  [EMAIL PROTECTED]
  Carnegie Mellon University    http://www.angio.net/
  Department of Computer Science



Re: BCP38 making it work, solving problems

2004-10-19 Thread David G. Andersen

On Tue, Oct 19, 2004 at 07:14:32PM +0200, JP Velders scribed:
 
  Date: Tue, 19 Oct 2004 09:21:46 -0700
  From: Randy Bush [EMAIL PROTECTED]
  Subject: Re: BCP38 making it work, solving problems
 
   For example, how many ISPs use TCP MD5 to limit the possibility of a
   BGP/TCP connection getting hijacked or disrupted by a ddos attack?
 
  i hope none use it for the latter, as it will not help.  more and
  more use it for the former.  why?  because they perceived the need
  to solve an immediate problem, a weakness in a vendor's code.
 
 Uhm, you might need to run that by me again...
 
 Hijacking the connection is in a completely different class as someone
 bombarding you with a bunch of forged BGP packets to close down a
 session. Without that MD5 checksum you are quite vulnerable to that. I
 haven't seen a vendor come up with a solution to that, because the
 problem is on a much more vendor-neutral level...

  Unless you're worried about an adversary who taps into your
fiber, how are MD5 checksums any better than anti-spoofing filters
that protect your BGP peering sessions?  The only benefit I see is
that you can actually verify that your peer is using MD5 checksums,
instead of having to take it on faith that they won't permit
someone to spoof their router's address.
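Concretely, the two mechanisms being compared look something like this. An illustrative IOS-style fragment; the addresses, AS numbers, interface, and password are placeholders of my own, not anything from the thread:

```
router bgp 65001
 neighbor 192.0.2.1 remote-as 65002
 neighbor 192.0.2.1 password s3kr1t
!
! The anti-spoofing alternative: drop outside packets claiming
! to come from the peer's address.
access-list 101 deny   ip host 192.0.2.1 any
access-list 101 permit ip any any
!
interface Serial0/0
 ip access-group 101 in
```

The filter only protects the session if it is actually applied at every edge where spoofed packets could enter; the MD5 option is something you can verify directly on the session itself.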

  -Dave

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/


Re: That MIT paper

2004-08-12 Thread David G. Andersen

On Thu, Aug 12, 2004 at 01:35:36PM +0200, Niels Bakker scribed:
 
 * [EMAIL PROTECTED] (David G. Andersen) [Thu 12 Aug 2004, 02:55 CEST]:
  Global impact is greatest when the resulting load changes are
  concentrated in one place.  The most clear example of that is changes
  that impact the root servers.  When a 1% increase in total traffic
  is instead spread among hundreds of thousands of different, relatively
  unloaded DNS servers, the impact on any one DNS server is minimal.
  And since we're talking about a protocol that variously occupies less than
  3% of all Internet traffic, the packet count / byte count impact is
  negligible (unless it's concentrated, as happens at root and
  gtld servers).
 
 This doesn't make sense to me.  You're saying here that a 1% increase in
 average traffic is a 1% average increase in traffic.  What's your point?

 if a load change is concentrated in one place how can the impact be
 global?

  Because that point could be critical infrastructure (to abuse
the buzzword).  If a 1% increase in DNS traffic is 100,000 requests
per second (this number is not indicative of anything, just an
illustration), that could represent an extra request per second per
nameserver -- or 7,000 more requests per second at the root.
One of these is pretty trivial, and the other could be
unpleasant.
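Working through the mail's deliberately made-up numbers (the "7,000" above reads as an order-of-magnitude figure; dividing by the thirteen root-server letters is one way to slice it):

```python
extra_rps = 100_000    # the mail's hypothetical 1% increase, in requests/sec
n_resolvers = 100_000  # spread across many relatively unloaded nameservers
n_roots = 13           # or concentrated on the thirteen root-server letters

per_resolver = extra_rps / n_resolvers
per_root = extra_rps / n_roots
print(per_resolver, round(per_root))  # → 1.0 7692
```

Same 1% in aggregate; utterly different stories for the machines on the receiving end.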

 At root and gTLD servers I assume DNS traffic occupies significantly
 more than 3% of all traffic there.  Still, a 1% increase remains 1%.

   Sure, but the ratio still plays out.  If your total traffic due
to DNS is small, then even a large (percentage) increase in DNS traffic
doesn't affect your overall traffic volume, though it might hurt
your nameservers.  If you're a root server, doubling the DNS traffic
nearly doubles total traffic volume, so in addition to DNS-specific
issues, you'll also start looking at full pipes.

  -Dave

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/


Re: That MIT paper

2004-08-11 Thread David G. Andersen

On Wed, Aug 11, 2004 at 04:49:18PM +, Paul Vixie scribed:
 what i meant by act globally, think locally in connection with That
 MIT Paper is that the caching effects seen at mit are at best
 representative of that part of mit's campus for that week, and that

  Totally agreed.  The paper was based upon two traces, one from
MIT LCS, and one from KAIST in Korea.  I think that the authors
understood that they were only looking at two sites, but their
numbers have a very interesting story to tell -- and I think that
they're actually fairly generalizable.  For instance, the rather
poorly-behaving example from your f-root snapshot is rather consistent
with one of the findings in the paper:

  "[Regarding root and gTLD server lookups] ...It is likely that
many of these are automatically generated by incorrectly implemented
or configured resolvers;  for example, the most common error 'loopback'
is unlikely to be entered by a user"

 even a variance of 1% in caching effectiveness at MIT that's due to
 generally high or low TTL's (on A, or MX, or any other kind of data)
 becomes a huge factor in f-root's load, since MIT's load is only one

  But remember - the only TTLs that the paper was suggesting could be
reduced were non-nameserver A records.  You could drop those all to zero
and not affect f-root's load one bit.  In fairness, I think this is
jumbled together with NS record caching in the paper, since most
responses from the root/gTLD servers include both NS records and
A records in an additional section.

Global impact is greatest when the resulting load changes are
concentrated in one place.  The most clear example of that is changes
that impact the root servers.  When a 1% increase in total traffic
is instead spread among hundreds of thousands of different, relatively
unloaded DNS servers, the impact on any one DNS server is minimal.
And since we're talking about a protocol that variously occupies less than
3% of all Internet traffic, the packet count / byte count impact is
negligible (unless it's concentrated, as happens at root and
gtld servers).

The other questions you raise, such as:

 how much of the measured traffic was due to bad logic in 
 caching/forwarding servers, or in clients?  how
 will high and low ttl's affect bad logic that's known to be in wide
 deployment? 

are equally important questions to ask, but .. there are only so many
questions that a single paper can answer.  This one provides valuable
insight into client behavior and when and why DNS caching is effective.
There have been other papers in the past (for instance, Danzig's 1992
study) that examined questions closer to those you pose.  The results from
those papers were useful in an entirely different way (namely, that almost
all root server traffic was totally bogus because of client errors).

It's clear that from the perspective of a root name server operator,
the latter questions are probably more important.  But from the
perspective of, say, an Akamai or a Yahoo (or joe-random dot com),
the former insights are equally valuable.

  -Dave

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/


Re: that MIT paper again

2004-08-09 Thread David G. Andersen

Regarding both Paul's message below and Simon Walter's earlier message on
this topic...

Simon Walters scribed:

 I'm slightly concerned that the authors think web traffic is the big
 source of DNS, they may well be right (especially given one of the
 authors is talking about his own network), but my quick glance at the

Two things - first, the paper breaks down the DNS traffic by the
protocol that generated it - see section III C, which notes that
"a small percentage of these lookups are related to reverse
black-lists such as rbl.maps.vix.com" -- but
remember that the study was published in 2001 based upon
measurements made in January and December of 2000.  RBL traffic
wasn't nearly the proportion of DNS queries that it is today.  As
the person responsible for our group's spam filtering (one mailserver
among many that were measured as part of the study), I can say we didn't
start using SpamAssassin until late 2001, and I believe we were
one of the more aggressive spam-filtering groups in our lab.
Also note that they found that about 20% of the TCP connections were
FTP connections, mostly to/from mirror sites hosted in our lab.

Sendmail of five years ago also wasn't as aggressive about performing
reverse verification of sender addresses.

I asked Jaeyeon about this (we share an office), and she
noted that:

"In our follow-up measurement study, [we found] that DNSBL related
 DNS lookups at CSAIL in February 2004 account for 14% of all DNS
 lookups. In comparison, DNSBL related traffic accounted for merely
 0.4% of all DNS lookups at CSAIL in December 2000."

Your question was right on the money for contemporary DNS data.

 The abstract doesn't mention that the TTL on NS records is found to be 
 important for scalability of the DNS. Probably the main point Paul 
 wants us to note. Just because the DNS in insensitive to slight 
 changes in A record TTL doesn't mean TTL doesn't matter on other 
 records.

This is a key observation, and seems like it's definitely missing
from the abstract (alas, space constraints...).  They're not talking
about the NS records, and they're not talking about the associated
A records for _nameservers_.


On Sat, Aug 07, 2004 at 04:55:00PM +, Paul Vixie scribed:
 
 here's what i've learned by watching nanog's reaction to this paper, and
 by re-reading the paper itself.
 
 1. almost nobody has time to invest in reading this kind of paper.
 2. almost everybody is willing to form a strong opinion regardless of that.
 3. people from #2 use the paper they didn't read in #1 to justify an opinion.

  :)  human nature.

 4. folks who need academic credit will write strong self-consistent papers.
 5. those papers do not have to be inclusive or objective to get published.
 6. on the internet, many folks by nature think locally and act globally.
 
 7. #6 includes manufacturers, operators, endusers, spammers, and researchers.
 8. the confluence of bad science and disinterested operators is disheartening.
 9. good actual policy must often fly in the face of accepted mantra.

I'm not quite sure how to respond to this part (because I'm not
quite sure what you meant...).  It's possible that the data analyzed
in the paper may not be representative of, say, commercial Internet
traffic, but how is the objectivity in question?  The conclusions
of the paper are actually pretty consistent with what informed
intuition might suggest.  First:

"If NS records had lower TTL values, essentially all of the DNS lookup
 traffic observed in our trace would have gone to a root or gTLD server, which
 would have increased the load on them by a factor of about five.  Good
 NS-record caching is therefore critical to DNS scalability."

and second:

"Most of the benefit of caching [of A records] is achieved with TTL
 values of only a small number of minutes.  This is because most cache
 hits are produced by single clients looking up the same server multiple
 times in quick succession [...]"

As most operational experience can confirm, operating a nameserver
for joe-random-domain is utterly trivial -- we used to (primary) a
couple thousand domains on a p90 with BIND 4.  As your own experience
can confirm, running a root nameserver is considerably less trivial.
The paper confirms the need for good TTL and caching management to
reduce the load on root nameservers, but once you're outside that
sphere of ~100 critical servers, the hugely distributed and
heavy-tailed nature of DNS lookups renders caching a bit less
effective except in those cases where client access patterns cause
intense temporal correlations.
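A toy, deterministic illustration (my own construction, not the paper's data) of why short A-record TTLs give up so little: cache hits come mostly from repeat lookups in quick succession, which even a one-minute TTL catches.

```python
def cache_hits(trace, ttl):
    """trace: time-sorted (seconds, name) lookups; returns cache hit count."""
    expires = {}
    hits = 0
    for t, name in trace:
        if name in expires and t < expires[name]:
            hits += 1
        else:
            expires[name] = t + ttl  # miss: fetch the record and cache it
    return hits

# Synthetic trace: one client looks up www.example.com 5 times in 10 seconds,
# 20 one-off lookups of distinct names spread over the next while, and one
# repeat lookup of www.example.com half an hour later.
trace = [(i * 2, "www.example.com") for i in range(5)]
trace += [(60 * i, "host%d.example.net" % i) for i in range(1, 21)]
trace += [(1800, "www.example.com")]
trace.sort()

print(cache_hits(trace, ttl=60), cache_hits(trace, ttl=3600))  # → 4 5
```

The one-hour TTL buys exactly one extra hit here; the heavy tail of one-off names is uncacheable at any TTL.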

  -Dave

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/


L3 burp today - what happened?

2004-02-23 Thread David G. Andersen

Anyone know what happened to L3 during the last hour?  They
seem to have developed an appetite for dropping packets
in San Jose for customers on the Genuity portion of their
network, but I'm curious if anyone has a slightly more
detailed explanation about the failure.

The failure seems to have started at 17:09 and ended at about 17:51 EST.

  -Dave

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/


Re: Does anyone think it was a good thing ? (was Re: VeriSign Capitulates

2003-10-03 Thread David G. Andersen

On Fri, Oct 03, 2003 at 05:34:05PM -0400, jeffrey.arnold quacked:
 
 On Fri, 3 Oct 2003, Mike Tancsa wrote:
 
 :: OK, so was ANYONE on NANOG happy with
 :: a) Verisign's site finder
 :: b) How they launched it
 :: 
 
 Disregarding their implementation issues, the product is pretty good. 
 I've actually used it to fix a few typos, etc... From an end user 
 perspective, it's certainly better than a squid error page.

  Yeah, but this is easy for you to provide as a service to users
who want it.
patch your squids with the following change to src/errorpage.c:

@@ -414,6 +414,7 @@
  * T - UTC                                      x
  * U - URL without password                     x
  * u - URL with password                        x
+ * V - URL without http method without password x
  * w - cachemgr email address                   x
  * z - dns server error message                 x
  */
@@ -546,6 +547,9 @@
     case 'u':
         p = r ? urlCanonical(r) : err->url ? err->url : "[no URL]";
         break;
+    case 'V':
+        p = r ? urlCanonicalStripped(r) : err->url ? err->url : "[no URL]";
+        break;
     case 'w':
         if (Config.adminEmail)
             memBufPrintf(&mb, "%s", Config.adminEmail);


And then modify errors/English/ERR_DNS_FAIL to say:

<H2>Alternatives</H2>
You can try to view this server through:
<ul>
<li><a href="http://www.google.com/search?q=cache:%V">Google's Cache</a>
<li><a href="http://web.archive.org/web/*/%U">The Internet Archive</a>
<li><a href="http://sitefinder.verisign.com/lpc?url=%V">Use Sitefinder to search for
typos for this domain</a>
</ul>

If you're creative, have it send them with a redirect to a local
CGI script that tries obvious typos.  Very simple.  My users like
the link to the internet archive (also modify the could not connect
error page and others).  If you just want HTML, create a framed document
that auto-loads the sitefinder doc in the bottom half, and pops up
your own error page in the front.  I leave that as an exercise to
the HTML-clued reader, but it's not very hard.
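For the less HTML-clued, the framed variant is only a few lines (illustrative circa-2003 markup; the local error page filename is a placeholder):

```
<!-- top frame: your own error page; bottom frame: the Sitefinder result -->
<frameset rows="50%,50%">
  <frame src="/squid-error-local.html">
  <frame src="http://sitefinder.verisign.com/lpc?url=%V">
</frameset>
```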

  -Dave

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/
  I do not accept unsolicited commercial email.  Do not spam me.


Re: anycast (Re: .ORG problems this evening)

2003-09-22 Thread David G. Andersen

On Thu, Sep 18, 2003 at 02:38:18PM -0400, Todd Vierling quacked:
 
 On Thu, 18 Sep 2003, E.B. Dreger wrote:
 
 : EBD That's why one uses a daemon with main loop including
 : EBD something like:
 : EBD
  : EBD    success = 1 ;
  : EBD    for ( i = checklist ; i->callback != NULL ; i++ )
  : EBD        success &= i->callback(foo) ;
  : EBD    if ( success )
  : EBD        send_keepalive(via_some_ipc_mechanism) ;
 
 Yes, I hope that UltraDNS implements something like this, if they have not
 already.  It's still not a guarantee that things will get withdrawn -- or be
 reachable, even if working but not withdrawn -- in case of a problem.  That
 still leaves the DNS for a gTLD at risk for a single point of failure.

The whole problem with only listing two anycast servers is that 
you leave yourself vulnerable to other kinds of faults.  Your
upstream ISP fat-fingers "ip route 64.94.110.11 null0" and
accidentally blitzes the netblock from which the anycast servers
are announced.  A router somewhere between customers and the
anycast servers stops forwarding traffic, or starts corrupting
transit data, without interrupting its route processing.
Packet filters get misconfigured...

(Observe how divorced route processing and packet processing
are in modern routing architectures and it's pretty easy to
see how this can happen.  With load balancing, traffic
can get routed down a non-functional path while routing
takes place over the other one - BBN did that to us once;
it was very entertaining).

Route updates in BGP take a while to propagate.  Much longer
than the 15ms RTT from me to, say, a.root-server.net.  The application
retry in this context can be massively faster than waiting 30+ seconds
for a BGP update interval.

The availability of the DNS is now co-mingled with the success
of the magic route tweak code;  the resulting system is a fair
bit more complex than simply running a bunch of different
DNS servers.   God forbid that zebra ever has bugs...

  http://www.geocrawler.com/lists/3/GNU/372/0/

In contrast, talking to a few DNS servers gives you an end-to-end
test of how well the service is working.  You still depend on the
answers being correct, but you can intuit a lot from whether
or not you actually get answers, instead of sitting around twiddling
your thumbs thinking, "gee, I sure wish that routing update would
get sent out so I could use the 'net."

  -Dave

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/
  I do not accept unsolicited commercial email.  Do not spam me.


Re: News of ISC Developing BIND Patch

2003-09-17 Thread David G. Andersen

On Wed, Sep 17, 2003 at 02:50:51AM -0700, Vadim Antonov quacked:
 
 In fact, we do have an enormously useful and popular way of doing exactly
 that - this is called search engines and bookmarks.  What is needed is
 an infrastructure for allocation of unique semantic-free end point
 identifiers (to a large extent, MAC addresses may play this role, or, say,
 128-bit random numbers), a way to translate EIDs to the topologically
 allocated IP addresses (a kind of simplified numbers-only DNS?) and a
 coordinated effort to change applications and expunge domain names from
 protocols, databases, webpages and such, replacing URLs containing domain
 names with URLs containing EIDs.

  Oh, you mean something like the Semantic Free Referencing project?

  http://nms.lcs.mit.edu/projects/sfr/

  (Blatant plug for a friend's research, yes, but oh my god does it
seem relevant today)

  -Dave

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/
  I do not accept unsolicited commercial email.  Do not spam me.


Re: Private port numbers?

2003-08-14 Thread David G. Andersen

On Wed, Aug 13, 2003 at 10:40:30PM +, Christopher L. Morrow quacked:
 
 what about ports that start as 'private' and are eventually ubiquitously
 used on a public network? (Sean Donelan noted that 137-139 were
 originally intended to be used in private networks... and they became
 'public' over time)

 You run it on a different port. I actually really like this idea,
because it makes shipping a more secure default configuration
easier for vendors without having to coordinate between firewall
vendors and implementors.

The gotcha is that it makes life pretty weird for you if you
then want to make your service work in the wide-area... but
that's pretty easy to do with intelligent defaults:

Ports 1-1024:  Well-known-ports
Ports 60001-61024:  Private well-known-port analogues

Applications would try:

 if (!connect(..., public port #))
   connect(..., private port #);

In fact, this (impractically) generalizes to a nice way of
signifying whether or not you want external people to be
able to talk to your service:

   port bit[0] == 0:  Public service, please do not filter
   port bit[0] == 1:  Private service, please filter at
  organizational boundary

I suddenly wish the port space was 32 bits. :)
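As a sketch of the two conventions above (both hypothetical, as in the message itself; the fallback mirrors the pseudocode earlier):

```python
PRIVATE_OFFSET = 60000  # the message's 1-1024 -> 60001-61024 mapping

def private_analogue(port):
    """Map a well-known port to its hypothetical private twin."""
    if not 1 <= port <= 1024:
        raise ValueError("not a well-known port")
    return port + PRIVATE_OFFSET

def wants_filtering(port):
    """The (admittedly impractical) low-bit convention: bit[0] == 1 means
    'private service, please filter at the organizational boundary'."""
    return port & 1 == 1

# A client would then fall back roughly as the pseudocode above suggests:
#   connect(host, 25) or else connect(host, private_analogue(25))
print(private_analogue(25), wants_filtering(443), wants_filtering(80))  # → 60025 True False
```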

People _could_, of course, implement all of this with
tcpwrappers and host-local firewalls.  But experience has
shown that they don't.  It might be easier for them if they
could just click "private" when they configured the service,
though experience has shown that services migrate to the less
restrictive mode as debugging and time go on...

  -Dave

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/
  I do not accept unsolicited commercial email.  Do not spam me.


Complaint of the week: Ebay abuse mail (slightly OT)

2003-08-03 Thread David G. Andersen

To add to the eternally annoying list of companies that ignore
abuse@ mail... ebay now requires that you fill in their lovely
little web form to send them a note.  Even if, say, you're
trying to let them know about another scam going around that
tries to use the machine www.hnstech.co.kr to extract people's
credit card information.

Has anyone had success in convincing companies that this is just
A Bad Idea (ignoring abuse mail), and if so, how did you manage
to do it?

Sorry for the slightly non-operational content, but I've had it with
ebay on this one.

  -Dave

- Forwarded message from eBay Safe Harbor [EMAIL PROTECTED] -

Date: Sat, 02 Aug 2003 22:58:01 -0700
From: eBay Safe Harbor [EMAIL PROTECTED]
Subject: Your message to [EMAIL PROTECTED] was not received  (KMM86277800V90276L0KM)
To: David G. Andersen [EMAIL PROTECTED]
Auto-Submitted: auto-replied
Reply-To: eBay Safe Harbor [EMAIL PROTECTED]
X-MIME-Autoconverted: from quoted-printable to 8bit by eep.lcs.mit.edu id 
h735w5sU087612

Thank you for writing to the eBay SafeHarbor Team. 
 
The address you wrote to ([EMAIL PROTECTED]) is no longer in service. 
Please resend your email to us through one of the online webforms listed
below. Using these forms will help us direct your email to the right 
department where we can quickly answer your question correctly and get 
it right back to you.
 
For Trust and Safety issues (reports of policy violations, problems with
transactions, suspensions, etc.) please use the following webform:
 
 http://pages.ebay.com/help/basics/select-RS.html
 
For General Support questions (billing, bidding, or selling concerns and
technical issues, etc.) please use the following webform:
 
http://pages.ebay.com/help/basics/select-support.html
 
Once on the webform, what will really help us assist you further is if 
you choose the best topic for your question. This will allow you to view
our Instant Help pages, where you may find your answer immediately. 
Should you not find your answer there, choosing the best topics will 
still help us answer your question faster, correctly, and completely. 

We truly appreciate your assistance in this matter and apologize for any
inconvenience this may have caused.
 
Sincerely, 
 
eBay SafeHarbor Team

- End forwarded message -

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/
  I do not accept unsolicited commercial email.  Do not spam me.


Re: North America not interested in IP V6

2003-07-31 Thread David G. Andersen

On Thu, Jul 31, 2003 at 11:02:14AM -0600, Irwin Lazar quacked:

 As one person noted in response to Christian's speech.  If there
 is no addressing shortage, why do I have to pay $75 a month for a
 DSL connection with a static IP address when a floating IP address
 only costs me $40 per month?

I think there are two parts to the answer.

a)  DHCP'ing everyone is just easier.

b) Why do you pay less for a flight with a Saturday night stopover?
   - Market segmentation.  People with static addresses usually
 want to do things like run servers, and are probably willing to
 pay for the privilege.

 -Dave

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/
  I do not accept unsolicited commercial email.  Do not spam me.


Re: Latency generator?

2003-06-25 Thread David G. Andersen

On Wed, Jun 25, 2003 at 12:48:29PM -0400, Temkin, David quacked:
 Does anyone know of any free, cheap, or potentially rentable latency
 generators?  Ideally I'd like something that just sits between two ethernet
 devices to induce layer 2/3 latency in traffic, but am open to any
 options...

Dummynet.  We use it at Emulab (http://www.emulab.net/) to do
exactly what you're describing.  You have to use it in conjunction
with the bridging code, and then you can just do it.  By default,
it still uses the ipfw firewall rules to match traffic, so it only
delays IP, but that could probably be fixed with a little hacking
if you also want to delay ARP and other things.

Built into FreeBSD.  Should work mostly out of the box.
It'll also do traffic shaping and whatnot.
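For the archives, the setup is only a few commands. A hedged sketch (FreeBSD 4.x-era syntax from memory; the fxp0/fxp1 interface names and the 50 ms figure are assumptions, not from this thread):

```shell
# Kernel built with options IPFIREWALL, DUMMYNET, and BRIDGE assumed.
# Turn on bridging and let ipfw see bridged frames:
sysctl -w net.link.ether.bridge=1
sysctl -w net.link.ether.bridge_ipfw=1

# One pipe per direction, 50 ms of added delay, no rate limit (bw 0):
ipfw add pipe 1 ip from any to any in via fxp0
ipfw add pipe 2 ip from any to any in via fxp1
ipfw pipe 1 config delay 50ms bw 0
ipfw pipe 2 config delay 50ms bw 0
```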

Your .signature disclaimer was longer than your message, by the way. ;-)

  -Dave



Re: NAT for an ISP

2003-06-05 Thread David G. Andersen

On Wed, Jun 04, 2003 at 12:51:51PM -0700, Christopher J. Wolff quacked:
 
 Hello,
 
 I would like to know if any service providers have built their access
 networks out using private IP space.  It certainly would benefit the
 global IP pool but it may adversely affect users with special
 applications.  At any rate, it sounds like good fodder for a debate.

  I've got a friend who puts all of his internal servers,
routers, and _customers_ on RFC1918 space and pipes them out
through a PNAT.  Fairly small ISP - maybe 15 megabits of bandwidth -
operating at the state/local level.

It's an interesting setup.  Kind of fun.  The stateful pnat
functionality forces customers to specify exactly what inbound
services they want, which can't hurt security.  Every customer
gets a /24 or greater, which helps convenience.  On the other
hand, everyone has a NAT in front of them, which means that
they get clients who would have probably been putting a NAT
in front of themselves anyway.  I probably wouldn't use that
setup myself, but then again, I subscribe to nanog...

  -Dave



Re: NAT for an ISP

2003-06-05 Thread David G. Andersen

On Wed, Jun 04, 2003 at 07:07:28PM -0400, Andy Dills quacked:
 
I've got a friend who puts all of his internal servers,
  routers, and _customers_ on RFC1918 space and pipes them out
  through a PNAT.  Fairly small ISP - maybe 15 megabits of bandwidth -
  operating at the state/local level.
 
 Why on earth would they do this? What you've said implies DS3 level
 connectivity, so to skimp on ARIN fees seems a little ridiculous.

Historical accident in many ways.  I implied DS3-level
connectivity, but what it really means is multiple bonded
T1s from multiple providers.  It started out as a T1 from
here, a T1 from there, and no local BGP knowledge (and
discouragement from the upstreams).  In fact, using a bunch
of NATs is a great way to resell cheap upstream connections.

 Yeah, I read you loud and clear. My friend is a half-baked cluebie using
 techniques I'll term "fun" and later encourage my competitors to employ. :)

Actually, I do mean the fun part.  You can do some cool tricks with
it.  Renumbering to different providers is mostly seamless,
particularly since he runs the DNS for his customers.  Easy to
experiment with throwing transparent caches and things like that
in front of the customers since they're already going through a
firewall.  Now that he's about large enough to get ARIN space, the game
is changing, and they're moving in the directions one would expect
them to.

It's not an approach that I would ever encourage a large ISP
to take.  In fact, I don't necessarily think of him as providing
standard Internet services - he provides primarily web, mail, 
and VPN services, and then some customized stuff on a per-customer
basis.  But he's had a decent customer base for a small ISP, and
he seems to be filling a niche, and hasn't gone out of business
doing it.

  -Dave



Re: Non-GPS derived timing sources (was Re: NTp sources that work in a datacenter)

2003-06-02 Thread David G. Andersen

On Sun, Jun 01, 2003 at 08:13:08AM -0700, Peter Lothberg quacked:
 
  I don't expect GPS to spin out of control soon..
 
 So GPS tracks TAI and the difference is published (2 months after the
 fact..)
 
 But it's simple to build a 'jamer' that makes GPS reception not work
 in a limited area, same for Loran-C used in combination with GPS in
 many Sonet/SDH S1 devices.
 
  but I did wonder how
  hard it is to find a another reliable clock source of similar quality to
  GPS to double check GPS.

   For NTP purposes, WWVB is actually just fine, as long as you properly
configure your distance from the transmitter.  The NTP servers list shows
several WWVB synchronized clocks.  CDMA clocks synch indirectly to GPS,
but are typically locally stabilized by a rubidium or ovenized quartz
oscillator with decent holdover capability for a few days of GPS outage.
But they'd suffer the same fate if GPS went just plain wrong.
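The "configure your distance" step is just compensating for the signal's speed-of-light travel time from the transmitter. A quick sketch (the 3000 km distance is an assumed example, not from this post):

```python
# Propagation-delay correction for a WWVB receiver: the 60 kHz signal
# travels at roughly the speed of light, so the offset is distance / c.
C_KM_PER_S = 299_792.458

def wwvb_delay_ms(distance_km: float) -> float:
    """Milliseconds of one-way propagation delay to compensate for."""
    return distance_km / C_KM_PER_S * 1000.0

print(round(wwvb_delay_ms(3000.0), 2))  # 10.01 ms at ~3000 km
```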

   The NIST timeservers are available over the net, if you can deal with
that degree of synch.  Lots of them just use ACTS dialup synch to get the
offset, and have very good local clocks.  ACTS is certainly a good fall-back
for GPS, since it uses a wired path instead of a wireless one.

  So if you're really paranoid:  GPS + WWVB + ACTS + internet to tick/tock or
one of the NIST primaries.  Ultimately, WWVB, ACTS, and ntp to NIST are
all synched from pretty much the same source, but the odds that they'd
all go bad are pretty slim.  GPS is steered from the USNO time, but the
clocks on the satellites are pretty good.

-Dave 



Re: 923Mbits/s across the ocean

2003-03-09 Thread David G. Andersen

On Sun, Mar 09, 2003 at 02:25:25PM +0100, Iljitsch van Beijnum quacked:
 
 On Sat, 8 Mar 2003, Joe St Sauver wrote:
 
  you will see that for bulk TCP flows, the median throughput is still only
  2.3Mbps. 95th%-ile is only ~9Mbps. That's really not all that great,
  throughput wise, IMHO.
 
 Strange. Why is that? RFC 1323 is widely implemented, although not
 widely enabled (and for good reason: the timestamp option kills header
 compression so it's bad for lower-bandwidth connections). My guess is
 that the OS can't afford to throw around MB+ size buffers for every TCP
 session so the default buffers (which limit the windows that can be
 used) are relatively small and application programmers don't override
 the default.

  Which makes it doubly a shame that the adaptive buffer tuning
tricks haven't made it into production systems yet.  It was
a beautiful, simple idea that worked very well for adapting to
long fat networks:

  http://www.acm.org/sigcomm/sigcomm98/tp/abs_26.html
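The core trick, roughly: size the socket buffer to the connection's measured bandwidth-delay product instead of a fixed default. A hedged sketch of the idea, not the paper's actual algorithm; the constants are illustrative:

```python
# Receive-buffer auto-tuning in miniature: keep the socket buffer near
# the bandwidth-delay product so the window never caps throughput.
def tuned_buffer(bw_bits_per_s: float, rtt_s: float,
                 floor: int = 16 * 1024, ceil: int = 4 * 1024 * 1024) -> int:
    bdp_bytes = bw_bits_per_s * rtt_s / 8
    # leave headroom (2x BDP) and clamp to sane limits
    return int(min(max(2 * bdp_bytes, floor), ceil))

# A long fat path: 100 Mbit/s at 100 ms RTT needs far more than the
# old 64 KB default window.
print(tuned_buffer(100e6, 0.100))  # 2500000
```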

  -dave



Re: 923Mbits/s across the ocean

2003-03-08 Thread David G. Andersen

On Sat, Mar 08, 2003 at 03:29:56PM -0500, [EMAIL PROTECTED] quacked:
 
 High speeds are not important. High speeds at a *reasonable* cost are
 important. What you are describing is a high speed at an *unreasonable*
 cost.

To paraphrase many a California surfer: dude, chill out.

The bleeding edge of performance in computers and networks is always
stupidly expensive.  But once you've achieved it, the things you
did to get there start to percolate back into the consumer stream,
and within a few years, the previous bleeding edge is available
in the current O(cheap) hardware.

A cisco 7000 used to provide the latest and greatest performance
in its day, for a rather considerable cost.  Today, you can get a
box from Juniper for the same price you paid for your 7000 that
provides a few orders of magnitude more performance.

But to get there, you have to be willing to see what happens when
you push the envelope.  That's the point of the LSR, and a lot of
other research efforts.

  -Dave



Re: 923 Mbps across the Ocean ...

2003-03-07 Thread David G. Andersen

On Fri, Mar 07, 2003 at 10:09:51PM +0100, Mikael Abrahamsson quacked:
 
 On Fri, 7 Mar 2003, Richard A Steenbergen wrote:
 
  Production commercial networks need not apply, 'lest someone realize that 
  they blow away these speed records on a regular basis.
 
 What kind of production environment needs a single TCP stream of data 
 at 1 gigabit/s over a 150ms latency link? 
 
 Just the fact that you need a ~20 megabyte TCP window size to achieve this
 (feel free to correct me if I'm wrong here) seems kind of unusual to me.

It's unusual, but it's not completely unheard of.  One of the biggest
sources of such data is VLBI  (interferometry to measure the movement
of the earth's crust), in which signals from geographically distributed
measurement sites have to be recorded and correlated at a central site:

http://web.haystack.edu/vlbi/vlbisystems.html

The signals are massive.  Right now they use specially made tape
drives that can record 1Gb/s:

ftp://web.haystack.edu/pub/mark4/memos/230.2.pdf

ftp://web.haystack.edu/pub/mark4/memos/HDR_concept.PDF

and they send the data around via airplanes.  They'd love to be
able to do real-time correlation of the data, but that
involves collecting 6 of these feeds at a central site (more coming).
The feeds must be capable of running unattended for up to 24 hours
(86 terabytes each, or an aggregate of half a petabyte per day).

Yes, backbones push more than a gigabit across links, but not as
a single flow of data.
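The ~20 megabyte window in the quote is just the bandwidth-delay product; a quick back-of-the-envelope check with the quoted numbers:

```python
# Required TCP window = bandwidth * RTT (the bandwidth-delay product).
bw = 1e9          # 1 Gbit/s, from the quote
rtt = 0.150       # 150 ms trans-oceanic RTT, from the quote
window_bytes = bw * rtt / 8
print(f"{window_bytes / 2**20:.1f} MiB")  # 17.9 MiB, i.e. roughly 20 MB
```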

  -Dave



Re: anti-spam vs network abuse

2003-02-28 Thread David G. Andersen

On Fri, Feb 28, 2003 at 03:11:00PM -0600, Jack Bates quacked:
 
  Should we outlaw a potentially beneficial practice due to its abuse by
  criminals?
 
 Okay. What happens if you make a mistake and overload one of my devices
 costing my company money. I guarantee you, the law will look favorably on
 damages. That is the problem with probing. Sometimes the probe itself can be
 the damage. Programmers are human. Humans make mistakes. Programmers are
 perfect.

That wasn't the question.  There are plenty of circumstances in
which it's legal to do something once  -- say, make a phone
call to you and ask how you're doing -- and illegal to do it
one hundred million times.  You don't outlaw telephones because
people can and have used them to harass other people, you outlaw
the harassing behavior and make it subject to damages. ... which
is exactly what you described.

Probing can be knocking on your door, or it can be taking a sledgehammer
to your garage.  These are so quantitatively different that there
is a qualitative shift between the behaviors.

  -Dave



Re: scripts to map IP to AS?

2003-02-20 Thread David G. Andersen

On Thu, Feb 20, 2003 at 08:09:31AM -0500, William Allen Simpson quacked:
 
 Anybody have a pointer to scripts to map IP to AS? 
 
 There are still 10K-20K hosts spewing M$SQL slammer/sapphire packets, 
 and I'd like to start blocking routing to those irresponsible AS's 
 that haven't blocked their miscreant customers.
 
 http://isc.sans.org/port_details.html?port=1434

  You can use a quick perl wrapper around whois, or you
could use this terribly ugly hacked up traceroute-ng that I
wrote to do lookups:

  http://nms.lcs.mit.edu/software/ron/lookup_as.c

Compile with

   gcc -DSTANDALONE=1 lookup_as.c -o lookup_as -lm

And then run.  It gets the job done, but it's ugly. :)
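For the "quick perl wrapper around whois" route, a hedged Python equivalent: query RADB's plain whois interface and pull out the origin AS. The server name and response parsing are assumptions about RADB's whois format, not taken from lookup_as.c:

```python
# Ask RADB's whois server which origin AS announces a route covering an
# address, by scraping 'origin:' attributes out of the route objects.
import socket

def radb_query(ip: str, server: str = "whois.radb.net", port: int = 43) -> str:
    """Send one plain whois query and return the raw response text."""
    with socket.create_connection((server, port), timeout=10) as s:
        s.sendall((ip + "\r\n").encode("ascii"))
        chunks = []
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
    return b"".join(chunks).decode("ascii", errors="replace")

def parse_origins(response: str) -> list[str]:
    """Pull the 'origin:' attribute out of each returned route object."""
    return [line.split(":", 1)[1].strip()
            for line in response.splitlines()
            if line.lower().startswith("origin:")]

# e.g. parse_origins(radb_query("198.133.224.116")) -> a list of AS numbers
```

Rate-limit it if you point it at a public server, for the reasons in the follow-up.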

  -Dave




Re: scripts to map IP to AS?

2003-02-20 Thread David G. Andersen

I should have been a bit more specific.  The hacked up traceroute-ng
queries the radb, not a whoisd.  I've never had problems
being blocked when doing radb queries, but YMMV, of course.  I also
suggest that people be nice and rate-limit their queries so that
others don't have to do it for them...

  -Dave

On Thu, Feb 20, 2003 at 12:04:51PM -0500, George Bakos quacked:
 Careful. Many whoisds don't appreciate automated queries and will block YOUR
 ip address for some time if you cross their max query rate threshold.
 
You can use a quick perl wrapper around whois, or you
  could use this terribly ugly hacked up traceroute-ng that I
  wrote to do lookups:
  
http://nms.lcs.mit.edu/software/ron/lookup_as.c
  
  Compile with
  
 gcc -DSTANDALONE=1 lookup_as.c -o lookup_as -lm
  
  And then run.  It gets the job done, but it's ugly. :)
  
-Dave
 
 
 -- 
 George Bakos
 Institute for Security Technology Studies
 Dartmouth College
 [EMAIL PROTECTED]
 voice 603-646-0665
 fax   603-646-0666
 Key fingerprint = D646 8F91 F795 27EC FF8B  8C95 B102 9EB2 081E CB85




Re: Level3 routing issues?

2003-01-27 Thread David G. Andersen

On Sun, Jan 26, 2003 at 12:17:20AM -0500, Tim Griffin mooed:
 
 
 hc wrote:
  I am on Verizon-GNI via Qwest and Genuity and seeing the same problem as
  well.
 
 here's a plot showing the impact on BGP routing tables from seven ISPs 
 (plotted using route-views data): 
 http://www.research.att.com/~griffin/bgp_monitor/sql_worm.html

And as an interesting counterpoint to this, this graph shows
the number of BGP routing updates received at MIT before, during,
and after the worm (3 day window).  Tim's plots showed that the
number of actual routes at the routers he watched was down
significantly - these plots show that the actual BGP traffic
was up quite a bit.  Probably the withdrawals that were taking
routes away from routeviews...

http://nms.lcs.mit.edu/~dga/sqlworm.html

  -Dave




Re: Level3 routing issues?

2003-01-27 Thread David G. Andersen

On Mon, Jan 27, 2003 at 06:15:33PM -0800, Randy Bush mooed:
 
  Wow, for a minute I thought I was looking at one of our old
  plots, except for the fact that the x-axis says January 2003
  and not September 2001 :) :)
 
 seeing that the etiology and effects of the two events were quite
 different, perhaps eyeglasses which make them look the same are
 not as useful as we might wish?

  Actually, an eyeballing of the MIT data would suggest that the SQL
worm hit harder and faster than NIMDA, and resulted in a more
drastic effect on routing tables.  I've updated the page I mentioned
before:

  http://nms.lcs.mit.edu/~dga/sqlworm.html

  to also contain the graph of MIT updates during the NIMDA worm.

I should note that our route monitor moved closer to MIT's border
router between these updates - it's now colocated in the same datacenter,
and before it was across the street, which made it a bit more susceptible
to link resets during the NIMDA worm attack.  LCS is more prone to
dropping off the network than is the entire MIT campus.  Therefore, the
NIMDA graph probably has a few more session resets (the spikes up to
100,000 routes updated) than it should.

  -Dave




Re: New worm / port 1434?'

2003-01-25 Thread David G. Andersen

On Sat, Jan 25, 2003 at 10:49:01AM -0500, Eric Gauthier mooed:
 
 Ok,
 
 I'm not sure if this helps at all.  Our campus has two primary connections - 
 the main Internet and something called Internet2.  Internet2 has a routing
 table of order 10,000 routes and includes most top-tier research institutions
 in the US (and a few other places).  By 1am this morning (Eastern US time),
 all of our Internet links saturated outbound but we didn't appear to see any 
 noticeable increase in our Internet2 bandwidth.  I'm throwing this out there 
 because it may indicate that the destinations for the traffic - though large - 
 aren't completely random.
 
 Has anyone else seen this?

  It's actually fairly rational.  If you look at the size of the
I2 routing table in terms of how much of the IP space it covers,
it's a fair bit smaller than the full Internet routing table.  And
most institutions have _more_ I2 bandwidth than commodity internet
connectivity.  If the probing's roughly random, you'd expect the
I2 connection to fare better.
 
  MIT's I2 connectivity was better off than its commercial Internet
connection as well.  Our private peering link to AT&T/MediaOne was
actually in great shape (DS3, very small address space).
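If the probing really is uniform over the 32-bit space, the share of worm traffic a link attracts is just the fraction of the address space its routing table covers. A small sketch; the prefix lists below are made-up illustrations, not real I2 or commodity tables:

```python
# Fraction of IPv4 space covered by a list of (non-overlapping) prefixes:
# a uniform random scanner sends that fraction of its probes your way.
def coverage(prefix_lengths):
    return sum(2 ** (32 - p) for p in prefix_lengths) / 2 ** 32

i2 = [8, 12, 16, 16]            # hypothetical research-network table
commodity = [4, 5, 6, 8, 8]     # hypothetical commodity-table aggregate
print(coverage(i2), coverage(commodity))
```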

  -Dave




Re: can you ping mount everest?

2003-01-23 Thread David G. Andersen

On Wed, Jan 22, 2003 at 11:36:14PM -0800, Mike Lyon mooed:
 
 The link wants you to log in with a New York Times login...
 -Mike

  You can always learn from other mailing lists.

  username:  cipherpunks3
  password:  cipherpunks

  -Dave



Re: Is there a line of defense against Distributed Reflective attacks?

2003-01-19 Thread David G. Andersen

On Mon, Jan 20, 2003 at 12:25:27AM -0500, Deepak Jain mooed:
 
 As long as the car _moves_ under its own power across the highway, its
 essentially not the car manufacturers' (or the consumers') immediate
 concern.

  That's really not true.  Before car companies sell cars, they
pass (lots of) safety certification tests.  Before owners drive
cars legally, they pass a safety and emissions test.  Sure, the
highway folks clean up after the occasional tire blowout, but
there's been a lot of work put in to make sure that the engines
aren't going to drop out on a regular basis.

  If the Internet was a highway, it would be covered in burned-out engines.

  -Dave




Re: Is there a line of defense against Distributed Reflective attacks?

2003-01-17 Thread David G. Andersen

On Fri, Jan 17, 2003 at 06:38:08PM +, Christopher L. Morrow mooed:
 
  has something called Source Path Isolation Engine (SPIE).  There
 
 This would be cool to see a design/whitepaper for.. Kelly?

The long version of the SPIE paper is at:

  http://nms.lcs.mit.edu/~snoeren/papers/spie-ton.html

The two-second summary that I'll probably botch:  SPIE keeps a (very tiny)
hash of each packet that the router sees.  If you get an attack packet, 
you can hand it to the router and ask "From where did this come?"
And then do so to the next router, and so on.  The beauty of the scheme
is that you can use it to trace single-packet DoS or security attacks
as well as flooding attacks.  The downside is that it's hardware.
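The "very tiny hash" is essentially a Bloom filter over invariant packet bytes. A toy sketch of that idea; the hash choices and sizes are illustrative, not from the SPIE paper:

```python
# Each router records packets in a Bloom filter; later, a membership
# test answers "did this packet pass through here?" with some small
# false-positive rate and no false negatives.
import hashlib

class PacketDigest:
    def __init__(self, m_bits: int = 1 << 20, k: int = 3):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, packet: bytes):
        # k independent hash positions derived from a salted SHA-256
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + packet).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def record(self, packet: bytes):
        for p in self._positions(packet):
            self.bits[p // 8] |= 1 << (p % 8)

    def seen(self, packet: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(packet))

d = PacketDigest()
d.record(b"attack packet bytes")
print(d.seen(b"attack packet bytes"), d.seen(b"some other packet"))
# True False  (false positives possible, but rare at this fill level)
```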

  -Dave
 



Re: Is there a line of defense against Distributed Reflective attacks?

2003-01-16 Thread David G. Andersen

On Thu, Jan 16, 2003 at 08:48:03PM -0500, Brad Laue mooed:
 
 By way of quick review, such an attack is carried out by forging the
 source address of the target host and sending large quantities of
 packets toward a high-bandwidth middleman or several such.
 
 One method that comes to mind that can slow the incoming traffic in a
 more distributed way is ECN (explicit congestion notification), but it
 doesn't seem as though the implementation of ECN is a priority for many

   No.  ECN is, first and foremost, an optimization for TCP so that
it doesn't have to drop packets before cutting its rate back when
there's congestion in the network.  A zombie or malicious host would
just ignore the ECN bit - and the attacks you're describing never
reach the point where a host's flow control is involved.

   You might be thinking of source quench, but that's really not an
option with today's networks.

  Some other conventional alternatives have been discussed already
(ingress/egress filtering, etc).  Some less conventional options:
[Warning:  Some researchy stuff ahead]

  a)  Mazu and Arbor provide products that can detect and
  optionally shape traffic to avoid DDoS attacks.  Must be
  installed in-line to shape, and can't (AFAIK) shape at
  really really high line speeds.  But at reasonable rates - maybe
  gigabit and under - I think they can provide pretty solid
  protection.  Don't quote me for sure on the rates.

  b)  Ioannidis and Bellovin proposed a mechanism called Pushback
  for automatically establishing router-based rate limits to
  staunch packet flows during DoS attacks.
  [NDSS 2002, Implementing Pushback:  Router-Based Defense
   Against DDoS Attacks]

  c)  I stole some ideas from a sigcomm paper this year (SOS:  Secure
  Overlay Services) to propose a proactive DDoS resistance scheme
  I term Mayday.  The basic idea is that you pick some secret
  attributes of your packets - destination port, destination
  address, etc. - and only allow packets with the right values
  through.  You then tell that secret to someone like Akamai,
  and have them proxy all requests to you.  Then you ask your
  upstream to proactively deny all packets without the magical
  values.

  http://nms.lcs.mit.edu/papers/mayday-usits2003.html

  It's a little weird, but I'd be willing to bet that one of
  the big overlay providers like Akamai could actually pull it off.
  The advantage of this approach is that you can implement it
  without fixing the whole world, unlike egress filters.  The
  downside is that you need someone with lots of nodes.

  I'd be interested in hearing folk's comments about the mayday 
  paper, btw, since I have to babble about it at a conference
  in a month. ;-)

  -Dave




Re: Is there a line of defense against Distributed Reflective attacks?

2003-01-16 Thread David G. Andersen

On Fri, Jan 17, 2003 at 01:11:14AM -0500, David G. Andersen mooed:
 
   b)  Ioannidis and Bellovin proposed a mechanism called Pushback
   for automatically establishing router-based rate limits to
   staunch packet flows during DoS attacks.
   [NDSS 2002, Implementing Pushback:  Router-Based Defense
Against DDoS Attacks]

  I should have been a bit more accurate here.  The proposal for
pushback is actually earlier than the implementation paper I cited above:

  Controlling High Bandwidth Aggregates in the Network.  Ratul Mahajan,
   Steven M. Bellovin, Sally Floyd, John Ioannidis, Vern Paxson, and Scott
   Shenker.  July, 2001.

and it also included an internet-draft:

  http://www.aciri.org/floyd/papers/draft-floyd-pushback-messages-00.txt

I believe that Steve Bellovin gave a talk about it at NANOG 21:

  http://www.research.att.com/~smb/talks/pushback-nanog.pdf

  -Dave (I'll learn not to send mail past midnight some day)




Re: Weird networking issue.

2003-01-07 Thread David G. Andersen

Rule number 1 with any ethernet:  Check to make sure you have the duplex
and rate statically configured, and configured identically on both ends of
the connection.

I'd wager you've got half duplex set on one side, and full on the other...

  -Dave

On Tue, Jan 07, 2003 at 02:19:10PM -0500, Drew Weaver mooed:
 
 Hi, this is kind of a newbie question but this doesn't make a whole lot of
 sense :P
 
 I have an etherstack hub connected to a FastEthernet port on a cisco 3660
 router, these are the stats when I do a show int fast0/0:
 
 5776 input errors, 5776 CRC, 2717 frame, 0 overrun, 0 ignored
 
 Whats weird is I just cleared the counters 12 minutes ago, and already there
 are almost 6000 CRC errors. This connected via a standard Cat5 ethernet
 cable, I have tried replacing the cable to no avail.
 
 Is this a fairly normal situation, If so that's great, but it seemed rather
 ridiculous to me, and if it is not a normal situation, what would cause
 this?
 
 Any ideas are appreciated.
 Thanks,
 -Drew Weaver




Re: DNS issues various

2002-10-24 Thread David G. Andersen

On Thu, Oct 24, 2002 at 04:07:18PM -0400, Richard A Steenbergen mooed:
 
 We're still working on the distributed attacks, but eventually we'll come 
 up with something just as effective. If it was as easy to scan for 
 networks who don't spoof filter as it is to scan for networks with open 
 broadcasts, I think we'd have had that problem licked too.

  Are you sure? 

*  A smurf attack hurts the open broadcast network as much as (or more 
   than) it does the victim.  A DDoS attack from a large number
   of sites need not be all that harmful to any one traffic source.

*  'no ip directed broadcast', which is becoming the default behavior
   for many routers and end-systems,
  vs.
   'access-list 150 deny  ip ... any'
   'access-list 150 deny  ip ... any'
   ...
   'access-list 150 permit ip any any'

   (ignoring rpf, which doesn't work for everyone).

Until the default behavior of most systems is to block spoofed packets,
it's going to remain a problem.

  -Dave, whose glass is half-empty this week. :)




Re: Hunting for bogus BGP announcement for 204.106.93.155

2002-10-03 Thread David G. Andersen


On Thu, Oct 03, 2002 at 06:48:53PM +0200, Jesper Skriver mooed:
 
 On Thu, Oct 03, 2002 at 04:35:45PM +0100, [EMAIL PROTECTED]
 wrote:
 
  For the last two days, between approximately 7pm to 2am Eastern
  time, a spammer hijacked a piece of our address space, presumably
  by announcing some size of aggregate containing the IP address
  204.106.93.155. During the time that the spammer had connectivity
  using this bogus announcement,
 
 RIS didn't pick anything up
 
Nor did our BGP monitors, nor our db of Routeviews.

http://bgp.lcs.mit.edu/

Interestingly, we see _no_ announcements of any netblock containing
this address, ever.  I assume you haven't brought this address space
on-line yet?

  -Dave




Re: IP over in-ground cable applications.

2002-09-12 Thread David G. Andersen


On Thu, Sep 12, 2002 at 03:04:35PM -0400, Deepak Jain mooed:
 
 
 You would need multicast speakers (routers, etc) along the cable route to
 effectively multiply your bandwidth at all. Since cable is already
 multicasting (1 stream to many/all) I don't think I see any advantage.
 
 Unless, of course, you expect cable customers to be broadcasting to other
 cable customers (say their own home video content)... Then MPEG2 Multicast
 would be your friend.

 I don't think the answer is as simple as that.  It really depends
on the number of subscribers per last-hop multicast box, and on
the number of channels you offer / popularity distribution of
the channels.

  If you've got 5 channels and 10,000 subscribers per box,
multicast saves you nothing.  If you've got 1000 channels and
100 subscribers per box, ...
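That arithmetic can be sketched: compare broadcasting all C channels against multicasting only the channels actually being watched. The equal-popularity assumption is a simplification of mine, not from the post:

```python
# Expected number of distinct channels watched by N viewers choosing
# uniformly among C equally popular channels; multicast carries only
# these, while broadcast always carries all C.
def expected_streams(channels: int, viewers: int) -> float:
    return channels * (1 - (1 - 1 / channels) ** viewers)

for c, n in [(5, 10_000), (1000, 100)]:
    print(c, n, round(expected_streams(c, n), 1))
# 5 10000 5.0     -> all 5 channels watched anyway: multicast saves nothing
# 1000 100 95.2   -> ~95 streams instead of 1000: a big win
```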

  -Dave




Re: Bad bad routing problems?

2002-08-31 Thread David G. Andersen


In the last few days, it's been advertised, but often
withdrawn.  Perhaps the timing of the announcements and
withdrawals will help you:

MIT saw it advertised via 1 701 8001 4276:
http://bgp.lcs.mit.edu/bgpview.cgi?time=day&prefix=216.223.192.0%2F19&rel=eq&table=updates&action=list

PSG (Randy Bush's BGP feed) saw it via 2914 8001 4276:
http://bgp.lcs.mit.edu/bgpview.cgi?time=day&prefix=216.223.192.0%2F19&rel=eq&table=ns0_psg_com_updates&action=list

Playground.net saw it via 14177 15290 7018 701 8001 4276:
http://bgp.lcs.mit.edu/bgpview.cgi?time=day&prefix=216.223.192.0%2F19&rel=eq&table=nortel_ron_lcs_mit_edu_updates&action=list

... and so on.  (Change the table menu to see a different view)

  -Dave

On Sat, Aug 31, 2002 at 11:25:00AM -0500, Malayter, Christopher mooed:
 
 I'm not receiving that network at all.
 
 I looked at our border to sprint, genuity, cw, PSI, and our AADS peers.
 
 Sorry to not be of more assistance.
 
 -Chris
 
 
 -Original Message-
 From: Mike Tancsa [mailto:[EMAIL PROTECTED]]
 Sent: Saturday, August 31, 2002 11:14 AM
 To: Gerald
 Cc: [EMAIL PROTECTED]
 Subject: Re: Bad bad routing problems?
 
 
 
 
 Strange, from my network I see you via GT and via Telus but not via AT&T
 
  From me (as11647)
 BGP routing table entry for 216.223.192.0/19
 Paths: (2 available, best #2, table Default-IP-Routing-Table)
Not advertised to any peer
852 174 8001 4276
  209.115.141.1 from 209.115.141.1 (209.115.137.196)
Origin IGP, localpref 100, valid, external
Last update: Sat Aug 31 08:47:55 2002
 
6539 8001 4276
  64.7.143.42 from 64.7.143.42 (64.7.143.42)
Origin IGP, localpref 100, valid, internal, best
Community: 6539:30
Last update: Sat Aug 31 08:47:30 2002
 
 
  From the AT&T CANADA routeserver  (route-server.east.attcanada.com)
 AT&T route server
 route-server.east>show ip bgp 216.223.192.0/19
 % Network not in table
 route-server.east
 
 AT&T Canada gets to 8001 via Williams, but I don't see your /19 through them.
 
 route-server.east>show ip bgp regex _8001_
 BGP table version is 2266132, local router ID is 216.191.65.118
 Status codes: s suppressed, d damped, h history, * valid, > best, i -
 internal
 Origin codes: i - IGP, e - EGP, ? - incomplete
 
 Network  Next HopMetric LocPrf Weight Path
 *i12.129.128.0/24  216.191.64.253 80  0 7911 8001 22420
 i
 *i63.74.146.0/23   216.191.64.253 80  0 7911 8001 23368
 i
 
 
 At 10:54 AM 8/31/2002 -0400, Gerald wrote:
 
 We are seeing bad routing problems from outside our network. Can anyone
 corroborate this or help?
 
 We are on AS4276 and all traffic from us to our upstream seems good. Great
 way to spend holiday weekend. /me wonders if anyone is even awake on the
 NANOG list. :-)
 
 2 addresses in our network I've tested with are:
 
 216.223.200.14
 and 216.223.192.68
 
 Many many traces done, some interesting ones below:
 
 Here is a succesful traceroute:
  traceroute to 216.223.200.14 (216.223.200.14), 30 hops max, 40 byte packets
    1    10 ms    10 ms    10 ms  rt1.altair7.com [209.11.155.129]
    2    10 ms    15 ms    16 ms  209.10.41.130
    3    10 ms    16 ms    15 ms  ge-6-1-0.core1.sjc1.globix.net [209.10.2.193]
    4    10 ms    16 ms    16 ms  so-4-2-0.core1.sjc4.globix.net [209.10.11.221]
    5    63 ms    46 ms    47 ms  so-1-0-0.core2.cgx2.globix.net [209.10.10.150]
    6    62 ms    63 ms    47 ms  so-0-0-0.core1.cgx2.globix.net [209.10.10.157]
    7    78 ms    94 ms    94 ms  so-1-0-0.core2.nyc8.globix.net [209.10.10.162]
    8    78 ms    94 ms    94 ms  pos15-0.core2.nyc1.globix.net [209.10.11.169]
    9    78 ms    94 ms    94 ms  s5-0-peer1.nyc3.globix.net [209.10.12.18]
   10    78 ms    94 ms    94 ms  nyiix.peer.nac.net [198.32.160.20]
   11    78 ms    94 ms    93 ms  internetchannel.customer.nac.net [209.123.10.34]
   12    78 ms    94 ms    94 ms  auth-2.inch.com [216.223.200.14]
 
   These show failures before reaching our network:
  
   traceroute to 216.223.192.68 (216.223.192.68): 1-30 hops, 38 byte packets
    1  paleagw4.hpl.external.hp.com (192.6.19.2)  1.95 ms  1.95 ms  5.86 ms
    2  palgwb01-vbblpa.americas.hp.net (15.243.170.49)  0.976 ms  0.977 ms  0.976 ms
    3  svl-edge-15.inet.qwest.net (65.115.64.25)  0.976 ms !H  *  0.977 ms !H
    4  * * *
    5  * * *
  
  
  
   traceroute to 216.223.192.68 (216.223.192.68), 30 hops max, 40 byte packets
    1  e3-13.foundry1.cs.wisc.edu (198.133.224.116)  2.195 ms  1.754 ms  1.663 ms
    2  e15.extreme1.cs.wisc.edu (128.105.1.1)  0.856 ms  0.765 ms  0.737 ms
    3  144.92.128.194 (144.92.128.194)  1.550 ms  1.171 ms  1.264 ms
    4  * * *
    5  * * *
  
   All Global Crossing traces seem to loop within their router:
   traceroute to 216.223.192.68 (216.223.192.68), 30 hops max, 40 byte
   1  64.214.13.1 (64.214.13.1)  0.695 ms  0.889 ms  0.485 ms  0.453 ms  0.350 ms
   2  64.214.13.1 (64.214.13.1)  4.535 ms !N  *  0.574 ms !N
   3  *

Re: Standalone Stratum 1 NTP Server

2002-08-28 Thread David G. Andersen


On Tue, Aug 27, 2002 at 11:07:10PM -0700, Jim Hickstein mooed:
 --On Wednesday, August 28, 2002 12:51 AM -0400 David G. Andersen 
 [EMAIL PROTECTED] wrote:
 
 At work, it's all steel studs and foil-backed wallboard, and the windows 
 (for a patch GPS antenna) are _way over there_.  *sigh*   I'd love it if 
 someone would pay for my roof penetration there.

  Does your cell phone work in the room?  The CDMA time receivers
work in the strangest places.  The only places I've had no luck:

  - In the bowels of a big building along the Mass. tech corridor
  - When moved to a bad spot inside a fairly steel- and concrete-heavy
    lab at the University of Utah (it works in other spots in the lab).

But aside from that, I've got them working in network closets
and labs all over the place.  Worth giving a shot if you're really
desperate to play.  They're not quite as accurate as GPS (~10 microseconds
vs. ~2-5 microseconds), but what's a few microseconds compared to 
sticking an antenna on the roof?

(As a frequency standard, they're quite good.  But you can't autocorrect
for the CDMA propagation delay).

  -Dave

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/
  I do not accept unsolicited commercial email.  Do not spam me.



Re: Standalone Stratum 1 NTP Server

2002-08-27 Thread David G. Andersen


On Tue, Aug 27, 2002 at 11:57:39PM -0400, John Todd mooed:
 
 Hmm... $2400 is still in the pricey range to be throwing out 
 bunches of these across a network in wide distribution.  (Pardon me 
 [...]
 
 One would think that a vendor could come up with a 1u rackmount box 
 with a GPS and single-board computer (BSD or Linux-based) for ~$500 
 total cost.   Add 150% for profit and distribution costs, you're 
 still in the $1300 range, which is more reasonable.  I suppose my 
 oversimplification is the reason I'm not in the hardware business. 

   You might be imagining a somewhat larger market for standalone
stratum-1 timeservers than actually exists.  For real accuracy,
you don't want standalone -- you want a locally connected source that
you can use to tightly discipline the local clock.  (When I say real, I
mean sub-millisecond).  The difference is .. substantial.  Taken
from two of my machines on the same subnet:

      remote            st  poll  reach   delay     offset     disp

 Local CDMA              0    32   377   0.0       -0.11      0.00047
 100Mbps Ethernet        1  1024   377   0.00035    0.001103  0.01866

And if you want paranoia, go by ntp's estimate of its accuracy:

Local  maximum error 5449 us, estimated error 3 us, TAI offset 0
Ether  maximum error 584994 us, estimated error 1241 us, TAI offset 0

With that on the board... why do you need, or even want,
a standalone NTP server if you're on a budget?  Almost certainly
you have a local computer in your POP -- you can even get
Cisco routers to talk with a local time receiver, if all you
want to do is discipline your routers.  If you've got a caching
nameserver or something else in your POP, that will do just as
well.

 I'd be even happier with a PCI-bus card that I could put into an old 
 (reasonably fast) PC and a CD-ROM with an OpenBSD distribution that 
 automatically did the Right Thing.   There is a case to be made about 

  Grab a serial CDMA/GPS unit (I use the EndRun Praecis Ct because it's
CDMA;  I mention some GPS units below), plug it into your serial port, and
stick:

server 127.127.29.0 prefer
fudge 127.127.29.0 refid CDMA

   in ntp.conf.  It's about as simple as you can get.  But remember --
regardless of how nifty your local clock is, you still need to
have a good server mesh with NTP.  Clocks go bad.  CDMA base stations
screw up (we've found one so far) or change protocols unexpectedly
(three).  GPS has serious visibility issues unless you can get an actual
roof antenna (two).  Configuring this mesh in an intelligent way takes work.
Would make a great research project. :)
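A hypothetical ntp.conf combining the local refclock above with a small remote mesh might look like this (the remote hostnames are placeholders, not recommendations of specific servers):

```
# Local CDMA refclock (driver type 29 = EndRun), preferred source
server 127.127.29.0 prefer
fudge  127.127.29.0 refid CDMA

# Remote sanity checks so a misbehaving local clock gets voted out
server ntp1.example.net
server ntp2.example.net
peer   ntp.pop2.example.net
```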

  The Ct costs something like $1100.  endruntechnologies.com.
synergy-gps.com sells a really nice GPS timing unit based on the
Motorola UT+ chipset (designed for timing), including all the parts
you need, for .. eh, 600?  I forget.  Maybe a bit less.  Plug into
serial port, go.  Requires a recompiled kernel under FreeBSD and
Linux, but it's fairly easy to set up.  If you want something for
a bit less work, look at the Trimble units.

  (For reference:  I've got two of the UT+ GPS units, and 20
EndRun Praecis Ct's.  Like them both.  The Ct is a heck of a lot
easier to deploy in a datacenter, as would be the CDMA TrueTime model)

  If you're really broke, and want a stratum 1 server, host one of
our network measurement boxes.  We'll ship it to you, you provide
the network.  In return, you get a local stratum-1 timeserver, 
managed by yours truly.  (I'm serious about this offer, btw.)

  As a second option:  If you manage the connections between
your POPs, you can get really decent remote NTP performance.
The places in which NTP dies are where latencies are asymmetric.
With priority assigned to inter-POP NTP traffic and known
symmetric links, life could be quite happy.
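The asymmetric-latency failure mode falls straight out of the standard NTP offset and delay formulas; here's a sketch with made-up timestamps (not measurements from this post):

```python
# NTP clock offset and round-trip delay from the four timestamps:
# t1 = client send, t2 = server receive, t3 = server send,
# t4 = client receive. NTP assumes the outbound and return paths take
# equal time, so half of any path asymmetry appears as spurious offset.
def ntp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Symmetric path, 40 ms each way, clocks actually in sync:
off_sym, _ = ntp_offset_delay(0.000, 0.040, 0.041, 0.081)   # ~0 offset

# Same true offset (zero), but 10 ms out / 70 ms back: NTP now reports
# about -30 ms of offset, half the 60 ms asymmetry.
off_asym, _ = ntp_offset_delay(0.000, 0.010, 0.011, 0.081)
```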

   -Dave (time is very cool)

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/
  I do not accept unsolicited commercial email.  Do not spam me.



Re: looking glass

2002-07-18 Thread David G. Andersen


On Thu, Jul 18, 2002 at 12:00:38PM -0700, Scott Granados mooed:
 
 What are people using for looking glass software.  Is it just some simple 
 perl code which grabs data from the router or is it more complex than 
 that?

  It's just perl.  I have a copy of it at

   http://www.angio.net/rep/lg.tar.gz

  if you want it.
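(For the curious: the tarball is Perl, but the core idea is tiny -- map a whitelisted query to a read-only show command, run it on the router, and print the output. A rough Python equivalent, with a hypothetical command table and ssh transport; the actual Perl CGI may work differently, e.g. telnet with a login chat script:)

```python
# Minimal looking-glass sketch. The query whitelist keeps users from
# running arbitrary commands on the router.
import subprocess

ALLOWED = {
    "bgp":   "show ip bgp",
    "sum":   "show ip bgp summary",
    "trace": "traceroute",
}

def looking_glass(router, query, argument):
    cmd = ALLOWED.get(query)
    if cmd is None:
        raise ValueError("unsupported query: %s" % query)
    # Run the read-only command on the router and capture its output.
    result = subprocess.run(
        ["ssh", router, "%s %s" % (cmd, argument)],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout
```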

  -Dave

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/



Re: ARIN IP allocation questionn

2002-06-27 Thread David G. Andersen


Technically, you can't sell them to someone else.

  -Dave

On Thu, Jun 27, 2002 at 07:37:34AM -0400, Ralph Doncaster mooed:
 
 There's lots of old C's that aren't being announced any more.  You might
 be able to find one that someone can lend you to use.
 Strangely a search for portable class C on ebay didn't find anything
 though...
 
 Ralph Doncaster
 principal, IStop.com 
 
 

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/



Re: Testing Bandwidth performance

2002-06-26 Thread David G. Andersen


On Wed, Jun 26, 2002 at 06:18:00AM -0700, todd glassey mooed:
 
 Oh and use something like a SNIFFER to generate the traffic.  Most of what
 we know of as commercial computers cannot generate more than 70% to 80%
 capacity on whatever network they are on, because of driver overhead, OS
 latency, etc.  It was funny, but I remember testing FDDI on a
 UnixWARE-based platform and watching the driver suck 79% of the system
 into the floor.

  Btw, if you've got a bit of time on your hands, the Click router
components have some extremely low-overhead drivers (for specific
Ethernet cards under Linux).  They can generate traffic at pretty
impressive rates.  The Click folks used them for testing DoS traffic for a while.

  http://pdos.lcs.mit.edu/click/

  (Most of the driver overhead you see is interrupt latency; Click
uses an optimized polling style to really cram things through).  Also,
the new FreeBSD polling patches should make it so you can get more
throughput from your drivers when doing tests.  I understand there are
similar things for Linux.

  -Dave

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/



Re: Global view increase (was:BGP route explosion)

2002-05-02 Thread David G. Andersen


This is more w.r.t. the huge burst of announcements yesterday,
not a persistent increase in the routing table sizes, but..

We saw absolutely huge amounts of announcements from
1 3459 17676 (sometimes with padding)

For example, see:

http://ginseng/bgpview.cgi?time=between&start=2002-05-01+06%3A00%3A00&end=2002-05-01+07%3A00%3A00&bins=100&prefix=&rel=eq&aspath=&asrel=contain&origin_as=17676&scale=linear&table=updates_new&action=plot&View=View

(Sorry for the long URL).  Shows that we received something like 30k 
announcements that originated from 17676 between 6 and 7 am on May 1st.
They were primarily announcing /23s out of large address space ranges
allocated to APNIC, like 219.31/16 and friends.

  -Dave


On Thu, May 02, 2002 at 11:33:50AM +0100, Stephen J. Wilcox mooed:
 
 We see that too
 
 Predominantly seems to be massive amounts of /24s in a couple of nets
 which were previously /16s. Culprit would appear to be AS705
 
 *i63.0.0.0/24  62.24.196.1   100  0 286 209 701 705 i
 *i63.1.0.0/24  62.24.196.1   100  0 286 209 701 705 i
 *i63.2.0.0/24  62.24.196.1   100  0 286 209 701 705 i
 .
 
 Lots of new /20-24 in 67.0.0.0/8 also AS705
 
 In total AS705 is announcing 4432 new routes from yesterday.
 
 
 If you're interested a copy of all new routes since yesterday is at:
 
 http://noc.opaltelecom.net/newbgp020502.txt
 
 The file is 0.5MB so I couldn't really email it :)
 
 Steve
 
 
 
 On Thu, 2 May 2002, James Spenceley wrote:
 
  
  Around 15mins ago an additional ~5,000 routes entered the global view, sadly
  they appear to be hanging around. 
  
  Last Tuesday had an increase of 2,000 routes. 
  
  +7000 routes in a week is significant de-aggregation or a leak; any ideas
  on where these routes are flowing from?
  
  I've not seen an increase on any of our peers, so I can only assume it's
  coming from a network that doesn't peer particularly openly.
  
  --
  James
  
 

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/



BGP route update propagation questions

2002-04-16 Thread David G. Andersen


I'm trying to get a better feel for the dynamics of some
maybe-necessary BGP routing traffic, and had a few questions:

Under what circumstances will BGP send an update (of any sort)
to a peer when there is an internal failure that does _not_
result in the complete isolation of a prefix?

For example:

           __ AS1-rtr1 ---*-- AS2-rtr1 --\
          /                               |-- AS3
    10/8 -X- AS1-rtr2 ---*-- AS2-rtr2 ---/
          ^
          |-- link goes down at X

In this case, AS1 is announcing 10/8 to its peer, AS2.
An internal link within AS1 goes down.  Are there any
circumstances under which AS2 will announce some kind of
change to AS3?  My thoughts:

  * AS1 might announce a MED change to AS2, and AS2 might
propagate that to AS3 in some unknown way (manually?)
  * AS2 would see the path to AS1 change, and would change
its announced MED accordingly
  * For some reason, AS1 might announce a withdraw on 10/8,
and then re-announce it.  Is this possible, and under
what circumstances would this happen?
  * Other things?  I feel like the answers might depend on
networks using ibgp with route reflectors vs. full mesh,
or using a different interior routing protocol.

In a related situation, let's say AS1 did a 'clear ip bgp'
on AS1-rtr2, which was the preferred link for AS2 to get to 10/8.
Would AS2 propagate any internal BGP announcements to AS3?
(Same kinds of reasons as above, but this time AS1-rtr2
actually stopped announcing anything to AS2)

I'm not certain of the right framework in which to think about
the answers to these questions, alas.  Clues from the operational
side would be very welcome indeed.

   -Dave

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/