Re: An A record is an MX record and is a missing MX....

2003-04-03 Thread Craig Partridge


In message <[EMAIL PROTECTED]>, Indra PRAMANA writes:

>If you only have one mail exchange server to serve your domain, you don't 
>need MX records. An A record pointing to your mail server is sufficient.

I think what you meant was that an A record for your domain name is
sufficient.

Recall, A records don't point to anything -- they simply provide the
address.

Craig


Re: An A record is an MX record and is a missing MX....

2003-04-03 Thread Craig Partridge


In message <[EMAIL PROTECTED]>, "Gerardo Gregory" writes:

>Since then I have learned that some MTA's will look for an A record if it 
>cannot find an MX record and use the A record instead. 

That's indeed what the standard says.  I put it into RFC 974 after consulting
with Jon Postel about preferred ways to ensure robust delivery.
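
For anyone who wants to see the rule in code form, here's a minimal sketch
of the MX-then-A lookup using the third-party dnspython package.  The domain
name and the helper's name are placeholders, and error handling is trimmed to
the two cases that matter here -- this is an illustration, not MTA code:

    import dns.resolver   # third-party "dnspython" package

    def mail_targets(domain):
        """Return (preference, host) pairs to try, per the MX-then-A rule."""
        try:
            answers = dns.resolver.resolve(domain, "MX")
            # Real MX records exist: try them in order of preference.
            return sorted((r.preference, r.exchange.to_text()) for r in answers)
        except dns.resolver.NoAnswer:
            # No MX records: treat the domain itself as an implicit MX with
            # preference 0 -- delivery goes to the address in the domain's
            # own A record (not to some separately named "mail host").
            dns.resolver.resolve(domain, "A")   # raises if there's no A either
            return [(0, domain)]

    print(mail_targets("example.com"))   # placeholder domain

Note that the fallback queries the A record of the domain name itself,
which is the point of the previous message.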

Craig


quick note on hash based tracebacks

2003-01-20 Thread Craig Partridge


The capabilities of hash-based traceback have come up a few times in the
past couple of days.  Most of the discussion has been quite accurate (always
nice to see one's work understood!), but there are two points that I thought
might benefit from clarification:

> The SPIE hash-based 
> traceback is a much cooler idea, but it has some practical limitations, 
> including the need to do the trace in more or less real-time (once the 
> hash table fills up, it becomes useless), and the need for very large 
> amounts of very fast memory on the tracing routers.

So that folks understand this point: SPIE requires a fast memory equal to
a fraction of a second (the fraction varies depending on configuration)
times about 0.2% of the link bandwidth to store its hash information.  It then
expects each router to cache (on disk) all the hash tables recorded for
some period of time.
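
For readers who haven't dipped into the cited papers: the hash information
is kept as a Bloom filter of packet digests (hashes over the packet's
invariant header fields plus a short payload prefix).  Here's a toy Python
sketch of that structure -- the table size, hash count, hash function and
digest input are placeholders chosen for illustration, not the values SPIE
actually uses:

    import hashlib

    class PacketDigestTable:
        """Toy Bloom filter of packet digests; sizes are illustrative only."""

        def __init__(self, bits=2**20, hashes=3):
            self.bits = bits
            self.hashes = hashes
            self.table = bytearray(bits // 8)

        def _positions(self, digest_input):
            # Derive the k bit positions from salted SHA-256 hashes.  (SPIE
            # uses its own hash families; SHA-256 is just a stand-in here.)
            for salt in range(self.hashes):
                h = hashlib.sha256(bytes([salt]) + digest_input).digest()
                yield int.from_bytes(h[:8], "big") % self.bits

        def record(self, digest_input):
            # A router records a few bits per forwarded packet, which is why
            # the memory needed is a small fraction of the link bandwidth.
            for pos in self._positions(digest_input):
                self.table[pos // 8] |= 1 << (pos % 8)

        def may_have_seen(self, digest_input):
            # Traceback query: "did this packet pass through here?"
            # False positives are possible; false negatives are not.
            return all(self.table[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(digest_input))

    # The digest input would be the packet's invariant header fields plus a
    # short payload prefix; this byte string is just a placeholder.
    table = PacketDigestTable()
    table.record(b"invariant-header-fields+payload-prefix")
    print(table.may_have_seen(b"invariant-header-fields+payload-prefix"))  # True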

You can actually reduce the router's storage needs by simply sending old
hash tables off to an archive (say at your operations center).  [Remember,
the amount of data kept is a fraction of a percent of the link speed -- so
even if you've got ten lines into a box, the total bandwidth required to copy
their tables to an archive is, *at worst*, around 1%-2% of the bandwidth of the 
fastest link].
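
To put rough numbers on the two paragraphs above, a small back-of-the-envelope
calculation in Python.  Only the 0.2% figure and the ten-link example come
from the text; the link speed, memory window and retention period are
assumptions picked purely for illustration:

    # Back-of-the-envelope sizing from the figures in this note.
    link_bps        = 2.5e9     # assumed link speed, bits/sec (an OC-48)
    digest_fraction = 0.002     # ~0.2% of link bandwidth goes into hash tables
    window_sec      = 0.5       # "fraction of a second" of fast memory (assumed)
    num_links       = 10        # the "ten lines into a box" example above
    retention_sec   = 3600      # assumed archive retention: one hour

    fast_memory_bytes = link_bps * digest_fraction * window_sec / 8
    archive_bps       = link_bps * digest_fraction * num_links
    archive_bytes     = archive_bps * retention_sec / 8

    print(f"fast memory per link  : {fast_memory_bytes / 2**20:.1f} MiB")
    print(f"archive feed, 10 links: {archive_bps / 1e6:.0f} Mb/s "
          f"({100 * archive_bps / link_bps:.0f}% of the fastest link)")
    print(f"archive for one hour  : {archive_bytes / 2**30:.1f} GiB")

With all ten links at the fastest speed, the archive feed works out to the
2% worst case mentioned above.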

> There was an IETF 
> BoF on it, but the folks behind it haven't been pushing it much.  
> (Randy, do you know the status of it?)

Randy encouraged us to go off and write a spec on our own and just bring it
to IETF when done.  Since we've got a customer paying us to work on a
deployable system in his network, we figured we'd take advantage of the
experience (something about "running code" :-)).

Craig

PS: For more info, see the paper in the latest issue of IEEE/ACM Trans. on
Networking, or read the (somewhat less detailed) paper from ACM SIGCOMM 2001.
The SIGCOMM paper is available on-line at no charge:

http://doi.acm.org/10.1145/383059.383060



on-line briefing on NRC study of Internet on 9/11 of last year

2002-11-20 Thread Craig Partridge


Dave Clark, Sean Donelan and I will be briefing the National Research Council
report on how the Internet handled the events of 9/11/2001 on Thursday morning.
The report will be available on-line this evening and the briefing will be
webcast.

For more details see www.nas.edu

Thanks!

Craig


Re: DNS issues various

2002-10-24 Thread Craig Partridge


On Thu, 24 Oct 2002, Barry Shein wrote:
> Something I'd love to see is a blue-ribbon commission (meaning, made
> up of people with real clue) whose job it was to come up with a
> bird's-eye view of what the internet would look like if it were
> designed from scratch today.

There's a workshop on just this kind of subject, from an even higher bird's-eye
view (namely, how we should think about architecting networks), to be held
at SIGCOMM 2003.  Personally I'm looking forward to it, and not just
because SIGCOMM 2003 is in Germany.

http://www.acm.org/sigcomm/ccr/cfp/CCR-arch-call.html

Craig



Re: effects of NYC power outage

2002-07-22 Thread Craig Partridge



In message <[EMAIL PROTECTED]>, senthil ayyasamy writes:

> BGP stability was normal on 9/11. As we know only
>the telephone network suffered more whereas internet
>remained stable. Their might have been some problems
>in the access because of the flash crowd problem.

I've now seen a lot of data on 9/11 and BGP (and other metrics) and,
while final results and interpretation will wait for the NRC report, I will
say that the data on reachability and the like varies dramatically,
depending on where it was measured, the granularity of the measurements, and
other issues.

Thanks!

Craig



effects of NYC power outage

2002-07-22 Thread Craig Partridge



Anyone got good data comparing the effects on the Net (BGP reachability,
etc.) of this weekend's NYC power outage with the effects of the power outage
late on September 11th?

I'm on a National Academy of Sciences committee looking at how the Internet
fared on 9/11 and we're always in search of good comparative data.

Thanks!

Craig Partridge
Chief Scientist, BBN Technologies



Re: list problems?

2002-05-22 Thread Craig Partridge



In message <[EMAIL PROTECTED]>, Richard Irving writes:

>  This PC and Internet revolution was founded by men with Advanced
>Degree's from Prominent Ivy League Colleges...

Well, remember that the Internet revolution wasn't Bill's -- he's a follower.
Now if he'd finished his Harvard degree in Applied Math, then, maybe...
Harvard was an ARPANET site when he was there -- some of the more senior
students used it.

Craig



Re: packet reordering at exchange points

2002-04-11 Thread Craig Partridge



In message <00ae01c1e125$ba6b5380$[EMAIL PROTECTED]>, "Jim Forster" writes:

>Sure, see the original Van Jacobson-Mike Karels paper "Congestion Avoidance
>and Control", at http://www-nrg.ee.lbl.gov/papers/congavoid.pdf.  Briefly,
>TCP end systems start pumping packets into the path until they've gotten
>about RTT*BW worth of packets "in the pipe".  Ideally these packets are
>somewhat evenly spaced out, but in practice in various circumtances they can
>get clumped together at a bottleneck link.  If the bottleneck link router
>can't handle the burst then some get dumped.

Actually, it is even stronger than that -- in a perfect world (without
jitter, etc.), the packets *will* get clumped together at the bottleneck
link.  The reason is that for every ack, TCP's pumping out two back-to-back
packets -- but the acks are coming back at approximately the spacing
at which full-sized data packets get through the bottleneck link... So you're
sending two segments (or 1.5 if you ack every other segment) in the time
the bottleneck can only handle one.

[Side note: this works because during slow start you're not sending during
the entire RTT -- you're sending bursts at the start of the RTT, and with each
round of slow start you fill more of the RTT.]
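
For anyone who wants to see the arithmetic play out, here's a toy Python
sketch of that ack clocking.  The service time, RTT and number of rounds are
arbitrary assumptions, delayed acks are ignored, and the bottleneck queue is
assumed to drain between rounds:

    # Toy model: acks return at the spacing set by the bottleneck, each ack
    # triggers two back-to-back segments, so data arrives at the bottleneck
    # at twice the rate it can forward it.  All constants are illustrative.
    SERVICE = 1.0    # assumed bottleneck service time per segment
    RTT     = 16.0   # assumed round-trip time, in the same units
    ROUNDS  = 4      # number of slow-start rounds to print

    def simulate():
        acks_this_round = 1            # the very first segment's ack
        for rnd in range(1, ROUNDS + 1):
            queue = peak = 0
            # Acks arrive one SERVICE apart; each releases two back-to-back
            # segments, of which the bottleneck can forward only one in that
            # interval -- so the queue grows by one segment per ack.
            for _ in range(acks_this_round):
                queue += 2            # two segments arrive back-to-back
                queue -= 1            # only one drains during the ack interval
                peak = max(peak, queue)
            segments = 2 * acks_this_round
            busy = segments * SERVICE
            print(f"round {rnd}: {segments:2d} segments, "
                  f"peak queue {peak:2d}, "
                  f"bottleneck busy {busy:4.1f} out of an RTT of {RTT:.1f}")
            acks_this_round = segments  # every segment is acked (no delayed acks)

    simulate()

Each round the burst doubles, so the peak queue doubles and the bottleneck's
busy period covers more of the RTT -- once it covers the whole RTT, slow start
has filled the pipe.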

Craig