Re: 923Mbits/s across the ocean

2003-03-09 Thread Iljitsch van Beijnum

On Sat, 8 Mar 2003, Joe St Sauver wrote:

 you will see that for bulk TCP flows, the median throughput is still only
 2.3Mbps. 95th%-ile is only ~9Mbps. That's really not all that great,
 throughput wise, IMHO.

Strange. Why is that? RFC 1323 is widely implemented, although not
widely enabled (and for good reason: the timestamp option kills header
compression so it's bad for lower-bandwidth connections). My guess is
that the OS can't afford to throw around MB+ size buffers for every TCP
session so the default buffers (which limit the windows that can be
used) are relatively small and application programmers don't override
the default.
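For what it's worth, overriding the default is only a couple of lines; a
minimal sketch in Python (the buffer size and peer are illustrative, not
recommendations):

    import socket

    BUF = 4 * 1024 * 1024  # illustrative ~4MB, sized for a long fat path

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Set before connect() so the RFC 1323 window scale option can be
    # negotiated in the SYN.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF)
    s.connect(("example.net", 5001))  # hypothetical test host/port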



RE: 923Mbits/s across the ocean

2003-03-09 Thread Cottrell, Les

Also, as shipped, OSes come with small default maximum window sizes (I think 
Linux is typically 64KB and Solaris is 8K), so one has to get the sysadmin 
with root privs to change this. 
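On Linux the ceilings live in /proc/sys; a minimal sketch of checking them
(the paths are the standard Linux ones; reading works as any user, raising
them needs root):

    # setsockopt() requests above these ceilings are silently clamped,
    # which is why a sysadmin has to raise them before applications can
    # actually get big windows.
    for name in ("rmem_max", "wmem_max", "rmem_default", "wmem_default"):
        with open("/proc/sys/net/core/" + name) as f:
            print(name, "=", f.read().strip())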

-Original Message-
From: Iljitsch van Beijnum [mailto:[EMAIL PROTECTED] 
Sent: Sunday, March 09, 2003 5:25 AM
To: Joe St Sauver
Cc: [EMAIL PROTECTED]
Subject: Re: 923Mbits/s across the ocean



On Sat, 8 Mar 2003, Joe St Sauver wrote:

 you will see that for bulk TCP flows, the median throughput is still 
 only 2.3Mbps. 95th%-ile is only ~9Mbps. That's really not all that 
 great, throughput wise, IMHO.

Strange. Why is that? RFC 1323 is widely implemented, although not widely enabled (and 
for good reason: the timestamp option kills header compression so it's bad for 
lower-bandwidth connections). My guess is that the OS can't afford to throw around MB+ 
size buffers for every TCP session so the default buffers (which limit the windows 
that can be
used) are relatively small and application programmers don't override the default.


Re: 923Mbits/s across the ocean

2003-03-09 Thread David G. Andersen

On Sun, Mar 09, 2003 at 02:25:25PM +0100, Iljitsch van Beijnum quacked:
 
 On Sat, 8 Mar 2003, Joe St Sauver wrote:
 
  you will see that for bulk TCP flows, the median throughput is still only
  2.3Mbps. 95th%-ile is only ~9Mbps. That's really not all that great,
  throughput wise, IMHO.
 
 Strange. Why is that? RFC 1323 is widely implemented, although not
 widely enabled (and for good reason: the timestamp option kills header
 compression so it's bad for lower-bandwidth connections). My guess is
 that the OS can't afford to throw around MB+ size buffers for every TCP
 session so the default buffers (which limit the windows that can be
 used) are relatively small and application programmers don't override
 the default.

  Which makes it doubly a shame that the adaptive buffer tuning
tricks haven't made it into production systems yet.  It was
a beautiful, simple idea that worked very well for adapting to
long fat networks:

  http://www.acm.org/sigcomm/sigcomm98/tp/abs_26.html

  -dave

-- 
work: [EMAIL PROTECTED]  me:  [EMAIL PROTECTED]
  MIT Laboratory for Computer Science   http://www.angio.net/
  I do not accept unsolicited commercial email.  Do not spam me.


RE: Question concerning authoritative bodies.

2003-03-09 Thread McBurnett, Jim

See Comments In-line below..
 
 So I'm curious what people think. We have semi centralized 
 various things in
 the past such as IP assignments and our beloved DNS root 
 servers. Would it
 not also make sense to handle common security checks in a 
 similar manner? In
 creating an authority to handle this, we cut back on the 
I would question the validity of this scan...
How easy would it be to put in an ACL entry to block the scan source?

 amount of noise
 issued. I bring this up because the noise is getting louder. 
This is almost the cost of doing business...

 More and more
 networks are issuing their own relay and proxy checks. At 
 this rate, in a
 few years, we'll see more damage done to server resources by 
 scanners than
 we do from spam and those who would exploit such vulnerabilities.

Why not establish a system like dshield.org, where companies
could reference the database and submit their data?
Maybe get the backbones to sponsor it, or the Dept. of Homeland Security.
It needs to be global, and probably should be an IETF / RIR / IANA
thought process...


Thoughts??

Jim


Re: Question concerning authoritative bodies.

2003-03-09 Thread Valdis . Kletnieks
On Sun, 09 Mar 2003 11:50:04 CST, Jack Bates [EMAIL PROTECTED]  said:

 So I'm curious what people think. We have semi centralized various things in
 the past such as IP assignments and our beloved DNS root servers. Would it
 not also make sense to handle common security checks in a similar manner? In

IP assignments are factual things of record - AS1312 has 198.82/16 and
128.173/16, and no amount of value judgments will change that.  And yet,
there are scattered complaints about what it takes to get a /19 to multihome.

DNS servers are similarly things of record.  This organization has this
domain, and their servers are found where the NS entries point.  And the
dispute resolution process is, in a word, a total mess - how many *years*
has the sex.com debacle dragged on now?

So who do you trust to be objective enough about a centralized registry
of security, especially given that there's no consensus on what a proper
level of security is?  And if there's a problem, what do you do?   In our
case, do you ban an entire /16 because one chucklehead sysadmin forgot to
patch up IIS (or wasn't able to - I know of one case where one of our boxes
got hacked while the primary sysadmin was recovering from a heart bypass).
Dropping a note to our abuse@ address will probably get it fixed, but often
we're legally not *ABLE* to say much more than "we got your note and we'll
deal with the user" - the Buckley Amendment is one of those laws that I'm glad
is there, even if it does make life difficult sometimes.

 needs to be done? Would it not be better to have a single test suite run
 against a server once every six months than the constant bombardment we see
 now?

I submit to you the thesis that in general, the sites that are able to tell
the difference between these two situations are not the sites that either
situation is trying to detect.


-- 
Valdis Kletnieks
Computer Systems Senior Engineer
Virginia Tech





Re: 923Mbits/s across the ocean

2003-03-09 Thread Richard A Steenbergen

On Sun, Mar 09, 2003 at 08:29:16AM -0800, Cottrell, Les wrote:
 
  Strange. Why is that? RFC 1323 is widely implemented, although not
  widely enabled (and for good reason: the timestamp option kills header
  compression so it's bad for lower-bandwidth connections). My guess is
  that the OS can't afford to throw around MB+ size buffers for every TCP
  session so the default buffers (which limit the windows that can be
  used) are relatively small and application programmers don't override
  the default.

 Also as the OS's are shipped they come with small default maximum window
 sizes (I think Linux is typically 64KB and Solaris is 8K), and so one
 has to get the sysadmin with root privs to change this.

This is related to how the kernel/user model works in relation to TCP.  
TCP itself happens in the kernel, but the data comes from userland through 
the socket interface, so there is a socket buffer in the kernel which 
holds data coming from and going to the application. TCP cannot release 
data from its buffer until it has been acknowledged by the other side, 
in case it needs to retransmit. This means TCP performance is limited by 
the smaller of either the congestion window (determined by measuring 
conditions along the path), or the send/recv window (determined by local 
system resources).
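The window you need is the bandwidth-delay product; a quick back-of-the-envelope
in Python, using illustrative numbers for the transatlantic path in the
subject line:

    # Window needed = bandwidth * RTT (the bandwidth-delay product).
    bandwidth_bps = 923_000_000   # 923 Mbit/s
    rtt_s = 0.150                 # assume ~150 ms transatlantic RTT

    bdp_bytes = bandwidth_bps * rtt_s / 8
    print("window needed: %.1f MB" % (bdp_bytes / 2**20))
    # ~16.5 MB -- far beyond a 64KB default, and beyond what an unscaled
    # 16-bit TCP window can express, hence RFC 1323 window scaling.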

However, you can't just blindly turn up your socket buffers to large 
values and expect good results.

On the send side, the application transmitting is guaranteed to utilize 
the buffers immediately (ever seen a huge jump in speed at the beginning 
of a transfer? That's the local buffer being filled; the application 
has no way to know whether this data is going out to the wire or just to 
the kernel). Then the network must drain the packets onto the wire, sometimes 
very slowly (think about a dialup user downloading from your GigE server). 
Setting the socket buffers too high can potentially result in an 
incredible waste of resources, and can severely limit the number of 
simultaneous connections your server can support. This is precisely why 
OSes cannot ship with huge default values: what may be appropriate 
for your one-user GigE connected box might not be appropriate for someone 
else's 100BASE-TX web server (and guess which setup has more users :P).

On the receive side, the socket buffers must be large enough to
accommodate all the data received between application read()s, as well 
as having enough available space to hold future data in the 
event of a gap due to loss and the need for retransmission. However, if 
the application fails to read() the data from the socket buffer, it will 
sit there forever. Large socket buffers also open the server up to 
malicious attacks where non-swappable kernel memory consumes all 
available resources, either locally (someone dumping data over lots of 
connections, or running an application which intentionally fails to read 
data from the socket buffer), or remotely (think someone opening a bunch 
of rate-limited connections from your high speed server). It can even be 
unintentional, but just as bad (think a million confused dialup users 
accidentally clicking on your high speed video stream).

Some of this can be worked around by implementing what are called
auto-tuning socket buffers. In this case, the kernel limits the
amount of data allowed into the buffer by looking at the TCP session's 
observed congestion window. This allows you to define large send buffers 
without applications connected to slow receivers sucking up unnecessary 
resources. PSC has had example implementations for quite a while, and 
recently FreeBSD even added this (sysctl net.inet.tcp.inflight_enable=1 as 
of 4.7). Unfortunately, there isn't much you can do to prevent malicious 
receive-side buffer attacks, short of limiting the overall max buffer 
(FreeBSD implements this as an rlimit, sbsize).
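As a toy illustration of the policy (not any particular kernel's code; the
2x factor and the names are mine):

    # Cap the data admitted into the send buffer at roughly twice the
    # observed congestion window, within a hard system-wide maximum.
    def send_buffer_limit(cwnd_bytes, hard_max=16 * 1024 * 1024):
        return min(2 * cwnd_bytes, hard_max)

    # A dialup receiver (tiny cwnd) no longer pins megabytes of kernel
    # memory, while a fast long path can still grow toward hard_max.
    print(send_buffer_limit(8 * 1460))       # slow path: ~23KB
    print(send_buffer_limit(12000 * 1460))   # fast path: capped at 16MB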

Of course, you need a few other things before you can start getting into
end to end gigabit speeds. If you're transferring a file, you probably
don't want to be reading it from disk via the kernel just to send it back
to the kernel again for transmission, so various things like sendfile()  
and zero-copy implementations help get you the performance you need
locally. Jumbo frames help too, but their real benefit is not the
simplistic "hey look, there's 1/3rd the number of frames/sec" view that many
people take. The good stuff comes from techniques like page flipping, where
the NIC DMAs data into a memory page which can be flipped through the
system straight to the application, without copying it along the way. Some
day TCP may just be implemented on the NIC itself, with ALL work
offloaded, and the system doing nothing but receiving nice page-sized
chunks of data at high rates of speed. IMHO the 1500 byte MTU of ethernet 
will continue to prevent good end to end performance like this for a 
long time to come. But alas, I digress...
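Back on the sendfile() point, a minimal sketch in Python (the peer and
filename are hypothetical; the kernel moves file data to the socket without
the usual read()-into-userland-then-write() copies):

    import socket

    def serve_file(conn, path):
        with open(path, "rb") as f:
            conn.sendfile(f)  # uses the sendfile() syscall where available

    conn = socket.create_connection(("example.net", 5001))  # hypothetical
    serve_file(conn, "bigfile.bin")                         # hypothetical
    conn.close()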

-- 
Richard A Steenbergen [EMAIL PROTECTED]

Re: Question concerning authoritative bodies.

2003-03-09 Thread Jack Bates


- Original Message -
From: [EMAIL PROTECTED]
To: Jack Bates [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Sunday, March 09, 2003 12:31 PM
Subject: Re: Question concerning authoritative bodies.

 So who do you trust to be objective enough about a centralized registry
 of security, especially given that there's no consensus on what a proper
 level of security is?  And if there's a problem, what do you do?   In our
 case, do you ban an entire /16 because one chucklehead sysadmin forgot to
  patch up IIS (or wasn't able to - I know of one case where one of our boxes

There are private systems in use today, like NJABL, which act as centralized
resources. I believe that it is possible to come to an agreement on a
standardized test suite that can be used, and on how the variables concerning
number of scans and frequency should be set. I'm not suggesting a full
security evaluation of networks, but a detection mechanism that can be used
as a resource to recognize standard issues, primarily protecting email,
which is one of our most utilized resources.

 I submit to you the thesis that in general, the sites that are able to tell
 the difference between these two situations are not the sites that either
 situation is trying to detect.

I agree for the most part (excluding RoadRunner given recent events).
However, the sites that are able to tell the difference suffer the costs of
scans just the same while everyone tries to detect those unable to tell the
difference. And as I mentioned, you always have situations like RoadRunner
arise where a detection was needed, but they are able to detect the scans
and issue complaints even when they were at fault. The goal is to provide a
service that many require, to limit the amount of noise currently generated.
I do not think that we can necessarily scan and analyze every security
problem. However, I do think that there are no-brainer security issues that
can be detected which the public demands to be protected from. In
particular, open SMTP relays and unsecured proxy/SOCKS servers. Detection
of, say, the latest Sendmail or Sapphire exploits is not as critical. We can
passively detect these things from their own abuse. We cannot passively
detect open proxies and SMTP relays.
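For the record, the active test is simple in principle; a hedged sketch of a
relay probe in Python (the addresses are made up, and a real tester would try
many more variants):

    import smtplib

    # Offer a MAIL FROM/RCPT TO pair that a properly configured server
    # should refuse to relay. We stop before DATA, so nothing is sent.
    def looks_like_open_relay(host):
        try:
            s = smtplib.SMTP(host, 25, timeout=30)
            s.ehlo_or_helo_if_needed()
            code, _ = s.mail("probe@tester.example")
            if code == 250:
                code, _ = s.rcpt("victim@elsewhere.example")
            s.quit()
            return code == 250  # accepted a foreign recipient: open relay
        except (smtplib.SMTPException, OSError):
            return False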

-Jack



Re: Question concerning authoritative bodies.

2003-03-09 Thread Valdis . Kletnieks
On Sun, 09 Mar 2003 13:09:14 CST, Jack Bates said:

 There are private systems in use today like NJABL which act as centralized

private systems. Plural. Because...

 resources. I believe that it is possible to come to an agreement on a
 standardized test suite that can be used, and on how the variables concerning
 number of scans and frequency should be set. I'm not suggesting a full

Forgive my cynicism, but... you're saying this on the same mailing list where
it's possible to start a flame-fest by saying that ISPs should ingress-filter
RFC1918 source addresses so they don't pollute the net at large? ;)

I've been participating in the Center for Internet Security development of
security benchmarks - it was hard enough to get me, Hal Pomeranz, and the
reps from DISA and NSA to agree on standards for sites to apply *to themselves*.
There are a lot of things that I think are good ideas that I don't want other
sites checking for, no matter how well intentioned they are.

I'd just *LOVE* to hear how you intend to avoid the same problems that the crew
from ORBS ran into with one large provider who decided to block their probes.
Failing to address that scenario will guarantee failure.





Re: Question concerning authoritative bodies.

2003-03-09 Thread Jack Bates

From: Valdis.Kletnieks

 I'd just *LOVE* to hear how you intend to avoid the same problems that the
 crew from ORBS ran into with one large provider who decided to block their
 probes. Failing to address that scenario will guarantee failure.

Run the probes from the DNS root servers. Problem solved. Go ahead and block
them. haha.

Seriously, I do understand that some networks would block the probes. This
is to be expected. Many of these same networks block probes from current
lists or issue "do not probe" statements. A network is more likely to
concede to tests from a central authority that limits what is tested and how
often if it means the reduction of scans from numerous sources for lists
such as DSBL. The only way such a resource would work is if the largest
networks back it. Blocking the scans at a TCP/IP level is easily detectable:
provider received email from said server, IP was submitted for testing, no
connection can be established to said server. Place it in the "wouldn't
allow scan" list. Politely ask AOL to use the "wouldn't allow scan" list for
all inbound SMTP connections.
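Detecting the block itself is just a connect-level check; a minimal sketch
(the timeout value is illustrative):

    import socket

    # A host that just sent us mail gets probed on port 25; classify the
    # outcome so "blocked our tester" is distinguishable from "down".
    def probe_port25(ip, timeout=15):
        try:
            socket.create_connection((ip, 25), timeout=timeout).close()
            return "accepted"
        except socket.timeout:
            return "filtered"      # silently dropped -- likely a block
        except ConnectionRefusedError:
            return "refused"       # RST came back: nothing listening
        except OSError:
            return "unreachable"   # routing/ICMP errors, etc.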

People want the abuse of unsecured relays for SMTP stopped. I'm afraid it is
a choice of the lesser of two evils. The scans are going to happen no matter
what. The question is, will administrators accept that a single run of a
test suite on a server that has established connections to other servers is
better than just having the entire 'net issuing their own scans? Am I wrong
in assuming that a majority of networks use SMTP and do not wish their
servers to be abused?

-Jack



Re: Question concerning authoritative bodies.

2003-03-09 Thread jlewis

On Sun, 9 Mar 2003, Jack Bates wrote:

   made. Instead of contacting 3-5 DNSBLs, one must contact every ISP that
   happened to do a scan during the outage period. Centralizing scanning for
   security issues is a good thing in every way. It is the responsible
   thing to do.

This, IMO, is where the real headache lies.  If every provider (or just
every large provider) has their own private DNSBL, and worse, doesn't do
much to document how it works...i.e. how to check if IPs are in it, how to
get IPs out of it, then it becomes a major PITA to deal with these
providers when one of your servers gets into their list.  I've personally
dealt with this several times over the past couple years with Earthlink
and more recently with AOL.  In each case, there was no way (other than
5xx errors or even connections refused) to tell an IP was listed.  In each
case, there was no documented procedure for getting delisted.  In AOL's
case, they couldn't even tell us why our mail was being rejected or our
connections to their MX's blocked and I had to wait a week for their
postmaster dept. to get to my ticket and return my call to fill me in on
what was going on.

 networks are issuing their own relay and proxy checks. At this rate, in a
 few years, we'll see more damage done to server resources by scanners than
 we do from spam and those who would exploit such vulnerabilities.

I doubt that's possible.  If an average sized ISP mail server receives
messages from, say, a few thousand unique IPs/day, and if that ISP wanted
to test every one of those IPs (with some sane frequency limiting of no
more than once per X days/weeks/months) then it doesn't take long at all
to get through the list.  Suppose every one of those servers decided to
test you back.  Now you're looking at a few thousand tests/day (really a
fraction of that if they do any frequency limiting).  I've got servers
that each reject several hundred thousand (sometimes 1 million)  
messages/day using a single DNSBL.
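The back-of-the-envelope version, using the illustrative figures above:

    unique_peers_per_day = 3000   # "a few thousand unique IPs/day"
    retest_interval_days = 30     # "no more than once per X ... months"
    rejects_per_day = 500_000     # what one DNSBL already rejects here

    probes_received = unique_peers_per_day / retest_interval_days
    print(probes_received)                    # ~100 probes/day back at you
    print(rejects_per_day / probes_received)  # spam load dwarfs scan load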

Also, I suspect consensus on a central authority and testing methods is 
highly unlikely.  People can't agree on "what is spam?" or how to deal 
with providers who turn a blind eye to spammer customers (SPEWS).  How 
will a single central DNSBL bring all these people with opposing views 
together?

Two obvious reasons for the existence of dozens of DNSBLs are:

1) not agreeing with the policies of existing ones...thus you start your 
own
2) not trusting existing ones (not being willing to give up control over 
what you block to some 3rd party), so you start your own

I suspect AOL and Earthlink run their own DNSBLs primarily for the second
reason.  How would you convince them to trust and give up control to a
central authority?

Even if IANA were to create or bless some existing DNSBL and decree that
all IP address holders will submit to testing or have their space revoked
(yeah, that'll happen) there would still be those who weren't happy with
the central DNSBL thus creating demand for additional ones.

 network. These arguments would be diminished if an authoritative body
 handled it in a proper manner. At what point do we as a community decide
 that something needs to be done? Would it not be better to have a single
 test suite run against a server once every six months than the constant
 bombardment we see now?

Parts of the community have already decided and have helped to create 
central quasi-authoritative DNSBLs.  If nobody uses a DNSBL, who cares 
what's in it?  If a sufficient number of systems use a DNSBL, that creates 
authority.

--
 Jon Lewis [EMAIL PROTECTED]|  I route
 System Administrator|  therefore you are
 Atlantic Net|  
_ http://www.lewis.org/~jlewis/pgp for PGP public key_



Re: Port 445 issues (was: Port 80 Issues)

2003-03-09 Thread james

Here is a graph of scans for port 445:

http://isc.incidents.org/port_details.html?port=445


Re: Question concerning authoritative bodies.

2003-03-09 Thread jlewis

On Sun, 9 Mar 2003, Jack Bates wrote:

 networks back it. Blocking the scans at a TCP/IP level is easily detectable:
 provider received email from said server, IP was submitted for testing, no
 connection can be established to said server. Place it in the "wouldn't
 allow scan" list. Politely ask AOL to use the "wouldn't allow scan" list for
 all inbound SMTP connections.

Lots of people run outgoing mail servers that don't accept connections 
from the outside.  A scary number of people run multihomed mail servers 
where traffic comes in on one IP, leaves on another, and the output IP 
doesn't listen for SMTP connections.

 People want the abuse of unsecured relays for smtp stopped. I'm afraid it is

Some do.  Some see absolutely nothing wrong with their running open 
relays.  You're going to need a serious authority figure with some 
effective means of backing up their policy to change these minds.

BTW...these topics have been discussed before.  Before we all get warnings 
from the nanog list police, have a look at the thread I started back in 
8-2001 http://www.cctec.com/maillists/nanog/historical/0108/msg00448.html
 
--
 Jon Lewis [EMAIL PROTECTED]|  I route
 System Administrator|  therefore you are
 Atlantic Net|  
_ http://www.lewis.org/~jlewis/pgp for PGP public key_



Re: Port 445 issues (was: Port 80 Issues)

2003-03-09 Thread Jonathan Claybaugh


Are other people having problems with this right now?  
There doesn't seem to be very much traffic or information about this on any of 
the security lists (it is Sunday...).  
The last posted URL points to an impending storm...

Other operators' opinions about blocking port 445 before this thing starts 
spreading faster than it already is?



On Sunday 09 March 2003 03:03 pm, james wrote:
 Here is a graph of scans for port 445:

 http://isc.incidents.org/port_details.html?port=445



Re: Port 445 issues (was: Port 80 Issues)

2003-03-09 Thread Johannes Ullrich


 Are other people having problems with this right now?  
 There doesn't seem to be very much traffic or information about this on any of 
 the security lists (it is Sunday...).  
 The last posted URL points to an impending storm...
 
 Other operators opinions about blocking port 445 before this thing starts 
 spreading faster than it already is?

IMHO, this is similar in impact to Opaserv. As an ISP, I would probably block
445 just to avoid having lots of people call Monday morning complaining about
slow connections after they got infected. This worm is unlikely to cause
major 'global' network slowdowns, so filtering further upstream probably
doesn't make much sense.

The main 'facts' so far:
- this virus does attempt to exploit weak passwords, not just open or
no-password shares
- there are some reports that this worm has a VNC or IRC backdoor component,
which opens the infected machines to future exploits.
- port 445 has gotten a lot of attention from the malware community recently,
so there are likely further exploits in the works.



  http://isc.incidents.org/port_details.html?port=445
 
 


-- 

[EMAIL PROTECTED] Collaborative Intrusion Detection
 join http://www.dshield.org


Re: Question concerning authoritative bodies.

2003-03-09 Thread E.B. Dreger

 Date: Sun, 9 Mar 2003 14:59:05 -0500 (EST)
 From: jlewis


 In AOL's case, they couldn't even tell us why our mail was
 being rejected or our connections to their MX's blocked and I
 had to wait a week for their postmaster dept. to get to my
 ticket and return my call to fill me in on what was going on.

E.  Much better to put a semi-descriptive code in the 5.x.x
and give a contact phone number and/or off-net email box.


 Parts of the community have already decided and have helped to
 create central quasi-authoritative DNSBLs.  If nobody uses a
 DNSBL, who cares what's in it?  If a sufficient number of

True.  It cracks me up when someone complains about being on
Selwerd XBL.


Eddy
--
Brotsman & Dreger, Inc. - EverQuick Internet Division
Bandwidth, consulting, e-commerce, hosting, and network building
Phone: +1 (785) 865-5885 Lawrence and [inter]national
Phone: +1 (316) 794-8922 Wichita

~
Date: Mon, 21 May 2001 11:23:58 + (GMT)
From: A Trap [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Please ignore this portion of my mail signature.

These last few lines are a trap for address-harvesting spambots.
Do NOT send mail to [EMAIL PROTECTED], or you are likely to
be blocked.



Re: Port 445 issues (was: Port 80 Issues)

2003-03-09 Thread Sean Donelan

On Sun, 9 Mar 2003, Jonathan Claybaugh wrote:
 Are other people having problems with this right now?
 There doesn't seem to be very much traffic or information about this on any of
 the security lists (it is Sunday...).
 The last posted URL points to an impending storm...

 Other operators opinions about blocking port 445 before this thing starts
 spreading faster than it already is?

Blocking ports in the core doesn't stop stuff from spreading.  There are
too many alternate paths in the core for systems to get infected through.
In reality, backbones dropped 1434 packets as a traffic management practice
(excessive traffic), not as a security management practice (protecting
users).

So far the Deloder worm appears to be responding to normal congestion
feedback controls, limiting its network impact.  Like CodeRed, Nimda, etc.,
some edge providers may need to implement network controls due to
scanning activities causing cache busting, but I suspect most network
backbones will not need to do anything.




Re: Question concerning authoritative bodies.

2003-03-09 Thread J.A. Terranson


On Sun, 9 Mar 2003, E.B. Dreger wrote:

 True.  It cracks me up when someone complains about being on
 Selwerd XBL.

You may find it funny, but I do not.  I get literally dozens, possibly
hundreds of calls a year about that moron.  He costs us real money in lost
cycles.  His inclusion in the various master lists also hurts the validity
of those lists (which I've often wondered about: is it possible that Selwerd
is actually trying to point out the lunacy of [many] lists?).

-- 
Yours, 
J.A. Terranson
[EMAIL PROTECTED]






Re: Port 445 issues (was: Port 80 Issues)

2003-03-09 Thread Jack Bates

From: Sean Donelan


 So far the Deloder worm appears to be responding to normal congestion
 feedback controls, limiting its network impact.  Like CodeRed, Nimda, etc.,
 some edge providers may need to implement network controls due to
 scanning activities causing cache busting, but I suspect most network
 backbones will not need to do anything.

I agree. It will mostly be useful at edge networks to spot outbound traffic
from possibly infected users. Port 445 traffic should normally be very light,
and I suspect that 99% of the systems issuing it will be found to be infected
with at least one worm or virus, and probably have more security issues. My
last 445-spewing customer had 3 backdoor programs, 5 viruses, and 2 worms.
It was, of course, a school computer.

The problem with blocking is what happens if you decide to remove the blocks.
Upon removal of the 1434 filters from my EBGP routers, I immediately saw 3
infected systems start spewing. One of them, scarily, was a dialup, while
another was on a transit customer's network, and we of course shut him down.
If we protect the customer, the customer won't fix the problem. Blocks always
have to be used with caution because of this.

-Jack



Re: Port 445 issues (was: Port 80 Issues)

2003-03-09 Thread james

 So far the Deloder worm appears to be responding to normal congestion
 feedback controls, limiting its network impact.  Like CodeRed, Nimda, etc.,
 some edge providers may need to implement network controls due to
 scanning activities causing cache busting, but I suspect most network
 backbones will not need to do anything.


I agree this is not a backbone issue. Since we are an ISP and at the edge,
it is a good place to drop this. Traffic is not as large, as of yet, as the
SQL worm. Blocking port 445, for us, means far less $$ in support time to
deal with abuse reports and infected users.



Re: Question concerning authoritative bodies.

2003-03-09 Thread jlewis

On Sun, 9 Mar 2003, E.B. Dreger wrote:

  In AOL's case, they couldn't even tell us why our mail was
  being rejected or our connections to their MX's blocked and I
  had to wait a week for their postmaster dept. to get to my
  ticket and return my call to fill me in on what was going on.
 
 E.  Much better to put a semi-descriptive code in the 5.x.x
 and give a contact phone number and/or off-net email box.

There was a multiline message (when our connections weren't just refused 
or immediately closed).

550-The information presently available to AOL indicates that your server
550-is being used to transmit unsolicited bulk e-mail to AOL. Based on AOL's
550-Unsolicited Bulk E-mail policy at http://www.aol.com/info/bulkemail.html
550-AOL cannot accept further e-mail transactions from your server or your
550-domain.  Please have your ISP/ASP contact AOL to resolve the issue at
550 703.265.4670.

Trouble was, the people at 703.265.4670 can't help you.  They just take 
your name, number, and some other basic info, and open a ticket that the 
postmaster group will get to eventually.

On the affected system, I ended up changing the source IP for talking to 
AOL's servers.
 
 True.  It cracks me up when someone complains about being on
 Selwerd XBL.

xbl.selwerd.cx might be useful for a few points in a spamassassin setup.  
I don't use it.

[EMAIL PROTECTED] implied that some of the other DNSBLs include selwerd.  I'm 
not aware of any, but I'm sure there are lots of DNSBLs I've never heard 
of and know nothing about.

--
 Jon Lewis [EMAIL PROTECTED]|  I route
 System Administrator|  therefore you are
 Atlantic Net|  
_ http://www.lewis.org/~jlewis/pgp for PGP public key_




Re: Question concerning authoritative bodies.

2003-03-09 Thread up


We just had this same exact thing happen to us, but not by AOL, by
Comcast.  We have a lot of aliases pointing to Comcast email addresses, so
my best guess is that one or more of them had enough spam or spam bounces
going to them to trigger something.  Nobody there could tell me exactly
what happened, but after a bunch of calls, they delisted our mail server
about 3 days later.  In the meantime, we just routed the mail going to
Comcast through another server.

On Sun, 9 Mar 2003 [EMAIL PROTECTED] wrote:


 On Sun, 9 Mar 2003, E.B. Dreger wrote:

   In AOL's case, they couldn't even tell us why our mail was
   being rejected or our connections to their MX's blocked and I
   had to wait a week for their postmaster dept. to get to my
   ticket and return my call to fill me in on what was going on.
 
  E.  Much better to put a semi-descriptive code in the 5.x.x
  and give a contact phone number and/or off-net email box.

 There was a multiline message (when our connections weren't just refused
 or immediately closed).

 550-The information presently available to AOL indicates that your server
 550-is being used to transmit unsolicited bulk e-mail to AOL. Based on AOL's
 550-Unsolicited Bulk E-mail policy at http://www.aol.com/info/bulkemail.html
 550-AOL cannot accept further e-mail transactions from your server or your
 550-domain.  Please have your ISP/ASP contact AOL to resolve the issue at
 550 703.265.4670.

 Trouble was, the people at 703.265.4670 can't help you.  They just take
 your name, number, and some other basic info, and open a ticket that the
 postmaster group will get to eventually.

 On the affected system, I ended up changing the source IP for talking to
 AOL's servers.

  True.  It cracks me up when someone complains about being on
  Selwerd XBL.

 xbl.selwerd.cx might be useful for a few points in a spamassassin setup.
 I don't use it.

 [EMAIL PROTECTED] implied that some of the other DNSBLs include selwerd.  I'm
 not aware of any, but I'm sure there are lots of DNSBLs I've never heard
 of and know nothing about.

 --
  Jon Lewis [EMAIL PROTECTED]|  I route
  System Administrator|  therefore you are
  Atlantic Net|
 _ http://www.lewis.org/~jlewis/pgp for PGP public key_




James Smallacombe PlantageNet, Inc. CEO and Janitor
[EMAIL PROTECTED]   http://3.am
=