Re: Google Public DNS Problems?

2013-05-02 Thread Perry Lorier
On 5/1/13 12:38 PM, Blair Trosper wrote:

 That's all well and good, but I certainly wouldn't expect nslookup
 gmail.com or nslookup google.com to return SERVFAIL.


Do you have traceroutes to 8.8.8.8 and 8.8.4.4?


Re: Rate of growth on IPv6 not fast enough?

2010-04-19 Thread Perry Lorier



LSN is not trivial.

Here are some unverified calculations I did on the problem of scaling NAT.
  


One of my colleagues here (Shane Alcock) did some research into Service 
Provider NAT based on passive traces from a New Zealand residential 
ISP[1].  By passively looking at connections he investigated how you 
could dimension a NAT box for an ISP.  His research is available at 
http://www.wand.net.nz/~salcock/spnat/tech_report.pdf .  If walls of 
text scare you (why are you reading this mailing list then?) skip 
through and look at the graphs (page 3 onwards).
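
Just to make the dimensioning problem concrete, here's a back-of-envelope 
sketch in Python; the per-subscriber flow count and subscriber total are 
invented numbers for illustration, not figures from the report.

# Back-of-envelope LSN port arithmetic (illustrative numbers only,
# not measurements from the WAND report).
USABLE_PORTS_PER_IP = 65536 - 1024   # skip the well-known port range
PEAK_FLOWS_PER_SUB = 300             # assumed concurrent flows per subscriber
SUBSCRIBERS = 10_000

subs_per_public_ip = USABLE_PORTS_PER_IP // PEAK_FLOWS_PER_SUB
public_ips_needed = -(-SUBSCRIBERS // subs_per_public_ip)  # ceiling division

print(f"{subs_per_public_ip} subscribers per public IP")
print(f"{public_ips_needed} public IPs for {SUBSCRIBERS} subscribers")

The point of the passive traces is that they let you replace the invented 
per-subscriber flow count with a measured distribution.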




[1]: Contrary to Joe Abley's belief, I'm not aware of any major DSL 
provider in NZ that is doing NAT inside their network -- although almost 
all of the CPEs in New Zealand will NAT you, so as an end user you 
usually do end up behind NAT, but it is one under your own control and 
can be eliminated with a careful choice of CPE.  There are some wifi 
providers and some 3G APNs that will put you behind ISP NAT.




Re: IPv6 Deployment for the LAN

2009-10-23 Thread Perry Lorier



WRT Anycast DNS; Perhaps a special-case of ULA, FD00::53?

You want to allow for more than one for obvious fault isolation and 
load balancing reasons.  The draft suggested using <prefix>::1.  I 
personally would suggest getting a well-known ULA-C allocation 
assigned to IANA, then use <prefix>::<protocol assignment>:1, 
<prefix>::<protocol assignment>:2 and <prefix>::<protocol 
assignment>:3, where <protocol assignment> could be 0035 for DNS 
and 007b for NTP, and if you're feeling adventurous you could use 
0019 for outgoing SMTP relay.
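
To make the scheme concrete, here's a quick Python sketch of how those 
addresses would be built.  fd00::/96 is just a stand-in for the 
hypothetical well-known allocation, not a real assignment.

import ipaddress

# <prefix>::<protocol assignment>:<instance>, using the hex-converted
# port numbers from the text (0x0035=53/DNS, 0x007b=123/NTP, 0x0019=25/SMTP).
PREFIX = ipaddress.IPv6Network("fd00::/96")   # placeholder prefix
ASSIGNMENTS = {"dns": 0x0035, "ntp": 0x007b, "smtp-relay": 0x0019}

def anycast_address(service, instance):
    return PREFIX.network_address + (ASSIGNMENTS[service] << 16) + instance

for svc in ASSIGNMENTS:
    print(svc, [str(anycast_address(svc, i)) for i in (1, 2, 3)])
# dns -> fd00::35:1, fd00::35:2, fd00::35:3, and so on.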


I thought ULA-C was dead... Did someone resurrect this unfortunate bad 
idea?




I'm not sure; I've not checked for a pulse recently.  Last I looked 
there was ULA-L and ULA-C, and most people were saying to use ULA-L 
unless you needed ULA-C.  ULA-C seemed like a good fit for this; if it's 
been buried then sure, ULA-L would fit the bill just as well.




Easily identified, not globally routable, can be pre-programmed in 
implementations/applications ... ?





Exactly; seems easy, straightforward, robust, reliable and allows 
for things like fate sharing and failover.


Why pull this out of ULA?  Why not pull it out of a /16 or one of 
the other reserved prefixes?


With my proposal above it only requires a /96; it seems silly to use up 
an entire /16 on a /96 worth of bits.  It shouldn't come out of 2000::/3 
because that's globally routable, and this is defined to sit within 
locally scoped addressing.


I have no major thoughts either way as to exactly where the range comes 
from, other than it should be an easy-to-spot, easy-to-type range, which 
suggests plenty of 0's :)





Re: IPv6 Deployment for the LAN ... anycast

2009-10-23 Thread Perry Lorier

TJ wrote:

WRT Anycast DNS; Perhaps a special-case of ULA, FD00::53?


You want to allow for more than one for obvious fault isolation and
load balancing reasons.  The draft suggested using <prefix>::1.
  


FWIW - I think simple anycast fits that bill.

  


I think for small and very small networks anycast requires a lot of 
overhead and understanding.  If you're big enough to do anycast and/or 
load balancing, it's not hard for you to put all three addresses onto 
one device.


There are some protocols that anycasting doesn't work well for; they may 
require multiple instances.



I personally would suggest getting a well-known ULA-C allocation
assigned to IANA, then use <prefix>::<protocol assignment>:1,
<prefix>::<protocol assignment>:2 and <prefix>::<protocol
assignment>:3, where <protocol assignment> could be 0035 for DNS
and 007b for NTP, and if you're feeling adventurous you could use
0019 for outgoing SMTP relay.
  


IMHO non-hex-converted port numbers work cleanly ... ?

  


Up to 9999; if you want to announce a service on port 30,000 you're in 
trouble.  Also quite a few protocols don't have well-known ports, so 
they may want to get things assigned.  If you're doing assignment you 
could do nice things like 0x53 for DNS, and then ports above 9999 and 
protocols that don't have well-known ports could get an unused one 
assigned to them.
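
A quick sketch of why 9999 is the ceiling for the non-hex-converted 
trick (Python, purely illustrative):

# Writing the decimal digits of a port straight into a hextet only works
# while the digits fit in four nibbles, i.e. ports up to 9999.
def literal_hextet(port):
    digits = str(port)
    if len(digits) > 4:
        raise ValueError(f"{len(digits)} digits won't fit in a hextet")
    return int(digits, 16)   # reinterpret "53" as the hex value 0x0053

for port in (53, 123, 9999, 30000):
    try:
        print(f"port {port} -> FD00::{literal_hextet(port):x}")
    except ValueError as err:
        print(f"port {port} -> {err}")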



... Heck, start a registry (@IANA) and add in FD00::101, etc. ...
Maybe reserve FD00::/96 for this type of ULA port-based anycast
allocation.  (16 bits would only reach 9999 w/o hex-conversion; if
hex-converted you could reserve FD00::/112 ... but it would be less
obvious.)



Thinking further, if simply based on port #s it wouldn't even need a
registry.  Unless it was decided to implement the
multiple-addresses-per-function mentioned above, then perhaps a registry
would be useful.

  
In my humble opinion I'd have them registered; linking them to port 
numbers means that it's easier on the poor admin's brain at 3am while 
diagnosing faults, but may cause various hassles in the future (see above).




Re: IPv6 Deployment for the LAN ... anycast

2009-10-23 Thread Perry Lorier



 I think for small and very small networks anycast requires a lot of overhead
and understanding.  If you're big enough to do anycast and/or load balancing
it's not hard for you to put all three addresses onto one device.




Anycast isn't really hard - same address, multiple places; routers see what
appear to be multiple routes to the same destination and choose the least
expensive.  The only tricky part (for stateless things) is ensuring the route
announcement is implicitly tied to service availability on that device ...

  


I'm thinking of places like home users and the like, which don't really 
run an IGP, can't correctly identify a router, and when you say 
"anycast" think that you might be talking about a new form of fishing.



There are some protocols that anycasting doesn't work well for; they may
require multiple instances.




Fair enough; could also standardize something like FD00::<port number>,
FD00::1:<port number>, and FD00::2:<port number> ... is three addresses
enough?  (IIRC, the site-local based automagic DNS used two or three
addresses ...)

  


3 seems to me to be plenty for most cases.  For some things like NTP you 
might want to have 4 or more.

OK, so the non-hex-converted scheme as above (FD00::x:53, where
x=[0,1,2], reserving FD00::/96) covers us to port 9999 based on
automatically using port numbers (or using automatically registered
port numbers, see below).

Maybe FD00::<...>/112 as a block within the above range to be used for
manual assignment of automatic service (potentially anycast) addresses.

  


Seems sane to me.


In my humble opinion I'd have them registered; linking them to port
numbers means that it's easier on the poor admin's brain at 3am while
diagnosing faults, but may cause various hassles in the future (see above).




OK, so register them - but sign up some well-known ones at very comfortable
addresses, like DNS at 53 :).
(Or 35 if you prefer hex-conversion ...)

And I am sure some would be concerned about hosts performing any sort of
automatic service discovery, but this at least has the advantage over
multicast in that it doesn't require multicast routing or group
membership / state maintenance, only delivers packets to the nearest
instance (not all of them), etc.

  


Yup, and it makes a sane default: if you want to override it with DHCP, 
some funky RA option, manual configuration or whatever, then this gets 
out of your way and you don't have to care.

It doesn't involve any code changes on hosts *or* routers to work today.



Re: Consistent asymetric latency on monitoring?

2009-10-22 Thread Perry Lorier

Rick Ernst wrote:

Resent, since I responded from the wrong address:
---
The basic operation of IP SLA is as surmised; payload with timestamps
and other telemetry data is sent to a 'responder' which manipulates
the payload, including adding its own timestamps, and returns the
altered payload.
  


Yup :) It's the obvious way to do it :)


I had to do a mental walk-through, but I think I see how drift can
cause this. I'm going to generate some artificial data, graph it, and
see if it matches the general waveshape I'm seeing.

I purposefully have the traffic generators ntp syncing against the
responders. I thought that would keep the clocks more closely in sync.
I don't necessarily care if the time is 'right', just that it's the
same. 


This causes major problems.  What you're actually measuring here is how 
well ntp can keep the clocks synced under asymmetric latency.  ntp is 
trying to do its own measurements of one-way delay, without the help of 
synchronised clocks, while trying to measure clock drift as well.  As 
you can see from your graphs, ntp is not coping[1].
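
If you want to generate that artificial data, the shape falls out of a 
toy model like this (Python; the delays and the wandering offset are 
made-up numbers):

import math

# Constant true one-way delays plus a slowly wandering clock offset.
# The offset adds to one direction and subtracts from the other,
# which is exactly the mirror-image effect.
TRUE_FWD_MS, TRUE_REV_MS = 20.0, 20.0

for minute in range(0, 60, 5):
    offset_ms = 3.0 * math.sin(minute / 10.0)   # ntp chasing the drift
    fwd = TRUE_FWD_MS + offset_ms
    rev = TRUE_REV_MS - offset_ms
    print(f"t={minute:2d}m fwd={fwd:5.1f}ms rev={rev:5.1f}ms")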


You are far better off having each end sync to a local stratum 1 or 
stratum 2 ntp source, preferably one reached over a different link to 
the one under test.  If you don't have a local stratum 1/2 time source 
at each end, you might be able to find one over a local exchange or 
other less congested link.  If this is very important to you then you 
should consider running your own stratum 1 clocks at each end, 
synchronised off something like GPS, CDMA or a T1 clock.



What kind of difference should I expect if I sync both
generators and responders against the same source, or not sync the
responder? I'm thinking that having one source with constant drift may
be better than both devices trying to walk/correct the time.
  


Most hardware clocks in PCs/routers/switches etc have pretty atrocious 
amounts of drift if left to free-run[2], sometimes in the order of 
seconds or occasionally minutes per week.  To get useful numbers you 
really do need to synchronise them to /something/.  Synchronising them 
to each other causes problems: ntp, I think (I could be wrong), assumes 
mostly symmetrical latency, and if the latency isn't symmetric assumes 
it's because one clock is running fast/slow and will alter the clock's 
speed to account for it.  The great thing about ntp stratum 1 servers is 
that by definition they have more or less the same time no matter where 
they are, so synchronising each end against a local ntp server will be a 
much, much better solution.  If possible you should consider peering 
with at least 3, preferably 4(!)[3], other ntp servers.


[1]: To be fair, it's a hard problem.  Anything that involves time just 
gets more and more complicated the more you look at it; ntp is extremely 
clever and probably knows more about time than I'd ever want to know, 
but you're making its job hard.


[2]: http://vancouver-webpages.com/time/ / 
http://vancouver-webpages.com/time/ltmhist.png


[3]: 
http://twiki.ntp.org/bin/view/Support/SelectingOffsiteNTPServers#Section_5.3.3.




Re: IPv6 Deployment for the LAN

2009-10-22 Thread Perry Lorier

bmann...@vacation.karoshi.com wrote:

On Thu, Oct 22, 2009 at 12:02:14PM +0200, Iljitsch van Beijnum wrote:
  

On 22 okt 2009, at 01:55, bmann...@vacation.karoshi.com wrote:



so you're not a fan of the smart edge and the stupid network.
  
I'm a fan of getting things right. A server somewhere 5 subnets away  
doesn't really know what routers are working on my subnet. It can take  
a guess and then wait for people to complain and then an admin to fix  
stuff if the guess is wrong, but I wouldn't call that a smart edge.


Always learn information from the place where it's actually known.



i'm ok w// that mindset.

one should learn routing from the router(s),
time from the time servers,
DNS from the DNS servers,
etc...

  


I quite liked the old 
http://tools.ietf.org/html/draft-ietf-ipv6-dns-discovery-07 idea.  
(tl;dr version: have a set of well-known site-local anycast addresses 
for recursive name servers.)  It has a number of nice features:


* Since it's not in globally routable space, people can't (ab)use your 
recursive name servers from outside your network.
* You don't have to change recursive name servers when going to a 
different network -- they're the same everywhere.
* The addresses can be set as the default addresses by the OS 
manufacturer, and then never need to be configured again.
* If your recursive name server becomes unreachable, broken or otherwise 
out of contact, it withdraws the address from your IGP, and since these 
addresses can be anycast, another node takes over (see the sketch after 
this list).  Similar to the shared-fate idea of RAs.
* You don't tie your recursive nameservers' addresses to any point in 
the network; since they have their own special range, moving them or 
reshuffling them doesn't impact anything else.  They don't need to 
cohabit with a router sending RAs or anything odd like that.
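
As a sketch of the fail-over bullet above (Python; the address is the 
hypothetical well-known one from elsewhere in this thread, and the ip(8) 
invocation assumes your IGP speaker announces whatever /128s appear on 
loopback):

import socket, struct, subprocess, time

ANYCAST = "fd00::35:1"   # hypothetical well-known resolver address
# A minimal DNS query for ". IN NS": header (id, RD, 1 question) + question.
QUERY = struct.pack("!6H", 0x1234, 0x0100, 1, 0, 0, 0) + b"\x00\x00\x02\x00\x01"

def resolver_alive(timeout=2.0):
    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(QUERY, ("::1", 53))    # ask the local daemon directly
        return len(s.recv(512)) >= 12   # any well-formed reply will do
    except OSError:
        return False
    finally:
        s.close()

while True:
    action = "add" if resolver_alive() else "del"
    subprocess.call(["ip", "-6", "addr", action, f"{ANYCAST}/128", "dev", "lo"],
                    stderr=subprocess.DEVNULL)
    time.sleep(10)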


However it has a number of obvious drawbacks, foremost among them that 
it uses the deprecated site-local address range.


You could imagine extending this to other services such as NTP, but I'm 
not sure that you really would want to go that far; perhaps using DNS to 
look up "_ntp._udp.local IN SRV" or similar to find your local NTP servers.
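
For instance, with dnspython (the names here are illustrative; a real 
deployment would learn the search domain from somewhere first):

import dns.resolver

answers = dns.resolver.resolve("_ntp._udp.example.net", "SRV")
for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
    print(f"NTP server {rr.target} port {rr.port} "
          f"(priority {rr.priority}, weight {rr.weight})")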


Another obvious approach might be to have a service discovery protocol 
where you send a message to a service-discovery multicast group asking 
"where's the nearest nameserver?", and nameserver implementations could 
listen on this multicast group and reply.  Again, shared fate.  This 
does have the downside of people running rogue nameservers and needing a 
"ServiceDiscovery-Guard" feature for switches ...

Personally I like the first option (anycast addresses) better: you can 
control who has access to your IGP, and if your IGP is down, then for 
all intents and purposes your recursive nameservers are offline too :)






Re: IPv6 Deployment for the LAN

2009-10-22 Thread Perry Lorier

bmann...@vacation.karoshi.com wrote:

On Fri, Oct 23, 2009 at 12:22:52AM +1300, Perry Lorier wrote:
  
You could imagine extending this to other services such as NTP, but I'm 
not sure that you really would want to go that far, perhaps using DNS to 
lookup _ntp._udp.local IN SRV or similar to find your local NTP servers.


Another obvious approach might be to have a service discovery protocol 
where you send to a service discovery multicast group a message asking 
wheres the nearest nameserver(s)? then nameserver implementations 
could listen on this multicast group and reply.  Again shared fate.  
This does have the downside of people running rogue nameservers and 
needing a ServiceDiscovery-Guard feature for switches 



ah... well - if you're a router centric person, then you want
to put everything into the tools you know and love.

  


Generally I don't think of myself as a router centric person ;)


if you're a dns centric person, then you put everything in the
DNS.

  


This has always sounded like a nice solution to me; however, DNS is 
starting to look a little long in the tooth, and overloading it with 
more and more semantics seems to be pushing it well beyond its design 
envelope.  (EDNS0?)



I point you to the DISCOVER opcode (experimental) in the DNS
and the use of DNS over multicast for doing service discovery
(e.g. Apple's Bonjour)...  Most of that is already designed/deployed
and in pretty widespread use... over IPv4 or IPv6.
  


Yup, they're good ways to discover some local resources.  To my 
understanding mDNS works over the local subnet, unless you're going to 
start having your routers run as mDNS relays (are there any standards 
for this?  How do you stop mDNS relays from creating loops and broadcast 
storms?).  mDNS works over a single multicast group with a single port 
(5353), which makes it hard to filter different types of services 
("people on this switch can announce their iTunes sharing, but they're 
not allowed to announce a local DNS server") without DPI; you're more 
likely to find network engineers just filtering the entire multicast 
group, breaking it for other purposes.  If you're not going to have mDNS 
forwarders on your routers, then you're going to need to repeat your 
entire configuration on every segment as well.


(IMHO) there are different types of configuration:

* Network related (routes, addressing).  RAs seem to do a fairly good 
job of at least 80% of this.


* Network provided services, such as DNS and NTP.  Well-known anycast 
addresses seem to be (IMHO) the best way to advertise these.  Currently 
you need DHCPv6 to get this information.


* Application settings (web proxy, local outgoing SMTP relay, default 
printer location, local SIP/RTP proxy, local home/intranet page, what 
the current local timezone rules are).  This currently seems to be done 
by a variety of ad hoc protocols, usually bundled over well-known DNS 
names with HTTP, often replicating the same information in a wide range 
of different places in different formats.  This seems an ugly solution, 
but I have no better suggestions.  I'm sure there are several 
RFCs/drafts around somewhere that try to solve this.


* Ephemeral end-user provided services, which are already provided 
today by the well-documented and widely deployed mDNS.


In theory you can kinda pick and choose between those; for instance you 
can run just mDNS on a network without RAs or DHCPv6 and things will 
"work" (for the limited value of "work" that involves only whatever 
ephemeral services are being announced).  You can run RAs by themselves, 
and your bittorrent will work fine.

And yes, it's not RA/ND, or DHCP... it's another configuration protocol
and it's not quite vendor specific.  The best thing is, it pushes
the smarts closer to the edge (the end device) and this makes me happy.

  
There's a general issue of access control.  While I like a very smart 
edge, you don't want some misinformed user turning on a feature and 
suddenly having the rest of your network end up using it.



Personally I like the first option (anycast addresses) better: you can 
control who has access to your IGP, and if your IGP is down, then for 
all intents and purposes your recursive nameservers are offline too :)





everyone to their own taste.

  


Yup.  There are different systems, they have different tradeoffs.  Pick 
the one that trades off things you don't care about for things you do 
care about. :)





Re: IPv6 Deployment for the LAN

2009-10-22 Thread Perry Lorier

trej...@gmail.com wrote:
WRT Anycast DNS; Perhaps a special-case of ULA, FD00::53? 

  
You want to allow for more than one for obvious fault isolation and load 
balancing reasons.  The draft suggested using <prefix>::1.  I personally 
would suggest getting a well-known ULA-C allocation assigned to IANA, 
then use <prefix>::<protocol assignment>:1, <prefix>::<protocol 
assignment>:2 and <prefix>::<protocol assignment>:3, where <protocol 
assignment> could be 0035 for DNS and 007b for NTP, and if you're 
feeling adventurous you could use 0019 for outgoing SMTP relay.





... Heck, start a registry (@IANA) and add in FD00::101, etc. ... Maybe reserve FD00::/96 
for this type of ULA port-based anycast allocation.  (16 bits would only reach 
9999 w/o hex-conversion; if hex-converted you could reserve FD00::/112 ... but it would be less 
obvious.)


Easily identified, not globally routable, can be pre-programmed in 
implementations/applications ... ?

  


Exactly; seems easy, straightforward, robust, reliable and allows for 
things like fate sharing and failover.




Re: IPv6 Deployment for the LAN

2009-10-21 Thread Perry Lorier



What it does deprive them of, with increasing layers of NAT or proxy
service, is dial-in access.  Many do not require this feature.  The
cost of providing it is increased support costs; debugging two
networks and three or four protocols.  Today, even debugging IPv4
problems with customers is problematic and costly enough.
  


The WAND Networking Research group did some measurements on the number 
of clients that accepted at least one incoming TCP connection from 
external to their network and presented their results at NZNOG 2009 ( 
http://www.wand.net.nz/~salcock/nznog09/spnat-nznog.pdf ).  The number 
of people that successfully accepted at least one incoming TCP 
connection was somewhere from 30% to 44%.  Most of it seemed to be from 
people using bittorrent, but about half was from other protocols.


I'm not so sure it's entirely obvious that people aren't accepting 
incoming TCP connections.






Re: Consistent asymetric latency on monitoring?

2009-10-21 Thread Perry Lorier

Rick Ernst wrote:

Although the implementation is Cisco-specific, this feels more appropriate
for NANOG.

We've started rolling out a state-wide monitoring system based on Cisco's
IP SLA feature set.  Out of 5 sites deployed so far (different locations,
different providers), we are consistently seeing one-way latency mirror the
opposite direction. As source-destination latency goes up,
destination-source latency goes down and vice versa.

Myself and the monitoring team have ripped apart the OIDs, IP SLA
configuration, and monitoring system.  We've also built an ad-hoc system to
compare the results.  It's still consistent behavior.  It's not a true
mirror; there is definitely variation between the data collection, but at
the 10,000 foot level, there is an obvious and consistent mirror to the
data.

The network topology is independent service providers all providing backhaul
to a local ethernet exchange.

Has anybody seen this type of behavior? We are solidly convinced that we are
using the proper OIDs and making the proper transformations of the data.
The two remaining causes appear to be either natural behavior of the links
and/or artifact in the IP SLA mechanism.

Any ideas?
  



Having never used cisco's IP SLA (or even read about it), take this with 
a sack of salt.


I assume this product works by having a packet with a timestamp sent 
from the source to the destination, where it is timestamped again and 
either sent back, or another packet is sent in the other direction.  The 
difference between the two timestamps gives you the latency in that 
direction.
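
Which means each measured direction absorbs any clock offset with 
opposite sign, so offset wander shows up as mirrored curves while the 
round trip stays put.  A toy calculation (made-up delay values):

# measured_fwd = true_fwd + offset;  measured_rev = true_rev - offset
true_fwd, true_rev = 18.0, 22.0           # ms

for offset in (-2.0, 0.0, +2.0):          # destination clock behind/ahead
    fwd = true_fwd + offset
    rev = true_rev - offset
    print(f"offset {offset:+.1f}ms -> fwd {fwd:.1f}ms rev {rev:.1f}ms "
          f"rtt {fwd + rev:.1f}ms (unchanged)")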


Now, how are your clocks synchronised?  Are they synchronised using 
NTP, or something better (GPS)?  If one of your clocks is drifting with 
respect to the other then you'll see this effect.  Does your clock drift 
because NTP is failing to keep the clock well synchronised when its 
connection to its parent NTP server is saturated?





Re: Important New Requirement for IPv4 Requests

2009-04-24 Thread Perry Lorier





Large data sets?  So you are saying that 512-byte packets with no 
windowing work better?  Bill, have you measured this?


Time to download a 100mb file over HTTP and a 100mb interface: 20 
seconds.

Time to download a 100mb file over FTP and a 100mb interface: ~7 minutes.

And yes, that was FreeBSD with the old version openssl library that 
shipped with 6.3.




As someone who copies large network trace files around a bit, 100 MB at 
100 Mbit/s over what I presume is a local (low latency) link is barely a 
fair test.  Many popular web servers choke on serving files over 2 GB or 
4 GB in size (sigh).  I'm in New Zealand.  It's usually at least 150ms 
to anywhere, often 300ms, so I feel the pain of small window sizes in 
popular encryption programs very strongly.  Transferring data over 
high-speed research networks means receive windows of at least 2 MB, 
usually more.  When popular programs provide their own window of 64 kB, 
things get very slow.
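
The arithmetic behind the pain is just window/RTT; the numbers below 
match the ones in the paragraph:

def max_throughput_mbps(window_bytes, rtt_ms):
    # One TCP stream can carry at most one window per round trip.
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

for window, rtt in ((64 * 1024, 150), (64 * 1024, 300), (2 * 1024 * 1024, 300)):
    print(f"{window // 1024:5d} kB window @ {rtt}ms -> "
          f"{max_throughput_mbps(window, rtt):6.2f} Mbit/s")
# 64 kB at 300ms is under 2 Mbit/s, no matter how fast the link is.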






Re: IPv6 routing /48s

2008-11-18 Thread Perry Lorier



Having no route is not a problem, you should get a destination
unreachable directly and all is fine because IPv4 should be used as a
fallback.

  
The big problem is when you have a route to them, but they don't have a 
route back.  You don't get destination unreachables; instead you get 
timeouts, and pain, and misery (btdt).  :(






Re: IPv6 Wow

2008-10-23 Thread Perry Lorier

Alain Durand wrote:


On 10/23/08 6:39 PM, Tony Hain [EMAIL PROTECTED] wrote:

  

 A properly
implemented client will do the longest prefix match against that set, so a
6to4 client will go directly to the content provider's 6to4 router, while a
native client will take the direct path.



Not quite.
Say the server has native IPv6 address 2001::1 and 6to4 IPv6 2002::X.
Say the client has native IPv6 address 2003::1 and 6to4 IPv6 2002::Y.
Longest prefix match will choose 6to4 over native IPv6. Not good.

  


Not quite.  A properly implemented client will use the policy table 
first, which by sections 2.1 and 10.3 of RFC 3484 deprefs 2002::/16 
below ::/0.  It's only if two addresses are very similar (as far as the 
OS can determine) that the longest-match rule comes into play.
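
A toy version of just the precedence step, using the RFC 3484 section 
2.1 default policy table (this ignores source-address selection and the 
rest of the destination-ordering rules):

import ipaddress

# (prefix, precedence, label) -- RFC 3484 section 2.1 default policy table.
POLICY = [
    (ipaddress.ip_network("::1/128"),       50, 0),
    (ipaddress.ip_network("::/0"),          40, 1),
    (ipaddress.ip_network("2002::/16"),     30, 2),
    (ipaddress.ip_network("::/96"),         20, 3),
    (ipaddress.ip_network("::ffff:0:0/96"), 10, 4),
]

def precedence(addr):
    ip = ipaddress.ip_address(addr)
    # The longest matching policy prefix decides the precedence.
    return max((net.prefixlen, prec) for net, prec, _ in POLICY if ip in net)[1]

# Alain's example: the native destination wins on precedence (higher is
# better) before the longest-match tiebreaker is ever consulted.
for dst in ("2001::1", "2002::1:2:3"):
    print(dst, "precedence", precedence(dst))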


You should also be able to configure your operating system to depref or 
pref source/destination addresses as local site policy requires 
(avoiding tunnels, preferring v4 for some sites, using 6to4 for other 
sites, avoiding v6 altogether for others, and so on).





Re: DNS Hijacking by Cox

2007-07-23 Thread Perry Lorier




doing it[1].  If you're interested in finding people that Undernet
detects as being open proxies or suchlike, put an IDS rule looking for
":[^ ]* 465 [^ ]* :AUTO".
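
In Python regex form (the sample line is fabricated for illustration):

import re

# IRC numeric 465 (ERR_YOUREBANNEDCREEP) whose ban text starts with AUTO.
auto_kline = re.compile(r":[^ ]* 465 [^ ]* :AUTO")

sample = ":server.example.org 465 nick :AUTO You are banned (open proxy)"
print(bool(auto_kline.search(sample)))   # True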


I'm not so sure Undernet is the only IRC network to ever begin
a banned-reason message with the word AUTO.

I suspect it would be most useful if drones detected by most major IRC
networks were visible to cooperating ISPs for further analysis, not just
Undernet's.



What I would see as ideal is for some form of detected BOT incident
reporting protocol to be utilized at both ends.


I'd love to see this.  Undernet tried doing something similar a few 
years ago and it didn't really seem to pan out the way we'd hoped. 
Having this hosted by some independent third party would be great.  Are 
there any trusted security organisations that are interested in running 
this?  It's fairly easy to parse undernet's logs into some kind of sane 
format that can be submitted.



Rather than use IDS to snoop for our "user banned" messages, let IRC
networks run their automated bot detection methods, and when a bot is
found, post that fact to a short-term real-time incident database of
some sort that would limit visibility of the information (to the ISP
responsible, or to someone asking a yes/no question about a specific IP
address and approximate date and time).


One of the problems here is that responsible networks are happy to 
report and deal with these drones.  The kiddies learn this pretty 
quickly and just set up an IRC server on a hacked box somewhere and use 
that.  These are the bot networks that are used by kiddies that have 
graduated from annoying to dangerous.



The ISP would then be able to use the database as a geiger counter and
apply more exhaustive (CPU-intensive) monitoring to the activities of
a bot-reported connection for a short time, and ascertain if they can
verify the "user is a bot" claim, and maybe take some abuse response if
the user is actually infected with a spambot or floodbot.

Due to the massive volume, automation would be a must, however.

This relies on the IRC network's bot detection heuristics being kept
up to date, but
for the IRC networks' sake, they already have to be.


People are busy keeping these up to date all the time as you point out.


And unfortunately it also relies on someone maintaining a database, which
could be costly, and would all be a waste if no ISP was able to utilize
it to actually isolate bot-infected computers, or if no IRC network
actually reported to the DB.

IRC networks in general are extremely keen to do anything they can to 
get rid of these bots.  The interesting question is whether ISPs would 
sign on to such a service.



them harder to find and ban.[2]  Also the constant reconnects themselves
can almost overwhelm a server.  I almost want to submit patches to the


Not just almost; the constant reconnects are themselves a DDoS
against the server that has banned them, or the entire network, depending
on the target of the reconnects.

Merely, perhaps, an unintentional one rather than a deliberate attack.

Use of firewall rules in addition to ordinary K-Lines should serve well
to mitigate some of the incidental reconnect spam's negative impacts on 
IRCD.


Yeah, many servers run a script that counts the number of incoming SYNs 
and, if they exceed a certain threshold, firewalls the source IP.
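
A minimal sketch of such a script (Python; the threshold, window and 
iptables rule are assumptions, and actually feeding it SYN source 
addresses -- from a pcap tap or kernel log -- is left out):

import subprocess, time
from collections import Counter

THRESHOLD, WINDOW_SECS = 20, 60
syns = Counter()
window_start = time.time()

def saw_syn(src_ip):
    global window_start
    if time.time() - window_start > WINDOW_SECS:
        syns.clear()                     # start a fresh counting window
        window_start = time.time()
    syns[src_ip] += 1
    if syns[src_ip] == THRESHOLD:        # fire once per window
        subprocess.call(["iptables", "-A", "INPUT", "-s", src_ip,
                         "-p", "tcp", "--dport", "6667", "-j", "DROP"])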


Firewall rules are just impossible to implement across all servers on
most IRC networks, as each server is administered independently, and
the contacts needed to make the necessary changes are not likely to all
be available or on IRC at the same time.


Yeah, but automated "if you see too many SYNs from a certain source IP, 
just firewall it" systems give a fairly reasonable approximation to this.



botnet codebases to implement exponential back-off, or in fact /any/ kind
of reasonable delay between connection attempts.


I suspect the boxes IRC servers run on should enforce something like a
sane delay, such as new-connection throttling at the OS level.

By that I mean, for instance, more than 3 connects to port 6667 in 60
seconds would be met with all SYN packets dropped from that source for a
few minutes thereafter...


Yeah, some already do this.  That, and just rate-limit SYNs (if you ban 
10k clients off your network, they all try to reconnect immediately, 
usually to the first server in their list...).
usually to the first server in their list...)