Re: Paul Wilson and Geoff Huston of APNIC on IP address allocation ITU v/s ICANN

2005-04-28 Thread Alex Bligh

--On 28 April 2005 10:47 +0200 Stephane Bortzmeyer [EMAIL PROTECTED] 
wrote:

> This is no longer true (for several years). Corporations (Sector
> members) can now join (ITU is the only UN organization which does
> that). See
> http://www.itu.int/cgi-bin/htsh/mm/scripts/mm.list?_search=SEC
I think Bill is actually correct. The ITU is a treaty organization: only
members of the UN (i.e. countries) can join it. ITU-T (and ITU-R, ITU-D)
are sector organizations that telcos can join (AIUI the difference arose
when a meaningful distinction emerged between telco and state monopoly).
However, given that the entire organization is run by the ITU, it's fair
to say it is essentially a governmental organization run with some
private sector involvement. Whereas ...
> So, like ICANN, governments and big corporations are represented at
> the ITU. Like ICANN, ordinary users are excluded.
... ICANN is billed as a private sector organization with government
involvement.
Obviously the extent of the involvement of the private sector (and
non-commercial sectors), and the extent to which one likes the ICANN
model are all up for extensive debate, preferably on somewhere other
than this mailing list.
Alex


Re: Paul Wilson and Geoff Huston of APNIC on IP address allocation ITU v/s ICANN

2005-04-28 Thread Alex Bligh

--On 28 April 2005 07:06 -0400 Scott W Brim [EMAIL PROTECTED] wrote:
> > I think Bill is actually correct. ITU is a treaty organization. Only
> > members of the UN (i.e. countries). ITU-T (and ITU-R, ITU-D) are sector
> > organizations that telcos can join (AIUI the difference having arisen
> > when a meaningful difference arose between telco and state monopoly).
> > However, given the entire organization is run by the ITU, it's fair
> > to say it is essentially a governmental organization run with some
> > private sector involvement. Whereas ...
> An ITU publication says the majority of ITU members, including member
> states and sector members, are now vendors.
Members yes, if you count sector members. But as far as I can tell,
the ITU is ultimately controlled by its Council, whose members are state
representatives elected by a plenipotentiary conference of states.
Here's the ITU's own take, which seems to agree with me:
http://www.itu.int/aboutitu/overview/council.html
Note the remit of the Council:
"The role of the Council is to consider, in the interval between
plenipotentiary conferences, broad telecommunication policy issues to
ensure that the Union's activities, policies and strategies fully
respond to today's dynamic, rapidly changing telecommunication
environment. It also prepares the ITU strategic plan.
In addition, the Council is responsible for ensuring the smooth
day-to-day running of the Union, coordinating work programmes, approving
budgets and controlling finances and expenditure.
Finally, the Council takes all steps to facilitate the implementation of
the provisions of the ITU Constitution, the ITU Convention, the
Administrative Regulations (International Telecommunication Regulations
and Radio Regulations), the decisions of plenipotentiary conferences and,
where appropriate, the decisions of other conferences and meetings of the
Union."
Just like any organization (and this is without criticism of the ITU), when
talking to a given audience, it tries to make itself appear most attractive
to that audience. Thus it emphasizes private sector involvement when
talking to the private sector. I am quite sure that when talking to African
nations, it also emphasizes that there are more Region D (African) states
on the Council than there are either Region A (Americas) or Region B
(Western Europe) states. That's politics.
I am trying to provide objective information here rather than opinion.
It's not as if ICANN is beyond criticism: it could equally be argued that
ICANN has *no* members (of the corporation) as such, and that the way its
board is elected is at least non-trivial to understand. However,
characterizing the ITU as a private sector dominated organization (let
alone an organization dominated by private sector players relevant to the
internet) is not accurate (at least not today - I understand they are
making overtures towards internet companies - see WGIG/WSIS side meetings).
Alex


Re: ICMP Vulnerability

2005-04-12 Thread Alex Bligh

--On 12 April 2005 11:57 -0400 Gwendolynn ferch Elydyr [EMAIL PROTECTED] 
wrote:

> http://www.cisco.com/warp/public/707/cisco-sa-=20050412-icmp.shtml
Actually
http://www.cisco.com/warp/public/707/cisco-sa-20050412-icmp.shtml
Alex


Re: Reports or data on data centres without access to competitive fibre

2005-04-05 Thread Alex Bligh

--On 05 April 2005 10:43 +1000 Stephen Baxter 
[EMAIL PROTECTED] wrote:

> I was looking around for any reports, press releases or even yarns about
> the issues data centres face when they are built without access to
> competitive fibre optic cable.
See MFS & MAE-East ad nauseam.
Alex


Re: botted hosts

2005-04-04 Thread Alex Bligh

--On 04 April 2005 04:59 -0400 Sean Donelan [EMAIL PROTECTED] wrote:
> I've been saying that for several years, and then immediately get shouted
> down.
Statistically, most anti-spam options (good and bad) have been brought up
many times for several years, and have been shouted down. Why would you
expect your views to be treated any differently? :-)
We now return to the normal program of more heat than light...
Alex


RE: Vonage Hits ISP Resistance

2005-04-01 Thread Alex Bligh

--On 01 April 2005 10:05 -0800 Alexander Kiwerski 
[EMAIL PROTECTED] wrote:

> And for the record, the GPS locators currently in cell phones tend *not*
> to work indoors, so even if you are lucky enough to live in an area where
> E911 is plugged into your cell phone carrier's locator service, you still
> have a high probability of being screwed.
No idea why this is relevant to NANOG, but cell phone location works by
cell triangulation, not by GPS. So if the cell phone is working indoors,
the locator service should work.
Alex


Re: T1 vs. T2 [WAS: Apology: [Tier-2 reachability and multihoming]]

2005-03-28 Thread Alex Bligh

--On 27 March 2005 12:59 -0800 Randy Bush [EMAIL PROTECTED] wrote:
> better?  i did not say better.  a simple way to look at it, which
> we have repeated here every year since com-priv migrated here is
> a tier-1 network does not get transit prefixes from any other
> network and peers with, among others, other tier-1 networks.
> a tier-2 gets transit of some form from another network, usually but
> not necessarily a tier-1, and may peer with other networks.
> this does not please everyone, especially folk who buy transit and
> don't like discussing it.  and there are kinky corners
Even this is debatable (I know you know this, Randy).
Firstly, peering isn't binary. Is peering vs transit a distinction based on
routes taken / accepted & readvertised, or on cost? Does paid-for peering
count as peering or transit? What if you pay by volume? If you pay for more
than your fair share of the interconnect pipes? (If the latter, I am
guessing there are actually no Tier 1s, as everyone reckons they pay for
more than their fair share...)
Secondly, it doesn't cover scenarios that have happened in the past.
For instance, the route swap. E.g. imagine networks X1, X2, X3, X4 are Tier
1 as Randy describes them. Network Y peers with all the above except X1.
Network Z peers with all the above except X2. Y & Z peer. To avoid Y or Z
needing to take transit, Y sends Z X2's routes (and sends Z's routes to X2
marked no-export to X2's peers), and Z sends Y X1's routes (and
sends Y's routes to X1 marked no-export to X1's peers). Perhaps they do
this for free. Perhaps they charge each other for it and settle up at the
end of each month. Perhaps it's one company that's just bought another.
All this comes down to the fact that "Tier n" is not a useful taxonomy
because there is no clear ordering of networks.
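The route swap described above can be shown with a toy model (all network
names and the set-based bookkeeping are illustrative, not a real BGP
implementation): each of Y and Z is missing one Tier-1 peering, and the
swap partner fills in the missing origin's routes.

```python
# Toy sketch of the route swap: which Tier-1 origins a network can
# reach without buying transit, given its peerings plus whatever its
# swap partner passes along. Names are fictional.

def routes_seen(peerings, swaps, network, tier1s):
    """Tier-1 origins visible to `network`: its direct Tier-1 peers,
    plus origins leaked to it via a route swap."""
    return (peerings[network] & tier1s) | swaps.get(network, set())

tier1s = {"X1", "X2", "X3", "X4"}
# Y peers with everyone except X1; Z with everyone except X2; Y & Z peer.
peerings = {"Y": {"X2", "X3", "X4", "Z"}, "Z": {"X1", "X3", "X4", "Y"}}
# Z hands Y X1's routes, Y hands Z X2's routes (no-export onward).
swaps = {"Y": {"X1"}, "Z": {"X2"}}

print(routes_seen(peerings, swaps, "Y", tier1s))  # full Tier-1 view
```

With the swap in place, both Y and Z see all four Tier-1 origins even
though neither meets Randy's "peers with all Tier 1s" description.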
If I was really pushed for a definition, I'd say it was this: you are a
Tier-1 network when, if you tell all third parties not to advertise your
routes to anyone but their customers, and you get a phone call from one of
your customers complaining about a resultant connectivity problem, you can
be confident before you've analyzed it that the customer will accept
it's that network's problem, not yours. This boils down to "does the
customer believe you".
Alex


Re: DNS cache poisoning attacks -- are they real?

2005-03-26 Thread Alex Bligh

--On 26 March 2005 23:23 +0100 Florian Weimer [EMAIL PROTECTED] wrote:
> Should we monitor for evidence of hijacks (unofficial NS and SOA
> records are good indicators)?  Should we actively scan for
> authoritative name servers which return unofficial data?
And what if you find them? I seem to remember a uu.net server (from memory
ns.uu.net) many many years ago had some polluted data out there as an A
record. All bright and bushy-tailed I told the UUnet folks about this. They
were resigned. Someone, somewhere, had mistyped an IP address, and it had
got into everyone's glue, got republished by anyone and everyone, and in
essence had no chance of going away. Now I understand (a little) more about
DNS than I did at the time, so I now (just about) know how DNS servers
should avoid returning such info (where they are both caching and
authoritative), but I equally know this is built upon the principle that
no-one does anything actively malicious.
The only way you are going to prevent packet-level (as opposed to
organization-level) DNS hijacking is to get DNSSEC deployed. Your IETF
list is over --> there.
Alex


Re: 72/8 friendly reminder

2005-03-23 Thread Alex Bligh

--On 23 March 2005 10:51 -0800 Randy Bush [EMAIL PROTECTED] wrote:
> a bit more coffee made me realize that what might best occur would
> be for the rir, some weeks BEFORE assigning from a new block issued
> by the iana, put up a pingable for that space and announce it on
> the lists so we can all test BEFORE someone uses space from that
> block.
Hmmm... or, if the RIRs are going to advertise the block anyway between
IANA issue and space assignment (which would appear to be a necessary
precondition for what you suggest to work), why not ping a large
collection of targets using the new block, and various other IP addresses,
as source addresses, and see which addresses responded from the old
block(s) but not from the new block. Sort by AS, and that would give you a
list (correct to heuristic level) of ASes that need to update their
filters. Then stick it on a web page.
RIPE could (for instance) generate its large collection of targets using
a tiny sample of host-count data. (Clearly RIPE needs to ping addresses
from all RIRs, ditto ARIN, APNIC etc.)
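The diffing step of that heuristic is straightforward; a minimal sketch
(targets, AS numbers and ping results are all made up, and the actual
pinging is stubbed out as two sets of addresses that answered):

```python
# Hypothetical sketch of the filter-detection heuristic: probe the same
# targets sourced from an old (known-good) block and from the newly
# issued block, then list ASes that answered the old source but not the
# new one -- i.e. ASes that probably still filter the new block.

def stale_filter_ases(targets, reachable_from_old, reachable_from_new):
    """targets maps IP -> origin AS; the two sets hold IPs that replied."""
    suspects = {}
    for ip, asn in targets.items():
        if ip in reachable_from_old and ip not in reachable_from_new:
            suspects.setdefault(asn, []).append(ip)
    # Sort by AS number so the published list is stable and readable.
    return dict(sorted(suspects.items()))

# Toy data standing in for a host-count sample (assumed, not real).
targets = {"192.0.2.1": 64500, "198.51.100.7": 64501, "203.0.113.9": 64502}
old_ok = {"192.0.2.1", "198.51.100.7", "203.0.113.9"}
new_ok = {"192.0.2.1", "203.0.113.9"}

print(stale_filter_ases(targets, old_ok, new_ok))
# AS64501 never answered probes sourced from the new block.
```

The result is exactly the per-AS list one would "stick on a web page".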
Alex


Re: 72/8 friendly reminder

2005-03-23 Thread Alex Bligh

--On 23 March 2005 11:15 -0800 Randy Bush [EMAIL PROTECTED] wrote:
> at least one rir is just dying to become net police,
You don't need any mandatory aspect. Just publish which ASes have addresses
that can be pinged from old netblocks, but not from new ones. No more
"net police"-like than all the other project stuff which monitors
reachability. If people want to filter on odd-numbered first octet
of IP address, well, more power to them.
(Yes, I know it was partly tongue in cheek.)
Alex


Re: Clue on Europe

2005-03-08 Thread Alex Bligh

--On 07 March 2005 19:34 -0800 Ashe Canvar [EMAIL PROTECTED] wrote:
> My research leads me to believe that London and Amsterdam have the
> most dense connectivity. Is this true ?
I'd say London has the most dense connectivity because just about every
transatlantic circuit goes through London. However, this factor may
not be relevant if you are trying to set up an outpost from the US,
because you perhaps don't much care about connectivity to the US.
Frankfurt has good connectivity too (probably less good than
Ams & Lon), but staff in London and Amsterdam speak English (yes,
the guys from NL speak English just about as well as the English),
which is probably an advantage unless you are German-speaking.
There used to be some pan-European networks that unbelievably had no
node or no interconnection in London (EBONE fell into this camp for
a long while).
There is probably more choice of decent colo in London than in AMS or
Frankfurt. At least in the UK, intrabuilding local loop charges in the Isle
of Dogs should be the cost of putting in a wire, and then maintaining a
wiring plan. Interbuilding local loop costs should be fine so long as you
don't buy from BT (unless you buy LES circuits - 100Mb/s ethernet on dark
fiber, which are reasonably priced). If it's not like that, go elsewhere
(Telehouse has some bizarre policies on this).
People you might want to talk to in London:
* Redbus (HEX has had power problems and is completely packed anyway,
 but Meridian Gate which is their newer facility is fine AFAIK).
* Telecity (make sure you get HEX not Bonnington house - different power
 from above). Helpful and price competitive.
* XchangePoint - will do you relatively cheap inter building interconnect,
 peering, will help you find decent colo, and have helped out quite a
 few people from the US who've had similar requirements. Don't know if
 things have changed after their recent acquisition.
Disclaimer: I used to have connections with 2 of the above but haven't
for 1 yr now. There are plenty of others, too.
Alex


Re: Clue on Europe

2005-03-08 Thread Alex Bligh

--On 08 March 2005 10:07 + [EMAIL PROTECTED] wrote:
> Also, when I dealt with them, I
> believe their NOC was connected to the Net for external monitoring
> purposes by a leased line which was frequently down.
I don't think that's true. Their NOC has always been in either one
data center or another (separated by a few hundred yards).
Alex


Re: E1 - RJ45 pinout with ethernet crossover cable

2005-02-25 Thread Alex Bligh

--On 25 February 2005 11:57 + Per Gregers Bilse 
[EMAIL PROTECTED] wrote:

> > Quick question: If I have two E1 ports (RJ45), then will running a
> > straight ethernet cable between the two ports have the same effect as
> > plugging a balun into each port and using a pair of coax (over a v.
> > short distance).
> You generally need a router or something else acting as store-and-forward.
> E1/T1 and other plesiochronous circuits are just that, near synchronous,
> and certainly not asynchronous.
Whilst this is true, his question still stands. Yes, indeed, if you got the
RJ-45 crossover right (I don't think it's the ethernet pinout, from memory,
but...) you would indeed achieve the same effect as a crossed-over pair of
coaxes. However, it might well not be the effect you intend or desire (for
the reasons Per points out).
One circumstance where this does work is connecting (for instance) an E1
trunk connection between (say) two FR switches in the same room, provided
you remember to set exactly one end to originate, and one end to receive,
clock (i.e. where there are no carriers involved).
Alex


RE: E1 - RJ45 pinout with ethernet crossover cable

2005-02-25 Thread Alex Bligh

--On 25 February 2005 09:43 -0500 Hannigan, Martin 
[EMAIL PROTECTED] wrote:

> Not that I know of, but I've never attempted what you
> describe. Putting the baluns in the loop will destroy the
> framing i.e. it's going to try and convert b8zs/ami to 802.x.
How does a balun destroy the framing (or rather line coding)? It's just a
pair of transformers, and hence AC characteristics pass through intact.
All you've done is converted impedance (and, IIRC, line voltage).
Alex


Re: Kornet/ChinaNet was Re: ChinaNet Contacts

2005-02-18 Thread Alex Bligh

--On 18 February 2005 08:32 + Simon Waters [EMAIL PROTECTED] wrote:
> Whilst I can appreciate that Kornet may have issues with a lot of
> broadband users, the other big Korean company seems to have it
> solved. What I see is what appear to be (using whois data!) US companies
> buying transit from them.
How are US companies with Korean offices meant to take connectivity
then?
Alex


Re: Smallest Transit MTU

2004-12-29 Thread Alex Bligh

--On 29 December 2004 17:04 -0500 Joe Abley [EMAIL PROTECTED] wrote:
> > But that only affects tcp traffic - it does nothing to help other
> > protocols.
> Are there any common examples of the DF bit being set on non-TCP packets?
traceroute
Alex


Re: Affects of rate-limiting at the far end of links

2004-12-13 Thread Alex Bligh

--On 13 December 2004 13:18 + Sam Stickland [EMAIL PROTECTED] 
wrote:

> doesn't lock out traffic for such long periods of time.
> Could it be that buffers and flow-control over the 14ms third party leg
> are causing the rate-limiting leaky bucket to continue to overflow long
> after it's full?
Or you are losing line-protocol keepalives of some sort (e.g. at L2), or
routing protocol packets. It may also be that your MPLS provider limits
the traffic at X kbps INCLUDING protocol overhead - if so, it's going to
police out all sorts of important stuff (assuming you are running FR, ATM
or something similar, rather than some sort of TDM over MPLS).
Alex


RE: [Fwd: zone transfers, a spammer's dream?]

2004-12-13 Thread Alex Bligh

--On 14 December 2004 10:17 + Matt Ryan [EMAIL PROTECTED] 
wrote:

>  171 uk.zone
> www.bl.uk?
All bar the 171 lines :-) (.uk itself contains some legacy including
bl.uk, govt.uk etc.).
Alex


Re: no whois info ?

2004-12-12 Thread Alex Bligh

--On 11 December 2004 12:07 -0500 Rich Kulawiec [EMAIL PROTECTED] wrote:
> I don't want to turn this into a domain policy discussion,
Ditto. I'd add one thing though: allowing anonymous registration is not
necessarily the same thing as allowing all details of registration to be
publicly queryable under all circumstances. In any case (whether happily or
sadly) local laws can often get in the way of total openness.
The operational aspect of this I think is as follows: if an operator had a
problem with a network endpoint in 1995, then there was a good chance "whois
domainname" would reach someone clueful, as the majority of network
endpoints were clueful (for some reading thereof); hence "whois domainname"
was useful for network debugging. In 2004, I'd suggest the wider
penetration of the internet means "whois domainname" on its own is not a
useful operational tool any more. Even "whois -h rir inetnum" is becoming
less useful, and to an extent "whois asnumber". The argument for people not
wanting to put personal information up on domain name registrations is, I'd
have to say, a little similar to the reason some providers don't like having
their (true) NOC number on "whois provider.net"; i.e. they don't want junk
calls. Which leaves you in essence with hop-by-hop debugging according to
peering agreements. Or "is anyone here from $provider" messages.
Alex


Re: [Fwd: zone transfers, a spammer's dream?]

2004-12-09 Thread Alex Bligh

--On 09 December 2004 10:24 -0500 Rich Kulawiec [EMAIL PROTECTED] wrote:
> The irony of all this is that spammers already have all this information
> -- yet registrars have gone out of their way to make it as difficult as
> possible for everyone else to get it (rate-limiting queries and so on).
They clearly don't already have this information, or they wouldn't
a) be offering to pay people for it
b) continue to be trying to obtain it by data mining.
Your argument is roughly equivalent to "The irony of this is that drug
dealers already have drugs -- yet governments have gone out of their
way to make it as difficult as possible for everyone else to get them."
Or "Credit card fraudsters already have credit card numbers -- yet
credit card companies have gone out of their way to make it as
difficult as possible for everyone else to get them."
I.e., sure, there's a lot of leaked information out there (often including
personal data); that doesn't mean responsible registries should add
to it.
Note also that responsible registries do provide query access (automatable
where necessary) to registration data in a variety of different ways;
not all make it as hard as possible for others to access it.
I will leave it to the reader's judgment to work out which registries
come under the category "responsible".
Alex


Re: [Fwd: zone transfers, a spammer's dream?]

2004-12-09 Thread Alex Bligh

--On 09 December 2004 18:46 +0100 Kandra Nygårds [EMAIL PROTECTED] wrote:
> > IE sure, there's a lot of leaked information out there (often including
> > personal data), that doesn't mean responsible registries should add
> > to it.
> Such as... selling access to the data to anyone who pays? No, responsible
> registries should of course not do this.
Indeed. I wasn't suggesting they should.
Alex


Re: [OT] Re: Banned on NANOG

2004-12-04 Thread Alex Bligh

--On 04 December 2004 17:35 + Paul Vixie [EMAIL PROTECTED] wrote:
> third and last, there are a number of principles up for grabs right now,
> and the folks who want to grab them aren't universal in their motives or
> goals.  some folks think that rules are bad.  others think that susan is
> bad or that merit is bad.  some say that rules are ok if the community has
> visibility and ultimate control.
I'd add: if people don't like NANOG, demand a full refund for your
year's membership. Then go set up your own mail-server and work out your own
moderation policies. If you do a better job, you'll win clueful
subscribers.
Alex


MTU (was Re: ULA and RIR cost-recovery)

2004-11-25 Thread Alex Bligh

--On 25 November 2004 13:16 + [EMAIL PROTECTED] wrote:
> In today's network, is there anyone left who uses 1500 byte
> MTUs in their core?
I expect there are quite a few networks who will give you workable
end-to-end MTUs < 1500 bytes, either because of the above or because of
peering links.
Given how pMTUd works, this speculation should be relatively easy to test
(take an end point on a 1500-byte MTU, run traceroute with appropriate MTU
to various points and see where "fragmentation required" comes back). Of
course I'd have tried this myself before posting, except, urm, I can't find
a single machine I have root on that I can get more than a hop or two from
without running into 1500-byte (or less) MTU.
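The probing described amounts to searching for the largest DF-flagged
packet a path will carry. A rough sketch of that search, with the actual
wire probe replaced by a fake path of hop MTUs (so no raw-socket
privileges are needed and the values are purely illustrative):

```python
# Sketch of a pMTUd-style test: find the largest probe size the whole
# path accepts. The real probe would be a DF ping; here it is stubbed.

def probe_fits(size, hop_mtus):
    """Stand-in for a DF ping: True unless some hop's MTU is exceeded
    (that hop would instead return ICMP 'fragmentation needed')."""
    return all(size <= mtu for mtu in hop_mtus)

def discover_path_mtu(hop_mtus, lo=576, hi=9000):
    """Binary-search the largest probe size the whole path accepts."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if probe_fits(mid, hop_mtus):
            lo = mid
        else:
            hi = mid - 1
    return lo

# A jumbo-capable core fronted by 1500-byte edge links, as speculated.
print(discover_path_mtu([1500, 9000, 9000, 1500]))  # 1500
```

Which is exactly the "can't get more than a hop or two" problem: the
edge links clamp the answer to 1500 regardless of the core.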
I am guessing also that a recent netflow sample from a commercial core (not
Internet2), even with jumbo frames enabled, will show < 0.01% of packets
that will not fit in a 1500-byte MTU. Anyone have data?
Alex


Re: who gets a /32 [Re: IPV6 renumbering painless?]

2004-11-21 Thread Alex Bligh

--On 21 November 2004 11:59 +0200 Petri Helenius [EMAIL PROTECTED] wrote:
> If we ever make contact to some other civilization out there, do they
> have to run NAT?
Nah. Jim Fleming tells me they're running IPv8 (ducks)
Alex


Re: who gets a /32 [Re: IPV6 renumbering painless?]

2004-11-20 Thread Alex Bligh

--On 19 November 2004 09:40 -0800 Owen DeLong [EMAIL PROTECTED] wrote:
> If it were true, then I would have to renumber
> every time I changed telephone companies.  I don't, so, obviously, there
> is some solution to this problem.
But I'm not sure you'd like it applied to the internet. Firstly, in
essence, the PSTN uses static routes for interprovider routing (not quite
true, but nearly - if you add a new prefix, everyone else has to build it
into their table on all switches). Secondly, IIRC, porting in the UK works
something like this: call delivered to the switch of the operator who owns
the block; marked as a ported number; lookup in a central porting database
(one for all operators); operator port prefix put on the dialed number;
call sent back out all the way to the interconnect; enters the new
operator's network; goes to the switch managing ports; further signalling
info added to make the call go to the correct local switch; call goes to
the correct local switch; dross removed; call terminated.
Roughly speaking this is the internet equivalent of:
* Configure all interprovider routes by a static routing config loaded
 every week or so.
* Handle porting by getting ICANN to run a box with a primitive gated
 BGP feed connected to all your distribution routers. Where a packet
 is delivered to a distribution router and the IP address has changed
 providers, change the next hop received from the ICANN BGP feed
 to a GRE tunnel to the appropriate provider's tunnel termination box.
* At that tunnel termination box, static route all ported-in IP addresses
 to the correct distribution router.
Yum yum.
Sometimes we don't have lessons to learn from the PSTN world, and instead
the reverse is true.
Alex


Re: Problems receiving emails from china...

2004-11-18 Thread Alex Bligh

--On 18 November 2004 14:01 -0500 Lou Laczo [EMAIL PROTECTED] wrote:
> The client's mailserver is
> running qmail. In almost all of the cases, the failing email has at least
> one attachment and is larger than what might be considered normal.
Have you tried checking the intervening path is clean w.r.t. ECN?
Alex


Re: IPV6 renumbering painless?

2004-11-16 Thread Alex Bligh

--On 15 November 2004 17:24 -0800 Owen DeLong [EMAIL PROTECTED] wrote:
> ASNs issued today are subject to annual renewal.
ARIN ASNs only?
Alex


Re: How to Blocking VoIP ( H.323) ?

2004-11-11 Thread Alex Bligh

--On 11 November 2004 10:46 -0800 Randy Bush [EMAIL PROTECTED] wrote:
> > What business issue/problem are you trying to address by
> > blocking VoIP?
> an incumbent telco which also has the monopoly on ip might
> want to prevent bypass.  welcome to singapore, and remember
> to try the chili crab.
Me, I'm trying IPsec+SIP.
Joe might want to try NewPort Networks, who claim to be able to find,
remove, capture and otherwise prevent bypass using VoIP. I'll be interested
to see what they do with the above without breaking VPNs. No
recommendation, just read their blurb. They are at:
http://www.newport-networks.com/
Alex


Re: Important IPv6 Policy Issue -- Your Input Requested

2004-11-09 Thread Alex Bligh

--On 09 November 2004 11:09 -0500 Leo Bicknell [EMAIL PROTECTED] wrote:
> I have to believe if the code can do IPv4-IPv6
> NAT
I want to see IPv4-IPv4 NAT working first...
Alex


Re: Big List of network owners?

2004-10-28 Thread Alex Bligh

--On 28 October 2004 11:33 -0700 Gary E. Miller [EMAIL PROTECTED] wrote:
> > in general, we try not to make life that easy for spammers and scammers
> Too late.  That horse ran out the barn when Verisign sold their whois
> data. At this point keeping the data hard to get just makes it harder on
> abuse admins.
Last time I looked, VRSN did not have whois data on netblock owners.
Alex


Re: why upload with adsl is faster than 100M ethernet ?

2004-10-15 Thread Alex Bligh

--On 15 October 2004 13:33 +0200 Iljitsch van Beijnum [EMAIL PROTECTED] 
wrote:

> However, the cause can also be rate limiting. Rate limiting is deadly for
> TCP performance so it shouldn't be used on TCP traffic.
Add "unless appropriate shaping is performed prior to the rate-limiting,
with the parameters carefully tuned to the rate-limiting".
You can also see an effect similar to rate-limiting from inadequate
buffering where there is a higher input media speed than output.
I can't remember what the tool is now, but there used to be a tool which
worked like ping but sent a UDP stream at a given rate per second and told
you about packet drops, and also allowed for some parameter to be tweaked
to give stochastic variation in inter-packet delay (i.e. massive jitter).
You could use this to show inadequate buffering on gigabit interfaces where
a 2Mb/s stream would get through, but if you wound up the jitter
sufficiently, a whole burst of packets would arrive together, and a gigabit
interface with (deliberately) misconfigured buffers would then drop packets.
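The tool itself goes unnamed, but its core behaviour is easy to sketch:
a paced packet schedule whose inter-packet gap carries a tunable random
jitter (the rate, duration and jitter parameters below are illustrative,
and the actual socket send is omitted):

```python
# Generic reconstruction of the described tester: emit packets at a
# target average rate, with a tunable random jitter on the gap so that
# high jitter produces bursts that stress interface buffering.
import random

def send_schedule(rate_pps, seconds, jitter, seed=0):
    """Return packet send times. jitter=0 is a perfectly paced stream;
    jitter=1 lets each gap vary between 0 and 2x the nominal gap, so
    several packets can land almost back-to-back."""
    rng = random.Random(seed)   # seeded, so runs are repeatable
    gap = 1.0 / rate_pps
    t, times = 0.0, []
    while t < seconds:
        times.append(t)
        t += gap * (1 + jitter * rng.uniform(-1, 1))
    return times

smooth = send_schedule(1000, 1.0, jitter=0.0)
bursty = send_schedule(1000, 1.0, jitter=1.0)
# Same average rate, but the bursty stream's shortest gaps approach
# zero, which is what overflows a shallow-buffered gigabit port.
```

Feeding the bursty schedule to a UDP sender against the same 2Mb/s
stream reproduces the effect: the pacing is fine on average, the
instantaneous bursts are not.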
Alex


Re: why upload with adsl is faster than 100M ethernet ?

2004-10-15 Thread Alex Bligh

--On 15 October 2004 11:46 -0400 Andy Dills [EMAIL PROTECTED] wrote:
> Hmm...I'd have to disagree. Are you perhaps assuming a certain threshold
> (100mbps, for instance)?
> I use rate limiting for some of my customers, and when correctly
> configured (you _must_ use the right burst sizes), you will get the
> exact rate specified, TCP or not. However, I've never had to rate-limit
> above 30mbps, so perhaps you have some experience that I don't.
I can support what Iljitsch said.
In a former life I ran extensive tests on the effect of CAR on TCP (I no
longer have the data to publish, but it's out there), and it's just plain
broken - if your purpose is to simulate a lower amount of bandwidth, with
or without a burst. In a nutshell, the problem is that the sliding window
algorithm expects RTT to gradually increase with congestion, to find the
optimum window size - the increased RTT stops the window growing. With
rate-limiting that does not also shape (i.e. buffer the packets - this is
true of token-based systems such as CAR), the window size just keeps on
expanding in leaps and bounds until there's a packet drop, whereupon it
shrinks right down, rinse and repeat, so you get a sawtooth effect. Adding
burst sizes just moves the problem around - you don't see the effect until
later in the stream - because the excess of traffic over committed rate
just sits there using up the burst and there is no signal to slow down; it
/somewhat/ hides the effect in a lab if you are using short single requests
(e.g. short HTTP) but not if you aggregate multiple parallel requests.
If you want to simulate lower bandwidths through a high bandwidth
interface, and you want to be TCP friendly, you HAVE to use shaping. That
means buffering (delaying) packets, and at gigabit line rates, with
multiple clients, you need BIG buffers (but set sensible buffer limits per
client).
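The policing-vs-shaping distinction can be shown with a toy simulation
(illustrative numbers only, and a simplified token bucket, not any
vendor's actual CAR or queueing implementation): the policer's only
congestion signal is a drop, while the shaper converts the same excess
into delay.

```python
# Toy contrast of the two behaviours: a token-bucket policer discards
# everything over rate once the burst is spent; a shaper queues the
# excess and releases it at the configured rate.

def police(packets, rate, burst):
    """packets: list of (arrival_time, size_bytes). Returns drop count."""
    tokens, last, dropped = burst, 0.0, 0
    for t, size in packets:
        tokens = min(burst, tokens + (t - last) * rate)  # refill
        last = t
        if size <= tokens:
            tokens -= size
        else:
            dropped += 1          # the policer signals only by discard
    return dropped

def shape(packets, rate, queue_limit):
    """Same offered load, but excess waits in a FIFO instead."""
    free_at, dropped, max_delay = 0.0, 0, 0.0
    for t, size in packets:
        backlog = max(0.0, free_at - t) * rate    # bytes already queued
        if backlog + size > queue_limit:
            dropped += 1                          # buffer overflow only
            continue
        free_at = max(t, free_at) + size / rate
        max_delay = max(max_delay, free_at - t - size / rate)
    return dropped, max_delay

# A back-to-back burst of ten 1500B packets against 125000 B/s
# (~1 Mbit/s) with a 3000B bucket: the policer sheds most of the burst,
# the (amply buffered) shaper delivers everything, just later.
burst = [(i * 0.0001, 1500) for i in range(10)]
print(police(burst, 125000.0, 3000))    # most of the burst dropped
print(shape(burst, 125000.0, 100000))   # no drops, nonzero delay
```

That queuing delay is the rising RTT a TCP sender needs to find its
window size; the policer never provides it, hence the sawtooth.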
You can reasonably trivially do the above test with ftp, ethereal,
a bit of perl, and something to graph sequence numbers and throughput.
There certainly used to be very few devices that did this properly AND
coped with a saturated GigE of small-packet DDoS without dying
spectacularly. This may or may not have changed.
Alex


Re: why upload with adsl is faster than 100M ethernet ?

2004-10-15 Thread Alex Bligh

--On 15 October 2004 12:31 -0400 Andy Dills [EMAIL PROTECTED] wrote:
> If the desire is to provide a simulated circuit with x bandwidth, CAR
> does a great job, IFF you correctly size the burst: 1.5x/8 for the normal
> burst, 3x/8 for the max burst.
> The aggregate rate of the transfer is x in all the testing I've done.
> How can you ask for more than the configured line rate? In my testing, I
> noticed a pronounced saw-tooth effect with incorrectly configured bursts,
> but with correctly configured bursts, the saw-toothing effect did not
> prevent delivery of the configured throughput.
It was a fair while ago now, but we did a pretty full range of tweaking
(of max burst, burst size, and indeed of committed rate). We observed
the following problems:
a) The fudge factor that you needed to apply to get the right bandwidth
  depended heavily on (from memory)
  (i)   TCP stacks either end, whether slowstart configured etc.
  (ii)  path MTU
  (iii) Number of simultaneous connections
  (iv)  Protocol type (e.g. TCP vs. UDP), and content (HTTP was for
reasons to do with persistent connections typically different
from FTP)
  We did indeed (until we found a better solution) manage to come up
  with a fudge factor that minimized customer complaints under this
  head (which was most of them), but it was essentially "let's wind
  everything up high enough that in the worst case of the above they
  get throughput not less than they have bought"; however, this meant
  we were giving away rather more bandwidth than we meant to, which
  made upgrades a hard sell.
b) It *STILL* didn't work like normal TCP. We had customers with web
  servers behind these things who expected (say) a 2Mb service running
  constantly flatlined to operate like a 2Mb/s pipe running full (but
  not overfull) - i.e. they'd expect to go buy a level of service roughly
  equal to their 95th percentile / busy hour rate. When they were even
  slightly congested, their packet loss substantially exceeded what
  you'd see on the end of a properly buffered (say) 2Mb/s serial link.
  If their traffic was bursty, the problem was worse. Even if you
  could then say "well, our tests show you are getting 2Mb/s" (or rather
  more than that), the fact that a disproportionate number of packets
  were being lost caused lots of arguments about SLA.
c) The problem is worst when the line speed and the ratelimit speed
  are most mismatched. Thus if you are ratelimiting at 30Mb/s on a
  100Mb/s port, you won't see too much of a problem. If you are ratelimiting
  at (say) 128kbps on a 1Gb/s port, you see rather more problems.
  In theory, this should have been fixed by sufficient buffering and
  burst, but at least on Cisco 75xx (which is what this was on several
  years ago), it wasn't - whilst we found a mathematical explanation,
  it wasn't sufficient to explain the problems we saw (I have a feeling
  it was due to something in the innards of CEF switching).
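The mismatch effect in (c) can be illustrated with a crude token-bucket policer simulation. This is a sketch only: all rates, burst sizes and packet counts below are hypothetical, and real rate-limiters, TCP dynamics and queueing are far more complex than a single bucket.

```python
# Crude token-bucket policer: back-to-back 1500-byte packets arrive at
# line rate; the bucket refills at the committed rate and drops when empty.
def police(rate_bps, burst_bytes, packets, line_bps):
    """Return the fraction of packets dropped."""
    tokens = float(burst_bytes)
    dropped = 0
    for size in packets:
        dt = size * 8 / line_bps                  # arrival spacing at line rate
        tokens = min(burst_bytes, tokens + rate_bps / 8 * dt)
        if size <= tokens:
            tokens -= size                        # conforming: forward
        else:
            dropped += 1                          # non-conforming: drop
    return dropped / len(packets)

pkts = [1500] * 1000
# 30 Mb/s committed on a 100 Mb/s port: the bucket drains slowly.
low_mismatch = police(30e6, 512_000, pkts, 100e6)
# 128 kb/s committed on a 1 Gb/s port, same burst: once the bucket is
# empty it effectively never refills between back-to-back packets.
high_mismatch = police(128e3, 512_000, pkts, 1e9)
```

With the same burst size, the loss rate at the large mismatch comes out markedly worse, which is the shape of the problem described above (though not, of course, a model of the 75xx internals).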
I know several others who had similar problems both before this and after
it (one solving it by putting in a Catalyst with an ATM blade running LANE
and a fore ATM switch - yuck - there are better ways to do it now). I
am told that PXF stuff which does WFQ etc. in hardware is now up to this
(unverified). But that's shaping, not rate-limiting.
Alex


Re: HSSI-adtran

2004-09-20 Thread Alex Bligh

--On 20 September 2004 07:56 -0700 Philip Lavine [EMAIL PROTECTED] 
wrote:

I am having a problem with a DS3 that terminates into an
Adtran CSU (T3SU-300) and then into a 7200 with HSSI.
I can not ping with a  data pattern and I
experience packet loss and errors when I pass TCP
traffic.
Adtran recommended an attenuator.
What is the issue here? Is the signalling incompatible
with a PA-T3+. Should I be wasting my time with an
external CSU?
Last time I looked, PA-T3+ were even MORE fussy about the need
for attenuation than the non-plus variants (of the "never actually
work with a BT DS-3 delivered as standard" sort of fussy). So yes,
you are probably wasting your time with an external CSU, but you
are also quite likely to need attenuators anyway.
Alex


RE: HSSI-adtran

2004-09-20 Thread Alex Bligh

--On 20 September 2004 10:50 -0700 Philip Lavine [EMAIL PROTECTED] 
wrote:

More clues. It seems that every time I ping with the
 pattern the controller counter:
rx_soft_overrun_err=27473, increments.
If you admin both ends, enable scrambling.
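For reference, on many Cisco DS3 serial interfaces payload scrambling is a single interface command, and it must match at both ends of the circuit. A sketch only - the exact command name varies by platform and IOS version, so check your documentation:

```
interface Serial1/0
 scramble
```

If an external CSU (such as the Adtran here) sits in the path, its scrambling setting has to agree as well.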
Alex


Re: RIPE Golden Networks Document ID - 229/210/178

2004-09-04 Thread Alex Bligh

--On 02 September 2004 16:09 -0700 John Bender [EMAIL PROTECTED] 
wrote:

This would not be as problematic if dampening could be applied to a path
rather than a prefix, since an alternate could then be selected.  But
since this would require modifications to core aspects of BGP (and
additional memory and processor requirements) it does not seem a likely
solution.
Hmmm
So returning to the illustration Rodney gave Randy about the .foo
domain, are we saying that if the .foo domain's DNS is anycast,
then as (just from statistics of multiple paths) prefix flaps (as
opposed to flaps of individual paths) are going to be more likely [*],
route dampening adversely affects such (anycast) sources more than
straight unicast?
Or, looking at it the other way around, if in a heavily plural
anycast domain prefix route changes (as opposed to route changes
of individual paths) are more common than normal routes [*] (albeit
without - dampening aside - affecting reachability), does this mean
route dampening disproportionately harms such routes?
i.e. is the answer to Randy "because such networks [might] have
a higher tendency to use anycast"?
* = note untested assumption
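The untested assumption [*] is at least easy to state: if each of n origin paths flaps independently with probability p in some interval, the prefix sees at least one event with probability 1 - (1-p)^n, which grows quickly with n. A toy calculation, with purely hypothetical numbers:

```python
# Probability that a prefix sees at least one flap event per interval,
# given n independently flapping origin paths each with probability p.
def prefix_flap_probability(p, n):
    return 1 - (1 - p) ** n

p = 0.01                                   # hypothetical per-path flap rate
unicast = prefix_flap_probability(p, 1)    # single-origin prefix
anycast = prefix_flap_probability(p, 20)   # 20 anycast origins: ~18x worse
```

Whether real anycast origins flap independently (or at that rate) is exactly the untested part.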
Alex


RE: BGP-based blackholing/hijacking patented in Australia?

2004-08-15 Thread Alex Bligh

--On 14 August 2004 22:23 +0300 Hank Nussbacher [EMAIL PROTECTED] 
wrote:

Predating this is Bellwether (June 2000):
Indeed. In days of yore, when people developed at least marginally
non-obvious operational techniques, people sent email to nanog about it,
explaining the technique and their experience (hence the NOG bit);
the reception wasn't always positive, but at least the criticism was
technical. I wonder what the driving factor was for the change.
Alex


Re: BGP list of phishing sites?

2004-06-28 Thread Alex Bligh

--On 28 June 2004 18:43 +0100 Simon Lockhart [EMAIL PROTECTED] 
wrote:

It's wholly unfair to the innocent parties affected by the blacklisting.
i.e. the collateral damage.
Say a phishing site is hosted by geocities. Should geocities IP addresses
be added to the blacklist?
What if it made it onto an akamaized service? Should all of akamai be
blacklisted?
This is an issue wider than spam, phishing, etc.
That would depend on whether your block by IP address (forget whether
this is BGP black-hole lists, DNSRBL for SMTP, etc.) is of
a) IP addresses that happen to have $nasty at one end of them; or
b) IP addresses for whom no abuse desk even gives a response (even
  "we know, go away") when informed of $nasty.
It also depends on whether your response is "drop all packets" (a la
BGP blackhole) or "apply greater sanctions".
Seems to me (b) is, in general, a lot more reasonable than (a), particularly
where there is very likely more than one administrative zone per IP address
(for example HTTP/1.1 virtual hosting). It also better satisfies Paul's
criterion of being more likely to engender better behaviour (read:
responsibility of network operators for downstream traffic) if the
behaviour of the reporter is proportionate & targeted.
WRT "apply greater sanctions", it is possible of course, though perhaps
neither desirable nor scalable, to filter at layer 3 all sites on given IPs
to minimize collateral damage. See
http://www.theregister.co.uk/2004/06/07/bt_cleanfeed_analysis/
This is effectively what tools like spamassassin do when taking RBL type
feeds as a scoring input to filtering, in a mail context.
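A minimal sketch of that scoring approach, as opposed to a binary drop - feed names, weights and the threshold here are all hypothetical:

```python
# Weight blacklist-style feeds and act only past a threshold, rather
# than blocking on any single listing (the SpamAssassin-style approach).
FEED_WEIGHTS = {
    "nasty-at-this-ip": 1.5,          # category (a): $nasty seen at the IP
    "unresponsive-abuse-desk": 3.0,   # category (b): no abuse-desk response
}

def verdict(hits, threshold=4.0):
    """hits: set of feed names that list the client IP.
    Returns (score, reject?)."""
    score = sum(FEED_WEIGHTS.get(h, 0.0) for h in hits)
    return score, score >= threshold

# (a) alone stays below the threshold; (a) plus (b) crosses it.
assert verdict({"nasty-at-this-ip"}) == (1.5, False)
assert verdict({"nasty-at-this-ip", "unresponsive-abuse-desk"}) == (4.5, True)
```

Weighting (b)-type listings more heavily than (a)-type ones reflects the proportionality argument above.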
Alex


Re: what's going on with yahoo and gmail lately?

2004-06-21 Thread Alex Bligh

--On 21 June 2004 10:43 -0400 Randy Bush [EMAIL PROTECTED] wrote:
Why wait for Gmail when you can get max 10M messages and 1G total from
rediff.com ?
how american of us.  i doubt there has been 1G of *real content* in my
email for the last two decades.
I'm trying to work out whether in the last two decades I've ever received
a non-local email smaller than 100 bytes. Even your gnomic insights
exceed this with headers.
Alex


Re: Default Internet Service

2004-06-13 Thread Alex Bligh

--On 13 June 2004 16:15 +0100 Dave Howe [EMAIL PROTECTED] wrote:
disproof by counterexample is a valid technique.
only where the law of excluded middle holds true - that means if
everything is black & white with no shades of grey.
It is quite clear if nothing else from the circularity of threads
on NANOG about this sort of stuff, the number of iterations
around the circles, and the number of years this has been going on
that there is no silver bullet. So there are two other possibilities:
a) There is at least one (probably more) type of action which mitigates
  but does not fully solve the problem [in which case telling us
  why solution X doesn't work because it doesn't address example Y
  is not much help, as by assumption no solution is perfect], or
b) There are no types of action which mitigate the problem. In which
  case go do something more interesting than read/write NANOG on
  the subject.
There are a lot of (a)'s that are quite helpful. If anyone thinks,
for example, that installing a decent firewall doesn't help
prevent intrusions (no, it doesn't stop them all), or online access
to OS fixes (ditto), I would suggest some statistics to show this
would be useful.
Alex


RE: Even you can be hacked

2004-06-11 Thread Alex Bligh

--On 11 June 2004 14:18 -0700 Randy Bush [EMAIL PROTECTED] wrote:
the bottom line
  o if you want the internet to continue to innovate, then
the end-to-end model is critical.  it means that it
If there is a lesson here, seems to me it's that those innovative protocols
should be designed such that it is relatively easy to prevent or at least
discourage bad traffic. Because that's in the long run easier (read
cheaper for those of you of a free market bent) than educating users in an
ever changing environment. It would be a bit rich to criticize SMTP
(for instance) as misdesigned for not bearing this in mind given
the difficulty of anticipating its success at the time, but there is a
lesson here for other protocols. I can think of one rather obvious one
which would seem to allow delivery of junk in many similar ways to SMTP;
hadn't thought of this before but we should be learning from our
mistakes^Wprevious valuable experience.
Alex


Re: SSH on the router - was( IT security people sleep well)

2004-06-07 Thread Alex Bligh

[use telnet+ACL instead of SSH]
while this protects the router such that it allows packets in only
from known addresses, it does not allow packets in only from known
MACHINES. Addresses can be spoofed. Vendor C (at least in recent
history) did/does not allow binding of the host stack only to specific
interfaces.
Thus it is (if you are determined) not impossible to spoof a telnet
session especially if the first thing you do is inject a return
route.
This is why we were all good chaps and secured our BGP sessions,
remember?
Of course SSH should ALSO be secured so it only comes from known
source addresses, mainly for administrative reasons (I'd like to
know just WHICH NOC member of staff logged in from where and when).
There are still possible
man in the middle attacks that cannot be protected against by SSH.
Consider the case of a staff member lounging in the backyard on a
lazy Saturday afternoon with their iBook. They have an 802.11 wireless
LAN at home so they telnet to their Linux box in the kitchen and run
SSH to the router. Ooops!
Umm, I get seriously worried when people suggest they allow people
with router access to telnet from box A to box B, then SSH to a router.
Firstly, they should be logging into a secure set of machines first
in all sensible security models I've seen (even if an ACL doesn't
force them to do that, they should do it as good practice). Before
you say that requires them to have connectivity to those machines
in the case of network meltdown, in all sensible authentication
schemes the router is going to challenge some remote box(es) anyway,
and you can provide multiple such boxes - anything beyond that
is failover.
But the major point is: what kind of people do you (a) give enable
access on your router to, who (b) do not appreciate that telnet, then
ssh, is a seriously bad idea in terms of security (and can't
instead install ssh on whatever box it is)? Are engineers really
that dumb these days? Doing that sort of thing was a disciplinary
offence last time I ran a large network - not something to try and
work around with security policy. Note we even had this degree of
protection (no passwords in the clear over wires not controlled
by us) when IOS did not even have an ssh build.
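For what it's worth, the belt-and-braces version of that policy on a modern IOS image looks roughly like the following sketch - the ACL number, jump-host addresses and naming are purely illustrative:

```
! Management access only from the secured jump hosts
access-list 99 permit 192.0.2.10
access-list 99 permit 192.0.2.11
!
ip domain-name example.net
! (crypto key generate rsa is an exec-mode command, run once beforehand)
!
line vty 0 4
 access-class 99 in
 transport input ssh
 login local
```

The `access-class` line gives you the known-source restriction and the per-user login the audit trail discussed above.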
Alex


Re: SSH on the router - was( IT security people sleep well)

2004-06-07 Thread Alex Bligh

--On 07 June 2004 11:10 -0700 Randy Bush [EMAIL PROTECTED] wrote:
It makes more sense to funnel everything through secure gateways and
then use SSH as a second level of security to allow staff to connect
to the secure gateways from the Internet. Of course these secure
gateways are more than just security proxies; they can also contain
diagnostic tools, auditing functions, scripting capability,
etc.
and all the other things single points of failure need.  like
pixie dust, chicken entrails, ...
Where did the word "single" come from, given he had an "s" on "gateways"?
Replicate them across POPs. Having lots of routers accessible from a small
number of machines, which are (relatively) widely accessible but can be
firewalled to hell, seems a better option than having lots of routers
accessible from a large number of machines (esp. ones outside one's own
administrative domain, e.g. home machines). YMMV. [No, I don't think
they need the other pixie-dust stuff, though.]
Alex


Re: SSH on the router - was( IT security people sleep well)

2004-06-07 Thread Alex Bligh

--On 07 June 2004 17:50 -0400 [EMAIL PROTECTED] wrote:
Well, either you have one per POP (and that, as Randy Bush points out, can
be quite the headache in itself), which is still a single point of
failure for that POP, or you're advocating that the routers be reachable
from the magic box at *any* POP (which is right back into the large
number of machines issue)
Well, the way we did it, all routers were accessible from 3 (large) POPs,
two being in the NOC, and one being elsewhere (now you mention it, it was a
datacenter & POP combined). So the "large number of machines" was 3. I am
sure we could have scaled this to (say) 4 without substantial difficulty.
I agree one in every POP would be both painful and pointless. But that
wasn't what I meant.
Alex


Juniper DoS

2004-04-27 Thread Alex Bligh
Guys,
Which Juniper router do I need to /realistically/ (i.e. I have seen it do
this in practice, not it says it will do this in the specs, which I can
read myself) cope with and filter out 1Gbps of small packet DoS, while
still carrying a full table and generally behaving like a happy beast. I
don't need lots of ports (3xGigE will do) and am looking for the smallest
box that will do the job.
I will summarise off-list responses to the list.
Alex


Re: Alternate and/or hidden infrastructure addresses (BGP/TCP RST/SYN vulnerability)

2004-04-23 Thread Alex Bligh


--On 23 April 2004 09:09 -0400 Patrick W.Gilmore [EMAIL PROTECTED] 
wrote:

(TTL should only be decremented when _forwarding_, and I don't think
you could argue that you need to _forward_ a packet from your ingress
interface to your _loopback_ interface..)
Well, if that were the case, then you wouldn't need multi-hop to do
loopback peering.
Um, only if there were no intervening hops: i.e. where the
physical mesh is
 A---B
 |   |
 C---D
Then A-D, and B-C peering requires multihop anyway.
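For the A-D loopback peering above, the multihop has to be configured explicitly; on IOS that is roughly the following (addresses and AS numbers purely illustrative):

```
router bgp 64496
 neighbor 192.0.2.2 remote-as 64497
 neighbor 192.0.2.2 ebgp-multihop 2
 neighbor 192.0.2.2 update-source Loopback0
```

The `ebgp-multihop` count has to cover the intervening hops (here just one extra), and `update-source` is what makes the session originate from the loopback.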

Alex


RE: Lazy network operators

2004-04-18 Thread Alex Bligh


--On 18 April 2004 03:48 +0100 Paul Jakma [EMAIL PROTECTED] wrote:

Well, let's be honest, name one good reason why you'd want IPv6
(given you have 4)?
As an IPv6 skeptic I would note that some protocols NAT extremely badly
(SIP for instance), and the bodges to fix it are costly. So if IPv6 means I
can avoid NAT, that can actually save $$$.
Alex


Re: Lazy network operators - NOT

2004-04-18 Thread Alex Bligh


--On 18 April 2004 02:56 -0400 Sean Donelan [EMAIL PROTECTED] wrote:

If you don't want to accept connections from indeterminate or
unauthenticated addresses, its your choice.
Whilst that may give you some heuristic help, I'm not sure
about the language. HINFO used that way neither /authenticates/
the address (in any meaningful manner as the reverse DNS holder
can put in whatever they like), nor does it /authenticate/ the
user (which some might characterize as the problem). Given it
is a widely held view (IMHO correct) that using network layer
addressing for authentication is broken, I think your suggestion
would probably be better received if you described this as a
heuristic mechanism.
Speaking of which, we get lots of heuristic solutions proposed.
Has anyone actually done any formal evaluation of
the statistics behind them? For instance, looked at a statistical
correlation between DUL listed entries and spam, extrapolated
to determine what would be the effect if all dialup blocks were
listed, and done proper significance testing, etc.? Ditto any
of the other techniques Paul's greylisting paper refers to. If not, it
sounds like a useful academic research paper. It's hardly like we
are short of data points.
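The sort of test meant here is nothing exotic - e.g. a chi-squared test of independence on a 2x2 table of message counts. A sketch with entirely hypothetical numbers:

```python
# Chi-squared test of independence on a 2x2 contingency table:
# rows = DUL-listed / not listed, columns = spam / ham.
def chi_squared_2x2(a, b, c, d):
    n = a + b + c + d
    observed = [a, b, c, d]
    expected = [                       # counts expected under independence
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical sample: DUL-listed hosts sent 400 spam / 100 ham;
# unlisted hosts sent 150 spam / 350 ham.
stat = chi_squared_2x2(400, 100, 150, 350)
significant = stat > 3.84   # 5% critical value at 1 degree of freedom
```

The extrapolation step (what if *all* dialup blocks were listed) is then a separate modelling exercise on top of this.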
Alex


Re: Lazy network operators

2004-04-14 Thread Alex Bligh


--On 14 April 2004 12:17 +0300 Petri Helenius [EMAIL PROTECTED] wrote:

How many MUAs default to port 587? How many even know about 587 and give
it as an option other than fill-in-the-blank?
So until they do, treat unauthenticated port 25 connections skeptically,
and authenticated port 587 connections not skeptically.
"Skeptically" might be defined as: do not allow connections from outside
known IPs, and reply "550: Denied - please see http://myisp.net/relay.html",
which explains how to fix your mail client.
<metaargument>

Not to pick on you in particular:

This argument (at least on NANOG) seems to be characterized by the following

1. A suggests X, where X is a member of S, being a set of largely well known
  solutions.
2. B1 ... Bn, where n>1, say X is without value as X does not solve
  the entire problem, each using a different definition of "problem".
3. C1 ... Cn, where n>1, say X violates a fundamental principle of
  the internet (in general without quoting chapter & verse as to
  its definition, or noting that for its entire history, fundamental
  principles, such as they exist, have often been in conflict, for
  instance end-to-end connectivity, and taking responsibility for
  one's own network in the context of (for instance) packets sourced
  from 127.0.0.1, etc.)
4. D1 .. Dn, where n>1, say X will put an enormous burden on some
  network operators and/or inconvenience users (normally without
  reference to the burden/inconvenience from the problem itself,
  albeit asymmetrically distributed, and normally without reference
  to the extent or otherwise that similar problems have been
  solved in a pragmatic manner before - viz route filtering, bogon
  filtering etc.)
5. E1 .. En, where n>1, insert irrelevant and ill-argued invective,
  thus obscuring any new points in 1..4 above.
6. Goto 1.

It may be that NANOG (mailing list) is a particularly unproductive place
to discuss tackling the spam problem, but I don't know of anywhere less
bad.
In my view, we have to recognize:

A. The problem is complex, else it would have been solved by now. There
  is unlikely to be a single silver-bullet solution. Any solution will
  be a composite of multiple different solutions, none of which alone
  (possibly together) will be perfect.
B. Solutions need to be proportionate to what they achieve - where they
  challenge fundamental principles we need to evaluate that in the
  context of why those fundamental principles exist in the first place.
C. Many solutions require hard work by network engineers. That is the
  value add. The problem is asymmetric which means that at least some
  part of the solution must have some normative component (see, for
  example, route filtering) as far as network operators are concerned.
D. There also needs to be a normative component as far as users are
  concerned. Much of the behaviour we seek to change is not reliably
  distinguishable from acceptable behaviour at a technical level; whilst
  we may be able to improve that with better technology or simply
  different default settings, technology alone is not going to produce
  a solution in the absence of (say) AUPs and/or legislation.
</metaargument>

Alex


Re: Verification required for steve@blueyonder.co.uk, protected by 0Spam.com.

2004-03-09 Thread Alex Bligh


--On 09 March 2004 11:25 + [EMAIL PROTECTED] wrote:

Requiescas in pace o email
ITYM Requiescas in pace o elitterae

Alex


Re: UUNet Offer New Protection Against DDoS

2004-03-06 Thread Alex Bligh


--On 06 March 2004 23:02 + Paul Vixie [EMAIL PROTECTED] wrote:

ok, i'll bite.  why do we still do this?  see the following from June
2001:
http://www.cctec.com/maillists/nanog/historical/0106/msg00681.html
Having had almost exactly that phrase in my peering contracts for
$n years, the answer is: because if you are A, and your peer is B,
if ( A > B )
  your spoofed traffic comes (statistically) from elsewhere, so you don't
  notice. You are dealing with traffic from C, where C > A
else
  you've signed their peering agreement, and are 'peering' on their
  terms instead. Was I going to pull peering with $tier1 from whom
  the occasional DoS came? Nope.
The only way this was ever going to work was if the largest networks
cascaded the requirements down to the smallest. And the largest networks
were the ones for whom (quite understandably) rpf was most difficult.
DoS (read: unpaid-for, unwanted traffic) is one of the best arguments
against settlement-free peering (FX: ducks & runs).
Alex


Re: Source address validation (was Re: UUNet Offer New Protection Against DDoS)

2004-03-06 Thread Alex Bligh


--On 06 March 2004 18:39 -0500 Sean Donelan [EMAIL PROTECTED] wrote:

Source address validation (or Cisco's term uRPF) is perhaps more widely
deployed than people realize.  Its not 100%, but what's interesting is
despite its use, it appears to have had very little impact on DDOS or
lots of other bad things.
...
But relatively few DDOS attacks use spoofed
packets.  If more did, they would be easier to deal with.
AIUI that's cause & effect: the gradual implementation of source-address
validation has made attacks dependent on spoofing less attractive to
perpetrators, whereas the availability of large pools of zombie machines
has made source spoofing unnecessary. Cisco et al. have shut
one door, but another one (some suggest labeled Microsoft) has opened.
Those with long memories might draw parallels with the evolution of
phreaking from abuse of the core, which became (reasonably) protected
to abuse of unprotected PABXen. As I think I said only a couple of days
ago, there is nothing new in the world.
Alex


Re: How relable does the Internet need to be? (Was: Re: Converged Network Threat)

2004-02-27 Thread Alex Bligh


--On 27 February 2004 13:39 + Paul Jakma [EMAIL PROTECTED] wrote:

Sounds like a perfect job for anycast.
Because you always want to get to an E911 service in the same AS number...

(seriously, read the SIP & SIPPING WGs)

Alex


Re: How relable does the Internet need to be? (Was: Re: Converged Network Threat)

2004-02-27 Thread Alex Bligh


--On 27 February 2004 14:52 + Paul Jakma [EMAIL PROTECTED] wrote:

Because you always want to get to an E911 service in the same AS
number...
You do or you dont? I dont see why anycast addresses need or need not
be restricted to same AS.
Anycast topology tends to follow AS topology, as people prefer their own
routes. So if there is 205.1.2.3/32 anycast into (say) AS701 in DC (only),
and anycast into (say) AS2914 in every US city, then it would not be
unexpected for an AS701 customer in SF to reach the anycast node for
205.1.2.3/32 in DC, as AS701 will in general prefer its own routes. If you
take a rural situation where you have your nearest (geographically) E911
service on some long link into Sprint, and the customer on some long link
into UUnet, it is most unlikely they will be close (network-wise). Anycast
is arguably good for finding the best-*connected* (i.e. closest using a
network metric) server, but is pretty hopeless for finding the closest (using
a geographic metric) server at anything much less than continental
resolution. Further, it is heuristic in nature. For (say) DNS, it doesn't
much matter if 1 in 50 queries go to a server far further away than they
need to. For E911, it does.
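The "prefer its own routes" behaviour is just local-preference beating path length in best-path selection; a toy sketch, with hypothetical values:

```python
# Toy BGP-style best-path selection: higher local-pref wins first,
# then the shorter AS path. Routes are (description, local_pref, path_len).
def best_path(routes):
    return max(routes, key=lambda r: (r[1], -r[2]))

# An AS701 customer in SF sees the distant DC anycast node via its own
# backbone (high local-pref) and a nearby AS2914 node via peering.
routes = [
    ("DC node via AS701 backbone", 200, 4),
    ("SF node via AS2914 peering", 100, 2),
]
chosen = best_path(routes)   # local-pref sends traffic to the DC node
```

The geographically/topologically nearer node loses because local-pref is evaluated before path length, which is the SF-customer-to-DC effect described above.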
Alex


Re: Anycast and windows servers

2004-02-20 Thread Alex Bligh
Sean,

Hence the reason why I want the route to cease being advertised if the box
fails.
I'm trying to avoid putting yet another server load balancer box in front
of the windows box to withdraw the route so a different working box will
be closest.  It may be an oxymoron, but I'm trying to make the windows
service (if not a particular windows box) as reliable as possible
without introducing more boxes than necessary.
You might be better not running the routing protocol on the Windows box,
and run gated (or whatever) on some nearby Linux/BSD box which tests the
availability of each of your Windows boxes and introduces the appropriate
route (i.e. a next-hop for the anycast address pointing at a normal IP
address) into (a very local) BGP (running multipath) or other favorite
routing protocol for each of the servers that are up.
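A minimal sketch of that watchdog's logic (addresses hypothetical; the actual route injection into BGP via gated or similar is deliberately left out, so the health check is shown separately from the announcement decision):

```python
# Decide which (anycast, next-hop) routes a nearby Unix box should
# announce, based on per-server health checks. Addresses hypothetical.
import socket

ANYCAST_ADDR = "192.0.2.53"
SERVERS = ["198.51.100.10", "198.51.100.11"]   # real unicast next-hops

def tcp_alive(host, port=80, timeout=2.0):
    """Health check: does the service complete a TCP handshake?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def routes_to_announce(health):
    """health: dict of next_hop -> bool; return one (anycast, next_hop)
    route per server that passed its check."""
    return [(ANYCAST_ADDR, nh) for nh, ok in health.items() if ok]

# In the real loop you would run the checks periodically and feed the
# result to the routing daemon, withdrawing routes for dead servers:
#   routes = routes_to_announce({s: tcp_alive(s) for s in SERVERS})
```

Keeping the decision function pure makes it easy to test without touching the network, and the multipath BGP setup described above then load-shares across whatever routes remain.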
Alex


RE: Clueless service restrictions (was RE: Anti-spam System Idea)

2004-02-18 Thread Alex Bligh
Tony,

--On 17 February 2004 17:27 -0800 Tony Hain [EMAIL PROTECTED] wrote:

Clearly I misinterpreted your comments; sorry for reading other parts of
the thread into your intent. The bottom line is the lack of a -scalable-
trust infrastructure. You are arguing here that the technically inclined
could select from a list of partial trust options and achieve 'close
enough'. While that is true, Joe-sixpack wouldn't bother even if he could
figure out how. Whatever trust infrastructure that comes into existence
for the mass market has to appear to be seamless, even if it is
technically constructed from multiple parts.
What I am thinking of takes the policy decision at the end MTA level,
i.e. the MTA closest to the user receiving the mail. That could be
a techy user on a DSL line. In the case of Joe-Sixpack, it would be whoever
runs POP/IMAP for him. If he runs an MTA without any of this stuff, he
is left where he is now (i.e. gets all mail).
Steve Bellovin suggested earlier that identity based approaches wouldn't
work. While I agree having the identity won't solve the problems by
itself, it does provide a key that the rest of the legal system can start
to work with. False identities are common in the real world, so their
existence in the electronic world is not really any different.
I am not sure you need to go as far as verifiable individual identities for
each sender/user; however, you may need to go as far as being able to
verify identities of MTAs - at least the first one that claims "I have
received this from someone for whom I am prepared to deal with the abuse
consequences".
I guess I
am looking at this from the opposite side the two of you appear to be,
rather than requiring authorization to send, irrefutable identity should
be used to deny receipt after proven abuse.
I am using authorization to permit/deny *receipt* (not sending). Clearly
if enough people deny unauthenticated receipt then in practice it does
imply one needs to authenticate sending. Whether you deny receipt after
proven abuse, or only accept receipt on proven identity is just a policy
decision in the hands of the mail-system admin as to what they do with
the "don't know" case.
I think this is getting (further) OT for NANOG & I should just write
something up.
Alex


Re: Clueless service restrictions (was RE: Anti-spam System Idea)

2004-02-17 Thread Alex Bligh


--On 17 February 2004 12:17 -0800 Tony Hain [EMAIL PROTECTED] wrote:

[with apologies for rearrangement]

The Internet has value because it allows arbitrary interactions where new
applications can be developed and fostered. The centrally controlled model
would have prevented IM, web, sip applications, etc. from ever being
deployed. If there are any operators out there who still understand the
value in allowing the next generation of applications to incubate, you
need to push back on this tendency to limit the Internet to an 'approved'
list of ports and service models.
...
Seriously, filtering is about attempting to prevent the customer from
using their target application. Central registration is no better, as its
only purpose is exercising power through extortion of additional funds for
'allowing' that application.


Quite right in general.

However
a) Some forms of filtering, which do occasionally prevent the customer
  from using their target application, are in general good, as the
  operational (see, on topic) impact of *not* applying tends to be
  worse than the disruption of applying them. Examples: source IP
  filtering on ingress, BGP route filtering. Both of these are known
  to break harmless applications. I would suggest both are good things.
b) The real problem here is that there are TWO problems which interact.
  It is a specific case of the following general problem:
  * A desire for any to any end to end connectivity using the
protocol concerned = filter free internet
  * No authentication scheme
Applying filters based on IP address & protocol (whether it's by filtering
or RBL) is in effect attempting to do authentication by IP address. We know
this is not a good model. People do, however, use it because there
currently is no realistic widely deployed alternative available. Those
that are currently available (e.g. SPF) are not widely deployed, and
in any case are far from perfect. Whilst we have no hammer, people will
keep using the screwdriver to drive in nails, and who can blame them?
Alex


RE: Clueless service restrictions (was RE: Anti-spam System Idea)

2004-02-17 Thread Alex Bligh


--On 17 February 2004 16:10 -0600 Chen, Weijing 
[EMAIL PROTECTED] wrote:

Sound like an any to any end to end signaling/control mechanism with
authentication capabilities.  Smell fishy (packet version of dial tone?)
Since when had dialtone got end-to-end signalling/control? My POTS line
doesn't run C7/SS7. I mean authentication as in scp (and not tftp). IE
one end user authorizes the other end user, using whatever credentials
they like.
Alex


Re: Clueless service restrictions (was RE: Anti-spam System Idea)

2004-02-17 Thread Alex Bligh
Steve,

--On 17 February 2004 17:28 -0500 Steven M. Bellovin 
[EMAIL PROTECTED] wrote:

In almost all circumstances, authentication is useful for one of two
things: authorization or retribution.  But who says you need
authorization to send email?  Authorized by whom?  On what criteria?
Authorized by the recipient or some delegee thereof, using whatever
algorithms and heuristics they chose. But based on data the authenticity of
which they can determine without it being trivially forgeable, and without
it being mixed up with the transport protocol. IE in much the same way as
say PGP, or BGP.
Attempts to define official ISPs leads very quickly to the walled
garden model -- you have to be part of the club to be able to send mail
to its members, but the members themselves have to enforce good
behavior by their subscribers.
I never said anything about official ISPs. I am attempting to draw an
analogy (and note the difference) between SMTP as currently deployed, and
the way this same problem has been solved many times for other well known
protocols.
We do not have an official BGP authorization repository, or an official PGP
authorization repository. We just have people we choose to trust, and people
they in turn choose to trust. Take BGP (by which I mean eBGP) as the case in
point: it seems to be the generally held opinion that the one-and-only
canonical central repository for routes does not work well. The trust
relationship is important, and we expect some transitivity (no pun intended)
in the trust relationships to apply. And many end-users in the BGP case -
i.e. stub networks - choose to outsource their trust to their upstream; when
they don't like how their upstream manages their routes, they move
provider. BGP allows me (in commonly deployed form) to run a relatively
secure protocol between peers, and deploy (almost) universal end-to-end
connectivity for IP packets in a manner that does not necessarily involve
end users in needing to know anything about it bar "if the routing doesn't
work, I move providers"; and IP packets do not flow through BGP, they
flow in manners prescribed by BGP. Replace "BGP" by "a mail authorization
protocol" and "IP packets" by "emails" in the foregoing; if the statement
still holds, we are getting there (without reverting to bangpaths &
pathalias). Oh, and people keep mentioning settlement and how it might fix
everything - people said the same about BGP (i.e. IP peering) - maybe, maybe
not - the market seems to have come up with all sorts of ingenious
solutions for BGP.
Alex


RE: Clueless service restrictions (was RE: Anti-spam System Idea)

2004-02-17 Thread Alex Bligh


--On 17 February 2004 16:19 -0800 Tony Hain [EMAIL PROTECTED] wrote:

Where they specifically form a club and agree to preclude the basement
multi-homed site from participating through prefix length filters. This
is exactly like the thread comments about preventing consumers from
running independent servers by forced filtering and routing through the
ISP server. This is not scaled trust; it is a plain and simple power
grab. Central censorship is what you are promoting, but you are trying to
pass it off as spam control through a provider based transitive trust
structure. Either you are clueless about where you are headed, or you
think the consumers won't care when you take their rights away. Either
way this path is not good news for the future Internet.
Now there was me thinking that I was in general agreeing with you. I am not
promoting any sort of censorship, central or otherwise. I believe you have
a perfect right to open a port 25 connection to any server, and I have a
perfect right to accept or deny it. And of course vice-versa. What I am
saying is that I would like, in determining whether to accept or reject
your connection, to know who you are and that you act responsibly, or
failing that, to know someone who is prepared to vouch for you; failing
that, maybe I'll accept your email anyway, maybe I won't. I do not care
what upstream either you or I have. For the avoidance of doubt, I am not
talking about forcing people to send mail through their upstreams, or even
suggesting that the graph of any web of trust should follow the BGP
topology. Indeed the entire point I made about separating the web of
trust's topology from IP addresses etc. was rather to enable end users to
CHOOSE how they accept/reject mail in a manner that might have nothing to
do with network topology. Personally I would be far more happy accepting
mail from someone who'd been vouched for by (say) someone on this list I
knew, than vouched for by their quite possibly clueless DSL provider. Of
course some people will want to use their ISP, many won't. Just like Joe
User can use their upstream's DNS service, but doesn't necessarily need to.
Maybe PGP would have been a better analogy as far as the scale bit goes. I
think you are assigning motives to the BGP basement-multihoming problems
where in general the main motive is not getting return on cost of hardware;
however, I don't think the same scale constraints need apply as it is
unnecessary to hold a complete table in-core at once.
Alex


Re: SMTP authentication for broadband providers

2004-02-13 Thread Alex Bligh


--On 12 February 2004 18:13 -0500 [EMAIL PROTECTED] wrote:

Since when was anything sent over port 25 confidential?
Since Phil Zimmermann decided to do something about it.
Well if you are considering the plain-text of an encrypted mail,
it doesn't much matter whether port 25 is intercepted by whatever
governmental agency, or relayed through however many servers with
questionable operators.
And quite frankly, he was right - that's the only way to do it right.
Oh I agree. My point to the original poster was that the supposed security
of port 25 communications was not a good reason to avoid using
relays on the way. If you want security of your communications,
a good first step is PGP (et al.). (Note that this does still leak
the To:/From:/Subject: lines, but they can be read via wire-tap just as
they can via intercept at a relay.)
Alex


Re: SMTP authentication for broadband providers

2004-02-13 Thread Alex Bligh


--On 13 February 2004 08:47 -0500 Carl Hutzler [EMAIL PROTECTED] wrote:

Is this what is commonly referred to as STARTTLS?
That would be good, but doesn't work when port 25 is blocked unless it's
STARTTLS on submission.
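To make that concrete, here is a minimal sketch (hostname and credentials are invented placeholders) of submitting mail over the submission port with STARTTLS plus SMTP AUTH, which works even where outbound port 25 is blocked:

```python
import smtplib

SUBMISSION_PORT = 587   # RFC 6409 message submission
SMTPS_PORT = 465        # SMTP over implicit TLS ("smtps")

def submit(host, user, password, sender, rcpt, msg):
    """Open the submission port, upgrade with STARTTLS, authenticate,
    then hand over the message. All arguments are caller-supplied
    placeholders; nothing here assumes a particular provider."""
    with smtplib.SMTP(host, SUBMISSION_PORT) as s:
        s.starttls()              # negotiate TLS before authenticating
        s.login(user, password)
        s.sendmail(sender, [rcpt], msg)
```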
Alex


Re: SMTP authentication for broadband providers

2004-02-13 Thread Alex Bligh


--On 13 February 2004 09:27 -0500 [EMAIL PROTECTED] wrote:

Y-Haw!  A return to the Old West of bangpaths and pathalias.
*Not* that I think bilateral peering for SMTP is a great idea, but: a
web of trust (A trusts B, B trusts C) does not necessarily mean
the mail has to traverse the route of the web of trust (i.e. if
A can establish that B trusts C, then why not accept the mail directly
from C, if all B is going to do is forward it essentially unaltered).
Perhaps this is no different from having someone DNS-sign some form
of "inverse MX" record saying "this is my customer and they shalt
not spam you, or lo, the wrath of my abuse department shall descend
on them and cut them off", and people not accepting mail from those
without such an inverse MX record signed by someone they trust, or by
someone who someone they trust trusts (etc.).
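As a sketch of the separation being argued for - the trust graph is searched for a vouching path, while the mail itself goes direct - assume a toy vouching map (all names here are invented for illustration):

```python
from collections import deque

trust = {                       # who vouches for whom (illustrative only)
    "me": {"isp-a", "friend"},
    "isp-a": {"isp-b"},
    "friend": {"sender.example"},
}

def vouched(start: str, target: str, max_hops: int = 3) -> bool:
    """Breadth-first search: is there a chain of vouches from
    `start` to `target` within `max_hops` links?"""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if node == target:
            return True
        if depth < max_hops:
            for nxt in trust.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return False

print(vouched("me", "sender.example"))  # True, via "friend"
```

The point is that accepting the connection from C needs only the reachability test, not the relaying.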
Alex


Re: SMTP authentication for broadband providers

2004-02-12 Thread Alex Bligh


--On 12 February 2004 14:07 -0800 Lou Katz [EMAIL PROTECTED] wrote:

I can locally submit to my mailserver, but if it tries to make an outbound
connection on port 25 to a client's mailserver, and that is blocked, then
all confidentiality of business or personal communication is gone.
Since when was anything sent over port 25 confidential?

Alex


Re: SMTP authentication for broadband providers

2004-02-11 Thread Alex Bligh


what about port 25 blocking that is now done by many access providers?
this makes it impossible for mobile users, coming from those providers,
to access your server and do the auth.
[EMAIL PROTECTED]:~$ fgrep submission /etc/services
submission  587/tcp # submission
[EMAIL PROTECTED]:~$ fgrep ssmtp /etc/services
ssmtp   465/tcp smtps   # SMTP over SSL
Alex


Re: SMTP authentication for broadband providers

2004-02-11 Thread Alex Bligh


--On 11 February 2004 16:30 -0500 Sean Donelan [EMAIL PROTECTED] wrote:

And I applaud your effort.  But does it really answer the question of who
is responsible for handling abuse of the service?  If ISP's are not
responsible for abuse using port 573, they probably don't care.
I think you are missing the point. I have lots of people abusing my port
25. They can abuse this due to the nature of the (current unadorned) SMTP
protocol as I have to leave it open and unauthenticated in order to receive
mail to users served by my server. I can quite see why their DSL provider
wants to block their connecting to my port 25, and (incidentally) other
customers of theirs get caught in the collateral damage. On the other hand,
I have no one even trying to abuse port 587 (sic), i.e. submission. Even if
people tried, they'd find they needed authentication on that port (even to
send to my local users). As I am doing nothing beyond a dumb RFC
implementation, and assuming other mail hosts are no dumber, ISPs thus
won't get abuse complaints for port 587 attacks in the same way they get
port 25 complaints. Of course they'll get *some* port 587 complaints, just
like they get some port 80 complaints. But blocking port 25 blocks access
to a well known poorly authenticated service. Blocking port 587 doesn't (or
rather wouldn't). If there were a whole pile of people accepting
unauthenticated connections on port 587, life would be different. But there
aren't, and it isn't.
Alex


Re: SMTP authentication for broadband providers

2004-02-11 Thread Alex Bligh


--On 11 February 2004 19:45 -0500 Sean Donelan [EMAIL PROTECTED] wrote:

The bulk of the abuse (some people estimate 2/3's) is due to compromised
computers.  The owner of the computer doesn't know it is doing it.
Unfortunately, once the computer is compromised any information on that
computer is also compromised, including any SMTP authorization
information.
SMTP Auth is not the silver bullet to solve the spam problem. ...
Right now SMTP AUTH is a bit more useful because the mailer can directly
identify the compromised subscriber.  But I expect this to also be
short-lived.  Eventually the compromised computers will start passing
authentication information.
Sure it's not a silver bullet. I think we ran out of silver bullets years
ago. But it gives you a lot more useful information than the IP address
(not much use with NAT etc.). As someone spake earlier who appeared to have
actually done it, you can then rate-limit by individual users, disable
individual users etc. - that's *far* harder on non-authenticated dynamic
SMTP.
Once someone has compromised a machine & stolen authentication tokens you
are (arguably) fighting a different battle anyway. A compromised machine
could HTTP-post spam to hotmail/yahoo etc. if it wanted to - the problem
is then protocol independent.

My original point was that port 25 blocking by ISPs does not stop mobile
users using SMTP AUTH, and the reasons for ISPs blocking port 25 are not
likely to be extended to smtps / submission. Not that the latter two
protocols would solve all spam tomorrow.
Alex


Re: .ORG problems this evening

2003-09-19 Thread Alex Bligh


--On 18 September 2003 10:05 -0400 Todd Vierling [EMAIL PROTECTED] wrote:

DNS site A goes down, but its BGP advertisements are still in effect.
(Their firewall still appears to be up, but DNS requests fail.)  Host
site C cannot resolve ANYTHING from DNS site A, even though DNS site B is
still up and running.  But host site C cannot see DNS site B!
What you seem to be missing is that the BGP advert goes away when the DNS
requests stop working.
I have written DNS/BGP code (nothing to do with UltraDNS) and I can tell
you it works very well. Even if you unplug the machine from the net you can
get rapid failover by tweaking a BGP timer here or there. If you are going
to say "yes, but that means I don't have one of the servers up whilst
routing reconverges", this is true, but (a) it happens ANYWAY, and (b) as
the preferred route is in general more local, the rain shadow from routing
reconvergence in the event of disruption is smaller.
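The coupling described above reduces to a small decision rule. This sketch (threshold invented for illustration) shows announce/withdraw logic driven by DNS health probes, independent of whichever BGP speaker actually injects the route:

```python
def route_decision(probe_results, fail_threshold=3):
    """Given a sequence of health-probe outcomes (True = DNS answered),
    return the BGP action after each probe: keep announcing the anycast
    prefix while healthy, withdraw after `fail_threshold` consecutive
    failures, re-announce on recovery."""
    actions, fails, announced = [], 0, True
    for ok in probe_results:
        fails = 0 if ok else fails + 1
        if announced and fails >= fail_threshold:
            announced = False
            actions.append("withdraw")
        elif not announced and ok:
            announced = True
            actions.append("announce")
        else:
            actions.append("hold")
    return actions

print(route_decision([True, False, False, False, True]))
```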
Alex


Re: What could have been done differently?

2003-01-28 Thread Alex Bligh

Sean,

--On 28 January 2003 03:10 -0500 Sean Donelan [EMAIL PROTECTED] wrote:


Are there practical answers that actually work in the real world with
real users and real business needs?


1. Employ clueful staff
2. Make their operating environment (procedures etc.) best able
  to exploit their clue

In the general case this is a people issue. Sure there are piles of
whizzbang technical solutions that address individual problems (some of
which your clueful staff might even think of themselves), but in the final
analysis, having people with clue architect, develop and operate your
systems is far more important than anything CapEx will buy you alone.

Note it is not difficult to envisage how this attack could have been
far far worse with a few code changes...

Alex Bligh




Re: The magic security CD disc Re: HTTP proxies

2002-12-09 Thread Alex Bligh



--On 08 December 2002 23:16 -0500 Sean Donelan [EMAIL PROTECTED] wrote:


It takes a lot of time to talk individual users through fixing their
computers.  Especially when they didn't break it.  They just plugged
the computer in, and didn't spend 4 hours hardening it.  Most of the
time we're not talking about very complex server configurations, with
full-time system administrators.  The magic CD would be for people who
don't know they are sharing their computers with the Internet.


How unfortunate that the magic CD you refer to is not the one with Microsoft
Windows written on the front :-p

Seriously, it is faintly ridiculous that we have operators talking about
a magic CD to fix the broken default installations of various operating
systems (I include Linux etc. here too). If OS vendors shipped, by default,
less broken configs (or at least configs that turned services off -
e.g. port 137 - when not required), much, though not all, of this
problem would go away. Just like it is (now) considered irresponsible
to ship a PABX/Voicemail system with open dialthrough, the same should
be true of operating systems. In many such OS's, like it or loathe it,
automatic or semiautomatic update mechanisms already exist. This would
seem to be a good use to put them to. Perhaps NIPC etc. should start
talking to OS vendors.

Concrete example (not to pick on MS for a change) - every time I've
installed a Linux machine I spend 10 or 20 minutes rewriting the (kernel)
firewall rules for the box to suit the apps I have installed. It's a
completely automatable task. Someone unfamiliar with either IP or UNIX would
find writing such a script very hard, and it would take them much longer. Do
mainstream distributions include such an automatically built script by
default? Not to my knowledge.
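The automation being asked for is not hard to sketch. Here is a toy generator (the service-to-port map is illustrative, and the output is plain iptables commands) that builds a default-deny ruleset from the services a host actually runs:

```python
def firewall_rules(services, portmap=None):
    """Emit a default-deny iptables ruleset that opens only the TCP
    ports for the services the host is known to run."""
    portmap = portmap or {"ssh": 22, "smtp": 25, "dns": 53, "http": 80}
    rules = [
        "iptables -P INPUT DROP",                                   # default deny
        "iptables -A INPUT -i lo -j ACCEPT",                        # loopback
        "iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT",
    ]
    for svc in services:
        rules.append(f"iptables -A INPUT -p tcp --dport {portmap[svc]} -j ACCEPT")
    return rules

for rule in firewall_rules(["ssh", "http"]):
    print(rule)
```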

Alex Bligh




Re: Risk of Internet collapse grows

2002-12-02 Thread Alex Bligh



--On 02 December 2002 11:07 + [EMAIL PROTECTED] wrote:


I just don't see how an outside probe can determine the true topology of
a  network.


You did *read* the paper?

Alex




Re:

2002-11-12 Thread Alex Bligh



--On 11 November 2002 18:40 -0800 Harsha Narayan [EMAIL PROTECTED] 
wrote:

   How do ISPs manage the allocations they get from the RIRs? More
specifically, do they make the assignments from this sequentially or not?
Are multihoming assignments to customers amidst non-multihoming
assignments?

   I ask this because /23s and /24s seem to be scattered over a wide area
- they are not adjacent to each other.


Some ISPs use allocation strategies (within the block from the RIR) to
maximize the likelihood of a future request from the same customer being
capable of adjacent assignment in such a manner as to produce aggregatable
blocks, to reduce routing entries. The simplest dumb strategy if all
requests were of equal size would (effectively) be to reverse the binary
bits (for instance when allocating /24s out of a /16 allocate 0.0, 128.0,
64.0, 192.0, 32.0, 160.0, 96.0, 224.0 and so on). Others use more informal
strategies (e.g. 'well, you may well want 2 x /24, but you are only
entitled to one /24 on the basis of the current network plan. We'll give
you one now; use the adjacent /24 last, but if we have to use it in order
to get another block from the RIR then tough').
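The "reverse the binary bits" strategy above is easy to sketch. This generates the third octet of successive /24 allocations out of a /16, reproducing exactly the order given:

```python
def bit_reversed_thirds(count, width=8):
    """Yield third-octet values for successive /24 allocations out of
    a /16, in bit-reversed order, so early allocations are maximally
    spread out and the space adjacent to each stays free for growth."""
    for i in range(count):
        yield int(f"{i:0{width}b}"[::-1], 2)

print(list(bit_reversed_thirds(8)))  # [0, 128, 64, 192, 32, 160, 96, 224]
```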

Generally there's only one block (or at most 2) active at a time in
most ISPs as the RIR won't issue another until utilization in existing
ones is good. However, there is of course reuse of space when customers
leave which also distributes address space.

Alex Bligh




Re: ICANN Targets DDoS Attacks

2002-11-04 Thread Alex Bligh

 - a very small percentage cud be blocked if u were willing to link

this to BGP learnt networks..at least those are complete networks, not
subnetted

ofcourse its a very small portion, mebbe u cud ask guys to send more
specific BGP routes from now


I am assuming you mean 'mark /32's for broadcast addresses as specifics
in BGP', or 'propagate subnets in BGP which are the actual networks
as more specifics, in which case the broadcast address (& network
address) are obvious'.

But if you are clueful enough to determine which downstream (possibly
customer) IPs are broadcast, and those still have directed broadcast
switched on (for instance as customer claims it's impossible to
turn off), then why not just drop all traffic to them rather than
push the routes around.

I have never had customers (used as reflectors) complain that traffic
to their network/broadcast addresses was dropped. In 'a network
with which I was involved', this was standard response if customers
didn't block directed broadcasts quickly. I seem to recall we used
exactly the same blackholing technique (propagate /32s internally
in BGP only with community tag to ensure traffic is next-hopped
to the bit bucket) as we used to drop other malicious traffic,
so it all got dropped at the border rather than at the CPE.
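What is described amounts to classic remotely-triggered blackholing. A hedged IOS-style sketch (the community value and the discard next-hop 192.0.2.1 are placeholders; this is not a verbatim config from the network in question):

```
! On every border router: point the discard next-hop at the bit bucket.
ip route 192.0.2.1 255.255.255.255 Null0
!
! Match the internally-propagated /32s by their blackhole community
! and set their next-hop to the discard address.
ip community-list standard BLACKHOLE permit 65000:666
route-map RTBH permit 10
 match community BLACKHOLE
 set ip next-hop 192.0.2.1
```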

Alex Bligh




Re: ICANN Targets DDoS Attacks

2002-11-01 Thread Alex Bligh



--On 29 October 2002 21:11 + Stephen J. Wilcox 
[EMAIL PROTECTED] wrote:

As they say, if you dont set the rate limit too low then you wont
encounter drops under normal operation.


It would be useful if [vendor-du-jour] implemented rate-limiting
by hashed corresponding IP address.

IE:
hash = gethash(source);
if (!hash) { hash = gethash(dest); }
if (hash) { ratelimiton(bucket(hash)); }

That way you could (on transit interfaces) specify a paltry limit
of (say) 10kb/s of ICMP (per (hashed) source/destination), even
when there were 'naturally' hundreds of Mb/s of ICMP flowing
through the interface in a non-DDoS environment. And if
an IP gets DDoS'd (or sources a DDoS), the ratelimit would
affect only that IP (oh, and any hash equivalents).
As, for these purposes, dropping large numbers of relatively
inactive hash entries wouldn't be painful, I would have thought
this would be unlikely to suffer from the self-similarity
properties that made Netflow hard - but perhaps not.
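A software model of the idea, assuming one token bucket per hash slot (all constants here are invented for illustration):

```python
import time
import zlib

BUCKETS = 4096      # number of hash buckets (illustrative)
RATE = 10_000       # bytes/sec allowed per bucket (illustrative)
BURST = 20_000      # bucket depth in bytes (illustrative)

state = {}          # bucket index -> (tokens remaining, last update time)

def allow_icmp(src_ip, dst_ip, length, now=None):
    """Token-bucket limit keyed on a hash of the source address
    (falling back to the destination), so a host sourcing or drawing
    a flood exhausts only its own bucket, not the whole interface."""
    now = time.monotonic() if now is None else now
    key = zlib.crc32((src_ip or dst_ip).encode()) % BUCKETS
    tokens, last = state.get(key, (BURST, now))
    tokens = min(BURST, tokens + RATE * (now - last))   # refill since last seen
    if tokens >= length:
        state[key] = (tokens - length, now)
        return True                                     # forward the packet
    state[key] = (tokens, now)
    return False                                        # drop: bucket exhausted
```

Relatively inactive entries can simply be evicted from `state`, since re-creating one just grants a fresh full bucket.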

Alex