Re: Abuse response [Was: RE: Yahoo Mail Update]

2008-04-16 Thread Simon Waters

On Wednesday 16 April 2008 17:47, Dave Pooser wrote:

  It can be useful to explain the abuse desk as being just another form
  of marketing, another form of reputation management that happens to be
  specific to Internet companies.

 Is it? 

.. SNIP good points about abuse desks ..

In the specific case that started this (Yahoo), I think there is a 
marketing issue.

Ask anyone in the business "if I want a free email account, who do I use?" and 
you'll get the almost universal answer: Gmail. 

Mostly this is because Hotmail delete email randomly, Yahoo struggle with the 
volumes, and everyone forgets AOL do free accounts (although it is painfully 
slow and the documentation is incomplete).

But it is in part that Google do actually answer enquiries still, be they 
abuse or support. Yahoo occasionally manage an answer, usually not to the 
question you asked, or asking for information already supplied. AOL - well 
you can get an answer from their employee who watches Spam-L, but directly 
not a chance.

So it is a competitive market, and the opinion of those in the know matters (a 
little -- we could make more noise!). Although the tough one to compete with 
is Hotmail, since their computer offers it to them every time they reinstall, 
and those reinstalling most often have the least clue, but eventually realise 
that having their email on THEIR(!) PC is a bad idea.

But yes, the abuse desk is only a minor issue in that market; still, if you don't 
deal with abuse, it will cost email providers on the bottom line. I think that for 
people mostly providing bandwidth, email is still largely irrelevant: even at the 
hugely inflated levels the spammers cause, it is still a minor percentage, and 
favicons (missing or otherwise) probably cause nearly as much traffic.


Re: Problems sending mail to yahoo?

2008-04-13 Thread Simon Lyall

On Mon, 14 Apr 2008, Adrian Chadd wrote:
 There already has been a paradigm shift. University students (college for 
 you
 'merkins) use facebook, myspace (less now, thankfully!) and IMs as their
 primary online communication method. A number of students at my university
 use email purely because the university uses it for internal systems
 and communication, and use the above for everything else.

That is nothing new. ICQ is 10 years old and IRC was common in the
early 90s. I would guess plenty of people on this list use (and used back
then) both to talk to their friends and teammates.

The question is what tool are people going to use to talk to people,
government bodies and companies that they are not friends with? Even if
the person you want to contact is on IM it is likely they will block
messages from random people due to the existing Spam problem there.

-- 
Simon J. Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.



Re: YouTube IP Hijacking

2008-02-26 Thread Simon Leinen

Iljitsch van Beijnum writes:
 Well, if they had problems like this in the past, then I wouldn't
 trust them to get it right. Which means that it's probably a good
 idea if EVERYONE starts filtering what they allow in their tables
 from PCCW. Obviously that makes it very hard for PCCW to start
 announcing new prefixes, but I can't muster up much sympathy for
 that.

 So basically, rather than generate routing registry filters for the
 entire world, generate routing registry filters for known careless
 ASes. This number should be small enough that this is somewhat
 doable. [...]

Maybe, but how much would that help?

So you suggest that we only need to filter against AS7007, AS9121, and
AS17557.  Personally, those are among the ones I least worry about -
maybe I'm naive, but I'd hope they or their upstreams have learned
their lessons.

The problem is that nobody knows which of the other 25000+ ASes will
be the next AS7007.  So I guess we have to modify your suggestion
somewhat and, in addition to filtering the "known-careless", also
filter the "unknown-maybe-careful" class.  Oops, that leaves only the
"known-careful" class, which includes... my own AS, and then whom?
-- 
Simon.


Re: hijack chronology: was [ YouTube IP Hijacking ]

2008-02-26 Thread Simon Leinen

Martin A Brown writes:
 Late last night, after poring through our data, I posted a detailed
 chronology of the hijack as seen from our many peering sessions.  I
 would add to this that the speed of YouTube's response to this
 subprefix hijack impressed me.

For a Sunday afternoon, yes, not bad.

Here's a graphical version of the timeline:

http://www.ris.ripe.net/cgi-bin/bgplay.cgi?prefix=208.65.153.0/24&start=2008-02-24+18:46&end=2008-02-24+21:05

 As discussed earlier in this thread, this is really the same old
song--it simply has a new verse now.  (How many of our troubadours
know all of the verses since AS 7007?)

Probably someone is busy working on the new NANOG song with an
ever-extending refrain (AS7007, ..., AS17557, when will we ever learn?).
-- 
Simon.


Re: YouTube IP Hijacking

2008-02-26 Thread Simon Leinen

Rick Astley writes:
 Anything more specific than a /24 would get blocked by many filters,
 so some of the high target sites may want to announce their
 mission critical IP space as /24 and avoid using prepends.

Good idea.  But only the high target sites, please.  If you're an
unimportant site that nobody cares about, then DON'T DO THIS, ok? ;-)
-- 
Simon.


Re: YouTube IP Hijacking

2008-02-24 Thread Simon Lockhart

On Sun Feb 24, 2008 at 04:32:45PM -0500, Martin Hannigan wrote:
 Let's avoid speculation as to the why and reserve this thread for
 global restoration activity.

So, from the tit-bits I've picked up from IRC and first-hand knowledge,
it would appear that 17557 leaked an announcement of 208.65.153.0/24 to 
3491 (PCCW/BTN). After several calls to PCCW NOC, including from Youtube
themselves, PCCW claimed to be shutting down the links to 17557. Initially
I saw the announcement change from "3491 17557" to "3491 17557 17557", so 
I speculate that they shut down the primary link (or filtered the announcement
on that link), and the prefix was still coming in over a secondary link 
(hence the prepend). After more prodding, that route vanished too.

Various mitigations were talked about and tried, including Youtube announcing
the /24 as 2*/25, but these announcements did not seem to make it out to the 
world at large.

Currently Youtube are announcing the /24 themselves - I assume this will be
dropped at some point once it's safe.

It was noticed that all the youtube.com DNS servers were in the affected /24.
Youtube have subsequently added a DNS server in another prefix.
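
A quick way to spot that kind of single-prefix exposure is to check whether all of
a zone's nameservers resolve into the same /24. A rough sketch (Python with the
dnspython library; the zone name and the /24 aggregation are just illustrative):

# Check whether all of a zone's nameservers sit inside a single /24.
# A rough sketch; requires dnspython (pip install dnspython).
import dns.resolver
import ipaddress

def ns_prefixes(zone, prefixlen=24):
    nets = set()
    for ns in dns.resolver.resolve(zone, "NS"):
        for a in dns.resolver.resolve(ns.target, "A"):
            nets.add(ipaddress.ip_network(f"{a.address}/{prefixlen}", strict=False))
    return nets

nets = ns_prefixes("youtube.com")
print(nets)
if len(nets) == 1:
    print("all nameservers are in one prefix - a single hijack takes out DNS too")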

Simon
-- 
Simon Lockhart | * Sun Server Colocation * ADSL * Domain Registration *
   Director|* Domain  Web Hosting * Internet Consultancy * 
  Bogons Ltd   | * http://www.bogons.net/  *  Email: [EMAIL PROTECTED]  * 


Re: YouTube IP Hijacking

2008-02-24 Thread Simon Lockhart

On Sun Feb 24, 2008 at 01:49:00PM -0800, Tomas L. Byrnes wrote:
 Which means that, by advertising routes more specific than the ones they
 are poisoning, it may well be possible to restore universal connectivity
 to YouTube.

Well, if you can get them in there... Youtube tried that, to restore service
to the rest of the world, and the announcements didn't propagate.

Simon


Re: Sicily to Egypt undersea cable disruption

2008-01-31 Thread Simon Lockhart

On Thu Jan 31, 2008 at 11:35:03AM -0500, Martin Hannigan wrote:
 The distances are consistent with repeaters/op amps. And the chart
 legend notates the same.

I think you need to zoom right in and look for yellow dots, rather than red
dots.

Simon
-- 
Simon Lockhart | * Sun Server Colocation * ADSL * Domain Registration *
   Director|* Domain  Web Hosting * Internet Consultancy * 
  Bogons Ltd   | * http://www.bogons.net/  *  Email: [EMAIL PROTECTED]  * 


Re: Lessons from the AU model

2008-01-22 Thread Simon Lyall

On Tue, 22 Jan 2008, Tom Vest wrote:
 But even assuming you manage to define a reasonable cap, how will
 you defend it against competitors, and how will you determine when 
 how to adjust it (presumably upwards) as the basket of typical user
 content and services gets beefier -- or will that simply tip more and
 more people into some premium user category?

Seriously Tom, it's not *that* hard, and as we've been saying, plenty of
ISPs in many countries manage to do it. The different companies just play
around until their profit, costs and income all balance nicely.

In NZ and Aus, as the costs of bandwidth have decreased (and demand has
increased), the bandwidth quotas have tended to go up.

Let's say that a customer costs around $25 per month in last-mile charges,
staff, billing, marketing and profit for a 6Mb/s DSL service. In the US right
now you spend $5 of that on marginal bandwidth usage, which at $10 per Mb/s
per month gets you around 150GB across the month.

In Australia, where the bandwidth cost is closer to $200 per Mb/s per month,
that $5 only gets you around 8GB. Pricing flat rate will either put you
out of business or cost so much that you won't get 90% of the customers.

So in Australia you'll do a $30 cheap account with a 5GB/month quota, a
$40 account with a 15GB quota and a $60 account with a 40GB/month quota.
This keeps your bandwidth cost about the same and allows you to capture
low-end customers at $30 as well as heavier users at $60: a cheap entry-level
option, and heavier users can pay more to get more.
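
As a back-of-the-envelope sketch of that quota arithmetic (plain Python; the
prices are the illustrative ones above, and a 30-day month is assumed):

# How many GB/month does a marginal bandwidth budget buy at a given transit price?
SECONDS_PER_MONTH = 30 * 24 * 3600   # ~2.59 million

def gb_per_month(budget_dollars, dollars_per_mbps):
    mbps = budget_dollars / dollars_per_mbps       # sustained Mb/s you can afford
    return mbps * SECONDS_PER_MONTH / 8 / 1000     # Mbit -> MB -> GB

print(round(gb_per_month(5, 10)))    # US example: ~162 GB/month, roughly the 150GB above
print(round(gb_per_month(5, 200)))   # AU example: ~8 GB/month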

In the US, 150GB is more than 90-something percent of users do, and 6Mb/s
line speeds (mostly) keep the heavier users from pushing the average above
that. So you keep a simple flat pricing scheme because that is easier to
market and makes you more money, just like helpdesk calls are usually free
even though some users consume hundreds of dollars' worth of them per month.

On the other hand, imagine a few years down the track you are a US
provider with $5 per Mb/s per month transit costs, most of your customers
have 100Mb/s connections, and:

the bottom 30% of your customers average 0.25 TB / month = $4 / customer
the next 40% of your customers average 1 TB / month = $15 / customer
the top 30% of them average 3 TB / month = $45 / customer

So your average bandwidth cost is around $20 per customer.
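
(The same sums the other way around, as a sketch: cost per customer at $5 per
Mb/s per month for a given monthly volume, and the blended average across the
three tiers above.)

# Cost per customer for a given TB/month at $5 per Mb/s per month (30-day month)
SECONDS_PER_MONTH = 30 * 24 * 3600

def cost_per_customer(tb_per_month, dollars_per_mbps=5):
    mbps = tb_per_month * 1e6 * 8 / SECONDS_PER_MONTH   # average Mb/s over the month
    return mbps * dollars_per_mbps

tiers = [(0.30, 0.25), (0.40, 1.0), (0.30, 3.0)]        # (share of customers, TB/month)
for share, tb in tiers:
    print(f"{share:.0%} of customers at {tb} TB/month -> ${cost_per_customer(tb):.2f}/customer")
blended = sum(share * cost_per_customer(tb) for share, tb in tiers)
print(f"blended average: ${blended:.2f}/customer")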

Options are:

1. Increase prices to $45 per month and keep flat rate
2. Introduce tiered accounts
3. Traffic-shape until bandwidth costs drop enough.

Right now option 3 seems to be the most common, which sort of indicates that
bandwidth usage by home customers *is* a problem. In many cases the choke
point is at the last mile, but it still doesn't really change the numbers.

Providers have a budget to spend per customer on bandwidth, when they
start to exceed that then something has to give.

Of course, with a bit of luck the cost of providing bandwidth will keep
falling as fast as or faster than average customer demand. Personally I
doubt that long-term home demand will exceed 30Mb/s or 10TB per month
(around 1 HDTV channel) on average.

-- 
Simon J. Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.



Re: Lessons from the AU model

2008-01-22 Thread Simon Lyall

On Tue, 22 Jan 2008, Mikael Abrahamsson wrote:
 I am also hesitant regarding billing when a person is being DDOS:ed. How
 is that handled in .AU? I can see billing being done on outgoing traffic
 from the customer because they can control that, but what about incoming,
 the customer has only partial control over that.

DOSes against home customers aren't *that* common; certainly those that
last long enough to hit a bandwidth quota don't happen very often.

In the past, when you paid for going over your quota, people did get $5000
bills for their home accounts. The terms and conditions made the customer
responsible for it, full stop. It was their job to monitor their usage.

The throttle-on-cap method tends to fix the problem. The customer does a
huge amount of traffic unexpectedly, so they just get slower Internet.
Repeat until the customer learns not to leave p2p programs running all night
or let junior hang around the wrong channels on IRC.

Usually they will get an email when they reach 80% of their limit or
something, which helps further.

-- 
Simon J. Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.



Re: An Attempt at Economically Rational Pricing: Time Warner Trial

2008-01-20 Thread Simon Leinen

Frank Bulk writes:
 Except if the cable companies want to get rid of the 5% of heavy
 users, they can't raise the prices for that 5% and recover their
 costs.  The MSOs want it win-win: they'll bring prices for metered
 access slightly lower than unlimited access, making it attractive
 for a large segment of the user base (say, 80%), and slowly raise
 the unlimited pricing for the 15 to 20% that want that service, such
 that at the end of the day, the costs are less AND the revenue is
 greater.

While I think this is basically a sound approach, I'm skeptical that
*slightly* lowering prices will be sufficient to convert 80% of the
user base from flat to unmetered pricing.  Don't underestimate the
value that people put on not having to think about their consumption.

So I think it is important to design the metered scheme so that it is
perceived as minimally intrusive, and users feel in control.  For
example, a simple metered rate where every Megabyte has a fixed price
is difficult, because the customer has to think about usage vs. cost
all the time.  95%ile is a little better, because the customer only
has to think about longer-term usage (roughly 36 hours of peak usage per month
are effectively free).  A flat rate with a usage cap and a lowered rate after the
cap is exceeded is easier to swallow than a variable rate, especially
when the lowered rate is still perceived as useful.  And there are
bound to be other creative ways of charging that might be even more
acceptable.  But in any case customers tend to be willing to pay a
premium for a flat rate.
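
(For reference, a minimal sketch of how a 95th-percentile measurement works on
5-minute samples; plain Python with synthetic numbers:)

# 95th-percentile billing on 5-minute samples: the top 5% of samples are ignored,
# which over a 30-day month is roughly 36 hours of peak usage.
import random

samples = [random.uniform(0, 100) for _ in range(30 * 24 * 12)]   # Mb/s, one per 5 min
ranked = sorted(samples)
p95 = ranked[int(len(ranked) * 0.95) - 1]     # everything above this is not billed
free_hours = len(samples) * 0.05 * 5 / 60     # ignored samples, expressed in hours
print(round(p95, 1), "Mb/s billable;", free_hours, "peak hours ignored")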
-- 
Simon.


Re: An Attempt at Economically Rational Pricing: Time Warner Trial

2008-01-20 Thread Simon Leinen

Stupid typo in my last message, sorry.

 While I think this is basically a sound approach, I'm skeptical that
 *slightly* lowering prices will be sufficient to convert 80% of the
 user base from flat to unmetered pricing. [...]
  METERED pricing, of course.
-- 
Simon.


Re: Network Operator Groups Outside the US

2008-01-16 Thread Simon Lockhart

On Wed Jan 16, 2008 at 12:09:48PM -, Rod Beck wrote:
 6. I am not aware of any Dutch per se ISP conferences although that market is
 certainly quite vibrant. I am also disappointed to see the Canadians and
 Irish have next to nothing despite Ireland being the European base of
 operations for Google, Microsoft, Amazon, and Yahoo. And Canada has over 30
 million people. Where is the National Pride?

INEX, the Dublin internet exchange, runs member meetings a few times a year.
(But, like LINX, DE-CIX & AMS-IX member meetings, they're designed for members,
not for the general community.)

NANOG occasionally holds meetings in Canada.

 8. Both DE-CIX and AMS-IX have member meetings each year. Not clear how
 difficult to get invited if you are not a member. 

There's also the EPF (European Peering Forum) co-run by LINX, DE-CIX and AMS-IX
once a year.

Simon


Re: New Years Eve

2007-12-29 Thread Simon Lockhart

On Sat Dec 29, 2007 at 09:55:25AM -0500, Martin Hannigan wrote:
 That would be a slip of the auto-completion function. I can't really
 think of how to operationalize NYE so I'll have to apologize instead.

Does that mean we're not invited after all? Darn, I'll cancel those flights
I booked :-)

Simon


Re: v6 subnet size for DSL leased line customers

2007-12-21 Thread Simon Lyall

On Fri, 21 Dec 2007, Scott Weeks wrote:
 If I wasn't worried about routing table size (you said if you didn't
 have any history...imagine IPv4 didn't exist) I wouldn't give household
 and SOHO networks billions of addresses.

Well, since it looks like it takes about 20-30 years to get a new version
of the IP protocol deployed, we have to look way ahead.

Now I think there is a chance that full nanotech could be deployed in the
next 20 years, so the protocol should probably be designed with that
possibility in mind.

Now, according to an article on Utility Fog [1], one idea is that most of
the household objects around us could be replaced with small nanotech
robots. Each might weigh 20 micrograms, which means 50,000 per gram or 50
million per kilogram. Non-moving CPUs would probably be smaller.

So my house may have a couple of tonnes of them scattered around in
thousands of objects (chairs, screens, door handles, fly screens, sensors)
with between a few hundred and a few billion bots in each.

Yeah, sure, it's science fiction now, but it's fairly possible that it could
be the situation in, say, 2030, and IPv6 is probably good enough to handle
it. If we'd let you design a protocol that didn't support billions of SOHO
addresses, then around 2020 we'd be madly deploying IPv7.

However in the shorter term nobody has billions of IPs and most people
don't have thousands of networks.

My understanding is that the main idea with the /48 is that everybody
smaller than a provider, government or a Fortune 500 company will just get
one and no further paperwork will be required. Dropping it to a /56 means
that a certain percentage of your customers are going to have to
negotiate, fill out paperwork and pay extra for their /48, which is going
to add costs all around.
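
For scale, the difference between a /48 and a /56 end-site assignment (a quick
sketch using Python's ipaddress module; the 2001:db8::/48 documentation prefix
is just an example):

# How many /64 subnets, and how many addresses, in a /48 vs a /56?
import ipaddress

site48 = ipaddress.ip_network("2001:db8::/48")
site56 = ipaddress.ip_network("2001:db8::/56")
print(2 ** (64 - 48), "/64 subnets in a /48")   # 65536
print(2 ** (64 - 56), "/64 subnets in a /56")   # 256
print(site48.num_addresses, "addresses in the /48")
print(site56.num_addresses, "addresses in the /56")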

[1] http://www.kurzweilai.net/meme/frame.html?main=/articles/art0220.html


-- 
Simon J. Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.



Re: v6 subnet size for DSL leased line customers

2007-12-21 Thread Simon Lyall

On Sat, 22 Dec 2007, Randy Bush wrote:
 logic chains which begin with

  Now I think there is a chance that

 may not be the best way to do engineering.  there is a 'chance that'
 just about anything.

Sure, the Sun could explode tomorrow and all these IPv6 people will have
wasted their lives.

However, the scenario is a common one, and the timetable is well within the
time period when IPv6 will be the main network technology (say 2015 - 2035+),
so it should have been taken into account; judging by the fact that
IPv6 *does* support billions of nodes and thousands of networks to every
end site, I guess it probably was.

Making engineering decisions on the basis of "there is no chance that"
is risky too, especially looking 40+ years into the future.

-- 
Simon J. Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.



Re: Fwd: [nanog-admin] Vote on AUP submission to SC

2007-10-31 Thread Simon Lyall
 3. Cross posting is prohibited.

Just wondering on this one. It would appear to mainly hit things like the
CIDR reports [1], conference CFPs, and news about new networks being
allocated to APNIC, etc. Stopping these doesn't seem a priority.

Or does it mean something else?

[1] - These appear to be crossposted, but it's hard to tell since they are
Bcc'd.

-- 
Simon Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.



Re: Abovenet OC48 down

2007-10-25 Thread Simon Lockhart

On Thu Oct 25, 2007 at 02:54:27PM -0700, Jason Matthews wrote:
 I lost nearly all of my bgp routes to Above a few minutes ago. The NOC 
 says they have an OC48 down somewhere; as of this writing the location 
 has not been identified.

Does anyone actually believe that an ISP could know that they've got an
OC48 down, but not which one it was?

Simon
-- 
Simon Lockhart | * Sun Server Colocation * ADSL * Domain Registration *
   Director|* Domain  Web Hosting * Internet Consultancy * 
  Bogons Ltd   | * http://www.bogons.net/  *  Email: [EMAIL PROTECTED]  * 


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Simon Lyall

On Sun, 21 Oct 2007, Sean Donelan wrote:
 Its not just the greedy commercial ISPs, its also universities,
 non-profits, government, co-op, etc networks.  It doesn't seem to matter
 if the network has 100Mbps user connections or 128Kbps user connection,
 they all seem to be having problems with these particular applications.

I'm going to call bullshit here.

The problem is that the customers are using too much traffic for what is
provisioned. If those same customers were doing the same amount of traffic
via NNTP, HTTP or FTP downloads, you would still be seeing the same
problem and whining just as much [1].

In this part of the world we learnt (the hard way) that your income has
to match your costs for bandwidth. A percentage [2] of your customers are
*always* going to move as much traffic as they can on a 24x7 basis.

If you are losing money, or your network is not up to that, then you are
doing something wrong; it is *your fault* for not building your network
and pricing it correctly. Napster was launched 8 years ago, so you can't
claim this is a new thing.

So stop whinging about how BitTorrent broke your happy Internet, stop
putting in traffic-shaping boxes that break TCP and then complaining
that p2p programs don't follow the specs, and adjust your pricing and
service to match your costs.


[1] See "SSL and ISP traffic shaping?" at http://www.usenet.com/ssl.htm

[2] - That percentage is always at least 10%. If you are launching a new
flat-rate, uncapped service at a reasonable price it might be closer to
80%.

-- 
Simon J. Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.



Re: dns authority changes and lame servers

2007-10-19 Thread Simon Waters

On Friday 19 October 2007 01:03, Paul Vixie wrote:
 
 i agree that it's something BIND should do, to be
 comprehensive.  if someone is excited enough about this to consider
 sponsoring the work, please contact me ([EMAIL PROTECTED]) to discuss details.

Sounds like a really bad idea to me.

The original problems sound mostly like management issues. Why are they 
letting customers who don't understand DNS update their NS records, and if 
they do, why is it a problem for them (and not just for the customer who 
fiddled and broke stuff)?

Similarly we'll provide authoritative DNS for a zone as instructed (and paid 
for), even if it isn't delegated, if that is what the customer wants.

For as long as one doesn't mix authoritative and recursive servers, it matters 
not a jot what a server believes it is authoritative for, only what is 
delegated. Hence one can't graph the mistakes as one would have to be 
psychic to find them.

Perhaps they need to provide DNS status reports to clients, so the clients 
know if things are misconfigured? Monitoring/measuring is the first step in 
managing most things. But I think it is far more important to find and fix 
what is broken than to try to let the machines prune it down when something 
is wrong, although I guess breaking things that are misconfigured is a good way 
to get them fixed ;)
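
(If one did want to measure that sort of breakage from the authoritative side, a
rough sketch is to query each delegated server directly and check the AA bit.
Python with dnspython; the server address and zone are placeholders:)

# Ask a nameserver directly for a zone's SOA and check whether it answers
# authoritatively (AA bit set). A rough lameness check; requires dnspython.
import dns.flags
import dns.message
import dns.query

def is_lame(server_ip, zone):
    query = dns.message.make_query(zone, "SOA")
    try:
        response = dns.query.udp(query, server_ip, timeout=3)
    except Exception:
        return True                             # no answer at all counts as lame here
    return not (response.flags & dns.flags.AA)  # lame if the AA bit is missing

print(is_lame("192.0.2.53", "example.com"))     # placeholder server and zone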


Re: Sun Project Blackbox / Portable Data Center

2007-10-14 Thread Simon Lyall

On Sun, 14 Oct 2007, Andy Davidson wrote:
 I understand what Lorell means - the web 2.0 scaling model is to
 throw resources, rather than intelligence at your bottlenecks.

I think this is a little harsh. Just about all the Web 2.0 presentations I
see have a big section on how they had to redesign and rearchitect
each time their customer base increased by a factor of 10 or so. The
newer companies are learning from this and implementing scaling from the
start.

Most of these companies are fewer than a dozen people and sometimes go from
nothing to a Top 1000 site in months or a year or two. The aim these days
is to make sure you can do that.

Take a look at pages 8-11 of this (PPT/Flash presentation):

http://s3.amazonaws.com/slideshare/ssplayer.swf?id=122183&doc=aiderss-aws-the-startup-project708

These people don't care about power, space, aircon and bandwidth problems.
They just buy from other companies (e.g. Amazon) who specialise in those
problems and charge the Web 2.0 companies for solving them.

As for where the Blackboxes will be used, it'll be where companies want
servers in place in weeks or months and existing datacenters are full or
in the wrong place. Think of a building full of people processing
insurance claims in India or a cluster delivering video on demand in each
Asian city with more than 500,000 people.


-- 
Simon Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.



Transatlantic Ethernet suggestions (London - Toronto)

2007-08-22 Thread Simon Lockhart

Hi,

I'm in the process of building a new network, and have a requirement for an
ethernet link between London (Telehouse, Redbus Sovereign, or Interxion), and
Telehouse Canada (151 Front St, Toronto). I'm looking for either a 10M link,
or (if the price is good) fractional 100M (e.g. 30M).

I've already got pricing from Hibernia (EoTDM), and Cogent (EoMPLS), but does
anyone have any recommendations for who else should be able to provide this?

Does anyone have any experiences (good or bad) of Cogent's transatlantic
ethernet services that they could share with me off-list?

Many thanks,

Simon


Re: [policy] When Tech Meets Policy...

2007-08-15 Thread Simon Lyall

On Tue, 14 Aug 2007, Al Iverson wrote:
 On 8/14/07, Douglas Otis [EMAIL PROTECTED] wrote:

  This comment was added as a follow-on note.  Sorry for not being clear.
 
  Accepting messages from a domain lacking MX records might be risky
  due to the high rate of domain turnovers.  Within a few weeks, more
  than the number of existing domains will have been added and deleted
  by then.  Spammers take advantage of this flux.  Unfortunately SMTP
  server discovery via A records is permitted and should be
  deprecated.

 Should be (perhaps) but clearly isn't. When you run it through a
 standards body and/or obtain broad acceptance; great! Until then, it's
 pipe dreaming.

Okay I wasn't reading this thread but the last few posts have gone a
little over the edge.

I don't know where this whole "must have an MX record to send email" thing
came from, but I would have thought domains that don't want to send email
can easily mark this fact with a simple SPF record:

v=spf1 -all

Trying to overload the MX record is pointless when there is a simple
method that domain owners and registrars can choose to use or not.
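
Checking for that record is trivial, too. A sketch in Python with dnspython
(the domain is just an example):

# Does a domain publish a null SPF record ("v=spf1 -all"), i.e. "sends no mail"?
# Requires dnspython.
import dns.resolver

def sends_no_mail(domain):
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False
    for rr in answers:
        txt = b"".join(rr.strings).decode()
        if txt.strip().lower() == "v=spf1 -all":
            return True
    return False

print(sends_no_mail("example.com"))   # example domain only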

-- 
Simon Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.



Re: large organization nameservers sending icmp packets to dns servers.

2007-08-06 Thread Simon Waters

On Monday 06 August 2007 16:53, Drew Weaver wrote:
 Is it a fairly normal practice for large companies such as Yahoo!
 And Mozilla to send icmp/ping packets to DNS servers? If so, why? 

Some of the DNS load balancing schemes do this, I assume to work out how far 
away your server is so they can give geographically relevant answers. If you 
are geographically close to your recursive name server, it might even work.

 And a 
 related question would be from a service provider standpoint is there any
 reason to deny ICMP/PING packets to name servers within your organization?

I tend to favour filtering some types of ICMP packets and not others; the 
packets required for ping hold little fear for me (and are kind of useful), 
but YMMV. 

My ICMP filtering experience is not DNS-specific; you might be able to do 
better with DNS-server-specific rules, but that is too much like 
micromanagement for me. Others may know a lot more about this.


NZNOG 08 - Call for Participation and Papers

2007-06-26 Thread Simon Lyall


  NZNOG 08 - Call for Participation and Papers

The next conference of the New Zealand Network Operators' Group is to be
held in Dunedin, New Zealand between 23 January and 25 January 2008. Our
host is WIC.

NZNOG meetings provide opportunities for the exchange of technical
information and discussion of issues relating to the operation and
support of network services, with particular emphasis on New Zealand.

The conference is low priced and has a strong technical focus, with the
aim and history of getting a strong turnout of technical personnel from
New Zealand Internet-oriented companies.

Conference Overview
---

NZNOG 2008 will consist of a one-day workshop and tutorial day followed by
two days of conference presentations.  There will also be opportunity for
more informal, small lightning talks. These are typically around five
minutes long and are organised closer to the actual conference.

Important Dates
---

Call for Papers opens:25 June 2007
Deadline for speaker submissions:  6 August 2007
Responses to speaker submissions: 24 August 2007
Draft program published:   3 September 2007
Final program published:   1 November 2007
NZNOG 2008 Conference:23 - 25 January 2008

SIG / Miniconf / Tutorials
--

The first day of the conference will again be a workshop and tutorial
day. It is usually run as one or more parallel 'interest group' streams
and practical or interactive workshops of interest to Network Operators.
This is a call for papers and activities for the first day of conference.

Examples of past activities and workshops include 'MPLS and fixed access
networks for beginners', 'Mikrotik RouterOS Training', 'APNIC Internet
Resource Management Essentials', and a System Administrators
mini-conference which included talks on 'Using Debian packages for
system administration' and '42 hosts in 1U: Using virtual machines'.

Conference Presentations


The main conference program for 2008 will be made up of two days with a
single stream where possible. Presentations don't need to fit any particular
fixed length and can be from 30 minutes to 3 hours in length.

NZNOG conferences have traditionally spanned the entire operational spectrum,
and then some. Proposals for conference presentations are invited for
virtually any topic with a degree of relevance to the NZNOG community.

Past years' talks have included the following:

- Internet exchange operations
- Global anycast networks and the building thereof
- Peering, peering, and peering
- Network security
- 10GB ethernet operations
- Advanced networks in NZ
- Current Internet research in NZ
- Wireless networking
- QOS over carrier networks
- Content distribution networks and media streaming
- How we paid the construction guys 18 pints of beer and they
  gave us a free metro fibre network in Palmy North
- Open Source VoIP Platform in Carrier Environments

If you are interested in submitting a talk, please fill out the questions
at the end of this document and email them to [EMAIL PROTECTED].

Submission Guidelines
-

When considering a presentation or SIG, remember that the NZNOG audience
is mainly composed of technical network operators and engineers with a wide
range of experience levels from beginners to multi-year experience. There is
a strong orientation to offer core skills and basic knowledge in the SIGs
and to address issues relevant to the day-to-day operations of ISPs and
network operators in the conference sessions.

The inclusion of a title, bio, topic, abstract, and slides with proposals
is not compulsory but each will help us determine the quality of your
proposal and increase the likelihood it will be accepted.

Final slides are to be provided by 21 January 2008.

Note: While the majority of speaking slots will be filled by the 6 August 2007
deadline, a limited number of slots may be available for presentations that are
exceptionally timely, important, or of critical operational importance.

The NZNOG conference is a TECHNICAL conference so marketing and commercial
content is NOT allowed within the program. The program committee is charged
with maintaining the technical standard of NZNOG conferences, and will
therefore not accept inappropriate materials.  It is expected that the
presenter be a technical person and not a sales or marketing person. The
audience is extremely technical and unforgiving, and expects that the speakers
are themselves very knowledgeable.  All sessions provide time for questions,
so presenters should expect technical questions and be prepared to deliver
insightful and technically deep responses.

Funding and Support
---

NZNOG conferences are community run and funded events that try to keep the
cost to attendees as low as possible so generally we are unable to pay the
travel costs of speakers.  There is a limited amount of funding available to
international speakers.


Re: TransAtlantic Cable Break

2007-06-24 Thread Simon Leinen

Leo Bicknell writes:
 However, if you put 15G down your 20G path, you have no
 redundancy.  In a cut, dropping 5G on the floor, causing 33% packet
 loss is not up, it might as well be down.

Sorry, it doesn't work like that either.  33% packet loss is an upper
limit, but not what you'd see in practice.  The vast majority of
traffic is responsive to congestion and will back off.  It is
difficult to predict the actual drop rate; that depends a lot on your
traffic mix.  A million web mice are much less elastic than a dozen
bulk transfers.

It is true that on average (averaged over all bytes), *throughput*
will go down by 33%.  But this reduction will not be distributed
evenly over all connections.

In an extreme (ly benign) case, 6G of the 20G are 30 NNTP connections
normally running at 200 Mb/s each, with 50 ms RTT.  A drop rate of
just 0.01% will cause those connections to back down to 20 Mb/s each
(0.6 Gb/s total).  This alone is more than enough to handle the
capacity reduction.  All other connections will (absent other QoS
mechanisms) see the same 0.01% loss, but this won't cause serious
issues to most applications.
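
(The 20 Mb/s figure falls out of the usual TCP throughput approximation,
rate ~ MSS / (RTT * sqrt(loss)). A quick sanity check in Python:)

# TCP throughput approximation (Mathis et al.): rate ~ MSS / (RTT * sqrt(p))
from math import sqrt

def tcp_rate_mbps(mss_bytes=1460, rtt_s=0.05, loss=1e-4):
    return mss_bytes * 8 / (rtt_s * sqrt(loss)) / 1e6

print(round(tcp_rate_mbps(), 1))   # ~23 Mb/s at 50 ms RTT and 0.01% loss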

What users WILL notice is when suddenly there's a 200ms standing queue
because of the overload situation.  This is a case for using RED (or
small router buffers).

Another trick would be to preferentially drop low-value traffic, so
that other users wouldn't have to experience loss (or even delay,
depending on configuration) at all.  And conversely, if you have (a
bounded amount of) high-value traffic, you could configure protected
resources for that.

 If your redundancy solution is at Layer 3, you have to have the
 policies in place that you don't run much over 10G across your dual
 10G links or you're back to effectively giving up all redundancy.

The recommendation has a good core, but it's not that black-and-white.

Let's say that whatever exceeds the 10G should be low-value and
extremely congestion-responsive traffic.  NNTP (server/server) and P2P
file sharing traffic are examples for this category.  Both application
types (NetNews and things like BitTorrent) even have application-level
congestion responsiveness beyond what TCP itself provides: When a
given connection has bad throughput, the application will prefer
other, hopefully less congested paths.
-- 
Simon.


Re: Network Level Content Blocking (UK)

2007-06-08 Thread Simon Waters

On Thursday 07 June 2007 23:15, Deepak Jain wrote:
 
  I can't imagine this would fly in the US.

Such systems have already been ruled unconstitutional in the US.

 -- The Home Office Minister has already said he expects it in place,
 thats not far from a precondition of operation.

We are kind of used to the Home Office minister saying all sorts of cranky 
things. Chances are he'll be gone by the end of the month.

My personal dealings with the IWF ("stop emailing me, we don't have any NNTP 
servers anymore") don't fill me with confidence.

If the government mandate this, they'll have to provide a list of images to 
block under a more accountable regime than some random voluntary body, and 
they'll have to take responsibility when people point out the government is 
blocking access to specific sites that contain material that criticises them.

I think complying with a voluntary censorship regime is a bad idea all around.

I'm one of James's employer's customers when I'm surfing at home.

 Simon





Re: dual-stack

2007-05-31 Thread simon

Donald Stahl writes:
 I guess we have different definitions for most significant
 backbones. Unless you mean they have a dual-stack router running
 _somewhere_, say, for instance, at a single IX or a lab LAN or
 something.  Which is not particularly useful if we are talking about
 a significant backbone.
 Rather than go back and forth- can we get some real data?

Yes please, I like data!

 Can anyone comment on the backbone IPv6 status of the major carriers?

Our three Tier-1(?) upstreams AS1299, AS3356, and AS3549 all provide
IPv6.  Only one of them has dual-stack on our access link; for the
other two we have to tunnel into their IPv6 backbone through their
IPv4 backbone.

I don't know exactly how their internal IPv6 networks are built,
although with one of them I'm sure they use/used 6PE, i.e. IPv6
tunneled over an IPv6-agnostic MPLS core (learned this from trouble
tickets, sigh).  But all three offer decent IPv6 connectivity -
e.g. we rarely observe gratuitous routing over an ocean and back, or
order-of-magnitude RTT or loss-rate differences between IPv4 and IPv6.

Our own backbone has been dual-stack for a couple of years now, but I
guess this just shows that we can't be a "major carrier" - same for
many other national academic backbones as well as GEANT, the
backbone that interconnects those.  Same in the US with Internet2 and
the regional research/education networks.
-- 
Simon. (AS559)


Re: IPv6 Training?

2007-05-31 Thread simon

Alex Rubenstein writes:
 Does anyone know of any good IPv6 training resources (classroom, or
 self-guided)?

If your router vendor supports IPv6 (surprisingly, many do!):

lab-router#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
lab-router(config)#ipv6 ?
  access-list         Configure access lists
  cef                 Cisco Express Forwarding for IPv6
  dhcp                Configure IPv6 DHCP
  general-prefix      Configure a general IPv6 prefix
  hop-limit           Configure hop count limit
  host                Configure static hostnames
  icmp                Configure ICMP parameters
  local               Specify local options
  mfib                Multicast Forwarding
  mfib-mode           Multicast Forwarding mode
  mld                 Global mld commands
  multicast-routing   Enable IPv6 multicast
  neighbor            Neighbor
  ospf                OSPF
  pim                 Configure Protocol Independent Multicast
  prefix-list         Build a prefix list
  route               Configure static routes
  router              Enable an IPV6 routing process
  unicast-routing     Enable unicast routing

lab-router(config)#ipv6 unicast-routing
lab-router(config)#interface tengigabitEthernet 1/1
lab-router(config-if)#ipv6 ?
IPv6 interface subcommands:
  address         Configure IPv6 address on interface
  cef             Cisco Express Forwarding for IPv6
  dhcp            IPv6 DHCP interface subcommands
  enable          Enable IPv6 on interface
  mfib            Interface Specific MFIB Control
  mld             MLD interface commands
  mtu             Set IPv6 Maximum Transmission Unit
  nd              IPv6 interface Neighbor Discovery subcommands
  ospf            OSPF interface commands
  pim             PIM interface commands
  policy          Enable IPv6 policy routing
  redirects       Enable sending of ICMP Redirect messages
  rip             Configure RIP routing protocol
  router          IPv6 Router interface commands
  traffic-filter  Access control list for packets
  unnumbered      Preferred interface for source address selection
  verify          Enable per packet validation

lab-router(config-if)#ipv6 enable
[...]

And then chances are good that you'll find useful training material on
their Web sites, often not just command descriptions, but actual
deployment guides.
-- 
Simon.


Re: Interesting new dns failures

2007-05-25 Thread Simon Waters

On Friday 25 May 2007 15:40, you wrote:
 
 It's too late to put the genie back in the bottle. The only way to
 change the policy before the contract term ends is to either move ICANN
 out of US jurisdiction (to brake contract terms) or to organise a
 grass-root uprising to replace ICANNs root with something else.

Since ICANN doesn't contract with all TLD registries, nor do the root server 
operators control the ccTLDs, there is no way to fix this from the top down. 
One can at best displace it from those top-level domains ICANN does have 
contracts for to those that it doesn't.

Packets and digs may slow my networks, but other people's names can't hurt me.


Re: Interesting new dns failures

2007-05-21 Thread Simon Waters

On Monday 21 May 2007 16:19, Tim Franklin wrote:
 
  I wonder how the .de or .uk folks see things? Is the same true elsewhere?

 .co.uk generally seems to be understood by UK folks.  .org.uk tends to
 cause a double-take.  (The 'special' UK SLDs, like nhs.uk, are a maze of
 twisty turny third-levels, all on different logic).

The odd thing is customers mostly fall into one of the following:

Those who don't understand anything beyond .com and .co.uk.

The gov.uk, nhs.uk and other speciality registrants, who often know more about 
the procedures or technicalities of registering their desired domain name than 
we do.

And those who just want every possible TLD, and variant, for a name, in the 
misguided belief this will protect it in some magical way, rather than just 
make a load of money for the registries.

We obviously prefer the last group, as they spend more money, are less hassle, 
and are usually content with registering all the TLD domains we can do for 
the standard price. 

I'm sure there is a business in providing services to the second group, 
especially if you chuck in certificates and a few related things.


Re: Interesting new dns failures

2007-05-21 Thread Simon Waters

On Monday 21 May 2007 14:43, you wrote:

 I'll bet a large pizza that 90% or more could be relocated to a more
 appropriate location in the DNS tree, and nobody except the domain holder
 and less than a dozen other people will notice/care in the slightest.

More like 99% I suspect, but we've no idea which 99%.

The decision to make the name servers part of the hierarchy, without insisting 
they be within the zones they master ("in bailiwick", as some call it) and 
thus glued in, means we have no definite idea which bits of the DNS break on 
any specific deletion.
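
(A rough sketch of that check for a single zone, in Python with dnspython: which
of the zone's listed nameserver names live outside the zone itself? The zone
name is a placeholder.)

# List a zone's out-of-bailiwick nameservers (NS names not inside the zone).
# Requires dnspython.
import dns.name
import dns.resolver

def out_of_bailiwick_ns(zone):
    origin = dns.name.from_text(zone)
    return [str(ns.target) for ns in dns.resolver.resolve(zone, "NS")
            if not ns.target.is_subdomain(origin)]

print(out_of_bailiwick_ns("example.com"))   # these names depend on some other zone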

In general it is impossible, when deleting a zone, to know the full consequences 
of that action unless you are that zone's DNS administrator, and even then you 
need to ask the administrators of any delegated domains. 

So those who think deleting zones is a way to fix things, or penalise people, 
should tread VERY carefully, lest they end up liable for something bigger 
than they expected (or could possibly imagine).

Doing it all again, this is clearly something that folks would work to 
minimize in the design of the DNS, such that deleting .uk could be 
guaranteed to affect only domains ending in .uk. But at the moment, you 
can't know exactly which bits of the DNS would break if you deleted the .uk 
zone from the root servers. 

For example deleting our corporate .com zones from the GTLD servers could 
potentially* disable key bits of another second level UK domain, and no third 
party can tell for sure the full impact of that change in advance. Who knows 
they may be hosting other DNS servers for other zones in their turn (I doubt 
it but I don't know for certain).

Of course even if the DNS were designed so you can recognise which bits might 
break with a given change, you'd then be left not knowing which services are 
linked into a particular domain. But that is beyond the scope of a name 
service design I think.

Sure most of the time if you delete a recently registered domain name, with a 
lot of changes and abuse in its history, you normally just hurt a spammer. I 
dare say collateral damage probably follows some simple mathematical law like 
1/f. Hopefully, before you delete something really important, you will delete 
something merely expensive, and learn to be more careful.

 Simon

PS: Those who make sarcastic comments about people not knowing the difference 
between root servers and authoritative servers may need to be a tad more 
explicit, for the help of the Internet-challenged.

* I'm hoping the name servers in co.uk will help if anything ever does go 
pear-shaped with that domain name, but I wouldn't bet money on it.


Re: Bandwidth Augmentation Triggers

2007-05-01 Thread Simon Leinen

Jason Frisvold writes:
 I'm working on a system to alert when a bandwidth augmentation is
 needed.  I've looked at using both true averages and 95th percentile
 calculations.  I'm wondering what everyone else uses for this
 purpose?

We use a "secret formula", aka rules of thumb, based on perceived
quality expectations/customer access capacities, and cost/revenue
considerations.

In the bad old days of bandwidth crunch (ca. 1996), we scheduled
upgrades of our transatlantic links so that relief would come when
peak-hour average packet loss exceeded 5% (later 3%).  At that time
the general performance expectation was that "Internet performance is
mostly crap anyway; if you need to transfer large files, 03:00 AM is
your friend", and upgrades were incredibly expensive.  With that
rule, link utilization was 100% for most of the (working) day.

Today, we start thinking about upgrading from GbE to 10GE when link
load regularly exceeds 200-300 Mb/s (even when the average load over
a week is much lower).  Since we run over dark fibre and use mid-range
routers with inexpensive ports, upgrades are relatively cheap.  And -
fortunately - performance expectations have evolved, with some users
expecting to be able to run file transfers near Gb/s speeds, 500 Mb/s
videoconferences with no packet loss, etc.

An important question is what kind of users your links aggregate.  A
core link shared by millions of low-bandwidth users may run at 95%
utilization without being perceived as a bottleneck.  On the other
hand, you may have a campus access link shared by users with fast
connections (I hear GbE is common these days) on both sides.  In that
case, the link may be perceived as a bottleneck even when utilization
graphs suggest there's a lot of headroom.

In general, I think utilization rates are less useful as a basis for
upgrade planning than (queueing) loss and delay measurements.  Loss
can often be measured directly at routers (drop counters in SNMP), but
queueing delay is hard to measure in this way.  You could use tools
such as SmokePing (host-based) or Cisco IP SLA or Juniper RPM
(router-based) to do this.
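
(A minimal sketch of turning two counter samples into a drop rate; plain Python
with made-up numbers, the kind of figures you would poll from IF-MIB discard
and packet counters:)

# Drop rate between two polls of an interface's discard and packet counters.
def drop_rate(discards_t0, discards_t1, pkts_t0, pkts_t1):
    dropped = discards_t1 - discards_t0
    sent = pkts_t1 - pkts_t0
    return dropped / (dropped + sent) if (dropped + sent) else 0.0

# e.g. 1,200 new discards against 40 million packets over the polling interval
print(f"{drop_rate(0, 1200, 0, 40_000_000):.5%}")   # ~0.003%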

(And if you manage to link your BSS and OSS, then you can measure the
rate at which customers run away for an even more relevant metric :-)

 We're talking about anything from a T1 to an OC-12 here.  My guess
 is that the calculation needs to be slightly different based on the
 transport, but I'm not 100% sure.

Probably not on the type of transport - PDH/SDH/Ethernet behave
essentially the same.  But the rules will be different for different
bandwidth ranges.  Again, it is important to look not just at link
capacities in isolation, but also at the relation to the capacities of
the access links that they aggregate.
-- 
Simon.


Re: Hotmail blackholing certain IP ranges ?

2007-04-26 Thread Simon Waters

On Thursday 26 April 2007 00:43, you wrote:

 A chap I know (for some reason) set his source port
 for queries to be port 53 and his DNS queries started to fail.

It was the default source port for DNS queries in some versions of BIND, and 
may well still be (I don't run those versions of BIND). The main reason for 
the change was that you need root privilege to bind to ports below 1024 in the 
traditional Unix model, and people wanted to run DNS as a non-root user.

The more general bitbucketing of hotmail email is well known (try Google or 
Yahoo! search engines to find out more).

In general people should be advising against using Hotmail until Hotmail fix 
the bitbucketing issue, as encouraging it will undermine the reliability of 
email.

Presumably eventually (like AOL did) Hotmail will bitbucket some email 
important enough to make them realise the error of their ways; meanwhile 
Hotmail users get a service which is worth about what most of them pay for 
it.


Re: www.cnn.com

2007-04-26 Thread Simon Waters

On Thursday 26 April 2007 11:32, Stefan Schmidt wrote:
 
 I think your debugging tool is faulty, as a dig ns cnn.com
 @a.gtld-servers.net gives:

cnn.com is not www.cnn.com ;)

dig @twdns-03.ns.aol.com www.cnn.com ns

Although "doc" is very long in the tooth, at least the last version I was 
using in anger was.

As to what CNN are doing with their DNS, I've no idea, but I don't think it 
concerns NANOG, unless these nameservers host a lot of important domains ;)


Re: from the academic side of the house

2007-04-26 Thread Simon Leinen

Tony Li writes:
 On Apr 25, 2007, at 2:55 PM, Simon Leinen wrote:
 Routing table lookups(*) are what's most relevant here, [...]

 Actually, what's most relevant here is the ability to get end-hosts
 to run at rate.  Packet forwarding at line rate has been
 demonstrated for quite awhile now.

That's true (although Steve's question was about the routers).

The host bottleneck for raw 10Gb/s transfers used to be bus bandwidth.
The 10GE adapters in most older land-speed record entries used the
slower PCI-X, while this entry was done with PCI Express (x8) adapters.

Another host issue would be interrupts and CPU load for checksums, but
most modern 10GE (and also GigE!) adapters offload segmentation and
reassembly, as well as checksum computation and validation, to the
adapter if the OS/driver supports it.

The adapters used in this record (Chelsio S310E) contain a full TOE
(TCP Offload Engine) that can run the entire TCP state machine on the
adapter, although I'm not sure whether they made use of that.
Details on

http://data-reservoir.adm.s.u-tokyo.ac.jp/lsr-200612-02/
-- 
Simon.


Re: from the academic side of the house

2007-04-25 Thread Simon Leinen

Steven M Bellovin writes:
 Jim Shankland [EMAIL PROTECTED] wrote:

 (2) Getting this kind of throughput seems to depend on a fast
 physical layer, plus some link-layer help (jumbo packets), plus
 careful TCP tuning to deal with the large bandwidth-delay product.
 The IP layer sits between the second and third of those three items.
 Is there something about IPv6 vs. IPv4 that specifically improves
 perfomance on this kind of test?  If so, what is it?

 I wonder if the routers forward v6 as fast.

In the 10 Gb/s space (sufficient for these records, and I'm not
familiar with 40 Gb/s routers), many if not most of the current gear
handles IPv6 routing lookups in hardware, just like IPv4 (and MPLS).

For example, the mid-range platform that we use in our backbone
forwards 30 Mpps per forwarding engine, whether based on IPv4
addresses, IPv6 addresses, or MPLS labels.  30 Mpps at 1500-byte
packets corresponds to 360 Gb/s.  So, no sweat.
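
(That back-of-the-envelope is just packets-per-second times packet size; a
one-liner sketch:)

# Forwarding rate to bandwidth: 30 Mpps of 1500-byte packets is 360 Gb/s.
def gbps(pps, packet_bytes):
    return pps * packet_bytes * 8 / 1e9

print(gbps(30e6, 1500))   # 360.0
print(gbps(30e6, 64))     # ~15.4 - even minimum-size frames exceed a 10GE port at 30 Mpps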

Routing table lookups(*) are what's most relevant here, because the other
work in forwarding is identical between IPv4 and IPv6.  Again, many
platforms are able to do line-rate forwarding between 10 Gb/s ports.
-- 
Simon, AS559.
(*) ACLs (access control lists) are also important, but again, newer
hardware can do fairly complex IPv6 ACLs at line rate.


Re: UK ISP threatens security researcher

2007-04-19 Thread Simon Lyall

On Thu, 19 Apr 2007, Gadi Evron wrote:
 Looking at the lack of security response and seriousness from this ISP, I
 personally, in hindsight (although it was impossible to see back
 then) would not waste time with reporting issues to them, now.

These days there is almost never any reason to report a security issue
unless you are a professional security researcher who is looking for
publicity/work. [1]

If you are a random person who comes across a security hole in a website
or commercial product then the best thing to do is tell nobody, refrain
from any further investigation and if possible remove all evidence you
ever did anything.

There is almost zero potential upside to reporting these holes versus the very
real potential downside that the company might decide to go after you with
their legal team or the police.

Anonymous notifications to third parties like security forums or
journalists might be an option if you really feel it is important. However,
in the scheme of things, giving $50 to your favorite charity is likely to
be safer and do the world more good.

[1] - An exception might be for open source projects or as part of your
 normal job with your company's products. Even then you should only follow
 normal channels and always be careful.

-- 
Simon J. Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.



Re: Thoughts on increasing MTUs on the internet

2007-04-13 Thread Simon Leinen
 of this assumption.

  Seriously, I think it's illusionary to try to change this for
  general networks, in particular large LANs.  It might work for
  exchange points or other controlled cases where the set of protocols
  is fairly well defined, but then exchange points have other options
  such as separate jumbo VLANs.

  For campus/datacenter networks, I agree that the consistent-MTU
  requirement is a big problem for deploying larger MTUs.  This is
  true within my organization - most servers that could use larger
  MTUs (NNTP servers for example) live on the same subnet with servers
  that will never bother to be upgraded.  The obvious solution is to
  build smaller subnets - for our test servers I usually configure a
  separate point-to-point subnet for each of its Ethernet interfaces
  (I don't trust this bridging-magic anyway :-).

* Most edges will not upgrade anyway.

  On the slow edges of the network (residual modem users, exotic
  places, cellular data users etc.), people will NOT upgrade their MTU
  to 9000 byte, because a single such packet would totally kill the
  VoIP experience.  For medium-fast networks, large MTUs don't cause
  problems, but they don't help either.  So only a few super-fast
  edges have an incentive to do this at all.

  For the core networks that support large MTUs (like we do), this is
  frustrating because all our routers now probably carve their
  internal buffers for 9000-byte packets that never arrive.
  Maybe we're wasting lots of expensive linecard memory this way?

* Chicken/egg

  As long as only a small minority of hosts supports larger-than-1500-byte
  MTUs, there is no incentive for anyone important to start supporting them.
  A public server supporting 9000-byte MTUs will be frustrated when it
  tries to use them.  The overhead (from attempted large packets that
  don't make it) and potential trouble will just not be worth it.
  This is a little similar to IPv6.
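
The quick sums mentioned above: serialization delay of a single packet at a
few access speeds (a rough sketch with example link speeds):

# Serialization delay of one packet: a 9000-byte frame on a 56 kb/s modem
# ties up the line for well over a second, which is what kills VoIP.
def serialization_ms(packet_bytes, link_bps):
    return packet_bytes * 8 / link_bps * 1000

for name, bps in [("56k modem", 56e3), ("1 Mb/s DSL", 1e6), ("100 Mb/s", 100e6)]:
    print(name, round(serialization_ms(1500, bps), 1), "ms vs",
          round(serialization_ms(9000, bps), 1), "ms")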

So I don't see large MTUs coming to the Internet at large soon.  They
probably make sense in special cases, maybe for land-speed records
and dumb high-speed video equipment, or for server-to-server stuff
such as USENET news.

(And if anybody out there manages to access [2] or http://ndt.switch.ch/
with 9000-byte MTUs, I'd like to hear about it :-)
-- 
Simon.

[1] Here are a few tracepaths (more or less traceroute with integrated
PMTU discovery) from a host on our network in Switzerland.
9000-byte packets make it across our national backbone (SWITCH),
the European academic backbone (GEANT2), Abilene and CENIC in the
US, as well as through AARnet in Australia (even over IPv6).  But
the link from the last wide-area backbone to the receiving site
inevitably has a 1500-byte MTU (pmtu 1500).

: [EMAIL PROTECTED]; tracepath www.caida.org
 1:  mamp1-eth2.switch.ch (130.59.35.78)0.110ms pmtu 9000
 1:  swiMA1-G2-6.switch.ch (130.59.35.77)   1.029ms 
 2:  swiMA2-G2-5.switch.ch (130.59.36.194)  1.141ms 
 3:  swiEL2-10GE-1-4.switch.ch (130.59.37.77)   4.127ms 
 4:  swiCE3-10GE-1-3.switch.ch (130.59.37.65)   4.726ms 
 5:  swiCE2-10GE-1-4.switch.ch (130.59.36.209)  4.901ms 
 6:  switch.rt1.gen.ch.geant2.net (62.40.124.21)  asymm  7   4.429ms 
 7:  so-7-2-0.rt1.fra.de.geant2.net (62.40.112.22)asymm  8  12.551ms 
 8:  abilene-wash-gw.rt1.fra.de.geant2.net (62.40.125.18) asymm  9 105.099ms 
 9:  64.57.28.12 (64.57.28.12)asymm 10 121.619ms 
10:  kscyng-iplsng.abilene.ucaid.edu (198.32.8.81)asymm 11 153.796ms 
11:  dnvrng-kscyng.abilene.ucaid.edu (198.32.8.13)asymm 12 158.520ms 
12:  snvang-dnvrng.abilene.ucaid.edu (198.32.8.1) asymm 13 180.784ms 
13:  losang-snvang.abilene.ucaid.edu (198.32.8.94)asymm 14 177.487ms 
14:  hpr-lax-gsr1--abilene-LA-10ge.cenic.net (137.164.25.2) asymm 20 179.106ms 
15:  riv-hpr--lax-hpr-10ge.cenic.net (137.164.25.5)   asymm 21 185.183ms 
16:  hpr-sdsc-sdsc2--riv-hpr-ge.cenic.net (137.164.27.54) asymm 18 186.368ms 
17:  hpr-sdsc-sdsc2--riv-hpr-ge.cenic.net (137.164.27.54) asymm 18 185.861ms 
pmtu 1500
18:  cider.caida.org (192.172.226.123)asymm 19 186.264ms 
reached
 Resume: pmtu 1500 hops 18 back 19 
: [EMAIL PROTECTED]; tracepath www.aarnet.edu.au
 1:  mamp1-eth2.switch.ch (130.59.35.78)0.095ms pmtu 9000
 1:  swiMA1-G2-6.switch.ch (130.59.35.77)   1.024ms 
 2:  swiMA2-G2-5.switch.ch (130.59.36.194)  1.115ms 
 3:  swiEL2-10GE-1-4.switch.ch (130.59.37.77)   3.989ms 
 4:  swiCE3-10GE-1-3.switch.ch (130.59.37.65)   4.731ms 
 5:  swiCE2-10GE-1-4.switch.ch (130.59.36.209)  4.771ms 
 6:  switch.rt1.gen.ch.geant2.net (62.40.124.21)  asymm  7   4.424ms 
 7:  so-7-2-0.rt1.fra.de.geant2.net (62.40.112.22)asymm  8  12.536ms 
 8:  ge-3-3-0.bb1.a.fra.aarnet.net.au (202.158.204.249)   asymm  9  13.207ms

Re: airfrance.com

2007-04-03 Thread Simon Waters

On Tuesday 03 April 2007 15:59, Geo. wrote:

  initially I thought it was a dns problem

Irrelevant lame DNS server issue reported to SOA email address.


Re: ICANNs role [was: Re: On-going ...]

2007-04-03 Thread Simon Waters

On Tuesday 03 April 2007 18:35, Donald Stahl wrote:
 
 The problem here is that the community gets screwed not the guy paying
 $8.95. If he was getting what he paid for- well who cares. The problem is
 everyone else.

At the risk of prolonging a thread that should die

Gadi forwarded a post suggesting DNSSEC is unneeded because we have security 
implemented elsewhere (i.e. SSL).

Thus how does it affect me adversely if someone else registers a domain, if I 
don't rely on the DNS for security?

Much of the phishing I see is hosted on servers that have been compromised, I 
guess that is cheaper than the $8.95 for a domain.

If there is evidence that domain tasting is being used for abusive practices, 
I'm sure the pressure to deal with it will increase. Much as I think the 
practice is a bad thing, I don't see it as a major security issue.

The reason domain registration works quickly is that it was a real pain when 
it didn't (come on, it wasn't that long ago). People registering domains 
want them up and running quickly, as humans aren't good at the "I'll check it 
all in 8 hours/2 days/whatever" routine. I'm sure prompt 
registration/activation/changes of domains is in general a good thing, 
resulting in better DNS configurations.

Sure it is possible domains will be registered for abusive activity, and 
discarded quickly, with a difficult path in tracing such. But if there is 
some sort of delay or grace period it won't make a difference. When domains 
took days to register spammers waited days. I don't suppose phishers are any 
less patient.

Validation of names, addresses, and such like is impractical, and I believe 
inappropriate. There is a method for such validations (purchase of SSL 
certificates), and even there the software, methods, and tools are pitiful. 
Why should the domain registrars be expected to do the job (or do it 
better?), when it could be equally argued that ISPs are in a better position 
to police the net.

The credit card companies are good at passing chargeback fees to the vendor, 
so be assured if people are using fraudulent credit card transactions, the 
domain sellers will have motivation to stop selling them domains.

The essential problem with Internet security is that there is little comeback 
on abusers. There has been obvious and extensive advance fee fraud run from 
a small set of IP addresses in Europe, using the same national telecom 
provider as a mail relay, and it took 4 years to get any meaningful action (I 
assume the recent drying up of such things was a result of action; the 
fraudsters may just have retired with their ill-gotten gains for all I know!).

There are specific technical and market issues, but without any real-world 
policing the abusers will keep trying until they either succeed or go bust. 
If they succeed they may well go on to become part of more organized abuse.

The other problem is that there is no financial incentive for ISPs to do 
the right thing. Whereas domain registrars can cancel a domain and get 
another sale from the same abuser - so they have a financial incentive to 
clean up. If ISPs close an account, the person will likely just switch ISP.

A classic example I commented on recently was Accelerate Biz, unrepentant 
spammers (at least that is how their IP address range looks from here; either 
that or they are so thoroughly incompetent they might as well be). Their 
inbound email service is filtered by Mail Foundry, but despite being an 
antispam provider, Mail Foundry have no financial incentive to stop providing 
services to these spammers. Until companies (ISPs included) are fined for 
providing such services, so that it isn't profitable, we'll be spammed.

Port 25 SYN rate limiting isn't that much harder than ICMP ;)
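
(A minimal sketch of the kind of thing I mean, using the Linux netfilter 
"limit" match -- the numbers are purely illustrative, and this is an 
aggregate limit rather than per-customer, which would want the hashlimit 
match instead:)

  # Let through at most ~10 new outbound SMTP connections a minute (burst 20)
  # and drop the excess SYNs; established SMTP sessions are unaffected.
  iptables -A FORWARD -p tcp --dport 25 --syn \
      -m limit --limit 10/minute --limit-burst 20 -j ACCEPT
  iptables -A FORWARD -p tcp --dport 25 --syn -j DROP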

 Simon, speaking in a personal capacity, views expressed are not necessarily 
those of my employers.


Re: On-going Internet Emergency and Domain Names

2007-04-01 Thread Simon Lyall

On Sun, 1 Apr 2007, Douglas Otis wrote:
 When functional information is not valid, such as incorrect name servers
 or IP addresses, this would not impose an immediate threat.  However,
 basic functional information will trace to the controlling entity.  Only
 by being able to preview this information, would comprehensive
 preemptive efforts be able to prove fully effective.

So assuming you get rid of tasting and reduce the flow of new names to
say 50,000 per day [1] exactly how are you going to preview these in any
meaningful sort of way?

Are you going to do the same for every ccTLD as well? What about domains
with constantly changing subdomains? Everything hosted in different
countries with different languages, policies and privacy laws? Believe it
or not, some countries don't even have states or 5 digit zip codes.

Please detail exactly what you will do if I register trademe.ir using 
a Pakistani registrar, a .ly contact email, a physical address in Nigeria, 
the name Tarek Rasshid [2], $10/year name servers in Cuba, and pay for it 
using a Visa gift credit card bought in Malaysia.

[1] 20 million new domains each year, just 20% growth on what we have now.

[2] http://www.angelfire.com/tx/afira/arabic1.html

-- 
Simon J. Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.



Re: What is the correct way to get Whitelisted?

2007-03-30 Thread Simon Waters

On Friday 30 March 2007 15:33, Wil Schultz wrote:

 Sorry of this is off topic:

Try SPAM-L; there is a lot of overlap between that and this group, but it 
exists for these issues and NANOG doesn't (unless you are sending so much 
email it adversely affects network stability).

 On another side note, if anyone has information on how to get
 whitelisted (or DeBlacklisted :-) ) from Hotmail, MSN, Earthlink,
 AOL, Yahoo!, etc feel free to email offlist...

Hotmail, and AOL, provide various feedback systems, the SPAM-L archive 
discusses relative merits. The more clueful of the providers return all you 
need to know in the reject message.

Ultimately if you are sending bulk email, and a significant number of the 
recipients claim it is unsolicited, the big email providers are going to 
block you, whether the recipients are right or wrong about the solicited 
nature of the list.

Hotmail silently bitbucket email from us regularly (we have a lot of rarely 
used forwards, so the little bits of spam that leak through count badly 
against our email server). We've given up on Hotmail, but I think it is 
possible to ask for whitelisting.


Yahoo! clue

2007-03-29 Thread Simon Waters

Is there a Yahoo! abuse contact around who will talk, and not just send me 
canned responses?

Their abuse team seems very responsive, but I fear they don't actually read 
the whole email, but just hit the button for the most appropriate canned 
response as soon as they think they know what is being said. (Let he who is 
without sin here, cast the first stone).

 Thanks,

 Simon


Re: TCP and WAN issue

2007-03-28 Thread Simon Leinen

Andre Oppermann gave the best advice so far IMHO.
I'll add a few points.

 To quickly sum up the facts and to dispell some misinformation:

  - TCP is limited the delay bandwidth product and the socket buffer
sizes.

Hm... what about: "The TCP socket buffer size limits the achievable
throughput-RTT product"? :-)

  - for a T3 with 70ms your socket buffer on both endss should be
450-512KB.

Right.  (Victor Reijs' goodput calculator says 378kB.)
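
(A quick back-of-the-envelope check in Python, assuming the nominal 45 Mb/s
DS3 rate -- the usable payload rate is a little lower, which is presumably
why the calculator arrives at 378kB:)

  # Bandwidth-delay product: the bytes "in flight" the window has to cover.
  link_rate = 45e6                # DS3, nominal bits per second
  rtt = 0.070                     # 70 ms round-trip time
  bdp = link_rate * rtt / 8       # bytes
  print(bdp / 1024)               # ~384 KiB, so 450-512KB buffers are comfortable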

  - TCP is also limited by the round trip time (RTT).

This was stated before, wasn't it?

  - if your application is working in a request/reply model no amount
of bandwidth will make a difference.  The performance is then
entirely dominated by the RTT.  The only solution would be to run
multiple sessions in parallel to fill the available bandwidth.

Very good point.  Also, some applications have internal window
limitations.  Notably SSH, which has become quite popular as a bulk
data transfer method.  See http://kb.pert.geant2.net/PERTKB/SecureShell

  - Jumbo Frames have definately zero impact on your case as they
don't change any of the limiting parameters and don't make TCP go
faster.

Right.  Jumbo frames have these potential benefits for bulk transfer:

(1) They reduce the forwarding/interrupt overhead in routers and hosts
by reducing the number of packets.  But in your situation it is quite
unlikely that the packet rate is a bottleneck.  Modern routers
typically forward even small packets at line rate, and modern
hosts/OSes/Ethernet adapters have mechanisms such as interrupt
coalescence and large send offload that make the packet size
largely irrelevant.  But even without these mechanisms and with
1500-byte packets, 45 Mb/s shouldn't be a problem for hosts built in
the last ten years, provided they aren't (very) busy with other
processing.

(2) As Perry Lorier pointed out, jumbo frames accelerate the additive
increase phases of TCP, so you reach full speed faster both at
startup and when recovering from congestion.  This may be noticeable
when there is competition on the path, or when you have many smaller
transfers such that ramp-up time is an issue.

(3) Large frames reduce header overhead somewhat.  But the improvement
going from 1500-byte to 9000-bytes packets is only 2-3%, from ~97%
efficiency to ~99.5%.  No orders of magnitude here.
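
(Roughly, assuming 40 bytes of IP+TCP header per packet and ignoring
Ethernet framing and TCP options:)

  # Payload efficiency for 1500-byte vs 9000-byte MTU.
  for mtu in (1500, 9000):
      print(mtu, (mtu - 40.0) / mtu)   # ~0.973 and ~0.996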

There are certain very high-speed and LAN (5ms) case where it
may make a difference but not here.

Cases where jumbo frames might make a difference: When the network
path or the hosts are pps-limited (in the Gb/s range with modern
hosts); when you compete with other traffic.  I don't see a relation
with RTTs - why do you think this is more important on 5ms LANs?

  - Your problem is not machine or network speed, only tuning.

Probably yes, but it's not clear what is actually happening.  As often
happens, the problem is described with very little detail, so
experts (and "experts" :-) have a lot of room to speculate.

This was the original problem description from Philip Lavine:

I have an east coast and west coast data center connected with a
DS3. I am running into issues with streaming data via TCP

In the meantime, Philip gave more information, about the throughput he
is seeing (no mention how this is measured, whether it is total load
on the DS3, throughput for an application/transaction or whatever):

This is the exact issue. I can only get between 5-7 Mbps.

And about the protocols he is using:

I have 2 data transmission scenarios:

1. Microsoft MSMQ data using TCP
2. Streaming market data stock quotes transmitted via a TCP
   sockets

It seems quite likely that these applications have their own
performance limits in high-RTT situations.

Philip, you could try a memory-to-memory-test first, to check whether
TCP is really the limiting factor.  You could use the TCP tests of
iperf, ttcp or netperf, or simply FTP a large-but-not-too-large file
to /dev/null multiple times (so that it is cached and you don't
measure the speed of your disks).
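
(For example with iperf, using window sizes along the lines of Andre's
numbers -- the host name is obviously a placeholder for one of your own:)

  # On the receiving (say, east coast) host:
  iperf -s -w 512K

  # On the sending (west coast) host, a 30-second TCP test:
  iperf -c eastcoast.example.com -w 512K -t 30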

If you find that this, too, gives you only 5-7 Mb/s, then you should
look at tuning TCP according to Andre's excellent suggestions quoted
below, and check for duplex mismatches and other sources of
transmission errors.

If you find that the TCP memory-to-memory-test gives you close to DS3
throughput (modulo overhead), then maybe your applications limit
throughput over long-RTT paths, and you have to look for tuning
opportunities on that level.

 Change these settings on both ends and reboot once to get better throughput:

 [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
 SackOpts=dword:0x1 (enable SACK)
 TcpWindowSize=dword:0x7D000 (512000 Bytes)
 Tcp1323Opts=dword:0x3 (enable window scaling and timestamps)
 GlobalMaxTcpWindowSize=dword:0x7D000 (512000 Bytes)

 http://www.microsoft.com/technet/network/deploy/depovg/tcpip2k.mspx
-- 
Simon.


Re: NOC Personel Question (Possibly OT)

2007-03-14 Thread Simon Lyall

On Wed, 14 Mar 2007, Justin M. Streiner wrote:
 Not sure why your HR dept would even care :)

So they can look them up on a pay scale list and decide what they should be
paid. Had this problem at one place I was at where the pay scale list
thought a System Administrator or Network Administrator was somebody
who looked after 20 Windows desktops in an office and not network/machines
for thousands of customers (as was the case) and thus paid about twice
as much.

I think the managers just argued with HR or reclassified everybody as a
Network Architect to solve the problem. Calling people Engineers was a
problem since half of them didn't have degrees.


-- 
Simon J. Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.



Curiosity: blogspot.com

2007-03-08 Thread Simon Waters

Anyone have a tool that quickly measures the reachability of websites on 
subdomains of blogspot.com?

Search google for site:blogspot.com $subjectofinterest

i.e. chess

http://susanpolgar.blogspot.com

Works fine from most places, but the connection is immediately closed from 
work.

Resolves to: 72.14.207.191

Not sure I can justify more time on fixing access to other people's blogs, and 
Google are aware of the issue. But curious to know if it is just parts of the 
ISP we use at work (NTL/Telewest), or if it is more widespread.


Re: Curiosity: blogspot.com

2007-03-08 Thread Simon Waters

On Thursday 08 March 2007 09:51, you wrote:

 Works fine from most places, but the connection is immediately closed from
 work.

Hmm, seems that blogspot.com is now 6 hops closer to us, and working fine.

The 6 missing hops were all internal to Telewest.

Apologies for the noise.


R Scott Perry (HopOne/Superb.net collateral damage)

2007-02-19 Thread Simon Waters

Anyone have an email address for him? If so, could they drop it to me off-list.

He seems to have stuff hosted with that den of spammers at HopOne 
Internet.

On the upside it seems there is at least one genuine service amongst the 
address space we blocked at HopOne.  But that is 1 address out of 32 class Cs 
we blocked.

They seem to have a virulent PayPal Phish sender at 66.36.228.37 as well this 
week.

 Ho Hum

 Simon


Re: botnets: web servers, end-systems and Vint Cerf [LONG, sorry]

2007-02-19 Thread Simon Waters

On Monday 19 February 2007 13:27, you wrote:
 
 people consider this to be a Windows malware problem. I consider it to
 be an email architecture problem. We all know that you need hierarchy to
 scale networks and I submit that any email architecture without
 hierarchy is broken by design and no amount of ill-thought-out bandaids
 will fix it.

I look forward to your paper on the end to end concept, and why it doesn't 
apply to email ;)

I'm not convinced there is an email architecture problem of relevance to the 
discussion. People mistake a security problem for its most visible symptoms. 

The SMTP-based email system has many faults, but it seems only mildly stressed 
under the onslaught of millions of hosts attempting to subvert it. Most of 
the attempts to fix the architecture problem so far have moved the problem 
from blacklisting IP addresses to blacklisting domains, senders, or other 
entities which occupy a larger potential space than the IPv4 addresses one 
can use to deal effectively with most of the symptom. In comparison, people 
controlling malware botnets have demonstrated their ability to completely 
DDoS significant chunks of network, suggesting perhaps that other protocols 
are potentially more vulnerable than SMTP, or are more appropriate layers at 
which to address the problem.

We may need a trust system to deal with identity within the existing email 
architecture, but I see no reason why that need be hierarchical; indeed, 
attempts to build such hierarchical systems have often failed to gather a 
critical mass, while peer-to-peer trust systems have worked fine for decades 
for highly sensitive types of data.

I simply don't believe the higher figures bandied about in the discussion for 
compromised hosts. Certainly Microsoft's malware team report a high level of 
trojans around, but they include things like the Jar files downloaded onto 
many PCs that attempt to exploit a vulnerability most people patched several 
years ago. Simply identifying that your computer downloaded malware (as 
designed) but didn't run it (because it was malformed) isn't an infection, 
nor of especial interest (other than indicating something about the frequency 
with which webservers attempt to deliver malware).


Re: botnets: web servers, end-systems and Vint Cerf

2007-02-16 Thread Simon Lyall

On Fri, 16 Feb 2007, J. Oquendo wrote:
 After all these years, I'm still surprised a consortium of ISP's haven't
 figured out a way to do something a-la Packet Fence for their clients
 where - whenever an infected machine is detected after logging in, that
 machine is thrown into say a VLAN with instructions on how to clean
 their machines before they're allowed to go further and stay online.

All very nice. This sort of thing has been detailed a few dozen times by
various people. Doing it is not hard from a technical point of view
(which isn't to say it won't cost a lot of money to implement).

The hard bit is creating a business case to show how spending the money to
implement it and then wearing the cost of pissed-off customers results in
a net gain to the bottom line.

If someone could actually do a survey to show how much each bot infested
customer is costing their ISP then people might be able to do something.
Right now AFAIK an extra 10,000 botted customers costs the average ISP no
more than a dozen heavy p2p users.

On the other hand Port 25 filtering probably is something that has low
enough negatives vs the positives for people to actually do.

-- 
Simon J. Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.



Re: what the heck do i do now?

2007-02-04 Thread Simon Lyall

On Thu, 1 Feb 2007, Jay Hennigan wrote:
 Set up a nameserver there.  Configure it to return 127.0.0.2 (or
 whatever the old MAPS reply for spam was) to all queries.  Let it run
 for a week.  See if anything changes in terms of it getting hammered.

Well, I've seen some RBLs do this with about 2 days' notice. Perhaps a
special value could be defined (127.255.255.255?) to tell users that
the DNSBL is no longer in operation and shouldn't be used; standard
software could then raise an error or whatever.
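
(Something like the following zone, say -- the sentinel value is purely
hypothetical, nothing standard defines it, but any wildcard answer that
unmistakably isn't a normal listing would do:)

  ; dead-dnsbl.example.com: answer every query with the shutdown marker.
  $TTL 86400
  @   IN  SOA ns.example.com. hostmaster.example.com. (
              2007020401 3600 900 604800 86400 )
      IN  NS  ns.example.com.
  *   IN  A   127.255.255.255
  *   IN  TXT "This DNSBL has been shut down - please remove it from your configuration"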


-- 
Simon J. Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.



Re: Comment spammers chewing blogger bandwidth like crazy

2007-01-16 Thread Simon Waters

On Tuesday 16 January 2007 03:06, Jason Frisvold wrote:

 The argument there is that those users don't deserve to comment if
 they can't keep their computers clean, but let's get real..  Some of
 this stuff is getting pretty advanced and it's getting tougher for
 general users to keep their computers clean.

I'd have said it was getting easier to keep computers clean. Back in the late 
1980s I used to have my own DOS boot disk, with boot-sector antivirus tools, 
so that I could be sure any PC I used at my university was clean. That doesn't 
mean there aren't more computers, with less clueful users, these days.

 I think a far better system is something along the lines of a SURBL
 with word filtering.  I believe that Akismet does something along
 these lines.

This is the same issue as the email spam issue. Identify by source, or 
content. Just as content filters are error prone with email spam, they will 
be error prone with other types of content.

I think either approach is viable, as long as the poster has an immediate 
method of redress. ("My IP is clean" works, and scales; "this URL is safe" 
works but doesn't scale; "this post is safe" is viable.) In each case you 
need to make sure the redress is protected from abuse, so some sort of 
CAPTCHA is inevitable.

  There is such a black listing service already, but again, reliability is
  an issue.

 Reliability is always an issue with blacklists as they are run as
 independent entities.  There is always someone who has a problem with
 how an individual blacklist is run...

That is easily solved with one's feet. Not as if there is a shortage of 
blacklists for various purposes.


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Simon Lockhart

On Wed Jan 10, 2007 at 09:43:11AM +, [EMAIL PROTECTED] wrote:
 And it is difficult to plug Internet TV into your existing TV setup.

Can your average person plug in a cable / satellite / terrestrial box (in the
UK, the only mainstream option here for self-install is terrestrial)? Power,
TV, and antenna? Then why can't they plug in Power, TV & phone line? That's
where IPTV STBs are going...

Simon


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-10 Thread Simon Leinen

Alexander Harrowell writes:
 For example: France Telecom's consumer ISP in France (Wanadoo) is
 pushing out lots and lots of WLAN boxes to its subs, which it brands
 Liveboxes. As well as the router, they also carry their carrier-VoIP
 and IPTV STB functions. [...]

Right, and the French ADSL ecosystem mostly seems to be based on these
boxes - Proxad/free.fr has its Freebox, Alice ADSL (Telecom Italia)
the AliceBox, etc.  All these have SCART (peritelevision) TV plugs
in their current incarnations, in addition to the WLAN access points
and phone jacks that previous versions already had.

Personally I don't like this kind of bundling, and I think being able
to choose telephony and video providers independently of ISP is better.
But the business model seems to work in that market.  Note that I
don't have any insight or numbers, just noticing that non-technical
people (friends and family in France) do seem to be capable of
receiving TV over IP (although not over the Internet) - confirming
what Simon Lockhart claimed.

Of course there are still technical issues such as how to connect two
TV sets in different parts of an apartment to a single *box.  (Some
boxes do support two simultaneous video channels depending on
available bandwidth, which is based on the level of unbundling
(degroupage) in the area.)

As far as I know, the French ISPs use IP multicast for video
distribution, although I'm pretty sure that these IP multicast
networks are not connected to each other or to the rest of the
multicast Internet.
-- 
Simon.


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-09 Thread Simon Lockhart

On Tue Jan 09, 2007 at 07:52:02AM +, [EMAIL PROTECTED] wrote:
 Given that the broadcast model for streaming content
 is so successful, why would you want to use the
 Internet for it? What is the benefit?

How many channels can you get on your (terrestrial) broadcast receiver?

If you want more, your choices are satellite or cable. To get cable, you 
need to be in a cable area. To get satellite, you need to stick a dish on 
the side of your house, which you may not want to do, or may not be allowed
to do.

With IPTV, you just need a phoneline (and be close enough to the exchange/CO
to get decent xDSL rate). In the UK, I'm already delivering 40+ channels over
IPTV (over inter-provider multicast, to any UK ISP that wants it).

Simon


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-09 Thread Simon Lockhart

On Mon Jan 08, 2007 at 10:26:30PM -0500, Gian Constantine wrote:
 My contention is simple. The content providers will not allow P2P  
 video as a legal commercial service anytime in the near future.  

 Furthermore, most ISPs are going to side with the content providers  
 on this one. Therefore, discussing it at this point in time is purely  
 academic, or more so, diversionary.

In my experience, content providers want to use P2P because it "reduces 
their distribution costs" (in quotes, because I'm not convinced it does, in
the real world). Content providers don't care whether access providers like 
P2P or not, just whether it works or not.

On one hand, access providers are putting in place rate limiting or blocking
of P2P (subject to discussions of how effective those are), but on the other
hand, content providers are saying that P2P is the future...

Simon


Re: A side-note Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-09 Thread Simon Lockhart

On Tue Jan 09, 2007 at 12:17:56AM -0800, Scott Weeks wrote:
 : ...My view on this subject is U.S.-centric...this 
 : is NANOG, not AFNOG or EuroNOG or SANOG.
 
 The 'internet' is generally boundary-less.  I would hope that one day our
 discussions will be likewise.  Otherwise, the forces of the boundary-creators
 will segment everthing we are working on and defend the borders they've
 created.

Unfortunately, content rights owners don't understand this. All they 
understand is that they sell their content in the USA, and then they sell it 
again in the UK, and then again in France, and again in China, etc. What they
don't want is to sell it once, in the USA, say, and not be able to sell it 
again because it's suddenly available everywhere.

Simon


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-08 Thread Simon Lyall

On Mon, 8 Jan 2007, Gian Constantine wrote:
 I would also argue storage and distribution costs are not
 asymptotically zero with scale. Well designed SANs are not cheap.
 Well designed distribution systems are not cheap. While price does
 decrease when scaled upwards, the cost of such an operation remains
 hefty, and increases with additions to the offered content library
 and a swelling of demand for this content. I believe the graph
 becomes neither asymptotic, nor anywhere near zero.

Let's see what I can do using today's technology:

According to the iTunes website they have over 3.5 million songs. Let's
call it 4 million. Assume a decent bit rate and make them average 10 MB
each. That's 40 TB, which would cost me $6k per month to store on Amazon
S3. Let's assume we use Amazon EC2 to only allow torrents of the files to
be downloaded, and we transfer each file twice per month. Total cost around
$20k per month or $250k per year. Add $10k to pay somebody to create the
interface, put up a few banner ads, and it'll be self-supporting.
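
(Back of the envelope, using roughly the S3 list prices of the time --
around $0.15/GB-month for storage and $0.20/GB for transfer out, both of
which are assumptions you should plug your own numbers into:)

  # Rough monthly cost of hosting a 4-million-track library on S3.
  tracks = 4000000
  size_gb = tracks * 10 / 1000.0       # ~10 MB per track -> ~40,000 GB (40 TB)
  storage = size_gb * 0.15             # ~$6,000/month to store
  transfer = size_gb * 2 * 0.20        # each file shipped twice -> ~$16,000/month
  print(storage + transfer)            # ~$22k/month, i.e. "around $20k" as above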

That sort of setup could come out of petty cash for larger ISPs' marketing
departments.

Of course there are a few problems with the above business model (mostly
legal) but infrastructure costs are not one of them. Plug in your own
numbers for movies and tv shows but 40 TB for each will probably be enough.

-- 
Simon J. Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.



Re: http://cisco.com 403 Forbidden

2007-01-03 Thread Simon Waters

On Wednesday 03 January 2007 16:29, you wrote:
 On Wed, 3 Jan 2007, James Baldwin wrote:
  Anyone else getting a 403 Forbidden when trying to access
  http://cisco.com?

 Forbidden

 You don't have permission to access / on this server.

 Additionally, a 403 Forbidden error was encountered while trying to use an
 ErrorDocument to handle the request.
 Apache/2.0 Server at www.cisco.com Port 80

 I think someone's going to get in big trouble.

Maybe they've blocked you lot for a reason ;)

Working fine here. Resolves to 198.133.219.25   



Re: Home media servers, AUPs, and upstream bandwidth utilization.

2006-12-25 Thread Simon Leinen

Lionel Elie Mamane writes:
 On Mon, Dec 25, 2006 at 12:44:37AM +, Jeroen Massar wrote:
 That said ISP's should simply have a package saying 50GiB/month
 costs XX euros, 100GiB/month costs double etc. As that covers what
 their transits are charging them, nothing more, nothing less.

 I thought IP transit was mostly paid by 95% percentile highest speed
 over 5 minutes or something like that these days? Meaning that ISP's
 costs are maximised if everyone maxes our their line for the same 6%
 of the time over the month (even if they don't do anything the rest of
 the time), and minimised if the usage pattern were nicely spread out?

Yes.  With Jeroen's suggestion, there's a risk that power-users'
consumption will only be reduced for off-peak hours, and then the ISP
doesn't save much.  A possible countermeasure is to not count off-peak
traffic (or not as much).  Our charging scheme works like that, but
our customers are mostly large campus networks, and I don't know how
digestible this would be to retail ISP consumers.
-- 
Simon.


Re: DNS - connection limit (without any extra hardware)

2006-12-11 Thread Simon Waters

On Monday 11 December 2006 16:15, you wrote:
  I use to slave . which can save time on recursive DNS servers when they 
have
 a lot of dross to answer (assuming it is totally random dross).

 I'm not sure to understand your solution.
 You configure your name-server as a slave-root-server?

Yes. Most of the root server traffic is answering queries with NXDOMAIN for 
non-existent top-level domains. If you slave the root zone on your recursive 
servers, they can answer those queries directly (from the 120KB root zone 
file), rather than relying on negative caching, and a round trip to the root 
servers, for every new non-existent domain.
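
(In BIND 9 it is just another slave zone on the recursive server -- the 
masters below are placeholder addresses; point them at whichever root or 
distribution servers will actually let you transfer the zone:)

  zone "." {
          type slave;
          file "slave/root.zone";
          // Placeholders: use servers that permit AXFR of the root zone.
          masters { 192.0.2.1; 192.0.2.2; };
  };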

The drawback is you provide the answer with the authority bit set, which isn't 
what the world's DNS clients should expect, but DNS clients don't care about 
that one bit (sorry).

If the root zone file changed quickly it might also cause other problems!

Paul V was very cautious about it as a method of running a DNS server, but if 
the recursive servers are being barraged with queries for (different) 
non-existent top level domains I think it is probably preferable to the 
servers being flattened (and/or passing that load onto the root name 
servers).

If the queries are for existing, or the same, domains each time, it won't 
provide significant improvement.

I suppose any server issuing more than 2000 or so queries a day to the root 
servers would potentially save bandwidth, and provide a more responsive 
experience for the end user. But one also has to handle the case of the root 
zone potentially expiring, not something I ever allowed to happen, but then 
I'm not the average DNS administrator.

I've used this technique extensively myself in the past with no issues, but 
I'm not using it operationally at the moment. Since the load average on our 
DNS server is 0.00 to two decimal places I doubt it would make a lot of 
difference, and we host websites and email, not randomly misconfigured 
home or business user PCs. So mostly we do lookups in in-addr.arpa, a 
depressingly large proportion of which fail, or look-ups for a small set of 
servers we forward email to (most of which exist, or I delete the forward).


Re: Best Email Time

2006-12-08 Thread Simon Waters

On Friday 08 December 2006 12:50, you wrote:
 
 CNN recently reported that 90% of all email on the internet is spam.
 http://www.cnn.com/2006/WORLD/europe/11/27/uk.spam.reut/index.html

I posted my rant a while back to save bandwidth;

http://www.circleid.com/posts/misleading_spam_data/


Re: DNS - connection limit (without any extra hardware)

2006-12-08 Thread Simon Waters

On Friday 08 December 2006 14:40, you wrote:
 
 For this reason, I would like that a DNS could response maximum to 10
 queries per second given by every single Ip address.

That may trap an email server or two.

Did you consider checking what they are looking up, and lying to them about 
the TTL/answer? 127.0.0.1 for a week may be better than NXDOMAIN.

I used to slave ".", which can save time on recursive DNS servers when they 
have a lot of dross to answer (assuming it is totally random dross).

I suspect complex rate limiting may be nearly as expensive as providing DNS 
answers with BIND 9.
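
(If you do want the 10-queries-per-second-per-source limit you describe, the 
cheapest place is probably in front of the daemon rather than inside it -- 
e.g. on Linux, a sketch with the netfilter hashlimit match, thresholds 
illustrative and UDP only:)

  # Accept up to ~10 UDP DNS queries/sec (burst 20) per source IP, drop the rest.
  iptables -A INPUT -p udp --dport 53 \
      -m hashlimit --hashlimit 10/sec --hashlimit-burst 20 \
      --hashlimit-mode srcip --hashlimit-name dns-per-ip -j ACCEPT
  iptables -A INPUT -p udp --dport 53 -j DROP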


Re: The Cidr Report

2006-11-10 Thread Simon Leinen

cidr-report  writes:
 Recent Table History
 Date  PrefixesCIDR Agg
 03-11-06199409  129843
[...]
 10-11-06  134555024  129854

Growth of the global routing table really picked up pace this week!
(But maybe I'm just hallucinating for having heard the report from the
IAB Routing Workshop three times in a week :-)
Or the CIDR Report software has an R200K problem?
-- 
Simon.


Re: adviCe on network security report

2006-11-02 Thread Simon Waters

On Thursday 02 Nov 2006 14:54, you wrote:
 
 I'm thinking for every answered message sent to abuse (non autoresponder),
 one will likely see more than 7-10 failures.  

It is a self-fulfilling issue. The abuse desks that deal with the issues, you 
rarely end up writing to; those that don't, you inevitably end up writing to.

Which is why you get a better response when raising a new issue, or a small 
issue, with someone who hasn't been notified of it before.

Broach a big established problem like pointing out that Telecom Italia is one 
of the worst spewers of advance fee fraud emails on the Internet, and you 
can't get anyone to take an interest. If there were anyone who cared, they 
would have done something about it by now. Even the Italian government 
doesn't seem to care about that one.

rfc-ignorant.org exists for a reason.


Re: (OT)MSN/hotmail postmaster contact

2006-10-31 Thread Simon Waters

On Monday 30 Oct 2006 21:06, you wrote:

 Is there a postmaster from MSN/Hotmail out there? Mail from my domain to 
 any of yours is being junked and randomly blackholed.  No progress has been
 made yet with the normal tech support.

I previously got responses from the advertised postmaster contact eventually.

But if an email provider is bit bucketing email, other than as a tactical 
measure, rather than rejecting it, or quarantining it, your time is probably 
better spent advising people not to use that service.

It is not as if, since AOL's Harvard email fiasco, anyone can claim they didn't 
know it was a stupid thing to be doing.

Since people have already told Hotmail it is a stupid thing to do, and they 
still do it, they are clearly stupid or uncaring on the matter; neither is a 
good thing in an email provider.

Or as put elsewhere, "real friends don't let friends use Hotmail".


Re: 10,352 active botnets (was Re: register.com down sev0?)

2006-10-26 Thread Simon Waters

On Thursday 26 Oct 2006 13:45, you wrote:
 
 Is there a similar statistic available for Mac OS X ?

Now now.

  Of the 4 million computers cleaned by the company's MSRT
  (malicious software removal tool), about 50 percent (2 million)
  contained at least one backdoor Trojan. While this is a high
  percentage, Microsoft notes that this is a decrease from the
  second half of 2005. During that period, the MSRT data showed
  that 68 percent of machines cleaned by the tool contained a
  backdoor Trojan.

A lot depends on the definition.

I've removed some malware trying to exploit an old Microsoft JRE bug. This 
stuff gets everywhere (well anywhere IE goes).

These get downloaded to some cached program folder for Java, and because the 
exploit hasn't worked for years, sit there till some antivirus software comes 
along and removes them, doing nowt but consuming disk space.

If you are the Microsoft malicious software removal tool marketing department, 
that is a trojan removed. To the average person on the street, it is another 
bit of meaningless fluff their PC will lose when they reinstall.

So yes, Microsoft is big enough to have bits who have a vested interest in 
making the other bits look bad (if only incidentally). Such is the way of big 
companies.



Re: register.com down sev0?

2006-10-25 Thread Simon Waters

On Wednesday 25 Oct 2006 15:59, you wrote:

 just guessing but:
 1) it's 'hard'

<rant>
The reason the public-facing DNS is poorly set up at the majority of 
institutions is the IT guy saying "let's bring it in-house to give us more 
control, how hard can it be?".

Whereas if they had left it with their ISP it would have been done right (along 
with the thousands of others that the ISP does right).

I've seen it done dozens of times when consulting.

I have data from a personal survey that confirms this is the leading cause of 
poor DNS configuration and lack of redundancy in my part of the UK.

I even have a few domains we slave to servers across several continents, and 
otherwise clueful IT people pick SOA settings that still cause their domains 
to expire too quickly when, had they left it to us, it would just work.

(Okay, I could override those settings, but if I do that why bother letting 
them master it in the first place?! "We delegated control to you, and then 
overrode all your settings because they were stupid"?!) So don't let the IT 
guy be a hidden master either; just leave it to the ISP.

How I reach the zillions of IT guys out there to say "don't do DNS in-house, 
you'll only mess it up" is the remaining question; Slashdot?
</rant>


Re: dns - golog

2006-10-20 Thread Simon Waters

On Friday 20 Oct 2006 00:35, you wrote:

 Here's a visionary article related to this topic, but
 at the root server level, even more of a delicate issue,
 but with the same principles as the one we're discussing:

No, this is the difference between impersonation and service.

I think one problem is that IANA doesn't have a brand name, so when you buy 
an Internet connection you aren't told you are getting an IANA DNS; that is 
assumed. The interesting question is whether that is sustainable if a lot of 
ISPs provide a non-IANA DNS service. There may be an argument for saying that 
non-IANA DNS services can't be described as "Internet" services, but that 
is an issue for ICANN's lawyers.

 http://www.circleid.com/posts/techies_wanna_do_policy/

Karl was so wrong on the F root-server issue. Paul asserted no new right; most 
companies and organisations would act legally against impersonators of their 
products and services, and Paul is merely asserting he believes IANA (or the 
ISC, since it is their address space) would do the same. 

Let us assume, for the moment at least, that the ISC will do what Paul thinks 
is the correct thing to do!

There is a HUGE difference between providing a modified DNS service to one's 
consenting clients, and subverting the Internet experience in such a way that 
clients find that the systems they are talking to are fakes.

 And this article shows the convenience of falling back
 on standards when they serve your purpose:

 http://www.circleid.com/posts/paul_vixie_on_fort_nocs/

The only standards fallen back on are an assertion that there are standards 
root server operators must adhere to, or lose their role. That is a statement 
of fact -- although one might argue as to whether one could effectively 
enforce these standards -- and bringing facts and expertise to the debate is 
why you want people like Paul involved.


Re: dns - golog

2006-10-19 Thread Simon Waters

On Thursday 19 Oct 2006 13:50, you wrote:

 Can you suggest me any objective reason in order to invalidate this
 proposal?

Been done to death here before, assuming it is the same sort of DNS hack as 
the others.

Basically if you can guarantee that all DNS servers are used exclusively for 
browsing then it probably won't generate much of a problem (maybe complaints 
but not that many technical problems).

If your clients use DNS for SMTP (or possibly other stuff but SMTP will do), 
then a wildcard breaks a lot of things.

You can check whether clients use DNS in such a fashion: dump the cache 
database and look for common DNSBLs used for spam filtering. If that data is 
in your cache, at least one of your clients' email systems will likely break 
with this change.
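
(With BIND 9 that is a one-liner or two -- the dump lands in named_dump.db 
in the server's working directory, and the list names are just the obvious 
suspects:)

  rndc dumpdb -cache
  grep -E 'sbl-xbl\.spamhaus\.org|bl\.spamcop\.net|dnsbl\.sorbs\.net' named_dump.db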

Stefan blogged this in response to previous discussion here;

http://blog.zaphods.net/articles/2006/07/17/re-sitefinder-ii-the-sequel

Of course it is a business decision: upsetting lots of customers, losing a 
lot of email, and breaking common Internet assumptions may be a good business 
decision if the customers left generate you enough revenue. But I would be 
cautious myself.

Wildcard DNS can make troubleshooting a problem due to a mistyped name a real 
pain. I know I've had that pain, what with ssh claiming that the key had 
changed, and all sorts of weirdness I didn't need when the pager went off in 
the small hours, because I typed a name wrong, and got a server I wasn't 
expecting.


Re: AS 701 problems in Chicago ?

2006-10-17 Thread Simon Waters

On Tuesday 17 Oct 2006 03:32, Mike Tancsa wrote:
 Anyone know whats up ? I have seen some strange routing depending on
 the payload's protocol to a site in one of their colos in Toronto.

Don't know if it is related, but we can't route email to bellsouth.net -- no 
route to host. When I checked it rattles around inside their network 
(assuming traceroute is trustworthy).

Their customer support (first number I found) report they have a ticket 
already open. They know they have a problem, but didn't supply any details; I 
didn't ask. I just wanted to know it wasn't specific to our own address 
space, and that they knew they had a problem -- it isn't, and they do.

Snip first few hops.

 9  dhr4-pos-8-0.Weehawkennj2.savvis.net (204.70.1.6)  87.029 ms  78.880 ms  
93.625 ms
10  0.ge-5-1-0.XL4.NYC4.ALTER.NET (152.63.3.121)  80.281 ms  76.763 ms  85.844 
ms
11  0.so-6-0-0.XL2.ATL5.ALTER.NET (152.63.10.105)  111.373 ms  110.133 ms  
106.262 ms
12  0.so-7-0-0.GW13.ATL5.ALTER.NET (152.63.84.109)  100.764 ms  114.852 ms  
107.897 ms
13  bellsouth-atl5-gw.customer.alter.net (157.130.71.170)  103.411 ms  102.371 
ms  95.479 ms
14  axr00asm-1-0-0.bellsouth.net (65.83.236.3)  95.399 ms  99.486 ms  99.574 
ms
15  ixc00asm-4-0.bellsouth.net (65.83.237.1)  96.381 ms  96.492 ms  101.427 ms
16  acs01asm.asm.bellsouth.net (205.152.37.66)  109.901 ms  118.113 ms  
111.783 ms
17  axr01asm-1-3-1.bellsouth.net (65.83.237.6)  96.847 ms  102.503 ms  96.839 
ms
18  65.83.238.40 (65.83.238.40)  96.554 ms  96.587 ms  96.480 ms
19  65.83.238.37 (65.83.238.37)  110.986 ms  110.987 ms  118.938 ms
20  65.83.239.29 (65.83.239.29)  110.261 ms  115.079 ms  110.075 ms
21  65.83.239.102 (65.83.239.102)  110.098 ms  110.116 ms  110.158 ms
22  205.152.45.65 (205.152.45.65)  110.158 ms  110.263 ms  110.142 ms
23  205.152.161.63 (205.152.161.63)  109.775 ms  110.116 ms  115.274 ms
24  205.152.161.65 (205.152.161.65)  110.571 ms  110.576 ms  110.512 ms
25  205.152.161.48 (205.152.161.48)  118.745 ms  116.560 ms  112.485 ms
26  * * *
27  205.152.156.25 (205.152.156.25)  128.332 ms  128.274 ms  133.991 ms
28  205.152.161.49 (205.152.161.49)  132.822 ms  141.624 ms  143.536 ms
29  205.152.161.64 (205.152.161.64)  142.537 ms  131.862 ms  132.765 ms
30  205.152.161.62 (205.152.161.62)  132.566 ms  131.734 ms  134.176 ms



Re: AS 701 problems in Chicago ?

2006-10-17 Thread Simon Waters

On Tuesday 17 Oct 2006 03:32, you wrote:
 205.150.100.214

Sorry - my mistake.

I saw the 205.150 prefix and confused it with 205.152, which are totally 
different of course.

bellsouth.net have sorted their issue (from our perspective).


Re: Broadband ISPs taxed for generating light energy

2006-10-10 Thread Simon Lockhart

On Tue Oct 10, 2006 at 02:40:25PM +, Fergie wrote:
 Is it April 1st already?  :-)

Their reasoning is certainly barmy, but some dark-fibre customers in the
UK get charged business property taxes on the fibre.

Simon
-- 
Simon Lockhart | * Sun Server Colocation * ADSL * Domain Registration *
   Director|* Domain & Web Hosting * Internet Consultancy * 
  Bogons Ltd   | * http://www.bogons.net/  *  Email: [EMAIL PROTECTED]  * 


Re: AOL Non-Lameness

2006-10-03 Thread Simon Waters

On Monday 02 Oct 2006 23:30, Joseph S D Yao wrote:
 
 All, this seems seriously NON-lame to me.  Of course, testing and fixing
 the bug before it was put out there would have been less so.  But think
 of this!  A large company has actually admitted that it was wrong and
 backed out a problem!  Isn't this what everyone always complains SHOULD
 be done?  ;-)  ;-)  ;-)

Hehe, AOL also reject the URLs generated by the 'visitors' Apache log 
reporting program. I pointed it out, they fixed it, then the next week it had 
regressed. I think the problem here is trying to detect bulk unsolicited 
email by its content. It's like assuming all DDoS attacks use ICMP packets: 
sometimes it works, except people get angrier when you drop genuine email.

Quite why AOL incorrectly rejecting email is worthy of comment on NANOG is 
beyond me; now if AOL stopped bouncing genuine email, that might be worthy of 
comment, but not on NANOG.



Re: Potentially on-Topic: is MSNBot for real?

2006-09-22 Thread Simon Waters

On Friday 22 Sep 2006 11:39, you wrote:

 Is this unusual, or what?  Are search engines supposed to be amongst the
 biggest user agents recorded on a typical website?  How much trolling and
 indexing is considered 'too much' ?

Whenever it becomes a problem.

If you don't have enough genuine traffic, and you don't have much, then the 
search engines will look like they are dominating it, as they are pretty 
thorough.

I've seen issues arise with some search bots, where they have discovered loops 
in a website's structure and downloaded multiple copies, or found novel links 
to dynamic content and indexed your entire database. So it is worth checking 
what pages they have been to, to see if those could be an issue.

 Off-list thoughts on this welcome if the operational relevance of this
 issue is questioned...

Trust me, anything involving 40,000 hits is off-topic on NANOG, unless you 
have reason to believe the same 40,000 are happening to everyone on the net, 
or they took down 40,000 important websites.

Most of the regulars are just getting in, so expect to be flamed mercilessly.


Re: Why is RFC1918 space in public DNS evil?

2006-09-18 Thread Simon Waters

On Monday 18 Sep 2006 07:40, you wrote:

 I know the common wisdom is that putting 192.168 addresses in a public
 zonefile is right up there with kicking babies who have just had their
 candy stolen, but I'm really struggling to come up with anything more
 authoritative than just because, now eat your brussel sprouts.

I believe it is simply because the address isn't globally unique, so you may 
connect to the wrong server.

So they look up internal.example.com and get 192.168.0.1.

They then terminate the VPN, try something that should connect to this server, 
and send their credentials (not over the VPN, so not encrypted perhaps) to 
some other server that promptly snaffles them (all untrusted servers are 
assumed to run honeypots, and password grabbing tools, at the very least).

Of course including the DNS inside the VPN doesn't stop the addresses being 
non-unique. I'm guessing the logic here is that one must flush one's DNS cache 
after disconnecting from a VPN that uses RFC1918 address space, and/or block 
RFC1918 addresses at routers (including client VPN hosts or routers) so that 
you don't accidentally connect to the wrong network unless a specific route 
is connected.

I normally block RFC1918 at routers, ever since I found a Windows box sending 
weird traffic to 10.0.0.1 for reasons I never managed to decipher, other than 
it could. Of course my ISP both used, and routed 10.0.0.1 somewhere, so this 
random stray traffic was going somewhere (I know not where to this day).
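
(On an IOS-style border router that is just a handful of ACL lines -- a 
sketch only, the interface name is a placeholder, and the same idea applied 
inbound is worth having too:)

  ! Drop anything addressed to RFC1918 space leaving the network.
  access-list 110 deny   ip any 10.0.0.0 0.255.255.255
  access-list 110 deny   ip any 172.16.0.0 0.15.255.255
  access-list 110 deny   ip any 192.168.0.0 0.0.255.255
  access-list 110 permit ip any any
  !
  interface Serial0/0
   ip access-group 110 out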

How this works out for people connecting via wireless LANs, which seem 
invariably to use 192.168.0.0/24, I'm not sure, but since you read the RFC 
and used a random chunk of 10/8 internally you don't care, right?


Re: [routing-wg]BGP Update Report

2006-09-13 Thread Simon Leinen

Marshall Eubanks writes:
 In a typical flight Europe / China I believe that there would be
 order 10-15 satellite transponder / ground station changes. The
 satellite footprints count for more that the geography.

What I remember from the Connexion presentations is that they used
only four ground stations to cover more or less the entire Northern
hemisphere.  I think the places were something like Lenk
(Switzerland), Moscow, Tokyo, and somewhere in the Central U.S..

So a Europe-China flight should involve just one or two handoffs
(Switzerland-Moscow(-Tokyo?)).  Each ground station has a different
ISP, and the airplane's /24 is re-announced from a different origin AS
after the handoff.

It's possible that there are additional satellite/transponder changes,
but those wouldn't be visible in BGP.
-- 
Simon.


Re: [routing-wg]BGP Update Report

2006-09-13 Thread Simon Leinen

Vince Fuller writes:
 On Mon, Sep 11, 2006 at 12:32:57PM +0200, Oliver Bartels wrote:
 Ceterum censeo: Nevertheless this moving-clients application shows
 some demand for a true-location-independend IP-addresses
 announcement feature (provider independend roaming) in IPv6, as
 in v4 (even thru this isn't the standard way, but Connexion is
 anything but standard). Shim etc. is not sufficient ...

Ehm, well, Connexion by Boeing is maybe not such a good example for
this demand.  Leaving aside the question whether there is a business
case, I remain unconvinced that using BGP for mobility is even worth
the effort.  It is obvious that it worked for Boeing in IPv4, for
some value of "worked", but the touted delay improvements on the
terrestrial ISP path (ground station - user's home ISP) are probably
lost in the noise compared to the 300ms of geostationary.  But, hey,
it's free - just deaggregate a few /19's worth of PA (what's that?)
space into /24s and announce and re-announce at will.

Vince has an outline of an excellent solution that would have avoided
all the load on the global routing system with (at least) the same
performance (provided that the single network/VPN is announced to the
Internet from good locations on multiple continents):

 One might also imagine that more globally-friendly way to implement
 this would have been to build a network (VPN would be adequate)
 between the ground stations and assign each plane a prefix out of a
 block whose subnets are only dynamically advertsed within that
 network/VPN. Doing that would prevent the rest of the global
 Internet from having to track 1000+ routing changes per prefix per
 day as satellite handoffs are performed.

But that would have cost money! Probably just 1% of the marketing
budget of the project or 3% of the cost of equipping a single plane
with the bump for the antenna, but why bother? With IPv4 you get
away with advertising de-aggregated /24s from PA space.

At one of the Boeing presentations (NANOG or RIPE) I asked the
presenter how they coped with ISPs who filter.  Instead of responding,
he asked me back "are you from AS3303?".  From which I deduce that
there are about two ISPs left who filter such more-specifics (AS3303
and us :-).

IMHO Connexion by Boeing's BGP hack, while cool, is a good example of
an abomination that should have been avoided by having slightly
stronger incentives against polluting the global routing system.
Where's Sean Doran when you need him?
-- 
Simon (AS559).


Re: Market Share of broadband provider in Scandidavia

2006-09-08 Thread Simon Waters

On Friday 08 Sep 2006 15:21, you wrote:

 Could anyone point me to a market-share by-country overview of broadband
 provider in Scandinavia (Denmark, Sweden, Norway, Finland, Iceland). Any
 help would be appreciated.

Ovum used to do reports on European ISP market share; I think they covered 
Scandinavia (I don't think we worried too much, given that the entire 
population of Scandinavia is about the same as Greater London; we were 
focused mostly on larger countries - by population).

Included best guess at future changes at the time as well.

The report wasn't cheap, but the people I was working with just bought large 
chunks of such intelligence in order to assess big investment decisions.


Re: comast email issues, who else has them?

2006-09-06 Thread Simon Lyall

On Thu, 7 Sep 2006, Christopher L. Morrow wrote:
 Perhaps some of the comcast folks reading might take a better/harder look
 at their customer service tickets and do a 'better' job (note I'm not even
 half of a comcast customer so I'm not sure that there even IS a
 problem...) on this issue?

You can try:

http://www.comcastsupport.com/sdcxuser/lachat/user/Blockedprovider.asp

or if that doesn't work:

Call 856-317-7272, listen for 00:02:17, press 1 (within a 3-second
window), listen for 1 minute, and leave a message that might be returned
within 3 days (according to the message).

-- 
Simon J. Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.



NZNOG 07 - Call for Participation and Papers

2006-09-05 Thread Simon Lyall


  NZNOG 07 - Call for Participation and Papers

The next conference of the New Zealand Network Operators' Group is to be held
in Palmerston North, New Zealand between 31 January and 2 February 2007. Our
host is Inspire.net and the venue is to be at Massey University's campus.

NZNOG meetings provide opportunities for the exchange of technical
information and discussion of issues relating to the operation and
support of network services, with particular emphasis on New Zealand.

The conference is low priced and has a strong technical focus, with the
aim and history of getting a strong turnout of technical personnel from
New Zealand Internet-oriented companies.

Conference Overview
---

NZNOG 2007 will consist of one day of Special Interest Group (SIG) meetings
followed by two days of conference presentations.  There will also be
opportunity for more informal and short lightning talks. These are typically
around five minutes long and are organised closer to the actual conference.

Important Dates
---

Call for Papers opens: 5 September 2006
Deadline for speaker submissions: 20 October 2006
Responses to speaker submissions: 27 October 2006
Draft program published:   1 November 2006
Final program published:  18 December 2006
NZNOG 2007 Conference:31 January - 2 February 2007

SIG Meetings


The first day of the conference is arranged as parallel tracks to
cover subject material which is of interest to particular groups of
attendees. Proposals are requested for workshops, tutorials and other
appropriate special-interest meetings of interest to Network Operators.

Topics which have been successful in the past, or have been requested
for this year are:

* Advanced BGP gubbins
* IPV6 implementation
* Service Provider wireless networking
* VoIP
* BIND and DNSSEC
* MPLS and high performance IP backbone routing
* Systems Administration.

The program committee will consider proposals for tutorials in any of these
or new areas. There is a single day of SIGs with up to five running
concurrently.

If you have an idea for a tutorial subject that is not listed, please feel
free to submit it to us.


Conference Presentations


The main conference program for 2007 will be made up of two days with a
single stream where possible. Presentations don't need to fit any particular
fixed length and can be from 30 minutes to 3 hours in length.

NZNOG conferences have traditionally spanned the entire operational spectrum,
and then some. Proposals for conference presentations are invited for
virtually any topic with a degree of relevance to the NZNOG community.

Past years' talks have included the following:

- Internet exchange operations
- Global anycast networks and the building thereof
- Peering, peering, and peering
- Network security
- 10GB ethernet operations
- Advanced networks in NZ
- Current Internet research in NZ
- Wireless networking
- QOS over carrier networks
- Content distribution networks and media streaming

If you are interested in submitting a talk please fill out the questions
at the end of this document and email them to [EMAIL PROTECTED].

Submission Guidelines
-

When considering a presentation or SIG, remember that the NZNOG audience
is mainly comprised of technical network operators and engineers with a wide
range of experience levels from beginners to multi-year experience. There is
a strong orientation to offer core skills and basic knowledge in the SIGs
and to address issues relevant to the day-to-day operations of ISPs and
network operators in the conference sessions.

The inclusion of a title, bio, topic, abstract, and slides with proposals
is not compulsory but each will help us determine the quality of your
proposal and increase the likelihood it will be accepted.

Final slides are to be provided by 22 January 2007.

Note: While the majority of speaking slots will be filled from submissions
received by 20 October 2006, a limited number of slots may be held back for
presentations that are exceptionally timely or of critical operational
importance.

The NZNOG conference is a TECHNICAL conference so marketing and commercial
content is NOT allowed within the program. The program committee is charged
with maintaining the technical standard of NZNOG conferences, and will
therefore not accept inappropriate materials.  It is expected that the
presenter be a technical person and not a sales or marketing person. The
audience is extremely technical and unforgiving, and expects that the speakers
are themselves very knowledgeable.  All sessions provide time for questions,
so presenters should expect technical questions and be prepared to deliver
insightful and technically deep responses.

Funding and Support
---

NZNOG conferences are community-run and community-funded events that try to
keep the cost to attendees as low as possible, so generally we are unable to
pay the travel costs of speakers.

Re: NNTP feed.

2006-09-05 Thread Simon Lyall

On Wed, 6 Sep 2006, Daniel Roesen wrote:
 If folks would end abusing NNTP for file distribution via flooding, the
 matter would quickly be resolved. Am i naive?

The technical term might be "trolling". Binaries have made up the vast
majority of Usenet bandwidth since at least the early 90s, so it is not
simply going to go away.

One of the attractions of Usenet is that, since the majority of messages
(by number) are in fact discussion, it is hard to claim it is an illegal
file-sharing network, which is why (IANAL) the *AA haven't shut it down yet
and Usenet providers operate openly.

Talk of "resolving" and "abusing" with regard to binary newsfeeds grew old
around 10 years ago. These days plenty of people run text-only feeds and
leave the full-binary feeds to the hundred or so top sites that can afford
them.

-- 
Simon J. Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
To stay awake all night adds a day to your life - Stilgar | eMT.



Re: Captchas was Re: ISP wants to stop outgoing web based spam

2006-08-16 Thread Simon Waters

On Wednesday 16 Aug 2006 01:13, Paul Jakma wrote:
 On Thu, 10 Aug 2006, Simon Waters wrote:
  I've no doubt some captcha can be invented in ASCII, but this isn't
  it.

 'tis. It works for at least one blog platform, where I've never once
 had comment spam.

You snipped the bit where I said "It would work for a minority use."

I'm sure it works fine for just you, but it doesn't scale, so the folks at
NANOG probably don't care.

The reason people use image recognition is that it is something (most) humans
find very easy, but it requires a considerable investment of effort (or of
resources for self-training) to teach computers, and it readily permits
variations ('click the kitten' being a good example).

For a demonstration of bashing at ASCII captchas try any good chat bot.

I asked the online bot at ellaz.com your question:

What is 2 added to 23?

Ellaz replied;

I can tell you that 2, plus 23, is equal to 25

I hope your parser can recognise that as a valid answer, otherwise you'll have
trouble with humans failing the test. Although for blog comments, excluding
stupid or overly verbose humans may not be a bad idea; I just get the
feeling some days I'd never get to comment on anyone's blog.

I thought maybe I'd spice it up a little:

Simon: What is the square root of -1?
Ellaz: Hey Hey!  You cannot take the square root of a negative number.  That 
gives an imaginary number, and I don't go there.

(Spot the canned response).

Shucks. Unfortunately Ellaz bot isn't terribly good at non-maths questions, 
but I think it makes the point well enough. 

The reason no one has defeated your text captcha is probably that no one has
tried, but that won't remain the case if it gets popular. We are locked in
another arms race here. At the moment greylisting kills most of your email
spam, and any captcha (even ones for which solver programs already exist, and
which score better than humans) will kill most of your blog spam, but don't
expect them to last as a defence, just as greylisting is slowly crumbling. The
real solution is to break the monoculture and have more security at the leaf
nodes, but someone already started that thread (again).
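
(For anyone not familiar with greylisting, here is a minimal sketch of the
idea in Python. The in-memory table and the 300-second delay are purely
illustrative; a real implementation sits behind the MTA as a policy daemon
and persists and expires its state.)

    # Minimal greylisting sketch: temp-fail the first delivery attempt from an
    # unknown (client IP, sender, recipient) triplet, accept retries after a delay.
    import time

    GREYLIST_DELAY = 300          # seconds a triplet must wait before acceptance
    seen = {}                     # triplet -> timestamp of first attempt

    def check(client_ip: str, sender: str, recipient: str) -> str:
        """Return an SMTP-style verdict for this delivery attempt."""
        triplet = (client_ip, sender.lower(), recipient.lower())
        first_seen = seen.get(triplet)
        if first_seen is None:
            seen[triplet] = time.time()
            return "450 try again later"   # most spamware never retries
        if time.time() - first_seen < GREYLIST_DELAY:
            return "450 try again later"   # retried too quickly
        return "250 accepted"              # legitimate MTAs queue and retry

The weakness is visible in the comments: nothing stops spamware from learning
to queue and retry, which is exactly why it is slowly crumbling.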

Although possibly the mistake is to assume you can distinguish between humans
and computers on the basis of intelligence. It isn't reliably possible to do
this yet, but give it a few years and you'll know that if a site asks for all
the integer solutions of a given quintic equation, it is probably not that
interested in comments from apes, except perhaps the most exceptional apes.


Re: ISP wants to stop outgoing web based spam

2006-08-11 Thread Simon Waters

On Friday 11 Aug 2006 05:24, Hank Nussbacher wrote:
 On Thu, 10 Aug 2006, Florian Weimer wrote:
  You should look after the automated tools (probably using a virus
  scanner or something like this) and trigger a covert alert once they
  are detected.  If the spam sent out is of the right kind, you can
  phone the police and have the guy arrested.

 Please show me which virus scanner scans html pages for the words like V I
 A G R A, or Free M O R T G A G E, as it is going outbound.

HTTP::Proxy ?

I don't know what the icap support in Squid 3 will offer.
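
(For illustration only, here is a rough sketch in Python of the sort of
outbound content check being asked for. It is not HTTP::Proxy and not Squid's
ICAP interface; the word list is hypothetical, and false positives are the
obvious risk.)

    # Rough sketch of outbound content inspection for web traffic.
    import re

    SUSPECT_WORDS = ["viagra", "mortgage"]   # illustrative only

    def _pattern(word: str) -> re.Pattern:
        # Match "viagra", "V I A G R A", "v.i.a.g.r.a" and similar spacings.
        return re.compile(r"[\s.\-_]*".join(re.escape(c) for c in word), re.IGNORECASE)

    PATTERNS = [_pattern(w) for w in SUSPECT_WORDS]

    def looks_spammy(body: str) -> bool:
        """True if an outgoing request body matches an obfuscated spam term."""
        return any(p.search(body) for p in PATTERNS)

    # A proxy hook would call looks_spammy() on POST bodies to webmail sites
    # and log or reject the request rather than silently dropping it.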

I'm with Florian: you are looking for a technical solution when the problem
is best solved on the ground.

Did you consider that perhaps your customer really is the spammer, or is 
complicit in the abuse?


Re: ISP wants to stop outgoing web based spam

2006-08-10 Thread Simon Waters

On Wednesday 09 Aug 2006 18:28, Suresh Ramasubramanian wrote:
 
 2. West african cities like Lagos, Nigeria, that are full of
 cybercafes that use this satellite connectivity, and have a huge
 customer base that has a largish number of 419 scam artists who sit
 around in cybercafes doing nothing except opening up free hotmail,
 gmail etc accounts, and posting spam through those accounts, using the
 cybercafe / satellite ISP's connectivity.

If we get abuse like that from a cybercafe (and we have in the past), we block
its IP address allocation on our webservers. It is up to the cybercafe owner
to police his space or suffer the consequences, just like any other ISP.

If the question is how he can police his space: I'm sure technical solutions
are possible, but there are very cheap human ones, along with keeping a
functional abuse address.

 I got asked this way back in 2005, and then talked to Justin Mason of
 the spamassassin project.  He was of the opinion that it could be done
 but he wasn't too aware of anybody who had tried it, plus he didn't
 exactly have much free time on his hands for that.

I suspect enough free email services use HTTPS that it is pretty much
impossible to spot this kind of thing by content inspection, at least as a
long-term solution.

Certainly if you assume content inspection is impossible, or at least
unreliable as a long-term solution, you are left with traffic analysis. I
suspect IP addresses doing automated abuse have distinctive patterns, but the
risk of false positives must be reasonably high. Simple analysis tools applied
to a Squid log would show each client's HTTP traffic volume and destinations.
Provide customers a login when they pay, and you immediately know who is
responsible as well. There are even real-time analysis tools for Squid logs.
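
(As a sketch of the sort of simple analysis I mean, assuming Squid's default
native access.log format; the webmail host list and the threshold are made-up
numbers for illustration only.)

    # Count POSTs per client IP to webmail hosts from a Squid native access.log.
    # Assumed format: time elapsed client code/status bytes method URL ident peer type
    from collections import Counter
    from urllib.parse import urlsplit

    WEBMAIL_HOSTS = ("mail.yahoo.com", "hotmail.com", "gmail.com")  # illustrative
    POST_THRESHOLD = 200   # POSTs per log period before a client looks automated

    def suspicious_clients(logfile: str) -> dict:
        posts = Counter()
        with open(logfile) as fh:
            for line in fh:
                fields = line.split()
                if len(fields) < 7 or fields[5] != "POST":
                    continue
                host = urlsplit(fields[6]).hostname or ""
                if host.endswith(WEBMAIL_HOSTS):
                    posts[fields[2]] += 1   # field 2 is the client address
        return {ip: n for ip, n in posts.items() if n >= POST_THRESHOLD}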

The webmail provider, on the other hand, can easily and cheaply check whether
one member's messages are suspicious in either content or volume, and suspend
the account. So perhaps you are trying to apply the solution in the wrong
place.


Captchas was Re: ISP wants to stop outgoing web based spam

2006-08-10 Thread Simon Waters

On Thursday 10 Aug 2006 01:14, Paul Jakma wrote:
 On Thu, 10 Aug 2006, Stefan Bethke wrote:
  Do you have any links or references?

 Just ask the user some basic question. E.g.:

   What is 2 added to 23?: textbox

I've no doubt some captcha can be invented in ASCII, but this isn't it. AI
already substantially outperforms all but a small minority of humans on
mathematical-style IQ tests (programs were scoring over 160 when I was a kid),
and it would be relatively trivial to code one to handle the types of
questions used in this kind of test.
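
(To make "relatively trivial" concrete, here is a Python sketch. It only
handles the few phrasings shown, which is rather the point: extending it is
cheap, and the machine will happily retry forever.)

    # Trivial solver for "What is 2 added to 23?"-style text captchas.
    import re

    PHRASES = [
        (re.compile(r"what is (\d+) added to (\d+)", re.I), lambda a, b: a + b),
        (re.compile(r"what is (\d+) plus (\d+)", re.I),     lambda a, b: a + b),
        (re.compile(r"what is (\d+) times (\d+)", re.I),    lambda a, b: a * b),
    ]

    def solve(question: str):
        for pattern, op in PHRASES:
            m = pattern.search(question)
            if m:
                return op(int(m.group(1)), int(m.group(2)))
        return None   # unknown phrasing; just retry with another account

    print(solve("What is 2 added to 23?"))   # 25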

It would work for a minority use. Indeed I've already used a BBS that
expected you to answer a question about factoring numbers, or some such, on
joining.

Something requiring real-world knowledge would be better, but it is very hard
to automatically generate questions (and answers) that can't be automatically
answered. And remember, in most cases you need questions that are consistently
hard, as the machines won't get bored retrying; generating questions manually
only helps the first time each one is encountered, since after that the answer
can simply be replayed. Visual noise (and auditory noise) is something we are
consistently good at removing, and machines are still playing catch-up. But
then some of the automated captcha solvers aren't that much worse than a lot
of people.

On the upside, such captchas might spark more research into AI: whilst
recognising badly mangled images of text is kind of useful for the post
office and other handwriting-recognition applications, it has limited
application elsewhere.


Re: mitigating botnet CCs has become useless

2006-08-08 Thread Simon Waters

On Tuesday 08 Aug 2006 15:03, you wrote:

 And, as usual, security is only costing you money.

To a first approximation 10% of all incoming net traffic is malware/abuse/junk 
related, so if you are a residential ISP presumably 10% of outgoing bandwidth 
is swallowed up this way.

So there are savings to be made. Of course the economics work against it, as
it is generally cheaper to buy bandwidth in bulk than to deal with individual
cases.

However, most big residential ISPs must be getting to the point where a 10%
bandwidth saving would justify buying in third-party solutions for containing
malware sources. I assume residential ISPs must be worse than 10%, as I hope
businesses do slightly better on average. On the upside, over here, the
migration to ADSL means that containing an infected host via a third party can
be as simple as changing the ADSL settings so that they connect to a
third-party walled garden rather than the host ISP (effectively transferring
them to a different ISP, one that exists solely to clean up infected systems).


DNS BIND dispatch errors

2006-08-03 Thread Simon Waters

The increase in dispatch errors reported by BIND recently is explained by the
other ISC (the SANS Internet Storm Center) here:

http://isc.sans.org/diary.php?storyid=1538

So it looks like the error message was right, although some older versions of 
BIND didn't do a good job of reporting the IP addresses involved.

My own experience is that the servers mentioned there aren't the only ones
exhibiting this rather bizarre behaviour, but they could well account for the
increase seen (in association with a spam run triggering lookups).


Re: Odd named messages...

2006-08-02 Thread Simon Waters

On Tuesday 01 Aug 2006 20:18, you wrote:
 Has anyone else seen an increase of the following named errors?

 Aug  1 01:00:09 morannon /usr/sbin/named[21279]: dispatch 0x4035bd70:
 shutting down due to TCP receive error: unexpected error
 Aug  1 01:00:09 morannon /usr/sbin/named[21279]: dispatch 0x4035bd70:
 shutting down due to TCP receive error: unexpected error

Noted similar here, started Jul 31 17:06:09 (GMT+1).

 .. someone trying some new anti-bind trickery?

The error can occur in normal usage of BIND9 so may reflect a change in 
firewall practice or similar.

It is occurring on recursive servers with no remote recursive queries allowed, 
so it is presumably in response to some query initiated locally (email/spam 
related perhaps?).

We have spare disk space, so I will enable query logging and see if it helps.

I suggest the DNS operations list may be the best place for further comments.

My best guess is ignorance over conspiracy. If I find a concrete answer I will 
follow up to NANOG if appropriate.

I'm afraid my first attempt to investigate got sidetracked into reporting
some phishing scam or other.



Re: AOL Mail Problem

2006-07-28 Thread Simon Waters

On Thursday 27 Jul 2006 17:59, William Yardley wrote:

 Keeping in mind that they are not only a huge email provider, but also
 that their user-base is mostly not exactly tech savvy, I think Carl,
 Charles et al do a pretty good job over there.

I think Carl moved on to other things in AOL.

 Dealing with their postmaster team can still take a while sometimes, but
 they'll generally respond.

Experience here is that they don't any more. I've had responses, but not via
[EMAIL PROTECTED].

They still do simplistic blocks on content, i.e. certain types of content will
cause a message to be rejected outright, without any consideration of the rest
of the message. I think that is a broken model.

They also do some sort of port 25 block: neither a complete block nor
guaranteed delivery, but some sort of intermediate proxy. This makes life very
hard for people who are learning about email, or coming from elsewhere. I
think what was needed was abuse detection and some sort of walled-garden
approach, which could have dealt with all forms of abuse, not just email.

I appreciate changing anything at all on that sort of scale is always 
tremendously challenging.


Re: update bogon routes

2006-07-27 Thread Simon Leinen

Miguel,

 We have had some problems of being beaten back. Our space, being
 announced by AS 16592, is 190.5.128.0/19

I only see 190.5.128.0/21, and because it is our policy to ignore
more-specifics from PA space (including anything more specific than
/21 from 190.0.0.0/8 and the other LACNIC ranges), we don't accept
that route.  Couldn't you just announce the entire /19?
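
(For what it's worth, the check is mechanical. Here is a sketch in Python of
the more-specifics rule described above; the /19 is the allocation from this
thread, and a real filter would of course be built from the routing registry
for all of 190.0.0.0/8 and the other ranges.)

    # Announcements strictly more specific than the registered PA allocation
    # are ignored; the covering aggregate is accepted.
    import ipaddress

    ALLOCATION = ipaddress.ip_network("190.5.128.0/19")

    def accepted(announced: str) -> bool:
        net = ipaddress.ip_network(announced)
        if net.subnet_of(ALLOCATION) and net.prefixlen > ALLOCATION.prefixlen:
            return False          # more-specific of the PA block: ignored
        return True

    print(accepted("190.5.128.0/21"))   # False: the /21 we currently see
    print(accepted("190.5.128.0/19"))   # True:  the whole /19 would get through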

Regards,
-- 
Simon, AS559.


Re: Eurid suspends more than 74,000 .eu domain names

2006-07-26 Thread Simon Waters

On Tuesday 25 Jul 2006 18:04, Henry Linneweh wrote:

 I think this operationally impacts some people
 
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9001972

Anyone else note the irony that the domain names were registered through
domainsbyproxy.com, so he is complaining about his own customers?

People willing to put up with EURid bureaucracy three times over deserve all
the domain names their own money can buy.


Re: Sitefinder II, the sequel...

2006-07-14 Thread Simon Waters

On Thursday 13 Jul 2006 13:08, you wrote:

 The second part doesn't make any sense to me.  It seems that having
 multiple, geographically disparate recursive name servers would be
 more likely to present an alternative [view] of the DNS.  (In fact,
 I can prove that's true in at least some cases. :)  So you are
 actually arguing -against- your first point.

Only where others deliberately provide conflicting data from different
sources. That is their choice; certainly the recursive machines would be
deployed so as to avoid making the situation worse.

The point of local provision is reliability and performance.

 Perhaps something as simple as a preference only 'correcting' queries
 that begin with www?

Alas, "www" ascribes meaning where none exists: webservers exist without the
www prefix, and some name servers and mail servers have proper names with www
in them. Such half-baked approaches are how systems decay.


Re: Sitefinder II, the sequel...

2006-07-13 Thread Simon Waters

On Wednesday 12 Jul 2006 18:35, David Ulevitch wrote:
 On Jul 12, 2006, at 12:30 AM, Simon Waters wrote:
  On Tuesday 11 Jul 2006 20:22, Daniel Golding wrote:
  I'm at a loss to explain why people are
  trying so hard to condemn something like this.
 
  Experience?

 People have never created a platform to manage recursive DNS

That somewhat depends on what you mean by platform.

If by platform you mean a remotely managed service for recursive DNS, no one
I know in the DNS business has ever tried to sell that (although arguably ISPs
generally supply something similar free to every customer), but that doesn't
necessarily negate their experience.

Most of those I know try to deploy recursive services as close as possible to
the client, avoiding, where possible, both alternative views of the DNS and
forwarding.

Perhaps it is time to ask Brad, Paul and Cricket what they think, and to have
answers ready for their comments.

I commend your enterprise, but have you considered trying to sell the data
feed via firewall channels, where the restrictions could be applied more
specifically than via a different view of the DNS?

With automated responses to bad things, it is usually best to minimise the
scope of the change. Similarly, typo correction makes sense for URLs, but not
for most other uses of the DNS (hence the proviso you make to switch it off if
you use an RBL; I'd say switch it off for all email servers, lest you start
correcting spambot crud: our email servers make a DNS check on the sender's
domain, and that doesn't want correcting either). So the answer is probably a
browser plug-in (although most browsers already try to guess what you meant to
some extent).



Re: Sitefinder II, the sequel...

2006-07-12 Thread Simon Waters

On Tuesday 11 Jul 2006 20:22, Daniel Golding wrote:
 
 I'm at a loss to explain why people are
 trying so hard to condemn something like this.

Experience? 


Re: Sitefinder II, the sequel...

2006-07-11 Thread Simon Waters

On Tuesday 11 Jul 2006 07:19, Steve Sobol wrote:

 There's a big difference, of course, between INTENTIONALLY pointing your
 computers at DNS servers that do this kind of thing, and having it done for
 you without your knowledge and/or consent.

Yes: one way you choose who breaks your DNS, the other way Verisign breaks it
for you.

Most people don't have the know-how to understand the consequences of using 
such a service. So providing it without screaming huge warnings is at best 
misleading.

I work for a company that provides trials of a web hosting product, and we've
had our share of abusive trial users inventing new ways to abuse our service.
But if you try to block this abuse at the DNS level you'll almost certainly
break access to every other site we host on that service.

Similarly, our DNS servers provide short-term A records for some important
sites; blocking their IP address in the DNS server would result in a loss of
redundancy for a fairly major service (okay, we use different names for the
DNS server and the webserver, but not everyone does that). In this instance it
is unlikely the loss of redundancy would be noticed until it was needed, as by
its nature redundancy acts to hide small-scale failures.

This is the basic issue with DNS changes by third parties:

the third party can have no knowledge of the scope or scale of the issues
their changes could cause.

That is why the DNS has delegated administration; although there is probably
less need for the delegated deployment any more (computers are big and cheap
compared to the 1970s), delegated administration is still a MUST have.

Think of the DNS as being *sensitively dependent on correct values*.

Sure, they can try to guess, but it is at best a guess. I note almost all
phishing sites use IP addresses these days anyway; certainly all those I
reported this morning were using URLs of the form
http://10.11.12.13/www.example.com/
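
(That shape at least is easy to flag mechanically. A sketch in Python; the
test URLs are just the made-up example above and an ordinary hostname.)

    # Flag URLs whose "host" part is a bare IP address, as in the
    # http://10.11.12.13/www.example.com/ shape mentioned above.
    import ipaddress
    from urllib.parse import urlsplit

    def host_is_bare_ip(url: str) -> bool:
        host = urlsplit(url).hostname or ""
        try:
            ipaddress.ip_address(host)
            return True
        except ValueError:
            return False

    print(host_is_bare_ip("http://10.11.12.13/www.example.com/"))   # True
    print(host_is_bare_ip("http://www.example.com/login"))          # False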

If you just want faster recursive resolvers, that is easily done without
breaking anything, and without risking your view of the DNS: more hardware,
slave the root zone (".") locally, optimise the binaries (Rick Jones has
documented this in huge detail at HP's performance labs), optimise the IP
stack, etc.

If the only value-add is fast recursive resolution, but from off your network,
I'd suggest this is a poor choice as well, as a key planning decision in DNS
resolver deployment is to deploy within your network, so that stuff works when
your connectivity is toast (of course that'll never happen).

I see no redeeming features of the service, or did I miss something?


Re: Sitefinder II, the sequel...

2006-07-11 Thread Simon Waters

On Tuesday 11 Jul 2006 13:40, you wrote:
 
 Client sites with dedicated recursers are going to be presented with a
 challenge:  if their servers use the recursers, then will they set up
 a parallel set of caching forwarding recursers for desktop-to-OpenDNS
 use, or will they simply let OpenDNS be their default resolver for
 desktops?  (etc)  What happens if/when OpenDNS gets too busy, or fails,
 or goes TU?

Fortunately BIND has a "forward first" option. But of course then the view of
the DNS will change whenever the remote servers are busy :(

A bigger issue I haven't thought through is that the site encourages
forwarding, which is notorious in the DNS world for causing poisoning issues.
Presumably if their DNS implementation itself is perfect that may not be a
problem, but it makes me nervous.

 I have not been convinced that coherence is a property that *must* be
 maintained within the DNS, though I see certain portions that must
 obviously remain coherent.

But can you define a mechanical rule to identify if an A record belongs to the 
set of A records that must remain coherent, so that they never get modified 
by such a scheme?

The advantage of things like relay block lists is that the effect is limited
in scope ("I won't talk to that email server because ...") and the errors and
conditions that result are small. But as soon as you return an untrue answer
for an A record, you have no way of knowing how much of the Internet you have
just lost name resolution for, because you can't know for sure that it isn't
the delegated name server for an important domain.

Sure, this may reflect bad design decisions in the DNS from olden days, but it
is the reality of the Internet that servers with names like hippo.ru.ac.za
play a crucially important role, and unless you happen to know what that role
is, you can't assess the importance of that A record (okay, that one was an
easy one).


Re: IP Delegations for Forum Spammers and Invalid Whois info

2006-07-05 Thread Simon Waters

On Monday 03 Jul 2006 16:26, Phil Rosenthal wrote:
 
 We are very much anti-spam and I will look into Mark's issue - I'm
 looking  through the tickets for abuse@ and there is no email sent in
 from [EMAIL PROTECTED] ...

I suspect he tried [EMAIL PROTECTED], which seems to be listed in rfc-ignorant.

Looks like the server

195.225.177.31

has been spewing out guestbook spam (and wiki spam): a quick Google for
"195.225.177.31 nice site" will show hundreds of links, although quite a lot
of it just looks bizarre, and DShield shows 80,000-odd reports of port 80
probes from this address in the last month.

We've just cleaned up a lot of sites promoted by this sort of guestbook spam,
so I know it is a relentless and tedious thing to squash.


Re: IP Delegations for Forum Spammers and Invalid Whois info

2006-07-03 Thread Simon Waters

On Monday 03 Jul 2006 06:16, you wrote:

 Forgive the relative noobishness of the question, but I've not had to deal
 with this sort of situation before.  Should I be forwarding to RIPE?

I don't think RIPE will be that interested.

The address range gets connectivity from someone. I suggest reporting 
upstream.

Oh dear, upstream is ISPrime. Does anyone here think they are anything but a
spam house? If not, then why are they still in NY?


