Re: load balancing and fault tolerance without load balancer

2008-03-14 Thread Mark Smith

On Sat, 15 Mar 2008 00:42:26 +0800 (CST)
Joe Shen [EMAIL PROTECTED] wrote:

 
 hi,
 
we plan to set up a web site with two web servers.
 
The two servers should be under the same domain
 name.  Normally, web surfing load should be
 distributed between the servers. When one server
 fails, the other server should take all of the load
 automatically. When the failed server recovers, load
 balancing should be achieved automatically. There is no
 budget for a load balancer.
 
 
we plan to use DNS to balance load between the two
 servers. But, it seems DNS based solution could not
 direct all load to one server automatically when the
 other is down.
  
 
 Is there any way to solve the problem above?
 

One option might be to run two instances of VRRP/CARP across the hosts.
You have Host A being the primary/master for one IP address that's in
your DNS, and Host B being the primary/master for the other IP address
that's in your DNS. Host A is the secondary/backup for the IP address
normally owned by Host B, and Host B is the secondary/backup for the IP
address normally owned by Host A. When, for example, Host A fails, Host
B takes over as primary/master for both IP addresses in your DNS,
giving you continued availability. If you want that failover to be
transparent to load, you'd need to keep each host below 50% load under
normal, non-failure conditions.
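
To make the crossed ownership concrete, here's a toy sketch in Python of
the address-ownership logic only - it isn't a VRRP/CARP implementation,
and the host names and (documentation) addresses are made up:

# Toy model of "two VRRP/CARP instances, crossed": each host is master
# for one virtual IP and backup for the other.
VIPS = {
    "192.0.2.10": ("host-a", "host-b"),   # (master, backup)
    "192.0.2.11": ("host-b", "host-a"),
}

def owners(live_hosts):
    result = {}
    for vip, (master, backup) in VIPS.items():
        if master in live_hosts:
            result[vip] = master
        elif backup in live_hosts:
            result[vip] = backup
        else:
            result[vip] = None            # both hosts down
    return result

print(owners({"host-a", "host-b"}))       # normal: each host serves one VIP
print(owners({"host-b"}))                 # host-a failed: host-b serves both

Both DNS records stay published the whole time; only the host answering
for each address changes.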

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: IETF Journal Announcement (fwd)

2008-02-28 Thread Mark Smith

On Thu, 28 Feb 2008 08:41:27 -0500
Joe Abley [EMAIL PROTECTED] wrote:

 
 On 27-Feb-2008, at 15:09, Mark Smith wrote:
 
  Don't worry if the ISOC website times out, their firewall isn't TCP
  ECN compatible.
 
 Isn't it the case in the real world that the Internet isn't TCP ECN  
 compatible?


In my experience, no. The Linux kernel defaults to ECN enabled (although
I think distros switch it off), and I've been running my PC ECN-enabled
for at least the last 5 to 7 years. The number of websites I've had
trouble with in that time is so low (three) that I can remember what
they are. The two other than the ISOC website have been fixed within
the last 3 years.

That's not really an excuse anyway. The ECN bits were originally
reserved, so things that don't understand them should be ignoring them,
not insisting they're set to zero. I understand that's the fundamental
point of the robustness principle. If people claim doing that is
insecure, why are there so many firewalls out there that don't have /
aren't causing this problem?
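
For anyone wanting to check their own box, a small Python sketch - the
/proc path is the standard Linux sysctl location, and the value meanings
below are my paraphrase of the kernel documentation:

# Report the local TCP ECN sysctl and the RFC 3168 ECN codepoints.
MEANINGS = {
    "0": "ECN disabled",
    "1": "ECN requested on outgoing and accepted on incoming connections",
    "2": "ECN accepted only when the peer requests it",
}

try:
    with open("/proc/sys/net/ipv4/tcp_ecn") as f:
        value = f.read().strip()
    print("net.ipv4.tcp_ecn =", value, "->", MEANINGS.get(value, "unknown"))
except FileNotFoundError:
    print("no /proc/sys/net/ipv4/tcp_ecn here (not Linux?)")

# The two ECN bits live in the formerly reserved low bits of the IPv4
# TOS / IPv6 traffic class octet (RFC 3168):
for bits, name in ((0b00, "Not-ECT"), (0b01, "ECT(1)"), (0b10, "ECT(0)"), (0b11, "CE")):
    print(format(bits, "02b"), name)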

 
 I thought people had relegated that to the nice idea but, in  
 practice, waste of time bucket years ago.


I'm not exactly sure of its current status; however, every now and then
I come across things relating to it, e.g. I recently came across
proposed ECN additions to MPLS, so it still seems relevant.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: IETF Journal Announcement (fwd)

2008-02-27 Thread Mark Smith

Don't worry if the ISOC website times out; their firewall isn't TCP
ECN compatible. It was going to be fixed a couple of years ago when I
enquired about it, but obviously hasn't been. Being liberal in what
they'll accept seems to be a bit of a problem for them.

It's the last remaining non-ECN compatible website that I've tried to
access over the last couple of years. The others I'd had trouble with
have all become ECN friendly.

On Wed, 27 Feb 2008 08:33:43 -0800 (PST)
Lucy Lynch [EMAIL PROTECTED] wrote:

 
 All -
 
 Forwarded on Mirjam's behalf.
 
 Aside: If you find the Thaler/Aboba article on protocol success
 interesting you might also want to check out the plenary slides
 from the last IETF:
 
 http://www.ietf.org/proceedings/07dec/slides/plenaryt-1.pdf
 
 - Lucy
 
 -- Forwarded message --
 Date: Wed, 27 Feb 2008 08:55:40 +0100
 Subject: IETF Journal Announcement
 
 Hello,
 
 The new issue of the IETF Journal - Volume 3, Issue 3 - is now
 available at http://ietfjournal.isoc.org
 
 This issue's main focus is Security and Unwanted Traffic. Please also
 note the previous issue (Volume 3, Issue 2) which covered many topics
 related to IPv6.
 
 You can read this publication online or choose to download the full
 issue in PDF format. You can also keep up to date with the latest
 issue of the IETF Journal by subscribing to one of our RSS or Atom
 feeds.
 
 For comments or suggestions, please do not hesitate to contact us at
 [EMAIL PROTECTED]
 
 Kind Regards,
 Mirjam Kuehne
 Internet Society (ISOC)


-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Mark Smith

On Mon, 14 Jan 2008 18:43:12 -0500
William Herrin [EMAIL PROTECTED] wrote:

 
 On Jan 14, 2008 5:25 PM, Joe Greco [EMAIL PROTECTED] wrote:
   So users who rarely use their connection are more profitable to the ISP.
 
  The fat man isn't a welcome sight to the owner of the AYCE buffet.
 
 Joe,
 
 The fat man is quite welcome at the buffet, especially if he brings
 friends and tips well.

But the fat man isn't allowed to take up residence in the restaurant
and continuously eat - he's only allowed to be there in bursts, the way
we used to be able to assume people would use the networks they're
connected to. Left-running P2P is the fat man never leaving and never
stopping eating.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Mark Smith

On Tue, 15 Jan 2008 17:56:30 +0900
Adrian Chadd [EMAIL PROTECTED] wrote:

 
 On Tue, Jan 15, 2008, Mark Smith wrote:
 
  But the fat man isn't allowed to take up residence in the restaurant
  and continously eat - he's only allowed to be there in bursts, like we
  used to be able to assume people would use networks they're connected
  to. Left running P2P is the fat man never leaving and never stopping
  eating.
 
 ffs, stop with the crappy analogies.
 

They're accurate. No network - including the POTS, or the road networks
you drive your car on - is built to handle 100% concurrent use by all
the devices that can access it. Data networks have, for many, many
years, been built on the assumption that the majority of attached
devices will only occasionally use them.

If you want _guaranteed_ bandwidth to your house, 24x7, ask your telco
for the actual pricing for guaranteed Mbps - you'll find that the price
per Mbps is around an order of magnitude higher than what your
residential or SOHO broadband Mbps is priced at. That's because for
sustained load, the network costs are typically an order of magnitude
higher.

 The internet is like a badly designed commodity network. Built increasingly
 cheaper to deal with market pressures and unable to shift quickly to shifting
 technologies.
 

That's because an absolute and fundamental design assumption is
changing - P2P changes the traffic profile from occasional bursty
traffic to a constant load. I'd be happy to build a network that can
sustain high-throughput P2P from all attached devices concurrently - it
isn't hard - but it's costly in bandwidth and equipment. I'm not
against the idea of P2P at all, because it distributes load for popular
content around the network rather than creating the slashdot effect.
It's the customers that are the problem - they won't pay the $1000 per
Mbps per month I'd need to be able to do it...

TCP is partly to blame. It attempts to suck up as much bandwidth as is
available. That's great if you're attached to a network whose usage is
bursty, because if the network is idle you get to use all of its
available capacity and get the best network performance possible.
However, if your TCP is competing with everybody else's TCP and you're
expecting idle-network TCP performance, you'd better pony up money for
more total network bandwidth, or lower your throughput expectations.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Assigning IPv6 /48's to CPE's?

2008-01-04 Thread Mark Smith

On Thu, 3 Jan 2008 12:53:24 -0500
William Herrin [EMAIL PROTECTED] wrote:

 
 On Jan 3, 2008 11:25 AM, Tim Franklin [EMAIL PROTECTED] wrote:
  Only assuming the nature of your mistake is 'turn it off'.
 

 
 Do you mean to tell me there's actually such a thing as a network
 engineer who creates and uses a test plan every single time he makes a
 change to every firewall he deals with? I thought such beings were a
 myth, like unicorns and space aliens!


I've had to do this. When there are SLAs / money involved for failure,
you can't cowboy anything - because you're risking your job if you do.
The funny thing is, going through a rehearsal before you make any change
significantly increases your chances of success - and now I prefer to
plan my changes in detail before I do them, as it makes me look a lot
better at my job.

A firewall change in particular, because of the security consequences
of a mistake, justifies planning and review before implementation.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2008-01-01 Thread Mark Smith

On Tue, 1 Jan 2008 12:57:17 +0100
Iljitsch van Beijnum [EMAIL PROTECTED] wrote:

 
 On 31 dec 2007, at 1:24, Mark Smith wrote:
 
  Another idea would be to give each non-/48 customer the
  first /56 out of each /48.
 
 Right, so you combine the downsides of both approaches.
 
 It doesn't work when ARIN does it:
 

Well, ARIN aren't running the Internet route tables. If they were, I'd
assume they'd force AS6453 to do the right thing and aggregate their
address space. 

 *  24.122.32.0/20   4.68.1.166   0 0 3356 6453 11290 i
 *  24.122.48.0/20   4.68.1.166   0 0 3356 6453 11290 i
 *  24.122.64.0/20   4.68.1.166   0 0 3356 6453 11290 i
 *  24.122.80.0/20   4.68.1.166   0 0 3356 6453 11290 i
 *  24.122.96.0/20   4.68.1.166   0 0 3356 6453 11290 i
 *  24.122.112.0/20  4.68.1.166   0 0 3356 6453 11290 i
 *  24.122.128.0/20  4.68.1.166   0 0 3356 6453 11290 i
 *  24.122.144.0/20  4.68.1.166   0 0 3356 6453 11290 i
 *  24.122.160.0/20  4.68.1.166   0 0 3356 6453 11290 i
 *  24.122.176.0/20  4.68.1.166   0 0 3356 6453 11290 i
 *  24.122.192.0/19  4.68.1.166   0 0 3356 6453 11290 i
 *  24.122.224.0/20  4.68.1.166   0 0 3356 6453 11290 i
 *  24.122.240.0/20  4.68.1.166   0 0 3356 6453 11290 i
 
 And it's unlikely to work here: for those standard size blocks, you  
 really don't want any per-user config: you want those to be assigned  
 automatically. But for the /48s you do need per-user config, if only  
 that this user gets a /48. So these two block sizes can't  
 realistically come from the same (sub-) range.

Maybe I'm not understanding this correctly. Are you saying that
customers who have a /56 would get dynamic ones, i.e. a different one
each time they reconnect? If they've got a routed downstream topology,
with multiple routers and subnets (because of course, they've got 256
of them), I don't think customers will be very happy about having to
renumber the top /56 bits if, for example, their DSL line drops sync
and they get a different /56.

Static assignments of /56s to customers make sense to me, and that's
the assumption I made in the addressing scheme I suggested. Once you go
static with /56s, you may as well make it easy for both yourself and
the customer to move to a /48 that encompasses the original /56 (or
configure the whole /48 for them from the outset).
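
A quick way to see why that widening is painless, using Python's
ipaddress module and made-up documentation prefixes:

import ipaddress

customer_56 = ipaddress.ip_network("2001:db8:1234::/56")  # first /56 of the /48
customer_48 = ipaddress.ip_network("2001:db8:1234::/48")  # the later, wider allocation
lan = ipaddress.ip_network("2001:db8:1234:42::/64")       # one of the customer's existing subnets

# The /56 (and every /64 numbered out of it) already sits inside the /48,
# so widening the delegation doesn't renumber anything the customer built.
print(customer_56.subnet_of(customer_48))                      # True
print(lan.subnet_of(customer_56), lan.subnet_of(customer_48))  # True True
print(customer_56.supernet(new_prefix=48) == customer_48)      # True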

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-31 Thread Mark Smith

On Mon, 31 Dec 2007 13:18:41 -0800
Joel Jaeggli [EMAIL PROTECTED] wrote:

 
 Mark Smith wrote:
 
  
  Another idea would be to give each non-/48 customer the
  first /56 out of each /48. If you started out with a /30 or /31 RIR block , 
  by
  the time you run out of /48s, you can either start using up the
  subsequent /56s out of the first /48, as it's likely that the first /56
  customer out of the /48 would have needed the /48 by that time.
 
 As stated, that approach has really negative implications for the number
 of routes you carry in your IGP.
 

Well, for 120K+ customers, I doubt you're using an IGP for anything
much more than BGP loopbacks - and you'd have to be aggregating routes
at a higher layer in your routing hierarchy anyway, to cope with 120K
routes, regardless of what method you use to dole out /48s or /56s to
end-sites.


  Alternatively you might have become more comfortable with giving each
  customer a /48, and wouldn't require any of them to renumber - they'd
  just have to shorten their prefix length.
  
  Regards,
  Mark.
  
 


-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Assigning IPv6 /48's to CPE's?

2007-12-31 Thread Mark Smith

On Mon, 31 Dec 2007 10:18:08 -0600 (CST)
Joe Greco [EMAIL PROTECTED] wrote:

 
  I see there is a long thread on IPv6 address assignment going, and I
  apologize that I did not read all of it, but I still have some unanswered
  questions.
 
snip
 Anyways, I suggest you run over and read 
 
 http://www.6net.org/publications/standards/draft-vandevelde-v6ops-nap-01.txt
 

That ended up, after a number of revisions, being published as RFC4864
- Local Network Protection for IPv6.

 as it is useful foundation material to explain IPv6 strategies and how they
 differ from IPv4.
 
 ... JG
 -- 
 Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
 We call it the 'one bite at the apple' rule. Give me one chance [and] then I
 won't contact you again. - Direct Marketing Ass'n position on e-mail 
 spam(CNN)
 With 24 million small businesses in the US alone, that's way too many apples.


-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-31 Thread Mark Smith

On Tue, 1 Jan 2008 10:27:50 +1030
Mark Smith [EMAIL PROTECTED] wrote:

 
 On Mon, 31 Dec 2007 13:18:41 -0800
 Joel Jaeggli [EMAIL PROTECTED] wrote:
 
  
  Mark Smith wrote:
  
   
   Another idea would be to give each non-/48 customer the
   first /56 out of each /48. If you started out with a /30 or /31 RIR block 
   , by
   the time you run out of /48s, you can either start using up the
   subsequent /56s out of the first /48, as it's likely that the first /56
   customer out of the /48 would have needed the /48 by that time.
  
  As stated, that approach has really negative implications for the number
  of routes you carry in your IGP.
  
 
 Well, for 120K+ customers, I doubt you're using an IGP for anything
 much more than BGP loopbacks - and you'd have to be aggregating routes
 at a higher layer in your routing hierarchy anyway, to cope with 120K
 routes, regardless of what method you use to dole out /48s or /56s to
 end-sites.


It being New Year's Day and my brain not working right yet ... you'd
probably divide your RIR block up across your PoPs, and could then use
this technique within each PoP, with the PoP being the route
aggregation boundary.
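
A rough sketch of that hierarchy with Python's ipaddress module - the
/32 and the per-PoP /40 are made-up example sizes, not a recommendation:

import ipaddress

rir_block = ipaddress.ip_network("2001:db8::/32")

pops = list(rir_block.subnets(new_prefix=40))      # one /40 per PoP -> 256 PoPs
print(len(pops), pops[0])                          # 256 2001:db8::/40

per_pop_48s = list(pops[0].subnets(new_prefix=48))
print(len(per_pop_48s))                            # 256 /48s behind one /40 aggregate

Each PoP then only announces its single aggregate towards the core.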
 
   Alternatively you might have become more comfortable with giving each
   customer a /48, and wouldn't require any of them to renumber - they'd
   just have to shorten their prefix length.
   
   Regards,
   Mark.
   
  
 
 
 -- 
 
 Sheep are slow and tasty, and therefore must remain constantly
  alert.
- Bruce Schneier, Beyond Fear


-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-30 Thread Mark Smith

On Sun, 30 Dec 2007 12:08:34 +0100
Jeroen Massar [EMAIL PROTECTED] wrote:

 Scott Weeks wrote:
 [..]
  I have about 100K DSL customers at this time and most all are households.
  65K wouldn't cover that.  At this point, I doubt that I'd require much
  more than just asking and making sure the person is understanding what
  they're asking for.  Mostly, that'd be the leased line customers.
 
 Thus why didn't you request a larger prefix from ARIN then?
 Clearly you can justify it.
 
 Then again, if you are going to provide /56's to home users, nobody will
 think you are a bad person and most people will be quite happy already.
 
 In your case I would then reserve (probably topdown) /48's, for the
 larger sites/businesses and start allocating bottom-up for /56's to
 endusers.
 

Another idea would be to give each non-/48 customer the first /56 out
of each /48. If you started out with a /30 or /31 RIR block, then by
the time you run out of /48s you can either start using up the
subsequent /56s out of the first /48 - as it's likely that the first-/56
customer out of that /48 would have needed the /48 by that time - or,
alternatively, you might have become more comfortable with giving each
customer a /48 and wouldn't require any of them to renumber - they'd
just have to shorten their prefix length.
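
The allocation arithmetic, sketched with Python's ipaddress module and
the documentation prefix (a real deployment would substitute its own
RIR block):

import ipaddress

provider = ipaddress.ip_network("2001:db8::/32")

# Hand each non-/48 customer the *first* /56 of its own /48, keeping the
# rest of that /48 in reserve for them.
for i, block_48 in enumerate(provider.subnets(new_prefix=48)):
    first_56 = next(block_48.subnets(new_prefix=56))
    print(f"customer {i}: {first_56}  (reserved /48: {block_48})")
    if i == 2:
        break                 # just show the first few

If a customer later justifies the /48, it's already sitting around their
/56, so they only shorten the prefix rather than renumber.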

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-29 Thread Mark Smith

On Sat, 29 Dec 2007 15:14:25 -0500
Marshall Eubanks [EMAIL PROTECTED] wrote:

 
 On Dec 27, 2007, at 11:19 PM, Mark Smith wrote:
 
 
  On Fri, 28 Dec 2007 12:57:45 +0900
  Randy Bush [EMAIL PROTECTED] wrote:
 
  Ever calculated how many Ethernet nodes you can attach to a  
  single LAN
  with 2^46 unicast addresses?
 
  you mean operationally successfully, or just for marketing glossies?
 
 
  Theoretically. What I find a bit hard to understand is peoples'
  seemingly complete acceptance of the 'gross' amount of ethernet  
  address
  space there is available with 46 bits available for unicast addressing
  on a single LAN segment, yet confusion and struggle over the  
  allocation
  of additional IPv6 bits addressing bits for the same purpose - the
  operational convenience of having addressing work out of the box or
  be simpler to understand and easier to work with.
 
  Once I realised that IPv6's fixed sized node addressing model was
  similar to Ethernet's, I then started wondering why Ethernet was like
  it was - and then found a paper that explains it :
 
  48-bit Absolute Internet and Ethernet Host Numbers
  http://ethernethistory.typepad.com/papers/HostNumbers.pdf
 
 
 Would it be possible to find the even part of this paper ? This  
 version only has the odd numbered pages.
 

Hmm, you're right. The version I originally read was from somewhere
else, and that one was complete. I figured this one was more original,
as it's on one of the paper's authors' websites, so I kept this one and
deleted my original electronic copy in its favour. I'll try to find the
other copy.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-29 Thread Mark Smith

On Sat, 29 Dec 2007 15:14:25 -0500
Marshall Eubanks [EMAIL PROTECTED] wrote:

 
 On Dec 27, 2007, at 11:19 PM, Mark Smith wrote:
 
 
  On Fri, 28 Dec 2007 12:57:45 +0900
  Randy Bush [EMAIL PROTECTED] wrote:
 
 
 
 Would it be possible to find the even part of this paper ? This  
 version only has the odd numbered pages.
 

Here's where I got the version I first read. The full text/pdf is
available if you have or create yourself an ACM login :

http://portal.acm.org/citation.cfm?id=800081.802680



Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-28 Thread Mark Smith

On Thu, 27 Dec 2007 21:50:01 -0500
Robert E. Seastrom [EMAIL PROTECTED] wrote:

 
 
 Leo Bicknell [EMAIL PROTECTED] writes:
 
snip
 
 I'd really, really, really like to have DHCP6 on the Mac.  Autoconfig
 is not sufficient for this task unless there is some kind of trick you
 can do to make the eui-64 come out the same for both interfaces (don't
 think so).
 

I don't know if Macs can do bridging, but under Linux all you'd need
to do is create a bridge instance, assign the two or more interfaces to
the bridge, and have DHCPv6 use the bridge virtual interface.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-27 Thread Mark Smith

On Thu, 27 Dec 2007 11:27:13 +0100
Iljitsch van Beijnum [EMAIL PROTECTED] wrote:

 
 On 26 dec 2007, at 22:40, Leo Bicknell wrote:
 

snip

 
  It would be very interesting to me if the answer was it's moot
  because we're going to move to CGA's as a step forward; it would
  be equally interesting if the answer is CGA isn't ready for prime
  time / we can't deploy it for xyz reason, so IPv6 is less secure
  than IPv4 today and that's a problem.
 
 With IPv4, a lot of these features are developed by vendors and  
 (sometimes) later standardized in the IETF or elsewhere. With IPv6,  
 the vendors haven't quite caught up with the IETF standardization  
 efforts yet, so the situation is samewhat different. For instance,  
 SEND/CGA is excellent work, but we've only recently seen the first  
 implementations.
 
 Personally, I'm not a big fan of DHCPv6. First of all, from a  
 philosophical standpoint: I believe that stateless autoconfiguration  
 is a better model in most cases (although it obviously doesn't support  
 100% of the DHCP functionality). But apart from that, some of the  
 choices made along the way make DHCPv6 a lot harder to use than DHCP  
 for IPv4. Not only do you lack a default gateway (which is actually a  
 good thing for fate sharing reasons) but also a subnet prefix length  
 and any extra on-link prefixes.

I think it's interesting that CGAs are being discussed in the same
email as the one where you say you want to be able to express prefix
length in DHCPv6 - because I'm guessing you want that feature so you
can shorten node addresses.

One of the benefits of fixed-length node addresses, at the 64-bit
boundary, is that it has made CGA much simpler - the CGA designers knew
from the outset that they were dealing with a single, fixed-length
64-bit field to store the results of their crypto functions. If people
had different-length autoconfigured or DHCPv6-learned addresses, not
only would CGA have had to be designed to support those varying field
lengths, increasing its complexity (and therefore increasing the
opportunities for implementation-related security failure, a.k.a. bugs
with security consequences), it's also likely that CGA would have had
to incorporate parameter checks to prevent people enabling it when the
node address length they've chosen is too short for the security
strength CGA is designed to provide. If you don't perform those sorts
of checks, then too-weak CGA, because of too-short node addresses,
might give people a false sense of security - and I think that's far
worse than no security at all.
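
As a very loose illustration of the "hash the key material into a fixed
64-bit field" idea - this is not RFC 3972 CGA, which adds a modifier,
collision count, Sec bits and specific bit manipulation - a Python
sketch:

import hashlib

def toy_hash_based_iid(subnet_prefix: bytes, public_key: bytes) -> int:
    # Hash the prefix and key material and keep a fixed 64 bits for the
    # interface identifier; knowing the field is always 64 bits long is
    # what keeps this simple.
    digest = hashlib.sha256(subnet_prefix + public_key).digest()
    return int.from_bytes(digest[:8], "big")

prefix = bytes.fromhex("20010db800000000")        # 2001:db8::/64, example only
iid = toy_hash_based_iid(prefix, b"example public key bytes")
print(f"{iid:016x}")                              # the 64-bit interface identifier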

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-27 Thread Mark Smith

On Thu, 27 Dec 2007 12:11:54 +0100
Iljitsch van Beijnum [EMAIL PROTECTED] wrote:

 
 On 27 dec 2007, at 11:57, [EMAIL PROTECTED] wrote:
 
  Configure this stuff manually may work for a small number of
  customers. It is highly undesirable (and probably won't be considered
  at all) in an environment with, say, 1 million customers.
 
 Of course not. But RAs on a subnet with a million customers doesn't  
 work either, nor does DHCP on a subnet with a million customers.
 
 If we're talking about provisioning cable/DSL/FTTH users, that's a  
 completely different thing. Here, DHCPv6 prefix delegation to a CPE  
 which then provides configuration to hosts on its LAN side would be  
 the most appropriate option. However, the specifics of that model need  
 to be worked out as there are currently no ISPs and no CPEs that do  
 that, as far as I know.

I haven't had a chance to test it, but according to Deploying IPv6
Networks, IOS can support DHCPv6-based prefix delegation. It even
supports multiple downstream interfaces on the CPE - you configure the
subnet number you want on each of the interfaces, and the CPE will
automatically prepend the DHCPv6-PD-learned /48 to them and then start
announcing those prefixes in RAs out those interfaces.
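
The address arithmetic the CPE does is simple enough to sketch in Python
(the delegated prefix and per-interface subnet numbers are made-up
examples):

import ipaddress

delegated = ipaddress.ip_network("2001:db8:abcd::/48")   # learned via DHCPv6-PD

# Subnet numbers configured on each downstream interface of the CPE.
subnet_ids = {"lan0": 0x0, "lan1": 0x1, "dmz0": 0x10}

for ifname, sid in subnet_ids.items():
    prefix = ipaddress.ip_network((int(delegated.network_address) + (sid << 64), 64))
    print(ifname, prefix)      # the /64 announced in RAs on that interface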

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-27 Thread Mark Smith

On Thu, 27 Dec 2007 22:57:59 +0100
Iljitsch van Beijnum [EMAIL PROTECTED] wrote:

 
 On 27 dec 2007, at 20:26, Christopher Morrow wrote:
 

snip

 
 Taken to its extreme feature parity means a search and replace of  
 all IPv4 specs to make every instance of 32 bits 128 bits but not  
 changing anything else. That's not what IPv6 is.

Exactly.

IPv6 is similar enough to IPv4 that it's easier to learn than if it
were a completely new and unrelated protocol.

It's different enough that you need to take each of the concepts and
practices you know and have used for IPv4 for many years and try to
objectively evaluate whether they're still valid for IPv6. IPv6 has
features that IPv4 has never had, but which have existed in IPX and
AppleTalk since those protocols were designed many years ago. If people
have the time, learning about those protocols might help with learning
IPv6 more easily.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-27 Thread Mark Smith

On Thu, 27 Dec 2007 18:08:10 -0800
Scott Weeks [EMAIL PROTECTED] wrote:

 
 
 
 
 First, thanks everyone for the discussion.  I learned more from this than a 
 LOT of other discussions on IPv6.  I now have a plan and I didn't before...
 
 It looks to me that one really has to know his customer's needs to plan out 
 the allocation of IPv6 space.  That leads me to believe that a /56 is going 
 to work for everyone on this network because, at this time, only very, very 
 few of our largest customers might possibly have a need for more than 256 /64 
 subnets.  In fact, almost all household DSL customers here only have one LAN 
 and I could get away with /64s for them because they wouldn't know the 
 difference.  But in an effort to simplify the lives of the network folks here 
 I am thinking of a /56 for everyone and a /48 on request.
 

Out of curiosity, what form would a request for a /48 need to take? A
checkbox on the application form, or some sort of written
justification? Remember that with an initial RIR allocation of a /32
you've got 65K /48s ... so they're pretty cheap to give away.

 Now I just gotta wrap my brain around 4.7x10^21 addresses for each customer.  
 Absolutely staggering.
 

Ever calculated how many Ethernet nodes you can attach to a single LAN
with 2^46 unicast addresses? That's a staggering number too.
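
The back-of-the-envelope numbers, for anyone who hasn't run them
(Python, just arithmetic):

print(2 ** (48 - 32))   # /48s in a /32 RIR allocation: 65536
print(2 ** 46)          # unicast MACs on one LAN: 70368744177664, about 7 x 10^13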

Regards,
Mark.

 scott
 
 
 
 --- [EMAIL PROTECTED] wrote:
 
 From: Randy Bush [EMAIL PROTECTED]
 To: Joel Jaeggli [EMAIL PROTECTED]
 CC: nanog@merit.edu
 Subject: Re: v6 subnet size for DSL & leased line customers
 Date: Thu, 27 Dec 2007 13:19:27 +0900
 
 
  vendors, like everyone else, will do what is in their best interests.
  as i am an operator, not a vendor, that is often not what is in my best
  interest, marketing literature aside.  i believe it benefits the ops
  community to be honest when the two do not seem to coincide.
  If the ops community doesn't provide enough addresses and a way to use
  them then the vendors will do the same thing they did in v4.
 
 i presume you mean nat v6/v6.  this would be a real mess and i don't
 think anyone is contending it is desirable.  but this discussion is
 ostensibly operators trying to understand what is actually appropriate
 and useful for a class of customers, i believe those of the consumer,
 soho, and similar scale.
 
 to summarize the positions i think i have heard
   o one /64 subnet per device, but the proponent gave no estimate of the
 number of devices
   o /48
   o /56
   o /64
 the latter three all assuming that the allocation would be different if
 the site had actual need and justification.
 
 personally, i do not see an end site needing more than 256 subnets *by
 default*, though i can certainly believe a small minority of them need
 more and would use the escape clause.  so, if we, for the moment, stick
 to the one /64 per subnet religion, than a /56 seems sufficient for the
 default allocation.
 
 personally, i have a hard time thinking that any but a teensie minority,
 who can use the escape clause, need more than 256.  hence, i just don't
 buy the /48 position.
 
 personally, i agree that one subnet is likely to be insufficient in a
 large proportion of cases.  so keeping to the /64 per subnet religion, a
 /64 per site is insufficient for the default.
 
 still personally, i think the one /64 subnet per device is analogous to
 one receptacle per mains breaker, i.e. not sensible.
 
  there are three legs to the tripod
  network operator
  user
  equipment manufacturer
  They have (or should have) a mutual interest in:
  Transparent and automatic configuration of devices.
 
 as you have seen from chris's excellent post [0] on this one, one size
 does not fit all.  this is likely another worthwhile, but separate,
 discussion.
 
  The assignment of globally routable addresses to internet
  connected devices
 
 i suspect that there are folk out there who equate nat with security.  i
 suspect we both think them misguided.
 
  The user having some control over what crosses the boundry
  between their network and the operators.
 
 yup
 
 randy
 
 ---
 
 [0] - http://www.merit.edu/mail.archives/nanog/msg04887.html
 
 


-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-27 Thread Mark Smith

On Fri, 28 Dec 2007 12:57:45 +0900
Randy Bush [EMAIL PROTECTED] wrote:

  Ever calculated how many Ethernet nodes you can attach to a single LAN
  with 2^46 unicast addresses?
 
 you mean operationally successfully, or just for marketing glossies?
 

Theoretically. What I find a bit hard to understand is people's
seemingly complete acceptance of the 'gross' amount of Ethernet address
space - 46 bits available for unicast addressing on a single LAN
segment - yet confusion and struggle over the allocation of additional
IPv6 addressing bits for the same purpose: the operational convenience
of having addressing work out of the box, or be simpler to understand
and easier to work with.

Once I realised that IPv6's fixed-size node addressing model was
similar to Ethernet's, I started wondering why Ethernet was the way it
was - and then found a paper that explains it:

48-bit Absolute Internet and Ethernet Host Numbers
http://ethernethistory.typepad.com/papers/HostNumbers.pdf

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-27 Thread Mark Smith

On Fri, 28 Dec 2007 13:36:56 +0900
Adrian Chadd [EMAIL PROTECTED] wrote:

 
 On Fri, Dec 28, 2007, Mark Smith wrote:
 
  Once I realised that IPv6's fixed sized node addressing model was
  similar to Ethernet's, I then started wondering why Ethernet was like
  it was - and then found a paper that explains it :
  
  48-bit Absolute Internet and Ethernet Host Numbers
  http://ethernethistory.typepad.com/papers/HostNumbers.pdf
  
 
 Question. Whats the ethernet 48-bit MAC space usage atm? Does anyone
 have a graph showing an E-Day? :)
 
 

Apparently there's a foreseeable one, hence EUI-64s. Novell will have
to extend their IPX node addressing to 64 bits.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-23 Thread Mark Smith

On Sun, 23 Dec 2007 12:54:34 -0500
Ross Vandegrift [EMAIL PROTECTED] wrote:

 
 On Sun, Dec 23, 2007 at 12:24:32AM +0100, Iljitsch van Beijnum wrote:
  First of all, there's RFC 3513:
  
  For all unicast addresses, except those that start with binary value  
  000, Interface IDs are required to be 64 bits long and to be  
  constructed in Modified EUI-64 format.
 
 Ahhh, thanks - that is the only thing I have ever seen that gives any
 reason for the /64 prefix.  Sadly, the document contains no
 compelling technical reasons for it - looks like it's done just so
 things are easy when generating interface IDs from ethernet addresses.
 

If operational simplicity of fixed length node addressing is a
technical reason, then I think it is a compelling one. If you've ever
done any reasonable amount of work with Novell's IPX (or other fixed
length node addressing layer 3 protocols (mainly all of them except
IPv4!)) you'll know what I mean.

I think Ethernet is also another example of the benefits of
spending/wasting address space on operational convenience - who needs
46/47 bits for unicast addressing on a single layer 2 network!? If I
recall correctly from bits and pieces I've read about early Ethernet,
the very first versions of Ethernet only had 16 bit node addressing.
They then decided to spend/waste bits on addressing to get
operational convenience - plug and play layer 2 networking.

If IPv6 can have the same operational simplicity as Ethernet,
and addressing bits can afford to be spent on it, then I think those
bits are well worth spending.

The /64-for-all-subnets idea is probably an example of the worse is
better principle. It's not ideal for everything, but because it's
general enough it works with everything, and it's a simpler, *single*
solution for everything - and that's what makes it better.

Regarding where the /64 boundary came from, from what I understand the
following Internet Drafts are its origin:

8+8 - An Alternate Addressing Architecture for IPv6
http://arneill-py.sacramento.ca.us/ipv6mh/draft-odell-8+8-00.txt

GSE - An Alternate Addressing Architecture for IPv6
http://arneill-py.sacramento.ca.us/ipv6mh/draft-ipng-gseaddr-00.txt

  Second, we currently have two mechanisms to configure IPv6 hosts with  
  an address: router advertisements and DHCPv6. The former has been  
  implemented in ALL IPv6 stacks but doesn't work if your subnet isn't  
  a /64.
 
 But the protocols don't imply or require this.  All of the messages
 used in stateless autoconfig will behave as expected with longer prefix
 lengths.  So it seems that because the interface identifier has to be
 64-bits, stateless autoconfig is unnecessarily crippled.
 
 For kicks I just tried RAs with a /96 prefix.  Linux 2.6 checks and
 enforces the requirement from RFC3513, though it'd be trivial to
 change.  But I'm guessing other vendors enforce this as well.
 
 -- 
 Ross Vandegrift
 [EMAIL PROTECTED]
 
 The good Christian should beware of mathematicians, and all those who
 make empty prophecies. The danger already exists that the mathematicians
 have made a covenant with the devil to darken the spirit and to confine
 man in the bonds of Hell.
   --St. Augustine, De Genesi ad Litteram, Book II, xviii, 37


-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-23 Thread Mark Smith

On Sun, 23 Dec 2007 19:46:26 +0100
Florian Weimer [EMAIL PROTECTED] wrote:

 
 * Joe Greco:
 
  Right now, we might say wow, 256 subnets for a single end-user... 
  hogwash! and in years to come, wow, only 256 subnets... what were we 
  thinking!?
 
  Well, what's the likelihood of the only 256 subnets problem?
 
 There's a tendency to move away from (simulated) shared media networks.
 One host per subnet might become the norm.

Or possibly Peter M. Gleitz and Steven M. Bellovin's idea of

Transient Addressing for Related Processes: Improved Firewalling by Using IPV6 
and Multiple Addresses per Host

http://www.cs.columbia.edu/~smb/papers/tarp/tarp.html

A /64 per host is probably not necessary; however, if an end-site has
a /48, that's 65K /64s - one per host - so it likely wouldn't be much
of a problem for most sites ... certainly not my house, currently or in
the foreseeable future, nor my current employer, or most employers I've
worked for in the past.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-23 Thread Mark Smith

On Sun, 23 Dec 2007 17:26:12 -0600 (CST)
Joe Greco [EMAIL PROTECTED] wrote:

  If operational simplicity of fixed length node addressing is a
  technical reason, then I think it is a compelling one. If you've ever
  done any reasonable amount of work with Novell's IPX (or other fixed
  length node addressing layer 3 protocols (mainly all of them except
  IPv4!)) you'll know what I mean.
  
  I think Ethernet is also another example of the benefits of
  spending/wasting address space on operational convenience - who needs
  46/47 bits for unicast addressing on a single layer 2 network!? If I
  recall correctly from bits and pieces I've read about early Ethernet,
  the very first versions of Ethernet only had 16 bit node addressing.
  They then decided to spend/waste bits on addressing to get
  operational convenience - plug and play layer 2 networking.
 
 The difference is that it doesn't cost anything.  There are no RIR fees,
 there is no justification.  You don't pay for, or have to justify, your 
 Ethernet MAC addresses.
 
 With IPv6, there are certain pressures being placed on ISP's not to be
 completely wasteful.


I don't think there is that difference at all. MAC address allocations
are paid for by the Ethernet chipset/card vendor, and I'm pretty sure
they have to justify their usage before they're allowed to buy another
block. I understand they're US$1250 an OUI, so something must have
happened to prevent somebody buying them all up to hoard them, creating
artificial scarcity, and then charging a market-sensitive price for
them rather than the flat rate they cost now. That's not really any
different from an ISP paying RIR fees and then indirectly passing those
costs on to its customers.


 This will compel ISP's to at least consider the issues, and it will most
 likely force users to buy into technologies that allow them to do what they
 want.  And inside a /64, you have sufficient space that there's probably
 nothing you can't do.  :-)
 
 ... JG
 -- 
 Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
 We call it the 'one bite at the apple' rule. Give me one chance [and] then I
 won't contact you again. - Direct Marketing Ass'n position on e-mail 
 spam(CNN)
 With 24 million small businesses in the US alone, that's way too many apples.


-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-23 Thread Mark Smith

On Sun, 23 Dec 2007 19:27:55 -0600 (CST)
Joe Greco [EMAIL PROTECTED] wrote:

I think Ethernet is also another example of the benefits of
spending/wasting address space on operational convenience - who needs
46/47 bits for unicast addressing on a single layer 2 network!? If I
recall correctly from bits and pieces I've read about early Ethernet,
the very first versions of Ethernet only had 16 bit node addressing.
They then decided to spend/waste bits on addressing to get
operational convenience - plug and play layer 2 networking.
   
   The difference is that it doesn't cost anything.  There are no RIR fees,
   there is no justification.  You don't pay for, or have to justify, your 
   Ethernet MAC addresses.
   
   With IPv6, there are certain pressures being placed on ISP's not to be
   completely wasteful.
  
  I don't think there is that difference at all. MAC address allocations
  are paid for by the Ethernet chipset/card vendor, and I'm pretty sure
  they have to justify their usage before they're allowed to buy another
  block. I understand they're US$1250 an OUI, so something must have
  happened to prevent somebody buying them all up to hoard them, creating
  artificial scarcity, and then charging a market sensitive price for
  them, rather than the flat rate they cost now. That's not really any
  different to an ISP paying RIR fees, and then indirectly passing those
  costs onto their customers.
 
 MAC address allocations are paid for by the Ethernet chipset/card vendor.
 
 They're not paid for by an ISP, or by any other Ethernet end-user, except
 as a pass-through, and therefore it's considered a fixed cost.  There are
 no RIR fees, and there is no justification.  You buy a gizmo with this
 RJ45 and you get a unique MAC.  This is simple and straightforward.  If
 you buy one device, you get one MAC.  If you buy a hundred devices, you
 get one hundred MAC's.  Not 101, not 99.  This wouldn't seem to map well
 at all onto the IPv6 situation we're discussing.
 

How many ISP customers pay RIR fees? Near enough to none, if not none.
I never have when I've been an ISP customer. Why are you pretending
they do? I think you're taking an end-user perspective when discussing
Ethernet but an RIR-fee-paying ISP position when discussing IPv6 subnet
allocations. That's not a valid argument, because you've changed your
viewpoint on the situation to suit your position.

Anyway, the point I was making was purely that if you can afford to
spend the bits, because you have them (as you do in Ethernet by design,
as you do in IPv6 by design, but as you *don't* in IPv4 by design), you
can spend them on operational convenience for both the RIR-fee-paying
entity *and* the end-user/customer. Unnecessary complexity is
*unnecessary*, and your customers won't like paying for it if they
discover you've chosen to create it, either on purpose or through
naivety.

 With an IPv6 prefix, it is all about the prefix size.  Since a larger 
 allocation may cost an ISP more than a smaller allocation, an ISP may 
 decide that they need to charge a customer who is allocated a /48 more 
 than a customer who is allocated a /64.
 
 I don't pay anyone anything for the use of the MAC address I got on this
 free ethernet card someone gave me, yet it is clearly and unambiguously
 mine (and only mine) to use.  Does that clarify things a bit?
 
 If you are proposing that RIR's cease the practice of charging different
 amounts for different allocation sizes, please feel free to shepherd that
 through the approvals process, and then I will certainly agree that there
 is no longer a meaningful cost differential for the purposes of this
 discussion.  Otherwise, let's not pretend that they're the same thing, 
 since they're clearly not.
 
 ... JG
 -- 
 Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
 We call it the 'one bite at the apple' rule. Give me one chance [and] then I
 won't contact you again. - Direct Marketing Ass'n position on e-mail 
 spam(CNN)
 With 24 million small businesses in the US alone, that's way too many apples.


-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-23 Thread Mark Smith

On Mon, 24 Dec 2007 09:58:44 +0900
Randy Bush [EMAIL PROTECTED] wrote:

 
  There's a tendency to move away from (simulated) shared media networks.
  One host per subnet might become the norm.
 
 and, with multiple addresses per interface, the home user surely _might_
 need a /32.
 

What prompted you to suggest that? Trolling maybe?

 
 might does not make right
 

Neither does being ridiculous. 

 randy


-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-22 Thread Mark Smith

On Sat, 22 Dec 2007 12:53:52 -0800
Christopher Morrow [EMAIL PROTECTED] wrote:

 
 On Dec 22, 2007 12:23 PM, Ross Vandegrift [EMAIL PROTECTED] wrote:
 
  On Fri, Dec 21, 2007 at 01:33:15PM -0500, Deepak Jain wrote:
   For example... Within one's own network (or subnet if you will) we can
   absorb all the concepts of V4 today and have lots of space available.
   For example... for the DMZ of a business... Why not give them 6 bits
   (/122?) are we anticipating topology differences UPSTREAM from the
   customers that can take advantage of subnet differences between /64 and
   /56 ?
 
  I am confused on this point as well.  IPv6 documents seem to assume
  that because auto-discovery on a LAN uses a /64, you always have to
  use a /64 global-scope subnet.  I don't see any technical issues that
  require this though.  ICMPv6 is capable of passing info on prefixes of
  any length -  prefix length is a plain old 8bit field.
 
 
 Uhm, so sure the spec might be able to do something different than /64
 but most equipment I've used only does auto-conf if the prefix is a
 /64 :( Somewhere along the path to ipng we got reverted to classful
 addressing again :(
 

Not really. Classful IPv4 defined both an addressing structure *and* an
algorithm for matching destinations against the route table entries
(i.e. classful forwarding won't match on a default route if the router
knows at least one prefix within a classful network).

IPv6 uses the longest-match rule regardless of any addressing
structure, and only uses structure in a few portions of the total IPv6
address space, for the operation of things like DHCPv6 and address
autoconfiguration. A change in IPv6 addressing structure won't involve
a change in the route table matching algorithm.
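
A minimal longest-prefix-match sketch in Python - illustrative only,
with made-up documentation prefixes (real routers use tries or TCAMs,
not a linear scan):

import ipaddress

table = [
    ipaddress.ip_network("::/0"),              # default route
    ipaddress.ip_network("2001:db8::/32"),
    ipaddress.ip_network("2001:db8:abcd::/48"),
]

def lookup(destination):
    addr = ipaddress.ip_address(destination)
    matches = [net for net in table if addr in net]
    return max(matches, key=lambda net: net.prefixlen)   # longest match wins

print(lookup("2001:db8:abcd::1"))   # 2001:db8:abcd::/48
print(lookup("2001:db8:ffff::1"))   # 2001:db8::/32
print(lookup("fd00::1"))            # ::/0 - the default still matches,
                                    # unlike classful IPv4 forwarding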

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-21 Thread Mark Smith

On Fri, 21 Dec 2007 08:31:07 -0800
Owen DeLong [EMAIL PROTECTED] wrote:

 
  The primary reasons I see for separate networks on v6 would include
  firewall policy (DMZ, separate departmental networks, etc)...
 
 This is certainly one reason for such things.
 
  And I'm having some trouble envisioning a residential end user that
  honestly has a need for 256 networks with sufficiently differently
  policies.  Or that a firewall device can't reasonably deal with those
  policies even on a single network, since you mainly need to protect
  devices from external access.
 
 Perhaps this is a lack of imagination.
 
 Imagine that your ethernet-bluetooth gateway wants to treat the  
 bluetooth
 and ethernet segments as separate routed segments.
 
snip

I think this is also showing a bit of a lack of imagination:

 I think it makes sense to assign as follows:
 
 /64 for the average current home user.
 /56 for any home user that wants more than one subnet
 /48 for any home user that can show need.
 

Well, it doesn't really make sense to me - I think it's far more
conservative than it has to be. Even the time spent considering and
evaluating the checkboxes for the last two options is time that could
be better spent on something else, and probably costs more than the
IPv6 address space (and associated costs) saved by being conservative
with the allocations.

I'd be interested to know *why* that makes sense to you - the justifications.

I'd also be interested to know what you'd *want* if you were asked how
you'd like to structure IPv6 addressing, if you didn't have any history
of having to be conservative with IPv4 addressing. IOW, imagine IPv4
didn't exist, and therefore your thinking about IPv6 isn't influenced
by your history with IPv4.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: v6 subnet size for DSL leased line customers

2007-12-20 Thread Mark Smith

On Thu, 20 Dec 2007 12:26:43 +0900
Randy Bush [EMAIL PROTECTED] wrote:

 
  I work on a network with 100K+ DSL folks and 200+ leased line
  customers, plus some other stuff.  The leased line customers are
  increasing dramatically.  I should plan for a /64 for every DSL
  customer and a /48 for every leased line customer I expect over the
  next 5-7 years?
 
 why not a /56 by default for both, and give them an opportunity to
 justify more?
 

Why not a /48 for all? IPv6 address space is probably cheap enough that
even just the time cost of dealing with the occasional justification
for moving from a /56 to a /48 might be more expensive than just giving
everybody a /48 from the outset. Then there's the op-ex cost of
dealing with two end-site prefix lengths - not a big cost, but a
constant additional cost none the less.


 a /64 is a bit old-think unless you are having cost issues getting your
 space from above.

Agree.
 
Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: European ISP enables IPv6 for all?

2007-12-18 Thread Mark Smith

On Tue, 18 Dec 2007 15:49:18 GMT
Paul Ferguson [EMAIL PROTECTED] wrote:

 
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 - -- Christopher Morrow [EMAIL PROTECTED] wrote:
 
 On Dec 17, 2007 9:59 PM, Paul Ferguson [EMAIL PROTECTED] wrote:
 
  And in fact, threat propagation in a v6 world may actually
  be worse than expected, and naivet_ may actually contribute to
  a larger-scale attack, given the statistical possibility of
  potentially more victims.
 
 
 naivete because folks believe the 'v6 is more secure' propoganda? or
 some other reason?
 
 Yes. :-)
 
  Address space size, and proximity, may well be red herrings in
  this discussion.
 
 can you expand on this some?
 
 Someone else mentioned self-infliction in this thread, and that's
 spot on.
 
 Over the course of the past year or more, we've seen less & less
 scanning & self-propagating malware, and more & more self-infliction,
 either by being duped via social engineering or just by drive-by
 infections/compromises.
 
 As it stands, now -- and unless the pendulum swings the other way --
 the whole ...v6 address space is larger, thus it is much harder to
 scan and thus propagation of worms is much harder... train of thought
 is completely misguided.
 

It has been for quite a while - and so has the NAT/NAPT = IPv4
security idea, for exactly the same reason. Some people say IPv6 isn't
necessary because IPv4 NAT/NAPT is available, and when they say why,
it's commonly because of the supposed security of IPv4 NAT/NAPT that
would be lost when moving to no-NAT IPv6.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-22 Thread Mark Smith

On Sun, 21 Oct 2007 19:31:09 -0700
Joel Jaeggli [EMAIL PROTECTED] wrote:

 
 Steven M. Bellovin wrote:
 
  This result is unsurprising and not controversial.  TCP achieves
  fairness *among flows* because virtually all clients back off in
  response to packet drops.  BitTorrent, though, uses many flows per
  request; furthermore, since its flows are much longer-lived than web or
  email, the latter never achieve their full speed even on a per-flow
  basis, given TCP's slow-start.  The result is fair sharing among
  BitTorrent flows, which can only achieve fairness even among BitTorrent
  users if they all use the same number of flows per request and have an
  even distribution of content that is being uploaded.
  
  It's always good to measure, but the result here is quite intuitive.
  It also supports the notion that some form of traffic engineering is
  necessary.  The particular point at issue in the current Comcast
  situation is not that they do traffic engineering but how they do it.
  
 
 Dare I say it, it might be somewhat informative to engage in a priority
 queuing exercise like the Internet-2 scavenger service.
 
 In one priority queue goes all the normal traffic and it's allowed to
 use up to 100% of link capacity, in the other queue goes the traffic
 you'd like to deliver at lower priority, which given an oversubscribed
 shared resource on the edge is capped at some percentage of link
 capacity beyond which performance begins to noticably suffer... when the
 link is under-utilized low priority traffic can use a significant chunk
 of it. When high-priority traffic is present it will crowd out the low
 priority stuff before the link saturates. Now obviously if high priority
 traffic fills up the link then you have a provisioning issue.
 
 I2 characterized this as worst effort service. apps and users could
 probably be convinced to set dscp bits themselves in exchange for better
 performance of interactive apps and control traffic vs worst effort
 services data transfer.
 

And if you think about these P2P rate-limiting devices a bit more
broadly, all they really are is traffic-classification and QoS
policy-enforcement devices. If you can set DSCP bits with them for
certain applications and switch off the policy enforcement feature ...

 Obviously there's room for a discussion of net-neutrality in here
 someplace. However the closer you do this to the cmts the more likely it
 is to apply some locally relevant model of fairness.
 
  --Steve Bellovin, http://www.cs.columbia.edu/~smb
  
 


-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Why do some ISP's have bandwidth quotas?

2007-10-10 Thread Mark Smith

Hi Andrew,

On Mon,  8 Oct 2007 08:36:12 -0500 (CDT)
[EMAIL PROTECTED] (Andrew Odlyzko) wrote:

 
 As a point of information, Australia is one of the few places where
 the government collects Internet traffic statistics (which are hopefully
 trustworthy).  Pointer is at
 
http://www.dtc.umn.edu/mints/govstats.html
 
 (which also has a pointer to Hong Kong reports).  If one looks at the
 Australian Bureau of Statistics report for the quarter ended March 2007,
 we find that the roughly 3.8 M residential broadband subscribers in
 Australia were downloading an average of 2.5 GB/month, or about 10 Kbps
 on average (vs. about 20x that in Hong Kong).  While Australian Internet
 traffic had been growing very vigorously over the last few years (as
 shown by the earlier reports from the same source), growth has slowed
 down substantially, quite likely in response to those quotas.
 

These quotas have been around since the late 90s in .au, pretty much
since broadband became available. Their origins are probably the
dial-up plans that were also measured that way - although there were
also dial-up plans that were measured by minutes online.

The only significant change to plans is that rather than people who go
over their quota being charged a per-MB excess fee, the customer's
service is now rate-limited (shaped) down to a dialup-like speed, e.g.
64Kbps, resulting in a fixed monthly bill. This feature was introduced
something like 3 to 5 years ago and has spread widely across the
industry (and, as you say in one of your papers, people like it because
it's insurance against unexpected and variable bills).

There are various levels for these quotas. The 500MB ones are really
only aimed at people who don't want to spend more per month than they
do for dialup - they probably act as a taster of what you can do with
broadband, rather than being a real broadband plan. Common proper
broadband quota plan values are 4000 or 5000, 10000 or 12000, 20000,
30000, 40000, 60000 or 80000 MB per month.

Regards,
Mark.

 Andrew Odlyzko
 
 P.S.  The MINTS (Minnesota Internet Traffic Studies) project,
 
http://www.dtc.umn.edu/mints
 
 provides pointers to a variety of sources of traffic statistics, as
 well as some analyses.  Comments, and especially pointers to additional
 traffic reports, are eagerly solicited.
 
 
 
 
 
On Fri Oct  5, Mark Newton wrote:
 
   On Fri, Oct 05, 2007 at 01:12:35PM -0400, [EMAIL PROTECTED] wrote:
 
 As you say, 90GB is roughly .25Mbps on average.  Of course, like you 
 pointed
 out, the users actual bandwidth patterns are most likely not a straight
 line.  95%ile on that 90GB could be considerably higher.  But let's take 
 a
 conservative estimate and say that user uses .5Mbps 95%ile.  And lets say
 this is a relatively large ISP paying $12/Mb.  That user then costs that 
 ISP
 $6/month in bandwidth.  (I know, that's somewhat faulty logic, but how 
 else
 is the ISP going to establish a cost basis?)  If that user is only paying
 say $19.99/month for their connection, that leaves only $13.99 a month to
 pay for all the infrastructure to support that user, along with 
 personnel,
 etc all while still trying to turn a profit. 
 
   In the Australian ISP's case (which is what started this) it's rather
   worse.
 
   The local telco monopoly bills between $30 and $50 per month for access
   to the copper tail.
 
   So there's essentially no such thing as a $19.99/month connection here
   (except for short-lived flash-in-the-pan loss-leaders, and we all know
   how they turn out)
 
   So to run the numbers:  A customer who averages .25Mbit/sec on a tail 
 acquired
   from the incumbent requires --
 
  Port/line rental from the telco   ~ $50
  IP transit~ $ 6 (your number)
  Transpacific backhaul ~ $50 (I'm not making this up)
 
   So we're over a hundred bucks already, and haven't yet factored in the 
   overheads for infrastructure, personnel, profit, etc.  And those numbers
   are before sales tax too, so add at least 10% to all of them before
   arriving at a retail price.
 
   Due to the presence of a quota, our customers don't tend to average
   .25 Mbit/sec over the course of a month (we prefer to send the ones
   that do to our competitors :-).  If someone buys access to, say, 
   30 Gbytes of downloads per month, a few significant things happen:
 
- The customer has a clear understanding of what they've paid for,
  which doesn't encompass unlimited access to the Internet.  That
  tends to moderate their usage;
 
- Because they know they're buying something finite, they tend to 
  pick a package that suits their expected usage, so customers who 
  intend to use more end up paying more money;
 
- The customer creates their own backpressure against hitting their
  quota:  Once they've gone past it they're usually rate-limited to
  64kbps, 

Re: Why do some ISP's have bandwidth quotas?

2007-10-04 Thread Mark Smith

On Thu, 04 Oct 2007 15:50:11 +0100
Leigh Porter [EMAIL PROTECTED] wrote:

 
 Yeah, try buying bandwidth in Australia! The have a lot more water to
 cover ( and so potentially more cost and more profit to be made by
 monopolies) than well connected areas such as the US.
 

I don't necessarily think it is only that.

Customers on ADSL2+ usually get the maximum ADSL2+ speed
their line will support, so customers can have speeds of up to 24Mbps
downstream. Download and/or upload quotas have an effect
of smoothing out the backhaul impact those high bandwidth customers
could make. As they could use up all their quota in a very short time
at those speeds, and once they exceed their quota their speed is
shaped down to something like 64Kbps, it typically forces customers to
make their bandwidth usage patterns bursty rather than constant. That
effect, averaged across a backhaul region, helps avoid having to
provision backhaul bandwidth for a much higher constant load.
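
As a rough back-of-envelope (example figures only - a 20000 MB quota
and a best-case 24Mbps sync):

    quota_mbytes = 20000                     # example 20000 MB (20 GB) quota
    rate_mbps    = 24                        # best-case ADSL2+ downstream sync
    hours = quota_mbytes * 8 / rate_mbps / 3600
    print(round(hours, 1))                   # roughly 1.9 hours of flat-out downloading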

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: what a non-neutral net would look to the user

2007-09-22 Thread Mark Smith

On Sat, 22 Sep 2007 06:02:32 -1000
Randy Bush [EMAIL PROTECTED] wrote:

 
 http://isen.com/blog/uploaded_images/5z6vt4n-720249.jpg

IMS?

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Congestion control train-wreck workshop at Stanford: Call for Demos

2007-09-05 Thread Mark Smith

On Tue, 4 Sep 2007 04:19:32 +
[EMAIL PROTECTED] wrote:

 
 On Mon, Sep 03, 2007 at 09:37:46PM -0400, John Curran wrote:
  
  At 9:21 PM -0400 9/3/07, Joe Abley wrote:
  
  Is there a groundswell of *operators* who think TCP should be replaced, 
  and believe it can be replaced?
  
  Just imagine *that* switchover, with the same level of
  transition planning as we received with IPv6...
  ;-)
  /John
 
   well, if you let the IETF do it...
 

Well, if you're too lazy to participate ... you'll have to accept what
you're given. If you're not happy with that, get off your butt and do
something about it.


-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: 2M today, 10M with no change in technology? An informal survey.

2007-08-28 Thread Mark Smith

On Tue, 28 Aug 2007 15:11:52 -0400
William Herrin [EMAIL PROTECTED] wrote:

 
 On 8/27/07, Deepak Jain [EMAIL PROTECTED] wrote:
  an MSFC2 can
  hold 256,000 entries in its FIB of which 12,000 are reserved for
  Multicast. I do not know if the 12,000 can be set to serve the general
  purpose.
 
  The MSFC2 therefore can server 244,000 routes without uRPF turned on.
 
snip
 
 Now, my request for help:
 
 I have a leaf node on the DFZ handled by a pair of Sup2's
 (pfc2/msfc2), two transit providers and several peers. My focus is
 very heavily domestic, and I'd like to delay my upgrade. I'd like to
 buy some time by aggregating the incoming APNIC region prefixes
 (http://www.iana.org/assignments/ipv4-address-space) into the
 following FIB entries:
 
 58.0.0.0/7
 60.0.0.0/7
 116.0.0.0/6
 120.0.0.0/6
 124.0.0.0/7
 126.0.0.0/8
 202.0.0.0/7
 210.0.0.0/7
 218.0.0.0/7
 220.0.0.0/7
 222.0.0.0/8
 
 Can anyone suggest how to program that into the router or refer me to
 the URL of the correct documentation at Cisco's site?
 

Probably better over at cisco-nsp, however I'd expect you'd use the
aggregate-address prefix mask summary-only command to create
aggregates, while suppressing them from being announced to any other
BGP peer. I think that would still cause the more specifics to get
into the FIB of the aggregating router; however, there's a command
I've only come across recently, under the router bgp section, which
allows you to apply a route-map to routes as they go from the BGP RIB
to the FIB. You might be able to use that to stop the more specifics
getting into the FIB, with a route-map deny clause. The command is
table-map. I haven't used it myself, and the command reference says
it's only to set attributes, so YMMV. I haven't had success using
deny clauses in BGP attribute-setting route-maps, so it may not be
possible to use this command for this purpose at all.

Another way you might avoid the more specifics getting into
the FIB is to only accept a few known or selected large more specifics
from those ranges from your upstreams e.g. 3 or so, dropping the rest,
and use those select few to create the /6-8 aggregates you'll use
internally. Probably a bit more work than the table-map method, but if
that doesn't work, this is probably the way to do it.
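
Whichever way you do it, it's worth sanity checking that the
aggregates you end up with actually cover the /8s you intend to
collapse. A quick Python sketch (not router config; the prefix lists
are illustrative, copied from the ones quoted above):

    import ipaddress

    aggregates = [ipaddress.ip_network(p) for p in (
        "58.0.0.0/7", "60.0.0.0/7", "116.0.0.0/6", "120.0.0.0/6",
        "124.0.0.0/7", "126.0.0.0/8", "202.0.0.0/7", "210.0.0.0/7",
        "218.0.0.0/7", "220.0.0.0/7", "222.0.0.0/8")]

    slash8s = [ipaddress.ip_network("%d.0.0.0/8" % n)
               for n in (58, 59, 60, 61, 116, 117, 118, 119, 120, 121,
                         122, 123, 124, 125, 126, 202, 203, 210, 211,
                         218, 219, 220, 221, 222)]

    # Report any /8 not contained in one of the candidate aggregates.
    for net in slash8s:
        if not any(net.subnet_of(agg) for agg in aggregates):
            print("not covered:", net)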

(Looks like the coffee is just kicking in this morning - I've just come
up with another way just before I send this off.)

Or you could set up a route server upstream of your router with the
limited FIB and do the filtering and / or aggregation there. As it
isn't in the forwarding path, you could probably use a lower end
software Cisco platform with enough CPU and RAM just to do the BGP
processing e.g. probably something as low end as an 1800 series with
1GB of RAM (I'd suggest switching CEF off to save RAM) would be quite
fine to do that job. I'd even suggest an 800 series (400MHz PowerPCs
are no slouches), however they've only got a max of 256MB of RAM,
which probably isn't enough (for a bit of fun one day, I put the full route
table in a 128MB one, but it only got to 140 000 routes before it ran
out of RAM.)

HTH,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Network Parameters on Subscriber side feelings

2007-06-18 Thread Mark Smith

On Mon, 18 Jun 2007 13:02:55 +0100
Leigh Porter [EMAIL PROTECTED] wrote:

 
 [EMAIL PROTECTED] wrote:
is there any work or research on measuring method for  
  subscriber (customer)side feelings of network service? 
 

snip


 We have been doing a lot of work on how to measure the subscriber
 experience of a network. e2e ping delay actually is quite a good
 measure so long as you use it correctly. However we found that using
 tools such as iperf to take periodic measurments of TCP throughput, UDP
 throughput and packet loss was far more interesting.
 
 --
 Leigh Porter

You might also find this OWAMP (One Way Active Measurement Protocol)
AKA One Way Ping implementation to be quite useful for that sort of
thing.

http://e2epi.internet2.edu/owamp/
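
Real OWAMP does a lot more (a control protocol, authenticated modes,
etc.), but the basic idea is simple enough to sketch: the sender
timestamps each probe and the receiver computes arrival minus send
time, which only means anything if both clocks are NTP-synchronised.
A toy Python sketch (the address and port are placeholders):

    import socket, struct, time

    def send_probes(dst="192.0.2.1", port=8620, count=10):
        # Sender: one timestamped probe per second.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for seq in range(count):
            s.sendto(struct.pack("!Id", seq, time.time()), (dst, port))
            time.sleep(1)

    def receive_probes(port=8620):
        # Receiver: one-way delay per probe, assuming synchronised clocks.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", port))
        while True:
            data, peer = s.recvfrom(64)
            seq, sent = struct.unpack("!Id", data)
            print("seq %d one-way delay %.3f ms"
                  % (seq, (time.time() - sent) * 1000))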

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: UK ISPs v. US ISPs (was RE: Network Level Content Blocking)

2007-06-11 Thread Mark Smith

On Sat, 9 Jun 2007 17:38:20 -0400
[EMAIL PROTECTED] wrote:

 IMHO, unless it's something blatantly illegal such as kiddie porn and the 
 like I don't think content filtering is the responsibility of the ISP's. 
 Besides all of the conspiracy theories that are bound to surface, I think 
 forcing ISP's to block content is a bit like forcing car makers to police 
 what can be played on the radio.  I think that giving parents the option 
 of manually turning off porn sites would be an improvement.  Although 
 still not within the responsibility of the ISP they are in the best place 
 to implement such a technology.  However, I don't like the idea of a 
 mandatory global traffic filtering initiative.
 
 

I think in the home is the best place to implement the technology - a
power switch or BIOS password.

Here is a true analogy. My father worked for a TV station, so you'd
think we'd have the TV on all the time, yet right through up until
after I left high school, my parents wanted to limit my TV watching ...
significantly.

How did they do it ?

(a) they didn't buy a TV set and put it in my bedroom - the TV was in a
common area of the house i.e. the lounge and/or dining room

(b) they didn't allow me to watch the TV unsupervised

So what I don't understand is why parents put computers in their
childrens' bedrooms and don't supervise their children's Internet use.

Substituting a piece of filtering software that won't ever do as good a
job as a parent in enforcing parental responsibility is just bad
parenting in my opinion, and not the responsibility of government or
ISPs.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Security gain from NAT

2007-06-06 Thread Mark Smith

On Wed, 6 Jun 2007 09:45:01 -0700
David Conrad [EMAIL PROTECTED] wrote:

 
 On Jun 6, 2007, at 8:59 AM, Stephen Sprunk wrote:
  The thing is, with IPv6 there's no need to do NAT.
 
 Changing providers without renumbering your entire infrastructure.
 
 Multi-homing without having to know or participate in BGP games.
 
 (yes, the current PI-for-everybody allocation mindset would address  
 the first, however I have to admit I find the idea of every small  
 enterprise on the planet playing BGP games a bit ... disconcerting)
 
  However, NAT in v6 is not necessary, and it's still evil.
 
 Even ignoring the two above, NAT will be a fact of life as long as  
 people who are only able to obtain IPv6 addresses and need/want to  
 communicate with the (overwhelmingly IPv4 for the foreseeable future)  
 Internet.  Might as well get used to it.  I for one welcome our new  
 NAT overlords...


For all those people who think IPv4 NAT is quite fine, I challenge them
to submit RFCs to the IETF that resolve, without creating worse
or even more complicated problems, the list of problems here. All
the IPv6 RFCs do ... :

http://www.cs.utk.edu/~moore/what-nats-break.html

I've spent a number of years wondering why people seem to like NAT
(don't bother trying to convince me, my burnt stubs of fingers have
convinced me it's evil), and the only feasible conclusion I can come to
is that it is a chance to live out the invisible man fantasy they had
in their childhood. We've all had that fantasy I think, and we'd all
like to live it out ...

In IPv6, if you want to have a globally reachable service, you bind it
to a global address, and you protect the rest of the services/layer 4
protocol endpoints on that host that use global addresses via an SI
firewall, preferably on the host itself.

If you don't want to have a service globally reachable, then you don't
bind it to a global address - bind the service only to the ULA
addresses on the host. Then it'll be globally unreachable regardless of
whether there is an SI firewall active or not (although if people start
convincing upstreams and peers to accept their ULA routes external to
their own private network ... well, they made that choice, they'll have
to live with the security consequences)
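
A small sketch of the "bind only to the ULA" idea (the ULA address
here is made up - you'd bind to whatever ULA is actually configured
on the host):

    import socket

    srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    srv.bind(("fd3e:1db8:55aa::10", 8080))   # ULA only, not a global address or ::
    srv.listen(5)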



-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Google wants to be your Internet

2007-01-24 Thread Mark Smith

On Wed, 24 Jan 2007 02:07:06 -0800
Roland Dobbins [EMAIL PROTECTED] wrote:


 Of course I understand this, but I also understand that if one can  
 get away with RFC1918 addresses on a non-Internet-connected network,  
 it's not a bad idea to do so in and of itself; quite the opposite, in  
 fact, as long as one is sure one isn't buying trouble down the road.
 

The problem is that you can't be sure that if you use RFC1918 today you
won't be bitten by its non-uniqueness property in the future. When
you're asked to diagnose a fault with a device with the IP address
192.168.1.1, and you've got an unknown number of candidate devices
using that address, you really start to see the value in having
worldwide unique, but not necessarily publicly visible, addressing.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: CDN ISP (was: Re: Google wants to be your Internet)

2007-01-22 Thread Mark Smith

On Mon, 22 Jan 2007 04:15:44 -0600 (CST)
Gadi Evron [EMAIL PROTECTED] wrote:

 
 On Mon, 22 Jan 2007, Michal Krsek wrote:
 
 
 For broad-band ISPs, whose main goal is not to sell or re-sell transit 
 though...
 
  
  a) caching systems are not easy to implement and maintain (another system 
  for configuration)
  b) possible conflict with content owners
  c) they want to sell as much as possible of bandwidth
  d) they want to have their network fully transparent
 
 Only a, b apply. d I am not sure I understand.
 

I think (d) is about all network testing tools showing a perfect path,
which should isolate the fault to the remote web server itself, yet the
website not working because the translucent proxy has a fault.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Google wants to be your Internet

2007-01-20 Thread Mark Smith

On Sun, 21 Jan 2007 08:33:26 +0800
Adrian Chadd [EMAIL PROTECTED] wrote:

 
 On Sun, Jan 21, 2007, Charlie Allom wrote:
 
   This is a pure example of a problem from the operational front which can
   be floated to research and the industry, with smarter solutions than port
   blocking and QoS.
  
  This is what I am interested/scared by.
 
 Its not that hard a problem to get on top of. Caching, unfortunately, 
 continues
 to be viewed as anaethma by ISP network operators in the US. Strangely enough
 the caching technologies aren't a problem with the content -delivery- people.
 

 I've had a few ISPs out here in Australia indicate interest in a cache that
 could do the normal stuff (http, rtsp, wma) and some of the p2p stuff 
 (bittorrent
 especially) with a smattering of QoS/shaping/control - but not cost upwards of
 USD$100,000 a box. Lots of interest, no commitment.
 

I think it is probably because building caching infrastructure that is
high performance and has enough high availability to make a difference
is either non-trivial or non-cheap. If it comes down to introducing
something new (new software / hardware, new concepts, new
complexity, new support skills, another thing that can break etc.)
verses just growing something you already have, already manage and
have since day one as an ISP - additional routers and/or higher capacity
links - then growing the network wins when the $ amount is the same
because it is simpler and easier.

 It doesn't help (at least in Australia) where the wholesale model of ADSL 
 isn't
 content-replication-friendly: we have to buy ATM or ethernet pipes to 
 upstreams
 and then receive each session via L2TP. Fine from an aggregation point of 
 view,
 but missing the true usefuless of content replication and caching - right at
 the point where your customers connect in.
 

I think that if even pure networking people (i.e. those that just focus
on shifting IP packets around) are accepting of that situation, when
they also believe in keeping traffic local, it indicates that the reason
it is still happening is probably more economic than technical.
Inter-ISP peering at the exchange (C.O.) would be the ideal,
however it seems that there isn't enough inter-customer (per-ISP or
between ISP) bandwidth consumption at each exchange to justify the
additional financial and complexity costs to do it.

Inter-customer traffic forwarding is usually happening at the next
level up in the hierarchy - at the regional / city level, which is
probably at this time the most economic level to do it.

 (Disclaimer: I'm one of the Squid developers. I'm getting an increasing amount
 of interest from CDN/content origination players but none from ISPs. I'd love
 to know why ISPs don't view caching as a viable option in today's world and
 what we could to do make it easier for y'all.)
 

Maybe that really means your customers (i.e. people who most benefit
from your software) are really the content distributors not ISPs
anymore. While the distinction might seem somewhat minor, I think ISPs
generally tend to have more of a viewpoint of where is this traffic
wanting or probably going to go, and how do we build infrastructure
to get it there, and less of a what is this traffic view. In other
words, ISPs tend to be more focused on trying to optimise for all types
of traffic rather than one or a select few particular types, because
what the customer does with the bandwidth they purchase is up to
the customer themselves. If you spend time optimising for one type of
traffic you're either neglecting or negatively impacting another type.
Spending time on general optimisations that benefit all types of
traffic is usually the better way to spend time. I think one of the
reasons for ISP interest in the p2p problem could be because it is
reducing the normal benefit-to-cost ratio of general traffic
optimsation. Restoring the regular benefit-to-cost ratio of general
traffic optimsation is probably the fundamental goal of solving the
p2p problem.

My suggestion to you as a squid developer would be to focus on caching,
or more generally, localising of P2P traffic. It doesn't seem that the P2P
application developers are doing it, maybe because they don't care
because it doesn't directly impact them, or maybe because they don't
know how to. If squid could provide a traffic localising solution which
is just another traffic sink or source (e.g. a server) to an ISP,
rather than something that requires enabling knobs on the network
infrastructure for special handling or requires special traffic
engineering for it to work, I'd think you'd get quite a bit of
interest. 

Just my 2c.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Google wants to be your Internet

2007-01-20 Thread Mark Smith

On Sat, 20 Jan 2007 17:38:06 -0600 (CST)
Gadi Evron [EMAIL PROTECTED] wrote:

 
 On Sat, 20 Jan 2007, Alexander Harrowell wrote:
  Marshall wrote:
  Those sorts of percentages are common in Pareto distributions (AKA
  
   Zipf's law AKA the 80-20 rule).
   With the Zipf's exponent typical of web usage and video watching, I
   would predict something closer to
   10% of the users consuming 50% of the usage, but this estimate is not
   that unrealistic.
  
   I would predict that these sorts of distributions will continue as
   long as humans are the primary consumers of
   bandwidth.
  
   Regards
   Marshall
  
  
  That's until the spambots inherit the world, right?
  
 
 That is if you see a distinction, metaphorical or physical, between
 spambots and real users.
 

On the Internet, Nobody Knows You're a Dog (Peter Steiner, The New Yorker)
 
Woof woof,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Google wants to be your Internet

2007-01-20 Thread Mark Smith

On Sat, 20 Jan 2007 18:51:08 -0800
Roland Dobbins [EMAIL PROTECTED] wrote:

 
 
 On Jan 20, 2007, at 6:14 PM, Mark Smith wrote:
 
  It doesn't seem that the P2P
  application developers are doing it, maybe because they don't care
  because it doesn't directly impact them, or maybe because they don't
  know how to. If squid could provide a traffic localising solution  
  which
  is just another traffic sink or source (e.g. a server) to an ISP,
  rather than something that requires enabling knobs on the network
  infrastructure for special handling or requires special traffic
  engineering for it to work, I'd think you'd get quite a bit of
  interest.
 
 I think there's interest from the consumer level, already:
 
 http://torrentfreak.com/review-the-wireless-BitTorrent-router/
 
 It's early days, but if this becomes the norm, then the end-users  
 themselves will end up doing the caching.
 

Maybe I haven't understood what that exactly does, however it seems to
me that's really just a bit-torrent client/server in the ADSL router.
Certainly having a bittorrent server in the ADSL router is unique, but
not really what I was getting at.

What I'm imagining (and I'm making some assumptions about how
bittorrent works) would be bittorrent super peer that :

* announces itself as a very generous provider of bittorrent fragments.
* selects which peers to offer its generosity to, by measuring its
network proximity to those peers. I think bittorrent uses TCP, and it
would seem to me that TCP's own round trip and throughput measuring
would be a pretty good source for measuring network locality.
* This super peer could also have its generosity announcements
restricted to certain IP address ranges etc.

Actually, thinking about it a bit more, for this device to work well it
would need to somehow be inline with the bittorrent seed URLs, so maybe
it wouldn't be feasible to have a server in the ISP's network do it.
Still, if BT peer software was modified to take into account the TCP
measurements when selecting peers, I think it would probably go a long
way towards mitigating some of the traffic problems that P2P seems to be
causing.
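
A rough sketch of the "prefer nearby peers" idea, using TCP connect
time as a crude proximity measure (the peer list is made up, and this
isn't part of any real client - a real one would fold it into its
existing peer selection logic):

    import socket, time

    def connect_rtt(addr, timeout=2.0):
        # Time a TCP connect as a rough proxy for network proximity.
        start = time.time()
        try:
            s = socket.create_connection(addr, timeout=timeout)
            s.close()
            return time.time() - start
        except OSError:
            return None

    def closest_peers(peers, keep=5):
        measured = [(rtt, p) for p in peers
                    if (rtt := connect_rtt(p)) is not None]
        return [p for rtt, p in sorted(measured)[:keep]]

    peers = [("198.51.100.7", 6881), ("203.0.113.20", 6881),
             ("192.0.2.99", 6881)]
    print(closest_peers(peers))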

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Google wants to be your Internet

2007-01-20 Thread Mark Smith

On Sat, 20 Jan 2007 19:47:04 -0800
Roland Dobbins [EMAIL PROTECTED] wrote:

snip

 
 The advantage of providing caching services is that they both help  
 preserve scare resources and result in a more pleasing user  
 experience.  As already pointed out, CAPEX/OPEX along with insertion  
 into the network are the current barriers, along with potential legal  
 liabilities; cooperation between content providers and SPs could help  
 alleviate some of these problems and make it a more attractive model,  
 and help fund this kind of infrastructure in order to make more  
 efficient use of bandwidth at various points in the topology.
 

I think you're more or less describing what Akamai already do - they're
just not doing it for authorised P2P protocol distributed content (yet?).

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Network end users to pull down 2 gigabytes a day, continuously?

2007-01-08 Thread Mark Smith

On Mon, 8 Jan 2007 10:25:54 +
[EMAIL PROTECTED] wrote:
snip

 
 I am suggesting that ISP folks should be cooperating with
 P2P software developers. Typically, the developers have a very
 vague understanding of how the network is structured and are
 essentially trying to reverse engineer network capabilities. 
 It should not be too difficult to develop P2P clients that
 receive topology hints from their local ISPs. If this results
 in faster or more reliable/predictable downloads, then users
 will choose to use such a client. 
 

I'd think TCP's underlying and constant round trip time measurement to
peers could be used for that. I've wondered fairly recently if P2P
protocols do that, however I haven't found the time to see if it is so.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: UUNET issues?

2006-11-04 Thread Mark Smith

 the internet is broken.  anyone know why?
No.


Re: UUNET issues?

2006-11-04 Thread Mark Smith

On Sat, 4 Nov 2006 22:55:46 -0800
Michael Smith [EMAIL PROTECTED] wrote:

 
 
 On Nov 4, 2006, at 10:51 PM, Randy Bush wrote:
 
  Could you be any less descriptive of the problem you are seeing?
  the internet is broken.  anyone know why?
  Did you ping it?
 
  is that what broke it?
 
 Please.  That's how you *know* it's broken.
 

With the prevalence of firewalls, I've found broken traceroutes to be a
much more reliable indicator of broken Internettedness.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: icmp rpf

2006-09-25 Thread Mark Smith

Hi Mark,

On Sun, 24 Sep 2006 16:33:30 -0700 (PDT)
Mark Kent [EMAIL PROTECTED] wrote:

 Mark Smith wrote:
  The non-announcers, because they're also breaking PMTUD.
 
 Really?   How?   Remember, we're not talking about RFC1918 space,
 where there is a BCP that says we should filter it at the edge.
 We're talking about public IP space, that just doesn't happen to be
 announced outside of a particular AS.
 

When a router that can't shove a DF'd packet down a link because the
MTU is too small needs to create an ICMP Destination Unreachable, Packet
Too Big, Fragmentation Required, it needs to pick a source IP address
to use for that ICMP packet, which will be one of those assigned to the
router with the MTU problem (I'm fairly sure it's the IP
address assigned to the outgoing interface for this ICMP packet,
although I don't think it probably matters much). If an upstream
router, i.e. on the way back to the sender who needs to resend with a
smaller packet, is dropping these packets because they fail RPF, then
PMTUD breaks. The result might be connection timeouts at the sender, or
possibly after quite a while the sender might try smaller packets and
eventually they'll get through (I think Windows might do this). Either
way, bad end-user experience.

PMTUD as it currently works isn't ideal, as of course there isn't any
guarantee that these ICMP Dest Unreachables will get there even in a
good network. However, most of the time it works, whereas in the
scenario you're presenting, it definitely won't.
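
For what it's worth, on Linux you can watch the sender side of this
from an application: force DF on a socket and read back the kernel's
current path MTU estimate. A sketch (the numeric fallbacks are the
values from linux/in.h, and 192.0.2.1 is a placeholder destination);
if the ICMP messages described above are being dropped, that estimate
never shrinks and oversized DF'd packets just disappear:

    import socket

    IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
    IP_PMTUDISC_DO  = getattr(socket, "IP_PMTUDISC_DO", 2)
    IP_MTU          = getattr(socket, "IP_MTU", 14)

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)  # set DF
    s.connect(("192.0.2.1", 9))
    try:
        s.send(b"x" * 1400)   # EMSGSIZE means the estimate is below ~1428 bytes
    except OSError as e:
        print("send failed:", e)
    print("current path MTU estimate:",
          s.getsockopt(socket.IPPROTO_IP, IP_MTU))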

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: New router feature - icmp error source-interface [was: icmp rpf]

2006-09-25 Thread Mark Smith

On Mon, 25 Sep 2006 09:22:34 -0400
Patrick W. Gilmore [EMAIL PROTECTED] wrote:

 
 On Sep 25, 2006, at 9:06 AM, Ian Mason wrote:
 
  ICMP packets will, by design, originate from the incoming interface  
  used by the packet that triggers the ICMP packet. Thus giving an  
  interface an address is implicitly giving that interface the  
  ability to source packets with that address to potential anywhere  
  in the Internet. If you don't legitimately announce address space  
  then sourcing packets with addresses in that space is (one  
  definition of) spoofing.
 
 Who thinks it would be a good idea to have a knob such that ICMP  
 error messages are always source from a certain IP address on a router?
 

I do.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Is it my imagination or are countless operations impacted today with mysql meltdowns

2006-08-27 Thread Mark Smith

On Sun, 27 Aug 2006 00:13:50 -0400
Richard A Steenbergen [EMAIL PROTECTED] wrote:

 On Sun, Aug 27, 2006 at 08:04:01AM +0930, Mark Smith wrote:
  
  On Sat, 26 Aug 2006 12:48:39 -0700 (PDT)
  Henry Linneweh [EMAIL PROTECTED] wrote:
  
   
   Every where I go that uses MySql is hozed and I can not access the pages

   -Henry
  
  There seems to have been a big fault over there that is effecting us
  here in .AU. According to our local upstream it's a GLX fault, and by
  it's duration, it seems to have been a big one - I was told about it
  more than 12 hours ago. Examples of sites customers are having trouble
  accessing are :
 
 I think you're referring to an issue of blackholed packets between GX 
 (3549) and Singtel (7473) in LA, for packets going to Optus (4804) (which 
 for some reason appear to not be announced to normal Singtel peers). I 
 don't think this was GX's fault actually, but I'm not sure if the issue 
 extended beyond 3549-7473.
 

Optus's AS is 7474, or at least that is the AS we peer with, and then
that peers with 7473.

Our routes to those destinations had been up for days / weeks, so it
seemed to be a return path problem. A packet blackhole would explain it.

 At any rate this has nothing to do with MySQL faults or off-topic posts, 
 and it is venturing dangerously close to actually talking about routing 
 issues. We'd best change the subject to spam or botnets or something, 
 before somebody gets the wrong idea about this list. :)
 

Maybe the routes were stored in a MySQL database, and they suffered
from a disk crash ?

:-)

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Is it my imagination or are countless operations impacted today with mysql meltdowns

2006-08-26 Thread Mark Smith

On Sat, 26 Aug 2006 12:48:39 -0700 (PDT)
Henry Linneweh [EMAIL PROTECTED] wrote:

 
 Every where I go that uses MySql is hozed and I can not access the pages
  
 -Henry

There seems to have been a big fault over there that is affecting us
here in .AU. According to our local upstream it's a GLX fault, and by
it's duration, it seems to have been a big one - I was told about it
more than 12 hours ago. Examples of sites customers are having trouble
accessing are :

games.swirve.com
206.104.8.56

hostgator.com
67.18.54.2

itwarehouse.com.au
67.19.93.101

centralops.net
70.84.211.98

whatalicefound.net
70.87.152.2


-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Interesting new spam technique - getting a lot more popular.

2006-06-15 Thread Mark Smith

On Wed, 14 Jun 2006 11:59:51 -0700
Warren Kumari [EMAIL PROTECTED] wrote:

 
 
 On Jun 14, 2006, at 2:18 AM, John van Oppen wrote:
 
  That being said, I know at least one of our transit customers does  
  hosting exactly how you are describing.   Coincidentally, this  
  customer is also one of the customers that asked if we could give  
  them a class C block.
 
 Ok, I KNOW I am going to be slapped by a bunch of people here, but
 
 I often refer to a /24 (anywhere in the space) as a class C. 

SLAP!

Actually, we've recently seen an Internet service RFP requesting Class
A addresses because they were better than Class Bs! At least they
won't be asking for any Class Cs - too low rent for them !

Hmm, I've just realised that we've just been assigned a Class A /18,
so maybe we can supply the customer Class A, Number 1 Grade, Premium,
Royal Quality IP addresses after all.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Mutual Redistribution

2006-03-29 Thread Mark Smith

On Tue, 28 Mar 2006 16:37:48 -0500
Joe Maimon [EMAIL PROTECTED] wrote:

 
 
 Mark Smith wrote:
 
  One better
  solution is to take advantage of route tags or labels. When a route is
  redistributed you tag it, and then when mutual redistribution occurs in
  the other direction, you exclude routes that have that tag. You'd need
  to do this in both redistribution directions, with different tags to
  prevent loops in either direction. This method doesn't rely on the
  behaviour of always increase metrics, so it would be more robust.
  
  HTH,
  Mark.
  
 I dont believe popular vendors implementations of rip propogate tags.
 
 At least the last time I tried loop prevention with that, it didnt work.

Did it happen to be RIPv1 ? Only RIPv2 supports route tags.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Mutual Redistribution

2006-03-28 Thread Mark Smith

On Tue, 28 Mar 2006 06:46:13 +0530
Glen Kent [EMAIL PROTECTED] wrote:

 
 Hi,
 
 There is a provider who is running ISIS in its core and they are using
 RIP for the management interface. Is it valid to redistribute all the
 ISIS routes into RIP and all the RIP routes into ISIS?
 

Depends on what they are trying to achieve, as well as their routing
protocol topology. Mutual redistribution may not be necessary if in one
of the routing protocol clouds they have a default route pointing
towards the other e.g. for a hub and spoke topology (IS-IS hub, RIP
spokes), a default in the RIP cloud pointing towards the IS-IS hub, and
then redistributing the RIP learned routes into IS-IS would achieve the
same as what mutual redistribution is being used for.

 Cant this create a loop or something?
 

You've just got to make sure that routes don't get redistributed back to
where they came from e.g. an IS-IS route into RIP, then from RIP back
into IS-IS, then IS-IS into RIP etc. On face value you'd think that
increasing metrics would prevent this routing information loop, except
during redistribution the metric can lose its ability to properly
measure the path length, in part due to some protocols not having very
large metric capacity (RIP probably being the only one). One better
solution is to take advantage of route tags or labels. When a route is
redistributed you tag it, and then when mutual redistribution occurs in
the other direction, you exclude routes that have that tag. You'd need
to do this in both redistribution directions, with different tags to
prevent loops in either direction. This method doesn't rely on the
behaviour of always increasing metrics, so it would be more robust.
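
A toy model of the tag rule (not vendor config, just the logic): tag
each route on export, and refuse to re-import routes that already
carry the tag used for the opposite direction:

    ISIS_TAG, RIP_TAG = 100, 200

    def redistribute(routes, export_tag, block_tag):
        exported = []
        for prefix, tag in routes:
            if tag == block_tag:        # originally came from the other protocol
                continue                # ... so don't send it back
            exported.append((prefix, export_tag))
        return exported

    isis_routes = [("10.1.0.0/16", None)]
    rip_routes  = redistribute(isis_routes, ISIS_TAG, RIP_TAG)   # IS-IS -> RIP
    # RIP -> IS-IS: routes tagged ISIS_TAG are excluded, breaking the loop.
    print(redistribute(rip_routes + [("10.9.0.0/16", None)], RIP_TAG, ISIS_TAG))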

HTH,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Fire in bakery fries fiber optic cable

2006-03-26 Thread Mark Smith

On Sun, 26 Mar 2006 06:05:49 -0600
neal rauhauser [EMAIL PROTECTED] wrote:

 
 
   The fiber cable hit by bullet was in New Jersey if I'm recalling 
 correctly ... this was maybe four or five years ago. If memory serves 
 (and forty *is* uncomfortably close) this was part of a cable modem plant.
 

Maybe this one is it. I think it was around 1999, possibly 98, when I saw it.

Thanks,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Fire in bakery fries fiber optic cable

2006-03-25 Thread Mark Smith

On Thu, 23 Mar 2006 18:32:13 -0500 (EST)
Sean Donelan [EMAIL PROTECTED] wrote:

 
 
 http://timesunion.com/AspStories/story.asp?storyID=463928category=BUSINESSnewsdate=3/23/2006
   A fire Tuesday that tore through a popular bakery in Cohoes left 70,000
   Time Warner Cable subscribers without TV service. Some who also rely on
   the cable company for their high-speed Internet or telephone found all
   three out of commission.
 
 In the pictures, it appears electric, telephone and cable lines were all
 on the utility poles damaged by the fire.  I'm not sure why time-warner
 cable had the brunt of the outages in the newspaper reports.  It may have
 just been bad luck on which company's lines got baked (sorry, more bad
 puns).
 

A few years back there was a photo floating around of a fibre that had
been destroyed by a stray bullet. Does anybody know of it, or have a
copy ?

Thanks,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Fire in bakery fries fiber optic cable

2006-03-25 Thread Mark Smith

On Sat, 25 Mar 2006 18:16:34 -0500
Aaron Gagnier [EMAIL PROTECTED] wrote:

 This one?
 
 http://www.dslreports.com/forum/remark,2471255~root=cable,opt~mode=flat
 

Could be. Keith Woodworth sent me this version of it off list :

http://please.rutgers.edu/show/broadband/fibercable.jpg

I seem to remember it being on some sort of fault report, maybe that is
a copy of the original photo.

Thanks,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Network graphics tools

2006-03-22 Thread Mark Smith

Hi Howard,

On Tue, 21 Mar 2006 21:17:44 -0500
Howard C. Berkowitz [EMAIL PROTECTED] wrote:

 
 Much of the enterprise market seems wedded to Visio as their network 
 graphics tool, which locks them into Windows. Personally, I hate both 
 little pictures of equipment and Cisco hockey-puck icons; I much 
 prefer things like rectangles saying 7507 STL-1 or M160 NYC-3.
 
 Assuming you use *NIX platforms (including BSD under Mac OS X), what 
 are your preferred tools for network drawings, both for internal and 
 external use?  I'd hate to be driven to Windows only because I need 
 Visio.

I've been using inkscape (http://www.inkscape.org/) a bit recently, and
haven't found it too bad for basic box network drawings. Its native
format is SVG, although make sure you save your working diagrams in the
Inkscape SVG format. If you save it as normal SVG, all the objects get
merged into a single one - annoying if you want to go back and edit it
later. I haven't tried it, however there is a chance that Firefox
1.5 can view the .SVGs Inkscape produces natively.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: UDP Badness [Was: Re: How to measure network qualityperformance for voipgameservers (udp packetloss, delay, jitter,...)]

2006-03-10 Thread Mark Smith

On Tue, 7 Mar 2006 23:33:44 +
tony sarendal [EMAIL PROTECTED] wrote:

 On 07/03/06, Gunther Stammwitz [EMAIL PROTECTED] wrote:
 
 
  Well that's true but Iperf won't show you at which time a loss occured. It
  will simply print out the results when the test has been finished. I need
  something well more accurate that can also tell me which hop is causing
  the
  problems.
 
  Last I checked I got the time from Iperf, even if it was indirectly.
 A tool that shows which hop in the network that has problems forwarding
 certain traffic ? Awesome, I want one of those.
 

traceroute ? :-) (sorry, couldn't resist)


-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Transit LAN vs. Individual LANs

2006-02-25 Thread Mark Smith


On Sat, 25 Feb 2006 13:56:37 -0600
Stephen Sprunk [EMAIL PROTECTED] wrote:

 
 Thus spake Patrick W. Gilmore [EMAIL PROTECTED]
  On Feb 24, 2006, at 9:03 PM, Scott Weeks wrote:

snip

 
 There are a few advantages to going with PTP VLANs, such as eliminating 
 DR/BDR elections needed on shared ones, but you'd need 10 of them to get a 
 full mesh, and 15 if you add one more router.  That's just too much 
 complexity for virtually no gain, and as Owen notes, it is generally bad for 
 your logical topology to not match the physical one.
 

Even if you have a small number of routers on a segment, you can set the
ethernet interface type to point-to-multipoint, at least on Ciscos.

Automatic neighbour discovery via multicast hellos still happens, the
difference is that the routers establish direct adjacencies between each
other, rather than with the DR. While this costs additional RAM, and CPU
during the SPF calc, the benefit of avoiding DR/BDR elections, and the
'DR/BDR' approximately 40 second listening phase when a third and
subsequent routers come online may be well worth those costs.

I've also found you can set the OSPF interface type on ethernets to
point-to-point. From memory, it results in a slightly smaller Router LSA
than point-to-multipoint. That probably doesn't matter much. I haven't
tested it, however setting the type to point-to-point might prevent a
third OSPF router being accidentally added to the segment and then
establishing an unwanted adjacency, which might provide an advantage
in robustness against human error.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Transit LAN vs. Individual LANs

2006-02-25 Thread Mark Smith

On Sun, 26 Feb 2006 08:41:45 +1030
Mark Smith [EMAIL PROTECTED] wrote:

To qualify this better, there are no DR/BDR on the segment at all,
rather than there being ones that just aren't used :

 Automatic neighbour discovery via multicast hellos still happens, the
 difference is that the routers establish direct adjacencies between each
 other, rather than with the DR. While this costs additional RAM, and CPU
 during the SPF calc, the benefit of avoiding DR/BDR elections, and the
 'DR/BDR' approximately 40 second listening phase when a third and
 subsequent routers come online may be well worth those costs.
 

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Stupidity: A Real Cyberthreat.

2006-01-19 Thread Mark Smith

The purpose of terrorism is to create widespread _terror_ (the
hint is in the word).

On Thu, 19 Jan 2006 12:00:28 -0700
A Satisfied Mind [EMAIL PROTECTED] wrote:

 
 On 1/19/06, Jerry Pasker [EMAIL PROTECTED] wrote:
 
 You are oversimplifying things here Why was the World Trade Center
 chosen (twice) to attack it is an economic target.  All wars are
 economic, including drug wars and terror wars... what was the COST of
 9/11???
 
 A hell of a lot:  http://www.ccc.nps.navy.mil/si/aug02/homeland.asp
 

Was the terror caused by 9/11 because of the economic impact, or because
3000 innocent people died in such a terrible and unexpected manner? If
the goal was let's get those Americans and the grand financial
institutions, Fort Knox might have been a better target for the
terrorists.

I strongly recommend reading the book I quote below, which deals exactly
with this topic.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Stupidity: A Real Cyberthreat.

2006-01-19 Thread Mark Smith

On Thu, 19 Jan 2006 14:17:35 -0700
A Satisfied Mind [EMAIL PROTECTED] wrote:

 On 1/19/06, Mark Smith
 [EMAIL PROTECTED] wrote:
  The purpose of terrorism is to create widespread _terror_ (the
  hint is in the word).
 
 And what is terror?   Warfare


War is certainly terrible, although it isn't necessarily terrifying if
you aren't there :

http://dictionary.cambridge.org/define.asp?key=82098dict=CALD

1 [C or U] (violent action which causes) extreme fear:
They fled from the city in terror.
There was sheer/abject terror in her eyes when he came back into the room.
Lots of people have a terror of spiders.
What he said struck terror in my heart (= made me very frightened).
The separatists started a campaign of terror (= violent action causing fear) to 
get independence.
Heights have/hold no terrors for me (= do not frighten me).

This is so way off topic for nanog that I'm going to stop here.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Problems connectivity GE on Foundry BigIron to Cisco 2950T

2006-01-15 Thread Mark Smith

Hi Randy,

On Sun, 15 Jan 2006 11:10:04 -1000
Randy Bush [EMAIL PROTECTED] wrote:

 
  You are using a crossover cable right?
  I'm having a right mare trying to get a Foundry BigIron to 
  connect up to a cisco 2950T, via Gigabit copper.
 
 i was under the impression that gige spec handled crossover
 automagically
 

According to Ethernet, The Definitive Guide, that feature is an
optional part of the spec.

One thing I've heard people encounter is that if they use a cross-over
cable, which probably really implies a 100BASE-TX cross-over, then the
ports only go to 100Mbps. A Gig-E rated straight through, in conjunction
with the automatic crossover feature, was necessary to get to GigE.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Problems connectivity GE on Foundry BigIron to Cisco 2950T

2006-01-15 Thread Mark Smith

On Sun, 15 Jan 2006 23:50:07 + (GMT Standard Time)
Sam Stickland [EMAIL PROTECTED] wrote:

 
 Hi,

snip

 
 The cabling arrangement is:
 
 Foundry -- Straight -- Patch -- Underfloor -- Patch -- Crossover -- Cisco
   GBIC   Cable  Panel Straight Panel  Cable
 
 If I replace the final crossover cable with a straight,
Just do that ^^^ and give it a try.


-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Problems connectivity GE on Foundry BigIron to Cisco 2950T

2006-01-15 Thread Mark Smith

On Mon, 16 Jan 2006 00:24:35 + (GMT Standard Time)
Sam Stickland [EMAIL PROTECTED] wrote:

 
 On Mon, 16 Jan 2006, Mark Smith wrote:
 
  On Sun, 15 Jan 2006 23:50:07 + (GMT Standard Time)
  Sam Stickland [EMAIL PROTECTED] wrote:
 
 
  Hi,
 
  snip
 
 
  The cabling arrangement is:
 
  Foundry -- Straight -- Patch -- Underfloor -- Patch -- Crossover -- Cisco
GBIC   Cable  Panel Straight Panel  Cable
 
  If I replace the final crossover cable with a straight,
  Just do that ^^^ and give it a try.
 
 Will do.
 

Having done a bit more looking into this myself, one thing that might be
a cause is the cross-over, in the sense that if it is a 100BASE-T
crossover, only two of the pairs will be crossed, and the other two
pairs are usually wired straight.

A GigE cross over, assuming you need one if your ports don't support
automatic cross over, has all four pairs crossed over
(1-3,2-6,3-1,6-2,4-7,5-8,7-4,8-5). My guess would be that if a device
only detects two of the four pairs crossed, it drops back to 100BASE-T.
In other words, GigE cross overs are backwards compatible with
10/100BASE-T, but 10/100BASE-T crossovers aren't forward compatible with
GigE.

A GigE rated straight through path would be the first thing I'd test,
after that, possibly try a GigE crossover somewhere between the devices.

Regards,
Mark.


-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: The Qos PipeDream [Was: RE: Two Tiered Internet]

2005-12-16 Thread Mark Smith


On Fri, 16 Dec 2005 04:16:17 + (GMT)
Christopher L. Morrow [EMAIL PROTECTED] wrote:

 
 
 On Fri, 16 Dec 2005, Christopher L. Morrow wrote:
 
  http://www.secsup.org/files/dmm-queuing.pdf
 
 
 oh firstgrad spelling where ahve you gone?
 
 also at: http://www.secsup.org/files/dmm-queueing.pdf
 
 incase you type not paste.

Another interesting one is 

Provisioning IP Backbone Networks to Support Latency Sensitive Traffic

From the abstract,

To support latency sensitive traffic such as voice, network
providers can either use service differentiation to prioritize such traffic
or provision their network with enough bandwidth so that all traffic
meets the most stringent delay requirements. In the context of widearea
Internet backbones, two factors make overprovisioning an attractive
approach. First, the high link speeds and large volumes of traffic make
service differentiation complex and potentially costly to deploy. Second,
given the degree of aggregation and resulting traffic characteristics, the
amount of overprovisioning necessary may not be very large 

... 

We then develop a procedure which uses this model to find the amount of
bandwidth needed on each link in the network so that an end-to-end delay
requirement is satisfied. Applying this procedure to the Sprint network,
we find that satisfying end-to-end delay requirements as low as 3 ms
requires only 15% extra bandwidth above the average data rate of the
traffic.

http://www.ieee-infocom.org/2003/papers/10_01.PDF

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: GoDaddy DDoS

2005-12-01 Thread Mark Smith

On Wed, 30 Nov 2005 16:18:52 -0700
Sam Crooks [EMAIL PROTECTED] wrote:

This confidentiality notice almost DoS'd my MUA !
 
 
 CONFIDENTIALITY NOTICE:
 This message, and any attachments, are intended only for the lawful and 
 specified use of the individual or entity to which it is addressed and may 
 contain information that is privileged, confidential or exempt from 
 disclosure under applicable law. If the reader of this message is not the 
 intended recipient or the employee or agent responsible for delivering the 
 message to the intended recipient, you are hereby notified that you are 
 STRICTLY PROHIBITED from disclosing, printing, storing, disseminating, 
 distributing or copying this communication, or admitting to take any action 
 relying thereon, and doing so may be unlawful. It should be noted that any 
 use of this communication outside of the intended and specified use as 
 designated by the sender, may be unlawful.  If you have received this in 
 error, please immediately notify us by return e-mail, fax and/or telephone, 
 and destroy this original transmission and its attachments without reading or 
 saving in any manner.
 
 


-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier


Re: IAB and private numbering

2005-11-17 Thread Mark Smith

On Thu, 17 Nov 2005 17:44:10 +0100
Daniel Karrenberg [EMAIL PROTECTED] wrote:

 On 15.11 07:38, Mark Smith wrote:
  
  RFC1627, Network 10 Considered Harmful (Some Practices Shouldn't be
  Codified) and RFC3879, Deprecating Site Local Addresses provide some
  good examples of where duplicate or overlapping address spaces cause
  problems, which is what happens when different organisations use RFC1918
  addresses, even if they aren't connected to the Internet.
 
 This is practical engineering, not theoretical science.  Practical
 engineering is about *trade-offs*. 
 

All I know is that I've had bad experiences with duplicated or
overlapping address spaces. One particularly bad one was spending two
months developing templates for combinations of NAT / NAPT for Internet
/ VPN access (e.g. NAT to Internet, not VPN; NAT to VPN, not Internet;
NAPT to Internet, NAT to VPN, different 'to' address spaces for NAT to
the Internet and NAT to the VPN etc. etc.). In addition to developing
these solutions I also sat scratching my head for two months asking
"why not just give them public address space, restoring uniqueness to
their addressing, so I can work on improving the product rather than
just developing workarounds?". Spending time on workarounds, as well as
building protocol and other limitations into the network that will be
encountered in the future, isn't a good trade-off in my
opinion.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier


Re: IAB and private numbering

2005-11-14 Thread Mark Smith

On Mon, 14 Nov 2005 11:36:00 +
[EMAIL PROTECTED] wrote:

 
  I'd like to see some acknowledgement that there are legitimate uses of
  number resources that don't include the public Internet.
 

RFC1627, Network 10 Considered Harmful (Some Practices Shouldn't be
Codified) and RFC3879, Deprecating Site Local Addresses provide some
good examples of where duplicate or overlapping address spaces cause
problems, which is what happens when different organisations use RFC1918
addresses, even if they aren't connected to the Internet.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier


Re: IAB and private numbering

2005-11-12 Thread Mark Smith

On Sun, 13 Nov 2005 02:12:13 + (GMT)
Christopher L. Morrow [EMAIL PROTECTED] wrote:

snip

 
 I don't believe there is a 'rfc1918' in v6 (yet), I agree that it doesn't
 seem relevant, damaging perhaps though :)


Sort of do, with a random component in them to help attempt to prevent
collisions :

RFC 4193 - Unique Local IPv6 Unicast Addresses
http://www.faqs.org/rfcs/rfc4193.html

 
 
  IMHO, assigning globally unique prefixes to those who utilize IP
  protocols, regardsless of whom else they choose to see via routing
  is the right course.  every other attempt to split the assignements
  into us vs. them has had less than satisfactory results.
 
 agreed
 

See above ... that was pretty much the fundamental goal of ULAs - unique
address space, not dependent on a provider, not intended to be globally
routable, preferred over global addresses so that connections can
survive global address renumbering events.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier


Re: IPv6 daydreams

2005-10-17 Thread Mark Smith

Hi David,

On Sun, 16 Oct 2005 16:49:25 -0700 (PDT)
David Barak [EMAIL PROTECTED] wrote:

 
snip
 
 I'd change the allocation approach: rather than give
 every customer a /64, which represents an IPv4
 universe full of IPv4 universes, I'd think that any
 customer can make do with a single IPv4-size universe,
 and make the default end-customer allocation a /96. 
 ISPs could still get gigantic prefixes (like a /23 or
 something), to make sure that an ISP would never need
 more than one prefix.
 

If we're going to do that, we may as well also start reclaiming those 48
bit MAC addresses that come with ethernet cards. After all, nobody would
need any more than say 12 to 13 bits to address their LANs.

Hmm, so what do 48 bit addresses give us that 12 bits don't ? How about
convenience. It is convenient to be able to plug in an ethernet card,
and, excepting the very rare occasions when a manufacturer has stuffed
up, be assured that you can just plug it in and it works. No jumpering,
no maintaining a LAN address registry per segment, no address
collisions, or at least extremely rare ones.

From what I understand, it is considered that 48 bit MAC addresses will
be too small for our convenience needs of the future, so IEEE have
invented 64 bit ones (EUI-64s).

Wouldn't it be nice to have the same sort of convenience in a new layer 3
protocol that we've had since 802.3 was first published (and since I
started working in networking 1993) ? I'd like it, and I'm willing to
pay a few bytes in the src and dst addresses in my layer 3 protocol
header for it.

/64s in IPv6 for multi-access segments (i.e. everything other than
single address loopbacks) is convenient and useful, and I think should
be kept.


Regards,
Mark.

-- 

The Internet's nature is peer to peer.



Re: IPv6 daydreams

2005-10-17 Thread Mark Smith

Hi Randy,

On Sun, 16 Oct 2005 23:08:49 -1000
Randy Bush [EMAIL PROTECTED] wrote:

 
  If we're going to do that, we may as well also start reclaiming
  those 48 bit MAC addresses that come with ethernet cards. After
  all, nobody would need anymore than say 12 to 13 bits to address
  their LANs.
 
 so you think that layer-2 lans scale well above 12-13 bits?
 which ones in particular?


Maybe you've missed my point. Nobody (at least that I'm aware of)
_needs_ 48 bits of address space to address nodes on their LANs. We didn't
get 48 bits because we needed them (although convenience is a need; if
it wasn't, we'd still be hand winding our car engines to start them). We
got them because it made doing other things much easier, such as (near)
guarantees of world wide unique NIC addresses, allowing plug-and-play,
at least a decade before the term was invented.

I've read somewhere that the original ethernet address was only 16 bits
in size. So why was it expanded to 48 bits ? Obviously people in the 80s
weren't running LANs with 2^48 devices on them, just like they aren't
today.

Why have people who are unhappy about /64s for IPv6 been happy enough
to accept 48-bit addresses on their LANs for at least 15 years? Why
aren't people complaining today about the overhead of 48-bit MAC
addresses on their 1 or 10Gbps point-to-point links, when none of those
bits are actually necessary to identify the other end? Maybe because
they have unconsciously got used to the convenience and, if they've
thought about it, realise that the byte cost of that convenience is not
worth worrying about, because there are far higher costs elsewhere in
the network (including administering it) that could be reduced.
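
To put a rough number on that byte cost, a quick back-of-the-envelope
calculation in Python (the frame sizes are just illustrative):

# Two 48-bit MAC addresses = 12 bytes of every Ethernet frame.
for frame_bytes in (64, 512, 1518):
    print("%4d-byte frame: %.1f%% is MAC addressing"
          % (frame_bytes, 12.0 / frame_bytes * 100))
# prints 18.8% for 64-byte frames, 2.3% for 512, 0.8% for 1518

Even the worst case, minimum-sized frames, is under a fifth of the
frame, and for full-sized frames it's noise.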

Regards,
Mark.

-- 

The Internet's nature is peer to peer.



Re: IPv6 news

2005-10-16 Thread Mark Smith

Hi Tony,

On Sat, 15 Oct 2005 23:26:20 -0700
Tony Li [EMAIL PROTECTED] wrote:

snip

  Perhaps  
 this is yet another case where people misunderstand the principle  
 itself and are invoking it to give a name to their (well placed)  
 architectural distaste.
 

Doesn't NAT, or more specifically its most commonly used form, NAPT,
create hard state within the network, which then makes it violate the
end-to-end argument? Also, because it has to understand transport- and
application-layer protocols to be able to translate embedded addresses,
doesn't this also violate end-to-end? As I've understood it, the
fundamental benefit of following the end-to-end argument is that you
end up with an application-agnostic network, which therefore doesn't
place future constraints on which applications can be used over that
network. In an end-to-end-compliant network, new transport layer
protocols, such as SCTP or DCCP, and new user applications only require
an upgrade of the end or edge node software, which can be performed on
an incremental, per-node, as-needed basis. In other words, there isn't
any whole-of-network upgrade cost or functionality deployment delay to
support new applications, which was the drawback of
application-specific networks such as the traditional POTS network.
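
For what it's worth, the hard state I mean is essentially a per-flow
translation table that has to be created, consulted and kept alive for
every connection. A toy Python sketch of the idea (the addresses, port
numbers and function names are all made up):

# Toy NAPT: each outbound flow claims a public port; that mapping has
# to live in the middle of the network for as long as the flow does.
napt_table = {}            # (private_ip, private_port) -> public_port
next_public_port = 50000   # arbitrary start of the public port pool

def translate_outbound(private_ip, private_port):
    global next_public_port
    key = (private_ip, private_port)
    if key not in napt_table:          # new flow: create state
        napt_table[key] = next_public_port
        next_public_port += 1
    return napt_table[key]

def translate_inbound(public_port):
    # An unsolicited inbound packet matches no existing state, so it
    # has nowhere to go - which is exactly the end-to-end problem.
    for (ip, port), pub in napt_table.items():
        if pub == public_port:
            return ip, port
    return None

print(translate_outbound("192.168.0.10", 4321))  # 50000
print(translate_inbound(50000))                  # ('192.168.0.10', 4321)
print(translate_inbound(50001))                  # None

And that's before the box starts rewriting addresses embedded in
application payloads (FTP, SIP and friends), which is where the
transport- and application-layer awareness comes in.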

Have I somehow misunderstood the intent or benefits of the end-to-end
argument?

Thanks, Mark.

-- 

The Internet's nature is peer to peer.



Re: And Now for Something Completely Different (was Re: IPv6 news)

2005-10-16 Thread Mark Smith

Hi David,

snip

 
 Well, if you NAT the destination identifier into a routing locator  
 when a packet traverses the source edge/core boundary and NAT the  
 locator back into the original destination identifier when you get to  
 the core/destination edge boundary, it might be relevant.  The  
 advantages I see of such an approach would be:
 
 - no need to modify existing IPv6 stacks in any way
 - identifiers do not need to be assigned according to network  
 topology (they could, in fact, be allocated according to national  
 political boundaries, geographic boundaries, or randomly for that  
 matter).  They wouldn't even necessarily have to be IPv6 addresses  
 just so long as they could be mapped and unmapped into the  
 appropriate locators (e.g., they could even be, oh say, IPv4 addresses).
 - locators could change arbitrarily without affecting end-to-end  
 sessions in any way
 - the core/destination edge NAT could have arbitrarily many locators  
 associated with it
 - the source edge/core NAT could determine which of the locators  
 associated with a destination it wanted to use
 
 Of course, the locator/identifier mapping is where things might get a  
 bit complicated.  What would be needed would be a globally  
 distributed lookup technology that could take in an identifier and  
 return one or more locators.  It would have to be very fast since the  
 mapping would be occurring for every packet, implying a need for  
 caching and some mechanism to insure cache coherency, perhaps  
 something as simple as a cache entry time to live if you make the  
 assumption that the mappings either don't change very frequently and/ 
 or stale mappings could be dealt with.  You'd also probably want some  
 way to verify that the mappings weren't mucked with by miscreants.   
 This sounds strangely familiar...


It certainly does. Apparently this, or a similar idea, was suggested
back in 1997 and is the root origin of the 64 bits of host address
space, according to Christian Huitema in his IPv6 book -
http://www.huitema.net/ipv6.asp.

A Google search found the draft:

GSE - An Alternate Addressing Architecture for IPv6
M. O'Dell, INTERNET DRAFT, 1997

http://www.caida.org/outreach/bib/networking/entries/odell97GSE.xml
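
The lookup being described above boils down to an identifier-to-locator
cache with a time-to-live. A very rough Python sketch of the idea (the
resolver function and the 300 second TTL are placeholders of mine, not
anything specified by GSE):

import time

CACHE_TTL = 300
cache = {}   # identifier -> (locators, expiry time)

def lookup_locators(identifier):
    # Placeholder for a query to some fast, globally distributed
    # mapping service - the complicated part of the whole scheme.
    return ["locator-for-" + identifier]

def resolve(identifier):
    now = time.time()
    entry = cache.get(identifier)
    if entry and entry[1] > now:              # fresh cache hit
        return entry[0]
    locators = lookup_locators(identifier)    # miss or stale: refresh
    cache[identifier] = (locators, now + CACHE_TTL)
    return locators

print(resolve("2001:db8::1"))   # first call asks the mapping service
print(resolve("2001:db8::1"))   # second call is answered from cache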


 
 Can two evils make a good?  :-)
 

Not sure; however, two wrongs don't make a right, but three lefts do.

Regards,
Mark.

-- 

The Internet's nature is peer to peer.