Re: Google wants to be your Internet

2007-05-08 Thread Travis H.
On Tue, Jan 23, 2007 at 02:59:21PM -0500, Sean Donelan wrote:
 I think network engineers are too quick to use network identifiers for
 applications.

Analogous to using names or SSNs or anything else as a primary key
in a database.  The database people already figured out that if you
don't assign an identifier used solely for identification purposes,
then you can't capture the idea of something changing names (or IPs,
or whatever).
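The database fix is a surrogate key: an identifier used for nothing but identity, so attributes like names and IPs can change freely. A minimal sketch in Python with sqlite3 (the table and column names here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# host_id is a surrogate key used solely for identification;
# hostname and ip_addr are mutable attributes, not identity.
cur.execute("""
    CREATE TABLE hosts (
        host_id  INTEGER PRIMARY KEY,
        hostname TEXT,
        ip_addr  TEXT
    )
""")
cur.execute("INSERT INTO hosts (hostname, ip_addr) VALUES (?, ?)",
            ("laptop", "10.0.0.5"))
host_id = cur.lastrowid

# The device roams to another network and picks up a new address...
cur.execute("UPDATE hosts SET ip_addr = ? WHERE host_id = ?",
            ("172.16.9.23", host_id))

# ...but it is still the same row: identity survives the renumbering.
row = cur.execute("SELECT hostname, ip_addr FROM hosts WHERE host_id = ?",
                  (host_id,)).fetchone()
print(row)  # ('laptop', '172.16.9.23')
```

Had the IP been the primary key, the UPDATE would instead have looked like a delete plus an unrelated insert, and the device's history would be lost.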

I think wifi is making this clear; I may connect from various wifi
networks, but I'm still me.  To deal with roaming mobile devices,
we'll have to figure out something to allow us to maintain
connectivity while changing IPs, right?  Same with DHCP in some cases.

-- 
Kill dash nine, and it's no more CPU time, kill dash nine, and that
process is mine. -- URL:http://www.subspacefield.org/~travis/
For a good time on my UBE blacklist, email [EMAIL PROTECTED]




Re: Google wants to be your Internet

2007-05-08 Thread Travis H.
On Wed, Jan 24, 2007 at 05:23:10AM -0800, Roland Dobbins wrote:
 RFC1918 was created for a reason, and it is used (and misused, we all  
 understand that) today by many network operators for a reason.

I used 10/8 for my LAN a while back, until my ISP's DHCP-advertised
routers suddenly started using the same 10.0.0.0/24... so I ended up
having to renumber, or else I couldn't route out the WAN link to them.
What a pain.
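The collision is easy to check for before it bites; a sketch with Python's ipaddress module (the ISP-side prefix is the one from the story, the candidate LAN prefixes are invented):

```python
import ipaddress

# The prefix the ISP's DHCP-advertised routers started using.
isp_side = ipaddress.ip_network("10.0.0.0/24")

# Candidate RFC 1918 prefixes for renumbering the LAN.
candidates = [ipaddress.ip_network(p) for p in
              ("10.0.0.0/24", "10.42.0.0/24", "192.168.88.0/24")]

# Keep only private prefixes that don't collide with the WAN side.
safe = [net for net in candidates
        if net.is_private and not net.overlaps(isp_side)]
print([str(n) for n in safe])  # ['10.42.0.0/24', '192.168.88.0/24']
```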

I'd name the ISP if I had the Time.
-- 
Kill dash nine, and it's no more CPU time, kill dash nine, and that
process is mine. -- URL:http://www.subspacefield.org/~travis/
For a good time on my UBE blacklist, email [EMAIL PROTECTED]




Re: Google wants to be your Internet

2007-01-31 Thread Joseph S D Yao

On Tue, Jan 30, 2007 at 08:19:12AM -, [EMAIL PROTECTED] wrote:
 
  
   IPv6 makes NAT obsolete because IPv6 firewalls can provide all
   the useful features of IPv4 NAT without any of the downsides.
  
  IPv6 firewalls?  Where?  Good ones?
 
 Why good ones? NAT is a basic IPv4 firewall. All IPv6 needs to obsolete
 NAT is a firewall that offers all the features of NAT without requiring
 the address translation. Then, instead of setting up a port translation
 for a particular incoming protocol, you simply open up that port without
 modifying the packets as they flow through. Suddenly, SIP works and
 incoming VoIP phone calls work just like on the phone network.


There is more to firewalls than NAT and packet filtering, no matter what
the Cisco PIX people say.


-- 
Joe Yao
---
   This message is not an official statement of OSIS Center policies.


Re: Google wants to be your Internet

2007-01-31 Thread Joseph S D Yao

On Tue, Jan 30, 2007 at 08:04:25PM -, Mark D. Kaye wrote:
 
 Hi,
 
 PIX/ASA supports IPv6, apparently; see below.
 
 Don't know anyone who has tested it yet though ;-)
 
 http://www.cisco.com/en/US/products/ps6120/products_configuration_guide_chapter09186a0080636f44.html

Note: Failover does not support IPv6. The ipv6 address command does not
support setting standby addresses for failover configurations. The
failover interface ip command does not support using IPv6 addresses on
the failover and Stateful Failover interfaces.

The following inspection engines support IPv6:
* FTP
* HTTP
* ICMP
* SMTP
* TCP
* UDP

as opposed to 23 separate application inspection engines listed in a
table later on.  Granted, some of those protocols don't exist on IPv6,
but hardly 17 of 23.


-- 
Joe Yao
---
   This message is not an official statement of OSIS Center policies.


RE: Google wants to be your Internet

2007-01-30 Thread Crist Clark

 On 1/30/2007 at 12:19 AM, [EMAIL PROTECTED] wrote:

  
  IPv6 makes NAT obsolete because IPv6 firewalls can provide all
  the useful features of IPv4 NAT without any of the downsides.
  
 IPv6 firewalls?  Where?  Good ones?
 
 Why good ones? NAT is a basic IPv4 firewall. All IPv6 needs to obsolete
 NAT is a firewall that offers all the features of NAT without requiring
 the address translation. Then, instead of setting up a port translation
 for a particular incoming protocol, you simply open up that port without
 modifying the packets as they flow through. Suddenly, SIP works and
 incoming VoIP phone calls work just like on the phone network.

Oh, if it were so easy. Even without NAT our firewalls still
need to meddle in the application layer. You'll still need
smarts in the firewall to use the bad ol' FTP. And of course
although SIP itself usually uses a fixed port, the calls it
sets up generally do not.

You don't have to modify packets, but you still need to read
them, understand the protocol, and add state entries to your
firewall. The absence of NAT doesn't really save you much work.
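For instance, the ports a SIP call actually uses are negotiated inside the SDP body of the INVITE, not fixed like SIP's own signaling port, which is why the firewall must read the application layer even without NAT. A toy sketch (the SDP below is invented, and real SDP handling is more involved than this):

```python
# A SIP INVITE carries an SDP body; the RTP media port lives in the
# m= line. A stateful firewall has to parse this to know which port
# to open for the call it just observed being set up.
sdp = """v=0
o=alice 2890844526 2890844526 IN IP6 2001:db8::10
s=call
c=IN IP6 2001:db8::10
m=audio 49172 RTP/AVP 0
"""

def media_ports(sdp_text):
    """Extract the port from each m= (media) line of an SDP body."""
    ports = []
    for line in sdp_text.splitlines():
        if line.startswith("m="):
            # Format: m=<media> <port> <proto> <fmt> ...
            ports.append(int(line.split()[1]))
    return ports

print(media_ports(sdp))  # [49172]
```

Those extracted ports are exactly the state entries the firewall has to add on the fly, translation or no translation.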
-- 

Crist J. Clark   [EMAIL PROTECTED]
Globalstar Communications(408) 933-4387




RE: Google wants to be your Internet

2007-01-30 Thread Mark D. Kaye

Hi,

PIX/ASA supports IPv6, apparently; see below.

Don't know anyone who has tested it yet though ;-)

http://www.cisco.com/en/US/products/ps6120/products_configuration_guide_chapter09186a0080636f44.html

Mark 



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Joe Abley
Sent: 30 January 2007 01:34
To: Brandon Galbraith
Cc: nanog@merit.edu
Subject: Re: Google wants to be your Internet



On 29-Jan-2007, at 20:12, Brandon Galbraith wrote:

 On 1/29/07, Henning Brauer [EMAIL PROTECTED] wrote:

 * Joseph S D Yao [EMAIL PROTECTED] [2007-01-30 01:59]:
 
  IPv6 firewalls?  Where?  Good ones?

 OpenBSD's pf has had support for v6 for years now.

 Do a fair amount of appliance firewalls support it?

To be fair, I think the question was about good firewalls, not  
appliances.


Joe



Re: Google wants to be your Internet

2007-01-29 Thread Joseph S D Yao

On Wed, Jan 24, 2007 at 01:48:04PM -, [EMAIL PROTECTED] wrote:
...
 IPv6 makes NAT obsolete because IPv6 firewalls can provide all
 the useful features of IPv4 NAT without any of the downsides.
...

IPv6 firewalls?  Where?  Good ones?

-- 
Joe Yao
---
   This message is not an official statement of OSIS Center policies.


Re: Google wants to be your Internet

2007-01-29 Thread Henning Brauer

* Joseph S D Yao [EMAIL PROTECTED] [2007-01-30 01:59]:
 
 On Wed, Jan 24, 2007 at 01:48:04PM -, [EMAIL PROTECTED] wrote:
 ...
  IPv6 makes NAT obsolete because IPv6 firewalls can provide all
  the useful features of IPv4 NAT without any of the downsides.
 ...
 
 IPv6 firewalls?  Where?  Good ones?

OpenBSD's pf has had support for v6 for years now.

-- 
Henning Brauer, [EMAIL PROTECTED], [EMAIL PROTECTED]
BS Web Services, http://bsws.de
Full-Service ISP - Secure Hosting, Mail and DNS Services
Dedicated Servers, Rootservers, Application Hosting - Hamburg & Amsterdam


Re: Google wants to be your Internet

2007-01-29 Thread Brandon Galbraith

On 1/29/07, Henning Brauer [EMAIL PROTECTED] wrote:



* Joseph S D Yao [EMAIL PROTECTED] [2007-01-30 01:59]:

 On Wed, Jan 24, 2007 at 01:48:04PM -, [EMAIL PROTECTED] wrote:
 ...
  IPv6 makes NAT obsolete because IPv6 firewalls can provide all
  the useful features of IPv4 NAT without any of the downsides.
 ...

 IPv6 firewalls?  Where?  Good ones?

OpenBSD's pf has had support for v6 for years now.



Do a fair amount of appliance firewalls support it?

-brandon


Re: Google wants to be your Internet

2007-01-29 Thread Bernhard Schmidt

Henning Brauer [EMAIL PROTECTED] wrote:

  IPv6 makes NAT obsolete because IPv6 firewalls can provide all
  the useful features of IPv4 NAT without any of the downsides.
 ...
 
 IPv6 firewalls?  Where?  Good ones?
 OpenBSD's pf has had support for v6 for years now.

Which works pretty well, if you forget one tiny thing (from pf.conf(5)):

| FRAGMENT HANDLING
| [...]
| Currently, only IPv4 fragments are supported and IPv6 fragments are
| blocked unconditionally.

which can bite you in the ass pretty hard if you don't expect it.
Fragments are valid packets and crucial for many applications, so
blocking them unconditionally (even with a pass inet6 from any to any
policy) is bad.
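What a filter like that is keying on is cheap to spot: the IPv6 Fragment extension header, Next Header value 44. A toy check, assuming only the simplest case where the fragment header immediately follows the fixed 40-byte IPv6 header (the packet bytes are made up; real stacks must walk the whole extension-header chain):

```python
FRAGMENT_HEADER = 44  # IPv6 Fragment extension header (RFC 2460)

def is_fragment(ipv6_packet: bytes) -> bool:
    """Crude check: does the fixed IPv6 header chain straight into a
    Fragment extension header? Only inspects the first Next Header."""
    if len(ipv6_packet) < 40 or ipv6_packet[0] >> 4 != 6:
        return False
    return ipv6_packet[6] == FRAGMENT_HEADER  # Next Header field

# A minimal made-up header: version 6, Next Header = 44, zeroed addresses.
pkt = bytes([0x60, 0, 0, 0, 0, 8, FRAGMENT_HEADER, 64]) + bytes(32)
print(is_fragment(pkt))  # True
```

The point of the quoted pf.conf(5) caveat is that packets matching this check are dropped no matter what the ruleset says.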

Other working solutions are

- Linux + nf_conntrack (maybe in a few kernel versions; there was an
  oops in 2.6.20-rc5 with (tadaaa) fragment handling, though it's been fixed)
- Cisco ASA and FWSM
- IIRC Juniper (Netscreen) firewalls

and I guess some more.

Regards,
Bernhard



Re: Google wants to be your Internet

2007-01-29 Thread Joe Abley



On 29-Jan-2007, at 20:12, Brandon Galbraith wrote:


On 1/29/07, Henning Brauer [EMAIL PROTECTED] wrote:

* Joseph S D Yao [EMAIL PROTECTED] [2007-01-30 01:59]:

 IPv6 firewalls?  Where?  Good ones?

OpenBSD's pf has had support for v6 for years now.

Do a fair amount of appliance firewalls support it?


To be fair, I think the question was about good firewalls, not  
appliances.



Joe



Re: Google wants to be your Internet

2007-01-29 Thread Joel Jaeggli

Joseph S D Yao wrote:
 On Wed, Jan 24, 2007 at 01:48:04PM -, [EMAIL PROTECTED] wrote:
 ...
 IPv6 makes NAT obsolete because IPv6 firewalls can provide all
 the useful features of IPv4 NAT without any of the downsides.
 ...
 
 IPv6 firewalls?  Where?  Good ones?

There are vendors on this list that make/sell/support IPv6 firewalls. If
you have a need, you should be able to arrange for an eval from several
of them.

regards
joelja


Re: Google wants to be your Internet

2007-01-29 Thread Steven M. Bellovin

On Mon, 29 Jan 2007 19:57:24 -0500
Joseph S D Yao [EMAIL PROTECTED] wrote:

 
 On Wed, Jan 24, 2007 at 01:48:04PM -, [EMAIL PROTECTED] wrote:
 ...
  IPv6 makes NAT obsolete because IPv6 firewalls can provide all
  the useful features of IPv4 NAT without any of the downsides.
 ...
 
 IPv6 firewalls?  Where?  Good ones?
 
Checkpoint claims to have supported IPv6 since 2002:
http://www.checkpoint.com/press/2002/ipv6_081402.html


--Steve Bellovin, http://www.cs.columbia.edu/~smb


Re: Electric utilities, IP addressing, and BPL (was Re: Google wants to be your Internet)

2007-01-26 Thread JORDI PALET MARTINEZ

Hi Josh,

According to our own experiments and real deployment experience in this
field over many years, IPv6 is the best solution. Because of the lack of
IPv6 support in many applications, we also keep IPv4+NAT, so it's
actually a dual-stack situation, but all new sensors, meters, and apps
in general can easily support IPv6 and make your life much easier.

You may not need IP up to the edge now, but sooner or later that will
change, as will the sheer number of devices and networks that you want
to tie together. Having enough addresses gives you end-to-end
connectivity, which again may not be seen as a real need now, but it
will become one sooner or later.

You can find lots of documents from a project that we finished a couple
of years ago at http://www.6power.org. They aren't updated to the latest
technology in this field, but they are certainly a good reading source.

Regards,
Jordi




 From: Josh Gerber [EMAIL PROTECTED]
 Reply-To: [EMAIL PROTECTED]
 Date: Thu, 25 Jan 2007 11:56:31 -0800
 To: [EMAIL PROTECTED]
 Subject: Electric utilities, IP addressing, and BPL (was Re: Google wants to be
 your Internet)
 
 
* From: Sean Donelan
* Date: Tue Jan 23 15:06:02 2007
 
 What do you do when the electric companies split up again,
 renumber the meters into different network blocks?
 
 Thanks for the discussion. It's rare I've seen a thread on NANOG
 that's so pertinent to my own situation.
 
 I work for a fairly large utility faced with these very problems. We
 are developing a ~half $billion project to replace our electric meters
 with 2-way communicating ones, and have several other things going,
 such as intelligent grid projects that will grow the size of our
 networks many times over. I've also been on a broadband over power
 line (BPL) project for a year and a half, and have a good
 understanding of how & where it works, and where it doesn't.
 
 The US electric distribution grid is, for the most part, quite dumb.
 There are various SCADA and fault protection systems out there, but
 not everything is automated, and certainly not everything is covered.
 For example, when the power goes out somewhere, almost all of the
 time, we don't know until we receive calls from customers. Not a
 proactive system. Not the level of service that our customers want.
 
 Intelligent communicating meters are at the core of the modernization
 of the grid (and of utility profits, if you look at it in certain
 ways). You'll eventually find these meters tied to time-of-use
 electric rates (more $$$ during peak times), as well as used as part
 of an overall instrumentation effort of the grid.
 
 Demand response is also a piece of that puzzle. Shaving a few
 megawatts on a hot day by reducing load (think 50,000 thermostats
 having their temperature settings moved up a few degrees on a really
 hot day) can mean avoiding building power plant and/or transmission
 line capacity. The electric meter is potentially the utility's gateway
 into a neighborhood or home network, and could enable those
 applications.
 
 Instrumentation of the grid will also take sensor nets, on the order
 of tens or hundreds of thousands of endpoints per distribution system.
 We'll find similar networks in customer homes - the load control
 boxes, the thermostats, etc., that will extend from our edge.
 Potentially, hundreds of thousands more networks there.
 
 There are some utilities that will use BPL as a partial or whole
 transport for metering & other utility applications, with the notion
 that there's enough bandwidth to handle residential access services as
 well (this will often be via a tenant or other 3rd party). While BPL
 business cases won't be made without utility applications like meter
 reading, it's unlikely they'll be made without an access play. The
 other stuff just doesn't require the bandwidth.
 
 Do we need IP all the way to the edge of the network? We don't think
 so. But the alternatives, such as adding ZigBee mesh networks beyond
 the IP edge, have their own problems of scale & complexity.
 
 I've been following the notion that however far our IP network extends
 out, IPv4 and RFC1918 have plenty of space for us. I'm reminded by
 this thread, however, that M&A activities, among others, might one day
 bite us for making that assumption.
 
 -jg




**
The IPv6 Portal: http://www.ipv6tf.org

Bye 6Bone. Hi, IPv6 !
http://www.ipv6day.org






Electric utilities, IP addressing, and BPL (was Re: Google wants to be your Internet)

2007-01-25 Thread Josh Gerber



   * From: Sean Donelan
   * Date: Tue Jan 23 15:06:02 2007

What do you do when the electric companies split up again,
renumber the meters into different network blocks?


Thanks for the discussion. It's rare I've seen a thread on NANOG
that's so pertinent to my own situation.

I work for a fairly large utility faced with these very problems. We
are developing a ~half $billion project to replace our electric meters
with 2-way communicating ones, and have several other things going,
such as intelligent grid projects that will grow the size of our
networks many times over. I've also been on a broadband over power
line (BPL) project for a year and a half, and have a good
understanding of how & where it works, and where it doesn't.

The US electric distribution grid is, for the most part, quite dumb.
There are various SCADA and fault protection systems out there, but
not everything is automated, and certainly not everything is covered.
For example, when the power goes out somewhere, almost all of the
time, we don't know until we receive calls from customers. Not a
proactive system. Not the level of service that our customers want.

Intelligent communicating meters are at the core of the modernization
of the grid (and of utility profits, if you look at it in certain
ways). You'll eventually find these meters tied to time-of-use
electric rates (more $$$ during peak times), as well as used as part
of an overall instrumentation effort of the grid.

Demand response is also a piece of that puzzle. Shaving a few
megawatts on a hot day by reducing load (think 50,000 thermostats
having their temperature settings moved up a few degrees on a really
hot day) can mean avoiding building power plant and/or transmission
line capacity. The electric meter is potentially the utility's gateway
into a neighborhood or home network, and could enable those
applications.

Instrumentation of the grid will also take sensor nets, on the order
of tens or hundreds of thousands of endpoints per distribution system.
We'll find similar networks in customer homes - the load control
boxes, the thermostats, etc., that will extend from our edge.
Potentially, hundreds of thousands more networks there.

There are some utilities that will use BPL as a partial or whole
transport for metering & other utility applications, with the notion
that there's enough bandwidth to handle residential access services as
well (this will often be via a tenant or other 3rd party). While BPL
business cases won't be made without utility applications like meter
reading, it's unlikely they'll be made without an access play. The
other stuff just doesn't require the bandwidth.

Do we need IP all the way to the edge of the network? We don't think
so. But the alternatives, such as adding ZigBee mesh networks beyond
the IP edge, have their own problems of scale & complexity.

I've been following the notion that however far our IP network extends
out, IPv4 and RFC1918 have plenty of space for us. I'm reminded by
this thread, however, that M&A activities, among others, might one day
bite us for making that assumption.

-jg


RE: Google wants to be your Internet

2007-01-24 Thread michael.dillon

 We also see this with extranet/supply-chain-type connectivity 
 between large companies who have overlapping address space, 
 and I'm afraid it's only going to become more common as more 
 of these types of relationships are established.

Fortunately, IP addresses are not intended for use on the
Internet. Rather, they are intended for use with Internet
Protocol (IP) implementations. That's why the RIRs, in 
alignment with RFC 2050, section 3(a), do give out IP address
allocations to organizations who are connected to extranet-type
networks. If you read RFC 1918, section 2, category 3, you will
see that this is consistent.

So if the power companies want to assign a unique network
address to all power meters then there is no good reason
to stop them. After all, it is consistent with the goals
of the original IP designers to address every light switch
and toaster.

Just remember, IP addresses are *NOT* Internet addresses.
They are Internet Protocol addresses. Connection to the 
Internet and public announcement of prefixes are totally
irrelevant.

--Michael Dillon



Re: Google wants to be your Internet

2007-01-24 Thread Roland Dobbins



On Jan 24, 2007, at 12:33 AM, [EMAIL PROTECTED] wrote:


Just remember, IP addresses are *NOT* Internet addresses.
They are Internet Protocol addresses. Connection to the
Internet and public announcement of prefixes are totally
irrelevant.


Of course I understand this, but I also understand that if one can  
get away with RFC1918 addresses on a non-Internet-connected network,  
it's not a bad idea to do so in and of itself; quite the opposite, in  
fact, as long as one is sure one isn't buying trouble down the road.


---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-24 Thread Andy Davidson



On 23 Jan 2007, at 16:48, Sean Donelan wrote:


Why is IP required,


Because using something that works so well means less wheel reinvention.

and even if you used IP for transport why must the meter  
identification be based on an IP address?


Identification via IP address (exclusively) is bad.  I'd argue that if
you are looking to check the meter for consumption data and for
problems, a store-and-forward messaging system that didn't depend on
always-on connectivity would preserve enough address space to make it
viable as well.


-a




Re: Google wants to be your Internet

2007-01-24 Thread Mark Smith

On Wed, 24 Jan 2007 02:07:06 -0800
Roland Dobbins [EMAIL PROTECTED] wrote:


 Of course I understand this, but I also understand that if one can  
 get away with RFC1918 addresses on a non-Internet-connected network,  
 it's not a bad idea to do so in and of itself; quite the opposite, in  
 fact, as long as one is sure one isn't buying trouble down the road.
 

The problem is that you can't be sure that if you use RFC1918 today you
won't be bitten by its non-uniqueness property in the future. When
you're asked to diagnose a fault with a device with the IP address
192.168.1.1, and you've got an unknown number of candidate devices
using that address, you really start to see the value in having
worldwide-unique, but not necessarily publicly visible, addressing.
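IPv6 answers exactly this with RFC 4193 unique local addresses: prefixes under fd00::/8 carrying a pseudo-random 40-bit Global ID, overwhelmingly likely to stay unique across mergers without ever being publicly routed. A sketch of deriving one (simplified: the RFC hashes a timestamp plus an EUI-64, while this just hashes random bytes toward the same goal):

```python
import hashlib
import ipaddress
import os

def make_ula_prefix():
    # RFC 4193: fd00::/8 followed by a pseudo-random 40-bit Global ID
    # yields a /48 that is probabilistically unique worldwide.
    global_id = hashlib.sha1(os.urandom(16)).digest()[-5:]
    prefix = ipaddress.IPv6Address(bytes([0xFD]) + global_id + bytes(10))
    return ipaddress.ip_network(f"{prefix}/48")

ula = make_ula_prefix()
print(ula)  # a random fdXX:XXXX:XXXX::/48, different on every run
```

Two merging networks that each generated their own Global ID this way would collide with probability on the order of 2^-40, instead of the near-certainty of both having picked 192.168.1.0/24.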

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Google wants to be your Internet

2007-01-24 Thread Roland Dobbins



On Jan 24, 2007, at 4:58 AM, Mark Smith wrote:

The problem is that you can't be sure that if you use RFC1918 today you
won't be bitten by its non-uniqueness property in the future. When
you're asked to diagnose a fault with a device with the IP address
192.168.1.1, and you've got an unknown number of candidate devices
using that address, you really start to see the value in having
worldwide-unique, but not necessarily publicly visible, addressing.



That's what I meant by the 'as long as one is sure one isn't buying  
trouble down the road' part.  Having encountered problems with  
overlapping address space many times in the past, I'm quite aware of  
the pain, thanks.



RFC1918 was created for a reason, and it is used (and misused, we all  
understand that) today by many network operators for a reason.  It is  
up to the architects and operators of networks to determine whether  
or not they should make use of globally-unique addresses or RFC1918  
addresses on a case-by-case basis; making use of RFC1918 addressing  
is not an inherently stupid course of action, its appropriateness in  
any given situation is entirely subjective.


---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-24 Thread Jason LeBlanc


I hear you on the double, triple NAT nightmare; I'm there myself.  I'm
working on rolling out VRFs to solve that problem, still testing.  The
NAT complexities and bugs (NAT translations losing their mind and
killing connectivity for important apps) are just too much for some of
our customers, users, etc. to deal with.  Some days it kills me that v6
is still not really viable; I keep asking providers where they're at
with it.  Their most common complaint is that the operating systems
don't support it yet.  They mention primarily Windows, since that is
what is most implemented, not in the colo world but what the users
have.  I suggested they offer a service that somehow translates (heh,
shifting the pain to them) v4 to v6 for their customers, to move it
along.


Roland Dobbins wrote:



On Jan 24, 2007, at 4:58 AM, Mark Smith wrote:


The problem is that you can't be sure that if you use RFC1918 today you
won't be bitten by its non-uniqueness property in the future. When
you're asked to diagnose a fault with a device with the IP address
192.168.1.1, and you've got an unknown number of candidate devices
using that address, you really start to see the value in having
worldwide-unique, but not necessarily publicly visible, addressing.



That's what I meant by the 'as long as one is sure one isn't buying 
trouble down the road' part.  Having encountered problems with 
overlapping address space many times in the past, I'm quite aware of 
the pain, thanks.



RFC1918 was created for a reason, and it is used (and misused, we all 
understand that) today by many network operators for a reason.  It is 
up to the architects and operators of networks to determine whether or 
not they should make use of globally-unique addresses or RFC1918 
addresses on a case-by-case basis; making use of RFC1918 addressing is 
not an inherently stupid course of action, its appropriateness in any 
given situation is entirely subjective.


---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder







RE: Google wants to be your Internet

2007-01-24 Thread michael.dillon

 The problem is that you can't be sure that if you use RFC1918 
 today you won't be bitten by its non-uniqueness property in 
 the future. When you're asked to diagnose a fault with a 
 device with the IP address 192.168.1.1, and you've got an 
 unknown number of candidate devices using that address, you 
 really start to see the value in having worldwide-unique, 
 but not necessarily publicly visible, addressing.

A lot of people who implemented RFC 1918 addressing in the 
past didn't actually read RFC 1918. They just heard the mantra
of address conservation and learned that RFC 1918 defined something
called private addresses. Then, without reading the RFC, they
made assumptions in interpreting the meaning of private. Now,
many of those people or their successors have been bitten hard by
problems created by using RFC 1918 addresses in networks which 
are not really private at all, i.e. not wholly unconnected from other
IP networks. Those people now see the benefits of using truly
globally unique registered addresses.

The whole address conservation mantra has turned out to be a lot
of smoke and mirrors anyway. The dotcom collapse followed by the
telecom collapse shows that it was a sham argument based on the
ridiculous theory that exponential growth of the network was 
really sustainable. Now we live in a time where there is no
shortage of IP addresses. Even IPv4 addresses are not guaranteed
to ever run out as IPv6 begins to be used for some of the drivers
of network growth. 

IPv6 makes NAT obsolete because IPv6 firewalls can provide all
the useful features of IPv4 NAT without any of the downsides.

--Michael Dillon
 


Re: Google wants to be your Internet

2007-01-24 Thread Roland Dobbins



On Jan 24, 2007, at 5:48 AM, [EMAIL PROTECTED] wrote:


The whole address conservation mantra has turned out to be a lot
of smoke and mirrors anyway.


At the time, yes, this particular issue was overhyped, just as the
routing-table-expansion issue was underhyped.  As we move to an
'Internet of Things', however, it will become manifest.


With regard to the perceived advantages and disadvantages of IPv6 as
it is currently defined, there is a wide range of opinion on the
subject.  For many, the 'still-need-NAT-under-IPv6 vs.
IPv6-eliminates-the-need-for-NAT' debate is of minor importance
compared to more fundamental questions.


---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






RE: Google wants to be your Internet

2007-01-24 Thread Jamie Bowden

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On 
 Behalf Of Jason LeBlanc
 Sent: Wednesday, January 24, 2007 8:40 AM
 To: Roland Dobbins
 Cc: NANOG
 Subject: Re: Google wants to be your Internet

 I hear you on the double, triple NAT nightmare; I'm there myself.
 I'm working on rolling out VRFs to solve that problem, still testing.
 The NAT complexities and bugs (NAT translations losing their mind and
 killing connectivity for important apps) are just too much for some
 of our customers, users, etc. to deal with.  Some days it kills me
 that v6 is still not really viable; I keep asking providers where
 they're at with it.  Their most common complaint is that the
 operating systems don't support it yet.  They mention primarily
 Windows, since that is what is most implemented, not in the colo
 world but what the users have.  I suggested they offer a service that
 somehow translates (heh, shifting the pain to them) v4 to v6 for
 their customers, to move it along.

Windows XP SP2 has IPv6.  It isn't enabled by default, but it's not
difficult to turn on.

Apparently Vista does do IPv6 by default out of the box, but I don't
have a Vista system to play with yet to confirm this.

Jamie Bowden
-- 
It was half way to Rivendell when the drugs began to take hold
Hunter S Tolkien Fear and Loathing in Barad Dur
Iain Bowen [EMAIL PROTECTED]


Re: Google wants to be your Internet

2007-01-24 Thread Joe Abley



On 24-Jan-2007, at 10:01, Jamie Bowden wrote:


Some days it kills
me that v6
is still not really viable, I keep asking providers where they're at
with it.  Their most common complaint is that the operating systems
don't support it yet.  They mention primarily Windows since
that is what
is most implemented, not in the colo world but what the users
have.


Windows XP SP2 has IPv6.  It isn't enabled by default, but it's not
difficult to turn on.

Apparently Vista does do IPv6 by default out of the box, but I don't
have a Vista system to play with yet to confirm this.


I might argue that, legacy systems and hardware aside, the main
reason that v6 might be considered non-viable these days is the lack
of customers willing to pay for it.


I don't think the viability of v6 has been blocked on operating
systems or router hardware for quite some time now. It's still a
problem for many operational support systems, but arguably that would
change rapidly if there were some prospect of revenue.



Joe



Re: Google wants to be your Internet

2007-01-23 Thread Daniel Golding


One interesting point - they plan to use Broadband over Power Line
(BPL) technology to do this. Meter monitoring is the killer app for
BPL, which can then also be used for home broadband. Meter reading is
one of the top costs and trickiest problems for utilities.


- Dan

On Jan 22, 2007, at 12:28 PM, Niels Bakker wrote:



* [EMAIL PROTECTED] (Jim Shankland) [Mon 22 Jan 2007, 18:21 CET]:

Travis H. [EMAIL PROTECTED] writes:
IIRC, someone representing the electrical companies approached  
someone representing network providers, possibly the IETF, to ask  
about the feasibility of using IP to monitor the electrical  
meters throughout the US

The response was yeah, well, maybe with IPv6.


Which is nonsense.  More gently, it's only true if you not only
want to use IP to monitor electrical meters, but want to use the
(global) Internet to monitor electrical meters.


I'd love to hear the business case for why my home electrical  
meter needs to be directly IP-addressable from an Internet cafe in  
Lagos.


It's not nonsense.  Those elements need to be unique.  RFC1918  
isn't unique enough (think what happens during a corporate merger).



-- Niels.




Re: Google wants to be your Internet

2007-01-23 Thread Brandon Galbraith

On 1/22/07, Daniel Golding [EMAIL PROTECTED] wrote:



One interesting point - they plan to use Broadband over Power Line (BPL)
technology to do this. Meter monitoring is the killer app for BPL, which can
then also be used for home broadband. Meter reading is one of the top costs
and trickiest problems for utilities.

- Dan

On Jan 22, 2007, at 12:28 PM, Niels Bakker wrote:



Why don't utilities strike deals with cellular providers to push data back to
HQ over the cellular network at low utilization times (how many people use
GPRS in the dead of night?).

-brandon


Re: Google wants to be your Internet

2007-01-23 Thread Christian Kuhtz
Dan,

there's one very big assumption in your statement: that the cost of BPL for 
metering is economical or workable in the regulatory model.  Forget value-added 
services for a moment; the cost often cannot be burdened on the rate payer (a 
regulatory constraint).  So funding this sort of effort is non-trivial.

Best regards,
Christian

--
Sent from my BlackBerry.  

-Original Message-
From: Daniel Golding [EMAIL PROTECTED]
Date: Mon, 22 Jan 2007 18:52:45 
To:Niels Bakker [EMAIL PROTECTED]
Cc:nanog@merit.edu
Subject: Re: Google wants to be your Internet

 
One interesting point - they plan to use Broadband over Power Line (BPL) 
technology to do this. Meter monitoring is the killer app for BPL, which can 
then also be used for home broadband. Meter reading is one of the top costs and 
trickiest problems for utilities. 

- Dan
 


On Jan 22, 2007, at 12:28 PM, Niels Bakker wrote:



* [EMAIL PROTECTED]: mailto:[EMAIL PROTECTED]  (Jim Shankland) [Mon 22 Jan 
2007, 18:21 CET]: 
Travis H. [EMAIL PROTECTED]: mailto:[EMAIL PROTECTED]  writes: 
IIRC, someone representing the electrical companies approached someone 
representing network providers, possibly the IETF, to ask about the feasibility 
of using IP to monitor the electrical meters throughout the US
The response was yeah, well, maybe with IPv6. 


Which is nonsense.  More gently, it's only true if you not only want to use IP 
to monitor electrical meters, but want to use the (global) Internet to monitor 
electrical meters.


I'd love to hear the business case for why my home electrical meter needs to be 
directly IP-addressable from an Internet cafe in Lagos. 


It's not nonsense.  Those elements need to be unique.  RFC1918 isn't unique 
enough (think what happens during a corporate merger).




-- Niels. 


Re: Google wants to be your Internet

2007-01-23 Thread Alexander Harrowell


Why don't utilities strike deals with celluar providers to push data back to

HQ over the cellular network at low utilization times (how many people use
GPRS in the dead of night?).

 -brandon


Enron did this with SkyTel paging in California. Or rather they wanted
to do it, couldn't hack it, so used the bulk-bought pager airtime as a
perk.


Re: Google wants to be your Internet

2007-01-23 Thread Valdis . Kletnieks
On Tue, 23 Jan 2007 10:18:09 CST, Brandon Galbraith said:
 Why don't utilities strike deals with cellular providers to push data back to
 HQ over the cellular network at low utilization times (how many people use
 GPRS in the dead of night?).

Especially in rural areas (where physically reading meters sucks the most due
to long inter-house distances), you have no guarantee of good cellular coverage.

The electric company *can*, however, assume they have copper connectivity to
the meter, by definition.




Re: Google wants to be your Internet

2007-01-23 Thread Sean Donelan


On Mon, 22 Jan 2007, Daniel Golding wrote:
One interesting point - they plan to use Broadband over Power Line (BPL) 
technology to do this. Meter monitoring is the killer app for BPL, which can 
then also be used for home broadband, Meter reading is one of the top costs 
and trickiest problems for utilities.


Why is IP required, and even if you used IP for transport why must the 
meter identification be based on an IP address?  If meters only report 
information, they don't need a unique transport address and could put
the meter identifier in the application data.

Even if the intent is to include additional controls, e.g. cycle air 
conditioners during peak periods, you still don't need to use IP or
unique IP transport addresses.

Just because you have the hammer called IP, doesn't mean you must use
it on everything.
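Sean's point, that the application identifier should ride inside the payload rather than be tied to the transport address, can be sketched in a few lines of Python (the field names and serial format here are invented for illustration):

```python
import json

def build_report(serial: str, reading_kwh: float) -> bytes:
    # Identity lives in the application payload, not in whatever
    # source address the transport of the day happens to assign.
    return json.dumps({"meter_serial": serial, "kwh": reading_kwh}).encode()

def parse_report(payload: bytes):
    msg = json.loads(payload)
    return msg["meter_serial"], msg["kwh"]

# The head-end can renumber or re-home the transport freely; the
# meter's identity survives unchanged.
wire = build_report("METER-0042", 1234.5)
meter_id, kwh = parse_report(wire)
print(meter_id, kwh)
```

The same report could travel over dial-up, BPL, or a NATed IP path without the collector caring which.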




Re: Google wants to be your Internet

2007-01-23 Thread Donald Stahl



Especially in rural areas (where physically reading meters sucks the most due
to long inter-house distances), you have no guarantee of good cellular coverage.

The electric company *can* however assume they have copper connectivity to
the meter by definition

Doesn't have to be copper- it could be aluminum :)

-Don


Re: Google wants to be your Internet

2007-01-23 Thread Brandon Galbraith


Why is IP required, and even if you used IP for transport why must the
meter identification be based on an IP address?  If meters only report
information, they don't need a unique transport address and could put
the meter identifier in the application data.

Even if the intent is to include additional controls, e.g. cycle air
conditioners during peak periods, you still don't need to use IP or
unique IP transport addresses.

Just because you have the hammer called IP, doesn't mean you must use
it on everything.



Exactly. A meter should be able to connect over an available transport
method, and be identifiable via a serial number, not an IP. It may need to
grab a DHCP address of some sort (or whatever the moniker is for the
transport available), but in the end its unique serial number should be
used to identify itself.


RE: Google wants to be your Internet

2007-01-23 Thread Jamie Bowden


Virginia Power replaced our meter over the summer with a new one that
has wireless on it.  The meter reader just drives a truck past the
houses and grabs the data without him/her ever leaving the truck.  I
have no idea what protocol they're using, or if it's even remotely
secure.

Jamie Bowden
-- 
It was half way to Rivendell when the drugs began to take hold
Hunter S Tolkien Fear and Loathing in Barad Dur
Iain Bowen [EMAIL PROTECTED]
 

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On 
 Behalf Of [EMAIL PROTECTED]
 Sent: Tuesday, January 23, 2007 11:44 AM
 To: Brandon Galbraith
 Cc: Daniel Golding; Niels Bakker; nanog@merit.edu
 Subject: Re: Google wants to be your Internet
 
 On Tue, 23 Jan 2007 10:18:09 CST, Brandon Galbraith said:
  Why don't utilities strike deals with cellular providers to 
 push data back to
  HQ over the cellular network at low utilization times (how 
 many people use
  GPRS in the dead of night?).
 
 Especially in rural areas (where physically reading meters 
 sucks the most due
 to long inter-house distances), you have no guarantee of good 
 cellular coverage.
 
 The electric company *can* however assume they have copper 
 connectivity to
 the meter by definition
 


Re: Google wants to be your Internet

2007-01-23 Thread Saku Ytti

On (2007-01-23 12:25 -0500), Jamie Bowden wrote:

 Virginia Power replaced our meter over the summer with a new one that
 has wireless on it.  The meter reader just drives a truck past the
 houses and grabs the data without him/her ever leaving the truck.  I
 have no idea what protocol they're using, or if it's even remotely
 secure.

We have it here too; a few times there have been articles in the
newspaper about car alarms going off along the street when the
meters are read :).

-- 
  ++ytti


Re: Google wants to be your Internet

2007-01-23 Thread Chris L. Morrow



On Mon, 22 Jan 2007, Jim Shankland wrote:


 Travis H. [EMAIL PROTECTED] writes:

  IIRC, someone representing the electrical companies approached
  someone representing network providers, possibly the IETF, to
  ask about the feasibility of using IP to monitor the electrical
  meters throughout the US
 
  The response was yeah, well, maybe with IPv6.

 Which is nonsense.  More gently, it's only true if you not only
 want to use IP to monitor electrical meters, but want the use
 the (global) Internet to monitor electrical meters.

 I'd love to hear the business case for why my home electrical meter
 needs to be directly IP-addressable from an Internet cafe in Lagos.

globally unique addresses

I have an electric company, it's got 2500 partners, all with the same
'internal IP addressing plan' (192.168.1.0/24). We need to communicate; is
NAT on both sides really efficient?
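The collision Chris describes is easy to demonstrate mechanically; a quick sketch with Python's ipaddress module (the partner names and prefixes are made up):

```python
import ipaddress

# Our internal plan, plus hypothetical partner plans: two partners
# picked the very same RFC 1918 block, one happened not to.
ours = ipaddress.ip_network("192.168.1.0/24")
partners = {
    "partner-a": ipaddress.ip_network("192.168.1.0/24"),
    "partner-b": ipaddress.ip_network("192.168.1.0/24"),
    "partner-c": ipaddress.ip_network("172.16.5.0/24"),
}

# Each overlapping link needs NAT (or renumbering) on at least one
# side before the two networks can exchange a single packet.
colliding = sorted(name for name, net in partners.items()
                   if net.overlaps(ours))
print(colliding)  # ['partner-a', 'partner-b']
```

With 2500 partners drawing from the same small set of popular RFC 1918 blocks, collisions are the norm rather than the exception.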


Re: Google wants to be your Internet

2007-01-23 Thread Jeroen Massar
[ 2-in-1, before I hit the 'too many flames posted' threshold ;) ]

Roland Dobbins wrote:
 
 
 On Jan 22, 2007, at 10:49 AM, Jeroen Massar wrote:
 
 But which address space do you put in the network behind the VPN?

 RFC1918!? Oh, already using that on the DSL link to where you are
 VPN'ing in from. oopsy ;)
 
 Actually, NBD, because you can handle that with a VPN client which does
 a virtual adaptor-type of deal and overlapping address space doesn't
 matter, because once you're in the tunnel, you're not sending/receiving
 outside of the tunnel.  Port-forwarding and NAT (ugly, but people do it)
 can apply, too.

How do you handle 192.168.1.1 talking to 192.168.1.1, oh I do mean a
different one. Or do you double-reverse-ultra-NAT the packets !? :)

One doesn't want to solve problems that way; that only creates new
problems. Good for a consultant's wallet, but not good for the
companies using it, nor for the programmer who has to work
around it in all his applications.

 That is the case for globally unique addresses and the reason why banks
 that use RFC1918 don't like it when they need to merge etc etc etc...
 
 Sure, and then you get into double-NATting and who redistributes what
 routes into who's IGP and all that kind of jazz (it's a big problem on
 extranet-type connections, too).  To be clear, all I was saying is that
 the subsidiary point that there are things which don't belong on the
 global Internet is a valid one

One can perfectly well request address space from any of the RIRs and never
announce or connect it to the Internet. One can even give I require
globally unique address space as the reason and you will receive it
from the RIR in question. One doesn't need to use globally unique
address space on the Internet; it is perfectly valid to use it in a
disconnected fashion. A simple example which nicely works: 9.0.0.0/8.
That network is definitely used, but not to be found on the Internet.

Also, how many military and bank networks are announced on the Internet?
If they are announced, they most likely are nicely firewalled away or
actually disconnected from the Internet by all means possible and just
used as a nice virus trap, as those silly viruses do scan them :)

 and entirely separate from any
 discussions of universal uniqueness in terms of address-space, as there
 are (ugly, non-scalable, brittle, but available) ways to work around
 such problems, in many cases.

You actually mean that you love to create all kinds of weird solutions
to solve a problem that could easily have been avoided in the first place!?

I don't think I would like to have your job doing those dirty things.

With IPv6 and ULA's especially those mistakes fortunately won't happen
that quickly any more. Saves you, me, and a load of other people a lot
of headaches. Maybe you won't be able to consult for them any more and
make quite some money off them, well that is too bad.


And now for some asbestos action:

short summary:
  a) use global addresses for everything,
  b) use proper acl's),
  c) toys exist that some people clearly don't know about yet ;)

No further technical content below, except for a reply to a flame.
(But don't miss out on the pdf mentioned for the toys ;)


Jim Shankland wrote:
 In response to my saying:

 I'd love to hear the business case for why my home electrical meter
 needs to be directly IP-addressable from an Internet cafe in Lagos.

 Jay R. Ashworth [EMAIL PROTECTED] responds, concisely:

 It doesn't, and it shouldn't.  That does *not* mean it should not
 have a globally unique ( != globally routable) IP address.

 and Jeroen Massar [EMAIL PROTECTED] presents several hypothetical
 scenarios.

Are you trying to say that I make things up? Neat, let's counter that:

http://www.sixxs.net/presentations/SwiNOG11-DeployingIPv6.pdf
(yes, I know large slideset, unfortunately alexandria.paf.se where the
pix came from is not available anymore and I can't find another source)

Slides 50-57 show some nice toys which you can get in the Asian region
already. This is thus far from hypothetical. Note the IPv6 address on
that hydro controller's LCD, it can be used to water your plants. Yes,
indeed, when that show was happening, it was globally addressable, just
like the camera and all the other toys there. And yes, I gave the plant
water using telnet :)

That you don't have it, or that you haven't seen it yet, doesn't mean it
does not exist.


 Note that the original goal was for electrical companies to monitor
 electrical meters.  Jeroen brings up backyard mini-nuke plants, seeing
 how much the power plug in the garden is being used, etc.  These may
 all be desirable goals, but they represent considerable mission creep
 from the originally stated goal.

What is your point with writing this section? Trying to explain that it
does not conform to your exact wishes? Or do you just want to type my
name a couple of times to practice it? I know it is as difficult to
pronounce as to type it ;) Dunno 

Re: Google wants to be your Internet

2007-01-23 Thread Sean Donelan


On Tue, 23 Jan 2007, Chris L. Morrow wrote:

globally unique addresses

I have an electric company, it's got 2500 partners, all with the same
'internal ip addressing plan' (192.168.1.0/24) we need to communicate, is
NAT on both sides really efficient?


What do you do when the electric companies split up again? Renumber the
meters into different network blocks?

Satellite set-top boxes don't need to be assigned unique phone numbers to 
report pay-per-view events back to Dish/DirecTV.  They just wake up every
few weeks, use the transport identifier already available on the 
customer's phone line and send the data, with an embedded identifier
independent of the network transport.  If the satellite STB ever knew
its telephone number, it is probably out-of-date after a few area code
changes.  The same thing happens with burglar alarm reporting, and lots 
of other things.


I think network engineers are too quick to use network identifiers for
applications.  Electric meters, set-top boxes, alarm systems, ice-boxes,
and whatever else you want to connect to the network don't need to have
the same permanent identifier for the application and the transport.  Most
of the time they don't need a permanent transport identifier.



Re: Google wants to be your Internet

2007-01-23 Thread Bob Martin


Our REA has been reading the meter via the copper running to our house 
for several years now. Took them less than 2 years to realize a savings. 
(And since it's a co-op, that means the price goes down :) )



[EMAIL PROTECTED] wrote:

On Tue, 23 Jan 2007 10:18:09 CST, Brandon Galbraith said:

Why don't utilities strike deals with cellular providers to push data back to
HQ over the cellular network at low utilization times (how many people use
GPRS in the dead of night?).


Especially in rural areas (where physically reading meters sucks the most due
to long inter-house distances), you have no guarantee of good cellular coverage.

The electric company *can* however assume they have copper connectivity to
the meter by definition


Re: Google wants to be your Internet

2007-01-23 Thread Marshall Eubanks


Hello;

On Jan 22, 2007, at 6:52 PM, Daniel Golding wrote:



One interesting point - they plan to use Broadband over Power Line  
(BPL) technology to do this. Meter monitoring is the killer app for  
BPL, which can then also be used for home broadband. Meter reading  
is one of the top costs and trickiest problems for utilities.




If they control the network, why is doing this with IPv6 out of the  
question ?


It seems like a good fit to me.

Regards
Marshall



- Dan

On Jan 22, 2007, at 12:28 PM, Niels Bakker wrote:



* [EMAIL PROTECTED] (Jim Shankland) [Mon 22 Jan 2007, 18:21 CET]:

Travis H. [EMAIL PROTECTED] writes:
IIRC, someone representing the electrical companies approached  
someone representing network providers, possibly the IETF, to  
ask about the feasibility of using IP to monitor the electrical  
meters throughout the US

The response was yeah, well, maybe with IPv6.


Which is nonsense.  More gently, it's only true if you not only  
want to use IP to monitor electrical meters, but want to use the  
(global) Internet to monitor electrical meters.


I'd love to hear the business case for why my home electrical  
meter needs to be directly IP-addressable from an Internet cafe  
in Lagos.


It's not nonsense.  Those elements need to be unique.  RFC1918  
isn't unique enough (think what happens during a corporate merger).



-- Niels.






Re: Google wants to be your Internet

2007-01-23 Thread Adrian Chadd

On Tue, Jan 23, 2007, Chris L. Morrow wrote:

 I have an electic company, it's got 2500 partners, all with the same
 'internal ip addressing plan' (192.168.1.0/24) we need to communicate, is
 NAT on both sides really efficient?

I've seen plenty of company setups that double/triple-NAT due to administrative/
security boundaries.

The majority of them seem to be government organisations too. :)



Adrian



Re: Google wants to be your Internet

2007-01-23 Thread Roland Dobbins



On Jan 23, 2007, at 11:51 AM, Jeroen Massar wrote:


  a) use global addresses for everything,


Everything which needs to be accessed globally, sure.  But I don't  
see this as a hard and fast requirement, it's up to the user based  
upon his projected use.



  b) use proper acl's),


Of course.


  c) toys exist that some people clearly don't know about yet ;)


Indeed.

---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-23 Thread Roland Dobbins



On Jan 23, 2007, at 3:38 PM, Adrian Chadd wrote:


The majority of them seem to be government organisations too. :)


We also see this with extranet/supply-chain-type connectivity between  
large companies who have overlapping address space, and I'm afraid  
it's only going to become more common as more of these types of  
relationships are established.


---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-22 Thread william(at)elan.net



On Mon, 22 Jan 2007, Travis H. wrote:


On Sun, Jan 21, 2007 at 06:41:19AM -0800, Lucy Lynch wrote:

sensor nets anyone?


The bridge-monitoring stuff sounds a lot like SCADA.

//drift

IIRC, someone representing the electrical companies approached
someone representing network providers, possibly the IETF, to
ask about the feasibility of using IP to monitor the electrical
meters throughout the US.  Presumably this would be via some
slow signalling protocol over the power lines themselves
(slow so that you don't trash the entire spectrum by signalling
in the range where power lines are good antennas - i.e. 30MHz or
so).

The response was yeah, well, maybe with IPv6.


I've heard that's pretty close to how IPv6 ends up being used as
far as current public production installations go (not counting
those done for research, etc). For example, apparently some railroad
in Europe set up IPv6 for use in its rail sensors. Then we also
recently heard of a large ISP using IPv6 to create a management
subnet for all their network equipment, etc.

--
William Leibzon
Elan Networks
[EMAIL PROTECTED]


CDN ISP (was: Re: Google wants to be your Internet)

2007-01-22 Thread Michal Krsek


Hi Adrian,

I've had a few ISPs out here in Australia indicate interest in a cache 
that
could do the normal stuff (http, rtsp, wma) and some of the p2p stuff 
(bittorrent
especially) with a smattering of QoS/shaping/control - but not cost 
upwards of

USD$100,000 a box. Lots of interest, no commitment.


Here in central Europe we had a caching-friendly environment from 1997 till 
2001 due to transit line pricing. A few years ago prices for upstream 
connectivity fell, and since then there has been no interest in caching. I've 
discussed this with several nationwide ISPs in .cz and found these reasons:


a) caching systems are not easy to implement and maintain (another system 
for configuration)

b) possible conflict with content owners
c) they want to sell as much as possible of bandwidth
d) they want to have their network fully transparent

I don't want to judge these answers, just FYI.

It doesn't help (at least in Australia) where the wholesale model of ADSL 
isn't
content-replication-friendly: we have to buy ATM or ethernet pipes to 
upstreams
and then receive each session via L2TP. Fine from an aggregation point of 
view,
but missing the true usefulness of content replication and caching - right 
at

the point where your customers connect in.


Same here.

(Disclaimer: I'm one of the Squid developers. I'm getting an increasing 
amount
of interest from CDN/content origination players but none from ISPs. I'd 
love
to know why ISPs don't view caching as a viable option in today's world 
and

what we could to do make it easier for y'all.)


Please see points (a)-(d). I think there can be also point (e).

Some telcos want to play the triple-play game (Internet, telephony and IPTV). 
They want to move their users back from the Internet to a relatively safe 
revenue area (television channel distribution via IPTV).


   Regards
   Michal Krsek



Re: CDN ISP (was: Re: Google wants to be your Internet)

2007-01-22 Thread Gadi Evron

On Mon, 22 Jan 2007, Michal Krsek wrote:


For broad-band ISPs, whose main goal is not to sell or re-sell transit 
though...

 
 a) caching systems are not easy to implement and maintain (another system 
 for configuration)
 b) possible conflict with content owners
 c) they want to sell as much as possible of bandwidth
 d) they want to have their network fully transparent

Only a, b apply. d I am not sure I understand.



Re: Google wants to be your Internet

2007-01-22 Thread Jim Shankland

Travis H. [EMAIL PROTECTED] writes:

 IIRC, someone representing the electrical companies approached
 someone representing network providers, possibly the IETF, to
 ask about the feasibility of using IP to monitor the electrical
 meters throughout the US
 
 The response was yeah, well, maybe with IPv6.

Which is nonsense.  More gently, it's only true if you not only
want to use IP to monitor electrical meters, but want to use
the (global) Internet to monitor electrical meters.

I'd love to hear the business case for why my home electrical meter
needs to be directly IP-addressable from an Internet cafe in Lagos.

Jim Shankland


Re: Google wants to be your Internet

2007-01-22 Thread Niels Bakker


* [EMAIL PROTECTED] (Jim Shankland) [Mon 22 Jan 2007, 18:21 CET]:

Travis H. [EMAIL PROTECTED] writes:
IIRC, someone representing the electrical companies approached 
someone representing network providers, possibly the IETF, to 
ask about the feasibility of using IP to monitor the electrical 
meters throughout the US


The response was yeah, well, maybe with IPv6.


Which is nonsense.  More gently, it's only true if you not only 
want to use IP to monitor electrical meters, but want to use 
the (global) Internet to monitor electrical meters.


I'd love to hear the business case for why my home electrical meter 
needs to be directly IP-addressable from an Internet cafe in Lagos.


It's not nonsense.  Those elements need to be unique.  RFC1918 isn't 
unique enough (think what happens during a corporate merger).



-- Niels.


Re: Google wants to be your Internet

2007-01-22 Thread Jeroen Massar
Jim Shankland wrote:
 Travis H. [EMAIL PROTECTED] writes:
 
 IIRC, someone representing the electrical companies approached
 someone representing network providers, possibly the IETF, to
 ask about the feasibility of using IP to monitor the electrical
 meters throughout the US

 The response was yeah, well, maybe with IPv6.
 
 Which is nonsense.  More gently, it's only true if you not only
 want to use IP to monitor electrical meters, but want to use
 the (global) Internet to monitor electrical meters.

Ah, cool, an advocate of NAT. Or didn't you want to say that one can
just make their own IPv4 address space and use that ?

Remember that the machines checking the billing most likely have a global
address, and RFC1918 ain't nice.

Barring getting address space, IPv4 and IPv6 will both do fine for it.

 I'd love to hear the business case for why my home electrical meter
 needs to be directly IP-addressable from an Internet cafe in Lagos.

1) You are on vacation and want to check if you actually turned on that
mini-nuke plant in your garden, so that you will retain some cash on
your credit card so that you can still come home.

2) You are still on vacation and want to check if your kids are not over
abusing electrical power instead of being 'green' for the environment.

3) You are already at the North Pole, Lagos was boring after all, and you
want to check about that email you received from the electrical company,
to see where the power usage was actually so high.
You notice that the power plug in the garden is being used a lot, look
at the webcam there and notice that your neighbor is using your power.

Oh, only one case eh? :)

But I guess it is nonsense.

Greets,
 Jeroen





Re: Google wants to be your Internet

2007-01-22 Thread Nicholas Suan



On Jan 22, 2007, at 12:15 PM, Jim Shankland wrote:



Travis H. [EMAIL PROTECTED] writes:


IIRC, someone representing the electrical companies approached
someone representing network providers, possibly the IETF, to
ask about the feasibility of using IP to monitor the electrical
meters throughout the US

The response was yeah, well, maybe with IPv6.


Which is nonsense.  More gently, it's only true if you not only
want to use IP to monitor electrical meters, but want to use
the (global) Internet to monitor electrical meters.

I'd love to hear the business case for why my home electrical meter
needs to be directly IP-addressable from an Internet cafe in Lagos.

Perhaps your electrical company has more than 16.7 million electrical  
meters it needs to address.
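The 16.7 million figure is simply 2**24, the size of a single IPv4 /8; a quick check with Python's ipaddress module:

```python
import ipaddress

# A single IPv4 /8 (the size of 10.0.0.0/8) holds 2**24 addresses:
v4_block = ipaddress.ip_network("10.0.0.0/8")
print(v4_block.num_addresses)  # 16777216, i.e. ~16.7 million

# For comparison, one IPv6 /64 holds 2**64 addresses, far more than
# any plausible fleet of meters:
v6_block = ipaddress.ip_network("2001:db8::/64")
print(v6_block.num_addresses)  # 18446744073709551616
```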


Re: Google wants to be your Internet

2007-01-22 Thread Roland Dobbins



On Jan 22, 2007, at 9:38 AM, Jeroen Massar wrote:


But I guess it is nonsense.


This is what ssh tunnels and/or VPN are for, IMHO.  It's perfectly  
legitimate to construct private networks (DCN/OOB nets, anyone?  How  
about that IV flow-control monitor which determines how much  
antibiotics you're getting per hour after your open-heart surgery?)  
for purposes which aren't suited to direct connectivity to/from  
anyone on the global Internet.


---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-22 Thread J. Oquendo

Roland Dobbins wrote:


This is what ssh tunnels and/or VPN are for, IMHO.  It's perfectly 
legitimate to construct private networks (DCN/OOB nets, anyone?  How 
about that IV flow-control monitor which determines how much 
antibiotics you're getting per hour after your open-heart surgery?) 
for purposes which aren't suited to direct connectivity to/from anyone 
on the global Internet.


---


Can this thread now be merged with the Cacti thread and made into Using 
Cacti for Monitoring your Heart and IV's While Using Your Google Toolbar?


--

J. Oquendo
http://pgp.mit.edu:11371/pks/lookup?op=getsearch=0x1383A743
sil . infiltrated @ net http://www.infiltrated.net 


The happiness of society is the end of government.
John Adams





Re: Google wants to be your Internet

2007-01-22 Thread Jeroen Massar
Roland Dobbins wrote:
 
 
 On Jan 22, 2007, at 9:38 AM, Jeroen Massar wrote:
 
 But I guess it is nonsense.
 
 This is what ssh tunnels and/or VPN are for, IMHO
[..]

Of course, for protecting them you should use that and firewalls and
other security measures that one deems neccesary.

But which address space do you put in the network behind the VPN?

RFC1918!? Oh, already using that on the DSL link to where you are
VPN'ing in from. oopsy ;)

That is the case for globally unique addresses and the reason why banks
that use RFC1918 don't like it when they need to merge etc etc etc...

Fortunately, for IPv6 we have ULA's (fc00::/7), that solves that problem.
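A minimal sketch of how such a ULA prefix is derived under RFC 4193 (SHA-1 over a timestamp plus an EUI-64, keeping the low 40 bits as the Global ID); the EUI-64 value below is made up for illustration:

```python
import hashlib
import ipaddress
import time

def make_ula_prefix(eui64: bytes) -> ipaddress.IPv6Network:
    # RFC 4193 Global ID: hash a time value and an interface EUI-64,
    # keep the low 40 bits, and prepend the fd00::/8 prefix byte.
    key = time.time_ns().to_bytes(8, "big") + eui64
    global_id = hashlib.sha1(key).digest()[-5:]      # low 40 bits
    packed = bytes([0xFD]) + global_id + bytes(10)   # fdXX:XXXX:XXXX::/48
    return ipaddress.IPv6Network((packed, 48))

ula = make_ula_prefix(bytes.fromhex("0211223344556677"))
print(ula)  # a pseudo-random /48 somewhere under fd00::/8
```

Because the Global ID is pseudo-random, two merging organizations that each generated their prefix this way are overwhelmingly unlikely to have picked the same /48.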

/me donates coffee around.

Greets,
 Jeroen





Re: Google wants to be your Internet

2007-01-22 Thread Roland Dobbins



On Jan 22, 2007, at 10:49 AM, Jeroen Massar wrote:


But which address space do you put in the network behind the VPN?

RFC1918!? Oh, already using that on the DSL link to where you are
VPN'ing in from. oopsy ;)


Actually, NBD, because you can handle that with a VPN client which  
does a virtual adaptor-type of deal and overlapping address space  
doesn't matter, because once you're in the tunnel, you're not sending/ 
receiving outside of the tunnel.  Port-forwarding and NAT (ugly, but  
people do it) can apply, too.




That is the case for globally unique addresses and the reason why  
banks

that use RFC1918 don't like it when they need to merge etc etc etc...


Sure, and then you get into double-NATting and who redistributes what  
routes into who's IGP and all that kind of jazz (it's a big problem  
on extranet-type connections, too).  To be clear, all I was saying is  
that the subsidiary point that there are things which don't belong on  
the global Internet is a valid one, and entirely separate from any  
discussions of universal uniqueness in terms of address-space, as  
there are (ugly, non-scalable, brittle, but available) ways to work  
around such problems, in many cases.


---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-22 Thread Jim Shankland

In response to my saying:

 I'd love to hear the business case for why my home electrical meter
 needs to be directly IP-addressable from an Internet cafe in Lagos.

Jay R. Ashworth [EMAIL PROTECTED] responds, concisely:

 It doesn't, and it shouldn't.  That does *not* mean it should not have
 a globally unique ( != globally routable) IP address.

and Jeroen Massar [EMAIL PROTECTED] presents several hypothetical
scenarios.

Note that the original goal was for electrical companies to monitor
electrical meters.  Jeroen brings up backyard mini-nuke plants, seeing
how much the power plug in the garden is being used, etc.  These may
all be desirable goals, but they represent considerable mission creep
from the originally stated goal.

None of Jeroen's applications requires end-to-end, packet-level access
to the individual devices in Jeroen's future (I assume) home.  You can
certainly argue that packet-level connectivity is better, easier to
engineer, scales better, etc., etc.; but it is not *required*.
In fact, there are sound engineering arguments against packet-level
access:  since we've dragged in the backyard nuke plant, consider what
happens when everybody has a backyard mini-nuke, with control software
written by Linksys, and it turns out that sending it a certain kind
of malformed packet can cause it to melt down 

No matter.  Reasonable people can disagree on the question of whether
every networkable device benefits from being globally, uniquely
addressable.  The burden on the proponents is higher than that:  there
are *costs* associated with such an architecture, and the proponents
of globally unique addressing need to show not only that it has benefits,
but that the benefits exceed the costs.  Coming full circle, the original
assertion was that IPv6 was required in order for electric companies
to use IP to monitor US electric meters.  That assertion is false, and
no amount of hand-waving about backyard nuke plants will make it true.

The history of IPv6 has been that it keeps receding into the future
as people's use of IPv4 adapts enough to make the current benefit of
switching to IPv6 smaller than the cost to do so.  Perhaps after a
decade or so, we're nearing the end of that road.  Or perhaps, as
F. Scott Fitzgerald once wrote about IPv6, it is:

the orgiastic future that year by year recedes before
us. It eluded us then, but that's no matter - tomorrow
we will run faster, stretch out our arms further
And one fine morning -

We'll see.

Jim Shankland


Re: CDN ISP (was: Re: Google wants to be your Internet)

2007-01-22 Thread Mark Smith

On Mon, 22 Jan 2007 04:15:44 -0600 (CST)
Gadi Evron [EMAIL PROTECTED] wrote:

 
 On Mon, 22 Jan 2007, Michal Krsek wrote:
 
 
 For broad-band ISPs, whose main goal is not to sell or re-sell transit 
 though...
 
  
  a) caching systems are not easy to implement and maintain (another system 
  for configuration)
  b) possible conflict with content owners
  c) they want to sell as much as possible of bandwidth
  d) they want to have their network fully transparent
 
 Only a, b apply. d I am not sure I understand.
 

I think (d) refers to all network testing tools showing a perfect path,
which would isolate the fault to the remote web server itself, yet the
website still not working because the transparent proxy has a fault.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Google wants to be your Internet

2007-01-21 Thread Lucy Lynch


On Sat, 20 Jan 2007, Marshall Eubanks wrote:




On Jan 20, 2007, at 4:36 PM, Alexander Harrowell wrote:


Marshall wrote:
Those sorts of percentages are common in Pareto distributions (AKA
Zipf's law AKA the 80-20 rule).
With the Zipf's exponent typical of web usage and video watching, I
would predict something closer to
10% of the users consuming 50% of the usage, but this estimate is not
that unrealistic.

I would predict that these sorts of distributions will continue as
long as humans are the primary consumers of
bandwidth.

Regards
Marshall

That's until the spambots inherit the world, right?


I tend to take the long view.



sensor nets anyone?

research
http://research.cens.ucla.edu/portal/page?_pageid=59,43783_dad=portal_schema=PORTAL

business
http://www.campbellsci.com/bridge-monitoring

investment
http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=184400339

global alerts? disaster management? physical world traffic engineering?


Re: Google wants to be your Internet

2007-01-21 Thread Petri Helenius


Lucy Lynch wrote:


sensor nets anyone?
On that subject, the current IP protocols are quite bad on delivering 
asynchronous notifications to large audiences. Is anyone aware of 
developments or research toward making this work better? (overlays, 
multicast, etc.)
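Whatever the transport (overlay or network-layer multicast), the pattern being asked for is one-to-many fan-out of an event. A toy in-process publish/subscribe sketch, with made-up topic names, shows the shape of it:

```python
# Minimal in-process sketch of publish/subscribe fan-out -- the pattern
# that overlays (and IP multicast at the network layer) provide for
# delivering one notification to many receivers.  Purely illustrative.
from collections import defaultdict

class PubSub:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, event):
        # one publish fans out to every subscriber of the topic
        for cb in self._subs[topic]:
            cb(event)

bus = PubSub()
alerts = []
bus.subscribe("bridge/strain", alerts.append)
bus.subscribe("bridge/strain", lambda e: alerts.append(("copy", e)))
bus.publish("bridge/strain", 42)
print(alerts)   # [42, ('copy', 42)]
```

The hard part at Internet scale is doing this fan-out in the network rather than at the sender, which is exactly what native multicast was meant to solve.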


Pete



research
http://research.cens.ucla.edu/portal/page?_pageid=59,43783_dad=portal_schema=PORTAL 



business
http://www.campbellsci.com/bridge-monitoring

investment
http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=184400339

global alerts? disaster management? physical world traffic engineering?





Re: Google wants to be your Internet

2007-01-21 Thread Travis H.
On Sun, Jan 21, 2007 at 06:41:19AM -0800, Lucy Lynch wrote:
 sensor nets anyone?

The bridge-monitoring stuff sounds a lot like SCADA.

//drift

IIRC, someone representing the electrical companies approached
someone representing network providers, possibly the IETF, to
ask about the feasibility of using IP to monitor the electrical
meters throughout the US.  Presumably this would be via some
slow signalling protocol over the power lines themselves
(slow so that you don't trash the entire spectrum by signalling
in the range where power lines are good antennas - i.e. 30MHz or
so).

The response was yeah, well, maybe with IPv6.
-- 
``Unthinking respect for authority is the greatest enemy of truth.''
-- Albert Einstein -- URL:http://www.subspacefield.org/~travis/




Re: Google wants to be your Internet

2007-01-20 Thread Rodrick Brown


On 1/20/07, Mark Boolootian [EMAIL PROTECTED] wrote:



Cringley has a theory and it involves Google, video, and oversubscribed
backbones:

  http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html



The following comment has to be one of the most important comments in
the entire article and it's a bit disturbing.

Right now somewhat more than half of all Internet bandwidth is being
used for BitTorrent traffic, which is mainly video. Yet if you
surveyed your neighbors you'd find that few of them are BitTorrent
users. Less than 5 percent of all Internet users are presently
consuming more than 50 percent of all bandwidth.

--
Rodrick R. Brown


Re: Google wants to be your Internet

2007-01-20 Thread Owen DeLong



On Jan 20, 2007, at 10:37 AM, Rodrick Brown wrote:



On 1/20/07, Mark Boolootian [EMAIL PROTECTED] wrote:



Cringley has a theory and it involves Google, video, and  
oversubscribed

backbones:

  http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html



The following comment has to be one of the most important comments in
the entire article and its a bit disturbing.

Right now somewhat more than half of all Internet bandwidth is being
used for BitTorrent traffic, which is mainly video. Yet if you
surveyed your neighbors you'd find that few of them are BitTorrent
users. Less than 5 percent of all Internet users are presently
consuming more than 50 percent of all bandwidth.


I'm not sure why you find that disturbing.  I can think of two  
reasons, and,

they depend almost entirely on your perspective:

If you are disturbed because you know that these users are early  
adopters

and that eventually, a much wider audience will adopt this technology
driving a need for much more bandwidth than is available today, then,
the solution is obvious.  As in the past, bandwidth will have to  
increase to

meet increased demand.

If you are disturbed by the inequity of it, then, little can be  
done.  There

will always be classes of consumers who use more than other classes
of consumers of any resource. Frankly, looking from my corner of the
internet, I don't think that statistic is entirely accurate.  From my  
perspective,

SPAM uses more bandwidth than BitTorrent.

OTOH, another thing to consider is that if all those video downloads
being handled by BitTorrent were migrated to HTTP connections
instead the required amount of bandwidth would be substantially
higher.

Owen



Re: Google wants to be your Internet

2007-01-20 Thread David Ulevitch



Rodrick Brown wrote:


On 1/20/07, Mark Boolootian [EMAIL PROTECTED] wrote:



Cringley has a theory and it involves Google, video, and oversubscribed
backbones:

  http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html



The following comment has to be one of the most important comments in
the entire article and its a bit disturbing.

Right now somewhat more than half of all Internet bandwidth is being
used for BitTorrent traffic, which is mainly video. Yet if you
surveyed your neighbors you'd find that few of them are BitTorrent
users. Less than 5 percent of all Internet users are presently
consuming more than 50 percent of all bandwidth.


Moreover, those of you who were at NANOG in June will remember some of 
the numbers Colin gave about Youtube using 20gbps outbound.


That number was still early in the exponential growth phase the site is 
(*still*) having.  The 20gbps number would likely seem laughable now.


-david




Re: Google wants to be your Internet

2007-01-20 Thread Alexander Harrowell

The Internet: the world's only industry that complains that people want its
product.

On 1/20/07, David Ulevitch [EMAIL PROTECTED] wrote:




Rodrick Brown wrote:

 On 1/20/07, Mark Boolootian [EMAIL PROTECTED] wrote:


 Cringley has a theory and it involves Google, video, and oversubscribed
 backbones:

   http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html


 The following comment has to be one of the most important comments in
 the entire article and its a bit disturbing.

 Right now somewhat more than half of all Internet bandwidth is being
 used for BitTorrent traffic, which is mainly video. Yet if you
 surveyed your neighbors you'd find that few of them are BitTorrent
 users. Less than 5 percent of all Internet users are presently
 consuming more than 50 percent of all bandwidth.

Moreover, those of you who were at NANOG in June will remember some of
the numbers Colin gave about Youtube using 20gbps outbound.

That number was still early in the exponential growth phase the site is
(*still*) having.  The 20gbps number would likely seem laughable now.

-david





Re: Google wants to be your Internet

2007-01-20 Thread Randy Bush

 The following comment has to be one of the most important comments in
 the entire article and its a bit disturbing.
 
 Right now somewhat more than half of all Internet bandwidth is being
 used for BitTorrent traffic, which is mainly video. Yet if you
 surveyed your neighbors you'd find that few of them are BitTorrent
 users. Less than 5 percent of all Internet users are presently
 consuming more than 50 percent of all bandwidth.

the heavy hitters are long known.  get over it.

i won't bother to cite cho et al. and similar actual measurement
studies, as doing so seems not to cause people to read them, only to say
they already did or say how unlike japan north america is.  the
phenomenon is part protocol and part social.

the question to me is whether isps and end user borders (universities,
large enterprises, ...) will learn to embrace this as opposed to
fighting it; i.e. find a business model that embraces delivering what
the customer wants as opposed to whinging and warring against it.

if we do, then the authors of the p2p protocols will feel safe in
improving their customers' experience by taking advantage of
localization and proximity, as opposed to focusing on subverting
perceived fierce opposition by isps and end user border fascists.  and
then, guess what; the traffic will distribute more reasonably and not
all sum up on the longer glass.
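A hedged sketch of what such localization could look like inside a p2p client: bias peer selection toward addresses that share a long prefix with our own. Prefix length is a crude stand-in for real AS or topology data, and the addresses below are documentation ranges used purely for illustration.

```python
# Locality-biased peer selection sketch: prefer candidate peers whose
# IPv4 address shares the longest prefix with our own.  Real clients
# would use AS/geo data; shared prefix length is a crude proxy.
import ipaddress

def shared_prefix_len(a: str, b: str) -> int:
    x = int(ipaddress.ip_address(a)) ^ int(ipaddress.ip_address(b))
    return 32 - x.bit_length()

def rank_peers(self_ip, peers):
    return sorted(peers, key=lambda p: shared_prefix_len(self_ip, p),
                  reverse=True)

print(rank_peers("192.0.2.10", ["198.51.100.7", "192.0.2.200", "192.0.50.1"]))
# ['192.0.2.200', '192.0.50.1', '198.51.100.7']
```

With a bias like this, the bulk of the swarm's traffic stays on the shorter glass instead of hairpinning across transit links.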

randy


Re: Google wants to be your Internet

2007-01-20 Thread Florian Weimer

* Rodrick Brown:

 Right now somewhat more than half of all Internet bandwidth is being
 used for BitTorrent traffic, which is mainly video. Yet if you
 surveyed your neighbors you'd find that few of them are BitTorrent
 users. Less than 5 percent of all Internet users are presently
 consuming more than 50 percent of all bandwidth.

s/BitTorrent/porn/, and we've been there all along.

I think the real issue here is that Google's video traffic does *not*
clog the network, but would be distributed through private networks
(sometimes Google's own, or through another company's CDN) and
injected into the Internet very close to the consumer.  No one is able
to charge for that traffic because if they did, Google would simply
inject it someplace else.  At best, one of your peerings would go
out of balance, or at worst, *you* would have to pay for Google's
traffic.


Re: Google wants to be your Internet

2007-01-20 Thread David Ulevitch


Alexander Harrowell wrote:
The Internet: the world's only industry that complains that people want 
its product.


The quote sounds good, but nobody in this thread is complaining.

There have always been top-talkers on networks and there always will be. 
 The current top-talkers are the joe and jane users of tomorrow.  That 
is what is important.  BitTorrent-like technology might start showing up 
in your media center, your access point, etc.  The Venice Project 
(Joost) and a number of other new startups are also built around this 
model of distribution.


Maybe a more symmetric load on the network (at least on the edge) will 
improve economic models or maybe we'll see eyeball networks start to 
peer with each other as they start sourcing more and more of the bits. 
Maybe that's already happening.


-david






On 1/20/07, *David Ulevitch*  [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:




Rodrick Brown wrote:
 
  On 1/20/07, Mark Boolootian  [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] wrote:
 
 
  Cringley has a theory and it involves Google, video, and
oversubscribed
  backbones:
 
   
http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html

 
 
  The following comment has to be one of the most important
comments in
  the entire article and its a bit disturbing.
 
  Right now somewhat more than half of all Internet bandwidth is being
  used for BitTorrent traffic, which is mainly video. Yet if you
  surveyed your neighbors you'd find that few of them are BitTorrent
  users. Less than 5 percent of all Internet users are presently
  consuming more than 50 percent of all bandwidth.

Moreover, those of you who were at NANOG in June will remember some of
the numbers Colin gave about Youtube using 20gbps outbound.

That number was still early in the exponential growth phase the site is
(*still*) having.  The 20gbps number would likely seem laughable now.

-david







Re: Google wants to be your Internet

2007-01-20 Thread Marshall Eubanks


Hello;

On Jan 20, 2007, at 1:37 PM, Rodrick Brown wrote:



On 1/20/07, Mark Boolootian [EMAIL PROTECTED] wrote:



Cringley has a theory and it involves Google, video, and  
oversubscribed

backbones:

  http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html



The following comment has to be one of the most important comments in
the entire article and its a bit disturbing.

Right now somewhat more than half of all Internet bandwidth is being
used for BitTorrent traffic, which is mainly video. Yet if you
surveyed your neighbors you'd find that few of them are BitTorrent
users. Less than 5 percent of all Internet users are presently
consuming more than 50 percent of all bandwidth.


Those sorts of percentages are common in Pareto distributions (AKA  
Zipf's law AKA the 80-20 rule).
With the Zipf's exponent typical of web usage and video watching, I  
would predict something closer to
10% of the users consuming 50% of the usage, but this estimate is not  
that unrealistic.
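A quick way to sanity-check these figures: under a Zipf usage law u(r) ∝ 1/r^s, the share of traffic generated by the top fraction of users follows directly. The exponent below is an assumed parameter, not a measured one; s ≈ 0.7 roughly reproduces the 10%-of-users/50%-of-usage estimate.

```python
# Sanity check for the "few users, most of the bandwidth" figures under
# a Zipf usage law u(r) ~ 1/r^s.  The exponent s is an assumption here,
# not a measured value.
def usage_share(n_users: int, top_fraction: float, s: float = 0.7) -> float:
    """Fraction of total usage consumed by the top `top_fraction` of
    users, ranked by usage, when per-user usage follows Zipf's law."""
    weights = [1.0 / (r ** s) for r in range(1, n_users + 1)]
    k = max(1, int(n_users * top_fraction))
    return sum(weights[:k]) / sum(weights)

# With s ~ 0.7, the top 10% of 100,000 users carry roughly half the
# total traffic; the top 5% carry somewhat less.
print(round(usage_share(100_000, 0.10), 2))
print(round(usage_share(100_000, 0.05), 2))
```

Larger exponents concentrate usage further, which is one way to read the "5% consume 50%" claim in the article.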


I would predict that these sorts of distributions will continue as  
long as humans are the primary consumers of

bandwidth.

Regards
Marshall



--
Rodrick R. Brown




Re: Google wants to be your Internet

2007-01-20 Thread Alexander Harrowell

Marshall wrote:
Those sorts of percentages are common in Pareto distributions (AKA


Zipf's law AKA the 80-20 rule).
With the Zipf's exponent typical of web usage and video watching, I
would predict something closer to
10% of the users consuming 50% of the usage, but this estimate is not
that unrealistic.

I would predict that these sorts of distributions will continue as
long as humans are the primary consumers of
bandwidth.

Regards
Marshall



That's until the spambots inherit the world, right?


Re: Google wants to be your Internet

2007-01-20 Thread Jim Popovitch
On Sat, 2007-01-20 at 10:12 -0800, Mark Boolootian wrote:
 
 Cringley has a theory and it involves Google, video, and oversubscribed
 backbones:
 
   http://www.pbs.org/cringely/pulpit/2007/pulpit_20070119_001510.html

Aren't there some Telco laws wrt cross-state, but still interlata, calls
not being able to be charged as interstate?  Perhaps Google wants to
avoid any future federal/state regulations by providing in-state (i.e.
local) access.  Additionally, it makes it easier to do state and local
govt business when the data is in the same state (it's not out-sourcing
if it's just next door...).  And then there is the lobbying issue: what
better way to lobby multiple states than to do significant business
therein?  Or perhaps I'm just daydreaming too much today ;-)

-Jim P.




Re: Google wants to be your Internet

2007-01-20 Thread Gadi Evron

On Sat, 20 Jan 2007, Alexander Harrowell wrote:
 Marshall wrote:
 Those sorts of percentages are common in Pareto distributions (AKA
 
  Zipf's law AKA the 80-20 rule).
  With the Zipf's exponent typical of web usage and video watching, I
  would predict something closer to
  10% of the users consuming 50% of the usage, but this estimate is not
  that unrealistic.
 
  I would predict that these sorts of distributions will continue as
  long as humans are the primary consumers of
  bandwidth.
 
  Regards
  Marshall
 
 
 That's until the spambots inherit the world, right?
 

That is if you see a distinction, metaphorical or physical, between
spambots and real users.



Re: Google wants to be your Internet

2007-01-20 Thread Gadi Evron

On Sat, 20 Jan 2007, Randy Bush wrote:
 the heavy hitters are long known.  get over it.
 
 i won't bother to cite cho et al. and similar actual measurement
 studies, as doing so seems not to cause people to read them, only to say
 they already did or say how unlike japan north america is.  the
 phenomenon is part protocol and part social.
 
 the question to me is whether isps and end user borders (universities,
 large enterprises, ...) will learn to embrace this as opposed to
 fighting it; i.e. find a business model that embraces delivering what
 the customer wants as opposed to whinging and warring against it.
 
 if we do, then the authors of the p2p protocols will feel safe in
 improving their customers' experience by taking advantage of
 localization and proximity, as opposed to focusing on subverting
 perceived fierce opposition by isps and end user border fascists.  and
 then, guess what; the traffic will distribute more reasonably and not
 all sum up on the longer glass.
 
 randy

It has been a long time since I bowed before Mr. Bush's wisdom, but
indeed, I bow now in a very humble fashion.

Thing is though, it is equivalent to one or all of the following:
-. EFF-like thinking (moral high-ground or impractical at times, yet
   correct and to live by).
-. (very) Forward thinking (yet not possible for people to get behind - by
   people I mean those who do this daily), likely to encounter much
   resistance until it becomes mainstream a few years down the road.
-. Not connected with what can currently happen to effect change, but
   rather how things really are which people can not yet accept.

As Randy is obviously not much affected when people disagree with him, nor
should he be, I am sure he will preach this until it becomes real. With that
in mind, if many of us believe this is a philosophical as well as a
technological truth -- what can be done today to effect this change?

Some examples may be:
-. Working with network gear vendors to create better equipment built to
   handle this and lighten the load.
-. Working on establishing new standards and topologies to enable both
   vendors and providers to adopt them.
-. Presenting case studies after putting our money where our mouth is, and
   showing how we made it work in a live network.

Staying in the philosophical realm is more than respectable, but waiting
for FUSSP-like wide adoption or for sheep to fly is not going to change
the world, much.

For now, the P2P folks, who are not in most cases evil Internet
Pirates, are mostly allied, whether in name or in practice, with
illegal activities. The technology isn't illegal and can be quite good for
all of us to save quite a bit of bandwidth rather than waste it (quite a
bit of redundancy there!).

So, instead of fighting it and seeing it left in the hands of the
pirates and the privacy folks trying to bypass the Firewall of [insert
evil regime here], why not utilize it?

How can service providers make use of all this redundancy among their top
talkers and remove the privacy advocates and warez freaks from the
picture, leaving that front with less technology and legitimacy while
helping themselves?

This is a pure example of a problem from the operational front which can
be floated to research and the industry, with smarter solutions than port
blocking and QoS.

Gadi.



Re: Google wants to be your Internet

2007-01-20 Thread Charlie Allom

On Sat, 20 Jan 2007 17:55:49 -0600 (CST), Gadi Evron wrote:
 On Sat, 20 Jan 2007, Randy Bush wrote:
 
 the question to me is whether isps and end user borders (universities,
 large enterprises, ...) will learn to embrace this as opposed to
 fighting it; i.e. find a business model that embraces delivering what
 the customer wants as opposed to winging and warring against it.

interesting.. i was about to say..

I am involved, in London, in building an ISP that encourages users of
p2p, with respect from major and independent record labels. It makes
sense that the film industry will move (and is moving?) towards some
kind of acceptance as well.

 Thing is though, it is equivalent to one or all of the following:
 -. EFF-like thinking (moral high-ground or impractical at times, yet
correct and to live by).
 -. (very) Forward thinking (yet not possible for people to get behind - by
people I mean those who do this daily), likely to encounter much
    resistance until it becomes mainstream a few years down the road.
 -. Not connected with what can currently happen to affect change, but
rather how things really are which people can not yet accept.

well, a little dash of all thinking makes for a healthy environment 
doesn't it?

 This is a pure example of a problem from the operational front which can
 be floated to research and the industry, with smarter solutions than port
 blocking and QoS.

This is what I am interested/scared by.

  C.
-- 
 hail eris
 http://rubberduck.com/


Re: Google wants to be your Internet

2007-01-20 Thread Adrian Chadd

On Sun, Jan 21, 2007, Charlie Allom wrote:

  This is a pure example of a problem from the operational front which can
  be floated to research and the industry, with smarter solutions than port
  blocking and QoS.
 
 This is what I am interested/scared by.

It's not that hard a problem to get on top of. Caching, unfortunately, continues
to be viewed as anathema by ISP network operators in the US. Strangely enough
the caching technologies aren't a problem with the content -delivery- people.

I've had a few ISPs out here in Australia indicate interest in a cache that
could do the normal stuff (http, rtsp, wma) and some of the p2p stuff 
(bittorrent
especially) with a smattering of QoS/shaping/control - but not cost upwards of
USD$100,000 a box. Lots of interest, no commitment.

It doesn't help (at least in Australia) where the wholesale model of ADSL isn't
content-replication-friendly: we have to buy ATM or ethernet pipes to upstreams
and then receive each session via L2TP. Fine from an aggregation point of view,
but missing the true usefulness of content replication and caching - right at
the point where your customers connect in.

(Disclaimer: I'm one of the Squid developers. I'm getting an increasing amount
of interest from CDN/content origination players but none from ISPs. I'd love
to know why ISPs don't view caching as a viable option in today's world and
what we could do to make it easier for y'all.)
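For concreteness, here is a minimal sketch of what the "normal stuff" looks like as an interception cache in a Squid configuration. The directive names are standard squid.conf options, but the port, addresses, and sizes are illustrative assumptions, and 2.x-era releases spelled the port flag `transparent` where 3.1+ uses `intercept`:

```
# Minimal interception-cache sketch (squid.conf).  Values illustrative.
# Squid 2.x-era syntax uses "transparent" where 3.1+ uses "intercept".
http_port 3129 intercept                      # router/WCCP redirects port 80 here
cache_dir ufs /var/spool/squid 10240 16 256   # ~10 GB on-disk cache
cache_mem 256 MB
acl localnet src 192.168.0.0/16               # illustrative customer range
http_access allow localnet
http_access deny all
```

The p2p and QoS pieces Adrian describes would sit alongside this, which is part of why a turnkey box has been hard to deliver cheaply.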



Adrian



Re: Google wants to be your Internet

2007-01-20 Thread Marshall Eubanks



On Jan 20, 2007, at 4:36 PM, Alexander Harrowell wrote:


Marshall wrote:
Those sorts of percentages are common in Pareto distributions (AKA
Zipf's law AKA the 80-20 rule).
With the Zipf's exponent typical of web usage and video watching, I
would predict something closer to
10% of the users consuming 50% of the usage, but this estimate is not
that unrealistic.

I would predict that these sorts of distributions will continue as
long as humans are the primary consumers of
bandwidth.

Regards
Marshall

That's until the spambots inherit the world, right?


I tend to take the long view.


Re: Google wants to be your Internet

2007-01-20 Thread Jeremy Chadwick

On Sat, Jan 20, 2007 at 05:55:49PM -0600, Gadi Evron wrote:
 Some examples may be:
 -. Working on establishing new standards and topologies to enable both
vendors and providers to adopt them.

Keep this point in mind while reading my below comment.

 For now, the P2P folks who are not in most cases eveel Internet
 Pirates are mostly allied, whether in name or in practice with
 illegal activities. The technology isn't illegal and can be quite good for
 all of us to save quite a bit of bandwidth rather than waste it (quite a
 bit of redudndancy there!).

There's a paper put together by the authors of a download-only, free-riding
BitTorrent client called BitThief.  The paper is worth reading:

http://dcg.ethz.ch/publications/hotnets06.pdf
http://dcg.ethz.ch/projects/bitthief/  (client is here)

The part that saddens me the most about this project isn't the
complete disregard for the give back what you take moral (though
that part does sadden me personally), but what this is going to
do to the protocol and the clients.

Chances are that other torrent client authors are going to see the
project as major defiance and start implementing things like
filtering which clients can connect to whom based on the client name/ID
string (ex. uTorrent, Azureus, MainLine), which as we all know, is
going to last maybe 3 weeks.

This in turn will solicit the BitThief authors implementing a feature
that allows the client to either spoof its client name or use randomly-
generated ones.  Rinse, lather, repeat, until everyone is fighting rather
than cooperating.

Will the BT protocol be reformed to address this?  50/50 chance.
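For context, here is a simplified sketch of the tit-for-tat choking round that a free-riding client sidesteps: regular unchoke slots reward peers by their upload rate to us, and the periodic optimistic unchoke (a download-only client's way in) is the loophole BitThief exploits. This is illustrative and differs from real client internals.

```python
# Simplified sketch of a BitTorrent-style tit-for-tat choking round.
# Reciprocity rewards uploaders; the optimistic unchoke slot is what a
# free-riding client lives off.  Illustrative, not a real client's code.
import random

def choke_round(peers, upload_rate, n_unchoke=3):
    """peers: list of peer ids; upload_rate: id -> bytes/s they sent us.
    Returns the set of peers unchoked this round."""
    by_rate = sorted(peers, key=lambda p: upload_rate.get(p, 0), reverse=True)
    unchoked = set(by_rate[:n_unchoke])       # reciprocate the best uploaders
    rest = [p for p in peers if p not in unchoked]
    if rest:                                  # one optimistic unchoke: a
        unchoked.add(random.choice(rest))     # zero-rate peer's only way in
    return unchoked
```

A client that uploads nothing only ever downloads through that last slot, which is why it is slower but still workable.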

 So, instead of fighting it and seeing it left in the hands of the
 pirates and the privacy folks trying to bypass the Firewall of [insert
 evil regime here], why not utilize it?

I think Adrian Chadd's mail addresses this indirectly: it's not
being utilised because of the bandwidth requirements.

ISPs probably don't have an interest in BT caching because of 1)
cost of ownership, 2) legal concerns (if an ISP cached a publicly
distributed copy of some pirated software, who's then responsible?),
and most of all, 3) it's easier to buy a content-sniffing device that
rate-limits, or just start hard-limiting users who use too much
bandwidth (a phrase ISPs use as justification for shutting off
customers' connections, but never provide numbers of just what's too
much).
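The rate-limiting devices in question typically implement some variant of a token bucket per subscriber; a minimal sketch, with illustrative parameters, shows the mechanism:

```python
# Minimal token-bucket rate limiter, the mechanism behind per-subscriber
# shaping devices.  Rates and burst sizes here are illustrative.
import time

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate, self.capacity = rate_bps, burst_bytes
        self.tokens, self.last = burst_bytes, time.monotonic()

    def allow(self, nbytes):
        # refill tokens for the elapsed interval, capped at the burst size
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False          # shape: drop or queue the packet
```

The classification step (deciding *which* flows get bucketed, e.g. by sniffing the BT handshake) is the part that encryption defeats; the bucket itself still works on aggregate per-user volume.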

The result of these items has already been shown: BT encryption.  I
personally know of 3 individuals who have set their client to use
encryption only (disabling non-encrypted connection support).  For
security?  Nope -- solely because their ISP uses a rate limiting
device.

Bram Cohen's official statement is that using encryption to get
around this is silly because not many ISPs are implementing
such devices (maybe not *right now*, Bram, but in the next year
or two, they likely will):

http://bramcohen.livejournal.com/29886.html

ISPs will go with implementing the above device *before* implementing
something like a BT caching box.  Adrian probably knows this too,
and chances are it's probably because of the 3 above items I listed.

So my question is this: how exactly do we (as administrators of
systems or networks) get companies, managers, and even other
administrators, to think differently about solving this?

-- 
| Jeremy Chadwick jdc at parodius.com |
| Parodius Networkinghttp://www.parodius.com/ |
| UNIX Systems Administrator   Mountain View, CA, USA |
| Making life hard for others since 1977.   PGP: 4BD6C0CB |



Re: Google wants to be your Internet

2007-01-20 Thread Gadi Evron

On Sat, 20 Jan 2007, Jeremy Chadwick wrote:

snip

 ISPs probably don't have an interest in BT caching because of 1)
 cost of ownership, 2) legal concerns (if an ISP cached a publicly
 distributed copy of some pirated software, who's then responsible?),

They cache the web, which has the same chance of being illegal content.

snip
 
 The result of these items already been shown: BT encryption.  I
 personally know of 3 individuals who have their client to use en-
 cryption only (disabling non-encrypted connection support).  For
 security?  Nope -- solely because their ISP uses a rate limiting
 device.

Yep. Users will find a way to maintain functionality.

 Bram Cohen's official statement is that using encryption to get
 around this is silly because not many ISPs are implementing
 such devices (maybe not *right now*, Bram, but in the next year
 or two, they likely will):
 
 http://bramcohen.livejournal.com/29886.html

I don't know of many user ISPs which don't implement them -- are you kidding? :)

snip

 So my question is this: how exactly do we (as administrators of
 systems or networks) get companies, managers, and even other
 administrators, to think differently about solving this?
 
 -- 
 | Jeremy Chadwick jdc at parodius.com |
 | Parodius Networkinghttp://www.parodius.com/ |
 | UNIX Systems Administrator   Mountain View, CA, USA |
 | Making life hard for others since 1977.   PGP: 4BD6C0CB |
 



Re: Google wants to be your Internet

2007-01-20 Thread Mark Smith

On Sun, 21 Jan 2007 08:33:26 +0800
Adrian Chadd [EMAIL PROTECTED] wrote:

 
 On Sun, Jan 21, 2007, Charlie Allom wrote:
 
   This is a pure example of a problem from the operational front which can
   be floated to research and the industry, with smarter solutions than port
   blocking and QoS.
  
  This is what I am interested/scared by.
 
 It's not that hard a problem to get on top of. Caching, unfortunately,
 continues
 to be viewed as anathema by ISP network operators in the US. Strangely enough
 the caching technologies aren't a problem with the content -delivery- people.
 

 I've had a few ISPs out here in Australia indicate interest in a cache that
 could do the normal stuff (http, rtsp, wma) and some of the p2p stuff 
 (bittorrent
 especially) with a smattering of QoS/shaping/control - but not cost upwards of
 USD$100,000 a box. Lots of interest, no commitment.
 

I think it is probably because to build caching infrastructure that is
high performance and has enough high availability to make a difference is
either non-trivial or non-cheap. If it comes down to introducing
something new (new software / hardware, new concepts, new
complexity, new support skills, another thing that can break etc.)
versus just growing something you already have, already manage and
have since day one as an ISP - additional routers and/or higher capacity
links - then growing the network wins when the $ amount is the same
because it is simpler and easier.

 It doesn't help (at least in Australia) where the wholesale model of ADSL 
 isn't
 content-replication-friendly: we have to buy ATM or ethernet pipes to 
 upstreams
 and then receive each session via L2TP. Fine from an aggregation point of 
 view,
 but missing the true usefulness of content replication and caching - right at
 the point where your customers connect in.
 

I think that if even pure networking people (i.e. those that just focus on
shifting IP packets around) are accepting of that situation, when they
also believe in keeping traffic local, it indicates that it is probably
more of an economic rather than a technical reason why that is still
happening. Inter-ISP peering at the exchange (C.O) would be the ideal,
however it seems that there isn't enough inter-customer (per-ISP or
between ISP) bandwidth consumption at each exchange to justify the
additional financial and complexity costs to do it.

Inter-customer traffic forwarding is usually happening at the next
level up in the hierarchy - at the regional / city level, which is
probably at this time the most economic level to do it.

 (Disclaimer: I'm one of the Squid developers. I'm getting an increasing amount
 of interest from CDN/content origination players but none from ISPs. I'd love
 to know why ISPs don't view caching as a viable option in today's world and
 what we could do to make it easier for y'all.)
 

Maybe that really means your customers (i.e. the people who most benefit
from your software) are now the content distributors, not the ISPs.
While the distinction might seem somewhat minor, I think ISPs generally
tend to have more of a view of where is this traffic probably going to
go, and how do we build infrastructure to get it there, and less of a
what is this traffic view. In other words, ISPs tend to be more focused
on trying to optimise for all types of traffic rather than one or a
select few particular types, because what the customer does with the
bandwidth they purchase is up to the customer themselves. If you spend
time optimising for one type of traffic you're either neglecting or
negatively impacting another type. Spending time on general
optimisations that benefit all types of traffic is usually the better
way to spend it. I think one of the reasons for ISP interest in the p2p
problem could be that it is reducing the normal benefit-to-cost ratio of
general traffic optimisation. Restoring that regular benefit-to-cost
ratio is probably the fundamental goal of solving the p2p problem.

My suggestion to you as a squid developer would be to focus on caching,
or more generally, localising of P2P traffic. It doesn't seem that the P2P
application developers are doing it, maybe because they don't care
because it doesn't directly impact them, or maybe because they don't
know how to. If squid could provide a traffic-localising solution which
is just another traffic sink or source (e.g. a server) to an ISP,
rather than something that requires enabling knobs on the network
infrastructure for special handling or requires special traffic
engineering for it to work, I think you'd get quite a bit of
interest.

Just my 2c.

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Google wants to be your Internet

2007-01-20 Thread Gadi Evron

On Sat, 20 Jan 2007, Roland Dobbins wrote:
 
 On Jan 20, 2007, at 11:55 AM, Randy Bush wrote:
 
  the question to me is whether isps and end user borders (universities,
  large enterprises, ...) will learn to embrace this as opposed to
  fighting it; i.e. find a business model that embraces delivering what
  the customer wants as opposed to winging and warring against it.
 
 I believe that it will end up becoming the norm, as it's a form of  
 cost-shifting from content providers to NSPs and end-users - but for  
 it to really take off, the tension between content-providers and  
 their customers (i.e., crippling DRM) needs to be resolved.
 
 There have been some experiments in U.S. universities over the last  
 couple of years in which private music-sharing services have been run  
 by the universities themselves, and the students pay a fee for access  
 to said music.  I haven't seen any studies which provide a clue as to  
 whether or not these experiments have been successful (for some value  
 of 'successful'); my suspicion is that crippling DRM combined with a  
 lack of variety may have been 'features' of these systems, which is  
 not a good test.
 
 OTOH, emusic.com seem to be going great guns with non-DRMed .mp3s and  
 a subscription model; perhaps (an official) P2P distribution might be  
 a logical next step for a service of this type.  I think it would be  
 a very interesting experiment.

Won't really happen as long as they stick to a business model which is
over a hundred years old.

I would strongly suggest people with interest in this area watch
Lawrence Lessig's lecture from CCC:
http://dewy.fem.tu-ilmenau.de/CCC/23C3/video/23C3-1760-en-on_free.m4v

But I would like to stay on-track and discuss how we can help ISPs change
from their end, considering both operational and business needs. Do you
believe making such a case study public will help? Do you believe it is
the ISP itself which should become the content provider rather than a
bandwidth service?

Gadi.



Re: Google wants to be your Internet

2007-01-20 Thread Mark Smith

On Sat, 20 Jan 2007 17:38:06 -0600 (CST)
Gadi Evron [EMAIL PROTECTED] wrote:

 
 On Sat, 20 Jan 2007, Alexander Harrowell wrote:
  Marshall wrote:
  Those sorts of percentages are common in Pareto distributions (AKA
   Zipf's law AKA the 80-20 rule).
   With the Zipf exponent typical of web usage and video watching, I
   would predict something closer to
   10% of the users consuming 50% of the usage, but this estimate is not
   that unrealistic.
  
   I would predict that these sorts of distributions will continue as
   long as humans are the primary consumers of
   bandwidth.
  
   Regards
   Marshall
  
  
  That's until the spambots inherit the world, right?
  
 
 That is if you see a distinction, metaphorical or physical, between
 spambots and real users.
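
The Zipf estimate quoted above is easy to sanity-check numerically. The
following sketch is purely illustrative (the function name and parameters
are mine, not anything from this thread): it weights users by rank**-s and
reports the share of total usage attributable to the heaviest 10%. A
classic exponent of 1.0 concentrates traffic well above 50%, while a
web-like exponent around 0.7 lands close to the 10%-for-50% figure.

```python
def top_share(n_users, exponent, top_frac=0.10):
    """Share of total usage from the heaviest top_frac of users, assuming
    per-user usage follows a Zipf law: usage(rank) ~ rank**-exponent."""
    weights = [rank ** -exponent for rank in range(1, n_users + 1)]
    cutoff = max(1, int(n_users * top_frac))
    return sum(weights[:cutoff]) / sum(weights)

# Compare a classic Zipf exponent with a flatter, web-like one.
for s in (1.0, 0.7):
    print(s, round(top_share(10_000, s), 2))
```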
 

On the Internet, Nobody Knows You're a Dog (Peter Steiner, The New Yorker)
 
Woof woof,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Google wants to be your Internet

2007-01-20 Thread Roland Dobbins



On Jan 20, 2007, at 1:02 PM, Marshall Eubanks wrote:


as long as humans are the primary consumers of
bandwidth.


This is an interesting phrase.  Did you mean it T-I-C, or are you  
speculating that M2M (machine-to-machine) communications will at some  
point rival/overtake bandwidth consumption which is interactively  
triggered by human actions?  Right now TiVo will record television  
programs it thinks you might like; what effect will this type of  
technology have on IPTV, more mature P2P systems, etc.?


It would be very interesting to try and determine how much automated
bandwidth consumption is taking place now and try to extrapolate some
trends; a good topic for a PhD dissertation, IMHO.



---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-20 Thread Roland Dobbins



On Jan 20, 2007, at 6:14 PM, Mark Smith wrote:


It doesn't seem that the P2P
application developers are doing it, maybe because they don't care
because it doesn't directly impact them, or maybe because they don't
know how to. If squid could provide a traffic localising solution which
is just another traffic sink or source (e.g. a server) to an ISP,
rather than something that requires enabling knobs on the network
infrastructure for special handling or requires special traffic
engineering for it to work, I'd think you'd get quite a bit of
interest.


I think there's interest from the consumer level, already:

http://torrentfreak.com/review-the-wireless-BitTorrent-router/

It's early days, but if this becomes the norm, then the end-users  
themselves will end up doing the caching.


---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-20 Thread Jeroen Massar
Gadi Evron wrote:
 On Sat, 20 Jan 2007, Jeremy Chadwick wrote:
 
 snip
 
 ISPs probably don't have an interest in BT caching because of 1)
 cost of ownership, 2) legal concerns (if an ISP cached a publicly
 distributed copy of some pirated software, who's then responsible?),
 
 They cache the web, which has the same chance of being illegal content.
[..]

They do have NNTP Caches though with several Terabytes of storage
space and obvious newsgroups like alt.binaries.dvd-r and similar names.

The reason why they don't run BT Caches is because the protocol is not
made for it. NNTP is made for distribution (albeit not really for 8-bit
files ;), and the Cache (more a wrongly implemented auto-replicating FTP
server) is local to the ISP and serves their local users. As such it is
pure gain: instead of having their clients use their transits, the
data only gets pulled over once and all their clients get it.

For BT though, you either have to do tricks at L7 involving sniffing the
lines and thus breaking end-to-end; or you end up setting up a huge BT
client which automatically mirrors all the torrents on the planet and
hope that only your local users use it, which most likely is not the
case as most BT clients don't do network-close downloading.

As such NNTP is profit, BT is not. Also, NNTP access is a service which
you can sell. There exist a large number of NNTP-only services and even
ISPs that have as a major selling point: access to their news server.

Fun detail about NNTP: most companies publish how much traffic they do
and even in which alt.binaries.* group the most crap is flowing. Still
it seems totally legal to have those several Terabytes of data and make
them available, even with the obvious names that the messages carry. The
most-cited reason: It is a Cache and we don't put the data on it, it
is automatic... yup, alt.binaries.dvd.movies whatever is really not so
obvious ;)

Of course, replace BT with most kinds of P2P network in the above.
There are some P2P nets that try to induce some network topology
though, so that you will be downloading from that person next door
instead of that guy on a 56k in Timbuktu while you are sitting on a
1Gbit NREN connect ;)


But anyway what I am wondering is why ISP folks are thinking so bad
about this, do you guys want:
 a) customers that do not use your network
 b) customers that do use the network

Probably it is a) because of the cash. But that is strange, why sell
people an 'unlimited' account when you don't want them to use it in the
first place? Also if your network is not made to handle customers of
type b) then upgrade your network. Clearly your customers love using it,
thus more customers will follow if you keep it up and running. No better
advertisement than the neighbor saying that it is great ;)

Greets,
 Jeroen





Re: Google wants to be your Internet

2007-01-20 Thread Mark Smith

On Sat, 20 Jan 2007 18:51:08 -0800
Roland Dobbins [EMAIL PROTECTED] wrote:

 
 
 On Jan 20, 2007, at 6:14 PM, Mark Smith wrote:
 
  It doesn't seem that the P2P
  application developers are doing it, maybe because they don't care
  because it doesn't directly impact them, or maybe because they don't
  know how to. If squid could provide a traffic localising solution  
  which
  is just another traffic sink or source (e.g. a server) to an ISP,
  rather than something that requires enabling knobs on the network
  infrastructure for special handling or requires special traffic
  engineering for it to work, I'd think you'd get quite a bit of
  interest.
 
 I think there's interest from the consumer level, already:
 
 http://torrentfreak.com/review-the-wireless-BitTorrent-router/
 
 It's early days, but if this becomes the norm, then the end-users  
 themselves will end up doing the caching.
 

Maybe I haven't understood what that exactly does, however it seems to
me that's really just a bit-torrent client/server in the ADSL router.
Certainly having a bittorrent server in the ADSL router is unique, but
not really what I was getting at.

What I'm imagining (and I'm making some assumptions about how
bittorrent works) would be a bittorrent super peer that:

* announces itself as a very generous provider of bittorrent fragments.
* selects which peers to offer its generosity to, by measuring its
network proximity to those peers. I think bittorrent uses TCP, and it
would seem to me that TCP's own round-trip and throughput measuring
would be a pretty good source for measuring network locality.
* could also have its generosity announcements restricted to certain
IP address ranges etc.

Actually, thinking about it a bit more, for this device to work well it
would need to somehow be inline with the bittorrent seed URLs, so maybe
it wouldn't be feasible to have a server in the ISP's network do it.
Still, if BT peer software were modified to take the TCP measurements
into account when selecting peers, I think it would probably go a long
way towards mitigating some of the traffic problems that P2P seems to be
causing.
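
To make the peer-selection idea concrete, here is a rough sketch (mine,
not any real client's code; the function name, the example prefix and the
RTT figures are all made up): rank candidate peers so that on-net peers
come first and the rest are ordered by measured round-trip time, with
unmeasured peers last.

```python
import ipaddress

def rank_peers(peers, rtt_ms, local_nets):
    """Order (ip, port) candidates: peers inside local_nets first,
    then everyone else by measured RTT (peers with no sample go last)."""
    nets = [ipaddress.ip_network(n) for n in local_nets]

    def key(peer):
        ip, _port = peer
        on_net = any(ipaddress.ip_address(ip) in net for net in nets)
        return (0 if on_net else 1, rtt_ms.get(ip, float("inf")))

    return sorted(peers, key=key)

peers = [("203.0.113.5", 6881), ("10.1.2.3", 6881), ("198.51.100.9", 6881)]
rtt = {"203.0.113.5": 180.0, "198.51.100.9": 12.0}  # ms, e.g. from TCP RTT
print(rank_peers(peers, rtt, ["10.0.0.0/8"]))
```

A real implementation would feed rtt_ms from the live TCP measurements
mentioned above rather than a static table.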

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Google wants to be your Internet

2007-01-20 Thread Roland Dobbins



On Jan 20, 2007, at 7:38 PM, Mark Smith wrote:


Maybe I haven't understood what that exactly does, however it seems to
me that's really just a bit-torrent client/server in the ADSL router.
Certainly having a bittorrent server in the ADSL router is unique, but
not really what I was getting at.


I understand it's not what you meant; my point is that if the SPs
don't figure out how to do this, the customers will, by whatever
means they have at their disposal, with always-on devices which do
the distribution and seeding and caching automagically, and with a
revenue model attached.  I foresee consumer-level devices like this
little Asus router which not only act as torrent clients/servers, but
which also are woven together into caches with something like PNRP as
the location service (and perhaps an innovative content
producer/distributor acting as a billing overlay provider a la FON in
order to monetize same, leaving the SP with nothing).


The advantage of providing caching services is that they both help  
preserve scarce resources and result in a more pleasing user  
experience.  As already pointed out, CAPEX/OPEX along with insertion  
into the network are the current barriers, along with potential legal  
liabilities; cooperation between content providers and SPs could help  
alleviate some of these problems and make it a more attractive model,  
and help fund this kind of infrastructure in order to make more  
efficient use of bandwidth at various points in the topology.


---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-20 Thread Adrian Chadd

On Sun, Jan 21, 2007, Mark Smith wrote:

 What I'm imagining (and I'm making some assumptions about how
 bittorrent works) would be bittorrent super peer that :

Azureus already has functional 'proxy discovery' stuff. It's quite naive but
it does the job. The only implementation I know about is the JoltId PeerCache,
but it's quite expensive.

The initial implementation should use this for client communication.
Then try to work with the P2P crowd to ratify some kind of P2P proxy discovery
and communication protocol (and have more luck than WPAD :)



Adrian



Re: Google wants to be your Internet

2007-01-20 Thread Mark Smith

On Sat, 20 Jan 2007 19:47:04 -0800
Roland Dobbins [EMAIL PROTECTED] wrote:

snip

 
 The advantage of providing caching services is that they both help  
 preserve scare resources and result in a more pleasing user  
 experience.  As already pointed out, CAPEX/OPEX along with insertion  
 into the network are the current barriers, along with potential legal  
 liabilities; cooperation between content providers and SPs could help  
 alleviate some of these problems and make it a more attractive model,  
 and help fund this kind of infrastructure in order to make more  
 efficient use of bandwidth at various points in the topology.
 

I think you're more or less describing what Akamai already do - they're
just not doing it for authorised P2P protocol distributed content (yet?).

Regards,
Mark.

-- 

Sheep are slow and tasty, and therefore must remain constantly
 alert.
   - Bruce Schneier, Beyond Fear


Re: Google wants to be your Internet

2007-01-20 Thread Roland Dobbins



On Jan 20, 2007, at 8:10 PM, Mark Smith wrote:

I think you're more or less describing what Akamai already do - they're
just not doing it for authorised P2P protocol distributed content (yet?).


Yes, and P2P might make sense for them to explore - but a) it doesn't
help SPs smooth out bandwidth 'hotspots' in and around their access
networks due to P2P activity, b) it doesn't bring the content out to the
very edges of the access network, where the users are, and c) it isn't
something which can be woven together out of more or less off-the-shelf
technology with the users themselves supplying the infrastructure and
paying for (and being compensated for, a la FON or SpeakEasy's WiFi
sharing program) the access bandwidth.


It seems to me that a FON-/Speakeasy-type bandwidth-charge  
compensation model for end-user P2P caching and distribution might be  
an interesting approach for SPs to consider, as it would reduce the  
CAPEX and OPEX for caching services and encourage the users  
themselves to subsidize the bandwidth costs to one degree or another.


---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder






Re: Google wants to be your Internet

2007-01-20 Thread Stephen Sprunk


Thus spake Adrian Chadd [EMAIL PROTECTED]

On Sun, Jan 21, 2007, Charlie Allom wrote:
 This is a pure example of a problem from the operational front which
 can be floated to research and the industry, with smarter solutions
 than port blocking and QoS.

This is what I am interested/scared by.


It's not that hard a problem to get on top of. Caching, unfortunately,
continues to be viewed as anathema by ISP network operators in the
US. Strangely enough the caching technologies aren't a problem with
the content -delivery- people.


US ISPs get paid on bits sent, so they're going to be _against_ caching 
because caching reduces revenue.  Content providers, OTOH, pay the ISPs 
for bits sent, so they're going to be _for_ caching because it increases 
profits.  The resulting stalemate isn't hard to predict.



I've had a few ISPs out here in Australia indicate interest in a cache
that could do the normal stuff (http, rtsp, wma) and some of the p2p
stuff (bittorrent especially) with a smattering of 
QoS/shaping/control -

but not cost upwards of USD$100,000 a box. Lots of interest, no
commitment.


Basically, they're looking for a box that delivers what P2P networks 
inherently do by default.  If the rate-limiting is sane, then only a 
copy (or two) will need to come in over the slow overseas pipes, and 
it'll be replicated and reassembled locally over fast pipes.  What, 
exactly, is a middlebox supposed to add to this picture?



It doesn't help (at least in Australia) where the wholesale model of
ADSL isn't content-replication-friendly: we have to buy ATM or
ethernet pipes to upstreams and then receive each session via L2TP.
 Fine from an aggregation point of view, but missing the true usefulness
of content replication and caching - right at the point where your
customers connect in.


So what you have is a Layer 8 problem due to not letting the network 
topology match the physical topology.  No magical box is going to save 
you from hairpinning traffic between a thousand different L2TP pipes. 
The best you can hope for is that the rate limits for those L2TP pipes 
will be orders of magnitude larger than the rate limit for them to talk 
upstream -- and you don't need any new tools to do that, just 
intelligent use of what you already have.


 (Disclaimer: I'm one of the Squid developers. I'm getting an increasing
 amount of interest from CDN/content origination players but none from
 ISPs. I'd love to know why ISPs don't view caching as a viable option
 in today's world and what we could do to make it easier for y'all.)


As someone who voluntarily used a proxy and gave up, and has worked in 
an IT dept that did the same thing, it's pretty easy to explain: there 
are too many sites that aren't cache-friendly.  It's easy for content 
folks to put up their own caches (or Akamaize) because they can design 
their sites to account for it, but an ISP runs too much risk of breaking 
users' experiences when they apply caching indiscriminately to the 
entire Web.  Non-idempotent GET requests are the single biggest breakage 
I ran into, and the proliferation of dynamically-generated Web 2.0 
pages (or faulty Expires values) are the biggest factor that wastes 
bandwidth by preventing caching.
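
To make that breakage concrete, here's a rough sketch (mine, not Squid's
actual logic) of the freshness decision a shared cache has to make.
Non-GET methods are treated as uncacheable, and a missing or junk Expires
header - the faulty Expires values mentioned above - leaves the cache
nothing to work with, so the object gets fetched from the origin every time.

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def freshness_lifetime(headers, now=None):
    """Seconds a GET response may be served from cache without
    revalidation, per a simplified HTTP/1.1 freshness model."""
    now = now or datetime.now(timezone.utc)
    cc = headers.get("Cache-Control", "").lower()
    if any(d in cc for d in ("no-store", "no-cache", "private")):
        return 0
    for directive in cc.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            try:
                return max(0, int(directive.split("=", 1)[1]))
            except ValueError:
                return 0
    expires = headers.get("Expires")
    if expires:
        try:
            delta = parsedate_to_datetime(expires) - now
            return max(0, int(delta.total_seconds()))
        except (TypeError, ValueError):
            return 0  # "Expires: 0" and similar junk mean already stale
    return 0  # no freshness info at all: the cache can't reuse it

print(freshness_lifetime({"Cache-Control": "max-age=3600"}))  # 3600
print(freshness_lifetime({"Expires": "0"}))                   # 0
```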


S

Stephen Sprunk  God does not play dice.  --Albert Einstein
CCIE #3723  God is an inveterate gambler, and He throws the
K5SSS  dice at every possible opportunity. --Stephen Hawking



Re: Google wants to be your Internet

2007-01-20 Thread Stephen Sprunk


Thus spake Jeremy Chadwick [EMAIL PROTECTED]

Chances are that other torrent client authors are going to see
[BitThief] as major defiance and start implementing things like
filtering what client can connect to who based on the client name/ID
string (ex. uTorrent, Azureus, MainLine), which as we all know, is
going to last maybe 3 weeks.


BitComet has virtually dropped off the face of the 'net since the 
authors decided to not honor the private flag.  Even public trackers 
_that do not serve private torrents_ frequently block it out of 
community solidarity.  Note that the blocking hasn't been incorporated 
into clients, because it's largely unnecessary.



 This in turn will solicit the BitThief authors implementing a feature
 that allows the client to either spoof its client name or use
 randomly-generated ones. Rinse, lather, repeat, until everyone is
 fighting rather than cooperating.

Will the BT protocol be reformed to address this?  50/50 chance.


There are lots of smart folks working on improving the tit-for-tat 
mechanism, and I bet the algorithm (but _not_ the protocol) implemented 
in popular clients will be tuned to adjust for freeloaders over time. 
However, the vast majority of people are going to use clients that 
implement things as intended because (a) it's simpler, and (b) it 
performs better.  Freeloading does work, but it takes several times as 
long to download files even with the existing, easily-exploited 
mechanisms.
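
The tit-for-tat tuning in question comes down to the unchoke decision.
A minimal sketch (illustrative only - not any client's real code, and
actual clients also rotate the optimistic slot every ~30 seconds): give
the regular slots to whoever is uploading to you fastest, plus one random
optimistic slot so new peers can bootstrap into the swarm.

```python
import random

def choose_unchoked(peers, upload_rate, regular_slots=3, rng=random):
    """Tit-for-tat unchoke: top uploaders-to-us take the regular slots,
    plus one randomly chosen 'optimistic' peer from the remainder."""
    ranked = sorted(peers, key=lambda p: upload_rate.get(p, 0.0), reverse=True)
    unchoked = ranked[:regular_slots]
    rest = [p for p in peers if p not in unchoked]
    if rest:
        unchoked.append(rng.choice(rest))  # optimistic unchoke
    return unchoked

rates = {"a": 50.0, "b": 40.0, "c": 30.0, "d": 0.0, "e": 0.0}  # KB/s to us
print(choose_unchoked(list(rates), rates, regular_slots=2,
                      rng=random.Random(1)))
```

A freeloader that uploads nothing can still win the optimistic slot now
and then, which is why freeloading works at all - just slowly.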


Note that all it takes to turn any standard client into a BitThief is 
tuning a few of the easily-accessible parameters (e.g. max connections, 
connection rate, and upload rate).  As many folks have found out with 
various P2P clients over the years, doing so really hurts you in 
practice, but you can freeload anything you want if you have patience. 
This is not particularly novel research; it just quantifies common 
knowledge.



The result of these items already been shown: BT encryption.  I
personally know of 3 individuals who have their client to use en-
cryption only (disabling non-encrypted connection support).  For
security?  Nope -- solely because their ISP uses a rate limiting
device.

Bram Cohen's official statement is that using encryption to get
around this is silly because not many ISPs are implementing
such devices (maybe not *right now*, Bram, but in the next year
or two, they likely will):

http://bramcohen.livejournal.com/29886.html


Bram is delusional; few ISPs these days _don't_ implement rate-limiting 
for BT traffic.  And, in response, nearly every client implements 
encryption to get around it.  The root problem is ISPs aren't trying to 
solve the problem the right way -- they're seeing BT taking up huge 
amounts of BW and are trying to stop that, instead of trying to divert 
that traffic so that it costs them less to deliver.


( My ISP doesn't limit BT, but I've talked with their tech support folks 
and the response was that if I use excessive bandwidth they'll 
rate-limit my entire port regardless of protocol.  They gave me a 
ballpark of what excessive means to them, I set my client below that 
level, and I've never had a problem.  This works better for me since all 
my non-BT traffic isn't competing for limited port bandwidth, and it 
works better for them since my BT traffic is unencrypted and easy to 
de-prioritize -- but they don't limit it per se, just mark it to be 
dropped first during congestion, which is fair.  Everyone wins. )


S

Stephen Sprunk  God does not play dice.  --Albert Einstein
CCIE #3723  God is an inveterate gambler, and He throws the
K5SSS  dice at every possible opportunity. --Stephen Hawking



Re: Google wants to be your Internet

2007-01-20 Thread Randy Bush

 It's not that hard a problem to get on top of. Caching, unfortunately,
 continues to be viewed as anathema by ISP network operators in the
 US. Strangely enough the caching technologies aren't a problem with the
 content -delivery- people.

if we embrace p2p, today's heavy-hitting bad users are tomorrow's
wonderful local cachers.

randy



Re: Google wants to be your Internet

2007-01-20 Thread rich


holy kook bait.

it's amazing after all these years how many people, and
companies, still don't get it.





/rf