Re: Superfast internet may replace world wide web

2008-04-07 Thread Fred Baker


That and someone can't tell the difference between a network and an  
application that runs in a network.


On Apr 7, 2008, at 10:38 AM, [EMAIL PROTECTED] wrote:

On Mon, 07 Apr 2008 20:21:26 +0530, Glen Kent said:


says the solemn headline of Telegraph.

http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2008/04/06/ninternet106.xml


So you get higher bandwidth (physical pipe allowing) by downloading
from a "grid" of systems.

Sounds suspiciously like somebody has re-invented BitTorrent?

(Sorry, am in a cynical mood today.. ;)




Train wreck (was "Does TCP Need an Overhaul?")

2008-04-07 Thread Fred Baker


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Apr 7, 2008, at 8:36 AM, Lucy Lynch wrote:

Anyone out there attend this event?

The Future of TCP: Train-wreck or Evolution?
http://yuba.stanford.edu/trainwreck/agenda.html

how did the demos go?


The researchers demonstrated four things that made sense to me:

(1) TCP is not the right transport for carrying video data if what  
you want is real-time delivery. Carrying stored video (YouTube-style)  
is fine, but if you're trying to watch TV, you really should be using  
some other transport such as RTP or DCCP. Same comment holds for  
sensor traffic, but the astronomers who carry radiotelescope data  
halfway around the world weren't present.


(2) TCP is probably not the right protocol for carrying transaction  
traffic within a data center. One speculates that SCTP (which has a  
concept of a stream of TCP-like "transactions" that can be handled  
out of order and allows for congestion management both within and  
among transactions) might be a better protocol, and in any event that  
when thousands of transactions back up in a gigabit Ethernet chip's  
queue on a host, the host should start noticing that it is  
experiencing congestion.


(3) 802.11 networks experience not only the traditional congestion  
experienced in wired networks, but channel access congestion (true of  
shared media in general) and radio interference. In such networks, it  
may be useful to think about congestion as happening "in a region" as  
opposed to "at a bottleneck".


(4) When it is pointed out that, instead of complaining about TCP in  
cases where it is the wrong protocol, it may be more useful to use the  
transport designed for the purpose, researchers who presumably are  
expert in matters of the transport layer respond in complete surprise.

-BEGIN PGP SIGNATURE-

iD8DBQFH+klNbjEdbHIsm0MRAlLhAKCDprgXaKYukFG57KRsRS8HyGAUHgCgyRLd
SpNahEUbZudgcoc3bMz/Cto=
=hnGa
-END PGP SIGNATURE-


Re: Aggregation for IPv4-compatible IPv6 address space

2008-02-03 Thread Fred Baker


in the most recent architecture, rfc 4291, that was deprecated. The  
exact statement is


2.5.5.1.  IPv4-Compatible IPv6 Address

   The "IPv4-Compatible IPv6 address" was defined to assist in the IPv6
   transition.  The format of the "IPv4-Compatible IPv6 address" is as
   follows:

   |                80 bits               | 16 |      32 bits        |
   +--------------------------------------+--------------------------+
   |0000..............................0000|0000|    IPv4 address     |
   +--------------------------------------+----+---------------------+

   Note: The IPv4 address used in the "IPv4-Compatible IPv6 address"
   must be a globally-unique IPv4 unicast address.

   The "IPv4-Compatible IPv6 address" is now deprecated because the
   current IPv6 transition mechanisms no longer use these addresses.
   New or updated implementations are not required to support this
   address type.

I should think you are within bounds to not announce it at all.
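
For illustration, here is a minimal Python sketch of how such an
address is formed (using the stdlib ipaddress module and a
documentation address as the example; an illustration only, not a
recommendation to deploy the deprecated address type):

    import ipaddress

    # The IPv4 address occupies the low-order 32 bits; everything
    # above it is zero (RFC 4291 section 2.5.5.1).
    v4 = ipaddress.IPv4Address("192.0.2.1")   # example address
    compat = ipaddress.IPv6Address(int(v4))   # zero-extend to 128 bits
    print(compat)                             # ::c000:201, i.e. ::192.0.2.1

So an aggregate like ::192.168.0.0/112 is arithmetically well-formed;
the point of the RFC text above is simply that nothing should be using
these addresses anymore.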

On Feb 4, 2008, at 6:09 AM, snort bsd wrote:



Hi all:

With IPv4-compatible IPv6 address space, could I aggregate the  
address space?


say, does 192.168.0.0/16 become ::192.168.0.0/112? Or must it be  
converted to native IPv6 address space?


Just wondering,










Re: EU Official: IP Is Personal

2008-01-24 Thread Fred Baker


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Jan 24, 2008, at 12:50 PM, Roland Perry wrote:

no fundamental contradiction in the proposition that private sector  
information can be mandated to be kept for minimum periods, is  
confidential, but nevertheless can be acquired by lawful subpoena.


they are if the records are kept for no private sector purpose, which  
is the case here. The corollary that is being built on is telco call  
detail records, which were once used in billing. But the ISPs have no  
use for the data, and storing it costs power, cooling,  
disk-or-other-storage, and so on. Get an ISP or other data center to  
give you an idea how many megawatts they go through and what that  
costs...

-BEGIN PGP SIGNATURE-

iD8DBQFHmJTTbjEdbHIsm0MRAkawAKDnhoWSoMvmSkvYrGMKyjcOg479fACfY5IC
XPNxwAA1fsU6j5Z/r5REBLw=
=2fCn
-END PGP SIGNATURE-


Re: EU Official: IP Is Personal

2008-01-23 Thread Fred Baker


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


On Jan 24, 2008, at 2:09 AM, Mikael Abrahamsson wrote:

The local antipiracy organization in Sweden needed a permit to  
collect/handle IP+timestamp and save it in their database, as this  
information was regarded as personal information. Since ISPs  
regularly save who has an IP at what time, IP+timestamp can be  
used to discern at least what access port a certain IP was at, or  
in case of PPPoE etc, what account was used to obtain the IP at  
that time.


I still think IP+timestamp doesn't imply what person did something


it doesn't, any more than the association of your cell phone with  
a cell tower conclusively implies that the owner of a telephone used  
it to do something in particular. However, in forensic data retention  
and wiretap procedures, the assumption is made that the user of a  
telephone or a computer is *probably* a person who normally has  
access to it.


In the EU Data Retention model, I will argue that the only thing that  
makes sense to use as a "Session Detail Record" is an IPFIX/Netflow  
record correlated with any knowledge the ISP might have of the  
person using the source and/or destination IP address at the time.  
When the address is temporarily or "permanently" assigned to a  
subscriber, such as a wireless address in a T-Mobile Hotspot (which  
one has to identify one's account when logging into, which  
presumptively identifies the subscriber) or the address assigned to a  
Cable Modem subscriber (home/SOHO), this tends to have a high degree  
of utility.
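
To make that concrete, a minimal sketch of the correlation (Python;
the log layout, field names, and data are hypothetical illustrations
of what RADIUS/DHCP accounting plus an IPFIX export might yield):

    from datetime import datetime

    # (ip, assigned_from, assigned_to, subscriber) -- hypothetical
    # address-assignment log, e.g. derived from RADIUS accounting.
    assignment_log = [
        ("198.51.100.7", datetime(2008, 1, 23, 8, 0),
         datetime(2008, 1, 23, 20, 0), "subscriber-42"),
    ]

    def subscriber_for(ip, when):
        """Return whoever held `ip` at time `when`, if the log knows."""
        for log_ip, start, end, who in assignment_log:
            if log_ip == ip and start <= when <= end:
                return who
        return None  # unassigned, or the log is incomplete

    # One flow record: source address plus timestamp.
    flow = {"src": "198.51.100.7", "ts": datetime(2008, 1, 23, 14, 30)}
    print(subscriber_for(flow["src"], flow["ts"]))  # subscriber-42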


In the wiretap model, one similarly selects the traffic one  
intercepts on the presumption that a surveillance subject is probably  
the person using the computer.


For them, it's all about probability. It doesn't have to be "one" if  
it is reasonable to presume that it is in the neighborhood.


What I find interesting here is the Jekyll/Hyde nature of it.  
European ISPs are required to keep expensive logs of the behavior of  
subscribers for forensic data mining, accessible under subpoena, for  
extensive periods like 6-24 months (last I heard it was 7 years in  
Italy, but that may now be incorrect), but the information is deemed  
private and therefore inappropriate to keep under EU privacy rules.  
ISPs are required to keep inappropriate information at their own  
expense in case forensic authorities decide to pay an occasional  
pittance to access some small quantity of it.

-BEGIN PGP SIGNATURE-

iD8DBQFHmA3hbjEdbHIsm0MRAhsKAJ4+xXkJm/JM/lDL1YpufmUYZdhClACgrvxD
keX0Zsm+QtJG6RcCMrJcVqk=
=DpcR
-END PGP SIGNATURE-


Re: NeXT Default Network

2007-11-27 Thread Fred Baker


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Could someone please tell me what 192.42.172.0/24 is or why it  
should be handled as a special prefix?


ftp://ftp-eng.cisco.com/cons/isp/security/Ingress-Prefix-Filter-Templates/T-ip-prefix-filter-ingress-strict-check-v18.txt


You might review the notes I list below, and specifically RFC 3330.  
They mention the prefix neither by name nor by value...


I would expect that this had something to do with a company called  
NeXT and an operating system called NextStep. It sounds like they  
came up with a variety of site-local address, pre-RFC1918 and  
pre-RFC3927, that did something similar to RFC 3927 addresses. This  
is mentioned in passing in RFCs 1117 and 1166. The big questions are:  
are there any NextStep systems still in use (I last used one in  
1990), and have they since been configured with other addresses  
(seems likely, especially in a DHCP world)?


http://www.ietf.org/rfc/rfc3330.txt
3330 Special-Use IPv4 Addresses. IANA. September 2002. (Format:
 TXT=16200 bytes) (Status: INFORMATIONAL)

http://www.ietf.org/rfc/rfc3789.txt
3789 Introduction to the Survey of IPv4 Addresses in Currently
 Deployed IETF Standards Track and Experimental Documents. P. Nesser,
 II, A. Bergstrom, Ed. June 2004. (Format: TXT=22842 bytes) (Status:
 INFORMATIONAL)

http://www.ietf.org/rfc/rfc3790.txt
3790 Survey of IPv4 Addresses in Currently Deployed IETF Internet Area
 Standards Track and Experimental Documents. C. Mickles, Ed., P.
 Nesser, II. June 2004. (Format: TXT=102694 bytes) (Status:
 INFORMATIONAL)

http://www.ietf.org/rfc/rfc3791.txt
3791 Survey of IPv4 Addresses in Currently Deployed IETF Routing Area
 Standards Track and Experimental Documents. C. Olvera, P. Nesser, II.
 June 2004. (Format: TXT=27567 bytes) (Status: INFORMATIONAL)

http://www.ietf.org/rfc/rfc3792.txt
3792 Survey of IPv4 Addresses in Currently Deployed IETF Security Area
 Standards Track and Experimental Documents. P. Nesser, II, A.
 Bergstrom, Ed. June 2004. (Format: TXT=46398 bytes) (Status:
 INFORMATIONAL)

http://www.ietf.org/rfc/rfc3793.txt
3793 Survey of IPv4 Addresses in Currently Deployed IETF Sub-IP Area
 Standards Track and Experimental Documents. P. Nesser, II, A.
 Bergstrom, Ed. June 2004. (Format: TXT=11624 bytes) (Status:
 INFORMATIONAL)

http://www.ietf.org/rfc/rfc3794.txt
3794 Survey of IPv4 Addresses in Currently Deployed IETF Transport
 Area Standards Track and Experimental Documents. P. Nesser, II, A.
 Bergstrom, Ed. June 2004. (Format: TXT=60001 bytes) (Status:
 INFORMATIONAL)

http://www.ietf.org/rfc/rfc3795.txt
3795 Survey of IPv4 Addresses in Currently Deployed IETF Application
 Area Standards Track and Experimental Documents. R. Sofia, P. Nesser,
 II. June 2004. (Format: TXT=92584 bytes) (Status: INFORMATIONAL)

http://www.ietf.org/rfc/rfc3796.txt
3796 Survey of IPv4 Addresses in Currently Deployed IETF Operations &
 Management Area Standards Track and Experimental Documents. P.
 Nesser, II, A. Bergstrom, Ed. June 2004. (Format: TXT=78400 bytes)
 (Status: INFORMATIONAL)


-BEGIN PGP SIGNATURE-

iD8DBQFHTLSMbjEdbHIsm0MRAssIAKDxNy0f4IjveLjyfrxGTkGuslSZ9QCgroID
E53IZ9u0/CnSmbKfWn9j7wI=
=n0CU
-END PGP SIGNATURE-


Re: Congestion control train-wreck workshop at Stanford: Call for Demos

2007-09-05 Thread Fred Baker



On Sep 5, 2007, at 8:01 AM, Sean Donelan wrote:

That's the issue with per-flow sharing, 10 institutions may be  
sharing a cost equally but if one student in one department at one  
institution generates 95% of the flows should he be able to consume  
95% of the capacity?


The big problem with this line of reasoning is that the student isn't  
visible at the network layer; at most, the IP address s/he is using  
is visible. If the student has an account at each of the universities,  
s/he might be using all of them simultaneously. To the network, at  
most we can say that there were some number of IP addresses  
generating a lot of traffic.


One can do "interesting" things in the network in terms of scheduling  
capacity. My ISP in front of my home does that; they configure my  
cable modem to shape my traffic up and down to not exceed certain  
rates, and lo and behold my families combined computational capacity  
doesn't exceed those rates. One could similar do such things on a per- 
address or per-port basis in an enterprise network. That's where the  
discussion of per-address WFQ came from a decade ago - without having  
to configure each system's capabilities, make the systems using a  
constrained interface share it in some semi-rational manner  
automatically. That kind of thing is actually a lot harder on the end  
system; they don't talk with each other about such things. Can that  
be defeated? Of course; use a different IP address for each  
BitTorrent TCP session for example. My guess is "probably not on a  
widespread basis". That kind of statement might fall in the same  
category as "640K is enough", though.


Can you describe for me what problem you would really like solved?  
Are you saying, for example, that BitTorrent and similar applications  
should be constrained in some way so that the many TCPs from one  
system typically get no more bandwidth than the single TCP on the  
system next door? Or are you really trying to build constraints on a  
per-user basis?


Re: Congestion control train-wreck workshop at Stanford: Call for Demos

2007-09-04 Thread Fred Baker


On Sep 3, 2007, at 6:44 PM, Steven M. Bellovin wrote:
More seriously -- the question is whether new services will cause  
operator congestion problems that today's mechanisms don't handle.


and it includes the question of what operators will be willing to  
deploy. One of the questions on the table, for example, is whether  
the network might be willing to characterize available capacity on  
links in datagrams that traverse them, either in an IP option or some  
interior header such as an IPv6 hop-by-hop option. The canonical  
variants there are XCP and RCP.


As I understand it, the conference organizers want to do something  
about TCP, but the examples they bring up of why it should be done  
relate to video and other applications. So this is going to have to  
extend to some variation on a session layer (SIP, for example), and  
potentially to protocols like DCCP.


Re: Extreme congestion (was Re: inter-domain link recovery)

2007-08-16 Thread Fred Baker



On Aug 16, 2007, at 7:46 AM, <[EMAIL PROTECTED]> wrote:
In many cases, yes. I know of a certain network that ran with 30%  
loss for a matter of years because the option didn't exist to  
increase the bandwidth. When it became reality, guess what they did.


How many people have noticed that when you replace a circuit with a  
higher capacity one, the traffic on the new circuit is suddenly  
greater than 100% of the old one? Obviously this doesn't happen all  
the time, such as when you have a 40% threshold for initiating a  
circuit upgrade, but if you do your upgrades when they are 80% or  
90% full, this does happen.


well, so let's do a thought experiment.

First, that INFOCOM paper I mentioned says that they measured the  
variation in delay pop-2-pop at microsecond granularity with  
hyper-synchronized clocks, and found that with 90% confidence the  
variation in delay in their particular optical network was less than  
1 ms. Also with 90% confidence, they noted "frequent" (frequency not  
specified, but apparently pretty frequent, enough that one of the  
authors later worried in my presence about offering VoIP services on  
it) variations on the order of 10 ms. For completeness, I'll note  
that they had six cases in a five hour sample where the delay changed  
by 100 ms and stayed there for a period of time, but we'll leave that  
observation for now.


Such spikes are not difficult to explain. If you think of TCP as an  
on-off function, a wave function with some similarities to a sine  
wave, you might ask yourself what the sum of a bunch of sine waves  
with slightly different periods is. It is also a wave function, and  
occasionally has a very tall peak. The study says that TCP  
synchronization happens in the backbone. Surprise.
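
A quick toy to see the effect (Python; the periods are made-up
stand-ins for slightly different RTTs):

    import math

    def aggregate(t, flows=50):
        # sum of sine waves with periods spread slightly around 1.0
        return sum(math.sin(2 * math.pi * t / (1.0 + i * 0.01))
                   for i in range(flows))

    samples = [aggregate(t / 10.0) for t in range(2000)]
    print("mean |sum|:", sum(abs(s) for s in samples) / len(samples))
    print("peak |sum|:", max(abs(s) for s in samples))  # far above the mean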


Now, let's say you're running your favorite link at 90% and get such  
a spike. What happens? The tip of it gets clipped off - a few packets  
get dropped. Those TCPs slow down momentarily. The more that happens,  
the more frequently TCPs get clipped and back off.


Now you upgrade the circuit and the TCPs stop getting clipped. What  
happens?


The TCPs don't slow down. They use the bandwidth you have made  
available instead.


in your words, "the traffic on the new circuit is suddenly greater  
than 100% of the old one".


In 1995 at the NGN conference, I found myself on a stage with Phill  
Gross, then a VP at MCI. He was basically reporting on this  
phenomenon and apologizing to his audience. MCI had put in an OC-3  
network - gee-whiz stuff then - and had some of the links run too  
close to full before starting to upgrade. By the time they had two  
OC-3's in parallel on every path, there were some paths with a  
standing 20% loss rate. Phill figured that doubling the bandwidth  
again (622 everywhere) on every path throughout the network should  
solve the problem for that remaining 20% of load, and started with  
the hottest links. To his surprise, with the standing load > 95% and  
experiencing 20% loss at 311 Mbps, doubling the rate to 622 Mbps  
resulted in links with a standing load > 90% and 4% loss. He still  
needed more bandwidth. After we walked offstage, I explained TCP to  
him...
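
As a rough sanity check on those numbers: the well-known TCP
throughput approximation (Mathis et al., rate per flow roughly
(MSS/RTT) * sqrt(3/2) / sqrt(p)) says that if a fixed population of
flows saturates a link, the loss rate p scales with the inverse
square of capacity. A back-of-the-envelope sketch, not a claim about
what MCI actually measured:

    # N flows saturating capacity C: N * (MSS/RTT) * k / sqrt(p) = C,
    # so p ~ (N * MSS * k / (RTT * C))**2.  Doubling C quarters p.
    p_before = 0.20
    p_after = p_before / 2 ** 2
    print(f"predicted loss after doubling: {p_after:.0%}")  # 5%, vs. 4% seen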


Yup. That's what happens.

Several folks have commented on p2p as a major issue here.  
Personally, I don't think of p2p as the problem in this context, but  
it is an application that exacerbates the problem. Bottom line, the  
common p2p applications like to keep lots of TCP sessions flowing,  
and have lots of data to move. Also (and to my small mind this is  
egregious), they make no use of locality - if the content they are  
looking for is both next door and half-way around the world, they're  
perfectly happy to move it around the world. Hence, moving a file  
into a campus doesn't mean that the campus has the file and will stop  
bothering you. I'm pushing an agenda in the open source world to add  
some concept of locality, with the purpose of moving traffic off ISP  
networks when I can. I think the user will be just as happy or  
happier, and folks pushing large optics will certainly be.


Re: Extreme congestion (was Re: inter-domain link recovery)

2007-08-16 Thread Fred Baker


yes.

On Aug 16, 2007, at 12:29 AM, Randy Bush wrote:



So that's why I keep returning to the need to push back traffic a  
couple of ASNs back.  If it's going to get dropped anyway, drop it  
sooner.


ECN


Re: Extreme congestion (was Re: inter-domain link recovery)

2007-08-16 Thread Fred Baker


On Aug 15, 2007, at 10:13 PM, Adrian Chadd wrote:
Well, empirically (on multi-megabit customer-facing links) it takes  
effect immediately and results in congestion being "avoided" (for  
values of avoided.) You don't hit a "hm, this is fine" and "hm,  
this is congested"; you actually notice a much smoother performance  
degradation right up to 95% constant link use.


yes, theory says the same thing. It's really convenient when theory  
and practice happen to agree :-)


There is also a pretty good paper by Sue Moon et al in INFOCOM 2004  
that looks at the Sprint network (they had special access), examines  
variation in delay pop-2-pop at a microsecond granularity, and finds  
some fairly interesting behavior long before that.


Re: Extreme congestion (was Re: inter-domain link recovery)

2007-08-15 Thread Fred Baker



On Aug 15, 2007, at 8:39 PM, Sean Donelan wrote:

Or would it be better to let the datagram protocols fight it out  
with the session oriented protocols, just like normal Internet  
operations


  Session protocol start packets (TCP SYN/SYN-ACK, SCTP INIT, etc):
  1% queue

  Everything else (UDP, ICMP, GRE, TCP ACK/FIN, etc): normal queue

And finally, why only do this during extreme congestion?  Why not
always do it?


I think I would always do it, and expect it to take effect only under  
extreme congestion.



On Aug 15, 2007, at 8:39 PM, Sean Donelan wrote:

On Wed, 15 Aug 2007, Fred Baker wrote:
So I would suggest that a third thing that can be done, after the  
other two avenues have been exhausted, is to decide to not start  
new sessions unless there is some reasonable chance that they will  
be able to accomplish their work.


I view this as part of the flash crowd family of congestion  
problems, a combination of a rapid increase in demand and a rapid  
decrease in capacity.


In many cases, yes. I know of a certain network that ran with 30%  
loss for a matter of years because the option didn't exist to  
increase the bandwidth. When it became reality, guess what they did.


That's when I got to thinking about this.



Re: [policy] When Tech Meets Policy...

2007-08-15 Thread Fred Baker



On Aug 15, 2007, at 2:55 PM, Barry Shein wrote:
It seems to me that this should be an issue between the domain  
registrars and their customers, but maybe some over-arching policy  
is making it difficult to do the right thing?


Charging a "re-stocking fee" sounded perfectly reasonable. I don't  
think anyone has any *right* to "domain tasting", that is, to any  
particular pricing structure. But I don't see why it requires  
anything beyond some pricing solution as suggested.


Then my next question is, what reasons are there where it'd be  
wise/useful/non-criminal to do it on a large scale?


I'm not sure what the problem is with that except it seems to  
offend some people's sensibilities.


It costs the registry some money in terms of order entry and all  
that, and there are opportunity costs - if one registrar has a name  
checked out and being tasted by one of his clients, another registrar  
can't sell it to one of his.


PIR (.org) instituted an "excess deletion fee" in late May, which is  
at this point somewhat experimental. The fee is five cents per  
deleted domain if the total number of domains deleted within the 5  
day grace period in a month is greater than 90%.  The idea is that  
there is still a grace period where an individual can correct a mistake.


Re: Extreme congestion (was Re: inter-domain link recovery)

2007-08-15 Thread Fred Baker


let me answer at least twice.

As you say, remember the end-2-end principle. The end-2-end  
principle, in my precis, says "in deciding where functionality should  
be placed, do so in the simplest, cheapest, and most reliable manner  
when considered in the context of the entire network. That is usually  
close to the edge." Note the presence of advice and absence of mandate.


Parekh and Gallager in their 1993 papers on the topic proved using  
control theory that if we can specify the amount of data that each  
session keeps in the network (for some definition of "session") and  
for each link the session crosses define exactly what the link will  
do with it, we can mathematically predict the delay the session will  
experience. TCP congestion control as presently defined tries to  
manage delay by adjusting the window; some algorithms literally  
measure delay, while most measure loss, which is the extreme case of  
delay. The math tells me that the place to control the rate of a  
session is in the end system. Funny thing, that is found "close to  
the edge".
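
For reference, the end-to-end bound from those papers has roughly the
following shape (quoted from memory, so treat it as a sketch and check
the originals for the authoritative form): a session with leaky-bucket
parameters (sigma, rho), guaranteed rate g >= rho, and maximum packet
size L, crossing K WFQ hops with link rates r_k, sees delay bounded by

    \[
      D \;\le\; \frac{\sigma + (K-1)\,L}{g}
            \;+\; \sum_{k=1}^{K} \frac{L_{\max}}{r_k}
    \]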


What ISPs routinely try to do is adjust routing in order to maximize  
their ability to carry customer sessions without increasing their  
outlay for bandwidth. It's called "load sharing", and we have a list  
of ways we do that, notably in recent years using BGP advertisements.  
Where Parekh and Gallager calculated what the delay was, the ISP has  
the option of minimizing it through appropriate use of routing.


ie, edge and middle both have valid options, and the totality works  
best when they work together. That may be heresy, but it's true. When  
I hear my company's marketing line on intelligence in the network  
(which makes me cringe), I try to remind my marketing folks that the  
best use of intelligence in the network is to offer intelligent  
services to the intelligent edge that enable the intelligent edge to  
do something intelligent. But there is a place for intelligence in  
the network, and routing is its poster child.


In your summary of the problem, the assumption is that both of these  
are operative and have done what they can - several links are down,  
the remaining links (including any rerouting that may have occurred)  
are full to the gills, TCP is backing off as far as it can back off,  
and even so due to high loss little if anything productive is in fact  
happening. You're looking for a third "thing that can be done" to  
avoid congestive collapse, which is the case in which the network or  
some part of it is fully utilized and yet accomplishing no useful work.


So I would suggest that a third thing that can be done, after the  
other two avenues have been exhausted, is to decide to not start new  
sessions unless there is some reasonable chance that they will be  
able to accomplish their work. This is a burden I would not want to  
put on the host, because the probability is vanishingly small - any  
competent network operator is going to solve the problem with money  
if it is other than transient. But from where I sit, it looks like  
the "simplest, cheapest, and most reliable" place to detect  
overwhelming congestion is at the congested link, and given that  
sessions tend to be of finite duration and present semi-predictable  
loads, if you want to allow established sessions to complete, you  
want to run the established sessions in preference to new ones. The  
thing to do is delay the initiation of new sessions.


If I had an ICMP that went to the application, and if I trusted the  
application to obey me, I might very well say "dear browser or p2p  
application, I know you want to open 4-7 TCP sessions at a time, but  
for the coming 60 seconds could I convince you to open only one at a  
time?". I suspect that would go a long way. But there is a trust  
issue - would enterprise firewalls let it get to the host, would the  
host be able to get it to the application, would the application  
honor it, and would the ISP trust the enterprise/host/application to  
do so? Is DDoS possible?


So plan B would be to in some way rate limit the passage of TCP  
SYN/SYN-ACK and SCTP INIT in such a way that the hosed links remain  
fully utilized but sessions that have become established get  
acceptable service (maybe not great service, but they eventually  
complete without failing).
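
A toy sketch of that plan B (Python; the 1% share and the packet
dictionaries are illustrative, and a real implementation would live
in a router's queuing path, not in host code):

    from collections import deque

    START_SHARE = 0.01  # session starts get ~1% of transmission slots

    new_q, established_q = deque(), deque()
    credits = 0.0

    def is_session_start(pkt):
        # TCP SYN without ACK, or an SCTP INIT chunk
        flags = pkt.get("tcp_flags", "")
        return ("SYN" in flags and "ACK" not in flags) \
            or pkt.get("sctp_chunk") == "INIT"

    def enqueue(pkt):
        (new_q if is_session_start(pkt) else established_q).append(pkt)

    def transmit():
        """Next packet to send; work-conserving if only one queue is busy."""
        global credits
        credits += START_SHARE
        if new_q and (credits >= 1.0 or not established_q):
            if credits >= 1.0:
                credits -= 1.0
            return new_q.popleft()
        return established_q.popleft() if established_q else None

    enqueue({"tcp_flags": "SYN"})   # a new session attempt
    enqueue({"tcp_flags": "ACK"})   # an established flow's packet
    print(transmit()["tcp_flags"])  # ACK -- established traffic goes first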


On Aug 15, 2007, at 8:59 AM, Sean Donelan wrote:


On Wed, 15 Aug 2007, Fred Baker wrote:

On Aug 15, 2007, at 8:35 AM, Sean Donelan wrote:
Or should IP backbones have methods to predictably control which  
IP applications receive the remaining IP bandwidth?  Similar to  
the telephone network special information tone -- All Circuits  
are Busy.  Maybe we've found a new use for ICMP Source Quench.


Source Quench wouldn't be my favored solution here. What I might  
suggest is taking TCP SYN and SCTP INIT (or new sessions if the

Re: Extreme congestion (was Re: inter-domain link recovery)

2007-08-15 Thread Fred Baker



On Aug 15, 2007, at 8:35 AM, Sean Donelan wrote:

Or should IP backbones have methods to predictably control which IP  
applications receive the remaining IP bandwidth?  Similar to the  
telephone network special information tone -- All Circuits are  
Busy.  Maybe we've found a new use for ICMP Source Quench.


Source Quench wouldn't be my favored solution here. What I might  
suggest is taking TCP SYN and SCTP INIT (or new sessions if they are  
encrypted or UDP) and put them into a lower priority/rate queue.  
Delaying the start of new work would have a pretty strong effect on  
the congestive collapse of the existing work, I should think.


Re: TCP congestion

2007-07-12 Thread Fred Baker



On Jul 12, 2007, at 11:42 AM, Brian Knoll ((TTNET)) wrote:


If the receiver is sending a DUP ACK, then the sender either never
received the first ACK or it didn't receive it within the timeframe it
expected.


or received it out of order.

Yes, a tcpdump trace is the first step.


Re: Thoughts on increasing MTUs on the internet

2007-04-13 Thread Fred Baker


I agree with many of your thoughts. This is essentially the same  
discussion we had upgrading from the 576 byte common MTU of the  
ARPANET to the 1500 byte MTU of Ethernet-based networks. Larger MTUs  
are a good thing, but are not a panacea. The biggest value in real  
practice is IMHO that the end systems deal with a lower interrupt  
rate when moving the same amount of data. That said, some who are  
asking about larger MTUs are asking for values so large that CRC  
schemes lose their value in error detection, and they find themselves  
looking at higher layer FEC technologies to make up for the issue.  
Given that there is an equipment cost related to larger MTUs, I  
believe that there is such a thing as an MTU that is impractical.


1500 byte MTUs in fact work. I'm all for 9K MTUs, and would recommend  
them. I don't see the point of 65K MTUs.


On Apr 14, 2007, at 7:39 AM, Simon Leinen wrote:



Ah, large MTUs.  Like many other "academic" backbones, we implemented
large (9192 bytes) MTUs on our backbone and 9000 bytes on some hosts.
See [1] for an illustration.  Here are *my* current thoughts on
increasing the Internet MTU beyond its current value, 1500.  (On the
topic, see also [2] - a wiki page which is actually served on a
9000-byte MTU server :-)

Benefits of >1500-byte MTUs:

Several benefits of moving to larger MTUs, say in the 9000-byte range,
were cited.  I don't find them too convincing anymore.

1. Fewer packets reduce work for routers and hosts.

   Routers:

   Most backbones seem to size their routers to sustain (near-)
   line-rate traffic even with small (64-byte) packets.  That's a good
   thing, because if networks were dimensioned to just work at average
   packet sizes, they would be pretty easy to DoS by sending floods of
   small packets.  So I don't see how raising the MTU helps much
   unless you also raise the minimum packet size - which might be
   interesting, but I haven't heard anybody suggest that.

   This should be true for routers and middleboxes in general,
   although there are certainly many places (especially firewalls)
   where pps limitations ARE an issue.  But again, raising the MTU
   doesn't help if you're worried about the worst case.  And I would
   like to see examples where it would help significantly even in the
   normal case.  In our network it certainly doesn't - we have Mpps to
   spare.

   Hosts:

   For hosts, filling high-speed links at 1500-byte MTU has often been
   difficult at certain times (with Fast Ethernet in the nineties,
   GigE 4-5 years ago, 10GE today), due to the high rate of
   interrupts/context switches and internal bus crossings.
   Fortunately tricks like polling-instead-of-interrupts (Saku Ytti
   mentioned this), Interrupt Coalescence and Large-Send Offload have
   become commonplace these days.  These give most of the end-system
   performance benefits of large packets without requiring any support
   from the network.

2. Fewer bytes (saved header overhead) free up bandwidth.

   TCP segments over Ethernet with a 1500-byte MTU are "only" 94.2%
   efficient, while with a 9000-byte MTU they would be 99.?% efficient.
   While an improvement would certainly be nice, 94% already seems
   "good enough" to me.  (I'm ignoring the byte savings due to fewer
   ACKs.  On the other hand not all packets will be able to grow
   sixfold - some transfers are small.)

3. TCP runs faster.

   This boils down to two aspects (besides the effects of (1) and  
(2)):


   a) TCP reaches its "cruising speed" faster.

  Especially with LFNs (Long Fat Networks, i.e. paths with a large
  bandwidth*RTT product), it can take quite a long time until TCP
  slow-start has increased the window so that the maximum
  achievable rate is reached.  Since the window increase happens
  in units of MSS (~MTU), TCPs with larger packets reach this
  point proportionally faster.

  This is significant, but there are alternative proposals to
  solve this issue of slow ramp-up, for example HighSpeed TCP [3].

   b) You get a larger share of a congested link.

  I think this is true when a TCP-with-large-packets shares a
  congested link with TCPs-with-small-packets, and the packet loss
  probability isn't proportional to the size of the packet.  In
  fact the large-packet connection can get a MUCH larger share
  (sixfold for 9K vs. 1500) if the loss probability is the same
  for everybody (which it often will be, approximately).  Some
  people consider this a fairness issue, other think it's a good
  incentive for people to upgrade their MTUs.

About the issues:

* Current Path MTU Discovery doesn't work reliably.

  Path MTU Discovery as specified in RFC 1191/1981 relies on ICMP
  messages to discover when a smaller MTU has to be used.  When these
  ICMP messages fail to arrive (or be sent), the sender will happily
  continue to send too-large packets into the blackhole.  This problem
  is very real.  As an 
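
For what it's worth, the efficiency figures quoted in item 2 above
check out on the back of an envelope. A sketch in Python, assuming
IPv4 + TCP with the timestamp option and counting full Ethernet
framing cost; different header assumptions move the result by a few
tenths of a percent:

    # Per-frame wire cost: preamble 8 + header 14 + FCS 4 + gap 12.
    FRAMING = 8 + 14 + 4 + 12   # 38 bytes per frame
    HEADERS = 20 + 20 + 12      # IPv4 + TCP + timestamp option

    for mtu in (1500, 9000):
        payload = mtu - HEADERS
        print(f"MTU {mtu}: {payload / (mtu + FRAMING):.1%}")
    # ~94.1% and ~99.0%, in line with the 94.2% and "99.?%" quoted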

Re: NANOG Thread

2006-09-25 Thread Fred Baker


no; what OS and what applications are you using? Anything  
particularly unusual?


On Sep 25, 2006, at 8:55 AM, [EMAIL PROTECTED] wrote:




On Mon, 25 Sep 2006, Alexander Harrowell wrote:



Well, if anyone wants to add more to it, there are quite a few
prominent 'noggers still to cast.



Can I be at the bottom of each thread, for when it really gets into
wanker territory? Thanks.

- billn


Re: Why is RFC1918 space in public DNS evil?

2006-09-18 Thread Fred Baker


I know the common wisdom is that putting 192.168 addresses in a  
public zonefile is right up there with kicking babies who have just  
had their candy stolen, but I'm really struggling to come up with  
anything more authoritative than "just because, now eat your  
brussels sprouts".


I think the best answer to that is to turn it on its head.

As Joe points out, exposing interior information unnecessarily is a  
security risk - leaving a treasure map with "X marks the spot"  
invites pirates of all sorts. In this case, it is not only exposing  
interior information (the.host.you.want.to.attack.example.com)  
unnecessarily, but also in a way that doesn't actually help anyone  
else. The address of my telephone is 10.32.244.220. But do a  
traceroute to that address (or the address of my family computer,  
which is 192.168.1.20), and I about guarantee that you will come to a  
different computer, for the simple reason that you aren't in any of  
my private domains.


So putting those addresses in the public DNS actually *only* helps me  
if I am someone who is bombarding your prophylactic defenses with  
messages intended to reach your chewy innards. Anyone else has no  
actual use for the internal addresses.


I think the right question for your client is: "why exactly did you  
want to do that?"


Re: [Fwd: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

2006-09-12 Thread Fred Baker



On Sep 12, 2006, at 2:45 AM, Daniel Golding wrote:

What would establish IP addresses as some sort of ARIN-owned and  
licensed community property? Well, winning a court case like this,  
or congress passing a law.


Korea also has passed a law that any addresses assigned to KRNIC  
become the property of KRNIC. But even passing a law doesn't make it so.


IP Addresses have always been treated as a resource of the network  
since its inception. The fact that lawmakers don't understand or care  
to understand doesn't change the facts of the case.


Re: voip calea interfaces

2006-06-20 Thread Fred Baker



On Jun 20, 2006, at 11:44 AM, Eric A. Hall wrote:


This is an interesting approach. For one, it seems to cover a lot more
technology than CALEA requires. I suppose that is an artifact of
trying to serve multiple countries' requirements in a single
architecture.


Actually, no.

IANAL

US laws include Title III of the 1968 OCCSS, 1978 FISA, and the 1994  
CALEA, with updates related to PATRIOT. The US is unusual in this  
respect; most of the countries that have published law or regulation  
relating to lawful intercept simply state that the police have  
authority to intercept any communications a surveillance subject  
participates in. As such, Cisco implemented the PacketCable solution  
for CALEA a while ago, and then went on to meet the requirements of  
our various customers that have IP data intercept requirements.


You might find the following of interest. It's more about e-911, but  
if you want to read forensic access in as well, the shoe fits.


http://blogs.cisco.com/networkers/2006/06/deploying_emergency_services_e.html


It's my opinion. Cisco is welcome to espouse it as well if it wants to.



Re: voip calea interfaces

2006-06-20 Thread Fred Baker


I'm willing to reply on-list, but obviously any business or legal  
contacts have to be off-list. For those, I can point you to the  
product manager for the technology, but it would frankly be better  
for one to go through one's account team, for scaling reasons.


Yes, the vendors are aware of this. Our legal people track it pretty  
closely, and we have been dealing with the issues in Europe,  
Australia, and a number of other places for quite a while. We talk  
directly with legislators, regulators, and various police entities.  
Before you ask whether we speak with China, I'll point out that we  
deliver a common technology that people using it configure to the  
applicable laws and warrants, and the laws we looked at in designing  
it were the laws and regulations of the various countries that signed  
the CyberCrime treaty. We designed it the way we did to meet the laws  
and regulations of western democracies like the US and EU.


RFC 2804 requested that anyone that designed a Lawful Intercept  
technology please publish it so that it could have open review. We  
did so:


http://www.ietf.org/rfc/rfc3924.txt
3924 Cisco Architecture for Lawful Intercept in IP Networks. F. Baker,
 B. Foster, C. Sharp. October 2004. (Format: TXT=40826 bytes)
 (Status: INFORMATIONAL)

This has also been submitted to ETSI, as an alternative to the model  
initially proposed there, which was "why don't we just split every  
fiber and run one instance under the appropriate agency's door?". I  
am not personally involved in that effort, but someone from my  
company is and I understand that ETSI is considering the model.


What this describes is the interface from a router or switch, or from  
a control application like a SIP proxy, to a third party mediation  
device. The interface from the mediation device to the law  
enforcement agency is different, and differs by country. The  
fundamental principle that we are trying to design to is "give the  
LEA what the warrant says they should get, no more and no less"; in  
some cases, that means that the mediation device will get a superset  
of the warranted data and have to edit it appropriately. There are  
various technologies for lawful intercept that exist that require a  
site visit to the POP to respond to the warrant or deployment of a  
stack of equipment in each POP in case an LEA ever asks; we try to  
make this a feature of the router or switch that can be configured  
the same way anything else is, but the information regarding the  
intercept kept appropriately private.


You might also take a look at
http://www.cisco.com/pcgi-bin/search/search.pl?searchPhrase=lawful+intercept


On Jun 20, 2006, at 9:48 AM, Eric A. Hall wrote:
I'm looking into the FCC ruling to require CALEA support for  
certain classes of VoIP providers, as upheld by the DC circuit  
court a couple of weeks ago [1]. The portion of VoIP that is  
covered by this order is pretty narrow (ie, you provide  
telephony-like voip services for $$ [read the specs for the real  
definition]), and the FCC is looking at narrowing it down further  
but has not done so yet. Meanwhile, the deadline for implementation  
-- May 14, 2007 -- is starting to get pretty close.


The operational part of this subject, and the reason for this mail,  
is the implementation of the wiretap interface. Obviously there are  
going to be a range of implementation approaches, given that there  
are a wide variety of providers. I mean, big-switch users probably  
just enable a feature, but small providers that rely on IP PBX gear  
with FXO cards will have to do something specific. Are vendors  
stepping up to the plate? Did you even know about this?


Off-list is fine, and I'll summarize if there's interest.

Thanks

[1] http://pacer.cadc.uscourts.gov/docs/common/opinions/200606/05-1404a.pdf


--
Eric A. Hall                                  http://www.ehsco.com/
Internet Core Protocols    http://www.oreilly.com/catalog/coreprot/


Re: MEDIA: ICANN rejects .xxx domain

2006-05-12 Thread Fred Baker


On May 11, 2006, at 11:28 PM, Martin Hannigan wrote:
Im having an offline discussion with a list member and I'll ask,  
why does it matter if you have a domain name if a directory can  
hold everything you need to know about them via key words and ip- 
addrs, NAT's and all?


I think there is a place for that discussion; a directory would allow  
for containment, which might allow the same character string to be  
used as a name by different groups if they have sufficiently low  
probability of needing to communicate. There are other ways to handle  
this as well. You might google some out-dated drafts by John Klensin  
that mention such a concept.


As someone else mentioned, there is this authority thing, though. So  
who manages this name directory? If there is a directory managed by a  
central agency of some sort that in turn hands LDAP queries (or  
whatever) off to local instances of directories managed by companies,  
how does that differ (apart from the use of a different transport)  
from what DNS does today? Is that central directory-managing  
authority someone we have to collectively agree to, and how do we do  
that? How do changes in that directory get made? And if there is no  
central directory, then basically we have the size and complexity of  
the .com, .net, .org, and other large namespaces to contend with -  
just how do we determine that www.renesys translates to 69.84.130.137  
and not to 198.133.219.25? How do we distribute that information, and  
assure ourselves that it got distributed correctly?


I'm not saying it is impossible, or even difficult. I am, however,  
pointing out that the job DNS does today would have to be done in the  
new regime, and would have to be done at least as well, and would be  
fairly likely to have many of the same characteristics, at least when  
taken in the large.


Now, as to ccTLDs vs gTLDs, if anyone wants to eliminate one or the  
other they get my vote. I think that gTLDs mostly create a mess, and  
if I were King they would have been eliminated a long time ago. But  
that is the opinion of one person, and is probably worth what you  
paid to receive it.


Re: MEDIA: ICANN rejects .xxx domain

2006-05-11 Thread Fred Baker



On May 11, 2006, at 8:42 PM, Jim Popovitch wrote:


Why not just plain ole hostnames like nanog, www.nanog, mail.nanog


For the same reason DNS was created in the first place. You will  
recall that we actually HAD a hostname file that we traded around...


Re: AW: Italy orders ISPs to block sites

2006-03-07 Thread Fred Baker


On Mar 7, 2006, at 12:13 AM, tom wrote:

I hope you don't mind this commentary from a European...


I certainly don't mind commentary from a European. I just wouldn't  
want to hear the same European complaining about the Chinese...


:-)


Re: manet, for example (Re: protocols that don't meet the need...)

2006-02-15 Thread Fred Baker


then fine, I agree that a manet network run by an operator is in  
scope. I was responding to the comments I have already gotten from  
network operators who have dumped all over me when I mentioned manet.


On Feb 15, 2006, at 1:52 PM, Christian Kuhtz wrote:



Fred,

Hmm.  Is a self-organizing mesh access network with (some) explicitly  
mobile participants really that dissimilar from what the claimed  
goal of manet is?  Seems to me that's perfectly in scope.


Further, I think if you review the charter for the manet wg you  
could be convinced they're explicitly in scope.  And, from  
EarthLink Municipal Networks' perspective, we're hardly a 'wired  
network' operator a la incumbent telco, even though elements of  
those types of networks may help bring our wireless mesh to life in  
the end.


So, if what we're doing isn't part of manet, what is the  
appropriate industry forum to work out IP routing issues etc?  What  
is the appropriate context for manet if it isn't what I read the  
charter to state?  Is it really just, for example, autonomous  
devices navigating in a sensor network?


Best regards,
Christian

On Feb 15, 2006, at 4:35 PM, Fred Baker wrote:

The big question there is whether it is helpful for an operator of  
a wired network to comment on a routing technology for a network  
that is fundamentally dissimilar from his target topology. Not  
that there is no valid comment - the security issues are certainly  
related. But if you want to say "but in my continental or global  
fiber network I don't plan to run a manet, so this is entirely  
stupid" - which is nearly verbatim the operator comment I got in a  
discussion of manet routing in a university setting three years  
ago - the logical answer is "we didn't expect you to; do you have  
comments appropriate to a regional enterprisish network whose  
'core' is a set of unmanned airplanes flying in circles and  
connects cars, trucks, and other kinds of vehicles?".


So operators are certainly welcome in a research group, but I  
would suggest that operator concerns/requirements be tailored to  
operational use of a manet network in a context where it *is*  
appropriate.


On Feb 14, 2006, at 1:55 PM, Christian Kuhtz wrote:
Hmm, well, when there is lots of vendor and academia involvement,  
no, there's no operator community represented in a number of things  
I'm following in the IETF.  Take manet, for example, I don't even  
know where to begin to inject operator concerns/requirements. :-/


Re: a radical proposal (Re: protocols that don't meet the need...)

2006-02-15 Thread Fred Baker


On Feb 15, 2006, at 9:13 AM, Edward B. DREGER wrote:
Of course not.  Let SBC and Cox obtain a _joint_ ASN and _joint_  
address
space.  Each provider announces the aggregate co-op space via the  
joint

ASN as a downstream.


Interesting. This is what has been called metropolitan addressing.  
I'm certainly not the one who first proposed it, although I have  
thought about it for a while, dating at least as far back as 2001.


The crux of the concept as several *have* proposed it is that a  
regional authority - a city, perhaps, or a consortium of ISPs, or in  
the latest version of the proposal I have seen the country of Korea -  
gets a prefix, and sets up an arrangement. SOHOs that want to  
multihome within its territory are able to get small (/48? /56?)  
prefixes from it, and providers that deliver service in the area may  
opt in to supporting such SOHO prefixes. If they opt in, they are  
agreeing to:


 - join a local IXP, which may be a physical switch or
   virtualized by a set of bilateral agreements.
 - outside the region, they advertise the prefix of the
   regional authority
 - within the region, for customers that have gotten such a
   prefix, if they have connectivity to the customer they
   advertise the customer's prefix to the ISPs at the IXP.

Note that the customer is not expected to run BGP or get an AS  
number, but either the regional authority gets an AS number or each  
serving ISP is deemed authorized to originate the prefix in its BGP  
announcements. But if a SOHO has two ISPs, both advertise its prefix  
within the region, and when a packet is sent to the prefix from  
wherever, any ISP that is delivering service to the SOHO can  
legitimately deliver it, and if one gets the packet but is not the  
servicing ISP, it knows how to hand the packet to the appropriate ISP  
at the IXP.
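
A small sketch of the routing arithmetic (Python, with documentation
prefixes standing in for a hypothetical regional aggregate and one
SOHO's prefix):

    import ipaddress

    regional = ipaddress.ip_network("2001:db8::/40")     # region's block
    customer = ipaddress.ip_network("2001:db8:12::/48")  # one SOHO

    # Outside the region only the aggregate is announced, and the
    # customer's prefix falls inside it:
    print(customer.subnet_of(regional))                  # True

    # Inside the region, longest-prefix match hands the packet to the
    # ISP actually serving the SOHO:
    rib = {regional: "any regional ISP",
           customer: "the serving ISP"}
    dst = ipaddress.ip_address("2001:db8:12::42")
    best = max((n for n in rib if dst in n), key=lambda n: n.prefixlen)
    print(rib[best])                                     # the serving ISP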


This turns the business model of routing on its head. Typically today  
if Alice is using ISP AliceNet and Bob is using ISP BobNet, Alice  
hands her packet to AliceNet, AliceNet gets it to BobNet in the  
cheapest way it can, and BobNet carries it halfway around the world  
to Bob. Bob's ISP carries the burden of most of the work. But in this  
model, if AliceNet happens to also provide service in Bob's region,  
AliceNet might carry the packet to the region and only give it to  
BobNet for the last 500 feet.


Whenever I have talked about the model with an ISP, I have gotten  
blasted. Basically, I have been told that


(1) any idea on operations proposed in the IETF is a bad idea because
the IETF doesn't listen to operators
(2) the ISPs aren't going to be willing to make settlement payments
among themselves in accordance with the plan
(3) routing isn't good enough to support it
(4) and in any event, this makes it too easy to change ISPs

In short, "hell no".

So, since nobody in the IETF (according to you) is supporting this  
model, what I understand from your remark and this thread is that the  
IETF is not responsive to ideas proposed by operators and doesn't  
come up with things operators can use, taking as an example that it  
hasn't told you how to implement metropolitan addressing.


Did I get that right?

I'm not sure how to proceed, given the level of invective I get in  
any discussion with anyone on the topic.


Note 1: PI addressing for edge networks that can qualify under a  
sensible set of rules (current ones are inadequate) for an AS number  
is the preferred way to handle an enterprise of a size or complexity  
comparable to a small (or large) ISP.


Note 2: Provider-provisioned addresses continue to make sense for  
folks that don't plan to multihome.


Re: protocols that don't meet the need...

2006-02-15 Thread Fred Baker


The big question there is whether it is helpful for an operator of a  
wired network to comment on a routing technology for a network that  
is fundamentally dissimilar from his target topology. Not that there  
is no valid comment - the security issues are certainly related. But  
if you want to say "but in my continental or global fiber network I  
don't plan to run a manet, so this is entirely stupid" - which is  
nearly verbatim the operator comment I got in a discussion of manet  
routing in a university setting three years ago - the logical answer  
is "we didn't expect you to; do you have comments appropriate to a  
regional enterprisish network whose 'core' is a set of unmanned  
airplanes flying in circles and connects cars, trucks, and other  
kinds of vehicles?".


So operators are certainly welcome in a research group, but I would  
suggest that operator concerns/requirements be tailored to  
operational use of a manet network in a context where it *is*  
appropriate.


On Feb 14, 2006, at 1:55 PM, Christian Kuhtz wrote:
Hmm, well, when there is lots of vendor and academia involvement,  
no, there's no operator community represented in a number of things  
I'm following in the IETF.  Take manet, for example, I don't even  
know where to begin to inject operator concerns/requirements. :-/


Re: Domain name hijack?

2006-01-23 Thread Fred Baker


Are they in .org? If so, I would call PIR. More generally, I would  
suggest you contact the registrar, not ICANN.


http://www.icann.org/registrars/accredited-list.html

On Jan 23, 2006, at 2:55 PM, Wil Schultz wrote:



Hey all, probably not the best place to ask this but thought that I  
would give it a shot.


At my company I manage 30 or so domain names through various  
registrars; they existed before I came on board. Today I received  
an email from a person claiming ownership of one of our valuable  
ones - valuable to us anyway, since we have an ASP product sitting  
behind it. The whois database says that it clearly belongs to him,  
and the ICANN registrar is not one that is being used here, last  
updated 6 months ago. If this had been changed 6 months ago I would  
have been the one to change it, and I didn't change anything.



   Created on..: 2001-May-20.
   Expires on..: 2010-May-20.
   Record last updated on..: 2005-Jul-25 21:29:10.


The domain is still pointing to our DNS servers, we haven't had any  
outages to this point, looks like the admin and tech contacts were  
the only thing changed, and now 6 months later they want the domain.


I've got calls into the current registrar to see what is going on,  
they were contacted at the same time I was and need some time to  
see what's going on. Anyone have any advice? Should I call ICANN?  
How can I find out what that change was on July 25th?


Re: the iab simplifies internet architecture!

2005-11-14 Thread Fred Baker


I believe that it is attributable to John Hart, Vitalink, late  
1980's. If he didn't coin it, he sure quoted it a lot.


Radia would have said something more like "bridge within a campus and  
route between them", I suspect.


On Nov 11, 2005, at 1:36 PM, [EMAIL PROTECTED] wrote:

	"bridge where you can, route where you must."  -- i forgot where  
this came from? Radia?


Re: the iab simplifies internet architecture!

2005-11-11 Thread Fred Baker


yes, a specific member of the IAB said that. A few moments ago, I was  
chatting with the chair of the IAB, who wondered out loud whether he  
had noticed everyone else on the IAB edging away from him (something  
about lightning strikes emanating from the dagger-eyes of fellow IAB  
members, I think) and observing that on that viewpoint he was on his  
own.


But your comment was not "PN, member of the IAB, said something  
clueless that the rest of the IAB disagreed with", nor did your  
subsequent comment


On Nov 11, 2005, at 6:03 AM, Randy Bush wrote:
but it will be a classic.  if you can get and edit it, send it to  
boing boing or /.


Pearls before swine.


that's what a number of i* members have publicly stated is their  
opinion of talking to us operators.


distinguish between the IAB, the IESG, the IETF, ISOC, and or any of  
the other acronyms that start with the letter I.


Yes, the experience of communicating between groups with different  
expertise can be brutal, and the brutality goes both directions. A  
classic example relates to discussions I have with various military  
agencies and developing countries on their issues; when I pursue  
solutions to same, I get comments from some members of this community  
(note the lack of broad-brush over-generalization) that "we don't  
need that so it is a stupid idea". Well, if one is running a static  
fiber core and has effectively infinite bandwidth everywhere with  
very high reliability, it probably is. It's hard to run fiber to a  
geosynchronous satellite - that's a lot of glass, at a minimum.


I would suggest that we drop the overgeneralizations, in which "PN"  
becomes "The I*", drop the disrespectful associations ("pearls before  
swine"), and drop the tone. Guys, we're all in this together, and it  
would be better if we spent a nanosecond thinking about how to get  
along.


On Nov 11, 2005, at 10:42 AM, Randy Bush wrote:


None that I have spoken with.

that's what a number of i* members have publicly stated is their
opinion of talking to us operators.


i imagine you speak with the one i was quoting rather often, though  
you were not there when it was said.  i was.  ask others who were  
there, pittsburgh ietf, a meeting between ipv6 chairs, iesg members,  
rirs, and a few ops.  a current member of the iab specifically  
said, and i quote again, since you seem to have missed the rest of  
my paragraph,


operators won't accept the h ratio because they don't
know what a logarithm is.

while the ietf mouths a lot of words about wanting to hear from,  
and get participation from, operators, the actual experience is  
pretty brutal.


http://rip.psg.com/~randy/051000.ccr-ivtf.html is from the current  
issue of acm sigcomm's ccr, where aaron falk also has a piece.  i  
play curmudgeon and he pollyanna.


randy


Re: the iab simplifies internet architecture!

2005-11-11 Thread Fred Baker


None that I have spoken with. What I hear continually is that people  
would like operational viewpoints on what they're doing and are  
concerned at the fact that operators don't involve themselves in IETF  
discussions.


On Nov 11, 2005, at 6:03 AM, Randy Bush wrote:


that's what a number of i* members have publicly stated is their
opinion of talking to us operators.


Re: New Rules On Internet Wiretapping Challenged

2005-11-03 Thread Fred Baker

and, if you're interested,
http://www.ietf.org/rfc/rfc3924.txt
3924 Cisco Architecture for Lawful Intercept in IP Networks. F. Baker,
 B. Foster, C. Sharp. October 2004. (Format: TXT=40826 bytes)
 (Status: INFORMATIONAL)

On Nov 3, 2005, at 9:17 AM, Vicky Rode wrote:


You might want to take a look at rfc 2804 for some background.


--
"Don't worry about the world coming to an end today. It's already  
tomorrow in Australia." (Charles Schulz )







Re: classful routes redux

2005-11-02 Thread Fred Baker



On Nov 2, 2005, at 4:01 PM, Bill Woodcock wrote:


  On Wed, 2 Nov 2005, Fred Baker wrote:

actually, no, I could compare a /48 to a class A.


...which makes the /32s-and-shorter that everybody's actually getting
double-plus-As, or what?


A class A gives you 16 bits to enumerate 8 bit subnets. If you start  
from the premise that all subnets are 8 bits (dubious, but I have  
heard it asserted) in IPv4, and that all subnets in IPv6 are 16 bits  
(again dubious, given the recent suggestion of a /56 allocation to an  
edge network), a /48 is the counterpart of a class A. We just have a  
lot more of them.
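
To make the arithmetic concrete, a quick sketch in Python (purely  
illustrative, not from any standard):

```python
# Subnet arithmetic behind the analogy. A class A is a /8; with 8-bit
# host fields, the subnet field is 32 - 8 - 8 = 16 bits. A /48 site
# prefix carved into /64 subnets likewise has a 16-bit subnet field.
class_a_subnet_bits = 32 - 8 - 8   # /8 network, 8 host bits per subnet
v6_site_subnet_bits = 64 - 48      # /64 subnets under a /48

print(2 ** class_a_subnet_bits)    # 65536 subnets in a class A
print(2 ** v6_site_subnet_bits)    # 65536 subnets in a /48
```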


All of which seems a little twisted to me. While I think /32, /48,  
/56, and /64 are reasonable prefix lengths for what they are proposed  
for, I have this feeling of early fossilization when it doesn't  
necessarily make sense.


Re: classful routes redux

2005-11-02 Thread Fred Baker


actually, no, I could compare a /48 to a class A.

On Nov 2, 2005, at 3:51 PM, [EMAIL PROTECTED] wrote:




er..  would this be a poor characterization of the IPv6 addressing
architecture which is encouraged by the IETF and the various RIR
members?

class A ==  /32
class B ==  /48
class C ==  /56
hostroute == /64

(and just think of all that spam that can originate from all those
 "loose" IP addresses in that /64 for your local SMTP server!!! Yummy)

-- Oat Willie


--
"Don't worry about the world coming to an end today. It's already  
tomorrow in Australia." (Charles Schulz )




Re: And Now for Something Completely Different (was Re: IPv6 news)

2005-10-18 Thread Fred Baker


the principal issue I see with your proposal is that it is DUAL  
homing vs MULTI homing. To make it viable, I think you have to say  
something like "two or more ISPs must participate in a multilateral  
peering arrangement that shares the address pool among them". The  
location of the actual peering is immaterial - doing one for Santa  
Barbara County in California might actually mean peering at One  
Wilshire Way in LA, for example. However, the peering arrangement  
would have to be open to the ISPs that serve the community;  
otherwise, it would be exposed to anti-trust litigation (if Cox and  
Verizon, the Cable Modem and DSL providers in Santa Barbara, did  
this, but it was not open to smaller ISPs in the community, the  
latter might complain that it had the effect of locking out  
competition).


But yes, communities of a rational size and density could get an  
address block, the relevant ISPs could all advertise it into the  
backbone, and the ISPs could determine among themselves how to  
deliver traffic to the homes, which I should expect would mean that  
they would deliver directly if they could and otherwise hand off to  
another ISP, and that handoff would require an appropriate routing  
exchange. Can you say "lots of long prefixes within a limited  
domain"? They would want to configure the home's address block on its  
interior interface and route to it through their own networks. Note  
that NAT breaks this... this requires end to end connectivity. I  
should expect that they would not literally expect the homes to run  
BGP (heaven forfend); I could imagine the "last mile" being the last  
bastion of RIP - the home sends a RIP update upstream for its interior  
address, and the ISP sends a default route plus routes to its own  
data centers down.


The biggest issue here might be the effect on cost models in routing.  
Today, hot potato routing makes some relationships relatively cheap  
while other relationships are more expensive; this reverses those.  
Today, if a datagram is handed to me that will be delivered in your  
network, I hand it to you at my earliest opportunity and you deliver  
it. In this model, I can't tell who will deliver it until I get  
relatively close to the community. Hence, I will carry the packet to  
that exchange point, and only then hand it to you. Funny. I described  
this in an internet draft nearly a decade ago, and got dumped on  
because of this issue - something about living in an ivory tower and  
playing with people's livelihoods, as I recall. I'll see if I can  
resurrect that, if you like.


On Oct 18, 2005, at 10:40 AM, Church, Chuck wrote:







Nanog,

I've been thinking a bunch about this IPv6 multihoming issue.
It seems that the method of hierarchical summarization will keep the
global tables small for all single-homed end user blocks.  But the
multihomed ones will be the problem.  The possible solution I've been
thinking about is 'adjacency blocks', for lack of a better term.  In
theory, the first customer to home to two different ISPs causes a new
large address block to be advertised upstream by these two ISPs.
Further customers homing to these two ISPs get an allocation out of
this same block.  The two ISPs will only announce the large block.  Of
course there are issues involving failure and scalability...

Failure would involve an ISP losing contact with end customer,
but still announcing the aggregate upstream.  This can be solved by
requiring that two ISPs must have a direct peering agreement, before
they can accept dual-homed customers.  Or a possible method (maybe
using communities?) where ISP B will announce the customer's actual
block (the small one) to its upstreams, if notified by ISP A that it's
not reachable by them.  When ISP A resumes contact with end customer,
ISP B retracts the smaller prefix.

Scalability is an obvious issue, as the possible number of these
'adjacency blocks' would be N * (N-1), where N is the number of ISPs
in the world.  Obviously pretty large.  But I feel the number of ISPs
that people would actually dual home to (due to reputation, regional
existence, etc) is a few orders of magnitude smaller.  ~100,000
prefixes (each can be an ASN, I suppose) should cover all needs, doing
some simple math.

The downside is that end customers are going to lose the ability
to prefer traffic from one ISP versus another for inbound traffic.
That alone might be a show-stopper, not sure how important it is.
Since IPv6 is a whole new ballgame, maybe it's ok to change the
rules...

Looking for any thoughts about it.  I'm sure there's things I
haven't considered, but the people I've bounced it off of haven't seen
any obvious problems.  Flame-retardant clothes on, just in case
though.



Chuck







Every multi-homer will be needing their own ASN, so that's what
clutters up your routing tables. It's economy there. Btw, a lot of
ASNs advertise one netw

Re: And Now for Something Completely Different (was Re: IPv6 news)

2005-10-17 Thread Fred Baker


On Oct 17, 2005, at 2:24 PM, Tony Li wrote:
To not even *attempt* to avoid future all-systems changes is  
nothing short of negligent, IMHO.



On Oct 17, 2005, at 2:17 PM, Randy Bush wrote:

and that is what the other v6 ivory tower crew said a decade ago.
which is why we have the disaster we have now.


and there I would agree, on both points.

now, the proposal put forward lo these many moons ago to avoid any  
possibility of a routing change was, as I recall, Nimrod, and the  
Nimrod architecture called for variable length addresses in the  
network layer protocol and the use of a flow label (as in "IPv6 flow  
label") as a short-form address in some senses akin to a virtual  
circuit ID. There has been a lot of work on that in rrg among other  
places, but the word from those who would deploy it has been  
uniformly "think in terms of an incremental upgrade to BGP" and  
"maybe MPLS will work as a virtual circuit ID if we really need one".


As you no doubt recall all too well, the variable length address was  
in fact agreed on at one point, but that failed for political  
reasons. Something about OSI. The 16 byte length of an IPv6 address  
derived from that as well - 16 bytes didn't allow one to represent a  
full 20-octet NSAP in IPv6, which had been an objective.


So the routing problem was looked at, and making a fundamental  
routing change was rejected by both the operational community and the  
routing folks.


No, IPv6 doesn't fix (or even change) the routing of the system, and  
that problem will fester until it becomes important enough to change.  
But lets not blame that on the "ivory tower folks", at least not  
wholly. We were all involved.


Re: And Now for Something Completely Different (was Re: IPv6 news)

2005-10-17 Thread Fred Baker


we agree that at least initially every prefix allocated should belong  
to a different AS (eg, no AS gets more than one); the fly in that is  
whether there is an ISP somewhere that is so truly large that it  
needs two super-sized blocks. I don't know if such exists, but one  
hopes it is very much the exception.


The question is "does every AS get a prefix". Under current rules,  
most AS's assigned to edge networks to support multihoming will not  
get a prefix. I personally think that's probably the wrong answer  
(eg, you and I seem to agree on PI space for networks that would  
warrant an AS number due to size, connectivity, and use of BGP to  
manage their borders), but it is the current answer.


On Oct 17, 2005, at 2:06 PM, Per Heldal wrote:

The RIRs have been trying pretty hard to make IPv6 allocations be  
one prefix per ISP, with truly large edge networks being treated  
as functionally equivalent to an ISP (PI addressing without  
admitting it is being done). Make the bald assertion that this is  
equal to one prefix per AS (they're not the same statement at all,  
but the number of currently assigned AS numbers exceeds the number  
of prefixes under discussion, so in my mind it makes a reasonable  
thumb-in-the-wind- guesstimate), that is a reduction of the  
routing table size by an order of magnitude.


I wouldn't even characterise that as being bald. Initial  
allocations of more than one prefix per AS should not be allowed.  
Further; initial allocations should differentiate between network  
of various sizes into separate address-blocks to simplify and  
promote strict prefix-filtering policies. Large networks may make  
arrangements with their neighbors to honor more specifics, but that  
shouldn't mean that the rest of the world should accept those.


Re: And Now for Something Completely Different (was Re: IPv6 news)

2005-10-17 Thread Fred Baker


works for me - I did say I'd like to change the routing protocol -  
but I think the routing protocol can be changed asynchronously, and  
will have to.


On Oct 17, 2005, at 1:51 PM, Tony Li wrote:



Fred,


If we are able to reduce the routing table size by an order of  
magnitude, I don't see that we have a requirement to fundamentally  
change the routing technology to support it. We may *want* to (and  
yes, I would like to, for various reasons), but that is a  
different assertion.





There is a fundamental difference between a one-time reduction in  
the table and a fundamental dissipation of the forces that cause it  
to bloat in the first place.  Simply reducing the table as a one- 
off only buys you linearly more time.  Eliminating the drivers for  
bloat buys you technology generations.


If we're going to put the world thru the pain of change, it seems  
that we should do our best to ensure that it never, ever has to  
happen again.


Regards,
Tony



Re: And Now for Something Completely Different (was Re: IPv6 news)

2005-10-17 Thread Fred Baker


That is an assumption that I haven't found it necessary to make. I  
have concluded that there is no real debate about whether the  
Internet will have to change to something that gives us the ability  
to directly address (e.g. not behind a NAT, which imposes some  
"interesting" requirements at the application layer and gateways of  
the sort which were what the Internet came about to not need) a whole  
lot more things than it does today. The debate is about how and when.  
"when" seems pretty solidly in the 3-10 year timeframe, exactly where  
in that timeframe being a point of some discussion, and "how" comes  
down to a choice of "IPv6" or "something else".


Fleming's IPv8 was a non-stupid idea that has a fundamental flaw in  
that it re-interprets parts of the IPv4 header as domain identifiers  
- it effectively extends the IP address by 16 bits, which is good,  
but does so in a way that is not backward compatible. If we could  
make those 16 bits be AS numbers (and ignoring for the moment the  
fact that we seem to need larger AS numbers), the matter follows  
pretty quickly. If one is going to change the header, though, giving  
up fragmentation as a feature seems a little tough; one may as well  
change the header and manage to keep the capability. One also needs  
to change some other protocols, such as routing AS numbers and  
including them in DNS records as part of the address.


From my perspective, we are having enough good experience with IPv6  
that we should simply choose that approach; there isn't a real good  
reason to find a different one. Yes, that means there is a long  
coexistence period yada yada yada. This is also true of any other  
fundamental network layer protocol change.


The RIRs have been trying pretty hard to make IPv6 allocations be one  
prefix per ISP, with truly large edge networks being treated as  
functionally equivalent to an ISP (PI addressing without admitting it  
is being done). Make the bald assertion that this is equal to one  
prefix per AS (they're not the same statement at all, but the number  
of currently assigned AS numbers exceeds the number of prefixes under  
discussion, so in my mind it makes a reasonable thumb-in-the-wind- 
guesstimate), that is a reduction of the routing table size by an  
order of magnitude.


If we are able to reduce the routing table size by an order of  
magnitude, I don't see that we have a requirement to fundamentally  
change the routing technology to support it. We may *want* to (and  
yes, I would like to, for various reasons), but that is a different  
assertion.


On Oct 17, 2005, at 12:42 PM, Per Heldal wrote:


Mon, 17 Oct 2005 at 11:29 -0700, Fred Baker wrote:

OK. What you just described is akin to an enterprise network with  
a  default route. It's also akin to the way DNS works.


No default, just one or more *potential* routes.

Your input is appreciated, and yes I'm very much aware that many  
people who maintain solutions that assume full/total control of the  
entire routing-table will be screaming bloody murder if that is  
going to change. Further details about future inter-domain-routing  
concepts belong in other fora (e.g. ietf's inter-domain-routing wg).


The long-term operational impact is that the current inter-domain- 
routing concepts (BGP etc) don't scale indefinitely and will have  
to be changed some time in the future. Thus expect the size of the  
routing-table to be eliminated from the list of limiting factors,  
or that the bar is considerably raised.


---

Note that I'm not saying that nothing should be done to preserve  
and optimise the use of the resources that are available today just  
because there will be something better available in a distant  
future. I'm in favor of the most restrictive allocation policies in  
place today. The development of the internet has for a long time  
been based on finding better ways to use available resources (CIDR  
anyone). To me a natural next-step in that process is for RIR's to  
start reclaiming unused v4 address-blocks, or at least start  
collect data to document that space is not being used (if they're  
not already doing so). E.g. previously announced address-blocks that  
have disappeared from the global routing-table for more than X  
months should go back to the RIR-pool (X<=6).



//Per



Re: And Now for Something Completely Different (was Re: IPv6 news)

2005-10-17 Thread Fred Baker


OK. What you just described is akin to an enterprise network with a  
default route. It's also akin to the way DNS works.


The big question becomes not only "who knows what I need to know",  
but "how do I know that they actually know it?". For example, let's  
postulate that the concept is that each ISP advertises some sort of  
routing service that will install routes on demand, but requires that  
someone initiate a request for the route, and requires either the  
target system or the edge router in that domain that is closest to  
the target respond with a route.


Simplistically, perhaps I am trying to route from my edge network  
("A") towards your edge network ("B"), and we are both customers of  
some ISP ("C"). The host A' that is trying to get to your host B'  
initiates a request. Let's presume that this goes to some name in  
domain A that lists all border routers, or some multicast group that  
they are members of. Presumably every border router does this, but  
for present discussion the border router in A connecting to router C'  
in C asks all of his peers (POPs?) for the route, and some other  
router C" asks B's border router. B's border router has the route,  
and so replies; C" replies to C', C' to A's border router, and that  
router to A'. A' can now send a message.


Presumably, if someone else now asks C about the route, either C' or  
C", or if the route was multicast to all of C's edge routers then any  
router in C would be able to respond directly.
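
For concreteness, a toy sketch of that request/reply flow in Python.  
Topology, names, and the caching policy are all invented for  
illustration; this is not any deployed protocol:

```python
from collections import deque

# A route request floods outward from A's border router; the first
# router holding the route answers, and each router on the reply path
# caches a next hop toward the target.
links = {
    "A":  ["C1"],
    "C1": ["A", "C2"],
    "C2": ["C1", "B"],
    "B":  ["C2"],
}
has_route = {"B"}   # B's border router knows how to reach B'
cache = {}          # next hops learned from replies

def discover(src):
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node in has_route:
            # The reply walks back toward src, leaving a next-hop
            # entry in each router it passes through.
            while parent[node] is not None:
                cache[parent[node]] = node
                node = parent[node]
            return True
        for peer in links[node]:
            if peer not in parent:
                parent[peer] = node
                queue.append(peer)
    return False

print(discover("A"))   # True
print(cache)           # each router now has a next hop toward B
```

A second query for the same destination hits the cache instead of  
flooding, which is the property being described; the misconfigured  
router D' mentioned below is exactly a poisoned entry in has_route.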


This becomes more interesting if C is in fact a succession of peer  
ISPs or ISPs that purchase transit from other ISPs. It also becomes  
very interesting if some router D' is misconfigured to advertise  
itself as B.


It's not dissimilar to ant routing. For that, there is a variety of  
literature; Google is your friend. In manet and sensor networks, it  
works pretty well, especially in the sense that once it finds a route  
it keeps using it while it continues working even if other routes are  
changing around it, and it can use local repair to deal with local  
changes.


At least as the researchers have described it, it doesn't do "policy"  
very well, and in networks that tend to be stable (such as wired  
networks) its load and convergence properties can be improved on.


I'll let you read there.


On Oct 17, 2005, at 9:20 AM, Per Heldal wrote:


Mon, 17 Oct 2005 at 07:25 -0700, Fred Baker wrote:


is that anything like using, in Cisco terms, a "fast-switching cache"
vs a "FIB"?



I'll bite as I wrote the paragraph you're quoting;

Actually, hanging on to the old concepts may be more confusing than
trying to look at it in completely new ways.

Imagine a situation with no access to any means of direct  
communication
(phone etc). You've got a message to deliver to some person, and  
have no

idea where to find that person. Chances are there's a group of people
nearby you can ask. They may know how to find the one you're looking
for. If not they may know others they can ask on your behalf. Several
iterations later the person is located and you've established a path
through which you can pass the information you wanted.

Translated into cisco terms this means that the FIB is just a partial
routing database, enough to start the search and otherwise handle
communications in the neighborhood (no more than X router-hops, maybe
AS-hops away). When the destination is located you keep that  
information

for a while in case there are more packets going to the same place,
similar to what you do with traditional route-cache.




On Oct 17, 2005, at 6:47 AM, Mikael Abrahamsson wrote:


Well, let's try to turn the problem on its head and see if that's
clearer; Imagine an internet where only your closest neighbors
know you exist. The rest of the internet knows nothing about you,
except there are mechanisms that let them "track you down" when
necessary. That is very different from today's full-routing-table.




Re: And Now for Something Completely Different (was Re: IPv6 news)

2005-10-17 Thread Fred Baker


is that anything like using, in Cisco terms, a "fast-switching cache"  
vs a "FIB"?


On Oct 17, 2005, at 6:47 AM, Mikael Abrahamsson wrote:
Well, let's try to turn the problem on its head and see if that's  
clearer; Imagine an internet where only your closest neighbors  
know you exist. The rest of the internet knows nothing about you,  
except there are mechanisms that let them "track you down" when  
necessary. That is very different from today's full-routing-table.


Re: IPv6 news

2005-10-12 Thread Fred Baker


I am told that some of the access providers are starting to deploy in  
the US, or at least that's what they tell us. Macs and Linux come  
with v6 enabled, and Longhorn will as well. So with any luck we will  
squeak through this one.


On Oct 12, 2005, at 12:13 PM, Randy Bush wrote:


four years from now, when marissa can't get v4 space from an
rir/lir and so gets v6 space, she will not be able to use 99%
of the internet because no significant number of v4 end hosts
will have bothered to be v6 enabled because there was no
perceived market for it.


Re: "Cisco gate" and "Meet the Fed" at Defcon....

2005-08-01 Thread Fred Baker



Cisco, are you listening?


Cisco is in fact listening.  Cisco, like other companies, generally  
does not release security notices until enough information exists to  
allow customers to make a reasonable determination as to whether or  
not they are at risk and how to mitigate possible risk.


The issue underlying the suit wasn't the disclosure of the security  
issue, although we would have rather worked that according to the  
usual processes. From what the corporate legal folks tell me, their  
issue was the disclosure of Cisco intellectual property. Note that it  
wasn't just Cisco that felt the presentation was out of order; Lynn's  
employer became "former" because it also felt that way. I'll refer  
you to the legal brief for anything further on that, but I would  
really like to see this discussion begin to resemble an informed one.


By this misbehavior you are seriously discouraging researchers from  
releasing info to you. They will suspect you'll sit on the exploit  
for months and not tell anyone (as you did with this one). They'll  
be afraid you'll try to kill the messenger (as you did with this one).


For the record, the vulnerability was first detected by Cisco in  
internal testing, not by outside researchers, and Cisco's approach to  
this has been in accordance with the RDF. Part of that process, at  
Cisco, is to develop work-arounds or updated code that corrects the  
exploit, testing it, and getting it into the field. Releasing the  
information on the exploit before that point exposes the ISPs to a  
vulnerability that they can't fix, or puts them into a scramble to  
download code that they haven't been able to gain confidence on. I  
should imagine that the various operators on this list would prefer  
to get the fix in place before the vulnerability is exposed rather  
than playing catchup while their pants are around their ankles.


We very much try to work with people that are willing to work with  
us. We aren't very impressed by people that expose the industry to  
danger.


Re: E-mail Authentication Implementation Summit 2005?

2005-07-13 Thread Fred Baker


On Jul 13, 2005, at 2:38 PM, Brad Knowles wrote:

Does anyone know if any of these presentations are available anywhere?


Eric would have to point to his presentation, but you can find the 
internet drafts at the following:


  http://www.ietf.org/internet-drafts/draft-allman-dkim-base-00.txt
  "DomainKeys Identified Mail (DKIM)", Eric Allman, 12-Jul-05,
  

  http://www.ietf.org/internet-drafts/draft-allman-dkim-ssp-00.txt
  "DKIM Sender Signing Policy", Eric Allman, 12-Jul-05,
  

  http://www.ietf.org/internet-drafts/draft-lyon-senderid-core-01.txt
  "Sender ID: Authenticating E-Mail", Jim Lyon, Meng Weng Wong, 
19-May-05,

  

  http://www.ietf.org/internet-drafts/draft-lyon-senderid-pra-01.txt
  "Purported Responsible Address in E-Mail Messages", Jim Lyon, 
19-May-05,

  


Re: mh (RE: OMB: IPv6 by June 2008)

2005-07-08 Thread Fred Baker


On Jul 8, 2005, at 9:49 AM, Jay R. Ashworth wrote:
A machine behind a NAT box simply is not visible to the outside world, 
except for the protocols you tunnel to it, if any.   This *has* to 
vastly reduce it's attack exposure.


It is true that the exposure is reduced, just as it is with a stateful 
firewall. The technical term for this is "security by obscurity". Being 
obscure, however, is not the same as being invisible or being 
protected. It just means that you're a little harder to hit. When a NAT 
sets up an association between an "inside" and "outside" address+port 
pair, that constitutes a bridge between the inside device and the 
outside world. There are ample attacks that are perpetrated through 
that association.


A NAT, in that context, is a stateful firewall that changes the 
addresses, which means that the end station cannot use IPSEC to ensure 
that it is still talking with the same system on the outside. It is 
able to use TLS, SSH, etc as transport layer solutions, but those are 
subject to attacks on TCP such as RST attacks, data insertion, 
acknowledge hacking, and so on, and SSH also has a windowing problem 
(on top of TCP's window, SSH has its own window, and in large 
delay*bandwidth product situations SSH's window is a performance 
limit). In other words, a NAT is a man-in-the-middle attack, or is a 
device that forces the end user to expose himself to man-in-the-middle 
attacks. A true stateful firewall that allows IPSEC end to end doesn't 
expose the user to those attacks.


Re: OMB: IPv6 by June 2008

2005-06-30 Thread Fred Baker


On Jun 30, 2005, at 5:37 PM, Todd Underwood wrote:
where is the service that is available only on IPv6? i can't seem to 
find it.


You might ask yourself whether the Kame Turtle is dancing at 
http://www.kame.net/. This is a service that is *different* (returns a 
different web page) depending on whether you access it using IPv6 or 
IPv4. You might also look at IP mobility, and the routing being done 
for the US Army's WIN-T program. Link-local addresses and some of the 
improved flexibility of the IPv6 stack has figured in there.


There are a number of IPv6-only or IPv6-dominant networks, mostly in 
Asia-Pac. NTT Communications runs one as a trial customer network, with 
a variety of services running over it. The various constituent networks 
of the CNGI are IPv6-only. There are others.


Maybe you're saying that all of the applications you can think of run 
over IPv4 networks as well as IPv6, and if so you would be correct. As 
someone else said earlier in the thread, the reason to use IPv6 has to 
do with addresses, not the various issues brought up in the marketing 
hype. The reason the CNGI went all-IPv6 is pretty simple: on the North 
American continent, there are ~350M people, and ARIN serves them with 
75 /8s. In the Chinese *University*System*, there are ~320M people, and 
the Chinese figured they could be really thrifty and serve them using 
only 72 /8s. I know that this is absolutely surprising, but APNIC 
didn't give CERNET 72 /8s several years ago when they asked. I really 
can't imagine why. The fact that doing so would run the IPv4 address 
space instantly into the ground wouldn't be a factor would it? So CNGI 
went where they could predictably get the addresses they would need.


Oh, by the way. Not everyone in China is in the Universities. They also 
have business there, or so they tell me...


The point made in the article that Fergie forwarded was that Asia and 
Europe are moving to IPv6, whether you agree that they need to or not, 
and sooner or later we will have to run it in order to talk with them. 
They are business partners, and we *will* have to talk with them. We, 
the US, have made a few my-way-or-the-highway stands in the past, such 
as "who makes cell phones" and such. When the rest of the world went a 
different way, we wound up being net consumers of their products. 
Innovation transferred to them, and market share.


The good senator is worried that head-in-the-sand attitudes like the 
one above will similarly relegate us to the back seat in a few years in 
the Internet.


Call him "Chicken Little" if you like. But remember: even Chicken 
Little is occasionally right.


Re: Calculating Jitter

2005-06-10 Thread Fred Baker


you saw marshall's comment. If you're interested in a moving average, 
he's pretty close.


If I understood your question, though, you simply wanted to quantify 
the jitter in a set of samples. I should think there are two obvious 
definitions there.


A statistician would look, I should think, at the variance of the set. 
Reaching for my CRC book of standard math formulae and tables, it 
defines the variance as the square of the standard deviation of the 
set, which is to say


   sum of ((x(i) - xmean)^2)
   -------------------------
             n - 1

where the n values x(i) are the members of the set, xmean is the mean 
of those values, and n is the number of x(i).


A sample set with a larger standard deviation or variance than another 
contains more jitter.


In this context, the other thought that comes to mind is the variation 
from nominal. If the speed-of-light delay between here and there is M, 
the jitter might be defined as the root-mean-square difference from M, 
which would be something like


   sum of ((x(i) - xmin)^2)
   ------------------------
            n - 1

with the same variables except that xmin is the least value in the set.
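
As a worked example, here are both definitions computed over a made-up 
sample set (Python; a square root is taken in the second so the result 
is literally a root-mean-square figure):

```python
# Hypothetical one-way delay samples, in milliseconds.
samples = [101.2, 99.8, 103.5, 100.1, 98.9, 104.0]
n = len(samples)

# First definition: the sample variance, i.e. the square of the
# standard deviation of the set.
mean = sum(samples) / n
variance = sum((x - mean) ** 2 for x in samples) / (n - 1)

# Second definition: variation from nominal, with the smallest sample
# xmin standing in for the fixed (speed-of-light) delay M.
xmin = min(samples)
rms = (sum((x - xmin) ** 2 for x in samples) / (n - 1)) ** 0.5

print(variance, rms)
```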


Re: BCP regarding TOS transparancy for internet traffic

2005-05-25 Thread Fred Baker



On May 25, 2005, at 10:39 AM, Sam Stickland wrote:

While it's true that IP is end-to-end, are fields such as TOS and DSCP 
meant to be end to end? A case could be argued that they are used by 
the actual forwarding devices en route in order to make QoS or even 
routing decisions, and that the end devices shouldn't actually rely on 
the values of these fields?


It used to be that TCP would reset a session if the TOS byte changed in 
mid-session. That certainly sounds like an end-to-end expectation.


Re: BCP regarding TOS transparancy for internet traffic

2005-05-25 Thread Fred Baker


RFC 2474 permits the DSCP to be over-written on ingress to a network. 
RFC 3168 gives rules for over-writing the ECN flags.


US NCS currently has a filing before the FCC (unless FCC has recently 
responded) asking for a DSCP value that would be set only by 
NCS-authorized users, never over-written, and that ISPs would either 
ignore or observe in order to give that traffic preferential service. 
Yes, I have made my comments about that too.


I guess the question is why, just because you don't want to offer a 
specific service, you want to prevent other ISPs from offering a stated 
service to a user? There are some fairly good-sized ISPs offering 
services based on the TOS octet. Are you trying to drive users to them? 
Any customer that is setting EF on VoIP service is certainly expecting 
that to go end to end.
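
For what it's worth, here is roughly what such a customer is doing at 
the endpoint: a minimal Python sketch, with a documentation-range 
placeholder address. Whether the marking survives to the far end is 
exactly the question at issue:

```python
import socket

# EF is DSCP 46; the DSCP occupies the upper six bits of the former
# TOS octet, so the byte handed to IP_TOS is 46 << 2 = 0xB8.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
sock.sendto(b"voice frame", ("192.0.2.1", 5004))  # placeholder peer
```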



On May 25, 2005, at 4:08 AM, Mikael Abrahamsson wrote:
I've been debating whether the TOS header information must be left 
untouched by an ISP, or if it's ok to zero/(or modify) it for internet 
traffic. Does anyone know of a BCP that touches on this?


My thoughts was otherwise to zero TOS information incoming on IXes, 
transits and incoming from customers, question is if customers expect 
this to be transparent or not.


Reading 
 it 
looks like in the Diffserv terminology, it's ok to do whatever one 
would want.


Any feedback appreciated.

--
Mikael Abrahamsson    email: [EMAIL PROTECTED]



Re: FCC To Require 911 for VoIP

2005-05-01 Thread Fred Baker

On May 2, 2005, at 2:34 AM, Jay R. Ashworth wrote:
How about an anycast address implement(ed|able) by every network
provider that would return a zipcode?
That would be fine in the US, and with some extension in Canada and a 
few other countries.

No, I think the service would have to be built using some real 
definition of location (such as GPS) which is offered by the phone to 
the called party on user command, and the called party then refers that 
to some clearinghouse that gets it to the right emergency service 
office.


Re: Smallest Transit MTU

2004-12-29 Thread Fred Baker
At 01:43 PM 12/29/04 -0500, Joe Abley wrote:
Is there an RFC that clearly states: "The internet needs to transit 1500 
byte packets without fragmentation."??
Not to my knowledge, and since the hordes of users mentioned above 
present a clear, deployed counter-example it seems unlikely that one will 
be written.
There are any number of RFCs that state that implementations SHOULD 
implement the capability to receive a 1500 byte payload on just about any 
link, and that the interface MTU or MRU SHOULD default to no smaller than 
that number.

That said, RFC 1042 ("Standard for the transmission of IP datagrams over 
IEEE 802 networks.") notes that

   Note that the MTU for the Ethernet allows a 1500 octet IP datagram,
   with the MTU for the 802.3 network allows only a 1492 octet IP
   datagram.
For an RFC to require an MTU of 1500 octets without fragmentation would 
imply requiring it to not use IEEE 802.3 framing, which is to say, not use 
802.1d (used to be p/q) prioritization. It would also require one to not 
use CPE-CPE tunnels (which often means "VPNs"), as such tunnels add an 
additional IP header (at least), reducing the MTU within the tunnel by that 
amount.

To be honest, I think we should be carefully considering Mathis' newer 
approach to Path MTU, described in 
http://www.psc.edu/~mathis/MTU/pmtud/draft-mathis-pmtud-method-00.txt and a 
more recent but expired internet draft.

   The general strategy of the new algorithm is to start with a small
   MTU and probe upward, testing successively larger MTUs by probing
   with single packets.  If the probe is successfully delivered, then
   the MTU is raised.  If the probe is lost, it is treated as an MTU
   limitation and not as a congestion signal.
The table in 5.7.1 appears wrong (it lists 1492 as an MSS candidate, but 
neither 1460 nor 1500, so I find it rather incomprehensible). But the 
concept seems reasonable. 
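
A rough sketch of the probing idea, under stated assumptions: 
Linux-specific socket options, a placeholder destination, and a real 
implementation would wait for an end-to-end acknowledgement of each 
probe rather than only catching local send errors:

```python
import socket

def probe(dest, size):
    # Send one DF-marked datagram of `size` bytes. Returning True only
    # means it left this host unfragmented; a real prober must confirm
    # delivery (e.g. via an echo) before raising the MTU estimate.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER,
                 socket.IP_PMTUDISC_DO)  # set DF; never fragment locally
    try:
        s.sendto(b"\x00" * size, (dest, 7))  # port 7 (echo), placeholder
        return True
    except OSError:      # EMSGSIZE: larger than an already-known limit
        return False
    finally:
        s.close()

# Search upward between a known-good floor and a speculative ceiling,
# treating a lost probe as an MTU limit rather than congestion, as the
# draft describes.
good, bad = 1024, 9000
while bad - good > 1:
    mid = (good + bad) // 2
    if probe("192.0.2.1", mid):
        good = mid
    else:
        bad = mid
print("largest deliverable payload:", good)
```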


Re: Botnet pointer

2004-12-20 Thread Fred Baker
At 09:40 PM 12/20/04 +, Fergie (Paul Ferguson) wrote:
Here's a decent pointer:
 http://en.wikipedia.org/wiki/Botnet
- ferg
that is a very good pointer. 


Re: Botnet pointer

2004-12-20 Thread Fred Baker
At 02:01 PM 12/20/04 -0800, william(at)elan.net wrote:
Can somebody also share good definition of "BOT" and "BOTNET" for glossary 
and description of 2-4 lines? Should I also list it as synonymous with 
Zombie (bot being more hacker-oriented use and zombie being more toward 
spammer-oriented use)?
It is not really synonymous, but the distinction is subtle. How about:
"bot": derivative of "robot". An application on an infected computer used 
for orchestrated attacks or for distributed generation of spam, often 
distributed in or with viruses or other malware. Similar to "zombie", which 
is an older usage specific to distributed denial of service attacks.

"botnet": a set of bots that may be controlled as a single service, and 
which may be leased or sold to a user as a unit. 


Re: New Computer? Six Steps to Safer Surfing

2004-12-20 Thread Fred Baker
At 09:14 PM 12/18/04 -0500, Sean Donelan wrote:
I wouldn't rely on software firewalls.  At the same store you buy your 
computer, also buy a hardware firewall.  Hopefully soon the motherboard 
and NIC manufacturers will start including built-in hardware firewalls.
I guess my question is: why rely on a firewall at all? Yes, a firewall at 
ingress to a network will reduce the probability or effectiveness of an 
attack from "outside" in many cases. But in many cases the infection is 
from "inside", and in any event something in the network or in the end 
system at the edge of the network can only really address link and network 
layer attacks effectively.

I personally would far rather presume that the end system is responsible 
for its own security, and that there are security considerations at every 
layer. Reduce the incidence and track attacks with network-based tools, but 
in the final analysis build the applications and stack code to withstand 
attacks. 


Re: tli back at cisco

2004-12-09 Thread Fred Baker
At 11:17 AM 12/09/04 -0500, Richard Irving wrote:
That, or they finally got the nail out of the door, from his last 
resignation.
there were two nails in that board... It's a long story... But the 
interesting part was that all those toys actually fit into Dr. Bug... 


Re: is reverse dns required? (policy question)

2004-12-01 Thread Fred Baker
At 08:56 AM 12/01/04 -0800, Greg Albrecht wrote:
are we obligated, as a user of ARIN ip space, or per some BCP, to provide 
ad-hoc reverse dns to our customers with-out cost, or without financial 
obligation.
As noted, reverse DNS is pretty universally considered a normal operating 
practice, "part of the service". There is no IETF BCP that tells you 
anything about your business obligations, as in "without cost". However, I 
think you are correct that it is an important service to your customers.

One consideration: you might very strongly consider a mechanism (such as 
dynamic DNS) that enables you to not only provide names corresponding to 
addresses assigned, but to limit your names to addresses that are in actual 
use. The way my ISP-of-sorts (Cisco) sets up my home office address space, 
I have a name for each address in the block whether it is used or not, and 
if someone were to spoof one of the unused addresses the fact would not be 
noticed. Dynamic DNS or something like it would help with that.
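
A minimal sketch of that suggestion, assuming a hypothetical lease 
list and zone name, generating reverse names only for addresses 
actually handed out:

```python
import ipaddress

# Invented DHCP lease list: only these addresses are in use, so only
# these get PTR records; a spoofed, unassigned address then stands out
# by having no reverse name at all.
leases = {
    "203.0.113.10": "alice-laptop",
    "203.0.113.12": "bob-desktop",
}

for addr, host in leases.items():
    ptr = ipaddress.ip_address(addr).reverse_pointer
    print(f"{ptr}. IN PTR {host}.example.net.")
```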



Re: BBC does IPv6 ;) (Was: large multi-site enterprises and PI prefix [Re: who gets a /32)

2004-11-27 Thread Fred Baker
At 11:54 PM 11/26/04 -0800, Owen DeLong wrote:
IMHO, the rules that qualify someone for an AS number should qualify them
for a prefix. It need not be a truly long prefix, but larger than a /48.
I agree with the first part, but, a /48 is 65,536 64 bit subnets.  Do you
really think most organizations need more than that?  Or, by larger than
a /48 did you mean a longer prefix (smaller allocation/assignment)?
The important part there is "most networks".
What about a network that is not one of "most networks"? My point is that 
one size does not fit all, and that "most" != "all". So I think we need a 
policy that applies in the general case, a policy that applies in specific 
cases where the general case doesn't work, and a rule for saying which 
policy applies. 



Re: BBC does IPv6 ;) (Was: large multi-site enterprises and PI prefix [Re: who gets a /32)

2004-11-26 Thread Fred Baker
At 10:09 PM 11/26/04 -0800, Fred Baker wrote:
IMHO, the rules that qualify someone for an AS number should qualify them 
for a prefix. It need not be a truly long prefix, but larger than a /48.
Reading my own email - that isn't clear.
I think the length of the prefix given to a PI edge network should be 
permitted to be larger than a /48 (perhaps a /40 or a /35), but need not be 
as large as is given to an ISP (/30). Willing enough to take the /30, but I 
think the statistics likely don't support it.
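
For scale, the /64-subnet counts behind those candidate prefix lengths 
(trivial Python arithmetic):

```python
# Number of /64 subnets available under each candidate prefix length.
for prefix in (48, 40, 35, 30):
    print(f"/{prefix}: {2 ** (64 - prefix):,} subnets")
# /48: 65,536   /40: ~16.8M   /35: ~537M   /30: ~17.2G
```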

My reasoning: well, I work for an outfit that has an AS number, meaning 
that it has a certain number of ISPs. It is also an edge network. It has 
~35K employees and VPNs a subnet to each employee's home. It also has lots 
of office space, labs, and so on. It has DMZs to the Internet in Australia, 
the Netherlands, and a couple of places in the US (at least, might be more).

Provider-dependent addressing is a nightmare for such. Now imagine a truly 
large company, like GE or IBM.

Hence, I will argue that more than 65K subnet prefixes should be allowable 
to such an edge network. How many more - well, I'll leave that to someone 
else to argue.

The thing that brings me out here is the "one size fits all" reasoning that 
seems to roll around this community so regularly. "Multihoming should 
always use provider-independent addressing" and "Multihoming should always 
use provider-dependent addressing" are the statements in this debate. Well, 
you know what? The argument relating to someone's home while he is 
switching from DSL to Cable Modem access service isn't the same as the 
argument for a multinational corporation. I don't see any reason that the 
solution has to be the same either.

So here's my proposal. If you qualify for an AS number (have a reasonable 
business plan, clueful IT staff, and a certain number of ISPs one connects 
with), you should also be able to get a PI prefix.

And if you don't qualify for that, you should probably go provider-dependent. 



Re: BBC does IPv6 ;) (Was: large multi-site enterprises and PI prefix [Re: who gets a /32)

2004-11-26 Thread Fred Baker
At 11:31 PM 11/25/04 -0800, Owen DeLong wrote:
I think the policy _SHOULD_ make provisions for end sites and 
circumstances like this, but, currently, I believe it _DOES NOT_ make such 
a provision.
I understand the policy in the same way. That said, I believe that the 
policy is wrong.

IMHO, the rules that qualify someone for an AS number should qualify them 
for a prefix. It need not be a truly long prefix, but larger than a /48.

My logic is this. We grant someone an AS number not because we think they 
are an ISP, but because we believe that they are sufficiently well 
connected that using BGP to advertise their routing is necessary, and 
running BGP to a number of neighbors implies an AS number. Well, if you are 
sufficiently well-connected to need to advertise your routing in BGP, 
ingress policing is going to materially hurt you in your use of said 
multiple ISPs. You want an address that you can safely originate from, and 
you want to be able to use routing to multihome in the other direction.

Note that this isn't an argument that all multi-homing should be done using 
provider-independent addressing. This is an argument that some should. 
Multihoming for outfits that don't qualify for an AS number still looks for 
a solution that is implementable by mortals and uses provider-dependent 
addresses. 



Re: BCP38 making it work, solving problems

2004-10-19 Thread Fred Baker
At 01:11 PM 10/19/04 +0200, JP Velders wrote:
As it was "in the old days": first clean up your own act and then start 
pointing at others that they're doing "it" wrong.
hear hear... But Paul knows and in fact does that. He is pointing out the 
difficulty of getting people to do basic things that are for their own 
benefit.

For example, how many ISPs use TCP MD5 to limit the possibility of a 
BGP/TCP connection getting hijacked or disrupted by a ddos attack? But this 
has been in the code since ~1990, and was put there because of a fairly 
serious and specific attack that was made on Internet routing, and benefits 
primarily the ISP that enables the procedure in that it knows that its 
routes are coming to it from systems it has chosen to trust.

Ingress filters help the ISP that installs them, in that a certain class of 
attacks are prevented among customers of the ISP. Would it be better if all 
ISPs and all edge networks put appropriate filters in place? Absolutely. 
But even if they do not, the ISP saves itself that much trouble.

Where ingress filters don't help, of course, is when the attacks come from 
an apparently-legitimate address. Then we are off to other tools. 



Re: BCP38 making it work, solving problems

2004-10-13 Thread Fred Baker
At 12:01 PM 10/13/04 +0200, Iljitsch van Beijnum wrote:
Trusting the source when it says that its packets aren't evil might be 
sub-optimal. Evaluation of evilness is best left up to the receiver.
Likely true. Next question is whether the receiver can really determine 
that in real time. For some things, yes, but for many things it is not as 
obvious to me. 



Re: BCP38 making it work, solving problems

2004-10-11 Thread Fred Baker
At 08:39 AM 10/12/04 +0530, Suresh Ramasubramanian wrote:
Yes I know that multihoming customers must make sure packets going out to 
the internet over a link match the route advertised out that link .. but 
stupid multihoming implementations do tend to ensure that lots of people 
will yell loudly, and yell loudly enough for several tickets to be 
escalated well beyond tier 1 NOC support desks, for ISPs to kind of think 
twice before they put uRPF filters in ..
You might want to take a glance at RFC 3704, which looks at a number of the 
issues that have been raised in this thread, including the routing of 
traffic to appropriate enterprise egress points.

In my heart of hearts, I would like enterprises to (as a default) match 
layer 2 and layer 3 addresses on the originating LAN, and 
quarantine-as-busted any machine that sends an address other than assigned 
on an interface. It seems that the few cases where a device legitimately 
sends multiple addresses are exception cases that can be handled 
separately. Handling it that close to the source solves the problem for 
everyone.

Practically, that is difficult. If you think getting all of the service 
providers (who wind up having to fix ddos attacks, and pay for bandwidth 
and services related to ddos attacks) to manage networks well is difficult, 
consider the prospect of getting all the edge networks to do so...

A simple solution is, as someone suggested, to impose an idiot tax and bill the 
customers for doing stupid things. Egress traffic filtering in the 
enterprise is relatively simple for the average enterprise - it has at most 
a few prefixes and can write a simple ACL on its upstream router. It can 
use the ACL either to discard offending packets or to route them to the 
right egress. It is also relatively simple for the average enterprise's 
ISP: it knows what prefix(es) it agreed to accept traffic from and can 
write an ACL.

It gets a little dicier when the customer is a lower tier ISP. In that 
case, there are potentially many prefixes, and they change more frequently. 
That is the argument for something like uRPF. No, it is not a "sure fix", 
but it handles that case more readily, both in the sense of being a fast 
lookup and in the sense of maintaining the table. The problem is, of 
course, in the asymmetry of routing - it has to be used with the brain 
engaged.
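
To make the uRPF mechanics concrete: a strict-mode check just asks 
whether the best route back to the source points out the interface the 
packet arrived on. A sketch with an invented FIB, which also shows why 
asymmetry demands care:

```python
import ipaddress

# Invented FIB: prefix -> interface the route points out of.
fib = {
    ipaddress.ip_network("192.0.2.0/24"):    "cust1",
    ipaddress.ip_network("198.51.100.0/24"): "cust2",
    ipaddress.ip_network("0.0.0.0/0"):       "upstream",
}

def best_iface(src):
    # Longest-prefix match, as a FIB lookup would do.
    matches = [net for net in fib if src in net]
    return fib[max(matches, key=lambda net: net.prefixlen)]

def strict_urpf_ok(src, arrival_iface):
    return best_iface(ipaddress.ip_address(src)) == arrival_iface

print(strict_urpf_ok("192.0.2.55", "cust1"))   # True: route points back
print(strict_urpf_ok("203.0.113.9", "cust1"))  # False: dropped, which is
# wrong if that source legitimately multihomes and routes asymmetrically
```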

From an ISP perspective, I would think that it would be of value to offer 
*not* ingress filtering (whether by ACL or by uRPF) as a service that a 
customer pays for. Steve Bellovin wrote an April Fool's note suggesting an 
"Evil Bit" (ftp://ftp.rfc-editor.org/in-notes/rfc3514.txt); I actually 
think that's not such a dumb idea if implemented as a "Not Evil" flag, 
using a DSCP or extending the RFC 3168 codes to include such, as Steve 
Crocker has been suggesting. Basically, a customer gets ingress filtered 
(by whatever means) and certain DSCP settings are treated as "someone not 
proven to have their act together". Should a ddos happen, such traffic is 
dumped first. But if the customer pays extra, their traffic is marked "not 
evil", protected by the above, and ingress filtering may be on or off 
according to the relevant agreement. The agreement would need to include a 
provision to the effect that once a ddos is traced in part to the customer, 
their traffic is marked as "evil" for a period of time afterwards. What the 
customer is paying for, if you will, is the ability to do their thing 
during a ddos in a remote part of the network, such as delivering a service 
to a remote peer.

Address spoofing is just one part of the ddos problem; to nail ddos, we 
also need to police a variety of application patterns. One reason I like 
the above is that it gives us a handle on what traffic might possibly be 
"not evil" - someone has done something that demonstrates that it is from a 
better managed source. 



Re: who's next?

2004-09-08 Thread Fred Baker
At 04:29 PM 09/08/04 +, Paul Vixie wrote:
i guess this is progress.  the press keeps bleating about stopping spam 
from being received -- perhaps if they start paying attention to how it 
gets sent and how many supposedly-legitimate businesses profit from the 
sending, there could be some flattening of the spam growth curve.
I think both approaches have value.
Consider this by comparison to the "war against drugs". One line of 
reasoning says "if there is no supply, there will be no market". Another 
line of reasoning says "if there is no demand, there will be no market". A 
third line of reasoning notes that with purveyance of such come a multitude 
of other social ills, and focuses on the "businessmen" in the trade: "if 
there is no way for supply and demand to meet, the market will fail."

Believe it or not, there is a market for spam. One person in a zillion 
actually replies to email claiming to be from the survivors of deposed 
African officials, resulting in them being able to fleece another sucker. 
If nobody replied, sooner or later they would get tired of sending the 
stuff. And yes, if they stop sending the stuff (perhaps as a result of 
going to jail), we won't have to deal with it. And oh by the way, a way to 
help them decide to not send it is to disable them from getting access to 
the net.

So, I say, consider spam to be fraud or theft of service when it is, and 
apply anti-fraud or anti-theft laws to the spammers. Consider it to be a 
costly nuisance to the receiver, and provide a way for him to inexpensively 
and reliably sort wheat from chaff (signatures and reputation services, 
which are not about "I signed my email so I'm cool" as much as they are 
about "I really am who I say I am, and you may apply policies as you see 
fit to deal with my email"), preferably without having to actually see the 
chaff. And yes, deny the spammer access.

Where this gets interesting is with so-called "legitimate spam". At least 
under US law, if you and I have a relationship as buyer and seller, the 
seller has a right to advertise legitimate services and products to the 
buyer. I travel in a vertical direction when I get spam from my employer; I 
have sat down with the designated spammer and have been told in detail that 
as a user of that equipment I am a buyer and they have a right to advertise 
to me, and take pretty serious steps to target and not annoy their 
audience. There is a part of me that wants to sight in an 18" gun using 
their building as a target; there is another part of me that notes the 
photography in magazines and on billboards and the little jingles that go 
by on TV and the radio, and notices that legitimate advertising is in fact 
treated as (ulp!) legitimate.

In that case, they're not going to jail, and no ISP is going to refuse them 
service. I just want the ability to say "but I choose to not receive email 
from the designated spammer, and need to be able to reliably identify email 
from him in order to enforce that policy." 



Re: OT- need a new GSM provider

2004-09-02 Thread Fred Baker
At 06:04 PM 09/02/04 -0700, Joe Rhett wrote:
> Also note due to fraud mitigation, most phones only allow you to call
> within the country you are in or back to the home country, all the while
> charging you an exhorbitant price.
Um, sorry but I've never seen this.  I used to world-roam on AT&T, and now
I do it with T-Mobile and never had any such drama.
ditto. color me clueless, but AT&T worked once upon a time, and T-Mobile 
works quite well for me now. 



Re: Senator Diane Feinstein Wants to know about the Benefits of P2P

2004-08-30 Thread Fred Baker
At 05:03 PM 08/30/04 -0400, Sean Donelan wrote:
I've always wondered what really makes P2P different from anything else on 
the Internet?  From the service provider's point of view, users accessing 
CNN.COM is a peer-to-peer activity between the user and CNN.  From the 
service provider's point of view, Microsoft and Akamai are peer-to-peer 
activities.
From an internet-layer SP viewpoint, you're absolutely correct - p2p 
traffic is just that, traffic.

If you are an ISP that offers specific application services (for example, 
you market a VoIP service), you have just walked into the world that 
enterprise managers have lived in for quite some time. Suddenly it is not 
about "can the packet cross my network"; it is about "does the application 
I market behave as specified, and if not, what do I need to do to make it 
do so." At that point, you lump applications into a few buckets that you 
care about and one you don't, and think about their various implications.

And then there is the question of an ISP or enterprise that pays by the 
pound for its upstream service. It needs to be able to correlate its costs 
with its incomes. I have had a number of ISPs approach me for solutions 
that will allow them to do so, either by figuring out who is originating 
traffic to bill and send them a bill, or figuring out who is originating 
traffic they can't bill for and make it be less - without completely 
enraging the customer and making them change providers.

I have been approached by some providers who think p2p might be a service 
they want to offer, and therefore be able to manage, so that they can both 
bill for it and offer other services in a cost-effective manner. For them, 
it's part of the stuff they want to treat in a friendly manner...

It all depends on what kind of provider you are...  



Re: Senator Diane Feinstein Wants to know about the Benefits of P2P

2004-08-30 Thread Fred Baker
I think you just tripped across the difference between a user and an SP. 
SPs don't generally have 28 KBPS dial links between them and their 
upstream, and folks that have 28 KBPS dial uplinks don't generally host 
Akamai servers. Assuming that just because you have effectively-infinite 
bandwidth and effectively-zero delay everyone perforce must enjoy that is a 
bit of a leap...

This kind of a "you're different and therefore wrong" mismatch has made 
complete hash out of quite a variety of discussions concerning user 
experience and user requirements on the Internet. Please listen carefully 
when someone talks about having limited rate access. The assumptions that 
are obviously true in your (SP) world are completely irrelevant in theirs. 
If you want their opinions - and this opinion was explicitly requested - 
you have to respect them when they are offered, not just bash them as 
different from your experience.

At 01:21 PM 08/30/04 -0600, Byron L. Hicks wrote:
Not true.  For those of us who host Akamai servers, we could download SP2 
with no problems.  We did not need P2P, or MSDN.  In fact, I would be very 
reluctant to trust a Windows update downloaded via P2P.

--
Byron L. Hicks
Network Engineer
NMSU ICT
On 8/30/04 12:43 PM, "Jeff Wheeler" <[EMAIL PROTECTED]> wrote:
My two cents:
When Windows XP SP2 was released the only way to get it (for those of us 
not part of MSDN at least) was via P2P.  The same has been true for 
countless other large but important software releases on various 
platforms (particularly ones like Linux that aren't backed by huge 
corporations with tons of bandwidth to host these sorts of files).

Point is?  P2P is extremely valuable for the timely and cost-effective 
delivery of critical updates to the masses.

--
 Jeff Wheeler
 Postmaster, Network Admin
 US Institute of Peace
On Aug 30, 2004, at 2:27 PM, Henry Linneweh wrote:
So I would like some professional expert opinion to give her on this 
issue since it will effect the copyright inducement bill. Real benefits 
for production and professional usage of this technology.

 -Henry



Re: WashingtonPost computer security stories

2004-08-15 Thread Fred Baker
At 12:58 PM 08/15/04 -0700, Alexei Roudnev wrote:
SuSe linux can be installed on the first attempt by Windoze-only gurus (I
did such experiment) and never require any command line interaction (except
if you decide to run something complicated).
My then-16-year-old son did the same, building a dual boot, and prefers it. 
The only reason he runs Windows rather than Linux is the games; Linux's 
windows-API software needs a lot of work. 



Re: AOL fixing Microsoft default settings

2003-10-28 Thread Fred Baker
At 11:13 AM 10/23/2003, Sean Donelan wrote:
How many other ISPs intend to follow AOL's practice and use their 
connection support software to fix the defaults on their customer's 
Windows computers?
Interesting question from several angles. Here's the flip side. Our 
corporate IT department likes to magically download software and 
configuration changes to us without telling us, which occasionally means 
that someone in the middle of a presentation to a customer has something 
pop up and say "I have installed new software on your laptop, 
because you need it and it is good for you. Click here to reboot."

um, ...

timing is everything, right?

Personally, I don't ask my ISP or my IT department to randomly change the 
configuration of my computer. I am very happy for them to suggest changes, 
but *if* I agree, *I* want to install them when it is convenient for *me*, 
not when it is convenient for *them*.

That said, this particular configuration change is an improvement... 



Re: New mail blocks result of Ralsky's latest attacks?

2003-10-11 Thread Fred Baker
At 09:07 AM 10/10/2003, Steven M. Bellovin wrote:
Out of curiosity, has anyone tried turning this over to law
enforcement?  It's another form of hacking, but the money trail back
through the spammers might provide enough evidence for prosecution.
From my read, it sounds sufficient in its own right. This month's 
Communications of the ACM has an interesting article on addressing it as 
"trespass to chattels": attacking someone's property in a manner that 
reduces their ability to use it, or using it without their permission for 
purposes they don't agree with. Breaking into a server and using it for a 
purpose its owner doesn't authorize sounds a lot like trespass to 
chattels to me.

It might be interesting for him to wake up in the morning with 50 lawsuits 
at his door, each seeking damages equal to the money spent horsing around 
with him. 



Re: Wired mag article on spammers playing traceroute games with

2003-10-09 Thread Fred Baker
At 03:00 PM 10/9/2003, [EMAIL PROTECTED] wrote:
We seem to be slowly transforming the network into more and more just a 
network of port 80 boxes.  :(  Perhaps the Internet really is going to end 
up being just the Web, not through evil intervention, but by our own 
well-intentioned efforts.
I imagine port 25 will still be active... 



RE: Wired mag article on spammers playing traceroute games with trojaned boxes

2003-10-09 Thread Fred Baker
At 09:01 AM 10/9/2003, McBurnett, Jim wrote:
Can broadband ISPs require a Linksys, D-Link, or other
broadband router without too many problems?
The router vendors would like that to happen :^) 



RE: What *are* they smoking?

2003-09-15 Thread Fred Baker
At 04:18 PM 9/15/2003, Jeroen Massar wrote:
Even worse is that you can't verify the existence of domain names under
.net any more, as every .net domain suddenly has an A record and can
then be used for spamming...
so, every spammer in the world spams VeriSign. The downside of this is ... 
what? I don't remember... 
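
[Editor's note: for anyone who wants to test for this kind of breakage, a 
minimal probe sketch in Python. The random-label approach is illustrative, 
not a vetted tool: resolve a label that almost certainly isn't registered; 
if it resolves anyway, the TLD is wildcarding and NXDOMAIN no longer works 
as an existence test.]

    import socket
    import uuid

    def tld_has_wildcard(tld):
        """Probe a TLD with a random, almost certainly unregistered label.
        If it resolves, NXDOMAIN can't be used as an existence test."""
        probe = "%s.%s" % (uuid.uuid4().hex, tld)
        try:
            socket.gethostbyname(probe)
            return True      # the wildcard answered; every name "exists"
        except socket.gaierror:
            return False     # normal NXDOMAIN behavior

    print(tld_has_wildcard("net"))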



Re: East Coast outage?

2003-08-14 Thread Fred Baker
At 01:31 PM 8/14/2003 -0700, Aaron D. Britt wrote:
I just lost 80 circuits (Voice and Data), across multiple states on the
East Coast in the last 10 minutes.  Is there a Northeast power outage or
fiber cut that anyone knows about?
CNN speaks:
Major power outage hits New York, other large cities
Thursday, August 14, 2003 Posted: 6:28 PM EDT (2228 GMT)

        NEW YORK (CNN) -- A major power outage simultaneously struck dozens
        of cities in the United States and Canada late Thursday afternoon.

http://www.cnn.com/2003/US/08/14/power.outage/index.html 



RE: How much longer..

2003-08-14 Thread Fred Baker
At 12:53 PM 8/13/2003 -0500, Ejay Hire wrote:
I don't care what defective operating system a worm uses.
Yes. Let's recall that the first worm on the net was a sendmail worm and 
attacked UNIX systems. I'm no friend of Windows either, but a little 
humility is in order. Windows is attacked because it is ubiquitous, not 
because it is vulnerable. If the whole world ran Linux, the attacks would 
be on Linux machines. 



draft-savola-bcp38-multihoming-update-nn.txt

2003-07-19 Thread Fred Baker
Pekka and I have been discussing the impact of ingress filters on 
multihomed networks - which may be ISPs or edge networks, and may have an 
arbitrary number of upstream ISPs.

We wonder what your thoughts might be regarding 
http://www.ietf.org/internet-drafts/draft-savola-bcp38-multihoming-update-00.txt. 
With your concurrence, we would like to recommend it for BCP status, as an 
update to BCP 38. Our questions are at two levels: the philosophical and 
the detailed. If you have significant comments calling for a change of 
text, it would help us if you proposed text.
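
[Editor's note: as background for reviewers, a minimal sketch in Python of 
the ingress check BCP 38 asks an upstream to perform. The prefixes are 
documentation prefixes standing in for real assignments; the comments note 
the multihoming wrinkle the draft is concerned with.]

    import ipaddress

    # Prefixes this upstream believes the customer legitimately sources
    # (made-up documentation prefixes, for illustration only).
    CUSTOMER_PREFIXES = [
        ipaddress.ip_network("192.0.2.0/24"),
        ipaddress.ip_network("198.51.100.0/24"),
    ]

    def ingress_permit(src):
        """BCP 38 check: accept a packet only if its source address falls
        inside a prefix the customer is known to originate."""
        addr = ipaddress.ip_address(src)
        return any(addr in net for net in CUSTOMER_PREFIXES)

    # A multihomed site that sends via ISP A using an address assigned by
    # ISP B fails this check at A unless A's filter also lists B's prefix,
    # which is the case the draft sets out to handle.
    print(ingress_permit("192.0.2.17"))     # True: legitimate source
    print(ingress_permit("203.0.113.5"))    # False: dropped as spoofed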