Re: Abuse response [Was: RE: Yahoo Mail Update]

2008-04-16 Thread Joe Abley



On 16 Apr 2008, at 13:33 , Simon Waters wrote:

Ask anyone in the business "if I want a free email account, who do I
use?" and you'll get the almost universal answer: Gmail.


I think amongst those not in the business there are regional trends,  
however. Around this neck of the woods (for some reason) the answer  
amongst your average, common-or-garden man in the street is yahoo!.


I don't know why this is. But that's my observation.

There are also a large number of people using Y! mail who don't
realise they're using Y! mail, because the telco or cableco they use
for access has outsourced mail operations to Y!, and there are still
(apparently) many people who assume that access providers and mail
providers should match. In those cases the choice of mail provider may
have far more to do with the price of TV channel selections or the
availability of long-distance voice plans than anything to do with
e-mail.


So, with respect to your other comments, the correlation between
technical/operational competence and customer choice seems weak, from my
perspective. If there's competition, it may not be driven by service
quality, and the conclusion that well-staffed abuse desks promote
subscriber growth is, I think, faulty.



Joe



Re: Abuse response [Was: RE: Yahoo Mail Update]

2008-04-15 Thread Joe Provo

On Tue, Apr 15, 2008 at 12:31:33PM +0530, Suresh Ramasubramanian wrote:
 
 On Tue, Apr 15, 2008 at 11:55 AM, Paul Ferguson [EMAIL PROTECTED] wrote:
[snip]
   It should be simple -- not require a freeking full-blown standard.
 
 It's a standard. And it allows automated parsing of these complaints.
 And automation increases processing speeds by orders of magnitude..
 you don't have to wait for an abuse desker to get to your email and
 pick it out of a queue with hundreds of other report emails, and
 several thousand pieces of spam [funny how [EMAIL PROTECTED] type addresses
 end up in so many spammer lists..]

It cannot be overstated that even packet pushers and code grinders
who care get stranded in companies where abuse handling is deemed 
by management to be a cost center that only saps resources.  Paul, 
you are doing a serious disservice to those folks specifically, and
to working around such suit-induced damage in general, by dismissing 
any steps involving automation.

Cheers,

Joe

-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE


Re: Abuse response [Was: RE: Yahoo Mail Update]

2008-04-15 Thread Joe Abley



On 15 Apr 2008, at 11:22 , William Herrin wrote:


There's a novel idea. Require incoming senior staff at an email
company to work a month at the abuse desk before they can assume the
duties for which they were hired.


At a long-previous employer we once toyed with the idea of having  
everybody in the (fairly small) operations and architecture/ 
development groups spend at least a day on the helpdesk every month.


The downside to such a plan from the customer's perspective is that  
I'm pretty sure most of us would have been really bad helpdesk people.  
There's a lot of skill in dealing with end-users that is rarely  
reflected in the org chart or pay scale.



Joe


Re: the O(N^2) problem

2008-04-14 Thread Joe Greco

 The risk in a reputation system is collusion.

/One/ risk in a reputation system is collusion.

Reputation is a method to try to divine the legitimacy of mail based on
factors other than whether or not a recipient authorized a sender to send
mail.  To a large extent, the majority of the focus on fighting spam has
been on trying to do this sort of divination by coding clever things into
machines, but it should be clear to anyone who has ever had legitimate mail
mysteriously go missing, undelivered, or delayed that the process isn't
without the occasional false result.

There are both positive (whitelist) and negative (DNSBL, local This-Is-Spam,
etc) reputation lists, and there are pros and cons to each.
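
For concreteness, a negative reputation list is usually just a DNS zone
keyed on the reversed IP address of the connecting host.  A minimal sketch
of a lookup (assuming a Spamhaus-style zone that answers listed addresses
with a 127.0.0.x A record; the zone name and function are illustrative
only):

    import socket

    def dnsbl_listed(ip, zone="zen.spamhaus.org"):
        # Query <reversed-octets>.<zone>; an A record in 127.0.0.0/8 means
        # "listed", NXDOMAIN means "not listed".
        name = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            answer = socket.gethostbyname(name)
        except socket.gaierror:
            return False
        return answer.startswith("127.")

Positive (whitelist) lookups work the same way; only the policy attached
to a hit differs.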

Consider, for example, Kevin Day's example of the Group-B-Objectionable
scenario.  This is a nonobvious issue that can subvert the reputation of
a legitimate mailer.

On the flip side, what about someone who actually wants to receive mail
that an organization such as Spamhaus has deemed to be hosted on a spammy
IP?  (And, Steve and the Spamhaus guys, this is in no way a criticism of
the job you guys do; the Internet owes you a debt of gratitude for doing
a nearly impossible job in such a professional manner.)

There are risks inherent with having any third party, specifically
including the ISP or mailbox provider, trying to determine the nature of
the communications, and filtering on that basis.

This is why I've been talking about paradigms that eliminate the need for
third parties to do analysis of e-mail, and rely on the third parties to
simply implement systems that allow the recipient to control mail.  There
are a number of such systems that are possible.

However, the current systems of divining legitimacy (reputation, filtering,
whatever) generate results that loosely approximate the typical mail that
the average user would wish to receive.  Users have been trained to consider
errors in the process as acceptable, and even unavoidable.

It's ridiculous when a system like Hotmail silently bitbuckets e-mail from
a sender (and IP) that has never spammed and has ONLY sent transactional
e-mail and customer support correspondence, and when the individually
composed non-HTML REPLIES to customer inquiries are eaten by Hotmail, or
tossed in the spam folder.  Nice.  (I know, we all have our stories.)

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Problems sending mail to yahoo?

2008-04-14 Thread Joe Greco

  You want to define standards?  Let's define some standard for 
  establishing permission to mail.  If we could solve the 
  permission problem, then the filtering wouldn't be such a 
  problem, because there wouldn't need to be as much (or maybe 
  even any).  As a user, I want a way to unambiguously allow a 
  specific sender to send me things, spam filtering be 
  damned.  I also want a way to retract that permission, and 
  have the mail flow from that sender (or any of their 
  affiliates) to stop.
  
  Right now I've got a solution that allows me to do that, but 
  it requires a significant paradigm change, away from 
  single-e-mail-address.
 
 In general, your permission to send idea is a good one to
 put in the requirements list for a standard email architecture.
 But your particular solution stinks because it simply adds
 another bandage to a creaky old email architecture that is 
 long past its sell-by date.

Yes.  I'm well aware of that.  My requirements list included that my
solution be able to actually /fix/ something with /today's/ architecture;
this is a practical implementation to solve a real problem, which was
that I was tired of vendor mail being confused for spam.

So, yes, it stinks when compared to the concept of a shiny new mail
architecture.  However, it currently works and is successfully whitelisting
the things I intended.  I just received a message from a tool battery
distributor that some batteries I ordered months ago are finally shipping.
It was crappy HTML, and I would normally have completely missed it -
probably even forgetting that we had ordered them, certainly not
recognizing the From line it came from.  It's a success story.  Rare.

You are welcome to scoff at it as being a stinky bandaid on a creaky mail
system.

 IMHO, the only way that Internet email can be cleaned up is
 to create an entirely new email architecture using an entirely
 new set of protocols with entirely new port assignments and 
 no attempt whatsoever to maintain reverse compatibility with
 the existing architecture. That is a fair piece of work and
 requires a lot of people to get their heads out of the box
 and apply some creativity. Many will say that the effort is
 doomed before it starts because it is not compatible with
 what went before. I don't buy that argument at all.
 
 In any case, a new architecture won't come about until we have
 some clarity of the requirements of the new architecture. And
 that probably has to be hashed out somewhere else, not on any
 existing mailing list.

If such a discussion does come about, I want people to understand that
user-controlled permission is a much better fix than arbitrary spam
filtering steps.  There's a lot of inertia in the traditional spam 
filtering advice, and a certain amount of resistance to considering
that the status quo does not represent e-mail nirvana.

Think of it as making that unsubscribe at the bottom of any marketing
e-mail actually work, without argument, without risk.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Problems sending mail to yahoo?

2008-04-13 Thread Joe Greco

 Gak, there isn't even a standard code which means MAILBOX FULL or
 ACCOUNT NOT RECEIVING MAIL other than MAILBOX FULL, maybe by choice,
 maybe non-payment, as specific as a site is comfortable with.
 
 That's what I mean by standards and at least trying to focus on what
 can be done rather than the endless retelling of what can't be done.

I would have thought it was obvious, but to see this sort of enlightened
ignorance(*) suggests that it isn't:  The current methods of spam filtering
require a certain level of opaqueness.

Having just watched the gory hashing through of how $MEGAISP deals with
filtering on another list, I was amazed that the prevailing stance among
mailbox hosters is that they don't really care about principles, and that
they mostly care about whether or not users complain.

For example, I feel very strongly that if a user signs up for a list, and
then doesn't like it, it isn't the sender's fault, and the mail isn't spam.
Now, if the user revokes permission to mail, and the sender keeps sending,
that's covered as spam under most reasonable definitions, but that's not
what we're talking about here.

To expect senders to have psychic knowledge of what any individual recipient
is or is not going to like is insane.  Yet that's what current expectations
appear to boil down to.

So, on one hand, we have filtering by heuristics, which requires a
level of opaqueness, because if you respond "567 BODY contained
www.sex.com, mail blocked" to their mail, you have given the spammer
feedback they can use to get the spam through.

And on the other hand, we have filtering by statistics, which requires
a large userbase and probably a "This Is Spam" button, where you use a
complaint-driven model to reject mail; but this is severely complicated 
because users have also been trained to report as spam any other mail that
they don't want, which definitely includes even things that they've opted
in to.

So you have two opaque components to filtering.  And senders are
deliberately left guessing - is the problem REALLY that a mailbox is full,
or am I getting greylisted in some odd manner?

Filtering stinks.  It is resource-intensive, time-consuming, error-prone,
and pretty much an example of something that is desperately flagging that
the current e-mail system is failing.

You want to define standards?  Let's define some standard for establishing
permission to mail.  If we could solve the permission problem, then the
filtering wouldn't be such a problem, because there wouldn't need to be as
much (or maybe even any).  As a user, I want a way to unambiguously allow
a specific sender to send me things, spam filtering be damned.  I also
want a way to retract that permission, and have the mail flow from that
sender (or any of their affiliates) to stop.

Right now I've got a solution that allows me to do that, but it requires a
significant paradigm change, away from single-e-mail-address.
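
To make that concrete, here is a minimal sketch of what a
recipient-controlled permission check might look like at RCPT time; the
tagged addresses, the table, and the reply texts are hypothetical, not a
description of the system in use here:

    # Hypothetical per-sender permission table, consulted at RCPT TO time.
    # Each tagged address was handed out to exactly one sender, so revoking
    # permission is just flipping (or deleting) one entry.
    permissions = {
        "user+toolvendor@example.net": "allow",     # opted in
        "user+newsletter@example.net": "revoked",   # permission withdrawn
    }

    def rcpt_check(recipient):
        state = permissions.get(recipient.lower(), "unknown")
        if state == "allow":
            # No content filtering needed: the tag itself is the permission.
            return "250 OK"
        return "550 5.7.1 Permission to mail this address has been revoked"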

Addressing standards of the sort you suggest is relatively meaningless
in the bigger picture, I think.  Nice, but not that important.

(*) It's enlightened to hope for standards that would allow remote sites
to have some vague concept of what the problem is.  I respect that.
It just seems to be at odds with current reality.

 More specific and standardized SMTP failure codes are just one example
 but I think they illustrate the point I'm trying to make.
 
 Oh yeah here's another (ok maybe somewhere this is written down), how
 about agreeing on contact mailboxes like we did with
 [EMAIL PROTECTED]

Yeah, like that's actually implemented or useful at a majority of domains.

 Is it [EMAIL PROTECTED] or [EMAIL PROTECTED] or [EMAIL PROTECTED] or
 [EMAIL PROTECTED] (very commonly used) or [EMAIL PROTECTED] Who cares? But
 let's pick ONE, stuff it in an RFC or BCP and try to get each other to
 conform to it.

Having defined methods for contacting people OOB would be nice - IF anyone
actually cared to try to resolve individual problems (and often/mostly they
don't).  Don't expect them to want to, because for the most part, they do
not.  Sigh.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Problems sending mail to yahoo?

2008-04-13 Thread Joe Greco

 On April 13, 2008 at 14:24 [EMAIL PROTECTED] (Joe Greco) wrote:
   I would have thought it was obvious, but to see this sort of enlightened
   ignorance(*) suggests that it isn't:  The current methods of spam filtering
   require a certain level of opaqueness.
 
 Indeed, that must be the problem.
 
 But then you proceed to suggest:
 
   So, on one hand, we have filtering by heuristics, which requires a
   level of opaqueness, because if you respond "567 BODY contained
   www.sex.com, mail blocked" to their mail, you have given the spammer
   feedback they can use to get the spam through.
 
 Giving the spammer feedback?
 
 In the first place, I think s/he/it knows what domain they're using if
 they're following bounces at all. Perhaps they have to guess among
 whether it was the sender, body string, sending MTA, but really that's
 about it and given one of those four often being randomly generated
 (sender) and another (sender MTA) deducible by seeing if multiple
 sources were blocked on the same email...my arithmetic says you're
 down to about two plus or minus.

In many (even most) cases, that is only useful if you're sending a lot of
mail towards a single source, a variable which introduces yet *another*
ambiguity, since volume is certainly a factor in blocking decisions. 
Further, if you look at the average mail message, there may be more than
a single domain in a single message, for reasons such as services that do
open tracking (1x1/invisible pixels, etc), branding, and many others.
Further, once you're being blocked, the block may be implemented by IP
even though some other metric triggered it.

Having records that allow a sender to go back and unilaterally determine 
what was amiss may not be considered desirable by the receiving site.
 
 But even that is naive since spammers of the sort anyone should bother
 worrying about use massive bot armies numbering O(million) and
 generally, and of necessity, use fire and forget sending techniques.

Do you mean to suggest that your definition of spammer only includes
senders using massive bot armies?  That'd be mostly pill spammers,
phishers, and other really shady operators.  There are whole other classes
of spam and spammer.

 Perhaps you have no conception of the amount of spam the major
 offenders send out. It's on the order of 100B/day, at least.

I have some idea.  However, I will concede that my conception of current
spam volumes is based mostly on what I'm able to quantify, which is the
~4-8GB/day of spam we receive here.

 That's why you and your aunt bessie and all the people on this list
 get the same exact spam. Because they're being sent out in the
 hundreds of billions. Per day.

Actually, we see significant variation in spam received per address.

 Now, what exactly do you base your interesting theory that spammers
 analyze return codes to improve their techniques for sending through
 your own specific (not general) mail blocks? Sure they do some
 bayesian scrambling and so forth but that's general and will work on
 zillions of sites running spamassassin or similar so that's worthwhile
 to them.

I'm sure that if you were to talk to the Postmasters at any major ISP/mail
provider, especially ones like AOL, Hotmail, Yahoo, and Earthlink, you
would discover that they're familiar with businesses which claim to be
in the business of enhancing deliverability.

However, what I'm saying was pretty much the inverse of the theory that you
attribute to me:  I'm saying that receivers often do NOT provide feedback
detailing the specifics of why a block happened.  As a matter of fact, I 
think I can say that the most common feedback provided in the mail world 
would be notice of listing on a DNS blocking list, and this is primarily 
because the default code and examples for implementation usually provide 
some feedback about the source (or, at least, source DNSBL) of the block.

You'll see generic guidance such as the Yahoo! error message that started
this thread ("temporarily deferred due to user complaints", IIRC), but 
that's not particularly helpful, now, is it?  It doesn't tell you which
user, or how many complaints, etc.

 But what, exactly, do you base your interesting theory that if a site
 returned 567 BODY contained www.sex.com that spammers in general and
 such that it's worthy of concern would use this information to tune
 their efforts?

Because there are businesses out there that claim to do that very sort of
thing, except that they do it by actually sending mail and then checking
canary e-mail boxes on the receiving site to measure effectiveness of their
delivery strategy.  Failures result in further tuning.

Being able to simply analyze error messages would result in a huge boost
for their effectiveness, since they would essentially be able to monitor
the deliverability of entire mail runs, rather than assuming that the
deliverability percentage of their canaries, plus any open tracking

Re: Problems sending mail to yahoo?

2008-04-13 Thread Joe Greco

Again - to them.

But they're hardly the only class of spammers.  I realize it's convenient
to ignore that fact for the purposes of this discussion, since it supports
your argument while ignoring the fact that other spammers would mine a
lot of useful information out of such messages.

 But any such return codes should be voluntary,

And they are.  To the best of my knowledge, you can put pretty much any
crud you like after the ### , and if anybody wanted to return this data,
they would be doing it today.

 particularly the
 details, and a receiving MTA should be free to respond with as much or
 as little information as they are comfortable with right down to the
 big red button, 421 it just ain't happenin' bub!
 
 But it was just an example of how perhaps some standards, particularly
 regarding mail rejection, might help operationally. I'm not pushing
 the particular example I gave of extending status codes.
 
 Also, again I can't claim to know what you're working on, but there
 are quite a few disposable address systems in production which use
 various variations such as one per sender, one per message, change it
 only when you want to, etc. But maybe you have something better, I
 encourage you to pursue your vision.

No.  The difference with my solution is simply that it solves all the
problems I outlined while also solving the problem I started with -
finding a clean way to exempt senders from the anti-spam checks
that they frequently fell afoul of.

But then again, I am merely saying that capable solutions exist, but
that they all seem to require some paradigm shift.

 And, finally, one quote:
 
 I didn't say I had a design.  Certainly there are solutions to the
 problem, but any solution I'm aware of involves paradigm changes of
 some sort, changes that apparently few are willing to make.
 
 Gosh if you know of any FUSSP* whose only problem is that it requires
 everyone on the internet to abandon SMTP entirely or similar by all
 means share it.

That was kind of the nifty part of my solution:  it didn't require any
changes at any sender's site.  By accepting some tradeoffs, I was able
to compartmentalize all the permission issues as functions controlled by
the receiving site.

 Unfortunately this is a common hand-wave, oh we could get rid of spam
 overnight but it would require changes to (SMTP, usually) which would
 take a decade or more to implement, if at all!
 
 Well, since it's already BEEN a decade or more that we've all been
 fussing about spam in a big way maybe we should have listened to
 people with a secret plan to end the war back in 1998. So I'm here to
 tell ya I'll listen to it now and I suspect so will a lot of others.

If we cannot have a flag day for the e-mail system - and obviously, duh,
we cannot have a flag day for the e-mail system, because that's too big a
paradigm shift - then we have to look at other changes.

My solution is a comprehensive solution to the permission problem, which is
a root issue in the fight against spam, but it is based on a paradigm shift
that ISP's are unwilling to underwrite - dealing with per-correspondent
addresses.  This has challenges associated with it, primarily related to
educating users how to use it, and then getting users to commit to actually
doing so.

That's not TOO big a paradigm shift, since it's completely
backwards-compatible and managed at the receiving site without any support
required anywhere else in the e-mail system, but since service providers
aren't interested in it, it is a non-starter.  Were it interesting, it
wouldn't be that tough to support relatively transparently via plugins into
modern browsers and mail clients such as Firefox and Thunderbird.  But it
is a LARGE paradigm shift, and it doesn't even solve every problem with the
e-mail system.

I am unconvinced that there aren't smaller potential paradigm shifts that
could be made.  However...

It is exceedingly clear to me that service providers prefer to treat the
spam problem in a statistical manner.  It offers fairly good results (if
you consider ~90%-99% accuracy to be acceptable) but doesn't actually do
anything for users who need e-mail that they can actually rely on.  It's
cheap (relatively speaking) and the support costs can be made to be cheap.

 * FUSSP - Final and Ultimate Solution to the Spam Problem.

Shoot all the spammers?  :-)

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Problems sending mail to yahoo?

2008-04-13 Thread Joe Greco

 On Sun, Apr 13, 2008, Joe Greco wrote:
  browsers such as Firefox and Thunderbird.  But it is a LARGE paradigm
  shift, and it doesn't even solve every problem with the e-mail system.
  
  I am unconvinced that there aren't smaller potential paradigm shifts that
  could be made.  However...
 
 There already has been a paradigm shift. University students (college for 
 you
 'merkins) use facebook, myspace (less now, thankfully!) and IMs as their
 primary online communication method. A number of students at my university
 use email purely because the university uses it for internal systems
 and communication, and use the above for everything else.
 
 I think you'll find that we are the paradigm shift that needs to happen.
 The younger people have already moved on. :)

I believe this is functionally equivalent to the "block 25 and consider
SMTP dead" FUSSP.

It's worth noting that each newer system is being systematically attacked
as well.  It isn't really a solution, it's just changing problem platforms.
The abuse remains.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Problems sending mail to yahoo?

2008-04-13 Thread Joe Greco

 On Sun, Apr 13, 2008, Joe Greco wrote:
  I believe this is functionally equivalent to the block 25 and consider
  SMTP dead FUSSP.
  
  It's worth noting that each newer system is being systematically attacked
  as well.  It isn't really a solution, it's just changing problem platforms.
  The abuse remains.
 
 Yes, but the ownership of the problem is better defined for messages -inside-
 a system.
 
 If you've got tens of millions of users on your IM service, you can start
 using statistical techniques on your data to identify likely spam/ham,
 and (very importantly) you are able to cut individual users off if they're
 doing something nasty. Users can't fake their identity like they can
 with email. There's no requirement for broadcasting messages a la email
 lists (which btw is touted as one of those things that break when various
 anti-spam verify-sender proposals come up.)
 
 Besides - google has a large enough cross section of users' email to do
 these tricks. I'd love to be a fly on the wall at google for just this
 reason ..

Few of these systems have actually been demonstrated to be invulnerable
to abuse.  As a matter of fact, I just saw someone from LinkedIn asking
about techniques for mitigating abuse.  When it's relatively cheap (think:
economically attractive in excessively poor countries with high
unemployment) to hire human labor, or even to engineer CAPTCHA evasion
systems where you have one of these wonderful billion-node-botnets
available, it becomes feasible to get your message out.  Statistically,
there will be some holes.  You only need a very small success rate.

The relative anonymity offered by e-mail is a problem, yes, but it is only
one challenge to the e-mail architecture.  For example, given a realistic
way to revoke permission to mail, having an anonymous party send you a
message (or even millions of messages) wouldn't be a problem, because you
could stop the flow whenever you wanted.  The problem is that there isn't
a commonly available way to revoke permission to mail.

I've posted items in places where e-mail addresses are likely to be
scraped or otherwise picked up and later spammed.  What amazed me was
how cool it was that I could actually post a usable e-mail address and
receive comments from random people, and then when the spam began to
roll in, I could simply turn off the address, and it doesn't even hit
the mailservers.  That's the power of being able to revoke permission.
The cost?  A DNS query and answer anytime some spammer tries to send 
to that address.  But a DNS query was happening anyways...

The solution I've implemented here, then, has the interesting quality
of moving ownership of the problem of permission within our systems,
without also requiring that all correspondents use our local messaging
systems (bboard, private messaging, whatever) or having to do ANY work
to figure out what's spam vs ham, etc.  That's my ultimate reply to 
your message, by the way.

Since it is clear that many other networks have no interest in stemming
the flood of trash coming from their operations, and clearly they're
not going to be interested in permission schemes that require their
involvement, I'd say that solutions that do not rely on other networks
cooperating to solve the problem bear the best chance of dealing with
the problem.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Problems sending mail to yahoo?

2008-04-11 Thread Joe Abley



On 10 Apr 2008, at 23:58 , Rob Szarka wrote:


At 02:23 PM 4/10/2008, you wrote:
Maybe we all should do the same to them until they quit spewing out  
all the
Nigerian scams and the like that I've been seeing from their  
servers lately!




If there were a coordinated boycott, I would participate. Yahoo is  
*by far* the worst single abuser of our server among the  
legitimate email providers.


Having done my own share of small-scale banging-of-heads-against-yahoo
recently, the thing that surprised me was how many people with non-yahoo
addresses had their mail handled by yahoo. It turns out that if
Y! doesn't want to receive mail from me, suddenly I can't send mail to
anybody in my extended family, or to most people I know in the town
where I live. These involve domains like ROGERS.COM and
BTINTERNET.COM, and not just the obvious Y! domains.


In my more paranoid moments I have wondered how big a market share Y!  
now has in personal e-mail, given the number of large cable/telcos who  
have outsourced mail handling to them for their residential products.  
Once you pass a certain threshold, the fact that Y! subscribers are  
the only people who can reliably deliver mail to other Y! subscribers  
provides a competitive advantage and a sales hook to make the resi  
mail empire even larger. At that point it makes no sense for Y! to  
expend effort to accept *more* mail from subscribers of other services.


To return to the topic at hand, you may already have outsourced the  
coordination of your boycott to Yahoo!, too! They're already not  
accepting your mail. There's no need to stop sending it! :-)



Joe



Re: Problems sending mail to yahoo?

2008-04-11 Thread Joe Greco

  The lesson one should get from all this is that the ultimate harm of
  spammers et al is that they are succeeding in corrupting the idea of a
  standards-based internet.
  
  Sites invent policies to try to survive in a deluge of spam and
  implement those policies in software.
  
  Usually they're loathe to even speak about how any of it works either
  for fear that disclosure will help spammers get around the software or
  fear that someone, maybe a customer maybe a litigious marketeer who
  feels unfairly excluded, will hold their feet to the fire.
  
  So it's a vast sea of security by obscurity and standards be damned.
  
  It's a real and serious failure of the IETF et al.
 
 Has anyone ever figured out what percentage of a connection to the
 internet is now overhead i.e. spam, scan, viruses, etc? More than 5%? If
 we put everyone behind 4to6 gateways would the spam crush the gateways
 or would the gateways stop the spam? Would we add code to these
 transitional gateways to make them do more than act like protocol
 converters and then end up making them permanent because of benefit?
 Perhaps there's more to transitioning to a new technology after all?
 Maybe we could get rid of some of the cruft and right a few wrongs while
 we're at it?

We(*) can't even get BCP38 to work.  Ha.

Having nearly given up in disgust on trying to devise workable anti-spam
solutions that would reliably deliver requested/desired mail to my own
mailbox, I came to the realization that the real problem with the e-mail
system is so fundamental that there's no trivial way to save it.  

Permission to mail is implied by simply knowing an e-mail address.  If I
provide [EMAIL PROTECTED] to a vendor in order to receive updates to an
online order, the vendor may retain that address and then mail it again at
a later date.  Worse, if the vendor shares the address list with someone
else, we eventually have the "millions of addresses on a CD" problem - and
I have no idea who was responsible.

Giving out tagged addresses gave a somewhat useful way to track back who
was responsible, but didn't really offload the spam from the mail
server.

I've solved my spam problem (or, more accurately, am in the process of
slowly solving my spam problem) by changing the paradigm.  If the problem 
is that knowing an e-mail address acts as the key to the mail box, then 
giving the same key to everyone is stupid.

For vendors, I now give them a crypto-signed e-mail address(*2).  By 
making the key a part of the DNS name, I can turn off reception for a 
bad sender (anyone I don't want to hear from anymore!) or a sender who's
shared my address with their affiliates (block two for the price of
one!)  All other validated mail makes it to my mailbox without further
spam filtering of any kind.
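
I've deliberately omitted the details of the scheme in use here (see the
footnote below), but as a generic illustration only, one way to build
crypto-signed, per-sender addresses is to put a short HMAC of the sender
label into the DNS name, so a wildcard MX can validate any tag statelessly
and individual tags can be shut off; the key, names, and truncation length
are all made-up examples:

    import hashlib, hmac

    SECRET = b"site-private-key"      # hypothetical, known only to the receiver

    def tagged_address(sender, user="user", domain="example.net"):
        # e.g. tagged_address("acme-batteries") ->
        #      "user@acme-batteries.3f2a9c1d.example.net"
        sig = hmac.new(SECRET, sender.encode(), hashlib.sha256).hexdigest()[:8]
        return "%s@%s.%s.%s" % (user, sender, sig, domain)

    def tag_is_valid(address):
        # The wildcard MX recomputes the HMAC; forged or altered tags fail,
        # and a specific tag can be blacklisted (or its DNS removed) to
        # revoke one sender without touching anything else.
        user, rest = address.split("@", 1)
        sender, sig, domain = rest.split(".", 2)
        want = hmac.new(SECRET, sender.encode(), hashlib.sha256).hexdigest()[:8]
        return hmac.compare_digest(sig, want)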

This has been exceedingly effective, though doing it for random consumers
poses a lot of interesting problems.  However, it proves to me that one
of the problems is the permission model currently in use.

The spam problem is potentially solvable, but there's a failure to figure
out (at a leadership level) paradigm changes that could actually make a 
difference.  There's a lot of resistance to changing anything about the
way e-mail works, and understandably so.  However, these are the sorts of
things that we have to contemplate and evaluate if we're really interested
in making fundamental changes that reduce or eliminate abuse.

(*) fsvo we that doesn't include AS14536.

(*2) I've omitted a detailed description of the strategy in use because
 it's not necessarily relevant to NANOG.  I'm happy to discuss it
 with anyone interested.  It has technical merit going for it, but it
 represents a significant divergence from current practice.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: spam wanted :)

2008-04-10 Thread Joe Greco

 Randy Bush [EMAIL PROTECTED] writes:
 
  this would be a straight sample, before filtering, ip address
  blocking, etc.
 
  i realize this is difficult, as all of us go through much effort to
  reject this stuff as early as possible.  but it will be a sample
  unbiased by your filtering techniques.
 
 How do you classify email as spam without adding bias?

You can always claim bias.

There's often been debate, even in the anti-spam community, about what
"spam" actually means.  The meaning has repeatedly been diluted over the
years, to the point where some now define it merely as "that which we do
not want", an attitude supported in code by some service providers who
now sport great big Easy Buttons (with apologies to any office supply
chain) labelled "This Is Spam".

Even so, there's some complexity - users making typos, for example.

However, the easiest way to avoid bias is to look for a mail stream that
has the quality of not having any valid recipients.  There will be, of 
course, someone who will disagree with me that mail sent to an address 
that hasn't been valid in years, and whose parent domain was unresolvable
in DNS for at least a year, is spam.  Still, it's as unbiased as I can
reasonably imagine being.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Problems sending mail to yahoo?

2008-04-10 Thread Joe Greco

 Barry Shein wrote:
  Is it just us or are there general problems with sending email to
  yahoo in the past few weeks? Our queues to them are backed up though
  they drain slowly.
  
  They frequently return:
  
 421 4.7.0 [TS01] Messages from MAILSERVERIP temporarily deferred due 
  to user complaints - 4.16.55.1; see 
  http://postmaster.yahoo.com/421-ts01.html
  
  (where MAILSERVERIP is one of our mail server ip addresses)
 
  Just wondering if this was a widespread problem or are we just so
  blessed, and any insights into what's going on over there.
 
 I see this a lot also and what I see causing it is accounts on my servers
 that don't opt for spam filtering and they have their accounts here set to
 forward mail to their yahoo.com accounts - spam and everything then gets
 sent there - they complain to yahoo.com about the spam and bingo - email
 delays from here to yahoo.com accounts

We had this happen when a user forwarded a non-filtered mail stream from
here to Yahoo.  The user indicated that no messages were reported to Yahoo
as spam, despite the fact that it's certain some of them were spam.

I wouldn't trust the error message completely.  It seems likely that a jump
in volume may trigger this too, especially of an unfiltered stream.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Any tool or theorical method on detecting number of computer behind a NAT box?

2008-04-07 Thread Joe Shen

hi,

   Sharing internet access bandwidth between multiple
computers is common today. 

   Usually, the bandwidth sharer buys a little router
with NAT/PAT function. After connecting that box to an
ADSL/LAN access link, multiple computers can share a
single access link.

   I heard some companies provide products for detecting
the number of computers behind a NAT/PAT box. 

   Is there any paper or document on how such products
work? Where could I find them?
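
(The classic public reference for this is Steven M. Bellovin's paper "A
Technique for Counting NATted Hosts" (IMW 2002), which clusters the IP
header ID field of packets leaving the NAT into independent, roughly
sequential runs - one run per host, at least for stacks that use a
sequential IP ID counter.  A rough sketch of that idea, assuming the scapy
library and a capture taken on the access link; the gap threshold and
function name are invented:)

    # Rough Bellovin-style host count from IP ID runs (assumes scapy installed).
    from scapy.all import rdpcap, IP

    def estimate_hosts(pcap_file, nat_ip, gap=64):
        chains = []                   # one chain of increasing IP IDs per host
        for pkt in rdpcap(pcap_file):
            if IP not in pkt or pkt[IP].src != nat_ip:
                continue
            ip_id = pkt[IP].id
            for chain in chains:      # attach to the nearest plausible run
                if 0 < (ip_id - chain[-1]) % 65536 <= gap:
                    chain.append(ip_id)
                    break
            else:
                chains.append([ip_id])
        return len(chains)            # crude estimate of hosts behind the NAT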


  Joe




Re: Nanog 43/CBX -- Hotel codes etc

2008-04-05 Thread Joe Greco

 Anyway -- I regard most of those warnings as quite overblown.  I mean,
 on lots of subway cars you stand out more if you don't have white
 earbuds in, probably attached to iPhones.  Midtown is very safe.  Your
 laptop bag doesn't have to say laptop on it to be recognized as such,
 but there are so many other people with laptop bags that you won't stand
 out if you have one.  Subway crime?  The average daily ridership is
 about 5,000,000; there are on average 9 felonies a day on the whole
 system. To quote a city police official I met, that makes the subways
 by far the safest city in the world.

That's probably an abuse of statistics.

 Yes, you're probably at more risk if you look like a tourist.  But there
 are lots of ways to do that, like waiting for a walk sign before
 crossing the street...  (Visiting Tokyo last month was quite a shock to
 my system; I had to unlearn all sorts of things.)

Looking and acting like you belong is good advice in most circumstances.
Act like the other monkeys.  If you don't give someone reason to question
you, they probably won't.  Wait, oh, that's the guide book for infiltrating
facilities ...  ;-)

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: fiber switch for gig

2008-04-01 Thread Joe Greco

 Speaking of running gig long distances, does anyone on the list have
 suggestions on a 8 port L2 switch with fiber ports based on personal
 experience?  Lots of 48 port gig switches have 2-4 fiber uplink ports, but
 this means daisy-chains instead of hub/spoke.  Looking for a central switch
 for a star topology to home fiber runs that is cost effective and works.
 
 Considering:
 DLink DXS-3326GSR
 NetGear GSM7312
 Foundry SX-FI12GM-4
 Zyxel GS-4012F
 
 I realize not all these switches are IEEE 802.3ae, Clause 49 or IEEE 802.3aq
 capable.

Cost effective would probably be the Dell 6024F.  We have some of these
and they've worked well, but we're not making any use of their advanced
features.  Can be had cheaply on eBay these days.  Has basic L3
capabilities (small forwarding table, OSPF), built in redundant power
supply, etc.  If you're fine with a non-ae/aq switch, these are worth
considering.

16 SFP plus 8 shared SFP/copper make it a fairly flexible device.

You did say cost effective, right?  :-)

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: rack power question

2008-03-25 Thread Joe Abley



On 25 Mar 2008, at 09:11 , Dorn Hetzel wrote:

It would sure be nice if along with choosing to order servers with  
DC or AC power inputs one could choose air or water cooling.


Or perhaps some non-conductive working fluid instead of water.  That  
might not carry quite as much heat as water, but it would surely  
carry more than air and if chosen correctly would have more benign  
results when the inevitable leaks and spills occur.


The conductivity of (ion-carrying) water seems like a sensible thing  
to worry about. The other thing is its boiling point.


I presume that the fact that nobody ever brings that up means it's a  
non-issue, but it'd be good to understand why.


Seems to me that any large-scale system designed to distribute water  
for cooling has the potential for hot spots to appear, and that any  
hot spot that approaches 100C is going to cause some interesting  
problems.


Wouldn't some light mineral oil be a better option than water?


Joe



Re: Transition Planning for IPv6 as mandated by the US Govt

2008-03-17 Thread Joe Abley



On 17-Mar-2008, at 06:07, [EMAIL PROTECTED]  
[EMAIL PROTECTED] wrote:



If you're providing content or network services on v6 and you
don't have both a Teredo and 6to4 relay, you should - there
are more v6 users on those two than there are on native
v6[1]. Talk to me and I'll give you a pre-built FreeBSD image
that does it, boot off compact flash or hard drives. Soekris
(~$350USD, incl. power supply and CF card), or regular
server/whatever PC.


Pardon me for interfering with your lucrative business here,
but anyone contemplating running a Teredo relay and 6to4 relay
should first understand the capacity issues before buying a
little embedded box to stick in their network.


Do you imagine that Soekris are giving Nathan kick-backs for  
mentioning the price of their boxes on NANOG? :-)


I'm sure for many small networks a Soekris box would do fine. For the  
record, FreeBSD also runs on more capable hardware.



Joe



Re: Operators Penalized? (was Re: Kenyan Route Hijack)

2008-03-17 Thread Joe Maimon




Glen Kent wrote:





Do ISPs (PTA, AboveNet, etc) that unintentionally hijack someone
else's IP address space ever get penalized in *any* form? 


The net only functions as a single entity because SPs intentionally 
DON'T hijack space, and because of the mutual trust in other SPs' rational 
behavior.


Since SP behavior is financially driven by users' desires, this is 
actually fairly easy to predict.


The entire stability of the net is due to the Nash Equilibrium/MAD principle.

This is an ecosystem which functions because it is in everybody's best 
_practical_ interest to keep it so. Punitive actions will most likely be 
viewed as impractical, dampened and staunched as quickly as practically 
possible, +/- human tendencies such as ego and similar.


Actions that disturb equilibrium must be punitive in and of themselves, 
either by direct consequence or by predictable side effect and chain 
reaction.


So yes, the penalties must already exist in sufficient form, otherwise 
this mailing list wouldn't exist.


The jitter in the system is caused by the imperfections in the system, 
that would be the human element. The system functions because of the 
imperfections, not in spite of them.


I can't see how any imposition of authority could ever change the 
dynamic, seeing as it requires the buy-in of all; in other words, it 
would function simply as an inefficient version of what already exists.


I think it's worth considering that possibly what we have now is the 
best it will ever be.






load balancing and fault tolerance without load balancer

2008-03-14 Thread Joe Shen

hi,

   we plan to set up a web site with two web servers.

   The two servers should be under the same domain
name.  Normally, web surfing load should be
distributed between the servers. When one server
fails, the other server should take all of the load
automatically. When the failed server recovers, load
balancing should be achieved automatically. There is no
budget for a load balancer.


   we plan to use DNS to balance load between the two
servers. But it seems a DNS-based solution cannot
direct all load to one server automatically when the
other is down.


   Is there any way to solve the problem above? 

   we use HP-UX with MC/ServiceGuard installed. 


  thanks in advance.

Joe




Re: load balancing and fault tolerance without load balancer

2008-03-14 Thread Joe Abley



On 14-Mar-2008, at 12:42, Joe Shen wrote:


  Is there any way to solve problem above?


The approach described in http://www.nanog.org/mtg-0505/abley.cluster.html 
 would probably work, so long as the routers choosing between the  
ECMP routes are able to make route selections per flow, and not just  
per packet (e.g. ip cef on a cisco).
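
In that sort of design each server holds the shared service address on a
loopback and announces it into the IGP; a local health check withdraws the
address when the web server stops answering, so the ECMP routers stop
sending it flows.  A minimal sketch of the health-check side (assuming
Linux/iproute2 and a routing daemon already configured to advertise
connected /32s on lo; the address and interval are placeholders):

    import subprocess, time, urllib.request

    SERVICE = "192.0.2.80/32"         # hypothetical shared service address

    def healthy():
        try:
            return urllib.request.urlopen("http://127.0.0.1/", timeout=2).status == 200
        except Exception:
            return False

    while True:
        # Hold the address on lo while the local web server answers; drop it
        # (and thus the IGP announcement of the connected /32) when it doesn't.
        # Re-adding an already-present address is a harmless error here.
        action = "add" if healthy() else "del"
        subprocess.run(["ip", "addr", action, SERVICE, "dev", "lo"])
        time.sleep(5)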


Tony Kapela did a lightning talk a few meetings ago about another  
cisco-specific approach which used some kind of SLA-measuring cisco  
feature to do the same thing without needing to run a routing protocol  
on a server. I can't seem to find a link to the details, but if  
someone else knows where it is it'd be good to know.



Joe



Re: IPv6 on SOHO routers?

2008-03-12 Thread Joe Abley



On 12-Mar-2008, at 16:06, Frank Bulk - iNAME wrote:


Slightly off-topic, but tangentially related that I'll dare to ask.

I'm attending an Emerging Communications course where the instructor
stated that there are SOHO routers that natively support IPv6,  
pointing to

Asia specifically.

Do Linksys, D-Link, Netgear, etc. have such software for the Asian  
markets?


I seem to think I've seen SOHO routers (or gateways I suppose,  
assuming that these boxes are rarely simply routers) on display at  
beer'n'gear-type venues at APRICOT meetings, going back several years.  
The glossy pamphlets have long since been discarded, so I can't tell  
you names of vendors.


More mainstream for this market, Apple's airport extreme SOHO router  
does IPv6.


  http://www.apple.com/airportextreme/specs.html

I have not had the time to figure out what "does IPv6" means, exactly  
(DHCPv6? IPv6 DNS resolver?) but I seem to think it will provide route  
advertisements and route out either using 6to4 or a manually-configured
tunnel.



Joe



Tools to measure TCP connection speed

2008-03-10 Thread Joe Shen

hi,

  is there any tool that could measure e2e TCP connection
speed? 


  e.g. we want to measure the delay between sending the TCP SYN
and receiving the SYN ACK packet.
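
(One low-tech approximation: a blocking connect() on the client returns as
soon as the SYN-ACK comes back, so timing it measures roughly the
SYN-to-SYN-ACK delay without any packet capture.  A minimal sketch; the
host and port are placeholders:)

    import socket, time

    def syn_synack_delay(host, port=80, timeout=5):
        # connect() completes on the client once the SYN-ACK has arrived,
        # so the elapsed wall-clock time approximates SYN -> SYN-ACK delay.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        start = time.time()
        s.connect((host, port))
        delay = time.time() - start
        s.close()
        return delay

    # e.g. print(syn_synack_delay("www.example.com"))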


 Joe




RE: Tools to measure TCP connection speed

2008-03-10 Thread Joe Shen


we do not just want to analyze e2e performance, but to
monitor network performance at the IP and TCP layers.

We monitor end-to-end ping with smokeping, but as you
know, ICMP data does not always reflect application layer
performance. So, we set up two hosts to
measure TCP performance. 

Is there a tool like smokeping for monitoring e2e TCP
connection speed?

Joe




--- Darden, Patrick S. [EMAIL PROTECTED] wrote:

 
 
 Best way to do it is right after the SYN just count
 one one thousand, two one thousand until you get
 the ACK.  This works best for RFC 1149 traffic, but
 is applicable for certain others as well.
 
 I don't know of any automated tool, per se.  You
 really couldn't do it *well* on the software side. 
 I see a few options:
 
 1.  this invalidates itself, but it is easily
 doable: get one of those ethernet cards that
 includes all stack processing, and write a simple
 driver that includes a timing mechanism and a
 logger.  It invalidates itself because your
 real-life connection speeds would depend on the
 actual card you usually use, the OS, etc. ad
 nauseum, and you would be bypassing all of those.
 
 2.  if you are using a free as in open source OS,
 specifically as in Linux or FreeBSD, then you could
 write a simple kernel module that could do it.  It
 would still be wrong--but depending on your skill it
 wouldn't be too wrong.
 
 3.  this might actually work for you.  Check to see
 how many total TCP connections your OS can handle,
 make sure your TCP timeout is set to the default 15
 minutes, then set up a simple perl script that
 simply starts a timer, opens sockets as fast as it
 can, and when it reaches the total the OS can handle
 it lets you know the time passed.  Take that and
 divide by total number of connections and you get
 the average  It won't be very accurate, but it
 will give you some kind of idea.
 
 Please forgive the humor
 
 --Patrick Darden
 
 
 
 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] Behalf Of
 Joe Shen
 Sent: Monday, March 10, 2008 5:00 AM
 To: NANGO
 Subject: Tools to measure TCP connection speed
 
 
 
 hi,
 
   is there any tool that could measure e2e TCP connection
 speed? 
 
 
   e.g. we want to measure the delay between sending the TCP
 SYN and receiving the SYN ACK packet.
 
 
  Joe
 
 
  

 





Re: IETF Journal Announcement (fwd)

2008-02-28 Thread Joe Abley



On 27-Feb-2008, at 15:09, Mark Smith wrote:


Don't worry if the ISOC website times out, their firewall isn't TCP
ECN compatible.


Isn't it the case in the real world that the Internet isn't TCP ECN  
compatible?


I thought people had relegated that to the "nice idea but, in  
practice, a waste of time" bucket years ago.



Joe



Re: Qwest desires mesh to reduce unused standby capacity

2008-02-28 Thread Joe Abley



On 28-Feb-2008, at 01:56, Paul Wall wrote:

UU/MFS tried running IP on the 'protect' path of their SONET rings  
10 years ago. It didn't work then.


Well, it works so long as whoever was trying to troubleshoot the  
circuits at 3am on US Thanksgiving understands that having the system  
switch to protect is quite bad, in the sense that it causes both  
sides to go down at once (I seem to remember there were protect paths  
built for each side of the original ring using a loopback).


Other than the unfamiliarity with the concept demonstrated by phone  
companies, I didn't notice any great fundamental problem with the  
idea. The extra 10G of capacity across the Atlantic was arguably more  
useful in the grand scheme of things than being able to recover  
from a single-point failure at SONET speeds. It's probably fair to say  
there's more real-time traffic on the network today than there was  
then, however.


I have never worked for UU/MFS, lest anybody draw that conclusion.


Joe



Re: Qwest desires mesh to reduce unused standby capacity

2008-02-28 Thread Joe Abley



On 28-Feb-2008, at 09:26, Adrian Chadd wrote:

Then you probably haven't been on the ass end of a continental fibre  
link

drop. That actually mattered.


If both sides of your SONET ring drop, then surely you're as dead in  
the water as you would be if each side of the ring was being used as a  
separate, unprotected circuit.


(But quite possibly I'm missing your point.)


Joe


Re: Aggregation for IPv4-compatible IPv6 address space

2008-02-04 Thread Joe Abley



On 4-Feb-2008, at 00:19, Scott Morris wrote:


You mean do you have to express it in hex?


There are two related things here: (a) the ability to represent a 32-bit
word in an IPv6 address in the form of a dotted-quad, and (b) the  
legitimacy of an IPv6 address of the form ::A.B.C.D, where A.B.C.D is  
an IPv4 address.


(a) is a question about the presentation of IPv6 addresses. (b) is a  
question about the construction of IPv6 addresses to be used in packet  
headers.


I believe (a) is still allowed. However, (b) is not allowed. Since (b)  
is not allowed, (a) is arguably not very useful.
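
As a quick illustration (using Python's ipaddress module purely as a
parser), the dotted-quad presentation in (a) is still accepted, and the
address it denotes is just a 128-bit value whose low 32 bits are the IPv4
address; nothing about the dotted quad survives past parsing:

    import ipaddress

    v6 = ipaddress.IPv6Address("::192.0.2.1")   # presentation form (a) accepted
    v4 = ipaddress.IPv4Address("192.0.2.1")

    assert int(v6) & 0xFFFFFFFF == int(v4)      # low 32 bits are the IPv4 address
    print(v6.exploded)                          # all-hex form; the dotted quad
                                                # is presentation only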



Joe



Re: EU Official: IP Is Personal

2008-01-23 Thread Joe Greco

 Paul Vixie wrote:
  [EMAIL PROTECTED] (Hank Nussbacher) writes:
  http://ap.google.com/article/ALeqM5g08qkYTaNhLlscXKMnS3V8dkc-WwD8UAGH900
 
  they say it's personally identifiable information, not personal property.
  EU's concern is the privacy implications of data that google and others
  are saving, they are not making a statement related to address ownership.
 
 Correct. In the EU DP framework (see: 
 [...]
 P. S. How many bits in the mask are necessary to achieve the non-PII aim?

So, this could be basically a matter of dredging up someone with a /25 
allocated to them personally, in the EU service area.  I think I know 
some people like that.

I know for a fact that I know people with swamp C's here in the US.  That
would seem to set the bar higher than a mere 7 bits.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Cost per prefix [was: request for help w/ ATT and terminology]

2008-01-22 Thread Joe Greco

The reasonable thing to do, when you're just looking for some numbers, is
to come up with a reasonable way to generate those numbers, without giving
yourself an ulcer over the other possibilities of what may or may not be
on some specific network somewhere, or whether or not the other features
that come along with something like the upgrade to a sup720 should somehow 
be attributed to some other thing.

But getting back to this statement of yours:

 I cannot think of a pair of boxes where one can support a full table  
 and one can't where the _only_ difference is prefix count.  

I'll put a nail in this, AND cure some of your unhappiness, by noting the
following:

Per Froogle, which is public information that can be readily verified by
the random reader, it appears that a SUP720-3B can be had for ~$8K.  It 
appears that a SUP720-3BXL can be had for ~$29K.  IGNORING THE FACT that
the average network probably isn't merely upgrading from 3B to 3BXL, and
that line cards may need upgrades or daughtercards, that gives us a cost
of somewhere around $21K that can be attributed to JUST the growth in
table size.  (At least, I'm not /aware/ of any difference between the 3B 
and 3BXL other than table size.)

Will everyone decide to make that /particular/ jump in technology?  No.

Is it a fair answer to the question being asked?  It's a conservative
estimate, and so it is safe to use for the purposes of William's 
discussion.  It is a middle-of-the-road number.  There WILL be networks
that do not experience these costs, for various reasons.  There WILL be
networks where the costs are substantially higher, maybe because they've
got a hundred routers that all need to be upgraded.  There will even be
networks who have the 7600 platform and have already deployed the 3bxl.

The more general problem of "what does it cost to carry another route"
is somewhat like arguing about how many angels can dance on the head of
a pin.  Unlike the angels, there's an actual answer to the question, but
we're not able to accurately determine all the variables with precision.
That doesn't mean it's completely unreasonable to make a ballpark guess.

Remember the wisdom of Pnews:

This program posts news to thousands of machines throughout the entire
civilized world.  Your message will cost the net hundreds if not thousands of
dollars to send everywhere.  Please be sure you know what you are doing.

This is hardly different, and we're trying to get a grasp on what it is 
we're doing.  Your input of useful numbers and estimates would be helpful
and interesting.  Your arguments about why it's all wrong, minus any better
suggestion of how to do it, are useless.  Sorry, that's just the way it is.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Cost per prefix [was: request for help w/ ATT and terminology]

2008-01-21 Thread Joe Greco

  For example, the Cisco 3750G has all of features except for the
  ability to hold 300k+  prefixes. Per CDW, the 48-port version costs
  $10k, so the difference (ergo cost attributable to prefix count) is
  $40k-$10k=$30k, or 75%.
 
 Unfortunately, I have to run real packets through a real router in the  
 real world, not design a network off CDW's website.
 
 As a simple for-instance, taking just a few thousand routes on the  
 3750 and trying to do multipath over, say 4xGigE, the 'router' will  
 fail and you will see up to 50% packet loss.  This is not something I  
 got off CDW's website, this is something we saw in production.
 
 And that's without ACLs, NetFlow, 100s of peering sessions, etc.  None  
 of which the 3750 can do and still pass gigabits of traffic through a  
 layer 3 decision matrix.

Patrick,

Please excuse me for asking, but you seem to be arguing in a most unusual
manner.  You seem to be saying that the 3750 is not a workable device for
L3 routing (which may simply be a firmware issue, don't know, don't care).
From the point of finding a 48-port device which could conceivably route
packets at wirespeed, even if it doesn't /actually/ do so, this device 
seems like a reasonable choice for purposes of cost comparisons to me.  
But okay, we'll go your way for a bit.

Given that the 3750 is not acceptable, then what exactly would you propose
for a 48 port multigigabit router, capable of wirespeed, that does /not/
hold a 300K+ prefix table?  All we need is a model number and a price, and
then we can substitute it into the pricing questions previously posed.

If you disagree that the 7600/3bxl is a good choice for the fully-capable
router, feel free to change that too.  I don't really care, I just want to
see the cost difference between DFZ-capable and non-DFZ-capable on stuff
that have similar features in other ways.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Cost per prefix [was: request for help w/ ATT and terminology]

2008-01-21 Thread Joe Greco

 On Mon, 21 Jan 2008, Joe Greco wrote:
  Given that the 3750 is not acceptable, then what exactly would you propose
  for a 48 port multigigabit router, capable of wirespeed, that does /not/
  hold a 300K+ prefix table?  All we need is a model number and a price, and
  then we can substitute it into the pricing questions previously posed.
 
  If you disagree that the 7600/3bxl is a good choice for the fully-capable
  router, feel free to change that too.  I don't really care, I just want to
  see the cost difference between DFZ-capable and non-DFZ-capable on stuff
  that have similar features in other ways.
 
 If using the 7600/3bxl as the cost basis of the upgrade, you might as 
 well compare it to the 6500/7600/sup2 or sup3b.  Either of these would 
 likely be what people buying the 3bxls are upgrading from, in some cases 
 just because of DFZ growth/bloat, in others, to get additional features 
 (IPv6).

I see a minor problem with that: if I don't actually need a chassis as
large as the 6500/sup2, there's a hefty jump to get to that platform
from potentially reasonable lesser platforms.  If you're upgrading,
though, it's essentially a discard of the sup2 (because you lose access to
the chassis), so it may be fair to count the entire cost of the sup720-3bxl.

Punching in 720-3bxl to Froogle comes up with $29K.  Since there are other
costs that may be associated with the upgrade (daughterboards, incompatible
line cards, etc), let's just pretend $30K is a reasonable figure, unless
someone else has Figures To Share.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: An Attempt at Economically Rational Pricing: Time Warner Trial

2008-01-20 Thread Joe Greco

  However, if you look, all the prepaid plans that I've seen look  
  suspiciously
  like predatory pricing.  The price per minute is substantially  
  higher than
  an equivalent minute on a conventional plan.  Picking on ATT, for a  
  minute,
  here, look at their monthly GoPhone prepaid plan, $39.99/300  
  anytime, vs
  $39.99/450 minutes for the normal.  If anything, the phone company  
  is not
  extending you any credit, and has actually collected your cash in  
  advance,
  so the prepaid minutes ought to be /cheaper/.
 
 I disagree.  Ever heard of volume discounts?
 
 Picking on att again, a typical iPhone user signs up for 24 months @ ~ 
 $100/month, _after_ a credit check to prove they are good for it or  
 plunking down a hefty deposit.
 
 Compare that $2.4 kilo-bux to the $40-one-time payment by a pre-paid  
 user.  Or, to be more far, how about $960 ($40/month for voice only)  
 compared to $40 one-time?
 
 Hell yes I expect more minutes per dollar on my long-term contract.
 
 Hrmm, wonder if someone will offer pay-as-you-go broadband @ $XXX (or  
 $0.XXX) per gigabyte?

Actually, I was fairly careful, and I picked monthly recurring plans in 
both cases.  The typical prepaid user is NOT going to pay a $40-one-
time payment, because the initial cost of the phone is going to be a
deterrent from simply ditching the phone after $40 is spent.

The lock-in of contracts is typically done to guarantee that the cell
phone which they make you buy is paid for, and it is perfectly possible
(though somewhat roundabout) to get the cheaper postpaid plan without a
long contract - assuming you meet their creditworthiness guidelines.
Even without that, once you've gone past your one or two year commitment,
you continue at that same rate, so we can still note that the economics
are interesting.

The iPhone seems to be some sort of odd case, where we're not quite sure
whether there's money going back and forth between ATT and Apple behind
the scenes to subsidize the cost of the phones (or I may have missed the
news).  So talking about your iPhone is pretty much like comparing Apples
and oranges, and yes, you set yourself up for that one.

To put it another way, they do not give you a better price per minute if
you go and deposit $2400 in your prepaid account.  You can use your volume
discount argument once you come up with a compelling explanation for that.
;-)

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Cost per prefix [was: request for help w/ ATT and terminology]

2008-01-20 Thread Joe Greco

 But before we go too far down this road, everyone here should realize  
 that new PI space and PA deaggregation WILL CONTINUE TO HAPPEN.
 
 Many corporations paying for Internet access will NOT be tied to a  
 single provider.  Period.  Trying to tell them you are too small, you  
 should only let us big networks have our own space is a silly  
 argument which won't fly.
 
 The Internet is a business tool.  Trying to make that tool less  
 flexible, trying to tie the fate of a customer to the fate of a single  
 provider, or trying force them to jump through more hoops than you  
 have to jump through for the same redundancy / reliability is simply  
 not realistic.  And telling them it will cost some random network in  
 some random other place a dollar a year for their additional  
 flexibility / reliability / performance is not going to convince them  
 not to do it.
 
 At least not while the Internet is still driven  
 by commercial realities.  (Which I personally think is a Very Good  
 Thing - much better than the alternative.)  Someone will take the  
 customer's check, so the prefix will be in the table.  And since you  
 want to take your customers' checks to provide access to that ISP's  
 customer, you will have to carry the prefix.
 
 Of course, that doesn't mean we shouldn't be thrifty with table  
 space.  We just have to stop thinking that only the largest providers  
 should be allowed to add a prefix to the table.  At least if we are  
 going to continue making money on the Internet.

While I agree with this to some extent, it is clear that there are some
problems.  The obvious problem is where the line is drawn; it is not
currently reasonable for each business class DSL line to be issued PI
space, but it is currently reasonable for the largest 100 companies in
the world to have PI space.  (I've deliberately drawn the boundary lines
well outside what most would argue as a reasonable range; the boundaries
I've drawn are not open to debate, since they're for the purposes of
contemplating a problem.)

I don't think that simply writing a check to an ISP is going to be
sufficiently compelling to cause networks of the world to accept a 
prefix in the table.  If I happen to be close to running out of table
entries, then I may not see any particular value in accepting a prefix
that serves no good purpose.  For example, PA deaggregated space and
prefixes from far away might be among the first victims, with the former
being filtered (hope you have a covering route!) and the latter being
filtered with a local covering route installed to default a bunch of
APNIC routes out a reasonable pipe.

For the overall good of the Internet, that's not particularly desirable,
but it will be a reality for providers who can't keep justifying
installing lots of routers with larger table sizes every few years.

There is, therefore, some commercial interest above and beyond "hey, 
look, some guy paid me."  We'd like the Internet to work _well_, and
that means that self-interest exclusive of all else is not going to be
a good way to contemplate commercial realities.

So, what can reasonably be done?  Given what I've seen over the years,
I keep coming back to the idea that PI space allocations are not all
that far out of control, but the PA deaggregation situation is fairly
rough.  There would also seem to be some things that smaller sites could
do to fix the PA deagg situation.  Is this the way people see things
going, if we're going to be realistic?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: An Attempt at Economically Rational Pricing: Time Warner Trial

2008-01-20 Thread Joe Greco

 I think the point is that you need to get buyers to segregate
 themselves into two groups - the light users and the heavy users. By
 heavy users I mean the 'Bandwidth Hogs' (Oink, Oink) and a light user is
 someone like myself for whom email is the main application. After all, the
 problem with the current system is that there is no segregation -
 everyone is on basically the same plan.

Well, yes.

 The pricing plan needs to be structured in a way that light users have an
 incentive to take a different pricing plan than do the heavy users.

Using the local cable company as an example, right now, I believe that
they're doing Road Runner Classic for $40/mo, with Road Runner Turbo for
$50/mo (approx).  Extra speed for Turbo (14M/1M, IIRC).

The problem is, Road Runner is delivering 7M/512K for $40/mo, which is
arguably a lot more capacity than maybe 50-80% of the customers actually
need.

Ma Bell is selling DSL a *lot* cheaper (as low as $15, IIRC).

So, does:

1) Road Runner drop prices substantially (keep current pricing for high
   bandwidth users), and continue to try to compete with DSL, which could 
   have the adverse side effect of damaging revenue if customers start
   moving in volume to the cheaper plan,

2) Road Runner continue to provide service to the shrinking DSL-less service
   areas at a premium price, relying on apathy to minimize churn in the
   areas where Ma Bell is likely leafing every bill with DSL adverts,

3) Road Runner decide to keep the high paying customers, for now, and try to
   minimize bandwidth, and then deal with the growth of DSL coverage at a 
   future date by dropping prices later?

Option 1) is aggressive but kills profitability.  If done right, though, 
it ensures that cable will continue to compete with DSL in the future.
Option 2) is a holding pattern that is the slow path to irrelevancy.  
Option 3) is a way to maximize current profitability, but makes it 
difficult to figure out just when to implement a strategy change.  In 
the meantime, DSL continues to nibble away at the customer base.  The
end result is unpredictable.

I'm going to tend to view 3) as the shortsighted approach that is also
going to be very popular with businesses who cannot see out beyond next
quarter's profits.

The easiest way to encourage light users to take a different pricing plan
is to give them one.  If Road Runner does that, that's option 1), complete
with option 1)'s problem.  On the flip side, if you seriously think that 
$40/month is an appropriate light pricing plan and high bandwidth users 
should pay more (let's say $80/), then there's a competition problem with
DSL where DSL is selling tiers, and even the highest is at least somewhat
cheaper.

That means that the main advantages to Road Runner are:

1) Availability in non-DSL areas,

2) A 14M/1M service plan currently unmatched by DSL (TTBOMK).

That latter one is simply going to act as a magnet to the high bandwidth
users.

Interesting.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Cost per prefix [was: request for help w/ ATT and terminology]

2008-01-20 Thread Joe Abley



On 20-Jan-2008, at 15:34, William Herrin wrote:


Perhaps your definition of entry level DFZ router differs from mine.
I selected a Cisco 7600 w/ sup720-3bxl or rsp720-3xcl as my baseline
for an entry level DFZ router.


A new cisco 2851 can be found for under $10k and can take a gig of  
RAM. If your goal is to have fine-grained routing data, and not to  
carry gigs of traffic, that particular router is perfectly adequate.


If you're prepared to consider second-hand equipment (which seems  
fair, since it's not as though the real Internet has no eBay VXRs in  
it) you could get better performance, or lower cost, depending on  
which way you wanted to turn the dial.


Sometimes it's important to appreciate that the network edge is bigger  
than the network core. Just because this kind of equipment wouldn't  
come close to cutting it in a carrier network doesn't mean that they  
aren't perfectly appropriate for a large proportion of deployed  
routers which take a full table.



Joe


Re: An Attempt at Economically Rational Pricing: Time Warner Trial

2008-01-19 Thread Joe Greco

Condensing a few messages into one:

Mikael Abrahamsson writes:
 Customers want control, that's why the prepaid mobile phone where you get
 an account you have to prepay into, are so popular in some markets. It
 also enables people who perhaps otherwise would not be eligible because of
 bad credit, to get these kind of services.

However, if you look, all the prepaid plans that I've seen look suspiciously 
like predatory pricing.  The price per minute is substantially higher than
an equivalent minute on a conventional plan.  Picking on ATT, for a minute,
here, look at their monthly GoPhone prepaid plan, $39.99/300 anytime, vs
$39.99/450 minutes for the normal.  If anything, the phone company is not
extending you any credit, and has actually collected your cash in advance,
so the prepaid minutes ought to be /cheaper/.
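
Just to make the per-minute gap explicit, a trivial sketch using the
GoPhone figures above (nothing here beyond the two advertised plans):

# Per-minute cost of the two $39.99 plans quoted above.
prepaid_rate  = 39.99 / 300   # ~$0.133 per anytime minute
postpaid_rate = 39.99 / 450   # ~$0.089 per anytime minute

print(f"prepaid : ${prepaid_rate:.3f}/min")
print(f"postpaid: ${postpaid_rate:.3f}/min")
print(f"prepaid premium: {prepaid_rate / postpaid_rate - 1:.0%}")  # ~50% more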

Roderick S. Beck writes:
 Do other industries have mixed pricing schemes that successfully
 coexist? Some restaurants are all-you-can-eat and others are pay by
 portion. You can buy a car outright or rent one and pay by the mile.

Certainly.  We already have that in the Internet business, in the form of
business vs residential service, etc.  For example, for a residential
circuit where I wanted to avoid a disclosed (in the fine print, sigh)
monthly limit, we instead ordered a business circuit, which we were
assured differed from a T1 in one way (on the usage front):  there was 
no specific performance SLA, but there were no limits imposed by the
service provider, and it was explicitly okay to max it 24/7.  This cost
all of maybe $15/month extra (prices have since changed, I can't check.)

Quinn Kuzmich writes:
 You are sadly mistaken if you think this will save anyone any cash,
 even light users.  Their prices will not change, not a chance.
 Upgrade your network instead of complaining that its just kids
 downloading stuff and playing games.

It is certainly true that the price is resistant to change.  In the local
area, RR recently increased speeds, and I believe dropped the base price
by $5, but didn't tell any of their legacy customers.  The pricing aspect
in particular has been somewhat obscured; when I called in to have a
circuit updated to Road Runner Turbo, the agent merely said that it would
only cost $5/month more (despite it being $10/ more, since the base
service price had apparently dropped $5).  They seemed hesitant to explain.

Michael Holstein writes:
 The problem is the inability of the physical media in TWC's case (coax) 
 to support multiple simultaneous users. They've held off infrastructure 
 upgrades to the point where they really can't offer unlimited 
 bandwidth. TWC also wants to collect on their unlimited package, but 
 only to the 95% of the users that don't really use it,

Absolutely.  If you can do that, you're good to go.  Except that you run
into this dynamic where someone else comes in and picks the fruit.  In
Road Runner's case, they're going to be competing with ATT who is going
to be trying to pick off those $35-$40/mo low volume customers into a
less expensive $15-$20/mo plan.

 and it appears 
 they don't see working to accommodate the other 5% as cost-effective.

Certainly, but only if they can retain the large number of high-paying 
customers who make up that 95%.

 My guess is the market will work this out. As soon as it's implemented, 
 you'll see ATT commercials in that town slamming cable and saying how 
 DSL is really unlimited.

Especially if ATT can make it really unlimited.  Their speeds do not
quite compete with Road Runner Turbo, but for 6.0/768 here, ATT Y! is
$34.99/mo, while RR appears to be $40(?) for 7.0/512.

The difference is that's the top-of-the-line legacy (non-U-verse) ATT
DSL offering; there are less expensive ones.  Getting back to what Roderick
Beck said, ATT is *effectively* offering mixed pricing schemes, simply by
offering various DSL speeds.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 gluelessness

2008-01-18 Thread Joe Abley



On 18-Jan-2008, at 18:56, Randy Bush wrote:


The .com/.net registry has supported AAAA RRs for over five years
(since May, 2002).  The issue you may be encountering is that not
every .com/.net registrar supports them.


way cool.

do you happen to know if opensrs registrars have a path to do so?


Typing IPv6 into the search box at http://resellerhelp.tucows.com/faq1.php 
 returns:


Q: Is IPV6 supported?
A: No. IPV6 is currently not supported.

It's not entirely clear what that means (glue? transport?), but it  
doesn't sound tremendously promising.



Joe


Re: v6 gluelessness

2008-01-18 Thread Joe Abley



On 18-Jan-2008, at 05:39, Randy Bush wrote:


similarly for the root, as rip.psg.com serves some tlds.

The request has to come from a TLD manager (anyone which uses
rip.psg.com)


i can go down the hall to the mirror and ask myself to ask me to do  
it. :)


:-)


but, of course, you would get a more authoritative reply from IANA.


i am hoping that.


It's the same process that is used to update a delegation in the root  
zone. For ccTLDs I believe there's some kind of web portal to allow  
such changes to be requested, but my experience is that the old text  
form also still works just fine.


I've done this a number of times over the past few years and have not  
had any problems.


I don't know what the process is for getting IPv6 addresses associated  
with host records in the VGRS COM/NET registry, but it seems like good  
information to share here if you find a definitive answer.



Joe


Re: request for help w/ ATT and terminology

2008-01-18 Thread Joe Greco

 On Thu, 17 Jan 2008 17:35:30 -0500
 [EMAIL PROTECTED] wrote:
  On Thu, 17 Jan 2008 21:29:37 GMT, Steven M. Bellovin said:
  
   You don't always want to rely on the DNS for things like firewalls
   and ACLs.  DNS responses can be spoofed, the servers may not be
   available, etc.  (For some reason, I'm assuming that DNSsec isn't
   being used...)
  
  Been there, done that, plus enough other stupid DNS tricks and
  stupid /etc/host tricks to get me a fair supply of stories best
  told over a pitcher of Guinness down at the Underground..
 
 I prefer nice, hoppy ales to Guinness, but either works for stories..

Heh.

  *Choosing* to hardcode rather than use DNS is one thing.  *Having* to
  hardcode because the gear is too stupid (as Joe Greco put it) is
  however Caveat emptor no matter how you slice it...
 
 Mostly.  I could make a strong case that some security gear shouldn't
 let you do the wrong thing.  (OTOH, my preferred interface would do the
 DNS look-up at config time, and ask you to confirm the retrieved
 addresses.)  You can even do that look-up on a protected net in some
 cases.

It's all nice and trivial to generate scenarios that could work, but the
cold, harsh reality of the world is full of scenarios that don't work.

Exempting /etc/resolv.conf (or Windows equiv) from blame could be
considered equally silly, because DHCP certainly allows discovery of
DNS servers ...  yet we already exempted that scenario.  Why not exempt
more difficult scenarios, such as how do you use DNS to specify a
firewall rule that (currently) allows 123.45.67.0/24.  Your suggested
interface for single addresses is actually fairly reasonable, but is not
comprehensive by a long shot, and still has some serious issues.  For
example, when the firewall in question is under someone else's
administrative control, the config-time nature of the DNS resolution
means that using DNS doesn't actually get your update installed without
their intervention.
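
For what the config-time idea might look like, a minimal sketch; the
confirmation prompt and the "allow from" rule format are invented for
illustration, and it only handles single host names - which is exactly
the limitation being pointed out above:

# Sketch of "resolve at config time, confirm, then pin the addresses".
import socket

def pin_host(hostname):
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    addrs = sorted({info[4][0] for info in infos})
    print(f"{hostname} currently resolves to: {', '.join(addrs)}")
    if input("Install allow rules for these addresses? [y/N] ").lower() == "y":
        return [f"allow from {a}" for a in addrs]   # hard-coded from here on
    return []

# e.g. rules = pin_host("vpn.example.com")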

It's also worth remembering that hardware manufactured fairly recently
still didn't have DNS lookup capabilities; I think only our newest
generation of APC RPDU's has it, for example, and it doesn't do it for
ACL purposes.  The CPU's in some of these things are tiny, as are the
memories, ROM/flash, etc.  And it's simply unfair to say that equipment
older than N years must be obsolete.

As much as I'd like it to be easy to renumber, I'd say that it's
unreasonable to assume that it is actually trivial to do so.  Further,
the real experiences of those who have had to undergo such an ordeal
should represent some hard-learned wisdom to those working on
autoconfiguration for IPv6; if we don't learn from our v4 problems,
then that's stupid.  (That's primarily why this is worth discussing)

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: request for help w/ ATT and terminology

2008-01-17 Thread Joe Greco

 P.S. if your network is all in one cage, it can't be that difficult
 to just renumber it all into ATT address space.

Oh, come on, let's not be naive.  It's perfectly possible to have a common
situation where it would be exceedingly difficult to do this.  Anything
that gets wired in by IP address, particularly on remote computers, would
make this a killer.  That could include things such as firewall rules/ACL's,
recursion DNS server addresses, VPN adapters, VoIP equipment with stacks too
stupid to do DNS, etc.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Network Operator Groups Outside the US

2008-01-17 Thread Joe Abley



On 16-Jan-2008, at 07:09, Rod Beck wrote:

6. I am not aware of any Dutch per se ISP conferences although that  
market is certainly quite vibrant. I am also disappointed to see the  
Canadians and Irish have next to nothing despite Ireland being the  
European base of operations for Google, Microsoft, Amazon, and  
Yahoo. And Canada has over 30 million people. Where is the National  
Pride?




We have played host to a couple of NANOG meetings, you know :-)

And the TorIX community in Toronto has occasional meetings with  
technical content, and has had at least one meeting with no technical  
content but a lot of alcohol and poker.



Joe



Re: request for help w/ ATT and terminology

2008-01-17 Thread Joe Greco

 On Thu, 17 Jan 2008 09:15:30 CST, Joe Greco said:
  make this a killer.  That could include things such as firewall rules/ACL's,
  recursion DNS server addresses, VPN adapters, VoIP equipment with stacks too
  stupid to do DNS, etc.
 
 I'll admit that fixing up /etc/resolv.conf and whatever the Windows equivalent
 is can be a pain - but for the rest of it, if you bought gear that's too
 stupid to do DNS, I have to agree with Leigh's comment: Caveat emptor.

Wow, as far as I can tell, you've pretty much condemned most firewall
software and devices then, because I'm really not aware of any serious
ones that will successfully implement rules such as allow from
123.45.67.0/24 via DNS.  Besides, if you've gone to the trouble of
acquiring your own address space, it is a reasonable assumption that 
you'll be able to rely on being able to tack down services in that
space.  Being expected to walk through every bit of equipment and
reconfigure potentially multiple subsystems within it is unreasonable.

Taking, as one simple example, an older managed ethernet switch, I see
the IP configuration itself, the SNMP configuration (both filters and
traps), the ACL's for management, the time server IP, etc.  I guess if
you feel that Bay Networks equipment was a bad buy, you're welcome to
that opinion.  I can probably dig up some similar Cisco gear.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Network Operator Groups Outside the US

2008-01-16 Thread Joe Provo

On Wed, Jan 16, 2008 at 01:44:00PM +0100, Phil Regnauld wrote:
[snip]

Also missed Middle East Network Operators Group (MENOG): 
 http://www.menog.net/


Better still would be some links to aggregate lists:
- http://www.nanog.org/orgs.html
- http://www.bugest.net/nogs.html 
- http://nanog.cluepon.net/index.php/Other_Operations_Groups

...and aggregated calendars:
- http://www.icann.org/general/calendar/
- http://www.isoc.org/isoc/conferences/events/

Cheers,

Joe

-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE


Re: Looking for geo-directional DNS service

2008-01-16 Thread Joe Greco

 [EMAIL PROTECTED] (Joe Greco) writes:
  ...
  So, anyways, would it be entertaining to discuss the relative merits of
  various DNS implementations that attempt to provide geographic answers 
  to requests, versus doing it at a higher level?  (I can hear everyone 
  groaning now, and some purist somewhere probably having fits)
 
 off topic.  see http://lists.oarci.net/mailman/listinfo/dns-operations.

Possibly, but I found myself removed from that particular party, and the
request was on NANOG, not on dns-operations.  I was under the impression 
that dns-operations was for discussion of DNS operations, not 
implementation choices.  Whether NANOG is completely appropriate remains 
to be seen; I haven't heard a ML complaint though.  There would ideally 
be a list for implementation and design of such things, but I've yet to 
see one that's actually useful, which is, I suspect, why NANOG got a 
request like this.

Besides, if you refer back to the original message in this thread, where I
was driving would be much closer to being related to what the OP was 
interested in.

Hank was saying:

 What I am looking for is a commercial DNS service.
 [...]
 Another service I know about is the Ultradns (now Neustar) Directional DNS:
 http://www.neustarultraservices.biz/solutions/directionaldns.html
 But this service is based on statically defined IP responses at each of
 their 14 sites so there is no proximity checking done.

So there are three basic ways to go about it,

1) Totally static data (in which case anycast and directionality are not a
   consideration, at least at the DNS level), which does not preclude doing
   things at a higher level.

2) Simple anycast, as in the Directional DNS service Hank mentioned, which
   has thoroughly been thrashed into the ground as to why it ain't great,
   which it seems Hank already understood.

3) Complex DNS implementations.  Such as ones that will actually do active
   probes, etc.  Possibly combined with 1) even.

I was trying to redirect the dead anycast horse beating back towards a 
discussion of the relative merits of 1) vs 3).  The largest problems with 
3) seem to revolve around the fact that you generally have no idea where 
a request /actually/ originated, and you're pinning your hopes on the 
client's resolver having some vague proximity to the actual client. 
Redirection at a higher level is going to be desirable, but is not always 
possible, such as for protocols like NNTP.

I'm happy to be criticized for guiding a conversation back towards being
relevant...  :-)

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Joe Greco

 On Mon, 14 Jan 2008 18:43:12 -0500
 William Herrin [EMAIL PROTECTED] wrote:
  On Jan 14, 2008 5:25 PM, Joe Greco [EMAIL PROTECTED] wrote:
So users who rarely use their connection are more profitable to the ISP.
  
   The fat man isn't a welcome sight to the owner of the AYCE buffet.
  
  Joe,
  
  The fat man is quite welcome at the buffet, especially if he brings
  friends and tips well.
 
 But the fat man isn't allowed to take up residence in the restaurant
 and continously eat - he's only allowed to be there in bursts, like we
 used to be able to assume people would use networks they're connected
 to. Left running P2P is the fat man never leaving and never stopping
 eating.

Time to stop selling the always on connections, then, I guess, because
it is always on - not P2P - which is the fat man never leaving.  P2P
is merely the fat man eating a lot while he's there.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: BGP Filtering

2008-01-15 Thread Joe Abley



On 15-Jan-2008, at 11:40, Ben Butler wrote:

Defaults won't work because a routing decision has to be made, my  
transit

originating a default or me pointing a default at them does not
guarantee the reachability of all prefixes..


Taking a table that won't fit in RAM similarly won't guarantee  
reachability of anything :-)


Filter on assignment boundaries and supplement with a default. That  
ought to mean that you have a reasonable shot at surviving de-peering/ 
partitioning events, and the defaults will pick up the slack in the  
event that you don't.


For extra credit, supplement with a bunch of null routes for bogons so  
packets with bogon destination addresses don't leave your network, and  
maybe make exceptions for golden prefixes.
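
To make the boundary-filtering idea concrete, a minimal sketch; the /24
cut-off and the golden prefix below are placeholders rather than real RIR
minimum-allocation data, which is what an actual filter would be built from:

# Accept prefixes no longer than an assumed boundary, always keep golden
# prefixes, and let the static default pick up whatever gets filtered.
import ipaddress

GLOBAL_MAX = 24                                   # assumed cut-off
GOLDEN = [ipaddress.ip_network("192.0.2.0/24")]   # placeholder golden prefix

def accept(prefix_str):
    pfx = ipaddress.ip_network(prefix_str)
    if any(pfx.subnet_of(g) for g in GOLDEN):
        return True
    return pfx.prefixlen <= GLOBAL_MAX            # longer ones fall to default

# accept("10.20.0.0/16") -> True; accept("10.20.30.128/25") -> False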


I am struggling to see a defensible position for why just shy of 50%  
of

all routes appears to be mostly comprised of de-aggregated routes when
aggregation is one of the aims RIRs make the LIRs strive to  
achieve.  If

we can't clean the mess up because there is no incentive, then can't I
simply ignore the duplicates?


You can search the archives I'm sure for more detailed discussion of  
this. However, you can't necessarily always attribute the presence of  
covered prefixes to incompetence.



Joe


Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Joe Greco

 Joe Greco wrote:
  Time to stop selling the always on connections, then, I guess, because
  it is always on - not P2P - which is the fat man never leaving.  P2P
  is merely the fat man eating a lot while he's there.
 
 As long as we're keeping up this metaphor, P2P is the fat man who says 
 he's gonna get a job real soon but dude life is just SO HARD and crashes 
 on your couch for three weeks until eventually you threaten to get the 
 cops involved because he won't leave. Then you have to clean up 
 thirty-seven half-eaten bags of Cheetos.

I have no idea what the networking equivalent of thirty-seven half-eaten
bags of Cheetos is, can't even begin to imagine what the virtual equivalent
of my couch is, etc.  Your metaphor doesn't really make any sense to me,
sorry.

Interestingly enough, we do have a pizza-and-play place a mile or two
from the house, you pay one fee to get in, then quarters (or cards or
whatever) to play games - but they have repeatedly answered that they
are absolutely and positively fine with you coming in for lunch, and 
staying through supper.  And we have a discount card, which they used
to give out to local businesspeople for business lunches, on top of it.

 Every network has limitations, and I don't think I've ever seen a 
 network that makes every single end-user happy with everything all the 
 time. You could pipe 100Mbps full-duplex to everyone's door, and someone 
 would still complain because they don't have gigabit access to lemonparty.

Certainly.  There will be gigabit in the future, but it isn't here (in
the US) just yet.  That has very little to do with the deceptiveness
inherent in selling something when you don't intend to actually provide
what you advertised.

 Whether those are limitations of the technology you chose, limitations 
 in your budget, policy restrictions, whatever.
 
 As long as you fairly disclose to your end-users what limitations and 
 restrictions exist on your network, I don't see the problem.

You've set out a qualification that generally doesn't exist.  For example,
this discussion included someone from a WISP, Amplex, I believe, that 
listed certain conditions of use on their web site, and yet it seems like
they're un{willing,able} (not assigning blame/fault/etc here) to deliver
that level of service, and using their inability as a way to justify
possibly rate shaping P2P traffic above and beyond what they indicate on 
their own documents.

In some cases, we do have people burying T&C in lengthy T&C documents,
such as some of the 3G cellular providers who advertise Unlimited
Internet(*) data cards, but then have a slew of (*) items that are
restricted - but only if you dig into the fine print on Page 3 of the
T&C.  I'd much prefer that the advertising be honest and up front, and
that ISP's not be allowed to advertise unlimited service if they are
going to place limits, particularly significant limits, on the service.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Looking for geo-directional DNS service

2008-01-15 Thread Joe Greco

 Except Hank is asking for true topological distance (latency /  
 throughput / packetloss).
 
 Anycast gives you BGP distance, not topological distance.
 
 Say I'm in Ashburn and peer directly with someone in Korea where he  
 has a node (1 AS hop), but I get to his node in Ashburn through my  
 transit provider (2 AS hops), guess which node anycast will pick?

Ashburn and other major network meet points are oddities in a very complex
network.  It would be fair to note that anycast is likely to be reasonably
effective if deployed in a manner that was mindful of the overall Internet
architecture, and made allowances for such things.

Anycast by itself probably isn't entirely desirable in any case, and could
ideally be paired up with other technologies to fix problems like this.

I haven't seen many easy ways to roll-your-own geo-DNS service.  The ones
I've done in the past simply built in knowledge of the networks in question,
and where such information wasn't available, took best guess and then may
have done a little research after the fact for future queries.  This isn't
as comprehensive as doing actual latency / throughput / pl checking.
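
As a sketch of that built-in knowledge approach (the prefixes and POP
names are invented; a real table would come from your own knowledge of
the networks involved):

# Map the querying resolver's address to a POP via a static prefix table,
# falling back to a best-guess default when nothing matches.
import ipaddress

POP_MAP = [
    (ipaddress.ip_network("192.0.2.0/24"),    "chicago"),
    (ipaddress.ip_network("198.51.100.0/24"), "amsterdam"),
    (ipaddress.ip_network("203.0.113.0/24"),  "sydney"),
]

def pick_pop(resolver_ip, default="chicago"):
    addr = ipaddress.ip_address(resolver_ip)
    for prefix, pop in POP_MAP:
        if addr in prefix:
            return pop
    return default   # no knowledge of this network: best guess

# pick_pop("203.0.113.9") -> "sydney".  Note that all you ever see is the
# resolver, not the actual client, which is the weakness discussed above.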

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Joe Greco

 Joe Greco wrote:
  I have no idea what the networking equivalent of thirty-seven half-eaten
  bags of Cheetos is, can't even begin to imagine what the virtual equivalent
  of my couch is, etc.  Your metaphor doesn't really make any sense to me,
  sorry.
 
 There isn't one. The fat man metaphor was getting increasingly silly, 
 I just wanted to get it over with.

Actually, it was doing pretty well up 'til near the end.  Most of the
amusing stuff was [off-list.]  The interesting conclusion to it was that
obesity is a growing problem in the US, and that the economics of an AYCE
buffet are changing - mostly for the owner.

  Interestingly enough, we do have a pizza-and-play place a mile or two
  from the house, you pay one fee to get in, then quarters (or cards or
  whatever) to play games - but they have repeatedly answered that they
  are absolutely and positively fine with you coming in for lunch, and 
  staying through supper.  And we have a discount card, which they used
  to give out to local businesspeople for business lunches, on top of it.
 
 That's not the best metaphor either, because they're making money off 
 the games, not the buffet. (Seriously, visit one of 'em, the food isn't 
 very good, and clearly isn't the real draw.) 

True for Chuck E Cheese, but not universally so.  I really doubt that
Stonefire is expecting the people who they give their $5.95 business
lunch card to to go play games.  Their pizza used to taste like cardboard
(bland), but they're much better now.  The facility as a whole is designed
to address the family, and adults can go get some Asian or Italian pasta,
go to the sports theme area that plays ESPN, and only tangentially notice
the game area on the way out.  The toddler play areas (<8yr) are even free.

http://www.whitehutchinson.com/leisure/stonefirepizza.shtml

This is falling fairly far from topicality for NANOG, but there is a
certain aspect here which is exceedingly relevant - that businesses
continue to change and innovate in order to meet customer demand.

 I suppose you could market 
 Internet connectivity this way - unlimited access to HTTP and POP3, and 
 ten free SMTP transactions per month, then you pay extra for each 
 protocol. That'd be an awfully tough sell, though.

Possibly.  :-)

  As long as you fairly disclose to your end-users what limitations and 
  restrictions exist on your network, I don't see the problem.
  
  You've set out a qualification that generally doesn't exist.
 
 I can only speak for my network, of course. Mine is a small WISP, and we 
 have the same basic policy as Amplex, from whence this thread 
 originated. Our contracts have relatively clear and large (at least by 
 the standards of a contract) no p2p disclaimers, in addition to the 
 standard no traffic that causes network problems clause that many of 
 us have. The installers are trained to explicitly mention this, along 
 with other no-brainer clauses like don't spam.

Actually, that's a difference, that wasn't what [EMAIL PROTECTED] was talking
about.  Amplex web site said they would rate limit you down to the minimum 
promised rate.  That's disclosed, which would be fine, except that it
apparently isn't what they are looking to do, because their oversubscription
rate is still too high to deliver on their promises.

 When we're setting up software on their computers (like their email 
 client), we'll look for obvious signs of trouble ahead. If a customer 
 already has a bunch of p2p software installed, we'll let them know they 
 can't use it, under pain of find a new ISP.
 
 We don't tell our customers they can have unlimited access to do 
 whatever the heck they want. The technical distinctions only matter to a 
 few customers, and they're generally the problem customers that we don't 
 want anyway.

There is certainly some truth to that.  Getting rid of the unprofitable
customers is one way to keep things good.  However, you may find yourself
getting rid of some customers who merely want to make sure that their ISP
isn't going to interfere at some future date.  

 To try to make this slightly more relevant, is it a good idea, either 
 technically or legally, to mandate some sort of standard for this? I'm 
 thinking something like the Nutrition Facts information that appears 
 on most packaged foods in the States, that ISPs put on their Web sites 
 and advertisements. I'm willing to disclose that we block certain ports 
 for our end-users unless they request otherwise, and that we rate-limit 
 certain types of traffic. 

ABSOLUTELY.  We would certainly seem more responsible, as providers, 
if we disclosed what we were providing.

 I can see this sort of thing getting confusing 
 and messy for everyone, with little or no benefit to anyone. Thoughts?

It certainly can get confusing and messy.

It's a little annoying to help someone go shopping for broadband and then
have to dig out the dirty details in the T&C, if they're even there.

In a similar way, I get highly annoyed

Re: Looking for geo-directional DNS service

2008-01-15 Thread Joe Abley



On 15-Jan-2008, at 12:50, Patrick W. Gilmore wrote:


Anycast gives you BGP distance, not topological distance.


Yeah, it's topology modulated by economics :-)


Joe


Re: Looking for geo-directional DNS service

2008-01-15 Thread Joe Greco

 Unless you define topologically nearest as what BGP picks, that is  
 incorrect.  And even if you do define topology to be equivalent to  
 BGP, that is not what is of the greatest interest.   
 Goodput (latency, packet loss, throughput) is far more important.   
 IMHO.

Certainly, but given some completely random transaction, there's still
going to be a tendency for anycast to be some sort of improvement over
pure random chance.  1000 boneheaded anycast implementations cannot be
wrong.  :-)  That you don't get it right every time doesn't make it
wrong every time.

I'm certainly not arguing for anycast-only solutions, and said so.  I'll
be happy to consider it as a first approximation to getting something to
a topologically nearby network, though as I also said, there needs to
be some care taken in the implementation.

Anycast can actually be very powerful within a single AS, where of course
you have some knowledge of the network and predictability.  You lose some
(probably a lot) of that in the translation to the public Internet, but
I'm going to go out on a bit of a limb and guess that if I were to stick an
anycast node in Chicago, Sydney, and Amsterdam, I'm very likely to be able
to pick my networks such that I get a good amount of localization.

Of course, nobody's perfect, and it probably needs to be a data-driven 
business if you really want well-optimized redirection.  However, that's
a bit of magic.  Even the fabled Akamai used to direct us to some ISP up
in Minnesota...  (BFG)

So, anyways, would it be entertaining to discuss the relative merits of
various DNS implementations that attempt to provide geographic answers 
to requests, versus doing it at a higher level?  (I can hear everyone 
groaning now, and some purist somewhere probably having fits)

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: ISPs slowing P2P traffic...

2008-01-14 Thread Joe Greco

 Geo:
 
 That's an over-simplification.  Some access technologies have different
 modulations for downstream and upstream.
 i.e. if a:b and a=b, and c:d and c>d, a+b<c+d.
 
 In other words, you're denying the reality that people download 3 to 4
 times more than they upload and penalizing everyone in trying to attain a 1:1
 ratio.

So, is that actually true as a constant, or might there be some
cause-effect mixed in there?

For example, I know I'm not transferring any more than I absolutely must
if I'm connected via GPRS radio.  Drawing any sort of conclusions about
my normal Internet usage from my GPRS stats would be ... skewed ... at
best.  Trying to use that reality as proof would yield you an exceedingly
misleading picture.

During those early years of the retail Internet scene, it was fairly easy
for users to migrate to usage patterns where they were mostly downloading
content; uploading content on a 14.4K modem would have been unreasonable.
There was a natural tendency towards eyeball networks and content networks.

However, these days, more people have always on Internet access, and may
be interested in downloading larger things, such as services that might
eventually allow users to download a DVD and burn it.

http://www.engadget.com/2007/09/21/dvd-group-approves-restrictive-download-to-burn-scheme/

This means that they're leaving their PC on, and maybe they even have other
gizmos or gadgets besides a PC that are Internet-aware.

To remain doggedly fixated on the concept that an end-user is going to
download more than they upload ...  well, sure, it's nice, and makes
certain things easier, but it doesn't necessarily meet up with some of
the realities.  Verizon recently began offering a 20M symmetrical FiOS
product.  There must be some people who feel differently.

So, do the modulations of your access technologies dictate what your
users are going to want to do with their Internet in the future, or is it
possible that you'll have to change things to accomodate different
realities?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: FW: ISPs slowing P2P traffic...

2008-01-14 Thread Joe Greco

 From my experience, the Internet IP Transit Bandwidth costs ISP's a lot
 more than the margins made on Broadband lines.
 
 So users who rarely use their connection are more profitable to the ISP.

The fat man isn't a welcome sight to the owner of the AYCE buffet.

What exactly does this imply, though, from a networking point of view?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: ISPs slowing P2P traffic...

2008-01-13 Thread Joe Greco
 being limited to 256kbps up, unless I am on 
the business service, where it'd be 768kbps up.  This seems quite fair and
equitable.  It's clearly and unambiguously disclosed, it's still 
guaranteeing delivery of the minimum class of service being purchased, etc.

If such an ISP were unable to meet the commitment that it's made to
customers, then there's a problem - and it isn't the customer's problem,
it's the ISP's.  This ISP has said We guarantee our speeds will be as
good or better than we specify - which is fairly clear.

You might want to check to see if you've made any guarantees about the
level of service that you'll provide to your customers.  If you've made
promises, then you're simply in the unenviable position of needing to
make good on those.  Operating an IP network with a basic SLA like this
can be a bit of a challenge.  You have to be prepared to actually make
good on it.  If you are unable to provide the service, then either there
is a failure at the network design level or at the business plan level.

One solution is to stop accepting new customers where a tower is already
operating at a level which is effectively rendering it full.
 
... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: ISPs slowing P2P traffic...

2008-01-13 Thread Joe Greco

 Joe Greco wrote,
  There are lots of things that could heavily stress your upload channel.
  Things I've seen would include:
 
  1) Sending a bunch of full-size pictures to all your friends and family,
 which might not seem too bad until it's a gig worth of 8-megapixel 
 photos and 30 recipients, and you send to each recipient separately,
  2) Having your corporate laptop get backed up to the company's backup
 server,
  3) Many general-purpose VPN tasks (file copying, etc),
  4) Online gaming (capable of creating a vast PPS load, along with fairly
      steady but low volume traffic),
 
  etc.  P2P is only one example of things that could be stressful.
   
 These things all happen - but they simply don't happen 24 hours a day, 7 
 days a week.   A P2P client often does.

It may.  Some of those other things will, too.  I picked 1) and 2) as
examples where things could actually get busy for long stretches of
time.

In this business, you have to realize that the average bandwidth use of
a residential Internet connection is going to grow with time, as new and
wonderful things are introduced.  In 1995, the average 14.4 modem speed
was perfectly fine for everyone's Internet needs.  Go try loading web
pages now on a 14.4 modem...  even web pages are bigger.

 snip for brevity
 
  The questions boil down to things like:
 
  1) Given that you unable to provide unlimited upstream bandwidth to your 
 end users, what amount of upstream bandwidth /can/ you afford to
 provide?
   
 Again - it depends.   I could tell everyone they can have 56k upload 
 continuous and there would be no problem from a network standpoint - but 
 it would suck to be a customer with that restriction. 

If that's the reality, though, why not be honest about it?

 It's a balance between providing good service to most customers while 
 leaving us options.

The question is a lot more complex than that.  Even assuming that you have
unlimited bandwidth available to you at your main POP, you are likely to
be using RF to get to those remote tower sites, which may mean that there 
are some specific limits within your network, which in turn implies other
things.

  What Amplex won't do...
 
  Provide high burst speed if  you insist on running peer-to-peer file 
  sharing
  on a regular basis.  Occasional use is not a problem.   Peer-to-peer
  networks generate large amounts of upload traffic.  This continuous traffic
  reduces the bandwidth available to other customers - and Amplex will rate
  limit your connection to the minimum rated speed if we feel there is a
  problem. 
  
 
  So, the way I would read this, as a customer, is that my P2P traffic would
  most likely eventually wind up being limited to 256kbps up, unless I am on 
  the business service, where it'd be 768kbps up.  

 Depends on your catching our attention.  As a 'smart' consumer you might 
 choose to set the upload limit on your torrent client to 200k and the 
 odds are pretty high we would never notice you.

... today.  And since 200k is less than 256k, I would certainly expect
that to be true tomorrow, too.  However, it might not be, because your
network may not grow easily to accomodate more customers, and you may
perceive it as easier to go after the high bandwidth users, yes?

 For those who play nicely we don't restrict upload bandwidth but leave 
 it at the capacity of the equipment (somewhere between 768k and 1.5M).
 
 Yep - that's a rather subjective criteria.   Sorry.
 
  This seems quite fair and
  equitable.  It's clearly and unambiguously disclosed, it's still 
  guaranteeing delivery of the minimum class of service being purchased, etc.
 
  If such an ISP were unable to meet the commitment that it's made to
  customers, then there's a problem - and it isn't the customer's problem,
  it's the ISP's.  This ISP has said We guarantee our speeds will be as
  good or better than we specify - which is fairly clear.
 
 We try to do the right thing - but taking the high road costs us when 
 our competitors don't.   I would like to think that consumers are smart 
 enough to see the difference but I'm becoming more and more jaded as 
 time goes on

You've picked a business where many customers aren't technically
sophisticated.  That doesn't necessarily make it right to rip them
off - even if your competitors do.

  One solution is to stop accepting new customers where a tower is already
  operating at a level which is effectively rendering it full.
 
 Unfortunately full is an ambiguous definition.Is it when:
 
 a)  Number of Customers * 256k up = access point limit?
 b)  Number of Customers * 768k down = access point limit?
 c)  Peak upload traffic = access point limit?
 d)  Peak download traffic = access point limit?
 (e) Average ping times start to increase?
 
 History shows (a) and (b) occur well before the AP is particularly 
 loaded and would be wasteful of resources.

Certainly, but it's the only way to actually be able to guarantee the
minimum class of service being purchased.

Re: ISPs slowing P2P traffic...

2008-01-13 Thread Joe Greco

 It may.  Some of those other things will, too.  I picked 1) and 2) as
 examples where things could actually get busy for long stretches of
 time.
 
 The wireless ISP business is a bit of a special case in this regard, where 
 P2P traffic is especially nasty.
 
 If I have ten customers uploading to a Web site (some photo sharing site, or 
 Web-based email, say), each of whom is maxing out their connection, that's 
 not a problem.

That is not in evidence.  In fact, quite the opposite...  given the scenario
previously described (1.5M tower backhaul, 256kbps customer CIR), it would 
definitely be a problem.  The data doesn't become smaller simply because it
is Web traffic.
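To put numbers on it, a minimal back-of-the-envelope sketch (Python, using the
figures quoted above; the variable names are mine):

    # Assumed figures from the scenario above: 1.5 Mbps tower backhaul,
    # 256 kbps committed rate per customer, 10 customers uploading at once.
    backhaul_kbps = 1500
    cir_kbps = 256
    uploading_customers = 10

    demand_kbps = cir_kbps * uploading_customers
    print(f"aggregate demand: {demand_kbps} kbps vs backhaul: {backhaul_kbps} kbps")
    print("oversubscribed" if demand_kbps > backhaul_kbps else "fits")
    # -> aggregate demand: 2560 kbps vs backhaul: 1500 kbps -> oversubscribed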

 If I have one customer running Limewire or Kazaa or whatever P2P software all 
 the cool kids are running these days, even if he's rate-limited himself to 
 half his connection's maximum upload speed, that often IS a problem.

That is also not in evidence, as it is well within what the link should be
able to handle.

 It's not the bandwidth, it's the number of packets being sent out.

Well, PPS can be a problem.  Certainly it is possible to come up with
hardware that is unable to handle the packets per second, and wifi can
be a bit problematic in this department, since there's such a wide
variation in the quality of equipment, and even with the best, performance
in the PPS arena isn't generally what I'd consider stellar.  However, I'm
going to guess that there are online gaming and VoIP applications which are
just as stressful.  Anyone have a graph showing otherwise (preferably
packet size and PPS figures on a low speed DSL line, or something like
that?)
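For a rough sense of scale, a minimal sketch (my own assumed numbers, not
measurements) of the packet rates involved at different packet sizes:

    # Packets per second needed to fill a given uplink, by packet size:
    # MTU-sized TCP segments vs. small packets (VoIP/gaming-like).
    def pps(link_kbps, packet_bytes):
        return (link_kbps * 1000 / 8) / packet_bytes

    for size in (1500, 576, 64):
        print(f"{size:5d}-byte packets on a 256 kbps uplink: {pps(256, size):6.0f} pps")
    # 1500-byte ~ 21 pps, 576-byte ~ 56 pps, 64-byte ~ 500 pps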

 One customer, talking to twenty or fifty remote hosts at a time, can kill a 
 wireless access point in some instances. All those little tiny packets 

Um, I was under the impression that FastTrack was based on TCP...?  I'm not
a file-sharer, so I could be horribly wrong.  But if it is based on TCP,
then one would tend to assume that actual P2P data transfers would appear
to be very similar to any other HTTP (or more generally, TCP) traffic - and
for transmitted data, the packets would be large.  I was actually under the
impression that this was one of the reasons that the DPI vendors were
successful at selling the D in DPI.

 tie up the AP's radio time, and the other nine customers call and complain.

That would seem to be an implementation issue.  I don't hear WISP's crying
about gaming or VoIP traffic, so apparently those volumes of packets per
second are fine.  The much larger size of P2P data packets should mean that 
the rate of possible PPS would be lower, and the number of individual remote 
hosts should not be of particular significance, unless maybe you're trying 
to implement your WISP on consumer grade hardware.

I'm not sure I see the problem.

 One customer just downloading stuff, disabling all the upload features in 
 their P2P client of choice, often causes exactly the same problem, as the 
 kids tend to queue up 17 CDs worth of music then leave it running for a week. 
 The software tries its darnedest to find each of those hundreds of different 
 files, downloading little pieces of each of 'em from multiple servers. 

Yeah, but little pieces still works out to fairly sizeable chunks, when 
you look at it from the network point of view.  It isn't trying to download
a 600MB ISO with data packets that are only 64 bytes of content each.

 We go out of our way to explain to every customer that P2P software isn't 
 permitted on our network, and when we see it, we shut the customer off until 
 that software is removed. It's not ideal, but given the limitations of 
 wireless technology, it's a necessary compromise. I still have a job, so we 
 must have a few customers who are alright with this limitation on their 
 broadband service.
 
 There's more to bandwidth than just bandwidth.

If so, there's also Internet, service, and provider in ISP.

P2P is nasty because it represents traffic that wasn't planned for or
allowed for in many business models, and because it is easy to perceive
that traffic as unnecessary or illegitimate.

For now, you can get away with placing such a limit on your broadband
service, and you still have a job, but there may well come a day when
some new killer service pops up.  Imagine, for example, TiVo deploying
a new set of video service offerings that bumped them back up into being
THE device of the year (don't think TiVo?  Maybe Apple, then...  who
knows?)  Downloads interesting content for local storage.  Everyone's
buzzing about it.  The lucky 10% buy it.

Now the question that will come back to you is, why can't your network
deliver what's been promised?

The point here is that there are people promising things they can't be
certain of delivering.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.

Re: ISPs slowing P2P traffic...

2008-01-13 Thread Joe Greco

  P2P based CDN's are a current buzzword; 

P2P based CDN's might be a current buzzword, but are nothing more than
P2P technology in a different cloak.  No new news here.

 This should prove to be interesting.   The Video CDN model will be a 
 threat to far more operators than P2P has been to the music industry.
 
 Cable companies make significant revenue from video content (ok - that 
 was obvious).Since they are also IP Network operators they have a 
 vested interest in seeing that video CDN's  that bypass their primary 
 revenue stream fail.The ILEC's are building out fiber mostly so that 
 they can compete with the cable companies with a triple play solution.   
 I can't see them being particularly supportive of this either.  As a 
 wireless network operator I'm not terribly interested in helping 3rd 
 parties that cause issue on my network with upload traffic (rant away 
 about how were getting paid by the end user to carry this traffic...).

At the point where an IP network operator cannot comprehend (or, worse,
refuses to comprehend) that every bit received on the Internet must be
sourced from somewhere else, then I wish them the best of luck with the
legislated version of network neutrality that will almost certainly
eventually result from their shortsighted behaviour.

You do not get a free pass just because you're a wireless network
operator.  That you've chosen to model your network on something other
than a 1:1 ratio isn't anyone else's problem, and if it comes back to
haunt you, oh well.  It's nice that you can take advantage of the fact
that there are currently content-heavy and eyeball-heavy networks, but
to assume that it must stay that way is foolish.

It's always nice to maintain some particular model for your operations
that is beneficial to you.  It's clearly ideal to be able to rely on
overcommit in order to be able to provide the promises you've made to
customers, rather than relying on actual capacity.  However, this will
assume that there is no fundamental change in the way things work, which
is a bad assumption on the Internet.

This problem is NOTHING NEW, and in fact, shares some significant
parallels with the way Ma Bell used to bill out long distance vs local 
service, and then cried and whined about how they were being undercut
by competitive LD carriers.  They ... adapted.  Can you?  Will you?

And yes, I realize that this borders on unfair-to-the-(W)ISP, but if
you are incapable of considering and contemplating these sorts of
questions, then that's a bad thing.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


RE: ISPs slowing P2P traffic...

2008-01-09 Thread Joe St Sauver
#IPv6 based IPs instead of ports would also help by obfuscating 
#protocol and behavior. Even IP rotation through /64s (cough 1 IP per 
#half-connection anyone).

Some traffic also stands out simply because only interesting people 
exhibit the behavior in question. :-) That could be port hopping, or 
nailing up a constantly full encrypted connection that only talks 
to one other host. :-)

#My caffeine hasn't hit, so I can't think of anything else. Is this 
#something the market will address by itself?

I think so. At some point there's sufficient capacity everywhere, edge
and core, that (a) there's no pressing operational need to shape 
traffic, and (b) the shaping devices available for the high capacity
circuits are prohibitively expensive. That's part of the discussion
I offered in Capacity Planning and System and Network Security, a
talk I did for the April '07 Internet2 Member Meeting, see 
http://www.uoregon.edu/~joe/i2-cap-plan/internet2-capacity-planning.ppt
(or .pdf) at slides 44-45 or so. 

I'd also note that if end user hosts are comparatively clean and under 
control, traffic from a few outlier users is a lot easier to absorb than 
if you're infested with zombied boxes. In some cases, those bumps in the 
wire may not be targeting P2P traffic, but rather artifacts associated 
with botted hosts which are running excessively hot. 

Regards,

Joe St Sauver ([EMAIL PROTECTED])

Disclaimer: all opinions strictly my own. 


Re: ISPs slowing P2P traffic...

2008-01-09 Thread Joe Provo

On Wed, Jan 09, 2008 at 03:04:37PM -0500, Deepak Jain wrote:
[snip]
 However, my question is simply.. for ISPs promising broadband service. 
 Isn't it simpler to just announce a bandwidth quota/cap that your good 
 users won't hit and your bad ones will? 

Simple bandwidth is not the issue.  This is about traffic models using
statistical multiplexing making assumptions regarding humans at the helm,
and those models directing the capital investment of facilities and 
hardware.  You likely will see p2p throttling where you also see 
"residential customers must not host servers" policies.  Demand curves 
for p2p usage do not match any stat-mux models where broadband is sold
for less than it costs to maintain and upgrade the physical plant.
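To make the stat-mux point concrete, a minimal sketch (assumed figures, plain
binomial model; none of these numbers come from any real plant) of how the
congestion odds move when users go from bursty to always-on:

    # Statistical multiplexing: N subscribers share a link with room for only
    # K of them at full rate at once.  P(congestion) = P(more than K active),
    # each subscriber independently active with probability p.
    from math import comb

    def p_congestion(n, p, k):
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1, n + 1))

    n, k = 50, 10                      # 50 subscribers, room for 10 at full rate
    print(p_congestion(n, 0.05, k))    # bursty web-style users -> essentially zero
    print(p_congestion(n, 0.50, k))    # always-on p2p-style use -> essentially certain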

 Especially when there is no real reason this P2P traffic can't 
 masquerade as something really interesting... like Email or Web (https, 
 hello!) or SSH or gamer traffic. I personally expect a day when there is 
 a torrent encryption module that converts everything to look like a 
 plain-text email conversation or IRC or whatever.

The problem with p2p traffic is how it behaves, which will not be
hidden by ports or encryption.  If the *behavior* of the protocol[s]
change such that they no longer look like digital fountains and more
like email conversation or IRC or whatever, then their impact is
mitigated and they would not *be* a problem to be shaped/throttled/
managed.  

[snip]
 I remember Bill Norton's peering forum regarding P2P traffic and how the 
 majority of it is between cable and other broadband providers... 
 Operationally, why not just lash a few additional 10GE cross-connects 
 and let these *paying customers* communicate as they will?

Peering happens between broadband companies all the time.  That does
not resolve regional, city, or neighborhood congestion in one network.

[snip]
 Encouraging encryption of more protocols is an interesting way to 
 discourage this kind of shaping.

This does nothing but reduce the pool of remote-p2p-nodes to those 
running encryption-capable clients.  This is why people think they 
get away with using encryption, as they are no longer the tallest nail
to be hammered down, and often enough fit within their buckets.

[snip]
 My caffeine hasn't hit, so I can't think of anything else. Is this 
 something the market will address by itself?

Likely.  Some networks abandon standards and will tie customers to 
gear that looks more like dedicated pipes (narad, etc). Some will 
have the 800-lb-gorilla-tude to accelerate vendors' deployment of
docsis3.0.  Folks with the appropriate war chests can roll out (and have 
rolled out) PON and be somewhat generous... of course, the dedicated
and mandatory ONT & CPE looks a lot like voice pre-Carterfone...

Joe, not promoting/supporting any position, just trying to provide
facts about running last-mile networks.

-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE


Re: ISPs slowing P2P traffic...

2008-01-09 Thread Joe St Sauver

Jared mentioned:

#   We'll see what happens, and how the 160Mb/s DOCSIS 3.0 connections
#and infrastructure to support it pan out on the comcast side..

There may be comparatively little difference from what you see today, 
largely because most hosts still have stacks which are poorly tuned by 
default, or host throughput is limited by some other device in the path 
(such as a broadband router) which acts by default as the constricting 
link in the chain, or the application itself isn't written to take full
advantage of higher speed wide area connections. 
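A minimal sketch of that limit (assumed window and RTT figures; a single TCP
stream is bounded by window/RTT no matter how fat the pipe is):

    # Maximum throughput of one TCP stream given its receive window and RTT.
    def max_throughput_mbps(window_bytes, rtt_ms):
        return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

    print(max_throughput_mbps(65535, 70))      # default-ish 64 KB window, 70 ms RTT -> ~7.5 Mbps
    print(max_throughput_mbps(4 * 2**20, 70))  # tuned 4 MB window, same path -> ~480 Mbps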

Depending on your point of view, all those poorly tuned hosts are either an 
incredible PITA, or the only thing that's keeping the boat above water. 

If you believe the latter point of view, tuning guides such as
http://www.psc.edu/networking/projects/tcptune/ and diagnostic tools
like NDT (e.g., see http://miranda.ctd.anl.gov:7123/ ) are incredibly
seditious resources. :-)

Regards,

Joe St Sauver ([EMAIL PROTECTED])

Disclaimer: all opinions strictly my own.


Re: Using x.x.x.0 and x.x.x.255 host addresses in supernets.

2008-01-08 Thread Joe Provo

On Tue, Jan 08, 2008 at 05:45:36AM -0800, Joshman at joshman dot com wrote:
 Hello all,
   As a general rule, is it best practice to assign x.x.x.0 and
 x.x.x.255 as host addresses on /23 and larger?  

Yes.  Efficient address utilization is a Good Thing.

 I realize that technically they are valid addresses, but does anyone 
 assign a node or server which is a member of a /22 with a x.x.x.0 
 and x.x.x.255?

Great for router interfaces, loops, etc where you don't care that 
broken or archaic systems cannot reach them, and where the humans
interacting with them should have no issues.  
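For illustration, a minimal sketch (documentation prefix, not anyone's real
space) showing that those addresses are ordinary hosts inside a /23:

    # In a /23 (or larger), x.x.x.255 and x.x.y.0 are ordinary host addresses,
    # not the network or broadcast address of the block.
    import ipaddress

    net = ipaddress.ip_network("192.0.2.0/23")
    hosts = set(net.hosts())
    for a in ("192.0.2.255", "192.0.3.0"):
        print(a, "is a usable host address:", ipaddress.ip_address(a) in hosts)
    print("network:", net.network_address, "broadcast:", net.broadcast_address)
    # network: 192.0.2.0  broadcast: 192.0.3.255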

Cheers,

Joe

-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE


Re: Using x.x.x.0 and x.x.x.255 host addresses in supernets.

2008-01-08 Thread Joe Provo

On Tue, Jan 08, 2008 at 09:50:13AM -0500, Jon Lewis wrote:
 On Tue, 8 Jan 2008, Joe Provo wrote:
 
 Yes.  Efficient address utilization is a Good Thing.
 
 I realize that technically they are valid addresses, but does anyone
 assign a node or server which is a member of a /22 with a x.x.x.0
 and x.x.x.255?
 
 Great for router interfaces, loops, etc where you don't care that
 broken or archaic systems cannot reach them, and where the humans
 interacting with them should have no issues.
 
 Until you assign a .255/32 to a router loopback interface and then find 
 that you can't get to it because some silly router between you and it 
 thinks '.255? that's a broadcast address.'

See the qualifier "where you don't care that broken or archaic systems 
cannot reach them."  If you have brokenness on your internal systems 
then yes, you'd be shooting yourself in the foot.


-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE


Re: Assigning IPv6 /48's to CPE's?

2007-12-31 Thread Joe Greco

 I see there is a long thread on IPv6 address assignment going, and I
 apologize that I did not read all of it, but I still have some unanswered
 questions.

The answers to some of this are buried within it.

 I believe someone posted the ARIN recommendation that carriers assign out
 /64's and /56's, and in a few limited cases, /48.
 
 I can understand corporations getting more than a /64 for their needs, but
 certainly this does not mean residential ISP subscribers, right?

That answer, along with detailed information, is within that thread.  In an
ideal world, yes, it does mean resi subscribers.  Some of us would like to
see that very much, but are simultaneously expecting that something less
optimal will happen.

 I can understand the need for /64's because the next 64 bits are for the
 client address, but there seems to be this idea that one and only one node
 may use a whole /64. 

Certainly, if the node is the only one on the subnet.

 So in the case of Joe, the residential DSL subscriber
 who has 50,000 PCs, TiVo's,  microwaves, and nanobots that all need unique
 routable IP addresses, what is to stop him from assigning them unique client
 ID's (last 64 bits) under the same /64? We can let Joe put in some switches,
 and if that isn't enough he should consider upgrading from his $35/month DSL
 or $10/month dial up anyway.

I don't think it was ever in doubt that people could stick lots of devices
on a single /64.  The question is more one of under what circumstances
would a site want more than a /64.  

One is when you're crossing boundaries between network protocols (Ethernet
to HomeControlNet or whatever).  Repeat for Bluetooth or any other
alternative technology.

Many would prefer to see firewalling handled at the L3 boundary between
networks, which is an indication for multiple /64's.  While I certainly
agree that this is attractive, and ought to be possible in IPv6, the fact
is that it still represents a disruption of the broadcast domain, and
requires that all firewall-candidate traffic be routed.  This could have 
an impact to a site that deems a sudden firewall policy change necessary,
such as "my PC #3 just got infected, stop it from talking to the local 
network but allow it to download virus updates."  I believe that there
could (and should) be a natural evolution towards deconstructing the 
requirements at which layer these sorts of policies are implemented.  I 
would very much like to see a layer 2/3 switch that is capable of 
implementing a firewall policy /for a port/, and having the onboard 
software be sufficiently intelligent that an end-user can deal with his 
firewalling switch as an abstract item, without having to understand 
the underlying network topology.  This could even be generalized into a
useful general purpose networking device, that could provide services 
such as VPN's.

However, I am certain that there will be situations in which DHCP PD does
not work, and so I expect that most protocol bridges will in fact be able
to support bridging from an already populated IPv6 /64.

 My next question is that there is this idea that there will be no NAT in the
 IPv6 world. Some companies have old IPv4 only software, some companies have
 branch offices using the same software on different networks, and some like
 the added security NAT provides.

What added security would that be, exactly?  Introducing a proper stateful
firewall would give you about the same security, without the penalties of
having to write proxyware for every new protocol that comes along.  There
/are/ some differences; a NAT gateway is less likely to fail to firewall in
a catastrophic manner, for example: if it isn't working, network
connectivity vaporizes.  A stateful firewall might go away and leave you
with your pants down.  However, that doesn't really make NAT a better
technology...

{P,N}AT is a technology that was designed to allow more than one computer 
to share {ports, addresses}.  This is fundamentally unnecessary in IPv6
because there are plenty of addresses available, and providers are expected
to hand them out like candy.

I would much prefer to see a different security model evolve, where even
residential class equipment gains the ability to do smart firewalling.
Some of that discussion is in the thread you skipped.

 There are also serious privacy concerns with having a MAC address within an
 IP address. Aside from opening the doors to websites to share information on
 specific users, lack of NAT also means the information they have is more
 detailed in households where separate residents use different computers. I
 can become an IPv4 stranger to websites once a week by deleting cookies,
 IPv6 means they can profile exactly what I do over periods of years from
 work, home, starbucks, it doesn't matter. I don't see NAT going away any
 time soon.

This seems to be an urban myth.  Your current average broadband customer
is leased an IP address that may stay active for years at a time.  To
imagine

Re: v6 subnet size for DSL leased line customers

2007-12-26 Thread Joe Greco

 If the ops community doesn't provide enough addresses and a way to use
 them then the vendors will do the same thing they did in v4. It's not
 clear to me where their needs don't coincide in this case.
 
 there are three legs to the tripod
 
   network operator
   user
   equipment manufacturer
 
 They have (or should have) a mutual interest in:
 
   Transparent and automatic configuration of devices.
   The assignment of globally routable addresses to internet
   connected devices
   the user having some control over what crosses the boundry
   between their network and the operators.

Yes, well, that sounds fine, but I think that we've already hashed over
at least some of the pressures on businesses in this thread.  I've
tried to focus on what's in the Subject:, and have mostly ignored other
problems, which would include things such as cellular service, where I
suspect that the service model is such that they'll want to find a way
to allocate users a /128 ...

There is, further, an effect which leads to equipment mfr being split
into network equipment mfr and CPE equipment mfr, because the CPE guys 
will be trying to build things that'll work for the end user, working
around any brokenness, etc.  The problem space is essentially polarized, 
between network operators who have their own interests, and users who
have theirs.

So, as /engineers/ for the network operators, the question is, what can
we do to encourage/coerce/force the businesses on our side of the 
equation to allocate larger rather than smaller numbers of bits, or find
other solutions?

What could we do to encourage, or better yet, mandate, that an ISP end-
user connection should be allocated a minimum of /56, even if it happens 
to be a cellular service?  ( :-) )

What do we do about corporate environments, or any other environment where
there may be pressure to control topology to avoid DHCP PD to devices
added to the network on an ad-hoc basis?

Is it actually an absolutely unquestionable state of affairs that the
smallest autoconfigurable subnet is a /64?  Because if not, there are
options there ...  but of course, that leads down a road where an ISP may
not want to allocate as much as a /64 ...

What parts of this can we tackle through RIR policy?  RFC requirements?
Best practice?  Customer education?  ( :-) )  Other ideas?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-26 Thread Joe Maimon

Tony Li wrote:

On Dec 26, 2007, at 8:26 AM, Leo Bicknell wrote:



It's unlikely that it will matter.  In practice, ICMP router  discovery 
died a long time ago, thanks to neglect.  Host vendors  didn't adopt it, 
and it languished.  The problem eventually got  solved with HSRP and its 
clone, VRRP.


It's been available from Microsoft since Windows 2000, and according to 
documentation, on by default. I am not quite sure this can be blamed on 
vendors as opposed to users.


http://www.microsoft.com/technet/prodtechnol/windows2000serv/reskit/regentry/33574.mspx?mfr=true


Re: v6 subnet size for DSL leased line customers

2007-12-25 Thread Joe Greco
 or not it
actually winds up as cheaper to support a single address space size on
backroom systems will all work to shape what actually happens.

  So, the point is, as engineers, let's not be completely naive.  Yes,  
  we
  /want/ end-users to receive a /56, maybe even a /48, but as an  
  engineer,
  I'm going to assume something more pessimistic.  If I'm a device  
  designer,
  I can safely do that, because if I don't assume that a PD is going  
  to be
  available and plan accordingly, then my device is going to work in  
  both
  cases, while the device someone who has relied on PD is going to break
  when it isn't available.
 
 Assuming that PD is available is naive.  However, assuming it is not is
 equally naive. 

No, it's not equally naive.  The bridging scenario is likely to work in
all cases, therefore, assuming bridging as a least common denominator is
actually pretty smart - even though I would prefer to see a full
implementation that works in all cases.  Assume the worst, hope for the
best.  If that's naive, well, then it's all a lost cause.  You can call
it coldly cynical all you'd like, though.  ;-)

 The device must be able to function in both  
 circumstances
 if possible, or, should handle the case where it can't function in a  
 graceful
 and informative manner.

That much is certain.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-24 Thread Joe Greco

 It's likely that the device may choose to nat when they cannot obtain a
 prefix... pd might be desirable but if you can't then the alternative is
 easy.

I thought we were all trying to discourage NAT in IPv6.  Clearly, NAT
solves the problem ... while introducing 1000 new ones.  :-/

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-24 Thread Joe Greco

 Joe Greco wrote:
 [..]
  Okay, here, let me make it reaaally simple.
 
 Yes, indeed let's make it reaaally simple for you:
 
  If your ISP has been delegated a /48 (admittedly unlikely, but possible)
  for $1,250/year, and they assign you a /56, their cost to provide that
  space is ~$5.  They can have 256 such customers.
 
 Fortunately ISP's get their space per /32 and up based on how much they
 can justify, which is the amount of customers they have.
 
 As such for a /32 a single /48 is only (x / 65k) = like 20 cents or so?
 And if you are running your business properly you will have more clients
 and the price will only go down and down and down.

 Really (or should I write reaaally to add force?) if you
 as an ISP are unable to pay the RIR fees for that little bit of address
 space, then you definitely have bigger problems as you won't be able to
 pay the other bills either.

There's a difference between "unable to pay the RIR fees" and "not deeming
any business value in spending the money."  Engineers typically feel that
businesses should be ready and willing to spend more money for reasons that
the average business person won't care about.

Pretend I'm your CFO.  Explain the value proposition to me.  Here's the
(slightly abbreviated) conversation.

Well, you say we need to spend more money every year on address space.
Right now we're paying $2,250/year for our /32, and we're able to serve
65 thousand customers.  You want us to start paying $4,500/year, but Bob
tells me that we're wasting a lot of our current space, and if we were 
to begin allocating less space to customers [aside: /56 vs /48], that we
could actually serve sixteen million users for the same cash.  Is there
a compelling reason that we didn't do that from the outset?
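The arithmetic behind those counts, as a minimal sketch:

    # How many end sites fit in a /32, by assignment size.
    for site_prefix in (48, 56, 64):
        print(f"/{site_prefix} per customer: {2 ** (site_prefix - 32):,} customers per /32")
    # /48: 65,536   /56: 16,777,216   /64: 4,294,967,296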

This discussion is getting really silly; the fact of the matter is that
this /is/ going to happen.  To pretend that it isn't is simply naive.

 How high are your transit & equipment bills again, and how exactly are you
 charging your customers? Ah, not by bandwidth usage, very logical!

Perhaps end-user ISP's don't charge by bandwidth usage...

 As an enduser I would love to pay the little fee for IP space that the
 LIR (ISP in ARIN land) pays to the RIR and then simply pay for the
 bandwidth that I am using + a little margin so that they ISP also earns
 some bucks and can do writeoffs on equipment and personnel.

Sure, but that's mostly fantasyland.  The average ISP is going to want to
monetize the variables.  You want more bandwidth, you pay more.  You want
more IP's, you pay more.  This is one of the reasons some of us are 
concerned about how IPv6 will /actually/ be deployed ...  quite frankly, 
I would bet that it's a whole lot more likely that an end-user gets 
assigned a /64 than a /48 as the basic class of service, with a charge for 
additional bits.  If we are lucky, we might be able to s/64/56/.

I mean, yeah, it'd be great if we could mandate /48 ...  but I just can't
see it as likely to happen.

 For some magic reasons though(*), it seems to be completely ludicrous to
 do it this way, even though it would make the bill very clear and it
 would charge the right amount for the right things and not some
 arbitrary number for some other arbitrary things and then later
 complaining that people use too much bandwidth because they use
 bittorrent and other such things. For the cable folks: make upstream
 bandwidth more pricey per class than downstream, problem of
 heavy-uploaders solved as they get charged.

Sure, but that's how the real world works.  The input from engineering
folks is only one small variable in the overall scheme of things.  It is
a /mistake/ to assume that cost-recovery must be done on a direct basis.
There's a huge amount of business value in being able to say "unlimited(*)
Internet service for $30/mo!"  The current offerings in many places should
outline this clearly.

So, the point is, as engineers, let's not be completely naive.  Yes, we
/want/ end-users to receive a /56, maybe even a /48, but as an engineer,
I'm going to assume something more pessimistic.  If I'm a device designer,
I can safely do that, because if I don't assume that a PD is going to be 
available and plan accordingly, then my device is going to work in both 
cases, while the device someone who has relied on PD is going to break 
when it isn't available.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-24 Thread Joe Maimon

Scott Weeks wrote:

Disclaimer:  I'm still very much an IPv6 wussie...  :-)

-
But even in 2000 the policy was and still is:
 /128 for really a single device
 /64  if you know for sure that only one single subnet will
  ever be allocated.
 /48  for every other case (smart bet, should be used per default)
--

I work on a network with 100K+ DSL folks and 200+ leased line customers, plus 
some other stuff.  The leased line customers are increasing dramatically.  I 
should plan for a /64 for every DSL customer and a /48 for every leased line 
customer I expect over the next 5-7 years?

scott


Same disclaimer as above. But perhaps that's a benefit, allowing the 
landscape forest view instead of the tree one.


Seems like everything good and desirable in ipv6 was backported to ipv4, 
including router advertisements (which nobody uses, since DHCP [yes dhcp 
can/could be made redundant] is far far preferred, even by SOHO vendors).


All except the 4 x bitspace.

If it hasn't been backported after all this time, it's likely either 
undoable or unusable.


Since it's quite likely that a minimum 50 year lifetime for ipv4 looks to 
be in the cards, judging by bitspace, ipv6 should be engineered for 200 
(or 50 to the 4th which makes 125000).


One would suppose that the way to do this is to do as much as is 
necessary to comfortably move the world onto it AND NO MORE. We are not 
prophets. We don't even know how many prefixes the average router will be 
able to handle in 10 years (considering that a maxed out pc-as-a-router 
can handle millions more than the nice expensive 7600), let alone 50.


So the first thing we do is:

Make it as big for ISP's as ipv4 was for end users, by assigning /32 
prefixes, minus all the special purpose carvings.


To make things simple, a 4 byte AS should come with a /32.

Brilliant. We have forward ported ipv4 scalability onto ipv6.

For what? So that end users can have nanotech networks? It goes without 
saying that I will want my nanotech network(s) firewalled (and natted 
for good measure).


Autoconfiguration doesn't require 64 bits. We have autoconfig for ipv4, 
it appears to only need 16.


As stated, we don't want people to be taking their /64's with them as 
they change ISP's, so imbuing all this uniqueness and matching it with 
their global id's and telephone numbers is just asking for trouble.


Unless the whole world becomes an ISP. Presto, address shortage unless 
massive depopulation occurs over the next couple hundred years.


We should not pretend to be building an allocation structure that will 
not simultaneously satisfy uniqueness, portability and scalability for 
the next hundred years or so when we clearly are not.


What's the current state with PI in ipv6? How often will it change?

We could have reserved 90% of the first 32 bits, used the next 32 bits to 
assign each ISP 64 bits of space, and allowed the ISP's to assign an 
internet's worth of customers their own internet each.


Tiered routing? Geo-location routing? All easily made available with 
another bit or two from the first /32.


Oh and the whole protocol is still useless, since proper connectivity 
to the ipv4 network without an ipv4 stack seems to be somewhat non 
standard. Obviously, nobody rolling out ipv6 due to address shortage is 
going to tolerate that, and interop strategies will be used, standard or 
not.


Expect the interop strategy to be the one with the lowest network 
resistance. That's NAT.


IPv6 is a textbook second system syndrome. We could have all been on it 
already without the dozens of super-freighters attached to the 128bit 
tugboat.


Joe


Re: v6 subnet size for DSL leased line customers

2007-12-23 Thread Joe Greco

 There is a huge detent at /48, but there's a certain amount of guidance
 that can only be derived from operational experience. It's not clear to
 me why /56 would be unacceptable, particularly if you're delegating them
 to a device that already has a /64. Are one's customers attached via
 point-to-point links, or do they sit on shared broadcast domain where
 their cpe is receiving a /128 and requesting pd from the outset?
 
 When someone plugs an apple airport into a segment of the corporate lan
 should it be be able to request pd under those circumstances as well?
 how is that case different than plugging it in on a residential connection?
 
 These are issues providers can and should grapple with. 

More likely, at least some of them are fairly naive questions.

For example, /experience/ tells us that corporate LAN policies are often
implemented without regard to what we, as Internet engineers, would
prefer, so I can guarantee you with a 100% certainty that there will be
at least some networks, and more than likely many networks, where you
will not be able to simply request a prefix delegation and have that work
the way you'd like.  There will always be some ISP who has delegated, or
some end site who has received, a far too close to being just large
enough allocation, and so even if we assume that every router vendor
and IPv6 implementation from here to eternity has no knobs to disable
prefix delegation, simple prefix exhaustion within an allocation will be 
a problem.  All the screams of "but they should have been allocated more"
will do nothing to change this.

Further, if we consider, for a moment, a world where prefix delegation is
the only method of adding something like an Apple Airport to an existing
network, this is potentially encouraging the burning of /64's for the
addition of a network with perhaps a single client.  That's perfectly fine,
/as long as/ networks are allocated sufficient resources.  This merely
means that from a fairly pessimistic viewpoint, IPv6 is actually a 64-bit
address space for purposes of determining how much address space is
required.

So, from the point of view of someone manufacturing devices to attach to
IPv6 networks, I have some options.

I can:

1) Assume that DHCP PD is going to work, and that the end user will have
   a prefix to delegate, which might be valid or it might not.  This leaves
   me in the position of having to figure out a backup strategy, because I
   do not want users returning my device to Best Buy because it don't
   work.  The backup strategy is bridging.

2) Assume that DHCP PD is not going to work, and make bridging the default
   strategy.  DHCP PD can optionally be a configurable thing, or autodetect,
   or whatever, but it will not be mandatory.

I am being facetious here, of course, since only one of those is really
viable in the market.  Anyone who thinks otherwise is welcome to explain to
me what's going to happen in the case where there are no P's to D.
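For illustration only, a minimal sketch of that device logic (Python; the
names are mine, not any real DHCPv6 API):

    # Hypothetical CPE bring-up logic: prefer DHCPv6 prefix delegation,
    # fall back to bridging when no prefix is offered, so the box works on
    # both kinds of networks.
    def bring_up_lan(delegated_prefix):
        """delegated_prefix is the /56 or /48 learned via DHCPv6 PD, or None."""
        if delegated_prefix is not None:
            return ("routed", delegated_prefix)   # act as a router, one /64 per LAN
        return ("bridged", None)                  # hosts autoconfigure in the upstream /64

    print(bring_up_lan("2001:db8:1234::/56"))
    print(bring_up_lan(None))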

I will leave the difference between corporate and residential as an exercise
to the reader; suffice it to say that the answers are rather obvious in the
same manner.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-23 Thread Joe Greco

 Once upon a time, Florian Weimer [EMAIL PROTECTED] said:
   Right now, we might say wow, 256 subnets for a single end-user... 
   hogwash! and in years to come, wow, only 256 subnets... what were we 
   thinking!?
  
   Well, what's the likelihood of the only 256 subnets problem?
  
  There's a tendency to move away from (simulated) shared media networks.
  One host per subnet might become the norm.
 
 So each host will end up with a /64?

That's a risk.  It is more like each host might end up with a /64.

Now, the thing here is, there's nothing wrong with one host per subnet.
There's just something wrong with blowing a /64 per subnet in an
environment where you have one host per subnet, and a limited amount of
bits above /64 (you essentially have /unlimited/ addresses within the 
/64, but an ISP may be paying for space, etc).

Now, understand, I /like/ the idea of /64 networks in general, but I do
have concerns about where the principle breaks down.  If we're agreed to
contemplate IPv6 as being a 64-bit address space, and then allocating 
space on that basis, I would suggest that some significant similarities 
to IPv4 appear.  In particular, a NAT gateway for IPv4 translates fairly
well into a subnet-on-a-/64 in IPv6.

That is interesting, but it may not actually reduce the confusion as to
how to proceed.

 How exactly are end-users expected to manage this?  Having a subnet for
 the kitchen appliances and a subnet for the home theater, both of which
 can talk to the subnet for the home computer(s), but not to each other,
 will be far beyond the abilities of the average home user.

Well, this gets back to what I was saying before.

At a certain point, Joe Sixpack might become sophisticated enough to have
an electrician come in and run an ethernet cable from the jack on the
fridge to his home router.  He might also be sophisticated enough to pay
$ElectronicsStore installation dep't to run an ethernet cable from the
jack on the home theater equipment to the home router.  I believe that
this may in fact have come to pass ...

Now the question is, what should happen next.

The L3 option is that the home router presents a separate /64 on each
port, and offers some firewalling capabilities.  I hinted before that I
might not be thrilled with this, due to ISP commonly controlling CPE, but
that can be addressed by making the router separate.

There's a trivial L2 option as well.  You can simply devise an L2 switch
that implements filtering policies.  Despite all the cries of "that's
not how we do it in v4!" and "we can't change the paradigm," the reality
is that this /could/ be perfectly fine.  As a matter of fact, for Joe
Sixpack, it almost certainly /is/ fine.

Joe Sixpack's policy is going to read just like what you wrote above:
"subnet for appliances, subnet for computer, subnet for theater,"
with the appliances and theater only being able to talk to the computer.
He's not going to care if it's an actual subnet or just a logical blob.
This is easy to do at L2 or L3.  We're more /used/ to doing it at L3,
but it's certainly workable at L2, and the interface to do so doesn't
necessarily even need to look any different, because Joe Sixpack does
not care about the underlying network topology and strategy.
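A minimal sketch of what that policy amounts to, independent of whether it is
enforced at L2 or L3 (the segment names are mine, purely illustrative):

    # The household policy as an allow-list of segment pairs; whether the
    # segments are ports/VLANs (L2) or /64s (L3) doesn't change its shape.
    ALLOWED = {("appliances", "computer"), ("theater", "computer")}

    def allowed(src, dst):
        return src == dst or (src, dst) in ALLOWED or (dst, src) in ALLOWED

    print(allowed("appliances", "computer"))   # True
    print(allowed("appliances", "theater"))    # False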

I would absolutely like to see DHCP PD be usable for environments where
multiple prefixes are available and allowed, but I believe we're going
to also be needing to look at bridging.

There's /going/ to be some crummy ISP somewhere that only allocates end
users a /64, or there's /going/ to be a business with a network that will
refuse DHCP PD, and as a result there /will/ be a market for devices that
have the ability to cope.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-23 Thread Joe Greco

 If operational simplicity of fixed length node addressing is a
 technical reason, then I think it is a compelling one. If you've ever
 done any reasonable amount of work with Novell's IPX (or other fixed
 length node addressing layer 3 protocols (mainly all of them except
 IPv4!)) you'll know what I mean.
 
 I think Ethernet is also another example of the benefits of
 spending/wasting address space on operational convenience - who needs
 46/47 bits for unicast addressing on a single layer 2 network!? If I
 recall correctly from bits and pieces I've read about early Ethernet,
 the very first versions of Ethernet only had 16 bit node addressing.
 They then decided to spend/waste bits on addressing to get
 operational convenience - plug and play layer 2 networking.

The difference is that it doesn't cost anything.  There are no RIR fees,
there is no justification.  You don't pay for, or have to justify, your 
Ethernet MAC addresses.

With IPv6, there are certain pressures being placed on ISP's not to be
completely wasteful.

This will compel ISP's to at least consider the issues, and it will most
likely force users to buy into technologies that allow them to do what they
want.  And inside a /64, you have sufficient space that there's probably
nothing you can't do.  :-)

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-23 Thread Joe Greco

   I think Ethernet is also another example of the benefits of
   spending/wasting address space on operational convenience - who needs
   46/47 bits for unicast addressing on a single layer 2 network!? If I
   recall correctly from bits and pieces I've read about early Ethernet,
   the very first versions of Ethernet only had 16 bit node addressing.
   They then decided to spend/waste bits on addressing to get
   operational convenience - plug and play layer 2 networking.
  
  The difference is that it doesn't cost anything.  There are no RIR fees,
  there is no justification.  You don't pay for, or have to justify, your 
  Ethernet MAC addresses.
  
  With IPv6, there are certain pressures being placed on ISP's not to be
  completely wasteful.
 
 I don't think there is that difference at all. MAC address allocations
 are paid for by the Ethernet chipset/card vendor, and I'm pretty sure
 they have to justify their usage before they're allowed to buy another
 block. I understand they're US$1250 an OUI, so something must have
 happened to prevent somebody buying them all up to hoard them, creating
 artificial scarcity, and then charging a market sensitive price for
 them, rather than the flat rate they cost now. That's not really any
 different to an ISP paying RIR fees, and then indirectly passing those
 costs onto their customers.

MAC address allocations are paid for by the Ethernet chipset/card vendor.

They're not paid for by an ISP, or by any other Ethernet end-user, except
as a pass-through, and therefore it's considered a fixed cost.  There are
no RIR fees, and there is no justification.  You buy a gizmo with this
RJ45 and you get a unique MAC.  This is simple and straightforward.  If
you buy one device, you get one MAC.  If you buy a hundred devices, you
get one hundred MAC's.  Not 101, not 99.  This wouldn't seem to map well
at all onto the IPv6 situation we're discussing.

With an IPv6 prefix, it is all about the prefix size.  Since a larger 
allocation may cost an ISP more than a smaller allocation, an ISP may 
decide that they need to charge a customer who is allocated a /48 more 
than a customer who is allocated a /64.

I don't pay anyone anything for the use of the MAC address I got on this
free ethernet card someone gave me, yet it is clearly and unambiguously
mine (and only mine) to use.  Does that clarify things a bit?

If you are proposing that RIR's cease the practice of charging different
amounts for different allocation sizes, please feel free to shepherd that
through the approvals process, and then I will certainly agree that there
is no longer a meaningful cost differential for the purposes of this
discussion.  Otherwise, let's not pretend that they're the same thing, 
since they're clearly not.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-23 Thread Joe Greco

  MAC address allocations are paid for by the Ethernet chipset/card vendor.
  
  They're not paid for by an ISP, or by any other Ethernet end-user, except
  as a pass-through, and therefore it's considered a fixed cost.  There are
  no RIR fees, and there is no justification.  You buy a gizmo with this
  RJ45 and you get a unique MAC.  This is simple and straightforward.  If
  you buy one device, you get one MAC.  If you buy a hundred devices, you
  get one hundred MAC's.  Not 101, not 99.  This wouldn't seem to map well
  at all onto the IPv6 situation we're discussing.
 
 How many ISP customers pay RIR fees? Near enough to none, if not none.

Who said anything about ISP customers paying RIR fees?  Although they
certainly do, indirectly.

 I never have when I've been an ISP customer.

(Must be one of those legacy ISP's?)

 Why are you pretending they do? 

I don't recall bringing them into the discussion, BUT...

 I think you're taking an end-user perspective when discussing
 ethernet but an RIR fee paying ISP position when discussing IPv6 subnet
 allocations. That's not a valid argument, because you've changed your
 viewpoint on the situation to suit your position.

Oddly enough, I'm one of those rare people who've worked with both ISP's
and OEM's that have been assigned MAC's.  You can think as you wish, and
you're wrong. 

 Anyway, the point I was purely making was that if you can afford to
 spend the bits, because you have them (as you do in Ethernet by design,
 as you do in IPv6 by design, but as you *don't* in IPv4 by design), you
 can spend them on operational convenience for both the RIR paying
 entity *and* the end-user/customer. Unnecessary complexity is
 *unnecessary*, and your customers won't like paying for it if they
 discover you've chosen to create it either on purpose or through
 naivety.

Okay, here, let me make it reaaally simple.

If I am going out and buying an Ethernet card today, the mfr will pay $.NN 
for my MAC address, a cost that is built into the retail cost of the card.
It will never be more or less than $.NN, because the number of MAC
addresses assigned to my card is 1.  Always 1.  Always $.NN.

If I am going out and buying IPv6 service today, the ISP will pay a
variable amount for my address space.  The exact amount is a function of
their own delegation size (you can meander on over to ARIN yourself) and
the size they've delegated to you; and so, FOR PURPOSES OF ILLUSTRATION,
consider this.

If your ISP has been delegated a /48 (admittedly unlikely, but possible)
for $1,250/year, and they assign you a /56, their cost to provide that
space is ~$5.  They can have 256 such customers.

If your ISP has been delegated a /48 (admittedly unlikely, but possible)
for $1,250/year, and they assign you a /48, their cost to provide that
space is ~$1,250.  They can have 1 such customer.

If your ISP has been delegated a /41, for $1,250/year, and they assign
you a /48, their cost to provide that space is ~$10.  They can have 128
such customers.
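For what it's worth, those per-customer figures fall straight out of the fee
and the prefix lengths (a minimal sketch using the same assumed numbers):

    # Annual RIR fee divided across the number of customer assignments that
    # fit in the delegation.  Figures are the ones used in the examples above.
    def cost_per_customer(fee, delegated_prefix, assigned_prefix):
        customers = 2 ** (assigned_prefix - delegated_prefix)
        return customers, fee / customers

    print(cost_per_customer(1250, 48, 56))   # (256, ~$4.88)  -> "~$5"
    print(cost_per_customer(1250, 48, 48))   # (1, $1250.00)  -> one customer
    print(cost_per_customer(1250, 41, 48))   # (128, ~$9.77)  -> "~$10"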

There is a significant variation in pricing as the parameters are changed.
You do not just magically have free bits in IPv6 by design; the ISP is
paying for those bits.  There will be factors MUCH more real than whether
or not customers like paying for it if they discover you've chosen to
create [complexity], because quite frankly, residential end users do not
typically have a clue, and so even if you do tick off 1% who have a clue,
you're still fine.

Now, seriously, just who do you think is paying for the space?  And if
$competitor down the road is charging rock bottom prices for Internet
access, how much money does the ISP really want to throw at extra address
space?  (Do you want me to discuss naivety now?)

And just /how/ is this in any way similar to Ethernet MAC addresses, 
again?  Maybe I'm just too slow and can't see how fixed cost ==
variable cost.  I won't accept any further hand-waving as an answer,
so to continue, please provide solid examples, as I've done.

Perhaps more on-topic, how many IP addresses can dance on the head of 
a /64?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-21 Thread Joe Greco

  Why not a /48 for all? IPv6 address space is probably cheap enough that
  even just the time cost of dealing with the occasional justification
  for moving from a /56 to a /48 might be more expensive than just giving
  everybody a /48 from the outset. Then there's the op-ex cost of
  dealing with two end-site prefix lengths - not a big cost, but a
  constant additional cost none the less.
 
 And let's not ignore the on-going cost of table-bloat. If you provide a 
 /48 to everyone, in 5 years, those allocations may/may not look stupid. :)
 
 Right now, we might say wow, 256 subnets for a single end-user... 
 hogwash! and in years to come, wow, only 256 subnets... what were we 
 thinking!?

Well, what's the likelihood of the only 256 subnets problem?

Given that a subnet in the current model consists of a network that is
capable of swallowing the entire v4 Internet, and still being virtually
empty, it should be clear that *number of devices* will never be a serious
issue for any network, business or residential.  You'll always be able to
get as many devices as you'd like connected to the Internet with v6.  This
may ignore some /current/ practical issues that devices such as switches
may impose, but that doesn't make it any less true.

The question becomes, under what conditions would you need separate
subnets.  We have to remember that the answer to this question can be,
and probably should be, relatively different than it is under v4.  Under
v4, subnet policies involved both network capacity and network number
availability.  A small business with a /25 allocation might use a /26 and
a /27 for their office PC's, a /28 for a DMZ, and the last /28 for
miscellaneous stuff like a VPN concentrator, etc.  The office PC /26 and
/27 would generally be on different switches, and the server would have
more than one gigE port to accommodate.  To deal with higher bandwidth
users, you typically try to split up those users between the two networks.
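As a concrete rendering of that carve-up (a minimal sketch on a documentation
prefix, not from any real network):

    # The small-business /25 split described above: a /26 and a /27 of office
    # PCs, a /28 DMZ, and a /28 for miscellaneous gear like a VPN concentrator.
    import ipaddress

    block = ipaddress.ip_network("192.0.2.0/25")
    plan = [("office PCs A", 26), ("office PCs B", 27), ("DMZ", 28), ("misc/VPN", 28)]

    cursor = block.network_address
    for name, length in plan:
        subnet = ipaddress.ip_network((str(cursor), length))
        print(f"{name:13s} {subnet}  ({subnet.num_addresses - 2} usable hosts)")
        cursor += subnet.num_addresses
    # office PCs A 192.0.2.0/26, B 192.0.2.64/27, DMZ 192.0.2.96/28, misc 192.0.2.112/28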

Under a v6 model, it may be simpler and more convenient to have a single
PC network, with dual gigE LAG (or even 10G) to the switch(es).  So I am
envisioning that separate networks primarily imposed due to numbering
reasons under v4 will most likely become single networks under v6.

The primary reasons I see for separate networks on v6 would include
firewall policy (DMZ, separate departmental networks, etc)...

And I'm having some trouble envisioning a residential end user that 
honestly has a need for 256 networks with sufficiently different
policies.  Or that a firewall device can't reasonably deal with those 
policies even on a single network, since you mainly need to protect
devices from external access.

I keep coming to the conclusion that an end-user can be made to work on
a /64, even though a /56 is probably a better choice.  I can't find the
rationale from the end-user's side to allocate a /48.  I can maybe see
it if you want to justify it from the provider's side, the cost of dealing
with multiple prefix sizes.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-21 Thread Joe Greco
 nearby friends and neighbors.

Having fewer options is going to be easier for the ISP, I suspect.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-21 Thread Joe Greco
 to expect his device to be smart enough to tell him
what he needs to do, and whether the underlying network is one thing or
another isn't a serious consideration.

You simply have to realize that L2 and L3 aren't as different as you seem
to think.  You can actually consider them flip sides of a coin in many
cases.

 Actually, there is some guarantee that, in IPv6, you'll be able to do  
 that,
 or, you will know that you could not.  You will make a DHCP6 request
 for a prefix delegation, and, you will receive it or be told no.

So, as I said...

 Most likely, that is how most such v6 gateways will function.

/Possibly/.  It would be much more likely to be that way if everyone
was issued large CIDR blocks, every router was willing to delegate a
prefix, and there was no call for bridging.

 I think that bridges are less likely to be the norm in IPv6.

I'm skeptical, but happy to be proven wrong someday.

  If we have significant customer-side routing of IPv6, then there's  
  going
  to need to be some way to manage that.  I guess that's RIPv6/ng.  :-)

 Nope... DHCPv6 prefix delegation and Router discovery.

We'll see.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re:

2007-12-08 Thread Joe Abley



On 8-Dec-2007, at 00:18, sana sohail wrote:


I am looking for a typical percentage of external(inter-domain) routes
versus typical percentage of internal (intra-domain) routes in a core
router with couple of hundred thousand entries in the routing table.
Can anyone please help me in this?


I think first you have to decide what a typical AS looks like. The  
question, as it stands, is too general for any answer to be  
(in)defensible.



Joe



Re: unwise filtering policy from cox.net

2007-11-21 Thread Joe Greco

 Given what Sean wrote goes to the core of how mail is routed, you'd
 pretty much need to overhaul how MX records work to get around this one,
 or perhaps go back to try to resurrect something like a DNS MB record,
 but that presumes that the problem can't easily be solved in other
 ways.  Sean demonstrated one such way (move the high volume stuff to its
 own domain).

Moving abuse@ to its own domain may work; however, fixing this problem at
the DNS level is probably an error, and probably non-RFC-compliant anyways.

The real problem here is probably one of:

1) Mail server admin forgot (FSVO forgot, which might be didn't even
   stop to consider, considered it and decided that it was worthwhile to
   filter spam sent to abuse@, not realizing the implications for abuse 
   reporting, didn't have sufficient knowledge to figure out how to
   exempt abuse@, etc.)

2) Server software doesn't allow exempting a single address; this is a
   common problem with certain software, and the software should be fixed,
   since the RFC's essentially require this to work.  Sadly, it is 
   frequently assumed that if you cannot configure your system to do X, 
   then it's all right to not do X, regardless of what the RFC's say.

The need to be able to accept unfiltered recipients has certain 
implications for mail operations, such as that it could be bad to use IP 
level filtering to implement a shared block for bad senders.  
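
As a sketch of what the exemption amounts to in code (Python, with made-up
names; real MTAs each have their own knobs for this, but RFC 2142 is the
reason abuse@ and postmaster@ are on the list):

  UNFILTERED_LOCALPARTS = {"abuse", "postmaster"}

  def skip_spam_filter(recipient: str) -> bool:
      # Role addresses that must accept mail unfiltered so reports get through.
      localpart = recipient.split("@", 1)[0].lower()
      return localpart in UNFILTERED_LOCALPARTS

  for rcpt in ("abuse@example.net", "joe@example.net"):
      print(rcpt, skip_spam_filter(rcpt))   # True for abuse@, False otherwise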

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: unwise filtering policy from cox.net

2007-11-20 Thread Joe Greco

 Or it was a minor oversight and you're all pissing and moaning over nothing?
 
 That's a thought too.

Pretty much all of network operations is pissing and moaning over
nothing, if you wish to consider it such.  Some of us actually care.

In any case, I believe that I've found the Cox abuse folks to be
pretty helpful and clueful in the past, but they may have some of the
typical problems, such as having to forward mail for abuse@ through
a large e-mail platform that's designed for customers.  I'm certainly
not saying that it's all right to have this problem, but I would
certainly encourage you to try sending along a brief note without any
BL-listed URL's, to see if you can get a response that way.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: General question on rfc1918

2007-11-13 Thread Joe Abley



On 13-Nov-2007, at 10:08, Drew Weaver wrote:

   Hi there, I just had a real quick question. I hope this is  
found to be on topic.


Is it to be expected to see rfc1918 src'd packets coming from  
transit carriers?


You should not send packets with RFC1918 source or destination  
addresses to the Internet. Everybody should follow this advice. If  
everybody did follow that advice, you wouldn't see the packets you are  
seeing.


The cynical answer, however, based on observation of real-life  
networks, is yes because people are naturally messy creatures.


We have filters in place on our edge (obviously) but should we be  
seeing traffic from 192.168.0.0 and 10.0.0.0 et cetera hitting our  
transit interfaces?


I guess I'm not sure why large carrier networks wouldn't simply  
filter this in their core?


I can think of lots of things that large carrier networks (as well as  
smaller, non-carrier networks!) do that seem on the face of it to defy  
explanation, of which this is just one example :-)



Joe


Re: General question on rfc1918

2007-11-13 Thread Joe Greco

 Hi there, I just had a real quick question. I hope this is found to 
 be on topic.
 
 Is it to be expected to see rfc1918 src'd packets coming from transit 
 carriers?
 
 We have filters in place on our edge (obviously) but should we be seeing 
 traffic from 192.168.0.0 and 10.0.0.0 et cetera hitting our transit 
 interfaces?
 
 I guess I'm not sure why large carrier networks wouldn't simply filter this 
 in their core?

[pick-a-random-BCP38-snipe ...]

It's a feature: You can tell which of your providers does BCP38 this way.

Heh.

It's the networking equivalent of all the bad sorts of DOS/Windows 
programming.  You know, the rule that says once it can run successfully,
it must be correct.  Never mind checking for exceptional conditions,
buffer overruns, etc.

It's the same class of problem where corporate IT departments, listening
to some idiot, filter all ICMP, and are convinced this is okay because 
they can reach ${one-web-site-of-your-choice}, and refuse to contemplate
that they might have broken something.

Once your network is routing packets and you aren't hearing complaints
about being unable to reach a destination, it's got to be configured
correctly ... right?

Consider it life on the Internet.  Do their job for them.

Around here, we've been doing BCP38 since before there was a BCP38.
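
The check itself is trivial; a Python sketch of the BCP38-style source test
(prefix list and addresses are illustrative only):

  import ipaddress

  RFC1918 = [ipaddress.ip_network(p) for p in
             ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

  def drop_at_border(src: str) -> bool:
      # True if a packet with this source should never cross a transit edge.
      addr = ipaddress.ip_address(src)
      return any(addr in net for net in RFC1918)

  print(drop_at_border("192.168.1.7"))    # True  - filter it
  print(drop_at_border("198.51.100.9"))   # False - looks like routable space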

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: General question on rfc1918

2007-11-13 Thread Joe Abley



On 13-Nov-2007, at 10:35, Robert Bonomi wrote:


On 13-Nov-2007, at 10:08, Drew Weaver wrote:


  Hi there, I just had a real quick question. I hope this is
found to be on topic.

Is it to be expected to see rfc1918 src'd packets coming from
transit carriers?


You should not send packets with RFC1918 source or destination
addresses to the Internet. Everybody should follow this advice. If
everybody did follow that advice, you wouldn't see the packets you  
are

seeing.


Really?  What do you do if a 'network internal' device -- a legitimate
use of RFC1918 addresses -- discovers 'host/network unreachable' for  
an

external-origin packet transiting that device?   evil grin


You drop the packet at your border before it is sent out to the  
Internet.


This is why numbering interfaces in the data path of non-internal  
traffic is a bad idea.


Packets which are strictly error/status reporting -- e.g. ICMP  
'unreachable',
'ttl exceeded', 'redirect', etc. -- should *NOT* be filtered at  
network

boundaries  _solely_ because of an RFC1918 source address.


I respectfully disagree.


Joe


Re: cpu needed to NAT 45mbs

2007-11-08 Thread Joe Greco

 I do the networking in my house, and hang out with guys that do networking in 
 small offices that have a few T1s.   Now I am talking to people about a DS3 
 connection for 500 laptops*, and I am being told a p4 linux box with 2 nics 
 doing NAT will not be able to handle the load.   I am not really qualified 
 to 
 say one way or the other.  I bet someone here is.

So, are they Microsoft fans, or Cisco fans, or __ fans?  For any of
the above, you can make the corresponding product fail too.  :-)

The usual rules for PC's-as-routers apply.  You can find extensive
discussions of this on lists such as the Quagga list (despite the list
being intended for routing _protocols_ rather than routing platforms) and
the Soekris (embedded PC) lists.

Briefly,

1) Small packet traffic is harder than large packet traffic,

2) Good network cards and competent OS configuration will help extensively,

3) The more firewall rules, the slower things will tend to be (highly
   implementation-dependent)

4) In the case of NAT, it would seem to layer some additional delays on top
   of #3.
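
To put rough numbers on point 1 (back-of-the-envelope only; L2 framing
overhead is ignored):

  # Packets-per-second needed to fill a DS3 (~45 Mbps) at two packet sizes.
  # Small packets are the hard case for a PC router: per-packet overhead
  # (interrupts, lookups, NAT state) dominates, not raw bytes.
  link_bps = 45_000_000

  for size_bytes in (1500, 64):
      pps = link_bps / (size_bytes * 8)
      print(f"{size_bytes:5d}-byte packets: ~{pps:,.0f} pps")
  # ~3,750 pps at 1500 bytes vs ~87,891 pps at 64 bytes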

We've successfully used a carefully designed FreeBSD machine (PIII-850,
dual fxp) as a load balancer in the past, which shares quite a few
similarities to a NAT device.  The great upside is complete transparency
as to what's happening and why, and the ability to affect this as desired.
I don't know how close we ran to 100Mbps, but I know we exceeded 45.

With sufficient speed, you can make up for many sins, including a
relatively naive implementation.  With that in mind, I'd guess that you 
are more likely to be successful than not.  The downside is that if it
doesn't work out, you can recycle that PC into a more traditional role.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Hey, SiteFinder is back, again...

2007-11-05 Thread Joe Greco

 Sean,
 
  Yes, it sounds like the evil bit.  Why would anyone bother to set it?
 
  Two reasons
 
  1) By standardizing the process, it removes the excuse for using
  various hacks and duct tape.
 
  2) Because the villains in Bond movies don't view themselves as evil.
  Google is happy to pre-check the box to install their Toolbar, OpenDNS
  is proud they redirect phishing sites with DNS lookups, Earthlink says it
  improves the customer experience, and so on.
 
 Forgive my skepticism, but what I would envision happening is resolver
 stacks adding a switch that would be on by default, and would translate
 the response back to NXDOMAIN.  At that point we would be right back
 where we started, only after a lengthy debate, an RFC, a bunch of code,
 numerous bugs, and a bunch of I told you sos.

The other half of this is that it probably isn't *appropriate* to encourage
abuse of the DNS in this manner, and if you actually add a framework to do
this sort of thing, it amounts to tacit (or explicit) approval, which will
lead to even more sites doing it.

Consider where it could lead.  Pick something that's already sketchy, such
as hotel networks.  Creating the perfect excuse for them to map every domain
name to 10.0.0.1, force it through a web proxy, and then have their tech
support people tell you that if you're having problems, make sure you set
the browser-uses-evilbit-dns.  And that RFC mandate to not do things like
this?  Ignored.  It's already annoying to try to determine what a hotel
means if they say they have Internet access.
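
Incidentally, it's easy to check whether a resolver in the path is doing
this sort of rewriting; a small Python probe (the test label is made up and
should be randomized in practice):

  import socket

  # 'example' is a reserved TLD, so this name must not exist.  If it
  # resolves anyway, something upstream is synthesizing answers.
  probe = "nxdomain-probe-19731practice.example"

  try:
      addr = socket.gethostbyname(probe)
      print(f"resolver is rewriting NXDOMAIN: {probe} -> {addr}")
  except socket.gaierror:
      print("got an error back for the nonexistent name, as expected")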

Reinventing the DNS protocol in order to intercept odd stuff on the Web 
seems to me to be overkill and bad policy.  Could someone kindly explain
to me why the proxy configuration support in browsers could not be used 
for this, to limit the scope of damage to the web browsing side of things? 
I realize that the current implementations may not be quite ideal for 
this, but wouldn't it be much less of a technical challenge to develop a
PAC or PAC-like framework to do this in an idealized fashion, and then 
actually do so?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Any help for Yahoo! Mail arrogance?

2007-10-30 Thread Joe Greco

  I'm pretty sure
  none of our systems have been compromised and forwards mail that we
  don't know about.
 
 Yet your sending IP reputation is poor

Do you actually have data that confirms that?

We've had random problems mailing Hotmail (frequently), Yahoo!
(infrequently), and other places where the mail stream consists of
a low volume (10/day) of transactional and support e-mail directly
arising from user-purchased services, on an IP address that had 
never previously sent e-mail - ever.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Joe Greco

 Rep. Boucher's solution: more capacity, even though it has been 
 demonstrated many times more capacity doesn't actually solve this 
 particular problem.

That would seem to be an inaccurate statement.

 Is there something in humans that makes it difficult to understand
 the difference between circuit-switch networks, which allocated a fixed 
 amount of bandwidth during a session, and packet-switched networks, which 
 vary the available bandwidth depending on overall demand throughout a 
 session?
 
 Packet switch networks are darn cheap because you share capacity with lots 
 of other uses; Circuit switch networks are more expensive because you get
 dedicated capacity for your sole use.

So, what happens when you add sufficient capacity to the packet switch
network that it is able to deliver committed bandwidth to all users?

Answer: by adding capacity, you've created a packet switched network where
you actually get dedicated capacity for your sole use.

If you're on a packet network with a finite amount of shared capacity,
there *IS* an ultimate amount of capacity that you can add to eliminate 
any bottlenecks.  Period!  At that point, it behaves (more or less) like
a circuit switched network.

The reasons not to build your packet switched network with that much
capacity are financial and technical, not that it's impossible.  We
know that the average user will not use all their bandwidth.  It's also
more expensive to install more equipment; it is nice when you can fit
more subscribers on the same amount of equipment.

However, at the point where capacity becomes a problem, you actually do
have several choices:

1) Block certain types of traffic,

2) Limit {certain types of, all} traffic,

3) Change user behaviours, or

4) Add some more capacity

Come to mind as being the major available options.  ALL of these can be
effective.  EACH of them has specific downsides.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Joe Greco

 
 On Fri, 26 Oct 2007, Paul Ferguson wrote:
  The part of this discussion that really infuriates me (and Joe
  Greco has hit most of the salient points) is the deceptiveness
  in how ISPs underwrite the service their customers subscribe to.
 
  For instance, in our data centers, we have 1Gb uplinks to our ISPs,
  but guaranteed service subscription (a la CIR) to a certain rate
  which we engineer (based on average traffic volume, say, 400Mb), but
  burstable to full line rate -- if the bandwidth is available.
 
  Now, we _know_ this, because it's in the contract. :-)
 
  As a consumer, my subscription is based on language that doesn't
  say you can only have the bandwidth you're paying for when we
  are congested, because we oversubscribed our network capacity.
 
  That's the issue here.
 
 You have a ZERO CIR on a consumer Internet connection.

Where's it say that?

 How many different ways can an ISP say speeds may vary and are not 
 guaranteed.  It says so in the _contract_.  So why don't you know
 that?

Gee, that's not exactly what I read.

http://help.twcable.com/html/twc_sub_agreement2.html

Section 6 (a) Speeds and Network Management.  I acknowledge that each tier
or level of the HSD Service has limits on the maximum speed at which I may
send and receive data at any time, as set forth in the price list or Terms
of Use.  I understand that the actual speeds I may experience at any time
will vary based on a number of factors, including the capabilities of my
equipment, Internet congestion, the technical properties of the websites,
content and applications that I access, and network management tools and
techniques employed by TWC. I agree that TWC or ISP may change the speed of
any tier by amending the price list or Terms of Use. My continued use of the
HSD Service following such a change will constitute my acceptance of any new
speed. I also agree that TWC may use technical means, including but not
limited to suspending or reducing the speed of my HSD Service, to ensure
compliance with its Terms of Use and to ensure that its service operates
efficiently.

Both "to ensure that its service operates efficiently" and "techniques
employed by TWC" would seem to allow for some variation in speed by the
local cable company - just as the speed on a freeway may drop during
construction, or during rush hour.  However, there's very strong language 
in there that indicates that the limits on sending and receiving are set 
forth in the price list.

 ISPs tell you that when you order, in the terms of service, when you call
 customer care that speeds may vary and are not guaranteed.

"Speeds may vary and are not guaranteed" is obvious on the Internet.
"We're deliberately going to screw with your speeds if you use too much"
is not, at least to your average consumer.

 How much do you pay for your commercial 1GE connection with a 400Mbps CIR? 
 Is it more or less than what you pay for a consumer connection with a ZERO 
 CIR?

Show me a consumer connection with a contract that /says/ that it has a 
zero CIR, and we can start that discussion.  Your saying that it has a
zero CIR does not make it so.

 ISPs are happy to sell you SLAs, CIRs, etc.  But if you don't buy SLAs,
 CIRs, etc, why are you surprised you don't get them?

There's a difference between not having a SLA, CIR, etc., all of which I'm
fine for with a residential class connection, and having an ISP that sells
"20Mbps! Service! Unlimited!" but then quietly messes with users who
actually use that.

The ISP that sells a 20Mbps pipe, and doesn't mess with it, but has a
congested upstream, these guys are merely oversubscribed.  That's the
no-SLA-no-CIR situation.

 Once again <blink>speeds may vary and are not guaranteed</blink>.
 
 Now that you know that speeds may vary and are not guaranteed, does
 that make you satisfied?

Only if my ISP isn't messing with my speeds, or has made it exceedingly
clear in what ways they'll be messing with my speeds so that they do not
match what I paid for on the price list.

Let me restate that:  I don't really care if I get 8 bits per second to
some guy in Far North, Canada who is on a dodgy satellite Internet link.
That's what "speeds may vary and are not guaranteed" should refer to -
things well beyond an ISP's control.

Now, let me flip this on its ear.  We rent colo machines to users.  We
provide flat rate pricing.  When we sell a machine with 1Mbps of 
Internet bandwidth, that is very much "speeds may vary and are not 
guaranteed" - HOWEVER, we do absolutely promise that if it's anything 
of ours that is causing delivery of less than 1Mbps, WE WILL FIX IT. 
PERIOD.  This isn't a SLA.  This isn't a CIR.  This is simple honesty,
we deliver what we advertised, and what the customer is paying for.

The price points that consumers are paying for resi Internet may not
allow quite that level of guarantee, but does that mean that they do
not deserve to be provided with some transparency so that end users 
understand what the ACTUAL policy is?

Re: Internet access in Japan (was Re: BitTorrent swarms have a deadly bite on broadband nets)

2007-10-24 Thread Joe Greco

 I did consulting work for NTT in 2001 and 2002 and visited their Tokyo
 headquarters twice. NTT has two ILEC divisions, NTT East and NTT West.
 The ILEC management told me in conversations that there was no money in
 fiber-to-the-home; the entire rollout was due to government pressure and
 was well below a competitive rate of return. Similarly, NTT kept staff
 they did not need because the government wanted to maintain high
 employment in Japan and avoid the social stress that results from
 massive layoffs.

Mmm hmm.  That sounds somewhat like the system we were promised here in
America.  We were told by the ILEC's that it was going to be very expensive
and that they had little incentive to do it, so we offered them a package
of incentives - some figure as much as $200 billion worth.

See http://www.newnetworks.com/broadbandscandals.htm

 You should not assume that 'Japanese capitalism' works
 like American capitalism. 

That could well be; it appears that American capitalism is much better at
lobbying the political system.  They eventually found ways to take
their money and run without actually delivering on the promises they made.
I'll bet the American system paid out a lot better for a lot less work.

Anyways, it's clear to me that any high bandwidth deployment is an immense
investment for a society, and one of the really interesting meta-questions
is whether or not such an investment will still be paying off in ten years,
or twenty, or...

The POTS network, which merely had to transmit voice, and never had to 
deal with substantial growth of the underlying bandwidth (mainly moving
from analog to digital trunks, which increased but then fixed the
bandwidth), was a long-term investment that has paid off for the telcos
over the years, even if there was a lot of wailing along the way.

However, one of the notable things about data is that our needs have
continued to grow.  Twenty years ago, a 9600 bps Internet connection
might have served a large community, where it was mostly used for
messaging and an occasional interactive session.  Fifteen years ago,
14.4 kbps was a nice connection for a single user.  Ten years ago,
a 1Mbps connection was pretty sweet (maybe a bit less for DSL, a bit
more for cable). 

Things pretty much go awry at that point, and we no longer see such
impressive progression in average end-user Internet connection speeds.
This didn't stop speed increases elsewhere, but it did put the brakes
on rapid increases here.

If we had received the promised FTTH network, we'd have speeds of up
to 45Mbps, which would definitely be in-line with previous growth (and
the growth of computing and storage technologies).

At a LAN networking level, we've gone from 10Mbps to 100Mbps to 1Gbps
as the standard ethernet interface that you might find on computers and
networking devices.

So the question is, had things gone differently, would 45Mbps still be
adequate?  And would it be adequate in 10 or 20 years?  And what effect
would that have had overall?

Certainly it would be a driving force for continued rapid growth in
both networking and Internet technologies.  As has been noted here in the
past, current Ethernet (40G/100G) standards efforts haven't been really
keeping pace with historical speed growth trends.

Has the failure to deploy true high-speed broadband in a large and key
market such as the US resulted in less pressure on vendors by networks
for the next generations of high-speed networking?

Or, getting back to the actual situation here in the US, what implications
does the continued evolution of US broadband have for other network
operators?  As the ILEC's and cablecos continue to grow and dominate the
end-user Internet market, what's the outlook on other independent networks,
content providers, etc.?  The implications of the so-called net neutrality
issues are just one example of future issues.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-24 Thread Joe Greco

 I wonder how quickly applications and network gear would implement QoS
 support if the major ISPs offered their subscribers two queues: a default
 queue, which handled regular internet traffic but squashed P2P, and then a
 separate queue that allowed P2P to flow uninhibited for an extra $5/month,
 but then ISPs could purchase cheaper bandwidth for that.
 
 But perhaps at the end of the day Andrew O. is right and it's best off to
 have a single queue and throw more bandwidth at the problem.

A system that wasn't P2P-centric could be interesting, though making it
P2P-centric would be easier, I'm sure.  ;-)

The idea that Internet data flows would ever stop probably doesn't work
out well for the average user.

What about a system that would /guarantee/ a low amount of data on a low
priority queue, but would also provide access to whatever excess capacity
was currently available (if any)?

We've already seen service providers such as Virgin UK implementing things
which essentially try to do this, where during primetime they'll limit the
largest consumers of bandwidth for 4 hours.  The method is completely
different, but the end result looks somewhat similar.  The recent 
discussion of AU service providers also talks about providing a baseline 
service once you've exceeded your quota, which is a simplified version of
this.

Would it be better for networks to focus on separating data classes and 
providing a product that's actually capable of quality-of-service style 
attributes?

Would it be beneficial to be able to do this on an end-to-end basis (which
implies being able to QoS across ASN's)?

The real problem with the throw more bandwidth solution is that at some
point, you simply cannot do it, since the available capacity on your last
mile simply isn't sufficient for the numbers you're selling, even if you
are able to buy cheaper upstream bandwidth for it.

Perhaps that's just an argument to fix the last mile.
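
As a toy model of the guaranteed-floor-plus-borrowing idea above (all
figures invented, just to show the shape of it):

  LINK = 100        # link capacity per tick, arbitrary units
  P2P_FLOOR = 10    # guaranteed minimum for the low-priority queue

  def schedule(normal_demand, p2p_demand):
      # Low-priority traffic always gets its floor; normal traffic is
      # served next; whatever is left over goes back to low-priority.
      p2p = min(p2p_demand, P2P_FLOOR)
      normal = min(normal_demand, LINK - p2p)
      p2p += min(p2p_demand - p2p, LINK - p2p - normal)
      return normal, p2p

  print(schedule(95, 50))   # (90, 10): congested, P2P held to its floor
  print(schedule(30, 50))   # (30, 50): idle capacity handed to P2P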

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Verizon has been listening to nanog.

2007-10-24 Thread Joe Maimon




Hex Star wrote:


On 10/23/07, Leo Bicknell [EMAIL PROTECTED] wrote:


http://www.usatoday.com/tech/news/2007-10-23-verizon-fios-plan_N.htm

20 Mbps down, 20 Mbps up, fully symmetrical for $65.



That's pretty sweet, now all they have to do is start laying the fiber
over here...



And stop ripping out copper.


Re: Can P2P applications learn to play fair on networks?

2007-10-23 Thread Joe Provo

On Tue, Oct 23, 2007 at 01:18:01PM +0200, Iljitsch van Beijnum wrote:
 
 On 22-okt-2007, at 18:12, Sean Donelan wrote:
 
 Network operators probably aren't operating from altruistic  
 principles, but for most network operators when the pain isn't  
 spread equally across the the customer base it represents a  
 fairness issue.  If 490 customers are complaining about bad  
 network performance and the cause is traced to what 10 customers  
 are doing, the reaction is to hammer the nails sticking out.
 
 The problem here is that they seem to be using a sledge hammer:  
 BitTorrent is essentially left dead in the water. 

Wrong - seeding from scratch, that is uploading without any 
download component, is being clobbered. Seeding back into the 
swarm works while one is still taking chunks down, then closes.
Essentially, it turns all clients into a client similar to BitTyrant
and focuses on, as Charlie put it earlier, customers downloading
stuff.

From the perspective of the protocol designers, unfair sharing
is indeed dead but to state it in a way that indicates customers
cannot *use* BT for some function is bogus.  Part of the reason
why caching, provider based, etc schemes seem to be unpopular
is that private trackers appear to operate much in the way that
old BBS download/uploads used to... you get credits for contributing
and can only pull down so much based on such credits.  Not just
bragging rights, but users need to take part in the transactions
to actually use the service. A provider-hosted solution which 
managed to transparently handle this across multiple clients and 
trackers would likely be popular with the end users.

Cheers,

Joe 

-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-23 Thread Joe Provo

On Tue, Oct 23, 2007 at 03:13:42AM +, Steven M. Bellovin wrote:
 
 According to
 http://torrentfreak.com/comcast-throttles-bittorrent-traffic-seeding-impossible/
 Comcast's blocking affects connections to non-Comcast users.  This
 means that they're trying to manage their upstream connections, not the
 local loop.

Disagree - despite Comcast's size, there's more Internet outside of 
them than on-net.  Even with decent knobs, these devices are more blunt 
instruments than anyone would like.  See my previous comments regarding 
allowing the on-net to on-net (or within region, or whatever BGP community 
you use...) such that transfers with better RTT complete quicker.  

Everyone who is commenting on "This tracker/client does $foo to behave"
is missing the point - would one rather have the traffic snooped further
to see if such and such tracker/client is in use? And pay for the admin
overhead required to keep those non-automatable lists updated? Adrian
hit it on the head regarding the generations of kittens romping free...

While I expect end-users to miss the boat that providers use stat-mux 
calculations to build and price their networks, I'm floored to see the
sentiment on NANOG.  No edge provider of geographic scope/scale would
survive if 1:1 ratios were built and priced accordingly. Perhaps the
MA colonialism era is coming to a close and smaller, regional nation-
states... erm last-mile providers will be the entities to grow with
satisfied customers?

Cheers,

Joe
-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE


Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Joe Provo

On Sun, Oct 21, 2007 at 10:45:49PM -0400, Geo. wrote:
[snip]
 Second, the more people on your network running fileshare network software 
 and sharing, the less backbone bandwidth your users are going to use when 
 downloading from a fileshare network because those on your network are 
 going to supply full bandwidth to them. This means that while your internal 
 network may see the traffic your expensive backbone connections won't (at 
 least for the download). Blocking the uploading is a stupid idea because 
 now all downloading has to come across your backbone connection.

As stated in several previous threads on the topic, the clump
of p2p protocols in themselves do not provide any topology or
locality awareness.  At least some of the policing middleboxes 
have worked with network operators to address the need and bring 
topology-awareness into various p2p clouds by eating a BGP feed 
to redirect traffic on-net (or to non-transit, or same region, 
or latency class or ...) when possible.   Of course the on-net 
has less long-haul costs, but the last-mile node congestion is 
killer; at least lower-latency on-net to on-net transfers should
complete quickly if the network isn't completely hosed.  One 
then can create a token scheme for all the remaining traffic 
and prioritize, say, the customers actually downloading over
those seeding from scratch. 
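
A sketch of what eating that BGP feed buys you, reduced to its simplest
form (prefixes and peer addresses are made up; a real middlebox would also
weight by region, latency class, etc.):

  import ipaddress

  ON_NET = [ipaddress.ip_network(p) for p in
            ("192.0.2.0/24", "198.51.100.0/24")]   # learned from your BGP feed

  def on_net(peer_ip: str) -> bool:
      addr = ipaddress.ip_address(peer_ip)
      return any(addr in net for net in ON_NET)

  peers = ["203.0.113.5", "192.0.2.77", "198.51.100.12", "198.18.0.9"]
  peers.sort(key=lambda ip: not on_net(ip))   # on-net peers first
  print(peers)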
 

-- 
 RSUC / GweepNet / Spunk / FnB / Usenix / SAGE


Re: Comcast blocking p2p uploads

2007-10-21 Thread Joe Greco

 Leo Bicknell wrote:
  I'm a bit confused by your statement. Are you saying it's more
  cost effective for ISP's to carry downloads thousands of miles
  across the US before giving them to the end user than it is to allow
  a local end user to upload them to other local end users?
   
 Not to speak on Joe's behalf, but whether the content comes from 
 elsewhere on the Internet or within the ISP's own network the issue is 
 the same: limitations on the transmission medium between the cable modem 
 and the CMTS/head-end.  The issue that cable companies are having with 
 P2P is that compared to doing a HTTP or FTP fetch of the same content 
 you will use more network resources, particularly in the upstream 
 direction where contention is a much bigger issue.  On DOCSIS 1.x 
 systems like Comcast's plant, there's a limitation of ~10mbps of 
 capacity per upstream channel.  You get enough 384 - 768k connected 
 users all running P2P apps and you're going to start having problems in 
 a big hurry.  It's to remove some of the strain on the upstream channels 
 that Comcast has started to deploy Sandvine to start closing *outbound* 
 connections from P2P apps.

That's part of it, certainly.  The other problem is that I really doubt
that there's as much favoritism towards local clients as Leo seems to
believe.  Without that, you're also looking at a transport issue as you
shove packets around.  Probably in ways that the network designers did
not anticipate.

Years ago, dealing with web caching services, there was found to be a
benefit, a limited benefit, to setting up caching proxies within a major
regional ISP's network.  The theoretical benefit was to reduce the need 
for internal backbone and external transit connectivity, while improving
user experience.

The interesting thing is that it wasn't really practical to cache on a
per-POP basis, so it was necessary to pick cache locations at strategic
locations within the network.  This meant you wouldn't expect to see a
bandwidth savings on the internal backbone from the POP to the
aggregation point.

The next interesting point is that you could actually improve the cache
hit rate by combining the caches at each aggregation point; the larger
userbase meant that any given bit of content out on the Internet was
more likely to be in cache.  However, this had the ability to stress the
network in unexpected ways, as significant cache-site to cache-site data 
flows were happening in ways that network engineering hadn't always 
anticipated.

A third interesting thing was noted.  The Internet grows very fast. 
While there's always someone visiting www.cnn.com, as the number of other
sites grew, there was a slow reduction in the overall cache hit rate over
the years as users tended towards more diverse web sites.  This is the
result of the ever-growing quantity of information out there on the
Internet.

This doesn't map exactly to the current model with P2P, yet I suspect it
has a number of loose parallels.

Now, I have to believe that it's possible that a few BitTorrent users in
the same city will download the same Linux ISO.  For that ISO, and for
any other spectacularly popular download, yes, I would imagine that there
is some minor savings in bandwidth.  However, with 10M down and 384K up,
even if you have 10 other users in the city who are all sending at full
384K to someone new, that's not full line speed, so the client will still
try to pull additional capacity from elsewhere to get that full 10M speed.

I've always seen P2P protocols as behaving in an opportunistic manner.
They're looking for who has some free upload capacity and the desired
object.  I'm positive that a P2P application can tell that a user in
New York is closer to me (in Milwaukee) than a user in China, but I'd
quite frankly be shocked if it could do a reasonable job of
differentiating between a user in Chicago, Waukesha (few miles away),
or Milwaukee.

In the end, it may actually be easier for an ISP to deal with the
deterministic behaviour of having data from me go to the local 
upstream transit pipe than it is for my data to be sourced from a
bunch of other random nearby on-net sources.

I certainly think that P2P could be a PITA for network engineering.
I simultaneously think that P2P is a fantastic technology from a showing-
off-the-idea-behind-the-Internet viewpoint, and that in the end, the 
Internet will need to be able to handle more applications like this, as 
we see things like videophones etc. pop up.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joe Greco

 On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
  So your recommendation is that universities, enterprises and ISPs simply 
  stop offering all Internet service because a few particular application 
  protocols are badly behaved?
 
  They should stop to offer flat-rate ones anyway.
 
 Comcast's management has publically stated anyone who doesn't like the 
 network management controls on its flat rate service can upgrade to 
 Comcast's business class service.
 
 Problem solved?

Assuming a business class service that's reasonably priced and featured?
Absolutely.  I'm not sure I've seen that to be the case, however.  Last
time I checked with a local cable company for T1-like service, they wanted
something like $800/mo, which was about $300-$400/mo more than several of
the CLEC's.  However, that was awhile ago, and it isn't clear that the
service offerings would be the same.

I don't class cable service as being as reliable as a T1, however.  We've
witnessed that the cable network fails shortly after any regional power
outage here, and it has somewhat regular burps in the service anyways.

I'll note that I can get unlimited business-class DSL (2M/512k ADSL) for
about $60/mo (24m), and that was explicitly spelled out to be unlimited-
use as part of the RFP.

By way of comparison, our local residential RR service is now 8M/512k for 
about $45/mo (as of just a month or two ago).

I think I'd have to conclude that I'd certainly see a premium above and
beyond the cost of a residential plan to be reasonable, but I don't expect
it to be many multiples of the resi service price, given that DSL plans
will promise the bandwidth at just a slightly higher cost.

 Or would some P2P folks complain about having to pay more money?

Of course they will.

  Or do general per-user ratelimiting that is protocol/application agnostic.
 
 As I mentioned previously about the issues involving additional in-line 
 devices and so on in networks, imposing per user network management and 
 billing is a much more complicated task.
 
 If only a few protocol/applications are causing a problem, why do you need 
 an overly complex response?  Why not target the few things that are 
 causing problems?

Well, because when you promise someone an Internet connection, they usually
expect it to work.  Is it reasonable for Comcast to unilaterally decide that
my P2P filesharing of my family photos and video clips is bad?

  A better idea might be for the application protocol designers to improve 
  those particular applications.
 
  Good luck with that.
 
 It took a while, but it worked with the UDP audio/video protocol folks who 
 used to stress networks.  Eventually those protocol designers learned to 
 control their applications and make them play nicely on the network.

:-)

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joe Greco

 Is it reasonable for your filesharing of your family photos and video 
 clips to cause problems for all the other users of the network?  Is that 
 fair or just greedy?

It's damn well fair, is what it is.  Is it somehow better for me to go and
e-mail the photos and movies around?  What if I really don't want to
involve the ISP's servers, because they've proven to be unreliable, or I
don't want them capturing backup copies, or whatever?

My choice of technology for distributing my pictures, in this case, would
probably result in *lower* overall bandwidth consumption by the ISP, since
some bandwidth might be offloaded to Uncle Fred in Topeka, and Grandma
Jones in Detroit, and Brother Tom in Florida who happens to live on a much
higher capacity service.

If filesharing my family photos with friends and family is sufficient to 
cause my ISP to buckle, there's something very wrong.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joe Greco

 Joe Greco wrote:
  Well, because when you promise someone an Internet connection, they usually
  expect it to work.  Is it reasonable for Comcast to unilaterally decide that
  my P2P filesharing of my family photos and video clips is bad?

 
 Comcast is currently providing 1GB of web hosting space per e-mail 
 address associated with each account; one could argue that's a 
 significantly more efficient method of distributing that type of content 
 and it still doesn't cost you anything extra.

Wow, that's incredibly ...small.  I've easily got ten times that online
with just one class of photos.  There's a lot of benefit to just letting
people yank stuff right off the old hard drive.  (I don't /actually/ use
P2P for sharing photos, we have a ton of webserver space for it, but I
know people who do use P2P for it)

 The use case you describe isn't the problem though,

Of course it's not, but the point I'm making is that they're using a 
shotgun to solve the problem.

[major snip]

 Again, 
 flat-rate pricing does little to discourage this type of behavior.

I certainly agree with that.  Despite that, the way that Comcast has
reportedly chosen to deal with this is problematic, because it means
that they're not really providing true full Internet access.  I don't
expect an ISP to actually forge packets when I'm attempting to
communicate with some third party.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.

