Re: the O(N^2) problem

2008-04-14 Thread Joe Greco

 The risk in a reputation system is collusion.

/One/ risk in a reputation system is collusion.

Reputation is a method of trying to divine the legitimacy of mail based on
factors other than whether or not a recipient authorized a sender to send
mail.  The majority of the focus on fighting spam has been on trying to do
this sort of divination by coding clever things into machines, but it
should be clear to anyone who has ever had legitimate mail mysteriously go
missing, undelivered, or delayed that the process isn't without the
occasional false positive.

There are both positive (whitelist) and negative (DNSBL, local This-Is-Spam,
etc) reputation lists, and there are pros and cons to each.

Consider, for example, Kevin Day's example of the Group-B-Objectionable
scenario.  This is a nonobvious issue that can subvert the reputation of
a legitimate mailer.

On the flip side, what about someone who actually wants to receive mail
that an organization such as Spamhaus has deemed to be hosted on a spammy
IP?  (And, Steve and the Spamhaus guys, this is in no way a criticism of
the job you guys do, the Internet owes you a debt of gratitude for doing
a nearly impossible job in such a professional manner)

There are risks inherent in having any third party, specifically
including the ISP or mailbox provider, trying to determine the nature of
the communications, and filtering on that basis.

This is why I've been talking about paradigms that eliminate the need for
third parties to do analysis of e-mail, and rely on the third parties to
simply implement systems that allow the recipient to control mail.  There
are a number of such systems that are possible.

However, the current systems of divining legitimacy (reputation, filtering,
whatever) generate results that loosely approximate the typical mail that
the average user would wish to receive.  Users have been trained to consider
errors in the process as acceptable, and even unavoidable.

It's ridiculous when a system like Hotmail silently bitbuckets e-mail from
a sender (and IP) that has never spammed and has ONLY sent transactional
e-mail and customer support correspondence, so that individually composed
non-HTML REPLIES to customer inquiries are eaten by Hotmail, or tossed in
the spam folder.  Nice.  (I know, we all have our stories)

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Problems sending mail to yahoo?

2008-04-14 Thread Joe Greco

  You want to define standards?  Let's define some standard for 
  establishing permission to mail.  If we could solve the 
  permission problem, then the filtering wouldn't be such a 
  problem, because there wouldn't need to be as much (or maybe 
  even any).  As a user, I want a way to unambiguously allow a 
  specific sender to send me things, spam filtering be 
  damned.  I also want a way to retract that permission, and 
  have the mail flow from that sender (or any of their 
  affiliates) to stop.
  
  Right now I've got a solution that allows me to do that, but 
  it requires a significant paradigm change, away from 
  single-e-mail-address.
 
 In general, your permission to send idea is a good one to
 put in the requirements list for a standard email architecture.
 But your particular solution stinks because it simply adds
 another bandage to a creaky old email architecture that is 
 long past its sell-by date.

Yes.  I'm well aware of that.  My requirements list included that my
solution be able to actually /fix/ something with /today's/ architecture;
this is a practical implementation to solve a real problem, which was
that I was tired of vendor mail being confused for spam.

So, yes, it stinks when compared to the concept of a shiny new mail
architecture.  However, it currently works and is successfully whitelisting
the things I intended.  I just received a message from a tool battery
distributor that some batteries I ordered months ago are finally shipping.
It was crappy HTML, and I would normally have completely missed it -
probably even forgetting that we had ordered them, certainly not
recognizing the From line it came from.  It's a success story.  Rare.

You are welcome to scoff at it as being a stinky bandaid on a creaky mail
system.

 IMHO, the only way that Internet email can be cleaned up is
 to create an entirely new email architecture using an entirely
 new set of protocols with entirely new port assignments and 
 no attempt whatsoever to maintain reverse compatibility with
 the existing architecture. That is a fair piece of work and
 requires a lot of people to get their heads out of the box
 and apply some creativity. Many will say that the effort is
 doomed before it starts because it is not compatible with
 what went before. I don't buy that argument at all.
 
 In any case, a new architecture won't come about until we have
 some clarity of the requirements of the new architecture. And
 that probably has to be hashed out somewhere else, not on any
 existing mailing list.

If such a discussion does come about, I want people to understand that
user-controlled permission is a much better fix than arbitrary spam
filtering steps.  There's a lot of inertia in the traditional spam 
filtering advice, and a certain amount of resistance to considering
that the status quo does not represent e-mail nirvana.

Think of it as making that unsubscribe at the bottom of any marketing
e-mail actually work, without argument, without risk.

... JG


Re: Problems sending mail to yahoo?

2008-04-13 Thread Joe Greco

 Gak, there isn't even a standard code which means MAILBOX FULL or
 ACCOUNT NOT RECEIVING MAIL other than MAILBOX FULL, maybe by choice,
 maybe non-payment, as specific as a site is comfortable with.
 
 That's what I mean by standards and at least trying to focus on what
 can be done rather than the endless retelling of what can't be done.

I would have thought it was obvious, but to see this sort of enlightened
ignorance(*) suggests that it isn't:  The current methods of spam filtering
require a certain level of opaqueness.

Having just watched the gory hashing through of how $MEGAISP deals with
filtering on another list, I was amazed that the prevailing stance among
mailbox hosters is that they don't really care about principles, and that
they mostly care about whether or not users complain.

For example, I feel very strongly that if a user signs up for a list, and
then doesn't like it, it isn't the sender's fault, and the mail isn't spam.
Now, if the user revokes permission to mail, and the sender keeps sending,
that's covered as spam under most reasonable definitions, but that's not
what we're talking about here.

To expect senders to have psychic knowledge of what any individual recipient
is or is not going to like is insane.  Yet that's what current expectations
appear to boil down to.

So, on one hand, we have filtering by heuristics, which requires a level
of opaqueness, because if you respond "567 BODY contained www.sex.com,
mail blocked" to their mail, you have given the spammer the feedback
needed to get around the filter.
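
As an aside, a minimal sketch of what that opaqueness looks like in
practice - hypothetical policy-hook code, not any real MTA's interface:

    import logging

    logging.basicConfig(filename="policy.log", level=logging.INFO)

    def reject_message(internal_reason):
        """Map every internal filtering verdict onto one opaque reply."""
        # The real trigger is logged privately for the postmaster...
        logging.info("rejected: %s", internal_reason)
        # ...while the sender sees a response that leaks nothing worth
        # tuning against.
        return "554 5.7.1 Message rejected due to local policy"

    print(reject_message("BODY contained www.sex.com"))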

And on the other hand, we have filtering by statistics, which requires
a large userbase and probably a This Is Spam button, where you use a
complaint-driven model to reject mail; but this is severely complicated
because users have also been trained to report as spam any other mail that
they don't want, which definitely includes even things that they've opted
in to.

So you have two opaque components to filtering.  And senders are
deliberately left guessing - is the problem REALLY that a mailbox is full,
or am I getting greylisted in some odd manner?

Filtering stinks.  It is resource-intensive, time-consuming, error-prone,
and pretty much a desperate signal that the current e-mail system is
failing.

You want to define standards?  Let's define some standard for establishing
permission to mail.  If we could solve the permission problem, then the
filtering wouldn't be such a problem, because there wouldn't need to be as
much (or maybe even any).  As a user, I want a way to unambiguously allow
a specific sender to send me things, spam filtering be damned.  I also
want a way to retract that permission, and have the mail flow from that
sender (or any of their affiliates) to stop.
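
To make that requirement concrete, here is a minimal sketch of a
recipient-controlled permission store (all names hypothetical; this
illustrates the requirement, not a proposed protocol):

    class PermissionStore:
        """Recipient-controlled sender permission: grant, revoke, check."""

        def __init__(self):
            self.allowed = set()

        def grant(self, sender):
            # Unambiguously allow a specific sender, spam filtering be
            # damned.
            self.allowed.add(sender.lower())

        def revoke(self, sender):
            # Retracting permission stops the flow - no argument, no risk.
            self.allowed.discard(sender.lower())

        def accept(self, sender):
            return sender.lower() in self.allowed

    perms = PermissionStore()
    perms.grant("news@vendor.example")
    assert perms.accept("news@vendor.example")
    perms.revoke("news@vendor.example")  # the unsubscribe that works
    assert not perms.accept("news@vendor.example")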

Right now I've got a solution that allows me to do that, but it requires a
significant paradigm change, away from single-e-mail-address.

Addressing standards of the sort you suggest is relatively meaningless
in the bigger picture, I think.  Nice, but not that important.

(*) It's enlightened to hope for standards that would allow remote sites
to have some vague concept of what the problem is.  I respect that.
It just seems to be at odds with current reality.

 More specific and standardized SMTP failure codes are just one example
 but I think they illustrate the point I'm trying to make.
 
 Oh yeah here's another (ok maybe somewhere this is written down), how
 about agreeing on contact mailboxes like we did with
 [EMAIL PROTECTED]

Yeah, like that's actually implemented or useful at a majority of domains.

 Is it [EMAIL PROTECTED] or [EMAIL PROTECTED] or [EMAIL PROTECTED] or
 [EMAIL PROTECTED] (very commonly used) or [EMAIL PROTECTED] Who cares? But
 let's pick ONE, stuff it in an RFC or BCP and try to get each other to
 conform to it.

Having defined methods for contacting people OOB would be nice - IFF
anyone actually cared to try to resolve individual problems, and
often/mostly they don't.  Don't expect them to want to, because for the
most part, they do not.  Sigh.

... JG


Re: Problems sending mail to yahoo?

2008-04-13 Thread Joe Greco

 On April 13, 2008 at 14:24 [EMAIL PROTECTED] (Joe Greco) wrote:
   I would have thought it was obvious, but to see this sort of enlightened
   ignorance(*) suggests that it isn't:  The current methods of spam filtering
   require a certain level of opaqueness.
 
 Indeed, that must be the problem.
 
 But then you proceed to suggest:
 
   So, on one hand, we have filtering by heuristics, which requires a
   level of opaqueness, because if you respond "567 BODY contained
   www.sex.com, mail blocked" to their mail, you have given the spammer
   the feedback needed to get around the filter.
 
 Giving the spammer feedback?
 
 In the first place, I think s/he/it knows what domain they're using if
 they're following bounces at all. Perhaps they have to guess among
 whether it was the sender, body string, sending MTA, but really that's
 about it and given one of those four often being randomly generated
 (sender) and another (sender MTA) deducible by seeing if multiple
 sources were blocked on the same email...my arithmetic says you're
 down to about two plus or minus.

In many (even most) cases, that is only useful if you're sending a lot of
mail towards a single source, a variable which introduces yet *another*
ambiguity, since volume is certainly a factor in blocking decisions.
Further, if you look at the average mail message, there may be more than
a single domain in it, for reasons such as open tracking services
(1x1/invisible pixels, etc), branding, and so on.  And once you're being
blocked, the block may be implemented by-IP even though some other
metric triggered it.

Having records that allow a sender to go back and unilaterally determine 
what was amiss may not be considered desirable by the receiving site.
 
 But even that is naive since spammers of the sort anyone should bother
 worrying about use massive bot armies numbering O(million) and
 generally, and of necessity, use fire and forget sending techniques.

Do you mean to suggest that your definition of spammer only includes
senders using massive bot armies?  That'd be mostly pill spammers,
phishers, and other really shady operators.  There are whole other classes
of spam and spammer.

 Perhaps you have no conception of the amount of spam the major
 offenders send out. It's on the order of 100B/day, at least.

I have some idea.  However, I will concede that my conception of current
spam volumes is based mostly on what I'm able to quantify, which is the
~4-8GB/day of spam we receive here.

 That's why you and your aunt bessie and all the people on this list
 get the same exact spam. Because they're being sent out in the
 hundreds of billions. Per day.

Actually, we see significant variation in spam received per address.

 Now, what exactly do you base your interesting theory that spammers
 analyze return codes to improve their techniques for sending through
 your own specific (not general) mail blocks? Sure they do some
 bayesian scrambling and so forth but that's general and will work on
 zillions of sites running spamassassin or similar so that's worthwhile
 to them.

I'm sure that if you were to talk to the postmasters at any major ISP/mail
provider, especially ones like AOL, Hotmail, Yahoo, and Earthlink, you
would discover that they're familiar with businesses which claim to be
in the business of enhancing deliverability.

However, what I'm saying was pretty much the inverse of the theory that you
attribute to me:  I'm saying that receivers often do NOT provide feedback
detailing the specifics of why a block happened.  As a matter of fact, I 
think I can say that the most common feedback provided in the mail world 
would be notice of listing on a DNS blocking list, and this is primarily 
because the default code and examples for implementation usually provide 
some feedback about the source (or, at least, source DNSBL) of the block.

You'll see generic guidance such as the Yahoo! error message that started
this thread ("temporarily deferred due to user complaints", IIRC), but
that's not particularly helpful, now, is it?  It doesn't tell you which
user, or how many complaints, etc.

 But what, exactly, do you base your interesting theory that if a site
 returned 567 BODY contained www.sex.com that spammers in general and
 such that it's worthy of concern would use this information to tune
 their efforts?

Because there are businesses out there that claim to do that very sort of
thing, except that they do it by actually sending mail and then checking
canary e-mail boxes on the receiving site to measure effectiveness of their
delivery strategy.  Failures result in further tuning.

Being able to simply analyze error messages would result in a huge boost
for their effectiveness, since they would essentially be able to monitor
the deliverability of entire mail runs, rather than assuming that the
deliverability percentage of their canaries, plus any open tracking data,
is representative of the whole run.

Re: Problems sending mail to yahoo?

2008-04-13 Thread Joe Greco

Again - to them.

But they're hardly the only class of spammers.  I realize it's convenient
to ignore that fact for the purposes of this discussion, since it supports
your argument, but other spammers would mine a lot of useful information
out of such messages.

 But any such return codes should be voluntary,

And they are.  To the best of my knowledge, you can put pretty much any
crud you like after the ###, and if anybody wanted to return this data,
they would be doing it today.

 particularly the
 details, and a receiving MTA should be free to respond with as much or
 as little information as they are comfortable with right down to the
 big red button, 421 it just ain't happenin' bub!
 
 But it was just an example of how perhaps some standards, particularly
 regarding mail rejection, might help operationally. I'm not pushing
 the particular example I gave of extending status codes.
 
 Also, again I can't claim to know what you're working on, but there
 are quite a few disposable address systems in production which use
 various variations such as one per sender, one per message, change it
 only when you want to, etc. But maybe you have something better, I
 encourage you to pursue your vision.

No.  The difference with my solution is simply that it solves all the
problems I outlined when I set out to solve the problem I started with -
finding a clean way to exempt senders from the anti-spam checks that they
frequently fell afoul of.

But then again, I am merely saying that capable solutions exist, but that
they all seem to require some paradigm shift.

 And, finally, one quote:
 
 I didn't say I had a design.  Certainly there are solutions to the
 problem, but any solution I'm aware of involves paradigm changes of
 some sort, changes that apparently few are willing to make.
 
 Gosh if you know of any FUSSP* whose only problem is that it requires
 everyone on the internet to abandon SMTP entirely or similar by all
 means share it.

That was kind of the nifty part to my solution:  it didn't require any
changes at any sender's site.  By accepting some tradeoffs, I was able
to compartmentalize all the permission issues as functions controlled by
the receiving site.

 Unfortunately this is a common hand-wave, oh we could get rid of spam
 overnight but it would require changes to (SMTP, usually) which would
 take a decade or more to implement, if at all!
 
 Well, since it's already BEEN a decade or more that we've all been
 fussing about spam in a big way maybe we should have listened to
 people with a secret plan to end the war back in 1998. So I'm here to
 tell ya I'll listen to it now and I suspect so will a lot of others.

If we cannot have a flag day for the e-mail system, and obviously, duh,
we cannot have a flag day for the e-mail system, we have to look at other
changes.

That's too big a paradigm shift.

My solution is a comprehensive solution to the permission problem, which is
a root issue in the fight against spam, but it is based on a paradigm shift
that ISPs are unwilling to underwrite - dealing with per-correspondent
addresses.  This has challenges associated with it, primarily related to
educating users how to use it, and then getting users to commit to actually
doing so.

That's not TOO big a paradigm shift, since it's completely backwards-
compatible and managed at the receiving site without any support required
anywhere else in the e-mail system, but since service providers aren't 
interested in it, it is a non-starter.  Were there interest, it wouldn't
be that tough to support relatively transparently via plugins for modern
clients such as Thunderbird and browsers such as Firefox.  But it is a
LARGE paradigm
shift, and it doesn't even solve every problem with the e-mail system.

I am unconvinced that there aren't smaller potential paradigm shifts that
could be made.  However...

It is exceedingly clear to me that service providers prefer to treat the
spam problem in a statistical manner.  It offers fairly good results (if
you consider ~90%-99% accuracy to be acceptable) but doesn't actually do
anything for users who need e-mail that they can actually rely on.  It's
cheap (relatively speaking) and the support costs can be kept low.

 * FUSSP - Final and Ultimate Solution to the Spam Problem.

Shoot all the spammers?  :-)

... JG


Re: Problems sending mail to yahoo?

2008-04-13 Thread Joe Greco

 On Sun, Apr 13, 2008, Joe Greco wrote:
  browsers such as Firefox and Thunderbird.  But it is a LARGE paradigm
  shift, and it doesn't even solve every problem with the e-mail system.
  
  I am unconvinced that there aren't smaller potential paradigm shifts that
  could be made.  However...
 
 There already has been a paradigm shift. University students (college for 
 you
 'merkins) use facebook, myspace (less now, thankfully!) and IMs as their
 primary online communication method. A number of students at my university
 use email purely because the university uses it for internal systems
 and communication, and use the above for everything else.
 
 I think you'll find that we are the paradigm shift that needs to happen.
 The younger people have already moved on. :)

I believe this is functionally equivalent to the "block 25 and consider
SMTP dead" FUSSP.

It's worth noting that each newer system is being systematically attacked
as well.  It isn't really a solution, it's just changing problem platforms.
The abuse remains.

... JG


Re: Problems sending mail to yahoo?

2008-04-13 Thread Joe Greco

 On Sun, Apr 13, 2008, Joe Greco wrote:
  I believe this is functionally equivalent to the "block 25 and consider
  SMTP dead" FUSSP.
  
  It's worth noting that each newer system is being systematically attacked
  as well.  It isn't really a solution, it's just changing problem platforms.
  The abuse remains.
 
 Yes, but the ownership of the problem is better defined for messages -inside-
 a system.
 
 If you've got tens of millions of users on your IM service, you can start
 using statistical techniques on your data to identify likely spam/ham,
 and (very importantly) you are able to cut individual users off if they're
 doing something nasty. Users can't fake their identity like they can
 with email. There's no requirement for broadcasting messages a la email
 lists (which btw is touted as one of those things that break when various
 anti-spam verify-sender proposals come up.)
 
 Besides - google has a large enough cross section of users' email to do
 these tricks. I'd love to be a fly on the wall at google for just this
 reason ..

Few of these systems have actually been demonstrated to be invulnerable
to abuse.  As a matter of fact, I just saw someone from LinkedIn asking
about techniques for mitigating abuse.  When it's relatively cheap (think:
economically attractive in excessively poor countries with high
unemployment) to hire human labor, or even to engineer CAPTCHA evasion
systems where you have one of these wonderful billion-node-botnets
available, it becomes feasible to get your message out.  Statistically,
there will be some holes.  You only need a very small success rate.

The relative anonymity offered by e-mail is a problem, yes, but it is only
one challenge to the e-mail architecture.  For example, given a realistic
way to revoke permission to mail, having an anonymous party send you a
message (or even millions of messages) wouldn't be a problem, because you
could stop the flow whenever you wanted.  The problem is that there isn't
a commonly available way to revoke permission to mail.

I've posted items in places where e-mail addresses are likely to be
scraped or otherwise picked up and later spammed.  What amazed me was
how cool it was that I could actually post a usable e-mail address and
receive comments from random people, and then when the spam began to
roll in, I could simply turn off the address, and it doesn't even hit
the mailservers.  That's the power of being able to revoke permission.
The cost?  A DNS query and answer anytime some spammer tries to send
to that address.  But a DNS query was happening anyway...
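
A minimal sketch of that gate, assuming per-address names under a
private control zone (the zone name and the lookup mechanics here are
illustrative, not the actual implementation):

    import socket

    def rcpt_allowed(localpart, control_zone="keys.example.net"):
        """Accept RCPT only if the per-address DNS record still exists."""
        try:
            # Record present: permission is still granted.
            socket.gethostbyname("%s.%s" % (localpart, control_zone))
            return True
        except socket.gaierror:
            # NXDOMAIN: the address was turned off; the spam is rejected
            # before it ever touches the mailservers.
            return False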

The solution I've implemented here, then, has the interesting quality
of moving ownership of the problem of permission within our systems,
without also requiring that all correspondents use our local messaging
systems (bboard, private messaging, whatever) or having to do ANY work
to figure out what's spam vs ham, etc.  That's my ultimate reply to 
your message, by the way.

Since it is clear that many other networks have no interest in stemming
the flood of trash coming from their operations, and clearly they're
not going to be interested in permission schemes that require their
involvement, I'd say that solutions that do not rely on other networks
cooperating to solve the problem bear the best chance of dealing with
the problem.

... JG


Re: Problems sending mail to yahoo?

2008-04-11 Thread Joe Greco

  The lesson one should get from all this is that the ultimate harm of
  spammers et al is that they are succeeding in corrupting the idea of a
  standards-based internet.
  
  Sites invent policies to try to survive in a deluge of spam and
  implement those policies in software.
  
  Usually they're loathe to even speak about how any of it works either
  for fear that disclosure will help spammers get around the software or
  fear that someone, maybe a customer maybe a litigious marketeer who
  feels unfairly excluded, will hold their feet to the fire.
  
  So it's a vast sea of security by obscurity and standards be damned.
  
  It's a real and serious failure of the IETF et al.
 
 Has anyone ever figured out what percentage of a connection to the
 internet is now overhead i.e. spam, scan, viruses, etc? More than 5%? If
 we put everyone behind 4to6 gateways would the spam crush the gateways
 or would the gateways stop the spam? Would we add code to these
 transitional gateways to make them do more than act like protocol
 converters and then end up making them permanent because of benefit?
 Perhaps there's more to transitioning to a new technology after all?
 Maybe we could get rid of some of the cruft and right a few wrongs while
 we're at it?

We(*) can't even get BCP38 to work.  Ha.

Having nearly given up in disgust on trying to devise workable anti-spam
solutions that would reliably deliver requested/desired mail to my own
mailbox, I came to the realization that the real problem with the e-mail
system is so fundamental that there's no trivial way to save it.  

Permission to mail is implied by simply knowing an e-mail address.  If I
provide [EMAIL PROTECTED] to a vendor in order to receive updates to an
online order, the vendor may retain that address and then mail it again at
a later date.  Worse, if the vendor shares the address list with someone
else, we eventually have the Millions CD problem - and I have no idea who
was responsible.

Giving out tagged addresses gave a somewhat useful way to track back
who was responsible, but didn't really offload the spam from the mail
server.

I've solved my spam problem (or, more accurately, am in the process of
slowly solving my spam problem) by changing the paradigm.  If the problem 
is that knowing an e-mail address acts as the key to the mail box, then 
giving the same key to everyone is stupid.

For vendors, I now give them a crypto-signed e-mail address(*2).  By
making the key a part of the DNS name, I can turn off reception for a
bad sender (anyone I don't want to hear from anymore!) or a sender who's
shared my address with their affiliates (block two for the price of
one!).  All other validated mail makes it to my mailbox without further
spam filtering of any kind.
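
The footnote below omits the details, but to illustrate the general
shape of such a scheme - an HMAC over a per-sender tag, carried in the
domain part of the address - here is a hypothetical sketch (the secret,
the tag, and the domain are all made up):

    import hashlib
    import hmac

    SECRET = b"known-only-to-the-receiving-site"
    BASE = "mail.example.net"

    def address_for(vendor):
        """Mint a per-vendor address whose DNS name carries a signature."""
        sig = hmac.new(SECRET, vendor.encode(),
                       hashlib.sha256).hexdigest()[:12]
        return "me@%s.%s.%s" % (vendor, sig, BASE)

    def valid(address):
        """Recompute the signature; forged or altered tags fail."""
        vendor, sig, _rest = address.split("@", 1)[1].split(".", 2)
        want = hmac.new(SECRET, vendor.encode(),
                        hashlib.sha256).hexdigest()[:12]
        return hmac.compare_digest(sig, want)

    addr = address_for("toolvendor")  # me@toolvendor.<sig>.mail.example.net
    assert valid(addr)
    assert not valid("me@toolvendor.000000000000." + BASE)

Turning off a bad sender is then a DNS-side operation: stop resolving
that one name, and the mail stops arriving.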

This has been exceedingly effective, though doing it for random consumers
poses a lot of interesting problems.  However, it proves to me that one
of the problems is the permission model currently in use.

The spam problem is potentially solvable, but there's a failure to figure
out (at a leadership level) paradigm changes that could actually make a 
difference.  There's a lot of resistance to changing anything about the
way e-mail works, and understandably so.  However, these are the sorts of
things that we have to contemplate and evaluate if we're really interested
in making fundamental changes that reduce or eliminate abuse.

(*) fsvo "we" that doesn't include AS14536.

(*2) I've omitted a detailed description of the strategy in use because
 it's not necessarily relevant to NANOG.  I'm happy to discuss it
 with anyone interested.  It has technical merit going for it, but it
 represents a significant divergence from current practice.

... JG


Re: spam wanted :)

2008-04-10 Thread Joe Greco

 Randy Bush [EMAIL PROTECTED] writes:
 
  this would be a straight sample, before filtering, ip address
  blocking, etc.
 
  i realize this is difficult, as all of us go through much effort to
  reject this stuff as early as possible.  but it will be a sample
  unbiased by your filtering techniques.
 
 How do you classify email as spam without adding bias?

You can always claim bias.

There's often been debate, even in the anti-spam community, about what
"spam" actually means.  The meaning has repeatedly been diluted over the
years, to a point where some now define it merely as "that which we do
not want", an attitude supported in code by some service providers who
now sport great big Easy Buttons (with apologies to any office supply
chain) labelled "This Is Spam".

Even so, there's some complexity - users making typos, for example.

However, the easiest way to avoid bias is to look for a mail stream that
has the quality of not having any valid recipients.  There will, of
course, be someone who will disagree with me that mail sent to an address
that hasn't been valid in years, under a parent domain that was
unresolvable in DNS for at least a year, is spam.  However, it's as
unbiased as I can reasonably imagine being.
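
A sketch of that selection rule (the dead address and the dates are, of
course, made up):

    from datetime import date

    # Mail still arriving for a recipient that has been invalid for
    # years, under a domain unresolvable in DNS for at least a year,
    # has no valid recipient - about as unbiased a spam sample as one
    # can reasonably construct.
    DEAD_SINCE = {"olduser@long-dead.example": date(2003, 5, 1)}

    def unbiased_spam_sample(rcpt, today=date(2008, 4, 10)):
        died = DEAD_SINCE.get(rcpt.lower())
        return died is not None and (today - died).days > 365

    assert unbiased_spam_sample("olduser@long-dead.example")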

... JG


Re: Problems sending mail to yahoo?

2008-04-10 Thread Joe Greco

 Barry Shein wrote:
  Is it just us or are there general problems with sending email to
  yahoo in the past few weeks? Our queues to them are backed up though
  they drain slowly.
  
  They frequently return:
  
 421 4.7.0 [TS01] Messages from MAILSERVERIP temporarily deferred due 
  to user complaints - 4.16.55.1; see 
  http://postmaster.yahoo.com/421-ts01.html
  
  (where MAILSERVERIP is one of our mail server ip addresses)
 
  Just wondering if this was a widespread problem or are we just so
  blessed, and any insights into what's going on over there.
 
 I see this a lot also and what I see causing it is accounts on my servers
 that don't opt for spam filtering and they have their accounts here set to
 forward mail to their yahoo.com accounts - spam and everything then gets
 sent there - they complain to yahoo.com about the spam and bingo - email
 delays from here to yahoo.com accounts

We had this happen when a user forwarded a non-filtered mail stream from
here to Yahoo.  The user indicated that no messages were reported to Yahoo
as spam, despite the fact that it's certain some of them were spam.

I wouldn't trust the error message completely.  It seems likely that a jump
in volume may trigger this too, especially for an unfiltered stream.

... JG


Re: Nanog 43/CBX -- Hotel codes etc

2008-04-05 Thread Joe Greco

 Anyway -- I regard most of those warnings as quite overblown.  I mean,
 on lots of subway cars you stand out more if you don't have white
 earbuds in, probably attached to iPhones.  Midtown is very safe.  Your
 laptop bag doesn't have to say laptop on it to be recognized as such,
 but there are so many other people with laptop bags that you won't stand
 out if you have one.  Subway crime?  The average daily ridership is
 about 5,000,000; there are on average 9 felonies a day on the whole
 system. To quote a city police official I met, that makes the subways
 by far the safest city in the world.

That's probably an abuse of statistics.

 Yes, you're probably at more risk if you look like a tourist.  But there
 are lots of ways to do that, like waiting for a walk sign before
 crossing the street...  (Visiting Tokyo last month was quite a shock to
 my system; I had to unlearn all sorts of things.)

Looking and acting like you belong is good advice in most circumstances.
Act like the other monkeys.  If you don't give someone reason to question
you, they probably won't.  Wait, oh, that's the guide book for infiltrating
facilities ...  ;-)

... JG


Re: fiber switch for gig

2008-04-01 Thread Joe Greco

 Speaking of running gig long distances, does anyone on the list have
 suggestions on a 8 port L2 switch with fiber ports based on personal
 experience?  Lots of 48 port gig switches have 2-4 fiber uplink ports, but
 this means daisy-chains instead of hub/spoke.  Looking for a central switch
 for a star topology to home fiber runs that is cost effective and works.
 
 Considering:
 DLink DXS-3326GSR
 NetGear GSM7312
 Foundry SX-FI12GM-4
 Zyxel GS-4012F
 
 I realize not all these switches are IEEE 802.3ae, Clause 49 or IEEE 802.3aq
 capable.

Cost effective would probably be the Dell 6024F.  We have some of these
and they've worked well, but we're not making any use of their advanced
features.  Can be had cheaply on eBay these days.  Has basic L3
capabilities (small forwarding table, OSPF), built in redundant power
supply, etc.  If you're fine with a non-ae/aq switch, these are worth
considering.

16 SFP plus 8 shared SFP/copper make it a fairly flexible device.

You did say cost effective, right?  :-)

... JG


Re: EU Official: IP Is Personal

2008-01-23 Thread Joe Greco

 Paul Vixie wrote:
  [EMAIL PROTECTED] (Hank Nussbacher) writes:
  http://ap.google.com/article/ALeqM5g08qkYTaNhLlscXKMnS3V8dkc-WwD8UAGH900
 
  they say it's personally identifiable information, not personal property.
  EU's concern is the privacy implications of data that google and others
  are saving, they are not making a statement related to address ownership.
 
 Correct. In the EU DP framework (see: 
 [...]
 P. S. How many bits in the mask are necessary to achieve the non-PII aim?

So, this could be basically a matter of dredging up someone with a /25 
allocated to them personally, in the EU service area.  I think I know 
some people like that.

I know for a fact that I know people with swamp C's here in the US.  That
would seem to set the bar higher than a mere 7 bits.
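
For concreteness, here is what masking host bits actually buys, using
Python's ipaddress module (the sample address is from documentation
space):

    import ipaddress

    ip = ipaddress.ip_address("192.0.2.77")

    # Zeroing 7 bits (a /25) leaves at most 128 candidate hosts - and if
    # one person holds the whole /25, or a swamp /24, the masked prefix
    # still identifies that person exactly.
    for bits in (7, 8, 16):
        net = ipaddress.ip_network("%s/%d" % (ip, 32 - bits),
                                   strict=False)
        print(bits, net.network_address, 2 ** bits, "possible hosts")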

... JG


Re: Cost per prefix [was: request for help w/ ATT and terminology]

2008-01-22 Thread Joe Greco

The reasonable thing to do, when you're just looking for some numbers, is
to come up with a reasonable way to generate those numbers, without giving
yourself an ulcer over the other possibilities of what may or may not be
on some specific network somewhere, or whether or not the other features
that come along with something like the upgrade to a sup720 should somehow 
be attributed to some other thing.

But getting back to this statement of yours:

 I cannot think of a pair of boxes where one can support a full table  
 and one can't where the _only_ difference is prefix count.  

I'll put a nail in this, AND cure some of your unhappiness, by noting the
following:

Per Froogle, which is public information that can be readily verified by
the random reader, it appears that a SUP720-3B can be had for ~$8K.  It 
appears that a SUP720-3BXL can be had for ~$29K.  IGNORING THE FACT that
the average network probably isn't merely upgrading from 3B to 3BXL, and
that line cards may need upgrades or daughtercards, that gives us a cost
of somewhere around $21K that can be attributed to JUST the growth in
table size.  (At least, I'm not /aware/ of any difference between the 3B 
and 3BXL other than table size.)
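
Back-of-the-envelope, with the capacity figures hedged (the 3B is
commonly cited at roughly 256K IPv4 routes and the 3BXL at roughly 1M):

    sup720_3b = 8000     # approximate street price, per Froogle
    sup720_3bxl = 29000
    delta = sup720_3bxl - sup720_3b   # ~$21K attributable to table size

    extra_slots = 1000000 - 256000    # assumed 3BXL vs 3B FIB capacities
    print("$%d buys %d extra slots = $%.3f per prefix, per router" %
          (delta, extra_slots, float(delta) / extra_slots))
    # roughly three cents of headroom per prefix - per box, not per
    # network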

Will everyone decide to make that /particular/ jump in technology?  No.

Is it a fair answer to the question being asked?  It's a conservative
estimate, and so it is safe to use for the purposes of William's 
discussion.  It is a middle-of-the-road number.  There WILL be networks
that do not experience these costs, for various reasons.  There WILL be
networks where the costs are substantially higher, maybe because they've
got a hundred routers that all need to be upgraded.  There will even be
networks who have the 7600 platform and have already deployed the 3bxl.

The more general problem of "what does it cost to carry another route"
is somewhat like arguing about how many angels can dance on the head of
a pin.  Unlike the angels, there's an actual answer to the question, but
we're not able to accurately determine all the variables with precision.
That doesn't mean it's completely unreasonable to make a ballpark guess.

Remember the wisdom of Pnews:

This program posts news to thousands of machines throughout the entire
civilized world.  Your message will cost the net hundreds if not thousands of
dollars to send everywhere.  Please be sure you know what you are doing.

This is hardly different, and we're trying to get a grasp on what it is 
we're doing.  Your input of useful numbers and estimates would be helpful
and interesting.  Your arguments about why it's all wrong, minus any better
suggestion of how to do it, are useless.  Sorry, that's just the way it is.

... JG


Re: Cost per prefix [was: request for help w/ ATT and terminology]

2008-01-21 Thread Joe Greco

  For example, the Cisco 3750G has all of features except for the
  ability to hold 300k+  prefixes. Per CDW, the 48-port version costs
  $10k, so the difference (ergo cost attributable to prefix count) is
  $40k-$10k=$30k, or 75%.
 
 Unfortunately, I have to run real packets through a real router in the  
 real world, not design a network off CDW's website.
 
 As a simple for-instance, taking just a few thousand routes on the  
 3750 and trying to do multipath over, say 4xGigE, the 'router' will  
 fail and you will see up to 50% packet loss.  This is not something I  
 got off CDW's website, this is something we saw in production.
 
 And that's without ACLs, NetFlow, 100s of peering sessions, etc.  None  
 of which the 3750 can do and still pass gigabits of traffic through a  
 layer 3 decision matrix.

Patrick,

Please excuse me for asking, but you seem to be arguing in a most unusual
manner.  You seem to be saying that the 3750 is not a workable device for
L3 routing (which may simply be a firmware issue, don't know, don't care).
From the point of finding a 48-port device which could conceivably route
packets at wirespeed, even if it doesn't /actually/ do so, this device 
seems like a reasonable choice for purposes of cost comparisons to me.  
But okay, we'll go your way for a bit.

Given that the 3750 is not acceptable, then what exactly would you propose
for a 48 port multigigabit router, capable of wirespeed, that does /not/
hold a 300K+ prefix table?  All we need is a model number and a price, and
then we can substitute it into the pricing questions previously posed.

If you disagree that the 7600/3bxl is a good choice for the fully-capable
router, feel free to change that too.  I don't really care, I just want to
see the cost difference between DFZ-capable and non-DFZ-capable on stuff
that have similar features in other ways.

... JG


Re: Cost per prefix [was: request for help w/ ATT and terminology]

2008-01-21 Thread Joe Greco

 On Mon, 21 Jan 2008, Joe Greco wrote:
  Given that the 3750 is not acceptable, then what exactly would you propose
  for a 48 port multigigabit router, capable of wirespeed, that does /not/
  hold a 300K+ prefix table?  All we need is a model number and a price, and
  then we can substitute it into the pricing questions previously posed.
 
  If you disagree that the 7600/3bxl is a good choice for the fully-capable
  router, feel free to change that too.  I don't really care, I just want to
  see the cost difference between DFZ-capable and non-DFZ-capable on stuff
  that have similar features in other ways.
 
 If using the 7600/3bxl as the cost basis of the upgrade, you might as 
 well compare it to the 6500/7600/sup2 or sup3b.  Either of these would 
 likely be what people buying the 3bxls are upgrading from, in some cases 
 just because of DFZ growth/bloat, in others, to get additional features 
 (IPv6).

I see a minor problem with that: if I don't actually need a chassis as
large as the 6500/sup2, there's a bit of a hefty jump to get to that
platform from potentially reasonable lesser platforms.  If you're upgrading,
though, it's essentially a discard of the sup2 (because you lose access to
the chassis), so it may be fair to count the entire cost of the sup720-3bxl.

Punching in 720-3bxl to Froogle comes up with $29K.  Since there are other
costs that may be associated with the upgrade (daughterboards, incompatible
line cards, etc), let's just pretend $30K is a reasonable figure, unless
someone else has Figures To Share.

... JG


Re: An Attempt at Economically Rational Pricing: Time Warner Trial

2008-01-20 Thread Joe Greco

  However, if you look, all the prepaid plans that I've seen look  
  suspiciously
  like predatory pricing.  The price per minute is substantially  
  higher than
  an equivalent minute on a conventional plan.  Picking on ATT, for a  
  minute,
  here, look at their monthly GoPhone prepaid plan, $39.99/300  
  anytime, vs
  $39.99/450 minutes for the normal.  If anything, the phone company  
  is not
  extending you any credit, and has actually collected your cash in  
  advance,
  so the prepaid minutes ought to be /cheaper/.
 
 I disagree.  Ever heard of volume discounts?
 
 Picking on att again, a typical iPhone user signs up for 24 months @ ~ 
 $100/month, _after_ a credit check to prove they are good for it or  
 plunking down a hefty deposit.
 
 Compare that $2.4 kilo-bux to the $40-one-time payment by a pre-paid  
 user.  Or, to be more far, how about $960 ($40/month for voice only)  
 compared to $40 one-time?
 
 Hell yes I expect more minutes per dollar on my long-term contract.
 
 Hrmm, wonder if someone will offer pay-as-you-go broadband @ $XXX (or  
 $0.XXX) per gigabyte?

Actually, I was fairly careful, and I picked monthly recurring plans in 
both cases.  The typical prepaid user is NOT going to pay a $40-one-
time payment, because the initial cost of the phone is going to be a
deterrent from simply ditching the phone after $40 is spent.

The lock-in of contracts is typically done to guarantee that the cell
phone which they make you buy is paid for, and it is perfectly possible
(though somewhat roundabout) to get the cheaper postpaid plan without a
long contract - assuming you meet their creditworthiness guidelines.
Even without that, once you've gone past your one or two year commitment,
you continue at that same rate, so we can still note that the economics
are interesting.

The iPhone seems to be some sort of odd case, where we're not quite sure
whether there's money going back and forth between ATT and Apple behind
the scenes to subsidize the cost of the phones (or I may have missed the
news).  So talking about your iPhone is pretty much like comparing Apples
and oranges, and yes, you set yourself up for that one.

To put it another way, they do not give you a better price per minute if
you go and deposit $2400 in your prepaid account.  You can use your volume
discount argument once you come up with a compelling explanation for that.
;-)

... JG


Re: Cost per prefix [was: request for help w/ ATT and terminology]

2008-01-20 Thread Joe Greco

 But before we go too far down this road, everyone here should realize  
 that new PI space and PA deaggregation WILL CONTINUE TO HAPPEN.
 
 Many corporations paying for Internet access will NOT be tied to a  
 single provider.  Period.  Trying to tell them you are too small, you  
 should only let us big networks have our own space is a silly  
 argument which won't fly.
 
 The Internet is a business tool.  Trying to make that tool less  
 flexible, trying to tie the fate of a customer to the fate of a single  
 provider, or trying force them to jump through more hoops than you  
 have to jump through for the same redundancy / reliability is simply  
 not realistic.  And telling them it will cost some random network in  
 some random other place a dollar a year for their additional  
 flexibility / reliability / performance is not going to convince them  
 not to do it.
 
 At least not while the Internet is still driven  
 by commercial realities.  (Which I personally think is a Very Good  
 Thing - much better than the alternative.)  Someone will take the  
 customer's check, so the prefix will be in the table.  And since you  
 want to take your customers' checks to provide access to that ISP's  
 customer, you will have to carry the prefix.
 
 Of course, that doesn't mean we shouldn't be thrifty with table  
 space.  We just have to stop thinking that only the largest providers  
 should be allowed to add a prefix to the table.  At least if we are  
 going to continue making money on the Internet.

While I agree with this to some extent, it is clear that there are some
problems.  The obvious problem is where the line is drawn; it is not
currently reasonable for each business class DSL line to be issued PI
space, but it is currently reasonable for the largest 100 companies in
the world to have PI space.  (I've deliberately drawn the boundary lines
well outside what most would argue as a reasonable range; the boundaries
I've drawn are not open to debate, since they're for the purposes of
contemplating a problem.)

I don't think that simply writing a check to an ISP is going to be
sufficiently compelling to cause networks of the world to accept a 
prefix in the table.  If I happen to be close to running out of table
entries, then I may not see any particular value in accepting a prefix
that serves no good purpose.  For example, PA deaggregated space and
prefixes from far away might be among the first victims, with the former
being filtered (hope you have a covering route!) and the latter being
filtered with a local covering route installed to default a bunch of
APNIC routes out a reasonable pipe.
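
A sketch of that first filter - dropping a more-specific only when a
covering aggregate is present (the prefixes are illustrative):

    import ipaddress

    def covered_deaggregate(prefix, table):
        """True if a strictly shorter prefix in the table covers this
        one, so dropping it still leaves a (less specific) route."""
        p = ipaddress.ip_network(prefix)
        return any(p != agg and p.subnet_of(agg)
                   for agg in map(ipaddress.ip_network, table))

    table = ["203.0.112.0/21"]
    print(covered_deaggregate("203.0.113.0/24", table))   # True: drop it
    print(covered_deaggregate("198.51.100.0/24", table))  # False: keep it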

For the overall good of the Internet, that's not particularly desirable,
but it will be a reality for providers who can't keep justifying
installing lots of routers with larger table sizes every few years.

There is, therefore, some commercial interest above and beyond "hey,
look, some guy paid me".  We'd like the Internet to work _well_, and
that means that self-interest exclusive of all else is not going to be
a good way to contemplate commercial realities.

So, what can reasonably be done?  Given what I've seen over the years,
I keep coming back to the idea that PI space allocations are not all
that far out of control, but the PA deaggregation situation is fairly
rough.  There would also seem to be some things that smaller sites could
do to fix the PA deagg situation.  Is this the way people see things
going, if we're going to be realistic?

... JG


Re: An Attempt at Economically Rational Pricing: Time Warner Trial

2008-01-20 Thread Joe Greco

 I think the point is that you need to get buyers to segregate themselves
 into two groups - the light users and the heavy users. By heavy users I
 mean the 'Bandwidth Hogs' (Oink, Oink) and by a light user someone like
 myself for whom email is the main application. After all, the problem
 with the current system is that there is no segregation - everyone is on
 basically the same plan.

Well, yes.

 The pricing plan needs to be structured in a way that light users have an
 incentive to take a different pricing plan than do the heavy users.

Using the local cable company as an example, right now, I believe that
they're doing Road Runner Classic for $40/mo, with Road Runner Turbo for
$50/mo (approx).  Turbo gets you extra speed (14M/1M, IIRC).

The problem is, Road Runner is delivering 7M/512K for $40/mo, which is
arguably a lot more capacity than maybe 50-80% of the customers actually
need.

Ma Bell is selling DSL a *lot* cheaper (as low as $15, IIRC).

So, does:

1) Road Runner drop prices substantially (keep current pricing for high
   bandwidth users), and continue to try to compete with DSL, which could 
   have the adverse side effect of damaging revenue if customers start
   moving in volume to the cheaper plan,

2) Road Runner continue to provide service to the shrinking DSL-less service
   areas at a premium price, relying on apathy to minimize churn in the
   areas where Ma Bell is likely leafing every bill with DSL adverts,

3) Road Runner decide to keep the high paying customers, for now, and try to
   minimize bandwidth, and then deal with the growth of DSL coverage at a 
   future date by dropping prices later?

Option 1) is aggressive but kills profitability.  If done right, though, 
it ensures that cable will continue to compete with DSL in the future.
Option 2) is a holding pattern that is the slow path to irrelevancy.  
Option 3) is a way to maximize current profitability, but makes it 
difficult to figure out just when to implement a strategy change.  In 
the meantime, DSL continues to nibble away at the customer base.  The
end result is unpredictable.

I'm going to tend to view 3) as the shortsighted approach that is also
going to be very popular with businesses who cannot see out beyond next
quarter's profits.

The easiest way to encourage light users to take a different pricing plan
is to give them one.  If Road Runner does that, that's option 1), complete
with option 1)'s problem.  On the flip side, if you seriously think that 
$40/month is an appropriate light pricing plan and high bandwidth users 
should pay more (let's say $80/), then there's a competition problem with
DSL where DSL is selling tiers, and even the highest is at least somewhat
cheaper.

That means that the main advantages to Road Runner are:

1) Availability in non-DSL areas,

2) A 14M/1M service plan currently unmatched by DSL (TTBOMK).

That latter one is simply going to act as a magnet to the high bandwidth
users.

Interesting.

... JG


Re: An Attempt at Economically Rational Pricing: Time Warner Trial

2008-01-19 Thread Joe Greco

Condensing a few messages into one:

Mikael Abrahamsson writes:
 Customers want control, that's why the prepaid mobile phone where you get
 an account you have to prepay into, are so popular in some markets. It
 also enables people who perhaps otherwise would not be eligable because of
 bad credit, to get these kind of services.

However, if you look, all the prepaid plans that I've seen look suspiciously 
like predatory pricing.  The price per minute is substantially higher than
an equivalent minute on a conventional plan.  Picking on ATT, for a minute,
here, look at their monthly GoPhone prepaid plan, $39.99/300 anytime, vs
$39.99/450 minutes for the normal.  If anything, the phone company is not
extending you any credit, and has actually collected your cash in advance,
so the prepaid minutes ought to be /cheaper/.
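
The per-minute arithmetic, worked out:

    plans = {"GoPhone prepaid": (39.99, 300), "postpaid": (39.99, 450)}
    for name, (dollars, minutes) in plans.items():
        print("%s: %.1f cents/minute" % (name, 100 * dollars / minutes))
    # prepaid ~13.3 c/min vs. postpaid ~8.9 c/min: the customer who
    # extended the carrier an interest-free loan pays ~50% more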

Roderick S. Beck writes:
 Do other industries have mixed pricing schemes that successfully
 coexist? Some restaurants are all-you-can-eat and others are pay by
 portion. You can buy a car outright or rent one and pay by the mile.

Certainly.  We already have that in the Internet business, in the form of
business vs residential service, etc.  For example, for a residential
circuit where I wanted to avoid a disclosed (in the fine print, sigh)
monthly limit, we instead ordered a business circuit, which we were
assured differed from a T1 in one way (on the usage front):  there was 
no specific performance SLA, but there were no limits imposed by the
service provider, and it was explicitly okay to max it 24/7.  This cost
all of maybe $15/month extra (prices have since changed, I can't check.)

Quinn Kuzmich writes:
 You are sadly mistaken if you think this will save anyone any cash,
 even light users.  Their prices will not change, not a chance.
 Upgrade your network instead of complaining that its just kids
 downloading stuff and playing games.

It is certainly true that the price is resistant to change.  In the local
area, RR recently increased speeds, and I believe dropped the base price
by $5, but didn't tell any of their legacy customers.  The pricing aspect
in particular has been somewhat obscured; when I called in to have a
circuit updated to Road Runner Turbo, the agent merely said that it would
only cost $5/month more (despite it being $10/month more, since the base
service price had apparently dropped $5).  They seemed hesitant to explain.

Michael Holstein writes:
 The problem is the inability of the physical media in TWC's case (coax) 
 to support multiple simultaneous users. They've held off infrastructure 
 upgrades to the point where they really can't offer unlimited 
 bandwidth. TWC also wants to collect on their unlimited package, but 
 only to the 95% of the users that don't really use it,

Absolutely.  If you can do that, you're good to go.  Except that you run
into this dynamic where someone else comes in and picks the fruit.  In
Road Runner's case, they're going to be competing with AT&T, who is going
to be trying to pick off those $35-$40/mo low volume customers into a
less expensive $15-$20/mo plan.

 and it appears 
 they don't see working to accommodate the other 5% as cost-effective.

Certainly, but only if they can retain the large number of high-paying 
customers who make up that 95%.

 My guess is the market will work this out. As soon as it's implemented, 
 you'll see AT&T commercials in that town slamming cable and saying how 
 DSL is really unlimited.

Especially if AT&T can make it really unlimited.  Their speeds do not
quite compete with Road Runner Turbo, but for 6.0/768 here, AT&T Y! is
$34.99/mo, while RR appears to be $40(?) for 7.0/512.

The difference is that's the top-of-the-line legacy (non-U-verse) AT&T
DSL offering; there are less expensive ones.  Getting back to what Roderick
Beck said, AT&T is *effectively* offering mixed pricing schemes, simply by
offering various DSL speeds.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: request for help w/ ATT and terminology

2008-01-18 Thread Joe Greco

 On Thu, 17 Jan 2008 17:35:30 -0500
 [EMAIL PROTECTED] wrote:
  On Thu, 17 Jan 2008 21:29:37 GMT, Steven M. Bellovin said:
  
   You don't always want to rely on the DNS for things like firewalls
   and ACLs.  DNS responses can be spoofed, the servers may not be
   available, etc.  (For some reason, I'm assuming that DNSsec isn't
   being used...)
  
  Been there, done that, plus enough other stupid DNS tricks and
  stupid /etc/host tricks to get me a fair supply of stories best
  told over a pitcher of Guinness down at the Underground..
 
 I prefer nice, hoppy ales to Guinness, but either works for stories..

Heh.

  *Choosing* to hardcode rather than use DNS is one thing.  *Having* to
  hardcode because the gear is too stupid (as Joe Greco put it) is
  however Caveat emptor no matter how you slice it...
 
 Mostly.  I could make a strong case that some security gear shouldn't
 let you do the wrong thing.  (OTOH, my preferred interface would do the
 DNS look-up at config time, and ask you to confirm the retrieved
 addresses.)  You can even do that look-up on a protected net in some
 cases.

It's all nice and trivial to generate scenarios that could work, but the
cold, harsh reality of the world is full of scenarios that don't work.

Exempting /etc/resolv.conf (or the Windows equivalent) from blame could be
considered equally silly, because DHCP certainly allows discovery of
DNS servers ... yet we already exempted that scenario.  Why not exempt
more difficult scenarios, such as: how do you use DNS to specify a
firewall rule that (currently) allows 123.45.67.0/24?  Your suggested
interface for single addresses is actually fairly reasonable, but is not
comprehensive by a long shot, and still has some serious issues (such as
what happens when the firewall in question is under someone else's
administrative control: the config-time nature of the DNS resolution
means that the use of DNS doesn't actually result in your being able to
get that update installed without their intervention).

It's also worth remembering that hardware manufactured fairly recently
still didn't have DNS lookup capabilities; I think only our newest
generation of APC RPDU's has it, for example, and it doesn't do it for
ACL purposes.  The CPU's in some of these things are tiny, as are the
memories, ROM/flash, etc.  And it's simply unfair to say that equipment
older than N years must be obsolete.

As much as I'd like it to be easy to renumber, I'd say that it's
unreasonable to assume that it is actually trivial to do so.  Further,
the real experiences of those who have had to undergo such an ordeal
should represent some hard-learned wisdom to those working on
autoconfiguration for IPv6; if we don't learn from our v4 problems,
then that's stupid.  (That's primarily why this is worth discussing)

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: request for help w/ ATT and terminology

2008-01-17 Thread Joe Greco

 P.S. if your network is all in one cage, it can't be that difficult
 to just renumber it all into ATT address space.

Oh, come on, let's not be naive.  It's perfectly possible to have a common
situation where it would be exceedingly difficult to do this.  Anything
that gets wired in by IP address, particularly on remote computers, would
make this a killer.  That could include things such as firewall rules/ACL's,
recursion DNS server addresses, VPN adapters, VoIP equipment with stacks too
stupid to do DNS, etc.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: request for help w/ ATT and terminology

2008-01-17 Thread Joe Greco

 On Thu, 17 Jan 2008 09:15:30 CST, Joe Greco said:
  make this a killer.  That could include things such as firewall rules/ACL's,
  recursion DNS server addresses, VPN adapters, VoIP equipment with stacks too
  stupid to do DNS, etc.
 
 I'll admit that fixing up /etc/resolv.conf and whatever the Windows equivalent
 is can be a pain - but for the rest of it, if you bought gear that's too
 stupid to do DNS, I have to agree with Leigh's comment: Caveat emptor.

Wow, as far as I can tell, you've pretty much condemned most firewall
software and devices then, because I'm really not aware of any serious
ones that will successfully implement rules such as "allow from
123.45.67.0/24" via DNS.  Besides, if you've gone to the trouble of
acquiring your own address space, it is a reasonable assumption that 
you'll be able to rely on being able to tack down services in that
space.  Being expected to walk through every bit of equipment and
reconfigure potentially multiple subsystems within it is unreasonable.

Taking, as one simple example, an older managed ethernet switch, I see
the IP configuration itself, the SNMP configuration (both filters and
traps), the ACL's for management, the time server IP, etc.  I guess if
you feel that Bay Networks equipment was a bad buy, you're welcome to
that opinion.  I can probably dig up some similar Cisco gear.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Looking for geo-directional DNS service

2008-01-16 Thread Joe Greco

 [EMAIL PROTECTED] (Joe Greco) writes:
  ...
  So, anyways, would it be entertaining to discuss the relative merits of
  various DNS implementations that attempt to provide geographic answers 
  to requests, versus doing it at a higher level?  (I can hear everyone 
  groaning now, and some purist somewhere probably having fits)
 
 off topic.  see http://lists.oarci.net/mailman/listinfo/dns-operations.

Possibly, but I found myself removed from that particular party, and the
request was on NANOG, not on dns-operations.  I was under the impression 
that dns-operations was for discussion of DNS operations, not 
implementation choices.  Whether NANOG is completely appropriate remains 
to be seen; I haven't heard a ML complaint though.  There would ideally 
be a list for implementation and design of such things, but I've yet to 
see one that's actually useful, which is, I suspect, why NANOG got a 
request like this.

Besides, if you refer back to the original message in this thread, where I
was driving would be much closer to being related to what the OP was 
interested in.

Hank was saying:

 What I am looking for is a commercial DNS service.
 [...]
 Another service I know about is the Ultradns (now Neustar) Directional DNS:
 http://www.neustarultraservices.biz/solutions/directionaldns.html
 But this service is based on statically defined IP responses at each of
 their 14 sites so there is no proximity checking done.

So there are three basic ways to go about it,

1) Totally static data (in which case anycast and directionality are not a
   consideration, at least at the DNS level), which does not preclude doing
   things at a higher level.

2) Simple anycast, as in the Directional DNS service Hank mentioned, which
   has thoroughly been thrashed into the ground as to why it ain't great,
   which it seems Hank already understood.

3) Complex DNS implementations.  Such as ones that will actually do active
   probes, etc.  Possibly combined with 1) even.

I was trying to redirect the dead anycast horse beating back towards a 
discussion of the relative merits of 1) vs 3).  The largest problems with 
3) seem to revolve around the fact that you generally have no idea where 
a request /actually/ originated, and you're pinning your hopes on the 
client's resolver having some vague proximity to the actual client. 
Redirection at a higher level is going to be desirable, but is not always 
possible, such as for protocols like NNTP.

I'm happy to be criticized for guiding a conversation back towards being
relevant...  :-)

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Joe Greco

 On Mon, 14 Jan 2008 18:43:12 -0500
 William Herrin [EMAIL PROTECTED] wrote:
  On Jan 14, 2008 5:25 PM, Joe Greco [EMAIL PROTECTED] wrote:
So users who rarely use their connection are more profitable to the ISP.
  
   The fat man isn't a welcome sight to the owner of the AYCE buffet.
  
  Joe,
  
  The fat man is quite welcome at the buffet, especially if he brings
  friends and tips well.
 
 But the fat man isn't allowed to take up residence in the restaurant
 and continuously eat - he's only allowed to be there in bursts, like we
 used to be able to assume people would use networks they're connected
 to.  Left-running P2P is the fat man never leaving and never stopping
 eating.

Time to stop selling the always on connections, then, I guess, because
it is always on - not P2P - which is the fat man never leaving.  P2P
is merely the fat man eating a lot while he's there.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Joe Greco

 Joe Greco wrote:
  Time to stop selling the always on connections, then, I guess, because
  it is always on - not P2P - which is the fat man never leaving.  P2P
  is merely the fat man eating a lot while he's there.
 
 As long as we're keeping up this metaphor, P2P is the fat man who says 
 he's gonna get a job real soon but dude life is just SO HARD and crashes 
 on your couch for three weeks until eventually you threaten to get the 
 cops involved because he won't leave. Then you have to clean up 
 thirty-seven half-eaten bags of Cheetos.

I have no idea what the networking equivalent of thirty-seven half-eaten
bags of Cheetos is, can't even begin to imagine what the virtual equivalent
of my couch is, etc.  Your metaphor doesn't really make any sense to me,
sorry.

Interestingly enough, we do have a pizza-and-play place a mile or two
from the house, you pay one fee to get in, then quarters (or cards or
whatever) to play games - but they have repeatedly answered that they
are absolutely and positively fine with you coming in for lunch, and 
staying through supper.  And we have a discount card, which they used
to give out to local businesspeople for business lunches, on top of it.

 Every network has limitations, and I don't think I've ever seen a 
 network that makes every single end-user happy with everything all the 
 time. You could pipe 100Mbps full-duplex to everyone's door, and someone 
 would still complain because they don't have gigabit access to lemonparty.

Certainly.  There will be gigabit in the future, but it isn't here (in
the US) just yet.  That has very little to do with the deceptiveness
inherent in selling something when you don't intend to actually provide
what you advertised.

 Whether those are limitations of the technology you chose, limitations 
 in your budget, policy restrictions, whatever.
 
 As long as you fairly disclose to your end-users what limitations and 
 restrictions exist on your network, I don't see the problem.

You've set out a qualification that generally doesn't exist.  For example,
this discussion included someone from a WISP, Amplex, I believe, that 
listed certain conditions of use on their web site, and yet it seems like
they're un{willing,able} (not assigning blame/fault/etc here) to deliver
that level of service, and using their inability as a way to justify
possibly rate shaping P2P traffic above and beyond what they indicate on 
their own documents.

In some cases, we do have people burying restrictions in lengthy T&C
documents, such as some of the 3G cellular providers who advertise
"Unlimited Internet"(*) data cards, but then have a slew of (*) items that
are restricted - but only if you dig into the fine print on Page 3 of the
T&C.  I'd much prefer that the advertising be honest and up front, and
that ISP's not be allowed to advertise unlimited service if they are
going to place limits, particularly significant limits, on the service.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Looking for geo-directional DNS service

2008-01-15 Thread Joe Greco

 Except Hank is asking for true topological distance (latency /  
 throughput / packetloss).
 
 Anycast gives you BGP distance, not topological distance.
 
 Say I'm in Ashburn and peer directly with someone in Korea where he  
 has a node (1 AS hop), but I get to his node in Ashburn through my  
 transit provider (2 AS hops), guess which node anycast will pick?

Ashburn and other major network meet points are oddities in a very complex
network.  It would be fair to note that anycast is likely to be reasonably
effective if deployed in a manner that was mindful of the overall Internet
architecture, and made allowances for such things.

Anycast by itself probably isn't entirely desirable in any case, and could
ideally be paired up with other technologies to fix problems like this.

I haven't seen many easy ways to roll your own geo-DNS service.  The ones
I've done in the past simply built in knowledge of the networks in question,
and where such information wasn't available, took a best guess and then may
have done a little research after the fact for future queries.  This isn't
as comprehensive as doing actual latency / throughput / packet-loss checking.
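
For what it's worth, that built-in knowledge approach more or less reduces
to a longest-match table keyed on the querying resolver's address.  A
minimal sketch (Python; the prefixes and POP addresses are documentation
placeholders, and a real deployment would hang this off an actual DNS
server rather than a print statement):

import ipaddress

# Hypothetical mapping of resolver prefixes to the A record of the
# nearest POP; anything unlisted falls through to a default.
POP_MAP = [
    (ipaddress.ip_network("192.0.2.0/24"),    "203.0.113.10"),  # e.g. Chicago
    (ipaddress.ip_network("198.51.100.0/24"), "203.0.113.20"),  # e.g. Amsterdam
]
DEFAULT_POP = "203.0.113.10"

def answer_for(resolver_ip):
    """Pick a POP based on who asked.  This sees the *resolver's*
    address, not the end client's - the fundamental weakness noted
    above."""
    addr = ipaddress.ip_address(resolver_ip)
    matches = [(net, pop) for net, pop in POP_MAP if addr in net]
    if not matches:
        return DEFAULT_POP
    # Longest prefix wins if the table ever holds nested networks.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(answer_for("192.0.2.53"))   # mapped: 203.0.113.10
print(answer_for("198.18.0.1"))   # unmapped: falls to the default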

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: FW: ISPs slowing P2P traffic...

2008-01-15 Thread Joe Greco

 Joe Greco wrote:
  I have no idea what the networking equivalent of thirty-seven half-eaten
  bags of Cheetos is, can't even begin to imagine what the virtual equivalent
  of my couch is, etc.  Your metaphor doesn't really make any sense to me,
  sorry.
 
 There isn't one. The fat man metaphor was getting increasingly silly, 
 I just wanted to get it over with.

Actually, it was doing pretty well up 'til near the end.  Most of the
amusing stuff was [off-list.]  The interesting conclusion to it was that
obesity is a growing problem in the US, and that the economics of an AYCE
buffet are changing - mostly for the owner.

  Interestingly enough, we do have a pizza-and-play place a mile or two
  from the house, you pay one fee to get in, then quarters (or cards or
  whatever) to play games - but they have repeatedly answered that they
  are absolutely and positively fine with you coming in for lunch, and 
  staying through supper.  And we have a discount card, which they used
  to give out to local businesspeople for business lunches, on top of it.
 
 That's not the best metaphor either, because they're making money off 
 the games, not the buffet. (Seriously, visit one of 'em, the food isn't 
 very good, and clearly isn't the real draw.) 

True for Chuck E Cheese, but not universally so.  I really doubt that
Stonefire is expecting the people to whom they give their $5.95 business
lunch card to go play games.  Their pizza used to taste like cardboard
(bland), but they're much better now.  The facility as a whole is designed
to address the family, and adults can go get some Asian or Italian pasta,
go to the sports theme area that plays ESPN, and only tangentially notice
the game area on the way out.  The toddler play areas (<8yr) are even free.

http://www.whitehutchinson.com/leisure/stonefirepizza.shtml

This is falling fairly far from topicality for NANOG, but there is a
certain aspect here which is exceedingly relevant - that businesses
continue to change and innovate in order to meet customer demand.

 I suppose you could market 
 Internet connectivity this way - unlimited access to HTTP and POP3, and 
 ten free SMTP transactions per month, then you pay extra for each 
 protocol. That'd be an awfully tough sell, though.

Possibly.  :-)

  As long as you fairly disclose to your end-users what limitations and 
  restrictions exist on your network, I don't see the problem.
  
  You've set out a qualification that generally doesn't exist.
 
 I can only speak for my network, of course. Mine is a small WISP, and we 
 have the same basic policy as Amplex, from whence this thread 
 originated. Our contracts have relatively clear and large (at least by 
 the standards of a contract) no p2p disclaimers, in addition to the 
 standard no traffic that causes network problems clause that many of 
 us have. The installers are trained to explicitly mention this, along 
 with other no-brainer clauses like don't spam.

Actually, that's a difference, that wasn't what [EMAIL PROTECTED] was talking
about.  Amplex's web site said they would rate limit you down to the minimum
promised rate.  That's disclosed, which would be fine, except that it
apparently isn't what they are looking to do, because their oversubscription
rate is still too high to deliver on their promises.

 When we're setting up software on their computers (like their email 
 client), we'll look for obvious signs of trouble ahead. If a customer 
 already has a bunch of p2p software installed, we'll let them know they 
 can't use it, under pain of find a new ISP.
 
 We don't tell our customers they can have unlimited access to do 
 whatever the heck they want. The technical distinctions only matter to a 
 few customers, and they're generally the problem customers that we don't 
 want anyway.

There is certainly some truth to that.  Getting rid of the unprofitable
customers is one way to keep things good.  However, you may find yourself
getting rid of some customers who merely want to make sure that their ISP
isn't going to interfere at some future date.  

 To try to make this slightly more relevant, is it a good idea, either 
 technically or legally, to mandate some sort of standard for this? I'm 
 thinking something like the Nutrition Facts information that appears 
 on most packaged foods in the States, that ISPs put on their Web sites 
 and advertisements. I'm willing to disclose that we block certain ports 
 for our end-users unless they request otherwise, and that we rate-limit 
 certain types of traffic. 

ABSOLUTELY.  We would certainly seem more responsible, as providers, 
if we disclosed what we were providing.

 I can see this sort of thing getting confusing 
 and messy for everyone, with little or no benefit to anyone. Thoughts?

It certainly can get confusing and messy.

It's a little annoying to help someone go shopping for broadband and then
have to dig out the dirty details in the T&C, if they're even there.

In a similar way, I get highly annoyed

Re: Looking for geo-directional DNS service

2008-01-15 Thread Joe Greco

 Unless you define topologically nearest as what BGP picks, that is  
 incorrect.  And even if you do define topology to be equivalent to  
 BGP, that is not what is of the greatest interest.   
 Goodput (latency, packet loss, throughput) is far more important.   
 IMHO.

Certainly, but given some completely random transaction, there's still
going to be a tendency for anycast to be some sort of improvement over
pure random chance.  1000 boneheaded anycast implementations cannot be
wrong.  :-)  That you don't get it right every time doesn't make it
wrong every time.

I'm certainly not arguing for anycast-only solutions, and said so.  I'll
be happy to consider it as a first approximation to getting something to
a topologically nearby network, though as I also said, there needs to
be some care taken in the implementation.

Anycast can actually be very powerful within a single AS, where of course
you have some knowledge of the network and predictability.  You lose some
(probably a lot) of that in the translation to the public Internet, but
I'm going to go out on a bit of a limb and guess that if I were to stick an
anycast node in Chicago, Sydney, and Amsterdam, I'm very likely to be able
to pick my networks such that I get a good amount of localization.

Of course, nobody's perfect, and it probably needs to be a data-driven 
business if you really want well-optimized redirection.  However, that's
a bit of magic.  Even the fabled Akamai used to direct us to some ISP up
in Minnesota...  (BFG)

So, anyways, would it be entertaining to discuss the relative merits of
various DNS implementations that attempt to provide geographic answers 
to requests, versus doing it at a higher level?  (I can hear everyone 
groaning now, and some purist somewhere probably having fits)

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: ISPs slowing P2P traffic...

2008-01-14 Thread Joe Greco

 Geo:
 
 That's an over-simplification.  Some access technologies have different
 modulations for downstream and upstream.
 i.e. if a:b and a=b, and c:d and c<d, a+b < c+d.
 
 In other words, you're denying the reality that people download a 3 to 4
 times more than they upload and penalizing every in trying to attain a 1:1
 ratio.

So, is that actually true as a constant, or might there be some
cause-effect mixed in there?

For example, I know I'm not transferring any more than I absolutely must
if I'm connected via GPRS radio.  Drawing any sort of conclusions about
my normal Internet usage from my GPRS stats would be ... skewed ... at
best.  Trying to use that reality as proof would yield you an exceedingly
misleading picture.

During those early years of the retail Internet scene, it was fairly easy
for users to migrate to usage patterns where they were mostly downloading
content; uploading content on a 14.4K modem would have been unreasonable.
There was a natural tendency towards eyeball networks and content networks.

However, these days, more people have always on Internet access, and may
be interested in downloading larger things, such as services that might
eventually allow users to download a DVD and burn it.

http://www.engadget.com/2007/09/21/dvd-group-approves-restrictive-download-to-burn-scheme/

This means that they're leaving their PC on, and maybe they even have other
gizmos or gadgets besides a PC that are Internet-aware.

To remain doggedly fixated on the concept that an end-user is going to
download more than they upload ...  well, sure, it's nice, and makes
certain things easier, but it doesn't necessarily meet up with some of
the realities.  Verizon recently began offering a 20M symmetrical FiOS
product.  There must be some people who feel differently.

So, do the modulations of your access technologies dictate what your
users are going to want to do with their Internet in the future, or is it
possible that you'll have to change things to accommodate different
realities?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: FW: ISPs slowing P2P traffic...

2008-01-14 Thread Joe Greco

 From my experience, the Internet IP Transit Bandwidth costs ISP's a lot
 more than the margins made on Broadband lines.
 
 So users who rarely use their connection are more profitable to the ISP.

The fat man isn't a welcome sight to the owner of the AYCE buffet.

What exactly does this imply, though, from a networking point of view?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: ISPs slowing P2P traffic...

2008-01-13 Thread Joe Greco
So, the way I would read this, as a customer, is that my P2P traffic would
most likely eventually wind up being limited to 256kbps up, unless I am on
the business service, where it'd be 768kbps up.  This seems quite fair and
equitable.  It's clearly and unambiguously disclosed, it's still 
guaranteeing delivery of the minimum class of service being purchased, etc.

If such an ISP were unable to meet the commitment that it's made to
customers, then there's a problem - and it isn't the customer's problem,
it's the ISP's.  This ISP has said We guarantee our speeds will be as
good or better than we specify - which is fairly clear.

You might want to check to see if you've made any guarantees about the
level of service that you'll provide to your customers.  If you've made
promises, then you're simply in the unenviable position of needing to
make good on those.  Operating an IP network with a basic SLA like this
can be a bit of a challenge.  You have to be prepared to actually make
good on it.  If you are unable to provide the service, then either there
is a failure at the network design level or at the business plan level.

One solution is to stop accepting new customers where a tower is already
operating at a level which is effectively rendering it full.
 
... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: ISPs slowing P2P traffic...

2008-01-13 Thread Joe Greco

 Joe Greco wrote,
  There are lots of things that could heavily stress your upload channel.
  Things I've seen would include:
 
  1) Sending a bunch of full-size pictures to all your friends and family,
 which might not seem too bad until it's a gig worth of 8-megapixel 
 photos and 30 recipients, and you send to each recipient separately,
  2) Having your corporate laptop get backed up to the company's backup
 server,
  3) Many general-purpose VPN tasks (file copying, etc),
  4) Online gaming (capable of creating a vast PPS load, along with fairly
  steady but low volume traffic),
 
  etc.  P2P is only one example of things that could be stressful.
   
 These things all happen - but they simply don't happen 24 hours a day, 7 
 days a week.   A P2P client often does.

It may.  Some of those other things will, too.  I picked 1) and 2) as
examples where things could actually get busy for long stretches of
time.

In this business, you have to realize that the average bandwidth use of
a residential Internet connection is going to grow with time, as new and
wonderful things are introduced.  In 1995, the average 14.4 modem speed
was perfectly fine for everyone's Internet needs.  Go try loading web
pages now on a 14.4 modem...  even web pages are bigger.

 snip for brevity
 
  The questions boil down to things like:
 
  1) Given that you are unable to provide unlimited upstream bandwidth to your
 end users, what amount of upstream bandwidth /can/ you afford to
 provide?
   
 Again - it depends.   I could tell everyone they can have 56k upload 
 continuous and there would be no problem from a network standpoint - but 
 it would suck to be a customer with that restriction. 

If that's the reality, though, why not be honest about it?

 It's a balance between providing good service to most customers while 
 leaving us options.

The question is a lot more complex than that.  Even assuming that you have
unlimited bandwidth available to you at your main POP, you are likely to
be using RF to get to those remote tower sites, which may mean that there 
are some specific limits within your network, which in turn implies other
things.

  What Amplex won't do...
 
  Provide high burst speed if  you insist on running peer-to-peer file 
  sharing
  on a regular basis.  Occasional use is not a problem.   Peer-to-peer
  networks generate large amounts of upload traffic.  This continuous traffic
  reduces the bandwidth available to other customers - and Amplex will rate
  limit your connection to the minimum rated speed if we feel there is a
  problem. 
  
 
  So, the way I would read this, as a customer, is that my P2P traffic would
  most likely eventually wind up being limited to 256kbps up, unless I am on 
  the business service, where it'd be 768kbps up.  

 Depends on your catching our attention.  As a 'smart' consumer you might 
 choose to set the upload limit on your torrent client to 200k and the 
 odds are pretty high we would never notice you.

... today.  And since 200k is less than 256k, I would certainly expect
that to be true tomorrow, too.  However, it might not be, because your
network may not grow easily to accommodate more customers, and you may
perceive it as easier to go after the high bandwidth users, yes?

 For those who play nicely we don't restrict upload bandwidth but leave 
 it at the capacity of the equipment (somewhere between 768k and 1.5M).
 
 Yep - that's a rather subjective criteria.   Sorry.
 
  This seems quite fair and
  equitable.  It's clearly and unambiguously disclosed, it's still 
  guaranteeing delivery of the minimum class of service being purchased, etc.
 
  If such an ISP were unable to meet the commitment that it's made to
  customers, then there's a problem - and it isn't the customer's problem,
  it's the ISP's.  This ISP has said We guarantee our speeds will be as
  good or better than we specify - which is fairly clear.
 
 We try to do the right thing - but taking the high road costs us when 
 our competitors don't.   I would like to think that consumers are smart 
 enough to see the difference but I'm becoming more and more jaded as 
 time goes on

You've picked a business where many customers aren't technically
sophisticated.  That doesn't necessarily make it right to rip them
off - even if your competitors do.

  One solution is to stop accepting new customers where a tower is already
  operating at a level which is effectively rendering it full.
 
 Unfortunately full is an ambiguous definition.Is it when:
 
 a)  Number of Customers * 256k up = access point limit?
 b)  Number of Customers * 768k down = access point limit?
 c)  Peak upload traffic = access point limit?
 d)  Peak download traffic = access point limit?
 (e) Average ping times start to increase?
 
 History shows (a) and (b) occur well before the AP is particularly 
 loaded and would be wasteful of resources.

Certainly, but it's the only way to actually be able to guarantee

Re: ISPs slowing P2P traffic...

2008-01-13 Thread Joe Greco

 It may.  Some of those other things will, too.  I picked 1) and 2) as
 examples where things could actually get busy for long stretches of
 time.
 
 The wireless ISP business is a bit of a special case in this regard, where 
 P2P traffic is especially nasty.
 
 If I have ten customers uploading to a Web site (some photo sharing site, or 
 Web-based email, say), each of whom is maxing out their connection, that's 
 not a problem.

That is not in evidence.  In fact, quite the opposite...  given the scenario
previously described (1.5M tower backhaul, 256kbps customer CIR), ten
customers maxing their CIR is 2.56Mbps of offered load against a 1.5Mbps
backhaul - definitely a problem.  The data doesn't become smaller simply
because it is Web traffic.

 If I have one customer running Limewire or Kazaa or whatever P2P software all 
 the cool kids are running these days, even if he's rate-limited himself to 
 half his connection's maximum upload speed, that often IS a problem.

That is also not in evidence, as it is well within what the link should be
able to handle.

 It's not the bandwidth, it's the number of packets being sent out.

Well, PPS can be a problem.  Certainly it is possible to come up with
hardware that is unable to handle the packets per second, and wifi can
be a bit problematic in this department, since there's such a wide
variation in the quality of equipment, and even with the best, performance
in the PPS arena isn't generally what I'd consider stellar.  However, I'm
going to guess that there are online gaming and VoIP applications which are
just as stressful.  Anyone have a graph showing otherwise (preferably
packet size and PPS figures on a low speed DSL line, or something like
that?)
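
Absent a graph, the back-of-the-envelope arithmetic is easy enough (Python;
the traffic profiles below are illustrative assumptions, not measurements
from anybody's network):

def pps(bits_per_second, packet_bytes):
    """Packets per second needed to fill a link at a given packet size."""
    return bits_per_second / (packet_bytes * 8.0)

# A 1.5Mbps backhaul filled with 1500-byte bulk-transfer packets:
print(round(pps(1500000, 1500)))   # ~125 pps

# The same link filled with 100-byte small packets:
print(round(pps(1500000, 100)))    # ~1875 pps

# One G.711 VoIP call: 50 packets each way per second at roughly 200
# bytes on the wire - ~100 pps for well under 200kbps of bandwidth.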

 One customer, talking to twenty or fifty remote hosts at a time, can kill a 
 wireless access point in some instances. All those little tiny packets 

Um, I was under the impression that FastTrack was based on TCP...?  I'm not
a file-sharer, so I could be horribly wrong.  But if it is based on TCP,
then one would tend to assume that actual P2P data transfers would appear
to be very similar to any other HTTP (or more generally, TCP) traffic - and
for transmitted data, the packets would be large.  I was actually under the
impression that this was one of the reasons that the DPI vendors were
successful at selling the D in DPI.

 tie up the AP's radio time, and the other nine customers call and complain.

That would seem to be an implementation issue.  I don't hear WISP's crying
about gaming or VoIP traffic, so apparently those volumes of packets per
second are fine.  The much larger size of P2P data packets should mean that 
the rate of possible PPS would be lower, and the number of individual remote 
hosts should not be of particular significance, unless maybe you're trying 
to implement your WISP on consumer grade hardware.

I'm not sure I see the problem.

 One customer just downloading stuff, disabling all the upload features in 
 their P2P client of choice, often causes exactly the same problem, as the 
 kids tend to queue up 17 CDs worth of music then leave it running for a week. 
 The software tries its darnedest to find each of those hundreds of different 
 files, downloading little pieces of each of 'em from multiple servers. 

Yeah, but "little pieces" still work out to fairly sizeable chunks, when 
you look at it from the network point of view.  It isn't trying to download
a 600MB ISO with data packets that are only 64 bytes of content each.

 We go out of our way to explain to every customer that P2P software isn't 
 permitted on our network, and when we see it, we shut the customer off until 
 that software is removed. It's not ideal, but given the limitations of 
 wireless technology, it's a necessary compromise. I still have a job, so we 
 must have a few customers who are alright with this limitation on their 
 broadband service.
 
 There's more to bandwidth than just bandwidth.

If so, there's also Internet, service, and provider in ISP.

P2P is nasty because it represents traffic that wasn't planned for or
allowed for in many business models, and because it is easy to perceive
that traffic as unnecessary or illegitimate.

For now, you can get away with placing such a limit on your broadband
service, and you still have a job, but there may well come a day when
some new killer service pops up.  Imagine, for example, TiVo deploying
a new set of video service offerings that bumped them back up into being
THE device of the year (don't think TiVo?  Maybe Apple, then...  who
knows?)  Downloads interesting content for local storage.  Everyone's
buzzing about it.  The lucky 10% buy it.

Now the question that will come back to you is, why can't your network
deliver what's been promised?

The point here is that there are people promising things they can't be
certain of delivering.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.

Re: ISPs slowing P2P traffic...

2008-01-13 Thread Joe Greco

  P2P based CDN's are a current buzzword; 

P2P based CDN's might be a current buzzword, but are nothing more than
P2P technology in a different cloak.  No new news here.

 This should prove to be interesting.   The Video CDN model will be a 
 threat to far more operators than P2P has been to the music industry.
 
 Cable companies make significant revenue from video content (ok - that 
 was obvious).Since they are also IP Network operators they have a 
 vested interest in seeing that video CDN's  that bypass their primary 
 revenue stream fail.The ILEC's are building out fiber mostly so that 
 they can compete with the cable companies with a triple play solution.   
 I can't see them being particularly supportive of this either.  As a 
 wireless network operator I'm not terribly interested in helping 3rd 
 parties that cause issue on my network with upload traffic (rant away 
 about how were getting paid by the end user to carry this traffic...).

At the point where an IP network operator cannot comprehend (or, worse,
refuses to comprehend) that every bit received on the Internet must be
sourced from somewhere else, then I wish them the best of luck with the
legislated version of network neutrality that will almost certainly
eventually result from their shortsighted behaviour.

You do not get a free pass just because you're a wireless network
operator.  That you've chosen to model your network on something other
than a 1:1 ratio isn't anyone else's problem, and if it comes back to
haunt you, oh well.  It's nice that you can take advantage of the fact
that there are currently content-heavy and eyeball-heavy networks, but
to assume that it must stay that way is foolish.

It's always nice to maintain some particular model for your operations
that is beneficial to you.  It's clearly ideal to be able to rely on
overcommit in order to be able to provide the promises you've made to
customers, rather than relying on actual capacity.  However, this will
assume that there is no fundamental change in the way things work, which
is a bad assumption on the Internet.

This problem is NOTHING NEW, and in fact, shares some significant
parallels with the way Ma Bell used to bill out long distance vs local 
service, and then cried and whined about how they were being undercut
by competitive LD carriers.  They ... adapted.  Can you?  Will you?

And yes, I realize that this borders on unfair-to-the-(W)ISP, but if
you are incapable of considering and contemplating these sorts of
questions, then that's a bad thing.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Assigning IPv6 /48's to CPE's?

2007-12-31 Thread Joe Greco
That most websites care about a specific PC behind a NAT gateway
as opposed to the small set of users behind this IP address is a minor
distinction at best - they can still track you, and since most households
only have a single computer, it's best to assume they can already deal with
the more difficult realities of multiple users on a single computer.

Given the ready availability of addresses, it may not be that long before
we start seeing the anti-NAT happen; a single PC that utilizes a vaguely
RFC3041-like strategy, but instead of allocating a single address at a
time, it may allocate a /pool/ of them from the local subnet, and use a
different IPv6 address for each outgoing request.  Think of it as
extending the port number field into the lower bits of the address field...
I'm sure someone has a name for this already, but I have no idea what it
is.
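
The address-minting half of that idea is trivial; the hard part, which
this sketch deliberately ignores, is getting each address installed on
the interface and past the local router's neighbor discovery (Python; the
prefix is a documentation placeholder):

import ipaddress
import secrets

SUBNET = ipaddress.ip_network("2001:db8:1234:5678::/64")  # the local /64
_used = set()

def fresh_address():
    """Mint a previously unused address by pairing the local prefix
    with a random 64-bit interface ID - RFC3041-style, but one address
    per outgoing connection instead of one per time interval."""
    while True:
        addr = SUBNET[secrets.randbits(64)]   # index into the /64
        if addr not in _used:
            _used.add(addr)
            return addr

# One new source address per outgoing request:
for _ in range(3):
    print(fresh_address())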

Anyways, I suggest you run over and read 

http://www.6net.org/publications/standards/draft-vandevelde-v6ops-nap-01.txt

as it is useful foundation material to explain IPv6 strategies and how they
differ from IPv4.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-26 Thread Joe Greco

 If the ops community doesn't provide enough addresses and a way to use
 them then the vendors will do the same thing they did in v4. It's not
 clear to me where their needs don't coincide in this case.
 
 there are three legs to the tripod
 
   network operator
   user
   equipment manufacturer
 
 They have (or should have) a mutual interest in:
 
   Transparent and automatic configuration of devices.
   The assignment of globally routable addresses to internet
   connected devices
   the user having some control over what crosses the boundary
   between their network and the operators.

Yes, well, that sounds fine, but I think that we've already hashed over
at least some of the pressures on businesses in this thread.  I've
tried to focus on what's in the Subject:, and have mostly ignored other
problems, which would include things such as cellular service, where I
suspect that the service model is such that they'll want to find a way
to allocate users a /128 ...

There is, further, an effect which leads to equipment mfr being split
into network equipment mfr and CPE equipment mfr, because the CPE guys 
will be trying to build things that'll work for the end user, working
around any brokenness, etc.  The problem space is essentially polarized, 
between network operators who have their own interests, and users who
have theirs.

So, as /engineers/ for the network operators, the question is, what can
we do to encourage/coerce/force the businesses on our side of the 
equation to allocate larger rather than smaller numbers of bits, or find
other solutions?

What could we do to encourage, or better yet, mandate, that an ISP end-
user connection should be allocated a minimum of /56, even if it happens 
to be a cellular service?  ( :-) )

What do we do about corporate environments, or any other environment where
there may be pressure to control topology to avoid DHCP PD to devices
added to the network on an ad-hoc basis?

Is it actually an absolutely unquestionable state of affairs that the
smallest autoconfigurable subnet is a /64?  Because if not, there are
options there ...  but of course, that leads down a road where an ISP may
not want to allocate as much as a /64 ...

What parts of this can we tackle through RIR policy?  RFC requirements?
Best practice?  Customer education?  ( :-) )  Other ideas?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-25 Thread Joe Greco
Factors such as whether or not it
actually winds up as cheaper to support a single address space size on
backroom systems will all work to shape what actually happens.

  So, the point is, as engineers, let's not be completely naive.  Yes, we
  /want/ end-users to receive a /56, maybe even a /48, but as an engineer,
  I'm going to assume something more pessimistic.  If I'm a device designer,
  I can safely do that, because if I don't assume that a PD is going to be
  available and plan accordingly, then my device is going to work in both
  cases, while the device of someone who has relied on PD is going to break
  when it isn't available.
 
 Assuming that PD is available is naive.  However, assuming it is not is
 equally naive. 

No, it's not equally naive.  The bridging scenario is likely to work in
all cases, therefore, assuming bridging as a least common denominator is
actually pretty smart - even though I would prefer to see a full
implementation that works in all cases.  Assume the worst, hope for the
best.  If that's naive, well, then it's all a lost cause.  You can call
it coldly cynical all you'd like, though.  ;-)

 The device must be able to function in both circumstances if possible,
 or should handle the case where it can't function in a graceful and
 informative manner.

That much is certain.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-24 Thread Joe Greco

 It's likely that the device may choose to nat when they cannot obtain a
 prefix... pd might be desirable but if you can't then the alternative is
 easy.

I thought we were all trying to discourage NAT in IPv6.  Clearly, NAT
solves the problem ... while introducing 1000 new ones.  :-/

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-24 Thread Joe Greco

 Joe Greco wrote:
 [..]
  Okay, here, let me make it reaaally simple.
 
 Yes, indeed let's make it reaaally simple for you:
 
  If your ISP has been delegated a /48 (admittedly unlikely, but possible)
  for $1,250/year, and they assign you a /56, their cost to provide that
  space is ~$5.  They can have 256 such customers.
 
 Fortunately ISP's get their space per /32 and up based on how much they
 can justify, which is the amount of customers they have.
 
 As such for a /32 a single /48 is only (x / 65k) = like 20 cents or so?
 And if you are running your business properly you will have more clients
 and the price will only go down and down and down.

 Really (or should I write reaaally to add force?) if you
 as an ISP are unable to pay the RIR fees for that little bit of address
 space, then you definitely have bigger problems as you won't be able to
 pay the other bills either.

There's a difference between "unable to pay the RIR fees" and "do not deem
any business value in spending the money".  Engineers typically feel that
businesses should be ready and willing to spend more money for reasons that
the average business person won't care about.

Pretend I'm your CFO.  Explain the value proposition to me.  Here's the
(slightly abbreviated) conversation.

Well, you say we need to spend more money every year on address space.
Right now we're paying $2,250/year for our /32, and we're able to serve
65 thousand customers.  You want us to start paying $4,500/year, but Bob
tells me that we're wasting a lot of our current space, and if we were 
to begin allocating less space to customers [aside: /56 vs /48], that we
could actually serve sixteen million users for the same cash.  Is there
a compelling reason that we didn't do that from the outset?
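
For anyone who wants to check the CFO's math, the arithmetic is simple
(Python; the fee figure is the one quoted in this thread, not any RIR's
actual fee schedule):

FEE_PER_32 = 2250.0   # annual fee for a /32, per the discussion above

def customers(assignment_bits, allocation_bits=32):
    """How many end-user assignments of a given size fit in a /32."""
    return 2 ** (assignment_bits - allocation_bits)

for size in (48, 56):
    n = customers(size)
    print("/%d per customer: %10d customers, $%.5f each per year"
          % (size, n, FEE_PER_32 / n))

# /48:     65,536 customers at ~3.4 cents each per year
# /56: 16,777,216 customers at ~0.013 cents each per year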

This discussion is getting really silly; the fact of the matter is that
this /is/ going to happen.  To pretend that it isn't is simply naive.

 How high are your transit & equipment bills again, and how are you exactly
 charging your customers? ah, not by bandwidth usage, very logical!

Perhaps end-user ISP's don't charge by bandwidth usage...

 As an enduser I would love to pay the little fee for IP space that the
 LIR (ISP in ARIN land) pays to the RIR and then simply pay for the
 bandwidth that I am using + a little margin so that the ISP also earns
 some bucks and can do writeoffs on equipment and personnel.

Sure, but that's mostly fantasyland.  The average ISP is going to want to
monetize the variables.  You want more bandwidth, you pay more.  You want
more IP's, you pay more.  This is one of the reasons some of us are 
concerned about how IPv6 will /actually/ be deployed ...  quite frankly, 
I would bet that it's a whole lot more likely that an end-user gets 
assigned a /64 than a /48 as the basic class of service, and charge for 
additional bits.  If we are lucky, we might be able to s/64/56/.

I mean, yeah, it'd be great if we could mandate /48 ...  but I just can't
see it as likely to happen.

 For some magic reasons though(*), it seems to be completely ludicrous to
 do it this way, even though it would make the bill very clear and it
 would charge the right amount for the right things and not some
 arbitrary number for some other arbitrary things and then later
 complaining that people use too much bandwidth because they use
 bittorrent and other such things. For the cable folks: make upstream
 bandwidth more pricey per class than downstream, problem of
 heavy-uploaders solved as they get charged.

Sure, but that's how the real world works.  The input from engineering
folks is only one small variable in the overall scheme of things.  It is
a /mistake/ to assume that cost-recovery must be done on a direct basis.
There's a huge amount of business value in being able to say unlimited(*)
Internet service for $30/mo!  The current offerings in many places should
outline this clearly.

So, the point is, as engineers, let's not be completely naive.  Yes, we
/want/ end-users to receive a /56, maybe even a /48, but as an engineer,
I'm going to assume something more pessimistic.  If I'm a device designer,
I can safely do that, because if I don't assume that a PD is going to be 
available and plan accordingly, then my device is going to work in both 
cases, while the device of someone who has relied on PD is going to break 
when it isn't available.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-23 Thread Joe Greco

 There is a huge detent at /48, but there's a certain amount of guidance
 that can only be derived from operational experience. It's not clear to
 me why /56 would be unacceptable, particularly if you're delegating them
 to a device that already has a /64. Are one's customers attached via
 point-to-point links, or do they sit on shared broadcast domain where
 their cpe is receiving a /128 and requesting pd from the outset?
 
 When someone plugs an apple airport into a segment of the corporate lan
 should it be be able to request pd under those circumstances as well?
 how is that case different than plugging it in on a residential connection?
 
 These are issues providers can and should grapple with. 

More likely, at least some of them are fairly naive questions.

For example, /experience/ tells us that corporate LAN policies are often
implemented without regard to what we, as Internet engineers, would
prefer, so I can guarantee you with a 100% certainty that there will be
at least some networks, and more than likely many networks, where you
will not be able to simply request a prefix delegation and have that work
the way you'd like.  There will always be some ISP who has delegated, or
some end site who has received, a far too close to being just large
enough allocation, and so even if we assume that every router vendor
and IPv6 implementation from here to eternity has no knobs to disable
prefix delegation, simple prefix exhaustion within an allocation will be 
a problem.  All the screams of but they should have been allocated more
will do nothing to change this.

Further, if we consider, for a moment, a world where prefix delegation is
the only method of adding something like an Apple Airport to an existing
network, this is potentially encouraging the burning of /64's for the
addition of a network with perhaps a single client.  That's perfectly fine,
/as long as/ networks are allocated sufficient resources.  This merely
means that from a fairly pessimistic viewpoint, IPv6 is actually a 64-bit
address space for purposes of determining how much address space is
required.

So, from the point of view of someone manufacturing devices to attach to
IPv6 networks, I have some options.

I can:

1) Assume that DHCP PD is going to work, and that the end user will have
   a prefix to delegate, which might be valid or it might not.  This leaves
   me in the position of having to figure out a backup strategy, because I
   do not want users returning my device to Best Buy because it don't
   work.  The backup strategy is bridging.

2) Assume that DHCP PD is not going to work, and make bridging the default
   strategy.  DHCP PD can optionally be a configurable thing, or autodetect,
   or whatever, but it will not be mandatory.

I am being facetious here, of course, since only one of those is really
viable in the market.  Anyone who thinks otherwise is welcome to explain to
me what's going to happen in the case where there are no P's to D.
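
In device-logic terms, the viable option boils down to something like the
following sketch (Python; request_prefix_delegation() stands in for a real
DHCPv6-PD client and is stubbed out here, since the point is the fallback
decision, not the protocol):

def request_prefix_delegation(timeout=10):
    """Stub for a real DHCPv6-PD solicit/reply exchange.  Returns a
    delegated prefix string, or None if nothing was offered in time."""
    return None   # simulate the no-Ps-to-D case

def bring_up():
    prefix = request_prefix_delegation()
    if prefix is not None:
        # Optional mode: route the delegated prefix downstream.
        print("routing delegated prefix %s" % prefix)
        return "routed"
    # Default, works-everywhere mode: bridge, so downstream hosts sit
    # directly on the upstream /64 and autoconfigure themselves.
    print("no prefix delegated; falling back to bridging")
    return "bridged"

if __name__ == "__main__":
    bring_up()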

I will leave the difference between corporate and residential as an exercise
to the reader; suffice it to say that the answers are rather obvious in the
same manner.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-23 Thread Joe Greco

 Once upon a time, Florian Weimer [EMAIL PROTECTED] said:
   Right now, we might say "wow, 256 subnets for a single end-user...
   hogwash!" and in years to come, "wow, only 256 subnets... what were we
   thinking!?"
  
   Well, what's the likelihood of the only 256 subnets problem?
  
  There's a tendency to move away from (simulated) shared media networks.
  One host per subnet might become the norm.
 
 So each host will end up with a /64?

That's a risk.  It is more like each host might end up with a /64.

Now, the thing here is, there's nothing wrong with one host per subnet.
There's just something wrong with blowing a /64 per subnet in an
environment where you have one host per subnet, and a limited amount of
bits above /64 (you essentially have /unlimited/ addresses within the 
/64, but an ISP may be paying for space, etc).

Now, understand, I /like/ the idea of /64 networks in general, but I do
have concerns about where the principle breaks down.  If we're agreed to
contemplate IPv6 as being a 64-bit address space, and then allocating 
space on that basis, I would suggest that some significant similarities 
to IPv4 appear.  In particular, a NAT gateway for IPv4 translates fairly
well into a subnet-on-a-/64 in IPv6.

That is interesting, but it may not actually reduce the confusion as to
how to proceed.

 How exactly are end-users expected to manage this?  Having a subnet for
 the kitchen appliances and a subnet for the home theater, both of which
 can talk to the subnet for the home computer(s), but not to each other,
 will be far beyond the abilities of the average home user.

Well, this gets back to what I was saying before.

At a certain point, Joe Sixpack might become sophisticated enough to have
an electrician come in and run an ethernet cable from the jack on the
fridge to his home router.  He might also be sophisticated enough to pay
$ElectronicsStore installation dep't to run an ethernet cable from the
jack on the home theater equipment to the home router.  I believe that
this may in fact have come to pass ...

Now the question is, what should happen next.

The L3 option is that the home router presents a separate /64 on each
port, and offers some firewalling capabilities.  I hinted before that I
might not be thrilled with this, due to ISP commonly controlling CPE, but
that can be addressed by making the router separate.

There's a trivial L2 option as well.  You can simply devise an L2 switch
that implements filtering policies.  Despite all the cries of "that's
not how we do it in v4!" and "we can't change the paradigm", the reality
is that this /could/ be perfectly fine.  As a matter of fact, for Joe
Sixpack, it almost certainly /is/ fine.

Joe Sixpack's policy is going to read just like what you wrote above:
"subnet for appliances, subnet for computer, subnet for theater,
with the appliances and theater only being able to talk to computer."
He's not going to care if it's an actual subnet or just a logical blob.
This is easy to do at L2 or L3.  We're more /used/ to doing it at L3,
but it's certainly workable at L2, and the interface to do so doesn't
necessarily even need to look any different, because Joe Sixpack does
not care about the underlying network topology and strategy.
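
To make that concrete, here is a minimal sketch of Joe Sixpack's policy
expressed as pure data (zone names invented); whether a box enforces it
at L2 or L3 is invisible at this level:

    # Zone pairs that may exchange traffic; everything else is denied.
    ALLOWED = {("appliances", "computer"), ("theater", "computer")}

    def permitted(src_zone, dst_zone):
        # Same-zone traffic is fine; otherwise the pair must be allowed,
        # in either direction.  appliances<->theater fails this test.
        if src_zone == dst_zone:
            return True
        return (src_zone, dst_zone) in ALLOWED or (dst_zone, src_zone) in ALLOWED

permitted("theater", "appliances") comes back False, which is exactly the
policy stated above, and nothing in it mentions subnets at all.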

I would absolutely like to see DHCP PD be usable for environments where
multiple prefixes are available and allowed, but I believe we're going
to also be needing to look at bridging.

There's /going/ to be some crummy ISP somewhere that only allocates end
users a /64, or there's /going/ to be a business with a network that will
refuse DHCP PD, and as a result there /will/ be a market for devices that
have the ability to cope.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-23 Thread Joe Greco

 If operational simplicity of fixed length node addressing is a
 technical reason, then I think it is a compelling one. If you've ever
 done any reasonable amount of work with Novell's IPX (or other fixed
 length node addressing layer 3 protocols (mainly all of them except
 IPv4!)) you'll know what I mean.
 
 I think Ethernet is also another example of the benefits of
 spending/wasting address space on operational convenience - who needs
 46/47 bits for unicast addressing on a single layer 2 network!? If I
 recall correctly from bits and pieces I've read about early Ethernet,
 the very first versions of Ethernet only had 16 bit node addressing.
 They then decided to spend/waste bits on addressing to get
 operational convenience - plug and play layer 2 networking.

The difference is that it doesn't cost anything.  There are no RIR fees,
there is no justification.  You don't pay for, or have to justify, your 
Ethernet MAC addresses.

With IPv6, there are certain pressures being placed on ISP's not to be
completely wasteful.

This will compel ISP's to at least consider the issues, and it will most
likely force users to buy into technologies that allow them to do what they
want.  And inside a /64, you have sufficient space that there's probably
nothing you can't do.  :-)

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-23 Thread Joe Greco

   I think Ethernet is also another example of the benefits of
   spending/wasting address space on operational convenience - who needs
   46/47 bits for unicast addressing on a single layer 2 network!? If I
   recall correctly from bits and pieces I've read about early Ethernet,
   the very first versions of Ethernet only had 16 bit node addressing.
   They then decided to spend/waste bits on addressing to get
   operational convenience - plug and play layer 2 networking.
  
  The difference is that it doesn't cost anything.  There are no RIR fees,
  there is no justification.  You don't pay for, or have to justify, your 
  Ethernet MAC addresses.
  
  With IPv6, there are certain pressures being placed on ISP's not to be
  completely wasteful.
 
 I don't think there is that difference at all. MAC address allocations
 are paid for by the Ethernet chipset/card vendor, and I'm pretty sure
 they have to justify their usage before they're allowed to buy another
 block. I understand they're US$1250 an OUI, so something must have
 happened to prevent somebody buying them all up to hoard them, creating
 artificial scarcity, and then charging a market sensitive price for
 them, rather than the flat rate they cost now. That's not really any
 different to an ISP paying RIR fees, and then indirectly passing those
 costs onto their customers.

MAC address allocations are paid for by the Ethernet chipset/card vendor.

They're not paid for by an ISP, or by any other Ethernet end-user, except
as a pass-through, and therefore it's considered a fixed cost.  There are
no RIR fees, and there is no justification.  You buy a gizmo with this
RJ45 and you get a unique MAC.  This is simple and straightforward.  If
you buy one device, you get one MAC.  If you buy a hundred devices, you
get one hundred MAC's.  Not 101, not 99.  This wouldn't seem to map well
at all onto the IPv6 situation we're discussing.

With an IPv6 prefix, it is all about the prefix size.  Since a larger 
allocation may cost an ISP more than a smaller allocation, an ISP may 
decide that they need to charge a customer who is allocated a /48 more 
than a customer who is allocated a /64.

I don't pay anyone anything for the use of the MAC address I got on this
free ethernet card someone gave me, yet it is clearly and unambiguously
mine (and only mine) to use.  Does that clarify things a bit?

If you are proposing that RIR's cease the practice of charging different
amounts for different allocation sizes, please feel free to shepherd that
through the approvals process, and then I will certainly agree that there
is no longer a meaningful cost differential for the purposes of this
discussion.  Otherwise, let's not pretend that they're the same thing, 
since they're clearly not.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-23 Thread Joe Greco

  MAC address allocations are paid for by the Ethernet chipset/card vendor.
  
  They're not paid for by an ISP, or by any other Ethernet end-user, except
  as a pass-through, and therefore it's considered a fixed cost.  There are
  no RIR fees, and there is no justification.  You buy a gizmo with this
  RJ45 and you get a unique MAC.  This is simple and straightforward.  If
  you buy one device, you get one MAC.  If you buy a hundred devices, you
  get one hundred MAC's.  Not 101, not 99.  This wouldn't seem to map well
  at all onto the IPv6 situation we're discussing.
 
 How many ISP customers pay RIR fees? Near enough to none, if not none.

Who said anything about ISP customers paying RIR fees?  Although they
certainly do, indirectly.

 I never have when I've been an ISP customer.

(Must be one of those legacy ISP's?)

 Why are you pretending they do? 

I don't recall bringing them into the discussion, BUT...

 I think you're taking an end-user perspective when discussing
 ethernet but an RIR fee paying ISP position when discussing IPv6 subnet
 allocations. That's not a valid argument, because you've changed your
 viewpoint on the situation to suit your position.

Oddly enough, I'm one of those rare people who've worked with both ISP's
and OEM's that have been assigned MAC's.  You can think as you wish, and
you're wrong. 

 Anyway, the point I was purely making was that if you can afford to
 spend the bits, because you have them (as you do in Ethernet by design,
 as you do in IPv6 by design, but as you *don't* in IPv4 by design), you
 can spend them on operational convenience for both the RIR paying
 entity *and* the end-user/customer. Unnecessary complexity is
 *unnecessary*, and your customers won't like paying for it if they
 discover you've chosen to create it either on purpose or through
 naivety.

Okay, here, let me make it reaaally simple.

If I am going out and buying an Ethernet card today, the mfr will pay $.NN 
for my MAC address, a cost that is built into the retail cost of the card.
It will never be more or less than $.NN, because the number of MAC
addresses assigned to my card is 1.  Always 1.  Always $.NN.
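
As a rough illustration of just how small $.NN is, assuming the
US$1,250-per-OUI figure quoted earlier in the thread:

    # An OUI covers 2**24 MAC addresses; assuming the US$1,250 figure
    # quoted upthread, the per-address cost is vanishingly small.
    oui_cost = 1250.0
    print(oui_cost / 2 ** 24)   # ~0.0000745 dollars per MAC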

If I am going out and buying IPv6 service today, the ISP will pay a
variable amount for my address space.  The exact amount is a function of
their own delegation size (you can meander on over to ARIN yourself) and
the size they've delegated to you; and so, FOR PURPOSES OF ILLUSTRATION,
consider this.

If your ISP has been delegated a /48 (admittedly unlikely, but possible)
for $1,250/year, and they assign you a /56, their cost to provide that
space is ~$5.  They can have 256 such customers.

If your ISP has been delegated a /48 (admittedly unlikely, but possible)
for $1,250/year, and they assign you a /48, their cost to provide that
space is ~$1,250.  They can have 1 such customer.

If your ISP has been delegated a /41, for $1,250/year, and they assign
you a /48, their cost to provide that space is ~$10.  They can have 128
such customers.
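
The arithmetic behind those three illustrations, for anyone who wants to
try other parameters (the $1,250/year fee is the hypothetical figure used
above, not a statement of actual RIR pricing):

    # Customers that fit, and annual fee per customer, for a given
    # ISP delegation size and per-customer assignment size.
    def cost_per_customer(isp_prefix, cust_prefix, annual_fee=1250.0):
        customers = 2 ** (cust_prefix - isp_prefix)
        return customers, annual_fee / customers

    print(cost_per_customer(48, 56))   # (256, ~4.88)
    print(cost_per_customer(48, 48))   # (1, 1250.0)
    print(cost_per_customer(41, 48))   # (128, ~9.77)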

There is a significant variation in pricing as the parameters are changed.
You do not just magically have free bits in IPv6 by design; the ISP is
paying for those bits.  There will be factors MUCH more real than whether
or not customers like paying for it if they discover you've chosen to
create [complexity], because quite frankly, residential end users do not
typically have a clue, and so even if you do tick off 1% who have a clue,
you're still fine.

Now, seriously, just who do you think is paying for the space?  And if
$competitor down the road is charging rock bottom prices for Internet
access, how much money does the ISP really want to throw at extra address
space?  (Do you want me to discuss naivety now?)

And just /how/ is this in any way similar to Ethernet MAC addresses, 
again?  Maybe I'm just too slow and can't see how fixed cost ==
variable cost.  I won't accept any further hand-waving as an answer,
so to continue, please provide solid examples, as I've done.

Perhaps more on-topic, how many IP addresses can dance on the head of 
a /64?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-21 Thread Joe Greco

  Why not a /48 for all? IPv6 address space is probably cheap enough that
  even just the time cost of dealing with the occasional justification
  for moving from a /56 to a /48 might be more expensive than just giving
  everybody a /48 from the outset. Then there's the op-ex cost of
  dealing with two end-site prefix lengths - not a big cost, but a
  constant additional cost none the less.
 
 And let's not ignore the on-going cost of table-bloat. If you provide a 
 /48 to everyone, in 5 years, those allocations may/may not look stupid. :)
 
 Right now, we might say "wow, 256 subnets for a single end-user...
 hogwash!" and in years to come, "wow, only 256 subnets... what were we
 thinking!?"

Well, what's the likelihood of the only 256 subnets problem?

Given that a subnet in the current model consists of a network that is
capable of swallowing the entire v4 Internet, and still being virtually
empty (a /64 holds 2^64 addresses, room for roughly four billion copies of
the entire IPv4 address space), it should be clear that *number of devices*
will never be a serious issue for any network, business or residential.
You'll always be able to get as many devices as you'd like connected to
the Internet with v6.  This
may ignore some /current/ practical issues that devices such as switches
may impose, but that doesn't make it any less true.

The question becomes, under what conditions would you need separate
subnets.  We have to remember that the answer to this question can be,
and probably should be, relatively different than it is under v4.  Under
v4, subnet policies involved both network capacity and network number
availability.  A small business with a /25 allocation might use a /26 and
a /27 for their office PC's, a /28 for a DMZ, and the last /28 for
miscellaneous stuff like a VPN concentrator, etc.  The office PC /26 and
/27 would generally be on different switches, and the server would have
more than one gigE port to accommodate them.  To deal with higher-bandwidth
users, you typically try to split up those users between the two networks.
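
That v4 plan is easy to sketch with Python's ipaddress module (the
192.0.2.0/25 block is just RFC 5737 documentation space standing in for
a real allocation):

    import ipaddress

    site = ipaddress.ip_network("192.0.2.0/25")
    pcs_26, rest = site.subnets(new_prefix=26)      # /26: office PCs
    pcs_27, rest = rest.subnets(new_prefix=27)      # /27: more office PCs
    dmz_28, misc_28 = rest.subnets(new_prefix=28)   # /28 DMZ, /28 misc

    for net in (pcs_26, pcs_27, dmz_28, misc_28):
        print(net, "-", net.num_addresses, "addresses")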

Under a v6 model, it may be simpler and more convenient to have a single
PC network, with dual gigE LAG (or even 10G) to the switch(es).  So I am
envisioning that separate networks primarily imposed due to numbering
reasons under v4 will most likely become single networks under v6.

The primary reasons I see for separate networks on v6 would include
firewall policy (DMZ, separate departmental networks, etc)...

And I'm having some trouble envisioning a residential end user that 
honestly has a need for 256 networks with sufficiently different
policies.  Or that a firewall device can't reasonably deal with those
policies even on a single network, since you mainly need to protect
devices from external access.

I keep coming to the conclusion that an end-user can be made to work on
a /64, even though a /56 is probably a better choice.  I can't find the
rationale from the end-user's side to allocate a /48.  I can maybe see
it if you want to justify it from the provider's side, the cost of dealing
with multiple prefix sizes.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-21 Thread Joe Greco
 nearby friends and neighbors.

Having fewer options is going to be easier for the ISP, I suspect.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: v6 subnet size for DSL leased line customers

2007-12-21 Thread Joe Greco
 to expect his device to be smart enough to tell him
what he needs to do, and whether the underlying network is one thing or
another isn't a serious consideration.

You simply have to realize that L2 and L3 aren't as different as you seem
to think.  You can actually consider them flip sides of a coin in many
cases.

 Actually, there is some guarantee that, in IPv6, you'll be able to do  
 that,
 or, you will know that you could not.  You will make a DHCP6 request
 for a prefix delegation, and, you will receive it or be told no.

So, as I said...

 Most likely, that is how most such v6 gateways will function.

/Possibly/.  It would be much more likely to be that way if everyone
was issued large CIDR blocks, every router was willing to delegate a
prefix, and there was no call for bridging.

 I think that bridges are less likely to be the norm in IPv6.

I'm skeptical, but happy to be proven wrong someday.

  If we have significant customer-side routing of IPv6, then there's  
  going
  to need to be some way to manage that.  I guess that's RIPv6/ng.  :-)

 Nope... DHCPv6 prefix delegation and Router discovery.

We'll see.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: unwise filtering policy from cox.net

2007-11-21 Thread Joe Greco

 Given what Sean wrote goes to the core of how mail is routed, you'd
 pretty much need to overhaul how MX records work to get around this one,
 or perhaps go back to try to resurrect something like a DNS MB record,
 but that presumes that the problem can't easily be solved in other
 ways.  Sean demonstrated one such way (move the high volume stuff to its
 own domain).

Moving abuse@ to its own domain may work, however, fixing this problem at
the DNS level is probably an error, and probably non-RFC-compliant anyways.

The real problem here is probably one of:

1) Mail server admin forgot (FSVO "forgot", which might be "didn't even
   stop to consider", "considered it and decided that it was worthwhile to
   filter spam sent to abuse@, not realizing the implications for abuse
   reporting", "didn't have sufficient knowledge to figure out how to
   exempt abuse@", etc.)

2) Server software doesn't allow exempting a single address; this is a
   common problem with certain software, and the software should be fixed,
   since the RFC's essentially require this to work.  Sadly, it is 
   frequently assumed that if you cannot configure your system to do X, 
   then it's all right to not do X, regardless of what the RFC's say.

The need to be able to accept unfiltered recipients has certain
implications for mail operations; for example, it could be bad to use IP
level filtering to implement a shared block for bad senders.
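
A sketch of what "accept unfiltered recipients" means mechanically;
deliver(), reject(), spam_score() and THRESHOLD are placeholders for the
MTA's own machinery, not real functions:

    UNFILTERED = {"abuse", "postmaster"}   # role addresses per RFC 2142

    def handle(rcpt, message):
        # deliver(), reject(), spam_score(), THRESHOLD: placeholders.
        local = rcpt.split("@", 1)[0].lower()
        if local in UNFILTERED:
            deliver(rcpt, message)         # never content-filtered
            return
        if spam_score(message) > THRESHOLD:
            reject(rcpt)                   # fine for ordinary mailboxes
            return
        deliver(rcpt, message)

Note that the decision requires knowing the recipient, which is exactly
why a block applied at the IP layer cannot honor the exemption.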

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: unwise filtering policy from cox.net

2007-11-20 Thread Joe Greco

 Or it was a minor oversight and you're all pissing and moaning over nothing?
 
 That's a thought too.

Pretty much all of network operations is pissing and moaning over
nothing, if you wish to consider it such.  Some of us actually care.

In any case, I believe that I've found the Cox abuse folks to be
pretty helpful and clueful in the past, but they may have some of the
typical problems, such as having to forward mail for abuse@ through
a large e-mail platform that's designed for customers.  I'm certainly
not saying that it's all right to have this problem, but I would
certainly encourage you to try sending along a brief note without any
BL-listed URL's, to see if you can get a response that way.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: General question on rfc1918

2007-11-13 Thread Joe Greco

 Hi there, I just had a real quick question. I hope this is found to 
 be on topic.
 
 Is it to be expected to see rfc1918 src'd packets coming from transit 
 carriers?
 
 We have filters in place on our edge (obviously) but should we be seeing 
 traffic from 192.168.0.0 and 10.0.0.0 et cetera hitting our transit 
 interfaces?
 
 I guess I'm not sure why large carrier networks wouldn't simply filter this 
 in their core?

[pick-a-random-BCP38-snipe ...]

It's a feature: You can tell which of your providers does BCP38 this way.

Heh.

It's the networking equivalent of all the bad sorts of DOS/Windows
programming.  You know, the rule that says "once it can run successfully,
it must be correct."  Never mind checking for exceptional conditions,
buffer overruns, etc.

It's the same class of problem where corporate IT departments, listening
to some idiot, filter all ICMP, and are convinced this is okay because 
they can reach ${one-web-site-of-your-choice}, and refuse to contemplate
that they might have broken something.

Once your network is routing packets and you aren't hearing complaints
about being unable to reach a destination, it's got to be configured
correctly ... right?

Consider it life on the Internet.  Do their job for them.

Around here, we've been doing BCP38 since before there was a BCP38.
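
The filter in question is not complicated; here's a Python rendering of
the RFC1918 source check an edge ought to apply (real deployments do this
in router ACLs or uRPF, of course, not in Python):

    import ipaddress

    RFC1918 = [ipaddress.ip_network(n) for n in
               ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    def should_drop(src_ip):
        # True if the packet's source is private space, which should
        # never arrive from a transit interface.
        addr = ipaddress.ip_address(src_ip)
        return any(addr in net for net in RFC1918)

    print(should_drop("192.168.0.1"))   # True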

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: cpu needed to NAT 45mbs

2007-11-08 Thread Joe Greco

 I do the networking in my house, and hang out with guys that do networking in
 small offices that have a few T1s.   Now I am talking to people about a DS3
 connection for 500 laptops*, and I am being told a p4 linux box with 2 nics
 doing NAT will not be able to handle the load.   I am not really qualified
 to say one way or the other.  I bet someone here is.

So, are they Microsoft fans, or Cisco fans, or __ fans?  For any of
the above, you can make the corresponding product fail too.  :-)

The usual rules for PC's-as-routers apply.  You can find extensive
discussions of this on lists such as the Quagga list (despite the list
being intended for routing _protocols_ rather than routing platforms) and
the Soekris (embedded PC) lists.

Briefly,

1) Small packet traffic is harder than large packet traffic (see the rough
   numbers after this list),

2) Good network cards and competent OS configuration will help extensively,

3) The more firewall rules, the slower things will tend to be (highly
   implementation-dependent)

4) In the case of NAT, it would seem to layer some additional delays on top
   of #3.
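
The rough numbers promised under rule 1, assuming a 45Mbps DS3 and
ignoring framing overhead:

    link_bps = 45_000_000
    for size in (64, 1500):                  # bytes per packet
        pps = link_bps / (size * 8)
        print(size, "byte packets:", round(pps), "pps")
    # 64-byte: ~87891 pps; 1500-byte: ~3750 pps.  Per-packet costs
    # (interrupts, firewall rule evaluation) dominate at the small end.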

We've successfully used a carefully designed FreeBSD machine (PIII-850,
dual fxp) as a load balancer in the past, which shares quite a few
similarities to a NAT device.  The great upside is complete transparency
as to what's happening and why, and the ability to affect this as desired.
I don't know how close we ran to 100Mbps, but I know we exceeded 45.

With sufficient speed, you can make up for many sins, including a
relatively naive implementation.  With that in mind, I'd guess that you
are more likely to be successful than not.  And if it doesn't work out,
you can always recycle that PC into a more traditional role.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Hey, SiteFinder is back, again...

2007-11-05 Thread Joe Greco

 Sean,
 
  Yes, it sounds like the evil bit.  Why would anyone bother to set it?
 
  Two reasons
 
  1) By standardizing the process, it removes the excuse for using
  various hacks and duct tape.
 
  2) Because the villian in Bond movies don't view themselves as evil.
  Google is happy to pre-check the box to install their Toolbar, OpenDNS
  is proud they redirect phishing sites with DNS lookups, Earthlink says it
  improves the customer experience, and so on.
 
 Forgive my skepticism, but what I would envision happening is resolver
 stacks adding a switch that would be on by default, and would translate
 the response back to NXDOMAIN.  At that point we would be right back
 where we started, only after a lengthy debate, an RFC, a bunch of code,
 numerous bugs, and a bunch of "I told you so"s.

The other half of this is that it probably isn't *appropriate* to encourage
abuse of the DNS in this manner, and if you actually add a framework to do
this sort of thing, it amounts to tacit (or explicit) approval, which will
lead to even more sites doing it.

Consider where it could lead.  Pick something that's already sketchy, such
as hotel networks.  This creates the perfect excuse for them to map every
domain name to 10.0.0.1, force it through a web proxy, and then have their
tech support people tell you that if you're having problems, make sure you
set the browser-uses-evilbit-dns option.  And that RFC mandate to not do
things like this?  Ignored.  It's already annoying to try to determine what
a hotel means when it says it has "Internet access".

Reinventing the DNS protocol in order to intercept odd stuff on the Web 
seems to me to be overkill and bad policy.  Could someone kindly explain
to me why the proxy configuration support in browsers could not be used 
for this, to limit the scope of damage to the web browsing side of things? 
I realize that the current implementations may not be quite ideal for 
this, but wouldn't it be much less of a technical challenge to develop a
PAC or PAC-like framework to do this in an idealized fashion, and then 
actually do so?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Any help for Yahoo! Mail arrogance?

2007-10-30 Thread Joe Greco

  I'm pretty sure
  none of our systems have been compromised and forwards mail that we
  don't know about.
 
 Yet your sending IP reputation is poor

Do you actually have data that confirms that?

We've had random problems mailing Hotmail (frequently), Yahoo!
(infrequently), and other places where the mail stream consists of
a low volume (10/day) of transactional and support e-mail directly
arising from user-purchased services, on an IP address that had 
never previously sent e-mail - ever.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Joe Greco

 Rep. Boucher's solution: more capacity, even though it has been 
 demonstrated many times more capacity doesn't actually solve this 
 particular problem.

That would seem to be an inaccurate statement.

 Is there something in humans that makes it difficult to understand
 the difference between circuit-switch networks, which allocated a fixed 
 amount of bandwidth during a session, and packet-switched networks, which 
 vary the available bandwidth depending on overall demand throughout a 
 session?
 
 Packet switch networks are darn cheap because you share capacity with lots 
 of other uses; Circuit switch networks are more expensive because you get
 dedicated capacity for your sole use.

So, what happens when you add sufficient capacity to the packet switch
network that it is able to deliver committed bandwidth to all users?

Answer: by adding capacity, you've created a packet switched network where
you actually get dedicated capacity for your sole use.

If you're on a packet network with a finite amount of shared capacity,
there *IS* an ultimate amount of capacity that you can add to eliminate 
any bottlenecks.  Period!  At that point, it behaves (more or less) like
a circuit switched network.

The reasons not to build your packet switched network with that much
capacity are more financial and technical than they are impossible.  We
know that the average user will not use all their bandwidth.  It's also
more expensive to install more equipment; it is nice when you can fit
more subscribers on the same amount of equipment.
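
The economics in one toy calculation (every number here is invented for
illustration):

    users = 1000
    sold_bps = 8_000_000      # what each user is sold
    avg_bps = 100_000         # what each user actually averages
    print(users * sold_bps / 1e9, "Gbps if everyone ran flat out")  # 8.0
    print(users * avg_bps / 1e6, "Mbps at typical average load")    # 100.0

The gap between those two lines is the capacity that doesn't get built,
and it is where all of the trouble in this thread lives.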

However, at the point where capacity becomes a problem, you actually do
have several choices:

1) Block certain types of traffic,

2) Limit {certain types of, all} traffic,

3) Change user behaviours, or

4) Add some more capacity.

These come to mind as being the major available options.  ALL of these can
be effective.  EACH of them has specific downsides.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Can P2P applications learn to play fair on networks?

2007-10-26 Thread Joe Greco

 
 On Fri, 26 Oct 2007, Paul Ferguson wrote:
  The part of this discussion that really infuriates me (and Joe
  Greco has hit most of the salient points) is the deceptiveness
  in how ISPs underwrite the service their customers subscribe to.
 
  For instance, in our data centers, we have 1Gb uplinks to our ISPs,
  but guaranteed service subscription (a la CIR) to a certain rate
  which we engineer (based on average traffic volume, say, 400Mb), but
  burstable to full line rate -- if the bandwidth is available.
 
  Now, we _know_ this, because it's in the contract. :-)
 
  As a consumer, my subscription is based on language that doesn't
  say you can only have the bandwidth you're paying for when we
  are congested, because we oversubscribed our network capacity.
 
  That's the issue here.
 
 You have a ZERO CIR on a consumer Internet connection.

Where's it say that?

 How many different ways can an ISP say speeds may vary and are not 
 guaranteed.  It says so in the _contract_.  So why don't you know
 that?

Gee, that's not exactly what I read.

http://help.twcable.com/html/twc_sub_agreement2.html

Section 6 (a) Speeds and Network Management.  I acknowledge that each tier
or level of the HSD Service has limits on the maximum speed at which I may
send and receive data at any time, as set forth in the price list or Terms
of Use.  I understand that the actual speeds I may experience at any time
will vary based on a number of factors, including the capabilities of my
equipment, Internet congestion, the technical properties of the websites,
content and applications that I access, and network management tools and
techniques employed by TWC. I agree that TWC or ISP may change the speed of
any tier by amending the price list or Terms of Use. My continued use of the
HSD Service following such a change will constitute my acceptance of any new
speed. I also agree that TWC may use technical means, including but not
limited to suspending or reducing the speed of my HSD Service, to ensure
compliance with its Terms of Use and to ensure that its service operates
efficiently.

Both to ensure that its service operates efficiently and techniques
employed by TWC would seem to allow for some variation in speed by the
local cable company - just as the speed on a freeway may drop during
construction, or during rush hour.  However, there's very strong language 
in there that indicates that the limits on sending and receiving are set 
forth in the price list.

 ISPs tell you that when you order, in the terms of service, when you call
 customer care that speeds may vary and are not guaranteed.

"Speeds may vary and are not guaranteed" is obvious on the Internet.
"We're deliberately going to screw with your speeds if you use too much"
is not, at least to your average consumer.

 How much do you pay for your commercial 1GE connection with a 400Mbps CIR? 
 Is it more or less than what you pay for a consumer connection with a ZERO 
 CIR?

Show me a consumer connection with a contract that /says/ that it has a 
zero CIR, and we can start that discussion.  Your saying that it has a
zero CIR does not make it so.

 ISPs are happy to sell you SLAs, CIRs, etc.  But if you don't buy SLAs,
 CIRs, etc, why are you surprised you don't get them?

There's a difference between not having a SLA, CIR, etc., all of which I'm
fine for with a residential class connection, and having an ISP that sells
20Mbps! Service! Unlimited! but then quietly messes with users who
actually use that.

The ISP that sells a 20Mbps pipe, and doesn't mess with it, but has a
congested upstream, these guys are merely oversubscribed.  That's the
no-SLA-no-CIR situation.

 Once again <blink>speeds may vary and are not guaranteed</blink>.
 
 Now that you know that speeds may vary and are not guaranteed, does
 that make you satisified?

Only if my ISP isn't messing with my speeds, or has made it exceedingly
clear in what ways they'll be messing with my speeds so that they do not
match what I paid for on the price list.

Let me restate that:  I don't really care if I get 8 bits per second to
some guy in Far North, Canada who is on a dodgy satellite Internet link.
That's what "speeds may vary and are not guaranteed" should refer to -
things well beyond an ISP's control.

Now, let me flip this on its ear.  We rent colo machines to users.  We
provide flat rate pricing.  When we sell a machine with 1Mbps of 
Internet bandwidth, that is very much "speeds may vary and are not
guaranteed" - HOWEVER, we do absolutely promise that if it's anything
of ours that is causing delivery of less than 1Mbps, WE WILL FIX IT. 
PERIOD.  This isn't a SLA.  This isn't a CIR.  This is simple honesty,
we deliver what we advertised, and what the customer is paying for.

The price points that consumers are paying for resi Internet may not
allow quite that level of guarantee, but does that mean that they do
not deserve to be provided with some transparency so that end users 
understand what the ACTUAL policy is?

Re: Internet access in Japan (was Re: BitTorrent swarms have a deadly bite on broadband nets)

2007-10-24 Thread Joe Greco

 I did consulting work for NTT in 2001 and 2002 and visited their Tokyo
 headquarters twice. NTT has two ILEC divisions, NTT East and NTT West.
 The ILEC management told me in conversations that there was no money in
 fiber-to-the-home; the entire rollout was due to government pressure and
 was well below a competitive rate of return. Similarly, NTT kept staff
 they did not need because the government wanted to maintain high
 employment in Japan and avoid the social stress that results from
 massive layoffs.

Mmm hmm.  That sounds somewhat like the system we were promised here in
America.  We were told by the ILEC's that it was going to be very expensive
and that they had little incentive to do it, so we offered them a package
of incentives - some figure as much as $200 billion worth.

See http://www.newnetworks.com/broadbandscandals.htm

 You should not assume that 'Japanese capitalism' works
 like American capitalism.

That could well be; it appears that American capitalism is much better at
lobbying the political system.  They eventually found ways to take
their money and run without actually delivering on the promises they made.
I'll bet the American system paid out a lot better for a lot less work.

Anyways, it's clear to me that any high bandwidth deployment is an immense
investment for a society, and one of the really interesting meta-questions
is whether or not such an investment will still be paying off in ten years,
or twenty, or...

The POTS network, which merely had to transmit voice, and never had to 
deal with substantial growth of the underlying bandwidth (mainly moving
from analog to digital trunks, which increased but then fixed the
bandwidth), was a long-term investment that has paid off for the telcos
over the years, even if there was a lot of wailing along the way.

However, one of the notable things about data is that our needs have
continued to grow.  Twenty years ago, a 9600 bps Internet connection
might have served a large community, where it was mostly used for
messaging and an occasional interactive session.  Fifteen years ago,
a 14.4 kbps modem was a nice connection for a single user.  Ten years ago,
a 1Mbps connection was pretty sweet (maybe a bit less for DSL, a bit
more for cable). 

Things pretty much go awry at that point, and we no longer see such
impressive progression in average end-user Internet connection speeds.
This didn't stop speed increases elsewhere, but it did put the brakes
on rapid increases here.

If we had received the promised FTTH network, we'd have speeds of up
to 45Mbps, which would definitely be in-line with previous growth (and
the growth of computing and storage technologies).

At a LAN networking level, we've gone from 10Mbps to 100Mbps to 1Gbps
as the standard ethernet interface that you might find on computers and
networking devices.

So the question is, had things gone differently, would 45Mbps still be
adequate?  And would it be adequate in 10 or 20 years?  And what effect
would that have had overall?

Certainly it would be a driving force for continued rapid growth in
both networking and Internet technologies.  As has been noted here in the
past, current Ethernet (40G/100G) standards efforts haven't been really
keeping pace with historical speed growth trends.

Has the failure to deploy true high-speed broadband in a large and key
market such as the US resulted in less pressure on vendors by networks
for the next generations of high-speed networking?

Or, getting back to the actual situation here in the US, what implications
does the continued evolution of US broadband have for other network
operators?  As the ILEC's and cablecos continue to grow and dominate the
end-user Internet market, what's the outlook on other independent networks,
content providers, etc.?  The implications of the so-called net neutrality
issues are just one example of future issues.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-24 Thread Joe Greco

 I wonder how quickly applications and network gear would implement QoS
 support if the major ISPs offered their subscribers two queues: a default
 queue, which handled regular internet traffic but squashed P2P, and then a
 separate queue that allowed P2P to flow uninhibited for an extra $5/month,
 but then ISPs could purchase cheaper bandwidth for that.
 
 But perhaps at the end of the day Andrew O. is right and it's best off to
 have a single queue and throw more bandwidth at the problem.

A system that wasn't P2P-centric could be interesting, though making it
P2P-centric would be easier, I'm sure.  ;-)

The idea that Internet data flows would ever stop probably doesn't work
out well for the average user.

What about a system that would /guarantee/ a low amount of data on a low
priority queue, but would also provide access to whatever excess capacity
was currently available (if any)?
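
As a toy model of that idea (pure illustration; real gear would do this
per-packet with something like a rate/ceil queueing discipline rather
than per-interval in Python):

    def allocate(capacity, floor, demands):
        # Everyone gets up to their guaranteed floor first ...
        grants = [min(d, floor) for d in demands]
        spare = capacity - sum(grants)
        # ... then leftover capacity is split among users wanting more.
        hungry = [i for i, d in enumerate(demands) if d > grants[i]]
        while spare > 1e-9 and hungry:
            share = spare / len(hungry)
            for i in list(hungry):
                extra = min(share, demands[i] - grants[i])
                grants[i] += extra
                spare -= extra
                if grants[i] >= demands[i]:
                    hungry.remove(i)
        return grants

    print(allocate(10.0, 1.0, [0.5, 8.0, 8.0]))   # [0.5, 4.75, 4.75]

The light user is untouched, the heavy users split the slack, and nobody
ever drops below the guaranteed floor.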

We've already seen service providers such as Virgin UK implementing things
which essentially try to do this, where during primetime they'll limit the
largest consumers of bandwidth for 4 hours.  The method is completely
different, but the end result looks somewhat similar.  The recent 
discussion of AU service providers also talks about providing a baseline 
service once you've exceeded your quota, which is a simplified version of
this.

Would it be better for networks to focus on separating data classes and 
providing a product that's actually capable of quality-of-service style 
attributes?

Would it be beneficial to be able to do this on an end-to-end basis (which
implies being able to QoS across ASN's)?

The real problem with the "throw more bandwidth" solution is that at some
point, you simply cannot do it, since the available capacity on your last
mile simply isn't sufficient for the numbers you're selling, even if you
are able to buy cheaper upstream bandwidth for it.

Perhaps that's just an argument to fix the last mile.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Comcast blocking p2p uploads

2007-10-21 Thread Joe Greco

 Leo Bicknell wrote:
  I'm a bit confused by your statement. Are you saying it's more
  cost effective for ISP's to carry downloads thousands of miles
  across the US before giving them to the end user than it is to allow
  a local end user to upload them to other local end users?
   
 Not to speak on Joe's behalf, but whether the content comes from 
 elsewhere on the Internet or within the ISP's own network the issue is 
 the same: limitations on the transmission medium between the cable modem 
 and the CMTS/head-end.  The issue that cable companies are having with 
 P2P is that compared to doing a HTTP or FTP fetch of the same content 
 you will use more network resources, particularly in the upstream 
 direction where contention is a much bigger issue.  On DOCSIS 1.x 
 systems like Comcast's plant, there's a limitation of ~10mbps of 
 capacity per upstream channel.  You get enough 384 - 768k connected 
 users all running P2P apps and you're going to start having problems in 
 a big hurry.  It's to remove some of the strain on the upstream channels 
 that Comcast has started to deploy Sandvine to start closing *outbound* 
 connections from P2P apps.

That's part of it, certainly.  The other problem is that I really doubt
that there's as much favoritism towards local clients as Leo seems to
believe.  Without that, you're also looking at a transport issue as you
shove packets around.  Probably in ways that the network designers did
not anticipate.

Years ago, dealing with web caching services, there was found to be a
benefit, a limited benefit, to setting up caching proxies within a major
regional ISP's network.  The theoretical benefit was to reduce the need 
for internal backbone and external transit connectivity, while improving
user experience.

The interesting thing is that it wasn't really practical to cache on a
per-POP basis, so it was necessary to pick cache locations at strategic
locations within the network.  This meant you wouldn't expect to see a
bandwidth savings on the internal backbone from the POP to the
aggregation point.

The next interesting point is that you could actually improve the cache
hit rate by combining the caches at each aggregation point; the larger
userbase meant that any given bit of content out on the Internet was
more likely to be in cache.  However, this had the ability to stress the
network in unexpected ways, as significant cache-site to cache-site data 
flows were happening in ways that network engineering hadn't always 
anticipated.

A third interesting thing was noted.  The Internet grows very fast. 
While there's always someone visiting www.cnn.com, as the number of other
sites grew, there was a slow reduction in the overall cache hit rate over
the years as users tended towards more diverse web sites.  This is the
result of the ever-growing quantity of information out there on the
Internet.

This doesn't map exactly to the current model with P2P, yet I suspect it
has a number of loose parallels.

Now, I have to believe that it's possible that a few BitTorrent users in
the same city will download the same Linux ISO.  For that ISO, and for
any other spectacularly popular download, yes, I would imagine that there
is some minor savings in bandwidth.  However, with 10M down and 384K up,
even if you have 10 other users in the city who are all sending at full
384K to someone new, that's under 4Mbps in total, not full line speed, so
the client will still try to pull additional capacity from elsewhere to
get that full 10M speed.

I've always seen P2P protocols as behaving in an opportunistic manner.
They're looking for who has some free upload capacity and the desired
object.  I'm positive that a P2P application can tell that a user in
New York is closer to me (in Milwaukee) than a user in China, but I'd
quite frankly be shocked if it could do a reasonable job of
differentiating between a user in Chicago, Waukesha (few miles away),
or Milwaukee.

In the end, it may actually be easier for an ISP to deal with the
deterministic behaviour of having data from me go to the local 
upstream transit pipe than it is for my data to be sourced from a
bunch of other random nearby on-net sources.

I certainly think that P2P could be a PITA for network engineering.
I simultaneously think that P2P is a fantastic technology from a showing-
off-the-idea-behind-the-Internet viewpoint, and that in the end, the 
Internet will need to be able to handle more applications like this, as 
we see things like videophones etc. pop up.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joe Greco

 On Sun, 21 Oct 2007, Mikael Abrahamsson wrote:
  So your recommendation is that universities, enterprises and ISPs simply 
  stop offering all Internet service because a few particular application 
  protocols are badly behaved?
 
  They should stop to offer flat-rate ones anyway.
 
 Comcast's management has publicly stated anyone who doesn't like the
 network management controls on its flat rate service can upgrade to
 Comcast's business class service.
 
 Problem solved?

Assuming a business class service that's reasonably priced and featured?
Absolutely.  I'm not sure I've seen that to be the case, however.  Last
time I checked with a local cable company for T1-like service, they wanted
something like $800/mo, which was about $300-$400/mo more than several of
the CLEC's.  However, that was awhile ago, and it isn't clear that the
service offerings would be the same.

I don't class cable service as being as reliable as a T1, however.  We've
witnessed that the cable network fails shortly after any regional power
outage here, and it has somewhat regular burps in the service anyways.

I'll note that I can get unlimited business-class DSL (2M/512k ADSL) for
about $60/mo (24m), and that was explicitly spelled out to be unlimited-
use as part of the RFP.

By way of comparison, our local residential RR service is now 8M/512k for 
about $45/mo (as of just a month or two ago).

I think I'd have to conclude that I'd certainly see a premium above and
beyond the cost of a residential plan to be reasonable, but I don't expect
it to be many multiples of the resi service price, given that DSL plans
will promise the bandwidth at just a slightly higher cost.

 Or would some P2P folks complain about having to pay more money?

Of course they will.

  Or do general per-user ratelimiting that is protocol/application agnostic.
 
 As I mentioned previously about the issues involving additional in-line 
 devices and so on in networks, imposing per user network management and 
 billing is a much more complicated task.
 
 If only a few protocol/applications are causing a problem, why do you need 
 an overly complex response?  Why not target the few things that are 
 causing problems?

Well, because when you promise someone an Internet connection, they usually
expect it to work.  Is it reasonable for Comcast to unilaterally decide that
my P2P filesharing of my family photos and video clips is bad?

  A better idea might be for the application protocol designers to improve 
  those particular applications.
 
  Good luck with that.
 
 It took a while, but it worked with the UDP audio/video protocol folks who 
 used to stress networks.  Eventually those protocol designers learned to 
 control their applications and make them play nicely on the network.

:-)

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joe Greco

 Is it reasonable for your filesharing of your family photos and video 
 clips to cause problems for all the other users of the network?  Is that 
 fair or just greedy?

It's damn well fair, is what it is.  Is it somehow better for me to go and
e-mail the photos and movies around?  What if I really don't want to
involve the ISP's servers, because they've proven to be unreliable, or I
don't want them capturing backup copies, or whatever?

My choice of technology for distributing my pictures, in this case, would
probably result in *lower* overall bandwidth consumption by the ISP, since
some bandwidth might be offloaded to Uncle Fred in Topeka, and Grandma
Jones in Detroit, and Brother Tom in Florida who happens to live on a much
higher capacity service.

If filesharing my family photos with friends and family is sufficient to 
cause my ISP to buckle, there's something very wrong.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joe Greco

 Joe Greco wrote:
  Well, because when you promise someone an Internet connection, they usually
  expect it to work.  Is it reasonable for Comcast to unilaterally decide that
  my P2P filesharing of my family photos and video clips is bad?

 
 Comcast is currently providing 1GB of web hosting space per e-mail 
 address associated with each account; one could argue that's a 
 significantly more efficient method of distributing that type of content 
 and it still doesn't cost you anything extra.

Wow, that's incredibly ...small.  I've easily got ten times that online
with just one class of photos.  There's a lot of benefit to just letting
people yank stuff right off the old hard drive.  (I don't /actually/ use
P2P for sharing photos, we have a ton of webserver space for it, but I
know people who do use P2P for it)

 The use case you describe isn't the problem though,

Of course it's not, but the point I'm making is that they're using a 
shotgun to solve the problem.

[major snip]

 Again, 
 flat-rate pricing does little to discourage this type of behavior.

I certainly agree with that.  Despite that, the way that Comcast has
reportedly chosen to deal with this is problematic, because it means
that they're not really providing true full Internet access.  I don't
expect an ISP to actually forge packets when I'm attempting to
communicate with some third party.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Can P2P applications learn to play fair on networks?

2007-10-21 Thread Joe Greco

  Surely one ISP out there has to have investigated ways that p2p could
  co-exist with their network..
 
 Some ideas from one small ISP.
 
 First, fileshare networks drive the need for bandwidth, and since an ISP 
 sells bandwidth that should be viewed as good for business because you 
 aren't going to sell many 6mb dsl lines to home users if they just want to 
 do email and browse.

One of the things to remember is that many customers are simply looking
for Internet access, but couldn't tell a megabit from a mackerel.

Given that they don't really have any true concept, many users will look
at the numbers, just as they look at numbers for other things they
purchase, and they'll assume that the one with better numbers is a better
product.  It's kind of hard to test drive an Internet connection, anyways.

This has often given cable here in the US a bit of an advantage, and I've
noticed that the general practice of cable providers is to try to maintain
a set of numbers that's more attractive than those you typically land with
DSL.

[snip a bunch of stuff that sounds good in theory, may not map in practice]

 If you expect them to pay for 6mb pipes, they better see it run faster than 
 it does on a 1.5mb pipe or they are going to head to your competition.

A small number of them, perhaps.

Here's an interesting issue.  I recently learned that the local RR
affiliate has changed its service offerings.  They now offer 7M/512k resi
for $45/mo, or 14M/1M for $50/mo (or thereabouts, prices not exact).

Now, does anybody really think that the additional capacity that they're
offering for just a few bucks more is real, or are they just playing the
numbers for advertising purposes?  I have no doubt that you'll be able to
burst higher, but I'm a bit skeptical about continuous use.

Noticed about two months ago that AT&T started putting kiosks for U-verse
at local malls and movie theatres.  Coincidence?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: 240/4

2007-10-18 Thread Joe Greco
 organizations to complete the transition
 to IPv6 before IPv4 runs out.

Certainly.  So why would we distract them with an intermediate transition
to IPv4-240+?  Remember, I was not able to find any case that successfully
worked; even if there are some cases that work without patching, it seems
that the vast majority of sites will need to change to move from IPv4 to
your transition IPv4-240+.

 We cannot cop out on releasing 240/4 just because it is no magic bullet.

But we could cop out on releasing 240/4 because it's just too much work for
a small benefit to a few sites on the Internet, at a huge cost to the rest
of the Internet.  That's not fair.

 How would you feel if your arguments against 240/4 and other
 half-measures resulted in them not being carried out. And then we hit
 the brick wall of IPv4 exhaustion and some businesses start to lose
 serious money?

I'm fine with that, especially since it appears that implementing
IPv4-240+ will cost even more serious money for every participating
network on the Internet, in upgrades, administrative time and effort, etc.

 --Michael Dillon
 
 P.S. and how will you feel if those businesses trawl the record on the
 Internet to discover that you, an employee of one of their competitors,
 caused 240/4 to not be released and thereby harmed their businesses. You
 will be explaining in front of a judge.

Whatever.  I can sue you for having blue skin.  Doesn't make me right, and
doesn't mean I'll win. 

I could even sue you for releasing 240/4 and causing me economic harm by 
forcing me to upgrade a bunch of infrastructure.  Funny how that blade
can cut both ways.

 We should do everything we can to remove roadblocks which would cause
 IPv4 to run out sooner,

Where practical.  This ... isn't.

 or would cause some people to delay IPv6 deployment.

And this ... would cause some people to delay IPv6.  So it's bad.

Hey, I have an idea, how about we recycle all that dead air up in 224/4?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: 240/4

2007-10-18 Thread Joe Greco

 Please don't try to engineer other people's networks because they are
 not going to listen to you. It is a fact that 240/4 addresses work fine
 except for one line of code in IOS, MS-Windows, Linux, BSD, that
 explicitly disallows packets with this address. People have already
 provided patches for Linux and BSD so that 240/4 addresses work
 normally. Cisco would fix IOS if the IETF would unreserve these
 addresses, and likely MS would follow suit, especially after Cisco makes
 their changes.

Now, please explain the magic method you're going to use to cause that
one line of code to be removed from more than a billion devices that
are currently able to use the Internet.
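
For what it's worth, the check in question really is tiny.  Here's a
sketch of the sort of test a typical stack performs - the macro name and
location vary by stack (BSD's netinet/in.h calls it IN_BADCLASS), so
treat this as illustrative rather than anyone's actual source:

    #include <stdint.h>
    #include <stdio.h>

    /* An address falls in 240/4 if its top four bits are all set;
     * stacks reject such addresses as reserved/experimental. */
    static int is_class_e(uint32_t addr_host_order) {
        return (addr_host_order & 0xf0000000UL) == 0xf0000000UL;
    }

    int main(void) {
        uint32_t addr = (241u << 24) | (2u << 16) | (3u << 8) | 4u;
        if (is_class_e(addr))
            printf("241.2.3.4 rejected as class E\n");
        return 0;
    }

Removing the check is trivial.  Getting the removal *deployed* is the
entire problem.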

Remember that a lot of these devices are deployed in spots such as little
gateway NAT devices owned by John Doe at 123 Anydrive, and so when he is
unable to get to some website because some brilliant hosting service has
chosen to place a bunch of servers on 241.2.3.0/24, his reaction is most
likely going to be "so-and-so sucks," followed by moving on to a
competitor's web site.

Further, when one of your magic clients with the updated version of
Windows XP that supports IPv4-240+ and the misfortune to actually *BE*
on one of those addresses decides to contact pretty much any existing
website on a VPS that's on autopilot - and there are a ton of those,
dontchaknow - we are talking a problem significantly worse than "failed
to update bogon filters."  Not only does the hosting company have to fix
their bogon filters, but they also have to fix the TCP stack on every
server under their control, which is going to be extremely labor intensive.

Do we want to start discussing all the other places that knowledge of
network classes is built into software, and the subtle ways in which things
may break, even if they appear to work for some crappy definition of
"work"?

Please don't try to re-engineer the entire IPv4 Internet because you'd like
a small additional allocation of IP space that isn't currently usable.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: 240/4

2007-10-18 Thread Joe Greco
 would do such an upgrade.

Mmm.

 First, if
 it is bundled up in a patch release with other stuff. 

Oh.  So, um, like, you're talking about Flag Day!  Party like it's 1983?

 And secondly if a
 customer requests it. The cost is effectively zero in the first case,
 and in the second case it will be covered by revenue.

That seems very self-centered.  There's no cost to anyone else to have
to make this work?  Really?  Because the minute the words "you have to
patch" utter forth from your mouth, telling me how I have to patch my
stuff, that's taking time away from me - and I value my time in dollars.

   We should do everything we can to remove roadblocks which 
  would cause
   IPv4 to run out sooner,
  
  Where practical.  This ... isn't.
 
 What is impractical with asking the IETF to revise an RFC? 

Asking the IETF to revise an RFC is not impractical.  Asking the IETF
to revise an RFC, however, has no effect on the installed base.  We have
all kinds of RFC's out there for services that flopped and failed and no
longer are in use anywhere.  The existence of an RFC is fairly
meaningless.  The BEST RFC's document things that either currently /are/,
or that /could be/, where we're trying to guide new creation.  They don't 
try to /change/ the installed base in some radical way.

Actually, though, I have a better solution.  Let's ask the IETF to revise
an RFC, and define the first octet of an IPv4 address as being from 0-
65535.  That's asking the IETF to revise an RFC, too, such request being
just as practical as what you suggest, and yet I'd say that the overall 
solution is just as likely to work well as IPv4-240+.  It'd probably
also solve the transition to IPv6 issue; we wouldn't need to.

 What is
 impractical in asking ARIN to add a question to their forms just as they
 have already done for 32-bit AS numbers? What is impractical in asking
 vendors to remove the code blocks in their next patch release cycle?

Because it's not backwards compatible in the least, and it is a major
distraction from making forward progress.

You want this?  Run it on your network.  Have fun.  Once you put it on
the public Internet, it's not going to work.  Have more juicy funness.

  And this ... would cause some people to delay IPv6.  So it's bad.
 
 IPv6 is not a universal good.

No, but it's a path forward that doesn't rely on all of the badness we
have today.

 The Internet is far more complex with far
 more dark corners than you realize.

I'm not sure that's true.  I'm aware of a *lot* of dark corners that have
a *huge* amount of stuff and I can tell you that the *vast* *majority* of
it will not be upgraded to handle IPv4-240+.

That there may be more dark corners above and beyond the ones I am actually
aware of is a fact that I'm also aware of; my inability to quantify all
possible dark corners doesn't mean that there's some magic dark corner
where all the dark corners I'm aware of will be transformed to be IPv4-240+
capable. 

 But for the owners of those dark
 corners it makes economic sense so why should anyone try and convert
 them to the one true Internet architecture?

Possibly because they want to be connected to it?  Just a thought.
If you want to be part of the community, it is probably a good idea to
go along with the basic rules agreed upon by the community.  

If it makes economic sense for you to use IPv4-240+ internally, by all
means, allocate and NAT it.  I tried that just this morning.  I'm sure
that given enough hammering and patching, it could be made to work for
some limited use, but it's going to require a significant amount of 
work.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: 240/4

2007-10-18 Thread Joe Greco

 Joe,
 On Oct 18, 2007, at 8:49 AM, Joe Greco wrote:
  The ROI on the move to v6 is immense compared to the ROI on the move
  to v4-240+, which will surely only benefit a few.
 
 I am told by people who have inside knowledge that one of the issues  
 they are facing in deploying IPv6 is that an IPv6 stack + IPv4 stack  
 have a larger memory footprint than IPv4 alone in devices that have  
 essentially zero memory for code left (in fact, they're designed that  
 way).  Fixing devices so that they can accept 240/4 is a software fix  
 that can be done with a binary patch and no additional memory.  And  
 there are a _lot_ of these devices.

Sure, I agree there are.  How does that number compare to the number of
devices which can't or won't be upgraded to IPv4-240+?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: 240/4

2007-10-18 Thread Joe Greco

 Consider an auto company network. behind firewalls and having  
 thousands and thousands of robots and other factory floor machines.   
 Most of these have IPv4 stacks that barely function and would never  
 function on IPv6.  One company estimated that they needed 40 million  
 addresses for this purpose.

I guess I have a certain amount of skepticism that an auto company's
robotic control network needs to have public IP addresses.
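
For scale, though, the arithmetic that presumably makes 240/4 attractive
for an estimate like that is straightforward - all of RFC 1918 combined
doesn't reach 40 million addresses, while 240/4 holds roughly 268 million:

    /* Prefix-length arithmetic only; shown to illustrate why 40M
     * devices don't fit in private space without overlapping NAT
     * realms. */
    #include <stdio.h>

    static unsigned long block_size(int prefix_len) {
        return 1UL << (32 - prefix_len);
    }

    int main(void) {
        unsigned long rfc1918 = block_size(8)    /* 10/8       */
                              + block_size(12)   /* 172.16/12  */
                              + block_size(16);  /* 192.168/16 */
        printf("RFC 1918 total: %lu\n", rfc1918);        /* ~17.9M  */
        printf("240/4 total:    %lu\n", block_size(4));  /* ~268.4M */
        return 0;
    }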

In an ideal world, like it was 20 years ago when we told everyone to
"register some space," yeah, it was a grand idea.  Now, with
space running out, we need IPv6 for that, and in ten years, all those
little robots will begin to find themselves having their controller
boards replaced.  There may not be a perfect path forward for them,
but it seems likely that they can deal with the problem in suboptimal
ways until they're actually capable of IPv6.

It is in no way thrilling, but it doesn't seem likely that IPv4-240+ is
going to be a grand solution for devices whose IP stacks are already
admittedly barely functional.  Nor does it seem likely that public IP
addresses are necessary for them - in which case there's a certain amount
of freedom to recycle as much of the existing IP space as is needed.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: 240/4

2007-10-18 Thread Joe Greco

 Or simply ask IANA to open up 256/5. After all, this is just an entry in a
 table, should be easy to do, especially if it is done on Apr 1st. ;-)

DOH!  Point: you.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: 240/4

2007-10-18 Thread Joe Greco

I hadn't intended to post any further replies, but given the source and
the message here, I felt this warranted it:

 Compared to the substantial training (just getting NOC monkeys to understand
 hexadecimal can be a challenge), back office system changes, deployment
 dependencies, etc. to use ipv6, the effort involved in patching systems to use
 240/4 is lost in the noise. Saying deploying a large network with 240/4
 is a problem of the same scale as migrating to ipv6 is like saying that
 trimming a hangnail is like having a leg amputated; both are painful but one
 is orders of magnitude more so than the other.

So is this a statement that Cisco is volunteering to provide free binary
patches for its entire product line?  Including the really old stuff
that happens to be floating around out there and still in use?

Because if it's not, your first stop should be to get your own shop
in order and on board; for a major router vendor to not make free
binary patches available for its entire product line certainly does
represent a huge roadblock to the adoption of IPv4-240+.

The day you guys release a set of free binary patches for all your
previous products, including stuff like the old Compatible Systems
line, old Cisco gear like the 2500, and old Linksys products, then
I'll be happy to concede that I could be wrong and that vendors might
actually make it possible for IPv4-240+ to be usable.

Until then, this doesn't carry much credibility, and continuing this
thread is a waste of time.  Nobody cares if you're able to patch a 
current Linux system so that you can make one measly node on the
Internet work with IPv4-240+.  It's getting the rest of them to be
patched - including all the hosts and networking gear - that's the 
problem.

If you just want to discuss your clever Linux patches, the Linux
mailing lists are thataway.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: 240/4

2007-10-18 Thread Joe Greco

  why on earth would you want to go and hack this stuff together,  
  knowing that it WILL NEVER WORK
 
 Because I have read reports from people whose technical expertise I
 trust. They modified the TCP/IP code of Linux and FreeBSD and were able
 to freely use 240/4 address space to communicate between machines. This
 means that IT WILL WORK.
 
 The reports stated that the code patch was simple because it involved
 simply removing a line of code that disallowed 240/4 addresses.
 
 This demonstrates that enabling 240/4 is a very simple technical issue.
 The only real difficulty here is getting the right people to act on it.
 
 Companies like Cisco don't even need to wait for the IETF in order to
 implement a command like
ip class-e
 as long as they ship it with a default of
no ip class-e

I don't even know where to begin.  Well, maybe here:

"The only real difficulty here is getting the right people to act on it."

That neatly sums up the problem.

When you can round up:

1) All the programmers for all the tens of thousands of different IP
   devices that are out on the market, have them dig up the source code
   for these devices (some of which may have been a few employers ago),
   and you get them all to agree to post updated copies of their firmware,
   which might be problematic for those companies that went T.U.,

You still have the giant problem of:

2) Getting over 100 MILLION users to all update the BILLIONS of devices
   that are out there with that firmware.

Once you have a game plan for getting those hundred million people to do
this, then we may have something to talk about.  Until then, not so much.

Your "people whose technical expertise I trust" clearly figured out
that there are cases where you can make moving an IPv4-240+ packet work.
Anyone can make that happen.  However, they apparently failed to impress
upon you that what they were (hopefully) saying is that enabling
IPv4-240+ on a single device is a very simple technical issue.  Deploying
it on a wider scale ... not so simple.

What kind of customer would actively solicit an IP address assignment
that won't reach random segments of the Internet?

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Why do some ISP's have bandwidth quotas?

2007-10-10 Thread Joe Greco

 On Mon, 8 Oct 2007, Joe Greco wrote:
  It's arrogant to fix brokenness?  Because I'm certainly there.  In my
  experience, if you don't bother to address problems, they're very likely
  to remain, especially when money is involved on the opposite side.
 
 There's a big difference between fixing brokenness and demanding that 
 somebody else do something that might make sense in your situation but not 
 in theirs.

Well, then, when someone actually demands that, then why don't you have
that little chat with them.  Otherwise, you might want to recognize that
someone actually /asked/ me what I would do - and I answered.  For you
(or anyone else) to turn that on its ear and make it out like I was
demanding that somebody else do something is, at best, poor form.

[lots of boring and obvious US Internet boom/bust history snipped]

 In other words, capacity in the US is cheap because a bunch of investors 
 screwed up.  That's nothing new; it's how the American railroads got built 
 in the mid to late 1800s, and it's how the original American phone 
 networks got built in the early 1900s.  Investors will presumably keep 
 making similar mistakes, and society will be better off because of it. 
 But counting on them to make the same mistake while investing in the same 
 thing within the same decade may be pushing it.

So there's nowhere else in the world that there's cheap capacity?
There are other areas of the world that are served by the Internet, and
it seems unlikely to me that cheap bandwidth in every single area is due 
to the competition/bankruptcy cycle.

Actually, the thing that tends to be /most/ special about a location such
as Australia is that running the capacity out there is a lot different
from running fiber along tracks in the US.  The race into financial ruin
caused by competitiveness among carriers was not a certainty, but the
sheer amount of excess capacity deployed by numerous providers probably
forced it to become one, as smaller fish fought for a slice of the pie.
The lack of large amounts of excess capacity on competing carriers
clearly keeps the AU costs high, possibly (probably) artificially so.

Therefore, I'm not sure I would accept this argument about why US
capacity is cheap as being a complete answer, though in the context of
talking about why AU is expensive, it's certainly clear that the lack
of competition to AU is a major factor.

 If you're an ISP in an area served by an expensive long haul capacity 
 monopoly rather than a cheap competitive free for all, the economic 
 decisions you're likely to make are quite different than the decisions 
 made in major American cities.  If you can always go get more cheap 
 capacity, encouraging your customers to use a lot of it and thereby become 
 dependent on it may be a wise move, or at least may not hurt you much. 

I'm not actually certain under what circumstances *encouraging* your
customers to use a lot of bandwidth is a wise move, since there are still
issues with overcommit in virtually every ISP network.

 It's probably cheaper than keeping track of who's using what and having to 
 deal with variable bills.  But if the capacity you buy is expensive, you 
 probably don't want your customers using a lot of it unless they're 
 willing to pay you at least what you're paying for it.  Charging per bit, 
 or imposing bandwidth caps, is a way to align your customers' economic 
 interests with your own, and to encourage them to behave in the way that 
 you want them to.

Well, my initial message included this:

: Continued reliance on broadband users using tiny percentages of their
: broadband connection certainly makes the ISP business model easier, but
: in the long term, isn't going to work out well for the Internet's
: continuing evolution.

So, now you've actually stumbled into a vague understanding of what I 
was initially getting at.  Good.  :-)

I am seeing continued growth of bandwidth-intensive services, including
new, sophisticated, data-driven technologies.  I am concerned about the
impact that forcing customers to "behave in the way that you want them
to" has on the development of new technologies.

Let's get into that, just a little bit.

One of the biggest challenges for the Internet has got to be the steadily
increasing storage market, combined with the continued development of
small, portable processors for every application, meaning that there's
been an explosion of computing devices.

Ten years ago, your average PC connected to the Internet, and users might
actually have downloaded the occasional software update manually.  Today,
it is fairly common to configure PC's to download updates - not only for
Windows, but for virus scanners, Web browsers, e-mail clients, etc., all
automatically.  To fail to arrange this is actually risking viral
infection.  Download-and-run software is getting more common.  Microsoft
distributed Vista betas as DVD ISO's.  These things are not getting
smaller.

Ten years ago, portable GPS-based

Re: Why do some ISP's have bandwidth quotas?

2007-10-10 Thread Joe Greco

 On Oct 10, 2007, at 5:18 PM, Mikael Abrahamsson wrote:
  On Wed, 10 Oct 2007, Joe Greco wrote:
  One of the biggest challenges for the Internet has got to be the  
  steadily
  increasing storage market, combined with the continued development of
  small, portable processors for every application, meaning that  
  there's
  been an explosion of computing devices.
 
  The one thing that scares me the most is that I have discovered  
  people around me that use their bittorrent clients with rss feeds  
  from bittorrent sites to download everything (basically, or at  
  least a category) and then just delete what they don't want.  
  Because they're paying for flat rate there is little incentive in  
  trying to save on bandwidth.
 
  If this spreads, be afraid, be very afraid. I can't think of  
  anything more bandwidth intensive than video, no software updates  
  downloads in the world can compete with people automatically  
  downloading DVDRs or xvids of tv shows and movies, and then  
  throwing it away because they were too lazy to set up proper  
  filtering in the first place.
 
 Many people leave the TV on all the time, at least while they are home.
 
 On the Internet broadcasting side, we (AmericaFree.TV) have some  
 viewers that do the same - one has racked
 up a cumulative 109 _days_ of viewing so far this year. (109 days in  
 280 days duration works out to 9.3 hours per day.) I am sure that  
 other video providers can provide similar reports. So, I don't think  
 that things are that different here in the new regime.

That's scary enough.  However, consider something like TiVo.  Our dual-
tuner DirecTiVo spends a fair amount of its time recording.

Now, first, some explanation.

We're not a huge TV household.  The DirecTiVo is a first generation, ~30
hour unit.  It's set up to record about 50 different things on season pass,
many of which are not currently available.  It's also got an extensive
number of thumbs rated (and therefore often automatically recorded as a
suggestion) items.  I'm guessing that a minimum of 90% of what is recorded
is either deleted or rolls off the end without being watched, yet there
are various shows (possibly just one) still on the unit from last year.

All things considered, this harms no one and nothing, since the TiVo is
not using any measurable resource to do the recordings that would not
otherwise have been used.

A DVR on a traditional cable network is non-problematic, as is a DVR on
any of the next gen broadcast/multicast style networks that could be
deployed as a drop-in replacement for legacy cable.

More interesting are some of the new cable video on demand services,
which could create a fair amount of challenge for cable service 
providers.  However, even there, the challenge is limited to the service
provider's network, and it is unlikely that the load created cannot be
addressed.  Multiple customer DVR's requesting content for speculative
download purposes (i.e. for TiVo-style favorites support) could have the
material broadcast or multicast to them at a predetermined time,
essentially minimizing the load caused by speculative downloading.  True
in-real-time
VOD would be limited to users actually in front of the glass.

All of this, however, represents content within the cable provider's
network.  From the TiVo user perspective above, even if a vast majority
of the content is being discarded, it shouldn't really be a major problem.

Now, for something (seemingly) completely different.

Thirty years ago, TV was dominated by the big broadcast networks.  Shows
were expensive to produce, equipment was expensive, and the networks tried
to aim at large interest groups.  Shows such as "Star Trek" had a lot of
difficulty succeeding, for many reasons, but thrived in syndication.

With the advent of cable networks, we saw the launch of channels such as
SciFi, which was originally pegged as a place where "Star Trek" reruns and
other sci-fi movies would find a second life.  However, if you look at
what has /actually/ happened, many networks have started originating
their own high-quality, much more narrowly targeted shows.  We've seen
"Battlestar Galactica" and "Flash Gordon" appear on SciFi, for example.
Part of this is that it has become less difficult and complex to
produce shows, with the advances in technology that we've seen.  I
picked SciFi mainly because there's a lot of bleedover from legacy
broadcast TV to provide some compare/contrast - but more general
examples, such as HBO-produced shows, exist as well.

A big question, then, is will we continue to see this sort of effect?
Can we expect TV to continue to evolve towards more highly targeted
segments?  I believe that the answer is yes, and along with that may
come a move towards a certain amount of more amateur content.  Something
more like video podcasting than short YouTube videos.  And it'll get
better (or worse, depending on POV) as time goes on.  Technology improves.
Today's cell phones, for example, can take

Re: Why do some ISP's have bandwidth quotas?

2007-10-08 Thread Joe Greco

 On Mon, 8 Oct 2007, Mark Newton wrote:
  Thought experiment:  With $250 per megabit per month transit and $30 - 
  $50 per month tail costs, what would _you_ do to create the perfect 
  internet industry?
 
 I would fix the problem, ie get more competition into these two areas 
 where the prices are obviously way higher than in most parts of the 
 civilised world, much higher than is motivated by the placement there in 
 the middle of an ocean.
 
 Perhaps it's hard to get the transoceanic cost down to European levels, 
 but a 25-times difference, that's just too much.

That's approximately correct.  The true answer to the thought experiment
is "address those problems" - don't continue to blindly pay those costs
and complain about how unique your problems are.  Because the problems
are neither unique nor new - merely ingrained.  People have solved them
before.

 And about the local tail, that's also 5-10 times higher than normal in the 
 western world, I don't see that being motivated by some fundamental 
 difference.

The fundamental difference is that it's owned by a monopoly.

Here in the US, we wrestled with Mark's problems around a decade ago,
when transit was about that expensive, and copper cost big bucks.  There
was a lot of fear and paranoia about selling DSL lines for a fraction of
what the circuit would have cost if provided with committed bandwidth.

The whole Info Superhighway thing was supposed to result in a national
infrastructure that provided residential users with 45Mbps to-the-home
capabilities on a carrier-neutral network built by the telcos.  These
promises by the telcos were carefully and incrementally revoked, while
the incentives we provided to the telcos remained.  As a result, we're
now in a situation where the serious players are really the ILEC and
the cable companies, and they've shut out the CLEC's from any reasonable
path forward.

Despite this, wholesale prices did continue to drop.  Somehow, amazingly,
the ILEC found it possible to provide DSL at extremely competitive
prices.  Annoyingly, a bit lower than wholesale costs...  $14.99/mo
for 768K DSL, $19.99/mo for 1.5M, etc.  They're currently feeling the 
heat from Road Runner, whose prices tend towards being a bit more
expensive, but speeds tend towards better too.  :-)

Anyways, as displeased as I may be with the state of affairs here in the
US, it is worth noting that the speeds continue to improve, and projects
such as U-verse and FIOS are promising to deliver higher bandwidth to 
the user, and maintain pressure on the cable companies for them to do
better as well.

US providers do not seem to be doing significant amounts of DPI or other
policy to manage bandwidth consumption.  That doesn't mean that there's
no overcommit crisis, but right now, limits on upload speeds appear to
combine with a lack of killer centralized content distribution apps and
as a result, the situation is stable.

My interest in this mainly relates to how these things will impact the
Internet in the future, and I see some possible problems developing.  I
do believe that video-over-IP is a coming thing, and I see a very scary
(for network operators) scenario of needing to sustain much greater levels
of traffic, as podcast-like video delivery is something that would be a
major impact.  Right now, both the ILEC and the cable company appear to
be betting that they'll continue to drive the content viewing of their
customers through broadcast, and certainly that's the most efficient
model we've found, but it only works for popular stuff.  That still
leaves a wildly large void for a new service model.  The question of
whether or not such a thing can actually be sustained by the Internet is
fascinating, and whether or not it'll crush current network designs.

With respect to the AU thing, it would be interesting to know whether or
not the quotas in AU have acted to limit the popularity of services such 
as YouTube (my guess would be an emphatic yes), as I see YouTube as being 
a precursor to video things-to-come.  Looking at whether or not AU has
stifled new uses for the Internet, or has otherwise impacted the way users
use the Internet, could be interesting and potentially valuable 
information to futurists and even other operators.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Why do some ISP's have bandwidth quotas?

2007-10-08 Thread Joe Greco
 
 to your cousin who owns a farm outside of town? This question is 
 largely ignored in discussions about cranking the 'net to ever faster 
 speeds, at least in the US. I'd be interested to know how it's 
 addressed elsewhere in the world.

I'd like to see it addressed.  I'd like to see widespread Internet
availability.  At this point, it's possible to make a video call to
your cousin who owns a farm outside of town, but doing so probably
requires you to be signed up for satellite based broadband, or long
distance wireless.  Both services exist, and people do use them.  I
know one guy in rural Illinois who maintains a radio tower so he can
get wifi access from the nearest highspeed Internet source (~miles).
He plays multiplayer shoot-'em-ups on the Internet, not the sort of thing
you'd do over dialup, and he's good enough that his ping times aren't a 
noticeable handicap.  I'd note that that was even several years ago.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Why do some ISP's have bandwidth quotas?

2007-10-08 Thread Joe Greco

 $quoted_author = Joe Greco ;
  
   That's approximately correct.  The true answer to the thought experiment
   is address those problems, don't continue to blindly pay those costs and
   complain about how unique your problems are.  Because the problems are
   neither unique nor new - merely ingrained.  People have solved them
   before.
   
   "Address those problems" sounds quite a bit like an old Sam Kinison 
   routine, paraphrased as "move to where the broadband is! You live in 
   a %*^* expensive place."  Sorry, but your statement comes across as 
   arrogant, at least to me.
  
  It's arrogant to fix brokenness?  Because I'm certainly there.  In my
  experience, if you don't bother to address problems, they're very likely
  to remain, especially when money is involved on the opposite side.
 
 it's arrogant to use throwaway lines like address those problems when the
 reality is a complex political and corporate stoush over a former government
 entity with a monopoly on the local loop.
 
 AU should be at a stage where the next generation network (FTTx, for some
 values of x hopefully approaching H) will be built by a new, neutral entity
 owned by a consortium of telcos/ISPs with wholesale charges set on a cost
 recovery basis.  if either political party realises how important this is
 for AU's future and stares down Telstra in their game of ACCC chicken, that
 may even become a reality.  

So, in other words, it is arrogant for me not to have a detailed game plan
to deal with another continent's networking political problems, and instead
to summarize it as "address those problems."

Okay, then.

Well, I certainly apologize.  My assumption was that the membership of this
mailing list was:

1) Not stupid,

2) Actually fairly experienced in these sorts of issues, meaning that they
   are capable of filling in the large blanks themselves, and

3) Probably not interested in a detailed game plan for something outside
   of the North American continent anyways, given the NA in NANOG.

Certainly the general plan you suggest sounds like a good one.  We kind of
screwed that up here in the US.  Despite having screwed it up, we've still
got cheap broadband.  I'd actually like to see something very much more
like what you suggest for AU here in the US.

But there was more than one problem listed.

The other major factor seems to be transit bandwidth.  I believe I already
mentioned that there are others who are actually working to address those
problems, so I am guessing that my terse suggestion was actually spot on.
Otherwise they wouldn't be working on a new fiber from Australia to Guam.

The only thing that seems to be particularly new or unique about this
situation is that it was a momentary flash here in the US, when broadband
was first deployed, and providers were terrified of high volume users.
That passed fairly rapidly, and we're now on unlimited plans. 

I would, however, caution the folks in AU to carefully examine the path
that things took here in the US - and avoid the mistakes.  We started out
with a plan to have a next generation neutral network, and it looks like
it would have kept the US in the lead of the Internet revolution.  The
first mistake, in my opinion, was not creating a truly neutral entity to
do that network, and instead allowing Ma Bell to create it for us.  But
it's late and I'm guessing most of the interested folks here have already
got a good idea of how it all went wrong.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Why do some ISP's have bandwidth quotas?

2007-10-07 Thread Joe Greco

 On Sat, Oct 06, 2007, Joe Greco wrote:
  However, it is equally possible that there'll be some newfangled killer 
  app that comes along.  At some point, this will present a problem.  All
  the self-justification in the world will not matter when the customers
  want to be able to do something that uses up a little more bandwidth.
 
 The next newfangled app came along - its P2P. Australian ISPS have already
 responded by throttling back P2P.

I'm not talking about the next newfangled app that came along 8 years
ago.  That's what P2P is.

P2P, as it currently exists, is a network-killer app, but not really the
sort of killer app that I'm talking about.

The World Wide Web was a killer app.  It transformed the Internet in a
fundamental way.

Instant messaging was a killer app.  It changed how people communicated.

VoIP and YouTube are somewhat less successful killer apps, and that less
successful is at least partly tied into some of the issues at hand here.

We're starting to see the (serious) distribution of video via the Internet,
and I expect that one possible outcome will be a system of TiVo-like video
delivery without the complication of a subscription to a cable or satellite
provider's choice of package.  This would allow the sourcing of content
from many sources.  It could be that something akin to video podcasting 
is the next killer app, in other words.

Or it could be time for something completely different.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Why do some ISP's have bandwidth quotas?

2007-10-07 Thread Joe Greco

   Comparative to Milwaukee, I'd be guessing delivering high performance
   internet and making enough money to fund expansion and eat is harder at
   a non US ISP. It's harder, but there's nothing wrong with it. It compels
   you to get inventive.
  
  The costs to provide DSL up here in Milwaukee are kind of insane,
 
 Insanity is a relative term :-) Try to deliver Internet outside of the
 US in countries that share western culture and you'll start to
 understand why caps are seen as an excusable form of treatment for the
 insanity.

Okay, so, let's pretend that today I'm sitting in Sweden.  Go.

Extremely high speed connectivity, uncapped, well in excess of what is
delivered in most parts of the US.  I was just informed that Road Runner
upgraded locally from 5 to 7Mbps a month ago, and has a premium 15Mbps
offering now, but there are folks with 100Mbps over there.

So, your point is, what, that it's easier to deliver Internet outside of
the US in countries that share western culture?  That could be true, we're
tied up by some large communications companies who don't want carrier-
neutral networks to the residence.

If we just want to start making up claims that fit the observed facts, I
would say that the amount that a user can download from the Internet in 
countries that share western culture tends to decrease with distance from
Sweden, though not linearly.  AU gets placed on the far end of that.  :-)
(That's both a joke AND roughly true!)

 Clearly they're not something we'd prefer, but they are useful
 to manage demand in the context of high costs with customers who
 benchmark against global consumer pricing (or those who think that the
 Internet is a homogeneous thing)
 
 ...Hmm, that's a good idea, perhaps you should do that (get out of the
 US) before you start saying what we're doing is wrong with your
 business or insane or perhaps unreasonable.
 
 And I agree with Mark Newton's sentiments. It's completely delusional of
 you to insist that the rest of the world follow the same definition of
 reasonable. We're not the same. Which is good in some respects as it
 does create some diversity. And I'm quite pleased about that :-)

Well, since I didn't insist that you follow any definition of reasonable,
and in fact I started out by saying

: Continued reliance on broadband users using tiny percentages of their
: broadband connection certainly makes the ISP business model easier, but
: in the long term, isn't going to work out well for the Internet's
: continuing evolution.

it would seem clear that I'm not particularly interested in your local
economics, no matter how sucky you've allowed them to be, but was more
interested in talking about the problem in general.  I *am* interested
in the impact that it has on the evolution of the Internet.

That you're so pleased to be diverse in a way that makes it more
difficult for your users to join the modern era and use modern apps
is sufficient to make me wonder.  There's certainly some delusion
going on there.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Why do some ISP's have bandwidth quotas?

2007-10-06 Thread Joe Greco

 Joe Greco wrote:
  Technically the user can use the connection to its maximum
  theoretical speed as much as they like, however, if an ISP has a
  quota set at 12G/month, it just means that the cost is passed along
  to them when they exceed it.
  
  And that seems like a bit of the handwaving.  Where is it costing the
  ISP more when the user exceeds 12G/month?
  
  Think very carefully about that before you answer.  If it was arranged
  that every customer of the ISP in question were to go to 100%
  utilization downloading 12G on the first of the month at 12:01AM, it
  seems clear to 
  me that you could really screw up 95th.
 
 First, the total transfer vs. 95%ile issue.  I would imagine that's just a
 matter of keeping it simple.  John Q. Broadbanduser can understand the
 concept of total transfer.  But try explaining 95%ile to him.  Or for that
 matter, try explaining it to the average billing wonk at your average
 residential ISP.  As far as the 12GB cap goes, I guess it would depend on
 the particular economics of the ISP in question.  12GB for a small ISP in a
 bandwidth-starved country isn't as insignificant as you make it sound.  But
 let's look at your more realistic second what-if:

Wasn't actually my what-if.

  90GB/mo is still a relatively small amount of bandwidth.  That works
  out to around a quarter of a megabit on average.  This is nowhere
  near the 100% situation you're discussing.  And it's also a lot
  higher than the 12GB/mo quota under discussion.
 
 As you say, 90GB is roughly .25Mbps on average.  Of course, like you pointed
 out, the users actual bandwidth patterns are most likely not a straight
 line.  95%ile on that 90GB could be considerably higher.  But let's take a
 conservative estimate and say that user uses .5Mbps 95%ile.  And lets say
 this is a relatively large ISP paying $12/Mb.  That user then costs that ISP
 $6/month in bandwidth.  (I know, that's somewhat faulty logic, but how else
 is the ISP going to establish a cost basis?)

That *is* faulty logic, of course.  It doesn't make much sense in the
typical ISP scenario of multiple bursty customers.  It's tricky to
compute what the actual cost is, however.

One of the major factors that's really at the heart of this is that a
lot of customers currently DO NOT use much bandwidth, a model which fits
well with 12G/mo quota plans.  It's easy to forget that this means that a
lot of users may in fact only use 500MB/mo.  As a result, the actual
cost of bandwidth to the ISP for the entire userbase doesn't end up being
$6/user.
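
To see why, here's a sketch with assumed (and deliberately simple)
numbers - 90% of users moving 500MB/mo and 10% moving 90GB/mo at the
quoted $12/Mbit - showing how far the blended per-user cost lands from
the $6 attributed to the heavy user:

    /* Blended per-user transit cost; the usage mix is an assumption
     * for illustration, not a measurement. */
    #include <stdio.h>

    int main(void) {
        double month_secs  = 30.0 * 86400.0;
        double per_mbit    = 12.0;              /* $/Mbit/month    */
        double light_gb    = 0.5, heavy_gb = 90.0;
        double light_share = 0.9;               /* 90% light users */

        /* average Mbit/s implied by a monthly transfer volume */
        double light = light_gb * 8e9 / month_secs / 1e6;
        double heavy = heavy_gb * 8e9 / month_secs / 1e6;

        double blended = light_share * light + (1 - light_share) * heavy;
        printf("blended: %.3f Mbit/s/user -> $%.2f/user/month\n",
               blended, blended * per_mbit);    /* ~$0.35, not $6  */
        return 0;
    }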

 If that user is only paying
 say $19.99/month for their connection, that leaves only $13.99 a month to
 pay for all the infrastructure to support that user, along with personnel,
 etc all while still trying to turn a profit.  In those terms, it seems like
 a pretty reasonable level of service for the price.  If that same user were
 to go direct to a carrier, they couldn't get .5Mbps for anywhere near that
 cost, even ignoring the cost of the last-mile local loop.  And for that same
 price they're also probably getting email services with spam and virus
 filtering, 24-hr. phone support, probably a bit of web hosting space, and
 possibly even a backup dial-up connection.

That makes it sound really nice and all, but the point I was trying to
make here was that these sorts of limits stifle other sorts of innovation.
My point was that cranking up the bandwidth management only *appears* to
solve a problem that will eventually become more severe - there are going
to be ever-more-bandwidth-intensive applications.

That brings us back to the question of how much bandwidth we should be
able to deliver to users, so the $6/user figure is certainly relevant in
that light.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Why do some ISP's have bandwidth quotas?

2007-10-06 Thread Joe Greco

 In the Australian ISP's case (which is what started this) it's rather
 worse.
 
 The local telco monopoly bills between $30 and $50 per month for access
 to the copper tail.
 
 So there's essentially no such thing as a $19.99/month connection here
 (except for short-lived flash-in-the-pan loss-leaders, and we all know
 how they turn out)
 
 So to run the numbers:  A customer who averages .25Mbit/sec on a tail acquired
 from the incumbent requires --
 
Port/line rental from the telco   ~ $50
IP transit~ $ 6 (your number)
Transpacific backhaul ~ $50 (I'm not making this up)

These look like great places for some improvement.

 Like I said a few messages ago, as much as your marketplace derides 
 caps and quotas, I'm pretty sure that most of you would prefer to do 
 business with my constraints than with yours.

That's nice from *your* point of view, as an ISP, but from the end-user's
point of view, it discourages the development and deployment of the next
killer app, which is the point that I've been making.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Why do some ISP's have bandwidth quotas?

2007-10-06 Thread Joe Greco

  Of course, that's obvious.  The point here is that if your business is so
  fragile that you can only deliver each broadband customer a dialup modem's
  worth of bandwidth, something's wrong with your business.
 
 Granted 12G is a small allocation. But getting back to the original
 question which was Is there some kind of added cost running a non US
 ISP?
 
 Why yes, yes there is. Transit out of the country (or in a US context,
 out of a state) is around 25 times more expensive.

Than local peering costs?  That seems fine.  The real question is what
transit bandwidth costs.  We've got small ISP's around here paying $45-
$60/Mbit.

 Combine that with a
 demand on offshore content of around 70-90% of your total network load
 and you can see that those kind of changes to the cost structure make
 you play the game differently. Add to that an expectation to be as well
 connected as those in the continental US, and you can see that it's
 about managing expectations.
 
 Comparative to Milwaukee, I'd be guessing delivering high performance
 internet and making enough money to fund expansion and eat is harder at
 a non US ISP. It's harder, but there's nothing wrong with it. It compels
 you to get inventive.

The costs to provide DSL up here in Milwaukee are kind of insane, as 
you tend to get it on both ends.  However, I'm not aware of any ISP's
setting up quotas.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Why do some ISP's have bandwidth quotas?

2007-10-06 Thread Joe Greco

  No, it's that they've run the numbers and found the users above 12G/month
  are using a significant fraction of their network capacity for whatever
  values of "significant" and "fraction" you define.
  
  Of course, that's obvious.  The point here is that if your business is so
  fragile that you can only deliver each broadband customer a dialup modem's
  worth of bandwidth, something's wrong with your business.
 
 If your business model states that you will not charge clients for
 something when they have no problem paying for it in order to make the
 service better for them, then there is something wrong with your
 business model.
 
 Note that no one said can't deliver the service. You want unlimited
 bandwidth, either pay for it, or go to one of the bigger guys who will
 give it to you. Good luck when you want any sort of technical support...

Actually, I wasn't talking about unlimited bandwidth.  I was talking
more about quotas that are so incredibly small as to be stifling to new
offerings.  There are USB pen drives that hold more than 12GB.

I'm really expecting InterneTiVo to become a big thing at some point in
the not-too-distant future, probably nearly as soon as there's some
broadband deployment capable of dealing with the implications, and an
Akamai-like system to deliver the content on-net where possible.

However, it is equally possible that there'll be some newfangled killer 
app that comes along.  At some point, this will present a problem.  All
the self-justification in the world will not matter when the customers
want to be able to do something that uses up a little more bandwidth.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Why do some ISP's have bandwidth quotas?

2007-10-06 Thread Joe Greco
, it appears that AU ISP's are
simply passing on costs, minimizing the services offered in order to keep
service prices as low as possible, and then sitting around justifying it.

At a certain point, the deployment cost of your telco network is covered,
and it is no longer reasonable to be paying $50/line/month for mere access
to the copper.

 The point here is that you guys in the US have a particular market
 dynamic that's shaped your perspective of what reasonable is.  

Actually, we're more of a content network, delivering content globally,
and we deal with regional issues around the globe.  I probably have a
pretty good idea about a wide variety of strategies.  Your view appears to
be somewhat more provincial, defending a status quo that doesn't honestly
make sense.  My perspective of what "reasonable" is certainly isn't shaped
by the US market, which is just about as broken as the major
communications companies have been able to get away with making it.

 It's
 completely delusional of you to insist that the rest of the world
 follow the same definition of reasonable,

Interestingly enough, you're now putting words in my mouth, because there
is no way in hell that I would suggest that AU follow the US model.  I
would not wish that on anybody.  I might suggest that AU follow the model
proposed back in the mid '90's - which would be a good idea, in fact.

 *ESPECIALLY* when the rest
 of the world is subsidizing your domestic Internet by paying for all
 the international transit.

Interesting thing about how the market works, isn't it.  It seems that
there's substantially more value to be had in AU connecting to the US 
than there is the other way around, and costs are shifted accordingly.

It isn't fair, but it's the way it works.  Historically, AU has always
had connectivity issues.  There was a time when a machine I ran was 
burning up a measurable fraction of the total connectivity to AU 
sending USENET to you guys (anybody remember in the early '90's, when 
AARNet only had 1-2Mbps to the US?  Remember the 
alt.binaries.pictures.erotica fiasco?)

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Why do some ISP's have bandwidth quotas?

2007-10-05 Thread Joe Greco

  Now, ISP economics pretty much require that some amount of overcommit
  will happen.  However, if you have a 12GB quota, that works out to
  around 36 kilobits/sec average.  Assuming the ISP is selling 10Mbps
  connections (and bearing in mind that ADSL2 can certainly go more than
  that), what that's saying is that the average user can use 1/278th of
  their connection.  I would imagine that the overcommit rate is much
  higher than that.
 
 I don't think that things should be measured like this. Throughput !=
 bandwidth.

No, but it gives some rational way to look at it, as long as we all realize
what we're talking about.  The other ways I've seen it discussed mostly
involve a lot of handwaving.
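
For the record, the arithmetic behind that 36 kilobits/sec figure is
trivial; a 31-day month is assumed here, and a 30-day month lands nearby:

    /* Quota-to-average-rate arithmetic for a 12GB monthly cap. */
    #include <stdio.h>

    int main(void) {
        double quota_bytes = 12e9;
        double month_secs  = 31.0 * 86400.0;
        double avg_bps     = quota_bytes * 8.0 / month_secs;

        printf("sustained average: %.1f kbit/s\n", avg_bps / 1e3);
        /* ratio of a 10 Mbit/s access rate to that average (~1/279th) */
        printf("access/average: %.0f:1\n", 10e6 / avg_bps);
        return 0;
    }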

 Technically the user can use the connection to its maximum theoretical
 speed as much as they like, however, if an ISP has a quota set at
 12G/month, it just means that the cost is passed along to them when they
 exceed it.

And that seems like a bit of the handwaving.  Where is it costing the ISP
more when the user exceeds 12G/month?

Think very carefully about that before you answer.  If it was arranged
that every customer of the ISP in question were to go to 100% utilization
downloading 12G on the first of the month at 12:01AM, it seems clear to
me that you could really screw up 95th.
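
Here's one way to read that, as a minimal sketch - assuming 5-minute
samples over a 30-day month and a single synchronized burst.  12GB at
10Mbps takes about 2.7 hours, which fits entirely inside the roughly 36
hours of samples that 95th-percentile billing throws away, so the peak
the network had to carry never appears in the bill at all:

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b) {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    int main(void) {
        enum { SAMPLES = 30 * 288 };       /* 5-min samples, 30 days */
        static double mbps[SAMPLES];
        int i, burst = (int)(12e9 * 8 / 10e6 / 300) + 1;  /* ~33 */

        for (i = 0; i < SAMPLES; i++)
            mbps[i] = 0.03;                /* quiet background       */
        for (i = 0; i < burst; i++)
            mbps[i] = 10.0;                /* everyone flat out      */

        qsort(mbps, SAMPLES, sizeof(double), cmp);
        printf("billable 95th: %.2f Mbit/s\n",
               mbps[(int)(SAMPLES * 0.95)]);  /* 0.03, not 10.0 */
        return 0;
    }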

  Note: I'm assuming the quota is monthly, as it seems to be for most
  AU ISP's I've looked at, for example:
 
 Yes most are monthly based on GB.
 
  capacity is being stifled by ISP's that are stuck back
  in speeds (and policies) appropriate for the year 2000.  
 
 Imagine a case (even in the largest of ISP's), where there are no
 quotas, and everyone has a 10Mbps connection.

I'm imagining it.  I've already stated that it's a problem.

 I don't think there is an ISP in existence that has the infrastructure
 capacity to carry all of their clients using all of the connections
 simultaneously at full speed for long extended periods.

I'll go so far as to say that there's no real ISP in existence that
could support it for any period.

 As bandwidth and throughput increases, so does the strain on the
 networks that are upstream from the client.

Obviously.

 Unless someone pays for the continuously growing data transfers, then
 your 6Mbps ADSL connection is fantastic, until you transit across the
 ISP's network who can't afford to upgrade the infrastructure because
 clients think they are being ripped off for paying 'extra'.
 
 Now, at your $34/month for your resi ADSL connection, the clients call
 the ISP and complain about slow speeds, but when you advise that they
 have downloaded 90GB of movies last month and they must pay for it, they
 won't. Everyone wants it cheaper and cheaper, yet expects things to
 work 100% of the time, and at 100% at maximum advertised capacity. My
 favorites are the clients who call the helpdesk and state "I'm trying to
 run a business here" (on their residential ADSL connection).

90GB/mo is still a relatively small amount of bandwidth.  That works out 
to around a quarter of a megabit on average.  This is nowhere near the 
100% situation you're discussing.  And it's also a lot higher than the
12GB/mo quota under discussion.

  What are we missing out on because ISP's are more interested in keeping
  bandwidth use low?  
 
 I don't think anyone wants to keep bandwidth use low, it's just in order
 to continue to allow bandwidth consumption to grow, someone needs to pay
 for it.

How about the ISP?  Surely their costs are going down.  Certainly I know
that our wholesale bandwidth costs have dropped orders of magnitude in 
the last ~decade or so.  Equipment does occasionally need to be replaced.
I've got a nice pair of Ascend GRF400's out in the garage that cost $65K-
$80K each when originally purchased.  They'd be lucky to fetch pocket
change these days.  It's a planned expense.  As for physical plant, I'd
imagine that a large amount of that is also a planned expense, and is
being paid down (or already paid off), so arguing that a lot of extra
expense lurks there is probably silly too.

  What fantastic new technologies haven't been developed
  because they were deemed impractical given the state of the Internet?
 
 Backbone connections worth $34/month, and infrastructure gear that
 upgrades itself at no cost.

Hint: that money you're collecting from your customers isn't all profit.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Why do some ISP's have bandwidth quotas?

2007-10-05 Thread Joe Greco

  And before anyone accuses me of sounding overly critical 
  towards the AU ISP's, let me point out that we've dropped the 
  ball in a major way here in the United States, as well.
 
 We've dropped the ball in any place where the broadband architecture is
 to backhaul IP packets from the site where DSL or cable lines are
 concentrated, into an ISP's PoP. This means that P2P packets between
 users at the same concentration site, are forced to trombone back and
 forth over the same congested circuits. 

This would seem to primarily be an issue /due/ to congestion of those
circuits.  The current solution, as you suggest, is not ideal, but it
isn't necessarily clear that a solution to this will be better.

Let's look at an infrastructure that would be representative of what
often happens here in Milwaukee.

AT&T provides copper DSL wholesale services to an ISP.  This means that
a packet goes from the residence to the local CO, where AT&T aggregates
over its network to an ATM circuit that winds up at an ISP POP.  Then, to
get to a DSL customer with actual AT&T service, the packets go down to
Chicago, over transit to AT&T, and then back up to Milwaukee...

Getting the ISP to have equipment colocated at the point where DSL lines
are concentrated would certainly help for the case where packets were
transiting from one neighborhood customer of an ISP to another, but in
the common case, it isn't clear to me that the payoff would be
significant.

Getting all the ISP's to peer with each other at the DSL concentration
point would solve the problem, but again, the question is how
significant that payoff would be.

It would seem like a larger payoff to simply make sure sufficient 
capacity existed to move packets as required, since this not only solves
the local packet problem you suggest, but the more general overall
problem that ISP's face.
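
To put rough numbers on that intuition, here's a crude model, with every
figure invented purely for illustration:

# Crude estimate of backhaul avoided by switching P2P locally at the
# concentration site.  Every number here is invented for illustration.
total_gbps  = 2.0    # traffic crossing the site's backhaul circuits
p2p_share   = 0.6    # fraction of that traffic which is P2P
local_share = 0.05   # fraction of P2P whose peer is behind the same site

saved = total_gbps * p2p_share * local_share
print("avoided: %.2f Gbps (%.0f%% of backhaul)"
      % (saved, 100 * saved / total_gbps))
# ~3% -- unless locality is far higher, simply adding capacity wins.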

 And P2P is the main way to
^currently
 reduce the overall load that video places on the Internet.


... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Why do some ISP's have bandwidth quotas?

2007-10-05 Thread Joe Greco

 On Fri, Oct 05, 2007, Joe Greco wrote:
 
   Technically the user can use the connection to its maximum theoretical
   speed as much as they like, however, if an ISP has a quota set at
   12G/month, it just means that the cost is passed along to them when they
   exceed it.
  
  And that seems like a bit of the handwaving.  Where is it costing the ISP
  more when the user exceeds 12G/month?
 
 No, it's that they've run the numbers and found the users above 12G/month
 are using a significant fraction of their network capacity for whatever
 values of significant and fraction you define.

Of course, that's obvious.  The point here is that if your business is so
fragile that you can only deliver each broadband customer a dialup modem's
worth of bandwidth, something's wrong with your business.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Why do some ISP's have bandwidth quotas?

2007-10-04 Thread Joe Greco

 On Thu, 4 Oct 2007, Hex Star wrote:
  Why is it that the US has ISP's with either no quotas or obscenely high ones
  while countries like Australia have ISP's with ~12gb quotas? Is there some
  kind of added cost to running a non-US ISP?
 
 Depending upon the country you're in, that is a possibility.  Some 
 countries have either state-run or monopolistic telcos, so there is little 
 or no competition to force prices down over time.
 
 Even in the US, there is a huge variability in the price of telco services 
 from one part of the country to another.

Taking a slightly different approach to the question, it's obvious that
overcommit continues to be a problem for ISP's, both in the States and
abroad.

It'd be interesting to know what the average utilization of an unlimited
US broadband customer was, compared to the average utilization of an 
unlimited AU broadband customer.  It would be interesting, then, to look
at where the quotas lie on the curve in both the US and AU.

Regardless, I believe that there is a certain amount of shortsightedness
on the part of service providers who are looking at bandwidth management
as the cure to their bandwidth ills.  It seems clear that the Internet
will remain central to our communications needs for many years, and that
delivery of content such as video will continue to increase.  End users
do not care to know that they have a quota or that their quota can be
filled by a relatively modest amount of content.  Remember that a 1Mbps
connection can download ~330GB/mo, so the aforementioned 12GB is nearly 
*line noise* on a multimegabit DSL or cable line.
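
The same arithmetic run the other way, as a sketch (decimal GB and a
30-day month assumed):

# What a sustained rate delivers over a month, in decimal GB.
def mbps_to_gb_per_month(mbps, days=30):
    return mbps * 1e6 * days * 86400 / 8 / 1e9

monthly = mbps_to_gb_per_month(1)        # ~324 GB from a steady 1Mbps
print("%.0f GB; a 12GB quota is %.1f%% of it" % (monthly, 1200 / monthly))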

Continued reliance on broadband users using tiny percentages of their
broadband connection certainly makes the ISP business model easier, but
in the long term, isn't going to work out well for the Internet's
continuing evolution.

And before anyone accuses me of sounding overly critical towards the AU
ISP's, let me point out that we've dropped the ball in a major way here
in the United States, as well.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Why do some ISP's have bandwidth quotas?

2007-10-04 Thread Joe Greco

 On 4-Oct-2007, at 1416, Joe Greco wrote:
  It'd be interesting to know what the average utilization of an unlimited
  US broadband customer was, compared to the average utilization of an
  unlimited AU broadband customer.  It would be interesting, then, to look
  at where the quotas lie on the curve in both the US and AU.
 
 I think the implication here is that there's a smoothing effect that  
 comes with large customer bases.

Probably not even large customer bases.

 For example, I remember back to when DSL was first rolled out in New  
 Zealand. It was priced well beyond the means of any normal  
 residential user, and as a result DSL customers tended to be just the  
 people who would consume a lot of external bandwidth.
 
 At around the same time, my wife's mother in Ontario, Canada got  
 hooked up with a cablemodem on the grounds that unlimited cable  
 internet service cost less than a second phone line (she was fed up  
 with missing phone calls when she was checking her mail).
 
 She used/uses her computer mainly for e-mail, although she
 occasionally uses a browser. (These days I'm sure legions of
 miscreants are using her computer too, but back then we were
 pre-botnet).
 
 If you have mainly customers like my mother-in-law, with just a few  
 heavy users, the cost per user is nice and predictable, and you don't  
 need to worry too much about usage caps.
 
 If you have mainly heavy users, the cost per user has the potential  
 to be enormous.
 
 It seems like the pertinent question here is: what is stopping DSL  
 (or cable) providers in Australia and New Zealand from selling N x  
 meg DSL service at low enough prices to avoid the need for a data  
 cap? Is it the cost of crossing an ocean which makes the risk of  
 unlimited service too great to implement, or something else?

Quite frankly, this touches on one aspect, but I think it misses entirely
others.

Right now, we have a situation where some ISP's are essentially cherry
picking desirable customers.  This can be done by many methods, ranging
from providing only slow basic DSL services, to placing quotas or TOS
restrictions on the service, all the way to terminating the service of
high-volume customers.  A customer who gives you $40/mo for a 5Mbps connection
and uses a few gig a month is certainly desirable.  By either telling the
high volume customers that they're going to be capped, or actually
terminating their services, you're discouraging those who are
unprofitable.  It makes sense, from the ISP's limited view.

However, I then think about the big picture.  Ten years ago, hard drives 
were maybe 10GB, CPU's were maybe 100MHz, a performance workstation PC
had maybe 64MB RAM, and a Road Runner cable connection was, I believe,
about 2 megabits.  Today, hard drives are up to 1000GB (x100), CPU's are
quadcore at 2.6GHz (approximately x120 performance), a generous PC will
have 8GB RAM (x128), and ...  that Road Runner, at least here in
Milwaukee, is a blazing 5Mbps...  or _2.5x_ what it was.

Now, ISP economics pretty much require that some amount of overcommit
will happen.  However, if you have a 12GB quota, that works out to
around 36 kilobits/sec average.  Assuming the ISP is selling 10Mbps
connections (and bearing in mind that ADSL2 can certainly go more than
that), what that's saying is that the average user can use 1/278th of
their connection.  I would imagine that the overcommit rate is much
higher than that.

Note: I'm assuming the quota is monthly, as it seems to be for most
AU ISP's I've looked at, for example:

http://www.ozemail.com.au/products/broadband/plans.html

Anyways, my concern is that while technology seems to have improved 
quite substantially in terms of what computers are capable of, our
communications capacity is being stifled by ISP's that are stuck back
in speeds (and policies) appropriate for the year 2000.  

Continued growth and evolution of cellular networks, for example, have
taken cell phones from a premium niche service with large bag phones
and extremely slow data services, up to new spiffy high technology where
you can download YouTube on an iPhone and watch videos on a pocket-sized
device.

What are we missing out on because ISP's are more interested in keeping
bandwidth use low?  What fantastic new technologies haven't been developed
because they were deemed impractical given the state of the Internet?

Time to point out that, at least in the US, we allowed this to be done to
ourselves...

http://www.newnetworks.com/broadbandscandals.htm

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Good Stuff [was] Re: shameful-cabling gallery of infamy - does anybody know where it went?

2007-09-12 Thread Joe Greco

 If you find any pictures of NY.NET; Terry Kennedy made the above
 look sloppy. Many places ban cable ties due to the sharp ends;
 some allow 'em if tensioned by a pistol-grip installer.

The tie gun is a good solution, but quite frankly, you don't need one
to do a good job with cable ties.  This is mainly a training issue,
and the training is substantially easier than training folks to use
lacing cord.  The rule doesn't need to be much more than "clean cut
required; if you can't do a clean cut, leave the tail on."

Xcelite makes some fantastic tools, as anyone in this business should
know, and they have a wide selection of full flush cutters that will
work fine.  There are some other manufacturers who make this sort of
cutter, of course, but they're a bit tricky to find.  The key thing 
is that people learn not to just use any old wire cutters to snip 
these.

If you're really good, and the situation allows, you can use a knife
or box cutter to trim ends as well.

 Terry required lacing cord. You can guess his heritage.

That's mostly a pain to do.  Looks nice, but hell to modify, and more
time and effort to install initially.

 As for horror stories, a certain ISP near here that started out in
 a video store had piles of Sportsters. The wall warts were lined
 up and glued dead-bug style to a number of long 1x3's; then #14
 copper was run down each side, daisy-chain soldered to each plug
 blade. There was no attempt to insulate any of the upright plugs...

ExecPC, here in Wisconsin, had a much more elegant solution.  ExecPC
BBS was the largest operating BBS in the world, with a large LAN net
and a PC per dial-in line.  They had built a room with a custom rack
system built right in, where a motherboard, network, video, and modem
card sat in a slot, making a vertical stack of maybe 8 nodes, and then
a bunch of those horizontally,  and then several rows of those.  That
was interesting all by itself, but then they got into the Internet biz
early on.

They had opted to go with USR Courier modems for the Internet stuff.
Being relatively cheap, they didn't want to go for any fancier rack
mount stuff (== much more expensive).  So they went shopping.  They
found an all metal literature rack at the local office supply store
that had 120 slots (or maybe it was two 60 slot units).  They took a
wood board and mounted it vertically above the unit.  This held a 
large commercial 120-to-24vac step-down transformer and a variac 
that was used to trim the AC voltage down to the 20VAC(?) needed by 
the Couriers.

Down the backside, they ran a run of wide finger duct vertically.
Inside this, they ran two thick copper bars that had been drilled
and tapped 120 times by a local machine shop.  When connected to
the step-down transformer's output, this formed the power backbone.
They had a guy snip the power cables off the Courier wall warts,
and spade lug them, and screwed them in.  Instant power for 120
modems.

Slip a modem in each slot.  Run phone wire up to one of five
AllenTel AT125-SM's hanging on the back of the plywood, and there 
you have 5 25pr for inbound.  Run serial cables up to one of four
Portmaster PM2E-30's sitting on top of the racks, then network to 
a cheap Asante 10 megabit hub, and you're done.  5 x 25pr POTS in,
power in, ethernet out, standalone 120 line dialin solution.

Multiply solution by 10 and you get to the biggest collection of
Courier modems I've ever seen.

They continued to do this until the advent of X2, which required
T1's to a Total Control chassis, at which point they started to
migrate to rackmount gear (they had no space to go beyond 1200
analog Couriers anyways).

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Good Stuff [was] Re: shameful-cabling gallery of infamy - does anybody know where it went?

2007-09-12 Thread Joe Greco

 On Wed, Sep 12, 2007 at 08:36:45AM -0400, Joe Abley wrote:
  This (the general subject of how to keep real-world cabinets tidy and  
  do cabling in a sane way) seems like an excellent topic for a NANOG  
  tutorial. I'd come, for sure :-)
 
 This is a topic that I am quite interested in.  I have no telco
 background, but got started in a shop on par with many of these
 photos.  Around my current job, I'm the guy who is known for
 whining about crappy cabling jobs.
 
 Does anyone know of any good resources on best practices for this sort
 of thing?  I'm pretty sure that others must've already figured out the
 trickier stuff that I've thought about.
 
 For example - some of the posted pictures show the use of fiber ducts
 lifted above cable ladders.  Why opt for such a two-level design
 instead of bundling fibers in flex-conduit and running the conduits
 adjacent on the ladder?

Design decisions for cabling will vary with the facility and actual 
intended uses.  For example, an Internet Service Provider with significant
telecom requirements may be designed quite differently than a hosting
provider.

Facilities where the design is not likely to change significantly are a
good candidate for tidy cabling of the sort under discussion here, but
where changes are expected to be common and frequent, there are other
ways to make it look nice, without investing a ton of time just in time
for next quarter's changes.

The best thing you can do is to allow for what might seem to be excessive
amounts of space for cable management, and then be prepared to spend TIME
when installing equipment or making changes.  The biggest thing that any
serious cablemonkey will tell you (and I won't argue it!) is that the job
takes TIME to do right.  Remember that the time invested isn't being
invested just to make it look good, but more importantly to make it easy
to deal with when something goes wrong.  Good cable guys deserve a lot of 
respect, for making it so easy to debug what's going on when something 
goes wrong.

The design for your facility is best based on the unique situation present
at your facility.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Using Mobile Phone email addys for monitoring

2007-09-06 Thread Joe Greco

 Once upon a time, Duane Waddle [EMAIL PROTECTED] said:
  We tend to avoid the whole SMTP mess and deliver messages to mobiles and
  pagers via a modem and the provider's TAP gateway.  It works quite well with
  Verizon and ATT/Cingular, but I've no experience with T-Mobile.
 
 T-Mobile dropped their TAP access several years ago. 

Well, good, because they were pretty cruddy at it.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: Using Mobile Phone email addys for monitoring

2007-09-06 Thread Joe Greco

Anyone else have any issues, past or present, with this kind of thing?
 
 It takes ~ 7 minutes from the time Nagios sends an email sms to ATT to 
 the time it hits my phone.  I'm using @mobile.mycingular.com because 
 mmode.com stopped working (which results in at least two txt pages vs. 
 the one I was used to).
 
   Is SMTP to a mobile phone a fundamentally flawed way to do this?
 
 I'm beginning to think it is!

It appears that device messaging in general is getting more difficult.
We use SNPP and TAP paging to drive paging to actual pagers.  Years ago,
I experimented with using cell phones instead of pagers, and the
reliability of the service offered by cell phone companies was all over
the map, despite the fact that a phone ought to make a fairly ideal
pager, being two-way capable, rechargeable, etc.  Slow deliveries and
non-deliveries ran around 50%.
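
For the curious, SNPP (RFC 1861) is about as simple as wire protocols
get; a minimal level-1 client might look like this in Python (the
gateway hostname and pager ID are placeholders, not a real service):

# Minimal SNPP (RFC 1861, level 1) client sketch.  The gateway host and
# pager ID below are placeholders.
import socket

def snpp_page(host, pager_id, message, port=444):
    sock = socket.create_connection((host, port), timeout=30)
    f = sock.makefile("rwb")

    def expect_2xx(cmd=None):
        if cmd is not None:
            f.write(cmd.encode("ascii") + b"\r\n")
            f.flush()
        line = f.readline().decode("ascii", "replace").rstrip()
        if not line.startswith("2"):       # 2xx means success
            raise RuntimeError("SNPP error: " + line)

    expect_2xx()                    # 220 greeting from the gateway
    expect_2xx("PAGE " + pager_id)  # who to page
    expect_2xx("MESS " + message)   # what to say
    expect_2xx("SEND")              # queue it for delivery
    expect_2xx("QUIT")              # 221 goodbye
    sock.close()

snpp_page("snpp.example.net", "5551234", "core router unreachable")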

These days, we're seeing that problem with our pager service, where the
pager is a confirmed delivery pager, like the PF1500.  In this model, the
pager network knows where it last saw the pager, so there's no multistate
or nationwide broadcasting of pages - the local tower speaks to the pager,
which confirms.  If it fails to confirm, the network queues the message,
and when the pager reappears, rebroadcasts.  This even handles the case
where the tower is too distant to hear the pager, since the page is still
sent in the last seen area.
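
Modeled as a toy, the confirmed-delivery behavior amounts to this (an
illustration of the semantics described above, not how the paging
network is actually built):

# Toy store-and-forward model of a confirmed-delivery pager network:
# pages that aren't ACKed are queued and replayed when the pager is
# heard from again.  Illustration only.
from collections import deque

class ConfirmedPager:
    def __init__(self):
        self.queue = deque()     # pages the pager hasn't confirmed yet
        self.in_range = False    # did the last-seen tower hear an ACK?

    def page(self, msg):
        if self.in_range:
            print("delivered:", msg)
        else:
            self.queue.append(msg)   # hold until the pager reappears

    def pager_heard(self):
        self.in_range = True
        while self.queue:            # replay, oldest first
            print("redelivered:", self.queue.popleft())

p = ConfirmedPager()
p.page("NOC: link down")     # sent while in an RF-nasty basement: queued
p.pager_heard()              # walk outside: the queued page arrives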

Unfortunately, we've noticed a degradation in service quality on the part
of the paging network, with problems ranging from busies on the TAP dial
pool, to other really stupid stuff.  It used to be that I could be in a
basement or other RF-nasty environment, come on out, and pages would be
retransmitted to me within a few minutes.  Now, I can drive around areas
near towers, not get pages, or, for more fun, and this is great, get near
a different tower, get *new* pages, followed an hour or two (or twelve)
later by *old* pages.

I think I mostly despise the UI on the PF1500 anyways.  I'd rather be able
to dismiss a page with a single keystroke, and overall I preferred the way
the Mot Adv Elite used to work.

Anyways, this is an interesting and useful topic, which I'm watching with
some interest.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: How should ISPs notify customers about Bots (Was Re: DNS Hijacking

2007-07-24 Thread Joe Greco

 On Mon, 23 Jul 2007, Joe Greco wrote:
Yes, when there are better solutions to the problem at hand.
  
   Please enlighten me.
 
  Intercept and inspect IRC packets.  If they join a botnet channel, turn on
  a flag in the user's account.  Place them in a garden (no IRC, no nothing,
  except McAfee or your favorite AV/patch set).
 
 Please do this at 1Gbps, really 2Gbps today and 20gbps shortly, in a cost
 effective manner.

M... okay.  Would you like solution #1 or solution #2?  (You can pay
for solutions above and beyond that)

Solution #1:  you know you need to intercept irc.vel.net, so you inject
an IGP /32 route for the corresponding IP address, and run it through your
IDS sensor.  Now, you're not going to be able to claim that you actually 
have even 100Mbps of traffic bound for that destination, so that's well
within the capabilities of modern IDS systems.  This has the added benefit
of not hijacking someone's namespace, AND not breaking normal
communications with the remote site.

Bonus points for doing it on Linux or FreeBSD and selectively port
forwarding actual observed bot clients to a local cleansing IRCD, then
mailing off the problem IP to support so they can contact the customer 
about AV and bot cleansing software, etc.
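
A sketch of what the inspection half might look like, using scapy (the
server address and channel names are placeholders, and this assumes the
/32 detour above already steers the traffic past this box):

# Sketch of the IDS half of solution #1: watch traffic headed for the
# rerouted /32 and flag sources that JOIN a known botnet channel.
# Server IP and channel names are placeholders; needs scapy and root.
from scapy.all import sniff, IP, Raw

IRC_SERVER = "192.0.2.10"                      # the /32 you injected
BAD_CHANNELS = (b"#botnet-cc", b"#exploited")  # hypothetical C&C channels

def inspect(pkt):
    if not pkt.haslayer(Raw):
        return
    data = pkt[Raw].load
    if b"JOIN" in data and any(ch in data for ch in BAD_CHANNELS):
        # Here you'd set the flag on the account / drop them in the garden.
        print("suspect bot:", pkt[IP].src)

sniff(filter="tcp and dst host %s and dst port 6667" % IRC_SERVER,
      prn=inspect, store=False)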

Oh, you were going to claim that your routers can't handle a few extra /32
routes?  I guess I have no help for you there.  You win.  So on to #2.

Solution #2: You really can't handle a few extra /32 routes?  Then go
ahead, and hijack that DNS, but run it to a transparent proxy server
that is in compliance with the ideas outlined above.

Cost effective?  One FreeBSD box, some freely available tools, and some
time by some junior engineer with a little IRC experience, and you have
a great tool available, which doesn't inconvenience legitimate users but
does accomplish *MORE* than what Cox did.

 Please also do this on encrypted control channels or
 channels not 'irc', also please stay 'cost effective'.

So I'm supposed to invent a solution that does WAY MORE than what Cox 
was trying to accomplish, and then you'll listen?  Forget that (or
pay me).

 Additionally,
 please do NOT require in-line placement unless you can do complete
 end-to-end telco-level testing (loops, bit pattern testing, etc), 

Ok.

 also
 it'd be a good idea to have a sensible management interface for these
 devices (serial port 9600 8n1 at least along with a scriptable
 ssh/telnet/text-ish cli).

Ok.

 Looking at DPI (which is required for your solution to work) you are still
 talking about paying about 500k/edge-device for a carrier-grade DPI
 solution that can reliably do +2gbps line-rate inspection and actions.

Yeah, I see that.  Not.  (I do see your blind spot, though.)

 This quickly becomes non-cost-effective if your network is more than 1
 edge device and less than 500k customers... Adding cost (operational cost
 you can only recover via increased user fees) is going to make this not
 deployable in any real network.

Uh huh.

  Wow, I didn't even have to strain myself.
 
 sarcasm aside, this isn't a simple problem and at scale the solutions
 trim down quickly away from anything that seems 'great' :( using DNS
 and/or routing tricks to circumvent known bad behaviours are the only
 solutions that seem to fall out. Yes they aren't subscriber specific, but
 you can't get to subscriber specific capabilities without a fairly large
 cost outlay.

That's not true.

The problem is isolating the traffic in question.  Since you DO NOT HAVE
GIGABITS OF TRAFFIC destined for IRC servers, this becomes a Networking
101-style question.  A /32 host route is going to be effective.
Manipulating DNS is definitely the less desirable method, because it has
the potential for breaking more things.  But, hey, it can be done, and
with an amount of effort that isn't substantially different from the
amount of work Cox would have had to do to accomplish what they did.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: How should ISPs notify customers about Bots (Was Re: DNS Hijacking

2007-07-24 Thread Joe Greco

 On 7/24/07, Joe Greco [EMAIL PROTECTED] wrote:
  The problem is isolating the traffic in question.  Since you DO NOT HAVE
  GIGABITS OF TRAFFIC destined for IRC servers, this becomes a Networking
  101-style question.  A /32 host route is going to be effective.
  Manipulating DNS is definitely the less desirable method, because it has
  the potential for breaking more things.  But, hey, it can be done, and
  with an amount of effort that isn't substantially different from the
  amount of work Cox would have had to do to accomplish what they did.
 
 Yup - though I still don't see much point in special-casing IRC.

This is probably true.  However, in this case, apparently Cox felt there
was some benefit to tackling this class of bot.

My guess would have been that they were abandoned, and as such, there
wouldn't have been much point to doing this.  However, maybe that wasn't
the case.

 It
 would probably be much more cost effective in the long run to have
 something rather more comprehensive.

Sure, but that actually *is* more difficult.  It isn't just a technical
solution.  It has to involve actual ongoing analysis of botnets, and how
they operate, plus technical countermeasures.  Are there ISP's who are
willing to devote resources to that?

 Yes there are a few bots around still using IRC but a lot of them have
 moved to other, better things (and there's fun headless bots too,
 hardcoded with instructions and let loose so there's no CC, no
 centralized domain or dynamic dns for takedown.. you want to make a
 change? just release another bot into the wild).

Hardly unexpected.  The continuing evolution is likely to be pretty 
scary.  Disposables are nice, but the trouble and slowness in seeding
make them less valuable.  I'm expecting that we'll see
compartmentalized bots, where each bot has a small number of neighbors,
a pseudo-scripting command language, extensible communication ABI to 
facilitate the latest in detection avoidance, and some basic logic to 
seed/pick neighbors that aren't local.  Build in some strong 
encryption, have them each repeat the encrypted orders to their 
neighbors, and you have a structure that would be exceedingly 
difficult to deal with.

Considering how long ago that sort of model was proposed, it is actually
remarkable that it doesn't seem to have been perfected by now, and that
we're still blocking IRC.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: How should ISPs notify customers about Bots (Was Re: DNS Hijacking

2007-07-24 Thread Joe Greco

 On Jul 24, 2007, at 8:59 AM, Joe Greco wrote:
  But, hey, it can be done, and with an amount of effort that isn't  
  substantially different from the
  amount of work Cox would have had to do to accomplish what they did.
 
 Actually, it's requires a bit more planning and effort, especially if  
 one gets into sinkholing and then reinjecting, which necessitates  
 breaking out of the /32 routing loop post-analysis/-proxy.  

Since what I'm talking about is mainly IDS-style inspection of packets,
combined with optional redirection of candidate hosts to a local 
cleanser IRCD, the only real problem is dumping outbound packets 
somewhere where the /32 routing loop would be avoided.  Presumably it
isn't a substantial challenge for a network engineer to implement a
policy route for packets from that box to the closest transit (even
if it isn't an optimal path).  It's only IRC, after all.  ;-)

 It can  
 and is done, but performing DNS poisoning with an irchoneyd setup is  
 quite a bit easier.  

Similar in complexity, just without the networking angle.

 And in terms of the amount of traffic headed  
 towards the IRC servers in question - the miscreants DDoS one  
 another's CC servers all the time, so it pays to be careful what one  
 sinkholes, backhauls, and re-injects not only in terms of current  
 traffic, but likely traffic.

I don't see how what I suggest could be anything other than a benefit 
to the Internet community, when considering this situation.  If your
network is generating a gigabit of traffic towards an IRC server, and 
is forced to run it through an IDS that only has 100Mbps ports, then
you've decreased the attack by 90%.  Your local clients break, because
they're suddenly seeing 90% packet loss to the IRC server, and you now
have a better incentive to fix the attack sources.

Am I missing some angle there?  I haven't spent a lot of time considering
it.

 In large networks, scale is also a barrier to deployment.  Leveraging  
 DNS can provide a pretty large footprint over the entire topology for  
 less effort, IMHO.

Yes, there is some truth there, especially in networks made up of
independent autonomous systems.  DNS redirection to a host would
favor port redirection, so an undesirable side effect would be that
all Cox customers connecting to irc.vel.net would appear to be coming
from the same host.  It is less effort, but more invasive.

 Also, it appears (I've no firsthand knowledge of this, only the same  
 public discussions everyone else has seen) that the goal wasn't just  
 to classify possibly-botted hosts, but to issue self-destruct  
 commands for several bot variations which support this functionality.

The road to hell is paved with good intentions.  The realities of the
consumer broadband scene make it necessary to take certain steps to
protect the network.  I think everyone here realized what the goal of
the exercise was.  The point is that there are other ways to conduct
such an exercise.  In particular, I firmly believe that any time there
is a decision to break legitimate services on the net, that we have an
obligation to seriously consider the alternatives and the consequences.

 [Note:  This is not intended as commentary as to whether or not the  
 DNS poisoning in question was a Good or Bad Idea, just on the delta  
 of effort and other operational considerations of DNS poisoning vs.  
 sinkholing/re-injection.]
 
 Public reports indicate that both Cox and Time-Warner performed this
 activity nearly simultaneously; was it a coordinated effort?  Was this
 a one-time, short-term measure to try and de-bot some hosts?  Does
 anyone have any insight as to whether this exercise has resulted in
 less undesirable activity on the networks in question?

That's a very interesting question.  I would have expected the bots in
question to be idle and abandoned, but perhaps that is not the case.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: How should ISPs notify customers about Bots (Was Re: DNS Hijacking

2007-07-24 Thread Joe Greco

 On Tue, 24 Jul 2007, Joe Greco wrote:
  So I'm supposed to invent a solution that does WAY MORE than what Cox
  was trying to accomplish, and then you'll listen?  Forget that (or
  pay me).
 
 Since it was a false positive, 

Fact not in evidence, as much as it'd be good if it were so.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: How should ISPs notify customers about Bots (Was Re: DNS Hijacking

2007-07-24 Thread Joe Greco

 On Tue, 24 Jul 2007, Joe Greco wrote:
   On Mon, 23 Jul 2007, Joe Greco wrote:
  Yes, when there are better solutions to the problem at hand.

 Please enlighten me.
   
Intercept and inspect IRC packets.  If they join a botnet channel, turn 
on
a flag in the user's account.  Place them in a garden (no IRC, no 
nothing,
except McAfee or your favorite AV/patch set).
  
   Please do this at 1Gbps, really 2Gbps today and 20gbps shortly, in a cost
   effective manner.
 
  M... okay.  Would you like solution #1 or solution #2?  (You can pay
  for solutions above and beyond that)
 
 I tried to be nice and non-sarcastic. I outlined requirements from a real
 network security professional on a large transit IP network. You
 completely glossed over most of it and assumed a host of things that
 weren't in the requirements. I'm sorry that I didn't get my point across
 to you, please have a nice day.

As far as Please enlighten me followed by Please do this at 1Gbps,
really 2Gbps today and 20gbps shortly, in a cost effective manner goes,
I don't consider that to be non-sarcastic.  I consider it to be either
very rude, or perhaps a challenge.  I attempted to reply in an
approximately equivalent tone.

But, now, what exactly did I gloss over?  And what things did I assume
that weren't in the requirements?

It's already been demonstrated that it doesn't need to handle 1Gbps,
2Gbps, or 20Gbps, so those requirements are irrelevant.

You then said:
 Please also do this on encrypted control channels or
 channels not 'irc', also please stay 'cost effective'.

But I'm not about to be trapped into building a solution that does WAY
MORE than what Cox was trying to do.  That it was a requirement from a
real network security professional is not relevant, as we're discussing
ways to accomplish what Cox was trying, without the related breakage.

You further said:
 Additionally,
 please do NOT require in-line placement unless you can do complete
 end-to-end telco-level testing (loops, bit pattern testing, etc),

To which I said: Ok., because my solution meets that measure.  It does
not require in-line placement, condition met.

You went on to say:
 also
 it'd be a good idea to have a sensible management interface for these
 devices (serial port 9600 8n1 at least along with a scriptable
 ssh/telnet/text-ish cli).

And again I said: Ok., because my solution can be built on a FreeBSD
or Linux box, and as a result, you gain those features too.  Condition
met.

And finally, you say:
 Looking at DPI (which is required for your solution to work) you are still
 talking about paying about 500k/edge-device for a carrier-grade DPI
 solution that can reliably do +2gbps line-rate inspection and actions.

And I finally said: Yeah, I see that.  Not.

Because I don't fundamentally believe that you need to do deep packet
inspection of all packets in order to accomplish what Cox was doing.

So what exactly did I assume that wasn't in the requirements (and by
that, I mean the requirements to do what Cox was attempting to do, not
the requirements of some random real network security professional)?

If you really think I glossed over something that was important, then
by all means, point it out and call me on it.  Don't just say HAND.

Part of network engineering is being a little bit clever.  Brute force
is great when you really need to move 20Gbps of traffic.  Any idiot 
can buy big iron to move traffic.  However, putting your face in front
of the firehose is a bad way to take a sip of water.  With that in mind,
I will once again point out that doing the equivalent of what Cox was
trying to do did not /require/ the ability to do deep packet inspection
at 20 gigabits per second, and as a result, I'm exercising my option of
being a clever network engineer, rather than one who simply throws
money and big iron at the problem.

You asked for enlightenment.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: DNS Hijacking by Cox

2007-07-23 Thread Joe Greco

 On Jul 22, 2007, at 6:28 PM, Niels Bakker wrote:
  if you are a cox customer you might want to have a reasoned  
  discussion with them and find out more details and whether you can  
  reach a resolution. if they dont play ball tho you ultimately  
  would have to vote with your $$ and switch..
  This is a ridiculous argument as in many places there is only one  
  game in town for affordable high speed internet for end users.
 
 Yes, but at least the incumbents have their cash cows protected (who  
 me?  cynical?)
 
 However, you don't have to switch providers to run your own caching  
 server.  Unless Cox is intercepting all DNS queries (instead of just  
 mucking about with the caching servers they operate), running your  
 own caching server will likely solve the problem.

I'll accept that argument once you've explained to all your family
members how to do it - and they've actually done it, successfully.

Let's be real now.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: DNS Hijacking by Cox

2007-07-23 Thread Joe Greco

 On Mon, 23 Jul 2007, Joe Greco wrote:
  I'll accept that argument once you've explained to all your family
  members how to do it - and they've actually done it, successfully.
 
  Let's be real now.
 
 If we're going to be real now, consider how rarely ISPs have done this
 over the last several years.
 
 It's very hard to wake the dragon.  Yes, ISPs can do all sorts of awful
 things, but the reality is most of the big ISPs are extremely conservative 
 at taking any steps that disrupts customers traffic.  While they sometimes 
 make a mistake, it takes a lot to get big ISPs to do anything.  Since 2005
 when ISPs started doing this, how many false positives have come up?
 
 I don't think it is real to think big ISPs are going to redirect 
 customer traffic in order to steal customer credit card numbers or destroy
 a competitor.

I can't help but notice you totally avoided responding to what I wrote;
I would have to take this to mean that you know that it is fundamentally
unreasonable to expect users to set up their own recursers to work around
ISP recurser brokenness (which is essentially what this is).

That was my point.

And, incidentally, I do consider this a false positive.  If any average
person might be tripped up by it, and we certainly have a lot of average
users on IRC, then it's bad.  So, the answer is, at least one false
positive.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


Re: How should ISPs notify customers about Bots (Was Re: DNS Hijacking

2007-07-23 Thread Joe Greco

 On Sun, 22 Jul 2007, Joe Greco wrote:
  We can break a lot of things in the name of saving the Internet.  That
  does not make it wise to do so.
 
 Since the last time the subject of ISPs taking action and doing something 
 about Bots, a lot of people came up with many ideas involving the ISP 
 answering DNS queries with the addresses of ISP cleaning servers.
 
 Just about every commercial WiFi hotspot and hotel login system uses a 
 fake DNS server to redirect users to its login pages. 

I think there's a bit of a difference, in that commercial WiFi hotspots
and hotel login systems redirect everything.  Would you truly consider
that to be the same thing as one of those services redirecting
www.cnn.com to their own ad-filled news page?

While I'm not a fan of it, I know that when I go to a hotel, I should
try to pull up www.cnn.com (my test site of choice, precisely because I
visit it so rarely that it won't be sitting in my browser cache).  If I
get CNN, then I'm live.  If I have to click a button and agree to some
terms, then I'm live a bit later.

However, if I were to go to a hotel, and they intercept random (to me)
web sites, I'd consider that a very bad thing.

 Many universities 
 use a fake DNS server to redirect student computers to cleaning sites.

I'm not sure I entirely approve of that, either, but at least it is more
like the hotel login scenario than the hotel random site redirection
scenario.
 
 What should be the official IETF-recognized method for network operators
 to asynchronously communicate with users/hosts connected to the network,
 for purposes such as getting those machines cleaned up?

That's a good question.  It would actually be good to have a system in
place, something competent, instead of the mishmash of broken trash in
use by hotels to log in users, etc.  I'd see it as an overall benefit.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again. - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.

