Re: myth of the great transition (was US Defense Department formally adopts IPv6)

2003-06-19 Thread Peter Deutsch


J. Noel Chiappa wrote:
 
  From: Keith Moore [EMAIL PROTECTED]
 
  The reason that we are explaining (once again) why NAT sucks is that
  some people in this community are still in denial about that
 
 The person who's most in denial around here is you - about how definitively
 the market has, for the moment, chosen IPv4+NAT as the best balance between
 cost and effectiveness.
 
 Get a grip. We all know you don't like NAT. You don't need to reply to
 *every* *single* *message* *about* *NAT* explaining for the 145,378,295th
 time how bad they are.

Legend tells us that Cato, a Roman senator during the Punic Wars, finished
every speech he made in the Senate with the words "Carthage Must Be
Destroyed". It didn't matter if it was a speech about defense, or
monetary policy, or the Roman water works. His one-eyed devotion to this
task was, well, determined. Keith sort of puts me in mind of Cato...


- peterd (CMBD)

PDF: I've decided that as punishment for joining in Yet Another
Flamewar About NATs (YAFAN), I must now append something suitable to
every message.



Re: myth of the great transition

2003-06-19 Thread Peter Deutsch


Keith Moore wrote:
 
   expecting the network
   to isolate insecure hosts from untrustworthy attackers, or more
   generally, to enforce policy about what kinds of content are
   permitted to pass, has always been a stretch.
  
 
  So where do firewalls fit into your picture? Do they represent for the
  network or for the hosts?
 
 I believe the primary purpose of firewalls should be to protect the network,
 not the hosts, from abusive or unauthorized usage.

So for every firewall you purchase and install, you can focus its
configuration and operation on protecting the network from your users. I
trust you agree that it's appropriate for the rest of the world to be
free to make similar decisions in what they choose to be their own
perceived best interest? And I hope you'll not be *too* surprised when
the vast majority decide that protecting their own machines from the
network is more important than protecting the network from their own
machines...



- peterd (CMBD)



Re: The utilitiy of IP is at stake here

2003-06-01 Thread Peter Deutsch
g'day,


Anthony Atkielski wrote:
...
  Someone she corresponds with blasts an email to
  a bunch of folks leaving all addresses exposed,
  and one of the addressees does some action which
  exposes the email to a spammer's harvesting process?
 
 This is getting more and more farfetched.

Oh, really now...

Anthony, please don't take this the wrong way, but it's really starting
to look either like you don't know enough to extrapolate out from your
own experience, or you're just trolling.

Consider the following randomly chosen message from Dave Farber's IP
list from earlier today:


--8<--8<--8<--8<--8<--
  cut here    cut here    cut here    cut here    cut here


Subject: [IP] more on Stopping spam isn't as easy as you might
hope
   Date: Sat, 31 May 2003 13:07:01 -0400
   From: Dave Farber [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]



Date: Sat, 31 May 2003 12:49:42 -0400
From: Meng Weng Wong [EMAIL PROTECTED]
Subject: Re: [IP] Stopping spam isn't as easy as you might hope
To: [EMAIL PROTECTED]
Cc: Dave Farber [EMAIL PROTECTED]

 Date: Sat, 31 May 2003 04:01:08 -0400
 From: John R Levine [EMAIL PROTECTED]
 
 The social problem with designated sender is that there are plenty of
 perfectly legitimate reasons for mail from a domain to originate someplace
 other than its home network.  Lots of people maintain accounts at Yahoo or
 other free mail providers, but send mail with their Yahoo address from
 their home ISP using the ISP's mail server.

MUAs should add a configuration field to distinguish header From:
vs. envelope from.  That solves this problem.  If they choose not to
do this, they should send mail through Yahoo's web interface.  That's
a fair constraint.  Yahoo gives them free email; in return, they're
supposed to give Yahoo their eyeballs.
...

--8<--8<--8<--8<--8<--
  cut here    cut here    cut here    cut here    cut here

Observe that this message contained not one, but two additional email
addresses besides Dave's and the list address. Note also that the folks
over there are pounding on this problem as well (and for those few
who've been out of the galaxy, Dave's list is a distilled collection of
posts, most of which are forwarded to him, so almost every posting Dave
sends on has at least one email address in it. Some, including this
randomly chosen example, have more.)


Note, there's nothing special about the IP list in this case, I just
used it because Dave's list is the next one in my mailbox after the IETF
list, so I didn't have to go far to find a refutation of your claim.
Most Usenet groups would probably turn up an example or two, if you
bothered to go look. Note also that as a courtesy, I've blanked the
domain names in this example, but this is a formality, since this list
is available in a public archive, so you don't even need to subscribe to
harvest it...


  Or more explicitly, someone she knows copies her
  in a post to a mailing list which is being harvested.
 
 A list to which she doesn't belong?  Again, this seems unlikely.

see above...

  The point being that it isn't difficult to end up
  in the spammer's email address lists.
 
 I have quite a few addresses that remain untouched.  Only the ones for which
 an obvious harvesting path exists have received spam.

Please stop thinking the entire world is just like you. It's not, so
behaving like it is can be quite counterproductive (not to mention
downright harmful if engineering decisions actually get made based upon
your ignorance).

Put another way, I'm happy for you that spam is not yet a major problem
for you. It's a major problem for lots of people on this list. More
importantly, it's perceived to be a growing problem for the rest of the
Internet, for which a solution will be needed in the not too distant
future, so people on this list are discussing the near future, not
*your* particular present reality. Your "but it's not a problem for me"
reaction is more than distracting, it's downright counterproductive in
this context.



- peterd



-- 
-
Peter Deutsch   [EMAIL PROTECTED]
Gydig Software

There are more things in heaven and earth, Horatio, Than are
 dreamt of in your philosophy. 

- Hamlet, Act I, Scene V
 
-



Re: The utilitiy of IP is at stake here

2003-05-31 Thread Peter Deutsch


Paul Hoffman / IMC wrote:
...
 So far on this thread, we have heard from none of the large-scale
 mail carriers, although we have heard that the spam problem is
 costing them millions of dollars a year. That should be a clue to the
 IETF list. If there is a problem that is affecting a company to the
 tune of millions of dollars a year, and that company thinks that the
 problem could be solved, they would spend that much money to solve
 it. Please note that they aren't.

Well, perhaps it's more accurate to say "if they thought it could be
solved by working with all those nice and enthusiastic folks on the IETF
general discussion list"... ;-)



 I have spoken to some of these heavily-affected companies (instead of
 just hypothesizing about them). Their answers were all the same: they
 don't believe the problem is solvable for the amount of money that
 they are losing. They would love to solve the spam problem: not only
 would doing so save them money, it would get them new income. Some
 estimate this potential income to be hundreds of millions of dollars
 a year, much more than they are losing on spam. But they believe that
 the overhead of the needed trust system, and the cost of losing mail
 that didn't go through the trust system, is simply too high.
 
 You might disagree with them, and based on that disagreement you
 might write a protocol. But don't do so saying the big carriers will
 want this without much more concrete evidence as to their desires.

Paul, are you aware of any concrete numbers here? I've looked through
the IMC site, but the only references to cost seem to be in a report
from the late 90's, with no hard data. If not, this might be something
the IMC could consider pulling together? I'd agree that there's way too
much hand-waving going on here on this point...


- peterd

-- 
-
Peter Deutsch   [EMAIL PROTECTED]
Gydig Software

Angels fly because they take themselves so lightly

   - G.K. Chesterton  
-



Re: The utilitiy of IP is at stake here

2003-05-31 Thread Peter Deutsch
g'day,

Paul Hoffman / IMC wrote:
 
 At 10:40 AM -0700 5/30/03, Peter Deutsch wrote:
 Paul Hoffman / IMC wrote:
 ...
 Well, perhaps it's more accurate to say if they thought it could be
 solved by working with all those nice and enthusiastic folks on the IETF
 general discussion list... ;-)
 
 We disagree here. For the millions of dollars that they are losing,
 they would come up with the solution with the IETF or not. They
 haven't.
... 
 Again, the summary is that these folks are hurting badly enough to
 throw highly-qualified full-time staff on the problem, and they don't
 believe any of the solutions that have been presented so far will
 save them enough money. If they thought differently, they would have
 deployed them by now so that they could save those millions of
 dollars.

Then we're actually in agreement. What I was trying to point out was
that these folks are spending money on the problem, but aren't trying to
engineer the solution on the IETF general mailing list (the implication
being that we probably shouldn't be trying to do this either). And yes,
I know I'm one of those who's been guilty of this, although my
motivation was more to change the direction of thought than to bake a
solution here.

Thanks for what info you provided, although I echo DaveC's request for
whatever additional info you could manage. I think this sort of thing
provides a welcome dose of reality in the debate. Any solution whose
costs outweigh its benefits isn't likely to thrive, whether that's the
cost to an individual or to a major ISP. Having such real data helps
constrain the engineering usefully, to say the least, given the amount
of anecdotal "but I haven't seen that" going on.


- peterd



-- 
-
Peter Deutsch   [EMAIL PROTECTED]
Gydig Software

   The invisible and the non-existent look very much alike. 

-- Delos McKown
-



Re: The utilitiy of IP is at stake here

2003-05-30 Thread Peter Deutsch
g'day,

Tony Hain wrote:
 
 Alain Durand wrote:
  I tend to agree with Dave Crocker, getting 100+ million
  users to upgrade to SMTPng is not going to be any easier than
  getting them to move to IPv6... It will also suffer from the
  second design syndrome. I will not fool myself and believe it
  can happen overnight
 
 In this case, I disagree. Yes SMTP will have to exist for some time to
 come, but it wouldn't take much to convince people that moving to a new
 mail system would either reduce spam, or had adequate mechanisms for
 financial recourse. If the courts routinely granted judgments to
 individuals of 100 $/euro for every received unsolicited message, people
 would jump at the chance to run the new mail tool, and spam as we know
  it would lose its economic viability. Making that work means absolute
 traceability of the message origin.
 
  For this effort to be effective, I think it will have to be
  done in a way that is at odds with the traditional IETF thinking:
 
  1) Compatibility with SMTP is not desirable
  2) Some form of privacy is not desirable
   3) Too much scalability is not desirable
 

Sorry, guys, I don't see this one taking wing. I'd agree that many of us
would jump at the chance to receive the occasional $100 gratuity, but
far fewer would want to sign up for the corollary, a system in which you
willingly and consciously abandon all hopes for privacy and anonymity. I
think the issue of preserving privacy will be a major one for us all in
the coming years, so starting the design of a new system with the axiom
that privacy is not desirable seems, well, I find it hard to describe
without being either flip or rude.

I personally want a next generation system that would *increase* my
privacy, not attempt to make a virtue out of *removing* the few shreds
of anonymity I have left. I would specifically refuse to use such a
system. And yes, I also want it to make unsolicited, bulk email harder
to send to me, but not at the cost of my privacy.

As I've already pointed out, I think we need to have another look at the
problem definition before we get too far down the design path. For
example, virtually every posting on this topic over the past few days
seems to be labouring under the assumption that the spammer wants to
trigger a commercial exchange of some sort with the recipient (with the
corollaries that the commercial entities can be traced, they will allow
you to impose costs upon them as a cost of doing business, etc). From
looking at a lot of the crap I'm getting, I'd say that a certain
percentage of it has no reasonable expectation that I'll react to it at
all (e.g. the Portuguese-language spam, the spam containing viruses, the
spam containing random strings of junk which I assume might help it get
past spam filters, but which guarantee that I won't take the sender
seriously as someone I'd be willing to share my credit card with,
etc). 

Here's a radical thought: what if some percentage of this problem is
simply economic terrorism and random script kiddies doing the equivalent
of scribbling on the walls and tagging the billboards? No amount of
legislated Subject lines, protocol design and/or education will solve
that problem. In case you missed it, graffiti is already illegal, but
it hasn't been eliminated by legislation.

Maybe somebody should get some foundation to fund a study to trace a pile
of this stuff to its roots and do some statistically valid analysis on
its origins, goals, etc. Otherwise, we seem to be in grave danger of
designing a system (spam control) without ever talking to its users (the
spam generators). Sounds like a recipe for disaster to me...

- peterd



-- 
-
Peter Deutsch   [EMAIL PROTECTED]
Gydig Software

Bungle...
   That's an 'i', you idiot...
  Oh, right. 'Bingle...

- Red versus Blue...

-



A peer-to-peer trust system model (was: Re: spam)

2003-05-29 Thread Peter Deutsch
 to see where the discussion  goes once the flames die
down

As I said, I've done some digging and found nothing exactly like this,
but Paul's casual remark suggests I'm missing something basic in the
literature (admittedly I haven't done an exhaustive search yet).
Pointers to the obvious work, or pointers to the obvious holes, would be
most welcome. And of course, pointers to the best mailing list for
follow-ups are probably a *really* good idea



- peterd





-- 
-
Peter Deutsch   [EMAIL PROTECTED]
Gydig Software

Bungle...
   That's an 'i', you idiot...
  Oh, right. 'Bingle...

- Red versus Blue...

-



Re: A peer-to-peer trust system model (was: Re: spam)

2003-05-29 Thread Peter Deutsch
g'day,

Einar Stefferud wrote:
 
 Hello  Peter --
 
 I hate to be the one to tell you that the following is provably false:
 
 The unlying (sic) assumption here is that trust is a transitive relationship,
 
 Which leaves a bit of a gaping hole in your entire logical build...

Not at all, since the assumption of transitive trust is used merely to
prime the pump. Once you start to develop evidence that disagrees with
your assumptions, you are expected to change your trust rules
accordingly. That's actually the heart of the system.

For example, I might start off by trusting mail from a particular
mailing list, and all its participants (say, anyone from my family
mailing list). I would then accept trust tokens from anyone who submits
a valid token from anyone on that mail list. Of course, if anyone used
such a token to feed me spam, I'd hit the "Junk This" button on my MUA,
which would in turn tell my MTA to remove both the sender and that trust
token from my trusted list.

Put simply, I'd use a rule that says something like "fool me once, shame
on you; fool me twice, shame on me."

Note that this wouldn't prevent any of the folks on that mailing list
from reaching me, it would only prevent my MTA from trusting the
offender's token in the future. You could even tune that by putting
additional policy info in the trust token (you could put in a degree of
trust number, indicating how well you know the bearer, for example).
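
To make that concrete, here's a rough Python sketch of what the
recipient-side bookkeeping might look like. The names (TrustStore,
prime, junk_this) and the idea of keying tokens by the issuer's address
are purely illustrative assumptions on my part, not any existing MUA or
MTA interface:

  # Illustrative only: a toy recipient-side trust store, not a real MTA hook.
  class TrustStore:
      def __init__(self):
          self.tokens = {}        # token issuer -> degree of trust (0..1)

      def prime(self, members, degree=1.0):
          # Prime the pump: trust tokens issued by anyone on, say,
          # the family mailing list.
          for issuer in members:
              self.tokens[issuer] = degree

      def accept(self, sender, token_issuer):
          # Accept mail if the sender bears a token from someone we trust.
          return self.tokens.get(token_issuer, 0.0) > 0.0

      def junk_this(self, sender, token_issuer):
          # "Fool me once": revoke both the sender and the token they used.
          self.tokens.pop(sender, None)
          self.tokens.pop(token_issuer, None)

  store = TrustStore()
  store.prime(["mom@example.org", "sis@example.org"])
  store.accept("stranger@example.net", "mom@example.org")     # True
  store.junk_this("stranger@example.net", "mom@example.org")  # spam: revoke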


Now, suppose I wanted to send mail to Paul Vixie. I might just try to
send him mail, but from recent experience, I would expect that to go
something like this: "Hi, Paul!", "Mail System Error - Returned Mail."

Hmmm...

So, my MTA checks Paul's list of trusted buddies in the new, improved
DNS++, but doesn't recognize anyone in the list as somebody who's issued
me a trust token recently. So, off it goes to the Token Oracle, and asks
her for a trust path between myself and Paul Vixie (trust me, this can
be done. I have a proof of this, but the margins of my screen are too
small to contain it. It's enough for the purposes of this exposition to
note that this is something that can be precomputed so it can be
obtained somewhat efficiently).

So, back comes the Oracle, with the path:

  Peter Deutsch -> Einar Stefferud -> Randy Bush -> Paul Vixie


In other words, there is a trust chain from Einar Stefferud (who trusts
me), to Randy Bush (who trusts Einar), to Paul Vixie (who trusts Randy).

Well, that's okay then, since I have a trust token from Einar Stefferud,
because I earned a trust token from you last week and you'd kindly
supplied me with one. Okay, so my MTA again contacts Paul's MTA and
offers it the trust token I have from you, as well as the trust chain.
Now, Paul can elect to accept mail from me, since the path checks out
and the token's good, and we'd be in business. Parenthetically, his MTA
would add the trust token from Einar Stefferud to his keychain for the
next time somebody comes a'calling.
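
The chain check itself is mechanical. A toy version (the Oracle, the
path representation and the trusts() relation are all assumptions I'm
making up for illustration, not a protocol) might look like:

  # Illustrative only: verifying an Oracle-supplied trust chain.
  def chain_is_valid(path, trusts):
      # path runs sender -> ... -> recipient; each hop must trust the
      # one immediately before it.
      return all(trusts(later, earlier)
                 for earlier, later in zip(path, path[1:]))

  # Toy "who trusts whom" relation standing in for tokens fetched from
  # the new, improved DNS++.
  edges = {("Einar", "Peter"), ("Randy", "Einar"), ("Paul", "Randy")}
  trusts = lambda holder, subject: (holder, subject) in edges

  if chain_is_valid(["Peter", "Einar", "Randy", "Paul"], trusts):
      keychain = {"Einar"}   # Paul's MTA caches Einar's token for next time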

Of course, if Paul reads my mail and decides that I really am as much of
a bozo as he'd feared, he's free to hit *his* "Junk This" button. This
would revoke my credit, and your trust token to me in his eyes, so he's
free to go back and finish reading the IETF mailing list without any
further direct interruption from me. If I really want to reach him again,
I could try to find other paths from the tokens I've got left, until
either I've used up all my friends and acquaintances in a vain attempt
to get Paul's attention, or perhaps until I finally (through constant
allusions to Tom Lehrer) convince Paul Vixie that I'm not so bad after
all (heck, he says, "this guy's a dope, but I do like 'Poisoning
Pigeons in the Park'"...)


So, trust can be assumed to be transitive to prime the pump. Where you
find that this assumption is not valid, you can use the evidence that
it's not to tune and adjust your list of trusted sources. It's this
tuning over time that would make them more effective and lead to the
predicted success of the technique. 


As a final observation, the transitive nature of the trust is not the
key part of the system. To me, it's the ability to put policy decisions
in the hands of the recipient based upon past experience with trusted
sources, without having those trusted sources participate in the
interaction in real time. This seems to offer simplicity and scaling,
and means we can build this beast and get it out without requiring such
things as a single globally populated PKI, or universal takeup on the
scheme (the degenerate case is to accept everything, as folks do today
- the benefits accrue to the participants proportional to their
participation, but it begins paying off the first time you reject an
unknown sender without a trust token).


So, in summary, trust may not be transitive, but it makes a useful axiom
to kick things off. To paraphrase somebody's point a few hundred
postings ago, something can be an axiom without being true... ;-)


- peterd

Re: A peer-to-peer trust system model (was: Re: spam)

2003-05-29 Thread Peter Deutsch
g'day,

Oops, bad form to follow up on your own posts, but I just want to make
sure I'm on record as being the first to notice that this is really just
another instantiation of the "Six Degrees of Kevin Bacon". In honour of
this observation, my current working name for this system is "Bacon"
(for the hopefully obvious reason).


I wrote:

 So, back comes the Oracle, with the path:
 
   Peter Deutsch -> Einar Stefferud -> Randy Bush -> Paul Vixie

Sorry Randy, I'm going to drop you from the example. I think it's
funnier if it reads:


   Peter Deutsch -> Einar Stefferud -> Kevin Bacon -> Paul Vixie



And if you don't get this, go read:

   http://www-distance.syr.edu/bacon.html



- peterd


-- 
-
Peter Deutsch   [EMAIL PROTECTED]
Gydig Software

Bungle...
   That's an 'i', you idiot...
  Oh, right. 'Bingle...

- Red versus Blue...

-



Re: please move the spam discussion elsewhere

2003-05-27 Thread Peter Deutsch
g'day,

Hopefully, I can eventually control my fascination with this particular
blinking light and stop feeding this particular troll. Meanwhile, I do
feel that this particular consciousness-raising effort has value for the
members of this list, since, as Paul Vixie has pointed out, it may be time
to revisit the group's earlier decision to avoid tackling the problem.


So here I go again, picking up points from multiple postings:

Anthony Atkielski wrote:
 
 Right now, there is probably no other greater problem with the Internet than
 spam.  That sounds more than important enough to justify discussion here.
 You can delete any messages that mention spam, if you want.

To our credit, everyone involved in this thread (until this mutation
appeared) seems to have properly appended the appropriate Subject "Re:
spam", making automatic spam filtering trivially easy. ;-)



and:

 What's your point?
 
 Sysadmins are supposed to be past the learning curve already.

At the risk of repeating myself, economists will tell you that it is at
the margin that marginal costs are calculated. Those who would keep
repeating the mantra that email costs only $1 or $2 a month are IMHO
entirely missing the point.

Sure, these poor deluded folks should just abandon any attempt at
operating a school network or providing their community with anything
but barebones commercial services, but they chose to push the envelope a
bit, using donated equipment, part time staff and volunteers to stretch
the budget. There are a lot of folks at the periphery who are living
this story, whether in the developing world, or in non-commercial
settings. Shame on those who would consume their precious resources or
divert their attention from other, more worthy activities.

The guy helping to hold it all together in this example (and I used it
only as an illustrative example) is actually pretty good at teaching
primary school kids basic computer skills, and can certainly deliver
basic services such as a functioning file server with his extra time,
but he's ill-equipped to handle mailstorms, Denial of Service attacks or
other infrastructure problems. The system relies on folks like me for
escalation support, and it usually works fine. That's why I get so
irritated when folks like you try to minimize the costs of spam in this
thread. It really shouldn't be your call to tell them that they're
playing out of their league.

In this case, the guy called me in to investigate a sluggish,
non-responsive Windows NT file server that we discovered was being
bombarded by repeated email connections from overseas, since it had
somehow got onto some spammers' list of relay hosts. Note, this was a
system he inherited, and it's not primarily a mail host, so "go sign up
for a $1 email account" really doesn't solve the underlying problem
here. After all, we all get stuck with legacy systems support at some
point, non?

In this case, we closed it up and did what we could to fix the
overflowing disk problems, etc, but the connection requests continue. So
your answer to the school would appear to be to shut it all down,
or fire the computer teacher because he can't solve this problem on his
own? That's just silly...


and:

  usually it takes a week, but you did it all in
  36 hours.  congrats?
 
 I'm afraid I don't understand.

Here I'm going to agree with you. I do this consciously, and with the
expectation that this thread is starting on its death spiral. Can
somebody please make a reference to Adolf Hitler, so we can all declare
victory and go home?


- peterd (cue the Monty Python SPAM song...)



-- 
---
   Peter Deutsch   [EMAIL PROTECTED]
   Gydig Software


  Don't get me wrong: Emacs is a great operating system -
   it lacks a good editor, though.

- Thomer M. Gil
--



Re: spam

2003-05-27 Thread Peter Deutsch
g'day,

Dean Anderson wrote:
...
 In other words, cost plays a big part in the decision.  But as has been so
 roundly demonstrated, the cost associated with email is practically
 non-existent, and does not ever cost any user more than $1 or $2 per
 month, which they pay for email services.

Dammit, I'm trying to be good but you insist on repeating this canard,
despite the fact that the vast majority of folks have *not* agreed with
you, and some of us have specifically challenged (I won't be so
presumptuous as to say refuted) your claim.

Gas in Southern California can currently be had for as little as $US1.61
a gallon. It's possible to buy a 200 Gig hard drive from Fry's in
California for about $US140. It's possible to buy advertising-supported,
limited storage public email accounts for a few dollars a month (heck,
you can even get free accounts at no cash outlay to you).  This does
not mean that a car that gets 30 miles per gallon costs 5.3 cents a
month to run; this does not mean you can build a commercial quality file
server for under $200; And it most certainly does not mean that email
costs anyone only $1 or $2 per month.

If you continue to repeat this claim using such words as "roundly
demonstrated", I will conclude that you are either a poor engineer or a
troll. In either case, you will eventually provoke taunts and jeers from
the audience, and we're trying very hard to raise the tone of this
place. So please, cease and desist such activity at once.

And even if you can't master engineering math, you really should
consider learning some basic economics. If nothing else, it might be
useful to you in balancing your checkbook.


 This is substantially different from the case with faxes. And it does seem
  to be a valid argument that if technology does eliminate the burdens
 imposed, then the junk fax law could be reversed.
 
 In the case of spam, there is no cost _shifting_ whatsoever, since by
 definition, everyone pays their own way. Even spammers.  Unlike faxes, the
 receipt of a spam does not increase the cost of the recipient's email.
 Email is usually fixed cost, and flat rate.  Even when one pays by the
 octet, the cost of a spam is in the millionths of a cent, which I think
 is less than the cost to carry out the trash of one junk postal mail.

As we say in French, "Ça, c'est des horse patooties." Again, the cost of
email is not merely the cost of the physical file storage; you need to
consider the cost of your time processing it, the cost of time spent
dealing with such things as denial of service attacks, the opportunity
costs paid when folks hijack resources from their legitimate purpose and
so on. There is also a social cost when, for example, parents must
forgo having their children use email because of fears they will be
exposed to the most egregious porn and violence. Ask for, or help
develop, metrics for measuring such costs, but please do not deny their
existence.

The cost of physical delivery and storage are but a small part of the
total cost of ownership here. By ignoring all but the upfront costs, it
looks suspiciously like you're trying to lend legitimacy to odious
practices by sleight-of-hand mathematics. Shame on you...



- peterd

-- 
-
Peter Deutsch   [EMAIL PROTECTED]
Gydig Software

Bungle...
   That's an 'i', you idiot...
  Oh, right. 'Bingle...

- Red versus Blue...

-



Re: spam

2003-05-27 Thread Peter Deutsch
g'day,

Vernon Schryver wrote:
 
  From: Peter Deutsch [EMAIL PROTECTED]
 
  ...
  despite the fact that the vast majority of folks have *not* agreed with
  you, and some of us have specifically challenged (I wont be so
  presumptuous as to say refuted) your claim.
 
 That's nonsense.  There have been repeated statements to the effect
 that doubling the size of a large mail system to deal with a doubling
 of spam is very expensive.  That does not conflict with claims that
 the cost to provide mail service for a single user is low.

Actually, I didn't say "the cost to provide mail service for a single
user is low", since I would interpret that to mean you were talking only
about the hardware cost of the mail hosting system. In fact, the line I
was responding to was:

# the cost associated with email is practically non-existant,
# and does not ever cost any user more than $1 or $2 per month, 

I have been trying to make the point (albeit unsuccessfully, it seems)
that if you want to talk about "the cost associated with email" you need
to look at the total cost of ownership for processing email, and that
this is dominated by human costs, especially at the margins of the
network. 

Maybe I should have written something like: "The cost of physical
delivery and storage are but a small part of the total cost of ownership
here." Oh, wait - I did; that was the bit you left off. ;-)

Somebody did a calculation assuming a person who bills out at $60/hour.
In fact, it's not outlandish for a consultant or other professional to
bill out at $150 to $250 an hour, which means that one hour of
spam-related lost time a year would cost you as much as $20 per month.
If that's all the time you spent processing spam, it would translate to
300 seconds of lost time a month, or 10 seconds a day for those of us
who read email every day. Adjust your constants and multiply by whatever
fudge factor suits your usage pattern; the point is that the real cost
of spam is far more than the cost of the email account, and human costs
would appear to dominate the equation.
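
If you prefer it in code form, here's the back-of-envelope version in
Python; the rates are just the ones quoted above, not measurements:

  # Back-of-envelope arithmetic for the numbers above.
  rate_per_hour = 250.0          # high-end consultant billing rate
  lost_hours_per_year = 1.0      # one hour of spam handling per year

  cost_per_month = rate_per_hour * lost_hours_per_year / 12     # ~20.83
  seconds_per_month = lost_hours_per_year * 3600 / 12           # 300.0
  seconds_per_day = seconds_per_month / 30                      # 10.0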

*That* is why I object to statements like "If you pay $1 per month for
email, and get 50 messages per day, then a spam costs you $0.000."
That's no more true than that gas at $1.61 means your car costs you 5.3
cents a mile to run (yeah, I know, somebody already pointed out my typo
on that one in a previous message... :-)


...
 Yes, the costs of spam are mostly in what it costs in human time.
 
 However, the fact that the human costs of spam are large do not justify
 your canards and other nonsense about the costs of bandwidth, CPU
 cycles, disk space, and even human system administrator time to deal
 with spam.

Actually, others may have talked about the cost of bandwidth and CPU,
but I don't think I did. I *did* cite a real example of lost time and
effort tracking down a problem in an underfunded school as an
illustrative example, but it was not a canard ("an unfounded or false,
deliberately misleading story"). The school was St. Joseph's School,
Mountain View, CA and I was the volunteer who donated his time in lieu
of payment. IIRC, the school would bill me $10/hour for unfulfilled
volunteer requirements at the end of the school year, so I think it
fair to assign an economic value of $10/hour to this volunteer time,
even though you probably couldn't hire a good sysadmin in Mountain View
for $10/hour and I certainly bill my time out at more than that for
consulting work.

So forget the staff member's time for the moment: I personally donated
volunteer time to determine why the machine was sluggish and to take
some remedial cleanup steps. I did this in lieu of a $10/hour payment
which I would otherwise have had to make, and having donated my time, I
therefore didn't donate my time to do something else I could have done
for the school. Sure, I did other stuff on that trip, but I have no
trouble saying that this incident cost the school about $10 in
opportunity cost. They fold it into their overhead, and get that much
less stuff elsewhere as a consequence. And I personally have no trouble
assigning this expense to the "cost of spam" account, since that's what
it was about - a spammer beat up a small group's server, and they lost
time and money as a consequence. Now, can that other guy please stop
saying there's no cost to spam? *That* appears to me to be a canard...



 Again, if spam costs mail providers much more than $1 or $2/month/user,
 then how can free providers offer mailboxes and how can you buy full
 Internet service including the use of modem pools or whatever for
 $10-$15/month?

Again (not a good sign; I suspect the frequent occurrence of this word
indicates that we're all looping here...) this is not about just the
cost of providing a mail host, it's about total cost of ownership.

I give up. Michel's right, we should stop now. I'm off to get a beer and
heartily recommend everybody else do the same



- peterd

A modest proposal (was: Re: spam)

2003-05-27 Thread Peter Deutsch
g'day,

J. Noel Chiappa wrote:
...
 Which is precisely why I say that the solution to spam is to charge for
 email. It avoids the whole question of defining what is and is not spam.
 
 More specifically, change the email protocol so that when email arrives from
  an entity which is not on the "email from these entities is free" list, the
 email is rejected unless is accompanied by a payment for $X (where X is set
 by a knob on the machine).

You probably know this already, but for those who don't, Brad Templeton
proposed this scheme a while ago, based upon a micropayments model, and
called it "estamps". See:

http://www.templetons.com/brad/spam/estamps.html

He's also got a summary page on the topic of spam at:

http://www.templetons.com/brad/spam/


He's since repudiated the idea, but it's been taken up and worked on in
the context of the hashcash system, which seeks to impose a measurable
computation cost on the sender in lieu of processing a micropayment.
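
For those who haven't looked at it, the core of the hashcash idea fits
in a few lines of Python. This is only a toy (real hashcash stamps carry
a date, the recipient's address and a version field), but it shows the
asymmetry: minting is expensive for the sender, checking is cheap for
the recipient:

  # Toy hashcash-style proof of work: find a stamp whose SHA-1 digest
  # starts with a given number of zero bits.
  import hashlib
  from itertools import count

  def mint(resource, bits=16):
      for c in count():
          stamp = "%s:%d" % (resource, c)
          digest = int(hashlib.sha1(stamp.encode()).hexdigest(), 16)
          if digest >> (160 - bits) == 0:
              return stamp

  def check(stamp, bits=16):
      digest = int(hashlib.sha1(stamp.encode()).hexdigest(), 16)
      return digest >> (160 - bits) == 0

  stamp = mint("recipient@example.org")   # costly for the sender
  assert check(stamp)                     # cheap for the recipient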


As noted, Brad himself has repudiated the idea, but I believe that the
general approach of automating the accounting and imposition of cost (as
is done in hashcash) shows some promise. What I actually think we need,
though, is an automatic way to extend trust and build trust
relationships. Paul Vixie alluded to a "trusted-introducer model similar
in concept to pgp but more market-ready" a couple of postings ago, which
I actually think is the way to go.

So, okay, this discussion needs to move off the IETF general list, but
again I agree with Paul. Where is the direction about where we should be
heading with this?

- peterd


-- 
-
Peter Deutsch   [EMAIL PROTECTED]
Gydig Software

Bungle...
   That's an 'i', you idiot...
  Oh, right. 'Bingle...

- Red versus Blue...

-



Re: Barrel-bottom scraping

2003-03-20 Thread Peter Deutsch


Scott W Brim wrote:
 
 On Thu, Mar 20, 2003 06:58:32AM +0200, Pekka Savola allegedly wrote:
  On Thu, 20 Mar 2003, Brian E Carpenter wrote:
   [...] However, I think doing some ISOC/IETF joint
   tutorials just before an IETF is definitely worth a try.
...
 Right.  I suggest we try some contiguous tutorials to start with, and if
 they are *too* successful we can either add some discontiguous ones or
 move the ones we have.
 
 I don't know what attendance was like at the later INET tutorials.
 Perhaps we could just take over the whole format?

Folks, am I the only one who thinks this is a bad idea?

What's being discussed here is starting a new consulting company,
specializing in IETF technologies and delivering its services through
tutorials. This is hard work, and there is already significant
competition in the area. More importantly, the effort required to get to
the volumes needed is immense.

To pull this off and make a profit, you will need to change from a
volunteer mentality to a professionally run, fee-for-service mentality,
with attention paid to advertising (or you won't get the needed volume
of traffic) and printing services (people will need classroom and
take-home material); folks will need to track bookings and arrange the
needed rooms; there will be an *additional* need for cookies and coffee,
etc. I ran my own company for quite a few years doing exactly this sort
of work, and although you can make a reasonably good living at it, the
delta between the technology and the rest of the work needed is far
larger than technical folks usually seem to grok at first reading.

Ask yourself this - if ISOC didn't make enough to justify a meeting this
year, why would the IETF think they could generate enough traffic to do
it during the current tech depression?


Nobody responded to my earlier post, but my suggestion is still to push
for an IETF TLD. Once it's in the root, it's basically zero impact on
others, no need to negotiate revenue sharing, no need to lay out cash in
advertising courses, booking rooms, printing materials, finding
instructors, arranging cookies, following up on bad visa charges and the
host of other things needed to run a tutorial company. Your fixed costs
are well defined and quite amenable to covering with donations, and your
profit margins once you're past break-even are great. Oh, and it's a great
way for participants to show their direct support for the process by
using their IETF domain for posting to the IETF lists. What's not to
like here?


- peterd


-- 
-
Peter Deutsch   [EMAIL PROTECTED]
  Gydig Software


  No, Harry - even in the wizarding world,
hearing voices is not a good sign...

- Hermione Granger

-



Re: Financial state of the IETF - to be presented Wednesday

2003-03-17 Thread Peter Deutsch
g'day,

Bill Strahm wrote:
 
 I tend to disagree with you Ross,
 
 First it is not excessive by definition because we are not covering our costs.  

Actually, from Harald's numbers the meeting fee more than covers the
direct costs of the meetings. What they don't cover is the total cost of
operating the secretariat and the rest of the IETF's activities.

One question that seems appropriate to ask is whether it is fair (for
some value of fair) to tax attendees in this way to cover overall costs,
or whether there is some more equitable funding mechanism that would
help spread this cost around a bit more. After all, the current system
is optimised for non-attendees. They get the full benefit of
participation, without carrying any of the associated costs...


 Second I don't think it is excessive because I know of MANY weeklong
 conferences that want in the order of 1000-1700 registration fees...

You're comparing apples and hand grenades. Professionally run
conferences operated by for-profit entities (to pick one example) have
entirely different cost structures. Sure, they're more expensive, but
the IETF doesn't offer professional training sessions, professionally
printed literature, trade shows, etc. All these have associated costs
and put further demands upon the infrastructure.

Ultimately, it doesn't matter if you think Networld+Interop is somehow a
better deal than the IETF. What matters is whether the IETF can cover
its costs by charging another $100 per meeting without provoking a
measurable decline in attendance. If that happens, would you then
increase the cost further? Taken to the extreme, you only need one person
willing to pay $2.5 million per year and the problem's solved, but
somehow I'm not sure that approach is going to work in practice...


- peterd



--
-
Peter Deutsch   [EMAIL PROTECTED]
  Gydig Software


  No, Harry - even in the wizarding world,
hearing voices is not a good sign...

- Hermione Granger

-



Why not a .IETF TLD? (was: Re: Financial state of the IETF...)

2003-03-16 Thread Peter Deutsch
 approve of. You'd need to do some real
market research to determine if this is all viable or if I'm really as
special as my mom always thought, but my guess is you could find a whole
passel of intellectual property types who'd sign up for their favorite
strings on principle (after fighting you tooth and nail through the
twisty little passages of ICANN, all the same until the TLD went
live).

My guess is that there's inbuilt free rent in *any* TLD (why do you
think they're so popular?) but even if there isn't, all you're really
trying to do is generate supplemental revenues equal to the delta
between current revenues and expenses, so this looks like a *very*
promising line to take. 

The only other alternative for an organization that sees its membership
falling is to cut costs or increase fees. The former reduces performance
and the latter could lead to a death spiral as rising costs chase more
and more people away. Finding an alternative revenue stream seems the
only *healthy* long term alternative.


Oh wait - there is a hitch. Of course, if we try to do this, the IETF
would then be finally forced to visit the ICANN Alternate Reality Plane
that the rest of the world has struggled with for so long. Whether this
is considered a good thing or a bad thing is left as an exercise for
the reader but if any organization has a claim to a TLD, it would seem
to be the group that defines and maintains the very technologies and
procedures used to make the service work. This approach requires no
revenue-sharing agreements with the other TLD operators, no changes in
technologies or procedures and shouldn't destabilize the root since
it's a single additional TLD with minimal impact on traffic patterns.
Putting aside any moral claims, the IETF should be able to quickly reach
consensus upon an RFC stating that this specific TLD wouldn't hurt the
current DNS... ;-)

Okay, that's more than my 2 cents on this subject. Do with it as you
will...


And finally, a couple of specific comments on the posted financials
before I close.

Any business plan predicated upon the assumption that attendance will
maintain or return to the higher levels of previous years seems fatally
flawed, to say the least. The high-tech train wreck has now lasted three
years and shows no signs of being cleared from the tracks any time soon,
so we shouldn't allow ourselves too much irrational optimism on this
front. A more likely scenario is falling attendance for at least another
year, if not more, and this should be in the budget.

Also, to respond to Steve Casner's comment about comparisons with past
costs, given the inertia in starting and perpetuating working groups, I
would guess that a 20 percent reduction in attendees doesn't
automatically translate to a 20 percent reduction in demand for the
number of meeting rooms, just more space available in each room, so
there seems to have been a ratchet effect here on the cost base. And if
the IETF's cost base is now permanently higher than it was a few years
ago, you will either need to take steps to fix the revenue side, or
you'll need to fix the demand side.

Thus, it looks like one of the steps needed in these harder times is a
cost-cutting exercise to reduce the number or working groups, and thus
the number of rooms needed. The demands upon space likely won't drop back
down again on their own, so some hard calls might be needed to balance
the books here.


In summary, I would suggest that if decisions are made based upon
built-in assumptions such as "attendance is going back up" or "falling
attendance automatically lowers costs", we'll all be revisiting this
whole debate again a year from now, but with the numbers in worse shape
than they are today...



 - peterd (who remembers this specific analysis on the
cost of cookies cycling round before...)

-- 
-
Peter Deutsch   [EMAIL PROTECTED]
  Gydig Software


 As Oprah Winfrey likes to say, There's only two ways
to lose weight - eat less, or exercise more...

-



Re: Searching for depressing moments of Internet history.....

2003-01-12 Thread Peter Deutsch
g'day,

There was also a certain amount of piggybacking onto legitimate open FTP
sites to deliver porn going on for a while that predated the web by
years.

Back in those more innocent days some folks actually left their incoming
FTP directories open  and apparently there arose an on-demand file
delivery service, in which folks would post requests for specific files
and the pusher would push it into a well-known open FTP directory for
pickup, then delete it so people wouldn't know it was going on. It was
being used for porn, and also for various other contraband stuff
(cracking programs, etc). I found out about this when the campus
Director of Computing called to say he'd received a complaint from NSF
about McGill's porn site, which was located on one of our departmental
machines (NSF was sharing the cost of the link to Canada, felt this
violated the AUP and wanted it shut down NOW...) By the time I got to a
terminal to check things out, all the bad stuff was gone, but yes we had
a machine with an open directory, and when we started monitoring it, we
saw lots of stuff coming and going until we closed it up.

Off the top of my head, I'd say this would have been about 1988-89 or
so. Others may have anecdotal evidence pushing it further back than
that.


- peterd


Eliot Lear wrote:
 
 If you are looking for the first site that spoke HTTP, then I don't
 know.  On the other hand, as we know there has been a history of
 development surrounding Internet porn.  First of all, there were the
 various newsgroups, and the various (dis)assembly programs.  One of
 these was AUB and it dates back to around 1992.  Another of which was
 FSP, the UDP-based file sharing protocol written to get around TCP port
 filters.  That would be in the 1993 time frame.  While to the best of my
 knowledge IRC had no attachment to sex for its development, it in
 combination with various hacker sites played a role in sites being
 hacked and becoming warez and porn sites.
 
 Eliot
 
 Harald Tveit Alvestrand wrote:
  Despite having lived through much recent history, I've forgotten a lot
  of it
 
  I just wonder: does anyone know/remember when the first Web porn site
  came online?
 
  Wondering whether it was before or after the first official release of
  Mosaic
 
Harald
 
 
 

-- 

-
Peter Deutsch   [EMAIL PROTECTED]
  Gydig Software

  No, Harry - even in the wizarding world,
hearing voices is not a good sign...

- Hermione Granger

--




Re: Searching for depressing moments of Internet history.....

2003-01-12 Thread Peter Deutsch
g'day,

Franck Martin wrote:
 
  -Original Message-
  From: Steven M. Bellovin [mailto:[EMAIL PROTECTED]]
  Sent: Monday, 13 January 2003 1:38
  To: Eliot Lear
  Cc: Harald Tveit Alvestrand; [EMAIL PROTECTED]
  Subject: Re: Searching for depressing moments of Internet
  history.
 
 
  In message [EMAIL PROTECTED], Eliot Lear writes:
  If you are looking for the first site that spoke HTTP, then I don't
  know.  On the other hand, as we know there has been a history of
  development surrounding Internet porn.  First of all, there were the
  various newsgroups, and the various (dis)assembly programs.  One of
  these was AUB and it dates back to around 1992.  Another of
  which was
  FSP, the UDP-based file sharing protocol written to get
  around TCP port
  filters.  That would be in the 1993 time frame.  While to
  the best of my
  knowledge IRC had no attachment to sex for its development, it in
  combination with various hacker sites played a role in sites being
  hacked and becoming warez and porn sites.
  
 
  Google's archive has the first alt.sex posting as April 8, 1988.
 
  It's very rare that I have a work-related justification for searching
  alt.sex...
 
 An interesting subject for a thesis:
 
 The Porn and The Internet.
 
 You think I can apply for a grant to the NSF?

Nah, that would never fly. How about "An Investigation into the Utility
of Stimulus-Response Sites For Promoting Networked Growth Patterns"?
Worded that way, if DARPA won't fund it, you could always send it to the
folks at NIH or whoever it is that funds the biotech stuff

- peterd


-- 

-
Peter Deutsch   [EMAIL PROTECTED]
  Gydig Software

I was *that* close to getting 'download dirty pictures
from the Internet'  added to my job description

 -Wally from Dilbert...

--




Re: Please recant or appologize to Jim Flemming

2003-01-07 Thread Peter Deutsch


Craig S. Williams wrote:
 
  Can't we all just get along?


Seems more like Strother Martin in Cool Hand Luke

   What we have here, is a failure to communicate...


- peterd



-
Peter Deutsch   [EMAIL PROTECTED]
Gydig Software

  It doesn't hurt to be optimistic. You can always cry later.
 -- Lucimar Santos De Lima
--




Re: Spring 2003 IETF - Why San Francisco?

2002-11-25 Thread Peter Deutsch


John Stracke wrote:
 
 Harald Tveit Alvestrand wrote:
 
  If we get twice as many people as in Atlanta, crowding may be a
  problem. But twice as many people is a LARGE increase.
 
 Besides which, the last IETF meeting in the Bay Area was in 1996; the
 local population of companies that will pay to send people has probably
 dropped off a bit since then.

Yahbut, the number of people who will come looking for pointers to
work is going to be *way* up. There may not be more people in the
meetings, but there may be lots more people in the corridors...
:-/


- peterd

-
Peter Deutsch   [EMAIL PROTECTED]
Gydig Software

  Time will end all of my troubles, but I don't
always approve of time's methods...

-




Re: IPR and I-D boilerplate

2002-07-01 Thread Peter Deutsch



Thanks a 1x10^6. I missed that!


- peterd

[EMAIL PROTECTED] wrote:
 
 Peter,
 
 ...there were pretty categoric statements made during the last iteration of
 this thread that
 a Drafts archive *was* going up soon. Has this idea been shelved,
 canceled, delayed or absorbed by the event horizon surrounding the
 infinitely dense Black Hole that is the intellectual property mess? ;-)
 
 Try http://ietfreport.isoc.org/
 
 Regards,
 
 Graham Travers
 
 International Standards Manager
 BTexact Technologies
 
 e-mail:   [EMAIL PROTECTED]
 tel:  +44(0) 1359 235086
 mobile:   +44(0) 7808 502536
 fax:  +44(0) 1359 235087
 
 HWB279, PO Box 200,London, N18 1ZF, UK
 
 BTexact Technologies is a trademark of British Telecommunications
 plc
 Registered office: 81 Newgate Street London EC1A 7AJ
 Registered in England no. 180
 
 This electronic message contains information from British
 Telecommunications plc which may be privileged or confidential. The
 information is intended to be for the use of the individual(s) or entity
 named above. If you are not the intended recipient be aware that any
 disclosure, copying, distribution or use of the contents of this information
 is prohibited. If you have received this electronic message in error, please
 notify us by telephone or email (to the numbers or address above)
 immediately.
 

-- 

-
Peter Deutsch   [EMAIL PROTECTED]
Gydig Software


   That's it for now. Remember to read chapter 11 on the
   implications of quantum mechanic theory for time travel
   and be prepared to have been here last week to discuss.

-




Re: IPR and I-D boilerplate

2002-07-01 Thread Peter Deutsch



Joe Touch wrote:
. . . 
 Yes. The history here is the reason why the drafts are ephemeral and not
 archived - to encourage the exchange of incomplete ideas. The success of
 this history is what is being compromised.
 
 Archiving them creates an environment where drafts and updates will be
 stalled, with the response well, since this is archival, we'd better
 get it a little more complete. Given how long it takes for even the
 active drafts to make it to RFC with such discussion, the chilling
 effect on the creation of RFCs (at least by people who _are_ careful,
 who you want to encourage) may grind things to a halt.

Well, if the community collectively decided that it didn't want to
remember its history, that would be fine, but I mentioned this because I
thought that when we last went through this it was the consensus that
remembering would be a Good Thing (to quote a well-known
flower-arranging alleged insider trader... ;-) and I saw statements that
an archive was coming soon. This may have been in out-of-band
communications, not the list, but in any event, as Graham has pointed
out, ISOC is doing this now. Good stuff...

FWIW, I personally don't buy the "it will have a chilling effect on
discussion" argument, given that the email lists are archived here,
there and everywhere, and now Google has all the old Usenet postings up.
Have you actually gone to Google and checked out the Usenet archives?
It's amazing what's out there.

So why not Internet-Drafts? I've already had cause a couple of times to
help research a couple of prior art claims, and I was amazed to find out
what was still out there. It was just hard to track down. I don't think
we do ourselves any favours by forcing folks into this "attic cleaning"
mode.
We might as well put it all out as a tool for research, because it's not
just about patent claims. It's actually useful to trace out the
development process so you can understand it, and make it work better.
Sort of a first derivative of research, to do research about research I
guess...

- peterd

-- 

-
Peter Deutsch   [EMAIL PROTECTED]
Gydig Software


   That's it for now. Remember to read chapter 11 on the
   implications of quantum mechanic theory for time travel
   and be prepared to have been here last week to discuss.

-




Re: IPR and I-D boilerplate

2002-07-01 Thread Peter Deutsch



Keith Moore wrote:
 
 it's really very simple: people posted I-Ds with the assurance that they
 would be retired after six months.  it's not reasonable for IETF to
 violate that assurance without permission.

Errr, people post IDs to publicly accessible mailing lists which are
being archived onto the Internet. One consequence of this is that copies
of those drafts (and everything else posted to the mailing list) will
never leave the net. The question is how hard it is going to be for
folks to access them in the future, and whether it is useful for the
IETF as a whole to make sure that such access is simple for all.

What I think folks may reasonably assume is that any draft they post
will go out of scope after six months and not be in consideration for
working group study, but it's unrealistic, and has been since about
1988, to think that you can arrange for them to disappear from the face
of the Internet and not be accessible after that period.


 so if IETF wants to make old drafts publically available (and I agree
 this could be a useful thing), it really should get permission from the
 authors. or at least notify them and give authors the chance to say
 please do not make my old documents publically accessible.

If you really think this is practical, then you should contact the nice
folks at Google and ask them to take down the Usenet archives, and
perhaps look around at the various unofficial mailing list archives that
so many folks run today. The first time I went to the Google archive I
found that postings I'd made in 1987 were still floating around. It
never occurred to me that I could ask Google to stop sharing my misspent
youth with the world...


 it would also be reasonable to allow authors to specify, when submitting
 a new I-D, whether the draft should be made available after expiration.

Sorry, I don't see this as being reasonable *or* practical. If you work
in private groups to develop private standards this might be reasonable,
but the IETF works very hard to make its work publicly accessible, and
one consequence of that is that folks have to accept that entering the
process means everything you do is going to be out there. Even if the
IETF didn't run such an archive of mailing lists, etc., other folks do,
and have done so for a long time. All we're talking about here is
simplifying things...


- peterd

-- 

-
Peter Deutsch   [EMAIL PROTECTED]
Gydig Software


   That's it for now. Remember to read chapter 11 on the
   implications of quantum mechanic theory for time travel
   and be prepared to have been here last week to discuss.

-




Re: IPR and I-D boilerplate

2002-06-30 Thread Peter Deutsch

g'day,


John C Klensin wrote:
. . . 
 Please, folks, I am _not_ trying to restart the discussion of
 archival I-Ds.  Personally, I remain opposed to the idea, and
 I believe that they should be treated as drafts and discarded.
 If they result in an RFC, then the RFC should stand on its own.
 Nor do I think that there is any quick fix to the patent
 situation, least of all anything like this.

Well, without repeating the entire thread yet again, there were pretty
categorical statements made during the last iteration of this thread that
a Drafts archive *was* going up soon. Has this idea been shelved,
canceled, delayed or absorbed by the event horizon surrounding the
infinitely dense Black Hole that is the intellectual property mess? ;-)

And if we're going to state our own opinions in an aside, I personally
believe that such an archive would be invaluable. He who forgets history
is doomed to repeat it, and all that...


- peterd


-- 

-
Peter Deutsch   [EMAIL PROTECTED]
Gydig Software


   That's it for now. Remember to read chapter 11 on the
   implications of quantum mechanic theory for time travel
   and be prepared to have been here last week to discuss.

-




Re: Global PKI on DNS?

2002-06-11 Thread Peter Deutsch

g'day,

John Stracke wrote:
 
 Such software would not see this kind of data unless a user
 of the server tried to use this stuff, and in that case I don't see
 why that user couldn't upgrade her own software to get it to work.
 
 Because it's not their software? If I wanted to do PKI through DNS, and my
 ISP's server did not support TCP, I might be stuck.  Personally, I don't
 depend on my ISP for DNS, but many users do.

So users wanting this new service will be pretty motivated to switch DNS
servers when the time comes; what's the big deal in that? Somebody (I
think it was Keith) suggested earlier in this thread that nobody should
be trusted with the single PKI root. Maybe the same sentiment applies to
DNS roots, as well?? Certainly it would seem to apply to trusting
a single DNS service provider at the subroot level...

(As he hides behind the blast wall to avoid flying shrapnel...  ;-)


- peterd

-- 
---
   Peter Deutsch   [EMAIL PROTECTED]


   I had to do an assignment on wild animals, and I decided to
do my report on alligators. To complete my research, I took a
trip to the zoo. I wanted to make a day of it, so I took along
my pet dog. I figured we could throw a little frisbee,
enjoy the sun, but boy was that trip a disaster. I had to
tell my teacher that my homework ate my dog...

--




Re: Global PKI on DNS?

2002-06-11 Thread Peter Deutsch



John Stracke wrote:
 
  Because it's not their software? If I wanted to do PKI through DNS, and
 my
  ISP's server did not support TCP, I might be stuck.  Personally, I
 don't
  depend on my ISP for DNS, but many users do.
 
 So users wanting this new service will be pretty motivated to switch DNS
 servers when the time comes, what's the big deal in that?
 
 The big deal is that some of the more restrictive ISPs may not permit
 customers to bypass their DNS servers.  Same as with HTTP interception
 proxies.

And there are multiple possible answers to that sort of behaviour, none
of which require technological solutions, since it's not a technological
problem. Users can be told this function is not available from this
ISP, they can change ISPs, and we let the free market do its thing. Operators of
such a new service can run DNS servers on different ports for this
functionality. There are probably lots of things you could do, but the
fact that a particular ISP is behaving in an antisocial manner shouldn't
be an issue for this list, should it?

Last week I was told by a relative down in Australia that his ISP still
scans for multiple hosts hiding behind NAT boxes. OTOH, one of my ISPs
(Earthlink) regularly tries to *sell* me NAT boxes. Neither behavior
would seem relevant to the NAT versus anti-NAT debate on this list, but I
happen to rather like the fact that my ISP recognizes that I want to run
this technology and doesn't try to treat me like a criminal for doing
so.

Now, it's a bit more tricky when the ISP is doing proxy interception,
but frankly maybe we shouldn't be overloading the current DNS service
with this. I didn't see anything so far in this thread that would
discourage me from using DNS *technology* in this application, but you
would probably want to set up your own root for this service. It
would get you out from under the many operational restrictions folks put
on DNS for stability reasons anyways, and by using a different port
you'd find the proxy/interception issues go away, too.

Sounds like a win for everybody...

- peterd




-- 
---
   Peter Deutsch   [EMAIL PROTECTED]


   I had to do an assignment on wild animals, and I decided to
do my report on alligators. To complete my research, I took a
trip to the zoo. I wanted to make a day of it, so I took along
my pet dog. I figured we could throw a little frisbee,
enjoy the sun, but boy was that trip a disaster. I had to
tell my teacher that my homework ate my dog...

--




Re: I-D ACTION:draft-etal-ietf-analysis-00.txt

2002-03-30 Thread Peter Deutsch
  be something we might want to investigate.
 
 This need not necessarily be considered a failure of the IETF.  It might
 be an indication of the maturity of the IETF, in that other standards
 bodies/companies/users can use IETF protocols/services/BCPs as a foundation
 for whatever it is they're trying to do.

Your mileage may vary, etc., but if people are taking the IETF work and
not growing it in the IETF, I personally conclude that the IETF is
failing to provide a suitable home for new ideas. It's supposed to be
*the* place where open standards protocols are developed in a
vendor-neutral, intellectually honest forum. If people find they can't
get their work done here, and elect to do the work elsewhere, sounds
like failure of *something* to me. Of course, it *does* solve the
overcrowding problem, so if you want to measure success by the ability
to get a cookie in the corridor, this would be a good thing...  ;-)

- peterd

-- 
---
   Peter Deutsch   [EMAIL PROTECTED]
   Gydig Software


  This, my friend, is a pint.
  It comes in pints?!? I'm getting one!!

 - Lord of the Rings

--




Re: I-D ACTION:draft-etal-ietf-analysis-00.txt

2002-03-29 Thread Peter Deutsch
 by
 the
 KLOC metric.  They had determined that the product would have 150ish KLOC
 in it
 and so had every programmer report the number of KLOC they had contributed
 that week.
 
 One week I was looking through the code I had inherited and realized that I
 had two
 copies of a set of utilities that did the same code.  I spent a day or two
 removing
 one set, and porting that half of code to use the other set of utilities
 (Basically
 I had inherited two developers code).  Well my KLOC for the week was
 somewhere in
 the -10 range, and it was a month before I started going positive again.  My
 reviews
 sucked, but it was the right thing to do.
 
 Becareful what you measure, because that is the behaviour you will get
 
 Bill
 

-- 
---
   Peter Deutsch   [EMAIL PROTECTED]
   Gydig Software


  This, my friend, is a pint.
  It comes in pints?!? I'm getting one!!

 - Lord of the Rings

--




Re: IETF Meetings - High Registration Fees

2002-03-18 Thread Peter Deutsch

g'day,

Paul Robinson wrote:
 
 On Mar 18, Brian E Carpenter [EMAIL PROTECTED] wrote:
 
  That's an interesting assertion, but it isn't true. The decline in IETF attendance
  since the economic downturn started is across the board - large companies are
  just as sensitive to meeting costs as small companies or individuals. The whole
  idea of tiered prices is based on a massive misunderstanding of the way companies
  manage expenses.
 
 I can assure you it isn't. Have you noticed that nobody from any company has
 piped up in this thread to say oooh, no, that would be a bad idea!. I can
 assure you that for large multi-nationals the difference between paying $500
 for a delegate and $5000 is a drop in the proverbial ocean, especially when
 it comes to standards tracking. 

Well, since you asked, oooh, no, that would be a bad idea.

I've run my own company, I've been an independent consultant and I was
an Engineering Director at Cisco for a couple of years. At Cisco I
managed a team of about 80 people, and I got to decide how many of them
would go to the IETF each meeting. Yup, at Cisco we didn't ask John
Chambers how many people to send to the IETF, each Business Unit made
these decisions independently based upon the needs of their markets. We
managed our own budgets and schedules, and had to hit both revenue and
spending milestones along the way. The IETF was just one small part of
what we did and that's true for all the other Business Units at Cisco
who independently decide who to send to each meeting.

In a world of market downturns the difference between $500 a person and
$5,000 is not a drop in the proverbial ocean. Adding an extra $15,000 or so
in annual cost, times the several people I sent each trip, would definitely
have led to me looking for cutbacks. Yes, even large companies need to
watch their spending. In at least one case where I allowed folks to go
to a set of meetings, I can assure you I would *not* have authorized it
if the costs had increased by an order of magnitude.
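
(A quick back-of-envelope sketch of that arithmetic, in Python for
concreteness; the fee levels and head count are just the illustrative
numbers from this thread, not real IETF or Cisco figures:)

    # Illustrative figures from this thread only, not real IETF fees.
    fee_now, fee_tiered = 500, 5000        # dollars per person, per meeting
    meetings_per_year = 3
    people_sent = 4                        # hypothetical team size

    extra_per_person = (fee_tiered - fee_now) * meetings_per_year
    print(extra_per_person)                # 13500 -- roughly the $15,000/year cited above
    print(extra_per_person * people_sent)  # 54000 -- what a manager would have to find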

I fully endorse keeping my people in the industry loop, I endorse open
standards and I endorse career development for my staff, but given what
the markets have done over the past two years, you shouldn't assume
people like me would just roll over and pay whatever was charged. Not on
*this* reality plane.

And even if we did, I agree with the previous posters who question the
effect this would have on the organization. The more dependent you are
on a smaller group of decision-makers who hold the purse strings, the
more beholden to them you become. Yes, $450 is a lot of money to small
companies, and when I was an independent consultant I elected, on more
than one occasion, not to go to a meeting because of the cost. But having
sat on both sides of the fence, asking one group to subsidize another
doesn't seem to be a healthy long-term strategy for any
group.

So, that leaves cost containment. Does the IETF spend too much on
cookies? I suspect not, but as Harald has pointed out, the figures are
out there. Go have a look and let us know where you think the cuts
should be made. Blindly assuming that large corporations will
willingly pay an order of magnitude more for the privilege of
subsidizing individual contributors doesn't seem viable to me...


- peterd


-- 
---
   Peter Deutsch   [EMAIL PROTECTED]
   Gydig Software


  This, my friend, is a pint.
  It comes in pints?!? I'm getting one!!

 - Lord of the Rings

--




Re: IETF Meetings - High Registration Fees

2002-03-18 Thread Peter Deutsch

g'day,

Scott Lawrence wrote:
...

  In addition, I still find it amazing that people are justifying costs due to
  the number of breakfasts and cookies being served. The word 'ludicrous' is
  overused on this list, but I think I've found a situation it applies to -
  please, ask yourself whether the cookies are really needed. :-)
 
 Actually, I think the cookies and coffee are probably a major net
 productivity gain for the group, because they make it possible for
 people to congregate locally between meetings rather than scatter to
 find their fixes.

It's a very common perk here in Silicon Valley to provide employees with
free coffee/tea/soft drinks. The cost of this can run to several
dollars/day per person. For a company as large as Cisco (40,000 at its
peak, in the 2x,000 range now) this works out to millions of
dollars/year. Now, you might think that cutting out the free drinks
would be a slam-dunk no-brainer for the accountants, but people still
give free drinks to their staff. Now, this is *not* just because people
would be unhappy. Unhappy was when we had to lay off thousands of
employees a year ago. People are less insistent on their perks this
year, so why do companies still think it worth paying for free drinks?
Let's consider another set of numbers.

The average loaded cost of an engineer in Silicon Valley is something
on the order of $200,000/year (that's salary, plus all the costs to put that
employee to work: health insurance, laptop, travel, etc.).
The senior folks who go to the IETF probably average out to a bit
more than that. *That* number works out to something very close to
$120/hour, assuming 210 work days/year and an 8-hour day (yeah, I know,
you work more than 8 hours a day - humour me here).

Now, if each time I give you a 35 cent soda, I can get another 15
minutes of work out of you, then the net profit on that soda to me as an
employer is something like $30 - $0.35 = $29.65. In effect, my employees
are paying *me* for the soft drinks. Thanks, folks.

And *that's* why it pays to issue cookies and drinks at the IETF. Each
time you *don't* have to go stand in line at the coffee shop to spend $2
for a soft drink, or gosh forbid $6.00 for a latte with extra foam and a
cookie, the collective wisdom of the IETF benefits from another 15
minutes of your time and you metaphorically pocket $30. Do that three
times a day for a week and you've paid for your IETF meeting fee...

When I was attending the IETF meetings, some of the best work was
definitely done while scarfing down a coffee and pastry (Hi Steve!). Do
the math on how many collective hours of work this works out to in a
year:

O(1000 people/meeting) x O(3 breaks/day) x O(15 minutes/break)
   x 5 days x 3 meetings/year 

Yup, that's over 10,000 hours/year of work done in exchange for those
cookies. Now, there's some bio-overhead in that number, but the
benefits are real enough that I'd vote to keep paying for the cookies...
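
(The same arithmetic as a small Python sketch, for anyone who wants to poke
at the assumptions; every number in it is the rough estimate from this
message, not measured data:)

    # Rough estimates from the message above, not measured data.
    loaded_cost = 200000              # dollars/year for a Silicon Valley engineer
    hours_per_year = 210 * 8          # work days/year * hours/day
    hourly = loaded_cost / float(hours_per_year)
    print(round(hourly))              # ~119, i.e. close to the $120/hour figure

    soda_cost = 0.35
    minutes_recovered = 15
    net_per_soda = hourly * minutes_recovered / 60.0 - soda_cost
    print(round(net_per_soda, 2))     # ~29.41, same ballpark as the $29.65 above (which used a flat $30)

    attendees, breaks_per_day, days, meetings = 1000, 3, 5, 3
    cookie_hours = attendees * breaks_per_day * (minutes_recovered / 60.0) * days * meetings
    print(cookie_hours)               # 11250.0 -- the "over 10,000 hours/year"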


- peterd





---
   Peter Deutsch   [EMAIL PROTECTED]
   Gydig Software


  This, my friend, is a pint.
  It comes in pints?!? I'm getting one!!

 - Lord of the Rings

--




Re: Yes, conformance testing required... Re: Fwd: Re: IP: Microsoft breaks Mime specification

2002-01-27 Thread Peter Deutsch



Kyle Lussier wrote:
 
   I seem to be getting two conflicting viewpoints:
  
 #1 Vendors can only be trusted to be interoperable on their own,
and can not be forced to conform.
  
 #2 Vendors absolutely can't be trusted to be interoperable,
without conformance testing.
 
  Kyle, in all kindness, you're missing the most fundamental
  viewpoint expressed here recently: The IETF isn't the place,
  nor is it the organization, that could or should take on the
  role of interoperability-cop.
 
 Some have proposed the ISOC as a body to do this kind of thing.
 
 Is it also public opinion that the ISOC should or shouldn't do
 something like this?
 
 I agree with all of everything being said.  We mostly just need
 to find the right body to do this kind of thing, and it's
 still gotta be a jury of peers for it to have any value.
 
 We need a United Nations of Standards Citizenship.


Kyle, please don't take this the wrong way, but don't you think you've had your
say on this subject? I count 31 messages from you on this topic since last
Tuesday, including seven today. There are some people who share your interest,
but the community seems to agree this is not the forum you seek. If you think
ISOC might be the place, please take it over there, but personally I think it's
time to let this one die here.

Would somebody please mention Adolf Hitler so we can declare this thread
complete?


AD-thanks-VANCE...


- peterd


-- 
-
Peter Deutsch   [EMAIL PROTECTED]

All my life I wanted to be someone. I suppose I should 
 have been more specific.

   - Jane Wagner
-




Re: trying to reconcile two threads

2001-11-29 Thread Peter Deutsch



Fred Baker wrote:
 
 At 01:57 PM 11/28/2001, Charles Adams wrote:
 This may be the wrong time to interject this, but I know of a local cable
 company that requires you to register a single MAC address.
 
 mine does that. I gave them the mac address of my router.

Yup, and my latest NAT box actually has a clone MAC address option in the
setup menu (I think it was the Barricade, but in any case it's the new one with
dialup modem backup I just installed after my local DSL provider went belly up on
me. Belt and suspenders for me from now on).

As the old saying goes, if you build a smarter mouse trap, all you get are
smarter mice...



- peterd
-- 
--
Peter Deutsch work email:  [EMAIL PROTECTED]
Director of Engineering
Edge Delivery Products
Content Networking Business Unit private:  [EMAIL PROTECTED]
Cisco Systems


   That's it for now. Remember to read chapter 11 on the
   implications of quantum mechanic theory for time travel
   and be prepared to have been here last week to discuss...

--




Re: Why IPv6 is a must?

2001-11-27 Thread Peter Deutsch



Anthony Atkielski wrote:
 
 Caitlin writes:
 
  If a node only requires accessibility by a
  few specialized nodes (such as a water meter)
  then making it *visible* to more is just
  creating a security hole that has to be plugged.
 
 Only if the information made thus available itself constitutes a security
 breach, which is not necessarily the case.  Knowing how much water someone
 consumes or how many cans of Coke remain in a distributing machine would
 probably not be a security issue for most users...

I can't help myself.

Actually, having access to such stats as amount of power used, coke consumed,
late-night pizzas ordered from the Pentagon, or number of routine status
messages transmitted from ships of a specific call sign, can reveal a surprising
amount of detail.

It's fairly well known that the Americans had broken the Japanese codes during
World War II, but it's less well known that this was not a one shot break, but
an ongoing process of breaks, loss of capability and rebreaks. Periodically the
Japanese would reissue their code books and change the callsigns of their
various ships. The U.S. code breakers would then have to recreate their
penetration by identifying each vessel's new call sign, identifying specific
message types and using these to rediscover the code groups.

One technique they had for this was to detect traffic patterns from specific
callsigns; by detecting similar patterns before and after the change, they could
identify specific ships. They could then attack the message traffic looking for
identical or similar messages, which in turn would lead to new breaks into the
system. Another technique was to monitor ambient traffic patterns. A spike in
traffic for a vessel or group would indicate potential upcoming operations,
especially if you were monitoring major capital ships.
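
(The traffic-analysis point is easy to illustrate: you don't need the message
contents at all, only counts per callsign over time. A toy sketch in Python,
with entirely invented callsigns and numbers:)

    # Toy traffic analysis: flag callsigns whose message volume suddenly spikes.
    # Callsigns and counts are invented for illustration only.
    weekly_counts = {
        "ABLE":  [12, 11, 13, 12, 31],   # jump in the latest week
        "BAKER": [8, 9, 7, 8, 9],
    }
    for callsign, counts in weekly_counts.items():
        baseline = sum(counts[:-1]) / float(len(counts) - 1)
        if counts[-1] > 2 * baseline:
            print(callsign + ": traffic spike -- possible upcoming operation")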

Operations research has come a long way since then, and these or similar
techniques are now used in industry for marketing and sales purposes. U.S. law
enforcement was even using power consumption (as measured by infrared detectors)
as an indicator of potential pot growing in your hydroponic basement garden for
a while. This last one ran afoul of the illegal search and seizure bits of the
U.S. constitution but The World Is A Very Big Place and not everybody might be
as picky as the U.S. on such things.

The moral of the story? Traffic patterns and metadata can be powerful tools and
one person's junk is another person's data. You should not assume that the
majority of people shouldn't or wouldn't care about it leaking out, even if at
first glance it seems pretty mundane.


- peterd


-- 
--
Peter Deutsch work email:  [EMAIL PROTECTED]
Director of Engineering
Edge Delivery Products
Content Networking Business Unit private:  [EMAIL PROTECTED]
Cisco Systems



  Many people can predict the future. Me, I can predict the past...

--




Re: Carrier Class Gateway

2001-04-26 Thread Peter Deutsch



Willis, Scott L wrote:
 
 Why Waste time with calculations, It's an American Ship!  Swing the 16 guns
 and blow the Bridge. Bush can call it routine and not apologize for it.

Errr, actually carriers don't have 16-inch guns, the battleships did. There
*were* smaller caliber turrets on the older (e.g. WWII Essex class)
carriers for antiaircraft work, and the newer carriers have such things
as Phalanx for the same reason, but definitely not something as big as
16-inch. Now, sending off a flight of F-15s with laser guided weapons on the
other hand...


- peterd



 
 -Original Message-
 From: Pat Holden [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, April 25, 2001 5:13 PM
 To: Jose Manuel Arronte Garcia; [EMAIL PROTECTED]; Lloyd Wood
 Cc: [EMAIL PROTECTED]
 Subject: Re: Carrier Class Gateway
 
 one would have to consider high tides during a full moon to get an accurate
 measurement.
 
  I am also sorry about this but...
 
  I think all the calculation regarding height limit should be made based on
  high tides; it is easier to know if a ship would be able to pass on high
  tide or not, when its the sentsitive time to let it pass, with it is
  higher tides...
 
  Manuel Arronte.
 
 
 
  - Original Message -
  From: [EMAIL PROTECTED]
  To: Lloyd Wood [EMAIL PROTECTED]
  Cc: [EMAIL PROTECTED]
  Sent: Wednesday, April 25, 2001 1:44 P
  Subject: RE: Carrier Class Gateway
 
 
  
  There's some discussion of Panama requirements in 'The New New
   Thing'.
  Not just a lock, but there's a bridge to worry about; passing
 under
   it
  at low tide is your height limit.

 i would imagine the problem would be at high, not low, tide.
  
oops. mea culpa.
   
L.
  
  
   Sorry to add yet another post to a pointless thread but...
   Lloyd was right the first time.  Height limit would be based on low
 tide.
   For ships that are near the height limit, waiting a mean time of 6 hours
   for the next low tide is not a big deal.
   -Mark

-- 
--
Peter Deutsch work email:  [EMAIL PROTECTED]
Director of Engineering
Edge Delivery Products
Content Networking Business Unit private:  [EMAIL PROTECTED]
Cisco Systems


   There are only three types of mathematician 
 - those who can count and those who can't.


--




Re: Topic drift Re: An Internet Draft as reference material

2000-09-30 Thread Peter Deutsch in Mountain View

Greg Minshall wrote:
 
 i think there are two issues.
 
 one is that when I-Ds were created, there was some controversy, mainly
 revolving around the notion that we already had a forum for people putting out
 ideas (known as RFCs), and that the fact that the public concept of RFC was
 different from our intent, we should stick by our intent (and work on
 educating the public).  if i remember correctly, it was within this part of
 the discussion that we decided that I-Ds would be ephemeral documents.

A couple of people are claiming to represent the intent of people "back
then" - maybe I should go back to the Historical Internet Drafts
Directory and review that debate to see whether everyone is remembering
history correctly. Ooops, I can't do that until there *is* an Historical
Internet Drafts directory!  :-)

Sorry to poke fun, I *do* respect your opinion of what the group-think
intent was from that period, but frankly time has moved on and we all
have different needs than we did back then. I remember this whole thread
getting a treatment something like a year ago and I argued then for
institutional memory. I also argued that some of the participants of the
IETF seem to have developed an exaggerated sense of the group's real
importance and ability to control how others perceive it. 

Bottom line is that access to historical information is useful. The IETF
should (and I'm glad to hear, will) make this material available. As
Martha Stewart says, "And this is good".

- peterd


 if *we*, as an organization (or whatever we are) decide that I-Ds should no
 longer be ephemeral documents, then we probably pop right back up in the
 middle of the "should we have two archival document series" discussion again.
 (though frankly i'm not sure the energy is there for at least the "RFC is the
 one" side of the discussion.)
 
 the second issue, as many have pointed out, is that there is no way to stop
 http://www.internetdraftsforever.com from springing up (hey, maybe it's
 already there!).  certainly, no one is (seriously) trying to prevent that from
 happening.
 
 i think of the current (officially ephemeral) I-Ds as being like Usenet
 postings (remember those?).  people *do* cite them in articles occasionally,
 dejanews (or whatever) does hang on to them forever, you can (or you could, in
 the past at least) buy CDs full of them.  but, they don't have the "cachet" of
 an RFC.
 
 i personally would vote for keeping the I-Ds "officially ephemeral", and if
 deja-id pops up to archive them, i'll probably occasionally poke around in
 there myself.
 
 cheers,  Greg
 
   
 

-- 
--------
Peter Deutsch   work email: 
[EMAIL PROTECTED]
Technical Leader
Content Services Business Unit private:
[EMAIL PROTECTED]
Cisco Systems or  : [EMAIL PROTECTED]

 Alcohol and calculus don't mix. Never drink and derive.





Re: draft-ietf-nat-protocol-complications-02.txt

2000-07-20 Thread Peter Deutsch in Mountain View

g'day,

Masataka Ohta wrote:
.  .  .
 If IETF makes it clear that AOL is not an ISP, it will commercially
 motivate AOL to be an ISP.

Not to be unkind, since the IETF has done some good work, but the above
statement is incorrect. If you'd written "If AOL perceives that the
market would punish them if the IETF makes it clear that AOL is not an
ISP, it will commercially motivate AOL to be an ISP" you might be closer
to the mark. 

The bottom line is, the world isn't waiting for us to tell them the
right way to do what they want, and the clever solutions we came up with
for the networking problems of 1970, or 1980, or 1990 don't
mean they must adopt our proposals for solving their problems of 2000.
We're in serious danger of surrendering to the same elitist posturing
for which we used to vilify the mainframe community. Pity, but we'll
have only ourselves to blame if and when the users pass us by...


- peterd (feeling testy this evening)

-- 
--------
Peter Deutsch   work email: 
[EMAIL PROTECTED]
Engineerng Manager
Caching  Content Routing
Content Services Business Unit private:
[EMAIL PROTECTED]
Cisco Systems

  "I want to die quietly and peacefully in my sleep like my granfather,
 not screaming in terror like his passengers..."

 - Emo Phillips





Re: prohibiting RFC publication

2000-04-10 Thread Peter Deutsch in Mountain View

g'day,

Tripp Lilley wrote:

 On Sun, 9 Apr 2000, Peter Deutsch in Mountain View wrote:

  readily accessible. I still see value in having documents come out as "Request
  For Comments" in the traditional sense, but it certainly wouldn't  hurt to find
  ways to better distinguish between the Standards track and other documents.

 Here's a novel idea: we could stop calling them all "RFCs". Call them by
 the designators they get once they're blessed (ie: STD, INF, EXP, etc.),
 and stop ourselves citing them as RFC [0-9]+.

 Change begins at home, as they say...

Yeah, although I'd personally hum for keeping the RFC nomenclature for the Standard
and Experimental class RFCs, as the name is understood to encompass that anyways. The
rest we could lump under something like "OFI" (Offered For Information? The marketing
guys here agree that they won't write code if I don't name products... ;-) Anyways, we
need to draw a clearer line between the standards which have been wrought by the
IETF, and information which has been captured and tamed, so to speak...

- peterd

--
--------
Peter Deutsch work email:  [EMAIL PROTECTED]
Technical Leader
Content Services Business Unit private: [EMAIL PROTECTED]
Cisco Systems  or  : [EMAIL PROTECTED]

 Alcohol and calculus don't mix. Never drink and derive.






Re: prohibiting RFC publication

2000-04-09 Thread Peter Deutsch in Mountain View

g'day,

Dave Crocker wrote:
.  .  .

 It strikes me that it would be much, much more productive to fire up a
 working group focused on this topic, since we have known of the application
 level need for about 12 years, if not longer.

Which raises the interesting question as to what the participants would hope to
be the outcome of such a working group and whether we could possibly move
towards something ressembling a technical consensus, given the current polarity
of the debate. There are certainly more people than Keith who would brand the
relevant practices as evil and immoral. For sure you'd hear strong views
against endorsing transparent proxies, NATs and other things already touched
upon here over the past few months. I could mention a few more, at least as
controversial, which I'd see as coming into scope in the near future but
frankly I'm not personally willing to spend any energy trying to engage in any
form of consensus building on such things right now. The parties are simply too
far apart for me to expect there to be anything but grief at the other end.

I've seen us spend a lot of time engaging in working groups in which some
number of the participants has as their goal the invalidating of the underlying
concept or torpedoing the process itself. Having been there, done that and
collected the T-shirt a couple of times myself, I wouldn't go through that
again just because I have a soft spot, either for the IETF or in my head.

That isn't to say I disagree with you, Dave. There's definitely work to be done
here. It's just that this is one hairy tarball, and although there's going to
be lots done in this area over the next couple of years we've probably reached
a fork in the road where the IETF has to take stock of itself before it can
play a useful role. If it doesn't do that I predict that some of the major
architectural and implementation decisions in this particular subspace will be
taking place outside the IETF. And clearly, some would think that a good thing.




- peterd

--

--
Peter Deutsch   work email:  [EMAIL PROTECTED]
Technical Leader
Content Services Business Unit   private: [EMAIL PROTECTED]
Cisco Systems or  :  [EMAIL PROTECTED]

 Alcohol and calculus don't mix. Never drink and derive.
--





Re: recommendation against publication of draft-cerpa-necp-0

2000-04-08 Thread Peter Deutsch in Mountain View

Hi Patrik,

Patrik Fältström wrote:

 At 17.29 -0700 2000-04-07, Peter Deutsch wrote:
   LD is intended to sit in front of a cluster of
 cache engines containing similar data, performing automatic
 distribution of incoming requests among the multiple caches. It does
 this by intercepting the incoming IP packets intended for a specific
 IP address and multiplexing it among the caches.

 What you are doing is giving a product to the service provider, which
 is the one which owns the services which you load balance between.

 In the case of a transparent proxy which is discussed in this thread,
 the interception is done neither by the service provider (i.e. CNN or
 whoever) nor by the customer, and neither of them are informed about
 what is happening.

 That is a big difference.

I agree, and would welcome a BCP document pointing out this distinction
and explaining why the latter is harmful. Banning publication of
technologies a priori (as I argue elsewhere) won't stop the technology
being developed, won't stop the practice and just leads to the IETF
abdicating its role as a meeting point for technical innovation.

 - peterd






Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-08 Thread Peter Deutsch in Mountain View

g'day,

Keith Moore wrote:

 Peter,

 I think that by now I've made my points and defended them adequately and
 that there is little more to be achieved by continuing a public,
 and largely personal, point-by-point argument.  If you want to continue
 this in private mail I'll consider it.

Okay, but I'd like to make clear that I don't regard this as a "largely
personal...argument". On the contrary, I've drunk beer with you, I like you as
a person and would be happy to drink beer with you again. I am engaging here
*only* because I think the principles I'm defending are so important. It really
is nothing personal.


 The simple fact is that I believe that the idea of interception proxies
 does not have sufficient technical merit to be published by IETF, and
 that IETF's publication of a document that tends to promote the use
 of such devices would actually be harmful to Internet operation and
 its ability to support applications.

Fair enough, but my primary goal was not to justify this particular technique,
but to address the issue of whether we should be preventing the publication of
particular techniques, and under what ground rules. The industry and their
customers have already decided against you on this one. I'm wondering about the
future of an IETF that consistently takes itself out of play in this way. I'm
sure there are other techniques on their way that are going to allow us to find
out...


 p.s. I think the term you're looking for is "nihil obstat".

Yup, that's it. Thanks...

  -
peterd




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-08 Thread Peter Deutsch in Mountain View



Keith Moore wrote:

  The industry and their customers have already decided against you on
  this one.

 Industry people love to make such claims.  They're just marketing BS.
 The Internet isn't in final form yet and I don't expect it to stabilize
 for at least another decade.  There's still lots of time for people to
 correct brain damage.

Well, I don't share the view of a monotonic march towards the "correct" Internet.
Just as the first single-celled organisms giving off massive amounts of waste oxygen created
an environment which led eventually to the furry mammals, the Internet responds and
evolves from instantiation to instantiation. I hear talk about products which people
expect to only have a lifetime of a few years, or even a period of months, until
evolution moves us all on. Some of the things that you find so offensive may not
even be relevant in a couple of years.

But (you knew there'd be a but, didn't you?) there is a substantial market for
products based upon interception or redirection technologies today. I don't offer
this as a technical argument for their adoption. I was merely pointing out that the
market has voted on this technique and judged it useful despite what the IETF might
or might not decree. Short of punishing those poor misguided users, I don't know
what else you can accomplish on this one...


  I'm wondering about the future of an IETF that consistently takes itself
  out of play in this way.

 IETF's job is to promote technical sanity, not to support unsound vendor
 practices.

Well there you go. You think the IETF's Seal of Approval and promotion of technical
sanity can prevent our unsound vendor practices  from perpetrating Marketing BS on
poor users. You're right - the positions are fairly clear at this point. I'll try to
quieten down now...


  - peterd




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-08 Thread Peter Deutsch in Mountain View
-
peterd


----------
Peter Deutsch work email:  [EMAIL PROTECTED]
Technical Leader
Content Services Business Unit   private:
[EMAIL PROTECTED]
Cisco Systems   or  :  [EMAIL PROTECTED]

 Alcohol and calculus don't mix. Never drink and derive.
--





Re: prohibiting RFC publication

2000-04-08 Thread Peter Deutsch
the IETF to be
a clearing house for information in the process. If you want the
appropriate words of RFC 2026 and 1718 deleted, then take the
appropriate steps to initiate the change, but I suggest that meanwhile
you shouldn't be denying the documented evidence for such a role within
the IETF.



- peterd




-- 
------
Peter Deutsch work email:  [EMAIL PROTECTED]
Technical Leader
Content Services Business Unit   private: 
[EMAIL PROTECTED]
Cisco Systems   or  :  [EMAIL PROTECTED]

   A specification that has been superseded by a more recent
   specification or is for any other reason considered to be obsolete is
   assigned to the "Historic" level.  (Purists have suggested that the
   word should be "Historical"; however, at this point the use of
   "Historic" is historical.)
--




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Peter Deutsch
uing that this document should be added to that set.
 or at least, that it needs substantial revision before it is found
 acceptable.

So you are arguing for explicit censorship of ideas based upon your
own moral assessment of the potential misuse of those ideas? Wow.
Now *that* is a dangerous notion indeed. I sincerely hope it is not
a widely held one within the echelons of the IETF...


- peterd



-- 
------
Peter Deutsch work email:  [EMAIL PROTECTED]
Technical Leader
Content Services Business Unit   private: 
[EMAIL PROTECTED]
Cisco Systems   or  :  [EMAIL PROTECTED]

 Alcohol and calculus don't mix. Never drink and derive.
--




Re: recommendation against publication of draft-cerpa-necp-0

2000-04-07 Thread Peter Deutsch

g'day,

"Michael B. Bellopede" wrote:
...
 Regardless of what occurs at higher layers, there is still the problem of
 changing the source address in an IP packet which occurs at the network(IP)
 layer.

The Content Services Business Unit of Cisco (Fair Disclosure time -
that's my employer  and my business unit) sells a product called
"Local Director". LD is intended to sit in front of a cluster of
cache engines containing similar data, performing automatic
distribution of incoming requests among the multiple caches. It does
this by intercepting the incoming IP packets intended for a specific
IP address and multiplexing them among the caches. Are we doing
something illegal or immoral here? No, we're offering hot spare
capability, load balancing, increased performance, and so on. The
net is a better place than it was a few years ago, when a web page
would contain a list of links and an invitation to "please select
the closest server to you".

We also have a product called "Distributed Director", which is
essentially a DNS server appliance which can receive incoming DNS
requests (e.g. for "www.cnn.com") and reroute them to one or more cache
farms for distributed load balancing. If intercepting IP addresses
is evil, then presumably intercepting DNS requests is more evil,
since it's higher up the IP stack? No, it's a legitimate tool for
designing massive Content Service Networks of the scale needed in
the coming years.
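
(Neither product's internals are spelled out here, but the DNS-based
redirection idea itself is simple enough to sketch: answer each name lookup
with whichever server farm you want that client to hit. A toy round-robin
version in Python; the name and addresses are invented examples, not
anything Cisco ships:)

    # Toy DNS-style request routing: hand out a different server-farm
    # address on each lookup of the same name. Names and IPs are invented.
    import itertools

    farms = itertools.cycle(["192.0.2.10", "198.51.100.10", "203.0.113.10"])

    def resolve(name):
        # A real appliance would choose by load or client topology;
        # round-robin is the simplest possible policy.
        return next(farms)

    for _ in range(4):
        print(resolve("www.cnn.com"))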

Can a combination of DD and LD be misused? Sure, but I hope you're
not suggesting that we should be cancelling these products because
somebody might misuse them? There are all kinds of technologies
which can be used or abused. Banning discussion of such technologies
based upon an individual's sense of what is a moral or legal use of
that technology (when the individual doesn't justify this through
any particular credentials in either morality or the law) strikes me
as somewhat naive, to say the least...

- peterd


-- 
------
Peter Deutsch work email:  [EMAIL PROTECTED]
Technical Leader
Content Services Business Unit   private: 
[EMAIL PROTECTED]
Cisco Systems   or  :  [EMAIL PROTECTED]

 Alcohol and calculus don't mix. Never drink and derive.
--




Re: recommendation against publication of draft-cerpa-necp-02.txt

2000-04-07 Thread Peter Deutsch
mmoral behavior reflects poorly
 on IETF as an institution and would impair IETF's ability to do its work.
 It is not useful to direct IETF's energies in these directions.

If we're going to be foul-mouthed about it, then to quote Saturday
Night Live - 
"Jane, you ignorant slut".

Publishing of a technical document is not promoting "illegal or
clearly immoral behaviour", any more than publishing instructions on
driving a car is promoting carjacking. *That's* the conflation of
ideas I charged you with, not simply carrying two ideas in a single
posting.


 The alternative  - to pretend that there are no social implications
 to what we are doing in IETF - strikes me as dangerous and irresponsible.
 
  So because someone can pick up a router and beat someone to death
  with it, we shouldn't build routers?
 
 no, if someone designed a router whose primary purpose were to beat
 someone to death, we shouldn't endorse such a product.

Okay, I'll see your moral indignation and raise you a moral outrage.
Since when is the publishing of technical information for the
education of the IETF community endorsement of anything other than
the free exchange of ideas? Frankly *I'm* morally offended at that
notion as I think it strikes at the very heart of the IETF and what
made it a successful organization worthy of my support. If this
were to become the way this organization actually does work in the
future, I would predict its speedy demise as a useful place for the
free interchange of ideas.
 
.  .  .
  So you are arguing for explicit censorship of ideas based upon your
  own moral assessment of the potential misuse of those ideas? Wow.
  Now *that* is a dangerous notion indeed. I sincerely hope it is not
  a widely held one within the echelons of the IETF...
 
 Your use of the word "censorship" is incorrect.  I'm not arguing that
 IETF should try to prevent anybody from publishing their own ideas
 in any forum willing to support them.  Instead I'm arguing that IETF
 and the RFC Editor should not serve as that forum.

Fortunately, I don't think your view really reflects the spirit of
the majority, but I will say again I find it dangerous and offensive
in the extreme. You are advocating that the IETF censor ideas, for
what you claim are the best of reasons. Frankly, I think you value
the IETF brand too much, and the free exchange of ideas too little.

.  .  .
 And absolutely I am making an argument based on my own assessment of
 both the morality of the practice and the technical issues associated
 with that practice.  Why should it be dangerous or wrong to argue for
 what one believes is right?

Because nobody died and made you king and TWIAVBP. I'm offended at
the notion that a former Area Director of the IETF would advocate
censoring what others can publish in the Internet's premier
technical exchange forum based not on the quality of the technical
information, but on how that information may be misused. Heck, I'm
also offended that you've dropped the "the" in front of the term
"the IETF", as it always makes me think of the old "Royal We" that
the Queen of England allegedly uses and I don't want to be thinking
of the Queen of England every time I read one of your postings. Why
can't I demand the IETF forbid any mail posted from you without the
leading "the"?

Okay, it's Friday and I'm being silly, but the underlying concept
here is most definitely censorship of ideas in a most pernicious
form. It's the censorship of ideas based upon how those ideas may be
misused. That's always the first-step justification used by those who
would protect us from ourselves. Shame on you...


        - peterd



-- 
--
Peter Deutsch work email:  [EMAIL PROTECTED]
Technical Leader
Content Services Business Unit   private: 
[EMAIL PROTECTED]
Cisco Systems   or  :  [EMAIL PROTECTED]

 Alcohol and calculus don't mix. Never drink and derive.
--




Re: Switches on Oz power outlets

2000-03-06 Thread Peter Deutsch



Ross Finlayson wrote:
 
 At 01:59 PM 3/6/00 -0800, Cameron Young wrote:
 Most of the wall power outlets have little rocker switches built into the
 outlet cover that also needs to be turned on.
 
 *Don't* forget to check this if you are charging a cell phone / laptop for
 use the next day.  Hotel staff have a habit of turning all these switches
 off whenever they clean a room.
 
 Also, don't forget that on wall switches (like almost every country in the
 world except the US :-) "down" means "on".

Well, if you pronounce "U.S." as "United States and Canada, except
for parts of my house in Montreal". As I renovated, I made a point
of going through and installing the switches the "right" way up
(drove my Canadian spouse crazy at the time, but she got used to it!
:-) Of course, now that I'm selling the place, the agent seems to think I
should switch them all back...

- peterd



Re: To address or NAT to address?

1999-12-02 Thread Peter Deutsch

g'day,

David R. Conrad wrote:
 
 Charlie,
 
  DNS is supposed to be a way to resolve domain names into IP addresses.
 
 As a hammer is supposed to be a way to pound nails.  However, when it is
 perceived that all you have is a hammer, it is amazing what begins to look
 like nails.

Actually, I think it would be as accurate to say "DNS is a distributed
database service. The first application was name to IP address
translation, but it's now used for a number of other applications." 

As someone pointed out in another message in this thread, if you started
designing a new distributed, scalable database service for the Internet,
you'd probably come up with something that looks a lot like DNS. You're
likely to add some things specific to your application, but the basics
would be there.
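
(The "DNS is really a distributed database" point can be made in a few lines:
the lookup model is just a hierarchical key plus a record type, and
name-to-address is only one record type among many. A toy, in-memory Python
illustration, not real DNS code; all names and values are invented:)

    # Toy model of DNS as a general lookup service: keyed by (name, type).
    # All names and values below are invented examples.
    records = {
        ("www.example.com", "A"):   ["192.0.2.1"],
        ("example.com",     "MX"):  ["10 mail.example.com"],
        ("example.com",     "TXT"): ["v=some-policy-record"],
    }

    def lookup(name, rtype="A"):
        return records.get((name, rtype), [])

    print(lookup("www.example.com"))       # the original application
    print(lookup("example.com", "TXT"))    # ...but any keyed data fits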


  How else would one get an IP(v6) address from a domain name other
  than by using DNS?  Am I missing something here?
 
 Yes.  The DNS has grown a bit from a simple lookup mechanism.

Actually, it's still a relatively simple lookup mechanism (boolean
domain names, anyone? :-) The interesting thing is how many different
applications for this technology there are, with more coming along. Some
of these new applications would benefit from changes to the technology
(such as adding support for various types of searching, for example) but
because of the mission critical nature of the initial existing services
the community is loath to take experimenting too far. At some point you
have to wonder if this is not having a chilling effect on innovation and
whether the technology wouldn't benefit from moving some of this stuff
out of the current service and legacy root.


- peterd

-- 
--
  
  "Suddenly, nothing happened"

-