Re: prohibiting RFC publication
On Sun, 9 Apr 2000, Peter Deutsch in Mountain View wrote:

> readily accessible. I still see value in having documents come out as
> "Request For Comments" in the traditional sense, but it certainly
> wouldn't hurt to find ways to better distinguish between the Standards
> track and other documents.

Here's a novel idea: we could stop calling them all "RFCs". Call them by
the designators they get once they're blessed (ie: STD, INF, EXP, etc.),
and stop ourselves citing them as RFC [0-9]+.

Change begins at home, as they say...

--
Tripp Lilley * [EMAIL PROTECTED] * http://stargate.sg505.net/~tlilley/
--
"I only counted on today's sunlight and snow, on the rain that dampened
my face."
  - L.E. Modesitt, Jr., Adiamante
Re: prohibiting RFC publication
g'day,

Tripp Lilley wrote:

> On Sun, 9 Apr 2000, Peter Deutsch in Mountain View wrote:
>> readily accessible. I still see value in having documents come out as
>> "Request For Comments" in the traditional sense, but it certainly
>> wouldn't hurt to find ways to better distinguish between the Standards
>> track and other documents.
>
> Here's a novel idea: we could stop calling them all "RFCs". Call them
> by the designators they get once they're blessed (ie: STD, INF, EXP,
> etc.), and stop ourselves citing them as RFC [0-9]+.
>
> Change begins at home, as they say...

Yeah, although I'd personally hum for keeping the RFC nomenclature for
the Standard and Experimental class RFCs, as the name is understood to
encompass those anyways. The rest we could lump under something like
"OFI" (Offered For Information? The marketing guys here agree that they
won't write code if I don't name products... ;-)

Anyways, we need to draw a clearer line between the standards which have
been wrought by the IETF, and information which has been captured and
tamed, so to speak...

- peterd

--
Peter Deutsch                        work email: [EMAIL PROTECTED]
Technical Leader
Content Services Business Unit       private: [EMAIL PROTECTED]
Cisco Systems                        or     : [EMAIL PROTECTED]

Alcohol and calculus don't mix. Never drink and derive.
Re: prohibiting RFC publication
At 10:33 AM 4/9/00 -0400, Fred Baker wrote:

> wrestled to the appearance of support as standards. We're all aware of
> cases where something was published as informational, experimental,
> etc., and the next press release announced support of that "standard",
> and of cases where RFCs, like IP on Avian Carriers, started winding up
> on RFPs simply because it was an RFC, and therefore "must" be the
> standard. This is another case of meaning dilution that I worry about.

In absolute terms, these misuses/abuses of RFC reference are quite
bothersome. However they have been a fact of life pretty much forever.
Absent evidence that they have become a more serious problem than usual,
the noise-factor of the misuses does not seem to cause enough community
damage to warrant changing existing practise.

(I didn't read your note, Fred, as promoting a change, but others have
been in favor of it.)

=-=-=-=-=
Dave Crocker  [EMAIL PROTECTED]
Brandenburg Consulting  www.brandenburg.com
Tel: +1.408.246.8253, Fax: +1.408.273.6464
675 Spruce Drive, Sunnyvale, CA 94086 USA
Re: prohibiting RFC publication
On Sun, 09 Apr 2000 23:01:38 PDT, Dave Crocker [EMAIL PROTECTED] said:

> At 10:33 AM 4/9/00 -0400, Fred Baker wrote:
>> cases where RFCs, like IP on Avian Carriers, started winding up on
>> RFPs simply because it was an RFC, and therefore "must" be the
>> standard. This is another case of meaning dilution that I worry about.
>
> In absolute terms, these misuses/abuses of RFC reference are quite
> bothersome.

The important question is "Does RFC2549 support prove to be
self-limiting in the marketplace?"

I'm afraid I know the answer, and don't like it... ;(

Valdis Kletnieks
Operating Systems Analyst
Virginia Tech
Re: recommendation against publication of draft-cerpa-necp-02.txt
> Let's remember that a major goal of these facilities is to get a user
> to a server that is 'close' to the user. Having interception done only
> at distant, localized server farm facilities will not achieve that
> goal.

granted, but... an interception proxy that gets the user to a server
that is 'close' to that user (in the sense of network proximity), but
'distant' from the content provider (in the sense that it has a
significant chance of misrepresenting or damaging the content) is of
dubious value. and a technology that only works correctly on the server
end seems like a matter for the server's network rather than the public
Internet - and therefore not something which should be standardized by
IETF.

I do think there is potential for standardizing content replication and
the location of nearby servers which act on behalf of the content
provider (with their explicit authorization, change-control, etc). But
IP-layer interception has some fairly significant limitations for this
application. For one thing, different kinds of content on the same
server often have different consistency requirements, which become
significant when your replicas are topologically distant from one
another. If you treat an entire server as having a single IP address,
you probably don't get the granularity you need to implement efficient
replication - you may spend more effort keeping your replicas consistent
(and propagating the necessary state from one to another) than you save
by replicating the content in the first place. Obviously you can use
multiple IP addresses, assigning different addresses to different kinds
of content, but this also has limitations. You can also get into
problems when the network routing changes during a session or when the
client itself is mobile.

Bottom line is that IP-layer interception - even when done "right" - has
fairly limited applicability for location of nearby content. Though the
technique is so widely mis-applied that it might still be useful to
define what "right" means.

Keith
Re: prohibiting RFC publication
At 16:09 09-04-00, Peter Deutsch in Mountain View wrote:

> Well put. As Dave has pointed out earlier this weekend, there is a
> burning need for better, permanent access to the Drafts collection. If
> we had that, perhaps much of this discussion might become moot, since
> some of the out-on-a-limb stuff may be circulated in a less "official"
> form, but remain permanently and readily accessible. I still see value
> in having documents come out as "Request For Comments" in the
> traditional sense, but it certainly wouldn't hurt to find ways to
> better distinguish between the Standards track and other documents.

The notion of resurrecting the IEN series was mooted several years ago.
However, the community as a whole did not support that notion with any
significant vigour. So that hasn't happened. My personal view is that
there would be some value to having the IENs alive and well, but there
are issues with such an idea.

Also, some items put out as I-Ds really well and truly ought not be in
any IETF-related archival document series. While the folks in this
discussion might disagree on which drafts fall in that category,
everyone believes that at least some documents ought not be published in
an IETF-related archival document series.

That all noted, I think this conversation isn't really productive any
longer (if it ever was). The I-D in question has been referred to an
existing IETF WG for review, which is a very normal kind of process that
we're all familiar with. I've never seen a draft document that failed to
benefit from broad review, so I think this has to be a good thing.

All IMHO.

Ran
[EMAIL PROTECTED]
Re: recommendation against publication of draft-cerpa-necp-02.txt
On Mon, 10 Apr 2000 07:00:56 EDT, Keith Moore said:

> and a technology that only works correctly on the server end seems
> like a matter for the server's network rather than the public Internet
> - and therefore not something which should be standardized by IETF.

Much the same logic can be applied to NAT (the way it's usually
implemented). Both have issues, both have proponents, and both will be
done even more brokenly if there's no standard for them.

Personally, I'd rather have the IETF issue verbiage saying "Do it this
way", than have 50 million content providers all implement it in subtly
different and broken ways.

"You are trapped in a twisty little maze of proxies, all different..."
;)

--
Valdis Kletnieks
Operating Systems Analyst
Virginia Tech
Re: prohibiting RFC publication
RJ Atkinson wrote:

> While the folks in this discussion might disagree on which drafts fall
> in that category, everyone believes that at least some documents ought
> not be published in an IETF-related archival document series.

Mmm... I think the patent thread pointed out that, if we archived all
the I-Ds, it'd be a good repository for patent examiners to search.
Since some people patent bad ideas, archiving bad ideas would be useful
there.

--
John Stracke | Chief Scientist, eCal Corp. | http://www.ecal.com
[EMAIL PROTECTED] | My opinions are my own.
"All your problems can be solved by not caring!"
Re: recommendation against publication of draft-cerpa-necp-02.txt
>>> and a technology that only works correctly on the server end seems
>>> like a matter for the server's network rather than the public
>>> Internet - and therefore not something which should be standardized
>>> by IETF.
>>
>> Much the same logic can be applied to NAT (the way it's usually
>> implemented).

true.

>> Both have issues, both have proponents, and both will be done even
>> more brokenly if there's no standard for them.

yes, this is the dilemma. IETF has a hard time saying "if you're going
to do this bad thing, please do it in this way". for example, it's
unlikely that the vendors of products which do the bad thing would
consent to such a statement. and if you take out the language that says
"this is bad" then the vendors will cite the RFC as if it were a
standard.

and given that NATs are already in blatant violation of the standards,
it's not clear why NAT vendors would adhere to standards for NATs. nor
is it clear how reasonable standards for NATs could say anything other
than "modification of IP addresses violates the IP standard; you
therefore MUST NOT do this".

>> Personally, I'd rather have the IETF issue verbiage saying "Do it
>> this way", than have 50 million content providers all implement it in
>> subtly different and broken ways.

not sure what content providers have to do with this - if content
providers harm their own content, it's not clear why IETF should care -
there are ample incentives for them to fix their own problems.

Keith
Re: prohibiting RFC publication
> The I-D in question has been referred to an existing IETF WG for
> review,

that assertion was made, but not confirmed by the ADs. is it really
true? it seems odd because it really isn't in scope for wrec.

Keith
Re: recommendation against publication of draft-cerpa-necp-02.txt
In your previous mail you wrote:

>> But IP-layer interception has some fairly significant limitations
>> for this application. ...
>
> There's a technical problem with IP intercepting that I've not seen
> mentioned, including in the draft. Any intercepting based on TCP or
> UDP port numbers or that makes any assumptions about TCP or UDP port
> numbers will have problems, because of IPv4 fragmentation. It seems
> plausible that intercepting done by/for the server(s) would want to
> redirect all traffic for a given IP address, and so not be affected by
> port numbers. (Thus, it may make sense for the draft to not mention
> the issue.)

=> the first fragment has 8 bytes or more of payload, then the port
numbers. And since the other fragments share the same ID, it is
possible to apply the same action to all the fragments if they follow
the same path at the interception point. This can be hairy if fragments
are not in the usual order, for instance if someone sends the last one
first (this is not as stupid as it seems, because the last fragment
provides the whole length of the packet).

> However, "transparent" HTTP proxy and email filtering and rewriting
> schemes such as AOL's that need to intercept only traffic to a
> particular port cannot do the right thing if the client has a private
> FDDI or 802.5 network (e.g. behind a NAT box) or has an ordinary 802.3
> network but follows the widespread, bogus advice to use a small PPP
> MTU.

=> but fragmentation is not the best way to fight against "transparent"
proxies (:-)...

> Yes, I realize IPv6 doesn't have fragmentation

=> IPv6 has fragmentation, but only from end to end (no fragmentation
en route). Packet IDs are used by IPv6 only with fragmentation (they
are in fragmentation headers) too...

> but most if not all of the distant-from-server IP interception schemes
> sound unlikely to work with IPv6 for other reasons.

=> I'd like this to be true (another reason to switch to IPv6 :-), but
the only thing which is broken by interception is authentication (IPsec
is mandatory to implement, not (yet) to use, with IPv6). Encryption
isn't really "transparent" proxy friendly either (:-).

Regards

[EMAIL PROTECTED]
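[A sketch of the fragment-matching problem discussed above, for the
archive reader: only the IPv4 fragment with offset 0 carries the TCP/UDP
header, so a port-based interceptor must remember the IP ID of matched
first fragments to classify the rest, and fails if a later fragment
arrives before the first. The packet representation and port choice here
are illustrative, not from any cited draft.]

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    src: str
    dst: str
    proto: int          # IP protocol number, e.g. 6 for TCP
    ip_id: int          # shared by all fragments of one datagram
    frag_offset: int    # in 8-byte units, as in the IPv4 header
    payload: bytes      # starts with the L4 header only if offset == 0

HTTP_PORT = 80
flows_to_intercept = set()   # (src, dst, proto, ip_id) of matched datagrams

def classify(frag: Fragment) -> bool:
    """Return True if this fragment should be redirected to the proxy."""
    key = (frag.src, frag.dst, frag.proto, frag.ip_id)
    if frag.frag_offset == 0 and len(frag.payload) >= 4:
        # First fragment: destination port is bytes 2..4 of the TCP header.
        dport = int.from_bytes(frag.payload[2:4], "big")
        if dport == HTTP_PORT:
            flows_to_intercept.add(key)
            return True
        return False
    # Non-first fragment: no ports visible; we can only match on the
    # shared IP ID, and only if the first fragment was seen earlier.
    # Out-of-order delivery (e.g. last fragment first) defeats this.
    return key in flows_to_intercept
```

This is why, as noted above, interception that redirects *all* traffic
for an address is immune to the problem, while port-selective schemes
are not.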
Re: recommendation against publication of draft-cerpa-necp-02.txt
>> Bottom line is that IP-layer interception - even when done "right" -
>> has fairly limited applicability for location of nearby content.
>
> Though the technique is so widely mis-applied that it might still be
> useful to define what "right" means.

And there you have the argument for publishing this document. I much
prefer a model where we allow for free exchange of ideas, even bad ones.
I tend to believe that if someone took the time to write up a document
that there's probably some reason for it. So let's call this an
experimental RFC and get on with life. Isn't that what the experimental
category denotes?

Derrell
Re: recommendation against publication of draft-cerpa-necp-02.txt
> Let's remember that a major goal of these facilities is to get a user
> to a server that is 'close' to the user. Having interception done only
> at distant, localized server farm facilities will not achieve that
> goal.
> ...
>   client -- Internet - ISP - Intercept - Internet - Server1
>                                        - Internet - Server2
>                                        - Internet - Server3
>
> In the second case (which is what I am opposing) the server provider
> does not have anything to do with the interception. He runs only
> Server1, while Server2 and Server3 are caches to which the ISP chooses
> to redirect the packets which are addressed to Server1.

That's an assumption that's not always valid. There are cases in
existence now where a service provider *pays* the ISP to run a local
mirror, leading to

  client -- Internet - ISP - Intercept - Internet - Server1 - subnet - Server2

It would be entirely possible for the service provider, having paid the
ISP not to get traffic from the ISP's clients, to block that traffic -
or limit its bandwidth. Consider the progression:

  client -- Internet - ISP - Router - 56k - Server1 - T3 - Server1
  client -- Internet - ISP - Intercept - 56k - Server1 - T3 - Server2
  client -- Internet - ISP - Intercept - Internet - Server1 - subnet - Server2

What is the fundamental difference between choosing the best path and
choosing the best source? Arguments that the latter breaks the IP model
are simply arguments that the IP model is broken for today's Internet
and will be even more broken for tomorrow's. The IETF can fix the model
... or leave that to someone else.

--
Dick St.Peters, [EMAIL PROTECTED]
Gatekeeper, NetHeaven, Saratoga Springs, NY
Saratoga/Albany/Amsterdam/BoltonLanding/Cobleskill/Greenwich/
GlensFalls/LakePlacid/NorthCreek/Plattsburgh/...
Oldest Internet service based in the Adirondack-Albany region
Re: prohibiting RFC publication
At 10:39 AM 10/04/00 -0400, Keith Moore wrote:

>> The I-D in question has been referred to an existing IETF WG for
>> review,
>
> that assertion was made, but not confirmed by the ADs. is it really
> true? it seems odd because it really isn't in scope for wrec.

Let me jog your memory:

At 06:29 PM 30/12/99 +0100, Patrik Fältström wrote:

> A request has arrived to publish the named document as informational
> RFC. The IESG wants all documents in this area to explicitly pass the
> WREC working group, ...

I then sought clarification, and re-sent the document to WREC (it had
been sent before) to determine if there was a conflict - there wasn't.
The authors re-submitted it.

John
---
Network Appliance            Direct / Voicemail: +31 23 567 9615
Kruisweg 799                 Fax: +31 23 567 9699
NL-2132 NG Hoofddorp         Main Office: +31 23 567 9600
---
Re: recommendation against publication of draft-cerpa-necp-02.txt
>> Bottom line is that IP-layer interception - even when done "right" -
>> has fairly limited applicability for location of nearby content.
>
> Though the technique is so widely mis-applied that it might still be
> useful to define what "right" means.

That sounds overly optimistic. user experience/expectation context is
everything. TCP end2end-ness?

if you access a web page from our server, chances are it's fetched by
one of several httpds from one of a LOT of NFS or samba servers,
depending on local conditions. if you send audio on the net, it's quite
possible it goes through several a2d and d2a conversions (e.g. thru a
PSTN/SIP or 323 gateway) - in fact, if you speak on an apparently
end2end PSTN transatlantic phone call, chances are your voice is
digitized and re-digitized several times by transcoders/compressors.

it's the 21st century: if you don't use end2end crypto, then you gotta
expect people to optimize their resources to give you the best service
money can buy for the least they have to spend.

hey, when you buy a book written by the author, it was usually typeset,
proofread, and re-edited by several other people. even this email may
not be from me...

cheers

jon

"every decoding is an encoding"
  - maurice zapp of Euphoric State University, in Small World, by david
    lodge
Re: recommendation against publication of draft-cerpa-necp-02.txt
From: Jon Crowcroft [EMAIL PROTECTED]

> ... it's the 21st century: if you don't use end2end crypto, then you
> gotta expect people to optimize their resources to give you the best
> service money can buy for the least they have to spend. ...

That's an interesting idea. People might eventually finally start using
end2end crypto not for privacy or authentication where they really care
about either, but for performance and correctness, to defend against
the ISPs who find it cheaper to give you the front page of last week's
newspaper instead of today's.

Vernon Schryver    [EMAIL PROTECTED]
Re: recommendation against publication of draft-cerpa-necp-02.txt
From: Vernon Schryver [EMAIL PROTECTED]
Subject: Re: recommendation against publication of draft-cerpa-necp-02.txt
Date: Mon, 10 Apr 2000 10:41:43 -0600 (MDT)

>> From: Jon Crowcroft [EMAIL PROTECTED]
>> ... it's the 21st century: if you don't use end2end crypto, then you
>> gotta expect people to optimize their resources to give you the best
>> service money can buy for the least they have to spend. ...
>
> That's an interesting idea. People might eventually finally start
> using end2end crypto not for privacy or authentication where they
> really care about either, but for performance and correctness, to
> defend against the ISPs who find it cheaper to give you the front page
> of last week's newspaper instead of today's.

Maybe this is a reason for these ISPs to filter such traffic out...

Cheers,
Magnus
Re: recommendation against publication of draft-cerpa-necp-02.txt
At 11.50 -0400 2000-04-10, Dick St.Peters wrote:

> What is the fundamental difference between choosing the best path and
> choosing the best source? Arguments that the latter breaks the IP
> model are simply arguments that the IP model is broken for today's
> Internet and will be even more broken for tomorrow's. The IETF can fix
> the model ... or leave that to someone else.

The difference between what you describe and a random transparent proxy
is that in your case it is the service provider which is building a
service with whatever technology he chooses. It is not a random ISP in
the middle which intercepts and changes IP packets without either the
client or the service provider knowing anything about it. If the
service provider knows about it, he can choose software (or whatever)
which can stand the interception.

Yes, it is the same technology which is used, but not in the same ways
in both cases. I.e., for me it is a question of _who_ is managing the
interception.

paf
Re: recommendation against publication of draft-cerpa-necp-02.txt
Yo Randy!

On Mon, 10 Apr 2000, Randy Bush wrote:

> all these oh so brilliant folk on the anti-caching crusade should be
> sentenced to live in a significantly less privileged country for a
> year, where dialup ppp costs per megabyte of international traffic and
> an engineer's salary is $100-200 per month. we are spoiled brats.

Been there, done that. The LEGALLY required cache did NOT help. I
bypassed it whenever possible. Caching is NOT the answer. Reports from
the recent Adelaide meeting confirm this.

RGDS
GARY
---
Gary E. Miller  Rellim  20340 Empire Ave, Suite E-3, Bend, OR 97701
[EMAIL PROTECTED]  Tel: +1(541)382-8588  Fax: +1(541)382-8676
Re: recommendation against publication of draft-cerpa-necp-02.txt
One other item: neither this draft nor many of the NAT I-Ds address the
particular issue of sourcing IP addresses not assigned to or owned by
the host/gateway, e.g., as it affects the standards of RFCs 1122, 1123,
and 1812. If a device creates (rewrites) IP source addresses with
addresses not its own, it would be useful to see a section specifically
addressing the resulting implications.

Joe
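[For the archive reader, a minimal sketch of the practice Joe is
flagging: an on-path device answers a request "as" the origin server,
sourcing an address that was never assigned to it. The packet model and
addresses below are invented for illustration; real interceptors work at
the raw-IP level.]

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes

# The interceptor's own, legitimately assigned address (hypothetical).
PROXY_ADDR = "203.0.113.7"

def spoofed_reply(request: Packet, body: bytes) -> Packet:
    # The reply is sourced from the origin server's address
    # (request.dst), not from PROXY_ADDR, so the client cannot tell
    # the request never reached the server. This is exactly the
    # sourcing of an unowned address that RFCs 1122/1812 assume
    # hosts and routers do not do.
    return Packet(src=request.dst, dst=request.src, payload=body)

req = Packet(src="198.51.100.9", dst="192.0.2.80",
             payload=b"GET / HTTP/1.0")
rep = spoofed_reply(req, b"HTTP/1.0 200 OK")
assert rep.src != PROXY_ADDR   # the sourced address is not the proxy's own
```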
Re: recommendation against publication of draft-cerpa-necp-02.txt
>>> Bottom line is that IP-layer interception - even when done "right" -
>>> has fairly limited applicability for location of nearby content.
>>
>> Though the technique is so widely mis-applied that it might still be
>> useful to define what "right" means.
>
> And there you have the argument for publishing this document.

no, this document doesn't try to do that - the protocol it proposes is
an attempt to work around one of the many problems associated with
interception proxies, but it's hardly a blueprint for how to do them
"right" (nor does it purport to be).

Keith
Re: recommendation against publication of draft-cerpa-necp-02.txt
From: Randy Bush [EMAIL PROTECTED]

>> ... That's an interesting idea. People might eventually finally start
>> using end2end crypto not for privacy or authentication where they
>> really care about either, but for performance and correctness, to
>> defend against the ISPs who find it cheaper to give you the front
>> page of last week's newspaper instead of today's.
>
> and, since we're into exaggeration and hyperbole, i imagine you won't
> complain about paying seven times as much for connectivity.

Most of the exaggeration and hyperbole comes from the caching sales
people. They'd have you believe that caches never miss, or that cache
filling is free. The news services I watch have front pages with
significant (e.g. editorial and not just DJI numbers) changes every hour
or so.

> all these oh so brilliant folk on the anti-caching crusade should be
> sentenced to live in a significantly less privileged country for a
> year, where dialup ppp costs per megabyte of international traffic and
> an engineer's salary is $100-200 per month. we are spoiled brats.

Caching won't increase those low salaries. Many people think we should
pay for the bandwidth we use, although not all favor accounting for
each bit. That one now talks about paying per MByte instead of Kbit of
traffic is a radical change due in part to using instead of conserving.
Undersea fiber isn't paid for by caching.

The primary waste (and perhaps use) of bandwidth is advertising that
almost no one sees, unless you think single-digit response rates amount
to more than almost no one. Check the source of the next dozen web pages
you fetch. Even if you use junk filters, chances are that more of the
bits are advertising than content. Caching that drivel sounds good, but
its providers are already doing things that merely start with caching to
get it to you faster and cheaper.

Caching and proxying with the cooperation of the content provider can
help the costs of long pipes. No one has said anything bad about that
kind of caching, when done competently.

"Transparent" caching and proxying without the permission of the content
provider will soon be used for political censorship, if not already, and
likely against your $100/month engineers. How much "transparent proxy"
hardware and software has already been sold to authoritarian
governments?

Yes, it's quixotic to worry about that last. Everyone who feels
comfortable with the IETF's fine words about wiretapping should stop to
think about reality, and do their part in the real battle by putting
end2end encryption into everything they code, specify, or install.

Vernon Schryver    [EMAIL PROTECTED]
Re: recommendation against publication of draft-cerpa-necp-0
Peter Deutsch wrote:

> g'day,
>
> "Michael B. Bellopede" wrote:
> ...
>> Regardless of what occurs at higher layers, there is still the
>> problem of changing the source address in an IP packet which occurs
>> at the network (IP) layer.
>
> The Content Services Business Unit of Cisco (Fair Disclosure time -
> that's my employer and my business unit) sells a product called "Local
> Director". LD is intended to sit in front of a cluster of cache
> engines containing similar data, performing automatic distribution of
> incoming requests among the multiple caches. It does this by
> intercepting the incoming IP packets intended for a specific IP
> address and multiplexing it among the caches.
>
> Are we doing something illegal or immoral here? No, we're offering hot
> spare capability, load balancing, increased performance, and so on.
> The net is a better place than it was a few years ago, when a web page
> would contain a list of links and an invitation to "please select the
> closest server to you".
>
> We also have a product called "Distributed Director", which is
> essentially a DNS server appliance which can receive incoming DNS
> requests (e.g. for "www.cnn.com") and reroute it to one or more cache
> farms for distributed load balancing. If intercepting IP addresses is
> evil, then presumably intercepting DNS requests is more evil, since
> it's higher up the IP stack? No, it's a legitimate tool for designing
> massive Content Service Networks of the scale needed in the coming
> years.

These are both conformant with RFC 1122/1123 (together STD-3) because
they redistribute IP addresses within a stub network. Same with DHCP.
The questionable practices (wrt STD-3) arise when sourcing IP addresses
not delegated to your authority (i.e., running these services on transit
to someone else's server), rather than running them as a head-end to
your own stub.

Joe
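[A rough sketch of the DNS-based request routing Peter describes, for
the archive reader. The names, addresses, and proximity map below are
all made up for illustration; real appliances of this kind measured
proximity dynamically. The point is the contrast with interception: the
operator of the *name* chooses the answer, so no packet addressed to
someone else's IP address is ever rewritten in transit.]

```python
# Mirror farms run by (or for) the content provider, hypothetical addresses.
mirrors = {
    "us-east": "192.0.2.10",
    "eu-west": "192.0.2.20",
}

# Hypothetical static map from querying resolver's prefix to nearest region.
nearest_region = {
    "198.51.100.": "us-east",
    "203.0.113.": "eu-west",
}

def resolve(name: str, resolver_ip: str) -> str:
    """Answer an A query for the service name with a 'close' mirror."""
    for prefix, region in nearest_region.items():
        if resolver_ip.startswith(prefix):
            return mirrors[region]
    return mirrors["us-east"]   # default when the resolver is unknown
```

Because the answer is given by the name's own authoritative server, this
stays within the stub-network delegation that Joe notes STD-3 permits.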
Re: recommendation against publication of draft-cerpa-necp-02.txt
>> it's the 21st century: if you don't use end2end crypto, then you
>> gotta expect people to optimize their resources to give you the best
>> service money can buy for the least they have to spend. ...
>
> That's an interesting idea. People might eventually finally start
> using end2end crypto not for privacy or authentication where they
> really care about either, but for performance and correctness, to
> defend against the ISPs who find it cheaper to give you the front page
> of last week's newspaper instead of today's.

or ISPs might start penalizing encrypted packets.

I just don't buy the argument that we can solve these problems by
adding more complexity. That's like saying that a country can get more
security by building more planes, tanks, bombs, etc. It might work, but
then again, it might fuel an arms race.

Keith
Re: recommendation against publication of draft-cerpa-necp-02.txt
> all these oh so brilliant folk on the anti-caching crusade should be
> sentenced to live in a significantly less privileged country for a
> year, where dialup ppp costs per megabyte of international traffic and
> an engineer's salary is $100-200 per month.

and as long as we're talking about just deserts... all of those ISPs
that put an interception proxy between their dialup customers and the
rest of the Internet should be required to put another interception
proxy on the other side of their international links, between those
clients and the ISP's local server customers. that way, they will do the
same degree of harm to their own business customers that they are doing
to other ISPs' business customers.

Keith
Re: recommendation against publication of draft-cerpa-necp-02.txt
I tried to send this earlier, but got a response from [EMAIL PROTECTED]
complaining that every line is a bogus majordomo command. My logs say I
sent to [EMAIL PROTECTED] and not [EMAIL PROTECTED] or anything
similar. I did use the word "s-u-b-s-c-r-i-b-e-r-s" 3 times. This time
I've replaced all with "[users]". I suspect a serious, or at least
irritating, bug in a defense against stupid "u-n-s-u-b-s-c-r-i-b-e"
requests. If I'm right, then someone needs to stop and think a little.

> From: Keith Moore [EMAIL PROTECTED]
>
>> That's an interesting idea. People might eventually finally start
>> using end2end crypto not for privacy or authentication where they
>
> or ISPs might start penalizing encrypted packets.

Why not? ISPs that figure that last week's or even this morning's Wall
Street Journal front page is good enough might well charge more for
traffic that goes outside their networks to get the current WSJ, or the
WSJ with the Doubleclick ads that Dow Jones prefers.

I wonder how long before an ISP with a transparent proxy uses it to
modify the stream of ads, replacing some with more profitable bits. It's
not as if "commercial insertion" is a new idea. The local TV affiliate
or cable operator's computers replace a lot of dead air and other
people's ads with their own. As I think about it, I realize I've got to
be behind the times. I bet many of the so-called free ISPs and perhaps
others must already be optimizing the flow of information to their
[users]. There's only so much screen real estate and conscious attention
behind those eyeballs. They'd not want to be blatant about it, unlike
"framing", to avoid moot excitement among lawyers and [users]. If you
must pay for your [users]' web surfing by posting ads, where better but
on top of, or instead of, other people's ads?

> I just don't buy the argument that we can solve these problems by
> adding more complexity. That's like saying that a country can get more
> security by building more planes, tanks, bombs, etc. It might work,
> but then again, it might fuel an arms race.

You've written today about the complications of simplistic solutions to
problems that are not as simple as they sound. You're right, of course.
The reasons why no one uses real encryption now do not include it being
free or as easy as not using it. For example, simply using HTTPS if you
want to read the WSJ without local improvements might not be good
enough, depending on how much you can trust that the public key you get
from the nearby PKI servers really belongs to Dow Jones and not the
local ministry of information. What?--you say the public key
infrastructure is invulnerable to bureaucrats in the middle with very
large purses and bigger sticks?--well, if you say so...

The problem with transparent proxies is that they are men in the middle,
and so are very good at wire tapping, censoring, and "improving"
information. And even harder to trust. Stealth proxies are vastly more
powerful than remote controlled taps on everyone's routers and PBXs.

Vernon Schryver    [EMAIL PROTECTED]
RE: breaking the IP model (or not)
> it's completely natural that people will try such approaches - they
> are trying to address real problems and they want quick solutions to
> those problems.

In particular, they will try such approaches if they are not presented
with better alternatives.

> but if the quick fix solutions get entrenched then they cause their
> own set of problems which are worse than the original problems. this
> is not progress.

Progress is created by the development of solutions that do not cause
their own set of problems. This does not happen merely by condemning
other less desirable solutions. It happens by presenting a better
alternative.

> - indicators that there is an important problem that needs to be
>   solved in a technically sound fashion

If this mail thread is any indication, I'd say the indicator is shining
brightly.