Re: unwise filtering policy from cox.net
Suresh Ramasubramanian wrote:
> Most mailservers do allow you to exempt specific addresses from filtering.

On the LHS of the @ of a remote address? I think that was Sean's point.

Eliot
Re: unwise filtering policy from cox.net
Hey Paul,

> -- Sean Donelan <[EMAIL PROTECTED]> wrote:
>
>> On Tue, 20 Nov 2007, [EMAIL PROTECTED] wrote:
>>> <[EMAIL PROTECTED]>
>>> (reason: 552 5.2.0 F77u1Y00B2ccxfT000 Message Refused. A URL in
>>> the content of your message was found on...uribl.com. For resolution do
>>> not contact Cox Communications, contact the block list administrators.)
>>
>> An unfortunate limitation of the SMTP protocol is it initially only
>> looks at the right-hand side of an address when connecting to a
>> server to send e-mail, and not the left-hand side.
>
> [...]
>
> Sure, it's an "unfortunate limitation", but I hardly think it's
> an issue to hand-wave about and say "oh, well".
>
> Suggestions?

Given that what Sean wrote goes to the core of how mail is routed, you'd pretty much need to overhaul how MX records work to get around this one, or perhaps go back and try to resurrect something like a DNS MB record, but that presumes the problem can't easily be solved in other ways. Sean demonstrated one such way (move the high-volume stuff to its own domain).

Eliot
Re: Hey, SiteFinder is back, again...
David Conrad wrote:
> On Nov 5, 2007, at 2:13 PM, Bora Akyol wrote:
>> Do common endpoints (Windows Vista/XP, MacOS X 10.4/5) support DNSSEC
>> Validation? If not, then do people have a choice?
>
> Yes and no.

Of course, nobody supports the "evil bit" today, so some change would be necessary one way or the other to deal with this. One wonders whether Verizon's behavior is enough to cause Microsoft to turn on a caching resolver. One issue Dave didn't raise is that firewalls often block DNS requests from OTHER than caching resolvers. Cough.

So, how much is that NXDOMAIN worth to you?

Eliot
Re: Hey, SiteFinder is back, again...
Sean,

>> Yes, it sounds like the evil bit. Why would anyone bother to set it?
>
> Two reasons
>
> 1) By standardizing the process, it removes the excuse for using
> various hacks and duct tape.
>
> 2) Because the villains in Bond movies don't view themselves as evil.
> Google is happy to pre-check the box to install their Toolbar, OpenDNS
> is proud they redirect phishing sites with DNS lookups, Earthlink says it
> improves the customer experience, and so on.

Forgive my skepticism, but what I would envision happening is resolver stacks adding a switch that would be on by default, and would translate the response back to NXDOMAIN. At that point we would be right back where we started, only after a lengthy debate, an RFC, a bunch of code, numerous bugs, and a bunch of "I told you so"s.

Or put another way: what is a client resolver supposed to do in the face of this bit?

Eliot
Re: Hey, SiteFinder is back, again...
Sean Donelan wrote:
> I just wish the IETF would acknowledge this and go ahead and define a
> DNS bit for artificial DNS answers for all these "address correction"
> and "domain parking" and "domain tasting" people to use for their keen
> "Web 2.0" ideas.

Yes, it sounds like the evil bit. Why would anyone bother to set it?

Eliot
Re: Access to the IPv4 net for IPv6-only systems, was: Re: WG Action: Conclusion of IP Version 6 (ipv6)
Iljitsch van Beijnum wrote:
>> That isn't actually true. I could move to IPv6 and deploy a NAT-PT
>> box to give my customers access to the v4 Internet regardless of
>> whatever the rest of the community thinks.
>
> And then you'll see your active FTP sessions, SIP calls, RTSP
> sessions, etc. fail.

Somehow we made it work for v4. How did that happen?
Re: shim6 @ NANOG (forwarded note from John Payne)
Stephen,

> I'm not a fan of "build it and they will come" engineering.

I suppose a reasonable question one could ask is this: who's the customer? Is the customer the ISP? I tend to think it's actually the end enterprise. But that's just me.

Eliot
Re: shim6 @ NANOG (forwarded note from John Payne)
Stephen Sprunk wrote:
> Shim6 is an answer to "what kind of multihoming can we offer to sites
> without PI space?"; it is yet to be seen if anyone cares about the
> answer to that question.

This argument is circular. The only real way to test demand is to offer a service and see if customers bite.

Eliot
Re: ISMS working group and charter problems
Daniel,

All solutions will use a different SSH port as part of the standard, just so that firewall administrators have the ability to block.

Eliot

Daniel Senie wrote:
> At 02:00 PM 9/6/2005, Dave Crocker wrote:
>
>> Eliot,
>>
>>> I need your help to correct for an impending mistake by the ISMS
>>> working group in the IETF.
>>
>> Your note is clear and logical, and seems quite compelling.
>>
>> Is there any chance of getting a proponent of the working group's
>> decision to post a defense?
>>
>> (By the way, I am awestruck at the potential impact of changing SNMP
>> from UDP-based to TCP-based, given the extensive debates that took
>> place about this when SNMP was originally developed. Has THIS
>> decision been subject to adequate external review, preferably
>> including a pass by the IAB?)
>
> I agree the argument is well laid out, and would be interested in
> hearing the thinking of ISMS in response.
>
> I'm more than a bit concerned, however, when folks start talking about
> solutions that will permit things to pass through firewalls without
> configuration. Those in charge of firewalls are often purposely setting
> policy. If there is a perceived need for a policy that prevents SNMP
> traffic, then it should remain possible for the administrator of that
> network element to make that call. I must say I have some concern with
> overlaying SNMP on SSH, since that precludes the firewall knowing
> whether the traffic is general SSH keyboard traffic or network management.
>
> Let's hear more about the thinking involved.
Re: yahoo abuse contact please
Josh Duffek wrote:
> http://abuse.yahoo.com/ ?
>
> josh

Ok, I have a response. Thanks all.
yahoo abuse contact please
Anyone got one? Amusingly, the search engine these guys run can't seem to provide me this small bit of information. Thanks in advance, Eliot
NETCONF checkpoint
[Replies to either the netconf list if you are a member, or to me, and I will forward them *directly* to the netconf list unless instructed NOT to do so.]

Dear NANOG folk,

The NETCONF working group of the IETF is currently developing a collection of protocol specifications for the configuration of network elements. This work originated from a roadshow that many of us went on to learn what operators of different types want in such a protocol. Now we would like to checkpoint with you on some of the contents of those specifications. What follows is a highlight of two issues I believe are important to NANOG. There are, however, many unresolved issues with NETCONF, some more important than others, that have been posted by the chair today to the netconf mailing list. They can be viewed by going to http://ops.ietf.org/lists/netconf/netconf.2003. The working group solicits your opinion on as many of these issues as you care to comment on.

The protocol itself is split into two parts: an abstract set of functions, and a binding to specific protocols, including SSH, BEEP, and SOAP over HTTP(S). Each protocol has its pluses and minuses.

As envisioned, the base protocol supports an option for notifications. The idea is that a manager would be notified of configuration-related events, such as a card insertion or removal, and act appropriately to configure the element. The envisioned format of notifications is either reliable syslog from RFC 3195 or something similar. Because notifications are asynchronous, one writes code that implements a dispatch mechanism that discriminates on the type of event. Notifications would be an option that not all managers would have to implement.

Our first question is whether or not you are interested in receiving such notifications. Related: do you use such a mechanism today? The working group is attempting to determine whether notifications should remain part of the base specification. Here are the choices facing the group:

Option A. Leave them in as currently specified, and require all protocol mappings to support them.

Option B. Allow them to be asynchronous, but don't use RFC 3195, and require all mappings to support them.

Option C. Remove them entirely from the specification and let vendors implement RFC 3195 or other notification mechanisms as they see fit (for instance, existing syslog).

Do you have an opinion on which of these options you would like?

Related, the NETCONF base protocol currently makes use of the notion of channels. Channels are a basic concept in the BEEP protocol, and they exist in SSH as well. However, use of multiple channels in SSH is not supported in common SSH applications. They are completely absent in HTTP, and so the notion of a session would have to be introduced in that mapping. Channels are a means of maintaining multiple communication streams within a single session. In NETCONF, there are three types of streams: management, operations, and notifications. Channels allow these different message types to be multiplexed within the same session without the need to interleave the messages in a way that preserves their integrity (e.g. valid XML instance documents). NETCONF utilizes channels to separate asynchronous messages (notifications or RPC progress reports) from normal operations, as well as to separate high-priority operations (RPC abort) from normal operations.

The working group has three choices:

Option A. Keep channels in the base document and require each mapping to support them.

Option B. Make channels optional.

Option C. Remove channels from the base protocol and allow their use in the protocol bindings.

Which option do you prefer?

Again, these are important protocol issues that will affect your ability to build tools. There are others. If you would like to read the entire set of documents, you will find them by going to the NETCONF working group charter page: http://www.ietf.org/html.charters/netconf-charter.html.
If there is sufficient interest we will seek another BOF at Miami. Thanks for your help, Eliot
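For a sense of what the notification option implies on the manager side, here is a minimal Python sketch of the dispatch-on-event-type mechanism described above. The event shape (a dict with a "type" field) and the handler names are hypothetical illustrations, not anything the drafts specify:

```python
def dispatch(event, handlers, default=None):
    """Route a configuration-related event to the handler registered
    for its type; unrecognized types fall through to `default`."""
    handler = handlers.get(event["type"], default)
    if handler is None:
        raise KeyError("no handler for event type %r" % event["type"])
    return handler(event)

# Hypothetical handlers a manager might register.
handlers = {
    "card-insert": lambda e: "configure slot %s" % e["slot"],
    "card-remove": lambda e: "deprovision slot %s" % e["slot"],
}
```

The point is simply that a manager must be written event-driven rather than strictly request/response, which is part of why the group is asking whether operators would actually use notifications.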
Re: Cisco, Anti-virus Vendors Team on Network Security
According to the marketing folk, "it's a phased approach". This translates to two things: 1. There is a plan for an open API. 2. *NIX is not where the problem lies, right now. Eliot
IETF needs a new Ops Area Director
As some of you may already know, Randy Bush has resigned as Ops Area Director for the IETF. The community was well served by Randy, particularly because he has a good head on his shoulders and strong ties with the operational community. If you or someone you know would like to have broad impact on protocol development, the IETF needs a well qualified successor. The job is not an easy one, however, and it pays $0. Nominations are being taken by the IETF NOMCOM. The announcement can be found here: http://www.ietf.org/mail-archive/ietf-announce/Current/msg27262.html Regards, Eliot
Re: IPv6 NAT
Patrick W. Gilmore wrote:
>> NAT is harmful to many protocols. Stateful inspection is not.
>
> Possibly. But Joe User will never use those "many protocols". Plus the
> overwhelming majority of protocols are not harmed by NAT.

Of course NAT causes all sorts of damage to all sorts of protocols, as the debate over VPN software demonstrated, never mind voice applications and peer-to-peer networking. It also has substantial implications for mobility. This has all been well documented, as have workarounds. Having yet another argument about this on nanog is a waste of bits (to which I freely admit I'm contributing). Let me suggest we not bother with the rest of the argument, but just have people search the archives.

Eliot
Re: IAB concerns against permanent deployment of edge-based filtering
Valdis hits the nail on the head. And this boils down to something that I believe is attributable to someone commenting on the old FSP protocol, perhaps Erik Fair: the Internet routes around damage.

Damage can take the form of a broken link, or it can take the form of an access-list. In the early '90s, NASA attempted to protect its links from "unauthorized use" (which in this particular case was porn). That caused a whole protocol to be developed (proving the old adage). Well, nowadays you don't even need to build a whole protocol -- you can just use HTTP. And that was the point of Keith's & Ned's RFC on HTTP as a substrate. Excessive restrictions in firewalls bring about this use, and that makes the HTTP implementations fairly complex, and it will subvert the intentions of network administrators.

So as a temporary measure during an active attack, access-lists make sense. Over the long haul, however, unless you're going to block downstream TCP packets with SYN only and ALL OTHER TRAFFIC, IP can run on just about anything.

Eliot

[EMAIL PROTECTED] wrote:
> On Sat, 18 Oct 2003 11:14:42 PDT, [EMAIL PROTECTED] said:
>>> There is a real danger that long-term continued blocking will lead to
>>> "everything on one port"
>>
>> fair amount of handwaving there.
>
> Question: Why was RFC3093 published? (Think(*) for a bit here...)
>
> About a month later, there was a *major* flame-fest on the IETF list due
> to this message:
>
> http://www.ietf.org/mail-archive/ietf/Current/msg11918.html
>
> Yes, the basic reason for this proposal was because many firewalls will
> pass HTTP but not BEEP. What major P2P applications have included a "run
> over port 80" option to let themselves through firewalls?
>
> It's not just handwaving.
>
> (*) Remember - satire isn't funny if it isn't about something recognizable...
Re: News coverage, Verisign etc.
Howard C. Berkowitz wrote:
> I have gotten a reasoned response from the technology editor of the
> Washington Post, and we are discussing things. While I wouldn't have
> done it that way, he had a rational explanation of why the story was
> written the way it was, and definitely indicating there will be
> continuing coverage of the issue. He believes there's always room for
> improving coverage.

Care to share?

Eliot
Re: NTP, possible solutions, and best implementation
Okay, two valid cases to be concerned about:

The most valid case is when we all go and buy GPS receivers from the same vendor, who turns out to have a bug or a vulnerability of some form. The other valid case is if the Defense Department brought down the satellite system for some odd reason. And they seem not to have a shortage of odd reasons.

Some sort of backup, such as PPS or WWV*, is nice, but so long as there are a few of these in the network somewhere, life should go on. Many enterprise networks run with zero stratum 1s.

Eliot
Re: NTP, possible solutions, and best implementation
[EMAIL PROTECTED] wrote:
> Beware the single point of failure. If all your clocks come from GPS,
> then GPS is the SPOF.

Can you describe what would be involved to cause this sort of single point of failure to fail?

Eliot
Re: williams spamhaus blacklist
Andy Walden wrote:
> Godwin's Law should probably be extended to September 11 references.

Walden's Corollary? ;-)

Eliot
Re: Verisign Responds
Jim Segrave wrote:
> And the usual US-centric view... Which congress person does Demon
> Netherlands, T-Dialin, Wanadoo France, Tiscali, etc. go to?

I recognize it sounds U.S.-centric, but quite frankly, since the U.S. Department of Commerce claims ownership here, I don't have any grander, more politically correct answer for you.

Eliot
Re: Verisign Responds
Randy Bush wrote:
> it would just make wildcards illegal in top level domains, not
> subdomains. there are tlds with top level wildcards that are needed
> and in legitimate use.
>
> verisign has not done anything strictly against spec. this is a social
> and business issue.

And this in itself indicates a possible failure in our model. When someone can do something that causes so much outrage, and we the community have no recourse, something is wrong. Maybe we're in the realm of politics, but our implementations reflect our values. Do you feel the same today about GPG/PGP v. X.509 as you did before Verisign decided to become an unauthorized interloper? Might we have a standards problem with SSL, because people cannot simply NOT trust Verisign certs? After all, how many certificates can you get out of SSL for a server or a client?

> all this noise and bluster is depressing. it indicates that we are in
> a very quickly maturing industry because a lot of
> probably-soon-to-be-ex engineers have too much time on their hands.

I take a different view. If people who are upset with Verisign's change DON'T say anything, then there's no reason for Verisign to change. I suspect that the better forum may be one's Congress person...

Eliot
[resend] OT: operators script writers' experience wanted
[For some reason, the first message ended up in the bit bucket]

Dear all,

Over the last few years, a bunch of us from the vendor community have sought your opinion about doing programmatic configuration of routers, switches, and the like. Over the last few months, the NETCONF working group was formed in the IETF. This working group started with a draft that uses BEEP as a transport. We heard from you that you want an SSH interface, and for that matter a serial interface, to the protocol. Our goal is to meet operator needs while at the same time providing sufficient formalism that scripts won't break so easily. In the latest iteration of the draft we've split the NETCONF protocol from the transport mapping and defined several transport protocol mappings, including SSH. I'd like to raise with you some concerns the working group has had with SSH, and gather your opinions about how to proceed.

As implementors, we're very concerned about prompts and asynchronous messages. To address this, we plan to implement the transport mapping through the subsystem facility, so that you don't get MOTDs, and you don't get prompts or other stuff that makes scripts go crazy. It's all that other stuff that you script writers end up having to special-case in expect(1).

Several of the working group members have extensive SSH experience, however. They have expressed concerns regarding the applications used, and in particular OpenSSH. Specifically, the concern is about whether people would end up having to use expect(1) with an SSH application due to messages the local program generates. We are in particular not talking about messages generated by the remote end (e.g., the router) but by the SSH program itself.

Question 1: does this behavior present current script writers problems? How have you gotten around it? Is it not an issue?

The other issue we have is that we wonder whether what is really wanted is a way to prototype on the TTY, while still being able to use a more formal interface once things are debugged. If the idea is to be able to cut and paste "netconf" stuff into the TTY, this doesn't mandate a formal protocol definition. Vendors can just "do it". On the other hand, question 2: if that leaves the one formal protocol as NETCONF over BEEP, with an informal way to get to NETCONF over SSH, does that present operators problems? For instance, are you relying on SSH public/private user keys as your authentication mechanism? BEEP uses TLS and at least server-side certificates (TLS is nearly identical to SSL). For TTY access, is it sufficient that the protocol be usable on the TTY for cut-and-paste purposes? This would be the equivalent of /usr/sbin/sendmail -bs or typing "xml" at the command prompt.

If you could respond to me, I'll be happy to summarize.

Thanks for your comments,

Eliot
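On the script-writing point: part of what the subsystem approach buys you is that a client can treat the connection as a clean byte stream and frame messages itself, with no prompts to pattern-match in expect(1). A minimal Python sketch of that framing, assuming an end-of-message delimiter of ]]>]]> as in the current SSH mapping discussion (treat the delimiter and the function as illustrative, not normative):

```python
DELIMITER = b"]]>]]>"  # assumed end-of-message marker between XML documents

def extract_messages(buffer):
    """Split a raw byte stream into complete NETCONF messages.

    Returns (messages, remainder): every complete message found in
    `buffer`, plus any trailing bytes still awaiting a delimiter.
    """
    messages = []
    while DELIMITER in buffer:
        msg, _, buffer = buffer.partition(DELIMITER)
        messages.append(msg.strip())
    return messages, buffer
```

A script would append each read from the subsystem channel to its buffer and call this after every read; no terminal emulation or prompt matching is involved.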
Re: Internet Monitoring Center
I say to that... http://www.ofcourseimright.com/~lear/fishbowl.jpg
Re: Remote email access
It's a rare day when I differ with Dave over mail standards, so something's weird.

Dave Crocker wrote:
> Some current choices:
>
> Email standards provide for posting of email to the usual port 25 or to
> port 773 for the newer "submit" service. (Submit is a clone of SMTP that
> operates on a different port and is permitted to evolve independently of
> SMTP, in order to tailor posting by originators, differently from
> server-to-server email relaying.) There is also a de facto standard for
> doing SMTP over SSL on port 465, although this collides with the IANA
> assignment of that port to another service.

The submission port, according to IANA, is 587. See http://www.iana.org/assignments/port-numbers. As for SMTP over SSL on port 465, I'm not a fan; I also think experience has shown that it is POSSIBLE to protect port 25 appropriately. It's just a matter of doing it...

> Standardized SMTP authentication uses the SMTP Auth command or the SASL
> service within SMTP. It can also use the de facto "POP hack". All 3 of
> these mechanisms are inline -- as part of the posting protocol -- so
> that they work over whatever port is being used for posting.
>
> Standardized privacy for SMTP uses SMTP over SSL or it uses SMTP with
> SASL. SASL can be used on any SMTP or Submit port. SSL can only be used
> on port 25 if the SMTP service is not available to other SMTP servers
> for relaying (or, really, for last-hop SMTP delivery).

Although Dave is correct about SSL, RFC 3207 discusses the use of TLS for purposes of encryption AND authentication. I use this for my own sendmail. The biggest problem is ensuring that appropriate certificates are installed. Most of the common MUAs I tested have a way to do it, but it's messy (to say the least).

Eliot
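For what it's worth, posting on the submission port with STARTTLS is straightforward to script. Here is a minimal Python sketch using the standard library's smtplib; the host and credentials are placeholders, and real use would also want certificate verification configured:

```python
import smtplib

SUBMISSION_PORT = 587  # per the IANA port-numbers registry

def submit(host, sender, recipient, message, user, password):
    """Post a message on the submission port, upgrading the session
    with STARTTLS (RFC 3207) before authenticating."""
    with smtplib.SMTP(host, SUBMISSION_PORT) as conn:
        conn.starttls()          # encrypt before credentials cross the wire
        conn.login(user, password)
        conn.sendmail(sender, [recipient], message)
```

The design point matches the thread: authentication and privacy ride inline on whatever port the posting happens over, rather than requiring a separate SSL-only port.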
Re: What could have been done differently?
Sean,

Ultimately, all mass-distributed software is vulnerable to software bugs. Much as we all like to bash Microsoft, the same problem can and has occurred through buffer overruns. One thing that companies can do to mitigate a failure is to detect it faster, and stop the source. Since you don't know what the failure will look like, the best you can do is determine what is "nominal" through profiling, and use IDSes to report to NOCs for considered action. There are two reasons companies don't want to do this:

1. It's hard (and expensive). Profiling nominal behavior means installing IDSes everywhere in one's environment at a time when you think things are actually working, and assuming that *other* behavior is to be reported. Worse, network behavior is often cyclical, and you need to know how that cycle will impact what is nominal. Indeed, you can have daily, weekly, monthly, quarterly, and annual cycles. Add to this ongoing software deployment and you have something of a moving target.

2. It doesn't solve all attacks. Only attacks that break the profile will be captured: those that use new or unusual ports, existing "bad" signatures, or excessive bandwidth.

On the other hand, in *some* environments, an IDS and an active NOC may improve predictability by reducing the time needed to diagnose the problem. Who knows? Perhaps some people did benefit through these methods. I'm very curious about netmatrix's view of the whole matter, as compared to comparable events. NANOG presentation, Peter?

Eliot
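To make the "profile nominal, then alert on deviation" idea concrete, here is a toy Python sketch of the simplest possible version. A real deployment would have to model the cyclical behavior described above; the three-standard-deviation threshold here is an arbitrary illustration, not a recommendation:

```python
from statistics import mean, stdev

def is_anomalous(baseline, sample, k=3.0):
    """Flag a traffic measurement that falls more than k standard
    deviations from a baseline profiled while the network was
    believed to be nominal."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return abs(sample - mu) > k * sigma
```

Note how this embodies both objections: the baseline must be collected when things are "actually working", and anything that stays inside the profile sails through undetected.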
Re: Risk of Internet collapse grows
Yah, the abstract indicates what most of us already know: good coverage and redundancy options in urban areas, less so for rural areas. Why should this shock anyone? Imminent death of the 'net is *not* predicted ;-)

Eliot
Re: Breaking Stuff by Fixing NAT
Crist J. Clark wrote:
> But there are still management reservations; the only reservation we do
> not have a good answer for is the (arbitrary) claim that turning off NAT
> may break stuff for customers who depend on it. Now we have customers
> that do some pretty messed up stuff, and everybody knows about various
> commercial apps that do really, really messed up stuff, but none of us
> can think of anything that turning NAT off will break. But perhaps all
> of our minds are just too cluttered with all of the weird stuff that
> turning off NAT will allow to _work._

I have to admit a certain amount of amusement when I read this. In general you should be okay. The things that could break are likely those that have IP addresses hardcoded. None of the following checks is any different from what you would do to renumber a network. So, check the access lists on your routers, check any UNIX configuration files, as well as any SSL certificates that were somehow gotten with 10/8 addresses. Also, if you do H.323, check your gateway configurations. Users that make use of personal firewalls may have some minor complications along these same lines, particularly if servers are changing addresses.

The one change you should be mindful of is this: if the company *was* relying in some way on security through obscurity, you may need to add a few additional protections, particularly if you want to prevent peer-to-peer access, such as Gnutella. Make sure that you have a real firewall in place, as you should have before ;-)

Regards,

Eliot
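For the hardcoded-address checks, a quick scan of configuration files catches most offenders. A rough Python sketch (it matches any dotted-quad and keeps the private ones; it makes no attempt to handle addresses split across lines or embedded in certificates):

```python
import ipaddress
import re

ADDR_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def find_private_addrs(text):
    """Return private-range addresses (10/8, 172.16/12, 192.168/16)
    hard-coded in a configuration file's text."""
    found = []
    for candidate in ADDR_RE.findall(text):
        try:
            addr = ipaddress.ip_address(candidate)
        except ValueError:
            continue  # e.g. 999.1.1.1 matches the regex but isn't an address
        if addr.is_private:
            found.append(candidate)
    return found
```

Run it over router configs, UNIX configuration files, and the like before flipping the switch; anything it flags is a renumbering candidate.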
Trying to understand network operator management requirements
Hi,

I've put a stake in the ground regarding network management. Below is a URL that discusses the problem. I'm wondering if you would like to send me comments (off list) on what I've gotten right and what I've gotten wrong. This draft complements Bill Woodcock's draft, inasmuch as I'm really summarizing where he is specifying, and I'm also looking in other areas. I welcome your comments on any point you believe to be correct, incorrect, or just poorly written ;-)

ftp://ftp.ietf.org/internet-drafts/draft-lear-config-issues-00.txt

Thanks in advance,

Eliot
Re: How do you stop outgoing spam?
Tony Hain wrote:
> Public executions would be much more effective than preventing
> legitimate customers from getting their job done.

A proposed activity for Portland? Network-engineer-assisted homicide? ;-)
Re: How do you stop outgoing spam?
Rafi Sadowsky wrote:
> Maybe I'm missing something obvious, but how do you get rate-limiting
> per TCP *flow* with Cisco IOS?

There is something called flow-based RED (FRED), but it consumes a whole lot of memory because you have to keep track of lots more state. I don't know about that code. At the least, what you can do is use the rate-limit command and rate limit *all* outbound TCP/80 traffic (or, for that matter, all access-list-captured traffic). Now, doing so will make any but the most trivial outbound TCP/80 absolutely painful, and will cause tail drop. See Cathy Wittbrodt's work in this space, which was presented at NANOG some time ago.

Note, I'm not saying you should *do* this. It may be going a bit too far for anti-spam.

Eliot
Re: How do you stop outgoing spam?
Paul Vixie wrote:
> per-destination host AND port egress rate shaping. if someone tries to send
> more than 1Kbit/sec to all port 80's, or more than 1Kbit/sec to any single
> IP address, then you can safely RED their overage. this violates the whole
> peer-to-peer model but there's no help for that in the short term. if some
> internet cafe has a CuCme camera setup then you can find a way to let that
> traffic off-net without rate shaping. this will be the exception.

Please be aware that this could have unintended consequences, and should be used in very constrained ways. In particular, there are any number of applications, including VPN applications, that use port 80. I would recommend that only specified destinations get such treatment, if you apply it at all.

Eliot
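The per-destination shaping Paul describes can be modeled as a token bucket keyed by (destination, port). This Python sketch is an illustrative model of the policy, not router code; the rate and burst units are made up for the example:

```python
class PerDestShaper:
    """One token bucket per (destination, port) key: traffic toward a
    key faster than `rate` bytes/sec, beyond a `burst` allowance,
    becomes a drop (or RED) candidate."""

    def __init__(self, rate, burst):
        self.rate = rate      # refill rate, bytes per second
        self.burst = burst    # bucket depth, bytes
        self.tokens = {}      # key -> remaining tokens
        self.last = {}        # key -> time of previous packet

    def allow(self, key, size, now):
        elapsed = now - self.last.get(key, now)
        self.last[key] = now
        tokens = min(self.burst,
                     self.tokens.get(key, self.burst) + elapsed * self.rate)
        if tokens >= size:
            self.tokens[key] = tokens - size
            return True
        self.tokens[key] = tokens
        return False
```

The model also shows where the trouble lies: anything long-lived toward a single destination -- a VPN tunnel on port 80, say -- sits in one bucket and gets squeezed just like a spam run would.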
Re: Traffic Threshold monitoring?
Rob Mitzel wrote:
> So my question is... what's out there that will allow us to check
> thresholds on traffic, and notify us if needed?

RMON alarms and events, for one. These are available on pretty much all recent versions of IOS. You can set a rising or falling threshold on any MIB variable you like, and a period of time between polls. This will generate a trap to a network management station, and you can choose what to do with the alarms. If you want to tie this stuff into scripts, you can use the net-snmp trap daemon to call various trap handlers that could do something like keep track of the duration of the spike, or send an alert.

Another thing that is out there in later releases is the EVENT MIB. This is probably overkill for what you want, and the only way to configure it is through SNMP.

For all of this stuff there is documentation on CCO. For RMON alarms and events, see:

http://www.cisco.com/warp/public/477/RMON/18.shtml

For the EVENT MIB, see:

http://www.cisco.com/univercd/cc/td/doc/product/software/ios121/121newft/121t/121t3/dtevent.htm

The net-snmp package is available at SourceForge:

http://sourceforge.net/projects/net-snmp

Eliot
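As a concrete example of the net-snmp hook: snmptrapd can hand each received trap to a script named by a traphandle directive, feeding it the originating hostname, the source address, and then one OID/value pair per line on stdin. A minimal Python handler might parse that input like this (the exact varbind formatting depends on your snmptrapd configuration, so treat this as a sketch):

```python
import sys

def parse_trap(lines):
    """Parse the text snmptrapd pipes to a traphandle script:
    hostname, then source address, then 'OID value' pairs."""
    host = lines[0].strip()
    source = lines[1].strip()
    varbinds = {}
    for line in lines[2:]:
        oid, _, value = line.strip().partition(" ")
        if oid:
            varbinds[oid] = value
    return host, source, varbinds

if __name__ == "__main__":
    host, source, varbinds = parse_trap(sys.stdin.readlines())
    # This is where you'd log the alarm, time the spike's duration,
    # or page someone.
    print(host, source, len(varbinds))
```

Pointed at by a line like `traphandle default /usr/local/bin/trap-handler.py` in snmptrapd.conf, this is enough to turn an RMON rising-threshold trap into whatever notification you want.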
an itty bitty survey...
Hi all, [This may sound like a perennial question.] I'm curious as to how you configure your routers (whatever they may be). In particular, what tools do you use? Home grown? Rancid? Vendor provided? I'll summarize. Thanks in advance, Eliot
Re: IP renumbering timeframe
Randy is right. We don't know both sides. That having been said...

Ralph Doncaster wrote:
> What it tells me is I should have wasted enough space to consume 8 /24s
> long ago, so I could get a /20 directly from ARIN.

Right. What ISPs need to realize is that whatever benefit is gained from provider-based addressing can be negated by people not having faith that they can transition from one set of addresses to another. Being excessively strict benefits no one. And so each side needs to be reasonable. Otherwise we'll have end customers going to ARIN -- or eBay. In other words, this might be another instance of a frog in the pot.

Eliot
Re: Large ISPs doing NAT?
Deepak Jain wrote:
> MY question is -- How do you know if a justification for _public_ space
> handling a large NAT'd pool is the proper size and not an over/under
> allocation based on the customer in question?

Why is the answer to this question any different than it has been since BCP 12? The answer is that we don't know, but we guard against the problem with methods such as slow-start allocations.

Eliot
Re: Large ISPs doing NAT?
I don't know if this is an annual argument yet, but the frog is in the pot, and the flame is on. Guess who's playing the part of the frog? Answer: ISPs who do this sort of thing.

Value-added security is a nice thing. Crippling Internet connections will turn the Internet into the phone company, where only the ISP gets to say which services are good and which ones are bad. While an ISP might find it appealing to be a baby Bell, remember from whence we all come: the notion that the middle should not inhibit the endpoints from doing what they want.

You find this to be a support headache? Offer a deal on Norton Internet Security or some such. Offer to do rules merges. Even offer a provisioning interface to some access-lists. Just make sure that when that next really fun game is delivered on a PlayStation that speaka de IP, your customers can play it, and that you haven't built a business model around them not being able to play it.

Eliot

mike harrison wrote:
>>> On Monday, 2002-04-29 at 08:43 MST, Beckmeyer <[EMAIL PROTECTED]> wrote:
>>>> Is anybody here doing NAT for their customers?
>
> Tony Rall:
>> If you're NATing your customers you're no longer an ISP. You're a
>> sort-of-tcp-service-provider (maybe a little udp too). NAT (PAT even more
>
> Depends on scale and application. We have lots of customers
> that we NAT, one way or another. And a lot more that we don't.
> Some customers WANT to 'just see out' and they like all the 'weird stuff
> turned off'. Sometimes it's a box at the customer's end, sometimes
> it's NAT'd IPs on the dial-up/ISDN/FracT1/T1/Wireless connection itself.
>
> Saying we are not an ISP because we do some NAT is a little harsh.
> Giving the customer options and making things work (when done right,
> and explained properly -- we have no sales droids) is good business
> and I think good for the 'net. It gives the clueless (and sometimes
> clueful) just a little more isolation.
>
> What is wrong is NAT'ing when you should not.
Re: Tauzin-Dingell (was ICANN)
Bill,

I don't think people objected so much to the note about the specific issue relating to the bill as they did to your campaigning against a member of Congress. It would be good if you could stick to the issue and not the person.

Eliot