Re: Abuse response [Was: RE: Yahoo Mail Update]

2008-04-16 Thread Jack Bates


Dave Pooser wrote:

Handling the abuse desk well (or poorly) builds (or damages) the brand.


...among people who are educated about such things. Unfortunately, people
with clue are orders of magnitude short of a majority, and the rest of the
world (i.e., potential customers) wouldn't know an abuse desk from a
self-abuse desk.


I think that depends on the nature of the abuse desk and how it interfaces with
other networks and the customer base. Of course, I get to be the NOC guy and the
abuse guy here. It's nice to have fewer than a million customers. However, I find
that how NOC issues and abuse issues are handled is very similar. It is, of
course, generally easier to reach another NOC than it is the senior abuse staff
who actually have clue. Both departments need a certain amount of front-line
protection to keep them from being swamped with issues that can be handled
by others. Nevertheless, when they can interface with customers and with the
other departments that spend more time with customers, it does improve the
company's service level.


If there is a routing, firewalling, or email delivery issue with a much larger
network, the effectiveness of the NOC/abuse department will determine how well the
customers handle the interruption. If the company has built trust with the
customer and related to them in a personal way, then the customer will in turn
tend to be more understanding of the issues involved, or in some cases at least
point their anger at the right company.


-Jack

Learning to mitigate the damage caused by Murphy's law.


Re: Abuse response [Was: RE: Yahoo Mail Update]

2008-04-15 Thread Jack Bates


William Herrin wrote:


Without conceding the garbage collection issue, let me ask you
directly: how do you propose to motivate qualified folks to keep
working the abuse desk?



Ask AOL?

-Jack


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-23 Thread Jack Bates


Tim Franklin wrote:

For the UK (and NL), on the tech side we're seeing some success with EFM
on copper, in this particular case on an Actelis platform.  It's a new
unit in the CO, from 1-8 pairs from the CO to the customer premises, up to
a total bandwidth across all pairs of 40Mb/s in each direction. 
(Admittedly 40Mb/s is 8 pairs at something like 1km from the exchange, but
I'm seeing some useful 6-10M symmetric services on 4 or so pairs).

snip

Errr, 8 pairs per customer? Even 4 is a step backwards. If we're going to do
construction at that level, we might as well drop in fiber. We're still enjoying
the fact that ADSL runs on half a pair while the customer's phone service is out.


Jack


Re: BitTorrent swarms have a deadly bite on broadband nets

2007-10-23 Thread Jack Bates


Tim Franklin wrote:

Doing (or getting the incumbent to do, where the last mile is a monopoly)
a little bit more of what you already do seems to be an awful lot easier
than doing something completely different.  Certainly in the (admittedly
all European) countries where I've seen it done, getting 4 or 8 copper
pairs from a customer site to the exchange is an order of magnitude or
more difference in both cost and lead time to doing anything at all with
fibre.

Sorry, I am the incumbent. ;) I was just thinking of the copper necessary to do
such a task on a massive scale. It's definitely not in the ground or on a pole
at this point in time. One reason DSL was so desirable for many small ILECs was
the recovery of copper from the second phone lines that dialup had created.



Every house I've lived in has had 4 pairs already going from the house to
the first street cabinet, with just the first pair connected for voice,
and getting a second line has always just needed a patch onto a spare pair
from the cab to the exchange.  Obviously if *everyone* wants 4 or 8 pairs
to their house, there's going to need to be a lot more copper between the
exchange and the street cabs.  It's not clear that *everyone* wants
upstream though, and 2M to 5M on a single pair (depending on distance /
quality) is quite possible if you wanted to think in terms of ubiquitous
symmetric service.


In newer homes, most of the ILECs I work with use 6-pair drops. This is
relatively new, though. There were times when it was just 2 pair. It's amazing
how things change back and forth over 50+ years.




I take it that getting spare / new copper in the US is more painful?


Depends on locale and quantities. I know of several rural ILECs which are
currently undergoing a 3-year process to recover copper. This involves replacing
bad peds and boots, cutting off the bad copper in them, and either pulling up
some slack or splicing in some fresh copper (splicing 600-pair in a bucket truck
is rough on the legs ;). We're shocked DSL has worked at all in some of this plant.


The cost to recover and repair what we have is far less than throwing anything
else into the ground, but no one anticipated needing as much copper as it would
take to bump everyone from DSL to a 4-pair system. I won't even discuss RBOC
mentality when it comes to rural plant (including the entire state of Oklahoma).


Jack Bates



Re: Can P2P applications learn to play fair on networks?

2007-10-22 Thread Jack Bates


Bora Akyol wrote:

1) Legal Liability due to the content being swapped. This is not a technical
matter IMHO.


Instead of sending an ICMP host unreachable, they are closing the connection via
spoofing. I think it's kinder than just dropping the packets altogether.
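
A minimal sketch of what that spoofed teardown looks like (scapy; the
addresses, ports, and sequence number are made-up placeholders, and a real
box would use the sequence numbers it observes on the live session):

    # Sketch of connection teardown by spoofing, as described above.
    # Assumes scapy; all values are illustrative.
    from scapy.all import IP, TCP, send

    def spoof_rst(client, cport, server, sport, seq):
        # Forge a RST that appears to come from the server, so the
        # client's stack aborts the connection immediately.
        pkt = IP(src=server, dst=client) / TCP(sport=sport, dport=cport,
                                               flags="R", seq=seq)
        send(pkt, verbose=False)

    spoof_rst("198.51.100.10", 51234, "203.0.113.5", 6881, seq=123456789)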



2) The breakdown of network engineering assumptions that are made when
network operators are designing networks.

I think network operators that are using boxes like the Sandvine box are
doing this due to (2). This is because P2P traffic hits them where it hurts,
aka the pocketbook. I am sure there are some altruistic network operators
out there, but I would be sincerely surprised if anyone else was concerned
about fairness



As has been pointed out a few times, there are issues with CMTS systems,
including the maximum upstream bandwidth allotted versus the maximum downstream
bandwidth. I agree that there is an engineering problem, but it is not on the
part of network operators. DSL fits in its own little world, but until VDSL2
was designed, there were hard caps on downstream versus upstream speed. This is
how many last-mile systems were designed, even in shared-bandwidth mediums:
more downstream capacity will be needed than upstream. As traffic patterns have
changed, the equipment and the standards it is built upon have become antiquated.


As a tactical response, many companies do not support the operation of servers
on the last mile, which has been defined to include P2P seeding. This is their
right, and it allows them to protect the precious upstream bandwidth until
technology can adapt to high-capacity upstream as well as downstream for the
last mile.


Currently I show an average 2.5:1-4:1 ratio at each of my POPs. Luckily, I run a
DSL network; I waste a lot of upstream bandwidth on my backbone. Most
downstream/upstream ratios I see in last-mile standards and the equipment derived
from those standards aren't even close to 4:1. I'd expect such ratios if I
filtered out the P2P traffic on my network. If I ran a shared-bandwidth last-mile
system, I'd definitely be filtering unless my overall customer base was
small enough not to care about maximums on the CMTS.


Fixed downstream/upstream ratios must die in all standards and implementations.
It seems a few newer CMTSes are moving in that direction (though I note one I
quickly found mentions its flexible ratio as a beyond-DOCSIS-3.0 feature, which
implies the standard is still fixed-ratio), but I suspect it will be years before
networks can adapt.



Jack Bates


Re: dns authority changes and lame servers

2007-10-18 Thread Jack Bates


Justin Scott wrote:

We also have home-grown scripts that figure out whether a domain is
delegated to us or not and flag the ones that aren't.  In the case of
the free service we flag them for two weeks and if they still aren't
delegated to us after that period we disable them on the DNS servers but
leave the domain in their account.  In the case of the paid service we
make a note of the status in the database but do not make any changes to
the account (they're paying us, after all, to have it there).  We don't
do recursive lookups so it's not an issue (even though it's technically
an RFC violation, if I remember correctly).


We use home-grown scripts to follow the NS trail and verify that we are listed
in some form or fashion. If we aren't, we handle the problem based on the
criteria. If the domain is delegated elsewhere, we immediately remove it and
notify. If the domain isn't listed in the TLD, we notify but hold the domain
for, I think, 30 days before removing it, unless the status changes.
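
A minimal sketch of that kind of delegation check (not our actual scripts;
assumes the dnspython library, and the nameserver names are placeholders):

    # Check whether a domain's published NS delegation includes our servers.
    import dns.resolver

    OUR_NS = {"ns1.example.net.", "ns2.example.net."}

    def delegation_status(domain):
        try:
            answer = dns.resolver.resolve(domain, "NS")
        except dns.resolver.NXDOMAIN:
            return "not in TLD"        # notify, hold ~30 days, then remove
        except dns.resolver.NoAnswer:
            return "no NS records"
        listed = {rr.target.to_text().lower() for rr in answer}
        if listed & OUR_NS:
            return "delegated to us"   # nothing to do
        return "listed elsewhere"      # remove immediately and notify

    print(delegation_status("example.com"))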


Jack


Re: FBI tells the public to call their ISP for help

2007-06-15 Thread Jack Bates


D'Arcy J.M. Cain wrote:


You're kidding, right?  Have you ever called an ISP to report a
technical problem that has nothing to do with your computer or even
your connection to them, say a reverse DNS issue?  If you tell them
that you run Unix they just ask you to run IE anyway.  If you don't run
Windows they won't help you.  That's a pretty clear message.



You're kidding, right? We much prefer the security holes of Firefox and Safari.
Of course, don't ask my helpdesk for help with Linux dialup issues; there are too
many variations of how to do it. They'll give you the right info, though.


Jack


Re: ISP CALEA compliance

2007-05-11 Thread Jack Bates


Donald Stahl wrote:
Working hard to defend privacy does not automatically equal protecting
people who exploit children - and I'm getting sick and tired of people
screaming "Think of the children!" It's a stupid, fear-mongering tactic -
and hopefully one day people will think of it in the same way as crying
wolf.




Confirming a warrant == working hard to defend privacy.

Making sure the check clears != working hard to defend privacy.
(Yep, you are protected from the government until they pay me.)

Deleting logs to inhibit valid warrants != working hard to defend privacy.

CALEA itself is only for taps, and does not cover record storage. We'll
be hit with that next, and it probably won't be nice legislation, based
on what other countries have passed. Failing to maintain records, and
even purposefully deleting them to inhibit law enforcement, will leave
the government no choice but to let a bunch of non-technical people
design how we should store records.


The new rules for CPNI come into effect later this year, designed to
keep telcos a little sharper about maintaining customer privacy.


As for CALEA and data taps, who are you fooling? Do you tell customers 
they have an expectation of privacy on the Internet? Does anyone here 
actually believe that? If so, why are there rantings and ravings about 
the weakness in encryption protocols? Why encrypt data at all over the 
Internet? Why sign code? If there's an expectation of privacy, then 
there should be an expectation of security. If my data can't be viewed, 
it won't be modified. Perhaps you believe that criminals have the right 
to invade privacy, but the government doesn't have that right even when 
they do have just cause.



Great- so a bunch of people who want the laws bent for them go on a
power trip because you expect them to OBEY THE LAW, and you end up with
no recourse against them. Yeah- this is the America I want to live in.
You're absolutely right- it's a crying shame we aren't all buddies with
the feds- after all- they only want what's best for us! I'm looking
forward to the day when the government tells me what to think- thinking
is hard, after all.


I have no problem with expecting a LEA to follow the law. I do have an
issue with making life as difficult as possible for them to do their job
when they are within the law. I'm not surprised that when they are
dealing with companies that delete all the evidence they might need, or
push as much red tape as possible, the LEA turns around and scrutinizes
the company to find where it might be in breach of the law.




If you don't have anything to hide, then why should you care, right?


Privacy is always a large concern. However, privacy should be addressed
through proper channels, not by trying to circumvent the laws that have
been passed.


On the other hand- these sorts of laws may just be enough to push 
everyone to use encryption- and then what will LE do?




I agree that it will most likely push criminals to use encryption. On
the other hand, lots of criminals are stupid, so perhaps some good will
come of it. If it pushes everyone to use encryption, we are better
for it. See above: what expectation of privacy did we have to begin
with? Encryption good.



Jack


Re: ISP CALEA compliance

2007-05-10 Thread Jack Bates


Jason Frisvold wrote:


Here's a question that's come up around here.  Does a CALEA intercept
include hairpining or is it *only* traffic leaving your network?
I'm of the opinion that a CALEA intercept request includes every bit
of traffic being sent or received by the targeted individual, but
there is strong opposition here that thinks only internet-related
traffic counts.



IANAL... The law does include hairpinning; however, the conference we went to
last week on CALEA gave us a lot of insight. The LEAs we talked to were
interested in us working with them. They understand that the mandate requires
some things that are technically infeasible or so cost-prohibitive as to mandate
abandoning broadband altogether. For example, how do you tap a customer who
is in a cyber cafe? How do you handle hairpinning on a wireless bridge? There
is entire DSLAM infrastructure out there that has no filtering capabilities,
where the closest one could tap is traffic leaving the DSLAM, but not traffic
between customers on the same DSLAM. In general, they seemed to be happy if we
could get traffic isolated down to a town level and just did the best we could
to assist in meeting the traffic tap.


Jack Bates


Re: ISP CALEA compliance

2007-05-10 Thread Jack Bates


William Allen Simpson wrote:

Speaking from experience, that's very likely -- a lot of negotiation
trouble.  No matter what happens, you'll pay some attorney fees.

Also, the gag order was ruled unconstitutional, so always inform your
customer!  They may be willing to work out attorney fees, and/or join
you in a suppression hearing.

You probably should remember to call your congresscritters to complain
each and every time it happens.

Most important: call your state ACLU, as they are trying to keep track,
and might be of some help. ;-)

You work so hard to defend people who exploit children? Interesting. We are
talking LEA here, not the latest in piracy lawsuits. The #1 request from a
LEA in my experience concerns child exploitation.



Follow the usual best practices, and you may save time and money.

1. Ensure that your DHCP, RADIUS, SMTP, and other logs are always,
ALWAYS, *ALWAYS* rolled over and deleted within 7 days without backup.
I'd recommend 3 days, but operational requirements vary.

This has been a nice trick by many, and it does circumvent CALEA: if you can't
give them the customer info to begin with, they probably won't be able to request
a tap. The exception is emergency taps requested while an action is going on.



2. Insist that you receive payment *in advance* before doing anything!
And wait until the check clears.



I'm not sure that this would work with all LEA orders.


3. Remind the requesting agency that everything must be signed by a
judge.  Call the issuing court to confirm.  Don't accept exigent
administrative requests.  The recent inspector general report showed
that most administrative requests were never followed up by actual
judicially approved requests, and virtually none of them warranted
exigent status -- they were illegal shortcuts.



The last I checked, LEAs have a 48-hour window for emergency orders, and they
are supposed to be honored. I'd definitely check with a lawyer on that one.



4. Never, NEVER, *NEVER* speak to a federal agent of any kind.  Do not
allow them into the building.  Require them to speak to your attorney.
Require everything in writing.  No exceptions!

We returned the first request as inadequate -- since it misspelled the
name of the company and the address, and wasn't accompanied by a check.

Our problem was that we weren't rigorous about #1 (some staff had been
keeping some backups sometimes), and the resulting time and expense for
extracting lawful information from all the rest was painful.  Learn
from our mistake.


Hmmm, you must have been one of those types the agents I talked to were
referring to. They said that those who give them the most flak usually get the
least amount of slack. Play hardball with the government, and it will play
hardball back at you. I'd definitely make sure you stick to #4 if following #1-3.


Of course, IANAL and YMMV.

Jack Bates


Re: ISP CALEA compliance

2007-05-10 Thread Jack Bates


William Allen Simpson wrote:

We've never charged on a usage model.  We always charged on a fixed
tier bandwidth model, payable in advance.



I think what he meant was, "My DSL has been broken for 3 months now, and I
haven't been able to use it. You can't charge me for something which wasn't
working!"

*checks logs*

"Well, interestingly enough, we see that you used it here, here, here, and here.
Pay the bill, please."



Jack Bates


Re: [funsec] Not so fast, broadband providers tell big users (fwd)

2007-03-13 Thread Jack Bates


Jeff Shultz wrote:


Alexander Harrowell wrote:



768 ain't broadband. Buy Cisco, Alcatel, and Akamai stock!


If you don't like it, you can always return to dialup.

It certainly is - just ask the CALEA folks. And as for who is pushing
the bandwidth curve, for the most part it seems to be gamers in search
of the ever-shrinking ping time. I suspect they make up most of our
>1536kb/sec download customers.


Gamers don't really need much in bandwidth. They need the low ping times, so
they *must* ensure that there is no saturation or routing overhead. Granted,
there are some games that are bandwidth intensive, but everyone's busy playing
WoW. Gamers are great for detecting those really hard-to-spot problems that only
affect gaming and VoIP.


What parts of the world have long since upgraded to those speeds - and 
how do they compare size-wise to the USA? We've got an awful lot of 
legacy infrastructure that would need to be overcome.


Japan has, for one. Definitely a size difference. In US metropolitan areas we
are seeing a lot more fiber to the home. The cost will never be justified in US
rural areas. Just look at Oklahoma: most connectivity in Oklahoma will actually
be fed from Dallas or Kansas City.


I will happily agree that it would be nice to have higher upload speeds 
than DSL generally provides nowadays. What are cable upload speeds like?


I would like to blame the idiots who decided that, of the signal range available
on copper for DSL, only a fixed amount would be dedicated to upload instead of
being negotiated. What on earth do I want to do with 24Mb down and 1Mb up?
Can't I have 12 and 12? Someone please tell me there's a valid reason why the
download range couldn't be variable and negotiated, and that it's completely
impossible for one to have 20Mb up and 1.5Mb down.



Jack Bates


Re: comcast spam policies

2007-02-07 Thread Jack Bates


Albert Meyer wrote:


Didn't we all figure out years ago that, when using a telephone or cable 
company for Internet service, you have to just use the pipe and get your 
services (mail, news, etc.) elsewhere? Bemoaning the poor quality of 
telco/cableco mail servers is kind of like wishing that the rain 
wouldn't be so damn wet.




I just know you meant to add "with the exception of those few telco mail servers
that are run well."



-Jack


Re: Quick BGP peering question

2007-01-03 Thread Jack Bates


James Blessing wrote:

Very simply : Would you accept traffic from a customer who insists on sending 0
prefixes across a BGP session?



I just ran through a related issue with one of my upstream peers. It appears
that they have a strictly enforced RPF policy, yet during the process of
renumbering a customer of a customer out of another ISP's space, they were
wanting to throw all traffic (our IPs and the other provider's) out to us.


It comes down to a simple question of policy, and whether you are going to
mandate how your customers route proper, valid traffic. I about pulled the plug
in my situation, but finally got it sorted out. Thank goodness some routers
allow exceptions to RPF, and other providers just use ACLs instead.


0 prefixes is no different than partial prefixes. Asymmetric routing should not
be a crime on the Internet just because "I don't like it" or "basic RPF is
easier and you're doing something funky anyway."



Jack Bates



Re: Bogon Filter - Please check for 77/8 78/8 79/8

2006-12-13 Thread Jack Bates


[EMAIL PROTECTED] wrote:


One wonders whether it might not be more effective in the
long run to sue ICANN/IANA rather than suing completewhois.com.



Of course, it could be that I used the wrong term. IANAL, after all. Perhaps the
right term was injunction? Does that qualify as a lawsuit? Unfortunately, people
seem to think the legal system is strictly about money. Perhaps it is. However,
in the process of people getting money, I've noticed they have solved their
initial problem, at least temporarily.


Besides, it didn't look like it really took all that much to solve the
completewhois.com problem. Surely people don't pay their lawyers without first
yelling, screaming, and calling everyone and their dog (or posting to NANOG) in
the attempt to get what they want. :)


Jack


Re: Bogon Filter - Please check for 77/8 78/8 79/8

2006-12-11 Thread Jack Bates


Allan Houston wrote:
This probably isn't helped much by sites like completewhois.com still 
showing these ranges as bogons..


http://www.completewhois.com/bogons/active_bogons.htm

They've ignored all my attempts to get them to update so far.. sigh..



They just need someone using the address space to slap them with a lawsuit.

Jack Bates


Re: Bogon Filter - Please check for 77/8 78/8 79/8

2006-12-11 Thread Jack Bates


Scott Morris wrote:

So we're saying that a lawsuit is an intelligent method to force someone
else to correct something that you are simply using to avoid the irritation
of manually updating things yourself???

That seems to be the epitome of laziness vs. litigiousness.


Scott



I would doubt the person using a bogon list would be the initiator of a lawsuit.
It would be more plausible that the person using the netspace incorrectly listed
as a bogon would have just cause for filing one.


It's annoying enough to chase after all the people who manually configure bogon
networks in their firewalls and then forget them. From previous posts, it appears
that this is a case of continued propagation of incorrect information after
being notified of the inaccuracy, with the information published as fact,
implying accuracy.


Jack Bates


Re: 10,352 active botnets (was Re: register.com down sev0?

2006-10-26 Thread Jack Bates


Matthew Crocker wrote:



Maybe the new slogan needs to be "Save the Internet! Train the chimps!"


Shouldn't 'ip verify unicast source reachable-by rx' be a default
setting on all interfaces? Only to be removed by trained chimps?




Only if you wish to break existing configurations during IOS upgrades. I could
see 'ip verify unicast source reachable-by any' (less breakage), but rx will kill
all types of good asymmetric routing. The largest breakage I have seen caused by
rx is on link IPs, caused by the router responding out multiple
interfaces. It's also a problem when customers are straddling the fence,
purposefully using asymmetric routing.


It would be nicer to have router support where a packet is acceptable if its
network is acceptable in the BGP (or IGP) policy/filter (i.e., the network may
not be there, but it is allowed), as well as the link addresses associated with
the BGP (or IGP) peer.
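
To illustrate the strict-versus-loose distinction, here is a toy model of
the two checks (plain Python, not router code; the routing table and
interface names are made up):

    # Toy illustration of strict ('rx') vs. loose ('any') uRPF.
    import ipaddress

    routes = {
        ipaddress.ip_network("10.0.0.0/8"): "eth0",
        ipaddress.ip_network("192.168.1.0/24"): "eth1",
    }

    def best_route(src):
        matches = [(n, i) for n, i in routes.items() if src in n]
        return max(matches, key=lambda m: m[0].prefixlen, default=(None, None))

    def urpf_accept(src_ip, in_iface, mode="strict"):
        net, iface = best_route(ipaddress.ip_address(src_ip))
        if net is None:
            return False               # no route at all: drop in both modes
        if mode == "loose":            # 'reachable-by any'
            return True
        return iface == in_iface       # 'reachable-by rx': must match ingress

    # Asymmetric case: source routed via eth1 but arriving on eth0.
    print(urpf_accept("192.168.1.5", "eth0"))            # strict: dropped
    print(urpf_accept("192.168.1.5", "eth0", "loose"))   # loose: accepted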


-Jack


Re: ATT refuses to provide PTR records?

2006-10-18 Thread Jack Bates


Mark Foster wrote:
Surely if you have _a_ matching forward and reverse DNS pair, that'd get 
you started?


The problem in our case is that this wasn't an email issue. Any service
(http/ftp/nntp/etc.) which performed rDNS lookups prior to handling the
connection would end up timing out the connection, due to the fact that AT&T had
set up a CNAME which pointed to a nameserver that no longer existed (from when
the IP was owned by someone else). The actual complaint was failure to ftp files
from the location, due to the ftp server doing rDNS. AT&T refused to remove the
old CNAME, which was defunct. We didn't need matching anything. NXDOMAIN would
have even been acceptable. However, forwarding the request to non-existent
nameservers is not.
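
A small sketch of the difference as a client sees it (assumes dnspython;
a server doing rDNS without a short timeout just hangs in the lookup below):

    # Fast NXDOMAIN vs. hanging on a delegation to dead nameservers.
    # The 3-second lifetime is an illustrative cap.
    import dns.resolver
    import dns.reversename
    import dns.exception

    def rdns(ip):
        qname = dns.reversename.from_address(ip)
        try:
            answer = dns.resolver.resolve(qname, "PTR", lifetime=3.0)
            return answer[0].to_text()
        except dns.resolver.NXDOMAIN:
            return None   # clean "no PTR": the connection proceeds at once
        except dns.exception.Timeout:
            return None   # dead nameservers: we burned the full 3 seconds

    print(rdns("192.0.2.25"))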




The issue was where there was no matching A/PTR set, this would increase
the likelihood of a spam host or something... right?




The issue was that when revoking an IP from a customer, AT&T did not remove the
rDNS configuration for that IP. Had they done so, their own servers would have
reported back that there wasn't any rDNS (NXDOMAIN), which would have been
perfectly acceptable.


Jack Bates


Re: ATT refuses to provide PTR records?

2006-10-17 Thread Jack Bates


Mike Walter wrote:

We have a customer that has AT&T, and they reassigned the IP space to our
name servers to allow us to do reverse DNS for them.



We had a similar situation. AT&T states that they will only handle rDNS using
domains that they control. They will happily CNAME the IPs appropriately or
reassign the IP space, depending on block size and request.


The issue we ran into was that we couldn't get them to *unassign* a CNAME for an
IP block so that it would fail immediately, and so servers (web, ftp, etc.) which
requested rDNS for the connection information would time out connections waiting
for the non-existent nameservers. We weren't really interested in handling rDNS
for the IPs, given that they weren't handling mail or web and had no A records
pointing to them. It is the easiest way to get it done, though.


Jack Bates




Re: Zimbabwe satellite service shutdown for non-payment

2006-09-19 Thread Jack Bates


Brandon Galbraith wrote:
Does any fiber run into Zimbabwe? Or is everything via satellite? There 
has to be a remaining uplink (albeit low-capacity) if nameservers within 
the country are still accessible.




Zimbabwe's government-owned telephone company controls Internet access. When I
was working there in '98 or so, it was mandatory for all providers to interlink
with the telephone company and use their satellite uplink at outrageous pricing.
There were a few exceptions, mostly companies that were faster than the telco at
setting up Internet connections and had the political power to hold on to them.
The only other connectivity feeding Zimbabwe outside of the satellite uplinks
was microwave to South Africa, where it picked up fiber. I believe this link was
primarily for phone, and not Internet.


I doubt much has changed since I was there. Towards the end of my visit, riots 
broke out and shortly after I left it paid not to be white in Zimbabwe and 
definitely not a white farmer. The economy didn't fare well. A beautiful 
country, but unfortunately not very ideal for a network engineer.


Jack Bates


Re: ARIN sucks? was Re: Kremen's Buddy?

2006-09-14 Thread Jack Bates


Lasher, Donn wrote:

YMMV, but my mileage has been just as bad as yours, in some cases worse.
Converting from SWIPs to RWHOIS took 6 months. ARIN is painful. Overly
painful for someone who pays them for the right to USE IP addresses on a
yearly basis.


Of course, that's just my personal viewpoint.



I'm curious why you converted to RWHOIS. I SWIP'd my entire network to 
get my assignments. Many large ISPs still SWIP. I didn't have time to 
mess with RWHOIS.


-Jack


Re: Kremen's Buddy?

2006-09-13 Thread Jack Bates


Richard A Steenbergen wrote:
Ever notice the only folks happy with the status quo are the few who have 
already have an intimate knowledge of the ARIN allocation process, and/or 
have the right political connections to resolve the issues that come up 
when dealing with them?


Try looking at it from an outsider's point of view instead. If you're new 
to dealing with ARIN, it is not uncommon to find the process is absolutely 
baffling, frustrating, slow, expensive, and requiring intrusive disclosure 
just shy of an anal cavity probe.




I take offense at all this misinformation, based on my not-so-long-ago viewpoint
as an outsider. Based on everything I heard here, I had a negative view of ARIN.
After all, everyone here deals with them; if they hate dealing with ARIN, it
must be horrible. Live and learn.


My experiences with ARIN are simple. It was a lot of work. I didn't have any of
my netblocks SWIP'd and hadn't analyzed my network in the way that ARIN wanted,
so I had to work to get all this information together the first time. However, I
found ARIN easy to work with. They helped me out when I had questions, and when
I was terrified that they wouldn't give me IPs, they were generous. My second
time dealing with them was aggravating, as I wanted more than what they
issued (they use time between requests to determine a trend of actual IP
utilization). However, they were right, and my last request expanded the
previous request block out (I love contiguous when I can have it) and started a
new one (yippee! another route!).


Please remember the outsiders. They assume that everyone dealing with ARIN and
talking badly about the process knows what they are talking about. ARIN may not
be perfect, but newcomers shouldn't be afraid. The hardest part is the
information gathering to set up for the first time, as many people don't have
the information ARIN requests readily available. After that, a little due
diligence and it's a cakewalk.


-Jack


Re: [Fwd: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

2006-09-13 Thread Jack Bates


David Conrad wrote:
I'm sure the same argument was used for telephone numbers when technical 
folk were arguing against number portability.




Number portability is a different can of worms, and many telephone companies
pushed for it. However, telephone numbers have been assigned in large blocks
when only 1 number might be needed. This was a big issue for CLEC dialups, where
999 numbers out of a block could go to waste. If ARIN handed out prefixes the
same way, there wouldn't be any IPv4 space left.


"Dude! Check it! I got a /20 for my house, man! It was a steal. Remember in the
day when ARIN wouldn't let me have it because I only had 2 hosts here? *insane
laughter*" or "IPs for sale! We've acquired 20 /8 networks! How big do
you want to go?" (Given that laws have indicated a dislike for domain squatting,
I wonder how IP squatting would work?)


-Jack


Re: [Fwd: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

2006-09-08 Thread Jack Bates




Jon Lewis wrote:


In small quantities, and which tie you to particular providers.  Shells 
of companies have been bought (or just claimed) for their large, 
especially pre-ARIN, PI-IP assignments.  To a young ISP, a /16 for 
example may seem like a lifetime supply of IP space, and save the 
company many thousands of dollars (ARIN registration fees) and paperwork 
hassles.




Actually, their issue is that ARIN would only transfer the netblock to
them under the condition of them signing the contract (which effectively
states that ARIN controls the netblocks). They would also be liable for
the annual fees. They are trying to treat IP address space as property
which they own, and refuse to agree to ARIN registration/fees to obtain
what they feel is their property. Unfortunately, while ARIN is a steward
and technically does not *own* the IP address space, neither does the
ISP that uses the space. The plaintiff apparently misses the fact that
IP space is a community asset and is thus handled by the community.


IANAL, but I doubt they can prove antitrust in this case. If only we
could handle other limited resources in the world as effectively,
including BGP routing table bloat.



-Jack


Re: [Fwd: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

2006-09-08 Thread Jack Bates


Matt Ghali wrote:


Yes, at the least, wasting huge piles of ARIN's money on legal fees; 
which is likely Kremen's entire intent, to teach them a lesson for not 
handing over what he wanted.




Correction. Wasting huge piles of our money. I was hoping the money would go 
towards a new template, too!


-Jack


Re: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?

2006-09-08 Thread Jack Bates


Niels Bakker wrote:
Address space policy has always been the result of a community 
consensus. Just because that consensus has shifted over the years does 
not mean that older entries in some database have suddenly developed 
into property. All it means is that the community is very friendly for 
not applying the new rules retroactively.




The worst part of the filing was that it asserted that by allowing the
previous owner to retain ownership of the netblock, ARIN enabled them to
allocate addresses to customers and stay in business (as if they couldn't ask
ARIN for more IP addresses).


The purpose of the transfer, if I read the filing correctly, was to give Kremen
the right to force all routing of the block to stop, in order to extort money
out of the various people using it (based on the wording in the filing, that
apparently is money Kremen lost). Many of the suballocated users of the netblock
would probably have been innocent bystanders who were using a cheap ISP.


Of course, in reality, even with the transfer of the netblock, new allocations
would have been requested and granted for networks requiring them. However, how
many people would have been required to pay or forced to immediately renumber?
I've had to renumber a /18 when C&W decided to drop customers here. If they had
forced it unroutable (claiming ownership) while I was trying to renumber, we
wouldn't be questioning the status of IP addresses as property.



-Jack


Re: Quarantine your infected users spreading malware

2006-03-01 Thread Jack Bates




David Nolan wrote:
snip


(*): For anyone who doesn't know, URPF is essentially a way to do
automatic ACLs, comparing the source IP of an incoming packet to the
routing table to verify the packet should have come from this
interface.  With the right hardware this is significantly cheaper than
ACL processing.  And it's certainly easier to maintain.  And by injecting
a /32 null route into the route table you can cause a host's local
router to start discarding all traffic from that IP.



snip sig

Yeah, but it's not near as fun as dynamic ACLs updated via a script
monitoring flow logs in real time. It's definitely easier to implement,
though.


For people utilizing the RBE/DHCP combo on Cisco routers, it is also
possible to just remove the /32 route that was dynamically created,
which will kill traffic until the customer requests DHCP again, which
will by that time place them in the quarantine. One advantage of
temporary route removal is that it requires no cleanup. Just make sure
you don't wipe out your permanent static routes.


-Jack


Re: Quarantine your infected users spreading malware

2006-02-23 Thread Jack Bates




Andy Davidson wrote:


And they don't care !  How is someone else telling them that they need a 
virus checker going to change anything ?




We allowed users back online to run Housecall at Trend Micro for free so
they could get cleaned up and save some money. However, the resuspend
rate was so high that we quickly changed to offline cleanup only. It
will remain that way until we perfect our auto-defense system.


Customers just want things to work. They don't care if they are 
infected. It's amazing how many customers swear they aren't scanning or 
sending email, and refuse to understand that their computer is capable 
of doing things without them knowing.


-Jack



Re: no whois info ?

2004-12-13 Thread Jack Bates
[EMAIL PROTECTED] wrote:
The network itself is the primary contact information
for a domain. Every nameserver has an IP address
whose connectivity can be tracked through the network.
Same thing for mail servers and anything else with
an A record. This means that operationally it is
far more important for the RIR whois directory to
have working technical contacts.
A few weeks ago, we had a customer contact us regarding issues
communicating with a domain. Investigation revealed that the domain
handled its own primary DNS server, and the secondary DNS was pointed to
another provider which had restricted outside queries to that particular
server (and wasn't authoritative for the domain in the first place). The
problem was that the TTLs on the NS RRs differed by 2 days, and
the remaining NS in cache was refusing queries.
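
A sketch of the kind of check that exposes this (assumes dnspython): ask
each listed nameserver for the NS RRset directly and compare the TTLs
they serve.

    # Compare the NS RRset TTL as served by each of a domain's nameservers.
    # Mismatched values reproduce the stale-cache problem described above.
    import dns.message
    import dns.query
    import dns.resolver

    def ns_ttls(domain):
        ttls = {}
        query = dns.message.make_query(domain, "NS")
        for rr in dns.resolver.resolve(domain, "NS"):
            ns_name = rr.target.to_text()
            addr = dns.resolver.resolve(ns_name, "A")[0].to_text()
            response = dns.query.udp(query, addr, timeout=3.0)
            for rrset in response.answer:
                ttls[ns_name] = rrset.ttl
        return ttls

    print(ns_ttls("example.com"))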

IP addresses weren't registered to the responsible party. The domain wasn't
registered to the responsible party. We had to relay the information in a
best-effort approach through three different organizations in the
hopes that the responsible person would get informed and fix the
problem. This is not the ideal method of contact, and it wasted man-hours
in multiple organizations due to inaccurate information. The primary use of
whois is still valid, and anonymous/inaccurate records waste time and
money for legitimate purposes.

-Jack


Need Help finding support for specific technology with SONET gear

2004-12-09 Thread Jack Bates
Please CC me in all replies or reply offlist if not appropriate for list.
I am trying to find equipment for an OC48+ ring for hauling DS3s. I've
read lots of documentation on handling fiber failures and repathing the
circuits the other direction on the ring. I know a lot of providers will
sell half the OC48 as protected and the other half as unprotected, with
fiber cuts resulting in all the unprotected circuits being taken down in
favor of the protected circuits. But I need something a little different.

I need something that will detect LOS on a single DS3 and repath that
DS3 to a different port at a remote location. While not the cleanest
transition, it will handle catastrophic failure of edge router
configurations by redirecting circuits to a different location where
routing is mirrored (and down until signaling is established). I also
need something that will support transitioning all circuits leaving the
ring at one location to another location when communication is lost with
that site.

I can't imagine that no one has done this, but I can't find any
information on it. I'm not very familiar with SONET (a little lower
level than I usually deal with) or what various vendors support. My
telco boys tell me that their existing gear won't handle repathing
single DS3s when they alarm; only fiber cuts. In addition, it would be
nice if returning to the primary path could be manual or configured to
wait a specified time interval to ensure stability (nothing like
equipment which likes to bring circuits up twice before resuming
service). Hints, tips, and tricks welcome. I have certain edge routers
whose availability I need to ensure even during catastrophic failure,
without requiring each of the customers on those routers to maintain
separate circuits.

Thanks,
Jack Bates


Re: Energy consumption vs % utilization?

2004-10-26 Thread Jack Bates
Erik Haagsman wrote:


Which means you have to make sure the revenue generated by those 98%
underutilized servers covers your power bill and other expenses,
preferably leaving some headroom for a healthy profit margin. As
long as that's the case there's no real waste of energy; the services
people run on their servers are supposed to be worth the energy and
other costs, whether they physically fully utilize their power or
not.

Yet there are a lot of clusters which are designed for peak load and
waste energy during non-peak hours. Developing an in-house system
for shutting down power to excess servers in a cluster might increase
that healthy profit margin.

-Jack


Re: [arin-announce] IPv4 Address Space (fwd)

2003-10-29 Thread Jack Bates
Dave Howe wrote:

Indeed so yes - however... A large and increasing segment of my users are
VPN laptop users with ADSL at home. Our accounts department looks a
certain amount of askance at IT when they get a large phone bill in
expenses for someone using a 33.6 modem right next to a nice shiny
half-meg ADSL connection that IPSec won't traverse.

IPSec can traverse NAT. Oftentimes, it's the implementation of IPSec
that doesn't traverse NAT. Software vendors apparently didn't think it
necessary to allow for address translation. 'Tis sad, really.

I have customers that can VPN through NAT and customers that require
public addressing. The ones that really make me laugh seem to require a
static IP address.

Of course, the customer is always right.

-Jack



Re: [arin-announce] IPv4 Address Space (fwd)

2003-10-29 Thread Jack Bates
David Raistrick wrote:

You seem to be arguing that NAT is the only way to prevent inbound access.
While it's true that most commercial IPv4 firewalls bundle NAT with packet
filtering, the NAT is not required..and less-so with IPv6.
I think the point that was being made was that NAT allows the filtering
of the box to be more idiot-proof. Firewall rules tend to be complex,
which is why mistakes *do* get made and systems still get compromised.
NAT interfaces and setups tend to be simpler, and the IP
addresses of the device won't route publicly through the firewall or any
unknown alternate routes.

-Jack



Re: data request on Sitefinder

2003-10-21 Thread Jack Bates
Owen DeLong wrote:
The issues that must be addressed are the issues of internet governance,
control of the root (does Verisign serve ICANN or vice-versa), and
finally, whether the .com/.net zones belong to the public trust or to
Verisign.  Focusing on the technical is to fiddle while Rome burns.
This is the part that drives me nuts. Unless a court ordered it
otherwise, the root servers can designate ANYONE as the registry for a
TLD. My understanding is that the process for making such changes is
lengthy and involves the agreement of 3 different organizations.
However, on a technical level, such changes are possible, and such a
registry can form agreements with ICANN and the various registrars.

I'm suddenly reminded of .org...

-Jack



Re: data request on Sitefinder

2003-10-20 Thread Jack Bates
todd glassey wrote:
Richard -
Do they (Verisign)  have any legal reason to??? - is there anything between
them and ANY of their clients that requires them to inform them before any
changes to protocol facilities are made - I think not.
To inform? Not yet, although I have the feeling that this will be
changed due to the historic record. However, changes that have an effect
are always analyzed and a course of action chosen. I believe this is the
job of ICANN. At some point, ICANN's power will need to be tested and
set in stone. Only the community can create or strip that power. Yet if
an organization is going to exist to serve the community and maintain
order, then it needs the power to do so.

I think Vixie has alluded to this a few times, and I know there is much 
that goes on in the hallways concerning the overall problem of who 
controls what. Verisign is just helping to push the process along. I 
doubt it will end as they want it to.

-Jack



Re: IAB concerns against permanent deployment of edge-based filtering

2003-10-18 Thread Jack Bates
Jun-ichiro itojun Hagino wrote:
While short term traffic filters are deployed, the appropriate recommended
longer term action is to:
Edge networks have a lot more to upgrade than backbone networks.
Obtaining IOS code that works for all the different types of routers and
meets the ISP's policy is not an easy task. Some files had to be
specifically requested from the vendor, as they no longer supported the
version and didn't have pre-compiled images. In addition, there are
other bugs in IOS which must be considered for each application of a
router deployment. This must be tested and monitored at one application
locale, and once verified can be deployed for all similar applications.

In some cases, there were no alternatives, and this forced hardware
upgrades at various locations (i.e., memory). Such upgrades require money
and time on the part of the edge customers. The peering blocks allow for
limited protection of a majority of the customers while they continue to
repair their networks.

RPC blocks will be much worse, as the storm is still pretty loud, and 
low bandwidth customers cannot handle the extra noise.

-Jack



Re: Site Finder

2003-10-16 Thread Jack Bates
Owen DeLong wrote:



They claim to be representing the USER community and to know better
than we what they end users want.  They think we're just a bunch of
geek engineers that are unwilling to embrace new ideas.  Most of all,
they think they can make money this way, and, they don't really care
about anything else. They're just trying to manipulate things so that
the backlash doesn't cause them too much difficulty as they inflict
this on the internet.
I wonder how eager they would be to implement wildcards if restricted
from making any revenue from the service the wildcard points to (i.e.,
Site Finder).

While I agree that handling of NXDOMAIN needs to improve, such handling
must be done by the application. Popular browsers have already started
doing this. While it is possible for the servers pointed to by a
wildcard to handle individual services, it is impossible for said
servers to handle all services currently in use and likely to be
implemented. If the servers discard packets, they will place
applications in a wait timeout with no explanation as to why. If they
reject connections, applications will operate as if the remote service
were down, not as if the remote server itself were unresolvable.
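
The usual test for this kind of wildcard, sketched with dnspython: resolve
a random label that cannot have been registered and see whether an address
comes back instead of the expected failure.

    # Detect a TLD wildcard: a random, unregistered label should fail to
    # resolve; getting an A record back means answers are being synthesized.
    import random
    import string
    import dns.resolver

    def tld_wildcard_targets(tld):
        label = "".join(random.choices(string.ascii_lowercase, k=24))
        try:
            answer = dns.resolver.resolve(f"{label}.{tld}", "A")
            return [rr.to_text() for rr in answer]   # wildcard in effect
        except dns.resolver.NXDOMAIN:
            return None                              # normal behavior

    print(tld_wildcard_targets("com"))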

There are, of course, minor irritations with a wildcard concerning
email. There are also privacy concerns, especially if the servers the
wildcard points to handle the SMTP connection. It was previously stated
that the servers did not log the SMTP connection information, but there
were no protections given to say that this wouldn't change.

I find it sad that Verisign believes they can actually dictate what my
customers see better than I can. Worst of all, Verisign has to realize
that the BIND patches WILL be used if wildcarding is reimplemented by
them, and the resulting issues from use of the patch will be a direct
result of Verisign's actions.

-Jack



Re: [Fwd: [IP] VeriSign to revive redirect service]

2003-10-16 Thread Jack Bates
Paul Vixie wrote:

While I agree that handling of NXDOMAIN needs to improve, such handling 
must be done by the application. Popular browsers have already started ...


i think i agree with where this was going, but it would be a fine thing if
we all stop calling this NXDOMAIN.  the proper term is RCODE 3.  when you say
NXDOMAIN you sound like you've only read the BIND sources and not the RFC's.
NXDOMAIN is a BINDism, whereas RCODE 3 refers to the actual protocol element.
Sorry, Paul. I have gotten too used to seeing the BINDism on-list. You
will find that most of my speech matches that of those I'm talking to.
It cuts down on miscommunication and confusion. Please see fit to report
me to RFC-ignorant for not using the proper RFC terminology. :)

-Jack



Re: Wired mag article on spammers playing traceroute games with trojaned boxes

2003-10-09 Thread Jack Bates
Vinny Abello wrote:

Personally, I think preventing residential broadband customers from 
hosting servers would limit a lot of that. I'm not saying that IS the 
solution. Whether or not that's the right thing to do in all 
circumstances for each ISP is a long standing debate that surfaces here 
from time to time. Same as allowing people to host mail servers on cable 
modems or even allowing them to access mail servers other than the ISP's.

The issue comes in defining a server. You can block access to ports
below 1024, but spammers don't have to reference port 80 in their
emails. You can mandate NAT, but this breaks commonly used systems
(especially for broadband) like DirectPlay. One of the selling points
for broadband is gaming, yet some gaming systems were designed to make
connections both ways, and dynamic port forwarding doesn't work in all
scenarios.

-Jack



Re: Is there anything that actually gets users to fix their computers?

2003-10-03 Thread Jack Bates
John Renwick wrote:
You've put your finger on it.  ISPs have to help users understand that their
machines are broken in a way that makes them unable to gain access to the
Internet -- then most will take them to the shop PDQ, and hopefully get them
back with some protection installed.
While suspending service is a harsh step, sometimes it is required to
get the user's attention. More than that, as explained to my customers:
their service was interrupted because their computer was insecure. The
level of that insecurity is unknown to us, and we try to protect our
users. After all, does the user just have Virus X, or do they have
Virus Y, which includes a keylogger?

My customers are learning what keyloggers are and what viruses are
capable of. Wouldn't you want to know that your bank details can be
learned, despite the SECURE connection to your bank, because a virus
placed a keylogger on your computer? It's true. It scares them. Then
again, they should be scared. Insecure systems are nothing to joke
about. They can cause real damage.

-Jack



Re: Internet privacy

2003-10-02 Thread Jack Bates
Allen McRay wrote:

To learn how to assign WHOIS contact information and about other actions you
can take to protect your personal information today, visit
www.InternetPrivacyAdvocate.org.
It's ridiculous to state that placing contact information on a domain
name is a privacy issue. A domain is public record, as should the
contact information be. Is Verisign out to help spammers any way that
they can? It's bad enough that the whois information is often out of
date, with obvious bogus information like 555-1212.

-Jack



Re: Internet privacy

2003-10-02 Thread Jack Bates
Jeffrey Meltzer wrote:

What valid reason would you have for getting in contact with a domain owner,
if they've unlisted themselves and don't want to be contacted?
A problem with email or a website at a given domain. The fact that IP
addresses aren't SWIP'd out to the individual owners; multiple domains
owned by different people can be hosted on the same IP.

Sometimes it's a matter of fixing problems, not just abuse.

-Jack



Re: Verisign Responds

2003-09-24 Thread Jack Bates
Paul Vixie wrote:

It's still to be seen if ISC's cure is worse than the disease, as
instead of detecting and stopping wildcard sets, it looks for delegation.


that's because wildcard (synthesized) responses do not look different
on the wire, and looking for a specific A RR that can be changed every day
or even load-balanced through four /16's that may have real hosts in them
seems like the wrong way forward.
See the NANOG archives for my post regarding wildcard caching and set
comparison, with additional resolver functionality for requesting whether
the resolver wishes to receive wildcards or NXDOMAIN.

-Jack



Re: monkeys.dom UPL being DDOSed to death

2003-09-24 Thread Jack Bates
Geo. wrote:

Blacklists are just one kind of filter. If we could load software that
allowed us to forward spams caught by other filters into it and it
maintained a DNS blacklist we could have our servers use, we wouldn't need
big public rbl's, everyone doing any kind of mail volume could easily run
their own IF THE SOFTWARE WAS AVAILABLE. A distributed solution for a
distributed problem.
The benefit of using a blacklist like monkeys or ORDB is that there is
only one removal process for all the mail servers. The issue is that
when the web server is DDoS'd, it is very hard for people to get removed.

Running local blacklists on common themes (such as open proxy/open
relay) has the same issue. Yes, one can blacklist the site, but how do
you get it delisted once the problem is fixed?

I had openrbl.org in my rejections for a while so that people could find
all the blacklists that they were on. Since the DDoS of openrbl, I've
had to change it to my local scripts, which don't cover near what openrbl
did.
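
For anyone unfamiliar with the mechanics, all of these lists share the
same query convention: reverse the IP's octets and look the name up under
the blacklist zone. A sketch (dnspython; the zone name is a placeholder):

    # The standard DNSBL query. Any A answer (e.g. 127.0.0.x) = listed.
    import dns.resolver

    def dnsbl_listed(ip, zone="dnsbl.example.org"):
        name = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            dns.resolver.resolve(name, "A")
            return True
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False

    print(dnsbl_listed("192.0.2.10"))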

-Jack



Re: monkeys.dom UPL being DDOSed to death

2003-09-24 Thread Jack Bates
Geo. wrote:

There shouldn't be a need for any removal process. A server should be listed
for as long as the spam continues to come from it. Once the spam stops the
blacklisting should stop as well. That is how a dynamic list SHOULD work.
Depends on the type of listing. Open proxies and open relays are best
removed by request of the owner once they are fixed, or staled out after
a retest at a later time, although retests should be few and far between
(many use anything from 1-6 months). Just because spam is temporarily
not coming from an insecure host does not mean that the host has been
secured.

Direct Spam is difficult to automatically detect, and reports are not 
always accurate (see SpamCop). It tends to be a very manual process. A 
lot of work goes into maintaining a list like SBL or SPEWS.

Spam is also very transient, which makes local detection of a spammer's
activities difficult. They may just be focusing on someone else for a
week or two before plastering your servers again. If you removed them,
they would do considerable damage before they got relisted via the manual
process (the delay between the first email received and the first
recipient reporting can easily exceed hours).

The other issue with shared listings is what one considers acceptable or
unacceptable. Easynet, for example, lists a lot of mail senders which I
accept mail from due to user demand. They consider the email spam or
resource abuse (broken mailers), while I am meeting the demands of my
customers, who are paying to receive the email. This isn't a collateral
damage issue. It is an issue of where a network decides to draw the line
on accepting or rejecting email.

-Jack



Re: Verisign Responds

2003-09-24 Thread Jack Bates
Paul Vixie wrote:

oh... that wasn't a joke, then?

there won't be a protocol change of that kind, not in a million years.
It doesn't have to be a protocol change, strictly an implementation
change. It would break less than the current implementation change y'all
made can break. Whether or not resolver functionality for application
support is included doesn't really matter. The ability to tell
the recursor to accept or not accept wildcard records is functional
and doesn't care about delegation, strictly whether the record returned
matched a wildcard set. It performs the same service that the delegation
patches did, except it won't break TLDs like .de.
-Jack



Re: Verisign Responds

2003-09-24 Thread Jack Bates
Paul Vixie wrote:

you are confused. and in any case this is off-topic. take it to namedroppers,
but before you do, please read rfc's 1033, 1034, 1035, 2136, 2181, and 2317.
Can someone please tell me how a change to a critical component of the 
Internet which has the capacity to cause harm is not an operational issue?

A TLD issues a wildcard. Instead of discovering if records match the
wildcard and returning NXDOMAIN (which is what everyone wanted), the
software was designed to restrict records based on delegation.

Delegation was not broken. The changes made allow engineers to break it.
I'd consider this an issue. Reports have already come in of all the
various domains that people will mandate delegate-only for. For the
record, .museum was listed several times, despite the request in the
documentation not to force delegation, as were other zones.

In fact, many people were confused. They didn't understand what zone
delegation was. For the record, I've read all the RFCs you posted. To
many, it's an issue of wildcards. Yet BIND didn't solve the wildcard
problem. It solved a delegation problem, which was not only not broken
but has traditional use.

Which "countermeasures being implemented" did the IAB have an issue
with? I wonder, since their argument against the wildcards was the fact
that they break traditional use. BIND now easily breaks traditional use.

-Jack





Re: ICANN asks VeriSign to pull redirect service

2003-09-23 Thread Jack Bates
John Dvorak wrote:
and the response from Russell Lewis:
http://www.icann.org/correspondence/lewis-to-twomey-21sep03.htm
Expletive deleted! The Internet worked perfectly fine for years. They 
make a change which is confirmed to disrupt service. Instead of 
restoring the stable state while conducting a review, they feel that 
they must keep the service as is? This is the problem with a for-profit 
company. They are keeping it to make money. The truth is that no one 
would complain about reverting to the stable condition which 
everyone has lived with for years. They are complaining now. In 
addition, the IAB has already published a report that pointed out the 
various problems. Much discussion has existed on the topic across all 
the major networking/spam/security mailing lists. It is obvious that 
they have broken a lot of things. To not revert is to push their own 
needs; i.e., profit.

This quote is also interesting:

This was done after many months of testing and analysis and in compliance with
all applicable technical standards
The system is technically within standards. The purpose of the IAB is 
not only to watch standards, but also to understand common use of the 
network. Many standards have been changed to reflect common use. A good 
section of RFCs are about common use. As networks implement standards, 
there are always incompatibilities and extras that get added in. As 
deployment reaches general use, it is one of the duties of the IAB to 
recognize that such utilization is in place and what the overall effect 
on the Internet is.

In this case, the IAB did state that the wildcard use broke commonly 
used mechanisms deployed on the Internet, even if it was technically 
within the standard. This is one reason it was recommended that the 
users of the TLD be allowed to decide whether a wildcard is appropriate. 
For .museum, it is appropriate. It's even in their ICANN agreements. For 
com and net, no such precedent was ever set, and complaints from the 
users of said TLDs are being ignored. Common use was broken.

-Jack



Re: bind 9.2.3rc3 successful

2003-09-23 Thread Jack Bates
Paul Vixie wrote:
i do not expect the ietf to say that root and tld zones should all be
delegation-only.  but good luck trying.
It hasn't been that large an issue in the past, and as pointed out by 
some, the countermeasures are just as harmful. I hope that 
delegation-only is only a temporary measure in BIND. I'm sure some 
people will keep it running and probably put it on TLDs where it'll 
break valid records.

-Jack



Re: Providers removing blocks on port 135?

2003-09-23 Thread Jack Bates
Mike Tancsa wrote:

Local government has nothing to do with it.  It was just some dime a 
dozen porn company.

Back to the "everyone's doing it, so let's not bother" syndrome.

-Jack



Re: bind 9.2.3rc3 successful

2003-09-23 Thread Jack Bates
Dan Riley wrote:

It breaks a few things we care about--for example, www.ithaca.ny.us is
a naked CNAME in the us root:
There's no reason to force .us as delegate only. Force com and net to 
delegate only and you'll have the Internet as it was before this debate 
started.

-Jack



Re: Providers removing blocks on port 135?

2003-09-23 Thread Jack Bates
Mike Tancsa wrote:

I am not advocating that at all (everyone's doing it, so let's not 
bother). However, I don't see what the municipal government has to do 
with a matter like this. I imagine it's a civil issue where you have to 
get the lawyers involved :(  Certainly if the company persisted, we 
would have done so. The fact that they can then go to another ISP who 
does not care and allows them to use their network is another issue.

Of course, it depends on the local laws, but in many locations, 
pornography has a lot of restrictions, and when those restrictions are 
broken, it becomes a criminal matter. For example, most of my users 
have family accounts. This means that their email is not only theirs 
but their children's and grandchildren's. Even if the owner of the account 
is an adult, the fact that their children are present when they read 
their email means that all pornographic spam they receive is essentially 
being delivered to a minor. This is especially true with misleading 
subject lines, where children are exposed to unwanted material before 
anyone realizes it. In Oklahoma, at least, it is a criminal offense to 
expose children to pornographic material.

-Jack



Re: Verisign Responds

2003-09-23 Thread Jack Bates
Paul Vixie wrote:

wildcards don't work that way.  there are ns rr's in .com for verisign.com,
so you get a referral to those servers no matter whether a *.com wildcard
exists or not.
I think the point was that if catching typographical errors was so 
important to Verisign, they would have created a *.verisign.com wildcard 
as well.

-Jack



Re: Verisign Responds

2003-09-23 Thread Jack Bates
Dan Hollis wrote:

On Tue, 23 Sep 2003 [EMAIL PROTECTED] wrote:

On Mon, 22 Sep 2003, Dave Stewart wrote:

Courts are likely to support the position that Verisign has control of .net 
and .com and can do pretty much anything they want with it.
ISC has made root-delegation-only the default behaviour in the new bind, 
how about drafting up an RFC making it an absolute default requirement for 
all DNS?
That would be making a fundamental change to the DNS
to make wildcards illegal anywhere. Is that what you
want?


no it wouldn't. it would just make wildcards illegal in top level domains, 
not subdomains.

Actually, it's worse than that. root-delegation-only does not just 
change the wildcard behavior. RRs which are in the TLD itself instead of 
being delegated (like some of the ccTLDs) break if forced into 
root-delegation-only. This is one of the points in the IAB opinion 
concerning remedies causing other problems.

The issue itself is political, but it does have technical ramifications. 
It remains to be seen if ISC's cure is worse than the disease, as 
instead of detecting and stopping wildcard sets, it looks for delegation. 
It is also configurable to a degree that inexperienced operators will 
break their DNS implementations out of ignorance (like ignoring the ISC 
recommendation and root-delegating .de).

One should consider sponsored TLDs like .museum the exception. If you 
have filtering rules (like SMTP) that are bypassed as a result of the 
wildcard, then those rules themselves should be changed. The sponsored 
TLDs and even a lot of the ccTLDs have a rather small subdomain base, 
allowing for unified agreement on changes made to the zone. The legacy 
TLDs should be rather static to ensure stability in the DNS architecture 
overall. Their subdomain base is massive, making communication and 
agreement on changes difficult. If I'm not mistaken, this is one of the 
duties of ICANN.

-Jack





Re: monkeys.dom UPL being DDOSed to death

2003-09-23 Thread Jack Bates
Raymond Dijkxhoorn wrote:

[Mimedefang] monkeys.dom UPL being DDOSed to death 
Jon R. Kibler [EMAIL PROTECTED] 
Tue Sep 23 14:15:01 2003 
The computer security industry really needs to figure out how to get law 
enforcement to take these attacks seriously. It would only take a few good 
prosecutions to put an end to these types of attacks. Any 
thoughts/suggestions?

This is really a dark day for those of us fighting spam. It looks like the 
spammers have won a BIG battle. The only question now is who will be the 
casualty in this war?

This goes beyond spam and the resources that many mail servers are 
using. These attacks are being directed at anti-spam organizations 
today. Where will they point tomorrow? Many forms of breaking through 
network security require that a system be DOS'd while the crime is being 
committed. These machines won't quiet down after the blacklists are shut 
down. They will keep attacking hosts. For the US market, this is a 
national security issue. These systems will be exploited to cause havoc 
among networks of all types and sizes, governmental and commercial.

Windows Update may be protected for now, but it still has limitations. 
It can be killed to the point of non-use. Then how will systems get 
patched to protect themselves from new exploits? The problem will 
escalate. There are many financial institutions online. Does anyone 
doubt that their security can be penetrated? What about DoD networks?

There are a lot of social aspects to internetworking. Changes need to be 
made. Power needs to be allocated appropriately. A reckoning needs to 
occur. All the businesses that make and spend massive amounts of money due 
to the Internet need to strongly consider that there won't be a product 
if the social ramifications aren't solved.

Users don't want to be online and check email just to find hundreds of 
advertisements, pornography, and illegal material in their inbox. Users 
don't want to hear that they've been infected with the latest virus and 
can no longer be online until they fix the problem, usually costing them 
money. Users don't want to hear that they can't reach site X because of 
some change in architecture. If the general masses get fed up with the 
Internet, there won't be an Internet. Millions of dollars are easily 
being lost because of malicious activity on the Internet. Millions more 
are being lost due to differences of opinion in the governing bodies of 
the Internet.

Is everyone so short-sighted and greedy as to not recognize that they 
are dying a slow financial death?

-jack



Re: monkeys.dom UPL being DDOSed to death

2003-09-23 Thread Jack Bates
Joe St Sauver wrote:
Note that not all DNSBLs are being effectively hit. DNSBLs which run with
publicly available zone files are too distributed to be easily taken down,
particularly if periodic deltas are distributed via cryptographically 
signed Usenet messages (or other push channels). You can immunize DNSBLs
from attack, *provided* that you're willing to publicly distribute the 
contents of those DNSBLs. 
Actually, the SBL has had a lot of issues. The issue isn't always with the 
DNS zones. It is true that one can distribute the zones to make a dDOS 
more difficult, although not impossible. However, in the case of the SBL, 
they have had issues with the web servers being dDOS'd. The ability to 
look up why a host is blacklisted, and in the case of relay/proxy lists 
to request removal, is also important.

There are still a lot of blacklists out there: njabl, ordb, dsbl, 
reynolds, sbl, and spews (in a roundabout sort of way). Yet what 
happens when a business decides to destroy its competitor's website? 
What happens when someone decides they don't like magazine X or vendor X 
and attacks their web farms? Shall the Internet be called Akamai? Don't 
get me wrong. It's a good service, but not invulnerable. 
windowsupdate.com can still be brought to its knees if the attacker is 
persistent enough.

Of course, when big-money businesses are involved, things get done. Yet 
what about the smaller business or the charity? What about critical 
infrastructure? Does anyone claim that MAE East and West couldn't be 
made inoperable by dDOS? How does that shift the network and peering? 
What are the ramifications?

Of the various RPC worms, Spybot is the most malicious in intent. Yet 
what if parts of Swen/Gibe/Sobig.F were incorporated into Blaster? 
Process terminations to make repair difficult and to open the computer 
to other viruses and vulnerabilities. Installed proxy servers and bots. 
Keyloggers. Now collect your information, gather your bots, and watch a 
single phrase create destruction.

Things have not improved over the last year. They have gotten worse. The 
Internet is more malicious than ever. It is quickly becoming the Inner 
City Projects of communication. Greed and hatred created some of the 
worst neighborhoods in the world. The same concept will apply to 
networks. If action isn't taken, it will get worse. More money will be 
lost over the coming years. Many people will be hurt. Communication will 
be impaired.

Question: Why is it not illegal for an ISP to allow a known vulnerable 
host to stay connected and not even bother contacting the owner? There 
are civil remedies that can be sought, but no criminal ones. Bear in mind, 
these vulnerable hosts are usually in the process of performing 
malicious activity when they are reported.

Ron has reported many of the IP addresses that dDOS'd monkeys.com. By 
the same token, Ron has also reported to many ISPs the spammers which 
have abused servers under his control, scanning and utilizing open 
proxies, which is theft of resources. Why is nothing done about these 
people? Why is the ISP not held liable for allowing the person to 
continue in such malicious activity?

-Jack



Re: Detecting a non-existent domain

2003-09-23 Thread Jack Bates
Kee Hinckley wrote:

Getting practical for a minute.  What is the optimal way now to see if a 
given host truly exists?  Assume that I can't control the DNS server--I 
need to have this code run in any (*ix) environment. Assume also that I 
don't want to run around specialcasing specific IP addresses or 
TLDs--this needs to work reliably no matter what the domain.  User gives 
me a string, and I need to see if the given host is a real machine.

A set comparison between the domain you're interested in and *.TLD will 
tell you if the domain points to the same IP addresses as the 
wildcard. In many cases, this is sufficient, and it can be made to work 
dynamically and quickly with most software and scripts.
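
For illustration, here is a minimal sketch of that comparison in Python. 
It assumes the dnspython library is available; the function name and the 
error handling are illustrative only, and a production check would also 
want to handle timeouts and CNAME chains.

import dns.resolver

def resolves_to_wildcard(hostname):
    # Compare the host's A records against the TLD wildcard's A records.
    tld = hostname.rstrip(".").rsplit(".", 1)[-1]
    try:
        wild = {r.address for r in dns.resolver.resolve("*." + tld, "A")}
        host = {r.address for r in dns.resolver.resolve(hostname, "A")}
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        # Either the TLD publishes no wildcard, or the host really
        # does not exist; nothing to compare in either case.
        return False
    return host == wild

A host that "exists" only because it matches the wildcard set returns 
True here and can be treated as nonexistent by the caller.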

-Jack



Re: VeriSign SMTP reject server updated

2003-09-22 Thread Jack Bates
Matt Larson wrote:

In response to this feedback, we have deployed an alternate SMTP
implementation using Postfix that should address many of the concerns
we've heard.  Like snubby, this server rejects any mail sent to it (by
returning 550 in response to any number of RCPT TO commands).
Matt,

The problem is that some systems have a specially formatted response 
message that they send to their users under certain conditions. For 
example, commonly used Exchange servers will send "User unknown" for any 
550 issued on a RCPT command, whereas they would inform the user that 
the domain did not exist for NXDOMAIN. I have heard that these messages 
were also sent back in the proper language.

How will users of such systems know if it was a recipient issue or a 
domain issue? Granted, part of this problem in the example is the SMTP 
implementation (and any abuse desk will tell you that it is 
aggravating to get a call about a "User unknown" message when a Security 
Policy 550 5.7.1 was issued with comment).

Of course, mail is the least of the concerns. There are millions of programs 
written that check for NXDOMAIN. A lot of this software cannot readily 
be changed to recognize the wildcard, requiring recursors to be patched, 
which is almost as repulsive as the wildcard to begin with.

Here are just two commonly used applications whose output has changed, 
which will break many expect scripts and then some.

$ ftp jkfsdkjlsfkljsf.com
ftp: connect: Connection refused
ftp> quit
$ ftp jklfskjlsfljks.microsoft.com
jklfskjlsfljks.microsoft.com: unknown host
ftp> quit
$ telnet jlkfsjklsfjklsfd.com
Trying 64.94.110.11...
^C
$ telnet jksfljksfdljkfs.microsoft.com
jksfljksfdljkfs.microsoft.com: Unknown host


-Jack



Re: Providers removing blocks on port 135?

2003-09-19 Thread Jack Bates
Adam Hall wrote:



Anyone know anything about providers removing ACLs from their routers 
to allow ports 135/445 back into their network?  Curious only 
because customers are calling in saying that Verizon, Cox, Bellsouth, 
and DSL.net are doing so and seem to have a big problem with the fact 
that we're hesitant to follow their lead.

No two networks are the same, nor do they have the same issues. The new 
RPC exploit worm will be interesting to watch on the above networks if 
they've dropped their blocks. There's also the question of at which layer 
they have done so; for example, whether blocks were removed from central 
sites in favor of blocks pushed out to the end users.

Allowing the various scans out costs other people money. If nothing 
else, I'll leave 135 in place long enough to ensure that the number of 
users that are infected is manageable. My transit customers are all 
telling me the same thing. They are still pushing to get people 
cleaned up and patched. They want their blocks to remain (so they don't 
have to pay us more).

-Jack



Re: Providers removing blocks on port 135?

2003-09-19 Thread Jack Bates
Owen DeLong wrote:

Yes. I responded to this in a previous post. We must do what we must do 
temporarily to keep things running. However, breaking the net is not a 
long term solution. We must work to solve the underlying problem or it 
just becomes an arms race where eventually no services are useful.

I agree, and as a point of fact, many ISPs allow their users to opt out 
of spam filtering. The ability to opt out of port filtering is a little 
more difficult, but it is not impossible. Most authentication methods 
have support for telling connection equipment what security 
lists to use and how to treat a specific user. Some systems, like mine, 
run authentication models that do not support this, but I consider it 
very wise to change.

In my case, I will maintain a filter anywhere in the network that it is 
required in order to help protect the network and the users who rely 
upon it. Currently, estimates show that removing port 135 at 
this juncture would allow the current Blaster-infected users to become 
infected with Nachi/Welchia, which has more network impact. Some 
segments, despite blocks, have already had small outbreaks which we had 
to eradicate. In addition, dialups have very little bandwidth to begin 
with. The amount of traffic generated on ICMP and port 135 is currently high 
enough to severely cripple connectivity on an unprotected dialup account.

I do agree that it is a temporary measure. Yet one must remember that 
each network has its own definitions of temporary, drastic, and 
appropriate. I now return you to contacting those infected users in your 
network. :)

-Jack



Re: Root Server Operators (Re: What *are* they smoking?)

2003-09-18 Thread Jack Bates
Paul Vixie wrote:

actually, i had it convincingly argued to me today that wildcards in root
or top level domains were likely to be security problems, and that domains
like .museum were the exception rather than the rule, and that bind's
configuration should permit a knob like don't accept anything but delegations
unless it's .museum or a non-root non-tld.  i guess the ietf has a lot to
think about now.
Paul,

I would argue, as seen in some of my other posts, that the wildcard 
feature of .museum is not always wanted either. Would it not be wise to 
push forward into the future with support for software to request whether 
it wants a wildcard or not? While a wildcard bit is ideal, there are 
methods of determining wildcards programmatically. Being able to cache and 
handle such information is important, as different applications have 
different requirements.

After all, is this the Internet or just the World Wide Web? Wildcards at 
the roots cater solely to the web and disrupt other protocols 
which require NXDOMAIN.

-Jack



Re: Class A Data Center

2003-09-18 Thread Jack Bates
[EMAIL PROTECTED] wrote:
Particularly of interest would be established standards for Class A
Datacenter specifically relating to the physical plant -- Power,
cooling, physical security, etc.  I think we can all agree in general on
N+1 everything, and we can go round and round again on what exactly
constitutes Tier-1 provider, but what about the physical space itself?
I can put a fully-redundant network with multiple Tier-1 connections
in my garage but I still wouldn't consider my garage to then be a Class
A Datacenter.
And let's not forget that they need to have good staffing, especially 
the abuse department.

-Jack



Re: Root Server Operators (Re: What *are* they smoking?)

2003-09-17 Thread Jack Bates
Paul Vixie wrote:
no.  not just because that's not how our internal hashing works, but
because hosted tld's like .museum have had wildcards from day 1 and
the registrants there are perfectly comfortable with them.  there's
no one-policy-fits-all when it comes to tld's, so we would not want
to offer a knob that tried to follow a single policy for all tld's.
I agree, Paul. This is a policy issue and not a technical issue. TLDs 
that are sponsored or set up with a specific design sometimes do and 
should be allowed to use a wildcard for the TLD. The issue has become 
that net and com are public trusts; changes were made to them without 
authorization by the public, and damage was caused as a result.

Just as root server operators are subject to operating as IANA dictates, 
so should Verisign be subject to operating as the IAB and ICANN dictate. The 
Internet as a whole depends on the predictability of TLDs. It is 
impossible to maintain a security policy based on unpredictable information.

I would recommend that the TLDs which do utilize wildcards set up or 
restrict such use in a predictable manner. While historically it has not 
been an issue, such as nonexistent .museum domains being forged in email 
envelopes, such practices could be exploited at a later time. The 
ability to recognize that a domain is not registered and should not be 
used is paramount in basic forgery detection.

One method that might be considered for recursive servers as well as 
resolvers is the ability to specify whether a wildcard entry will be 
accepted, perhaps at any level or just at the 2nd level. Cached 
records which are wildcards could be marked as such so that subsequent 
queries could specify a desire to accept or not accept the wildcard 
entry. A web browser, for example, which supports its own redirections 
for NXDOMAIN, might wish to ignore the wildcard records, as would SMTP 
servers.

While I believe that net and com should never have wildcards, the 
ability to detect, cache, and override wildcards for TLDs such as 
.museum when the application requires it is paramount. I realize that 
the client software can perform the queries and detection itself, but in 
many cases there wouldn't be an efficient way to cache the information 
without the resolver and recursive cache being aware of the process, and 
performing such detection would require two queries versus one.

-Jack



Re: Verisign brain damage and DNSSec.....Was:Re: What *are* they smoking?

2003-09-17 Thread Jack Bates
Eric Germann wrote:

And what's to say they don't get around our methods of blacklisting it by 
changing the IP around every zone update?
 
result = query domain.tld
wild = query *.tld
if result == wild && dontwantwild then result = NXDOMAIN

-Jack



Re: Root Server Operators (Re: What *are* they smoking?)

2003-09-17 Thread Jack Bates
Aaron Dewell wrote:

The point is, this makes a reasonable backup plan.  Far from ideal, but
we're dealing with a state-supported monopoly who can do whatever they
want.  Get this in place, then think about how to throw the monopolies
out.  This works in the meantime.  They will likely compromise this far,
even if they won't back down.
I'm thinking security for the long term. Even if com and net are 
returned to their non-wildcard states, there are other TLDs which will 
continue using wildcards. Pending a wildcard bit being implemented in 
DNS, my suggested method allows for optimum performance and 
functionality when DNS is being used as part of a security model.

The TTL is 15 minutes, so your hypothetical server would be throwing away
its cache every 15 minutes.  Then re-querying everything.  You'd have to
have a _lot_ of outgoing email to justify that.
I don't know about you, but I don't want to cache bogus information for 
longer than 15 minutes. If someone sends random-string domains as the 
envelope from to my mail server, I want the cache to purge itself 
quickly. Yet if they are sending the same bad address to my mail server 
repetitively, I want my cache to hold the record briefly; say, 15 
minutes. I'd hate to see a spammer issuing jlkfsjklfsj.com 5,000 times 
to my mail server in rapid succession and my recursor have to ask for it 
every time. On the other side, I would hate to cache 100,000 bogus 
domains for a day, wasting cache.

This solution still requires increased overhead, and more modifications
to BIND.  Which has more impact on your server, this BIND overhead, or one
additional query from your MTA?  My guess is the query is cheaper overall.
And you have to convince ISC to implement these changes, or write them
yourself, then you have the potential cost of an unstable nameserver.
Overall, I'd take the one additional query based on the compromise solution.
My mail server doesn't use a BIND recursor, so I'll end up making the 
change myself for that particular system. However, a solution needs to 
be devised for the long term. The best solution is a wildcard bit. 
Second to that, smart recursors and resolvers that can detect the wildcard.

-Jack





Re: IP issues with .com/.net change?

2003-09-17 Thread Jack Bates
Alex Kamantauskas wrote:

 Not really operational content, but I was wondering if there was an
 intellectual property issue with the Verisign .com/.net redirect?
Not sure about IP, but there are privacy issues. Verisign has 
intentionally redirected all email with a mistyped recipient domain to 
their server. Instead of immediately rejecting and terminating the 
connection, they allow the sender to issue 3 commands, which would 
typically give them the sender and RCPT information, where previously 
that information would not leave the originating mail server. How could 
this be construed as anything but address harvesting and a breach of 
privacy?

In addition, at no point has Verisign obtained permission to steal 
information in this way. They are eavesdropping! Every time I've 
checked, port 80 was down on the destination IP, but 25 was running full 
speed. It makes me wonder if their real intent wasn't to collect that 
information to begin with.

-Jack



Re: Verisign brain damage and DNSSec.....Was:Re: What *are* they smoking?

2003-09-16 Thread Jack Bates
[EMAIL PROTECTED] wrote:
How frikking many hacks will we need to BIND9 to work around this braindamage?
One to stuff back in the NXDomain if the A record points there, another to
do something with make-believe DNSsec from them. What's next?
You mean that you don't like it when the Authority the community places 
its trust in abuses that power? Heh. Go figure. I hope they are sued out 
of existence. At the least, ICANN needs to do its job. I have a severe 
issue with changes being made that cause a lot of damage.

-Jack



Re: More on the DDoS Attack

2003-09-13 Thread Jack Bates
Eric Gauthier wrote:
  
Take a look and let me know what you think.  Any question or comments -  
editorial or otherwise - would be greatly appreciated. 

Nice layout. Reverse the process so the default is a good host, 
integrate it with RADIUS, and use access lists versus private/public 
addresses, and you have a nice method for jailing an infected user so 
that they can still dial up and get virus definitions, patches, etc., and 
that's it. Granted, it would take some tweaking.

-Jack



Re: Microsoft distributes free CDs in Japan to patch Windows

2003-09-09 Thread Jack Bates
Petri Helenius wrote:
How long until the next worm/virus/trojan would first disable this 
handshake and then attach to the network? Or do you expect to terminate 
customers within the 24 hours after new patches are out if they don't 
patch? Or 72 hours?

I fully expect malicious code and even users to disable the handshake. 
That's fine. If a user happens to become infected, then they can be 
suspended or transferred to *must perform handshake* status.

Not everyone uses antivirus software. Not everyone will patch the 
security holes in their current software. Many would object to having to 
perform patches and delay their Internet surfing. Yet with such a 
protocol, a way could be provided for a user to establish a 
connection which only allows them to fix their system, without the 
outside world being able to attack them and vice versa. Once patched, the 
system would recognize them as patched and allow full IP connectivity.

Imagine how nice it would be if someone buying an XP machine this 
morning could actually connect to the Internet, patch their system, and 
be able to use the Internet without ever having their RPC exploited. If 
a user is infected with a virus, wouldn't it be nice if they could 
purchase A/V software and then be able to perform updates and clean 
their system without causing any harm to the network?

-Jack



Re: What were we saying about edge filtering?

2003-09-06 Thread Jack Bates
Christopher L. Morrow wrote:

keep in mind it's not destination addresses that are the problem here, BUT
True, but there are RPF checks based on routing. Anything routed to Null0 
is generally treated by such filters as an invalid route, and packets 
with a source address from such a route will be discarded.

Setting up BGP peers internally and applying route policies to null-route 
the routes received from the bogon peers would allow for easily 
invalidating the routes and dropping packets which supposedly originate 
from them.

I know this is easily done with vendor C. I suspect that the other 
vendors have implemented something very similar (I've heard J was easier than C).
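
In rough terms, the per-packet check amounts to the following. This is a 
minimal sketch in Python using the standard ipaddress module; the route 
table and names are illustrative only, since real routers do this lookup 
in the forwarding hardware.

from ipaddress import ip_address, ip_network

# Illustrative RIB: prefix -> next hop, with bogon routes pointed at Null0.
ROUTES = {
    ip_network("0.0.0.0/0"): "upstream",
    ip_network("10.0.0.0/8"): "null0",
    ip_network("192.0.2.0/24"): "null0",
}

def permit_source(src):
    # Longest-prefix match on the packet's *source* address; drop the
    # packet if the best route for that source is a null route.
    addr = ip_address(src)
    best = max((p for p in ROUTES if addr in p), key=lambda p: p.prefixlen)
    return ROUTES[best] != "null0"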

-Jack



Re: BMITU

2003-09-06 Thread Jack Bates
Robert Bridgham wrote:

it runs, but even Hotmail.com uses Qmail as its MTA.  This is one of the
leading webmail sites in the world, with between 80-100 million accounts, and
still running strong.  I would definitely put my vote to Qmail for any
organization, any size!
telnet mx1.hotmail.com 25
Trying 65.54.252.99...
Connected to mx1.hotmail.com.
Escape character is '^]'.
220 mc5-f7 Microsoft ESMTP MAIL Service, Version: 5.0.2195.5600 ready at Sat, 6 Sep 2003 13:51:52 -0700
quit
221 mc5-f7 Service closing transmission channel
Connection closed by foreign host.

I wouldn't recommend it myself, but well... ummm, yeah.

-Jack



Re: What do you want your ISP to block today?

2003-09-04 Thread Jack Bates
Gerardo Gregory wrote:

these ports.  The Internet in itself is nothing more than a 
communications link, and the ISPs are providers of this link.  The 
purpose of which is the exchange of information over a public medium.
You want an ISP to begin filtering at the 4th layer (OSI 
Reference...yikes), why?  Besides alleviating the headaches of some 
Hmmm. Perhaps I should shut down my abuse desk and just be a 
communications link. After all, the user's computer wants to transmit 
viruses or spam, so why should I stop it?

If people run layer 7 filtering to stop abuse, what makes you think they 
won't run layer 4 filtering to meet the same goals? A lot of networks 
already run layer 3 filtering for misbehaving networks and bogon filters. 
Spam filtering takes place anywhere from layer 3 to layer 7, depending on 
the network.

One can't have it both ways. You either do no filtering and watch the 
system completely crash as you can't afford the overhead of the 
malicious content which is on the rise, or you apply filters to protect 
your network and *the* network overall. Not filtering consumer networks 
will cause issues at the backbone networks, forcing upgrades and driving 
prices back up.

If we don't protect *our* network, then some governments will start 
mandating how they'll protect it. I for one do not wish to give up 
control of what I've designed, built, and improved to people who usually 
don't know what telnet is, much less ssh.

-Jack



Re: What do you want your ISP to block today?

2003-09-04 Thread Jack Bates
Johannes Ullrich wrote:

Charge the same and take your 'abuse' team out for lunch on the change
you save by blocking the ports ;-)
We were looking at blocking port 25 outbound, except to designated 
servers, for many of our dialup and broadband customers as well. Those 
with the service get the benefit of not worrying about account 
suspensions for a majority of the issues (open proxies, viruses, yada 
yada). You'd be surprised how many customers really don't want their 
system suspended and don't care if they have 30 viruses.

-Jack



Re: What were we saying about edge filtering?

2003-09-04 Thread Jack Bates
Christopher L. Morrow wrote:
At the edge, very near the originating host there is no reason not to
filter these, if you find the sources you might consider asking them why
they didn't filter these for you...
And what is the reason to not filter these in the backbone? Full spoof 
protection at some levels is near impossible. However, bogon filtering 
is not.

-Jack



Re: BMITU

2003-09-04 Thread Jack Bates
Fisher, Shawn wrote:

I would like to get some opinions on the Best Mailserver in the Universe.
Is there a more appropriate list for this question?
I'm partial to sendmail due to the grandfather clause, but if I could go 
back in time and redesign everything, I'd be a diehard postfix fan. I 
have seen postfix and sendmail used in mail servers handling over 30 
million accounts. Good enough for ya?

-Jack



Re: What were we saying about edge filtering?

2003-09-04 Thread Jack Bates
[multiple response]

Christopher L. Morrow wrote:

I'm going to take a stab at: The next 69.0.0.0/8 release? Certainly there
was some lesson learned from this, no?
I don't buy it, Chris. Are you saying that a large backbone provider 
can't maintain up-to-date bogon filters? In fact, I'd say they would be 
better at it, and if they were using the filters, then there would be 
less need for their customers to apply the filters and we'd have fewer 
bogon issues in the future.

Owen DeLong wrote:
 Source address-based filtering in the backbone is expensive and, in
 many cases, non-feasible.
Most vendor equipment is easily capable of handling bogon filtering 
using any number of methods. This is particularly true when filtering 
packets that are not announced bogons (such as most dDOS spoof attacks), 
even if announced bogon packets are allowed through.

-Jack



Re: bgp as-path info

2003-09-02 Thread Jack Bates
If you look closely, they are probably not just stripping your AS. They 
are probably aggregating your network. One provider that I am aware of 
that does this is AT&T. Since your advertisements out the other network 
will be more specific, traffic will only come through them. If the 
networks are the same size, then traffic will most likely come through 
your first provider due to AS path length.

Usually, you have to request that your more specific routes be allowed 
out due to multi-homing. In the case of AT&T, they have a community that 
you must send with the route to have it sent beyond their local network. 
It's really just a matter of default preference on the part of your 
provider. Some default to advertising more specifics while others default 
to advertising their aggregates. The latter is used most commonly when a 
provider does a lot of BGP peering that is not multi-homed. It's not a 
bad policy when it comes to the size of the BGP tables.

-Jack

Austad, Jay wrote:

I just brought up a BGP session with one of my providers, they are stripping
our AS as it leaves their network, so it looks like the route is originating
from their network.  I have another provider that I will be bringing up BGP
with later this week.  Once I bring up the other provider, I will be
advertising several networks out both of them.
Is this as-path stripping going to cause issues?  Does it matter either way?

-jay




Re: IPv6 vs IPv4 (Re: Sprint NOC? Are you awake now?)

2003-09-02 Thread Jack Bates
Nenad Pudar wrote:

Again, my point is that your site (or any other that uses the same DNS 
name for IPv4 and IPv6) may be blackholed over IPv6 (it is not primarily 
a question of the quality of the IPv6 connection; it is the fact that 
your IPv4 connection, which may be excellent, is blackholed along with 
your IPv6 connection, which may not be good, and to me the most obvious 
solution is not to use the same DNS name for both) for people coming 
through 6bone, or even for the majority of people not peering with Verio.

It's a valid point, except that IPv4 could just as easily have had a 
problem. Network connectivity issues happen. Whether one uses IPv4 or 
IPv6 in the connection is not decided by the server, but by the client. 
If an IPv6 path is really bad, the client should switch to an IPv4 path 
and vice versa. If the software in use by the client does not make this 
easy, it is not the fault of the server.

Perhaps a better solution than different DNS names for IP versions 
would be better client abilities. Is it unreasonable for the client 
system to detect that the IPv6 path is performing badly and quickly check 
to see if there is a better IPv4 path? Or perhaps the software utilizing 
the IP stack should allow the user to specify which method they'd like 
to use at that moment in time (i.e., web browser: view site with 
IPv4|IPv6).

This would solve the problem you are indicating and not overcomplicate 
the server side, which is working fine. People don't want to learn to 
type www.ipv6.example.com and www.ipv4.example.com. It makes much more 
sense to just change the software to choose which method it wants. Not 
that software vendors would incorporate such features.
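
As a minimal sketch of what such client behavior could look like, 
assuming only Python's standard socket module (host, port, and the 
timeout value are illustrative):

import socket

def connect_with_fallback(host, port, timeout=3.0):
    # Try the IPv6 path first with a short timeout, then fall back to IPv4.
    last_err = None
    for family in (socket.AF_INET6, socket.AF_INET):
        try:
            infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
        except socket.gaierror as err:
            last_err = err
            continue  # no addresses in this family; try the next one
        for fam, socktype, proto, _name, sockaddr in infos:
            sock = socket.socket(fam, socktype, proto)
            sock.settimeout(timeout)
            try:
                sock.connect(sockaddr)
                return sock  # first path that answers promptly wins
            except OSError as err:
                last_err = err
                sock.close()
    raise last_err or OSError("no usable address family")

A real client would probably want to race the two families rather than 
trying them serially, but this shows the shape of the fallback.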

-Jack

This is the trace from the 6bone looking glass

traceroute6 to 2001:418:3f4:0:2a0:24ff:fe83:53d8 
(2001:418:3f4:0:2a0:24ff:fe83:53d8) from 2001:6b8::204, 30 hops max, 12 
byte packets
1  6bone-gw4  0.749 ms  0.537 ms  0.506 ms
2  gw1-bk1  1.103 ms  1.101 ms  1.046 ms
3  tu-16.r00.plalca01.us.b6.verio.net  186.424 ms  186.129 ms  187.344 ms
4  tu-800.r00.asbnva01.us.b6.verio.net  246.76 ms  246.798 ms  246.759 ms
5  t2914.nnn-7202.nether.net  458.76 ms  446.925 ms  496.061 ms
6  2001:418:3f4:0:2a0:24ff:fe83:53d8  450.172 ms  477.296 ms  453.895 ms





Re: What do you want your ISP to block today?

2003-08-30 Thread Jack Bates
Rob Thomas wrote:

Oh, good gravy!  I have a news flash for all of you security experts
out there:  The Internet is not one, big, coordinated firewall with a
handy GUI, waiting for you to provide the filtering rules.  How many
of you experts regularly sniff OC-48 and OC-192 backbones for all
those naughty packets?  Do you really want ISPs to filter the mother
of all ports-of-pain, TCP 80?
Yes. While I hate to admit it, the one thing worse than not applying 
filters is applying them incorrectly. A good example would be the ICMP 
rate limits. It's one thing to shut off ICMP, or even to filter 92-byte 
ICMP. The second someone rate-limits ICMP echo/reply, they have destroyed 
the number one network troubleshooting and performance testing tool. If 
it was a full block, one would say it's filtered. Yet with rate 
limiting, you just see sporadic results: sometimes good, sometimes high 
latency, sometimes dropped.

Filter edges, and if you apply a backbone filter, apply it CORRECTLY! 
Rate-limiting ICMP is not correct.

-Jack



Re: What do you want your ISP to block today?

2003-08-30 Thread Jack Bates
Sean Donelan wrote:

If you don't want to download patches from Microsoft, and don't want to
pay McAfee, Symantec, etc. for anti-virus software, should ISPs start
charging people clean-up fees when their computers get infected?
www.google.com
+Free +AntiVirus
Now was that so hard?

-Jack



Re: Automatic shutdown of infected network connections

2003-08-30 Thread Jack Bates
Sean Donelan wrote:

How many ISPs disconnect infected computers from the network?  Do you
leave them connected because they are paying customers, and how else
could they download the patch from microsoft?
We disconnect after contact if they remain infected after 72 hours, or 
once we determine contact won't be possible.

Users are responsible for their own computers. We understand that many 
of them need the service in order to fix their systems. However, a line 
has to be drawn at some point. I want the 135 blocks removed, and in 
order to do that, the malicious packets must be reduced to a minimum.

-Jack



Re: On the back of other 'security' posts....

2003-08-30 Thread Jack Bates
Owen DeLong wrote:
 Again, I just don't see where an ISP can or should be held liable for
forwarding what appears to be a correctly formatted datagram with a valid
destination address.  This is the desired behavior and without it, the
internet stops working.  The problem is systems with consistent and
persistent vulnerabilities.  One software company is responsible for
most of these, and, that would be the best place to concentrate any
litigation aimed at fixing the problem through liquidated damages.
Most dDOS's come from bots. Bots are installed on all operating systems 
and all architectures. I'd be surprised if the packets are all spoofed. 
In most dDOS cases these days, they are real IPs and just a high number 
of bots.

The person responsible is the bot maintainer. Finding the controller 
medium (probably IRC) is the hard part, but once that's done, monitoring 
who controls the bots isn't nearly as hard. Tracking them down can be a 
bit of fun, but usually they get lazy about covering tracks at that point. 
A few media-enriched prison sentences would be good.

-Jack



Re: GLBX ICMP rate limiting (was RE: Tier-1 without their own backbone?)

2003-08-29 Thread Jack Bates
Temkin, David wrote:

We've noticed that one of our upstreams (Global Crossing) has introduced 
ICMP rate limiting 4/5 days ago.  This means that any traceroutes/pings 
through them look awful (up to 60% apparent packet loss).  After 
contacting their NOC, they said that the directive to install the ICMP 
rate limiting was from the Homeland Security folks and that they would not 
remove them or change the rate at which they limit in the foreseeable 
future.

rant
Are people idiots, or do they just not possess equipment capable of 
trashing 92-byte ICMP traffic and letting the small amount of normal 
traffic through unhindered? They are raising freakin' complaints from 
users who think the Microsoft ICMP tracert command is just the end-all, 
be-all and is of course completely WRONG with rate-limiting in effect.
/rant

-Jack



Re: Fun new policy at AOL

2003-08-29 Thread Jack Bates
Gary E. Miller wrote:
Maybe if PacBell (and others) actually disciplined their more out of
control DSL customers then other ISPs would not feel the need to do it
for them.
It doesn't matter. A large percentage of open proxies are on dynamic 
DSL. Since a lot of ISPs will not handle proxy reports and take care of 
the problem, and the blacklists are about useless because the open proxy 
will switch IPs, it's just best to wipe out the entire dynamic range.

-Jack



Re: Fun new policy at AOL

2003-08-29 Thread Jack Bates
Michel Py wrote:
 If ISPs don't want people to run SMTP servers on their DSL line they
should provide a top-notch smarthost, which most don't.

The ones that don't provide a top-notch smarthost usually don't handle 
abuse complaints either. Just what do they do for their customers? I'm 
curious.

-Jack



Re: Fun new policy at AOL

2003-08-29 Thread Jack Bates
Mikael Abrahamsson wrote:
You switch service provider or give them a whack with the cluebat.

Some providers don't support auth due to the insecure passwords their 
users have. Having your server opened up to relay spam because your user 
had a bad password is not a good prospect.

-Jack



Re: Fun new policy at AOL

2003-08-29 Thread Jack Bates
JC Dill wrote:
Either the webmail solution meets your needs, or you need to obtain 
service from a company that offers a solution that meets your needs.  
Why is this so hard to understand?

Or people implement a protocol that doesn't break existing uses of the 
system (let's not forget the issues with many mailing lists and .forward 
files).

Personally, I like the idea of verifying that an IP address that is 
sending mail is allowed to send mail according to domain X, which is 
verified either by the MAIL FROM RHS or by the (he|eh)lo parameter. One 
or the other should be verifiable: the MAIL FROM RHS when at the 
home network, and the (he|eh)lo parameter at remote sites. Checking the MX 
records for each would make a good portion of the current mail servers 
compliant (except those with separate outbound/inbound servers), and 
having a different tag (TXT, a new DNS record, or a special DNS name like 
outmail.fqdn) would allow outbound-only servers to quickly meet compliance.
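
As a rough sketch of the MX-record half of that check, assuming the 
dnspython library (the function name is illustrative, and a real 
implementation would also consult whatever outbound-only tag got defined):

import dns.resolver

def ip_may_send_for(domain, connecting_ip):
    # Does the connecting IP appear among the A records of the
    # domain's MX hosts?
    try:
        mx_hosts = [r.exchange.to_text()
                    for r in dns.resolver.resolve(domain, "MX")]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False  # nothing published; under this scheme, not authorized
    for mx in mx_hosts:
        try:
            addrs = {r.address for r in dns.resolver.resolve(mx, "A")}
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            continue
        if connecting_ip in addrs:
            return True
    return False

The receiving MTA would run this once against the (he|eh)lo domain and 
once against the MAIL FROM RHS, and accept if either passes.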

It's quicker and simpler than any proposal I've read. It doesn't 
break anonymous forwarding or sending mail through other providers' SMTP 
servers. What it does do is verify that someone is responsible for that 
mail connection, and that someone is domain X, without argument.

I don't care if envelopes appear to be forged. It's done regularly in 
production. What I do care about is being able to say that someone is 
responsible for the email. If domain X said that a server can send mail 
outbound and it's not the mail I wanted, the holder of domain X is liable 
and lawyers can do the dirty work they are paid for. Or at a minimum, I 
can block domain X and not feel bad about it.

-Jack



Re: Fun new policy at AOL

2003-08-29 Thread Jack Bates
[EMAIL PROTECTED] wrote:

So the provider allows the user to pick an insecure password, and then
complains that they can't support a security measure because of their poor
policy choices/enforcement?
You have an easy way to change password enforcement on an existing user 
base? Dealing with people infected with the latest worms has increased 
workloads across the board, and that's only a small percentage of the user 
base. Password enforcement on an existing user base will cause problems 
for a majority of the user base.

Proprietary dialers help, but have their own problems. If you use the 
mail interface to change the dialup passwords, you'll get calls from 
users that can no longer dial in; otherwise you fragment passwords on an 
account and add unnecessary overhead. Adding the policy and 
waiting for it to rotate out would take over a decade.

I wouldn't recommend a policy change like that for any user base over 
10,000.

-Jack



Re: Microsoft distributes free CDs in Japan to patch Windows

2003-08-25 Thread Jack Bates
Sean Donelan wrote:

As some of you know, the standard Microsoft OS distribution sold
in stores on CD is a year or so old, and doesn't include any recent
patches.  You needed to download recent patches from Microsoft's
web site.  Unfortunately, with the latest round of worms, Windows
doesn't survive on the net long enough to download patches.
Which is why Microsoft should issue the software equivalent of a recall. 
Systems shouldn't be sold vulnerable without at least a patch CD.
-Jack



Re: Microsoft distributes free CDs in Japan to patch Windows

2003-08-25 Thread Jack Bates
Paul A. Bradford wrote:

2. the remote control being hijacked by someone besides MS?
  2a. Hey I'd love to be able to shut folks that were killing my network
off until they update, but is it my right?
Automatic cutoff until update check every 7 days?

-Jack



Re: Microsoft distributes free CDs in Japan to patch Windows

2003-08-25 Thread Jack Bates
Henry Linneweh wrote:

Microsoft has a task scheduler that people should learn to use to remind
them to check for updates to make sure their patches are current; it is
located in the control panel, labeled Scheduled Tasks, and has an
Add Scheduled Task icon to add the update check, FYI
 
And that helps a fresh store-bought computer how? It'll be infected 
before it can even download the initial patches.

-Jack



Re: Cisco filter question

2003-08-22 Thread Jack Bates
Scott McGrath wrote:

Geo,

Look at your set interface Null0 command; the rest is correct. You want 
to set the next hop to be Null0. How to do this is left as an exercise 
for the reader.

Interface Null0 works fine. Here's a quick check.

Inbound (from peers) policy matches
route-map nachi-worm, permit, sequence 10
  Match clauses:
ip address (access-lists): 199
length 92 92
  Set clauses:
interface Null0
  Policy routing matches: 10921 packets, 1048416 bytes
Outbound (to internal network) access-list matches
Extended IP access list 181
deny tcp any any eq 135 (1994 matches)
permit icmp any any echo (757 matches)
permit icmp any any echo-reply (381 matches)
permit ip any any (381370 matches)
I cleared 181 first, then cleared the route-map counters. I then checked 
the route-map counters before checking the access-list counters. This 
means the access-list had more time to accrue matches, yet it is 
considerably smaller. The checks were a matter of seconds apart. I'd say 
the policy is working. The echo/echo-reply could easily be everyday pings, 
which are up a bit due to various networks having performance issues.

IOS versioning can sometimes have issues. There's also the question of 
whether the packet came in the inbound interface that had the policy applied.

-Jack



Re: Cisco filter question

2003-08-22 Thread Jack Bates
[EMAIL PROTECTED] wrote:

ip address (access-lists): 199
  ^^^

Extended IP access list 181
  ^^^



Did you mean to have a mismatch between the numbers?
Or is there some magic configuration detail that links
the two together that I haven't learned about yet?
They are comparative lists. 181 lists all traffic leaving the router 
towards my networks, while 199 is the list for the route-map that filters 
inbound ICMP traffic of 92 bytes. 181 would be legitimate ICMP traffic, 
which is why its count is lower than route-map nachi-worm, which uses ACL 199.

-Jack



Re: Email virus protection

2003-08-21 Thread Jack Bates
Stephen J. Wilcox wrote:

We don't filter by file type... people do send exes legitimately!



You can zip the exe, or you can rename the exe, or you can ask not to 
have exes filtered at all.

Sometimes solutions can be simple.

-Jack



Re: Email virus protection

2003-08-21 Thread Jack Bates
Stephen J. Wilcox wrote:
Just like what some viruses do you mean?

A zipped virus, or a virus renamed to say .exd or .dat, is less likely to 
get a foothold than a .pif, .bat, or .exe.

-Jack



Re: Email virus protection

2003-08-20 Thread Jack Bates
Christopher J. Wolff wrote:

Hello,

What is the most common method for providing virus protection for your
hosted email customers?  Thank you in advance.
The best method for protecting your network (by limiting your users' 
exposure to viruses) is to strip executable files. We replace the 
files with a small text file mentioning the filename, a brief 
description of why we stripped it, and who to contact if they need the file.

I recommend executable stripping before virus scanning in all cases. 
Virus scanning is still vulnerable to fresh outbreaks (Sobig-F could 
have infected numerous users before the DAT files updated).
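
For illustration, a minimal sketch of such stripping using Python's 
standard email package; the extension list, contact address, and notice 
wording are illustrative only:

from email import message_from_bytes, policy

BLOCKED = (".exe", ".pif", ".scr", ".bat", ".com", ".vbs")

def strip_executables(msg, contact):
    # Replace each executable attachment with a small text notice, in place.
    for part in msg.walk():
        name = part.get_filename() or ""
        if name.lower().endswith(BLOCKED):
            notice = ("The attachment '%s' was removed as a potential "
                      "executable.\nContact %s if you need the file.\n"
                      % (name, contact))
            del part["Content-Disposition"]  # drop the attachment framing
            part.set_content(notice)         # part becomes a text/plain note

# Illustrative usage on a trivial message (normally the raw bytes come
# straight off the MTA's queue):
raw = b"From: a@example.net\r\nTo: b@example.net\r\nSubject: hi\r\n\r\nbody\r\n"
msg = message_from_bytes(raw, policy=policy.default)
strip_executables(msg, "postmaster@example.net")

Parsing with policy.default matters here, since it yields EmailMessage 
parts that support set_content.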

-Jack



Re: Email virus protection

2003-08-20 Thread Jack Bates
Gary E. Miller wrote:

I love guys like you.  All my customers once had (still have) admins
that filtered and cleaned their email for them.  Also added
firewalls for their protection.  Now they are my customers because they
do not want your protections.
I never understood ISPs that can apply a filter but not make an 
exception. All my filters, network and service level, have exclusions. 
The filters are designed to protect the network from the users. Less 
than 0.1% of my users do not want such protections, and those users are 
exempted from them.

In the last 3 days, I have received over 50 thank-you emails from 
customers concerning Sobig-F stripping. One user said that he wanted 
off filtering because he updated his anti-virus definitions once a 
week and was expecting an email from someone, but I'd 
stripped the attachment. It turns out that the user hadn't updated since 
Sobig-F was released 2 days ago, and since the From address was something 
he was looking for, he would have run the executable I'd stripped. I 
informed him that the file was viral, and he informed me that he'd like 
to keep the filtering. This is typical of most requests.

I will agree with you that there are many networks that deploy filtering 
and do not work with the customer concerning the filtering. That is 
poor business practice in my opinion. The problem isn't the filtering. 
It is the lack of contact with the customer.

-Jack



Re: Why do you use Netflow

2003-08-19 Thread Jack Bates
[EMAIL PROTECTED] wrote:

Are operators frequently using netflow nowadays?  I assume that if you are, you turn 
it on only for
some limited duration to collect your data and then go back and do your analysis.  Is 
this assumption correct?
Netflow overhead is relatively low considering what it does. I keep mine 
on at peering points.

What are you looking at when you analyze this data?  I've seen uses such as
top 10 destination AS's for peering evaluations.  What else?  Billing?
Number one use for netflow: scan detection. I detect most users 
infected with a virus before remote networks can auto-generate a report. I 
also detect mail being sent from various customer machines. High-volume 
traffic gets flagged so I can investigate whether it's spam or not.

I can tell you (well, I won't without a court order, but I could) the 
username, or customer name (if static), of every worm-infected user on 
my network at any given point in time. 50+ inactive flows for an IP 
address is a definite worm sign. If you want to be more specific, do 
sequential scan checks on the flow data. This has been very useful in 
dealing with Blaster.
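
As a minimal sketch of that heuristic, assuming the flow records have 
already been exported and parsed into (src, dst, dport) tuples (the 
50-flow threshold mirrors the rule of thumb above; the names are 
illustrative):

from collections import defaultdict
from ipaddress import ip_address

def suspect_scanners(flows, threshold=50):
    # Count distinct (destination, port) pairs per source, then check how
    # many destinations are numerically sequential -- classic worm sign.
    dests = defaultdict(set)
    for src, dst, dport in flows:
        dests[src].add((ip_address(dst), dport))
    suspects = {}
    for src, seen in dests.items():
        if len(seen) < threshold:
            continue
        addrs = sorted(a for a, _ in seen)
        runs = sum(1 for a, b in zip(addrs, addrs[1:]) if int(b) - int(a) == 1)
        suspects[src] = (len(seen), runs)
    return suspects  # src -> (distinct flows, sequential destination pairs)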

Netflow is particularly useful when utilizing NAT, as it's much easier 
to collect netflow data than translation tables.

On a cold, boring day, you can set up aggregates and generate cute little 
statistics for all sorts of things, and I hear it's useful in some 
scenarios.

-Jack



Re: Why do you use Netflow

2003-08-19 Thread Jack Bates
Jason Frisvold wrote:

We used ip accounting the other night to detect and disable a large
number of worm infected users that took out the router completely..  I
think net flow would have been too much overhead at the time...  Once we
were down to a more manageable number of infected users, we used netflow
to pinpoint them immediately...  (Note, we don't leave netflow on all
the time)
One method for limiting netflow accounting to manageable amounts is to 
access-list the port involved, which is why I did institute the 135 
blocking. This flags the flow as inactive, which only holds it for 
something like 15 seconds by default. Of course, this still may not be 
enough for some routers. I just happened to have prepared for this actual 
event due to constant DDOS attacks about nine months ago (reverse the 
view, change the rule matches).

-Jack


