Re: How Not to Multihome

2007-10-10 Thread Stephen Satchell


Justin M. Streiner wrote:


On Tue, 9 Oct 2007, Patrick W. Gilmore wrote:

Justin, if Provider A _has_ permission from Provider B to announce a 
prefix, do you believe Provider A should be allowed to announce the 
prefix?


As long as all of the relevant parties know about it and are OK with it, 
that's fine.  It's just not my first choice for solving the customer's 
multihoming dilemma, that's all :)


jms



Back when I was a NOC monkey (that stopped a month ago), I had exactly 
that situation.  I had MCI and SBC as upstreams.  Before multihoming, my 
network was split into two segments, one for each upstream.  This made 
things like DNS interesting.


When I got my ASN, I got agreement from both MCI and SBC to announce my 
/21 allocations from them over both upstream circuits.  As a result, I 
was able to go back to a single inside network, a single pair of DNS 
servers, and no more cross-router traffic via the Internet cloud.


I then got my ARIN allocation and went through the Fiscal Quarter From 
Hell renumbering everything into the new number block.  I dropped MCI 
(long story) and lit up Idacomm, but kept the SBC link and numbers.


When I left the ISP, my routers had been announcing my suballocation of 
SBC space for more than a year -- with SBC's permission.  Their only 
requirement was that I have proper routing objects in a routing registry 
so SBC could see that the route I was announcing was valid.  (What was 
VERY interesting was that I was using the ARIN registry, and SBC was 
not.  The resulting brouhaha uncovered a synchronization problem that 
ARIN had, and that ARIN fixed.)






Re: Apple Airport Extreme IPv6 problems?

2007-09-16 Thread Stephen Satchell


Iljitsch van Beijnum wrote:

Does browser caching still work these days? I thought all web admins 
disabled it on their servers because they can't be bothered to think 
about which cache directives to send along with each page. I can rarely 
return to a previously viewed page without the browser hitting the 
network, in any event.


Actually, browser caching is a function of the Web page's own cache 
directives, not just the server configuration.  So the decision to allow 
caching rests with the page creator.  On my own sites, I leave caching 
at the default unless there is a good reason to disable it.  On one site 
I used to run, a warranty-form processor, I disabled all caching -- at 
all levels -- because it was a database-driven site that allowed updates 
from multiple people at the same time, so caching was highly inappropriate.


Caching used to bite me regularly when I was doing customer support, 
which led to the mantra "Clear the cache!"


Re: NAT Multihoming

2007-06-03 Thread Stephen Satchell


Chris Owen wrote:


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Jun 3, 2007, at 4:19 PM, Simon Leinen wrote:


You write "when" rather than "if" - is ignoring reasonable TTLs
current practice?


Definitely.  We've seen 15-minute TTLs regularly go 48 hours without 
updating on Cox or Comcast's name servers.  I believe the most I've seen 
was 8 days (Cox).


The last time I renumbered, I found that quite a few people were not 
honoring the TTLs I put in my DNS zone files.  I would clone the new 
address and monitor traffic to the old address -- and it took up to 
seven days for the traffic to the old address to die down enough that I 
could take it out.  This is based on a server farm of, at the time, 162 
servers.


Custom customer zone files hosted elsewhere?  I had a few of those, the 
effect of which is not included in the observation above.


By the way, I standardized on a customer zone TTL of 14400 (four hours) 
for all zones.  That provided a good balance between agility and master 
DNS server load.  rDNS is currently 172800 (two days).  DNS A records 
are 432000 (5 days).
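
Those TTL figures are easy to sanity-check; a few lines of Python 
confirming the seconds-to-hours/days arithmetic:

```python
# TTL values from the post, expressed in seconds.
TTLS = {
    "customer zones": 14400,   # four hours
    "rDNS": 172800,            # two days
    "A records": 432000,       # five days
}

def human(seconds):
    """Render a TTL as days when it divides evenly, else hours."""
    if seconds % 86400 == 0:
        return "%d days" % (seconds // 86400)
    return "%g hours" % (seconds / 3600)

for name, ttl in sorted(TTLS.items()):
    print("%s: %d -> %s" % (name, ttl, human(ttl)))
```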


Re: VOIP and QOS across local network and the Internet

2007-05-15 Thread Stephen Satchell


Answers inline:

Rick Kunkel wrote:


- Do you offer QOS services across your network for VOIP or other types of
traffic?


Yes.


- Do you do this on a per-customer basis, or is it done globally?


Globally


- For those that offer QOS services for VOIP, is traffic classification
done by TCP layer protocol and port number?


Yes
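
A port-based classifier of the kind implied above can be sketched in a 
few lines.  The specific ports (SIP signaling plus a common RTP media 
range) are my illustration, not taken from the post:

```python
# Toy traffic classifier: pick a queue based on transport protocol
# and destination port.  Port numbers below are illustrative
# assumptions (SIP signaling, common RTP range), not from the post.

VOIP_UDP_PORTS = {5060, 5061}        # SIP signaling
RTP_RANGE = range(16384, 32768)      # common RTP media port range

def classify(proto, dport):
    """Return the queue a packet should land in."""
    if proto == "udp" and (dport in VOIP_UDP_PORTS or dport in RTP_RANGE):
        return "voice"               # gets priority treatment
    return "best-effort"

print(classify("udp", 5060))   # voice
print(classify("tcp", 80))     # best-effort
```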



Re: Broadband routers and botnets - being proactive

2007-05-12 Thread Stephen Satchell


Gadi Evron wrote:

>[snip]


The previous unaddressed threat which most of us chose to ignore was
spoofing. We all knew of it for a very long time, but some of us believed
it did not pose a threat to the Internet or their networks for no other
reason than "it is not currently being exploited" and "there are enough
bots out there for spoofing to not be necessary". I still remember the
bitter argument I had with Randy Bush over that one. This is a rare
opportunity, let's not waste it.

We are all busy, but I hope some of you will have the time to look into
this.

I am aware of and have assisted several ISPs, who spent some time and
effort exploring this threat and in some cases acting on it. If anyone can
share their experience on dealing with securing their infrastructure in
this regard publicly, it would be much appreciated.


I don't know who the "us" is that you are referring to.  One of the first 
things I did when I took over the management of the network at $DAYJOB 
was to tighten up the packet filtering at the edge of my network.  That 
included fixing up the inbound and outbound filters:


  * blocking most "small services" inbound
  * blocking ports inbound used in widespread attacks
  * blocking multicast IP addresses inbound
  * blocking BOGON and RFC1918 source IP addresses inbound
  * blocking non-owned IP source addresses, including RFC1918, outbound
  * null-routing RFC1918 target addresses outbound

(Under consideration but not yet implemented:  null-routing BOGON target 
addresses)
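
The source-address checks in that list can be sketched with Python's 
standard ipaddress module.  The prefix list below is illustrative and 
deliberately incomplete -- a real bogon list is longer and changes over 
time:

```python
import ipaddress

# Illustrative (incomplete) prefixes an edge filter would drop as
# packet *sources*: RFC 1918, loopback, link-local, multicast.
BAD_SOURCES = [ipaddress.ip_network(p) for p in (
    "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",   # RFC 1918
    "127.0.0.0/8",                                      # loopback
    "169.254.0.0/16",                                   # link-local
    "224.0.0.0/4",                                      # multicast
)]

def drop_inbound(src):
    """True if an inbound packet with this source should be dropped."""
    addr = ipaddress.ip_address(src)
    return any(addr in net for net in BAD_SOURCES)

def drop_outbound(src, owned):
    """True if an outbound packet should be dropped: anything not
    sourced from our own allocation (this also catches RFC 1918)."""
    return ipaddress.ip_address(src) not in owned

print(drop_inbound("192.168.1.5"))   # True
print(drop_outbound("10.1.2.3", ipaddress.ip_network("198.51.100.0/24")))   # True
```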


In my research into my new job, I got the impression that the above was 
considered one of the Best Current Practices for router configuration.


I currently have a customer who is getting DDoSed by someone spoofing 
the source IP address in a TCP SYN flood.  The problem is bad enough 
that I'm building a layer-2 firewall (using a Linux box) to rate-limit 
TCP SYNs to port 80 on his two IP addresses, and to raise an alarm when 
the incoming rate exceeds a threshold.
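
A rate limiter of that sort is commonly a token bucket.  A toy sketch, 
with made-up rate and burst numbers -- the real box would presumably do 
this in netfilter rather than user space:

```python
class TokenBucket:
    """Toy token-bucket rate limiter for inbound TCP SYNs, in the
    spirit of the Linux-box approach described above.  The rate and
    burst numbers are invented; the real firewall would also raise
    an alarm when the drop rate crosses a threshold."""

    def __init__(self, rate, burst):
        self.rate = rate        # SYNs allowed per second, steady state
        self.burst = burst      # extra SYNs tolerated in a spike
        self.tokens = burst
        self.last = 0.0         # caller supplies a monotonic clock

    def allow(self, now):
        """Return True to pass this SYN, False to drop it."""
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=100.0, burst=10.0)
# A burst of 20 SYNs arriving at the same instant: only 10 pass.
passed = sum(bucket.allow(0.0) for _ in range(20))
print(passed)   # 10
```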


When I ask my upstream where the SYN flood is entering *his* routers, 
the answer is "everywhere, I see these packets on every single upstream 
port I have."


The last time I was able to do a packet capture and analysis during the 
flood, I found the source IP addresses of the packets that got through 
were evenly distributed across the IP address spectrum, with obvious 
notches in the BOGON, RFC1918, and multicast ranges.  (For those of you 
who like to build tools, I found an FFT of the source addresses to be 
an excellent tool for analyzing traffic patterns.)
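
The FFT trick can be sketched like this: bin the source addresses into 
a histogram over the whole 32-bit space and look at the spectrum.  
Uniformly spoofed sources put essentially all the energy in the DC 
term, while real client populations show structure in the higher 
coefficients.  A toy reconstruction, not the original tool -- a naive 
O(n^2) DFT stands in for a real FFT, and the bin count and sample data 
are invented:

```python
import cmath
import random

def dft(xs):
    """Naive O(n^2) DFT; a real tool would use numpy.fft.fft."""
    n = len(xs)
    return [sum(xs[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def spectrum_of_sources(addrs, bins=64):
    """Bin 32-bit source addresses over the whole address space,
    then return the DFT magnitudes of that histogram."""
    hist = [0] * bins
    for a in addrs:
        hist[a * bins // 2**32] += 1
    return [abs(c) for c in dft(hist)]

random.seed(1)
spoofed = [random.getrandbits(32) for _ in range(4096)]  # uniform sources
spec = spectrum_of_sources(spoofed)

# The DC term is just the packet count; for uniformly spoofed
# sources it dwarfs every other coefficient.
print(spec[0])                        # 4096.0
print(spec[0] > 10 * max(spec[1:]))   # True
```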


So I don't have a problem sourcing such floods, because my ACLs block 
attempts to do so.  I sure have problems sinking them.


Re: ISP CALEA compliance

2007-05-10 Thread Stephen Satchell


David Lesher wrote:


Speaking on Deep Background, the Press Secretary whispered:
You work so hard to defend people that exploit children? Interesting. We are 
talking LEA here and not the latest in piracy lawsuits. The #1 request from an 
LEA in my experience concerns child exploitation.


I think you'll find most intercept orders are drug cases. 


And no matter what, we still have a Constitution... sort of...
Which brings up my point: be sure to let your Hill Critters
know what shit you are going through.


So far, my involvement with law enforcement has been split evenly 
between illegal gambling and income tax evasion.  Nothing else.


Of course, I'm based in Nevada; if I were elsewhere the gambling 
("gaming" as it's called here) would most likely drop off the map.


Re: Thoughts on increasing MTUs on the internet

2007-04-12 Thread Stephen Satchell


Steven M. Bellovin wrote:

On Thu, 12 Apr 2007 11:20:18 +0200
Iljitsch van Beijnum <[EMAIL PROTECTED]> wrote:


Dear NANOGers,

It irks me that today, the effective MTU of the internet is 1500
bytes, while more and more equipment can handle bigger packets.

What do you guys think about a mechanism that allows hosts and
routers on a subnet to automatically discover the MTU they can use
towards other systems on the same subnet, so that:

1. It's no longer necessary to limit the subnet MTU to that of the
least capable system

2. It's no longer necessary to manage 1500 byte+ MTUs manually

Any additional issues that such a mechanism would have to address?



Last I heard, the IEEE won't go along, and they're the ones who
standardize 802.3.

A few years ago, the IETF was considering various jumbogram options.
As best I recall, that was the official response from the relevant
IEEE folks: "no". They're concerned with backward compatibility.  


Perhaps that has changed (and I certainly don't remember who sent that
note).


No, I doubt it will change.  The CRC algorithm used in Ethernet is 
already strained by the 1500-byte-plus payload size.  802.3 won't extend 
to any larger size without running a significant risk of the CRC 
algorithm failing.
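
The 802.3 frame check sequence is a CRC-32, the same algorithm exposed 
by Python's zlib.crc32; a small sketch, with frame contents and the 
flipped-bit position invented for illustration:

```python
import zlib

# zlib.crc32 uses the same polynomial and reflection as the 802.3
# frame check sequence, so it serves as a stand-in for the Ethernet
# CRC here (transmission bit order aside).
def fcs(frame):
    return zlib.crc32(frame) & 0xFFFFFFFF

standard = bytes(1500)    # maximum standard payload, all zeros
jumbo = bytes(9000)       # a jumbo-frame payload

# Any single flipped bit is still detected, at any length ...
corrupt = bytearray(jumbo)
corrupt[4242] ^= 0x01
print(fcs(jumbo) != fcs(bytes(corrupt)))   # True

# ... but CRC-32's *guaranteed* detection of multi-bit errors weakens
# as the covered length grows, which is the concern raised above.
print(hex(fcs(standard)))
```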


From a practical standpoint, the cost of developing, qualifying, and selling 
new chipsets to handle jumbo packets would jack up the cost of inside 
equipment.  What is the payback?  How much money do you save going to 
jumbo packets?


Show me the numbers.


Re: Abuse procedures... Reality Checks

2007-04-10 Thread Stephen Satchell


[EMAIL PROTECTED] wrote:


I also find it curious that you claim to have people on staff at your
company who know what SWIP means. Perhaps you could ask them to share
that information with us since I have never seen this documented
anywhere. Do they really know what you claim they know?

--Michael Dillon



Google is your friend.

http://www.arin.net/registration/guidelines/report_reassign.html

Shared WHOIS Project (SWIP)

"SWIP is a process used by organizations to submit information about 
downstream customer's address space reassignments to ARIN for inclusion 
in the WHOIS database. Its goal is to ensure the effective and efficient 
maintenance of records for IP address space.


"SWIP is intended to:

* Provide information to identify the organizations utilizing each 
subdelegated IP address block.

* Provide registration information for each IP address block.
* Track utilization of allocated IP address blocks to determine if 
additional allocations may be justified.


"For IPv4, organizations can use the Reassign-Simple, Reassign-Detailed, 
Reallocate, and Network-Modification templates to report SWIP information.


"Organizations reporting IPv6 reassignment information can use the IPv6 
Reassign, IPv6 Reallocate, and IPv6 Modify templates.


"Organizations may only submit reassignment data for records within 
their allocated blocks. ARIN reserves the right to make changes to these 
records upon the organization's approval. Up to 10 templates may be 
submitted as part of a single e-mail."


SWIPs are required for reallocations of /29 and larger if the allocation 
owner does not operate an RWhois server.


Of course, SWIP is an ARIN thing, and you work for BRITISH 
TELECOMMUNICATIONS PLC.  As a US network operator, I was well aware of 
the requirements for SWIP, because ARIN rules make it clear that, as a 
netblock owner of an ARIN allocation, I'm required to do it.


Which numbering authority do you work with day to day?


Re: Abuse procedures... Reality Checks

2007-04-07 Thread Stephen Satchell


Frank Bulk wrote:
> [[Attribution deleted by Frank Bulk]]
Neither I nor J. Oquendo nor anyone else are required to 
spend our time, our money, and our resources figuring out which 
parts of X's network can be trusted and which can't.  


It's not that hard; the ARIN records are easy to look up.  Figuring out that
a network operator has a /8 you want to block, based on 3 or 4 IPs in
their range, requires just as much work.


It's *very* hard to do it with an automated system, as such automated 
look-ups are against the Terms of Service for every single RIR out there.


Please play the bonus round:  try again.


Re: PG&E on data centre cooling..

2007-03-31 Thread Stephen Satchell


John Kinsella wrote:

On Fri, Mar 30, 2007 at 02:53:58AM +, Paul Vixie wrote:

[EMAIL PROTECTED] ("Dorn Hetzel") writes:

I preferred the darkness of PAIX back in the late 90's.  We had a
christmas tree in our cage and it looked great in the dark :)

that was brian reid's idea, and it was a great one, and equinix-san-jose
was merely copying paix (where al and jay had just spent a few years).
most importantly, it's STILL dark, and still looks great.


I sorta wonder why the default is lights on, actually...I used to always
love walking into dark datacenters and seeing the banks of GSRs (always
thought they had good Blink) and friends happily blinking away. 


What we really need is a datacenter with lit floor tiles. ;)

John(damn I've been in a DC with clear floor tiles...why didn't I think
of this then?)


How about the concept used in movie theatres?  Line the walkways with 
white LEDs so that people can walk safely.


Far less power, easy to run from a small UPS, and use LED exit lights to 
keep the fire marshals happy.  Even mark the location of fire 
extinguishers in LEDs.


Customers would be encouraged to bring their own fluorescent panel lamps; 
rentals would be available for the forgetful.





Re: On-going Internet Emergency and Domain Names

2007-03-31 Thread Stephen Satchell


Douglas Otis wrote:

On Sat, 2007-03-31 at 16:47 -0500, Frank Bulk wrote:

For some operations or situations 24 hours would be too long a time to wait.
There would need to be some mechanism where the delay could be bypassed.


What operation requires a new domain be published within 24 hours?  Even
banks require several days before honoring checks as protection against
fraud.  A slight delay allows preemptive enforcement measures.  It seems
most if not all operations could factor this delay into their
planning.


"Sips of knowledge intoxicates the mind, while deeper drinking sobers it 
again."  Where did you drink that Kool-Aide?


Back when I was in the bank automation business, the main effort was to 
build a "quick-clearing" process, measured in hours, for checks.  The 
idea is that an electronic recording of the check would be sent to the 
issuing bank, payment made by the issuing bank to the account of the 
receiving bank, and the payment confirmed when the physical paper (or 
photocopy) of the check arrived.


(If the paper never showed up, the issuing bank would reverse the 
transfer of the money.)


The idea of fractional-day clearing was to reduce the float between 
banks.  Whether that fractional-day clearing made it all the way to the 
customer is the decision of the receiving bank, as it controls when the 
funds are released to the depositor.  The receiving bank can *use* that 
money if it doesn't credit the depositing account immediately, but waits 
a day.


I'm a customer.  I want a domain name *now*, not in the future.  I 
believe that, given the speed of the Internet, there is no reason to 
introduce delays.


As for "tasting", I'm against it.  The cost of a domain name is small 
enough that there is no need to have a tasting.  Some of the excuses 
I've seen to support tasting can readily be handled by other processes 
that have the same effect, but without the potential for harm by abusers.


Re: On-going Internet Emergency and Domain Names

2007-03-31 Thread Stephen Satchell


Gadi Evron wrote:


Amen. Really.

I'd honestly like more ideas.


What did IETF and ICANN say when you approached them through their 
public-comment channels?


Re: On-going Internet Emergency and Domain Names

2007-03-31 Thread Stephen Satchell


Kradorex Xeron wrote:

What needs to be done is for the ISPs allowing botnets and malware to run 
rampant on their networks to be held accountable for being negligent about 
their network security. Service-provider abuse mailboxes should be paid more 
heed, and reports should be acted upon.


That presupposes that people will report problems.  The situation with
spam shows clearly that when the problem gets big enough, people will
*stop* *reporting* *incidents*.

Out of a clear blue sky, one of my servers found its way into the CBL.
No spam reports, none at all.  (I'm the Abuse Investigator, the one who
has to read all the reports -- and the spam -- directed at
[EMAIL PROTECTED], so I would know.)




Re: NOC Personel Question (Possibly OT)

2007-03-15 Thread Stephen Satchell


Gadi Evron wrote:


Anyway, I have a friend who managed to get "Not A Janitor" on his
business card.


My all-time favorite business card was one from Autodesk from the chief 
financial officer, who appeared to be a real Niven fan:


  Speaker to Bankers


Re: DNS Query Question

2007-01-23 Thread Stephen Satchell


Dennis Dayman wrote:


I have a customer having some DNS issues. They have done some research
regarding some DNS timeout errors they saw with Verizon's sender verify
looking up their MX records. What they have discovered is their current 
DNS service has a 1% failure/timeout rate. They are exploring other 
vendors (UltraDNS for one), but need an estimate of the number of DNS 
queries for accurate pricing to put together an ROI argument for the 
switch.


I had some problems with DNS timeouts, and discovered that by doing 
priority queuing in my Cisco routers I was able to cut the failure rate 
to my authoritative DNS servers to near zero.  The only time my DNS 
servers don't give a proper response is when a router is being flooded 
with other outbound data.
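
The priority-queuing idea -- let DNS packets jump ahead of bulk traffic 
so answers aren't starved during an outbound flood -- can be sketched as 
a strict-priority scheduler.  This is a toy model, not the Cisco 
configuration itself:

```python
from collections import deque

class PriorityScheduler:
    """Toy strict-priority queue in the spirit of the router setup
    described above: DNS (port 53) packets always dequeue before
    bulk traffic, so responses aren't starved during a flood."""

    def __init__(self):
        self.queues = {"dns": deque(), "bulk": deque()}

    def enqueue(self, pkt):
        is_dns = pkt.get("dport") == 53 or pkt.get("sport") == 53
        self.queues["dns" if is_dns else "bulk"].append(pkt)

    def dequeue(self):
        for cls in ("dns", "bulk"):      # strict priority order
            if self.queues[cls]:
                return self.queues[cls].popleft()
        return None                      # nothing queued

sched = PriorityScheduler()
sched.enqueue({"sport": 80, "len": 1500})   # bulk web traffic
sched.enqueue({"sport": 53, "len": 120})    # DNS response
print(sched.dequeue()["sport"])   # 53 -- DNS goes first
```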


Is your customer using BIND?  What do the statistics tell you?  How many 
DNS servers are handling the traffic?  Are they load-balanced?  Have the 
DNS servers been upgraded to handle more traffic?  Does the customer 
segregate their authoritative servers from their recursive ones?  (That 
one change right there improved my DNS reliability and serviceability by 
several orders of magnitude!)


From your description, I'd say there was a lot more work to be done 
first, unless they just don't have the people to do it right.




Re: Phishing and BGP Blackholing

2007-01-02 Thread Stephen Satchell


[EMAIL PROTECTED] wrote:


Then there's the whole trust issue - though the Team Cymru guys do an awesome
job doing the bogon feed, it's rare that you have to suddenly list a new
bogon at 2AM on a weekend.  And there's guys that *are* doing a good job
at tracking down and getting these sites mitigated, they prefer to get the
sites taken down at the source.  I'm not sure they would *want* to be trying
to do a BGP feed.


As an operator of a large collection of Web hosting sites, I appreciate 
the work of those guys who track down sites and send alerts.  I can then 
surgically remove the offending phishing sites quickly.  When a customer 
is the one running the sites (and I've had a few of those) I usually find 
multiple phishing payload sites...and the account is closed so quickly 
that the perps don't even have time to fetch the data they collected.


The championship record is nine payload sites for different phishing 
targets.


Re: Bogon Filter - Please check for 77/8 78/8 79/8

2006-12-11 Thread Stephen Satchell


Jared Mauch wrote:

linking to stuff like the bogon-announce list too wouldn't
be a bad idea either :)



Bogon announce list?


Re: UUNET issues?

2006-11-05 Thread Stephen Satchell


David Lesher wrote:

Speaking on Deep Background, the Press Secretary whispered:

On Nov 5, 2006, at 1:51 AM, Randy Bush wrote:

"Could you be any less descriptive of the problem you are seeing?"

the internet is broken.  anyone know why?

Did you ping it?

is that what broke it?

I'm sure it just needs to be rebooted.

Is this the day we disconnect everything and blow all the dirt out?


You only *think* you are joking.  I still remember the Day of the Great 
Restart when everyone on the ARPAnet had to shut down the IMPs and TIPs, 
and reload the control software.  Why?  There were literally thousands 
of rogue packets flying around the net, eating up bandwidth (and in 
those days, we are talking 56 kbps links!) and boy were those 
tubie-thingies plugged up!


Shortly after that cusp event, a per-packet TTL field was added to the 
network protocol, which is why TCP/IP has the TTL field in the IP packet.


The network had added to it a self-cleaning function.  Think of it as 
one long continuous sneeze.




Re: Sagonet - Failing miserably with network security Someone needs to handle this.

2006-10-29 Thread Stephen Satchell


Chris Jester wrote:

65.110.62.120

Sagonet,

We have a serious hacker here who is ACTIVELY engaged in logins
on our network (have him in a honeypot at the moment). He is running
exploits from your network and
also I have been hearing from others that you have been notified of this
a few times yet have done nothing about it.  Can we get someone to handle
this immediately please?


Thank you for the report.  I've added a block for 65.110.62.120 to our 
perimeter firewalls, on the off chance that the guy has broken into one 
or more servers at American Internet (Reno).  If he (she) did, it may 
explain some traffic anomalies we've been seeing this past week.


Re: [Fwd: RE: Kremen VS Arin Antitrust Lawsuit - Anyone have feedback?]

2006-09-11 Thread Stephen Satchell



On Mon, 11 Sep 2006, Chris Jester wrote:


Also, what about ARIN's hardcore attitude making it near impossible
to acquire IP space, even when you justify its use?  I have had
nightmares myself, and MANY of my colleagues share similar experiences.
I am having an issue right now with a UNIVERSITY in Mexico trying to get
IPs from the Mexican counterpart.  Why is it that they involve lawyers,
ask you all your customers' names, etc.?  This is more information than
I think they should be requiring. Any company that wishes to engage in
business as an ISP or provider in some capacity should be granted the
right to their own IP space. We cannot trust using IPs SWIPped to us by
upstreams and the like. It's just not safe to do that and you lose control.

Actually, is there anyone else who shares these nightmares with me?
I brought up the lawsuit between Kremen and ARIN to see if this is a common
issue.  What are your views, and can someone share nightmare stories?


When I went after my own /21 after headaches with the numbers from my 
upstreams, UUNET and SBC, I sought help from a consultant (inexpensive, 
I might add) to let me know *exactly* what I needed to provide to ARIN 
to justify a direct allocation.  Contrary to the original poster's claim, I 
didn't have to "open the kimono" wide.  ARIN looked at my existing 
utilization, my basic numbering plan, my then-existing map of domain 
names to IP addresses, my application for one ASN, and after one 
back-and-forth they said "yes."


Today I'm a very happy multi-homed camper.