RE: PSINet/Cogent Latency

2002-07-23 Thread Phil Rosenthal


I have a small RRD project box that polls 200 interfaces, and it takes
1 minute, 5 seconds to run at 60% CPU usage (so obviously it could be
streamlined if I wanted to work on it). I guess the limit of this
implementation is 1,000 interfaces per box in this setup -- but I see
most of the CPU usage is in the forking of snmpget over and over. I'm
sure I could write a small program in C that could do this at least 10X
more efficiently. That's 10,000 interfaces with RRD on one Intel box --
if you are determined to do it.
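Something along these lines would be the idea -- keep one SNMP session open
in-process and skip the fork/exec of snmpget entirely. A rough sketch only,
assuming the net-snmp 5.x C API; the router name and community string are
made up:

/* poll.c -- sketch: one in-process SNMP GET, no forking of snmpget.
 * Assumes the net-snmp C library; host and community are examples.
 * Build roughly: cc poll.c -o poll `net-snmp-config --libs` */
#include <stdio.h>
#include <string.h>
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>

int main(void)
{
    struct snmp_session sess, *ss;
    struct snmp_pdu *pdu, *resp = NULL;
    oid name[MAX_OID_LEN];
    size_t name_len = MAX_OID_LEN;

    init_snmp("poll");                      /* library init */
    snmp_sess_init(&sess);
    sess.peername  = "router.example.net";  /* hypothetical router */
    sess.version   = SNMP_VERSION_2c;
    sess.community = (u_char *)"public";    /* hypothetical community */
    sess.community_len = strlen("public");

    ss = snmp_open(&sess);                  /* open once, reuse every cycle */
    if (!ss) { snmp_perror("snmp_open"); return 1; }

    /* ifInOctets.1 -- a real poller would loop over its interface list
     * and pack many varbinds into each PDU */
    pdu = snmp_pdu_create(SNMP_MSG_GET);
    read_objid(".1.3.6.1.2.1.2.2.1.10.1", name, &name_len);
    snmp_add_null_var(pdu, name, name_len);

    if (snmp_synch_response(ss, pdu, &resp) == STAT_SUCCESS &&
        resp && resp->errstat == SNMP_ERR_NOERROR)
        print_variable(resp->variables->name, resp->variables->name_length,
                       resp->variables);
    if (resp) snmp_free_pdu(resp);
    snmp_close(ss);
    return 0;
}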

I think if you are billing 10k interfaces, you can afford a second Intel
box to check the second 10,000, no?

My point is that if you have sufficient clue, time, and motivation --
today's generic PCs are capable of handling many large tasks...

--Phil


-Original Message-
From: Richard A Steenbergen [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, July 23, 2002 2:10 AM
To: Phil Rosenthal
Cc: 'Doug Clements'; [EMAIL PROTECTED]
Subject: Re: PSINet/Cogent Latency


On Tue, Jul 23, 2002 at 01:56:45AM -0400, Phil Rosenthal wrote:
 
 I don't think RRD is that bad if you are gonna check only every 5 
 minutes...

RRD doesn't measure anything; it stores and graphs data. The Perl
pollers everyone is using can barely keep up with 5-minute samples on a
couple dozen routers and a few hundred interfaces, requiring poller
farms to be distributed across a network, lest a box or part of the
network break and you lose data.

 Again, perhaps I'm just missing something, but so let's say you measure
 30 seconds late, and it thinks it's on time -- so that one sample will
 be higher, then the next one will be on time, so 30 seconds early for
 that sample -- it will be lower.  On the whole -- it will be accurate
 enough -- no?

"Enough" is a relative term, but sure. :)

 I'm not saying a hardware solution can't be better -- but it is likely
 overkill compared to a few cheap Intels running RRD -- assuming your
 snmpd can deal with the load...

What hardware? Storing a few byte counters is trivial, but polling
them through SNMP is what is hard (never trust a protocol named "simple"
or "trivial"). Creating a buffer of samples that can be collected
periodically should be easy and painless. I don't know if I'd call
periodic FTP painless, but it's certainly a start.
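The router-side buffer could be as dumb as a ring of timestamped counter
samples that the collector pulls in bulk now and then -- a hypothetical
sketch, not any vendor's code:

/* sketch: per-interface ring of timestamped byte counters, the kind of
 * buffer a device could keep for later bulk export (hypothetical) */
#include <stdint.h>
#include <time.h>

#define SLOTS 64   /* e.g. 64 x 30-second samples = 32 minutes of history */

struct sample { time_t when; uint64_t in_octets, out_octets; };
struct ring   { struct sample buf[SLOTS]; unsigned head; };

/* called once per sampling interval with the current counters */
static void ring_push(struct ring *r, uint64_t in, uint64_t out)
{
    struct sample *s = &r->buf[r->head % SLOTS];
    s->when = time(NULL);
    s->in_octets = in;
    s->out_octets = out;
    r->head++;
}

/* copy out up to n most recent samples, newest first; returns count */
static unsigned ring_read(const struct ring *r, struct sample *dst, unsigned n)
{
    unsigned avail = r->head < SLOTS ? r->head : SLOTS;
    unsigned want = n < avail ? n : avail;
    for (unsigned i = 0; i < want; i++)
        dst[i] = r->buf[(r->head - 1 - i) % SLOTS];
    return want;
}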

-- 
Richard A Steenbergen [EMAIL PROTECTED]
http://www.e-gerbil.net/ras
PGP Key ID: 0x138EA177  (67 29 D7 BC E8 18 3E DA  B2 46 B3 D8 14 36 FE
B6)




RE: PSINet/Cogent Latency

2002-07-23 Thread Phil Rosenthal


I see your point, but I still think RRD is good enough.

If Cisco/Foundry/Juniper added this to their respective OSes -- I'd be a
happy camper... If they don't -- I won't lose sleep over it.

--Phil

-Original Message-
From: Doug Clements [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, July 23, 2002 2:12 AM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: PSINet/Cogent Latency


- Original Message -
From: Phil Rosenthal [EMAIL PROTECTED]
Subject: RE: PSINet/Cogent Latency


 I don't think RRD is that bad if you are gonna check only every 5 
 minutes...

 Again, perhaps I'm just missing something, but so let's say you measure
 30 seconds late, and it thinks it's on time -- so that one sample will
 be higher, then the next one will be on time, so 30 seconds early for
 that sample -- it will be lower.  On the whole -- it will be accurate
 enough -- no?

If you're polling every 5 minutes, with 2 retries per poll, and you miss
both retries, then your next poll will be 5 minutes late. It's not
disastrous, but it's also not perfect. Again, peaks and valleys on your
graph cost more than smooth lines, even with the same total bandwidth.

Do you want to be the one to tell your customers your billing setup is
"accurate enough", and especially that it's going to have a tendency to
be "accurate enough" in your favor?

 Besides I think RRD has a bunch of things built in to deal with 
 precisely this problem.

Wouldn't that be just spiffy!

 I'm not saying a hardware solution can't be better -- but it is likely
 overkill compared to a few cheap Intels running RRD -- assuming your
 snmpd can deal with the load...

No extra hardware needed. I think the desired solution was integration
into the router. The data is already there; you just need software to
compile it and ship it out via a reliable reporting mechanism. For being
relatively simple, it's a nice idea that it could replace the "almost"
in an "almost accurate" billing process.

--Doug





RE: Security of DNSBL spam block systems

2002-07-23 Thread Phil Rosenthal



IMHO, even the really large DNSBLs are barely used -- I think (much) less
than 5% of total human mail recipients are behind a mailserver that uses
one...

--Phil

  
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Big_Bandwidth
Sent: Tuesday, July 23, 2002 2:14 AM
To: [EMAIL PROTECTED]
Subject: Security of DNSBL spam block systems
  
What are the security implications of someone hacking a DNSBL
(real-time spam block list) and changing the block list to include (deny
email from) some very large portion of, or all, IPv4 space?

Given that a significant number of the spam blocking lists seem to
operate on a shoestring budget in someone's basement, how can we be
assured that they have sufficient resources to secure their systems
adequately, and monitor for intrusion 24x7?

Unless I am missing something, this would seem to be a real handy and
centralized method for someone to interfere substantially with the
proper operation of a few thousand email servers and hold up global
email traffic for a few hours.

-BB
  
  
  
  


Re: PSINet/Cogent Latency

2002-07-23 Thread Alexander Koch


On Tue, 23 July 2002 02:25:36 -0400, Phil Rosenthal wrote:
 I have a small RRD project box that polls 200 interfaces, and it takes
 1 minute, 5 seconds to run at 60% CPU usage (so obviously it could be
 streamlined if I wanted to work on it). I guess the limit of this
 implementation is 1,000 interfaces per box in this setup -- but I see
 most of the CPU usage is in the forking of snmpget over and over. I'm
 sure I could write a small program in C that could do this at least 10X
 more efficiently. That's 10,000 interfaces with RRD on one Intel box --
 if you are determined to do it.

 I think if you are billing 10k interfaces, you can afford a second Intel
 box to check the second 10,000, no?

Phil,

imagine some four routers dying or not answering queries:
you will see the poll script give you timeout after timeout
after timeout, and with some 50 to 100 routers and the
respective interfaces you see MRTG choke badly, losing data.

You see, the poll script mostly works through the targets one
after the other, so you wait too long, and then the next run
starts before the previous one has finished.

MRTG/RRD is not the tool of choice for accounting / billing,
but it is probably nice enough for showing 'backup' graphs to
visitors.

Alexander




Re: PSINet/Cogent Latency

2002-07-23 Thread Richard A Steenbergen


On Tue, Jul 23, 2002 at 02:25:36AM -0400, Phil Rosenthal wrote:
 I have a small RRD project box that polls 200 interfaces, and it takes
 1 minute, 5 seconds to run at 60% CPU usage (so obviously it could be
 streamlined if I wanted to work on it). I guess the limit of this
 implementation is 1,000 interfaces per box in this setup -- but I see
 most of the CPU usage is in the forking of snmpget over and over. I'm
 sure I could write a small program in C that could do this at least 10X
 more efficiently. That's 10,000 interfaces with RRD on one Intel box --
 if you are determined to do it.

10x? Wanna try a higher order of magnitude?

While you're at it, eliminate the forking to the rrdtool binary when you're
adding data. A little thought and profiling goes a long way; this is
simple number crunching we're talking about, not supercomputer work. The
problem comes from the Perl mentality (why is there no C lib for
efficiently adding to an RRD database? Because they're expecting everyone
to call it from Perl :P): it's good enough for my couple of boxes, and you
can throw more machines at it.

But again, I have no doubt that if you designed it properly you could
throw lots of SNMP queries around and scale decently to a nice-sized core
network; I've seen it done. The problem is potential communication loss
between the poller and the device, and the amount of work that the device
(which usually isn't running god's gift to any code, let alone SNMP code)
has to do for higher sampling rates with many interfaces.

-- 
Richard A Steenbergen [EMAIL PROTECTED]   http://www.e-gerbil.net/ras
PGP Key ID: 0x138EA177  (67 29 D7 BC E8 18 3E DA  B2 46 B3 D8 14 36 FE B6)



Re: PSINet/Cogent Latency

2002-07-23 Thread Gary E. Miller


Yo Alexander!

On Tue, 23 Jul 2002, Alexander Koch wrote:

 imagine some four routers dying or not answering queries,
 you will see the poll script give you timeout after timeout
 after timeout and with some 50 to 100 routers and the
 respective interfaces you see mrtg choke badly, losing data.

Yep.  Anything gets behind and it all gets behind.

That is why we run multiple copies of MRTG.  That way polling for one set
of hosts does not have to wait for another set.  If one set is timing
out the other just keeps on as usual.

RGDS
GARY
---
Gary E. Miller Rellim 20340 Empire Blvd, Suite E-3, Bend, OR 97701
[EMAIL PROTECTED]  Tel:+1(541)382-8588 Fax: +1(541)382-8676





password stores?

2002-07-23 Thread Daniska Tomas


Hi,

I'm wondering how large ISPs offering managed CPE services manage their
password databases.

Let's say RADIUS/TACACS is used for normal CPE user AAA, but there is
some 'backup' local user account created on the CPE for situations when
the RADIUS server is unreachable. For security reasons, this backup
account (as well as SNMP communities, the RADIUS key, etc.) is unique per
CPE to avoid fraud caused by end users (even if someone does password
recovery on one CPE, they still don't have the password for other CPEs).

If there are hundreds or thousands of these CPEs, that could mean
storing tens of thousands of passwords. Are there any crypto-based
products available, or do people use their own stuff?

Thanks

--
 
Tomas Daniska
systems engineer
Tronet Computer Networks
Plynarenska 5, 829 75 Bratislava, Slovakia
tel: +421 2 58224111, fax: +421 2 58224199
 
A transistor protected by a fast-acting fuse will protect the fuse by
blowing first.




Re: PSINet/Cogent Latency

2002-07-23 Thread Matt Zimmerman


On Tue, Jul 23, 2002 at 02:40:10AM -0400, Richard A Steenbergen wrote:

 While you're at it, eliminate the forking to the rrdtool bin when you're
 adding data. A little thought and profiling goes a long way, this is
 simple number crunching we're talking about, not supercomputer work. The
 problem comes from the perl mentality (why is there no C lib for
 efficiently adding to an rrd db? because they're expecting everyone to
 call it from perl :P), it's good enough for my couple boxes and you can
 throw more machines at it.

There is a C library, librrd.  That is how the other language APIs are
built.  As to efficiency, there is a lot of stringification, which is
inconvenient and unnatural in C, but this should not be the bottleneck in
the collection operation.
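For example, an update through librrd is just the command-line arguments
passed as an argv (a sketch; the filename and counter values are made up):

/* sketch: feed a sample to an RRD via librrd instead of forking rrdtool.
 * Uses librrd's argv-style interface; build roughly: cc update.c -lrrd */
#include <stdio.h>
#include <rrd.h>

int main(void)
{
    /* the same strings you would pass on the rrdtool command line */
    char *argv[] = { "update", "eth0.rrd", "N:1234567890:987654321" };
    int argc = sizeof(argv) / sizeof(argv[0]);

    rrd_clear_error();
    if (rrd_update(argc, argv) != 0) {
        fprintf(stderr, "rrd_update failed: %s\n", rrd_get_error());
        return 1;
    }
    return 0;
}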

 But again, I have no doubt that if you designed it properly you could
 throw lots of snmp queries and scale decently to a nice sized core
 network, I've seen it done. The problem is potential communication loss
 between the poller and the device, and the amount of work that the device
 (which usually isn't running gods gift to any code let alone snmp code)
 has to do for higher sampling rates with many interfaces.

That said, bulk statistical exports from the device itself can easily be
implemented more efficiently than SNMP.  But unless the export process is
universally standardized, SNMP (for all its warts, and it has many) will
still have an edge in that it works nearly everywhere (for varying values
of "works").

-- 
 - mdz



RE: PSINet/Cogent Latency

2002-07-23 Thread Alex Rubenstein



On Tue, 23 Jul 2002, Phil Rosenthal wrote:


 I have a small RRD project box that polls 200 interfaces, and it takes
 1 minute, 5 seconds to run at 60% CPU usage (so obviously it could be
 streamlined if I wanted to work on it). I guess the limit of this
 implementation is 1,000 interfaces per box in this setup -- but I see
 most of the CPU usage is in the forking of snmpget over and over. I'm
 sure I could write a small program in C that could do this at least 10X
 more efficiently. That's 10,000 interfaces with RRD on one Intel box --
 if you are determined to do it.

Interesting. We have a dual p3-700, doing LOTS of other things, which does
1600 interfaces under MRTG using small amounts of CPU.

You are using 'Forks', if you're using MRTG, no?

This whole process takes less than 2 minutes.



 I think if you are billing 10k interfaces, you can afford a second Intel
 box to check the second 10,000, no?

First and foremost, you said RRD, not billing.

Who uses RRD for billing purposes?



 My point is that if you have sufficient clue, time, and motivation --
 today's generic PCs are capable of handling many large tasks...

Quite. With regard to billing, we have some home-grown software that (don't
laugh too hard) runs as an NT service; it collects 1,700 ports of
information every five minutes (Bytes[In|Out], BitsSec[In|Out],
AdminStatus, OperStatus, Time) in only 60 seconds. We've found the best
way to do this is to blast SNMP requests and handle the replies as they
arrive, event-driven; wait 10 seconds, retry all the ones we didn't get an
answer from, then try again. We've found that this works the best, having
tried about 4 different ways of doing it over the last 5 years. It's all
then nicely stored in a SQL DB.
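The blast-and-wait pattern looks roughly like this with the net-snmp
asynchronous C API (a sketch only, not the NT service described above;
host names and community are made up, and the 10-second retry pass and
SQL storage are left out):

/* sketch: asynchronous "blast then wait" SNMP polling with net-snmp.
 * Fires a GET at every host up front, then runs a select() loop and
 * handles replies as they arrive. */
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>

static int outstanding;   /* requests still awaiting a reply or timeout */

/* called by the library for each response (or final timeout) */
static int cb(int op, struct snmp_session *sp, int reqid,
              struct snmp_pdu *pdu, void *magic)
{
    if (op == NETSNMP_CALLBACK_OP_RECEIVED_MESSAGE &&
        pdu->errstat == SNMP_ERR_NOERROR)
        print_variable(pdu->variables->name, pdu->variables->name_length,
                       pdu->variables);
    /* a real poller would note timeouts here and queue the host for retry */
    outstanding--;
    return 1;
}

int main(void)
{
    const char *hosts[] = { "rtr1.example.net", "rtr2.example.net" };
    oid ifInOctets1[MAX_OID_LEN];
    size_t len = MAX_OID_LEN, i;

    init_snmp("blastpoll");
    read_objid(".1.3.6.1.2.1.2.2.1.10.1", ifInOctets1, &len);

    /* blast: one GET per host, no waiting in between */
    for (i = 0; i < sizeof(hosts) / sizeof(hosts[0]); i++) {
        struct snmp_session sess, *ss;
        struct snmp_pdu *pdu;

        snmp_sess_init(&sess);
        sess.peername  = (char *)hosts[i];
        sess.version   = SNMP_VERSION_2c;
        sess.community = (u_char *)"public";
        sess.community_len = strlen("public");
        if (!(ss = snmp_open(&sess)))
            continue;

        pdu = snmp_pdu_create(SNMP_MSG_GET);
        snmp_add_null_var(pdu, ifInOctets1, len);
        if (snmp_async_send(ss, pdu, cb, NULL))
            outstanding++;
        else
            snmp_free_pdu(pdu);
    }

    /* wait: event loop until everything has answered or timed out */
    while (outstanding > 0) {
        int fds = 0, block = 1;
        fd_set fdset;
        struct timeval timeout;

        FD_ZERO(&fdset);
        snmp_select_info(&fds, &fdset, &timeout, &block);
        if (select(fds, &fdset, NULL, NULL, block ? NULL : &timeout) > 0)
            snmp_read(&fdset);   /* dispatches replies to cb() */
        else
            snmp_timeout();      /* drives the library's retries/expiry */
    }
    return 0;
}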




-- Alex Rubenstein, AR97, K2AHR, [EMAIL PROTECTED], latency, Al Reuben --
--Net Access Corporation, 800-NET-ME-36, http://www.nac.net   --





debugging packet loss

2002-07-23 Thread Ralph Doncaster


I'm seeing 2-5% packet loss going through a Cisco 2621 with 10 Mbps of
traffic, running at ~50% CPU (packet loss based on ping results).

Pinging another box on the same Catalyst 2900 switch gives no packet loss,
so it seems the 2621 is the source of the packet loss.  I need help
figuring out why it is dropping packets, and how to stop it.

The odd thing is that the interface stats don't show any dropped packets:
  Full-duplex, 100Mb/s, 100BaseTX/FX
  ARP type: ARPA, ARP Timeout 00:01:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of show interface counters 00:10:16
  Queueing strategy: fifo
  Output queue 0/40, 0 drops; input queue 1/512, 0 drops
  5 minute input rate 5257000 bits/sec, 1383 packets/sec
  5 minute output rate 5692000 bits/sec, 1448 packets/sec
 845743 packets input, 392733148 bytes
 Received 403 broadcasts, 0 runts, 0 giants, 0 throttles
 0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
 0 watchdog, 0 multicast
 0 input packets with dribble condition detected
 887446 packets output, 430245866 bytes, 0 underruns
 0 output errors, 0 collisions, 0 interface resets
 0 babbles, 0 late collision, 22694 deferred
 0 lost carrier, 0 no carrier
 0 output buffer failures, 0 output buffers swapped out

Ralph Doncaster
principal, IStop.com 




Re: PSINet/Cogent Latency

2002-07-23 Thread Streiner, Justin


On Mon, 22 Jul 2002, Alex Rubenstein wrote:

 Yes, it's horrid. I've been peering with PSI for going on three years, and
 it's never been as bad as it is now.

I took advantage of their free peering offer back in the day, and ended
up peering with them for about 18 months (06/1999 - 01/2001).  It took
about 9 months for them to get the circuit installed.

For the first few months, everything was great, but then we started
getting massive spikes in latency (300-700ms) just getting across the pipe
between my router and PSI's router.  I liken it to owning an old Audi -
they were great when they ran, but spent more time in the shop than on the
road.

The process of opening tickets and getting clued people in their NOC to
talk to me was an adventure.  PSI, much like some other providers, went to
great pains to try keeping $CUSTOMER from having a direct path to
$CLUEDPEOPLE.

They could never adequately explain the latency, other than it would
mysteriously go away and re-appear, more or less independent of the amount
of traffic on the circuit.  Eventually an upper-level engineer told me
that the saturation was due to congestion on their end of the pipe, and
getting some fatter pipe in there would take 60 days.

Fine.

90 days later, the bigger pipe is installed on their end and the latency
goes away for a few weeks, then comes back.

Wash.  Rinse.  Repeat.

A few more months of that, and I cancelled the peering.

 oddly enough, we see 30+ msec across a DS3 to them, which isn't that
 loaded (35 to 40 mb/s).

 Then, behind whatever we peer with, we see over 400 msec, with 50% loss,
 during business hours.




Re: PSINet/Cogent Latency

2002-07-23 Thread Scott Granados


It has a lot of similarities to old Audis.  Remember, they used to work
fine and then for no reason would fall into drive, rev high, and run
over Grandma and the kids!  Sounds a bit like their peering. :)

On Tue, 23 Jul 2002, Streiner, Justin wrote:

 
 On Mon, 22 Jul 2002, Alex Rubenstein wrote:
 
  Yes, it's horrid. I've been peering with PSI for going on three years, and
  it's never been as bad as it is now.
 
 I took advantage of their free peering offer back in the day, and ended
 up peering with them for about 18 months (06/1999 - 01/2001).  It took
 about 9 months for them to get the circuit installed.
 
 For the first few months, everything was great, but then we started
 getting massive spikes in latency (300-700ms) just getting across the pipe
 between my router and PSI's router.  I liken it to owning an old Audi -
 they were great when they ran, but spent more time in the shop than on the
 road.
 
 The process of opening tickets and getting clued people in their NOC to
 talk to me was an adventure.  PSI, much like some other providers, went to
 great pains to try keeping $CUSTOMER from having a direct path to
 $CLUEDPEOPLE.
 
 They could never adequately explain the latency, other than it would
 mysteriously go away and re-appear, more or less independent of the amount
 of traffic on the circuit.  Eventually an upper-level engineer told me
 that the saturation was due to congestion on their end of the pipe, and
 getting some fatter pipe in there would take 60 days.
 
 Fine.
 
 90 days later, the bigger pipe is installed on their end and the latency
 goes away for a few weeks, then comes back.
 
 Wash.  Rinse.  Repeat.
 
 A few more months of that, and I cancelled the peering.
 
  oddly enough, we see 30+ msec across a DS3 to them, which isn't that
  loaded (35 to 40 mb/s).
 
  Then, behind whatever we peer with, we see over 400 msec, with 50% loss,
  during business hours.
 




Level3 Fiber Cut

2002-07-23 Thread German Martinez


Is anybody affected by a Level3 fiber cut between Boston and New York?

Thanks
German




Re: password stores?

2002-07-23 Thread Sean Donelan



On Tue, 23 Jul 2002, Daniska Tomas wrote:
 i'm wondering how large isps offering managed cpe services manage their
 password databases.

Slovakia, that's an interesting one for NANOG.

Key management is still a hard problem.  It would be nice if the NSA
published how they do it, but I suspect they don't have a cost-effective
way either. Vendors/providers are all over the map.  For the most part,
if you are concerned about security, you should treat it like any other
vendor default password.

On the other hand, people sometimes latch onto small vulnerabilities. If
the only way the password can be used is at the local console, it may be
considered only a slightly increased security risk.  If someone has
physical access to your console, you're usually toast anyway.  You might
configure things so the local password only works when the network
authentication is not available.  This reduces the window of opportunity.
It's still a risk.  But it may be an acceptable risk, much like the fire
department requiring a master key kept in a lockbox outside the front door
of an office building.
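As a concrete illustration, on an IOS-style box a method list along these
lines (names made up; a sketch rather than a recommendation for any
particular platform or release) consults the local account only when the
TACACS+ servers fail to respond, not when they reject a login:

! hypothetical example: fall back to the local user only if TACACS+
! is unreachable
aaa new-model
tacacs-server host 192.0.2.10 key EXAMPLE-KEY
username backup-admin privilege 15 password per-device-secret
aaa authentication login default group tacacs+ local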

The broadband forums have started talking about this.  But the solution
they came up with isn't that great: disable local access.  I suspect
eventually we'll see PK smartcard-addressable CPE, much like
satellite/cable set-top boxes, and customers will no longer be able to
(easily) access the box.




Re: debugging packet loss

2002-07-23 Thread Jason Lewis



 I'm seeing 2-5% packet loss going through a Cisco 2621 with 10mbps of
 traffic running at ~50% CPU.  (packet loss based on ping results)


Isn't ping the first thing to be dropped in favor of other traffic?  I
remember a similar issue and Cisco saying that was the behavior.  Don't
quote me on that.

jas






RE: password stores?

2002-07-23 Thread Shawn Solomon


One common solution is a hash based on the CPE site name or some other
unique key provided by the CPE information (address, phone #, etc.).
Changing the hash occasionally provides new passwords, and it is all
easily scripted.
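Roughly like this, for illustration (a sketch using OpenSSL's MD5; the
master secret, site name, and password length are made up, and a keyed
construction such as HMAC would be a stronger choice):

/* sketch: derive a per-CPE backup password from a master secret plus the
 * CPE's unique site name; rotate the secret to roll every password */
#include <stdio.h>
#include <string.h>
#include <openssl/md5.h>

/* write an 'outlen'-character hex password derived from secret + site */
static void cpe_password(const char *secret, const char *site,
                         char *out, size_t outlen)
{
    unsigned char digest[MD5_DIGEST_LENGTH];
    char buf[256];
    size_t i;

    snprintf(buf, sizeof(buf), "%s:%s", secret, site);
    MD5((const unsigned char *)buf, strlen(buf), digest);

    for (i = 0; i < outlen / 2 && i < MD5_DIGEST_LENGTH; i++)
        sprintf(out + 2 * i, "%02x", digest[i]);
}

int main(void)
{
    char pw[17] = "";                               /* 16 hex chars + NUL */

    cpe_password("master-secret-2002Q3", "cpe-bratislava-0042", pw, 16);
    printf("%s\n", pw);
    return 0;
}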



-Original Message-
From: Daniska Tomas [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, July 23, 2002 2:35 AM
To: [EMAIL PROTECTED]
Subject: password stores?


Hi,

I'm wondering how large ISPs offering managed CPE services manage their
password databases.

Let's say RADIUS/TACACS is used for normal CPE user AAA, but there is
some 'backup' local user account created on the CPE for situations when
the RADIUS server is unreachable. For security reasons, this backup
account (as well as SNMP communities, the RADIUS key, etc.) is unique per
CPE to avoid fraud caused by end users (even if someone does password
recovery on one CPE, they still don't have the password for other CPEs).

If there are hundreds or thousands of these CPEs, that could mean
storing tens of thousands of passwords. Are there any crypto-based
products available, or do people use their own stuff?

Thanks

--
 
Tomas Daniska
systems engineer
Tronet Computer Networks
Plynarenska 5, 829 75 Bratislava, Slovakia
tel: +421 2 58224111, fax: +421 2 58224199
 
A transistor protected by a fast-acting fuse will protect the fuse by
blowing first.




RE: debugging packet loss

2002-07-23 Thread Andy Dills


On Tue, 23 Jul 2002, Phil Rosenthal wrote:


 ---
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of
 Jason Lewis

 Isn't ping the first thing to be dropped in favor of other traffic?  I
 remember a similar issue and Cisco saying that was the behavior.  Don't
 quote me on that.

 jas
 ---

 Even if it is, that still means that other packets could be lost had
 those pings not been there.

Not necessarily. It's my experience that Ciscos will sometimes drop ICMP
instead of replying when under load... but that's only for packets directed
at their interfaces.

So, I might see 5% packet loss from the router itself, but 0% packet loss
for everything behind it.

Andy


Andy Dills  301-682-9972
Xecunet, LLCwww.xecu.net

Dialup * Webhosting * E-Commerce * High-Speed Access




Anybody has BGP problems with Savvis today?

2002-07-23 Thread william


Is anybody else having BGP problems with Savvis today?

They are usually very proactive about any problems -- they call me even for
20-second interruptions -- but today my BGP session has been dead for
probably 5-6 hours (and with it, effectively, my ability to use Savvis as
an upstream provider). I called them 3 hours ago and have yet to hear back
from anyone. I'm fairly certain it's something on their side; I've made
zero changes to my BGP setup & filters in the last days/weeks and have
already rebooted the router just in case. Very strange for them to be so
unresponsive... Wonder if anyone knows what is going on?

Thanks

William




Re: debugging packet loss

2002-07-23 Thread Wes Bachman


On Tue, 2002-07-23 at 10:53, Ralph Doncaster wrote:
 
 I'm seeing 2-5% packet loss going through a Cisco 2621 with 10mbps of
 traffic running at ~50% CPU.  (packet loss based on ping results)
 ...

If it's not the switch or the source Ethernet segment generally, and it
doesn't appear to be the router itself, I would concentrate on the
Ethernet segment between the switch and the router.  (Assuming that your
packet loss is while pinging the router, and not something on the other
end of the router.)

-Wes Bachman  wbachman@leepfrogDOTcom

-- 
Wes Bachman
System & Network Administration, Software Development
Leepfrog Technologies, Inc.




Re: PSINet/Cogent Latency

2002-07-23 Thread Vadim Antonov




Some long, long, long time ago I wrote a small tool called snmpstatd.  Back
then Sprint management was gracious enough to allow me to release it as
public-domain code.

It basically collects usage statistics (in 30-sec peaks and 5-min
averages), plus memory and CPU utilization, from routers by performing
_asynchronous_ SNMP polling.  I believe it can scale to about 5000-1
routers.  It also performs accurate time base interpolation for 30-sec
sampling (i.e. it always requests the router's local time and uses it for
computing accurate 30-sec peak usage).
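The interpolation boils down to computing each rate over the device's own
time delta rather than the poller's wall clock, so a late or early poll is
charged over the real interval. A small sketch of the arithmetic (not
snmpstatd's actual code; the sample values are made up):

/* sketch: rate from two counter samples using device timestamps
 * (e.g. sysUpTime in 1/100 s) instead of the poller's wall clock */
#include <stdio.h>
#include <stdint.h>

static double rate_bps(uint32_t octets_prev, uint32_t octets_now,
                       uint32_t ticks_prev,  uint32_t ticks_now)
{
    /* unsigned subtraction copes with a single 32-bit counter wrap */
    uint32_t d_octets = octets_now - octets_prev;
    uint32_t d_ticks  = ticks_now  - ticks_prev;  /* hundredths of a second */

    if (d_ticks == 0)
        return 0.0;
    return (double)d_octets * 8.0 * 100.0 / (double)d_ticks;
}

int main(void)
{
    /* made-up samples, ~330 s apart by the router's clock (a late poll) */
    printf("%.0f bit/s\n", rate_bps(100000000u, 310000000u, 500000u, 533000u));
    return 0;
}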

The data is stored in text files which are extremely easy to parse.

The configuration is text-based; it also includes compact status alarm
output (i.e. which routers/links are down), a PostScript chart generator,
and a troff/nroff-based text report generator, with summary downtime and
usage figures plus significant events.  The tool was used routinely to
produce reporting on ICM-NET performance for NSF.

This thing may need some hacking to accommodate latter-day IOS bogosities,
though.

If anyone wants it, I have it at www.kotovnik.com/~avg/snmpstatd.tar.gz

--vadim

On Mon, 22 Jul 2002, Gary E. Miller wrote:

 
 Yo Alexander!
 
 On Tue, 23 Jul 2002, Alexander Koch wrote:
 
  imagine some four routers dying or not answering queries,
  you will see the poll script give you timeout after timeout
  after timeout and with some 50 to 100 routers and the
  respective interfaces you see mrtg choke badly, losing data.
 
 Yep.  Anything gets behind and it all gets behind.
 
 That is why we run multiple copies of MRTG.  That way polling for one set
 of hosts does not have to wait for another set.  If one set is timing
 out the other just keeps on as usual.
 
 RGDS
 GARY
 ---
 Gary E. Miller Rellim 20340 Empire Blvd, Suite E-3, Bend, OR 97701
   [EMAIL PROTECTED]  Tel:+1(541)382-8588 Fax: +1(541)382-8676
 
 




New NANOG traceroute (fixes -T compromise)

2002-07-23 Thread Ehud gavron


The new NANOG traceroute (also known as TrACESroute)
is now available.  It fixes the latest security compromise
detailed on Bugtraq and on SUSE security focus at
http://online.securityfocus.com/advisories/2740

Ehud Gavron
[EMAIL PROTECTED]

Directory:
ftp://ftp.login.com/pub/software/traceroute/beta/

Code:
ftp://ftp.login.com/pub/software/traceroute/beta/traceroute.c

Directions:
ftp://ftp.login.com/pub/software/traceroute/beta/0_readme.txt






RE: Security of DNSBL spam block systems

2002-07-23 Thread Brad Knowles


At 2:29 AM -0400 2002/07/23, Phil Rosenthal wrote:

  IMHO Even the really large DNSBL's are barely used -- I think
  (much) less than 5% of total human mail recipients are behind
  a mailserver that uses one...

Not true.  There are plenty of large sites that use them (e.g., 
AOL), and many sites use them to help ensure that they themselves 
don't get added to the black lists.
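For reference, the check a mailserver performs is a plain DNS lookup:
reverse the connecting client's octets, append the list's zone, and ask
for an A record; an answer means the address is listed. A minimal sketch
with a hypothetical zone name:

/* sketch: the DNSBL query made for a connecting client.  192.0.2.99
 * against zone bl.example.org becomes a lookup of
 * 99.2.0.192.bl.example.org; an A answer means "listed". */
#include <stdio.h>
#include <netdb.h>

int main(void)
{
    const char *zone = "bl.example.org";   /* hypothetical DNSBL zone */
    int a = 192, b = 0, c = 2, d = 99;     /* connecting client's IP  */
    char query[256];

    snprintf(query, sizeof(query), "%d.%d.%d.%d.%s", d, c, b, a, zone);

    if (gethostbyname(query) != NULL)      /* NULL/NXDOMAIN -> not listed */
        printf("192.0.2.99 is listed by %s -- reject or tag the mail\n", zone);
    else
        printf("192.0.2.99 not listed\n");
    return 0;
}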


IMO, there is a serious risk of having DNSBL servers attacked and 
used as a DoS.

The easiest way would be to check to see if the servers being 
used are open public caching recursive servers, in addition to their 
authoritative services.  If so, then they would be open to cache 
poisoning attacks.

That said, I think the bigger black list services are run by 
people who have at least half a clue as to how a nameserver should be 
operated, and therefore they should be relatively secure.  However, 
they would still be at risk if one of their parent zones is served by 
a nameserver that mixes both authoritative service and caching/recursive
service, and therefore would be easily subject to
cache poisoning.

-- 
Brad Knowles, [EMAIL PROTECTED]

They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety.
 -Benjamin Franklin, Historical Review of Pennsylvania.



RE: Security of DNSBL spam block systems

2002-07-23 Thread Simon Lyall


On Tue, 23 Jul 2002, Brad Knowles wrote:
   IMO, there is a serious risk of having DNSBL servers attacked and
 used as a DoS.

A slightly different sort of DoS from what you mean would be what we got a
few days ago. I got a call from our NOC about problems with our
old (but still online) incoming mail servers. They were taking about a
minute to put up their SMTP banner when you connected to them.

Turned out the problem was that we were using bl.spamcop.net, which was
being DoSed at the time (according to most reports; some said they had
upstream link problems).

The live servers are using SpamAssassin, which has decent timeouts, so they
were not affected. We try to slave as many RBLs as possible locally
to avoid this sort of problem.

-- 
Simon Lyall.|  Newsmaster  | Work: [EMAIL PROTECTED]
Senior Network/System Admin |  Postmaster  | Home: [EMAIL PROTECTED]
ihug, Auckland, NZ  | Asst Doorman | Web: http://www.darkmere.gen.nz




Sunspot Activity Radio Blackouts

2002-07-23 Thread Andy Ellifson


For anyone that operates a wireless network or a
copper-based network:


Official Space Weather Advisory issued by NOAA Space
Environment Center
Boulder, Colorado, USA

SPACE WEATHER ADVISORY BULLETIN #02-2
2002 July 23 at 12:00 p.m. MDT (2002 July 23 1800 UTC)

( CORRECTED )  MAJOR SUNSPOT ACTIVITY

A major sunspot region has rotated onto the visible
face of the sun. 
This region, designated as Region 39 by NOAA Space
Environment Center
forecasters, is believed to have been the source of
three large coronal
mass ejections on the far side of the sun beginning on
July 16.  This
region will rotate across the visible side of the sun
over the next two
weeks and is expected to produce more solar activity.

Since appearing on the visible side yesterday (July
22) this region has
already produced a major flare at 6:35 pm Mountain
Daylight Time (MDT)
on July 22 (0035, July 23 UTC).  Radio blackouts
reached category R3
(Strong) on the NOAA space weather scales.  In
response to the major
flare, a geomagnetic storm is possible and is expected
to begin between
8:00 pm MDT on July 23 and 8 am MDT on July 24 (0200 -
1400, July 24
UTC). The geomagnetic storm may reach category G2
(moderate) levels on
the NOAA space weather scales.

Category R3 radio blackouts result in widespread HF
radio communication
outages on the dayside of the Earth and can also
degrade low frequency
navigation signals.  Category G2 geomagnetic storms
can lead to minor
problems with electrical power systems, spacecraft
operations,
communications systems, and some navigational systems.
  Aurora
Borealis / Australis (northern / southern lights) may
be seen down into
the mid latitudes (New York, Madison, Boise,
Vladivostok,  Rome,
Tasmania, Wellington - NZ, Puerto Montt - Chile)

Data used to provide space weather services are
contributed by NOAA, 
USAF, NASA, NSF, USGS, the International Space
Environment Services 
and other observatories, universities, and
institutions. For more 
information, including email services, see SEC's Space
Weather 
Advisories Web site http://sec.noaa.gov/advisories or
(303) 497-5127.
The NOAA Public Affairs contact is Barbara McGehan at 
[EMAIL PROTECTED] or (303) 497-6288.