Re: Speedtest site accuracy [was: Bandwidth issues in the Sprint network]

2008-04-08 Thread Doug Clements

On Tue, Apr 8, 2008 at 11:15 AM, Scott Weeks <[EMAIL PROTECTED]> wrote:
>  This brings up a PITA point for me.  Recently, I have seen a rash of 
> "Speedsite test server at  says blah, blah, blah" tickets finally 
> reach me and I am telling everyone they're not an accurate way to measure 
> network performance.  I notice that at least some are just sending text in 
> Latin.
>
>  To other medium-sized eyeball network providers (I'm defining medium size as 
> 50-150K DSL/Cable connections and 50-1500 leased line customers): are you 
> seeing this and what do you tell your customers?

We tell our customers to make sure to use the test site on our
network, which will be quite a bit more accurate than some random
location on the internet they might pick.

There's no reason it can't be reasonably accurate, if you care to
address it. We normally get within a few percent of a given line rate
on everything from ordinary DSL speeds to T1s to DS3s to Fast
Ethernet. It's a very easy and user-understandable way to say "Your T1
is installed, there are no errors that we see, you're getting about
1.4mbit on the speed test, have a nice day", or, alternately, "You're
getting 95mbit/sec down and only 45mbit/sec up, you probably have a
duplex mismatch on your newly installed colo server".
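The "95 down / 45 up" duplex-mismatch call above can be reduced to a
trivial check. This is a toy heuristic with invented thresholds (the
80% and 60% cutoffs are assumptions, not from any actual provisioning
system):

```python
def flag_duplex_mismatch(down_mbit, up_mbit, link_mbit=100.0, ratio=0.6):
    """Heuristic: on a healthy full-duplex Fast Ethernet link, both
    directions should approach line rate.  If one direction is near
    line rate but the other falls well short (here, under 60% of the
    faster side), suspect a duplex mismatch.  Thresholds are invented."""
    low, high = sorted((down_mbit, up_mbit))
    return high > 0.8 * link_mbit and low < ratio * high

# The example from the post: 95 Mbit/s down but only 45 Mbit/s up
print(flag_duplex_mismatch(95, 45))   # True -- suspicious asymmetry
print(flag_duplex_mismatch(94, 93))   # False -- both near line rate
```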

--Doug


Re: M$SQL cleanup incentives

2003-02-22 Thread Doug Clements

On Sat, Feb 22, 2003 at 09:25:24AM -0500, William Allen Simpson wrote:
> Doug Clements wrote:
> > Which is it? Where do you draw the line between something that's big
enough
> > to block forever and something that's not worth tracking down?
>
> Where it causes a network meltdown.  The objective reality is pretty
> clear to some (many? most?) of us.

I see. So you're still filtering port 25 from the Morris sendmail worm.

The issue I had with your argument is "forever". You should realize
as well as anyone that the course of software development and
implementation will mitigate the threat of the Slammer worm until
it's nothing more than a bad memory.

> Filtering is not fun.  That's why I'm trying to get everyone to
> cooperate in eradication of this particular problem, so that we could
> drop filters.  (Look at the subject line.)

The first step in eradication is detection. I presume that since
you're taking this stance, you're checking your filter logs and
attempting to notify the appropriate parties for each hit.

If you're not, then our buddy trying to infect all the machines on his
network every so often is being more effective in wiping out the worm.

> Right now, whether you know it or not, filtering is all that's holding
> the Internet as a whole together  If you didn't filter, you're
> actually depending on the good graces of the rest of us that did!

If you "didn't" filter or "don't" filter? We definitely filtered when
the worm first came out. We don't block port 1433 anymore (nor do any
of our upstreams), but we still report suspicious traffic. Regardless
of what everyone else is doing, the worm is not causing a meltdown
anymore. The correct course of action is to remove filters as
resources allow, and investigate infected machines as they are
noticed.

I'm sorry, but I'm not seeing your case for implementing permanent
filters for this or anything else.

--Doug



Re: M$SQL cleanup incentives

2003-02-22 Thread Doug Clements

I'll bite..

- Original Message -
From: "William Allen Simpson" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, February 21, 2003 2:25 PM
Subject: Re: M$SQL cleanup incentives


[snip]
> I'm of the technical opinion that everyone will need to filter outgoing
> 1434 udp forever.
[snip]
> Iljitsch van Beijnum wrote:
> > Maybe the best approach is to try and deliberately infect the entire
> > local net every few minutes or so to detect new vulnerable systems while
> > the people installing them are still on the premises.
> >
> Gosh, should we do that for every known virus/worm/vulnerability?

Which is it? Where do you draw the line between something that's big
enough to block forever and something that's not worth tracking down?
You lambast him for proposing a solution that would be foolish to
apply to every known possible problem, yet if your own solution were
applied that broadly, we'd have a Swiss-cheese Internet in which every
commonly used destination port is blocked thanks to the scads of
IIS/bind/fingerd/ftpd/whatever worms.

Have fun filtering.

> Or maybe you don't actually own and/or have legal and financial
> accountability for your own network?

Or maybe he likes having a network his customers can actually use.

--Doug



Re: PSINet/Cogent Latency

2002-07-22 Thread Doug Clements


- Original Message -
From: "Phil Rosenthal" <[EMAIL PROTECTED]>
Subject: RE: PSINet/Cogent Latency


> I don't think RRD is that bad if you are gonna check only every 5
> minutes...
>
> Again, perhaps I'm just missing something, but let's say you measure
> 30 seconds late, and it thinks it's on time -- so that one sample will
> be higher, then the next one will be on time, so 30 seconds early for
> that sample -- it will be lower.  On the whole -- it will be accurate
> enough -- no?

If you're polling every 5 minutes, with 2 retries per poll, and you
miss 2 retries, then your next poll will be 5 minutes late. It's not
disastrous, but it's also not perfect. Again, peaks and valleys on
your graph cost more than smooth lines, even with the same total
bandwidth.
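The effect is easy to sketch with invented numbers: if a poller
divides the counter delta by the nominal 300-second interval instead
of the actual elapsed time, a late sample inflates one reading and
deflates the next, even though the link ran at a steady rate the whole
time:

```python
# Invented numbers: a byte counter polled at a nominal 300 s interval.
# The second poll arrives 30 s late; the third catches back up.
polls = [(0, 0), (330, 33_000_000), (600, 60_000_000)]  # (seconds, octets)

NOMINAL = 300  # what a naive poller assumes elapsed between samples

for (t0, c0), (t1, c1) in zip(polls, polls[1:]):
    true_rate = (c1 - c0) / (t1 - t0)   # what the wire actually carried
    naive_rate = (c1 - c0) / NOMINAL    # what a fixed-interval poller records
    print(f"t={t1:3d}s  true={true_rate:6.0f} B/s  naive={naive_rate:6.0f} B/s")
# The link ran at a steady 100000 B/s, but the naive poller records a
# 110000 B/s peak followed by a 90000 B/s valley -- and under percentile
# billing, the peak is what gets charged.
```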

Do you want to be the one to tell your customers your billing setup
is "accurate enough", and especially that it has a tendency to be
"accurate enough" in your favor?

> Besides I think RRD has a bunch of things built in to deal with
> precisely this problem.

Wouldn't that be just spiffy!

> I'm not saying a hardware solution can't be better -- but it is likely
> overkill compared to a few cheap intels running RRD -- assuming your
> snmpd can deal with the load...

No extra hardware needed. I think the desired solution was
integration into the router. The data is already there; you just need
software to compile it and ship it out via a reliable reporting
mechanism. For something relatively simple, it's a nice idea that
could take the "almost" out of an "almost accurate" billing process.

--Doug




Re: PSINet/Cogent Latency

2002-07-22 Thread Doug Clements


- Original Message -
From: "Phil Rosenthal" <[EMAIL PROTECTED]>
Subject: RE: PSINet/Cogent Latency

> Call me crazy -- but what's wrong with setting up RRDtool with a
> heartbeat time of 30 seconds, and putting in cron:
> * * * * * rrdscript.sh ; sleep 30s ; rrdscript.sh
>
> Wouldn't work just as well?
>
> I haven't tried it -- so perhaps this is too taxing (probably you would
> only run this on a few interfaces anyway)...

Redback's implementation overcame the scaling problem of monitoring,
say, 20,000 user circuits. You don't want to poll 20,000 interfaces
for maybe 4 counters each, every 5 minutes.

I think the problem with using rrdtool for billing purposes as
described is that data can (and does) get lost. If your poller is a
few cycles late, the burstable bandwidth measured goes up when the
poller catches up to the interface counters. More bursting is bad for
%ile (or good if you're selling it), and the customer won't like the
fact that they're getting charged for artificially high measurements.
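For reference, the usual 95th-percentile computation is simple, and it
shows why a single artificially high sample can matter. This sketch
uses the common sort-and-discard-top-5% method with invented sample
values:

```python
import math

def ninety_fifth(samples_mbit):
    """Standard 95th-percentile billing: sort the 5-minute samples,
    discard the top 5%, and bill at the highest remaining sample."""
    ordered = sorted(samples_mbit)
    idx = math.ceil(0.95 * len(ordered)) - 1
    return ordered[idx]

# 20 samples, mostly 10 Mbit/s.  A single 80 Mbit/s spike from a late
# poll falls inside the discarded 5%...
flat = [10.0] * 19 + [80.0]
print(ninety_fifth(flat))    # 10.0 -- the lone spike is dropped

# ...but a second spike pushes one into the billable region:
spiky = [10.0] * 18 + [80.0, 80.0]
print(ninety_fifth(spiky))   # 80.0 -- the customer pays for the burst
```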

Bulkstats lets the measurement happen independently of the reporting.

--Doug




Re: PSINet/Cogent Latency

2002-07-22 Thread Doug Clements


- Original Message -
From: "Richard A Steenbergen" <[EMAIL PROTECTED]>
Subject: Re: PSINet/Cogent Latency
> Personally I would like to see the data collection done on the router
> itself where it is simple to collect data very frequently, then pushed
> out. This is particularly important when you are doing things like billing
> 95th percentile, where a loss of connectivity between the polling machine
> and the device is a loss of billing information.

Redbacks can actually do this with what they call Bulkstats. It
collects data on specified interfaces and FTP-uploads the data file at
a specified interval. Pretty slick.

Of course, this isn't very helpful given Redback's extensive core
router lineup, but still.
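The push model is roughly this (a generic sketch, not Redback's actual
bulkstats implementation; the record format, collector host, and FTP
details are all invented for illustration):

```python
import io
import time
from ftplib import FTP

def format_bulkstats(counters, timestamp):
    """One record per interface: '<unix-time> <ifname> <octets>'.
    This record format is invented; real bulkstats schemas are
    vendor-defined."""
    return "".join(f"{timestamp} {name} {octets}\n"
                   for name, octets in sorted(counters.items()))

def push_bulkstats(host, user, password, counters):
    """Collect on the device, then ship one file per cycle.  Because
    the samples are taken router-side, losing the path to the poller
    loses only this upload (which can be retried), not the billing
    data itself."""
    now = int(time.time())
    payload = io.BytesIO(format_bulkstats(counters, now).encode())
    with FTP(host) as ftp:  # hypothetical collector host
        ftp.login(user, password)
        ftp.storbinary(f"STOR bulkstats-{now}.txt", payload)
```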

--Doug




Re: NANOG costs

2002-04-10 Thread Doug Clements


on 4/10/02 8:22 AM, Ukyo Kuonji at [EMAIL PROTECTED] wrote:
> Also, you must realize that not everyone gets hit with the $300 charge.
> Hosts and presenters get free admission, and students get a greatly reduced
> fee.

Whoa, students get a reduced fee? Where do I sign up? I looked and
looked for something like this, but I've ended up paying full price
for the last 2 NANOGs.

--Doug