Re: 3rd party network monitoring

2008-03-07 Thread Jason LeBlanc


One app I like a lot is Ping Plotter, but it only runs on Windows, so it 
isn't good for remote monitoring.  We do use it for some things, 
however.  I like the detailed traceroute / latency visualization it 
has.  It also has a hard time with a large number (100+) of monitored nodes.  
SmokePing works well, but lacks detail in the form of traceroute.
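
A SmokePing collective like the one John floats below mostly comes down to
everyone sharing a common Targets stanza.  Here is a minimal Python sketch
(peer hostnames are made up, and the stock FPing probe is assumed) that
emits one from a peer list:

PEERS = ["peer1.example.net", "peer2.example.net", "peer3.example.net"]

def targets_stanza(peers):
    # Build a SmokePing Targets section with one FPing entry per peer.
    lines = ["*** Targets ***", "probe = FPing", "",
             "menu = Top", "title = SmokePing Collective", ""]
    for host in peers:
        label = host.split(".")[0]        # section names must be simple words
        lines += ["+ " + label,
                  "menu = " + label,
                  "title = " + host,
                  "host = " + host,
                  ""]
    return "\n".join(lines)

print(targets_stanza(PEERS))

Each participant would drop the generated stanza into their own SmokePing
config, so everybody ends up probing the same peer list.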


Jeroen Massar wrote:

John A. Kilpatrick wrote:


On Wed, 5 Mar 2008, Tom Sands wrote:

When we did get a hold of someone, they mentioned they could support 
simple ICMP requests.


To them, simple means it's just a ping check.  They won't 
monitor/graph/care about latency.


I was pondering creating a SmokePing collective: get a bunch of 
guys to agree to run SmokePing and monitor each other.  That's a 
great tool for visualizing changes in latency and works just as well 
with ICMP as with HTTP.


There is this really awesome project from RIPE (like usual ;)

Please check, and start using RIPE TTM: http://www.ripe.net/ttm/
See the site for presentations, tools, info, etc etc etc etc...

Enjoy ;)

Greets,
 Jeroen





Re: 3rd party network monitoring

2008-03-07 Thread Jason LeBlanc


My bad; you might be able to do it with PingPlotter using remote proxies 
running on Linux.  I could see using the Vixie personal colo list to find 
cheap VM offerings in various locations.  Another option: a few of us could 
get together and share some resources to get the proxies distributed.


http://www.pingplotter.com/manual/pro/remote_trace.html

Jeroen Massar wrote:

John A. Kilpatrick wrote:


On Wed, 5 Mar 2008, Tom Sands wrote:

When we did get a hold of someone, they mentioned they could support 
simple ICMP requests.


To them, simple means it's just a ping check.  They won't 
monitor/graph/care about latency.


I was pondering creating a SmokePing collective: get a bunch of 
guys to agree to run SmokePing and monitor each other.  That's a 
great tool for visualizing changes in latency and works just as well 
with ICMP as with HTTP.


There is this really awesome project from RIPE (like usual ;)

Please check, and start using RIPE TTM: http://www.ripe.net/ttm/
See the site for presentations, tools, info, etc etc etc etc...

Enjoy ;)

Greets,
 Jeroen





Re: 3rd party network monitoring

2008-03-07 Thread Jason LeBlanc


I did look at it; it still lacks a few things, but it does cover most of 
them.  It would be nice if you added some screenshots or demo pages showing 
what the reporting looks like.  I had to dig around and find a paper on the 
Slammer worm to see what the output looks like.


Jeroen Massar wrote:

Jason LeBlanc wrote:


My bad; you might be able to do it with PingPlotter using remote 
proxies running on Linux.  I could see using the Vixie personal colo list 
to find cheap VM offerings in various locations.  Another option: a few 
of us could get together and share some resources to get the proxies 
distributed.


Did you actually *check* the URL I passed in? TTM does quite a bit 
more and is already distributed around the world and available to ISPs.


Again:


There is this really awesome project from RIPE (like usual ;)

Please check, and start using RIPE TTM: http://www.ripe.net/ttm/
See the site for presentations, tools, info, etc etc etc etc...


Greets,
 jeroen





Re: IPv6 network boundaries vs. IPv4

2007-08-27 Thread Jason LeBlanc


OT: He probably meant that MOP and LAT are not routable.  Man, that brings 
back memories.


Kevin Oberman wrote:

Date: Sat, 25 Aug 2007 23:56:29 -0600
From: John Osmon [EMAIL PROTECTED]
Sender: [EMAIL PROTECTED]


Is anyone out there setting up routing boundaries differently for
IPv4 and IPv6?  I'm setting up a network where it seems to make
sense to route IPv4, while bridging IPv6 -- but I can be talked
out of it rather easily.

Years ago, I worked on an academic network where we had a mix
of IPX, DECnet, AppleTalk, and IP(v4).  Not all of the routers
actually routed each protocol: DECnet wasn't routable, and I recall
some routers that routed IPX while bridging IP...



DECnet not routable? Not even close to true. At one time DECnet was
technically well ahead of IP networking and far more commonly used. It
was not until about 1993 that IP traffic passed DECnet as the dominant
protocol, and ESnet continued to route DECnet, mostly to support the High
Energy Physics community. When the Hinsdale fire segmented the IP
Internet in 1988, the global DECnet Internet survived, albeit with limited
bandwidth between the coasts.

DECnet was far from perfect and, over time, IP surpassed it in terms of
both performance and robustness, but it was not only routable, it was
globally routed long ago.

  

This all made sense at the time -- there were IPX networks that needed
to be split, while IP didn't need to be.  DECnet was... DECnet -- and 
AppleTalk was chatty, but useful.



AppleTalk was a royal pain! Gator boxes and FastPaths would go insane
and saturate the network with broadcasts. But AppleTalk did have some
really neat features.

  
I keep hearing the mantra in my head of: I want my routers to route, and 
my switches to switch.  I agree wholeheartedly if there is only one 
protocol -- but with the mix of IPv4 and IPv6, are there any folks
doing things differently?  With a new protocol in the mix, are the
lessons of the last 10 (or so) years not as clear-cut?



Most routers are a blend of router and switch. The Cisco 6500 and 7600
boxes are probably the most popular large routers in the world, but the
heart of each is a Catalyst switch. So the switch switches and the
router routes, but they are both the same box.

At a major networking show we would switch the IPv6 back to the core
routers because of bugs in the IPv6 implementations on many systems.

You do what works best for your network. If it means switching IPv6, so
be it. This is probably especially true when the router is from a
company that charges substantially extra for IPv6 software licenses. If
there is only limited IPv6 traffic, switching to a central router might
not only be technically the best solution, but also the most reasonable
fiscal approach.
  




Re: Market for diversity

2007-08-26 Thread Jason LeBlanc


I agree with this, and many people just take the Ts & Cs, MSA, etc. from 
the vendor anyway.  We have a standing habit of reviewing new contracts with 
our attorney on a conference call; we always edit them, send them back to 
the vendor, and negotiate any changes.  It's amazing how much you can get 
things changed in your favor if you're persistent.


More on point for this thread, I always have new vendors bring in fiber 
maps and show me their paths.  Images of the intended path specified on 
the map are part of the contract, including verbiage regarding failover 
paths.  Once I know where their fiber is, I can look for another vendor 
that takes a different path.  Some locations are easier than others, of 
course.  A lot depends on where they prefer to run fiber, or who they 
lease or bought their fiber from.

What I find hard to combat is M&A changing operations over time, usually 
with contractual obligations on the vendor's part being overlooked.  This 
is one reason we always use 12-month terms; we can change things fast 
enough to keep up with their changes.  Sometimes we even go back to the 
same vendor, just to make sure the new company and contract detail what 
we have and where it goes.  It sounds a little tedious, but at least you 
know where your circuits go.


Sean Donelan wrote:


On Sat, 25 Aug 2007, Andy Davidson wrote:
Is it not possible to require that each of your suppliers provide 
service over a specified path?  I'm planning a build-out that will require 
a diverse path between two points, and one supplier has named two 
routes and promised that they won't change for the duration of the 
contract.  Perhaps I am naive, but a promise should be a promise.


Just naive.  Most people make assumptions about what was promised.  If it
sounds too good to be true, it probably is.  What the sales person 
promises, the fine print takes away.


http://www.atis.org/ndai/ATIS_NDAI_Final_Report_2006.pdf

You will find that no one will sell to you if the contract requires 
certain things, and the alternatives are rather limited.


I would be more concerned about suppliers that promise things that 
aren't possible than suppliers that decline to sell things that aren't 
possible.
Unrealistic buyers are just as much of a problem as non-performance by 
sellers.


If anyone promises that their network will never go down, that they will 
never have single paths, and that they are perfect, you should grab your 
wallet and run away.




Re: Network Inventory Tool

2007-08-16 Thread Jason LeBlanc


I would second this.  We're evaluating it right now; it takes a little 
getting used to, but the capabilities are pretty impressive.  There is a 
pretty steep cost to play initially.  Once the first chunk of existing 
devices is licensed, adding more isn't as painful; at least that's how I'm 
selling it within my org.  This is far more than an inventory tool; the 
config management is where it is really impressive.


James Fogg wrote:

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Wguisa71
Sent: Monday, August 13, 2007 11:31 PM
To: NANOG
Subject: Network Inventory Tool


Guys,

Does anyone know of a tool for network documentation with:

- inventory (cards, serial numbers, manufacturer, ...)
- documentation (configurations, software version control, etc.)
- topology building (L2, L3 connections, layer control, ...)

An all-in-one solution, and it doesn't need to be free.  I'm just looking
for something to manage the equipment we have, like routers from various
suppliers, etc...

Marcio



Opsware Network Automation System does an excellent job. Not free. It
also handles configuration management, software management, compliance,
configuration policy management and other needs.	 
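
For the bare "cards and serial numbers" piece of Marcio's list, a plain SNMP
walk of ENTITY-MIB gets you a long way before reaching for a commercial
suite.  A rough sketch using the pysnmp package (the hostname and community
string are placeholders):

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, nextCmd)

ENT_PHYSICAL_DESCR = "1.3.6.1.2.1.47.1.1.1.1.2"    # ENTITY-MIB::entPhysicalDescr
ENT_PHYSICAL_SERIAL = "1.3.6.1.2.1.47.1.1.1.1.11"  # ENTITY-MIB::entPhysicalSerialNum

def inventory(host, community="public"):
    # Walk the two ENTITY-MIB columns in lockstep and print populated serials.
    for err, status, _idx, varbinds in nextCmd(
            SnmpEngine(), CommunityData(community),
            UdpTransportTarget((host, 161)), ContextData(),
            ObjectType(ObjectIdentity(ENT_PHYSICAL_DESCR)),
            ObjectType(ObjectIdentity(ENT_PHYSICAL_SERIAL)),
            lexicographicMode=False):
        if err or status:
            raise RuntimeError(err or status.prettyPrint())
        descr, serial = (vb[1].prettyPrint() for vb in varbinds)
        if serial:
            print("%s: %s" % (descr, serial))

inventory("router1.example.net")   # placeholder hostname

It obviously doesn't touch config management or topology, which is where the
commercial tools earn their keep.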

  




Re: Why do we use facilities with EPO's?

2007-07-26 Thread Jason LeBlanc


I do.  Hurricane Wilma blew the roof off our building, with water pouring in, 
pooling under the floor and onto the PDUs and UPS (800 amps of 480 V).  We 
wanted to save the data on the servers, so we had to hit the EPO to enter the 
room (anyone have an idea of how far that much power would arc?).  It 
was STILL quite scary since the batteries were still charged, so I actually 
flipped the breaker on the UPS as well.  Not fun to be around that much power 
when there is a lot of water.  It's the only time I've ever seen an EPO hit 
in person.


Jerry Pasker wrote:


I've always wondered who died or was injured and caused the EPO to 
come into existence.  There have been lots of EPO-caused downtime 
stories, but does anyone on the NANOG list even have one single thank-God-
for-the-EPO story?  I'll feel better about the general state of 
the world if I know that the EPO actually has a real, valid use that 
has been ACTUALLY PROVEN IN PRACTICE rather than just in someone's mind.



-Jerry, who is so anti-EPO that he has no remote EPO buttons, and even 
has an irrational fear of the jumper on the EPO terminal strip 
inside his UPSes coming undone.






Re: [cacti-announce] Cacti 0.8.6j Released (fwd)

2007-01-25 Thread Jason LeBlanc


This is where DBMSes designed for data warehouses might come into play, 
something like Sybase IQ.  They are adapted for long-term storage and retrieval.
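
On the schema question quoted below, the usual approach is a narrow "long"
table keyed by (series, time) rather than one column per counter.  A minimal
sketch of the idea using SQLite (table and column names are only
illustrative; a warehouse or column store would be the real target at scale):

import sqlite3, time

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE series (
    series_id INTEGER PRIMARY KEY,
    device    TEXT NOT NULL,
    metric    TEXT NOT NULL,         -- e.g. ifHCInOctets.3
    UNIQUE (device, metric)
);
CREATE TABLE sample (
    series_id INTEGER NOT NULL REFERENCES series(series_id),
    ts        INTEGER NOT NULL,      -- unix time of the poll
    value     REAL    NOT NULL,      -- raw counter or gauge reading
    PRIMARY KEY (series_id, ts)
) WITHOUT ROWID;                     -- keeps samples clustered by series, time
""")

db.execute("INSERT INTO series (device, metric) VALUES (?, ?)",
           ("core1", "ifHCInOctets.3"))
db.execute("INSERT INTO sample VALUES (1, ?, ?)",
           (int(time.time()), 123456789.0))

# Range scans for graphing stay cheap because rows are clustered by the key.
print(db.execute("SELECT ts, value FROM sample WHERE series_id = 1 "
                 "ORDER BY ts").fetchall())

The narrow layout is what lets you add new counters without schema changes;
the hard part, as noted below, is making the insert and retention load scale.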


[EMAIL PROTECTED] wrote:

 how do you define your schema?
how long does it take to insert/index/whatnot the data?



This is a much bigger deal than most people realize.
Poor schema design will cause your system to choke
badly when you try to scale it. In fact, relational 
databases are not the ideal way to store this kind
of data, so when you design your schema you are really
fighting against the database to wrestle it into something
that will work.

  
	This is a huge burden to figure it all out, implement, 
and then monitor/operate 24x7.  Miss enough samples or data 
and you end up billing too little.  This is why most folks 
have either cooked their own, or use some expensive suite of 
tools, leaving just a little bit of other stuff out there.



Personally, I doubt that it is possible to build a
workable system, even with plugins, that will do the
job for a significant percentage of service providers.
Different companies have different needs, different
hot-button items, etc. This is an area where breaking
the problem down into well-defined separate problems
with well-defined linkages will go a long way.

Just solving the data storage problem is a good place
to start. If someone can create a specialized
network monitoring database that scales, then the rest of
the toolkit will be much easier to deal with. Note that 
people have done a lot of research on this sort of
time-series database. People working in high-energy physics
also have to deal with massive sets of time-series data.
There is plenty of literature out there to help guide
a design effort. But open-source developers don't usually
do this kind of up-front research before starting to code.
Money and manpower won't solve that kind of problem.

--Michael Dillon

  




Re: Cable Tying with Waxed Twine

2007-01-25 Thread Jason LeBlanc


I can't imagine using waxed twine; I love my Velcro.

Randy Epstein wrote:

Hey Marty :)

snip
  

and digg it:

http://www.digg.com/mods/The_lost_art_of_cable-lacing...



Corrected URL:
http://www.digg.com/mods/The_lost_art_of_cable-lacing...?cshow=194773

  

-M



Randy
  




Re: [cacti-announce] Cacti 0.8.6j Released (fwd)

2007-01-24 Thread Jason LeBlanc


I would say that somewhere around 4000 network interfaces (6-8 stats per 
interface) and around 1000 servers (8-10 stats per server) we started seeing 
problems, both with navigation in the UI and with stats not reliably 
updating.  I did not try that poller; perhaps it's worth trying Cacti again 
using it.  I will also say this was about two years ago, and I think the box 
it was running on was a dual P3-1000 with a RAID 10 using six drives (10k 
RPM, I think).


After looking for 'the ideal' tool for many years, it still amazes me 
that no one has built it: bulk gets, a scalable schema, and a good 
portal/UI.  RTG is better than MRTG, but the config/DB/portal are still 
lacking.
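
On the "bulk gets" point, SNMP GETBULK turns the per-device poll into a
handful of packets instead of one GETNEXT per interface.  A rough pysnmp
sketch pulling the 64-bit input counters (hostname and community string are
placeholders):

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, bulkCmd)

IF_HC_IN_OCTETS = "1.3.6.1.2.1.31.1.1.1.6"   # IF-MIB::ifHCInOctets

def poll_in_octets(host, community="public"):
    # One GETBULK walk over the ifHCInOctets column for every interface.
    counters = {}
    for err, status, _idx, varbinds in bulkCmd(
            SnmpEngine(), CommunityData(community),
            UdpTransportTarget((host, 161)), ContextData(),
            0, 25,                                  # nonRepeaters, maxRepetitions
            ObjectType(ObjectIdentity(IF_HC_IN_OCTETS)),
            lexicographicMode=False):               # stop at the end of the column
        if err or status:
            raise RuntimeError(err or status.prettyPrint())
        for oid, value in varbinds:
            ifindex = int(oid.prettyPrint().rsplit(".", 1)[-1])
            counters[ifindex] = int(value)
    return counters

print(poll_in_octets("core1.example.net"))   # placeholder hostname

The poller would then hand those counters to whatever storage backend you
settle on, which is where the schema/portal gaps show up.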



Jon Lewis wrote:


On Mon, 22 Jan 2007, Jason LeBlanc wrote:

Anyone that's seen MRTG (simple, static) on a large network realizes 
that decoupling the graphing from the polling is necessary.  The disk 
I/O is brutal.  Cacti has a slick interface, but also doesn't scale 
all that well for large networks.  I prefer RTG, though I haven't 
seen a nice interface for it yet.


How large did you have to get for cacti to not scale?  Did you try 
the cactid poller [which is much faster than the standard poller]?


--
 Jon Lewis   |  I route
 Senior Network Engineer |  therefore you are
 Atlantic Net|
_ http://www.lewis.org/~jlewis/pgp for PGP public key_




Re: Google wants to be your Internet

2007-01-24 Thread Jason LeBlanc


I hear you on the double/triple NAT nightmare; I'm there myself.  I'm 
working on rolling out VRFs to solve that problem, still testing.  The 
NAT complexities and bugs (NAT translations losing their mind and 
killing connectivity for important apps) are just too much for some of 
our customers, users, etc. to deal with.  Some days it kills me that v6 
is still not really viable; I keep asking providers where they're at 
with it.  Their most common complaint is that the operating systems 
don't support it yet.  They mention primarily Windows, since that is what 
is most widely deployed -- not in the colo world, but among the users.  I 
suggested they offer a service that somehow translates v4 to v6 (heh, 
shifting the pain to them) for their customers, to move things along.


Roland Dobbins wrote:



On Jan 24, 2007, at 4:58 AM, Mark Smith wrote:


The problem is that you can't be sure that if you use RFC1918 today you
won't be bitten by its non-uniqueness property in the future. When
you're asked to diagnose a fault with a device with the IP address
192.168.1.1, and you've got an unknown number of candidate devices
using that address, you really start to see the value in having
worldwide unique, but not necessarily publicly visible, addressing.



That's what I meant by the 'as long as one is sure one isn't buying 
trouble down the road' part.  Having encountered problems with 
overlapping address space many times in the past, I'm quite aware of 
the pain, thanks.



RFC1918 was created for a reason, and it is used (and misused, we all 
understand that) today by many network operators for a reason.  It is 
up to the architects and operators of networks to determine whether or 
not they should make use of globally unique addresses or RFC1918 
addresses on a case-by-case basis; making use of RFC1918 addressing is 
not an inherently stupid course of action, and its appropriateness in any 
given situation is entirely subjective.


---
Roland Dobbins [EMAIL PROTECTED] // 408.527.6376 voice

Technology is legislation.

-- Karl Schroeder
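
As an aside, the "worldwide unique, but not necessarily publicly visible"
addressing Mark describes is essentially what IPv6 ULAs (RFC 4193) provide.
A rough sketch of carving out a /48 (simplified: random bytes stand in for
the RFC's SHA-1 of an EUI-64 plus a timestamp):

import os
import ipaddress

def make_ula_prefix():
    # RFC 4193: fd00::/8 plus a 40-bit Global ID chosen pseudo-randomly.
    global_id = os.urandom(5)                       # 40 random bits
    packed = bytes([0xfd]) + global_id + bytes(10)  # pad to a full 128 bits
    return ipaddress.IPv6Network((ipaddress.IPv6Address(packed), 48))

print(make_ula_prefix())   # e.g. fd3c:91a7:22b0::/48

The random Global ID is what gives the "statistically unique" property, so
merged or interconnected networks are very unlikely to collide the way
RFC1918 deployments do.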







Re: [cacti-announce] Cacti 0.8.6j Released (fwd)

2007-01-22 Thread Jason LeBlanc


Anyone that's seen MRTG (simple, static) on a large network realizes that 
decoupling the graphing from the polling is necessary.  The disk I/O is 
brutal.  Cacti has a slick interface, but also doesn't scale all that 
well for large networks.  I prefer RTG, though I haven't seen a nice 
interface for it yet.
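
One way to keep graphing decoupled from polling is to render lazily from the
RRDs the poller already writes, only when someone actually views a page.  A
rough sketch shelling out to rrdtool (the RRD path and the DS name "in" are
assumptions; rrdtool must be on PATH):

import os, subprocess, time

def graph_if_stale(rrd="port.rrd", png="port-day.png", max_age=300):
    # Reuse the cached image if it was rendered within the last max_age seconds.
    if os.path.exists(png) and time.time() - os.path.getmtime(png) < max_age:
        return png
    subprocess.run(
        ["rrdtool", "graph", png,
         "--start", "-86400", "--title", "inbound, last 24 hours",
         "DEF:in=%s:in:AVERAGE" % rrd,
         "LINE1:in#0000FF:inbound"],
        check=True)
    return png

print(graph_if_stale())

The poller keeps updating the RRDs on its own schedule; the web side only
pays the rendering cost for graphs people actually look at, which is the
point Chris makes below.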


Chris Owen wrote:



On Jan 21, 2007, at 11:35 PM, Travis H. wrote:


That is, most of the dynamically-generated content doesn't need to be
generated on demand.  If you're pulling data from a database, pull it
all and generate static HTML files.  Then you don't even need CGI
functionality on the end-user interface.  It thus scales much better
than the dynamic stuff, or SSL-encrypted sessions, because it isn't
doing any computation.


While I certainly agree that Cacti is a bit of a security nightmare, 
what you suggest may not scale all that well for a site doing much 
graphing.  I'm sure the average Cacti installation is recording 
thousands of things every 5 minutes, but virtually none of those are 
ever actually graphed, and those that are viewed certainly aren't viewed 
every 5 minutes.  Even if polling and graphing took the same amount of 
resources, that would double the load on the machine.  My guess, though, 
is that graphing actually takes many times the resources of polling.  
It just makes sense to only graph stuff when necessary.


Chris





Re: Anything going on in Atlanta, GA?

2007-01-11 Thread Jason LeBlanc


I'm on 6 and experienced no issues.  They are also on 5, which is where 
the problem may have had more impact, as that is the old PAIX space 
where more of the telco stuff goes on.


Randy Epstein wrote:

Bill,

  

Switch and Data was reporting power issues at 56 Marietta
earlier.  Don't know if it was isolated to their suite, or
more widespread.

bill



No issues on the 2nd, 3rd, or 4th floor.  Not sure about the 6th (where S&D is
located).

There are also separate generators in the building for the various tenants.

Regards,

Randy
  




Re: Anti-Virus help for all of us??????

2003-11-24 Thread Jason LeBlanc
I tend to encourage people to use PestPatrol for the malware on windoze 
boxes.

Suresh Ramasubramanian wrote:

Jeff Shultz  writes on 11/24/2003 1:46 PM:

Firewalls at least tend to be a bit more hands off... and I'd like to
hear more about the snake oil parts. Doesn't the 1/2wall that XP
ships with default to disabled? 


Interesting reading here -
http://groups.google.com/groups?q=vernon+schryver+snake+oil+firewall



Re: Extreme BlackDiamond

2003-10-13 Thread Jason LeBlanc
75xx/GSR, dCEF?  75xx/GSR are L3 switches then. ;)  Not to add 
flame-bait, but...

http://www.cisco.com/univercd/cc/td/doc/product/software/ios121/121cgcr/switch_c/xcprt2/xcdcef.htm

Mikael Abrahamsson wrote:

On Mon, 13 Oct 2003 [EMAIL PROTECTED] wrote:

 

I don't understand how you can differentiate between a router and an L3
switch. In my view, "L3 switch" is a marketing term. All high-end boxes
do hardware-based IP forwarding, whether their ancestry is from the L2
or the L3 side.
   

To me, something that uses hardware assist, set up by the CPU per 
destination, is an L3 switch. Something that does equal route lookups per 
packet, all the time, is a router.

 




Re: Extreme BlackDiamond

2003-10-13 Thread Jason LeBlanc
BGP scanner CPU usage == number of neighbors * number of routes in the table

Lots of neighbors would cause this, for longer periods.  If running a 
Sup1A/MSFC this could be worse than with an MSFC2 (slightly more CPU 
power), and much worse than with a Sup2, I'm guessing.

Tom (UnitedLayer) wrote:

On Mon, 13 Oct 2003 [EMAIL PROTECTED] wrote:
 

Maybe you could expand on the BGP scanner problems - we haven't seen
them all the time we've been running 6500 native with full routes (about
1.5 years now).
   

BGP Scanner taking up close to 100% of CPU on a box periodically.
The GSR doesn't seem to do it, but a buncha other Cisco boxes do.
It's more irritating than anything else, especially when customers complain
that when they traceroute they see ~200ms latency to the router...