Re: [c-nsp] Cisco Security Advisory: Crafted IP Option Vulnerability
I would say that this would work: http://addxorrol.blogspot.com/2007/01/one-of-most-amusing-new-features-of.html It requires expensive software, BinNavi and IDA Pro Advanced, but anyone equipped with those tools could do it. I heard that parts of PaiMei work under BSD/Linux, and certainly GPF and Autodafé could be used for fault injection during step-mode debugging. PaiMei also uses IDA. The other tools are open source, including PaiMei itself. Using PyDBG in PaiMei could make debugging faster than with gdb by way of scripting, which could allow things like process stalking. If that's the case, I could envision anyone with a symbol table getting PoC remote code execution (a la Mike Lynn and Hacking Exposed: Cisco Networks) within 3 hours and having a reliable exploit within 10 hours. Worm at 11. But PaiMei doesn't do that (yet), and nobody has the rest of the resources to accomplish this task. Right? But you don't really even need a symbol table if you have lots of time to debug and design the exploit. That's more advanced and would require somebody like Halvar Flake, FX, or Pedram Amini - all three of whom I credit for this vulnerability-feasibility fact-finding. So it's too late. Don't bother upgrading now; you're already owned. Unless they are blocking it at the ISP borders the same way they blocked the Cisco IPv4 Crafted DoS vulnerability in 2003. ISPs probably got the patch (or at least Cisco's ISPs did) a week ago. Had rolling reboots lately? Don't know why? Lots of "miscellaneous" ISP maintenance. I wonder... Hey Cisco - listen up. Hire some vulnerability assessors before the probable future Month-of-Cisco-Bugs becomes Year-of-Cisco-Bugs, aka the loss of 10B US dollars in revenue. Or whatever John Chambers makes, whichever is lower. 
-dre On 1/24/07, Kevin Graham <[EMAIL PROTECTED]> wrote: On Wed, 24 Jan 2007, Cisco Systems Product Security Incident Response Team wrote: > Cisco Security Advisory: Crafted IP Option Vulnerability If I recall correctly, this is the first (PSIRT-acknowledged) stack/heap vulnerability since Michael Lynn's much-publicized BlackHat presentation. While there was plenty of brief speculation at the time about what Chinese/Russian/American-xenophobic-target hax0rs had already implemented, not much bubbled up to the operational world... Does anyone more active in the security community have pointers as to what generic (and common) tools targeting IOS exist? On 1/24/07, Paul Stewart <[EMAIL PROTECTED]> wrote: > I have read over this and am "fearful" of what I read.. my first thought is > to drop everything, get emergency maintenance window releases and spend a > couple of nights upgrading like crazy... "20070124-crafted-tcp" seems obvious enough (though it would've been good for PSIRT to indicate how "small" the leakage per packet is, to gauge CoPP values), but "20070124-crafted-ip-option" likely should tingle your spine. ___ cisco-nsp mailing list [EMAIL PROTECTED] https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
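For context on what border filtering keys on: the "crafted-ip-option" advisory concerns packets whose IPv4 header carries options at all, and such packets are structurally distinctive because the IHL field exceeds 5. A minimal stdlib-Python sketch of such a header (illustrative only; the addresses and the choice of a Record Route option are my examples, not from the advisory):

```python
import struct

def ipv4_header_with_option(src, dst, option_bytes):
    """Build a raw IPv4 header carrying the given IP option bytes.

    Illustrative sketch: the point is that any option-bearing packet
    has IHL > 5, which is what filtering of IP options can key on.
    """
    pad = (-len(option_bytes)) % 4            # options pad to 32-bit words
    opts = option_bytes + b"\x00" * pad
    ihl = 5 + len(opts) // 4                  # header length in 32-bit words
    assert ihl <= 15, "IPv4 allows at most 40 bytes of options"
    header = struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | ihl,                       # version 4, IHL
        0,                                    # TOS
        ihl * 4,                              # total length (header only)
        0, 0,                                 # identification, flags/frag
        64, 1, 0,                             # TTL, protocol=ICMP, checksum 0
        bytes(map(int, src.split("."))),      # source address
        bytes(map(int, dst.split("."))),      # destination address
    )
    return header + opts

# A maximal Record Route option (type 7, length 39, pointer 4) as one example.
pkt = ipv4_header_with_option("192.0.2.1", "192.0.2.2",
                              bytes([7, 39, 4]) + b"\x00" * 36)
print(pkt[0] & 0x0F)   # → 15, vs. 5 for an option-free header
```

On IOS images that support option matching, an extended ACL along the lines of `deny ip any any option any-options` (syntax hedged; check your release) drops exactly this class of packet at the border before it reaches the receive path.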
Re: Cable Tying with Waxed Twine
>Upon leaving a router at telx and asking one of their techs to plug >in the equipment for me, I came back to find all my cat5 cables neatly >tied with some sort of waxed twine, using an interesting looping knot >pattern that repeated every six inches or so using a single piece of >string. For some reason, I found this trick really cool. It's called 'wax lacing' and it was originally a CO standard. It was adapted to colocation, FWIW, first by MCI, IIRC, then Level(3). Level(3) mastered the art of building converged central office and colo (T Colo + Colo) by taking Bellcore standards and CO experience and creating hybrid standards of design and installation. Internap used this standard as well. The beauty of using this technique is service delivery and aesthetics. You don't just do and un-do wax lacing. It's meant to be permanent, so in order to use it extensively you need to have a superior cross-connect system and plant engineering in place, and a detailed service-delivery methodology. This doesn't work in most places because they don't have or do enough detail planning. The knot you are seeing is likely the "Chicago knot". It should be easily undone by tugging on one of the two short ends. Wax is also used in conjunction with "fish paper", a green waxed paper used as a coating between metal and cable so that wear from vibrations et al. is offset. There are multiple reasons to use wax over zip ties. Some are safety related, some are service-delivery related, and some are wear related. It is definitely not cheap. It is also a highly technical undertaking to do correctly. You have to make all your decisions on cabling up front, i.e. split at center, left to right, split at rack, mid to upper, mid to lower, etc. http://www.dairiki.org/hammond/cable-lacing-howto/ and digg it: http://www.digg.com/mods/The_lost_art_of_cable-lacing... (I'm well under 50. See digg article :) ) -M<
Re: Cable-Tying with Waxed Twine
Confession time - I'm over 50 At 09:41 p.m. 24/01/2007 -0700, you wrote: As for plastic ties (TyRap is the brand name for the Thomas & Betts version) they may be easy to use, but they do have several functional drawbacks, including: 1) difficulty in maintaining consistent tension from tie to tie, and as a corollary it is comparatively easy to overtighten one, risking compression-related damage to the underlying cabling, or as mentioned above, increasing crosstalk when using twisted-pair cables You can buy a cable-tie gun from Panduit, along with ties on a bandolier. They are used in appliance manufacture for making up wiring looms, instead of lacing them. The tension is programmable. You may also remember that in cars, the wiring harness was in a cloth jacket.. 2) can harden and/or become brittle over time, eventually failing under stress H'mm - you can buy various grades of cable tie. I have a lot of personal experience with a black Ty-Rap. It's black with a stainless-steel tag. The black makes it UV-stable, and I get nervous if we don't have a few thousand in stock. I carry a few hundred in my van... White ties aren't UV-stable and so are indoor rated only. Of course, I live in a country where the weather report gives a UV rating each day, due to the ozone depletion making a hole right above us - due to CFCs in aerosol cans. Thanks guys and girls. Get Joe Abley to tell you about CityLink over a few beers. But basically, it's a 20 km metro fiber network suspended off the trolley-bus wires. I built the first 200-odd buildings, before we got "staff". The fiber is attached to a synthetic rope (Kevlar), which is the catenary wire, by a TyRap TY25 (from memory) every 300 mm. The way we worked: my van pulled the trailer with the fiber drum, Ryan and Glenn were in the cherry-picker, moving from pole to pole. I was on the ground cable-tying like mad. Ryan then pulled the cable up, tensioned it, made it fast, and we moved on. Been doing it since 1996. 
These days we use self-supporting fiber, so we run much faster - no cable ties until we overlay. 3) typical background vibration causes them to tend to chafe the sheaths of the wiring that the ties are in direct contact with, over a period of years. Buy the ones with stainless tags - they last for years. The cheap plastic ones are toys. Lacing is a lot slower than using plastic ties, and doing it is rough on your fingers. If you're lucky you know a data tech who can show you how to do it properly; it's really not something that you can just describe in writing. Depending upon the specific need, contact points may also have pieces of fish paper laced to them before the wiring is laid out and laced into place. Not unusual to see this when DC power cables are being secured. H'mmm - the DC cables I'm used to are the size of your arm - per polarity. We don't lace them, just bury them. But sorry - I'm old and been around. I worked in a power utility for 14 years. BTW, Broadband over Power we call ripple control. It turns on the street lights, load control etc. Been doing it for years, and it's not hard to go both ways. Zellweger in Uster, Switzerland used to make the cool stuff. I have photos somewhere. We also inject DC into the AC network, but that's another beer or two. First you have to work out why the utilities use AC.. Rich
Re: Colocation in the US.
> If you have water for the racks: we've all gotta have water for the chillers. (compressors pull too much power, gotta use cooling towers outside.) > http://www.knuerr.com/web/en/index_e.html?products/miracel/cooltherm/cooltherm.html~mainFrame i love knuerr's stuff. and with mainframes or blade servers or any other specialized equipment that has to come all the way down when it's maintained, it's a fine solution. but if you need a tech to work on the rack for an hour, because the rack is full of general purpose 1U's, and you can't do it because you can't leave the door open that long, then internal heat exchangers are the wrong solution. knuerr also makes what they call a "CPU cooler" which adds a top-to-bottom liquid manifold system for cold and return water, and offers connections to multiple devices in the rack. by collecting the heat directly through paste and aluminum and liquid, and not depending on moving-air, huge efficiency gains are possible. and you can dispatch a tech for hours on end without having to power off anything in the rack except whatever's being serviced. note that by "CPU" they mean "rackmount server" in nanog terminology. CPU's are not the only source of heat, by a long shot. knuerr's stuff is expensive and there's no standard for it so you need knuerr-compatible servers so far. i envision a stage in the development of 19-inch rack mount stuff, where in addition to console (serial for me, KVM for everybody else), power, ethernet, and IPMI or ILO or whatever, there are two new standard connectors on the back of every server, and we've all got boxes of standard pigtails to connect them to the rack. one will be cold water, the other will be return water. note that when i rang this bell at MFN in 2001, there was no standard nor any hope of a standard. today there's still no standard but there IS hope for one. > (there are other vendors too, of course) somehow we've got standards for power, ethernet, serial, and KVM. 
we need a standard for cold and return water. then server vendors can use conduction and direct transfer rather than forced air and convection. between all the fans in the boxes and all the motors in the chillers and condensers and compressors, we probably cause 60% of datacenter related carbon for cooling. with just cooling towers and pumps it ought to be more like 15%. maybe google will decide that a 50% savings on their power bill (or 50% more computes per hydroelectric dam) is worth sinking some leverage into this. > http://www.spraycool.com/technology/index.asp that's just creepy. safe, i'm sure, but i must be old, because it's creepy.
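The 50%-savings claim above follows arithmetically from the two cooling fractions cited, assuming the IT load itself stays fixed. A quick sketch (the 60% and 15% figures are the poster's estimates, not measurements):

```python
def facility_power(it_kw, cooling_fraction):
    """Total facility power when cooling is the given fraction of the total.

    If cooling is fraction f of the whole, the IT load is (1 - f) of it,
    so total = IT / (1 - f).
    """
    return it_kw / (1.0 - cooling_fraction)

it = 1000.0                               # hold IT load fixed at 1 MW
before = facility_power(it, 0.60)         # fans/compressors/chillers: 2500 kW
after = facility_power(it, 0.15)          # cooling towers and pumps: ~1176 kW
print(round(1 - after / before, 2))       # → 0.53, i.e. roughly "50% savings"
```

So under those assumptions the power bill drops by about half even though the computing itself draws exactly as much as before, which is the leverage being pitched to google.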
Re: Colocation in the US.
How about CO2? tv - Original Message - From: "Mike Lyon" <[EMAIL PROTECTED]> To: "Brandon Galbraith" <[EMAIL PROTECTED]> Cc: <[EMAIL PROTECTED]>; "Paul Vixie" <[EMAIL PROTECTED]>; Sent: Wednesday, January 24, 2007 5:49 PM Subject: Re: Colocation in the US. I think if someone finds a workable non-conductive cooling fluid that would probably be the best thing. I fear the first time someone is working near their power outlets and water starts squirting, flooding and electrocuting everyone and everything. -Mike On 1/24/07, Brandon Galbraith <[EMAIL PROTECTED]> wrote: On 1/24/07, Deepak Jain <[EMAIL PROTECTED]> wrote: > > > Speaking as the operator of at least one datacenter that was originally > built to water cool mainframes... Water is not hard to deal with, but > it > has its own discipline, especially when you are dealing with lots of it > (flow rates, algicide, etc). And there aren't lots of great manifolds > to > allow customer (joe-end user) service-able connections (like how many > folks do you want screwing with DC power supplies/feeds without some > serious insurance).. > > Once some standardization comes to this, and valves are built to detect > leaks, etc... things will be good. > > DJ > In the long run, I think this is going to solve a lot of problems, as cooling the equipment with a water medium is more effective than trying to pull the heat off of everything with air. But standardization is going to take a bit.
Re: Cable-Tying with Waxed Twine
> The other thing I found interesting; The use of Zip Ties on Copper Cabling > is frowned upon by BICSI. Velcro preferred. > > Something to do with the compression on a twisted-pair cable caused by > over-tight nylon cable ties screwing with their twist rates, and thus > changing their Crosstalk characteristics... Yep. For starters, the stuff that Dan Mahoney is looking for is properly known as waxed linen lacing cord. In a past life I used to order the stuff made by Ludlow Textiles through Graybar, their part # back then was 89039323. It's not always in stock in individual stores. As for plastic ties (TyRap is the brand name for the Thomas & Betts version) they may be easy to use, but they do have several functional drawbacks, including: 1) difficulty in maintaining consistent tension from tie to tie, and as a corollary it is comparatively easy to overtighten one, risking compression-related damage to the underlying cabling, or as mentioned above, increasing crosstalk when using twisted-pair cables 2) can harden and/or become brittle over time, eventually failing under stress 3) typical background vibration causes them to tend to chafe the sheaths of the wiring that the ties are in direct contact with, over a period of years. Lacing is a lot slower than using plastic ties, and doing it is rough on your fingers. If you're lucky you know a data tech who can show you how to do it properly; it's really not something that you can just describe in writing. Depending upon the specific need, contact points may also have pieces of fish paper laced to them before the wiring is laid out and laced into place. Not unusual to see this when DC power cables are being secured.
Re: Colocation in the US.
Paul Vixie wrote: i'm spec'ing datacenter space at the moment, so this is topical. at 10kW/R you'd either cool ~333W/SF at ~30sf/R, or you'd dramatically increase sf/R by requiring a lot of aisleway around every set of racks (~200sf per 4R cage) to get it down to 200W/SF, or you'd compromise on W/R. i suspect that the folks offering 10kW/R are making it up elsewhere, like 50sf/R averaged over their facility. (this makes for a nice-sounding W/R number.) i know how to cool 200W/SF but i do not know how to cool 333W/SF unless everything in the rack is liquid cooled or unless the forced air is bottom->top and the cabinet is completely enclosed and the doors are never opened while the power is on. If you have water for the racks: http://www.knuerr.com/web/en/index_e.html?products/miracel/cooltherm/cooltherm.html~mainFrame (there are other vendors too, of course) The CRAY bid for the DARPA contract also has some interesting cooling solutions as I recall, but that is a longer way out.
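The floor-loading figures in the message above are straightforward division; a sketch of the trade-off between rack power budget and footprint, using the thread's own numbers:

```python
def watts_per_sqft(kw_per_rack, sqft_per_rack):
    """Average floor loading implied by a rack power budget and footprint."""
    return kw_per_rack * 1000.0 / sqft_per_rack

# The same 10 kW rack over two different footprints:
print(round(watts_per_sqft(10, 30)))   # → 333 W/sf -- the hard-to-cool case
print(round(watts_per_sqft(10, 50)))   # → 200 W/sf -- the air-coolable case
```

This is why "10kW/R" marketing can be honest and misleading at once: the facility may hit the rack number only by averaging it over 50 sf of aisleway per rack.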
Re: Colocation in the US.
Brandon Galbraith wrote: On 1/24/07, Mike Lyon <[EMAIL PROTECTED]> wrote: I think if someone finds a workable non-conductive cooling fluid that would probably be the best thing. I fear the first time someone is working near their power outlets and water starts squirting, flooding and electrocuting everyone and everything. -Mike http://en.wikipedia.org/wiki/Mineral_oil http://www.spraycool.com/technology/index.asp
RE: Cable-Tying with Waxed Twine
On Wed, 24 Jan 2007, Chris Cahill wrote: > > On another off topic note, does anyone know the origin of including > mints with telco rack gear? I often see this in rack screw bags, > shelves, adaptors, etc.. when you get stuck in a DC all damned night you get stinky breath, it's a hint from your 'friends'... :)
Re: Cable-Tying with Waxed Twine
> (in this day and age of training courses, that probably means finding someone over the age of 35). Also you could ask your friendly local full license, old school radio ham etc etc... It's a dying skill, not because it isn't good, but because it takes training/practice and time. Tiewraps (Zip ties) are cheap, quick and require little (if any) training. When I sat my ham license, tying cables wasn't a component of the course. :) Though of course, many older-school licensees are probably from telco or professional RF backgrounds. (We won't mention how many years _under_ the average age I am...) The other thing I found interesting; The use of Zip Ties on Copper Cabling is frowned upon by BICSI. Velcro preferred. Something to do with the compression on a twisted-pair cable caused by over-tight nylon cable ties screwing with their twist rates, and thus changing their Crosstalk characteristics... Mark. (Sporting the scars from poorly trimmed cable ties!)
Re: Cisco Security Advisory: Crafted IP Option Vulnerability
On 1/24/07, Gadi Evron <[EMAIL PROTECTED]> wrote: How many OPK's are being released today.. anyone? Ovulation Predictor Kits? OEM Preinstallation Kits? -dre
RE: Cable-Tying with Waxed Twine
> -Original Message- > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of > Steve Rubin > Sent: Wednesday, January 24, 2007 4:50 PM > To: nanog@merit.edu > Subject: Re: Cable-Tying with Waxed Twine > > > Dan Mahoney, System Admin wrote: > > > > Hey all, > > > > This seems a wee bit off topic, but definitely relates to network > > operations (somewhere below layer 1) and I can't think of a better > > place to ask. > > > > Upon leaving a router at telx and asking one of their techs to plug > in > > the equipment for me, I came back to find all my cat5 cables neatly > > tied with some sort of waxed twine, using an interesting looping knot > > pattern that repeated every six inches or so using a single piece of > > string. For some reason, I found this trick really cool. > > > > I have tried googling for the method, (it's apparently standard, I've > > seen it in play elsewhere), and for the type of twine, but had little > > luck. I was wondering if any of the gurus out there would care to > > share what this knot-pattern is actually called, and/or if there's a > > (illustrated) howto somewhere? > > > > -Dan "Tired of getting scratched up by jagged cable ties" Mahoney > > > > > > Best site I have seen so far: > http://www.dairiki.org/hammond/cable-lacing-howto/ I have recently fallen in love with lacing. It is definitely a very clean method of securing cables, and is an art form that seems to be dying with old telco guys. There are a couple of different stitches, including the Chicago and Kansas City stitch. The best cord to use is a 6-ply poly lacing cord that can be purchased from Western Filament, Inc., part # 9PRT125W. I believe that it is about $7.00 per half-pound roll, with a $50 minimum order. Check out chapter 5 of the following Qwest technical publication for details on how to tie the knots. http://www.qwest.com/techpub/77350/77350.pdf On another off topic note, does anyone know the origin of including mints with telco rack gear? 
I often see this in rack screw bags, shelves, adaptors, etc.. -Chris
Re: Cable-Tying with Waxed Twine
Dan Mahoney, System Admin wrote: Upon leaving a router at telx and asking one of their techs to plug in the equipment for me, I came back to find all my cat5 cables neatly tied with some sort of waxed twine, using an interesting looping knot pattern that repeated every six inches or so using a single piece of string. For some reason, I found this trick really cool. As others have already indicated (and with some good links) it's cable lacing. For how-tos, find anyone that has done a recognised apprenticeship in electrical, telecommunications, RF, or "multiskill" (electrical/electromechanical/mechanical) and ask them to teach you (in this day and age of training courses, that probably means finding someone over the age of 35). Also you could ask your friendly local full license, old school radio ham etc etc... It's a dying skill, not because it isn't good, but because it takes training/practice and time. Tiewraps (Zip ties) are cheap, quick and require little (if any) training. Regards, Mat
Re: Cable-Tying with Waxed Twine
Here's some nice lacing on our FLM150 rack: http://fiveforty.net/mux/Picture_010.jpg http://fiveforty.net/mux/Picture_013.jpg On Wed, Jan 24, 2007 at 07:30:06PM -0500, Dan Mahoney, System Admin wrote: > > Hey all, > > This seems a wee bit off topic, but definitely relates to network > operations (somewhere below layer 1) and I can't think of a better place > to ask. > > Upon leaving a router at telx and asking one of their techs to plug in the > equipment for me, I came back to find all my cat5 cables neatly tied with > some sort of waxed twine, using an interesting looping knot pattern that > repeated every six inches or so using a single piece of string. For some > reason, I found this trick really cool. > > I have tried googling for the method, (it's apparently standard, I've seen > it in play elsewhere), and for the type of twine, but had little luck. I > was wondering if any of the gurus out there would care to share what this > knot-pattern is actually called, and/or if there's a (illustrated) howto > somewhere? > > -Dan "Tired of getting scratched up by jagged cable ties" Mahoney > > -- > > Dan Mahoney > Techie, Sysadmin, WebGeek > Gushi on efnet/undernet IRC > ICQ: 13735144 AIM: LarpGM > Site: http://www.gushi.org > ---
Re: Cable-Tying with Waxed Twine
> Return-path: <[EMAIL PROTECTED]> > Upon leaving a router at telx and asking one of their techs to plug in the > equipment for me, I came back to find all my cat5 cables neatly tied with > some sort of waxed twine it is called "laced." very common among telephants. when you leave the colo, you will only be known by your cable dress. randy
Re: Cable-Tying with Waxed Twine
I order it from www.tecratools.com, you can also get the lacing needles and everything else you might need: A somewhat decent resource: http://www.tecratools.com/pages/tecalert/cable_lacing.html Needles and lace: http://www.tecratools.com/pages/telecom/cable_tools.html I have seen some Qwest and BellSouth technical documents which go into a little more detail of how they expect it to be done, but go find someone who's done any kind of cabling in a CO and they can teach you :) -- Tim On 1/24/07, William Yardley <[EMAIL PROTECTED]> wrote: On Wed, Jan 24, 2007 at 07:30:06PM -0500, Dan Mahoney, System Admin wrote: [...] > I came back to find all my cat5 cables neatly tied with some sort of > waxed twine, using an interesting looping knot pattern that repeated > every six inches or so using a single piece of string. [...] > I have tried googling for the method, (it's apparently standard, I've > seen it in play elsewhere), and for the type of twine, but had little > luck. The kind my vendor was able to get was flat (not the normal stuff). As far as I know, this stuff is usually surprisingly expensive and / or comes in large cases. You might just see if the people at your colo can give you a roll or two, or ask where they order theirs (last time I asked, they bought it by the case). I believe this is the stuff I have: http://www.edmo.com/index.php?module=products&func=display&prod_id=20352 I got it from a local outfit (Danbru - http://danbru.com - great Socal vendor) at ~ $35/roll, which seemed exorbitant to me. w
Re: Cable-Tying with Waxed Twine
On Wed, 24 Jan 2007, Dan Mahoney, System Admin wrote: equipment for me, I came back to find all my cat5 cables neatly tied with some sort of waxed twine, using an interesting looping knot pattern that repeated every six inches or so using a single piece of string. For some reason, I found this trick really cool. It's called "lacing" and it's been used by telephone guys forever. Find an older guy and he can probably teach you. ;) Try http://www.tecratools.com/pages/tecalert/cable_lacing.html as a starter. --- david raistrick http://www.netmeister.org/news/learn2quote.html [EMAIL PROTECTED] http://www.expita.com/nomime.html
Re: Cable-Tying with Waxed Twine
On Wed, Jan 24, 2007 at 07:30:06PM -0500, Dan Mahoney, System Admin wrote: [...] > I came back to find all my cat5 cables neatly tied with some sort of > waxed twine, using an interesting looping knot pattern that repeated > every six inches or so using a single piece of string. [...] > I have tried googling for the method, (it's apparently standard, I've > seen it in play elsewhere), and for the type of twine, but had little > luck. The kind my vendor was able to get was flat (not the normal stuff). As far as I know, this stuff is usually surprisingly expensive and / or comes in large cases. You might just see if the people at your colo can give you a roll or two, or ask where they order theirs (last time I asked, they bought it by the case). I believe this is the stuff I have: http://www.edmo.com/index.php?module=products&func=display&prod_id=20352 I got it from a local outfit (Danbru - http://danbru.com - great Socal vendor) at ~ $35/roll, which seemed exorbitant to me. w
Re: Cable-Tying with Waxed Twine
On Wed, Jan 24, 2007 at 07:30:06PM -0500, Dan Mahoney, System Admin wrote: > Upon leaving a router at telx and asking one of their techs to plug in the > equipment for me, I came back to find all my cat5 cables neatly tied with > some sort of waxed twine, using an interesting looping knot pattern that > repeated every six inches or so using a single piece of string. For some > reason, I found this trick really cool. > > I have tried googling for the method, (it's apparently standard, I've seen > it in play elsewhere), and for the type of twine, but had little luck. I > was wondering if any of the gurus out there would care to share what this > knot-pattern is actually called, and/or if there's a (illustrated) howto > somewhere? From your description, it sounds like you might be describing a series of half hitches. I don't know if it has a more specific title than that. If you wanted to create it on (say) a vertical bundle, you just pass the line around the back of the bundle then put the working end between the line and the bundle, and tighten by pulling away from the knots you've already tied. Repeat this over and over up (or down) the bundle to get your nice pattern happening. A benefit of this knot is that if you pull the working end towards the knots you've already tied, the knot will slide back, so you can tie each knot quickly then pull it back to the right position, so you get a nice even run of loops. You'll need to secure each end of the line with something that can stand tension at a sharp angle. A quick examination of pikiwedia's knots list suggests something like an icicle hitch or rolling hitch, but they might be a bit tricky to tie in tight spaces. I've just tried two half hitches on a broomstick and it doesn't hold too badly, but I wouldn't guarantee it'll be safe long term. As to the line to use, I'd imagine that an office supplies store would probably have a range of possibilities. 
- Matt -- "I have a cat, so I know that when she digs her very sharp claws into my chest or stomach it's really a sign of affection, but I don't see any reason for programming languages to show affection with pain." -- Erik Naggum, comp.lang.lisp
Re: Cable-Tying with Waxed Twine
Dan Mahoney, System Admin wrote: Hey all, This seems a wee bit off topic, but definitely relates to network operations (somewhere below layer 1) and I can't think of a better place to ask. Upon leaving a router at telx and asking one of their techs to plug in the equipment for me, I came back to find all my cat5 cables neatly tied with some sort of waxed twine, using an interesting looping knot pattern that repeated every six inches or so using a single piece of string. For some reason, I found this trick really cool. I have tried googling for the method, (it's apparently standard, I've seen it in play elsewhere), and for the type of twine, but had little luck. I was wondering if any of the gurus out there would care to share what this knot-pattern is actually called, and/or if there's a (illustrated) howto somewhere? -Dan "Tired of getting scratched up by jagged cable ties" Mahoney Best site I have seen so far: http://www.dairiki.org/hammond/cable-lacing-howto/
RE: Cable-Tying with Waxed Twine
It's called cable lacing... And CO guys have done it forever. Looks really pretty, but it's a pain in the butt to do. :) And sucks if you have to rip a cable out to replace things. Other than that, check out: http://www.dairiki.org/hammond/cable-lacing-howto/ Cheers, Scott PS. A really good pair of flush cuts (wire snips, but not the "diamond-cut" ones) will help with the tie wraps too! -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Dan Mahoney, System Admin Sent: Wednesday, January 24, 2007 7:30 PM To: nanog@merit.edu Subject: Cable-Tying with Waxed Twine Hey all, This seems a wee bit off topic, but definitely relates to network operations (somewhere below layer 1) and I can't think of a better place to ask. Upon leaving a router at telx and asking one of their techs to plug in the equipment for me, I came back to find all my cat5 cables neatly tied with some sort of waxed twine, using an interesting looping knot pattern that repeated every six inches or so using a single piece of string. For some reason, I found this trick really cool. I have tried googling for the method, (it's apparently standard, I've seen it in play elsewhere), and for the type of twine, but had little luck. I was wondering if any of the gurus out there would care to share what this knot-pattern is actually called, and/or if there's a (illustrated) howto somewhere? -Dan "Tired of getting scratched up by jagged cable ties" Mahoney -- Dan Mahoney Techie, Sysadmin, WebGeek Gushi on efnet/undernet IRC ICQ: 13735144 AIM: LarpGM Site: http://www.gushi.org ---
Cable-Tying with Waxed Twine
Hey all, This seems a wee bit off topic, but definitely relates to network operations (somewhere below layer 1) and I can't think of a better place to ask. Upon leaving a router at telx and asking one of their techs to plug in the equipment for me, I came back to find all my cat5 cables neatly tied with some sort of waxed twine, using an interesting looping knot pattern that repeated every six inches or so using a single piece of string. For some reason, I found this trick really cool. I have tried googling for the method, (it's apparently standard, I've seen it in play elsewhere), and for the type of twine, but had little luck. I was wondering if any of the gurus out there would care to share what this knot-pattern is actually called, and/or if there's a (illustrated) howto somewhere? -Dan "Tired of getting scratched up by jagged cable ties" Mahoney -- Dan Mahoney Techie, Sysadmin, WebGeek Gushi on efnet/undernet IRC ICQ: 13735144 AIM: LarpGM Site: http://www.gushi.org ---
Re: Colocation in the US.
On 1/24/07, Deepak Jain <[EMAIL PROTECTED]> wrote: Speaking as the operator of at least one datacenter that was originally built to water cool mainframes... Water is not hard to deal with, but it has its own discipline, especially when you are dealing with lots of it (flow rates, algicide, etc). And there aren't lots of great manifolds to allow customer (joe-end user) service-able connections (like how many folks do you want screwing with DC power supplies/feeds without some serious insurance).. Once some standardization comes to this, and valves are built to detect leaks, etc... things will be good. DJ In the long run, I think this is going to solve a lot of problems, as cooling the equipment with a water medium is more effective than trying to pull the heat off of everything with air. But standardization is going to take a bit.
Re: Colocation in the US.
On 1/24/07, Mike Lyon <[EMAIL PROTECTED]> wrote: I think if someone finds a workable non-conductive cooling fluid that would probably be the best thing. I fear the first time someone is working near their power outlets and water starts squirting, flooding and electrocuting everyone and everything. -Mike http://en.wikipedia.org/wiki/Mineral_oil
Re: Colocation in the US.
I think if someone finds a workable non-conductive cooling fluid that would probably be the best thing. I fear the first time someone is working near their power outlets and water starts squirting, flooding and electrocuting everyone and everything. -Mike On 1/24/07, Brandon Galbraith <[EMAIL PROTECTED]> wrote: On 1/24/07, Deepak Jain <[EMAIL PROTECTED]> wrote: > > > Speaking as the operator of at least one datacenter that was originally > built to water cool mainframes... Water is not hard to deal with, but it > has its own discipline, especially when you are dealing with lots of it > (flow rates, algicide, etc). And there aren't lots of great manifolds to > allow customer (joe-end user) service-able connections (like how many > folks do you want screwing with DC power supplies/feeds without some > serious insurance).. > > Once some standardization comes to this, and valves are built to detect > leaks, etc... things will be good. > > DJ > In the long run, I think this is going to solve a lot of problems, as cooling the equipment with a water medium is more effective than trying to pull the heat off of everything with air. But standardization is going to take a bit.
Re: Colocation in the US.
Speaking as the operator of at least one datacenter that was originally built to water cool mainframes... Water is not hard to deal with, but it has its own discipline, especially when you are dealing with lots of it (flow rates, algicide, etc). And there aren't lots of great manifolds to allow customer (joe-end user) service-able connections (like how many folks do you want screwing with DC power supplies/feeds without some serious insurance).. Once some standardization comes to this, and valves are built to detect leaks, etc... things will be good. DJ Mike Lyon wrote: Paul brings up a good point. How long before we call a colo provider to provision a rack, power, bandwidth and a to/from connection in each rack to their water cooler on the roof? -Mike
Re: [cacti-announce] Cacti 0.8.6j Released (fwd)
On Wed, 24 Jan 2007, Mark Boolootian wrote: I see a reference in the response to RTG. RTG's claim to fame looks like speed. In comparison to RRDTOOL-based applications, RTG stores raw values rather than cooked averages, allowing for a great deal more flexibility in analysis. And you aren't limited to a temporally fixed window of data. That also means the speed of analysis becomes a function of the length of the period being analysed. RRD takes a good intermediate approach, storing full-resolution data for the most recent samples and averages for longer-period data. Some people also use a dual approach, where data is both stored in RRD for quick access to graphing (the day/month/year views network engineers here have liked to see since the days when MRTG came out) and in an SQL database for more detailed analysis on request (though of course your database also continues to grow indefinitely, unlike fixed-size RRD files). -- William Leibzon Elan Networks [EMAIL PROTECTED]
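The raw-vs-cooked trade-off William describes can be sketched in a few lines. This is illustrative only (not actual RTG or rrdtool code): RRD-style storage consolidates raw samples into coarser averages, RTG-style storage keeps every raw row.

```python
# Illustrative sketch of the storage trade-off: RRD-style consolidation
# vs RTG-style raw rows. Not code from either tool.

def consolidate(samples, factor):
    """Average each consecutive group of `factor` raw samples,
    as an RRA with AVERAGE consolidation would."""
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples) - factor + 1, factor)]

# One day of 5-minute samples (288 of them), kept raw (RTG-style):
raw = [100 + (i % 12) * 10 for i in range(288)]

# RRD-style: collapse to hourly averages (12 x 5min = 1 hour).
hourly = consolidate(raw, 12)

print(len(raw), len(hourly))   # 288 raw rows vs 24 cooked rows
# A brief 5-minute anomaly would survive in `raw` but be averaged away
# in `hourly` -- exactly the flexibility raw storage preserves.
```

The "dual approach" in the post amounts to keeping both: `hourly`-style files for cheap graphing, `raw`-style rows in SQL for on-request analysis.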
NANOG 39: Partial agenda posted
The agenda for the plenary sessions at NANOG 39 has been posted at http://nanog.org/mtg-0702/topics.html Times for the tutorial and BOF sessions, which will be held Monday and Tuesday afternoons, will be updated soon. See you in Toronto! (U.S. residents: don't forget your passports...) Steve Feldman PC Chair
Re: [cacti-announce] Cacti 0.8.6j Released (fwd)
On 1/24/2007 2:46 PM, Ray Burkholder wrote: > WMI requires Windows Authentication, and if one is running Linux tools, > there are issues. I haven't come across an easy way to get to WMI from > Linux yet. Anyone have any suggestions? I've been working on this for a while actually. WMI is WBEM, except that WMI uses DCOM as a transfer protocol instead of using HTTP like WBEM. The big problem for Linux is that there aren't any native implementations. However there are some interesting tools that provide gateway services that get around the problem. Part of the openpegasus tarball is a program called wmimapper that provides a WBEM-to-WMI gateway. Basically you send it WBEM queries with HTTP authentication etc, and it converts those into WMI requests. It runs on Windows (to generate the DCOM), and it's source-only so you'll need to compile it yourself (although IBM and HP also include older ports in their server monitoring software). I've been using it to pull Everest sensor data off Windows boxes into Cacti on Linux for a while. There are some problems with the whole thing, but it pretty much works. SNMP Informant has a WMI-SNMP gateway agent that makes some/most Windows data available through SNMP, which is handy. nsclient also provides access to some perfmon and static data through a custom agent/proxy protocol too. http://forums.cacti.net/viewtopic.php?t=11752 http://www.openpegasus.org/ http://www.snmp-informant.com/ http://nsclient.ready2run.nl/ -- Eric A. Hall  http://www.ehsco.com/  Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/
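For the curious, the "WBEM queries with HTTP authentication" step looks roughly like the sketch below: a CIM-XML EnumerateInstances request of the kind a gateway such as wmimapper accepts, following the DMTF CIM-operations-over-HTTP convention. The host, credentials, and class name are placeholders, and this has not been validated against a real wmimapper instance.

```python
# Hedged sketch: build a WBEM (CIM-XML over HTTP) EnumerateInstances
# request body and headers. Credentials and class are placeholders;
# untested against a live wmimapper gateway.
import base64

def build_wbem_request(namespace, classname):
    ns_elems = "".join('<NAMESPACE NAME="%s"/>' % part
                       for part in namespace.split("/"))
    body = (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<CIM CIMVERSION="2.0" DTDVERSION="2.0">'
        '<MESSAGE ID="1" PROTOCOLVERSION="1.0"><SIMPLEREQ>'
        '<IMETHODCALL NAME="EnumerateInstances">'
        '<LOCALNAMESPACEPATH>%s</LOCALNAMESPACEPATH>'
        '<IPARAMVALUE NAME="ClassName"><CLASSNAME NAME="%s"/></IPARAMVALUE>'
        '</IMETHODCALL></SIMPLEREQ></MESSAGE></CIM>' % (ns_elems, classname))
    headers = {
        "Content-Type": 'application/xml; charset="utf-8"',
        "CIMOperation": "MethodCall",
        "CIMMethod": "EnumerateInstances",
        "CIMObject": namespace,
        # plain HTTP basic auth, as the post describes:
        "Authorization": "Basic " + base64.b64encode(
            b"monitor:secret").decode("ascii"),
    }
    return headers, body

headers, body = build_wbem_request("root/cimv2", "Win32_OperatingSystem")
print(headers["CIMMethod"], "Win32_OperatingSystem" in body)
```

POSTing that to the gateway is what turns a plain HTTP client on Linux into a WMI consumer; the gateway handles the DCOM side on Windows.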
New APNIC IPv4 address ranges
Forwarding on for APNIC... Original Message Subject: [Apnic-announce] New APNIC IPv4 address ranges Date: Thu, 18 Jan 2007 16:00:03 +1000 From: [EMAIL PROTECTED] Reply-To: [EMAIL PROTECTED] To: [EMAIL PROTECTED] Dear colleagues APNIC received the following IPv4 address blocks from IANA in Jan 2007 and will be making allocations from these ranges in the near future: 116/8 APNIC 117/8 APNIC 118/8 APNIC 119/8 APNIC 120/8 APNIC APNIC has made this announcement to enable the Internet community to update network configurations, such as routing filters, where required. Routability testing of new prefixes will commence on Friday, January 19, 2007. The daily report will be published at the usual URL: http://www.ris.ripe.net/debogon/debogon.html For more information on the resources administered by APNIC, please see: http://www.apnic.net/db/ranges.html For information on the minimum allocation sizes within address ranges administered by APNIC, please see: http://www.apnic.net/db/min-alloc.html Kind regards Guangliang Pan  email: [EMAIL PROTECTED]  Resources Services Manager  sip: [EMAIL PROTECTED]  APNIC  phone: +61 7 3858 3188  http://www.apnic.net/  fax: +61 7 3858 3199
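For operators, "update network configurations, such as routing filters" mostly means removing these /8s from bogon filters before the routability testing starts. A hypothetical IOS-style sketch (the filter name and sequence numbers are invented for illustration; your filter will differ):

```
! Hypothetical example only: remove the newly allocated APNIC space
! (116/8 through 120/8) from a bogon prefix-list. The list name
! "bogons" and the sequence numbers are illustrative.
no ip prefix-list bogons seq 160 deny 116.0.0.0/6 le 32
no ip prefix-list bogons seq 170 deny 120.0.0.0/8 le 32
! If you filter with an access-list instead, delete the matching
! deny entries, e.g.:
! no access-list 101 deny ip 116.0.0.0 3.255.255.255 any
```

Note that 116/8-119/8 aggregate neatly into 116.0.0.0/6, which is why old bogon lists often carried them as a single entry.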
Re: [cacti-announce] Cacti 0.8.6j Released (fwd)
On 1/24/2007 3:05 PM, Paul Vixie wrote: > glibly said, sir. but i disastrously underestimated the amount of time > and money it would take to build BIND9. since i'm talking about a scalable > pluggable portable F/L/OSS framework that would serve disparate interests > and talk to devices that will never go to an snmp connectathon, i'm trying > to set a realistic goal. anyone who wants to convince me that it can be done > for less than what i'm saying will have to first show me their credentials, > second convince david conrad and jerry scharf. (after that, i'm all ears.) Trying to do a comprehensive monolith will certainly make it a 5-year process. It seems that such an effort is doomed from the start though (as you say, who would fund it?) so I'm not really sure why it would be offered up as the only available outcome. Taking a different approach, it wouldn't be that hard to develop the framework alone. The killer for all these things is in the widgets that hang off them, but if the framework were usable and the widgets were easy to write (say, documented better than BIND9's API for example), the users would take care of providing the widgets. Look at all the noobs writing plugins for cacti and spamassassin and... users will write the plugins if the framework is accessible. Don't give me a package that tries to provide everything; give me a daemon with inter-process messaging, event triggers, and an extensible OO inheritance model, and I'll do my own damn widgets... It wouldn't take five years to write that. It's a summer project. Some of the things I want in an NMS that I can't find in end-all-be-all monolithic packages:

  self-config stuff
    default polling cycle
    authentication
    data-storage interfaces
    etc.
  host/device information
    static info (hostname, etc)
    dynamic info (hardware inventory, software inventory, etc)
  browser interface
    MIB browser
    CIM browser
    others
  polling events
    ICMP
    SNMP GET
    WBEM
    script interface
    TCP connection interface
    etc.
  alarm events
    SNMP traps
    WBEM notifications
    syslog
    eventlog
    etc.
  action events
    alerts (mail, pager, whatever)
    run local script
    run remote script
    manipulate escalation interface
    event unanswered, chain to other event
    event cleared, chain to other event
  reporting
    browser meters (eg, watch this mib with realtime tachometer)
    long-term graphing
    trend analysis/reporting
    etc.

Really it comes down to having a framework in place that can be extended by end-user admins. IOW it's the section heads, not the list items. -- Eric A. Hall  http://www.ehsco.com/  Internet Core Protocols  http://www.oreilly.com/catalog/coreprot/
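The "daemon with event triggers, where users write the widgets" idea can be sketched as a skeleton in a few dozen lines. This is a hypothetical illustration of the architecture being argued for, not any existing NMS; all names are invented.

```python
# Hypothetical skeleton of the pluggable-framework idea: the core knows
# only about event routing and plugin registration; pollers/alerters are
# user-written "widgets". All names here are invented for illustration.

class Framework:
    def __init__(self):
        self.handlers = {}          # event name -> list of callbacks

    def on(self, event):
        """Decorator: register a widget for an event type."""
        def register(fn):
            self.handlers.setdefault(event, []).append(fn)
            return fn
        return register

    def emit(self, event, **data):
        """Trigger an event; the core never interprets the payload."""
        return [fn(**data) for fn in self.handlers.get(event, [])]

nms = Framework()

# End-user "widgets" -- the part users would write themselves:
@nms.on("poll.snmp")
def record_counter(host, oid, value):
    return ("stored", host, oid, value)

@nms.on("alarm.trap")
def page_oncall(host, trap):
    return ("paged", host, trap)

results = nms.emit("poll.snmp", host="rtr1", oid="ifHCInOctets.1", value=42)
print(results)  # [('stored', 'rtr1', 'ifHCInOctets.1', 42)]
```

The point of the sketch is the division of labor: the hard part (and the bulk of the code over time) lives in the decorated widget functions, not in the core, which is why an accessible framework plus community plugins can beat a monolith.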
Re: Colocation in the US.
Vendor S? :) tv - Original Message - From: "JC Dill" <[EMAIL PROTECTED]> Cc: Sent: Tuesday, January 23, 2007 4:11 PM Subject: Re: Colocation in the US. Robert Sherrard wrote: Who's getting more than 10kW per cabinet and metered power from their colo provider? I had a data center tour on Sunday where they said that the way they provide space is by power requirements. You state your power requirements, they give you enough rack/cabinet space to *properly* house gear that consumes that much power. If your gear is particularly compact then you will end up with more space than strictly necessary. It's a good way of looking at the problem, since the flipside of power consumption is the cooling problem. Too many servers packed in a small space (rack or cabinet) becomes a big cooling problem. jc
Re: Colocation in the US.
The current high-watt cooling technologies are definitely more expensive (much more). Also, a facility would still need traditional forced air to maintain the building climate. tv - Original Message - From: "Todd Glassey" <[EMAIL PROTECTED]> To: "Tony Varriale" <[EMAIL PROTECTED]>; Sent: Wednesday, January 24, 2007 2:09 PM Subject: Re: Colocation in the US. If the cooling is cheaper than the cost of the A/C or provides a backup, it's a no brainer. Todd Glassey -Original Message- From: Tony Varriale <[EMAIL PROTECTED]> Sent: Jan 24, 2007 11:20 AM To: nanog@merit.edu Subject: Re: Colocation in the US. I think the better questions are: when will customers be willing to pay for it? and how much? :) tv - Original Message - From: "Mike Lyon" <[EMAIL PROTECTED]> To: "Paul Vixie" <[EMAIL PROTECTED]> Cc: Sent: Wednesday, January 24, 2007 11:54 AM Subject: Re: Colocation in the US. Paul brings up a good point. How long before we call a colo provider to provision a rack, power, bandwidth and a to/from connection in each rack to their water cooler on the roof? -Mike On 24 Jan 2007 17:37:27 +, Paul Vixie <[EMAIL PROTECTED]> wrote: [EMAIL PROTECTED] (david raistrick) writes: > > I had a data center tour on Sunday where they said that the way > > they > > provide space is by power requirements. You state your power > > requirements, they give you enough rack/cabinet space to *properly* > > house gear that consumes that > > "properly" is open for debate here. ... It's possible to have a > facility built to properly power and cool 10kW+ per rack. Just that > most > colo facilities aren't built to that level. i'm spec'ing datacenter space at the moment, so this is topical. at 10kW/R you'd either cool ~333W/SF at ~30sf/R, or you'd dramatically increase sf/R by requiring a lot of aisleway around every set of racks (~200sf per 4R cage) to get it down to 200W/SF, or you'd compromise on W/R.
i suspect that the folks offering 10kW/R are making it up elsewhere, like 50sf/R averaged over their facility. (this makes for a nice-sounding W/R number.) i know how to cool 200W/SF but i do not know how to cool 333W/SF unless everything in the rack is liquid cooled or unless the forced air is bottom->top and the cabinet is completely enclosed and the doors are never opened while the power is on. you can pay over here, or you can pay over there, but TANSTAAFL. for my own purposes, this means averaging ~6kW/R with some hotter and some colder, and cooling at ~200W/SF (which is ~30SF/R). the thing that's burning me right now is that for every watt i deliver, i've got to burn a watt in the mechanical to cool it all. i still want the rackmount server/router/switch industry to move to liquid which is about 70% more efficient (in the mechanical) than air as a cooling medium. > > It's a good way of looking at the problem, since the flipside of > > power > > consumption is the cooling problem. Too many servers packed in a > > small > > space (rack or cabinet) becomes a big cooling problem. > > Problem yes, but one that is capable of being engineered around > (who'd > have ever thought we could get 1000Mb/s through cat5, after all!) i think we're going to see a more Feynman-like circuit design where we're not dumping electrons every time we change states, and before that we'll see a standardized gozinta/gozoutta liquid cooling hookup for rackmount equipment, and before that we're already seeing Intel and AMD in a watts-per-computron race. all of that would happen before we'd air-cool more than 200W/SF in the average datacenter, unless Eneco's chip works out in which case all bets are off in a whole lotta ways. -- Paul Vixie
Re: [cacti-announce] Cacti 0.8.6j Released (fwd)
On Wed, Jan 24, 2007 at 08:34:19AM -0500, Jason LeBlanc wrote: > > I would say somewhere around 4000 network interfaces (6-8 stats per int) > and around 1000 servers (8-10 stats per server) we started seeing > problems, both with navigation in the UI and with stats not reliably > updating. I did not try that poller, perhaps it's worth trying it again > using it. I will also say this was about 2 years ago, I think the box > it was running on was a dual P3-1000 with a raid 10 using 6 drives (10k > rpm I think). > > After looking for 'the ideal' tool for many years, it still amazes me > that no one has built it. Bulk gets, scalable schema and good > portal/UI. RTG is better than MRTG, but the config/db/portal are still > lacking. So, i've been the caretaker of a few different snmp pollers over a few years, as well as done some database foo (250m+ rows/day of data), and these things interrelate in a number of ways. First start with the polling: you need to do bulkget/bulkwalk of the various mibs to collect the data in a reasonable way, timestamp it all (either internally before you "cook" the data), poll frequently enough to detect spikes (including inaccurate spikes and backwards/missing counter bugs), etc.. Take a simple set of data you might want to collect:

  router interfaces (mib)
    up/down
    in/out octets, in/out packets, in errors/out drops
    speed (ifMIB too?)
  ifMIB (64-bit counters, but only sometimes)
    description
    speed (interface mib too?)
  mpls? ldp? te? paths?
  mac accounting?

then you get into: do you store the raw data you collect, with markers for snmp timeouts, or just a 5 min calculation/sample? (this relates to the above 250m rows/day) how do you define your schema? how long does it take to insert/index/whatnot the data? how to handle ifindex moves (not just one vendor too, don't forget that)? how do you match that link to a customer for billing? who gets what reports? engineering reports too? provisioning link-in?
tie to ip address db (interface ip<->customer mapping)? the list goes on and on; this is just part of it, let alone any possible tracking of assets/hardware, let alone proactive network monitoring (tying those traps/walks to the internal ping(er)) to passive network monitoring, etc. this is a huge burden to figure it all out, implement and then monitor/operate 24x7. miss enough samples or data and you end up billing too little. this is why most folks have either cooked their own, or use some expensive suite of tools, leaving just a little bit of other stuff out there. in a lot of ways, just buying a ge/10ge and paying some alternate price for it may be cheaper than a burstable rate, as it could reduce a lot of this extra cost. i remember hearing that it cost telcos more to count/track the calls to give you a detailed bill than for the call itself. this is why flat-rate is nearly king these days (in the us at least). - jared -- Jared Mauch | pgp key available via finger from [EMAIL PROTECTED] clue++; | http://puck.nether.net/~jared/ My statements are only mine.
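The "backwards/missing counter" bookkeeping Jared mentions is a classic source of the billing errors he warns about. A minimal sketch (an invented helper, not code from RTG/MRTG/Cacti) of turning two raw SNMP counter samples into a rate while handling 64-bit counter wrap:

```python
# Illustrative helper (not from any of the tools discussed): compute a
# rate from two (timestamp, counter) samples, handling 64-bit counter
# wrap, and dropping samples that look like clock skew or a device
# reset rather than billing them.

COUNTER64_MAX = 2**64

def rate(prev, curr, max_rate=None):
    """prev/curr are (unix_timestamp, counter_value) samples."""
    ts0, c0 = prev
    ts1, c1 = curr
    elapsed = ts1 - ts0
    if elapsed <= 0:
        return None                 # clock skew / duplicate sample: drop
    delta = c1 - c0
    if delta < 0:
        delta += COUNTER64_MAX      # assume the 64-bit counter wrapped
    r = delta / elapsed
    if max_rate is not None and r > max_rate:
        return None                 # implausible spike: likely a reset
    return r

print(rate((0, 1000), (300, 2500)))                # 5.0 units/sec
print(rate((0, COUNTER64_MAX - 100), (300, 200)))  # 1.0: wrapped counter
```

Dropping (rather than guessing at) implausible deltas is the conservative choice when the numbers feed a bill: a missing sample under-bills slightly, but a mis-handled wrap can over-bill by orders of magnitude.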
Re: [cacti-announce] Cacti 0.8.6j Released (fwd)
[EMAIL PROTECTED] (Jeroen Massar) writes: > > ..., $5M over three years? spread out over 50 network owners that's > > $3K a month. i don't see that happening in a consolidation cycle like > > this one, but hope springs eternal. "give randy and hank the money, > > they'll take care of this for us once and for all." > > Heh, for that kind of money you can even convince me to do it ;) glibly said, sir. but i disastrously underestimated the amount of time and money it would take to build BIND9. since i'm talking about a scalable pluggable portable F/L/OSS framework that would serve disparate interests and talk to devices that will never go to an snmp connectathon, i'm trying to set a realistic goal. anyone who wants to convince me that it can be done for less than what i'm saying will have to first show me their credentials, second convince david conrad and jerry scharf. (after that, i'm all ears.) -- Paul Vixie
RE: [cacti-announce] Cacti 0.8.6j Released (fwd)
> > > Maybe this is overly naïve, but what about the ability to > auto-magically import and search various vendor SNMP/WMI > MIBs? I can think of 3 open source NMS that do a good job if > you set up all 3 to monitor the network, but they all overlap > and none of them really do a good job. Importing and searching MIBs is an interesting idea. However, for some mibs, like Cisco's QoS and Dial-Peer mibs, sometimes wrapper code has to be used to ferret out the appropriate groupings to use as logical entities for displaying. WMI requires Windows Authentication, and if one is running Linux tools, there are issues. I haven't come across an easy way to get to WMI from Linux yet. Anyone have any suggestions? -- Scanned for viruses and dangerous content at http://www.oneunified.net and is believed to be clean.
Re: Colocation in the US.
I think the better questions are: when will customers be willing to pay for it? and how much? :) tv - Original Message - From: "Mike Lyon" <[EMAIL PROTECTED]> To: "Paul Vixie" <[EMAIL PROTECTED]> Cc: Sent: Wednesday, January 24, 2007 11:54 AM Subject: Re: Colocation in the US. Paul brings up a good point. How long before we call a colo provider to provision a rack, power, bandwidth and a to/from connection in each rack to their water cooler on the roof? -Mike On 24 Jan 2007 17:37:27 +, Paul Vixie <[EMAIL PROTECTED]> wrote: [EMAIL PROTECTED] (david raistrick) writes: > > I had a data center tour on Sunday where they said that the way they > > provide space is by power requirements. You state your power > > requirements, they give you enough rack/cabinet space to *properly* > > house gear that consumes that > > "properly" is open for debate here. ... It's possible to have a > facility built to properly power and cool 10kW+ per rack. Just that > most > colo facilities aren't built to that level. i'm spec'ing datacenter space at the moment, so this is topical. at 10kW/R you'd either cool ~333W/SF at ~30sf/R, or you'd dramatically increase sf/R by requiring a lot of aisleway around every set of racks (~200sf per 4R cage) to get it down to 200W/SF, or you'd compromise on W/R. i suspect that the folks offering 10kW/R are making it up elsewhere, like 50sf/R averaged over their facility. (this makes for a nice-sounding W/R number.) i know how to cool 200W/SF but i do not know how to cool 333W/SF unless everything in the rack is liquid cooled or unless the forced air is bottom->top and the cabinet is completely enclosed and the doors are never opened while the power is on. you can pay over here, or you can pay over there, but TANSTAAFL. for my own purposes, this means averaging ~6kW/R with some hotter and some colder, and cooling at ~200W/SF (which is ~30SF/R).
the thing that's burning me right now is that for every watt i deliver, i've got to burn a watt in the mechanical to cool it all. i still want the rackmount server/router/switch industry to move to liquid which is about 70% more efficient (in the mechanical) than air as a cooling medium. > > It's a good way of looking at the problem, since the flipside of > > power > > consumption is the cooling problem. Too many servers packed in a > > small > > space (rack or cabinet) becomes a big cooling problem. > > Problem yes, but one that is capable of being engineered around (who'd > have ever thought we could get 1000Mb/s through cat5, after all!) i think we're going to see a more Feynman-like circuit design where we're not dumping electrons every time we change states, and before that we'll see a standardized gozinta/gozoutta liquid cooling hookup for rackmount equipment, and before that we're already seeing Intel and AMD in a watts-per-computron race. all of that would happen before we'd air-cool more than 200W/SF in the average datacenter, unless Eneco's chip works out in which case all bets are off in a whole lotta ways. -- Paul Vixie
RE: [cacti-announce] Cacti 0.8.6j Released (fwd)
Maybe this is overly naïve, but what about the ability to auto-magically import and search various vendor SNMP/WMI MIBs? I can think of 3 open source NMS that do a good job if you set up all 3 to monitor the network, but they all overlap and none of them really do a good job. I am also using a closed-source NMS at work that does little more than minimal on-system agent monitoring of Windows/Linux based servers (disk space, CPU, memory utilization). Good graphing, good alerts, good SNMP integration, granularity, and escalation, as well as pretty executive reports to keep PHB's happy (and that display the system as 5 9's uptime no matter how many times the mail server crashed!). The reason why the open-source tools don't work is a lack of comprehensive coverage of Cisco, third-party network kit, Linux and Windows. It just doesn't quite "do it all". The reason why the closed-source tool didn't work (in my mind) is that it just doesn't have the flexibility to deal with anything other than what it's expecting. I've submitted a few dozen support tickets with them (and they will remain nameless) simply because of a lack of SNMP knowledge on their part. Please forgive me for all above M$ specific references, I work in a MS and *IX environment. Andrew D Kirch - All Things IT Office: 317-755-0202 "si hoc legere scis nimium eruditiones habes." > -Original Message- > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of > Ray Burkholder > Sent: Wednesday, January 24, 2007 1:12 PM > To: nanog@merit.edu > Subject: RE: [cacti-announce] Cacti 0.8.6j Released (fwd) > > > I see a reference in the response to RTG. RTG's claim to fame looks like > speed. > > I've done some work with Cricket and have figured out a way to get at its > schema.
I've been looking at mating Cricket's 'getter' and schema with > Drraw and genDevConfig tools and putting a Mason-based HTML wrapper around > the whole thing so people can pick and choose the components of charts > they > want to see (per chart), (per page). And by filling in simple web forms, > it > would be easy to generate command lines for genDevConfig to go out and > create the customized SNMP queries that are needed for Dial-Peers, Cisco's > Quality of Service, etc. > > Would anyone be interested in such a contraption? > > > -Original Message- > > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On > > Behalf Of Paul Vixie > > Sent: Wednesday, January 24, 2007 13:43 > > To: nanog@merit.edu > > Subject: Re: [cacti-announce] Cacti 0.8.6j Released (fwd) > > > > > > [EMAIL PROTECTED] (Jason LeBlanc) writes: > > > > > After looking for 'the ideal' tool for many years, it still > > amazes me > > > that no one has built it. Bulk gets, scalable schema and > > good portal/UI. > > > RTG is better than MRTG, but the config/db/portal are still lacking. > > > > if funding were available, i know some developers we could > > hire to build the ultimate scalable pluggable network F/L/OSS > > management/monitoring system. if funding's not available > > then we're depending on some combination of hobbyists (who've > > usually got rent to pay, limiting their availability for this > > work) and in-house toolmakers at network owners (who've > > usually got other work to do, or who would be under pressure > > to monetize/license/patent the results if That Much Money was > > spent in ways that could otherwise directly benefit their > > competitors.) > > > > "been there, done that, got the t-shirt." is there funding > > available yet? > > like, $5M over three years? spread out over 50 network > > owners that's ~$3K a month. i don't see that happening in a > > consolidation cycle like this one, but hope springs eternal.
> > "give randy and hank the money, they'll take care of this for > > us once and for all." > > -- > > Paul Vixie
Re: [cacti-announce] Cacti 0.8.6j Released (fwd)
> I see a reference in the response to RTG. RTG's claim to fame looks like > speed. In comparison to RRDTOOL-based applications, RTG stores raw values rather than cooked averages, allowing for a great deal more flexibility in analysis. And you aren't limited to a temporally fixed window of data.
RE: [cacti-announce] Cacti 0.8.6j Released (fwd)
I see a reference in the response to RTG. RTG's claim to fame looks like speed. I've done some work with Cricket and have figured out a way to get at its schema. I've been looking at mating Cricket's 'getter' and schema with Drraw and genDevConfig tools and putting a Mason-based HTML wrapper around the whole thing so people can pick and choose the components of charts they want to see (per chart), (per page). And by filling in simple web forms, it would be easy to generate command lines for genDevConfig to go out and create the customized SNMP queries that are needed for Dial-Peers, Cisco's Quality of Service, etc. Would anyone be interested in such a contraption? > -Original Message- > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On > Behalf Of Paul Vixie > Sent: Wednesday, January 24, 2007 13:43 > To: nanog@merit.edu > Subject: Re: [cacti-announce] Cacti 0.8.6j Released (fwd) > > > [EMAIL PROTECTED] (Jason LeBlanc) writes: > > > After looking for 'the ideal' tool for many years, it still > amazes me > > that no one has built it. Bulk gets, scalable schema and > good portal/UI. > > RTG is better than MRTG, but the config/db/portal are still lacking. > > if funding were available, i know some developers we could > hire to build the ultimate scalable pluggable network F/L/OSS > management/monitoring system. if funding's not available > then we're depending on some combination of hobbyists (who've > usually got rent to pay, limiting their availability for this > work) and in-house toolmakers at network owners (who've > usually got other work to do, or who would be under pressure > to monetize/license/patent the results if That Much Money was > spent in ways that could otherwise directly benefit their > competitors.) > > "been there, done that, got the t-shirt." is there funding > available yet? > like, $5M over three years? spread out over 50 network > owners that's ~$3K a month.
i don't see that happening in a > consolidation cycle like this one, but hope springs eternal. > "give randy and hank the money, they'll take care of this for > us once and for all." > -- > Paul Vixie
Re: [cacti-announce] Cacti 0.8.6j Released (fwd)
Paul Vixie wrote: > [EMAIL PROTECTED] (Jason LeBlanc) writes: > >> After looking for 'the ideal' tool for many years, it still amazes me >> that no one has built it. Bulk gets, scalable schema and good portal/UI. >> RTG is better than MRTG, but the config/db/portal are still lacking. [..] > "been there, done that, got the t-shirt." is there funding available yet? > like, $5M over three years? spread out over 50 network owners that's ~$3K > a month. i don't see that happening in a consolidation cycle like this one, > but hope springs eternal. "give randy and hank the money, they'll take care > of this for us once and for all." Heh, for that kind of money you can even convince me to do it ;) Greets, Jeroen (dreams about a long holiday after finishing it ;)
Re: Colocation in the US.
Paul brings up a good point. How long before we call a colo provider to provision a rack, power, bandwidth and a to/from connection in each rack to their water cooler on the roof? -Mike On 24 Jan 2007 17:37:27 +, Paul Vixie <[EMAIL PROTECTED]> wrote: [EMAIL PROTECTED] (david raistrick) writes: > > I had a data center tour on Sunday where they said that the way they > > provide space is by power requirements. You state your power > > requirements, they give you enough rack/cabinet space to *properly* > > house gear that consumes that > > "properly" is open for debate here. ... It's possible to have a > facility built to properly power and cool 10kW+ per rack. Just that most > colo facilities aren't built to that level. i'm spec'ing datacenter space at the moment, so this is topical. at 10kW/R you'd either cool ~333W/SF at ~30sf/R, or you'd dramatically increase sf/R by requiring a lot of aisleway around every set of racks (~200sf per 4R cage) to get it down to 200W/SF, or you'd compromise on W/R. i suspect that the folks offering 10kW/R are making it up elsewhere, like 50sf/R averaged over their facility. (this makes for a nice-sounding W/R number.) i know how to cool 200W/SF but i do not know how to cool 333W/SF unless everything in the rack is liquid cooled or unless the forced air is bottom->top and the cabinet is completely enclosed and the doors are never opened while the power is on. you can pay over here, or you can pay over there, but TANSTAAFL. for my own purposes, this means averaging ~6kW/R with some hotter and some colder, and cooling at ~200W/SF (which is ~30SF/R). the thing that's burning me right now is that for every watt i deliver, i've got to burn a watt in the mechanical to cool it all. i still want the rackmount server/router/switch industry to move to liquid which is about 70% more efficient (in the mechanical) than air as a cooling medium.
> > It's a good way of looking at the problem, since the flipside of power > > consumption is the cooling problem. Too many servers packed in a small > > space (rack or cabinet) becomes a big cooling problem. > > Problem yes, but one that is capable of being engineered around (who'd > have ever thought we could get 1000Mb/s through cat5, after all!) i think we're going to see a more Feynman-like circuit design where we're not dumping electrons every time we change states, and before that we'll see a standardized gozinta/gozoutta liquid cooling hookup for rackmount equipment, and before that we're already seeing Intel and AMD in a watts-per-computron race. all of that would happen before we'd air-cool more than 200W/SF in the average datacenter, unless Eneco's chip works out in which case all bets are off in a whole lotta ways. -- Paul Vixie
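Vixie's density figures reduce to simple arithmetic (kW per rack spread over square feet per rack), which is worth checking against the numbers in the post:

```python
# Sanity-check the densities quoted in the thread: watts per square
# foot as a function of kW per rack and square feet allotted per rack.

def watts_per_sf(kw_per_rack, sf_per_rack):
    return kw_per_rack * 1000 / sf_per_rack

print(round(watts_per_sf(10, 30)))       # ~333 W/SF: 10kW/R at ~30sf/R
print(round(watts_per_sf(10, 50)))       # 200 W/SF: same rack over 50sf/R
print(round(watts_per_sf(6, 30)))        # 200 W/SF: the ~6kW/R compromise
# The "4R cage at ~200sf" case: 4 racks x 10kW over 200sf
print(round(watts_per_sf(4 * 10, 200)))  # 200 W/SF
```

All three paths land on the same ~200W/SF ceiling he says he knows how to cool, which is the point: the 10kW/R marketing number only works by averaging in extra floor space.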
IAB Workshop on Routing and Addressing [Was: Re: Google wants to be yo ur Internet]
-- Jason LeBlanc <[EMAIL PROTECTED]> wrote: >...Some days it kills me that v6 >is still not really viable, I keep asking providers where they're >at with it. Their most common complaint is that the operating >systems don't support it yet. They mention primarily Windows since >that is what is most implemented, not in the colo world but what the >users have. I suggested they offer a service that somehow translates >(heh, shifting the pain to them) v4 to v6 for their customers to move >it along. > If you *really* want to know where things stand with IPv6, then you need to read this: Report from the IAB Workshop on Routing and Addressing http://www.ietf.org/internet-drafts/draft-iab-raws-report-00.txt - ferg -- "Fergie", a.k.a. Paul Ferguson Engineering Architecture for the Internet fergdawg(at)netzero.net ferg's tech blog: http://fergdawg.blogspot.com/
Re: [cacti-announce] Cacti 0.8.6j Released (fwd)
[EMAIL PROTECTED] (Jason LeBlanc) writes: > After looking for 'the ideal' tool for many years, it still amazes me > that no one has built it. Bulk gets, scalable schema and good portal/UI. > RTG is better than MRTG, but the config/db/portal are still lacking. if funding were available, i know some developers we could hire to build the ultimate scalable pluggable network F/L/OSS management/monitoring system. if funding's not available then we're depending on some combination of hobbyists (who've usually got rent to pay, limiting their availability for this work) and in-house toolmakers at network owners (who've usually got other work to do, or who would be under pressure to monetize/license/patent the results if That Much Money was spent in ways that could otherwise directly benefit their competitors.) "been there, done that, got the t-shirt." is there funding available yet? like, $5M over three years? spread out over 50 network owners that's ~$3K a month. i don't see that happening in a consolidation cycle like this one, but hope springs eternal. "give randy and hank the money, they'll take care of this for us once and for all." -- Paul Vixie
Re: Super Bowl Sunday February 4th
If there is nothing going on, does anyone know of a good sports bar to watch the game at? Ron Muir wrote: Is there anything organized for the Super Bowl on Sunday Night? The last time Super Bowl fell on a NANOG (NANOG 15) Sunday several of the sponsors got together and had a Super Bowl party at the hotel. Does anyone know of anything this time? Ron Muir
Re: Colocation in the US.
[EMAIL PROTECTED] (david raistrick) writes: > > I had a data center tour on Sunday where they said that the way they > > provide space is by power requirements. You state your power > > requirements, they give you enough rack/cabinet space to *properly* > > house gear that consumes that > > "properly" is open for debate here. ... It's possible to have a > facility built to properly power and cool 10kW+ per rack. Just that most > colo facilties aren't built to that level. i'm spec'ing datacenter space at the moment, so this is topical. at 10kW/R you'd either cool ~333W/SF at ~30sf/R, or you'd dramatically increase sf/R by requiring a lot of aisleway around every set of racks (~200sf per 4R cage) to get it down to 200W/SF, or you'd compromise on W/R. i suspect that the folks offering 10kW/R are making it up elsewhere, like 50sf/R averaged over their facility. (this makes for a nice-sounding W/R number.) i know how to cool 200W/SF but i do not know how to cool 333W/SF unless everything in the rack is liquid cooled or unless the forced air is bottom->top and the cabinet is completely enclosed and the doors are never opened while the power is on. you can pay over here, or you can pay over there, but TANSTAAFL. for my own purposes, this means averaging ~6kW/R with some hotter and some colder, and cooling at ~200W/SF (which is ~30SF/R). the thing that's burning me right now is that for every watt i deliver, i've got to burn a watt in the mechanical to cool it all. i still want the rackmount server/router/switch industry to move to liquid which is about 70% more efficient (in the mechanical) than air as a cooling medium. > > It's a good way of looking at the problem, since the flipside of power > > consumption is the cooling problem. Too many servers packed in a small > > space (rack or cabinet) becomes a big cooling problem. > > Problem yes, but one that is capable of being engineered around (who'd > have ever thought we could get 1000Mb/s through cat5, after all!) 
i think we're going to see a more Feynman-like circuit design where we're not dumping electrons every time we change states, and before that we'll see a standardized gozinta/gozoutta liquid cooling hookup for rackmount equipment, and before that we're already seeing Intel and AMD in a watts-per-computron race. all of that would happen before we'd air-cool more than 200W/SF in the average datacenter, unless Eneco's chip works out in which case all bets are off in a whole lotta ways. -- Paul Vixie
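The density figures traded back and forth above all reduce to one ratio, watts per rack over square feet per rack. A quick sketch of that arithmetic (all figures are taken from the posts; the function name is mine):

```python
# Back-of-the-envelope check of the cooling densities discussed above.
# Illustrative only; the figures come from the posts, not from measurement.

def watts_per_sf(watts_per_rack, sf_per_rack):
    """Cooling density implied by a rack's power draw and its floor footprint."""
    return watts_per_rack / sf_per_rack

# 10 kW racks at ~30 sq ft each: ~333 W/sq ft -- past what air cooling handles.
print(round(watts_per_sf(10_000, 30)))  # 333

# Averaging the same 10 kW rack over 50 sq ft of facility gets back to 200 W/sq ft,
# which is how a "10kW/R" offering can still be air-coolable.
print(round(watts_per_sf(10_000, 50)))  # 200

# Vixie's own target: ~6 kW/rack at ~30 sq ft/rack is ~200 W/sq ft.
print(round(watts_per_sf(6_000, 30)))   # 200
```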
Re: Google wants to be your Internet
On 24-Jan-2007, at 10:01, Jamie Bowden wrote: Some days it kills me that v6 is still not really viable, I keep asking providers where they're at with it. Their most common complaint is that the operating systems don't support it yet. They mention primarily Windows since that is what is most implemented, not in the colo world but what the users have. Windows XP SP2 has IPv6. It isn't enabled by default, but it's not difficult to do. Apparently Vista does do IPv6 by default out of the box, but I don't have a Vista system to play with yet to confirm this. I might argue that, legacy systems and hardware aside, the main reason that v6 might be considered non-viable these days is the lack of customers willing to pay for it. I don't think the viability of v6 has been blocking on operating systems or router hardware for quite some time, now. It's still a problem for many operational support systems, but arguably that would change rapidly if there was some prospect of revenue. Joe
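On the "operating systems don't support it yet" complaint: whether a given host's stack actually has v6 enabled is easy to verify from userland. A minimal sketch using Python's stdlib socket module (assumes nothing beyond a standard Python install):

```python
import socket

# Was this Python built with IPv6 support at all?
print("IPv6 support compiled in:", socket.has_ipv6)

if socket.has_ipv6:
    # Creating (and immediately closing) an AF_INET6 socket succeeds only
    # if the kernel's v6 stack is actually available, not just compiled in.
    try:
        s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        s.close()
        print("IPv6 socket creation: ok")
    except OSError as e:
        print("IPv6 socket creation failed:", e)
```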
RE: Google wants to be your Internet
> -Original Message- > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On > Behalf Of Jason LeBlanc > Sent: Wednesday, January 24, 2007 8:40 AM > To: Roland Dobbins > Cc: NANOG > Subject: Re: Google wants to be your Internet > I hear you on the double, triple nat nightmare, I'm there > myself. I'm > working on rolling out VRFs to solve that problem, still > testing. The > nat complexities and bugs (nat translations losing their mind and > killing connectivity for important apps) are just too much > for some of > our customers, users, etc to deal with. Some days it kills > me that v6 > is still not really viable, I keep asking providers where they're at > with it. Their most common complaint is that the operating systems > don't support it yet. They mention primarily Windows since > that is what > is most implemented, not in the colo world but what the users > have. I > suggested they offer a service that somehow translates (heh, shifting > the pain to them) v4 to v6 for their customers to move it along. Windows XP SP2 has IPv6. It isn't enabled by default, but it's not difficult to do. Apparently Vista does do IPv6 by default out of the box, but I don't have a Vista system to play with yet to confirm this. Jamie Bowden -- "It was half way to Rivendell when the drugs began to take hold" Hunter S Tolkien "Fear and Loathing in Barad Dur" Iain Bowen <[EMAIL PROTECTED]>
Re: Google wants to be your Internet
On Jan 24, 2007, at 5:48 AM, <[EMAIL PROTECTED]> wrote: The whole address conservation mantra has turned out to be a lot of smoke and mirrors anyway. At the time, yes, this particular issue was overhyped, just as the routing-table-expansion issue was underhyped. As we move to an 'Internet of Things', however, it will become manifest. With regards to the perceived advantages and disadvantages of IPv6 as it is currently defined, there is a wide range of opinion on the subject. For many, the 'still-need-NAT-under-IPv6 vs. IPv6- eliminates-the-need-for-NAT' debate is of minor importance compared to more fundamental questions. --- Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice Technology is legislation. -- Karl Schroeder
RE: Google wants to be your Internet
> The problem is that you can't be sure that if you use RFC1918 > today you won't be bitten by it's non-uniqueness property in > the future. When you're asked to diagnose a fault with a > device with the IP address 192.168.1.1, and you've got an > unknown number of candidate devices using that address, you > really start to see the value in having world wide unique, > but not necessarily publically visible addressing. A lot of people who implemented RFC 1918 addressing in the past didn't actually read RFC 1918. They just heard the mantra of address conservation and learned that RFC 1918 defined something called "private" addresses. Then, without reading the RFC, they made assumptions in interpreting the meaning of "private". Now, many of those people or their successors have been bitten hard by problems created by using RFC 1918 addresses in networks which are not really private at all (where "private" means wholly unconnected from other IP networks). Those people now see the benefits of using truly globally unique registered addresses. The whole address conservation mantra has turned out to be a lot of smoke and mirrors anyway. The dotcom collapse followed by the telecom collapse shows that it was a sham argument based on the ridiculous theory that exponential growth of the network was really sustainable. Now we live in a time where there is no shortage of IP addresses. Even IPv4 addresses are not guaranteed to ever run out as IPv6 begins to be used for some of the drivers of network growth. IPv6 makes NAT obsolete because IPv6 firewalls can provide all the useful features of IPv4 NAT without any of the downsides. --Michael Dillon
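For anyone auditing gear for the overlap problem described above, the three RFC 1918 ranges can be matched mechanically. A minimal sketch using Python's stdlib ipaddress module (the helper name is mine, not from any tool mentioned here):

```python
import ipaddress

# The three private ranges defined in RFC 1918, section 3.
RFC1918_NETS = (
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
)

def is_rfc1918(addr: str) -> bool:
    """True if addr falls inside one of the RFC 1918 private ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918_NETS)

print(is_rfc1918("192.168.1.1"))  # True  -- the ambiguous address from the post
print(is_rfc1918("8.8.8.8"))      # False -- globally unique
print(is_rfc1918("172.32.0.1"))   # False -- just outside 172.16.0.0/12
```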
Re: Google wants to be your Internet
I hear you on the double, triple nat nightmare, I'm there myself. I'm working on rolling out VRFs to solve that problem, still testing. The nat complexities and bugs (nat translations losing their mind and killing connectivity for important apps) are just too much for some of our customers, users, etc to deal with. Some days it kills me that v6 is still not really viable, I keep asking providers where they're at with it. Their most common complaint is that the operating systems don't support it yet. They mention primarily Windows since that is what is most implemented, not in the colo world but what the users have. I suggested they offer a service that somehow translates (heh, shifting the pain to them) v4 to v6 for their customers to move it along. Roland Dobbins wrote: On Jan 24, 2007, at 4:58 AM, Mark Smith wrote: The problem is that you can't be sure that if you use RFC1918 today you won't be bitten by it's non-uniqueness property in the future. When you're asked to diagnose a fault with a device with the IP address 192.168.1.1, and you've got an unknown number of candidate devices using that address, you really start to see the value in having world wide unique, but not necessarily publically visible addressing. That's what I meant by the 'as long as one is sure one isn't buying trouble down the road' part. Having encountered problems with overlapping address space many times in the past, I'm quite aware of the pain, thanks. ;> RFC1918 was created for a reason, and it is used (and misused, we all understand that) today by many network operators for a reason. It is up to the architects and operators of networks to determine whether or not they should make use of globally-unique addresses or RFC1918 addresses on a case-by-case basis; making use of RFC1918 addressing is not an inherently stupid course of action, its appropriateness in any given situation is entirely subjective. 
--- Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice Technology is legislation. -- Karl Schroeder
Re: [cacti-announce] Cacti 0.8.6j Released (fwd)
I would say somewhere around 4000 network interfaces (6-8 stats per int) and around 1000 servers (8-10 stats per server) we started seeing problems, both with navigation in the UI and with stats not reliably updating. I did not try that poller; perhaps it's worth trying again with it. I will also say this was about 2 years ago, I think the box it was running on was a dual P3-1000 with a raid 10 using 6 drives (10k rpm I think). After looking for 'the ideal' tool for many years, it still amazes me that no one has built it. Bulk gets, scalable schema and good portal/UI. RTG is better than MRTG, but the config/db/portal are still lacking. Jon Lewis wrote: On Mon, 22 Jan 2007, Jason LeBlanc wrote: Anyone thats seen MRTG (simple, static) on a large network realizes that decoupling the graphing from the polling is necessary. The disk i/o is brutal. Cacti has a slick interface, but also doesn't scale all that well for large networks. I prefer RTG, though I haven't seen a nice interface for it, yet. How large did you have to get for cacti to "not scale"? Did you try the cactid poller [which is much faster than the standard poller]? -- Jon Lewis | I route Senior Network Engineer | therefore you are Atlantic Net| _ http://www.lewis.org/~jlewis/pgp for PGP public key_
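The "decouple the graphing from the polling" point above boils down to a producer/consumer split: one path collects samples, a separate one stores or renders them, so slow disk i/o never stalls the poll loop. A hypothetical sketch (the poll is a stub standing in for an SNMP bulk get; real tools like RTG use a SQL store):

```python
import queue
import threading
import time

# Poller thread pushes (timestamp, interface, value) samples into a queue;
# a separate writer thread drains the queue into storage. Neither blocks
# the other on disk i/o.
samples = queue.Queue()

def poll_once(interface):
    # Stub standing in for an SNMP GET of an interface counter.
    return (time.time(), interface, 42)

def poller(interfaces, rounds):
    for _ in range(rounds):
        for ifc in interfaces:
            samples.put(poll_once(ifc))

store = []  # stand-in for the database / RRD files

def writer(expected):
    for _ in range(expected):
        store.append(samples.get())

ifcs = ["eth0", "eth1"]
t1 = threading.Thread(target=poller, args=(ifcs, 3))
t2 = threading.Thread(target=writer, args=(len(ifcs) * 3,))
t1.start(); t2.start(); t1.join(); t2.join()
print(len(store))  # 6 samples collected, independent of write latency
```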
Re: DNS Query Question
Stephen Satchell wrote: From your description, I'd say there was a lot more work to be done first, unless they just don't have the people to do it right. I forgot, but when I talked to Rodney on the phone the other day he reminded me that DNS is recursive and that if Verizon with their *own* DNS servers can't resolve the records then it MIGHT not be DNS after all. Maybe this Sender Verify isn't related to DNS issues. -Dennis
Re: DNS Query Question
Stephen Satchell wrote: Is your customer using BIND? They are using their co-lo's, so I am unsure. What do the statistics tell you? This is a dumb user that I'm dealing with. No experience. Router to them means a police officer. How many DNS servers are handling the traffic? Two (2). Are they load-balanced? Unsure; they are on different subnets. Have the DNS servers been upgraded to handle more traffic? Does the customer segregate their authoritative servers from their recursive ones? (That one change right there improved my DNS reliability and serviceability by several orders of magnitude!) They don't own the servers. If they did I could easily fix this. I do know that their bandwidth provider has said that they do *tend* to have issues. From your description, I'd say there was a lot more work to be done first, unless they just don't have the people to do it right. Yup, I think this is why I am going down the managed session road. -Dennis
Re: Google wants to be your Internet
On Jan 24, 2007, at 4:58 AM, Mark Smith wrote: The problem is that you can't be sure that if you use RFC1918 today you won't be bitten by it's non-uniqueness property in the future. When you're asked to diagnose a fault with a device with the IP address 192.168.1.1, and you've got an unknown number of candidate devices using that address, you really start to see the value in having world wide unique, but not necessarily publically visible addressing. That's what I meant by the 'as long as one is sure one isn't buying trouble down the road' part. Having encountered problems with overlapping address space many times in the past, I'm quite aware of the pain, thanks. ;> RFC1918 was created for a reason, and it is used (and misused, we all understand that) today by many network operators for a reason. It is up to the architects and operators of networks to determine whether or not they should make use of globally-unique addresses or RFC1918 addresses on a case-by-case basis; making use of RFC1918 addressing is not an inherently stupid course of action, its appropriateness in any given situation is entirely subjective. --- Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice Technology is legislation. -- Karl Schroeder
Re: Google wants to be your Internet
On Wed, 24 Jan 2007 02:07:06 -0800 Roland Dobbins <[EMAIL PROTECTED]> wrote: > Of course I understand this, but I also understand that if one can > get away with RFC1918 addresses on a non-Internet-connected network, > it's not a bad idea to do so in and of itself; quite the opposite, in > fact, as long as one is sure one isn't buying trouble down the road. > The problem is that you can't be sure that if you use RFC1918 today you won't be bitten by its non-uniqueness property in the future. When you're asked to diagnose a fault with a device with the IP address 192.168.1.1, and you've got an unknown number of candidate devices using that address, you really start to see the value in having world wide unique, but not necessarily publicly visible addressing. -- "Sheep are slow and tasty, and therefore must remain constantly alert." - Bruce Schneier, "Beyond Fear"
Re: Google wants to be your Internet
On 23 Jan 2007, at 16:48, Sean Donelan wrote: Why is IP required, Because using something that works so well means less wheel reinvention. and even if you used IP for transport why must the meter identification be based on an IP address? Identification via IP address (exclusively) is bad. I'd argue that if you are looking to check the meter for consumption data and for problems, a store-and-forward message system which didn't depend on always-on connectivity would preserve enough address space to make it viable as well. -a
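The store-and-forward idea above can be sketched in a few lines: the meter identifies itself by a meter ID rather than an IP address, queues readings locally, and drains the queue only when an uplink happens to be available. A hedged, purely illustrative sketch (class and field names are mine):

```python
from collections import deque

# Hypothetical sketch of store-and-forward meter reporting: no always-on
# connectivity and no routable address per meter is assumed.
class Meter:
    def __init__(self, meter_id):
        self.meter_id = meter_id   # identity is a meter ID, not an IP address
        self.outbox = deque()      # readings queued locally until a link is up

    def record(self, kwh):
        self.outbox.append({"meter": self.meter_id, "kwh": kwh})

    def flush(self, uplink_ok):
        """Drain queued readings if the uplink is available; else keep them."""
        sent = []
        while uplink_ok and self.outbox:
            sent.append(self.outbox.popleft())
        return sent

m = Meter("M-001")
m.record(1.5)
m.record(2.0)
print(m.flush(uplink_ok=False))  # [] -- link down, readings stay queued
batch = m.flush(uplink_ok=True)  # link up: both readings go out
print(len(batch))                # 2
```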
Re: Google wants to be your Internet
On Jan 24, 2007, at 12:33 AM, <[EMAIL PROTECTED]> wrote: Just remember, IP addresses are *NOT* Internet addresses. They are Internet Protocol addresses. Connection to the Internet and public announcement of prefixes are totally irrelevant. Of course I understand this, but I also understand that if one can get away with RFC1918 addresses on a non-Internet-connected network, it's not a bad idea to do so in and of itself; quite the opposite, in fact, as long as one is sure one isn't buying trouble down the road. --- Roland Dobbins <[EMAIL PROTECTED]> // 408.527.6376 voice Technology is legislation. -- Karl Schroeder
RE: Google wants to be your Internet
> We also see this with extranet/supply-chain-type connectivity > between large companies who have overlapping address space, > and I'm afraid it's only going to become more common as more > of these types of relationships are established. Fortunately, IP addresses are not intended for use on the Internet. Rather, they are intended for use with Internet Protocol (IP) implementations. That's why the RIRs, in alignment with RFC 2050, section 3(a), do give out IP address allocations to organizations who are connected to extranet-type networks. If you read RFC 1918, section 2, category 3, you will see that this is consistent. So if the power companies want to assign a unique network address to all power meters then there is no good reason to stop them. After all, it is consistent with the goals of the original IP designers to address every light switch and toaster. Just remember, IP addresses are *NOT* Internet addresses. They are Internet Protocol addresses. Connection to the Internet and public announcement of prefixes are totally irrelevant. --Michael Dillon