RE: Eaton 9130 UPS feedback

2012-11-14 Thread Erik Amundson
I've had experience, and issues, with many types of UPSes, including HP (probably 
OEM'd from someone else), APC, Eaton/Powerware, and Liebert/Emerson.  I keep 
coming back to APC.  Solid units, and they are always slightly ahead in 
technology.  Sure, I've seen every model have failures and even faults (big-boom 
style), but APC provides a solid product and supports its customers the best, 
if you ask me.  That being said, a very close second choice would be 
Eaton/Powerware.

- Erik


-Original Message-
From: Seth Mattinen [mailto:se...@rollernet.us] 
Sent: Tuesday, November 13, 2012 1:59 PM
To: nanog@nanog.org
Subject: Eaton 9130 UPS feedback

Does anyone use Eaton 9130 series UPS for anything? I'm curious how
they've worked out for you.

I bought a 700VA model to give it a whirl versus the traditional APC,
since the Eaton is an online type with static bypass and also has a
high-efficiency mode where it normally stays on bypass. But the first
thing it did on the bench was have its inverter/rectifier or bypass
section catch fire and destroy itself.

~Seth




not exactly on-topic Server Cabinet question

2012-02-01 Thread Erik Amundson
I apologize for this being off-topic in the NANOG list, but I'm hoping some of 
you have experience with the particulars of what I'm looking for...

I am looking for a server cabinet which has an electric latching mechanism on 
the door.  I want to use my existing security system and proximity card reader, 
so that the cabinet door unlocks when a valid card is presented to the reader.

Does anyone sell anything like this?

I've looked into Rittal, APC, Hoffman, Wrightline (now Eaton), Knurr (via 
Liebert/Emerson), Middle Atlantic, and Panduit.  So far, nothing.

Any suggestions?  Feel free to reply off-list.

- Erik



RE: Yup; the Internet is screwed up.

2011-06-23 Thread Erik Amundson
My big concern with pitifully low upstream speeds is the whole 'cloud' 
movement.  Everyone will have all of their data in the 'cloud' sooner than 
we all think, and that involves uploading it from their PC to the 'cloud'.  For 
instance, I use a 'cloud' drive to back up bunches of data (150+ GB).  
However, the initial backup really is no fun; even though I have a 10 Mbps 
connection at home, the upload is more like 1.5 Mbps.  150 GB over 1.5 Mbps is 
no fun, and most non-technical folks would have given up long ago trying to 
back up that much data...

The 'cloud' is going to create a strong 'want' (some may choose to call it a 
'need') for higher-speed broadband, and for symmetrical speeds.

- Erik


-Original Message-
From: Seth Mattinen [mailto:se...@rollernet.us] 
Sent: Thursday, June 23, 2011 1:52 PM
To: nanog@nanog.org
Subject: Re: Yup; the Internet is screwed up.

On 6/22/11 3:07 PM, Joe Greco wrote:
> Your average person cares a whole lot less about what's crossing their
> Internet connection than they care about whether or not "this works" 
> than I do.
> 
> I continue to be amazed at the quality of Netflix video coming across
> the wire.  Our local cable company just recently upped their old 7M/512K
> normal tier to 10M/1M, and is now offering much higher speed tiers as
> well, which isn't going to be discouraging to anyone wanting to do this
> sort of thing.

What still dismays me are the pitifully low upstream speeds that are still
common. Not because most people want to run servers or host content at
home (they don't), but because they want to share content with friends
and the user experience can be greatly enhanced with symmetric speeds.
Sharing those HD videos or 1,000 pictures from a party weekend is less
painful if it takes 10 minutes to upload rather than 10 hours.

Also, things like GoToMyPC and "back to my Mac" are end user experience
things that are best served by not using horribly low upstream speeds. I
can understand that a decade ago most people were still sharing content
offline, but dare I say now sharing online is becoming more common than
offline.


> I guess the most telling bit of all this was when I found myself needing
> an ethernet switch behind the TV, AND WAS ABLE TO FILL ALL THE PORTS, for
> 
> Internet-capable TV set
> Internet-capable Blu-Ray player
> Networkable TiVo
> AppleTV
> Video Game Console
> Networked AV Receiver
> UPS
> and an uplink of course.  8 ports.  Geez.
> 
> That keeps striking me as such a paradigm shift.
> 

I was talking to one of my friends about when we wired his house a while
back. When he moved in we wired the crap out of it - we put Ethernet
ports in the kitchen, behind the sofa, everywhere. The one place we
didn't put anything, though, was behind the entertainment center. We put
in lots of coax and wiring for surround sound, but at the time it never
occurred to us to put Ethernet there. Of course, now it has to be there
without question.

~Seth



RE: Yup; the Internet is screwed up.

2011-06-22 Thread Erik Amundson
I agree; the whole use of the terms 'need' and 'want' in this conversation is 
ridiculous.  It's the Internet.  The entire thing isn't a 'need'.  It's not 
like life support or something whose absence will cause loss of life.  
The only thing to even discuss here is 'want'.  Yes, consumers 'want' 
super-fast Internet, faster than any of us can comprehend right now.  1 Tbps to 
the house, for everyone, for cheap!

- Erik

-Original Message-
From: Michael K. Smith - Adhost [mailto:mksm...@adhost.com] 
Sent: Wednesday, June 22, 2011 3:19 PM
To: Jeroen van Aart; NANOG list
Subject: Re: Yup; the Internet is screwed up.



On 6/22/11 12:48 PM, "Jeroen van Aart"  wrote:

>Steven Bellovin wrote:
>> When I was in grad school, the director of the computer center 
>> (remember
>> those) felt that there was no need for 1200 bps modems -- 300 bps was 
>> fine, since no one could read the scrolling output any faster than 
>> that anyway.
>> 
>> Right now, I'm running an rsync job to back up my laptop's hard drive 
>>to my  office.  I hope it finishes before I leave today for Denver.
>
>I understand the sentiment, but the comparison is flawed in my opinion.
>The speeds back then were barely any faster than you could type, I know 
>all too well the horrors of 1200/75 baud connectivity.
>
>Luckily, nowadays it's about getting your DVD torrent downloaded in 
>2 minutes, vs. 20 minutes, or 2 hours. Or your whole disk backed up 
>before your flight leaves. You're now able to back it up online to begin with.
>
>The thing here is that I talk about *necessity*. Once connectivity has 
>reached a certain speed threshold having increased speed generally 
>starts leaning towards *would be nice* instead of *must*.
>
>And so far the examples people gave are almost all more in the realm of 
>luxury problems than problems that hinder your life in fundamental ways.
>
>If you have a 100 Mbps broadband connection and your toddlers are 
>slowing down your video conference call with your boss by watching the 
>newest Dexter (hah!), then your *need* can be easily satisfied by 
>telling your toddlers to cut the crap for a while. Sure, it'd be nice if 
>your toddlers could watch Dexter kill another victim whilst you were 
>having a smooth video conference with your boss, but it's not 
>necessary.
>
>Greetings,
>Jeroen

To paraphrase Randy Bush - I hope all my competitors work on their version of 
what their customers "need" versus what they "want".  Why on earth would you 
not want to give them what they want?  Why does "need" have anything to do with 
it, particularly when "need" is impossible to quantify?

Mike





RE: Open Source / Low Cost NMS for Server Hardware / Application Monitoring

2009-07-22 Thread Erik Amundson
We've been using Ipswitch WhatsUp Gold for many years.  Their recent 
improvements to the product have been mainly system monitoring stuff.

The product has grown in capabilities hugely since version 4 when we started 
with them (they are on version 12 now), and with that improvement in 
capabilities, the price has gone up a bit.  It's still a whole lot less than 
most other options, however.

There isn't too much in the way of agents, but we've integrated a ton of 
proprietary systems with WhatsUp Gold via its SQL database back-end.

They also have fully scriptable monitoring as a standard feature now.

Anyways, thought I'd put in my two cents...

- Erik


-Original Message-
From: Matthew Huff [mailto:mh...@ox.com] 
Sent: Wednesday, July 22, 2009 1:08 PM
To: 'nanog@nanog.org'
Subject: Open Source / Low Cost NMS for Server Hardware / Application Monitoring

I apologize for not starting a new thread before, I didn't realize that the 
nanog mailing list created a thread-index rather than using the subject.

Even though NANOG is primarily for network operators, I know that a number of 
members work in NOCs where there is also monitoring of servers/applications. I 
would appreciate it if anyone has suggestions about monitoring systems that 
would be applicable to our environment. We have a large number of custom 
applications on a large number of hosts including Windows 2003/2008, Linux 
x86/x86_64 and Solaris Sparc/x86_64. We are looking for a better way of 
monitoring our environment. We are looking for recommendations for opensource 
or low-cost. We would prefer solutions where the basic monitoring is ready out 
of the box. Native agents with custom scripting would be highly desired (rather 
than SNMP/DMI/WMI polling).

Some of our requirements:

- Native agents for Windows 2003/2008, Linux x86, Linux x86_64, Solaris Sparc, 
  and Solaris x86_64; either binaries or source code
- Ability to send alerts via email, pager, and/or SNMP
- Monitoring of OS properties like memory, disk, CPU, etc.
- Ability to extend agents with scripting to allow monitoring of custom 
  services
- Plug-in architecture for third-party add-ons
- Reliable architecture
- Reasonable user interface
- Non-blocking polling
- Active project (new releases on a regular basis, and has existed for a 
  reasonable period)
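To make the "extend agents with scripting" requirement concrete, here is a minimal sketch of the kind of custom check an agent would run: Nagios-style exit codes (the de facto convention for check scripts), with the target host and port being placeholders, not a real service of ours.

```python
import socket

# Nagios-style exit codes, the de facto convention for check scripts.
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_tcp(host: str, port: int, timeout: float = 3.0) -> int:
    """Return OK if a TCP connect to host:port succeeds, CRITICAL otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return OK
    except OSError:
        return CRITICAL

# An agent would run something like this on a schedule and report the
# exit code back to the NMS, e.g.:
#   sys.exit(check_tcp("db-host.example", 1433))
```

The same pattern extends to checking a URL, a process table entry, or a custom application endpoint; the NMS only needs to understand the exit-code convention.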

Based on our research and feedback from NANOG, we have put together a preliminary 
list of products to evaluate:

Hyperic http://www.hyperic.com/
OpenNMS http://www.opennms.org/wiki/Main_Page
opsview http://www.opsview.org/
Osmius  http://www.osmius.net/en/
PandoraFMS  http://pandorafms.org/
Zabbix  http://www.zabbix.com/
Groundwork  http://www.groundworkopensource.com/
Nagios  http://www.nagios.org
Zenoss  http://zenoss.com
OpManager   http://www.manageengine.com
Orion   http://www.solarwinds.com/products/orion/
BigBrother  http://bb4.com/
Argus   http://argus.tcp4me.com/
Xymon   http://www.xymon.com
Spiceworks  http://www.spiceworks.com/
ICINGA  http://www.icinga.org


Matthew Huff   | One Manhattanville Rd
OTA Management LLC | Purchase, NY 10577
http://www.ox.com  | Phone: 914-460-4039
aim: matthewbhuff  | Fax:   914-460-4139







RE: MGE UPS Systems

2009-07-13 Thread Erik Amundson
We asked our local APC rep that exact question last week.  He told us that the 
MGE line of UPSes fills a hole that APC doesn't cover in its own lineup, and 
it will continue being sold as the product for folks who absolutely require a 
traditional double-conversion online UPS, as opposed to the 'hybrid' model of 
the Symmetra...

It didn't sound to me like they were planning on getting rid of the MGE line at 
this point.

- Erik

-Original Message-
From: Seth Mattinen [mailto:se...@rollernet.us] 
Sent: Monday, July 13, 2009 2:43 PM
To: nanog@nanog.org
Subject: MGE UPS Systems

I'm curious if anyone might know what the future of the MGE line of UPS
systems are. My concern is that they're dead-end since being merged into
APC, and APC wanting to sell me APC stuff. The problem I face is that my
facility was designed with separate equipment spaces, which is great for
normal electrical gear that can be backed against a wall, but not so
great for a product which is designed for the "everything goes in with
your racks" mindset. Any insight would be appreciated.

~Seth



RE: PKI operators anyone?

2007-09-06 Thread Erik Amundson

This is true; however, I find no excuse when the issue is in purely
server-client Wintel software.  Yeah, that's right, Java and Cisco
ACS, I'm taking a shot at you!

In that situation you could at least offer a patch, update, registry
hack, or kludge of some kind to make it work.  I'm sure the 2.4 GHz Intel
processor in my PC and server can handle plenty of certificates with
large key sizes.

Erik Amundson


-Original Message-
From: Joel Jaeggli [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, September 05, 2007 2:29 PM
To: Erik Amundson
Cc: Joe Maimon; North American Networking and Offtopic Gripes List
Subject: Re: PKI operators anyone?

Erik Amundson wrote:
> Validity periods aside, we have experimented quite a bit with putting
> certs on everything we possibly can, and we've found that there are a
> whole lot of products that can't handle root key sizes above 2048; some
> can't even handle anything larger than 1024.
> 
> Included in the 'can't handle your root' list are several Cisco products
> (some products can handle 2048, some 1024, some 4096), and many software
> products that use an older Java version that has a max of 2048.
> 
> This has always raised the question: Why do software authors think to
> implement PKI, but not think that key sizes will eventually grow over
> time?  Seems very short-sighted to me.

Consider the hardware platforms some of these operations run on... It
takes a long time to generate 1024-bit DSA keys on a 20 MHz Motorola
68020. Using them in a key exchange is also expensive on such hardware...
I think it's a safe assumption that there's some planned obsolescence where
the software and hardware elements of the platform meet in the
cryptographic realm.
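To put a rough number on that cost scaling, here is a stdlib-only sketch timing the modular exponentiation at the heart of RSA/DSA operations. The operands are random stand-ins, not real keys, and the timings are for modern hardware; the point is the relative growth, which only gets worse on a 20 MHz 68020:

```python
import random
import time

def time_modexp(bits: int, trials: int = 5) -> float:
    """Average time for one bits-sized modular exponentiation (the RSA core op)."""
    base = random.getrandbits(bits)
    exp = random.getrandbits(bits)
    # Force the modulus to be odd and actually bits long.
    mod = random.getrandbits(bits) | (1 << (bits - 1)) | 1
    start = time.perf_counter()
    for _ in range(trials):
        pow(base, exp, mod)
    return (time.perf_counter() - start) / trials

for bits in (1024, 2048, 4096):
    print(f"{bits}-bit modexp: {time_modexp(bits) * 1000:.1f} ms")
```

Each doubling of key size roughly triples to octuples the cost (the work grows faster than linearly in the bit length), so a platform specced around 1024-bit keys can be painfully slow at 4096.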

> I guess the option to choose for full interoperability is 1024 keys on
> all certs, but that is at a sacrifice of security on your higher-level
> certs...
> 
> - Erik Amundson
> 
> 
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of 
> Joe Maimon
> Sent: Wednesday, September 05, 2007 9:06 AM
> To: North American Networking and Offtopic Gripes List
> Subject: PKI operators anyone?
> 
> 
> MS-PRESS recommended design guidelines for multi-tier PKI systems for 
> validity periods are along the lines of
> 
> 8 years for the root
> 4 years for the "policy"
> 2 years for the "issuing"
> 1 year for the issued certificate
> 
> This is ostensibly due to fears of brute-force cracking of the private 
> keys over the root key's validity period.
> 
> Accompanied with this recommendation is one for key lengths of
> 
> 4096 for the root
> 2048 for the policy
> 1024 for the issuing and for the issued.
> 
> I have found the downside to this: constant renewals every single year 
> of either minor or major impact.
> 
> While MS-AD PKI client implementations seem to handle most of the 
> resigning (except for the root) just fine, external implementations 
> struggle with some details, such as "chaining up to the root" trusting 
> (thereby only requiring them to trust the root cert), and such as 
> trusting two different certs (for an issuing CA that gets resigned) but 
> that have the same common name; hence loads of fun every 11 months or 
> so.
> 
> I am about to recommend a re-implementation along these lines:
> 
> 80 years for the root, 4096-bit key
> 35 years for the policy, 4096-bit key
> 15 years for the issuing, ?-bit key
> <=5 years for the issued certificates.
> 
> Good idea? Bad idea? Comments? Are all PKI client implementations in 
> the wild 4096-bit compatible?
> 
> Thanks in advance,
> 
> Joe
> 



RE: DNS issues?

2007-07-20 Thread Erik Amundson

I was having issues reaching any of the roots, and any of the TLD servers 
for .com and .net.  I tested them directly, and had no luck.

I was merely looking to see if anything was broken elsewhere in the
world to see if it was just my network so I could diagnose things a bit
better.

XO's problem eventually got fixed (relatively), and I was suddenly able
to get answers from the roots/TLDs again over Verizon
Business/UUNET/MCI/WORLDCOM/whatever, and Qwest after about 45 minutes.
I have found no explanation for the loss of connectivity as of yet.

I found it strange that I had issues reaching the name servers even after
shutting off XO, so that's why I asked...

Some of the NANOG folks have been helpful and kind, some not, but I read
NANOG a lot, so I'm not surprised. :)

Thanks... I guess.


Erik Amundson


RE: DNS issues?

2007-07-20 Thread Erik Amundson

I also have UUNET and Qwest routes working fine, but even on those, I couldn't 
reach the roots for quite a while.

Erik Amundson
IT Infrastructure Manager
Open Access Technology Int'l, Inc.
Phone (763) 201-2005
mailto:[EMAIL PROTECTED]
 
CONFIDENTIAL INFORMATION:  This email and any attachment(s) contain 
confidential and/or proprietary information of Open Access Technology 
International, Inc.  Do not copy or distribute without the prior written 
consent of OATI.  If you are not a named recipient to the message, please 
notify the sender immediately and do not retain the message in any form, 
printed or electronic.


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Friday, July 20, 2007 2:34 PM
To: Erik Amundson
Cc: nanog@nanog.org
Subject: Re: DNS issues?

On Fri, 20 Jul 2007 14:25:41 CDT, Erik Amundson said:

> We just lost DNS totally for a while and just got it back...XO 
> communications also lost almost all Internet routes at the same time...

Cause (losing all routes) -> effect (can't reach stuff other side of the 
routes, including DNS servers)?

Sounds like an XO routing issue, not a DNS issue.


DNS issues?

2007-07-20 Thread Erik Amundson
We just lost DNS totally for a while and just got it back...XO
communications also lost almost all Internet routes at the same time...

Anyone else noticing weirdness?

Erik Amundson