Re: [lopsa-tech] Cloud Databases?

2010-10-07 Thread Luke S Crawford
Atom Powers  writes:

> One of the COs in my company keeps asking "why don't we put it all in
> the cloud"? I'm running out of good answers.

Cost it out.   Depending on how much dealing with hardware costs you,
there are cases where the cloud can save you money;  but usually, 
if you plan on leaving a server up for more than four months,
it is cheaper to buy a server and co-locate it, before you
count the inconvenience of dealing with physical hardware.
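
For a rough sense of where the break-even lands, something like this,
with made-up numbers; plug in your own server price, colo fees, and
cloud rates:

    # back-of-the-envelope: months until buying + colo beats renting by the hour
    server_purchase = 1200.0          # one-time cost of the server, $
    colo_per_month = 75.0             # your share of rack/power/bandwidth, $/month
    cloud_per_month = 0.34 * 24 * 30  # an always-on instance at $0.34/hour

    for month in range(1, 37):
        colo_total = server_purchase + colo_per_month * month
        cloud_total = cloud_per_month * month
        if colo_total <= cloud_total:
            print("colo wins after %d months ($%.0f vs $%.0f)"
                  % (month, colo_total, cloud_total))
            break

With those particular numbers colo wins inside a year; whether it's
four months or ten depends entirely on what you plug in.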

The other misconception people have is that ec2 will eliminate your 
SysAdmin.  It won't.  It will reduce the load on the hardware guy, but 
you still need people who understand *NIX.

> Question:
> What are your experiences with cloud-based (or cloud-capable) content
> management systems?
> What options are there for putting a database-driven application in "the 
> cloud"?

I know a lot of people who buy and co-locate beefy servers to use as
database servers, and then use AWS or similar for the web front ends.
Most people I know doing databases in 'the cloud' use the "database 2.0"
stuff: CouchDB and the like, NoSQL-type stuff.

I have a bunch of customers running databases on my VPS platform and I
can tell you that sharing disk gives you suboptimal performance.

The thing is, if you have two people splitting a SATA disk down the middle,
sure, each one gets half the space, but each gets a whole lot
less than half the performance.  SATA disk is pretty good at sequential
transfer, and operating systems go to great lengths to turn
random writes into sequential writes.

The problem is that if you've got two different virtuals each streaming
sequential writes to the same disk, you get what is basically random
access to that disk, as it has to switch fairly quickly from writing one
stream to writing the other.
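
A crude way to see it for yourself: time one sequential writer alone,
then two at once on the same disk, and compare the per-writer numbers
(the paths and sizes below are arbitrary):

    import os
    import time
    from multiprocessing import Process

    MB = 1024 * 1024
    SIZE_MB = 512            # bump this well past your RAM size for a fairer test
    CHUNK = b"x" * MB

    def writer(path):
        start = time.time()
        with open(path, "wb") as f:
            for _ in range(SIZE_MB):
                f.write(CHUNK)
            f.flush()
            os.fsync(f.fileno())   # make sure it actually hit the disk
        print("%s: %.1f MB/s" % (path, SIZE_MB / (time.time() - start)))

    if __name__ == "__main__":
        writer("stream_alone.dat")                    # baseline: one stream
        procs = [Process(target=writer, args=("stream_%d.dat" % i,))
                 for i in range(2)]
        for p in procs:
            p.start()                                 # two streams sharing the disk
        for p in procs:
            p.join()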

I am given to understand that Linode deals with this by keeping count
of your IOPS, and if you trip a threshold, they warn you.  If you trip
the next threshold, they start actively limiting your disk I/O.  

This, I think, is a good idea, and something I will likely implement myself.
But the point is that sharing spinning disk is not good for I/O
performance.
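
If I do implement something like Linode's policy, it would look roughly
like the sketch below; the device-to-customer mapping and the thresholds
are invented, and the actual throttling mechanism (blkio cgroups,
dm-ioband, whatever your platform offers) is left as an exercise:

    # sketch of a Linode-style IOPS policy: warn first, throttle above that
    import time

    GUEST_DEVICES = {"dm-3": "customer_a", "dm-4": "customer_b"}  # invented mapping
    WARN_IOPS, LIMIT_IOPS = 300, 600                              # invented thresholds
    INTERVAL = 10                                                 # seconds per sample

    def io_counts():
        counts = {}
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] in GUEST_DEVICES:
                    # field 4 = reads completed, field 8 = writes completed
                    counts[fields[2]] = int(fields[3]) + int(fields[7])
        return counts

    prev = io_counts()
    while True:
        time.sleep(INTERVAL)
        cur = io_counts()
        for dev, total in cur.items():
            iops = (total - prev.get(dev, total)) / float(INTERVAL)
            who = GUEST_DEVICES[dev]
            if iops > LIMIT_IOPS:
                print("%s (%s): %.0f IOPS, throttle here" % (who, dev, iops))
            elif iops > WARN_IOPS:
                print("%s (%s): %.0f IOPS, warn the customer" % (who, dev, iops))
        prev = cur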

(Now, how well does Amazon EBS stack up to a database write load vs.
local disk?   I don't know.  I'd be interested to hear, though.  Amazon
also offers SAS disks, I think, and SAS performance degrades less
than SATA's when access becomes random, but a shared SAS disk is still never
going to beat an unshared SAS disk for performance.)

Now, this isn't to say that the cloud is always a bad idea;  if you have
compute needs that vary by the hour, it's goddamn difficult to beat 
amazon.com.   And if you need a dev box with 512MiB ram, there is 
no way you are going to be able to host even a free 1u for what it costs
to get a VPS of that size.   


Re: [lopsa-tech] PSU wear out

2010-09-30 Thread Luke S Crawford
da...@lang.hm writes:

> I am in the process of doing this with a batch of systems purchased
> 5-6
> years ago.


How long do you usually try to keep servers online?  With my cost of hardware
(lower than usual, as far as I can tell, since I build my own), my cost of
power (higher than usual, as I am in California), and my higher than
average cost of downtime, I'm usually looking to retire old servers
after about three years, but I don't know if that's the right policy.

Do you usually do it based on the ratio of what you are paying for power
vs. what you pay for a new server?  Or do you keep a batch of servers
online until you start seeing more than a certain number of failures?
A mix of both?
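
For what it's worth, here's how I'd frame the math, with invented
numbers; the interesting case is consolidating several old boxes onto
one new one rather than a one-for-one swap:

    old_servers = 3          # old boxes you could consolidate onto one new one
    old_watts = 250.0        # draw of each old box
    new_watts = 180.0        # draw of the replacement
    power_per_kwh = 0.15     # $/kWh, California-ish
    new_server_cost = 1500.0

    saved_kw = (old_servers * old_watts - new_watts) / 1000.0
    monthly_savings = saved_kw * 24 * 30 * power_per_kwh
    print("power savings: $%.2f/month" % monthly_savings)
    print("payback on power alone: %.1f months" % (new_server_cost / monthly_savings))

With those numbers the payback is about two years on power alone, which
at least isn't inconsistent with a three-year retirement policy.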


Re: [lopsa-tech] whole disk encryption

2010-08-24 Thread Luke S Crawford
Doug Hughes  writes:

> Well, swapping the electronics will not help anybody. A modern drive is 
> carefully factory calibrated with its particular electronics and heads. 
> There is nothing that you can swap between drives to make them useful. 

Actually, with consumer-grade sata, swapping the drive electronics 
(between drives of the same model, of course)  works far more often than
it does not, in my experience.  Now, it's not something I'd /depend/ on,
just 'cause manufacturers change things mid-run, but it is one of the 
things I try when I'm doing data recovery.  Last time I had to do
that sort of thing, swapping the electronics didn't fix it, but to verify
it wasn't something on the PCB we moved the PCB from the bad disk to a good
disk just like it, and the good disk still appeared to work.  

(We ended up fixing the drive by sticking it in the freezer for a few hours;
it worked long enough for us to get the data off of it.  That was maybe
three years ago.)


Re: [lopsa-tech] Maintenance contracts vs Spares [SEC=UNCLASSIFIED]

2010-08-12 Thread Luke S Crawford
"Robinson, Greg"  writes:
> 
> Wanted some input on the maintenance contract vs. hot/luke warm/cold
> spares debate.  What does your $work do, and is it value for money?

My experience working for people who have support contracts has been
that convincing your fancy support people that an intermittent
problem needs more than a reboot is usually just as hard as (sometimes
harder than) just fixing the problem myself, so I keep spares on hand.

On the other hand, when your power supply catches fire or the machine
is otherwise broken in a way that a non-technical person can see is broken,
both Dell and HP support are excellent.



Re: [lopsa-tech] electrical questions... smaller breakers for splitting a circuit?

2010-07-20 Thread Luke S Crawford
Ross West  writes:

> BTW: Are you getting the rack pre-installed in 1/4s? Because you
> generally can't get conversion kits - since split racks are custom
> with isolated cable access to each part. So this could be all a moot
> idea since you're now offering unsecured/shared access, and people
> will use someone else's plug.

I've rented similar products where the space was marked off by shelves, and
where each rack had its own PDU...  I think the 'don't touch other people's
stuff' social barrier is strong enough.  (Besides, if you do plug into
an empty-looking PDU and the other customer notices, /best case/ you
get unplugged.  More likely the other customer complains and I kick
you off.)


> Also you might find that the main colo provider has issues with you
> doing your own custom electrical/network installation between rack
> spaces. 95% of colo 1/4 racks do not have intra-rack distribution, so
> you need to go external - and then you might then be forced to use
> things like armored/waterproof electrical cabling if you're allowed at
> all.

Yes, I'd be doing a regular rack delineated by shelves.

> You can get a big (eg 60amp/208v/3phase) circuit and only pay for
> usage with a minimum commit (watts) in some places, so it becomes
> almost exactly like a burstable network pipe.

Yup, but the places around here that I know about that do that charge
about as much for just the rack, before power, as I'm paying for
a full rack and 2x20A circuits, and more than twice as much as the
single rack / 15A circuit deal I'm talking about using for the cheap
co-lo - before paying for /any/ power.


> > Yeah.  208v would actually be fine, if I could find it cheap.  but
> > the opportunity I see now is a reasonable (not awesome but reasonable)
> > cost per watt at he.net, which gets me lots of rackspace (which is nice
> > to have, even though it's not really essential)   and he.net has pretty
> > smooth remote hands and access control policies, if I can set it up such
> > that people can be trusted to deal withstuff.
> 
> 208v is horrid for "cheap colo" - people commonly bring 120v only
> devices to be installed since they don't know the difference, and then
> when faced with a C14 receptacle, you need to solve the problem of
> different plugs and power.

Eh, really, I don't mind something of a barrier to entry.   I wouldn't
mind handing out C14 computer power cords, along with a 'this will fry
stuff meant for 120V' warning... considering that everything using
IEC cables built in the last 15 years supports 100-240V (and stuff built
for a while before that had a voltage selector switch), I think it'll be fine.
The C14 sockets will prevent people from accidentally frying their
wall warts.


> DCs have generally smartened up and now bill based on watts, and
> can give it via whatever voltage you want/need. They have
> re-discovered why the electrical company bills in kWh and not
> amperage. :-)

> The days of amperage based billing (regardless of voltage) is long
> gone - except for the lucky SOBs that have old power circuits under
> those old contracts.

Huh.  Around here, I've only seen metered billing at much more expensive
co-los.  The cheap places I work with charge by circuit capacity.




Re: [lopsa-tech] electrical questions... smaller breakers for splitting a circuit?

2010-07-17 Thread Luke S Crawford
"John Stoffel"  writes:

> Luke,
> 
> One thing I've been wondering about here is physical access issues,
> which you haven't really talked about.  If you're going to let your
> customers in a 4am to muck in their 1/4 of a rack, how are you going
> to limit their access to the *other* 3/4 of the rack?  What's to keep
> that user from plugging into someone else's outlets?  

I think just having separate PDUs would be enough, especially
if I have a shelf at every 'border'.  I've shared racks that way
before, when I was smaller and only needed 1/2 rack.  SysAdmins
are pretty good about "don't touch other people's power", I think.

If I'm wrong, of course, I'm sunk.  But the competition in that building
allows open access (though there is a process for bringing in new
servers that he.net implements; I don't think it works that well
'cause they don't check power usage), and I'd bet money that
my customers can keep their hands off of another person's
clearly marked PDU, especially when it's obvious that the access control
people record who comes in when.


> Do they even sell 1/4 rack doors with individual keys for a good
> price?  And don't you need those doors on the front and back as well?  

I should check... if it's cheap, might as well, right?

> I'd almost say that you should cut costs by only allowing physical
> access during *your* hours, with a bigger up-front fee for 24x7
> emergency access if something goes wrong.

Eh, even in that case I'm going to charge enough to price myself out of
this market.  The competition all allows unsupervised access.

> Basically, you're spending all this time worrying about the power and
> what happens if someone goes over their limit, and taking out someone
> else.  Instead you should be working to make it as standard and cookie
> cutter as possible so that you just populate a rack and then sell the
> bits here and there.  


Yeah, standard is important.  That's why a lower-density 1/4 rack has me
thinking more than a higher-density rack, even at a slightly higher
cost per watt.  I won't have to balance high-density with low-density
customers... it's just "you stay between this shelf and this shelf."

I will play with doing dedicated servers at some point;  it's the natural
complement to the VPS hosting, and the Opteron 41xx series looks like it'll
let me deploy pretty cheap 4-6 core / 16GiB RAM boxes, which /might/
be closer to the price range I can sell into.  Still, I'm going to want
$256/month for those, so buying will still likely be a better deal if you
keep them very long and you don't get ripped off too badly on 1U colo.
(I've tried renting 8 core, 32GiB RAM for $512 or so a month,
and I've failed.  It's above the cost threshold I'm able to sell
into, and it's obvious to anyone pricing it out that you save money
quickly by buying and co-locating.)

But even at 16GiB for $256, I don't have a lot of faith I'll
sell many.  Even if that's within the price range my customers are willing
to think about, my customers are the sort who are willing to
do a bit of extra work to save some money.  All my other plans are
cheaper than co-locating some ancient hardware you have lying about...
while this will be more expensive.

-- 
Luke S. Crawford
http://prgmr.com/xen/ -   Hosting for the technically adept
http://nostarch.com/xen.htm   -   We don't assume you are stupid.  


Re: [lopsa-tech] electrical questions... smaller breakers for splitting a circuit?

2010-07-16 Thread Luke S Crawford
"j...@coats.org"  writes:

> One more thing, sorry for the additional post.
> 
> I was wondering if there are 'network' version of Kill-a-Watt power monitors
> for anything like a reasonable price?
> 
> For an industrial level answer I think that APCC sells networked power strips
> so you can actually set up a server to monitor (and even power cycle) power
> to customer servers.


I have some 8-port PDUs right now by APC that I paid $200 each for, new
surplus... (they retail for $400 or so); those are pretty nice.  I have a
boatload of old BayTech RPC3 rebooting metered PDUs, but most of those
don't meter correctly anymore (though they still work for remote reboot),
so I'm getting rid of those, even though they are well supported
by powerman and the like.



Re: [lopsa-tech] electrical questions... smaller breakers for splitting a circuit?

2010-07-16 Thread Luke S Crawford
Doug Hughes  writes:

> If this is your own space, you should seriously look at doing cooling 
> right too. Cheaper up-front cooling is very expensive in long-term 
> operations. A good initial cooling infrastructure pays for itself in 
> operations costs, though the capital outlay is more.


As I grow, I will seriously consider this... but right now, we're talking
between $10K and $20K in total revenue, so my own place is not an
option.

If you know of bay area co-los with good cost per watt on 208v, I'd
like to hear about them.  


Re: [lopsa-tech] electrical questions... smaller breakers for splitting a circuit?

2010-07-16 Thread Luke S Crawford
Brian Mathis  writes:

> I think you missed the point there.  ALL of the risk is to *your
> business*, so you're the one who needs to mitigate it.  When you go
> this route you also need to be monitoring the total circuit usage and
> if you see spikes or the general water mark getting too high, that's
> your cue to get a higher capacity circuit or move some systems to
> other circuits.

See, this is what I'm trying to avoid.   Oversubscribing power has serious
negative consequences, and if I can get it cheap enough that I don't
have to oversubscribe, I will potentially be providing a more valuable
service, with less work for me, than I would be if I were oversubscribing.

> You definitely need to be aware of the max possible draw of each
> device, so you know how much you can oversell each circuit, and based
> on your initial description, you're running a low margin business and
> there you must rely on walking that thin line.

My hope was that I could figure out a way to not oversubscribe that
circuit.  E.g., you have 1/4th of that circuit, and you can't use more
(if you go over, your breaker blows).  This way, I don't need to
worry about how much power each piece of equipment draws: if you exceed
your share, your ports get shut off.   I've lowered my cost (meaning
I can lower my retail price), and the end user experience is a lot more
like what they'd get with a full circuit.  E.g., if I oversubscribe power, your
uptime is a /lot/ more dependent on my competence in managing that
oversubscription than if I hard partition the power.

> As already mentioned, you can get smart PSUs and some of them have the
> ability to shut down systems based on priority, in case someone goes
> over.  There's probably a feature to shut down on specific port usage,
> but I would only activate that if you were in an "over" situation.


Yeah.  From what others have said, the PDU is the way forward, rather than
separate circuit breakers, even if I want to automatically shut people off
when they exceed the threshold.


Re: [lopsa-tech] electrical questions... smaller breakers for splitting a circuit?

2010-07-16 Thread Luke S Crawford
Doug Hughes  writes:

> servertech has the capability to delegate access to particular ports
> for particular users for remote reset. You could give each user
> permission to reset their port and consolidate to a larger PDU. You
> could also set warning thresholds to send to your pager through your
> monitoring system which collects SNMP traps. (it also has an API so
> you could hook-in to a web form if you want to code it yourself and
> manage what they see based upon their clientID or such)

powerman is my preferred tool for this sort of thing;  it means I can
buy different brands of PDUs (and PDUs without multi-user capabilities)
and still have multi-user capabilities.

> Geist may have a less expensive, more customizable option to suit your
> needs. You spend a little more upfront for the larger rackmount pdu,
> but certainly not $200 per user.

I was suggesting buying an 8-port PDU per customer.  But yeah, 8 power
ports is way overkill for 4 amps, so a larger PDU with multi-user
access, native or through powerman, is certainly cheaper... though
giving everyone their own PDU does make a much
cleaner "yours vs. mine" delineation.



Re: [lopsa-tech] electrical questions... smaller breakers for splitting a circuit?

2010-07-16 Thread Luke S Crawford
Ross West  writes:

> Having been involved with this kind of thing in the past, so let me
> make some commentary.  As was pointed out - most of your answers are
> actually determined by the business, and shouldn't be done via a
> technical solution.
> 
> > 1. It's got to be cheap.
> 
> It's not that cheap up front. A customer will do stuff at 3am, and then
> screw it up needing physical access "right away" because they punched
> in the wrong IP address into their firewall and it don't have a OOB
> connection, or you can't hook up a simple crash cart (KVM on a cart)
> to it. So you (or your employee) is running around then, or you'll
> need to invest in higher-end/remote management gear.  Oh, and they'll
> agree to any cost at that moment (3hr minimum @ $150/hr?) but refuse
> to pay since it only took 10min to fix, or simply can't afford it.


Yeah, this is the current situation, only I have good serial out-of-band
access set up for everyone.

I've stopped accepting co-lo customers for this very reason.

With this new setup, I'd like to give them access to the rack... he.net
will manage access for me, so they can go down and screw with it
without waking me up.

My problem is that I need to make sure they won't overdraw the
circuit while they are screwing around, and take down other customers.

(There is a certain level of risk inherent in just sharing a rack...
but I think the biggest risk is overdrawing the shared circuit.)


> > 2. it's got to be low-calorie on my part.
> 
> Then do pre-pay rather than post-pay. Ie: People pay you $100 and put
> that into their "account", then you debit that "account" with ongoing
> costs as they happen. When the balance is $0, turn it off
> automatically. No money, no service. Also gets around the issue of
> people paying 1/2 their bill - remember you're aiming for the cheapest
> customers possible.

Yeah.  I need to get a good shutoff policy for  co-lo customers.

> Be sure to check on the govt rules for holding money in an account on
> someone's behalf - it might be different than holding a generic
> deposit for billed services.

this is something I'd not thought of, thanks. I guess maybe I could
structure it as a setup fee with a 'last month free' deal?   
or something.  I guess I'd have to talk to someone about that.  

> > 3.  there has to be good isolation.
> 
> This is standard business stuff - I'm assuming that you already do a
> VPS service from your website, so how do you handle a 75mbps burst to
> a virtual server on a 100mb cable that has normally ~50mbps usage?
> Employ the same attitude.

The difference is twofold.

First: I can get bandwidth on plans where I have a 100Mbps commit on a
1000Mbps connection.  I can overrun my commit with no more consequence
than paying more money.

Second: if the pipe is overrun momentarily, the consequence is that
some packets are dropped.  If the power is overrun momentarily,
all servers on the circuit fail.  This isn't the 'cloud' - people
expect their co-located servers to stay up.

> > 3.5 (electrical Qs)
> 
> You will not get a NEMA5-15 Receptacle (the standard one) 7.5a fuse
> for a few reasons:
> 
>  - Probably somewhere breaks the electrical code
> 
>  - Your non-standard 7.5a breaker will cost 5x the 15a breaker (if you
> can even find it)

those two are very good points.   And as another poster said, the breaker
won't really do what I want without giving myself some headroom, anyhow.

>  - You're not going to be around when the circuit pops and needs to be
> _manually_ reset.  Remotely resettable ones cost stupid amounts of
> money.

Like I said, this part, at least, isn't a problem.  If you blow your
circuit, well, that sounds like you have a problem until you can get it
reset.

> So my advice is suck up the upfront cost and install a remotely
> manageable PDU (eg: www.servertech.com), and bill the customer based
> on actual usage (watts!).

I have several by that brand, in fact.  pretty nice stuff.  

> Running 230v/400v/600v is great, but people assume 120v and bring gear
> for that (eg: wall warts for a 5 port dlink switch). Servers aren't as
> much of a problem.

Yeah.  208V would actually be fine, if I could find it cheap.  But
the opportunity I see now is a reasonable (not awesome, but reasonable)
cost per watt at he.net, which gets me lots of rackspace (which is nice
to have, even though it's not really essential), and he.net has pretty
smooth remote hands and access control policies, if I can set it up such
that people can be trusted to deal with stuff.

> People have _no_ idea how much power their gear uses. I've had people
> come with 15 disk san arrays assuming it's the same as their 1U server
> at 150w.  I've also had the inverse too (told modem draws 1 amp @
> 230v).


Yup.  I most often get people who think that the big cost is the rackspace...
I mean, that's the most visible, but it probably has the least to do with
my costs.  Sometimes I wonder how much of that is actually negotiable

Re: [lopsa-tech] electrical questions... smaller breakers for splitting a circuit?

2010-07-16 Thread Luke S Crawford
Richard Chycoski  writes:

> - They don't actually trip at the rated current. Breakers have a 
> profile, and will trip quickly if you draw significantly more than their 
> rated current, but may sit at (or even somewhat above) their rated 
> current for quite some time.

> - Breakers also get weak if tripped frequently or run near their limits 
> for a long time. This can take a few years to happen, but it *will* 
> happen eventually, and your customers will get unhappy if they can no 
> longer run up to their rated current. Breakers are not meant to be run 
> at-or-near their rated current continuously.

Thanks, this is the sort of thing I need to worry about, and the thing
I don't know much about.   exactly the info I was going for here.

It sounds like what you are saying is that giving the
customers smaller breakers won't solve my problem of keeping
them away from the limits on my larger breaker.   I'm still
going to have to hunt them down and pester them about keeping it
within 75%, and considering my goals, that might be best done
with a metering PDU.   Hell, giving everyone a metering/rebooting
PDU will set me back a one-time $200-$400 per customer, but it would
also add a lot of value for the customer if I gave them remote
reboot capability.  I could charge a one-time setup fee, maybe
waived or reduced if they pre-pay for a certain number of months.
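
Spread out over a customer's stay, that one-time cost isn't much
(illustrative numbers only):

    pdu_cost = 300.0               # somewhere in the $200-$400 range above
    for months in (12, 24, 36):
        print("over %d months: $%.2f/month" % (months, pdu_cost / months))
    # or just recover it up front as a setup fee, waived on a long enough pre-pay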

> If you actually want to shut down the circuit based on current limit, 
> you could use a monitored power bar that also has controlled shutdown 
> for each circuit, but the cost may exceed your business model. The 
> advantage to this is that if you can use larger (even 30A) circuits to 
> feed the power bar, and can choose to warn/charge $ if the customer 
> exceeds their limit, but not actually shut them down unless the entire 
> circuit is at risk.

> Another reason that breakers are undesirable - you actually have to go 
> to them and switch them on after they trip. I don't know if this is an 
> inconvenience for you or not.

I don't think this would be a huge deal.  You trip your breaker,
you go deal with it.  SysAdmins are used to a blown circuit being a big
deal.  Hell, when something blows, you don't just want to reset the
breaker anyhow, right?  You want to go in and figure out why it blew,
and unplug the offending equipment, or it's just going to blow again.

I think if I'm up front about it, while it is an inconvenience, it's
an acceptable tradeoff for my target customer base.

The idea is that I want to sell my customers (SysAdmins, mostly) a
product that is as much as possible like the product I buy from the data
center, only smaller, and proportionally cheaper.

> What kind of gear are you planning to let your customers install that 
> will vary in power requirements this much? 

Normal PC hardware varies quite a lot between 'idle' and
'loaded'.  Sure, if you do a real careful load and burn-in you can
get a pretty good idea of its max draw, but if someone is trying to
push it (and many people are), or if they don't take the 75% rule
seriously (you'd be amazed how many SysAdmins don't), it's not going
to work without supervision.

But I have a bigger problem: I want this to be an unsupervised 1/4 rack,
so the real issue would be a customer coming in at 4am and, say,
replacing a low-power server with a high-power server.

>  You might solve the problem 
> by putting a 'Kill-a-Watt' on their gear and getting them to run the 
> equipment full-out. Once you know the maximum current requirement, 
> charge them based on the maximum power usage, and distribute the 
> equipment on circuits accordingly. (Monitored power bars could also do 
> this, but the Kill-a-Watt costs about $20.) This is less convenient if 
> there are frequent equipment changes, but if most customers drop it in 
> and forget it, it might be practical. (You could have a fee for gear 
> change, to cover the overhead of doing the power profile.)

Again, we're moving away from cheap if I'm supervising burn-in.  I
wanted an unmanaged "just don't touch other people's stuff" split
rack, and depending on people to estimate their own power simply
won't work without strong enforcement: either having headroom and
charging overages (and charging appropriate monthlies to cover the
headroom), or having some automated thing shut down customers who are over.
Higher one-time costs (electrician time, advanced PDUs) are acceptable;
higher running costs (making me supervise all equipment installs)
will jack up prices way beyond what people want to pay.  Also, an
open-access 1/4 rack is more value to my target market than something
where they need me to dick around with their equipment before they can
add anything.


I know it looks like I'm both expecting my customers to know what
they are doing /and/ then not trusting them to treat the circuit with
respect.   My goal is to create something where if you 'push it' the
consequences fall on you, not on the guy sharing the circuit with you.

Re: [lopsa-tech] electrical questions... smaller breakers for splitting a circuit?

2010-07-16 Thread Luke S Crawford
Doug Hughes  writes:

> Don't forget that a 15A circuit only gives you 12A. Expect it to blow if 
> you use more than that for some amount of time on a slow blow fuse/circuit.
> 
> Cutting power is kind of draconian. Why not just have a penalty fee? 
> With a smart monitoring strip and thresholds, you can charge a premium 
> if they use more than their alotment. Limit to 3 customers per circuit 
> at 4A each. Also, run 208VAC so you have more headroom and you won't see 
> things blowing. Put it in your TOC that all equipment must be capable of 
> running at 208V. There are so few things that don't anymore... (old 
> modems with power bricks come to mind).

> You'll save a few bucks on your electric by running at the higher 
> voltage, too.



Why don't I only sell 3 quarter-racks with some headroom and charge
for overages?  Mostly, because I'm trying for cheap;  the market
I serve is, ah, rather price sensitive.   I think my customers would be
okay with being treated the way I'm treated by the co-lo (which is to say,
here is your circuit, don't blow it).   It looks like he.net 15 amp 120V
cabs are so cheap that I can make a good profit charging $150/month for a
quarter rack with 1/4th of a 15A circuit.   (And he.net bandwidth is so
cheap I can pretty much give it away.)   The he.net 15 amp
cabs are slightly more expensive per watt than my 2x20A 120V setups
at SVTIX, but I think that if I'm renting out customer-accessible
shared racks, I'm better off giving them a little space.
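
The rough math on that quarter rack (using the 80% continuous derating
Doug mentioned, and splitting the circuit four ways; both are assumptions
on my part):

    circuit_amps, volts = 15, 120
    usable_watts = circuit_amps * volts * 0.8   # 1440W continuous on the circuit
    per_customer = usable_watts / 4             # 360W per quarter rack
    price = 150.0
    print("each quarter rack: %.0f W continuous" % per_customer)
    print("$%.3f per watt-month at $%.0f/month" % (price / per_customer, price))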


And I haven't found cheap (even per watt!) 208V power in the Bay Area.
(Herakles Data in Sacramento has cheap 208V, but I haven't seen it here in
the Bay, and Herakles doesn't have the abundance of cheap bandwidth that
Bay Area co-los have.  Tips would be appreciated.)

Certainly for my own stuff, I would prefer 208V if I could get it.
Part of the problem is that even when it's cheaper per watt,
the sales people think it's more expensive because it's more
expensive per amp, so they try to steer me (the value customer)
away.  It usually takes quite a bit of arguing on my part before they
will even quote it.


Re: [lopsa-tech] electrical questions... smaller breakers for splitting a circuit?

2010-07-16 Thread Luke S Crawford
Brian Mathis  writes:

> It does put you at some risk, and the amount of the overage fee should
> be proportional to that risk.


This is a risk to /other people's equipment/; it's not just
a risk to me.  And from what I've seen, almost everyone pushes it to the
hairy edge in situations where they charge per circuit rather than by
usage, so one person going over has a higher chance of killing
everyone else than you might think.




Re: [lopsa-tech] electrical questions... smaller breakers for splitting a circuit?

2010-07-16 Thread Luke S Crawford
Steven Kurylo  writes:

> You'll have to check with the authorities where you live.  It may not
> be legal for you do to the electrical work.  That said, you can
> certainly replace the breakers in the panel with 4/5/7amp breakers.
> So you have a 200amp panel, and instead of the normal 15amp breakers
> for each circuit, you just pop in the smaller one.

Ok, thanks.  Yeah, I don't want to go in and start doing this sort of
thing myself, at least not without the proper supervision.  I'd
be paying a licensed electrician to do it... but my experience has been
that you get much better results paying other people if you have some
idea as to what is possible and what should be done.



[lopsa-tech] electrical questions... smaller breakers for splitting a circuit?

2010-07-15 Thread Luke S Crawford

so here is the problem:  I am exploring the idea of getting
back into the co-location business.   Now, for this to work for me
and my market, it's got to fit several things.

1. It's got to be cheap.   There is a place for selling expensive 
co-lo... but my entire business is built around selling less expensive 
stuff to people willing to skimp on certain features  (and on skimping on 
the right features for that group.)  

this means I probably want to sell half-racks, or even quarter racks.  
 
2. it's got to be low-calorie on my part.   if I've got to go re-negotiate
with a customer every time they exceed their power allowance, or argue
about if 75% or 85% utilization is acceptable, I'm going to want to charge
more than my market is willing to bear.   Negotiation is fine in many
markets, but in my market, it would raise costs an unacceptable amount. 

3.  there has to be good isolation.  It's fine if there are sharp edges;
my customers are willing to tolerate it if it's easy for them to break their
own stuff.  But if their neighbor's mistakes or my mistakes take them down?  
that is not acceptable.

So, here is what I was thinking: what if I split every 15A circuit into
two 7.5A circuits, and put a breaker on each that blew at 7.5 amps?
The idea is to make things 'fire and forget': if you exceed 7.5 amps,
well, your shit breaks.  No negotiation.

(This also removes the possibility of me or one of my people 'going easy' on
one customer eating more than their share for a few days, and then the
other customer sharing that circuit suddenly having a spike and killing off
both users.)

Anyhow, uh, is anyone else doing this?  Is it an absolutely stupid idea?
Should I go looking for a PDU that simulates this behavior, or write
a perl script hitting a PDU to simulate this?  Or is it better to have an
electrician wire such a thing up with real breakers, and would such a thing
even be possible?  (I mean, ultimately, I'd like a breaker that trips
at 3.75 amps, four to the 15 amp circuit.)
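
If I end up simulating the sub-breakers in software rather than copper,
the logic would be roughly this (python rather than perl, and the two
PDU-specific calls are stubs, since they depend entirely on which PDU
ends up in the rack):

    # fake a "3.75A sub-breaker": poll a metered PDU, cut a customer's
    # outlets when they exceed their share.  Outlet map and threshold are examples.
    import time

    CUSTOMER_OUTLETS = {"alice": [1, 2], "bob": [3, 4]}   # hypothetical assignment
    ALLOTTED_AMPS = 3.75                                   # 1/4 of a 15A circuit
    POLL_SECONDS = 30

    def read_outlet_amps(outlet):
        """Return the measured current on one outlet (stub for your PDU)."""
        raise NotImplementedError("query your PDU here: SNMP, ssh, powerman, ...")

    def switch_outlet_off(outlet):
        """Cut power to one outlet (stub for your PDU)."""
        raise NotImplementedError("command your PDU here")

    while True:
        for customer, outlets in CUSTOMER_OUTLETS.items():
            draw = sum(read_outlet_amps(o) for o in outlets)
            if draw > ALLOTTED_AMPS:
                print("%s at %.2fA > %.2fA, cutting their outlets"
                      % (customer, draw, ALLOTTED_AMPS))
                for o in outlets:
                    switch_outlet_off(o)
        time.sleep(POLL_SECONDS)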


Re: [lopsa-tech] physicalization

2010-06-27 Thread Luke S Crawford
Edward Ned Harvey  writes:

> > From: Luke S Crawford [mailto:l...@prgmr.com]
> > 
> > http://www.supermicro.com/Aplus/system/1U/1042/AS-1042G-TF.cfm
> 
> Incidentally, Luke (or anyone else) ... Do you own any of these systems?  I
> guess I don't care if it's Supermicro, or any other brand.  But the
> supermicro link is here.

> As long as it's AMD processors.

I haven't got a quad-socket box, but I have some single socket and 
dual socket boxes.  

I've got one of these that hasn't hit production yet:
http://www.supermicro.com/Aplus/system/1U/1012/AS-1012G-MTF.cfm  

(and two more that have)

also, I've got a dual socket system using the following motherboard:
http://www.supermicro.com/Aplus/motherboard/Opteron6100/SR56x0/H8DGU-F.cfm

that uses the same power supply (and otherwise is almost exactly like)
http://www.supermicro.com/Aplus/system/2U/2022/AS-2022G-URF.cfm

which will be going into production within the next few days.

All of these are filled with 8 core 115w chips (this is a cost
optimization... if you are willing to pay more than 2x as much,
you get 12 cores for the same power draw.)  

I've got a kill-a-watt (and several remote PDUs, which is what
I normally use, but as we're doing this on pre-production equipment,
I can use the kill-a-watt to make the test the same as yours.)


How many disks do you have in the server when you measure?  or do
you want to do the measurement without disks?   


> Personally, I think this should be done for every server, ever.  I see too
> often, that other admins overload UPS's or circuit breakers, or waste money
> by overprovision cooling or UPS's.  All of these are bad scenarios, and all
> of them are easily avoidable for almost no cost, in terms of time or money.

I think it's also important to monitor power load in real time.  
(I mean, testing for max draw before you put things in production 
as you suggest is /essential/ but I think it's also important to see 
things in real time.   It picks up mistakes like plugging the new server 
into the wrong pdu, and covers your ass if your 'fully loaded' test wasn't 
as good as you thought.)

Generally speaking, when I'm paying for power by the circuit (many data
centers charge you per circuit, regardless of utilization), I attempt to
keep a circuit a tad under 75% utilization.  75% is where my alarms go
off (and it's the max recommended sustained draw on any circuit).
> For the heck of it, here are my "basic instructions," that I think should
> always be habitually completed.
> 
> I use a kill-a-watt.  When the server is new, I write a 3-line python
> script, which is an infinite loop of random number generation.  I launch as
> many of these simultaneously as required, to fully max out all the CPU's.  I
> measure both the VA and W consumption of the server, and record it all in a
> spreadsheet. 

What's the three lines?   I want to use the same three lines you do. 

My understanding is that fully maxing out a CPU is somewhat more complex
than that... you want to exercise more of its instruction set, thus
burnmmx and related programs, though I don't know how relevant burnmmx is
to a modern CPU.  Now, a random number generator is a lot better than
nothing, and probably 'good enough' for what we're doing... but if we
are comparing, I want to use what you use.
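
For reference, my guess at that sort of loop looks like this, one
spinner per core; your actual three lines may well differ:

    import random
    from multiprocessing import Process, cpu_count

    def burn():
        while True:
            random.random()          # keep the core busy with RNG calls

    if __name__ == "__main__":
        procs = [Process(target=burn) for _ in range(cpu_count())]
        for p in procs:
            p.start()
        for p in procs:
            p.join()                 # runs until killed; read the kill-a-watt meanwhile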

> I assume it's 120V, so you know the A by knowing the VA.  I
> keep track of which servers are plugged into which UPS, and how many A are
> available in the circuit feeding the UPS.  (I also measure the "charging"
> current of the UPS.)  I always fluff everything by about 20%.  And I
> estimate approx 3 BTU's cooling are required per W.

I'm smaller than you are, apparently.  The cost of cooling is rolled into
the cost of my power.  Also, as far as I can tell, adding a UPS when
you are already at a datacenter with a good UPS and generator doesn't
add anything besides something else to fail.

> Many times before, I have also maxed out the disk, memory, or network
> utilization, and consistently find that the idle power consumption equals
> the fully active power.  It's only the CPU or GPU that seems to vary the
> power consumption of the box significantly.

I believe that heavy seeks vary the power used by the disk by a 
good bit.  (sequential transfers, not so much.  It's waggling the read-write
head about, as far as I can tell, that varies the power draw.)


-- 
Luke S. Crawford
http://prgmr.com/xen/ -   Hosting for the technically adept
http://nostarch.com/xen.htm   -   We don't assume you are stupid.  


Re: [lopsa-tech] physicalization

2010-06-27 Thread Luke S Crawford
Edward Ned Harvey  writes:

> You seem to be in favor of AMD processors over Intel.  While I'll agree the
> purchase cost is lower, I never trust the rated power spec of any systems,
> and I always measure the actual fully-loaded power consumption of any
> servers I buy.  

not always.  Intel usually wins in situations where per-core performance
matters.

> My lowest power server is a Dell R610 Intel, 4 cores, 8 threads, 16G ram,
> 180W/190VA.
> My highest power server is a Dell 2970 AMD, 4 cores, 4 threads, 8G ram,
> 320w/325VA.

> Maybe I'm just buying the wrong AMD chips or something, but every time I
> test it, I would estimate the AMD processors are 2-4x higher power
> consumption than Intel, for equivalent processing power.

How are you getting 4 cores in a 2970?  Are you using dual-core Opterons?
If you are comparing two dual-core Opterons with one quad-core Xeon,
sure, the Opterons are going to use more power.

Of course, the R610 is also a dual-processor box, so maybe you are
just using dual-socket motherboards with single processors in both
cases?  Either way, that's a /whole lot/ more power than I'd expect
from a single CPU, unless you are using SE chips or something.

Or are these both 2P dual-core systems?

(This is why I think it's confusing to talk about systems by their
Dell or HP model;  usually that gives you a lot less information than
mentioning the CPUs, RAM, and disk involved.  Sure, the Dell and HP
sales guys like to think there is a big difference between Dell
Xeons and HP Xeons, but there's not.)

The other thing to consider is that each disk is going to eat 
around 10 watts, and the 2970 can hold more.  

Measuring at the plug, the boxes I'm currently using for small VPSs,

http://www.supermicro.com/Aplus/system/1U/1012/AS-1012G-MTF.cfm

each eat around 120 watts in production with an 8-core 2GHz AMD,
according to my APC PDU.   These are the 115W rated chips, not
the HE edition.

In my experience, a reasonably chosen AMD system will beat an
equivalent Intel system per-core on power usage.

This was not true in the window after Intel moved to QPI (the Nehalem,
55xx series Xeons) but before AMD came out with the G34 socket chips.
The Nehalem platform was pretty solid... it took a lot of the ideas from
the AMD architecture (QPI looks a whole lot like HyperTransport) and
implemented them better.  It was certainly better than the socket F
Opterons.  But now that the G34 Opterons are out, AMD has regained their lead.

It's interesting because before Nehalem, QPI, and the move to
DDR3 and away from FBDIMMs, AMD socket F was lower power than the
Intel Xeon/FBDIMM setups.  But when Nehalem came out, damn, it looked
like the end for AMD.

Back in the pre-Nehalem days, when Intel used a shared FSB and FBDIMMs,
which often use as much as 10 watts each, you could save clients
more than half their power bill by moving them from low-power Xeons
with FBDIMMs to low-power Xeons with registered ECC DDR2.  (You could use
the same Xeons; this was before Intel started putting the memory
controller on the CPU, so you did need to replace the motherboard.)

Once I was hired to do a blade eval for a client.  The blades used low-power
Xeons, but they also used FBDIMMs.  I got them to order some
pizza boxes from Dell (the DCS division?) that used the same low-power
Xeons and registered ECC DDR2 rather than FBDIMMs.  The pizza
boxes blew the blades out of the water.




Re: [lopsa-tech] physicalization

2010-06-26 Thread Luke S Crawford
Edward Ned Harvey  writes:

> We're talking a whole new scale here.  512 atom processors in a 10u
> formfactor, consuming 2kw, for $200 each.  Looking quickly at dell, it's not
> difficult to get down to $200 per core, 4 cores in 1U.  But you're going to
> fit at most 40 cores into 10u, and it will still consume 2kw of power at
> that scale.  


http://www.supermicro.com/Aplus/system/1U/1042/AS-1042G-TF.cfm

that thing supports four 12-core Opterons in 1U.  48 cores per U, and something
like 6.6 watts per core, assuming the 80 watt ACP.  If the Atom is 4 watts per
core, I bet you are getting more compute per watt out of those Opterons.
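
Spelled out, with the 80 watt ACP per socket as the assumption doing all
the work:

    sockets, cores_per_socket, acp_watts = 4, 12, 80
    cores = sockets * cores_per_socket                    # 48 cores in 1U
    print("%.1f W per core, CPUs only" % (sockets * acp_watts / float(cores)))
    # roughly the 6.6-6.7 W/core figure above, before RAM, disks, fans, PSU losses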

It's not expensive, either.  I have a similar system right now.  Now,
I can't get very much more than 3kW usable per rack around here anyhow,
so density doesn't matter that much for me, and I'm fairly I/O bound,
so I like having lots of disks, so I opted for a dual-socket system with
8 disks:

http://www.supermicro.com/Aplus/system/2U/2022/AS-2022G-URF.cfm

Now, I'm a cheap bastard, so I found some of those chassis used, and
I bought a new power supply, new motherboard, and new backplane.  Total
cost for those parts?  Under six hundred bucks.  Add in the cost of RAM
(registered ECC DDR3 is around $130 for a 4GiB stick on Newegg Business
right now) and CPU (the 12-core seems to start around $700 each, and the
8-core 2GHz around $300), and you can see that my costs blow SeaMicro
away, even if I charge $500 for the hour and a half it takes me to
put this together and set up / deal with burn-in.  (BTW, if anyone wants
me to assemble them one, order me the parts and I'll do it for $500.)


> The root concept is:  Break away from the assumption of xeon or equivalent
> amd processors.  Jump down to the super small, super low-power, super cheap
> class of processors, atom, arm, etc... and use them to beat the xeons for
> some situations, such as distributed work load, or server virtualization.

This is a good evaluation to step back and make every few years.  
It's especially important for me, because my customers will actually
pay me more for a dedicated server than for a virtual server of the
same specifications.  

There certainly is an 'optimal size' for servers, and if you go over
or under that you end up paying more per compute resource than
you would otherwise, so if you are as cost conscious as I am, it's
/very important/ to remain aware of where this optimal size is.

This optimal size does vary a lot if you are, say, CPU bound vs. RAM
bound vs. I/O bound.  But I think the Atoms don't really compete
with the Opterons in any way (unless you are optimizing for /isolation/
rather than performance).

> The cost to purchase is approximately equal (as far as I can tell) but the
> power and density are improved by an order of magnitude each, compared
> against "standard" servers.

The capital cost of the Atoms only looks good if you think one Atom
core is worth one Xeon or one Opteron core, which is simply not the case.

If someone comes out with an ARM-based board with socketed RAM, I'll
re-run my evaluation.  But for now, as far as I can tell, AMD has
the Atom whooped in terms of capital costs, and competes in terms of
power costs.   (Xeon, as usual, takes the crown if you have lots of money
and power, and you care mostly about per-core performance.)


Re: [lopsa-tech] physicalization

2010-06-26 Thread Luke S Crawford
Edward Ned Harvey  writes:
> The specific requirement we're solving is a workload which is highly
> parallelizable.  We're currently paying Amazon EC2, but finding it's not
> very economical.  So we're exploring alternatives, hoping to find a way to
> run these parallelized jobs more economically.  It is also ideal to keep the
> workload in-house, so we can eliminate the security concern about Amazon
> employees having access to our data and so forth.


You jumped from "EC2 is more money than I want to pay" to
"let's use a bunch of tiny computers rather than fewer larger ones."

I think this is probably the wrong choice.  The thing is, EC2 is expensive
because they don't have serious price competition.   The same is true of
SeaMicro.   If you buy standard servers, you can have competition...
if you want.  Me, I buy parts and assemble, which gives me a discount,
but, uh, if you assume that the HP, Dell, and Rackable products
are all about the same (assuming they have the same CPU/RAM/disk)
and you get competitive bids, you can do fairly well that way, too.

A fully loaded SeaMicro rack is about $140K, right? Before disk? For 1024GiB
RAM and 512 CPUs.  With my current setup, I end up paying about $1900 for
32GiB RAM and 8 (/much/ more powerful) CPUs before disk.  So, I'd need 32
of those puppies, at a much more reasonable $60,800.  It would give me
half as many CPUs, but, uh, from my experience an Atom core is less than
half an Opteron core.

And my price includes ECC RAM, something the SeaMicro doesn't support,
which makes a whole lot of problems largely go away.

How much power would that eat?  In my setup, we'd be talking about 3840
watts; each one of those single-socket 8-core boxes eats about one amp of
120V, give or take.  They say 4W per server, so that'd be 2048 watts for
their solution; so yeah, it will take quite a while to make up the price
difference.
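
Putting that comparison in one place (hardware figures as estimated
above; the $0.15/kWh for power plus cooling is made up):

    seamicro_cost, seamicro_watts = 140000.0, 2048.0    # 512 Atom cores, 1024GiB RAM
    box_cost, box_watts, boxes = 1900.0, 120.0, 32      # 32 x 8-core Opteron, 32GiB each
    diy_cost, diy_watts = box_cost * boxes, box_watts * boxes

    price_gap = seamicro_cost - diy_cost                # up-front difference
    extra_kw = (diy_watts - seamicro_watts) / 1000.0    # extra draw of the DIY route
    extra_per_month = extra_kw * 24 * 30 * 0.15
    print("DIY: $%.0f / %.0fW   SeaMicro: $%.0f / %.0fW"
          % (diy_cost, diy_watts, seamicro_cost, seamicro_watts))
    print("power penalty ~$%.0f/month, so ~%.0f months to eat the $%.0f gap"
          % (extra_per_month, price_gap / extra_per_month, price_gap))

At those rates the power difference is real, but it takes decades to pay
back the price difference.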

My point is just that with the current market realities, buying an
8-core/16-core Opteron box with 32 or 64GiB RAM is going to give you
more performance at a lower price point than any number of Atom servers.

If you do care about cost per core, or cores per watt, look at the new
AMD stuff.  The socket G34 stuff is really cheap and works well.
I pay $300 for 8 2.0GHz cores in a single socket.

The new opteron 4100 series stuff has just come out, so I don't have
any direct experience, but it looks like it might be even cheaper
(and lower power) per core.  


Re: [lopsa-tech] Question on Dell R905 and third party memory (WARNING)

2010-05-25 Thread Luke S Crawford
Edward Ned Harvey  writes:

> If you assemble all these "standard compatible" components from various
> manufacturers, each one will only warrant their own component.  You have a
> problem, you call up Seagate, the drive passes the diag, so they tell you
> it's your HBA.  You call Intel or LSI or Adaptec, the HBA passes their diag,
> so they say it's the drive.

Funny, my experience sysadmining (other people's) Dell kit,
entirely consisting of Dell or HP parts, has been that not only is
the warranty quite a bit shorter than the warranty on the individual
parts, but if you have an intermittent problem that passes the
'Dell diagnostic', even during that warranty period the Dell folks won't
fix it until you figure out and prove exactly what the problem is.

I've probably spent more than six months of my life trying to figure out
problems with under-warranty Dell servers that Dell wouldn't help us with
until I proved exactly what the problem was.

Also, if you max out the RAM, you usually end up paying about 2x per
system going with Dell over assembling Supermicros.  That pays for
a /whole lot/ of spares for the occasional "I can't figure out what
is wrong with this server, throw it out the window and pop in a new one."

> You call up Sun or HP or Dell, with all of your components being one-name
> branded, and they assume support ownership for the system as a whole.  Not
> just one component. And since they've got thousands of units deployed, they
> don't have weird compatibility glitches like this anyway.

You do have a point with the 'weird compatibility glitches'; Dell does
help there, at least for servers that are fairly new.

The big problem is that I find that my clients who use Dell often
end up keeping servers around for 4, 5, sometimes even 6 or 7 years,
long past the time when I've given mine to my little brother, an
employee, or eBay.  (I tend to toss a server after 3 years.)
Usually getting parts out of Dell for a server more than 5 years old is
impossible, or at least really, really expensive.   Once I wanted to
upgrade the RAM in an old workstation; they wanted something like
a grand for 512MiB of Rambus.

The thing is, if you are paying twice as much per server as I am, the
"pay for new hardware vs. pay for sysadmin time and power to run old junk"
equation tips in the other direction, but that's another source of
sysadmin frustration.

I would have saved many hours last month if I had just ordered from Dell.
But I'd also have paid twice as much; seeing as my company spends more
than twice as much on hardware as it does on my salary, it's worth it
for me to figure out what is going on.   (And I really only have to
pay this cost once every time I change my architecture.  Now that I know
what's going on, I've got a few years of trouble-free ordering of cheap
Supermicro systems.)

-- 
Luke S. Crawford
http://prgmr.com/xen/ -   Hosting for the technically adept
http://nostarch.com/xen.htm   -   We don't assume you are stupid.  


Re: [lopsa-tech] Question on Dell R905 and third party memory

2010-05-24 Thread Luke S Crawford
Rick Thomas  writes:

> Interesting...
> 
> I'm in the process of ordering a pair of R905's to run Eucalyptus for  
> a cloud-computing lab for the CC Department here at Rutgers.  I'm  
> ordering them with Four Quad Core Opteron 2.2GHz (total of 16 cores)  
> but "only"[*] 16 GB of RAM (8 x 2GB sticks) with the intention of  
> adding Crucial (or other reputable 3rd-party) RAM after the project  
> gets started and we can gauge how much RAM we "really" want to put on  
> it.  The theory is that adding processor chips is tricky but adding  
> RAM DIMMs is easy.
> 
> That's the theory, anyway.  Ski's experience gives me reason to  
> question that theory.  Anybody got a better theory, before we go and  
> spend the money?

If you have a good ESD setup, and you are willing to wear a strap,
it's all pretty easy.  I buy SuperMicro SuperServers and plug in everything.
It's maybe 20 minutes of work (then a day or two of burn-in, but that's just
loud noise in the garage).

If you don't have a good ESD setup, or if you or your techs are not willing
to use a wrist strap, yeah, let Dell do it all.  You pay a lot, but your
reliability will go down measurably if you don't use ESD protection.

On the other hand, with a little research, you should be able to find
RAM that fits in your server.  That shouldn't be the hard part.

> BTW:  On the "customization" page for the R905, Dell has the warning  
> "Memory configurations with greater than 4 DIMMs per processor cause  
> memory to clock down to 533 MHz" (It's normally rated at 667 MHz).  I  
> don't know if this relates to Ski's problem, but it caught my eye  
> anyway...

This is true of almost everything.   Usually it's automatic.
But last month I bought some of those new AMD G34 12- and 8-core
boxes.  I filled them up with 1333 RAM, and the goddamn things crashed.
A lot.   Well, I bought two servers, each with 32GiB of RAM of a different
brand from different vendors (Hynix from Central Computers, and
Kingston from Newegg), so I was pretty certain the RAM wasn't bad.

I bashed my head against the wall for a while until I noticed that
the RAM was actually running at 1333.   I clocked the RAM back, and
the systems were quite happy.

Figuring out compatible RAM for your system can be pretty tricky until
you become familiar with your architecture.  The Kingston website
is an excellent resource: you type in your server or motherboard
model and it will give you a list of compatible Kingston RAM, and
usually has a word or two about the motherboard and its quirks.

(My experience has been that this list works just fine.  I've never
had them give me RAM that didn't work.)

I think Crucial and Corsair have similar tools.

The problem with the server manufacturers' 'tested RAM' lists is that they
test RAM when the part comes out and then seem to lose interest,
so especially with AMD systems, where a motherboard may remain current
for several years with the addition of BIOS upgrades to support newer
CPUs, the manufacturer's list of approved memory doesn't help much.

___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] ILOM recommendations for x86 servers?

2010-05-06 Thread Luke S Crawford
Doug Hughes  writes:
>  If you've got a decent, embedded KVM you can do a lot of things that 
> you simply cannot do with a serial or even in front of the machine 
> including:
> * Viewing stats from FRUs like fan and CPU temperatures
> * Forwarding alarms from same into your event management system
> * multiple levels of permission for accessing different components

Yes, those things can be handy if dmidecode or lmstats doesn't give you
that info when the box is running.  

> * capabilities for upgrading BIOS and or BMC when it needs it
> * ability to access a virtual CD drive either from an ISO image on your 

These things can be done via PXE.

> * ability to see graphical things that might show up on the console 
> (possibly more important if your box is windows - based)

This is very important, sometimes, and if you need graphics out of band,
there is no substitute for KVM over IP.

> What you don't get from a serial console that you do get from sitting in 
> front of the machine.
> * reset/power cycle of machine

Well, you /need/ metering PDUs, right?  Switchable PDUs are not that much
more expensive than metering PDUs, and they solve that problem.
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] ILOM recommendations for x86 servers?

2010-05-06 Thread Luke S Crawford
sky...@cs.earlham.edu writes:

> OSHA and likely your state's labor department regulate the maximum noise
> level a worker can be exposed to without protection. You ought to be able
> to get your employer to provide you either ear plugs or something more
> permanent.

I've got earplugs and I offer 'em to everyone but nobody uses them.  
Not that I blame them, I don't use them myself, and I probably do 90% of
the colo monkey work by myself.  

The other day at the hardware store I noticed noise-dampening headphones;
they looked like the enclosed ear protectors and claimed 27dB of noise
reduction, but they had a line-in where you could plug in your MP3 player.
I should give something like that a go.
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] ILOM recommendations for x86 servers?

2010-05-05 Thread Luke S Crawford
Brian Mathis  writes:

> I am using Avocent KVM over IP and it works just fine.  I am using it
> to access text-mode Linux consoles and a few Windows servers.
> Sometimes the mouse doesn't sync up, but that's the nature of how
> these things work.  I think the people who complain about them somehow
> expect them to work exactly as if they are sitting at the physical
> console, which will never be the case.

Yes, this is what I'm trying to say.  KVM over IP can never be expected to
be as good as sitting in front of the box, the roar of the data center cooling
system slowly driving you mad.  Except the thing is, a serial console is 
/better/ than actually sitting at the console, because I have logging and 
other capabilities. A KVM over IP is /worse/  than sitting at the console.

Of course, if you are running Windows or otherwise need an out-of-band GUI,
then yes, you must put up with the quirks of a KVM over IP.  I'm just saying,
if all you need is a text console, plain serial is by far superior.
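
For the logging piece, if all you have is an IPMI-style BMC and no separate
console server, something like this is enough (a sketch; the BMC address and
credentials are made up, and SOL has to be enabled on the BMC first):

  # capture the serial-over-LAN session to a log file you can grep later
  script -a -f /var/log/consoles/db1.log \
      -c "ipmitool -I lanplus -H 10.0.0.5 -U admin -P secret sol activate"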
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] ILOM recommendations for x86 servers?

2010-05-05 Thread Luke S Crawford
Steven Kurylo  writes:

> While the DRACs on my 2650's are old (which may excuse them), I find
> they're horrible compared to my HP iLo's.

This was also my experience.  (I don't own any HP or Dell kit, but
I've worked for clients with many of both.)  

> They're quite slow.  Sometimes the DRACs serial and video redirect
> become read-only.  I've never figured what causes it.  They hang more
> often, though I can't recall off hand I'm sure there's a way to reboot
> them without pulling power.

You can solve most of the DRAC's problems by redirecting the Linux console
to the DRAC serial port and using SSH to access the DRAC.

A separate serial console and rebooter setup is still cheaper and more
reliable, though.
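
If you haven't done the redirect before, it's roughly this (a sketch, assuming
the DRAC serial shows up as the second serial port, ttyS1, at 57600 baud;
check what your particular box actually does):

  # appended to the kernel line in /boot/grub/grub.conf
  console=tty0 console=ttyS1,57600n8

  # /etc/inittab: run a getty on that port so you get a login prompt
  S1:2345:respawn:/sbin/agetty ttyS1 57600 vt100

Then SSH to the DRAC and run 'connect com2' (or whatever your DRAC firmware
calls it) to get the OS console.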

I have never seen a KVM over IP that worked very well, but the HP
stuff was noticeably more reliable than the Dell stuff.  Either way,
you are best off using serial (either through the 'lights out' card
or through a separate serial console server).
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] ILOM recommendations for x86 servers?

2010-05-05 Thread Luke S Crawford
Hugh Brown  writes:
> I'm starting to reconsider favourite hardware vendors.  ("I only have
> one rose to give away...")  I'm looking for advice on ILOMs, and in
> particular console access/Serial-Over-LAN.

I use an OpenGear brand console server and Baytech rebooters; it's really
nice, and /much/ cheaper per port than any ILOM I've seen.  The other
advantage is that it's compatible with any vendor of server hardware,
so you can always go with what is cheapest or otherwise best.  You don't
need to rewrite your management scripts every time you switch
hardware vendors.

___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] How to Virtualize Sparc Solaris 8

2010-02-25 Thread Luke S Crawford
Edward Ned Harvey  writes:

> Working at semiconductor companies, it seems to be a common request, that we
> need to obtain some sparc solaris 8 machine to run a memory compiler.  In
> the past, I've solved this by begging local IT depts for their garbage, and
> building one of these machines.  I'd very much like to improve this.
> 
> Does anybody know of any way to create a sparc solaris 8 machine modern day?
> Whether virtual or otherwise ... Preferably virtual.


I'm pretty sure QEMU has a SPARC emulator that you could install
Solaris 8 on.  You'd need to get ahold of some install media or
something, though.  I've never tried it myself; a buddy of
mine has an 8-CPU Enterprise.
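
If you want to try it, the invocation is roughly this (untested on my part;
the machine type and memory size are guesses, and you still need legitimate
Solaris 8 SPARC media):

  # make a disk image and boot the installer on an emulated SPARCstation
  qemu-img create -f qcow2 sol8.img 9G
  qemu-system-sparc -M SS-5 -m 256 -hda sol8.img -cdrom sol-8-sparc.iso -boot d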


___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] Small Data Center Switches

2010-01-29 Thread Luke S Crawford
Chris McEniry  writes:

> Anyone have a recommendation - manufacturer and class - for a small
> datacenter (couple dozen servers, 100Mb of internet traffic) switch
> infrastructure?
> 
> I'm biased for Cisco because of history, but I'm now looking at a
> startup's budget.  We have a bsd firewall in front of everything, and
> the switch infrastructure is just doing layer 2.  We currently have some
> Procurves in place, but they're showing drops (DiscardIn mostly -
> usually < 1% but spiked up to 10% at one point) even though they're not
> even coming close to the rated specs for traffic.  And it's just drops -
> I got gooseeggs for other error counters.  Am I naive to think that the
> they should be showing clean (systems are showing clean)? or am I just
> using the wrong tool for the job?

I had a similar problem with my own ProCurves not too long ago.
After pulling out a bunch of hair and cursing my large layer 2 (I have
3 racks on one flat VLAN), I figured out the problem was that I'm a moron
and I had assigned two virtual machines the same MAC address.

(We discovered this because the problem got worse when one of the systems
in question started getting a lot of traffic, and it brought the network
to its knees...  but we had symptoms as you describe, the mysterious
<1% drops at the switch, for several months before, and after we sorted
out the dupe MAC problem, that problem went away.)


I don't know if that's your problem or not, but especially now that
everyone and his brother is using virtualization, MAC address conflicts
are quite a bit more common than they used to be.
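
If you're on Xen and keep domU configs as files, a quick sanity check like
this would have saved me months (a sketch; it assumes your configs live in
/etc/xen/*.cfg and use mac= entries in the vif lines):

  # print any MAC address that appears in more than one guest config
  grep -hio 'mac=[0-9a-f:]*' /etc/xen/*.cfg | tr 'A-F' 'a-f' | sort | uniq -d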


-- 
Luke S. Crawford
http://prgmr.com/xen/ -   Hosting for the technically adept
http://nostarch.com/xen.htm   -   We don't assume you are stupid.  
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] Novell

2009-12-11 Thread Luke S Crawford
Richard Chycoski  writes:

> Please, it's time to 'get real'.
> 
> Sure, some of these companies offend my sensibilities when they
> trample existing standards - but just avoiding them does not fix the
> problem.

Sure it does.  Well, you avoiding them alone does not solve the
problem, but if enough of us do so, then yes, that company goes under,
and the other companies hopefully become fearful, maybe fearful enough
to respect their customers and to give us a useful product at a reasonable
price.

This is part of the reason for lists like this, so we can talk about
what works and what does not work.  

> There are truly moral issues for not dealing with a person, company,
> organisation, government, whatever.

It's not a matter of morals, it's a matter of self-interest.
Screw me once, shame on you; screw me twice, well, you can't screw
me twice because I'm not doing business with you anymore.

> If I attempted to avoid everyone who had ever offended my
> sensibilities in any way, I'd never get out of bed in the morning -
> after all, I offend myself sometimes!

Right, they say the best strategy is 'tit for tat with forgiveness'
but sharing information is essential to a free and fair market.  

Would you buy a DB server if the license terms said you could not share
the results of tests you ran against it?  I wouldn't.  But it has
nothing to do with being a commie, and everything to do with being
an informed consumer.

The biggest advantage of open-source software is that even if the
original entity that wrote the software starts engaging in practices
that hurt you, the customer, the codebase can be forked.  worst
comes to worst, I can hire someone to write security updates and 
maintain it myself. 

> However, please don't expect everyone to follow your code for avoiding
> anything non-completely-open, non-standard-fanatic,
> non-overly-corporate. The rest of us have to live in the real world,
> and make compromises of some of these issues to get:

I think all of us are trying to make money here.  I'm closer to 
the dollars than most, as the company is mine, so the dollars I
spend on software come directly out of my own paycheque.  
hell, I essentially am a corporation.

I don't mind buying software, if it earns or saves me more
money than it costs, I just want a good value for my money.
(and in my space, most of the good stuff is free.  I recognize
that this is different in other spaces.)  

I'm not saying you should never pay for closed-source software;
I am saying that it is important to share your experiences
with your peers when a company screws you.  I'm also saying
that if a company develops a reputation for screwing developers
or sysadmins, that is also a valid thing to express.  Reputations
are important things in markets;  if reputations did not
matter, then 'always defect' would be the dominant strategy.



___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] Novell

2009-12-11 Thread Luke S Crawford
Edward Ned Harvey  writes:

> MS has some products that are best in class (particularly AD, and I also
> love exchange), and some that are pure garbage.
> 
> Google has some products that are best in class (I personally love google
> code and google earth), and some that are pure garbage.
> 
> And Apple is the same.  And Dell.  And Adobe.  And every one of these major
> companies.


See, I think it's important to call out companies when they do something
that goes against the interests of their users.  I think it is
valid to avoid companies that have an egregious history of doing things that
harm the interests of users similar to yourself.  It's essential
for a rational market; otherwise these companies will continue to abuse
you.
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] Novell

2009-12-11 Thread Luke S Crawford
Adam Tauno Williams  writes:

> On Thu, 2009-12-10 at 22:01 -0700, Yves Dorfsman wrote:
> > Richard Chycoski wrote:
> > > AD is solid, scalable, and well supported. There *are* some gotchas if 
> > > you are looking for 100% LDAP compatibility, but for authc/authz (login, 
> > > groups, etc.) nothing else performs quite as well. (I do hope that Open 
> > > LDAP catches up!)
> > What is the advantage of going ldap against AD vs. using kerberos ?
> 
> AD is Kerberos.  LDAP and Kerberos are not the same thing
> (identification vs. authorization).  You need LDAP + Kerberos or you
> need AD.


You can authenticate straight to LDAP without using Kerberos, if you
want.  Kerberos is nicer, though, as you get your ticket-granting
ticket, and you don't need to re-authenticate for 8 hours if you
have everything set up right.  Authenticating directly to LDAP is
very much like a 'slightly more secure NIS' - which is to say,
you still need to type your password every time you log in.

(There's also a set of patches for OpenSSH, OpenSSH-LPK, that
allow you to store a user's public key in LDAP rather than in
~/.ssh/authorized_keys, which is better than passwords, if
Kerberos is not an option.  Kerberos is probably the best
tool for this job, though.)
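
With the LPK patches, sshd pulls the key out of the directory for you; you can
eyeball what's stored with something like this (the hostname and DNs are made
up, and the attribute name comes from the LPK schema):

  ldapsearch -x -H ldap://ldap.example.com \
      -b 'ou=People,dc=example,dc=com' '(uid=jdoe)' sshPublicKey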
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] Novell

2009-12-11 Thread Luke S Crawford
Yves Dorfsman  writes:

> Richard Chycoski wrote:
> 
> 
> > AD is solid, scalable, and well supported. There *are* some gotchas if 
> > you are looking for 100% LDAP compatibility, but for authc/authz (login, 
> > groups, etc.) nothing else performs quite as well. (I do hope that Open 
> > LDAP catches up!)
> 
> What is the advantage of going ldap against AD vs. using kerberos ?

OpenLDAP/Kerberos works swimmingly on Linux and Mac, and has
cheap failover options; I've not gotten a non-AD LDAP/Kerberos
type system working to auth Windows clients, so I guess the advantage
of AD is that you can use it on Windows clients as well as Linux
clients.


___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] Platform bashing (was Re: Email naming convention)

2009-10-25 Thread Luke S Crawford
Elizabeth Schwartz  writes:

> Just asking for a satisfied large corporate customer will knock a lot
> of cheapo's out of the water...

Yes, and that is a great strategy if you want to spend more money.

You are essentially putting a 'marketing acumen' test in there.
Not only does the provider need a large customer, they need to negotiate
a deal with them whereby they can say they are a customer.

Now, there are reasons to ask about capacity if you need a lot.
If you need 100 racks, realistically, I'm not going to be very good
for you.  I could probably do it, but it'd be a struggle for me,
there'd be delays, etc., so capacity does matter.

But the logo on the page doesn't really tell you anything about capacity.
It just means that they know someone or are good enough at negotiation to
get permission to use the logo.  In my case, I bet if I worked at it, I
could get some large company to use me for SysAdmin training, or off-site
monitoring, or something else that didn't require much iron, and
maybe even get a logo; but that still wouldn't mean I could field
100 racks in any reasonable period of time.

I would argue that bashing cheap providers for being cheap is worse
(that is, more self-destructive)  than bashing microsoft for being
microsoft. If you take price as an indicator of quality, you will 
find that there are many low-quality providers who are perfectly 
happy to charge you premium prices.  

--
Luke S. Crawford
http://prgmr.com/xen/ -   Hosting for the technically adept
http://nostarch.com/xen.htm   -   We don't assume you are stupid.  
 
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] Virtualization: Death of Commodity Hardware?

2009-09-19 Thread Luke S Crawford
da...@lang.hm writes:

> if that is your problem, just create chroot sandboxes with your
> different services in them. that let's you get most of the isolation
> of VMs, but with non of the CPU/ram overhead.

Yup.  I started selling FreeBSD jails, and they work really well if everyone
cooperates, but one jail can thrash everyone's cache.  It was a lot of
sysadmin work to make service semi-reasonable when you have untrusted users
(and you'd have to keep kicking people off who try to do 'big' things on
a system sized for 'small' things).

It is a happy medium, often, when you control everything, but there is 
a tradeoff.


> > How come anyone still listens to sales?  I mean, the last person I
> > am going to take advice from is someone who has an obvious interest
> > in deceiving me.
> 
> as far as management goes, the rational is that they are the experts
> who know more than their staff does (on this particular topic anyway)

Yes, but it doesn't matter how much they know if they are not acting in
your interest.


-- 
Luke S. Crawford
http://prgmr.com/xen/ -   Hosting for the technically adept
http://nostarch.com/xen.htm   -   We don't assume you are stupid.  
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] Virtualization: Death of Commodity Hardware?

2009-09-19 Thread Luke S Crawford
Lamont Granquist  writes:

> > I've seen a lot of IT managers jump on the virtualizaiton bandwagon, but 
> > then 
> > when forced to defend their numbers have been unable to do so.
> 
> yep.
> 
> that's basically my problem.
> 
> i'm also not seeing the justification for the associated infrastructure 
> for example, why have 4 GigE drops per server, when we have 400 servers in 
> a business unit that do around 600-800 Mbps peak across every interface 
> and every server?  why have a lot of shared storage when we basically do 
> no i/o (as well as no CPU and no network).


Yeah.  Virtualization is a tool for making larger servers look like
a number of smaller servers.  It only makes sense if you need servers
that are smaller than the optimal server size.  (By optimal server size
I mean: if you buy a dual-core server with 2GiB RAM and co-locate it, you
are paying a lot more per unit of CPU/RAM than if you buy an 8-core
32GiB RAM server.  But then, if you buy a 32-core, 128GiB RAM system, you
probably go back to paying more per CPU/RAM unit.  There's a certain
size of server that is the cheapest to buy/host.  Virtualization
makes sense if that server is larger and more powerful than the servers you
want to use.)

Usually the super-giant servers make no more sense than filling up my
rack with Atoms.  (Atom boards actually have horrible performance per
watt, when you consider that they eat about 50W each, are limited to
2GiB of non-ECC RAM, and have absolutely pathetic CPU performance.  If you
could get an Atom with a chipset that only ate 5 or 10 watts, like you can
on a netbook, they'd make a lot of sense in a lot of places.  But with the
945GC chipset, they are garbage.)

> > one thing that a lot of people initially see is 'hey, we are reducing the 
> > number of servers that we run, so we will also save a huge amount of 
> > sysadmin 
> > time', what they don't recognize is that you have the same number of 
> > systems 
> > to patch (plus the new host systems that you didn't have before), so unless 
> > you implement management software at the same time your time requriements 
> > go 
> > up, not down.
> 
> yeah, well since i've got a strong configuration management background 
> that is one thing that is obviously wrong.

Many people outside the field think that most SysAdmin time is spent monkeying
with hardware.   But there is some truth to the 'virtualization can save
you sysadmin time' marketing bulletpoint, assuming you have a big 
'maybe' or 'sometimes' in there.   

If you really do need a whole lot of trivially sized services, virtualization
and giving each small service its own 'server' can sometimes be less work
than making your mailserver config play nice with the apache config and the
fileserver config and the db config on the one giant monster server.
In that case, virtualization can also make it easier to make the db
server the problem of Bob in accounting without letting him have root on
your webserver, and let the fileserver be the problem of Jane in IT without
letting her have root on the developers' git server.  Granted, you can do
the same thing by just running different services under different users, but
that's easier with some applications than with others.

> i spent quite a bit of time beating up some citrix reps drilling down into 
> what they can do for box builds and configuration management, and 
> virtualization's approach was basically to increase the velocity of being 
> able to do the disk duping approach to deployment and configuration 
> management which does nothing for life-cycle configuration management 
> (e.g. changing /etc/resolv.conf on every existing deployed server).  i've 
> seen virtualization pitched in CTO/CEO-focused materials as "solving" 
> configuration management, however.

How come anyone still listens to sales?  I mean, the last person I 
am going to take advice from is someone who has an obvious interest 
in deceiving me.  

But yes, I find this 'everything is an image' approach that the
virtualization folks seem to be taking to be very wrong.  I think the
only rational way to manage virtual machines is to manage them the same
way you manage your physical machines.  Cobbler/koan is actually pretty
good for this sort of thing.

For prgmr.com, I'm actually working on a paravirtualized netboot environment,
so that customers can manage physical and virtual servers (hopefully
even physical servers that aren't hosted with me)  using the same tools.
Anything else, I think, is insanity--  It's just plain stupid to 
keep adding more tiny virtuals to your cluster once you need the power
of multiple physical boxes.  

-- 
Luke S. Crawford
http://prgmr.c

Re: [lopsa-tech] Virtualization: Death of Commodity Hardware?

2009-09-19 Thread Luke S Crawford
Lamont Granquist  writes:

> On Tue, 15 Sep 2009, Daniel Pittman wrote:
> > For what it is worth, I have found that "container" solutions often scale
> > better to the actual workload than "pretend hardware" virtualization does,
> > because it shares RAM between the containers much more easily for these 
> > tiny,
> > almost nothing, applications.
> 
> Well, BackInMyDay(tm), we use to just run multiple apps on different ports 
> and let this novel thing called a "process scheduler" let the server do 
> two different tasks at once.

Yup.  Much lower overhead than virtualizing stuff, in terms of CPU
cycles and RAM.  Oftentimes, more sysadmin work, though.

> i'm still a bit unconvinced that virtualization really solves any 
> fundamental problems over just managing multiple running software 
> instances on the same server.

The thing is, if I have 8 2GiB P4 servers running various flavors of
Linux, doing different tasks, I can consolidate them all onto one of my
new 8-core 32GiB RAM boxes, and save a lot on power.  If I do that without
virtualization, I will save some CPU power/RAM, but it will be more work.
I've gotta move everything to running on the same Linux version, and
I've gotta make sure the sendmail config for the apache server doesn't
step on the sendmail config for the mail server, etc.
Doing it with virtualization is usually less sysadmin time, but it wastes
some CPU/RAM.

A previous poster suggested OS-level virtualization like OpenVZ; that
is sort of a middle-of-the-road solution between consolidating the
old-fashioned way and fully virtualizing.  It's more efficient than
using paravirtualization or full virtualization in terms of CPU/RAM,
but it's also more work.  All servers need to use the same kernel,
and if one machine eats a lot more resources than it should, you
really need sysadmin intervention.  (Personally, I prefer paravirtualization
for this reason.  I host untrusted users, and I don't mind throwing away
some RAM if it means I don't have to worry about it when some joker
decides to run mprime, or when someone tries to run a giant webapp on
the smallest plan.)

You can further tweak OpenVZ so that it errs on the side of automatically
killing heavy processes rather than letting them affect the rest of the
system, but for partitioning, it just can't touch a paravirtualized system.

I've had several consulting gigs where I show up and the client is 
expecting a big performance gain out of using virtualization over
consolidating the old fashioned way.   The gig usually ends pretty fast
when I explain that just isn't how it works.  
 
-- 
Luke S. Crawford
http://prgmr.com/xen/ -   Hosting for the technically adept
http://nostarch.com/xen.htm   -   We don't assume you are stupid.  
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] software hard disk recovery

2009-09-16 Thread Luke S Crawford
Edward Ned Harvey  writes:

> > Hm.   One thing you can try that has worked for me is to freeze the
> > drive.
> > stick it in a ziplock and put it in the freezer for a few hours.   then
> > plug it in and try to get the data you need immediately.
> 
> I've heard of that trick before - although - I think it mostly works if the
> circuitry is the failing component.  And it may be, who knows.  

My understanding was that if the bearings were failing in some way, freezing
it subtly changes things so that it works for a while.  

But yeah, no matter how it worked, it did save my ass in production, once.

http://xenophilia.xen.prgmr.com/  

(never ever give a customer non-redundant disk, even if you tell them it's not
redundant.)  

___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] software hard disk recovery

2009-09-16 Thread Luke S Crawford
Edward Ned Harvey  writes:

> One of my friends windows machine died today - boots up only half way
> through the windows splash screen and freezes.  There is one file he wants
> to recover out of it, but it seems to be suffering from hardware failure
> because attaching the disk via USB enclosure to another computer will only
> let him load the USB mass storage device, and won't go any further than
> that.


Hm.  One thing you can try that has worked for me is to freeze the drive.
Stick it in a ziplock and put it in the freezer for a few hours, then
plug it in and try to get the data you need immediately.
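
If it does spin up, don't go poking at individual files; image the whole
drive while it's still talking, with something like this (device and paths
are examples; GNU ddrescue's log file lets you resume if the drive dies
again mid-copy):

  # first pass: grab everything readable, skip the slow error areas for now
  ddrescue -n /dev/sdb /mnt/rescue/sdb.img /mnt/rescue/sdb.log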
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] Virtualization: Death of Commodity Hardware?

2009-09-14 Thread Luke S Crawford
Lamont Granquist  writes:

> Here we were back in 2001-2003 buying up cheap 1U dual proc ~4GB RAM 
> server with a 1-4 drives in them and putting them all over the datacenter. 
> It all made a lot of sense in the push to having lots of smaller, cheaper 
> components.
> 
> Now with virtualization it seems we're back to buying big Iron again.  I'm 
> seeing a lot more 128GB RAM 8-core servers with 4 GigE drops and FC 
> attatched storage to SANs.
> 
> Has anyone really costed this out to figure out what makes sense here?

I have.  And my competitors have as well.  I use 32GiB RAM / 8-core servers:
$1500 in parts, if you cheap out on local disk.  As far as I can tell,
this is also what my competition uses.

> Instead I'm seeing those apps moved onto BigIron and BigStorage with an 
> awful lot of new capex and licensing spending on VMware.  So where, 
> exactly are the cost savings?

> So, did I just turn into a dinosaur in the past 5 years and IT has moved 
> entirely back to large servers and expensive storage -- or can someone 
> make sense of the current state of commodity vs. BigIron for me?

Uh, talk to an 'enterprise' salesman.  My guess is that it is just another
way for the salesguys to soak large companies.  

Shared storage is really, really nice.  But local storage is pretty good, and 
is a small fraction of the cost.  

> It definitely seems absurd that the most efficient way to buy CPU these 
> days is 8-core servers when there's so many apps that only use about 1% of 
> a core that we have to support.  Without virtualization that becomes a 
> huge waste.  In order to utilize that CPU efficiently, you need to run 
> many smaller images.  Because of software bloat, you need big RAM servers 
> now.  Then when you look at the potentially bursty I/O needs of the 
> server, you go with expensive storage arrays to get the IOPS and require 
> fibre channel, and now that you have the I/O to drive the network, you 
> need 4+ GigE drops per box.

The most efficient way to run servers now is to buy 32GiB/64GiB of RAM, pack it
in a box with 2 quad-core Opterons, and add a bunch of local disk.

Shared storage makes things a lot easier, yes.  But it's not required.
If you need the IOPS, buy some decent SAS local disk.  If you are like
me / Linode / Slicehost, well, buy SATA, and tell your customers to buy
enough RAM to cache the data they really need.

> At the same time, throwing away all the 2006-vintage 2-core 16GB servers 
> and replacing them with all this capex on BigIron seems like it isn't 
> saving much money...  Has anyone done a careful *independent* performance 
> analysis of what their applications are actually doing (for a large 
> web-services oriented company with ~100 different app farms or so) and 
> looked at the different costing models and what performance you can get 
> out of virtualization?

Well, I can tell you what I'm doing, and I'm pretty sure Linode does something
fairly similar (though Linode uses Intel CPUs, I'm pretty sure they still
go 8-core/32GiB RAM.  I'm basing the 32GiB RAM on an old post of Caker's and
the Intel CPUs off /proc/cpuinfo on a client's box.)

My personal cutoff is 8GiB.  If a box has less than that, move it into
a virtual image on one of my super-cheap 32GiB/8-core boxes, and you'll
save a bundle in power costs.

Your 2-core 16GiB RAM servers, well, I /strongly recommend/ dedicating
a core to the master control system for doing I/O if you virtualize
(the dom0 in Xen, which is what I'm familiar with), but that becomes
harder when it takes you down to 1 CPU.
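
For what it's worth, dedicating that core is just a couple of hypervisor boot
options plus keeping the guests off CPU 0; roughly this, for GRUB legacy (a
sketch; the memory and CPU numbers depend on your box):

  # on the xen.gz line in /boot/grub/grub.conf
  kernel /xen.gz dom0_mem=1024M dom0_max_vcpus=1 dom0_vcpus_pin

  # in each guest config, keep the guest off CPU 0
  cpus = "1-7"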

How much power do the things eat?  With low-power CPUs, my 8-core Opteron
systems eat around 2A at 120V.  Over the 3-year life of a piece of hardware,
power costs end up being more than hardware costs, so I'm pretty quick to
get rid of my old garbage.  My power/cooling/rackspace costs for one
of these servers is about $75/month.

(of course, if you are paying 'enterprise salesman' rates for hardware, 
an 8 core, 32GiB ram rig is probably going to set you back $4-$6K, and while
it probably comes with better disk, and ram that is 20% faster, that changes
the balance.  Power is expensive, but not _that_ expensive.  In that case,
often it makes sense to run hardware until it won't run any more.  Personally
I'd shoot the salesman and get on with buying cheap hardware, but apparently
'enterprises' don't like doing that sort of thing.)

-- 
Luke S. Crawford
http://prgmr.com/xen/ -   Hosting for the technically adept
http://nostarch.com/xen.htm   -   We don't assume you are stupid.  
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] Virtualization that doesn't suck

2009-09-06 Thread Luke S Crawford
Tracy Reed  writes:

> On Sun, Sep 06, 2009 at 10:56:16AM -0400, Edward Ned Harvey spake thusly:
> I have done such tests but I don't have them at hand to share with you
> at the moment. Expensive single disks such as 15k SAS can do
> 1Gb/s. But your typical 7.2k RPM SATA does 50-70 it seems. I will see
> if I can dig some up. The nice thing about AoE is that there is little
> to no network overhead. It is not TCP nor IP. It runs purely at layer
> 2.

My experience has been I can expect something between 70-100 megabytes/sec
for 7200RPM, depending on where on the disk it is.  If I get 50MB/sec
on the outer tracks, I assume the drive has a bunch of remapped sectors and
I send it back.  (This is for 1.5TiB or 1TiB drives with 32MiB cache.)

I only expect 120 megabytes/sec out of a 15K drive; the big win with 15K
drives is not sequential speed (the higher density of the SATA drives
helps with that) but random access speed.
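
The test I do on a new drive is nothing fancy; roughly this (the device name
is just an example, and mind your if= and of= -- dd has no safety net):

  # sequential read off the outer tracks, bypassing the page cache
  dd if=/dev/sdb of=/dev/null bs=1M count=4096 iflag=direct

  # then look for remapped sectors before putting it in service
  smartctl -A /dev/sdb | grep -i reallocated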

Anyhow, with my workload, I never have sequential access. I've got
a bunch of virtuals sharing a disk, so all access, even if it looked 
sequential from within the DomU, is random.  

So yeah;  a few bonded gigabit links seems reasonable to me for as many
drives as I can fit in one chassis.  

> > PS.  2G of ram in a server doesn't sound like much to me.  I never buy any
> > laptop with less than 4G anymore.
> 
> The 2G I was refering to is in the SAN head. Not the server running
> the virual machines. The 2G in each SAN head is there just to run the
> kernel, vblade, and provide caching. RAM is so cheap that future
> machines may have 4 or 8G depending on how this affects the
> price/performance. RAM is so cheap now that I expect we will just go
> 8G.

Yeah, I always fill a motherboard to the max it economically supports:
8GiB, usually, for single-socket boards; 32 or 64GiB for dual-socket boards.

For a highly random load like mine, RAM cache makes a big difference,
much bigger, I think, than the difference between 2G and 4G links.
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] Virtualization that doesn't suck

2009-09-05 Thread Luke S Crawford
Edward Ned Harvey  writes:

> > First, in my experience, all 'full virtualization' products suck.
> > They are slow and buggy.  I only support paravirtualized kernels.
> 
> Correct me if I'm wrong, but paravirtualized you can only run similar guest
> OSes, right?  What if your host is Linux, and you need a Windows Server
> guest?  I think this is something that can only be done fully virtualized.

The only way I know how to run Windows under Linux, or Linux under Windows,
is to use full virtualization, yes.

You may wish to check out Project Satori.  It lets you run Linux guests on
the Microsoft hypervisor.  No idea how well it works, or how well the
Microsoft hypervisor works.
http://www.xen.org/products/satori.html

There is no reason why paravirtualized OSes need to be similar; I'm happily
running OpenSolaris, NetBSD, and Linux guests under Xen.  But they do
need to be paravirtualized to work under Xen, and that's probably not
ever going to happen to Windows.

> > The commercial xenserver product might be worth a shot if you really
> > need graphical tools.  It's free, I hear, for the basic version.   But
> > the management console is windows only.
> 
> It's not so much that I really need graphical tools - But how else can you
> get the gui console of the guest OS?  Suppose the guest is unavailable for
> some reason via ssh/rdp/vnc.  Then you need the actual console of the guest,
> to either fix the problem or reboot the guest, and the only way I know of is
> to get inside something like VMM or VMware Console Client.  Do you know
> another way?  I'm aware of VNC console, but it's not secure enough to leave
> it normally-on, and I don't think you can temporarily enable it while the
> guest is already running.

The 'xm console' virtual serial console is what I use.  But then,
I don't use graphical guests, so an emulated serial console is actually
better than an emulated CRT.

The only thing I'd recommend is SSH port forwarding and the VNC console...
that way you can access the VNC console (which looks like the video
card to the guest) remotely, but you only have SSH open to the outside.
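
Concretely, something like this is what I mean (hostnames are examples; it
assumes the guest's framebuffer is the first VNC display on the dom0, port
5900):

  # on your workstation: tunnel the dom0's local-only VNC port over ssh
  ssh -L 5900:127.0.0.1:5900 root@dom0.example.com

  # then, in another terminal
  vncviewer 127.0.0.1:0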

> > But, in my opinion, graphical tools are overrated.   xm is really the
> > only
> > management tool you need.
> 
> This is again, a place where maybe you know something I don't know - 
> How can you eject a CD via xm command, and then insert a different CD
> without shutting down the guest OS?

You should be able to do it with xm block-attach

 block-attach domain-id be-dev fe-dev mode [bedomain-id]
   Create a new virtual block device.  This will trigger a hotplug
   event for the guest.

and xm block-detach

block-detach domain-id devid [--force]
   Detach a domain's virtual block device. devid may be the symbolic
   name or the numeric device id given to the device by domain 0.  You
   will need to run xm block-list to determine that number.


Works fine for adding/removing disks on running paravirtualized guests.
I don't remember the last time I tried doing this on an HVM guest or what the
results were, but it should work.
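
As a concrete example of the CD swap on a running paravirtualized guest (the
guest name and ISO path are made up):

  # attach an ISO as a read-only block device
  xm block-attach myguest file:/srv/iso/disc2.iso xvdc r

  # detach it again; the symbolic name works, per the man page above
  xm block-detach myguest xvdc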

-- 
Luke S. Crawford
http://prgmr.com/xen/ -   Hosting for the technically adept
http://nostarch.com/xen.htm   -   We don't assume you are stupid.  

___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] Virtualization that doesn't suck

2009-09-05 Thread Luke S Crawford
Tracy Reed  writes:

> On Sat, Sep 05, 2009 at 01:51:24AM -0400, Edward Ned Harvey spake thusly:
> > I don't have the ability to do things like head migration due to
> > lack of shared storage, so that's not an issue for me.
> 
> These days there is absolutely no excuse for not having shared
> storage. AoE has been working perfectly for me for years and I run my
> business on it.

Details, please.  Are you using the Coraid products?  What do you do
for redundancy?  What happens when the SAN box fails?
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] Virtualization that doesn't suck

2009-09-05 Thread Luke S Crawford
> or not - but you can only get a guest console from the local host machine.


Don't use the graphical console.  However, if you must, you can get
vnc access to the virtual framebuffer:  

grep vnc /etc/xen/xmexample.hvm

will give you the bits required to connect to VNC remotely.  For obvious
security reasons, it defaults to only listening on 127.0.0.1.
(Locally, you can also have an SDL console, which is obviously better.)

Personally, I always use

xm console domain

which gives you access to the Xen virtual serial console.  Takes some
work to set up in HVM mode, but works there, too.  In paravirt mode,
it's dead easy.


--
Luke S. Crawford 
http://nostarch.com/xen.htm
http://prgmr.com/xen/
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] Swap sizing in Linux HPC cluster nodes.

2009-09-04 Thread Luke S Crawford
Phil Pennock  writes:

> While I mostly agree about the limited utility of swap, on FreeBSD I
> still went (go, on my personal box) for swap >= RAM for one simple
> reason: kernel crash dumps.

The Linux guys are very anti-dump.  Personally, I find that 95% of the
time console output (and I think a logging serial console is essential)
gives you all the info you need anyhow, and the other 5% of the time,
I don't seem to be able to get anyone else to help me.

The current crashdump tool, kdump, lets you copy the crash dump to a
partition, or even over the network to somewhere else.
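
Setting it up is one kernel argument plus a couple of lines in
/etc/kdump.conf; roughly this for RHEL/CentOS 5-era kdump (a sketch; the
directive names and the crashkernel syntax vary a bit between releases, and
the device is just an example):

  # appended to the kernel line in grub.conf: reserve RAM for the capture kernel
  crashkernel=128M@16M

  # /etc/kdump.conf: compress the dump and write it to a spare partition
  ext3 /dev/sda3
  path /var/crash
  core_collector makedumpfile -c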

But personally, I don't save dumps under Linux, just 'cause they change
crashdump utilities every two years, and because nobody seems to know how to
do anything with them.

FreeBSD is superior in this regard; getting a backtrace out of a FreeBSD
dump is trivial.

-- 
Luke S. Crawford
http://prgmr.com/xen/ -   Hosting for the technically adept
http://nostarch.com/xen.htm   -   We don't assume you are stupid.  
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


[lopsa-tech] does anyone have (non-binary) USENET archives?

2009-09-03 Thread Luke S Crawford

I'm looking to set up a historical Usenet search now that Google seems
to have turned that functionality off.

I'm mostly interested in technical newsgroups.
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] Rackspace managed hosting vs in-house

2009-05-14 Thread Luke S Crawford
Tracy Reed  writes:

> I have a consulting client who has engaged me to look at their
> infrastructure and do some analysis and then make a recommendation on
> either:
> 
> 1. Hiring their own full-time sysadmin to re-architect everything and
> move it to an in-house virtualized environment.
> 
> or
> 
> 2. Outsourcing the whole works to Rackspace and get out of the
> business of owning hardware, paying a colo for a rack, and bringing on
> a full-time sysadmin.


The *key* here is to have multiple providers for everything.  Outsourcing is
great, but you don't want to get locked into only one provider.

The other thing to consider is that renting servers doesn't mean you don't
need a guy who knows *NIX.  

Also, owning your own hardware is a lot cheaper than renting.  Sometimes
that doesn't matter because hardware isn't a significant part of your cost
structure.  For me it matters, because hardware is a very large portion of
my costs.  Building a 2x2.1GHz Shanghai server with 32GB RAM
and a mirrored terabyte?  $1200 in parts or so and some
work.  (But then, I've got my workstation setup[1].  Nothing fancy, but
adequate ESD protection is worth many times the effort if broken servers
wake you up.)  Renting a dual Xeon from Rackspace with 16GB RAM,
I believe, would be close to that much every month.

Just to give you some idea, I'm paying around a grand a month for a full
rack with a small bandwidth commit and 2x 20A circuits.


[1] http://prgmr.com/~lsc/luke_opterons.jpg

-- 
Luke S. Crawford
http://prgmr.com/xen/  -   Hosting for the technically adept
   We don't assume you are stupid.  
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] Linux 32 vs 64 BIT and the future of 32 BIT

2009-03-16 Thread Luke S Crawford
apostolos pantazis  writes:
> These days it seems to be getting harder and harder finding quality
> support under 32 BIT; In some cases vendors have flat out specified
> that the future of support under 32 BIT is grim. Yet the enterprises
> of the world are still running 32 BIT and I am wondering: what is your
> experience in regards to the future of 32 BIT?


Almost all server kit worth having (at least, if you are paying for power)
supports the x86_64 extensions these days.  Personally, I support a bunch 
of RHEL5/CentOS5 Xen hosts, and I see many more bugs on the hosts running
an i386 hypervisor/control domain than I see on the hosts running an 
x86_64 hypervisor/control domain.  (my theory is that the i386 code just 
doesn't get exercised as much.)

As for small servers (which I usually put on Xen guests) i386 still has
something of an edge.  Libs, from what I understand, consume more RAM in
x86_64 than in i386.

That said, you can get 32GB of registered ECC DDR2 for under $700 these
days, so personally I don't worry about it.  All my stuff is x86_64.
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


[lopsa-tech] IP accounting tools that support IPv6?

2009-03-15 Thread Luke S Crawford

So I'm looking for a tool that will let me set a sniffer on a SPAN port
monitoring my uplink, and will count packets (well, count bytes to/from
a particular IP; I don't get charged per packet).

See, I've got port-level BCP38 nearly everywhere, so I don't have to worry
much about spoofing, and with this (unlike port-level accounting) I can sell
customers storage or other things on my own network and only charge them
for packets that leave my network.

So yeah, right now I'm using bandwidthd, and it is IPv4 only and doesn't
give me the 95th percentile.  Suboptimal. 

I found this:

http://www.pmacct.net/

which looks a lot like what I want:  a tool to drop my packet counts into
rrdtool.  (From there I can do what I need, no problem.)
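
The config looks pretty minimal, something like this (I haven't run it yet;
the key names are from a quick read of the pmacct docs, and eth0 is whatever
interface your SPAN port lands on):

  ! pmacctd.conf
  daemonize: true
  interface: eth0
  aggregate: src_host, dst_host
  plugins: memory

Then something like 'pmacct -s' from cron dumps the in-memory table, and
getting the byte counts from there into rrdtool is easy.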

Does anyone else have experience with this sort of thing?  Is there
something I'm missing vs. port counters?


  


___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] WARNING: NetApp terminates existing StoreVault service/support contracts

2009-03-13 Thread Luke S Crawford
Brad Knowles  writes:

> We're now looking at Sun's Unified Storage Servers as a lower-tier of 
> storage service than we've ever had before.  With Matching Grant pricing 
> (50% off retail), you can get the cost/usable Tebibyte (uTiB) down to 
> $1500 with one head and three fully populated shelves, and your cost per 
> additional uTiB is less than $1000.


Yes.  Now, the first thing I notice about the Sun storage solutions is
that they don't appear to have redundant heads.  Is that a major concern?
How do you deal with it?  I have seen EMC NAS heads crash (and in my case
they failed over to the other head; I wouldn't have noticed if I wasn't
monitoring it).  I would like to hear from a user of the Sun product on the
subject.



___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] Tapes, not backup...

2009-02-12 Thread Luke S Crawford
Doug Hughes  writes:
> > Are there any tape systems that have a lower cost per gigabyte  than
> > 1TB or 1.5TB sata disks?
> You don't offer much context to your question, but the answer is most
> assuredly yes. It depends how you want to use your tapes.
> 
> 1 1TB enterprise seagate SATA disk is $200-$245 online (actually about
> 900GiB because they use power10 TB instead of power2)
> 1 LTO4 (800GiB) tape is about $48 in quantity one (significantly less
> in bulk)

Interesting.  Hm.  And the drives look to be around $3000 before the robot.
I do have a metric ton of 1Gb Fibre Channel stuff lying about.

But my experience with tape has been that it is way less reliable than even 
the crappiest consumer-grade drives you can find.  This may be ignorance or 
incompetence on my part (I don't think I've touched a tape since I reached
the age of majority)  -  but either way, for me, tape has been unreliable,
and disk, while not completely reliable, fails in ways I've seen thousands of
times before, ways that I can deal with.  (more importantly, when data is
corrupted on my spinning disk, my pager goes off.  With tape, you don't 
know until bob in accounting wants that really important file,
or the new guy wipes out the billing server.)

See, the biggest reason why I don't use tape is that my experience
has been that tape backups seem to fail on at least one out of five tapes.
(Of course, this mostly comes from my experience as a tape monkey in
high school.  Maybe it was just that the companies I worked for didn't have
anyone that knew anything about tape.  This was largely before I knew anything
about anything.   My job was usually to take the tape out of the server, 
put it on the bottom of the tape stack (usually in a filing cabinet)  and 
put the top tape in the server, but of the 5 tape restores I saw during 
that period, 2 of them went off without a hitch, 1 of them required going 
back a month because of a corrupted 'level 0', and two of them were total 
losses.)

Because I've never seen tape work reliably, when I am in control, I
always provision spinning disk, as I know how to deal with failures in that
arena.  (And yes, you do need to checksum your disk data, and monitor your
logs closely to deal with I/O errors.  Silent errors do happen... but they
happen everywhere.  You wouldn't run without ECC RAM, would
you?  Checksum your backups, and preferably your running disk.)  It's
possible that tape is just really sensitive to humidity (I've read that
in dry areas ESD is a leading cause of tape failure.  The Sacramento Valley,
where I spent my childhood and most of the parts of my career that dealt with
tapes, can get pretty dry at times.  All of the places I worked, at best,
kept the tape in a locked filing cabinet.  So it could just be ignorance.)
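
The checksumming doesn't have to be fancy, either; at its crudest it's
something like this (paths are examples, and ZFS or a real backup tool does
the same job better):

  # when the backup is written, record checksums somewhere off the backup volume
  find /backup -type f -print0 | xargs -0 md5sum > /root/backup-manifest.md5

  # verify from cron; any output means something rotted or changed
  md5sum -c --quiet /root/backup-manifest.md5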

If you are using RAID (especially something like ZFS that does
checksumming at the block level, but even if you just do an md5sum that verifies
periodically) and you monitor the damn things (this means replacing the drive
when you see I/O errors, not just rebooting the box and saying you cleared
the errors.  Silent errors do happen, but the vast majority of disk errors
on systems with ECC RAM do leave spoor in the logs; it's just that most
people ignore those errors that can be 'cleared' by a reboot), consumer
grade is probably 'good enough'.  It's certainly much, much better than my
experience with tape has been, and it gets your cost per gig almost down to
the cost of tape.  (I've worked with terrifyingly massive installations
that used consumer-grade disk.  The key is the ability to notice when you
have disk errors and the ability to take any server out of production at
any time without breaking anything, and then a few guys with shopping carts
full of hard drives wandering the aisles of your data center.)

But for small shops, it's cheap to put a 16-bay SuperMicro in some far-off
co-lo (or under the bed).  Just leave the disks in, and your total cost is
usually less than the drive and tapes for equivalent capacity.

> depending upon how many you buy and what your requirements are, tape
> is still significantly cheaper than disk, especially once you add in
> power for  TB of running disk. Tape wins in offsite archival as
> well.

Yes, power is a big deal.  You have a good point there.

I would argue that spinning disk is also pretty convenient for off-site,
though.  Spinning up a 16-bay SuperMicro under my bed (or in a co-lo on the
other side of the globe) is pretty cheap, and I can automate the 'tape
change' process as well as the 'restore file x from last week 'cause Joe in
accounting screwed it up' process, and I can easily have multiple levels of
isolation.  I guess you can probably do all the same things with a tape robot
of sufficient capacity/scriptability, but writing an interface to the spinning
disk system is trivial.


Another advantage of disk is that I already know about disk (and if the 
new guy doesn't, she is going to l

Re: [lopsa-tech] Tapes, not backup...

2009-02-12 Thread Luke S Crawford
Tim Kirby  writes:
> In a flash of deja vu all over again, the question was "do I know of any
> (preferably OSS) tape library management apps for a linux box with an
> attached library for simple tape archival use. A very quick google
> didn't pop up anything obviously useful, but that's always colored by
> the keywords used...

Are there any tape systems that have a lower cost per gigabyte  than 
1TB or 1.5TB sata disks?  
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/


Re: [lopsa-tech] recommendation for SSL CA?

2009-01-15 Thread Luke S Crawford
Peter Loron  writes:

> I have purchased a few certs from these guys and have been satisfied.
> They have a reseller program that looks pretty simple. I have no
> experience with the reseller program itself.
> 
> http://www.geocerts.com/resellers


I think GeoCerts actually resells GeoTrust certificates...
http://geotrust.com/

Me, I go even cheaper and use RapidSSL certs:
https://www.rapidssl.com

(though you should find a reseller... you save a lot)  

But both GeoTrust and RapidSSL are signed by the Equifax root certificate,
so the difference is mostly in how much verification they do that you
are who you say you are.  In my case, I paid $15 or so for a year, I think,
through some no-name reseller.  RapidSSL verified I had access to an email
in the prgmr.com WHOIS records, and signed my cert.  Weak, really, but I'm
not handling credit cards and I'm cheap.  It is signed by the Equifax
root certificate, which is in most browsers, and I haven't gotten any
complaints.

It's also a single root certificate.  A while back a client of mine
bought one of the GoDaddy second-level certificates, where his cert was
signed by another company that was in turn signed by one of the root certificates.

It was a right pain to get it working under Tomcat.  I got it figured out,
but it would have been much cheaper for him to just get something signed
by a root certificate.
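
For what it's worth, the usual fix is importing the chain into the keystore
before the server cert, roughly like this (the aliases and filenames are
made up):

  keytool -import -trustcacerts -alias intermediate \
      -file gd_intermediate.crt -keystore keystore.jks
  keytool -import -trustcacerts -alias tomcat \
      -file www_example_com.crt -keystore keystore.jks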
___
Tech mailing list
Tech@lopsa.org
http://lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/