Re: [lopsa-tech] SAN recommendation

2010-07-02 Thread chutchin
> I'm looking into replacing our aging EMC SAN installation with
> something else, and I thought I'd ping the list for suggestions.
> Basically I'm looking for, preferably, one box that can do NFS really
> well and also do fibre channel really well (mostly for ESX LUNs). I've
> used NetApp gear in the past for both these purposes to great success,
> and I'd happily try them again, but I suspect there might be some
> newer, more agile companies out there with competitive alternatives
> and better pricing.
>

Have you looked at Pillar Data Systems?  They have a very compelling
product that offers everything you are asking for here, as well as the
ability to seamlessly migrate less-used or less I/O-hungry data from
expensive, high-performance FC drives to lower-performance but
higher-density, more cost-targeted SATA drives.

> The management interface counts for a lot too: I hate EMC's with a
> fiery passion. Something web based is much, much better than a java
> app that only works well on Windows.
>

This is where they may fall down, as the only interface I have seen is a
GUI.  That is not to say they do not offer a web-based one; I just do not
remember seeing it in their lab when we played with the gear.

> Aside from excelling at NFS and fc, and being "officially supported"
> by VMware, Inc., flexibility in terms of being able to use both SATA
> and FC/SAS disks where I like is a good plus too. Easy storage
> expansion by adding more drive shelves is important. Enough
> controllers with sufficient horsepower such that they can be slammed
> by NFS but still serve up fibre channel without a hitch is important.
> In terms of capacity, I'd probably need around 16TB usable to start,
> and growing quickly from there.
>

Pillar also has good growth options from what I have seen.  As stated
above, they can use FC and SATA drives in the same array (different trays
needed, if I remember correctly), all managed by the same head-end unit(s).
You can have a dedicated NAS head serving up NFS independent of the FC
raw-storage controller.

> Cost is a consideration, but I should probably put this in
> perspective: I know how much top-tier stuff from NetApp and EMC costs,
> and I'm prepared to pay that, but if there's another company out there
> that is just as good or better for a bit less, then I should consider
> their products too. I'm not expecting 30TB for $8k or anything like
> that! But I don't want to spend money I don't have to just because I
> haven't heard of an alternative.
>

Cost was very competitive with nearly every other vendor we looked at, and
they made the final-two list when we made our last purchase of new storage
infrastructure.  Unfortunately for them, HP already had a foot in the door
with a lower-end unit that we were quickly outgrowing, and we were not yet
in need of many of the more compelling features Pillar was offering us.  In
the end HP also made some VERY aggressive price cuts on the hardware,
including new 4Gb fibre switches at VERY good pricing.  We spent a good
deal of time looking at Pillar Data products and were very impressed with
what they were capable of, and the engineers we were dealing with knew
storage very well.

Their "founder" likes to play around in quite expensive boats a lot these
days.  He wanted Pillar to design a good storage back end for his other pet
company, Oracle.  I was really rooting for Pillar, but the really cool
features (which, two years later, we really could have made good use of)
were not enough to push them ahead of HP's EVA line that was already in
house, and which therefore already had the support and management
experience in house as well.  Factor in the whole-package pricing, with the
4Gb switches practically free, and we just could not make the case to
management.

Were I in a position today to make a case for a new storage vendor (and I
REALLY wish I were), they would be high on the list of vendors I would be
courting.  I would also like to go back and try just a little bit harder to
get them chosen over HP, now that we really could use the features HP did
not offer or is now charging significantly more for.  Odd how fast those
discounts shrink once you have already made the big purchase.  HP is far
from alone in that, though.  MPC Corp rebranded a LeftHand Networks iSCSI
device for a while, and our "all inclusive" licensing was found wanting
when we started looking at snapshots and replication.  Got to love sales
criminals.


> So, any companies / products you can recommend along those lines, I'd
> love to hear about. Thanks!
>
> Dan


Charlie



Re: [lopsa-tech] storing fsfs-formatted svn repos atop nfs: good bad or ugly?

2010-07-02 Thread Edward Ned Harvey
> From: tech-boun...@lopsa.org [mailto:tech-boun...@lopsa.org] On Behalf
> Of C.M. Connelly
> 
> people have figured things out), I'm leaning toward just setting
> up web-based repos for those projects rather than worry.

I assume you mean via http.  FWIW, Apache is much more powerful than
svnserve, but it's also slower and more difficult to set up.  The main
reasons to use http instead of svnserve are: (a) you want logging of
access; (b) you want authentication methods that are unavailable in
svnserve.

I don't know any others...
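
For anyone weighing the two, here is a minimal sketch of the svnserve route
(the repository path and hostname are invented; the http route instead
means Apache plus mod_dav_svn):

svnadmin create /srv/svn/myrepo
# turn on auth by editing /srv/svn/myrepo/conf/svnserve.conf and conf/passwd
svnserve -d -r /srv/svn            # daemon serving everything under /srv/svn
svn checkout svn://svn.example.com/myrepo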



Re: [lopsa-tech] Anybody with Netapp (take 2)

2010-07-02 Thread Edward Ned Harvey
> From: Tom Limoncelli [mailto:t...@whatexit.org]
> 
> Just to repeat the problem:
> 
> Suppose you put info on a Netapp that wasn't supposed to be stored
> online.  It gets into a snapshot.  You can't delete it from the
> snapshot, because snapshots are read-only.  You can delete the
> snapshot, but what if your users need the snapshot for other reasons?

No, that wasn't the problem.

The problem was: if you have private data which is correctly protected, and
then you delete it, you should be able to assume it always was and always
will be correctly protected, right?  Not quite.  After you delete the data,
if you change the perms of the parent directory, then the deleted data
(still present in snapshots) is suddenly exposed to people who should never
have had access to it.

In effect, permission to the data can be granted, after the delete, that
was never granted before it.
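
A concrete sketch of that sequence (the paths and snapshot name are
invented, using the standard .snapshot layout):

# assume secret.txt is itself world-readable (644) and only the parent protects it
chmod 700 /home/alice/private            # data protected by the parent dir
rm /home/alice/private/secret.txt        # "deleted", but still held in snapshots
chmod 755 /home/alice/private            # parent opened up later...
cat /home/alice/private/.snapshot/hourly.0/secret.txt   # ...old contents now readable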

> 
> ...
> 
> You can disable the ability for people to view snapshots until the
> ...

You are right that the problem could be fixed by sacrificing all the
snapshots that contain the private data.  But I didn't know, until you said
so, about disabling access to the snapshots.

Still, if you prevent access to the snapshots, doesn't that *almost* make
them as useless as deleting them?  Now the snapshots can only be used by
root, so they're basically for backups only, or for restoring user files
with administrator assistance.




Re: [lopsa-tech] storing fsfs-formatted svn repos atop nfs: good bad or ugly?

2010-07-02 Thread Edward Ned Harvey
> From: tech-boun...@lopsa.org [mailto:tech-boun...@lopsa.org] On Behalf
> Of Tom Perrine
> 
> We've been running SVN (and Perforce) over NFS for years.  Clients are
> Mac, Windows, Linux and the NFS servers have been
>  at least 4 generations of NetApp.

How are you using Windows as an NFS client?  I know it can be done; I just
have never known anybody to actually *use* that option.



Re: [lopsa-tech] lowering net.ipv4.tcp_fin_timeout

2010-07-02 Thread Daniel Pittman
Matt Lawrence  writes:
> On Thu, 1 Jul 2010, Matt Lawrence wrote:
>> On Thu, 1 Jul 2010, Aleksey Tsalolikhin wrote:
>>
>>> Do you have HTTP pipelining enabled?  So you can download all 60
>>> images over a single TCP/IP connection?
>>>
>>> This is "KeepAlive On" in Apache httpd httpd.conf
>>
>> Good point.  I just asked and it is turned off.  Since Apache configs are
>> out of my juristiction, I have passed along the recommendation that it be
>> turned on.  Thanks for the pointer.
>
> I have done more research.  The reason it is turned off is that the number
> of unique visitors is so high that Apache winds up with too many idle
> threads eating up too much memory which also crashes the box.

Idle, as controlled by MaxSpareServers, or active, as controlled by
MaxClients or the like?  In my experience it is usually the latter, but I
have watched a number of my staff spend inordinate amounts of time trying
to tune the former to fix this over the years.
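
A quick way to see which limit is actually biting (a rough sketch; the
process name and the sizing rule of thumb are assumptions about a stock
prefork setup, not something from this thread):

ps -o rss= -C httpd | awk '{ t += $1; n++ } END { if (n) printf "%d procs, avg %d KB each\n", n, t/n }'
# a safe MaxClients is roughly (RAM you can spare for Apache) / (avg RSS per child)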

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons



Re: [lopsa-tech] SAN recommendation

2010-07-02 Thread Doug Hughes
You might check out DDN if you need really massive amounts of storage.
They are best in the half-petabyte-and-up range and have good FC or even IB
support.  There are many filesystem options to go along with it; we're
using GPFS against the storage, with NFS out to the clients.



Re: [lopsa-tech] SAN recommendation

2010-07-02 Thread Paul Graydon
I have some experience with BlueArc from the past (late '06 I think it
was, and even though I've since left, I know they're still using it).  The
unit was more than a little dodgy to start with, resulting in a number of
gray hairs, as we had a few firmware problems over the first year: it not
alerting us about failed disks, reporting the wrong disks as failed, and a
few performance problems.  By the end of that, though, it was performing
very well indeed, far beyond our original expectations.  There were a
number of things we thought NetApp did better, but then NetApp also happens
to do them for significantly more money.

Paul

On 07/02/2010 11:31 AM, Tom Perrine wrote:
> Dan Parsons wrote:
>
>
>> So, any companies / products you can recommend along those lines, I'd
>> love to hear about. Thanks!
>>  
> There seem to be a few companies aimed directly at NetApp, just as Juniper 
> seemd to originally be aimed at Cisco :-)
>
> We've had good luck with BlueArc for the past 4 years, as we continue to use 
> NetApp (going on 11 years).  BlueArc
> started in the very high-end space (bigger and ) above where NetApp could 
> go with the original 6070s, but I've been
> told that they have some lower end products now.
>
> BlueArc is typically CLI compatible with NetApp, too :-)
>
> Again, they are likely too expensive, but might have started shipping some 
> smaller products.
>


Re: [lopsa-tech] SAN recommendation

2010-07-02 Thread Adam Levin

On Fri, 2 Jul 2010, Dan Parsons wrote:
> Thank you very much, Nicholas. Compellent looks particularly
> interesting... can anyone here comment on their experience with their
> equipment and support?

Support has been excellent so far, but they're still a relatively small
company.  They're based out of Eden Prairie, Minnesota, and their support
is still run out of there, though they do have 24/7 international support
at the first level.  They have the usual break/fix contracts with Unisys to
come in and swap drives.  The cool thing is that since they're small, every
customer is a big fish, so you can get to people who can actually make
changes in pretty short order.  We worked closely with their engineers to
get the StorageCenter working with IBM's SVC, and we've been to a couple of
customer councils already.  For pure SAN, they'd currently be my first
choice for almost any general-purpose SAN application, and probably for
quite a few more specific application requirements.

-Adam



Re: [lopsa-tech] SAN recommendation

2010-07-02 Thread Adam Levin

On Fri, 2 Jul 2010, Dan Parsons wrote:
> I'm looking into replacing our aging EMC SAN installation with
> something else, and I thought I'd ping the list for suggestions.
> Basically I'm looking for, preferably, one box that can do NFS really
> well and also do fibre channel really well (mostly for ESX LUNs). I've
> used NetApp gear in the past for both these purposes to great success,
> and I'd happily try them again, but I suspect there might be some
> newer, more agile companies out there with competitive alternatives
> and better pricing.

To be honest, I haven't found one yet.  :)

I won't say that NetApp does FC SAN "really well".  Functionally, it works
perfectly, and since WAFL sits underneath, the added snapshot and cloning
functionality is a nice value-add.  However, it's very expensive: not only
does NetApp carry a premium by itself, but the overhead of WAFL, then an FC
container on top of that, and *then* a filesystem inside the container
means there's a lot of overhead versus a standard SAN.

Then there's the value-add of SnapManager for Virtual Infrastructure.  In
fact, if you're going with a NetApp anyway, I would avoid FC altogether and
just run your datastores on the filer via NFS.  It's much more
cost-effective, especially on a NetApp, and simpler to manage.  Plus, you
can get 10Gb performance out of it and save money by not buying a SAN (if
it's just ESX you need it for).
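
For reference, pointing classic ESX at an NFS datastore is a one-liner from
the service console (a sketch; the filer name, export path, and datastore
label are all invented):

esxcfg-nas -a -o filer1.example.com -s /vol/vmware_ds1 nfs_datastore1
esxcfg-nas -l      # list the NAS datastores the host knows about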

> The management interface counts for a lot too: I hate EMC's with a
> fiery passion. Something web based is much, much better than a java
> app that only works well on Windows.

Well, I'm a command line guy, so I use OnTAP via SSH.  I avoid GUIs when I 
can.

> Aside from excelling at NFS and fc, and being "officially supported"
> by VMware, Inc., flexibility in terms of being able to use both SATA
> and FC/SAS disks where I like is a good plus too. Easy storage
> expansion by adding more drive shelves is important. Enough
> controllers with sufficient horsepower such that they can be slammed
> by NFS but still serve up fibre channel without a hitch is important.
> In terms of capacity, I'd probably need around 16TB usable to start,
> and growing quickly from there.

At the moment I think NetApp, assuming you can afford it, is the best at 
excelling (as much as possible) at all of those requirements.

> So, any companies / products you can recommend along those lines, I'd
> love to hear about. Thanks!

As far as pure storage goes, my current favorite is Compellent, which Nick
already mentioned.  It's a fantastic box (we have one in our lab) and does
a great job.  The snapshots (they call them Replays) are powerful, and the
way it manages the disks is second to none.  However, they don't currently
do NFS, though it's coming via a bolt-on BSD appliance that does ZFS.
Eventually it'll be built into their box (they run BSD under the covers).
I'm not sure I trust ZFS enough for that yet, but with commercial support
it should do quite well.

-Adam



Re: [lopsa-tech] SAN recommendation

2010-07-02 Thread Dan Parsons
Thank you very much, Nicholas. Compellent looks particularly
interesting... can anyone here comment on their experience with their
equipment and support?

Dan



On Fri, Jul 2, 2010 at 4:04 PM, Nicholas Tang  wrote:
> I haven't used them, I'm a 3par fan (I'd recommend them, but they've got no
> NAS features built-in), but I've heard good things about Compellant.  Great
> featureset, reasonable price.
>
> If you're looking lower cost, check out Nexenta - it's based on opensolaris
> but with a bunch of commercially supported additional features.
>
> And no, we haven't used them either, but we're going to be trying them out
> soon.  ;)
>
> Nicholas
>
> On Jul 2, 2010 3:41 PM, "Dan Parsons"  wrote:
>> I'm looking into replacing our aging EMC SAN installation with
>> something else, and I thought I'd ping the list for suggestions.
>> Basically I'm looking for, preferably, one box that can do NFS really
>> well and also do fibre channel really well (mostly for ESX LUNs). I've
>> used NetApp gear in the past for both these purposes to great success,
>> and I'd happily try them again, but I suspect there might be some
>> newer, more agile companies out there with competitive alternatives
>> and better pricing.
>>
>> The management interface counts for a lot too: I hate EMC's with a
>> fiery passion. Something web based is much, much better than a java
>> app that only works well on Windows.
>>
>> Aside from excelling at NFS and fc, and being "officially supported"
>> by VMware, Inc., flexibility in terms of being able to use both SATA
>> and FC/SAS disks where I like is a good plus too. Easy storage
>> expansion by adding more drive shelves is important. Enough
>> controllers with sufficient horsepower such that they can be slammed
>> by NFS but still serve up fibre channel without a hitch is important.
>> In terms of capacity, I'd probably need around 16TB usable to start,
>> and growing quickly from there.
>>
>> Cost is a consideration, but I should probably put this in
>> perspective: I know how much top-tier stuff from NetApp and EMC costs,
>> and I'm prepared to pay that, but if there's another company out there
>> that is just as good or better for a bit less, then I should consider
>> their products too. I'm not expecting 30TB for $8k or anything like
>> that! But I don't want to spend money I don't have to just because I
>> haven't heard of an alternative.
>>
>> So, any companies / products you can recommend along those lines, I'd
>> love to hear about. Thanks!
>>
>> Dan


Re: [lopsa-tech] SAN recommendation

2010-07-02 Thread Nicholas Tang
I haven't used them myself (I'm a 3PAR fan and would recommend them, but
they've got no NAS features built in), but I've heard good things about
Compellent: great feature set, reasonable price.

If you're looking lower-cost, check out Nexenta - it's based on OpenSolaris
but adds a bunch of commercially supported features.

And no, we haven't used them either, but we're going to be trying them out
soon.  ;)

Nicholas

On Jul 2, 2010 3:41 PM, "Dan Parsons"  wrote:
> I'm looking into replacing our aging EMC SAN installation with
> something else, and I thought I'd ping the list for suggestions.
> Basically I'm looking for, preferably, one box that can do NFS really
> well and also do fibre channel really well (mostly for ESX LUNs). I've
> used NetApp gear in the past for both these purposes to great success,
> and I'd happily try them again, but I suspect there might be some
> newer, more agile companies out there with competitive alternatives
> and better pricing.
>
> The management interface counts for a lot too: I hate EMC's with a
> fiery passion. Something web based is much, much better than a java
> app that only works well on Windows.
>
> Aside from excelling at NFS and fc, and being "officially supported"
> by VMware, Inc., flexibility in terms of being able to use both SATA
> and FC/SAS disks where I like is a good plus too. Easy storage
> expansion by adding more drive shelves is important. Enough
> controllers with sufficient horsepower such that they can be slammed
> by NFS but still serve up fibre channel without a hitch is important.
> In terms of capacity, I'd probably need around 16TB usable to start,
> and growing quickly from there.
>
> Cost is a consideration, but I should probably put this in
> perspective: I know how much top-tier stuff from NetApp and EMC costs,
> and I'm prepared to pay that, but if there's another company out there
> that is just as good or better for a bit less, then I should consider
> their products too. I'm not expecting 30TB for $8k or anything like
> that! But I don't want to spend money I don't have to just because I
> haven't heard of an alternative.
>
> So, any companies / products you can recommend along those lines, I'd
> love to hear about. Thanks!
>
> Dan


Re: [lopsa-tech] SAN recommendation

2010-07-02 Thread Tom Perrine
Dan Parsons wrote:

> So, any companies / products you can recommend along those lines, I'd
> love to hear about. Thanks!

There seem to be a few companies aimed directly at NetApp, just as Juniper
seemed to originally be aimed at Cisco :-)

We've had good luck with BlueArc for the past 4 years, as we continue to
use NetApp (going on 11 years).  BlueArc started in the very high-end space
(bigger and ) above where NetApp could go with the original 6070s, but I've
been told that they have some lower-end products now.

BlueArc is typically CLI compatible with NetApp, too :-)

Again, they are likely too expensive, but might have started shipping some 
smaller products.



Re: [lopsa-tech] storing fsfs-formatted svn repos atop nfs: good bad or ugly?

2010-07-02 Thread Eric Sorenson

On Jul 2, 2010, at 12:19 PM, C.M. Connelly wrote:

> The big problem with file:/// repos is making sure you get all the
> group permissions and user umasks sorted out.  Although I haven't
> really gotten many complaints or questions (which suggests that
> people have figured things out), I'm leaning toward just setting
> up web-based repos for those projects rather than worry.


One other subtle thing that just bit me about file:///svn/ access is that
pre- and post-commit hooks need to be able to run everywhere that check-ins
might happen.

 - Eric Sorenson - N37 17.255 W121 55.738  - http://twitter.com/ahpook  -




[lopsa-tech] SAN recommendation

2010-07-02 Thread Dan Parsons
I'm looking into replacing our aging EMC SAN installation with
something else, and I thought I'd ping the list for suggestions.
Basically I'm looking for, preferably, one box that can do NFS really
well and also do fibre channel really well (mostly for ESX LUNs). I've
used NetApp gear in the past for both these purposes to great success,
and I'd happily try them again, but I suspect there might be some
newer, more agile companies out there with competitive alternatives
and better pricing.

The management interface counts for a lot too: I hate EMC's with a fiery
passion.  Something web-based is much, much better than a Java app that
only works well on Windows.

Aside from excelling at NFS and FC, and being "officially supported"
by VMware, Inc., flexibility in terms of being able to use both SATA
and FC/SAS disks where I like is a good plus too. Easy storage
expansion by adding more drive shelves is important. Enough
controllers with sufficient horsepower such that they can be slammed
by NFS but still serve up fibre channel without a hitch is important.
In terms of capacity, I'd probably need around 16TB usable to start,
and growing quickly from there.

Cost is a consideration, but I should probably put this in
perspective: I know how much top-tier stuff from NetApp and EMC costs,
and I'm prepared to pay that, but if there's another company out there
that is just as good or better for a bit less, then I should consider
their products too. I'm not expecting 30TB for $8k or anything like
that! But I don't want to spend money I don't have to just because I
haven't heard of an alternative.

So, any companies / products you can recommend along those lines, I'd
love to hear about. Thanks!

Dan


Re: [lopsa-tech] storing fsfs-formatted svn repos atop nfs: good bad or ugly?

2010-07-02 Thread C.M. Connelly
"ENH" == Edward Ned Harvey 

ENH> Also if you're using file:/// to access the repo, does it
ENH> keep track of which users made which changes?

At least on Linux it does; it uses your username.  I still have
some file:///-based SVN repos at home, and I believe some of the
students use them for some projects (although they may be using
ssh+svn as well as or instead of file:///).

The big problem with file:/// repos is making sure you get all the
group permissions and user umasks sorted out.  Although I haven't
really gotten many complaints or questions (which suggests that
people have figured things out), I'm leaning toward just setting
up web-based repos for those projects rather than worry.
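
A rough sketch of what "sorted out" usually means in practice for a
group-shared file:/// repo (the group name and path are invented):

svnadmin create /srv/svn/project
chgrp -R svnusers /srv/svn/project
chmod -R g+rw /srv/svn/project
find /srv/svn/project -type d -exec chmod g+s {} +   # new rev files inherit the group
# each committer also needs a umask of 002 (or 007) so fresh files stay group-writable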

   Claire

*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
  Claire M. Connelly c...@math.hmc.edu
  System Administrator, Dept. of Mathematics, Harvey Mudd College
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*




Re: [lopsa-tech] lowering net.ipv4.tcp_fin_timeout

2010-07-02 Thread Matt Lawrence
On Fri, 2 Jul 2010, Nicholas Tang wrote:

> Oh, I agree with all of that.  My only point was that, if the environment
> isn't too crazy, you could have a varnish or nginx box running in an hour or
> two that would have a bigger impact than the tcp changes and wouldn't
> require making changes like that.  That's why I started w/ the term "proxy"
> in front of load balancer, because the benefit you'd be deriving would be
> the tcp offload, not the load balancing (I considered leaving "load
> balancer" out entirely, maybe it would've been more clear if I had).  :)

You are absolutely correct from a technical standpoint.  But doing this
would require deploying a new system, and that's not a simple task in this
environment.  These folks aren't agile and don't have a flexible
environment.  Going to a cloud model, where standing up new systems is
really cheap and easy, is one of the things I'm advocating for.

I had a recruiter call me last week about a job with a company that does 
lots of agile development.  It sounded really interesting, but they 
decided to outsource their systems management instead of hiring.

-- Matt
It's not what I know that counts.
It's what I can remember in time to use.


Re: [lopsa-tech] lowering net.ipv4.tcp_fin_timeout

2010-07-02 Thread Nicholas Tang
Oh, I agree with all of that.  My only point was that, if the environment
isn't too crazy, you could have a varnish or nginx box running in an hour or
two that would have a bigger impact than the tcp changes and wouldn't
require making changes like that.  That's why I started w/ the term "proxy"
in front of load balancer, because the benefit you'd be deriving would be
the tcp offload, not the load balancing (I considered leaving "load
balancer" out entirely, maybe it would've been more clear if I had).  :)

Nicholas

On Fri, Jul 2, 2010 at 2:33 PM, Matt Lawrence  wrote:

> On Fri, 2 Jul 2010, Nicholas Tang wrote:
>
> > Have you considered putting a proxy / load balancer in front of it
> (running
> > on a modern OS :) ) - varnish, for instance, or something like nginx -
> that
> > can absorb the workload of managing the TCP connections and just hand off
> > the individual requests to the old Apache box on the back-end?  There are
> > others, too... haproxy... perlbal... I'm not sure how effectively they
> work
> > at TCP offload, but something like varnish or nginx should do the job.
>  It
> > wouldn't require any changes to the actual server on the back-end, except
> > *maybe* changing an ip address or port, and could be done pretty much
> > entirely through DNS changes/ tricks.
> >
> > (I say this with no real knowledge of your infrastructure, but in theory,
> it
> > should work.)
>
> A lot of the architecture in this setup is ten years old.  Things have
> changed a lot in the last decade.  The currently accepted way of doing
> things is a pair of redundant load balancers talking to multiple
> application servers that in turn talk to a redundant database setup.  Ten
> years ago, not so much.  Ten years ago setting up application servers in a
> HA cluster was the way to go.  Today I'm suggesting they get rid of the HA
> clustering and web servers, let the load balancers handle the failover.
> Application servers should no longer be aware of their backup server, they
> should just be aware that other application servers are running in
> parallel.
>
> Longer term, I'm hoping to convince the folks here to go to architectures
> that have been well accepted in the industry for the past few years.
> Shorter term they are going to throw better hardware and RHEL 5 at the
> problem.  For the next few days, I'm trying to convince them that dropping
> the tcp_fin_timeout to 30 seconds may get them by without more crashes.
> Webserver has already wedged once today, the backup server in the cluster
> took over without any issues (everyone was surprised that it actually
> worked this time).
>
> -- Matt
> It's not what I know that counts.
> It's what I can remember in time to use.


Re: [lopsa-tech] Anybody with Netapp (take 2)

2010-07-02 Thread Tom Limoncelli
Sorry for replying to an old thread but...

Just to repeat the problem:

Suppose you put info on a Netapp that wasn't supposed to be stored online.
 It gets into a snapshot.  You can't delete it from the snapshot, because
snapshots are read-only.  You can delete the snapshot, but what if your
users need the snapshot for other reasons?

What doesn't work:
You can try to set the permissions of the parent directory to be  so
nobody can get to it from
/home/$USER/path/to/the/.snapshot/*/file
(for example, set "to" to mode ).
Why doesn't that work?  People can get to the file through this path
instead:
/home/$USER/.snapshot/*/path/to/the/file

Here's the new information that I have: you can disable the ability for
people to view snapshots until the snapshot has expired and been deleted
naturally.  Presumably this doesn't affect your backups and other systems.
If, during that time, someone needs access to snapshotted data, you can
enable it briefly and then re-disable it, thereby reducing the risk window.
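
If memory serves, on a 7-mode filer that is the per-volume snapshot
directory visibility option; a sketch, with an invented volume name (worth
double-checking against your ONTAP release):

vol options vol_home nosnapdir on     # hide the .snapshot directories from clients
# when someone needs a restore, expose them briefly, then hide them again:
vol options vol_home nosnapdir off
vol options vol_home nosnapdir on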

Tom


On Wed, Apr 28, 2010 at 8:06 PM, Edward Ned Harvey wrote:

> I hope you like academic exam questions, long since after you completed
> your
> degree.  ;-)
>
> Here's a new question for netapp admins:
>
> (as root)
> mkdir -p a/b/c
> echo "secret info" > a/b/c/info.txt
> chmod 777 a
> chmod 700 a/b
> chmod 777 a/b/c
> chmod 666 a/b/c/info.txt
>
> Now, a normal user should not have any access to info.txt because they get
> blocked by the 700 perms at the "b" directory.  But if the file were moved
> outside the "b" directory, or if the perms were more permissive on the "b"
> directory, then normal users could have access.  The only obstacle stopping
> users from accessing "secret info" are the 700 perms on "b" directory.
>
> Create snapshot.
>
> echo "public info" > a/b/c/info.txt
> Now, do one of the following:
>  mv a/b/c a/c
>  or
>  chmod 777 a/b
>
> By doing this, normal users have been granted access to info.txt, but if
> they read it, they'll only see "public info."  But the question is:  Can a
> normal user access "secret info" in either a/c/.snapshot, or in
> a/b/c/.snapshot?
>
>



-- 
http://EverythingSysadmin.com  -- my blog
http://www.TomOnTime.com -- my advice


Re: [lopsa-tech] lowering net.ipv4.tcp_fin_timeout

2010-07-02 Thread Matt Lawrence
On Fri, 2 Jul 2010, Nicholas Tang wrote:

> Have you considered putting a proxy / load balancer in front of it (running
> on a modern OS :) ) - varnish, for instance, or something like nginx - that
> can absorb the workload of managing the TCP connections and just hand off
> the individual requests to the old Apache box on the back-end?  There are
> others, too... haproxy... perlbal... I'm not sure how effectively they work
> at TCP offload, but something like varnish or nginx should do the job.  It
> wouldn't require any changes to the actual server on the back-end, except
> *maybe* changing an ip address or port, and could be done pretty much
> entirely through DNS changes/ tricks.
>
> (I say this with no real knowledge of your infrastructure, but in theory, it
> should work.)

A lot of the architecture in this setup is ten years old, and things have
changed a lot in the last decade.  The currently accepted way of doing
things is a pair of redundant load balancers talking to multiple
application servers, which in turn talk to a redundant database setup.  Ten
years ago, not so much: back then, setting up application servers in an HA
cluster was the way to go.  Today I'm suggesting they get rid of the HA
clustering on the web and application servers and let the load balancers
handle the failover.  Application servers should no longer be aware of
their backup server; they should just be aware that other application
servers are running in parallel.

Longer term, I'm hoping to convince the folks here to move to
architectures that have been well accepted in the industry for the past few
years.  Shorter term, they are going to throw better hardware and RHEL 5 at
the problem.  For the next few days, I'm trying to convince them that
dropping tcp_fin_timeout to 30 seconds may get them by without more
crashes.  The web server has already wedged once today; the backup server
in the cluster took over without any issues (everyone was surprised that it
actually worked this time).

-- Matt
It's not what I know that counts.
It's what I can remember in time to use.


Re: [lopsa-tech] storing fsfs-formatted svn repos atop nfs: good bad or ugly?

2010-07-02 Thread Tom Perrine
Nick Silkey wrote:

> It appears fsfs-formatted svn repos are indeed NFS safe, but I wanted
> to ask the audience.  Anyone with experience doing this (good, bad,
> otherwise), let me know.  I welcome responses like 'yeah, it can be
> done.  i did it, but the performance stunk!' ... not just the simple
> 'yes' or 'no'.

We've been running SVN (and Perforce) over NFS for years.  Clients are
Mac, Windows, and Linux, and the NFS servers have been at least 4
generations of NetApp.

I believe that for a while we even had the same filesystems exported via
both NFS and CIFS for those repositories; I can look into it further if
anyone is interested.

--tep



Re: [lopsa-tech] lowering net.ipv4.tcp_fin_timeout

2010-07-02 Thread Nicholas Tang
Have you considered putting a proxy / load balancer in front of it (running
on a modern OS :) ) - varnish, for instance, or something like nginx - that
can absorb the workload of managing the TCP connections and just hand off
the individual requests to the old Apache box on the back end?  There are
others, too... haproxy... perlbal... I'm not sure how effectively they
handle TCP offload, but something like varnish or nginx should do the job.
It wouldn't require any changes to the actual server on the back end,
except *maybe* changing an IP address or port, and could be done pretty
much entirely through DNS changes/tricks.

(I say this with no real knowledge of your infrastructure, but in theory,
it should work.)
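
To make that concrete, here's a minimal sketch of the nginx flavor of the
idea (the config path and the upstream address are invented):

cat > /etc/nginx/conf.d/legacy-proxy.conf <<'EOF'
server {
    listen 80;
    keepalive_timeout 5;               # nginx soaks up the many short client connections
    location / {
        proxy_pass http://10.0.0.5:80; # untouched legacy Apache box behind it
    }
}
EOF
nginx -t && nginx -s reload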

Nicholas

On Fri, Jul 2, 2010 at 12:35 PM, Matt Lawrence  wrote:

> On Thu, 1 Jul 2010, Matt Lawrence wrote:
>
> > On Thu, 1 Jul 2010, Aleksey Tsalolikhin wrote:
> >
> >> Do you have HTTP pipelining enabled?  So you can download all 60
> >> images over a single TCP/IP connection?
> >>
> >> This is "KeepAlive On" in Apache httpd httpd.conf
> >
> > Good point.  I just asked and it is turned off.  Since Apache configs are
> > out of my juristiction, I have passed along the recommendation that it be
> > turned on.  Thanks for the pointer.
>
> I have done more research.  The reason it is turned off is that the number
> of unique visitors is so high that Apache winds up with too many idle
> threads eating up too much memory which also crashes the box.  I'm trying
> to chase down what KeepALiveTimeout is set to, perhaps it is higher than
> it need to be.
>
> -- Matt
> It's not what I know that counts.
> It's what I can remember in time to use.


Re: [lopsa-tech] lowering net.ipv4.tcp_fin_timeout

2010-07-02 Thread Matt Lawrence
On Thu, 1 Jul 2010, Matt Lawrence wrote:

> On Thu, 1 Jul 2010, Aleksey Tsalolikhin wrote:
>
>> Do you have HTTP pipelining enabled?  So you can download all 60
>> images over a single TCP/IP connection?
>>
>> This is "KeepAlive On" in Apache httpd httpd.conf
>
> Good point.  I just asked and it is turned off.  Since Apache configs are
> out of my juristiction, I have passed along the recommendation that it be
> turned on.  Thanks for the pointer.

I have done more research.  The reason it is turned off is that the number
of unique visitors is so high that Apache winds up with too many idle
threads eating up too much memory, which also crashes the box.  I'm trying
to chase down what KeepAliveTimeout is set to; perhaps it is higher than it
needs to be.
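
A quick sketch of the checks involved (the config path assumes a stock
RHEL-style layout, which may not match this particular box):

grep -Ei 'keepalive|maxclients' /etc/httpd/conf/httpd.conf
cat /proc/sys/net/ipv4/tcp_fin_timeout      # the kernel default is 60
sysctl -w net.ipv4.tcp_fin_timeout=30       # the shorter timeout proposed in this thread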

-- Matt
It's not what I know that counts.
It's what I can remember in time to use.


Re: [lopsa-tech] storing fsfs-formatted svn repos atop nfs: good bad or ugly?

2010-07-02 Thread Brandon S Allbery KF8NH

On 7/2/10 09:32 , Nick Silkey wrote:
> What was the format of the repo(s) during the 1.2 days when you hit
> corruption issues?  bdb?  I believe that with 1.2, fsfs became the
> default format for new repos ...

fsfs did have corruption issues in 1.2; they're long since fixed.  Several
of the fsfs developers have a strong interest in having fsfs work well in
MIT's AFS space.

-- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


Re: [lopsa-tech] storing fsfs-formatted svn repos atop nfs: good bad or ugly?

2010-07-02 Thread Brandon S Allbery KF8NH

On 7/2/10 09:06 , Nick Silkey wrote:
> It appears fsfs-formatted svn repos are indeed NFS safe, but I wanted
> to ask the audience.  Anyone with experience doing this (good, bad,
> otherwise), let me know.  I welcome responses like 'yeah, it can be
> done.  i did it, but the performance stunk!' ... not just the simple
> 'yes' or 'no'.

I haven't tried it with NFS, but it works fine in AFS read/write volumes.
(fsfs was specifically designed to work in AFS/NFS; early performance was
weak but it's improved with every release.)

-- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


Re: [lopsa-tech] storing fsfs-formatted svn repos atop nfs: good bad or ugly?

2010-07-02 Thread Nathan Hruby
On Fri, Jul 2, 2010 at 8:06 AM, Nick Silkey  wrote:
> tech@ --
>
> $work has many svn repos hosted atop direct-attached disk. As time
> rolls on, we are encountering space constraint where they are stored.
> Rather than take an outage to resize disk/move repos/etc where this
> could potentially happen again over time, were looking to move this to
> our 3170 filers where disk is air (need more space?  *poof*).
>
> It appears fsfs-formatted svn repos are indeed NFS safe, but I wanted
> to ask the audience.  Anyone with experience doing this (good, bad,
> otherwise), let me know.  I welcome responses like 'yeah, it can be
> done.  i did it, but the performance stunk!' ... not just the simple
> 'yes' or 'no'.

We have many tens of thousands of svn repos on NFS with no NFS-specific
tuning for the repo usage; mostly it's perfectly happy.

You can see performance issues with httpd + mod_dav_svn and large repos
(where "large" means both the size of the files stored and the number of
commits) when your clients are separated from the server by slow or
high-latency links.  For example, an "svn up" that pulls 1,000 changes via
DAV to a client on a DSL link on the other side of the globe from the
server tends to work poorly, because the command channel times out while
other threads are doing pull operations.  If that sort of thing is a big
use case for you, I'd recommend looking into using svnserve as well as, or
instead of, DAV.

HTH,

-n
-- 
---
nathan hruby 
metaphysically wrinkle-free
---



Re: [lopsa-tech] storing fsfs-formatted svn repos atop nfs: good bad or ugly?

2010-07-02 Thread Nick Silkey
On Fri, Jul 2, 2010 at 9:19 AM, Edward Ned Harvey  wrote:
> Where did you hear that?

The Internets via Google.  :)

> I did that, and I ended up with corrupted repository.  But it was a few
> years ago, probably svn 1.2.  So maybe they improved it since then?
>
> IMHO, there's no reason *not* to lock down the repo directory, and run
> everything through svnserve.  It's the only way (aside from apache) that I
> feel the repo is safe.  Plus suddenly you can access it from windows clients
> and make it available to other networks and so on.  (IMHO, tortoisesvn and
> tortoise diff are enormous value-adds.)
>
> Also if you're using file:/// to access the repo, does it keep track of
> which users made which changes?

We currently front the repos with Apache 2.2 + mod_dav_svn, with authn
tied to AD via mod_ldap and authz via mod_authz_svn.  We're happy with
this, apart from the many CVS/RCS repos coming out of the woodwork, moving
in, and constraining the disk where we store repos.  ;)

Also, FWIW, all clients are at least 1.5+ for mergeinfo purposes; servers
are 1.6+.

What was the format of the repo(s) during the 1.2 days when you hit
corruption issues?  bdb?  I believe that with 1.2, fsfs became the
default format for new repos ...



Re: [lopsa-tech] storing fsfs-formatted svn repos atop nfs: good bad or ugly?

2010-07-02 Thread Edward Ned Harvey
> From: tech-boun...@lopsa.org [mailto:tech-boun...@lopsa.org] On Behalf
> Of Nick Silkey
> 
> It appears fsfs-formatted svn repos are indeed NFS safe, but I wanted
> to ask the audience.  

Where did you hear that?


> Anyone with experience doing this (good, bad,
> otherwise), let me know.  I welcome responses like 'yeah, it can be
> done.  i did it, but the performance stunk!' ... not just the simple
> 'yes' or 'no'.

I did that, and I ended up with corrupted repository.  But it was a few
years ago, probably svn 1.2.  So maybe they improved it since then?

IMHO, there's no reason *not* to lock down the repo directory and run
everything through svnserve.  It's the only way (aside from Apache) that I
feel the repo is safe.  Plus, suddenly you can access it from Windows
clients and make it available to other networks and so on.  (IMHO,
TortoiseSVN and its diff tooling are enormous value-adds.)

Also if you're using file:/// to access the repo, does it keep track of
which users made which changes?



[lopsa-tech] storing fsfs-formatted svn repos atop nfs: good bad or ugly?

2010-07-02 Thread Nick Silkey
tech@ --

$work has many svn repos hosted atop direct-attached disk.  As time rolls
on, we are running into space constraints where they are stored.  Rather
than take an outage to resize disks / move repos / etc. (where this could
potentially happen again over time), we're looking to move them to our 3170
filers, where disk is air (need more space?  *poof*).

It appears fsfs-formatted svn repos are indeed NFS-safe, but I wanted to
ask the audience.  Anyone with experience doing this (good, bad, or
otherwise), let me know.  I welcome responses like 'yeah, it can be done.
I did it, but the performance stunk!' ... not just a simple 'yes' or 'no'.

Thanks.

-nick