On 02/04/2012 11:39 AM, Boris Epstein wrote:
> On Sat, Feb 4, 2012 at 11:41 AM, Laurent Wandrebeck wrote:
>
>> Hi,
>>
>> I'm happily running moosefs (packages available in rpmforge repo) for a
>> year and a half, 120TB, soon 200. So easy to setup and grow it's
>> indecent :)
>>
>> Laurent.
>>
>>
On Feb 5, 2012, at 6:33 PM, John R Pierce wrote:
> I just tried a bunch of combinations on a 3 x 11 raid60 configuration
> plus 3 global hotspares, and decided that letting the controller (LSI
> 9260-8i MegaSAS2) do it was easier all the way around. of course, with
> other controllers, your
On 02/05/12 3:49 PM, Ljubomir Ljubojevic wrote:
> What about Software RAID 10 (far)? It gives 2 x read speed and 1 x write
> speed (speed of single HDD).
we use raid10 for all our database servers. often as many as 20 disks
in a single raid set.
--
john r pierce
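A back-of-the-envelope model of the "(far)" numbers Ljubomir quotes (a sketch only, not a benchmark; the function name and throughput figures are illustrative assumptions — with two far copies, sequential reads can stripe across every spindle while each write must land twice):

```python
def raid10_far(n_disks, disk_mbps, copies=2):
    """Rough throughput/capacity model for md RAID10 'far' layout.
    Reads stripe across all spindles; writes pay for every copy."""
    usable_disks = n_disks / copies
    seq_read = n_disks * disk_mbps        # all spindles serve reads
    seq_write = usable_disks * disk_mbps  # each write hits `copies` disks
    return usable_disks, seq_read, seq_write

# Two 100 MB/s drives: 2x read, 1x write of a single drive,
# matching the figures quoted above.
print(raid10_far(2, 100))   # (1.0, 200, 100)
```

The same model scales to John's 20-disk sets: `raid10_far(20, 100)` gives ten drives of usable space with reads striped over all twenty spindles.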
On 02/06/2012 12:33 AM, John R Pierce wrote:
> On 02/05/12 3:24 PM, Ross Walker wrote:
>> It might be easier to do the striping in software cause that's a zero
>> over-head operation and it makes the hardware RAID easier to setup, maintain
>> and can make rebuilds less painful depending on the controller.
On 02/05/12 2:42 PM, Boris Epstein wrote:
> What you are saying seems to make sense actually. I wonder how much a RAID6
> with a few spares would make sense. If we are talking a large number of
> disks then RAID 6 + 2 spares means overpaying only for 5 disks. Not a lot
> if the total number of them
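The spare-versus-capacity arithmetic being discussed can be made concrete (a sketch; `raid6_overhead` is a hypothetical helper — RAID6 costs two parity drives regardless of array width, plus however many hot spares you park beside it):

```python
def raid6_overhead(n_disks, spares=2, parity=2):
    """Usable drives in a RAID6 set with hot spares, and the
    fraction of the purchase that is redundancy rather than data."""
    data = n_disks - parity - spares
    return data, (parity + spares) / n_disks

# 30 drives bought, 26 hold data: ~13% of the spend is redundancy,
# and the fraction only shrinks as the array grows.
data, frac = raid6_overhead(30)
print(data, round(frac, 3))   # 26 0.133
```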
Boris Epstein wrote on 02/04/2012 11:57 AM:
> What is RAID0+1?
Nested RAID. Paraphrasing http://en.wikipedia.org/wiki/RAID :
For a RAID 0+1, drives are first combined into multiple level 0 RAIDs
that are themselves treated as single drives to be combined into a
single RAID 1.
Phil
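Phil's description can be turned into a toy failure model (a sketch; the disk names and `raid01_survives` are made up for illustration): a RAID 0+1 keeps running only while at least one whole stripe set is still intact, which is why it tolerates fewer failure combinations than RAID 1+0.

```python
def raid01_survives(stripe_sets, failed):
    """RAID 0+1: the stripe sets are mirrored, so data survives as long
    as at least one stripe set contains no failed disk."""
    return any(not (set(s) & set(failed)) for s in stripe_sets)

# Disks a,b striped into one set, mirrored by the stripe set c,d.
sets_ = [("a", "b"), ("c", "d")]
assert all(raid01_survives(sets_, [d]) for d in "abcd")  # any one failure: OK
assert raid01_survives(sets_, ["a", "b"])       # both failures in one set: OK
assert not raid01_survives(sets_, ["a", "c"])   # one per set: array lost
```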
On 04/02/2012 18:39, Boris Epstein wrote:
> Hello Laurent,
>
> Thanks! Very useful info, I had never even heard of MooseFS and it
> sounds very nice.
>
> One question: what happens if you lose your master server in their
> designation? Or is it
Boris Epstein writes:
>>
>> Hello Boris,
>>
>> I'm in a similar search for a scalable and resilient solution. So far I
>> like glusterfs, relatively easy to setup, no meta-server required, decent
>> performance, but I haven't tested it thoroughly. Been playing with their
>> latest beta release in a raid0+1 setup; haven't m
Hi,
I'm happily running moosefs (packages available in rpmforge repo) for a
year and a half, 120TB, soon 200. So easy to setup and grow it's
indecent :)
Laurent.
___
CentOS mailing list
CentOS@centos.org
Hello listmates,
This is not specifically CentOS-related - though I will probably execute
this design on CentOS if I decide to do so. It will certainly be some kind
of Linux.
At any rate, here's my situation. I would like to build a fairly large
storage solution (let us say, 100 TB). I want this