On 03/29/2018 03:48 PM, Alexander Schreiber via cctalk wrote:
> Also, AFS is built around volumes (think "virtual disks") and you have
> the concept of a r/w volume with (potentially) a pile of r/o volumes
> snapshotted from it. So one thing I did was that every (r/w) volume
> had a directory
On Wed, Mar 28, 2018 at 01:17:08PM -0400, Ethan via cctalk wrote:
> > I know of no RAID setup that can save me from stupid.
>
> I use rsync. I manually rsync the working disks to the backup disks every
> week or two. Working disks have the shares to other hosts. If something
> happens to that
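A weekly pass like the one Ethan describes is easy to script. A minimal sketch in Python; the mount points are hypothetical, and the flags shown (-a, --delete, --dry-run) are standard rsync options:

```python
import subprocess

def rsync_cmd(src, dst, dry_run=True):
    """Build the argument list for one mirror pass from the working
    disk to the backup disk. -a preserves permissions and timestamps,
    --delete mirrors deletions too (which also propagates mistakes,
    hence defaulting to a --dry-run preview)."""
    cmd = ["rsync", "-a", "--delete"]
    if dry_run:
        cmd.append("--dry-run")
    # Trailing slash on src: copy the directory's contents, not the dir.
    cmd += [src.rstrip("/") + "/", dst]
    return cmd

# Hypothetical mount points for the working and backup disks.
cmd = rsync_cmd("/srv/working", "/mnt/backup/working")
# subprocess.run(cmd, check=True)   # uncomment to actually run rsync
print(" ".join(cmd))
```

Note that --delete is a double-edged sword: it keeps the mirror exact, but it also faithfully replicates an accidental deletion on the next pass, which is exactly the "can't save me from stupid" point above.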
On Wed, Mar 28, 2018 at 05:40:29PM -0700, Richard Pope via cctalk wrote:
> I have been kind of following this thread. I have a question about MTBF. I
> have four HGST UltraStar Enterprise 2TB drives setup in a Hardware RAID 10
> configuration. If the MTBF is 100,000 Hrs for each drive, does this mean
> that the total MTBF is 25,000 Hrs?
It's not quite that bad. The answer is that the MTBF of four drives is
probably not simply the MTBF of one drive divided by four. If you have a good
description of the probability of failure as a function of drive age (i.e., a
picture of its particular "bathtub curve") you can then work out
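To make the arithmetic concrete: under the simplifying assumption Fred warns about (a constant failure rate, i.e. ignoring the bathtub curve, and with no rebuilds), the expected time to the *first* failure among four drives really is MTBF/4, but RAID 10 only loses data when both drives of one mirrored pair die. A Monte Carlo sketch in Python:

```python
import random

random.seed(0)
MTBF = 100_000.0  # hours per drive; constant-failure-rate assumption

def time_to_data_loss(n_pairs=2):
    """One simulated RAID 10 array of mirrored pairs, no rebuilds.
    A pair survives until its SECOND drive dies (max of two failure
    times); the array loses data when the first pair is gone (min)."""
    return min(max(random.expovariate(1 / MTBF) for _ in range(2))
               for _ in range(n_pairs))

trials = 50_000
first_failure = sum(min(random.expovariate(1 / MTBF) for _ in range(4))
                    for _ in range(trials)) / trials
data_loss = sum(time_to_data_loss() for _ in range(trials)) / trials

print(f"mean time to first drive failure: {first_failure:,.0f} h")  # ~25,000
print(f"mean time to RAID 10 data loss:   {data_loss:,.0f} h")      # ~92,000
```

With rebuilds and a realistic bathtub curve the numbers change substantially, which is the point being made here: the honest answer depends on the failure distribution over the drives' lifetime, not on a single MTBF figure.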
Fred,
I appreciate the explanation. So without a 1,000, 10,000, or even
100,000 drives there is no way to know how long the drives in my RAID
will last. All I know for sure is that I can lose any one drive and the
RAID can be rebuilt.
GOD Bless and Thanks,
rich!
On 3/28/2018 4:43 PM,
Hello all,
I have been kind of following this thread. I have a question about
MTBF. I have four HGST UltraStar Enterprise 2TB drives setup in a
Hardware RAID 10 configuration. If the MTBF is 100,000 Hrs for each
drive does this mean that the total MTBF is 25,000 Hrs?
GOD Bless and
On 03/28/2018 12:32 PM, Fred Cisin via cctalk wrote:
With very unreliable drives, that isn't acceptable. If each "drive"
within the RAID were itself a RAID, . . . Getting to be a complicated
controller, or cascading controllers, . . .
Many of the SCSI / SAS RAID controllers that I've worked
On Wed, Mar 28, 2018 at 09:33:38AM -0400, Paul Koning via cctalk wrote:
[...]
> The basic assumption is that failures are "fail stop", i.e., a drive refuses
> to deliver data. (In particular, it doesn't lie -- deliver wrong data. You
> can build systems that deal with lying drives but RAID is not
> On Mar 28, 2018, at 2:32 PM, Fred Cisin via cctalk
> wrote:
>
>>> How many drives would you need, to be able to set up a RAID, or hot
>>> swappable RAUD (Redundant Array of Unreliable Drives), that could give
>>> decent reliability with such drives?
>>> How many to
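Paul's fail-stop distinction is worth illustrating: plain RAID trusts whatever a drive returns, so a drive that "lies" defeats it. Systems that tolerate wrong data keep an independent checksum per block (ZFS, for example, does this end to end). A minimal sketch in Python, with a toy in-memory block store standing in for the disk:

```python
import zlib

class CheckedStore:
    """Toy block store: each block carries a CRC32 recorded separately
    from the data, so silently corrupted (not merely missing) data is
    detected on read instead of being trusted."""
    def __init__(self):
        self.blocks = {}

    def write(self, idx, data: bytes):
        self.blocks[idx] = (data, zlib.crc32(data))

    def read(self, idx) -> bytes:
        data, crc = self.blocks[idx]
        if zlib.crc32(data) != crc:
            raise IOError(f"block {idx}: checksum mismatch, drive lied")
        return data

store = CheckedStore()
store.write(0, b"payroll records")
print(store.read(0))                         # good data passes the check

_, old_crc = store.blocks[0]
store.blocks[0] = (b"paycoll records", old_crc)  # simulate silent corruption
try:
    store.read(0)
except IOError as e:
    print(e)                                 # the lie is caught, not returned
```

A fail-stop drive, by contrast, would simply return an error for the bad block, which is the only failure mode ordinary RAID is designed around.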
How many drives would you need, to be able to set up a RAID, or hot swappable
RAUD (Redundant Array of Unreliable Drives), that could give decent reliability
with such drives?
How many to be able to not have data loss if a second one dies before the first
casualty is replaced?
How many to be
On 03/28/2018 11:51 AM, David Brownlee via cctalk wrote:
A step up from rsync can be dirvish - it uses rsync, but before each
backup it creates a hardlink tree of the previous backup, then rsyncs
over it. The net effect is you only pay the block cost of one copy of
unchanged files, plus an
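The hardlink-tree trick David describes (the same effect rsync's --link-dest option provides) can be shown in miniature. A sketch in Python, not dirvish's actual code: unchanged files are hardlinked to the previous snapshot and cost no extra blocks, changed or new files are copied.

```python
import os, shutil, filecmp, tempfile

def snapshot(source, prev_snap, new_snap):
    """Create new_snap from source, hardlinking files unchanged since
    prev_snap (sharing their blocks) and copying the rest.
    prev_snap may be None for the first snapshot."""
    for root, _dirs, files in os.walk(source):
        rel = os.path.relpath(root, source)
        os.makedirs(os.path.join(new_snap, rel), exist_ok=True)
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(new_snap, rel, name)
            prev = os.path.join(prev_snap, rel, name) if prev_snap else None
            if prev and os.path.exists(prev) and filecmp.cmp(src, prev, shallow=False):
                os.link(prev, dst)      # unchanged: zero extra block cost
            else:
                shutil.copy2(src, dst)  # changed or new: full copy

# Tiny demonstration in a temp directory.
base = tempfile.mkdtemp()
src = os.path.join(base, "data")
os.makedirs(src)
with open(os.path.join(src, "a.txt"), "w") as f:
    f.write("stable")
snapshot(src, None, os.path.join(base, "snap1"))
snapshot(src, os.path.join(base, "snap1"), os.path.join(base, "snap2"))
# a.txt in snap1 and snap2 share one inode:
print(os.stat(os.path.join(base, "snap2", "a.txt")).st_nlink)  # 2
```

Each snapshot directory looks like a full copy, but N snapshots of a mostly static tree cost little more than one copy plus the deltas, which is the "only pay the block cost of one copy of unchanged files" property.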
On 03/28/2018 11:17 AM, Paul Berger via cctalk wrote:
You mean something like someone who writes a script that does a blind cd
to the directory and then proceeds to delete the contents?
This is one of the primary reasons that I prefer to see the full path
specified on the rm command.
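The same defensive habit translates to scripts: don't separate the cd from the delete. A hypothetical sketch in Python (the example path is made up) of naming the full path in the destructive call, so a wrong path is a loud error rather than a wipe of whatever directory you happen to be sitting in:

```python
import os, shutil

def remove_tree(target):
    """Delete target by its full path. If the path is wrong, this
    fails loudly -- unlike `cd target && rm *`, where a failed cd
    can leave rm running in the current directory."""
    if not os.path.isabs(target):
        raise ValueError(f"refusing relative path: {target!r}")
    if not os.path.isdir(target):
        raise FileNotFoundError(target)
    shutil.rmtree(target)

# remove_tree("/mnt/backup/old-snapshots")   # hypothetical path
```

The relative-path check is the script equivalent of always typing the full path on the rm command line.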
On 2018-03-28 2:09 PM, Ali via cctalk wrote:
Original message
From: Chuck Guzis via cctalk <cctalk@classiccmp.org>
Date: 3/28/18 10:02 AM (GMT-08:00)
To: Paul Koning via cctalk <cctalk@classiccmp.org>
Subject: Re: RAID? Was: PATA hard disks, anyone?
I know of no RAID setup that can save me from stupid.
I use rsync. I manually rsync the working disks to the backup disks every
week or two. Working disks have the shares to other hosts. If something
happens to that data, deleted by accident or encrypted by malware. Meh.
Hardware like
On 03/28/2018 06:33 AM, Paul Koning via cctalk wrote:
> These are straightforward questions of probability math, but it takes
> some time to get the details right. For one thing, you need
> believable numbers for the underlying error probabilities. And you
> have to analyze the cases carefully.
> On Mar 27, 2018, at 8:51 PM, Fred Cisin via cctalk
> wrote:
>
> Well outside my realm of expertise (as if I had a realm!), . . .
>
> How many drives would you need, to be able to set up a RAID, or hot swappable
> RAUD (Redundant Array of Unreliable Drives), that
On 28 March 2018 at 02:51, Fred Cisin via cctalk wrote:
> Well outside my realm of expertise (as if I had a realm!), . . .
>
> How many drives would you need, to be able to set up a RAID, or hot
> swappable RAUD (Redundant Array of Unreliable Drives), that could give
>
On 2018-03-27 10:05 PM, Ali via cctalk wrote:
Original message
From: Fred Cisin via cctalk <cctalk@classiccmp.org>
Date: 3/27/18 5:51 PM (GMT-08:00)
To: "General Discussion: On-Topic and Off-Topic Posts" <cctalk@classiccmp.org>
Subject: RAID? Was: PATA hard disks, anyone?
Well outside my realm of expertise (as if I had a realm!), . . .
How many drives would you need, to be able to set up a RAID, or hot
swappable RAUD (Redundant Array of Unreliable Drives), that could give
decent reliability with such drives?
How many to be able to not have data loss if a