Neil Brown wrote:
On Friday April 13, [EMAIL PROTECTED] wrote:
Dear All,
I have an 8-drive raid-5 array running under 2.6.11. This morning it
bombed out, and when I brought it up again, two drives had incorrect
event counts:
sda1: 0.8258715
sdb1: 0.8258715
sdc1: 0.8258715
sdd1:
mdadm --create --assume-clean - but I'm not sure which drives should
be included/excluded when I do this.
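For what it's worth, the usual first step on this list before any re-create is to compare the members' event counts and try a forced assemble. A sketch, assuming the members are /dev/sd[a-h]1 (adjust to match the actual array):

```sh
# Compare event counts across the members; out-of-date drives will
# show lower "Events" values than the rest.
mdadm --examine /dev/sd[a-h]1 | grep -E 'dev|Events'

# Try a forced assemble first: mdadm will bump the event counts of
# members that are only slightly behind, without rewriting any data.
mdadm --assemble --force /dev/md0 /dev/sd[a-h]1
```

A --create --assume-clean with the wrong device order or chunk size will scramble the array, so --assemble --force is the safer first attempt.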
Many thanks!
Chris Allen.
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org
俞先印 wrote:
I want to create a raid0 array with mdadm 2.5.6, kernel 2.6.18-iop3,
on an Intel IOP80331 (32-bit), using 5 disks of 500G each. But the
array can't go beyond 2T. How can I get past 2T on a 32-bit cpu?
command and log :
#mdadm -C /dev/md0 -l0 -n5 /dev/sd[c,d,e,f,g]
#
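The 2TB ceiling here is characteristic of a 32-bit 2.6 kernel built without large-block-device support (the CONFIG_LBD option), which leaves block addresses in a 32-bit sector_t. A sketch of the arithmetic, assuming 512-byte sectors:

```sh
# 2^32 addressable sectors x 512 bytes/sector = the 2 TiB ceiling
limit=$(( (1 << 32) * 512 ))
echo "$limit bytes"   # 2199023255552 bytes = 2 TiB
```

Rebuilding the kernel with CONFIG_LBD=y is the usual way past this limit on 32-bit.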
Andrew Moise wrote:
On 10/17/06, Gordon Henderson [EMAIL PROTECTED] wrote:
I have had problems with XFS, but that was about 2 years ago, so
things might have improved since then.
Well, filling some random files with zeroes because of an unclean
shutdown is still defined as correct behavior in
Ok, after more testing, this lockup happens consistently when
bitmaps are switched on and never when they are switched off.
Ideas anybody?
Neil Brown wrote:
On Monday October 9, [EMAIL PROTECTED] wrote:
Ok, after more testing, this lockup happens consistently when
bitmaps are switched on and never when they are switched off.
Are you happy to try a kernel.org kernel with a few patches and a
little shell script running?
Neil Brown wrote:
On Tuesday June 13, [EMAIL PROTECTED] wrote:
Will that fix be in 2.6.17?
Probably not. We have had the last 'rc' twice and so I don't think
it is appropriate to submit the patch at this stage.
I probably will submit it for an early 2.6.17.x, and for 2.6.16.y.
Nix wrote:
On 25 Jun 2006, Chris Allen uttered the following:
Back to my 12 terabyte fileserver, I have decided to split the storage
into four partitions each of 3TB. This way I can choose between XFS
and EXT3 later on.
So now, my options are between the following:
1. Single 12TB /dev/md0
Gordon Henderson wrote:
I use option 2 (above) all the time, and I've never noticed any
performance issues (nor issues with recovery after a power failure). I'd
like to think that on a modern processor the CPU can handle the parity,
etc. calculations several orders of magnitude faster than the
Adam Talbot wrote:
ACK!
At one point someone stated that they were having problems with XFS
crashing under high NFS loads... Did it look something like this?
-Adam
nope, it looked like the trace below - and I could make it happen
consistently by thrashing xfs.
Not even sure it was
Back to my 12 terabyte fileserver, I have decided to split the storage
into four partitions each of 3TB. This way I can choose between XFS
and EXT3 later on.
So now, my options are between the following:
1. Single 12TB /dev/md0, partitioned into four 3TB partitions. But
how do I do this? fdisk
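For the record, DOS partition tables (and hence the fdisk of that era) top out at 2TB, so a GPT label written with parted is the usual route to 3TB partitions on a 12TB device. A sketch, assuming the array is /dev/md0:

```sh
# GPT has no 2TB limit; carve the array into four equal partitions.
parted /dev/md0 mklabel gpt
parted /dev/md0 mkpart primary 0% 25%
parted /dev/md0 mkpart primary 25% 50%
parted /dev/md0 mkpart primary 50% 75%
parted /dev/md0 mkpart primary 75% 100%
```

The kernel then exposes them as /dev/md0p1 through /dev/md0p4 (naming varies by kernel version), each usable for XFS or EXT3 independently.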
Martin Schröder wrote:
2006/6/23, Francois Barre [EMAIL PROTECTED]:
Losing data is worse than losing anything else. You can buy you
That's why RAID is no excuse for backups.
We have 50TB stored data now and maybe 250TB this time next year.
We mirror the most recent 20TB to a secondary
v2.5?
- I have read good things about bitmaps. Are these production ready? Any
advice/caveats?
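On the bitmap question: write-intent bitmaps can be added to (or removed from) an existing array without rebuilding it, via mdadm's grow mode. A sketch, assuming the array is /dev/md0:

```sh
# Add an internal write-intent bitmap (stored alongside the superblock);
# after an unclean shutdown only the dirty regions are resynced.
mdadm --grow /dev/md0 --bitmap=internal

# Remove it again if it causes trouble:
mdadm --grow /dev/md0 --bitmap=none
```

The trade-off is a small write-performance cost in exchange for much faster recovery after a crash.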
Many thanks for reading,
Chris Allen.
H. Peter Anvin wrote:
Gordon Henderson wrote:
On Thu, 22 Jun 2006, Chris Allen wrote:
Dear All,
I have a Linux storage server containing 16x750GB drives - so 12TB raw
space.
Just one thing - Do you want to use RAID-5 or RAID-6 ?
I just ask, as with that many drives (and that much data
On Sat, Mar 18, 2006 at 08:13:48AM +1100, Neil Brown wrote:
On Friday March 17, [EMAIL PROTECTED] wrote:
Dear All,
We have a number of machines running 4TB raid5 arrays.
Occasionally one of these machines will lock up solid and
will need power cycling. Often when this happens, the crash
has to be escalated to a senior engineer.
Is there any way of making the array so that there is
never more than one drive out of sync? I don't mind
if it slows things down *lots* - I'd just much prefer
robustness over performance.
Thanks,
Chris Allen
would it have sorted out the corruption?
Or would it have made things worse?
Any advice much appreciated.
Regards,
Chris Allen.