I had the same problem.
Re-doing the partition type from ext2 (plain Linux) to Linux raid autodetect fixed my
problem, but I see you're already using that partition type.
Maybe it was the action of re-partitioning in general that fixed my
problem? You could try deleting it and re-creating that partition,
syncing, and rebooting?
Greg
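In case it helps, the usual way to redo that with fdisk is just to flip the
partition type to Linux raid autodetect (sketch only; /dev/sdX and the
partition number are placeholders for your member disk):

  fdisk /dev/sdX
    t        # change a partition's type
    1        # partition number of the array member
    fd       # Linux raid autodetect
    w        # write the table and quit
  sync
  reboot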
> I wonder how long it would take to run an fsck on one large filesystem? :)
I would imagine you'd have time to order a new system, build it, and
restore the backups before the fsck was done!
> Also, don't use ext*, XFS can be up to 2-3x faster (in many of the
> benchmarks).
I'm going to swap file systems and give it a shot right now! :)
How is the stability of XFS? I've heard recovery is easier with ext2/3 due to
more people using it, more tools being available, etc.?
Greg
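If you do go with XFS, it's worth aligning it to the RAID geometry when you
make the filesystem. A rough sketch, assuming the array is /dev/md0 with a
64k chunk and three data disks (adjust su/sw to your actual layout):

  # su = chunk size, sw = number of data disks in the stripe
  mkfs.xfs -d su=64k,sw=3 /dev/md0
  mount /dev/md0 /mnt/array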
Justin, thanks for the script. Here are my results. I ran it a few times
with different tests, hence the small number of results you see here;
I slowly trimmed out the obviously non-ideal sizes.
System
---
Athlon64 3500+
2 GB RAM
4x 500 GB WD RAID Edition drives in RAID 5. sde is the old 4-platter version
(5000YS).
What sort of tools are you using to get these benchmarks, and can I
use them for ext3?
Very interested in running this on my server.
Thanks,
Greg
On Jan 16, 2008 11:13 AM, Justin Piszcz <[EMAIL PROTECTED]> wrote:
> For these benchmarks I timed how long it takes to extract a standard 4.4
> GiB
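The same sort of benchmark is easy to reproduce with nothing fancier than tar
and time, and it works identically on ext3 or XFS. A sketch, with the archive
path and mount point as placeholders:

  cd /mnt/array
  sync
  echo 3 > /proc/sys/vm/drop_caches      # drop caches so reads really hit the disks
  time sh -c 'tar xf /path/to/large-archive.tar; sync'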
Hey guys.
I've got a 1TB RAID5 array with 3x500gb drives. I just RMA'd a drive.
Can someone help me properly swap out /dev/sdd for the new drive?
I've done this before, but I had issues that bugged me. Everything
worked fine, but when I examined the details on the drives, I'd have FOUR
drives listed
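For reference, the usual swap sequence looks roughly like this (sketch only;
device names and the partition layout are assumptions, so check
mdadm --detail before and after each step):

  mdadm /dev/md0 --fail /dev/sdd1       # mark the old member failed, if it isn't already
  mdadm /dev/md0 --remove /dev/sdd1     # drop it from the array
  # partition the replacement identically to the other members, e.g.:
  sfdisk -d /dev/sdb | sfdisk /dev/sdd
  mdadm /dev/md0 --add /dev/sdd1        # add the new member; the resync starts
  cat /proc/mdstat                      # watch the rebuild progress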
So I've been slowly expanding my knowledge of mdadm/linux raid.
I've got a 1 terabyte array which stores mostly large media files, and
from my reading, increasing the stripe size should really help my
performance
Is there any way to do this to an existing array, or will I need to
back up the array
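Whether the chunk size (what md calls the stripe unit) can be changed in place
depends on the mdadm/kernel version: recent mdadm can reshape it with --grow,
while older versions mean backing up, re-creating the array with --chunk, and
restoring. A sketch of the in-place route, assuming /dev/md0 and a 256 KiB
target chunk:

  # needs an mdadm/kernel new enough to support chunk-size reshape
  mdadm --grow /dev/md0 --chunk=256 --backup-file=/root/md0-reshape-backup
  cat /proc/mdstat                      # the reshape progress shows up here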
Any reason 0.9 is the default? Should I be worried about using 1.0
superblocks? And can I "upgrade" my array from 0.9 to 1.0 superblocks?
Thanks,
Greg
On 11/1/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> On Tuesday October 30, [EMAIL PROTECTED] wrote:
> > Which is the default type of superblock? 0
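For what it's worth, newer mdadm can rewrite 0.90 metadata as version 1.0 at
assembly time, since both formats live at the end of the device. A sketch
only, assuming your mdadm supports --update=metadata and the array is stopped
first:

  mdadm --stop /dev/md0
  mdadm --assemble /dev/md0 --update=metadata /dev/sdb1 /dev/sdc1 /dev/sdd1
  mdadm --examine /dev/sdb1 | grep Version    # should now report 1.0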
Which is the default type of superblock? 0.90 or 1.0?
On 10/30/07, Neil Brown <[EMAIL PROTECTED]> wrote:
> On Friday October 26, [EMAIL PROTECTED] wrote:
> > Can someone help me understand superblocks and MD a little bit?
> >
> > I've got a raid5 array with 3 disks - sdb1, sdc1, sdd1.
> >
> > --ex
Can someone help me understand superblocks and MD a little bit?
I've got a raid5 array with 3 disks - sdb1, sdc1, sdd1.
--examine on these 3 drives shows correct information.
However, if I also examine the raw disk devices, sdb and sdd, they
also appear to have superblocks with some semi-valid
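One way to check whether those whole-disk superblocks are real or just the
partition's own 0.90 superblock showing through (0.90 metadata sits at the end
of the device, so a partition that runs to the end of the disk and the raw
disk can read back the same record) is to compare UUIDs:

  mdadm --examine /dev/sdb1 | grep UUID
  mdadm --examine /dev/sdb  | grep UUID
  # identical UUIDs usually mean you are seeing the same end-of-disk
  # superblock twice, not a second array on the raw device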