From: [EMAIL PROTECTED] On Behalf Of Jeff Zheng
Sent: Thursday, 17 May 2007 5:39 p.m.
To: Neil Brown; [EMAIL PROTECTED]; Michal Piotrowski; Ingo Molnar; [EMAIL PROTECTED]; linux-kernel@vger.kernel.org; [EMAIL PROTECTED]
Subject: RE: Software raid0 will crash the file-system, when each disk is 5TB
Yeah
On Friday May 18, [EMAIL PROTECTED] wrote:
> Fix confirmed, filled the whole 11T hard disk, without crashing.
> I presume this would go into 2.6.22
Yes, and probably 2.6.21.y, though the patch will be slightly
different, see below.
>
> Thanks again.
And thank-you for pursuing this with me.
> Yeah, seems you've locked it down, :D. I've
On May 17 2007 21:11, Neil Brown wrote:
>On Thursday May 17, [EMAIL PROTECTED] wrote:
>> XOR it (0^0=1), and hence fills up the host disk.
>
>Uhmm... you need to check your maths.
>
>$ perl -e 'printf "%d\n", 0^0;'
>0
>
>:-)
(ouch)
You know just as well as I do that ^ is the power operator!
I just... wrongl
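
[For reference, the operator mix-up being joked about above: in Perl, ^ is bitwise XOR and ** is exponentiation, so 0^0 and 0**0 give different answers.]

$ perl -e 'printf "%d\n", 0^0;'     # bitwise XOR
0
$ perl -e 'printf "%d\n", 0**0;'    # exponentiation (Perl defines 0**0 as 1)
1
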
On Thursday May 17, [EMAIL PROTECTED] wrote:
> XOR it (0^0=1), and hence fills up the host disk.
Uhmm... you need to check your maths.
$ perl -e 'printf "%d\n", 0^0;'
0
:-)
NeilBrown
On May 17 2007 09:42, Jeff Zheng wrote:
>
>Problem is that it only happens when you actually write data to the
>raid. You need the actual space to reproduce the problem.
That should not be a big problem. Create, say, 4x950G virtual sparse
drives (takes roughly 4x100 MB or so on the host after mkfs).
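
[A rough sketch of that reproduction approach using sparse files over loop devices; the file names, device names, sizes and the choice of mkfs.xfs are only illustrative.]

$ for i in 0 1 2 3; do
>   dd if=/dev/zero of=disk$i.img bs=1 count=0 seek=950G   # sparse 950G backing file, takes almost no real space
>   losetup /dev/loop$i disk$i.img
> done
$ mdadm --create /dev/md0 --level=0 --raid-devices=4 \
>   /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
$ mkfs.xfs /dev/md0
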
On Thursday May 17, [EMAIL PROTECTED] wrote:
>
> Uhm, I just noticed something.
> 'chunk' is unsigned long, and when it gets shifted up, we might lose
> bits. That could still happen with the 4*2.75T arrangement, but is
> much more likely in the 2*5.5T arrangement.
Actually, it cannot be a problem
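
[Purely as an illustration of the truncation Neil is describing, not the actual raid0 code, and even though it was ruled out here: if a chunk number held in a 32-bit unsigned long is shifted up into a sector offset, the high bits are lost once the result needs more than 32 bits. Emulating that with a 64-bit perl, an arbitrary chunk index and an assumed 64KiB (128-sector) chunk size:]

$ perl -e '$s = 83_000_000 << 7; printf "64-bit: %u  low 32 bits: %u\n", $s, $s & 0xFFFFFFFF;'
64-bit: 10624000000  low 32 bits: 2034065408
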
> What is the nature of the corruption? Is it data in a file
> that is wrong when you read it back, or does the filesystem
> metadata get corrupted?
The corruption is in fs metadata: jfs is completely destroyed; after
umount, fsck does not recognize it as jfs anymore. Xfs gives a kernel
crash.
On Thursday May 17, [EMAIL PROTECTED] wrote:
> I tried the patch; the same problem shows up, but no BUG_ON report.
>
> Is there anything else I can do?
>
What is the nature of the corruption? Is it data in a file that is
wrong when you read it back, or does the filesystem metadata get
corrupted?
I tried the patch; the same problem shows up, but no BUG_ON report.
Is there anything else I can do?
Jeff
> Yes, I meant 2T, and yes, the components are always over 2T.
> So I'm at a complete loss. The raid0 code follows the same
> paths and does the same things and uses 64bit arithmetic wher
> The only difference of any significance between the working
> and non-working configurations is that in the non-working,
> the component devices are larger than 2Gig, and hence have
> sector offsets greater than 32 bits.
Do you mean 2T here? But in both configurations, the component devices are over 2T.
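
[The 32-bit boundary in question: with 512-byte sectors, 2^32 sectors is exactly 2 TiB, so any component over roughly 2T has sector offsets that no longer fit in 32 bits. Assuming a 64-bit perl for the arithmetic:]

$ perl -e 'printf "%d\n", 2**32 * 512;'   # bytes reachable with a 32-bit sector number (2 TiB)
2199023255552
$ perl -e 'printf "%d\n", 5.5e12 / 512;'  # sectors in one 5.5TB component
10742187500
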
On Wednesday May 16, [EMAIL PROTECTED] wrote:
> Here is the information of the created raid0. Hope it is enough.
Thanks.
Everything looks fine here.
The only difference of any significance between the working and
non-working configurations is that in the non-working, the component
devices are larger than 2Gig, and hence have sector offsets greater than 32 bits.
my experience is that if you don't have CONFIG_LBD enabled then the
kernel will report the larger disk as 2G and everything will work, you
just won't
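
[For anyone checking their own kernel, the option can usually be confirmed with something like the following, assuming the distro ships its config under /boot or has CONFIG_IKCONFIG_PROC enabled:]

$ grep CONFIG_LBD /boot/config-$(uname -r)   # expect CONFIG_LBD=y for >2TB block device support
$ zgrep CONFIG_LBD /proc/config.gz           # alternative when the config is compiled into the kernel
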
On May 16 2007 11:04, [EMAIL PROTECTED] wrote:
>
> I'm getting ready to set up a similar machine that will have 3x10TB (3 x 15-disk
> arrays with 750G drives), but won't be ready to try this for a few more days.
You could emulate it with VMware. Big disks are quite "cheap" when
they are not allocated.
On Wed, 16 May 2007, Bill Davidsen wrote:
> Jeff Zheng wrote:
> > Here is the information of the created raid0. Hope it is enough.
> If I read this correctly, the problem is with JFS rather than RAID?
He had the same problem with xfs.
David Lang
On May 16, 2007 11:09 +1200, Jeff Zheng wrote:
> We are using two 3ware disk array controllers; each of them is connected to
> 8 750GB hard drives, and we build a software raid0 on top of that. The
> total capacity is 5.5TB+5.5TB=11TB.
>
> We use jfs as the file-system; we have a test application that
On Wednesday May 16, [EMAIL PROTECTED] wrote:
> >
> > Anybody have a clue?
> >
No...
When a raid0 array is assembled, quite a lot of messages get printed
about the number of zones and hash_spacing etc. Can you collect and post
those, both for the failing case (2*5.5T) and the working case
(4*2.75T)?
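
[One way to capture those assembly-time messages, assuming they are still in the kernel ring buffer after the array is created:]

$ dmesg | grep -i raid0   # the zone / hash_spacing lines are printed when the array is assembled
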
[Ingo, Neil, linux-raid added to CC]
On 16/05/07, Jeff Zheng <[EMAIL PROTECTED]> wrote:
Hi everyone:
We are experiencing problems with software raid0, with very
large disk arrays.
We are using two 3ware disk array controllers; each of them is connected to
8 750GB hard drives, and we build a software raid0 on top of that. The
total capacity is 5.5TB+5.5TB=11TB.
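
[A sketch of the setup being described, with device names assumed; the two 3ware exports might appear as something like /dev/sda and /dev/sdb.]

$ mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb   # two ~5.5TB components striped together
$ mkfs.jfs /dev/md0
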