From: [EMAIL PROTECTED] On Behalf Of Jeff Zheng
Sent: Thursday, 17 May 2007 5:39 p.m.
To: Neil Brown; [EMAIL PROTECTED]; Michal Piotrowski; Ingo
Molnar; linux-raid@vger.kernel.org;
[EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: RE: Software raid0 will crash the file-system, when
each disk is 5TB
Yeah, seems you've locked it down, :D. I've writt…
On Friday May 18, [EMAIL PROTECTED] wrote:
> Fix confirmed, filled the whole 11T hard disk, without crashing.
> I presume this would go into 2.6.22
Yes, and probably 2.6.21.y, though the patch will be slightly
different, see below.
>
> Thanks again.
And thank-you for pursuing this with me.
NeilBrown
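[Editorial note: the archived preview cuts off before the patch Neil refers to. From his "'chunk' is unsigned long" observation quoted further down the thread, the fix is of the type-widening kind; the sketch below is illustrative only (function and parameter names are assumptions, not the mainline diff, which went into drivers/md/raid0.c):]

#include <stdio.h>

typedef unsigned long long sector_t;   /* 64-bit, as with CONFIG_LBD */

/* Sketch, not the actual patch: before the fix the chunk variable was
 * 'unsigned long' (32 bits on i386), so the shift below silently
 * wrapped once the resulting sector passed 2^32.  Widening it to the
 * 64-bit sector_t before shifting is the shape of the fix. */
static sector_t chunk_to_sector(sector_t chunk, unsigned int chunksize_bits,
                                sector_t dev_offset, unsigned int sect_in_chunk)
{
        return (chunk << chunksize_bits) + dev_offset + sect_in_chunk;
}

int main(void)
{
        /* a chunk roughly 10T into the array, 64KiB (2^7-sector) chunks */
        printf("%llu\n", chunk_to_sector(82031250ULL, 7, 0, 0));
        return 0;
}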
> To: Neil Brown; [EMAIL PROTECTED]; Michal Piotrowski; Ingo
> Molnar; linux-raid@vger.kernel.org;
> [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: RE: Software raid0 will crash the file-system, when
> each disk is 5TB
>
>
> Yeah, seems you've locked it down, :D. I've writt…
> From: [EMAIL PROTECTED]
> Sent: Thursday, 17 May 2007 5:31 p.m.
> To: [EMAIL PROTECTED]; Jeff Zheng; Michal Piotrowski; Ingo
> Molnar; linux-raid@vger.kernel.org;
> [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: RE: Software raid0 will crash the file-system, when
> each disk is 5TB
>
>
On Thursday May 17, [EMAIL PROTECTED] wrote:
>
> Uhm, I just noticed something.
> 'chunk' is unsigned long, and when it gets shifted up, we might lose
> bits. That could still happen with the 4*2.75T arrangement, but is
> much more likely in the 2*5.5T arrangement.
Actually, it cannot be a problem…
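[Editorial note: to make the overflow Neil describes concrete, here is a small userspace demonstration. It assumes 64KiB chunks and the 2*5.5T layout; the figures are illustrative, not taken from Jeff's report:]

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint64_t block = 21000000000ULL;  /* a sector ~10.7T into the 11T array */
        unsigned int chunksize_bits = 7;  /* 64KiB chunk = 2^7 sectors */
        unsigned int nb_dev = 2;          /* two 5.5T components */

        uint64_t chunk64 = (block >> chunksize_bits) / nb_dev;
        uint32_t chunk32 = (uint32_t)chunk64; /* what 'unsigned long' holds on i386 */

        /* the chunk number itself fits in 32 bits; the value shifted
         * back up does not, which is exactly where bits get lost */
        printf("64-bit chunk<<bits: %llu\n",
               (unsigned long long)(chunk64 << chunksize_bits));
        printf("32-bit chunk<<bits: %u  (high bits lost)\n",
               chunk32 << chunksize_bits);
        return 0;
}

[On any 64-bit host this prints 10500000000 versus 1910065408, matching the "when it gets shifted up, we might lose bits" description.]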
> What is the nature of the corruption? Is it data in a file
> that is wrong when you read it back, or does the filesystem
> metadata get corrupted?
The corruption is in fs metadata: jfs is completely destroyed; after
umount, fsck does not recognize it as jfs anymore. XFS gives a kernel
crash.
On Wednesday May 16, [EMAIL PROTECTED] wrote:
> On Thu, 17 May 2007, Neil Brown wrote:
>
> > On Thursday May 17, [EMAIL PROTECTED] wrote:
> >>
> >>> The only difference of any significance between the working
> >>> and non-working configurations is that in the non-working,
> >>> the component devices are larger than 2Gig, and hence have
> >>> sector offsets greater than 32 bits.
On Thu, 17 May 2007, Neil Brown wrote:
On Thursday May 17, [EMAIL PROTECTED] wrote:
The only difference of any significance between the working
and non-working configurations is that in the non-working,
the component devices are larger than 2Gig, and hence have
sector offsets greater than 32 bits.
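[Editorial aside on the arithmetic behind the "32 bits" remark: with 512-byte sectors, a 32-bit sector number addresses at most 2^32 * 512 bytes = 2TiB, and both the 5.5T and the 2.75T components are past that:]

#include <stdio.h>

int main(void)
{
        unsigned long long limit_bytes = (1ULL << 32) * 512;   /* 2 TiB  */
        unsigned long long sect_55t  = 5500000000000ULL / 512; /* 5.5T   */
        unsigned long long sect_275t = 2750000000000ULL / 512; /* 2.75T  */

        printf("32-bit sector limit: %llu bytes\n", limit_bytes);
        printf("5.5T  -> %llu sectors (> 2^32: %d)\n",
               sect_55t,  sect_55t  > 0xFFFFFFFFULL);
        printf("2.75T -> %llu sectors (> 2^32: %d)\n",
               sect_275t, sect_275t > 0xFFFFFFFFULL);
        return 0;
}

[Note that the condition holds in both the failing and the working layout, which is why, as Neil says later, it alone could not explain the difference.]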
On Thursday May 17, [EMAIL PROTECTED] wrote:
> I tried the patch; the same problem shows up, but no BUG_ON report.
>
> Is there anything else I can do?
>
What is the nature of the corruption? Is it data in a file that is
wrong when you read it back, or does the filesystem metadata get
corrupted?
I tried the patch; the same problem shows up, but no BUG_ON report.
Is there anything else I can do?
Jeff
> Yes, I meant 2T, and yes, the components are always over 2T.
> So I'm at a complete loss. The raid0 code follows the same
> paths and does the same things and uses 64bit arithmetic where…
On Thursday May 17, [EMAIL PROTECTED] wrote:
>
> > The only difference of any significance between the working
> > and non-working configurations is that in the non-working,
> > the component devices are larger than 2Gig, and hence have
> > sector offsets greater than 32 bits.
>
> Do you mean 2T…
> The only difference of any significance between the working
> and non-working configurations is that in the non-working,
> the component devices are larger than 2Gig, and hence have
> sector offsets greater than 32 bits.
Do you mean 2T here? But in both configurations, the component devices are over 2T.
On Wednesday May 16, [EMAIL PROTECTED] wrote:
> Here is the information of the created raid0. Hope it is enough.
Thanks.
Everything looks fine here.
The only difference of any significance between the working and
non-working configurations is that in the non-working, the component
devices are larger than 2Gig, and hence have sector offsets greater
than 32 bits.
On Wed, 16 May 2007, Bill Davidsen wrote:
Jeff Zheng wrote:
Here is the information of the created raid0. Hope it is enough.
If I read this correctly, the problem is with JFS rather than RAID?
He had the same problem with xfs.
David Lang
Sent: … May 2007 12:04 p.m.
To: Michal Piotrowski
Cc: Jeff Zheng; Ingo Molnar; linux-raid@vger.kernel.org;
[EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: Software raid0 will crash the file-system, when each disk
is 5TB
On Wednesday May 16, [EMAIL PROTECTED] wrote:
> >
> > Anybody have a clue?
> >
No...
When a raid0 array is assembled, quite a lot of messages get printed
about the number of zones, hash_spacing etc. Can you collect and post
those, both for the failing case (2*5.5T) and the working case
(4*2.75T)?
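[Editorial note for readers following along: raid0 copes with components of different sizes by splitting the array into "zones", and the assembly-time messages report how those zones were laid out. A rough sketch of the idea, not the kernel's code; with equal-sized components, as in both layouts here, a single zone results:]

#include <stdio.h>

/* count raid0 zones given component sizes sorted ascending (in sectors):
 * each distinct size opens a new zone spanning the still-larger devices */
static int count_zones(const unsigned long long *size, int n)
{
        int zones = 1;                    /* zone 0 spans all devices */
        for (int i = 1; i < n; i++)
                if (size[i] != size[i - 1])
                        zones++;
        return zones;
}

int main(void)
{
        unsigned long long two[]  = { 10742187500ULL, 10742187500ULL };
        unsigned long long four[] = { 5371093750ULL, 5371093750ULL,
                                      5371093750ULL, 5371093750ULL };
        printf("2*5.5T  -> %d zone(s)\n", count_zones(two, 2));   /* 1 */
        printf("4*2.75T -> %d zone(s)\n", count_zones(four, 4));  /* 1 */
        return 0;
}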
[Ingo, Neil, linux-raid added to CC]
On 16/05/07, Jeff Zheng <[EMAIL PROTECTED]> wrote:
Hi everyone:
We are experiencing problems with software raid0 on very large disk
arrays.
We are using two 3ware disk array controllers, each of them connected
to 8 750GB hard drives. And we build a…