On 9/20/07, Roch - PAE <[EMAIL PROTECTED]> wrote:
>
> Next application modifies D0 -> D0' and also writes other
> data D3, D4. Now you have
>
> Disk0 Disk1 Disk2 Disk3
>
> D0 D1 D2 P0,1,2
> D0' D3 D4 P0',3,4
>
> But if D1 and D2 stay immutable [...]
Here is a different twist on your interesting scheme. First
start with writing 3 blocks and parity in a full stripe.
Disk0 Disk1 Disk2 Disk3
D0 D1 D2 P0,1,2
Next application modifies D0 -> D0' and also writes other
data D3, D4. Now you have
[...]
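To make the scheme concrete, here is a minimal XOR-parity sketch of the two stripes described above (pure illustration in Python; the block values and names are invented, not from any ZFS or RAID5 code). The point of the twist is that as long as the old D0 stays on disk, the old stripe's parity still protects D1 and D2:

```python
# Toy XOR parity: every write is a full stripe, and recovery of any one
# block XORs the remaining blocks with the stripe's parity.

def xor_blocks(*blocks):
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

d0, d1, d2 = b"\x01" * 4, b"\x02" * 4, b"\x04" * 4
p012 = xor_blocks(d0, d1, d2)            # stripe 1: D0 D1 D2 P0,1,2

d0p, d3, d4 = b"\x08" * 4, b"\x10" * 4, b"\x20" * 4
p034 = xor_blocks(d0p, d3, d4)           # stripe 2: D0' D3 D4 P0',3,4

# If the old D0 is kept on disk (immutable), losing Disk1 is still
# recoverable from the *old* stripe, even after D0 was logically replaced:
assert xor_blocks(d0, d2, p012) == d1
```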
On 9/10/07, Pawel Jakub Dawidek <[EMAIL PROTECTED]> wrote:
> The problem with RAID5 is that different blocks share the same parity,
> which is not the case for RAIDZ. When you write a block in RAIDZ, you
> write the data and the parity, and then you switch the pointer in
> uberblock. For RAID5, you [...]
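The write ordering described here (data and parity to free space first, uberblock pointer switch last) can be modelled in a few lines. This is a toy model under assumed names; the real uberblock and block pointers are of course nothing like a Python dict:

```python
# Toy model of the COW update order: nothing referenced by the current
# uberblock is ever overwritten, so a crash at any step leaves the old,
# consistent view intact. There is no window where parity disagrees with
# committed data.

storage = {}                    # address -> bytes; append-only in practice
uberblock = {"root": None}      # the single atomically-switched pointer

def cow_write(data_addr, parity_addr, data, parity):
    storage[data_addr] = data        # step 1: data to a free location
    storage[parity_addr] = parity    # step 2: parity, also to free space
    # step 3: only now switch the pointer; before this, readers still
    # see the previous tree
    uberblock["root"] = (data_addr, parity_addr)

cow_write(100, 101, b"D0'", b"P'")   # addresses are arbitrary examples
```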
On 9/12/07, Pawel Jakub Dawidek <[EMAIL PROTECTED]> wrote:
> On Wed, Sep 12, 2007 at 02:24:56PM -0700, Adam Leventhal wrote:
> > On Mon, Sep 10, 2007 at 12:41:24PM +0200, Pawel Jakub Dawidek wrote:
> > I'm a bit surprised by these results. Assuming relatively large blocks
> > written, RAID-Z and RAID-5 should be laid out on disk very similarly [...]
On Thu, Sep 13, 2007 at 04:58:10AM +, Marc Bevand wrote:
> Pawel Jakub Dawidek FreeBSD.org> writes:
> >
> > This is how RAIDZ fills the disks (follow the numbers):
> >
> > Disk0 Disk1 Disk2 Disk3
> >
> > D0 D1 D2 P3
> > D4 D5 D6 P7
> > D8 D9 D10 P11 [...]
Pawel Jakub Dawidek FreeBSD.org> writes:
>
> This is how RAIDZ fills the disks (follow the numbers):
>
> Disk0 Disk1 Disk2 Disk3
>
> D0 D1 D2 P3
> D4 D5 D6 P7
> D8 D9 D10 P11
> D12 D13 D14 P15
> D16 D17 D18 P19 [...]
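The fill pattern in the quoted table can be generated mechanically. A small sketch (illustrative only; real RAIDZ uses variable-width stripes and puts parity elsewhere, this just reproduces the 4-disk table as shown):

```python
# Map the sequence numbers in the quoted table to (disk, row, kind),
# where every 4th slot in the sequence is a parity slot.

DISKS = 4

def slot(i):
    """Slot i of the fill sequence -> (disk index, row, 'D' or 'P')."""
    row, disk = divmod(i, DISKS)
    kind = "P" if disk == DISKS - 1 else "D"
    return disk, row, kind

# D0 -> disk 0, row 0; P3 -> disk 3, row 0; D4 -> disk 0, row 1; ...
```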
On Wed, Sep 12, 2007 at 07:39:56PM -0500, Al Hopper wrote:
> >This is how RAIDZ fills the disks (follow the numbers):
> >
> > Disk0 Disk1 Disk2 Disk3
> >
> > D0 D1 D2 P3
> > D4 D5 D6 P7
> > D8 D9 D10 P11
> > D12 D13 D14 P15 [...]
On Thu, 13 Sep 2007, Pawel Jakub Dawidek wrote:
> On Wed, Sep 12, 2007 at 11:20:52PM +0100, Peter Tribble wrote:
>> On 9/10/07, Pawel Jakub Dawidek <[EMAIL PROTECTED]> wrote:
>>> Hi.
>>>
>>> I've a prototype RAID5 implementation for ZFS. It only works in
>>> non-degraded state for now. The idea is to compare RAIDZ vs. RAID5 [...]
On Thu, Sep 13, 2007 at 12:56:44AM +0200, Pawel Jakub Dawidek wrote:
> On Wed, Sep 12, 2007 at 11:20:52PM +0100, Peter Tribble wrote:
> > My understanding of the raid-z performance issue is that it requires
> > full-stripe reads in order to validate the checksum. [...]
>
> No, the checksum is independent [...]
On Wed, Sep 12, 2007 at 11:20:52PM +0100, Peter Tribble wrote:
> On 9/10/07, Pawel Jakub Dawidek <[EMAIL PROTECTED]> wrote:
> > Hi.
> >
> > I've a prototype RAID5 implementation for ZFS. It only works in
> > non-degraded state for now. The idea is to compare RAIDZ vs. RAID5
> > performance, as I suspected that RAIDZ, because of full-stripe
> > operations, doesn't work well [...]
On Wed, Sep 12, 2007 at 02:24:56PM -0700, Adam Leventhal wrote:
> I'm a bit surprised by these results. Assuming relatively large blocks
> written, RAID-Z and RAID-5 should be laid out on disk very similarly
> resulting in similar read performance.
>
> Did you compare the I/O characteristics of both? [...]
On Wed, Sep 12, 2007 at 02:24:56PM -0700, Adam Leventhal wrote:
> On Mon, Sep 10, 2007 at 12:41:24PM +0200, Pawel Jakub Dawidek wrote:
> > And here are the results:
> >
> > RAIDZ:
> >
> > Number of READ requests: 4.
> > Number of WRITE requests: 0.
> > Number of bytes to transmit: 695678976. [...]
On 9/10/07, Pawel Jakub Dawidek <[EMAIL PROTECTED]> wrote:
> Hi.
>
> I've a prototype RAID5 implementation for ZFS. It only works in
> non-degraded state for now. The idea is to compare RAIDZ vs. RAID5
> performance, as I suspected that RAIDZ, because of full-stripe
> operations, doesn't work well for random reads issued by many processes
> in parallel. [...]
On Mon, Sep 10, 2007 at 12:41:24PM +0200, Pawel Jakub Dawidek wrote:
> And here are the results:
>
> RAIDZ:
>
> Number of READ requests: 4.
> Number of WRITE requests: 0.
> Number of bytes to transmit: 695678976.
> Number of processes: 8.
> Bytes per second: 1305[...]
> My question is: Is there any interest in finishing RAID5/RAID6 for ZFS?
> If there is no chance it will be integrated into ZFS at some point, I
> won't bother finishing it.
Your work is as pure an example as any of what OpenSolaris should be about. I
think there should be no problem having a [...]
> As you can see, two independent ZFS blocks share one parity block.
> COW won't help you here, you would need to be sure that each ZFS
> transaction goes to a different (and free) RAID5 row.
>
> This is, I believe, the main reason why poor RAID5 wasn't used in the first
> place.
Exactly right. [...]
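The shared-parity problem quoted above can be shown in a few lines of toy XOR parity (all values and names invented for illustration): two blocks from independent transactions share one parity block, so an in-place overwrite of one of them leaves a window where reconstruction of the other yields garbage. This is the classic RAID5 write hole:

```python
# Two independent blocks, written in different transactions, share one
# parity block in the same RAID5 row.

def parity(blocks):
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

a, b_old = b"\xaa" * 4, b"\x0f" * 4   # a: txg 1, b: txg 2
p = parity([a, b_old])

# Transaction 2 rewrites b in place. If a crash lands after the data
# write but before the parity write, reconstructing a from (b_new, p)
# produces garbage, even though a itself was never touched:
b_new = b"\xf0" * 4
reconstructed_a = parity([b_new, p])
assert reconstructed_a != a
```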
On Tue, Sep 11, 2007 at 08:16:02AM +0100, Robert Milkowski wrote:
> Are you overwriting old data? I hope you're not...
I am; I overwrite parity, and that is the whole point. That's why the ZFS
designers used RAIDZ instead of RAID5, I think.
> I don't think you should suffer from the above problem in ZFS due [...]
Hello Pawel,
Monday, September 10, 2007, 6:18:37 PM, you wrote:
PJD> On Mon, Sep 10, 2007 at 04:31:32PM +0100, Robert Milkowski wrote:
>> Hello Pawel,
>>
>> Excellent job!
>>
>> Now I guess it would be a good idea to get writes done properly,
>> even if it means making them slow (like with SVM). [...]
On Mon, Sep 10, 2007 at 04:31:32PM +0100, Robert Milkowski wrote:
> Hello Pawel,
>
> Excellent job!
>
> Now I guess it would be a good idea to get writes done properly,
> even if it means making them slow (like with SVM). The end result
> would be: if you want fast writes/slow reads, go ahead with
> raid-z; if you need fast reads/slow writes, go with raid-5. [...]
> Now I guess it would be a good idea to get writes done properly,
> even if it means making them slow (like with SVM). The end result
> would be: if you want fast writes/slow reads, go ahead with
> raid-z; if you need fast reads/slow writes, go with raid-5.
>
> btw: I'm just thinking [...]
Hello Pawel,
Excellent job!
Now I guess it would be a good idea to get writes done properly,
even if it means making them slow (like with SVM). The end result
would be: if you want fast writes/slow reads, go ahead with
raid-z; if you need fast reads/slow writes, go with raid-5.
Hi.
I've a prototype RAID5 implementation for ZFS. It only works in
non-degraded state for now. The idea is to compare RAIDZ vs. RAID5
performance, as I suspected that RAIDZ, because of full-stripe
operations, doesn't work well for random reads issued by many processes
in parallel.
There is of course [...]
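The suspicion about parallel random reads comes down to simple arithmetic. A back-of-the-envelope model with assumed numbers (a hypothetical 4-disk vdev at 100 IOPS per disk; these are not measurements from the benchmark quoted in this thread):

```python
# Toy IOPS model: with full-stripe reads, every logical block read keeps
# all data disks busy, so concurrent readers do not scale; with
# single-disk reads, each disk can serve a different reader in parallel.

disks, per_disk_iops = 4, 100            # assumed, for illustration only

raidz_random_read_iops = per_disk_iops           # all data disks seek together
raid5_random_read_iops = per_disk_iops * disks   # each disk serves its own read

assert raid5_random_read_iops > raidz_random_read_iops
```

Under this crude model the gap grows linearly with the number of disks, which matches the intuition that full-stripe operations hurt many-process random-read workloads.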