On Fri, May 7, 2010 at 4:57 AM, Brandon High wrote:
> I believe that the L2ARC behaves the same as a pool with multiple
> top-level vdevs. It's not typical striping, where every write goes to
> all devices. Writes may go to only one device, or may avoid a device
> entirely while using several others.
On May 6, 2010, at 11:08 AM, Michael Sullivan wrote:
> Well, if you are striping over multiple devices then your I/O should be spread
> over the devices and you should be reading them all simultaneously rather
> than just accessing a single device. Traditional striping would give 1/n
> performance improvement rather than 1/1, where n is the number of devices.
On Thu, May 6, 2010 at 11:08 AM, Michael Sullivan
wrote:
> The round-robin access I am referring to is the way the L2ARC vdevs appear
> to be accessed. So, any given object will be taken from a single device
> rather than from several devices simultaneously, thereby increasing the I/O
> throughput.
On Thu, May 6, 2010 at 1:18 AM, Edward Ned Harvey wrote:
> > From the information I've been reading about the loss of a ZIL device,
> What the heck? Didn't I just answer that question?
> I know I said this is answered in ZFS Best Practices Guide.
>
> http://www.solarisinternals.com/wiki/index.php
Hi Marc,
Well, if you are striping over multiple devices then your I/O should be spread
over the devices and you should be reading them all simultaneously rather than
just accessing a single device. Traditional striping would give 1/n
performance improvement rather than 1/1, where n is the number of devices.
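For what it's worth, the arithmetic behind that "1/n" claim can be sketched as a toy model. All names and numbers below are illustrative, not taken from ZFS:

```python
# Toy read-time model for the "1/n" striping claim: an object split
# evenly across n devices, each streaming at the same rate, is read
# in parallel in 1/n the time a single device would need.

def read_time_single(size_mb: float, rate_mb_s: float) -> float:
    """Whole object on one device: time = S / R."""
    return size_mb / rate_mb_s

def read_time_striped(size_mb: float, rate_mb_s: float, n_devices: int) -> float:
    """Object split evenly over n devices read in parallel: time = (S/n) / R."""
    return (size_mb / n_devices) / rate_mb_s

# 400 MB object, 200 MB/s per SSD, 4-way stripe:
print(read_time_single(400, 200))      # 2.0 seconds from one device
print(read_time_striped(400, 200, 4))  # 0.5 seconds, i.e. 1/n with n = 4
```

This is exactly the point under debate in the thread: the 1/n factor only applies if a single object is actually split across devices, which round-robin placement does not do.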
Hi Michael,
What makes you think striping the SSDs would be faster than round-robin?
-marc
On Thu, May 6, 2010 at 1:09 PM, Michael Sullivan wrote:
> Everyone,
>
> Thanks for the help. I really appreciate it.
>
> Well, I actually walked through the source code with an associate today and
> we found out how things work by looking at the code.
Everyone,
Thanks for the help. I really appreciate it.
Well, I actually walked through the source code with an associate today and we
found out how things work by looking at the code.
It appears that L2ARC is just assigned in round-robin fashion. If a device
goes offline, then it goes to the next device in the rotation.
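The behavior described above (round-robin assignment that skips offline devices) can be sketched in a few lines. This is a toy model of the described logic, not the actual ZFS source:

```python
# Toy model of round-robin L2ARC device selection: each cached object
# lands whole on one cache device, and offline devices are skipped.
# Illustration only -- not the real ZFS rotor code.

class L2ARCRotor:
    def __init__(self, devices):
        self.devices = list(devices)   # cache vdevs, in order
        self.online = set(devices)     # devices currently usable
        self.next_idx = 0              # rotor position

    def offline(self, dev):
        self.online.discard(dev)

    def pick(self):
        """Return the next online device, advancing the rotor."""
        for _ in range(len(self.devices)):
            dev = self.devices[self.next_idx]
            self.next_idx = (self.next_idx + 1) % len(self.devices)
            if dev in self.online:
                return dev
        return None  # no cache devices left; reads fall back to the pool

rotor = L2ARCRotor(["ssd0", "ssd1", "ssd2", "ssd3"])
print([rotor.pick() for _ in range(4)])  # ['ssd0', 'ssd1', 'ssd2', 'ssd3']
rotor.offline("ssd1")
print([rotor.pick() for _ in range(3)])  # ['ssd0', 'ssd2', 'ssd3']
```

Note how losing a device only removes it from the rotation; the remaining cache devices keep serving, which matches the "L2ARC will continue to function" answer later in the thread.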
On Wed, May 5, 2010 at 8:47 PM, Michael Sullivan
wrote:
> While it explains how to implement these, there is no information regarding
> failure of a device in a striped L2ARC set of SSD's. I have been hard
> pressed to find this information anywhere, short of testing it myself, but I
> don't have…
On 06/05/2010 15:31, Tomas Ögren wrote:
On 06 May, 2010 - Bob Friesenhahn sent me these 0,6K bytes:
On Wed, 5 May 2010, Edward Ned Harvey wrote:
In the L2ARC (cache) there is no ability to mirror, because cache device
removal has always been supported. You can't mirror a cache device, because
you don't need it.
On Wed, 5 May 2010, Edward Ned Harvey wrote:
In the L2ARC (cache) there is no ability to mirror, because cache device
removal has always been supported. You can't mirror a cache device, because
you don't need it.
How do you know that I don't need it? The ability seems useful to me.
Bob
On 6 May 2010, at 13:18 , Edward Ned Harvey wrote:
>> From: Michael Sullivan [mailto:michael.p.sulli...@mac.com]
>>
>> While it explains how to implement these, there is no information
>> regarding failure of a device in a striped L2ARC set of SSD's. I have
>> been hard pressed to find this information anywhere.
>
> http://www.solarisinternals.com/w
> From: Michael Sullivan [mailto:michael.p.sulli...@mac.com]
>
> My Google is very strong and I have the Best Practices Guide committed
> to bookmark as well as most of it to memory.
>
> While it explains how to implement these, there is no information
> regarding failure of a device in a striped L2ARC set of SSD's.
Hi Ed,
Thanks for your answers. Seem to make sense, sort of…
On 6 May 2010, at 12:21 , Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Michael Sullivan
>>
>> I have a question I cannot seem to find an answer to.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Michael Sullivan
>
> I have a question I cannot seem to find an answer to.
Google for ZFS Best Practices Guide (on solarisinternals). I know this
answer is there.
> I know if I set up ZIL on SSD and the SSD goes bad, then the ZIL will be
> relocated back to the pool.
Ok, thanks.
So, if I understand correctly, it will just remove the device from the VDEV and
continue to use the good ones in the stripe.
Mike
---
Michael Sullivan
michael.p.sulli...@me.com
http://www.kamiogi.net/
Japan Mobile: +81-80-3202-2599
US Phone: +1-561-283-2034
The L2ARC will continue to function.
-marc
On 5/4/10, Michael Sullivan wrote:
> HI,
>
> I have a question I cannot seem to find an answer to.
>
> I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's.
>
> I know if I set up ZIL on SSD and the SSD goes bad, then the ZIL will be
> relocated back to the pool.
Hi,
I have a question I cannot seem to find an answer to.
I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's.
I know if I set up ZIL on SSD and the SSD goes bad, then the ZIL will be
relocated back to the pool. I'd probably have it mirrored anyway, just in
case. However, you cannot mirror L2ARC cache devices.