Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Bob Friesenhahn
On Wed, 5 May 2010, Edward Ned Harvey wrote: In the L2ARC (cache) there is no ability to mirror, because cache device removal has always been supported. You can't mirror a cache device, because you don't need it. How do you know that I don't need it? The ability seems useful to me. Bob --

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Tomas Ögren
On 06 May, 2010 - Bob Friesenhahn sent me these 0,6K bytes: On Wed, 5 May 2010, Edward Ned Harvey wrote: In the L2ARC (cache) there is no ability to mirror, because cache device removal has always been supported. You can't mirror a cache device, because you don't need it. How do you know

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Robert Milkowski
On 06/05/2010 15:31, Tomas Ögren wrote: On 06 May, 2010 - Bob Friesenhahn sent me these 0,6K bytes: On Wed, 5 May 2010, Edward Ned Harvey wrote: In the L2ARC (cache) there is no ability to mirror, because cache device removal has always been supported. You can't mirror a cache

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Brandon High
On Wed, May 5, 2010 at 8:47 PM, Michael Sullivan michael.p.sulli...@mac.com wrote: While it explains how to implement these, there is no information regarding failure of a device in a striped L2ARC set of SSD's.  I have been hard pressed to find this information anywhere, short of testing it

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Michael Sullivan
Everyone, Thanks for the help. I really appreciate it. Well, I actually walked through the source code with an associate today and we found out how things work by looking at the code. It appears that L2ARC is just assigned in round-robin fashion. If a device goes offline, then it goes to
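The round-robin fill behaviour described above can be observed from the command line. A minimal sketch, assuming a pool named "tank" (the pool and device names here are hypothetical placeholders):

```shell
# Show per-vdev I/O statistics, including each cache (L2ARC) device,
# to watch how the L2ARC feed thread distributes writes across them:
zpool iostat -v tank 5

# Check the state of the cache devices; if one fails, the pool itself
# stays ONLINE because L2ARC holds only a copy of pool data:
zpool status tank
```

Reads that miss a failed cache device simply fall back to the main pool, so a cache-device failure costs performance, not data.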

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Marc Nicholas
Hi Michael, What makes you think striping the SSDs would be faster than round-robin? -marc On Thu, May 6, 2010 at 1:09 PM, Michael Sullivan michael.p.sulli...@mac.com wrote: Everyone, Thanks for the help. I really appreciate it. Well, I actually walked through the source code with an

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Michael Sullivan
Hi Marc, Well, if you are striping over multiple devices then your I/O should be spread over the devices and you should be reading them all simultaneously rather than just accessing a single device. Traditional striping would give 1/n performance improvement rather than 1/1 where n is the

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Giovanni Tirloni
On Thu, May 6, 2010 at 1:18 AM, Edward Ned Harvey solar...@nedharvey.com wrote: From the information I've been reading about the loss of a ZIL device, What the heck? Didn't I just answer that question? I know I said this is answered in ZFS Best Practices Guide.

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Bob Friesenhahn
On Fri, 7 May 2010, Michael Sullivan wrote: Well, if you are striping over multiple devices then your I/O should be spread over the devices and you should be reading them all simultaneously rather than just accessing a single device.  Traditional striping would give 1/n performance improvement

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Brandon High
On Thu, May 6, 2010 at 11:08 AM, Michael Sullivan michael.p.sulli...@mac.com wrote: The round-robin access I am referring to, is the way the L2ARC vdevs appear to be accessed.  So, any given object will be taken from a single device rather than from several devices simultaneously, thereby

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Robert Milkowski
On 06/05/2010 19:08, Michael Sullivan wrote: Hi Marc, Well, if you are striping over multiple devices then your I/O should be spread over the devices and you should be reading them all simultaneously rather than just accessing a single device. Traditional striping would give 1/n performance

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Richard Elling
On May 6, 2010, at 11:08 AM, Michael Sullivan wrote: Well, if you are striping over multiple devices then your I/O should be spread over the devices and you should be reading them all simultaneously rather than just accessing a single device. Traditional striping would give 1/n performance

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread BM
On Fri, May 7, 2010 at 4:57 AM, Brandon High bh...@freaks.com wrote: I believe that the L2ARC behaves the same as a pool with multiple top-level vdevs. It's not typical striping, where every write goes to all devices. Writes may go to only one device, or may avoid a device entirely while using

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-05 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Michael Sullivan I have a question I cannot seem to find an answer to. Google for ZFS Best Practices Guide (on solarisinternals). I know this answer is there. I know if I set up ZIL on

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-05 Thread Michael Sullivan
Hi Ed, Thanks for your answers. Seem to make sense, sort of… On 6 May 2010, at 12:21 , Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Michael Sullivan I have a question I cannot seem to find an answer to.

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-05 Thread Edward Ned Harvey
From: Michael Sullivan [mailto:michael.p.sulli...@mac.com] My Google is very strong and I have the Best Practices Guide committed to bookmark as well as most of it to memory. While it explains how to implement these, there is no information regarding failure of a device in a striped L2ARC

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-05 Thread Michael Sullivan
On 6 May 2010, at 13:18 , Edward Ned Harvey wrote: From: Michael Sullivan [mailto:michael.p.sulli...@mac.com] While it explains how to implement these, there is no information regarding failure of a device in a striped L2ARC set of SSD's. I have

[zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-04 Thread Michael Sullivan
Hi, I have a question I cannot seem to find an answer to. I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's. I know if I set up ZIL on SSD and the SSD goes bad, then the ZIL will be relocated back to the pool. I'd probably have it mirrored anyway, just in case. However you
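The configuration described above can be sketched with zpool commands. A minimal example, assuming a pool named "tank" and hypothetical device names:

```shell
# Add four SSDs as striped L2ARC cache devices; cache devices cannot
# be mirrored, and ZFS falls back to the pool if one of them fails:
zpool add tank cache c1t0d0 c1t1d0 c1t2d0 c1t3d0

# Add a mirrored ZIL (separate log device); if the log device fails,
# ZFS reverts to keeping the intent log in the main pool:
zpool add tank log mirror c2t0d0 c2t1d0
```

The asymmetry is deliberate: a lost log device can mean lost synchronous writes (hence the mirror), while a lost cache device only costs read performance.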

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-04 Thread Tomas Ögren
On 05 May, 2010 - Michael Sullivan sent me these 0,9K bytes: Hi, I have a question I cannot seem to find an answer to. I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's. I know if I set up ZIL on SSD and the SSD goes bad, then the ZIL will be relocated back to the pool.

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-04 Thread Freddie Cash
On Tue, May 4, 2010 at 12:16 PM, Michael Sullivan michael.p.sulli...@mac.com wrote: I have a question I cannot seem to find an answer to. I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's. I know if I set up ZIL on SSD and the SSD goes bad, then the ZIL will be relocated back to

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-04 Thread Marc Nicholas
The L2ARC will continue to function. -marc On 5/4/10, Michael Sullivan michael.p.sulli...@mac.com wrote: Hi, I have a question I cannot seem to find an answer to. I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's. I know if I set up ZIL on SSD and the SSD goes bad, then the ZIL

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-04 Thread Michael Sullivan
Ok, thanks. So, if I understand correctly, it will just remove the device from the VDEV and continue to use the good ones in the stripe. Mike --- Michael Sullivan michael.p.sulli...@me.com http://www.kamiogi.net/ Japan Mobile: +81-80-3202-2599 US Phone: +1-561-283-2034 On 5
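The removal-and-replacement behaviour summed up above can also be driven manually, since cache devices can be removed and re-added online. A sketch, again with a hypothetical pool "tank" and placeholder device names:

```shell
# Drop a failed cache device from the pool; the remaining cache
# devices keep serving reads:
zpool remove tank c1t2d0

# Add a replacement SSD to the cache stripe:
zpool add tank cache c1t4d0
```
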