Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-08 Thread Darren J Moffat
On 12/07/11 20:48, Mertol Ozyoney wrote: Unfortunately the answer is no. Neither the L1 nor the L2 cache is dedup aware. The only vendor I know that can do this is NetApp. In fact, most of our functions, like replication, are not dedup aware. For example, technically it's possible to optimize our
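For anyone wanting to see how much dedup is actually in play on a pool relative to the ARC, a rough sketch of the usual inspection commands follows; the pool name "tank" is just a placeholder and the exact kstat field names vary between releases:

    # Dedup ratio and DDT histogram for the pool
    zpool list tank
    zdb -DD tank
    # Current ARC size and hit/miss counters
    kstat -pn arcstats | egrep 'size|hits|misses'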

Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-08 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Mertol Ozyoney Sent: Wednesday, December 07, 2011 3:49 PM To: Brad Diggs Cc: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup Unfortunately

[zfs-discuss] Issues with Areca 1680

2011-12-08 Thread Stephan Budach
Hi all, I have a server that is built on top of an Asus board which is equipped with an Areca 1680 HBA. Since ZFS likes raw disks, I changed its mode from RAID to JBOD in the firmware and rebooted the host. Now, I have 16 drives in the chassis and they line up like this: root@vsm01:~#
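A rough sketch of the checks commonly run after switching an HBA from RAID to JBOD mode, to confirm the OS actually sees the individual drives; device and pool names here are assumptions, not taken from Stephan's output:

    # Non-interactively list the disks Solaris can see
    format </dev/null
    # Show controller attachment points and their occupant state
    cfgadm -al
    # If a pool already exists, confirm it still sees its devices
    zpool status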

[zfs-discuss] SOLVED: Issues with Areca 1680

2011-12-08 Thread Stephan Budach
On 08.12.11 18:14, Stephan Budach wrote: Hi all, I have a server that is built on top of an Asus board which is equipped with an Areca 1680 HBA. Since ZFS likes raw disks, I changed its mode from RAID to JBOD in the firmware and rebooted the host. Now, I have 16 drives in the chassis and

Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-08 Thread Ian Collins
On 12/ 9/11 12:39 AM, Darren J Moffat wrote: On 12/07/11 20:48, Mertol Ozyoney wrote: Unfortunately the answer is no. Neither the L1 nor the L2 cache is dedup aware. The only vendor I know that can do this is NetApp. In fact, most of our functions, like replication, are not dedup aware. For example,

Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-08 Thread Mark Musante
You can see the original ARC case here: http://arc.opensolaris.org/caselog/PSARC/2009/557/20091013_lori.alt On 8 Dec 2011, at 16:41, Ian Collins wrote: On 12/ 9/11 12:39 AM, Darren J Moffat wrote: On 12/07/11 20:48, Mertol Ozyoney wrote: Unfortunately the answer is no. Neither the L1 nor the L2

Re: [zfs-discuss] First zone creation - getting ZFS error

2011-12-08 Thread Betsy Schwartz
I would also try it without the /zones mountpoint. Putting the zone root dir on an alternate mountpoint caused problems for us. Try creating /datastore/zones for a zone root home, or just make the zones in /datastore Solaris seems to get very easily confused when zone root is anything out of
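A minimal sketch of the layout Betsy suggests, assuming a pool named datastore and a made-up zone name web01:

    # Zone roots live directly under the pool's default mountpoint
    zfs create datastore/zones
    # Point the zone at a path inside that dataset
    zonecfg -z web01
        create
        set zonepath=/datastore/zones/web01
        commit
        exit
    zoneadm -z web01 install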

Re: [zfs-discuss] First zone creation - getting ZFS error

2011-12-08 Thread Ian Collins
On 12/ 9/11 11:37 AM, Betsy Schwartz wrote: On Dec 7, 2011, at 9:50 PM, Ian Collins i...@ianshome.com wrote: On 12/ 7/11 05:12 AM, Mark Creamer wrote: Since the zfs dataset datastore/zones is created, I don't understand what the error is trying to get me to do. Do I have to do: zfs create
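A minimal sketch of what creating the per-zone dataset by hand might look like, assuming the datastore/zones layout from Mark's message; the zone name zone1 and the mode-700 check reflect the usual zoneadm requirements rather than anything quoted in the thread:

    # Child dataset per zone under the existing datastore/zones dataset
    zfs create datastore/zones/zone1
    zfs get mountpoint datastore/zones/zone1
    # zoneadm install refuses a zonepath that is not mode 700
    chmod 700 /datastore/zones/zone1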

Re: [zfs-discuss] Scrub found error in metadata:0x0, is that always fatal? No checksum errors now...

2011-12-08 Thread Nigel W
On Mon, Dec 5, 2011 at 17:46, Jim Klimov jimkli...@cos.ru wrote: So, in contrast with Nigel's optimistic theory that metadata is extra-redundant anyway and should be easily fixable, it seems that I still have the problem. It does not show itself in practice yet, but is found by scrub
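For reference, a rough outline of the scrub/inspect cycle being discussed; the pool name tank is a placeholder and nothing here addresses the metadata:0x0 entry specifically:

    # Re-run the scrub and list any objects with permanent errors
    zpool scrub tank
    zpool status -v tank
    # Once the errors stop reproducing, the error log can be reset
    zpool clear tank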