Re: [zfs-discuss] Cold failover of COMSTAR iSCSI targets on shared storage

2012-10-01 Thread Evaldas Auryla
On 26/09/12 00:52, Richard Elling wrote: On Sep 25, 2012, at 1:32 PM, Jim Klimov wrote: Q: What is the complete list of services needed to set up the COMSTAR server from scratch? Dunno off the top of my head. Network isn't needed (COMSTAR can serve FC), but you can
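For reference, a hedged sketch of the SMF services usually involved on an illumos/Solaris host (service FMRIs may differ by release; this is an assumption, not the thread's confirmed answer):

```shell
# Core SCSI Target Mode Framework -- required for any COMSTAR target
svcadm enable svc:/system/stmf:default
# iSCSI transport provider -- not needed when serving FC, as noted above
svcadm enable svc:/network/iscsi/target:default
# Confirm both services are online
svcs svc:/system/stmf:default svc:/network/iscsi/target:default
```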

Re: [zfs-discuss] zfs sync=disabled property

2011-11-09 Thread Evaldas Auryla
On 11/9/11 03:11 PM, Edward Ned Harvey wrote: From: Evaldas Auryla [mailto:evaldas.aur...@edqm.eu] Sent: Wednesday, November 09, 2011 8:55 AM I was thinking about the STEC ZeusRAM, but unfortunately it's a SAS-only device, and it won't make it into the X4540 (SATA ports only), so another optio

Re: [zfs-discuss] zfs sync=disabled property

2011-11-09 Thread Evaldas Auryla
On 11/9/11 01:42 AM, Edward Ned Harvey wrote: I know a lot of people will say "don't do it," but that's only a partial truth. The real truth is: at all times, if there's a server crash, ZFS will come back along at next boot or mount, and the filesystem will be in a consistent state, that was in

[zfs-discuss] zfs sync=disabled property

2011-11-08 Thread Evaldas Auryla
Hi all, I'm trying to evaluate the risks of running an NFS share of a zfs dataset with the sync=disabled property. The clients are vmware hosts in our environment and the server is a SunFire X4540 "Thor" system. Though the general recommendation is not to do this, after testing performance with
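The property change being evaluated is a one-liner; a minimal sketch, assuming a hypothetical dataset tank/vmware exported over NFS:

```shell
# Hypothetical pool/dataset names.  sync=disabled makes ZFS acknowledge
# writes before they reach stable storage: a crash can lose the last few
# seconds of client writes, though the pool itself stays consistent.
zfs set sync=disabled tank/vmware

# Verify the property took effect
zfs get sync tank/vmware

# Revert to the default (honour client sync requests) after testing
zfs set sync=standard tank/vmware
```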

Re: [zfs-discuss] Mapping sas address to physical disk in enclosure

2011-05-19 Thread Evaldas Auryla
Hung-Sheng Tsao (Lao Tsao) Ph.D. wrote: what is the output of echo | format ? On 5/19/2011 3:55 AM, Evaldas Auryla wrote: Hi, we have a SunFire X4140 connected to a Dell MD1220 SAS enclosure, single path, MPxIO disabled, via an LSI SAS9200-8e HBA. Disks are visible with sas-addresses such as this in "zpool s

[zfs-discuss] Mapping sas address to physical disk in enclosure

2011-05-19 Thread Evaldas Auryla
Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure, single path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible with sas-addresses such as this in "zpool status" output: NAME STATE READ WRITE CKSUM cuve ON
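One common way to do this mapping with LSI HBAs is the sas2ircu utility; a sketch, assuming sas2ircu is installed and the SAS9200-8e is controller 0 (verify with the LIST subcommand):

```shell
# Enumerate LSI controllers to find the right index
sas2ircu LIST
# DISPLAY prints every attached drive with its SAS address, enclosure
# number and slot, which can be matched against the c#t<sas-address>d0
# device names shown by 'zpool status'
sas2ircu 0 DISPLAY | egrep 'Enclosure #|Slot #|SAS Address'
```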

Re: [zfs-discuss] Modify stmf_sbd_lu properties

2011-05-11 Thread Evaldas Auryla
On 05/10/11 09:45 PM, Don wrote: Is it possible to modify the GUID associated with a ZFS volume imported into STMF? To clarify- I have a ZFS volume I have imported into STMF and export via iscsi. I have a number of snapshots of this volume. I need to temporarily go back to an older snapshot wit
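One approach to keeping the GUID across a rollback is to re-register the backing store with an explicit guid property; a sketch, assuming an illumos/Solaris STMF stack and a hypothetical zvol tank/iscsivol (the <guid> placeholder stands for the value reported by list-lu):

```shell
# Note the LU's current GUID before touching anything
stmfadm list-lu -v
# Deregister the LU, roll the zvol back, then re-create the LU with
# the same GUID so initiators still see the "same" LUN
stmfadm delete-lu <guid>
zfs rollback tank/iscsivol@older-snap
stmfadm create-lu -p guid=<guid> /dev/zvol/rdsk/tank/iscsivol
# View entries (host/target group mappings) must be re-added afterwards
```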

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-09 Thread Evaldas Auryla
On 05/06/11 07:21 PM, Brandon High wrote: On Fri, May 6, 2011 at 9:15 AM, Ray Van Dolson wrote: We use dedupe on our VMware datastores and typically see 50% savings, oftentimes more. We do of course keep "like" VMs on the same volume I think NetApp uses 4k blocks by default, so the block s
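The memory math behind this thread can be sketched with shell arithmetic, assuming the oft-quoted ballpark of roughly 320 bytes of core per DDT entry (an estimate, not a specification):

```shell
# Rough dedup-table sizing: one DDT entry per unique block on disk.
DATA_BYTES=$((1024 * 1024 * 1024 * 1024))   # 1 TiB of unique data
BLOCK=$((128 * 1024))                       # 128K default recordsize
ENTRIES=$((DATA_BYTES / BLOCK))             # unique blocks = DDT entries
DDT_MIB=$((ENTRIES * 320 / 1024 / 1024))    # ~320 bytes per in-core entry
echo "${ENTRIES} DDT entries, ~${DDT_MIB} MiB of RAM"
# Smaller blocks (e.g. NetApp-style 4K) multiply the entry count 32x
```

With the 128K default this works out to about 2.5 GiB of DDT per TiB of unique data; small-block workloads inflate that dramatically.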

Re: [zfs-discuss] X4540 no next-gen product?

2011-04-08 Thread Evaldas Auryla
On 04/08/11 01:14 PM, Ian Collins wrote: You have built-in storage failover with an AR cluster; and they do NFS, CIFS, iSCSI, HTTP and WebDav out of the box. And you have fairly unlimited options for application servers, once they are decoupled from the storage servers. It doesn't seem like mu

Re: [zfs-discuss] Best choice - file system for system

2011-01-28 Thread Evaldas Auryla
On 01/28/11 02:37 PM, Edward Ned Harvey wrote: Let's go into that a little bit. If you're piping zfs send directly into zfs receive, then it is an ideal backup method. But not everybody can afford the disk necessary to do that, so people are tempted to "zfs send" to a file or tape. There are
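The distinction drawn above can be sketched in two lines (hypothetical pool and snapshot names):

```shell
# Piping directly into receive validates the stream as it arrives,
# so corruption is caught immediately on the receiving side:
zfs send tank/data@monday | zfs receive backup/data

# Sending to a file defers all validation until a later receive; a
# single flipped bit can make the entire stream unreceivable, which
# is why send-to-file/tape is discouraged as a backup format:
zfs send tank/data@monday > /backup/data-monday.zfs
```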

Re: [zfs-discuss] zfs recv failing - "invalid backup stream"

2011-01-12 Thread evaldas
Hi, this reminds me of the dedup bug: don't use the "-D" (dedup) switch in zfs send, it produces a broken stream that you won't be able to receive.

Re: [zfs-discuss] Deduped zfs streams broken in post b134 ?

2010-11-19 Thread evaldas
Sorry, the script was cut off; the ending part is: mp/ddtest-snap2.zfs. It works in OpenSolaris b134, but not in OpenIndiana b147, nor Solaris 11 Express, where zfs receive exits on the second incremental snapshot with the error message: cannot receive incremental stream: invalid backup stream

[zfs-discuss] Deduped zfs streams broken in post b134 ?

2010-11-19 Thread evaldas
Hi, here is a small script to test a deduped zfs send stream: #!/bin/bash ZFSPOOL=rpool ZFSDATASET=zfs-send-dedup-test dd if=/dev/random of=/var/tmp/testfile1 bs=512 count=10 zfs create $ZFSPOOL/$ZFSDATASET cp /var/tmp/testfile1 /$ZFSPOOL/$ZFSDATASET/testfile1 zfs snapshot $ZFSPOOL/$z
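Since the archived snippet is truncated, here is a hedged reconstruction of such a repro: the snapshot names, the second-file step, and the incremental send are assumptions filled in from the visible fragments, not the original script verbatim.

```shell
#!/bin/bash
# Sketch of a deduped-stream round-trip test on a scratch dataset.
# Snapshot names and the incremental step are assumptions.
ZFSPOOL=rpool
ZFSDATASET=zfs-send-dedup-test
dd if=/dev/random of=/var/tmp/testfile1 bs=512 count=10
zfs create $ZFSPOOL/$ZFSDATASET
cp /var/tmp/testfile1 /$ZFSPOOL/$ZFSDATASET/testfile1
zfs snapshot $ZFSPOOL/$ZFSDATASET@snap1
cp /var/tmp/testfile1 /$ZFSPOOL/$ZFSDATASET/testfile2   # duplicate data
zfs snapshot $ZFSPOOL/$ZFSDATASET@snap2
# Full deduped stream, then a deduped incremental:
zfs send -D $ZFSPOOL/$ZFSDATASET@snap1 > /var/tmp/ddtest-snap1.zfs
zfs send -D -i @snap1 $ZFSPOOL/$ZFSDATASET@snap2 > /var/tmp/ddtest-snap2.zfs
# Receiving the incremental is where post-b134 builds reportedly fail
```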