Re: [zfs-discuss] Use of blocksize (-b) during zfs zvol create, poor performance

2010-06-30 Thread Mike La Spina
Hi Eff,

There are a significant number of variables to work through with dedup and 
compression enabled, so my first suggestion is to disable those features for 
now so you're not working with too many elements at once.

With those features set aside, an NTFS cluster operation does not equal a 64k 
raw I/O block, and likewise the ZFS 64k blocksize does not equal one I/O 
operation. We may also need to consider the overall network performance 
behavior, the iSCSI protocol characteristics, and the Windows network stack.

iperf is a good tool to rule that out.
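
A quick test would look something like this (a sketch; the host name and 
durations are placeholders):

On the x4500 (server side):

iperf -s

On the Windows initiator (client side):

iperf -c x4500-host -t 30 -i 5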

What I primarily suspect is that the write I/O operations are not aligned and 
are waiting for I/O completion across multiple vdevs. Alignment is important 
for write I/O optimization, and how the I/O maps onto the software RAID level 
makes a significant difference to the DMU and SPA operations on a specific 
vdev layout. You may also have an issue with write cache operations: by 
default, large I/O calls such as 64k will not use a dedicated ZIL log vdev, if 
you have one defined, but will be written directly to your array vdevs, which 
also involves a transaction group write operation.

To ensure ZIL log usage with 64k I/Os you can apply the following: 
edit the /etc/system file with  

set zfs:zfs_immediate_write_sz = 131071

A reboot is required to activate the /etc/system change.
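
If you want to inspect or change the value on the live kernel without a 
reboot, mdb can do it (a sketch; this assumes a 64-bit kernel where 
zfs_immediate_write_sz is an 8-byte ssize_t, and live kernel writes should be 
used with care):

# print the current value as decimal
echo "zfs_immediate_write_sz/E" | mdb -k

# set it live; the 0t prefix marks a decimal constant
echo "zfs_immediate_write_sz/Z 0t131071" | mdb -kw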

You have also not indicated what your zpool configuration looks like; that 
would be helpful in this discussion.

It appears that you're applying the x4500 as a backup target, which means you 
should (if not already) enable write caching on the COMSTAR LU properties for 
this type of application. Note that wcd stands for "write cache disabled", so 
setting it to false enables the write cache.

e.g.
stmfadm modify-lu -p wcd=false 600144F02F2280004C1D62010001
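
You can verify the change afterwards (the GUID is the one from the example 
above):

stmfadm list-lu -v 600144F02F2280004C1D62010001

and check the "Writeback Cache" line in the output.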

To help triage the perf issue further you could post two 'kstat zfs' and two 
'kstat stmf' outputs taken 5 minutes apart, plus a 'zpool iostat -v 30 5', 
which would help visualize the I/O behavior.
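
Something like this would capture the set (a sketch; the output file names are 
placeholders):

kstat zfs > kstat_zfs_1.txt ; kstat stmf > kstat_stmf_1.txt
sleep 300
kstat zfs > kstat_zfs_2.txt ; kstat stmf > kstat_stmf_2.txt
zpool iostat -v 30 5 > zpool_iostat.txt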

Regards,

Mike

http://blog.laspina.ca/


Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-19 Thread Mike La Spina
I use zfs send/recv in the enterprise and in smaller environments all the time 
and it's excellent.

Have a look at how awesome the functionality is in this example.

http://blog.laspina.ca/ubiquitous/provisioning_disaster_recovery_with_zfs
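
The basic pattern is simple (a minimal sketch; the pool, dataset, snapshot, 
and host names are placeholders):

# full replication of a dataset to a backup host
zfs snapshot tank/data@2010-01-19
zfs send tank/data@2010-01-19 | ssh backuphost zfs recv -F backup/data

# later, send only the changes since the previous snapshot
zfs snapshot tank/data@2010-01-20
zfs send -i tank/data@2010-01-19 tank/data@2010-01-20 | ssh backuphost zfs recv backup/data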

Regards,

Mike


Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-18 Thread Mike La Spina
Ciao,

Your GUIDs must not be the same; an NAA is already established on the targets. 
If you previously tried to initialize the LUN with VMware, it would have 
assigned the value in the VMFS header, which is now stored on your raw ZFS 
backing store. This will confuse VMware, since it remembers the value 
somewhere in its definitions. You need to remove the second datastore from 
VMware and delete the target definition and ZFS backing store.

Once you recreate the backing store and target you should have a new GUID and 
IQN, which should cure the issue.
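
The sequence would look something like this (a sketch; the target label, zvol 
name, and size are placeholders patterned on the listing earlier in this 
thread, so adjust them to whichever datastore you are rebuilding):

iscsitadm delete target -u 0 iscsi
zfs destroy vdrive/LUNA
zfs create -V 750g vdrive/LUNA
iscsitadm create target -b /dev/zvol/rdsk/vdrive/LUNA iscsi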

Regards,

Mike


Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-17 Thread Mike La Spina
Hello Tano,

The issue here is not the target or VMware, but a missing GUID on the target.

Observe the target smf properties using

iscsitadm list target -v

You have

Target: vscsi
    iSCSI Name: iqn.1986-03.com.sun:02:35ec26d8-f173-6dd5-b239-93a9690ffe46.vscsi
    Connections: 0
    ACL list:
    TPGT list:
        TPGT: 1
    LUN information:
        LUN: 0
            GUID: 0
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 1.3T
            Backing store: /dev/zvol/rdsk/vdrive/LUNB
            Status: online
Target: iscsi
    iSCSI Name: iqn.1986-03.com.sun:02:4d469663-2304-4796-87a5-dffa03cd14ea.iscsi
    Connections: 0
    ACL list:
    TPGT list:
        TPGT: 1
    LUN information:
        LUN: 0
            GUID: 0
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 750G
            Backing store: /dev/zvol/rdsk/vdrive/LUNA
            Status: online
 
Both targets have the same invalid GUID of zero and this will prevent NAA from 
working properly.

To fix this you can create two new temporary targets and export the SMF 
properties to an XML file.
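
For the temporary targets, something like this works (a sketch; the names and 
size are placeholders, and it assumes an iscsitadm base directory is already 
configured so -z can allocate a backing store):

iscsitadm create target -z 100m temp1
iscsitadm create target -z 100m temp2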

Then export, e.g.

svccfg export iscsitgt > /iscsibackup/myiscsitargetbu.xml

then edit the XML file, switching the newly generated GUIDs to your valid 
targets and zeroing the temporary ones.

Now you can import the file with

svccfg import /iscsibackup/myiscsitargetbu.xml

When you restart the iscsitgt service you should have the GUIDs in place and 
it should work with VMware.

Then you can delete the temporary targets, as sketched below.
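
For example (a sketch; temp1 and temp2 are the hypothetical temporary target 
names from above, and -u 0 removes their only LUN):

svcadm restart svc:/system/iscsitgt:default
iscsitadm delete target -u 0 temp1
iscsitadm delete target -u 0 temp2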

http://blog.laspina.ca