Re: [zfs-discuss] adpu320 scsi timeouts only with ZFS

2010-01-19 Thread Andreas Grüninger
Maybe there are too many I/Os for this controller.

You may try these settings.

b130 or newer:
echo zfs_txg_synctime_ms/W0t2000 | mdb -kw 
echo zfs_vdev_max_pending/W0t5 | mdb -kw 

Older versions:
echo zfs_txg_synctime/W0t2 | mdb -kw 
echo zfs_vdev_max_pending/W0t5 | mdb -kw 
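
To check the current values before changing them, the same variables can be read with mdb (a sketch; /D prints a variable in decimal):

echo zfs_txg_synctime_ms/D | mdb -k
echo zfs_vdev_max_pending/D | mdb -k

Note that settings written with mdb -kw do not survive a reboot; for a persistent setting you would add the equivalent lines to /etc/system, e.g. set zfs:zfs_vdev_max_pending = 5.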

Andreas
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD sale on newegg

2010-04-20 Thread Andreas Grüninger
I did the same experiment in a VMware guest (SLES10 x64). The archive was
stored on the vdisk, and untarring went to the same vdisk.
The storage backend is a Sun system with 64 GB RAM, 2 quad-core CPUs, and 24 SAS
disks with 450 GB each: 4 vdevs with 6 disks as RAIDZ2, plus an Intel X25-E as
log device (c2t1d0).
A StorageTek SAS RAID host bus adapter with 256 MB RAM and BBU serves the zpool,
and a second HBA serves the slog device.
c3 is for the zpool, c2 for the slog (c2t1d0) and boot (c2t0d0) devices.
There are currently 140 VMs running, accessed over NFS from vSphere 4 with two
1 Gb/s links.
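
For reference, a pool with this layout would be created roughly like this (a sketch; the pool name and the c3 disk numbering are made up for illustration, only the log device c2t1d0 is taken from above):

zpool create tank \
    raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
    raidz2 c3t6d0 c3t7d0 c3t8d0 c3t9d0 c3t10d0 c3t11d0 \
    raidz2 c3t12d0 c3t13d0 c3t14d0 c3t15d0 c3t16d0 c3t17d0 \
    raidz2 c3t18d0 c3t19d0 c3t20d0 c3t21d0 c3t22d0 c3t23d0 \
    log c2t1d0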

zd-nms-s5:/build # iostat -indexC 5
before untarring
                    extended device statistics              ---- errors ----
   r/s    w/s    kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w   %b  s/w  h/w  trn  tot  device
   0.0  396.0     0.0  9428.3   0.0   0.1     0.0     0.2   0    5    0    0    0    0  c2
   0.0   14.0     0.0    61.9   0.0   0.0     0.0     2.8   0    1    0    0    0    0  c2t0d0
   0.0  382.0     0.0  9366.4   0.0   0.0     0.0     0.1   0    3    0    0    0    0  c2t1d0
 265.4    0.0  3631.2     0.0   0.0   1.2     0.0     4.3   0  105    0    0    0    0  c3
   9.8    0.0   148.2     0.0   0.0   0.0     0.0     3.4   0    3    0    0    0    0  c3t0d0
   8.8    0.0   137.7     0.0   0.0   0.0     0.0     3.6   0    3    0    0    0    0  c3t1d0


zd-nms-s5:/build # iostat -indexC 5
during untarring
                    extended device statistics              ---- errors ----
    r/s     w/s    kr/s     kw/s  wait  actv  wsvc_t  asvc_t  %w    %b  s/w  h/w  trn  tot  device
    0.0  1128.3     0.0  31713.6   0.0   0.2     0.0     0.1   0    12    0    0    0    0  c2
    0.0     0.0     0.0      0.0   0.0   0.0     0.0     0.0   0     0    0    0    0    0  c2t0d0
    0.0  1128.3     0.0  31713.6   0.0   0.2     0.0     0.1   1    12    0    0    0    0  c2t1d0
 2005.7  5708.9  7423.7  42041.5   0.1  61.7     0.0     8.0   0  1119    0    0    0    0  c3
   82.8   602.2   364.9   2408.4   0.0   4.4     0.0     6.4   1    68    0    0    0    0  c3t0d0
   72.4   601.6   288.5   2452.7   0.0   4.2     0.0     6.2   1    61    0    0    0    0  c3t1d0
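
During the untar the slog (c2t1d0) absorbs about 31 MB/s of synchronous write traffic, while the pool controller (c3) sustains roughly 42 MB/s of writes and 7 MB/s of reads spread across the 24 data disks.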


zd-nms-s5:/build # time tar jxf /tmp/gcc-4.4.3.tar.bz2

real    0m58.086s
user    0m12.241s
sys     0m6.552s

Andreas
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] build a zfs continuous data replication in opensolaris

2010-04-22 Thread Andreas Grüninger
You may have a look at the whitepaper from Torsten Frueauf;
see here: http://sun.systemnews.com/articles/137/4/OpenSolaris/22016

This should give you the functionality of a DRBD cluster.

Andreas
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] build a zfs continuous data replication in opensolaris

2010-04-22 Thread Andreas Grüninger
If you read this
http://hub.opensolaris.org/bin/download/Project+colorado/files/Whitepaper-OpenHAClusterOnOpenSolaris-external.pdf
and especially the part starting at page 25, you will find a detailed explanation
of how to implement a storage cluster with shared storage based on Comstar and
iSCSI.
If you want to install on physical hardware, just ignore the installation and
configuration of VirtualBox.
IMHO this is simpler than AVS.
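
The core building block of such a setup, exporting a ZFS volume as an iSCSI LUN via Comstar, looks roughly like this (a sketch; the pool/volume name and the LU GUID are placeholders):

zfs create -V 100G tank/lun0
svcadm enable -r svc:/system/stmf:default
stmfadm create-lu /dev/zvol/rdsk/tank/lun0
stmfadm add-view 600144f0XXXXXXXXXXXXXXXXXXXXXXXX    # LU name as printed by create-lu
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target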

Regards

Andreas
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool/zfs list question

2010-05-31 Thread Andreas Grüninger
Use

zfs get -Hp used pool1/nfs1

to get parsable output.
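
With -H the header is suppressed and the fields are tab-separated; -p prints exact byte values instead of human-readable ones. The output looks like this (the value is just an illustration):

pool1/nfs1	used	434041833472	-

The fields are dataset name, property, value, and source.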

Andreas
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris 10U8, Sun Cluster, and SSD issues.

2010-06-01 Thread Andreas Grüninger
The Intel SSD is not a dual-ported SAS device, so it must be supported by
the SAS expander in your external chassis.
Did you use an AAMUX transposer card for the SATA device between the connector
of the chassis and the SATA drive?

Andreas
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Monitoring filessytem access

2010-06-18 Thread Andreas Grüninger
Here is a DTrace script based on one of the examples for the NFS provider.
It is especially useful when you use NFS for ESX or other hypervisors.

Andreas

#!/usr/sbin/dtrace -s

#pragma D option quiet

inline int TOP_FILES = 50;

dtrace:::BEGIN
{
   printf("Tracing... Hit Ctrl-C to end.\n");
   startscript = timestamp;
}

nfsv3:::op-read-start,
nfsv3:::op-write-start
{
   /* remember start time and requested byte count per NFS transaction id */
   start[args[1]->noi_xid] = timestamp;
   size[args[1]->noi_xid] = args[2]->count;
}

nfsv3:::op-read-done,
nfsv3:::op-write-done
/start[args[1]->noi_xid] != 0/
{
   this->elapsed = timestamp - start[args[1]->noi_xid];
   this->size = size[args[1]->noi_xid];
   /* latency distribution in microseconds, per operation type */
   @rw[probename == "op-read-done" ? "read" : "write"] =
       quantize(this->elapsed / 1000);
   /* total latency per client and per file */
   @host[args[0]->ci_remote] = sum(this->elapsed);
   @file[args[1]->noi_curpath] = sum(this->elapsed);
   @rwsc[probename == "op-read-done" ? "read" : "write"] = count();
   /* size distribution per operation type */
   @rws[probename == "op-read-done" ? "read" : "write"] =
       quantize(this->size);
/* @rwsl[probename == "op-read-done" ? "read" : "write"] =
       lquantize(this->size, 4096, 8256, 64);
 */
   /* total bytes per client and per file */
   @hosts[args[0]->ci_remote] = sum(this->size);
   @files[args[1]->noi_curpath] = sum(this->size);
   this->size = 0;
   size[args[1]->noi_xid] = 0;
   start[args[1]->noi_xid] = 0;
}

dtrace:::END
{
   this->seconds = (timestamp - startscript) / 1000000000;
   printf("\nNFSv3 read/write top %d files (total us):\n", TOP_FILES);
   normalize(@file, 1000);
   trunc(@file, TOP_FILES);
   printa(@file);

   printf("NFSv3 read/write distributions (us):\n");
   printa(@rw);

   printf("\nNFSv3 read/write top %d files (total MByte):\n", TOP_FILES);
   normalize(@files, 1024*1024);
   trunc(@files, TOP_FILES);
   printa(@files);

   printf("\nNFSv3 read/write by host (total ns):\n");
   printa(@host);

   printf("\nNFSv3 read/write by host (total s):\n");
   normalize(@host, 1000000000);
   printa(@host);

   printf("\nNFSv3 read/write by host (total Byte):\n");
   printa(@hosts);

   printf("\nNFSv3 read/write by host (total kByte):\n");
   normalize(@hosts, 1024);
   printa(@hosts);
   denormalize(@hosts);

   printf("\nNFSv3 read/write by host (total kByte/s):\n");
   normalize(@hosts, this->seconds * 1024);
   printa(@hosts);

   printf("NFSv3 read/write distributions (Byte):\n");
   printa(@rws);

/* printf("NFSv3 read/write distributions (Byte):\n");
   printa(@rwsl);
 */
   printf("NFSv3 read/write counts:\n");
   printa(@rwsc);

   printf("\nScript running for %d seconds.\n", this->seconds);
}
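
To run it, save the script (as nfsrwtop.d, say; the file name is just an illustration), make it executable, start it on the NFS server, and stop it with Ctrl-C to print the report:

chmod +x nfsrwtop.d
./nfsrwtop.d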
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SCSI write retry errors on ZIL SSD drives...

2010-08-25 Thread Andreas Grüninger
Ray

Supermicro does not support the use of SSDs behind an expander.

You must put the SSD in the head unit or use an interposer card; see here:
http://www.lsi.com/storage_home/products_home/standard_product_ics/sas_sata_protocol_bridge/lsiss9252/index.html
Supermicro offers an interposer card too: AOC-SMP-LSISS9252.

Andreas
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SCSI write retry errors on ZIL SSD drives...

2010-08-25 Thread Andreas Grüninger
This was the information I got from the distributor, but this FAQ is newer.

Anyway, you still have the problems.

When we installed the Intel X25 we also had problems with timeouts.
We replaced the original Sun StorageTek SAS HBA (LSI 1068E based, newest
firmware) with an original Sun StorageTek SAS RAID HBA (a Sun OEM version of
the Adaptec 5085).
There have been no timeouts since this replacement.

Andreas
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss