[zfs-discuss] Mapping sas address to physical disk in enclosure

2011-05-19 Thread Evaldas Auryla
Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure, single path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible with sas-addresses such as this in zpool status output: NAME STATE READ WRITE CKSUM cuve

[zfs-discuss] Monitoring disk seeks

2011-05-19 Thread Sašo Kiselkov
Hi all, I'd like to ask whether there is a way to monitor disk seeks. I have an application where many concurrent readers (50) sequentially read a large dataset (10T) at a fairly low speed (8-10 Mbit/s). I can monitor read/write ops using iostat, but that doesn't tell me how contiguous the data
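A minimal sketch of one way to approximate this on Solaris (an assumption, not something proposed in the message itself): use the DTrace io provider to histogram the block-address distance between consecutive I/Os per device, roughly what the DTraceToolkit seeksize.d script does. Run as root; Ctrl-C prints the histograms.

  dtrace -n '
  io:::start /last[args[0]->b_edev]/ {
      /* distance from the end of the previous I/O, in 512-byte blocks */
      this->d = args[0]->b_blkno > last[args[0]->b_edev] ?
          args[0]->b_blkno - last[args[0]->b_edev] :
          last[args[0]->b_edev] - args[0]->b_blkno;
      @dist[args[1]->dev_statname] = quantize(this->d);
  }
  io:::start {
      /* remember where this I/O ends, for the next comparison */
      last[args[0]->b_edev] = args[0]->b_blkno + args[0]->b_bcount / 512;
  }'

Large distances suggest real seeking; near-zero distances suggest mostly contiguous reads.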

Re: [zfs-discuss] Monitoring disk seeks

2011-05-19 Thread Jim Klimov
I am not sure you can monitor actual mechanical seeks short of debugging and interrogating the HDD firmware - because it is the last responsible logic in the chain of caching, queuing and issuing actual commands to the disk heads. For example, a long logical IO spanning several cylinders would

Re: [zfs-discuss] Mapping sas address to physical disk in enclosure

2011-05-19 Thread Hung-Sheng Tsao (Lao Tsao) Ph.D.
What is the output of "echo | format" ? On 5/19/2011 3:55 AM, Evaldas Auryla wrote: Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure, single path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible with sas-addresses such as this in zpool status output: NAME

Re: [zfs-discuss] Monitoring disk seeks

2011-05-19 Thread Jim Klimov
On 2011-05-19 17:00, Jim Klimov wrote: I am not sure you can monitor actual mechanical seeks short of debugging and interrogating the HDD firmware - because it is the last responsible logic in the chain of caching, queuing and issuing actual commands to the disk heads. For example, a long logical

Re: [zfs-discuss] Mapping sas address to physical disk in enclosure

2011-05-19 Thread Chris Ridd
On 19 May 2011, at 08:55, Evaldas Auryla wrote: Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure, single path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible with sas-addresses such as this in zpool status output: NAME STATE

Re: [zfs-discuss] Monitoring disk seeks

2011-05-19 Thread Tomas Ögren
On 19 May, 2011 - Sašo Kiselkov sent me these 0,6K bytes: Hi all, I'd like to ask whether there is a way to monitor disk seeks. I have an application where many concurrent readers (50) sequentially read a large dataset (10T) at a fairly low speed (8-10 Mbit/s). I can monitor read/write

Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-19 Thread Frank Van Damme
On 03-05-11 17:55, Brandon High wrote: -H: Hard links If you're going to do this for 2 TB of data, remember to expand your swap space first (or have tons of memory). Rsync will need it to store every inode number in the directory. -- No part of this copyright message may be reproduced, read or
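A hedged sketch of the kind of rsync invocation under discussion (both paths are placeholders): -a preserves most metadata, and -H preserves hard links; it is -H that forces rsync to keep a record of every source inode number in memory, hence the advice above about swap.

  rsync -aH --numeric-ids /ufs_src/ /zfs_dst/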

Re: [zfs-discuss] Mapping sas address to physical disk in enclosure

2011-05-19 Thread Evaldas Auryla
The same format as in zpool status: 3. c9t5000C50025D5A266d0 SEAGATE-ST91000640SS-AS02-931.51GB /pci@0,0/pci10de,376@e/pci1000,3080@0/iport@f/disk@w5000c50025d5a266,0 4. c9t5000C50025D5AF66d0 SEAGATE-ST91000640SS-AS02-931.51GB

Re: [zfs-discuss] Monitoring disk seeks

2011-05-19 Thread Sašo Kiselkov
On 05/19/2011 03:35 PM, Tomas Ögren wrote: On 19 May, 2011 - Sašo Kiselkov sent me these 0,6K bytes: Hi all, I'd like to ask whether there is a way to monitor disk seeks. I have an application where many concurrent readers (50) sequentially read a large dataset (10T) at a fairly low speed

Re: [zfs-discuss] Mapping sas address to physical disk in enclosure

2011-05-19 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Evaldas Auryla Is there an easy way to map these sas-addresses to the physical disks in enclosure ? Of course in the ideal world, when a disk needs to be pulled, hardware would know about

Re: [zfs-discuss] Mapping sas address to physical disk in enclosure

2011-05-19 Thread Chris Ridd
On 19 May 2011, at 14:44, Evaldas Auryla wrote: Hi Chris, there is no sestopo on this box (Solaris Express 11 151a), fmtopo -dV works nice, although it's a bit overkill with manually parsing the output :) You need to install pkg:/system/io/tests. Chris
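For reference, a rough way to cut fmtopo's verbose output down to the bay/disk association rather than parsing it all by hand (exact property names vary between releases, so treat the grep pattern as an assumption):

  /usr/lib/fm/fmd/fmtopo -dV | egrep -i 'bay=|disk=|sas'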

Re: [zfs-discuss] ZFS ZPOOL = trash

2011-05-19 Thread Ong Yu-Phing
I'll add my 2 cents, since I just suffered some pretty bad pool corruption a few months ago and went through a lot of pain to get most of it restored. See http://www.opensolaris.org/jive/thread.jspa?messageID=512687 for the gory details. Steps you should take: 1) as mentioned above, delete

Re: [zfs-discuss] ZFS ZPOOL = trash

2011-05-19 Thread Ong Yu-Phing
If you cannot even boot to single user mode on the server, boot from SXCE or openindiana, then: 1. import syspool: # zpool import syspool 2. mount affected rootfs: # mkdir /a; mount -F zfs syspool/rootfs-nmu-### /a 3. remove zpool.cache: # rm -f /a/etc/zfs/zpool.cache 4. rebuild boot archive:
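Spelled out as commands, for clarity (the rootfs name is a placeholder as in the original post, and the preview is cut off at step 4; running bootadm update-archive against the alternate root is an assumption of what usually follows):

  zpool import syspool
  mkdir /a
  mount -F zfs syspool/rootfs-nmu-### /a
  rm -f /a/etc/zfs/zpool.cache
  bootadm update-archive -R /a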

Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-19 Thread Daniel Rock
On Thu, 19 May 2011 15:39:50 +0200, Frank Van Damme wrote: On 03-05-11 17:55, Brandon High wrote: -H: Hard links If you're going to do this for 2 TB of data, remember to expand your swap space first (or have tons of memory). Rsync will need it to store every inode number in the directory.

Re: [zfs-discuss] Mapping sas address to physical disk in enclosure

2011-05-19 Thread Eric D. Mudama
On Thu, May 19 at 9:55, Evaldas Auryla wrote: Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure, single path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible with sas-addresses such as this in zpool status output: NAME STATE READ

Re: [zfs-discuss] Mapping sas address to physical disk in enclosure

2011-05-19 Thread Hung-Sheng Tsao (Lao Tsao) Ph.D.
IIRC there are tools/SW from LSI, like the MegaRAID SW, that may display some info; not sure you can use Common Array Manager. On 5/19/2011 11:20 AM, Eric D. Mudama wrote: On Thu, May 19 at 9:55, Evaldas Auryla wrote: Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure, single path,
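One concrete LSI tool worth trying with a SAS9200-8e (an assumption on my part; the message only names the MegaRAID SW): sas2ircu, which maps each drive's SAS address and GUID to an enclosure and slot number.

  sas2ircu LIST
  sas2ircu 0 DISPLAY | egrep 'Enclosure #|Slot #|SAS Address|GUID'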

Re: [zfs-discuss] Monitoring disk seeks

2011-05-19 Thread Brandon High
On Thu, May 19, 2011 at 5:35 AM, Sašo Kiselkov skiselkov...@gmail.com wrote: I'd like to ask whether there is a way to monitor disk seeks. I have an application where many concurrent readers (50) sequentially read a large dataset (10T) at a fairly low speed (8-10 Mbit/s). I can monitor

Re: [zfs-discuss] Monitoring disk seeks

2011-05-19 Thread Richard Elling
On May 19, 2011, at 5:35 AM, Sašo Kiselkov wrote: Hi all, I'd like to ask whether there is a way to monitor disk seeks. I have an application where many concurrent readers (50) sequentially read a large dataset (10T) at a fairly low speed (8-10 Mbit/s). I can monitor read/write ops using

Re: [zfs-discuss] Mapping sas address to physical disk in enclosure

2011-05-19 Thread Richard Elling
On May 19, 2011, at 12:55 AM, Evaldas Auryla wrote: Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure, single path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible with sas-addresses such as this in zpool status output: NAME STATE

[zfs-discuss] Same device node appearing twice in same mirror; one faulted, one not...

2011-05-19 Thread Alex
I thought this was interesting - it looks like we have a failing drive in our mirror, but the two device nodes in the mirror are the same: pool: tank state: DEGRADED status: One or more devices could not be used because the label is missing or invalid. Sufficient replicas exist for

Re: [zfs-discuss] Same device node appearing twice in same mirror; one faulted, one not...

2011-05-19 Thread Jim Klimov
Just a random thought: if two devices have same IDs and seem to work in turns, are you certain you have a mirror and not two paths to the same backend? A few years back I was given to support a box with sporadically failing drives which turned out to be two paths to the same external array,
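A quick way to test the two-paths-to-one-LUN theory, assuming the multipath tooling is present on the box (the device name is a placeholder):

  mpathadm list lu
  mpathadm show lu /dev/rdsk/cXtYdZs2

If a single logical unit shows two operational paths, the "mirror" is really one disk seen twice.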

[zfs-discuss] Faulted Pool Question

2011-05-19 Thread Paul Kraus
I just got a call from another of our admins, as I am the resident ZFS expert, and they have opened a support case with Oracle, but I figured I'd ask here as well, as this forum often provides better, faster answers :-) We have a server (M4000) with 6 FC attached SE-3511 disk arrays (some

Re: [zfs-discuss] Solaris vs FreeBSD question

2011-05-19 Thread Chris Forgeron
-Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Paul Kraus Over the past few months I have seen mention of FreeBSD a couple time in regards to ZFS. My question is how stable (reliable) is ZFS on this platform ?

Re: [zfs-discuss] Faulted Pool Question

2011-05-19 Thread Richard Elling
On May 19, 2011, at 2:09 PM, Paul Kraus p...@kraus-haus.org wrote: I just got a call from another of our admins, as I am the resident ZFS expert, and they have opened a support case with Oracle, but I figured I'd ask here as well, as this forum often provides better, faster answers :-)