Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-22 Thread Brandon High
You can resume a send if the destination has a snapshot in common with the
source. If you don't, there's nothing you can do.

It's probably taking a while to restart because the sends that were
interrupted need to be rolled back.
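
For example, if both ends still share a snapshot - calling it @snap1 here
purely for illustration - an incremental send along these lines picks up
from the common point instead of starting over (the -F on the receive rolls
the destination back to that snapshot first):

zfs send -i tank/data@snap1 tank/data@snap2 | ssh recvhost zfs recv -F backup/data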

Sent from my Nexus One.

On May 21, 2010 9:44 PM, Thomas Burgess wonsl...@gmail.com wrote:

I can't tell you for sure

For some reason the server lost power and it's taking forever to come back
up.

(i'm really not sure what happened)

anyways, this leads me to my next couple questions:


Is there any way to resume a zfs send/recv

Why is it taking so long for the server to come up?
it's stuck on "Reading ZFS config"

and there is a FLURRY of hard drive lights blinking (all 10 in sync )





On Sat, May 22, 2010 at 12:26 AM, Brandon High bh...@freaks.com wrote:

 On Fri, May 21, 201...


[zfs-discuss] HDD Serial numbers for ZFS

2010-05-22 Thread Andreas Iannou

Hallo again,

 

I'm wondering how one would get the serial numbers for the HDDs via OpenSolaris 
and match them to the devices in the ZFS pool.

 

Say I'm trying to work out, reliably, which physical drive c7t3d0 is. Can serial 
numbers be read by OpenSolaris?

 

Cheers,

Andre
  


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-22 Thread Thomas Burgess
yah, unfortunately this is the first send.  i'm trying to send 9 TB of data.
 It really sucks because i was at 6 TB when it lost power

On Sat, May 22, 2010 at 2:34 AM, Brandon High bh...@freaks.com wrote:

 You can resume a send if the destination has a snapshot in common with
 the source. If you don't, there's nothing you can do.

 It's probably taking a while to restart because the sends that were
 interrupted need to be rolled back.

 Sent from my Nexus One.

 On May 21, 2010 9:44 PM, Thomas Burgess wonsl...@gmail.com wrote:

 [quoted text elided]




Re: [zfs-discuss] HDD Serial numbers for ZFS

2010-05-22 Thread Andreas Iannou

I should mention that iostat -En doesn't return any information. Is there a 
reliable way of reading SMART information natively in OpenSolaris?
 
Cheers,
Andre
 


From: andreas_wants_the_w...@hotmail.com
To: zfs-discuss@opensolaris.org
Date: Sat, 22 May 2010 16:49:15 +1000
Subject: [zfs-discuss] HDD Serial numbers for ZFS



Hallo again,
 
I'm wondering how one would get the serial numbers for the HDDs via OpenSolaris 
and match them to the devices in the ZFS pool.
 
Say I'm trying to work out, reliably, which physical drive c7t3d0 is. Can serial 
numbers be read by OpenSolaris?
 
Cheers,
Andre





Re: [zfs-discuss] HDD Serial numbers for ZFS

2010-05-22 Thread Thomas Burgess
install smartmontools


There is no package for it but it's EASY to install

once you do, you can get output like this:


pfexec /usr/local/sbin/smartctl -d sat,12 -a /dev/rdsk/c5t0d0
smartctl 5.39.1 2010-01-28 r3054 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 7200.12 family
Device Model:     ST31000528AS
Serial Number:    6VP06FF5
Firmware Version: CC34
User Capacity:    1,000,204,886,016 bytes
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  ATA-8-ACS revision 4
Local Time is:    Sat May 22 11:15:50 2010 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                 ( 609) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        ( 192) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x103f) SCT Status supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   113   099   006    Pre-fail  Always       -       55212722
  3 Spin_Up_Time            0x0003   095   095   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       132
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       1
  7 Seek_Error_Rate         0x000f   081   060   030    Pre-fail  Always       -       136183285
  9 Power_On_Hours          0x0032   091   091   000    Old_age   Always       -       7886
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       132
183 Runtime_Bad_Block       0x0000   100   100   000    Old_age   Offline      -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   085   085   000    Old_age   Always       -       15
190 Airflow_Temperature_Cel 0x0022   063   054   045    Old_age   Always       -       37 (Lifetime Min/Max 32/40)
194 Temperature_Celsius     0x0022   037   046   000    Old_age   Always       -       37 (0 16 0 0)
195 Hardware_ECC_Recovered  0x001a   048   025   000    Old_age   Always       -       55212722
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       23691039612915
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       263672243
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       960644151

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]


SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.


On Sat, May 22, 2010 at 3:09 AM, Andreas Iannou 
andreas_wants_the_w...@hotmail.com wrote:

  I 

Re: [zfs-discuss] HDD Serial numbers for ZFS

2010-05-22 Thread Andreas Iannou

Thanks Thomas, I thought there'd already be a package in the repo for it. 

 

Cheers,

Andre
 


Date: Sat, 22 May 2010 03:17:38 -0400
Subject: Re: [zfs-discuss] HDD Serial numbers for ZFS
From: wonsl...@gmail.com
To: andreas_wants_the_w...@hotmail.com
CC: zfs-discuss@opensolaris.org

install smartmontools

There is no package for it but it's EASY to install

once you do, you can get output like this:

[quoted smartctl output elided]

Re: [zfs-discuss] HDD Serial numbers for ZFS

2010-05-22 Thread Thomas Burgess
i don't think there is but it's dirt simple to install.

I followed the instructions here:


http://cafenate.wordpress.com/2009/02/22/setting-up-smartmontools-on-opensolaris/
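
from memory it's the usual source build - something like this, where the
version number is just the one i happened to grab (adjust to whatever
tarball you pull from the smartmontools site):

tar xzf smartmontools-5.39.1.tar.gz
cd smartmontools-5.39.1
./configure
make
pfexec make install

that drops smartctl into /usr/local/sbin, like in my earlier mail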



On Sat, May 22, 2010 at 3:19 AM, Andreas Iannou 
andreas_wants_the_w...@hotmail.com wrote:

  Thanks Thomas, I thought there'd already be a package in the repo for it.

 Cheers,
 Andre

 --
 Date: Sat, 22 May 2010 03:17:38 -0400
 Subject: Re: [zfs-discuss] HDD Serial numbers for ZFS
 From: wonsl...@gmail.com
 To: andreas_wants_the_w...@hotmail.com
 CC: zfs-discuss@opensolaris.org

 install smartmontools

 There is no package for it but it's EASY to install

 once you do, you can get output like this:

 [quoted smartctl output elided]

Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-22 Thread Brandon High
On Fri, May 21, 2010 at 10:22 PM, Thomas Burgess wonsl...@gmail.com wrote:
 yah, it seems that rsync is faster for what i need anywaysat least right
 now...

If you don't have snapshots you want to keep in the new copy, then probably...
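
Something like this restarts cheaply, since files that already made it
over get skipped (flags are just illustrative):

rsync -avP /tank/media/ newhost:/tank/media/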

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] HDD Serial numbers for ZFS

2010-05-22 Thread James C. McPherson

On 22/05/10 05:09 PM, Andreas Iannou wrote:

I should mention that iostat -En doesn't return any information. Is
there a reliable way of reading SMART information natively in OpenSolaris?

Cheers,
Andre


From: andreas_wants_the_w...@hotmail.com
To: zfs-discuss@opensolaris.org
Date: Sat, 22 May 2010 16:49:15 +1000
Subject: [zfs-discuss] HDD Serial numbers for ZFS

Hallo again,

I'm wondering how one would get the serial numbers for the HDDs via OpenSolaris
and match them to the devices in the ZFS pool.

Say I'm trying to work out, reliably, which physical drive c7t3d0 is. Can serial
numbers be read by OpenSolaris?



prtconf -v is your friend. Example:



 $ iostat -En c3t3d0
c3t3d0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA      Product: ST3320620AS      Revision: D      Serial No:
Size: 320.07GB 320072933376 bytes
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0


## Note that there is no serial number reported by iostat.

$ ls -lart /dev/dsk/c3t3d0s2
   2 lrwxrwxrwx   1 root root  59 Mar 19 22:27 
/dev/dsk/c3t3d0s2 - 
../../devices/p...@0,0/pci10de,3...@a/pci1000,3...@0/s...@3,0:c



## note the /devices path.



$ prtconf -v /devices/p...@0,0/pci10de,3...@a/pci1000,3...@0/s...@3,0
sd, instance #36
System properties:
name='lun' type=int items=1
value=
name='target' type=int items=1
value=0003
name='class' type=string items=1
value='scsi'
Driver properties:
name='inquiry-serial-no' type=string items=1 dev=none
value='4QF01RZE'
name='pm-components' type=string items=3 dev=none
value='NAME=spindle-motor' + '0=off' + '1=on'
name='pm-hardware-state' type=string items=1 dev=none
value='needs-suspend-resume'
name='ddi-failfast-supported' type=boolean dev=none
name='ddi-kernel-ioctl' type=boolean dev=none
name='fm-ereport-capable' type=boolean dev=none
name='device-nblocks' type=int64 items=1 dev=none
value=2542eab0
name='device-blksize' type=int items=1 dev=none
value=0200
Hardware properties:
name='devid' type=string items=1

value='id1,s...@tata_st3320620as_4qf01rze'
name='inquiry-product-id' type=string items=1
value='ST3320620AS'
name='inquiry-device-type' type=int items=1
value=
name='inquiry-revision-id' type=string items=1
value='3.AAD'
name='inquiry-vendor-id' type=string items=1
value='ST3320620AS'
name='pm-capable' type=int items=1
value=00010003
name='sas-mpt' type=boolean
name='compatible' type=string items=5
value='scsiclass,00.vATA.pST3320620AS.rD' + 
'scsiclass,00.vATA.pST3320620AS' + 'scsa,00.bmpt' + 'scsiclass,00' + 
'scsiclass'

name='lun' type=int items=1
value=
name='target' type=int items=1
value=0003

[extra output elided]



This is the info you want:

name='devid' type=string items=1
value='id1,s...@tata_st3320620as_4qf01rze'
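
And if you want that mapping for every disk in one pass, a rough loop like
this does it too (an untested sketch; it assumes a GNU readlink is in your
PATH):

for d in /dev/rdsk/c*s2; do
    p=$(readlink -f $d | sed -e 's|.*/devices||' -e 's|:.*$||')
    echo "== $d"
    prtconf -v /devices$p | grep -A1 'inquiry-serial-no'
done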


And while I'm at it, let me recommend my presentation on
devids and guids

http://www.jmcp.homeunix.com/~jmcp/WhatIsAGuid.pdf



hth,
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog


Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-22 Thread Thomas Burgess
i only care about the most recent snapshot, as this is a growing video
collection.

i do have snapshots, but i only keep them for when/if i accidentally delete
something, or rename something wrong.


On Sat, May 22, 2010 at 3:43 AM, Brandon High bh...@freaks.com wrote:

 On Fri, May 21, 2010 at 10:22 PM, Thomas Burgess wonsl...@gmail.com
 wrote:
  yah, it seems that rsync is faster for what i need anywaysat least
 right
  now...

 If you don't have snapshots you want to keep in the new copy, then
 probably...

 -B

 --
 Brandon High : bh...@freaks.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New SSD options

2010-05-22 Thread Ragnar Sundblad

On 22 maj 2010, at 07.40, Don wrote:

 The SATA power connector supplies 3.3, 5 and 12v. A complete
 solution will have all three. Most drives use just the 5v, so you can
 probably ignore 3.3v and 12v.
 I'm not interested in building something that's going to work for every 
 possible drive config- just my config :) Both the Intel X25-E and the OCZ 
 only use the 5V rail.
 
 You'll need to use a step up DC-DC converter and be able to supply ~
 100mA at 5v.
 It's actually easier/cheaper to use a LiPoly battery  charger and get a
 few minutes of power than to use an ultracap for a few seconds of
 power. Most ultracaps are ~ 2.5v and LiPoly is 3.7v, so you'll need a
 step up converter in either case.
 Ultracapacitors are available in voltage ratings beyond 12 volts so there is 
 no reason to use a boost converter with them. That eliminates high frequency 
 switching transients right next to our SSD, which is always helpful.
 
 In this case- we have lots of room. We have a 3.5" x 1" drive bay, but a 2.5" 
 x 1/4" hard drive. There is ample room for several of the 6.3V ELNA 1F 
 capacitors (and our SATA power rail is a 5V regulated rail so they should 
 suffice)- either in series or parallel (depending on voltage or runtime 
 requirements).
 http://www.elna.co.jp/en/capacitor/double_layer/catalog/pdf/dk_e.pdf 
 
 You could put 2 caps in series for better voltage tolerance or in parallel for 
 longer runtimes. Either way you probably don't need a charge controller, a 
 boost or buck converter, or in fact any ICs at all. It's just a small board 
 with some caps on it.

I know they have a certain internal resistance, but I am not familiar
with the characteristics; is it high enough so you don't need to
limit the inrush current, and is it low enough so that you don't need
a voltage booster for output?

 Cost for a 5v only system should be $30 - $35 in one-off
 prototype-ready components with a 1100mAH battery (using prices from
 Sparkfun.com),
 You could literally split a sata cable and add in some capacitors for just 
 the cost of the caps themselves. The issue there is whether the caps would 
 present too large a current drain on initial charge up- If they do then you 
 need to add in charge controllers and you've got the same problems as with a 
 LiPo battery- although without the shorter service life.
 
 At the end of the day the real problem is whether we believe the drives 
 themselves will actually use the quiet period on the now dead bus to write 
 out their caches. This is something we should ask the manufacturers, and test 
 for ourselves.

Indeed!

/ragge



Re: [zfs-discuss] New SSD options

2010-05-22 Thread taemun
Basic electronics, go!

The linked capacitor from Elna (
http://www.elna.co.jp/en/capacitor/double_layer/catalog/pdf/dk_e.pdf) has an
internal resistance of 30 ohms.

Intel rate their 32GB X25-E at 2.4W active (we aren't interested in idle
power usage; if it's idle, we don't need the capacitor in the first place) on
the +5V rail - that's 0.48A. (P=VI)

V=IR, supply is 5V, current through load is 480mA, hence R=10.4 ohms.
The resistance of the X25-E under load is 10.4 ohms.

Now if you have a capacitor discharge circuit with the charged Elna
DK-6R3D105T - the largest and most suitable from that datasheet - you have
40.4 ohms around the loop (cap and load). +5V over 40.4 ohms. The maximum
current you can pull from that is I=V/R = 124mA. Around a quarter what the
X25-E wants in order to write.

The setup won't work.

I'd suggest something more along the lines of:
http://www.cap-xx.com/products/products.htm
which have an ESR around 3 orders of magnitude lower.

t

On 22 May 2010 18:58, Ragnar Sundblad ra...@csc.kth.se wrote:


 [quoted text elided]



Re: [zfs-discuss] send/recv over ssh

2010-05-22 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Brent Jones
 
 Problem with mbuffer, if you do scripted send/receives, you'd have to
 pre-start an Mbuffer session on the receiving end somehow.
 SSH is always running on the receiving end, so no issues there.

Could you use ssh to start the mbuffer session?
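
Something along these lines ought to work (port, buffer sizes, and dataset
names here are made up, and I haven't tested it):

ssh recvhost 'mbuffer -q -I 9090 -m 512M | zfs recv -F tank/backup' &
sleep 5   # give the listener a moment to come up
zfs send tank/data@snap | mbuffer -q -O recvhost:9090 -m 512M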



Re: [zfs-discuss] Interesting experience with Nexenta - anyone seen it?

2010-05-22 Thread Bob Friesenhahn

On Fri, 21 May 2010, David Dyer-Bennet wrote:


To be comfortable (I don't ask for "know for a certainty"; I'm not sure
that exists outside of faith), I want a claim by the manufacturer and
multiple outside tests in significant journals -- which could be the
blog of somebody I trusted, as well as actual magazines and such.
Ideally, certainly if it's important, I'd then verify the tests myself.


For me, "know for a certainty" means that the feature is clearly 
specified in the formal specification sheet for the product, and the 
vendor has historically published reliable specification sheets. 
This may not be the same as money in the bank, but it is better than 
relying on thoughts from some blog posting.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] Interesting experience with Nexenta - anyone seen it?

2010-05-22 Thread Bob Friesenhahn

On Fri, 21 May 2010, Brandon High wrote:


My understanding is that the controller contains enough cache to
buffer enough data to write a complete erase block size, eliminating
the need to read / erase / write that a partial block write entails.
It's reported to do a copy-on-write, so it doesn't need to do a read
of existing blocks when making changes, which gives it such high iops
- Even random writes are turned into sequential writes (much like how
ZFS works) of entire erase blocks. The excessive spare area is used to
ensure that there are always full pages free to write to. (Some
vendors are releasing consumer drives with 60/120/240 GB, using 7%
reserved space rather than the 27% that the original drives ship
with.)


FLASH is useless as working space since it does not behave like RAM so 
every SSD needs to have some RAM for temporary storage of data.  This 
COW approach seems nice except that it would appear to inflate 
performance by only considering a specific magic block size and 
alignment.  Other block sizes and alignments would require that 
existing data be read so that the new block content can be 
constructed.  Also, the blazing fast write speed (which depends on 
plenty of already erased blocks) would stop once the spare space in 
the SSD has been consumed.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] Interesting experience with Nexenta - anyone seen it?

2010-05-22 Thread Bob Friesenhahn

On Fri, 21 May 2010, Don wrote:

You know- it would probably be sufficient to provide the SSD with 
_just_ a big capacitor bank. If the host lost power it would stop 
writing and if the SSD still had power it would probably use the 
idle time to flush its buffers. Then there would be world peace!


This makes the assumption that an SSD will want to flush its write 
cache as soon as possible rather than just letting it sit there 
waiting for more data.  This is probably not a good assumption.  If 
the OS sends 512 bytes of data but the SSD block size is 4K, it is 
reasonable for the SSD to wait for 3584 more contiguous bytes of data 
before it bothers to write anything.


Writes increase the wear on the flash and writes require a slow erase 
cycle so it is reasonable for SSDs to buffer as much data in their 
write cache as possible before writing anything.  An advanced SSD 
could write non-contiguous sectors in a SSD page and then use a sort 
of lookup table to know where the sectors actually are.  Regardless, 
under slow write conditions, it is definitely valuable to buffer 
the data for a while in the hope that more related data will appear, 
or the data might even be overwritten.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] ZFS no longer working with FC devices.

2010-05-22 Thread Bob Friesenhahn

On Fri, 21 May 2010, Demian Phillips wrote:


For years I have been running a zpool using a Fibre Channel array with
no problems. I would scrub every so often and dump huge amounts of
data (tens or hundreds of GB) around and it never had a problem
outside of one confirmed (by the array) disk failure.

I upgraded to sol10x86 05/09 last year and since then I have
discovered any sufficiently high I/O from ZFS starts causing timeouts
and off-lining disks. This leads to failure (once rebooted and cleaned
all is well) long term because you can no longer scrub reliably.


The problem could be with the device driver, your FC card, or the 
array itself.  In my case, issues I thought were to blame on my 
motherboard or Solaris were due to a defective FC card and replacing 
the card resolved the problem.


If the problem is that your storage array is becoming overloaded with 
requests, then try adding this to your /etc/system file:


* Set device I/O maximum concurrency
* 
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Device_I.2FO_Queue_Size_.28I.2FO_Concurrency.29
set zfs:zfs_vdev_max_pending = 5
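
If you want to experiment with the value on the live system first, the same 
tunable can be poked with mdb (at your own risk, and it does not survive a 
reboot):

echo zfs_vdev_max_pending/W0t5 | mdb -kw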

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] New SSD options

2010-05-22 Thread Bob Friesenhahn

On Fri, 21 May 2010, Don wrote:

You could literally split a sata cable and add in some capacitors 
for just the cost of the caps themselves. The issue there is whether 
the caps would present too large a current drain on initial charge 
up- If they do then you need to add in charge controllers and you've 
got the same problems as with a LiPo battery- although without the 
shorter service life.


Electricity does run both directions down a wire and the capacitor 
would look like a short circuit to the supply when it is first turned 
on.  You would need some circuitry which delays applying power to the 
drive before the capacitor is sufficiently charged, and some circuitry 
which shuts off the flow of energy back into the power supply when the 
power supply shuts off (could be a silicon diode if you don't mind the 
0.7 V drop).


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
I am new to OSOL/ZFS but have just finished building my first system.

I detailed the system setup here:  
http://opensolaris.org/jive/thread.jspa?threadID=128986&tstart=15

I ended up having to add an additional controller card as two ports on the 
motherboard did not work as standard SATA ports.  Luckily I was able to salvage 
an LSI SAS card from an old system.  

Things seem to be working OK for the most part.. But I am trying to dig a bit 
deeper into the performance.  I have done some searching and it seems that the 
iostat -x can help you better understand your performance.

I have 8 drives in the system.  2 are in a mirrored boot pool and the other 6 
are in a single raidz2 pool.  All 6 are the same.  Samsung 1TB Spinpoints.

Here is what my output looks like from iostat -x 30 during a scrub of the 
raidz2 pool:

device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b 
cmdk0      299.7    1.5 37080.9    1.6  7.7  2.0   32.3  98  99 
cmdk1      300.2    1.3 37083.0    1.5  7.7  2.0   32.2  98  99 
cmdk2     1018.6    1.6 37141.3    1.7  0.5  0.7    1.2  22  43 
cmdk3        0.0    1.8    0.0    5.2  0.0  0.0   33.7   1   2 
cmdk4     1045.6    2.1 37124.3    1.4  0.7  0.7    1.3  21  41 
sd6          0.0    1.8    0.0    5.2  0.0  0.0   25.1   0   1 
sd7       1033.4    2.5 37128.5    1.8  0.0  1.0    1.0   3  38 
sd8       1044.5    2.5 37129.4    1.8  0.0  0.9    0.9   3  36 
                 extended device statistics 
device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b 
cmdk0      301.9    1.3 37339.0    1.7  7.8  2.0   32.1  99  99 
cmdk1      302.1    1.4 37341.0    1.8  7.7  2.0   32.0  99  99 
cmdk2     1048.1    1.5 37400.4    1.6  0.5  0.7    1.1  20  42 
cmdk3        0.0    1.5    0.0    5.1  0.0  0.0   36.5   1   2 
cmdk4     1054.4    1.6 37363.1    1.5  0.7  0.6    1.2  20  40 
sd6          0.0    1.5    0.0    5.1  0.0  0.0   30.4   0   1 
sd7       1044.4    2.1 37404.2    1.7  0.0  0.9    0.9   3  38 
sd8       1050.5    2.1 37382.8    1.9  0.0  0.9    0.9   3  36 
                 extended device statistics 
device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b 
cmdk0      296.3    1.5 36195.4    1.7  7.8  2.0   32.7  99  99 
cmdk1      295.2    1.5 36230.1    1.8  7.7  2.0   32.5  98  98 
cmdk2      987.5    2.0 36171.5    1.7  0.6  0.7    1.3  22  43 
cmdk3        0.0    1.5    0.0    5.1  0.0  0.0   37.7   1   2 
cmdk4     1018.3    2.0 36160.8    1.6  0.7  0.6    1.4  21  41 
sd6          0.0    1.5    0.0    5.1  0.0  0.1   40.3   0   2 
sd7       1005.3    2.6 36300.6    1.8  0.0  1.1    1.1   3  39 
sd8       1016.0    2.5 36260.1    2.0  0.0  1.0    1.0   3  36 


I think cmdk3 and sd6 are in my rpool.  I tried to split the pools across the 
controllers for better performance.

It seems to me that cmdk0 and cmdk1 are much slower than the others..  But I am 
not sure why or what to check next...  In fact I am not even sure how I can 
trace back that device name to figure out which controller it is connected to.

Any ideas or next steps would be appreciated.

Thanks.
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread taemun
iostat -xen 1 will provide the same device names as the rest of the system
(as well as show error columns).

zpool status will show you which drive is in which pool.
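
e.g. it lists the member devices under each vdev, shaped roughly like this
(illustrative, trimmed):

  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            c7d0    ONLINE       0     0     0
            c7d1    ONLINE       0     0     0
            ...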

As for the controllers, cfgadm -al groups them nicely.

t

On 23 May 2010 03:50, Brian broco...@vt.edu wrote:

 [quoted text elided]



Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
Following up with some more information here:

This is the output of iostat -xen 30

                            extended device statistics              ---- errors ---- 
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
  296.8    2.9 36640.2    7.5  7.8  2.0   26.1    6.6  99  99   0   0   0   0 c7d0
  296.7    2.5 36618.1    7.5  7.8  2.0   26.1    6.6  99  99   0   0   0   0 c7d1
  952.0    2.6 36689.6    7.1  0.5  0.8    0.6    0.8  26  46   0   0   0   0 c8d1
    0.0    1.7    0.0    5.8  0.0  0.0   10.7   13.3   1   1   0   0   0   0 c9d0
  963.7    2.8 36712.4    6.4  0.5  0.7    0.6    0.7  24  43   0  84   0  84 c9d1
    0.0    1.7    0.0    5.8  0.0  0.0    0.0   22.9   0   1   0   0   0   0 c4t15d0
 1000.9    3.7 36605.0    6.9  0.0  0.9    0.0    0.9   3  39   0   0   0   0 c4t16d0
 1000.7    4.1 36579.6    7.5  0.0  0.9    0.0    0.9   4  37   0   0   0   0 c4t17d0
                            extended device statistics              ---- errors ---- 
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
  302.6    5.4 36937.5  156.3  7.7  2.0   24.9    6.4  98  99   0   0   0   0 c7d0
  303.1    5.4 36879.0  156.2  7.7  2.0   24.8    6.4  98  99   0   0   0   0 c7d1
  961.4    5.8 36974.3  155.6  0.5  0.8    0.5    0.8  26  47   0   0   0   0 c8d1
    0.0    3.5    0.0   22.7  0.0  0.0   11.1    9.5   1   2   0   0   0   0 c9d0
  961.1    5.5 37044.0  154.5  0.6  0.7    0.6    0.8  26  45   0  84   0  84 c9d1
    0.0    3.4    0.0   22.7  0.0  0.0    0.0   13.8   0   1   0   0   0   0 c4t15d0
  998.1    7.1 36995.6  155.4  0.0  1.0    0.0    1.0   3  40   0   0   0   0 c4t16d0
  996.8    7.2 36954.1  156.5  0.0  0.9    0.0    0.9   4  37   0   0   0   0 c4t17d0
                            extended device statistics              ---- errors ---- 
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
  296.0    3.4 36583.8  155.5  7.8  2.0   26.0    6.6  99  99   0   0   0   0 c7d0
  296.3    3.3 36538.6  155.5  7.8  2.0   25.9    6.6  99  99   0   0   0   0 c7d1
  920.5    3.4 36649.7  155.0  0.7  0.8    0.7    0.9  29  49   0   0   0   0 c8d1
    0.0    1.8    0.0    5.9  0.0  0.0   10.6   11.4   1   1   0   0   0   0 c9d0
  948.9    3.3 36689.5  154.3  0.6  0.7    0.6    0.7  25  44   0  84   0  84 c9d1
    0.0    1.8    0.0    5.9  0.0  0.0    0.0   26.2   0   1   0   0   0   0 c4t15d0
  974.0    3.8 36643.4  154.7  0.0  1.0    0.0    1.0   4  41   0   0   0   0 c4t16d0
  976.8    3.8 36634.6  155.6  0.0  1.0    0.0    1.0   4  38   0   0   0   0 c4t17d0
                            extended device statistics              ---- errors ---- 
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
  301.3    3.6 37353.6  151.9  7.8  2.0   25.5    6.5  99  99   0   0   0   0 c7d0
  301.5    3.5 37340.0  151.8  7.8  2.0   25.4    6.5  99  99   0   0   0   0 c7d1
  979.5    3.4 37466.4  151.5  0.5  0.7    0.5    0.7  24  45   0   0   0   0 c8d1
    0.0    1.7    0.0    5.9  0.0  0.0   12.4   13.7   1   1   0   0   0   0 c9d0
  995.8    3.4 37456.7  151.0  0.5  0.7    0.5    0.7  23  43   0  84   0  84 c9d1
    0.0    1.8    0.0    5.9  0.0  0.0    0.0   23.5   0   1   0   0   0   0 c4t15d0
 1022.7    4.0 37410.8  151.4  0.0  0.9    0.0    0.9   3  38   0   0   0   0 c4t16d0
 1020.1    4.4 37473.8  152.0  0.0  0.9    0.0    0.9   4  38   0   0   0   0 c4t17d0

I looked at it for a while and the number of errors is not increasing.. I think 
that may have stemmed from when I was testing hot-plugging of the drive and 
disconnected it at one point.

The -xen helped me determine that it  was disks c7d0 and c7d1 that were slower.

I then tried to look at the cache settings in format:

pfexec format -e
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c4t15d0 <DEFAULT cyl 19454 alt 2 hd 255 sec 63>
          /p...@0,0/pci1002,5...@3/pci1014,3...@0/s...@f,0
       1. c4t16d0 <ATA-SAMSUNG HD103SJ-0001-931.51GB>
          /p...@0,0/pci1002,5...@3/pci1014,3...@0/s...@10,0
       2. c4t17d0 <ATA-SAMSUNG HD103SJ-0001-931.51GB>
          /p...@0,0/pci1002,5...@3/pci1014,3...@0/s...@11,0
       3. c7d0 <SAMSUNG-S246JDWZ40807-0001-931.51GB>
          /p...@0,0/pci-...@11/i...@0/c...@0,0
       4. c7d1 <SAMSUNG-S246JDWZ40807-0001-931.51GB>
          /p...@0,0/pci-...@11/i...@0/c...@1,0
       5. c8d1 <SAMSUNG-S246JDWZ40807-0001-931.51GB>
          /p...@0,0/pci-...@11/i...@1/c...@1,0
       6. c9d0 <DEFAULT cyl 19454 alt 2 hd 255 sec 63>
          /p...@0,0/pci-...@14,1/i...@1/c...@0,0
       7. c9d1 <SAMSUNG-S246JDWZ40806-0001-931.51GB>
          /p...@0,0/pci-...@14,1/i...@1/c...@1,0

If I look at c4t17d0, everything is how I expect it to be:

Specify disk (enter its number): 2
selecting c4t17d0
[disk formatted]
/dev/dsk/c4t17d0s0 is part of active ZFS pool tank. Please see zpool(1M).


FORMAT MENU:
disk   - select a disk
type   - select (define) a disk type

Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Bob Friesenhahn

On Sat, 22 May 2010, Brian wrote:


The -xen helped me determine that it  was disks c7d0 and c7d1 that were slower.


You may be right, but it is not totally clear since you really need to 
apply a workload which is assured to consistently load the disks.  I 
don't think that 'scrub' is necessarily best for this.  Perhaps the 
load issued to these two disks contains more random access requests.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Roy Sigurd Karlsbakk
 extended device statistics              ---- errors ---- 
     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
   296.8    2.9 36640.2    7.5  7.8  2.0   26.1    6.6  99  99   0   0   0   0 c7d0
   296.7    2.5 36618.1    7.5  7.8  2.0   26.1    6.6  99  99   0   0   0   0 c7d1
   952.0    2.6 36689.6    7.1  0.5  0.8    0.6    0.8  26  46   0   0   0   0 c8d1
     0.0    1.7    0.0    5.8  0.0  0.0   10.7   13.3   1   1   0   0   0   0 c9d0
   963.7    2.8 36712.4    6.4  0.5  0.7    0.6    0.7  24  43   0  84   0  84 c9d1
     0.0    1.7    0.0    5.8  0.0  0.0    0.0   22.9   0   1   0   0   0   0 c4t15d0
  1000.9    3.7 36605.0    6.9  0.0  0.9    0.0    0.9   3  39   0   0   0   0 c4t16d0
  1000.7    4.1 36579.6    7.5  0.0  0.9    0.0    0.9   4  37   0   0   0   0 c4t17d0

Seems to me c9d1 is having a hard time. How is your zpool layout?
 
Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It 
is an elementary imperative for all pedagogues to avoid excessive use of idioms 
of foreign origin. In most cases, adequate and relevant synonyms exist in 
Norwegian.


Re: [zfs-discuss] New SSD options

2010-05-22 Thread Haudy Kazemi

Bob Friesenhahn wrote:

[quoted text elided]


You can also use an appropriately wired field effect transistor (FET) / 
MOSFET of sufficient current carrying capacity as a one-way valve 
(diode) that has minimal voltage drop.

More:
http://electronicdesign.com/article/power/fet-supplies-low-voltage-reverse-polarity-protecti.aspx
http://www.electro-tech-online.com/general-electronics-chat/32118-using-mosfet-diode-replacement.html


In regard to how long do you need to continue supplying power...that 
comes down to how long does the SSD wait before flushing cache to 
flash.  If you can identify the maximum write cache flush interval, and 
size the battery or capacitor to exceed that maximum interval, you 
should be okay.  The maximum write cache flush interval is determined by 
a timer that says something like: "okay, we've waited 5 seconds for 
additional data to arrive to be written. None has arrived in the last 5 
seconds, so we're going to write what we already have to better ensure 
data integrity, even though it is suboptimal from an absolute performance 
perspective."  In conventional terms of filling city buses: the bus 
leaves when it is full of people, or 15 minutes has passed since the 
last bus left.


Does anyone know if there is a way to directly or indirectly measure the 
write caching flush interval?  I know cache sizes can be found via 
performance testing, but what about write intervals?



Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brandon High
On Sat, May 22, 2010 at 11:41 AM, Brian broco...@vt.edu wrote:
 If I look at c7d0, I get a message about no Alt Slice found and I don't 
 have access to the cache settings.  Not sure if this is part of my problem or 
 not:

That can happen if the controller is not using AHCI. It'll affect your
performance pretty drastically too.

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
Is there a way within opensolaris to detect if AHCI is being used by various 
controllers?

I suspect you may be right that AHCI is not turned on.  The bios for this 
particular motherboard is fairly confusing on the AHCI settings.  The only 
setting I have is actually in the raid section, and it seems to let me select 
between IDE/AHCI/RAID as an option.  However, I can't tell if it applies only 
if one is using software RAID.

If I set it to AHCI, another screen appears prior to boot that is titled AMD 
AHCI BIOS.  However, opensolaris hangs during booting with this enabled.
Is there a way from the grub menu to request opensolaris boot without the 
splashscreen, but instead boot with debug information printed to the console?
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
If you install Opensolaris with the AHCI settings off, then switch them on,
it will fail to boot


I had to reinstall with the settings correct.

the best way to tell if ahci is working is to use cfgadm
if you see your drives there, ahci is on
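
for example, with ahci active the disks show up as sata targets, roughly like
this (illustrative output, not copied from a real box):

sata0/0::dsk/c5t0d0            disk         connected    configured   ok
sata0/1::dsk/c5t1d0            disk         connected    configured   ok

in ide mode they show up on the pci-ide/cmdk side instead (which is what
cXdY names like c7d0 suggest)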

if not, then you may need to reinstall with it on (for the rpool at least)


On Sat, May 22, 2010 at 4:43 PM, Brian broco...@vt.edu wrote:

 [quoted text elided]


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
I am not sure I fully understand the question...  It is set up as raidz2 - is 
that what you wanted to know?
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
Thanks - 
   I can give reinstalling a shot.  Is there anything else I should do first?  
Should I export my tank pool?
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
just to make sure i understand what is going on here,

you have a rpool which is having performance issues, and you discovered ahci
was disabled?


you enabled it, and now it won't boot.  correct?

This happened to me and the solution was to export my storage pool and
reinstall my rpool with the ahci settings on.

Then i imported my storage pool and all was golden


On Sat, May 22, 2010 at 5:25 PM, Brian broco...@vt.edu wrote:

 Thanks -
   I can give reinstalling a shot.  Is there anything else I should do
 first?  Should I export my tank pool?
 --
 This message posted from opensolaris.org


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Ian Collins

On 05/23/10 08:52 AM, Thomas Burgess wrote:
If you install Opensolaris with the AHCI settings off, then switch 
them on, it will fail to boot



I had to reinstall with the settings correct.

Well you probably didn't have to.  Booting from the live CD and
importing the pool would have put things right.


--
Ian.



Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Ian Collins

On 05/23/10 08:43 AM, Brian wrote:

Is there a way within opensolaris to detect if AHCI is being used by various 
controllers?

I suspect you may be right that AHCI is not turned on.  The bios for this particular
motherboard is fairly confusing on the AHCI settings.  The only setting I have is
actually in the raid section, and it seems to let me select between
IDE/AHCI/RAID as an option.  However, I can't tell if it applies only if one
is using software RAID.

   

[answered in other post]


If I set it to AHCI, another screen appears prior to boot that is titled AMD 
AHCI BIOS.  However, opensolaris hangs during booting with this enabled.
Is there a way from the grub menu to request opensolaris boot without the 
splashscreen, but instead boot with debug information printed to the console?
   


Just hit a key once the bar is moving.

--
Ian.



Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
This didn't work for me.  I had the exact same issue a few days ago.

My motherboard had the following:

Native IDE
AHCI
RAID
Legacy IDE

so naturally i chose AHCI, but it ALSO had a mode called IDE/SATA combined
mode

I thought i needed this to use both the ide and any sata ports; turns out it
was basically an ide emulation mode for sata. Long story short, i ended up
with opensolaris installed in IDE mode.

I had to reinstall.  I tried the livecd/import method and it still failed to
boot.


On Sat, May 22, 2010 at 5:30 PM, Ian Collins i...@ianshome.com wrote:

 On 05/23/10 08:52 AM, Thomas Burgess wrote:

 If you install Opensolaris with the AHCI settings off, then switch them
 on, it will fail to boot


 I had to reinstall with the settings correct.

 Well you probably didn't have to.  Booting from the live CD and importing
 the pool would have put things right.

 --
 Ian.




Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
this old thread has info on how to switch out of the IDE/SATA combined mode:


http://opensolaris.org/jive/thread.jspa?messageID=448758#448758




On Sat, May 22, 2010 at 5:32 PM, Ian Collins i...@ianshome.com wrote:

 On 05/23/10 08:43 AM, Brian wrote:

 Is there a way within opensolaris to detect if AHCI is being used by
 various controllers?

 I suspect you may be right that AHCI is not turned on.  The bios for this
 particular motherboard is fairly confusing on the AHCI settings.  The only
 setting I have is actually in the raid section, and it seems to let me select
 between IDE/AHCI/RAID as an option.  However, I can't tell if it applies
 only if one is using software RAID.



 [answered in other post]


  If I set it to AHCI, another screen appears prior to boot that is titled
 AMD AHCI BIOS.  However, opensolaris hangs during booting with this enabled.
 Is there a way from the grub menu to request opensolaris boot without the
 splashscreen, but instead boot with debug information printed to the
 console?



 Just hit a key once the bar is moving.

 --
 Ian.




Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
Sometimes when it hangs on boot, hitting the space bar (or any key) won't bring it
back to the command line.  That is why I was wondering if there is a way to
not show the splashscreen at all, and instead show what it was trying to load
when it hangs.


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
Ok.  What worked for me was booting with the live CD and doing:

pfexec zpool import -f rpool
reboot
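
(The -f is needed because the pool still looks active to the live
environment; the import presumably also refreshed the device paths
recorded in the pool labels, which would explain why booting works
afterwards.)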

After that I was able to boot with AHCI enabled.  The performance issues I was 
seeing are now also gone.  I am getting around 100 to 110 MB/s during a scrub.  
Scrubs are completing in 20 minutes for 1TB of data rather than 1.2 hours.


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
GREAT, glad it worked for you!



On Sat, May 22, 2010 at 7:39 PM, Brian broco...@vt.edu wrote:

 Ok.  What worked for me was booting with the live CD and doing:

 pfexec zpool import -f rpool
 reboot

 After that I was able to boot with AHCI enabled.  The performance issues I
 was seeing are now also gone.  I am getting around 100 to 110 MB/s during a
 scrub.  Scrubs are completing in 20 minutes for 1TB of data rather than 1.2
 hours.


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Ian Collins

On 05/23/10 11:31 AM, Brian wrote:

Sometimes when it hangs on boot, hitting the space bar (or any key) won't bring it
back to the command line.  That is why I was wondering if there is a way to
not show the splashscreen at all, and instead show what it was trying to load
when it hangs.
   

From my /rpool/boot/grub/menu.lst:

title OpenSolaris Development snv_133 Debug
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/opensolaris
kernel$ /platform/i86pc/kernel/$ISADIR/unix -kd -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive

The -kd drops into the kernel debugger.  If all you want to do is
lose the splash screen, copy your existing entry and use:


kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
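
With -kd the kernel stops in kmdb early in boot; type :c at the kmdb
prompt to continue booting (assuming stock kmdb behaviour).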

--
Ian.



[zfs-discuss] snapshots send/recv

2010-05-22 Thread Thomas Burgess
I'm confused... I have a filesystem on server 1 called tank/nas/dump

I made a snapshot called first

zfs snapshot tank/nas/d...@first

then i did a zfs send/recv like:

zfs send tank/nas/d...@first | ssh wonsl...@192.168.1.xx /bin/pfexec
/usr/sbin/zfs recv tank/nas/dump


this worked fine. Then today i wanted to send what has changed

i did


zfs snapshot tank/nas/d...@second


now, here's where i'm confused... from reading the man page i thought this
command would work:


pfexec zfs send -i tank/nas/d...@first tank/nas/d...@second| ssh
wonsl...@192.168.1.15 /bin/pfexec /usr/sbin/zfs recv -vd tank/nas/dump



but i get an error:

cannot receive incremental stream: destination tank/nas/dump has been
modified
since most recent snapshot


why is this?


Re: [zfs-discuss] snapshots send/recv

2010-05-22 Thread Ian Collins

On 05/23/10 01:18 PM, Thomas Burgess wrote:


this worked fine, next today, i wanted to send what has changed

i did
zfs snapshot tank/nas/d...@second

now, here's where i'm confused... from reading the man page i thought 
this command would work:


pfexec zfs send -i tank/nas/d...@first tank/nas/d...@second| ssh 
wonsl...@192.168.1.15 mailto:wonsl...@192.168.1.15 /bin/pfexec 
/usr/sbin/zfs recv -vd tank/nas/dump



It should (you can shorten the first snap to just first).


but i get an error:

cannot receive incremental stream: destination tank/nas/dump has been 
modified

since most recent snapshot

Well has it?  Even wandering around the filesystem with atime enabled 
will cause this error.


Add -F to the receive to force a rollback to the state as of the
original snap.
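
For instance, a minimal sketch of such an incremental with -F, using
hypothetical dataset names (yours will differ):

zfs snapshot tank/data@second
pfexec zfs send -i @first tank/data@second | \
    ssh user@host /bin/pfexec /usr/sbin/zfs recv -F tank/data

-F rolls the destination back to its most recent snapshot before applying
the stream, so any local changes (including atime updates) are discarded.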


--

Ian.



Re: [zfs-discuss] snapshots send/recv

2010-05-22 Thread Thomas Burgess
On Sat, May 22, 2010 at 9:26 PM, Ian Collins i...@ianshome.com wrote:

 On 05/23/10 01:18 PM, Thomas Burgess wrote:


 this worked fine, next today, i wanted to send what has changed

 i did
 zfs snapshot tank/nas/d...@second

 now, here's where i'm confused... from reading the man page i thought this
 command would work:

 pfexec zfs send -i tank/nas/d...@first tank/nas/d...@second| ssh
 wonsl...@192.168.1.15 mailto:wonsl...@192.168.1.15 /bin/pfexec
 /usr/sbin/zfs recv -vd tank/nas/dump

  It should (you can shorten the first snap to just first).


 but i get an error:

 cannot receive incremental stream: destination tank/nas/dump has been
 modified
 since most recent snapshot

  Well has it?  Even wandering around the filesystem with atime enabled
 will cause this error.

 Add -F to the receive to force a rollback to the state as of the original
 snap.

Ahh, this i didn't know. Yes, i DID cd to the dir and check some stuff, and
atime IS enabled... this is NOT very intuitive.

adding -F worked...thanks




 --

 Ian.




Re: [zfs-discuss] zpool import hanging - SOLVED

2010-05-22 Thread Eduardo Bragatto

Hi,

I have fixed this problem a couple weeks ago, but haven't found the  
time to report it until now.


Cindy Swearingen was very kind in contacting me to resolve this issue;
I would like to take this opportunity to express my gratitude to her.


We have not found the root cause of the error. Cindy suspected some
known bugs in release 5/09 that have been fixed in 10/09, but we
could not confirm that as the real cause of the problem. Anyway, I
went ahead and re-installed the operating system with the latest
Solaris release (10/09), and "zpool import" worked like there was
nothing wrong.


I have scrubbed the pool and no errors were found. I'm using the  
system since the OS was re-installed (exactly 10 days now) without any  
problems.


If you get yourself into a situation where "zpool import" never
finishes because it hangs while mounting some of the ZFS
filesystems, make sure you try to import that pool on the newest
stable release before wasting too much time debugging the problem.
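
For example, from a newer live environment the pool can be imported under
an alternate root so nothing mounts over the running system (pool name
illustrative):

pfexec zpool import -f -R /mnt tank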


Thanks,
Eduardo Bragatto.


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Haudy Kazemi

Brian wrote:

Sometimes when it hangs on boot, hitting the space bar (or any key) won't bring it
back to the command line.  That is why I was wondering if there is a way to
not show the splashscreen at all, and instead show what it was trying to load
when it hangs.
  

Look at these threads:

OpenSolaris b134 Genunix site iso failing boot
http://opensolaris.org/jive/thread.jspa?threadID=125445&tstart=0

Build 134 Won't boot
http://ko.opensolaris.org/jive/thread.jspa?threadID=125486
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6932552

How to bypass boot splash screen?
http://opensolaris.org/jive/thread.jspa?messageID=355648


They talk about changing some GRUB menu.lst options, by either adding
'console=text' or removing 'console=graphics'.  See if that works for
you too.
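
For example, the kernel line of the default entry in
/rpool/boot/grub/menu.lst would change roughly like this (a sketch; your
entry may differ):

# before
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=graphics
# after
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=text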



Re: [zfs-discuss] snapshots send/recv

2010-05-22 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Thomas Burgess
 
 but i get an error:
 
 cannot receive incremental stream: destination tank/nas/dump has been
 modified
 since most recent snapshot

Whenever you send a snap and intend to later receive an incremental,
just make the destination filesystem read-only, to ensure you'll be able
to receive the incremental later.

zfs set readonly=on somefilesystem
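
If you still want to browse the received copy, disabling atime on the
destination also avoids the modified-just-by-reading problem (readonly=on
is the stronger guarantee, though):

zfs set atime=off somefilesystem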



Re: [zfs-discuss] snapshots send/recv

2010-05-22 Thread Ian Collins

On 05/23/10 03:56 PM, Thomas Burgess wrote:

let me ask a question though.

Lets say i have a filesystem

tank/something

i make the snapshot

tank/someth...@one

i send/recv it

then i do something (add a file...remove something, whatever) on the 
send side, then i do a send/recv and force it of the next filesystem



What do you mean force it of the next filesystem?

will the new recv'd filesystem be identical to the original forced 
snapshot or will it be a combination of the 2?


The received filesystem will be identical to the sending one.

--
Ian.



Re: [zfs-discuss] snapshots send/recv

2010-05-22 Thread Thomas Burgess
ok, so forcing just basically makes it drop whatever changes were made.

That's what i was wondering... this is what i expected.


On Sun, May 23, 2010 at 12:05 AM, Ian Collins i...@ianshome.com wrote:

 On 05/23/10 03:56 PM, Thomas Burgess wrote:

 let me ask a question though.

 Lets say i have a filesystem

 tank/something

 i make the snapshot

 tank/someth...@one

 i send/recv it

 then i do something (add a file...remove something, whatever) on the send
 side, then i do a send/recv and force it of the next filesystem

  What do you mean force it of the next filesystem?


  will the new recv'd filesystem be identical to the original forced
 snapshot or will it be a combination of the 2?


 The received filesystem will be identical to the sending one.

 --
 Ian.




Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-22 Thread Richard Elling
On May 21, 2010, at 7:03 PM, Brandon High wrote:

 On Fri, May 21, 2010 at 5:54 PM, Thomas Burgess wonsl...@gmail.com wrote:
 shouldn't the newer server have LESS load?
 Please forgive my ubernoobness.
 
 Depends on what it's doing!
 
 Load average is really how many processes are waiting to run, so it's
 not always a useful metric. If there are processes waiting on disk,
 you can have high load with almost no CPU use. Check the iowait with
 iostat or top.

iowait is defined to be 0 on Solaris 10 and later. It is a useless metric anyway.
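
A more useful view of disk saturation is per-device busy and service
times, e.g. (interval in seconds; illustrative):

iostat -xn 5

and watch the %b and asvc_t columns instead.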
 -- richard

-- 
ZFS and NexentaStor training, Rotterdam, July 13-15, 2010
http://nexenta-rotterdam.eventbrite.com/