Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-04 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
hi

maybe check out STEC SSDs
or check out the service manual of the Sun ZFS storage appliance
to see which read and write SSDs are used in that system
regards


Sent from my iPad

On Aug 3, 2012, at 22:05, Hung-Sheng Tsao (LaoTsao) Ph.D laot...@gmail.com 
wrote:

 Intel 311 Series Larsen Creek 20GB 2.5 SATA II SLC Enterprise Solid State 
 Disk SSDSA2VP020G201
 
 
 Sent from my iPad
 
 On Aug 3, 2012, at 21:39, Bob Friesenhahn bfrie...@simple.dallas.tx.us 
 wrote:
 
 On Fri, 3 Aug 2012, Karl Rossing wrote:
 
 I'm looking at 
 http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html
  wondering what I should get.
 
 Are people getting intel 330's for l2arc and 520's for slog?
 
 For the slog, you should look for a SLC technology SSD which saves unwritten 
 data on power failure.  In Intel-speak, this is called Enhanced Power Loss 
 Data Protection.  I am not running across any Intel SSDs which claim to 
 match these requirements.
 
 Extreme write IOPS claims in consumer SSDs are normally based on large write 
 caches which can lose even more data if there is a power failure.
 
 Bob
 -- 
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-03 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
Intel 311 Series Larsen Creek 20GB 2.5 SATA II SLC Enterprise Solid State Disk 
SSDSA2VP020G201


Sent from my iPad

On Aug 3, 2012, at 21:39, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:

 On Fri, 3 Aug 2012, Karl Rossing wrote:
 
 I'm looking at 
 http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html
  wondering what I should get.
 
 Are people getting intel 330's for l2arc and 520's for slog?
 
 For the slog, you should look for a SLC technology SSD which saves unwritten 
 data on power failure.  In Intel-speak, this is called Enhanced Power Loss 
 Data Protection.  I am not running across any Intel SSDs which claim to 
 match these requirements.
 
 Extreme write IOPS claims in consumer SSDs are normally based on large write 
 caches which can lose even more data if there is a power failure.
 
 Bob
 -- 
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to import the zpool

2012-08-02 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
so zpool import -F ...
and zpool import -f ...
are all not working?
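For reference, a hedged sketch of the usual recovery ladder (pool name taken from the thread; -n only previews, and -X is an undocumented last resort):

  zpool import -f tXstpool     # force the import despite the last-accessed-host check
  zpool import -nF tXstpool    # dry run: report whether discarding recent transactions would help
  zpool import -F tXstpool     # actually discard the last few transactions
  zpool import -FX tXstpool    # extreme rewind, only if everything else fails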
regards


Sent from my iPad

On Aug 2, 2012, at 7:47, Suresh Kumar sachinnsur...@gmail.com wrote:

 Hi Hung-sheng,
  
 It is not displaying any output, like the following.
  
 bash-3.2#  zpool import -nF tXstpool
 bash-3.2#
  
  
 Thanks & Regards,
 Suresh.
  
  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to import the zpool

2012-08-02 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
hi
can you post zpool history
regards

Sent from my iPad

On Aug 2, 2012, at 7:47, Suresh Kumar sachinnsur...@gmail.com wrote:

 Hi Hung-sheng,
  
 It is not displaying any output, like the following.
  
 bash-3.2#  zpool import -nF tXstpool
 bash-3.2#
  
  
 Thanks & Regards,
 Suresh.
  
  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-07-26 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
imho, 147440-21 does not list the bugs that are solved by 148098,
even though it obsoletes 148098
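For reference, once a patch level that supports LUN expansion is in place, a minimal sketch of growing the pool online (pool and device names are taken from the quoted output below):

  zpool set autoexpand=on xx-oraarch
  zpool online -e xx-oraarch c5t60060E800570B90070B96547d0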



Sent from my iPad

On Jul 25, 2012, at 18:14, Habony, Zsolt zsolt.hab...@hp.com wrote:

 Thank you for your replies.
 
 First, sorry for the misleading info.  Patch 148098-03 is indeed not included in 
 the recommended set, but trying to download it shows that 147440-15 obsoletes it
 and that 147440-19 is included in the latest recommended patch set.
 Thus time has solved the problem on its own.
 
 Just for fun, my case was:
 
 A standard LUN used as a zfs filesystem, no redundancy (as the storage already 
 has it), and no partition is used; the disk is given directly to the zpool.
 # zpool status -oraarch
  pool: -oraarch
 state: ONLINE
 scan: none requested
 config:
 
NAME STATE READ WRITE CKSUM
xx-oraarch   ONLINE   0 0 0
  c5t60060E800570B90070B96547d0  ONLINE   0 0 0
 
 errors: No known data errors
 
 Partitioning shows this.  
 
 partition pr
 Current partition table (original):
 Total disk sectors available: 41927902 + 16384 (reserved sectors)
 
 Part      Tag    Flag     First Sector        Size        Last Sector
   0        usr    wm                256     19.99GB           41927902
   1 unassigned    wm                  0           0                  0
   2 unassigned    wm                  0           0                  0
   3 unassigned    wm                  0           0                  0
   4 unassigned    wm                  0           0                  0
   5 unassigned    wm                  0           0                  0
   6 unassigned    wm                  0           0                  0
   8   reserved    wm           41927903      8.00MB           41944286
 
 
 As I mentioned I did not partition it, zpool create did.  I had absolutely 
 no idea how to resize these partitions, where to get the available number of 
 sectors and how many should be skipped and reserved ...
 Thus I backed up the 10G, destroyed zpool, created zpool (size was fine now) 
 , restored data.  
 
 Partition looks like this now, I do not think I could have created it easily 
 manually.
 
 partition pr
 Current partition table (original):
 Total disk sectors available: 209700062 + 16384 (reserved sectors)
 
 Part      Tag    Flag     First Sector         Size        Last Sector
   0        usr    wm                256      99.99GB          209700062
   1 unassigned    wm                  0            0                  0
   2 unassigned    wm                  0            0                  0
   3 unassigned    wm                  0            0                  0
   4 unassigned    wm                  0            0                  0
   5 unassigned    wm                  0            0                  0
   6 unassigned    wm                  0            0                  0
   8   reserved    wm          209700063       8.00MB          209716446
 
 Thank you for your help.
 Zsolt Habony
 
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Hung-Sheng Tsao (LaoTsao) Ph.D


Sent from my iPad

On Jul 11, 2012, at 13:11, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:

 On Wed, 11 Jul 2012, Richard Elling wrote:
 The last studio release suitable for building OpenSolaris is available in 
 the repo.
 See the instructions at 
 http://wiki.illumos.org/display/illumos/How+To+Build+illumos
 
 Not correct as far as I can tell.  You should re-read the page you 
 referenced.  Oracle rescinded (or lost) the special Studio releases needed to 
 build the OpenSolaris kernel.  

hi
you can still download Studio 12, 12.1 and 12.2 through OTN, AFAIK


 The only way I can see to obtain these releases is illegally.
 
 However, Studio 12.3 (free download) produces user-space executables which 
 run fine under Illumos.
 
 Bob
 -- 
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snapshots slow on sol11?

2012-06-27 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
hi
just wondering, can you switch from Samba to the built-in SMB (kernel CIFS) service?
regards

Sent from my iPad

On Jun 27, 2012, at 2:46, Carsten John cj...@mpi-bremen.de wrote:

 -Original message-
 CC:ZFS Discussions zfs-discuss@opensolaris.org; 
 From:Jim Klimov jimkli...@cos.ru
 Sent:Tue 26-06-2012 22:34
 Subject:Re: [zfs-discuss] snapshots slow on sol11?
 2012-06-26 23:57, Carsten John wrote:
 Hello everybody,
 
 I recently migrated a file server (NFS & Samba) from OpenSolaris (Build 
 111) 
 to Sol11.
 (After?) the move we are facing random (or random looking) outages of 
 our Samba...
 
 As for the timeouts, check that your tuning (i.e. migrated files
 like /etc/system) doesn't enforce long TXG syncs (the default was 30 sec)
 or something like that.
 
 Find some DTrace scripts to see if ZIL is intensively used during
 these user-profile writes, and if these writes are synchronous -
 maybe an SSD/DDR logging device might be useful for this scenario?
 
 Regarding the zfs-auto-snapshot, it is possible to install the old
 scripted package from OpenSolaris onto Solaris 10 at least; I did
 not have much experience with newer releases yet (timesliderd) so
 can't help better.
 
 HTH,
 //Jim Klimov
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 
 
 Hi everybody,
 
 in the meantime I was able to rule out the snapshots. I disabled snapshots, 
 but the issue still persists. I will now check Jim's suggestions.
 
 thx so far
 
 
 Carsten
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (fwd) Re: ZFS NFS service hanging on Sunday

2012-06-25 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
in Solaris, ZFS caches many things, so you should have more RAM;
if you set up that much swap, IMHO the RAM should be higher than 4GB
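For reference, a minimal sketch of checking how much memory the kernel and ZFS are using (the ::memstat dcmd is also mentioned in the quoted mail below), plus an optional ARC cap whose value here is only an assumption:

  echo ::memstat | mdb -k              # kernel/ZFS memory breakdown, run as root
  # optional cap in /etc/system (reboot required), size to your workload:
  # set zfs:zfs_arc_max = 0x40000000   (1 GB)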
regards

Sent from my iPad

On Jun 25, 2012, at 5:58, tpc...@mklab.ph.rhul.ac.uk wrote:

 
 2012-06-14 19:11, tpc...@mklab.ph.rhul.ac.uk wrote:
 
 In message 201206141413.q5eedvzq017...@mklab.ph.rhul.ac.uk, 
 tpc...@mklab.ph.r
 hul.ac.uk writes:
 Memory: 2048M phys mem, 32M free mem, 16G total swap, 16G free swap
 My WAG is that your zpool history is hanging due to lack of
 RAM.
 
 Interesting.  In the problem state the system is usually quite responsive, 
 e.g. not memory thrashing.  Under Linux, which I'm more
 familiar with, 'used memory' = 'total memory' - 'free memory' refers to 
 physical memory being used for data caching by
 the kernel (which is still available for processes to allocate as needed) 
 together with memory already allocated to processes, as opposed to
 only physical memory already allocated and therefore really 'used'.  Does 
 this mean something different under Solaris ?
 
 Well, it is roughly similar. In Solaris there is a general notion
 
 [snipped]
 
 Dear Jim,
    Thanks for the detailed explanation of ZFS memory usage.  Special 
 thanks also to John D Groenveld for the initial suggestion of a lack-of-RAM
 problem.  Since upping the RAM from 2GB to 4GB the machine has sailed through 
 the last two Sunday mornings w/o problem.  I was interested to
 subsequently discover the Solaris command 'echo ::memstat | mdb -k', which 
 reveals just how much memory ZFS can use.
 
 Best regards
 Tom.
 
 --
 Tom Crane, Dept. Physics, Royal Holloway, University of London, Egham Hill,
 Egham, Surrey, TW20 0EX, England.
 Email:  T.Crane@rhul dot ac dot uk
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Good tower server for around 1,250 USD?

2012-03-23 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
well
check  this link

https://shop.oracle.com/pls/ostore/product?p1=SunFireX4270M2server&p2=&p3=&p4=&sc=ocom_x86_SunFireX4270M2server&tz=-4:00

you may not like the price



Sent from my iPad

On Mar 23, 2012, at 17:16, The Honorable Senator and Mrs. John 
Blutarsky bl...@nymph.paranoici.org wrote:

 On Fri Mar 23 at 10:06:12 2012 laot...@gmail.com wrote:
 
 well
 use components of the x4170m2 as an example and you will be ok
 intel cpu
 lsi sas controller, non-raid
 sas 7200rpm hdd
 my 2c
 
 That sounds too vague to be useful unless I could afford an X4170M2. I
 can't build a custom box and I don't have the resources to go over the parts
 list and order something with the same components. Thanks though.
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Good tower server for around 1,250 USD?

2012-03-22 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
well
use components of the x4170m2 as an example and you will be ok
intel cpu
lsi sas controller, non-raid
sas 7200rpm hdd
my 2c

Sent from my iPad

On Mar 22, 2012, at 14:41, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:

 On Thu, 22 Mar 2012, The Honorable Senator and Mrs. John Blutarsky wrote:
 
 This will be a do-everything machine. I will use it for development, hosting
 various apps in zones (web, file server, mail server etc.) and running other
 systems (like a Solaris 11 test system) in VirtualBox. Ultimately I would
 like to put it under Solaris support so I am looking for something
 officially approved. The problem is there are so many systems on the HCL I
 don't know where to begin. One of the Supermicro super workstations looks
 
 Almost all of the systems listed on the HCL are defunct and no longer 
 purchasable except for on the used market.  Obtaining an approved system 
 seems very difficult. In spite of this, Solaris runs very well on many 
 non-approved modern systems.
 
 I don't know what that means as far as the ability to purchase Solaris 
 support.
 
 Bob
 -- 
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unable to import exported zpool on a new server

2012-03-13 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
hi
are the disks/SAS controllers the same on both servers?
-LT

Sent from my iPad

On Mar 13, 2012, at 6:10, P-O Yliniemi p...@bsd-guide.net wrote:

 Hello,
 
 I'm currently replacing a temporary storage server (server1) with the one 
 that should be the final one (server2). To keep the data storage from the old 
 one I'm attempting to import it on the new server. Both servers are running 
 OpenIndiana server build 151a.
 
 Server 1 (old)
 The zpool consists of three disks in a raidz1 configuration:
 # zpool status
  pool: storage
 state: ONLINE
  scan: none requested
 config:
 
NAMESTATE READ WRITE CKSUM
storage ONLINE   0 0 0
  raidz1-0  ONLINE   0 0 0
c4d0ONLINE   0 0 0
c4d1ONLINE   0 0 0
c5d0ONLINE   0 0 0
 
 errors: No known data errors
 
 Output of format command gives:
 # format
 AVAILABLE DISK SELECTIONS:
   0. c2t1d0 LSILOGIC-LogicalVolume-3000 cyl 60785 alt 2 hd 255 sec 126
  /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
   1. c4d0 ST3000DM- W1F07HW-0001-2.73TB
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
   2. c4d1 ST3000DM- W1F05H2-0001-2.73TB
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c5d0 ST3000DM- W1F032R-0001-2.73TB
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
   4. c5d1 ST3000DM- W1F07HZ-0001-2.73TB
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@1,0
 
 (c5d1 was previously used as a hot spare, but I removed it as an attempt to 
 export and import the zpool without the spare)
 
 # zpool export storage
 
 # zpool list
 (shows only rpool)
 
 # zpool import
   pool: storage
 id: 17210091810759984780
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:
 
storage ONLINE
  raidz1-0  ONLINE
c4d0ONLINE
c4d1ONLINE
c5d0ONLINE
 
 (check to see if it is importable to the old server, this has also been 
 verified since I moved back the disks to the old server yesterday to have it 
 available during the night)
 
 zdb -l output in attached files.
 
 ---
 
 Server 2 (new)
 I have attached the disks on the new server in the same order (which 
 shouldn't matter as ZFS should locate the disks anyway)
 zpool import gives:
 
 root@backup:~# zpool import
   pool: storage
 id: 17210091810759984780
  state: UNAVAIL
 action: The pool cannot be imported due to damaged devices or data.
 config:
 
storageUNAVAIL  insufficient replicas
  raidz1-0 UNAVAIL  corrupted data
c7t5000C50044E0F316d0  ONLINE
c7t5000C50044A30193d0  ONLINE
c7t5000C50044760F6Ed0  ONLINE
 
 The problem is that all the disks are there and online, but the pool is 
 showing up as unavailable.
 
 Any ideas on what I can do more in order to solve this problem ?
 
 Regards,
  PeO
 
 
 
 zdb_l_c4d0s0.txt
 zdb_l_c4d1s0.txt
 zdb_l_c5d0s0.txt
 zdb_l_c5d1s0.txt
 zdb_l_c7t5000C50044A30193d0s0.txt
 zdb_l_c7t5000C50044E0F316d0s0.txt
 zdb_l_c7t5000C50044760F6Ed0s0.txt
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Advice for migrating ZFS configuration

2012-03-07 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
read the link please
it seems that after you create the raidz1 zpool
you need to destroy/offline the fake disk so it does not contain data when you do the 
copy
copy the data by following the steps in the link

then replace the fake disk with the real disk

this is a good approach that i did not know before
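A minimal sketch of the whole procedure, assuming the old pool is named tank on c0t0d0/c0t1d0, the one new disk is c0t3d0, and the fake device is a sparse file (all names and sizes are placeholders; follow the link for the authoritative steps):

  zpool create temp c0t3d0                             # temporary pool on the one new disk
  zfs snapshot -r tank@move
  zfs send -R tank@move | zfs receive -dF temp         # copy everything to the temporary pool
  zpool destroy tank
  mkfile -n 1500g /fake                                # sparse file, at least as large as the real disks
  zpool create newpool raidz c0t0d0 c0t1d0 /fake
  zpool offline newpool /fake                          # degrade the pool before any data lands on the file
  zfs snapshot -r temp@move2
  zfs send -R temp@move2 | zfs receive -dF newpool     # copy the data back
  zpool destroy temp
  zpool replace newpool /fake c0t3d0                   # resilver onto the now-free disk
  zpool export newpool
  zpool import newpool tank                            # rename to the original pool name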
-LT

Sent from my iPad

On Mar 7, 2012, at 17:48, Bob Doolittle bob.doolit...@oracle.com wrote:

 Wait, I'm not following the last few steps you suggest. Comments inline:
 
 On 03/07/12 17:03, Fajar A. Nugraha wrote:
 - use the one new disk to create a temporary pool
 - copy the data (zfs snapshot -r + zfs send -R | zfs receive)
 - destroy old pool
 - create a three-disk raidz pool using two disks and a fake device,
 something like http://www.dev-eth0.de/creating-raidz-with-missing-device/
 
 Don't I need to copy the data back from the temporary pool to the new raidz 
 pool at this point?
 I'm not understanding the process beyond this point, can you clarify please?
 
 - destroy the temporary pool
 
 So this leaves the data intact on the disk?
 
 - replace the fake device with now-free disk
 
 So this replicates the data on the previously-free disk across the raidz pool?
 
 What's the point of the following export/import steps? Renaming? Why can't I 
 just give the old pool name to the raidz pool when I create it?
 
 - export the new pool
 - import the new pool and rename it in the process: zpool import
 temp_pool_name old_pool_name
 
 Thanks!
 
 -Bob
 
 
 
 In the end I
 want the three-disk raidz to have the same name (and mount point) as the
 original zpool. There must be an easy way to do this.
 Nope.
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] need hint on pool setup

2012-02-01 Thread Hung-Sheng Tsao (laoTsao)
my 2c
1 just do mirrors of 2 devices (10 mirror vdevs for the 20 HDDs) with 1 spare
2 raidz2 with 5 devices per vdev (4 vdevs for the 20 HDDs), with one spare (see the sketch below)
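for example, a sketch of option 2 with placeholder device names (adjust controller/target numbers to the real system):

  zpool create tank \
    raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 \
    raidz2 c0t5d0 c0t6d0 c0t7d0 c0t8d0 c0t9d0 \
    raidz2 c0t10d0 c0t11d0 c0t12d0 c0t13d0 c0t14d0 \
    raidz2 c0t15d0 c0t16d0 c0t17d0 c0t18d0 c0t19d0 \
    spare c0t20d0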

Sent from my iPad

On Feb 1, 2012, at 3:49, Thomas Nau thomas@uni-ulm.de wrote:

 Hi
 
 On 01/31/2012 10:05 PM, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. wrote:
 what is your main application for ZFS? e.g. just NFS or iSCSI for home dirs 
 or VMs? or Windows clients?
 
 Yes, fileservice only using CIFS, NFS, Samba and maybe iSCSI
 
 Is performance important? or space is more important?
 
 a good balance ;)
 
 what is the memory of your server?
 
 96G
 
 do you want to use ZIL or L2ARC?
 
 ZEUS STECRAM as ZIL (mirrored); maybe SSDs and L2ARC
 
 what is your backup  or DR plan?
 
 continuous rolling snapshot plus send/receive to remote site
 TSM backup at least once a week to tape; depends on how much
 time the TSM client needs to walk the filesystems
 
 You need to answer all these question first
 
 did so
 
 Thomas
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread Hung-Sheng Tsao (laoTsao)
what is the server you attach to the D2700?
the HP spec for the D2700 does not include Solaris, so I am not sure how you get support 
from HP :-(

Sent from my iPad

On Jan 31, 2012, at 20:25, Ragnar Sundblad ra...@csc.kth.se wrote:

 
 Just to follow up on this, in case there are others interested:
 
 The D2700s seems to work quite ok for us. We have four issues with them,
 all of which we will ignore for now:
 - They hang when I insert an Intel SSD SATA (!) disk (I wanted to test,
  both for log device and cache device, and I had those around).
  This could probably be fixed with a firmware upgrade, but:
 - It seems the firmware can't be upgraded if you don't have one of a few
  special HP raid cards! Silly!
 - The LEDs on the disks: On the first bay it is turned off, on the rest
  it is turned on. They all flash at activity. I have no idea why this
  is, and I know to little about SAS chassises to even guess. This could
  possibly change with a firmware upgrade of the chassis controllers, but
  maybe not.
 - In Solaris 11, the /dev/chassis/HP-D2700-SAS-AJ941A.xx.../Drive_bay_NN
  is supposed to contain a soft link to the device for the disk in the bay.
  This doesn't seem to work for bay 0. It may be related to the previous
  problem, but maybe not.
 
 (We may buy a HP raid card just to be able to upgrade their firmware.)
 
 If we have had the time we probably would have tested some other jbods
 too, but we need to get those rolling soon, and these seem good enough.
 
 We have tested them with multipathed SAS, using a single LSI SAS 9205-8e
 HBA and connecting the two ports on the HBA to the two controllers in the
 D2700.
 
 To get multipathing, you need to configure the scsi_vhci driver, in
 /kernel/drv/scsi_vhci.conf for sol10 or /etc/driver/drv/scsi_vhci.conf for
 sol11-x86. To get better performance, you probably want to use
 load-balance=logical-block instead of load-balance=round-robin.
 See examples below.
 
 You may also need to run stmsboot -e to enable multipathing. I still haven't
 figured out what that does (more than updating /etc/vfstab and /etc/dumpdates,
 which you typically don't use with zfs), maybe nothing.
 
 Thanks to all that have helped with input!
 
 /ragge
 
 
 -
 
 
 For solaris 10u8 and later, in /kernel/drv/scsi_vhci.conf.DIST:
 ###
 ...
 device-type-scsi-options-list =
  "HP  D2700 SAS AJ941A", "symmetric-option",
  "HP  EG", "symmetric-option";
 # HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
 symmetric-option = 0x100;
 
 device-type-mpxio-options-list =
  "device-type=HP  D2700 SAS AJ941A", 
 "load-balance-options=logical-block-options",
  "device-type=HP  EG", "load-balance-options=logical-block-options";
 # HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
 logical-block-options="load-balance=logical-block", "region-size=20";
 ...
 ###
 
 
 For solaris 11, in /etc/driver/drv/scsi_vhci.conf on x86
 (in /kernel/drv/scsi_vhci.conf.DIST on sparc?):
 ###
 ...
 #load-balance="round-robin";
 load-balance="logical-block";
 region-size=20;
 ...
 scsi-vhci-failover-override =
   "HP  D2700 SAS AJ941A", "f_sym",
   "HP  EG",   "f_sym";
 # HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
 ###
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs and iscsi performance help

2012-01-27 Thread Hung-Sheng Tsao (laoTsao)
hi
IMHO, upgrade to s11 if possible
use the COMSTAR based iscsi 

Sent from my iPad

On Jan 26, 2012, at 23:25, Ivan Rodriguez ivan...@gmail.com wrote:

 Dear fellows,
 
 We have a backup server with a zpool size of 20 TB, we transfer
 information using zfs snapshots every day (we have around 300 fs on
 that pool),
 the storage is a dell md3000i connected by iscsi, the pool is
 currently version 10, the same storage is connected
 to another server with a smaller pool of 3 TB(zpool version 10) this
 server is working fine and speed is good between the storage
 and the server, however  in the server with 20 TB pool performance is
 an issue  after we restart the server
 performance is good but with the time lets say a week the performance
 keeps dropping until we have to
 bounce the server again (same behavior with new version of solaris in
 this case performance drops in 2 days), no errors in logs or storage
 or the zpool status -v
 
 We suspect that the pool has some issues probably there is corruption
 somewhere, we tested solaris 10 8/11 with zpool 29,
 although we haven't update the pool itself, with the new solaris the
 performance is even worst and every time
 that we restart the server we get stuff like this:
 
 SOURCE: zfs-diagnosis, REV: 1.0
 EVENT-ID: 0168621d-3f61-c1fc-bc73-c50efaa836f4
 DESC: All faults associated with an event id have been addressed.
 Refer to http://sun.com/msg/FMD-8000-4M for more information.
 AUTO-RESPONSE: Some system components offlined because of the
 original fault may have been brought back online.
 IMPACT: Performance degradation of the system due to the original
 fault may have been recovered.
 REC-ACTION: Use fmdump -v -u EVENT-ID to identify the repaired components.
 [ID 377184 daemon.notice] SUNW-MSG-ID: FMD-8000-6U, TYPE: Resolved,
 VER: 1, SEVERITY: Minor
 
 And we need to export and import the pool in order to be  able to  access it.
 
 Now my question is do you guys know if we upgrade the pool does this
 process  fix some issues in the metadata of the pool ?
 We've been holding back the upgrade because we know that after the
 upgrade there is no way to return to version 10.
 
 Does anybody has experienced corruption in the pool without a hardware
 failure ?
 Is there any tools or procedures to find corruption on the pool or
 File systems inside the pool ? (besides scrub)
 
 So far we went through the connections cables, ports and controllers
 between the storage and the server everything seems fine, we've
 swapped network interfaces, cables, switch ports etc etc.
 
 
 Any ideas would be really appreciate it.
 
 Cheers
 Ivan
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to access the zpool after issue a reboot

2012-01-27 Thread Hung-Sheng Tsao (laoTsao)
it seems that you will need to work with Oracle support :-(

Sent from my iPad

On Jan 27, 2012, at 3:49, sureshkumar sachinnsur...@gmail.com wrote:

 Hi Christian ,
 
 I have disabled MPXIO, and zpool clear works for some time, but it 
 failed after a few iterations.
 
 I am using one SPARC machine [with the same OS level as the x86 one] and I didn't 
 face any issue with the SPARC architecture.
 Could the problem be with the boot sequence?
 
 Please help me.
 
 Thanks,
 Sudheer.
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to access the zpool after issue a reboot

2012-01-24 Thread Hung-Sheng Tsao (laoTsao)
how did you issue the reboot? try 
shutdown -i6 -y -g0 

Sent from my iPad

On Jan 24, 2012, at 7:03, sureshkumar sachinnsur...@gmail.com wrote:

 Hi all,
 
 
 I am new to Solaris and I am facing an issue with the dynapath [multipath s/w] 
 for Solaris 10u10 x86.
 
 I am facing an issue with the zpool.
 
 My problem is that I am unable to access the zpool after issuing a reboot.
 
 I am pasting the zpool status below.
 
 ==
 bash-3.2# zpool status
   pool: test
  state: UNAVAIL
  status: One or more devices could not be opened.  There are insufficient
 replicas for the pool to continue functioning.
  action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-3C
  scan: none requested
  config:
 
 NAME STATE READ WRITE CKSUM
 test UNAVAIL  0 0 0  insufficient 
 replicas
 = 
 
 But all my devices are online and I am able to access them.
 When I export and import the zpool, it comes back to the available state.
 
 I am not getting whats the problem with the reboot.
 
 Any suggestions regarding this was very helpful.
 
 Thanks & Regards,
 Sudheer.
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] patching a solaris server with zones on zfs file systems

2012-01-21 Thread Hung-Sheng Tsao (laoTsao)
which version of solaris?
s10u?: live upgrade -- zfs snapshot, halt the zones, back up the zones, zoneadm detach each zone, 
then zoneadm attach -U each zone after the OS upgrade (done via zfs snapshot plus Live Upgrade, or just an 
upgrade from DVD); a rough sketch follows below
or s11?: beadm for the new boot environment, upgrade the OS, and treat the zones as above
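a rough sketch of the s10u path above (BE, zone and media names are placeholders, and detach/attach is only one of several supported ways to handle zones; for patching rather than a full upgrade, luupgrade -t takes a patch directory instead of -u):

  lucreate -n newBE                               # new boot environment (zfs snapshot/clone of root)
  zoneadm -z zone1 halt
  zoneadm -z zone1 detach                         # repeat for each of the 5 zones
  luupgrade -u -n newBE -s /cdrom/sol_10_update   # upgrade the inactive BE from media
  luactivate newBE
  init 6
  zoneadm -z zone1 attach -U                      # after reboot, update each zone on attach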
regards



Sent from my iPad

On Jan 21, 2012, at 5:46, bhanu prakash bhanu.sys...@gmail.com wrote:

 Hi All,
 
 Please let me know the procedure how to patch a server which is having 5 
 zones on zfs file systems.
 
 Root file system exists on internal disk and zones are existed on SAN.
 
 Thank you all,
 Bhanu
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Do the disks in a zpool have a private region that I can read to get a zpool name or id?

2012-01-12 Thread Hung-Sheng Tsao (laoTsao)
if the disks are assigned to two hosts,
maybe just running zpool import on the other host will show the zpool? not sure

as for AIX-controlled hdd: a zpool needs a partition/label that solaris can understand;
i do not know what AIX uses for partitioning, but it should not be the same as solaris
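for reference, the ZFS label on each vdev records the pool name, pool GUID and the hostname that last wrote it, so a hedged check from host2 would be (device name is a placeholder):

  zdb -l /dev/rdsk/cXtXdXs0 | egrep 'name|guid|hostname'
  zpool import          # also lists importable pools and which host last used them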



Sent from my iPad

On Jan 12, 2012, at 9:51, adele@oracle.com adele@oracle.com wrote:

 Hi all,
 
 My cu has following question.
 
 
 Assume I have allocated a LUN from external storage to two hosts ( by mistake 
 ). I create a zpool with this LUN on host1 with no errors. On host2 when I 
 try to create a zpool by
 using the same disk ( which is allocated to host2 as well ), zpool create - 
 comes back with an error saying   /dev/dsk/cXtXdX is part of exported or 
 potentially active ZFS pool test.
 Is there a way for me to check what zpool disk belongs to from 'host2'. Do 
 the disks in a zpool have a private region that I can read to get a zpool 
 name or id?
 
 
 Steps required to reproduce the problem
 
 Disk doubly allocated to host1, host2
 host1 sees disk as disk100
 host2 sees disk as disk101
 host1# zpool create host1_pool disk1 disk2 disk100
 returns success ( as expected )
 host2# zpool create host2_pool disk1 disk2 disk101
 invalid dev specification
 use '-f' to override the following errors:
 /dev/dsk/disk101 is part of exported or potentially active ZFS pool test. 
 Please see zpool
 
 zpool did catch that the disk is part of an active pool, but since it's not 
 on the same host I am not getting the name of the pool to which disk101 is 
 allocated. It's possible we might go ahead and use '-f' option to create the 
 zpool and start using this filesystem. By doing this we're potentially 
 destroying filesystems on host1, host2 which could lead to severe downtime.
 
 Any way to get the pool name to which the disk101 is assigned ( with a 
 different name on a different host )? This would aid us tremendously in 
 avoiding a potential issue. This has happened with Solaris 9, UFS once before 
 taking out two Solaris machines.
 
 What happens if the disk is assigned to an AIX box and is set up as part of a volume 
 manager on AIX, and we try to create a zpool on the Solaris host? Will ZFS catch 
 this by saying something is wrong with the disk?
 
 Regards,
 Adele
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unable to allocate dma memory for extra SGL

2012-01-10 Thread Hung-Sheng Tsao (laoTsao)
how big is the ram?
what is the zpool setup, and what are your hba and hdd size and type?


Sent from my iPad

On Jan 10, 2012, at 21:07, Ray Van Dolson rvandol...@esri.com wrote:

 Hi all;
 
 We have a Solaris 10 U9 x86 instance running on Silicon Mechanics /
 SuperMicro hardware.
 
 Occasionally under high load (ZFS scrub for example), the box becomes
 non-responsive (it continues to respond to ping but nothing else works
 -- not even the local console).  Our only solution is to hard reset
 after which everything comes up normally.
 
 Logs are showing the following:
 
  Jan  8 09:44:08 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
 /pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:08 prodsys-dmz-zfs2MPT SGL mem alloc failed
  Jan  8 09:44:08 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
 /pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:08 prodsys-dmz-zfs2Unable to allocate dma memory for 
 extra SGL.
  Jan  8 09:44:08 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
 /pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:08 prodsys-dmz-zfs2Unable to allocate dma memory for 
 extra SGL.
  Jan  8 09:44:10 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
 /pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:10 prodsys-dmz-zfs2Unable to allocate dma memory for 
 extra SGL.
  Jan  8 09:44:10 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
 /pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:10 prodsys-dmz-zfs2MPT SGL mem alloc failed
  Jan  8 09:44:11 prodsys-dmz-zfs2 rpcmod: [ID 851375 kern.warning] WARNING: 
 svc_cots_kdup no slots free
 
 I am able to resolve the last error by adjusting upwards the duplicate
 request cache sizes, but have been unable to find anything on the MPT
 SGL errors.
 
 Anyone have any thoughts on what this error might be?
 
 At this point, we are simply going to apply patches to this box (we do
 see an outstanding mpt patch):
 
 147150 --  01 R-- 124 SunOS 5.10_x86: mpt_sas patch
 147702 --  03 R--  21 SunOS 5.10_x86: mpt patch
 
 But we have another identically configured box at the same patch level
 (admittedly with slightly less workload, though it also undergoes
 monthly zfs scrubs) which does not experience this issue.
 
 Ray
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Stress test zfs

2012-01-05 Thread Hung-Sheng Tsao (laoTsao)
what is the zpool layout (zpool status)?
are you using the default 128k recordsize for the zpool?
is your server x86 or sparc t3, and how many sockets?
IMHO, t3 for oracle needs careful tuning,
since many oracle ops need a fast single-thread cpu


Sent from my iPad

On Jan 5, 2012, at 11:40, grant lowe glow...@gmail.com wrote:

 Ok. I blew it. I didn't add enough information. Here's some more detail:
 
 Disk array is a RAMSAN array, with RAID6 and 8K stripes. I'm measuring 
 performance with the results of the bonnie++ output and comparing with with 
 the the zpool iostat output. It's with the zpool iostat I'm not seeing a lot 
 of writes.
 
 Like I said, I'm new to this and if I need to provide anything else I will. 
 Thanks, all.
 
 
 On Wed, Jan 4, 2012 at 2:59 PM, grant lowe glow...@gmail.com wrote:
 Hi all,
 
 I've got a solaris 10 running 9/10 on a T3. It's an oracle box with 128GB 
 memory RIght now oracle . I've been trying to load test the box with 
 bonnie++. I can seem to get 80 to 90 K writes, but can't seem to get more 
 than a couple K for writes. Any suggestions? Or should I take this to a 
 bonnie++ mailing list? Any help is appreciated. I'm kinda new to load 
 testing. Thanks.
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Stress test zfs

2012-01-05 Thread Hung-Sheng Tsao (laoTsao)
one still does not understand your setup
1 what is the hba in the T3-2
2 did u set up raid6 (how) in the ramsan array, or present the ssds as a jbod to the zpool?
3 which model of RAMSAN
4 is there any other storage behind the RAMSAN
5 do you set up the zpool with a zil and/or cache (L2ARC)?
6 IMHO, the hybrid approach to ZFS is the most cost effective: 7200rpm SAS with 
zil and L2ARC and a mirrored zpool (example commands below)

the problem with raid6 at 8k and oracle at 8k is the mismatch of stripe size;
we know the zpool uses a dynamic stripe size in raidz, not the same as in hw raid,
but a similar consideration still exists
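for reference, a couple of hedged examples tied to points 5 and 6 and to the 8k mismatch (pool and device names are placeholders):

  zpool add tank log mirror c2t0d0 c2t1d0   # mirrored slog (ZIL device)
  zpool add tank cache c2t2d0               # L2ARC cache device
  zfs set recordsize=8k tank/oradata        # match the Oracle db_block_size before creating datafiles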



Sent from my iPad

On Jan 5, 2012, at 11:40, grant lowe glow...@gmail.com wrote:

 Ok. I blew it. I didn't add enough information. Here's some more detail:
 
 Disk array is a RAMSAN array, with RAID6 and 8K stripes. I'm measuring 
 performance with the results of the bonnie++ output and comparing with with 
 the the zpool iostat output. It's with the zpool iostat I'm not seeing a lot 
 of writes.
 
 Like I said, I'm new to this and if I need to provide anything else I will. 
 Thanks, all.
 
 
 On Wed, Jan 4, 2012 at 2:59 PM, grant lowe glow...@gmail.com wrote:
 Hi all,
 
 I've got a solaris 10 running 9/10 on a T3. It's an oracle box with 128GB 
 memory RIght now oracle . I've been trying to load test the box with 
 bonnie++. I can seem to get 80 to 90 K writes, but can't seem to get more 
 than a couple K for writes. Any suggestions? Or should I take this to a 
 bonnie++ mailing list? Any help is appreciated. I'm kinda new to load 
 testing. Thanks.
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Stress test zfs

2012-01-05 Thread Hung-Sheng Tsao (laoTsao)
i just took a look at the ramsan web site;
there are many whitepapers on oracle, none on ZFS


Sent from my iPad

On Jan 5, 2012, at 12:58, Hung-Sheng Tsao (laoTsao) laot...@gmail.com wrote:

 one still does not understand your setup
 1 what is the hba in the T3-2
 2 did u set up raid6 (how) in the ramsan array, or present the ssds as a jbod to the zpool?
 3 which model of RAMSAN
 4 is there any other storage behind the RAMSAN
 5 do you set up the zpool with a zil and/or cache (L2ARC)?
 6 IMHO, the hybrid approach to ZFS is the most cost effective: 7200rpm SAS 
 with zil and L2ARC and a mirrored zpool
 
 the problem with raid6 at 8k and oracle at 8k is the mismatch of stripe size;
 we know the zpool uses a dynamic stripe size in raidz, not the same as in hw raid,
 but a similar consideration still exists
 
 
 
 Sent from my iPad
 
 On Jan 5, 2012, at 11:40, grant lowe glow...@gmail.com wrote:
 
 Ok. I blew it. I didn't add enough information. Here's some more detail:
 
 Disk array is a RAMSAN array, with RAID6 and 8K stripes. I'm measuring 
 performance with the results of the bonnie++ output and comparing with with 
 the the zpool iostat output. It's with the zpool iostat I'm not seeing a lot 
 of writes.
 
 Like I said, I'm new to this and if I need to provide anything else I will. 
 Thanks, all.
 
 
 On Wed, Jan 4, 2012 at 2:59 PM, grant lowe glow...@gmail.com wrote:
 Hi all,
 
 I've got a solaris 10 running 9/10 on a T3. It's an oracle box with 128GB 
 memory RIght now oracle . I've been trying to load test the box with 
 bonnie++. I can seem to get 80 to 90 K writes, but can't seem to get more 
 than a couple K for writes. Any suggestions? Or should I take this to a 
 bonnie++ mailing list? Any help is appreciated. I'm kinda new to load 
 testing. Thanks.
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Stress test zfs

2012-01-04 Thread Hung-Sheng Tsao (laoTsao)
what is your storage?
internal sas or external array
what is  your zfs setup?


Sent from my iPad

On Jan 4, 2012, at 17:59, grant lowe glow...@gmail.com wrote:

 Hi all,
 
 I've got a solaris 10 running 9/10 on a T3. It's an oracle box with 128GB 
 memory RIght now oracle . I've been trying to load test the box with 
 bonnie++. I can seem to get 80 to 90 K writes, but can't seem to get more 
 than a couple K for writes. Any suggestions? Or should I take this to a 
 bonnie++ mailing list? Any help is appreciated. I'm kinda new to load 
 testing. Thanks.
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Resolving performance issue w/ deduplication (NexentaStor)

2011-12-30 Thread Hung-Sheng Tsao (laoTsao)
now s11 supports shadow migration, which exists for just this purpose, AFAIK
not sure whether nexentaStor supports shadow migration
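for reference, a minimal sketch of shadow migration on S11 (names are placeholders; the new filesystem is usable immediately while the old data is pulled in over time):

  zfs create -o shadow=nfs://oldserver/export/data pool/newdata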



Sent from my iPad

On Dec 30, 2011, at 2:03, Ray Van Dolson rvandol...@esri.com wrote:

 On Thu, Dec 29, 2011 at 10:59:04PM -0800, Fajar A. Nugraha wrote:
 On Fri, Dec 30, 2011 at 1:31 PM, Ray Van Dolson rvandol...@esri.com wrote:
 Is there a non-disruptive way to undeduplicate everything and expunge
 the DDT?
 
 AFAIK, no
 
  zfs send/recv and then back perhaps (we have the extra
 space)?
 
 That should work, but it's disruptive :D
 
 Others might provide better answer though.
 
 Well, slightly _less_ disruptive perhaps.  We can zfs send to another
 file system on the same system, but different set of disks.  We then
 disable NFS shares on the original, do a final zfs send to sync, then
 share out the new undeduplicated file system with the same name.
 Hopefully the window here is short enough that NFS clients are able to
 recover gracefully.
 
 We'd then wipe out the old zpool, recreate and do the reverse to get
 data back onto it..
 
 Thanks,
 Ray
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Very poor pool performance - no zfs/controller errors?!

2011-12-19 Thread Hung-Sheng Tsao (laoTsao)
what is the ram size?
are there many snapshots, created and then deleted?
did you run a scrub?

Sent from my iPad

On Dec 18, 2011, at 10:46, Jan-Aage Frydenbø-Bruvoll j...@architechs.eu wrote:

 Hi,
 
 On Sun, Dec 18, 2011 at 15:13, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.
 laot...@gmail.com wrote:
 what are the output of zpool status pool1 and pool2
 it seems that you have mix configuration of pool3 with disk and mirror
 
 The other two pools show very similar outputs:
 
 root@stor:~# zpool status pool1
  pool: pool1
 state: ONLINE
 scan: resilvered 1.41M in 0h0m with 0 errors on Sun Dec  4 17:42:35 2011
 config:
 
NAME  STATE READ WRITE CKSUM
pool1  ONLINE   0 0 0
  mirror-0ONLINE   0 0 0
c1t12d0   ONLINE   0 0 0
c1t13d0   ONLINE   0 0 0
  mirror-1ONLINE   0 0 0
c1t24d0   ONLINE   0 0 0
c1t25d0   ONLINE   0 0 0
  mirror-2ONLINE   0 0 0
c1t30d0   ONLINE   0 0 0
c1t31d0   ONLINE   0 0 0
  mirror-3ONLINE   0 0 0
c1t32d0   ONLINE   0 0 0
c1t33d0   ONLINE   0 0 0
logs
  mirror-4ONLINE   0 0 0
c2t2d0p6  ONLINE   0 0 0
c2t3d0p6  ONLINE   0 0 0
cache
  c2t2d0p10   ONLINE   0 0 0
  c2t3d0p10   ONLINE   0 0 0
 
 errors: No known data errors
 root@stor:~# zpool status pool2
  pool: pool2
 state: ONLINE
 scan: scrub canceled on Wed Dec 14 07:51:50 2011
 config:
 
NAME  STATE READ WRITE CKSUM
pool2 ONLINE   0 0 0
  mirror-0ONLINE   0 0 0
c1t14d0   ONLINE   0 0 0
c1t15d0   ONLINE   0 0 0
  mirror-1ONLINE   0 0 0
c1t18d0   ONLINE   0 0 0
c1t19d0   ONLINE   0 0 0
  mirror-2ONLINE   0 0 0
c1t20d0   ONLINE   0 0 0
c1t21d0   ONLINE   0 0 0
  mirror-3ONLINE   0 0 0
c1t22d0   ONLINE   0 0 0
c1t23d0   ONLINE   0 0 0
logs
  mirror-4ONLINE   0 0 0
c2t2d0p7  ONLINE   0 0 0
c2t3d0p7  ONLINE   0 0 0
cache
  c2t2d0p11   ONLINE   0 0 0
  c2t3d0p11   ONLINE   0 0 0
 
 The affected pool does indeed have a mix of straight disks and
 mirrored disks (due to running out of vdevs on the controller),
 however it has to be added that the performance of the affected pool
 was excellent until around 3 weeks ago, and there have been no
 structural changes nor to the pools neither to anything else on this
 server in the last half year or so.
 
 -jan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Very poor pool performance - no zfs/controller errors?!

2011-12-19 Thread Hung-Sheng Tsao (laoTsao)
not sure whether oi supports shadow migration;
or you may be able to send the zpool to another server and then send it back, to defragment it
regards

Sent from my iPad

On Dec 19, 2011, at 8:15, Gary Mills gary_mi...@fastmail.fm wrote:

 On Mon, Dec 19, 2011 at 11:58:57AM +, Jan-Aage Frydenbø-Bruvoll wrote:
 
 2011/12/19 Hung-Sheng Tsao (laoTsao) laot...@gmail.com:
 did you run a scrub?
 
 Yes, as part of the previous drive failure. Nothing reported there.
 
 Now, interestingly - I deleted two of the oldest snapshots yesterday,
 and guess what - the performance went back to normal for a while. Now
 it is severely dropping again - after a good while on 1.5-2GB/s I am
 again seeing write performance in the 1-10MB/s range.
 
 That behavior is a symptom of fragmentation.  Writes slow down
 dramatically when there are no contiguous blocks available.  Deleting
 a snapshot provides some of these, but only temporarily.
 
 -- 
 -Gary Mills--refurb--Winnipeg, Manitoba, Canada-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SATA hardware advice

2011-12-16 Thread Hung-Sheng Tsao (laoTsao)
imho, if possible pick sas 7200rpm hdd,
no hw raid for ZFS,
mirrors, with a ZIL device and a good amount of memory


Sent from my iPad

On Dec 16, 2011, at 17:36, t...@ownmail.net wrote:

 I could use some help with choosing hardware for a storage server. For
 budgetary and density reasons, we had settled on LFF SATA drives in the
 storage server. I had closed in on models from HP (DL180 G6) and IBM
 (x3630 M3), before discovering warnings against connecting SATA drives
 with SAS expanders.
 
 So I'd like to ask what's the safest way to manage SATA drives. We're
 looking for a 12 (ideally 14) LFF server, 2-3U, similar to the above
 models. The HP and IBM models both come with SAS expanders built into
 their backplanes. My questions are:
 
 1. Kludginess aside, can we build a dependable SMB server using
 integrated HP or IBM expanders plus the workaround
 (allow-bus-device-reset=0) presented here: 
 http://gdamore.blogspot.com/2010/12/update-on-sata-expanders.html ?
 
 2. Would it be better to find a SATA card with lots of ports, and make
 1:1 connections? I found some cards (arc-128, Adaptec 2820SA) w/Solaris
 support, for example, but I don't know how reliable they are or whether
 they support a clean JBOD mode.
 
 3. Assuming native SATA is the way to go, where should we look for
 hardware? I'd like the IBM & HP options because of the LOM & warranty,
 but I wouldn't think the hot-swap backplane offers any way to bypass the
 SAS expanders (correct me if I'm wrong here!). I found this JBOD:
 http://www.pc-pitstop.com/sata_enclosures/sat122urd.asp  I also know
 about SuperMicro. Are there any other vendors or models worth
 considering?
 
 Thanks!
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Recovery: What do I try next?

2011-11-05 Thread LaoTsao
Did you try
zpool clear -F bank0 with the latest Solaris Express?

Sent from my iPad

On Nov 5, 2011, at 2:35 PM, Myers Carpenter my...@maski.org wrote:

 I would like to pick the brains of the ZFS experts on this list: What
 would you do next to try and recover this zfs pool?
 
 I have a ZFS RAIDZ1 pool named bank0 that I cannot import.  It was
 composed of 4 1.5 TiB disks.  One disk is totally dead.  Another had
 SMART errors, but using GNU ddrescue I was able to copy all the data
 off successfully.
 
 I have copied all 3 remaining disks as images using dd on to another
 another filesystem.  Using the loopback filesystem I can treat these
 images as if they were real disks.  I've made a snapshot of the
 filesystem the disk images are on so that I can try things and
 rollback the changes if needed.
 
 gir is the computer these disks are hosted on.  It used to be a
 Nexenta server, but is now Ubuntu 11.10 with the zfs on linux modules.
 
 I have tried booting up Solaris Express 11 Live CD and doing zpool
 import -fFX bank0 which ran for ~6 hours and put out: one or more
 devices is currently unavailable
 
 I have tried zpool import -fFX bank0 on linux with the same results.
 
 I have tried moving the drives back into the controller config they
 where before, and booted my old Nexenta root disk where the
 /etc/zfs/zpool.cache still had an entry for bank0.  I was not able to
 get the filesystems mounts. I can't remember what errors I got.  I can
 do it again if the errors might be useful.
 
 Here is the output of the different utils:
 
 root@gir:/bank3/hd# zpool import -d devs
  pool: bank0
id: 3936305481264476979
 state: FAULTED
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
 config:
 
bank0  FAULTED  corrupted data
  raidz1-0 DEGRADED
loop0  ONLINE
loop1  ONLINE
loop2  ONLINE
c10t2d0p0  UNAVAIL
 
 
 root@gir:/bank3/hd# zpool import -d devs bank0
 cannot import 'bank0': pool may be in use from other system, it was
 last accessed by gir (hostid: 0xa1767) on Mon Oct 24 15:50:23 2011
 use '-f' to import anyway
 
 
 root@gir:/bank3/hd# zpool import -f -d devs bank0
 cannot import 'bank0': I/O error
Destroy and re-create the pool from
a backup source.
 
 root@gir:/bank3/hd# zdb -e -p devs bank0
 Configuration for import:
vdev_children: 1
version: 26
pool_guid: 3936305481264476979
name: 'bank0'
state: 0
hostid: 661351
hostname: 'gir'
vdev_tree:
type: 'root'
id: 0
guid: 3936305481264476979
children[0]:
type: 'raidz'
id: 0
guid: 10967243523656644777
nparity: 1
metaslab_array: 23
metaslab_shift: 35
ashift: 9
asize: 6001161928704
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 13554115250875315903
phys_path: '/pci@0,0/pci1002,4391@11/disk@3,0:q'
whole_disk: 0
DTL: 57
create_txg: 4
path: '/bank3/hd/devs/loop0'
children[1]:
type: 'disk'
id: 1
guid: 17894226827518944093
phys_path: '/pci@0,0/pci1002,4391@11/disk@0,0:q'
whole_disk: 0
DTL: 62
create_txg: 4
path: '/bank3/hd/devs/loop1'
children[2]:
type: 'disk'
id: 2
guid: 9087312107742869669
phys_path: '/pci@0,0/pci1002,4391@11/disk@1,0:q'
whole_disk: 0
DTL: 61
create_txg: 4
faulted: 1
aux_state: 'err_exceeded'
path: '/bank3/hd/devs/loop2'
children[3]:
type: 'disk'
id: 3
guid: 13297176051223822304
path: '/dev/dsk/c10t2d0p0'
devid:
 'id1,sd@SATA_ST31500341AS9VS32K25/q'
phys_path: '/pci@0,0/pci1002,4391@11/disk@2,0:q'
whole_disk: 0
DTL: 60
create_txg: 4
 
 zdb: can't open 'bank0': No such file or directory
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] zfs scripts

2011-09-10 Thread LaoTsao
imho, there is no harm to use & in both cmds
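for reference, a hedged variant of the script quoted below that backgrounds both sends and waits for both to finish:

  #!/bin/sh
  zfs send pool/filesystem1@100911 > /backup/filesystem1.snap &
  zfs send pool/filesystem2@100911 > /backup/filesystem2.snap &
  wait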

Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D

On Sep 10, 2011, at 4:59, Toby Thain t...@telegraphics.com.au wrote:

 On 09/09/11 6:33 AM, Sriram Narayanan wrote:
 Plus, you'll need an & character at the end of each command.
 
 
 Only one of the commands needs to be backgrounded.
 
 --Toby
 
 -- Sriram
 
 On 9/9/11, Tomas Forsman st...@acc.umu.se wrote:
 On 09 September, 2011 - cephas maposah sent me these 0,4K bytes:
 
 i am trying to come up with a script that incorporates other scripts.
 
 eg
 zfs send pool/filesystem1@100911 > /backup/filesystem1.snap
 zfs send pool/filesystem2@100911 > /backup/filesystem2.snap
 
 #!/bin/sh
 zfs send pool/filesystem1@100911 > /backup/filesystem1.snap &
 zfs send pool/filesystem2@100911 > /backup/filesystem2.snap
 
 ..?
 
 i need to incorporate these 2 into a single script with both commands
 running concurrently.
 
 /Tomas
 --
 Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
 |- Student at Computing Science, University of Umeå
 `- Sysadmin at {cs,acc}.umu.se
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Does the zpool cache file affect import?

2011-08-29 Thread LaoTsao
Q:
do you intend to import this zpool on a different host?


Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D

On Aug 29, 2011, at 14:13, Gary Mills mi...@cc.umanitoba.ca wrote:

 I have a system with ZFS root that imports another zpool from a start
 method.  It uses a separate cache file for this zpool, like this:
 
if [ -f $CCACHE ]
then
echo Importing $CPOOL with cache $CCACHE
zpool import -o cachefile=$CCACHE -c $CCACHE $CPOOL
else
echo Importing $CPOOL with device scan
zpool import -o cachefile=$CCACHE $CPOOL
fi
 
 It also exports that zpool from the stop method, which has the side
 effect of deleting the cache.  This all works nicely when the server
 is rebooted.
 
 What will happen when the server is halted without running the stop
 method, so that that zpool is not exported?  I know that there is a
 flag in the zpool that indicates when it's been exported cleanly.  The
 cache file will exist when the server reboots.  Will the import fail
 with the `The pool was last accessed by another system.' error, or
 will the import succeed?  Does the cache change the import behavior?
 Does it recognize that the server is the same system?  I don't want
 to include the `-f' flag in the commands above when it's not needed.
 
 -- 
 -Gary Mills--Unix Group--Computer and Network Services-
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS raidz on top of hardware raid0

2011-08-15 Thread LaoTsao
imho, not a good idea: if any two hdd (in different raid0 groups) fail, the zpool is dead
if possible just do one-hdd raid0 volumes and then use zfs to do the mirroring;
raidz or raidz2 would be the last choice

Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D

On Aug 12, 2011, at 21:34, Tom Tang thomps...@supermicro.com wrote:

 Suppose I want to build a 100-drive storage system. Are there any 
 disadvantages if I set up 20 arrays of HW RAID0 (5 drives each), then 
 set up a ZFS file system on these 20 virtual drives and configure them as RAIDZ?
 
 I understand people always say ZFS doesn't prefer HW RAID.  In this case, 
 the HW RAID0 is only for striping (it allows a higher data transfer rate), while 
 the actual RAID5 (i.e. RAIDZ) is done via ZFS, which takes care of all the 
 checksum/error detection/auto-repair.  I guess this will not affect any 
 advantages of using ZFS, while I could get a higher data transfer rate.  
 Wondering if that's the case?  
 
 Any suggestion or comment?  Please kindly advise.  Thanks!
 -- 
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread LaoTsao
IIRC, if you use two HDDs you can import the zpool.
Can you try the import -R with only two HDDs attached at a time?

Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D

On Aug 15, 2011, at 13:42, Stu Whitefish swhitef...@yahoo.com wrote:

 Unfortunately this panics the same exact way. Thanks for the suggestion 
 though.
 
 
 
 - Original Message -
 From: Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. laot...@gmail.com
 To: zfs-discuss@opensolaris.org
 Cc: 
 Sent: Monday, August 15, 2011 3:06:20 PM
 Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
 inaccessible!
 
 may be try the following
 1)boot s10u8 cd into single user mode (when boot cdrom, choose Solaris 
 then choose single user mode(6))
 2)when ask to mount rpool just say no
 3)mkdir /tmp/mnt1 /tmp/mnt2
 4)zpool  import -f -R /tmp/mnt1 tank
 5)zpool import -f -R /tmp/mnt2 rpool
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk IDs and DD

2011-08-09 Thread LaoTsao
Nothing to worry about.
As for dd, you need a slice (s0, s2, ...) appended to the device name, e.g. c8t0d0s0, not just c8t0d0.
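For example, using one of the disks from your listing (s0 is an assumption; any slice
that exists on the disk, or p0 for the whole disk, will do):

    # stream reads through the raw device so the activity LED stays busy
    dd if=/dev/rdsk/c9t7d0s0 of=/dev/null bs=1024k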

Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D

On Aug 9, 2011, at 4:51, Lanky Doodle lanky_doo...@hotmail.com wrote:

 Hiya,
 
 Is there any reason (and anything to worry about) if disk target IDs don't 
 start at 0 (zero). For some reason mine are like this (3 controllers - 1 
 onboard and 2 PCIe);
 
 AVAILABLE DISK SELECTIONS:
   0. c8t0d0 ATA-ST9160314AS-SDM1 cyl 19454 alt 2 hd 255 sec 63
  /pci@0,0/pci10de,cb84@5/disk@0,0
   1. c8t1d0 ATA-ST9160314AS-SDM1 cyl 19454 alt 2 hd 255 sec 63
  /pci@0,0/pci10de,cb84@5/disk@1,0
   2. c9t7d0 ATA-HitachiHDS72302-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@7,0
   3. c9t8d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@8,0
   4. c9t9d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@9,0
   5. c9t10d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@a,0
   6. c9t11d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@b,0
   7. c9t12d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@c,0
   8. c9t13d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@d,0
   9. c9t14d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,376@a/pci1000,3140@0/sd@e,0
  10. c10t8d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@8,0
  11. c10t9d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@9,0
  12. c10t10d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 
 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@a,0
  13. c10t11d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 
 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@b,0
  14. c10t12d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 
 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@c,0
  15. c10t13d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 
 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@d,0
  16. c10t14d0 ATA-Hitachi HDS7230-A5C0 cyl 60798 alt 2 hd 255 sec 
 252
  /pci@0,0/pci10de,377@f/pci1000,3140@0/sd@e,0
 
 So apart from the onboard controller, the tx (where x is the number) doesn't 
 start at 0.
 
 Also, I am trying to make disk LEDs blink by using dd so I can match up disks 
 in Solaris to the physical slot, but I can't work out the right command;
 
 admin@ok-server01:~# dd if=/dev/dsk/c9t7d0 of=/dev/null
 dd: /dev/dsk/c9t7d0: open: No such file or directory
 
 admin@ok-server01:~# dd if=/dev/rdsk/c9t7d0 of=/dev/null
 dd: /dev/rdsk/c9t7d0: open: No such file or directory
 
 Thanks
 -- 
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mirrored rpool

2011-08-08 Thread LaoTsao
http://download.oracle.com/docs/cd/E19963-01/html/821-1448/gjtuk.html#gjtui
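For what it's worth, that procedure boils down to roughly the following; the device
names are placeholders, and the second disk needs the same SMI label/slice layout as the first:

    # copy the partition table from the current root disk to the new disk
    prtvtoc /dev/rdsk/c7t0d0s2 | fmthard -s - /dev/rdsk/c7t1d0s2
    # attach the new slice as a mirror of the existing root slice
    zpool attach rpool c7t0d0s0 c7t1d0s0
    # after resilvering completes, make the new disk bootable (x86)
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7t1d0s0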


Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D

On Aug 8, 2011, at 5:15, Lanky Doodle lanky_doo...@hotmail.com wrote:

 Hiya,
 
 I am using S11E Live CD to install. The install wouldn't let me select 2 
 disks for a mirrored rpool so I done this post-install using this guide;
 
 http://darkstar-solaris.blogspot.com/2008/09/zfs-root-mirror.html
 
 Before I go ahead and continue building my server (zpools) I want to make 
 sure the above guide is correct for S11E?
 
 The mirrored rpool seems to look OK but want to make sure there's nothing 
 else to do.
 
 Thanks
 -- 
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Exapnd ZFS storage.

2011-08-03 Thread LaoTsao
If you had 4 more HDDs, expanding would be simple: you could add a second 4-disk raidz vdev alongside the existing 4-disk raidz.
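A sketch with placeholder device names; note this adds a second top-level raidz vdev
rather than growing the existing one:

    # existing pool: tank = raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0
    # with four more disks you would simply add another raidz vdev:
    zpool add tank raidz c0t4d0 c0t5d0 c0t6d0 c0t7d0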

Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D

On Aug 3, 2011, at 3:02, Nix mithun.gaik...@gmail.com wrote:

 Hi,
 
 I have 4 disk with 1 TB of disk and I want to expand the zfs pool size.
 
 I have 2 more disk with 1 TB of size.
 
 Is it possible to expand the current RAIDz array with new disk?
 
 Thanks,
 Nix
 -- 
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] L2ARC and poor read performance

2011-06-07 Thread LaoTsao
You have an unbalanced setup:
4Gbps FC vs a 10Gbps NIC.
After 8b/10b encoding it is even worse, but this does not yet impact your benchmark.

Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D

On Jun 7, 2011, at 5:46 PM, Phil Harman phil.har...@gmail.com wrote:

 On 07/06/2011 20:34, Marty Scholes wrote:
 I'll throw out some (possibly bad) ideas.
 
 Thanks for taking the time.
 
 Is ARC satisfying the caching needs?  32 GB for ARC should almost cover the 
 40GB of total reads, suggesting that the L2ARC doesn't add any value for 
 this test.
 
 Are the SSD devices saturated from an I/O standpoint?  Put another way, can 
 ZFS put data to them fast enough?  If they aren't taking writes fast enough, 
 then maybe they can't effectively load for caching.  Certainly if they are 
 saturated for writes they can't do much for reads.
 
 The SSDs are barely ticking over, and can deliver almost as much throughput 
 as the current SAN storage.
 
 Are some of the reads sequential?  Sequential reads don't go to L2ARC.
 
 That'll be it. I assume the L2ARC is just taking metadata. In situations such 
 as mine, I would quite like the option of routing sequential read data to the 
 L2ARC also.
 
 I do notice a benefit with a sequential update (i.e. COW for each block), and 
 I think this is because the L2ARC satisfies most of the metadata reads 
 instead of having to read them from the SAN.
 
 What does iostat say for the SSD units?  What does arc_summary.pl (maybe 
 spelled differently) say about the ARC / L2ARC usage?  How much of the SSD 
 units are in use as reported in zpool iostat -v?
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS issues and the choice of platform

2011-05-26 Thread LaoTsao
Any support contract is worth something.
In your case you will need:
1. a server contract
2. an array contract (or coverage as part of the server)
3. Solaris / Solaris Express support

There is no free lunch: if you want support you will need to pay $, or your xxx
is on the line.
My 2c


Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D

On May 26, 2011, at 8:20 AM, Edward Ned Harvey 
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:

 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Daniel Carosone
 
 On Wed, May 25, 2011 at 10:59:19PM +0200, Roy Sigurd Karlsbakk wrote:
 The systems where we have had issues, are two 100TB boxes, with some
 160TB raw storage each, so licensing this with nexentastor will be
 rather expensive. What would you suggest? Will a solaris express
 install give us good support when the shit hits the fan?
 
 No more so than what you have now, without a support contract.
 
 Are you suggesting that support contracts on sol11exp are useless?  Maybe I
 should go tell my boss to cancel ours...  *sic*
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread LaoTsao
Well,
with the various forks of the open-source projects,
e.g. ZFS, OpenSolaris, OpenIndiana, etc., they are all different.
There is no guarantee they will remain compatible.

Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D

On May 24, 2011, at 4:40 PM, Ian Collins i...@ianshome.com wrote:

 On 05/25/11 07:49 AM, Brandon High wrote:
 On Tue, May 24, 2011 at 12:41 PM, Richard Elling
 richard.ell...@gmail.com  wrote:
 There are many ZFS implementations, each evolving as the contributors 
 desire.
 Diversity and innovation is a good thing.
 ... unless Oracle's zpool v30 is different than Nexenta's v30.
 
 That could be a disaster for everyone if they are incompatible.
 
 Now with Oracle development in secret, I guess incompatible branches of ZFS 
 are inevitable.
 
 -- 
 Ian.
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] GPU acceleration of ZFS

2011-05-10 Thread Hung-Sheng Tsao (LaoTsao) Ph. D.


IMHO, ZFS needs to run on all kinds of HW.
The T-series CMT servers have had on-chip crypto that can help SHA calculation since the T1 days, yet I did not
see any work in ZFS to take advantage of it.



On 5/10/2011 11:29 AM, Anatoly wrote:

Good day,

I think ZFS can take advantage of using GPU for sha256 calculation, 
encryption and maybe compression. Modern video card, like 5xxx or 6xxx 
ATI HD Series can do calculation of sha256 50-100 times faster than 
modern 4 cores CPU.


kgpu project for linux shows nice results.

'zfs scrub' would work freely on high performance ZFS pools.

The only problem that there is no AMD/Nvidia drivers for Solaris that 
support hardware-assisted OpenCL.


Is anyone interested in it?

Best regards,
Anatoly Legkodymov.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Proper procedure when device names have changed

2010-09-13 Thread LaoTsao 老曹


Try exporting and re-importing the zpool.
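A sketch, assuming the pool is named tank; importing by ID keeps the pool immune to
future sdX renumbering:

    zpool export tank
    zpool import -d /dev/disk/by-id tank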

On 9/13/2010 1:26 PM, Brian wrote:

I am running zfs-fuse on an Ubuntu 10.04 box.  I have a dual mirrored pool:
mirror sdd sde mirror sdf sdg

Recently the device names shifted on my box and the devices are now sdc sdd sde and sdf.  
The pool is of course very unhappy about the mirrors are no longer matched up and one 
device is missing.  What is the proper procedure to deal with this?

-brian
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-07 Thread LaoTsao 老曹
 Maybe 5 x (3+1) raidz vdevs, using one disk from each controller in every vdev: about 15TB usable space,
and the rebuild time for a 3+1 raidz should be reasonable.
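Something like the following, with made-up device names (c1..c4 standing for the four
port multipliers/controllers, one disk from each per vdev):

    zpool create tank \
        raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 \
        raidz c1t1d0 c2t1d0 c3t1d0 c4t1d0 \
        raidz c1t2d0 c2t2d0 c3t2d0 c4t2d0 \
        raidz c1t3d0 c2t3d0 c3t3d0 c4t3d0 \
        raidz c1t4d0 c2t4d0 c3t4d0 c4t4d0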



On 9/7/2010 4:40 AM, hatish wrote:

Thanks for all the replies :)

My mindset is split in two now...

Some detail - I'm using 4 1-to-5 Sata Port multipliers connected to a 4-port 
SATA raid card.

I only need reliability and size, as long as my performance is the equivalent 
of one drive, Im happy.

Im assuming all the data used in the group is read once when re-creating a lost 
drive. Also assuming space consumed is 50%.

So option 1 - Stay with the 2 x 10drive RaidZ2. My concern is the stress on the 
drives when one drive fails and the others go crazy (read-wise) to re-create 
the new drive. Is there no way to reduce this stress? Maybe limit the data 
rate, so its not quite so stressful, even though it will end up taking longer? 
(quite acceptable)
[Available Space: 16TB, Redundancy Space: 4TB, Repair data read: 4.5TB]

And option 2 - Add a 21st drive to one of the motherboard sata ports. And then 
go with 3 x 7drive RaidZ2. [Available Space: 15TB, Redundancy Space: 6TB, 
Repair data read: 3TB]

Sadly, SSD's wont go too well in a PM based setup like mine. I may add it 
directly onto the MB if I can afford it. But again, performance is not a 
prioity.

Any further thoughts and ideas are much appreciated.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ufs root to zfs root liveupgrade?

2010-08-29 Thread LaoTsao 老曹


Hi,
my user error; it is fine now.
I may have used *reboot* and not init 6.

So far I have tried the following:
1) with two HDDs, UFS root live-upgrade to ZFS root works
2) with one HDD but a different slice, UFS root live-upgrade to ZFS root works
GRUB does provide the choice of ufsroot and zfsroot.
*init 6* seems to be very important, so that menu.lst is updated before the reboot.
regards


On 8/28/2010 5:17 PM, Ian Collins wrote:

On 08/28/10 11:39 PM, LaoTsao 老曹 wrote:

 hi all
Try to learn how UFS root to ZFS root  liveUG work.

I download the vbox image of s10u8, it come up as UFS root.
add a new  disks (16GB)
create zpool rpool
run lucreate -n zfsroot -p rpool
run luactivate zfsroot
run lustatus it do show zfsroot will be active in next boot
init 6
but it come up with UFS root,
lustatus show ufsroot active
zpool rpool is mounted but not used by boot


As Casper said, you have to change boot drive.

The easiest way to migrate to ZFS is to use a spare slice on the 
original drive for the new pool You can then mirror that off to 
another drive.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ufs root to zfs root liveupgrade?

2010-08-28 Thread LaoTsao 老曹

 Hi all,
I am trying to learn how a UFS root to ZFS root live upgrade (liveUG) works.

I downloaded the vbox image of s10u8; it comes up with a UFS root. Then I:
add a new disk (16GB)
create zpool rpool
run lucreate -n zfsroot -p rpool
run luactivate zfsroot
run lustatus, which does show zfsroot will be active on the next boot
init 6
but it comes up with the UFS root:
lustatus shows ufsroot active,
and zpool rpool is mounted but not used by boot.

Is this a known bug? I do not have access to SunSolve right now.
regards


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ufs root to zfs root liveupgrade?

2010-08-28 Thread LaoTsao 老曹

 Thanks.
I tried to detach the old UFS disk and boot from the new zfsroot;
it failed.
I reattached the ufsroot disk and found that in
rpool/boot/grub/menu.lst

the entry is findroot (rootfs0,0,a) and not findroot (pool_rpool,0,a).
I am not sure what the correct findroot is here;
even with this change to findroot, trying to boot fails with cannot
find file.

regards


On 8/28/2010 7:47 AM, casper@sun.com wrote:



  hi all
Try to learn how UFS root to ZFS root  liveUG work.

I download the vbox image of s10u8, it come up as UFS root.
add a new  disks (16GB)
create zpool rpool
run lucreate -n zfsroot -p rpool
run luactivate zfsroot
run lustatus it do show zfsroot will be active in next boot
init 6
but it come up with UFS root,
lustatus show ufsroot active
zpool rpool is mounted but not used by boot


You'll need to boot from a different disk; I don't think that the
OS can change the boot disk (it can on SPARC but it can't on x86)

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS with SAN's and HA

2010-08-27 Thread LaoTsao 老曹



On 8/27/2010 12:25 AM, Michael Dodwell wrote:

Lao,

I had a look at the HAStoragePlus etc and from what i understand that's to 
mirror local storage across 2 nodes for services to be able to access 'DRBD 
style'.

Not true: HAStoragePlus (HAS+) uses shared storage.
In this case, since ZFS is not a clustered FS, it needs to be configured
as a failover FS; only one host can access the zpool at a time.

The pool needs to be exported and imported to fail over between hosts.
Oracle Solaris Cluster provides the cluster framework,
e.g. the private interconnect,
global DIDs,
and lets you set up NFS on top of a failover HAS+ resource (with ZFS), etc.
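Outside the cluster framework, the failover itself is just an export/import pair,
e.g. (pool name is a placeholder; HAS+ automates this sequence plus fencing and
dependency ordering):

    # on the node releasing the pool
    zpool export sanpool
    # on the node taking over
    zpool import sanpool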


Having a read thru the documentation on the oracle site the cluster software 
from what i gather is how to cluster services together (oracle/apache etc) and 
again any documentation i've found on storage is how to duplicate local storage 
to multiple hosts for HA failover. Can't really see anything on clustering 
services to use shared storage/zfs pools.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread LaoTsao 老曹


Hi,
maybe boot a LiveCD, then export and import the zpool?
regards

On 8/27/2010 8:27 AM, Rainer Orth wrote:

For quite some time I'm bitten by the fact that on my laptop (currently
running self-built snv_147) zpool status rpool and format disagree about
the device name of the root disk:

r...@masaya 14  zpool status rpool
   pool: rpool
  state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
 still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
 pool will no longer be accessible on older software versions.
  scan: none requested
config:

 NAMESTATE READ WRITE CKSUM
 rpool   ONLINE   0 0 0
   c1t0d0s3  ONLINE   0 0 0

errors: No known data errors

r...@masaya 3 # format -e
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c3t0d134583970drive type unknown
   /p...@0,0/pci8086,2...@1e/pci17aa,2...@0,2/blk...@0
1. c11t0d0ATA-ST9160821AS-Ccyl 19454 alt 2 hd 255 sec 63
   /p...@0,0/pci17aa,2...@1f,2/d...@0,0
Specify disk (enter its number):

zpool status thinks rpool is on c1t0d0s3, while format (and the kernel)
correctly believe it's c11t0d0(s3) instead.

This has the unfortunate consequence that beadm activatenewbe  fails
in a quite non-obvious way.

Running it under truss, I find that it invokes installgrub, which
fails.  The manual equivalent is

r...@masaya 266 # installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 
/dev/rdsk/c1t0d0s3
cannot read MBR on /dev/rdsk/c1t0d0p0
open: No such file or directory
r...@masaya 267 # installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 
/dev/rdsk/c11t0d0s3
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 273 sectors starting at 50 (abs 16115)

For the time being, I'm working around this by replacing installgrub by
a script, but obviously this shouldn't happen and the problem isn't easy
to find.

I thought I'd seen a zfs CR for this, but cannot find it right now,
especially with search on bugs.opensolaris.org being only partially
functional.

Any suggestions?

Thanks.
Rainer

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Networker Dedup @ ZFS

2010-08-26 Thread LaoTsao 老曹
 IMHO, if you use backup software that does the deduplication itself, then ZFS is
still a viable solution.
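That is, let the backup software deduplicate and keep the ZFS side simple; a sketch,
with made-up pool and dataset names:

    # compression usually helps backup streams; leave ZFS dedup off (the default)
    # and let the backup application deduplicate instead
    zfs create -o compression=on tank/backup-staging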



On 8/26/2010 6:13 PM, Sigbjørn Lie wrote:

Hi Daniel,

We we're looking into very much the same solution you've tested. 
Thanks for your advise. I think we will look for something else. :)


Just out of curiosity, what  ZFS tweaking did you do?  And what much 
pricier competitor solution did you end up with in the end?



Regards,
Sigbjorn



Daniel Whitener wrote:

Sigbjorn

Stop! Don't do it... it's a waste of time.  We tried exactly what
you're thinking of... we bought two Sun/Oracle 7000 series storage
units with 20TB of ZFS storage each planning to use them as a backup
target for Networker.  We ran into several issues eventually gave up
the ZFS networker combo.  We've used other storage devices in the past
(virtual tape libraries) that had deduplication.  We were used to
seeing dedup ratios better than 20x on our backup data.  The ZFS
filesystem only gave us 1.03x, and it had regular issues because it
couldn't do dedup for such large filesystems very easily.  We didn't
know it ahead of time, but VTL solutions use something called
variable length block dedup, whereas ZFS uses fixed block length
dedup. Like one of the other posters mentioned, things just don't line
up right and the dedup ratio suffers.  Yes, compression works to some
degree -- I think we got 2 or 3x on that, but it was a far cry from
the 20x that we were used to seeing on our old VTL.

We recently ditched the 7000 series boxes in favor of a much pricier
competitor.  It's about double the cost, but dedup ratios are better
than 20x.  Personally I love ZFS and I use it in many other places,
but we were very disappointed with the dedup ability for that type of
data.  We went to Sun with our problems and they ran it up the food
chain and word came back down from the developers that this was the
way it was designed, and it's not going to change anytime soon.  The
type of files that Networker writes out just are not friendly at all
with the dedup mechanism used in ZFS.  They gave us a few ideas and
things to tweak in Networker, but no measurable gains ever came from
any of the tweaks.

If are considering a home-grown ZFS solution for budget reasons, go
for it just do yourself a favor and save yourself the overhead of
trying to dedup.  When we disabled dedup on our 7000 series boxes,
everything worked great and compression was fine with next to no
overhead.  Unfortunately, we NEEDED at least a 10x ratio to keep the 3
week backups we were trying to do.  We couldn't even keep a 1 week
backup with the dedup performance of ZFS.

If you need more details, I'm happy to help.  We went through months
of pain trying to make it work and it just doesn't for Networker data.

best wishes
Daniel








2010/8/18 Sigbjorn Lie sigbj...@nixtra.com:

Hi,

We are considering using a ZFS based storage as a staging disk for 
Networker. We're aiming at
providing enough storage to be able to keep 3 months worth of 
backups on disk, before it's moved

to tape.

To provide storage for 3 months of backups, we want to utilize the 
dedup functionality in ZFS.


I've searched around for these topics and found no success stories, 
however those who has tried
did not mention if they had attempted to change the blocksize to any 
smaller than the default of

128k.

Does anyone have any experience with this kind of setup?


Regards,
Sigbjorn


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS with SAN's and HA

2010-08-26 Thread LaoTsao 老曹

 be very careful here!!

On 8/26/2010 9:16 PM, Michael Dodwell wrote:

Hey all,

I currently work for a company that has purchased a number of different SAN 
solutions (whatever was cheap at the time!) and i want to setup a HA ZFS file 
store over fiber channel.

Basically I've taken slices from each of the sans and added them to a ZFS pool 
on this box (which I'm calling a 'ZFS proxy'). I've then carved out LUN's from 
this pool and assigned them to other servers. I then have snapshots taken on 
each of the LUN's and replication off site for DR. This all works perfectly 
(backups for ESXi!)

However, I'd like to be able to a) expand and b) make it HA. All the 
documentation i can find on setting up a HA cluster for file stores replicates 
data from 2 servers and then serves from these computers (i trust the SAN's to 
take care of the data and don't want to replicate anything -- cost!). Basically 
all i want is for the node that serves the ZFS pool to be HA (if this was to be 
put into production we have around 128tb and are looking to expand to a pb). We 
have a couple of IBM SVC's that seem to handle the HA node setup in some 
obscure property IBM way so logically it seems possible.

Clients would only be making changes via a single 'zfs proxy' at a time 
(multi-pathing setup for fail over only) so i don't believe I'd need to OCFS 
the setup? If i do need to setup OCFS can i put ZFS on top of that? (want 
snap-shotting/rollback and replication to a off site location, as well as all 
the goodness of thin provisioning and de-duplication)

However when i import the ZFS pool onto the 2nd box i got large warnings about 
it being mounted elsewhere and i needed to force the import, then when 
importing the LUN's i saw that the GUUID was different so multi-pathing doesn't 
pick that the LUN's are the same? can i change a GUUID via smtfadm?
If you force the import and the zpool ends up mounted by two hosts, your zfs could
become corrupted!!!

Recovery is not easy.

Is any of this even possible over fiber channel?

Please at least take a look at the documentation for the Oracle Solaris Cluster software;
it details the way to use ZFS in a cluster environment:
http://docs.sun.com/app/docs/prod/sun.cluster32?l=en&a=view
zfs:
http://docs.sun.com/app/docs/doc/820-7359/gbspx?l=en&a=view




  Is anyone able to point me at some documentation? Am i simply crazy?

Any input would be most welcome.

Thanks in advance,
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cloud Storage

2010-08-25 Thread LaoTsao 老曹


IMHO, maybe take a look at the ZFS appliance (7000 series storage) from Oracle.
It provides a GUI for DTrace-based Analytics and web-GUI management.
It supports 1/2 PB now and will support much more in the near future:
 http://www.oracle.com/us/products/servers-storage/039224.pdf
It supports local clustering and remote replication, etc.,
and it supports 10GbE and IB, etc.
regards


On 8/25/2010 1:42 PM, J.P. King wrote:


This is slightly off topic, so I apologise in advance.

I'm investigating the option of offering private cloud storage.  
I've found many things which offer features that I want, but nothing 
that seems to glue them all together into a useful whole.  Thus I 
would like to pick your collective brains on the matter.  The reason 
for this mailing list is that the obvious solution to aspects of this 
is to use ZFS as the underlying filesystem, and this is the only 
storage mailing list I am subscribed to.  :-)


What I would like to achieve:

Large (by my standard) scale storage.  Lets say petabyte scale, 
although I'll start around 50-100TB.


Redundancy across machines of data.  This doesn't mean that I have to 
have synchronous mirroring or anything, but I don't want data stored 
in just one location.  I also don't require this happen at the block 
level.  I am quite happy for a system which has two copies of every 
file one two machines done at the application level.


An Amazon S3 style interface.  It doesn't have to be the same API as 
S3, but something which has the same sorts of features would be good.


Scalability.  My building blocks would be X4540's or similar.  I want 
to be able to add more of these and be able to manage the storage 
well.  I want the front end to hide that the data is half on machines 
Alpha and Aleph and half on machines Beta and Beth.


A means of managing all this storage.  I'll accept a web front end, 
but I'd rather have something scriptable with an API.


Does anyone have any thoughts, pointers, or suggestions.  If people 
have ideas that force me away from ZFS then I'm interested, although 
that does mean that this thread would drift more off topic.


Has anyone done anything like this?  Whether public cloud or private 
cloud?  In my head the model worked really well, but research didn't 
result in the solutions I thought I was going to find.


This is intended to be a service, so if it is too shonky then it won't 
meet my needs.


Oh, and free isn't a requirement, but it is definitely a bonus.

Thanks in advance,

Julian
--
Julian King
Computer Officer, University of Cambridge, Unix Support
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrink zpool

2010-08-25 Thread LaoTsao 老曹

 Not possible now.

On 8/25/2010 2:34 PM, Mike DeMarco wrote:

Is it currently or near future possible to shrink a zpool remove a disk
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] slog and TRIM support

2010-08-25 Thread LaoTsao 老曹

 IMHO, you want the X25-E for the ZIL (slog) and the X25-M for the L2ARC.


On 8/25/2010 2:44 PM, Karl Rossing wrote:
I'm trying to pick between an Intel X25-M or Intel X25-E for a slog 
device.


At some point in the future, TRIM support will become available 
http://mail.opensolaris.org/pipermail/onnv-notify/2010-July/012674.html. 
The X25-M support TRIM while X25-E don't support trim.


Does TRIM support mater when selecting slog drives?

Thanks
Karl













CONFIDENTIALITY NOTICE: This communication (including all attachments) 
is confidential and is intended for the use of the named addressee(s) 
only and may contain information that is private, confidential, 
privileged, and exempt from disclosure under law. All rights to 
privilege are expressly claimed and reserved and are not waived. Any 
use, dissemination, distribution, copying or disclosure of this 
message and any attachments, in whole or in part, by anyone other than 
the intended recipient(s) is strictly prohibited. If you have received 
this communication in error, please notify the sender immediately, 
delete this communication from all data storage devices and destroy 
all hard copies.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Storage server hardwae

2010-08-25 Thread LaoTsao

SAS-2 with 7200 rpm SAS 1TB or 2TB HDDs.

--- Original message ---

From: Dr. Martin Mundschenk m.mundsch...@me.com
To: zfs-discuss@opensolaris.org
Sent: 25.8.'10,  15:29

Hi!

I'm running a OSOL box for quite a while and I think ZFS is an amazing 
filesystem. As a computer I use a Apple MacMini with USB and FireWire 
devices attached. Unfortunately the USB and sometimes the FW devices just 
die, causing the whole system to stall, forcing me to do a hard reboot.


I had the worst experience with an USB-SATA bridge running an Oxford 
chipset, in a way that the four external devices stalled randomly within a 
day or so. I switched to a four slot raid box, also with USB bridge, but 
with better reliability.


Well, I wonder what are the components to build a stable system without 
having an enterprise solution: eSATA, USB, FireWire, FibreChannel?


Martin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (preview) Whitepaper - ZFS Pools Explained - feedback welcome

2010-08-25 Thread LaoTsao 老曹

 dtrace is DTrace

On 8/25/2010 3:27 PM, F. Wessels wrote:

Although it's bit much Nexenta oriented, command wise. It's a nice introduction. I did found one 
thing, page 28 about the zil. There's no zil device, the zil can be written to an optional slog 
device. And the last line first paragraph, If you can, use memory based SSD devices. At 
least change memory into dram, flash is also memory. Perhaps even better is If you can, use a 
non volatile dram based device.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] slog and TRIM support [SEC=UNCLASSIFIED]

2010-08-25 Thread LaoTsao 老曹


The -M has larger capacity, and the L2ARC is mostly reads with not much write traffic;
you also need main memory for the ARC itself.

The L2ARC should be sized to your working dataset.
The ZIL is mostly writes, so you want an SSD with better write performance and longer life,
and the ZIL (slog) need be no larger than about 1/2 of physical memory.
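In zpool terms the two roles are added separately; a sketch with placeholder device names:

    zpool add tank log c2t0d0     # slog (ZIL): write-optimized SSD, e.g. X25-E class
    zpool add tank cache c2t1d0   # L2ARC: larger read-optimized SSD, e.g. X25-M class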

regards


On 8/25/2010 9:18 PM, Wilkinson, Alex wrote:

 0n Wed, Aug 25, 2010 at 02:54:42PM -0400, LaoTsao 老曹 wrote:

 IMHO, U want -E for ZIL and -M for L2ARC

Why ?

-Alex

IMPORTANT: This email remains the property of the Department of Defence and is 
subject to the jurisdiction of section 70 of the Crimes Act 1914. If you have 
received this email in error, you are requested to contact the sender and 
delete the email.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss