Re: [zfs-discuss] DDT sync?

2011-05-31 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey > So here's what I'm going to do. With arc_meta_limit at 7680M, of which 100M was consumed "naturally," that leaves me 7580M to play with. Call it 7500M. > Divide by 41
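
A minimal sketch of how those numbers can be obtained on a Solaris-derived system (the pool name "tank" is hypothetical; the per-entry divisor comes from the thread's own measurements, not from these commands):

  # Current ARC metadata usage and limit, in bytes
  $ echo "::arc" | mdb -k | grep meta

  # DDT histogram plus average in-core/on-disk bytes per dedup entry
  $ zdb -DD tank

Dividing the free metadata budget by the in-core bytes per DDT entry gives roughly how many dedup-table entries fit in ARC, which is the calculation the post is doing.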

Re: [zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Fajar A. Nugraha
On Wed, Jun 1, 2011 at 7:06 AM, Bill Sommerfeld wrote: > On 05/31/11 09:01, Anonymous wrote: >> Hi. I have a development system on Intel commodity hardware with a 500G ZFS root mirror. I have another 500G drive same as the other two. Is there any way to use this disk to good advantage in thi

Re: [zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Daniel Carosone
On Wed, Jun 01, 2011 at 05:45:14AM +0400, Jim Klimov wrote: > Also, in a mirroring scenario, is there any good reason to keep a warm spare instead of making a three-way mirror right away (besides energy saving)? Rebuild times and non-redundant windows can be decreased considerably ;) Perhaps wh
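
For reference, turning the warm spare into a third mirror side is a single attach; a sketch with hypothetical device names:

  # Attach a third disk to the existing two-way mirror; resilver starts at once
  $ zpool attach rpool c0t0d0s0 c0t2d0s0
  $ zpool status rpool      # watch resilver progress

Detaching it again (zpool detach rpool c0t2d0s0) returns it to spare duty.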

Re: [zfs-discuss] JBOD recommendation for ZFS usage

2011-05-31 Thread Rocky Shek
Thomas, You can consider the DataON DNS-1600 (4U, 24-bay 3.5", 6Gb/s SAS JBOD). It is a good fit for ZFS storage as an alternative to the J4400. http://dataonstorage.com/dns-1600 And we recommend using native SAS drives like the Seagate Constellation ES 2TB SAS to connect two hosts for a fail-over cluster

Re: [zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Jim Klimov
> If it is powered on, then it is a warm spare :-) > Warm spares are a good idea. For some platforms, you can spin down the disk so it doesn't waste energy. But I should note that we've had issues with a hot spare disk added to rpool in particular, preventing boots on Solaris 10u8. It turn
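
If a spare on rpool is suspected of blocking boot, it can be taken out without touching the mirror itself; a sketch, device name hypothetical:

  # Spares appear in their own section of the status output
  $ zpool status rpool

  # Removing a hot spare does not detach any mirror member
  $ zpool remove rpool c0t2d0s0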

Re: [zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Richard Elling
On May 31, 2011, at 5:16 PM, Daniel Carosone wrote: > Namely, leave the third drive on the shelf as a cold spare, and use the third SATA connector for an SSD, as L2ARC, ZIL or even possibly both (which will affect selection of which device to use). If it is powered on, then it is a warm spare
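
For the SSD option, cache and log devices are added per pool, and one SSD can be sliced to serve as both; a sketch with hypothetical device names:

  # Whole SSD as L2ARC (read cache)
  $ zpool add tank cache c0t2d0

  # Or split it: one slice as a log device (ZIL), another as cache
  $ zpool add tank log c0t2d0s0
  $ zpool add tank cache c0t2d0s1

A lost cache device is harmless; a lost non-mirrored log device can cost recent synchronous writes (and worse on older pool versions), which bears on the "selection of which device to use" point above.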

Re: [zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Jim Klimov
> What about writes? Writes to a mirror are no faster than the slowest disk: every drive in the mirror (two or three of them) must commit a block before it is considered written (in sync write mode), and likewise for TXG sync, though with some optimization from caching and write-coalescing. //Jim

Re: [zfs-discuss] Ensure Newly created pool is imported automatically in new BE

2011-05-31 Thread Daniel Carosone
On Tue, May 31, 2011 at 05:32:47PM +0100, Matt Keenan wrote: > Jim, thanks for the response, I've nearly got it working, coming up against a hostid issue. > Here are the steps I'm going through: > - At the end of auto-install, on the client just installed, before I manually reboot I do th

Re: [zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Daniel Carosone
On Wed, Jun 01, 2011 at 10:16:28AM +1000, Daniel Carosone wrote: > On Tue, May 31, 2011 at 06:57:53PM -0400, Edward Ned Harvey wrote: > > If you make it a 3-way mirror, your write performance will be unaffected, but your read performance will increase 50% over a 2-way mirror. All 3 drives

Re: [zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Daniel Carosone
On Tue, May 31, 2011 at 06:57:53PM -0400, Edward Ned Harvey wrote: > If you make it a 3-way mirror, your write performance will be unaffected, but your read performance will increase 50% over a 2-way mirror. All 3 drives can read different data simultaneously for the net effect of 3x a singl

Re: [zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Bill Sommerfeld
On 05/31/11 09:01, Anonymous wrote: > Hi. I have a development system on Intel commodity hardware with a 500G ZFS root mirror. I have another 500G drive same as the other two. Is there any way to use this disk to good advantage in this box? I don't think I need any more redundancy, I would li

Re: [zfs-discuss] Is another drive worth anything?

2011-05-31 Thread David Magda
On May 31, 2011, at 19:00, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk >> Theoretically, you'll get a 50% read increase, but I doubt it'll be that high in practice. What about

Re: [zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Bob Friesenhahn
On Tue, 31 May 2011, Edward Ned Harvey wrote: If you make it a 3-way mirror, your write performance will be unaffected, but your read performance will increase 50% over a 2-way mirror. All 3 drives can read different data simultaneously for the net effect of 3x a single disk read performance.

Re: [zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk > Theoretically, you'll get a 50% read increase, but I doubt it'll be that high in practice. In my benchmarking, I found 2-way mirror reads 1.97x the speed of a sing
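
A rough way to reproduce this kind of measurement (paths hypothetical; export/import the pool or reboot first so the reads come from disk rather than ARC):

  # Two large files, read concurrently; with N mirror sides,
  # N parallel readers should approach N x single-disk throughput
  $ dd if=/dev/urandom of=/tank/f1 bs=1024k count=4096
  $ dd if=/dev/urandom of=/tank/f2 bs=1024k count=4096
  $ time sh -c 'dd if=/tank/f1 of=/dev/null bs=1024k & dd if=/tank/f2 of=/dev/null bs=1024k & wait'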

Re: [zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Anonymous > Hi. I have a development system on Intel commodity hardware with a 500G ZFS root mirror. I have another 500G drive same as the other two. Is there any way to use this disk t

Re: [zfs-discuss] Experiences with 10.000+ filesystems

2011-05-31 Thread Tomas Ögren
On 31 May, 2011 - Gertjan Oude Lohuis sent me these 0,9K bytes: > On 05/31/2011 03:52 PM, Tomas Ögren wrote: >> I've done a not too scientific test on reboot times for Solaris 10 vs 11 with regard to many filesystems... >> http://www8.cs.umu.se/~stric/tmp/zfs-many.png >> As the picture

Re: [zfs-discuss] Experiences with 10.000+ filesystems

2011-05-31 Thread Richard Elling
On May 31, 2011, at 2:29 PM, Gertjan Oude Lohuis wrote: > On 05/31/2011 03:52 PM, Tomas Ögren wrote: >> I've done a not too scientific test on reboot times for Solaris 10 vs 11 with regard to many filesystems... >> http://www8.cs.umu.se/~stric/tmp/zfs-many.png >> As the picture shows

Re: [zfs-discuss] Experiences with 10.000+ filesystems

2011-05-31 Thread Gertjan Oude Lohuis
On 05/31/2011 12:26 PM, Khushil Dep wrote: Generally snapshots are quick operations, but 10,000 such operations would, I believe, take enough time to complete as to present operational issues - breaking these into sets would alleviate some? Perhaps if you are starting to run into many thousands o

Re: [zfs-discuss] Experiences with 10.000+ filesystems

2011-05-31 Thread Gertjan Oude Lohuis
On 05/31/2011 03:52 PM, Tomas Ögren wrote: I've done a not too scientific test on reboot times for Solaris 10 vs 11 with regard to many filesystems... http://www8.cs.umu.se/~stric/tmp/zfs-many.png As the picture shows, don't try 10000 filesystems with NFS on sol10. Creating more filesystems

Re: [zfs-discuss] Ensure Newly created pool is imported automatically in new BE

2011-05-31 Thread Matt Keenan
I've written a possible solution using an svc-system-config SMF service where, on first boot, it will "import -f" a specified list of pools, and it does work. I was hoping to find a cleaner solution via zpool.cache... but if there's no way to achieve it I guess I'll have to stick with the other solution. I

Re: [zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Roy Sigurd Karlsbakk
> Hi. I have a development system on Intel commodity hardware with a 500G ZFS root mirror. I have another 500G drive same as the other two. Is there any way to use this disk to good advantage in this box? I don't think I need any more redundancy, I would like to increase performance if

Re: [zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Jim Klimov
> Hi. I have a development system on Intel commodity hardware with a 500G ZFS root mirror. I have another 500G drive same as the other two. Is there any way to use this disk to good advantage in this box? I don't think I need any more redundancy, I would like to increase performance

Re: [zfs-discuss] not sure how to make filesystems

2011-05-31 Thread Jim Klimov
Alas, I have some notes on the subject of migration from UFS to ZFS with split filesystems (separate /usr /var and some /var/* subdirs), but they are an unpublished document in Russian ;) Here I will outline some main points, but will probably have omitted some others :( Hope this helps anyway...
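
The per-filesystem step in such a migration generally reduces to a dataset plus a ufsdump pipe; a condensed sketch with hypothetical names, not a substitute for the full notes:

  # One dataset per split filesystem in the new boot environment
  $ zfs create rpool/ROOT/s10be/var

  # Copy the corresponding UFS slice across, preserving permissions
  $ ufsdump 0f - /dev/rdsk/c0t0d0s5 | (cd /rpool/ROOT/s10be/var && ufsrestore -rf -)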

Re: [zfs-discuss] Experiences with 10.000+ filesystems

2011-05-31 Thread Matthew Ahrens
On Tue, May 31, 2011 at 6:52 AM, Tomas Ögren wrote: > On a different setup, we have about 750 datasets where we would like to use a single recursive snapshot, but when doing that all file access will be frozen for varying amounts of time (sometimes half an hour or way more). Splitting it
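
Splitting one huge recursive snapshot into per-subtree ones looks like this (dataset names hypothetical). Each -r invocation is still atomic within its own subtree, so datasets that must stay consistent with each other should share a subtree:

  $ zfs snapshot -r tank/set1@2011-05-31
  $ zfs snapshot -r tank/set2@2011-05-31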

[zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Anonymous
Hi. I have a development system on Intel commodity hardware with a 500G ZFS root mirror. I have another 500G drive same as the other two. Is there any way to use this disk to good advantage in this box? I don't think I need any more redundancy, I would like to increase performance if possible. I ha

Re: [zfs-discuss] Ensure Newly created pool is imported automatically in new BE

2011-05-31 Thread Jim Klimov
Actually, if you need beadm to "know" about the data pool, it might be beneficial to mix both approaches - yours with bemount, and an init script to enforce the pool import on that first boot... HTH, //Jim Klimov

Re: [zfs-discuss] Importing Corrupted zpool never ends

2011-05-31 Thread Jim Klimov
if the conditions were "right". My FreeRAM-Watchdog code and compiled i386 binary and a primitive SMF service wrapper can be found here: http://thumper.cos.ru/~jim/freeram-watchdog-20110531-smf.tgz Other related forum threads: * zpool import hangs indefinitely (retry post in parts;

Re: [zfs-discuss] Ensure Newly created pool is imported automatically in new BE

2011-05-31 Thread Jim Klimov
I should have seen that coming, but didn't ;) I think in this case I would go with a different approach: don't import the data pool in the AI instance and save it to zpool.cache. Instead, make sure it is cleanly exported from the AI instance, and in the installed system create a self-destructing init s
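
A sketch of the self-destructing init script idea, assuming the data pool is named "data" (everything here is illustrative):

  #!/sbin/sh
  # /etc/rc3.d/S99firstimport - import the data pool once, then remove this script
  zpool import -f data && rm /etc/rc3.d/S99firstimport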

Re: [zfs-discuss] Experiences with 10.000+ filesystems

2011-05-31 Thread Jim Klimov
In general, you may need to keep data in one dataset if it is somehow related (e.g. the backup of a specific machine or program, a user's home, etc.) and if you plan to manage it in a consistent manner. For example, CIFS shares can not be nested, so for a unitary share (like "distribs") you would proba

Re: [zfs-discuss] Ensure Newly created pool is imported automatically in new BE

2011-05-31 Thread Matt Keenan
Jim, Thanks for the response, I've nearly got it working, but I'm coming up against a hostid issue. Here are the steps I'm going through: - At the end of auto-install, on the client just installed, before I manually reboot I do the following: $ beadm mount solaris /a $ zpool export data $ zpool im
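
A guess at the shape of the truncated last step (the -o cachefile flag is real; whether this exact recipe avoids the hostid problem is what the thread is working out):

  $ beadm mount solaris /a
  $ zpool export data
  # Re-import, writing the pool entry into the mounted BE's cache file
  $ zpool import -o cachefile=/a/etc/zfs/zpool.cache data

The hostid recorded by that import belongs to the install environment, which is presumably the mismatch described above.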

Re: [zfs-discuss] Experiences with 10.000+ filesystems

2011-05-31 Thread Eric D. Mudama
On Tue, May 31 at 8:52, Paul Kraus wrote: When we initially configured a large (20TB) file server about 5 years ago, we went with multiple zpools and multiple datasets (zfs) in each zpool. Currently we have 17 zpools and about 280 datasets. Nowhere near the 10,000+ you intend. We are moving

Re: [zfs-discuss] Experiences with 10.000+ filesystems

2011-05-31 Thread Jerry Kemp
Gertjan, In addition to the comments directly responding to your post, we have had similar discussions previously on the zfs-discuss list. If you care to review the list archives, we have had similar discussions during at least the following time periods: March 2006, May 2008

Re: [zfs-discuss] not sure how to make filesystems

2011-05-31 Thread Enda O'Connor
On 29/05/2011 19:55, BIll Palin wrote: I'm migrating some filesystems from UFS to ZFS and I'm not sure how to create a couple of them. I want to migrate /, /var, /opt, /export/home and also want swap and /tmp. I don't care about any of the others. The first disk, and the one with the UFS fil
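
On Solaris 10, the standard route for this is Live Upgrade into a ZFS root pool, with swap as a zvol and /tmp on tmpfs; a sketch with hypothetical disk and BE names:

  # ZFS root pool on the second disk (slice with SMI label, required for boot)
  $ zpool create rpool c0t1d0s0

  # Copy the running UFS boot environment into the pool, then activate it
  $ lucreate -c ufsBE -n zfsBE -p rpool
  $ luactivate zfsBE

  # Swap as a zvol (size illustrative); referenced from /etc/vfstab
  $ zfs create -V 4G rpool/swap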

[zfs-discuss] Importing Corrupted zpool never ends

2011-05-31 Thread Christian Becker
Hi there, I need to import a corrupted zpool after a double crash (mainboard and one HDD) on a different system. It is a RAIDZ1 - 3 HDDs - only 2 are working. Problem: "zpool import -f poolname" runs and runs and runs. Watching iostat (not zpool iostat), it is doing something - but what?

[zfs-discuss] not sure how to make filesystems

2011-05-31 Thread BIll Palin
I'm migrating some filesystems from UFS to ZFS and I'm not sure how to create a couple of them. I want to migrate /, /var, /opt, /export/home and also want swap and /tmp. I don't care about any of the others. The first disk, and the one with the UFS filesystems, is c0t0d0 and the 2nd disk is

Re: [zfs-discuss] Experiences with 10.000+ filesystems

2011-05-31 Thread Tomas Ögren
On 31 May, 2011 - Khushil Dep sent me these 4,5K bytes: > The adage that I adhere to with ZFS features is "just because you can doesn't mean you should!". I would suspect that with that many filesystems the normal zfs tools would also take an inordinate length of time to complete their opera

Re: [zfs-discuss] Experiences with 10.000+ filesystems

2011-05-31 Thread Paul Kraus
On Tue, May 31, 2011 at 6:08 AM, Gertjan Oude Lohuis wrote: > "Filesystems are cheap" is one of ZFS's mottos. I'm wondering how far this goes. Does anyone have experience with having more than 10.000 ZFS filesystems? I know that mounting this many filesystems during boot will take considera

Re: [zfs-discuss] optimal layout for 8x 1 TByte SATA (consumer)

2011-05-31 Thread Jim Klimov
Interesting, although it makes sense ;) Now, I wonder about reliability (with large 2-3TB drives and long scrub/resilver/replace times): say I have 12 drives in my box. I can lay them out as 4*3-disk raidz1, 3*4-disk raidz1, or 1*12-disk raidz3 with nearly the same capacity (8-9 data disks plus
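
For concreteness, the three layouts being compared (disk names d0..d11 are placeholders):

  # 4 x 3-disk raidz1: 8 data disks, 4 independent vdevs
  $ zpool create tank raidz1 d0 d1 d2  raidz1 d3 d4 d5  raidz1 d6 d7 d8  raidz1 d9 d10 d11

  # 3 x 4-disk raidz1: 9 data disks, 3 vdevs
  $ zpool create tank raidz1 d0 d1 d2 d3  raidz1 d4 d5 d6 d7  raidz1 d8 d9 d10 d11

  # 1 x 12-disk raidz3: 9 data disks, survives any 3 drive failures
  $ zpool create tank raidz3 d0 d1 d2 d3 d4 d5 d6 d7 d8 d9 d10 d11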

Re: [zfs-discuss] optimal layout for 8x 1 TByte SATA (consumer)

2011-05-31 Thread Paul Kraus
On Fri, May 27, 2011 at 2:49 PM, Marty Scholes wrote: > For what it's worth, I ran a 22 disk home array as a single RAIDZ3 vdev (19+3) for several months and it was fine. These days I run a 32 disk array laid out as four vdevs, each an 8 disk RAIDZ2, i.e. 4x 6+2. I tested 40 drives

Re: [zfs-discuss] Question on ZFS iSCSI

2011-05-31 Thread Jim Klimov
> The volume is exported as a whole disk. When given a whole disk, zpool creates a GPT partition table by default. You need to pass the partition (not the disk) to zdb. Yes, that is what seems to be the problem. However, for the zfs volumes (/dev/zvol/rdsk/pool/dcpool) there seems to be no concept o
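
Concretely, the label check has to point at the slice zpool created inside the GPT label, not at the whole disk; device path hypothetical:

  # Labels live inside the partition, so zdb -l on the raw disk finds nothing
  $ zdb -l /dev/rdsk/c3t1d0s0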

Re: [zfs-discuss] Question on ZFS iSCSI

2011-05-31 Thread Fajar A. Nugraha
On Tue, May 31, 2011 at 5:47 PM, Jim Klimov wrote: > However, it seems that there may be some extra data beside the zfs pool in the actual volume (I'd at least expect an MBR or GPT, and maybe some iSCSI service data as overhead). One way or another, the "dcpool" can not be found in the phy

[zfs-discuss] Question on ZFS iSCSI

2011-05-31 Thread Jim Klimov
I have an oi_148a test box with a pool on physical HDDs, a volume in this pool shared over iSCSI with explicit commands (sbdadm and such), and this iSCSI target is initiated by the same box. In the resulting iSCSI device I have another ZFS pool "dcpool". Recently I found the iSCSI part to be a pote
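
For context, the loopback arrangement described is roughly this COMSTAR sequence (names hypothetical, GUIDs elided):

  # Export a zvol as an iSCSI logical unit
  $ zfs create -V 100G pool/dcvol
  $ sbdadm create-lu /dev/zvol/rdsk/pool/dcvol
  $ stmfadm add-view 600144f0...        # GUID printed by sbdadm
  $ itadm create-target

  # Initiate it from the same host, then build the inner pool on the new device
  $ iscsiadm add discovery-address 127.0.0.1
  $ iscsiadm modify discovery --sendtargets enable
  $ zpool create dcpool c3t600144F0...d0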

Re: [zfs-discuss] Experiences with 10.000+ filesystems

2011-05-31 Thread Khushil Dep
The adage that I adhere to with ZFS features is "just because you can doesn't mean you should!". I would suspect that with that many filesystems the normal zfs tools would also take an inordinate length of time to complete their operations, scaling according to size. Generally snapshots are quic

Re: [zfs-discuss] JBOD recommendation for ZFS usage

2011-05-31 Thread Jim Klimov
So, if I may, is this the correct summary of the answer to the original question (on a JBOD for a ZFS HA cluster): === The SC847E26-RJBOD1 with dual-ported SAS drives is known to work in a failover HA storage scenario, allowing both servers (HBAs) access to each single SAS drive individually, so zp

[zfs-discuss] Experiences with 10.000+ filesystems

2011-05-31 Thread Gertjan Oude Lohuis
"Filesystem are cheap" is one of ZFS's mottos. I'm wondering how far this goes. Does anyone have experience with having more than 10.000 ZFS filesystems? I know that mounting this many filesystems during boot while take considerable time. Are there any other disadvantages that I should be aware of?