Re: [zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-21 Thread Meilicke, Scott
Thank you Bob and Richard. I will go with A, as it also keeps things simple. One physical device per pool. -Scott On 10/20/09 6:46 PM, "Bob Friesenhahn" wrote: > On Tue, 20 Oct 2009, Richard Elling wrote: >> >> The ZIL device will never require more space than RAM.

[zfs-discuss] Setting up an SSD ZIL - Need A Reality Check

2009-10-20 Thread Scott Meilicke
, I am leaning towards option C. Any gotchas I should be aware of? Thanks, Scott -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] OLD pool information visible in "zpool import"

2009-10-17 Thread Jake Scott
On what is now a live system, I had previously been tinkering with ZFS, creating and destroying pools and datasets. Those old pools still seem to be visible to the system even though I've re-created new pools with new names:

zpool status
  pool: BackupP0
 state: ONLINE
 scrub: none requested

Re: [zfs-discuss] poor man's Drobo on FreeNAS

2009-09-30 Thread Scott Meilicke
Requires a login...

Re: [zfs-discuss] Incremental snapshot size

2009-09-30 Thread Scott Meilicke
It is more cost, but a WAN Accelerator (Cisco WAAS, Riverbed, etc.) would be a big help. Scott

Re: [zfs-discuss] How to verify if the ZIL is disabled

2009-09-23 Thread Scott Meilicke
> zfs share -a Ah-ha! Thanks. FYI, I got between 2.5x and 10x improvement in performance, depending on the test. So tempting :) -Scott

Re: [zfs-discuss] How to verify if the ZIL is disabled

2009-09-23 Thread Scott Meilicke
itself of course. -Scott

[zfs-discuss] How to verify if the ZIL is disabled

2009-09-23 Thread Scott Meilicke
, Scott

Re: [zfs-discuss] ZFS & HW RAID

2009-09-18 Thread Scott Lawson
Bob Friesenhahn wrote: On Fri, 18 Sep 2009, David Magda wrote: If you care to keep your pool up and alive as much as possible, then mirroring across SAN devices is recommended. One suggestion I heard was to get a LUN that's twice the size, and set "copies=2". This way you have some redundancy
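The "copies=2" suggestion can be set per dataset; a minimal sketch, assuming a hypothetical pool/dataset name (not from this thread) and a live ZFS system:

```shell
# Sketch: ask ZFS to keep two copies of every block within the one LUN.
# "tank/archive" is a placeholder. copies only affects blocks written
# after the property is set, and it halves usable capacity.
zfs set copies=2 tank/archive
zfs get copies tank/archive
```

Note this guards against localized media errors on the LUN, not against losing the LUN itself.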

Re: [zfs-discuss] ZFS & HW RAID

2009-09-18 Thread Scott Lawson
-- _ Scott Lawson Systems Architect Information Communication Technology Services Manukau Institute of Technology Private Bag 94006 South Auckland Mail Centre M

Re: [zfs-discuss] RAIDZ versus mirrroed

2009-09-16 Thread Scott Meilicke
I think in theory the ZIL/L2ARC should make things nice and fast if your workload includes sync requests (database, iSCSI, NFS, etc.), regardless of the backend disks. But the only sure way to know is to test with your workload. -Scott

Re: [zfs-discuss] Pulsing write performance

2009-09-08 Thread Scott Meilicke
True, this setup is not designed for high random I/O, but rather lots of storage with fair performance. This box is for our dev/test backend storage. Our production VI runs in the 500-700 IOPS range (80+ VMs, production plus dev/test) on average, so for our development VI, we are expecting half of that

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke
raidz, Dell 2950, 16GB RAM. -Scott

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke
Yes, I was getting confused. Thanks to you (and everyone else) for clarifying. Sync or async, I see the txg flushing to disk starve read IO. Scott

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke
So, I just re-read the thread, and you can forget my last post. I had thought the argument was that the data were not being written to disk twice (assuming no separate device for the ZIL), but it was just explaining to me that the data are not read from the ZIL to disk, but rather from memory to

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke
Doh! I knew that, but then forgot... So, for the case of no separate device for the ZIL, the ZIL lives on the disk pool. In which case, the data are written to the pool twice during a sync: 1. To the ZIL (on disk) 2. From RAM to disk during the txg flush. If this is correct (and my history in this thread

Re: [zfs-discuss] Understanding when (and how) ZFS will use spare disks

2009-09-04 Thread Scott Meilicke
the spare *would* take over in these cases, since the pool is degraded. -Scott

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke
writing to the SSD/ZIL, and not to spinning disk. Eventually that data on the SSD must get to spinning disk. To the books I go! -Scott

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke
? -Scott

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke
a good idea, although I have not yet tried and tested it myself. -Scott

Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Scott Meilicke
re) with a single mirror using two 7200 drives gave me about 200 IOPS using the same test, presumably because of the large amounts of RAM for the L2ARC cache. -Scott

Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-08-31 Thread Scott Meilicke
As I understand it, when you expand a pool, the data do not automatically migrate to the other disks. You will have to rewrite the data somehow, usually a backup/restore. -Scott
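A common way to "rewrite the data somehow" without a full backup/restore is a local send/receive into a fresh dataset; a sketch with placeholder names, on a live pool:

```shell
# Sketch: rewrite a dataset so its blocks spread over the newly added vdevs.
# "data01/vm" is a placeholder; verify the new copy before destroying anything.
zfs snapshot data01/vm@rebalance
zfs send data01/vm@rebalance | zfs receive data01/vm.new
# after verifying data01/vm.new is complete:
zfs destroy -r data01/vm
zfs rename data01/vm.new data01/vm
```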

Re: [zfs-discuss] Connect couple of SATA JBODs to one storage server

2009-08-27 Thread Scott Meilicke
Roman, are you saying you want to install OpenSolaris on your old servers, or make the servers look like an external JBOD array, that another server will then connect to?

Re: [zfs-discuss] How to find poor performing disks

2009-08-26 Thread Scott Lawson
may help to isolate poorly performing individual disks. Scott Meilicke wrote: You can try: zpool iostat pool_name -v 1 This will show you IO on each vdev at one second intervals. Perhaps you will see different IO behavior on any suspect drive. -Scott

Re: [zfs-discuss] problem with zfs

2009-08-26 Thread Scott Lawson
serge goyette wrote: actually i did apply the latest recommended patches Recommended patches and upgrade clusters are different, by the way: 10_Recommended != Upgrade Cluster. An upgrade cluster will upgrade the system to, effectively, the Solaris release that the upgrade cluster is minu

Re: [zfs-discuss] problem with zfs

2009-08-26 Thread Scott Lawson
erved. Use is subject to license terms. Assembled 27 October 2008

Re: [zfs-discuss] How to find poor performing disks

2009-08-26 Thread Scott Meilicke
You can try: zpool iostat pool_name -v 1 This will show you IO on each vdev at one second intervals. Perhaps you will see different IO behavior on any suspect drive. -Scott

Re: [zfs-discuss] Failure of Quicktime *.mov files after move to zfs disk

2009-08-21 Thread Scott Laird
Checksum all of the files using something like md5sum and see if they're actually identical. Then test each step of the copy and see which one is corrupting your files. On Fri, Aug 21, 2009 at 1:43 PM, Harry Putnam wrote: > During the course of backup I had occassion to copy a number of > quickti
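The checksum comparison suggested above can be scripted; a minimal sketch (file names are stand-ins, and on Solaris `digest -a md5` plays the role of `md5sum`):

```shell
#!/bin/sh
# Compare MD5 checksums of a source file and its copy; identical sums
# mean the copy step did not corrupt the data.
printf 'frame data' > /tmp/original.mov
cp /tmp/original.mov /tmp/copy.mov
a=$(md5sum /tmp/original.mov | awk '{print $1}')
b=$(md5sum /tmp/copy.mov | awk '{print $1}')
if [ "$a" = "$b" ]; then echo identical; else echo corrupted; fi
```

Repeating this after each step of the pipeline (source machine, intermediate copy, final ZFS disk) isolates which step corrupts the files.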

Re: [zfs-discuss] zfs fragmentation

2009-08-12 Thread Scott Lawson
n than you have concurrent streams. This avoids having one save set that finishes long after all the others because of poorly balanced save sets. Couldn't agree more Mike. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Live resize/grow of iscsi shared ZVOL

2009-08-12 Thread Scott Meilicke
t one path active, you should be fine. -Scott

Re: [zfs-discuss] NFS load balancing / was: ZFS, ESX , and NFS. oh my!

2009-08-12 Thread Scott Meilicke
Yes! That would be icing on the cake.

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Scott Lawson

Re: [zfs-discuss] zfs fragmentation

2009-08-07 Thread Scott Meilicke
writes to a system without a separate ZIL also be written as intelligently as with a separate ZIL? Thanks, Scott

Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-07 Thread Scott Meilicke
this will be just fine :) -Scott

Re: [zfs-discuss] Can I setting 'zil_disable' to increase ZFS/iscsi performance ?

2009-08-06 Thread Scott Meilicke
You can use a separate SSD ZIL.
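Adding an SSD as a separate log device is a one-liner on a live system; a sketch, with placeholder pool and device names:

```shell
# Sketch: dedicate an SSD to the ZIL (separate log device).
# "data01" and "c2t0d0" are placeholders for your pool and SSD.
zpool add data01 log c2t0d0
zpool status data01   # the SSD appears under a separate "logs" section
```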

Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-04 Thread Scott Meilicke
This has been a very enlightening thread for me, and explains a lot of the performance data I have collected on both 2008.11 and 2009.06 which mirrors the experiences here. Thanks to you all. NFS perf tuning, here I come... -Scott

Re: [zfs-discuss] grow zpool by replacing disks

2009-08-03 Thread Scott Lawson
Tobias Exner wrote: Hi list, some months ago I spoke with an zfs expert on a Sun Storage event. He told it's possible to grow a zpool by replacing every single disk with a larger one. After replacing and resilvering all disks of this pool zfs will provide the new size automatically. Now
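The replace-and-resilver procedure described above looks roughly like this; device names are placeholders, and on releases that have the autoexpand pool property it must be enabled for the new capacity to appear:

```shell
# Sketch: grow a pool by swapping each disk for a larger one.
zpool set autoexpand=on tank      # only on builds that have this property
zpool replace tank c1t2d0 c1t6d0  # repeat per disk; wait for each resilver
zpool list tank                   # larger size shows after the last disk
```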

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40

2009-08-01 Thread Scott Lawson
Dave Stubbs wrote: I don't mean to be offensive Russel, but if you do ever return to ZFS, please promise me that you will never, ever, EVER run it virtualized on top of NTFS (a.k.a. worst file system ever) in a production environment. Microsoft Windows is a horribly unreliable operating system

Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Scott Lawson
work core routers and needless to say achieves very high throughput. I have seen it pushing the full capacity of the SAS link to the J4500 quite commonly. This is probably the choke point for this system. /Scott

Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-28 Thread Scott Lawson
% ONLINE -
nbupool  40.8T  34.4T  6.37T  84%  ONLINE  -
[r...@solnbu1 /]#>

[zfs-discuss] L2ARC support in Solaris 10 (Update 8?)

2009-07-22 Thread Scott Lawson
support are invited from the list. Thanks, Scott.

Re: [zfs-discuss] Migrating a zfs pool to another server

2009-07-21 Thread Scott Lawson

Re: [zfs-discuss] triple-parity: RAID-Z3

2009-07-20 Thread Scott Meilicke
ZFS's Copy on Write model: http://en.wikipedia.org/wiki/Zfs#Copy-on-write_transactional_model So I'm not sure what the 'RAID-Z should mind the gap on writes' comment is getting at either. Clarification? -Scott

Re: [zfs-discuss] ZFS pegging the system

2009-07-17 Thread Scott Laird
Have each node record results locally, and then merge pair-wise until a single node is left with the final results? If you can do merges that way while reducing the size of the result set, then that's probably going to be the most scalable way to generate overall results. On Thu, Jul 16, 2009 at

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Scott Lawson
second 'cpio -C 131072 -o > /dev/null' 48000256 blocks
real    1m59.11s
user    0m9.93s
sys     1m49.15s
Feel free to clean up with 'zfs destroy nbupool/zfscachetest'. Scott Lawson wrote: Bob, Output of my run for you. System is a M3000 with 16 GB RAM and 1 zpool called test

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Scott Lawson
Bob Friesenhahn wrote: On Wed, 15 Jul 2009, Scott Lawson wrote:
NAME     STATE   READ WRITE CKSUM
test1    ONLINE     0     0     0
  mirror ONLINE     0     0

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Scott Lawson
) 'cpio -C 131072 -o > /dev/null' 48000256 blocks
real    3m25.13s
user    0m2.67s
sys     0m28.40s
Doing second 'cpio -C 131072 -o > /dev/null' 48000256 blocks
real    8m53.05s
user    0m2.69s
sys     0m32.83s
Feel free to clean up with 'zfs destroy test1/zfscachet

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-12 Thread Scott Lawson
stroy test1/zfscachetest'. Looks like a 25% performance loss for me. I was seeing around 80MB/s sustained on the first run and around 60MB/s sustained on the 2nd. /Scott. Bob Friesenhahn wrote: There has been no forward progress on the ZFS read performance issue for a week now. A 4X r

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-30 Thread Scott Meilicke
536K  2.97M
data01  59.7G  20.4T     32     23   483K  2.97M
data01  59.7G  20.4T     37     37   538K  4.70M
While writes are being committed to the ZIL all the time, periodic dumping to the pool still occurs, and during those times reads are starved. Maybe this doesn't happen in the

Re: [zfs-discuss] ZFS, power failures, and UPSes

2009-06-30 Thread Scott Lawson
David Magda wrote: On Jun 30, 2009, at 14:08, Bob Friesenhahn wrote: I have seen UPSs help quite a lot for short glitches lasting seconds, or a minute. Otherwise the outage is usually longer than the UPSs can stay up since the problem required human attention. A standby generator is neede

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-30 Thread Scott Meilicke
4.23K   223K  23.2M
data01  55.6G  20.4T     13  4.37K  87.1K  23.9M
data01  55.6G  20.4T     21  3.33K   136K  18.6M
data01  55.6G  20.4T    468    496  2.89M  1.82M
data01  55.6G  20.4T    687      0  4.13M      0
-Scott

Re: [zfs-discuss] ZFS, power failures, and UPSes

2009-06-30 Thread Scott Lawson
Monish Shah wrote: A related question: If you are on a UPS, is it OK to disable ZIL? I think the answer to this is no. UPS's do fail. If you have two redundant units, answer *might* be maybe. But prudence says *no*. I have seen numerous UPS' failures over the years, cascading UPS failures

Re: [zfs-discuss] ZFS, power failures, and UPSes

2009-06-30 Thread Scott Lawson
Haudy Kazemi wrote: Hello, I've looked around Google and the zfs-discuss archives but have not been able to find a good answer to this question (and the related questions that follow it): How well does ZFS handle unexpected power failures? (e.g. environmental power failures, power supply

Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread Scott Meilicke
com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations "The recommended number of disks per group is between 3 and 9. If you have more disks, use multiple groups." -Scott

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-26 Thread Scott Meilicke
iSCSI is not (See my earlier zpool iostat data for iSCSI). Isn't this what we expect, because NFS does syncs, while iSCSI does not (assumed)? -Scott

Re: [zfs-discuss] zfs on 32 bit?

2009-06-26 Thread Scott Laird
It's actually worse than that--it's not just "recent CPUs" without VT support. Very few of Intel's current low-price processors, including the Q8xxx quad-core desktop chips, have VT support. On Wed, Jun 24, 2009 at 12:09 PM, roland wrote: >>Dennis is correct in that there are significant areas wh

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-25 Thread Scott Meilicke
t, you remind me that my test was flawed, in that my iSCSI numbers were using the ESXi iSCSI SW initiator, while the NFS tests were performed with the VM as the guest, not ESX. I'll give ESX as the NFS client, vmdks on NFS, a go and get back to you. Thanks! Scott

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-25 Thread Scott Meilicke
a separate device. The periodic high writes show it being flushed. You can also see reads stall to nearly zero as the ZIL is dumping. Not good. This thread is discussing this behavior: http://www.opensolaris.org/jive/thread.jspa?threadID=106453 Coming from a mostly Windows world, I really like th

Re: [zfs-discuss] Increase size of ZFS mirror

2009-06-24 Thread Scott Lawson
replacing a disk. HTH, Thomas

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-24 Thread Scott Meilicke
ZIL usage, from what I have read you will only see benefits if you are using NFS backed storage, but that it can be significant. Remove the ZIL for testing to see the max benefit you could get. Don't do this in production! -Scott
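On 2009-era (Open)Solaris, the usual way to "remove the ZIL for testing" was the historical zil_disable tunable; a sketch of that config change, and again: benchmarking only, never production:

```shell
# Sketch: disable the ZIL globally for a benchmark run (reboot required).
# Historical zfs:zil_disable tunable; unsafe for NFS/iSCSI clients' data.
echo 'set zfs:zil_disable = 1' >> /etc/system
reboot
```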

Re: [zfs-discuss] SAN server

2009-06-22 Thread Scott Meilicke
f IO? Also, to ensure you can recover from failures, consider separate pools for your database files and log files, both for MySQL and Exchange. Good luck! -Scott

Re: [zfs-discuss] SAN server

2009-06-22 Thread Scott Meilicke
, try to get a card that supports JBOD mode so you can use software raid if you change your mind. -Scott

Re: [zfs-discuss] Is the PROPERTY compression will increase the ZFS I/O th

2009-06-19 Thread Scott Meilicke
Generally, yes. Test it with your workload and see how it works out for you. -Scott

Re: [zfs-discuss] ZFS, ESX ,and NFS. oh my!

2009-06-19 Thread Scott Meilicke
So how are folks getting around the NFS speed hit? Using SSD or battery backed RAM ZILs? Regarding limited NFS mounts, underneath a single NFS mount, would it work to:
* Create a new VM
* Remove the VM from inventory
* Create a new ZFS file system underneath the original
* Copy the VM to that fi
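The per-VM layout proposed above would look something like this on the storage side; dataset names and the quota value are placeholders, not from the thread:

```shell
# Sketch: one child filesystem per VM under the NFS-exported parent.
zfs create tank/vmstore                       # parent, exported to ESX
zfs create tank/vmstore/vm01                  # new filesystem for one VM
zfs set quota=40G tank/vmstore/vm01           # optional per-VM cap
zfs snapshot tank/vmstore/vm01@before-patch   # per-VM snapshots now possible
```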

Re: [zfs-discuss] 7110 questions

2009-06-18 Thread Scott Meilicke
Both iSCSI and NFS are slow? I would expect NFS to be slow, but in my iSCSI testing with OpenSolaris 2008.11, performance was reasonable, about 2x NFS. Setup: Dell 2950 with a SAS HBA and SATA 3x5 raidz (15 disks, no separate ZIL), iSCSI using VMware ESXi 3.5 software initiator. Scott

Re: [zfs-discuss] ZFS, ESX ,and NFS. oh my!

2009-06-16 Thread Scott Meilicke
lexibility over iSCSI (quotas, reservations, etc.) -Scott

Re: [zfs-discuss] Asymmetric mirroring

2009-06-10 Thread Scott Meilicke
The SATA drive will be your bottleneck, and you will lose any speed advantages of the SAS drives, especially using 3 vdevs on a single SATA disk. I am with Richard, figure out what performance you need, and build accordingly.

Re: [zfs-discuss] ZFS Path = ???

2009-05-19 Thread Scott Lawson
admin deploy -a zfs -x zfs /usr/share/webconsole/webapps/zfs Also if you wish to make the webconsole accessible from more than just the localhost, use : # svccfg -s svc:/system/webconsole setprop options/tcp_listen = true # smcwebserver restart Hope this helps, Scott. cindy.swearin...@su

Re: [zfs-discuss] With RAID-Z2 under load, machine stops responding to local or remote login

2009-05-15 Thread Scott Duckworth
swamped with such a load. We're running Solaris 10, not OpenSolaris, so it could also be the case that there is a regression somewhere in there. Scott Duckworth, Systems Programmer II Clemson University School of Computing On Tue, May 12, 2009 at 10:10 PM, Rince wrote: > Hi world, > I h

Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-06 Thread Scott Lawson
Bob Friesenhahn wrote: On Thu, 7 May 2009, Scott Lawson wrote: A STK2540 storage array with this configuration: * Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs. * Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs. Just thought I would point out that these are hardware backed RAID arrays. You

Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-06 Thread Scott Lawson
Roger Solano wrote: Hello, Does it make any sense to use a bunch of 15K SAS drives as L2ARC cache for several TBs of SATA disks? For example: A STK2540 storage array with this configuration: * Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs. * Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs.

Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD ? Or Singe MetaLUN ?)

2009-04-30 Thread Scott Lawson
Wilkinson, Alex wrote: 0n Thu, Apr 30, 2009 at 11:11:55AM -0500, Bob Friesenhahn wrote: >On Thu, 30 Apr 2009, Wilkinson, Alex wrote: >> >> I currently have a single 17TB MetaLUN that i am about to present to an >> OpenSolaris initiator and it will obviously be ZFS. However

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Scott Lawson
Richard Elling wrote: Some history below... Scott Lawson wrote: Michael Shadle wrote: On Mon, Apr 27, 2009 at 4:51 PM, Scott Lawson wrote: If possible though you would be best to let the 3ware controller expose the 16 disks as a JBOD to ZFS and create a RAIDZ2 within Solaris as you

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Scott Lawson
Michael Shadle wrote: On Mon, Apr 27, 2009 at 5:32 PM, Scott Lawson wrote: One thing you haven't mentioned is the drive type and size that you are planning to use as this greatly influences what people here would recommend. RAIDZ2 is built for big, slow SATA disks as reconstruction

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Scott Lawson
Michael Shadle wrote: On Mon, Apr 27, 2009 at 4:51 PM, Scott Lawson wrote: If possible though you would be best to let the 3ware controller expose the 16 disks as a JBOD to ZFS and create a RAIDZ2 within Solaris as you will then gain the full benefits of ZFS. Block self healing etc etc

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Scott Lawson
gives you greater cover in the event of a drive failing in a large vdev stripe. /Scott Leon Meßner wrote: Hi, i'm new to the list so please bear with me. This isn't an OpenSolaris related problem but i hope it's still the right list to post to. I'm on the way to move a ba

Re: [zfs-discuss] Sun's flash offering(s)

2009-04-19 Thread Scott Laird
service contract, etc, wasn't important to you. Compare the URL above with this one: http://www.intel.com/design/flash/nand/extreme/index.htm Scott

Re: [zfs-discuss] zfs as a cache server

2009-04-09 Thread Scott Lawson
you best read performance. Also chuck in as much RAM as you can for ARC caching. Hope this real world case is of use to you. Feel free to ask any more questions.. Cheers, Scott. Francois wrote: Hello list, What would be the best zpool configuration for a cache/proxy server (probably ba

Re: [zfs-discuss] Can this be done?

2009-04-07 Thread Scott Lawson

Re: [zfs-discuss] Can this be done?

2009-04-07 Thread Scott Lawson
Michael Shadle wrote: On Wed, Apr 1, 2009 at 3:19 AM, Michael Shadle wrote: I'm going to try to move one of my disks off my rpool tomorrow (since it's a mirror) to a different controller. According to what I've heard before, ZFS should automagically recognize this new location and have no

Re: [zfs-discuss] Can this be done?

2009-03-31 Thread Scott Lawson
Michael Shadle wrote: On Mon, Mar 30, 2009 at 4:13 PM, Michael Shadle wrote: Sounds like a reasonable idea, no? Follow up question: can I add a single disk to the existing raidz2 later on (if somehow I found more space in my chassis) so instead of a 7 disk raidz2 (5+2) it becomes a 6+2

Re: [zfs-discuss] Can this be done?

2009-03-28 Thread Scott Lawson

Re: [zfs-discuss] ZFS on a SAN

2009-03-12 Thread Scott Lawson
en importing to the new system. Works well as long as both systems are of the same OS revision or greater on the target system. /Scott. Grant Lowe wrote: Hi Erik, A couple of questions about what you said in your email. In synopsis 2, if hostA has gone belly up and is no longer accessible, th

Re: [zfs-discuss] Comstar production-ready?

2009-03-04 Thread Scott Lawson

Re: [zfs-discuss] Comstar production-ready?

2009-03-04 Thread Scott Lawson
to them and they might not understand having to change their shell paths to get the userland that they want ;) On Wed, Mar 4, 2009 at 2:47 AM, Scott Lawson wrote: Stephen Nelson-Smith wrote: Hi, I recommended a ZFS-based archive solution to a client needing to have a network-b

Re: [zfs-discuss] Comstar production-ready?

2009-03-03 Thread Scott Lawson
ver a commercially supported solution for them. Thanks, S.

Re: [zfs-discuss] ZFS on SAN? Availability edition.

2009-02-18 Thread Scott Lawson
Miles Nordin wrote: "sl" == Scott Lawson writes: sl> Electricity *is* the lifeblood of available storage. I never meant to suggest computing machinery could run without electricity. My suggestion is, if your focus is _reliability_ rather than availability

Re: [zfs-discuss] ZFS on SAN? Availability edition.

2009-02-18 Thread Scott Lawson
t the twinstrata website. (as should others). Sorry to all if we are diverging too much from zfs-discuss. /Scott This stuff does happen. When you have been around for a while you see it. Robin Harris wrote: Calculating the availability and economic trade-offs of configurations is hard. Rule of

Re: [zfs-discuss] qmail on zfs

2009-02-18 Thread Scott Lawson
Robert Milkowski wrote: Hello Asif, Wednesday, February 18, 2009, 1:28:09 AM, you wrote: AI> On Tue, Feb 17, 2009 at 5:52 PM, Robert Milkowski wrote: Hello Asif, Tuesday, February 17, 2009, 7:43:41 PM, you wrote: AI> Hi All AI> Does anyone have any experience on running qmail on solar

Re: [zfs-discuss] ZFS on SAN?

2009-02-18 Thread Scott Lawson
Hi Andras, No problems writing direct. Answers inline below. (If there are any typos it's because it's late and I have had a very long day ;)) andras spitzer wrote: Scott, Sorry for writing you directly, but most likely you have missed my questions regarding your SW design, wheneve

Re: [zfs-discuss] ZFS on SAN?

2009-02-17 Thread Scott Lawson
David Magda wrote: On Feb 17, 2009, at 21:35, Scott Lawson wrote: Everything we have has dual power supplies, feed from dual power rails, feed from separate switchboards, through separate very large UPS's, backed by generators, feed by two substations and then cloned to another

Re: [zfs-discuss] ZFS on SAN?

2009-02-17 Thread Scott Lawson
Toby Thain wrote: On 17-Feb-09, at 3:01 PM, Scott Lawson wrote: Hi All, ... I have seen other people discussing power availability on other threads recently. If you want it, you can have it. You just need the business case for it. I don't buy the comments on UPS unreliability. H

Re: [zfs-discuss] ZFS on SAN?

2009-02-17 Thread Scott Lawson
ms. I have 0% Solaris older than Solaris 10. Why would you? In short I hope people don't hold back from adoption of ZFS because they are unsure about it. Judge for yourself as I have done and dip your toes in at whatever rate you are happy to do so. Thats what I did. /Scott. I also use

Re: [zfs-discuss] ZFS on SAN?

2009-02-17 Thread Scott Lawson
laris 10. Why would you? In short I hope people don't hold back from adoption of ZFS because they are unsure about it. Judge for yourself as I have done and dip your toes in at whatever rate you are happy to do so. Thats what I did. /Scott. I also use it at home too with and old D1000 attached

Re: [zfs-discuss] Two pools on one slice

2009-02-09 Thread Scott Watanabe
Have you tried the procedure in the ZFS Troubleshooting Guide? http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Panic.2FReboot.2FPool_Import_Problems

Re: [zfs-discuss] how to fix zpool with corrupted disk?

2009-01-26 Thread Scott Watanabe
Looks like your scrub was not finished yet. Did you check it later? You should not have had to replace the disk. You might have to reinstall the bootblock.

Re: [zfs-discuss] Bug report: disk replacement confusion

2009-01-23 Thread Scott L. Burson
controller port, so that the new device will have the same device name as the failed one. -- Scott

Re: [zfs-discuss] Bug report: disk replacement confusion

2009-01-22 Thread Scott L. Burson
Well, the second resilver finished, and everything looks okay now. Doing one more scrub to be sure... -- Scott

[zfs-discuss] Bug report: disk replacement confusion

2009-01-22 Thread Scott L. Burson
they wind up on different ports? If so, seems like it needs to back-map that information to the device names when mounting. Or something :) -- Scott
