[zfs-discuss] OLD pool information visible in zpool import

2009-10-17 Thread Jake Scott
On what is now a live system, I had previously been tinkering with ZFS, creating and destroying pools and datasets. Those old pools still seem to be visible to the system even though I've re-created new pools with new names : zpool status pool: BackupP0 state: ONLINE scrub: none
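Stale ZFS labels from destroyed pools stay on disk until something overwrites them, so `zpool import` (and `zpool import -D`, which lists destroyed pools) will keep showing the old names. A hedged sketch of inspecting and clearing them; the device name is hypothetical, and the `dd` step destructively wipes label areas, so only run it against disks you are certain are no longer part of any pool:

```shell
# List pools visible for import, including ones marked destroyed:
zpool import          # exported/orphaned pools
zpool import -D       # pools that were destroyed with 'zpool destroy'

# ZFS keeps labels at both the start and end of each device. Zeroing
# the first megabyte clears the front labels (hypothetical device;
# repeat at the end of the device to clear the back labels):
dd if=/dev/zero of=/dev/rdsk/c1t2d0s0 bs=1024k count=1
```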

Re: [zfs-discuss] Incremental snapshot size

2009-09-30 Thread Scott Meilicke
It is more cost, but a WAN Accelerator (Cisco WAAS, Riverbed, etc.) would be a big help. Scott -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] poor man's Drobo on FreeNAS

2009-09-30 Thread Scott Meilicke
Requires a login...

[zfs-discuss] How to verify if the ZIL is disabled

2009-09-23 Thread Scott Meilicke

Re: [zfs-discuss] How to verify if the ZIL is disabled

2009-09-23 Thread Scott Meilicke
itself of course. -Scott

Re: [zfs-discuss] How to verify if the ZIL is disabled

2009-09-23 Thread Scott Meilicke
zfs share -a Ah-ha! Thanks. FYI, I got between 2.5x and 10x improvement in performance, depending on the test. So tempting :) -Scott

Re: [zfs-discuss] ZFS HW RAID

2009-09-19 Thread Scott Lawson
Bob Friesenhahn wrote: On Fri, 18 Sep 2009, David Magda wrote: If you care to keep your pool up and alive as much as possible, then mirroring across SAN devices is recommended. One suggestion I heard was to get a LUN that's twice the size, and set copies=2. This way you have some

Re: [zfs-discuss] ZFS HW RAID

2009-09-18 Thread Scott Lawson
-- _ Scott Lawson Systems Architect Information Communication Technology Services Manukau Institute of Technology Private Bag 94006 South Auckland Mail Centre Manukau 2240 Auckland New Zealand Phone : +64 09 968 7611 Fax: +64 09 968 7641 Mobile

Re: [zfs-discuss] RAIDZ versus mirrored

2009-09-16 Thread Scott Meilicke
I think in theory the ZIL/L2ARC should make things nice and fast if your workload includes sync requests (database, iscsi, nfs, etc.), regardless of the backend disks. But the only sure way to know is test with your work load. -Scott

Re: [zfs-discuss] Pulsing write performance

2009-09-08 Thread Scott Meilicke
True, this setup is not designed for high random I/O, but rather lots of storage with fair performance. This box is for our dev/test backend storage. Our production VI runs in the 500-700 IOPS (80+ VMs, production plus dev/test) on average, so for our development VI, we are expecting half of

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke
not yet tried and tested it myself. -Scott

Re: [zfs-discuss] Understanding when (and how) ZFS will use spare disks

2009-09-04 Thread Scott Meilicke
the spare *would* take over in these cases, since the pool is degraded. -Scott

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke
Doh! I knew that, but then forgot... So, for the case of no separate device for the ZIL, the ZIL lives on the disk pool. In which case, the data are written to the pool twice during a sync: 1. To the ZIL (on disk) 2. From RAM to disk during tgx If this is correct (and my history in this

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke
So, I just re-read the thread, and you can forget my last post. I had thought the argument was that the data were not being written to disk twice (assuming no separate device for the ZIL), but it was just explaining to me that the data are not read from the ZIL to disk, but rather from memory

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke
Yes, I was getting confused. Thanks to you (and everyone else) for clarifying. Sync or async, I see the txg flushing to disk starve read IO. Scott

Re: [zfs-discuss] Pulsing write performance

2009-09-04 Thread Scott Meilicke
raidz, Dell 2950, 16GB RAM. -Scott

Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Scott Meilicke
) with a single mirror using two 7200 drives gave me about 200 IOPS using the same test, presumably because of the large amounts of RAM for the L2ARC cache. -Scott

Re: [zfs-discuss] zfs performance cliff when over 80% util, still occurring when pool in 6

2009-08-31 Thread Scott Meilicke
As I understand it, when you expand a pool, the data do not automatically migrate to the other disks. You will have to rewrite the data somehow, usually a backup/restore. -Scott

Re: [zfs-discuss] Connect couple of SATA JBODs to one storage server

2009-08-27 Thread Scott Meilicke
Roman, are you saying you want to install OpenSolaris on your old servers, or make the servers look like an external JBOD array, that another server will then connect to?

Re: [zfs-discuss] How to find poor performing disks

2009-08-26 Thread Scott Meilicke
You can try: zpool iostat -v pool_name 1 This will show you IO on each vdev at one second intervals. Perhaps you will see different IO behavior on any suspect drive. -Scott

Re: [zfs-discuss] problem with zfs

2009-08-26 Thread Scott Lawson
. Use is subject to license terms. Assembled 27 October 2008

Re: [zfs-discuss] problem with zfs

2009-08-26 Thread Scott Lawson
serge goyette wrote: actually i did apply the latest recommended patches Recommended patches and upgrade clusters are different, by the way: 10_Recommended != Upgrade Cluster. An upgrade cluster will effectively upgrade the system to the Solaris release that the upgrade cluster is

Re: [zfs-discuss] How to find poor performing disks

2009-08-26 Thread Scott Lawson
poorly performing individual disks. Scott Meilicke wrote: You can try: zpool iostat -v pool_name 1 This will show you IO on each vdev at one second intervals. Perhaps you will see different IO behavior on any suspect drive. -Scott

Re: [zfs-discuss] Failure of Quicktime *.mov files after move to zfs disk

2009-08-21 Thread Scott Laird
Checksum all of the files using something like md5sum and see if they're actually identical. Then test each step of the copy and see which one is corrupting your files. On Fri, Aug 21, 2009 at 1:43 PM, Harry Putnamrea...@newsguy.com wrote: During the course of backup I had occassion to copy a
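The checksum comparison suggested above can be sketched like this; the paths are hypothetical stand-ins for the real source and destination trees, with a throwaway file standing in for a .mov:

```shell
# Hypothetical stand-in paths for the original file and its copy.
src=/tmp/mov-src; dst=/tmp/mov-dst
mkdir -p "$src" "$dst"
printf 'not a real QuickTime file' > "$src/clip.mov"
cp "$src/clip.mov" "$dst/clip.mov"

# Identical digests mean the copy is bit-for-bit intact; a differing
# digest pinpoints the copy step that corrupted the file.
a=$(md5sum "$src/clip.mov" | awk '{print $1}')
b=$(md5sum "$dst/clip.mov" | awk '{print $1}')
[ "$a" = "$b" ] && echo "intact" || echo "corrupted"
```

Running the same comparison after each stage of a multi-hop copy isolates which stage introduces the corruption.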

Re: [zfs-discuss] NFS load balancing / was: ZFS, ESX , and NFS. oh my!

2009-08-12 Thread Scott Meilicke
Yes! That would be icing on the cake.

Re: [zfs-discuss] Live resize/grow of iscsi shared ZVOL

2009-08-12 Thread Scott Meilicke
active, you should be fine. -Scott

Re: [zfs-discuss] zfs fragmentation

2009-08-12 Thread Scott Lawson

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Scott Lawson

Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-07 Thread Scott Meilicke
this will be just fine :) -Scott

Re: [zfs-discuss] zfs fragmentation

2009-08-07 Thread Scott Meilicke
a separate ZIL also be written as intelligently as with a separate ZIL? Thanks, Scott

Re: [zfs-discuss] Can I setting 'zil_disable' to increase ZFS/iscsi performance ?

2009-08-06 Thread Scott Meilicke
You can use a separate SSD ZIL.
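Adding a dedicated SSD log device, as suggested above, is a one-line pool change. A hedged sketch with hypothetical pool and device names:

```shell
# A single SSD as a dedicated log (slog) device:
zpool add tank log c2t0d0

# Or mirrored, so a slog failure cannot lose committed sync writes:
zpool add tank log mirror c2t0d0 c2t1d0

# Confirm the log vdev shows up under its own heading:
zpool status tank
```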

Re: [zfs-discuss] grow zpool by replacing disks

2009-08-03 Thread Scott Lawson
Tobias Exner wrote: Hi list, some months ago I spoke with an zfs expert on a Sun Storage event. He told it's possible to grow a zpool by replacing every single disk with a larger one. After replacing and resilvering all disks of this pool zfs will provide the new size automatically. Now

Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40

2009-08-01 Thread Scott Lawson
Dave Stubbs wrote: I don't mean to be offensive Russel, but if you do ever return to ZFS, please promise me that you will never, ever, EVER run it virtualized on top of NTFS (a.k.a. worst file system ever) in a production environment. Microsoft Windows is a horribly unreliable operating system

Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Scott Lawson
achieves very high throughput. I have seen it pushing the full capacity of the SAS link to the J4500 quite commonly. This is probably the choke point for this system. /Scott

Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-28 Thread Scott Lawson

[zfs-discuss] L2ARC support in Solaris 10 (Update 8?)

2009-07-22 Thread Scott Lawson

Re: [zfs-discuss] Migrating a zfs pool to another server

2009-07-21 Thread Scott Lawson

Re: [zfs-discuss] triple-parity: RAID-Z3

2009-07-20 Thread Scott Meilicke
://en.wikipedia.org/wiki/Zfs#Copy-on-write_transactional_model So I'm not sure what the 'RAID-Z should mind the gap on writes' comment is getting at either. Clarification? -Scott

Re: [zfs-discuss] ZFS pegging the system

2009-07-17 Thread Scott Laird
Have each node record results locally, and then merge pair-wise until a single node is left with the final results? If you can do merges that way while reducing the size of the result set, then that's probably going to be the most scalable way to generate overall results. On Thu, Jul 16, 2009 at

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Scott Lawson
) 'cpio -C 131072 -o /dev/null' 48000256 blocks real 3m25.13s user 0m2.67s sys 0m28.40s Doing second 'cpio -C 131072 -o /dev/null' 48000256 blocks real 8m53.05s user 0m2.69s sys 0m32.83s Feel free to clean up with 'zfs destroy test1/zfscachetest'. Scott Lawson wrote: Bob

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Scott Lawson
Bob Friesenhahn wrote: On Wed, 15 Jul 2009, Scott Lawson wrote: NAME STATE READ WRITE CKSUM test1 ONLINE 0 0 0 mirror ONLINE 0 0

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Scott Lawson
-o /dev/null' 48000256 blocks real 1m59.11s user 0m9.93s sys 1m49.15s Feel free to clean up with 'zfs destroy nbupool/zfscachetest'. Scott Lawson wrote: Bob, Output of my run for you. System is a M3000 with 16 GB RAM and 1 zpool called test1 which is contained on a raid 1 volume

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-12 Thread Scott Lawson
% performance loss for me. I was seeing around 80MB/s sustained on the first run and around 60MB/s sustained on the 2nd. /Scott. Bob Friesenhahn wrote: There has been no forward progress on the ZFS read performance issue for a week now. A 4X reduction in file read performance due to having read

Re: [zfs-discuss] ZFS, power failures, and UPSes

2009-06-30 Thread Scott Lawson
Haudy Kazemi wrote: Hello, I've looked around Google and the zfs-discuss archives but have not been able to find a good answer to this question (and the related questions that follow it): How well does ZFS handle unexpected power failures? (e.g. environmental power failures, power supply

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-30 Thread Scott Meilicke
4.23K 223K 23.2M data01 55.6G 20.4T 13 4.37K 87.1K 23.9M data01 55.6G 20.4T 21 3.33K 136K 18.6M data01 55.6G 20.4T 468 496 2.89M 1.82M data01 55.6G 20.4T 687 0 4.13M 0 -Scott

Re: [zfs-discuss] ZFS, power failures, and UPSes

2009-06-30 Thread Scott Lawson
David Magda wrote: On Jun 30, 2009, at 14:08, Bob Friesenhahn wrote: I have seen UPSs help quite a lot for short glitches lasting seconds, or a minute. Otherwise the outage is usually longer than the UPSs can stay up since the problem required human attention. A standby generator is

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-30 Thread Scott Meilicke
20.4T 32 23 483K 2.97M data01 59.7G 20.4T 37 37 538K 4.70M While writes are being committed to the ZIL all the time, periodic dumping to the pool still occurs, and during those times reads are starved. Maybe this doesn't happen in the 'real world' ? -Scott

Re: [zfs-discuss] zfs on 32 bit?

2009-06-26 Thread Scott Laird
It's actually worse than that--it's not just recent CPUs without VT support. Very few of Intel's current low-price processors, including the Q8xxx quad-core desktop chips, have VT support. On Wed, Jun 24, 2009 at 12:09 PM, rolandno-re...@opensolaris.org wrote: Dennis is correct in that there are

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-26 Thread Scott Meilicke
iSCSI is not (See my earlier zpool iostat data for iSCSI). Isn't this what we expect, because NFS does syncs, while iSCSI does not (assumed)? -Scott

Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread Scott Meilicke
/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations The recommended number of disks per group is between 3 and 9. If you have more disks, use multiple groups. -Scott
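The "multiple groups" advice above means multiple raidz vdevs in one pool; ZFS stripes across them, so each group adds IOPS. A hedged sketch with hypothetical device names, splitting 14 disks into two 7-disk groups instead of one wide stripe:

```shell
# Two 7-disk raidz vdevs in a single pool; ZFS stripes writes across
# both groups, and each group can lose one disk without data loss:
zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
```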

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-25 Thread Scott Meilicke
reads stall to nearly zero as the ZIL is dumping. Not good. This thread is discussing this behavior: http://www.opensolaris.org/jive/thread.jspa?threadID=106453 Coming from a mostly Windows world, I really like the tools that you get on Opensolaris to see this kind of stuff. -Scott

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-25 Thread Scott Meilicke
that my test was flawed, in that my iSCSI numbers were using the ESXi iSCSI SW initiator, while the NFS tests were performed with the VM as the guest, not ESX. I'll give ESX as the NFS client, vmdks on NFS, a go and get back to you. Thanks! Scott

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-24 Thread Scott Meilicke
, from what I have read you will only see benefits if you are using NFS backed storage, but that it can be significant. Remove the ZIL for testing to see the max benefit you could get. Don't do this in production! -Scott

Re: [zfs-discuss] Increase size of ZFS mirror

2009-06-24 Thread Scott Lawson

Re: [zfs-discuss] SAN server

2009-06-23 Thread Scott Meilicke
, to ensure you can recover from failures, consider separate pools for your database files and log files, both for MySQL and Exchange. Good luck! -Scott

Re: [zfs-discuss] ZFS, ESX ,and NFS. oh my!

2009-06-19 Thread Scott Meilicke
So how are folks getting around the NFS speed hit? Using SSD or battery backed RAM ZILs? Regarding limited NFS mounts, underneath a single NFS mount, would it work to:
* Create a new VM
* Remove the VM from inventory
* Create a new ZFS file system underneath the original
* Copy the VM to that
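The per-VM filesystem idea above can be sketched as follows; the pool and dataset names are hypothetical, and it assumes the parent filesystem already has sharenfs set (child filesystems inherit it, but each one is a separate NFS export that ESX must mount as its own datastore):

```shell
# Hypothetical layout: pool data01, NFS-shared parent data01/vms.
# Each child filesystem becomes its own NFS export:
zfs create data01/vms/vm01

# Move the VM's files into the new filesystem, then re-register the
# VM from its new path in the ESX inventory:
cp -rp /data01/vms/old-location/vm01/. /data01/vms/vm01/
```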

Re: [zfs-discuss] Is the PROPERTY compression will increase the ZFS I/O th

2009-06-19 Thread Scott Meilicke
Generally, yes. Test it with your workload and see how it works out for you. -Scott

Re: [zfs-discuss] 7110 questions

2009-06-18 Thread Scott Meilicke
Both iSCSI and NFS are slow? I would expect NFS to be slow, but in my iSCSI testing with OpenSolaris 2008.11, performance was reasonable, about 2x NFS. Setup: Dell 2950 with a SAS HBA and SATA 3x5 raidz (15 disks, no separate ZIL), iSCSI using vmware ESXi 3.5 software initiator. Scott

Re: [zfs-discuss] Asymmetric mirroring

2009-06-10 Thread Scott Meilicke
The SATA drive will be your bottleneck, and you will lose any speed advantages of the SAS drives, especially using 3 vdevs on a single SATA disk. I am with Richard, figure out what performance you need, and build accordingly.

Re: [zfs-discuss] ZFS Path = ???

2009-05-19 Thread Scott Lawson
deploy -a zfs -x zfs /usr/share/webconsole/webapps/zfs Also if you wish to make the webconsole accessible from more than just the localhost, use : # svccfg -s svc:/system/webconsole setprop options/tcp_listen = true # smcwebserver restart Hope this helps, Scott. cindy.swearin...@sun.com

Re: [zfs-discuss] With RAID-Z2 under load, machine stops responding to local or remote login

2009-05-15 Thread Scott Duckworth
running Solaris 10, not OpenSolaris, so it could also be the case that there is a regression somewhere in there. Scott Duckworth, Systems Programmer II Clemson University School of Computing On Tue, May 12, 2009 at 10:10 PM, Rince rincebr...@gmail.com wrote: Hi world, I have a 10-disk RAID-Z2

Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-06 Thread Scott Lawson
Roger Solano wrote: Hello, Does it make any sense to use a bunch of 15K SAS drives as L2ARC cache for several TBs of SATA disks? For example: A STK2540 storage array with this configuration: * Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs. * Tray 2: Twelve (12) 1 TB @ 7200 SATA

Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-06 Thread Scott Lawson
Bob Friesenhahn wrote: On Thu, 7 May 2009, Scott Lawson wrote: A STK2540 storage array with this configuration: * Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs. * Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs. Just thought I would point out that these are hardware backed RAID arrays. You

Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD ? Or Singe MetaLUN ?)

2009-04-30 Thread Scott Lawson
Wilkinson, Alex wrote: 0n Thu, Apr 30, 2009 at 11:11:55AM -0500, Bob Friesenhahn wrote: On Thu, 30 Apr 2009, Wilkinson, Alex wrote: I currently have a single 17TB MetaLUN that i am about to present to an OpenSolaris initiator and it will obviously be ZFS. However, I am

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Scott Lawson
greater cover in the event of a drive failing in a large vdev stripe. /Scott Leon Meßner wrote: Hi, i'm new to the list so please bear with me. This isn't an OpenSolaris related problem but i hope it's still the right list to post to. I'm on the way to move a backup server to using zfs based

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Scott Lawson
Michael Shadle wrote: On Mon, Apr 27, 2009 at 5:32 PM, Scott Lawson scott.law...@manukau.ac.nz wrote: One thing you haven't mentioned is the drive type and size that you are planning to use as this greatly influences what people here would recommend. RAIDZ2 is built for big, slow SATA

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Scott Lawson
Richard Elling wrote: Some history below... Scott Lawson wrote: Michael Shadle wrote: On Mon, Apr 27, 2009 at 4:51 PM, Scott Lawson scott.law...@manukau.ac.nz wrote: If possible though you would be best to let the 3ware controller expose the 16 disks as a JBOD to ZFS and create

Re: [zfs-discuss] Sun's flash offering(s)

2009-04-19 Thread Scott Laird
to you. Compare the URL above with this one: http://www.intel.com/design/flash/nand/extreme/index.htm Scott

Re: [zfs-discuss] Can this be done?

2009-04-07 Thread Scott Lawson

Re: [zfs-discuss] Can this be done?

2009-03-31 Thread Scott Lawson
Michael Shadle wrote: On Mon, Mar 30, 2009 at 4:13 PM, Michael Shadle mike...@gmail.com wrote: Sounds like a reasonable idea, no? Follow up question: can I add a single disk to the existing raidz2 later on (if somehow I found more space in my chassis) so instead of a 7 disk raidz2

Re: [zfs-discuss] Can this be done?

2009-03-28 Thread Scott Lawson

Re: [zfs-discuss] ZFS on a SAN

2009-03-12 Thread Scott Lawson
system. Works well as long as both systems are of the same OS revision or greater on the target system. /Scott. Grant Lowe wrote: Hi Erik, A couple of questions about what you said in your email. In synopsis 2, if hostA has gone belly up and is no longer accessible, then a step that is implied

Re: [zfs-discuss] Comstar production-ready?

2009-03-04 Thread Scott Lawson
to change their shell paths to get the userland that they want ;) /flame On Wed, Mar 4, 2009 at 2:47 AM, Scott Lawson scott.law...@manukau.ac.nz wrote: Stephen Nelson-Smith wrote: Hi, I recommended a ZFS-based archive solution to a client needing to have a network-based archive of 15TB

Re: [zfs-discuss] Comstar production-ready?

2009-03-04 Thread Scott Lawson

Re: [zfs-discuss] Comstar production-ready?

2009-03-03 Thread Scott Lawson

Re: [zfs-discuss] ZFS on SAN?

2009-02-18 Thread Scott Lawson
Hi Andras, No problems writing direct. Answers inline below. (If there are any typo's it cause it's late and I have had a very long day ;)) andras spitzer wrote: Scott, Sorry for writing you directly, but most likely you have missed my questions regarding your SW design, whenever you have

Re: [zfs-discuss] qmail on zfs

2009-02-18 Thread Scott Lawson
Robert Milkowski wrote: Hello Asif, Wednesday, February 18, 2009, 1:28:09 AM, you wrote: AI On Tue, Feb 17, 2009 at 5:52 PM, Robert Milkowski mi...@task.gda.pl wrote: Hello Asif, Tuesday, February 17, 2009, 7:43:41 PM, you wrote: AI Hi All AI Does anyone have any experience on running

Re: [zfs-discuss] ZFS on SAN? Availability edition.

2009-02-18 Thread Scott Lawson
at the twinstrata website. (as should others). Sorry to all if we are diverging too much from zfs-discuss. /Scott This stuff does happen. When you have been around for a while you see it. Robin Harris wrote: Calculating the availability and economic trade-offs of configurations is hard. Rule of thumb

Re: [zfs-discuss] ZFS on SAN? Availability edition.

2009-02-18 Thread Scott Lawson
Miles Nordin wrote: sl == Scott Lawson scott.law...@manukau.ac.nz writes: sl Electricity *is* the lifeblood of available storage. I never meant to suggest computing machinery could run without electricity. My suggestion is, if your focus is _reliability_ rather than

Re: [zfs-discuss] ZFS on SAN?

2009-02-17 Thread Scott Lawson
? In short I hope people don't hold back from adoption of ZFS because they are unsure about it. Judge for yourself as I have done and dip your toes in at whatever rate you are happy to do so. Thats what I did. /Scott. I also use it at home too with and old D1000 attached to a v120 with 8 x 320 GB scsi's

Re: [zfs-discuss] ZFS on SAN?

2009-02-17 Thread Scott Lawson
Toby Thain wrote: On 17-Feb-09, at 3:01 PM, Scott Lawson wrote: Hi All, ... I have seen other people discussing power availability on other threads recently. If you want it, you can have it. You just need the business case for it. I don't buy the comments on UPS unreliability. Hi, I

Re: [zfs-discuss] ZFS on SAN?

2009-02-17 Thread Scott Lawson
David Magda wrote: On Feb 17, 2009, at 21:35, Scott Lawson wrote: Everything we have has dual power supplies, feed from dual power rails, feed from separate switchboards, through separate very large UPS's, backed by generators, feed by two substations and then cloned to another data

Re: [zfs-discuss] Two pools on one slice

2009-02-09 Thread Scott Watanabe
Have you tried the procedure in the ZFS TS guide? http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Panic.2FReboot.2FPool_Import_Problems

Re: [zfs-discuss] how to fix zpool with corrupted disk?

2009-01-26 Thread Scott Watanabe
Looks like your scrub was not finished yet. Did you check it later? You should not have had to replace the disk. You might have to reinstall the bootblock.

Re: [zfs-discuss] Bug report: disk replacement confusion

2009-01-23 Thread Scott L. Burson
controller port, so that the new device will have the same device name as the failed one. -- Scott

[zfs-discuss] Bug report: disk replacement confusion

2009-01-22 Thread Scott L. Burson
it needs to back-map that information to the device names when mounting. Or something :) -- Scott

Re: [zfs-discuss] Bug report: disk replacement confusion

2009-01-22 Thread Scott L. Burson
Well, the second resilver finished, and everything looks okay now. Doing one more scrub to be sure... -- Scott

Re: [zfs-discuss] ZFS vs HardWare raid - data integrity?

2009-01-08 Thread Scott Laird
RAID 2 is something weird that no one uses, and really only exists on paper as part of Berkeley's original RAID paper, IIRC. raidz2 is more or less RAID 6, just like raidz is more or less RAID 5. With raidz2, you have to lose 3 drives per vdev before data loss occurs. Scott On Thu, Jan 8

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-08 Thread Scott Laird
. Going forward, SSDs will almost certainly be more reliable, as long as you have something SMART-ish watching the number of worn-out SSD cells and recommending preemptive replacement of worn-out drives every few years. That should be a slow, predictable process, unlike most HD failures. Scott

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-08 Thread Scott Laird
from backup. I haven't dealt with PC laptops in years, so I can't really compare models. Scott On Thu, Jan 8, 2009 at 2:40 PM, JZ j...@excelsioritsolutions.com wrote: Thanks Scott, I was really itchy to order one, now I just want to save that open $ for Remy+++. Then, next question, can I

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-07 Thread Scott Laird
twice :-(. It's hard to see how better hardware would have helped with that, though. Scott

Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2009-01-07 Thread Scott Laird
On Wed, Jan 7, 2009 at 6:43 PM, JZ j...@excelsioritsolutions.com wrote: ok, Scott, that sounded sincere. I am not going to do the pic thing on you. But do I have to spell this out to you -- somethings are invented not for home use? Yeah, I'm sincere, but I've ordered more or less the same

[zfs-discuss] Unable to add cache device

2009-01-02 Thread Scott Laird
by the resilvering or is something wrong? Scott

Re: [zfs-discuss] Unable to add cache device

2009-01-02 Thread Scott Laird
appear to do the trick. Scott

Re: [zfs-discuss] Unable to add cache device

2009-01-02 Thread Scott Laird
On Fri, Jan 2, 2009 at 8:54 PM, Richard Elling richard.ell...@sun.com wrote: Scott Laird wrote: On Fri, Jan 2, 2009 at 4:52 PM, Akhilesh Mritunjai mritun+opensola...@gmail.com wrote: As for source, here you go :) http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/zpool

Re: [zfs-discuss] separate home partition?

2008-12-29 Thread scott
root partition, i'll think it over, but given my luck with updates, i don't imagine doing any. thank you once again for all of your valuable input. scott

Re: [zfs-discuss] separate home partition?

2008-12-28 Thread scott
thanks for the input. since i have no interest in multibooting (virtualbox will suit my needs), i created a 10gb partition on my 500gb drive for opensolaris and reserved the rest for files (130gb worth). after installing the os and fdisking the rest of the space to solaris2, i created a zpool

[zfs-discuss] separate home partition?

2008-12-26 Thread scott stanley
(i use the term loosely because i know that zfs likes whole volumes better) when installing ubuntu, i got in the habit of using a separate partition for my home directory so that my data and gnome settings would all remain intact when i reinstalled or upgraded. i'm running osol 2008.11 on an

Re: [zfs-discuss] separate home partition?

2008-12-26 Thread scott
do you mean a pool on a SEPARATE partition?
