Re: [zfs-discuss] zfs backup and restore

2007-05-24 Thread Louwtjie Burger
A good place to start is: http://www.opensolaris.org/os/community/zfs/ Have a look at: http://www.opensolaris.org/os/community/zfs/docs/ as well as http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide# Create some files, which you can use as disks within zfs and demo to you
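A minimal sketch of the file-backed demo pool being suggested (file and pool names are arbitrary examples):

   # create four 128 MB files to stand in for disks
   mkfile 128m /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4

   # build a raidz pool on the files and create a filesystem in it
   zpool create demopool raidz /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4
   zfs create demopool/test
   zpool status demopool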

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-24 Thread Torrey McMahon
I did say depends on the guarantees, right? :-) My point is that all hw raid systems are not created equally. Nathan Kroenert wrote: Which has little benefit if it's the HBA or the array internals that change the meaning of the message... That's the whole point of ZFS's checksumming - It's end t

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-24 Thread Nathan Kroenert
Which has little benefit if it's the HBA or the array internals that change the meaning of the message... That's the whole point of ZFS's checksumming - It's end to end... Nathan. Torrey McMahon wrote: Toby Thain wrote: On 22-May-07, at 11:01 AM, Louwtjie Burger wrote: On 5/22/07, Pål Baltzers

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-24 Thread Torrey McMahon
Toby Thain wrote: On 22-May-07, at 11:01 AM, Louwtjie Burger wrote: On 5/22/07, Pål Baltzersen <[EMAIL PROTECTED]> wrote: What if your HW-RAID-controller dies? in say 2 years or more.. What will read your disks as a configured RAID? Do you know how to (re)configure the controller or restore

Re: [zfs-discuss] No zfs_nocacheflush in Solaris 10?

2007-05-24 Thread Torrey McMahon
Albert Chin wrote: On Thu, May 24, 2007 at 11:55:58AM -0700, Grant Kelly wrote: I'm getting really poor write performance with ZFS on a RAID5 volume (5 disks) from a storagetek 6140 array. I've searched the web and these forums and it seems that this zfs_nocacheflush option is the solution,

Re: [zfs-discuss] Re: Re: Need guidance on RAID 5, ZFS, and RAIDZ on home file server

2007-05-24 Thread Toby Thain
On 24-May-07, at 6:51 PM, [EMAIL PROTECTED] wrote: [EMAIL PROTECTED] wrote: You're right of course and lots of people use them. My point is that Solaris has been 64 bits longer than most others. ... IRIX was much earlier than Solaris; Solaris was pretty late in the 64 bit game wi

Re: [zfs-discuss] Preparing to compare Solaris/ZFS and FreeBSD/ZFS performance.

2007-05-24 Thread eric kustarz
Don't take these numbers too seriously - those were only first tries to see where my port is and I was using OpenSolaris for comparison, which has debugging turned on. Yeah, ZFS does a lot of extra work with debugging on (such as verifying checksums in the ARC), so always do serious performa

Re: [zfs-discuss] Boot drive restore procedure

2007-05-24 Thread Tomas Ögren
On 24 May, 2007 - Russell Baird sent me these 0,6K bytes: > I have my ZFS mirror on 2 external drives and no ZFS on my boot drive. > If I crash my boot drive and I don't have a complete backup of the > boot drive, can I restore just the /etc/zfs/zpool.cache file? > > I tried it and it worked.

[zfs-discuss] Boot drive restore procedure

2007-05-24 Thread Russell Baird
I have my ZFS mirror on 2 external drives and no ZFS on my boot drive. If I crash my boot drive and I don't have a complete backup of the boot drive, can I restore just the /etc/zfs/zpool.cache file? I tried it and it worked. Once I rebooted with the new drive, the ZFS pool reappeared. I ju
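A rough sketch of the two recovery paths discussed in this thread, assuming the mirror pool is named tank (the name is an example):

   # option 1: restore a saved copy of the cache file before rebooting
   cp /backup/etc/zfs/zpool.cache /etc/zfs/zpool.cache

   # option 2: skip the cache file and let ZFS rediscover the pool
   zpool import          # scan attached devices for importable pools
   zpool import tank     # import the mirror by name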

[zfs-discuss] Boot drive restore procedure

2007-05-24 Thread Russell Baird
I have my ZFS mirror on 2 external drives and no ZFS on my boot drive. If I lose my boot drive and I don't have a complete backup, can I restore just the /etc/zfs This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@op

[zfs-discuss] zfs backup and restore

2007-05-24 Thread Roshan Perera
Hi, I believe Solaris 10 version 3 supports zfs backup and restore. How can I upgrade previous versions of Solaris to run zfs backup/restore, and where can I download the relevant versions? Also, I have a customer wanting to know (now I am interested too) detailed information on how the zfs sn
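The snapshot-based send/receive mechanism the poster is asking about, as a rough sketch (pool, filesystem, and host names are made up):

   # snapshot the filesystem, then stream it to a file or a remote pool
   zfs snapshot tank/data@backup1
   zfs send tank/data@backup1 > /net/backuphost/dumps/data.backup1
   zfs send tank/data@backup1 | ssh backuphost zfs receive backuppool/data

   # restore by receiving the stream into a new filesystem
   zfs receive tank/data_restored < /net/backuphost/dumps/data.backup1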

Re: [zfs-discuss] how do I revert back from ZFS partitioned disk to original partitions

2007-05-24 Thread Cindy . Swearingen
Arif, You need to boot from {net | DVD} in single-user mode, like this: boot net -s or boot cdrom -s Then, when you get to a shell prompt, relabel the disk like this: # format -e format> label [0] SMI Label [1] EFI Label Specify Label type[0]: 0 Then, you should be able to repartition howev
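Laid out as a single session, the relabel procedure described above looks roughly like this (prompts abbreviated, disk selection omitted):

   ok boot cdrom -s              # or: boot net -s
   # format -e
   format> label
   [0] SMI Label
   [1] EFI Label
   Specify Label type[0]: 0
   format> partition             # then repartition as needed and label again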

Re: [zfs-discuss] Re: Re: Need guidance on RAID 5, ZFS, and RAIDZ on home file server

2007-05-24 Thread Casper . Dik
>[EMAIL PROTECTED] wrote: > >> >> >You're right of course and lots of people use them. My point is that >> >Solaris has been 64 bits longer than most others. I think >> >AIX got 64 bits after Solaris and Linux (via Alpha) did. >> >Irix was 64 bit near the same time as Solaris but the

Re: [zfs-discuss] Re: Re: Need guidance on RAID 5, ZFS, and RAIDZ on home file server

2007-05-24 Thread Joerg Schilling
[EMAIL PROTECTED] wrote: > > >You're right of course and lots of people use them. My point is that > >Solaris has been 64 bits longer than most others. I think > >AIX got 64 bits after Solaris and Linux (via Alpha) did. > >Irix was 64 bit near the same time as Solaris but the end of

[zfs-discuss] how do I revert back from ZFS partitioned disk to original partitions

2007-05-24 Thread Arif Khan
I accidentally created a zpool on a boot disk, it panicked the system and now I can jumpstart and install the OS on it. This is what it looks like. partition> p Current partition table (original): Total disk sectors available: 17786879 + 16384 (reserved sectors) Part Tag Flag First

Re: [zfs-discuss] Re: Re: Need guidance on RAID 5, ZFS, and RAIDZ on home file server

2007-05-24 Thread Casper . Dik
>You're right of course and lots of people use them. My point is that >Solaris has been 64 bits longer than most others. I think >AIX got 64 bits after Solaris and Linux (via Alpha) did. >Irix was 64 bit near the same time as Solaris but the end of Irix >is visible. Did they por

[zfs-discuss] Re: Re: Need guidance on RAID 5, ZFS, and RAIDZ on home file server

2007-05-24 Thread Tom Buskey
> On Wed, May 23, 2007 at 08:03:41AM -0700, Tom Buskey > wrote: > > > > Solaris is 64 bits with support for 32 bits. I've > been running 64 bit Solaris since Solaris 7 as I > imagine most Solaris users have. I don't think any > other major 64 bit OS has been in general use as long > (VMS?). > >

Re: [zfs-discuss] No zfs_nocacheflush in Solaris 10?

2007-05-24 Thread Albert Chin
On Thu, May 24, 2007 at 11:55:58AM -0700, Grant Kelly wrote: > I'm running SunOS Release 5.10 Version Generic_118855-36 64-bit > and in [b]/etc/system[/b] I put: > > [b]set zfs:zfs_nocacheflush = 1[/b] > > And after rebooting, I get the message: > > [b]sorry, variable 'zfs_nocacheflush' is not d

Re: [zfs-discuss] Preparing to compare Solaris/ZFS and FreeBSD/ZFS performance.

2007-05-24 Thread Dick Davies
On 24/05/07, Brian Hechinger <[EMAIL PROTECTED]> wrote: I don't know about FreeBSD PORTS, but NetBSD's ports system works very well on Solaris. The only thing I didn't like about it is that it considers gcc a dependency of certain things, so even though I have Studio 11 installed, it would insist on

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-24 Thread Richard Elling
Anton B. Rang wrote: Richard wrote: Any system which provides a single view of data (eg. a persistent storage device) must have at least one single point of failure. Why? Consider this simple case: A two-drive mirrored array. Use two dual-ported drives, two controllers, two power supplies, a

[zfs-discuss] No zfs_nocacheflush in Solaris 10?

2007-05-24 Thread Grant Kelly
Hi, I'm running SunOS Release 5.10 Version Generic_118855-36 64-bit and in [b]/etc/system[/b] I put: [b]set zfs:zfs_nocacheflush = 1[/b] And after rebooting, I get the message: [b]sorry, variable 'zfs_nocacheflush' is not defined in the 'zfs' module[/b] So is this variable not available in the
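For reference, the tuning being attempted, plus one way to check whether the running kernel actually defines the variable (the error above suggests this kernel's zfs module simply doesn't define it yet):

   # /etc/system
   set zfs:zfs_nocacheflush = 1

   # after reboot, see whether the symbol exists in the kernel
   echo "zfs_nocacheflush/D" | mdb -k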

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-24 Thread Frank Fitch
Anton B. Rang wrote: Richard wrote: Any system which provides a single view of data (eg. a persistent storage device) must have at least one single point of failure. Why? Consider this simple case: A two-drive mirrored array. Use two dual-ported drives, two controllers, two power supplies, a

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-24 Thread Darren J Moffat
Anton B. Rang wrote: Richard wrote: Any system which provides a single view of data (eg. a persistent storage device) must have at least one single point of failure. Why? Consider this simple case: A two-drive mirrored array. Use two dual-ported drives, two controllers, two power supplies, a

[zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-24 Thread Anton B. Rang
Richard wrote: > Any system which provides a single view of data (eg. a persistent storage > device) must have at least one single point of failure. Why? Consider this simple case: A two-drive mirrored array. Use two dual-ported drives, two controllers, two power supplies, arranged roughly as fo

Re: [zfs-discuss] RFE: ISCSI alias when shareiscsi=on

2007-05-24 Thread Darren J Moffat
Adam Leventhal wrote: Right now -- as I'm sure you have noticed -- we use the dataset name for the alias. To let users explicitly set the alias we could add a new property as you suggest or allow other options for the existing shareiscsi property: shareiscsi='alias=potato' This would sort of

Re: [zfs-discuss] RFE: ISCSI alias when shareiscsi=on

2007-05-24 Thread Adam Leventhal
Right now -- as I'm sure you have noticed -- we use the dataset name for the alias. To let users explicitly set the alias we could add a new property as you suggest or allow other options for the existing shareiscsi property: shareiscsi='alias=potato' This would sort of match what we do for th
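For context, a sketch of the current behaviour versus the syntax floated here; the alias= form is only a proposal in this thread, not an existing option, and tank/vol1 is a made-up dataset:

   # today: share a zvol over iSCSI; the target alias defaults to the dataset name
   zfs create -V 10g tank/vol1
   zfs set shareiscsi=on tank/vol1

   # proposed in this thread (hypothetical, not implemented):
   # zfs set shareiscsi='alias=potato' tank/vol1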

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-24 Thread Dave Fisk
> Please tell us how many storage arrays are required to meet a theoretical I/O bandwidth of 244 GBytes/s? Just considering disks, you need approximately 6,663 disks, all streaming 50 MB/sec, with RAID-5 3+1 (for example). That is assuming sustained large block sequential I/O. If you have 8 KB Ran
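A rough check of that figure, reading 244 GBytes/s as GiB and applying the 4/3 bandwidth overhead of RAID-5 3+1:

   244 GiB/s            ~ 249,856 MiB/s
   249,856 / 50 MiB/s   ~ 4,997 disks of data bandwidth
   4,997 x 4/3          ~ 6,663 disks in total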

Re: [zfs-discuss] Limiting ZFS ARC doesn't work on Sol10-Update 3

2007-05-24 Thread Roch Bourbonnais
This looks like another instance of 6429205 "each zpool needs to monitor its throughput and throttle heavy writers", or at least it is a contributing factor. Note that your /etc/system is misspelled (maybe just in the e-mail). Didn't you get a console message? -r On 24 May 07, at 09:50, Amer

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-24 Thread Richard Elling
Anton B. Rang wrote: Thumper seems to be designed as a file server (but curiously, not for high availability). hmmm... Often people think that because a system is not clustered, then it is not designed to be highly available. Any system which provides a single view of data (eg. a persistent

Re: [zfs-discuss] ditto blocks

2007-05-24 Thread Toby Thain
On 24-May-07, at 6:26 AM, Henk Langeveld wrote: Richard Elling wrote: It all depends on the configuration. For a single disk system, copies should generally be faster than mirroring. For multiple disks, the performance should be similar as copies are spread out over different disks. Here

Re: [zfs-discuss] Preparing to compare Solaris/ZFS and FreeBSD/ZFS performance.

2007-05-24 Thread Claus Guttesen
> iozone. So I installed solaris 10 on this box and wanted to keep it > that way. But solaris lacks FreeBSD ports ;-) so when current upgraded Not entirely. :) I don't know about FreeBSD PORTS, but NetBSD's ports system works very well on Solaris. The only thing I didn't like about it is that it co

[zfs-discuss] shareiscsi is cool, but what about sharefc or sharescsi?

2007-05-24 Thread Brian Hechinger
I'd love to be able to serve zvols out as SCSI or FC targets. Are there any plans to add this to ZFS? That would be amazingly awesome. -brian -- "Perl can be fast and elegant as much as J2EE can be fast and elegant. In the hands of a skilled artisan, it can and does happen; it's just that most

Re: [zfs-discuss] Preparing to compare Solaris/ZFS and FreeBSD/ZFS performance.

2007-05-24 Thread Brian Hechinger
On Thu, May 24, 2007 at 01:16:32PM +0200, Claus Guttesen wrote: > > iozone. So I installed solaris 10 on this box and wanted to keep it > that way. But solaris lacks FreeBSD ports ;-) so when current upgraded Not entirely. :) I don't know about FreeBSD PORTS, but NetBSD's ports system works ver

[zfs-discuss] RFE: ISCSI alias when shareiscsi=on

2007-05-24 Thread cedric briner
Starting from this thread: http://www.opensolaris.org/jive/thread.jspa?messageID=118786 I would love to have the possibility to set an iSCSI alias when doing a shareiscsi=on on ZFS. This would greatly facilitate identifying where an IQN is hosted. The iSCSI alias is defined in RFC 3721 e.g. ht

Re: [zfs-discuss] Preparing to compare Solaris/ZFS and FreeBSD/ZFS performance.

2007-05-24 Thread Pawel Jakub Dawidek
On Thu, May 24, 2007 at 01:16:32PM +0200, Claus Guttesen wrote: > >I'm all set for doing performance comparison between Solaris/ZFS and > >FreeBSD/ZFS. I spent the last few weeks on FreeBSD/ZFS optimizations and I > >think I'm ready. The machine is 1xQuad-core DELL PowerEdge 1950, 2GB > >RAM, 15 x 74GB

Re: [zfs-discuss] Preparing to compare Solaris/ZFS and FreeBSD/ZFS performance.

2007-05-24 Thread Claus Guttesen
I'm all set for doing performance comparison between Solaris/ZFS and FreeBSD/ZFS. I spent the last few weeks on FreeBSD/ZFS optimizations and I think I'm ready. The machine is 1xQuad-core DELL PowerEdge 1950, 2GB RAM, 15 x 74GB-FC-10K accessed via 2x2Gbit FC links. Unfortunately the links to disks are

Re: [zfs-discuss] Preparing to compare Solaris/ZFS and FreeBSD/ZFS performance.

2007-05-24 Thread Pawel Jakub Dawidek
On Thu, May 24, 2007 at 11:20:44AM +0100, Darren J Moffat wrote: > Pawel Jakub Dawidek wrote: > >Hi. > >I'm all set for doing performance comparison between Solaris/ZFS and > >FreeBSD/ZFS. I spent the last few weeks on FreeBSD/ZFS optimizations and I > >think I'm ready. The machine is 1xQuad-core DELL

Re: [zfs-discuss] Preparing to compare Solaris/ZFS and FreeBSD/ZFS performance.

2007-05-24 Thread James Blackburn
Or if you do want to use bfu because you really want to match your source code revisions up to a given day, then you will need to build the ON consolidation yourself and you can then install the non-debug BFU archives (note you will need to download the non-debug closed bins to do that). The README

Re: [zfs-discuss] Preparing to compare Solaris/ZFS and FreeBSD/ZFS performance.

2007-05-24 Thread Darren J Moffat
Pawel Jakub Dawidek wrote: Hi. I'm all set for doing performance comparison between Solaris/ZFS and FreeBSD/ZFS. I spent the last few weeks on FreeBSD/ZFS optimizations and I think I'm ready. The machine is 1xQuad-core DELL PowerEdge 1950, 2GB RAM, 15 x 74GB-FC-10K accessed via 2x2Gbit FC links. Unf

Re: [zfs-discuss] ditto blocks

2007-05-24 Thread Henk Langeveld
Richard Elling wrote: It all depends on the configuration. For a single disk system, copies should generally be faster than mirroring. For multiple disks, the performance should be similar as copies are spread out over different disks. Here's a crazy idea: could we use zfs on dvd for s/w dist
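The per-dataset ditto-block setting under discussion, as a minimal example (the dataset name is invented):

   # keep two copies of each data block, even on a single-disk pool
   zfs set copies=2 tank/important
   zfs get copies tank/important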

[zfs-discuss] Limiting ZFS ARC doesn't work on Sol10-Update 3

2007-05-24 Thread Amer Ather
IHAC complaining about database startup failure after large files are copied into a ZFS filesystem. If he waits for some time, then it works. It seems that ZFS is not freeing buffers from its ARC cache fast enough. Lockstat shows long block events for lock arc_reclaim_thr_lock: Adaptive mutex hold: 5
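For reference, the tunable normally used to cap the ARC, here set to 1 GB as an arbitrary example (whether Solaris 10 Update 3 honours it is exactly what this thread is about):

   # /etc/system -- cap the ARC at 1 GB (0x40000000 bytes)
   set zfs:zfs_arc_max = 0x40000000

   # after reboot, watch the ARC size if the arcstats kstat is available
   kstat -n arcstats | grep size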