Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-01 Thread Simon Breden
If it's of interest, I've written up some articles on my experiences of 
building a ZFS NAS box which you can read here:
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/

I used CIFS to share the filesystems, but it will be a simple matter to use NFS 
instead: issue the command 'zfs set sharenfs=on pool/filesystem' instead of 
'zfs set sharesmb=on pool/filesystem'.
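For example, a minimal sketch (the pool/filesystem name 'tank/data' below is just a placeholder):

  # share a filesystem over NFS instead of CIFS
  zfs set sharenfs=on tank/data

  # or, for CIFS/SMB
  zfs set sharesmb=on tank/data

  # check which sharing properties are set
  zfs get sharenfs,sharesmb tank/data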

Hope it helps.
Simon

Originally posted to answer someone's request for info in storage:discuss
 
 


Re: [zfs-discuss] Per filesystem scrub

2008-04-01 Thread Darren J Moffat
kristof wrote:
> I would be very happy having a filesystem-based zfs scrub.
>
> We have an 18 TB zpool, and it takes more than 2 days to do the scrub.
>
> Since we cannot take snapshots during the scrub, this is unacceptable.

We have recently discovered the same issue on one of our internal build 
machines.  We have a daily bringover of the Teamware onnv-gate that is 
snapshotted when it completes, and as such we can never run a full scrub. 
Given that some of our storage is reaching (or past) EOSL, I really want to 
be able to scrub the important datasets (i.e. all those other than the 
clones of onnv).

-- 
Darren J Moffat


[zfs-discuss] ZFS problem with oracle

2008-04-01 Thread Wiwat Kiatdechawit
I have implemented ZFS with Oracle, but it is much slower than UFS. Do you
have any solution?

 

Can I fix this problem with ZFS direct I/O? If so, how do I set it?

 

Wiwat

 



Re: [zfs-discuss] Per filesystem scrub

2008-04-01 Thread Adam Leventhal
On Mar 31, 2008, at 10:41 AM, kristof wrote:
> I would be very happy having a filesystem-based zfs scrub.
>
> We have an 18 TB zpool, and it takes more than 2 days to do the scrub.
>
> Since we cannot take snapshots during the scrub, this is unacceptable.

While per-dataset scrubbing would certainly be a coarse-grained solution to
your problem, work is underway to address the problematic interaction between
scrubs and snapshots.

Adam

--
Adam Leventhal, Fishworks    http://blogs.sun.com/ahl



Re: [zfs-discuss] ZFS problem with oracle

2008-04-01 Thread Ed Saipetch

Wiwat,

You should make sure that you have read the Best Practices Guide and the 
Evil Tuning Guide for helpful information on optimizing ZFS for Oracle.  
There are some things you can do to tweak ZFS for better performance, 
such as using a separate filesystem for the logs and moving the ZFS intent 
log (ZIL) off the main pool.


They can be found here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
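
As a rough sketch of the sort of tuning those guides describe (the pool name, 
device names, and 8K recordsize below are assumptions -- match the recordsize 
to your Oracle db_block_size):

  # pool with a separate, mirrored intent-log (slog) device -- illustrative only
  zpool create orapool mirror c1t0d0 c1t1d0 log mirror c1t2d0 c1t3d0

  # datafile filesystem with recordsize matched to the Oracle block size (commonly 8K)
  zfs create -o recordsize=8k orapool/oradata

  # separate filesystem for the logs, left at the default recordsize
  zfs create orapool/oralog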

Also, what kind of disk subsystem do you have (number of disks, is it an 
array, etc.), and how are your ZFS pools configured (RAID type, separate 
ZIL, etc.)?


Hope this gives you a start.

-Ed

Wiwat Kiatdechawit wrote:
> I have implemented ZFS with Oracle, but it is much slower than UFS. Do you
> have any solution?
>
> Can I fix this problem with ZFS direct I/O? If so, how do I set it?
>
> Wiwat



Re: [zfs-discuss] Per filesystem scrub

2008-04-01 Thread Wade . Stuart

[EMAIL PROTECTED] wrote on 04/01/2008 04:25:39 AM:

> kristof wrote:
>> I would be very happy having a filesystem-based zfs scrub.
>>
>> We have an 18 TB zpool, and it takes more than 2 days to do the scrub.
>>
>> Since we cannot take snapshots during the scrub, this is unacceptable.
>
> We have recently discovered the same issue on one of our internal build
> machines.  We have a daily bringover of the Teamware onnv-gate that is
> snapshotted when it completes, and as such we can never run a full scrub.
> Given that some of our storage is reaching (or past) EOSL, I really want to
> be able to scrub the important datasets (i.e. all those other than the
> clones of onnv).
>
> --
> Darren J Moffat

Aye, or better yet -- give the fix for the scrub/resilver/snapshot reset issue
very high priority. As it stands, snapshots are impossible when you need to
resilver and scrub (even on supposedly Sun-supported Thumper configs).


-Wade Stuart



Re: [zfs-discuss] Per filesystem scrub

2008-04-01 Thread Webmail

>> We have recently discovered the same issue on one of our internal build
>> machines.  We have a daily bringover of the Teamware onnv-gate that is
>> snapshotted when it completes, and as such we can never run a full scrub.
>> Given that some of our storage is reaching (or past) EOSL, I really want to
>> be able to scrub the important datasets (i.e. all those other than the
>> clones of onnv).
>
> Aye, or better yet -- give the fix for the scrub/resilver/snapshot reset issue
> very high priority. As it stands, snapshots are impossible when you need to
> resilver and scrub (even on supposedly Sun-supported Thumper configs).

Since the scrub walks down the metadata tree, and the filesystem definitions
are near the top of that tree, it shouldn't be too hard to make the scrub
start from that point instead of from the uberblock, should it?

-mg



Re: [zfs-discuss] Problem importing pool from BSD 7.0 into Nexenta

2008-04-01 Thread Michael Armbrust
On Mon, Mar 31, 2008 at 8:35 AM, Bob Friesenhahn [EMAIL PROTECTED] wrote:
> On Mon, 31 Mar 2008, Tim wrote:
>
>> Perhaps someone else can correct me if I'm wrong, but if you're using the
>> whole disk, ZFS shouldn't be displaying a slice when listing your disks,
>> should it?  I've *NEVER* seen it do that on any of mine except when using
>> partials/slices.
>>
>> I would expect:
>> c1d1s8
>>
>> To be:
>> c1d1
>
> Yes, this seems suspicious.  It is also suspicious that some devices
> use 'p' (partition?) while others use 's' (slice?).


I agree that it's really weird that it's trying to look at partitions and
slices when BSD has no problem recognizing the whole disks.  Is there any
way to override where 'zpool import' is looking, or am I going to have to
recreate the pool from scratch?


[zfs-discuss] Unable to run scrub on degraded zpool

2008-04-01 Thread Robin Bowes
Hi,

I've got a 10-disk raidz2 zpool with a dead drive (it's actually been 
physically removed from the server pending replacement).

This is how it looks:

# zpool status space
  pool: space
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: resilver completed after 0h0m with 0 errors on Tue Apr  1 15:08:29 2008
config:

        NAME                      STATE     READ WRITE CKSUM
        space                     DEGRADED     0     0     0
          raidz2                  DEGRADED     0     0     0
            c2t2d0                ONLINE       0     0     0
            c2t3d0                ONLINE       0     0     0
            c0t0d0                ONLINE       0     0     0
            c0t1d0                ONLINE       0     0     0
            c0t2d0                ONLINE       0     0     0
            17557252296421049869  FAULTED      0     0     0  was /dev/dsk/c2t3d0s0
            c0t4d0                ONLINE       0     0     0
            c0t5d0                ONLINE       0     0     0
            c0t6d0                ONLINE       0     0     0
            c0t7d0                ONLINE       0     0     0

errors: No known data errors



When I try to run a scrub, it seems that a resilver is run instead, and the 
resilver finishes almost immediately (as you can see from the 'scrub:' line 
above).

Is this normal behaviour? Or is something somewhat crook with my zpool?

What's going on with that long disk name (formerly c2t3d0s0) ?

When I replace the failed disk (in the same port), will it be added to 
the array automatically, or will I have to add it manually (zpool replace)?
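
For reference, I'd guess a manual replacement would look something like this, 
using the GUID that zpool status shows for the missing device (untested here):

  zpool replace space 17557252296421049869 c2t3d0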

Thanks,

R.



[zfs-discuss] How to unmount when devices write-disabled?

2008-04-01 Thread Brian Kolaci
In a recovery situation where the primary node crashed, the
disks get write-disabled while the failover node takes control.
How can you unmount the zpool?  It panics the system and actually
gets into a panic loop when it tries to mount it again on next boot.

Thanks,

Brian


[zfs-discuss] ZFS Device fail timeout?

2008-04-01 Thread Luke Scharf
I'm running ZFS on a test server against a bunch of drives in an Apple
XRaid (configured in JBOD mode).  It works pretty well, except that
when I yank one of the drives, ZFS hangs -- presumably it's waiting
for a response from the XRaid.

Is there any way to set the device-failure timeout with ZFS?

Thanks,
-Luke




[zfs-discuss] What to do about retryable write errors?

2008-04-01 Thread Martin Englund
I've got a newly created zpool where I know (from its previous life under UFS) that one of 
the disks has retryable write errors.

What should I do about it now? Just leave ZFS to deal with it? Repair it?

If I should repair it, is this procedure OK?

zpool offline z2 c5t4d0
format -d c5t4d0
repair ...
zpool online z2 c5t4d0

cheers,
/Martin
 
 


Re: [zfs-discuss] What to do about retryable write errors?

2008-04-01 Thread Richard Elling
Martin Englund wrote:
> I've got a newly created zpool where I know (from its previous life under UFS)
> that one of the disks has retryable write errors.
>
> What should I do about it now? Just leave ZFS to deal with it? Repair it?

Retryable write errors are not fatal; they are retried.
What do you think you can do to repair them?
I'd raise an eyebrow, but otherwise not worry unless
there are fatal errors.
 -- richard

> If I should repair it, is this procedure OK?
>
> zpool offline z2 c5t4d0
> format -d c5t4d0
> repair ...
> zpool online z2 c5t4d0
>
> cheers,
> /Martin
  
  



Re: [zfs-discuss] ZFS Device fail timeout?

2008-04-01 Thread Luke Scharf
Richard Elling wrote:
> In general, ZFS doesn't manage device timeouts.  The lower layer drivers do.
> The timeout management depends on which OS, OS version, and HBA you use.
> A fairly extreme example may be Solaris using parallel SCSI and the sd driver,
> which uses a default timeout of 60 seconds and 5 retries.  In the more recent
> Solaris NV builds, FMA has been enhanced with an io-retire module which can
> make better decisions on whether the device is behaving well.

What, ZFS isn't the whole kernel?  ;-)

I can Google/RTFM from here.
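
For anyone else following along, a rough sketch of where that timeout lives 
(an assumption based on the stock Solaris sd driver -- check the docs before 
changing anything): add a line like this to /etc/system and reboot:

  * reduce the sd command timeout from the default 60s (0x3c) to 30s
  set sd:sd_io_time = 0x1e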

Thanks!
-Luke