Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-06 Thread Dick Davies
Is 'zpool attach' enough for a root pool?
I mean, does it install GRUB bootblocks on the disk?

On Wed, Jul 2, 2008 at 1:10 PM, Robert Milkowski [EMAIL PROTECTED] wrote:
 Hello Tommaso,

 Wednesday, July 2, 2008, 1:04:06 PM, you wrote:

  the root filesystem of my thumper is a ZFS with a single disk:



 is it possible to add a mirror to it? I seem to be able only to add a new
 PAIR of disks in mirror, but not to add a mirror to the existing disk ...

 zpool attach

-- 
Rasputnik :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-06 Thread Tommaso Boccali
we made a mistake :(

tom

On Wed, Jul 2, 2008 at 5:58 PM, Richard Elling [EMAIL PROTECTED] wrote:
 Tommaso Boccali wrote:

 Ciao, the root filesystem of my thumper is a ZFS with a single disk:

 bash-3.2# zpool status rpool
  pool: rpool
  state: ONLINE
  scrub: none requested
 config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c5t0d0s0  ONLINE   0 0 0
spares
  c0t7d0  AVAIL
  c1t6d0  AVAIL
  c1t7d0  AVAIL

 is it possible to add a mirror to it? I seem to be able only to add a new
 PAIR of disks in mirror, but not to add a mirror to the existing disk ...


 As Edna and Robert mentioned, zpool attach will add the mirror.
 But note that the X4500 has only two possible boot devices:
 c5t0d0 and c5t4d0.  This is a BIOS limitation.  So you will want
 to mirror with c5t4d0 and configure the disks for boot.  See the
 docs on ZFS boot for details on how to configure the boot sectors
 and grub.
 -- richard
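
In command terms, the sequence being described is roughly the following.  This
is only a sketch: the slice names and the GRUB stage paths are my assumptions,
so check the ZFS boot docs for the exact steps on your build.

# zpool attach rpool c5t0d0s0 c5t4d0s0        # target slice is an assumption
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t4d0s0

zpool attach turns the single-disk top-level vdev into a two-way mirror and
starts a resilver; installgrub then writes the GRUB bootblocks onto the new
half so the machine can boot from either disk.  The slice you attach has to
exist already and carry an SMI (not EFI) label.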





-- 
Tommaso Boccali
INFN Pisa
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-06 Thread Tommaso Boccali

 As Edna and Robert mentioned, zpool attach will add the mirror.
 But note that the X4500 has only two possible boot devices:
 c5t0d0 and c5t4d0.  This is a BIOS limitation.  So you will want
 to mirror with c5t4d0 and configure the disks for boot.  See the
 docs on ZFS boot for details on how to configure the boot sectors
 and grub.
 -- richard


uhm, bad.

I did not know this, so now the root is
bash-3.2# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: resilver completed after 0h8m with 0 errors on Wed Jul  2 16:09:14 2008
config:

NAME  STATE READ WRITE CKSUM
rpool ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c5t0d0s0  ONLINE   0 0 0
c1t7d0ONLINE   0 0 0
spares
  c0t7d0  AVAIL
  c1t6d0  AVAIL


while c5t4d0 belongs to a raidz pool:

...
  raidz1ONLINE   0 0 0
c0t4d0  ONLINE   0 0 0
c1t4d0  ONLINE   0 0 0
c5t4d0  ONLINE   0 0 0
c6t7d0  ONLINE   0 0 0
c5t5d0  ONLINE   0 0 0
c5t6d0  ONLINE   0 0 0
c5t7d0  ONLINE   0 0 0
c1t5d0  ONLINE   0 0 0
...

is it possible to restore the good behavior?
something like
- detach c1t7d0 from rpool
- detach c5t4d0 from the other pool (the pool still survives since it is raidz)
- reattach in reverse order? (and so reform mirror and raidz?)

thanks a lot again

tommaso




-- 
Tommaso Boccali
INFN Pisa
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-06 Thread Peter Tribble
On Sun, Jul 6, 2008 at 8:48 AM, Rob Clark [EMAIL PROTECTED] wrote:
 I am new to SX:CE (Solaris 11) and ZFS but I think I found a bug.

 I have eight 10GB drives.
...
 I have 6 remaining 10 GB drives and I desire to raid 3 of them and mirror 
 them to the other 3 to give me raid security and integrity with mirrored 
 drive performance. I then want to move my /export directory to the new 
 drive.

...
 # zpool create -f temparray raidz c1t2d0 c1t4d0 c1t5d0 mirror c1t3d0 c1t6d0 
 c1t8d0
...
 The question (Bug?) is: Shouldn't I get this instead?

 # zfs list | grep temparray
 temparray  97.2K  19.5G  1.33K  /temparray

 Why do I get 29.3G instead of 19.5G ?

Because what you've created is a pool containing two components:
 - a 3-drive raidz
 - a 3-drive mirror
concatenated together.
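
Rough arithmetic, assuming ~10GB per drive and ignoring metadata overhead:

  3-drive raidz   ->  2 x 10GB  =  ~20GB usable
  3-drive mirror  ->  1 x 10GB  =  ~10GB usable
  together        ->  ~30GB, which zfs list shows as 29.3G

A single 3-drive raidz on its own is what would give you the ~19.5G you
expected.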

I think that what you're trying to do based on your description is to create
one raidz and mirror that to another raidz. (Or create a raidz out of mirrored
drives.) You can't do that. You can't layer raidz and mirroring.

You'll either have to use raidz for the lot, or just use mirroring:

zpool create temparray mirror c1t2d0 c1t4d0 mirror c1t5d0 c1t3d0
mirror c1t6d0 c1t8d0

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-06 Thread Jeff Bonwick
I would just swap the physical locations of the drives, so that the
second half of the mirror is in the right location to be bootable.
ZFS won't mind -- it tracks the disks by content, not by pathname.
Note that SATA is not hotplug-happy, so you're probably best off
doing this while the box is powered off.  Upon reboot, ZFS should
figure out what happened, update the device paths, and... that's it.

Jeff

On Sun, Jul 06, 2008 at 08:47:25AM +0200, Tommaso Boccali wrote:
 
  As Edna and Robert mentioned, zpool attach will add the mirror.
  But note that the X4500 has only two possible boot devices:
  c5t0d0 and c5t4d0.  This is a BIOS limitation.  So you will want
  to mirror with c5t4d0 and configure the disks for boot.  See the
  docs on ZFS boot for details on how to configure the boot sectors
  and grub.
  -- richard
 
 
 uhm, bad.
 
 I did not know this, so now the root is
 bash-3.2# zpool status rpool
   pool: rpool
  state: ONLINE
  scrub: resilver completed after 0h8m with 0 errors on Wed Jul  2 16:09:14 
 2008
 config:
 
 NAME  STATE READ WRITE CKSUM
 rpool ONLINE   0 0 0
   mirror  ONLINE   0 0 0
 c5t0d0s0  ONLINE   0 0 0
 c1t7d0ONLINE   0 0 0
 spares
   c0t7d0  AVAIL
   c1t6d0  AVAIL
 
 
 while c5t4d0 belongs to a raidz pool:
 
 ...
   raidz1ONLINE   0 0 0
 c0t4d0  ONLINE   0 0 0
 c1t4d0  ONLINE   0 0 0
 c5t4d0  ONLINE   0 0 0
 c6t7d0  ONLINE   0 0 0
 c5t5d0  ONLINE   0 0 0
 c5t6d0  ONLINE   0 0 0
 c5t7d0  ONLINE   0 0 0
 c1t5d0  ONLINE   0 0 0
 ...
 
 is it possible to restore the good behavior?
 something like
 - detach c1t7d0 from rpool
 - detach c5t4d0 from the other pool (the pool still survives since it is 
 raidz)
 - reattach in reverse order? (and so reform mirror and raidz?)
 
 thanks a lot again
 
 tommaso
 
 
 
 
 -- 
 Tommaso Boccali
 INFN Pisa
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-06 Thread Rob Clark
 Peter Tribble wrote:
 Because what you've created is a pool containing two
 components:
 - a 3-drive raidz
 - a 3-drive mirror
 concatenated together.
 

OK. Seems odd that ZFS would allow that (would people want that configuration
instead of what I am attempting to do?).


 I think that what you're trying to do based on your description is to create
 one raidz and mirror that to another raidz. (Or create a raidz out of mirrored
 drives.) You can't do that. You can't layer raidz and mirroring.
 You'll either have to use raidz for the lot, or just use mirroring:
 zpool create temparray mirror c1t2d0 c1t4d0 mirror c1t5d0 c1t3d0 mirror 
 c1t6d0 c1t8d0

Bummer.


Curiously I can get that same odd size with either of these two commands (the 
second attempt sort of looks like it is raid + mirroring):


# zpool create temparray1 mirror c1t2d0 c1t4d0 mirror c1t3d0 c1t5d0 mirror 
c1t6d0 c1t8d0

# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
rpool ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c1t0d0s0  ONLINE   0 0 0
c1t1d0s0  ONLINE   0 0 0

errors: No known data errors

  pool: temparray1
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
temparray1  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t4d0  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t3d0  ONLINE   0 0 0
c1t5d0  ONLINE   0 0 0
  mirrorONLINE   0 0 0
c1t6d0  ONLINE   0 0 0
c1t8d0  ONLINE   0 0 0

errors: No known data errors

# zfs list
NAMEUSED  AVAIL  REFER  MOUNTPOINT
rpool  4.36G  5.42G35K  /rpool
rpool/ROOT 3.09G  5.42G18K  legacy
rpool/ROOT/snv_91  3.09G  5.42G  3.01G  /
rpool/ROOT/snv_91/var  84.5M  5.42G  84.5M  /var
rpool/dump  640M  5.42G   640M  -
rpool/export   14.0M  5.42G19K  /export
rpool/export/home  14.0M  5.42G  14.0M  /export/home
rpool/swap  640M  6.05G16K  -
temparray1 92.5K  29.3G 1K  /temparray1
# zpool destroy temparray1


And the pretty one:


# zpool create temparray raidz c1t2d0 c1t4d0 raidz c1t3d0 c1t5d0 raidz c1t6d0 
c1t8d0

# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
rpool ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c1t0d0s0  ONLINE   0 0 0
c1t1d0s0  ONLINE   0 0 0

errors: No known data errors

  pool: temparray
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
temparray   ONLINE   0 0 0
  raidz1ONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
c1t4d0  ONLINE   0 0 0
  raidz1ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0
c1t5d0  ONLINE   0 0 0
  raidz1ONLINE   0 0 0
c1t6d0  ONLINE   0 0 0
c1t8d0  ONLINE   0 0 0

errors: No known data errors

# zfs list
NAMEUSED  AVAIL  REFER  MOUNTPOINT
rpool  4.36G  5.42G35K  /rpool
rpool/ROOT 3.09G  5.42G18K  legacy
rpool/ROOT/snv_91  3.09G  5.42G  3.01G  /
rpool/ROOT/snv_91/var  84.6M  5.42G  84.6M  /var
rpool/dump  640M  5.42G   640M  -
rpool/export   14.0M  5.42G19K  /export
rpool/export/home  14.0M  5.42G  14.0M  /export/home
rpool/swap  640M  6.05G16K  -
temparray94K  29.3G 1K  /temparray
# zpool destroy temparray


That second attempt leads this newcomer to imagine that they have 3 raid 
drives mirrored to 3 raid drives.


Is there a way to get mirror performance (double speed) with raid integrity 
(one drive can fail and you are OK)? I can't imagine that no one would want 
that configuration.


Thanks for your comment Peter.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-06 Thread Johan Hartzenberg
On Sun, Jul 6, 2008 at 10:27 AM, Jeff Bonwick [EMAIL PROTECTED] wrote:

 I would just swap the physical locations of the drives, so that the
 second half of the mirror is in the right location to be bootable.
 ZFS won't mind -- it tracks the disks by content, not by pathname.
 Note that SATA is not hotplug-happy, so you're probably best off
 doing this while the box is powered off.  Upon reboot, ZFS should
 figure out what happened, update the device paths, and... that's it.


Wishlist item nr 1: Ability to setup raid 1+z
Wishlist item nr 2: Remove disks from pools

  _J

-- 
Any sufficiently advanced technology is indistinguishable from magic.
Arthur C. Clarke

Afrikaanse Stap Website: http://www.bloukous.co.za

My blog: http://initialprogramload.blogspot.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-06 Thread Peter Tribble
On Sun, Jul 6, 2008 at 10:13 AM, Rob Clark [EMAIL PROTECTED] wrote:

 Is there a way to get mirror performance (double speed) with raid integrity 
 (one drive can fail and you are OK)? I can't imagine that no one would want 
 that configuration.

That's what mirroring does - you have redundant data. The extra performance is
just a side-effect.

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Measuring ZFS performance - IOPS and throughput

2008-07-06 Thread Ross
Can anybody tell me how to measure the raw performance of a new system I'm 
putting together?  I'd like to know what it's capable of in terms of IOPS and 
raw throughput to the disks.

I've seen Richard's raidoptimiser program, but I've only seen results for 
random read iops performance, and I'm particularly interested in write 
performance.  That's because the live server will be fitted with 512MB of nvram 
for the ZIL, and I'd like to see what effect that actually has.

The disk system will be serving NFS to VMware to act as the datastore for a 
number of virtual machines.  I plan to benchmark the individual machines to see 
what kind of load they put on the server, but I need the raw figures from the 
disk to get an idea of how many machines I can serve before I need to start 
thinking bigger.

I'd also like to know if there's any easy way to see the current performance of 
the system once it's in use?  I know VMware has performance monitoring built 
into the console, but I'd prefer to take figures directly off the storage 
server if possible.

thanks,

Ross
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-06 Thread Ross
I'm no expert in ZFS, but I think I can explain what you've created there:

# zpool create temparray1 mirror c1t2d0 c1t4d0 mirror c1t3d0 c1t5d0 mirror 
c1t6d0 c1t8d0

This creates a stripe of three mirror sets (or in old-fashioned terms, you have 
a raid-0 stripe made up of three raid-1 sets of two disks).  It'll give you 
30GB of capacity, and every disk is mirrored to another (so your data is safe 
if any one drive fails).  I believe it will give you 3x the write performance 
(as data will be striped across the three sets), and should give 2x the read 
performance (as data can be read from either drive of a mirror).

I don't really understand why you're trying to mix raid-z and mirroring, but 
from what you say about performance, I suspect this may be the setup you are 
looking for.

For your second one I'm less sure what's going on:
# zpool create temparray raidz c1t2d0 c1t4d0 raidz c1t3d0 c1t5d0 raidz c1t6d0 
c1t8d0

This creates three two disk raid-z sets and stripes the data across them.  The 
problem is that a two disk raid-z makes no sense.  Traditionally this level of 
raid needs a minimum of three disks to work.  I suspect ZFS may be interpreting 
raid-z as requiring one parity drive, in which case this will effectively 
mirror the drives, but without the read performance boost that mirroring would 
give you.

The way zpool create works is that you can specify raid or mirror sets, but 
if you list a bunch of these one after the other, it simply stripes the 
data across them.
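
That also explains why both of your layouts report the same 29.3G: assuming
~10GB drives and ignoring overhead, three 2-way mirrors give 3 x 10GB = ~30GB
usable, and three 2-disk raidz1 sets (one data disk plus one parity disk each)
also give 3 x 10GB = ~30GB usable.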
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ? SX:CE snv_91 - ZFS - raid and mirror - drive sizes don't add correctly ?

2008-07-06 Thread Johan Hartzenberg
On Sun, Jul 6, 2008 at 3:46 PM, Ross [EMAIL PROTECTED] wrote:


 For your second one I'm less sure what's going on:
 # zpool create temparray raidz c1t2d0 c1t4d0 raidz c1t3d0 c1t5d0 raidz
 c1t6d0 c1t8d0

 This creates three two disk raid-z sets and stripes the data across them.
  The problem is that a two disk raid-z makes no sense.  Traditionally this
 level of raid needs a minimum of three disks to work.  I suspect ZFS may be
 interpreting raid-z as requiring one parity drive, in which case this will
 effectively mirror the drives, but without the read performance boost that
 mirroring would give you.

 The way zpool create works is that you can specify raid or mirror sets, but
 if you list a bunch of these one after the other, it simply stripes the
 data across them.

 I read somewhere, a long time ago when ZFS documentation was still mostly
speculation, that raidz will use mirroring when the amount of data to be
written is less than what justifies 2+parity.  E.g. instead of 1+parity, you
get mirrored data for small writes, essentially raid-5 for big writes, and
for intermediate sizes a raid-5-like spread of blocks across disks that uses
fewer than the total number of disks in the set.

If that still holds true, then a raidz of 2 disks is probably just a mirror?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] confusion and frustration with zpool

2008-07-06 Thread Pete Hartman
I have a zpool which has grown organically.  I had a 60Gb disk, I added a 
120, I added a 500, I got a 750 and sliced it and mirrored the other pieces.

The 60 and the 120 are internal PATA drives, the 500 and 750 are Maxtor 
OneTouch USB drives.

The original system I created the 60+120+500 pool on was Solaris 10 update 3, 
patched to use ZFS sometime last fall (November I believe).  In early June, a 
storm blew out my root drive.  Thinking it was an opportunity to upgrade, I 
re-installed with OpenSolaris, and completed the mirroring which I had intended 
for some time, and upgraded zfs from v4 to v10.

The system was not stable.  Reading around, I realized that 512M of RAM and a 
32-bit CPU was probably a poor choice for an OpenSolaris, ZFS based web and 
file server for my home.  So I purchased an ASUS AMD64x2 system and 4G of RAM 
and this weekend I was able to get that set up.

However, my pool is not behaving well.  I have had "insufficient replicas" for 
the pool and "corrupted data" for the mirror piece that is on both the USB 
drives.  This confuses me because I'm also seeing "no known data errors", which 
leads me to wonder where this corrupted data might be.  I did a zpool scrub, 
thinking I could shake out what the problem was; earlier, when the system was 
unstable, doing this pointed out a couple of MP3 files that were corrupt, and 
as they were easily replaced I just removed them and was able to get a clean 
filesystem.

My most recent attempt to clear this involved removing the 750G drive and then 
trying to bring it online; this had no effect, but now the 750 is on c0 rather 
than c7 at the OS device level.

I've googled for some guidance and found advice to export/import, and while 
this cleared the original insufficient replicas problem, it has not done 
anything for the alleged corrupted data.

I have a couple thousand family photos (many of which are backed up elsewhere, 
but would be a huge problem to re-import) and several thousand MP3s and AACs 
(iTunes songs, many of which are backed up, but many are not because of being 
recently purchased).  I've been hearing how ZFS is the way I should go, which 
is why I made this change last fall, but at this point I am only having 
confusion and frustration.  

Any advice for other steps I could take to recover would be great.

here is some data directly from the system (yes, I know, somewhere along the 
line I set the date one day ahead of the real date, I will be fixing that later 
:) ):

-bash-3.2# zpool status local
  pool: local
 state: DEGRADED
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
local DEGRADED 0 0 0
  mirror  ONLINE   0 0 0
c6d1p0ONLINE   0 0 0
c0t0d0s3  ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c6d0p0ONLINE   0 0 0
c0t0d0s4  ONLINE   0 0 0
  mirror  UNAVAIL  0 0 0  corrupted data
c8t0d0p0  ONLINE   0 0 0
c0t0d0s5  ONLINE   0 0 0

errors: No known data errors
-bash-3.2# zpool history local
History for 'local':
2007-11-19.11:45:11 zpool create -m /local2 local c1d0p0
2007-11-19.13:38:44 zfs recv local/main
2007-11-19.13:52:51 zfs set mountpoint=/local-pool local
2007-11-19.13:53:09 zfs set mountpoint=/local local/main
2007-11-19.14:00:48 zpool add local c1d1p0
2007-11-19.14:26:35 zfs destroy local/[EMAIL PROTECTED]
2007-11-28.18:38:26 zpool add local /dev/dsk/c3t0d0p0
2008-05-12.10:20:48 zfs set canmount=off local
2008-05-12.10:21:24 zfs set mountpoint=/ local
2008-06-16.15:56:29 zpool import -f local
2008-06-16.15:58:04 zpool export local
2008-06-27.21:41:35 zpool import local
2008-06-27.22:42:09 zpool attach -f local c5d0p0 c7t0d0s3
2008-06-28.09:06:51 zpool clear local c5d0p0
2008-06-28.09:07:00 zpool clear local c7t0d0s3
2008-06-28.09:07:11 zpool clear local
2008-06-28.09:35:39 zpool attach -f local c5d1p0 c7t0d0s4
2008-06-28.09:36:23 zpool attach -f local c6t0d0p0 c7t0d0s5
2008-06-28.13:15:26 zpool clear local
2008-06-28.13:16:48 zpool scrub local
2008-06-28.18:30:19 zpool clear local
2008-06-28.18:30:37 zpool upgrade local
2008-06-28.18:53:33 zfs create -o mountpoint=/opt/csw local/csw
2008-06-28.21:59:38 zpool export local
2008-07-06.23:25:41 zpool import local
2008-07-06.23:26:19 zpool scrub local
2008-07-07.08:40:13 zpool clear local
2008-07-07.08:43:39 zpool export local
2008-07-07.08:43:54 zpool import local
2008-07-07.08:44:20 zpool clear local
2008-07-07.08:47:20 zpool export local
2008-07-07.08:56:49 zpool import local
2008-07-07.08:58:57 zpool export local
2008-07-07.09:00:26 zpool import local
2008-07-07.09:18:16 zpool export local
2008-07-07.09:18:26 zpool import local
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org

Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-06 Thread Tommaso Boccali
is there a way to do it via software ? (attach remove add detach)

if nothing else, it would help me quite a lot to understand the underlying
zfs mechanism ...
thanks

;)

tom

On Sun, Jul 6, 2008 at 10:27 AM, Jeff Bonwick [EMAIL PROTECTED] wrote:
 I would just swap the physical locations of the drives, so that the
 second half of the mirror is in the right location to be bootable.
 ZFS won't mind -- it tracks the disks by content, not by pathname.
 Note that SATA is not hotplug-happy, so you're probably best off
 doing this while the box is powered off.  Upon reboot, ZFS should
 figure out what happened, update the device paths, and... that's it.

 Jeff

 On Sun, Jul 06, 2008 at 08:47:25AM +0200, Tommaso Boccali wrote:
 
  As Edna and Robert mentioned, zpool attach will add the mirror.
  But note that the X4500 has only two possible boot devices:
  c5t0d0 and c5t4d0.  This is a BIOS limitation.  So you will want
  to mirror with c5t4d0 and configure the disks for boot.  See the
  docs on ZFS boot for details on how to configure the boot sectors
  and grub.
  -- richard
 

 uhm, bad.

 I did not know this, so now the root is
 bash-3.2# zpool status rpool
   pool: rpool
  state: ONLINE
  scrub: resilver completed after 0h8m with 0 errors on Wed Jul  2 16:09:14 
 2008
 config:

 NAME  STATE READ WRITE CKSUM
 rpool ONLINE   0 0 0
   mirror  ONLINE   0 0 0
 c5t0d0s0  ONLINE   0 0 0
 c1t7d0ONLINE   0 0 0
 spares
   c0t7d0  AVAIL
   c1t6d0  AVAIL


 while c5t4d0 belongs to a raidz pool:

 ...
   raidz1ONLINE   0 0 0
 c0t4d0  ONLINE   0 0 0
 c1t4d0  ONLINE   0 0 0
 c5t4d0  ONLINE   0 0 0
 c6t7d0  ONLINE   0 0 0
 c5t5d0  ONLINE   0 0 0
 c5t6d0  ONLINE   0 0 0
 c5t7d0  ONLINE   0 0 0
 c1t5d0  ONLINE   0 0 0
 ...

 is it possible to restore the good behavior?
 something like
 - detach c1t7d0 from rpool
 - detach c5t4d0 from the other pool (the pool still survives since it is 
 raidz)
 - reattach in reverse order? (and so reform mirror and raidz?)

 thanks a lot again

 tommaso
 



 --
 Tommaso Boccali
 INFN Pisa
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




-- 
Tommaso Boccali
INFN Pisa
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] confusion and frustration with zpool

2008-07-06 Thread Pete Hartman
I'm doing another scrub after clearing insufficient replicas only to find 
that I'm back to the report of insufficient replicas, which basically leads me 
to expect this scrub (due to complete in about 5 hours from now) won't have any 
benefit either.

-bash-3.2#  zpool status local
  pool: local
 state: FAULTED
 scrub: scrub in progress for 0h32m, 9.51% done, 5h11m to go
config:

NAME  STATE READ WRITE CKSUM
local FAULTED  0 0 0  insufficient replicas
  mirror  ONLINE   0 0 0
c6d1p0ONLINE   0 0 0
c0t0d0s3  ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c6d0p0ONLINE   0 0 0
c0t0d0s4  ONLINE   0 0 0
  mirror  UNAVAIL  0 0 0  corrupted data
c8t0d0p0  ONLINE   0 0 0
c0t0d0s5  ONLINE   0 0 0

errors: No known data errors
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Streaming video and audio over CIFS lags.

2008-07-06 Thread MC
 Then I went and bought an Intel PCI Gigabit Ethernet card for 25€ which seems 
 to have solved the problem.

Is this really the case?  If so, that is an important clue to finding out why 
virtualized opensolaris performance is so poor.  I tried every network adapter 
in virtualbox and vmware and performance always sucked, but maybe it still 
comes down to a networking configuration problem.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-06 Thread Richard Elling
Tommaso Boccali wrote:
 is there a way to do it via software ? (attach remove add detach)
   

Skeleton process:
1. detach c1t7d0 from the root mirror
2. replace c5t4d0 with c1t7d0

In the details, you will need to be careful with the partitioning
for the root mirror.  You will need to use slices because the boot
process does not understand EFI labels.  In other words, your
rpool mirror at c1t7d0 has an EFI label and is not bootable.
Note: this is not a ZFS limitation, it is a boot limitation.

The detailed procedure for configuring a boot mirror using
ZFS as the root file system is in the ZFS Administration Guide
http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
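
In command form, the skeleton looks roughly like this.  The raidz pool name
is a placeholder, and the slice/GRUB details are exactly the part to take from
the admin guide, so treat it as a sketch rather than a recipe:

# zpool detach rpool c1t7d0
# zpool replace <raidz-pool> c5t4d0 c1t7d0    # <raidz-pool> = your raidz pool; wait for the resilver
# format                                      # select c5t4d0, put an SMI label on it, create s0
# zpool attach rpool c5t0d0s0 c5t4d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t4d0s0
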
 -- richard

 if nothing else, it would help me quite a lot to understand the underlying
 zfs mechanism ...
 thanks

 ;)

 tom

 On Sun, Jul 6, 2008 at 10:27 AM, Jeff Bonwick [EMAIL PROTECTED] wrote:
   
 I would just swap the physical locations of the drives, so that the
 second half of the mirror is in the right location to be bootable.
 ZFS won't mind -- it tracks the disks by content, not by pathname.
 Note that SATA is not hotplug-happy, so you're probably best off
 doing this while the box is powered off.  Upon reboot, ZFS should
 figure out what happened, update the device paths, and... that's it.

 Jeff

 On Sun, Jul 06, 2008 at 08:47:25AM +0200, Tommaso Boccali wrote:
 
 As Edna and Robert mentioned, zpool attach will add the mirror.
 But note that the X4500 has only two possible boot devices:
 c5t0d0 and c5t4d0.  This is a BIOS limitation.  So you will want
 to mirror with c5t4d0 and configure the disks for boot.  See the
 docs on ZFS boot for details on how to configure the boot sectors
 and grub.
 -- richard

 
 uhm, bad.

 I did not know this, so now the root is
 bash-3.2# zpool status rpool
   pool: rpool
  state: ONLINE
  scrub: resilver completed after 0h8m with 0 errors on Wed Jul  2 16:09:14 
 2008
 config:

 NAME  STATE READ WRITE CKSUM
 rpool ONLINE   0 0 0
   mirror  ONLINE   0 0 0
 c5t0d0s0  ONLINE   0 0 0
 c1t7d0ONLINE   0 0 0
 spares
   c0t7d0  AVAIL
   c1t6d0  AVAIL


 while c5t4d0 belongs to a raidz pool:

 ...
   raidz1ONLINE   0 0 0
 c0t4d0  ONLINE   0 0 0
 c1t4d0  ONLINE   0 0 0
 c5t4d0  ONLINE   0 0 0
 c6t7d0  ONLINE   0 0 0
 c5t5d0  ONLINE   0 0 0
 c5t6d0  ONLINE   0 0 0
 c5t7d0  ONLINE   0 0 0
 c1t5d0  ONLINE   0 0 0
 ...

 is it possible to restore the good behavior?
 something like
 - detach c1t7d0 from rpool
 - detach c5t4d0 from the other pool (the pool still survives since it is 
 raidz)
 - reattach in reverse order? (and so reform mirror and raidz?)

 thanks a lot again

 tommaso
   

 --
 Tommaso Boccali
 INFN Pisa
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   



   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Measuring ZFS performance - IOPS and throughput

2008-07-06 Thread Richard Elling
Ross wrote:
 Can anybody tell me how to measure the raw performance of a new system I'm 
 putting together?  I'd like to know what it's capable of in terms of IOPS and 
 raw throughput to the disks.

 I've seen Richard's raidoptimiser program, but I've only seen results for 
 random read iops performance, and I'm particularly interested in write 
 performance.  That's because the live server will be fitted with 512MB of 
 nvram for the ZIL, and I'd like to see what effect that actually has.
   

Cool.  Yes, RAIDoptimizer's performance model is trivially simple because
it uses disk datasheet specifications, not measured data.  There is a lot
you can measure with filebench, which should be installed for you in
/usr/benchmarks.
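
A quick sketch of driving it, from memory, so the exact path and workload
names may differ on your build (check what is actually under
/usr/benchmarks/filebench):

# /usr/benchmarks/filebench/bin/filebench     # binary location is an assumption
filebench> load varmail
filebench> set $dir=/yourpool/fbtest          # placeholder: any directory on the pool under test
filebench> run 60

varmail is a small, sync-write-heavy workload, so it should show the effect
of the NVRAM-backed ZIL fairly clearly.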

 The disk system will be serving NFS to VMware to act as the datastore for a 
 number of virtual machines.  I plan to benchmark the individual machines to 
 see what kind of load they put on the server, but I need the raw figures from 
 the disk to get an idea of how many machines I can serve before I need to 
 start thinking bigger.
   
 I'd also like to know if there's any easy way to see the current performance 
 of the system once it's in use?  I know VMware has performance monitoring 
 built into the console, but I'd prefer to take figures directly off the 
 storage server if possible.
   

Something like NFSstat is probably the best indicator from the client
perspective.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Measuring ZFS performance - IOPS and throughput

2008-07-06 Thread Richard Elling
Ross Smith wrote:
 Thanks Richard, filebench sounds ideal for testing the abilities of 
 the server, far better than I expected to find actually.
  
 NFSstat might be tricky however, since the clients are going to be 
 running XP :).  I've got a very basic free benchmark that I'll use to 
 check that virtual disk performance over NFS is acceptable on the 
 client, and then I'll use the performance figures on VMware and the 
 fileserver to see how the clients are doing once I've a few running.
  
 On the ZFS fileserver, is iostat all I need to get a quick snapshot of 
 the load on the system?  Is there anything on Solaris like Microsoft's 
 performance monitor, where I can log figures over a period of time, or 
 bring up a chart of performance over time?  What I'd really like to 
 know is the average load in terms of iops and bandwidth, plus the peak 
 figures for each statistic too.

There is a performance monitor; actually, there are many different ways to
look at performance.  In general, the best view is from the client.  The farther
you get from the client, the less you can see.  For example, using iostat
on the server is a common thing to do, but since the server caches
data from the disks and iostat only shows I/O requests, it will be
difficult to correlate server disk I/O to client performance over time.
There is a wealth of information on Solaris performance analysis and
tuning knowledge available on the net.
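
That said, for a quick repeating snapshot on the server itself, something
like the following is a common starting point (pool name is a placeholder,
intervals are arbitrary):

# zpool iostat -v tank 5      # tank = your pool name
# iostat -xnz 5

The first shows per-vdev read/write operations and bandwidth every 5 seconds;
the second shows per-device service times and utilisation, skipping idle
devices.  Keep the caveat above in mind: because of caching, these numbers
won't map one-to-one onto what the clients see.
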
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] confusion and frustration with zpool

2008-07-06 Thread Jeff Bonwick
As a first step, 'fmdump -ev' should indicate why it's complaining
about the mirror.
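
If the one-line summaries are not enough detail, the verbose form prints the
full ereport payloads, which should name the vdev each event refers to (flag
from memory, so double-check the man page):

# fmdump -eV | less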

Jeff

On Sun, Jul 06, 2008 at 07:55:22AM -0700, Pete Hartman wrote:
 I'm doing another scrub after clearing insufficient replicas only to find 
 that I'm back to the report of insufficient replicas, which basically leads 
 me to expect this scrub (due to complete in about 5 hours from now) won't 
 have any benefit either.
 
 -bash-3.2#  zpool status local
   pool: local
  state: FAULTED
  scrub: scrub in progress for 0h32m, 9.51% done, 5h11m to go
 config:
 
 NAME  STATE READ WRITE CKSUM
 local FAULTED  0 0 0  insufficient replicas
   mirror  ONLINE   0 0 0
 c6d1p0ONLINE   0 0 0
 c0t0d0s3  ONLINE   0 0 0
   mirror  ONLINE   0 0 0
 c6d0p0ONLINE   0 0 0
 c0t0d0s4  ONLINE   0 0 0
   mirror  UNAVAIL  0 0 0  corrupted data
 c8t0d0p0  ONLINE   0 0 0
 c0t0d0s5  ONLINE   0 0 0
 
 errors: No known data errors
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs-discuss Digest, Vol 33, Issue 19

2008-07-06 Thread Gilberto Mautner
Hello Ross,

We're trying to accomplish the same goal over here, i.e. serving multiple
VMware images from an NFS server.

Could you tell us what kind of NVRAM device you ended up choosing? We bought
a Micromemory PCI card but can't get a Solaris driver for it...

Thanks

Gilberto


On 7/6/08 9:54 AM, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:

 --
 
 Message: 6
 Date: Sun, 06 Jul 2008 06:37:40 PDT
 From: Ross [EMAIL PROTECTED]
 Subject: [zfs-discuss] Measuring ZFS performance - IOPS and throughput
 To: zfs-discuss@opensolaris.org
 Message-ID: [EMAIL PROTECTED]
 Content-Type: text/plain; charset=UTF-8
 
 Can anybody tell me how to measure the raw performance of a new system I'm
 putting together?  I'd like to know what it's capable of in terms of IOPS and
 raw throughput to the disks.
 
 I've seen Richard's raidoptimiser program, but I've only seen results for
 random read iops performance, and I'm particularly interested in write
 performance.  That's because the live server will be fitted with 512MB of
 nvram for the ZIL, and I'd like to see what effect that actually has.
 
 The disk system will be serving NFS to VMware to act as the datastore for a
 number of virtual machines.  I plan to benchmark the individual machines to
 see what kind of load they put on the server, but I need the raw figures from
 the disk to get an idea of how many machines I can serve before I need to
 start thinking bigger.
 
 I'd also like to know if there's any easy way to see the current performance
 of the system once it's in use?  I know VMware has performance monitoring
 built into the console, but I'd prefer to take figures directly off the
 storage server if possible.
 
 thanks,
 
 Ross
  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] x4500 panic report.

2008-07-06 Thread Jorgen Lundman

On Saturday the X4500 system paniced, and rebooted. For some reason the 
/export/saba1 UFS partition was corrupt, and needed fsck. This is why 
it did not come back online. /export/saba1 is mounted logging,noatime, 
so fsck should never (-ish) be needed.

SunOS x4500-01.unix 5.11 snv_70b i86pc i386 i86pc

/export/saba1 on /dev/zvol/dsk/zpool1/saba1 
read/write/setuid/devices/intr/largefiles/logging/quota/xattr/noatime/onerror=panic/dev=2d80024
 
on Sat Jul  5 08:48:54 2008


One possible related bug:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4884138


What would be the best solution? Go back to latest Solaris 10 and pass 
it on to Sun support, or find a patch for this problem?



Panic dump follows:


-rw-r--r--   1 root root 2529300 Jul  5 08:48 unix.2
-rw-r--r--   1 root root 10133225472 Jul  5 09:10 vmcore.2


# mdb unix.2 vmcore.2
Loading modules: [ unix genunix specfs dtrace cpu.AuthenticAMD.15 uppc 
pcplusmp scsi_vhci ufs md ip hook neti sctp arp usba uhci s1394 qlc fctl 
nca lofs zfs random cpc crypto fcip fcp logindmux nsctl sdbc ptm sv ii 
sppp rdc nfs ]

  $c
vpanic()
vcmn_err+0x28(3, f783ade0, ff001e737aa8)
real_panic_v+0xf7(0, f783ade0, ff001e737aa8)
ufs_fault_v+0x1d0(fffed0bfb980, f783ade0, ff001e737aa8)
ufs_fault+0xa0()
dqput+0xce(1db26ef0)
dqrele+0x48(1db26ef0)
ufs_trans_dqrele+0x6f(1db26ef0)
ufs_idle_free+0x16d(ff04f17b1e00)
ufs_idle_some+0x152(3f60)
ufs_thread_idle+0x1a1()
thread_start+8()


  ::cpuinfo
  ID ADDR FLG NRUN BSPL PRI RNRN KRNRN SWITCH THREAD 
PROC
   0 fbc2fc10  1b00  60   nono t-0 
ff001e737c80 sched
   1 fffec3a0a000  1f10  -1   nono t-0ff001e971c80
  (idle)
   2 fffec3a02ac0  1f00  -1   nono t-1ff001e9dbc80
  (idle)
   3 fffec3d60580  1f00  -1   nono t-1ff001ea50c80
  (idle)

  ::panicinfo
  cpu0
   thread ff001e737c80
  message dqput: dqp->dq_cnt == 0
  rdi f783ade0
  rsi ff001e737aa8
  rdx f783ade0
  rcx ff001e737aa8
   r8 f783ade0
   r90
  rax3
  rbx0
  rbp ff001e737900
  r10 fbc26fb0
  r10 fbc26fb0
  r11 ff001e737c80
  r12 f783ade0
  r13 ff001e737aa8
  r143
  r15 f783ade0
   fsbase0
   gsbase fbc26fb0
   ds   4b
   es   4b
   fsbase0
   gsbase fbc26fb0
   ds   4b
   es   4b
   fs0
   gs  1c3
   trapno0
  err0
  rip fb83c860
   cs   30
   rflags  246
  rsp ff001e7378b8
   ss   38
   gdt_hi0
   gdt_lo e1ef
   idt_hi0
   idt_lo 77c00fff
  ldt0
 task   70
  cr0 8005003b
  cr2 fee7d650
  cr3  2c0
  cr4  6f8

  ::msgbuf
quota_ufs: over hard disk limit (pid 600, uid 178199, inum 941499, fs 
/export/zero1)
quota_ufs: over hard disk limit (pid 600, uid 33647, inum 29504134, fs 
/export/zero1)

panic[cpu0]/thread=ff001e737c80:
dqput: dqp->dq_cnt == 0


ff001e737930 genunix:vcmn_err+28 ()
ff001e737980 ufs:real_panic_v+f7 ()
ff001e7379e0 ufs:ufs_fault_v+1d0 ()
ff001e737ad0 ufs:ufs_fault+a0 ()
ff001e737b00 ufs:dqput+ce ()
ff001e737b30 ufs:dqrele+48 ()
ff001e737b70 ufs:ufs_trans_dqrele+6f ()
ff001e737bc0 ufs:ufs_idle_free+16d ()
ff001e737c10 ufs:ufs_idle_some+152 ()
ff001e737c60 ufs:ufs_thread_idle+1a1 ()
ff001e737c70 unix:thread_start+8 ()

syncing file systems...




-- 
Jorgen Lundman   | [EMAIL PROTECTED]
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] confusion and frustration with zpool

2008-07-06 Thread Pete Hartman
I'm not sure how to interpret the output of fmdump:

-bash-3.2#  fmdump -ev
TIME CLASS ENA
Jul 06 23:25:39.3184 ereport.fs.zfs.vdev.bad_label 
0x03b3e4e8b1900401
Jul 07 03:32:14.3561 ereport.fs.zfs.checksum 
0xdaffb466a7e1
Jul 07 03:32:14.3561 ereport.fs.zfs.checksum 
0xdaffb466a7e1
Jul 07 03:32:14.3561 ereport.fs.zfs.checksum 
0xdaffb466a7e1
Jul 07 03:32:14.3561 ereport.fs.zfs.checksum 
0xdaffb466a7e1
Jul 07 03:32:14.3561 ereport.fs.zfs.checksum 
0xdaffb466a7e1
Jul 07 03:32:14.3561 ereport.fs.zfs.checksum 
0xdaffb466a7e1
Jul 07 03:32:14.3561 ereport.fs.zfs.checksum 
0xdaffb466a7e1
Jul 07 03:32:14.3561 ereport.fs.zfs.checksum 
0xdaffb466a7e1
Jul 07 03:32:14.3561 ereport.fs.zfs.data 
0xdaffb466a7e1
Jul 07 08:43:51.9399 ereport.fs.zfs.vdev.bad_label 
0xeb15a1de01f00401
Jul 07 08:56:46.8978 ereport.fs.zfs.vdev.bad_label 
0xf66406a7f9f00401
Jul 07 09:00:25.6136 ereport.fs.zfs.vdev.bad_label 
0xf992ce4b4c11
Jul 07 09:00:25.6136 ereport.fs.zfs.io 
0xf992ce4b4c11
Jul 07 09:00:25.6136 ereport.fs.zfs.io 
0xf992ce4b4c11
Jul 07 09:00:27.1258 ereport.fs.zfs.io 
0xf99870686ff00401
Jul 07 09:00:27.1258 ereport.fs.zfs.io 
0xf99870686ff00401
Jul 07 09:00:27.6452 ereport.fs.zfs.io 
0xf99a5fd3be900401
Jul 07 09:00:27.6452 ereport.fs.zfs.io 
0xf99a5fd3be900401
Jul 07 09:12:58.8672 ereport.fs.zfs.vdev.bad_label 
0x0488e4f3f2b1
Jul 07 09:13:04.2748 ereport.fs.zfs.vdev.bad_label 
0x049d0a0437a00401
Jul 07 09:18:23.3689 ereport.fs.zfs.vdev.bad_label 
0x0941c1d9ae91
Jul 07 13:32:19.9203 ereport.fs.zfs.checksum 
0xe6fa55a373b1
Jul 07 13:32:19.9203 ereport.fs.zfs.checksum 
0xe6fa55a373b1
Jul 07 13:32:19.9203 ereport.fs.zfs.checksum 
0xe6fa55a373b1
Jul 07 13:32:19.9203 ereport.fs.zfs.checksum 
0xe6fa55a373b1
Jul 07 13:32:19.9203 ereport.fs.zfs.checksum 
0xe6fa55a373b1
Jul 07 13:32:19.9203 ereport.fs.zfs.checksum 
0xe6fa55a373b1
Jul 07 13:32:19.9203 ereport.fs.zfs.checksum 
0xe6fa55a373b1
Jul 07 13:32:19.9203 ereport.fs.zfs.checksum 
0xe6fa55a373b1
Jul 07 13:32:19.9203 ereport.fs.zfs.data 
0xe6fa55a373b1
Jul 07 20:03:41.6315 ereport.fs.zfs.vdev.bad_label 
0x3cb5f9c64ac1
Jul 07 20:03:42.5642 ereport.fs.zfs.vdev.bad_label 
0x3cb97354d311
Jul 07 20:03:43.3098 ereport.fs.zfs.vdev.bad_label 
0x3cbc3a681b31
Jul 07 20:03:58.6815 ereport.fs.zfs.vdev.bad_label 
0x3cf57dee8401
Jul 07 20:04:01.0846 ereport.fs.zfs.vdev.bad_label 
0x3cfe71b9f5800401
Jul 07 20:04:03.2627 ereport.fs.zfs.vdev.bad_label 
0x3d068ee974a00401
Jul 07 20:04:06.2904 ereport.fs.zfs.vdev.bad_label 
0x3d11d65e5831


So current sequence of events:

The scrub from this morning completed, and it now is calling out a 
specific file with problems.

Based on the bad_label messages above, I went to my USB devices to 
double check their labels; format shows them without problems.  So does 
fdisk.  Just to be sure, I went to the format partition menu and re-ran 
label without changing anything.

I then ran a zpool clear, and now it looks like everything is online 
except that one file:

-bash-3.2# zpool status -v
   pool: local
  state: ONLINE
status: One or more devices has experienced an error resulting in data
 corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
 entire pool from backup.
see: http://www.sun.com/msg/ZFS-8000-8A
  scrub: scrub completed after 4h22m with 1 errors on Mon Jul  7 
13:44:31 2008
config:

 NAME  STATE READ WRITE CKSUM
 local ONLINE   0 0 0
   mirror  ONLINE   0 0 0
 c6d1p0ONLINE   0 0 0
 c0t0d0s3  ONLINE   0 0 0
   mirror  ONLINE   0 0 0
 c6d0p0ONLINE   0 0 0
 c0t0d0s4  ONLINE   0 0 0
   mirror  ONLINE   0 0 0
 c8t0d0p0  ONLINE   0 0 0
 c0t0d0s5  ONLINE   0 0 0

errors: Permanent errors have been detected in the following files:

 /local/share/music/Petes-itunes/Scientist/Scientific Dub/Satta 
Dread Dub.mp3

HOWEVER, it does not appear that things are good:

-bash-3.2# zpool list
NAMESIZE   USED  AVAILCAP  HEALTH  ALTROOT
local   630G   228G   403G36%  ONLINE  -
rpool55G  2.63G  52.4G 4%  ONLINE  -

-bash-3.2# df -k /local
Filesystemkbytesused   avail capacity  Mounted on
local/main   238581865 238567908   0   100%/local

-bash-3.2# cd '/local/share/music/Petes-itunes/Scientist/Scientific Dub/'
-bash-3.2# ls -l
total 131460
-rwxr--r--   1 elmegil  other8374348 Jun 10 18:51 Bad Days Dub.mp3
-rwxr--r--   1 elmegil  other5355853 Jun 10 18:51 Blacka Shade of 
Dub.mp3
-rwxr--r--   1 elmegil  other7260905 Jun 10 18:50 Drum Song Dub.mp3
-rwxr--r--   1 elmegil  other6058878 Jun 10 

Re: [zfs-discuss] x4500 panic report.

2008-07-06 Thread James C. McPherson
Jorgen Lundman wrote:
 On Saturday the X4500 system paniced, and rebooted. For some reason the 
 /export/saba1 UFS partition was corrupt, and needed fsck. This is why 
 it did not come back online. /export/saba1 is mounted logging,noatime, 
 so fsck should never (-ish) be needed.
 
 SunOS x4500-01.unix 5.11 snv_70b i86pc i386 i86pc
 
 /export/saba1 on /dev/zvol/dsk/zpool1/saba1 
 read/write/setuid/devices/intr/largefiles/logging/quota/xattr/noatime/onerror=panic/dev=2d80024
  
 on Sat Jul  5 08:48:54 2008
 
 
 One possible related bug:
 
 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4884138

Yes, that bug is possibly related. However, the panic stacks listed
in it do not match yours.

 What would be the best solution? Go back to latest Solaris 10 and pass 
 it on to Sun support, or find a patch for this problem?

Since the panic stack only ever goes through ufs, you should
log a call with Sun support.
...
   ::msgbuf
 quota_ufs: over hard disk limit (pid 600, uid 178199, inum 941499, fs 
 /export/zero1)
 quota_ufs: over hard disk limit (pid 600, uid 33647, inum 29504134, fs 
 /export/zero1)
 
 panic[cpu0]/thread=ff001e737c80:
 dqput: dqp->dq_cnt == 0
 
 
 ff001e737930 genunix:vcmn_err+28 ()
 ff001e737980 ufs:real_panic_v+f7 ()
 ff001e7379e0 ufs:ufs_fault_v+1d0 ()
 ff001e737ad0 ufs:ufs_fault+a0 ()
 ff001e737b00 ufs:dqput+ce ()
 ff001e737b30 ufs:dqrele+48 ()
 ff001e737b70 ufs:ufs_trans_dqrele+6f ()
 ff001e737bc0 ufs:ufs_idle_free+16d ()
 ff001e737c10 ufs:ufs_idle_some+152 ()
 ff001e737c60 ufs:ufs_thread_idle+1a1 ()
 ff001e737c70 unix:thread_start+8 ()

Although given the entry in the msgbuf, perhaps
you might want to fix up your quota settings on that
particular filesystem.



James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4500 panic report.

2008-07-06 Thread Jorgen Lundman
  Since the panic stack only ever goes through ufs, you should
log a call with Sun support.

We do have support, but they only speak Japanese, and I'm still quite 
poor at it. But I have started the process of having it translated and 
passed along to the next person. It is always fun to see what it becomes 
at the other end. Meanwhile, I like to research and see if it is an 
already known problem, rather than just sit around and wait.



  quota_ufs: over hard disk limit (pid 600, uid 33647, inum 29504134, 
fs /export/zero1)

 
 Although given the entry in the msgbuf, perhaps
 you might want to fix up your quota settings on that
 particular filesystem.
 

Customers pay for a certain amount of disk-quota, and being users, 
always stay close to the edge. Those messages are as constant as 
precipitation in the rainy season.

Are you suggesting that indicates a problem, beyond the user being out 
of space?

Lund


-- 
Jorgen Lundman   | [EMAIL PROTECTED]
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4500 panic report.

2008-07-06 Thread James C. McPherson
Jorgen Lundman wrote:
   Since the panic stack only ever goes through ufs, you should
 log a call with Sun support.
 
 We do have support, but they only speak Japanese, and I'm still quite 
 poor at it. But I have started the process of having it translated and 
 passed along to the next person. It is always fun to see what it becomes 
 at the other end. Meanwhile, I like to research and see if it is a 
 already known problem, rather than just sit around and wait.

That sounds like a learning opportunity :-)

   quota_ufs: over hard disk limit (pid 600, uid 33647, inum 29504134, 
 fs /export/zero1)
 
 Although given the entry in the msgbuf, perhaps
 you might want to fix up your quota settings on that
 particular filesystem.

 
 Customers pay for a certain amount of disk-quota, and being users, 
 always stay close to the edge. Those messages are as constant as 
 precipitation in the rainy season.
 
 Are you suggesting that indicates a problem, beyond the user being out 
 of space?

I don't know, I'm not a UFS expert (heck, I'm not an expert
on _anything_). Have you investigated putting your paying
customers onto zfs and managing quotas with zfs properties
instead of ufs?
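
For what it's worth, quota management on the ZFS side is just a per-filesystem
property, along these lines (dataset names are made up for illustration):

# zfs create zpool1/home/user1        # hypothetical dataset
# zfs set quota=10G zpool1/home/user1
# zfs get quota zpool1/home/user1

The catch is that this is a per-filesystem quota rather than a per-user quota,
so it only maps onto your setup if each customer gets their own filesystem.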




James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] x4500 panic report.

2008-07-06 Thread Jorgen Lundman
 I don't know, I'm not a UFS expert (heck, I'm not an expert
 on _anything_). Have you investigated putting your paying
 customers onto zfs and managing quotas with zfs properties
 instead of ufs?

Yep, we spent about 6 weeks during the trial period of the x4500 to try 
to find a way for ZFS to be able to replace the current NetApps. History 
of this mailing-list should have it, and thanks to everyone who helped.

But it was just not possible. Perhaps now it can be done, using 
mirror-mounts, but the 50 odd servers hanging off the x4500 don't all 
support it, so it would still not be feasible.

Unless there has been some advancement in ZFS in the last 6 months I am 
not aware of... like user quotas?

Thanks for your assistance.

Lund

-- 
Jorgen Lundman   | [EMAIL PROTECTED]
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss