Re: [zfs-discuss] zfs resilvering

2008-09-29 Thread Mikael Kjerrman
Hi,

it was actually shared both ways: as a dataset and as an NFS share.

We had zonedata/prodlogs set up as a dataset for the zone, and
zonedata/tmp mounted as an NFS filesystem within the zone.
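
In case it helps anyone reconstruct the setup, this is roughly how the two
were done. The zone name, host name and mount point below are placeholders;
only the dataset names come from our config:

(global zone: delegate the dataset to the zone)
  # zonecfg -z prodzone
  zonecfg:prodzone> add dataset
  zonecfg:prodzone:dataset> set name=zonedata/prodlogs
  zonecfg:prodzone:dataset> end
  zonecfg:prodzone> commit

(global zone: export the other filesystem over NFS)
  # zfs set sharenfs=on zonedata/tmp

(inside the zone: mount it like any other NFS share)
  # mount -F nfs globalhost:/zonedata/tmp /mnt/tmp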

//Mike


Re: [zfs-discuss] zfs resilvering

2008-09-29 Thread Mikael Kjerrman
Richard,

thanks a lot for that answer. What is right can be argued back and forth, but it
helps to know the reason behind the problem. Again, thanks a lot...

//Mike


[zfs-discuss] zfs resilvering

2008-09-26 Thread Mikael Kjerrman
Hi,

I've searched without luck, so I'm asking instead.

I have a Solaris 10 box,

# cat /etc/release
   Solaris 10 11/06 s10s_u3wos_10 SPARC
   Copyright 2006 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
   Assembled 14 November 2006

This box was rebooted this morning, and after the boot I noticed a resilver was
in progress. The estimated time seems awfully long, though, so is this a problem
that can be patched or remediated in some other way?

# zpool status -x
  pool: zonedata
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 0.04% done, 4398h43m to go
config:

        NAME                               STATE     READ WRITE CKSUM
        zonedata                           ONLINE       0     0     0
          mirror                           ONLINE       0     0     0
            c6t60060E8004282B00282B10A0d0  ONLINE       0     0     0
            c6t60060E8004283300283310A0d0  ONLINE       0     0     0
          mirror                           ONLINE       0     0     0
            c6t60060E8004282B00282B10A1d0  ONLINE       0     0     0
            c6t60060E8004283300283310A1d0  ONLINE       0     0     0
          mirror                           ONLINE       0     0     0
            c6t60060E8004282B00282B10A2d0  ONLINE       0     0     0
            c6t60060E8004283300283310A2d0  ONLINE       0     0     0
          mirror                           ONLINE       0     0     0
            c6t60060E8004282B00282B10A4d0  ONLINE       0     0     0
            c6t60060E8004283300283310A4d0  ONLINE       0     0     0
          mirror                           ONLINE       0     0     0
            c6t60060E8004282B00282B10A5d0  ONLINE       0     0     0
            c6t60060E8004283300283310A5d0  ONLINE       0     0     0
          mirror                           ONLINE       0     0     0
            c6t60060E8004282B00282B10A6d0  ONLINE       0     0     0
            c6t60060E8004283300283310A6d0  ONLINE       0     0     0
          mirror                           ONLINE       0     0     0
            c6t60060E8004282B00282B2022d0  ONLINE       0     0     0
            c6t60060E800428330028332022d0  ONLINE       0     0     0
          mirror                           ONLINE       0     0     0
            c6t60060E8004282B00282B2023d0  ONLINE       0     0     0
            c6t60060E800428330028332024d0  ONLINE       0     0     0
          mirror                           ONLINE       0     0     0
            c6t60060E8004282B00282B2024d0  ONLINE       0     0     0
            c6t60060E800428330028332023d0  ONLINE       0     0     0
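
My own guess at where that 4398h figure comes from (pure speculation about the
arithmetic, I have not looked at the zpool code): if the estimate is a
straight-line extrapolation of the fraction scanned so far, it will be huge
very early in a resilver. For example, assuming it had been running for about
an hour and three quarters when I looked:

  # pct_done=0.04
  # elapsed_hours=1.75
  # echo "scale=0; $elapsed_hours * (100 - $pct_done) / $pct_done" | bc
  4373

i.e. thousands of hours projected from less than two hours of history, which
is in the same ballpark as the 4398h43m shown above.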


I also have a question about sharing a ZFS filesystem from the global zone to a
local zone. Are there any known issues with this? We had an unfortunate sysadmin
who did this and our systems hung. We have no logs that show anything at all,
but I thought I'd ask just to be sure.

cheers,

//Mike


Re: [zfs-discuss] zfs resilvering

2008-09-26 Thread Mikael Kjerrman
define a lot :-)

We are doing about 7-8MB per second, which I don't think is a lot, but perhaps
it is enough to throw off the estimates? Anyhow, the resilvering completed about
4386h earlier than expected, so everything is OK now, but I still feel that the
way it arrives at that number is wrong.
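
As a rough sanity check (rounding freely, and taking the real duration to be
the remaining ~12-13 hours, i.e. the 4398h43m estimate minus the ~4386h it
finished early):

  # echo "scale=0; 7.5 * 3600 * 12.7 / 1024" | bc
  334

so on the order of 330GB copied at 7-8MB/s, which makes the original 4398h
projection look even further off.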

Any thoughts on my other issue?

cheers,

//Mike


Re: [zfs-discuss] Yet another zfs vs. vxfs comparison...

2007-07-21 Thread Mikael Kjerrman
Hi,

thanks for the reply, but surely there is a better explanation than that?
Otherwise it seems rather harsh to lose 20GB per 1TB, and I will most likely
have to answer this question when we discuss whether to migrate from VxFS
to ZFS.
 
 


[zfs-discuss] Yet another zfs vs. vxfs comparison...

2007-07-20 Thread Mikael Kjerrman
Hi,

sorry if I am bringing up old news, but I couldn't find a good answer searching
the previous posts (my mom always says I am bad at finding things :)

However, I noticed a difference in the available size when creating a ZFS
filesystem compared with a VxFS filesystem, i.e.

ZFS
zonedata/zfs                 392G   120G   272G    31%    /zfs

VxFS
/dev/vx/dsk/zonedg/zonevol
                             400G    78M   397G     1%    /vxfs

They are both built from 4 LUNs of the same size from the same array,
so where did the 8GB go?
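
Just to put a number on it (simple arithmetic on the figures above, nothing
authoritative about where ZFS actually spends the space):

  # echo "scale=1; 8 * 100 / 400" | bc
  2.0

so ZFS shows roughly 2% less, which at this ratio is on the order of
20GB per 1TB.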

thanks,

//Mike
 
 


[zfs-discuss] zpool unavailable after reboot

2006-07-17 Thread Mikael Kjerrman
Hi,

so it happened...

I have a 10-disk raidz pool running Solaris 10 U2, and after a reboot the whole
pool became unavailable after apparently losing a disk drive. (The drive seems
to be OK as far as I can tell from other commands.)

--- bootlog ---
Jul 17 09:57:38 expprd fmd: [ID 441519 daemon.error] SUNW-MSG-ID: ZFS-8000-CS, 
TYPE: Fault, VER: 1, SEVERITY: Major
Jul 17 09:57:38 expprd EVENT-TIME: Mon Jul 17 09:57:38 MEST 2006
Jul 17 09:57:38 expprd PLATFORM: SUNW,UltraAX-i2, CSN: -, HOSTNAME: expprd
Jul 17 09:57:38 expprd SOURCE: zfs-diagnosis, REV: 1.0
Jul 17 09:57:38 expprd EVENT-ID: e2fd61f7-a03d-6279-d5a5-9b8755fa1af9
Jul 17 09:57:38 expprd DESC: A ZFS pool failed to open.  Refer to 
http://sun.com/msg/ZFS-8000-CS for more information.
Jul 17 09:57:38 expprd AUTO-RESPONSE: No automated response will occur.
Jul 17 09:57:38 expprd IMPACT: The pool data is unavailable
Jul 17 09:57:38 expprd REC-ACTION: Run 'zpool status -x' and either attach the 
missing device or
Jul 17 09:57:38 expprd  restore from backup.
---

--- zpool status -x ---
bash-3.00# zpool status -x
  pool: data
 state: FAULTED
status: One or more devices could not be opened.  There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data        UNAVAIL      0     0     0  insufficient replicas
          c1t0d0    ONLINE       0     0     0
          c1t1d0    ONLINE       0     0     0
          c1t2d0    ONLINE       0     0     0
          c1t3d0    ONLINE       0     0     0
          c2t0d0    ONLINE       0     0     0
          c2t1d0    ONLINE       0     0     0
          c2t2d0    ONLINE       0     0     0
          c2t3d0    ONLINE       0     0     0
          c2t4d0    ONLINE       0     0     0
          c1t4d0    UNAVAIL      0     0     0  cannot open
--

The problem as I see it is that the pool should be able to handle one failed
disk, no? And the online, attach, and replace commands don't work while the
pool is unavailable. I've filed a case with Sun, but thought I'd ask around
here to see if anyone has experienced this before.
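
For reference, a raidz pool normally shows up in zpool status with the disks
grouped under a raidz vdev (labelled raidz or raidz1 depending on the release)
rather than sitting directly under the pool. Sketched from memory and trimmed,
so don't take the exact formatting literally:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            ...
            c1t4d0  ONLINE       0     0     0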


cheers,

//Mikael
 
 