[zfs-discuss] zpool replace not working, any ideas?

2008-11-08 Thread Aaron Theodore
I have been trying to replace a disk in a raidz1 zpool for a few days
now; whatever I try, ZFS keeps using the original disk rather than the
replacement.

I'm running snv_95
-
  pool: tank
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c3t1d0  ONLINE       0     0     0
-
I'm trying to replace c3t0d0; so far I have tried a number of ways...
1. Offline the disk, unconfigure it with cfgadm, hotplug the new disk, and run
zpool replace (rough command shapes are summarized at the end of this attempt).
While replacing I get...
(start)
replacing       DEGRADED     0     0     0
  c3t0d0s0/o    FAULTED      0     0     0  corrupted data
  c3t0d0        ONLINE       0     0     0
... (end)
replacing       DEGRADED     0     0   206
  c3t0d0s0/o    FAULTED      0     0     0  corrupted data
  c3t0d0        ONLINE       0     0     0
- Tried to detach the old disk c3t0d0s0/o, but ZFS said it was being used.
- Tried exporting the pool and then reimporting it; the faulted disk now shows
up differently:
replacing                 DEGRADED     0     0     0
  10193049639260089137    FAULTED      0     0     0  was /dev/dsk/c3t0d0s0/old
  c3t0d0                  ONLINE       0     0     0
- Tried scrubbing a few times as well as rebooting, then gave up.
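
For completeness, the rough shape of the commands for attempt 1 (the cfgadm
attachment point below is from memory and may not match my hardware exactly):

  zpool offline tank c3t0d0
  cfgadm -c unconfigure sata1/0     # attachment point is illustrative
  # ...physically swap the disk, then:
  cfgadm -c configure sata1/0
  zpool replace tank c3t0d0

and the detach attempts were of the form:

  zpool detach tank c3t0d0s0/o
  zpool detach tank 10193049639260089137   # the GUID zpool status shows after the reimport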

2. Reinserted the original disk and ran a scrub to ensure everything was happy.
- Tried hotplugging the new disk via USB 2.0, then doing a replace
  (zpool replace tank c3t0d0 c4t0d0):
replacing  ONLINE   0 0 0
  c3t0d0   ONLINE   0 0 0
  c4t0d0   ONLINE   0 0 0
- At the end of the scrub/replace, ZFS still refuses to let me remove c3t0d0,
  and when accessing the pool it is still the disk being used, while c4t0d0 is not.
- Scrubbed a few more times after detaching c4t0d0.
- Tried offlining c3t0d0 and then doing a replace (zpool replace tank c3t0d0 c4t0d0):
replacing  DEGRADED 0 0 0
  c3t0d0   OFFLINE  0 0 0
  c4t0d0   ONLINE   0 0 0
At this point I'm waiting for the resilver to complete (ETA 4h 20min).
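
For reference, I'm just watching the resilver progress (and the ETA above)
with plain zpool status:

  zpool status tank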


So, any ideas on how to fix this?


Also, when resilvering I usually get:

errors: Permanent errors have been detected in the following files:
:<0x0>

but that goes away once the resilver completes


thanks

Aaron
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Migrating to ZFS "root"/"boot" with system in several datasets

2008-11-08 Thread Jesus Cea

>> Any advice? Suggestions/alternative approaches welcome.
> One obvious question - why?

Two reasons:

1. Backup policies and ZFS properties.

2. I don't have enough spare space to "rejoin" all system slices in a
single one.

I'm thinking of messing with the ICF.* files. Seems easy enough to try.
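
If I remember correctly, the ICF.* files under /etc/lu have one line per
filesystem of a boot environment, roughly of the form

  Solaris10u6:/usr/openwin:/dev/md/dsk/d3003:ufs:<size>

(BE name, mountpoint, device, filesystem type, size), but I haven't verified
the exact field order, so treat that as a guess.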

- --
Jesus Cea Avion _/_/  _/_/_/_/_/_/
[EMAIL PROTECTED] - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:[EMAIL PROTECTED] _/_/_/_/  _/_/_/_/_/
.  _/_/  _/_/_/_/  _/_/  _/_/
"Things are not so easy"  _/_/  _/_/_/_/  _/_/_/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/_/_/_/  _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS, Kernel Panic on import

2008-11-08 Thread Andrew
So I tried a few more things...
I think the combination of the following settings in /etc/system made a difference:
set pcplusmp:apic_use_acpi=0
set sata:sata_max_queue_depth = 0x1
set zfs:zfs_recover=1 <<< I had this before
set aok=1   <<< I had this before too
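
Note that /etc/system settings only take effect after a reboot; to double-check
that the kernel picked them up, I believe you can read the live values with mdb,
for example:

  echo "zfs_recover/D" | mdb -k
  echo "aok/D" | mdb -k

(though I haven't verified that every one of the variables above is visible
under that exact name).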

I crossed my fingers, and it actually imported this time... somehow.

solaria ~ # zpool status
  pool: itank
 state: ONLINE
 scrub: scrub in progress for 0h7m, 2.76% done, 4h33m to go
config:

        NAME         STATE     READ WRITE CKSUM
        itank        ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            c12t1d0  ONLINE       0     0     0
            c13t0d0  ONLINE       0     0     0
            c11t0d0  ONLINE       0     0     0
            c13t1d0  ONLINE       0     0     0
            c11t1d0  ONLINE       0     0     0

Running some scrubs on it now, and I HOPE everything is okay...

Anything else you suggest I try before it's considered stable?
Thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Migrating to ZFS "root"/"boot" with system in several datasets

2008-11-08 Thread Ian Collins
Jesus Cea wrote:
> Hi, everybody.
>
> I'm just trying to upgrade my Solaris 10 Update 6 system from UFS to ZFS, but I
> want to keep different portions of the OS in different ZFS datasets,
> just as I have been doing until now. For example, my script to upgrade from
> Update 5 to Update 6 was:
>
> """
> [EMAIL PROTECTED] /]# cat z-live_upgrade-Solaris10u6
> lucreate -n Solaris10u6 \
> -m /:/dev/md/dsk/d0:ufs \
> -m /usr/openwin:/dev/md/dsk/d3003:ufs \
> -m /usr/dt:/dev/md/dsk/d3004:ufs \
> -m /var/sadm:/dev/md/dsk/d3005:ufs \
> -m /usr/jdk:/dev/md/dsk/d3006:ufs \
> -m /opt/sfw:/dev/md/dsk/d3007:ufs \
> -m /opt/staroffice8:/dev/md/dsk/d3008:ufs \
> -m /usr/sfw:/dev/md/dsk/d3023:ufs
> """
>
> I would like to be able to place these filesystems in different datasets
> under ZFS root/boot, but the "-m" option to "lucreate" is not supported when
> upgrading to ZFS.
>
> I would like to have something like:
>
> /pool/ROOT
> /pool/ROOT/Sol10u6ZFS
> /pool/ROOT/Sol10u6ZFS/usr/openwin  <- I want this!
> /pool/ROOT/Sol10u6ZFS/usr/dt   <- I want this!
> ...
> etc.
>
> Any advice? Suggestions/alternative approaches welcome.
One obvious question - why?

I think it best to keep everything installed by the OS together, in order
to maintain a consistent system across upgrades, and to keep additional
software (like sfw) in another pool, with appropriate mountpoints.

You can do this by simply moving the data to the alternative location
before migrating.
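
A rough sketch of what I mean, assuming a separate data pool named "tank"
(pool, dataset and path names here are only illustrative):

  zfs create tank/sfw
  cd /opt/sfw && find . -depth -print | cpio -pdm /tank/sfw   # copy the data across
  zfs set mountpoint=/opt/sfw tank/sfw   # once the old UFS slice is unmounted and out of vfstab

After that, the OS filesystems are all that lucreate has to deal with.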

-- 
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Migrating to ZFS "root"/"boot" with system in several datasets

2008-11-08 Thread Jesus Cea

Hi, everybody.

I'm just trying to upgrade my Solaris 10 Update 6 system from UFS to ZFS, but I
want to keep different portions of the OS in different ZFS datasets,
just as I have been doing until now. For example, my script to upgrade from
Update 5 to Update 6 was:

"""
[EMAIL PROTECTED] /]# cat z-live_upgrade-Solaris10u6
lucreate -n Solaris10u6 \
-m /:/dev/md/dsk/d0:ufs \
-m /usr/openwin:/dev/md/dsk/d3003:ufs \
-m /usr/dt:/dev/md/dsk/d3004:ufs \
-m /var/sadm:/dev/md/dsk/d3005:ufs \
-m /usr/jdk:/dev/md/dsk/d3006:ufs \
-m /opt/sfw:/dev/md/dsk/d3007:ufs \
-m /opt/staroffice8:/dev/md/dsk/d3008:ufs \
-m /usr/sfw:/dev/md/dsk/d3023:ufs
"""

I would like to be able to place these filesystems in different datasets
under ZFS root/boot, but the "-m" option to "lucreate" is not supported when
upgrading to ZFS.
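
As far as I can tell, the only supported form when migrating to a ZFS root is
something along the lines of (the pool name "rpool" is just an example):

  lucreate -n Sol10u6ZFS -p rpool

with no way to pass per-filesystem "-m" mappings.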

I would like to have something like:

/pool/ROOT
/pool/ROOT/Sol10u6ZFS
/pool/ROOT/Sol10u6ZFS/usr/openwin  <- I want this!
/pool/ROOT/Sol10u6ZFS/usr/dt   <- I want this!
...
etc.

Any advice? Suggestions/alternative approaches welcome.

- --
Jesus Cea Avion _/_/  _/_/_/_/_/_/
[EMAIL PROTECTED] - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:[EMAIL PROTECTED] _/_/_/_/  _/_/_/_/_/
.  _/_/  _/_/_/_/  _/_/  _/_/
"Things are not so easy"  _/_/  _/_/_/_/  _/_/_/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/_/_/_/  _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss