Dennis Clarke wrote:

Today I attempted to upgrade to S10_U2 and migrate some mirrored UFS SVM
partitions to ZFS.

I used Live Upgrade to migrate from U1 to U2 and that went without a
hitch on my SunBlade 2000. And the initial conversion of one side of the
UFS mirrors to a ZFS pool and subsequent data migration went fine.
However, when I attempted to attach the second side's slices as a mirror
of the ZFS pool, all hell broke loose.

The system became more or less unresponsive after a few minutes. It
appeared that ZFS had consumed all available memory: the console filled
with errors about failed memory allocations.
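
(One workaround commonly suggested for that symptom is to cap the ARC. On
releases new enough to honor the tunable, it is a line in /etc/system plus
a reboot; the 512MB figure here is only an illustration:

  * cap the ZFS ARC at 512MB (value is in bytes)
  set zfs:zfs_arc_max = 0x20000000

On older kernels the cap apparently has to be set through mdb instead.)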

Any thoughts/suggestions?

The data I migrated consisted of about 80GB. Here's the general flow of
what I did:

1. break the SVM mirrors
  metadetach d5 d51
  metadetach d6 d61
  metadetach d7 d71
2. remove the SVM mirrors
  metaclear d51
  metaclear d61
  metaclear d71
3. combine the partitions with format. They were contiguous
  partitions on s4, s5 & s6 of the disk, so I just made a single
  partition on s4 covering all three and cleared s5 & s6.
4. create the pool
  zpool create storage cXtXdXs4
5. create three filesystems
  zfs create storage/app
  zfs create storage/work
  zfs create storage/extra
6. migrate the data
  cd /app; find . -depth -print | cpio -pdmv /storage/app
  cd /work; find . -depth -print | cpio -pdmv /storage/work
  cd /extra; find . -depth -print | cpio -pdmv /storage/extra
7. remove the other SVM mirrors
  umount /app; metaclear d5 d50
  umount /work; metaclear d6 d60
  umount /extra; metaclear d7 d70
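
As a rough post-copy sanity check before step 7, comparing totals along
these lines (same paths as above) would at least catch a grossly
incomplete copy; du figures won't match exactly across UFS and ZFS, so
treat it as a smoke test only:

  du -sk /app /storage/app
  du -sk /work /storage/work
  du -sk /extra /storage/extra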

Before you went any further here, did you issue a metastat command? And
did you have any metadbs on that other disk before you nuked those slices?
I did have metadbs on the s7 slices, but I removed them with metadb. I ran metastat a fair number of times as well.
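
For reference, deleting replicas from a slice is just metadb -d on it (the
device name below is hypothetical), and metadb with no arguments then
shows what is left:

  # metadb -d c0t1d0s7
  # metadb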

just asking here

I am hoping that you did a metaclear d5 and then a metaclear d50, in order
to clear out both the one-sided mirror and its component.

I'm just fishing around here ..
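
One quick way to confirm the cleanup took: metastat -p prints the
surviving configuration in md.tab form, so once every mirror and submirror
is cleared it should print nothing:

  # metastat -p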

8. combine the partitions on the second disk with format. Again, they
  were contiguous partitions on s4, s5 & s6, so I just made a single
  partition on s4 covering all three and cleared s5 & s6.

Okay .. I hope that SVM was not looking for them. I guess you would get a
nasty stack of errors in that case.
Yeah. Actually, format was pretty helpful: it told me which particular slices of the disk were in use by SVM. I didn't have any problems with the first side of the ZFS mirror.

9. attach the partition to the pool as a mirror
  zpool attach storage cXtXdXs4 cYtYdYs4
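
The attach also kicks off a background resilver of the new side; until it
finishes, its progress shows up on the scrub: line of the status output:

  # zpool status storage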

So you wanted a mirror?

Like :

# zpool status
  pool: storage
 state: ONLINE
 scrub: none requested
config:

       NAME          STATE     READ WRITE CKSUM
       storage       ONLINE       0     0     0
         mirror      ONLINE       0     0     0
           c0t0d0s4  ONLINE       0     0     0
           c0t1d0s4  ONLINE       0     0     0

errors: No known data errors

That sort of deal?
Yes, that's exactly right.

Something that just occurred to me, which I will have to look at when I get to the system: I don't recall whether I had any swap partitions enabled. If I did, that could help ball up the system as it tries to swap things out to disk in order to give space to ZFS.
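
Once at the console that is a quick check:

  # swap -l    (lists the configured swap devices)
  # swap -s    (summary of current swap allocation)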
