Re: [zfs-discuss] Growing root pool ?
I'm not even trying to stripe it across multiple disks, I just want to add another partition (from the same physical disk) to the root pool. Perhaps that is a distinction without a difference, but my goal is to grow my root pool, not stripe it across disks or enable raid features (for now). Currently, my root pool is using c1t0d0s4 and I want to add c1t0d0s0 to the pool, but can't.

-Wyllys
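For reference, a minimal sketch of the failing operation, assuming the root pool is named rpool (the post doesn't name it). In these builds a ZFS root pool is limited to a single top-level vdev, so zpool add is rejected even when the new slice lives on the same physical disk:

  # zpool add rpool c1t0d0s0
  cannot add to 'rpool': root pool can not have multiple vdevs

(The exact error text may differ by build; attaching a mirror with zpool attach was the only supported way to put a second device under a root pool.)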
Re: [zfs-discuss] Growing root pool ?
Luckily, my system had a pair of identical, 232GB disks. The 2nd wasn't yet used, so by juggling mirrors (create 3 mirrors, detach the one to change, etc...), I was able to reconfigure my disks more to my liking - all without a single reboot or loss of data. I now have 2 pools - a 20GB root pool and a 210GB other pool, each mirrored on the other disk. If not for the extra disk and the wonderful zfs snapshot/send/receive feature it would have taken a lot more time and aggravation to get it straightened out.

-Wyllys
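A sketch of the juggling involved, with hypothetical device and pool names (c1t0d0/c1t1d0, datapool/newpool) since the post doesn't give them. The mirror shuffle is attach, wait for the resilver, then detach:

  # zpool attach rpool c1t0d0s4 c1t1d0s4
  # zpool status rpool
  # zpool detach rpool c1t0d0s4

(Watch zpool status and let the resilver finish before detaching the old side.) The data move is a recursive snapshot piped between pools:

  # zfs snapshot -r datapool@move
  # zfs send -R datapool@move | zfs receive -d newpool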
Re: [zfs-discuss] Growing root pool ?
I had a similar configuration until my recent re-install to snv91. Now I have just 2 ZFS pools - one for root+boot (big enough to hold multiple BEs and do LiveUpgrades) and another for the rest of my data.

-Wyllys
[zfs-discuss] ZFS boot (post build 88) questions
Are there any updated guides/blogs on how to configure ZFS boot on a build 88 or later system? If I already have an existing zpool, will I be able to just add a root/boot dataset, or does the root/boot dataset have to have its own pool? I have several working systems that have small UFS partitions for booting and LiveUpgrading, and then the rest of the disk(s) are in a large zpool. I'm hoping I can migrate smoothly to a ZFS boot system and have all my disks in the same zpool without destroying everything and starting over.

-Wyllys
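As I understand the constraints in that timeframe, the root/boot dataset could not live in an arbitrary existing pool: a root pool had to be a separate, single-vdev (optionally mirrored) pool on an SMI-labeled slice. A hedged sketch of the LiveUpgrade migration path, where rpool, the slice, and the BE name are made-up examples:

  # zpool create rpool c1t0d0s0
  # lucreate -n zfsBE -p rpool
  # luactivate zfsBE
  # init 6

lucreate -p copies the current UFS boot environment into the named ZFS pool; after activation and a reboot the system runs from the ZFS root.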
Re: [zfs-discuss] zpool remove problem
That doesn't work either. I ended up destroying the entire pool and starting over. There was a lot of data already in there, but it wasn't critical, luckily.
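For the record, the start-over path amounts to destroying and recreating the pool, then restoring from whatever backup exists - a sketch using the device names from the original post below:

  # zpool destroy bigpool
  # zpool create bigpool c0d1s7 c0d0s2

Everything in the old pool is gone at that point, which is why this is only tolerable when the data isn't critical.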
[zfs-discuss] zpool remove problem
I have a pool with 3 partitions in it. However, one of them is no longer valid: the disk was removed and modified so that the original partition is no longer available. I cannot get zpool to remove it from the pool. How do I tell ZFS to take this item out of the pool, if not with zpool remove?

Thanks,
Wyllys

Here is my pool:

  # zpool status
    pool: bigpool
   state: FAULTED
  status: One or more devices has experienced an error resulting in data
          corruption. Applications may be affected.
  action: Restore the file in question if possible. Otherwise restore the
          entire pool from backup.
     see: http://www.sun.com/msg/ZFS-8000-8A
   scrub: none requested
  config:

          NAME        STATE     READ WRITE CKSUM
          bigpool     FAULTED      0     0     0  insufficient replicas
            c0d1s7    ONLINE       0     0     0
            c0d0p4    UNAVAIL      0     0     0  cannot open
            c0d0s2    ONLINE       0     0     0

  errors: 1 data errors, use '-v' for a list

I want to remove c0d0p4:

  # zpool remove bigpool c0d0p4
  cannot remove c0d0p4: only inactive hot spares or cache devices can be removed
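For anyone hitting the same wall: the three partitions here are striped top-level vdevs, and as of these builds a top-level vdev can't be removed at all - zpool remove only handles hot spares and cache devices, as the error says. The other obvious commands don't help against the missing device either; a sketch of what to expect (exact messages may vary by build):

  # zpool detach bigpool c0d0p4
  cannot detach c0d0p4: only applicable to mirror and replacing vdevs

  # zpool online bigpool c0d0p4
  (the device still can't be opened, so it stays UNAVAIL)

With no redundancy there is nothing to resilver the missing vdev's data from, so short of restoring the original partition, recreating the pool from backup is the remaining option.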