Thanks everyone. Your inputs helped me a lot.
The 'rpool/ROOT' mountpoint is set to 'legacy' as I don't see any reason to
mount it. But I am not certain whether that could cause any issue in the
future, or whether it is the right thing to do. Any suggestions?
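For reference, a minimal sketch of checking and setting this property (assuming a Solaris-style root pool named rpool; the Solaris installer itself typically leaves rpool/ROOT at legacy, so this is generally considered safe):

```shell
# Show how the boot-environment container and its children are mounted.
zfs get -r mountpoint rpool/ROOT

# rpool/ROOT is just a container for boot environments; 'legacy'
# keeps 'zfs mount -a' from mounting it automatically.
zfs set mountpoint=legacy rpool/ROOT
```

The boot environment datasets beneath it (e.g. rpool/ROOT/s10s_u8wos_08a) keep their own mountpoint of /, so booting should be unaffected.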
Thanks
Arjun
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
ot disk (one being Cloned)
Thanks
Arjun
On Fri, Apr 8, 2011 at 10:46 PM, Cindy Swearingen
<cindy.swearin...@oracle.com> wrote:
> Arjun,
>
> Yes, you can choose any name for the root pool, but an existing
> limitation is that you can't rename the root pool by using the
> zpo
Hi,
Let me add another query.
I would assume it is perfectly OK to choose any name for the root
pool, instead of 'rpool', during the OS install. Please correct me if
that is not the case.
Thanks
Arjun
On 4/8/11, Arjun YK wrote:
> Hello,
>
> I have a situation where a host, which is bo
rpool' before or after this export.
I cannot see how this can be achieved. So, I decided to live with the
name 'temp-rpool'. But is renaming 'rpool' a recommended or supported
practice?
Any help is appreciated.
Thanks
Arjun
# zfs set mountpoint=/rootdir temp-name/ROOT/s10s_u8wos_08a
# zfs mount temp-name/ROOT/s10s_u8wos_08a
# edit /mnt/rootdir/etc/hosts
The issue I have now is how do I put this pool 'temp-name' back to its original
name 'rpool' so that the LDOM can boot? Could someone s
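A pool is normally renamed at import time, so one hedged sketch, assuming the guest is shut down and its disk is accessible from the control domain or from boot media (this cannot be done from a system booted off the pool itself):

```shell
# Import the pool under its original name, relocating mounts under
# /mnt so nothing collides with the running system's filesystems.
zpool import -R /mnt temp-name rpool

# Export it again so the LDOM guest can import it cleanly at boot.
zpool export rpool
```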
arate dataset", but no option is given to set a quota. Maybe others set
the quota manually.
So, I am trying to understand the best practice for /var in ZFS. Is it
exactly the same as in UFS, or is there anything different?
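As a hedged illustration of the manual approach (the dataset name mirrors the s10s_u8wos_08a boot environment mentioned elsewhere in the thread, and the sizes are arbitrary):

```shell
# Cap /var so runaway logs or crash dumps cannot fill the root pool.
zfs set quota=10G rpool/ROOT/s10s_u8wos_08a/var

# Optionally guarantee /var some minimum space as well.
zfs set reservation=2G rpool/ROOT/s10s_u8wos_08a/var
```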
Could someone share some thoughts ?
Some more insight:
I have the following zpool setup:
  aaa_zvol: 2 x 250 GB IDE drives in RAID-0
  storage: RAID-Z1 of
    1 x 500 GB IDE drive
    1 x 500 GB SATA drive
  //aaa_zvol/aaa_zvol (the zvol exported from the aaa_zvol pool)
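The //aaa_zvol/aaa_zvol path suggests the volume is shared out over the network. A minimal sketch of how such a zvol might have been created (the 400G size and the sparse flag are assumptions, not taken from the thread):

```shell
# Create a sparse (thin-provisioned) 400 GB volume in the aaa_zvol pool.
zfs create -s -V 400G aaa_zvol/aaa_zvol
```

The block device then appears under /dev/zvol/dsk/aaa_zvol/aaa_zvol and can be exported to clients from there.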
When I run the array in degraded mode, i.e. place one of the drives in the
offline state, t