On 12/02/08 03:21, jan damborsky wrote:
> Hi Dick,
>
> I am redirecting your question to zfs-discuss
> mailing list, where people are more knowledgeable
> about this problem and your question could be
> better answered.
>
> Best regards,
> Jan
>
>
> dick hoogendijk wrote:
>
>> I have s10u6 installed on my server.
>> zfs list (partly):
>> NAME                 USED  AVAIL  REFER  MOUNTPOINT
>> rpool               88.8G   140G  27.5K  /rpool
>> rpool/ROOT          20.0G   140G    18K  /rpool/ROOT
>> rpool/ROOT/s10BE2   20.0G   140G  7.78G  /
>>
>> But just now, on a newly installed s10u6 system I got rpool/ROOT with a
>> mountpoint "legacy"
>>

The mount point for /<rootpoolname>/ROOT is supposed to be "legacy" because
that dataset should never be mounted. It's just a "container" dataset to
group all the BEs.
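Just as an illustration (assuming the pool really is named rpool, as in your
listing), an existing pool can be brought into line with:

   # zfs get mountpoint rpool/ROOT
   # zfs set mountpoint=legacy rpool/ROOT

and if you create the container dataset by hand before the luupgrade, the
mount point can be set at creation time:

   # zfs create -o mountpoint=legacy rpool/ROOT

The BE datasets underneath (rpool/ROOT/s10BE2 in your case) carry their own
explicit mountpoints, so changing the container dataset doesn't touch them.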
>> The drives were different. On the latter (legacy) system it was not
>> formatted (yet) (in VirtualBox). On my server I switched from UFS to
>> ZFS, so I first created a rpool and then did a luupgrade into it.
>> This could explain the mountpoint /rpool/ROOT, but WHY the difference?
>> Why can't s10u6 install the same mountpoint on the new disk?
>> The server runs very well; is this "legacy" thing really needed?
>>

When you created the rpool, did you also explicitly create the rpool/ROOT
dataset? If you did create it and didn't set the mount point to "legacy",
that explains why you ended up with your original configuration. If you
didn't create the rpool/ROOT dataset yourself, and instead let LiveUpgrade
create it automatically, and LiveUpgrade set the mountpoint to /rpool/ROOT,
then that's a bug in LiveUpgrade (though a minor one, I think).

Lori

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss