bash-3.00# zpool status
  pool: data1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data1       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
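For a one-line health summary across all pools, zpool status -x can be used; on a healthy system like this one it prints:

bash-3.00# zpool status -x
all pools are healthy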
Yeah, that's one of my concerns. Another is the /platform/sun4* partitions; I
know nothing about them.
--
I see now that data1 and data2 are ZFS file systems, which should have
been clear to me from your mount output.
A mounted UFS file system would look like this:
/stuff on /dev/dsk/c0t0d0s5 read/write/setuid/intr/largefiles/xattr...
ZFS file systems aren't associated with a particular device.
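By contrast, a mounted ZFS file system shows the dataset name where the device would be. A sketch of what such a mount line can look like (the dataset name, option list, and dev number here are illustrative):

/data1 on data1 read/write/setuid/devices/exec/xattr/atime/dev=2d90002 on Mon Jun  4 10:15:32 2007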
As you have ZFS on this machine, please send the output of:
zpool status
zfs list
P.S. The OS is named Solaris 10 (the release name) or SunOS 5.10 (the kernel name).
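Both names are visible from uname, for example:

bash-3.00# uname -sr
SunOS 5.10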
--
Xaverius,
Looks like you have a ZFS root file system on this system.
ZFS file systems are mounted automatically, even in Solaris 10
releases. There is no need to add ZFS file system entries to the
/etc/vfstab file for them to mount automatically; entries in this
file are only needed for UFS file systems.
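One way to confirm this is to check the mountpoint property: any dataset whose mountpoint is a path gets mounted automatically by ZFS at boot. A sketch, using the pool name from the zpool status output above (your values may differ):

bash-3.00# zfs get -r mountpoint data1
NAME   PROPERTY    VALUE    SOURCE
data1  mountpoint  /data1   default

The exception is mountpoint=legacy, where the dataset is mounted through mount(1M) and /etc/vfstab just like UFS.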
Our server uses Solaris 5.10, and I plan to restart it after installing a patch
for our application. My concern is about the file systems: will they be mounted
automatically after the restart or not? I'm new to Solaris, and I only know
that Solaris mounts file systems automatically based on /etc/vfstab. Can
anyone confirm this?
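For reference, a UFS entry in /etc/vfstab looks like the line below (device and mount-point names are placeholders, echoing the earlier /stuff example); as noted above, ZFS file systems need no such line unless they use legacy mountpoints:

#device             device              mount   FS    fsck  mount    mount
#to mount           to fsck             point   type  pass  at boot  options
/dev/dsk/c0t0d0s5   /dev/rdsk/c0t0d0s5  /stuff  ufs   2     yes      -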