[zfs-discuss] how to make both disks of a mirrored pool readable separately

2008-08-03 Thread wan_jm
There are two disks in one ZFS pool, configured as a mirror, so both disks hold the same data. I want to know how I can migrate them into two separate pools, so that I can later read and write each one independently (just as with a UFS mirror, where each submirror can be mounted separately).

thanks.
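For what it is worth, a plain zpool detach invalidates the label on the detached disk, so that half cannot be imported as a pool on its own. Later ZFS releases (zpool version 22 and newer, so not the 2008 bits) add a zpool split subcommand for exactly this. A sketch, assuming a hypothetical mirrored pool tank and new pool name tank2:

```shell
# 'zpool split' detaches the second device of each top-level mirror
# and rewrites its labels so it becomes a new, importable pool:
zpool split tank tank2

# Bring the split-off copy online as an independent pool:
zpool import tank2

# tank and tank2 are now separate pools holding identical data,
# and each can be read and written on its own.
```

On releases without zpool split, the usual workaround is to back up the data and recreate two single-disk pools.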
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] where is the zpool status information kept?

2008-07-23 Thread wan_jm
The OS root (/) is initially on a mirror of /dev/dsk/c1t0d0s0 and /dev/dsk/c1t1d0s0. I then created home_pool as a ZFS mirror; here is the pool information:
  pool: omp_pool
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
omp_pool  ONLINE   0 0 0
  mirror  ONLINE   0 0 0
c1t3d0s0  ONLINE   0 0 0
c1t2d0s0  ONLINE   0 0 0

I then changed the root to the raw /dev/dsk/c1t1d0s0 slice and rebooted the system from it. ZFS was fine, since nothing had changed. Next I ran zpool detach and then zpool attach; ZFS was still fine in that root environment. But when I booted the system from /dev/dsk/c1t0d0s0 (disk 0), home_pool was UNAVAIL:
pool: home_pool
 state: UNAVAIL
status: One or more devices could not be used because the the label is missing 
or invalid.  There are insufficient replicas for the pool to continue
functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-5E
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
home_pool UNAVAIL  0 0 0  insufficient replicas
  mirror  UNAVAIL  0 0 0  insufficient replicas
c1t1d0s7  FAULTED  0 0 0  corrupted data
c1t0d0s7  FAULTED  0 0 0  corrupted data

What is the reason? In my opinion, some information must be kept in /. What is it, and where is it stored?
thanks.
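The information in question is most likely /etc/zfs/zpool.cache: at boot, ZFS opens the pools recorded in that file rather than scanning all devices. If an alternate root disk carries a stale copy of the cache (for example, one written before the detach/attach), its pools can show up UNAVAIL. A hedged sketch of how to inspect and refresh it:

```shell
# Dump the pool configurations recorded in /etc/zfs/zpool.cache:
zdb -C

# If the cached configuration is stale, export and re-import the pool
# so the cache file on the currently booted root is rewritten:
zpool export home_pool
zpool import home_pool
```

After the re-import, zpool status should show the pool ONLINE again from either boot environment whose cache was updated.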
 
 


Re: [zfs-discuss] zfs mount failed at boot stops network services.

2008-06-27 Thread wan_jm
I don't understand why a failed ZFS mount stops all the other network services. Maybe it is not a bug in ZFS; in my opinion it must be a bug in SMF. Do you agree?
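One way to check whether SMF dependencies are the cause is to inspect the service graph (a sketch; the service names are the stock Solaris ones):

```shell
# Show which services are in maintenance and why:
svcs -xv

# filesystem/local mounts ZFS file systems at boot; list everything
# that depends on it, directly or transitively -- those services stay
# down if the mount step fails:
svcs -D svc:/system/filesystem/local:default

# Conversely, list what a given network service depends on:
svcs -d svc:/network/inetd:default
```

If the network services appear in that dependents list, SMF is behaving as configured, and the question becomes whether filesystem/local should treat a single failed mount as fatal.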
 
 


Re: [zfs-discuss] zfs mount failed at boot stops network services.

2008-06-27 Thread wan_jm
I just tried it.
In my opinion, if the mountpoint directory is not empty, zpool should refuse to create the pool.

Consider a scenario where our software is running at a customer site: one of the customer's engineers performs the operation above, it fails, and he does nothing further. A few days later the OS reboots automatically, and the machine stops providing services.

Don't you think that is a bug?
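Until the behavior changes, a script that creates pools can guard against this itself. A defensive sketch (pool and device names are hypothetical):

```shell
# Refuse to run 'zpool create' if the intended mountpoint already
# has contents, so the mount cannot fail later at boot:
if [ -d /tank ] && [ -n "$(ls -A /tank)" ]; then
    echo "/tank is not empty; refusing to create pool" >&2
    exit 1
fi
zpool create tank c0d0p3
```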
 
 


[zfs-discuss] zfs mount failed at boot stops network services.

2008-06-27 Thread wan_jm
The procedure is as follows:
1. mkdir /tank
2. touch /tank/a
3. zpool create tank c0d0p3
   This command gives the following error message:
   cannot mount '/tank': directory is not empty
4. reboot
After the reboot, the OS can only be logged into from the console. Is this a bug?
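For the record, the pool itself is created in step 3; only the mount fails. A recovery sketch, assuming the same names as above:

```shell
# The pool exists even though the mount failed:
zpool status tank

# Remove the offending file, then mount the file system by hand:
rm /tank/a
zfs mount tank

# Alternatively, avoid the collision up front by choosing a
# different mountpoint at creation time:
#   zpool create -m /export/tank tank c0d0p3
```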
 
 


[zfs-discuss] how to migrate from UFS in a short time

2008-06-02 Thread wan_jm
In our system, we need to migrate from UFS to ZFS in a short time. According to the ZFS_Best_Practices_Guide, before migration we should unshare the UFS file systems and then unmount them, which means the service on the machine must stop for the whole migration. But we cannot afford an outage that long. What can I do?
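One common way to keep the outage short is an incremental two-pass copy, for example with rsync (a sketch; the paths are hypothetical):

```shell
# First pass: bulk copy while the UFS file system is still live
# and shared -- the service keeps running during this step.
rsync -a /export/ufs_data/ /tank/data/

# Cut-over: stop the service, unshare/unmount the UFS side, then a
# short final pass picks up only files changed since the first pass:
rsync -a --delete /export/ufs_data/ /tank/data/

# Point the service (and any NFS shares) at /tank/data and restart.
```

The downtime is then roughly the duration of the second pass, which only has to transfer the data that changed during the first.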
 
 