> > bash-3.00# dd if=/dev/urandom of=/dev/dsk/c1t10d0 bs=1024 count=20480
> 
> A couple of things:
> 
> (1) When you write to /dev/dsk, rather than /dev/rdsk, the results
> are cached in memory.  So the on-disk state may have been unaltered.

That's why I also did a zpool export <poolname> followed by a zpool import 
<poolname>. According to the demo, that forces everything out to stable storage. 
My contention is that, other than the pool name and the disk slice information, 
I did exactly what was described in the self-heal demo, yet I did not see the 
appropriate response from ZFS.
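
For reference, and assuming I am reading the demo correctly, the sequence after 
the dd would be roughly the following (the pool name tank and the scrub/status 
step are my paraphrase of the demo, not a transcript of it):

bash-3.00# zpool export tank
bash-3.00# zpool import tank
bash-3.00# zpool scrub tank
bash-3.00# zpool status -v tank

The expectation would be for zpool status -v to report checksum errors on the 
corrupted device and show them being repaired from the good side of the mirror 
(assuming a mirrored pool, as in the demo).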

> 
> (2) When you write to /dev/rdsk/c-t-d, without specifying a slice,
> that actually refers to the entire disk *including* its EFI label.
> That was probably not your intent.  When you give ZFS a c-t-d
> name with no slice, we format it with an EFI label and put all
> of the content (everything but the label) in s0.


You lost me here. I guess I will have to delve deeper into Solaris disk naming 
conventions. I would prefer to avoid thinking in terms of 
slices/partitions/disks, etc.
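
If I am reading point (2) correctly, the suggestion is that a write like mine 
should have gone to the raw slice device rather than the whole-disk node, i.e. 
something along these lines (same disk as before, with s0 assumed to be the 
slice ZFS created when it labeled the disk):

bash-3.00# dd if=/dev/urandom of=/dev/rdsk/c1t10d0s0 bs=1024 count=20480

As I understand it, /dev/rdsk bypasses the buffer cache, and the trailing s0 
(slice 0) keeps the write inside the data slice instead of overwriting the EFI 
label; the c1t10d0 part is controller 1, target 10, disk 0.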
 
 