Re: [zfs-discuss] Problem adding device to mirror

2008-12-21 Thread Richard Elling
Hi Walter,
did you try the procedure described in the ZFS Trouble Shooting Guide?
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Resolving_ZFS_Mount_Point_Problems_That_Prevent_Successful_Booting
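For reference, the kind of procedure that guide walks through starts by
importing the root pool from the live CD under an alternate root and then
checking the boot environment datasets and their mountpoints.  A minimal
sketch, assuming the pool becomes importable once the slice is visible again
(dataset names will differ per system):

  # import under an alternate root so rpool's datasets don't mount over
  # the live CD's own filesystems
  pfexec zpool import -f -R /a rpool
  # then inspect the boot environment datasets and their mountpoints
  pfexec zfs list -r -o name,mountpoint rpool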
 -- richard


Walter White wrote:
> Hi all,
>
> I attempted to mirror my rpool with a USB drive.  I tried to create a
> partition of the same size so the two halves of the pool would be identical.
> I believe that command completed successfully; I then ran zpool attach rpool
> dev1 dev2, and that is when my computer hung.  I powered down and started
> back up to find that GRUB had no configuration and I could not import my
> rpool.
>
> After booting into the live CD, I found that my root partition no longer
> existed.  I re-created it using the format command and now see this:
>
> [code]
> j...@opensolaris:~$ pfexec zpool import
>   pool: rpool
>     id: 3281909672341943803
>  state: UNAVAIL
> status: The pool was last accessed by another system.
> action: The pool cannot be imported due to damaged devices or data.
>    see: http://www.sun.com/msg/ZFS-8000-EY
> config:
>
>         rpool       UNAVAIL  insufficient replicas
>           c4t0d0s0  UNAVAIL  corrupted data
> [/code]
>
> [code]
> Total disk cylinders available: 30398 + 2 (reserved cylinders)
>
> Part      Tag    Flag     Cylinders         Size            Blocks
>   0       root    wu       1 - 30394      232.83GB    (30394/0/0) 488279610
>   1 unassigned    wu       0                  0       (0/0/0)             0
>   2     backup    wu       0 - 30394      232.84GB    (30395/0/0) 488295675
>   3 unassigned    wu       0                  0       (0/0/0)             0
>   4 unassigned    wu       0                  0       (0/0/0)             0
>   5 unassigned    wu       0                  0       (0/0/0)             0
>   6 unassigned    wu       0                  0       (0/0/0)             0
>   7 unassigned    wu       0                  0       (0/0/0)             0
>   8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
>   9 unassigned    wu       0                  0       (0/0/0)             0
> [/code]
>
> [code]
> j...@opensolaris:~$ pfexec prtvtoc /dev/rdsk/c4t0d0s2
> * /dev/rdsk/c4t0d0s2 partition map
> *
> * Dimensions:
> * 512 bytes/sector
> *  63 sectors/track
> * 255 tracks/cylinder
> *   16065 sectors/cylinder
> *   30400 cylinders
> *   30398 accessible cylinders
> *
> * Flags:
> *   1: unmountable
> *  10: read-only
> *
> * Unallocated space:
> *        First     Sector    Last
> *        Sector     Count    Sector
> *            0      16065     16064
> *    488311740      32130 488343869
> *
> *                          First     Sector    Last
> * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
>        0      2    01      16065 488279610 488295674
>        2      5    01          0 488295675 488295674
>        8      1    01          0     16065     16064
>
> [/code]
>
> Can I recover my files?  Please let me know if you need more information.
>
>
> Thanks,
> Walter
>   



Re: [zfs-discuss] Problem adding device to mirror

2008-12-21 Thread Walter White
After rebooting from the live CD back to the actual system, I thought I was
home free.  I can now see my GRUB menu and it does start to boot.  However,
after about 2 seconds it reboots; I am guessing it cannot find the other
important labels.

Is there some way I can restore these labels to recover my data?


Thanks,
Walter
-- 
This message posted from opensolaris.org


[zfs-discuss] Problem adding device to mirror

2008-12-21 Thread Walter White
Hi all,

I attempted to mirror my rpool with a USB drive.  I tried to create a partition
of the same size so the two halves of the pool would be identical.  I believe
that command completed successfully; I then ran zpool attach rpool dev1 dev2,
and that is when my computer hung.  I powered down and started back up to find
that GRUB had no configuration and I could not import my rpool.

After booting into the live CD, I found that my root partition no longer
existed.  I re-created it using the format command and now see this:

[code]
j...@opensolaris:~$ pfexec zpool import
  pool: rpool
    id: 3281909672341943803
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        rpool       UNAVAIL  insufficient replicas
          c4t0d0s0  UNAVAIL  corrupted data
[/code]

[code]
Total disk cylinders available: 30398 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wu       1 - 30394      232.83GB    (30394/0/0) 488279610
  1 unassigned    wu       0                  0       (0/0/0)             0
  2     backup    wu       0 - 30394      232.84GB    (30395/0/0) 488295675
  3 unassigned    wu       0                  0       (0/0/0)             0
  4 unassigned    wu       0                  0       (0/0/0)             0
  5 unassigned    wu       0                  0       (0/0/0)             0
  6 unassigned    wu       0                  0       (0/0/0)             0
  7 unassigned    wu       0                  0       (0/0/0)             0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 unassigned    wu       0                  0       (0/0/0)             0
[/code]

[code]
j...@opensolaris:~$ pfexec prtvtoc /dev/rdsk/c4t0d0s2
* /dev/rdsk/c4t0d0s2 partition map
*
* Dimensions:
* 512 bytes/sector
*  63 sectors/track
* 255 tracks/cylinder
*   16065 sectors/cylinder
*   30400 cylinders
*   30398 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*        First     Sector    Last
*        Sector     Count    Sector
*            0      16065     16064
*    488311740      32130 488343869
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      2    01      16065 488279610 488295674
       2      5    01          0 488295675 488295674
       8      1    01          0     16065     16064

[/code]

Can I recover my files?  Please let me know if you need more information.


Thanks,
Walter
-- 
This message posted from opensolaris.org


[zfs-discuss] ZFS recovery?

2008-12-21 Thread Walter White

Hi all,

I was attempting to mirror my root storage pool and apparently nuked or screwed
up my VTOC.  Using the format command, I have now been able to at least see
that the pool exists, but it is in a corrupt state.  Others I have talked to
suggested that you may have more ideas about what I should try.

I had installed OpenSolaris on my entire hard drive and was trying to mirror it
to an external USB drive when this happened.

This is my prtvtoc command output:

j...@opensolaris:~$ pfexec prtvtoc /dev/rdsk/c4t0d0s2
* /dev/rdsk/c4t0d0s2 partition map
*
* Dimensions:
* 512 bytes/sector
*  63 sectors/track
* 255 tracks/cylinder
*   16065 sectors/cylinder
*   30400 cylinders
*   30398 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*        First     Sector    Last
*        Sector     Count    Sector
*            0      16065     16064
*    488311740      32130 488343869
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      2    01      16065 488279610 488295674
       2      5    01          0 488295675 488295674
       8      1    01          0     16065     16064

And here is my partition table:
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wu       1 - 30394      232.83GB    (30394/0/0) 488279610
  1 unassigned    wu       0                  0       (0/0/0)             0
  2     backup    wu       0 - 30394      232.84GB    (30395/0/0) 488295675
  3 unassigned    wu       0                  0       (0/0/0)             0
  4 unassigned    wu       0                  0       (0/0/0)             0
  5 unassigned    wu       0                  0       (0/0/0)             0
  6 unassigned    wu       0                  0       (0/0/0)             0
  7 unassigned    wu       0                  0       (0/0/0)             0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 unassigned    wu       0                  0       (0/0/0)             0


When I first rebooted after the computer hung, there was no root partition
entry in this table at all.  I added the entry above, and now I can at least
see the pool:

j...@opensolaris:~$ pfexec zpool import
  pool: rpool
    id: 3281909672341943803
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        rpool       UNAVAIL  insufficient replicas
          c4t0d0s0  UNAVAIL  corrupted data


Is there anything I can do to recover the data?  Please let me know what I can 
do and if you need any other information.  Any help is greatly appreciated.
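Side note for whoever digs into this: the on-disk ZFS labels themselves can be
dumped with zdb, which should show whether the pool metadata survived the VTOC
damage.  A minimal check, against the same slice as above:

  # print the four ZFS labels (two at the start of the slice, two at the end);
  # if they show a pool config, the metadata is still there and the damage may
  # be limited to the partition table
  pfexec zdb -l /dev/rdsk/c4t0d0s0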


Thanks,
Walter



Re: [zfs-discuss] add new disk to existing rpool

2008-12-21 Thread Richard Elling
iman habibi wrote:
> Hello All,
> I want to add a second disk to an existing rpool that has one disk,
> but when I run this command it returns this error. Why?
>
> "device is too small"??
> (All of my disks are similar.)

The newly attached device (a slice in this case, because it is an
rpool) must have the same number of blocks, or more.
Even though you may have two disks with the same vendor
label, they may have a different number of available blocks.
prtvtoc provides this info relatively easily in the "sector
count" column.  Compare:
prtvtoc /dev/rdsk/c0t0d0s0
prtvtoc /dev/rdsk/c0t8d0s0  (!!)
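A quick way to pull out just the sector count of slice 0 on each device
(a sketch that simply wraps the two commands above; adjust the slice
numbers to whatever you are actually attaching):

  for d in c0t0d0s0 c0t8d0s0; do
    printf '%s: ' "$d"
    # skip the '*' comment lines; field 5 of the slice-0 row is its Sector Count
    prtvtoc /dev/rdsk/$d | awk '!/^\*/ && $1 == 0 {print $5}'
  done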
As Cyril notes, and the ZFS Administration Guide says,
you must use a slice for the rpool, not a whole disk.
 -- richard



Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-21 Thread dick hoogendijk
On Sun, 21 Dec 2008 07:36:07 PST
Uwe Dippel  wrote:

> [i]If you want to add the entire Solaris partition to the zfs pool as
> a mirror, use zpool attach -f rpool c1d0s0 c2d0s2[/i]
> 
> So my mistake in the first place (see first post), in short, was only
> the last digit: I ought to have used the complete drive (slice 2),
> instead of *thinking* that it is untouchable, and zfs/zpool would set
> up s0 properly to be used?
> 
> Dick, it seems we have to get used to the idea, that slice 2 is
> touchable, after all.

That may be, but all my mirror disks are like c0d0s0 c0d1s0, with s0
taking up the whole disk. On some there is an s2, on some there isn't.
Also, Sun itself mentions s0 when explaining bootable ZFS root; there is
no mention of s2. As far as I'm concerned bootable ZFS is on s0;
non-bootable drives have an EFI label ;-)

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS sxce snv104 ++
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] replicating a set of zfs snapshots

2008-12-21 Thread Juergen Nickelsen
Juergen Nickelsen  writes:

>> If that's what you want, do an incremental send (-I).
>
> To be a bit more detailed, first create the file system on the
> target machine by sending the first snapshot that you want to have
> replicated in full. After that, send each of the following snapshots
> incrementally, based on the previous.

Sorry, I was confused. I assumed the "-i" option; "-I" does, to my
knowledge, automatically what I have outlined here, but it does not yet
exist in the Solaris 10 release we use.
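For illustration, using the hypothetical names from the longer example further
down in this digest, the chain of individual "-i" sends would then collapse to
a single command on a release that has "-I":

  a# zfs send -I tank/myfs@snap1 tank/myfs@snap4 | ssh b zfs recv data/myfs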

-- 
It is easy to be blinded to the essential uselessness of computers by
the sense of accomplishment you get from getting them to run at all.
  -- Douglas Adams


Re: [zfs-discuss] replicating a set of zfs snapshots

2008-12-21 Thread Juergen Nickelsen
Ian Collins  writes:

>> I suspect that a 'zfs copy' or somesuch would be a nice utility
>> when wanting to shove a parent and all of it's snapshots to
>> another system.
>>   
> If that's what you want, do an incremental send (-I).

To be a bit more detailed, first create the file system on the
target machine by sending the first snapshot that you want to have
replicated in full. After that, send each of the following snapshots
incrementally, based on the previous.

So if you have this on host a:

tank/myfs@snap1
tank/myfs@snap2
tank/myfs@snap3
tank/myfs@snap4

and a pool "data" on host b, do it like this:

a# zfs send tank/myfs@snap1 | ssh b zfs recv -d data
a# zfs send -i tank/myfs@snap1 tank/myfs@snap2 | ssh b zfs recv data/myfs
a# zfs send -i tank/myfs@snap2 tank/myfs@snap3 | ssh b zfs recv data/myfs
a# zfs send -i tank/myfs@snap3 tank/myfs@snap4 | ssh b zfs recv data/myfs

Regards, Juergen.


Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-21 Thread Uwe Dippel
[i]If you want to add the entire Solaris partition to the zfs pool as a mirror, 
use
zpool attach -f rpool c1d0s0 c2d0s2[/i]

So my mistake in the first place (see first post), in short, was only the last 
digit: I ought to have used the complete drive (slice 2), instead of *thinking* 
that it is untouchable, and zfs/zpool would set up s0 properly to be used?

Dick, it seems we have to get used to the idea, that slice 2 is touchable, 
after all.

Richard, thanks for the hint about the bug. But we (well, at least I) would
definitely like to see something like what Johan wrote in the Admin Guide,
instead of the marketing drool!

Yes, I did what Richard proposed. It went through very well, and fast, and
after around one hour the resilvering had finished. I could then do the GRUB
step (here, too, the description is poor: one doesn't install the boot blocks
on the /dsk/ device, but on /rdsk/). On rebooting, the underlying (Linux) GRUB
in the MBR/first partition came up; with 'c' I could change the chainloading
from (hd0,1) to (hd1,1), and Nevada booted properly from the new drive. This
is so much more advanced! If only the bugs were actually taken out, and even
more importantly: it deserves proper, coherent, and consistent documentation!
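For the record, the "GRUB step" above is the installgrub command run against
the raw (/dev/rdsk/...) device of the new half of the mirror.  A minimal
sketch, with the device name taken from the attach command earlier in this
thread (adjust it to whatever slice actually holds the new mirror half):

  # put the GRUB boot blocks on the newly attached device (x86)
  pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2d0s2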
QED: see the subject, "How to create a basic new filesystem". The ZFS Admin
Guide is silent on this. It is silent on s2. It does not even tackle the topic
of multiple systems on one drive.

Sorry for taking your precious time, but had there been a useful guide, I 
surely would have not seen the need to ask in the first place.

Thanks for all your help!

Uwe
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] replicating a set of zfs snapshots

2008-12-21 Thread Elaine Ashton
Actually, this is pretty cool.

If I choose the last/latest/most recent snapshot and do it like so:

zfs send -R tank/filesystem/foo...@backup_date | ssh newsys zfs recv -d tank/test

It copies everything over as I had hoped. I suppose this was somewhat obvious 
after re-reading the options but could maybe be more explicitly mentioned in 
the doc.
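If the source box keeps taking snapshots after this initial copy, the same -R
form can also be used incrementally to catch the copy up, on releases where
send accepts the combination (snapshot names here are hypothetical, and recv
may additionally need -F if the target file systems have been mounted and
modified in the meantime):

zfs send -R -I tank/filesystem/foofs@backup_1 tank/filesystem/foofs@backup_2 |
    ssh newsys zfs recv -d tank/test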

Thanks, again. :)
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] replicating a set of zfs snapshots

2008-12-21 Thread Elaine Ashton
> If that's what you want, do an incremental send (-I).

Well, if I believe the documentation, -I will send 'all incremental streams
from one snapshot to a cumulative snapshot', which isn't quite what I want.  I
more or less want an exact duplicate of the current system, snapshots included,
on another system.

The -R option sounds more promising, as it supposedly sends a 'replication
stream of all descendent file systems' and claims that all properties,
snapshots, descendent file systems and clones are preserved. I'll give that a
try and see if that is, indeed, what I'm looking for.

Thanks :)
-- 
This message posted from opensolaris.org


[zfs-discuss] ZFS and AVS replication performance issues

2008-12-21 Thread Ahmed Kamal
Hi,

I have set up AVS replication between two zvols on two opensolaris-2008.11
nodes. I have been seeing BIG performance issues, so I tried to set up the
system to be as fast as possible using a couple of tricks. The detailed
setup and performance data are below:

*  A 100G zvol has been set up on each node of an AVS replicating pair
* A "ramdisk" has been set up on each node using the following command.
  This functions as a very fast logging disk!

  ramdiskadm -a ram1 10m

* The replication relationship has been setup using

  sndradm -E pri /dev/zvol/rdsk/gold/myzvol /dev/rramdisk/ram1 sec
/dev/zvol/rdsk/gold/myzvol /dev/rramdisk/ram1 ip async

* The AVS driver was configured to *not* write the disk bitmap to disk, but
rather to keep it in kernel memory and write it out only upon machine
shutdown. This is configured as follows:

  # grep bitmap_mode /usr/kernel/drv/rdc.conf
  rdc_bitmap_mode=2;

* The replication was configured to be in logging mode (To avoid any
possible network bottlenecks)

  #sndradm -P
  /dev/zvol/rdsk/gold/myzvol  <-  pri:/dev/zvol/rdsk/gold/myzvol
  autosync: off, max q writes: 4096, max q fbas: 16384, async threads:
2, mode: async, state: logging

=== Testing ===

All tests were performed using the following command line

  # dd if=/dev/zero of=/dev/zvol/rdsk/gold/xxVolNamexx oflag=dsync bs=256M count=10

I usually ran a couple of runs initially to avoid caching effects.

=== Results ===

The following results were recorded after an initial couple of runs, to avoid
cache effects:

Run#   dd count=N   Native Vol Throughput   Replicated Vol Throughput (logging mode)
1      4            42.2 MB/s               4.9 MB/s
2      4            52.8 MB/s               5.5 MB/s
3      10           50.9 MB/s               4.6 MB/s

As you can see, the replicated volume is almost 10 times slower, even though
no bitmap logging goes to a physical disk (only to the ramdisk, if not to
driver memory) and there is no network traffic at all.  It seems the AVS
kernel driver slows the system down considerably just for hooking every write
and flipping a kernel-memory bit per 32k of written disk space.
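A quick sanity check of how little bitmap bookkeeping that actually is,
assuming the 1-bit-per-32k granularity mentioned above (numbers purely
illustrative):

  # bytes of bitmap touched per 1 GB written (the count=4 runs): 1 GB / 32 KB / 8
  echo $(( (1024 * 1024 * 1024) / (32 * 1024) / 8 ))    # -> 4096

Four kilobytes of in-memory bitmap per gigabyte written clearly cannot account
for the throughput drop by itself, which supports the point above: whatever the
overhead is, it sits in the per-write interception path rather than in the
volume of bitmap data.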

Any suggestions as to why this is happening are most appreciated.
Best Regards


Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-21 Thread Johan Hartzenberg
On Sun, Dec 21, 2008 at 12:13 PM, dick hoogendijk  wrote:

> On Sat, 20 Dec 2008 17:02:31 PST
> Uwe Dippel  wrote:
>
> > Now I modified the slice s0, so that it doesn't overlap with s2 (the
> > whole disk) any longer:
> >
> > Part      Tag    Flag     Cylinders         Size            Blocks
> >   0       root    wm       3 - 10432      159.80GB    (10430/0/0) 335115900
> >   1 unassigned    wm       0                  0       (0/0/0)             0
> >   2     backup    wu       0 - 10441      159.98GB    (10442/0/0) 335501460
>
> As mentioned previously you do not need to fiddle with partitions and
> slices if you don't want to use less than the entire disk.

If you want to add the entire Solaris partition to the zfs pool as a mirror,
use
zpool attach -f rpool c1d0s0 c2d0s2

If you want to add the entire physical disk to the pool as a mirror, use
zpool attach rpool c1d0s0 c2d0p0

If you want to Extend the pool using the space in the entire Solaris
partition, use
zpool add -f rpool c2d0s2

If you want to Extend the pool using the entire physical disk, use
zpool add rpool c2d0p0

The -f to force is required to override the bug about s2 overlapping with
other slices.  The above assume you have not modified s2 to be anything
other than the entire Solaris partition, as is the default.

The only time to use anything other than s2 or p0 is when you specifically
want to use less than the whole partition or disk.  In that case you need to
define slices/partitions.
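One small follow-up that may be worth adding: after any of the attach variants
above, the pool resilvers onto the new device in the background, and

  zpool status rpool

shows whether the device was accepted and how far the resilver has progressed.
(For a root pool the new device also still needs the GRUB boot blocks installed
before it is actually bootable.)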


-- 
Any sufficiently advanced technology is indistinguishable from magic.
   Arthur C. Clarke

My blog: http://initialprogramload.blogspot.com


Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-21 Thread dick hoogendijk
On Sat, 20 Dec 2008 17:02:31 PST
Uwe Dippel  wrote:

> Now I modified the slice s0, so that it doesn't overlap with s2 (the
> whole disk) any longer:
> 
> Part      Tag    Flag     Cylinders         Size            Blocks
>   0       root    wm       3 - 10432      159.80GB    (10430/0/0) 335115900
>   1 unassigned    wm       0                  0       (0/0/0)             0
>   2     backup    wu       0 - 10441      159.98GB    (10442/0/0) 335501460

I have a few ZFS disks running here with -NO- s2. You don't -NEED- that
slice, you know. Setting it to 0,0 is fine. Just make sure you leave
part 0, 8 and 9 as they are and you'll be fine. ZFS will use s0.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS sxce snv104 ++
+ All that's really worth doing is what we do for others (Lewis Carrol)


Re: [zfs-discuss] SMART data

2008-12-21 Thread Carsten Aulbert


Mam Ruoc wrote:
>> Carsten wrote:
>> I will ask my boss about this (since he is the one
>> mentioned in the
>> copyright line of smartctl ;)), please stay tuned.
> 
> How is this going? I'm very interested too... 

Not much happening right now, December meetings, holiday season, ...

But thanks for pinging me - I tend to forget such things.

Carsten


Re: [zfs-discuss] add new disk to existing rpool

2008-12-21 Thread Cyril Plisko
On Sun, Dec 21, 2008 at 10:10 AM, iman habibi  wrote:
> Hello All,
> I want to add a second disk to an existing rpool that has one disk,
> but when I run this command it returns this error. Why?

That's because your c0t8d0 disk has an EFI label. With an EFI label, the
largest partition you can create is slightly smaller than the whole
disk (17 KB less, to be exact). That causes the zpool attach command to
fail (you cannot mirror from a bigger device to a smaller one). Another
thing is that you probably do not want an EFI label on rpool devices
anyway, since you cannot boot from an EFI-labeled disk today.
You may use the format -e command to change the label from EFI to SMI.
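Roughly, the relabel is interactive (prompts paraphrased below, not a verbatim
transcript), and afterwards the disk needs an SMI slice before the attach can
work:

  pfexec format -e c0t8d0
    # at the format> prompt: "label", then pick the SMI label type and confirm;
    # then use the partition menu to give s0 (almost) the whole disk and label again
  pfexec zpool attach rpool c0t0d0s0 c0t8d0s0

The exact slice layout is up to you; the point is that the attach target must
be a slice on an SMI-labeled disk with at least as many blocks as c0t0d0s0.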


> # format
> Searching for disks...done
>
> c0t11d0: configured with capacity of 34.18GB
>
>
> AVAILABLE DISK SELECTIONS:
>0. c0t0d0 
>   /p...@1f,4000/s...@3/s...@0,0
>1. c0t8d0 
>   /p...@1f,4000/s...@3/s...@8,0

See - the label on this disk hints that it is EFI rather than SMI.

>2. c0t10d0 
>   /p...@1f,4000/s...@3/s...@a,0
>3. c0t11d0 
>   /p...@1f,4000/s...@3/s...@b,0
>4. c0t12d0 
>   /p...@1f,4000/s...@3/s...@c,0
>

-- 
Regards,
Cyril


[zfs-discuss] add new disk to existing rpool

2008-12-21 Thread iman habibi
Hello All,
I want to add a second disk to an existing rpool that has one disk,
but when I run this command it returns this error. Why?

"device is too small"??
(All of my disks are similar.)
Here is my information:
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c0t0d0s0  ONLINE       0     0     0

errors: No known data errors

# zpool attach rpool c0t0d0s0 c0t8d0
cannot attach c0t8d0 to c0t0d0s0: device is too small

# format
Searching for disks...done

c0t11d0: configured with capacity of 34.18GB


AVAILABLE DISK SELECTIONS:
   0. c0t0d0 
  /p...@1f,4000/s...@3/s...@0,0
   1. c0t8d0 
  /p...@1f,4000/s...@3/s...@8,0
   2. c0t10d0 
  /p...@1f,4000/s...@3/s...@a,0
   3. c0t11d0 
  /p...@1f,4000/s...@3/s...@b,0
   4. c0t12d0 
  /p...@1f,4000/s...@3/s...@c,0

# df -ah
Filesystem                 size   used  avail capacity  Mounted on
rpool/ROOT/s10s_u6wos_07b
                            33G   4.2G    28G    14%    /
/devices                     0K     0K     0K     0%    /devices
ctfs                         0K     0K     0K     0%    /system/contract
proc                         0K     0K     0K     0%    /proc
mnttab                       0K     0K     0K     0%    /etc/mnttab
swap                      1014M   1.5M  1012M     1%    /etc/svc/volatile
objfs                        0K     0K     0K     0%    /system/object
sharefs                      0K     0K     0K     0%    /etc/dfs/sharetab
fd                           0K     0K     0K     0%    /dev/fd
rpool/ROOT/s10s_u6wos_07b/var
                            33G    67M    28G     1%    /var
swap                      1012M     0K  1012M     0%    /tmp
swap                      1012M    40K  1012M     1%    /var/run
rpool/export                33G    20K    28G     1%    /export
rpool/export/home           33G    18K    28G     1%    /export/home
rpool                       33G    94K    28G     1%    /rpool
-hosts                       0K     0K     0K     0%    /net
auto_home                    0K     0K     0K     0%    /home
lct:vold(pid546)             0K     0K     0K     0%    /vol

Why does it return this error?
Regards