Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-19 Thread Renil Thomas
Were you able to get more insight into this problem?
U7 did not have such problems.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS user quota, userused updates?

2009-10-19 Thread Jorgen Lundman


Is there a way to force ZFS to update or refresh the user quota/used values when
they do not reflect reality? Are there known ways for them to get out of sync
that we should avoid?


SunOS x4500-11.unix 5.10 Generic_141445-09 i86pc i386 i86pc
(Solaris 10 10/09 u8)


zpool1/sd01_mail   223M  15.6T   222M  /export/sd01/mail


# zfs userspace zpool1/sd01_mail
TYPENAMEUSED  QUOTA
POSIX User  1029   54.0M   100M

# df -h .
Filesystem size   used  avail capacity  Mounted on
zpool1/sd01_mail16T   222M16T 1%/export/sd01/mail


# ls -lhn
total 19600
-rw---   1 1029 21001.7K Oct 20 12:03 
1256007793.V4700025I1770M252506.vmx06.unix:2,S
-rw---   1 1029 21001.7K Oct 20 12:04 
1256007873.V4700025I1772M63715.vmx06.unix:2,S
-rw---   1 1029 21001.6K Oct 20 12:05 
1256007926.V4700025I1773M949133.vmx06.unix:2,S
-rw---   1 1029 2100 76M Oct 20 12:23 
1256009005.V4700025I1791M762643.vmx06.unix:2,S
-rw---   1 1029 2100 54M Oct 20 12:36 
1256009769.V4700034I179eM739748.vmx05.unix:2,S

-rw--T   1 1029 21002.0M Oct 20 14:39 file

The 54M file appears to be accounted for, but the 76M file is not. I recently
added a 2M file by chown to see if it was a local-disk vs. NFS problem. The
previous values had not updated for 2 hours.



# zfs get userused@1029 zpool1/sd01_mail
NAME  PROPERTY   VALUE  SOURCE
zpool1/sd01_mail  userused@1029  54.0M  local
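
(For what it's worth, the only thing I can think of to try, on the
assumption that the numbers are simply lagging behind pending txg syncs,
is to force a sync and re-read the values; I have no idea whether this
is expected to help:)

# sync
# zfs userspace zpool1/sd01_mail
# zfs get userused@1029 zpool1/sd01_mail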


Any suggestions would be most welcome,

Lund


--
Jorgen Lundman   | 
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Zpool without any redundancy

2009-10-19 Thread Espen Martinsen
Hi,
  This might be a stupid question, but I can't figure it out.

  Let's say I've chosen to live with a zpool without redundancy
  (SAN disks, which actually have RAID-5 in the disk cabinet).

m...@mybox:~# zpool status  BACKUP
  pool: BACKUP
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
BACKUP   ONLINE   0 0 0
  c0t200400A0B829BC13d0  ONLINE   0 0 0
  c0t200400A0B829BC13d1  ONLINE   0 0 0
  c0t200400A0B829BC13d2  ONLINE   0 0 0

errors: No known data errors


The question:
Would it be a good idea to turn OFF the 'checksum' property of the ZFS
  filesystems?

I know the manual says it is not recommended to turn off integrity checking of
user data, but what will happen if the algorithm actually finds an error? I
would not have any way to fix it, except to delete/overwrite the data. (Will I
be able to point out which files are involved?)
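
(My rough understanding, and please correct me if this is wrong, is that
with checksums left on, affected files are listed by "zpool status -v",
and that the property could be disabled per filesystem; the pool and
filesystem names below are just my own:)

# zpool status -v BACKUP
# zfs set checksum=off BACKUP/somefs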


Yours

Espen Martinsen
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Local user security with move from Mac OSX to OpenSolaris

2009-10-19 Thread Boyd Waters
I have many disks created with ZFS on Mac OSX that I'm trying to move to 
OpenSolaris 2009.6

I created an OpenSolaris user with the same numeric userid as the Mac system. 
Then I performed a [b]zpool import macpool[/b] to mount the data.

It's all there, fine, but the OpenSolaris (non-root) user cannot access the
data. [i]chown[/i] and [i]chmod[/i] do not seem to help.

ls -l /macpool

drwxr-xr-x  2 bwaters  staff  68 Oct 02 00:05 foobar

ls -l /macpool/foobar

permission denied


Any way to get OpenSolaris users to see this data?
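
(In case it helps, here is roughly what I plan to check next; the property
names are standard ZFS ones, the path and user are from my setup:)

# ls -dV /macpool/foobar
# zfs get aclmode,aclinherit macpool
# id bwaters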
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Stupid to have 2 disk raidz?

2009-10-19 Thread James Dickens
On Fri, Oct 16, 2009 at 1:40 PM, Erik Trimble  wrote:

> Prasad Unnikrishnan wrote:
>
>> Add the new disk - start writing new blocks to that disk, instead of
>> waiting to re-layout all the stripes. And when the disk is not active, do
>> slow/safe copy on write to balance all the blocks?
>>
>>
> Conceptually, yes, doing a zpool expansion while the pool is live isn't
> hard to map out.
>
> As always, the devil is in the details. In this case, the primary problem
> I'm having is maintaining two different block mapping schemes (one for the
> old disk layout, and one for the new disk layout) and still being able to
> interrupt the expansion process.  My primary problem is that I have to keep
> both schemes in memory during the migration, and if something should happen
> (i.e. reboot, panic, etc) then I lose the current state of the zpool, and
> everything goes to hell in a handbasket.
>

In a way I think the key to making this work is the code for device removal:
when you remove a device, you take data from one device and put it on another.
It should be much easier to reuse that code and move 1/N of the existing data
to a new device using functions from the device-removal work. I could be
wrong, but it may not be as far off as people fear. Device removal was
mentioned in the "Next word for ZFS" video.

James Dickens
http://uadmin.blogspot.com
jamesd...@gmail.com


>
>
> --
> Erik Trimble
> Java System Support
> Mailstop:  usca22-123
> Phone:  x17195
> Santa Clara, CA
> Timezone: US/Pacific (GMT-0800)
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help! System panic when pool imported

2009-10-19 Thread Albert Chin
On Mon, Oct 19, 2009 at 03:31:46PM -0700, Matthew Ahrens wrote:
> Thanks for reporting this.  I have fixed this bug (6822816) in build  
> 127.

Thanks. I just installed OpenSolaris Preview based on 125 and will
attempt to apply the patch you made to this release and import the pool.

> --matt
>
> Albert Chin wrote:
>> Running snv_114 on an X4100M2 connected to a 6140. Made a clone of a
>> snapshot a few days ago:
>>   # zfs snapshot a...@b
>>   # zfs clone a...@b tank/a
>>   # zfs clone a...@b tank/b
>>
>> The system started panicing after I tried:
>>   # zfs snapshot tank/b...@backup
>>
>> So, I destroyed tank/b:
>>   # zfs destroy tank/b
>> then tried to destroy tank/a
>>   # zfs destroy tank/a
>>
>> Now, the system is in an endless panic loop, unable to import the pool
>> at system startup or with "zpool import". The panic dump is:
>>   panic[cpu1]/thread=ff0010246c60: assertion failed: 0 == 
>> zap_remove_int(mos, ds_prev->ds_phys->ds_next_clones_obj, obj, tx) (0x0 == 
>> 0x2), file: ../../common/fs/zfs/dsl_dataset.c, line: 1512
>>
>>   ff00102468d0 genunix:assfail3+c1 ()
>>   ff0010246a50 zfs:dsl_dataset_destroy_sync+85a ()
>>   ff0010246aa0 zfs:dsl_sync_task_group_sync+eb ()
>>   ff0010246b10 zfs:dsl_pool_sync+196 ()
>>   ff0010246ba0 zfs:spa_sync+32a ()
>>   ff0010246c40 zfs:txg_sync_thread+265 ()
>>   ff0010246c50 unix:thread_start+8 ()
>>
>> We really need to import this pool. Is there a way around this? We do
>> have snv_114 source on the system if we need to make changes to
>> usr/src/uts/common/fs/zfs/dsl_dataset.c. It seems like the "zfs
>> destroy" transaction never completed and it is being replayed, causing
>> the panic. This cycle continues endlessly.
>>
>>   
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>

-- 
albert chin (ch...@thewrittenword.com)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] heads up on SXCE build 125 (LU + mirrored root pools)

2009-10-19 Thread Cindy Swearingen


Hi everyone,

Currently, the device naming changes in build 125 mean that you cannot
use Solaris Live Upgrade to upgrade or patch a ZFS root dataset in a
mirrored root pool.

If you are considering this release for the ZFS log device removal
feature, then also consider that you will not be able to patch or
upgrade the ZFS root dataset in a mirrored root pool in this release
with Solaris Live Upgrade. Unmirrored root pools are not impacted.
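
(To check whether a root pool is mirrored before attempting a Live
Upgrade, something like the following is enough; the pool name is the
usual default:)

# zpool status rpool
# lustatus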

OpenSolaris releases are not impacted by the build 125 device naming
changes.

I don't have a CR yet that covers this problem, but we will keep you
posted.

Thanks,

Cindy


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help! System panic when pool imported

2009-10-19 Thread Matthew Ahrens
Thanks for reporting this.  I have fixed this bug (6822816) in build 
127.  Here is the evaluation from the bug report:


The problem is that the clone's dsobj does not appear in the origin's 
ds_next_clones_obj. 

The bug can occur under certain circumstances if there was a
"botched upgrade" when doing "zpool upgrade" from pool version 10 or
earlier to version 11 or later, while there was a clone in the pool.


The problem is caused because upgrade_clones_cb() failed to call 
dmu_buf_will_dirty(origin->ds_dbuf).


This bug can have several effects:

1. assertion failure from dsl_dataset_destroy_sync()
2. assertion failure from dsl_dataset_snapshot_sync()
3. assertion failure from dsl_dataset_promote_sync()
4. incomplete scrub or resilver, potentially leading to data loss

The fix will address the root cause, and also work around all of these 
issues on pools that have already experienced the botched upgrade, 
whether or not they have encountered any of the above effects.


Anyone who may have a botched upgrade should run "zpool scrub" after 
upgrading to bits with the fix in place (build 127 or later).
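
(For example, with an illustrative pool name, after booting build 127 or
later bits:)

# zpool scrub tank
# zpool status -v tank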


--matt

Albert Chin wrote:

Running snv_114 on an X4100M2 connected to a 6140. Made a clone of a
snapshot a few days ago:
  # zfs snapshot a...@b
  # zfs clone a...@b tank/a
  # zfs clone a...@b tank/b

The system started panicing after I tried:
  # zfs snapshot tank/b...@backup

So, I destroyed tank/b:
  # zfs destroy tank/b
then tried to destroy tank/a
  # zfs destroy tank/a

Now, the system is in an endless panic loop, unable to import the pool
at system startup or with "zpool import". The panic dump is:
  panic[cpu1]/thread=ff0010246c60: assertion failed: 0 == zap_remove_int(mos, 
ds_prev->ds_phys->ds_next_clones_obj, obj, tx) (0x0 == 0x2), file: 
../../common/fs/zfs/dsl_dataset.c, line: 1512

  ff00102468d0 genunix:assfail3+c1 ()
  ff0010246a50 zfs:dsl_dataset_destroy_sync+85a ()
  ff0010246aa0 zfs:dsl_sync_task_group_sync+eb ()
  ff0010246b10 zfs:dsl_pool_sync+196 ()
  ff0010246ba0 zfs:spa_sync+32a ()
  ff0010246c40 zfs:txg_sync_thread+265 ()
  ff0010246c50 unix:thread_start+8 ()

We really need to import this pool. Is there a way around this? We do
have snv_114 source on the system if we need to make changes to
usr/src/uts/common/fs/zfs/dsl_dataset.c. It seems like the "zfs
destroy" transaction never completed and it is being replayed, causing
the panic. This cycle continues endlessly.

  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Interesting bug with picking labels when expanding a slice where a pool lives

2009-10-19 Thread Cindy Swearingen

Hi Tomas,

Increasing the slice size in a pool by using the format utility is not
equivalent to increasing a LUN size. Increasing a LUN size triggers
a sysevent from the underlying device that ZFS recognizes. The
autoexpand feature takes advantage of this mechanism.

I don't know if there is a bug here, but I will check.

A workaround in the meantime might be to use whole disks instead
of slices on non-root pools. Take a look at the autoexpand feature in
the SXCE build 117 release. You can read about it here:

http://docs.sun.com/app/docs/doc/817-2271/githb?a=view
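
(A minimal sketch of autoexpand usage on a whole-disk, non-root pool;
the pool name is an example:)

# zpool set autoexpand=on tank
# zpool get autoexpand tank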

Cindy

On 10/19/09 14:18, Tomas Ögren wrote:

On 19 October, 2009 - Cindy Swearingen sent me these 2,4K bytes:


Hi Tomas,

I think you are saying that you are testing what happens when you
increase a slice under a live ZFS storage pool and then reviewing
the zdb output of the disk labels.

Increasing a slice under a live ZFS storage pool isn't supported and
might break your pool.


It also happens on a non-live pool, that is, if I export, increase the
slice and then try to import.
r...@ramses:~# zpool export striclek
r...@ramses:~# format
Searching for disks...done
... increase c1t1d0s0 
r...@ramses:~# zpool import striclek
cannot import 'striclek': one or more devices is currently unavailable

.. which is the way to increase a pool within a disk/device if I'm not
mistaken.. Like if the storage comes off a SAN and you resize the LUN..


I think you are seeing some remnants of some old pools on your slices
with zdb since this is how zpool import is able to import pools that
have been destroyed.


Yep, that's exactly what I see. The issue is that the new&good labels
aren't trusted anymore (it also looks at old ones) and also that "zpool
import" picks information from different labels and presents it as one
piece of info.

If I was using some SAN and my lun got increased, and the new storage
space had some old scrap data on it, I could get hit by the same issue.


Maybe I missed the point. Let me know.



Cindy

On 10/19/09 12:41, Tomas Ögren wrote:

Hi.

We've got some test machines which amongst others has zpools in various
sizes and placements scribbled all over the disks.

0. HP DL380G3, Solaris10u8, 2x16G disks; c1t0d0 & c1t1d0
1. Took a (non-emptied) disk, created a 2GB slice0 and a ~14GB (to the
   last cyl) slice7.
2. zpool create striclek c1t1d0s0
3. zdb -l /dev/rdsk/c1t1d0s0  shows 4 labels, each with the same guid
   and only c1t1d0s0 as vdev. All is well.
4. format, increase slice0 from 2G to 16G. remove slice7. label.
5. zdb -l /dev/rdsk/c1t1d0s0 shows 2 labels from the correct guid &
   c1t1d0s0, it also shows 2 labels from some old guid (from an rpool
   which was abandoned long ago) belonging to a
   mirror(c1t0d0s0,c1t1d0s0). c1t0d0s0 is current boot disk with other
   rpool and other guid.
6. zpool export striclek;zpool import  shows guid from the "working
   pool", but that it's missing devices (although only lists c1t1d0s0 -
   ONLINE)
7. zpool import striclek  doesn't work. zpool import theworkingguid
   doesn't work.

If I resize the slice back to 2GB, all 4 labels shows the workingguid
and import works again.

Questions:
* Why does 'zpool import' show the guid from label 0/1, but wants vdev
  conf as specified by label 2/3?
* Is there no timestamp or such, so it would prefer label 0/1 as they
  are brand new and ignore label 2/3 which are waaay old.


I can agree to being forced to scribble zeroes/junk all over the "slice7
space" which we're expanding to in step 4.. But stuff shouldn't fail
this way IMO.. Maybe comparing timestamps and see that label 2/3 aren't
so hot anymore and ignore them, or something..

zdb -l and zpool import dumps at:
http://www.acc.umu.se/~stric/tmp/zdb-dump/

/Tomas

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



/Tomas

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-19 Thread Paul B. Henson
On Sat, 17 Oct 2009, dick hoogendijk wrote:

> It's a bootblock issue. If you really want to get back to u6 you have to
> "installgrub /boot/grub/stage1 /boot/grub/stage2" from the update 6 image,
> so mount it (with lumount or, easier, with zfs mount) and make sure you
> take the stage1/stage2 from this update. ***WARNING*** after doing so,
> your u6 will boot, but your u8 will not. In activating update 8 all
> GRUB items are synced. That way all BEs are bootable. That's the way
> it's supposed to be. Maybe something went wrong and only the new u8 BE
> has the understanding of the new bootblocks.

I restored the U6 grub, and sure enough, I was able to boot my U6 BE again.
However, I was also still able to boot the U8 BE. Thanks much, I'll pass
this info on to my open support ticket and see what they have to say.
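
For anyone else hitting this, the sequence I used was roughly the following
(the BE name and boot disk are from my system; adjust to yours):

# lumount s10u6_be /mnt
# installgrub /mnt/boot/grub/stage1 /mnt/boot/grub/stage2 /dev/rdsk/c0t0d0s0
# luumount s10u6_be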


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Interesting bug with picking labels when expanding a slice where a pool lives

2009-10-19 Thread Tomas Ögren
On 19 October, 2009 - Cindy Swearingen sent me these 2,4K bytes:

> Hi Tomas,
>
> I think you are saying that you are testing what happens when you  
> increase a slice under a live ZFS storage pool and then reviewing
> the zdb output of the disk labels.
>
> Increasing a slice under a live ZFS storage pool isn't supported and
> might break your pool.

It also happens on a non-live pool, that is, if I export, increase the
slice and then try to import.
r...@ramses:~# zpool export striclek
r...@ramses:~# format
Searching for disks...done
... increase c1t1d0s0 
r...@ramses:~# zpool import striclek
cannot import 'striclek': one or more devices is currently unavailable

.. which is the way to increase a pool within a disk/device if I'm not
mistaken.. Like if the storage comes off a SAN and you resize the LUN..

> I think you are seeing some remnants of some old pools on your slices
> with zdb since this is how zpool import is able to import pools that
> have been destroyed.

Yep, that's exactly what I see. The issue is that the new&good labels
aren't trusted anymore (it also looks at old ones) and also that "zpool
import" picks information from different labels and presents it as one
piece of info.

If I was using some SAN and my lun got increased, and the new storage
space had some old scrap data on it, I could get hit by the same issue.

> Maybe I missed the point. Let me know.

>
> Cindy
>
> On 10/19/09 12:41, Tomas Ögren wrote:
>> Hi.
>>
>> We've got some test machines which amongst others has zpools in various
>> sizes and placements scribbled all over the disks.
>>
>> 0. HP DL380G3, Solaris10u8, 2x16G disks; c1t0d0 & c1t1d0
>> 1. Took a (non-emptied) disk, created a 2GB slice0 and a ~14GB (to the
>>last cyl) slice7.
>> 2. zpool create striclek c1t1d0s0
>> 3. zdb -l /dev/rdsk/c1t1d0s0  shows 4 labels, each with the same guid
>>and only c1t1d0s0 as vdev. All is well.
>> 4. format, increase slice0 from 2G to 16G. remove slice7. label.
>> 5. zdb -l /dev/rdsk/c1t1d0s0 shows 2 labels from the correct guid &
>>c1t1d0s0, it also shows 2 labels from some old guid (from an rpool
>>which was abandoned long ago) belonging to a
>>mirror(c1t0d0s0,c1t1d0s0). c1t0d0s0 is current boot disk with other
>>rpool and other guid.
>> 6. zpool export striclek;zpool import  shows guid from the "working
>>pool", but that it's missing devices (although only lists c1t1d0s0 -
>>ONLINE)
>> 7. zpool import striclek  doesn't work. zpool import theworkingguid
>>doesn't work.
>>
>> If I resize the slice back to 2GB, all 4 labels shows the workingguid
>> and import works again.
>>
>> Questions:
>> * Why does 'zpool import' show the guid from label 0/1, but wants vdev
>>   conf as specified by label 2/3?
>> * Is there no timestamp or such, so it would prefer label 0/1 as they
>>   are brand new and ignore label 2/3 which are waaay old.
>>
>>
>> I can agree to being forced to scribble zeroes/junk all over the "slice7
>> space" which we're expanding to in step 4.. But stuff shouldn't fail
>> this way IMO.. Maybe comparing timestamps and see that label 2/3 aren't
>> so hot anymore and ignore them, or something..
>>
>> zdb -l and zpool import dumps at:
>> http://www.acc.umu.se/~stric/tmp/zdb-dump/
>>
>> /Tomas
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] fishworks on x4275?

2009-10-19 Thread Erast

Frank Cusack wrote:
On October 19, 2009 9:53:14 AM +1300 Trevor Pretty 
 wrote:

Frank

I've been looking into:-
http://www.nexenta.com/corp/index.php?option=com_content&task=blogsection
&id=4&Itemid=128


Thanks!  I *thought* there was a Nexenta solution but a google search
didn't turn anything up for me.  I'll definitely be looking into this.
The high level documentation is pretty weak, I guess I have to dig in.
But while I have the attention of this list, does NexentaStor "natively"
support AFP and "bonjour" or can I just add that myself?


You can add this yourself via an NMS plugin. The developers portal explains
the API and provides examples of how this can be done:


http://www.nexentastor.org/

The Plugin API documentation collected here:

http://www.nexentastor.org/projects/site/wiki/PluginAPI

I think the closest example to follow would be Amanda Client:

http://www.nexentastor.org/projects/amanda-client/repository

Or UPS integration plugin:

http://www.nexentastor.org/projects/ups/repository

The plugin can then be uploaded into the NexentaStor public repository and
will be available to everyone who wants to use the AFP sharing protocol.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Liveupgrade'd to U8 and now can't boot previous U6 BE :(

2009-10-19 Thread Cindy Swearingen


We are working on evaluating all the issues and will get problem
descriptions and resolutions posted soon. I've asked some of you to
contact us directly to provide feedback and hope those wheels are
turning.

So far, we have these issues:

1. Boot failure after LU with a separate var dataset.
This is CR 6884728.
2. LU failure after s10u8 LU with zones.
3. Boot failure from a previous BE if either #1 or #2 failure
occurs.

If you have a support contract, the best resolution is to open a service 
ticket so these issues can be escalated.


If not, feel free to contact me directly with additional symptoms and/or
workarounds.

Thanks,

Cindy

On 10/17/09 09:24, dick hoogendijk wrote:

On Sat, 2009-10-17 at 08:11 -0700, Philip Brown wrote:

same problem here on sun x2100 amd64


It's a bootblock issue. If you really want to get back to u6 you have to
"installgrub /boot/grub/stage1 /boot/grub/stage2" from the update 6 image,
so mount it (with lumount or, easier, with zfs mount) and make sure you
take the stage1/stage2 from this update.
***WARNING*** after doing so, your u6 will boot, but your u8 will
not. In activating update 8 all GRUB items are synced. That way all BEs
are bootable. That's the way it's supposed to be. Maybe something went
wrong and only the new u8 BE has the understanding of the new
bootblocks.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Interesting bug with picking labels when expanding a slice where a pool lives

2009-10-19 Thread Cindy Swearingen

Hi Tomas,

I think you are saying that you are testing what happens when you
increase a slice under a live ZFS storage pool and then reviewing
the zdb output of the disk labels.

Increasing a slice under a live ZFS storage pool isn't supported and
might break your pool.

I think you are seeing some remnants of some old pools on your slices
with zdb since this is how zpool import is able to import pools that
have been destroyed.

Maybe I missed the point. Let me know.

Cindy

On 10/19/09 12:41, Tomas Ögren wrote:

Hi.

We've got some test machines which amongst others has zpools in various
sizes and placements scribbled all over the disks.

0. HP DL380G3, Solaris10u8, 2x16G disks; c1t0d0 & c1t1d0
1. Took a (non-emptied) disk, created a 2GB slice0 and a ~14GB (to the
   last cyl) slice7.
2. zpool create striclek c1t1d0s0
3. zdb -l /dev/rdsk/c1t1d0s0  shows 4 labels, each with the same guid
   and only c1t1d0s0 as vdev. All is well.
4. format, increase slice0 from 2G to 16G. remove slice7. label.
5. zdb -l /dev/rdsk/c1t1d0s0 shows 2 labels from the correct guid &
   c1t1d0s0, it also shows 2 labels from some old guid (from an rpool
   which was abandoned long ago) belonging to a
   mirror(c1t0d0s0,c1t1d0s0). c1t0d0s0 is current boot disk with other
   rpool and other guid.
6. zpool export striclek;zpool import  shows guid from the "working
   pool", but that it's missing devices (although only lists c1t1d0s0 -
   ONLINE)
7. zpool import striclek  doesn't work. zpool import theworkingguid
   doesn't work.

If I resize the slice back to 2GB, all 4 labels shows the workingguid
and import works again.

Questions:
* Why does 'zpool import' show the guid from label 0/1, but wants vdev
  conf as specified by label 2/3?
* Is there no timestamp or such, so it would prefer label 0/1 as they
  are brand new and ignore label 2/3 which are waaay old.


I can agree to being forced to scribble zeroes/junk all over the "slice7
space" which we're expanding to in step 4.. But stuff shouldn't fail
this way IMO.. Maybe comparing timestamps and see that label 2/3 aren't
so hot anymore and ignore them, or something..

zdb -l and zpool import dumps at:
http://www.acc.umu.se/~stric/tmp/zdb-dump/

/Tomas

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Numbered vdevs

2009-10-19 Thread Cindy Swearingen

Hi Markus,

The numbered VDEVs listed in your zpool status output facilitate log
device removal that integrated into build 125. Eventually, they will
also be used for removal of redundant devices when device removal
integrates.

In build 125, if you create a pool with mirrored log devices, and
then you want to remove the mirrored log devices, you can remove them
individually by device name or by referring to the top-level VDEV name.
See the example below of removing by the top-level VDEV name.

You can read about this feature here:

http://docs.sun.com/app/docs/doc/817-2271/givdo?a=view

I can't find the PSARC case so maybe it only integrated as this CR:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6574286

Cindy

# zpool create tank2 mirror c0t4d0 c0t5d0 log mirror c0t6d0 c0t7d0
# zpool status tank2
  pool: tank2
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
tank2   ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
c0t4d0  ONLINE   0 0 0
c0t5d0  ONLINE   0 0 0
logs
  mirror-1  ONLINE   0 0 0
c0t6d0  ONLINE   0 0 0
c0t7d0  ONLINE   0 0 0

errors: No known data errors
# zpool remove tank2 mirror-1
# zpool status tank2
  pool: tank2
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
tank2   ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
c0t4d0  ONLINE   0 0 0
c0t5d0  ONLINE   0 0 0

errors: No known data errors




On 10/19/09 11:53, Markus Kovero wrote:
Hi, I just noticed this on snv_125. Is there an upcoming feature that
allows the use of numbered vdevs, or what are these for?
(raidz2-N)

  pool: tank
 state: ONLINE
config:

NAME   STATE READ WRITE CKSUM
tank   ONLINE   0 0 0
  raidz2-0 ONLINE   0 0 0
c8t40d0ONLINE   0 0 0
c8t36d0ONLINE   0 0 0
c8t38d0ONLINE   0 0 0
c8t39d0ONLINE   0 0 0
c8t41d0ONLINE   0 0 0
c8t42d0ONLINE   0 0 0
c8t43d0ONLINE   0 0 0
  raidz2-1 ONLINE   0 0 0
c8t44d0ONLINE   0 0 0
c8t45d0ONLINE   0 0 0
c8t46d0ONLINE   0 0 0
c8t47d0ONLINE   0 0 0
c8t48d0ONLINE   0 0 0
c8t49d0ONLINE   0 0 0
c8t50d0ONLINE   0 0 0
  raidz2-2 ONLINE   0 0 0
c8t51d0ONLINE   0 0 0
c8t86d0ONLINE   0 0 0
c8t87d0ONLINE   0 0 0
c8t149d0   ONLINE   0 0 0
c8t91d0ONLINE   0 0 0
c8t94d0ONLINE   0 0 0
c8t95d0ONLINE   0 0 0

Yours
Markus Kovero




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] EON ZFS Storage 0.59.4 based on snv_124 released!

2009-10-19 Thread Andre Lue
Embedded Operating system/Networking (EON), a RAM-based live ZFS NAS appliance,
is released on Genunix! Many thanks to Genunix.org for download hosting and
serving the opensolaris community.

EON ZFS storage is available in 32/64-bit CIFS and Samba versions:
EON 64-bit x86 CIFS ISO image version 0.59.4 based on snv_124

* eon-0.594-124-64-cifs.iso
* MD5: 4bda930d1abc08666bf2f576b5dd006c
* Size: ~89Mb
* Released: Monday 19-October-2009

EON 64-bit x86 Samba ISO image version 0.59.4 based on snv_124

* eon-0.594-124-64-smb.iso
* MD5: 80af8b288194377f13706572f7b174b3
* Size: ~102Mb
* Released: Monday 19-October-2009

EON 32-bit x86 CIFS ISO image version 0.59.4 based on snv_124

* eon-0.594-124-32-cifs.iso
* MD5: dcc6f8cb35719950a6d4320aa5925d22
* Size: ~56Mb
* Released: Monday 19-October-2009

EON 32-bit x86 Samba ISO image version 0.59.4 based on snv_124

* eon-0.594-124-32-smb.iso
* MD5: 3d6debd4595c1beb7ebbb68ca30b7391
* Size: ~69Mb
* Released: Monday 19-October-2009

New/Changes/Fixes:
- ntpd and nscd starting moved to /mnt/eon0/.exec
- added .disable and .purge feature
- install.sh bug fix for virtual disks, multiple run and improved error checking
- new transporter.sh CLI to automate upgrades, backups or downgrades to 
backed-up versions
- EON rebooting at grub in ESXi, Fusion and various versions of VMware
workstation. This is related to bug 6820576. Workaround: at grub, press 'e' and
add "-B disable-pcieb=true" to the end of the kernel line.
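  (Illustrative kernel line only; the path and any existing -B options come
  from your own menu.lst, and multiple boot properties are comma-separated
  within a single -B list:)
  kernel$ /platform/i86pc/kernel/$ISADIR/unix -B disable-pcieb=true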
http://eonstorage.blogspot.com
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Interesting bug with picking labels when expanding a slice where a pool lives

2009-10-19 Thread Tomas Ögren
Hi.

We've got some test machines which amongst others has zpools in various
sizes and placements scribbled all over the disks.

0. HP DL380G3, Solaris10u8, 2x16G disks; c1t0d0 & c1t1d0
1. Took a (non-emptied) disk, created a 2GB slice0 and a ~14GB (to the
   last cyl) slice7.
2. zpool create striclek c1t1d0s0
3. zdb -l /dev/rdsk/c1t1d0s0  shows 4 labels, each with the same guid
   and only c1t1d0s0 as vdev. All is well.
4. format, increase slice0 from 2G to 16G. remove slice7. label.
5. zdb -l /dev/rdsk/c1t1d0s0 shows 2 labels from the correct guid &
   c1t1d0s0, it also shows 2 labels from some old guid (from an rpool
   which was abandoned long ago) belonging to a
   mirror(c1t0d0s0,c1t1d0s0). c1t0d0s0 is current boot disk with other
   rpool and other guid.
6. zpool export striclek;zpool import  shows guid from the "working
   pool", but that it's missing devices (although only lists c1t1d0s0 -
   ONLINE)
7. zpool import striclek  doesn't work. zpool import theworkingguid
   doesn't work.

If I resize the slice back to 2GB, all 4 labels shows the workingguid
and import works again.

Questions:
* Why does 'zpool import' show the guid from label 0/1, but wants vdev
  conf as specified by label 2/3?
* Is there no timestamp or such, so it would prefer label 0/1 as they
  are brand new and ignore label 2/3 which are waaay old.


I can agree to being forced to scribble zeroes/junk all over the "slice7
space" which we're expanding to in step 4.. But stuff shouldn't fail
this way IMO.. Maybe comparing timestamps and see that label 2/3 aren't
so hot anymore and ignore them, or something..
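
(i.e. something destructive like the following over the old slice7 area
before growing slice0; this obviously wipes whatever lives on that slice:)

# dd if=/dev/zero of=/dev/rdsk/c1t1d0s7 bs=1024k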

zdb -l and zpool import dumps at:
http://www.acc.umu.se/~stric/tmp/zdb-dump/

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool wont get back online

2009-10-19 Thread Bob Friesenhahn

On Mon, 19 Oct 2009, Jonas Nordin wrote:


Hi, thank you for replying.

I tried to set a label but I got this.

#format -e

The device does not support mode page 3 or page 4,
or the reported geometry info is invalid.
WARNING: Disk geometry is based on capacity data.


Maybe your drives have bad firmware?  Certain products (e.g. 
particular Seagate models) are known to spontaneously expire due to 
firmware bugs.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool wont get back online

2009-10-19 Thread Jonas Nordin
Hi, thank you for replying.

I tried to set a label but I got this.


#format -e

The device does not support mode page 3 or page 4,
or the reported geometry info is invalid.
WARNING: Disk geometry is based on capacity data.

The current rpm value 0 is invalid, adjusting it to 3600

The device does not support mode page 3 or page 4,
or the reported geometry info is invalid.
WARNING: Disk geometry is based on capacity data.

The current rpm value 0 is invalid, adjusting it to 3600

The device does not support mode page 3 or page 4,
or the reported geometry info is invalid.
WARNING: Disk geometry is based on capacity data.

The current rpm value 0 is invalid, adjusting it to 3600

The device does not support mode page 3 or page 4,
or the reported geometry info is invalid.
WARNING: Disk geometry is based on capacity data.

The current rpm value 0 is invalid, adjusting it to 3600
done

c5t1d0: configured with capacity of 698.60GB
c6t0d0: configured with capacity of 698.60GB
c7t0d0: configured with capacity of 698.60GB
c7t1d0: configured with capacity of 698.60GB


AVAILABLE DISK SELECTIONS:
   0. c5t0d0 
  /p...@0,0/pci1043,8...@5/d...@0,0
   1. c5t1d0 
  /p...@0,0/pci1043,8...@5/d...@1,0
   2. c6t0d0 
  /p...@0,0/pci1043,8...@5,1/d...@0,0
   3. c6t1d0 
  /p...@0,0/pci1043,8...@5,1/d...@1,0
   4. c7t0d0 
  /p...@0,0/pci1043,8...@5,2/d...@0,0
   5. c7t1d0 
  /p...@0,0/pci1043,8...@5,2/d...@1,0
Specify disk (enter its number): 1
selecting c5t1d0
[disk formatted]
Disk not labeled.  Label it now? y
Warning: error setting drive geometry.
Warning: error writing VTOC.
Warning: no backup labels
Write label failed




Running fdisk on a disk that is unavailable will show no partition at all.
format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition,  otherwise type "n" to edit the
 partition table.
n
 Total disk size is 45600 cylinders
 Cylinder size is 32130 (512 byte) blocks

   Cylinders
  Partition   StatusType  Start   End   Length%
  =   ==  =   ===   ==   ===

WARNING: no partitions are defined!



Doing the same but on the healthy disk.

format> fdisk
 Total disk size is 45600 cylinders
 Cylinder size is 32130 (512 byte) blocks

   Cylinders
  Partition   Status TypeStart   End   Length%
  =   ==  =   ===   ==   ===
  1EFI0  45600
45601100


It looks like the partition information is gone on four of the drives, and
without it there is no way to set a label? Will creating a new partition erase
all information on the disk?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] The iSCSI-backed zpool for my zone hangs.

2009-10-19 Thread Jacob Ritorto
	My goal is to have a big, fast, HA filer that holds nearly everything 
for a bunch of development services, each running in its own Solaris 
zone.  So when I need a new service, test box, etc., I provision a new 
zone and hand it to the dev requesters and they load their stuff on it 
and go.


	Each zone has zonepath on its own zpool, which is an iSCSI-backed
device pointing to a unique sparse zvol on the filer.
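
(Roughly, each zone gets provisioned along these lines, here using the old
shareiscsi property on the filer; the dataset, size, IP, target and zone
names are just examples or placeholders of my own:)

filer# zfs create -s -V 32g tank/zvols/devzone1
filer# zfs set shareiscsi=on tank/zvols/devzone1
host#  iscsiadm add discovery-address 192.168.0.10:3260
host#  iscsiadm modify discovery --sendtargets enable
host#  zpool create devzone1 c4t<target-iqn>d0
host#  zonecfg -z devzone1 "create; set zonepath=/devzone1/zone"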


	If things slow down, we buy more 1U boxes with lots of CPU and RAM, 
don't care about the disk, and simply provision more LUNs on the filer. 
 Works great.  Cheap, good performance, nice and scalable.  They smiled 
on me for a while.


Until the filer dropped a few packets.

	I know it shouldn't happen and I'm addressing that, but the failure 
mode for this eventuality is too drastic.  If the filer isn't responding 
nicely to the zone's i/o request, the zone pretty much completely hangs, 
responding to pings perhaps, but not allowing any real connections. 
Kind of, not surprisingly, like a machine whose root disk got yanked 
during normal operations.


	To make it worse, the whole global zone seems unable to do anything 
about the issue.  I can't down the affected zone; zoneadm commands just 
put the zone in a shutting_down state forever.  zpool commands just 
hang.  Only thing I've found to recover (from far away in the middle of 
the night) is to uadmin 1 1 the global zone.  Even reboot didn't work. 
So all the zones on the box get hard-reset and that makes all the dev 
guys pretty unhappy.


	I thought about setting failmode to continue on these individual zone
pools, because it's set to wait right now. How do you folks predict that
change will play out?
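
(That is, something like the following on each zone pool; the pool name
is an example:)

# zpool set failmode=continue devzone1
# zpool get failmode devzone1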


thx
jake
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Numbered vdevs

2009-10-19 Thread Markus Kovero
Hi, I just noticed this on snv_125. Is there an upcoming feature that allows the
use of numbered vdevs, or what are these for?
(raidz2-N)

  pool: tank
 state: ONLINE
config:

NAME   STATE READ WRITE CKSUM
tank   ONLINE   0 0 0
  raidz2-0 ONLINE   0 0 0
c8t40d0ONLINE   0 0 0
c8t36d0ONLINE   0 0 0
c8t38d0ONLINE   0 0 0
c8t39d0ONLINE   0 0 0
c8t41d0ONLINE   0 0 0
c8t42d0ONLINE   0 0 0
c8t43d0ONLINE   0 0 0
  raidz2-1 ONLINE   0 0 0
c8t44d0ONLINE   0 0 0
c8t45d0ONLINE   0 0 0
c8t46d0ONLINE   0 0 0
c8t47d0ONLINE   0 0 0
c8t48d0ONLINE   0 0 0
c8t49d0ONLINE   0 0 0
c8t50d0ONLINE   0 0 0
  raidz2-2 ONLINE   0 0 0
c8t51d0ONLINE   0 0 0
c8t86d0ONLINE   0 0 0
c8t87d0ONLINE   0 0 0
c8t149d0   ONLINE   0 0 0
c8t91d0ONLINE   0 0 0
c8t94d0ONLINE   0 0 0
c8t95d0ONLINE   0 0 0

Yours
Markus Kovero
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] fishworks on x4275?

2009-10-19 Thread Frank Cusack
On October 19, 2009 9:53:14 AM +1300 Trevor Pretty 
 wrote:

Frank

I've been looking into:-
http://www.nexenta.com/corp/index.php?option=com_content&task=blogsection
&id=4&Itemid=128


Thanks!  I *thought* there was a Nexenta solution but a google search
didn't turn anything up for me.  I'll definitely be looking into this.
The high level documentation is pretty weak, I guess I have to dig in.
But while I have the attention of this list, does NexentaStor "natively"
support AFP and "bonjour" or can I just add that myself?


Only played with a VM so far on my laptop, but it does seem to be an
alternative to the Sun product if you don't want to buy a S7000.

IMHO: Sun are missing a great opportunity not offering a reasonable
upgrade path from an X to an S7000.


I agree.  However, looking over the Nexenta offering I may prefer
that anyway!  I can get the density of the 7000 series by using x4540
hardware down the road.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] pool as root of zone

2009-10-19 Thread Hank Ratzesberger
Thanks again for comments, I want to clear this up with a few notes:

 o In OSOL 2009-06, zones MUST be installed in a zfs filesystem. 
 o This is different from any dataset specified, which is like adding an fs.
 
And of course, if you specify as a dataset the same zfs pool that you installed
the zone into, the system will boot into single-user mode, waiting for the
local filesystem service to clear, with an error something like "No files
expected in pool".

Anyway, despite my operator error, I am happy to be running under 2009-06 where 
I can put the zone in its own zfs fs.  As mentioned elsewhere, the zone path is 
not hidden so users can have some idea what other zones are running on your 
system.
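
For the record, the working layout ended up looking roughly like this
(dataset and zone names are illustrative):

# zfs create -o mountpoint=/zones rpool/zones
# zonecfg -z web01 "create; set zonepath=/zones/web01"
# zoneadm -z web01 install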

Regards,
Hank
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The ZFS FAQ needs an update

2009-10-19 Thread Cindy Swearingen

Its updated now. Thanks for mentioning it.

Cindy

On 10/18/09 10:19, Sriram Narayanan wrote:

All:

Given that the latest S10 update includes user quotas, the FAQ here
[1] may need an update

-- Sriram

[1] http://opensolaris.org/os/community/zfs/faq/#zfsquotas
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] iscsi/comstar performance

2009-10-19 Thread Jim Dunham

Frank Middleton wrote:

On 10/13/09 18:35, Albert Chin wrote:


Maybe this will help:
  
http://mail.opensolaris.org/pipermail/storage-discuss/2009-September/007118.html


Well, it does seem to explain the scrub problem. I think it might
also explain the slow boot and startup problem - the VM only has
564M available, and it is paging a bit. Doing synchronous i/o for
swap makes no sense. Is there an official way to disable this
behavior?

Does anyone know if the old iscsi system is going to stay around,
or will COMSTAR replace it at some point? The 64K metadata
block at the start of each volume is a bit awkward, too. - it seems
to throw VBox into a tizzy when (failing to) boot MSWXP.


There are two options here, see  stmfadm(1m) for details:

If the backing store device is a ZFS ZVOL, then the metadata is stored
in a special data object in the ZVOL rather than using the first 64K
of the ZVOL.


The command "stmfadm -o meta=/path/to/metadata-file create-lu /path/to/ 
backing/store" is used to specify a file-based location to store SBD  
metadata. This method can be used to upgrade old iSCSI Target Daemon  
backing storage devices to iSCSI Target COMSTAR, if the backing store  
device is not a ZVOL.


Note: For ZVOL support, there is a corresponding ZFS storage pool  
change to support this functionality, so a "zpool upgrade ..."  to  
version 16 is required:


# zpool upgrade -v
.
.
 16  stmf property support
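
So, assuming the backing store is a ZVOL, the flow is roughly the following
(pool and volume names are only examples):

# zpool upgrade tank
# zfs create -V 20g tank/vbox-winxp
# stmfadm create-lu /dev/zvol/rdsk/tank/vbox-winxp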

- Jim



The options seem to be

a) stay with the old method and hope it remains supported

b) figure out a way around the COMSTAR limitations

c) give up and use NFS

Using ZFS as an iscsi backing store for VirtualBox images seemed
like a great idea, so simple to maintain and robust, but COMSTAR
seems to have sand-bagged it a bit. The performance was quite
acceptable before but it is pretty much unusable this way.

Any ideas would be much appreciated

Thanks -- Frank

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool wont get back online

2009-10-19 Thread Francois Napoleoni

Hi Jonas,

At first sight it looks like your "unopenable" disks were relabeled with 
SMI label (hence the cyl count on format's output).
Not sure if it is completely safe data wise but you could try to relabel 
your disks with EFI label (use format -e to access label choice).


F.


On 10/18/09 20:47, Jonas Nordin wrote:

hi,

After a shutdown my zpool won't come back online; zpool status showed that only
one of five hard drives is online. I tried to export the pool and import it back
in the hope of a fix, but with no change.
I have replaced the sata cables and even replaced the motherboard, but it
always shows the same status of the zpool.

#zpool import will show

  pool: tank
id: 6529188950165676222
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

tankUNAVAIL  insufficient replicas
  raidz1UNAVAIL  insufficient replicas
c7t1d0  UNAVAIL  cannot open
c6t1d0  ONLINE
c6t0d0  UNAVAIL  cannot open
c5t1d0  UNAVAIL  cannot open
c7t0d0  UNAVAIL  cannot open

#format shows

c5t1d0: configured with capacity of 698.60GB
c6t0d0: configured with capacity of 698.60GB
c7t0d0: configured with capacity of 698.60GB
c7t1d0: configured with capacity of 698.60GB


AVAILABLE DISK SELECTIONS:
   0. c5t0d0 DEFAULT cyl 30512 alt 2 hd 255 sec 63
  /p...@0,0/pci1043,8...@5/d...@0,0
   1. c5t1d0 ATA-WDCWD7500AAKS-0-4G30 cyl 45598 alt 2 hd 255 sec 126
  /p...@0,0/pci1043,8...@5/d...@1,0
   2. c6t0d0 ATA-WDCWD7500AAKS-0-4G30 cyl 45598 alt 2 hd 255 sec 126
  /p...@0,0/pci1043,8...@5,1/d...@0,0
   3. c6t1d0 ATA-WDC WD7500AAKS-0-4G30-698.64GB
  /p...@0,0/pci1043,8...@5,1/d...@1,0
   4. c7t0d0 ATA-WDCWD7500AAKS-0-4G30 cyl 45598 alt 2 hd 255 sec 126
  /p...@0,0/pci1043,8...@5,2/d...@0,0
   5. c7t1d0 ATA-WDCWD7500AAKS-0-4G30 cyl 45598 alt 2 hd 255 sec 126
  /p...@0,0/pci1043,8...@5,2/d...@1,0

I find the format a bit strange since it lists the capacity of the four missing 
zpool drives but not the hard drive that is online.

Any take on how I can fix this or am I screwed?

Jonas


--
Francois Napoleoni / Sun Support Engineer
mail  : francois.napole...@sun.com
phone : +33 (0)1 3403 1707
fax   : +33 (0)1 3403 1114
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss