Re: [zfs-discuss] recovering data from a detached mirrored vdev

2008-04-29 Thread Jeff Bonwick
If your entire pool consisted of a single mirror of two disks, A and B,
and you detached B at some point in the past, you *should* be able to
recover the pool as it existed when you detached B.  However, I just
tried that experiment on a test pool and it didn't work.  I will
investigate further and get back to you.  I suspect it's perfectly
doable, just currently disallowed due to some sort of error check
that's a little more conservative than necessary.  Keep that disk!

Jeff

On Mon, Apr 28, 2008 at 10:33:32PM -0700, Benjamin Brumaire wrote:
 Hi,
 
 my system (Solaris b77) was physically destroyed and I lost the data saved in a 
 zpool mirror. The only thing left is a detached vdev from the pool. I'm 
 aware that the uberblock is gone and that I can't import the pool. But I still 
 hope there is a way or a tool (like TCT, http://www.porcupine.org/forensics/) 
 that I can use to recover at least some of the data, even partially.
 
 thanks in advance for any hints.
 
 bbr
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS - Implementation Successes and Failures

2008-04-29 Thread Simon Breden
Hi Dominic,

I've built a home fileserver using ZFS and I'd be happy to help. I've written 
up my experiences, from the initial search for suitable devices, through researching 
compatible hardware, to finally configuring it to share files.

I also built a second box for backups, again using ZFS, and used iSCSI to add 
a bit of fun.

For more fun, I chose to aggregate gigabit Ethernet ports into a speedy link 
between the ZFS fileserver and a Mac Pro. With limited testing, it appears to 
transfer data at around 80+ MBytes/sec sustained over a CIFS share. That transfer 
speed appears to be limited by the speed of the Mac's single disk, so I expect it 
can be pushed to go even faster.

It has been a great experience using ZFS in this way.

You can find my write up here:
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/

If it sounds of interest, feel free to contact me.

Simon
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a detached mirrored vdev

2008-04-29 Thread Benjamin Brumaire
Jeff thank you very much for taking time to look at this.

My entire pool consisted of a single mirror of two slices on different disks, A 
and B. I attached a third slice on disk C, waited for the resilver to finish, and 
then detached it. Now disks A and B have burned and I have only disk C at hand.

bbr
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS, CIFS, slow write speed

2008-04-29 Thread Simon Breden
Rick, I have the same motherboard on my backup machine and got 48MBytes/sec 
sustained on a 650GB transfer (but that was using iSCSI), so I suggest two 
things:

1. Make sure you are using the latest stable BIOS update -- i.e. not a beta. You 
can use a USB thumbdrive to install it, and you can save the old BIOS on there too, 
in case you want to return to it.

2. I assume you have at least Category 5e ethernet cables between all boxes 
linked to your gigabit switch. If not, that could be the cause, as Cat 5 might 
not be sufficient. I use Cat 6 because I wanted to be sure that, if I got low 
speeds, it wasn't the cables letting me down. You might also be able to do a cable 
test as part of the system's POST, set within the BIOS (I know my other M2N-SLI 
Deluxe can do this anyway).

Hope it helps.

Simon
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a detached mirrored vdev

2008-04-29 Thread Jeff Bonwick
Urgh.  This is going to be harder than I thought -- not impossible,
just hard.

When we detach a disk from a mirror, we write a new label to indicate
that the disk is no longer in use.  As a side effect, this zeroes out
all the old uberblocks.  That's the bad news -- you have no uberblocks.

The good news is that the uberblock only contains one field that's hard
to reconstruct: ub_rootbp, which points to the root of the block tree.
The root block *itself* is still there -- we just have to find it.

The root block has a known format: it's a compressed objset_phys_t,
almost certainly one sector in size (could be two, but very unlikely
because the root objset_phys_t is highly compressible).

It should be possible to write a program that scans the disk, reading
each sector and attempting to decompress it.  If it decompresses into
exactly 1K (size of an uncompressed objset_phys_t), then we can look
at all the fields to see if they look plausible.  Among all candidates
we find, the one whose embedded meta-dnode has the highest birth time
in its dn_blkptr is the one we want.
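
To make that concrete, here is a rough sketch of such a scanner -- just the
idea above expressed in C, not the finished tool.  It assumes you lift
lzjb_decompress() from the ZFS source (usr/src/uts/common/fs/zfs/lzjb.c) and
compile it in, and it only flags candidate offsets: inspecting the decompressed
fields and picking the candidate with the highest birth time is still a manual
step, so expect plenty of false positives.

/*
 * Sketch only: scan a raw device sector by sector, try to LZJB-decompress
 * each 512-byte block into 1K (the size of an uncompressed objset_phys_t),
 * and print the offset of anything that decompresses cleanly.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>

#define SECTOR      512
#define OBJSET_SIZE 1024    /* sizeof (objset_phys_t), uncompressed */

/* Borrowed from lzjb.c: returns 0 if src decompresses into exactly d_len bytes. */
extern int lzjb_decompress(void *src, void *dst, size_t s_len, size_t d_len, int n);

int
main(int argc, char **argv)
{
    /* Zero-padded source buffer so a bogus "compressed" sector can't make
       the decompressor walk off the end of the data we actually read. */
    char src[SECTOR + OBJSET_SIZE];
    char dst[OBJSET_SIZE];
    off_t off = 0;
    int fd;

    if (argc != 2) {
        (void) fprintf(stderr, "usage: %s /dev/rdsk/cXtXdXsX\n", argv[0]);
        return (1);
    }
    if ((fd = open(argv[1], O_RDONLY)) == -1) {
        perror("open");
        return (1);
    }
    (void) memset(src, 0, sizeof (src));
    while (pread(fd, src, SECTOR, off) == SECTOR) {
        if (lzjb_decompress(src, dst, SECTOR, OBJSET_SIZE, 0) == 0) {
            /* Candidate root block: note the offset, then examine the
               fields in dst (meta-dnode, dn_blkptr birth times) by hand. */
            (void) printf("candidate at byte offset %lld\n", (long long)off);
        }
        off += SECTOR;
    }
    (void) close(fd);
    return (0);
}

Point it at the raw slice of the surviving disk and work through the offsets it
prints; the right candidate is the one whose fields make sense and whose birth
times are the newest.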

I need to get some sleep now, but I'll code this up in a couple of
days and we can take it from there.  If this is time-sensitive,
let me know and I'll see if I can find someone else to drive it.
[ I've got a bunch of commitments tomorrow, plus I'm supposed to
be on vacation... typical...  ;-)  ]

Jeff

On Tue, Apr 29, 2008 at 12:15:21AM -0700, Benjamin Brumaire wrote:
 Jeff thank you very much for taking time to look at this.
 
 My entire pool consisted of a single mirror of two slices on different disks 
 A and B. I attach a third slice on disk C and wait for resilver and then 
 detach it. Now disks A and B burned and I have only disk C at hand.
 
 bbr
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS, CIFS, slow write speed

2008-04-29 Thread Rick
 Rick, I have the same motherboard on my backup
 machine and got 48MBytes/sec sustained on a 650GB
 transfer (but that was using iSCSI), so I suggest two
 things:
 
 1. Make sure you are using the latest stable -- i.e.
 not beta, BIOS update. You can use a USB thumbdrive
 to install it, and can save the old one on there too,
 in case you want to return to it.
 
 2. I assume you have at least Category 5e ethernet
 cables between all boxes linked to your gigabit
 switch. If not, that could be the cause as Cat. 5
 might not be sufficient. I use Cat. 6 because I
 wanted to be sure if I got low speeds that it wasn't
 the cables letting me down. You might also be able to
 do a cable test as part of the system's POST, set
 within the BIOS (I know my other M2N-SLI Deluxe can
 do this anyway).
 
 Hope it helps.
 
 Simon

Simon,

Hey Simon. I have not updated to the latest BIOS; I'm 1 or 2 revisions behind. 
I will check into updating; however, I think that's not my issue.

I do have a Cat6 cable that I'm plugging in to a 10/100/1000 switch. It's a 
SOHO switch which does not have a configuration interface. I do know the port 
works as 1000 because my last mobo was connected to the same cable and port at 
1000. For whatever reason, I cannot set my nge interface to 1000 though. It 
runs fine, but only connects at 100 full duplex. On the same switch I also have 
a PS3 which connects at 1000. The typical speed from the Solaris ZFS shared 
data to the PS3 is 700KiB/s. That's streaming data though. The actual copy 
speed is 9.3MiB/s. Although, to be honest, I'm not sure of the protocol it uses 
to copy data. MediaTomb is the DLNA application that feeds my PS3. Anyway, all 
other devices use Cat5e and connect to the same switch (at least, all other 
devices that are involved in this little fiasco.)

The 2MiB/s speed isn't so bad, or at least isn't uncommon in my office. Copying from 
another windows box (laptop) to the winxp box that was used in the tests 
quoted before results in 2.2MiB/s - 3.4MiB/s while using MS SMB. I'm less 
concerned with speeds around that mark. The only speed that really concerns me 
is the 342KiB/s write speed. It's not a great situation when your backup 
solution only accepts 342KiB/s on writes. :)

And for those getting annoyed, I'm aware that this is probably not a direct 
fault of ZFS. However, I'm not sure exactly how the CIFS code interfaces with 
ZFS, so I included zfs-discuss on my initial post. Because the read/write speeds 
of other protocols are faster on the same ZFS mirror, I have to assume that 
there is definitely an issue with CIFS or my configuration of CIFS. Still, I 
have not found any information at all on tweaking CIFS in Solaris. I can't even 
find a config file for it. :(

rick
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool attach vs. zpool iostat

2008-04-29 Thread Robert Milkowski
Hello zfs-discuss,

  S10U4+patches, SPARC

  If I attach a disk to a vdev in a pool to get a mirrored configuration,
  then during the resilver zpool iostat 1 will report only reads being
  done from the pool and basically no writes. If I do zpool iostat -v 1
  then I can see it is writing to the new device; however, at the pool and
  mirror/vdev level it is still reporting only reads.

  If reads are reported during resilvering, shouldn't it be the same for
  writes?

-- 
Best regards,
 Robert Milkowski  mailto:[EMAIL PROTECTED]
 http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a detached mirrored vdev

2008-04-29 Thread Benjamin Brumaire
If I understand you correctly, the steps to follow are:

 read each sector (dd bs=512 count=1 skip=n should be enough?)
 decompress it (is there any tool implementing the lzjb algorithm?)
 check that the decompressed size is exactly 1024 bytes
 check that the structure looks like a plausible objset_phys_t
 take the candidate with the highest (newest) birth time as the root block
 reconstruct the uberblock

Unfortunately I can't help with a C program, but I will be happy to support 
you in any other way.
Don't consider it time-sensitive; the data are very important, but I can 
continue my business without them.

Again, thank you very much for your help. I really appreciate it.

bbr
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ? ZFS boot in nv88 on SPARC ?

2008-04-29 Thread Ulrich Graef
Hi,

ZFS won't boot on my machine.

I discovered that the lu manpages are there, but not
the new binaries.
So I tried to set up ZFS boot manually:

  zpool create -f Root c0t1d0s0
 
  lucreate -n nv88_zfs -A 'nv88 finally on ZFS' -c nv88_ufs -p Root -x /zones
 
  zpool set bootfs=Root/nv88_zfs Root
 
  ufsdump 0f - / | ( cd /Root/nv88_zfs; ufsrestore -rf - ; )
 
  eeprom boot-device=disk1
 
  Correct vfstab of the boot environment to:
 Root/nv88_zfs   -   /   zfs -   no  -
 
  zfs set mountpoint=legacy Root/nv88_zfs
 
  mount -F zfs Root/nv88_zfs /mnt
 
  bootadm update-archive -R /mnt
 
  umount /mnt
 
  installboot /usr/platform/SUNW,Ultra-60/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0

When I try to boot, I get this message at the ok prompt:

Can't mount root
Fast Data Access MMU Miss

Same with: boot disk1 -Z Root/nv88_zfs

What is missing in the setup?
Unfortunately opensolaris contains only the preliminary setup for x86,
so it does not help me...

Regards,

Ulrich

-- 
| Ulrich Graef, Senior System Engineer, OS Ambassador\
|  Operating Systems, Performance \ Platform Technology   \
|   Mail: [EMAIL PROTECTED] \ Global Systems Enginering \
|Phone: +49 6103 752 359\ Sun Microsystems Inc  \

Sitz der Gesellschaft: Sun Microsystems GmbH, Sonnenallee 1,
   D-85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ? ZFS boot in nv88 on SPARC ?

2008-04-29 Thread Ulrich Graef
Ulrich Graef wrote:
 Hi,
 
 ZFS won't boot on my machine.
 
 I discovered, that the lu manpages are there, but not
 the new binaries.
 So I tried to set up ZFS boot manually:
 
 
 zpool create -f Root c0t1d0s0

 lucreate -n nv88_zfs -A 'nv88 finally on ZFS' -c nv88_ufs -p Root -x /zones

 zpool set bootfs=Root/nv88_zfs Root

 ufsdump 0f - / | ( cd /Root/nv88_zfs; ufsrestore -rf - ; )

 eeprom boot-device=disk1

 Correct vfstab of the boot environment to:
Root/nv88_zfs   -   /   zfs -   no  -

 zfs set mountpoint=legacy Root/nv88_zfs

 mount -F zfs Root/nv88_zfs /mnt

 bootadm update-archive -R /mnt

 umount /mnt

 installboot /usr/platform/SUNW,Ultra-60/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
 
 
 When I try to boot I get the message in the ok prompt:
 
 Can't mount root
 Fast Data Access MMU Miss
 
 Same with: boot disk1 -Z Root/nv88_zfs
 
 What is missing in the setup?
 Unfortunately opensolaris contains only the preliminary setup for x86,

I mean: the opensolaris.org website contains only the description of
setting up ZFS root and boot for x86, up to Nevada build 87.

 so it does not help me...
 
 Regards,
 
   Ulrich
 


-- 
| Ulrich Graef, Senior System Engineer, OS Ambassador\
|  Operating Systems, Performance \ Platform Technology   \
|   Mail: [EMAIL PROTECTED] \ Global Systems Enginering \
|Phone: +49 6103 752 359\ Sun Microsystems Inc  \

Sitz der Gesellschaft: Sun Microsystems GmbH, Sonnenallee 1,
   D-85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS, CIFS, slow write speed

2008-04-29 Thread Simon Breden
Hey, hi Rick!

The obvious thing that is wrong is the network being recognised as 100Mbps and 
not 1000. Hopefully, the read/write speeds will fix themselves once the network 
problem is fixed.

As it's the same cable you previously had working at 1000Mbps on your other 
computer, and the same switch I suppose, it all points to a problem with the 
networking on the Solaris box.

The first thing to try is replacing the cable with another one if you have 
one around, although I suppose you have already tried that.

Then, see if there's anything that could possibly be set up wrongly in the BIOS, 
but I don't recall there being anything to change, although I could be wrong.

Then you could try to see if you can get the networking on Solaris to be 
reconfigured somehow.

You could try:

# svcadm disable /network/physical:nwam
# svcadm enable /network/physical:nwam

But I suppose this won't change anything.

Also I would investigate the possibility of getting the latest BIOS, unless you 
know of a good reason not to. This might be a bug in the BIOS.

Lastly, could there be a bug in build 86 regarding the nge driver?

That's all I can think of, good luck! :)

Simon
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] any 64-bit mini-itx successes

2008-04-29 Thread Benjamin Ellison
Is there anyone who has successfully put together a high-powered mini-itx ZFS 
box that would be willing to post their system specs?

I'm eyeballing the KI690-AM2...
http://www.albatron.com.tw/english/product/mb/pro_detail.asp?rlink=Overviewno=239

...but am having a difficult time locating it and a suitable case currently.  I 
was in negotiations with a hardware source, but he/his company disappeared 
(thank goodness I hadn't ordered from him!).

Thoughts/advice?   Any other suggestions for a mini-itx (or other small form 
factor) mobo that supports enough ram/cpu/SATA ?

My goals & constraints for this system are:
*) ZFS -- I want this to run opensolaris/ZFS
*) Size -- I don't have room for anything bigger than the Chenbro ES34069 case
http://www.chenbro.com/corporatesite/products_detail.php?serno=100
*) Power -- I want to be able to put a hefty chunk of RAM and a nice cpu in it, 
because I'd like this box to be multi-purpose (eventually)

Thanks,
--Ben
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS, CIFS, slow write speed

2008-04-29 Thread Simon Breden
Rick,

Glad it worked ;-)

Now if I were you, I would not upgrade the BIOS unless you really want/need to.

I look forward to seeing your revised speed test data for reads and writes with 
the gigabit network speed working correctly. I think it should make a little 
difference -- I'm guessing you'll get 40 - 45 MBytes/sec sustained assuming 
disks are fast enough at the other end, AND you have a Cat 5e/6 cable at the 
other end too.

Simon
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Finding Pool ID

2008-04-29 Thread Ajay Kumar
Folks,
How can I find out the zpool id without using zpool import? zpool list 
and zpool status do not have an option for it as of Solaris 10U5. Any back door 
to grab this property would be helpful.


Thank  you
Ajay
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs performance so bad on my system

2008-04-29 Thread Bob Friesenhahn
On Tue, 29 Apr 2008, Krzys wrote:

 I am not sure; I had a perfectly OK system when I originally built it and when I
 originally started using zfs, but now it's horribly slow. I do believe
 that the number of snaps that I have is causing it.

This seems like a bold assumption without supportive evidence.

 # zpool list
 NAME      SIZE    USED    AVAIL   CAP   HEALTH   ALTROOT
 mypool    278G    255G    23.0G   91%   ONLINE   -
 mypool2   1.59T   1.54T   57.0G   96%   ONLINE   -

Very full!

 For example, I am trying to copy a 1.4G file from my /var/mail to the /d/d1
 directory, which is a zfs file system on the mypool2 pool. It takes 25 minutes to
 copy it, while copying it to the tmp directory only takes a few seconds. What's
 wrong with this? Why does it take so long to copy that file to my zfs file system?

Not good.  Some filesystems get slower when they are almost full since 
they have to work harder to find resources and verify quota limits.  I 
don't know if that applies to ZFS.

However, it may be that you have one or more disks which is 
experiencing many soft errors (several re-tries before success) and 
maybe you should look into that first.  ZFS runs on top of a bunch of 
other subsystems and drivers so if those other subsystems and drivers 
are slow to respond then ZFS will be slow.  With your raidz2 setup, all 
it takes is one slow disk to slow everything down.

I suggest using 'iostat -e' to check for device errors, and 'iostat 
-x' (while doing the copy) to look for suspicious device behavior.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Finding Pool ID

2008-04-29 Thread Eric Schrock
This is present as the 'guid' property in Solaris Nevada.  If you're on
a previous release, you can do one of the following:

- 'zdb -l <device in pool>' and look for the 'pool_guid' property (if
  you're using whole disks you'll still need the s0 slice).

- '::walk spa | ::print spa_t spa_name spa_root_vdev->vdev_guid' from
  'mdb -k'.

Hope that helps,

- Eric

On Tue, Apr 29, 2008 at 11:27:18AM -0400, Ajay Kumar wrote:
 Folks,
 How can I find out zpool id without using zpool import? zpool list 
 and zpool status does not have option as of Solaris 10U5.. Any back door 
 to grab this property will be helpful.
 
 
 Thank  you
 Ajay
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Finding Pool ID

2008-04-29 Thread Richard Elling
Ajay Kumar wrote:
 Folks,
 How can I find out zpool id without using zpool import? zpool list 
 and zpool status does not have option as of Solaris 10U5.. Any back door 
 to grab this property will be helpful.

   

It seems to be a heck of a lot easier to just use zpool import without
the -a option and without a pool name.  I'm curious as to why this
method will not work for you?
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Finding Pool ID

2008-04-29 Thread Robert Milkowski
Hello Richard,

Tuesday, April 29, 2008, 5:51:01 PM, you wrote:

RE Ajay Kumar wrote:
 Folks,
 How can I find out zpool id without using zpool import? zpool list 
 and zpool status does not have option as of Solaris 10U5.. Any back door 
 to grab this property will be helpful.

   

RE It seems to be a heck of a lot easier to just use zpool import without
RE the -a option and without a pool name.  I'm curious as to why this
RE method will not work for you?


IIRC it will work only for exported pools. You need to use zdb for
already imported pools.

-- 
Best regards,
 Robert Milkowski    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs performance so bad on my system

2008-04-29 Thread Chris Linton-Ford


  For example, I am trying to copy a 1.4G file from my /var/mail to the /d/d1
  directory, which is a zfs file system on the mypool2 pool. It takes 25 minutes
  to copy it, while copying it to the tmp directory only takes a few seconds.
  What's wrong with this? Why does it take so long to copy that file to my zfs
  file system?

/tmp is an in-memory filesystem. Use e.g. /var/tmp for actual
disk-to-disk performance.

Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Finding Pool ID

2008-04-29 Thread Richard Elling
Robert Milkowski wrote:
 Hello Richard,

 Tuesday, April 29, 2008, 5:51:01 PM, you wrote:

 RE Ajay Kumar wrote:
   
 Folks,
 How can I find out zpool id without using zpool import? zpool list 
 and zpool status does not have option as of Solaris 10U5.. Any back door 
 to grab this property will be helpful.

   
   

 RE It seems to be a heck of a lot easier to just use zpool import without
 RE the -a option and without a pool name.  I'm curious as to why this
 RE method will not work for you?


 IIRC it will work only for exported pools. You need to use zdb for
 already imported pools.

   
zpool get guid [poolname] will display the GUID.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] any 64-bit mini-itx successes

2008-04-29 Thread Mads Toftum
On Tue, Apr 29, 2008 at 07:41:01AM -0700, Benjamin Ellison wrote:
 Is there anyone who has successfully put together a high-powered mini-itx 
 ZFS box that would be willing to post their system specs?
 
 I'm eyeballing the KI690-AM2...
 http://www.albatron.com.tw/english/product/mb/pro_detail.asp?rlink=Overviewno=239
 
(I can't quite tell whether that's 2G max or 2G/module)
http://linitx.com/viewproduct.php?prodid=11921 

 ...but am having a difficult time locating it and a suitable case currently.  
 I was in negotiations with a hardware source, but he/his company disappeared 
 (thank goodness I hadn't ordered from him!).
 
 Thoughts/advice?   Any other suggestions for a mini-itx (or other small form 
 factor) mobo that supports enough ram/cpu/SATA ?
 
Not the easiest requirements as mini-itx usually means little space for
memory and sata ports. I think the best (but not cheap) option I've seen
is:
Commell LV-676D Intel Core 2 Duo Mainboard
(for 273.35 pounds at linitx, but I've seen them for less elsewhere).

vh

Mads Toftum
-- 
http://soulfood.dk
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] any 64-bit mini-itx successes

2008-04-29 Thread Wes Felter
I wonder how hard it would be to get Solaris running on the new ReadyNAS.

http://www.netgear.com/Products/Storage/ReadyNASPro.aspx

Wes Felter - [EMAIL PROTECTED]

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] lost zpool when server restarted.

2008-04-29 Thread Krzys



I have a problem on one of my systems with zfs. I used to have a zpool created 
with 3 LUNs on a SAN. I did not have to put any raid or anything on it since it 
was already using raid on the SAN. Anyway, the server rebooted and I cannot see 
my pools.
When I try to import the pool, it fails. I am using an EMC CLARiiON as the SAN 
and PowerPath.
# zpool list
no pools available
# zpool import -f
  pool: mypool
  id: 4148251638983938048
state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
  devices and try again.
  see: http://www.sun.com/msg/ZFS-8000-3C
config:
  mypool UNAVAIL insufficient replicas
  emcpower0a UNAVAIL cannot open
  emcpower2a UNAVAIL cannot open
  emcpower3a ONLINE

I think I am able to see all the luns and I should be able to access them on my 
sun box.
# powermt display dev=all
Pseudo name=emcpower0a
CLARiiON ID=APM00070202835 [NRHAPP02]
Logical device ID=6006016045201A001264FB20990FDC11 [LUN 13]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
==
 Host --- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==
3074 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c2t5006016041E035A4d0s0 SP A4 active 
alive 0 0
3074 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c2t5006016941E035A4d0s0 SP B5 active 
alive 0 0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c3t5006016141E035A4d0s0 SP A5 
active alive 0 0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c3t5006016841E035A4d0s0 SP B4 
active alive 0 0


Pseudo name=emcpower1a
CLARiiON ID=APM00070202835 [NRHAPP02]
Logical device ID=6006016045201A004C1388343C10DC11 [LUN 14]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
==
 Host --- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==
3074 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c2t5006016041E035A4d1s0 SP A4 active 
alive 0 0
3074 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c2t5006016941E035A4d1s0 SP B5 active 
alive 0 0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c3t5006016141E035A4d1s0 SP A5 
active alive 0 0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c3t5006016841E035A4d1s0 SP B4 
active alive 0 0


Pseudo name=emcpower3a
CLARiiON ID=APM00070202835 [NRHAPP02]
Logical device ID=6006016045201A00A82C68514E86DC11 [LUN 7]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
==
 Host --- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==
3074 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c2t5006016041E035A4d3s0 SP A4 active 
alive 0 0
3074 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c2t5006016941E035A4d3s0 SP B5 active 
alive 0 0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c3t5006016141E035A4d3s0 SP A5 
active alive 0 0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c3t5006016841E035A4d3s0 SP B4 
active alive 0 0


Pseudo name=emcpower2a
CLARiiON ID=APM00070202835 [NRHAPP02]
Logical device ID=600601604B141B00C2F6DB2AC349DC11 [LUN 24]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B
==
 Host --- - Stor - -- I/O Path - -- Stats ---
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==
3074 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c2t5006016041E035A4d2s0 SP A4 active 
alive 0 0
3074 [EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c2t5006016941E035A4d2s0 SP B5 active 
alive 0 0
3072 [EMAIL PROTECTED],70/[EMAIL PROTECTED],2/SUNW,[EMAIL PROTECTED]/[EMAIL 
PROTECTED],0 c3t5006016141E035A4d2s0 SP A5 
active alive 0 0
3072 [EMAIL 

Re: [zfs-discuss] ZFS, CIFS, slow write speed

2008-04-29 Thread dhenry
Hi Rick,

I have the same problem as you (sorry for my English).
I have installed the same OS on a Gigabyte motherboard; I wanted to make a NAS 
with the nice ZFS.
First I tried the new in-kernel SMB implementation: file navigation (on Windows) 
and streaming were too slow, though file transfer was acceptable. I thought that 
was because of the young kernel implementation.
So I removed every configuration related to it and I tried Samba.
Samba brought a significant boost in file navigation (I was happy), but streaming 
was even worse and file transfers were weird: when I transfer 2 files the speed 
is around 10MB/s, but for one file the speed is around 100KB/s ...
Clearly the wires are able to transfer at full speed, and clearly ZFS is able to 
handle it (and much more, I think), but something weird is going on.
At that point, I was going to reinstall Ubuntu and forget about Solaris and ZFS, 
but your story gave me hope. I'm ready to try again to find out what's going on.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS, CIFS, slow write speed

2008-04-29 Thread Eric Schrock
FYI -

If you're doing anything with CIFS and performance, you'll want this
fix:

6686647 smbsrv scalability impacted by memory management issues

Which was putback into build 89 of nevada.

- Eric

On Thu, Apr 24, 2008 at 09:46:04AM -0700, Rick wrote:
 Recently I've installed SXCE nv86 for the first time in hopes of getting rid 
 of my linux file server and using Solaris and ZFS for my new file server. 
 After setting up a simple ZFS mirror of 2 disks, I enabled smb and set about 
 moving over all of my data from my old storage server. What I noticed was the 
 dismal performance while writing. I have tried to find information regarding 
 performance and possible expectations, but I've yet to come across anything 
 with any real substance that can help me out. I'm sure there is some guide on 
 tuning for CIFS, but I've not been able to locate it. The write speeds for 
 NFS described in this post 
 http://opensolaris.org/jive/thread.jspa?threadID=55764tstart=0 made me want 
 to look into NFS. However, after disabling sharing, turning off smb, enabling 
 NFS, and sharing the pool again I see the same if not worse performance on 
 write speeds (ms windows SFU may be partially to blame, so I've gone back to 
 learning how to fix smb instead of learning and tweaking NFS).
 
 What I'm doing is mounting the smb share with WinXP and pulling data from the 
 ZFS mirror pool at 2.3MiB/s across the network. Writing to the same share 
 from the WinXP host I get a fairly consistent 342KiB/s speed.
 
 Copying data locally from an IDE drive to the zpool mirror (2 SATAII drives) 
 I get much faster performance. As I do with copying data from one zpool 
 mirror (1 SATA1 drive and 1 SATAII drive) to another zpool mirror (2 SATAII 
 drives) on the same host. I'm not sure on performance numbers but it takes 
 *substantially* less time to transfer.
 
 The research I've done thus far indicates that I've got to use a file that's 
 double the size of my ram to ensure that caching doesn't skew the results. So 
 these tests are all done with an 8GB file.
 
 I would imagine that write speeds and read speeds across the network should 
 be much closer. At this point, I'm assuming that I'm doing something wrong 
 here. Anyone want to let me know what I'm missing?
 
 rick
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Eric Schrock, Fishworkshttp://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How do you determine the zfs_vdev_cache_size current value?

2008-04-29 Thread Brad Diggs
How do you ascertain the current zfs vdev cache size (e.g. 
zfs_vdev_cache_size) via mdb or kstat or any other cmd?

Thanks in advance,
Brad
-- 
The Zone Manager
http://TheZoneManager.COM
http://opensolaris.org/os/project/zonemgr

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS, CIFS, slow write speed

2008-04-29 Thread Rick
 If you're doing anything with CIFS and performance,
 you'll want this
 fix:
 
 6686647 smbsrv scalability impacted by memory
 management issues
 
 Which was putback into build 89 of nevada.
 
 - Eric

Thank you Eric. This is the second time someone has mentioned this to me. I 
imagine it's a significant change. When build 89 is released, I'll be sure to 
upgrade.

As for the speed issue, the bottom line is that I discovered a faulty NIC on 
the WinXP box. Through gathering performance results, I found that the WinXP 
box's read performance was lower than that of a WinXP VM! After swapping out 
cables and using different ports on the switch I found no change. However, after 
swapping NICs on the mobo I gained a huge increase in write speed. I'm not sure 
why only CIFS had an issue, probably because it's one of the chattiest protocols 
ever, but that's just a random jab and has no basis in fact. :)

On to the results:
Stats are taken with System Monitor (v2.18.2)  on Solaris. I've verified the 
stats are similar, if not identical, on the WinXP virtual machine and on the 
linux box through conky.

All boxes are 100fdx. The PS3 is 1000fdx. They are all plugged into the same 
switch (Netgear G5605 v2, 5 port 10/100/1000). File transfers are with a 4GB 
file for SMB and FTP. The HTTP transfer is with a 1.6GB file.

WinXP(1) = virtual machine on the linux box
WinXP(2) = physical machine

WinXP(1): 
FTP
Not tested
HTTP
Read:   500KiB/s - 2.7MiB/s
SMB
Read:1.1MiB/s - 3.4MiB/s
Write:   2.2MiB/s - 3.4MiB/s

WinXP(2):
FTP
Read:4.2MiB/s - 9.9MiB/s
Write:   1.9MiB/s - 6.0MiB/s
HTTP
Read:2.4MiB/s - 6.5MiB/s
SMB
Read:7.6MiB/s - 8.3MiB/s
Write:   7.9MiB/s - 8.9MiB/s

Linux (100fdx)
FTP
Read:1.8MiB/s - 3.7MiB/s
Write:   3.6MiB/s - 3.7MiB/s
HTTP
Read: 2.5MiB/s - 4MiB/s

PS3 (1000fdx):
DLNA
Read:9.4MiB/s

The end result is that I have 63.75 Mbps - 74.66 Mbps read/write via CIFS. Not 
too bad considering it's really just a 100Mbit network link.

Sorry to spam. I thought for sure that the high read/write speeds for FTP and 
HTTP showed that the issue was with CIFS. I'm still kind of baffled as to why 
CIFS was so terrible for the write speed only when, in the end, the issue was 
the NIC.

Thanks to those that helped.
rick
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS, CIFS, slow write speed

2008-04-29 Thread Simon Breden
Hi Rick,

So just to verify, you never managed to get more than 10 MBytes/sec across the 
link due to the network only giving you a 100 Mbps connection?

Simon
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS - Implementation Successes and Failures

2008-04-29 Thread Jonathan Loran


Dominic Kay wrote:
 Hi

 Firstly apologies for the spam if you got this email via multiple aliases.

 I'm trying to document a number of common scenarios where ZFS is used 
 as part of the solution such as email server, $homeserver, RDBMS and 
 so forth but taken from real implementations where things worked and 
 equally importantly threw up things that needed to be avoided (even if 
 that was the whole of ZFS!).

 I'm not looking to replace the Best Practices or Evil Tuning guides 
 but to take a slightly different slant.  If you have been involved in 
 a ZFS implementation small or large and would like to discuss it 
 either in confidence or as a referenceable case study that can be 
 written up, I'd be grateful if you'd make contact.

 -- 
 Dominic Kay
 http://blogs.sun.com/dom

For all the storage under my management, we are deploying ZFS going 
forward.  There have been issues, to be sure, though none of them 
showstoppers.  I agree with other posters that the way the z* commands 
lock up on a failed device is really not good, and it would be nice to 
be able to remove devices from a zpool.  There have been other 
performance issues that are more the fault of our SAN nodes than 
ZFS.  But the ease of management, the unlimited nature (volume size to 
number of file systems) of everything ZFS, built-in snapshots, and the 
confidence we get in our data make ZFS a winner. 

The way we've deployed ZFS has been to map iSCSI devices from our SAN.  
I know this isn't an ideal way to deploy ZFS, but SANs do offer 
flexibility that direct-attached drives do not.  Performance is now 
sufficient for our needs, but it wasn't at first.  We do everything here 
on the cheap; we have to.  After all, this is University research ;)  
Anyway, we buy commodity x86 servers and use software iSCSI.  Most of 
our iSCSI nodes run Open-E iSCSI-R3.  The latest version is actually 
quite quick, which wasn't always the case.  I am experimenting with using 
ZFS on the iSCSI target, but haven't finished validating that yet. 

I've also rebuilt an older 24 disk SATA chassis with the following parts:

Motherboard:Supermicro PDSME+
Processor: Intel Xeon X3210 Kentsfield 2.13GHz 2 x 4MB L2 Cache LGA 775 
Quad-Core
Disk Controllers x3: Supermicro AOC-SAT2-MV8 8-Port SATA
Hard disks x24: WD-1TB RE2, GP
RAM: Crucial, 4x2GB unbuffered ECC PC2-5300 (8GB total)
New power supplies...

The PDSME+ MB was on the Solaris HCL, and it has four PCI-X slots, so 
using three of the Supermicro MV8s is no problem.  This is obviously a 
standalone system, but it will be for nearline backup data, and doesn't 
have the same expansion requirements as our other servers.  The thing 
about this guy is how smoking fast it is.  I've set it up on snv b86, 
with 4 x 6-drive raidz2 stripes, and I'm seeing up to 450MB/sec write 
and 900MB/sec read speeds.  We can't get data into it anywhere that 
quick, but the potential is awesome.  And it was really cheap, for this 
amount of storage.

Our total storage on ZFS now is at: 103TB, some user home directories, 
some software distribution, and a whole lot of scientific data.  I 
compress almost everything, since our bandwidth tends to be SAN pinched, 
not at the head nodes, so we can afford it.

I sleep at night, and the users don't see problems.  I'm a happy camper.

Cheers,

Jon
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS, CIFS, slow write speed

2008-04-29 Thread Rick
 So just to verify, you never managed to get more than
 10 MBytes/sec across the link due to the network only
 giving you a 100 Mbps connection?

Hi Simon,

I'll try to clear this up. Sorry for the confusion.

The server that the Solaris M2N-E box is replacing had 2 NICs. When I removed the 
physical box, I left the cat5 cables lying on the desk. When I plugged in the 
Solaris box, I must have plugged in the cable that ended at a 10/100 switch. 
This caused Solaris to boot up and negotiate a 100Mbit full duplex link. After 
reading your post today, I looked at the cable and realized that I'd plugged in 
the wrong cable. After a move and a reboot, the box came up at 1000Mbit full 
duplex.

Solaris side fixed.

The initial problem that started this whole thread was that I wasn't able to 
get above 342KiB/s (2.80 Mbps) when writing to the ZFS CIFS share from WinXP. I 
was able to achieve better speeds with FTP and HTTP. After I fixed the cable 
snafu on the Solaris box, I attempted to re-create the read/write speed tests 
to see if the slow write issue was resolved. It was not. After some more 
troubleshooting, the poor write performance turned out to be caused by 
an issue with the second NIC on the WinXP motherboard. Once I switched the 
cable to the primary NIC on that mobo, I was able to achieve around 8.9MiB/s 
(74.66 Mbps) write speeds from the WinXP box to the Solaris ZFS CIFS share.

WinXP to Solaris ZFS, slow write speed fixed.

I hope that answers your question. I'm not sure though.

I don't think I can get much higher than the speed I have now. With overhead 
and other network traffic, I believe that 75-85 Mbps is the most I can really 
hope to achieve where almost all the devices talk at 100Mbit. I have re-read 
your question a few times.

I just tested and found that with 2 WinXP boxes and the PS3 pulling data from 
the Solaris ZFS share, I get 20.3MiB/s or 170.29 Mbps total output from the 
Solaris box. I'm sure that if another one of my networked devices (PS3 doesn't 
count) could talk at 1000Mbits/s I'd get much greater speeds. 

As it stands now, the linux box is a laptop (5400rpm drive) and the WinXP box 
has a SATA 1.5Gbit/s drive. As I said before, they both connect at 100Mbit. 
Those are probably the limiting factors keeping me from cresting 10MiB/s. 
Although, even on an empty network, I believe that 11.5 MiB/s or so is the most 
you'd be able to get on a 100Mbit link.

Of course, I've not talked network in a bit, so maybe I have my terms mixed. 
I've been trying to review external references just to make sure I'm speaking 
in the correct terms. Feel free to let me know if I have something wrong.

Thanks!

rick
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] share zfs hierarchy over nfs

2008-04-29 Thread Tim Wood
Hi,
I have a pool /zfs01 with two sub-filesystems, /zfs01/rep1 and /zfs01/rep2.  I 
used 'zfs share' to make all of these mountable over NFS, but clients have 
to mount either rep1 or rep2 individually.  If I try to mount /zfs01 it shows 
directories for rep1 and rep2, but none of their contents.

On a linux machine I think I'd have to set the no_subtree_check flag in 
/etc/exports to let an NFS mount move through the different exports, but I'm 
just beginning with Solaris, so I'm not sure what to do here.

I found this post in the forum: 
http://opensolaris.org/jive/thread.jspa?messageID=169354#169354

but that makes it sound like this issue was resolved by changing the NFS client 
behavior in Solaris. Since my NFS client machines are going to be Linux 
machines, that doesn't help me any.

thanks for any suggestions!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] share zfs hierarchy over nfs

2008-04-29 Thread Bob Friesenhahn
On Tue, 29 Apr 2008, Tim Wood wrote:
 but that makes it sound like this issue was resolved by changing the 
 NFS client behavior in solaris.  Since my NFS client machines are 
 going to be linux machines that doesn't help me any.

Yes, Solaris 10 does nice helpful things that other OSs don't do.  I 
use per-user ZFS filesystems so I encountered the same problem.  It is 
necessary to force the automounter to request the full mount path.

On Solaris and OS-X Leopard client systems I use an /etc/auto_home 
like

# Home directory map for automounter
#
*   freddy:/home/&

which also works for Solaris 9 without depending on the Solaris 10 
feature.

For FreeBSD (which uses the am-utils automounter) I figured out this 
horrific looking map incantation:

* 
type:=nfs;rhost:=freddy;rfs:=/home/${key};fs:=${autodir}/${rhost}${rfs};opts:=rw,grpid,resvport,vers=3,proto=tcp,nosuid,nodev

So for Linux, I think that you will also need to figure out an 
indirect-map incantation which works for its own broken automounter. 
Make sure that you read all available documentation for the Linux 
automounter so you know which parts don't actually work.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] import pooling when device is misisng

2008-04-29 Thread John R. Sconiers II
I did a fresh install of Nevada.  I have two zpools that contain the 
devices c0t0d0s4 and c0t1d0s4.  I couldn't find a way to attach the 
missing device without the pool being imported.  Any help would be 
appreciated.


bash-3.2# zpool import
  pool: nfs-share
id: 6871731259521181476
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

nfs-share   UNAVAIL  missing device
  c0t0d0s4  ONLINE

Additional devices are known to be part of this pool, though their
exact configuration cannot be determined.
bash-3.2# zpool import -Df nfs-share
cannot import 'nfs-share': no such pool available
bash-3.2#

-- 
*
John R. Sconiers II, MISM, SCSA, SCNA, SCSECA, SCSASC
SUN Microsystems
TSC National Storage Support Engineer 
TSC NSSE
708-203-9228 Cell Phone
708-838-7097 access line / fax
Chicago, IL USA
History is a nightmare from which I am trying to awake.-James Joyce
*


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss