[zfs-discuss] OpenSolaris 2008.11

2008-12-07 Thread Edward Irvine
Folks,

For those of you who haven't had time to follow the OpenSolaris project,
I recommend this excellent screencast. Of particular interest to this list
is how ZFS is used to implement a version of Apple's Time Machine.

http://webcast-west.sun.com/interactive/09B12437/index.html

Eddie

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-07 Thread Johan Hartzenberg
On Wed, Dec 3, 2008 at 6:37 PM, Aaron Blew [EMAIL PROTECTED] wrote:

 I've done some basic testing with a X4150 machine using 6 disks in a RAID 5
 and RAID Z configuration.  They perform very similarly, but RAIDZ definitely
 has more system overhead.  In many cases this won't be a big deal, but if
 you need as many CPU cycles as you can muster, hardware RAID may be your
 better choice.



Some people keep stressing the point that HW RAID does not include snapshots
or whatever other features, or does so at extra cost, and so on.  It
seems to me like we assume that the above poster intended or implied the use
of another file system on the HW RAID system.

The poster above did not specify a file system, so I may as well assume the
comparison is between using ZFS with JBOD vs ZFS on HW RAID.

Then the features available to the administrator are essentially the same.
Now the question becomes: what are the pros and cons of each?

I have not tested this, but I would assume that the HW RAID (forget about
cheap motherboard-chipset integrated fake-RAID) will save some CPU time,
because the RAID controller has a dedicated processor to do the stripe
parity calculations.  In addition, the ZFS routines may have an easier time
in terms of selecting which disk to store the data on (only one disk to
choose from).

On the other hand, ZFS promises better fault detection, but presently this
is tempered by several open bugs against ZFS in situations where
degraded pools are present, e.g. pools freezing.  HW RAID seems to have
this sort of situation under control.

Some HW raids may offer re-layout without losing data.  ZFS does not (yet)
offer this.

ZFS claims better write performance in scenarios where less than a full
stripe width is updated, and raid5 suffers from the write-hole problem.
Nicely defined here: http://blog.dentarg.net/2007/1/10/raid5-write-hole

ZFS updates are atomic - you never need to fsck the file system.

ZFS will work regardless of whether or not you have a HW raid disk
subsystem.

So... what other benefits does ZFS have (in the comparison defined in my
second paragraph)?

For what it is worth, have a look at my ZFS feature wishlist, AKA what it
would take to make ZFS _THE_ last word in storage management:
http://initialprogramload.blogspot.com/2008/07/zfs-missing-features.html

  _J

-- 
Any sufficiently advanced technology is indistinguishable from magic.
   Arthur C. Clarke

My blog: http://initialprogramload.blogspot.com


Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-07 Thread Tomas Ögren
On 07 December, 2008 - Johan Hartzenberg sent me these 6,3K bytes:

 For what it is worth, have a look at my ZFS feature wishlist / AKA what it
 would take to make ZFS _THE_ last word in storage management:
 http://initialprogramload.blogspot.com/2008/07/zfs-missing-features.html

#2 can kinda be solved with L2ARC.. Not entirely, but somewhat..

#3 is coming, but there is no hard ETA (according to Sun when I poked
them).

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


Re: [zfs-discuss] zpool cannot replace a replacing device

2008-12-07 Thread Brian Couper
hi,

--- replacing UNAVAIL 0 543 0 insufficient replicas
-- 17096229131581286394 FAULTED 0 581 0 was /dev/dsk/c0t2d0s0/old
-- 11342560969745958696 FAULTED 0 582 0 was /dev/dsk/c0t2d0s0

Looking at that, I don't think you have fixed the original fault. It's still 
getting write errors. That's why the resilvering has stopped, I reckon.

Are there any spare drive connections on the system? Could you free one up, 
so you can plug the drive into a different connector?
You will need to resolve the hardware error: is it the drive, the cable, or 
the hard drive controller?
Remember, a hard drive's best trick is to act alive and well when it is 
really at death's door.
One of ZFS's best features is its ability to sniff out hardware faults.

To restart the resilver, do a zpool clear and zpool online. This will force 
the zpool and the hard drive online. It will start to resilver; do a zpool 
status -v to monitor the process, and watch the error count on the drive. 
Don't do this until you really think you have the error fixed.

How is your backup situation? Get your critical data off the zpool before 
attempting to repair it or change anything.

What I would do is get a new drive, connect it to a different hard drive 
connector, and use a new cable. Remove the old drive; unplug it. I would not 
try to replace the faulty drive while it is still connected -- things are 
just going to get confusing.

Your zpool status will then show the drive as missing; zpool replace it with 
the new drive.
Your zpool will be fixed in a few hours. Your zpool may give errors across 
other drives; as long as it's under 50, just use zpool clear. Your hardware 
fault may have been causing trouble for ages without you knowing.
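A rough dry-run sketch of the sequence above, assuming the pool is named
data and the faulty drive is c0t2d0 (as elsewhere in this thread); the
replacement device name c0t3d0 is hypothetical. ECHO=echo makes each
command print instead of execute -- drop it on the real host, after backups:

```shell
# Dry run: ECHO=echo prints each command instead of executing it.
# Remove it on the affected host once backups are taken.
ECHO=echo

# Restart the resilver after the hardware fault is fixed:
$ECHO zpool clear data
$ECHO zpool online data c0t2d0
$ECHO zpool status -v data    # watch the per-drive error counters

# Or, with a brand-new drive cabled in and the old one unplugged,
# replace the now-missing disk (c0t3d0 is a hypothetical new device):
$ECHO zpool replace data c0t2d0 c0t3d0
```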

I'm an amateur ZFS-er, so use my advice with caution.

Brian,
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] zpool cannot replace a replacing device

2008-12-07 Thread Courtney Malone
Well, you would think that would be the case, but the behavior is the same 
whether the disk is physically present or not. I can even use cfgadm to 
unconfigure the device and the pool will stay in the same state and not let 
me offline/detach/replace the vdev. Also, I don't have any spare ports, 
unfortunately.


Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-07 Thread Torrey McMahon
I'm pretty sure I understand the importance of a snapshot API. (You take 
the snap, then you do the backup or whatever) My point is that, at 
least on my quick read, you can do most of the same things with the ZFS 
command line utilities. The relevant question would then be how stable 
that is for the type of work we're talking about.

Joseph Zhou wrote:
 Ok, Torrey, I like you, so one more comment before I go to bed --

 Please go study the EMC NetWorker 7.5, and why EMC can claim 
 leadership in VSS support.
 Then, if you still don't understand the importance of VSS, just ask me 
 in an open fashion and I will teach you.

 The importance of storage in system and application optimization can 
 be very significant.
 You do coding -- do you know what TGT from IBM in COBOL is, to be able to 
 claim enterprise technology?
 If not, please study.
 http://publib.boulder.ibm.com/infocenter/pdthelp/v1r1/index.jsp?topic=/com.ibm.entcobol.doc_4.1/PGandLR/ref/rpbug10.htm
  


 Open Storage is a great concept, but we can only win with real 
 advantages, not fake marketing lines.
 I hope everyone enjoyed the discussion. I did.

 zStorageAnalyst


 - Original Message - From: Torrey McMahon [EMAIL PROTECTED]
 To: Joseph Zhou [EMAIL PROTECTED]
 Cc: Richard Elling [EMAIL PROTECTED]; William D. Hathaway 
 [EMAIL PROTECTED]; [EMAIL PROTECTED]; 
 zfs-discuss@opensolaris.org; [EMAIL PROTECTED]
 Sent: Sunday, December 07, 2008 2:40 AM
 Subject: Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun 
 X4150/X4450


 Compared to HW-RAID-only snapshots, ZFS is still, IMHO, easier to use.

 If you start talking about VSS, aka shadow copy for Windows, you're 
 now at the fs level. I can see that VSS offers an API for 3rd parties 
 to use but, as I literally just started reading about it, I'm not an 
 expert. From a quick glance I think the ZFS feature set is 
 comparable. Is there a C++ API to ZFS? Not that I know of. Do you 
 need one? Can't think of a reason off the top of my head given the 
 way the zpool/zfs commands work.

 Joseph Zhou wrote:
 Torrey, now this is as impressive as the old days with Sun Storage.

 Ok, ZFS PiT is only a software solution.
 The Windows VSS is not only a software solution, but also a 3rd-party 
 integration standard from MS.
 What's your comment on ZFS PiT being better than MS PiT, in light of 
 openness and 3rd-party integration?

 Talking about garbage!
 z


 - Original Message - From: Torrey McMahon 
 [EMAIL PROTECTED]
 To: Richard Elling [EMAIL PROTECTED]
 Cc: Joseph Zhou [EMAIL PROTECTED]; William D. 
 Hathaway [EMAIL PROTECTED]; 
 [EMAIL PROTECTED]; zfs-discuss@opensolaris.org; 
 [EMAIL PROTECTED]
 Sent: Sunday, December 07, 2008 1:58 AM
 Subject: Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on 
 Sun X4150/X4450


 Richard Elling wrote:
 Joseph Zhou wrote:

 Yeah?
 http://www.adaptec.com/en-US/products/Controllers/Hardware/sas/value/SAS-31605/_details/Series3_FAQs.htm
  

 Snapshot is a big deal?


 Snapshot is a big deal, but you will find most hardware RAID 
 implementations
 are somewhat limited, as the above adaptec only supports 4 
 snapshots and it is an
 optional feature.  You will find many array vendors will be happy 
 to charge lots
 of money for the snapshot feature.

 On top of that since the ZFS snapshot is at the file system level 
 it's much easier to use. You don't have to quiesce the file system 
 first or hope that when you take the snapshot you get a consistent 
 data set. I've seen plenty of folks take hw raid snapshots without 
 locking the file system first, let alone quiescing the app, and 
 getting garbage.








Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-07 Thread Ian Collins



 On Mon 08/12/08 08:14 , Torrey McMahon [EMAIL PROTECTED] sent:
 I'm pretty sure I understand the importance of a snapshot API. (You take
 the snap, then you do the backup or whatever) My point is that, at 
 least on my quick read, you can do most of the same things with the ZFS
 command line utilities. The relevant question would then be how stable 
 that is for the type of work we're talking about.
 
Or through the APIs provided by libzfs.

-- 
Ian.


Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-07 Thread Torrey McMahon
Ian Collins wrote:

  On Mon 08/12/08 08:14 , Torrey McMahon [EMAIL PROTECTED] sent:
   
 I'm pretty sure I understand the importance of a snapshot API. (You take
 the snap, then you do the backup or whatever) My point is that, at 
 least on my quick read, you can do most of the same things with the ZFS
 command line utilities. The relevant question would then be how stable 
 that is for the type of work we're talking about.

 
 Or through the APIs provided by libzfs.

I'm not sure if those are published/supported as opposed to just being 
readable in the source. I think the ADM project is the droid we're 
looking for.

Automatic Data Migration http://opensolaris.org/os/project/adm/
ADM is designed to use the Data Storage Management API (aka XDSM) as
defined in the CAE Specification XDSM as documented by the Open
Group. XDSM provides an Open Standard API to Data Migration
Applications (DMAPI) to manage file backup and recovery, automatic
file migration, and file replication. ADM will take advantage of
these APIs as a privileged application and extension to ZFS. 




Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-07 Thread Ian Collins
On Mon 08/12/08 09:14 , Torrey McMahon [EMAIL PROTECTED] sent:
 Ian Collins wrote:

  Or through the APIs provided by libzfs.

 I'm not sure if those are published/supported as opposed to just being 
 readable in the source. I think the ADM project is the droid we're 
 looking for.
 
Fair point, I've been working with my own (C++) wrapper which abstracts the 
differences.

-- 
Ian


Re: [zfs-discuss] Problem with ZFS and ACL with GDM

2008-12-07 Thread Brian Cameron

Mark/Tomas/Miles:

Thanks for the information.  Unfortunately, using chmod/chown does not
seem a workable solution to me, unless I am missing something.  Normally
logindevperm(4) is used for managing the ownership and permissions of
device files (like the audio device), and if the GDM daemon just calls
chown/chmod on the audio device, then it seems this could easily cause
inconsistencies with logindevperm.

Remember, for example, that multiple users can log in to the same
machine -- perhaps one via the console, and other users via XDMCP or
other remote methods.  VT (Virtual Terminal) support will soon be
integrated into Solaris and add yet another way that users can log in.

It seems that it would cause obvious problems if the GDM daemon simply
changed the ownership/permissions of the audio device files when starting
the GUI login screen.  What if a second user tries to log in via XDMCP
after another user has already logged in, has ownership of the audio
device, and is using it?  We probably don't want the second login screen
to steal the audio device away from the first user.  Also, making the
file have all user read/write permissions is not desirable since it
would make a denial-of-service attack simple, where a second user could
take over the audio device.

ACLs seemed a good solution since they leave the overall ownership
and permissions of the device the same, but just add the gdm user as
having permission to access the device as needed.  Is there any way to
get this same sort of behavior when using ZFS?

If not, can people recommend a better way to manage audio device
permissions from the login screen?  I know on some Linux distros
they make the audio device owned by the audio group and ensure that
the gdm user is in the audio group.  Perhaps we should use a similar
approach on Solaris if ACLs aren't a practical solution for all file
systems.

Remember that the need to have access to the audio device from the login
screen is only used to support text-to-speech so that users with certain
accessibility needs can navigate the login screen.  In other words, it is
a feature that only a small percentage of users really need, but a
feature that makes the desktop completely unusable for them if it is not
present.

Thoughts?

Thanks again for all the help,

Brian


Mark Shellenbaum wrote:
 
 However, I notice that when using ZFS on Indiana the above commands fail
 with the following error:

File system doesn't support aclent_t style ACL's.
See acl(5) for more information on ACL styles support by Solaris.

 What is the appropriate command to use with ZFS? 
 
 You can use pathconf() with _PC_ACL_ENABLED to determine what flavor of 
 ACL the file system supports.
 
 check out these links.
 
 http://docs.sun.com/app/docs/doc/816-5167/fpathconf-2?a=view
 http://blogs.sun.com/alvaro/entry/detecting_the_acl_type_you
 
 The example in the blog isn't quite correct.  The returned value is a 
 bit mask, and it is possible for a file system to support multiple ACL 
 flavors.
 
 Here is an example of pathconf() as used in acl_strip(3sec)
 
 http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libsec/common/aclutils.c#390
  
 
 
 
   -Mark




Re: [zfs-discuss] zpool cannot replace a replacing device

2008-12-07 Thread Brian Couper
zpool replace data 11342560969745958696 c0t2d0 might replace the drive, BUT
you will have to sort out the hardware error first.

For now, forget about ZFS and what it says about the zpool status. 
Concentrate on fixing the hardware error. Use the manufacturer's drive-check 
boot CD to check the drive again. I know you checked it once before, but my 
money is on the hard drive being faulty. I reckon you will get errors on the 
drive if you check it again.

If it passes without any errors, and without wiping the drive, try zpool 
clear and zpool online. It may not get any more write errors.

Is the drive showing up in the format command?

Remember, this small error has all the signs of going pear-shaped on you, so 
back up your data now while you can still read it!


Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-07 Thread Joseph Zhou
Yes, yes, Torrey, that's why I like you!

You are getting there -- the argument for snapshots is not about their 
elegance in the absolute, but about what they do in the overall solution. 
When you were talking about PiT with ADM, it made more sense, didn't it?

Please keep in mind that OpenSolaris and ZFS don't need to be the greatest 
technology today, and we need to respect the older generation of engineers' 
thoughts -- it's an evolution of transferring enterprise capabilities to 
industry-standard solutions, not a revolution in which Sun Storage just 
re-invented everything.

And think strategically: is VSS just an API?  Even if it is, by some logic, 
consider what this API does in MS's long-term marketing strategy and its 
intent to claim enterprise -- and how OpenSolaris and ZFS can claim more 
enterprise, one day???

I have lots of other work to do and cannot chat any more.
But this is the first year since 2002 that I did not visit Sun Storage and 
chat with real Sun Storage folks over drinks. Miss you guys!
As every year, here is my contribution to open storage -- my frank comments.

Happy holidays!
zStorageAnalyst

- Original Message - 
From: Ian Collins [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; Torrey McMahon [EMAIL PROTECTED]
Cc: Joseph Zhou [EMAIL PROTECTED]; William D. Hathaway 
[EMAIL PROTECTED]; [EMAIL PROTECTED]; 
zfs-discuss@opensolaris.org; [EMAIL PROTECTED]; Richard 
Elling [EMAIL PROTECTED]
Sent: Sunday, December 07, 2008 3:26 PM
Subject: Re: Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun 
X4150/X4450


On Mon 08/12/08 09:14 , Torrey McMahon [EMAIL PROTECTED] sent:
 Ian Collins wrote:

  Or through the APIs provided by libzfs.

 I'm not sure if those are published/supported as opposed to just being
 readable in the source. I think the ADM project is the droid we're
 looking for.

Fair point, I've been working with my own (C++) wrapper which abstracts the 
differences.

-- 
Ian 



Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-07 Thread Bob Friesenhahn
On Sun, 7 Dec 2008, Joseph Zhou wrote:

 Please keep in mind that OpenSolaris and ZFS don't need to be the greatest
 technology today, and we need to respect the older generation of engineers'
 thoughts -- it's an evolution of transferring enterprise capabilities to
 industry-standard solutions, not a revolution in which Sun Storage just
 re-invented everything.

I am not sure what you are trying to say.  Sometimes revolution is 
necessary in order for there to be substantial improvement.  ZFS is a 
revolution rather than an evolution.

 And think strategically: is VSS just an API?  Even if it is, by some logic,
 consider what this API does in MS's long-term marketing strategy and its
 intent to claim enterprise -- and how OpenSolaris and ZFS can claim more
 enterprise, one day???

VSS is an NTFS filesystem feature which seems to have become usable 
only as of Windows Server 2003.  It includes arbitrary limitations 
which don't exist in ZFS.  Clearly you are sold on this 
closed-source technology.

To my way of thinking, individual components are not in themselves 
enterprise.  The notion of enterprise is that there is a system of 
well-integrated components which provide the performance, reliability, 
and maintainability required for mission-critical installations. 
Since Microsoft is not a vertically integrated system vendor, it can 
only qualify its products as being enterprise in conjunction with a 
real system vendor in order to offer an integrated solution. 
Otherwise it is just a collection of parts which may or may not even 
function together.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] Upgrading my ZFS server

2008-12-07 Thread SV
js.lists, or anyone else who is using an XFX MDA72P7509 motherboard:

That onboard NIC is a Marvell? Did you choose not to use it in favor of the 
Intel PCI NIC?
Marvell provides Solaris 10 x86/x64 drivers on their website, and I was 
hoping the Marvell works in OpenSolaris, because 97% of the AMD motherboards 
I researched have a Realtek NIC, which I don't want.

XFX's website is one of those "register your serial number to get access" 
sites. I hate manufacturers that don't let you research the details before 
you buy!


Re: [zfs-discuss] Problem with ZFS and ACL with GDM

2008-12-07 Thread Mark Shellenbaum

 
 ACL's seemed a good solution since it leaves the overall ownership
 and permissions of the device the same, but just adds the gdm user as
 having permission to access the device as needed.  Is there any way to
 get this same sort of behavior when using ZFS.
 

I think you may have misunderstood what people were suggesting.  They 
weren't suggesting changing the mode of the file, but using chmod(1M) to 
add/modify ZFS ACLs on the device file.

chmod A+user:gdm:rwx:allow file

See chmod(1M) or the zfs admin guide for ZFS ACL examples.
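A concrete dry-run sketch of that suggestion (ECHO=echo only prints the
commands; drop it on a real Solaris host). The /dev/audio path here is
just an illustrative device node -- adjust it to the actual audio device:

```shell
# Dry run: ECHO=echo prints each command instead of executing it.
ECHO=echo

# Grant the gdm user access without touching owner/group/mode:
$ECHO chmod A+user:gdm:rwx:allow /dev/audio

# Verify: on Solaris, ls -v lists the ACL entries, and the new
# user:gdm ACE should show up in the listing.
$ECHO ls -v /dev/audio
```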


 If not, can people recommend a better way to manage audio device
 permisisons from the login screen?  I know on some Linux distros,
 they make the audio device owned by the audio group and ensure that
 the gdm user is in the audio group.  Perhaps we should use a similar
 approach on Solaris if ACL isn't a practical solution for all file
 systems.
 
 Remember that the need to have access to the audio device from the login
 screen is only used to support text-to-speech so that users with certain
 accessibility needs can navigate the login screen.  In other words, it is
 a feature that only a small percentage of users really need, but a
 feature that makes the desktop completely unusable for them if it is not
 present.
 
 Thoughts?
 
 Thanks again for all the help,
 
 Brian
 
 


Re: [zfs-discuss] OpenSolaris vs Linux

2008-12-07 Thread James C. McPherson
On Sat, 06 Dec 2008 22:28:36 -0500
Joseph Zhou [EMAIL PROTECTED] wrote:

 Ian, Tim, again, thank you very much in answering my question.
 
 I am a bit disappointed that the whole discussion group does not have
 one person to stand up and say yeah, OpenSolaris absolutely
 outperforms Linux and Windows, because..

Why? What purpose would it serve? For some tasks Linux outperforms
Windows and OpenSolaris. For some tasks Windows outperforms OpenSolaris
and linux. For some tasks OpenSolaris outperforms linux and Windows.

 But I wish, one day, we can be arguing not on a basis of belief, but
 on a basis of facts (referencable data).

So you're discounting all the publicly available information that's
not only on sun.com, but also on blogs.sun.com (see, e.g., Roch's and
R. Elling's blogs), and Joyent, and many other places. 

Why is that?

One thing that I find quite refreshing about these fora is that
there is a distinct preference for hard, referenceable data, and
the intestinal fortitude to analyse it objectively.



James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog


Re: [zfs-discuss] zpool cannot replace a replacing device

2008-12-07 Thread Courtney Malone
The disk passes sector-by-sector write tests both with the vendor diag and 
SeaTools; the cable failed as soon as I tried it in another machine. The 
disk is good, the cable was not. It also shows up in format just fine, and 
it has the same partition layout as all the other disks in the pool. zpool 
state is the problem here: like I said, it doesn't care whether the disk is 
there or not. Even if c0::dsk/c0t2d0 is unconfigured with cfgadm, the pool 
stays in a faulted state after zpool clear data, and those 2 vdevs under 
"replacing" remain faulted whether the disk is present or not.


Re: [zfs-discuss] zpool cannot replace a replacing device

2008-12-07 Thread Courtney Malone
Is there any way to use zdb to simply remove those vdevs, since they aren't 
active members of the pool?


[zfs-discuss] How to use mbuffer with zfs send/recv

2008-12-07 Thread Julius Roberts
 How do I compile mbuffer for our system?

Thanks to Mike Futerko for help with the compile; I now have it installed OK.

  And what syntax do I use to invoke it within the zfs send/recv?

Still looking for answers to this one.  Any example syntax, gotchas,
etc. would be much appreciated.

-- 
Kind regards, Jules

free. open. honest. love. kindness. generosity. energy. frenetic.
electric. light. lasers. spinning spotlights. stage dancers. heads
bathed in yellow light. silence. stillness. awareness. empathy. the
beat. magic, not mushrooms. thick. tight. solid. commanding.
compelling. uplifting. euphoric. ecstatic, not e. ongoing. releasing.
reforming. meandering. focussing. quickening. quickening. quickening.
aloft. floating. then the beat. fat exploding thick bass-line.
eyes, everywhere. smiling. sharing. giving. trust. understanding.
tolerance. peace. equanimity. emptiness (Earthcore, 2008)


Re: [zfs-discuss] How to use mbuffer with zfs send/recv

2008-12-07 Thread Thomas Maier-Komor
Julius Roberts wrote:
 How do i compile mbuffer for our system,
 
 Thanks to Mike Futerko for help with the compile, i now have it installed OK.
 
  and what syntax to i use to invoke it within the zfs send recv?
 
 Still looking for answers to this one?  Any example syntax, gotchas
 etc would be much appreciated.
 

First start the receive side, then the sender side:

receiver# mbuffer -s 128k -m 200M -I sender:8000 | zfs receive filesystem

sender# zfs send pool/filesystem | mbuffer -s 128k -m 200M -O receiver:8000

Of course, you should adjust the hostnames accordingly, and set the
mbuffer buffer size to a value that fits your needs (option -m).

BTW: I've just released a new version of mbuffer which defaults to a TCP
buffer size of 1M, which can be adjusted with the option --tcpbuffer.

Cheers,
Thomas


Re: [zfs-discuss] help please - The pool metadata is corrupted

2008-12-07 Thread Brett
Here is the requested output of raidz_open2.d upon running a zpool status:

[EMAIL PROTECTED]:/export/home/brett# ./raidz_open2.d
run 'zpool import' to generate trace

60027449049959 BEGIN RAIDZ OPEN
60027449049959 config asize = 4000755744768
60027449049959 config ashift = 9
60027507681841 child[3]: asize = 1000193768960, ashift = 9
60027508294854 asize = 4000755744768
60027508294854 ashift = 9
60027508294854 END RAIDZ OPEN
60027472787344 child[0]: asize = 1000193768960, ashift = 9
60027498558501 child[1]: asize = 1000193768960, ashift = 9
60027505063285 child[2]: asize = 1000193768960, ashift = 9

I hope that helps; it means little to me.

One thought I had was that maybe I somehow messed up the cables and the 
devices are not in their original sequence. Would this make any difference? 
I have seen examples suggesting that importing a raid-z should figure out 
the devices regardless of their order or of new device numbers, so I was 
hoping it didn't matter.

Thanks Rep


Re: [zfs-discuss] zpool cannot replace a replacing device

2008-12-07 Thread Brian Couper
I'm at the limit of my knowledge now.

Google "man zpool".

UNAVAIL is coming up because the zpool was imported with the drive missing.
Try exporting the pool, rebooting, then importing it with the drive 
connected.

UNAVAIL
The device could not be opened. If a pool is imported when a device was 
unavailable, then the device will be identified by a unique identifier instead 
of its path since the path was never correct in the first place.

zpool attach [-f] pool device new_device -- have a read of zpool attach; it 
might work.

You could also try adding the drive as a hot spare.

That's all the help I can give, sorry; I don't know how to change/edit parts 
of ZFS.