Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-14 Thread Ross
I'm a bit late replying to this, but I'd take the quick and dirty approach personally. When the server is running fine, unplug one disk and just see which one is reported faulty in ZFS. A couple of minutes doing that and you've tested that your RAID array is working fine and you know exactly
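For reference, the check itself is just a status query; a minimal sketch, assuming a hypothetical mirrored pool named tank:

# zpool status -x tank

The pulled disk should then show up as UNAVAIL or REMOVED in the output, which tells you exactly which cXtYdZ name maps to the drive you unplugged.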

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-11 Thread Ross
I had similar problems replacing a drive myself; it's not intuitive exactly which ZFS commands you need to issue to recover from a drive failure. I think your problems stemmed from using -f. Generally, if you have to use that, there's a step or option you've missed somewhere. However I'm not
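For what it's worth, the replace itself normally doesn't need -f at all; a sketch with hypothetical pool and device names:

# zpool replace tank c1t2d0
(new disk went into the same slot, so it keeps the same device name)
# zpool replace tank c1t2d0 c1t5d0
(replacement showed up under a different device name)

If zpool complains unless you add -f, it's usually because the old device wasn't offlined first or the new disk still carries a label from another pool or filesystem.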

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-11 Thread Bob Friesenhahn
On Fri, 11 Apr 2008, Simon Breden wrote: Thanks myxiplx for the info on replacing a faulted drive. I think the X4500 has LEDs to show drive statuses so you can see which physical drive to pull and replace, but how does one know which physical disk to pull out when you just have a standard

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-11 Thread Simon Breden
Thanks Bob, that's good advice. So, before I open my case: I've currently got 3 SATA drives, all the same model, so how do I know which one is plugged into which SATA connector on the motherboard? Is there a command I can issue which gives identifying info that includes the disk ID AND the SATA
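One pair of commands that, between them, should give that mapping on a stock Solaris/OpenSolaris install (a sketch; device names are whatever your system reports):

# cfgadm -al
(shows which sataX/Y port each cXtYdZ device name hangs off)
# iostat -En
(shows vendor, model and serial number for each cXtYdZ device)

Matching the serial number from iostat -En against the label printed on the drive tells you which physical unit sits on which SATA port.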

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-11 Thread Simon Breden
To answer my own question, I might have found the answer:
# cfgadm -al
Ap_Id                          Type   Receptacle   Occupant     Condition
sata0/0::dsk/c1t0d0            disk   connected    configured   ok
sata0/1::dsk/c1t1d0            disk   connected    configured   ok

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-11 Thread Simon Breden
So for a general-purpose fileserver using standard SATA connectors on the motherboard, with no drive status LEDs for each drive, using the info above from myxiplx, this faulty drive replacement routine should work in the event that a drive fails: (I have copy-pasted the example from myxiplx
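Roughly, the routine being described looks like this (a sketch only, with hypothetical pool, device and port names; the cfgadm steps assume working SATA hot-plug support):

# zpool offline tank c1t1d0
# cfgadm -al
(find the SATA port the failed disk is on, e.g. sata0/1::dsk/c1t1d0)
# cfgadm -c unconfigure sata0/1
(physically swap the drive)
# cfgadm -c configure sata0/1
# zpool replace tank c1t1d0
# zpool status tank
(watch the resilver complete before trusting the pool again)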

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-10 Thread Jonathan Loran
Chris Siebenmann wrote: | What you're saying is independent of the iqn id? Yes. SCSI objects (including iSCSI ones) respond to specific SCSI INQUIRY commands with various 'VPD' pages that contain information about the drive/object, including serial number info. Some Googling turns up:

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-09 Thread Jonathan Loran
Just to report back to the list... Sorry for the lengthy post. So I've tested the iSCSI based zfs mirror on Sol 10u4, and it does more or less work as expected. If I unplug one side of the mirror - unplug or power down one of the iSCSI targets - I/O to the zpool stops for a while, perhaps a

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-08 Thread Chris Siebenmann
| Is it really true that as the guy on the above link states (Please | read the link, sorry) when one iSCSI mirror goes offline, the | initiator system will panic? Or even worse, not boot itself cleanly | after such a panic? How could this be? Anyone else with experience | with iSCSI based

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-07 Thread Ross
To repeat what some others have said, yes, Solaris seems to handle an iSCSI device going offline in that it doesn't panic and continues working once everything has timed out. However that doesn't necessarily mean it's ready for production use. ZFS will hang for 3 mins (180 seconds) waiting
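If your build is recent enough, the pool-level behaviour once the device is finally declared gone is at least tunable; a sketch, assuming a hypothetical pool named tank and a build that already has the failmode property (it arrived in later Nevada builds, not Solaris 10u4):

# zpool set failmode=continue tank
# zpool get failmode tank

Note this only governs what the pool does after I/O has actually failed (wait, continue or panic); it doesn't shorten the iSCSI initiator's own 180-second timeout.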

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-07 Thread Richard Elling
Ross wrote: To repeat what some others have said, yes, Solaris seems to handle an iSCSI device going offline in that it doesn't panic and continues working once everything has timed out. However that doesn't necessarily mean it's ready for production use. ZFS will hang for 3 mins (180

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-07 Thread Bob Friesenhahn
On Mon, 7 Apr 2008, Ross wrote: However that doesn't necessarily mean it's ready for production use. ZFS will hang for 3 mins (180 seconds) waiting for the iSCSI client to time out. Now I don't know about you, but HA to me doesn't mean 'Highly Available, but with occasional 3-minute breaks'.

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-07 Thread Christine Tran
Crazy question here... but has anyone tried this with, say, a QLogic hardware iSCSI card? Seems like it would solve all your issues. Granted, they aren't free like the software stack, but if you're trying to set up an HA solution, the ~$800 price tag per card seems pretty darn reasonable

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-07 Thread Tim
On Mon, Apr 7, 2008 at 10:40 AM, Christine Tran [EMAIL PROTECTED] wrote: Crazy question here... but has anyone tried this with, say, a QLogic hardware iSCSI card? Seems like it would solve all your issues. Granted, they aren't free like the software stack, but if you're trying to set up an

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-07 Thread Richard Elling
is a virtue. -- richard  Ross wrote: To repeat what some others have said, yes, Solaris seems to handle

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-04 Thread Jonathan Loran
This guy seems to have had lots of fun with iSCSI :) http://web.ivy.net/~carton/oneNightOfWork/20061119-carton.html This is scaring the heck out of me. I have a project to create a zpool mirror out of two iSCSI targets, and if the failure of one of them will panic my system, that will
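For anyone following along, the initiator-side setup for that kind of mirror is short; a sketch with hypothetical addresses and device names (real iSCSI LUN names are much longer):

# iscsiadm modify discovery --sendtargets enable
# iscsiadm add discovery-address 192.168.1.10:3260
# iscsiadm add discovery-address 192.168.1.11:3260
# devfsadm -i iscsi
# format
(note the new device names the two iSCSI LUNs appear under)
# zpool create tank mirror c2t1d0 c3t1d0

The interesting part is exactly the failure mode being discussed here: what the pool, and the box, do when one of those two targets disappears.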

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-04 Thread Tim
On Sat, Apr 5, 2008 at 12:25 AM, Jonathan Loran [EMAIL PROTECTED] wrote: This guy seems to have had lots of fun with iSCSI :) http://web.ivy.net/~carton/oneNightOfWork/20061119-carton.html This is scaring the heck out of

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-03 Thread Vincent Fox
Fascinating read, thanks Simon! I have been using ZFS in a production data center for a while now, but it never occurred to me to use iSCSI with ZFS also. This gives me some ideas on how to back up our mail pools onto some older, slower disks offsite. I find it interesting that while a local
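One rough way that offsite idea could look, as a sketch: the offsite box exports its old disks as an iSCSI target (how depends on its target software), the mail server builds a backup pool on that LUN, and snapshots get replicated into it with send/receive. All names and addresses below are placeholders:

# iscsiadm add discovery-address 10.0.0.50:3260
# devfsadm -i iscsi
# zpool create backuppool c4t1d0
# zfs snapshot mailpool/imap@2008-04-03
# zfs send mailpool/imap@2008-04-03 | zfs receive backuppool/imap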

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-03 Thread Simon Breden
Thanks a lot, glad you liked it :) Yes, I agree: using older, slower disks in this way for backups seems a nice way to reuse old kit for something useful. There's one nasty problem I've seen with making a pool from an iSCSI disk hosted on a different machine, and that is that if you turn off
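One workaround that gets suggested for that situation is simply not to leave the pool imported while the machine hosting the iSCSI disk is down; a sketch, with a hypothetical pool name:

# zpool export backuppool
(shut down or reboot the box hosting the iSCSI target)
# zpool import backuppool
(once the target is reachable again)

An exported pool isn't opened at boot, so the initiator shouldn't trip over the missing device while the other machine is off.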

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-01 Thread Simon Breden
If it's of interest, I've written up some articles on my experiences of building a ZFS NAS box which you can read here: http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/ I used CIFS to share the filesystems, but it will be a simple matter to use NFS instead: issue the command 'zfs
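The NFS equivalent is the sharenfs property; a sketch, with a placeholder filesystem name:

# zfs set sharenfs=on tank/media
# zfs get sharenfs tank/media

This shares the filesystem over NFS in the same way the sharesmb property does for CIFS.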