I'm a bit late replying to this, but I'd take the quick and dirty approach
personally. When the server is running fine, unplug one disk and just see
which one is reported faulty in ZFS.
A couple of minutes doing that and you've tested that your RAID array is
working fine, and you know exactly
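A minimal sketch of what that unplug test might look like, assuming a mirrored
pool named tank and that the pulled disk shows up as c1t1d0 (both names are
hypothetical):

# zpool status tank     (with one disk pulled, that device is reported UNAVAIL or FAULTED)
# zpool clear tank      (after plugging the disk back in, clear the error counts)
# zpool status tank     (the pool resilvers any missed writes and returns to ONLINE)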
I had similar problems replacing a drive myself; it's not intuitive exactly
which ZFS commands you need to issue to recover from a drive failure.
I think your problems stemmed from using -f. Generally if you have to use
that, there's a step or option you've missed somewhere.
However I'm not
On Fri, 11 Apr 2008, Simon Breden wrote:
Thanks myxiplx for the info on replacing a faulted drive. I think
the X4500 has LEDs to show drive statuses so you can see which
physical drive to pull and replace, but how does one know which
physical disk to pull out when you just have a standard
Thanks Bob, that's good advice. So, before I open my case: I've currently got 3
SATA drives, all the same model, so how do I know which one is plugged into
which SATA connector on the motherboard? Is there a command I can issue which
gives identifying info that includes the disk id AND the SATA
To answer my own question, I might have found the answer:
# cfgadm -al
Ap_Id                    Type    Receptacle   Occupant     Condition
sata0/0::dsk/c1t0d0      disk    connected    configured   ok
sata0/1::dsk/c1t1d0      disk    connected    configured   ok
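Another way to tie a cXtYdZ name to a physical drive is by serial number: on
Solaris, iostat -En reports per-device error counts along with Vendor, Product
and Serial No fields, which you can match against the label on each drive
before pulling it.

# iostat -En            (lists each disk with its Vendor, Product and Serial No)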
So for a general purpose fileserver using standard SATA connectors on the
motherboard, with no drive status LEDs for each drive, using the info above
from myxiplx, this faulty drive replacement routine should work in the event
that a drive fails: (I have copy pasted the example from myxiplx
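A minimal sketch of such a routine, assuming the failed disk is c1t1d0 on port
sata0/1 and the pool is named tank (all names are hypothetical, not the
original pasted example):

# zpool status tank                 (confirm which device is faulted)
# cfgadm -c unconfigure sata0/1     (take the SATA port offline before pulling the disk)
  ...physically swap the drive...
# cfgadm -c configure sata0/1       (bring the new disk online)
# zpool replace tank c1t1d0         (resilver onto the replacement in the same slot)
# zpool status tank                 (watch the resilver progress)

Done in this order, the -f flag mentioned earlier shouldn't be needed.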
Chris Siebenmann wrote:
| What you're saying is independent of the iqn id?
Yes. SCSI objects (including iSCSI ones) respond to specific SCSI
INQUIRY commands with various 'VPD' pages that contain information about
the drive/object, including serial number info.
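For example, the unit serial number lives in VPD page 0x80, and with the
sg3_utils package installed (an assumption, it isn't part of the base install)
it can be read directly; the device path below is hypothetical:

# sg_inq --page=0x80 /dev/rdsk/c2t1d0s2     (fetches the Unit Serial Number VPD page)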
Some Googling turns up:
Just to report back to the list... Sorry for the lengthy post
So I've tested the iSCSI-based ZFS mirror on Sol 10u4, and it does more
or less work as expected. If I unplug one side of the mirror - unplug
or power down one of the iSCSI targets - I/O to the zpool stops for a
while, perhaps a
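For anyone wanting to reproduce the test, a rough sketch of the initiator-side
setup, assuming two targets at 192.168.0.50 and 192.168.0.51 and hypothetical
device names:

# iscsiadm add discovery-address 192.168.0.50:3260
# iscsiadm add discovery-address 192.168.0.51:3260
# iscsiadm modify discovery --sendtargets enable
# devfsadm -i iscsi                         (create device nodes for the discovered LUNs)
# zpool create tank mirror c2t1d0 c3t1d0    (device names will differ on your system)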
| Is it really true that as the guy on the above link states (Please
| read the link, sorry) when one iSCSI mirror goes off line, the
| initiator system will panic? Or even worse, not boot itself cleanly
| after such a panic? How could this be? Anyone else with experience
| with iSCSI based
To repeat what some others have said, yes, Solaris seems to handle an iSCSI
device going offline in that it doesn't panic and continues working once
everything has timed out.
However that doesn't necessarily mean it's ready for production use. ZFS will
hang for 3 mins (180 seconds) waiting
Ross wrote:
To repeat what some others have said, yes, Solaris seems to handle an iSCSI
device going offline in that it doesn't panic and continues working once
everything has timed out.
However that doesn't necessarily mean it's ready for production use. ZFS
will hang for 3 mins (180
On Mon, 7 Apr 2008, Ross wrote:
However that doesn't necessarily mean it's ready for production use.
ZFS will hang for 3 mins (180 seconds) waiting for the iSCSI client
to time out. Now I don't know about you, but HA to me doesn't mean
"Highly Available, but with occasional 3 minute breaks".
Crazy question here... but has anyone tried this with, say, a QLogic
hardware iSCSI card? Seems like it would solve all your issues.
Granted, they aren't free like the software stack, but if you're trying
to set up an HA solution, the ~$800 price tag per card seems pretty darn
reasonable.
On Mon, Apr 7, 2008 at 10:40 AM, Christine Tran [EMAIL PROTECTED]
wrote:
Crazy question here... but has anyone tried this with, say, a QLogic
hardware iSCSI card? Seems like it would solve all your issues. Granted,
they aren't free like the software stack, but if you're trying to set up an
is a virtue.
-- richard
Date: Mon, 7 Apr 2008 07:48:41 -0700
From: [EMAIL PROTECTED]
Subject: Re: [zfs-discuss] OpenSolaris ZFS NAS Setup
To: [EMAIL PROTECTED]
CC: zfs-discuss@opensolaris.org
Ross wrote:
To repeat what some others have said, yes, Solaris seems to handle
This guy seems to have had lots of fun with iSCSI :)
http://web.ivy.net/~carton/oneNightOfWork/20061119-carton.html
This is scaring the heck out of me. I have a project to create a zpool
mirror out of two iSCSI targets, and if the failure of one of them will
panic my system, that will
On Sat, Apr 5, 2008 at 12:25 AM, Jonathan Loran [EMAIL PROTECTED]
wrote:
This guy seems to have had lots of fun with iSCSI :)
http://web.ivy.net/~carton/oneNightOfWork/20061119-carton.html
This is scaring the heck out of
Fascinating read, thanks Simon!
I have been using ZFS in a production data center for a while now, but it
never occurred to me to use iSCSI with ZFS also.
This gives me some ideas on how to back up our mail pools onto some older, slower
disks offsite. I find it interesting that while a local
Thanks a lot, glad you liked it :)
Yes, I agree: using older, slower disks in this way for backups seems a nice way
to reuse old kit for something useful.
There's one nasty problem I've seen with making a pool from an iSCSI disk
hosted on a different machine, which is that if you turn off
If it's of interest, I've written up some articles on my experiences of
building a ZFS NAS box which you can read here:
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
I used CIFS to share the filesystems, but it will be a simple matter to use NFS
instead: issue the command 'zfs
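For example, assuming a dataset named tank/media (a hypothetical name), sharing
over NFS instead of CIFS is just a property change:

# zfs set sharenfs=on tank/media
# zfs get sharenfs tank/media       (verify the property took effect)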