On Sun, May 23, 2010 at 12:02 PM, Torrey McMahon wrote:
> On 5/23/2010 11:49 AM, Richard Elling wrote:
>>
>> FWIW, the A5100 went end-of-life (EOL) in 2001 and end-of-service-life
>> (EOSL) in 2006. Personally, I hate them with a passion and would like to
>> extend an offer to use my tractor to
On Sat, May 22, 2010 at 11:33 AM, Bob Friesenhahn
wrote:
> On Fri, 21 May 2010, Demian Phillips wrote:
>
>> For years I have been running a zpool using a Fibre Channel array with
>> no problems. I would scrub every so often and dump huge amounts of
>> data (tens or hundreds of GB) around and it never had a problem
For years I have been running a zpool using a Fibre Channel array with
no problems. I would scrub every so often and dump huge amounts of
data (tens or hundreds of GB) around and it never had a problem
outside of one confirmed (by the array) disk failure.
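For reference, a periodic scrub routine like the one described usually amounts to the following (a minimal sketch; the pool name `tank` is an example, not from the thread):

```shell
# Start a scrub of the pool; ZFS verifies every block's checksum in the background.
zpool scrub tank

# Check progress and look for checksum or device errors surfaced by the scrub.
zpool status -v tank
```

Running this from cron every week or two is a common practice; errors found during a scrub show up in the `zpool status` output rather than being reported interactively.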
I upgraded to sol10x86 05/09 last year and
Is it possible to recover a pool (as it was) from a set of disks that
were replaced during a capacity upgrade?
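A sketch of how one would attempt this, assuming the old disks still carry intact ZFS labels (i.e., they were not overwritten after being swapped out; `oldpool` is an example name). Note that disks detached via `zpool replace` generally have their labels invalidated, so this only works in favorable cases:

```shell
# Scan all attached devices for importable pools and list what is found.
zpool import

# Import a pool by name; -f forces it if the pool was not cleanly exported.
zpool import -f oldpool

# Also list destroyed pools, which can sometimes still be recovered.
zpool import -D
```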
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
I am using an LSI PCI-X dual-port HBA in a dual-CPU Opteron system.
Connected to the HBA is a Sun StorageTek A1000 populated with 14 36 GB disks.
I have two questions that I think are related.
Initially I set up two zpools, one on each channel, so the pool looked like this:
share
Thanks. I have another spare, so I replaced the failed disk with that, and it put
the used spare back to spare status.
I assume at this point, once I replace the failed disk, I just need to let
Solaris see the change and then add it back into the pool as a spare (to
replace the spare I took out and used in the replacement).
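The sequence described can be sketched roughly as follows (device names `c1t5d0` and `c2t9d0` are examples, not from the thread; adjust to match your `zpool status` output):

```shell
# After physically swapping the failed disk, resilver onto the new disk
# occupying the same slot.
zpool replace tank c1t5d0

# Watch resilver progress; when it completes, an in-use hot spare normally
# returns to the spare list automatically.
zpool status tank

# If the spare does not release on its own, detach it explicitly.
zpool detach tank c2t9d0

# To register a fresh disk as a new hot spare for the pool:
zpool add tank spare c2t9d0
```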
I'm using ZFS and a drive has failed.
I am quite new to Solaris, and frankly I seem to know more about ZFS and how it
works than I do about the OS.
I have the hot spare taking over for the failed disk. From here, do I need to
remove the disk on the OS side (if so, what is the proper procedure) or do I need
to take ac