I had been having problems with the server. Techs finally found that the
CMOS battery died. (A good reason to check Gary's suggestion.)
Rainer
On 04/10/2015 7:42 PM, Jason Matthews wrote:
Gary is right to suspect another issue.
Is your CMOS clock reporting the correct time? A battery failure
could cause the loss of your disk controller settings, which might
lead you to believe your disks failed.
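A quick sanity check from the running system would be comparing the OS clock in local time and UTC; this is just a sketch of the idea, not specific to any one tool:

```shell
# Compare the OS clock in local time and UTC; after a CMOS battery
# failure the clock typically resets to an old epoch date on reboot.
date
date -u
```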
J.
Sent from my iPhone
On Oct 4, 2015, at 3:27 PM, Rainer Heilke
<[email protected]> wrote:
Greetings. I've recently had three hard drives fail in my server.
One was the OS disk, so I just reinstalled. The other two, however,
were each one-half of zpool mirrors. They are the problem disks.
Both have been replaced, but now I cannot seem to work with them.
In format -e, they are giving errors, specifically:

       1. c3d1 <drive type unknown>
          /pci@0,0/pci-ide@11/ide@0/cmdk@1,0
       7. c7d1 <drive type unknown>
          /pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0

There is also a third disk erroring out:

       3. c5t9d1 <SS330055-99JJXXK-0001 cyl 60797 alt 2 hd 255 sec 63>
          /pci@0,0/pci1002,5a17@3/pci1000,9240@0/sd@9,1
I suspect c3d1 is an old OS mirror, given the low controller number.
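To try to confirm which pool a disk last belonged to, I was thinking of reading the ZFS labels straight off the slice with zdb, along these lines (assuming the label sits on s0 of the suspect disk):

```shell
# Dump the ZFS labels from the suspect disk; the 'name' and 'pool_guid'
# fields should show which pool the disk was last part of.
zdb -l /dev/dsk/c3d1s0
```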
When I select 1 or 7, I get a Segmentation fault, and get booted
out of the format utility. (If I select 3, the format utility never
comes back, freezing.) A zpool status shows:
  pool: Pool1
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool
        can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not
        support the features. See zpool-features(5) for details.
  scan: resilvered 2.78M in 0h0m with 0 errors on Tue Sep 16 14:11:00 2014
config:

        NAME      STATE     READ WRITE CKSUM
        Pool1     ONLINE       0     0     0
          c5t8d1  ONLINE       0     0     0

errors: No known data errors
  pool: data
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: resilvered 36.1M in 0h17m with 738 errors on Thu Oct 1 18:17:43 2015
config:

        NAME                     STATE     READ WRITE CKSUM
        data                     DEGRADED 20.6K     0     0
          mirror-0               DEGRADED 81.8K     0     0
            7152018192933189428  FAULTED      0     0     0  was /dev/dsk/c11t8d1s0
            c6d0                 ONLINE       0     0 81.8K

errors: 737 data errors, use '-v' for a list
(Doing a zpool status -v freezes the terminal.)
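To avoid losing the terminal again, I may try bounding the command and capturing its output, assuming GNU timeout is available on the box (it is a coreutils tool, not part of zpool):

```shell
# Run zpool status -v with a 30-second ceiling so a hang does not take
# the shell with it; output is saved to a file for later inspection.
timeout 30 zpool status -v data > /tmp/zpool-status-v.txt 2>&1
```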
The system has three disks connected to an LSI MegaRAID SAS 9240-8i
controller.
I suspect that disk 3 (c5t9d1) might be the detached mirror of
Pool1 (c5t8d1), but since I cannot work with it, I cannot verify
this. I have no idea how to deal with the data mirror. Should I just
detach /dev/dsk/c11t8d1s0 (7152018192933189428) and hope that c6d0
is clean enough for a decent scrub? Or is /dev/dsk/c11t8d1s0
(7152018192933189428) the disk with the less corrupted data? Not
being able to even get a listing (ls) of the data pool leaves me
very hesitant.
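If detaching does turn out to be the right move, I assume the sequence would be roughly this, using the GUID that zpool status printed for the faulted device:

```shell
# Drop the faulted half of the mirror by its GUID, then scrub what
# remains on c6d0 and watch the error counts.
zpool detach data 7152018192933189428
zpool scrub data
zpool status data
```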
Does anyone have any ideas on how to clean this up?
Thanks in advance, Rainer
--
Put your makeup on and fix your hair up pretty,
And meet me tonight in Atlantic City
        Bruce Springsteen
_______________________________________________
openindiana-discuss mailing list
[email protected]
http://openindiana.org/mailman/listinfo/openindiana-discuss