Brad Hill wrote:
I've seen reports of a recent Seagate firmware update
bricking drives again.
What's the output of 'zpool import' from the LiveCD?
It sounds like more than 1 drive is dropping off.
r...@opensolaris:~# zpool import
pool: tank
id: 16342816386332636568
state: FAULTED
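For a FAULTED pool like the one above, the usual sequence of import attempts can be sketched as follows (pool name taken from the thread; note the `-F` recovery option only exists on builds newer than the snv_101b LiveCD mentioned later in the thread, so treat its availability as an assumption):

```shell
# List importable pools and their status (read-only scan, changes nothing)
zpool import

# Forced import; -f overrides the "pool was in use on another system" check
zpool import -f tank

# On newer OpenSolaris builds only: -F rolls the pool back to the last
# good transaction group; -n first shows what -F would do without doing it
zpool import -nF tank
zpool import -F -f tank
```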
Take the new disk out as well; a foreign or bad non-zero disk label may cause
trouble too.
I've experienced tool core dumps with a foreign disk (partition) label, which
might be the case if it is a recycled replacement disk (in my case, fixed by
plugging the disk into a Linux desktop and "blanking" it).
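The "blanking" step mentioned above can be sketched like this on a Linux box. The device name /dev/sdb is an assumption; verify it carefully first, since dd to the wrong disk is destructive. ZFS (and EFI) keep backup labels at the end of the disk as well as the front, so zero both ends:

```shell
DISK=/dev/sdb   # ASSUMPTION: substitute the recycled disk; double-check first!

# Zero the first few MB (partition table plus front ZFS/EFI labels)
dd if=/dev/zero of=$DISK bs=1M count=10

# Zero the last few MB too, where ZFS keeps its backup labels
SIZE=$(blockdev --getsz $DISK)   # disk size in 512-byte sectors
dd if=/dev/zero of=$DISK bs=512 seek=$((SIZE - 20480)) count=20480
```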
Yes. I have disconnected the bad disk and booted with nothing in the slot, and
also with known good replacement disk in on the same sata port. Doesn't change
anything.
Running 2008.11 on the box and 2008.11 snv_101b_rc2 on the LiveCD. I'll give it
a shot booting from the latest build and see if that helps.
Just a thought, but have you physically disconnected the bad disk? It's not
unheard of for a bad disk to cause problems with others.
Failing that, it's the "corrupted data" bit that's worrying me; it sounds like
you may have other corruption on the pool (always a risk with single-parity
raid).
I do, thank you. The disk that went out sounds like it had a head crash or some
such - loud clicking shortly after spin-up then it spins down and gives me
nothing. BIOS doesn't even detect it properly to do a firmware update.
> Do you know 7200.11 has firmware bugs?
>
> Go to seagate website
This is outside the scope of my knowledge/experience. Maybe there is
now a core file you can examine? That might help you at least see
what's going on?
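The core-file suggestion above can be followed with the standard OpenSolaris inspection tools; a sketch, assuming the core was dropped in the working directory as `core` (check `coreadm` for the actual location pattern on the box):

```shell
# Show where core files are written on this system
coreadm

# Quick stack trace of the dumped process
pstack core

# Identify which binary dumped
file core

# Deeper inspection with the modular debugger
mdb core
# then at the mdb prompt:
#   ::status    - signal and faulting address
#   ::stack     - stack trace
#   $q          - quit
```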
On Tue, Jan 27, 2009 at 10:32 PM, Brad Hill wrote:
> r...@opensolaris:~# zpool import -f tank
> internal error: Bad exchange descriptor
> Abort
r...@opensolaris:~# zpool import -f tank
internal error: Bad exchange descriptor
Abort (core dumped)
Hoping someone has seen that before... the Google is seriously letting me down
on that one.
> I guess you could try 'zpool import -f'. This is a
> pretty odd status,
> I think. I'm pretty sure
I guess you could try 'zpool import -f'. This is a pretty odd status,
I think. I'm pretty sure raidz1 should survive a single disk failure.
Perhaps a more knowledgeable list member can explain.
On Sat, Jan 24, 2009 at 12:48 PM, Brad Hill wrote:
>> I've seen reports of a recent Seagate firmware
Do you know the 7200.11 has firmware bugs?
Go to the Seagate website to check.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Any ideas on this? It looks like a potential bug to me, or there is something
that I'm not seeing.
Thanks again!
> I've seen reports of a recent Seagate firmware update
> bricking drives again.
>
> What's the output of 'zpool import' from the LiveCD?
> It sounds like
> more than 1 drive is dropping off.
r...@opensolaris:~# zpool import
pool: tank
id: 16342816386332636568
state: FAULTED
status: The p
I've seen reports of a recent Seagate firmware update bricking drives again.
What's the output of 'zpool import' from the LiveCD? It sounds like
more than 1 drive is dropping off.
On Thu, Jan 22, 2009 at 10:52 PM, Brad Hill wrote:
>> I would get a new 1.5 TB and make sure it has the new
>> fi
> I would get a new 1.5 TB and make sure it has the new
> firmware and replace
> c6t3d0 right away - even if someone here comes up
> with a magic solution, you
> don't want to wait for another drive to fail.
The replacement disk showed up today but I'm unable to replace the one marked
UNAVAIL:
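Once the pool will import, the replace itself is the usual raidz1 procedure; a sketch using the device name c6t3d0 from earlier in the thread (the new disk's target name is an assumed example):

```shell
# If the new disk went into the same slot/target as the failed one:
zpool replace tank c6t3d0

# If it shows up under a different target, name both old and new devices:
zpool replace tank c6t3d0 c6t4d0   # c6t4d0 is an assumed example name

# Watch resilver progress
zpool status -v tank
```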