[zfs-discuss] Re: Re: Unbootable system recovery

2006-11-13 Thread Ewen Chan
Matt:

What's your contact information so that I can send that information to you?

My apologies for taking so long to get back to this.

Sincerely,
Ewen
 
 


[zfs-discuss] Re: Re: Unbootable system recovery

2006-10-07 Thread Ewen Chan
Well, the drives technically didn't malfunction.

Like I said, the reason I had to pull the drives out is that 70 lbs is a little 
TOO much for me to lift.

The drives aren't more than 3 weeks old, with a DOM of Jul 2006.

Is there anything I can do to find out how the system was scanning the drives? 
(As I recall, c0t7d0 was listed as the first device during the installation.) Is 
there a way to look at the order in which the drives were brought online, so that 
I could correlate that to the drive/port map on the controller?

I am banking on it being SOMETHING related to when I had to plug the drives 
back in after moving the unit, because I didn't tag the individual cables for 
the drives.
 
 


Re: [zfs-discuss] Re: Re: Unbootable system recovery

2006-10-05 Thread Matthew Ahrens

Ewen Chan wrote:

However, in order for me to lift the unit, I needed to pull the
drives out so that it would actually be moveable, and in doing so, I
think that the drive-cable-port allocation/assignment has
changed.


If that is the case, then ZFS would automatically figure out the new 
mapping.  (Of course, there could be an undiscovered bug in that code.)
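If you want to double-check, every disk in the pool carries a ZFS label with 
the pool and vdev GUIDs, and it is those GUIDs (not the /dev/dsk paths) that 
the import code matches on.  Something along these lines should show it; the 
s0 below assumes the disks were given to ZFS whole (EFI label), so adjust the 
slice if yours differ:

  # dump the ZFS labels (pool name, pool GUID, vdev GUID) from one disk
  zdb -l /dev/rdsk/c0t7d0s0

  # scan the disks and show which pools are visible for import
  zpool import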


--matt


[zfs-discuss] Re: Re: Unbootable system recovery

2006-10-05 Thread Akhilesh Mritunjai
Hi,

Like Matt said, unless there is a bug in the code, ZFS should automatically 
figure out the drive mappings. The real problem, as I see it, is using 16 drives 
in a single raidz... which means that if two drives malfunction, you're out of 
luck. (raidz2 would survive 2 failed drives... but I still believe 16 drives in 
one vdev is too many.)
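Just to illustrate (the pool and device names below are made up): with 16 drives 
I would rather build something like two 8-drive raidz2 vdevs in one pool. You 
give up more space to parity, but each vdev can then lose two drives:

  # hypothetical layout: two 8-disk raidz2 vdevs in a single pool
  zpool create tank \
      raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0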

May I suggest you re-check the cabling, since a drive going bad might be related 
to that... or even try changing the power supply (I got burnt that way). It might 
just be an intermittent drive malfunction. You might also surface-scan the drives 
to rule out bad sectors.
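A couple of things that might help with that (the output format varies a bit 
between drivers):

  # per-device error counters plus vendor, product and serial number;
  # the serials also tell you which physical drive landed on which port
  iostat -En

  # format's analyze/read test does a non-destructive surface scan
  # (interactive: pick the disk, then "analyze", then "read")
  format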

Good luck :)

PS: When you get your data back, do switch to raidz2 or a mirrored config that 
can survive the loss of more than one disk. My experience (which is not much) is 
that it doesn't take much to knock out more than one disk out of 20 or so... 
especially when moving them.

- Akhilesh
 
 