On 02/28/2012 07:26 PM, Kahlil Hodgson wrote:
> Hi Emmett,
>
> On Tue, 2012-02-28 at 18:18 -0800, Emmett Culley wrote:
>> I just had a very similar problem with a raid 10 array with four new
>> 1TB drives. It turned out to be the SATA cable.
> ...
>
>> All has been well for a week now.
>>
>> I should have tried replacing the cable first :-)
What's funny is WD is just being idiotic. Seagate does NOT have that
extended error checking. I have two Barracuda Green drives in an SBS 2k8
server on a SAS 6/iR and they work perfectly.
On 2/29/2012 3:05 PM, m.r...@5-cent.us wrote:
> Miguel Medalha wrote:
>> A few months ago I had an enormous
> A friend of mine has had a couple of strange problems with the RE (RAID)
> series of Caviars, which utilize the same mechanics as the non-RE
> Blacks. For software RAID, I would recommend that you stick with the
> non-RE versions because of differences in the firmware.
I would recommend the
On 03/01/2012 09:00 AM, Mark Roth wrote:
>
> Miguel Medalha wrote:
>> >
>> > A few months ago I had an enormous amount of grief trying to understand
>> > why a RAID array in a new server kept getting corrupted and suddenly
>> > changing configuration. After a lot of despair and head scratching
Miguel Medalha wrote:
>
> A few months ago I had an enormous amount of grief trying to understand
> why a RAID array in a new server kept getting corrupted and suddenly
> changing configuration. After a lot of despair and head scratching it
> turned out to be the SATA cables. This was a rack server
A few months ago I had an enormous amount of grief trying to understand
why a RAID array in a new server kept getting corrupted and suddenly
changing configuration. After a lot of despair and head scratching it
turned out to be the SATA cables. This was a rack server from Asus with
a SATA backplane
First off: if you are using the on-board BIOS RAID, turn it off. Secondly,
Black drives from WD intentionally put themselves into deep-cycle diags
every so often. This makes them impossible to use in hardware and FRAID
setups. I have 4 of them in RAID 10 under mdraid and I had to disable
BIOS RAID f
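The deep-recovery behaviour described above is what SCT Error Recovery Control (the feature WD markets as TLER on the RE drives) is meant to bound. A hedged sketch, not from the thread, of checking and capping it with smartmontools; `/dev/sdX` is a placeholder for the real device:

```shell
# Ask the drive whether it exposes SCT Error Recovery Control at all.
# Desktop Blacks often refuse this command, which is exactly why they
# drop out of arrays: a deep recovery cycle can stall one request for
# minutes, and md then fails the whole disk.
smartctl -l scterc /dev/sdX

# If supported, cap read and write recovery at 7 seconds. The argument
# is in units of 100 ms, so 70 means 7.0 s. This does not survive a
# power cycle on most drives, so it belongs in a boot script.
smartctl -l scterc,70,70 /dev/sdX
```

With a 7-second cap, md sees a failed request and reconstructs the sector from the mirror instead of kicking the member out.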
On Wed, 2012-02-29 at 14:21 +1100, Kahlil Hodgson wrote:
> > I had a problem like this once. In a heterogeneous array of 80 GB
> > PATA drives (it was a while ago), the one WD drive kept dropping out
> > like this. WD's diagnostic tool showed a problem, so I RMA'ed the
> > drive... only to disco
Hi Emmett,
On Tue, 2012-02-28 at 18:18 -0800, Emmett Culley wrote:
> I just had a very similar problem with a raid 10 array with four new
> 1TB drives. It turned out to be the SATA cable.
...
> All has been well for a week now.
>
> I should have tried replacing the cable first :-)
Ah yes. G
Hi Ellen,
On Tue, 2012-02-28 at 18:59 -0700, Ellen Shull wrote:
> On Tue, Feb 28, 2012 at 5:27 PM, Kahlil Hodgson
> wrote:
> > Now I start to get I/O errors printed on the console. Run 'mdadm -D
> > /dev/md1' and see the array is degraded and /dev/sdb2 has been marked as
> > faulty.
>
> I ha
On 02/28/12 5:57 PM, Kahlil Hodgson wrote:
> end_request: I/O error, dev sda, sector 8690896
> Buffer I/O error on device dm-0, logical block 1081344
> JBD2: I/O error detected when updating journal superblock for dm-0-8
> end_request: I/O error, dev sda, sector 1026056
there's no more info on those
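Errors like the ones quoted above can come from the platter or from the link. A first pass (hypothetical commands, not from the thread) is to pull the SMART attributes that distinguish the two: pending or reallocated sectors implicate the disk itself, while a climbing CRC error count usually means the cable or connector:

```shell
# Pull the three attributes that separate "dying disk" from "bad cable".
# Reallocated_Sector_Ct / Current_Pending_Sector rising  -> drive fault
# UDMA_CRC_Error_Count rising                            -> SATA link fault
smartctl -a /dev/sda | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|UDMA_CRC_Error_Count'
```

Given how this thread resolves for several posters, a nonzero and growing CRC count with clean sector counts would point straight at the cable.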
On 02/28/2012 04:27 PM, Kahlil Hodgson wrote:
> Hello,
>
> Having a problem with software RAID that is driving me crazy.
>
> Here's the details:
>
> 1. CentOS 6.2 x86_64 install from the minimal iso (via pxeboot).
> 2. Reasonably good PC hardware (i.e. not budget, but not server grade either)
>
On Tue, Feb 28, 2012 at 5:27 PM, Kahlil Hodgson
wrote:
> Now I start to get I/O errors printed on the console. Run 'mdadm -D
> /dev/md1' and see the array is degraded and /dev/sdb2 has been marked as
> faulty.
I had a problem like this once. In a heterogeneous array of 80 GB
PATA drives (it was a while ago), the one WD drive kept dropping out
like this. WD's diagnostic tool showed a problem, so I RMA'ed the
drive...
On Tue, 2012-02-28 at 20:30 -0500, Luke S. Crawford wrote:
> On Wed, Feb 29, 2012 at 11:27:53AM +1100, Kahlil Hodgson wrote:
> > Now I start to get I/O errors printed on the console. Run 'mdadm -D
> > /dev/md1' and see the array is degraded and /dev/sdb2 has been marked as
> > faulty.
>
what I/O errors?
On Wed, Feb 29, 2012 at 11:27:53AM +1100, Kahlil Hodgson wrote:
> Now I start to get I/O errors printed on the console. Run 'mdadm -D
> /dev/md1' and see the array is degraded and /dev/sdb2 has been marked as
> faulty.
what I/O errors?
> So I start again and repeat the install process very c
Hi Keith,
On Tue, 2012-02-28 at 16:43 -0800, Keith Keller wrote:
> One thing you can try is to download WD's drive tester and throw it at
> your drives. It seems unlikely to find anything, but you never know.
> The tester is available on the UBCD bootable CD image (which has lots of
> other handy
Hi Scott,
On Tue, 2012-02-28 at 16:48 -0800, Scott Silva wrote:
> First thing... Are they green drives? Green drives power down randomly
> and can cause these types of errors...
These are 'Black' drives.
> Also, maybe the 6 Gb/s SATA isn't fully supported
> by Linux and that board... Try the
On 2012-02-29, Kahlil Hodgson wrote:
>
> 2. Reasonably good PC hardware (i.e. not budget, but not server grade either)
> with a pair of 1TB Western Digital SATA3 Drives.
One thing you can try is to download WD's drive tester and throw it at
your drives. It seems unlikely to find anything, but you never know.
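An in-place alternative to booting WD's tester from UBCD (my suggestion, not Keith's): the drive's own SMART self-test exercises a similar internal surface scan without taking the box down:

```shell
# Start the extended (long) self-test; it runs inside the drive and
# typically takes a few hours on a 1 TB disk.
smartctl -t long /dev/sdX

# Poll this later; a healthy drive reports the run as completed
# without error, a failing one reports the first bad LBA.
smartctl -l selftest /dev/sdX
```

Note that, as with the RMA story earlier in the thread, a clean self-test does not rule out the cable, since the test never touches the SATA link.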
on 2/28/2012 4:27 PM Kahlil Hodgson spake the following:
> Hello,
>
> Having a problem with software RAID that is driving me crazy.
>
> Here's the details:
>
> 1. CentOS 6.2 x86_64 install from the minimal iso (via pxeboot).
> 2. Reasonably good PC hardware (i.e. not budget, but not server grade either)
Hello,
Having a problem with software RAID that is driving me crazy.
Here's the details:
1. CentOS 6.2 x86_64 install from the minimal iso (via pxeboot).
2. Reasonably good PC hardware (i.e. not budget, but not server grade either)
with a pair of 1TB Western Digital SATA3 Drives.
3. Drives are p