On Mon, Feb 22, 2010 at 3:14 PM, Benjamin Scott dragonh...@gmail.com wrote:
On Mon, Feb 22, 2010 at 1:39 PM, Michael ODonnell
michael.odonn...@comcast.net wrote:
So far, then, it's looking like every Sunday at 4:22 all the RAIDs
(all types or just RAID1?) in standard x86_64 CentOS5.4 (and
On Tue, Feb 23, 2010 at 9:40 AM, Tom Buskey t...@buskey.name wrote:
... patrol reads ...
The correct terminology is a scrub.
Dell and LSI Logic call it patrol read.
I believe I've seen Adaptec call it consistency check, although
that was a long time ago.
What makes your terminology
On Mon, Feb 22, 2010 at 6:40 PM, Benjamin Scott dragonh...@gmail.com wrote:
On Mon, Feb 22, 2010 at 6:05 PM, Bill McGonigle b...@bfccomputing.com
wrote:
1) retail brand names aren't terribly useful; they change vendors fairly
often
And I bet you can find online forums with people who
I executed commands as they would have been during the cron.weekly run and
I can now see why our simple monitor script would conclude the RAID had
a problem based on the resultant contents of /proc/mdstat. During the
check operation the RAID state is described as "clean, resyncing"
by mdadm and I
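A minimal sketch of the distinction the poster ran into: a monitor should key off the member-status field rather than the "resyncing" wording. The helper name mdstat_health and the md0 sample are my own illustrations, not from the thread; the point is that a healthy array mid-check still shows every member as U in the [UU] field, while a failed member appears as an underscore.

```shell
#!/bin/sh
# Hypothetical helper (name is mine, not from the thread): judge array
# health from /proc/mdstat-style text. A running "check" shows
# resync-style progress, but the array is healthy as long as every
# member slot in the [UU] field reads U; an underscore marks a failed
# member.
mdstat_health() {
    # $1: the /proc/mdstat lines for one array
    if printf '%s\n' "$1" | grep -q '\[[U_]*_[U_]*\]'; then
        echo DEGRADED
    else
        echo OK
    fi
}

# A healthy RAID1 in the middle of a weekly check still reports OK:
sample='md0 : active raid1 sdb1[1] sda1[0]
      1048512 blocks [2/2] [UU]
      [==>..................]  check = 12.5% (131072/1048512)'
mdstat_health "$sample"    # prints "OK"
```

A naive script that merely greps for resync activity would have flagged the same array as faulty, which is exactly the false alarm described above.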
On Tue, Feb 23, 2010 at 9:53 AM, Benjamin Scott dragonh...@gmail.com wrote:
On Tue, Feb 23, 2010 at 9:40 AM, Tom Buskey t...@buskey.name wrote:
... patrol reads ...
The correct terminology is a scrub.
Dell and LSI Logic call it patrol read.
I believe I've seen Adaptec call it
In finest NIH form we could deal with the scrubber/patrol terminology
question by inventing a new acronym. How about GRIDLEBYRF, for
Gratuitous Reads Intended to Detect Latent Errors Before You're Royally
Fscked? FWIW, back around 2003 I wrote such logic for an early release
of MD on Red Hat 9
On Tue, Feb 23, 2010 at 11:51 AM, Michael ODonnell
michael.odonn...@comcast.net wrote:
FWIW, back around 2003 I wrote such logic for an early release
of MD on Red Hat 9 and we called it a scrubber, though I'm not sure who
came up with that term or why...
Probably because it was common
On 02/22/2010 06:28 PM, Benjamin Scott wrote:
However, looking at the difference above, I think they're different
*in the wrong way*. It looks like one of the disks specifies
(hd0,0)/grub/grub.conf while the other just specifies
/grub/grub.conf. That doesn't seem right.
Looking at one of
Commanding a "check" to the md device is ordinarily a read-only
operation, despite the "resyncing" terminology in the log.
During the md check operation, the array is clean (not degraded)
and you can see that explicitly with the [UU] status report; if
the array were degraded the failed
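The read-only check described above is driven through the md sysfs interface. A sketch of starting one by hand, assuming an array named md0 (which may not exist on a given host) and root privileges; the sync_action attribute and its "check"/"repair" values are standard md sysfs, but the guard and messages are my own:

```shell
#!/bin/sh
# Sketch, assuming an array named /dev/md0; needs root to actually
# start a scrub. Writing "check" to sync_action requests a read-only
# verify pass; "repair" is the variant that rewrites data.
action=/sys/block/md0/md/sync_action
if [ -w "$action" ]; then
    echo check > "$action"
    cat "$action"                  # reports "check" while the pass runs
    grep -A 2 '^md0' /proc/mdstat  # progress bar during the pass
else
    echo "no writable md0 sync_action on this host; skipping"
fi
```

This is the same operation the CentOS cron.weekly job triggers, just invoked manually so the resulting /proc/mdstat output can be inspected at leisure.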
On 23-Feb-2010, Tom Buskey t...@buskey.name sent:
The Series 2 TiVos need a USB Ethernet adapter. Since they run
Linux on non-x86 hardware (PPC or MIPS; I have both) and TiVo controls
what gets installed, you have to be picky.
Some Series 2 TiVos have on-board Ethernet, the TCD649080 and
TCD649180, at
On Tue, Feb 23, 2010 at 2:01 PM, Michael Bilow
mik...@colossus.bilow.com wrote:
During the md check operation, the array is clean (not degraded)
and you can see that explicitly with the [UU] status report ...
Of course, mdstat still calls the array clean even after
mismatches are detected,
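One way to surface what mdstat hides: after a check completes, the mismatch_cnt sysfs attribute holds the count of sectors whose copies disagreed, even while /proc/mdstat still describes the array as clean. A sketch, again assuming an array named md0 (mismatch_cnt is a real md sysfs attribute; the wording of the messages is mine):

```shell
#!/bin/sh
# Sketch, assuming an array named md0. mdstat alone won't reveal
# mismatches found by a completed "check"; read mismatch_cnt instead.
cnt_file=/sys/block/md0/md/mismatch_cnt
if [ -r "$cnt_file" ]; then
    cnt=$(cat "$cnt_file")
    if [ "$cnt" -ne 0 ]; then
        echo "md0: $cnt mismatched sectors; a 'repair' pass may be warranted"
    else
        echo "md0: no mismatches recorded"
    fi
else
    echo "no md0 array on this host; skipping"
fi
```

A monitoring script could alert on a nonzero count here rather than on the transient "resyncing" state that caused the false alarm earlier in the thread.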
On Tue, February 23, 2010 5:43 pm, Benjamin Scott wrote:
While I run smartd in monitor mode, I've never had it give me a
useful pre-failure alert. Likewise, I've never had the SMART health check
in PC BIOSes give me a useful pre-failure alert. More than once I've seen
SMART report the
On Tue, Feb 23, 2010 at 6:05 PM, Ken D'Ambrosio k...@jots.org wrote:
Huh -- I actually *have* had SMART tell me things were awry, several
times.
Well, that's good to know. :)
Just curious, did you get a chance to see if any of them actually
started failing soon after?
Like I said, I
On 2010-02-23 at 17:43 -0500, Benjamin Scott wrote:
On Tue, Feb 23, 2010 at 2:01 PM, Michael Bilow
mik...@colossus.bilow.com wrote:
During the md check operation, the array is clean (not degraded)
and you can see that explicitly with the [UU] status report ...
Of course, mdstat still calls