On 2007-12-08T15:12:47, Alan Robertson [EMAIL PROTECTED] wrote:
This may be preferable to needing to duplicate this in a home-grown
fashion. A lint-like tool is still a good idea, but it should be built
on top of this, IMHO.
/me redirects this whining into /dev/null
I'm not whining.
On 2007-12-06T10:07:25, Andrew Beekhof [EMAIL PROTECTED] wrote:
I mean, if clone:0 fails on node_a, I want to bump the fail count for
clone:1/node_a at the same time.
Or is there a good way to achieve the above behavior without clones?
Not sure if that's possible yet.
A good idea, though.
Let me get back to you with a definitive answer.
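To make the request concrete: the behavior being asked for is that a failure of one clone instance on a node also bumps the fail count of the sibling instances on that node. The following is a toy sketch of those semantics only; the class and method names are made up for illustration, and this is not how Pacemaker itself stores fail counts (it tracks them per instance as node attributes).

```python
from collections import defaultdict

class FailCounts:
    """Toy fail-count tracker with the propagation semantics requested above."""

    def __init__(self, clone_name, num_instances):
        # Instance names follow the clone:0, clone:1, ... convention.
        self.instances = [f"{clone_name}:{i}" for i in range(num_instances)]
        # counts[node][instance] -> integer fail count
        self.counts = defaultdict(lambda: defaultdict(int))

    def record_failure(self, instance, node):
        # The key point: a failure of any one instance bumps the count
        # for ALL sibling instances on the same node, not just `instance`.
        for inst in self.instances:
            self.counts[node][inst] += 1

    def fail_count(self, instance, node):
        return self.counts[node][instance]

fc = FailCounts("clone", 2)
fc.record_failure("clone:0", "node_a")
print(fc.fail_count("clone:1", "node_a"))  # -> 1, bumped alongside clone:0
```

The propagation in `record_failure` is exactly the step Pacemaker did not perform at the time of this thread; each instance's count normally moves independently.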
Hi,
Michael Kershaw wrote:
The question lies, I believe, in the configuration of the EvmsSCC
resource. In order for my ocfs2 volumes to mount on boot, I need
evms_activate to run. Everything I've been able to come up with so
far points to this resource. Am I on the right track here?
Thanks Ivan. That was exactly what I was looking for. And of course
I know EVMS has nothing to do with OCFS2, but I do have OCFS2 file
systems on EVMS-maintained volumes, hence the reason for needing
evms_activate to be run on boot. 'Twas just the last little piece I
was lacking.
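For readers landing on this thread later: running evms_activate at boot via the cluster is typically done with an EvmsSCC primitive. A minimal sketch of what that might look like in a Heartbeat 2.x CIB follows; the `id` values and monitor interval are placeholders, and this assumes the ocf:heartbeat:EvmsSCC agent shipped with Heartbeat.

```xml
<primitive id="rsc_evms" class="ocf" provider="heartbeat" type="EvmsSCC">
  <operations>
    <op id="rsc_evms_mon" name="monitor" interval="120s"/>
  </operations>
</primitive>
```

In practice this primitive would usually be wrapped in a clone, so that evms_activate runs on every node that may mount the OCFS2 volumes.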
On Dec 10, 2007, at 3:36 AM, Junko IKEDA wrote:
I mean, if clone:0 fails on node_a, I want to bump the fail count for
clone:1/node_a at the same time.
Or is there a good way to achieve the above behavior without clones?
Not sure if that's possible yet.
A good idea, though.
Let me get back to you with a definitive answer.