Hi Dejan,

It has been recommended that required options should not have default 
values. The initial version of the script had a default for that 
variable, but chrooted_path was not required. During the revisions 
suggested by Andreas and Florian, chrooted_path was converted into a 
required, unique variable with no default.
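
For reference, a required, unique parameter with no default ends up 
declared in the meta-data along these lines (the descriptions below are 
abbreviated placeholders, so treat this as a sketch rather than the 
agent's exact text):

  <parameter name="chrooted_path" unique="1" required="1">
    <longdesc lang="en">
    Directory the dhcpd daemon is chrooted into. No default is set,
    since the parameter is required.
    </longdesc>
    <shortdesc lang="en">dhcpd chroot path</shortdesc>
    <content type="string" />
  </parameter>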

As for an ocft test case, I've not had a chance to look into it in 
enough detail to create one yet. I'm hoping to be able to look it over 
this weekend.
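
From a quick look at the ocft files that ship with resource-agents, I 
would expect a minimal case for this agent to look roughly like the 
following (the agent name, package name and expected return codes are 
guesses on my part and still need verifying):

  # dhcpd

  CONFIG
      Agent dhcpd
      AgentRoot /usr/lib/ocf/resource.d/heartbeat
      InstallPackage dhcp

  CASE-BLOCK required_args
      Env OCF_RESKEY_chrooted_path=/var/lib/dhcp

  CASE-BLOCK default_status
      AgentRun stop

  CASE-BLOCK prepare
      Include required_args
      Include default_status

  CASE "check base env"
      Include prepare
      AgentRun validate-all OCF_SUCCESS

  CASE "missing required 'OCF_RESKEY_chrooted_path'"
      Include prepare
      Unenv OCF_RESKEY_chrooted_path
      AgentRun validate-all OCF_ERR_CONFIGURED

  CASE "normal start and stop"
      Include prepare
      AgentRun start OCF_SUCCESS
      AgentRun monitor OCF_SUCCESS
      AgentRun stop OCF_SUCCESS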

On 12/09/2011 01:30 AM, Dejan Muhamedagic wrote:
> Hi,
>
> On Tue, Dec 06, 2011 at 01:39:04PM -0400, Chris Bowlby wrote:
>> Hi All,
>>
>>    Ok, I'll look into csync2, and will concede the point on the RA syncing
>> the out-of-chroot configuration file.
>>
>> I still need to find a means to monitor the DHCP responses, however, as
>> that would improve the reliability of both the cluster itself and the
>> service.
> I'm really not sure how to do that.
>
> Didn't review the agent, but on a cursory look, perhaps you could
> provide the default for chrooted_path (/var/lib/dhcp).
>
> BTW, did you think of adding an ocft test case?
>
> Thanks,
>
> Dejan
>
>> On 12/06/2011 12:20 PM, Alan Robertson wrote:
>>> I agree about avoiding the feature to sync config files.  My typical
>>> recommendation is to use drbdlinks and put it on replicated or shared
>>> storage.  In fact, I do that at home, and am doing it for a current
>>> customer.
>>>
>>> By the way, Sean has recently revised drbdlinks to support the OCF
>>> API.  (In fact, it supports all of the OCF, heartbeat-v1 and LSB APIs).
>>>
>>> http://www.tummy.com/Community/software/drbdlinks/
>>>
>>> You can find his source control for it on github:
>>>        https://github.com/linsomniac/drbdlinks
>>>
>>>
>>>
>>>
>>> Quoting Florian Haas<flor...@hastexo.com>:
>>>
>>>> On Tue, Dec 6, 2011 at 4:44 PM, Dejan Muhamedagic<de...@suse.de>   wrote:
>>>>> Hi,
>>>>>
>>>>> On Tue, Dec 06, 2011 at 10:59:20AM -0400, Chris Bowlby wrote:
>>>>>> Hi Everyone,
>>>>>>
>>>>>>     I would like to thank Florian, Andreas and Dejan for making
>>>>>> suggestions and pointing out some additional changes I should make. At
>>>>>> this point the following additional changes have been made:
>>>>>>
>>>>>> - The test in the validation function for ocf_is_probe has been
>>>>>> reversed to ! ocf_is_probe, and the "test"/"[ ]" wrappers removed, to
>>>>>> ensure the validation does not occur if the partition is not mounted or
>>>>>> during a probe.
>>>>>> - An extraneous return code has been removed from the "else" clause of
>>>>>> the probe test, to ensure the rest of the validation can finish.
>>>>>> - The call to the DHCPD daemon itself during the start phase has been
>>>>>> wrapped with the ocf_run helper function, so that it is handled in a
>>>>>> somewhat standardized way.
>>>>>>
>>>>>> The first two changes corrected the "Failed Action... Not installed"
>>>>>> issue on the secondary node, as well as the fail-over itself. I've been
>>>>>> able to fail over to secondary and primary nodes multiple times and the
>>>>>> service follows the rest of the grouped services.
>>>>>>
>>>>>> There are a few things I'd like to add to the script, now that the main
>>>>>> issues/code changes have been addressed, and they are as follows:
>>>>>>
>>>>>> - Add a means of copying /etc/dhcpd.conf from node1 to node2...nodeX
>>>>>> from within the script. The logic behind this is as follows:
>>>>> I'd say that this is the admin's responsibility. There are tools such
>>>>> as csync2 which can deal with that. Doing it from the RA is
>>>>> possible, but definitely very error prone and I'd be very
>>>>> reluctant to do that. Note that we have many RAs which keep
>>>>> additional configuration in a file, and none of them tries to keep
>>>>> copies of that configuration in sync itself.
>>>> Seconded. Whatever configuration doesn't live _in_ the CIB proper is
>>>> not Pacemaker's job to replicate. The admin gets to either sync files
>>>> manually across the nodes (csync2 greatly simplifies this; no need to
>>>> reinvent the wheel), or put the config files on storage that's
>>>> available to all cluster nodes.
>>>>
>>>> Cheers,
>>>> Florian

_______________________________________________________
Linux-HA-Dev: Linux-HA-Dev@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha-dev
Home Page: http://linux-ha.org/
