>>> Ken Gaillot <kgail...@redhat.com> wrote on 01.06.2016 at 16:14 in
>>> message
<574eede2.1090...@redhat.com>:
> On 06/01/2016 06:14 AM, Ulrich Windl wrote:
>> Hello!
>> 
>> I have a question:
>> Inspecting the XML of our cluster, I noticed that there are several IDs 
> ending with "last_0". So I wondered:
>> It seems those IDs are generated for start and stop operations, and I 
> discovered one case where an ID is duplicated (the status is for different 
> nodes, and one is a start operation, while the other is a stop 
> operation, however).
> 
> The "*_last_*" IDs simply refer to the last (= most recently executed)
> operation :)
> 
> Those IDs are not directly used by the cluster; they're just used to
> store the most recent operation in the CIB.
> 
>> Background: I wrote a program that extracts the runtimes of operations 
> from the CIB, like this:
>> prm_r00_fs_last_0 13464 stop
>> prm_r00_fs_last_0 61 start
>> prm_r00_fs_monitor_300000 34 monitor
>> prm_r00_fs_monitor_300000 43 monitor
>> 
>> The first word is the "id" attribute, the second is the "exec-time" 
> attribute, and the last one (added to help myself out of confusion) is the 
> "operation" attribute. Values are converted to milliseconds.
>> 
>> Is the name of the id intentional, or is it some mistake?
>> 
>> And another question: For an operation with "start-delay" it seems the start 
> delay is simply added to the queue time (as if the operation were waiting that 
> long). Is that intentional?
> 
> Yes. The operation is queued when it is received, and if it has a start
> delay, a timer is set to execute it at a later time. So the delay
> happens while the operation is queued.

Ken,

thanks for the answers. Is there a way to distinguish "intentional" from 
"non-intentional" queueing? One would want to look more closely at the 
non-intentional queueing.

Regards,
Ulrich

> 
>> Another program tried to extract queue and execution times for operations, 
> and the sorted result then looks like this:
>> 
>> 1 27 prm_nfs_home_exp_last_0 monitor
>> 1 39 prm_q10_ip_2_monitor_60000 monitor
>> 1 42 prm_e10_ip_2_monitor_60000 monitor
>> 1 58 prm_s01_ip_last_0 stop
>> 1 74 prm_nfs_cbw_trans_exp_last_0 start
>> 30001 1180 prm_stonith_sbd_monitor_180000 monitor
>> 30001 178 prm_c11_ascs_ers_monitor_60000 monitor
>> 30002 165 prm_c11_ascs_ers_monitor_45000 monitor
>> 
>> Regards,
>> Ulrich
> 
_______________________________________________
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
