OK, I integrated your service definition for the Amarok service into  
my own system.  Since I don't have amarok installed anywhere, I  
changed the process name that the HostResourceSwRunMonitor looks for  
from "amarokapp" to "mythbackend", and pointed it at a MythTV system  
that I've been building.  For reference, here is the definition as I  
tested it -- a from-memory sketch, so treat the exact syntax as  
approximate; aside from the service-name parameter it should match  
your Amarok version:
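
<!-- in poller-configuration.xml, inside the "example1" package -->
<service name="Amarok" interval="300000" user-defined="true" status="on">
   <!-- the process name to match against hrSWRunName -->
   <parameter key="service-name" value="mythbackend"/>
</service>

<!-- and outside the package element: -->
<monitor service="Amarok"
         class-name="org.opennms.netmgt.poller.monitors.HostResourceSwRunMonitor"/>

With the pollers set to DEBUG logging, here are the messages I see on  
a successful poll: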

2008-11-06 08:33:55,425 DEBUG [PollerScheduler-30 Pool-fiber1]  
PollableServiceConfig: Polling 1:192.168.23.7:Amarok using pkg example1
2008-11-06 08:33:55,425 DEBUG [PollerScheduler-30 Pool-fiber1]  
HostResourceSwRunMonitor: poll: service= SNMP address=  
AgentConfig[Address: /192.168.23.7, Port: 161, Community: public,  
Timeout: 3000, Retries: 1, MaxVarsPerPdu: 10, MaxRepititions: 2, Max  
request size: 65535, Version: 1, ProxyForAddress: null]
2008-11-06 08:33:55,425 DEBUG [PollerScheduler-30 Pool-fiber1]  
HostResourceSwRunMonitor: HostResourceSwRunMonitor.poll:  
SnmpAgentConfig address: AgentConfig[Address: /192.168.23.7, Port:  
161, Community: public, Timeout: 3000, Retries: 1, MaxVarsPerPdu: 10,  
MaxRepititions: 2, Max request size: 65535, Version: 1,  
ProxyForAddress: null]
2008-11-06 08:33:55,425 INFO  [PollerScheduler-30 Pool-fiber1]  
Snmp4JWalker: Walking HostResourceSwRunMonitor for /192.168.23.7 using  
version SNMPv1 with config: AgentConfig[Address: /192.168.23.7, Port:  
161, Community: public, Timeout: 3000, Retries: 1, MaxVarsPerPdu: 10,  
MaxRepititions: 2, Max request size: 65535, Version: 1,  
ProxyForAddress: null]
2008-11-06 08:33:55,427 DEBUG [PollerScheduler-30 Pool-fiber1]  
Snmp4JWalker: Sending tracker pdu of size 1
2008-11-06 08:33:55,430 DEBUG  
[DefaultUDPTransportMapping_192.168.23.22/0] Snmp4JWalker: Received a  
tracker pdu of type RESPONSE from /192.168.23.7 of size 1 errorStatus  
= Success errorIndex = 0

There are dozens of these "Sending tracker pdu" / "Received a tracker  
pdu" pairs as the monitor walks through the hrSWRunTable, and then:

2008-11-06 08:33:55,725 DEBUG [PollerScheduler-30 Pool-fiber1]  
HostResourceSwRunMonitor: poll: HostResourceSwRunMonitor poll  
succeeded, addr=192.168.23.7 service name=mythbackend value=mythbackend
2008-11-06 08:33:55,725 DEBUG [PollerScheduler-30 Pool-fiber1]  
PollableServiceConfig: Finish polling 1:192.168.23.7:Amarok using pkg  
example1 result =Up
2008-11-06 08:33:55,725 DEBUG [PollerScheduler-30 Pool-fiber1]  
PollableService: Finish Scheduled Poll of service  
1:192.168.23.7:Amarok, started at Thu Nov 06 08:33:55 EST 2008
2008-11-06 08:33:55,725 DEBUG [PollerScheduler-30 Pool-fiber1]  
LegacyScheduler: schedule: Adding ready runnable  
ScheduleEntry[expCode=1] for 1:192.168.23.7:Amarok (ready in 300000ms)  
at interval 300000
2008-11-06 08:33:55,725 DEBUG [PollerScheduler-30 Pool-fiber1]  
LegacyScheduler: schedule: queue element added, notification not  
performed
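
Incidentally, all of those tracker-PDU pairs are just the monitor  
doing a GETNEXT walk of hrSWRunName (.1.3.6.1.2.1.25.4.2.1.2) and  
comparing each returned value against the service-name parameter.  If  
you ever want to see exactly what an agent is handing back, you can  
reproduce the walk without OpenNMS at all -- net-snmp's

snmpwalk -v1 -c public 192.168.23.7 .1.3.6.1.2.1.25.4.2.1.2

does it, or, staying in Java, something like the rough standalone  
SNMP4J sketch below.  (This is hypothetical test code of my own, not  
anything from the OpenNMS tree.)

import java.util.List;

import org.snmp4j.CommunityTarget;
import org.snmp4j.Snmp;
import org.snmp4j.mp.SnmpConstants;
import org.snmp4j.smi.GenericAddress;
import org.snmp4j.smi.OID;
import org.snmp4j.smi.OctetString;
import org.snmp4j.smi.VariableBinding;
import org.snmp4j.transport.DefaultUdpTransportMapping;
import org.snmp4j.util.DefaultPDUFactory;
import org.snmp4j.util.TreeEvent;
import org.snmp4j.util.TreeUtils;

// Rough standalone sketch: walk hrSWRunName and print every process
// name the agent reports.  Not OpenNMS code.
public class HrSWRunWalk {
    public static void main(String[] args) throws Exception {
        // hrSWRunName -- the column of process names the monitor matches on
        OID hrSWRunName = new OID("1.3.6.1.2.1.25.4.2.1.2");

        Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
        snmp.listen();

        // v1/public with the same timeout/retries as the AgentConfig above
        CommunityTarget target = new CommunityTarget();
        target.setCommunity(new OctetString("public"));
        target.setAddress(GenericAddress.parse("udp:192.168.23.7/161"));
        target.setVersion(SnmpConstants.version1);
        target.setTimeout(3000);
        target.setRetries(1);

        TreeUtils walker = new TreeUtils(snmp, new DefaultPDUFactory());
        List<TreeEvent> events = walker.getSubtree(target, hrSWRunName);
        for (TreeEvent event : events) {
            if (event.isError()) {
                System.err.println("walk error: " + event.getErrorMessage());
                continue;
            }
            VariableBinding[] vbs = event.getVariableBindings();
            if (vbs == null) {
                continue;
            }
            for (VariableBinding vb : vbs) {
                // e.g. 1.3.6.1.2.1.25.4.2.1.2.2502 = mythbackend
                System.out.println(vb.getOid() + " = " + vb.getVariable());
            }
        }
        snmp.close();
    }
}

If the process name never shows up in that output, the monitor has  
nothing to match and the poll will report Down no matter how the  
service definition is written.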


After a successful poll, I stopped the mythbackend process on the  
managed system.  The poller caught it on the next poll cycle; here are  
the log messages:

2008-11-06 08:38:56,484 DEBUG [PollerScheduler-30 Pool]  
LegacyScheduler: run: found ready runnable ScheduleEntry[expCode=1]  
for 1:192.168.23.7:Amarok (ready in 0ms)
2008-11-06 08:38:56,485 DEBUG [PollerScheduler-30 Pool-fiber1]  
PollableService: Start Scheduled Poll of service 1:192.168.23.7:Amarok
2008-11-06 08:38:56,485 DEBUG [PollerScheduler-30 Pool-fiber1]  
PollableServiceConfig: Polling 1:192.168.23.7:Amarok using pkg example1
2008-11-06 08:38:56,485 DEBUG [PollerScheduler-30 Pool-fiber1]  
HostResourceSwRunMonitor: poll: service= SNMP address=  
AgentConfig[Address: /192.168.23.7, Port: 161, Community: public,  
Timeout: 3000, Retries: 1, MaxVarsPerPdu: 10, MaxRepititions: 2, Max  
request size: 65535, Version: 1, ProxyForAddress: null]
2008-11-06 08:38:56,485 DEBUG [PollerScheduler-30 Pool-fiber1]  
HostResourceSwRunMonitor: HostResourceSwRunMonitor.poll:  
SnmpAgentConfig address: AgentConfig[Address: /192.168.23.7, Port:  
161, Community: public, Timeout: 3000, Retries: 1, MaxVarsPerPdu: 10,  
MaxRepititions: 2, Max request size: 65535, Version: 1,  
ProxyForAddress: null]
2008-11-06 08:38:56,485 INFO  [PollerScheduler-30 Pool-fiber1]  
Snmp4JWalker: Walking HostResourceSwRunMonitor for /192.168.23.7 using  
version SNMPv1 with config: AgentConfig[Address: /192.168.23.7, Port:  
161, Community: public, Timeout: 3000, Retries: 1, MaxVarsPerPdu: 10,  
MaxRepititions: 2, Max request size: 65535, Version: 1,  
ProxyForAddress: null]
2008-11-06 08:38:56,488 DEBUG [PollerScheduler-30 Pool-fiber1]  
Snmp4JWalker: Sending tracker pdu of size 1
2008-11-06 08:38:56,492 DEBUG  
[DefaultUDPTransportMapping_192.168.23.22/0] Snmp4JWalker: Received a  
tracker pdu of type RESPONSE from /192.168.23.7 of size 1 errorStatus  
= Success errorIndex = 0

Again, there are many PDUs exchanged, followed by:

2008-11-06 08:38:56,831 DEBUG [PollerScheduler-30 Pool-fiber1]  
PollableServiceConfig: Finish polling 1:192.168.23.7:Amarok using pkg  
example1 result =Down
2008-11-06 08:38:56,831 INFO  [PollerScheduler-30 Pool-fiber1]  
PollableService: Changing status of PollableElement  
1:192.168.23.7:Amarok from Up to Down
2008-11-06 08:38:56,831 DEBUG [PollerScheduler-30 Pool-fiber1]  
LegacyScheduler: schedule: Adding ready runnable  
ScheduleEntry[expCode=2] for 1:192.168.23.7:Amarok (ready in 30000ms)  
at interval 30000
2008-11-06 08:38:56,831 DEBUG [PollerScheduler-30 Pool-fiber1]  
LegacyScheduler: schedule: queue element added, notification not  
performed
2008-11-06 08:38:56,831 DEBUG [PollerScheduler-30 Pool-fiber1]  
DefaultPollContext: createEvent: uei =  
uei.opennms.org/nodes/nodeLostService nodeid = 1
2008-11-06 08:38:56,832 DEBUG [PollerScheduler-30 Pool-fiber1]  
DefaultPollContext: openOutage: Opening outage for:  
1:192.168.23.7:Amarok with  
event:[EMAIL PROTECTED], uei:  
uei.opennms.org/nodes/nodeLostService, id: 0, isPending: true, list  
size: 0
2008-11-06 08:38:56,834 DEBUG [PollerScheduler-30 Pool-fiber1]  
PollableService: Finish Scheduled Poll of service  
1:192.168.23.7:Amarok, started at Thu Nov 06 08:38:56 EST 2008
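
One detail worth noticing in that excerpt: once the service went  
down, the scheduler began rescheduling it every 30000ms instead of  
the usual 300000ms.  That is the downtime model in  
poller-configuration.xml kicking in, not a bug.  Quoting the stock  
model from memory (double-check against your file):

<!-- inside the package, after the service definitions -->
<downtime interval="30000" begin="0" end="300000"/>
<downtime interval="300000" begin="300000" end="43200000"/>
<downtime interval="600000" begin="43200000" end="432000000"/>
<downtime begin="432000000" delete="true"/>

i.e. poll every 30 seconds for the first five minutes of an outage,  
every five minutes up to 12 hours, every ten minutes up to five days,  
and then delete the service.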

So on my system, your configuration appears to work as expected.   
That leaves the possibility that something is messed up on your end.   
With the pollers set to DEBUG logging, are you still seeing no  
mention of "Amarok", "amarokapp", or "HostResourceSwRunMonitor"  
anywhere in poller.log.*?
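
(If it helps: from memory, poller debug logging is enabled in  
$OPENNMS_HOME/etc/log4j.properties with a line along the lines of

log4j.category.OpenNMS.Poller=DEBUG, POLLER

but the category name may have shifted between releases, so grep the  
file for "Poller" rather than trusting my memory.)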

-jeff
