2011/2/1 nap <napar...@gmail.com>
> Hi,
>
> Thanks for this thread, it seems interesting :)
>
it's always interesting when brains/ideas compete with each other ;)
On Mon, Jan 31, 2011 at 11:42 PM, Hartmut Goebel <h.goe...@goebel-consult.de> wrote:
>
>> Am 31.01.2011 23:20, schrieb Gerhard Lausser:
>> > a single format for both python-based checks and traditional plugins
>> > requires converting the output and exitcode of the latter. This probably
>> > adds a lot more cycles.
>> While you are right here at the moment, this may change over time.
>>
>> The current interface is simple but not performant. While the check
>> builds a string, some backend tools need to parse it again. If Shinken
>> gets more mature, there may be more backend tools able to handle
>> Python objects and gain performance.
>>
> I don't think we should parse/manage perf data. It's not the Shinken core's
> job. Perfdata are there to be exported, by a broker module or a reactionner
> call (event handler or module). So instead of adding work to the scheduler,
> let's keep the string as it is now, and let the CPU cycles be spent where
> it's very, very easy to scale, in the reactionners :)
> (and it means less code for us :p )
>
Here I side with Jean: having checks/plugins in native Python is one
question; having the perfdata results of these native calls also returned in
a native Python format/object is another one. So this specific question can
be left aside for now, I think (although I find the reasons advanced for it
honest).
( On the same matter, one thing I personally dislike quite strongly about the
Nagios perfdata syntax is that plugin checks have to return the min/max
perfdata thresholds in their output.. I find this unneeded and overcomplex,
because most of the time (> 99% ?) these values are always the same (and are
even provided to the script by Nagios itself), and could thus be handled at
another, higher level.. )
>> > In order to be usable with Nagios, there must be something like
>> > if __name__ == '__main__':
>> >     # runs as a standalone python program
>> >     plugin = Class...
>> >     print plugin.output + '|' + plugin.perfdata
>> >     sys.exit(plugin.exitcode)
>> >
>> > so the file can be "imported" and cached by a Shinken poller and
>> > executed by a Nagios (similar to ePN).
>> I missed describing this. This is exactly what the test() method is
>> meant for.
>>
> Why a test? If we have a wrapper script that imports/calls it, we just
> need a standard Execute function?
>
>
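( To make the dual-mode idea above concrete, here is a rough sketch of a check that a poller could import while Nagios runs it standalone. The class name and the attributes — CheckDisk, output, perfdata, exitcode, execute() — are assumptions for illustration, not an agreed interface: )

```python
# Sketch of a dual-mode check: importable by a Shinken poller,
# runnable standalone under Nagios. All names here (CheckDisk,
# output, perfdata, exitcode, execute) are illustrative assumptions.
import sys

class CheckDisk(object):
    def __init__(self):
        self.output = ''
        self.perfdata = ''
        self.exitcode = 3  # UNKNOWN until the check has run

    def execute(self):
        # real check logic would go here; we fake an OK result
        self.output = 'DISK OK - 42% used'
        self.perfdata = 'used=42%;80;90;0;100'
        self.exitcode = 0
        return self.exitcode

if __name__ == '__main__':
    # standalone mode, as Nagios would run it
    plugin = CheckDisk()
    plugin.execute()
    print(plugin.output + ' | ' + plugin.perfdata)
    sys.exit(plugin.exitcode)
```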
>> >> * The performance data will be a dict. Keys are strings, values are
>> >> either integer, float or string (percent), or a list/tuple thereof.
>> > What about min/max/thresholds?
>> I thought, simple lists of values are enough. Am I missing something?
>>
> If we do not parse perfdata, it's solved :)
>
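( Just to picture Hartmut's proposed native structure before it was set aside: a dict with string keys and int/float/percent-string values, or lists/tuples of those, which a reactionner or broker module could flatten into the classic string at export time: )

```python
# Illustration of the proposed native perfdata structure: string keys;
# values are int, float, a percent string, or a list/tuple thereof.
perfdata = {
    'load': [0.5, 0.4, 0.3],   # e.g. load1/load5/load15
    'users': 3,
    'disk_used': '42%',
}

# ...and the flat string a reactionner/broker module could export,
# keeping the parsing cost out of the scheduler as Jean suggests:
flat = ' '.join('%s=%s' % (k, ','.join(str(x) for x in v)
                           if isinstance(v, (list, tuple)) else v)
                for k, v in sorted(perfdata.items()))
print(flat)
```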
> One thing is the format. I think it should be like the broker modules.
> The main thing is how we call them. I think a good way is:
> * load the "module" in the poller object. The module has a module_type,
> like "nrpe"
> * for the command call, we can "call" this module. So it will be a standard
> Check with just an additional property.
>
> define command {
>     command_name    SpeedNrpe
>     command_line    check_nrpe -H $HOSTADDRESS$ -u -t 9 -c $ARG1$
>     module          nrpe
> }
>
> So in the Arbiter we do not even need to know about the 'nrpe' code/module.
> It's code for the poller, not for the arbiter nor the scheduler (which is
> the one that creates the Check objects).
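( A rough sketch of the dispatch this implies on the poller side: the Check carries a `module` property, and the poller picks the matching module, falling back to the classic fork worker when it is None. Every name here — NrpeModule, ForkModule, execute, the registry dict — is an assumption about the interface under discussion, not existing code: )

```python
# Sketch: selecting a poller-side module from a Check's "module"
# property. All class/attribute names are illustrative assumptions.
class NrpeModule(object):
    module_type = 'nrpe'

    def execute(self, check):
        # a real module would speak NRPE natively here
        # instead of forking check_nrpe
        return 'ran %s over native nrpe' % check.command_line

class ForkModule(object):
    module_type = None  # fallback: the classic fork/exec worker

    def execute(self, check):
        return 'forked: %s' % check.command_line

class Check(object):
    def __init__(self, command_line, module=None):
        self.command_line = command_line
        self.module = module  # the new optional property

# registry built once by the poller from its loaded modules
modules = {m.module_type: m() for m in (NrpeModule, ForkModule)}

def launch(check):
    # pick the module named in the check, or fall back to fork
    return modules.get(check.module, modules[None]).execute(check)

print(launch(Check('check_load', module='nrpe')))
```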
>
> When it reaches the poller, this one already has the modules (nrpe,
> othersupermodule, etc), and the main poller process reads the module
> property (it can be None, in which case the standard fork takes it). I
> don't know if we should make workers per module, or have each worker load
> all modules. The current fork workers work in asynchronous mode: they get
> all their jobs, launch them, and then loop, polling all the processes they
> launched. I don't know if it will be as easy for a connect call, for
> example.
>
that looks like KISS .. my own opinion - for now - is that the main poller
process should effectively import all the plugin modules during its
configuration (or reconfiguration) phase(s), so that the workers have them
directly available in an easy way.. if later we see it's not good, then we
can move the import into the workers (or elsewhere) in a more
intelligent/smart/efficient way.
would having a draft picture of this help everybody to see the same thing?
> Then for the Check, it will call the module and "Execute" it with the
> Check. This module then knows what to do with the command_line or maybe
> other parameters. Of course a helper function in the Module code from
> Grégory will help.
>
yeah :°p but side topic then: I hope this new Module class I created and
have used within all the current shinken modules is not "bad" ; well, I
rather hope that I've not broken anything (that I've not seen/detected) with
it, 'cause my personal base configuration is quite small and I could have
missed something.. So do not hesitate to review some of these changes and
spit on them ;)
Also, I'm not very sure the names I chose for this ("basemodule.py" for the
file and "Module" for the class) are the best ones.. I'm quite hesitant
about renaming the class to "BaseModule" in order to make it different from
the one in module.py .. wdyt? and if you have a better name proposition,
then I'm all open to using it!
greg.
_______________________________________________
Shinken-devel mailing list
Shinken-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/shinken-devel