Thanks for the replies, first of all!
Let me explain again:

Of course I'm using MRTG / RRDMON / RRDTOOL / and other good graphers...!
They give a perfect graphical overview... but there is MORE informational
(performance) data inside mon which is used only for the monitoring decision,
and after that (good or failed) it's thrown away....

Scott puts it right (I agree):
"> Mon is an excellent monitoring tool, as it was designed to be; that
> doesn't necessarily make it an excellent tool for measuring performance,
> however.  I'd prefer that core development of mon continue in the
> direction of monitoring state and sending notifications, and leave the
> task of graphing, reports, etc. to other tools."

Precisely to leave mon for monitoring, states and alerts, the need for a
global DBMS data storage is there......!! Once the data is in a DBMS,
separate tools can handle this data without disturbing mon....

Which data am I talking about??? Not the complete SNMP MIB tree :-)) but
just the data which mon is already retrieving for correct operation! For
example: fping.monitor results; snmpvar.monitor results
(cpu/process/disk/etc.), http_xxx.monitor results, etc., etc...
Just that data needs to be analysed in real time for correct operation
(= mon), BUT it is also important to get reports (trends / pro-active /
average service level reached / etc.) after running for x time.
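
To make it a bit more concrete: I imagine one generic table that could hold
all of those monitor results. This is only a rough sketch; the database,
table and column names (mondata, mon_results, hostgroup, etc.) are made up
by me, and it assumes the Perl DBI module with the PostgreSQL driver
(DBD::Pg):

#!/usr/bin/perl
# Rough sketch: one generic table for all mon monitor results.
# All names below are placeholders; adjust to your own setup.
use strict;
use DBI;

my $dbh = DBI->connect("dbi:Pg:dbname=mondata;host=localhost",
                       "monuser", "secret",
                       { RaiseError => 1, AutoCommit => 1 });

$dbh->do(q{
    CREATE TABLE mon_results (
        sampled_at  TIMESTAMP NOT NULL,     -- when the monitor ran
        hostgroup   VARCHAR(64) NOT NULL,   -- mon hostgroup
        service     VARCHAR(64) NOT NULL,   -- mon service (ping, http, ...)
        host        VARCHAR(128),           -- individual host, if any
        variable    VARCHAR(64),            -- e.g. cpu, disk, rtt, resptime
        value       DOUBLE PRECISION,       -- the measured value
        status      INTEGER                 -- 1 = check OK, 0 = failed
    )
});

$dbh->disconnect;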

This prevents the data from being retrieved twice, and why not?? Storing the
retrieved monitoring data into a database, and nothing more, could(/should)
simply be the last step in each monitoring script....!

This idea is just to prevent mon from being turned into a performance
grapher/reporter and other _hacks_.... let different programs do that job...!

Does anyone have some Perl --> PostgreSQL commands/script lines for me??
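Roughly this is the kind of "last step" I have in mind for a monitoring
script -- again only a sketch, assuming DBI/DBD::Pg and the hypothetical
mon_results table from above; the host names and values here are made up:

#!/usr/bin/perl
# Sketch of the last step of a monitor script: store the result, then exit.
use strict;
use DBI;

# Placeholder connection details; use whatever fits your installation.
my $dbh = DBI->connect("dbi:Pg:dbname=mondata;host=localhost",
                       "monuser", "secret",
                       { RaiseError => 1, AutoCommit => 1 });

# Values a real monitor script would already have at this point.
my ($hostgroup, $service, $host) = ("webservers", "http", "www1");
my ($variable, $value, $ok)      = ("resptime", 0.42, 1);

my $sth = $dbh->prepare(q{
    INSERT INTO mon_results
        (sampled_at, hostgroup, service, host, variable, value, status)
    VALUES (now(), ?, ?, ?, ?, ?, ?)
});
$sth->execute($hostgroup, $service, $host, $variable, $value, $ok);

$dbh->disconnect;

# The monitor still exits the way mon expects: 0 = OK, non-zero = failure.
exit($ok ? 0 : 1);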
greettz
dick
<>

----- Original Message -----
From: "Scott Prater" <[EMAIL PROTECTED]>
To: "Dick de Waal" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Friday, September 28, 2001 10:36
Subject: RE: New idea / database question....


> Right now, we're using a combination of mon with HP-OV to cover
> monitorization.  I'm looking into RRDTool for reporting.
>
> After spending a couple of years studying the topic, I've finally found
> it useful to separate (mentally) the task of monitoring state from the
> task of measuring performance. Basically, what you're asking are two
> different questions:
>
> *  Is my system OK right now? (fundamentally a yes/no question, although
> there are many degrees of "yes, but...")  This is monitorization of
> state.  Tools such as Tivoli, HP-OV, mon, Big Brother, etc. focus on
> answering this question.
>
> *  How is my system performing overall? (well, poorly, sometimes better
> than other times, etc.)  This is monitorization of performance over
> time, usually shown in graphs.  Tools such as MRTG and RRDTool focus on
> answering this question.
>
> Of course, the lines blur, especially when you talk of what different
> products provide -- the answer to the second question can determine the
> answer to the first.  Tools such as MRTG provide limited threshold
> checking and notification, but they are no substitute for a
> full-featured monitoring system, such as mon.  On the other hand, tools
> such as mon can be adapted to save state information for reporting
> purposes (with modules such as rrdmon), but they're no substitute for
> reporting tools such as RRDTool.
>
> So far, I haven't found a reasonably-priced (or freeware) package that
> does it all to my taste.  Tivoli and HP-OV come close, but they still
> are focused more on the first question, rather than on the second.
> There's the OpenNMS project (http://www.opennms.org/), an open source
> freeware alternative to tools such as HP-OV, but as far as I can tell,
> it's not really ready for primetime yet.
>
> So, like most, I use a combination of tools to give me the big picture.
> As another person pointed out, it's a pain in the neck to have to
> configure several pieces of software to send multiple queries just to
> get data on one element -- you usually end up cobbling together a series
> of management scripts to tie it all together.  But as the two tasks
> (checking state and measuring performance) are fundamentally two
> different tasks, albeit very closely related, I prefer to work with
> tools optimized to perform either one or the other.
>
> Mon is an excellent monitoring tool, as it was designed to be; that
> doesn't necessarily make it an excellent tool for measuring performance,
> however.  I'd prefer that core development of mon continue in the
> direction of monitoring state and sending notifications, and leave the
> task of graphing, reports, etc. to other tools.
>
> my two cents...
>
> Scott Prater
> Dpto. Sistemas
> [EMAIL PROTECTED]
>
> SERVICOM 2000
> Av. Primado Reig, 189 entlo.
> 46020 Valencia - Spain
> Tel. (+34) 96 332 12 00
> Fax. (+34) 96 332 12 01
> www.servicom2000.com
>
>
>
>
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On behalf of Dick de Waal
> Sent: Thursday, September 27, 2001 22:59
> To: [EMAIL PROTECTED]
> Subject: New idea / database question....
>
>
> Hello All!
> Did anybody put the monitoring (performance) data from the various
> monitoring scripts into a database (like PostgreSQL) for further
> analysing/reporting?
> Up- and downtimes are not enough for me; I also want to store the SNMP
> (cpu, disk, process, etc.), HTTP response, etc. data in this database to
> create service level reports... or even, because the data is then
> available in real time, do some pro-active SLA monitoring!!!
> (Retrieving is somehow already done for the monitoring part... and after
> comparing in the monitoring scripts, this data is thrown away... but it
> is just useful!!)
>
> Has anybody some ideas/scripts, or does anyone want a
> beta/alpha/stable tester????
>
> I'm now using the latest version and _of course_ it's running well again!!
> Even for my test setup on a Sony Vaio laptop....
>
> greeetzz
> dick
> <>
> (not really a perl programmer..........)
>
>
>
