This is exactly the kind of thing that the Assimilation Project (open source) software is designed to do.

It addresses execution, scaling, updating the database, and even deciding which machines need to run which scripts - with near-zero configuration.

And in the process it creates a graph-based CMDB (configuration management database) and performs more conventional monitoring, auditing against desired state, etc.

http://assimproj.org and http://assimilationsystems.com/





On 07/09/2014 05:26 PM, Skylar Thompson wrote:
On 07/09/2014 09:15 AM, leam hall wrote:
I could use some feedback on a thought process.

Situation: Lots of machines (3-4 digit count), lots of scripts on each
machine (100-500). The scripts are identified by a fixed alphanumeric
scheme, and each script will either pass, fail, or fix the issue it
refers to.

Planned output is one file per machine in the following format:

     servername 2014-07-09 scriptID  Pass

The idea is that the relevant outputs can be collated into one larger
file for searching, or input into a database.
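That output format is easy to generate consistently if every script goes through one shared helper. A minimal sketch (the function name, script ID, and result strings here are hypothetical, not part of any existing tooling):

```python
#!/usr/bin/env python3
# Emit one result line per script run in the
# "servername date scriptID result" format described above.
import datetime
import socket

def report(script_id, result):
    """Format a single result line; result is Pass, Fail, or Fixed."""
    today = datetime.date.today().isoformat()
    return "{0} {1} {2} {3}".format(
        socket.gethostname(), today, script_id, result)

# Example: produce a line for a hypothetical script "AB-0042".
print(report("AB-0042", "Pass"))
```

Because each line is whitespace-delimited with a fixed field order, the collated file stays grep-friendly and trivially loadable into a database.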

What are your thoughts on this? Ideally others will use this so I'm
trying to think outside my own habits/needs. Feel free to bounce ideas
around, you probably have ideas I haven't even begun to ponder.
Some questions to consider:

* How critical will it be that you collect the output of every script
every time it runs?

* How likely will it be that you will have to scale this past the
initial environment? This might happen in machine count, script count,
or script run frequency.

* How likely will it be that you will have to change the format of the
message?

* Do you have any security issues to consider (e.g. do messages have to
be signed, will the database have to be open to the world)?

* What kind of queries do you expect to ask of the data? How often do
you expect to run them?

With this many scripts and systems, I would consider using some kind of
message bus rather than having the scripts (or even an agent running on
the clients) write directly to the database. You could be looking at
hundreds or thousands of simultaneous connections, and most databases
are not designed for that kind of use.

Instead, have the scripts output data into the message bus, and have a
pool of consumer processes that read messages from the bus and into the
database. The consumer processes would start up a connection to the
database, prepare one or more statements, and use the cached statement
handles for every message they consume. This is /much/ more efficient
than setting up a fresh TCP connection, having the query planner parse a
statement, inserting/committing a single row, and then tearing it all
down. Depending on your database technology, using stored procedures
can be much more efficient than ad-hoc statements, and will definitely
be more secure for non-authenticated connections.

Skylar
_______________________________________________
Discuss mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
  http://lopsa.org/


--
    Alan Robertson <[email protected]> - @OSSAlanR

"Openness is the foundation and preservative of friendship...  Let me claim from you 
at all times your undisguised opinions." - William Wilberforce
