On Sun, 7 Oct 2001, Pekka Savola wrote:
> On Sun, 7 Oct 2001, Riku Meskanen wrote:
> > On Sun, 7 Oct 2001, Pekka Savola wrote:
> > > On Sun, 7 Oct 2001, Riku Meskanen wrote:
> > > > Does anybody know if there exist a project(s) or effort(s)
> > > > to create a proper centralized management system for Linux?
> > > >
> > > > None of the current Linux (OSS) management systems I've heard of¹
> > > > or tried has gone in the direction of implementing centrally
> > > > managed installed software, configurations and policies from a
> > > > database on a management station.
> > >
> > > We're managing about 30-40 servers and an equal number of workstations,
> > > ranging from 5.2 to 7.2 beta, using cfengine (www.cfengine.org) and
> > > autoupdate (search freshmeat).
> > >
> > We keep about the same number of servers up to date with autorpm.
> > The cfengine pointer was good info, thanks; I'll check what
> > it can do for us.
> 
> We used autorpm before, but it is way too inflexible and unmaintained to
> be used anymore.
> 
> Autoupdate can resolve dependencies (even circular ones), handle kernel
> upgrades with lilo.conf modifications, and a lot of other interesting stuff.
>
Right, I'll have to check that too.
 
> > But rather than just talking and thinking about the place where I
> > happen to work now, I'm taking a broader view... I have no
> > difficulty understanding the point behind the centralized remote
> > software administration tools that have appeared from Microsoft
> > (SMS, W2k AD w/ installation tools), Novell (ZEN), HP
> > (Ignite-UX), Lucent (CSL), etc.
> 
> As noted, the tricks here are just the tip of the iceberg.  It would be
> nice to be able to keep all the relevant local configs of the systems
> centrally managed, so that in case of e.g. an emergency you could just
> "re-create" the O/S from scratch with the proper IP, host, networking
> etc. settings.
>
Yes, I know what I'm talking about; you got the point starting from "It would
be nice ..." and onwards. That is already possible with the products
mentioned above.

For some time I worked on building quite a large NMS system, and there I
already started to understand that maintaining the systems in an individual
manner is a complete no-go, nor could I imagine that any of those large
networks would allow remote upgrades in the way RHN proposes. See my reply
to Trond.

Currently I'm working in a fairly modest environment where the total
count of computers is around 10k (all flavors included, most not Linux
and not in the hands of our department), and I'm personally responsible
for only a few. However, I'm in a way in charge of projects to develop
better procedures, and I'd like to get onto the right track from the
beginning :)
 
> >   Beats running ssh loops of "rpm -ivh ..." against a bunch of
> >   systems 6-0. Of course a system with up2date, autorpm or
> >   autoupdate will be kept upgraded when new packages appear in a
> >   certain location, but IMHO it's still a big difference.
> 
> This is no problem; with autorpm and now autoupdate, we have used an
> 'install' directory in addition to 'updates'; every rpm appearing in
> 'install' is automatically installed on the system.  This way one can
> trivially install new packages on all the systems.
> 
Oh, that's neat. I hadn't thought of that yet, but how do you remove
packages?
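
(Just to illustrate the kind of ssh loop I'd like to get away from;
the hostnames and package path below are made up:)

  #!/bin/sh
  # Naive push: install one rpm on every host in a list, one ssh at a time.
  for host in web1 web2 db1; do
      ssh root@$host "rpm -ivh /tmp/new-package-1.0-1.i386.rpm"
  done

With the 'install' directory trick that loop disappears, but the question
of removing something from all systems still stands.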

> >  - Installing and cloning new systems would be a breeze.
> 
> Installing is no problem with kickstart and/or customized install trees.
>
Agreed, but cloning an existing system... say I would like to have the
same packages and the same configuration as a system that was installed
half a year ago. Is that easy?
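
For the package side, something like this is roughly what I have in mind
(hostname and file name are made up, and it says nothing about the
configs, which would still have to come from cfengine/CVS):

  #!/bin/sh
  # Snapshot the package set of an existing box so it can be pasted
  # into a kickstart %packages section for the clone.
  ssh oldbox "rpm -qa --queryformat '%{NAME}\n'" | sort > oldbox-packages.txt

But doing that reliably half a year later, with matching versions, is
exactly the kind of thing a MGMT station with a database should remember
for me.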
 
> >  - Having a consistent view from one place of what is
> >   installed and where. This would help with reporting,
> >   creating plans for upgrades, checking inconsistencies and
> >   recovering possibly compromised or failed systems;
> >   getting an inventory of the installed software would be
> >   child's play.
> 
> This is one thing that should be focused more on.
> 
> E.g. quite often, if you haven't personally installed and administered a
> box and are about to e.g. upgrade it, you don't know whether there are
> some local issues to consider (e.g. the sendmail rpm was removed and
> replaced with something else, etc.).  This is a matter of documentation,
> and cannot (much) be helped, but automation and using _self-documenting_
> tools like CVS might be a key here.
> 
Exactly; I would claim it's one of the most important issues here.
Depending on the policy set (at class or host level), the configuration
management will either override and replace a changed configuration with
the one that comes from the MGMT station, or it will store the changed
configuration on the MGMT station and keep CVS (or whatever) history
of it.
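
For the latter direction I'm picturing, very crudely, something like this
running on the MGMT station (the hostname, path and file below are made
up, and the real thing would of course be driven by cfengine or similar):

  #!/bin/sh
  # Pull one watched config file from a managed host into the MGMT
  # station's checked-out CVS tree and commit it if it changed.
  # Assumes the working copy exists and the file is already under CVS.
  HOST=host42
  WORKDIR=/var/mgmt/configs/$HOST
  scp -q $HOST:/etc/sendmail.cf $WORKDIR/etc/sendmail.cf || exit 1
  cd $WORKDIR && cvs -q commit -m "automatic snapshot from $HOST" etc/sendmail.cf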

> > - Centralized CVS history of the configuration files would be
> >   a nice feature too, compared to diffing files on
> >   individual systems :)
> 
> Our global configs (distributed through cfengine) are stored in CVS.
> 
> Keeping the most important local configs in cvs would also be interesting,
> but that might be too difficult a task to do properly.
> 
The configuration/script files etc. that require local mods would be
migrated to the MGMT station, but common files could just as well be
kept at class level, right?
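
Roughly the kind of layout I'm imagining on the MGMT station (all the
names below are made up):

  configs/
    classes/
      webserver/etc/httpd/conf/httpd.conf   # shared by every host in the class
      mailhub/etc/sendmail.cf
    hosts/
      host42/etc/sysconfig/network          # host-specific files
      host42/etc/sendmail.cf                # overrides the class-level copy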

:-) riku

-- 
    [ This .signature intentionally left blank ]





_______________________________________________
Redhat-devel-list mailing list
[EMAIL PROTECTED]
https://listman.redhat.com/mailman/listinfo/redhat-devel-list
