Hi Wai Peng and Divyamk,
Thanks for the refreshing feedback!
First of all, why the inclination towards a REAL database format? What are the
pros that a DB offers, over flat files? Speed might be one, but how big are
conf files going to get?
It's neither about size nor speed. Structured database formats are very
good at searching, sorting, inserting, deleting, and so on. If and when
we implement such a system using flat files, we will end up replicating
the same features just to process the information.
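To illustrate what I mean, here is a rough Python sketch using SQLite (the
table layout and key names are made up, not part of any actual proposal):

```python
import sqlite3

# Configuration stored as simple key/value rows in an in-memory database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE config (app TEXT, key TEXT, value TEXT)")
db.executemany("INSERT INTO config VALUES (?, ?, ?)",
               [("httpd", "port", "80"),
                ("httpd", "docroot", "/var/www"),
                ("sshd", "port", "22")])

# Searching and sorting is one query, instead of ad-hoc flat-file parsing:
rows = db.execute("SELECT key, value FROM config WHERE app = ? ORDER BY key",
                  ("httpd",)).fetchall()
```

With flat files you would be writing the equivalent of SELECT, ORDER BY and
friends by hand for every tool that touches the configuration.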
What I am worried most about is the idea of storing all your eggs in one
basket. What happens if the registry gets corrupted? How good are the tools
to recover it? Putting all configurations together + using a binary format
for them = a nightmare when you need to recover something. Best have a backup
lying around, because when that's gone no daemons will be able to boot.
Correct. Always have backups. This is a system design issue rather than
an implementation one. Text files can get corrupted too. To borrow some
concepts from other systems: implement checksums, keep the last known
good configuration, create system restore points, allow admins to dump
and load current configurations, and in the worst case, restore from a
failsafe default. The binary database format can also be dumped into
text format and stored somewhere, so that manual recovery becomes possible.
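The checksum + last-known-good idea could look something like this rough
Python sketch (file naming convention is my own invention here):

```python
import hashlib
import os

def save_config(path, data):
    """Write config plus checksum, preserving the previous good copy
    as path + '.lkg' (last known good)."""
    if os.path.exists(path):
        os.replace(path, path + ".lkg")
        if os.path.exists(path + ".sha256"):
            os.replace(path + ".sha256", path + ".lkg.sha256")
    with open(path, "wb") as f:
        f.write(data)
    with open(path + ".sha256", "w") as f:
        f.write(hashlib.sha256(data).hexdigest())

def load_config(path):
    """Load a config only if its stored checksum matches; otherwise
    fall back to the last known good copy."""
    for candidate in (path, path + ".lkg"):
        try:
            with open(candidate, "rb") as f:
                data = f.read()
            with open(candidate + ".sha256") as f:
                expected = f.read().strip()
        except OSError:
            continue
        if hashlib.sha256(data).hexdigest() == expected:
            return data
    raise RuntimeError("no intact configuration found; restore from backup")
```

A daemon's startup code would call load_config() and get either an intact
current config, the last known good one, or a loud failure instead of
silently booting with garbage.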
As an example, I am running an automated daily backup of my database
tables by dumping them into text format (SQL statements) and committing to
Subversion, so that I lose at most one day's changes due to
accidental deletion or corruption.
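The nightly job boils down to two commands; here is a sketch that builds
them as argument lists (database name, dump path and commit message are
illustrative, not my actual setup):

```python
import datetime

def daily_backup_commands(db="confdb", workdir="/srv/backup/confdb"):
    """Build the dump-and-commit steps as shell argument lists,
    suitable for running from cron via subprocess."""
    dump = f"{workdir}/{db}.sql"
    today = datetime.date.today().isoformat()
    return [
        # dump all tables as plain SQL text
        ["sh", "-c", f"mysqldump {db} > {dump}"],
        # commit the dump into the Subversion working copy
        ["svn", "commit", "-m", f"automated backup {today}", dump],
    ]
```

Since the dump is plain text, Subversion stores compact diffs between days,
and any past day's state can be checked out again.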
Also, one will have to take note of what happens if a user wants to install a
program by themselves. For example, I have specific builds of mplayer/mencoder
that are not available as RPMs. They have their own conf files
under /home/waipeng/etc/xxx/. In this case, you need to set aside parts of
the registry for the user.
Windows also stores per-user registry information in each user's home
directory. Similarly, Unix has per-user .bashrc/.bash_profile, etc. to
override system defaults. The configuration tools can hide this
complexity from the user by looking for user-specific
configurations first, failing which they fetch the system-wide
configuration (as will happen for first-time users of an application).
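The lookup order is simple enough to sketch; the directory layout below
(~/etc/&lt;app&gt;/ and /etc/&lt;app&gt;/) is my assumption based on the paths
mentioned above:

```python
import os

def find_config(app, home, etc="/etc"):
    """Return the user-specific config if present, else the system-wide
    one, else None (a first-time user with no system default either)."""
    for path in (os.path.join(home, "etc", app, f"{app}.conf"),
                 os.path.join(etc, app, f"{app}.conf")):
        if os.path.isfile(path):
            return path
    return None
```

The same precedence rule would apply whether the backend is flat files or
a database keyed by (user, app).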
I feel it would be worthwhile considering the ability to configure any
single host, or a set of server farms, over the network using the
configurator tool. This would enable administrators to use it as a
compliance engine across the organisation.
Features like:
- Auto update
- Jump-start software installations from a known location
I think this would require a way to perform remote synchronization of
the configuration database, assuming that 'identical' systems are being
configured (apart from hostnames, IP addresses, etc.). The way I am doing
this now is to set up SSH auto-login to run commands on remote systems
from a master system. Updates and software installations can then be run
non-interactively. I use rsync on top of SSH to synchronize
configuration files (and sometimes even filesystems if I need to
propagate manually compiled/installed software). This way, I do less
duplicated work and keep my sanity at the same time.
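For the curious, the rsync invocation I use is roughly the following (host
names and paths here are placeholders); the sketch just builds the command
as an argument list:

```python
def sync_command(src_dir, host, dest_dir, user="root"):
    """rsync a configuration tree to a remote host over SSH.
    -a preserves permissions/ownership/times, -z compresses,
    --delete mirrors removals, -e ssh rides on the SSH auto-login."""
    return ["rsync", "-az", "--delete", "-e", "ssh",
            src_dir.rstrip("/") + "/",   # trailing slash: copy contents
            f"{user}@{host}:{dest_dir}"]
```

Loop that over the host list and the master system's copy becomes the single
source of truth for the whole farm.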
Regards,
Kokhong
_______________________________________________
Slugnet mailing list
[email protected]
http://www.lugs.org.sg/mailman/listinfo/slugnet