On 11/22/2005 8:52 AM, Roger Lucas wrote:
We are looking into Slony as the replication system for a small number of
machines (<30) spread across multiple locations.  Some of the locations are
co-location racks within ISPs so we are rather concerned about security.
Our working assumption is that at some point at least one of the machines
will become completely compromised and a malicious user will gain full root
access to it.  We can happily "write off" that machine (i.e. power it off
remotely) and continue to operate without it in the cluster until we are
able to rebuild it, but the concern is that the malicious user may be able
to corrupt databases on other machines having compromised just that one
machine in the network.

As I understand it, all slon daemons run with full super-user privileges and
the utility "slonik" is able to re-structure the entire replication system
from any node within the network.  This raises the possible scenario:

That understanding is correct, and the problem stems from the fundamental design of Slony as a network of equal nodes with no predefined master or slave role.

The only thing you can do is look into Slony's file-based log shipping for those nodes that you consider candidates for being compromised. A file-based leaf node does not run any daemon, nor does it need to talk actively to any other node.
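As a rough sketch of what such a setup looks like (cluster name, hostnames, and paths below are hypothetical; the slon "-a" archive option and the slony1_log_*.sql file naming are from Slony-I's log shipping feature as I recall it, so check the docs for your version): the slon on a trusted node writes each SYNC out as a SQL archive file, the files are pushed one-way to the leaf, and the leaf applies them with plain psql. The leaf runs no slon daemon and holds no credentials into the rest of the cluster, so compromising it yields nothing to attack the other nodes with.

```shell
# On a trusted provider node: run slon with log archiving enabled.
# "-a <dir>" makes slon write each SYNC as a SQL archive file.
slon -a /var/lib/slony/archive mycluster \
     "dbname=mydb host=provider user=slony"

# Push the archive files to the leaf over a one-way channel initiated
# FROM the trusted side (the leaf never connects back).
rsync -av /var/lib/slony/archive/ leafhost:/var/lib/slony/incoming/

# On the leaf node: no slon daemon at all; just apply the archive
# files in order with psql as they arrive.
for f in /var/lib/slony/incoming/slony1_log_*.sql; do
    psql -d mydb -f "$f" && mv "$f" /var/lib/slony/applied/
done
```

Note that even if the leaf's database is corrupted by an attacker, the damage stays local: replication data only ever flows toward the leaf, never from it.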


Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== [EMAIL PROTECTED] #
_______________________________________________
Slony1-general mailing list
[email protected]
http://gborg.postgresql.org/mailman/listinfo/slony1-general
