snacktime <[EMAIL PROTECTED]> writes:
>> Confusion could be a problem.
>>
>> Having a lot of tables isn't inherently problematic; those that aren't
>> accessed don't consume much in the way of resources.  There's nothing
>> that "polls through" them individually.
>>
>> Having a lot of sequences isn't so good, mind you; those are
>> essentially handled via polling.  5000 of them (1 per client) means
>> that each SYNC will process and store 5000 sequence values.
>
> Ouch.  How often does it sync?

That's up to you :-).

Once a second?  Once every 10 seconds?  Your call...
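
With slon, that's the -s option, which sets how often it checks
whether a SYNC should be generated, in milliseconds.  Something along
these lines (the cluster name and conninfo are placeholders):

    slon -s 10000 mycluster "dbname=mydb host=masterhost user=slony"

Fewer SYNCs means fewer passes over those 5000 sequences, at the cost
of a bit more replication lag.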

>> The other notable challenge would be getting all the schemas and
>> tables added to replication.  How do new clients get set up?
>
> We have scripts that automate everything, so adding whatever Slony
> needs to the setup process wouldn't be too difficult.  I'm more
> concerned about the overhead of having so many schemas, and it does
> look like that would be an issue.  Still, I think I'll load up some
> dummy data when I get the chance and see how it does with 5000
> schemas; at least then I'll know exactly what kind of load it's
> going to put on the system.

Definitely worth a prototype cycle...
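
A quick way to fake up that layout for the test (untested sketch; the
database name and the client_N naming are made up):

    # One schema per client, each with a table whose serial column
    # implicitly creates a sequence, mimicking the production setup.
    for i in `seq 1 5000`; do
        psql -d testdb -c "CREATE SCHEMA client_$i;
            CREATE TABLE client_$i.data (
                id serial PRIMARY KEY,
                payload text);"
    done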

I'd expect the primary evil part would be sequence handling, at least
on the replication side of things.
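
If you want numbers during the prototype run, the per-SYNC sequence
bookkeeping lands in sl_seqlog (if memory serves), so something like
this shows how many sequence values each SYNC stored; "mycluster" and
"testdb" are placeholders:

    psql -d testdb -c "
        SELECT seql_ev_seqno, count(*)
          FROM _mycluster.sl_seqlog
         GROUP BY seql_ev_seqno
         ORDER BY seql_ev_seqno DESC
         LIMIT 10;"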

Lots of tables shouldn't present any particular problem.

As for having thousands of schemas, that won't be a Slony-I problem...
-- 
output = ("cbbrowne" "@" "ca.afilias.info")
<http://dba2.int.libertyrms.com/>
Christopher Browne
(416) 673-4124 (land)