On Thu, 2006-03-02 at 00:38 -0500, [EMAIL PROTECTED] wrote:
> > On Wed, 2006-03-01 at 18:22 -0500, Christopher Browne wrote:
> >> Rod Taylor wrote:
> >> >> The one "grand challenge" you'll face is that getting the
> >> >> subscription going, with 224GB of data, will take quite a while,
> >> >> which will leave transactions open for quite a while.
> >> >
> >> > It helps if you subscribe one table at a time and merge them into
> >> > an existing set.
> >> >
> >> > So: create set, add table to set, wait..., merge set. Repeat for
> >> > each table.
> >>
> >> I'd be inclined to wait 'til the end and merge them all, but that's
> >> just me...
> >
> > I've run into pretty big performance problems with more than a few
> > sets. The query that pulls the data ends up with a large number of
> > ORs in the WHERE clause.
>
> Take a look at the 1.1 schema; there's an extra index on
> sl_log_1/sl_log_2 which seems to make an *enormous* difference when
> you have more than one set.
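
For anyone wanting to try the subscribe-then-merge approach quoted
above, it looks roughly like this in slonik. This is only a sketch:
the cluster name, conninfo strings, node/set/table IDs, and the table
name are all invented for illustration.

    cluster name = mycluster;
    node 1 admin conninfo = 'host=master dbname=mydb user=slony';
    node 2 admin conninfo = 'host=slave dbname=mydb user=slony';

    # Temporary set on the origin holding just the next table.
    create set (id = 999, origin = 1, comment = 'temp set for public.foo');
    set add table (set id = 999, origin = 1, id = 42,
                   fully qualified name = 'public.foo');

    # Copy only this one table, so the initial COPY transaction
    # stays comparatively small and short-lived.
    subscribe set (id = 999, provider = 1, receiver = 2);

    # ...wait for the subscription to catch up, then fold the
    # temporary set into the main set so the set count stays low.
    merge set (id = 1, add id = 999, origin = 1);
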
Yeah. I already had that back in 1.0.2 -- but my version of Slony at the
time wasn't exactly standard. The problem, when I last looked, is that
if you have, say, 1000 tables (and thus 1000 sets), you end up with an
absolutely huge WHERE clause that burns a significant amount of CPU
time. It appeared that a large chunk of this was duplicated logic that
could be reduced or eliminated, but that is as far as I took it; I
started merging sets together earlier instead.
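
To make the cost concrete, the query that pulls log rows for a SYNC
ends up shaped roughly like the sketch below, with one OR arm per
replication set. This is a simplified reconstruction rather than the
literal query Slony generates; the sl_log_1 column names follow the
1.x schema, and the cluster name and set/table IDs are invented.

    -- One OR arm per set; with 1000 single-table sets the planner
    -- and executor grind through 1000 arms on every SYNC.
    SELECT log_origin, log_xid, log_tableid, log_actionseq,
           log_cmdtype, log_cmddata
      FROM _mycluster.sl_log_1
     WHERE log_origin = 1
       AND (   (log_tableid IN (1))   -- set 1
            OR (log_tableid IN (2))   -- set 2
            OR (log_tableid IN (3))   -- set 3
            -- ...one arm for each remaining subscribed set...
           )
     ORDER BY log_actionseq;
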
