"Kirill Ponazdyr" <[EMAIL PROTECTED]> writes:

> It is for an advanced syslog server product we are currently developing.
>
> The very basic idea is to feed all syslog messages into a DB and allow
> easy monitoring and even correlation.  We use Postgres as our DB
> backend; in big environments the machine would be hit with dozens of
> syslog messages per second, and the message tables could grow out of
> control pretty quickly (we are talking up to 10 million messages daily).
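For reference, such a message table typically looks something like the
sketch below (table and column names are hypothetical, not taken from
the post; the time-based index is the one discussed further down):

```sql
-- Hypothetical syslog message table; all names are illustrative only.
CREATE TABLE syslog_messages (
    id          serial PRIMARY KEY,
    received_at timestamptz NOT NULL DEFAULT now(),
    host        text NOT NULL,
    facility    integer,
    severity    integer,
    message     text
);

-- Time-based index used by monitoring queries.  Because received_at
-- only ever increases, this index grows monotonically.
CREATE INDEX syslog_messages_received_at_idx
    ON syslog_messages (received_at);
```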

We have something similar (with about 50 log entries written per
second).  I guess you too have a time-based column which is indexed.
This means that you'll run into the "creeping index syndrome" (as far
as I understand it, pages in the index are never reused because your
column values are monotonically increasing).  Expiring old rows is a
problem, too.  We are now experimenting with per-day tables, which
makes expiry rather cheap and avoids ever-growing indices.  And backup
is much easier, too.
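A minimal sketch of the per-day-table idea (table names are made up
for illustration; Postgres of this era has no declarative
partitioning, so each day's table is created and dropped by hand or
from a cron job):

```sql
-- One table per day, named after the date (hypothetical naming
-- scheme).  Each day gets its own index, so no single index grows
-- without bound.
CREATE TABLE syslog_2003_10_01 (
    received_at timestamptz NOT NULL,
    host        text NOT NULL,
    message     text
);
CREATE INDEX syslog_2003_10_01_ts_idx
    ON syslog_2003_10_01 (received_at);

-- Expiring a whole day is a single DROP TABLE, which also discards
-- that day's index -- far cheaper than DELETE plus VACUUM on one
-- huge table.
DROP TABLE syslog_2003_09_01;
```

Backup benefits similarly: yesterday's table never changes again, so
it can be dumped once and skipped thereafter.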

If you have some time for toying with different ideas, you might want
to look at TelegraphCQ.
