Marc Cousin writes:
> If I remember correctly (I already discussed this with Kern Sibbald a while
> ago), Bacula does each insert in its own transaction: that's how the program
> is written.
Thanks for the info.
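Just to spell out why that hurts so much with fsync on: every COMMIT forces its
own flush of the WAL, so batching the inserts for a job into one transaction
would need only a single flush. A rough sketch of the idea (the table and
columns below are made up for illustration, not Bacula's real schema):
---
-- hypothetical table standing in for Bacula's file catalog
CREATE TABLE file_records (jobid integer, path text);

-- roughly what Bacula does today: autocommit, one WAL flush per row
INSERT INTO file_records VALUES (1, '/etc/passwd');
INSERT INTO file_records VALUES (1, '/etc/hosts');

-- batched alternative: a single WAL flush at COMMIT for the whole group
BEGIN;
INSERT INTO file_records VALUES (1, '/etc/passwd');
INSERT INTO file_records VALUES (1, '/etc/hosts');
COMMIT;
---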
> For now, I could only get good performance with Bacula and PostgreSQL when
> disabling fsync...
Isn't that less safe?
I think I am going to try increasing wal_buffers. In particular, I was reading
http://www.powerpostgresql.com; towards the middle it mentions improvements from
increasing wal_buffers. So far, performance against a single client has been fair.
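Something along these lines in postgresql.conf is what I have in mind (the
value is only a guess for this box, not a tested recommendation):
---
# hypothetical starting point; wal_buffers only takes effect after a restart
wal_buffers = 64        # up from the default of 8 (8kB pages each)
---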
How did you time your Bacula setup with PostgreSQL? By doing full backups with
lots of files?
I am also planning to check commit_delay and see if that helps.
I will try to avoid two or more machines backing up at the same time. Plus, in
a couple of weeks I should have a better machine for the DB anyway.
The description of commit_delay sure sounds very promising:
---
commit_delay (integer)
Time delay between writing a commit record to the WAL buffer and flushing
the buffer out to disk, in microseconds. A nonzero delay can allow multiple
transactions to be committed with only one fsync() system call, if system
load is high enough that additional transactions become ready to commit
within the given interval. But the delay is just wasted if no other
transactions become ready to commit. Therefore, the delay is only performed
if at least commit_siblings other transactions are active at the instant
that a server process has written its commit record. The default is zero (no
delay).
---
I only wonder which is safer: using a second or two of commit_delay, or using
fsync = off. Does anyone care to comment?
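If I go the commit_delay route, I would start with something like this, keeping
fsync on (the numbers are just a guess, not tested):
---
# hypothetical starting point; fsync stays on
fsync = on
commit_delay = 10000      # in microseconds (10 ms); the parameter is capped at 100000
commit_siblings = 5       # only delay when this many other transactions are open
---
As far as I understand, commit_delay keeps the usual fsync guarantees (COMMIT
still does not return until the WAL is flushed), whereas fsync = off risks
corruption after a crash.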
I plan to carefully re-read the WAL docs on the site so I can decide better;
however, any field experience would be most welcome.
Marc, are you on the Bacula list? I plan to join it later today and will
bring up the PostgreSQL issue there.