Re: [Bacula-users] migrating to different database backend

2006-02-06 Thread Martin Simmons
On Fri, 3 Feb 2006 23:15:47 +0100, Magnus Hagander [EMAIL PROTECTED] said: This sounds like either table or index bloat. Typical reasons for this are not doing vacuum

RE: [Bacula-users] migrating to different database backend

2006-02-06 Thread Magnus Hagander
This sounds like either table or index bloat. Typical reasons for this are not doing vacuum (which obviously isn't your problem), or having too few FSM pages. This can also be caused by not running vacuum earlier, but doing it now - if you got far enough away from the
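For readers hitting the same symptom, a minimal sketch of how to check and repair this on a PostgreSQL 8.x-era server, assuming the catalog database is named bacula (the name and table are illustrative):

    # VACUUM VERBOSE ends its output with free space map statistics; if the
    # number of pages with reclaimable space exceeds max_fsm_pages, bloat
    # accumulates even though vacuum runs regularly.
    psql -d bacula -c 'VACUUM VERBOSE;'

    # If so, raise max_fsm_pages in postgresql.conf and restart, e.g.:
    #   max_fsm_pages = 200000

    # Once a table is already badly bloated, plain vacuum will not shrink
    # it; VACUUM FULL rewrites it (taking an exclusive lock while it runs),
    # and a REINDEX clears index bloat afterwards.
    psql -d bacula -c 'VACUUM FULL file;'
    psql -d bacula -c 'REINDEX TABLE file;'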

Re: [Bacula-users] migrating to different database backend

2006-02-06 Thread Martin Simmons
On Mon, 6 Feb 2006 21:36:36 +0100, Magnus Hagander [EMAIL PROTECTED] said: OTOH, at least 15% of the 9 million rows in the File table are deleted (by pruning) and reinserted (by backup) every weekend. Within 2 months, almost 100% of the rows will have been deleted and reinserted.
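A quick back-of-the-envelope check on that churn rate: 15% of 9 million rows is about 1.35 million rows deleted and reinserted per weekend, so the roughly eight to nine weekends in two months rewrite well over 10 million row versions - more rows than the table holds. Assuming retention-driven pruning mostly hits different (older) rows each week, nearly every row will indeed have been deleted and reinserted within that window, which is exactly the workload that demands aggressive vacuuming and a large enough free space map.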

Re: [Bacula-users] migrating to different database backend

2006-02-03 Thread Karl Hakimian
On Thu, Feb 02, 2006 at 07:21:48PM -0500, Dan Langille wrote: As the author of the Bacula PostgreSQL module, I'm curious as to why you would go in that direction. Most people tend to move to PostgreSQL from MySQL. Is there something missing you need? I'm also considering switching from

Re: [Bacula-users] migrating to different database backend

2006-02-03 Thread Dan Langille
On 3 Feb 2006 at 7:10, Karl Hakimian wrote: On Thu, Feb 02, 2006 at 07:21:48PM -0500, Dan Langille wrote: As the author of the Bacula PostgreSQL module, I'm curious as to why you would go in that direction. Most people tend to move to PostgreSQL from MySQL. Is there something

Re: [Bacula-users] migrating to different database backend

2006-02-03 Thread Karl Hakimian
I'll try tuning things if you can get the data to me, or give me access to the database. It's not always indexes. Sometimes it's more along the lines of queries or vacuum. While setting up access to my data, I copied my bacula database to a new database and had quite an unexpected result.
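A dump-and-restore copy rewrites every table and index from scratch, so it sheds any accumulated bloat - the most likely explanation for a copied database coming out dramatically smaller than the original. A minimal sketch, assuming the source database is named bacula (names are illustrative):

    # Create an empty target and reload it from a fresh dump; the restored
    # copy contains only live rows, with freshly built indexes.
    createdb bacula_copy
    pg_dump bacula | psql -d bacula_copy

    # Comparing on-disk pages per table in the two databases shows how much
    # of the original was bloat (relpages is refreshed by VACUUM/ANALYZE).
    psql -d bacula -c "SELECT relname, relpages FROM pg_class ORDER BY relpages DESC LIMIT 10;"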

RE: [Bacula-users] migrating to different database backend

2006-02-03 Thread Magnus Hagander
I'll try tuning things if you can get the data to me, or give me access to the database. It's not always indexes. Sometimes it's more along the lines of queries or vacuum. While setting up access to my data, I copied my bacula database to a new database and had quite an unexpected

Re: [Bacula-users] migrating to different database backend

2006-02-03 Thread Martin Simmons
On Fri, 3 Feb 2006 20:22:34 +0100, Magnus Hagander [EMAIL PROTECTED] said: I'll try tuning things if you can get the data to me, or give me access to the database. It's not always indexes. Sometimes it's more along the lines of queries or vacuum. While setting up access

RE: [Bacula-users] migrating to different database backend

2006-02-03 Thread Magnus Hagander
This sounds like either table or index bloat. Typical reasons for this are not doing vacuum (which obviously isn't your problem), or having too few FSM pages. This can also be caused by not running vacuum earlier, but doing it now - if you got far enough away from the good path

Re: [Bacula-users] migrating to different database backend

2006-02-02 Thread Dan Langille
On 2 Feb 2006 at 13:57, Aleksandar Milivojevic wrote: I'd like to migrate one of my servers from PostgreSQL to MySQL. My plan was to use pg_dump to create a file with just insert commands, recreate tables in MySQL and then run commands from dump file to populate them. Reinstall director
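A minimal sketch of that plan, assuming the catalog database is named bacula on both servers (names and paths are illustrative, and the generated INSERTs may still need minor syntax fixes before MySQL accepts them):

    # Emit portable INSERT statements instead of PostgreSQL's COPY format.
    pg_dump --data-only --inserts bacula > bacula-data.sql

    # Recreate the schema on the MySQL side with the script Bacula ships
    # (its location varies by installation), then load the data.
    ./make_mysql_tables
    mysql bacula < bacula-data.sql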

Re: [Bacula-users] migrating to different database backend

2006-02-02 Thread Aleksandar Milivojevic
Dan Langille wrote: On 2 Feb 2006 at 13:57, Aleksandar Milivojevic wrote: I'd like to migrate one of my servers from PostgreSQL to MySQL. My plan was to use pg_dump to create a file with just insert commands, recreate tables in MySQL and then run commands from dump file to populate them.