Update

The problem is resolved now, I think. I reinstalled Postgres, the db, tables 
etc. and the result was no better. I then stood up a new SD/NAS and adjusted 
the jobs to back up to it. Backups now run at ~30MB/s, which is where they 
were previously. Something has gone wrong with the prior NAS that I haven't 
been able to get to the bottom of, so I'll just junk it and move on.

Just an aside - I realised whilst editing the jobs that the Storage="sd used 
for backup jobs" only needs to be specified in the Job resource; it's not 
necessary (or desirable) to specify the Storage in the Pool as well, since the 
Job overrides the Pool. This doesn't seem to be the case for Copy/Migrate 
jobs: the Storage="sd used for copy jobs" has to be specified in every Pool 
used for copy jobs. Am I right that there is no equivalent override mechanism 
for Copy/Migrate jobs?
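
Roughly what I mean, as a sketch with made-up resource names (most of the 
other required directives are omitted for brevity):

    # Backup job: Storage on the Job resource overrides whatever the Pool says
    Job {
      Name = "BackupExample"
      Type = Backup
      Pool = Full-Pool
      Storage = BackupSD      # SD used for backup jobs (hypothetical name)
    }

    # Copy jobs: the storage seems to be taken from the pool definition, so
    # every pool used by copy jobs appears to need its own Storage directive
    Pool {
      Name = Copy-Pool
      Pool Type = Backup
      Storage = CopySD        # SD used for copy jobs (hypothetical name)
    }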

Best
-Chris-




> On 7 Aug 2024, at 23:06, Chris Wilkinson <winstonia...@gmail.com> wrote:
> 
> The results after dropping and reinstating the db + reloading the prior sql 
> dump show no improvement at all, in fact a bit slower. ☹ They are around 
> 1MB/s.
> 
> I did a second test where the data is on the Pi SD card. This was also ~1MB/s, 
> so that result seems to rule out the HDD as the source of the bottleneck.
> 
> I think that leaves Postgres as the only possible culprit.
> 
> Thank you all for your suggestions.
> 
> -Chris
> 
> On Wed, 7 Aug 2024, 21:23 Chris Wilkinson, <winstonia...@gmail.com> wrote:
> No worries, I've cleared out the db, run the postgres db scripts and imported 
> the sql dump. It's up and running again and all the jobs etc. appear intact. 
> Doing some testing so will report results back.
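> 
> For the record, the steps were roughly along these lines (the db name, user 
> and dump filename are placeholders, and the scripts live wherever your 
> packaging installs the Bacula catalog scripts):
> 
>     # wipe the old catalog and recreate it with the Bacula-supplied scripts
>     drop_bacula_database
>     create_bacula_database
>     make_bacula_tables
>     grant_bacula_privileges
>     # then reload the previously saved ASCII dump into the new catalog
>     psql -U bacula bacula < bacula-catalog.sql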
> 
> -Chris
> 
> On Wed, 7 Aug 2024, 20:58 Bill Arlofski, <w...@protonmail.com> wrote:
> On 8/7/24 1:11 PM, Chris Wilkinson wrote:
> > And then import the saved sql dump which drops all the tables again and 
> > creates/fills them?
> > 
> > -Chris
> 
> Hello Chris!
> 
> My bad!
> 
> I have been using a custom script I wrote years ago to do my catalog backups. 
> It uses what postgresql calls a custom (binary) format. It's typically faster 
> and smaller, so I switched to this format more than 10 years ago. I had not 
> looked at an ASCII dump version in years. I just looked now, and it does 
> indeed DROP and CREATE everything.
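> 
> For comparison, the two dump styles look roughly like this (assuming the 
> catalog database is simply named bacula):
> 
>     # plain ASCII dump; with --clean it emits DROP statements before the CREATEs
>     pg_dump --clean bacula > bacula.sql
>     psql bacula < bacula.sql
> 
>     # custom (binary) format: typically smaller/faster, restored with pg_restore
>     pg_dump -Fc bacula > bacula.dump
>     pg_restore --clean -d bacula bacula.dump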
> 
> So, the only thing you needed to do was create the database with the 
> create_bacula_database script, then import the SQL dump.
> 
> Sorry for the static. :)
> 
> 
> Best regards,
> Bill
> 
> -- 
> Bill Arlofski
> w...@protonmail.com
> 
> -Chris Wilkinson

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
