[bareos-users] Storage-Job hangs after canceling copy-job in director

2024-02-06 Thread 'Dennis Benndorf' via bareos-users
Hi, we have a setup of two servers, each running one bareos-sd. One SD provides a FileStorage and the other SD takes the stream from the first one and copies/migrates it to tape. In this setup there is sometimes a tape drive error. When this happens, the corresponding copy/migrate job hangs. We

Re: [bareos-users] New PostgreSQL plugin search volunteers for testing

2023-09-19 Thread 'Dennis Benndorf' via bareos-users
Hi Bruno, I think we found the cause: in the function start_backup_job there is an if-statement if self.switch_wal and current_lsn > self.lastLSN: We added some additional information to the else-clause to get closer to that problem:         else:     # Nothing has changed
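For context, a minimal sketch of what such an instrumented check could look like (the names self.switch_wal, current_lsn and self.lastLSN are taken from the snippet above; the debug call in the else-branch is an illustrative assumption, not the plugin's actual code):

    # Inside start_backup_job of the PostgreSQL plugin (sketch):
    if self.switch_wal and current_lsn > self.lastLSN:
        # New data was written since the last backed-up LSN,
        # so a WAL switch is triggered here.
        pass
    else:
        # Nothing has changed; log both LSNs to understand why
        # no WAL switch happens (hypothetical debug aid).
        bareosfd.DebugMessage(
            100,
            "no WAL switch: current_lsn=%s lastLSN=%s\n"
            % (current_lsn, self.lastLSN),
        )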

Re: [bareos-users] New PostgreSQL plugin search volunteers for testing

2023-09-19 Thread 'Dennis Benndorf' via bareos-users
Hi Bruno, we increased the debug level, but the load on the database decreased, so it ran successfully. We will come back to this when there is more load and the problem happens again. Another thing I found is that Incrementals do not seem to work as I would expect: the Full which ended at 2023-09-19

Re: [bareos-users] New PostgreSQL plugin search volunteers for testing

2023-09-18 Thread 'Dennis Benndorf' via bareos-users
Hi Bruno, it's me once again. The next run, with the pg_subtrans dir excluded, fails on the base directory: JobId 5116772: Fatal error: bareosfd: Traceback (most recent call last): File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 61, in start_backup_file return

Re: [bareos-users] New PostgreSQL plugin search volunteers for testing

2023-09-18 Thread 'Dennis Benndorf' via bareos-users
Hi Bruno, we tested your new plugin with a quite large database and a running application using it. When doing a full backup we ran into: JobId 5116734: Fatal error: bareosfd: Traceback (most recent call last): File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 61, in

Re: [bareos-users] Bareos 20

2021-01-08 Thread 'Dennis Benndorf' via bareos-users
Hi Robert, this won't help you, but our environment was also affected by that. So I can confirm that tape setups (at least our two) stop working when upgrading to Bareos 20. Our setup was Ubuntu 18.04 with Bareos 20. I had to downgrade to 19.2.7 (and restore the database beforehand) to get it working

[bareos-users] Re: Wrong slot loaded in correct tape robot

2018-03-08 Thread 'Dennis Benndorf' via bareos-users
Hi @all, fixed it myself. The problem was that new tapes were taken from the scratch pool 'scratch'. These volumes were all in the first robot. The fix was to add a parameter 'scratch pool = scratch-mro' to all pools using the second robot. Regards, Dennis On 08.03.2018 at 15:11 Dennis wrote
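For reference, a sketch of what such a pool definition could look like in the director configuration (resource names are examples, other directives omitted):

    Pool {
      Name = Full-Tapes-MRO
      Pool Type = Backup
      Storage = robot2-tape        # pool bound to the second robot
      Scratch Pool = scratch-mro   # take new volumes only from this scratch pool
    }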

[bareos-users] Re: Wrong slot loaded in correct tape robot

2018-03-08 Thread 'Dennis Benndorf' via bareos-users
Hi, in addition some logs creating a virtual full, reading from one robot and writing to the other one:
Run Backup job
JobName:  backup-otrs-fd
Level:    VirtualFull
Client:   otrs-fd
Format:   Native
FileSet:  LinuxSingle
Pool:     Incremental-Tapes (From Job IncPool

[bareos-users] Wrong slot loaded in correct tape robot

2018-03-08 Thread 'Dennis Benndorf' via bareos-users
Hi @all, I got a weird issue: I have two robots, each in a different room, configured using one storage daemon. If I copy jobs from one robot to the other, Bareos does the following:
*ps -ef | grep mtx*
bareos   29424 12486  0 11:22 ?    00:00:00 /bin/sh

[bareos-users] job status does not match

2018-01-23 Thread 'Dennis Benndorf' via bareos-users
Hi, is it known that the job status information at the top of bareos-webui can be incorrect? Is it a bug, or is there a cache or something to update? In my case bareos-webui tells me that 1 job is running at the moment and the rest are waiting. Bconsole tells me that 10 jobs are running... Regards,
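For anyone comparing the two, the numbers can be cross-checked from bconsole, e.g. with the standard commands below (status director shows running and scheduled jobs; list jobs jobstatus=R lists only the currently running ones):

    *status director
    *list jobs jobstatus=R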

[bareos-users] Spooling question

2018-01-12 Thread 'Dennis Benndorf' via bareos-users
Hi @all, we are about to migrate our backup from Amanda to Bareos and are in the middle of this process. Bareos has a lot of nice features, but there is one thing I miss, or maybe I just didn't find it: is there a possibility to spool to disk in case of tape robot errors? In Amanda all backups were made
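For what it's worth, Bareos does have data spooling to disk, configured per job and per device, though it spools before despooling to tape rather than holding finished backups across robot failures; a minimal sketch of the directives involved (paths and sizes are examples, other required directives omitted):

    # Director, Job resource
    Job {
      Name = backup-example-fd
      ...
      Spool Data = yes
    }

    # Storage daemon, Device resource
    Device {
      Name = LTO-Drive-1
      ...
      Spool Directory = /var/lib/bareos/spool
      Maximum Spool Size = 200G
    }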

Re: [bareos-users] Re: Large backups taking longer after 17.2.4 update

2018-01-05 Thread 'Dennis Benndorf' via bareos-users
Hi, after applying
/usr/bin/psql -d bareos -c "REINDEX DATABASE bareos"
/usr/bin/psql -d bareos -c "VACUUM"
/usr/bin/psql -d bareos -c "ANALYZE"
as user postgres, I now get
bareos-sd JobId 58569: Despooling elapsed time = 00:03:28, Transfer rate = 303.3 M Bytes/second
again.

Re: [bareos-users] Re: Large backups taking longer after 17.2.4 update

2018-01-04 Thread 'Dennis Benndorf' via bareos-users
Interesting point, here are my numbers. All on the same server and partition in October:
bareos-sd JobId 38400: Despooling elapsed time = 00:03:47, Transfer rate = 303.5 M Bytes/second
bareos-sd JobId 38216: Despooling elapsed time = 00:14:14, Transfer rate = 224.6 M Bytes/second
bareos-sd

[bareos-users] AlwaysIncremental restore time

2017-10-14 Thread 'Dennis Benndorf' via bareos-users
Hi, let's assume we want to have 90 days of backups. As far as I understand, this means I have one full and 89 incrementals if we back up once a day. My questions regarding reducing restore time: * Is it possible to add differential backups in between the chain? (weekly) * Is it possible
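For reference, the Bareos job directives behind such a scheme look roughly like this (a sketch with example values; note that the Consolidate job is what keeps the chain from actually growing to 89 incrementals):

    Job {
      Name = backup-example-fd
      ...                                          # client, fileset, pool etc. omitted
      Accurate = yes                               # required for Always Incremental
      Always Incremental = yes
      Always Incremental Job Retention = 90 days   # keep incrementals this long
      Always Incremental Keep Number = 7           # always keep at least 7 of them
    }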

Re: [bareos-users] Re: Consolidate and Maximum Concurrent Jobs(SD)

2017-10-14 Thread 'Dennis Benndorf' via bareos-users
ive because we have lots of data. Regards, Dennis On 13.10.2017 at 13:51 Jörg Steffens wrote: On 13.10.2017 at 10:11 'Dennis Benndorf' via bareos-users wrote: Hi, I am facing a problem with the Consolidate Job using AlwaysIncremental and the Maximum Concurrent Jobs of the Storage-Robot

[bareos-users] Consolidate and Maximum Concurrent Jobs(SD)

2017-10-13 Thread 'Dennis Benndorf' via bareos-users
Hi, I am facing a problem with the Consolidate job using AlwaysIncremental and the Maximum Concurrent Jobs setting of the Storage-Robot. When doing regular backups, Maximum Concurrent Jobs is set to 10, which works fine and leads to 10 clients saving data in parallel. The robot itself has 3 drives.
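For context, the setting in question lives in the director's Storage resource; a sketch with example values (name, address and password are placeholders):

    Storage {
      Name = Storage-Robot
      Address = sd.example.com
      Password = "secret"
      Device = TapeLibrary
      Media Type = LTO
      Maximum Concurrent Jobs = 10   # 10 clients writing in parallel
    }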