Hi,
we have a setup of two servers, each running one bareos-sd. One SD
provides a FileStorage and the other SD takes the stream from the first
one and copies/migrates it to tape. In this setup there is sometimes a
tape drive error. If this happens, the corresponding copy/migrate job
hangs. We
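For context, a minimal sketch of what such a disk-to-tape copy job might
look like in the director configuration (resource and pool names are
placeholders, not taken from this mail):

Job {
  Name = "copy-to-tape"              # placeholder name
  Type = Copy
  Selection Type = PoolUncopiedJobs  # copy every job not yet copied out of the pool
  Pool = FilePool                    # pool served by the first SD (FileStorage)
  Messages = Standard
}

Pool {
  Name = FilePool
  Pool Type = Backup
  Storage = File                     # storage on the first SD
  Next Pool = TapePool               # copies get written via the tape SD
}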
Hi Bruno,
I think we found the cause:
In the function start_backup_job there is an if-statement:
if self.switch_wal and current_lsn > self.lastLSN:
We added some additional debug information to the else-clause to get
closer to the problem:
else:
    # Nothing has changed
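The mail does not show what was added; a minimal sketch of such a debug
addition, assuming bareosfd.DebugMessage() from the Bareos Python plugin
API and the variable names quoted above:

else:
    # Nothing has changed since the last backup; log both LSNs so we
    # can see why no WAL switch is triggered (hypothetical addition)
    bareosfd.DebugMessage(
        100,
        "no change: current_lsn=%s lastLSN=%s switch_wal=%s\n"
        % (current_lsn, self.lastLSN, self.switch_wal),
    )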
Hi Bruno,
we increased the debug level, but the load on the database decreased, so
it ran successfully.
Will come back to this when there is more load and the problem happens
again.
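For reference, the debug level of a file daemon can be raised from
bconsole; a sketch (the client name is a placeholder):

setdebug level=150 client=postgres-client-fd trace=1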
Another thing I found is that Incrementals seem not to work as I would
expect:
The Full which ended at 2023-09-19
Hi Bruno,
it's me once again. The next run with the pg_subtrans dir excluded fails
on the base directory:
JobId 5116772: Fatal error: bareosfd: Traceback (most recent call last):
  File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 61, in start_backup_file
    return
Hi Bruno,
we tested your new plugin with a quite large database and a running
application using it.
When doing a full backup we ran into:
JobId 5116734: Fatal error: bareosfd: Traceback (most recent call last):
  File "/usr/lib64/bareos/plugins/BareosFdWrapper.py", line 61, in
Hi Robert,
this won't help you, but our environment was also affected by that. So I
can confirm that tape setups (at least our two) stop working when
upgrading to Bareos 20.
Our setup was Ubuntu 18.04 with Bareos 20. I had to downgrade to 19.2.7
(and restore the database beforehand) to get it working
Hi @all,
fixed it myself. The problem was that new tapes were taken from the
scratch pool 'scratch'. These volumes were all in the first robot.
The fix was to add the directive 'Scratch Pool = scratch-mro' to all
pools using the second robot.
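A sketch of such a Pool resource (only the Scratch Pool line comes from
the fix above; the other names are placeholders):

Pool {
  Name = Full-Tapes-Robot2       # placeholder pool used by the second robot
  Pool Type = Backup
  Storage = Tape-Robot2          # placeholder storage pointing at the second robot
  Scratch Pool = scratch-mro     # take new volumes from the second robot's scratch pool
}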
Regards,
Dennis
On 08.03.2018 at 15:11, Dennis wrote:
Hi,
in addition, some logs of a VirtualFull job reading from one robot and
writing to the other one:
Run Backup job
JobName: backup-otrs-fd
Level: VirtualFull
Client: otrs-fd
Format: Native
FileSet: LinuxSingle
Pool: Incremental-Tapes (From Job IncPool
Hi @all,
I've got a weird issue: I have two robots, each in its own room. Both are
configured on one storage daemon. If I copy jobs from one robot to the
other, Bareos does the following:
*ps -ef |grep mtx*
bareos 29424 12486 0 11:22 ? 00:00:00 /bin/sh
Hi,
is it known that the job status information at the top of bareos-webui is
incorrect? Is it a bug, or is there a cache or something to update? In my
case bareos-webui tells me that 1 job is running at the moment and the
rest are waiting. Bconsole tells me that 10 jobs are running...
Regards,
Hi @all,
we are about to migrate our backup from Amanda to Bareos and are in the
middle of this process. Bareos has a lot of nice features, but there is
one thing I miss, or maybe I just didn't find it:
Is there a possibility to spool to disk in case of tape robot errors? In
Amanda all backups were made
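For what it's worth, Bareos does have data spooling directives; a minimal
sketch of where they go (paths and sizes are examples):

# In the storage daemon's device resource:
Device {
  ...
  Spool Directory = /var/lib/bareos/spool
  Maximum Spool Size = 200 GB
}

# In the director's job resource:
Job {
  ...
  Spool Data = yes   # write to the spool directory first, despool to tape afterwards
}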
Hi,
after applying
/usr/bin/psql -d bareos -c "REINDEX DATABASE bareos"
/usr/bin/psql -d bareos -c "VACUUM"
/usr/bin/psql -d bareos -c "ANALYZE"
as user postgres, I now get
bareos-sd JobId 58569: Despooling elapsed time = 00:03:28, Transfer
rate = 303.3 M Bytes/second
again.
Interesting point, here are my numbers:
All the same server and partition in October:
bareos-sd JobId 38400: Despooling elapsed time = 00:03:47, Transfer rate
= 303.5 M Bytes/second
bareos-sd JobId 38216: Despooling elapsed time = 00:14:14, Transfer rate
= 224.6 M Bytes/second
bareos-sd
Hi,
let's assume we want to have 90 days of backups. As far as I understand,
this means I have one full and 89 incrementals if we back up once a day.
My questions regarding reducing restore time (see the schedule sketch
below):
* Is it possible to add differential backups in between the chain? (weekly)
* Is it possible
ive because we
have lots of data.
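A sketch of a Schedule resource mixing the three levels (names and times
are placeholders; the weekly differentials are what shortens the restore
chain):

Schedule {
  Name = "Cycle-90d"                       # placeholder
  Run = Full 1st sat at 21:00              # monthly full
  Run = Differential 2nd-5th sat at 21:00  # weekly; restore needs full + last diff + incrementals since
  Run = Incremental sun-fri at 21:00       # daily incrementals
}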
Regards,
Dennis
On 13.10.2017 at 13:51, Jörg Steffens wrote:
On 13.10.2017 at 10:11, 'Dennis Benndorf' via bareos-users wrote:
Hi,
I am facing a problem with the Consolidate Job using AlwaysIncremental
and the Maximum Concurrent Jobs of the Storage-Robot
Hi,
I am facing a problem with the Consolidate Job using AlwaysIncremental
and the Maximum Concurrent Jobs of the Storage-Robot. When doing regular
backups, Maximum Concurrent Jobs is set to 10, which works fine and
leads to 10 clients saving data in parallel. The robot itself has 3 drives.
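A sketch of the two resources involved, as far as the description goes
(names are placeholders; only the value 10 and the job type come from the
mail):

Storage {
  Name = Storage-Robot           # placeholder
  Maximum Concurrent Jobs = 10   # lets 10 clients back up in parallel
  # ... Address, Password, Device, Media Type ...
}

Job {
  Name = "Consolidate"
  Type = Consolidate             # merges the AlwaysIncremental jobs into consolidated backups
  # ... JobDefs, Schedule ...
}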