I didn't get any replies, but I devised this query:
SELECT DISTINCT media.volumename, media.volstatus, pool.name AS pool,
media.mediatype, job.name AS job_name, job.starttime
FROM (
SELECT job.name AS name, MAX(job.starttime) AS starttime
FROM job
WHERE job.level = 'F' AND
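The message is cut off there; for reference, here is a possible completed form of that query. This is a sketch only — the GROUP BY, the jobstatus filter, and the joins back through jobmedia and media are my guesses based on the standard Bareos catalog schema, not the original poster's text:

```sql
-- Sketch: volumes holding the most recent successful Full backup of each job.
-- Assumes the standard Bareos (PostgreSQL) catalog schema.
SELECT DISTINCT media.volumename, media.volstatus, pool.name AS pool,
       media.mediatype, job.name AS job_name, job.starttime
FROM (
    SELECT job.name AS name, MAX(job.starttime) AS starttime
    FROM job
    WHERE job.level = 'F'              -- Full backups only
      AND job.jobstatus = 'T'          -- terminated OK
    GROUP BY job.name
) AS lastfull
JOIN job      ON job.name = lastfull.name AND job.starttime = lastfull.starttime
JOIN jobmedia ON jobmedia.jobid = job.jobid
JOIN media    ON media.mediaid  = jobmedia.mediaid
JOIN pool     ON pool.poolid    = media.poolid
ORDER BY job.name;
```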
My backup scheme is:
1. Data on my NAS is replicated in near-real-time to a hot standby
(not using Bareos). Both the NAS and hot standby carry
15-minute-granularity snapshots.
2. The NAS is backed up to tape with a standard (default, even)
full/incremental/differential scheme using 1 tape drive i
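For context, "standard (default, even)" in step 2 means essentially the stock Bareos cycle; a minimal Schedule resource along those lines (a sketch — the name and times are illustrative, not my actual configuration):

```
Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st sun at 21:00
  Run = Differential 2nd-5th sun at 21:00
  Run = Incremental mon-sat at 21:00
}
```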
On Thu, Jul 8, 2021 at 10:33 AM 'Christian Svensson' via bareos-users <
bareos-users@googlegroups.com> wrote:
> Instead, I would like to propose a change where the spooling process
> creates two spool files (each bound to 50% of Maximum Spool Size), Spool-A
> and Spool-B.
> When Spool-A is filled up,
I use a spool area to prevent shoe-shining during incremental and
differential backups.
But I also have one large backup that takes a very long time (for Full
backups). It seems to me that this has become slower since I introduced
spooling.
I can read the source data at about 300MiB/s for a si
FD termination status: OK
SD termination status: Fatal Error
Termination: *** Restore Error ***
On Thu, Jun 10, 2021 at 9:37 PM James Youngman wrote:
>
> One of my backup jobs just saw a write error in which it failed to write data
> and then failed to write
Can anybody confirm that this is a suitable query to identify the last
file written to a specific volume?
SELECT
client.name AS client,
path.path,
filename.name,
File.FileId,
JobMedia.EndFile
FROM File
JOIN Filename on File.FilenameId = Filename.FilenameId
JOIN Path ON File.PathId = Path.PathId
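The excerpt breaks off before the joins to JobMedia and Media, so here is a possible completed form as a sketch. The volume filter, the extra joins, and the ordering are my assumptions based on the standard Bareos catalog schema; 'B0011YL6' is a placeholder taken from elsewhere in the thread:

```sql
-- Sketch: last file recorded on a given volume, assuming the classic
-- Bareos catalog schema (with a separate filename table).
SELECT client.name AS client,
       path.path,
       filename.name,
       file.fileid,
       jobmedia.endfile
FROM file
JOIN filename ON file.filenameid = filename.filenameid
JOIN path     ON file.pathid     = path.pathid
JOIN job      ON file.jobid      = job.jobid
JOIN client   ON job.clientid    = client.clientid
JOIN jobmedia ON jobmedia.jobid  = file.jobid
JOIN media    ON media.mediaid   = jobmedia.mediaid
WHERE media.volumename = 'B0011YL6'
ORDER BY jobmedia.endfile DESC, file.fileid DESC
LIMIT 1;
```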
One of my backup jobs just saw a write error in which it failed to write
data and then failed to write an EOF:
10-Jun 15:11 bareos-sd JobId 67: Wrote label to prelabeled Volume
"B0011YL6" on device "tapedrive-0"
(/dev/tape/by-id/scsi-35000e8ec6001-nst)
10-Jun 15:11 bareos-sd JobId 67: New volu
On Wed, Jun 9, 2021 at 12:42 PM Henry MatroƩ wrote:
>
> And for all the folders (ONLY on folders), I have this message during
> restoration:
>
> bextract JobId 0: Warning: findlib/mkpath.cc:102 Cannot change owner and/or
> group ofERR=permission denied
>
As which user did you run bextract?
I recently added a spool directory and was hoping that my incremental jobs
would be able to spool concurrently, since in total my incremental jobs
should add up to much less than the capacity of the spool directory.
Yet all except the running job are "waiting on max Storage jobs". What do
I
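In case it helps anyone searching later: "waiting on max Storage jobs" points at a concurrency cap on the storage, not at spool capacity. A possible culprit, sketched below (an assumption, since the original configuration is not shown — the Director's Storage resource limits concurrent jobs, and its Maximum Concurrent Jobs defaults to 1):

```
# Director configuration, Storage resource (values illustrative):
# raise the per-storage job limit so several jobs can spool at once.
Storage {
  Name = Tape
  ...
  Maximum Concurrent Jobs = 4
}
```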