Hi,
I have just started encrypting my backup jobs. I have one full backup that
went from completing in
Scheduled time: 12-Aug-2017 20:05:00
Start time: 13-Aug-2017 06:16:47
End time: 13-Aug-2017 15:26:35
Elapsed time: 9 hours 9 mins 48 secs
to r
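For context, client-side data encryption of this kind is normally enabled in the File Daemon configuration. A minimal sketch follows; the resource name and key paths are placeholders, not taken from this thread:

```
# bacula-fd.conf -- data encryption sketch (names and paths are placeholders)
FileDaemon {
  Name = client1-fd
  # ... other directives ...
  PKI Signatures = Yes
  PKI Encryption = Yes
  PKI Keypair = "/etc/bacula/client1-fd.pem"   # client certificate + private key
  PKI Master Key = "/etc/bacula/master.cert"   # optional recovery master key
}
```

Note that encryption (and the signing that goes with it) is CPU work on the client, which is one plausible reason an encrypted full backup runs longer than an unencrypted one.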
Just upgraded our environment to 9.0.4. All is going well. We are noticing
strange behavior in the logs; see the log below. This is happening on more
than one job. The configuration for this job follows the log.
20-Sep 09:30 bacula-dir JobId 2349: Start Backup JobId 2349,
Job=C2T-App-Data.2017-09-2
> On Wed, 20 Sep 2017 13:12:01 +, Matthias Koch-Schirrmeister said:
>
> I don't think the connection is dropped at all. Can it be that
> despooling (=writing to the db) takes too long and bacula considers the
> connection broken? That would be an odd behaviour.
The error was "Connection re
On 2017-09-20 09:01, Fabian Brod wrote:
1. If I want to back up multiple folders with projects that have partially
identical files.
However, these project names can change over time.
You don't want to use backup software for this. You want git or zfs with
dedup on, or something. I've never
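As a sketch of the ZFS suggestion above (pool, device, and dataset names are placeholders; this is a configuration outline, not a tested recipe):

```
# Create a dataset with block-level deduplication enabled
zpool create tank /dev/sdb
zfs create tank/projects
zfs set dedup=on tank/projects   # dedup table lives in RAM; size memory accordingly
```

With dedup on, renaming Project_C to Project_C01 does not duplicate the stored blocks, since identical blocks are only kept once regardless of path.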
1. If I want to back up multiple folders with projects that have partially
identical files.
However, these project names can change over time.
projects
Project_A
Project_B
Project_C -> rename to Project_C01
Project_D
Project_E
Project_F
2. Yes, if a project consists of several Terrab
Am 20.09.2017 um 15:38 schrieb Phil Stracchino:
> I'm curious — what back-end DB are you using?
It's PostgreSQL 9.5.3.
Matthias
On 09/20/17 09:12, Matthias Koch-Schirrmeister wrote:
> Am 20.09.2017 um 13:47 schrieb Josh Fisher:
>>
>> Yes. Job attributes, file metadata, etc. that is to be stored in the db,
>> are spooled to a file in /var/spool/bacula while the fd is actively
>> transmitting data. After data transmission is
Am 20.09.2017 um 13:47 schrieb Josh Fisher:
>
> Yes. Job attributes, file metadata, etc. that is to be stored in the db,
> are spooled to a file in /var/spool/bacula while the fd is actively
> transmitting data. After data transmission is complete (or if the spool
> file becomes large enough?), Ba
Hello, Nikitin,
> Hello. I’ve backed up database via named pipes. The data was saved on server
> and
> it looks ok, but when I'm trying to restore, only the named pipe files are
> restored, not the data. Please help: how do I restore the data?
Did your fileset have the ReadFifo = yes option set?
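For reference, a FileSet sketch for backing up through a named pipe; the fileset name and FIFO path here are placeholders:

```
# FileSet sketch for a named-pipe backup (names and paths are placeholders)
FileSet {
  Name = "db-pipe-fs"
  Include {
    Options {
      signature = MD5
      readfifo = yes   # read the data coming through the FIFO
                       # instead of saving the pipe entry itself
    }
    File = /var/spool/bacula/db.fifo
  }
}
```

Without readfifo = yes, Bacula stores only the FIFO's directory entry, which matches the symptom described: the restore recreates the pipe file but contains no data. On restore, a reader process must be consuming the FIFO while the job runs.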
On 9/20/2017 4:42 AM, Matthias Koch-Schirrmeister wrote:
Am 19.09.2017 um 15:46 schrieb Can Şirin:
My jobs have almost 30M
files x 8 parallel jobs, and it takes about 20 hours to despool
attributes.
So you are saying that the whole backup run times out because the
database is taking so long
Hello. I've backed up database via named pipes. The data was saved on server
and it looks ok, but when I'm trying to restore, only the named pipe files
are restored, not the data. Please help: how do I restore the data?
Am 19.09.2017 um 15:46 schrieb Can Şirin:
> My jobs have almost 30M
> files x 8 parallel jobs, and it takes about 20 hours to despool
> attributes.
So you are saying that the whole backup run times out because the
database is taking so long? From what I could see with "top", the DB
never create
>> Standard-Device Elapsed time=25:57:52, Transfer rate=1.463 M Bytes/second
>> Sending spooled attrs to the Director. Despooling 431,639,287 bytes ...
And this is how it ended - again, after five or so more hours:
Fatal error: Network error with FD during Backup:
ERR=Connection reset by peer