Re: [Bacula-users] Slow transfer rate - MS Win 2019

2021-11-05 Thread Andras Horvai
Unfortunately, I do not see Event 513 in the application log in Event
viewer. :(
I did an estimate last night to check how much data and how many files we are
about to back up, but I was not able to wait for the result. This morning I
checked, and this is the result:
2000 OK estimate files=444,041 bytes=271,286,711,691
I do not think it is too much. As I mentioned, the estimate operation also
took too long. (Does bacula involve VSS when doing an estimate?)
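
(For scale, a quick back-of-the-envelope on that estimate output -- a rough
Python sketch; the rate figure is taken from the 30-Oct full backup report
quoted later in this archive:)

files       = 444_041
total_bytes = 271_286_711_691
print(total_bytes / files)          # ~610,950 -- about 0.6 MB average file size

rate = 12_337_800                   # bytes/s ("Rate: 12337.8 KB/s" from the last full)
print(total_bytes / rate / 3600)    # ~6.1 -- hours of pure transfer at that rate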
I cannot find anything useful in the debug log:

dc4-fd: filed/job.c:317-0  Accurate= BaseJob=
flags=<74754>
dc4-fd: filed/job.c:467-0 Calling term_find_files
dc4-fd: filed/job.c:470-0 Done with term_find_files
dc4-fd: lib/bsockcore.c:1203-0 WSAIoctl(SIO_GET_EXTENSION_FUNCTION_POINTER,
WSAID_DISCONNECTEX) ret = 0
dc4-fd: filed/job.c:473-0 Done with free_jcr



On Fri, Nov 5, 2021 at 12:34 PM Josh Fisher  wrote:

>
> On 11/4/21 19:10, Andras Horvai wrote:
>
>
> I think the issue is the number of files/directories on the file server I
> want to back up. Bacula-fd cannot count them... but the bacula-fd.exe daemon
> does not use/does not want to use more resources
> on the system. Can I somehow ask bacula-fd to use more resources to speed
> up its operation? :)
>
>
> Your problem may be the result of creating a VSS snapshot on a DFS
> replicated folder. Do you see any Event 513 errors in the Application log
> on the Windows machine? If so, see
> https://docs.microsoft.com/en-US/troubleshoot/windows-server/backup-and-storage/event-id-513-vss-windows-server.
> Is it taking a long time to generate the VSS snapshot?  In DFS management
> console, you can configure the DFS Replication group schedule for 'No
> replication' during certain times. Try turning off DFS replication during
> the time the backup job is running.
>
> Cheers,
> Josh Fisher
>
>
>
> Thanks,
>
> Andras
>
> On Thu, Nov 4, 2021 at 1:12 PM Jose Alberto  wrote:
>
>> Client:  11.0.5 (03Jun21) Microsoft Windows Server
>> 2008 R2 Enterprise Edition Service Pack 1 (build 7601),
>> 64-bit,Cross-compile,Win64
>> Rate:   26497.7 KB/s
>> Software Compression:   None
>> Comm Line Compression:  80.7% 5.2:1
>>
>> Host: guest on vmware.
>>
>>
>>
>>
>> On Wed, Nov 3, 2021 at 8:48 PM Jose Alberto  wrote:
>>
>>> Personally, I have seen the highest speeds on AIX, then Linux, and
>>> finally Windows. With Windows I have not gotten past 40 MB/s.
>>>
>>> Virtual or physical server, antivirus, VSS, and the type of data being
>>> backed up are all factors that influence Windows performance.
>>>
>>> On Tue, Nov 2, 2021 at 5:00 PM Andras Horvai 
>>> wrote:
>>>
>>>> 1. AV exclusion did not solve the issue. Backup is still
>>>> incredibly slow from this client.
>>>>
>>>> I forgot to mention that I am trying to back up a folder which is DFS
>>>> synchronized. I do not know if it matters or not.
>>>> During the backup I do not see high cpu or memory utilization on the client
>>>> caused by bacula-fd.
>>>> It looks like bacula-fd "cannot find out" what to send to the
>>>> storage daemon...  strange...
>>>> This is the first time I have had such an issue...
>>>>
>>>>
>>>> On Mon, Nov 1, 2021 at 11:19 PM Heitor Faria 
>>>> wrote:
>>>>
>>>>> "1. The windows 2019 server has the default Virus and Threat
>>>>> protection settings no 3rd party virus scanner is deployed on the server.
>>>>> Can you recommend what to check? How to exclude bacula-fd?"
>>>>>
>>>>> Consult your AV provider to find out.
>>>>>
>>>>> "2. This would be a bigger project in our case but sure I will
>>>>> consider this... "
>>>>>
>>>>> The 11.x version (odd) from bacula.org is also open source. Do not
>>>>> confuse it with the Bacula Enterprise Edition (even) versions.
>>>>>
>>>>> Rgds.
>>>>> --
>>>>> MSc Heitor Faria (Miami/USA)
>>>>> CEO Bacula LatAm
>>>>> mobile1: + 1 909 655-8971
>>>>> mobile2: + 55 61 98268-4220
>>>>>
>>>>> América Latina
>>>>> [ http://bacula.lat/]
>>>>>
>>>>>
>>>>>  Original Message 
>>>>> From: Andras Horvai 
>>>>> Sent: Monday, November 1, 2021 04:40 PM
>>>>> To: bacula-users 
>>>>> Subject: Re: [Bacula-users] Slow transfer rate - MS Win 2019

Re: [Bacula-users] Slow transfer rate - MS Win 2019

2021-11-04 Thread Andras Horvai
I think the issue is the number of files/directories on the file server I
want to back up. Bacula-fd cannot count them... but the bacula-fd.exe daemon
does not use/does not want to use more resources
on the system. Can I somehow ask bacula-fd to use more resources to speed
up its operation? :)

Thanks,

Andras

On Thu, Nov 4, 2021 at 1:12 PM Jose Alberto  wrote:

> Client:  11.0.5 (03Jun21) Microsoft Windows Server
> 2008 R2 Enterprise Edition Service Pack 1 (build 7601),
> 64-bit,Cross-compile,Win64
> Rate:   26497.7 KB/s
> Software Compression:   None
> Comm Line Compression:  80.7% 5.2:1
>
> Host: guest on vmware.
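
(For context, the Comm Line Compression figure means the wire itself carried
far less than the quoted rate -- a rough Python sketch, assuming the reported
Rate is the uncompressed data rate, with figures from the stats above:)

rate_kb = 26_497.7        # reported Rate in KB/s
ratio   = 5.2             # Comm Line Compression: 80.7% 5.2:1
print(rate_kb / ratio)    # ~5,096 KB/s actually crossing the wire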
>
>
>
>
> On Wed, Nov 3, 2021 at 8:48 PM Jose Alberto  wrote:
>
>> Personally, I have seen the highest speeds on AIX, then Linux, and
>> finally Windows. With Windows I have not gotten past 40 MB/s.
>>
>> Virtual or physical server, antivirus, VSS, and the type of data being
>> backed up are all factors that influence Windows performance.
>>
>> On Tue, Nov 2, 2021 at 5:00 PM Andras Horvai 
>> wrote:
>>
>>> 1. AV exclusion did not solve the issue. Backup is still incredibly slow
>>> from this client.
>>>
>>> I forgot to mention that I am trying to back up a folder which is DFS
>>> synchronized. I do not know if it matters or not.
>>> During the backup I do not see high cpu or memory utilization on the client
>>> caused by bacula-fd.
>>> It looks like bacula-fd "cannot find out" what to send to the
>>> storage daemon...  strange...
>>> This is the first time I have had such an issue...
>>>
>>>
>>> On Mon, Nov 1, 2021 at 11:19 PM Heitor Faria 
>>> wrote:
>>>
>>>> "1. The windows 2019 server has the default Virus and Threat protection
>>>> settings no 3rd party virus scanner is deployed on the server. Can you
>>>> recommend what to check? How to exclude bacula-fd?"
>>>>
>>>> Consult your AV provider to find out.
>>>>
>>>> "2. This would be a bigger project in our case but sure I will consider
>>>> this... "
>>>>
>>>> The 11.x version (odd) from bacula.org is also open source. Do not
>>>> confuse it with the Bacula Enterprise Edition (even) versions.
>>>>
>>>> Rgds.
>>>> --
>>>> MSc Heitor Faria (Miami/USA)
>>>> CEO Bacula LatAm
>>>> mobile1: + 1 909 655-8971
>>>> mobile2: + 55 61 98268-4220
>>>>
>>>> América Latina
>>>> [ http://bacula.lat/]
>>>>
>>>>
>>>>  Original Message 
>>>> From: Andras Horvai 
>>>> Sent: Monday, November 1, 2021 04:40 PM
>>>> To: bacula-users 
>>>> Subject: Re: [Bacula-users] Slow transfer rate - MS Win 2019
>>>>
>>>> Hello Heitor,
>>>>
>>>> How much is very slow? I think on a 1 Gigabit/s connection around 100
>>>> Mbit/s is slow, considering that the network test between the sd and the client
>>>> gives a 925.6 Mbit/s - 977.6 Mbit/s result
>>>> (I used bacula to test it). The problem is that when it comes to a real
>>>> backup it drops to around 100 Mbit/s.
>>>> 1. The Windows 2019 server has the default Virus and Threat protection
>>>> settings; no 3rd party virus scanner is deployed on the server. Can you
>>>> recommend what to check? How can I exclude bacula-fd?
>>>> 2. This would be a bigger project in our case but sure I will consider
>>>> this...
>>>> 3. Well this is something I can take into account as well
>>>>
>>>>
>>>> Thanks,
>>>>
>>>> Andras
>>>>
>>>>
>>>>
>>>> On Sun, Oct 31, 2021 at 1:47 PM Heitor Faria 
>>>> wrote:
>>>>
>>>>> Hello Andras,
>>>>>
>>>>> How much is very low?
>>>>> Bacula only displays the transfer rate after the sw compression
>>>>> reduction, which IMHO is very misleading since it is a "higher-the-better"
>>>>> value. Worst decision ever.
>>>>> You gotta divide the ReadBytes by the Job Duration to get the
>>>>> processed data rate.
>>>>> Anyway:
>>>>>
>>>>> 1. Please check for Windows AV presence and add bacula-fd as an
>>>>> exception. Try again.
>>>>> 2. Update Bacula to the latest 11.0.5 packages from bacula.org, which
>>>>> include a few improvements.
>>>>> 3. Ultimately, you can manually split the FileSet in the Bacula
>>>>> Community edition to parallelize the workload and eventually achieve
>>>>> better performance, especially across different Windows volumes.
>>>>>
>>>>> Rgds.
>>>>> --
>>>>> MSc Heitor Faria (Miami/USA)
>>>>> CEO Bacula LatAm
>>>>> mobile1: + 1 909 655-8971
>>>>> mobile2: + 55 61 98268-4220
>>>>>
>>>>> América Latina
>>>>> [ http://bacula.lat/]
>>>>>
>>>> ___
>>> Bacula-users mailing list
>>> Bacula-users@lists.sourceforge.net
>>> https://lists.sourceforge.net/lists/listinfo/bacula-users
>>>
>>
>>
>> --
>> #
>> #   Sistema Operativo: Debian  #
>> #Caracas, Venezuela  #
>> #
>>
>
>
> --
> #
> #   Sistema Operativo: Debian  #
> #Caracas, Venezuela  #
> #
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Slow transfer rate - MS Win 2019

2021-11-02 Thread Andras Horvai
1. AV exclusion did not solve the issue. Backup is still incredibly slow
from this client.

I forgot to mention that I am trying to back up a folder which is DFS
synchronized. I do not know if it matters or not.
During the backup I do not see high cpu or memory utilization on the client
caused by bacula-fd.
It looks like bacula-fd "cannot find out" what to send to the storage
daemon...  strange...
This is the first time I have had such an issue...


On Mon, Nov 1, 2021 at 11:19 PM Heitor Faria  wrote:

> "1. The windows 2019 server has the default Virus and Threat protection
> settings no 3rd party virus scanner is deployed on the server. Can you
> recommend what to check? How to exclude bacula-fd?"
>
> Consult your AV provider to find out.
>
> "2. This would be a bigger project in our case but sure I will consider
> this... "
>
> The 11.x version (odd) from bacula.org is also open source. Do not
> confuse it with the Bacula Enterprise Edition (even) versions.
>
> Rgds.
> --
> MSc Heitor Faria (Miami/USA)
> CEO Bacula LatAm
> mobile1: + 1 909 655-8971
> mobile2: + 55 61 98268-4220
>
> América Latina
> [ http://bacula.lat/]
>
>
>  Original Message 
> From: Andras Horvai 
> Sent: Monday, November 1, 2021 04:40 PM
> To: bacula-users 
> Subject: Re: [Bacula-users] Slow transfer rate - MS Win 2019
>
> Hello Heitor,
>
> How much is very slow? I think on a 1 Gigabit/s connection around 100
> Mbit/s is slow, considering that the network test between the sd and the client
> gives a 925.6 Mbit/s - 977.6 Mbit/s result
> (I used bacula to test it). The problem is that when it comes to a real
> backup it drops to around 100 Mbit/s.
> 1. The Windows 2019 server has the default Virus and Threat protection
> settings; no 3rd party virus scanner is deployed on the server. Can you
> recommend what to check? How can I exclude bacula-fd?
> 2. This would be a bigger project in our case but sure I will consider
> this...
> 3. Well this is something I can take into account as well
>
>
> Thanks,
>
> Andras
>
>
>
> On Sun, Oct 31, 2021 at 1:47 PM Heitor Faria  wrote:
>
>> Hello Andras,
>>
>> How much is very low?
>> Bacula only displays the transfer rate after the sw compression
>> reduction, which IMHO is very misleading since it is a "higher-the-better"
>> value. Worst decision ever.
>> You gotta divide the ReadBytes by the Job Duration to get the processed
>> data rate.
>> Anyway:
>>
>> 1. Please check for Windows AV presence and add bacula-fd as an
>> exception. Try again.
>> 2. Update Bacula to the latest 11.0.5 packages from bacula.org, which
>> include a few improvements.
>> 3. Ultimately, you can manually split the FileSet in the Bacula Community
>> edition to parallelize the workload and eventually achieve better
>> performance, especially across different Windows volumes.
>>
>> Rgds.
>> --
>> MSc Heitor Faria (Miami/USA)
>> CEO Bacula LatAm
>> mobile1: + 1 909 655-8971
>> mobile2: + 55 61 98268-4220
>>
>> América Latina
>> [ http://bacula.lat/]
>>
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Slow transfer rate - MS Win 2019

2021-11-01 Thread Andras Horvai
Hello Heitor,

How much is very slow? I think on a 1 Gigabit/s connection around 100
Mbit/s is slow, considering that the network test between the sd and the client
gives a 925.6 Mbit/s - 977.6 Mbit/s result
(I used bacula to test it). The problem is that when it comes to a real
backup it drops to around 100 Mbit/s.
1. The Windows 2019 server has the default Virus and Threat protection
settings; no 3rd party virus scanner is deployed on the server. Can you
recommend what to check? How can I exclude bacula-fd?
2. This would be a bigger project in our case but sure I will consider
this...
3. Well this is something I can take into account as well


Thanks,

Andras



On Sun, Oct 31, 2021 at 1:47 PM Heitor Faria  wrote:

> Hello Andras,
>
> How much is very low?
> Bacula only displays the transfer rate after the sw compression reduction,
> which IMHO is very misleading since it is a "higher-the-better" value. Worst
> decision ever.
> You gotta divide the ReadBytes by the Job Duration to get the processed
> data rate.
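
(A minimal sketch of that ReadBytes/Duration calculation in Python, using the
figures from the 30-Oct job report quoted later in this archive; with software
compression off, the result matches Bacula's own Rate line:)

fd_bytes = 323_645_624_433          # FD Bytes Written, from the job report
elapsed  = 7*3600 + 17*60 + 12      # "Elapsed time: 7 hours 17 mins 12 secs"
print(fd_bytes / elapsed / 1000)    # ~12337.8 -- the reported "Rate" in KB/s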
> Anyway:
>
> 1. Please check for Windows AV presence and add bacula-fd as an
> exception. Try again.
> 2. Update Bacula to the latest 11.0.5 packages from bacula.org, which
> include a few improvements.
> 3. Ultimately, you can manually split the FileSet in the Bacula Community
> edition to parallelize the workload and eventually achieve better
> performance, especially across different Windows volumes.
>
> Rgds.
> --
> MSc Heitor Faria (Miami/USA)
> CEO Bacula LatAm
> mobile1: + 1 909 655-8971
> mobile2: + 55 61 98268-4220
>
> América Latina
> [ http://bacula.lat/]
>
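
(On item 3 of Heitor's list above, a hedged sketch of what a split FileSet
could look like -- resource names and paths here are hypothetical, and running
the resulting jobs in parallel also requires raising Maximum Concurrent Jobs
on the Director, SD, and FD:)

FileSet {
  Name = "winsrv-fs-1"            # hypothetical name
  Include {
    Options { signature = MD5 }
    File = "C:/shares/groupA"     # hypothetical path (Bacula uses forward slashes on Windows)
  }
}

FileSet {
  Name = "winsrv-fs-2"
  Include {
    Options { signature = MD5 }
    File = "C:/shares/groupB"
  }
}

# One Job resource per FileSet; with concurrency allowed, the jobs run in parallel.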
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Slow transfer rate - MS Win 2019

2021-10-31 Thread Andras Horvai
Hi,

First of all, I just noticed I called you John, not Josh! Sorry for this!

Second, I turned off LSO and TCP/UDP offload on the client machine, but it
did not help. Statistics from the recent full backup:

  Client: "dc4-fd" 9.4.4 (28May19) Microsoft Standard
Edition (build 9200), 64-bit,Cross-compile,Win64
  FileSet:"dc4-fs" 2021-10-07 14:07:40
  Pool:   "ServersWeeklyFullFile" (From Job resource)
  Catalog:"MyCatalog" (From Client resource)
  Storage:"File" (From Pool resource)
  Scheduled time: 30-Oct-2021 10:11:24
  Start time: 30-Oct-2021 16:38:10
  End time:   30-Oct-2021 23:55:22
  Elapsed time:   7 hours 17 mins 12 secs
  Priority:   10
  FD Files Written:   683,088
  SD Files Written:   683,088
  FD Bytes Written:   323,645,624,433 (323.6 GB)
  SD Bytes Written:   323,794,732,581 (323.7 GB)
  Rate:   12337.8 KB/s
  Software Compression:   None
  Comm Line Compression:  None
  Snapshot/VSS:   yes
  Encryption: no
  Accurate:   no

We have a 1 Gbps connection between the bacula sd and the client machine.
The network test from the storage shows this:

Running network test between Client=dc4-fd and Storage=File with 52.42 MB
...
2000 OK bytes=52428800 duration=429ms write_speed=122.2 MB/s
2000 OK bytes=52428800 duration=453ms read_speed=115.7 MB/s

This shows that there is a 1 Gbps connection between the storage and the
client.
Where is the bottleneck?
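
(Putting those numbers side by side -- a rough Python sketch, with figures
taken from this message and the job report above:)

link     = 125_000_000      # 1 Gb/s expressed as bytes/s
net_test = 122_200_000      # write_speed=122.2 MB/s from the network test
job_rate =  12_337_800      # Rate: 12337.8 KB/s from the job report
print(net_test / link)      # ~0.98 -- the test nearly saturates the wire
print(job_rate / link)      # ~0.10 -- the real job uses about 10% of it

So the wire itself is not the limit; the suspect is whatever feeds it on the
client side (file scan, VSS, antivirus, or disk), which is where the rest of
this thread points.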

Thanks,

Andras


On Fri, Oct 29, 2021 at 4:53 PM Andras Horvai 
wrote:

> Hi John!
>
> Thanks for the answer, sorry for the long delay.
> We do not use encryption or compression. We back up to
> file storage. The storage device is directly attached to the bacula-sd
> (the director and sd are on the same machine).
> SD is on the same network (gigabit LAN) as the client.
> How can I debug the Windows file daemon?
>
> Thanks,
>
> Andras
>
>
> On Thu, Oct 21, 2021 at 3:16 PM Josh Fisher  wrote:
>
>> On 10/16/21 17:37, Andras Horvai wrote:
>>
>> Dear List,
>>
>> Recently I faced the following problem, which I cannot solve:
>>
>> I would like to back up a file server. It is a physical machine with
>> the Windows 2019 operating system.
>> This bacula sd is on the same vlan as this server (the sd and dir are on the
>> same server).
>> The issue is that we have a very, very low transfer rate.
>> The Windows file server and bacula both have 1 Gb/s interfaces, and despite
>> this we have:
>> a 9452.2 KB/s rate, which is about 74 Mb/s transfer rate. We are using 9.4.4
>> as director and sd.
>> The file daemon version is:
>> 9.4.4 (28May19) Microsoft Standard Edition (build 9200),
>> 64-bit,Cross-compile,Win64
>> I tried to set "Maximum Network Buffer Size = 32768" on the fd and sd,
>> but it did not help.
>>
>>
>> Both compression and encryption of data occur on the client. Try a job
>> with no compression and no encryption. It is possible that the compression
>> and/or encryption is slowing the Windows machine.
>>
>> If you are writing to tape, then make sure that data spooling is enabled.
>> Otherwise, if the data rate from the Windows server isn't fast enough, then
>> the tape drive will be constantly rewinding to position its read/write
>> head. Tape drives require a certain data rate to prevent this, and spooling
>> will fix it.
>>
>> If the storage device for the backup volumes is not directly attached to
>> the server that bacula sd runs on and is on the same physical lan as sd and
>> client, then data spooling should be enabled to reduce network contention.
>>
>>
>>
>> If you have any ideas, please share them with me.
>>
>> Thanks,
>>
>> Andras
>>
>>
>> ___
>> Bacula-users mailing 
>> listBacula-users@lists.sourceforge.nethttps://lists.sourceforge.net/lists/listinfo/bacula-users
>>
>> ___
>> Bacula-users mailing list
>> Bacula-users@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/bacula-users
>>
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Slow transfer rate - MS Win 2019

2021-10-29 Thread Andras Horvai
Hi John!

Thanks for the answer, sorry for the long delay.
We do not use encryption or compression. We back up to file storage.
The storage device is directly attached to the bacula-sd (the director and sd
are on the same machine).
SD is on the same network (gigabit LAN) as the client.
How can I debug the Windows file daemon?

Thanks,

Andras


On Thu, Oct 21, 2021 at 3:16 PM Josh Fisher  wrote:

> On 10/16/21 17:37, Andras Horvai wrote:
>
> Dear List,
>
> Recently I faced the following problem, which I cannot solve:
>
> I would like to back up a file server. It is a physical machine with
> the Windows 2019 operating system.
> This bacula sd is on the same vlan as this server (the sd and dir are on the
> same server).
> The issue is that we have a very, very low transfer rate.
> The Windows file server and bacula both have 1 Gb/s interfaces, and despite
> this we have:
> a 9452.2 KB/s rate, which is about 74 Mb/s transfer rate. We are using 9.4.4
> as director and sd.
> The file daemon version is:
> 9.4.4 (28May19) Microsoft Standard Edition (build 9200),
> 64-bit,Cross-compile,Win64
> I tried to set "Maximum Network Buffer Size = 32768" on the fd and sd,
> but it did not help.
>
>
> Both compression and encryption of data occur on the client. Try a job
> with no compression and no encryption. It is possible that the compression
> and/or encryption is slowing the Windows machine.
>
> If you are writing to tape, then make sure that data spooling is enabled.
> Otherwise, if the data rate from the Windows server isn't fast enough, then
> the tape drive will be constantly rewinding to position its read/write
> head. Tape drives require a certain data rate to prevent this, and spooling
> will fix it.
>
> If the storage device for the backup volumes is not directly attached to
> the server that bacula sd runs on and is on the same physical lan as sd and
> client, then data spooling should be enabled to reduce network contention.
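
(A minimal sketch of that setting, assuming the standard "Spool Data"
Job-resource directive from the Bacula documentation; the job name is
hypothetical and the rest of the Job resource is left out:)

Job {
  Name = "srv1-job"        # hypothetical -- add the directive to the existing Job
  Spool Data = yes         # stage the job on SD disk first, then stream to tape
}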
>
>
>
> If you have any ideas, please share them with me.
>
> Thanks,
>
> Andras
>
>
> ___
> Bacula-users mailing 
> listBacula-users@lists.sourceforge.nethttps://lists.sourceforge.net/lists/listinfo/bacula-users
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Slow transfer rate - MS Win 2019

2021-10-16 Thread Andras Horvai
Dear List,

Recently I faced the following problem, which I cannot solve:

I would like to back up a file server. It is a physical machine with the
Windows 2019 operating system.
This bacula sd is on the same vlan as this server (the sd and dir are on the
same server).
The issue is that we have a very, very low transfer rate.
The Windows file server and bacula both have 1 Gb/s interfaces, and despite
this we have:
a 9452.2 KB/s rate, which is about 74 Mb/s transfer rate. We are using 9.4.4
as director and sd.
The file daemon version is:
9.4.4 (28May19) Microsoft Standard Edition (build 9200),
64-bit,Cross-compile,Win64
I tried to set "Maximum Network Buffer Size = 32768" on the fd and sd, but
it did not help.

If you have any ideas, please share them with me.

Thanks,

Andras
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] binaries of 9.4.4 ?? rpm for CentOs

2019-06-25 Thread Andras Horvai
Hi,

I see that debs have also been uploaded for Debian Jessie and Stretch.
Can I kindly ask when Ubuntu bionic will be available?

Thanks,

Andras

On Tue, Jun 25, 2019 at 11:35 AM Davide Franco  wrote:

> Hello Olivier,
>
> Bacula 9.4.4 rpm packages are now available for rhel/centos 6 & 7.
>
> More platforms will come soon.
>
> Best regards
>
> Davide
>
> On Fri, 21 Jun 2019 at 07:51, Olivier Delestre <
> olivier.deles...@univ-rouen.fr> wrote:
>
>> Hi,
>>
>> No news for the binaries of 9.4.4 (bug fix release) on
>> https://bacula.org/packages/x ?
>>
>> Today, I use the rpms for CentOS with bacula 9.4.3 (for test, and
>> 9.2.2 for production).
>>
>> Thanks again
>> Bye
>>
>>
>> ___
>> Bacula-users mailing list
>> Bacula-users@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/bacula-users
>>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula director error messages

2019-05-09 Thread Andras Horvai
hi,

anybody, any idea regarding this error? Why was the termination status of the
previous job *** Copying Error ***?

Thanks,

Andras

On Wed, May 8, 2019 at 4:11 PM Andras Horvai 
wrote:

> you got the point, here is another error message:
>
> 06-May 12:01 backup2-dir JobId 1038: Using Device "LTO-6" to write.
> 06-May 12:07 backup2-sd JobId 1038: [SI0202] End of Volume "WORMW-1181" at
> 1424:28456 on device "LTO-6" (/dev/nst0). Write of 64512 bytes got -1.
> 06-May 12:07 backup2-sd JobId 1038: Re-read of last block succeeded.
> 06-May 12:07 backup2-sd JobId 1038: End of medium on Volume "WORMW-1181"
> Bytes=2,820,363,420,672 Blocks=43,718,430 at 06-May-2019 12:07.
> 06-May 12:08 backup2-dir JobId 1038: Created new Volume="WORMW-1182",
> Pool="TapeArchive", MediaType="LTO-6" in catalog.
> 06-May 12:08 backup2-sd JobId 1038: Please mount append Volume
> "WORMW-1182" or label a new one for:
> Job:  srv1-job.2019-05-06_09.00.01_21
> Storage:  "LTO-6" (/dev/nst0)
> Pool: TapeArchive
> Media type:   LTO-6
> 06-May 12:46 backup2-sd JobId 1038: Error: [SE0203] The Volume=WORMW-1182
> on device="LTO-6" (/dev/nst0) appears to be unlabeled.
> 06-May 12:47 backup2-sd JobId 1038: Labeled new Volume "WORMW-1182" on
> Tape device "LTO-6" (/dev/nst0).
> 06-May 12:47 backup2-sd JobId 1038: Wrote label to prelabeled Volume
> "WORMW-1182" on Tape device "LTO-6" (/dev/nst0)
> 06-May 12:47 backup2-sd JobId 1038: New volume "WORMW-1182" mounted on
> device "LTO-6" (/dev/nst0) at 06-May-2019 12:47.
> 06-May 12:56 backup2-sd JobId 1038: Fatal error: append.c:170 Error
> reading data header from FD. n=-2 msglen=0 ERR=Connection reset by peer
> 06-May 12:56 backup2-sd JobId 1038: Elapsed time=00:14:48, Transfer
> rate=68.06 M Bytes/second
> 06-May 12:56 backup2-sd JobId 1038: Sending spooled attrs to the Director.
> Despooling 27,981,780 bytes ...
>
> so why did I get a Connection reset by peer message? The SD, FD, and Director
> are on the same machine (in the case of Copy jobs).
>
> Thanks,
> Andras
>
> On Wed, May 8, 2019 at 3:10 PM Martin Simmons 
> wrote:
>
>> That looks clean.
>>
>> Are there any messages for the "New Backup JobId" (1038)?  I find them
>> printed
>> after the "Termination:" line for the copy job.
>>
>> __Martin
>>
>>
>> >>>>> On Wed, 8 May 2019 14:32:31 +0200, Andras Horvai said:
>> >
>> > hi,
>> >
>> > here is the snipped part: :)
>> >
>> > 06-May 09:00 backup2-dir JobId 1037: Copying using JobId=1016
>> > Job=srv1-job.2019-05-04_02.00.00_59
>> > 06-May 12:01 backup2-dir JobId 1037: Start Copying JobId 1037,
>> > Job=ArchiveJob.2019-05-06_09.00.01_20
>> > 06-May 12:01 backup2-dir JobId 1037: Using Device "FileStorage" to read.
>> > 06-May 12:01 backup2-sd JobId 1037: Ready to read from volume
>> "FILEW-1006"
>> > on File device "FileStorage" (/backup).
>> > 06-May 12:01 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1006"
>> to
>> > addr=531212699
>> > 06-May 12:01 backup2-sd JobId 1037: End of Volume "FILEW-1006" at
>> > addr=2147431799 on device "FileStorage" (/backup).
>> > 06-May 12:01 backup2-sd JobId 1037: Ready to read from volume
>> "FILEW-1007"
>> > on File device "FileStorage" (/backup).
>> > 06-May 12:01 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1007"
>> to
>> > addr=238
>> > 06-May 12:02 backup2-sd JobId 1037: End of Volume "FILEW-1007" at
>> > addr=2147475513 on device "FileStorage" (/backup).
>> > 06-May 12:02 backup2-sd JobId 1037: Ready to read from volume
>> "FILEW-1008"
>> > on File device "FileStorage" (/backup).
>> > 06-May 12:02 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1008"
>> to
>> > addr=238
>> > 06-May 12:02 backup2-sd JobId 1037: End of Volume "FILEW-1008" at
>> > addr=2147475637 on device "FileStorage" (/backup).
>> > 06-May 12:02 backup2-sd JobId 1037: Ready to read from volume
>> "FILEW-1009"
>> > on File device "FileStorage" (/backup).
>> > 06-May 12:02 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1009"
>> to
>> > addr=238
>> > 06-May 12:03 backup2-sd JobId 1037: End of Volume "FILEW-1009" at
>> > ad

Re: [Bacula-users] Bacula director error messages

2019-05-08 Thread Andras Horvai
you got the point, here is another error message:

06-May 12:01 backup2-dir JobId 1038: Using Device "LTO-6" to write.
06-May 12:07 backup2-sd JobId 1038: [SI0202] End of Volume "WORMW-1181" at
1424:28456 on device "LTO-6" (/dev/nst0). Write of 64512 bytes got -1.
06-May 12:07 backup2-sd JobId 1038: Re-read of last block succeeded.
06-May 12:07 backup2-sd JobId 1038: End of medium on Volume "WORMW-1181"
Bytes=2,820,363,420,672 Blocks=43,718,430 at 06-May-2019 12:07.
06-May 12:08 backup2-dir JobId 1038: Created new Volume="WORMW-1182",
Pool="TapeArchive", MediaType="LTO-6" in catalog.
06-May 12:08 backup2-sd JobId 1038: Please mount append Volume "WORMW-1182"
or label a new one for:
Job:  srv1-job.2019-05-06_09.00.01_21
Storage:  "LTO-6" (/dev/nst0)
Pool: TapeArchive
Media type:   LTO-6
06-May 12:46 backup2-sd JobId 1038: Error: [SE0203] The Volume=WORMW-1182
on device="LTO-6" (/dev/nst0) appears to be unlabeled.
06-May 12:47 backup2-sd JobId 1038: Labeled new Volume "WORMW-1182" on Tape
device "LTO-6" (/dev/nst0).
06-May 12:47 backup2-sd JobId 1038: Wrote label to prelabeled Volume
"WORMW-1182" on Tape device "LTO-6" (/dev/nst0)
06-May 12:47 backup2-sd JobId 1038: New volume "WORMW-1182" mounted on
device "LTO-6" (/dev/nst0) at 06-May-2019 12:47.
06-May 12:56 backup2-sd JobId 1038: Fatal error: append.c:170 Error reading
data header from FD. n=-2 msglen=0 ERR=Connection reset by peer
06-May 12:56 backup2-sd JobId 1038: Elapsed time=00:14:48, Transfer
rate=68.06 M Bytes/second
06-May 12:56 backup2-sd JobId 1038: Sending spooled attrs to the Director.
Despooling 27,981,780 bytes ...

so why did I get a Connection reset by peer message? The SD, FD, and Director
are on the same machine (in the case of Copy jobs).

Thanks,
Andras

On Wed, May 8, 2019 at 3:10 PM Martin Simmons  wrote:

> That looks clean.
>
> Are there any messages for the "New Backup JobId" (1038)?  I find them
> printed
> after the "Termination:" line for the copy job.
>
> __Martin
>
>
> >>>>> On Wed, 8 May 2019 14:32:31 +0200, Andras Horvai said:
> >
> > hi,
> >
> > here is the snipped part: :)
> >
> > 06-May 09:00 backup2-dir JobId 1037: Copying using JobId=1016
> > Job=srv1-job.2019-05-04_02.00.00_59
> > 06-May 12:01 backup2-dir JobId 1037: Start Copying JobId 1037,
> > Job=ArchiveJob.2019-05-06_09.00.01_20
> > 06-May 12:01 backup2-dir JobId 1037: Using Device "FileStorage" to read.
> > 06-May 12:01 backup2-sd JobId 1037: Ready to read from volume
> "FILEW-1006"
> > on File device "FileStorage" (/backup).
> > 06-May 12:01 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1006"
> to
> > addr=531212699
> > 06-May 12:01 backup2-sd JobId 1037: End of Volume "FILEW-1006" at
> > addr=2147431799 on device "FileStorage" (/backup).
> > 06-May 12:01 backup2-sd JobId 1037: Ready to read from volume
> "FILEW-1007"
> > on File device "FileStorage" (/backup).
> > 06-May 12:01 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1007"
> to
> > addr=238
> > 06-May 12:02 backup2-sd JobId 1037: End of Volume "FILEW-1007" at
> > addr=2147475513 on device "FileStorage" (/backup).
> > 06-May 12:02 backup2-sd JobId 1037: Ready to read from volume
> "FILEW-1008"
> > on File device "FileStorage" (/backup).
> > 06-May 12:02 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1008"
> to
> > addr=238
> > 06-May 12:02 backup2-sd JobId 1037: End of Volume "FILEW-1008" at
> > addr=2147475637 on device "FileStorage" (/backup).
> > 06-May 12:02 backup2-sd JobId 1037: Ready to read from volume
> "FILEW-1009"
> > on File device "FileStorage" (/backup).
> > 06-May 12:02 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1009"
> to
> > addr=238
> > 06-May 12:03 backup2-sd JobId 1037: End of Volume "FILEW-1009" at
> > addr=2147475644 on device "FileStorage" (/backup).
> > 06-May 12:03 backup2-sd JobId 1037: Ready to read from volume
> "FILEW-1010"
> > on File device "FileStorage" (/backup).
> > 06-May 12:03 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1010"
> to
> > addr=238
> > 06-May 12:03 backup2-sd JobId 1037: End of Volume "FILEW-1010" at
> > addr=2147475667 on device "FileStorage" (/backup).
> > 06-May 12:03 backup2-sd JobId 1037: Ready to read from volume
> "FILEW-1011"
> 

Re: [Bacula-users] Bacula director error messages

2019-05-08 Thread Andras Horvai
End of Volume "FILEW-1018" at
addr=2147475651 on device "FileStorage" (/backup).
06-May 12:47 backup2-sd JobId 1037: Ready to read from volume "FILEW-1019"
on File device "FileStorage" (/backup).
06-May 12:47 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1019" to
addr=238
06-May 12:48 backup2-sd JobId 1037: End of Volume "FILEW-1019" at
addr=2147475672 on device "FileStorage" (/backup).
06-May 12:48 backup2-sd JobId 1037: Ready to read from volume "FILEW-1020"
on File device "FileStorage" (/backup).
06-May 12:48 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1020" to
addr=238
06-May 12:49 backup2-sd JobId 1037: End of Volume "FILEW-1020" at
addr=2147475677 on device "FileStorage" (/backup).
06-May 12:49 backup2-sd JobId 1037: Ready to read from volume "FILEW-1021"
on File device "FileStorage" (/backup).
06-May 12:49 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1021" to
addr=238
06-May 12:49 backup2-sd JobId 1037: End of Volume "FILEW-1021" at
addr=2147475654 on device "FileStorage" (/backup).
06-May 12:49 backup2-sd JobId 1037: Ready to read from volume "FILEW-1022"
on File device "FileStorage" (/backup).
06-May 12:49 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1022" to
addr=238
06-May 12:50 backup2-sd JobId 1037: End of Volume "FILEW-1022" at
addr=2147475671 on device "FileStorage" (/backup).
06-May 12:50 backup2-sd JobId 1037: Ready to read from volume "FILEW-1023"
on File device "FileStorage" (/backup).
06-May 12:50 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1023" to
addr=238
06-May 12:50 backup2-sd JobId 1037: End of Volume "FILEW-1023" at
addr=2147475674 on device "FileStorage" (/backup).
06-May 12:50 backup2-sd JobId 1037: Ready to read from volume "FILEW-1024"
on File device "FileStorage" (/backup).
06-May 12:50 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1024" to
addr=238
06-May 12:51 backup2-sd JobId 1037: End of Volume "FILEW-1024" at
addr=2147475677 on device "FileStorage" (/backup).
06-May 12:51 backup2-sd JobId 1037: Ready to read from volume "FILEW-1025"
on File device "FileStorage" (/backup).
06-May 12:51 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1025" to
addr=238
06-May 12:51 backup2-sd JobId 1037: End of Volume "FILEW-1025" at
addr=2147475654 on device "FileStorage" (/backup).
06-May 12:51 backup2-sd JobId 1037: Ready to read from volume "FILEW-1026"
on File device "FileStorage" (/backup).
06-May 12:51 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1026" to
addr=238
06-May 12:52 backup2-sd JobId 1037: End of Volume "FILEW-1026" at
addr=2147475694 on device "FileStorage" (/backup).
06-May 12:52 backup2-sd JobId 1037: Ready to read from volume "FILEW-1027"
on File device "FileStorage" (/backup).
06-May 12:52 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1027" to
addr=238
06-May 12:53 backup2-sd JobId 1037: End of Volume "FILEW-1027" at
addr=2147475665 on device "FileStorage" (/backup).
06-May 12:53 backup2-sd JobId 1037: Ready to read from volume "FILEW-1028"
on File device "FileStorage" (/backup).
06-May 12:53 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1028" to
addr=238
06-May 12:53 backup2-sd JobId 1037: End of Volume "FILEW-1028" at
addr=2147475650 on device "FileStorage" (/backup).
06-May 12:53 backup2-sd JobId 1037: Ready to read from volume "FILEW-1029"
on File device "FileStorage" (/backup).
06-May 12:53 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1029" to
addr=238
06-May 12:54 backup2-sd JobId 1037: End of Volume "FILEW-1029" at
addr=2147475649 on device "FileStorage" (/backup).
06-May 12:54 backup2-sd JobId 1037: Ready to read from volume "FILEW-1030"
on File device "FileStorage" (/backup).
06-May 12:54 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1030" to
addr=238
06-May 12:54 backup2-sd JobId 1037: End of Volume "FILEW-1030" at
addr=2147475651 on device "FileStorage" (/backup).
06-May 12:54 backup2-sd JobId 1037: Ready to read from volume "FILEW-1031"
on File device "FileStorage" (/backup).
06-May 12:54 backup2-sd JobId 1037: Forward spacing Volume "FILEW-1031" to
addr=238
06-May 12:55 backup2-sd JobId 1037: End of Volume "FILEW-1031" at
addr=2147475636 on device "FileStorage" (/backup).
06-May 12:55 backup2-sd JobId 1037: Ready to read from volume "FILEW-1032"
on File device "FileStorage" (/backup).
06-May 12:55 backup2

[Bacula-users] Bacula director error messages

2019-05-06 Thread Andras Horvai
Dear List,

does anyone have any clue why I got a director error after a tape change? (I
am using a copy job to copy full backups from file storage to tape.)

Tape change message:

Subject: Bacula: Intervention needed for srv1-job.2019-05-06_09.00.01_21

06-May 12:08 backup2-sd JobId 1038: Please mount append Volume "WORMW-1182"
or label a new one for:
Job:  srv1-job.2019-05-06_09.00.01_21
Storage:  "LTO-6" (/dev/nst0)
Pool: TapeArchive
Media type:   LTO-6


Then comes this:


06-May 09:00 backup2-dir JobId 1037: Copying using JobId=1016
Job=srv1-job.2019-05-04_02.00.00_59
06-May 12:01 backup2-dir JobId 1037: Start Copying JobId 1037,
Job=ArchiveJob.2019-05-06_09.00.01_20
06-May 12:01 backup2-dir JobId 1037: Using Device "FileStorage" to read.

...snip...


06-May 12:56 backup2-sd JobId 1037: End of Volume "FILEW-1034" at
addr=895179674 on device "FileStorage" (/backup).
06-May 12:56 backup2-sd JobId 1037: Elapsed time=00:54:40, Transfer
rate=18.42 M Bytes/second
06-May 12:56 backup2-dir JobId 1037: Error: Bacula backup2-dir 9.4.1
(20Dec18):
  Build OS:   x86_64-pc-linux-gnu ubuntu 18.04
  Prev Backup JobId:  1016
  Prev Backup Job:srv1-job.2019-05-04_02.00.00_59
  New Backup JobId:   1038
  Current JobId:  1037
  Current Job:ArchiveJob.2019-05-06_09.00.01_20
  Backup Level:   Full
  Client: Archiver
  FileSet:"None" 2019-02-02 23:14:56
  Read Pool:  "ServersWeeklyFullFile" (From Command input)
  Read Storage:   "File" (From Pool resource)
  Write Pool: "TapeArchive" (From Command input)
  Write Storage:  "LTO-6" (From Command input)
  Catalog:"MyCatalog" (From Client resource)
  Start time: 06-May-2019 12:01:27
  End time:   06-May-2019 12:56:13
  Elapsed time:   54 mins 46 secs
  Priority:   13
  SD Files Written:   104,642
  SD Bytes Written:   60,444,227,085 (60.44 GB)
  Rate:   18394.5 KB/s
  Volume name(s): WORMW-1181|WORMW-1182
  Volume Session Id:  728
  Volume Session Time:1551107460
  Last Volume Bytes:  34,658,297,856 (34.65 GB)
  SD Errors:  0
  SD termination status:  OK
  Termination:*** Copying Error ***


Thanks for clarification!

Andras
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] restore problems

2019-02-20 Thread Andras Horvai
Dear Chandler,

thank you for your email.

Job log of JobID 244:

19-Feb 15:04 backup2-dir JobId 244: Start Restore Job
RestoreFiles.2019-02-19_15.03.58_47
19-Feb 15:04 backup2-dir JobId 244: Using Device "LTO-6" to read.
19-Feb 15:05 backup2-sd JobId 244: Ready to read from volume "RCWORMW-0004"
on Tape device "LTO-6" (/dev/IBMtape0n).
19-Feb 15:05 backup2-sd JobId 244: Forward spacing Volume "RCWORMW-0004" to
addr=109:0
19-Feb 15:06 backup2-sd JobId 244: Elapsed time=00:00:32, Transfer rate=0
Bytes/second
19-Feb 15:06 backup2-dir JobId 244: Bacula backup2-dir 9.4.1 (20Dec18):
  Build OS:   x86_64-pc-linux-gnu ubuntu 18.04
  JobId:  244
  Job:RestoreFiles.2019-02-19_15.03.58_47
  Restore Client: backup2-fd
  Where:  /var/lib/bacula/bacula-restores
  Replace:Always
  Start time: 19-Feb-2019 15:04:00
  End time:   19-Feb-2019 15:06:18
  Elapsed time:   2 mins 18 secs
  Files Expected: 9
  Files Restored: 0
  Bytes Restored: 0 (0 B)
  Rate:   0.0 KB/s
  FD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Restore OK -- warning file count mismatch

19-Feb 15:06 backup2-dir JobId 244: Begin pruning Jobs older than 12 months
.
19-Feb 15:06 backup2-dir JobId 244: No Jobs found to prune.
19-Feb 15:06 backup2-dir JobId 244: Begin pruning Files.
19-Feb 15:06 backup2-dir JobId 244: No Files found to prune.
19-Feb 15:06 backup2-dir JobId 244: End auto prune.

This is all.

The steps I did:

I checked which jobs did a backup on the RCWORMW-0004 volume; I used the query
command in bconsole (option 13, and then I typed the MediaID).
There I saw that jobID 222 was what I needed:

+-------+-------------+---------------------+------+-------+----------+-----------------+-----------+
| jobid | name        | starttime           | type | level | jobfiles | jobbytes        | jobstatus |
+-------+-------------+---------------------+------+-------+----------+-----------------+-----------+
|   222 | server1-job | 2019-02-16 06:46:15 | C    | F     |    4,747 | 947,734,512,306 | T         |
+-------+-------------+---------------------+------+-------+----------+-----------------+-----------+

Then I ran restore in bconsole: I chose option 3, typed 222, then did mark *
(I tried to pick only one file, or even all 9 files; it does not matter),
entered done, then chose the file daemon (the backup server's file daemon) to
restore files to. Everything was accepted, but the result is very
disappointing: 0 files were restored.

Meanwhile I tried bls, and I can see the jobs on the tape. So now I am trying
bextract to restore files using a file list. Here is the bextract command I
use:

 /opt/bacula/bin/bextract -v -V RCWORMW-0004 -i /root/inc-list.txt
/dev/IBMtape0n ./

inc-list.txt contains this:
d-2019-02-05.02-31.sql.gz
d-2019-02-06.02-15.sql.gz
d-2019-02-07.03-25.sql.gz
d-2019-02-07.23-19.sql.gz
d-2019-02-10.13-53.sql.gz
d-2019-02-12.02-15.sql.gz
d-2019-02-13.02-15.sql.gz
d-2019-02-14.02-16.sql.gz
d-2019-02-15.02-16.sql.gz
d-2019-02-16.05-32.sql.gz

(each file is about 90 GB)

so far the bextract output is:
20-Feb 11:55 bextract JobId 0: Ready to read from volume "RCWORMW-0004" on
Tape device "LTO-6" (/dev/IBMtape0n).


Currently it has been running for 3 hours, but there is nothing in the
destination directory. I am a bit worried that I have lost my backups...
because I cannot restore them :(.

Thanks for help,

Andras




On Wed, Feb 20, 2019 at 12:50 AM Chandler  wrote:

> Kindly, if you could share the job log for the restore JobId 244, it may
> have some error/warning messages.  Also tell us the complete steps you
> are using when running the restore, it will help us.  Thanks
>
>
> ___
> Bacula-users mailing list
> Bacula-users@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/bacula-users
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] restore problems

2019-02-19 Thread Andras Horvai
Hi Bacula gurus,

recently we bought an LTO-6 tape drive. This is an IBM 7226 LTO-6.
It has the latest firmware, J451. Currently we are using bacula 9.4.1 on
Ubuntu 18.04 bionic.
We installed the IBM LTO-6 tape driver on linux: lin_tape. The lin_taped
daemon is also running.
Bacula is using the /dev/IBMtape0n device to reach the LTO-6 tape. It looks
like backups are
working fine on WORM media, but when I wanted to restore something from a
WORM medium (remember, the backup was fine) I got the following message, and
obviously, as the message says, 0 files were restored:
(I am trying to restore a copy job here, specified directly by job id)

19-Feb 15:06 backup2-dir JobId 244: Bacula backup2-dir 9.4.1 (20Dec18):
  Build OS:   x86_64-pc-linux-gnu ubuntu 18.04
  JobId:  244
  Job:RestoreFiles.2019-02-19_15.03.58_47
  Restore Client: backup2-fd
  Where:  /var/lib/bacula/bacula-restores
  Replace:Always
  Start time: 19-Feb-2019 15:04:00
  End time:   19-Feb-2019 15:06:18
  Elapsed time:   2 mins 18 secs
  Files Expected: 9
  Files Restored: 0
  Bytes Restored: 0 (0 B)
  Rate:   0.0 KB/s
  FD Errors:  0
  FD termination status:  OK
  SD termination status:  OK
  Termination:Restore OK -- warning file count mismatch

Look at the backup result:

  Build OS:   x86_64-pc-linux-gnu ubuntu 18.04
  Prev Backup JobId:  201
  Prev Backup Job:db1-job.2019-02-16_02.00.00_56
  New Backup JobId:   222
  Current JobId:  221
  Current Job:ArchiveJob.2019-02-18_11.21.35_19
  Backup Level:   Full
  Client: Archiver
  FileSet:"None" 2019-02-02 23:14:56
  Read Pool:  "ServersWeeklyFullFile" (From Command input)
  Read Storage:   "File" (From Pool resource)
  Write Pool: "TapeArchive" (From Command input)
  Write Storage:  "LTO-6" (From Command input)
  Catalog:"MyCatalog" (From Client resource)
  Start time: 18-Feb-2019 12:12:24
  End time:   18-Feb-2019 16:11:08
  Elapsed time:   3 hours 58 mins 44 secs
  Priority:   13
  SD Files Written:   4,747
  SD Bytes Written:   947,734,512,306 (947.7 GB)
  Rate:   66164.1 KB/s
  Volume name(s): RCWORMW-0004
  Volume Session Id:  61
  Volume Session Time:1549922956
  Last Volume Bytes:  1,156,597,069,824 (1.156 TB)
  SD Errors:  0
  SD termination status:  OK
  Termination:Copying OK




Do you have any idea why I cannot restore the files from a copy job from
the LTO tape?

Thanks,

Andras
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] copy jobs (tape jobs fails) when tape is not labelled previously

2019-02-03 Thread Andras Horvai
Dear Bacula experts,

I am using the latest Bacula opensource version:
backup2-dir Version: 9.4.1 (20 December 2018) x86_64-pc-linux-gnu ubuntu
18.04

My experience is that when I run a copy job (copying volumes from file storage
to tape storage) while the tape is not mounted and labelled, and bacula is
waiting for the tape to be mounted and labelled, the
job fails immediately after I label the tape (and it gets mounted). (The File
daemon and Storage daemon (and in this case the director too) are on the same
machine!)
Do you have any idea what I did wrong?

03-Feb 21:52 backup2-sd JobId 26: Wrote label to prelabeled Volume
"RCWOROMW-0004" on Tape device "LTO-6" (/dev/nst0)
03-Feb 21:52 backup2-sd JobId 26: Error: bsock.c:383 Write error sending 17
bytes to Storage daemon:backup2:9103: ERR=Broken pipe
03-Feb 21:52 backup2-sd JobId 26: Fatal error: append.c:124 Network send
error to FD. ERR=Broken pipe
03-Feb 21:52 backup2-sd JobId 26: Elapsed time=00:00:01, Transfer rate=0
Bytes/second
*
You have messages.
*
*mess
03-Feb 21:52 backup2-dir JobId 25: Error: Bacula backup2-dir 9.4.1
(20Dec18):
  Build OS:   x86_64-pc-linux-gnu ubuntu 18.04
  Prev Backup JobId:  24
  Prev Backup Job:backup2-job.2019-02-03_21.50.02_03
  New Backup JobId:   26
  Current JobId:  25
  Current Job:ArchiveJob.2019-02-03_21.50.28_04
  Backup Level:   Full
  Client: Archiver
  FileSet:"None" 2019-02-02 23:14:56
  Read Pool:  "ServersWeeklyFullFile" (From Job resource)
  Read Storage:   "File" (From Pool resource)
  Write Pool: "TapeArchive" (From Job Pool's NextPool resource)
  Write Storage:  "LTO-6" (From Job Pool's NextPool resource)
  Catalog:"MyCatalog" (From Client resource)
  Start time: 03-Feb-2019 21:50:30
  End time:   03-Feb-2019 21:52:52
  Elapsed time:   2 mins 22 secs
  Priority:   13
  SD Files Written:   0
  SD Bytes Written:   0 (0 B)
  Rate:   0.0 KB/s
  Volume name(s): RCWOROMW-0004
  Volume Session Id:  2
  Volume Session Time:1549230584
  Last Volume Bytes:  64,512 (64.51 KB)
  SD Errors:  1
  SD termination status:  Error
  Termination:*** Copying Error ***

Thank you,

Andras
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] bacula copy job fails when tape is full and needs to change

2018-12-03 Thread Andras Horvai
Hi,

I know I am using an old version of bacula (7.0.2) and, as I mentioned, we
are working heavily to upgrade it to 9.0.8 (shipped with the latest ubuntu).
But I set up heartbeat and it did not help.

and the interesting thing is this (when I check the director status):

 JobId  Level     Files      Bytes   Status   Finished         Name

  4960   Diff    300,927    203.2 G   Error    03-Dec-18 17:42  server-job
  4959   Full    300,927    203.2 G   Error    03-Dec-18 17:42  Archive
  4962   Diff    300,927    203.2 G   OK       03-Dec-18 18:36  server-job
  4961   Full    300,927    203.2 G   OK       03-Dec-18 18:36  Archive


so when I ran the Archive job it started job 4959 and hooked 4960 to do the
copy from local disk (full backup jobs) to tape (everything is local).
But the tape got full so we needed to replace it... and then the job failed...
it seems like jobs cannot span across tapes.
Then with a new tape (where we did not have to replace the tape) the job was fine.
I think this is the key in the log, but unfortunately I have no clue how to
solve it:

03-Dec 17:06 backup-sd JobId 4960: Error: The Volume=WORMW-1246 on
device="LTO-4" (/dev/nst0) appears to be unlabeled.
03-Dec 17:07 backup-sd JobId 4960: Labeled new Volume "WORMW-1246" on tape
device "LTO-4" (/dev/nst0).
03-Dec 17:07 backup-sd JobId 4960: Wrote label to prelabeled Volume
"WORMW-1246" on tape device "LTO-4" (/dev/nst0)
03-Dec 17:07 backup-sd JobId 4960: New volume "RCWORMW-1246" mounted on
device "LTO-4" (/dev/nst0) at 03-Dec-2018 17:07.

03-Dec 17:41 backup-sd JobId 4960: Fatal error: append.c:149 Error reading
data header from FD. n=-2 msglen=0 ERR=Connection reset by peer
03-Dec 17:41 backup-sd JobId 4960: Elapsed time=00:46:48, Transfer rate=72.38 M
Bytes/second


and here it looks like the Archive job (4959) spans the tapes:

03-Dec 17:42 backup1 JobId 4959: Error: Bacula backup1 7.0.5 (28Jul14):
  Build OS:   x86_64-pc-linux-gnu ubuntu 16.04
  Prev Backup JobId:  4933
  Prev Backup Job:server-job.2018-12-01_02.00.00_42
  New Backup JobId:   4960
  Current JobId:  4959
  Current Job:Archive.2018-12-03_16.02.06_10
  Backup Level:   Full
  Client: None
  FileSet:"None" 2017-06-19 09:00:00
  Read Pool:  "ServersWeeklyFullFile" (From Job resource)
  Read Storage:   "File" (From Pool resource)
  Write Pool: "TapeArchive" (From Pool's NextPool resource)
  Write Storage:  "LTO-4" (From Pool's NextPool resource)
  Catalog:"MyCatalog" (From Client resource)
  Start time: 03-Dec-2018 16:02:09
  End time:   03-Dec-2018 17:42:00
  Elapsed time:   1 hour 39 mins 51 secs
  Priority:   13
  SD Files Written:   300,927
  SD Bytes Written:   203,268,700,917 (203.2 GB)
  Rate:   33929.0 KB/s

  Volume name(s): WORMW-1245|WORMW-1246
  Volume Session Id:  66
  Volume Session Time:1543248344
  Last Volume Bytes:  151,096,393,728 (151.0 GB)
  SD Errors:  0
  SD termination status:  OK
  Termination:    *** Copying Error ***

Any suggestions are welcome! Thanks for your help!

Andras


On Mon, Nov 26, 2018 at 12:25 PM Andras Horvai 
wrote:

> Thanks Kern! I will dig deeper into the documentation. First I will try
> setting up heartbeat in the SD's config!
>
> On Mon, Nov 26, 2018 at 11:33 AM Kern Sibbald  wrote:
>
>> Oh, there are at least five different places to set up Heart Beat Interval
>> (Dir, SD, and FD).  Unfortunately my memory is not good enough to remember
>> them all. Please ask others or see the documentation ...
>>
>> The easiest way is to get on a current version -- e.g. 9.2.2 where it is
>> done by default.
>>
>> Best regards,
>> Kern
>>
>> On 11/26/18 11:13 AM, Andras Horvai wrote:
>>
>> Hello Kern,
>>
>> Yes, you are right, I am using bacula 7.0.5 shipped with Ubuntu 16.04.
>> Where should I set up the heartbeat interval? In the SD's or FD's config? Or both?
>>
>> Thanks for your help!
>>
>> Andras
>>
>> On Mon, Nov 26, 2018 at 10:56 AM Kern Sibbald  wrote:
>>
>>> Hello,
>>>
>>> If I remember right you are running on a *very* old Bacula, and the
>>> problem seems to be that the backup takes more than 2 hours.  One of your
>>> comm lines (SD <-> FD) times out.  I mention your old version because newer
>>> Baculas automatically fix this problem by turning on Heart Beat Interval =
>>> 300, which is very likely to resolve your problem.
>>>
>>> Best regards,
>>> Kern
>>>
>>> On 11/26/1

Re: [Bacula-users] bacula copy job fails when tape is full and needs to change

2018-11-26 Thread Andras Horvai
Thanks Kern! I will dig deeper into the documentation. First I will try
setting up heartbeat in the SD's config!

On Mon, Nov 26, 2018 at 11:33 AM Kern Sibbald  wrote:

> Oh, there are at least five different places to set up Heart Beat Interval
> (Dir, SD, and FD).  Unfortunately my memory is not good enough to remember
> them all. Please ask others or see the documentation ...
>
> The easiest way is to get on a current version -- e.g. 9.2.2 where it is
> done by default.
>
> Best regards,
> Kern
>
> On 11/26/18 11:13 AM, Andras Horvai wrote:
>
> Hello Kern,
>
> Yes, you are right, I am using bacula 7.0.5 shipped with Ubuntu 16.04.
> Where should I set up the heartbeat interval? In the SD's or FD's config? Or both?
>
> Thanks for your help!
>
> Andras
>
> On Mon, Nov 26, 2018 at 10:56 AM Kern Sibbald  wrote:
>
>> Hello,
>>
>> If I remember right you are running on a *very* old Bacula, and the
>> problem seems to be that the backup takes more than 2 hours.  One of your
>> comm lines (SD <-> FD) times out.  I mention your old version because newer
>> Baculas automatically fix this problem by turning on Heart Beat Interval =
>> 300, which is very likely to resolve your problem.
>>
>> Best regards,
>> Kern
>>
>> On 11/26/18 10:34 AM, Andras Horvai wrote:
>>
>> Hi Tilman,
>>
>> thank you for your answer! But unfortunately a firewall cannot be the
>> problem here :)
>> The problem happens only with Copy Jobs. The SD and the FD are on the same
>> device. There is no firewall on the machine.
>> So what I am doing is the following:
>>
>> During the weekend I do a full backup with the backup server to file storage
>> on the backup server. Then, starting from Monday, I do a Copy Job from the
>> backup server to a Tape device connected to the backup server. This works
>> pretty well until the tape gets full. When the tape gets full, bacula asks
>> for another tape.
>> We replace the tape, so the job should continue (as expected), but then at
>> the end we get the job error... So I am puzzled about what is wrong.
>>
>> Please feel free to share your ideas...
>>
>> Thanks,
>>
>> Andras
>>
>> On Sun, Nov 25, 2018 at 10:28 PM Tilman Schmidt 
>>  wrote:
>>
>>> Hi Andras,
>>>
>>> is there a firewall between the client and the SD?
>>> The message
>>>
>>> > 20-Nov 12:25 backup-sd JobId 4845: Fatal error: append.c:223 Network
>>> error reading from FD. ERR=Connection reset by peer
>>>
>>> looks suspiciously like a firewall killing the FD - SD connection
>>> because it sees it as idle.
>>>
>>> HTH
>>> Tilman
>>>
>>> On 22.11.2018 at 16:04, Andras Horvai wrote:
>>> > Dear list,
>>> >
>>> > I have the following problem:
>>> > We use copy jobs to copy weekly full backups to WORM tape, but when a
>>> tape
>>> > gets full and needs to be changed, the copy job fails. Bacula says
>>> > intervention is
>>> > needed, so we put a new tape in the tape drive. What can be the problem?
>>> >
>>> > Copy job report:
>>> > 20-Nov 12:25 backup-sd JobId 4838: End of Volume at file 0 on device
>>> > "FileStorage" (/backup), Volume "FILEW-0542"
>>> > 20-Nov 12:25 backup-sd JobId 4838: End of all volumes.
>>> > 20-Nov 12:25 backup-sd JobId 4838: Elapsed time=02:45:29, Transfer
>>> > rate=42.14 M Bytes/second
>>> > 20-Nov 12:25 backup1 JobId 4838: Error: Bacula backup1 7.0.5 (28Jul14):
>>> >   Build OS:   x86_64-pc-linux-gnu ubuntu 16.04
>>> >   Prev Backup JobId:  4837
>>> >   Prev Backup Job:db1-job.2018-11-19_23.09.19_03
>>> >   New Backup JobId:   4845
>>> >   Current JobId:  4838
>>> >   Current Job:Archive.2018-11-20_07.59.53_05
>>> >   Backup Level:   Full
>>> >   Client: None
>>> >   FileSet:"None" 2017-06-19 09:00:00
>>> >   Read Pool:  "ServersWeeklyFullFile" (From Job resource)
>>> >   Read Storage:   "File" (From Pool resource)
>>> >   Write Pool: "TapeArchive" (From Pool's NextPool resource)
>>> >   Write Storage:  "LTO-4"
>>> > 20-Nov 09:39 backup1 JobId 4845: Using Device "LTO-4" to write.
>>> > 20-Nov 12:01 backup-sd JobId 4845: End of

Re: [Bacula-users] bacula copy job fails when tape is full and needs to change

2018-11-26 Thread Andras Horvai
Hello Kern,

Yes, you are right, I am using bacula 7.0.5 shipped with Ubuntu 16.04.
Where should I set up the heartbeat interval? In the SD's or FD's config? Or both?

Thanks for your help!

Andras
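
(For reference, a hedged sketch of the directive in the two daemon configs --
"Heartbeat Interval" is the documented directive name, but as Kern notes below
it exists in several resources, so verify the placement for your Bacula
version; resource names here are hypothetical:)

# bacula-fd.conf
FileDaemon {
  Name = backup1-fd              # hypothetical name
  Heartbeat Interval = 300       # seconds
}

# bacula-sd.conf
Storage {
  Name = backup1-sd
  Heartbeat Interval = 300
}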

On Mon, Nov 26, 2018 at 10:56 AM Kern Sibbald  wrote:

> Hello,
>
> If I remember right you are running on a *very* old Bacula, and the
> problem seems to be that the backup takes more than 2 hours.  One of your
> comm lines (SD <-> FD) times out.  I mention your old version because newer
> Baculas automatically fix this problem by turning on Heart Beat Interval =
> 300, which is very likely to resolve your problem.
>
> Best regards,
> Kern
>
> On 11/26/18 10:34 AM, Andras Horvai wrote:
>
> Hi Tilman,
>
> thank you for your answer! But unfortunately a firewall cannot be the
> problem here :)
> The problem happens only with Copy Jobs. The SD and the FD are on the same
> device. There is no firewall on the machine.
> So what I am doing is the following:
>
> During the weekend I do a full backup with the backup server to file storage
> on the backup server. Then, starting from Monday, I do a Copy Job from the
> backup server to a Tape device connected to the backup server. This works
> pretty well until the tape gets full. When the tape gets full, bacula asks
> for another tape.
> We replace the tape, so the job should continue (as expected), but then at
> the end we get the job error... So I am puzzled about what is wrong.
>
> Please feel free to share your ideas...
>
> Thanks,
>
> Andras
>
> On Sun, Nov 25, 2018 at 10:28 PM Tilman Schmidt 
>  wrote:
>
>> Hi Andras,
>>
>> is there a firewall between the client and the SD?
>> The message
>>
>> > 20-Nov 12:25 backup-sd JobId 4845: Fatal error: append.c:223 Network
>> error reading from FD. ERR=Connection reset by peer
>>
>> looks suspiciously like a firewall killing the FD - SD connection
>> because it sees it as idle.
>>
>> HTH
>> Tilman
>>
>> On 22.11.2018 at 16:04, Andras Horvai wrote:
>> > Dear list,
>> >
>> > I have the following problem:
>> > We use copy jobs to copy weekly full backups to WORM tape, but when a
>> tape
>> > gets full and needs to be changed, the copy job fails. Bacula says
>> > intervention is
>> > needed, so we put a new tape in the tape drive. What can be the problem?
>> >
>> > Copy job report:
>> > 20-Nov 12:25 backup-sd JobId 4838: End of Volume at file 0 on device
>> > "FileStorage" (/backup), Volume "FILEW-0542"
>> > 20-Nov 12:25 backup-sd JobId 4838: End of all volumes.
>> > 20-Nov 12:25 backup-sd JobId 4838: Elapsed time=02:45:29, Transfer
>> > rate=42.14 M Bytes/second
>> > 20-Nov 12:25 backup1 JobId 4838: Error: Bacula backup1 7.0.5 (28Jul14):
>> >   Build OS:   x86_64-pc-linux-gnu ubuntu 16.04
>> >   Prev Backup JobId:  4837
>> >   Prev Backup Job:db1-job.2018-11-19_23.09.19_03
>> >   New Backup JobId:   4845
>> >   Current JobId:  4838
>> >   Current Job:Archive.2018-11-20_07.59.53_05
>> >   Backup Level:   Full
>> >   Client: None
>> >   FileSet:"None" 2017-06-19 09:00:00
>> >   Read Pool:  "ServersWeeklyFullFile" (From Job resource)
>> >   Read Storage:   "File" (From Pool resource)
>> >   Write Pool: "TapeArchive" (From Pool's NextPool resource)
>> >   Write Storage:  "LTO-4" 20-Nov 09:39 backup1 JobId 4845: Using
>> > Device "LTO-4" to write.
>> > 20-Nov 12:01 backup-sd JobId 4845: End of Volume "WORMW-1242" at
>> > 386:27137 on device "LTO-4" (/dev/nst0). Write of 64512 bytes got -1.
>> > 20-Nov 12:01 backup-sd JobId 4845: Re-read of last block succeeded.
>> > 20-Nov 12:01 backup-sd JobId 4845: End of medium on Volume "WORMW-1242"
>> > Bytes=764,853,046,272 Blocks=11,855,980 at 20-Nov-2018 12:01.
>> > 20-Nov 12:01 backup1 JobId 4845: Created new Volume="WORMW-1243",
>> > Pool="TapeArchive", MediaType="LTO-4" in catalog.
>> > 20-Nov 12:01 backup-sd JobId 4845: Please mount append Volume
>> > "WORMW-1243" or label a new one for:
>> > Job:  db1-job.2018-11-20_07.59.54_12
>> > Storage:  "LTO-4" (/dev/nst0)
>> > Pool: TapeArchive
>> > Media type:   LTO-4
>> > 20-Nov 12:15 backup-sd JobId 4845: Error: The Volume=WORM

Re: [Bacula-users] bacula copy job fails when tape is full and needs to change

2018-11-26 Thread Andras Horvai
Hi Tilman,

thank you for your answer! But unfortunately a firewall cannot be the
problem here :)
The problem happens only with Copy Jobs. The SD and the FD are on the same
machine. There is no firewall on the machine.
So what I am doing is the following:

During the weekend I do a full backup with the backup server to file storage
on the backup server. Then, starting from Monday, I run a Copy Job from the
backup server to a tape device connected to the backup server. This works
pretty well until the tape gets full. When the tape gets full, bacula asks
for another tape.
We replace the tape, so the job continues (as expected), but at the end we
get the job error... So I am puzzled about what is wrong.

Please feel free to share your ideas...

Thanks,

Andras

On Sun, Nov 25, 2018 at 10:28 PM Tilman Schmidt  wrote:

> Hi Andras,
>
> is there a firewall between the client and the SD?
> The message
>
> > 20-Nov 12:25 backup-sd JobId 4845: Fatal error: append.c:223 Network
> error reading from FD. ERR=Connection reset by peer
>
> looks suspiciously like a firewall killing the FD - SD connection
> because it sees it as idle.
>
> HTH
> Tilman
>
> Am 22.11.2018 um 16:04 schrieb Andras Horvai:
> > Dear list,
> >
> > I have the following problem:
> > We use copy jobs to copy weekly full backups to WORM tape, but when a tape
> > gets filled and needs to be changed, the copy job fails. Bacula says
> > intervention is needed, so we put a new tape in the tape drive. What can
> > be the problem?
> >
> > Copy job report:
> > 20-Nov 12:25 backup-sd JobId 4838: End of Volume at file 0 on device
> > "FileStorage" (/backup), Volume "FILEW-0542"
> > 20-Nov 12:25 backup-sd JobId 4838: End of all volumes.
> > 20-Nov 12:25 backup-sd JobId 4838: Elapsed time=02:45:29, Transfer
> > rate=42.14 M Bytes/second
> > 20-Nov 12:25 backup1 JobId 4838: Error: Bacula backup1 7.0.5 (28Jul14):
> >   Build OS:   x86_64-pc-linux-gnu ubuntu 16.04
> >   Prev Backup JobId:  4837
> >   Prev Backup Job:db1-job.2018-11-19_23.09.19_03
> >   New Backup JobId:   4845
> >   Current JobId:  4838
> >   Current Job:Archive.2018-11-20_07.59.53_05
> >   Backup Level:   Full
> >   Client: None
> >   FileSet:"None" 2017-06-19 09:00:00
> >   Read Pool:  "ServersWeeklyFullFile" (From Job resource)
> >   Read Storage:   "File" (From Pool resource)
> >   Write Pool: "TapeArchive" (From Pool's NextPool resource)
> >   Write Storage:  "LTO-4" 20-Nov 09:39 backup1 JobId 4845: Using
> > Device "LTO-4" to write.
> > 20-Nov 12:01 backup-sd JobId 4845: End of Volume "WORMW-1242" at
> > 386:27137 on device "LTO-4" (/dev/nst0). Write of 64512 bytes got -1.
> > 20-Nov 12:01 backup-sd JobId 4845: Re-read of last block succeeded.
> > 20-Nov 12:01 backup-sd JobId 4845: End of medium on Volume "WORMW-1242"
> > Bytes=764,853,046,272 Blocks=11,855,980 at 20-Nov-2018 12:01.
> > 20-Nov 12:01 backup1 JobId 4845: Created new Volume="WORMW-1243",
> > Pool="TapeArchive", MediaType="LTO-4" in catalog.
> > 20-Nov 12:01 backup-sd JobId 4845: Please mount append Volume
> > "WORMW-1243" or label a new one for:
> > Job:  db1-job.2018-11-20_07.59.54_12
> > Storage:  "LTO-4" (/dev/nst0)
> > Pool: TapeArchive
> > Media type:   LTO-4
> > 20-Nov 12:15 backup-sd JobId 4845: Error: The Volume=WORMW-1243 on
> > device="LTO-4" (/dev/nst0) appears to be unlabeled.
> > 20-Nov 12:15 backup-sd JobId 4845: Labeled new Volume "WORMW-1243" on
> > tape device "LTO-4" (/dev/nst0).
> > 20-Nov 12:15 backup-sd JobId 4845: Wrote label to prelabeled Volume
> > "WORMW-1243" on tape device "LTO-4" (/dev/nst0)
> > 20-Nov 12:15 backup-sd JobId 4845: New volume "WORMW-1243" mounted on
> > device "LTO-4" (/dev/nst0) at 20-Nov-2018 12:15.
> > 20-Nov 12:25 backup-sd JobId 4845: Fatal error: append.c:223 Network
> > error reading from FD. ERR=Connection reset by peer
> > 20-Nov 12:25 backup-sd JobId 4845: Elapsed time=02:31:15, Transfer
> > rate=46.11 M Bytes/second
> > (From Pool's NextPool resource)
> >   Catalog:"MyCatalog" (From Client resource)
> >   Start time: 20-Nov-2018 09:39:31
> >   End time:   20-Nov-2018 12:25:04
> >   Elapsed time:   2 hours 45 mins 33 secs
> >   Prio

[Bacula-users] bacula copy job fails when tape is full and needs to change

2018-11-22 Thread Andras Horvai
Dear list,

I have the following problem:
We use copy jobs to copy weekly full backups to WORM tape, but when a tape
gets filled and needs to be changed, the copy job fails. Bacula says
intervention is needed, so we put a new tape in the tape drive. What can be
the problem?
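For reference, the intervention itself is just a bconsole mount once the new
tape is in the drive -- a sketch, with the storage name taken from the log
below:

mount storage=LTO-4

The puzzle is why the job then errors out at the end instead of completing.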

Copy job report:
20-Nov 12:25 backup-sd JobId 4838: End of Volume at file 0 on device
"FileStorage" (/backup), Volume "FILEW-0542"
20-Nov 12:25 backup-sd JobId 4838: End of all volumes.
20-Nov 12:25 backup-sd JobId 4838: Elapsed time=02:45:29, Transfer
rate=42.14 M Bytes/second
20-Nov 12:25 backup1 JobId 4838: Error: Bacula backup1 7.0.5 (28Jul14):
  Build OS:   x86_64-pc-linux-gnu ubuntu 16.04
  Prev Backup JobId:  4837
  Prev Backup Job:db1-job.2018-11-19_23.09.19_03
  New Backup JobId:   4845
  Current JobId:  4838
  Current Job:Archive.2018-11-20_07.59.53_05
  Backup Level:   Full
  Client: None
  FileSet:"None" 2017-06-19 09:00:00
  Read Pool:  "ServersWeeklyFullFile" (From Job resource)
  Read Storage:   "File" (From Pool resource)
  Write Pool: "TapeArchive" (From Pool's NextPool resource)
  Write Storage:  "LTO-4" 20-Nov 09:39 backup1 JobId 4845: Using
Device "LTO-4" to write.
20-Nov 12:01 backup-sd JobId 4845: End of Volume "WORMW-1242" at 386:27137
on device "LTO-4" (/dev/nst0). Write of 64512 bytes got -1.
20-Nov 12:01 backup-sd JobId 4845: Re-read of last block succeeded.
20-Nov 12:01 backup-sd JobId 4845: End of medium on Volume "WORMW-1242"
Bytes=764,853,046,272 Blocks=11,855,980 at 20-Nov-2018 12:01.
20-Nov 12:01 backup1 JobId 4845: Created new Volume="WORMW-1243",
Pool="TapeArchive", MediaType="LTO-4" in catalog.
20-Nov 12:01 backup-sd JobId 4845: Please mount append Volume "WORMW-1243"
or label a new one for:
Job:  db1-job.2018-11-20_07.59.54_12
Storage:  "LTO-4" (/dev/nst0)
Pool: TapeArchive
Media type:   LTO-4
20-Nov 12:15 backup-sd JobId 4845: Error: The Volume=WORMW-1243 on
device="LTO-4" (/dev/nst0) appears to be unlabeled.
20-Nov 12:15 backup-sd JobId 4845: Labeled new Volume "WORMW-1243" on tape
device "LTO-4" (/dev/nst0).
20-Nov 12:15 backup-sd JobId 4845: Wrote label to prelabeled Volume
"WORMW-1243" on tape device "LTO-4" (/dev/nst0)
20-Nov 12:15 backup-sd JobId 4845: New volume "WORMW-1243" mounted on
device "LTO-4" (/dev/nst0) at 20-Nov-2018 12:15.
20-Nov 12:25 backup-sd JobId 4845: Fatal error: append.c:223 Network error
reading from FD. ERR=Connection reset by peer
20-Nov 12:25 backup-sd JobId 4845: Elapsed time=02:31:15, Transfer
rate=46.11 M Bytes/second
(From Pool's NextPool resource)
  Catalog:"MyCatalog" (From Client resource)
  Start time: 20-Nov-2018 09:39:31
  End time:   20-Nov-2018 12:25:04
  Elapsed time:   2 hours 45 mins 33 secs
  Priority:   13
  SD Files Written:   4,792
  SD Bytes Written:   418,488,802,122 (418.4 GB)
  Rate:   42131.2 KB/s
  Volume name(s): WORMW-1242|WORMW-1243
  Volume Session Id:  9
  Volume Session Time:1542631131
  Last Volume Bytes:  33,060,787,200 (33.06 GB)
  SD Errors:  0
  SD termination status:  OK
  Termination:*** Copying Error ***

Regarding copy jobs the FD and the SD are on the same machine.

we are using:

Distributor ID: Ubuntu
Description:Ubuntu 16.04.4 LTS
Release:16.04
Codename:   xenial

bacula 7.0.5

Tape drive: HP   Ultrium 4-SCSI


Thanks for help,

Andras


Re: [Bacula-users] job records have gone after database move

2017-08-04 Thread Andras Horvai
Hi,

Thank you for your answer. I think that is not the problem... The fileset
has a 2-month retention period, which means file records are gone from the
Catalog after 2 months... this is perfectly OK for me. But the volume has a
100-year retention period, and the job records also inherit this retention
period. The problem is that after the database restore these job records
disappeared from the catalog within about 2 weeks of using the new server;
honestly, I cannot tell you this precisely, because I had not been checking
for the presence of the archive jobs in the catalog.

This is how I planned the restore from archive:
I expected that I would have the archive job record ids in the catalog, so
using those I would have been able to do a full job restore (this would
have worked perfectly for me).
And it would have worked if I had not had to move the catalog to another
server :( since I checked with the old db and I was able to see the old
archive records...

I am going to double-check the logs on the new server, but I do not think
I will find anything useful :(

Thanks for any help,

Andras



On Fri, Aug 4, 2017 at 10:02 PM, Dan Langille  wrote:

> > On Aug 3, 2017, at 4:44 AM, Andras Horvai 
> wrote:
> >
> > Hi,
> >
> > I had to move my bacula backup server from one server to another because
> of hardware
> > failure. I am using mysql server as my catalog backend. Before
> installing the new server
> > I did a mysql dump on the old and then I did a mysql restore on the new.
> > Everything seemed fine. Then recently I had to restore from Archive.
> >
> > I have archive jobs which are copy jobs.
> > I do weekly full backup to a file storage. Then these copy jobs move the
> last full backup from
> > file storage to tape (worm) storage.
> >
> > The retention period of the volume of archive is 100 years. This
> retention
> > time is inherited by the jobs. The file retention time of these jobs is
> 2 months. So in theory,
> > when I have to restore something from the archive that is 2 years old,
> > I would be able to do a full job restore of that full backup, which
> > would be perfectly fine for me. But here is the problem:
>
> As I understand it, and perhaps other can correct/confirm:
>
>  - when the first retention period is met, the data can be purged from the
> Catalog.
>
> Perhaps this data has been purged from the new database server after it
> was copied.  It was there, but now it has been purged. Has the 2-month
> period been met since that move?
>
> > It seemed that after the move of the backup server I lost my archive job
> records from the catalog.
> > bacula did not find the archive jobs of the client in the catalog. I had
> to use bscan on the tape
> > volume, so I was able to restore.
> >
> > Fortunately, I did not wipe the old system so I was able to run bconsole
> and I checked
> > that the archive records are present... in the old database
> >
> > Can anybody advise what to check in this case? What did I do wrong?
>
> I think you did nothing wrong.
>
> --
> Dan Langille - BSDCan / PGCon
> d...@langille.org
>
>
>


[Bacula-users] job records have gone after database move

2017-08-03 Thread Andras Horvai
Hi,

I had to move my bacula backup server from one server to another because of
hardware
failure. I am using mysql server as my catalog backend. Before installing
the new server
I did a mysql dump on the old and then I did a mysql restore on the new.
Everything seemed fine. Then recently I had to restore from Archive.

I have archive jobs which are copy jobs.
I do weekly full backup to a file storage. Then these copy jobs move the
last full backup from
file storage to tape (worm) storage.

The retention period of the volume of archive is 100 years. This retention
time is inherited by the jobs. The file retention time of these jobs is 2
months. So in theory,
when I have to restore something from the archive that is 2 years old, I
would be able to do a full job restore of that full backup, which would be
perfectly fine for me. But here is the problem:

It seemed that after the move of the backup server I lost my archive job
records from the catalog.
bacula did not find the archive jobs of the client in the catalog. I had to
use bscan on the tape
volume, so I was able to restore.
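For the record, the bscan run looked roughly like this -- a sketch, with
volume and device names as examples; -s stores the scanned job records in
the database and -m updates the media record:

bscan -v -s -m -c /etc/bacula/bacula-sd.conf -V WORMW-1242 LTO-4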

Fortunately, I did not wipe the old system so I was able to run bconsole
and I checked
that the archive records are present... in the old database

Can anybody advise what to check in this case? What did I do wrong?
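A quick sanity check for whether the archive job records survived the dump
and restore -- a sketch against the standard Bacula MySQL schema; database
and user names are whatever your site uses:

mysql -u bacula -p bacula -e \
  "SELECT JobId, Name, Type, Level, EndTime FROM Job WHERE Type IN ('c','C');"

Copy jobs are recorded with Type 'c' or 'C', depending on whether the row is
the control job or the copied instance.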

Bacula version: 7.0.5+dfsg-4ubuntu0.1
Ubuntu version:

Distributor ID: Ubuntu
Description:Ubuntu 16.04.2 LTS
Release:16.04
Codename:   xenial


Thanks,

Andras


[Bacula-users] Copying OK -- with warnings = not OK at all

2017-05-22 Thread Andras Horvai
Hi All,

we have been using the bacula community version for years without any big
problems.
I am very satisfied with this network backup solution. But in the last few
weeks I noticed that the Copy jobs (which copy the most recent full backups,
i.e. the backup files, to tape) terminate with this message: Copying OK --
with warnings.

Investigating deeper the issue I found these error messages:

JobId 15225: Error: block_util.c:352 Volume data error at 0:1207535829!
Block checksum mismatch in block=176257 len=64512: calc=b413b0f3
blk=e716de40
22-May 11:02 backup-sd JobId 15225: Elapsed time=00:45:54, Transfer
rate=4.125 M Bytes/second

and recently with only one Copy job I got Fatal error:

JobId 15223: Error: bsock.c:427 Write error sending 32772 bytes to client:
127.0.0.1:9103: ERR=Connection reset by peer
22-May 10:16 backup-sd JobId 15223: Fatal error: read.c:267 Error sending
to File daemon. ERR=Connection reset by peer
22-May 10:16 backup-sd JobId 15223: Elapsed time=00:43:54, Transfer
rate=69.54 M Bytes/second
22-May 10:16 backup-sd JobId 15223: Error: bsock.c:370 Socket has errors=1
on call to client:127.0.0.1:9103
22-May 10:16 backup-sd JobId 15223: Fatal error: fd_cmds.c:138 Read data
not accepted
22-May 10:16 backup-sd JobId 15223: Error: bsock.c:370 Socket has errors=1
on call to client:127.0.0.1:9103
22-May 10:16 backup1 JobId 15223: Error: Bacula backup1 7.0.5 (28Jul14):

With the bls utility I checked the backup files on disk, and I was able to
read the files without any error.
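That check, for reference -- a sketch of listing job records on a disk
volume with bls; the volume and device names here are examples, not my real
ones:

bls -j -c /etc/bacula/bacula-sd.conf -V FILEW-0542 FileStorage

The -j option lists the job records instead of every file, which is enough
to show the volume can be read end to end.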


Another strange issue is that we have an HP Ultrium LTO-4 tape drive whose
theoretical speed is 120 MB/s, but as I see in the bacula logs, the actual
speed is far below that:

69545.7 KB/s (which ended with Fatal error) (183.1 GB written)
1734.0 KB/s (3 GB written)
4121.2 KB/s  (11.36 GB written)
9397.6 KB/s  (29.09 GB written)
4753.2 KB/s  (4.030 GB written)
47121.0 KB/s (424.0 MB written)
1813.7 KB/s (23 GB written)
2469.9 KB/s (28 GB written)
1675.6 KB/s (5.6 GB written)


There is probably a relationship between the slow write speed and the error,
but I cannot find where the problem is or what to change.
We tried to clean the drive with a cleaning cartridge... it did not help.
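One way to separate drive trouble from Bacula trouble is btape's built-in
speed test -- a sketch; stop the storage daemon first so btape can open the
drive, and use your own bacula-sd.conf path:

btape -c /etc/bacula/bacula-sd.conf /dev/nst0
speed

If btape also reports far under the drive's native rate, the bottleneck is
below Bacula: the SCSI path, the medium, or the disk side starving the drive
and forcing it to stop and reposition.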

I am using bacula 7.0.5+dfsg-4ubuntu0.1 on Ubuntu 16.04.2 LTS with
kernel: GNU/Linux 4.4.0-57-generic x86_64

dpkg -l | grep bacula:

ii  bacula-common   7.0.5+dfsg-4ubuntu0.1
   amd64network backup service - common support files
ii  bacula-common-mysql 7.0.5+dfsg-4ubuntu0.1
   amd64network backup service - MySQL common files
ii  bacula-console  7.0.5+dfsg-4ubuntu0.1
   amd64network backup service - text console
ii  bacula-director-common  7.0.5+dfsg-4ubuntu0.1
   amd64network backup service - Director common files
ii  bacula-director-mysql   7.0.5+dfsg-4ubuntu0.1
   amd64network backup service - MySQL storage for Director
ii  bacula-fd   7.0.5+dfsg-4ubuntu0.1
   amd64network backup service - file daemon
ii  bacula-sd   7.0.5+dfsg-4ubuntu0.1
   amd64network backup service - storage daemon
ii  bacula-sd-mysql 7.0.5+dfsg-4ubuntu0.1
   amd64network backup service - MySQL SD tools
ii  bacula-server   7.0.5+dfsg-4ubuntu0.1
   all  network backup service - server metapackage

Any comment/help are welcomed!

Best Regards,

Andras Horvai


[Bacula-users] mount with RunBeforeJob

2007-05-21 Thread Andras Horvai

Hi,

Is it possible to mount the actual tape that the backup will run on using
RunBeforeJob?
That is, I have a job which needs a tape, and I would like to mount that tape
via RunBeforeJob in the very job that will need it. (I would do this with an
external script which calls the bconsole utility)

like this:

#!/bin/sh
# a sketch completion: the storage name below is a placeholder
/usr/local/bacula-2.0.3/sbin/bconsole -c /usr/local/bacula-2.0.3/etc/bconsole.conf <<EOF
mount storage=YourTapeStorage
quit
EOF


Re: [Bacula-users] Error with Bacula web

2006-11-24 Thread Andras Horvai




Hi,


What is the output of test.php ( http://your_location_to_bacula_web/test.php )?

Do you have php4-pear installed?  Bacula-web works for me, although I'm using Debian Sarge.
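If it is missing, installing it is a single apt command -- assuming the
stock Debian/Ubuntu package names of that era:

apt-get install php4-pear

The undefined fetchrow() is consistent with the PEAR DB layer that
bacula-web expects not being present.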

Andras


-Original Message-

From:    [EMAIL PROTECTED]
To:      bacula-users@lists.sourceforge.net
Date:    Friday, November 24, 2006 9:14:50 PM
Subject: [Bacula-users] Error with Bacula web

hello,

I'm using bacula with postgres, which works fine. However, the web interface does not work. When running it, I received a page indicating:
Fatal error: Call to undefined function: fetchrow() in /var/www/bacula-web/index.php on line 50
I'm using php4 under Ubuntu, and I installed php4-pgsql.

Does anyone have an idea ?

Thank you, Guy.


Guy Corbaz
ch. du Châtaignier 2
1052 Le Mont
Switzerland
phone:+41 21 652 26 05
mobile: +41 79 420 26 06
freeworld dialup: 785844
iaxtel: 17005530690
e-mail: [EMAIL PROTECTED]


Re: [Bacula-users] Update clients FileRetention && JobRetention Period?

2006-11-24 Thread Andras Horvai




Hi,

You are right, my answer was wrong. I did some research in the manual and found this
(page 292 in the PDF):

-

File Retention =  The File Retention
record defines the length of time that Bacula will keep File records
in the Catalog database. When this time period expires, and if Au-
toPrune is set to yes, Bacula will prune (remove) File records that
are older than the specified File Retention period. The pruning will
occur at the end of a backup Job for the given Client. Note that the
Client database record contains a copy of the File and Job retention
periods, but Bacula uses the current values found in the Director’s
Client resource to do the pruning.

-

So, according to the documentation, you don't need to do anything else: you
have already restarted the director daemon.
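In other words, after editing the retention values in bacula-dir.conf, a
plain

reload

from bconsole (or a director restart) should be enough; the pruning at the
end of each job then uses the current values from the Director's Client
resource.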

Andras

-Original Message-

From:    [EMAIL PROTECTED]
To:      bacula-users@lists.sourceforge.net
Date:    Friday, November 24, 2006 10:50:00 PM
Subject: [Bacula-users] Update clients FileRetention && JobRetention Period?

On 11/24/06, pedro moreno <[EMAIL PROTECTED]> wrote:



On 11/24/06, Andras Horvai < [EMAIL PROTECTED]> wrote:
Hi,

Use the update command in bacula console. Check this:
http://www.bacula.org/rel-manual/Bacula_Console.html#_ConsoleChapter 

Andras

-Original Message-

From:    [EMAIL PROTECTED]
To:      bacula-users@lists.sourceforge.net
Date:    Friday, November 24, 2006 6:50:15 PM
Subject: [Bacula-users] Update clients FileRetention && JobRetention Period?

   Hi people.

   I updated my Pool && Volume Retention Periods; now I need to update my Clients' FileRetention && JobRetention periods, but I didn't see any command to help with that..?

   I changed the values in bacula-dir.conf, then reloaded bacula-dir.conf, but it didn't work. I restarted my clients, but that didn't work either.

   Is this possible at all...?

   I need to increase these values. I have been searching but still haven't found anything about this; any info will be appreciated. Thanks all for your time!!!

   Bacula Server 1.38.11 FreeBSD 6.1-p10.
   









Andras, the update command doesn't have any option for the Client resource:

Update choice:
     1: Volume parameters
     2: Pool from resource
     3: Slots from autochanger

   Could you please point me to where...?

  Thanks for your time!!!

   Maybe I wasn't clear: the values that I want to update are in the Client resource in bacula-dir.conf:

Client {
  Name = Client1
  Address = 192.168.2.251 
  FDPort = 9102
  Catalog = MyCatalog
  Password = "mypassword"
  File Retention = 26 days
  Job Retention = 27 days
  AutoPrune = yes
}

   These are the values I want to update.

   Sorry for the mix-up, greetings!!!


Re: [Bacula-users] Update clients FileRetention && JobRetention Period?

2006-11-24 Thread Andras Horvai




Hi,

Use the update command in bacula console. Check this:
http://www.bacula.org/rel-manual/Bacula_Console.html#_ConsoleChapter

Andras

-Original Message-

From:    [EMAIL PROTECTED]
To:      bacula-users@lists.sourceforge.net
Date:    Friday, November 24, 2006 6:50:15 PM
Subject: [Bacula-users] Update clients FileRetention && JobRetention Period?

   Hi people.

   I updated my Pool && Volume Retention Periods; now I need to update my Clients' FileRetention && JobRetention periods, but I didn't see any command to help with that..?

   I changed the values in bacula-dir.conf, then reloaded bacula-dir.conf, but it didn't work. I restarted my clients, but that didn't work either.

   Is this possible at all...?

   I need to increase these values. I have been searching but still haven't found anything about this; any info will be appreciated. Thanks all for your time!!!

   Bacula Server 1.38.11 FreeBSD 6.1-p10.
   










Re: [Bacula-users] restore takes "forever"

2006-11-22 Thread Andras Horvai
Hi,

Well, thanks for your answer. I will change my volume size using the
Max Volume Bytes setting. What settings do you recommend, if you have used
this feature? When I issue the list volumes command in the console I get
this:

+---------+---------------+-----------+----------------+----------+--------------+---------+------+-----------+-------------+---------------------+
| MediaId | VolumeName    | VolStatus | VolBytes       | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType   | LastWritten         |
+---------+---------------+-----------+----------------+----------+--------------+---------+------+-----------+-------------+---------------------+
|       3 | ServersDiff01 | Used      | 51,590,350,000 |       12 |      432,000 |       1 |    0 |         1 | ServersFile | 2006-11-17 00:24:09 |
|       4 | ServersDiff02 | Append    | 33,334,244,961 |        7 |      432,000 |       1 |    0 |         1 | ServersFile | 2006-11-22 23:01:41 |
+---------+---------------+-----------+----------------+----------+--------------+---------+------+-----------+-------------+---------------------+

What does VolFiles mean? I didn't find it in the documentation.

Andras

>>
>> Really?  We don't have any volumes quite as large as 50G, but we have several 30G
>> volumes, and have yet to have a problem restoring from them.

> The problem is, as far as I know, that for some reason file volumes are
> not seeked into but read linearly from the beginning.

> That means that, if you restore something from the end of a 50GB file,
> the whole 50 GB has to be read even though the data is not actually needed.
> Obviously, a seek would be virtually instantaneous.

> If you have set up your volumes in a way that you usually need the whole
> volume, or only the first part of it, that will not be a problem.

> This was discussed on the mailing lists several times during the last 
> years, but I see nothing mentioning it in the manual.

> Arno
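A sketch of the change being discussed -- capping file volume size in the
Pool resource so a restore never has to read through a huge volume; the
resource name and the 5G figure are placeholders, not a recommendation:

Pool {
  Name = ServersFile-pool
  Pool Type = Backup
  Maximum Volume Bytes = 5G   # volume is marked Full here and a new one starts
  AutoPrune = yes
  Recycle = yes
}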





Re: [Bacula-users] cannot prune/purge volumes

2006-11-06 Thread Andras Horvai
I reindexed my tables, but it seems that it didn't help too much.
It still takes "forever" to prune/purge my volumes. Forever now means not
hours but 30 minutes, instead of the few minutes it took in the past.
What else could I do? This is the mysql output of the indexing:

mysql> CREATE INDEX PathIdIdx ON File(PathId);
Query OK, 6390188 rows affected (1 hour 2 min 53.22 sec)
Records: 6390188  Duplicates: 0  Warnings: 0

mysql> CREATE INDEX FilenameIdIdx ON File(FilenameId);
Query OK, 6390188 rows affected (1 hour 11 min 47.64 sec)
Records: 6390188  Duplicates: 0  Warnings: 0

mysql> CREATE INDEX FileSetIdIdx ON Job(FileSetId);
Query OK, 268 rows affected (0.30 sec)
Records: 268  Duplicates: 0  Warnings: 0

mysql> CREATE INDEX ClientIdIdx ON Job(ClientId);
Query OK, 268 rows affected (0.20 sec)
Records: 268  Duplicates: 0  Warnings: 0

The configuration: AMD Athlon(tm) 64 Processor 2800+; 512 MB ram; SATA
disks
Kernel: 2.6.8-2; Mysql 4.0.24

The database size is around 1 GB.
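A quick way to confirm the new indexes are actually being used is to EXPLAIN
one of the catalog queries -- a sketch; the WHERE clause is just a stand-in
for what pruning issues:

mysql> EXPLAIN SELECT JobId FROM File WHERE PathId=1 AND FilenameId=1;

If the key column of the output names PathIdIdx or FilenameIdIdx, the new
indexes are in play, and what remains is raw table size against 512 MB of
RAM and MySQL 4.0.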

-Original Message-

From:[EMAIL PROTECTED]
To:  [EMAIL PROTECTED]
Date:    Sunday, November 05, 2006 1:31:35 PM
Subject: [Bacula-users] cannot prune/purge volumes

>On Sun, Nov 05, 2006 at 01:03:40PM +0100, Andras Horvai wrote:
> Have you already tried this? My catalog is around 1 GB in size.
> How long will it take to index it?

It's hard to say, but it could easily take hours, especially if you don't
have a particularly fast machine, so just be patient with it.

-- 
Frank Sweetser fs at wpi.edu  |  For every problem, there is a solution that
WPI Network Engineer  |  is simple, elegant, and wrong. - HL Mencken
GPG fingerprint = 6174 1257 129E 0D21 D8D4  E8A3 8E39 29E3 E2E8 8CEC




Re: [Bacula-users] cannot prune/purge volumes

2006-11-05 Thread Andras Horvai
Thanks for your answer! I started to create the new indexes; let's
see what happens. I will use these commands:
CREATE INDEX PathIdIdx ON File(PathId);
CREATE INDEX FilenameIdIdx ON File(FilenameId);

CREATE INDEX FileSetIdIdx ON Job(FileSetId);
CREATE INDEX ClientIdIdx ON Job(ClientId);

Have you already tried this? My catalog is around 1 GB in size.
How long will it take to index it?

Andras


-Original Message-

From:[EMAIL PROTECTED]
To:  [EMAIL PROTECTED]
Date:    Saturday, November 04, 2006 6:52:27 PM
Subject: [Bacula-users] cannot prune/purge volumes

>On Sat, Nov 04, 2006 at 06:00:52PM +0100, Andras Horvai wrote:
> Hi,
> 
> I'm using bacula Version: 1.38.11 (28 June 2006) and mysql 4.0.24 on
> a debian sarge system. My problem is that I am not able to prune or
> purge my volumes. It seems that it takes forever to prune/purge the database
> records. Has anybody met this problem? I enter the media id of the
> volume and nothing happens; I don't get my bconsole prompt back.

Pruning can take an exorbitantly long time if you don't have good indexes
created on your catalog.

http://paramount.ind.wpi.edu/wiki/doku.php?id=faq#why_does_dbcheck_take_forever_to_run

has some sample indexes that you might try adding.  Check the mysql manpage
for your version of mysql to get the correct commands to add them.

-- 
Frank Sweetser fs at wpi.edu  |  For every problem, there is a solution that
WPI Network Engineer  |  is simple, elegant, and wrong. - HL Mencken
GPG fingerprint = 6174 1257 129E 0D21 D8D4  E8A3 8E39 29E3 E2E8 8CEC




[Bacula-users] cannot prune/purge volumes

2006-11-04 Thread Andras Horvai
Hi,

I'm using bacula Version: 1.38.11 (28 June 2006) and mysql 4.0.24 on
a debian sarge system. My problem is that I am not able to prune or
purge my volumes. It seems that it takes forever to prune/purge the database
records. Has anybody met this problem? I enter the media id of the
volume and nothing happens; I don't get my bconsole prompt back.

Thanks for your help,

Andras



[Bacula-users] ERR=The system cannot find the file specified.

2006-10-20 Thread Andras Horvai
Hello,

I did some research in the mailing list archives, but I didn't find
a solution. I'm using bacula-dir Version: 1.38.11 (28 June 2006) and
windows file daemon version: 1.38.11.

I'm getting this error message for a few files:
ERR=The system cannot find the file specified.

I checked the directory length (with filename) and it didn't reach
the 260-character limit. I suspect that special Hungarian characters,
like õ or û, cause the problem. I'm sure the same problem exists with
other languages. How can I solve it?

Thanks in advance,

Andras




[Bacula-users] connection to Filestorage doesn't work

2006-10-03 Thread Andras Horvai
Hello,

First I must apologize if I'm asking a question that has already been
answered. I have searched the archives on sourceforge, but I couldn't find
the answer.

I'm using the default deb packages of bacula. The operating system is
debian sarge. (The file daemons are version 1.38.10)

The config:

I want to use two storage devices (two separate directories on the
same disk):
bacula-dir.conf:
Storage {
  Name = DesktopStorage
  Maximum Concurrent Jobs = 20
  Address = 10.10.234.82 # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = "secure"
  Device = DesktopsFileStorage
  Media Type = DesktopsFile
}

Storage {
  Name = ServerStorage
  Maximum Concurrent Jobs = 20
  Address = 10.10.234.82 # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = "secure"
  Device = ServersFileStorage
  Media Type = ServersFile
}
bacula-sd.conf:

Device {
  Name = DesktopsFileStorage
  Media Type = DesktopsFile
  Archive Device = /backup/desktops
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = yes;
}
Device {
  Name = ServersFileStorage
  Media Type = ServersFile
  Archive Device = /backup/servers
  LabelMedia = yes;   # lets Bacula label unlabeled media
  Random Access = yes;
  AutomaticMount = yes;   # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = yes;
}

The problem:
It seems that the backup of the servers works fine. The file daemon can
connect to the storage daemon, but when I want to back up a desktop
machine it doesn't work.
I created the volumes in the pools. When I issue the list nextvol
command on the console I get this:
Select Job resource (1-23): 14
The next Volume to be used by Job "client01-job" will be DesktopsDiff01

Then, when I run the job manually, I get this:

Running Jobs:
Backup Job client01-job.2006-10-02_15.00.11 waiting for Client connection.
Full Backup job horvai-job JobId=149 Volume="" device="/backup/desktops"
Files=0 Bytes=0 Bytes/sec=0
FDSocket closed

I got this message:

02-Oct 15:20 client01-fd: client01-job.2006-10-02_15.00.11 Warning:
c:\cygwin\home\kern\bacula\k\src\win32\lib\../../lib/bnet.c:853 Could
not connect to Storage daemon on 10.10.234.82:9103. ERR=No error
Retrying ...

I don't know why bacula doesn't use the prelabeled volumes, because
the volume files exist:

bacula:/backup/desktops# ls -la
-rw-r-  1 bacula tape   221 Oct  2 11:40 DesktopsDiff01
-rw-r-  1 bacula tape   221 Oct  2 11:40 DesktopsDiff02
-rw-r-  1 bacula tape   221 Oct  2 10:59 DesktopsFull01
-rw-r-  1 bacula tape   221 Oct  2 11:11 DesktopsFull02
-rw-r-  1 bacula tape   221 Oct  2 11:17 DesktopsFull03
-rw-r-  1 bacula tape   221 Oct  2 11:38 DesktopsFull04
-rw-r-  1 bacula tape   221 Oct  2 11:39 DesktopsFull05

Thanks in advance,

Andras






[Bacula-users] large file backup

2006-07-11 Thread Andras Horvai
Dear List,

I would like to back up files larger than or equal to 4 GB.
How can I do this with bacula? I got this error message:
Could not stat c:/dvdimage.iso .

Thanks in advance,

Andras








[Bacula-users] busy writing

2006-02-17 Thread Andras Horvai
Hello,

I'm using debian sarge with bacula 1.36.3

this is one of my clients to back up:

abc-server-fd Version: 1.38.3 (04 January 2006)  VSS Windows Server 2003 MVS NT 
5.2.3790
Daemon started 17-Feb-06 12:31, 0 Jobs run since started.
No Terminated Jobs.
Running Jobs:
Director connected at: 17-Feb-06 14:21
No Jobs running.

And this is the error message which I got:

16-Feb 21:01 bacula-dir: Start Backup JobId 2542, 
Job=abc-server-job.2006-02-16_21.01.20
16-Feb 21:05 abc-server-fd: DIR and FD clocks differ by 257 seconds, FD 
automatically adjusting.
16-Feb 21:01 bacula-sd: abc-server-job.2006-02-16_21.01.20 Fatal error: Device 
/backup is busy writing on another Volume.
16-Feb 21:05 cvo-server-fd: abc-server-job.2006-02-16_21.01.20 Fatal error: 
c:\cygwin\home\kern\bacula\k\src\win32\filed\../../filed/job.c:1602 Bad 
response to Append Data command. Wanted 3000 OK data, got 3903 Error append data


What is the problem? I'm using two separate Storage definitions (File and
File2), but both of them point to the same Device (FileStorage). I had not
experienced this error before.
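A Device can only have one Volume mounted at a time, so two Storage
resources funneled into a single Device will collide as soon as two jobs
overlap. A minimal sketch of the usual fix -- one Device per Storage; the
names and paths here are placeholders:

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /backup/file1
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
}

Device {
  Name = FileStorage2
  Media Type = File2
  Archive Device = /backup/file2
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
}

The File2 Storage resource in bacula-dir.conf would then point at
FileStorage2 instead of FileStorage.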

Thanks,

Andras





Re: [Bacula-users] Could not connect to Storage

2006-01-17 Thread Andras Horvai
Hi Landon,

I haven't experienced any problems yet. Could you please give me some examples?

A.


>On Jan 16, 2006, at 8:12 AM, Andras Horvai wrote:
>
>>Hi Jari,
>>
>>Thanks for your answer, but the firewall was the problem.
>
>Your next problem will be that 1.36 and 1.38 are not compatible. =)
>
>-landonf


Re: [Bacula-users] Could not connect to Storage

2006-01-16 Thread Andras Horvai
Hi Jari,

Thanks for your answer, but the firewall was the problem.

A.

>Andras Horvai wrote:
>>Linux bacula 2.6.8-11-amd64-k8 #1 Wed Jun 1 01:03:08 CEST 2005 x86_64
>>GNU/Linux bacula-1.36.3
>>File daemon's version on the windows server is: 1.38.3
>
>1.36 and 1.38 are probably not compatible enough.


[Bacula-users] Could not connect to Storage

2006-01-16 Thread Andras Horvai
Hi All,

I'm new to the list.

Here is the details:

Linux bacula 2.6.8-11-amd64-k8 #1 Wed Jun 1 01:03:08 CEST 2005 x86_64 GNU/Linux
bacula-1.36.3

File daemon's version on the windows server is: 1.38.3

The problem is this:
I added 6 more pools, with only one volume in each pool. And when the
scheduler wants to start the backup via the new pool, I receive this
error message:

13-Jan 21:12 bacula-dir: Start Backup JobId 2001, 
Job=file-server-job.2006-01-13_21.01.20
13-Jan 21:16 file-server-fd: file-server-job.2006-01-13_21.01.20 Warning: 
c:\cygwin\home\kern\bacula\k\src\win32\lib\../../lib/bnet.c:853 Could not 
connect to Storage daemon on 192.168.1.146:9103. ERR=No error
Retrying ...
13-Jan 21:32 file-server-fd: file-server-job.2006-01-13_21.01.20 Warning: 
c:\cygwin\home\kern\bacula\k\src\win32\lib\../../lib/bnet.c:853 Could not 
connect to Storage daemon on 192.168.1.146:9103. ERR=No error
Retrying ...

[...]

13-Jan 22:48 file-server-fd: file-server-job.2006-01-13_21.01.20 Fatal error: 
Failed to connect to Storage daemon: 192.168.1.146:9103
13-Jan 22:48 file-server-fd: file-server-job.2006-01-13_21.01.20 Fatal error: 
c:\cygwin\home\kern\bacula\k\src\win32\lib\../../lib/bnet.c:859 Unable to 
connect to Storage daemon on 192.168.1.146:9103. ERR=No error
13-Jan 22:45 bacula-dir: file-server-job.2006-01-13_21.01.20 Fatal error: 
Socket error from Filed on Storage command: ERR=No data available
13-Jan 22:45 bacula-dir: file-server-job.2006-01-13_21.01.20 Error: Bacula 
1.36.3 (22Apr05): 13-Jan-2006 22:45:11
  JobId:  2001
  Job:file-server-job.2006-01-13_21.01.20
  Backup Level:   Full
  Client: file-server-fd
  FileSet:"file-server" FD%SF3343-34s%df%afds
  Pool:   "Servers-SecondWeek-pool"
  Storage:"File2"
  Start time: 13-Jan-2006 21:12:10
  End time:   13-Jan-2006 22:45:11
  FD Files Written:   0
  SD Files Written:   0
  FD Bytes Written:   0
  SD Bytes Written:   0
  Rate:   0.0 KB/s
  Software Compression:   None
  Volume name(s): 
  Volume Session Id:  3
  Volume Session Time:1137152070
  Last Volume Bytes:  0
  Non-fatal FD errors:0
  SD Errors:  0
  FD termination status:  
  SD termination status:  Waiting on FD
  Termination:*** Backup Error ***

What could be the problem?


The configuration:

I have one storage. It is a file storage. This is the bacula-sd.conf:

Storage { 
  Name = bacula-sd
  SDPort = 9103
  WorkingDirectory = "/var/bacula/working"
  Pid Directory = "/var/bacula/working"
  Maximum Concurrent Jobs = 20
}

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /backup
  LabelMedia = yes; 
  Random Access = Yes;
  AutomaticMount = yes;   
  RemovableMedia = no;
#  AlwaysOpen = yes;
}

This is the bacula-dir.conf: 

Schedule {
  Name = "DesktopsCycle"
  Run  = Level=Differential Pool=Desktops-Daily-pool mon-thu at 21:01
  Run  = Level=Full Pool=Desktops-FirstWeek-pool 1st fri at 21:01
  Run  = Level=Full Pool=Desktops-SecondWeek-pool 2nd fri at 21:01
  Run  = Level=Full Pool=Desktops-ThirdWeek-pool 3rd fri at 21:01
  Run  = Level=Full Pool=Desktops-ForthWeek-pool 4th fri at 21:01
  Run  = Level=Full Pool=Desktops-FifthWeek-pool 5th fri at 21:01
}

Schedule {
  Name = "ServersCycle"
  Run  = Level=Differential Pool=Servers-Daily-pool mon-thu at 21:01
  Run  = Level=Full Pool=Servers-FirstWeek-pool 1st fri at 21:01
  Run  = Level=Full Pool=Servers-SecondWeek-pool 2nd fri at 21:01
  Run  = Level=Full Pool=Servers-ThirdWeek-pool 3rd fri at 21:01
  Run  = Level=Full Pool=Servers-ForthWeek-pool 4th fri at 21:01
  Run  = Level=Full Pool=Servers-FifthWeek-pool 5th fri at 21:01

}

Pool {
  Name = Desktops-Daily-pool
  Pool Type = Backup
  AutoPrune = yes
  Volume Retention =  5 days
  Recycle = yes
  Accept Any Volume = yes 
}

Pool {
  Name = Desktops-FirstWeek-pool
  Pool Type = Backup
  AutoPrune = Yes
  Volume Retention = 1 month
  Recycle = yes
  Accept Any Volume = yes
}

[...]

Pool {
  Name = Desktops-FifthWeek-pool
  Pool Type = Backup
  AutoPrune = Yes
  Volume Retention = 1 month
  Recycle = yes
  Accept Any Volume = yes
}

Pool {
  Name = Servers-Daily-pool
  Pool Type = Backup
  AutoPrune = yes
  Volume Retention =  5 days
  Recycle = yes
  Accept Any Volume = yes
}

Pool {
  Name = Servers-FirstWeek-pool
  Pool Type = Backup
  AutoPrune = Yes
  Volume Retention = 1 month
  Recycle = yes
  Accept Any Volume = yes
}

[...]

Pool {
  Name = Servers-FifthWeek-pool
  Pool Type = Backup
  AutoPrune = Yes
  Volume Retention = 1 month
  Recycle = yes
  Accept Any Volume = yes
}


Storage {
  Name = File
  Address = 192.168.1.146
  SDPort = 9103
  Password = ""
  Device = FileStorage
  Media Type = File
}

Storage {
  Name = File2
  Address = 192.168.1.146
  SDPort = 9103
  Password = ""
  Device = F