Re: [Bacula-users] fd lose connection

2023-11-07 Thread Lionel PLASSE
I'm sorry, but there is no Wi-Fi or 5G in our factory, and I don't use my phone
to back up my servers :) .
I mentioned SSH (scp) transfers just to show that I have no problem uploading
big, continuous data through this WAN with other tools.
The WAN connection is quite stable.

"So it is fine when the NIC is up. Since this is Windows,"
no windows. I discarder windows problem hypothesis by using a migration job, so 
from linux sd to linux sd
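
(For reference, NIC power management and Energy-Efficient Ethernet can still be
checked on the Linux hosts from the shell; a minimal sketch, assuming the
WAN-facing interface is called eth0 and the driver supports these ethtool
queries:)

  # Show whether Energy-Efficient Ethernet (EEE) is enabled on this NIC
  ethtool --show-eee eth0

  # Disable EEE so the PHY never drops into a low-power state mid-transfer
  ethtool --set-eee eth0 eee off

  # Quick look at Wake-on-LAN / power-related settings reported by the driver
  ethtool eth0 | grep -i wake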

Thanks for everything, I will find a solution.
Best regards

PLASSE Lionel | Networks & Systems Administrator 
221 Allée de Fétan
01600 TREVOUX - FRANCE 
Tel : +33(0)4.37.49.91.39
pla...@cofiem.fr
www.cofiem.fr | www.cofiem-robotics.fr

 





-Original Message-
From: Josh Fisher via Bacula-users  
Sent: Tuesday, November 7, 2023 18:01
To: bacula-users@lists.sourceforge.net
Subject: Re: [Bacula-users] fd lose connection


On 11/7/23 04:34, Lionel PLASSE wrote:
> Hello,
>
> Could encryption have any impact on my problem?
>
> I am testing without any encryption between SD/DIR/bconsole or FD, and
> it seems to be more stable. (A short job completed fine; the longest job
> is still running: 750 GB to migrate.)
>
> My WAN connection seems to be quite good. I can transfer big and
> small raw files by scp/ssh and don't have ping latency or trouble with the
> IPsec connection.


So it is fine when the NIC is up. Since this is Windows, the first thing to do 
is turn off power saving for the network interface device in Device Manager. 
Make sure that the NIC doesn't ever power down its PHY. 
If any switch, router, or VPN doesn't handle Energy-Efficient Ethernet (EEE) in
the same way, then it can look like a dropped connection to the other side.

Also, you don't say what type of WAN connection this is. Many wireless 
services, 5G, etc. can and will drop open sockets due to inactivity (or 
perceived inactivity) to free up channels.


>
> I also tried with NAT, not using IPsec and setting the Bacula SD & DIR
> directly in front of the WAN.
> The same thing occurs (wrote X bytes but only Y accepted).
>
> I also tried a migration job from SD to SD through the WAN instead of
> SD -> FD through the WAN, and the result was the same (to see if the win32
> FD could be involved):
>   - DIR and SD in the same LAN.
>   - Back up the remote FD through the remote SD (the two are in the same
> LAN) for a fast backup: step OK.
>   - Then migrate from the remote SD to the SD in the DIR's LAN through the
> WAN, to move the volumes' physical media offsite: step NOK.
> The final goal: outsourcing volumes.
> I also discarded the gzip compression (just in case).
>
> The errors are quite disturbing:
> *   Error: lib/bsock.c:397 Wrote 65566 bytes to client:192.168.0.4:9102,
> but only 65536 accepted
>      Fatal error: filed/backup.c:1008 Network send error to SD.
> ERR=Input/output error
> Or (when increasing Maximum Network Buffer Size)
> *   Error: lib/bsock.c:397 Wrote 130277 bytes to
> client:192.168.0.17:9102, but only 98304 accepted.
>      Fatal error: filed/backup.c:1008 Network send error to SD.
> ERR=Input/output error
> Or (migration job)
> *   Fatal error: append.c:319 Network error reading from FD. ERR=Erreur
> d'entrée/sortie
>      Error: bsock.c:571 Read expected 131118 got 114684 from
> Storage daemon:192.168.10.54:9103
>
> It looks like there is a gap between the send and receive buffers and,
> looking at the source code, encryption could affect the buffer size.
> So I think Bacula-SD could be the cause (maybe).
> Could it be a bug?
> What could I do to pin down the problem (activating debug in the SD
> daemon?)
>
> I use Bacula 13.0.3 on Debian 12, with OpenSSL 1.1.
>
> Thanks for helping. Backup scenarios must include a step of relocating the
> backup media to be reliable.
>



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula copy from tape to tape

2023-11-07 Thread Bill Arlofski via Bacula-users

On 11/7/23 15:09, Rob Gerber wrote:
> Well, now I know that whole thing is impossible. Simpler, that way.
>
> Thank you for letting me know!

:) Welcome.

> Could we have disk based backups managed by bacula on this larger
> NAS, with copies made to tape for onsite AND offsite storage?

With the large NAS you have described, that is exactly what I would do. :)

Just make sure it is mounted to the SD server via iSCSI, FC, or NFS and *not* 
CIFS. ;)
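
For reference, a rough sketch of what a disk-based Device resource in
bacula-sd.conf could look like for such a NAS; the resource name and mount
point below are placeholders, not anything from this thread:

  Device {
    Name = NAS-File-Storage            # referenced by a Storage resource in the Director
    Device Type = File
    Media Type = File
    Archive Device = /mnt/nas-backups  # NFS/iSCSI/FC mount on the SD host, not CIFS
    LabelMedia = yes                   # let Bacula label new file volumes itself
    Random Access = yes
    AutomaticMount = yes
    RemovableMedia = no
    AlwaysOpen = no
  }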



Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula copy from tape to tape

2023-11-07 Thread Phil Stracchino

On 11/7/23 17:09, Rob Gerber wrote:
> Technically I do have a huge NAS that dwarfs the production NAS in size
> (TrueNAS box). It was intended to store one movie, but the movie IT
> people massively over-estimated the amount of space we needed, so it
> never got used. It has occurred to me to use this TrueNAS device as a
> disk based volume storage solution.
>
> Can one make multiple copies of a reference volume? I.e., could we have
> disk based backups managed by bacula on this larger NAS, with copies
> made to tape for onsite AND offsite storage?

You can absolutely do disk-to-disk-to-tape.  It's not conceptually 
difficult to set up.  (I'm doing disk to disk to removable ZFS disk 
pool, but ... same principle.)

> Maybe at that point we wouldn't need an onsite tape backup set. Disk
> based backup could speed up our initial backups substantially since both
> of these systems can operate at 10gbe speeds.

Yeah, I love my disk-to-disk setup.  Small restores are almost instant.


--
  Phil Stracchino
  Babylon Communications
  ph...@caerllewys.net
  p...@co.ordinate.org
  Landline: +1.603.293.8485
  Mobile:   +1.603.998.6958



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula copy from tape to tape

2023-11-07 Thread Rob Gerber
Well, now I know that whole thing is impossible. Simpler, that way.

Thank you for letting me know!

Technically I do have a huge NAS that dwarfs the production NAS in size
(TrueNAS box). It was intended to store one movie, but the movie IT people
massively over-estimated the amount of space we needed, so it never got
used. It has occurred to me to use this TrueNAS device as a disk based
volume storage solution.

Can one make multiple copies of a reference volume? I.e., could we have disk
based backups managed by bacula on this larger NAS, with copies made to
tape for onsite AND offsite storage?

Maybe at that point we wouldn't need an onsite tape backup set. Disk based
backup could speed up our initial backups substantially since both of these
systems can operate at 10gbe speeds.

I have it set up with LTO as the primary backup solution because
LTO infrastructure was the only backup plan until management suddenly
demanded I build a massive NAS for housing this movie (then failed to use
said NAS).

Regards,
Robert Gerber
402-237-8692
r...@craeon.net


On Tue, Nov 7, 2023 at 3:28 PM Bill Arlofski via Bacula-users <
bacula-users@lists.sourceforge.net> wrote:

> On 11/7/23 13:04, Rob Gerber wrote:
>  >
> > How difficult will it be to run copy jobs with only 1 LTO8 tape drive?
>
> Hello Rob,
>
> Sorry to be the bringer of bad news, but "difficult" is not the correct
> word.
>
> The word you are looking for is "impossible"
>
> Bacula needs one read device and one write device for copy or migration
> jobs.
>
>
> Best regards,
> Bill
>
> --
> Bill Arlofski
> w...@protonmail.com
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula copy from tape to tape

2023-11-07 Thread Bill Arlofski via Bacula-users

On 11/7/23 13:04, Rob Gerber wrote:
> How difficult will it be to run copy jobs with only 1 LTO8 tape drive?


Hello Rob,

Sorry to be the bringer of bad news, but "difficult" is not the correct word.

The word you are looking for is "impossible"

Bacula needs one read device and one write device for copy or migration jobs.


Best regards,
Bill

--
Bill Arlofski
w...@protonmail.com



___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] Bacula purges old jobs and I don't want it to do that

2023-11-07 Thread Rob Gerber
To update this thread, I ultimately was able to avoid a bscan of all my
backed up media by restoring a backup of my catalog database. I have set
file, job, and volume retention to 1000 years.
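
For anyone else hitting this, a rough sketch of where those three retention
periods live in bacula-dir.conf; the resource names are placeholders and the
durations are only examples, not a recommendation:

  Client {
    Name = my-fd                  # placeholder; Address, Password, etc. omitted
    File Retention = 10 years     # how long File records stay in the catalog
    Job Retention = 10 years      # how long Job records stay in the catalog
    AutoPrune = yes               # only prune once those periods expire
  }

  Pool {
    Name = LTO8-Pool              # placeholder pool name
    Pool Type = Backup
    Volume Retention = 10 years   # how long volumes are protected from recycling
    Recycle = no                  # never reuse these volumes automatically
    AutoPrune = no
  }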

Does a virtual full job strategy eliminate information about changed files?
I.e., if the full backup captured fileA in one state, and a later
incremental backup captured fileA in a different state, would the virtual
full consolidation process eliminate the reference to the first backup of
fileA? Let's assume that once a tape is full, neither it nor its associated
files or jobs will ever be recycled, at least not for a 7-year period or so.
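
In case it helps frame the question: a VirtualFull is just another backup
level, so a consolidation can be kicked off from bconsole, and the new Full is
written into the pool named by Next Pool. A minimal sketch with placeholder
names:

  # bconsole: consolidate the latest Full + Incrementals into a new Full
  run job=NAS-Backup level=VirtualFull yes

  # bacula-dir.conf: the job's pool needs a Next Pool to receive the new Full
  Pool {
    Name = LTO8-Pool
    Pool Type = Backup
    Storage = LTO8-Autochanger       # placeholder storage resource
    Next Pool = LTO8-Consolidated    # placeholder pool for the VirtualFull output
  }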

Incrementals forever could scale very badly in a larger enterprise, but my
objective is to protect a single set of files on a single system. My
largest concern is tapes going missing in an incremental chain, and for
that reason I'm probably going to need to do differential backups
periodically.

Regards,
Robert Gerber
402-237-8692
r...@craeon.net


On Fri, Aug 18, 2023 at 6:25 PM Phil Stracchino 
wrote:

> On 8/18/23 17:24, Rob Gerber wrote:
> > verification jobs. My guess is that bacula purged job records for some
> > reason, possibly because they were older than 6 months. I have volume
> > retention set to 1000 years, but maybe I need to add something for job
> > retention?
>
> There are three retention periods you need to consider:
> — Volume retention
> — Job retention
> — File retention
> I suggest reading up on these in the documentation so that you
> understand what each of them does.  You can then adjust your pools, and
> update all of your volumes, as necessary.
>
>
> > Background info: I am backing up some large video files with LTO8 tape.
> > My goal is to back these files up once, and do incremental backups to
> > capture any new or changed files in the fileset. I do not expect
> > the large media files to be changed once placed on the disk. Basically,
> > I am using an "incrementals forever" strategy.
>
>
> You might want to consider virtual-full jobs as a part of that strategy.
>
> > I am using bacula 13.x. We have a qualstar q24 autochanger. We use LTO 8
> > media. The fileset in question is around 70TiB. I welcome any assistance
> > the list could provide. I'm sure it has to be something simple.
>
>
> You could fix your retention and then scan the pruned volumes back in
> using bscan.  By the sound of it, that would take a while, but it's doable.
>
> What you need to keep in mind is that an "incrementals forever" strategy
> is also a "Bacula database grows forever" strategy.  That will
> eventually become a problem.
>
>
> --
>Phil Stracchino
>Babylon Communications
>ph...@caerllewys.net
>p...@co.ordinate.org
>Landline: +1.603.293.8485
>Mobile:   +1.603.998.6958
>
>
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Bacula copy from tape to tape

2023-11-07 Thread Rob Gerber
I need to back up a 76TiB dataset to LTO8 tape. This data is primarily
large video files that won't change very often. I need two backups, one set
of tapes for onsite storage and one set for offsite storage.

My plan has been to back up the server data using two different job
definitions, with two different pools, sharing a single file definition.
However, the initial backup process takes a long time. This is interfering
with our use of the NAS appliance because we need to clear some items from
the NAS to free up available space, but if we remove them then the second
backup won't back up the removed files a second time. Overall, this "run
the backup twice using different tapes" idea isn't great because it could
lead to backup synchronization issues where the two backups aren't
identical snapshots of a moment in time, and couldn't necessarily be relied
upon to have captured the same data. I.e., we might not have two backup
copies of all data, exactly as in this scenario where we want to delete
some data from the NAS and keep it long-term on tape.

I have read about bacula's copy job process, whereby an existing backup is
copied to a new set of media, and the copy won't be used for restores
unless the original job's media is unavailable.
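
For context, a rough sketch of how such a copy setup is usually expressed in
bacula-dir.conf; all names below are placeholders, and PoolUncopiedJobs is just
one of the available selection types:

  Pool {
    Name = LTO8-Onsite
    Pool Type = Backup
    Storage = LTO8-Autochanger     # placeholder tape storage resource
    Next Pool = LTO8-Offsite       # copies land in this pool
  }

  Pool {
    Name = LTO8-Offsite
    Pool Type = Backup
    Storage = LTO8-Autochanger
  }

  Job {
    Name = Copy-To-Offsite
    Type = Copy
    Pool = LTO8-Onsite             # read side: jobs are selected from this pool
    Selection Type = PoolUncopiedJobs
    Client = placeholder-fd        # required by the parser, not used by the copy
    FileSet = placeholder-fileset
    Messages = Standard
  }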

With my current plan, I am confident that it will be easy to tell the
difference between the offsite and onsite jobs and their associated media.
I mostly distinguish between them by the associated pool.

If I run copy jobs against my first backup, will it be similarly easy to
tell which media is intended for offsite storage and which is intended for
onsite storage?

How difficult will it be to run copy jobs with only 1 LTO8 tape drive? I
know I'd have to provide suitable spool storage to unspool a given tape. My
current spool is 75GiB, so it's woefully undersized for the task.

Can copy jobs copy from 1 tape to the other? I've seen references to copy
jobs as a primary means to move a copy from a disk volume to a tape volume.
I assume this is possible from tape to tape as well, but want to verify.

If I rename a bacula job, pool, or fileset in the bacula director config,
will it cause any issues with data already backed up to tape when I go to
restore? My current backup is named "sharename offsite" and since I can't
easily restore from the copy unless the original job is unavailable or I
specify the copy job directly by jobid, I wouldn't want to take the
original backup offsite and leave the copy onsite. The original full backup
took 2 weeks+ to complete and re-running that backup would stink.

If I were to do a bscan of those tapes later in some disaster recovery
scenario, would I have any issues because the job changed names partway
through the full >> incremental lifecycle?

Would the copied tapes be interchangeable with their original counterparts? I
assume not, though it'd be great if so. I imagine a situation where one of
several tapes is unavailable from the onsite backup set, and we substitute
the appropriate tape from the offsite backup set. What does strike me as
concerning is the possibility that one tape is missing from the onsite set and
a different tape is missing from the offsite set. In that case, even though the
tapes are different and the data should be identical, would we be unable to
do a full restore of the data from either backup set?

background information:
I am using bacula to back up a nas appliance holding 196TiB of data, mostly
large video files that should not change (though they might be moved to
different folders by video editors).

I am backing this data up to LTO 8 tape using a Qualstar Q24 tape changer.
I have 1 LTO 8 tape drive.

Regards,
Robert Gerber
402-237-8692
r...@craeon.net
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] fd lose connection

2023-11-07 Thread Josh Fisher via Bacula-users


On 11/7/23 04:34, Lionel PLASSE wrote:

Hello,

Could encryption have any impact on my problem?

I am testing without any encryption between SD/DIR/bconsole or FD, and it seems 
to be more stable. (A short job completed fine; the longest job is still 
running: 750 GB to migrate.)

My WAN connection seems to be quite good. I can transfer big and small raw 
files by scp/ssh and don't have ping latency or trouble with the IPsec 
connection.



So it is fine when the NIC is up. Since this is Windows, the first thing 
to do is turn off power saving for the network interface device in 
Device Manager. Make sure that the NIC doesn't ever power down its PHY. 
If any switch, router, or VPN doesn't handle Energy-Efficient Ethernet (EEE) 
in the same way, then it can look like a dropped connection to the other 
side.


Also, you don't say what type of WAN connection this is. Many wireless 
services, 5G, etc. can and will drop open sockets due to inactivity (or 
perceived inactivity) to free up channels.
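
One related knob worth mentioning, in case an IPsec gateway or NAT device is
silently timing out idle TCP sessions during long jobs: Bacula's heartbeat
keeps the control connections alive. A minimal sketch, with placeholder
resource names and an arbitrary 60-second interval:

  # bacula-fd.conf (File Daemon)
  FileDaemon {
    Name = remote-fd
    Heartbeat Interval = 60    # keepalive toward the SD/DIR every 60 seconds
  }

  # bacula-sd.conf (Storage Daemon)
  Storage {
    Name = remote-sd
    Heartbeat Interval = 60
  }

  # bacula-dir.conf (Director), per client
  Client {
    Name = remote-fd
    Heartbeat Interval = 60
  }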





I also tried with NAT, not using IPsec and setting the Bacula SD & DIR 
directly in front of the WAN.
The same thing occurs (wrote X bytes but only Y accepted).

I also tried a migration job from SD to SD through the WAN instead of SD -> FD 
through the WAN, and the result was the same (to see if the win32 FD could be 
involved):
  - DIR and SD in the same LAN.
  - Back up the remote FD through the remote SD (the two are in the same LAN) 
for a fast backup: step OK.
  - Then migrate from the remote SD to the SD in the DIR's LAN through the WAN, 
to move the volumes' physical media offsite: step NOK.
The final goal: outsourcing volumes.
I also discarded the gzip compression (just in case).

The errors are quite disturbing:
*   Error: lib/bsock.c:397 Wrote 65566 bytes to client:192.168.0.4:9102, 
but only 65536 accepted
 Fatal error: filed/backup.c:1008 Network send error to SD. 
ERR=Input/output error
Or  (when increasing MaximumNetworkBuffer)
*   Error: lib/bsock.c:397 Wrote 130277 bytes to client:192.168.0.17:9102, 
but only 98304 accepted.
 Fatal error: filed/backup.c:1008 Network send error to SD. 
ERR=Input/output error
Or (Migration job)
*   Fatal error: append.c:319 Network error reading from FD. ERR=Erreur 
d'entrée/sortie
 Error: bsock.c:571 Read expected 131118 got 114684 from Storage 
daemon:192.168.10.54:9103

It looks like there is a gap between the send and receive buffers and, looking 
at the source code, encryption could affect the buffer size.
So I think Bacula-SD could be the cause (maybe).
Could it be a bug?
What could I do to pin down the problem (activating debug in the SD daemon?)

I use Bacula 13.0.3 on Debian 12, with OpenSSL 1.1.

Thanks for helping. Backup scenarios must include a step of relocating the 
backup media to be reliable.




___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


Re: [Bacula-users] fd lose connection

2023-11-07 Thread Lionel PLASSE
Hello,

Could encryption have any impact on my problem?

I am testing without any encryption between SD/DIR/bconsole or FD, and it seems 
to be more stable. (A short job completed fine; the longest job is still 
running: 750 GB to migrate.)

My WAN connection seems to be quite good. I can transfer big and small raw 
files by scp/ssh and don't have ping latency or trouble with the IPsec 
connection.

I also tried with NAT, not using IPsec and setting the Bacula SD & DIR 
directly in front of the WAN.
The same thing occurs (wrote X bytes but only Y accepted).

I also tried a migration job from SD to SD through the WAN instead of SD -> FD 
through the WAN, and the result was the same (to see if the win32 FD could be 
involved):
 - DIR and SD in the same LAN.
 - Back up the remote FD through the remote SD (the two are in the same LAN) 
for a fast backup: step OK.
 - Then migrate from the remote SD to the SD in the DIR's LAN through the WAN, 
to move the volumes' physical media offsite: step NOK.
The final goal: outsourcing volumes.
I also discarded the gzip compression (just in case).

The errors are quite disturbing:
*   Error: lib/bsock.c:397 Wrote 65566 bytes to client:192.168.0.4:9102, 
but only 65536 accepted
    Fatal error: filed/backup.c:1008 Network send error to SD. 
ERR=Input/output error
Or (when increasing Maximum Network Buffer Size; see the sketch after this list):
*   Error: lib/bsock.c:397 Wrote 130277 bytes to client:192.168.0.17:9102, 
but only 98304 accepted.
    Fatal error: filed/backup.c:1008 Network send error to SD. 
ERR=Input/output error
Or (migration job):
*   Fatal error: append.c:319 Network error reading from FD. ERR=Erreur 
d'entrée/sortie
    Error: bsock.c:571 Read expected 131118 got 114684 from Storage 
daemon:192.168.10.54:9103
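
Since the two sides appear to disagree on buffer sizes, one experiment (a
sketch, not a definitive fix) is to pin Maximum Network Buffer Size to the same
value on both daemons; the FD directive goes in the FileDaemon resource, and on
the SD side it is normally set per Device (worth double-checking against the
13.0 documentation):

  # bacula-fd.conf (File Daemon)
  FileDaemon {
    Name = remote-fd                       # placeholder
    Maximum Network Buffer Size = 65536    # keep in step with the SD side
  }

  # bacula-sd.conf (Storage Daemon), per device
  Device {
    Name = FileStorage                     # placeholder
    Maximum Network Buffer Size = 65536
  }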

It looks like there is a gap between the send and receive buffers and, looking 
at the source code, encryption could affect the buffer size.
So I think Bacula-SD could be the cause (maybe).
Could it be a bug?
What could I do to pin down the problem (activating debug in the SD daemon?)
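
On the debugging question: yes, debug output can be raised at runtime from
bconsole, or the SD can be started in the foreground with a debug level. A
minimal sketch (resource names and paths depend on the installation):

  # From bconsole:
  setdebug level=200 storage=RemoteSD          # verbose tracing in the SD
  setdebug level=200 client=remote-fd          # same on the FD
  setdebug level=200 trace=1 storage=RemoteSD  # write debug output to a trace file

  # Or run the SD in the foreground with debugging (path depends on the package):
  /usr/sbin/bacula-sd -f -d 200 -c /etc/bacula/bacula-sd.conf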

I use Bacula 13.0.3 on Debian 12, with OpenSSL 1.1.

Thanks for helping. Backup scenarios must include a step of relocating the 
backup media to be reliable.

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users