[Bacula-users] Progressive VFull Questions

2018-05-22 Thread Alfred Weintoegl

In "New Features in 9.0.0 - Progressive Virtual Full" documentation it says:

"The new directive Delete Consolidated Jobs expects a yes or no value 
that if set to yes will cause any old Job that is consolidated during a 
Virtual Full to be deleted."


Here are some questions from a Bacula newbie:

Which directive decides when jobs are deleted when creating a Virtual 
Full backup:

a) The Retention time of the Incremental Backup Pool for incremental jobs?
b) The Job Retention time of the incremental jobs?
c) Or is the job deleted because of "Delete Consolidated Jobs = yes"?

In case of (c): would the jobs be deleted immediately or only after the 
Virtual Full is finished?
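
For orientation, my (newbie) understanding is that these settings live in 
different resources; a rough sketch with placeholder names, other required 
directives omitted:

Pool {
  Name = Inc-Pool                     # (a) Volume Retention is a Pool directive
  Pool Type = Backup
  Volume Retention = 30 days
}

Client {
  Name = client1-fd                   # (b) Job Retention is a Client directive
  Job Retention = 30 days
}

Job {
  Name = PVF-Job                      # (c) the new 9.0 directive goes into the Job resource
  Delete Consolidated Jobs = yes
}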


Does anyone have experience with Progressive VFull-Backups?


thx
Alfred



Re: [Bacula-users] Progressive VFull Questions

2018-05-23 Thread Alfred Weintoegl

Thank you Kern,

the PVF whitepaper is a great help, because every aspect is explained 
precisely in that paper.

I didn't know about it before.

And thank you, Bill, as well for your prompt answer.

I've only just seen the following PVF demonstration:
https://www.baculasystems.com/ml/pvf3.svg#PVF-title
It looks unbelievable, but it is exactly what we want to use for our 
future backups...
(...as the Progressive Virtual Full Backup is now also available in the 
free Bacula version 9.0.x).



Thanks for this great piece of free software
Alfred


On 23.05.2018 11:14, Kern Sibbald wrote:
> Hello,
>
> Perhaps the PVF whitepaper would help you if you have not already
> seen it: www.bacula.org -> Documentation -> White Papers -> Progressive ...

>
> On 05/22/2018 02:08 PM, Alfred Weintoegl wrote:
>> ...snip (original question)...
>
---
On 22.05.2018 19:08, Bill Arlofski wrote:
...snip
>
>
> Hello Alfred,
>
> Option (c) is the correct choice.
>
>
>> in case of (c): Would jobs be deleted immediately or only after the
>> Virtual Full is finished?
>
> Jobs that are consolidated into a Virtual Full are only deleted once the
> Virtual Full job has successfully completed with a JobStatus of "T". If the
> Virtual Full fails for any reason, no Incrementals will be deleted. And of
> course, this is only true if "DeleteConsolidatedJobs = yes" is set.
>
>
>> Does anyone have experience with Progressive VFull-Backups?
>
> Yes, I do.   Do you have more questions?
>
> Best regards,
>
> Bill
>
>
>
> --
> Bill Arlofski
> http://www.revpol.com/bacula
>
> -- Not responsible for anything below this line --



Re: [Bacula-users] Progressive VFull Questions

2018-05-24 Thread Alfred Weintoegl

Hello Bill, Hello Lloyd,

may I present a VFull backup configuration from our test environment: the 
VFull runs only on Sundays, while on every other day (Mon-Sat) an Incremental runs.
This is a test configuration with the idea that if a VFull does not 
succeed, it should be possible to repeat it within a week.
So there should be one VFull and about 3 months of Incrementals (Backups To 
Keep = 95).


My questions concerning the VFull: Is the total storage requirement always 
twice the size of a Full (Full + VFull the first time, VFull + VFull the 
following times) plus the space for the 95 Incrementals?


And is a "Volume Retention time" or "Job Retention time" still necessary 
when it says "Delete Consolidated Jobs = Yes"?




2 Schedules:

Schedule {
  Name = "WeeklyCycle"
  Run = Incremental Accurate=yes mon-sat at 22:23
}

Schedule {
  Name = "WeeklyVFullCycle"
  Run = VirtualFull Accurate=yes 1st, 2nd, 3rd, 4th, 5th sun at 03:23
}

A Job for the VFull:

Job {
  Name = "VFullHomeAndEtc"
  Type = Backup
  Level = VirtualFull
  Client = "ClientCent50-fd"
  FileSet = "Home and Etc"
  Accurate = Yes
  Backups To Keep = 95
  Messages = Standard
  Pool = VFull-Pool4Cent50-01
  Next Pool = VFull-Pool4Cent50-02
  Schedule = "WeeklyVFullCycle"
  Delete Consolidated Jobs = Yes
}

And a Job for the Incremental (and the first normal Full) backup:

Job {
  Name = "BackupClientCent50"
  JobDefs = "DefaultJob"
  Client = ClientCent50-fd
  Full Backup Pool = Full-Pool4Cent50
  Incremental Backup Pool = Inc-Pool4Cent50
  FileSet = "Home and Etc"
  Accurate = Yes
  Backups To Keep = 95
  Delete Consolidated Jobs = Yes
}


2 Extra-Pools for VFull-Backup (The Incrementals have their own Pool):

Pool {
  Name = VFull-Pool4Cent50-01
  Pool Type = Backup
  Recycle = yes   # automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Action On Purge = Truncate
  Volume Retention = 100 days
  Maximum Volume Bytes = 1G  # Test Volume size
  Label Format = VFullcent50_01-
  Maximum Volumes = 20
  Next Pool = VFull-Pool4Cent50-02
  Storage = File1
}

Pool {
  Name = VFull-Pool4Cent50-02
  Pool Type = Backup
  Recycle = yes   # automatically recycle Volumes
  AutoPrune = yes # Prune expired volumes
  Action On Purge = Truncate
  Volume Retention = 100 days
  Maximum Volume Bytes = 1G  # Test Volume size
  Label Format = VFullcent50_02-
  Maximum Volumes = 20
  Next Pool = VFull-Pool4Cent50-01
  Storage = File2
}


And then I have 3 console RunScripts which run before/after the catalog backup:

# Backup the catalog database (after the nightly save)
Job {
  Name = "BackupCatalog"
  ...snip (catalog-backup)...

  RunScript {
    RunsWhen = Before
    RunsOnClient = No
    Console = "prune expired volume yes"
  }
  RunScript {
    RunsWhen = After
    RunsOnClient = No
    Console = "purge volume action=truncate allpools storage=File1"
  }
  RunScript {
    RunsWhen = After
    RunsOnClient = No
    Console = "purge volume action=truncate allpools storage=File2"
  }
}
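
To check whether these console commands have the intended effect after a run, 
something like the following in bconsole (pool names as defined above) should 
show the state of the volumes:

* list volumes pool=VFull-Pool4Cent50-01
* list volumes pool=VFull-Pool4Cent50-02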

With Kind Regards
Alfred





On 23.05.2018 17:09, Lloyd Brown wrote:
I can't speak for Alfred, but I would love to see example configs of how 
to set up a progressive virtual-full, the underlying jobs, pools, etc.  
I'm sure I could muddle my way through it using the documentation, but I 
find working from examples to be a good bit easier.  Also more likely to 
not only work, but work well.


Then again, I've never even gotten traditional virtual-full working.  
It's been on my todo list for several years now.  Go figure.



--
Lloyd Brown
Systems Administrator
Fulton Supercomputing Lab
Brigham Young University
http://marylou.byu.edu


On 05/22/2018 11:08 AM, Bill Arlofski wrote:

Does anyone have experience with Progressive VFull-Backups?

Yes, I do.   Do you have more questions?






Re: [Bacula-users] Progressive VFull Questions

2018-05-25 Thread Alfred Weintoegl

Hello Bill, many thanks for your reply.

I'll try these very useful tips right away in our test environment.

And if possible, I would appreciate it very much if you could send 
some sample configs...
As Lloyd said: "...working from examples to be a good bit easier.  Also 
more likely to not only work, but work well".



Best Regards
Alfred


On 24.05.2018 23:48, Bill Arlofski wrote:

Hello Alfred,

A few things jump right out at me about your sample config...

Consider a standard Full job vs. an Incremental or Differential. You do not
define three different Jobs, each with a different level. That does not work
in the context of Bacula.  With Bacula, you define one Job, and then run it at
different levels at specified times.

It is a similar case with Virtual Full jobs.

With VFulls, you define ONE Job. Then, you set up a schedule that runs
Incrementals on all the days you want, and a Virtual Full level on the day and
time you wish the Virtual Full to be triggered.

I always recommend that the Job is configured with "Level = Incremental", so
that whenever you run it manually, it will most likely already be what you want.
If you need a Full or Virtual Full run, this can be changed before
submitting the job, or on the bconsole command line, for example:

* run job=someJob level=Full



If you have different pools for your normal Fulls and Incrementals, they will
both require a "NextPool = <pool>" setting, where <pool> is the Pool you wish to
keep your Virtual Fulls in. Of course the <pool> will need to be the same in
the Full and Incremental pools.  Additionally, you can also set/override the
NextPool in the Schedule, so it would not be required in the Pools at all.
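
Using the pool names from your test config, that might look something like
this (just a sketch, only the directives relevant here are shown):

8<
Pool {
  Name = Full-Pool4Cent50              # your normal Full pool
  Pool Type = Backup
  Next Pool = VFull-Pool4Cent50-01     # where the Virtual Fulls should end up
}

Pool {
  Name = Inc-Pool4Cent50               # your Incremental pool
  Pool Type = Backup
  Next Pool = VFull-Pool4Cent50-01     # must be the same NextPool as above
}
8<

Or, if I remember the Run override syntax correctly, set it in the Schedule
instead:

8<
Run = Level=VirtualFull NextPool=VFull-Pool4Cent50-01 Priority=12 sun at 23:30
8<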


Your schedules do not need the Accurate=yes option since it is set in your Job
already. They also do not need to specify the "Level=" except for the
VirtualFull Run line since the Level is already defined in the Job as 
Incremental.


For the schedule, it might look something simple like:
8<
Schedule {
  Name = "PVF_Schedule"
  Run = at 23:00
  Run = Level=VirtualFull Priority=12 sun at 23:30
}
8<

So, basically, what that example Schedule says is:
- Run Incrementals every day at 23:00. The Job has "Level = Incremental"
   defined, so no need to add it in the Schedule
- Run the VFull on Sunday at 23:30 with a different priority than normal
   jobs. This will ensure that the Incrementals finish before the VFull
   starts.

Additionally, you will want to add "AllowDuplicateJobs=yes" to your Job
configuration so that if the Sunday Incremental has not finished, the Sunday
Virtual Full can be queued and will wait. Otherwise it will be
automatically cancelled.
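
Putting these pieces together, your one PVF Job might then look roughly like
this (client, fileset and pool names taken from your test config, the Storage
is just a guess - treat it as a sketch, not a drop-in config):

8<
Job {
  Name = "BackupClientCent50"
  Type = Backup
  Level = Incremental                  # default level; the VFull comes from the Schedule
  Client = ClientCent50-fd
  FileSet = "Home and Etc"
  Schedule = "PVF_Schedule"
  Messages = Standard
  Pool = Inc-Pool4Cent50               # NextPool comes from the Pools (or the Schedule)
  Full Backup Pool = Full-Pool4Cent50
  Storage = File1                      # or wherever your volumes actually live
  Accurate = Yes
  Backups To Keep = 95
  Delete Consolidated Jobs = Yes
  Allow Duplicate Jobs = Yes           # lets the Sunday VFull queue behind a running Incremental
}
8<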


Also, keep in mind that since you have set "Backups To Keep = 95", this will
require that you run one Full (this will happen automatically the first time
Bacula is told to run the Job with Level=Incremental when no Full exists), and
then at least 96 Incrementals before you can run a Virtual Full.

In this case, the easiest way to do this is to simply comment out the
"Level=VirtualFull" line in the PVF_Schedule until you have the Full backup
and all the required Incrementals. If you do not do this, then, each Sunday
when the Schedule triggers the VirtualFull backup, the Job will fail with the
error "not enough backups" - until you have the required 96 jobs. :)


I think this should get you onto the right track. If real sample configs are
required, I think I can probably dig something up from my test environments,
or build some from scratch.  :)

Best regards,

Bill





Re: [Bacula-users] Virtual full backup ends with error

2019-01-11 Thread Alfred Weintoegl

Hello azur,

we had an error a few days ago which is very similar to yours: the VFull had 
been working fine for about half a year, and suddenly Bacula could not 
complete the VFull job. The error occurs near the end of the job.


Here are the last lines from our job report:
...
05-Jan 02:04 npkj-dir JobId 668: All records pruned from Volume 
"VFullHallo02_02-0018"; marking it "Purged"

05-Jan 02:04 npkj-dir JobId 668: Recycled volume "VFullHallo02_02-0018"
05-Jan 02:04 npkj-sd JobId 668: Recycled volume "VFullHallo02_02-0018" 
on File device "FileChgr2-Dev2" (/bacula/backup), all previous data lost.
05-Jan 02:04 npkj-sd JobId 668: New volume "VFullHallo02_02-0018" 
mounted on device "FileChgr2-Dev2" (/bacula/backup) at 05-Jan-2019 02:04.
05-Jan 07:47 npkj-sd JobId 668: End of Volume "VFullHallo02_02-0011" at 
addr=536870854879 on device "FileChgr2-Dev1" (/bacula/backup).
05-Jan 07:47 npkj-sd JobId 668: Ready to read from volume 
"VFullHallo02_02-0012" on File device "FileChgr2-Dev1" (/bacula/backup).
05-Jan 07:47 npkj-sd JobId 668: Forward spacing Volume 
"VFullHallo02_02-0012" to addr=265
05-Jan 07:47 npkj-sd JobId 668: Fatal error: block_util.c:425 Volume 
data error at 0:0! Wanted ID: "BB02", got "". Buffer discarded.
05-Jan 07:47 npkj-sd JobId 668: Fatal error: vbackup.c:130 Fatal append 
error on device "FileChgr2-Dev2" (/bacula/backup): ERR=file_dev.c:190 
Could not open(/bacula/backup/Next-0026,OPEN_READ_WRITE,0640): ERR=No 
such file or directory


05-Jan 07:47 npkj-sd JobId 668: Elapsed time=11:17:33, Transfer 
rate=31.00 M Bytes/second

05-Jan 07:47 npkj-dir JobId 668: Error: Bacula npkj-dir 9.0.6 (20Nov17):
  Build OS:   x86_64-redhat-linux-gnu redhat (Core)
...

These are my permissions:

[root@bacula backup]# ls -ld
drwx------ 2 bacula bacula 4096 Dec  9 09:38 .
[root@bacula backup]# pwd
/bacula/backup
[root@bacula backup]# ls -l
total 10550886592
-rw-r----- 1 bacula tape   4033140368 Dec  9 09:40 Next-0026
[root@bacula backup]# df -h
Filesystem  Size  Used Avail Use% Mounted on
...
/dev/mapper/bacula-bacula   30T  9.9T   21T  33% /bacula
...


I'll try a VFull again this weekend...


Regards
Alfred


On 11.01.2019 08:45, azu...@pobox.sk wrote:

Hi all,

I'm having problems with a virtual full backup on one of our clients. 
Everything was working fine for years, but suddenly Bacula is unable to 
complete the job - I have tried to run it about 10 times during the past few 
days, and all attempts end with the same weird error.


The error occurs near the end of the job, after 240 volumes have been 
created (we have a limit of 5 GB for volume size) and 1.2 TB of data has 
been copied.


Here are the last lines from the job report:

11-jan 05:40 server08-sd JobId 56875: Ready to read from volume 
"cloud0015-16052" on File device "cloud0015-file-storage" 
(/backup/cloud0015).
11-jan 05:40 server08-sd JobId 56875: Forward spacing Volume 
"cloud0015-16052" to addr=226
11-jan 05:40 server00-dir JobId 56875: Volume used once. Marking Volume 
"cloud0015-17313" as Used.
11-jan 05:40 server08-sd JobId 56875: End of medium on Volume 
"cloud0015-17313" Bytes=5,368,688,735 Blocks=83,220 at 11-Jan-2019 05:40.
11-jan 05:40 server00-dir JobId 56875: Created new 
Volume="cloud0015-17314", Pool="cloud0015-pool", 
MediaType="File-cloud0015" in catalog.
11-jan 05:40 server08-sd JobId 56875: Labeled new Volume 
"cloud0015-17314" on File device "cloud0015-file-storage-2" 
(/backup/cloud0015).
11-jan 05:40 server08-sd JobId 56875: Wrote label to prelabeled Volume 
"cloud0015-17314" on File device "cloud0015-file-storage-2" 
(/backup/cloud0015)
11-jan 05:40 server00-dir JobId 56875: Volume used once. Marking Volume 
"cloud0015-17314" as Used.
11-jan 05:40 server08-sd JobId 56875: New volume "cloud0015-17314" 
mounted on device "cloud0015-file-storage-2" (/backup/cloud0015) at 
11-Jan-2019 05:40.
11-jan 05:40 server00-dir JobId 56875: Volume used once. Marking Volume 
"cloud0015-17314" as Used.
11-jan 05:40 server00-dir JobId 56875: Volume used once. Marking Volume 
"cloud0015-17314" as Used.
11-jan 05:41 server00-dir JobId 56875: Volume used once. Marking Volume 
"cloud0015-17314" as Used.
11-jan 05:41 server00-dir JobId 56875: Volume used once. Marking Volume 
"cloud0015-17314" as Used.
11-jan 05:41 server08-sd JobId 56875: End of Volume "cloud0015-16052" at 
addr=5368688648 on device "cloud0015-file-storage" (/backup/cloud0015).
11-jan 05:41 server08-sd JobId 56875: Ready to read from volume 
"cloud0015-16053" on File device "cloud0015-file-storage" 
(/backup/cloud0015).
11-jan 05:41 server08-sd JobId 56875: Forward spacing Volume 
"cloud0015-16053" to addr=226
11-jan 05:41 server00-dir JobId 56875: Volume used once. Marking Volume 
"cloud0015-17314" as Used.
11-jan 05:41 server08-sd JobId 56875: End of medium on Volume 
"cloud0015-17314" Bytes=5,368,688,678 Blocks=83,220 at 11-Jan-2019 05:41.
11-jan 05:41 server00-dir JobId 56875: Created new 
Volume="cloud0015-17315", Pool="cloud0015-pool", 
MediaType="File-clo

Re: [Bacula-users] CentOS install 9.4.2 broken

2019-04-18 Thread Alfred Weintoegl

Hello MI,

for me, the following additions to the command do the trick:

yum install bacula-postgresql  --exclude=bacula-mysql --exclude=mariadb
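
If you want the exclusion to be permanent (so a later plain "yum update" does 
not pull bacula-mysql back in), I think you can also put it into the repo file 
itself, along these lines:

# /etc/yum.repos.d/Bacula.repo  (the repo file from the installation guide)
[Bacula-Community]
# ... keep the existing name/baseurl/gpgcheck/gpgkey lines as they are ...
exclude=bacula-mysql*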


Regards
Alfred


On 18.04.2019 15:51, MI wrote:

The .rpm for 9.4.2 cannot be installed because of

     "conflicts between attempted installs of 
bacula-postgresql-9.4.2-1.el7.x86_64 and bacula-mysql-9.4.2-1.el7.x86_64"


I followed the instructions in 
https://blog.bacula.org/whitepapers/CommunityInstallationGuide.pdf


The console output is below:

# cat /etc/redhat-release
     CentOS Linux release 7.5.1804 (Core)

# uname -a
     Linux zukini 3.10.0-862.14.4.el7.x86_64 #1 SMP Wed Sep 26 15:12:11 
UTC 2018 x86_64 x86_64 x86_64 GNU/Linux


# wget 
https://www.bacula.org/downloads/Bacula-4096-Distribution-Verification-key.asc

     ...

# rpm --import Bacula-4096-Distribution-Verification-key.asc

# rm Bacula-4096-Distribution-Verification-key.asc

# mcedit /etc/yum.repos.d/Bacula.repo

     (edited as described in 
https://blog.bacula.org/whitepapers/CommunityInstallationGuide.pdf)


# yum install bacula-postgresql

Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
epel/x86_64/metalink | 29 kB 00:00:00
* base: mirror.init7.net
* elrepo: mirrors.coreix.net
* epel: mirror.init7.net
* extras: ftp.rz.uni-frankfurt.de
* updates: mirror.init7.net
Bacula-Community | 2.9 kB 00:00:00
base | 3.6 kB 00:00:00
cuda-10-0-local-10.0.130-410.48 | 2.5 kB 00:00:00
elrepo | 2.9 kB 00:00:00
extras | 3.4 kB 00:00:00
pgdg96 | 4.1 kB 00:00:00
updates | 3.4 kB 00:00:00
Bacula-Community/primary_db | 10 kB 00:00:00
Resolving Dependencies
--> Running transaction check
---> Package bacula-postgresql.x86_64 0:9.4.2-1.el7 will be installed
--> Processing Dependency: bacula-libs for package:
bacula-postgresql-9.4.2-1.el7.x86_64
--> Processing Dependency: perl(Logwatch) for package:
bacula-postgresql-9.4.2-1.el7.x86_64
--> Processing Dependency: libbacfind-9.4.2.so()(64bit) for package:
bacula-postgresql-9.4.2-1.el7.x86_64
--> Processing Dependency: libbac-9.4.2.so()(64bit) for package:
bacula-postgresql-9.4.2-1.el7.x86_64
--> Processing Dependency: libbaccfg-9.4.2.so()(64bit) for package:
bacula-postgresql-9.4.2-1.el7.x86_64
--> Running transaction check
---> Package bacula-libs.x86_64 0:9.4.2-1.el7 will be installed
---> Package bacula-mysql.x86_64 0:9.4.2-1.el7 will be installed
--> Processing Dependency: mysql for package:
bacula-mysql-9.4.2-1.el7.x86_64
--> Running transaction check
---> Package mariadb.x86_64 1:5.5.60-1.el7_5 will be installed
--> Finished Dependency Resolution
--> Finding unneeded leftover dependencies
Found and removing 0 unneeded dependencies

Dependencies Resolved



Package Arch Version Repository Size


Installing:
bacula-postgresql x86_64 9.4.2-1.el7 Bacula-Community 2.9 M
Installing for dependencies:
bacula-libs x86_64 9.4.2-1.el7 Bacula-Community 752 k
bacula-mysql x86_64 9.4.2-1.el7 Bacula-Community 2.9 M
mariadb x86_64 1:5.5.60-1.el7_5 base 8.9 M

Transaction Summary


Install 1 Package (+3 Dependent packages)

Total download size: 15 M
Installed size: 73 M
Is this ok [y/d/N]: y
Downloading packages:
(1/4): bacula-libs-9.4.2-1.el7.x86_64.rpm | 752 kB 00:00:00
(2/4): bacula-mysql-9.4.2-1.el7.x86_64.rpm | 2.9 MB 00:00:01
(3/4): mariadb-5.5.60-1.el7_5.x86_64.rpm | 8.9 MB 00:00:01
(4/4): bacula-postgresql-9.4.2-1.el7.x86_64.rpm | 2.9 MB 00:00:01


Total 7.4 MB/s | 15 MB 00:00:02
Running transaction check
Running transaction test


Transaction check error:
file /opt/bacula/lib64/libbaccats-9.4.2.so conflicts between
attempted installs of bacula-postgresql-9.4.2-1.el7.x86_64 and
bacula-mysql-9.4.2-1.el7.x86_64
file /opt/bacula/lib64/libbacsd-9.4.2.so conflicts between attempted
installs of bacula-postgresql-9.4.2-1.el7.x86_64 and
bacula-mysql-9.4.2-1.el7.x86_64
file /opt/bacula/scripts/bacula_config conflicts between attempted
installs of bacula-postgresql-9.4.2-1.el7.x86_64 and
bacula-mysql-9.4.2-1.el7.x86_64
file /opt/bacula/scripts/create_bacula_database conflicts between
attempted installs of bacula-postgresql-9.4.2-1.el7.x86_64 and
bacula-mysql-9.4.2-1.el7.x86

Re: [Bacula-users] Failed to connect to Client -fd

2019-05-02 Thread Alfred Weintoegl

Try:

netstat -lnp |grep 910
(or: ss -lntp | less -S)


...on the server it should show something like:

... your-ip-addr:9101  ...LISTEN... (bacula-dir)
... 0.0.0.0:9102       ...LISTEN... (bacula-fd)
... 0.0.0.0:9103       ...LISTEN... (bacula-sd)


and on the client (on the other machine):

... 0.0.0.0:9102 ...LISTEN...

(Listening on 0.0.0.0 means accepting connections from anywhere.)

If there is a "127.0.0.1:9102", then you have to comment out

FDAddress = 127.0.0.1

in bacula-fd.conf (/etc/bacula/bacula-fd.conf)

(listening on 127.0.0.1 means connections are accepted only from that very machine).

And then:
systemctl restart bacula-fd.service
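
For illustration, the FileDaemon resource in bacula-fd.conf would then look 
roughly like this (the name and paths are from a stock install and may differ 
on your system):

FileDaemon {
  Name = client1-fd                  # placeholder name
  FDport = 9102
  # FDAddress = 127.0.0.1            # commented out, so bacula-fd listens on 0.0.0.0
  WorkingDirectory = /var/spool/bacula
  Pid Directory = /var/run
  Maximum Concurrent Jobs = 20
}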


Regards
Alfred


On 02.05.2019 13:47, Pieter Sybesma via Bacula-users wrote:
Maybe a silly question, but do the addresses match at the client and the 
director?
Does the name client.hostname from the client configuration resolve to the 
address of the client system? If it is a different system, the director 
cannot connect to the file daemon. Also, if I remember correctly, the file 
daemon should be able to reach the director and the storage daemon in 
version 5.x. The Director is listening on 127.0.0.1:9101.


Pieter

On 2 May 2019 at 11:40, preash raj wrote:



I've also tried debugging bconsole; please find the result attached.

# bconsole -d 100 -dt
Connecting to Director localhost:9101
02-May-2019 05:30:42 bconsole: bsock.c:236-0 Current 
host[ipv6:::1:9101] All host[ipv6:::1:9101] host[ipv4:127.0.0.1:65535 
]
02-May-2019 05:30:42 bconsole: bsock.c:236-0 Current 
host[ipv4:127.0.0.1:9101 ] All 
host[ipv6:::1:9101] host[ipv4:127.0.0.1:9101 ]
02-May-2019 05:30:42 bconsole: bsock.c:157-0 who=Director daemon 
host=localhost port=9101
02-May-2019 05:30:42 bconsole: cram-md5.c:131-0 cram-get received: 
auth cram-md5 <282828203.1556789442@bacula-dir> ssl=0
02-May-2019 05:30:42 bconsole: cram-md5.c:150-0 sending resp to 
challenge: QT/fB9/mn9woD+IqY5+XlD
02-May-2019 05:30:42 bconsole: cram-md5.c:79-0 send: auth cram-md5 
<1502355591.1556789442@bconsole> ssl=0
02-May-2019 05:30:42 bconsole: cram-md5.c:98-0 Authenticate OK 
yw+DUAdzxH/MeD9/58+PXB

02-May-2019 05:30:42 bconsole: authenticate.c:150-0 >dird: 1000 OK auth
02-May-2019 05:30:42 bconsole: authenticate.c:157-0 bacula-dir Version: 5.2.13 (19 February 2013)

1000 OK: bacula-dir Version: 5.2.13 (19 February 2013)
02-May-2019 05:30:42 bconsole: console.c:1208-0 Opened connection with 
Director daemon

Enter a period to cancel a command.
*status client
The defined Client resources are:
 1: bacula-fd
 2: -fd
 3: -fd
Select Client (File daemon) resource (1-3): 02-May-2019 05:30:48 
bconsole: console.c:329-0 Got poll BNET_SUB_PROMPT

2
Connecting to Client -fd at :9102
Failed to connect to Client -fd.

You have messages.
02-May-2019 05:31:12 bconsole: console.c:329-0 Got poll BNET_EOD


# bconsole
Connecting to Director localhost:9101
1000 OK: bacula-dir Version: 5.2.13 (19 February 2013)
Enter a period to cancel a command.
*setdebug level=100 All
Connecting to Storage daemon File at :9103
3000 OK setdebug=100
Connecting to Client bacula-fd at localhost:9102
2000 OK setdebug=100 trace=0 hangup=0
Connecting to Client -fd at :9102
Failed to connect to Client.
Connecting to Client -fd at :9102
Failed to connect to Client.


On Thu, May 2, 2019 at 11:15 AM preash raj wrote:


Hi Guys,

I've tried debugging after stopping the services, please check the
server and client results below. Any clue?


Bacula Server:
=
[root@bacula ~]# bacula-dir -f -d 100
bacula-dir: dird.c:223-0 Debug level = 100
bacula-dir: jcr.c:140-0 read_last_jobs seek to 192
bacula-dir: jcr.c:147-0 Read num_items=10
bacula-dir: dir_plugins.c:160-0 Load dir plugins
bacula-dir: dir_plugins.c:162-0 No dir plugin dir!
bacula-dir: mysql.c:709-0 db_init_database first time
bacula-dir: mysql.c:177-0 mysql_init done
bacula-dir: mysql.c:202-0 mysql_real_connect done
bacula-dir: mysql.c:204-0 db_user=bacula db_name=bacula
db_password=admin@123
bacula-dir: mysql.c:227-0 opendb ref=1 connected=1 db=5578f76a53a0
bacula-dir: mysql.c:249-0 closedb ref=0 connected=1 db=5578f76a53a0
bacula-dir: mysql.c:256-0 close db=5578f76a53a0
bacula-dir: dird.c:1215-0 Unlink:
/var/spool/bacula/bacula-dir.bacula-dir.1653251192.mail
bacula-dir: pythonlib.c:102-0 No script dir. prog=DirStartUp
bacula-dir: bnet_server.c:112-0 Addresses host[ipv4:127.0.0.1:9101
]
bacula-dir: job.c:1334-0 wstorage=File
bacula-dir: job.c:1343-0 wstore=File where=Job resource
bacula-dir: job.c:1034-0 JobId=0 created
Job=*JobMonitor*.2019-05-02_01.37.47_01

[root@bacula ~]# bacula-sd -f -d 100
bacula-sd: stored_conf.c:704-0 Inserting director res: bacula-mon
bacula-sd: jcr.c:140-0 read_last_jobs seek to 192
bacula-sd: jcr.c:147-0 Read