Excellent, thank you very much. One last question: I need the client for
Ubuntu 14 LTS. I tried to compile it on the server, but the version there is
obsolete; I then compiled it in another environment (CentOS 9 Stream) to copy
it over, and it didn't work, something is always missing. My question:
through a "donation", could you give me a single download of a bareos-fd
(client) build compatible with my Ubuntu 14 LTS?
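
(A note on the copied binary, as a rough sketch: a client built on CentOS 9
Stream links against a far newer glibc and OpenSSL than Ubuntu 14 LTS ships,
so missing libraries are expected. You can see exactly what is unresolved on
the Ubuntu 14 machine with standard tools; /path/to/bareos-fd below is a
placeholder:

  ldd /path/to/bareos-fd | grep "not found"        # list unresolved shared libraries
  objdump -T /path/to/bareos-fd | grep GLIBC_ | sort -u   # glibc symbol versions the binary needs

If the binary requires GLIBC versions newer than Ubuntu 14's glibc 2.19, it
cannot run there; the usual way out is to build inside an Ubuntu 14 container
or chroot rather than on a newer distro.)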
On Tuesday, February 4, 2025 at 5:30:15 AM UTC-3, Bruno Friedmann
(bruno-at-bareos) wrote:
> What I know is that you need to carefully size and tune the PostgreSQL
> database depending on your hardware and, of course, on the number of
> files/jobs stored in it. It is also often recommended to keep free space
> equal to 50% of the occupied space for large operations like generating
> the bvfs cache for big jobs.
>
> I've seen numerous people get transient errors because PG simply didn't
> have enough space to write a temporary result and dropped the query (a
> microsecond later the space used is free again), which makes this hard
> to diagnose.
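>
> (A quick way to check whether that is happening, sketched assuming a
> standard PostgreSQL setup and the usual "bareos" catalog database name:
> set log_temp_files = 0 in postgresql.conf so every temp-file spill is
> logged, then watch the per-database counters from a shell:
>
>   psql -U postgres -c "SELECT datname, temp_files, temp_bytes FROM pg_stat_database WHERE datname = 'bareos';"
>
> If temp_bytes approaches the free space on the PG volume, transient
> failures like the above are the likely result.)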
>
>
> On Monday, 3 February 2025 at 19:47:34 UTC+1 Rodrigo Yesi wrote:
>
>> Yes, from bconsole I see all of them, from all the clients! I'm looking
>> at them right now. I had to give more resources to the DB; there was a
>> timeout between the web UI and the DB. I increased the timeout, gave the
>> DB more resources, and now I see them correctly. I don't know if it
>> matters, but I had old backups from the previous Bareos version,
>> 18.x.x.x. Could it be trying to read those old backups and hitting a bug?
>>
>> On Monday, February 3, 2025 at 5:39:24 AM UTC-3, Bruno Friedmann
>> (bruno-at-bareos) wrote:
>>
>>> Are you able to see the files for restore in bconsole?
>>>
>>>
>>> On Thursday, 30 January 2025 at 19:36:58 UTC+1 Rodrigo Yesi wrote:
>>>
>>>>
>>>> I did it, but it still doesn't show the files for the restore.
>>>> On Thursday, January 30, 2025 at 11:33:57 AM UTC-3, Bruno Friedmann
>>>> (bruno-at-bareos) wrote:
>>>>
>>>>> You may do this in bconsole, so there's no timeout when recreating the
>>>>> bvfs cache.
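>>>>>
>>>>> (For example, a minimal sketch; jobid 27 is taken from the error
>>>>> further down this thread:
>>>>>
>>>>>   bconsole <<< ".bvfs_update jobid=27"   # rebuild the cache for one job
>>>>>   bconsole <<< ".bvfs_update"            # or for all jobs at once
>>>>>
>>>>> Updating one job at a time keeps each INSERT smaller and less likely
>>>>> to hit a database timeout.)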
>>>>>
>>>>>
>>>>> On Thursday, 30 January 2025 at 15:31:22 UTC+1 Rodrigo Yesi wrote:
>>>>>
>>>>>> Hello, I increased the timeout and it completed, but the web UI
>>>>>> still shows blank.
>>>>>>
>>>>>> On Thursday, January 30, 2025 at 11:12:00 AM UTC-3, Rodrigo Yesi
>>>>>> wrote:
>>>>>>
>>>>>>> Jan 30 10:54 bareos-dir JobId 0: Fatal error: cats/bvfs.cc:247
>>>>>>> cats/bvfs.cc:247 query INSERT INTO PathVisibility (PathId, JobId)
>>>>>>> SELECT a.PathId, 27 FROM (SELECT DISTINCT h.PPathId AS PathId FROM
>>>>>>> PathHierarchy AS h JOIN PathVisibility AS p ON (h.PathId=p.PathId)
>>>>>>> WHERE p.JobId=27) AS a LEFT JOIN PathVisibility AS b ON (b.JobId=27
>>>>>>> AND a.PathId=b.PathId) WHERE b.PathId IS NULL failed: ERROR:
>>>>>>> canceling statement due to statement timeout
>>>>>>> Jan 30 10:54 bareos-dir JobId 0: Error: cats/bvfs.cc:251
>>>>>>> cats/bvfs.cc:251 update UPDATE Job SET HasCache=1 WHERE JobId=27
>>>>>>> failed: ERROR: current transaction is aborted, commands ignored
>>>>>>> until end of transaction block
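>>>>>>>
>>>>>>> (The first ERROR means PostgreSQL's own statement_timeout fired
>>>>>>> before the bvfs query finished; the second is just the aborted
>>>>>>> transaction that follows. A minimal sketch of raising the limit,
>>>>>>> assuming psql access as the postgres superuser; '30min' is only an
>>>>>>> example value:
>>>>>>>
>>>>>>>   psql -U postgres -c "SHOW statement_timeout;"                        # current limit, 0 = disabled
>>>>>>>   psql -U postgres -c "ALTER SYSTEM SET statement_timeout = '30min';" # raise it server-wide
>>>>>>>   psql -U postgres -c "SELECT pg_reload_conf();"                      # apply without restart
>>>>>>> )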
>>>>>>>
>>>>>>> On Thursday, January 30, 2025 at 6:58:53 AM UTC-3, Bruno Friedmann
>>>>>>> (bruno-at-bareos) wrote:
>>>>>>>
>>>>>>>> You might want to clean up your bvfs cache and try to regenerate it:
>>>>>>>>
>>>>>>>> bconsole <<< ".bvfs_clear_cache yes"
>>>>>>>>
>>>>>>>> then recreate it completely
>>>>>>>>
>>>>>>>> bconsole <<< ".bvfs_update"
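>>>>>>>>
>>>>>>>> (To verify the rebuild actually completed, one option, sketched
>>>>>>>> assuming the default "bareos" catalog database: the director sets
>>>>>>>> HasCache=1 on each job once its bvfs cache exists, so:
>>>>>>>>
>>>>>>>>   sudo -u postgres psql bareos -c "SELECT JobId, Name, HasCache FROM Job ORDER BY JobId;"
>>>>>>>>
>>>>>>>> Any job still showing HasCache=0 has no cache yet and is likely
>>>>>>>> what comes up blank in the webui restore view.)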
>>>>>>>>
>>>>>>>> On Thursday, 30 January 2025 at 04:01:54 UTC+1 Rodrigo Yesi wrote:
>>>>>>>>
>>>>>>>>> This is from today; the job is OK, but I don't see it in the web UI.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 29-Jan 21:00 bareos-dir JobId 38: Version: 24.0.1~pre27.250812184 (24 January 2025) Red Hat Enterprise Linux release 9.5 (Plow)
>>>>>>>>> 29-Jan 21:00 bareos-dir JobId 38: Start Backup JobId 38, Job=Backup-AVY.2025-01-29_21.00.00_03
>>>>>>>>> 29-Jan 21:00 bareos-dir JobId 38: Connected Storage daemon at 192.168.12.27:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
>>>>>>>>> 29-Jan 21:00 bareos-dir JobId 38: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
>>>>>>>>> 29-Jan 21:00 bareos-dir JobId 38: Probing client protocol... (result will be saved until config reload)
>>>>>>>>> 29-Jan 21:00 bareos-dir JobId 38: Connected Client: AMB-SVR-AVY-fd at 192.168.6.91:9102, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
>>>>>>>>> 29-Jan 21:00 bareos-dir JobId 38: Handshake: Immediate TLS
>>>>>>>>> 29-Jan 21:00 bareos-dir JobId 38: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
>>>>>>>>> 29-Jan 21:00 bareos-sd JobId 38: Using just in time reservation for job 38
>>>>>>>>> 29-Jan 21:00 bareos-dir JobId 38: Using Device "JustInTime Device" to write.
>>>>>>>>> 29-Jan 20:59 amb-svr-avy-fd JobId 38: Created 24 wildcard excludes from FilesNotToBackup Registry key
>>>>>>>>> 29-Jan 20:59 amb-svr-avy-fd JobId 38: Connected Storage daemon at 192.168.12.27:9103, encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
>>>>>>>>> 29-Jan 20:59 amb-svr-avy-fd JobId 38: Encryption: TLS_CHACHA20_POLY1305_SHA256 TLSv1.3
>>>>>>>>> 29-Jan 21:00 bareos-sd JobId 38: Version: 24.0.1~pre27.250812184 (24 January 2025) Red Hat Enterprise Linux release 9.5 (Plow)
>>>>>>>>> 29-Jan 21:00 amb-svr-avy-fd JobId 38: Generate VSS snapshots. Driver="Win64 VSS"
>>>>>>>>> 29-Jan 21:00 amb-svr-avy-fd JobId 38: VolumeMountpoints are not processed as onefs = yes.
>>>>>>>>> 29-Jan 21:00 amb-svr-avy-fd JobId 38: (C:\)\\?\Volume{2077a78b-2dee-4fbe-b9a2-ab119663189b}\ -> \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy5
>>>>>>>>> 29-Jan 21:00 amb-svr-avy-fd JobId 38: Version: 24.0.1~pre27.250812184 (24 January 2025) Microsoft Windows Server 2012 Standard Edition (build 9200), 64-bit
>>>>>>>>> 29-Jan 21:00 bareos-sd JobId 38: JustInTime Reservation: Finding drive to reserve.
>>>>>>>>> 29-Jan 21:00 bareos-dir JobId 38: Created new Volume "VolAVY-0041" in catalog.
>>>>>>>>> 29-Jan 21:00 bareos-sd JobId 38: Using Device "MyDevice2" (/Storage2) to write.
>>>>>>>>> 29-Jan 21:00 bareos-sd JobId 38: Labeled new Volume "VolAVY-0041" on device "MyDevice2" (/Storage2).
>>>>>>>>> 29-Jan 21:00 bareos-sd JobId 38: Moving to end of data on volume "VolAVY-0041"
>>>>>>>>> 29-Jan 21:00 bareos-sd JobId 38: Ready to append to end of Volume "VolAVY-0041" size=221
>>>>>>>>> 29-Jan 21:00 bareos-dir JobId 38: Max Volume jobs=1 exceeded. Marking Volume "VolAVY-0041" as Used.
>>>>>>>>> 29-Jan 21:52 amb-svr-avy-fd JobId 38: VSS Writer (BackupComplete): "Task Scheduler Writer", State: 0x1 (VSS_WS_STABLE)
>>>>>>>>> 29-Jan 21:52 amb-svr-avy-fd JobId 38: VSS Writer (BackupComplete): "VSS Metadata Store Writer", State: 0x1 (VSS_WS_STABLE)
>>>>>>>>> 29-Jan 21:52 amb-svr-avy-fd JobId 38: VSS Writer (BackupComplete): "Performance Counters Writer", State: 0x1 (VSS_WS_STABLE)
>>>>>>>>> 29-Jan 21:52 amb-svr-avy-fd JobId 38: VSS Writer (BackupComplete): "System Writer", State: 0x1 (VSS_WS_STABLE)
>>>>>>>>> 29-Jan 21:52 amb-svr-avy-fd JobId 38: VSS Writer (BackupComplete): "ASR Writer", State: 0x1 (VSS_WS_STABLE)
>>>>>>>>> 29-Jan 21:52 amb-svr-avy-fd JobId 38: VSS Writer (BackupComplete): "BITS Writer", State: 0x1 (VSS_WS_STABLE)
>>>>>>>>> 29-Jan 21:52 amb-svr-avy-fd JobId 38: VSS Writer (BackupComplete): "WMI Writer", State: 0x1 (VSS_WS_STABLE)
>>>>>>>>> 29-Jan 21:52 amb-svr-avy-fd JobId 38: VSS Writer (BackupComplete): "Registry Writer", State: 0x1 (VSS_WS_STABLE)
>>>>>>>>> 29-Jan 21:52 amb-svr-avy-fd JobId 38: VSS Writer (BackupComplete): "Shadow Copy Optimization Writer", State: 0x1 (VSS_WS_STABLE)
>>>>>>>>> 29-Jan 21:52 amb-svr-avy-fd JobId 38: VSS Writer (BackupComplete): "COM+ REGDB Writer", State: 0x1 (VSS_WS_STABLE)
>>>>>>>>> 29-Jan 21:52 bareos-sd JobId 38: Releasing device "MyDevice2" (/Storage2).
>>>>>>>>> 29-Jan 21:52 bareos-sd JobId 38: Elapsed time=00:52:25, Transfer rate=2.321 M Bytes/second
>>>>>>>>> 29-Jan 21:52 bareos-dir JobId 38: Insert of attributes batch table with 70834 entries start
>>>>>>>>> 29-Jan 21:52 bareos-dir JobId 38: Insert of attributes batch table done
>>>>>>>>> 29-Jan 21:52 bareos-dir JobId 38: Bareos bareos-dir 24.0.1~pre27.250812184 (24Jan25):
>>>>>>>>>
>>>>>>>>> Build OS:               Red Hat Enterprise Linux release 9.5 (Plow)
>>>>>>>>> JobId:                  38
>>>>>>>>> Job:                    Backup-AVY.2025-01-29_21.00.00_03
>>>>>>>>> Backup Level:           Full
>>>>>>>>> Client:                 "AMB-SVR-AVY-fd" 24.0.1~pre27.250812184 (24Jan25) Microsoft Windows Server 2012 Standard Edition (build 9200), 64-bit, Cross-compile
>>>>>>>>> FileSet:                "MyFileSetAVY" 2025-01-24 09:03:38
>>>>>>>>> Pool:                   "Pool-AVY" (From Run Pool override)
>>>>>>>>> Catalog:                "MyCatalog" (From Client resource)
>>>>>>>>> Storage:                "MyStorage2" (From run override)
>>>>>>>>> Scheduled time:         29-Jan-2025 21:00:00
>>>>>>>>> Start time:             29-Jan-2025 21:00:00
>>>>>>>>> End time:               29-Jan-2025 21:52:25
>>>>>>>>> Elapsed time:           52 mins 25 secs
>>>>>>>>> Priority:               10
>>>>>>>>> Allow Mixed Priority:   no
>>>>>>>>> FD Files Written:       70,835
>>>>>>>>> SD Files Written:       70,835
>>>>>>>>> FD Bytes Written:       7,284,102,133 (7.284 GB)
>>>>>>>>> SD Bytes Written:       7,300,803,636 (7.300 GB)
>>>>>>>>> Rate:                   2316.1 KB/s
>>>>>>>>> Software Compression:   None
>>>>>>>>> VSS:                    yes
>>>>>>>>> Encryption:             no
>>>>>>>>> Accurate:               no
>>>>>>>>> Volume name(s):         VolAVY-0041
>>>>>>>>> Volume Session Id:      2
>>>>>>>>> Volume Session Time:    1738150563
>>>>>>>>> Last Volume Bytes:      7,303,834,619 (7.303 GB)
>>>>>>>>> Non-fatal FD errors:    0
>>>>>>>>> SD Errors:              0
>>>>>>>>> FD termination status:  OK
>>>>>>>>> SD termination status:  OK
>>>>>>>>> Bareos binary info:     Bareos community build (UNSUPPORTED): Get professional support from https://www.bareos.com
>>>>>>>>> Job triggered by:       Scheduler
>>>>>>>>> Termination:            Backup OK
>>>>>>>>>
>>>>>>>>> 29-Jan 21:52 bareos-dir JobId 38: Begin pruning Jobs older than 3 months.
>>>>>>>>> 29-Jan 21:52 bareos-dir JobId 38: No jobids found to be purged
>>>>>>>>> 29-Jan 21:52 bareos-dir JobId 38: Begin pruning Files.
>>>>>>>>> 29-Jan 21:52 bareos-dir JobId 38: No Files found to prune.
>>>>>>>>> 29-Jan 21:52 bareos-dir JobId 38: End auto prune.
>>>>>>>>>
>>>>>>>>> On Wednesday, January 29, 2025 at 8:27:58 AM UTC-3, Rodrigo Yesi
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Hello, good morning. As you can see in the images, certain
>>>>>>>>>> backups work: the web UI marks them as successful, but when I try
>>>>>>>>>> to do a restore, some clients show me nothing, as in the case of
>>>>>>>>>> MFS. It has 2 backups and both come up blank. In the case of
>>>>>>>>>> MANTIS, as you see in the image, the backup from the 28th works,
>>>>>>>>>> but the one from the 29th doesn't! I already gave more resources
>>>>>>>>>> to postgresql-14 and still nothing.
>>>>>>>>>
>>>>>>>>>