Re: [Bacula-users] Backing up to NetApp
Hello Dan,

Yes, you can back up to NetApp, and if you have configured NFS properly, it can be very fast. I will ask my NetApp expert for the parameters and get back to you.

Best regards,
Kern

On 1/20/19 12:06 PM, Dan Langille wrote:
> Have you backed up to NetApp?
>
> At $WORK we are likely to soon have heaps of NetApp storage at our disposal. Of course my thoughts turned to backups.
>
> Do you use a NetApp appliance with Bacula as a destination for backups? I know of the NetApp plugin for Bacula, but that is the wrong direction: that is for backing up the NetApp device.
>
> I've never mounted remote storage for bacula-sd over any of NFS, CIFS, Samba, etc. I can't imagine NFS would be useful given the throughput. Mind you, I don't yet know how much we'll be backing up, but it'll be more than 1
>
> Have you?
>
> --
> Dan Langille - BSDCan / PGCon
> d...@langille.org

___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
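[Editor's sketch] Kern's suggestion of writing disk volumes to an NFS-mounted NetApp share would be configured on the Storage Daemon side roughly as below. This is a minimal illustrative sketch, not a config from the thread: the resource names, media type, and mount point are all hypothetical, and the NFS mount itself (e.g. `filer:/vol/bacula` mounted at `/mnt/netapp/bacula`) is assumed to already exist.

```conf
# bacula-sd.conf -- hypothetical Device resource for file-based volumes
# stored on an NFS mount from a NetApp filer. All names and paths here
# are examples, not taken from the thread.
Device {
  Name = NetAppFileStorage
  Media Type = NetAppFile
  Device Type = File
  Archive Device = /mnt/netapp/bacula   # NFS mount point for the NetApp export
  LabelMedia = yes                      # let Bacula label File volumes itself
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
```

The Director would then reference this device via a matching Storage resource using the same hypothetical Media Type.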
Re: [Bacula-users] Problem: volume not listed when the tape is empty on an incremental
Yes, I understand. It's normal with v9.2.2 and later, but before, with 7.4, these volumes were present. Thanks.

On 19/01/2019 at 12:00, Kern Sibbald wrote:
> On a quick look this seems normal. Bacula will not list volumes that have no files stored on them.
>
> Kern
>
> On 1/10/19 2:39 PM, Olivier Delestre wrote:
>> Hi, I use Bacula v9.2.2 with storage on disk. I notice that when an incremental is empty, Bacula makes a tape with one file (612 bytes). When you list with the query "5: List all backups for a Client", this volume is not present, but it is present in the pool and on the file system. See below for the volume conty-2546. Note: the query is issued from sample-query.sql. How can I get this volume into my list? A bug or not? Thanks.

Choose a query (1-21): 5
Enter Client Name: conty-fd
+--------+----------+---------------+-------+---------------------+----------+----------------+------------+
| jobid  | client   | fileset       | level | starttime           | jobfiles | jobbytes       | volumename |
+--------+----------+---------------+-------+---------------------+----------+----------------+------------+
| 15,276 | conty-fd | fileset_conty | F     | 2018-12-01 22:00:02 |    8,121 | 11,532,984,139 | conty-2471 |
| 15,323 | conty-fd | fileset_conty | I     | 2018-12-03 22:00:02 |        1 |        221,917 | conty-2481 |
| 15,540 | conty-fd | fileset_conty | I     | 2018-12-10 22:00:02 |        1 |        222,648 | conty-2487 |
| 15,683 | conty-fd | fileset_conty | I     | 2018-12-14 22:00:03 |        1 |              0 | conty-2503 |
| 15,755 | conty-fd | fileset_conty | I     | 2018-12-17 22:00:02 |        1 |        223,377 | conty-2510 |
| 15,977 | conty-fd | fileset_conty | I     | 2018-12-24 22:00:02 |        1 |        224,106 | conty-2514 |
| 16,198 | conty-fd | fileset_conty | I     | 2018-12-31 22:00:01 |        1 |        224,835 | conty-2529 |
| 16,374 | conty-fd | fileset_conty | F     | 2019-01-05 22:02:31 |    8,120 | 11,532,987,786 | conty-2401 |
| 16,421 | conty-fd | fileset_conty | I     | 2019-01-07 22:00:02 |        1 |        225,566 | conty-2407 |
| 16,495 | conty-fd | fileset_conty | I     | 2019-01-09 22:00:02 |      226 | 10,177,801,880 | conty-2412 |
+--------+----------+---------------+-------+---------------------+----------+----------------+------------+

*list media pool=conty
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+-----------+
| mediaid | volumename | volstatus | enabled | volbytes       | volfiles | volretention | recycle | slot | inchanger | mediatype | voltype | volparts | lastwritten         | expiresin |
+---------+------------+-----------+---------+----------------+----------+--------------+---------+------+-----------+-----------+---------+----------+---------------------+-----------+
| 2,401   | conty-2401 | Used      | 1       | 11,542,823,537 |        2 |    5,529,600 |       1 |    0 |         0 | File      |       1 |        0 | 2019-01-05 22:05:03 | 5,124,733 |
| 2,404   | conty-2404 | Used      | 1       |            612 |        0 |    5,529,600 |       1 |    0 |         0 | File      |       1 |        0 | 2019-01-06 22:00:02 | 5,210,832 |
| 2,407   | conty-2407 | Used      | 1       |        226,476 |        0 |    5,529,600 |       1 |    0 |         0 | File      |       1 |        0 | 2019-01-07 22:00:02 | 5,297,232 |
| 2,410   | conty-2410 | Used      | 1       |            612 |        0 |    5,529,600 |       1 |    0 |         0 | File      |       1 |        0 | 2019-01-08 22:00:02 | 5,383,632 |
| 2,412   | conty-2412 | Used      | 1       | 10,185,386,134 |        2 |    5,529,600 |       1 |    0 |         0 | File      |       1 |        0 | 2019-01-09 22:00:34 | 5,470,064 |
| 2,415   | conty-2415 | Recycle   | 1       |              1 |        0 |    5,529,600 |       1 |    0 |         0 | File      |       1 |        0 | 2018-11-12 22:00:40 |   458,870 |
| 2,418   | conty-2418 | Purged    | 1       |     56,360,634 |        0 |    5,529,600 |       1 |    0 |         0 | File      |       1 |        0 | 2018-11-13 22:00:07 |   545,237 |
| 2,422   | conty-2422 | Purged    | 1       |  9,601,588,270 |        2 |    5,529,600 |       1 |    0 |         0 | File      |       1 |        0 | 2018-11-14 22:00:41 |   631,671 |
| 2,426   | conty-2426 | Purged    | 1       |     51,766,177 |        0 |    5,529,600 |       1 |    0 |         0 | File      |       1 |        0 | 2018-11-15 22:00:03 |   718,033 |
| 2,429   | conty-2429 | Purged    | 1       |     51,991,866 |        0 |    5,529,600 |       1 |    0 |         0 | File      |       1 |        0 | 2018-11-16 22:00:03 |   804,433 |
| 2,432   | conty-2432 | Purged    | 1       |     14,694,037 |        0 |    5,529,600 |       1 |    0 |         0 | File      |       1 |        0 | 2018-11-17 22:00:03 |   890,833 |
| 2,434   | conty-2
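[Editor's note] Kern's point — that the "list all backups" query skips volumes with no stored files — can be illustrated with a tiny self-contained demo. The schema below is a simplified toy (not Bacula's real catalog schema): a query that inner-joins through a JobMedia-style table silently drops volumes that have no job records, while a LEFT JOIN from the media table keeps them.

```python
import sqlite3

# Toy schema loosely modelled on Bacula's Media/JobMedia catalog tables.
# Simplified for illustration; this is NOT the real Bacula schema.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Media (MediaId INTEGER PRIMARY KEY, VolumeName TEXT, VolBytes INTEGER);
CREATE TABLE JobMedia (JobId INTEGER, MediaId INTEGER);
INSERT INTO Media VALUES (2401, 'conty-2401', 11542823537);
INSERT INTO Media VALUES (2404, 'conty-2404', 612);   -- empty incremental volume
INSERT INTO JobMedia VALUES (16374, 2401);            -- only 2401 stored a job
""")

# An inner join through JobMedia (the shape of the "list all backups"
# query) drops the 612-byte volume, because it has no JobMedia row:
inner = [r[0] for r in con.execute(
    "SELECT VolumeName FROM Media JOIN JobMedia USING (MediaId) "
    "ORDER BY MediaId")]

# A LEFT JOIN driven from Media keeps volumes with no JobMedia rows,
# which is why "list media pool=..." still shows them:
left = [r[0] for r in con.execute(
    "SELECT VolumeName FROM Media LEFT JOIN JobMedia USING (MediaId) "
    "ORDER BY MediaId")]

print(inner)  # ['conty-2401']
print(left)   # ['conty-2401', 'conty-2404']
```

So the behaviour Olivier sees is a property of the query, not lost data: `list media` reads the Media table directly, while the sample query only reports volumes that actually carry job data.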
Re: [Bacula-users] Backing up to NetApp
On Sun, 20 Jan 2019 15:06:53 -0500, Dan Langille wrote:
> Have you backed up to NetApp?

No, and we recently turned our FAS2020 boat anchor off altogether, for various reasons, but I would pick iSCSI over NFS/CIFS for this.

--
Dmitri Maziuk
[Bacula-users] Backing up to NetApp
Have you backed up to NetApp?

At $WORK we are likely to soon have heaps of NetApp storage at our disposal. Of course my thoughts turned to backups.

Do you use a NetApp appliance with Bacula as a destination for backups? I know of the NetApp plugin for Bacula, but that is the wrong direction: that is for backing up the NetApp device.

I've never mounted remote storage for bacula-sd over any of NFS, CIFS, Samba, etc. I can't imagine NFS would be useful given the throughput. Mind you, I don't yet know how much we'll be backing up, but it'll be more than 1

Have you?

--
Dan Langille - BSDCan / PGCon
d...@langille.org
Re: [Bacula-users] Temporarily place autochanger tape drive in service mode?
Hello Patti,

Could you please check whether the Bacula in the public repo in Branch-9.4 has corrected this problem. The code currently in that branch will be released as version 9.4.2 once I fix one or two more bugs. The change I made was to disable that warning message unless you specifically use the -v (verbose) option when starting the SD.

Best regards,
Kern

On 1/4/19 4:42 PM, Clark, Patti via Bacula-users wrote:
> Thank you to everyone that replied. The disable storage command did what was needed. One issue with using it is the obnoxious noise produced by the director in every job that was initially assigned the disabled drive. Kern, this is beyond ridiculous:
>
> . . .
> 2019-01-04 10:27:51 rdback2-sd JobId 198498: Warning: Device "adminChanger" requested by DIR is disabled.
> 2019-01-04 10:27:51 rdback2-sd JobId 198498: Warning: Device "adminChanger" requested by DIR is disabled.
> 2019-01-04 10:27:51 rdback2-sd JobId 198498: Warning: Device "adminChanger" requested by DIR is disabled.
> . . .
>
> And it goes on for pages and pages until the job is assigned an available drive.
>
> Patti
>
> On 1/3/19, 1:30 PM, "Bill Arlofski" wrote:
>
> On 01/02/2019 02:45 PM, Clark, Patti via Bacula-users wrote:
> > Is there a way to put a malfunctioning tape drive in an autochanger into a service mode via commands without modifying bacula configuration files?
> >
> > Patti Clark
>
> Hi Patti,
>
> There is an enable/disable command to do this:
>
> * disable storage= drive=
>
> A status storage will show this drive disabled by "User command":
>
> 8<
> Device File: "speedy_drv_0" (/path/to/device/0) is not open.
>     Device is disabled. User command.
> Drive 0 is not loaded.
> Available Space= GB
> 8<
>
> To enable it again:
>
> * enable storage= drive=
>
> I do not believe the disable is permanent, i.e. it will not survive an SD restart, but it should help in your situation.
>
> Hope this helps.
>
> Best regards,
> Bill
> --
> Bill Arlofski
> http://www.revpol.com/bacula
> -- Not responsible for anything below this line --
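[Editor's sketch] Bill's commands above have the resource names elided; filled in with hypothetical values (a storage resource named "speedy" with drive 0), a bconsole session would look roughly like this:

```text
# bconsole session sketch -- "speedy" and drive 0 are hypothetical examples
* disable storage=speedy drive=0
* status storage=speedy
...
Device File: "speedy_drv_0" (/path/to/device/0) is not open.
    Device is disabled. User command.
...
* enable storage=speedy drive=0
```

As Bill notes, the disabled state is held by the running SD and does not survive a restart, which makes it a convenient temporary "service mode" that avoids touching the configuration files.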
Re: [Bacula-users] 800,000+ orphaned Path entries cannot be pruned when using bvfs
Hello Stefan,

Please submit a bug report on this. When running dbcheck, it is my opinion that we should not be relying on .bvfs. If this is true (as it seems from your output) then I must see why and make sure it is justified. dbcheck should always be able to prune.

Best regards,
Kern

On 1/17/19 11:13 AM, Stefan Muenkner wrote:
> Hi,
>
> I have a Bacula installation that has been running for almost 9 years and has accumulated more than 800,000 orphaned Path entries in the database (around 5% of all entries in the Path table). dbcheck claims it cannot prune those when BVFS is used:
>
> 9) Check for orphaned Path records
> Select function number: 9
> Pruning orphaned Path entries isn't possible when using BVFS.
>
> Is there anything I can do about it?
>
> Best regards,
> Stefan
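[Editor's note] For context, the kind of check dbcheck performs here can be sketched as the query below. This is illustrative only, not Bacula's actual implementation; and when bvfs is in use, the Path table is presumably also referenced by the bvfs helper tables, which would explain why dbcheck refuses to prune by File references alone. Always back up the catalog before running any manual DELETE against it.

```sql
-- Illustrative sketch only (not Bacula's code): count Path rows that are
-- no longer referenced by any File row. Back up the catalog before
-- turning a count like this into a DELETE.
SELECT COUNT(*)
  FROM Path
 WHERE NOT EXISTS (SELECT 1 FROM File WHERE File.PathId = Path.PathId);
```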