[Bacula-users] cannot delete/cancel running jobs
Hi all,
I've got the following running jobs in the status of a storage:

Running Jobs:
Writing: Full Backup job sede_samba_job JobId=36 Volume=""
    pool="Default" device=""sede_storage" (/backup/sede/)"
    Files=7,606 Bytes=865,540,986 Bytes/sec=2,603
    FDReadSeqNo=86,140 in_msg=64080 out_msg=5 fd=5
Writing: Incremental Backup job sede_coge_job JobId=38 Volume=""
    pool="Default" device=""sede_storage" (/backup/sede/)"
    Files=0 Bytes=0 Bytes/sec=0
    FDReadSeqNo=6 in_msg=6 out_msg=5 fd=10
Writing: Incremental Backup job sede_coge_job JobId=39 Volume=""
    pool="Default" device=""sede_storage" (/backup/sede/)"
    Files=0 Bytes=0 Bytes/sec=0
    FDReadSeqNo=6 in_msg=6 out_msg=5 fd=11

but I can neither cancel nor delete them, and thus cannot start another backup job (since it would just be queued after the above). Any idea? I've tried to restart the file daemon and the director, but nothing changed.

Thanks,
Luca

-------------------------------------------------------------------------
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
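[Editor's note: the usual first step for jobs stuck like this is the console's cancel command, using the JobIds shown in the status output above; a sketch of an interactive bconsole session:]

```
*cancel jobid=36
*cancel jobid=38
*cancel jobid=39
```

If the storage daemon still holds the sessions after cancelling, the jobs may remain visible in the SD status until its internal timeouts expire.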
Re: [Bacula-users] cannot delete/cancel running jobs
On Tuesday 10 April 2007 Luca Ferrari's cat, walking on the keyboard, wrote:
> but I cannot either cancel or delete them, and thus cannot start another
> backup job (since it will be enqueued after the above). Any idea? I've
> tried to restart the file daemon and the director but nothing changed.

I stopped the bacula-sd daemon on the backup machine for a while and the jobs disappeared. This is probably not the best solution, but it worked for now. I think the problem was caused by one of the servers being backed up getting shut down while Bacula was running.

Thanks,
Luca
Re: [Bacula-users] cannot delete/cancel running jobs
On Wednesday 11 April 2007 Arno Lehmann's cat, walking on the keyboard, wrote:
> If that's the case, waiting should have solved the problem - there are
> some rather long timeouts involved. After two hours, the SD should
> notice the jobs are stale, finish them, and have the capacity to handle
> new jobs.

I waited several hours but nothing happened! That's why I wrote the first e-mail.

Luca
[Bacula-users] backup on dvd-like base
Hi all,
I'm not sure if this is possible: I'd like to set up Bacula to back up to disk storage, but producing a kind of DVD ISO image and splitting the backup over a set of such images as needed - a kind of pool of DVD backups that I'd archive on physical DVDs, let's say once a month (then remaking a full backup). Is this possible? Does anybody have a sample, or some tips for the configuration?

Thanks,
Luca
[Bacula-users] doubt on autolabeling
Hi all,
I cannot understand very well how labeling works. I've got the following jobs in my bacula-dir.conf file:

Job {
  Name = mammuth_uff_a_job
  Enabled = yes
  Type = Backup
  Level = Incremental
  Client = mammuth-fd
  FileSet = uff_a_fileset
  Storage = mammuth_storage
  Messages = Daemon
  Pool = Default
}

Job {
  Name = mammuth_uff_b_job
  Enabled = yes
  Type = Backup
  Level = Incremental
  Client = mammuth-fd
  FileSet = uff_b_fileset
  Storage = mammuth_storage
  Messages = Daemon
  Pool = Default
}

Pool {
  Name = Default
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 365 days
  Accept Any Volume = yes
  Label Format = "${Job}-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}"
}

Now if I run the first job (mammuth_uff_a_job), a volume with the label mammuth_uff_a_job_2007_ is created, but if I run the second job, which I'd like to go to a different volume, the system keeps using the previous volume. What is wrong with my configuration? I'd like to keep each job on a different volume; is this possible?

Thanks,
Luca
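[Editor's note: the approach the thread converges on later is one Pool per job, with each volume capped at a single job; a sketch of such a per-job pool, mirroring the directives used elsewhere in this thread:]

```
# Hypothetical per-job pool: with "Maximum Volume Jobs = 1" each volume
# accepts exactly one job, so every run autolabels a fresh volume using
# the Label Format below, and jobs no longer share volumes.
Pool {
  Name = mammuth_uff_b_pool
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 365 days
  Label Format = "${Job}-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}"
  Maximum Volume Jobs = 1
}
```

Each Job resource would then point at its own pool (Pool = mammuth_uff_b_pool) instead of the shared Default pool.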
Re: [Bacula-users] doubt on autolabeling
On Tuesday 24 April 2007 Darien Hager's cat, walking on the keyboard, wrote:
> Maximum Volume Jobs = 1

Even with a pool for each job and the above directive in the pool I get strange behaviour:

27-apr 15:22 backup-sd: Volume "mammuth_uff_b_job-2007-04-26" previously written, moving to end of data.

I've even tried a full job, but it does not change. Any idea?

Thanks,
Luca
[Bacula-users] Maximum Volume Size is for me?
Hi all,
this is what I'd like to do: back up regularly to file-type volumes that should not exceed 4 GB each (in case I have to burn such files). I thought that Maximum Volume Size could help me, but it does not:

08-mag 17:41 backup-sd: User defined maximum volume capacity 4,000,000 exceeded on device "sede_samba_storage" (/backup/sede/samba).

It hangs and the backup does not proceed. I'd like a way to archive my volumes as files, sometimes burn them to DVD (hence the size limit), and use them for restores (by mounting them). Is this possible?

Thanks,
Luca
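[Editor's note: for disk volumes the size cap is usually placed on the Pool rather than the Device; a sketch, assuming a hypothetical pool name and the Pool-resource "Maximum Volume Bytes" directive:]

```
# With "Maximum Volume Bytes" the volume is marked Full when the limit is
# reached; provided the device has LabelMedia = yes and the pool a Label
# Format, the SD can then autolabel a new file volume and keep writing,
# instead of hanging as "Maximum Volume Size" on the device did above.
Pool {
  Name = sede_samba_pool          # hypothetical name
  Pool Type = Backup
  Maximum Volume Bytes = 4G       # ~4 GB per file, DVD-sized
  Label Format = "${Job}-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}"
}
```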
Re: [Bacula-users] Maximum Volume Size is for me?
On Tuesday 08 May 2007 Darien Hager's cat, walking on the keyboard, wrote:
> Luca Ferrari wrote:
> > 08-mag 17:41 backup-sd: User defined maximum volume capacity 4,000,000
> > exceeded on device "sede_samba_storage" (/backup/sede/samba).
>
> So far, so good. But since you're backing up to plain old files, you
> cannot create a new file on the disk without creating a new volume. In
> other words, unless you use an external tool to slice the file up, each
> file on the SD's disk is one volume. (Now, a job can have multiple
> volumes, and vice-versa...)

Dear Darien,
it's not clear to me what it means that all the files will be on one volume, nor how to make a volume span more than one file. Could you please explain it better?

Thanks,
Luca
[Bacula-users] a job with device that is blocked waiting for media?
Hi all,
I've got the couple of jobs that follow below. The strange thing is that while the job mammuth_uff_b_job runs correctly, mammuth_uff_c_job does not, and from the Bacula console I can see:

Device "mammuth_device" (/backup/mammuth) is not open or does not exist.
Device is BLOCKED waiting for media.

as if it were waiting to be labeled, even though the pools have the autolabel option. I cannot understand why; surely I'm doing something wrong, but I cannot see where. Any idea?

Thanks,
Luca

Job {
  Name = mammuth_uff_b_job
  Enabled = yes
  Type = Backup
  Level = Incremental
  Client = mammuth-fd
  FileSet = uff_b_fileset
  Storage = mammuth_storage
  Messages = Daemon
  Pool = mammuth_uff_b_pool
  Schedule = Daily-Incremental
}

Job {
  Name = mammuth_uff_c_job
  Enabled = yes
  Type = Backup
  Level = Incremental
  Client = mammuth-fd
  FileSet = uff_c_fileset
  Storage = mammuth_storage
  Messages = Daemon
  Pool = mammuth_uff_c_pool
  Schedule = Daily-Incremental
}

The pools are the following:

Pool {
  Name = mammuth_uff_b_pool
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 365 days
  Accept Any Volume = yes
  Label Format = "${Job}-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}"
  Maximum Volume Jobs = 1
}

Pool {
  Name = mammuth_uff_c_pool
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 365 days
  Accept Any Volume = yes
  Label Format = "${Job}-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}"
  Maximum Volume Jobs = 1
}
[Bacula-users] one pool for multiple jobs - labeling problem
Hi,
I've got the following pool definition:

Pool {
  Name = sede_pool
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 10 days
  Accept Any Volume = yes
  Label Format = "${Job}-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}"
  Maximum Volume Jobs = 1
}

and I use this pool for two different jobs. However, the name given to the backup file is always the same: that of the first job defined in the config file. In other words, it seems the ${Job} expansion is not done for the second job. Is it not possible to use the same pool for different jobs? How can I get each job labeled correctly?

Thanks,
Luca
[Bacula-users] problem labeling volumes
Hi all,
I'm still fighting with the auto-labeling problem. This is what I want: each job should go to a separate file, named after the job and the time of the backup. This is my configuration:

(storage daemon)
Device {
   Name = mammuth_device
   Archive Device = /backup/mammuth
   Device Type = File
   Removable Media = No
   Random Access = Yes
   Media Type = mammuth_disk_storage
   LabelMedia = yes
   AutomaticMount = yes
}

LabelMedia should force Bacula to label a medium automatically, shouldn't it? Now the job definition:

Job {
  Name = mammuth_uff_a_job
  Enabled = yes
  Type = Backup
  Level = Incremental
  Client = mammuth-fd
  FileSet = uff_a_fileset
  Storage = mammuth_storage
  Messages = Daemon
  Pool = mammuth_uff_a_pool
  Schedule = Daily-Evening-Incremental
}

and the pool:

Pool {
  Name = mammuth_uff_a_pool
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 365 days
  Accept Any Volume = yes
  Label Format = "${Job}-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}"
  Maximum Volume Jobs = 1
  Use Volume Once = yes
}

However, if I run the job from the console the device is blocked waiting for media, and I can see it has no volume label. So how can I achieve my aim of automatically labeling the volumes while keeping each one in its own file? Moreover, is the Recycle = yes option in the pool dangerous for the above aim?

Finally, consider the following schedule:

Schedule {
  Name = "Daily-Evening-Incremental"
  Run = Level = Incremental mon-fri at 13:00
  Run = Level = Incremental mon-fri at 22:00
  Run = Level = Full on 5 at 00:00
}

Is it possible to assign a different volume label to the full backup, thus making it easy to recognize?

Thanks,
Luca
Re: [Bacula-users] problem labeling volumes
What is very strange is that if I run the backups from the console, that is through the run command, they get labeled, while the same jobs started by the scheduler do not. For instance, today I ran the uff_a job manually, then waited for the scheduled one. The device was blocked waiting for media, and in the messages I found:

16-lug 13:33 backup-dir: mammuth_uff_a_job.2007-07-16_13.00.01 Error: sql_create.c:384 Volume "mammuth_uff_a_job-2007-07-16" already exists.
16-lug 13:33 backup-dir: mammuth_uff_a_job.2007-07-16_13.00.01 Error: sql_create.c:384 Volume "mammuth_uff_a_job-2007-07-16" already exists.
16-lug 13:33 backup-sd: Job mammuth_uff_a_job.2007-07-16_13.00.01 waiting. Cannot find any appendable volumes.

Since I've got the label format:

Label Format = "${Job}-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}"

is it possible to avoid that two backups on the same day lock Bacula, maybe by appending the time or an auto-increment value? Any clue?

Thanks,
Luca

On Friday 13 July 2007 Luca Ferrari's cat, walking on the keyboard, wrote:
> Hi all,
> I'm still fighting with the auto-labeling problem. This is what I want:
> each job should stay on a separate file with the name of the job and the
> time of the backup.
> [original configuration snipped]
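[Editor's note: a label format that includes the time of day avoids the same-day collision; a sketch using the same variable-expansion syntax as the formats above (the hour/minute padding is illustrative):]

```
# Appending hour and minute makes each run's label unique within a day,
# so two backups on the same date no longer try to create the same volume.
Label Format = "${Job}-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}--${Hour:p/2/0/r}-${Minute:p/2/0/r}"
```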
[Bacula-users] no label for a volume
Hi all,
I've got several job definitions that are similar, but the following one is not working:

Job {
  Name = sede_Vol2Samba_job
  Enabled = no
  Type = Backup
  Level = Incremental
  Client = sede-fd
  FileSet = sede_Vol2Samba_fileset
  Storage = sede-samba-sd
  Messages = Daemon
  Pool = sede_vol2_pool
  Schedule = Night-Incremental
}

Pool {
  Name = sede_vol2_pool
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 365 days
  Accept Any Volume = yes
  Label Format = "${Job}-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}--${Hour}-${Minute}"
  Maximum Volume Jobs = 1
  Use Volume Once = yes
}

In fact the job stays blocked waiting for media, and in particular there is no label for the volume:

Running Jobs:
Writing: Full Backup job sede_Vol2Samba_job JobId=749 Volume=""
    pool="sede_vol2_pool" device=""sede_Vol2Samba_storage" (/backup/sede/vol2)"
    Files=0 Bytes=0 Bytes/sec=0
    FDReadSeqNo=6 in_msg=6 out_msg=4 fd=5

Other jobs with an identical pool get labeled correctly; why doesn't this one? Any suggestion?

Thanks,
Luca
Re: [Bacula-users] no label for a volume
Looking in the messages I found this:

25-Jul 08:58 backup-dir: Start Backup JobId 761, Job=sede_Vol2Samba_job.2007-07-25_08.58.35
25-Jul 08:58 backup-dir: Created new Volume "sede_Vol2Samba_job-2007-07-25--8-58" in catalog.
25-Jul 08:58 backup-sd: End of Volume "sede_Vol2Samba_job-2007-07-25--8-58" at 0:0 on device "sede_Vol2Samba_storage" (/backup/sede/vol2). Write of 240 bytes got -1.
25-Jul 08:58 backup-sd: Marking Volume "sede_Vol2Samba_job-2007-07-25--8-58" in Error in Catalog.
25-Jul 08:58 backup-dir: sede_Vol2Samba_job.2007-07-25_08.58.35 Error: sql_create.c:384 Volume "sede_Vol2Samba_job-2007-07-25--8-58" already exists.
25-Jul 08:58 backup-dir: sede_Vol2Samba_job.2007-07-25_08.58.35 Error: sql_create.c:384 Volume "sede_Vol2Samba_job-2007-07-25--8-58" already exists.
25-Jul 08:58 backup-sd: Job sede_Vol2Samba_job.2007-07-25_08.58.35 waiting. Cannot find any appendable volumes. Please use the "label" command to create a new Volume for:

but the volume does not exist, and I don't know what the above "got -1" error means... I'm just sure I'm not running out of space. Any idea/suggestion?

Luca

On Tuesday 24 July 2007 Luca Ferrari's cat, walking on the keyboard, wrote:
> Hi all,
> I've got several job definitions that are similar, but the following is not
> working:
> [original configuration snipped]
Re: [Bacula-users] no label for a volume
On Wednesday 25 July 2007 Mantas M.'s cat, walking on the keyboard, wrote:
> Do you have LabelMedia = yes; set in the device section of your sd config?

Yes I have; the following is the device used in bacula-sd.conf:

Device {
  Name = sede_Vol2Samba_storage
  Archive Device = /backup/sede/vol2
  Device Type = File
  Removable Media = No
  Random Access = Yes
  Media Type = sede_disk_storage
  LabelMedia = yes
  AutomaticMount = yes
}

and this is the mapping to the storage in bacula-dir.conf:

Storage {
  Name = sede-samba-sd
  Address = 192.168.1.4
  SDPort = 9103
  Device = sede_Vol2Samba_storage
  Media Type = sede_disk_storage
}

Any idea?

Thanks,
Luca
Re: [Bacula-users] no label for a volume
On Wednesday 25 July 2007 Mantas M.'s cat, walking on the keyboard, wrote:
> * Bacula sd can write to that directory

This was the problem: I had set the wrong owner on the directory!

Luca
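[Editor's note: a quick check of the kind that would have caught this; a sketch, assuming the archive path from the config above and that it is run as the storage daemon's user:]

```shell
# Verify the directory the SD writes volumes into is writable by the
# user this script runs as; the "not writable" state is what produced
# the "Write of 240 bytes got -1" error in the log earlier in the thread.
ARCHIVE_DIR="${ARCHIVE_DIR:-/backup/sede/vol2}"

if [ -d "$ARCHIVE_DIR" ] && [ -w "$ARCHIVE_DIR" ]; then
    echo "writable"
else
    echo "not writable"
fi
```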
Re: [Bacula-users] no label for a volume
On Wednesday 25 July 2007 Arno Lehmann's cat, walking on the keyboard, wrote:
> > * Bacula sd can write to that directory
>
> I assume that other volumes were created successfully on that disk. Of
> course, a change in the user the SD runs as might also have happened.

Should I worry about volumes that were inserted in the catalog but are not present on disk? Does Bacula check the presence of the archive files and adjust the catalog, or must this be done manually?

Luca
Re: [Bacula-users] doing always a full backup?
On Thursday 26 July 2007 Ralf Winkler's cat, walking on the keyboard, wrote:
> Ciao Luca,
>
> may I ask, did you change the fileset after the first backup?
> I am not that expert, but another possibility could be that this job is
> un-enabled ("Enabled = no").

I didn't change the fileset, and I've already tried switching Enabled to yes, but running the backup again still causes Bacula to do a full backup. I don't think Enabled = yes requires a first full backup and then an incremental one... but I could be wrong!

Luca
[Bacula-users] doing always a full backup?
Hi all,
consider the following job:

Job {
  Name = sede_Vol2Samba_job
  Enabled = no
  Type = Backup
  Level = Incremental
  Client = sede-fd
  FileSet = sede_Vol2Samba_fileset
  Storage = sede-samba-sd
  Messages = Daemon
  Pool = sede_vol2_pool
  Schedule = Night-Incremental
}

If I run it manually the first time, I get a full backup (right); then I run it again and I get a new full backup, while it should be incremental... In the messages I found:

26-lug 09:12 backup-dir: No prior Full backup Job record found.
26-lug 09:12 backup-dir: No prior or suitable Full backup found. Doing FULL backup.

but according to the status a full backup exists:

Terminated Jobs:
 JobId  Level     Files      Bytes  Status    Finished         Name
=====================================================================
   762  Full     86,332    21.51 G  OK        25-Jul-07 12:11  sede_Vol2Samba_job
   772  Full     86,408    20.88 G  OK        25-Jul-07 18:04  sede_Vol2Samba_job

What am I doing wrong here?

Thanks,
Luca
Re: [Bacula-users] doing always a full backup?
On Friday 27 July 2007 Ryan Novosielski's cat, walking on the keyboard, wrote:
> Just to let you know, the "Terminated Jobs" listing is in NO way proof
> that the job exists in the database. You need to do list jobs, at a
> minimum. As for why a full backup is being done, I could not say. One
> gentleman here recommended I turn on query logging in my database and
> look for the query that was attempting to find a previous full backup.
> In my case, I made a change to the fileset, meaning a new full would be
> required. It might be interesting to see, in your case, what this select
> is turning up.

I found that the following query does not return any result:

SELECT StartTime FROM Job
 WHERE JobStatus='T' AND Type='B' AND Level='F'
   AND Name='sede_Vol2Samba_job' AND ClientId=2 AND FileSetId=14
 ORDER BY StartTime DESC LIMIT 1;

and in fact:

SELECT StartTime, JobStatus, Level FROM Job
 WHERE Type='B' AND Level='F' AND Name='sede_Vol2Samba_job';

      starttime      | jobstatus | level
---------------------+-----------+-------
 2007-07-25 08:58:38 | A         | F
 2007-07-25 11:09:32 | f         | F
 2007-07-25 17:01:21 | f         | F
 2007-07-26 09:09:06 | A         | F
 2007-07-26 09:12:38 | A         | F
 2007-07-26 14:14:57 | f         | F
 2007-07-27 02:00:04 | f         | F
 2007-07-27 10:15:47 | A         | F
 2007-07-27 10:18:58 | f         | F
 2007-07-27 11:59:35 | A         | F
 2007-07-27 12:04:15 | A         | F

but I don't know exactly what it means... maybe the job, once it has finished, does not mark itself as terminated?

> Also, Enabled=no? I can't understand what the purpose to that could be,
> if you are indeed using this backup.

I'm testing the job manually, without letting Bacula schedule it, until I'm sure it works as I expect.

Luca
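[Editor's note: the single-letter jobstatus values in the query output are Bacula's job status codes. A small sketch of why the director finds no prior Full (the dictionary covers only the codes seen in the listing above):]

```python
# Meanings of the Bacula JobStatus codes appearing in the query output.
# Only jobs with status 'T' count as successful, and the director's
# "prior Full" lookup above filters on JobStatus='T'.
JOB_STATUS = {
    "T": "Terminated normally",
    "f": "Fatal error",
    "A": "Canceled by user (or failed before completing)",
}

def is_usable_full(status: str) -> bool:
    """A prior Full backup is only reused if its job terminated normally."""
    return status == "T"

# Every Full run in the listing ended 'A' or 'f', so the SELECT with
# JobStatus='T' returns no rows and Bacula upgrades to Full again.
statuses = ["A", "f", "f", "A", "A", "f", "f", "A", "f", "A", "A"]
print(any(is_usable_full(s) for s in statuses))  # → False
```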
Re: [Bacula-users] doing always a full backup?
On Friday 27 July 2007 your cat, walking on the keyboard, wrote:
> Definitely on the right track, but as you can see, you have no
> successful jobs listed there. I don't remember the job status codes
> (Martin has listed them after me, I see). Can you look at your log file
> and find out what happened to the ends of those listed jobs? That would
> be useful information. Also, look for successful jobs or evidence of
> pruning. You might also try running one job and watching it the whole
> time and seeing if it completes toward the end.

I found this in the log:

30-lug 03:12 backup-dir: sede_Vol2Samba_job.2007-07-30_02.00.00 Fatal error: catreq.c:424 Attribute create error.
sql_create.c:853 Create db Filename record INSERT INTO Filename (Name) VALUES ('invio dichiarazione di conformità.doc') failed. ERR=ERROR: invalid byte sequence for encoding "UTF8": 0xe02e64
HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
30-lug 03:15 backup-dir: Volume used once. Marking Volume "sede_Vol2Samba_job-2007-07-30--2-0" as Used.
30-lug 03:15 backup-dir: sede_Vol2Samba_job.2007-07-30_02.00.00 Error: Bacula 1.38.11 (28Jun06): 30-lug-2007 03:15:22
  JobId:                  839
  Job:                    sede_Vol2Samba_job.2007-07-30_02.00.00
  Backup Level:           Full (upgraded from Incremental)
  Client:                 "sede-fd" 2.0.3 (06Mar07) i686-pc-linux-gnu,suse,9.1
  FileSet:                "sede_Vol2Samba_fileset" 2007-05-15 02:00:02
  Pool:                   "sede_vol2_pool"
  Storage:                "sede-samba-sd"
  Scheduled time:         30-lug-2007 02:00:00
  Start time:             30-lug-2007 02:00:05
  End time:               30-lug-2007 03:15:22
  Elapsed time:           1 hour 15 mins 17 secs
  Priority:               10
  FD Files Written:       86,685
  SD Files Written:       86,685
  FD Bytes Written:       20,737,156,122 (20.73 GB)
  SD Bytes Written:       20,751,178,686 (20.75 GB)
  Rate:                   4590,9 KB/s
  Software Compression:   32,9 %
  Volume name(s):         sede_Vol2Samba_job-2007-07-30--2-0
  Volume Session Id:      68
  Volume Session Time:    1185374415
  Last Volume Bytes:      20,771,044,005 (20.77 GB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            *** Backup Error ***

So it seems there are errors due to the names of some files, but the backup itself seems OK (FD/SD termination status are OK, and it is listed as OK in the terminated jobs). Could that be the problem? I ran the job manually and it shows me only problems with file names. I guess Bacula is still backing up files, skipping those that cannot be inserted into the catalog, right?

> One guess I have is that you missed a unit on your time settings and
> have tapes with a retention time of 30 seconds rather than 30 days or
> something like that. Just a guess. As you can see though, upgrading to a
> full is definitely the right thing for Bacula to do.

No, in the pool I've got:

Volume Retention = 365 days

which should be right.

Thanks,
Luca
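[Editor's note: the 'invalid byte sequence for encoding "UTF8": 0xe02e64' error means a filename in a legacy 8-bit encoding (0xe0 is 'à' in Latin-1) is being inserted into a UTF-8 PostgreSQL catalog. A minimal sketch of the mismatch, purely illustrative:]

```python
# The failing bytes from the log: 0xe0 0x2e 0x64 ("à.d" in Latin-1,
# i.e. the end of "...conformità.doc" as stored on the filesystem).
raw = b"\xe0.d"

# As UTF-8 this is invalid: 0xe0 starts a three-byte sequence, but '.'
# (0x2e) is not a valid continuation byte -- the same validation that
# PostgreSQL performs on the INSERT.
try:
    raw.decode("utf-8")
    print("valid utf-8")
except UnicodeDecodeError as exc:
    print("invalid utf-8:", exc.reason)

# Decoded as Latin-1, the filesystem's likely encoding, it is fine:
print(raw.decode("latin-1"))  # → à.d
```

Recoding the filenames (or setting a matching client_encoding, per the HINT in the log) resolves this class of error.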
Re: [Bacula-users] backing up MySQL databases
On Monday 30 July 2007 Dimitrios's cat, walking on the keyboard, wrote:
> What is the best way to backup MySQL databases?
>
> I've got a few web servers that i'd like to backup, which rely heavily on
> MySQL for offering dynamic content.
>
> Should i just backup the "/var/lib/mysql/*" directly? Does that mean i have
> to shutdown the MySQL process during the backup? (which means the web
> server will be unable to offer content, web sites die)
>
> Any help would be appreciated

I'm not sure this is related to Bacula. MySQL provides a tool, called mysqldump, that can be used to dump the data of a database and to rebuild it later. I guess this is what you are looking for, even if there are other strategies (PITR, ...).

Luca
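[Editor's note: a common way to combine the two is to let Bacula trigger the dump before the file backup runs; a sketch, where the job name, client, fileset, and script path are all hypothetical:]

```
# Hypothetical backup job: ClientRunBeforeJob runs a command on the
# client before the backup starts, so the fileset can back up a fresh,
# consistent dump file instead of the live /var/lib/mysql files.
Job {
  Name = web_mysql_job                                 # hypothetical
  Type = Backup
  Client = web-fd                                      # hypothetical
  FileSet = mysql_dump_fileset                         # covers the dump file
  Storage = backup-sd
  Messages = Daemon
  Pool = Default
  ClientRunBeforeJob = "/usr/local/bin/mysql_dump.sh"  # wraps mysqldump
}
```

where mysql_dump.sh might run something like `mysqldump --all-databases --single-transaction > /var/backups/mysql.sql`, avoiding a MySQL shutdown during the backup.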
[Bacula-users] restore: device is blocked waiting for media
Hi, I've got to restore a few files from an on-disk volume. I've run the restore procedure, marking the files, and then the system status shows the device as BLOCKED waiting for media (?) as if the volume were unlabeled (?). Running Jobs: Reading: Full Restore job restore_mySelf JobId=1050 Volume="mammuth_uff_g_job-2007-08-30--23-47" pool="Default" device=""mammuth_device" (/backup/mammuth)" Files=0 Bytes=0 Bytes/sec=0 FDReadSeqNo=32 in_msg=31 out_msg=5 fd=6 Device "mammuth_device" (/backup/mammuth) is not open or does not exist. Device is BLOCKED waiting for media. and the file to restore from exists and looks right (it has the expected size). What am I doing wrong? Thanks, Luca
Re: [Bacula-users] restore: device is blocked waiting for media
On Wednesday 5 September 2007 Arno Lehmann's cat, walking on the keyboard, wrote: > I think we'll need some more detailed information here... does Bacula > send a notification to load the volume it wants, what are the Media > Types, and so on... the relevant parts of your configuration might > help here. In fact, looking into the messages I found: 05-set 18:54 backup-sd: Please mount Volume "mammuth_uff_g_job-2007-08-30--23-47" on Storage Device "mammuth_device" (/backup/mammuth) for Job restore_mySelf.2007-09-05_17.53.30 so I manually mounted the device and the restore process is now running. Why is it not mounting the device automatically? The following is the definition of my restore job: Job { Name = restore_mySelf Enabled = no Type = Restore Client = mySelf FileSet = config_fileset Storage = backup-sd Messages = Daemon Pool = Default Where = /backup/restore } Am I missing something? Thanks, Luca
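For the record, the manual workaround described above is bconsole's mount command; the storage can be named on the command line so bconsole does not prompt for it (the storage name here is the one from the job definition, and the `storage=` keyword is assumed to be supported by the Bacula version in use):

```
* mount storage=backup-sd
```
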
Re: [Bacula-users] restore: device is blocked waiting for media
On Thursday 6 September 2007 Arno Lehmann's cat, walking on the keyboard, wrote: > Hi, > > 06.09.2007 09:11, Luca Ferrari wrote: > > On Wednesday 5 September 2007 Arno Lehmann's cat, walking on the > > keyboard, > > > > wrote: > >> I think we'll need some more detailed information here... does Bacula > >> send a notification to load the volume it wants, what are the Media > >> Types, and so on... the relevant parts of your configuration might > >> help here. > > > > In fact, looking into the messages I found: > > > > 05-set 18:54 backup-sd: Please mount > > Volume "mammuth_uff_g_job-2007-08-30--23-47" on Storage > > Device "mammuth_device" (/backup/mammuth) for Job > > restore_mySelf.2007-09-05_17.53.30 > > > > so I manually mounted the device and the restore process is now > > running. Why is it not mounting the device automatically? > > Is this device defined as an autochanger? No, here it is: Device { Name = mammuth_device Archive Device = /backup/mammuth Device Type = File Removable Media = No Random Access = Yes Media Type = mammuth_disk_storage LabelMedia = yes AutomaticMount = yes } any idea? Thanks, Luca
Re: [Bacula-users] restore: device is blocked waiting for media
On Thursday 6 September 2007 Arno Lehmann's cat, walking on the keyboard, wrote: > So my only idea is that perhaps you use several file storage devices > with identical Media Type settings and Bacula chooses the wrong device > for the needed volumes. This is true! I mean, I've got several storage devices with the same Media Type; is that not correct? Or will it only cause problems like this mount issue when restoring? Do you suggest I give each storage its own device and Media Type? Luca
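Bacula matches a volume to a device through the Media Type, so when several file devices share one Media Type the storage daemon may reserve a device whose Archive Device directory does not actually contain the wanted volume file, which then sits BLOCKED waiting for a mount. Giving each file device a unique Media Type, mirrored in its paired director Storage resource, removes the ambiguity. A sketch based on the device shown earlier in the thread (the Storage resource's Name, Address, and Password are assumptions):

```conf
# bacula-sd.conf -- one unique Media Type per file device
Device {
  Name = mammuth_device
  Archive Device = /backup/mammuth
  Device Type = File
  Media Type = mammuth_disk_storage   # used by no other device
  Removable Media = No
  Random Access = Yes
  LabelMedia = yes
  AutomaticMount = yes
}

# bacula-dir.conf -- the paired Storage resource must carry the same Media Type
Storage {
  Name = mammuth-sd
  Address = 192.168.1.4
  SDPort = 9103
  Password = "..."
  Device = mammuth_device
  Media Type = mammuth_disk_storage
}
```
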
[Bacula-users] a job is always blocking others
Hi, I've got a job that is always blocking other jobs from being executed. The messages report: 26-set 16:58 backup-sd: Job sede_Vol2Samba_QUALITA_job.2007-09-26_16.58.11 waiting to reserve a device. and the definition of the job is: Job { Name = sede_Vol2Samba_QUALITA_job Enabled = yes Type = Backup Level = Incremental Client = sede-fd FileSet = sede_Vol2Samba_QUALITA_fileset Storage = sede-samba-sd Messages = Daemon Pool = sede_Vol2Samba_QUALITA_pool Schedule = Night-Incremental } with the following pool: Pool { Name = sede_Vol2Samba_QUALITA_pool Pool Type = Backup Recycle = yes AutoPrune = yes Volume Retention = 365 days Accept Any Volume = yes Label Format = "${Job}-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}--${Hour}-${Minute}" Maximum Volume Jobs = 1 Use Volume Once = yes } and if I try, from the console, to get a status of the storage, the console blocks while listing the jobs that are waiting to reserve a device. Can anyone provide some hints on how to solve the problem? Thanks, Luca
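One common reason for "waiting to reserve a device" is that Bacula's concurrency limits default to 1, so a long-running job holds the only slot on the shared storage and every other job queues behind it. If concurrent jobs are wanted, the limit has to be raised in each resource along the path. A hedged sketch, not a diagnosis of this particular setup (the value 4 is arbitrary, and the elided directives stand for the rest of each existing resource):

```conf
# bacula-dir.conf
Director {
  ...
  Maximum Concurrent Jobs = 4
}
Storage {
  Name = sede-samba-sd
  ...
  Maximum Concurrent Jobs = 4
}

# bacula-sd.conf
Storage {
  ...
  Maximum Concurrent Jobs = 4
}
```

Note that with `Maximum Volume Jobs = 1` and `Use Volume Once = yes` in the pool, every job also needs a fresh volume, so the Label Format must be able to produce a new, unique name for each run.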
[Bacula-users] Device not in SD device resources....why?
Hi, I'm a newbie to Bacula and I cannot get my first job running. I'm trying to back up the configuration of the machine that runs the fd, sd, and director. In my bacula-sd.conf I've got: Device { Name = Config_Storage Archive Device = /backup/configurazioni Device Type = File Removable Media = No Random Access = Yes Media Type = backup_server } and in my bacula-dir.conf I've got: Storage { Name = backup-sd Address = 192.168.1.4 SDPort = 9103 Password = "_BacUlA_dIrEcToR_" Device = Config_Storage Media Type = backup_storage } Job { Name = mySelf_Job Enabled = yes Type = Backup Level = Incremental Client = mySelf FileSet = Config_FileSet Storage = backup-sd Messages = Daemon Pool = Default } Now, in my bconsole I see the first problem (I guess): *status storage Device status: Device "Config_Storage" (/backup/configurazioni) is not open or does not exist. and when I run the job I get the following errors in the logs: 03-apr 10:22 backup-sd: mySelf_Job.2007-04-03_10.22.04 Fatal error: Device "Config_Storage" with MediaType "backup_storage" requested by DIR not found in SD Device resources. 03-apr 10:22 backup-dir: mySelf_Job.2007-04-03_10.22.04 Fatal error: Storage daemon didn't accept Device "Config_Storage" because: 3924 Device "Config_Storage" not in SD Device resources. 03-apr 10:22 backup-dir: mySelf_Job.2007-04-03_10.22.04 Error: Bacula 1.38.11 (28Jun06): 03-apr-2007 10:22:06 Why does it not run the job? Can anyone please help me configure this simple backup job? Thanks, Luca
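The fatal error actually names the cause: the director's Storage resource requests Media Type `backup_storage`, while the SD's Device declares `backup_server`, and the two must match exactly for the SD to accept the device. A sketch of the corrected director side, leaving everything else as posted (only the Media Type line changes):

```conf
# bacula-dir.conf
Storage {
  Name = backup-sd
  Address = 192.168.1.4
  SDPort = 9103
  Password = "_BacUlA_dIrEcToR_"
  Device = Config_Storage
  Media Type = backup_server   # must equal the Media Type in the SD's Device resource
}
```
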
[Bacula-users] help doing a backup of several hosts
Hi all, apologies for my trivial questions, but I'm new to Bacula. I've got a centralized machine that runs the director and the storage daemon and that should back up several machines, each with its own file daemon. I'd like to back up each host on a volume that is a file on the backup machine's disk, in a separate directory for each host, something like: /backup/host1/volume1 /backup/host2/volume1 /backup/host2/volume2 /backup/host2/volume3 . The problem, or rather my doubt, is about how to configure Bacula to get the above behaviour. In my opinion I should have a different Device section in my bacula-sd.conf file for each host (and thus path) I'd like to back up, right? The problem is that, running the jobs from the console, Bacula keeps working with the previously selected volume. For example, once I'm in the console, if I label volume1 (for host1) and run the job, the job for host2 also keeps using the same volume. What am I not getting here? Can anyone help me understand, please? Thanks, Luca
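One layout that gives per-host directories is a separate Device per host, each with its own Media Type and Archive Device path, paired with its own director Storage resource. Since Bacula selects devices by Media Type, a volume labeled for host1 then cannot be picked up by host2's jobs. A sketch with hypothetical names (the `host1`/`host2` resources are illustrative, not from the message):

```conf
# bacula-sd.conf -- one device per host directory
Device {
  Name = host1_device
  Archive Device = /backup/host1
  Device Type = File
  Media Type = host1_disk        # unique per host
  Removable Media = No
  Random Access = Yes
  LabelMedia = yes
  AutomaticMount = yes
}
Device {
  Name = host2_device
  Archive Device = /backup/host2
  Device Type = File
  Media Type = host2_disk
  Removable Media = No
  Random Access = Yes
  LabelMedia = yes
  AutomaticMount = yes
}
```

Each host's Job then points its `Storage =` directive at the director Storage resource paired with that host's device; a per-host Pool keeps the volume catalogs separate as well.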
[Bacula-users] wrong auto-labeling ?
Hi all, in my bacula-dir.conf I've got a pool as follows: Pool { Name = Default Pool Type = Backup Recycle = yes AutoPrune = yes Volume Retention = 365 days Accept Any Volume = yes Label Format = "${Job}-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}" } and I've got several jobs, for example: Job { Name = prod_dataflex_job Enabled = yes Type = Backup Level = Incremental Client = prod-fd FileSet = prod_FlxFrm_fileset Storage = prod-sd Messages = Daemon Pool = Default } but when I start the job in the console, the volume is labeled with another job's name: *run Using default Catalog name=MyCatalog DB=bacula A job name must be specified. The defined Job resources are: 1: backup_mySelf 2: restore_mySelf 3: prod_coge_job 4: prod_samba_job 5: prod_config_job 6: prod_dataflex_job 7: prod_log_job Select Job resource (1-7): 6 Run Backup job JobName: prod_dataflex_job FileSet: prod_FlxFrm_fileset Level: Incremental Client: prod-fd Storage: prod-sd Pool: Default When: 2007-04-05 17:06:47 Priority: 10 OK to run? (yes/mod/no): yes Job started. JobId=30 Running Jobs: Writing: Full Backup job prod_samba_job JobId=29 Volume="prod_config_job-2007-04-05" pool="Default" device=""prod_storage" (/backup/prod/)" Files=1,561 Bytes=82,161,418 Bytes/sec=196,089 FDReadSeqNo=16,283 in_msg=11663 out_msg=5 fd=6 Why does this running job use the volume prod_config_job while it should be prod_dataflex_job? What am I missing? Thanks, Luca
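This is a consequence of all jobs sharing the Default pool with `Accept Any Volume = yes`: the Label Format is only used when a new volume has to be created, and a volume labeled by an earlier run of another job (here prod_config_job-2007-04-05) is still a perfectly valid, appendable Default-pool volume, so Bacula keeps writing to it. If one volume family per job is wanted, a pool per job is one option. A hedged sketch (the pool name is illustrative, and the elided Job directives stand for the rest of the existing definition):

```conf
# bacula-dir.conf -- a dedicated pool, so only this job's volumes live in it
Pool {
  Name = prod_dataflex_pool
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes
  Volume Retention = 365 days
  Label Format = "${Job}-${Year}-${Month:p/2/0/r}-${Day:p/2/0/r}"
}
Job {
  Name = prod_dataflex_job
  ...
  Pool = prod_dataflex_pool
}
```
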