Immediately after KafkaFull04 was written, before the job even terminated, it was pruned. It therefore appears to me that you have set your retention periods very low, or some other parameter, so that the Job or Volume is being pruned while the Job is still running, and that is what is causing the errors.
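For reference, these are the Pool directives I would look at first in the Director's configuration. This is only an illustrative sketch; the values below are invented examples and are not taken from your setup (only the pool name "ServerFull" comes from your report):

   # bacula-dir.conf -- illustrative Pool resource only
   Pool {
     Name = ServerFull
     Pool Type = Backup
     Recycle = yes                 # allow pruned Volumes to be reused
     AutoPrune = yes               # apply Volume Retention when a new Volume is needed
     Volume Retention = 30 days    # if this is shorter than the interval between
                                   # Fulls (or than the running Job itself), Jobs
                                   # can be pruned from the catalog mid-run
   }

The Job Retention and File Retention periods in the Client resource are worth checking as well.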
On Wednesday 31 May 2006 13:30, Alejandro Alfonso wrote:
> I have a strange problem that only happens in two Jobs; the others go
> fine! Bacula is the best backup solution I have ever tried, and more
> than 20 servers work OK!
>
> Full backups of one Server *always* return an Error message, but:
>
> 1) other Full jobs that use the same device, same spool, different fd,
>    different fileset, different pool, and a bigger amount of gigabytes, finish OK
> 2) Incremental backups of the same Server, fd and fileset also work fine!
>    (Only Full fails!)
> 3) changing the fileset, with the same device, same spool, same pool, and
>    USING ONLY ONE TAPE, the job finishes OK
>
> Here are the results:
>
> 30-May 17:29 poe-dir: Start Backup JobId 6012, Job=ServerFull.2006-05-30_16.59.54
> 30-May 17:30 jja-sd: Volume "ServerFull04" previously written, moving to end of data.
> 30-May 17:32 jja-sd: Ready to append to end of Volume "ServerFull04" at file=21.
> 30-May 17:32 jja-sd: Spooling data ...
> 30-May 17:36 jja-sd: User specified spool size reached.
> 30-May 17:36 jja-sd: Writing spooled data to Volume. Despooling 2,424,876,393 bytes ...
> 30-May 20:59 jja-sd: Spooling data again ...
> [...some spool later...]
> 30-May 18:58 jja-sd: End of Volume "KafkaFull04" at 40:7904 on device "Petalo" (/dev/st0). Write of 64512 bytes got -1.
> 30-May 18:58 jja-sd: Re-read of last block succeeded.
> 30-May 18:58 jja-sd: End of medium on Volume "KafkaFull04" Bytes=39,828,927,676 Blocks=617,388 at 30-May-2006 18:58.
> 30-May 19:00 poe-dir: Pruned 1 Job on Volume "KafkaFull04" from catalog.
> 30-May 19:00 poe-dir: Recycled volume "KafkaFull05"
> 30-May 19:00 jja-sd: Please mount Volume "KafkaFull05" on Storage Device "Petalo" (/dev/st0) for Job KafkaFull.2006-05-30_16.59.54
> 30-May 19:18 jja-sd: Recycled volume "KafkaFull05" on device "Petalo" (/dev/st0), all previous data lost.
> 30-May 19:18 jja-sd: New volume "KafkaFull05" mounted on device "Petalo" (/dev/st0) at 30-May-2006 19:18.
> 30-May 19:23 jja-sd: Spooling data again ...
> [...some spool later...]
> 30-May 21:02 jja-sd: User specified spool size reached.
> 30-May 21:02 jja-sd: Writing spooled data to Volume. Despooling 2,424,876,070 bytes ...
> 30-May 21:29 jja-sd: Spooling data again ...
> 30-May 21:31 jja-sd: Committing spooled data to Volume "ServerFull05". Despooling 1,136,088,341 bytes ...
> 30-May 21:34 jja-sd: Sending spooled attrs to the Director. Despooling 59,004,306 bytes ...
> 30-May 21:41 poe-dir: ServerFull.2006-05-30_16.59.54 Warning: Error updating job record.
>    sql_update.c:169 Update problem: affected_rows=0
> 30-May 21:41 poe-dir: ServerFull.2006-05-30_16.59.54 Warning: Error getting job record for stats: sql_get.c:287 No Job found for JobId 6012
> 30-May 21:41 poe-dir: ServerFull.2006-05-30_16.59.54 Error: Bacula 1.38.9 (02May06): 30-May-2006 21:41:12
>   JobId:                  6012
>   Job:                    ServerFull.2006-05-30_16.59.54
>   Backup Level:           Full
>   Client:                 "server-fd" i686-pc-linux-gnu,gentoo,1.6.14
>   FileSet:                "ServerFull" 2006-02-08 19:45:15
>   Pool:                   "ServerFull"
>   Storage:                "Petalo"
>   Scheduled time:         30-May-2006 16:59:52
>   Start time:             30-May-2006 17:29:59
>   End time:               30-May-2006 21:41:12
>   Elapsed time:           4 hours 11 mins 13 secs
>   Priority:               10
>   FD Files Written:       187,112
>   SD Files Written:       187,112
>   FD Bytes Written:       49,550,280,837 (49.55 GB)
>   SD Bytes Written:       49,580,153,543 (49.58 GB)
>   Rate:                   3287.4 KB/s
>   Software Compression:   None
>   Volume name(s):         ServerFull05
>   Volume Session Id:      45
>   Volume Session Time:    1147363481
>   Last Volume Bytes:      30,114,408,500 (30.11 GB)
>   Non-fatal FD errors:    0
>   SD Errors:              0
>   FD termination status:  OK
>   SD termination status:  OK
>   Termination:            *** Backup Error ***
>
> LOOK: "Volume name(s):" doesn't show the two tapes used in the backup!
>
> I think this has happened since a reboot with a running job. I think it's a
> Pool problem, and the source of it is in the database. Can I debug it by
> changing the source code?
>
> sql_update.c:169
>    db_lock(mdb);
>    Mmsg(mdb->cmd,
>       "UPDATE Job SET JobStatus='%c', EndTime='%s', "
>       "ClientId=%s, JobBytes=%s, JobFiles=%u, JobErrors=%u, VolSessionId=%u, "
>       "VolSessionTime=%u, PoolId=%s, FileSetId=%s, JobTDate=%s WHERE JobId=%s",
>       (char)(jr->JobStatus), dt, ClientId, edit_uint64(jr->JobBytes, ed1),
>       jr->JobFiles, jr->JobErrors, jr->VolSessionId, jr->VolSessionTime,
>       PoolId, FileSetId, edit_uint64(JobTDate, ed2),
>       edit_int64(jr->JobId, ed3));
>
>    stat = UPDATE_DB(jcr, mdb, mdb->cmd);
>    db_unlock(mdb);
>
> sql_get.c:287
>    db_lock(mdb);
>    if (jr->JobId == 0) {
>       Mmsg(mdb->cmd, "SELECT VolSessionId,VolSessionTime,"
>          "PoolId,StartTime,EndTime,JobFiles,JobBytes,JobTDate,Job,JobStatus,"
>          "Type,Level,ClientId,Name "
>          "FROM Job WHERE Job='%s'", jr->Job);
>    } else {
>       Mmsg(mdb->cmd, "SELECT VolSessionId,VolSessionTime,"
>          "PoolId,StartTime,EndTime,JobFiles,JobBytes,JobTDate,Job,JobStatus,"
>          "Type,Level,ClientId,Name "
>          "FROM Job WHERE JobId=%s",
>          edit_int64(jr->JobId, ed1));
>    }
>
>    if (!QUERY_DB(jcr, mdb, mdb->cmd)) {
>       db_unlock(mdb);
>       return 0;                     /* failed */
>    }
>    if ((row = sql_fetch_row(mdb)) == NULL) {
>       Mmsg1(mdb->errmsg, _("No Job found for JobId %s\n"),
>          edit_int64(jr->JobId, ed1));
>       sql_free_result(mdb);
>       db_unlock(mdb);
>       return 0;                     /* failed */
>    }
>
> Thanks in advance
>
> Best regards!!!
>
> PS: I didn't report this to http://bugs.bacula.org/ because I think it's a
> particular problem

--
Best regards,

Kern

  (">
  /\
  V_V
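To confirm that the Job record really was pruned from the catalog (which would explain both the affected_rows=0 update and the "No Job found for JobId 6012" message), the catalog can be queried directly. A minimal sketch, assuming direct access to the catalog database and the standard Bacula schema (Job, JobMedia and Media tables); JobId 6012 is taken from the report above:

   -- Does the Job row still exist? An empty result while the job is running
   -- means the Director's final UPDATE matches no rows (affected_rows=0).
   SELECT JobId, Name, JobStatus, StartTime, EndTime
     FROM Job
    WHERE JobId = 6012;

   -- Which Volumes are still associated with the Job? Missing JobMedia rows
   -- would also explain why "Volume name(s)" lists only one of the two tapes.
   SELECT Media.VolumeName
     FROM JobMedia
     JOIN Media ON Media.MediaId = JobMedia.MediaId
    WHERE JobMedia.JobId = 6012;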