Re: [Bacula-users] backup issue with batch table

2007-11-19 Thread Nick Jones
I'm still getting the error after trying the ALTER command (I ran it
on the FILE table, which was absent-minded of me since the error was on
the batch table, ah well).  I learned some more about the batch table
and found out from a mailing list that it is a temporary table, meaning
the ALTER solution will not work for this.

Are there any serious implications in migrating the tables from MySQL
4.x to 5 by recompiling?  I only found this in the docs:
"If you upgrade MySQL, you must reconfigure, rebuild, and re-install
Bacula otherwise you are likely to get bizarre failures. If you
install from rpms and you upgrade MySQL, you must also rebuild Bacula.
You can do so by rebuilding from the source rpm. To do so, you may
need to modify the bacula.spec file to account for the new MySQL
version."

It doesn't mention the catalog needing to be recreated, so hopefully
this will be smooth and easy.  Does anyone recommend against this for
any reason?

Thanks a lot

Nick




On Nov 15, 2007 5:49 PM, Jason Martin <[EMAIL PROTECTED]> wrote:
> MySQL has its own size limits on files. See:
> http://wiki.bacula.org/doku.php?id=faq#why_does_mysql_say_my_file_table_is_full
>
> -Jason Martin
>
>
> On Thu, Nov 15, 2007 at 05:44:44PM -0600, Nick Jones wrote:
> > Hello,
> >
> > I was hoping someone could help me identify what is going wrong with
> > my backup job?
> >
> > I recently updated from 2.0.3 to 2.2.5 so that building of directory
> > trees for restores would be faster (and I am quite pleased).  After I
> > updated, everything seemed fine: I was able to run several incremental
> > backups of the identical job, except on a different but identical
> > tapeset that is now offsite.
> >
> > I am trying to create a new backup on the secondary set of tapes and I
> > keep running into this error after a day and a half: Table 'batch' is
> > full.  I'm using a large my.cnf config.
> >
> > Another error is: Attribute create error. sql_find.c:333 Request for
> > Volume item 1 greater than max 0 or less than 1.  I may have read
> > somewhere that this is caused by a disk space issue, so I suspect I'm
> > running out of space.
> >
> > The fileset is roughly 27,000,000 (27 million) files consuming 2.5 TB of
> > space.  I have 16GB free on the root partition where mysql lives,
> > however the bacula sql tables and working directory are symbolically
> > linked to a RAID with 80GB of free space.  I had hoped this would be
> > enough.  Is it not?
> >
> > Thanks for any hints on identifying the problem.
> >
> > Nick
> >
> >
> >
> > -- Forwarded message --
> > From: Bacula <[EMAIL PROTECTED]>
> > Date: Nov 15, 2007 5:05 PM
> > Subject: Bacula: Backup Fatal Error of lcn-fd Full
> > To: [EMAIL PROTECTED]
> >
> >
> > 14-Nov 09:29 lcn-dir JobId 375: No prior Full backup Job record found.
> > 14-Nov 09:29 lcn-dir JobId 375: No prior or suitable Full backup found
> > in catalog. Doing FULL backup.
> > 14-Nov 09:29 lcn-dir JobId 375: Start Backup JobId 375,
> > Job=Job1.2007-11-14_09.29.05
> > 14-Nov 09:29 lcn-dir JobId 375: Recycled current volume "tape1"
> > 14-Nov 09:29 lcn-dir JobId 375: Using Device "Ultrium"
> > 14-Nov 09:29 lcn-sd JobId 375: 3301 Issuing autochanger "loaded? drive
> > 0" command.
> > 14-Nov 09:29 lcn-sd JobId 375: 3302 Autochanger "loaded? drive 0",
> > result is Slot 1.
> > 14-Nov 09:29 lcn-sd JobId 375: Recycled volume "tape1" on device
> > "Ultrium" (/dev/tape), all previous data lost.
> > 14-Nov 23:46 lcn-sd JobId 375: End of Volume "tape1" at 742:11802 on
> > device "Ultrium" (/dev/tape). Write of 64512 bytes got -1.
> > 14-Nov 23:46 lcn-sd JobId 375: Re-read of last block succeeded.
> > 14-Nov 23:46 lcn-sd JobId 375: End of medium on Volume "tape1"
> > Bytes=742,713,882,624 Blocks=11,512,801 at 14-Nov-2007 23:46.
> > 14-Nov 23:46 lcn-dir JobId 375: Recycled volume "tape4"
> > 14-Nov 23:46 lcn-sd JobId 375: 3307 Issuing autochanger "unload slot
> > 1, drive 0" command.
> > 14-Nov 23:47 lcn-sd JobId 375: 3304 Issuing autochanger "load slot 4,
> > drive 0" command.
> > 14-Nov 23:47 lcn-sd JobId 375: 3305 Autochanger "load slot 4, drive
> > 0", status is OK.
> > 14-Nov 23:47 lcn-sd JobId 375: 3301 Issuing autochanger "loaded? drive
> > 0" command.
> > 14-Nov 23:47 lcn-sd JobId 375: 3302 Autochanger "loaded? drive 0",
> > result is Slot 4.
> > 14-Nov 23:47 lcn-sd JobId 375: Recycled volume "tape4"

[Bacula-users] backup issue with batch table

2007-11-15 Thread Nick Jones
Hello,

I was hoping someone could help me identify what is going wrong with
my backup job?

I recently updated from 2.0.3 to 2.2.5 so that building of directory
trees for restores would be faster (and I am quite pleased).  After I
updated, everything seemed fine: I was able to run several incremental
backups of the identical job, except on a different but identical
tapeset that is now offsite.

I am trying to create a new backup on the secondary set of tapes and I
keep running into this error after a day and a half: Table 'batch' is
full.  I'm using a large my.cnf config.

Another error is: Attribute create error. sql_find.c:333 Request for
Volume item 1 greater than max 0 or less than 1.  I may have read
somewhere that this is caused by a disk space issue, so I suspect I'm
running out of space.

The fileset is roughly 27,000,000 (27 million) files consuming 2.5 TB of
space.  I have 16GB free on the root partition where mysql lives,
however the bacula sql tables and working directory are symbolically
linked to a RAID with 80GB of free space.  I had hoped this would be
enough.  Is it not?
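As a sanity check on the 80 GB figure, a rough back-of-the-envelope estimate; the bytes-per-file overhead here is purely an assumption for illustration, not a measured Bacula/MySQL number:

```python
# Rough catalog-growth estimate for one full backup.
# bytes_per_file is an assumed figure (row + index overhead), not measured.
n_files = 27_000_000
bytes_per_file = 300

catalog_bytes = n_files * bytes_per_file
print(f"~{catalog_bytes / 1e9:.1f} GB of catalog growth per full backup")
```

By that crude measure 80 GB is plenty for the tables themselves; the batch insert, however, also needs comparable temporary space wherever MySQL's tmpdir points, which may still be the 16 GB root partition despite the symlinked data directory.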

Thanks for any hints on identifying the problem.

Nick



-- Forwarded message --
From: Bacula <[EMAIL PROTECTED]>
Date: Nov 15, 2007 5:05 PM
Subject: Bacula: Backup Fatal Error of lcn-fd Full
To: [EMAIL PROTECTED]


14-Nov 09:29 lcn-dir JobId 375: No prior Full backup Job record found.
14-Nov 09:29 lcn-dir JobId 375: No prior or suitable Full backup found
in catalog. Doing FULL backup.
14-Nov 09:29 lcn-dir JobId 375: Start Backup JobId 375,
Job=Job1.2007-11-14_09.29.05
14-Nov 09:29 lcn-dir JobId 375: Recycled current volume "tape1"
14-Nov 09:29 lcn-dir JobId 375: Using Device "Ultrium"
14-Nov 09:29 lcn-sd JobId 375: 3301 Issuing autochanger "loaded? drive
0" command.
14-Nov 09:29 lcn-sd JobId 375: 3302 Autochanger "loaded? drive 0",
result is Slot 1.
14-Nov 09:29 lcn-sd JobId 375: Recycled volume "tape1" on device
"Ultrium" (/dev/tape), all previous data lost.
14-Nov 23:46 lcn-sd JobId 375: End of Volume "tape1" at 742:11802 on
device "Ultrium" (/dev/tape). Write of 64512 bytes got -1.
14-Nov 23:46 lcn-sd JobId 375: Re-read of last block succeeded.
14-Nov 23:46 lcn-sd JobId 375: End of medium on Volume "tape1"
Bytes=742,713,882,624 Blocks=11,512,801 at 14-Nov-2007 23:46.
14-Nov 23:46 lcn-dir JobId 375: Recycled volume "tape4"
14-Nov 23:46 lcn-sd JobId 375: 3307 Issuing autochanger "unload slot
1, drive 0" command.
14-Nov 23:47 lcn-sd JobId 375: 3304 Issuing autochanger "load slot 4,
drive 0" command.
14-Nov 23:47 lcn-sd JobId 375: 3305 Autochanger "load slot 4, drive
0", status is OK.
14-Nov 23:47 lcn-sd JobId 375: 3301 Issuing autochanger "loaded? drive
0" command.
14-Nov 23:47 lcn-sd JobId 375: 3302 Autochanger "loaded? drive 0",
result is Slot 4.
14-Nov 23:47 lcn-sd JobId 375: Recycled volume "tape4" on device
"Ultrium" (/dev/tape), all previous data lost.
14-Nov 23:47 lcn-sd JobId 375: New volume "tape4" mounted on device
"Ultrium" (/dev/tape) at 14-Nov-2007 23:47.
15-Nov 15:53 lcn-sd JobId 375: End of Volume "tape4" at 808:12641 on
device "Ultrium" (/dev/tape). Write of 64512 bytes got -1.
15-Nov 15:53 lcn-sd JobId 375: Re-read of last block succeeded.
15-Nov 15:53 lcn-sd JobId 375: End of medium on Volume "tape4"
Bytes=808,763,784,192 Blocks=12,536,640 at 15-Nov-2007 15:53.
15-Nov 15:53 lcn-dir JobId 375: Recycled volume "tape3"
15-Nov 15:53 lcn-sd JobId 375: 3307 Issuing autochanger "unload slot
4, drive 0" command.
15-Nov 15:54 lcn-sd JobId 375: 3304 Issuing autochanger "load slot 3,
drive 0" command.
15-Nov 15:54 lcn-sd JobId 375: 3305 Autochanger "load slot 3, drive
0", status is OK.
15-Nov 15:54 lcn-sd JobId 375: 3301 Issuing autochanger "loaded? drive
0" command.
15-Nov 15:54 lcn-sd JobId 375: 3302 Autochanger "loaded? drive 0",
result is Slot 3.
15-Nov 15:54 lcn-sd JobId 375: Recycled volume "tape3" on device
"Ultrium" (/dev/tape), all previous data lost.
15-Nov 15:54 lcn-sd JobId 375: New volume "tape3" mounted on device
"Ultrium" (/dev/tape) at 15-Nov-2007 15:54.
15-Nov 17:04 lcn-dir JobId 375: Fatal error: sql_create.c:732
sql_create.c:732 insert INSERT INTO batch VALUES
(20976597,375,'/mnt/right/ppg/dropbox/for_jessica.dir/lesion_vol.dir/2117/','2117_lesionnot_fruit_004.flt.gz','gg
Bgn/f IGw B Ru U A FO BAA I BHOj5q BE7Ot4 BG2Gxr A A
C','xWoEMzfHWuvIxoZu2vxP0A') failed:
The table 'batch' is full
15-Nov 17:04 lcn-dir JobId 375: sql_create.c:732 INSERT INTO batch
VALUES 
(20976597,375,'/mnt/right/ppg/dropbox/for_jessica.dir/lesion_vol.dir/2117/','2117_lesionnot_fruit_004.flt.gz','gg
Bgn/f IGw B Ru U A FO BAA I BHOj5q BE7Ot4 BG2Gxr A A
C','xWoEMzfHWuvIxoZu2vxP0A')
15-Nov 17:04 lcn-dir JobId 375: Fatal error: catreq.c:478 Attribute
create error. sql_find.c:333 Request for Volume item 1 greater than
max 0 or less than 1
15-Nov 17:04 lcn-sd JobId 375: Job Job1.2007-11-14_09.29.05 marked to
be canceled.
15-Nov 17:04 lcn-sd JobId 375: Job write elapsed time = 31:31:59,
Transfer rate = 14.30 M bytes/sec

[Bacula-users] Job scheduling issue

2007-07-18 Thread Nick Jones
Hi all.

Maybe this is normal bacula behavior but it is causing me problems.

Say I have a large job to run (~28 hrs).  I schedule the job to be run
daily.  Now, if there is not a suitable full backup already written
(ie. the full backup is the current running job), the scheduled job
happily checks and sees that there is not a full backup.   Say the job
gets scheduled twice before the first full backup completes.   Now it
will run through two more identical full backups filling my tapes
until the whole pool is full.

Shouldn't the scheduled jobs wait to check for a prior backup until
the actual moment they are going to run, instead of two days before
they actually run?  (Yes, my backup takes days, a problem caused by data
disorganization and a lack of proper archiving for the past 20 years.)

This doesn't really pose a huge problem though since it only happens
when a job is scheduled while the full backup is being created.
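For the record, later Bacula releases grew directives aimed at exactly this situation; a sketch of a Job resource using them (the directive names come from newer versions, so check whether your release supports them before relying on this):

```conf
# Hypothetical bacula-dir.conf Job fragment; the duplicate-control
# directives appeared in later Bacula releases, not in all 2.x versions.
Job {
  Name = "BigNightlyJob"          # illustrative name
  Allow Duplicate Jobs = no       # refuse a second copy of a running job
  Cancel Queued Duplicates = yes  # drop duplicates already in the queue
}
```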

Thanks

Nick

-
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


[Bacula-users] Backup only modified files question

2007-05-23 Thread Nick Jones
Hello everyone.

I am interested in creating a disk to disk daily backup (at least
daily) that will backup up all files that were modified after a
certain date.

I realize this is possible with a differential backup, but what I DON'T
want is a full backup.  I just want the subset of files that have been
modified since running job X.

Perhaps (hopefully) I am wrong, but the documentation states that a
differential backup must belong to the same job as a prior full backup,
meaning a full backup is required.

I was wondering if there is another way to do this.  Perhaps by using
the catalog from a full backup performed on date X.

If anyone has any suggestions or tricks for this please let me know.
I apologize if I missed this in the documentation or if the answer is
trivial.  BTW, this is so that if I lose the RAID volume, I can restore
from a two-week-old tape to the new RAID and then update the files with
the daily disk backup from the other backup RAID.
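One way to approximate "everything modified since date X" outside Bacula's full/differential pairing is to build the file list yourself against a reference timestamp. A sketch (paths and dates are illustrative), using a throwaway directory to show the mechanics:

```shell
#!/bin/sh
# Collect files modified after a cutoff date using a reference timestamp.
# The demo directory stands in for the real fileset root.
set -e
data=$(mktemp -d)
touch -t 200001010000 "$data/old.txt"   # pretend-old file (year 2000)
touch "$data/new.txt"                   # modified now

ref=$(mktemp)
touch -t 200705010000 "$ref"            # cutoff: 2007-05-01 00:00

# Everything newer than the cutoff; this list could drive a separate job.
find "$data" -type f -newer "$ref" > /tmp/changed-files.txt
cat /tmp/changed-files.txt
rm -rf "$data" "$ref"
```

A list like this can be fed to a FileSet via an external file list, so a daily job backs up only that subset; whether that fits your restore workflow is a separate question.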

Thanks

Nick



Re: [Bacula-users] tape marked full, but has 340GB free, now says 635 billion bytes for a 400GB tape.

2007-01-25 Thread Nick Jones
Oh :-)   Now I see what you mean.  No, I did not know that.  It makes
sense though.

Thanks again sir.

Nick

On 1/25/07, John Drescher <[EMAIL PROTECTED]> wrote:
> > Wow, you must really think I'm oblivious.  I should try to word my
> > questions better :)
> >
> > Yes, I understand that flipping a bit does not allow data on tape to
> > magically reorganize itself on the tape.  It must be rewritten of
> > course.
> >
> No sorry, I did not mean that. I meant that a tape cannot be
> partially uncompressed and partially compressed; it is all or none.
>
> John
>


-- 
Nick Jones
University of Iowa
Dept of Neurology
Systems Analyst
319-356-0451



Re: [Bacula-users] tape marked full, but has 340GB free, now says 635 billion bytes for a 400GB tape.

2007-01-25 Thread Nick Jones
This is the whole story.

The first time I wrote to the tape after running btape (including the
autochanger test) was part of a full backup to 7 tapes.  All the tapes
filled to roughly 400 gigs except this one, which errored at 52 gigs,
at which point bacula moved to the next appendable volume.

Next, I removed the volume from the pool, rewound it, wrote an EOF,
relabeled, and added to the pool again.  Then I ran an incremental
backup to fill the tape up to see if it would error again.

So as you said, it filled up with 600-some gigs of compressed data,
instead of erroring out at 52 GB like I expected.  Basically, compression
got turned on without me knowing it or asking for it.  The backup I
ran last week of the *exact same* data did not fill any tapes over
400, meaning compression was off then.

Regardless, the point of the exercise was to elicit an error on the
tape.  I'm going to try again with btape fill and then move on,
assuming the tape is good.

Thanks for replying.

On 1/25/07, John Drescher <[EMAIL PROTECTED]> wrote:
> >
> > So, ok, all of a sudden compression is on.  I think it must have been
> > enabled when I was messing with btape trying to test the "bad" tape.
> > If this is the case though, then I wonder why compression wasn't
> > enabled the first time I ran btape on these exact same tapes/drive
> > (note: on the last run of btape I terminated (killall) the btape
> > console after it froze on me, which may explain that).
> >
> Did you know if compression is off at the drive and then data is
> written to the tape and then compression is turned back on the drive
> the tape will still not compress data unless it is wiped?
>

Wow, you must really think I'm oblivious.  I should try to word my
questions better :)

Yes, I understand that flipping a bit does not allow data on tape to
magically reorganize itself on the tape.  It must be rewritten of
course.

I appreciate your help.

Nick



Re: [Bacula-users] tape marked full, but has 340GB free, now says 635 billion bytes for a 400GB tape.

2007-01-25 Thread Nick Jones
I apologize, I understand compression and read the FAQ and documentation.

My question arises amidst earlier tape errors with this particular
tape and the fact that all the other tapes become full at just over
400 billion bytes, or 400 gigs.

So, ok, all of a sudden compression is on.  I think it must have been
enabled when I was messing with btape trying to test the "bad" tape.
If this is the case though, then I wonder why compression wasn't
enabled the first time I ran btape on these exact same tapes/drive
(note: on the last run of btape I terminated (killall) the btape
console after it froze on me, which may explain that).

I'm just going to wipe the pool and jobs and start over, after running
a btape fill on this tape.

Thanks for pointing that out.

Nick

ps.
[EMAIL PROTECTED] ~]# tapeinfo -f /dev/sg2
Product Type: Tape Drive
Vendor ID: 'HP  '
Product ID: 'Ultrium 3-SCSI  '
Revision: 'G24H'
Attached Changer: No
SerialNumber: 'HU10534WE1'
MinBlock:1
MaxBlock:16777215
SCSI ID: 4
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: Not Loaded
Density Code: 0x44
BlockSize: 0
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0x1
DeCompType: 0x1
Block Position: 22658
[EMAIL PROTECTED] ~]#

On 1/25/07, John Drescher <[EMAIL PROTECTED]> wrote:
> On 1/25/07, Nick Jones <[EMAIL PROTECTED]> wrote:
> > I had a bad tape recently
> >
> > I have tried to verify if the tape is good or not, but I've run into
> > another issue.
> >
> > It now wrote the tape fine, and didn't error out at
> > Bytes=52,398,230,131.  However, it wrote way more bytes than what
> > will fit on the tape.  It is a 400 GB LTO-3 tape.
> >
> > |  22 | tape7  | Full  | 635,479,838,715 |  636 | 15,552,000 |   1 |    7 | 1 | LTO   | 2007-01-25 08:52:41 |
> >
> > Can anyone explain how the drive wrote so much data, or why it *thinks* it 
> > did.
> >
> The drive has hardware data compression that saves tape space by
> compressing each packet sent to the drive. Although the manufacturer
> will claim 2:1 is the norm (and call the tape an 800GB tape), this
> number is highly dependent on your data. Remember that compressed data
> does not compress a second time, and random data is also not
> compressible, but text is very compressible.
>
>
>
> BTW (dev team), Is this info in the docs or faq anywhere? I believe I
> have answered this question 20 times or more 
>
> John
>


-- 
Nick Jones
University of Iowa
Dept of Neurology
Systems Analyst
319-356-0451



Re: [Bacula-users] tape marked full, but has 340GB free, now says 635 billion bytes for a 400GB tape.

2007-01-25 Thread Nick Jones
I had a bad tape recently

I have tried to verify if the tape is good or not, but I've run into
another issue.

It now wrote the tape fine, and didn't error out at
Bytes=52,398,230,131.  However, it wrote way more bytes than what
will fit on the tape.  It is a 400 GB LTO-3 tape.

|  22 | tape7  | Full  | 635,479,838,715 |  636 | 15,552,000 |   1 |    7 | 1 | LTO   | 2007-01-25 08:52:41 |

Can anyone explain how the drive wrote so much data, or why it *thinks* it did.

Also, I set up 2 different jobs that are meant to run on 2 different
tape sets (I have two tape sets of 7 tapes each) so I can have 2 full
backups that are in different places (to mitigate the "grenade in
the server room" scenario :-).

I am now getting these errors after running the second backup.

25-Jan 08:52 lcn-dir: There are no Jobs associated with Volume "tape1". Marking
it purged.

Does anyone have suggestions as to what happened to the job associated
with that backup?  I purged a volume in the pool associated with the
job, because I think it has errors.  I did not think this would destroy
the whole job.

Thanks everyone.

Nick

On 12/15/06, Kern Sibbald <[EMAIL PROTECTED]> wrote:
> On Friday 15 December 2006 18:40, Nick Jones wrote:
> > Comments/questions inline.
> >
> > On 12/15/06, Kern Sibbald <[EMAIL PROTECTED]> wrote:
> > > On Friday 15 December 2006 17:36, Nick Jones wrote:
> > > > Your explanation is correct.  Here is the log which I should have looked at
> > > > and included in the original.
> > > >
> > > > 12-Dec 23:19 lcn-dir: Start Backup JobId 6, Job=BackupCatalog.2006-12-12_23.10.00
> > > > 12-Dec 23:19 lcn-sd: BackupCatalog.2006-12-12_23.10.00 Error: block.c:538
> > > > Write error at 53:1 on device "Ultrium" (/dev/tape). ERR=Input/output error.
> > > > 12-Dec 23:19 lcn-sd: BackupCatalog.2006-12-12_23.10.00 Error: Error writing
> > > > final EOF to tape. This Volume may not be readable.
> > > > dev.c:1542 ioctl MTWEOF error on "Ultrium" (/dev/tape). ERR=Input/output error.
> > > > 12-Dec 23:19 lcn-sd: End of medium on Volume "tape7" Bytes=52,398,230,131
> > > > Blocks=812,226 at 12-Dec-2006 23:19.
> > > > 12-Dec 23:24 lcn-sd: Invalid slot=0 defined, cannot autoload Volume.
> > > >
> > > > This happened after a full backup, then the backupCatalog job was running
> > > > and ran several times successfully, then this happened.  Does this in fact
> > > > indicate a physical error on the tape?
> > >
> > > Yes very likely.
> > >
> > > > If so, I guess I just have to try to
> > > > get a refund or warranty on the tape which is disappointing.  Ah well it is
> > > > not a big deal.
> > > >
> > > > Do you know, is it possible to purge the tape, then run an incremental
> > > > backup where bacula knows (from the purge) that these files are no longer in
> > > > the catalog, and so it backs them up?
> > > >
> > > > I don't care much about leaving the tape in the pool as a 60GB volume, but I
> > > > do worry that I won't be able to recover data on this tape
> > >
> > > I wouldn't worry too much.  It is very likely that all data to that point is
> > > OK.  You could test that by running a Volume Verify on that JobId.
> > >
> > > > if it does in
> > > > fact have physical errors, and would like to get rid of it.
> > >
> > > I wouldn't put too much trust in such a tape myself.  If you do nothing, the
> > > next time it is recycled, Bacula will start from the beginning, and if the
> > > error was temporary (possible but unlikely), Bacula will fill the whole tape.
> > >
> > > > Also I want to
> > > > avoid recreating the whole backup which takes awhile (24hr) with our
> > > > fileset.
> > > > FD Files Written:  45,422,838
> > > > FD Bytes Written:   2,059,990,875,973 (2.059 TB)
> > > >
> > > > Can someone let me know if this is possible?
> > >
> > > You can load version 1.39.30 and use Migration to copy all the jobs on that
> > > Volume to a new Volume (or Volumes).
> >
> > Is there no other

[Bacula-users] building directory tree for restore takes long time

2007-01-18 Thread Nick Jones
I'm trying to test the restore functionality of our backup system
using bacula 1.38.11 under Yellow Dog Linux on an XServe G5.

Everything is working great, except for the restore time.  Building
the directory tree from the MySQL DB takes forever (2+ hrs).
However this is normal, I *think*, because the JobId I'm
selecting from covers our whole filesystem: 45,000,000 files.

If only the option 11 under *restore all command worked recursively.
Although I would probably have the same problem if that were the case.

Do I just need to chop up the job into multiple jobs or are there
other ways to improve performance with this?  Also, is it possible
this is recursively following symbolic links and that's why it takes
so long?  What does "building the directory tree" do exactly?
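Building the directory tree queries every file record for the selected JobIds out of the catalog into memory; it works on catalog rows, not the filesystem, so it cannot be following symbolic links. The standard list advice of the era was to confirm the catalog indexes exist; a sketch for a MySQL catalog (index and column names vary by Bacula version, so treat this as illustrative):

```sql
-- Check what indexes the File table actually has
SHOW INDEX FROM File;

-- If there is no index on JobId, restore-tree queries scan the whole
-- table; an index like this is what the stock schema is supposed to
-- define.
CREATE INDEX JobId ON File (JobId);
```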

Thanks

Nick

-- 
Nick Jones
University of Iowa
Dept of Neurology
Systems Analyst
319-356-0451



Re: [Bacula-users] tape marked full, but has 340GB free

2006-12-15 Thread Nick Jones

Comments/questions inline.

On 12/15/06, Kern Sibbald <[EMAIL PROTECTED]> wrote:
> On Friday 15 December 2006 17:36, Nick Jones wrote:
> > Your explanation is correct.  Here is the log which I should have looked at
> > and included in the original.
> >
> > 12-Dec 23:19 lcn-dir: Start Backup JobId 6, Job=BackupCatalog.2006-12-12_23.10.00
> > 12-Dec 23:19 lcn-sd: BackupCatalog.2006-12-12_23.10.00 Error: block.c:538
> > Write error at 53:1 on device "Ultrium" (/dev/tape). ERR=Input/output error.
> > 12-Dec 23:19 lcn-sd: BackupCatalog.2006-12-12_23.10.00 Error: Error writing
> > final EOF to tape. This Volume may not be readable.
> > dev.c:1542 ioctl MTWEOF error on "Ultrium" (/dev/tape). ERR=Input/output error.
> > 12-Dec 23:19 lcn-sd: End of medium on Volume "tape7" Bytes=52,398,230,131
> > Blocks=812,226 at 12-Dec-2006 23:19.
> > 12-Dec 23:24 lcn-sd: Invalid slot=0 defined, cannot autoload Volume.
> >
> > This happened after a full backup, then the backupCatalog job was running
> > and ran several times successfully, then this happened.  Does this in fact
> > indicate a physical error on the tape?
>
> Yes very likely.
>
> > If so, I guess I just have to try to
> > get a refund or warranty on the tape which is disappointing.  Ah well it is
> > not a big deal.
> >
> > Do you know, is it possible to purge the tape, then run an incremental
> > backup where bacula knows (from the purge) that these files are no longer in
> > the catalog, and so it backs them up?
> >
> > I don't care much about leaving the tape in the pool as a 60GB volume, but I
> > do worry that I won't be able to recover data on this tape
>
> I wouldn't worry too much.  It is very likely that all data to that point is
> OK.  You could test that by running a Volume Verify on that JobId.
>
> > if it does in
> > fact have physical errors, and would like to get rid of it.
>
> I wouldn't put too much trust in such a tape myself.  If you do nothing, the
> next time it is recycled, Bacula will start from the beginning, and if the
> error was temporary (possible but unlikely), Bacula will fill the whole tape.
>
> > Also I want to
> > avoid recreating the whole backup which takes awhile (24hr) with our
> > fileset.
> > FD Files Written:  45,422,838
> > FD Bytes Written:   2,059,990,875,973 (2.059 TB)
> >
> > Can someone let me know if this is possible?
>
> You can load version 1.39.30 and use Migration to copy all the jobs on that
> Volume to a new Volume (or Volumes).




Is there no other way to remove the tape without destroying the 'whole'
job?  Can't I purge the tape (I don't care about rewriting the 58GB on a
different tape, as long as it gets backed up on the next incremental
backup AFTER the data/volume has been purged), so that bacula (via the
catalog) will think those files have been added to the filesystem since
the last backup and will add them to the appendable tape associated with
that job's pool?

Also, I have never upgraded; can you think of any issues with this
upgrade that aren't mentioned in the documentation?

Thanks a lot

PS. I love this product thus far.

Nick





> Thanks
>
> Nick
>
>
>
>
> On 12/15/06, James Cort <[EMAIL PROTECTED]> wrote:
> >
> > Nick Jones wrote:
> > > Does anyone know why or how my tape7 could have been marked full even
> > > though it is the same capacity as the other tapes and is not full
> > > according to bacula.  Now it is appending to tape4.  Also, it is hard
> > > to read but it says 0 for 'in changer' for tapes 1,2, and 4 which are
> > > all in the changer and were listed as such at one time.  All I did was
> > > run a backup.  How did these values become changed?  How can I tell
> > > bacula that tape7 is not full (it's pretty obvious it's not full
> > > according to bacula's records).  Also is it a problem that the tapes
> > > are no longer 'in changer' ?
> > >
> > > Thanks alot for help or suggestions
> > I've had exactly that happen when the tape media reports an error.  I
> > can't remember exactly where it's written, but IIRC the documentation
> > says something to the effect that it's not always possible to determine
> > the difference between a "tape full" message and an error on the tape.
> >
>
>
>
> --
> Nick Jones
> University of Iowa
> Dept of Neurology
> Systems Analyst
> 319-356-0451
>



Re: [Bacula-users] tape marked full, but has 340GB free

2006-12-15 Thread Nick Jones

Your explanation is correct.  Here is the log which I should have looked at
and included in the original.

12-Dec 23:19 lcn-dir: Start Backup JobId 6, Job=
BackupCatalog.2006-12-12_23.10.00
12-Dec 23:19 lcn-sd: BackupCatalog.2006-12-12_23.10.00 Error: block.c:538
Write error at 53:1 on device "Ultrium" (/dev/tape). ERR=Input/output error.
12-Dec 23:19 lcn-sd: BackupCatalog.2006-12-12_23.10.00 Error: Error writing
final EOF to tape. This Volume may not be readable.
dev.c:1542 ioctl MTWEOF error on "Ultrium" (/dev/tape). ERR=Input/output
error.
12-Dec 23:19 lcn-sd: End of medium on Volume "tape7" Bytes=52,398,230,131
Blocks=812,226 at 12-Dec-2006 23:19.
12-Dec 23:24 lcn-sd: Invalid slot=0 defined, cannot autoload Volume.

This happened after a full backup, then the backupCatalog job was running
and ran several times successfully, then this happened.  Does this in fact
indicate a physical error on the tape?  If so, I guess I just have to try to
get a refund or warranty on the tape which is disappointing.  Ah well it is
not a big deal.

Do you know, is it possible to purge the tape, then run an incremental
backup where bacula knows (from the purge) that these files are no longer in
the catalog, and so it backs them up?

I don't care much about leaving the tape in the pool as a 60GB volume, but I
do worry that I won't be able to recover data on this tape if it does in
fact have physical errors, and would like to get rid of it.  Also I want to
avoid recreating the whole backup which takes awhile (24hr) with our
fileset.
FD Files Written:  45,422,838
FD Bytes Written:   2,059,990,875,973 (2.059 TB)

Can someone let me know if this is possible?

Thanks

Nick




On 12/15/06, James Cort <[EMAIL PROTECTED]> wrote:
> Nick Jones wrote:
> > Does anyone know why or how my tape7 could have been marked full even
> > though it is the same capacity as the other tapes and is not full
> > according to bacula.  Now it is appending to tape4.  Also, it is hard
> > to read but it says 0 for 'in changer' for tapes 1,2, and 4 which are
> > all in the changer and were listed as such at one time.  All I did was
> > run a backup.  How did these values become changed?  How can I tell
> > bacula that tape7 is not full (it's pretty obvious it's not full
> > according to bacula's records).  Also is it a problem that the tapes
> > are no longer 'in changer' ?
> >
> > Thanks alot for help or suggestions
> I've had exactly that happen when the tape media reports an error.  I
> can't remember exactly where it's written, but IIRC the documentation
> says something to the effect that it's not always possible to determine
> the difference between a "tape full" message and an error on the tape.





--
Nick Jones
University of Iowa
Dept of Neurology
Systems Analyst
319-356-0451


[Bacula-users] tape marked full, but has 340GB free

2006-12-14 Thread Nick Jones

Does anyone know why or how my tape7 could have been marked full even though
it is the same capacity as the other tapes and is not full according to
bacula.  Now it is appending to tape4.  Also, it is hard to read but it says
0 for 'in changer' for tapes 1,2, and 4 which are all in the changer and
were listed as such at one time.  All I did was run a backup.  How did these
values become changed?  How can I tell bacula that tape7 is not full (it's
pretty obvious it's not full according to bacula's records).  Also is it a
problem that the tapes are no longer 'in changer' ?

Thanks a lot for help or suggestions

*list media
Pool: Default
+---------+------------+-----------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
| MediaId | VolumeName | VolStatus | VolBytes        | VolFiles | VolRetention | Recycle | Slot | InChanger | MediaType | LastWritten         |
+---------+------------+-----------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+
|       4 | tape4      | Append    |               1 |        0 |   15,552,000 |       1 |    4 |         0 | LTO       | 0000-00-00 00:00:00 |
|       5 | tape1      | Full      | 405,824,702,601 |      406 |   15,552,000 |       1 |    1 |         0 | LTO       | 2006-12-10 22:32:09 |
|       6 | tape2      | Full      | 406,506,404,991 |      407 |   15,552,000 |       1 |    2 |         0 | LTO       | 2006-12-11 03:39:38 |
|       7 | tape3      | Full      | 406,532,124,662 |      407 |   15,552,000 |       1 |    3 |         1 | LTO       | 2006-12-11 10:47:27 |
|       8 | tape5      | Full      | 406,481,624,151 |      407 |   15,552,000 |       1 |    5 |         1 | LTO       | 2006-12-11 16:56:49 |
|       9 | tape6      | Full      | 405,655,251,975 |      406 |   15,552,000 |       1 |    6 |         1 | LTO       | 2006-12-11 20:27:11 |
|      12 | tape7      | Full      |  52,398,230,131 |       53 |   15,552,000 |       1 |    7 |         1 | LTO       | 2006-12-12 23:19:17 |
+---------+------------+-----------+-----------------+----------+--------------+---------+------+-----------+-----------+---------------------+


Nick


--
Nick Jones
University of Iowa
Dept of Neurology
Systems Analyst
319-356-0451