Hi all,
I thought I had this nailed, but bacula went silent on me with no runs
Monday or Tuesday nights.
Tonight I received this cryptic email:
02-Aug 02:05 devel2-sd: Job devel2.2007-08-02_02.05.00 waiting to reserve
a device.
Tape is mounted, it's from my Daily pool for incrementals. No ot
Kern Sibbald wrote:
> I would appreciate if beta testers would retest the current SVN. Thanks.
Other than weird-files2 (known glitch due to cp deficiency) it passes
everything fine on Mac OS 10.4. Also passes everything on my fedora 7 test
system.
--
Frank Sweetser fs at wpi.edu | For every
On 1 Aug 2007 at 13:42, Jason King wrote:
> I've had to make some adjustments during my trials of bacula and my
> MediaID numbering sequence is way off. My current LAST tape is labeled
> "Tape-10" with a MediaID of 10. If I put in a fresh new tape and label
> it "Tape-11", the MediaID would pro
I've had to make some adjustments during my trials of bacula and my
MediaID numbering sequence is way off. My current LAST tape is labeled
"Tape-10" with a MediaID of 10. If I put in a fresh new tape and label
it "Tape-11", the MediaID would probably be like 41 or 42 or so because
of the deleti
Thank you everyone.
++
| Megan Kispert
| Code: 423
| GSFC: 301-614-5410
| ADNET: 301.352.4632
| [EMAIL PROTECTED]
++
On Wed, 1 Aug 2007, Rich wrote:
> On 2007.08.01. 17:10, Robert LeBlanc wrote:
>> Not without losing all
You also need a Maximum Concurrent Jobs directive in the Storage sections, and
probably the Client sections, of bacula-dir.conf.
John
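As a rough sketch of John's suggestion, concurrency must be allowed in each resource involved. The directive name is from stock bacula-dir.conf; the resource names and the value 10 are illustrative assumptions, not taken from anyone's posted config:

```
# bacula-dir.conf -- illustrative sketch, adjust names/values to your setup
Director {
  Name = bacula-dir
  ...
  Maximum Concurrent Jobs = 10
}

Storage {
  Name = File
  ...
  Maximum Concurrent Jobs = 10   # without this, jobs queue on the storage daemon
}

Client {
  Name = client1-fd
  ...
  Maximum Concurrent Jobs = 10   # needed if several jobs hit the same client
}
```

The effective limit is the smallest value among the resources a job touches, so raising it in the Director alone is not enough.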
Hi!
I've been trying to run many jobs simultaneously, but they do not run.
My Bacula config files are attached.
Is it possible to run many jobs simultaneously?
Thanks.
#
Director {
  Name = bacula-dir
  DIRport = 9101
  QueryFile = "/usr/local/share/bacula/query.sql"
  WorkingD
For those interested:
As I did not get any response on the mail below I reported the
issue now in the Bacula Bug Tracker, it got the ID 912.
cheers
sascha
Sascha Wilde <[EMAIL PROTECTED]> wrote:
> I'm using bacula 2.0.3 with DVD-Rs as backup media.
> Now I have the problem, that I need to res
On 2007.08.01. 16:16, Drew Bentley wrote:
...
>> if you really need incremental backups, i don't know of any other method
>> than using binlogs
>> --
>> Rich
>>
> Yes, in order to do incrementals, you'll need binlogs enabled and
> you'll need to back these up. But also consider they take or can t
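Enabling the binary log for this is a my.cnf change; the log path and retention below are illustrative assumptions, not anyone's posted settings:

```
# /etc/my.cnf -- enable binary logging so binlogs can serve as MySQL "incrementals"
[mysqld]
log-bin          = /var/log/mysql/mysql-bin
expire_logs_days = 14   # keep enough binlog history to span your Full backup cycle
```

The binlog files under /var/log/mysql/ can then be included in the Bacula FileSet alongside the periodic full dump.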
On 2007.08.01. 17:22, [EMAIL PROTECTED] wrote:
> I have an ongoing problem, when my backup jobs fill my hard drive to 100%.
> Many times I have gone in and pruned jobs, and files, but the volume size
> stays the same, and will not run any new jobs. Is there a fix for this?
this question is being a
On 2007.08.01. 17:10, Robert LeBlanc wrote:
> Not without losing all your back-ups. With disk, it is best to set a
> volume limit so that it will create multiple back-up files. These will
> look like tapes and bacula will be able to prune and recycle these,
> freeing up disk space the size of the b
Not without losing all your back-ups. With disk, it is best to set a
volume limit so that it will create multiple back-up files. These will
look like tapes and bacula will be able to prune and recycle these,
freeing up disk space the size of the back-up file. You may be able to
use bcopy to extract
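Robert's volume-limit approach can be sketched as a Pool resource in bacula-dir.conf; the size, retention, and names here are illustrative assumptions:

```
# bacula-dir.conf -- disk volumes capped in size so they recycle like tapes
Pool {
  Name = FilePool
  Pool Type = Backup
  Maximum Volume Bytes = 5G     # cap each disk "tape"; pruning a volume frees this much
  Volume Retention = 14 days
  AutoPrune = yes
  Recycle = yes
  Label Format = "FileVol-"     # auto-label disk volumes
}
```

With many small volumes instead of one huge one, pruning a volume actually returns its disk space once it is recycled or truncated.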
I have an ongoing problem, when my backup jobs fill my hard drive to 100%.
Many times I have gone in and pruned jobs, and files, but the volume size
stays the same, and will not run any new jobs. Is there a fix for this?
Morning,
I'm running bacula-2.1.26 on a centos 4.5 server. I have my backups going
to disk. One of my disks ran out of space due to a failure on my part to
exclude a directory that shouldn't have been backed up. I have two
volumes on this disk. I tried to delete jobs for this particular
p
For Solaris, I recommend using all of the binaries from BlastWave. They
are the most current version, and they include Solaris 10 SMF repository
scripts.
Francisco Rodrigo Cortinas Maseda wrote:
> I use the following:
>
> ./configure --enable-smarta
I can telnet to all the bacula ports from the "sqlengine" machine and I
ran a tcpdump to monitor the packets from the "sqlengine" machine to the
bacula server and all is OK.
I don't think it's the Firewall anymore, I realized with a shock that
the SQL backup is over 30GB and I didn't configure my
On 8/1/07, Rich <[EMAIL PROTECTED]> wrote:
> On 2007.08.01. 15:33, Dimitrios wrote:
> > On Wed, 01 Aug 2007 15:20:50 +0300 Rich <[EMAIL PROTECTED]> wrote:
> >
> >> i still prefer dumps, they are more portable, easier to restore and
> >> compress better.
> >
> > but what about incremental backups? a
On 8/1/07, James Cort <[EMAIL PROTECTED]> wrote:
> John Drescher wrote:
> >> Instead, however, I get this:
> >>
> >> - and it'll sit there indefinitely waiting for me to mount a volume
> >> which it's created in the catalog but not labelled.
> >>
> > Type mount from the console, the actual labeling
On 8/1/07, Janco van der Merwe <[EMAIL PROTECTED]> wrote:
> Hi Guys,
>
> I have the following:
>
> 30-Jul 02:01 subver-dir: Start Backup JobId 57,
> Job=sqlengine.2007-07-30_02.00.01
> 30-Jul 02:01 subver-sd: Volume "Daily_Backup-0002" previously written,
> moving to end of data.
> 30-Jul 02:08 sql
On 2007.08.01. 15:33, Dimitrios wrote:
> On Wed, 01 Aug 2007 15:20:50 +0300 Rich <[EMAIL PROTECTED]> wrote:
>
>> i still prefer dumps, they are more portable, easier to restore and
>> compress better.
>
> but what about incremental backups? as far as i know, you'd have to backup
> the entire da
On 2007.08.01. 15:31, Dimitrios wrote:
> On Wed, 01 Aug 2007 14:51:43 +0300 Rich <[EMAIL PROTECTED]> wrote:
...
>> other method might be feeding mysqldump output to pipe, where data is
>> directly picked up by bacula client (i think, this was implemented
>> somewhere around 2.0 bacula).
>> has th
On Wed, 01 Aug 2007 15:20:50 +0300 Rich <[EMAIL PROTECTED]> wrote:
> i still prefer dumps, they are more portable, easier to restore and
> compress better.
but what about incremental backups? as far as i know, you'd have to backup the
entire database over and over again and for a very large dat
On Wed, 01 Aug 2007 14:51:43 +0300 Rich <[EMAIL PROTECTED]> wrote:
> using innodb, it should be possible to dump all tables as a single
> transaction.
indeed, unfortunately our MySQL has a mix of innodb and myisam tables.
> other method might be feeding mysqldump output to pipe, where data is
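The pipe method Rich mentions can be sketched with Bacula's readfifo FileSet option; the FIFO path, resource names, and the RunBeforeJob command line are assumptions for illustration, not anyone's posted config:

```
# bacula-dir.conf -- back up mysqldump output via a named pipe (no dump file on disk)
FileSet {
  Name = "MySQLDump"
  Include {
    Options {
      signature = MD5
      readfifo = yes            # read data from the FIFO instead of saving the node
    }
    File = /var/backups/mysql.fifo
  }
}

Job {
  Name = "MySQLBackup"
  FileSet = "MySQLDump"
  ...
  # create the FIFO once with mkfifo; mysqldump blocks until the FD starts reading
  RunBeforeJob = "/bin/sh -c 'mysqldump --single-transaction --all-databases > /var/backups/mysql.fifo &'"
}
```

Note --single-transaction only gives a consistent snapshot for InnoDB tables; with a MyISAM mix, as in Dimitrios's case, locking is still needed.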
On 2007.08.01. 14:51, Rich wrote:
> On 2007.08.01. 14:37, Dimitrios wrote:
>> Based on my research, i found out that backing up MySQL is quite
>> un-efficient.
>>
>> On a small database, you can just dump the contents of MySQL into a file and
>> then backup that file.
>>
>> On a large database (s
The way the backup should work is that once the backup begins, all
database operations should be written to the 'log' file so that the main
database file is consistent. Once that backup is complete, all logs can
be written back to the database file. There is some performance penalty
obviously, but
On 2007.08.01. 14:37, Dimitrios wrote:
> Based on my research, i found out that backing up MySQL is quite un-efficient.
>
> On a small database, you can just dump the contents of MySQL into a file and
> then backup that file.
>
> On a large database (several Gigs) dumping to a file should be avo
Based on my research, I found that backing up MySQL is quite inefficient.
On a small database, you can just dump the contents of MySQL into a file and
then backup that file.
On a large database (several Gigs) dumping to a file should be avoided, for
example, on a hosting service the dump c
Hi Guys,
I have the following:
30-Jul 02:01 subver-dir: Start Backup JobId 57,
Job=sqlengine.2007-07-30_02.00.01
30-Jul 02:01 subver-sd: Volume "Daily_Backup-0002" previously written,
moving to end of data.
30-Jul 02:08 sqlengine-fd: Generate VSS snapshots. Driver="VSS Win
2003", Drive(s)="C"
30-
John Drescher wrote:
>> Instead, however, I get this:
>>
>> - and it'll sit there indefinitely waiting for me to mount a volume
>> which it's created in the catalog but not labelled.
>>
> Type mount from the console, the actual labeling will happen after the
> drive mounts and it detects it ha
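A minimal bconsole session for John's suggestion might look like this; the storage and volume names are made up for illustration:

```
*mount storage=File
*label volume=Vol-0001 pool=Default
*list volumes
```

After the mount, the waiting job picks up the newly labeled volume on its own.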
I use the following:
./configure --enable-smartalloc --sbindir=/opt/bacula/bin
--sysconfdir=/opt/bacula/bin --with-pid-dir=/opt/bacula/bin/working
--with-subsys-dir=/opt/bacula/bin/working
--with-working-dir=/opt/bacula/working --enable-largefile --enable-client-only
--enable-static-fd --with