On 18.02.2009, at 22:08, Arno Lehmann wrote:
>> Is there any way to tell bacula it should backup all new files (new
>> meaning "not already backed up") within this directory, regardless of
>> the timestamp, without doing a full backup?
>>
> Solution one: Wait for 3.0, and / or start testing the cu
On 19.02.2009, at 14:50, Jari Fredriksson wrote:
>> Is there any way to tell bacula it should backup all new
>> files (new meaning "not already backed up") within this
>> directory, regardless of the timestamp, without doing a
>> full backup?
> Exclude the /archive from your normal backup job file
For special data we have a backup scheme that does not really fit
bacula's idea of incremental backups:
There is a directory, say /archive, that is empty by default. If
something needs to be backed up by bacula, it is copied (or moved)
into this directory. Then the backup job is started and
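A minimal shell sketch of this staging scheme (the paths, the job name, and the use of bconsole are my assumptions, not part of the original setup):

```shell
# stage_for_backup: copy data into the normally empty staging directory
# and emit the console command that would start the backup job.
# In real use the output would be piped into bconsole (assumed).
stage_for_backup() {
    src="$1"; archive="$2"; job="$3"
    mkdir -p "$archive"
    cp -a "$src" "$archive"/
    echo "run job=$job yes"
}
# Usage (hypothetical names):
#   stage_for_backup /data/results /archive archive-job | bconsole
```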
On 18.07.2006, at 18:47, Alan Brown wrote:
> On Tue, 18 Jul 2006, Sebastian Stark wrote:
>
>
>> I also noticed problems with this kind of setup but almost never
>> got an
>> answer when asking questions about multiple drive autochanger
>> issues...
>>
On 28.06.2006, at 15:55, Julien Cigar wrote:
> Yep it's turned on, but I have this messages every time I do a
> *status dir, and nothing is pruned
Have you checked your retention periods? Maybe this volume is just
not "old enough".
If you have automatic pruning turned on this is expected behaviour I
would say.
Sebastian
On 28.06.2006, at 15:47, Julien Cigar wrote:
> Hi !
>
> I'm using 1.38.9 (Debian) with PostgreSQL (8.1.4)
>
> Am I the only one to have this kind of message : 28-Jun 12:39
> phoenix-dir: Pruning oldest
Is there a way to restrict the "list jobs" to show the jobs of a
specific client only?
-Sebastian
On 25.06.2006, at 15:55, Kern Sibbald wrote:
> Hello,
>
> I have released a patch (1.38.10-scheduler.patch) to the patches
> area of the
> Bacula Source Forge releases. I *strongly* recommend that everyone
> using
> Bacula version 1.38.10 apply this patch. It applies only to the
> Directo
On 25.06.2006, at 17:55, Cristobal Sabroe Yde wrote:
> Hi, I've been using bacula 1.38.8 on a SuSE Linux 10.0 x86_64
> without any
> problem, but I chose to switch to 1.38.10 and now every time the
> director is
> idle for several hours (i.e. waiting for backups the next day), it
> stops to
I want to run a restore job; only one job is running at the moment
but there are two other drives that could be used. The restore job
you see in the list was run with priority 1, yet it is still waiting
for higher priority jobs to finish. Why is that?
Thanks,
Sebastian
Running Jobs:
JobId Leve
On 18.06.2006, at 13:04, Tracy R Reed wrote:
> Tracy R Reed wrote:
>> Device:  tps     Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
>> hda      0.00    0.00        0.00        0         0
>> sda      969.39  146.94      13069.39    144
>>
On 16.06.2006, at 21:02, Christoff Buch wrote:
>
> Hi!
>
> So far I thought about turning off spooling, which sped up
> throughput from ca. 5MB/s to 7MB/s.
> But the old backup software did ca. 15MB/s.
> So I'm still at only about 50%.
How did you measure? The throughput value supplied by
On 02.06.2006, at 10:14, Arno Lehmann wrote:
>
> I'd even like commands to move tapes...
>
Couldn't this be done with an infrastructure that is similar to sql
queries? I can imagine a generic "changer" command that looks into a
file just like query.sql and lists available commands like "move
At the moment all my tapes are LTO-2 and I want to switch to LTO-3
tapes. There are three LTO-3 and one LTO-2 drive in my library. The
idea is to have a pool "old" that has its own autochanger resource
with only the LTO-2 drive and then some other pools (with only LTO-3
tapes in them) that
I am sorry for the noise. The problem was that, for a reason I still
do not know, the SCSI IDs (and therefore the device names) of the two
drives were changed. This, of course, is very hard for bacula to
handle...
-Sebastian
On 13.05.2006, at 10:11, Sebastian Stark wrote:
How can I resolve this situation? Volume 81 is currently loaded
but it is not acceptable. Bacula somehow wants volume 84, which
is also not acceptable for some reason...
*m
13-May 10:07 yangtse-sd: Backup_yangtse-system.2006-05-13_01.05.00
Warning: Director wanted Volume "84" f
On 11.05.2006, at 17:46, Waldock, Brian wrote:
Before I go any further in building this environment, I wanted to
see if anyone has any experience with the following configuration
or similar.
Sun e480 running Solaris 10 OS
Sun storedge L180 fiber attached autochanger with 6 LTO2 fibre
On 04.04.2006, at 12:10, Alan Brown wrote:
On Tue, 4 Apr 2006 [EMAIL PROTECTED] wrote:
Again: how can I tell the SD to unblock a device during runtime?
What seems to work for me is to manually load a tape using MTX and
then trying "mount" again.
For me doing this sometimes leads to a v
On 31.03.2006, at 14:02, Turbo Fredriksson wrote:
Does bacula (1.36.3) backup the Windows (2k etc) ACL's?
Last time I tested: yes.
This via Samba.
Don't know what you mean.
Can you post the job and fileset resources regarding this?
On 19.03.2006, at 03:14, Carles Bou wrote:
I'm running bacula 1.38.5 on gentoo with postgresql as DB backend.
I don't know why, but bacula is always copying all files in the set.
I've looked at stat output and files, no change betwe
On 13.03.2006, at 16:10, Attila Fülöp wrote:
I had a similar problem. I fixed it by running "status dir"
a couple of times in an admin job which runs before all other jobs.
Each call of "status dir" triggers one step of the recycling
algorithm.
This sounds like a broken concept to me. The "st
Can anybody confirm "update slots" is working (regarding ALL drives)
correctly for them in a multidrive autochanger environment? If yes,
could you share your mtx-changer script and bacula-*.conf please?
Thanks,
Sebastian
On 08.03.2006, at 14:45, Sebastian Stark wrote:
On
On 08.03.2006, at 13:49, Dan Langille wrote:
On 8 Mar 2006 at 12:53, Sebastian Stark wrote:
If I use the update slots command in bconsole it asks for a drive
number:
*update slots
Automatically selected Storage: neo4000
Enter autochanger drive[0]:
Regardless of what I type here, it always
On 06.03.2006, at 14:59, Dwayne Hottinger wrote:
Thanks,
I know that's the problem. But I don't see a dist of 1.38 for
OS X 10.3.x.
Is there a build for the -fd on OS X 10.3? Or how do I build for
OS X 10.3?
Have you tried opendarwin ports? Works well for me on 10.4, one would
If I use the update slots command in bconsole it asks for a drive
number:
*update slots
Automatically selected Storage: neo4000
Enter autochanger drive[0]:
Regardless of what I type here, it always tries to unload drive0. How
do I get it to unload drive1 as well?
As I read in the archives "u
On 03.03.2006, at 01:00, Erik P. Olsen wrote:
Sebastian Stark wrote:
On 01.03.2006, at 11:09, Erik P. Olsen wrote:
What happens when you enter 'use bacula'?
mysql> use bacula;
Reading table information for completion of table and column names
You can turn off this feature to
On 01.03.2006, at 12:56, Rudolf Cejka wrote:
Sebastian Stark wrote (2006/03/01):
I never get more than ~28MB/s backing up to an HP Ultrium-3 drive. I
Momentary speed or overall speed including data spooling, tape write
and database update? For momentary it is slow, for overall it is
On 01.03.2006, at 11:09, Erik P. Olsen wrote:
What happens when you enter 'use bacula'?
mysql> use bacula;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
Apparently I can open and use bacula data
What tape throughput are people seeing under Solaris (10)?
I never get more than ~28MB/s backing up to an HP Ultrium-3 drive. I
configured spooling to a locally attached raid that allows for much
faster throughput, bonnie++ says ~160MB/s when reading block-wise.
Could this be a problem with
In the documentation for the "prefer mounted volumes" parameter it says:
"you will probably want to start each of your jobs one after
another with approximately 5 second intervals"
But how would I implement this? Do I really have to set up a distinct
schedule with an exact time table for
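One way to implement the staggered starts without a minute-by-minute schedule would be a small wrapper that feeds run commands to bconsole with a pause in between. A sketch (the job names and the use of bconsole are assumptions):

```shell
# stagger_starts: emit one "run" command per job, sleeping ~5 seconds
# between them; bconsole reads the commands as they arrive, so the
# jobs are submitted roughly 5 seconds apart.
stagger_starts() {
    for job in "$@"; do
        echo "run job=$job yes"
        sleep 5
    done
}
# Usage (hypothetical job names):
#   stagger_starts nightly-web nightly-db nightly-mail | bconsole
```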
On 14.02.2006, at 19:12, Arno Lehmann wrote:
Mysql is using ~100% CPU, so this seems to be the bottleneck in
this case. An index on the Name column does exist. The Filename
table is not extremely big, roughly ~500MB with ~1 million rows.
I'm getting backup rates at ~15M/s but I would e
I noticed that when doing a backup, mysql seems to spend most of
its time doing SELECTs:
yangtse ~ % /usr/local/mysql/bin/mysqladmin -u root processlist
On 13.02.2006, at 13:52, Dan Langille wrote:
On 13 Feb 2006 at 13:10, Sebastian Stark wrote:
On 13.02.2006, at 12:24, Sebastian Stark wrote:
On 13.02.2006, at 11:56, Michel Meyers wrote:
Sebastian Stark wrote:
After upgrading from 1.36.2 to 1.38.3 everything worked perfectly
but one job
On 13.02.2006, at 12:24, Sebastian Stark wrote:
On 13.02.2006, at 11:56, Michel Meyers wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Sebastian Stark wrote:
After upgrading from 1.36.2 to 1.38.3 everything worked perfectly
but one job refuses to back up anything. An upgrade to
On 13.02.2006, at 11:56, Michel Meyers wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Sebastian Stark wrote:
After upgrading from 1.36.2 to 1.38.3 everything worked perfectly
but one job refuses to back up anything. An upgrade to 1.38.5 did not
fix this.
The FileSet definition did
After upgrading from 1.36.2 to 1.38.3 everything worked perfectly but
one job refuses to back up anything. An upgrade to 1.38.5 did not fix
this.
The FileSet definition did not change and there are definitely files
that need to be backed up (new files, changed files..).
The job and fileset
On 03.02.2006, at 21:56, Erik Dykema wrote:
Hi All-
I'm looking to upgrade to a new tape library, and am looking
at the "Overland Neo 2000" with Ultrium 3 drives. I saw on the
bacula web page that it's listed as 'supported', but with LTO-1
drives.
Is anyone out there using Bacu
inful. No changes to configuration files necesssary, nothing new to
learn, etc.
Like: "don't try to solve a problem by solving a harder one" :)
-Sebastian
Ryan
Sebastian Stark wrote:
This worked for me when I hit the limit:
http://jeremy.zawodny.com/blog/archives/000
This worked for me when I hit the limit:
http://jeremy.zawodny.com/blog/archives/000796.html
Be sure to have a recent dump in case anything goes wrong.
-Sebastian
On Feb 1, 2006, at 3:37 PM, Roger Kvam wrote:
27-Jan 00:45 alexandria-dir: LogicBackupAvr32.2006-01-27_00.05.10
Fatal error:
On 31.01.2006, at 08:43, Natxo Asenjo wrote:
30-Jan 20:00 vpn-sd: NightlySave.2006-01-30_20.00.00 Fatal error:
dev.c:387 dev.c:381 Unable to open device "Tape" (/dev/nst0):
ERR=Read-only file system
Is the operating system disk okay?
Bacula can not open the device node for writing, so I
On 28.01.2006, at 17:58, Drew Tomlinson wrote:
I tried 'less' but it didn't work for me as it still wrapped. Is
there something else?
Type "-S" within less. Or do
export LESS=-S
(or setenv LESS -S for tcsh)
in your shell startup file.
-
On 27.01.2006, at 09:51, Daniel Amkreutz wrote:
Now, because time is slipping away I'm about to
cancel any further use of bacula and use homebrew scripts to
automate the
backup.
I'm not sure who should be more afraid of this statement: the bacula
developers or your users?
3. How about ar
We have one autochanger with two drives. We have configured both
drives within an autochanger resource (both with "autoselect=yes") in
bacula-sd.conf.
We want to use both drives simultaneously but no job interleaving.
That is, we want to make sure that only one job writes to one volume
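For reference, a bacula-sd.conf sketch of such a setup; only the two Device resources inside one Autochanger resource with autoselect=yes come from our configuration, while the resource names and device paths here are placeholders:

```
Autochanger {
  Name = "Changer"
  Device = Drive-0, Drive-1
  Changer Device = /dev/sg0
  Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
}
Device {
  Name = Drive-0
  Drive Index = 0
  Archive Device = /dev/nst0
  Autochanger = yes
  AutoSelect = yes
}
Device {
  Name = Drive-1
  Drive Index = 1
  Archive Device = /dev/nst1
  Autochanger = yes
  AutoSelect = yes
}
```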
On Friday 02 December 2005 18:12, Harondel J. Sibble wrote:
> On 1 Dec 2005 at 9:29, Sebastian Stark wrote:
> > You have to use the "update" command to first update the pool parameters
> > from the config file and then use "update" again to update the
again and now I'm not so sure anymore.
--
Sebastian Stark -- http://www.kyb.tuebingen.mpg.de/~stark
Max Planck Institute for Biological Cybernetics
On Wed, Nov 30, 2005 at 10:02:59PM -0800, Harondel J. Sibble wrote:
>
>
> On 30 Nov 2005 at 19:20, Sebastian Stark wrote:
>
> > >From your bacula-dir.conf it looks like your job retention is 30 days.
> > As I understand bacula it won't (automatically) prune volu
keyid 0x3AD5C11D) http://www.pdscc.com
> (604) 739-3709 (voice/fax) (604) 686-2253 (pager)
On Tuesday 25 October 2005 15:38, Sebastian Stark wrote:
> On Tuesday 25 October 2005 15:08, Volker Sauer wrote:
> > > Well, something is wrong here, you should have an index on JobId as
>
> follows:
> > Yes, and that's the solution! The missing index on JobId mak
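For anyone hitting the same problem, the fix discussed here could be generated and applied roughly like this (database and index names are assumptions; take a catalog dump first):

```shell
# add_jobid_index: print the DDL that creates the missing index on
# File.JobId; pipe it into the mysql client against the catalog DB.
add_jobid_index() {
    printf 'CREATE INDEX %s ON File (JobId);\n' "${1:-JobIdIndex}"
}
# Usage (hypothetical database name):
#   add_jobid_index | mysql bacula
```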
g index? Can it be done while the server is running?
-Sebastian
exactly happened.
> Both SD and DIR are on non-SMP machines.
>
> Any idea what is causing that?
s with people by sending standard 3.5" drives, e. g. from Germany
to Australia and back and had no problems so far. Luck?
-Sebastian
olume=..." an expensive database operation?
Our catalog has 3 or 4 gigs.
Just wanted to ask if I should wait or look for any problems.
-Sebastian
: undefined reference to
> `_Unwind_SjLj_RaiseException'
> /usr/lib/libstdc++.so.38.0: undefined reference to `_Unwind_SjLj_Resume'
> /usr/lib/libstdc++.so.38.0: undefined reference to
> `_Unwind_SjLj_Resume_or_Rethrow'
> collect2: ld returned 1 exit status
> *** Erro
en.mpg.de/stark/admin/scripts/trigger_bacula.sh
-Sebastian
On Thursday 07 July 2005 16:17, Phil Stracchino wrote:
> On Thu, Jul 07, 2005 at 01:55:20PM +0200, Sebastian Stark wrote:
> > I want to write a script that laptop users can run to trigger a backup of
> > their laptop. Sch
ething. My first approach would be to have something
with the suid-bit set that pipes the command "run client=myclient
level=incremental".
Has someone ever done a script like this and already knows the pitfalls? Are
there better ways?
Thanks,
Sebastian
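A sketch of what such a trigger wrapper could emit (client name and level are assumptions; the setuid handling discussed above is deliberately left out here):

```shell
# build_trigger: compose the console command the wrapper would pipe
# into bconsole to start an incremental backup of the user's laptop.
build_trigger() {
    printf 'run client=%s level=%s yes\n' "$1" "${2:-Incremental}"
}
# Usage (hypothetical client name):
#   build_trigger myclient-fd | bconsole
```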
t
do something like a checksum over the config file? Would it think I changed
the fileset just because I change some whitespace in the fileset definition
or put it into another file?
-Sebastian
On Wednesday 22 June 2005 14:22, Dominic Marks wrote:
> On Tuesday 21 June 2005 13:25, Sebastian Sta
tes.
>
> No memory tree is built for the estimate command only for the restore
> command. The two commands cannot be compared in any way.
I know, I showed the output of the estimate command just to tell that I have
38800 files with ~2G. "estimate" runs pretty fast, but when I run &q
Will splitting up bacula-dir.conf into several files lead to bacula seeing new
FileSets and upgrading to Full backups next run? If I don't change anything
else of course.
-Sebastian
On Tuesday 21 June 2005 12:13, Kern Sibbald wrote:
> On Tuesday 21 June 2005 10:31, Sebastian Stark wrote:
> > Is there a way to speed up the creation of the directory tree when
> > restoring files? For some clients this takes more than an hour for us.
> >
> > Our My
p the
catalog? Maybe play around with indexes?
Thanks,
Sebastian
--
Sebastian Stark -- http://www.kyb.tuebingen.mpg.de/~stark
Max Planck Institute for Biological Cybernetics
Spemannstr. 38, 72076 Tuebingen
Phone: +49 7071 601 555 -- Fax: +49 7071 60
On Friday 10 June 2005 12:55, Alan Brown wrote:
> if test xsqlite = xmysql ; then
How do you expect this to be of any use? :)
very silly reasons.
But you're right, a UPS is needed. :)
-Sebastian
help me in that case?
Anyway: Thank you _very_ much for your help. And it's good to know that bacula
survives this kind of disaster.
-Sebastian
simply modify the virtual tape
with a hex editor and bcopy it back to a clean tape.
-Sebastian
Update:
On Wednesday 08 June 2005 11:23, Sebastian Stark wrote:
> btape scanblocks is able to read the tape up to file ??? (is still
> running).
After a few hours I get:
Jun 8 14:01:08 yangtse SCSI transport failed: reason 'timeout':
giving up
Jun 8 14:01:08 yangtse
.
In case it's just a missing mark on the stream how could I recover from
this error and save (at least most of) the data (~250GB) on the tape?
Hardware:
HP Ultrium-2 drive, LTO-2 Tape
Any hints appreciated.
-Sebastian
/usr/local/mysql/var/bacula
do some test i would be
> pleased to send it to you...
I am!
-Sebastian
se than
> via email.
It's here: http://bugs.bacula.org/bug_view_advanced_page.php?bug_id=0000265
s of
> > phase change errors in the messages. After replacing the controller with
> > a LVD one (29160) it worked instantly.
> >
> > Did you double-check your block sizes? Variable, or fixed?
> >
> >
> > Bye
> > Roland
> >
Volume="15"
VolSessionId=28
VolSessionTime=1110492679
VolFile=447
VolBlock=0-8026
FileIndex=115-170
Count=56
Volume="18"
VolSessionId=38
VolSessionTime=1110492679
VolFile=8
VolBlock=0-122
FileIndex=17-23
Count=7
Volume="18"
VolSessionId=46
VolSessionTime=1110492679