[Bacula-users] Bat cannot restore: full backup not found

2007-10-03 Thread Silver Salonen
Hello.

I've installed Bacula-bat 2.2.4 on FreeBSD 6.2 from ports. Bacula-server (dir 
and sd) is 2.2.4 too.

I can successfully connect Bat to dir and browse Jobs, Media etc. What I can't 
do is restore anything or browse files (aka Browse Cataloged Files). When I click 
on Version Browser, Bat just stays blank, as if it weren't doing anything.

If I click on restore, select a job and click OK, Bat says: "No full backup 
before 2007-10-03 09:30:43 found." This is the case for every job and every 
date. After cancelling the restore, Bat disconnects.

Restoring from wx-console is OK though. What could be wrong?

-- 
Silver



Re: [Bacula-users] backward compatibility

2007-10-03 Thread Rich
On 2007.10.02. 18:57, Marek Simon wrote:
 Hi,
 I have director 1.38.11 from the stable Debian release. I tried to get a 
 Windows client of the same version and searched for it for a long time, but I 
 did not succeed. I tried a new Windows client (2.2.4), but it does not 
 work (as said in the manual). Can I dig up a winbacula-1.38.11 installation 
 program somewhere in the world (or in software heaven or software hell)? 
 Or do I need to upgrade all clients, storages and the director to 2.x.x?

hmm. i am using a 1.36.1 director/sd and 2.2.something agents. works fine, 
also restoration ;)

 Thanks.
 Marek
...
-- 
  Rich



Re: [Bacula-users] Compiling 2.2.4 on Solaris

2007-10-03 Thread Weber, Philip
I think I have resolved this ... I haven't tried compiling the director
yet nor tested anything, but the configure/compile of the SD etc. shown
below appears to work.

I replaced our MySQL 5.0.41 package from SunFreeware with the MySQL 5.0.45
Solaris (32-bit) pkgadd package from dev.mysql.com.  It now compiles OK
with --enable-batch-insert.  So I guess, for me at least, a thread-safe
version of MySQL is required regardless of whether this option is used.
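
(A quick sanity check along those lines - a sketch; the library path assumes
the dev.mysql.com pkgadd layout used above:)

# does the package ship the thread-safe (reentrant) client library?
ls /usr/local/mysql/lib/mysql/libmysqlclient_r.*
# and does it provide the symbol the linker was missing?
nm /usr/local/mysql/lib/mysql/libmysqlclient_r.a | grep my_thread_end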

thanks, Phil

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf 
 Of Weber, Philip
 Sent: 02 October 2007 09:51
 To: bacula-users@lists.sourceforge.net
 Subject: Re: [Bacula-users] Compiling 2.2.4 on Solaris
 
 
 Hi,
 
 I am trying to compile the Storage Daemon for now but will be
 doing the Director as well.
 
 Solaris 9 Sparc (on a Sun v440).
 Bacula 2.2.4.
 SMCgcc 3.4.6 (package from SunFreeware).  Also libgcc 3.4.6.
 SMCmysql 5.0.41 (package from SunFreeware).
 
 CFLAGS=-g ./configure \
   --sbindir=/usr/local/bacula/bin \
   --sysconfdir=/usr/local/bacula/bin \
   --with-mysql=/usr/local/mysql \
   --enable-smartalloc \
   --with-pid-dir=/var/local/bacula/bin/working \
   --with-subsys-dir=/usr/local/bacula/bin/working \
   --with-working-dir=/var/local/bacula/working \
   --mandir=/usr/local/man \
   --with-sd-user=bacula \
   --with-sd-group=bacula \
   --with-python \
   --disable-build-dird \
   --with-openssl=/usr/local/ssl
 
 I've narrowed it down to (I think) only the parts that interact with MySQL
 failing, i.e. bacula-dir and bscan, e.g.:
 
 /usr/local/bin/g++ -O -L../lib -L../cats -L../findlib -o bscan bscan.o \
   block.o device.o dev.o label.o ansi_label.o dvd.o ebcdic.o lock.o \
   autochanger.o acquire.o mount.o record.o match_bsr.o parse_bsr.o butil.o \
   read_record.o scan.o reserve.o stored_conf.o spool.o wait.o \
   -lsql -lsec -lz -lfind -lbac -lm -lpthread -lgen -lresolv -lnsl -ldl \
   -lsocket -lxnet -lintl -lresolv -L/usr/local/ssl/lib -lssl -lcrypto
 
 Undefined   first referenced
  symbol in file
 mysql_fetch_row ../cats/libsql.a(mysql.o)
 mysql_fetch_field   ../cats/libsql.a(sql.o)
 mysql_data_seek ../cats/libsql.a(sql_get.o)
 mysql_query ../cats/libsql.a(mysql.o)
 mysql_error ../cats/libsql.a(mysql.o)
 mysql_close ../cats/libsql.a(mysql.o)
 mysql_insert_id ../cats/libsql.a(sql_create.o)
 mysql_free_result   ../cats/libsql.a(mysql.o)
 mysql_store_result  ../cats/libsql.a(sql.o)
 mysql_init  ../cats/libsql.a(mysql.o)
 mysql_affected_rows ../cats/libsql.a(sql.o)
 mysql_real_connect  ../cats/libsql.a(mysql.o)
 mysql_field_seek../cats/libsql.a(sql.o)
 mysql_num_rows  ../cats/libsql.a(sql.o)
 mysql_num_fields../cats/libsql.a(mysql.o)
 mysql_use_result../cats/libsql.a(mysql.o)
 mysql_escape_string ../cats/libsql.a(mysql.o)
 my_thread_end   ../cats/libsql.a(mysql.o)
 ld: fatal: Symbol referencing errors. No output written to bscan
 collect2: ld returned 1 exit status
 
 If I add -L/usr/local/mysql/lib/mysql -lmysqlclient to the g++ command
 I get a bit further:
 
 Undefined   first referenced
  symbol in file
 my_thread_end   ../cats/libsql.a(mysql.o)
 ld: fatal: Symbol referencing errors. No output written to bscan
 collect2: ld returned 1 exit status
 
 I can find my_thread_end in 'strings' output from :
 /usr/local/mysql/lib/mysql/libmyisam.a
 /usr/local/mysql/lib/mysql/libmysys.a
 
 So I suspect the problem is with the MySQL implementation I have from
 SunFreeware.  I don't believe it was compiled with the thread-safe
 option, as --enable-batch-insert wasn't picked up from the configure
 line (hence it was taken off the configure statement above).  So I think my
 next attempt will be to compile MySQL myself.
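 
 (A minimal sketch of that next step, assuming a MySQL 5.0 source tree;
 --enable-thread-safe-client builds the reentrant libmysqlclient_r, which
 is what provides my_thread_end:)
 
 ./configure --prefix=/usr/local/mysql --enable-thread-safe-client
 make && make install
 # then link Bacula against the reentrant library, i.e. -lmysqlclient_r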
 
 thanks, Phil
 
 
  -Original Message-
  From: Masopust, Christian [mailto:[EMAIL PROTECTED] 
  Sent: 02 October 2007 06:30
  To: Weber, Philip; bacula-users@lists.sourceforge.net
  Subject: RE: [Bacula-users] Compiling 2.2.4 on Solaris
  
  
  
  Hello Philip,
  
  I've got Bacula running on a Solaris 8 system (client-only).
  Could you give a little more information? (configure call,
  which gcc, ...)
  
  christian
  
  --
  I sense much NT in you, NT leads to Blue Screen. 
  Blue Screen leads to downtime, downtime leads to suffering. 
  NT is the path to the darkside. 
  
  - Unknown Unix Jedi  
  
   -Original Message-
   From: [EMAIL PROTECTED] 
   [mailto:[EMAIL PROTECTED] On Behalf 
   Of Weber, Philip
   Sent: Monday, October 01, 2007 4:10 PM
   To: bacula-users@lists.sourceforge.net
   Subject: [Bacula-users] Compiling 2.2.4 on Solaris

[Bacula-users] cluster and redundancy of Bacula

2007-10-03 Thread MasterBrian
Greetings,

Does anyone here have experience in clustering Bacula for geographical
redundancy?

I've looked through the manual and Google, but I haven't found anything
useful.

Thank you




Re: [Bacula-users] cluster and redundancy of Bacula

2007-10-03 Thread Rich
i submitted a feature suggestion for that some time ago.
it didn't go in the 'projects' list, though :)

On 2007.10.03. 11:01, MasterBrian wrote:
 Greetings,
 
 Does anyone here have experience in clustering Bacula for geographical
 redundancy?

 I've looked through the manual and Google, but I haven't found anything
 useful.
 
 Thank you
-- 
  Rich



Re: [Bacula-users] Bug, upgrade from 2.0.3 to 2.2.4, Volume Retention?

2007-10-03 Thread kshatriyak
On Mon, 1 Oct 2007, Arno Lehmann wrote:

 That wouldn't help, because the oldest volume is not necessarily the
 one to be selected for pruning/recycling.

Hm, here it is. Every day there is a different pool (Monday, Tuesday, 
...). Bacula chooses the right pool because of the JobDefs. In those pools 
there are 2 tapes; one hasn't reached its retention period yet, so it 
can't be selected - so there is only 1 tape left that can be selected. So 
it's 100% certain which tape should be taken.

Anyway, it's not a big problem of course. I've already found another way to 
'fix' things -- not yet done, but I think it will work. I take a (very 
small) incremental backup during the afternoon, so the correct tape gets 
recycled. The tape is not there yet, so the data gets spooled. In the 
afternoon I can change the tape, the data gets written, and the tape is 
ready for the nightly backup.

K.




Re: [Bacula-users] backward compatibility

2007-10-03 Thread Marek Simon
We will see if it helps, maybe the problem is somewhere else.
M.

Rich wrote:
 On 2007.10.02. 18:57, Marek Simon wrote:
 Hi,
 I have director 1.38.11 from the stable Debian release. I tried to get a
 Windows client of the same version and searched for it for a long time, but I
 did not succeed. I tried a new Windows client (2.2.4), but it does not
 work (as said in the manual). Can I dig up a winbacula-1.38.11 installation
 program somewhere in the world (or in software heaven or software hell)?
 Or do I need to upgrade all clients, storages and the director to 2.x.x?

 hmm. i am using a 1.36.1 director/sd and 2.2.something agents. works fine,
 also restoration ;)

 Thanks.
 Marek
 ...



[Bacula-users] multidestination e-mail

2007-10-03 Thread luyigui loholhlki
Can you please tell me what to put in bacula-dir.conf in order to send 
notifications to different e-mail addresses 
(e.g. [EMAIL PROTECTED] and [EMAIL PROTECTED])?
Thanks

   


Re: [Bacula-users] Optical DVD low reliability?

2007-10-03 Thread Hydro Meteor
John and Eric,

Thank you both for your feedback (also valuable to the Bacula community,
particularly those who are new to Bacula). There is indeed more to using
optical media than meets the eye at first, but if it is indeed true that
using optical DVDs is still somewhat beta-like in a Bacula-specific context,
then I am all for pushing things forward and doing some of my own
testing and reporting the results back to the Bacula community. I'll be
using an Apple Xserve Intel machine (the current version of this machine), which
supports a variety of DVD formats (read-only and read/write) but not
DVD-RAM.

Cheers,

-Hydro

On 10/2/07, Eric Böse-Wolf [EMAIL PROTECTED] wrote:

 Hello Hydro,

 Hydro Meteor [EMAIL PROTECTED] writes:


  DVD media is not recommended for serious or important backups
 because of
  its low reliability.
 
 
  I wonder how long ago this statement was written and if this still
 remains true
  today ( e.g., have there been improvements to DVD optical media over
  time)?

 DVD-RAM was built with the intention that it keeps data safe for 30
 years. DVD-RAM has defect management. DVD-RAM uses a metallic dye,
 different from DVD-R, DVD+R, DVD-RW, DVD+RW, CD-R and CD-RW. DVD-RAM is much
 slower (only 3x or 5x) than the other optical formats. DVD-RAM is available
 in cartridges to protect the media against physical harm; there are even
 DVD-RAM burners which accept the cartridge directly.

 But, like any phase-change medium, it is not suited for long-term
 archival backup.

 See: http://en.wikipedia.org/wiki/Dvd-ram

 The information about the metallic dye was only on the German Wikipedia:
 http://de.wikipedia.org/wiki/DVD-RAM

 mfg

 Eric




Re: [Bacula-users] Optical DVD low reliability?

2007-10-03 Thread Alan Brown
On Tue, 2 Oct 2007, John Drescher wrote:

 I have seen a few studies in the past (possibly cdfreaks) showing
 that under torture tests CD/DVD media is not very good, and that
 CD/DVD media is also a very bad choice for archival because the media
 breaks down over time. Personally, I have not seen this happen with
 write-once media, as I have 10-year-old CD-Rs that still read fine;
 however, I have had difficulty reading RW media.

This is interesting; I would have expected it the other way round, as CD-R is 
dye-based while CD-RW is based on a high-temperature state change.

The caveat for CD/DVD storage is "keep it in a cool, dark place" - which most 
people tend to forget.

I had to back up the contents of a CD jukebox a couple of years ago while 
we were clearing out old kit. All 500 discs inside were CD-Rs ranging from 
5-9 years old. 3 were partially or completely unreadable with no obvious 
physical damage. Many more were touch-and-go to read (the noise made 
by the drives when they were retrying was quite noticeable), which 
underscores why CD media has at least 4 copies of the recorded data on the 
actual disc surface.

DVD-R/RW (both versions) haven't been around long enough for me to form a 
meaningful opinion on their longevity.




Re: [Bacula-users] multidestination e-mail

2007-10-03 Thread kshatriyak
On Wed, 3 Oct 2007, luyigui loholhlki wrote:

 can you please tell what to put in bacula-dir.conf in order to send 
 notification to different e-mails

Just configure your mail server to use an alias, for example 
[EMAIL PROTECTED]?
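
(For example, with a sendmail/Postfix-style aliases file - a sketch; the
alias name and addresses are made up:)

# /etc/aliases -- one alias fanning out to several recipients
bacula-reports: first@example.com, second@example.com

# then rebuild the alias database
newaliases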






[Bacula-users] TLS connections

2007-10-03 Thread alex
Hi,

I was just wondering: I have some fd clients on my local net and some fd
clients that need to be accessed over the evil interweb.

Is it possible that the connections over the Internet are TLS-secured while
the local clients' connections are not?

-- 
alex



Re: [Bacula-users] TLS connections

2007-10-03 Thread Frank Sweetser
alex wrote:
 Hi,
 
 I was just wondering: I have some fd clients on my local net and some fd
 clients that need to be accessed over the evil interweb.

 Is it possible that the connections over the Internet are TLS-secured
 while the local clients' connections are not?
 

Sure.

Your director and sd will have to have 'tls enable', but not 'tls require'.
Any fds that require encryption will need 'tls require'; just leave it out
of the local ones.
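
(A minimal sketch of how that can look in the configs - resource names and
certificate paths are made up, and the same three certificate directives
are needed in each daemon that speaks TLS:)

# bacula-dir.conf -- Client resource for an Internet-facing fd
Client {
  Name = remote-fd
  TLS Enable = yes                 # director is willing to use TLS
  TLS Certificate = /etc/bacula/ssl/dir.crt
  TLS Key = /etc/bacula/ssl/dir.key
  TLS CA Certificate File = /etc/bacula/ssl/ca.crt
  ...
}

# bacula-fd.conf on that client -- this fd insists on TLS
Director {
  Name = backup-dir
  TLS Enable = yes
  TLS Require = yes
  TLS Certificate = /etc/bacula/ssl/fd.crt
  TLS Key = /etc/bacula/ssl/fd.key
  TLS CA Certificate File = /etc/bacula/ssl/ca.crt
  ...
}

For the local clients, simply leave the TLS directives out (or set
TLS Require = no).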

-- 
Frank Sweetser fs at wpi.edu  |  For every problem, there is a solution that
WPI Senior Network Engineer   |  is simple, elegant, and wrong. - HL Mencken
GPG fingerprint = 6174 1257 129E 0D21 D8D4  E8A3 8E39 29E3 E2E8 8CEC



[Bacula-users] Best practice for taking tapes offsite

2007-10-03 Thread Tom Meiner
Hi!

I have an autochanger with 8 slots and only 1 drive.

I want to take all tapes offsite on a weekly basis. What is the best
practice for doing that?

Each weekend the database should be pruned to start the necessary full
backup with an empty database.

The incremental backups should rotate over tapes 1-6. Tape 7 should be
used for the backup of the catalog, while tape 8 should be the cleaning
tape.

I guess I have to shut down the storage daemon to be able to replace the
magazine. Is it also necessary to restart the director? How can I make
the library release the magazine without going into an error state after
restarting the storage daemon (today I really need to power off the server
and the library to get them working again)?

Also, how can I arrange a cleaning run before the full backup?

I use Bacula 2.0.4 with CentOS 5 and a Dilog Libra8 DDS3 connected to an
Adaptec 2940.

tom



Re: [Bacula-users] multidestination e-mail

2007-10-03 Thread Chris Hoogendyk


luyigui loholhlki wrote:
 can you please tell what to put in bacula-dir.conf in order to send 
 notification to different e-mails
 (e.g: [EMAIL PROTECTED]  [EMAIL PROTECTED])

Typically, any script, config or application that asks for an email 
address will take a string of characters and use it.

If you hand it (with the quotes) "[EMAIL PROTECTED],[EMAIL PROTECTED]", or,
depending on your local system configuration, "abc,xyz", then it should
just work. Try it and see.



---

Chris Hoogendyk

-
   O__   Systems Administrator
  c/ /'_ --- Biology  Geology Departments
 (*) \(*) -- 140 Morrill Science Center
~~ - University of Massachusetts, Amherst 

[EMAIL PROTECTED]

--- 

Erdös 4





[Bacula-users] disk spooling

2007-10-03 Thread GDS.Marshall

I have searched through the mailing list archive but cannot find the
answer among the hundreds of messages that relate to disk spooling.

I have four linux machines setup as follows:
fd host1 version 2.0.3 (gentoo)
fd host2 version 2.2.4 (packman on debian)
dir host3 2.2.4 (packman on debian)
sd host4 2.0.3 (gentoo)

I have spooling working, and can see the spool file increasing.
Under Job, I have:
Spool Data = yes
Spool Attributes = yes

Under Client (fd) and Storage (sd) I have
Maximum Concurrent Jobs = 20

on the sd I have
Storage { # definition of myself
  Name = xyz-sd
  SDPort = 9103  # Director's port
  WorkingDirectory = /var/lib/bacula
  Pid Directory = /var/run
  Maximum Concurrent Jobs = 20
}

Device {
  Name = DLT-V4
  Media Type = DLT-V4
  Archive Device = /dev/nst0
  AutomaticMount = yes
  AlwaysOpen = yes
  AutoChanger = yes
  SpoolDirectory=/var/data/amanda/bacula/spool
  Maximum Network Buffer Size = 65536
}
#
# An autochanger device with two drives
#
Autochanger {
  Name = Autochanger
  Device = DLT-V4
  Changer Command = /usr/libexec/bacula/mtx-changer %c %o %S %a %d
  Changer Device = /dev/sg1
}

When I run two jobs at the same time that want to write to two different
tapes, the second one fails immediately, rather than both jobs spooling and
the first one finishing de-spooling to tape. This is clear from the following
error message:
03-Oct 07:00 abc-dir: Start Backup JobId 58,
Job=def-backup.2007-10-03_07.00.00
03-Oct 07:00 abc-dir: Using Device DLT-V4
03-Oct 07:00 xyz-fd: ClientRunBeforeJob: run command
/etc/bacula/waitforntbackup
03-Oct 11:09 backupserver-sd: def-backup.2007-10-03_07.00.00 Fatal error:
acquire.c:355 Wanted to append to Volume CNI910, but device DLT-V4
(/dev/nst0) is busy writing on CNI911 .
03-Oct 11:09 xyz-fd: def-backup.2007-10-03_07.00.00 Fatal error:
job.c:1758 Bad response to Append Data command. Wanted 3000 OK data
, got 3903 Error append data

What have I set wrong in order to get them to spool at the same time?

Many thanks,

Spencer




[Bacula-users] LTO hardware compression ratio

2007-10-03 Thread Ralf Gross
Hi,

I'm testing our new changer which is equipped with 2 LTO-4 drives.
What should I expect from LTO's hw compression? I've seen LTO-3 tapes with
800+ GB of data. Here are the volbytes numbers I got with LTO-4 so far.

volbytes:
1,164,080,268,288
1,138,440,038,400
1,180,908,417,024

This is a compression ratio of 1.45:1 (1,164,080,268,288 bytes is roughly
1,164 GB against LTO-4's 800 GB native capacity).

I know that these numbers are highly dependent on the kind of data
that is backed up.

The data I'm currently backing up is mainly made up of large hdf
files.

# du -sh *
1,4G16bit_chan0.hdf
0   16bit_chan0.hdf_pdetTrigger.log
325M8bit_chan0.hdf
201M8bit_chan1.hdf

# bzip2 *

# du -sh *
447M16bit_chan0.hdf.bz2
4,0K16bit_chan0.hdf_pdetTrigger.log.bz2
219M8bit_chan0.hdf.bz2
652K8bit_chan1.hdf.bz2


So bzip2 seems to be able to compress this data far better (3.4:1), but
the drive has to compress in real time, so it might be slower, even
though compression is implemented in hardware.

I'm just wondering if I have to set a special density code with mt
(which I don't know at the moment)? LTO-3's code was 0x44 if I
remember correctly, but I *think* the default should be OK. I couldn't
find any density code for LTO-4 with Google.

Ralf



Re: [Bacula-users] LTO hardware compression ratio

2007-10-03 Thread Chris Howells
Hi Ralf,

Ralf Gross wrote:
 I'm testing our new changer which is equipped with 2 LTO-4 drives.
 What should I expect from LTO's hw compression? I've seen LTO-3 tapes with
 800+ GB data. Here are the volbytes number I got with LTO-4 so far.
 
 volbytes:
 1,164,080,268,288
 1,138,440,038,400
 1,180,908,417,024

I am very interested in this too. So far I have got even less than that 
on a volume, though I know that the data I am currently testing with is 
not *that* compressible.

Tomorrow I intend to do some benchmarking of different block sizes to 
see what effect they have on performance and compression.

 I'm just wondering if I have to set a special density code with mt
 (which I don't know at the moment)? LTO-3's code was 0x44 if I
 remember correctly, but I *think* the default shoulb be ok. I couldn't
 find any density code for LTO-4 with google.

My drive is using 0x46:

[EMAIL PROTECTED]:~# mt -f /dev/st0 status
SCSI 2 tape drive:
File number=51, block number=0, partition=0.
Tape block size 0 bytes. Density code 0x46 (no translation).
Soft error count since last status=0
General status bits on (8101):
  EOF ONLINE IM_REP_EN

Is that what yours is using too?

I'm using mt-st by the way; it seems more featureful than GNU mt, which
was the one already installed on my Ubuntu box.



[Bacula-users] Journal notes

2007-10-03 Thread Yuri Timofeev
Hi, there


Feature request (and not only that).

I back up large volumes of information, and the copies must be kept for a
long time - from 1 year up to 5 years. Copies of especially important
information are kept on both tape and HDD.

IMHO, Bacula has no simple journal for keeping notes. At present I keep
such a registry manually, on paper. Let me try to explain the idea and
why it is needed.

For example, Job 123 was run on 2006-03-31, with a retention period of
5 years. Bacula keeps all the data a computer needs to restore those
files. However, it tells me - the administrator - almost nothing. I would
like to see a description saying that the files of Job 123 contain the
database of the firm's financial figures for all of 2006. In other words,
5 years later I cannot remember which information was saved; I would have
to restore the database completely just to find out. The situation is even
worse if Volumes (residing on HDD) were damaged and then recreated (from
tapes). In such cases the time correlation of events is lost completely.

Besides that, it would be desirable to record events in the journal such
as power-supply failures and the like.

And now, after this long explanation, the question.

For the webacula project I would like to create one more table (or several
tables) in the Bacula Catalog - for example, named JournalNote or Blog ;) -
and to ask the Bacula developers not to use these names for Bacula tables.

I would design the tables as described above and write the necessary
documentation, for possible use in native Bacula in the future.
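
(A hypothetical sketch of what such a table could look like in a MySQL
catalog - the table and column names are only illustrative:)

CREATE TABLE JournalNote (
   JournalNoteId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
   JobId         INTEGER UNSIGNED DEFAULT 0,  -- 0 = note not tied to a job
   NoteTime      DATETIME NOT NULL,           -- when the note was written
   Note          TEXT,                        -- free-form operator description
   PRIMARY KEY (JournalNoteId)
);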

Would this be useful to anyone?

Thanks.

-- 
have a nice day



Re: [Bacula-users] disk spooling

2007-10-03 Thread John Drescher
 When I run two jobs at the same time which want to write to two different
 tapes, the second one fails immediately rather than both spooling and the
 first one to finish de-spooling to tape, this is clear from the following
 error message:
 03-Oct 07:00 abc-dir: Start Backup JobId 58,
 Job=def-backup.2007-10-03_07.00.00
 03-Oct 07:00 abc-dir: Using Device DLT-V4
 03-Oct 07:00 xyz-fd: ClientRunBeforeJob: run command
 /etc/bacula/waitforntbackup
 03-Oct 11:09 backupserver-sd: def-backup.2007-10-03_07.00.00 Fatal error:
 acquire.c:355 Wanted to append to Volume CNI910, but device DLT-V4
 (/dev/nst0) is busy writing on CNI911 .
 03-Oct 11:09 xyz-fd: def-backup.2007-10-03_07.00.00 Fatal error:
 job.c:1758 Bad response to Append Data command. Wanted 3000 OK data
 , got 3903 Error append data

The weird part here is that bacula started a second job knowing it
could not write to the second tape at the same time as the first tape.
In normal circumstances, on a single device bacula will only allow
concurrency for jobs that want to use the same pool, and they all
write to the same volume.

John



Re: [Bacula-users] Optical DVD low reliability?

2007-10-03 Thread Yuri Timofeev
Hi,

From my experience: DVDs really are very unreliable.

-- 
have a nice day



Re: [Bacula-users] LTO hardware compression ratio

2007-10-03 Thread John Drescher
 I'm testing our new changer which is equipped with 2 LTO-4 drives.
 What should I expect from LTO's hw compression?
On LTO2 and DLT I have seen between 1.1:1 and 2.5:1, but mostly between
1.4:1 and 2:1.

 I've seen LTO-3 tapes with
 800+ GB data. Here are the volbytes number I got with LTO-4 so far.

 volbytes:
 1,164,080,268,288
 1,138,440,038,400
 1,180,908,417,024

 This is a compression ratio of 1,45:1.

 I know that these numbers are highly dependent on the kind of data
 that is backed up.

 The data I'm currently backing up is mainly made up of large hdf
 files.

 # du -sh *
 1,4G16bit_chan0.hdf
 0   16bit_chan0.hdf_pdetTrigger.log
 325M8bit_chan0.hdf
 201M8bit_chan1.hdf

 # bzip2 *

 # du -sh *
 447M16bit_chan0.hdf.bz2
 4,0K16bit_chan0.hdf_pdetTrigger.log.bz2
 219M8bit_chan0.hdf.bz2
 652K8bit_chan1.hdf.bz2


 So bzip2 seems to be able to compress the data far better (3,4:1), but
 the drive has to do it in real time, thus is might be slower,
 although compression is implemented in hardware.

It has to do this at 120 MB/s for LTO-4 drives, which is a lot faster
than any software compression I have seen.

I would consider the compression closer to gzip's fast setting (or LZO) than
to bzip2. Also, there is one big difference: the hardware compression in
the tape drive compresses block-sized chunks at a time instead of the
whole file.
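
(A quick way to see that effect on your own data - a sketch; the 64 KB
block size and file name are made up, and GNU split's -a 4 just allows
enough chunk suffixes for a large file:)

# whole file in one compression stream
gzip -1 -c bigfile.hdf | wc -c
# the same file compressed 64 KB at a time, as a tape drive would see it
split -b 65536 -a 4 bigfile.hdf chunk.
for c in chunk.*; do gzip -1 -c "$c"; done | wc -c
rm -f chunk.*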


 I'm just wondering if I have to set a special density code with mt
 (which I don't know at the moment)?
Doubtful. I have never seen a tape drive with variable compression methods.

John



Re: [Bacula-users] disk spooling

2007-10-03 Thread GDS.Marshall

Thank you John, I have a couple of questions which are outlined below.

 When I run two jobs at the same time which want to write to two
 different
 tapes, the second one fails immediately rather than both spooling and
 the
 first one to finish de-spooling to tape, this is clear from the
 following
 error message:
 03-Oct 07:00 abc-dir: Start Backup JobId 58,
 Job=def-backup.2007-10-03_07.00.00
 03-Oct 07:00 abc-dir: Using Device DLT-V4
 03-Oct 07:00 xyz-fd: ClientRunBeforeJob: run command
 /etc/bacula/waitforntbackup
 03-Oct 11:09 backupserver-sd: def-backup.2007-10-03_07.00.00 Fatal
 error:
 acquire.c:355 Wanted to append to Volume CNI910, but device DLT-V4
 (/dev/nst0) is busy writing on CNI911 .
 03-Oct 11:09 xyz-fd: def-backup.2007-10-03_07.00.00 Fatal error:
 job.c:1758 Bad response to Append Data command. Wanted 3000 OK data
 , got 3903 Error append data

 The weird part here is that bacula started a second job knowing it
 could not write to the second tape at the same time as the first tape.
Could it have done this because it has an autochanger?

 In normal circumstances on a single device bacula will only allow
 concurrency on the jobs that want to use the same pool and they all
 write to the same volume.
So what you are saying is that it should have queued the jobs?

Many thanks,

Spencer





Re: [Bacula-users] disk spooling

2007-10-03 Thread John Drescher
On 10/3/07, GDS.Marshall [EMAIL PROTECTED] wrote:

 Thank you John, I have a couple of questions which are outlined below.

  When I run two jobs at the same time which want to write to two
  different
  tapes, the second one fails immediately rather than both spooling and
  the
  first one to finish de-spooling to tape, this is clear from the
  following
  error message:
  03-Oct 07:00 abc-dir: Start Backup JobId 58,
  Job=def-backup.2007-10-03_07.00.00
  03-Oct 07:00 abc-dir: Using Device DLT-V4
  03-Oct 07:00 xyz-fd: ClientRunBeforeJob: run command
  /etc/bacula/waitforntbackup
  03-Oct 11:09 backupserver-sd: def-backup.2007-10-03_07.00.00 Fatal
  error:
  acquire.c:355 Wanted to append to Volume CNI910, but device DLT-V4
  (/dev/nst0) is busy writing on CNI911 .
  03-Oct 11:09 xyz-fd: def-backup.2007-10-03_07.00.00 Fatal error:
  job.c:1758 Bad response to Append Data command. Wanted 3000 OK data
  , got 3903 Error append data
 
  The weird part here is that bacula started a second job knowing it
  could not write to the second tape at the same time as the first tape.
 Could it have done this because it has an autochanger?

I have a 2-drive, 24-slot changer, I have run over 1000 jobs on it, and
I have never seen this.


  In normal circumstances on a single device bacula will only allow
  concurrency on the jobs that want to use the same pool and they all
  write to the same volume.
 So what you are saying is that it should have queued the jobs?

Yes. To me it looks like a bug.

John



Re: [Bacula-users] Vchanger Questions

2007-10-03 Thread Josh Fisher

Elie Azar wrote:
 Hi Josh,

 Thanks for the reply...


 Josh Fisher wrote:
 There is a script called disk-changer that is packaged with Bacula 
 that does much the same thing and is what vchanger was based on. The 
 disk-changer script uses a file in the base directory of each 
 autochanger to simulate barcodes for its volumes. So it is designed 
 to use a single disk for each configured autochanger. Another way to 
 view it would be that it emulates a tape autoloader that will only 
 ever use a single magazine. So disk-changer fits well with fixed drives.

 The vchanger script modifies that approach to generate the barcode 
 labels on the fly based on the number of slots defined and the 
 magazine label. The magazine label is a number stored in a file in 
 the root directory of the USB drive. So each USB drive belonging to a 
 particular autochanger has the same filesystem label, but a different 
 magazine label. It emulates a tape autoloader that will be used with 
 multiple magazines. So vchanger fits well with removable drives.

 Why use either one of these scripts? Basically, because the storage 
 device must be specified in either the Pool or the Job. That leaves 
 two possible solutions; use a virtual autochanger or use LVM. LVM 
 would take care of the space problem, but it would be impossible to 
 remove a drive with intact backups on it.

 As was pointed out, these approaches work well with a single drive per 
 changer. I've been looking at vchanger, and I was hoping to be able to 
 use it, but it supports one drive per changer; no simultaneous drives 
 present. The mountpoint directive in the vchanger conf file restricts 
 it to a single drive.

Right. It simplifies the script quite a bit. :)

 In my case, I would like to implement bacula so that it backs up to 
 any of a set of hard drives, all present in the system, either all 
 mounted or mounted via autofs. I would like to define 
 four changers, based on retention values and other criteria, each with 
 a number of hard drives. Then I want to define my jobs to go to one 
 of these changers. At that point, the backup will happen on any of the 
 drives within that changer, without worrying about whether there is 
 enough space on a specific drive. And if a drive is full, it is 
 replaced with a new one. Here, I'm not sure about spanning a single 
 job across more than one drive; i.e. if bacula picks a volume, 
 starts backing up, and runs out of room on the disk, would it span 
 across to another disk to finish the job? I'm not sure about these issues.

The bacula autochanger interface is pretty straightforward and is 
documented at 
http://www.bacula.org/dev-manual/Autochanger_Resource1.html#SECTION003212.
There are only 5 commands issued by bacula to the autochanger script.

Bacula uses the 'list' command to determine what volumes are in the 
autochanger and in which slots. It issues a 'loaded' command for each of 
the autochanger's drives to determine which slots (if any) are already 
loaded and ready to use. Based on this info, bacula selects a volume to 
use for the job and, if needed, issues 'unload' and 'load' commands to 
load the volume it needs from the selected slot. Once the volume is 
loaded, bacula begins using it.
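
(A minimal sketch of that protocol - not a working changer script; the slot
count is invented, and the argument order follows the "%c %o %S %a %d"
Changer Command shown earlier: changer device, command, slot, archive
device, drive index:)

#!/bin/sh
ctl="$1" cmd="$2" slot="$3" archive="$4" drive="$5"
case "$cmd" in
  slots)  echo 10 ;;                  # number of slots in this changer
  list)   seq 1 10 | sed 's/$/:/' ;;  # one "slot:barcode" line per slot
  loaded) echo 0 ;;                   # slot now in the drive, 0 = empty
  load)   ;;                          # make $archive point at the volume in $slot
  unload) ;;                          # release the volume currently in the drive
esac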

Spanning to another drive is not a Bacula issue, per se. If a write 
error occurs due to the disk being full, then bacula will mark the 
volume Full and begin looking for another available volume to continue 
writing the job to. This means it would again query the autochanger 
script for available volumes and attempt to load one.

So the script would have to deal with maintaining slots on multiple 
physical drives. Then it should work out automatically. Bacula will only 
know it is spanning the job across volumes; it doesn't care which 
drive those volumes are on.

However, there is a problem. Let's say we have 5 slots on each of 2 
physical drives. Drive-0 has slots 1 through 5, with slot-1 being a large 
volume with status=Used and slot-2 through slot-5 being unused with 
status=Append. Drive-1 has slots 6 through 10, all unused with 
status=Append. We start a big job and Bacula selects slot-2 on drive-0. 
Bacula writes to that volume until drive-0 becomes full, so it marks the 
slot-2 volume status=Full. Now drive-0 has no remaining free space and 
has slot-1 with status=Used, slot-2 with status=Full, and slot-3 through 
slot-5 with status=Append. Bacula thinks there are 3 more appendable 
volumes on drive-0. It might still work, though, because if Bacula loads 
one of the appendable volumes on drive-0, it will immediately get a 
write error when it attempts to write to it and again (I think) mark the 
volume status=Full and look for another appendable one.

I have no idea how many times Bacula will repeat that process before 
giving up and failing the job. But if it doesn't give up, then it will 
eventually get all of the volumes in slots 1 through 5 marked Full.

Re: [Bacula-users] multidestination e-mail

2007-10-03 Thread Foo Bar
Hi,

 can you please tell what to put in bacula-dir.conf in order to send
 notification to different e-mails

in (all) the Messages { } section(s):
mail = [EMAIL PROTECTED], [EMAIL PROTECTED] = all, !skipped

this worked for me.
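
(Spelled out in a full Messages resource it would sit like this - a sketch;
the resource name, mailcommand and addresses are only illustrative:)

Messages {
  Name = Standard
  mailcommand = "/usr/sbin/bsmtp -h localhost -f bacula@example.com -s \"Bacula: %t %e of %c %l\" %r"
  mail = first@example.com, second@example.com = all, !skipped
}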





Re: [Bacula-users] multidestination e-mail

2007-10-03 Thread Foo Bar

--- Foo Bar [EMAIL PROTECTED] wrote:

 mail = [EMAIL PROTECTED], [EMAIL PROTECTED] = all, !skipped

And if you view this through a web interface that mangles it: the 'at'
should be an at sign in both addresses. Hopefully the brackets aren't
mangled now :)




Re: [Bacula-users] cluster and redundancy of Bacula

2007-10-03 Thread Rich
On 2007.10.03. 11:17, MasterBrian wrote:
 Hi,
 
 Are you doing any self-made clustering policy while waiting? :)

well, no. simple syncing at file level with manual failover ;)
it's still highly problematic, as all clients would also have to be 
reconfigured.

 Rich ha scritto:
 i submitted a feature suggestion for that some time ago.
 it didn't go in the 'projects' list, though :)

 On 2007.10.03. 11:01, MasterBrian wrote:
 Greetings,

 Does anyone here have experience in clustering Bacula for geographical
 redundancy?

 I've looked through the manual and Google, but I haven't found anything
 useful.

 Thank you
-- 
  Rich



Re: [Bacula-users] LTO hardware compression ratio

2007-10-03 Thread Ralf Gross
John Drescher schrieb:
  I'm testing our new changer which is equipped with 2 LTO-4 drives.
  What should I expect from LTO's hw compression?
 On LTO2 and DLT I have seen between 1.1:1 to 2.5:1 but mostly between
 1.4:1 and 2:1.

Ok.
 
  I've seen LTO-3 tapes with
  800+ GB data. Here are the volbytes number I got with LTO-4 so far.
 
  volbytes:
  1,164,080,268,288
  1,138,440,038,400
  1,180,908,417,024
 
  This is a compression ratio of 1,45:1.
 
  I know that these numbers are highly dependent on the kind of data
  that is backed up.
 
  The data I'm currently backing up is mainly made up of large hdf
  files.
 
  # du -sh *
  1,4G16bit_chan0.hdf
  0   16bit_chan0.hdf_pdetTrigger.log
  325M8bit_chan0.hdf
  201M8bit_chan1.hdf
 
  # bzip2 *
 
  # du -sh *
  447M16bit_chan0.hdf.bz2
  4,0K16bit_chan0.hdf_pdetTrigger.log.bz2
  219M8bit_chan0.hdf.bz2
  652K8bit_chan1.hdf.bz2
 
 
  So bzip2 seems to be able to compress the data far better (3,4:1), but
  the drive has to do it in real time, thus is might be slower,
  although compression is implemented in hardware.
 
 It has to do this at 120 MB/s for LTO4 drives which is a lot faster
 than any software compression I have seen.
 
 I would consider the compression closer to gzip fast (or LZO) than
 bzip2 and also there is one big difference the hardware compression in
 the tape drive compresses block size chunks at a time instead of the
 whole file.
 
# gzip -1 *
# du -sh *
644M16bit_chan0.hdf.gz
4,0K16bit_chan0.hdf_pdetTrigger.log.gz
252M8bit_chan0.hdf.gz
3,2M8bit_chan1.hdf.gz

That's about 2.1:1 on a Xeon server, which is not comparable with an
LTO-4 drive.

  I'm just wondering if I have to set a special density code with mt
  (which I don't know at the moment)?
 Doubtful. I have never seen a tape drive with variable compression methods.

I guess the compression ratio is fine, even though I thought the data
would be highly compressible.

Ralf



Re: [Bacula-users] LTO hardware compression ratio

2007-10-03 Thread Ralf Gross
Chris Howells schrieb:
 Hi Ralf,
 
 Ralf Gross wrote:
  I'm testing our new changer which is equipped with 2 LTO-4 drives.
  What should I expect from LTO's hw compression? I've seen LTO-3 tapes with
  800+ GB data. Here are the volbytes number I got with LTO-4 so far.
  
  volbytes:
  1,164,080,268,288
  1,138,440,038,400
  1,180,908,417,024
 
 I am very interested in this too. So far I have got even less than that 
 on a volume, though I know that the data I am currently testing with is 
 not *that* compressible.

I could test the LTO-4 drive with data from another server. With
LTO-3 I get 2:1 compression for that data. But those files are a
completely different kind of data, mostly office documents. The data
I'm backing up to LTO-4 is mainly video data (hdf files), which might
not be that compressible.
 
 Tomorrow I intend to do some benchmarking of different block sizes to 
 see what effect they have on performance and compression.

Ah, I remember the thread a few weeks ago.
 
  I'm just wondering if I have to set a special density code with mt
  (which I don't know at the moment)? LTO-3's code was 0x44 if I
  remember correctly, but I *think* the default shoulb be ok. I couldn't
  find any density code for LTO-4 with google.
 
 My drive is using 0x46:
 
 [EMAIL PROTECTED]:~# mt -f /dev/st0 status
 SCSI 2 tape drive:
 File number=51, block number=0, partition=0.
 Tape block size 0 bytes. Density code 0x46 (no translation).
 Soft error count since last status=0
 General status bits on (8101):
   EOF ONLINE IM_REP_EN
 
 Is that what yours is using too?

I'm still running the first backup, so I can't access the drive with
mt. 'mt densities' doesn't show any LTO values at all on Debian.
 
 I'm using mt-st by the way, it seems more featureful than GNU mt, which 
 was the one that was already installed on my Ubuntu box.

I use the mt-st that comes with Debian.

Ralf



[Bacula-users] bconsole problem with modify restore job

2007-10-03 Thread Mark Nienberg
[EMAIL PROTECTED] ~]$ rpm -q bacula-mysql
bacula-mysql-2.2.4-1

In bconsole I cannot change the Replace option in a restore job.
See the following  partial bconsole session:


8 files selected to be restored.

Defined Clients:
  1: gecko-fd
  2: gingham-fd
  3: buckeye-fd
  4: tesla-fd
Select the Client (1-4): 1

Run Restore job
JobName: RestoreFiles
Bootstrap:   /var/bacula/buckeye-dir.restore.1.bsr
Where:   /
Replace: always
FileSet: gecko Files
Backup Client:   gecko-fd
Restore Client:  gecko-fd
Storage: File
When:2007-10-03 11:33:07
Catalog: MyCatalog
Priority:10
OK to run? (yes/mod/no): mod
Parameters to modify:
  1: Level
  2: Storage
  3: Job
  4: FileSet
  5: Restore Client
  6: When
  7: Priority
  8: Bootstrap
  9: Where
 10: File Relocation
 11: Replace
 12: JobId
Select parameter to modify (1-12): 11
Replace:
  1: always
  2: ifnewer
  3: ifolder
  4: never
Select replace option (1-4): 4

Run Restore job
JobName: RestoreFiles
Bootstrap:   /var/bacula/buckeye-dir.restore.1.bsr
Where:   /
Replace: always   <-- that should be never
FileSet: gecko Files
Backup Client:   gecko-fd
Restore Client:  gecko-fd
Storage: File
When:2007-10-03 11:33:07
Catalog: MyCatalog
Priority:10

I'm sure this used to work in prior versions (at least in the 1.x series).
Mark




Re: [Bacula-users] painfully slow backups

2007-10-03 Thread Ross Boylan
On Sun, 2007-09-30 at 23:19 +0200, Arno Lehmann wrote:
 Hello,
 
 30.09.2007 01:36,, Ross Boylan wrote::
  On Sat, 2007-09-29 at 13:15 -0700, Ross Boylan wrote:
  On Fri, 2007-09-28 at 08:46 +0200, Arno Lehmann wrote:
  Hello,
 
  27.09.2007 22:47,, Ross Boylan wrote::
  On Thu, 2007-09-27 at 09:19 +0200, Arno Lehmann wrote:
  Hi,
 
  27.09.2007 01:17,, Ross Boylan wrote::
  I've been having really slow backups (13 hours) when I backup a large
  mail spool.  I've attached a run report.  There are about 1.4M files
  with a compressed size of 4G.  I get much better throughput (e.g.,
  2,000KB/s vs 86KB/s for this job!) with other jobs.
  2MB/s is still not especially fast for a backup to disk, I think. So 
  your storage disk might also be a factor here.
  .
  vmstat during a backup would be a good next step in this case, I think.
 
  Here are the results of a test job.  The first vmstat was shortly after
  I started the job
  # vmstat 15
  procs ---memory-- ---swap-- -io -system-- cpu
   r  b   swpd   free   buff  cache   si   so   bi   bo   in   cs us sy id wa
   1  2   7460  50760 204964 667288004332  197   15 18  5 75  2
   1  1   6852  51476 195492 675524   280  1790   358  549 1876 20  6 36 38
   0  2   6852  51484 189332 68261200  1048   416  470 1321 12  4 41 43
   2  0   6852  52508 187344 68532800   303   353  485 1369 16  4 68 12
   1  0   6852  52108 187352 68546400 1   144  468 1987 12  4 84  0
 
   This clearly shows about 40% of the CPU time spent in I/O wait during
   the backup. Another 40% is idle. I'm not sure if the reports are being
   thrown off by the fact that I have 2 virtual CPUs (not really: it's a
   P4 with hyperthreading). If that's the case, the 40% might really mean 80%.
 
 Interesting question... I never thought about that, and the man page 
 writers for vmstat on my system didn't either. I suppose that vmstat 
 bases its output on the overall available CPU time, i.e. you have 40% 
 of all available CPU time spent in IOwait. Like, one (HT) CPU spends 
 80% waiting, the other no time at all.
 
   During the run I observed little CPU or memory usage above where I was
   before it. None of the bacula daemons, postgres or bzip got anywhere
   near the top of my CPU use list (using ksysguard).
 
  A second run went much faster: 14 seconds (1721.6 KB/s) vs 64 seconds
  (376.6 KB/s) the first time.  Both are much better than I got with my
  original, bigger jobs.  It was so quick I think vmstat missed it
   procs ---memory-- ---swap-- -io -system-- cpu
    r  b   swpd   free   buff  cache   si   so   bi   bo   in   cs us sy id wa
    1  0   6852  56496 184148 683932004332  197   19 18  5 75  2
    3  0   6852  56016 178604 6900240  113 0   429  524 3499 35 10 55  0
    2  0   6852  51988 172476 70155600 1  2023  418 3827 33 11 55  1
 
  It looks as if the 2nd run only hit the cache, not the disk, while
  reading the directory (bi is very low)--if I understand the output,
  which is a big if.
 
 I agree with your assumption.
 
  Here are some more stats, systematically varying the software.  I'll
  just give the total backup time, starting with the 2 reported above:
  64
  14
  Upgrade to Postgresql 8.2 from 8.1
  41
  13
  upgrade to bacula 2.2.4
  13
  12
 
 This helped a lot, I think, though it still might be the buffers 
 performance you're measuring. 
Yes, I think the good first-time performance reflects the fact that the
disk sectors were already in memory.  That's why I tried a different
directory in the next test; it showed first-time performance right
back to very slow.
 Anyway, given the relative increase you
 observe, I'm rather sure that at least part of it is due to Bacula
 performing better.
 
Comparing times with the disk apparently in the cache, it was 13 seconds
before the bacula upgrade and 13 and 12 seconds after.  That doesn't
seem like much evidence for improvement.  I presume the part the upgrade
should have helped (database transactions) is the same regardless of the
issues reading files to backup.

Since the first and 2nd runs share the same need to populate the
catalog, these results seem to show that the speed of the catalog is not
the issue.
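
(One way to make that cold-cache/warm-cache comparison deterministic - a
sketch, assuming a Linux kernel new enough to have drop_caches - is to flush
the page cache between runs, as root:)

sync
echo 3 > /proc/sys/vm/drop_caches   # drop page cache plus dentries/inodes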
  switch to a new directory as the source of the backup
  old one has 1,606 files = 24 MB compressed
  new one has 4,496 files = 27 MB
  92
  22
  
With a new directory, the first time is up to 92 seconds (slightly more MB,
but way more files than in the earlier tests).
  In the slow cases vmstat shows lots of blocks in and major (40%) CPU
  time in iowait.
  
  I suspect the relatively good first try time with Postgresql 8.2 was a
  result of having some of the disk still in the cache.
  
  Even the best transfer rates were not particularly impressive (1854
  kb/s), but the difference between the first and 2nd runs (and the
  

Re: [Bacula-users] Mysql - INSERT INTO batch error

2007-10-03 Thread Marc Cousin
On Monday 01 October 2007 12:58:27 Alejandro Alfonso wrote:
 Thank you for the fast answer!

 Um... maybe the problem is related to my server? It's a big backup
 (about 1.7 TB, many small files), and 770 GB of SQL statements:

 01-Oct 04:05 poe-sd: Sending spooled attrs to the Director. Despooling
  *767,610,414* bytes ...
  01-Oct 04:08 poe-dir: FileServerFull.2007-09-28_23.13.09 Fatal error:
  sql_create.c:730 sql_create.c:730 insert INSERT INTO batch VALUES

 My /tmp partition its like this:

 poe etc # df -h
 FilesystemSize  Used Avail Use% Mounted on
 /dev/sda4  67G  8.9G   58G  14% /
 udev  505M  2.8M  502M   1% /dev
 /dev/sda2 976M  531M  446M  55% /tmp
 shm   505M 0  505M   0% /dev/shm

 And there's a problem in that partition:
 Incorrect key file for table '/tmp/#sql1439_10_0.MYI';try to repair it
 01-Oct 04:08 poe-dir: FileServerFull.2007-09-28_23.13.09 Fatal error:

 The question is... was that table created by bacula or by mysql?


Temp tables are, I think, created in /tmp by MySQL. So if you have to
despool a lot of data, it will all go into the temp table in /tmp.
500 MB free is probably much too small. That would explain why it says the table
is corrupted and you have to repair it. I don't see a link with Bug #965.
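
(A minimal sketch of the usual fix, assuming MySQL's tmpdir option: point
temporary tables at a partition with enough room - the path is made up:)

# /etc/my.cnf -- move MySQL temp tables off the small /tmp partition
[mysqld]
tmpdir = /var/lib/mysql/tmp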
