Re: [Bacula-users] tuning lto-4

2011-12-02 Thread Andrea Conti
Hello,

> blocksize set with mt and in bacula-sd.conf

Unless you are setting minimum block size (which you really should
not), Bacula uses the tape drive in variable block size mode, with block
sizes up to the value given in maximum block size.

Setting a fixed block size with mt (and reading it back with tapeinfo)
is irrelevant.
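
If a fixed block size has already been set with mt, something like the
following should put the drive back into variable-block mode (the device
path here is just an example):

mt -f /dev/nst0 setblk 0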

> btape: btape.c:1082 Test with zero data and bacula block structure.
> btape: btape.c:960 Begin writing 10 files of 21.47 GB with blocks of
> 65536 bytes.
>
> btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 111.8 MB/s

A 64 kB maximum block size is *way* too small for LTO-4. You'll need at
least 256 kB, possibly 512 kB or more, to achieve full throughput.
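
For reference, a minimal sketch of the relevant settings in the
bacula-sd.conf Device resource (device name and value are examples only;
note that volumes already written with a different block size may need
the old setting to be read back):

Device {
  Name = LTO-4
  ...
  # cap for Bacula's variable-size blocks; try 262144 (256 kB) or 524288 (512 kB)
  Maximum Block Size = 262144
  # Minimum Block Size deliberately left unset, so the drive stays in
  # variable-block mode
}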

When diagnosing throughput problems, the tests with raw block structure
are more representative of the actual performance of the tape drive,
although the difference will not be that much.
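
For example, the raw-block tests are included when the btape speed command
is run without the skip_raw keyword (a sketch modelled on the speed command
shown later in this thread; the file_size and nb_file values are arbitrary):

*speed file_size=3 nb_file=3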

What are you getting in the random data tests? With ~112 MB/s for zeroes,
you're still being severely limited by something (most likely block
size). LTO-4 is rated for 120 MB/s *to tape*, so you should be aiming for
~110 MB/s with random data and 250-300 MB/s with zeroes (the latter being
limited by the maximum bandwidth of the drive's compression engine).

If you can't achieve that with any maximum block size, your hardware is
probably inadequate for the task.

andrea



Re: [Bacula-users] SQL Failure while upgrading from 5.0.3 to 5.2.1

2011-12-02 Thread Armin Tueting
26-Nov 12:43 sydney-dir JobId 10: Error: sql_update.c:255 sql_update.c:255 
update UPDATE Counters SET 
MinValue=1,MaxValue=2147483647,CurrentValue=2,WrapCounter='' WHERE 
Counter='DiffFileVolumeCounter' failed: You have an error in your SQL syntax; 
check the manual that corresponds to your MySQL server version for the right 
syntax to use near 
'MinValue=1,MaxValue=2147483647,CurrentValue=2,WrapCounter='' WHERE 
Counter='' at line 1
26-Nov 12:43 sydney-dir JobId 10: Error: Count not update counter 
DiffFileVolumeCounter: ERR=sql_update.c:255 update UPDATE Counters SET 
MinValue=1,MaxValue=2147483647,CurrentValue=2,WrapCounter='' WHERE 
Counter='DiffFileVolumeCounter'
 failed: You have an error in your SQL syntax; check the manual that
 corresponds to your MySQL server version for the right syntax to use
 near
 'MinValue=1,MaxValue=2147483647,CurrentValue=2,WrapCounter='' WHERE 
 Counter='' at line 1
26-Nov 12:43 sydney-dir JobId 10: Created new Volume DiffFile-0001 in 
catalog.
 I'm getting the error message when upgrading from 5.0.3 to 5.2.1...

I've done some research and found the following commit in Git:
Fix bug #1504 -- Error when creating tables in MySQL 5.5
This seems to resolve the issue for MySQL 5.5, but RHEL and CentOS are
running version 5.0...

Any help appreciated.




Re: [Bacula-users] SQL Failure while upgrading from 5.0.3 to 5.2.1

2011-12-02 Thread Martin Simmons
 On Fri, 2 Dec 2011 10:08:49 +0100, Armin Tueting said:
 
 26-Nov 12:43 sydney-dir JobId 10: Error: sql_update.c:255 sql_update.c:255 
 update UPDATE Counters SET 
 MinValue=1,MaxValue=2147483647,CurrentValue=2,WrapCounter='' WHERE 
 Counter='DiffFileVolumeCounter' failed: You have an error in your SQL 
 syntax; check the manual that corresponds to your MySQL server version for 
 the right syntax to use near 
 'MinValue=1,MaxValue=2147483647,CurrentValue=2,WrapCounter='' WHERE 
 Counter='' at line 1
 26-Nov 12:43 sydney-dir JobId 10: Error: Count not update counter 
 DiffFileVolumeCounter: ERR=sql_update.c:255 update UPDATE Counters SET 
 MinValue=1,MaxValue=2147483647,CurrentValue=2,WrapCounter='' WHERE 
 Counter='DiffFileVolumeCounter'
  failed: You have an error in your SQL syntax; check the manual that
  corresponds to your MySQL server version for the right syntax to use
  near
  'MinValue=1,MaxValue=2147483647,CurrentValue=2,WrapCounter='' WHERE 
  Counter='' at line 1
 26-Nov 12:43 sydney-dir JobId 10: Created new Volume DiffFile-0001 in 
 catalog.
  I'm getting the error message when upgrading from 5.0.3 to 5.2.1...
 
 I've done some research and found the following in GIT
 Fix bug #1504 -- Error when creating tables in MySQL 5.5
 This  seems  to  resolve  the issue for version 5.5.  RHEL, CentOS are
 running version 5.0...
 
 Any help appreciated.

Check the sql_mode of mysqld.  The most likely cause of this error is non-ANSI
mode (see
http://dev.mysql.com/doc/refman/5.0/en/server-options.html#option_mysqld_ansi).
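
For example (a rough sketch; the exact output and the preferred place to
set the option depend on your installation):

# in the mysql client, check the current mode
mysql> SELECT @@global.sql_mode;

# if ANSI (or at least ANSI_QUOTES) is not listed, enable it in my.cnf
# and restart mysqld
[mysqld]
sql-mode = ANSI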

__Martin



[Bacula-users] using script output in fileset

2011-12-02 Thread Silver Salonen
Hi.

I'm trying to get my job to back up only files that match something
like *`date +%y%m%d`*.

It should be quite easy, but it doesn't seem to work.

I created a script /usr/local/etc/bacula/fileset.sh which echoes a
wildcard pattern containing today's date:
echo $1/*`date +%y%m%d`*

So /usr/local/etc/bacula/fileset.sh /path/to/backup gives me:
/path/to/backup/*111202*

The fileset:

Include {
  Options {
    Wild = |/usr/local/etc/bacula/fileset.sh /path/to/backup
  }
  Options {
    Exclude = yes
    RegexFile = .*
  }
  File = /path/to/backup
}

But estimate jobid=... listing gives only /path/to/backup as the 
only thing to be backed up.

However, I get the right files when I set: Wild = /path/to/backup/*111202*

What am I doing wrong? Why does it not work when the pattern comes from
the script?

-- 
Silver



Re: [Bacula-users] using script output in fileset

2011-12-02 Thread Konstantin Khomoutov
On Fri, 02 Dec 2011 13:57:42 +0200
Silver Salonen sil...@serverock.ee wrote:

 I'm trying to get my job backing up only files that would match smth 
 like *`date +%y%m%d`*.
 
 It should be quite easy, but it seems to not work.
 
 I created a script /usr/local/etc/bacula/fileset.sh which would echo 
 wildcards along with today's date:
 echo $1/*`date +%y%m%d`*
 
 So /usr/local/etc/bacula/fileset.sh /path/to/backup gives me:
 /path/to/backup/*111202*
 
 The fileset:
 
 Include {
Options {
  Wild = |/usr/local/etc/bacula/fileset.sh /path/to/backup
}
Options {
  Exclude = yes
  RegexFile = .*
}
File = /path/to/backup
 }
 
 But estimate jobid=... listing gives only /path/to/backup as the 
 only thing to be backed up.
 
 However, I get the right files when I set: Wild =
 /path/to/backup/*111202*
 
 What am I doing wrong? Why does that not work with executing the
 script?
From the Bacula Director manual, it does not appear that you can use the
"| ..." notation with Wild -- you can only do this with File, it seems.
So I'd go another route: make your script output everything that matches
your pattern, in the form Bacula expects from "| ...", and then just use
that with File.
The script should be something like this:
#!/bin/sh
# list, one entry per line, everything under $1 whose name contains today's date
ls -1 "$1"/*`date +%y%m%d`*
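
The FileSet could then reference the script with the pipe prefix, roughly
like this (the leading "|" makes the Director run the command and use its
output, one path per line, as the list of files to back up):

Include {
  File = "|/usr/local/etc/bacula/fileset.sh /path/to/backup"
}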



[Bacula-users] Searching through backed-up files

2011-12-02 Thread Marcello Romani
Hallo,
I've got a (hopefully not so) unusual problem.
I need to look for a text string inside some text files that I've backed
up with Bacula.
The first, naive solution that comes to mind is of course to extract a
portion of the backup archive (e.g. going back in time), one fd-client
at a time, do the search and then delete the restored files if the
string is not found.
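Roughly something like this (hypothetical client name, search string and
scratch path; the bconsole restore options may need adjusting for your
setup):

#!/bin/sh
# restore the client's most recent backup into a scratch directory
echo "restore client=somehost-fd where=/tmp/bacula-search select all done yes" | bconsole
# ...wait for the restore job to finish, then search and clean up
grep -rl 'needle' /tmp/bacula-search
rm -rf /tmp/bacula-search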
But this process would be highly time-consuming and inefficient, as far
as I can tell.

Searching around the 'net I found a potentially very useful tool named
BaculaFS. Unfortunately its requirements include pretty recent versions
of PostgreSQL (the DB I currently use) and other libraries.
Because of this I haven't been able to install it on my director host,
and if possible I'd like to avoid upgrading this host just for the sake
of installing BaculaFS...

Does anybody have a hint about how to solve this?

Thank you in advance to anyone who'll answer.

-- 
Marcello Romani



Re: [Bacula-users] using script output in fileset

2011-12-02 Thread Konstantin Khomoutov
On Fri, 2 Dec 2011 16:47:38 +0400
Konstantin Khomoutov flatw...@users.sourceforge.net wrote:

[...]
 From the manual regarding the Bacula Director, it does not follow
 that you can use | ... notation with Wild--you can only do this with
 File, it seems.
 So I'd go another route and make your script output everything that
 matched your pattern in a way Bacula expects | ... to work, and then
 just use that with File.
 The script should be something like this:
 #!/bin/sh
 ls -1 $1/*`date +%y%m%d`*
Well, on second thought, the right tool for the job is probably the
`find` utility:
#!/bin/sh
# print regular files directly under $1 whose names contain today's date
find "$1" -mindepth 1 -maxdepth 1 \
  -type f -name "*`date +%y%m%d`*" -print



Re: [Bacula-users] using script output in fileset

2011-12-02 Thread Silver Salonen
On 02.12.2011 17:30, Konstantin Khomoutov wrote:
 On Fri, 2 Dec 2011 16:47:38 +0400
 Konstantin Khomoutov flatw...@users.sourceforge.net wrote:

 [...]
  From the manual regarding the Bacula Director, it does not follow
 that you can use | ... notation with Wild--you can only do this with
 File, it seems.
 So I'd go another route and make your script output everything that
 matched your pattern in a way Bacula expects | ... to work, and then
 just use that with File.
 The script should be something like this:
 #!/bin/sh
 ls -1 $1/*`date +%y%m%d`*
 Well, on the second thought, the right tool for the job would rather
 be the `find` utility:
 #!/bin/sh
 find $1 -mindepth 1 -maxdepth 1 \
-type f -name *`date +%y%m%d`* -print
The problem is that the dir and fd are located on different machines, so I
can't use this method. Is there any other way to get dynamic content (the
current year+month+day) into the Wild parameter?

--
Silver



Re: [Bacula-users] tuning lto-4

2011-12-02 Thread gary artim
180 MB/s, 256MB min/max blocksize.

[root@genepi1 bacula]# tapeinfo -f /dev/nst0
Product Type: Tape Drive
Vendor ID: 'HP  '
Product ID: 'Ultrium 4-SCSI  '
Revision: 'B12H'
Attached Changer API: No
SerialNumber: 'HU17450M8L'
MinBlock: 1
MaxBlock: 16777215
SCSI ID: 1
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: Not Loaded
Density Code: 0x46
BlockSize: 0
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0x1
DeCompType: 0x1
Block Position: 471909
Partition 0 Remaining Kbytes: 799204
Partition 0 Size in Kbytes: 799204
ActivePartition: 0
EarlyWarningSize: 0
NumPartitions: 0
MaxPartitions: 0
[root@genepi1 bacula]#  btape -c /etc/bacula/bacula-sd.conf /dev/nst0
Tape block granularity is 1024 bytes.
btape: butil.c:284 Using device: /dev/nst0 for writing.
02-Dec 13:07 btape JobId 0: 3301 Issuing autochanger loaded? drive 0 command.
02-Dec 13:07 btape JobId 0: 3302 Autochanger loaded? drive 0, result
is Slot 12.
btape: btape.c:476 open device LTO-4 (/dev/nst0): OK
*speed file_size=20 nb_file=10 skip_raw
btape: btape.c:1082 Test with zero data and bacula block structure.
btape: btape.c:960 Begin writing 10 files of 21.47 GB with blocks of
524288 bytes.
+
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 177.4 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 180.4 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 178.9 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 180.4 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 178.9 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 180.4 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 180.4 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 178.9 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 180.4 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 178.9 MB/s
btape: btape.c:384 Total Volume bytes=214.7 GB. Total Write rate = 179.5 MB/s

btape: btape.c:1094 Test with random data, should give the minimum throughput.
btape: btape.c:960 Begin writing 10 files of 21.47 GB with blocks of
524288 bytes.
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 26.47 MB/s
++
btape: btape.c:608 Wrote 1 EOF to LTO-4 (/dev/nst0)
btape: btape.c:410 Volume bytes=21.47 GB. Write rate = 26.51 MB/s
++


On Wed, Nov 30, 2011 at 8:02 AM, gary artim gar...@gmail.com wrote:
 Hi --

 Getting about 41.6 MB/s and hoping for closer to the max (120 MB/s). I
 tried maximum file sizes of 5, 8 and 12 GB -- 12 GB was the best; the
 others were about 35 MB/s. Any advice welcome... should I look at
 max/min block sizes?
 Most of the data is big genetics data -- file sizes average 500 MB to
 3-4 GB -- and we're looking at growth from 4 TB to 15 TB in the next 2 years.

 run results and bacula-sd.conf and bacula-dir.conf below...

 thanks
 -- gary

 Run:
 ===

  Build OS:               x86_64-redhat-linux-gnu redhat
  JobId:                  5
  Job:                    Prodbackup.2011-11-29_19.32.42_05
  Backup Level:           Full
  Client:                 bacula-fd 5.0.3 (04Aug10) x86_64-redhat-linux-gnu,redhat,
  FileSet:                FileSetProd 2011-11-29 19:32:42
  Pool:                   FullProd (From Job FullPool override)
  Catalog:                MyCatalog (From Client resource)
  Storage:                LTO-4 (From Job resource)
  Scheduled time:         29-Nov-2011