On Thursday 20 September 2007 00:59, Arno Lehmann wrote:
> Hi,
>
> 18.09.2007 16:25, Marc Schiffbauer wrote:
> > * Chris Howells schrieb am 18.09.07 um 16:14 Uhr:
> >> Hi,
> >>
> >> Marc Schiffbauer wrote:
> >>
> >> Finally got around to messing around with bacula again...
> >>
> >>> The manual says that nnn being the same number for both settings
> >>> means "fixed" blocksize.
> >>>
> >>> As I understand it, your solution should be to just set the
> >>> "Minimum Block Size" so you get good performance.
> >>>
> >>> Minimum Block Size = 1048576
> >>
> >> Unfortunately, just setting a Minimum Block Size does not work. btape, for
> >> instance, will not work then; it dies with a glibc error. (See end of
> >> mail for the full trace.)
>
> Interesting. On a FreeBSD 7 system with Bacula 2.2.4 btape crashes
> when I use larger block sizes. I haven't found the actual limit, but
> 512k blocks work, 1MB sized ones don't.
Bacula currently limits block sizes to 1MB. This limit was implemented 7
years ago, when the fastest drive was a DLT. I will increase the limit, but I
am quite skeptical about writing block sizes of 1 or 2 megabytes. I believe
you may get a bit more speed, but you will probably increase the rate of
errors -- unless drive technology has progressed more than I am aware of.
By the way, increasing the Minimum Block Size is NOT the way to increase the
Maximum Block Size. In general, one should *never* set the minimum block size
unless you have an older, brain-damaged drive. By increasing the Minimum
Block Size you virtually guarantee that tape will be wasted for no good
reason. The way to increase the maximum block size is to use the Maximum
Block Size directive, which I previously thought was rather obvious ... oh well.
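For example, something like the following in the Device resource of
bacula-sd.conf is all that should be needed (the resource name, media type
and device path below are only placeholders for whatever you already have):

   Device {
     Name = Drive-1                  # placeholder -- your existing drive resource
     Media Type = AIT-5
     Archive Device = /dev/nst0
     Maximum Block Size = 524288     # 512K; values above 1000000 are currently rejected
   }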
>
> >> For instance with the following setting:
> >>
> >> Minimum Block Size = 256000
> >>
> >> [EMAIL PROTECTED]:/etc/bacula# btape -c bacula-sd.conf /dev/nst0
> >> <snip>
> >> test
> >> <snip>
> >> *** glibc detected *** malloc(): memory corruption: 0x080d9d90 ***
> >>
> >> Setting both a Minimum Block Size and a Maximum Block Size to the same
> >> value *does* seem to work with btape.
> >>
> >> BTW, I tried using 1048576. Unfortunately this does not work. From
> >> src/stored/dev.c:
> >>
> >>    if (dev->max_block_size > 1000000) {
> >>       Jmsg3(jcr, M_ERROR, 0, _("Block size %u on device %s is too large, using default %u\n"),
> >>             dev->max_block_size, dev->print_name(), DEFAULT_BLOCK_SIZE);
> >>
> >> Oops.
> >>
> >> Why can I not use > 1000000 bytes? This seems a *really* strange
> >> restriction. I can happily use blocks of several megabytes using tar.
>
> For current tape drives, we really need to support larger block sizes.
To the best of my knowledge we support sizes up to 1MB. I will increase it,
but as I mentioned, I am somewhat skeptical.
>
> I tested today with an AIT-5 tape drive.
>
> Throughput measured with dd, test file 8 GB in size, so I think we can
> ignore the effects of buffering and caches.
> disk -> /dev/null:          ~60MB/s
> disk -> tape, <64kB blocks:  ~5MB/s
> disk -> tape, 1MB blocks:   ~15MB/s
> disk -> tape, 2MB blocks:   ~20MB/s (close enough to the published specification)
If you want to measure throughput, please do so with Bacula. Though the
results may be the same, it would be a bad idea to base such important
considerations on tests that don't necessarily apply.
>
> Unfortunately, I could not test with 512k block sizes in btape - there
> were positioning errors during the test, while the default block sizes
> worked flawlessly. I don't know if more in-depth testing is possible, as
> this is a customer's system which should go into production some day soon.
Unless you have a critical speed problem, I don't particularly recommend
starting with block sizes other than the default. It would require a lot
more testing, and if you look at the *old* archives you will find a number of
comments from people who know kernels claiming that anything bigger than
64K blocks can create problems. Of course, those comments were made a long
time ago.
>
> Anyway, for decent performance on that system, block sizes well beyond
> 1MB should be used.
This has not been proved to me. To the best of my knowledge, no one has done
such testing with Bacula.
I would be interested to hear the experience of users who are running larger
block sizes: first, the throughput they get, but also the number of tape read
errors they see, and any problems associated with running multiple
simultaneous jobs.
> System is FreeBSD 7-current, AIT-5 tape drive in autochanger, 2GB RAM,
> and a reasonable disk subsystem. (Again, I don't have many details
> now, might get them later, but the key issue here is that Bacula
> should support larger tape block sizes.)
Feel free to push up the 1MB limit and test. I will increase the limit to 3MB
in the next release.
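Roughly speaking, that is just a matter of raising the constant in the check
from src/stored/dev.c quoted above; the rest of the block stays as it is:

   if (dev->max_block_size > 3000000) {      /* was 1000000 */
      Jmsg3(jcr, M_ERROR, 0, _("Block size %u on device %s is too large, using default %u\n"),
            dev->max_block_size, dev->print_name(), DEFAULT_BLOCK_SIZE);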
By the way, once you choose a block size, I don't believe that you can easily
switch to a smaller size -- Bacula will not be able to read the tapes unless
you have a special setup for those Volumes. I think you can safely increase
the block size, but I have never tested it, so this is also an important
point to consider.
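The "special setup" would presumably be a second Device resource that points
at the same physical drive but keeps the block size the old Volumes were
written with, roughly along these lines (names and values are only
illustrative):

   Device {
     Name = Drive-1-OldBlocks        # same physical drive, used only for the old Volumes
     Media Type = AIT-5
     Archive Device = /dev/nst0
     Maximum Block Size = 64512      # whatever size the old Volumes were written with
   }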
Regards,
Kern
>
> > Indeed.
> > I will discuss that on the devel list and/or maybe open a
> > bug report in the Bacula BTS.
> >
> > And btape crashing is a bug as well...
>
> Yes...
>
> > -Marc
>
> Arno