Wow.
Since setting the Maximum Block Size to 262144, and the Maximum File
Size to 4G, I am now getting 35MB/s which is up from 15-18MB/s. Now I
am not spooling, these are copy jobs, so I don't know if this makes a
difference.
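For reference, a minimal sketch of where those two directives live in bacula-sd.conf; the device name and archive path are placeholders, only the two tuning directives discussed above are the point:

```
# Hypothetical Device resource in bacula-sd.conf
Device {
  Name = LTO-Drive                # placeholder name
  Archive Device = /dev/nst0
  Maximum Block Size = 262144     # 256 KB hardware blocks
  Maximum File Size = 4G          # write a filemark every 4 GB
}
```

Note that, as others report elsewhere in this thread, changing Maximum Block Size can make tapes written with the old block size unreadable, so it is worth deciding on a value before building up a tape library.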
On 1/7/2010 4:46 PM, Jesper Krogh wrote:
05-Jan 00:37 bacula-sd
Thomas Mueller schrieb:
With
Maximum File Size = 5G
Maximum Block Size = 262144
Maximum Network Buffer Size = 262144
I get up to 150 MB/s while despooling to LTO-4 drives. Maximum File
Size gave me some extra MB/s; I think it's as important as the
Maximum Block Size.
Thomas Mueller wrote:
Maybe this is the reason for the extra MB/s.
Modifying the Maximum Block Size to more than 262144 didn't change much
here. But changing the File Size did. Much.
I found a post from Kern saying that Quantum told him that about 262144
is the best block size - increasing
On Tue, Jan 05, 2010 at 04:55:51PM -0500, John Drescher wrote:
I'm not seeing anywhere close to 60 MB/s (more like 30). I think I just
fixed that: I increased the block size to 1M, and that seemed to really
increase the throughput in the test I just did. I will see tomorrow,
when it all runs.
Tino Schwarze schrieb:
On Tue, Jan 05, 2010 at 04:55:51PM -0500, John Drescher wrote:
I'm not seeing anywhere close to 60 MB/s (more like 30). I think I just
fixed that: I increased the block size to 1M, and that seemed to really
increase the throughput in the test I just did. I will see
Hi there,
On Tue, Jan 05, 2010 at 07:30:44PM +0100, Tino Schwarze wrote:
It looks like btape is not happy.
Error reading block: ERR=block.c:1008 Read zero bytes at 326:0 on device
Superloader-Drive (/dev/nst0).
Are your tapes old (still good)? Did you clean the drive? Latest
On Wed, Jan 06, 2010 at 02:20:06PM +0100, Tino Schwarze wrote:
It looks like btape is not happy.
Error reading block: ERR=block.c:1008 Read zero bytes at 326:0 on
device Superloader-Drive (/dev/nst0).
Are your tapes old (still good)? Did you clean the drive? Latest
I believe they are rated for 250 complete passes.
On 1/6/2010 9:05 AM, Tino Schwarze wrote:
All the same config, just an older tape. :-( I'm not sure how often
it's been used because of our rather complicated multi-stage backup
scheme (daily/weekly/monthly/yearly pools). Is it possible that
That should not matter. I have read somewhere that Kern said that very
large block sizes can waste space; I do not know how this works, though.
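One way the waste claim could work (this is an assumption, not something the thread confirms): if each tape block is padded out to the configured Maximum Block Size, then the partially filled final block of a session wastes the remainder, and bigger blocks waste more. A sketch with made-up numbers:

```shell
block=1048576        # hypothetical 1 MB Maximum Block Size
payload=200000       # bytes actually present in the last block
waste=$((block - payload))
echo "wasted in final block: $waste bytes"
# → wasted in final block: 848576 bytes
```

With 256 KB blocks the worst case per session would be a quarter of that, which may be why 262144 keeps coming up as a sweet spot.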
On 1/6/2010 8:20 AM, Tino Schwarze wrote:
Is it possible that the low block size of 64k affects tape capacity? It
looks suspicious to me that all tapes end at about the same size...
Put a new tape in and run tapeinfo -f /dev/nst0; it should report
what block-size range your drive can support, along with lots of other
useful information.
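A small sketch of filtering that report down to just the block-size range. The MinBlock/MaxBlock field names and the sample values below are assumptions about the mtx-provided tapeinfo tool's output; check them against what your drive actually prints:

```shell
# Extract the supported block-size range from tapeinfo output.
# Real use:  tapeinfo -f /dev/nst0 | extract_block_range
extract_block_range() {
  awk -F': *' '/^(Min|Max)Block/ { print $1 "=" $2 }'
}

# Hypothetical sample output fed in for illustration:
printf 'MinBlock: 1\nMaxBlock: 16777215\n' | extract_block_range
# → MinBlock=1
# → MaxBlock=16777215
```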
Is it possible that the low block size of 64k affects tape capacity? It
looks suspicious to me that all tapes end at about the same size...
I have never seen it go below native capacity. I usually get around
a 1.5 to 1 compression ratio on my data, with outliers being close to
1.1 to 1 and 5.5 to 1.
I tried to do that years ago but I believe this made all tapes that
were already written to unreadable (and I now have 80) so I gave this
up. With my 5+ year old dual processor Opteron 248 server I get
25MB/s to 45MB/s despools (which measures the actual tape rate) for
my LTO2 drives.
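Since the despool rate is what measures the real tape speed, figures like these come from dividing the despooled byte count by the elapsed seconds reported in the job log. A sketch with made-up numbers:

```shell
bytes=32212254720    # hypothetical: 30 GiB despooled per the job log
seconds=900          # hypothetical: 15 minutes of despooling
echo "$((bytes / seconds / 1000000)) MB/s"
# → 35 MB/s
```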
Thomas Mueller schrieb:
I tried to do that years ago but I believe this made all tapes that
were already written to unreadable (and I now have 80) so I gave this
up. With my 5+ year old dual processor Opteron 248 server I get
25MB/s to 45MB/s despools (which measures the actual tape
With
Maximum File Size = 5G
Maximum Block Size = 262144
Maximum Network Buffer Size = 262144
I get up to 150 MB/s while despooling to LTO-4 drives. Maximum File
Size gave me some extra MB/s, I think it's as important as the
Maximum Block Size.
Thanks for providing this.
Hi there,
I'm struggling with my LTO3 autochanger (Quantum Superloader3). We're
using HP tapes of 400/800 GB capacity (uncompressed/compressed).
Everything has been running fine for about 3 years now (OS: OpenSuSE
10.2, package bacula-postgresql-2.2.5-1), but we're starting to really
fill our
It looks like btape is not happy.
Error reading block: ERR=block.c:1008 Read zero bytes at 326:0 on device
Superloader-Drive (/dev/nst0).
Are your tapes old (still good)? Did you clean the drive? Latest Firmware?
On 1/5/2010 9:06 AM, Tino Schwarze wrote:
Hi there,
I'm struggling with my
On Tue, Jan 5, 2010 at 12:37 PM, Brian Debelius
bdebel...@intelesyscorp.com wrote:
It looks like btape is not happy.
Error reading block: ERR=block.c:1008 Read zero bytes at 326:0 on device
Superloader-Drive (/dev/nst0).
Are your tapes old (still good)? Did you clean the drive? Latest
On Tue, Jan 05, 2010 at 12:48:53PM -0500, John Drescher wrote:
It looks like btape is not happy.
Error reading block: ERR=block.c:1008 Read zero bytes at 326:0 on device
Superloader-Drive (/dev/nst0).
Are your tapes old (still good)? Did you clean the drive? Latest Firmware?
The
I'm not seeing anywhere close to 60 MB/s (more like 30). I think I just
fixed that: I increased the block size to 1M, and that seemed to really
increase the throughput in the test I just did. I will see tomorrow,
when it all runs.
You should not be seeing any errors.
On 1/5/2010 1:30 PM, Tino
Brian Debelius wrote:
I'm not seeing anywhere close to 60 MB/s (more like 30). I think I just
fixed that: I increased the block size to 1M, and that seemed to really
increase the throughput in the test I just did. I will see tomorrow,
when it all runs.
Yes, if you aren't already, whenever
That would be 16M. Isn't the SD hard limited to 1M?
On 1/5/2010 3:00 PM, Phil Stracchino wrote:
Brian Debelius wrote:
I'm not seeing anywhere close to 60 MB/s (more like 30). I think I just
fixed that: I increased the block size to 1M, and that seemed to really
increase the throughput in the
On Tue, Jan 5, 2010 at 3:00 PM, Phil Stracchino ala...@metrocast.net wrote:
Brian Debelius wrote:
I'm not seeing anywhere close to 60 MB/s (more like 30). I think I just
fixed that: I increased the block size to 1M, and that seemed to really
increase the throughput in the test I just did. I will
Brian Debelius wrote:
That would be 16M. Isn't the SD hard limited to 1M?
The maximum possible size in bytes is 2,000,000, from the SD
Configuration Maximum Block Size directive.
Regards,
Richard