Hi,
I've tried to reproduce this problem, but I couldn't; it works in my setup,
which is two Gluster nodes running CentOS 6 with a distributed volume.
Here are the packages and settings:
[root@c6glfs01 ~]# rpm -qa "gluster*"
glusterfs-cli-3.6.2-1.el6.x86_64
glusterfs-libs-3.6.2-1.el6.x86_64
gl
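For readers reproducing a similar setup: a two-node distributed volume like the one described is typically created along these lines. This is a sketch only; the hostnames, volume name, and brick paths are placeholders, not the poster's actual configuration.

```
# Create and start a 2-node distributed volume (distributed is the
# default layout when no replica/stripe count is given).
gluster volume create testvol c6glfs01:/bricks/b1 c6glfs02:/bricks/b1
gluster volume start testvol

# Verify the layout and brick list:
gluster volume info testvol
```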
Hello Claude,
Our policy regarding binary package releases is that the project
publishes on download.bareos.org:
a) packages for every major release (so far 12.4, 13.2 and 14.2)
b) packages from nightly builds. These are built from the newest sources,
but packages published here have passed au
On Tuesday, 28 April 2015 at 09:34:53 UTC+2, philipp.storz wrote:
> Hello,
>
> we are using 1M blocksize ourselves with LTO5 and LTO6 and we do not have
> those problems.
>
> How did you configure the blocksizes? (Device or pool?)
>
> Best regards,
>
> Philipp
>
Hello Philipp,
I have confi
ors.
>>>
>>
>> Hi,
>> I am just setting up a Quantum Scalar i80 with LTO6. I have changed my
>> block size and I get the same errors.
>>
>> I will try to switch back to the default size and see if the
>> errors go away.
>>
>>
Rene Arnhold jena.de> writes:
>
> Sorry, I did not see this mail until now.
> And sorry about the misleading LTO5 in my first topic.
> We use LTO6 on the Bareos server.
>
> I have configured the max blocksize in the device section of my tape
> drives. At the moment I have switched back to the def
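For context, a device-section blocksize setting of the kind described above usually lives in the storage daemon's Device resource. The sketch below is illustrative only: the drive name, archive device path, and the 1M value mirror the thread's discussion, not the poster's actual configuration.

```
# bareos-sd.conf -- Device resource (illustrative values)
Device {
  Name = "LTO6-Drive"          # hypothetical drive name
  Media Type = LTO6
  Archive Device = /dev/nst0   # placeholder tape device path
  # 1M blocksize as discussed in this thread (1048576 bytes):
  Maximum Block Size = 1048576
  Minimum Block Size = 1048576
}
```

Note that tapes written with a non-default blocksize must be read back with matching settings, which is one reason switching back to the default is a common troubleshooting step.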