> write speed on small files.
>
>
> Gary Lloyd
>
> I.T. Systems:Keele University
> Finance & IT Directorate
> Keele:Staffs:IC1 Building:ST5 5NB:UK
> +44 1782 733063
> ________
>
> On 8 February 2
Hi,
There are a number of tweaks/hacks to make it better, but IMHO overall
performance with small files is still unacceptable for folders with
thousands of entries.
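Some of those tweaks are volume options along these lines (only a sketch; "myvol"
is a placeholder and the useful values depend on the workload):

gluster volume set myvol cluster.lookup-optimize on     # fewer DHT lookups per file
gluster volume set myvol network.inode-lru-limit 65536  # keep more inodes cached on the bricks
gluster volume set myvol performance.cache-size 1GB     # larger read cache on the client side
gluster volume set myvol client.event-threads 4         # more threads for many concurrent small requests
gluster volume set myvol server.event-threads 4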
If your shares are not too large to be placed on a single filesystem and you
still want to use Gluster - it is possible to run
Hello,
For offline migration you can use a storage domain of type Export, shared between
clusters. For online storage migration the source and destination storage have
to be present in the current cluster.
Regarding the different glusterfs versions - it should not be a problem because
oVirt uses vm im
Hi,
According to the man page for setfacl: for uid and gid you can specify either a
name or a number.
But actually the information will be stored in xattrs in the form of numbers,
afaik.
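For example (path and user name are made up), the name passed to setfacl comes back
as a numeric uid when you look at the on-disk xattr:

setfacl -m u:testuser:rw /data/file.txt
getfattr -n system.posix_acl_access -e hex /data/file.txt   # the hex dump contains the uid, not the name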
One way to solve your problem is consistent name/id mapping, which can be
achieved by using a directory serve
Hi,
I always thought that hardware RAID is a requirement for SDS, as it hides all the
dirty work with raw disks from software, which just cannot deal with all kinds
of hardware faults. If a disk starts to experience long delays, then after
about 7 seconds the RAID controller marks this disk as failed (thi
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Nov 17, 2016 at 11:35 PM, Olivier Lambert
>>>>>> wrote:
>>>>>>> It's planned to have an arbiter soon :) It was just preliminary
>>>>>
Hi,
I've tried Minio and Scality S3 (both as Docker containers). Neither of them gives
me more than 60 MB/sec for a single stream.
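For reference, a minimal way to run the Minio container on top of a gluster mount
looks roughly like this (mount point, credentials and port are illustrative, not the
exact setup):

docker run -d -p 9000:9000 \
  -e MINIO_ACCESS_KEY=testkey \
  -e MINIO_SECRET_KEY=testsecret \
  -v /mnt/glusterfs/minio:/data \
  minio/minio server /data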
--
Dmitry Glushenok
Jet Infosystems
> On 28 Sept. 2016, at 1:04, Gandalf Corvotempesta
> wrote:
>
> Anyone tried Minio as object storage over gluster?
> It's mostly a o
Hi,
Red Hat only supports XFS for some reason:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Installation_Guide/sect-Prerequisites1.html
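If I remember correctly, that guide also recommends formatting the bricks along these
lines (device name is a placeholder; the 512-byte inode size leaves room for gluster's
xattrs):

mkfs.xfs -i size=512 /dev/vg_bricks/lv_brick1
mount /dev/vg_bricks/lv_brick1 /bricks/brick1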
--
Dmitry Glushenok
Jet Infosystems
> On 26 Sept. 2016, at 14:26, Lindsay Mathieson
> wrote:
>
> On 26/09/2016 8:18 PM, Gandal
Hi,
It looks like for NFS you have to change nfs.rpc-auth-allow, not auth.allow
(which is for access by API). The docs for nfs.rpc-auth-allow state that "By
default, all clients are disallowed", but in fact the option has "all" as its
default value.
Regarding auth.allow and information disclosure usin
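A sketch of the two options side by side (volume name and network are placeholders):

gluster volume set myvol nfs.rpc-auth-allow 192.168.1.*   # clients allowed over NFS
gluster volume set myvol auth.allow 192.168.1.*           # clients allowed over the native protocol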
Hi,
It is because your switch is not performing round-robin distribution while
sending data to the server (probably it can't). Usually it is enough to configure
ip-port LACP hashing to evenly distribute traffic across all ports in the
aggregation. But any single TCP connection will still use only one
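On the Linux side the ip-port policy is the layer3+4 transmit hash; for an 802.3ad
bond it can be set roughly like this (ifcfg fragment, interface name is an assumption,
and the switch must be configured to hash the same way):

# /etc/sysconfig/network-scripts/ifcfg-bond0
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"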
You are right, stat triggers self-heal. Thank you!
--
Dmitry Glushenok
Jet Infosystems
> On 17 Aug. 2016, at 13:38, Ravishankar N wrote:
>
> On 08/17/2016 03:48 PM, Dmitry Glushenok wrote:
>> Unfortunately not:
>>
>> Remount FS, then access test file from second
cluster.granular-entry-heal no
[root@srv01 ~]#
--
Dmitry Glushenok
Jet Infosystems
> On 17 Aug. 2016, at 11:30, Ravishankar N wrote:
>
> On 08/17/2016 01:48 PM, Dmitry Glushenok wrote:
>> Hello Ravi,
>>
>> Thank you for r
file. Otherwise the file will be accessed from the good brick and self-healing will
not happen (just verified). Or by accessing did you mean something like touch?
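That is, the access has to go through the client mount, not the brick path, e.g.
(mount point is illustrative):

stat /mnt/test01/file.txt   # lookup via the client graph is what triggers the heal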
--
Dmitry Glushenok
Jet Infosystems
> On 17 Aug. 2016, at 4:24, Ravishankar N wrote:
>
> On 08/16/2016 10:44 PM, Dmitry
Hello,
While testing healing after a bitrot error it was found that self-healing cannot
heal files which were manually deleted from a brick. Gluster 3.8.1:
- Create volume, mount it locally and copy test file to it
[root@srv01 ~]# gluster volume create test01 replica 2 srv01:/R1/test01
srv02:/R1/t
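The usual commands to check and trigger healing in this situation look like this
(volume name as in the steps above):

[root@srv01 ~]# gluster volume heal test01 info   # list entries still needing heal
[root@srv01 ~]# gluster volume heal test01 full   # walk the whole volume and heal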
Hi,
Same problem on 3.8.1. Even on the loopback interface (traffic does not leave the
gluster node):
Writing locally to replica 2 volume (each brick is separate local RAID6): 613
MB/sec
Writing locally to 1-brick volume: 877 MB/sec
Writing locally to the brick itself (directly to XFS): 1400 MB/sec
Tests w
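A sequential-write test of this kind is typically something like the following
(an assumption, not necessarily the exact command used):

dd if=/dev/zero of=/mnt/test01/bigfile bs=1M count=10240 conv=fsync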
ok
Jet Infosystems
> On 15 June 2016, at 19:42, Gandalf Corvotempesta
> wrote:
>
> 2016-06-15 18:12 GMT+02:00 Dmitry Glushenok:
>> Hello.
>>
>> Maybe because of the current implementation of rotten-bit detection - one hash
>> for the whole file. Imagine a 40 GB VM i
Hello.
Maybe because of the current implementation of rotten-bit detection - one hash
for the whole file. Imagine a 40 GB VM image - a few parts of the image are modified
continuously (VM log files and application data are constantly changing). Those
writes make the checksum invalid and BitD has to recalcu