Thank you for the acknowledgement.
On Mon, Sep 4, 2017 at 6:39 PM, lejeczek wrote:
> yes, I see things got lost in transit, I said before:
>
> I did it the first time, and now it's not rejected.
> Now I'm restarting the fourth (newly added) peer's glusterd
> and... it seems to work. <- HERE! (even th
It is a sequential write with a 2 GB file size. The same behavior is
observed with 3.11.3 too.
On Thu, Sep 7, 2017 at 12:43 AM, Shyam Ranganathan wrote:
> On 09/06/2017 05:48 AM, Serkan Çoban wrote:
>>
>> Hi,
>>
>> Just did some ingestion tests to a 40-node 16+4 EC, 19 PB single volume.
>> 100 clients are writing
On 09/05/2017 02:07 PM, Serkan Çoban wrote:
For RPM packages you can use [1]; I just installed them without any problems.
It is taking time for packages to land in the CentOS Storage SIG repo...
Thank you for reporting this. The SIG does take a while to get updated
with the latest bits. We are looking at way
On 09/06/2017 05:48 AM, Serkan Çoban wrote:
Hi,
Just did some ingestion tests to a 40-node 16+4 EC, 19 PB single volume.
100 clients are writing, each with 5 threads, 500 threads in total.
With 3.10.5 each server has 800 MB/s of network traffic; the cluster total is 32 GB/s.
With 3.12.0 each server has 200 MB/s networ
Mh, I never had to do that and I never had that problem. Is that an
arbiter-specific thing? With replica 3 it just works.
On Wed, Sep 06, 2017 at 03:59:14PM -0400, Alastair Neil wrote:
> you need to set
>
> cluster.server-quorum-ratio 51%
>
> On 6 September 2017 at 10:12, Pavel Szal
you need to set
cluster.server-quorum-ratio 51%
On 6 September 2017 at 10:12, Pavel Szalbot wrote:
> Hi all,
>
> I have promised to do some testing and I finally found some time and
> infrastructure.
>
> So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created
> replicated volu
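Alastair's suggestion above is a single cluster-wide option. A minimal sketch, assuming standard Gluster CLI usage (the volume name `myvol` is a hypothetical placeholder; `all` is the real cluster-wide target):

```shell
# Set the server-quorum ratio cluster-wide ("all" applies to every volume).
gluster volume set all cluster.server-quorum-ratio 51%

# Server quorum only acts on volumes where it is enabled:
gluster volume set myvol cluster.server-quorum-type server
```

With a 51% ratio on a 3-node pool, glusterd stops the bricks on a node that loses contact with the other two, preventing split-brain writes.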
Hi all,
I have promised to do some testing and I finally found some time and
infrastructure.
So I have 3 servers with Gluster 3.10.5 on CentOS 7. I created a
replicated volume with arbiter (2+1) and a VM on KVM (via OpenStack)
with its disk accessed through gfapi. The volume group is set to virt
(gluster vo
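For reference, a replica-3 arbiter volume like the one described can be created roughly as follows; hostnames and brick paths are hypothetical placeholders:

```shell
# Create a 2+1 volume: two full data bricks plus one metadata-only arbiter.
gluster volume create vmstore replica 3 arbiter 1 \
    srv1:/bricks/vm/brick srv2:/bricks/vm/brick srv3:/bricks/vm/brick

# Apply the recommended VM-image option set (the "virt" group profile).
gluster volume set vmstore group virt

gluster volume start vmstore
```

The third brick listed becomes the arbiter and stores only file metadata, so it needs far less space than the two data bricks.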
Dear users,
I just started my first Gluster test volume using 3 servers (each server
contains 12 HDDs).
I would like to create a "distributed disperse volume" but I'm a little bit
confused about the right configuration schema that I should use.
Should I use JBOD disks? How many bricks to be defin
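To illustrate one possible layout (a sketch, not a recommendation): with 3 servers and 12 JBOD disks each, a 4+2 distributed-disperse volume can be built from 36 bricks, interleaved so that each 6-brick subvolume holds only 2 bricks per server and therefore survives one server failure. Hostnames and paths below are hypothetical:

```shell
# Build an interleaved brick list: disk 1 on srv1/srv2/srv3, then disk 2, ...
# Each consecutive group of 6 bricks forms one (4 data + 2 redundancy)
# subvolume spanning all 3 servers, 2 bricks per server.
BRICKS=""
for d in $(seq 1 12); do
  for s in 1 2 3; do
    BRICKS="$BRICKS srv$s:/bricks/disk$d/brick"
  done
done

# "force" acknowledges the warning about multiple bricks of one
# subvolume residing on the same server.
gluster volume create distdisp disperse-data 4 redundancy 2 $BRICKS force
gluster volume start distdisp
```

36 bricks divided into 6-brick subvolumes gives 6 disperse subvolumes distributed under one namespace.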
I tried to follow step from
https://wiki.centos.org/SpecialInterestGroup/Storage to install latest
gluster on the first node.
It installed 3.10 and not 3.11. I am not sure how to install 3.11 without
compiling it.
Then when I tried to start gluster on the node, the bricks were reported
down (the o
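For what it's worth, the Storage SIG pins its repos by release series, so installing a specific series looks roughly like this. The 3.11 release package name is an assumption (the 3.10 series used `centos-release-gluster310`); if the SIG has not yet published the 3.11 repo package, only 3.10 will be installable, which matches what you saw:

```shell
# Enable the CentOS Storage SIG repo for the desired GlusterFS series
# (centos-release-gluster311 is assumed to exist for the 3.11 series).
yum install -y centos-release-gluster311
yum install -y glusterfs-server
systemctl enable --now glusterd
```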
Hi,
Just did some ingestion tests to a 40-node 16+4 EC, 19 PB single volume.
100 clients are writing, each with 5 threads, 500 threads in total.
With 3.10.5 each server has 800 MB/s of network traffic; the cluster total is 32 GB/s.
With 3.12.0 each server has 200 MB/s of network traffic; the cluster total is 8 GB/s.
I did not ch
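The per-server and aggregate figures quoted are self-consistent (decimal units assumed), and amount to a 4x throughput regression between 3.10.5 and 3.12.0. A quick check:

```python
# Cross-check the reported throughput: 40 servers, per-server NIC traffic.
servers = 40

total_3105_gb = servers * 800 / 1000  # 800 MB/s each on 3.10.5
total_3120_gb = servers * 200 / 1000  # 200 MB/s each on 3.12.0

print(total_3105_gb)                  # 32.0 GB/s, as reported
print(total_3120_gb)                  # 8.0 GB/s, as reported
print(total_3105_gb / total_3120_gb)  # 4.0x regression
```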
Do you see any improvement with 3.11.1, as that has a patch that improves
perf for this kind of workload?
Also, could you disable eager-lock and check if that helps? I see that the
max time is being spent in acquiring locks.
-Krutika
On Wed, Sep 6, 2017 at 1:38 PM, Abi Askushi wrote:
> Hi Krutika,
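Disabling eager-lock as Krutika suggests is a single volume option; a sketch with a hypothetical volume name:

```shell
# Turn off eager locking for the replicate translator on volume "vmstore"
# (hypothetical name); re-enable with "on" after testing.
gluster volume set vmstore cluster.eager-lock off
```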
Hi Krutika,
Is there anything in the profile indicating what is causing this bottleneck?
In case I can collect any other info, let me know.
Thanx
On Sep 5, 2017 13:27, "Abi Askushi" wrote:
Hi Krutika,
Attached are the profile stats. I enabled profiling, then ran some dd tests.
Also, 3 Windows VMs are r
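For reproducibility, the profile + dd cycle can be sketched as below. The volume name is a hypothetical placeholder, and the dd command here writes only 64 MiB to the current directory so it runs anywhere; a real test against the Gluster mount would use a larger file (e.g. `bs=1M count=2048`) and `oflag=direct` to bypass the page cache:

```shell
# Start collecting per-brick op latency stats (run on a Gluster node):
# gluster volume profile vmstore start

# Sequential write test (point "of=" at the Gluster mount for a real run).
dd if=/dev/zero of=./ddtest bs=1M count=64

# Dump and stop profiling:
# gluster volume profile vmstore info   # per-op latency, call counts
# gluster volume profile vmstore stop
```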