Hi Rafi,
I have a test oVirt 4.3.9 cluster with Gluster v7.5 on CentOS 7.
Can you provide the RPMs, and I will try to test them.
Also, please share the switch that disables this behaviour (in case something
goes wrong).
Best Regards,
Strahil Nikolov
On 27 May 2020 14:54:34 GMT+03:00, RAFI KC
Also,
can you provide a ping between the nodes, so we get an idea of the latency
between the nodes.
I'm also interested in how much time it takes to run 'du' on the bricks.
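The timings being asked for could be gathered with a small sketch like the one below. The brick path and peer hostname are placeholders (not from the thread), and the ping line is left commented since it only makes sense between actual cluster nodes:

```shell
# Sketch for gathering per-brick 'du' time and inter-node latency.
# BRICK and PEER are placeholders -- adjust to your setup.
BRICK=${BRICK:-/tmp}           # replace with the brick mount point
PEER=${PEER:-gluster-node2}    # replace with another node's hostname

# Wall-clock time for a recursive 'du' of the brick:
start=$(date +%s)
du -s "$BRICK" >/dev/null 2>&1
end=$(date +%s)
elapsed=$((end - start))
echo "du on $BRICK took ${elapsed}s"

# On a real cluster, average RTT between nodes (uncomment to run):
# ping -c 10 "$PEER" | awk -F'/' 'END { print "avg rtt:", $5, "ms" }'
```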
Best Regards,
Strahil Nikolov
On 27 May 2020 10:27:34 GMT+03:00, Karthik Subrahmanya
wrote:
>Hi,
>
>Please provide
Sure, I have back-ported the patch to release-7. Now I will see how I
can build the RPMs.
On the other hand, if possible, could you please tell me about the
volume configuration and the volume tree structure? This will be more
helpful when you have a lot of files or the distribution count is high.
The shared-brick-count value indicates the number of bricks sharing a
filesystem. In your case it should be one, as all the bricks are on
different mount points. Can you please share the values of brick-fsid?
grep "brick-fsid" /var/lib/glusterd/vols//bricks/
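For illustration, here is a hedged sketch of how such a check could look. The directory layout, filenames, and values below are fabricated stand-ins for the real /var/lib/glusterd store of a node, not output from an actual cluster:

```shell
# Sketch: check shared-brick-count (from the brick volfiles) and
# brick-fsid (from the bricks/ store files). Everything under VOLDIR
# is a fabricated stand-in for /var/lib/glusterd/vols/<VOLNAME>/.
VOLDIR=$(mktemp -d)
mkdir "$VOLDIR/bricks"
printf 'option shared-brick-count 2\n' > "$VOLDIR/gv0.node1.data-brick.vol"
printf 'brick-fsid=64770\n'            > "$VOLDIR/bricks/node1:-data-brick"

# Bricks on separate filesystems should have distinct brick-fsid values
# and shared-brick-count 1; a higher count scales down df's output.
count=$(grep -h shared-brick-count "$VOLDIR"/*.vol | awk '{print $3}')
fsids=$(grep -h brick-fsid "$VOLDIR"/bricks/* | sort -u | wc -l)
echo "shared-brick-count=$count distinct-fsids=$fsids"
[ "$count" -eq 1 ] || echo "WARNING: df sizes will be divided by $count"
```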
I tried reproducing this issue in
Hi Felix,
If you already have a test cluster, please give me the Gluster version.
I will try to build an RPM, then you can just do an upgrade. If you don't
have a test cluster and you are creating a new one, then I will build a
new RPM against the latest head. This is easier for me as I
As far as I remember, there was no version update on the server. It
was definitely installed as version 7.
Shared bricks:
Server 1:
/var/lib/glusterd/vols/gv0/gv0.imagegluster1.data2-brick.vol:
option shared-brick-count 2
/var/lib/glusterd/vols/gv0/gv0.imagegluster1.data-brick.vol:option
Hi Petr,
what was the server version before upgrading to 7.2?
Can you please share the shared-brick-count values from brick volfiles from
all the nodes?
grep shared-brick-count /var/lib/glusterd/vols//*
On Wed, May 27, 2020 at 2:31 PM Petr Certik wrote:
> Hi everyone,
>
> we've been running a
Dear Rafi KC,
let's suppose I'm going to spend some time on testing. How would I install
glusterfs-server including your feature?
Maybe this is an easy procedure, but I am not familiar with it.
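For what it's worth, a test-node upgrade with locally built RPMs might look roughly like the sketch below. The package filenames and path are placeholders, since the actual RPMs would come from Rafi's build:

```shell
# Hypothetical upgrade procedure for patched test RPMs. RPM_DIR should
# point at the directory holding the downloaded packages; guarded so a
# dry run without RPMs does nothing destructive.
RPM_DIR=${RPM_DIR:-$(mktemp -d)}   # replace with the RPM download dir
if ls "$RPM_DIR"/glusterfs-*.rpm >/dev/null 2>&1; then
    systemctl stop glusterd
    yum install -y "$RPM_DIR"/glusterfs-*.rpm
    systemctl start glusterd
    msg="upgraded from $RPM_DIR"
else
    msg="no glusterfs RPMs found in $RPM_DIR"
fi
echo "$msg"
```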
Regards,
Felix
On 27/05/2020 07:56, RAFI KC wrote:
Hi All,
I have been working on a POC
Hi everyone,
we've been running a replicated volume for a while, with three ~1 TB
bricks. Recently we've added three more same-sized bricks, making it a
2 x 3 distributed replicated volume. However, even after rebalance,
the `df` command on a client shows the correct used/size percentage,
but
Result for a single `ls` on a directory with 10k directories inside (16x3
volume):

Configuration      Single Dir ls (seconds)
-----------------  -----------------------
Plain volume       -
Parallel-readdir   135
Proposed Solution  32.744

It is showing a 321% improvement.
On Wed, May 27, 2020 at 10:35 AM RAFI KC wrote:
> Hi Felix,
>
> Thanks for your mail. I will test it more to make sure that it doesn't
> break anything. Also I have added a configuration key for easier switching
> to the older code in case there is any problem. If you can help me in
> any
Hi Felix,
Thanks for your mail. I will test it more to make sure that it doesn't
break anything. Also, I have added a configuration key for easier
switching to the older code in case there is any problem. If you can
help me in any manner with testing or performance numbers, please let me
Hi,
Please provide the following information to understand the setup and debug
this further:
- Which version of Gluster are you using?
- 'gluster volume status atlassian' to confirm whether both bricks and SHDs
are up
- Complete output of 'gluster volume profile atlassian info' before running
'du'
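Assuming the volume name 'atlassian' from the thread, the three items could be collected with something like this sketch, guarded so it degrades gracefully on a machine without the gluster CLI:

```shell
# Sketch: collect the three diagnostics requested above. The volume
# name 'atlassian' comes from the thread; run this on a cluster node.
if command -v gluster >/dev/null 2>&1; then
    gluster --version | head -1                 # 1. Gluster version
    gluster volume status atlassian             # 2. bricks / SHD status
    gluster volume profile atlassian start      # enable profiling
    # ... run the slow 'du' on a client here, then capture:
    gluster volume profile atlassian info       # 3. profile output
    msg="diagnostics collected"
else
    msg="gluster CLI not found; run this on a cluster node"
fi
echo "$msg"
```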
Dear Rafi,
thanks for your effort. I think this is of great interest to many
Gluster users. Thus, I would really encourage you to
test and further improve this feature. Maybe it would be beneficial to
create a guideline for which things should be tested
to make this feature really ready for