Hi Henrik,
Thanks for providing the required outputs. See my replies inline.
On Thu, Dec 21, 2017 at 10:42 PM, Henrik Juul Pedersen wrote:
> Hi Karthik and Ben,
>
> I'll try and reply to you inline.
>
> On 21 December 2017 at 07:18, Karthik Subrahmanya
>
The Gluster community is pleased to announce the release of Gluster
3.13.1 (packages available at [1,2,3]).
Release notes for the release can be found at [4].
We still carry the following major issue, which is reported in the
release notes:
1.) - Expanding a gluster volume that is
Thanks wk, Jim and Shyam for the feedback. Let us go with `replica 2
arbiter 1` in that case.
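For illustration, the create command under that proposal would look roughly
like this (hostnames and brick paths are made up; if I remember right, the
currently released CLI expects `replica 3 arbiter 1`):
# gluster volume create demo-vol replica 2 arbiter 1 \
    server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/arb1
The third brick of each triple acts as the arbiter and holds only metadata.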
@Shyam, there are no plans to make the arbiter count more than 1, but I
think it is better to state that explicitly. If we are not going
with a new volume type but say `replica 2 arbiter` and give
Thanks for your response (6 months ago!) but I have only just got around to
following up on this.
Unfortunately, I had already copied and shipped the data to the second
datacenter before copying the GFIDs so I already stumbled before the first
hurdle!
I have been using the scripts in the
Hi Karthik and Ben,
I'll try and reply to you inline.
On 21 December 2017 at 07:18, Karthik Subrahmanya wrote:
> Hey,
>
> Can you give us the volume info output for this volume?
# gluster volume info virt_images
Volume Name: virt_images
Type: Replicate
Volume ID:
Could you please provide the following (rough command sketch below) -
1 - output of gluster volume heal <volname> info
2 - the client log from /var/log/glusterfs (the file named
mountpoint-volumename.log)
3 - output of gluster volume info
4 - output of gluster volume status
5 - Also, could you try unmounting the volume and mounting it again and
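For illustration, roughly the commands meant above (volume name, server and
mount path are placeholders):
# gluster volume heal <volname> info
# gluster volume info <volname>
# gluster volume status <volname>
# umount /mnt/<volname>
# mount -t glusterfs <server>:/<volname> /mnt/<volname>
After remounting, the matching client log under /var/log/glusterfs is the
one named after the mount point, e.g. mnt-<volname>.log.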
I'm seeing a similar issue. It appears that df on the client is reporting
the *brick* size instead of the total pool size. I think that this started
happening after one of my servers crashed due to a hardware issue.
I'm running a distributed / replicated volume, 2 boxes with 3 bricks each,
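In case it helps, a quick way to compare what the client sees with what the
bricks report (volume name and mount path below are placeholders):
# df -h /mnt/<volname>
# gluster volume status <volname> detail
The detail output lists total and free disk space per brick, so you can
check whether df on the client roughly matches the sum over the distribute
subvolumes or only a single brick.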
Hi,
After running rm -rf on a directory, the files under it got deleted, but
the directory itself was not deleted and was giving a stale file handle
error. After 18 minutes I was able to delete the directory. Could anyone
help me understand what could have happened, or when in general I would get such
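In case it helps the debugging, a rough check I would try (the brick path
below is made up) to see whether the directory still lingered on some of
the bricks:
# ls -ld /bricks/brick1/path/to/that/directory
# gluster volume heal <volname> info
If the directory exists on only some bricks, heal info may show it as
pending heal.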
Hi,
I'm also looking forward to the answer.
Right now, we are using 3.10 with nfs-ganesha HA.
Thank you
Renaud
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On behalf of Tomalak Geret'kal
Sent: 20 December 2017 12:24
To: gluster-users@gluster.org
Hi,
In your ganesha-ha.conf, do you have your virtual IP addresses set up
something like this:
VIP_tlxdmz-nfs1="192.168.22.33"
VIP_tlxdmz-nfs2="192.168.22.34"
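For reference, the surrounding part of a minimal ganesha-ha.conf might look
something like this (the cluster name and node names are guesses on my
side):
HA_NAME="ganesha-ha-cluster"
HA_CLUSTER_NODES="tlxdmz-nfs1,tlxdmz-nfs2"
VIP_tlxdmz-nfs1="192.168.22.33"
VIP_tlxdmz-nfs2="192.168.22.34"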
Renaud
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On behalf of Hetz Ben Hamo
Sent: 20