Due to this issue, along with a few other logging issues, we made a
glusterfs-5.5 release, which includes the fix for this particular crash.
Regards,
Amar
On Tue, 19 Mar, 2019, 1:04 AM, wrote:
On Tue, Mar 19, 2019 at 9:25 AM Jiffin Thottan wrote:
Thanks Valerio for sharing the information
- Original Message -
From: "Valerio Luccio"
To: "gluster-users"
Sent: Monday, March 18, 2019 8:37:46 PM
Subject: [Gluster-users] NFS export of gluster - solution
On Mon, Mar 18, 2019 at 1:21 PM 快乐 <994506...@qq.com> wrote:
> Three node: node1, node2, node3
>
> Steps:
>
> 1. gluster volume create volume_test node1:/brick1
> 2. gluster volume set volume_test cluster.server-quorum-ratio 51
> 3. gluster volume set volume_test cluster.server-quorum-type
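As an aside on step 2: with cluster.server-quorum-ratio set to 51 on a three-node pool, at least two servers must stay up for quorum. The sketch below is illustrative arithmetic only, assuming quorum holds when active/total ≥ ratio; it is not gluster's actual implementation:

```python
import math

def min_active_for_quorum(total_peers: int, ratio_percent: int) -> int:
    """Smallest active-peer count with active/total >= ratio_percent/100.

    Illustrative only; gluster's own quorum check may differ in detail.
    """
    return math.ceil(total_peers * ratio_percent / 100)

# With 3 nodes and a ratio of 51, two servers must stay up:
print(min_active_for_quorum(3, 51))  # -> 2
# A plain 50% ratio would let half of an even-sized pool suffice,
# which is why 51 is a common choice:
print(min_active_for_quorum(4, 50))  # -> 2
```

This also shows why a ratio just above 50 matters: it forces a strict majority in even-sized pools.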
Hello Ville-Pekka and list,
I believe we are experiencing similar gluster fuse client crashes on 5.3 as
mentioned here. This morning I made a post in that regard:
https://lists.gluster.org/pipermail/gluster-users/2019-March/036036.html
Has this "performance.write-behind: off" setting
Hello list,
We are having critical failures under load on CentOS 7 with glusterfs 5.3: our
servers lose their local mount point with the error "Transport endpoint
is not connected".
Not sure if it is related, but the logs are full of the following message:
[2019-03-18 14:00:02.656876]
So, I recently started NFS-exporting my gluster volume so that I could mount
it from a legacy Mac OS X server. Every 24-36 hours the export seemed to
freeze, causing the server to seize up. The ganesha log was filled with
errors related to RQUOTA. Frank Filz of the nfs-ganesha project suggested
that I try
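The excerpt cuts off before naming the suggested change, but nfs-ganesha does expose a knob to turn the RQUOTA service off; whether that is what was suggested here is an assumption. A minimal ganesha.conf fragment, assuming a recent nfs-ganesha release with the Enable_RQUOTA option:

```
# Sketch only: disable the RQUOTA service in ganesha.conf.
# Whether this matches the fix suggested in the thread is an assumption.
NFS_CORE_PARAM {
    Enable_RQUOTA = false;
}
```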
Hi Amar,
if you refer to this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1674225 : in the test
setup I haven't seen those entries while copying & deleting a few GBs
of data. For a final statement we have to wait until I have updated our
live gluster servers - could take place on Tuesday or
Performed some tests simulating the setup on OVS.
When using mode 6 I had mixed results for both scenarios (see below):
There were times when hosts were not able to reach each other (simple ping
tests) and other times when hosts were able to reach each other with ping
but
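For reference, "mode 6" is the Linux bonding driver's balance-alb mode. A minimal ifupdown sketch for such a bond (Debian-style configuration; the interface names, address, and slave NICs are hypothetical):

```
# Hypothetical sketch of a mode-6 (balance-alb) bond via ifupdown.
auto bond0
iface bond0 inet static
    address 192.0.2.10/24
    bond-slaves eno1 eno2
    bond-mode balance-alb
    bond-miimon 100
```

Note that balance-alb rebalances incoming traffic by rewriting ARP replies, which can plausibly account for intermittent reachability differences in simple ping tests like the ones described above.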
Hi Hu Bert,
Appreciate the feedback. Also, are the other outstanding log-related issues
fixed now?
-Amar
On Mon, Mar 18, 2019 at 3:54 PM Hu Bert wrote:
update: upgrade from 5.3 -> 5.5 in a replicate 3 test setup with 2
volumes done. In 'gluster peer status' the peers stay connected during
the upgrade, no 'peer rejected' messages. No cksum mismatches in the
logs. Looks good :-)
On Mon, 18 Mar 2019 at 09:54, Hu Bert wrote:
Good morning :-)
For Debian the packages are available:
https://download.gluster.org/pub/gluster/glusterfs/5/5.5/Debian/stretch/amd64/apt/pool/main/g/glusterfs/
I'll do an upgrade of a test installation 5.3 -> 5.5 and see if there
are any errors etc. and report back.
btw: no release notes for 5.4