healing process a lot of vms
> got i/o failure and timeouts.
> we never got this with the single zfs+nfs box.
>
> now we are planning 3 boxes; in each box, 2 HHHL NVMe 3.8TB
> Samsungs (special dev) + 8x 1.9TB storage as a ZFS box, so they will be glued
> together as 2+1 GlusterFS.
>
>
>
Hello Arman,
We have several volumes running all-flash bricks hosting VMs for
RHV/oVirt. As far as I know, there's no profile specifically for SSDs; we
just use the usual virt group for the volume, which has the essential
options for a volume used for VMs.
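Applying that profile is a single command (a sketch; the volume name is hypothetical):
gluster volume set VOLNAME group virt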
I have no experience with Gluster
ext boot fails
> > > again as does launching it on the other two).
> > >
> > > Based on feedback, I will not change the shard size at this time and
> > > will leave that for later. Some people suggest larger sizes but it isn't
> > > a universal suggestion. I
I would leave it at 64M on volumes with spindle disks, but on SSD
volumes I would increase it to 128M or even 256M; it varies from one
workload to another.
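For reference, the option itself looks like this (volume name hypothetical); note that a new size only applies to files created after the change:
gluster volume set VOLNAME features.shard-block-size 128MB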
On Wed, Jan 27, 2021 at 10:02 PM Erik Jacobson
wrote:
> > Also, I would like to point out that I have VMs with large disks 1TB and
>
I think the following messages are not harmful:
[2021-01-26 19:28:40.652898] W [MSGID: 101159]
[inode.c:1212:__inode_unlink] 0-inode: be318638-e8a0-4c6d-977d-7a937aa84806/48bb5288-e27e-46c9-9f7c-944a804df361.1: dentry not found in
48bb5288-e27e-46c9-9f7c-944a804df361
[2021-01-26 19:28:40.652975]
Hello Erik,
Anything in the logs of the fuse mount? Can you stat the file from the
mount? Also, an image being reported as only 64M makes me think of
sharding, as the default shard size is 64M.
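To rule that out, you could check the volume's shard settings and stat the image from the mount (a sketch; names hypothetical):
gluster volume get VOLNAME features.shard
gluster volume get VOLNAME features.shard-block-size
stat /mnt/VOLNAME/path/to/image.qcow2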
Do you have any clues on when this issue started to happen? Was there any
operation done to
Hello David,
Always keep both clients and servers running the same major version. For
RHV 4.2 I think the client is running Gluster 3.8, and in RHV 4.2.x it
was upgraded to 3.12; just check the client version inside one of the
RHV hosts with "gluster --version" so you know which version to use.
Hi,
Definitely, the Gluster docs are missing quite a bit regarding the available
options that can be used on volumes.
Not only that, there are some options that might corrupt data and do not
have proper documentation; for example, disabling sharding will lead to data
corruption, and I think it
Hello Martín,
Try disabling "performance.readdir-ahead"; we had a similar issue, and
turning it off solved it:
gluster volume set tapeless performance.readdir-ahead off
On Tue, Oct 27, 2020 at 8:23 PM Martín Lorenzo wrote:
> Hi Strahil, today we have the same
Hello
How do you keep track of the health status of your Gluster volumes: a
brick going down (crash, failure, shutdown), node failures, peering issues,
ongoing healing?
Gluster Tendrl is complex and sometimes broken, the Prometheus exporter is
still lacking, and gstatus is basic.
Currently, to
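For now, the plain CLI covers the basics (a sketch; volume name hypothetical):
gluster peer status
gluster volume status VOLNAME
gluster volume heal VOLNAME info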
ience?
> Thanks.
>
> [1] https://access.redhat.com/solutions/22231 (account required)
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=489889 (old, but I can
> not find anything newer)
>
> --
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it [1]
>
22, 2020 at 10:55 AM Gionatan Danti wrote:
> On 2020-06-21 at 20:41 Mahdi Adnan wrote:
> > Hello Gionatan,
> >
> > Using Gluster brick in a RAID configuration might be safer and
> > require less work from Gluster admins, but it is a waste of disk
> > space.
I think if it's reproducible then someone can look into it; can you list
the steps to reproduce it?
On Sun, Jun 21, 2020 at 9:12 PM Artem Russakovskii
wrote:
> There's been 0 progress or attention to this issue in a month on github or
> otherwise.
>
> Sincerely,
> Artem
>
> --
> Founder,
Hello Gionatan,
Using Gluster bricks in a RAID configuration might be safer and require
less work from Gluster admins, but it is a waste of disk space.
Gluster bricks are replicated "assuming you're creating a
distributed-replicate volume", so when a brick goes down, it should be easy to
recover it
My concern with the Glusterd2 deprecation is that it tried to implement and
fix several things we need in Gluster, and the promised features were not
carried over into Gluster afterward "better logging, Journal Based Replication".
We're running both Ceph and Gluster; while both solutions are great in
Hello,
I'm wondering what the current and future plans for the Gluster project
are overall. I see that the project is not as busy as it was before "at least
this is what I'm seeing": there are fewer blogs about the roadmap
or future plans of the project, the deprecation of Glusterd2, even Red
Hello,
We had a similar issue when we upgraded one of our clusters to 6.5 while
clients were running 4.1.5 and 4.1.9; both crashed after a few seconds of
mounting. We did not dig into the issue; instead, we upgraded the clients to
6.5 and it worked fine.
On Tue, Jan 28, 2020 at 1:35 AM Laurent
-boun...@gluster.org <gluster-users-boun...@gluster.org> on
behalf of Mahdi Adnan <mahdi.ad...@outlook.com>
Sent: Wednesday, January 17, 2018 9:50 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Gluster endless heal
Hi,
I have an issue with Gluster 3.8.14.
The cluster is 4 nodes with replica count 2. One of the nodes went offline for
around 15 minutes; when it came back online, self-heal triggered and it just
did not stop afterward. It's been running for 3 days now, maxing the bricks'
utilization without
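To judge whether a heal backlog like this is actually shrinking, these can be polled (a sketch; volume name hypothetical):
gluster volume heal VOLNAME info
gluster volume heal VOLNAME statistics heal-count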
Hi,
1. How often do you use the Gluster CLI? Is it a preferred method to manage
Gluster? It's the only way we manage our volumes.
2. What operations do you commonly perform using the CLI? Create, replace,
set, and healing info.
3. How intuitive/easy to use do you find the CLI? It's
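As examples of those everyday operations (a sketch; all names hypothetical):
gluster volume create VOLNAME replica 3 node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1
gluster volume replace-brick VOLNAME node1:/bricks/b1 node1:/bricks/b2 commit force
gluster volume set VOLNAME features.shard on
gluster volume heal VOLNAME info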
Hi,
We had issues with data corruption before, but with glusterfs 3.8.12 we tested
expanding a sharded volume and it worked fine, without issues.
Try expanding a test volume and see the results yourself; for me, it was 100%
reproducible.
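The expansion test itself is just (a sketch; names hypothetical; on a replicated volume, add bricks in multiples of the replica count):
gluster volume add-brick VOLNAME node5:/bricks/b1 node6:/bricks/b1
gluster volume rebalance VOLNAME start
gluster volume rebalance VOLNAME status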
--
Respectfully
Mahdi A. Mahdi
created from the
template, is this correct?
https://paste.fedoraproject.org/paste/qzHmK8t-eJHM3hcZBVs5Yw
--
Respectfully
Mahdi A. Mahdi
From: Krutika Dhananjay <kdhan...@redhat.com>
Sent: Monday, October 9, 2017 1:59 PM
To: Mahdi Adnan
Cc: Lindsay Mat
ober 6, 2017 7:39 AM
To: Lindsay Mathieson
Cc: Mahdi Adnan; gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.8.13 data corruption
Could you disable stat-prefetch on the volume and create another VM off that
template and see if it works?
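For reference, that's (volume name hypothetical):
gluster volume set VOLNAME performance.stat-prefetch off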
-Krutika
On Fri, Oct 6, 2017 at 8:28 AM, L
Hi,
We're running Gluster 3.8.13 replica 2 (SSDs); it's used as a storage domain for
oVirt.
Today, we found an issue with one of the VM templates; after deploying a VM
from this template it will not boot, it gets stuck mounting the root partition.
We've been using this template for months now and we
Hi,
Doing an online upgrade with replica 2 should be fine; I think there might be
something else causing the corruption.
--
Respectfully
Mahdi A. Mahdi
From: gluster-users-boun...@gluster.org on
behalf of Pavel Szalbot
Noted, many thanks
--
Respectfully
Mahdi A. Mahdi
From: Pranith Kumar Karampuri <pkara...@redhat.com>
Sent: Tuesday, July 11, 2017 6:41:28 AM
To: Mahdi Adnan
Cc: Pavel Szalbot; gluster-users
Subject: Re: [Gluster-users] Upgrading Gluster revision (
I upgraded from 3.8.12 to 3.8.13 without issues.
Two replicated volumes with an online update; we upgraded clients first, followed
by the servers' upgrade: "stop glusterd, pkill gluster*, update gluster*, start
glusterd, monitor healing process and logs, after completion proceed to the
other node"
Hi,
Why change to storhaug? And what's going to happen to the current setup if I
want to update Gluster to 3.11 or beyond?
--
Respectfully
Mahdi A. Mahdi
From: gluster-users-boun...@gluster.org on
behalf of Kaleb S.
Hi,
In general, and not specific to Gluster:
we used Teaming for some time and we switched back to Bonding because we had
issues with the load balancing of Teaming.
With Teaming the config was "LACP, eth, ipv4, ipv6"; the result was one interface
utilized more than the other one, and in some cases one
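An LACP bond with a layer3+4 transmit hash is the usual bonding counterpart; on CentOS 7 that is roughly (a sketch; all values assumed, not from the original thread):
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"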
June 6, 2017 9:17:40 AM
To: Mahdi Adnan
Cc: gluster-user; Gandalf Corvotempesta; Lindsay Mathieson; Kevin Lemonnier
Subject: Re: Rebalance + VM corruption - current status and request for feedback
Hi Mahdi,
Did you get a chance to verify this fix again?
If this fix works for you, is it OK if w
or directory]
Although the process went smoothly, I will run another extensive test tomorrow
just to be sure.
--
Respectfully
Mahdi A. Mahdi
From: Krutika Dhananjay <kdhan...@redhat.com>
Sent: Monday, May 29, 2017 9:20:29 AM
To: Mahdi Adnan
Cc: gluster-user; G
rning message"
VMs started to fail after rebalancing.
--
Respectfully
Mahdi A. Mahdi
From: Krutika Dhananjay <kdhan...@redhat.com>
Sent: Wednesday, May 17, 2017 6:59:20 AM
To: gluster-user
Cc: Gandalf Corvotempesta; Lindsay Mathieson; Kevin Lemonnier; Mah
Hi,
Still no RPMs in SIG repository.
--
Respectfully
Mahdi A. Mahdi
From: Niels de Vos <nde...@redhat.com>
Sent: Monday, May 22, 2017 3:26:02 PM
To: Atin Mukherjee
Cc: Mahdi Adnan; Vijay Bellur; gluster-user
Subject: Re: [Gluster-users] Rebalanc
A. Mahdi
From: Nithya Balachandran <nbala...@redhat.com>
Sent: Wednesday, May 24, 2017 8:16:53 PM
To: Mahdi Adnan
Cc: Mohammed Rafi K C; gluster-users@gluster.org
Subject: Re: [Gluster-users] Distributed re-balance issue
On 24 May 2017 at 22:45, Nithya Balach
Hi,
I have a distributed volume with 6 bricks, each having 5TB, and it's hosting large
qcow2 VM disks (I know it's not reliable but it's not important data).
I started with 5 bricks and then added another one, started the rebalance
process, and everything went well, but now I'm looking at the bricks' free
Good morning,
The SIG repository does not have the latest glusterfs 3.10.2.
Do you have any idea when it's going to be updated?
Is there any other recommended place to get the latest RPMs?
--
Respectfully
Mahdi A. Mahdi
From: Mahdi Adnan <mahdi
at.com>
Sent: Saturday, May 20, 2017 6:46:51 PM
To: Krutika Dhananjay
Cc: Mahdi Adnan; raghavendra talur; gluster-user
Subject: Re: [Gluster-users] Rebalance + VM corruption - current status and
request for feedback
On Sat, May 20, 2017 at 6:38 AM, Krutika Dhananjay
<kdhan...@redhat.com>
indsay Mathieson; Kevin Lemonnier; Mahdi Adnan
Subject: Rebalance + VM corruption - current status and request for feedback
Hi,
In the past couple of weeks, we've sent the following fixes concerning VM
corruption upon doing rebalance -
https://review.gluster.org/#/q/status:merged+project:glu
Okay so it's fixed by killing Gluster and rebooting the node again.
--
Respectfully
Mahdi A. Mahdi
From: gluster-users-boun...@gluster.org <gluster-users-boun...@gluster.org> on
behalf of Mahdi Adnan <mahdi.ad...@outlook.com>
Sent: Wednesday, May 3
Hi,
Same here; when I reboot the node I have to manually execute "pcs cluster start
gluster01", even though pcsd is already enabled and started.
Gluster 3.8.11
Centos 7.3 latest
Installed using CentOS Storage SIG repository
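If the intent is for the node to rejoin the cluster on boot, pcs can enable the cluster services themselves at startup, which is separate from enabling pcsd (a sketch):
pcs cluster enable --all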
--
Respectfully
Mahdi A. Mahdi
From:
Hi,
I have a 4 node Gluster volume, each has 24 SSD brick running Gluster 3.8.10
(two volumes), i updated one of the nodes to 3.8.11 and rebooted the node,
after it came back online the healing process started and it never ended.
It has been 24 hours and the healing is still going, gluster
I first encountered this bug about a year ago and lost more than 100 VMs.
Sharding is essential to VM datastores, and I think Gluster isn't that useful
without this feature for VMs.
I appreciate all the hard work the developers are putting into this bug, but I
think a warning in the CLI or something
Thank you guys.
I'll be testing this and will let you know if I have any issues.
--
Respectfully
Mahdi A. Mahdi
From: Pranith Kumar Karampuri <pkara...@redhat.com>
Sent: Saturday, April 22, 2017 3:06:20 PM
To: Ravishankar N
Cc: Mahdi Adnan; gluster
Thank you very much.
--
Respectfully
Mahdi A. Mahdi
From: Karthik Subrahmanya <ksubr...@redhat.com>
Sent: Wednesday, April 19, 2017 4:30:30 PM
To: Mahdi Adnan
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Replica 2 Quorum and arbiter
Hi,
Co
Hi,
We have a replica 2 volume and we have an issue with setting a proper quorum.
The volume is used as a datastore for VMware/oVirt; the current settings for the
quorum are:
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.server-quorum-ratio: 51%
Losing the first node which
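For what it's worth, the common way out of this replica 2 quorum bind is adding an arbiter brick, which provides real quorum without a third full copy of the data (a sketch; names hypothetical):
gluster volume add-brick VOLNAME replica 3 arbiter 1 node3:/bricks/arbiter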
Good to hear.
Eagerly waiting for the patch.
Thank you guys.
Get Outlook for Android<https://aka.ms/ghei36>
From: Krutika Dhananjay <kdhan...@redhat.com>
Sent: Monday, April 3, 2017 11:22:40 AM
To: Pranith Kumar Karampuri
Cc: Mahdi Adnan; g
Hi,
Do you guys have any update regarding this issue?
--
Respectfully
Mahdi A. Mahdi
From: Krutika Dhananjay <kdhan...@redhat.com>
Sent: Tuesday, March 21, 2017 3:02:55 PM
To: Mahdi Adnan
Cc: Nithya Balachandran; Gowdappa, Raghavendra; Susant Palai;
g
luster-users] Gluster 3.8.10 rebalance VMs corruption
To: Krutika Dhananjay
Cc: Mahdi Adnan, Gowdappa, Raghavendra, Susant Palai, gluster-users@gluster.org
List
Hi,
Do you know the GFIDs of the VM images which were corrupted?
Regards,
Nithya
On 20 March 2017 at 20:37, Krutika Dhananjay
han...@redhat.com>
Sent: Sunday, March 19, 2017 2:01:49 PM
To: Mahdi Adnan
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption
While I'm still going through the logs, just wanted to point out a couple of
things:
1. It is recommended that you use
2017 8:02:19 AM
To: Mahdi Adnan
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption
On Sat, Mar 18, 2017 at 10:36 PM, Mahdi Adnan
<mahdi.ad...@outlook.com> wrote:
Kindly, check the attached new log
Although I tested the patch before it got released, apparently it wasn't a
thorough test.
In Gluster 3.7.x I lost around 100 VMs; now in 3.8.x I just lost a few test VMs.
I hope there will be a fix soon.
--
Respectfully
Mahdi A. Mahdi
From:
Hi,
I have upgraded to Gluster 3.8.10 today and ran the add-brick procedure on a
volume containing a few VMs.
After the completion of the rebalance, I rebooted the VMs; some ran just
fine, and others just crashed.
Windows boots to recovery mode and Linux throws xfs errors and does not boot.
I ran
ull, where can
I check the current queue status?
--
Respectfully
Mahdi A. Mahdi
From: Nithya Balachandran <nbala...@redhat.com>
Sent: Thursday, March 2, 2017 8:32:52 AM
To: Soumya Koduri
Cc: Mahdi Adnan; gluster-users@gluster.org; Krutika Dhananjay; Frank
Hi,
We have a Gluster volume hosting VMs for ESXi, exported via Ganesha.
I'm getting the following messages in ganesha-gfapi.log and ganesha.log
=
[2017-02-28 07:44:55.194621] E [MSGID: 109040]
[dht-helper.c:1198:dht_migration_complete_check_task] 0-vmware2-dht:
: failed to lookup the
From: Krutika Dhananjay <kdhan...@redhat.com>
Sent: Monday, February 27, 2017 8:11:31 AM
To: Mahdi Adnan
Cc: Gandalf Corvotempesta; gluster-users@gluster.org
Subject: Re: [Gluster-users] Volume rebalance issue
I've attached the src tarball with the patches that fix this issue, applied o
Sent: Sunday, February 26, 2017 11:07:04 AM
To: Mahdi Adnan
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Volume rebalance issue
How did you replicate the issue?
Next week I'll spin up a gluster storage and I would like to try the same to
see the corruption and to test any patche
Hi,
Yes, I would love to try it out.
Steps to apply the patch would be highly appreciated.
--
Respectfully
Mahdi A. Mahdi
From: Krutika Dhananjay <kdhan...@redhat.com>
Sent: Sunday, February 26, 2017 5:37:11 PM
To: Mahdi Adnan
Cc: gluster-users@glust
Hi,
We have a volume of 4 servers, 8x2 bricks (Distributed-Replicate), hosting VMs
for ESXi. I tried expanding the volume with 8 more bricks, and after
rebalancing the volume, the VMs got corrupted.
Gluster version is 3.8.9 and the volume is using the default parameters of
group "virt" plus
Hi,
I have a question regarding disk preparation.
I have 4 nodes, each with 24 SSDs; I would like to know the best practice
for setting up the disks.
The pool will be used as a VMware datastore.
I'm planning on using each disk as a brick without LVM; the pool will be
distributed replicas with
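For per-disk brick preparation, plain XFS without LVM is common (a sketch; device and paths hypothetical):
mkfs.xfs -f -i size=512 /dev/sdb
mkdir -p /bricks/ssd01
mount /dev/sdb /bricks/ssd01
mkdir -p /bricks/ssd01/brick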
A. Mahdi
> Subject: Re: [Gluster-users] NFS-Ganesha lo traffic
> To: mahdi.ad...@outlook.com
> CC: gluster-users@gluster.org; nfs-ganesha-de...@lists.sourceforge.net
> From: skod...@redhat.com
> Date: Wed, 10 Aug 2016 11:05:50 +0530
>
>
>
> On 08/09/2016 09:06 PM
t;server ip address";
    volume = "home";
  }
  CLIENT {
    Clients = *;
    Access_Type = RW;
    Squash = None;
  }
}
On Tue, Aug 9, 2016 at 11:44 AM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote:
Hi,
Please post ganesha configuration file.
--
Respectfully
Hi,
Please post ganesha configuration file.
--
Respectfully
Mahdi A. Mahdi
From: corey.kov...@gmail.com
Date: Tue, 9 Aug 2016 11:24:58 -0600
To: gluster-users@gluster.org
Subject: [Gluster-users] Nfs-ganesha...
If not an appropriate place to ask, my apologies.
I have been trying
the steps
to recreate the issue, along with the relevant
information about volume configuration, logs, core, version etc, then it would
be good to track this issue through a bug report.
-Krutika
On Mon, Aug 8, 2016 at 8:56 PM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote:
Thank you ver
Hi,
I'm using NFS-Ganesha to access my volume; it's working fine for now, but I'm
seeing lots of traffic on the loopback interface, in fact the same amount
of traffic as on the bonding interface. Can anyone please explain to me why
this is happening? Also, I got the following error in the
haven't had the chance to look into this issue last week. Do you mind
raising a bug in upstream with all
the relevant information and I'll take a look sometime this week?
-Krutika
On Fri, Aug 5, 2016 at 11:58 AM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote:
Hi,
Yes, I got some me
2016 at 1:14 PM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote:
Hi,
Kindly check the following link for all 7 bricks logs;
https://db.tt/YP5qTGXk
--
Respectfully
Mahdi A. Mahdi
From: kdhan...@redhat.com
Date: Thu, 4 Aug 2016 13:00:43 +0530
Subject: Re: [Gluster-users] Glu
@gluster.org
Could you also attach the brick logs please?
-Krutika
On Thu, Aug 4, 2016 at 12:48 PM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote:
appreciate your help,
(gdb) frame 2
#2 0x7f195deb1787 in shard_common_inode_write_do
(frame=0x7f19699f1164, this=0x7f195802ac10) at shard.c:37
lso print the values of the following variables from the original
core:
i. i
ii. local->inode_list[0]
iii. local->inode_list[1]
-Krutika
On Wed, Aug 3, 2016 at 9:01 PM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote:
Hi,
Unfortunately no, but i can setup a test bench and see if it g
Hi,
Please attach the logs and "gluster volume info $VOLUMENAME" output here;
--
Respectfully
Mahdi A. Mahdi
> From: davy.croo...@smartbit.be
> To: gluster-users@gluster.org
> Date: Wed, 3 Aug 2016 13:01:36 +
> Subject: [Gluster-users] Glusterfs 3.7.13 node suddenly stops
ter-users] Gluster 3.7.13 NFS Crash
> To: mahdi.ad...@outlook.com
> CC: gluster-users@gluster.org
>
> 2016-08-03 22:33 GMT+02:00 Mahdi Adnan <mahdi.ad...@outlook.com>:
> > Yeah, only 3 for now running in 3 replica.
> > around 5MB (900 IOps) write and 3MB (250 IOps) r
] Gluster 3.7.13 NFS Crash
> To: mahdi.ad...@outlook.com
> CC: gluster-users@gluster.org
>
> 2016-08-03 21:40 GMT+02:00 Mahdi Adnan <mahdi.ad...@outlook.com>:
> > Hi,
> >
> > Currently, we have three UCS C220 M4, dual Xeon CPU (48 cores), 32GB of RAM,
> > 8
Hi,
I'm not an expert in Gluster, but I think it would be better to replace the
downed brick with a new one. Maybe start from here:
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-brick
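The replace itself is one command (a sketch; brick paths hypothetical):
gluster volume replace-brick VOLNAME node1:/bricks/old node1:/bricks/new commit force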
--
Respectfully
Mahdi A. Mahdi
Date: Wed, 3 Aug 2016
--
Respectfully
Mahdi A. Mahdi
> From: gandalf.corvotempe...@gmail.com
> Date: Wed, 3 Aug 2016 20:25:56 +0200
> Subject: Re: [Gluster-users] Gluster 3.7.13 NFS Crash
> To: mahdi.ad...@outlook.com
> CC: kdhan...@redhat.com; gluster-users@gluster.org
>
> 2016-08-03 17:02 GMT+02:00
@gluster.org
Do you have a test case that consistently recreates this problem?
-Krutika
On Wed, Aug 3, 2016 at 8:32 PM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote:
Hi,
So I have updated to 3.7.14 and I still have the same issue with NFS. Based on
what I have provided so far from logs and
variable 'odirect', do this:
(gdb) p odirect
and gdb will print its value for you in response.
-Krutika
On Mon, Aug 1, 2016 at 4:55 PM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote:
Hi,
How do I get the values of the variables below? I can't get the results from gdb.
--
Respectfully
Mahdi A. Mahdi
@gluster.org
Could you also print and share the values of the following variables from the
backtrace please:
i. cur_block
ii. last_block
iii. local->first_block
iv. odirect
v. fd->flags
vi. local->call_count
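In gdb against the core, that is just selecting the frame and printing each one, e.g.:
(gdb) frame 2
(gdb) p cur_block
(gdb) p last_block
(gdb) p local->first_block
(gdb) p odirect
(gdb) p fd->flags
(gdb) p local->call_count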
-Krutika
On Sat, Jul 30, 2016 at 5:04 PM, Mahdi Adnan <mahdi.ad...@outlook.com>
Hi,
I would really appreciate it if someone could help me fix my NFS crash; it's
happening a lot and causing lots of issues for my VMs. The problem is that every
few hours the native NFS crashes and the volume becomes unavailable from the
affected node unless I restart glusterd. The volume is used by VMware ESXi
Hi, I'm having issues with gluster NFS; it keeps crashing after a few
hours under medium load.
OS: CentOS 7.2
Gluster version 3.7.13
Gluster info;
Volume Name: vlm01
Type: Distributed-Replicate
Volume ID: eacd8248-dca3-4530-9aed-7714a5a114f2
Status: Started
Number of Bricks: 7 x 3 = 21
0 4312 offset: 0x0
requested: 0x200 read: 0x95
Respectfully
Mahdi A. Mahdi
Skype: mahdi.ad...@outlook.com
On 03/15/2016 03:06 PM, Mahdi Adnan wrote:
[2016-03-15 14:12:01.421615] I [MSGID: 109036]
[dht-common.c:8043:dht_log_new_layout_for_dir_se
On Tue, Mar 15, 2016 at 1:45 PM, Mahdi Adnan
<mahdi.ad...@earthlinktele.com>
wrote:
Okay, here's what i did;
Volume Name: v
Type: Distributed-Replicate
Volume ID: b348fd8e-b117-469d-bcc0-56a56bdfc930
Status: Started
Nu
te:
OK but what if you use it with replication? Do you still see the
error? I think not.
Could you give it a try and tell me what you find?
-Krutika
On Tue, Mar 15, 2016 at 1:23 PM, Mahdi Adnan
<mahdi.ad...@earthlinktele.com>
wrote:
Hi,
on it, and
enable sharding on it,
set the shard-block-size that you feel appropriate and then just start
off with VM image creation etc.
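Concretely, that's (a sketch; volume name and size hypothetical):
gluster volume set VOLNAME features.shard on
gluster volume set VOLNAME features.shard-block-size 64MB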
If you run into any issues even after you do this, let us know and
we'll help you out.
-Krutika
On Tue, Mar 15, 2016 at 1:07 PM, Mahdi Adnan
<mahdi
On Mon, Mar 14, 2016 at 3:17 PM, Mahdi Adnan
<mahdi.ad...@earthlinktele.com>
wrote:
Sorry for serial posting, but I got new logs that might help.
The messages appear during the migration:
/var/log/glusterfs/nfs.log
[2016
45:05.079657] E [MSGID: 112069]
[nfs3.c:3649:nfs3_rmdir_resume] 0-nfs-nfsv3: No such file or directory:
(192.168.221.52:826) testv : ----0001
Respectfully
Mahdi A. Mahdi
On 03/14/2016 11:14 AM, Mahdi Adnan wrote:
So I have deployed a new server "Cisco UCS C2
ahead: off
performance.quick-read: off
performance.readdir-ahead: off
Same error.
Can anyone share with me the info of a working striped volume?
On 03/14/2016 09:02 AM, Mahdi Adnan wrote:
I have a pool of two bricks in the same server;
Volume Name: k
Type: Stripe
Volume ID: 1e9281ce-2a8b-44e8-a0c6-e3
o cp
them to a temp name within the volume, and then rename them back to the
original file name.
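A sketch of that copy-and-rename, with a hypothetical image name:
cp /mnt/VOLNAME/vm1.img /mnt/VOLNAME/vm1.img.tmp
mv /mnt/VOLNAME/vm1.img.tmp /mnt/VOLNAME/vm1.img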
HTH,
Krutika
On Sun, Mar 13, 2016 at 11:49 PM, Mahdi Adnan <mahdi.ad...@earthlinktele.com
wrote:
I couldn't find anything related to cache in the HBAs.
What logs are useful in my case? I see only bricks l
My setup is 2 servers with a floating ip controlled by CTDB and my ESXi
server mount the NFS via the floating ip.
On 03/13/2016 08:40 PM, pkoelle wrote:
Am 13.03.2016 um 18:22 schrieb David Gossage:
On Sun, Mar 13, 2016 at 11:07 AM, Mahdi Adnan
<mahdi.ad...@earthlinktele.com
w
e, neither sharding
nor striping works for me.
I did follow up with some of the threads in the mailing list and tried some
of the fixes that worked for others; none worked for me. :(
On 03/13/2016 06:54 PM, David Gossage wrote:
On Sun, Mar 13, 2016 at 8:16 AM, Mahdi Adnan
size: 16MB
features.shard: on
performance.readdir-ahead: off
On 03/12/2016 08:11 PM, David Gossage wrote:
On Sat, Mar 12, 2016 at 10:21 AM, Mahdi Adnan
<mahdi.ad...@earthlinktele.com>
wrote:
Both servers have HBAs, no RAID, and I
a replicated striped) and again same
thing, data corruption.
On 03/12/2016 07:02 PM, David Gossage wrote:
On Sat, Mar 12, 2016 at 9:51 AM, Mahdi Adnan
<mahdi.ad...@earthlinktele.com>
wrote:
Thanks David,
My settings are all defaults,
performance.quick-read: off
performance.readdir-ahead: on
On 03/12/2016 03:25 PM, David Gossage wrote:
On Sat, Mar 12, 2016 at 1:55 AM, Mahdi Adnan
<mahdi.ad...@earthlinktele.com>
wrote:
Dear all,
I have created a replicated striped vol
Appreciate your help.
Respectfully
Mahdi Adnan
System Admin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users