Please bring down all glusterd instances before correcting the files. In
other words, first stop the glusterd service, repair the files (removing
unwanted entries), ensure the peer files' content is consistent across all
nodes, and then start the glusterd service on each node, one by one.
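A minimal sketch of that sequence, assuming systemd-managed glusterd and three hypothetical nodes node1..node3:

```
# stop glusterd on every node first (node1..node3 are hypothetical names)
for n in node1 node2 node3; do ssh "$n" systemctl stop glusterd; done

# on each node, review and repair the peer files before starting anything
ls /var/lib/glusterd/peers/
cat /var/lib/glusterd/peers/*      # one file per peer: uuid=..., state=..., hostname1=...

# once the peer files agree everywhere, bring glusterd back one node at a time
for n in node1 node2 node3; do ssh "$n" systemctl start glusterd; done
gluster peer status                # every peer should report "Peer in Cluster (Connected)"
```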
On Fri, 3 Jan
This is definitely a good start. In fact, the experiment you have done, which
indicates a 20% improvement in runtime performance without the logger, puts
this work in the ‘worth a try’ category for sure. The only thing we need to be
mindful of here is the ordering of the logs to be provided, either through a
On Mon, Aug 26, 2019 at 11:18 AM Rinku Kothiya wrote:
> Hi,
>
> Release-7 RC0 packages are built. This is a good time to start testing the
> release bits, and reporting any issues on bugzilla.
> Do post on the lists any testing done and feedback for the same.
>
> We have about 2 weeks to GA of
No, it's not production ready.
On Thu, Jul 18, 2019 at 1:33 PM deepu srinivasan wrote:
> Hi Users
> Is it safe to use glusterd2 for production?
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
>
Please see - https://bugzilla.redhat.com/show_bug.cgi?id=1655827
On Wed, Jun 19, 2019 at 5:52 PM Olaf Buitelaar
wrote:
> Dear All,
>
> Has anybody seen this error on gluster 5.6;
> [glusterd-rpc-ops.c:1388:__glusterd_commit_op_cbk]
> (-->/lib64/libgfrpc.so.0(+0xec60) [0x7fbfb7801c60]
>
Please see https://bugzilla.redhat.com/show_bug.cgi?id=1696147, which is
fixed in 5.6. Although it's a race, I believe you're hitting this. The bug
title refers to the shd + brick multiplexing combination, but it's
applicable to bricks too.
On Fri, Jun 14, 2019 at 2:07 PM David Spisla
On Wed, May 8, 2019 at 9:45 AM Atin Mukherjee wrote:
>
>
> On Wed, May 8, 2019 at 12:08 AM Vijay Bellur wrote:
>
>>
>>
>> On Tue, May 7, 2019 at 11:15 AM FNU Raghavendra Manjunath <
>> rab...@redhat.com> wrote:
>>
>>>
>>>
On Wed, May 8, 2019 at 12:08 AM Vijay Bellur wrote:
>
>
> On Tue, May 7, 2019 at 11:15 AM FNU Raghavendra Manjunath <
> rab...@redhat.com> wrote:
>
>>
>> + 1 to this.
>>
>
> I have updated the footer of gluster-devel. If that looks ok, we can
> extend it to gluster-users too.
>
> In case of a
On Sat, 27 Apr 2019 at 20:36, Hetz Ben Hamo wrote:
> Hi,
>
> I've looked at a YouTube video about Gluster volumes creation. The video
> is here:
> https://www.youtube.com/watch?v=9SRsvFZZa5E
>
> One thing that is weird to me is this: the guy creates a volume of replica
> 2, where each brick is
9-04-15 14:00:33.997004] I [MSGID: 106578]
> [glusterd-brick-ops.c:1364:glusterd_op_perform_add_bricks] 0-management:
> type is set 0, need to change it
>
> [2019-04-15 14:00:34.013789] I [MSGID: 106132]
> [glusterd-proc-mgmt.c:84:glusterd_proc_stop] 0-management: nfs already
> stop
On Fri, 12 Apr 2019 at 22:32, Boris Goldowsky wrote:
> I’ve got a replicated volume with three bricks (“1x3=3”), the idea is to
> have a common set of files that are locally available on all the machines
> (Scientific Linux 7, which is essentially CentOS 7) in a cluster.
>
>
>
> I tried to add
On Thu, 4 Apr 2019 at 22:10, Darrell Budic wrote:
> Just the glusterd.log from each node, right?
>
Yes.
>
> On Apr 4, 2019, at 11:25 AM, Atin Mukherjee wrote:
>
> Darell,
>
> I fully understand that you can't reproduce it and you don't have
> bandwidth to test it a
>> # option base-port 49152
>> option max-port 60999
>> end-volume
>>
>> the only thing I found in the glusterd logs that looks relevant was
>> (repeated for both of the other nodes in this cluster), so no clue why it
>> happened:
>> [2019-04-03 20:19:16.8026
On Mon, 1 Apr 2019 at 10:28, Hari Gowtham wrote:
> Comments inline.
>
> On Mon, Apr 1, 2019 at 5:55 AM Sankarshan Mukhopadhyay
> wrote:
> >
> > Quite a considerable amount of detail here. Thank you!
> >
> > On Fri, Mar 29, 2019 at 11:42 AM Hari Gowtham
> wrote:
> > >
> > > Hello Gluster users,
On Sat, 30 Mar 2019 at 08:06, Vijay Bellur wrote:
>
>
> On Fri, Mar 29, 2019 at 6:42 AM Atin Mukherjee
> wrote:
>
>> All,
>>
>> As many of you already know that the design logic with which GlusterD
>> (here on to be referred as GD1) was implemen
All,
As many of you already know, the design logic with which GlusterD (hereafter
referred to as GD1) was implemented has some fundamental scalability
bottlenecks at the design level, especially around its way of handshaking
configuration metadata and replicating it across all the peers.
On Fri, Mar 29, 2019 at 12:47 PM Krutika Dhananjay
wrote:
> Questions/comments inline ...
>
> On Thu, Mar 28, 2019 at 10:18 PM wrote:
>
>> Dear All,
>>
>> I wanted to share my experience upgrading from 4.2.8 to 4.3.1. While
>> previous upgrades from 4.1 to 4.2 etc. went rather smooth, this one
On Wed, 27 Mar 2019 at 16:02, Riccardo Murri
wrote:
> Hello Atin,
>
> > Check cluster.op-version, peer status, volume status output. If they are
> all fine you’re good.
>
> Both `op-version` and `peer status` look fine:
> ```
> # gluster volume get all cluster.max-op-version
> Option
On Wed, 27 Mar 2019 at 15:24, Riccardo Murri
wrote:
> I managed to put the reinstalled server back into connected state with
> this procedure:
>
> 1. Run `for other_server in ...; do gluster peer probe $other_server;
> done` on the reinstalled server
> 2. Now all the peers on the reinstalled
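For illustration, the loop in step 1 above with hypothetical peer names filled in:

```
# run on the reinstalled server; server2..server4 are hypothetical peer names
for other_server in server2 server3 server4; do
    gluster peer probe "$other_server"
done
gluster peer status   # check that the peers move to "Peer in Cluster (Connected)"
```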
If you were on rc0 and upgraded to rc1, then you are hitting BZ 1684029 I
believe. Can you please upgrade all the nodes to rc1, bump up the
op-version to 6 (if not already done) and then restart glusterd
services to see if the peer rejection goes away?
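A sketch of that sequence once the packages are on RC1 (60000 is an assumed op-version value for release 6, following the same numbering as the 40100 value for 4.1 seen elsewhere in these threads):

```
# on every node, after the packages are upgraded to RC1:
systemctl restart glusterd

# from any one node, check what the cluster can support, then bump the op-version
gluster volume get all cluster.max-op-version
gluster volume set all cluster.op-version 60000   # assumed numeric value for release 6
gluster peer status                               # peers should no longer show "Rejected"
```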
On Thu, Mar 14, 2019 at 7:51 AM
On Mon, 4 Mar 2019 at 20:33, Amar Tumballi Suryanarayan
wrote:
> Thanks to those who participated.
>
> Update at present:
>
> We found 3 blocker bugs in upgrade scenarios, and hence have marked release
> as pending upon them. We will keep these lists updated about progress.
I’d like to clarify
On Sat, 9 Feb 2019 at 04:52, John Quinoz wrote:
> Hello Board!
>
> Was wondering if someone could help me out.
>
> I'm learning gluster and have inherited a server running it.
>
> Running the gluster peer status cmd on any of the four nodes does not
> provide any results.
>
> Each server has two
On Tue, Feb 5, 2019 at 8:43 PM Nithya Balachandran
wrote:
>
>
> On Tue, 5 Feb 2019 at 17:26, deepu srinivasan wrote:
>
>> HI Nithya
>> We have a test gluster setup.We are testing the rebalancing option of
>> gluster. So we started the volume which have 1x3 brick with some data on it
>> .
>>
performed in the cluster, it'd be difficult to figure out the exact cause.
On Wed, Jan 30, 2019 at 7:25 PM Amudhan P wrote:
> Hi Atin,
>
> yes, it worked out thank you.
>
> what would be the cause of this issue?
>
>
>
> On Fri, Jan 25, 2019 at 1:56 PM Atin Mukherjee
>
On Tue, Jan 29, 2019 at 8:52 PM David Spisla wrote:
> Hello Gluster Community,
>
> in glusterd.vol are parameters to define the port range for the bricks.
> They are commented out per default:
>
> # option base-port 49152
> # option max-port 65535
> I assume that glusterd is not using this
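For reference, a hedged sketch of what that section of glusterd.vol looks like with the range uncommented; the file path and port values are assumptions, and glusterd reads this file only at startup, so it needs a restart afterwards:

```
# /etc/glusterfs/glusterd.vol (assumed default path) -- excerpt
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option base-port 49152    # lowest port handed out to newly started bricks
    option max-port 60999     # upper end of the brick port range
end-volume
```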
m node is node3 IP 10.1.2.3, `glusterd` log file is inside node3
> folder.
>
> regards
> Amudhan
>
> On Wed, Jan 23, 2019 at 11:02 PM Atin Mukherjee
> wrote:
>
>> Amudhan,
>>
>> I see that you have provided the content of the configuration of the
>> vol
Amudhan,
I see that you have provided the content of the configuration of the volume
gfs-tst, whereas the request was to share the dump of /var/lib/glusterd/* . I
cannot debug this further until you share the correct dump.
On Thu, Jan 17, 2019 at 3:43 PM Atin Mukherjee wrote:
> Can you ple
>
> regards
> Amudhan
>
> On Thu, Jan 17, 2019 at 3:43 PM Atin Mukherjee
> wrote:
>
>> Can you please run 'glusterd -LDEBUG' and share back the glusterd.log?
>> Instead of doing too many back and forth I suggest you to share the content
>> of /var/lib/glusterd f
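A sketch of what is being asked for above; the log path is an assumption for a default install:

```
systemctl stop glusterd
glusterd -LDEBUG                                   # start glusterd with DEBUG logging, as requested
tail -n 500 /var/log/glusterfs/glusterd.log        # assumed default glusterd log location
tar czf glusterd-config.tar.gz /var/lib/glusterd   # the configuration dump requested above
```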
nd_exit]
> (-->/usr/local/sbin/glusterd(glusterfs_volumes_init+0xc2) [0x409f52]
> -->/usr/local/sbin/glusterd(glusterfs_process_volfp+0x151) [0x409e41]
> -->/usr/local/sbin/glusterd(cleanup_and_exit+0x5f) [0x40942f] ) 0-:
> received signum (-1), shutting down
>
>
> On Thu, Jan
On Wed, Jan 16, 2019 at 9:48 PM David Spisla wrote:
> Dear Gluster Community,
>
> I created a replica 4 volume from gluster-node1 on a 4-node cluster with
> SSL/TLS network encryption. While setting the 'cluster.use-compound-fops'
> option, I got the error:
>
> $ volume set: failed: Commit
g replace-brick or heal begins.
>
> On Wed, Jan 16, 2019 at 5:05 PM Atin Mukherjee
> wrote:
>
>>
>>
>> On Wed, Jan 16, 2019 at 5:02 PM Amudhan P wrote:
>>
>>> Atin,
>>> I have copied the content of 'gfs-tst' from vol folder in another node.
>
SGID: 101176]
> [graph.c:738:glusterfs_graph_activate] 0-graph: init failed
> [2019-01-15 20:17:00.693004] W [glusterfsd.c:1514:cleanup_and_exit]
> (-->/usr/local/sbin/glusterd(glusterfs_volumes_init+0xc2) [0x409f52]
> -->/usr/local/sbin/glusterd(glusterfs_process_volfp+0x151)
This is a case of a partial write of a transaction: as the host ran out of
space on the root partition, where all the glusterd-related configuration is
persisted, the transaction couldn't be written, and hence the new
(replaced) brick's information wasn't persisted in the configuration. The
Today, we are announcing the availability of GCS (Gluster Container
Storage) 0.5.
Highlights and updates since v0.4:
- GCS environment updated to kube 1.13
- CSI deployment moved to 1.0
- Integrated Anthill deployment
- Kube & etcd metrics added to prometheus
- Tuning of etcd to increase
On Fri, 21 Dec 2018 at 15:54, Anand Malagi wrote:
> Hi Friends,
>
>
>
> Please note that when the replace-brick operation was tried for one of the
> bad bricks present in a distributed disperse EC volume, the command actually
> failed, but the brick daemon of the newly replaced brick came online.
>
This is
We've decided to delay the GCS 0.5 release and postpone it by a few days (new
date: 1st week of Jan), considering that (a) most of the team members are out
on holidays and (b) some of the critical issues/PRs from [1] are yet to be
addressed.
Regards,
GCS team
[1] https://waffle.io/gluster/gcs?label=GCS%2F0.5
Today, we are announcing the availability of GCS (Gluster Container
Storage) 0.4. The release was a bit delayed to address some of the critical
issues identified. This release brings in a good amount of bug fixes along
with some key feature enhancements in GlusterD2. We’d request all of you to
try
Even though the subject says the issue is with glusterd, I think the
question is more applicable to heal/shards. Added the relevant folks to
help out.
On Mon, Dec 10, 2018 at 3:43 PM Chris Drescher wrote:
> Let me provide more information.
>
> We have 3 gluster nodes running with sharding
On Thu, Nov 22, 2018 at 3:30 AM mabi wrote:
> Hello,
>
> I would like to know if by increasing the op-version of all my GlusterFS
> volumes from its current version 31202 to 40100 by using the following
> command:
>
> gluster volume set all op-version 40100
>
> Will my clients using GlusterFS
On Mon, Nov 26, 2018 at 8:21 AM Atin Mukherjee wrote:
>
>
> On Sun, Nov 25, 2018 at 8:40 PM Jeevan Patnaik
> wrote:
>
>> Hi,
>>
>> I am getting output Another transaction is in progress with few gluster
>> volume commands including stop command. And with
On Sun, Nov 25, 2018 at 8:40 PM Jeevan Patnaik wrote:
> Hi,
>
> I am getting the output "Another transaction is in progress" with a few gluster
> volume commands, including the stop command. And the gluster volume status
> command just hangs and fails with a timeout error.
>
This is primarily because of
The fix is in release 5 branch and next update of glusterfs-5 should have
it. Thanks for reporting the issue, David.
On Tue, 6 Nov 2018 at 20:27, David Spisla wrote:
> Ok, thanks for the update.
>
> Am Di., 6. Nov. 2018 um 15:46 Uhr schrieb Sanju Rakonde <
> srako...@redhat.com>:
>
>> Hi David,
What’s the version of srvstogfs-b11, srvstogfs-b11 and the client?
Have you checked the glusterd log of srvstogfs-b11 to see the reason for the
disconnection?
On Wed, 7 Nov 2018 at 04:35, MOISY Jérôme wrote:
> Hello,
>
>
>
> I just installed a new storage pool with 2 x 5 Servers in version 5.0
On Tue, 6 Nov 2018 at 20:40, fsoyer wrote:
> Hi all,
> after some problems on a node, it ended up being marked as "rejected" by
> the other nodes.
> It is no longer part of any volume (after some remove-brick force), so I
> reset it as described here:
>
>
On Thu, Nov 1, 2018 at 10:08 AM Computerisms Corporation <
b...@computerisms.ca> wrote:
> My troubleshooting led me to confirm that all my package versions
> were lined up, and I came to realize that I had gotten version 5.0 from
> the Debian repos instead of the repo at download.gluster.org.
== Overview
Today, we are announcing the availability of GCS (Gluster Container
Storage) 0.1. This initial release is designed to provide a platform for
community members to try out and provide feedback on the new Gluster
container storage stack. This new stack is a collaboration across a number
Could you check what the glusterd.log & cli.log files have to say about such a
request? Is it failing on some socket file connection?
On Thu, Sep 13, 2018 at 11:31 PM Thomas Weiss wrote:
> Hi.
>
> I am very new to GlusterFS.
> I have just deployed a cluster and want to list the nodes.
>
> $ gluster
Can you please pass along all the gluster log files from the server where the
'transport endpoint not connected' error is reported? As restarting glusterd
didn’t solve this issue, I believe this isn’t a stale port problem but
something else. Also please provide the output of ‘gluster v info ’
(@cc Ravi,
On Mon, 20 Aug 2018 at 13:08, wrote:
> Hi,
>
> To add to the problematic memory leak, I've been seeing another strange
> behavior on the 3.12 servers. When I reboot a node, it seems like often
> (but not always) the other nodes mark it as disconnected and won't
> accept it back until I restart
On Sun, 5 Aug 2018 at 13:29, Yuhao Zhang wrote:
> Sorry, what I meant was, if I start the transfer now and get glusterd into
> zombie status,
>
glusterd or glusterfsd?
it's unlikely that I can fully recover the server without a reboot.
>
>
> On Aug 5, 2018, at 02:55, Raghavendra Gowdappa
>
It might be worthwhile to open a GitHub issue at
https://github.com/gluster/glusterd2/issues with all the details so that
this doesn't drop off our radar.
On Thu, Jun 14, 2018 at 10:25 PM, Davide Obbi
wrote:
> thanks,
>
> i have been able to remove the old id entry:
>
On Wed, May 30, 2018 at 10:55 PM, Jim Kinney wrote:
> All,
>
> I added a third peer for an arbiter brick host to a replica 2 cluster. Then I
> realized I can't use it since it has no infiniband like the other two hosts
> (infiniband and ethernet for clients). So I removed the new arbiter bricks
>
On Tue, 17 Apr 2018 at 10:06, Nithya Balachandran
wrote:
> That might be the reason. Perhaps the volfiles were not regenerated after
> upgrading to the version with the fix.
>
Bumping up the op-version is necessary in this case as (AFAIK) the code was
handling this based on
gluster
> 3.12? In which version? If not, there is plan to backport it?
>
>
> Greetings,
>
> Paolo
>
> Il 16/03/2018 13:24, Atin Mukherjee ha scritto:
>
> Have sent a backport request https://review.gluster.org/19730 at
> release-3.10 branch. Hopefully this fix w
Have sent a backport request https://review.gluster.org/19730 at
release-3.10 branch. Hopefully this fix will be picked up in next update.
On Fri, Mar 16, 2018 at 4:47 PM, Marco Lorenzo Crociani <
mar...@prismatelecomtesting.com> wrote:
> Hi,
> I'm hitting bug
further once you can pass down the glusterd and cmd_history log
files and the content of /var/lib/glusterd from all the nodes.
On Wed, Mar 7, 2018 at 4:13 AM, Jamie Lawrence <jlawre...@squaretrade.com>
wrote:
>
> > On Mar 5, 2018, at 6:41 PM, Atin Mukherjee <amukh...@redhat.com
On Tue, Mar 6, 2018 at 6:00 AM, Jamie Lawrence
wrote:
> Hello,
>
> So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume.
>
> It actually began as the same problem with a different peer. I noticed
> with (call it) gluster-2, when I couldn't make a
I believe the peer rejected issue is something we recently identified; it has
been fixed through https://bugzilla.redhat.com/show_bug.cgi?id=1544637
and is available in 3.12.6. I'd request you to upgrade to the latest
version in the 3.12 series.
On Mon, Feb 19, 2018 at 12:27 PM,
On Thu, Feb 15, 2018 at 1:39 AM, Gregor Burck
wrote:
> Hi,
>
> I run a Proxmox system with a gluster volume over three nodes.
> I am thinking about setting up a second volume, but want to use the other
> interfaces on the nodes.
>
> Is this recommended or possible?
>
It's
Are you running gluster version <= 3.12?
Did you happen to start seeing this flood after a rebalance? I'm just trying
to rule out that you're hitting
https://bugzilla.redhat.com/show_bug.cgi?id=1484885 .
On Fri, Feb 9, 2018 at 4:45 AM, Vijay Bellur wrote:
>
> On Thu, Feb 8,
I'm guessing there's something wrong w.r.t. address resolution on node 1.
From the logs it's quite clear to me that node 1 is unable to resolve the
address configured in /etc/hosts, whereas the other nodes can. Could you
paste the gluster peer status output from all the nodes?
Also can you please
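A minimal check to run on each node, with peer-host standing in for the name node 1 fails to resolve:

```
getent hosts peer-host    # peer-host is a placeholder; shows which /etc/hosts or DNS entry answers
gluster peer status       # compare Hostname / State across all the nodes
```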
I have repeatedly explained, multiple times, that the way to hit this problem
is *extremely rare*, until and unless you prove us wrong and explain why
you think you can get into this situation often. I still see that the
information is not being made available to us to think through why this fix
Adding Poornima to take a look at it and comment.
On Tue, Jan 23, 2018 at 10:39 PM, Alan Orth wrote:
> Hello,
>
> I saw that parallel-readdir was an experimental feature in GlusterFS
> version 3.10.0, became stable in version 3.11.0, and is now recommended for
> small file
t 3.10 and hope to stay stable :/
>
>
>
> Regards
>
> Jo
>
>
>
>
>
> -Original message-
> *From:* Atin Mukherjee <amukh...@redhat.com>
> *Sent:* Tue 23-01-2018 05:15
> *Subject:* Re: [Gluster-users] BUG: After stop and start wrong port is
> adverti
So from the logs it looks to be a regression caused by commit 635c1c3, and
the good news is that this is now fixed in the release-3.12 branch and
should be part of 3.12.5.
Commit which fixes this issue:
COMMIT: https://review.gluster.org/19146 committed in release-3.12 by
\"Atin Mukh
> every time I reboot!
>
> Regards,
>
> On Sat, Dec 2, 2017 at 5:23 PM Atin Mukherjee <amukh...@redhat.com> wrote:
>
>> On Sat, 2 Dec 2017 at 19:29, Jo Goossens <jo.gooss...@hosted-power.com>
>> wrote:
>>
>>> Hello Atin,
>>>
>>>
On Mon, Jan 15, 2018 at 6:30 PM, 陈曦 wrote:
> When using the host name for the volume, the related gluster commands can become
> very slow. For example: create, start, stop volume, and NFS-related commands. And
> in some cases, the command will return Error : Request timed
>
On Fri, 12 Jan 2018 at 21:16, Nithya Balachandran
wrote:
> -- Forwarded message --
> From: Jose Sanchez
> Date: 11 January 2018 at 22:05
> Subject: Re: [Gluster-users] Creating cluster replica on 2 nodes 2 bricks
> each.
> To: Nithya
On Fri, 12 Jan 2018 at 01:34, Dj Merrill wrote:
> This morning I did a rolling update from the latest 3.7.x to 3.12.4,
> with no client activity. "Rolling" as in, shut down the Gluster
> services on the first server, update, reboot, wait until up and running,
> proceed to the
On Tue, Jan 2, 2018 at 2:36 PM, Hetz Ben Hamo wrote:
> Hi Amar,
>
> If I can say something about the development of GlusterFS, it is that there
> are 2 missing things:
>
> 1. Breakage between releases. I'm "stuck" using GlusterFS 3.8 because
> someone support to enable NFS-Ganesha.
Could you share the glusterd and the respective brick log files along with
the output of gluster volume info, gluster volume status and 'ps aux | grep
glusterfsd' .
On Wed, Dec 20, 2017 at 3:08 PM, David Spisla
wrote:
> Hello Gluster Community,
>
> I am using
brick-9=shchhv02-sto:-data-brick4-shchst01
> brick-10=shchhv03-sto:-data-brick4-shchst01
> brick-11=shchhv04-sto:-data-brick4-shchst01
>
> NOTE
>
> [root@shchhv01 shchst01]# gluster volume get shchst01 cluster.op-version
> Warning: Support to get global option value using
On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki
wrote:
> Hi,
>
> I have a cluster of 10 servers all running Fedora 24 along with
> Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
> Gluster 3.12. I saw the documentation and did some testing but
On Sat, Dec 16, 2017 at 12:45 AM, Matt Waymack wrote:
> Hi all,
>
>
>
> I have an issue where our volume will not start from any node. When
> attempting to start the volume it will eventually return:
>
> Error: Request timed out
>
>
>
> For some time after that, the volume
nk trying to start it as a regular user did something odd
> and reinstalling the packages doesn’t correct the state for some reason.
>
>
> On Dec 13, 2017, at 4:34 AM, Atin Mukherjee <amukh...@redhat.com> wrote:
>
> Seems like you don't have the glusterd.vol file installed in
Seems like you don't have the glusterd.vol file installed on the node. Can
you please cross-check?
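A quick cross-check, assuming the default location /etc/glusterfs/glusterd.vol and Ubuntu's glusterfs-server package:

```
ls -l /etc/glusterfs/glusterd.vol                    # assumed default path for glusterd.vol
dpkg -L glusterfs-server | grep glusterd.vol         # confirm the package is supposed to ship it
sudo apt-get install --reinstall glusterfs-server    # restore the file if it is missing
```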
On Wed, Dec 13, 2017 at 8:52 AM, Ben Mabey
wrote:
> Hi all,
> I’m trying out gluster by following the Quick Start guide on two fresh
> installs of Ubuntu 16.04. One
> pollin = 0x3fff6c000920
>>
>> priv = 0x3fff74002d50
>>
>> #14 0x3fff847ff89c in socket_event_handler (fd=,
>> idx=, data=0x3fff74002210, poll_in=,
>> poll_out=, poll_err=) at socket.c:2349
>>
>> ---Type to continue, or q to quit---
>>
nnouncement.
>
> Regards
>
> Jo
>
>
>
>
>
>
> -Original message-
> *From:* Atin Mukherjee <amukh...@redhat.com>
>
> *Sent:* Mon 30-10-2017 17:40
> *Subject:* Re: [Gluster-users] BUG: After stop and start wrong port is
> advertised
> *To:* Jo
On Fri, Dec 1, 2017 at 1:55 AM, Ziemowit Pierzycki
wrote:
> Hi,
>
> I have a problem joining four Gluster 3.10 nodes to an existing set of
> Gluster 3.8 nodes. My understanding is that this should work and not be
> too much of a problem.
>
> Peer probe is successful but the node is
i
> done
> done
>
>
> Met vriendelijke groet,
>
> Mike Hulsman
>
> Proxy Managed Services B.V. | www.proxy.nl | Enterprise IT-Infra, Open
> Source and Cloud Technology
> Delftweg 128 3043 NB Rotterdam The Netherlands
> <https://maps.google.com/?q=Delftweg+128+3043+NB+R
el drivers myself in c :)
>
>
>
>
>
> Regards
>
> Jo Goossens
>
>
>
>
>
>
>
>
> -Original message-
> *From:* Atin Mukherjee <amukh...@redhat.com>
> *Sent:* Fri 27-10-2017 21:01
> *Subject:* Re: [Gluster-users] BUG: After stop and start wrong port
If the gluster nodes are peer probed through FQDNs then you’re good. If
they’re done through IPs, then on every node you’d need to replace the old
IP with the new IP in all the files in /var/lib/glusterd, rename the files
which have the associated old IP in their names, and restart all gluster
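A rough sketch of that replacement on one node; OLD and NEW are placeholders, take a backup, and keep glusterd stopped while editing:

```
OLD=10.0.0.5    # placeholder: the old IP
NEW=10.0.0.50   # placeholder: the new IP

systemctl stop glusterd
cp -a /var/lib/glusterd /root/glusterd-backup

# rewrite the old IP inside every file that mentions it
grep -rl "$OLD" /var/lib/glusterd | xargs sed -i "s/$OLD/$NEW/g"

# rename files whose names embed the old IP (e.g. brick volfiles)
find /var/lib/glusterd -depth -name "*$OLD*" | while read -r p; do
    mv "$p" "${p//$OLD/$NEW}"
done

systemctl start glusterd
```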
We (finally) figured out the root cause, Jo!
Patch https://review.gluster.org/#/c/18579 posted upstream for review.
On Thu, Sep 21, 2017 at 2:08 PM, Jo Goossens
wrote:
> Hi,
>
>
>
>
>
> We use glusterfs 3.10.5 on Debian 9.
>
>
>
> When we stop or restart the
On Tue, Oct 24, 2017 at 11:13 PM, Alastair Neil
wrote:
> gluster version 3.10.6, replica 3 volume, daemon is present but does not
> appear to be functioning
>
> peculiar behaviour. If I kill the glusterfs brick daemon and restart
> glusterd then the brick becomes
Please share the complete cli and the glusterd log file from the node.
On Mon, Oct 16, 2017 at 10:38 AM, Ngo Leung wrote:
> Dear Sir / Madam
>
>
>
> I had been using GlusterFS on both nodes, and it is in distribute
> mode. But I cannot use all of the gluster
On Tue, Oct 17, 2017 at 3:28 PM, ismael mondiu wrote:
> Hi,
>
> I noticed that when i stop my gluster server via systemctl stop glusterd
> command , one glusterfs process is still up.
>
> Which is the correct way to stop all gluster processes in my host?
>
Stopping glusterd
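glusterd is only the management daemon; brick processes run as glusterfsd, and self-heal, gNFS and FUSE client processes run as glusterfs. A rough sketch of a full stop on one node (note this also takes down any local FUSE mounts):

```
systemctl stop glusterd   # stops the management daemon only
pkill glusterfsd          # brick processes
pkill glusterfs           # remaining glusterfs processes (self-heal, FUSE mounts, gNFS)
```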
*
>
>
> [
> [root@dvihcasc0r ~]# glusterd -L DEBUG
> [root@dvihcasc0r ~]# date
> Thu Oct 5 10:05:44 CEST 2017
> [root@dvihcasc0r ~]# shutdown -r now
>
>
> ****
> *
>
2017-10-04 15:44:20.189153] E [MSGID: 101176]
> [graph.c:681:glusterfs_graph_activate] 0-graph: init failed
> [2017-10-04 15:44:20.190877] W [glusterfsd.c:1360:cleanup_and_exit]
> (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x7f2b04693bcd]
> -->/usr/sbin/glusterd(glust
On Fri, 22 Sep 2017 at 18:54, Ravishankar N wrote:
> Hello,
> Are our servers still facing the overload issue? My replies to
> gluster-users ML are not getting delivered to the list.
>
Same here; this is true for gluster-devel as well.
> Regards,
> Ravi
>
>
> On
On Fri, Sep 22, 2017 at 2:37 AM, Jo Goossens
wrote:
> Hi,
>
>
>
>
>
> We use glusterfs 3.10.5 on Debian 9.
>
>
>
> When we stop or restart the service, e.g.: service glusterfs-server restart
>
>
>
> We see that the wrong port get's advertised afterwards. For
On Wed, Sep 20, 2017 at 6:02 PM, Alexandre Blanca wrote:
> Hi,
>
> How do I change the host name of gluster servers?
> If I modify hostname1 in /etc/lib/glusterd/peers/uuid, the change is
> not saved...
>
> gluster pool list returns the server IP and not the new hostname...
>
I've already replied to your earlier email. In case you've not seen it in
your mailbox, here it goes:
This looks like a bug to me. For some reason glusterd's portmap is
referring to a stale port (IMO), whereas the brick is still listening on the
correct port. But ideally when the glusterd service is
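A quick way to compare the two on the affected node, with myvol as a hypothetical volume name:

```
gluster volume status myvol    # myvol is a placeholder; Port column = what glusterd's portmap advertises
ss -tlnp | grep glusterfsd     # ports the brick processes are actually listening on
```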
On Thu, Sep 14, 2017 at 12:58 AM, Ben Werthmann wrote:
> I ran into something like this in 3.10.4 and filed two bugs for it:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1491059
> https://bugzilla.redhat.com/show_bug.cgi?id=1491060
>
> Please see the above bugs for full
le ?
>
> Thanks
>
>
>
>
> --
> *De :* Atin Mukherjee <amukh...@redhat.com>
> *Envoyé :* vendredi 18 août 2017 10:53
> *À :* Niels de Vos
> *Cc :* ismael mondiu; gluster-users@gluster.org; Gaurav Yadav
> *Objet :* Re: [Gluster-users] Glusterd not working with s
Additionally, the brick log file of the same brick would be required. Please
check whether the brick process went down or crashed. Doing a volume start
force should resolve the issue.
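With a hypothetical volume name myvol, that would look like:

```
gluster volume start myvol force   # myvol is a placeholder; restarts only the bricks that are down
gluster volume status myvol        # confirm every brick now shows Online "Y"
```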
On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav wrote:
> Please send me the logs as well i.e
s/3.10.5/xlator/mgmt/glusterd.so
> #11 0x7f413bea789a in __glusterd_mgmt_hndsk_version_ack_cbk ()
> from /usr/lib64/glusterfs/3.10.5/xlator/mgmt/glusterd.so
> #12 0x7f413be8d3ee in glusterd_big_locked_cbk () from
> /usr/lib64/glusterfs/3.10.5/xlator/mgmt/glusterd.so
> #13 0x0
up. So I'd be really interested to see the backtrace of this
when there are no volumes associated.
>
> On Mon, Sep 4, 2017 at 5:08 PM, Atin Mukherjee <amukh...@redhat.com>
> wrote:
> >
> >
> > On Mon, Sep 4, 2017 at 5:28 PM, Serkan Çoban <cobanser...@gmail.com>
> wr
services and be operational.
> On Mon, Sep 4, 2017 at 1:43 PM, Atin Mukherjee <amukh...@redhat.com>
> wrote:
> >
> >
> > On Fri, Sep 1, 2017 at 8:47 AM, Milind Changire <mchan...@redhat.com>
> wrote:
> >>
> >> Serkan,
> >> I ha
-
> --
> There are no active volume tasks
>
> $ gluster vol heal QEMU-VMs statistics heal-count
> Gathering count of entries to be healed on volume QEMU-VMs has been
> unsuccessful on bricks that are down. Please check if all brick processe
Please provide the output of gluster volume info, gluster volume status and
gluster peer status.
On Mon, Sep 4, 2017 at 4:07 PM, lejeczek wrote:
> hi all
>
> this:
> $ vol heal $_vol info
> outputs ok and exit code is 0
> But if I want to see statistics:
> $ gluster vol