Hi,
Issue https://github.com/gluster/glusterfs/issues/1332 is now fixed with
https://github.com/gluster/glusterfs/commit/865cca1190e233381f975ff36118f46e29477dcf.
It will be backported to release-7 and release-8 branches soon.
On Mon, Sep 7, 2020 at 1:14 AM Strahil Nikolov
wrote:
> Your
Hi,
I believe you can do an offline upgrade (I have never tried upgrading from
3.7 to 7.7, so there might be issues).
If you want to do a fresh install, then after installing the 7.7 packages you
can reuse the same old bricks to create the volumes, but you need to add
"force" at the end of the volume create command.
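As a rough illustration only (the volume name, replica count, hostnames and
brick paths below are made up; adjust them to match your old layout):

# Recreate the volume on the freshly installed nodes, reusing the old bricks.
# "force" is required because the bricks already contain data from the old volume.
gluster volume create gv0 replica 3 \
    server1:/data/brick1/gv0 server2:/data/brick1/gv0 server3:/data/brick1/gv0 force
gluster volume start gv0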
Hi glusterfs community,
We are planning a Test day on 29th Jun 2020, mainly to focus on upgrade
testing to release-8.0RC.
Also, we are planning to automate the upgrade test flow. @Prajith Kesava
Prasad will be automating the rolling upgrade flow using
Ansible. We need to capture the tests which
n this state?
>
> Cheers,
> Petr
>
> On Fri, May 29, 2020 at 11:37 AM Sanju Rakonde
> wrote:
> >
> > Nope, for now. I will update you if we figure out any other workaround.
> >
> > Thanks for your help!
> >
> > On Fri, May 29, 2020 at 2:50
formation?
>
>
> On Fri, May 29, 2020 at 11:08 AM Sanju Rakonde
> wrote:
> >
> > The issue is not with glusterd restart. We need to reproduce from the
> beginning and add bricks to check the df -h values.
> >
> > I suggest not trying this on the production environment. If y
>
> Cheers,
> Petr
>
> On Fri, May 29, 2020 at 9:09 AM Sanju Rakonde wrote:
> >
> > Surprising! Will you be able to reproduce the issue and share the logs
> if I provide a custom build with more logs?
> >
> > On Thu, May 28, 2020 at 1:35 PM Petr Cer
var/lib/glusterd/vols/gv0/gv0.imagegluster3.data-brick.vol:option
> >> shared-brick-count 0
> >>
> >> Server 3:
> >>
> >> /var/lib/glusterd/vols/gv0/gv0.imagegluster1.data2-brick.vol:
> >> option shared-brick-count 0
> >> /var/lib/gluste
b/glusterd/vols/gv0/gv0.imagegluster2.data2-brick.vol:
> option shared-brick-count 0
> /var/lib/glusterd/vols/gv0/gv0.imagegluster2.data-brick.vol:option
> shared-brick-count 0
> /var/lib/glusterd/vols/gv0/gv0.imagegluster3.data2-brick.vol:
> option shared-brick-count 2
> /var/lib
Hi Petr,
what was the server version before upgrading to 7.2?
Can you please share the shared-brick-count values from brick volfiles from
all the nodes?
grep shared-brick-count /var/lib/glusterd/vols/<volname>/*
On Wed, May 27, 2020 at 2:31 PM Petr Certik wrote:
> Hi everyone,
>
> we've been running a
96133643c634c1ecd603ba
> node3/vols/rrddata/rrddata.glusterDevVM3.bricks-rrddata-brick1-data.vol
>
> Best regards,
> Nicolas.
>
> --
> *De: *"Hari Gowtham"
> *À: *n...@furyweb.fr, "Sanju Rakonde"
> *Cc: *"Nikhil Ladha"
@Nikhil Ladha Please look at this issue.
On Thu, Apr 23, 2020 at 9:02 PM wrote:
> Did you find any clue in log files ?
>
> I can try an update to 7.5 in case some recent bugs were solved; what's
> your opinion?
>
> ------
> *De: *"Sanju Rakond
g on node 3 (arbiter) I get only
> these few lines:
> root@glusterDevVM3:~# pstack 13700
>
> 13700: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
> (No symbols found)
> 0x7fafd747a6cd:
>
> Which Debian Stretch package should I install?
>
> To be more expl
Hi,
The email is talking about many issues. Let me ask a few questions to get the
whole picture.
1. Are the peers in the connected state now, or are they still in the
rejected state?
2. What led you to see the "locking failed" messages? We would like to know if there
is a reproducer and fix the issue if
Thanks for the update David!
On Fri, Feb 28, 2020 at 8:21 PM David Spisla wrote:
> Dear Gluster Community,
>
> as mentioned on the last Gluster Community meeting, here is bug report for
> the above issue. The patch is already sent to gerrit:
> https://bugzilla.redhat.com/show_bug.cgi?id=1808421
Hi Rifat,
I suspect you are hitting
https://bugzilla.redhat.com/show_bug.cgi?id=1773856. This bug has been
fixed in master and will be backported to release branches soon.
Although the bug says, "with volume set operation when a node is down we
see the issue", according to the RCA of the bug it
Hi,
In every node of the cluster, the /var/lib/glusterd/peers/ directory
holds peer-related information. To come out of this situation, you have to
delete hostname=nas28 from the file whose name is the UUID of node nas23, in
all the nodes (except node nas23). The file will be present
under
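For illustration only (a rough sketch; the UUID below is made up, and please
back up the file and stop glusterd on the node before editing anything under
/var/lib/glusterd):

# On node nas23, find its own UUID:
grep UUID /var/lib/glusterd/glusterd.info
# e.g. UUID=1a2b3c4d-0000-0000-0000-000000000000 (made-up value)
# On every other node, drop the hostname=nas28 line from that peer file:
sed -i.bak '/^hostname.*=nas28$/d' /var/lib/glusterd/peers/1a2b3c4d-0000-0000-0000-000000000000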
On Wed, Dec 11, 2019 at 10:06 PM woz woz wrote:
> Hi guys,
> how are you? I have a question for you
> Last week one of our 8 servers went down due to a problem on the RAID
> controller and, unfortunately, we had to reinstall and reconfigure it. The
> hostname of this server is
Please check the contents of the /var/lib/glusterd/peers/ directory; it should not
have any information regarding the localhost. Please check the UUID of the
local node in the /var/lib/glusterd/glusterd.info file and figure out if you
have a file with this UUID at /var/lib/glusterd/peers/*. If you find any
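A quick way to check this (a sketch, assuming the default glusterd paths):

# UUID of the local node:
grep ^UUID /var/lib/glusterd/glusterd.info
# The peers directory should NOT contain a file named with that UUID:
ls -l /var/lib/glusterd/peers/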
Hi Tom,
The volume delete operation is not permitted when some of the peers in the cluster
are down. Please check the peer status output and make sure that all the nodes are
up and running, and then you can go for the volume delete operation.
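For example (a sketch; "gv0" is a placeholder volume name):

gluster peer status        # every peer should show "State: Peer in Cluster (Connected)"
gluster volume stop gv0    # a started volume has to be stopped before it can be deleted
gluster volume delete gv0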
On Sun, Sep 29, 2019 at 8:53 AM TomK wrote:
> Hello All,
>
> I'm not
Christian,
To debug memory leaks, we need periodic statedumps of the respective processes.
Please provide the statedumps.
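In case it helps, statedumps can be taken roughly like this (a sketch,
assuming the default dump directory; repeat it periodically, e.g. every few
hours, so we can compare allocations over time):

# Brick processes of a volume (replace gv0 with your volume name):
gluster volume statedump gv0
# Any other gluster process (e.g. a fuse mount or glusterd) by PID:
kill -SIGUSR1 <pid>
# The dumps are written under /var/run/gluster/ by default.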
I suspect that you are hitting
https://bugzilla.redhat.com/show_bug.cgi?id=1694612. This bug is addressed
in release-5.6
Thanks,
Sanju
On Thu, Jul 18, 2019 at 1:30 PM Christian
On Sun, Jun 23, 2019 at 4:54 AM Laurent Dumont
wrote:
> Hi,
>
> I am facing a strange issue with the Gluster CLI. No matter what command
> is used, the CLI doesn't output anything. It's a gluster with a single
> node. The volumes themselves are working without any issues.
>
>
Olaf,
Can you please paste the complete backtrace from the core file, so that we can
analyse what is wrong here?
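Something along these lines should produce it (a sketch; the paths are
examples, use the binary that actually produced the core and make sure the
matching debug symbols are installed):

gdb /usr/sbin/glusterd /path/to/core
(gdb) thread apply all bt full
(gdb) quit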
On Wed, Jun 19, 2019 at 10:31 PM Olaf Buitelaar
wrote:
> Hi Atin,
>
> Thank you for pointing out this bug report; however, no rebalancing task
> was running during this event. So maybe
Hi Christian,
I see the below errors when I try to unzip the file.
[root@localhost Downloads]# unzip gluster_coredump.zip
Archive: gluster_coredump.zip
checkdir error: coredump exists but is not directory
unable to process coredump/.
checkdir error: coredump exists but is not
I apologize for the wrong mail. This .t failed only for one patch and I
don't think it is spurious. Closing this bug as not a bug.
On Thu, May 23, 2019 at 4:04 PM Sanju Rakonde wrote:
> I see a lot of patches are failing regressions due to the .t mentioned in
> the subject line. I've
I see a lot of patches are failing regressions due to the .t mentioned in
the subject line. I've filed a bug[1] for the same.
https://bugzilla.redhat.com/show_bug.cgi?id=1713284
--
Thanks,
Sanju
David,
can you please attach the glusterd logs? As the error message says the commit
failed on the arbiter node, we might be able to find some issue on that
node.
On Mon, May 20, 2019 at 10:10 AM Nithya Balachandran
wrote:
>
>
> On Fri, 17 May 2019 at 06:01, David Cunningham
> wrote:
>
>> Hello,
>>
We don't hit https://bugzilla.redhat.com/show_bug.cgi?id=1694010 while
upgrading to glusterfs-6. We tested it in different setups and understood
that this issue is seen because of some issue in the setup.
Regarding the issue you have faced, can you please let us know which
documentation you have
On Tue, Mar 12, 2019 at 10:46 AM Sanju Rakonde wrote:
> Hi Maurya,
>
> Can you please share the glusterd.log with us? It will be stored under
> /var/log/glusterfs/ directory.
>
> Thanks,
> Sanju
>
> On Mon, Mar 11, 2019 at 4:09 PM Maurya M wrote:
>
>> Hi All
Hi Maurya,
Can you please share the glusterd.log with us? It will be stored under
/var/log/glusterfs/ directory.
Thanks,
Sanju
On Mon, Mar 11, 2019 at 4:09 PM Maurya M wrote:
> Hi All,
> I am trying to install gluster 4.1 on 3 nodes on my AKS cluster using
> gluster-kubernetes project.
>
>
Abhishek,
We need the below information to investigate this issue.
1. gluster --version
2. Please run glusterd in gdb, so that we can capture the backtrace (a rough
sketch follows below). I see some rpc errors in the log, but a backtrace will
be more helpful.
To run glusterd in gdb, you need to start glusterd in gdb (i.e. gdb
glusterd,
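Roughly like this (a sketch; it assumes glusterd is at /usr/sbin/glusterd and
its debug symbols are installed):

gdb /usr/sbin/glusterd
(gdb) run --debug              # --debug keeps glusterd in the foreground
# ... reproduce the crash ...
(gdb) thread apply all bt      # backtrace of all threads once it stops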
On Thu, Feb 28, 2019 at 9:48 PM Poornima Gurusiddaiah
wrote:
>
>
> On Thu, Feb 28, 2019, 8:44 PM Tami Greene wrote:
>
>> I'm missing some information about how the cluster volume creates the
>> metadata allowing it to see and find the data on the bricks. I've been
>> told not to write anything
Hi Deepu,
I can see multiple errors in glusterd log.
[2019-02-06 13:22:21.012490] E
[glusterd-rpc-ops.c:1429:__glusterd_commit_op_cbk]
(-->/lib64/libgfrpc.so.0(+0xec20) [0x7f278d201c20]
-->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0x7762a)
[0x7f2781f1d62a]
Self-heal Daemon on localhost               N/A       N/A        Y       109550
> Self-heal Daemon on 192.168.3.6             N/A       N/A        Y       52557
> Self-heal Daemon on 192.168.3.15            N/A       N/A        Y       16946
>
> Task Status of Volume vol_3442e86b6d994a14de73f1b8c82cf0b8
>
> --
> trusted.glusterfs.dht=0x0001
> trusted.glusterfs.volume-id=0x15477f3622e84757a0ce9000b63fa849
>
> sh-4.2# ls -la |wc -l
> 86
> sh-4.2# pwd
>
> /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c
Can you please provide us the output of "getfattr -m . -d -e hex
<brick-path>"?
On Thu, Jan 24, 2019 at 12:18 PM Shaik Salam wrote:
> Hi Sanju,
>
> Could you please have look my issue if you have time (atleast provide
> workaround).
>
> BR
> Salam
>
>
>
> From:Shaik Salam
sconnecting now
> [2019-01-21 08:23:25.346500] W [rpc-clnt.c:1753:rpc_clnt_submit]
> 0-glusterfs: error returned while attempting to connect to host:(null),
> port:0
>
>
> Enabled DEBUG mode for brick level. But nothing writing to brick log.
>
> gluster volume set vol_3442e
Hi Shaik,
Can you please provide us the complete glusterd and cmd_history logs from all
the nodes in the cluster? Also, please paste the output of the following
commands (from all nodes):
1. gluster --version
2. gluster volume info
3. gluster volume status
4. gluster peer status
5. ps -ax | grep
Hi Jeevan,
You might be hitting https://bugzilla.redhat.com/show_bug.cgi?id=1635820
Were any of the volumes in the "Created" state when the peer reject issue was
seen?
Thanks,
Sanju
On Mon, Nov 26, 2018 at 9:35 AM Jeevan Patnaik wrote:
> Hi Atin,
>
> Thanks for the details. I think the issue is
Hi David,
With commit 44e4db, the shared-storage functionality has broken. The test case
we added couldn't catch this, since our .t framework simulates a cluster
environment on a single node. We will send out a patch for this soon (into the
release-5 branch as well).
On Tue, Nov 6, 2018 at 4:15 PM David
Hi Abhishek,
Can you please share the output of "t a a bt" (thread apply all bt) with us?
Thanks,
Sanju
On Fri, Sep 21, 2018 at 2:55 PM, ABHISHEK PALIWAL
wrote:
>
> We have seen a SIGSEGV crash on glusterfs process on kernel restart at
> start up.
>
> (gdb) bt
> #0 0x3fffad4463b0 in _IO_unbuffer_all () at