Re: [Gluster-users] Upgrade from 6.9 to 7.7 stuck (peer is rejected)

2020-09-06 Thread Sanju Rakonde
Hi, issue https://github.com/gluster/glusterfs/issues/1332 is fixed now with https://github.com/gluster/glusterfs/commit/865cca1190e233381f975ff36118f46e29477dcf . It will be backported to release-7 and release-8 branches soon. On Mon, Sep 7, 2020 at 1:14 AM Strahil Nikolov wrote: > Your

Re: [Gluster-users] upgrade gluster from old version

2020-08-26 Thread Sanju Rakonde
Hi, I believe you can do an offline upgrade (I have never tried upgrading from 3.7 to 7.7, so there might be issues). If you want to do a fresh install, after installing the 7.7 packages, you can use the same old bricks to create the volumes, but you need to add force at the end of volume create
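The recreate-on-old-bricks step above can be sketched as below. This is a hedged sketch, not the thread's exact commands: the volume name, server names, and brick paths are placeholders, and the guard lets the script degrade when the gluster CLI is absent. "force" is needed because the old bricks already carry data and gluster xattrs.

```shell
# Hypothetical names: volume "gv0", servers/bricks are placeholders.
if command -v gluster >/dev/null 2>&1; then
  gluster volume create gv0 replica 3 \
    server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 force \
    && created=yes || created=no
else
  created=skipped
  echo "gluster CLI not found; run this on a cluster node"
fi
```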

[Gluster-users] GlusterFS - 8.0RC Test day (29th Jun 2020)

2020-06-24 Thread Sanju Rakonde
Hi glusterfs community, We are planning a Test day on 29th Jun 2020, mainly to focus on upgrade testing to release-8.0RC. Also, we are planning to automate the upgrade test flow. @Prajith Kesava Prasad will be automating the rolling upgrade flow using ansible. We need to capture the tests which

Re: [Gluster-users] df shows wrong mount size, after adding bricks to volume

2020-05-29 Thread Sanju Rakonde
n this state? > > Cheers, > Petr > > On Fri, May 29, 2020 at 11:37 AM Sanju Rakonde > wrote: > > > > Nope, for now. I will update you if we figure out any other workaround. > > > > Thanks for your help! > > > > On Fri, May 29, 2020 at 2:50

Re: [Gluster-users] df shows wrong mount size, after adding bricks to volume

2020-05-29 Thread Sanju Rakonde
formation? > > > On Fri, May 29, 2020 at 11:08 AM Sanju Rakonde > wrote: > > > > The issue is not with glusterd restart. We need to reproduce from > beginning and add-bricks to check df -h values. > > > > I suggest not to try on the production environment. if y

Re: [Gluster-users] df shows wrong mount size, after adding bricks to volume

2020-05-29 Thread Sanju Rakonde
> > Cheers, > Petr > > On Fri, May 29, 2020 at 9:09 AM Sanju Rakonde wrote: > > > > Surprising! Will you be able to reproduce the issue and share the logs > if I provide a custom build with more logs? > > > > On Thu, May 28, 2020 at 1:35 PM Petr Cer

Re: [Gluster-users] df shows wrong mount size, after adding bricks to volume

2020-05-29 Thread Sanju Rakonde
var/lib/glusterd/vols/gv0/gv0.imagegluster3.data-brick.vol:option > >> shared-brick-count 0 > >> > >> Server 3: > >> > >> /var/lib/glusterd/vols/gv0/gv0.imagegluster1.data2-brick.vol: > >> option shared-brick-count 0 > >> /var/lib/gluste

Re: [Gluster-users] df shows wrong mount size, after adding bricks to volume

2020-05-27 Thread Sanju Rakonde
b/glusterd/vols/gv0/gv0.imagegluster2.data2-brick.vol: > option shared-brick-count 0 > /var/lib/glusterd/vols/gv0/gv0.imagegluster2.data-brick.vol:option > shared-brick-count 0 > /var/lib/glusterd/vols/gv0/gv0.imagegluster3.data2-brick.vol: > option shared-brick-count 2 > /var/lib

Re: [Gluster-users] df shows wrong mount size, after adding bricks to volume

2020-05-27 Thread Sanju Rakonde
Hi Petr, what was the server version before upgrading to 7.2? Can you please share the shared-brick-count values from the brick volfiles on all the nodes? grep shared-brick-count /var/lib/glusterd/vols//* On Wed, May 27, 2020 at 2:31 PM Petr Certik wrote: > Hi everyone, > > we've been running a
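The grep diagnostic requested in this thread can be sketched on mock volfiles as below. This is a hedged illustration, not output from the thread: the file names are made up, and the real volfiles live under /var/lib/glusterd/vols/<volname>/. shared-brick-count should equal the number of the volume's bricks that share a filesystem; a value of 0 makes df misreport the mount size, which is the symptom discussed above.

```shell
# Mock volfiles in a temp dir (placeholders for /var/lib/glusterd/vols/<volname>/*.vol)
tmp=$(mktemp -d)
printf 'option shared-brick-count 1\n' > "$tmp/gv0.server1.data-brick.vol"
printf 'option shared-brick-count 0\n' > "$tmp/gv0.server2.data-brick.vol"
# The diagnostic from the thread: show the value per brick volfile
grep -H 'shared-brick-count' "$tmp"/*.vol
```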

Re: [Gluster-users] never ending logging

2020-04-29 Thread Sanju Rakonde
96133643c634c1ecd603ba > node3/vols/rrddata/rrddata.glusterDevVM3.bricks-rrddata-brick1-data.vol > > Best regards, > Nicolas. > > -- > *De: *"Hari Gowtham" > *À: *n...@furyweb.fr, "Sanju Rakonde" > *Cc: *"Nikhil Ladha"

Re: [Gluster-users] never ending logging

2020-04-27 Thread Sanju Rakonde
@Nikhil Ladha Please look at this issue. On Thu, Apr 23, 2020 at 9:02 PM wrote: > Did you find any clue in log files ? > > I can try an update to 7.5 in case some recent bug were solved, what's > your opinion ? > > ------ > *De: *"Sanju Rakond

Re: [Gluster-users] never ending logging

2020-04-22 Thread Sanju Rakonde
g on node 3 (arbiter) I get only > these few lines : > root@glusterDevVM3:~# pstack 13700 > > 13700: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO > (No symbols found) > 0x7fafd747a6cd: > > Which debian stretch package should I install ? > > To be more expl

Re: [Gluster-users] never ending logging

2020-04-21 Thread Sanju Rakonde
Hi, The email is talking about many issues. Let me ask a few questions to get a whole picture. 1. Are the peers in the connected state now, or are they still in the rejected state? 2. What led you to see the "locking failed" messages? We would like to know if there is a reproducer and fix the issue if

Re: [Gluster-users] WORM: If autocommit-period 0 file will be WORMed with 0 Byte during initial write

2020-03-01 Thread Sanju Rakonde
Thanks for the update David! On Fri, Feb 28, 2020 at 8:21 PM David Spisla wrote: > Dear Gluster Community, > > as mentioned on the last Gluster Community meeting, here is bug report for > the above issue. The patch is already sent to gerrit: > https://bugzilla.redhat.com/show_bug.cgi?id=1808421

Re: [Gluster-users] Brick Goes Offline After server reboot/Or Gluster Container is restarted, on which a gluster node is running

2020-03-01 Thread Sanju Rakonde
Hi Rifat, I suspect you are hitting https://bugzilla.redhat.com/show_bug.cgi?id=1773856. This bug has been fixed in master and will be backported to release branches soon. Although the bug says, "with volume set operation when a node is down we see the issue", according to the RCA of the bug it

Re: [Gluster-users] How can I remove a wrong information in "Other names" while checking gluster peer status?

2020-01-03 Thread Sanju Rakonde
Hi, In every node of the cluster, at /var/lib/glusterd/peers/ directory, we will have peer related information. To come out of this situation, you have to delete hostname=nas28 from the file, whose name is uuid of node nas23, in all the nodes (except node nas23). The file will be present under
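The edit described above can be sketched on a mock peer store as below. This is a hedged sketch: the uuid and the exact key layout of the peer file (uuid/state/hostnameN lines) are assumptions for illustration, and the real directory is /var/lib/glusterd/peers/ with the file named after nas23's uuid. On a real node, stop glusterd before editing and restart it afterwards.

```shell
# Mock peer file; "cccc-3333" stands in for the real uuid of node nas23
peers=$(mktemp -d)
printf 'uuid=cccc-3333\nstate=3\nhostname1=nas23\nhostname2=nas28\n' > "$peers/cccc-3333"
# Drop the wrong "Other names" entry
sed -i '/^hostname2=nas28$/d' "$peers/cccc-3333"
cat "$peers/cccc-3333"
```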

Re: [Gluster-users] Problem with heal operation con replica 2: "Launching heal operation to perform full self heal on volume gv0 has been unsuccessful on bricks that are down. Please check if all bric

2019-12-12 Thread Sanju Rakonde
On Wed, Dec 11, 2019 at 10:06 PM woz woz wrote: > Hi guys, > how are you? I have a question for you > Last week one of our 8 servers went down due to a problem on the RAID > controller and, unfortunately, we had to reinstall and reconfigure it. The > hostname of this server is

Re: [Gluster-users] one peer flooded with - 0-glusterfs: connection attempt on 127.0.0.1:24007 failed, (Invalid argument)

2019-10-14 Thread Sanju Rakonde
please check contents of /var/lib/glusterd/peers/ directory, it should not have any information regarding the localhost. Please check the uuid of the local node at /var/lib/glusterd/glusterd.info file and figure out if you have a file with this uuid at /var/lib/glusterd/peers/*. If you find any
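The self-referential-peer check described above can be sketched as below. This is a hedged mock: the uuids and file layout are invented for illustration, and the real paths are /var/lib/glusterd/glusterd.info and /var/lib/glusterd/peers/. The idea is exactly what the reply asks for: the local node's uuid must not appear as a file name in the peers directory.

```shell
# Mock glusterd state directory with a bogus self entry
g=$(mktemp -d); mkdir "$g/peers"
echo 'UUID=aaaa-1111' > "$g/glusterd.info"
printf 'uuid=aaaa-1111\nhostname1=127.0.0.1\n' > "$g/peers/aaaa-1111"  # stale self entry
printf 'uuid=bbbb-2222\nhostname1=node2\n'    > "$g/peers/bbbb-2222"  # legitimate peer
local_uuid=$(sed -n 's/^UUID=//p' "$g/glusterd.info")
if [ -f "$g/peers/$local_uuid" ]; then
  echo "stale self peer file found: peers/$local_uuid"
fi
```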

Re: [Gluster-users] gluster volume delete mdsgv01: volume delete: mdsgv01: failed: Some of the peers are down

2019-09-29 Thread Sanju Rakonde
Hi Tom, The volume delete operation is not permitted when some of the peers in the cluster are down. Please check the peer status output and make sure that all the nodes are up and running. Then you can go for the volume delete operation. On Sun, Sep 29, 2019 at 8:53 AM TomK wrote: > Hello All, > > I'm not

Re: [Gluster-users] Memory leak in gluster 5.4

2019-07-18 Thread Sanju Rakonde
Christian, To debug memory leaks, we need periodic statedumps of the respective processes. Please provide the statedumps. I suspect that you are hitting https://bugzilla.redhat.com/show_bug.cgi?id=1694612. This bug is addressed in release-5.6. Thanks, Sanju On Thu, Jul 18, 2019 at 1:30 PM Christian
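The periodic-statedump request above can be sketched as below. This is a hedged sketch: the volume name "gv0" is a placeholder, the one-second interval is only for demonstration (use minutes in practice), and the guard lets the loop run where the gluster CLI is absent. Dumps from `gluster volume statedump` land under /var/run/gluster by default.

```shell
# Take several statedumps of a volume's processes at intervals
vol=gv0
taken=0
for i in 1 2 3; do
  if command -v gluster >/dev/null 2>&1; then
    gluster volume statedump "$vol" || true
  else
    echo "would run: gluster volume statedump $vol"
  fi
  taken=$((taken + 1))
  sleep 1  # use an interval of minutes in practice
done
echo "statedumps attempted: $taken"
```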

Re: [Gluster-users] Gluster CLI - No output/no info displayed.

2019-06-26 Thread Sanju Rakonde
On Sun, Jun 23, 2019 at 4:54 AM Laurent Dumont wrote: > Hi, > > I am facing a strange issue with the Gluster CLI. No matter what command > is used, the CLI doesn't output anything. It's a gluster with a single > node. The volumes themselves are working without any issues. > >

Re: [Gluster-users] glusterd crashes on Assertion failed: rsp.op == txn_op_info.op

2019-06-20 Thread Sanju Rakonde
Olaf, Can you please paste the complete backtrace from the core file, so that we can analyse what is wrong here. On Wed, Jun 19, 2019 at 10:31 PM Olaf Buitelaar wrote: > Hi Atin, > > Thank you for pointing out this bug report, however no rebalancing task > was running during this event. So maybe

Re: [Gluster-users] Memory leak in gluster 5.4

2019-05-29 Thread Sanju Rakonde
Hi Christian, I see below errors when I try to unzip the file. [root@localhost Downloads]# unzip gluster_coredump.zip Archive: gluster_coredump.zip checkdir error: coredump exists but is not directory unable to process coredump/. checkdir error: coredump exists but is not

Re: [Gluster-users] ./tests/basic/gfapi/gfapi-ssl-test.t is failing too often in regression

2019-05-23 Thread Sanju Rakonde
I apologize for the wrong mail. This .t failed only for one patch and I don't think it is spurious. Closing this bug as not a bug. On Thu, May 23, 2019 at 4:04 PM Sanju Rakonde wrote: > I see a lot of patches are failing regressions due to the .t mentioned in > the subject line. I've

[Gluster-users] ./tests/basic/gfapi/gfapi-ssl-test.t is failing too often in regression

2019-05-23 Thread Sanju Rakonde
I see a lot of patches are failing regressions due to the .t mentioned in the subject line. I've filed a bug[1] for the same. https://bugzilla.redhat.com/show_bug.cgi?id=1713284 -- Thanks, Sanju ___ Gluster-users mailing list Gluster-users@gluster.org

Re: [Gluster-users] add-brick: failed: Commit failed

2019-05-20 Thread Sanju Rakonde
David, can you please attach the glusterd logs? As the error message says, the commit failed on the arbiter node, so we might be able to find some issue on that node. On Mon, May 20, 2019 at 10:10 AM Nithya Balachandran wrote: > > > On Fri, 17 May 2019 at 06:01, David Cunningham > wrote: > >> Hello, >>

Re: [Gluster-users] [Gluster-devel] Upgrade testing to gluster 6

2019-04-04 Thread Sanju Rakonde
We don't hit https://bugzilla.redhat.com/show_bug.cgi?id=1694010 while upgrading to glusterfs-6. We tested it in different setups and understood that this issue is seen because of some problem in the setup. Regarding the issue you have faced, can you please let us know which documentation you have

Re: [Gluster-users] Gluster 4.1 install on AKS (Azure)

2019-03-11 Thread Sanju Rakonde
On Tue, Mar 12, 2019 at 10:46 AM Sanju Rakonde wrote: > Hi Maurya, > > Can you please share the glusterd.log with us? It will be stored under > /var/log/glusterfs/ directory. > > Thanks, > Sanju > > On Mon, Mar 11, 2019 at 4:09 PM Maurya M wrote: > >> Hi All

Re: [Gluster-users] Gluster 4.1 install on AKS (Azure)

2019-03-11 Thread Sanju Rakonde
Hi Maurya, Can you please share the glusterd.log with us? It will be stored under /var/log/glusterfs/ directory. Thanks, Sanju On Mon, Mar 11, 2019 at 4:09 PM Maurya M wrote: > Hi All, > I am trying to install gluster 4.1 on 3 nodes on my AKS cluster using > gluster-kubernetes project. > >

Re: [Gluster-users] Not able to start glusterd

2019-03-05 Thread Sanju Rakonde
Abhishek, We need the below information to investigate this issue. 1. gluster --version 2. Please run glusterd in gdb, so that we can capture the backtrace. I see some rpc errors in the log, but a backtrace will be more helpful. To run glusterd in gdb, you need to start glusterd in gdb (i.e. gdb glusterd,
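The run-under-gdb step above can be sketched with a gdb command file as below. This is a hedged sketch: it assumes glusterd's debuginfo packages are installed and that the `-N` (no-daemon) flag is available to keep the process in the foreground; adjust the binary path to your distribution.

```shell
# Prepare a gdb command file: run glusterd in the foreground, then capture
# backtraces of all threads once it stops or crashes
cat > /tmp/glusterd-gdb.cmds <<'EOF'
run -N --log-level DEBUG
thread apply all bt
EOF
echo "on the affected node run: gdb -x /tmp/glusterd-gdb.cmds /usr/sbin/glusterd"
```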

Re: [Gluster-users] Fwd: Added bricks with wrong name and now need to remove them without destroying volume.

2019-03-01 Thread Sanju Rakonde
On Thu, Feb 28, 2019 at 9:48 PM Poornima Gurusiddaiah wrote: > > > On Thu, Feb 28, 2019, 8:44 PM Tami Greene wrote: > >> I'm missing some information about how the cluster volume creates the >> metadata allowing it to see and find the data on the bricks. I've been >> told not to write anything

Re: [Gluster-users] Getting timedout error while rebalancing

2019-02-08 Thread Sanju Rakonde
Hi Deepu, I can see multiple errors in glusterd log. [2019-02-06 13:22:21.012490] E [glusterd-rpc-ops.c:1429:__glusterd_commit_op_cbk] (-->/lib64/libgfrpc.so.0(+0xec20) [0x7f278d201c20] -->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0x7762a) [0x7f2781f1d62a]

Re: [Gluster-users] [Bugs] Bricks are going offline unable to recover with heal/start force commands

2019-01-24 Thread Sanju Rakonde
Self-heal Daemon on localhost N/A N/AY > 109550 > Self-heal Daemon on 192.168.3.6 N/A N/AY > 52557 > Self-heal Daemon on 192.168.3.15N/A N/AY > 16946 > > Task Status of Volume vol_3442e86b6d994a14de73f1b8c82cf0b8 > > --

Re: [Gluster-users] [Bugs] Bricks are going offline unable to recover with heal/start force commands

2019-01-24 Thread Sanju Rakonde
> trusted.glusterfs.dht=0x0001 > trusted.glusterfs.volume-id=0x15477f3622e84757a0ce9000b63fa849 > > sh-4.2# ls -la |wc -l > 86 > sh-4.2# pwd > > /var/lib/heketi/mounts/vg_d5f17487744584e3652d3ca943b0b91b/brick_e15c12cceae12c8ab7782dd57cf5b6c

Re: [Gluster-users] [Bugs] Bricks are going offline unable to recover with heal/start force commands

2019-01-24 Thread Sanju Rakonde
Can you please provide us the output of "getfattr -m -d -e hex " On Thu, Jan 24, 2019 at 12:18 PM Shaik Salam wrote: > Hi Sanju, > > Could you please have look my issue if you have time (atleast provide > workaround). > > BR > Salam > > > > From:Shaik Salam
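The xattr inspection asked for above can be sketched as below. This is a hedged sketch, not the thread's exact invocation: the commonly used full form is `getfattr -m . -d -e hex <brick-path>`, the brick path is a placeholder, and the demo sets a user.* xattr on a temp directory because trusted.* xattrs only exist on a real brick (and need root).

```shell
# Demonstrate the getfattr form on a temp dir; fall back if xattrs are unsupported
d=$(mktemp -d)
if command -v getfattr >/dev/null 2>&1 && setfattr -n user.demo -v hello "$d" 2>/dev/null; then
  getfattr -m . -d -e hex "$d"
else
  echo "xattr tools unavailable here; on a brick run: getfattr -m . -d -e hex /path/to/brick"
fi
```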

Re: [Gluster-users] [Bugs] Bricks are going offline unable to recover with heal/start force commands

2019-01-23 Thread Sanju Rakonde
sconnecting now > [2019-01-21 08:23:25.346500] W [rpc-clnt.c:1753:rpc_clnt_submit] > 0-glusterfs: error returned while attempting to connect to host:(null), > port:0 > > > Enabled DEBUG mode for brick level. But nothing writing to brick log. > > gluster volume set vol_3442e

Re: [Gluster-users] [Bugs] Bricks are going offline unable to recover with heal/start force commands

2019-01-22 Thread Sanju Rakonde
Hi Shaik, Can you please provide us complete glusterd and cmd_history logs from all the nodes in the cluster? Also please paste output of the following commands (from all nodes): 1. gluster --version 2. gluster volume info 3. gluster volume status 4. gluster peer status 5. ps -ax | grep
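The information-gathering checklist above can be sketched as a small collection script. This is a hedged helper, not part of the thread: the output directory and file names are invented, and the guard lets it degrade gracefully where the gluster CLI is absent, writing a note instead of the real output.

```shell
# Gather the requested diagnostics into one directory per node
collect() {
  out=$1; shift
  if command -v gluster >/dev/null 2>&1; then
    gluster "$@" > "$out" 2>&1 || true
  else
    echo "gluster CLI not found" > "$out"
  fi
}
d=$(mktemp -d)
collect "$d/version.txt"     --version
collect "$d/vol-info.txt"    volume info
collect "$d/vol-status.txt"  volume status
collect "$d/peer-status.txt" peer status
ps ax | grep '[g]luster' > "$d/ps.txt" || true
ls "$d"
```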

Re: [Gluster-users] How to check running transactions in gluster?

2018-11-26 Thread Sanju Rakonde
Hi Jeevan, You might be hitting https://bugzilla.redhat.com/show_bug.cgi?id=1635820 Were any of the volumes in "Created" state, when the peer reject issue is seen? Thanks, Sanju On Mon, Nov 26, 2018 at 9:35 AM Jeevan Patnaik wrote: > Hi Atin, > > Thanks for the details. I think the issue is

Re: [Gluster-users] Can't enable shared_storage with Glusterv5.0

2018-11-06 Thread Sanju Rakonde
Hi David, With commit 44e4db, the shared-storage functionality broke. The test case we added couldn't catch this, since our .t framework simulates a cluster environment in a single node. We will send out a patch for this soon (into the release-5 branch as well). On Tue, Nov 6, 2018 at 4:15 PM David

Re: [Gluster-users] [Gluster-devel] Crash in glusterfs!!!

2018-09-21 Thread Sanju Rakonde
Hi Abhishek, Can you please share the output of "t a a bt" with us? Thanks, Sanju On Fri, Sep 21, 2018 at 2:55 PM, ABHISHEK PALIWAL wrote: > > We have seen a SIGSEGV crash on glusterfs process on kernel restart at > start up. > > (gdb) bt > #0 0x3fffad4463b0 in _IO_unbuffer_all () at