Re: [Gluster-users] How to diagnose volume rebalance failure?

2015-12-14 Thread Nithya Balachandran
Hi, Can you send us the rebalance log? Regards, Nithya - Original Message - > From: "PuYun" > To: "gluster-users" > Sent: Monday, December 14, 2015 11:33:40 AM > Subject: Re: [Gluster-users] How to diagnose volume rebalance failure? > > Here is the tail of the failed rebalance log, an

Re: [Gluster-users] DHT error

2016-06-07 Thread Nithya Balachandran
On Tue, Jun 7, 2016 at 2:01 PM, Emmanuel Dreyfus wrote: > Hello > > I get this message in the log, but I have trouble to figure > what it means. Any hint? > > [2016-06-07 06:41:17.366490] I [MSGID: 109036] > [dht-common.c:8173:dht_log_new_layout_for_dir_selfheal] 0-gfs-dht: Setting > layout of /f

Re: [Gluster-users] Disk failed, how do I remove brick?

2016-06-14 Thread Nithya Balachandran
On Fri, Jun 10, 2016 at 1:25 AM, Phil Dumont wrote: > Just started trying gluster, to decide if we want to put it into > production. > > Running version 3.7.11-1 > > Replicated, distributed volume, two servers, 20 bricks per server: > > [root@storinator1 ~]# gluster volume status gv0 > Status of

Re: [Gluster-users] Rebalancing after adding larger bricks

2016-10-14 Thread Nithya Balachandran
On 11 October 2016 at 22:32, Jackie Tung wrote: > Joe, > > Thanks for that, that was educational. Gluster docs claim that since 3.7, > DHT hash ranges are weighted based on brick sizes by default: > > $ gluster volume get Option Value > > --

Re: [Gluster-users] Please help

2016-10-26 Thread Nithya Balachandran
On 26 October 2016 at 19:47, Leung, Alex (398C) wrote: > Does anyone have any idea to troubleshoot the following problem? > > > > Alex > > > Can you please provide the gluster client logs (in /var/log/glusterfs) and the gluster volume info? Regards, Nithya > > > [root@*pdsimg-6* alex]# rsync -a

Re: [Gluster-users] [Gluster-devel] Feedback on DHT option "cluster.readdir-optimize"

2016-11-10 Thread Nithya Balachandran
On 8 November 2016 at 20:21, Kyle Johnson wrote: > Hey there, > > We have a number of processes which daily walk our entire directory tree > and perform operations on the found files. > > Pre-gluster, this process was able to complete within 24 hours of > starting. After outgrowing that single

Re: [Gluster-users] Gluster File Abnormalities

2016-11-15 Thread Nithya Balachandran
Hi Kevin, On 15 November 2016 at 20:56, Kevin Leigeb wrote: > All - > > > > We recently moved from an old cluster running 3.7.9 to a new one running > 3.8.4. To move the data we rsync’d all files from the old gluster nodes > that were not in the .glusterfs directory and had a size of greater-tha

Re: [Gluster-users] Gluster File Abnormalities

2016-11-15 Thread Nithya Balachandran
constraints and they are considered to have been skipped. This behaviour was modified as part of http://review.gluster.org/#/c/12347. We now reset the size to 0. I'm afraid, if those files were overwritten by their linkto files, the only way forward would be to restore from a backup. Regards,

Re: [Gluster-users] Gluster File Abnormalities

2016-11-16 Thread Nithya Balachandran
formed. Thanks, Nithya > > > *From:* Nithya Balachandran [mailto:nbala...@redhat.com] > *Sent:* Tuesday, November 15, 2016 10:55 AM > *To:* Kevin Leigeb > > *Subject:* Re: [Gluster-users] Gluster File Abnormalities > > > > > > > > On 15 November 2016 at 2

Re: [Gluster-users] rebalance and volume commit hash

2017-01-24 Thread Nithya Balachandran
On 20 January 2017 at 01:15, Shyam wrote: > > > On 01/17/2017 11:40 AM, Piotr Misiak wrote: > >> >> 17 sty 2017 17:10 Jeff Darcy napisał(a): >> >>> >>> Do you think that is wise to run rebalance process manually on every brick with the actual commit hash value? I didn't do anythin

Re: [Gluster-users] Always writeable distributed volume

2017-02-01 Thread Nithya Balachandran
On 1 February 2017 at 19:30, Jesper Led Lauridsen TS Infra server wrote: > Arbiter, isn't that only used where you want replica, but same storage > space. > > I would like a distributed volume where I can write, even if one of the > bricks fail. No replication. > > DHT does not currently allow th

Re: [Gluster-users] Question about heterogeneous bricks

2017-02-21 Thread Nithya Balachandran
Hi, Ideally, both bricks in a replica set should be of the same size. Ravi, can you confirm? Regards, Nithya On 21 February 2017 at 16:05, Daniele Antolini wrote: > Hi Serkan, > > thanks a lot for the answer. > > So, if you are correct, in a distributed with replica environment the best > pra

Re: [Gluster-users] nfs-ganesha logs

2017-03-01 Thread Nithya Balachandran
On 1 March 2017 at 18:25, Soumya Koduri wrote: > I am not sure if there are any outstanding issues with exposing shard > volume via gfapi. CCin Krutika. > > On 02/28/2017 01:29 PM, Mahdi Adnan wrote: > >> Hi, >> >> >> We have a Gluster volume hosting VMs for ESXi exported via Ganesha. >> >> Im ge

Re: [Gluster-users] Cannot remove-brick/migrate data

2017-03-08 Thread Nithya Balachandran
On 8 March 2017 at 23:34, Jarsulic, Michael [CRI] < mjarsu...@bsd.uchicago.edu> wrote: > I am having issues with one of my systems that houses two bricks and want > to bring it down for maintenance. I was able to remove the first brick > successfully and committed the changes. The second brick is

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-20 Thread Nithya Balachandran
Hi, Do you know the GFIDs of the VM images which were corrupted? Regards, Nithya On 20 March 2017 at 20:37, Krutika Dhananjay wrote: > I looked at the logs. > > From the time the new graph (since the add-brick command you shared where > bricks 41 through 44 are added) is switched to (line 3011

Re: [Gluster-users] rebalance fix layout necessary

2017-04-04 Thread Nithya Balachandran
On 4 April 2017 at 12:33, Amudhan P wrote: > Hi, > > I have a query on rebalancing. > > let's consider following is my folder hierarchy. > > parent1-fol (parent folder) > |_ > class-fol-1 ( 1 st level subfolder) >|_ >

Re: [Gluster-users] rebalance fix layout necessary

2017-04-06 Thread Nithya Balachandran
.c:202:client_set_lk_version_cbk] > 2-gfs-vol-client-1045: Server lk version = 1 > > > Regards, > Amudhan > > On Tue, Apr 4, 2017 at 4:31 PM, Amudhan P wrote: > >> I mean time takes for listing folders and files? because of "rebalance >> fix layout&

Re: [Gluster-users] Rebalance info

2017-04-17 Thread Nithya Balachandran
On 17 April 2017 at 16:04, Gandalf Corvotempesta < gandalf.corvotempe...@gmail.com> wrote: > Let's assume a replica 3 cluster with 3 bricks used at 95% > > If I add 3 bricks more , a rebalance (in addition to the corruption :-) ) > will move some shards to the newly added bricks so that old bricks

Re: [Gluster-users] [Gluster-devel] Don't allow data loss via add-brick (was Re: Add single server)

2017-05-02 Thread Nithya Balachandran
On 2 May 2017 at 16:59, Shyam wrote: > Talur, > > Please wait for this fix before releasing 3.10.2. > > We will take in the change to either prevent add-brick in > sharded+distrbuted volumes, or throw a warning and force the use of --force > to execute this. > > IIUC, the problem is less the add

Re: [Gluster-users] Questions about the limitations on using Gluster Volume Tiering.

2017-05-04 Thread Nithya Balachandran
On 2 May 2017 at 01:01, Jeff Byers wrote: > Hello, > > We've been thinking about giving GlusterFS Tiering a try, but > had noticed the following limitations documented in the: > > Red Hat Gluster Storage 3.2 Administration Guide > > Limitations of arbitrated replicated volumes: > >

Re: [Gluster-users] [Gluster-devel] [Gluster-Maintainers] Release 3.11: Has been Branched (and pending feature notes)

2017-05-05 Thread Nithya Balachandran
We have one more blocker bug (opened today): https://bugzilla.redhat.com/show_bug.cgi?id=1448307 On 5 May 2017 at 15:31, Kaushal M wrote: > On Thu, May 4, 2017 at 6:40 PM, Kaushal M wrote: > > On Thu, May 4, 2017 at 4:38 PM, Niels de Vos wrote: > >> On Thu, May 04, 2017 at 03:39:58PM +0530, P

Re: [Gluster-users] Remove-brick failed

2017-05-05 Thread Nithya Balachandran
Hi, You need to check the rebalance logs (glu_linux_dr2_oracle-rebalance.log) on glustoretst03.net.dr.dk and glustoretst04.net.dr.dk to see what went wrong. Regards, Nithya On 4 May 2017 at 11:46, Jesper Led Lauridsen TS Infra server wrote: > Hi > > I'm tr

Re: [Gluster-users] Reliability issues with Gluster 3.10 and shard

2017-05-15 Thread Nithya Balachandran
On 15 May 2017 at 11:01, Benjamin Kingston wrote: > I resolved this with the following settings, particularly disabling > features.ctr-enabled > That's odd. CTR should be enabled for tiered volumes. Was it enabled by default? > > Volume Name: storage2 > Type: Distributed-Replicate > Volume ID:

Re: [Gluster-users] Deleting large files on sharded volume hangs and doesn't delete shards

2017-05-17 Thread Nithya Balachandran
I don't think we have tested shards with a tiered volume. Do you see such issues on non-tiered sharded volumes? Regards, Nithya On 18 May 2017 at 00:51, Walter Deignan wrote: > I have a reproducible issue where attempting to delete a file large enough > to have been sharded hangs. I can't kill

Re: [Gluster-users] gluster remove-brick problem

2017-05-19 Thread Nithya Balachandran
Hi, The rebalance could have failed because of any one of several reasons. You would need to check the rebalance log for the volume to figure out why it failed in this case. This should be /var/log/glusterfs/data-rebalance.log on bigdata-dlp-server00.xg01. I can take a look at the log if you send
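
For anyone hitting the same problem, a minimal sketch of the log check described above, assuming the default log location and the volume/log names from this report:

  # tail -n 100 /var/log/glusterfs/data-rebalance.log
  # grep " E " /var/log/glusterfs/data-rebalance.log | tail -n 20   # error-level entries usually explain why migration failed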

Re: [Gluster-users] Distributed re-balance issue

2017-05-24 Thread Nithya Balachandran
On 24 May 2017 at 20:02, Mohammed Rafi K C wrote: > > > On 05/23/2017 08:53 PM, Mahdi Adnan wrote: > > Hi, > > > I have a distributed volume with 6 bricks, each have 5TB and it's hosting > large qcow2 VM disks (I know it's reliable but it's not important data) > > I started with 5 bricks and then

Re: [Gluster-users] Distributed re-balance issue

2017-05-24 Thread Nithya Balachandran
l > fill in the next half hour or so. > > attached are the logs for all 6 bricks. > > Hi, Just to clarify, did you run a rebalance (gluster volume rebalance start) or did you only run remove-brick ? -- > > Respectfully > *Mahdi A. Mahdi* > > ---

Re: [Gluster-users] Distributed re-balance issue

2017-05-24 Thread Nithya Balachandran
On 24 May 2017 at 22:45, Nithya Balachandran wrote: > > > On 24 May 2017 at 21:55, Mahdi Adnan wrote: > >> Hi, >> >> >> Thank you for your response. >> >> I have around 15 files, each is 2TB qcow. >> >> One brick reached 96% so i re

Re: [Gluster-users] Distributed re-balance issue

2017-05-25 Thread Nithya Balachandran
gration is complete, it looked like nothing was happening. > -- > > Respectfully > *Mahdi A. Mahdi* > > ------ > *From:* Nithya Balachandran > *Sent:* Wednesday, May 24, 2017 8:16:53 PM > *To:* Mahdi Adnan > *Cc:* Mohammed Rafi K C; gluste

Re: [Gluster-users] FW: ATTN: nbalacha IRC - Gluster - BlackoutWNCT requested info for 0byte file issue

2017-05-31 Thread Nithya Balachandran
CCing Ravi (arbiter) , Poornima and Raghavendra (parallel readdir) Hi Joshua, I had a quick look at the files you sent across. To summarize the issue, you see empty linkto files on the mount point. >From the logs I see that parallel readdir is enabled for this volume: performance.readdir-ahea
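
For readers with similar symptoms, a hedged sketch of how the option mentioned above can be checked and, only as a test, turned off (VOLNAME is a placeholder):

  # gluster volume get VOLNAME performance.parallel-readdir
  # gluster volume set VOLNAME performance.parallel-readdir off   # mitigation to test, not a confirmed fix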

[Gluster-users] Gluster Documentation Feedback

2017-06-14 Thread Nithya Balachandran
Hi, We are looking at improving our documentation (http://gluster.readthedocs.io/en/latest/) and would like your feedback. Please let us know what would make the documentation more useful by answering a few questions: - Which guides do you use (admin, developer)? - How easy is it to find

Re: [Gluster-users] Gluster Documentation Feedback

2017-06-19 Thread Nithya Balachandran
Gentle reminder ... On 15 June 2017 at 10:43, Nithya Balachandran wrote: > Hi, > > We are looking at improving our documentation ( > http://gluster.readthedocs.io/en/latest/) and would like your feedback. > > Please let us know what would make the documentation more useful by

Re: [Gluster-users] [Gluster-devel] [Gluster-Maintainers] Release 3.11.1: Scheduled for 20th of June

2017-06-23 Thread Nithya Balachandran
On 22 June 2017 at 22:44, Pranith Kumar Karampuri wrote: > > > On Wed, Jun 21, 2017 at 9:12 PM, Shyam wrote: > >> On 06/21/2017 11:37 AM, Pranith Kumar Karampuri wrote: >> >>> >>> >>> On Tue, Jun 20, 2017 at 7:37 PM, Shyam >> > wrote: >>> >>> Hi, >>> >>> Relea

Re: [Gluster-users] Rebalance task fails

2017-07-09 Thread Nithya Balachandran
On 7 July 2017 at 15:42, Szymon Miotk wrote: > Hello everyone, > > > I have problem rebalancing Gluster volume. > Gluster version is 3.7.3. > My 1x3 replicated volume become full, so I've added three more bricks > to make it 2x3 and wanted to rebalance. > But every time I start rebalancing, it fa

Re: [Gluster-users] Rebalance task fails

2017-07-13 Thread Nithya Balachandran
; Could someone explain what is index in Gluster? > Unfortunately index is popular word, so googling is not very helpful. > > Best regards, > Szymon Miotk > > On Sun, Jul 9, 2017 at 6:37 PM, Nithya Balachandran > wrote: > > > > On 7 July 2017 at 15:4

Re: [Gluster-users] Rebalance task fails

2017-07-13 Thread Nithya Balachandran
The index here is simply the number of nodes on which the rebalance process should be running - it is used to track the rebalance status on all nodes. Best regards, > Szymon Miotk > > On Thu, Jul 13, 2017 at 10:12 AM, Nithya Balachandran > wrote: > > Hi Szymon

Re: [Gluster-users] Hot Tier

2017-07-30 Thread Nithya Balachandran
Milind and Hari, Can you please take a look at this? Thanks, Nithya On 31 July 2017 at 05:12, Dmitri Chebotarov <4dim...@gmail.com> wrote: > Hi > > I'm looking for an advise on hot tier feature - how can I tell if the hot > tier is working? > > I've attached replicated-distributed hot tier to a

Re: [Gluster-users] Rebalance failed on Distributed Disperse volume based on 3.12.14 version

2018-10-03 Thread Nithya Balachandran
On 1 October 2018 at 15:35, Mauro Tridici wrote: > Good morning Ashish, > > your explanations are always very useful, thank you very much: I will > remember these suggestions for any future needs. > Anyway, during the week-end, the remove-brick procedures ended > successfully and we were able to

Re: [Gluster-users] Rebalance failed on Distributed Disperse volume based on 3.12.14 version

2018-10-03 Thread Nithya Balachandran
l check every files on each removed bricks. > > So, if I understand, I can proceed with deletion of directories and files > left on the bricks only if each file have T tag, right? > > Thank you in advance, > Mauro > > > Il giorno 03 ott 2018, alle ore 16:49, Nithya Balach

Re: [Gluster-users] Rebalance failed on Distributed Disperse volume based on 3.12.14 version

2018-10-04 Thread Nithya Balachandran
Hi Mauro, The files on s04 and s05 can be deleted safely as long as those bricks have been removed from the volume and their brick processes are not running. .glusterfs/indices/xattrop/xattrop-* are links to files that need to be healed. .glusterfs/quarantine/stub-----00
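
A rough sketch of the safety checks described above (brick paths follow the s04/s05 examples in this thread); the DHT linkto files referred to as "T" files earlier are the zero-byte entries with mode ---------T:

  # gluster volume status VOLNAME                          # the removed bricks should no longer be listed
  # ps aux | grep glusterfsd | grep /gluster/mnt           # no brick process should still export the old path
  # find /gluster/mntX/brick -type f -perm /1000 -size 0   # remaining linkto files on the old brick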

Re: [Gluster-users] Found anomalies in ganesha-gfapi.log

2018-10-04 Thread Nithya Balachandran
On 4 October 2018 at 17:39, Renaud Fortier wrote: > Yes ! > > 2 clients using the same export connected to the same IP. Do you see > something wrong with that ? > > Thank you > Not necessarily wrong. This message shows up if DHT does not find a complete layout set on the directory when it does a

Re: [Gluster-users] Found anomalies in ganesha-gfapi.log

2018-10-04 Thread Nithya Balachandran
the > backup from only one client) I find it a little worrying even if it’s an > INFO log level. > > > > *From:* Nithya Balachandran [mailto:nbala...@redhat.com] > *Sent:* 4 October 2018 09:34 > *To:* Renaud Fortier > *Cc:* gluster-users@gluster.org > > *Subject:* Re

Re: [Gluster-users] Rebalance failed on Distributed Disperse volume based on 3.12.14 version

2018-10-08 Thread Nithya Balachandran
; Brick29: s02-stg:/gluster/mnt10/brick > Brick30: s03-stg:/gluster/mnt10/brick > Brick31: s01-stg:/gluster/mnt11/brick > Brick32: s02-stg:/gluster/mnt11/brick > Brick33: s03-stg:/gluster/mnt11/brick > Brick34: s01-stg:/gluster/mnt12/brick > Brick35: s02-stg:/gluster/mnt12/brick &

Re: [Gluster-users] Wrong volume size for distributed dispersed volume on 4.1.5

2018-10-16 Thread Nithya Balachandran
Hi, On 16 October 2018 at 18:20, wrote: > Hi everybody, > > I have created a distributed dispersed volume on 4.1.5 under centos7 like > this a few days ago: > > gluster volume create data_vol1 disperse-data 4 redundancy 2 transport tcp > \ > \ > gf-p-d-01.isec.foobar.com:/bricks/brick1/brick \

Re: [Gluster-users] Wrong volume size for distributed dispersed volume on 4.1.5

2018-10-16 Thread Nithya Balachandran
On 16 October 2018 at 20:04, wrote: > Hi, > > > > So we did a quick grep shared-brick-count > > > /var/lib/glusterd/vols/data_vol1/* > on all boxes and found that on 5 out of 6 boxes this was > shared-brick-count=0 for all bricks on remote boxes and 1 for local bricks. > > > > > > Is this the ex
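
For reference, the full check quoted above, using the volume name from this thread; as far as I understand it, a count of 1 for a node's own bricks and 0 for bricks on other nodes is the expected pattern, while local bricks on separate filesystems sharing a count greater than 1 is what shrinks the reported volume size:

  # grep shared-brick-count /var/lib/glusterd/vols/data_vol1/*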

Re: [Gluster-users] Should I be using gluster 3 or gluster 4?

2018-11-05 Thread Nithya Balachandran
On 6 November 2018 at 12:24, Jeevan Patnaik wrote: > Hi Vlad, > > I'm still confused of gluster releases. :( > Is 3.13 an official gluster release? It's not mentioned in > www.gluster.org/release-schedule > > 3.13 is EOL. It was a short term release. Which is more stable 3.13.2 or 3.12.6 or 4.1.

Re: [Gluster-users] distribute remove-brick has started migrating the wrong brick (glusterfs 3.8.13)

2018-12-11 Thread Nithya Balachandran
This is the current behaviour of rebalance and nothing to be concerned about - it will migrate data on all bricks on the nodes which host the bricks being removed. The data on the removed bricks will be moved to other bricks, some of the data on the other bricks on the node will just be moved to o

Re: [Gluster-users] Invisible files

2018-12-18 Thread Nithya Balachandran
On Fri, 14 Dec 2018 at 19:10, Raghavendra Gowdappa wrote: > > > On Fri, Dec 14, 2018 at 6:38 PM Lindolfo Meira > wrote: > >> It happened to me using gluster 5.0, on OpenSUSE Leap 15, during a >> benchmark with IOR: the volume would seem normally mounted, but I was >> unable to overwrite files, a

Re: [Gluster-users] distribute remove-brick has started migrating the wrong brick (glusterfs 3.8.13)

2018-12-18 Thread Nithya Balachandran
rick > Brick9: 10.0.0.42:/export/md3/brick > Brick10: 10.0.0.43:/export/md1/brick > Options Reconfigured: > cluster.rebal-throttle: aggressive > cluster.min-free-disk: 1% > transport.address-family: inet > performance.readdir-ahead: on > nfs.disable: on > > > Best, > > S

Re: [Gluster-users] distribute remove-brick has started migrating the wrong brick (glusterfs 3.8.13)

2018-12-18 Thread Nithya Balachandran
> > > Steve > > On Tue, 18 Dec 2018 at 15:37, Nithya Balachandran > wrote: > >> >> >> On Tue, 18 Dec 2018 at 14:56, Stephen Remde >> wrote: >> >>> Nithya, >>> >>> I've realised, I will not have enough space on the oth

Re: [Gluster-users] [Stale file handle] in shard volume

2019-01-02 Thread Nithya Balachandran
On Mon, 31 Dec 2018 at 01:27, Olaf Buitelaar wrote: > Dear All, > > till now a selected group of VM's still seem to produce new stale file's > and getting paused due to this. > I've not updated gluster recently, however i did change the op version > from 31200 to 31202 about a week before this is

Re: [Gluster-users] [Stale file handle] in shard volume

2019-01-03 Thread Nithya Balachandran
; criteria i should use to determine if a file is stale or not? > these criteria are just based observations i made, moving the stale files > manually. After removing them i was able to start the VM again..until some > time later it hangs on another stale shard file unfortunate. > > Thanks O

Re: [Gluster-users] update to 4.1.6-1 and fix-layout failing

2019-01-04 Thread Nithya Balachandran
On Fri, 4 Jan 2019 at 15:48, mohammad kashif wrote: > Hi > > I have updated our distributed gluster storage from 3.12.9-1 to 4.1.6-1. > The existing cluster had seven servers totalling in around 450 TB. OS is > Centos7. The update went OK and I could access files. > Then I added two more servers

Re: [Gluster-users] update to 4.1.6-1 and fix-layout failing

2019-01-07 Thread Nithya Balachandran
t one hour and I can't see any > new directories being created. > > Thanks > > Kashif > > > On Fri, Jan 4, 2019 at 10:42 AM Nithya Balachandran > wrote: > >> >> >> On Fri, 4 Jan 2019 at 15:48, mohammad kashif >> wrote: >> >>>

Re: [Gluster-users] A broken file that can not be deleted

2019-01-10 Thread Nithya Balachandran
On Wed, 9 Jan 2019 at 19:49, Dmitry Isakbayev wrote: > I am seeing a broken file that exists on 2 out of 3 nodes. The > application trying to use the file throws file permissions error. ls, rm, > mv, touch all throw "Input/output error" > > $ ls -la > ls: cannot access .download_suspensions.mem

Re: [Gluster-users] Input/output error on FUSE log

2019-01-10 Thread Nithya Balachandran
I don't see write failures in the log but I do see fallocate failing with EIO. [2019-01-07 19:16:44.846187] W [MSGID: 109011] [dht-layout.c:163:dht_layout_search] 0-gv1-dht: no subvolume for hash (value) = 1285124113 [2019-01-07 19:16:44.846194] D [MSGID: 0] [dht-helper.c:969:dht_subvol_get_hashed

Re: [Gluster-users] invisible files in some directory

2019-01-18 Thread Nithya Balachandran
On Fri, 18 Jan 2019 at 14:25, Mauro Tridici wrote: > Dear Users, > > I’m facing with a new problem on our gluster volume (v. 3.12.14). > Sometime it happen that “ls” command execution, in a specified directory, > return empty output. > “ls” command output is empty, but I know that the involved di

Re: [Gluster-users] usage of harddisks: each hdd a brick? raid?

2019-01-22 Thread Nithya Balachandran
On Tue, 22 Jan 2019 at 11:42, Amar Tumballi Suryanarayan < atumb...@redhat.com> wrote: > > > On Thu, Jan 10, 2019 at 1:56 PM Hu Bert wrote: > >> Hi, >> >> > > We ara also using 10TB disks, heal takes 7-8 days. >> > > You can play with "cluster.shd-max-threads" setting. It is default 1 I >> > > th

Re: [Gluster-users] Files losing permissions in GlusterFS 3.12

2019-01-27 Thread Nithya Balachandran
On Fri, 25 Jan 2019 at 20:51, Gudrun Mareike Amedick < g.amed...@uni-luebeck.de> wrote: > Hi all, > > we have a problem with a distributed dispersed volume (GlusterFS 3.12). We > have files that lost their permissions or gained sticky bits. The files > themselves seem to be okay. > > It looks like

Re: [Gluster-users] Files losing permissions in GlusterFS 3.12

2019-01-31 Thread Nithya Balachandran
On Wed, 30 Jan 2019 at 19:12, Gudrun Mareike Amedick < g.amed...@uni-luebeck.de> wrote: > Hi, > > a bit of additional info inline. On Monday, 28.01.2019 at 10:23 +0100, > Frank Ruehlemann wrote: > On Monday, 28.01.2019 at 09:50 +0530, Nithya Balachandran wrote: > &g

Re: [Gluster-users] gluster remove-brick

2019-02-03 Thread Nithya Balachandran
Hi, The status shows quite a few failures. Please check the rebalance logs to see why that happened. We can decide what to do based on the errors. Once you run a commit, the brick will no longer be part of the volume and you will not be able to access those files via the client. Do you have suffic
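
As a rough outline of the flow described above (volume and brick names are placeholders): check the status and the rebalance log first, and commit only once the failures are understood, because the commit removes the brick whether or not files were migrated:

  # gluster volume remove-brick VOLNAME HOST:/path/to/brick status
  # gluster volume remove-brick VOLNAME HOST:/path/to/brick commit   # only after investigating the failures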

Re: [Gluster-users] gluster remove-brick

2019-02-04 Thread Nithya Balachandran
tatus and for very long time there > was no failures and then at some point these 17000 failures appeared and it > stayed like that. > > Thanks > > Kashif > > > > > > Let me explain a little bit of background. > > > On Mon, Feb 4, 2019 at 5:09 AM Nith

Re: [Gluster-users] Getting timedout error while rebalancing

2019-02-05 Thread Nithya Balachandran
Hi, Please provide the exact step at which you are seeing the error. It would be ideal if you could copy-paste the command and the error. Regards, Nithya On Tue, 5 Feb 2019 at 15:24, deepu srinivasan wrote: > HI everyone. I am getting "Error : Request timed out " while doing > rebalance . I h

Re: [Gluster-users] Getting timedout error while rebalancing

2019-02-05 Thread Nithya Balachandran
566 0 201completed > 0:00:08 > > Is the rebalancing option working fine? Why did gluster throw the error > saying that "Error : Request timed out"? > .On Tue, Feb 5, 2019 at 4:23 PM Nithya Balachandran > wrote: > >> Hi, >> Pleas

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-06 Thread Nithya Balachandran
Hi Artem, Do you still see the crashes with 5.3? If yes, please try mounting the volume using the mount option lru-limit=0 and see if that helps. We are looking into the crashes and will update when we have a fix. Also, please provide the gluster volume info for the volume in question. Regards, Nithya

Re: [Gluster-users] gluster 5.3: transport endpoint gets disconnected - Assertion failed: GF_MEM_TRAILER_MAGIC

2019-02-06 Thread Nithya Balachandran
Hi, The client logs indicate that the mount process has crashed. Please try mounting the volume with the mount option lru-limit=0 and see if it still crashes. Thanks, Nithya On Thu, 24 Jan 2019 at 12:47, Hu Bert wrote: > Good morning, > > we currently transfer some data to a new glusterfs vo
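
For anyone wanting to try the workaround above, a sketch of how the option is typically passed to a FUSE mount (server, volume and mount point are placeholders):

  # mount -t glusterfs -o lru-limit=0 server1:/VOLNAME /mnt/glusterfs
  # or, as an fstab entry:
  server1:/VOLNAME  /mnt/glusterfs  glusterfs  defaults,_netdev,lru-limit=0  0 0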

Re: [Gluster-users] gluster 5.3: transport endpoint gets disconnected - Assertion failed: GF_MEM_TRAILER_MAGIC

2019-02-06 Thread Nithya Balachandran
"reaching this limit (0 means 'unlimited')", }, > > This seems to be the default already? Set it explicitly? > > Regards, > Hubert > > Am Mi., 6. Feb. 2019 um 09:26 Uhr schrieb Nithya Balachandran > : > > > > Hi, > > > &

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-07 Thread Nithya Balachandran
dir: on >>> network.inode-lru-limit: 50 >>> performance.md-cache-timeout: 600 >>> performance.cache-invalidation: on >>> performance.stat-prefetch: on >>> features.cache-invalidation-timeout: 600 >>> features.cache-invalidation: on >>> cluster.readdir

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-08 Thread Nithya Balachandran
>> performance.cache-invalidation: on >>> performance.stat-prefetch: on >>> features.cache-invalidation-timeout: 600 >>> features.cache-invalidation: on >>> cluster.readdir-optimize: on >>> performance.io-thread-count: 32 >>> server.event

Re: [Gluster-users] Message repeated over and over after upgrade from 4.1 to 5.3: W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fd966fcd329] -->/us

2019-02-12 Thread Nithya Balachandran
l Robot LLC >>>>>> beerpla.net | +ArtemRussakovskii >>>>>> <https://plus.google.com/+ArtemRussakovskii> | @ArtemR >>>>>> <http://twitter.com/ArtemR> >>>>>> >>>>>> >>>>>> On Fri,

Re: [Gluster-users] Files on Brick not showing up in ls command

2019-02-13 Thread Nithya Balachandran
On Tue, 12 Feb 2019 at 08:30, Patrick Nixon wrote: > The files are being written to via the glusterfs mount (and read on the > same client and a different client). I try not to do anything on the nodes > directly because I understand that can cause weirdness. As far as I can > tell, there haven

Re: [Gluster-users] Files on Brick not showing up in ls command

2019-02-13 Thread Nithya Balachandran
f > balancing the new brick and will resync the files onto the full gluster > volume when that completes > > On Wed, Feb 13, 2019, 10:28 PM Nithya Balachandran > wrote: > >> >> >> On Tue, 12 Feb 2019 at 08:30, Patrick Nixon wrote: >> >>> The

Re: [Gluster-users] gluster 5.3: file or directory not read-/writeable, although it exists - cache?

2019-02-19 Thread Nithya Balachandran
On Tue, 19 Feb 2019 at 15:18, Hu Bert wrote: > Hello @ll, > > one of our backend developers told me that, in the tomcat logs, he > sees errors that directories on a glusterfs mount aren't readable. > Within tomcat the errors look like this: > > 2019-02-19 07:39:27,124 WARN Path > /data/repositor

Re: [Gluster-users] / - is in split-brain

2019-03-19 Thread Nithya Balachandran
Hi, What is the output of the gluster volume info ? Thanks, Nithya On Wed, 20 Mar 2019 at 01:58, Pablo Schandin wrote: > Hello all! > > I had a volume with only a local brick running vms and recently added a > second (remote) brick to the volume. After adding the brick, the heal > command repo

Re: [Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-03-19 Thread Nithya Balachandran
Hi Artem, I think you are running into a different crash. The ones reported which were prevented by turning off write-behind are now fixed. We will need to look into the one you are seeing to see why it is happening. Regards, Nithya On Tue, 19 Mar 2019 at 20:25, Artem Russakovskii wrote: > Th

Re: [Gluster-users] .glusterfs GFID links

2019-03-20 Thread Nithya Balachandran
On Wed, 20 Mar 2019 at 22:59, Jim Kinney wrote: > I have half a zillion broken symlinks in the .glusterfs folder on 3 of 11 > volumes. It doesn't make sense to me that a GFID should linklike some of > the ones below: > > /data/glusterfs/home/brick/brick/.glusterfs/9e/75/9e75a16f-fe4f-411e-937d-1a

Re: [Gluster-users] Transport endpoint is not connected failures in

2019-03-27 Thread Nithya Balachandran
On Wed, 27 Mar 2019 at 21:47, wrote: > Hello Amar and list, > > > > I wanted to follow-up to confirm that upgrading to 5.5 seem to fix the > “Transport endpoint is not connected failures” for us. > > > > We did not have any of these failures in this past weekend backups cycle. > > > > Thank you v

Re: [Gluster-users] Prioritise local bricks for IO?

2019-03-28 Thread Nithya Balachandran
On Wed, 27 Mar 2019 at 20:27, Poornima Gurusiddaiah wrote: > This feature is not under active development as it was not used widely. > AFAIK its not supported feature. > +Nithya +Raghavendra for further clarifications. > This is not actively supported - there has been no work done on this featu

Re: [Gluster-users] Inconsistent issues with a client

2019-03-28 Thread Nithya Balachandran
Hi, If you know which directories are problematic, please check and see if the permissions on them are correct on the individual bricks. Please also provide the following: - *gluster volume info* for the volume - The gluster version you are running regards, Nithya On Wed, 27 Mar 2019 at

Re: [Gluster-users] Extremely slow Gluster performance

2019-04-23 Thread Nithya Balachandran
Hi Patrick, Did this start only after the upgrade? How do you determine which brick process to kill? Are there a lot of files to be healed on the volume? Can you provide a tcpdump of the slow listing from a separate test client mount ? 1. Mount the gluster volume on a different mount point th
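
A hedged sketch of the kind of capture requested above; the interface, filter and file names here are assumptions, not an agreed procedure:

  # mount -t glusterfs server1:/VOLNAME /mnt/gluster-test          # separate test mount point
  # tcpdump -i any -s 0 -w /var/tmp/slow-listing.pcap tcp and not port 22 &
  # time ls -l /mnt/gluster-test/some/slow/directory
  # kill %1                                                        # stop the capture once the listing returns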

Re: [Gluster-users] Cannot see all data in mount

2019-05-15 Thread Nithya Balachandran
Hi Paul, A few questions: Which version of gluster are you using? Did this behaviour start recently? As in were the contents of that directory visible earlier? Regards, Nithya On Wed, 15 May 2019 at 18:55, Paul van der Vlis wrote: > Hello Strahil, > > Thanks for your answer. I don't find the

Re: [Gluster-users] Cannot see all data in mount

2019-05-15 Thread Nithya Balachandran
On Thu, 16 May 2019 at 03:05, Paul van der Vlis wrote: > Op 15-05-19 om 15:45 schreef Nithya Balachandran: > > Hi Paul, > > > > A few questions: > > Which version of gluster are you using? > > On the server and some clients: glusterfs 4.1.2 > On a new c

Re: [Gluster-users] Cannot see all data in mount

2019-05-16 Thread Nithya Balachandran
On Thu, 16 May 2019 at 14:17, Paul van der Vlis wrote: > Op 16-05-19 om 05:43 schreef Nithya Balachandran: > > > > > > On Thu, 16 May 2019 at 03:05, Paul van der Vlis > <mailto:p...@vandervlis.nl>> wrote: > > > > Op 15-05-19 om 15:45 s

Re: [Gluster-users] add-brick: failed: Commit failed

2019-05-19 Thread Nithya Balachandran
On Fri, 17 May 2019 at 06:01, David Cunningham wrote: > Hello, > > We're adding an arbiter node to an existing volume and having an issue. > Can anyone help? The root cause error appears to be > "----0001: failed to resolve (Transport > endpoint is not connected)", as

Re: [Gluster-users] remove-brick failure on distributed with 5.6

2019-05-24 Thread Nithya Balachandran
Hi Brandon, Please send the following: 1. the gluster volume info 2. Information about which brick was removed 3. The rebalance log file for all nodes hosting removed bricks. Regards, Nithya On Fri, 24 May 2019 at 19:33, Ravishankar N wrote: > Adding a few DHT folks for some possible suggest

Re: [Gluster-users] Memory leak in glusterfs

2019-06-05 Thread Nithya Balachandran
Hi, Writing to a volume should not affect glusterd. The stack you have shown in the valgrind looks like the memory used to initialise the structures glusterd uses and will free only when it is stopped. Can you provide more details to what it is you are trying to test? Regards, Nithya On Tue, 4

Re: [Gluster-users] Memory leak in glusterfs

2019-06-06 Thread Nithya Balachandran
the below script to see the memory increase whihle the script is > above script is running in background. > > *ps_mem.py* > > I am attaching the script files as well as the result got after testing > the scenario. > > On Wed, Jun 5, 2019 at 7:23 PM Nithya Balachandran > wro

Re: [Gluster-users] Memory leak in glusterfs

2019-06-06 Thread Nithya Balachandran
why contacted to glusterfs community. > > Regards, > Abhishek > > On Thu, Jun 6, 2019, 16:08 Nithya Balachandran > wrote: > >> Hi Abhishek, >> >> I am still not clear as to the purpose of the tests. Can you clarify why >> you are using valgrind and why y

Re: [Gluster-users] Does replace-brick migrate data?

2019-06-07 Thread Nithya Balachandran
On Sat, 8 Jun 2019 at 01:29, Alan Orth wrote: > Dear Ravi, > > In the last week I have completed a fix-layout and a full INDEX heal on > this volume. Now I've started a rebalance and I see a few terabytes of data > going around on different bricks since yesterday, which I'm sure is good. > > Whil

Re: [Gluster-users] Removing subvolume from dist/rep volume

2019-06-26 Thread Nithya Balachandran
Hi, On Tue, 25 Jun 2019 at 15:26, Dave Sherohman wrote: > I have a 9-brick, replica 2+A cluster and plan to (permanently) remove > one of the three subvolumes. I think I've worked out how to do it, but > want to verify first that I've got it right, since downtime or data loss > would be Bad Th

Re: [Gluster-users] Removing subvolume from dist/rep volume

2019-06-26 Thread Nithya Balachandran
On Thu, 27 Jun 2019 at 12:17, Nithya Balachandran wrote: > Hi, > > > On Tue, 25 Jun 2019 at 15:26, Dave Sherohman wrote: > >> I have a 9-brick, replica 2+A cluster and plan to (permanently) remove >> one of the three subvolumes. I think I've worked out how to do

Re: [Gluster-users] Removing subvolume from dist/rep volume

2019-06-28 Thread Nithya Balachandran
On Fri, 28 Jun 2019 at 14:34, Dave Sherohman wrote: > On Thu, Jun 27, 2019 at 12:17:10PM +0530, Nithya Balachandran wrote: > > On Tue, 25 Jun 2019 at 15:26, Dave Sherohman wrote: > > > My objective is to remove nodes B and C entirely. > > > > > > First up is

Re: [Gluster-users] Removing subvolume from dist/rep volume

2019-07-01 Thread Nithya Balachandran
. Ravi and Krutika, please take a look at the other files. Regards, Nithya On Fri, 28 Jun 2019 at 19:56, Dave Sherohman wrote: > On Thu, Jun 27, 2019 at 12:17:10PM +0530, Nithya Balachandran wrote: > > There are some edge cases that may prevent a file from being migrated > > duri

Re: [Gluster-users] Parallel process hang on gluster volume

2019-07-04 Thread Nithya Balachandran
Did you see this behaviour with previous Gluster versions? Regards, Nithya On Wed, 3 Jul 2019 at 21:41, wrote: > Am I alone having this problem ? > > - Mail original - > De: n...@furyweb.fr > À: "gluster-users" > Envoyé: Vendredi 21 Juin 2019 09:48:47 > Objet: [Gluster-users] Parallel

Re: [Gluster-users] Brick missing trusted.glusterfs.dht xattr

2019-07-18 Thread Nithya Balachandran
001 > > trusted.glusterfs.6f95525a-94d7-4174-bac4-e1a18fe010a2.xtime=0x5d307baa00023ec0 > trusted.glusterfs.quota.dirty=0x3000 > > trusted.glusterfs.quota.size.2=0x1b71d5279e763e320005cd53 > trusted.glusterfs.volume-id=0x6f95525a94d74174bac4e1a18fe010a2 > >

Re: [Gluster-users] Brick missing trusted.glusterfs.dht xattr

2019-07-24 Thread Nithya Balachandran
194T 62T 77% /storage > [root@gluster07 ~]# getfattr --absolute-names -m . -d -e hex > /mnt/raid6-storage/storage/ > # file: /mnt/raid6-storage/storage/ > > security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000 > trusted.gfid=0x0

Re: [Gluster-users] Brick missing trusted.glusterfs.dht xattr

2019-07-26 Thread Nithya Balachandran
> Pacific Climate Impacts Consortium <https://pacificclimate.org/> > University of Victoria, UH1 > PO Box 1800, STN CSC > Victoria, BC, V8W 2Y2 > Phone: +1-250-721-8432 > Email: matth...@uvic.ca > On 7/24/19 9:30 PM, Nithya Balachandran wrote: > > > > On Wed,

Re: [Gluster-users] Brick missing trusted.glusterfs.dht xattr

2019-07-28 Thread Nithya Balachandran
hing else, but I wasn't sure. > > Thanks, > -Matthew > > -- > Matthew Benstead > System Administrator > Pacific Climate Impacts Consortium <https://pacificclimate.org/> > University of Victoria, UH1 > PO Box 1800, STN CSC > Victoria, BC, V8W 2Y2 > Phone: +1-250
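
For readers following this thread, the xattr in question can be read straight off the brick root (the getfattr invocation already shown above), and a fix-layout is the usual way to have DHT write fresh directory layouts, which include that xattr; VOLNAME is a placeholder:

  # getfattr --absolute-names -m . -d -e hex /mnt/raid6-storage/storage/
  # gluster volume rebalance VOLNAME fix-layout start
  # gluster volume rebalance VOLNAME status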

Re: [Gluster-users] Gluster eating up a lot of ram

2019-07-29 Thread Nithya Balachandran
On Tue, 30 Jul 2019 at 05:44, Diego Remolina wrote: > Unfortunately statedump crashes on both machines, even freshly rebooted. > Do you see any statedump files in /var/run/gluster? This looks more like the gluster cli crashed. > > [root@ysmha01 ~]# gluster --print-statedumpdir > /var/run/glust
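
For reference, a minimal sketch of generating the dumps discussed above: the CLI writes them into the directory printed by --print-statedumpdir, and SIGUSR1 makes a running glusterfs client process dump its state without killing it (VOLNAME and the PID are placeholders):

  # gluster --print-statedumpdir
  /var/run/gluster
  # gluster volume statedump VOLNAME        # statedumps for the brick processes
  # kill -USR1 <pid-of-fuse-mount-process>  # statedump for a client (fuse) mount
  # ls /var/run/gluster/*.dump.*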

Re: [Gluster-users] Gluster eating up a lot of ram

2019-07-29 Thread Nithya Balachandran
the actual process or simply trigger the dump? Which > process should I kill? The brick process in the system or the fuse mount? > > Diego > > On Mon, Jul 29, 2019, 23:27 Nithya Balachandran > wrote: > >> >> >> On Tue, 30 Jul 2019 at 05:44, Diego Remolina wrote:
