Hi,
I checked the statedump and found some very high memory allocations.
grep -rwn "num_allocs" glusterdump.17317.dump.1605* | cut -d'=' -f2 | sort
30003616
30003616
3305
3305
36960008
36960008
38029944
38029944
38450472
38450472
39566824
39566824
4
I did check the lines on
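To see which allocation types those high counts belong to, the same grep can be widened to include a few lines of context before each match (a minimal sketch, assuming the usual statedump layout where a "usage-type" section header precedes the num_allocs line):
  # show the [xlator - usage-type ...] header that owns each num_allocs value
  grep -B4 -rw "num_allocs" glusterdump.17317.dump.1605* | less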
Hi Alvin,
As Yaniv also pointed out, you are running a very old version which is
difficult to debug and support.
So, you should upgrade your Gluster version if you want better performance,
fewer bugs, and better community support.
Having said that, I would suggest to use
Fixing this issue will require much more information than what you have
provided.
gluster v info
gluster v status
gluster v heal info
This is mainly to understand the volume type and the current status of the
bricks.
Knowing that, we can come up with the next set of steps to
- Original Message -
From: "K. de Jong"
To: gluster-users@gluster.org
Sent: Thursday, August 13, 2020 11:43:03 AM
Subject: [Gluster-users] 4 node cluster (best performance + redundancy setup?)
I posted something in the subreddit [1], but I saw the suggestion
elsewhere that the
Yes, you are right. You have to add 2 bricks on each server to expand the storage
capacity of this volume.
I am assuming the volume config is 4+2.
---
Ashish
- Original Message -
From: "Markus Kern"
To: gluster-users@gluster.org
Sent: Tuesday, January 14, 2020 2:26:55 PM
Subject:
- Original Message -
From: "Gudrun Mareike Amedick"
To: "Ashish Pandey"
Cc: "Gluster-users"
Sent: Friday, November 29, 2019 8:45:13 PM
Subject: Re: [Gluster-users] Trying to fix files that don't want to heal
Hi Ashish,
thanks for your reply. To f
Hey Gudrun,
Could you please try the scripts and see whether they resolve it?
We have written some scripts and they are in the final phase before getting merged -
https://review.gluster.org/#/c/glusterfs/+/23380/
You can find the steps to use these scripts in the README.md file.
---
Ashish
- Original
Hi Mauro,
Yes, it will take time to heal these files, and the time depends on the number of
files/dirs you have created and the amount of data you have written while the
bricks were down.
You can just run the following command and keep observing whether the count is
changing or not -
gluster volume
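A minimal sketch of watching the heal count, assuming a volume named <volname>:
  # print the per-brick entry counts every 60 seconds; the numbers should keep going down
  watch -n 60 'gluster volume heal <volname> info | grep "Number of entries"'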
Hi,
I am keeping Raghvendra in the loop and hope he can comment on the "Read being
scheduled as slow fop" part.
Other than that, I would request you to provide the following information to debug
this issue.
1 - Profile information of the volume. You can find the steps here -
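For reference, a minimal sketch of collecting profile information, assuming a volume named <volname>:
  gluster volume profile <volname> start     # enable profiling on all bricks
  # run the workload that shows the problem for a few minutes
  gluster volume profile <volname> info      # dump per-brick fop latencies and counts
  gluster volume profile <volname> stop      # disable profiling when done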
- Original Message -
From: "William Ferrell"
To: "Ashish Pandey"
Cc: gluster-users@gluster.org
Sent: Wednesday, September 25, 2019 7:12:47 PM
Subject: Re: [Gluster-users] Add single brick to dispersed volume?
Thanks for the quick reply!
So it sounds like I did misu
Hi William,
If you want to increase the capacity of a disperse volume, you have to add bricks
to your existing disperse volume.
The number of bricks you add should be a multiple of the existing
configuration.
For example:
If you have created a disperse volume like this -
gluster volume
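A minimal sketch of what such a create command typically looks like (volume name, server names, and brick paths are placeholders):
  # 4 data + 2 redundancy bricks = 6 bricks per disperse subvolume
  gluster volume create <volname> disperse 6 redundancy 2 \
      server{1..6}:/bricks/brick1
  # expanding later then means adding bricks in further sets of 6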
Hi Ashish,
Thanks for that. I guess it's not your responsibility, but do you know how
long it typically takes for new versions to reach the CentOS package system
after being released?
On Tue, 11 Jun 2019 at 17:15, Ashish Pandey < aspan...@redhat.com > wrote:
Hi David,
It should be any
Hi Felix,
As I don't have much expertise on the hardware side, I would not comment on that
segment.
Based on your "Requirements", I would say that it looks very feasible. As
you are talking about storing large files, I would say that a disperse volume
could be a good choice.
However,
Hi All,
Today we had the Gluster Community Meeting, and the minutes of the meeting can be
found at the following link -
https://github.com/gluster/community/blob/master/meetings/2019-07-09-Community_meeting.md
---
Ashish
[Calendar invite: Gluster Community Meeting organized by Ashish Pandey
(aspan...@redhat.com), 2019-07-09 11:30-12:30 Asia/Kolkata, status confirmed.]
Hi,
There are two different meetings based on regions.
You can find out the details here - https://github.com/gluster/community
---
Ashish
- Original Message -
From: "Strahil"
To: "gluster-users"
Sent: Thursday, June 27, 2019 12:49:39 PM
Subject: [Gluster-users] Regular
Hi,
Yes, you can stop/disable gluster-ta-volume.service using the systemctl command.
I will also check why it is even trying to load thin-arbiter for a non
thin-arbiter volume, but for now you can just disable it.
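A minimal sketch of doing that with systemctl:
  systemctl stop gluster-ta-volume.service      # stop the thin-arbiter process now
  systemctl disable gluster-ta-volume.service   # keep it from starting at boot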
---
Ashish
- Original Message -
From: "wkmail"
To:
Hi David,
It should be any time soon, as we are in the last phase of patch reviews. You can
follow this patch - https://review.gluster.org/#/c/glusterfs/+/22612/
---
Ashish
- Original Message -
From: "David Cunningham"
To: "Ashish Pandey"
Cc: "gluster-use
Hi,
First of all, the following command is not for disperse volumes -
gluster volume heal elastic-volume info split-brain
It is applicable to replicate volumes only.
Could you please let us know what exactly you want to test?
If you want to test disperse volume against failure of bricks
Hi,
Whenever we send a mail to the gluster-devel or gluster-users mailing list, the
following content gets auto-generated and placed at the end of the mail.
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
Gluster-devel mailing list
- Original Message -
From: "David Cunningham"
To: "Ashish Pandey"
Cc: "gluster-users"
Sent: Monday, May 6, 2019 1:40:30 PM
Subject: Re: [Gluster-users] Thin-arbiter questions
Hi Ashish,
Thank you for the update. Does that mean they're now in th
luster-users"
Sent: Saturday, May 4, 2019 12:10:01 AM
Subject: Re: [Gluster-users] Thin-arbiter questions
Hi Ashish,
Can someone commit the doc change I have already proposed?
At least the doc will clarify that fact.
Best Regards,
Strahil Nikolov
On May 3, 2019 05:30, Ashish
https://review.gluster.org/#/c/glusterfs/+/22612/
---
Ashish
- Original Message -
From: "David Cunningham"
To: "Ashish Pandey"
Cc: gluster-users@gluster.org
Sent: Friday, May 3, 2019 8:04:04 AM
Subject: Re: [Gluster-users] Thin-arbiter questions
Hi Ashish,
Thanks very
Hi David,
Creation of a thin-arbiter volume is currently supported by GD2 only. The command
"glustercli" is available when glusterd2 is running.
We are also working on providing thin-arbiter support in glusterd; however, it is
not available right now.
Pat,
I would like to see the final configuration of your gluster volume after you
added bricks on the new node.
You mentioned that -
"The new brick was a new server with 12 of 24 disk bays filled (we
couldn't afford to fill them all at the time). These 12 disks are managed in a
hardware
- Original Message -
From: "Poornima Gurusiddaiah"
To: "Tom Fite"
Cc: "Gluster-users"
Sent: Tuesday, April 9, 2019 9:53:02 AM
Subject: Re: [Gluster-users] Rsync in place of heal after brick failure
On Mon, Apr 8, 2019, 6:31 PM Tom Fite < tomf...@gmail.com > wrote:
Thanks
Hi,
Currently, thin-arbiter can be set up using GD2. The glustercli command is provided
by GD2 only.
Have you installed and started GD2 first?
Could you please mention in which step you faced the issue?
---
Ashish
- Original Message -
From: "banda bassotti"
To:
r : Improvements on "heal info" command
Hi,
This sounds nice. I would like to ask whether the order starts from the local
node's bricks first? (I am talking about --brick=one)
Best Regards,
Strahil Nikolov
On Mar 5, 2019 10:51, Ashish Pandey wrote:
Hi All,
We have o
Hi All,
We have observed, and heard from gluster users, that the "heal info" command
takes a long time.
Even when all we want to know is whether a gluster volume is healthy or not, it
takes time to list all the files from all the bricks, after which we can be
sure whether the volume is healthy or not.
comments inline
- Original Message -
From: "Hu Bert"
To: "Ashish Pandey"
Cc: "Gluster Users"
Sent: Monday, January 7, 2019 12:41:29 PM
Subject: Re: [Gluster-users] Glusterfs 4.1.6
Hi Ashish & all others,
If I may jump in... I have a litt
Hi,
Some of the steps provided by you are not correct.
You should have used the reset-brick command, which was introduced for the same task
you wanted to do.
https://docs.gluster.org/en/v3/release-notes/3.9.0/
Your thinking was correct, but replacing a faulty disk requires some of
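For reference, a minimal sketch of the reset-brick flow when replacing a faulty disk (volume name, host, and brick path are placeholders):
  # take the brick offline before swapping the disk
  gluster volume reset-brick <volname> <host>:/bricks/brick1 start
  # replace the disk, recreate the filesystem, and mount it at the same path,
  # then bring the now-empty brick back and let self-heal repopulate it
  gluster volume reset-brick <volname> <host>:/bricks/brick1 <host>:/bricks/brick1 commit force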
Number of entries possibly healing: 0
On 12/27/18 3:09 AM, Ashish Pandey wrote:
Hi Brett,
Could you please tell us more about the setup?
1 - Gluster v info
2 - gluster v status
3 - gluster v heal info
This is the very basic information needed to start debugging or suggesting any
workaround.
It should always be included when asking such questions on the mailing list so
Hi,
Slow "ls -l" command which is also the reason behind TAB and df -h issue.
I would suggest to check "disperse.other-eager-lock" option and see if it is
"ON" or "OFF"
gluster v get all | grep other
If it is ON, change it to OFF by following command
gluster v set
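A minimal sketch of those two steps, assuming a volume named <volname>:
  gluster volume get <volname> disperse.other-eager-lock        # check the current value
  gluster volume set <volname> disperse.other-eager-lock off    # turn it off if it is on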
- Original Message -
From: "Mauro Tridici"
To: "Ashish Pandey"
Cc: "Gluster Users"
Sent: Friday, September 28, 2018 9:08:52 PM
Subject: Re: [Gluster-users] Rebalance failed on Distributed Disperse volume
based on 3.12.14 version
Thank you, Ashis
Yes, you can.
If not me, others may also reply.
---
Ashish
- Original Message -
From: "Mauro Tridici"
To: "Ashish Pandey"
Cc: "gluster-users"
Sent: Thursday, September 27, 2018 4:24:12 PM
Subject: Re: [Gluster-users] Rebalance failed on Dist
nt12/brick
Just a note that these steps need movement of data.
Be careful while performing these steps: do one replace-brick at a time, and
only after heal completion go to the next.
Let me know if you have any issues.
---
Ashish
- Original Message -
From: "Mauro Tridici"
Hi Mauro,
Yes, I can provide you a step-by-step procedure to correct it.
Is it fine if I provide the steps tomorrow, as it is quite late over here
and I don't want to miss anything in a hurry?
---
Ashish
- Original Message -
From: "Mauro Tridici"
To: "Ashi
it now than facing
issues in the future, when it will be almost impossible to correct these things if
you have lots of data.
---
Ashish
- Original Message -
From: "Mauro Tridici"
To: "Ashish Pandey"
Cc: "gluster-users"
Sent: Wednesday, September 26
I think we don't have enough logs to debug this, so I would suggest you
provide more logs/info.
I have also observed that the configuration and setup of your volume are not
very efficient.
For example:
Brick37: s04-stg:/gluster/mnt1/brick
Brick38: s04-stg:/gluster/mnt2/brick
Brick39:
Yes, you should file a bug to track this issue and to share information.
Also, I would like to have the logs present in /var/log/messages,
especially mount logs with a name like mnt.log or similar.
Following are the points I would like to bring to your notice -
1 - Are you sure that all
I think it should be rephrased a little bit -
"When one brick is up: Fail FOP with EIO."
should be
"When only one brick is up out of 3 bricks: Fail FOP with EIO."
So we have 2 data bricks and one thin-arbiter brick. Out of these 3 bricks, if
only one brick is UP then we will fail IO.
---
I think I have replied to all the questions you have asked.
Let me know if you need any additional information.
---
Ashish
- Original Message -
From: "Benjamin Kingston"
To: "gluster-users"
Sent: Tuesday, July 31, 2018 1:01:29 AM
Subject: [Gluster-users] Increase redundancy on
Hi,
1. If I create a 1-redundancy volume in the beginning, can I increase the redundancy
to 2 or 3 after I add more bricks?
No, you cannot change the redundancy level of the same volume in the future. So,
if you have created a 2+1 volume (2 data and 1 redundancy), you will have to
stick to it.
Hey Mauro,
How did it go? Were you able to expand the volumes without any issues?
Just curious to know the approach you took and how smooth it was to
expand this volume.
---
Ashish
- Original Message -
From: "Mauro Tridici"
To: "Ashish Pandey"
Cc: "Glus
bit15"
To: "Ashish Pandey"
Cc: "gluster-users"
Sent: Monday, July 2, 2018 1:45:01 AM
Subject: Re: [Gluster-users] Files not healing & missing their extended
attributes - Help!
Hi Ashish,
The output is below. It's a rep 2+1 volume. The arbiter is offline for
You have not even talked about the volume type and configuration, and this issue
would require a lot of other information to fix.
1 - What is the type of the volume and its config?
2 - Provide the gluster v info output
3 - Heal info output
4 - getxattr of one of the files which needs healing (see the sketch below),
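A minimal sketch of collecting those xattrs directly from a brick (brick path and file name are placeholders):
  # run on the brick node, against the file's path inside the brick, not the mount point
  getfattr -d -m . -e hex /bricks/brick1/path/to/file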
Mauro,
Your plan looks fine to me. However, I would like to add a few points.
1 - After adding 3 more nodes you will have 6 nodes overall. It would be better
if you can have 1 brick on each server for any EC subvolume.
For example: for 4+2, all 6 bricks should be on different nodes. This will
Disperse volumes are not best suited for small reads/writes.
If you can tell us more about your use case, we can probably come up with some
options which could help you.
- Are you using fuse mount or nfs mount?
- Are these small writes sequential or random?
---
Ashish
- Original
It is not good at all.
It should be healed, and the dirty xattr should be set to all zeros.
Please provide all the xattrs of all the fragments of this file.
Provide gluster v heal info
Run gluster v heal and provide the glustershd.logs
Provide gluster v status
---
Ashish
- Original
- Original Message -
From: "Vijay Bellur" <vbel...@redhat.com>
To: "Ashish Pandey" <aspan...@redhat.com>
Cc: "gluster-users" <gluster-users@gluster.org>, "Gluster Devel"
<gluster-de...@gluster.org>
Sent: Monday, Janu
It is disappointing to see the limitation being put by Nvidia on low-cost GPU
usage in data centers.
https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/
We thought of providing an option in glusterfs by which we can control whether we
want to use a GPU or not.
So, the concern of gluster
Y 3183
Task Status of Volume gv0
--
There are no active volume tasks
> 5 - Also, could you try unmount the volume and mount it again and check the
> size?
I have done this a few times but it doesn't seem to help.
On Thu, Dec 21, 2017 at 11:18 AM, Ashish Pandey < aspan...@redhat.com >
Could you please provide the following -
1 - output of gluster volume heal info
2 - /var/log/glusterfs - provide the log file named mountpoint-volumename.log
3 - output of gluster volume info
4 - output of gluster volume status
5 - Also, could you try unmounting the volume and mounting it again and
Hi Jorick,
1 - Why would I even need to specify the "HOSTNAME:BRICKPATH" twice? I just
want to replace the disk and get it back into the volume.
The reset-brick command can be used in different scenarios. One more case could be
where you just want to change the hostname to the IP address of that
comments inline
- Original Message -
From: "Gino Lisignoli"
To: gluster-users@gluster.org
Sent: Wednesday, November 22, 2017 3:49:02 AM
Subject: [Gluster-users] Brick and Subvolume Info
Hello
I have a Distributed-Replicate volume and I would like to know if
upgrade it
to the latest one. I am sure this
would have the fix.
Ashish
- Original Message -
From: "Nithya Balachandran" <nbala...@redhat.com>
To: "Amudhan P" <amudha...@gmail.com>, "Ashish Pandey" <aspan...@redhat.com>
Cc: "A
Hi,
I have received the log file but did not get a chance to look into it.
I will let you know if I find anything or what we can do next.
--
Ashish
- Original Message -
From: "Mauro Tridici" <mauro.trid...@cmcc.it>
To: "Ashish Pandey" <aspan...@re
https://unix.stackexchange.com/questions/192716/how-to-set-the-core-dump-file-location-and-name
https://stackoverflow.com/questions/2065912/core-dumped-but-core-file-is-not-in-current-directory
--
Ashish
- Original Message -
From: "Mauro Tridici" <mauro.trid...@cmcc.it>
Hi Mauro,
We would require the complete log file to debug this issue.
Also, could you please provide some more information about the core after
attaching to it with gdb and running the "bt" command.
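A minimal sketch of getting that backtrace (binary and core path are placeholders; the binary may be glusterfs, glusterfsd, or glustershd depending on which process crashed):
  gdb /usr/sbin/glusterfsd /path/to/core.<pid>
  (gdb) bt                     # backtrace of the crashing thread
  (gdb) thread apply all bt    # backtraces of all threads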
---
Ashish
- Original Message -
From: "Mauro Tridici"
To: "Gluster Users"
Hi,
I hope you have a gluster volume running on your cluster and you just want to
change the hostname of your nodes to some other hostname.
Ex: from hostname1 to hostname2
I think changing the hostname of any node is very simple using Linux commands.
I think changing the hostname should
After adding 3 more nodes you will have 6 nodes and 2 HDs on each node.
It depends on the way you are going to add the new bricks to the existing volume
'vol'.
I think you should remember that in a given EC subvolume of 4+2, at any point
in time 2 bricks can be down.
When you make 6 * (4+2) to
BTW, I think it should be in 3.10.1 also.
We have backported it to 3.10.1 too.
If possible, upgrade to 3.11.0 and see whether you are still seeing these messages.
- Original Message -
From: "Ashish Pandey" <aspan...@redhat.com>
To: "Amudhan P" <amudha...@g
Based on this BZ https://bugzilla.redhat.com/show_bug.cgi?id=1414287
it has been fixed in glusterfs-3.11.0
---
Ashish
- Original Message -
From: "Amudhan P" <amudha...@gmail.com>
To: "Ashish Pandey" <aspan...@redhat.com>
Cc: "Gluster Use
Whenever we do some fop on a file on an EC volume, we also check the xattrs to see
whether the file is healthy or not. If not, we trigger a heal.
lookup is the fop for which we don't take an inodelk lock, so it is possible that
the xattrs we get for the lookup fop are different on some bricks.
This
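For reference, a minimal sketch of inspecting those xattrs on one brick (brick path and file name are placeholders):
  # trusted.ec.version, trusted.ec.dirty and trusted.ec.size should normally agree
  # across the bricks; a mismatch is what makes EC trigger a heal
  getfattr -d -m trusted.ec -e hex /bricks/brick1/path/to/file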
Hi,
Adding bricks to a disperse volume is very easy and same as replica volume.
You just need to add bricks in the multiple of the number of bricks which you
already have.
So if you have disperse volume with n+k configuration, you need to add n+k more
bricks.
Example :
If your disperse
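A minimal sketch of such an expansion for a 4+2 volume (volume name, server names, and brick paths are placeholders):
  # add one full set of n+k = 6 bricks; gluster forms a new disperse subvolume from them
  gluster volume add-brick <volname> \
      server{1..6}:/bricks/brick2
  # optionally spread existing data onto the new subvolume
  gluster volume rebalance <volname> start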
Hi,
The issue you are seeing is a little complex, but the information you have
provided is very limited.
- Volume info
- Volume status
- What kind of IO is going on?
- Is any brick down or not?
- Snapshot of the top command.
- Anything you are seeing in the glustershd, mount, or brick logs?
ncenv" infrastructure which is nothing but a set
of workers pick these tasks and execute it. That is when actual read/write for
heal happens.
- Original Message -
From: "Serkan Çoban" <cobanser...@gmail.com>
To: "Ashish Pandey" <aspan...@redhat.com&
- Original Message -
From: "Serkan Çoban"
To: "Gluster Users"
Sent: Monday, May 29, 2017 5:13:06 PM
Subject: [Gluster-users] Heal operation detail of EC volumes
Hi,
When a brick fails in EC, What is the healing read/write data
8+2 and 8+3 configurations are not limitations, just suggestions.
You can create a 16+3 volume without any issue.
Ashish
- Original Message -
From: "Alastair Neil"
To: "gluster-users"
Sent: Friday, May 5, 2017 2:23:32 AM
There is a difference between servers and bricks which we should understand.
When we say m+n = 6+2, we are talking about the bricks.
The total number of bricks is m+n = 8.
Now, these bricks could be anywhere on any server. The only requirement is that the
server should be part of the cluster.
You
Hi Amudhan,
In your case, was any IO going on while a file was healing?
Were you writing to a file which was also getting healed by shd, and you
observed that this file was not healing?
Or did you just leave the system after the replace-brick to complete the heal?
Ashish
- Original Message -
if you have anything in .glusterfs which
should not be there. I would have started from the biggest entry I see in .glusterfs,
like "154M /opt/lvmdir/c2/brick/.glusterfs/08"
- Original Message -
From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
To: "Ashi
That is, in the above command you cannot say which bricks out of brick1 to
brick6 would be the parity bricks.
- Original Message -
From: "Gandalf Corvotempesta" <gandalf.corvotempe...@gmail.com>
To: "Ashish Pandey" <aspan...@redhat.com>
Cc: gluster-users@glu
lume config options. Turns out only my 8+3
choice is permitted, as the 4+2 and 8+4 options violate the data/parity>2 rule.
So, 8+3 it is, as 8+2 isn’t quite enough redundancy for me.
Regards,
Terry
On Mar 30, 2017, at 02:14, yipik...@gmail.com wrote:
On 30/03/2017 08:35, Ashish Pandey
Good point, Cedric!
The only thing is that I would prefer to say "bricks" instead of "nodes" in
your statement:
"starting with 4 bricks (3+1) can only evolve by adding 4 bricks (3+1)"
- Original Message -
From: "Cedric Lemarchand"
To: "Terry McGuire"
Hi Terry,
There is no constraint on the number of nodes for erasure-coded volumes.
However, there are some suggestions to keep in mind.
If you have a 4+2 configuration, that means you can lose a maximum of 2 bricks at a
time without losing your volume for IO.
These bricks may fail because of node
- Original Message -
From: "Atin Mukherjee"
To: "Raghavendra Talur" , gluster-de...@gluster.org,
gluster-users@gluster.org
Sent: Thursday, March 16, 2017 4:22:41 PM
Subject: Re: [Gluster-devel] [Gluster-users] Proposal to deprecate
Hi Scott,
I don't know which version of glusterfs you have been using. In case you are
using the latest version, you can also explore the reset-brick command.
https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/
Ashish
- Original Message -
From: "Ted Miller"
This is the format which we have followed in this command. There could be a
case where you want to
keep the same bricks which you added using an IP address, but now instead of
the IP address you want all the bricks to use the hostname.
In this case you have to mention both the source as
That means EC is not getting the correct trusted.ec.config xattr from the minimum
number of bricks.
1 - Did you see any error on the client side while accessing any file?
2 - If yes, check the file xattrs on all the bricks for such files.
This is too little information to find out the cause. If [1] is
Hi,
No, we cannot convert a replica volume to an EC volume.
You can always create a new EC volume and copy all the data. I don't think this
feature would serve any purpose.
However, I would suggest you explore the "tier" volume feature, which can have both
replica and EC volumes as
hot and cold tiers. May
++Bhaskar
- Original Message -
From: "Ashish Pandey" <aspan...@redhat.com>
To: "Menaka Mohan" <menak...@outlook.com>
Cc: "Gluster Users" <gluster-users@gluster.org>
Sent: Monday, October 17, 2016 4:15:02 PM
Subject: Re: [Gluster-u
Keeping Bhaskar in the loop as he has done testing on glusterfs with iozone.
- Original Message -
From: "Menaka Mohan"
To: gluster-de...@gluster.org
Sent: Tuesday, October 11, 2016 1:18:13 AM
Subject: [Gluster-devel] Need help in understanding IOZone config file
...@gmail.com>
To: "Ashish Pandey" <aspan...@redhat.com>
Cc: gluster-users@gluster.org
Sent: Tuesday, August 9, 2016 11:08:12 PM
Subject: Re: [Gluster-users] Need help to design a data storage
On 09 Aug 2016 19:20, "Ashish Pandey" < aspan...@redhat.com > wrote
- Original Message -
From: "Gandalf Corvotempesta" <gandalf.corvotempe...@gmail.com>
To: "Ashish Pandey" <aspan...@redhat.com>
Cc: gluster-users@gluster.org
Sent: Tuesday, August 9, 2016 8:33:31 PM
Subject: Re: [Gluster-users] Need help to design a data storage
.
Ashish
- Original Message -
From: "Serkan Çoban" <cobanser...@gmail.com>
To: "Ashish Pandey" <aspan...@redhat.com>
Cc: "Gluster Users" <gluster-users@gluster.org>
Sent: Monday, August 8, 2016 4:47:02 PM
Subject: Re: [Gluster-users
Hi,
Considering all the other factors the same for both configurations, yes, the smaller
configuration
would take less time. Reading the good copies will take less time.
I think multi-threaded shd is the only enhancement in the near future.
Ashish
- Original Message -
From: "Serkan Çoban"
der using the latest version of gluster and also use a
config which actually serves the purpose of a disperse volume.
Ashish
- Original Message -
From: "jayakrishnan mm" <jayakrishnan...@gmail.com>
To: "Ashish Pandey" <aspan...@redhat.com>
Sent: Mond
Please provide the output of "gluster vol info" and "gluster vol status" for the
concerned volume.
Also, please provide the mount logs present in /var/log/glusterfs with name .log
What command are you executing while mounting? Please mention the exact command.
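For reference, a minimal sketch of a fuse mount and where its log ends up (server and volume names are placeholders):
  mount -t glusterfs server1:/<volname> /mnt/<volname>
  # the client log is named after the mount point, e.g.
  # /var/log/glusterfs/mnt-<volname>.log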
- Original Message -
From:
Hi,
You should not delete any file manually. Most of the time it is not safe.
I would suggest running gluster v heal info to see whether there are files
to be healed.
You can execute "gluster v heal " to heal any file listed in the heal
info output. You could also try "gluster v heal full" for
Hi Iñaki
The steps you are following don't have any issue.
I would like to have more information to debug this further.
1 - gluster v info
2 - gluster v status before and after running replace-brick
3 - Brick logs (for this volume only) from /var/log/glusterfs/bricks/
4 - glusterd logs
Hi Nicolas,
I think this issue has already been raised, where we are seeing different heal
info from different servers.
https://bugzilla.redhat.com/show_bug.cgi?id=1335429
A patch for this is under review.
Ashish
- Original Message -
From: "Nicolas Ecarnot"
- Original Message -
From: "Chen Chen" <chenc...@smartquerier.com>
To: "Joe Julian" <j...@julianfamily.org>, "Ashish Pandey" <aspan...@redhat.com>
Cc: "Gluster Users" <gluster-users@gluster.org>
Sent: Friday, April 22, 2016 8:28:4
force" will do the same.
Regards,
Ashish
- Original Message -
From: "Serkan Çoban" <cobanser...@gmail.com>
To: "Ashish Pandey" <aspan...@redhat.com>
Cc: "Gluster Users" <gluster-users@gluster.org>, "Gluster Devel"
&
/statedump.md
Ashish
- Original Message -
From: "Serkan Çoban" <cobanser...@gmail.com>
To: "Ashish Pandey" <aspan...@redhat.com>
Cc: "Gluster Users" <gluster-users@gluster.org>, "Gluster Devel"
<gluster-de...@gluster.org>
I think this is the statedump of only one brick.
We would require statedumps from all the bricks and the client process in case of
fuse, or the nfs process if it is mounted through nfs.
Ashish
- Original Message -
From: "Serkan Çoban" <cobanser...@gmail.com>
To
Hi Serkan,
Could you also provide us the statedumps of all the brick processes and clients?
Commands to generate statedumps for brick processes/nfs server/quotad:
For bricks: gluster volume statedump
For nfs server: gluster volume statedump nfs
We can find the directory where the statedump
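A minimal sketch of generating and locating the dumps, assuming a volume named <volname>:
  gluster volume statedump <volname>         # dumps all brick processes
  gluster volume statedump <volname> nfs     # dumps the gluster nfs server
  # the dumps land in the statedump directory, /var/run/gluster by default;
  # a fuse client can be dumped by sending SIGUSR1 to its glusterfs process
  kill -USR1 <pid-of-glusterfs-client-process>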
your mail? What exactly are you trying
to do, and what is the setup?
Also, volume info, logs, and statedumps might help.
-
Ashish
- Original Message -
From: "Chen Chen" <chenc...@smartquerier.com>
To: "Ashish Pandey" <aspan...@redhat.com>
Cc: gluster
the volume using -
gluster v start force. This will restart the nfs process too,
which will release the locks, and
we could come out of this issue.
Ashish
- Original Message -
From: "Chen Chen" <chenc...@smartquerier.com>
To: "Ashish Pandey" <aspan...@redhat.com&g