- Original Message -
> From: "Anuradha Talur"
> To: "songxin"
> Cc: "gluster-user"
> Sent: Thursday, March 3, 2016 12:31:41 PM
> Subject: Re: [Gluster-users] about tail command
>
>
>
> - Original Message -
> > From: "songxin"
> > To: "Anuradha Talur"
> > Cc: "gluster-user"
On 29/02/16 15:25, Pavel Riha wrote:
Hi all,
I have read some recent posts about performance issues, complaining
about the FUSE driver and recommending NFS.
Although my final goal is a replicated volume, I'm now just doing some
tests for reference.
my basic benchmark is
dd if=/dev/zero of=ddte
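(The dd line above is cut off in the archive; a typical write benchmark through a
FUSE mount might look like the sketch below, where the mount path, block size and
count are assumptions rather than the poster's actual values.)
# dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1024 conv=fdatasync
(conv=fdatasync forces the data to disk before dd reports a throughput figure)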
On 02/03/16 19:48, Fabian Wenk wrote:
Hello Joe
On 01.03.16 19:07, Joe Julian wrote:
On 03/01/2016 09:43 AM, Fabian Wenk wrote:
With some testing, I realized that I can mount the volume with NFS
from anywhere in my local network. According to the documentation [1],
the option nfs.rpc-au
- Original Message -
> From: "songxin"
> To: "Anuradha Talur"
> Cc: "gluster-user"
> Sent: Wednesday, March 2, 2016 4:09:01 PM
> Subject: Re:Re: [Gluster-users] about tail command
>
>
>
> Thank you for your reply. I have two more questions, as below:
>
>
> 1. the command "gluster v r
>The command "heal full" is async and "heal info" show nothing need to heal.
>How could I know when the "heal full" has completed to replicate these
>files(a,b and c)?How to monitor the progress?
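(Not part of the original mail, but for reference, a few commands that can be used
to watch heal progress; VOLNAME is a placeholder volume name.)
# gluster volume heal VOLNAME info
(lists entries that still need healing)
# gluster volume heal VOLNAME statistics heal-count
(number of pending entries per brick)
# gluster volume heal VOLNAME statistics
(per-crawl statistics, including the full crawls started by "heal full")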
I'm not a gluster expert; I'm pretty new to this. Yes, I've had the same
problem and it is frust
Hello,
In Gluster, we use the command "gluster volume heal c_glusterfs info
split-brain" to find the files that are in a split-brain scenario.
We run a heal script (developed by the Windriver prime team) on the files
reported by the above command to resolve the split-brain issue.
But we observed that the above c
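(The message is truncated here. For reference, a sketch of the relevant CLI;
the brick and file arguments are placeholders. GlusterFS 3.7 also has built-in
split-brain resolution that can be used instead of an external script.)
# gluster volume heal c_glusterfs info split-brain
# gluster volume heal c_glusterfs split-brain source-brick HOSTNAME:BRICKPATH FILE
# gluster volume heal c_glusterfs split-brain bigger-file FILE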
Hi Steve,
As Atin pointed out, take a statedump by running the "kill -SIGUSR1 $(pidof
glusterd)" command. It will create a .dump file in the /var/run/gluster/ directory.
The client-op-version information will be present in the dump file.
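(A sketch of the full sequence; the dump file name pattern in the last command is
an assumption, since it varies by version and pid.)
# kill -SIGUSR1 $(pidof glusterd)
# ls /var/run/gluster/
# grep -i op-version /var/run/gluster/glusterdump.*
(the grep pulls the op-version lines out of the statedump)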
Thanks,
~Gaurav
- Original Message -
From: "Steve Dainard"
To: "G
Thanks for the feedback. This will help us to improve the
Gluster/Geo-rep user experience.
regards
Aravinda
On 03/02/2016 07:34 PM, ML mail wrote:
Thanks for the info. Somehow geo-rep doesn't work well for me; each time I try
to fix something with the help of the mailing list there is another
Hi,
precondition:
glusterfs version is 3.7.6
A node:128.224.95.140
B node:128.224.162.255
brick on A node:/data/brick/gv0
brick on B node:/data/brick/gv0
reproduce steps:
1.gluster peer probe 128.224.162.255
Thank you very much for your reply. It is very helpful to me.
And I have one more question about "heal full" in glusterfs 3.7.6.
The reproduce steps:
A board:128.224.95.140
B board:128.224.162.255
1.gluster peer probe 128.224.162.255
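(The remaining steps are truncated in the archive. A typical two-node replica-2
setup between these boards would look roughly like the sketch below; the volume
name gv0 and the mount path are assumptions, the brick paths are the ones listed
earlier in the thread.)
# gluster peer probe 128.224.162.255
# gluster volume create gv0 replica 2 128.224.95.140:/data/brick/gv0 128.224.162.255:/data/brick/gv0
# gluster volume start gv0
# mount -t glusterfs 128.224.95.140:/gv0 /mnt/gluster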
Could you share the glusterd statedump file? Run kill -SIGUSR1 $(pidof
glusterd) and post it; a statedump file will be created in
/var/run/gluster.
-Atin
Sent from one plus one
On 03-Mar-2016 12:07 am, "Steve Dainard" wrote:
> From the client-side logs I can see version info on mount:
>
On 03/03/2016 12:43 AM, Nir Soffer wrote:
PS: # find /STORAGES -samefile
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
-print
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
= the "shadow file" (hard link) is missing in the ".glusterfs" dir.
How can I
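(The question is cut off here. For reference, a sketch of checking the missing hard
link on the brick; the file path is the one from the find command above, and the
interpretation assumes the standard GlusterFS layout where every brick file has a
second hard link under .glusterfs/<aa>/<bb>/<gfid>.)
# stat -c %h /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
(a link count of 1 means the .glusterfs hard link is missing)
# getfattr -n trusted.gfid -e hex /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
(the gfid gives the expected path under the brick's .glusterfs directory)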
> The "unable to get index-dir on .." messages you saw in log are not
> harmful in this scenario.
> A simple explanation : when you have 1 new node and 2 old nodes, the
> self-heal-deamon and
> heal commands run on the new node are expecting that the index-dir
> "/.glusterfs/xattrop/dirty" exis
On Wed, Mar 2, 2016 at 7:48 PM, p...@email.cz wrote:
> UPDATE:
>
> all "ids" file have permittion fixed to 660 now
>
> # find /STORAGES -name ids -exec ls -l {} \;
> -rw-rw 2 vdsm kvm 0 24. úno 07.41
> /STORAGES/g1r5p1/GFS/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md/ids
> -rw-rw 2 vdsm
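(For reference, a sketch of how the ownership and mode could be fixed in one pass;
vdsm:kvm and 0660 are taken from the listing above.)
# find /STORAGES -name ids -exec chown vdsm:kvm {} \; -exec chmod 0660 {} \;
# find /STORAGES -name ids -exec ls -l {} \;
(the second command just re-checks the result)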
On Mar 2, 2016, at 4:21 AM, Ravishankar N wrote:
> On 02/18/2016 11:33 AM, Mike Stump wrote:
>> On Feb 13, 2016, at 6:21 PM, Ravishankar N wrote:
>>> About the docs, could you list the links for client and server quorum where
>>> you found the details to be inadequate? I can't seem to find anyth
>From the client-side logs I can see version info on mount:
Final graph:
+--+
1: volume storage-client-0
2: type protocol/client
3: option clnt-lk-version 1
4: option volfile-checksum 0
5: opt
UPDATE:
all "ids" file have permittion fixed to 660 now
# find /STORAGES -name ids -exec ls -l {} \;
-rw-rw 2 vdsm kvm 0 24. úno 07.41
/STORAGES/g1r5p1/GFS/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md/ids
-rw-rw 2 vdsm kvm 0 24. úno 07.43
/STORAGES/g1r5p2/GFS/88adbd49-62d6-45b1-9992-b
>1. Is the command "gluster v replace-brick" async or sync? Is the replace
>complete when the command quits?
Async. The command will return immediately, and the replacement will continue in the
background.
Use "gluster volume replace-brick VOLUME-NAME OLD-BRICK NEW-BRICK status" to
monitor the progress.
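For example, with placeholder names (volume gv0, new brick path made up for
illustration):
# gluster volume replace-brick gv0 128.224.95.140:/data/brick/gv0 128.224.95.140:/data/brick/gv0_new status
(note that on recent 3.7.x releases replace-brick is normally finished with
"commit force", and the data movement then shows up as self-heal, so
"gluster volume heal gv0 info" is another way to watch it)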
Hello Joe
On 01.03.16 19:07, Joe Julian wrote:
On 03/01/2016 09:43 AM, Fabian Wenk wrote:
With some testing, I realized that I can mount the volume with NFS
from anywhere in my local network. According to the documentation [1],
the option nfs.rpc-auth-allow should be set to 'Reject All' as
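(The sentence is cut off here. For reference, restricting NFS access is done with
volume options; the volume name and subnet below are placeholders.)
# gluster volume set gv0 nfs.rpc-auth-allow 192.168.1.*
(only clients matching the allow list can then mount the volume over NFS)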
Thanks for the info. Somehow geo-rep doesn't work well for me; each time I try
to fix something with the help of the mailing list there is another issue or
problem. So I will simply delete the whole geo-rep setup, upgrade to 3.7.9 as soon
as it is available, and re-create the geo-rep session. Hopefully this
IMPORTANT: Next week's meeting will be held at 1500 UTC instead of the
normal time. This is part of our trial of rotating schedules for the
meeting. I've attached a calendar invite for next week's meeting to
this mail.
This week's meeting minutes are available at the following locations:
* Minutes:
On 02/18/2016 11:33 AM, Mike Stump wrote:
On Feb 13, 2016, at 6:21 PM, Ravishankar N wrote:
About the docs, could you list the links for client and server quorum where you
found the details to be inadequate? I can't seem to find anything myself on
readthedocs. :(
I'm anyway planning to do a
Hi All,
This week's meeting will be held in #gluster-meeting on
freenode, in about 1 hour from now (1200 UTC). This was supposed to be
the first meeting in which we tried out the proposed rotating schedule
(1200 UTC/1500 UTC). But this change wasn't communicated well, so we'll
start the rotating
Yes, we have had "ids" split-brains plus some other VMs' files.
The split-brains were fixed by healing with the preferred (source) brick,
e.g.: " # gluster volume heal 1KVM12-P1 split-brain source-brick
16.0.0.161:/STORAGES/g1r5p1/GFS "
Pavel
Okay, so what I understand from the output above is you have di
Thank you for your reply. I have two more questions, as below:
1. Is the command "gluster v replace-brick" async or sync? Is the replace
complete when the command quits?
2. I run "tail -n 0" on the mount point. Does it trigger the heal?
Thanks,
Xin
At 2016-03-02 15:22:35, "Anuradha Talur"
Hi guys,
thx a lot for your support, first of all.
Because we were under huge time pressure, we found a "Google
workaround" which deletes both files. It helped, probably in the first
steps of recovery,
e.g.: " # find /STORAGES/g1r5p5/GFS/ -samefile
/STORAGES/g1r5p5/GFS/3da46e07-d1ea-4f10-9250
- Original Message -
> From: "Alan Millar"
> To: "Anuradha Talur"
> Cc: gluster-users@gluster.org
> Sent: Wednesday, March 2, 2016 2:00:49 AM
> Subject: Re: [Gluster-users] Broken after 3.7.8 upgrade from 3.7.6
>
> >> I’m still trying to figure out why the self-heal-daemon doesn’t seem