On 19 April 2016 at 16:50, Kaushal M wrote:
> I'm pleased to announce the release of GlusterFS version 3.7.11.
Installed and running quite smoothly here, thanks.
--
Lindsay
On Tue, Apr 19, 2016 at 1:24 AM, qingwei wei wrote:
> Hi Vijay,
>
> I reran the test with gluster 3.7.11 and found that the utilization is
> still high when I use libgfapi. The write performance is also poor.
>
> Below are the info and results:
>
> Hypervisor host:
> libvirt 1.2.17-13.el7_2.4
> qem
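A minimal sketch of such a comparison, assuming a volume named testvol served from host gfs1 (both placeholders, not taken from the report above):

  # Create a test image directly over libgfapi using QEMU's gluster:// URL.
  qemu-img create -f raw gluster://gfs1/testvol/bench.img 10G

  # Run the same sequential write against the FUSE mount for comparison.
  mount -t glusterfs gfs1:/testvol /mnt/testvol
  fio --name=seqwrite --rw=write --bs=1M --size=1G --directory=/mnt/testvol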
Regression run [1] failed on trash.t; the report doesn't mention any core
file, but when I run the test locally, both with and without my changes, it
generates a core.
[1]
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/15971/consoleFull
~Atin
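To check this locally, one can run the single test from a built source tree; a sketch (the in-tree run-tests.sh driver is assumed, and the test path may differ across branches):

  # Run only the trash test from a glusterfs source tree.
  ./run-tests.sh tests/features/trash.t

  # Afterwards, look for a core file (location depends on kernel settings).
  ls -l core* /core* 2>/dev/null
  sysctl kernel.core_pattern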
Hi, I just reinstalled a fresh 3.7.11 and I am seeing the same behavior:
50 clients copying files named part-0- using mapreduce to gluster, with
one thread per server, and they are using only 20 servers out of 60. On
the other hand, fio tests use all the servers. Is there anything I can do
to solve the issue?
I am copying 10,000 files to the gluster volume using mapreduce on the
clients. Each map process takes one file at a time and copies it to the
gluster volume.
My disperse volume consists of 78 subvolumes of 16+4 disks each. So if I
copy more than 78 files in parallel, I expect each file to go to a
different subvolume, right?
In my
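One way to check the actual placement is the pathinfo virtual xattr on the client mount; a sketch, with the mount point and file name as placeholders:

  # Ask the client stack which subvolume and bricks back a given file.
  getfattr -n trusted.glusterfs.pathinfo /mnt/vol/part-0-00001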
Hi Serkan,
I have gone through the logs and can see there are some blocked inode lock
requests.
We have observed that some other users have also faced this issue with
similar logs.
I think you have tried a rolling update on your setup, or on some of the
nodes from which you have collected these state
I did only one upgrade, from 3.7.9 to 3.7.10, and it was not a rolling
upgrade: I stopped the volume and then upgraded all the components.
I will try restarting the volume and see if it helps.
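A sketch of the restart, plus a non-disruptive alternative for inspecting and clearing only the blocked locks (the volume name is a placeholder; check the clear-locks semantics for your release before using it on live data):

  gluster volume stop myvol
  gluster volume start myvol

  # Alternatively, dump lock state and clear only blocked inode locks.
  gluster volume statedump myvol
  gluster volume clear-locks myvol / kind blocked inode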
On Mon, Apr 18, 2016 at 10:17 AM, Ashish Pandey wrote:
> Hi Serkan,
>
> I have gone through the logs and can s
You can find the statedumps of the servers and clients at the link below
(a sketch of how such dumps are collected follows the quoted mail).
The gluster version is 3.7.10, with a 78x(16+4) disperse setup across 60
nodes named node185..node244.
https://www.dropbox.com/s/cc2dgsxwuk48mba/gluster_statedumps.zip?dl=0
On Fri, Apr 15, 2016 at 9:52 PM, Ashish Pandey wrote:
>
> Actually it was my
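For reference, a sketch of how such statedumps are usually collected (the volume name is a placeholder; the dump paths below are the defaults and may differ):

  # Server side: dump the state of all brick processes of the volume.
  gluster volume statedump myvol

  # Client side: a FUSE client writes a statedump on SIGUSR1.
  kill -USR1 $(pgrep -f 'glusterfs.*myvol')

  # Dumps normally land under /var/run/gluster/.
  ls /var/run/gluster/*.dump.*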
On Tue, Apr 19, 2016 at 04:25:07PM +0200, Csaba Henk wrote:
> I also have a vague memory that in the Linux VFS the file operations
> are dispatched to file objects in a fairly pure OOP manner (which
> lends itself to practices like "storing the file handle identifier
> along with the file object"), w
Hi Manu,
My memories of FUSE internals are a bit rusty, but I'll try
to give a usable answer.
Your description is essentially correct, but some of it
needs to be addressed more carefully. The Linux kernel's
VFS internally maintains file objects. They are tied to a given
call of open(2). The file han
On Tuesday 19 April 2016 at 09:58 -0400, Jeff Darcy wrote:
> > So can a workable solution be pushed to git, because I plan to force the
> > checkout to be like git, and it will break again (and this time, no
> > workaround will be possible).
> >
>
> It has been pushed to git, but AFAICT pull requests for that repo go into
> a black hole.
> So can a workable solution be pushed to git, because I plan to force the
> checkout to be like git, and it will break again (and this time, no
> workaround will be possible).
>
It has been pushed to git, but AFAICT pull requests for that repo go into
a black hole.
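If "forcing the checkout" means resetting the slave's working copy to match the remote, a sketch of what that would discard (the branch name is an assumption):

  git fetch origin
  git reset --hard origin/master   # drop local commits and edits
  git clean -xfd                   # remove untracked and ignored files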
Hi,
Thanks for participating. Please find the meeting summary below.
Meeting ended Tue Apr 19 12:58:58 2016 UTC. Information about MeetBot at
http://wiki.debian.org/MeetBot .
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-04-19/gluster_bug_triage.2016-04-19-12.00.html
Minutes
On Saturday 2 April 2016 at 07:53 -0400, Jeff Darcy wrote:
> > IIRC, this happens because the build job uses the "--enable-bd-xlator"
> > option while configuring
>
> I came to the same conclusion, and set --enable-bd-xlator=no on the
> slave. I also had to remove -Werror because that was also causi
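A sketch of the resulting slave build configuration (the -Werror removal below is one plausible way to do it; the slave's actual settings are not shown in this thread):

  ./autogen.sh
  ./configure --enable-bd-xlator=no
  # Rebuild with CFLAGS that do not include -Werror.
  make CFLAGS='-g -O2'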
On Wed, 2015-12-02 at 06:29 +0530, Anoop C S wrote:
> Hi all,
>
> As part of preparing GlusterFS to cope with a multi-protocol
> environment, it is necessary to have mandatory locks support within the
> file system with respect to its integration with protocols like NFS or
> SMB. This will allow
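In later releases this work surfaced as a volume option; a hypothetical usage sketch (the option name locks.mandatory-locking and its values are assumptions based on post-3.7 documentation, not this proposal):

  # Enable mandatory lock semantics on a volume (option name assumed).
  gluster volume set myvol locks.mandatory-locking optimal
  gluster volume get myvol locks.mandatory-locking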
Hi,
This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in your
Hi Serkan,
On 19/04/16 09:18, Serkan Çoban wrote:
> Hi, I just reinstalled a fresh 3.7.11 and I am seeing the same behavior:
> 50 clients copying files named part-0- using mapreduce to gluster, with
> one thread per server, and they are using only 20 servers out of
> 60. On the other hand, fio tests use
Cool - thanks!
On 19 April 2016 at 16:50, Kaushal M wrote:
> Packages for Debian Stretch, Jessie and Wheezy are available on
> download.gluster.org.
I think
http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/
is still pointing to 3.7.10
--
Lindsay
Hi Atin,
Thanks.
I have more doubts here.
The brick and glusterd are connected by a unix domain socket. Since it is
just a local socket, why does it disconnect in the logs below?
1667 [2016-04-03 10:12:32.984331] I [MSGID: 106005]
[glusterd-handler.c:4908:__glusterd_brick_rpc_notify] 0-management:
Brick 10.32.
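To inspect the local sockets in question, a sketch (the paths are the usual /var/run/gluster defaults and may differ per build):

  # List the unix domain sockets used by glusterd and the brick processes.
  ls -l /var/run/gluster/*.socket

  # Show gluster processes with unix sockets open.
  ss -xp | grep -i gluster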
On Tue, Apr 19, 2016 at 12:20 PM, Kaushal M wrote:
> Hi All,
>
> I'm pleased to announce the release of GlusterFS version 3.7.11.
>
> GlusterFS-3.7.11 has been a quick release to fix some regressions
> found in GlusterFS-3.7.10. If anyone has been wondering why there
> hasn't been a proper release
Thanks for your response.
We use glusterfs 3.6.7.
Sure, we use CentOS 7.0.
The related log is shown below:
143 [2016-04-13 06:33:54.236013] W
[glusterfsd.c:1211:cleanup_and_exit] (--> 0-: received signum (15),
shutting down
144 [2016-04-13 06:33:54.236081] I [fuse-bridge.c:5607:fini] 0-fuse:
Thanks for your response.
Our glibc is:
[root@host-247 glusterfs]# rpm -qa | grep glibc
glibc-common-2.17-55.el7.x86_64
glibc-devel-2.17-55.el7.i686
glibc-2.17-55.el7.x86_64
glibc-static-2.17-55.el7.x86_64
compat-glibc-headers-2.12-4.el7.x86_64
glibc-headers-2.17-55.el7.x86_64
glibc-devel-2.17-55.