Hi all,
Because of an update breakage in glusterfs-3.7.7, a very quick 3.7.8
release has been made to fix the issue.
The tarball and RPMs for CentOS can be found at
https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.8/ .
Packages for other distros should be making it into the respective
Updating LATEST from 3.7.6 to point at 3.7.8 would also be appreciated.
On Wed, Feb 10, 2016 at 10:18 AM, Glomski, Patrick <
patrick.glom...@corvidtec.com> wrote:
> The LATEST/ directory used by the glusterfs-epel repo file offered on
> download.gluster.org references the $releasever, but doesn't have
>
Hi all,
We had a pretty lively meeting today, with lots of people showing up
(especially towards the end). I hope this continues for all the future
meetings.
The meeting logs for this week are available at the links below. I've
pasted the meeting summary at the end of this mail for quick
Hi
After obtaining a core in a regression run, I noticed there are a few readdir()
uses in threaded code. This is begging for a crash, as readdir() maintains
internal state that will be trashed on concurrent use. readdir_r()
should be used instead.
A quick search shows readdir() usage here:
The LATEST/ directory used by the glusterfs-epel repo file offered on
download.gluster.org references the $releasever, but doesn't have
directories for epel-7.1 or 7.2.
baseurl=
http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-$releasever/$basearch/
I have to manually
Hi, folks.
Here go new test results regarding client memory leak.
I use v3.7.8 with the following patches:
===
Soumya Koduri (2):
inode: Retire the inodes from the lru list in inode_table_destroy
gfapi: Use inode_forget in case of handle objects
===
Those are the only 2 not merged
Any comments before I merge the patch http://review.gluster.org/#/c/13393/ ?
On Mon, Feb 8, 2016 at 3:15 PM, Raghavendra Talur wrote:
>
>
> On Tue, Jan 19, 2016 at 8:33 PM, Emmanuel Dreyfus wrote:
>
>> On Tue, Jan 19, 2016 at 07:08:03PM +0530, Raghavendra
On Wednesday, February 10, 2016 5:07:27 PM, Amye Scavarda wrote:
On Wed, Feb 10, 2016 at 12:28 PM, Samikshan Bairagya < sbair...@redhat.com >
wrote:
On 02/10/2016 04:40 PM, Michael Scherer wrote:
On Wednesday, 10 February 2016 at 12:11 +0530, Atin Mukherjee wrote:
It'd be better if
https://github.com/blog/1986-announcing-git-large-file-storage-lfs
On 02/10/2016 03:10 AM, Michael Scherer wrote:
On Wednesday, 10 February 2016 at 12:11 +0530, Atin Mukherjee wrote:
It'd be better if you can send a PR to glusterdocs with the odp.
*grmbl* top post *grmlb*
I am not sure if
Great presentation Prasanna.
There are a couple of things that are difficult to make out solely from the
slides - could you shed some light on the following:
- libgfapi vs fuse chart
- what was the I/O profile you were measuring
- for the georep results is this saying that with the i/o load
Hi Steve,
We suspect the mismatch in accounting is probably due to the
xattrs not being cleaned up properly. Please ensure you perform the following
steps and make sure the xattrs are cleaned up properly before quota
is enabled the next time.
1) stop the volume
2) on each brick in the
On Wed, 2016-02-10 at 14:32 -0500, Diego Remolina wrote:
> I actually had this problem with CentOS 7 and glusterfs 3.7.x
>
> I downgraded to 3.6.x and the crashes stopped.
>
> See https://bugzilla.redhat.com/show_bug.cgi?id=1234877
>
> It may be the same issue.
>
Probably not. This issue was
Even more exciting would be to record a demo of you presenting this, as I
know DevConf didn't record anything.
We can get this out on the planet and blogs from there.
Thanks!
On Wed, Feb 10, 2016 at 7:41 AM, Atin Mukherjee wrote:
> It'd be better if you can send a PR to
Hi all,
This week's Gluster community meeting will start ~50 minutes from
now in #gluster-meeting on Freenode. The agenda for the meeting is
available at https://public.pad.fsfe.org/p/gluster-community-meetings
.
~kaushal
___
Gluster-devel mailing
On Wednesday, 10 February 2016 at 12:11 +0530, Atin Mukherjee wrote:
> It'd be better if you can send a PR to glusterdocs with the odp.
*grmbl* top post *grmlb*
I am not sure it's a good idea to add all kinds of binary files to
the git repo. Since git will store them in the repo on clone, that
> >
> >
> > hmm.. I would prefer an infinite timeout. The only scenario where the brick
> > process can forcefully flush leases would be connection loss with the
> > rebalance process. The more scenarios where the brick can flush leases
> > without the knowledge of the rebalance process, the more race windows we open up for
The crash reported in the above link is the same as bug 1221629.
But the stack trace mentioned below looks to be from a different regression run?
Can I get the link for that one?
It is strange that the bt shows 'rpcsvc_record_build_header' calling
'gf_history_changelog', which it does not do! Am I missing something?
On 02/10/2016 04:40 PM, Michael Scherer wrote:
On Wednesday, 10 February 2016 at 12:11 +0530, Atin Mukherjee wrote:
It'd be better if you can send a PR to glusterdocs with the odp.
*grmbl* top post *grmlb*
I am not sure it's a good idea to add all kinds of binary files to
the git repo.
Please attach the logs to https://bugzilla.redhat.com/show_bug.cgi?id=1304348
(or mail them to Kaushal and/or me).
Thanks
- Original Message -
> From: "Ronny Adsetts"
> To: "Kaleb Keithley" , "Gluster Users"
>
On Wed, Feb 10, 2016 at 3:14 PM, Amye Scavarda wrote:
> Even more exciting would be to record a demo of you presenting this, as I
> know DevConf didn't record anything.
You're wrong. https://www.youtube.com/watch?v=loHCZaTSVSQ (you need to
switch to the right camera though)
>
On Wed, Feb 10, 2016 at 11:54 AM, Kaushal M wrote:
> On Wed, Feb 10, 2016 at 3:14 PM, Amye Scavarda wrote:
> > Even more exciting would be to record a demo of you presenting this, as I
> > know DevConf didn't record anything.
>
> You're wrong.
On Wed, Feb 10, 2016 at 02:26:35PM +0530, Soumya Koduri wrote:
> Is this issue related to bug1221629 as well?
I do not know, but please someone replace readdir by readdir_r! :-)
--
Emmanuel Dreyfus
m...@netbsd.org
On Wed, Feb 10, 2016 at 12:17:23PM +0530, Soumya Koduri wrote:
> I see a core generated in this regression run though all the tests seem to
> have passed. I do not have a netbsd machine to analyze the core.
> Could you please take a look and let me know what the issue could have been?
changelog
Thanks Manu.
Kotresh,
Is this issue related to bug1221629 as well?
Thanks,
Soumya
On 02/10/2016 02:10 PM, Emmanuel Dreyfus wrote:
On Wed, Feb 10, 2016 at 12:17:23PM +0530, Soumya Koduri wrote:
I see a core generated in this regression run though all the tests seem to
have passed. I do not
On Wed, Feb 10, 2016 at 12:28 PM, Samikshan Bairagya
wrote:
>
>
> On 02/10/2016 04:40 PM, Michael Scherer wrote:
>
>> On Wednesday, 10 February 2016 at 12:11 +0530, Atin Mukherjee wrote:
>>
>>> It'd be better if you can send a PR to glusterdocs with the odp.
>>>
>>
>> *grmbl*
On Wed, Feb 10, 2016 at 4:58 PM, Kaleb Keithley wrote:
>
> Please attach the logs to https://bugzilla.redhat.com/show_bug.cgi?id=1304348
> (or mail them to Kaushal and/or me).
>
> Thanks
>
> - Original Message -
>> From: "Ronny Adsetts"
On Wed, Feb 10, 2016 at 5:43 PM, Amye Scavarda wrote:
> In order to increase the number of people attending the Gluster Community
> Meetings, we'd like to try something new for March.
>
> We'd like to move the March 2 and March 16th meetings to a slightly
> different time, 3
In order to increase the number of people attending the Gluster Community
Meetings, we'd like to try something new for March.
We'd like to move the March 2nd and March 16th meetings to a slightly
different time, 3 hours later, making it 15:00 UTC.
Does this time work for people, or should I look
hi Atin, Kaushal,
Is this a known issue?
(gdb) #1  0xbb789fb7 in __synclock_unlock (lock=0xbb1d4ac0)
    at /home/jenkins/root/workspace/rackspace-netbsd7-regression-triggered/libglusterfs/src/syncop.c:1056
#2  0xbb789ffd in synclock_unlock (lock=0xbb1d4ac0)
    at
Not that I am aware of. Do you have backtrace of all the threads?
~Atin
On 02/10/2016 05:58 PM, Pranith Kumar Karampuri wrote:
> hi Atin, Kaushal,
> Is this a known issue?
>
> (gdb) #1 0xbb789fb7 in __synclock_unlock (lock=0xbb1d4ac0)
> (gdb) at
>
On 02/10/2016 06:01 PM, Atin Mukherjee wrote:
Not that I am aware of. Do you have backtrace of all the threads?
it doesn't seem to give proper output for 'thread apply all bt':
(gdb) thread apply all bt
Thread 6 (process 2):
#0 0xbb354977 in ?? ()
#1 0xbb682b67 in ?? ()
#2 0xba4fff98 in
On Wed, Feb 10, 2016 at 6:08 PM, Pranith Kumar Karampuri
wrote:
>
>
> On 02/10/2016 06:01 PM, Atin Mukherjee wrote:
>>
>> Not that I am aware of. Do you have backtrace of all the threads?
>
>
> it doesn't seem to give proper output for 'thread apply all bt':
> (gdb) thread
On 02/10/2016 06:13 PM, Kaushal M wrote:
> On Wed, Feb 10, 2016 at 6:08 PM, Pranith Kumar Karampuri
> wrote:
>>
>>
>> On 02/10/2016 06:01 PM, Atin Mukherjee wrote:
>>>
>>> Not that I am aware of. Do you have backtrace of all the threads?
>>
>>
>> it doesn't seem to give
Hi All,
We will be migrating our Gerrit and (possibly) Jenkins services from the
current hosted infrastructure on Friday, 12th February. To accomplish
the migration, we will have a downtime for these services starting at
0900 UTC. We expect the migration to be complete by 1700 UTC on the same