Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

2019-04-22 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Ok, I am clear now.
I’ve added SSL_free() in the socket reset and socket finish functions. Although
the glusterfsd memory leak is much smaller now, it is still leaking, and I
cannot find anything else in the source code.
Could you help check whether this issue exists in your environment? If not, I
may try merging your patch.
Steps:

1> while true; do gluster v heal <vol-name> info; done

2> Check the <vol-name> glusterfsd memory usage; it is obviously increasing.
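The two steps above can be sketched as a small script; the default volume name
and the RSS-sampling helper are illustrative assumptions, not part of the
original report:

```shell
#!/bin/sh
# Sketch of the reproduction steps above. Pass the volume name as $1;
# glusterfsd must already be running for that volume.
VOL="${1:-test-vol}"

# Sum the resident set size (KiB) of all glusterfsd processes.
rss_kib() {
  ps -C glusterfsd -o rss= | awk '{s+=$1} END {print s+0}'
}

before=$(rss_kib)
i=0
while [ "$i" -lt 100 ]; do
  gluster v heal "$VOL" info > /dev/null 2>&1
  i=$((i + 1))
done
after=$(rss_kib)
echo "glusterfsd RSS: ${before} KiB -> ${after} KiB"
```

If the RSS keeps climbing across repeated runs, the leak described above is
still present.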
cynthia

From: Milind Changire 
Sent: Monday, April 22, 2019 2:36 PM
To: Zhou, Cynthia (NSB - CN/Hangzhou) 
Cc: Atin Mukherjee ; gluster-devel@gluster.org
Subject: Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

According to BIO_new_socket() man page ...

If the close flag is set then the socket is shut down and closed when the BIO 
is freed.

For Gluster to retain control over socket shutdown, the BIO_NOCLOSE flag is
set. Otherwise, SSL takes control of socket shutdown whenever the BIO is freed.

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Announcing Gluster release 6.1

2019-04-22 Thread Shyam Ranganathan
The Gluster community is pleased to announce the release of Gluster
6.1 (packages available at [1]).

Release notes for the release can be found at [2].

Major changes, features and limitations addressed in this release:

None

Thanks,
Gluster community

[1] Packages for 6.1:
https://download.gluster.org/pub/gluster/glusterfs/6/6.1/

[2] Release notes for 6.1:
https://docs.gluster.org/en/latest/release-notes/6.1/

___
maintainers mailing list
maintain...@gluster.org
https://lists.gluster.org/mailman/listinfo/maintainers



[Gluster-devel] ./tests/basic/afr/tarissue.t is failing on regression runs

2019-04-22 Thread Deepshikha Khandelwal
The above test has been failing all day today because rpc-statd goes into an
inactive state on the builders.
I've been trying to re-enable this service, which goes into a dead state after
every run.
1. rpcbind  and rpcbind.socket is running on builders
2. ipv6 is disabled.
3. nfs-server is also working fine
4. is_nfs_export_available is failing for this test, which means showmount
is failing. Hence there is some issue with mount.nfs.
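A quick triage along the lines of the checks above might look like this;
the service names match the ones mentioned, and localhost is an assumption:

```shell
#!/bin/sh
# Hypothetical builder triage for is_nfs_export_available failures.
host=localhost

# Check each service mentioned above; print "unknown" if systemctl
# returns nothing for a unit.
for svc in rpcbind rpcbind.socket rpc-statd nfs-server; do
  state=$(systemctl is-active "$svc" 2>/dev/null)
  echo "$svc: ${state:-unknown}"
done

# is_nfs_export_available is essentially a wrapper around showmount.
if showmount -e "$host" > /dev/null 2>&1; then
  echo "showmount OK"
else
  echo "showmount FAILED -- check rpc-statd / rpcbind states above"
fi
```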

This test was not failing earlier. Does anyone know what has changed, or how
this can be fixed on the builders?

Jiffin and Soumya, can you please help me through this?

Re: [Gluster-devel] [Gluster-infra] is_nfs_export_available from nfs.rc failing too often?

2019-04-22 Thread Atin Mukherjee
Is this back again? The recent patches are failing regression :-\ .

On Wed, 3 Apr 2019 at 19:26, Michael Scherer  wrote:

> Le mercredi 03 avril 2019 à 16:30 +0530, Atin Mukherjee a écrit :
> > On Wed, Apr 3, 2019 at 11:56 AM Jiffin Thottan 
> > wrote:
> >
> > > Hi,
> > >
> > > is_nfs_export_available is just a wrapper around "showmount"
> > > command AFAIR.
> > > I saw following messages in console output.
> > >  mount.nfs: rpc.statd is not running but is required for remote
> > > locking.
> > > 05:06:55 mount.nfs: Either use '-o nolock' to keep locks local, or
> > > start
> > > statd.
> > > 05:06:55 mount.nfs: an incorrect mount option was specified
> > >
> > > To me it looks like rpcbind may not be running on the machine.
> > > Usually rpcbind starts automatically on machines; I don't know
> > > whether this can happen or not.
> > >
> >
> > That's precisely the question: why are we suddenly seeing this happen
> > so frequently? Today I saw at least 4 to 5 such failures already.
> >
> > Deepshika - Can you please help in inspecting this?
>
> So we think (we are not sure) that the issue is a bit complex.
>
> What we were investigating was the nightly run failing on AWS. When the
> build crashes, the builder is restarted, since that's the easiest way to
> clean everything (even with a perfect test suite that cleaned up after
> itself, we could always end up in a corrupt state on the system, w.r.t.
> mounts, fs, etc.).
>
> In turn, this seems to cause trouble on AWS, since cloud-init or something
> renames the eth0 interface to ens5 without cleaning up the network
> configuration.
>
> So the network init script fails (because the image says "start eth0" and
> that's not present), but it fails in a weird way. The network is initialised
> and working (we can connect), but the dhclient process is not in the right
> cgroup, and network.service is in a failed state. Restarting the network
> didn't work. In turn, this means that rpc-statd refuses to start (due to
> systemd dependencies), which seems to impact various NFS tests.
>
> We have also seen that on some builders, rpcbind picks up some IPv6
> autoconfiguration, but we can't reproduce that, and there is no IPv6 set up
> anywhere. I suspect the network.service failure is somehow involved, but I
> fail to see how. In turn, rpcbind.socket not starting could cause NFS test
> trouble.
>
> Our current stopgap fix was to repair all the builders one by one: remove
> the config, kill the rogue dhclient, and restart the network service.
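The per-builder stopgap quoted above might be scripted roughly as follows;
the config path and service names are assumptions based on a typical
RHEL/CentOS builder, so treat this as a sketch, not the exact commands used:

```shell
#!/bin/sh
# Hypothetical per-builder stopgap (run as root): drop the stale eth0
# config left behind after the NIC was renamed to ens5, kill the rogue
# dhclient, and restart the affected services.
rm -f /etc/sysconfig/network-scripts/ifcfg-eth0

# Kill any dhclient still bound to the old interface name; "|| true"
# keeps the script going when no such process exists.
pkill -f 'dhclient.*eth0' 2>/dev/null || true

systemctl restart network
systemctl restart rpcbind.socket rpcbind rpc-statd
```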
>
> However, we can't be sure this is going to fix the problem long term, since
> it only manifests after a crash of the test suite, and that doesn't happen
> so often. (Plus, it was working before some day in the past, when something
> made it fail, and I do not know whether that was a system upgrade, a test
> change, or both.)
>
> So we are still looking at it to get a complete understanding of the issue,
> but so far we have hacked our way to making it work (or so I think).
>
> Deepshika is working to fix it long term, by fixing the issue regarding
> eth0/ens5 with a new base image.
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
> --
- Atin (atinm)

Re: [Gluster-devel] [Gluster-users] Proposal: Changes in Gluster Community meetings

2019-04-22 Thread FNU Raghavendra Manjunath
Hi,

This is the agenda for tomorrow's community meeting for NA/EMEA timezone.

https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g?both




On Thu, Apr 11, 2019 at 4:56 AM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

> Hi All,
>
> Below is the final details of our community meeting, and I will be sending
> invites to mailing list following this email. You can add Gluster Community
> Calendar so you can get notifications on the meetings.
>
> We are starting the meetings next week. For the first meeting, we need one
> volunteer from the users to discuss their use case: what went well, what
> went badly, etc., preferably in the APAC region. The NA/EMEA region follows
> the week after.
>
> Draft Content: https://hackmd.io/OqZbh7gfQe6uvVUXUVKJ5g
> 
> Gluster Community Meeting
> Previous Meeting minutes:
>
>- http://github.com/gluster/community
>
>
> Date/Time:
> Check the community calendar
> 
> Bridge
>
>- APAC friendly hours
>   - Bridge: https://bluejeans.com/836554017
>- NA/EMEA
>   - Bridge: https://bluejeans.com/486278655
>
> --
> Attendance
>
>- Name, Company
>
> Host
>
>- Who will host next meeting?
>   - Host will need to send out the agenda 24hr - 12hrs in advance to
>   mailing list, and also make sure to send the meeting minutes.
>   - Host will need to reach out to one user at least who can talk
>   about their usecase, their experience, and their needs.
>   - Host needs to send meeting minutes as PR to
>   http://github.com/gluster/community
>
> User stories
>
>- Discuss one use case from a user.
>   - How was the architecture derived? What volume type and options were
>   used?
>   - What were the major issues faced? How can they be improved?
>   - What worked well?
>   - How can we all collaborate well, so it is win-win for the
>   community and the user? How can we
>
> Community
>
>- Any release updates?
>- Blocker issues across the project?
>- Metrics
>   - Number of new bugs since previous meeting. How many are not triaged?
>   - Number of emails, anything unanswered?
>
> Conferences / Meetups
>
>- Any conference in the next month where gluster-developers or
>gluster-users are going? So we can meet and discuss.
>
> Developer focus
>
>- Any design specs to discuss?
>- Metrics of the week?
>   - Coverity
>   - Clang-Scan
>   - Number of patches from new developers.
>   - Did we increase test coverage?
>   - [Atin] Also talk about most frequent test failures in the CI and
>   carve out an AI to get them fixed.
>
> RoundTable
>
>- 
>
> 
>
> Regards,
> Amar
>
> On Mon, Mar 25, 2019 at 8:53 PM Amar Tumballi Suryanarayan <
> atumb...@redhat.com> wrote:
>
>> Thanks for the feedback Darrell,
>>
>> The new proposal is to have one in North America 'morning' time (10AM
>> PST), and another during the ASIA daytime, which is 7pm/6pm in Australia,
>> 9pm in New Zealand, 5pm in Tokyo, and 4pm in Beijing.
>>
>> For example, if we choose Every other Tuesday for meeting, and 1st of the
>> month is Tuesday, we would have North America time for 1st, and on 15th it
>> would be ASIA/Pacific time.
>>
>> Hopefully, this way, we can cover all the timezones, and meeting minutes
>> would be committed to github repo, so that way, it will be easier for
>> everyone to be aware of what is happening.
>>
>> Regards,
>> Amar
>>
>> On Mon, Mar 25, 2019 at 8:40 PM Darrell Budic 
>> wrote:
>>
>>> As a user, I’d like to visit more of these, but the time slot is my 3AM.
>>> Any possibility of a rolling schedule (move the meeting +6 hours each
>>> week, with rolling attendance from maintainers?) or an occasional regional
>>> meeting offset 12 hours from the one you're proposing?
>>>
>>>   -Darrell
>>>
>>> On Mar 25, 2019, at 4:25 AM, Amar Tumballi Suryanarayan <
>>> atumb...@redhat.com> wrote:
>>>
>>> All,
>>>
>>> We currently have 3 meetings which are public:
>>>
>>> 1. Maintainer's Meeting
>>>
>>> - It runs once every 2 weeks (on Mondays); current attendance is around
>>> 3-5 on average, and not much gets discussed.
>>> - Without majority attendance, we can't take any decisions either.
>>>
>>> 2. Community meeting
>>>
>>> - Supposed