[Gluster-devel] Clang-Formatter for GlusterFS.

2018-08-22 Thread Amar Tumballi
Hi All,

Below is an update on the project’s move towards using clang-format to
enforce a consistent coding standard.

The Gluster project has, since its inception, followed a basic coding
standard that was (at the time) easy to follow and easy to review.

Over time, as many more developers joined and we worked with other
communities whose coding standards differ, code of varying styles made its
way into the source. After 11+ years, it is time to rely on a tool for this,
and hence we have decided to depend on clang-format.

Below are some highlights of this activity. We expect each of you to
actively help us in this move, so the transition is smooth for all of us.

   - We kickstarted this activity sometime around April 2018.
   - A repo was created for trying out the options and validating the
     code. Link to Repo
   - Now, with the latest .clang-format file, we have reformatted the whole
     GlusterFS codebase (an illustrative fragment of such a file is shown
     just after this list). The change here
   - We will be running regression with these changes multiple times, so
     that nothing slips in without our notice.
   - As it is a very big change (almost 6 lakh, i.e. ~600,000, lines
     changed), we will not put this commit through Gerrit, but will push it
     directly to the repo.
   - Once this patch gets in (ETA: 28th August), all pending patches will
     need to be rebased.
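
For illustration only, here is a minimal fragment of what a .clang-format
file can look like. These are standard clang-format options, but the values
shown are just examples, not necessarily the ones chosen for GlusterFS
(those are in the repo and change linked above):

    # Illustrative .clang-format fragment, not the project's actual file.
    BasedOnStyle: LLVM
    IndentWidth: 4
    UseTab: Never
    ColumnLimit: 80
    BreakBeforeBraces: Linux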

What are the next steps:

   - The patch adding the .clang-format file will get in first.
   - Nigel/Infra team will keep the repo with all files changed open for
     review till EOD 27th August, 2018.
   - Upon passing regression, we will push this one change to the main branch.
   - After that, a smoke job will validate the coding standard against the
     .clang-format file and vote -1 on patches that do not meet it.
   - There will be guidelines on how to set up clang-format locally, so that
     patches get posted in the proper format (see the sketch right after
     this list). This will be provided for both ./rfc.sh and git review
     users.
   - Having clang-format installed locally will remain optional, but there
     is a high chance smoke will fail if the patch is not formatted right.
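
As a rough sketch (not the official guideline, which is still to come), one
way to format only the files you touched before posting a patch, assuming
clang-format is installed, the repo's .clang-format file sits at the top of
the tree, and your branch is based on origin/master:

    # Reformat, in place, the C sources and headers changed on this branch.
    for f in $(git diff --name-only origin/master -- '*.c' '*.h'); do
        clang-format -i "$f"
    done
    # Review the result before amending the commit and posting for review.
    git diff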

Any future changes to the coding standard, whether due to improvements in
the clang-format tool itself or because developers believe some other option
is better suited, can go in through Gerrit.

Also note that we will not be applying the changes to the contrib/ directory,
as that code is expected to follow the upstream coding standard of the
corresponding project. We believe this makes it easy to quickly diff our copy
against the corresponding upstream changes.

Happy to hear any feedback!

Regards,
Amar (on behalf of many Gluster Maintainers)

[Gluster-devel] [Fwd: [Gluster-infra] Reboot policy for the infra]

2018-08-22 Thread Michael Scherer
Forwarding, because I can't type gluster-devel properly and end up using
gluster-dev each time :p

-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS

--- Begin Message ---
Hi,

so that's kernel reboot time again, this time courtesy of Intel
(again). I do not consider the issue to be "OMG the sky is falling",
but it is enough to take the time to streamline our reboot process.



Currently, we do not have a policy or anything, and I think the
negotiation time around that is cumbersome:
- we need to reach people, which takes time and adds latency (it would be
bad if it were an urgent issue, and likely adds unneeded stress while
waiting)

- we need to keep track of what was supposed to be done, which is also
cumbersome

While that would not be a problem if I had only Gluster to deal with, my
team of 3 has to deal with quite a few more projects than one, and
orchestrating choices for a dozen groups is time-consuming (just think of
the last time you had to pick a restaurant after a conference to see how
hard it is to reach agreement).

So I would propose that we simplify that with the following policy:

- Jenkins builders would be rebooted by Jenkins on a regular basis (a rough
sketch of one possible approach is below).
I do not know yet exactly how we can do that, but given that we have enough
nodes to sustain builds, it shouldn't impact developers in a big way. The
only exception is the FreeBSD builder, since we only have 1 functional at
the moment. But once the 2nd is working, it should be treated like the
others.
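
Just to make the idea concrete, a purely hypothetical sketch of what such a
scheduled reboot could look like; the node name and the exact Jenkins CLI
invocation here are made up for illustration, not a decided implementation:

    # Hypothetical: take one builder out of the pool, reboot it, put it back.
    java -jar jenkins-cli.jar -s "$JENKINS_URL" offline-node builder-el7-01 \
        -m "scheduled kernel reboot"
    ssh builder-el7-01 'sudo shutdown -r now'
    # ... wait for the node to come back up (polling omitted) ...
    java -jar jenkins-cli.jar -s "$JENKINS_URL" online-node builder-el7-01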

- services in HA (firewall, reverse proxy, internal squid/DNS) would be
rebooted during the day without notice. Thanks to working HA, that is not
user-impacting. In fact, that's already what I do.

- services not in HA should be pushed towards HA (gerrit might get there
one day, no way for jenkins :/, we need to look at postgres and hence
fstat/softserve, and maybe try to get something for
download.gluster.org)

- reboots of critical services not in HA should be announced in advance.
Critical means the services listed here:
https://gluster-infra-docs.readthedocs.io/emergency.html

- services not visible to end users (backup servers, ansible deployment,
etc.) can be rebooted at will

Then the only question is what to do about things not in the previous
categories, like softserve and fstat.

Also, every dependency is as critical as the most critical service that
depends on it. So the hypervisors hosting gerrit/jenkins are critical
(until we find a way to avoid outages), while the ones for builders are not.



Thoughts, ideas?


-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS



--- End Message ---



[Gluster-devel] Coverity covscan for 2018-08-22-0ebaa9c6 (master branch)

2018-08-22 Thread staticanalysis


GlusterFS Coverity covscan results for the master branch are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-08-22-0ebaa9c6/

Coverity covscan results for other active branches are also available at
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/



Re: [Gluster-devel] Release 5: Release calendar and status updates

2018-08-22 Thread Shyam Ranganathan
On 08/14/2018 02:28 PM, Shyam Ranganathan wrote:
> 2) Branching date: (Monday) Aug-20-2018 (~40 days before GA tagging)

We are postponing branching to the 2nd week of September (the 10th), as
the entire effort in this release has been around stability and fixing
issues across the board.

Thus, we expect no net new features from here on till branching; the
features that are already part of the code base, and their details, are
listed below.

> 
> 3) Late feature back port closure: (Friday) Aug-24-2018 (1 week from
> branching)

As stated above, there is no late feature back port.

The features that are part of master since the 4.1 release are as
follows, with some questions for the authors:

1) Changes to options tables in xlators (#302)

@Kaushal/GD2 team, can we call this complete? There may be no real
release notes for this, as the changes are internal in nature, but
checking nevertheless.

2) CloudArchival (#387)

@susant, what is the status of this feature? Is it complete?
I am missing user documentation, and code coverage from the tests is
very low (see:
https://build.gluster.org/job/line-coverage/485/Line_20Coverage_20Report/ )

3) Quota fsck (#390)

@Sanoj, there is documentation in the GitHub issue, but I would prefer
that the user-facing documentation move to glusterdocs instead.

Further, I see no real test coverage for the tool provided here; any
thoughts on that?

The script is not part of the tarball, and hence not in the distribution
RPMs either; what is the plan for distributing it?

4) Ensure python3 compatibility across code base (#411)

@Kaleb/others, the last patch needed to call this issue done (sans real
testing at the moment) is https://review.gluster.org/c/glusterfs/+/20868;
please review and vote there so it can be merged before branching.

5) Turn on Dentry fop serializer by default in brick stack (#421)

@du, the release note for this can be short, as other details are
captured in the 4.0 release notes.

However, in the 4.0 release we noted a limitation with this feature as
follows:

"Limitations: This feature is released as a technical preview, as
performance implications are not known completely." (see section
https://docs.gluster.org/en/latest/release-notes/4.0.0/#standalone )

Do we now have better data on this that we can use when announcing the
release?

Thanks,
Shyam


Re: [Gluster-devel] Release 5: Release calendar and status updates

2018-08-22 Thread Susant Palai
Comments inline
On Wed, Aug 22, 2018 at 11:34 PM Shyam Ranganathan 
wrote:

> On 08/14/2018 02:28 PM, Shyam Ranganathan wrote:
> > 2) Branching date: (Monday) Aug-20-2018 (~40 days before GA tagging)
>
> We are postponing branching to 2nd week of September (10th), as the
> entire effort in this release has been around stability and fixing
> issues across the board.
>
> Thus, we are expecting no net new features from hereon till branching,
> and features that are already a part of the code base and its details
> are as below.
>
> >
> > 3) Late feature back port closure: (Friday) Aug-24-2018 (1 week from
> > branching)
>
> As stated above, there is no late feature back port.
>
> The features that are part of master since 4.1 release are as follows,
> with some questions for the authors,
>
> 1) Changes to options tables in xlators (#302)
>
> @Kaushal/GD2 team, can we call this complete? There maybe no real
> release notes for the same, as these are internal in nature, but
> checking nevertheless.
>
> 2) CloudArchival (#387)
>
> @susant, what is the status of this feature? Is it complete?
>
The feature is complete from a functional point of view, but we would
still like to retain "experimental" status for a few releases.

> I am missing user documentation, and code coverage from the tests is
>
User documentation is here:
https://review.gluster.org/#/c/glusterfs/+/20064/
Or is there some other doc that I missed?

> very low (see:
> https://build.gluster.org/job/line-coverage/485/Line_20Coverage_20Report/
> )
>
This is expected: without any plugin, most of the code is untouched; it's
just a bypass in the build setup.

>
> 3) Quota fsck (#390)
>
> @Sanoj I do have documentation in the github issue, but would prefer if
> the user facing documentation moves to glusterdocs instead.
>
> Further I see no real test coverage for the tool provided here, any
> thoughts around the same?
>
> The script is not part of the tarball and hence the distribution RPMs as
> well, what is the thought around distributing the same?
>
> 4) Ensure python3 compatibility across code base (#411)
>
> @Kaleb/others, last patch to call this issue done (sans real testing at
> the moment) is https://review.gluster.org/c/glusterfs/+/20868 request
> review and votes here, to get this merged before branching.
>
> 5) Turn on Dentry fop serializer by default in brick stack (#421)
>
> @du, the release note for this can be short, as other details are
> captured in 4.0 release notes.
>
> However, in 4.0 release we noted a limitation with this feature as follows,
>
> "Limitations: This feature is released as a technical preview, as
> performance implications are not known completely." (see section
> https://docs.gluster.org/en/latest/release-notes/4.0.0/#standalone )
>
> Do we now have better data regarding the same that we can use when
> announcing the release?
>
> Thanks,
> Shyam

Re: [Gluster-devel] Release 5: Release calendar and status updates

2018-08-22 Thread Sanoj Unnikrishnan
On Wed, Aug 22, 2018 at 11:33 PM, Shyam Ranganathan 
wrote:

> On 08/14/2018 02:28 PM, Shyam Ranganathan wrote:
> > 2) Branching date: (Monday) Aug-20-2018 (~40 days before GA tagging)
>
> We are postponing branching to 2nd week of September (10th), as the
> entire effort in this release has been around stability and fixing
> issues across the board.
>
> Thus, we are expecting no net new features from hereon till branching,
> and features that are already a part of the code base and its details
> are as below.
>
> >
> > 3) Late feature back port closure: (Friday) Aug-24-2018 (1 week from
> > branching)
>
> As stated above, there is no late feature back port.
>
> The features that are part of master since 4.1 release are as follows,
> with some questions for the authors,
>
> 1) Changes to options tables in xlators (#302)
>
> @Kaushal/GD2 team, can we call this complete? There maybe no real
> release notes for the same, as these are internal in nature, but
> checking nevertheless.
>
> 2) CloudArchival (#387)
>
> @susant, what is the status of this feature? Is it complete?
> I am missing user documentation, and code coverage from the tests is
> very low (see:
> https://build.gluster.org/job/line-coverage/485/Line_20Coverage_20Report/
> )
>
> 3) Quota fsck (#390)
>
> @Sanoj I do have documentation in the github issue, but would prefer if
> the user facing documentation moves to glusterdocs instead.
>
> Further I see no real test coverage for the tool provided here, any
> thoughts around the same?
>
> The script is not part of the tarball and hence the distribution RPMs as
> well, what is the thought around distributing the same?
>

I will start working on the packaging and glusterdocs for it right away.
Test coverage will be an ongoing activity, since multiple scenarios have
to be covered.


> 4) Ensure python3 compatibility across code base (#411)
>
> @Kaleb/others, last patch to call this issue done (sans real testing at
> the moment) is https://review.gluster.org/c/glusterfs/+/20868 request
> review and votes here, to get this merged before branching.
>
> 5) Turn on Dentry fop serializer by default in brick stack (#421)
>
> @du, the release note for this can be short, as other details are
> captured in 4.0 release notes.
>
> However, in 4.0 release we noted a limitation with this feature as follows,
>
> "Limitations: This feature is released as a technical preview, as
> performance implications are not known completely." (see section
> https://docs.gluster.org/en/latest/release-notes/4.0.0/#standalone )
>
> Do we now have better data regarding the same that we can use when
> announcing the release?
>
> Thanks,
> Shyam