[Gluster-devel] Release 5: Release calendar and status updates

2018-08-14 Thread Shyam Ranganathan
This mail is to solicit the following,

Features/enhancements planned for Gluster 5 need the following from
contributors:
  - Open or use the relevant issue
  - Mark the issue with the "Release 5" milestone [1]
  - Post the issue details to the devel list, requesting that it be
tracked for the release

NOTE: We are ~7 days from branching, and I do not have any issues marked
for the release. Please respond with the issues that are going to be
part of this release as you read this.

The calendar of activities looks as follows:

1) master branch health checks (weekly, till branching)
  - Expect every Monday a status update on various tests runs

2) Branching date: (Monday) Aug-20-2018 (~40 days before GA tagging)

3) Late feature backport closure: (Friday) Aug-24-2018 (1 week from
branching)

4) Initial release notes readiness: (Monday) Aug-27-2018

5) RC0 build: (Monday) Aug-27-2018



6) RC1 build: (Monday) Sep-17-2018



7) GA tagging: (Monday) Oct-01-2018



8) Release announcement: ~1 week after GA tagging

Per-phase go/no-go decisions will be discussed on the maintainers list.


[1] Release milestone: https://github.com/gluster/glusterfs/milestone/7
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Firewall migration around the end of the month.

2018-08-14 Thread Michael Scherer
Hi folks,

So after a few weeks of testing, the new firewall based on nft seems to
be ready. I switched a few servers to a test firewall
(chrono.rht.gluster.org) without any trouble so far.

So I plan to switch the 2 main HA firewalls (masa and mune) to use nft
instead of firewalld sometime in the next 2 weeks, depending on how
fast I can recover from Flock and where in the world I will be by then.

Switching to the new firewall would let us have:
- better management of the firewall (one single file, instead of the
Cthulhuan horror of 75 calls to the firewalld Ansible module)
- a more modern stack (see
https://developers.redhat.com/blog/2018/08/10/firewalld-the-future-is-nftables/ )
- a more locked-down internal network (which in turn would make it
easier to detect a future attack, especially if we start to sign
packages, etc.)
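To illustrate the "1 single file" point, an nftables ruleset is just a
declarative text file loaded in one shot. The following is a minimal
hypothetical sketch (not the actual gluster.org config; see the
repository linked below for the real one) with a default-deny input
policy that only admits SSH and established traffic:

```nft
#!/usr/sbin/nft -f
# Start from a clean slate, then declare the whole policy in one file.
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept  # keep existing connections
        iif "lo" accept                      # allow loopback
        tcp dport 22 accept                  # allow SSH
    }
}
```

Loading it is a single `nft -f /etc/nftables.conf`, versus driving
firewalld rule-by-rule from Ansible.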

In practice, this should be pretty transparent for users, but if you
see any network issue on a builder in the int.rht.gluster.org domain,
please tell us, along with the date, so we can investigate.

People interested can see the config file at
https://github.com/gluster/gluster.org_ansible_configuration/blob/master/roles/nftables/templates/nftables.conf

Is there a time that should be avoided for the deployment, even though
it should only impact the various internal infra servers and internal
builders?

We still plan to later move some services inside the internal LAN, such
as Postgres and Jenkins, but that is out of scope for this change.

-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS




[Gluster-devel] Coverity covscan for 2018-08-14-e49c95dd (master branch)

2018-08-14 Thread staticanalysis


GlusterFS Coverity covscan results for the master branch are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-08-14-e49c95dd/

Coverity covscan results for other active branches are also available at
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/



Re: [Gluster-devel] [Gluster-infra] Setting up machines from softserve in under 5 mins

2018-08-14 Thread Kotresh Hiremath Ravishankar
In /etc/hosts, I think it is adding a different IP.

On Mon, Aug 13, 2018 at 5:59 PM, Rafi Kavungal Chundattu Parambil <
rkavu...@redhat.com> wrote:

> This is so nice. I tried it and successfully created a test machine. It
> would be great if there were a provision to extend the lifetime of VMs
> beyond the time provided during creation. At first I ran the
> ansible-playbook from the VM itself, then I realized it has to be
> executed from an outside machine. Maybe we can mention that in the doc.
>
> Regards
> Rafi KC
>
> - Original Message -
> From: "Nigel Babu" 
> To: "gluster-devel" 
> Cc: "gluster-infra" 
> Sent: Monday, August 13, 2018 3:38:17 PM
> Subject: [Gluster-devel] Setting up machines from softserve in under 5 mins
>
> Hello folks,
>
> A while ago, Deepshikha did the work to make loaning a machine for
> running your regressions faster. I've tested it a few times today to
> confirm it works as expected. In the past, Softserve[1] machines would
> be a clean CentOS 7 image. Now, we have an image with all the
> dependencies installed and *almost* set up to run regressions. It just
> needs a few steps run on it, and we have a simplified playbook that
> runs *just* those steps. This brings the time to set up a machine down
> from around 30 mins to less than 5 mins. The instructions[2] are on the
> softserve wiki for now, but will move to the site itself in the future.
>
> Please let us know if you face troubles by filing a bug.[3]
> [1]: https://softserve.gluster.org/
> [2]: https://github.com/gluster/softserve/wiki/Running-Regressions-on-loaned-Softserve-instances
> [3]: https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&component=project-infrastructure
>
> --
> nigelb
>



-- 
Thanks and Regards,
Kotresh H R

Re: [Gluster-devel] Access to Docker Hub Gluster organization

2018-08-14 Thread Nigel Babu
On Tue, Aug 14, 2018 at 5:52 PM Humble Chirammal 
wrote:

>
>
> On Tue, Aug 14, 2018 at 2:09 PM, Nigel Babu  wrote:
>
>> Hello folks,
>>
>> Do we know who's the admin of the Gluster organization on Docker hub? I'd
>> like to be added to the org so I can set up nightly builds for all the
>> GCS-related containers.
>>
>> I admin this repo and I can add you to the team.
>
>
I found Kaushal who added me to the team :)


-- 
nigelb

Re: [Gluster-devel] Access to Docker Hub Gluster organization

2018-08-14 Thread John Strunk
Is there anything preventing us from using the hooks in GitHub to
automate this? (Or is that what you're proposing?)

I would expect the following to be sufficient (and fully automatable via
both docker hub and quay):
master => latest
{git release tags} => image tag

-John


On Tue, Aug 14, 2018 at 6:27 AM Niels de Vos  wrote:

> On Tue, Aug 14, 2018 at 02:09:59PM +0530, Nigel Babu wrote:
> > Hello folks,
> >
> > Do we know who's the admin of the Gluster organization on Docker hub? I'd
> > like to be added to the org so I can set up nightly builds for all the
> > GCS-related containers.
>
> Nice! Which containers are these? The ones from the gluster-containers
> repository on GitHub?
>
> I was looking for this as well, but also for the team that exists on
> quay.io.
>
> There has been a request to keep clearly identifiable versioning for our
> containers. It is something I want to look at, but have not had the time
> to do so. A description of which containers we have, where the sources
> are kept, base OS, and possible branches/versions would be needed (maybe
> more). If this is documented somewhere, a pointer to it would be great.
> Otherwise I'll need assistance in figuring out what the current state
> is.
>
> Thanks,
> Niels

Re: [Gluster-devel] [Gluster-Maintainers] Master branch lock down for stabilization (unlocking the same)

2018-08-14 Thread Shyam Ranganathan
On 08/14/2018 12:51 AM, Pranith Kumar Karampuri wrote:
> 
> 
> On Mon, Aug 13, 2018 at 10:55 PM Shyam Ranganathan wrote:
> 
> On 08/13/2018 02:20 AM, Pranith Kumar Karampuri wrote:
> >     - At the end of 2 weeks, reassess master and nightly test
> status, and
> >     see if we need another drive towards stabilizing master by
> locking down
> >     the same and focusing only on test and code stability around
> the same.
> >
> >
> > When will there be a discussion about coming up with guidelines to
> > prevent lock down in future?
> 
> A thread for the same is started in the maintainers list.
> 
> 
> Could you point me to the thread please? I am only finding a thread with
> subject "Lock down period merge process"

That is the one I am talking about, where you already raised the above
point (if I recollect right).

> 
> >
> > I think it is better to lock-down specific components by removing
> commit
> > access for the respective owners for those components when a test in a
> > particular component starts to fail.
> 
> Also I suggest we move this to the maintainers thread, to keep the noise
> levels across lists in check.
> 
> Thanks,
> Shyam
> 
> 
> 
> -- 
> Pranith

[Gluster-devel] Gluster release 3.12.13 (Long Term Maintenance) Canceled for 10th of August, 2018

2018-08-14 Thread Jiffin Tony Thottan

Hi,

Currently the master branch is locked for fixing failures in the
regression test suite [1].

As a result, we are not releasing the next minor update for the 3.12
branch, which falls on the 10th of every month.

The next 3.12 update would be around the 10th of September, 2018.

Apologies for the delay in communicating these details.

[1] 
https://lists.gluster.org/pipermail/gluster-devel/2018-August/055160.html


Regards,

Jiffin



Re: [Gluster-devel] Access to Docker Hub Gluster organization

2018-08-14 Thread Niels de Vos
On Tue, Aug 14, 2018 at 02:09:59PM +0530, Nigel Babu wrote:
> Hello folks,
> 
> Do we know who's the admin of the Gluster organization on Docker hub? I'd
> like to be added to the org so I can set up nightly builds for all the
> GCS-related containers.

Nice! Which containers are these? The ones from the gluster-containers
repository on GitHub?

I was looking for this as well, but also for the team that exists on
quay.io.

There has been a request to keep clearly identifiable versioning for our
containers. It is something I want to look at, but have not had the time
to do so. A description of which containers we have, where the sources
are kept, base OS, and possible branches/versions would be needed (maybe
more). If this is documented somewhere, a pointer to it would be great.
Otherwise I'll need assistance in figuring out what the current state
is.

Thanks,
Niels


[Gluster-devel] Access to Docker Hub Gluster organization

2018-08-14 Thread Nigel Babu
Hello folks,

Do we know who's the admin of the Gluster organization on Docker hub? I'd
like to be added to the org so I can set up nightly builds for all the
GCS-related containers.

-- 
nigelb