Re: [Gluster-infra] Request for more executor VMs for the Gluster project

2019-01-08 Thread Kaushal M
Awesome, thanks!

On Wed, 9 Jan 2019, 00:29 Brian Stinson wrote:
> I migrated the gluster workspace to a new VM. If you have access to the
> workspace, your new hostname is slave07.ci.centos.org
>
> This should help avoid the noisy neighbour problem in the future.
>
> --Brian
>
> On Tue, Jan 8, 2019 at 12:50 AM Kaushal M  wrote:
>
>> Hi,
>>
>> Just a little while back, the gluster-ci-slave01 VM assigned to the
>> Gluster project went offline.
>> Fabian diagnosed this to have been caused by the VM being overloaded
>> with too many jobs.
>>
>> Can an additional VM be set up for the
>> increased load?
>>
>> Thanks.
>>
>
___
Gluster-infra mailing list
Gluster-infra@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-infra

[Gluster-infra] Request for more executor VMs for the Gluster project

2019-01-07 Thread Kaushal M
Hi,

Just a little while back, the gluster-ci-slave01 VM assigned to the
Gluster project went offline.
Fabian diagnosed this to have been caused by the VM being overloaded
with too many jobs.

Can an additional VM be set up for the gluster project to handle the
increased load?

Thanks.
___
Gluster-infra mailing list
Gluster-infra@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] glusto-tests docker hub setup

2017-08-30 Thread Kaushal M
I've changed the linking. I've also added you as an admin for this
image. You should be able to maintain this image by yourself now.

On Fri, Aug 25, 2017 at 8:28 AM, Jonathan Holloway  wrote:
>
>
> - Original Message -
>> From: "Kaushal M" 
>> To: "Jonathan Holloway" 
>> Cc: "gluster-infra" 
>> Sent: Thursday, August 17, 2017 10:34:42 AM
>> Subject: Re: [Gluster-infra] glusto-tests docker hub setup
>>
>> On Thu, Aug 17, 2017 at 10:57 AM, Jonathan Holloway
>>  wrote:
>> > Hi Kaushal,
>> >
>> > Per our conversation, I'm requesting a docker hub project be set up for
>> > gluster/glusto-tests.
>> >
>> > Settings:
>> > - check "When active, builds will happen automatically on pushes." with
>> > repo "https://github.com/gluster/glusto-tests"
>> > - Change Dockerfile location to /docker/ for both "master" and "All
>> > branches except master". Defaults for the remaining fields.
>> >
>> > Short Description:
>> > Glusto and the Gluster glusto-tests libraries for Gluster QE.
>> >
>> > Full Description:
>> > This Dockerfile adds Glusto, the Gluster glustolibs libraries, gdeploy (as
>> > required by NFS Ganesha tests), and some other commonly used test tools on
>> > top of a Fedora container to provide the complete environment required to
>> > run Gluster QE tests under Glusto.
>> >
>> > This is currently a minimal implementation. More to come.
>> > ---
>>
>> This is now done.
>>
>> I've created an automated build at
>> https://hub.docker.com/r/gluster/glusto-tests/ and set it to be
>> triggered on changes to https://github.com/gluster/glusto-tests and
>> https://hub.docker.com/_/fedora/ .
>
> Hi Kaushal,
> Thanks! Works great. Both stable and latest branches are being built 
> automatically.
>
> Can we switch from linking to the Fedora docker hub repo to linking the 
> loadtheaccumulator/glusto docker hub repo?
> Glusto and glusto-tests are more tightly coupled than Fedora.
> I originally had the image based on the Glusto image, but had to break that 
> inheritance and rework the order to avoid pip/rpm conflicts with Fedora and 
> other tools.
>
> Cheers,
> Jonathan
>
>>
>> ~kaushal
>>
>> >
>> >
>> > Thanks again!
>> >
>> > Cheers,
>> > Jonathan
>> > ___
>> > Gluster-infra mailing list
>> > Gluster-infra@gluster.org
>> > http://lists.gluster.org/mailman/listinfo/gluster-infra
>>
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] glusto-tests docker hub setup

2017-08-17 Thread Kaushal M
On Thu, Aug 17, 2017 at 10:57 AM, Jonathan Holloway
 wrote:
> Hi Kaushal,
>
> Per our conversation, I'm requesting a docker hub project be set up for
> gluster/glusto-tests.
>
> Settings:
> - check "When active, builds will happen automatically on pushes." with repo 
> "https://github.com/gluster/glusto-tests";
> - Change Dockerfile location to /docker/ for both "master" and "All branches 
> except master". Defaults for the remaining fields.
>
> Short Description:
> Glusto and the Gluster glusto-tests libraries for Gluster QE.
>
> Full Description:
> This Dockerfile adds Glusto, the Gluster glustolibs libraries, gdeploy (as 
> required by NFS Ganesha tests), and some other commonly used test tools on 
> top of a Fedora container to provide the complete environment required to run 
> Gluster QE tests under Glusto.
>
> This is currently a minimal implementation. More to come.
> ---

This is now done.

I've created an automated build at
https://hub.docker.com/r/gluster/glusto-tests/ and set it to be
triggered on changes to https://github.com/gluster/glusto-tests and
https://hub.docker.com/_/fedora/ .

~kaushal

>
>
> Thanks again!
>
> Cheers,
> Jonathan
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-infra
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] Cronjob on jenkins server ?

2017-02-15 Thread Kaushal M
I guess this is related to gluster-swift and swiftonfile. Niels and
Prashanth were doing some jenkins stuff related to swift recently. You
should check with them.

On Wed, Feb 15, 2017 at 2:43 AM, Michael Scherer  wrote:
> Hi,
>
> so while trying to figure why I can't install a F25 as I want on our
> virt host, i stumbled on jenkins being at 100% cpu.
>
> The reason was a make process, started like this:
> 56 * * * * jenkins cd /d/cache && /usr/bin/make
>
> and there is a makefile there like this:
> .PHONY: all
>
> all::
> -rm -rf tmp
> mkdir tmp
> cd tmp && git clone git://pkgs.fedoraproject.org/glusterfs.git
>> /dev/null
> -mv glusterfs glusterfs~ && mv tmp/glusterfs . && rm -rf glusterfs~
>
>
> SWIFT_TARBALL = $(shell grep -v gluster glusterfs/sources | cut -d ' '
> -f 3)
> # SWIFT_MD5SIG = $(shell grep -v gluster glusterfs/sources | cut -d ' '
> -f 1)
> SWIFT_VERS = $(shell echo $(SWIFT_TARBALL) | grep -o 1\.[0-9]\.[0-9])
> FOLSOM_URL =
> https://launchpad.net/swift/folsom/1.7.4/+download/swift-1.7.4.tar.gz
> GRIZZLY_URL =
> https://launchpad.net/swift/grizzly/1.8.0/+download/swift-1.8.0.tar.gz
>
> all::
> -cd tmp && curl -sOL $(FOLSOM_URL)
> -cd tmp && curl -sOL $(GRIZZLY_URL)
> -mv tmp/*.tar.gz .
> rmdir tmp
>
>
> I am not exactly sure why we need to download the swift tarball from
> grizzly, nor why we need to get an up-to-date version of the glusterfs
> package.
>
> Should I remove that cronjob, or should I push it to ansible?
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-infra
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] [release-3.7] Tag v3.7.17 doesn't actually exist in the branch

2016-11-18 Thread Kaushal M
On Fri, Nov 18, 2016 at 3:29 PM, Kaushal M  wrote:
> On Fri, Nov 18, 2016 at 2:04 PM, Kaushal M  wrote:
>> On Fri, Nov 18, 2016 at 1:22 PM, Kaushal M  wrote:
>>> On Fri, Nov 18, 2016 at 1:17 PM, Kaushal M  wrote:
>>>> IMPORTANT: Till this is fixed please stop merging changes into release-3.7
>>>>
>>>> I made a mistake.
>>>>
>>>> When tagging v3.7.17, I noticed that the release-notes at 8b95eba were
>>>> not correct.
>>>> So I corrected it with a new commit, c11131f, directly on top of my
>>>> local release-3.7 branch (I'm sorry that I didn't use gerrit). And I
>>>> tagged this commit as 3.7.17.
>>>>
>>>> Unfortunately, when pushing I just pushed the tags and didn't push my
>>>> updated branch to release-3.7. Because of this I inadvertently created
>>>> a new (virtual) branch.
>>>> Any new changes merged in release-3.7 since have happened on top of
>>>> 8b95eba, which was the HEAD of release-3.7 when I made the mistake. So
>>>> v3.7.17 exists as a virtual branch now.
>>>>
>>>> The current branching for release-3.7 and v3.7.17 looks like this.
>>>>
>>>> | release-3.7 CURRENT HEAD
>>>> |
>>>> | new commits
>>>> |   | c11131f (tag: v3.7.17)
>>>> 8b95eba /
>>>> |
>>>> | old commits
>>>>
>>>> The easiest fix now is to merge release-3.7 HEAD into v3.7.17, and
>>>> push this as the new release-3.7.
>>>>
>>>>  | release-3.7 NEW HEAD
>>>> |release-3.7 CURRENT HEAD -->| Merge commit
>>>> ||
>>>> | new commits*   |
>>>> || c11131f (tag: v3.7.17)
>>>> | 8b95eba ---/
>>>> |
>>>> | old commits
>>>>
>>>> I'd like to avoid doing a rebase because it would lead to changed
>>>> commit-ids, and break any existing clones.
>>>>
>>>> The actual commands I'll be doing on my local system are:
>>>> (NOTE: My local release-3.7 currently has v3.7.17, which is equivalent
>>>> to the 3.7.17 branch in the picture above)
>>>> ```
>>>> $ git fetch origin # fetch latest origin
>>>> $ git checkout release-3.7 # checking out my local release-3.7
>>>> $ git merge origin/release-3.7 # merge updates from origin into my
>>>> local release-3.7. This will create a merge commit.
>>>> $ git push origin release-3.7:release-3.7 # push my local branch to
>>>> remote and point remote release-3.7 to my release-3.7 ie. the merge
>>>> commit.
>>>> ```
>>>>
>>>> After this users with existing clones should get changes done on their
>>>> next `git pull`.
>>>
>>> I've tested this out locally, and it works.
>>>
>>>>
>>>> I'll do this in the next couple of hours, if there are no objections.
>>>>
>>
>> I forgot to give credit. Thanks JoeJulian and gnulnx for noticing
>> this and bringing attention to it.
>
> I'm going ahead with the plan. I've not gotten any bad feedback, and
> JoeJulian and Niels said it looks okay.

This is now done. A merge commit 94ba6c9 was created which merges back
v3.7.17 into release-3.7. The head of release-3.7 now points to this
merge commit.
Future pulls of release-3.7 will not be affected. If anyone faces
issues, please let me know.
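
The fix described above can be replayed in a scratch repository to confirm
that the merge-back makes the tag reachable from the branch. The commit
subjects below are illustrative stand-ins for 8b95eba and c11131f, not the
real gluster history:

```shell
# Replay of the branch/tag divergence and the merge-back fix in a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
# Wrapper so commits work without global git identity configured.
git() { command git -c user.name=demo -c user.email=demo@example.com "$@"; }
git commit -q --allow-empty -m 'base (stands in for 8b95eba)'
git commit -q --allow-empty -m 'release-notes fix (stands in for c11131f)'
git tag v3.7.17                # the tag was pushed, but the branch was not
git reset -q --hard HEAD~1     # branch pointer stays at base: the tag "forks off"
git commit -q --allow-empty -m 'new work merged on top of base'
git merge -q --no-ff -m 'merge v3.7.17 back into release-3.7' v3.7.17
git merge-base --is-ancestor v3.7.17 HEAD && echo 'v3.7.17 is now in the branch'
```

After the merge, `git merge-base --is-ancestor` exits 0, which is exactly the
property the real fix restored for release-3.7.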

>
>>
>>>> ~kaushal
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] [release-3.7] Tag v3.7.17 doesn't actually exist in the branch

2016-11-18 Thread Kaushal M
On Fri, Nov 18, 2016 at 2:04 PM, Kaushal M  wrote:
> On Fri, Nov 18, 2016 at 1:22 PM, Kaushal M  wrote:
>> On Fri, Nov 18, 2016 at 1:17 PM, Kaushal M  wrote:
>>> IMPORTANT: Till this is fixed please stop merging changes into release-3.7
>>>
>>> I made a mistake.
>>>
>>> When tagging v3.7.17, I noticed that the release-notes at 8b95eba were
>>> not correct.
>>> So I corrected it with a new commit, c11131f, directly on top of my
>>> local release-3.7 branch (I'm sorry that I didn't use gerrit). And I
>>> tagged this commit as 3.7.17.
>>>
>>> Unfortunately, when pushing I just pushed the tags and didn't push my
>>> updated branch to release-3.7. Because of this I inadvertently created
>>> a new (virtual) branch.
>>> Any new changes merged in release-3.7 since have happened on top of
>>> 8b95eba, which was the HEAD of release-3.7 when I made the mistake. So
>>> v3.7.17 exists as a virtual branch now.
>>>
>>> The current branching for release-3.7 and v3.7.17 looks like this.
>>>
>>> | release-3.7 CURRENT HEAD
>>> |
>>> | new commits
>>> |   | c11131f (tag: v3.7.17)
>>> 8b95eba /
>>> |
>>> | old commits
>>>
>>> The easiest fix now is to merge release-3.7 HEAD into v3.7.17, and
>>> push this as the new release-3.7.
>>>
>>>  | release-3.7 NEW HEAD
>>> |release-3.7 CURRENT HEAD -->| Merge commit
>>> ||
>>> | new commits*   |
>>> || c11131f (tag: v3.7.17)
>>> | 8b95eba ---/
>>> |
>>> | old commits
>>>
>>> I'd like to avoid doing a rebase because it would lead to changed
>>> commit-ids, and break any existing clones.
>>>
>>> The actual commands I'll be doing on my local system are:
>>> (NOTE: My local release-3.7 currently has v3.7.17, which is equivalent
>>> to the 3.7.17 branch in the picture above)
>>> ```
>>> $ git fetch origin # fetch latest origin
>>> $ git checkout release-3.7 # checking out my local release-3.7
>>> $ git merge origin/release-3.7 # merge updates from origin into my
>>> local release-3.7. This will create a merge commit.
>>> $ git push origin release-3.7:release-3.7 # push my local branch to
>>> remote and point remote release-3.7 to my release-3.7 ie. the merge
>>> commit.
>>> ```
>>>
>>> After this users with existing clones should get changes done on their
>>> next `git pull`.
>>
>> I've tested this out locally, and it works.
>>
>>>
>>> I'll do this in the next couple of hours, if there are no objections.
>>>
>
> I forgot to give credit. Thanks JoeJulian and gnulnx for noticing
> this and bringing attention to it.

I'm going ahead with the plan. I've not gotten any bad feedback, and
JoeJulian and Niels said it looks okay.

>
>>> ~kaushal
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] [release-3.7] Tag v3.7.17 doesn't actually exist in the branch

2016-11-18 Thread Kaushal M
On Fri, Nov 18, 2016 at 1:22 PM, Kaushal M  wrote:
> On Fri, Nov 18, 2016 at 1:17 PM, Kaushal M  wrote:
>> IMPORTANT: Till this is fixed please stop merging changes into release-3.7
>>
>> I made a mistake.
>>
>> When tagging v3.7.17, I noticed that the release-notes at 8b95eba were
>> not correct.
>> So I corrected it with a new commit, c11131f, directly on top of my
>> local release-3.7 branch (I'm sorry that I didn't use gerrit). And I
>> tagged this commit as 3.7.17.
>>
>> Unfortunately, when pushing I just pushed the tags and didn't push my
>> updated branch to release-3.7. Because of this I inadvertently created
>> a new (virtual) branch.
>> Any new changes merged in release-3.7 since have happened on top of
>> 8b95eba, which was the HEAD of release-3.7 when I made the mistake. So
>> v3.7.17 exists as a virtual branch now.
>>
>> The current branching for release-3.7 and v3.7.17 looks like this.
>>
>> | release-3.7 CURRENT HEAD
>> |
>> | new commits
>> |   | c11131f (tag: v3.7.17)
>> 8b95eba /
>> |
>> | old commits
>>
>> The easiest fix now is to merge release-3.7 HEAD into v3.7.17, and
>> push this as the new release-3.7.
>>
>>  | release-3.7 NEW HEAD
>> |release-3.7 CURRENT HEAD -->| Merge commit
>> ||
>> | new commits*   |
>> || c11131f (tag: v3.7.17)
>> | 8b95eba ---/
>> |
>> | old commits
>>
>> I'd like to avoid doing a rebase because it would lead to changed
>> commit-ids, and break any existing clones.
>>
>> The actual commands I'll be doing on my local system are:
>> (NOTE: My local release-3.7 currently has v3.7.17, which is equivalent
>> to the 3.7.17 branch in the picture above)
>> ```
>> $ git fetch origin # fetch latest origin
>> $ git checkout release-3.7 # checking out my local release-3.7
>> $ git merge origin/release-3.7 # merge updates from origin into my
>> local release-3.7. This will create a merge commit.
>> $ git push origin release-3.7:release-3.7 # push my local branch to
>> remote and point remote release-3.7 to my release-3.7 ie. the merge
>> commit.
>> ```
>>
>> After this users with existing clones should get changes done on their
>> next `git pull`.
>
> I've tested this out locally, and it works.
>
>>
>> I'll do this in the next couple of hours, if there are no objections.
>>

I forgot to give credit. Thanks JoeJulian and gnulnx for noticing
this and bringing attention to it.

>> ~kaushal
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] [release-3.7] Tag v3.7.17 doesn't actually exist in the branch

2016-11-17 Thread Kaushal M
On Fri, Nov 18, 2016 at 1:17 PM, Kaushal M  wrote:
> IMPORTANT: Till this is fixed please stop merging changes into release-3.7
>
> I made a mistake.
>
> When tagging v3.7.17, I noticed that the release-notes at 8b95eba were
> not correct.
> So I corrected it with a new commit, c11131f, directly on top of my
> local release-3.7 branch (I'm sorry that I didn't use gerrit). And I
> tagged this commit as 3.7.17.
>
> Unfortunately, when pushing I just pushed the tags and didn't push my
> updated branch to release-3.7. Because of this I inadvertently created
> a new (virtual) branch.
> Any new changes merged in release-3.7 since have happened on top of
> 8b95eba, which was the HEAD of release-3.7 when I made the mistake. So
> v3.7.17 exists as a virtual branch now.
>
> The current branching for release-3.7 and v3.7.17 looks like this.
>
> | release-3.7 CURRENT HEAD
> |
> | new commits
> |   | c11131f (tag: v3.7.17)
> 8b95eba /
> |
> | old commits
>
> The easiest fix now is to merge release-3.7 HEAD into v3.7.17, and
> push this as the new release-3.7.
>
>  | release-3.7 NEW HEAD
> |release-3.7 CURRENT HEAD -->| Merge commit
> ||
> | new commits*   |
> || c11131f (tag: v3.7.17)
> | 8b95eba ---/
> |
> | old commits
>
> I'd like to avoid doing a rebase because it would lead to changed
> commit-ids, and break any existing clones.
>
> The actual commands I'll be doing on my local system are:
> (NOTE: My local release-3.7 currently has v3.7.17, which is equivalent
> to the 3.7.17 branch in the picture above)
> ```
> $ git fetch origin # fetch latest origin
> $ git checkout release-3.7 # checking out my local release-3.7
> $ git merge origin/release-3.7 # merge updates from origin into my
> local release-3.7. This will create a merge commit.
> $ git push origin release-3.7:release-3.7 # push my local branch to
> remote and point remote release-3.7 to my release-3.7 ie. the merge
> commit.
> ```
>
> After this users with existing clones should get changes done on their
> next `git pull`.

I've tested this out locally, and it works.

>
> I'll do this in the next couple of hours, if there are no objections.
>
> ~kaushal
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


[Gluster-infra] [release-3.7] Tag v3.7.17 doesn't actually exist in the branch

2016-11-17 Thread Kaushal M
IMPORTANT: Till this is fixed please stop merging changes into release-3.7

I made a mistake.

When tagging v3.7.17, I noticed that the release-notes at 8b95eba were
not correct.
So I corrected it with a new commit, c11131f, directly on top of my
local release-3.7 branch (I'm sorry that I didn't use gerrit). And I
tagged this commit as 3.7.17.

Unfortunately, when pushing I just pushed the tags and didn't push my
updated branch to release-3.7. Because of this I inadvertently created
a new (virtual) branch.
Any new changes merged in release-3.7 since have happened on top of
8b95eba, which was the HEAD of release-3.7 when I made the mistake. So
v3.7.17 exists as a virtual branch now.

The current branching for release-3.7 and v3.7.17 looks like this.

| release-3.7 CURRENT HEAD
|
| new commits
|   | c11131f (tag: v3.7.17)
8b95eba /
|
| old commits

The easiest fix now is to merge release-3.7 HEAD into v3.7.17, and
push this as the new release-3.7.

 | release-3.7 NEW HEAD
|release-3.7 CURRENT HEAD -->| Merge commit
||
| new commits*   |
|| c11131f (tag: v3.7.17)
| 8b95eba ---/
|
| old commits

I'd like to avoid doing a rebase because it would lead to changed
commit-ids, and break any existing clones.

The actual commands I'll be doing on my local system are:
(NOTE: My local release-3.7 currently has v3.7.17, which is equivalent
to the 3.7.17 branch in the picture above)
```
$ git fetch origin # fetch latest origin
$ git checkout release-3.7 # checking out my local release-3.7
$ git merge origin/release-3.7 # merge updates from origin into my
local release-3.7. This will create a merge commit.
$ git push origin release-3.7:release-3.7 # push my local branch to
remote and point remote release-3.7 to my release-3.7 ie. the merge
commit.
```

After this users with existing clones should get changes done on their
next `git pull`.

I'll do this in the next couple of hours, if there are no objections.

~kaushal
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


[Gluster-infra] Access to gluster org on hub.docker.com

2016-10-19 Thread Kaushal M
Hey Humble,

I want to add a couple of new docker images for GD2 under the gluster
organization in Docker hub. I need to be a part of the
organization/team to do this.

Could you please add me to the group? I'm kshlm on docker hub.

~kaushal
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] centos-5 build failures on mainline

2016-09-21 Thread Kaushal M
On Wed, Sep 21, 2016 at 1:36 PM, Niels de Vos  wrote:
> On Wed, Sep 21, 2016 at 11:53:43AM +0530, Atin Mukherjee wrote:
>> 
>>
>> As of now we don't check for build sanity on RHEL5/centos-5 distros.
>> I believe Gluster still has legacy support for these distros. Here we could
>> either add a glusterfs-devrpms script for el5 for every patch submission or
>> at worst have a nightly build to check the sanity of el5 build on mainline
>> branch to ensure we don't break further?
>
> Currently there is no way to build only the Gluster client part. This is
> a limitation in the autoconf/automake scripts. The server-side requires
> fancy things that are not available (in the required versions) for
> different parts (mainly GlusterD?).
>
> Once we have a "./configure --without-server" or similar, there is no use
> in trying to build for RHEL/CentOS-5.
>
> Niels

This was discussed in today's community meeting.

Kaleb reminded everyone that the community decided to stop supporting
and building packages for EL5 from glusterfs-3.8 [1].

With this in mind, the opinion was that we will be willing to accept
patches that help build GlusterFS on EL5 (only patches like what Niels
has described above).
But we will not be building EL5 packages or running tests on EL5.
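
As a sketch, the client-only build Niels mentions would look something like
this once such a configure switch exists; the --without-server flag name is
taken from his suggestion above and was hypothetical at the time of this
thread:

```shell
# Hypothetical client-only build of glusterfs on EL5; treat the
# --without-server switch as an assumption (it did not exist yet).
./autogen.sh
./configure --without-server   # skip GlusterD and other server-side parts
make
```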

For more information on the discussion, refer to the meeting logs. [2]

~kaushal

[1] https://www.gluster.org/pipermail/gluster-devel/2016-April/048955.html
[2] 
https://meetbot.fedoraproject.org/gluster-meeting/2016-09-21/weekly_community_meeting_21-sep-2016.2016-09-21-11.59.log.html


>
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] cgit on review.gluster.org

2016-09-11 Thread Kaushal M
On Mon, Sep 12, 2016 at 10:26 AM, Nigel Babu  wrote:
> Hello,
>
> I'm contemplating adding cgit or a similar git viewer on review.gluster.org
> so we don't have to push everything to GitHub to view the code online. I'm
> leaning towards cgit. Do we have a preferred git viewer?
>

There are gerrit plugins which do this [1][2]. I suppose those would
be easier and better integrated; you could easily click on commit-SHAs
and get to the commit.
Gitiles is used by the Gerrit instance that hosts Gerrit's own source.


[1] 
https://gerrit-review.googlesource.com/Documentation/config-plugins.html#gitiles
[2] 
https://gerrit-review.googlesource.com/Documentation/config-plugins.html#gitblit
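
For context, plugins like gitiles and gitblit are typically deployed by
dropping the plugin jar into the Gerrit site's plugins/ directory, where
Gerrit loads it. A minimal sketch of that layout, using a scratch directory
and an empty placeholder jar (a real deployment would copy a jar built for
the running Gerrit version):

```shell
# Simulate the drop-in plugin layout; gitiles.jar here is an empty
# placeholder, not a real plugin build.
GERRIT_SITE=$(mktemp -d)
mkdir -p "$GERRIT_SITE/plugins"
touch "$GERRIT_SITE/plugins/gitiles.jar"
ls "$GERRIT_SITE/plugins"
```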


> --
> nigelb
>
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] Zuul?

2016-09-02 Thread Kaushal M
I'd brought up Zuul a long while back. The opinion then was that,
while a gatekeeper is nice, we didn't want to maintain any more infra
over what we had at the time. We tried to make Jenkins itself do the
work, which hasn't succeeded as well as we hoped.

With you dedicated to maintaining the infra, this is a good time to
revisit and investigate Zuul again.

On Fri, Sep 2, 2016 at 3:01 PM, Nigel Babu  wrote:
> Hello,
>
> We've had master breaking twice this week because of when we run
> regressions and how we merge. I think it's time we officially thought of
> moving regressions to a gate controlled by Zuul, with Zuul doing the
> merge onto the correct branch.
>
> This is me throwing the idea about to hear any negative thoughts, before I do
> further investigation. What does everyone think about this?
>
> Note: I've purposefully not CC'd gluster-devel here because I'd rather go to
> the full developer team with a proper plan.
>
> --
> nigelb
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] [Gluster-Maintainers] Request to provide PASS flags to a patch in gerrit

2016-08-31 Thread Kaushal M
I've given the flags. The change can be merged now.

On Wed, Aug 31, 2016 at 5:06 PM, Aravinda  wrote:
> +1
>
> regards
> Aravinda
>
> On Wednesday 31 August 2016 04:23 PM, Raghavendra Talur wrote:
>
> Hi All,
>
> We have a test [1] which is causing hangs in NetBSD. We have not been able
> to debug the issue yet.
> It could be because the bash script does not comply with posix guidelines or
> that there is a bug in the brick code.
>
> However, as we have the 3.9 merge deadline tomorrow, this is causing the
> test pipeline to grow a lot and to need manual intervention.
> I recommend we disable this test for now. I request Kaushal to provide pass
> flags to the patch [2] for faster merge.
>
>
> [1] ./tests/features/lock_revocation.t
> [2] http://review.gluster.org/#/c/15374/
>
>
> Thanks,
> Raghavendra Talur
>
>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
>
>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] Please welcome Worker Ant

2016-08-22 Thread Kaushal M
On Mon, Aug 22, 2016 at 2:46 PM, Michael Scherer  wrote:
> Le lundi 22 août 2016 à 12:41 +0530, Nigel Babu a écrit :
>> Hello,
>>
>> I've just switched the bugzilla authentication on Gerrit today. From today
>> onwards, Gerrit will comment on Bugzilla using a new account:
>>
>> bugzilla-...@gluster.org
>>
>> If you notice any issues, please file a bug against project-infrastructure.
>> This bot will only comment on public bugs, so if you open a review request 
>> when
>> the bug is private, it will fail. This is intended behavior.
>
> You got me curious, in which case are private bugs required ?
>
> Do we need to make sure that the review is private also or something ?

Private bugs are not required. The private bugs we see are Red Hat
Storage bugs that were cloned upstream, which is not right.

>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] Investigating random votes in Gerrit

2016-06-09 Thread Kaushal M
On Thu, Jun 9, 2016 at 3:01 PM, Michael Scherer  wrote:
> Le jeudi 09 juin 2016 à 14:32 +0530, Kaushal M a écrit :
>> On Thu, Jun 9, 2016 at 12:14 PM, Kaushal M  wrote:
>> > On Thu, Jun 9, 2016 at 12:03 PM, Saravanakumar Arumugam
>> >  wrote:
>> >> Hi Kaushal,
>> >>
>> >> One of the patches (http://review.gluster.org/#/c/14653/) is
>> >> failing in NetBSD.
>> >> Its log:
>> >> https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/15624/
>> >>
>> >> But the patch mentioned in the NetBSD log is another
>> >> one (http://review.gluster.org/#/c/13872/).
>> >>
>> >
>> > Yup. We know this is happening, but don't know why yet. I'll keep this
>> > thread updated with any findings I have.
>> >
>> >> Thanks,
>> >> Saravana
>> >>
>> >>
>> >>
>> >> On 06/09/2016 11:52 AM, Kaushal M wrote:
>> >>>
>> >>> In addition to the builder issues we're having, we are also facing
>> >>> problems with jenkins voting/commenting randomly.
>> >>>
>> >>> The comments generally link to older jobs for older patchsets, which
>> >>> were run about 2 months back (beginning of April). For example,
>> >>> https://review.gluster.org/14665 has a netbsd regression +1 vote, from
>> >>> a job run in April for review 13873, and which actually failed.
>> >>>
>> >>> Another observation that I've made is that these fake votes sometimes
>> >>> provide a -1 Verified. Jenkins shouldn't be using this flag anymore.
>> >>>
>> >>> These 2 observations, make me wonder if another jenkins instance is
>> >>> running somewhere, from our old backups possibly? Michael, could this
>> >>> be possible?
>> >>>
>> >>> To check from where these votes/comments were coming from, I tried
>> >>> checking the Gerrit sshd logs. This wasn't helpful, because all logins
>> >>> apparently happen from 127.0.0.1. This is probably some firewall rule
>> >>> that has been setup, post migration, because I see older logs giving
>> >>> proper IPs. I'll require Michael's help with fixing this, if possible.
>> >>>
>> >>> I'll continue to investigate, and update this thread with anything I 
>> >>> find.
>> >>>
>>
>> My guess was right!!
>>
>> This problem should now be fixed, as well as the problem with the builders.
>> The cause for both is the same: our old jenkins server, back from the
>> dead (zombie-jenkins from now on).
>>
>> The hypervisor at iWeb which hosted our services earlier, and which was
>> supposed to be off, had started up about 4 days back. This brought back
>> zombie-jenkins.
>>
>> Zombie-jenkins continued from where it left off around early April. It
>> started getting gerrit events, and started running jobs for them.
>> Zombie-jenkins started numbering jobs from where it left off, and used
>> these numbers when reporting back to gerrit.
>> But these job numbers had already been used by new-jenkins about 2
>> months back when it started.
>> This is why the links in the comments pointed to the old jobs in new-jenkins.
>> I've checked logs on Gerrit (with help from Michael) and can verify
>> that these comments/votes did come from zombie-jenkins's IP.
>>
>> Zombie-jenkins also explains the random build failures being seen on
>> the builders.
>> Zombie-jenkins and new-jenkins each thought they had the slaves to
>> themselves and launched jobs on them,
>> causing jobs to clash sometimes, which resulted in random failures
>> reported in new-jenkins.
>> I'm yet to log in to a slave and verify this, but I'm pretty sure this
>> is what happened.
>>
>> For now, Michael has stopped the iWeb hypervisor and zombie-jenkins.
>> This should stop anymore random comments in Gerrit and failures in Jenkins.
>
> Well, i just stopped the 3 VM, and disabled them on boot (both xen and
> libvirt), so they shouldn't cause much trouble.

I hope something better than fire was used this time, it wasn't
effective last time.

>
>> I'll get Michael (once he's back on Monday) to figure out why
>> zombie-jenkins restarted,
>> and write up a proper postmortem about the issues.
>
> Oh, that part is easy to guess. We did ask iweb to stop the server,
> that was supposed to hap

Re: [Gluster-infra] Regression fails due to infra issue

2016-06-09 Thread Kaushal M
On Wed, Jun 8, 2016 at 4:50 PM, Niels de Vos  wrote:
> On Wed, Jun 08, 2016 at 10:30:37AM +0200, Michael Scherer wrote:
>> Le mercredi 08 juin 2016 à 03:15 +0200, Niels de Vos a écrit :
>> > On Tue, Jun 07, 2016 at 10:29:34AM +0200, Michael Scherer wrote:
>> > > Le mardi 07 juin 2016 à 10:00 +0200, Michael Scherer a écrit :
>> > > > Le mardi 07 juin 2016 à 09:54 +0200, Michael Scherer a écrit :
>> > > > > Le lundi 06 juin 2016 à 21:18 +0200, Niels de Vos a écrit :
>> > > > > > On Mon, Jun 06, 2016 at 09:59:02PM +0530, Nigel Babu wrote:
>> > > > > > > On Mon, Jun 6, 2016 at 12:56 PM, Poornima Gurusiddaiah 
>> > > > > > > 
>> > > > > > > wrote:
>> > > > > > >
>> > > > > > > > Hi,
>> > > > > > > >
>> > > > > > > > There are multiple issues that we saw with regressions lately:
>> > > > > > > >
>> > > > > > > > 1. On certain slaves the regression fails during build and i 
>> > > > > > > > see those on
>> > > > > > > > slave26.cloud.gluster.org, slave25.cloud.gluster.org and may 
>> > > > > > > > be others
>> > > > > > > > also.
>> > > > > > > > Eg:
>> > > > > > > > https://build.gluster.org/job/rackspace-regression-2GB-triggered/21422/console
>> > > > > > > >
>> > > > > > >
>> > > > > > > Are you sure this isn't a code breakage?
>> > > > > >
>> > > > > > No, it really does not look like that.
>> > > > > >
>> > > > > > This is an other one, it seems the testcase got killed for some 
>> > > > > > reason:
>> > > > > >
>> > > > > >   
>> > > > > > https://build.gluster.org/job/rackspace-regression-2GB-triggered/21459/console
>> > > > > >
>> > > > > > It was running on slave25.cloud.gluster.org too... Is it possible 
>> > > > > > that
>> > > > > > there is some watchdog or other configuration checking for 
>> > > > > > resources and
>> > > > > > killing testcases on occasion? The number of slaves where this 
>> > > > > > happens
>> > > > > > seems limited, were these more recently installed/configured?
>> > > > >
>> > > > > So dmesg speak of segfault in yum
>> > > > >
>> > > > > yum[2711] trap invalid opcode ip:7f2efac38d60 sp:7ffd77322658 
>> > > > > error:0 in
>> > > > > libfreeblpriv3.so[7f2efabe6000+72000]
>> > > > >
>> > > > > and
>> > > > > https://access.redhat.com/solutions/2313911
>> > > > >
>> > > > > That's exactly the problem.
>> > > > > [root@slave25 ~]# /usr/bin/curl https://google.com
>> > > > > Illegal instruction
>> > > > >
>> > > > > I propose to remove the builder from rotation while we investigate.
>> > > >
>> > > > Or we can:
>> > > >
>> > > > export NSS_DISABLE_HW_AES=1
>> > > >
>> > > > to work around, cf the bug listed on the article.
>> > > >
>> > > > Not sure the best way to deploy that.
>> > >
>> > > So we are testing the fix on slave25, and if that's what fix the error,
>> > > I will deploy to the whole gluster builders, and investigate for the non
>> > > builders server. That's only for RHEL 6/Centos 6 on rackspace.
>> >
>> > If this does not work, configuring mock to use http (without the 's')
>> > might be an option too. The export variable would probably need to get
>> > set inside the mock chroot. It can possibly be done in
>> > /etc/mock/site-defaults.cfg.
>> >
>> > For the normal test cases, placing the environment variable (and maybe
>> > NSS_DISABLE_HW_GCM=1 too?) in the global bashrc might be sufficient.
>>
>> We used /etc/environment, and so far, no one complained about side
>> effects.
>>
>> (I mean, this did fix stuff, right ? right ??? )
>
> I dont know. This was the last job that failed due to the bug:
>
>   https://build.gluster.org/job/glusterfs-devrpms/16978/console
>
> There are more recent ones on slave25 that failed due to unclear reasons
> as well, not sure if that is caused by the same problem:
>
>   https://build.gluster.org/computer/slave25.cloud.gluster.org/builds
>
> Thanks,
> Niels

The random build failures should now be fixed (or at least not happen anymore).
Please refer to the mail-thread 'Investigating random votes in Gerrit'
for more information.
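For reference, the /etc/environment workaround discussed above can be sketched as below. This is a dry-run version that writes to a scratch file; on a real builder the target would be /etc/environment itself, and NSS_DISABLE_HW_GCM is included per the suggestion earlier in the thread:

```shell
# Dry-run sketch of the workaround: append the NSS variables (if absent) to a
# scratch copy; on a builder the target file would be /etc/environment.
ENV_FILE=$(mktemp)
for var in NSS_DISABLE_HW_AES NSS_DISABLE_HW_GCM; do
    grep -q "^${var}=" "$ENV_FILE" || echo "${var}=1" >> "$ENV_FILE"
done
cat "$ENV_FILE"
```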

~kaushal

>
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] Investigating random votes in Gerrit

2016-06-09 Thread Kaushal M
On Thu, Jun 9, 2016 at 12:14 PM, Kaushal M  wrote:
> On Thu, Jun 9, 2016 at 12:03 PM, Saravanakumar Arumugam
>  wrote:
>> Hi Kaushal,
>>
>> One of the patches (http://review.gluster.org/#/c/14653/) is
>> failing in NetBSD.
>> Its log:
>> https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/15624/
>>
>> But the patch mentioned in the NetBSD job is another
>> one (http://review.gluster.org/#/c/13872/).
>>
>
> Yup. We know this is happening, but don't know why yet. I'll keep this
> thread updated with any findings I have.
>
>> Thanks,
>> Saravana
>>
>>
>>
>> On 06/09/2016 11:52 AM, Kaushal M wrote:
>>>
>>> In addition to the builder issues we're having, we are also facing
>>> problems with jenkins voting/commenting randomly.
>>>
>>> The comments generally link to older jobs for older patchsets, which
>>> were run about 2 months back (beginning of April). For example,
>>> https://review.gluster.org/14665 has a netbsd regression +1 vote, from
>>> a job run in April for review 13873, and which actually failed.
>>>
>>> Another observation that I've made is that these fake votes sometimes
>>> provide a -1 Verified. Jenkins shouldn't be using this flag anymore.
>>>
>>> These 2 observations make me wonder if another jenkins instance is
>>> running somewhere, from our old backups possibly? Michael, could this
>>> be possible?
>>>
>>> To check where these votes/comments were coming from, I tried
>>> checking the Gerrit sshd logs. This wasn't helpful, because all logins
>>> apparently happen from 127.0.0.1. This is probably some firewall rule
>>> that has been setup, post migration, because I see older logs giving
>>> proper IPs. I'll require Michael's help with fixing this, if possible.
>>>
>>> I'll continue to investigate, and update this thread with anything I find.
>>>

My guess was right!!

This problem should now be fixed, as well as the problem with the builders.
The cause for both is the same: our old jenkins server, back from the
dead (zombie-jenkins from now on).

The hypervisor in iWeb which hosted our services earlier, which was
supposed to be off,
had started up about 4 days back. This brought back zombie-jenkins.

Zombie-jenkins continued from where it left off around early April. It
started getting gerrit events, and started running jobs for them.
Zombie-jenkins started numbering jobs from where it left off, and used
these numbers when reporting back to gerrit.
But these job numbers had already been used by new-jenkins about 2
months back when it started.
This is why the links in the comments pointed to the old jobs in new-jenkins.
I've checked logs on Gerrit (with help from Michael) and can verify
that these comments/votes did come from zombie-jenkins's IP.

Zombie-jenkins also explains the random build failures being seen on
the builders.
Zombie-jenkins and new-jenkins each thought they had the slaves to
themselves and launched jobs on them,
causing jobs to clash sometimes, which resulted in random failures
reported in new-jenkins.
I'm yet to log in to a slave and verify this, but I'm pretty sure this
is what happened.

For now, Michael has stopped the iWeb hypervisor and zombie-jenkins.
This should stop any more random comments in Gerrit and failures in Jenkins.

I'll get Michael (once he's back on Monday) to figure out why
zombie-jenkins restarted,
and write up a proper postmortem about the issues.

>>> ~kaushal
>>> ___
>>> Gluster-infra mailing list
>>> Gluster-infra@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-infra
>>
>>
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] Investigating random votes in Gerrit

2016-06-08 Thread Kaushal M
On Thu, Jun 9, 2016 at 12:03 PM, Saravanakumar Arumugam
 wrote:
> Hi Kaushal,
>
> One of the patches (http://review.gluster.org/#/c/14653/) is
> failing in NetBSD.
> Its log:
> https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/15624/
>
> But the patch mentioned in the NetBSD job is another
> one (http://review.gluster.org/#/c/13872/).
>

Yup. We know this is happening, but don't know why yet. I'll keep this
thread updated with any findings I have.

> Thanks,
> Saravana
>
>
>
> On 06/09/2016 11:52 AM, Kaushal M wrote:
>>
>> In addition to the builder issues we're having, we are also facing
>> problems with jenkins voting/commenting randomly.
>>
>> The comments generally link to older jobs for older patchsets, which
>> were run about 2 months back (beginning of April). For example,
>> https://review.gluster.org/14665 has a netbsd regression +1 vote, from
>> a job run in April for review 13873, and which actually failed.
>>
>> Another observation that I've made is that these fake votes sometimes
>> provide a -1 Verified. Jenkins shouldn't be using this flag anymore.
>>
>> These 2 observations make me wonder if another jenkins instance is
>> running somewhere, from our old backups possibly? Michael, could this
>> be possible?
>>
>> To check where these votes/comments were coming from, I tried
>> checking the Gerrit sshd logs. This wasn't helpful, because all logins
>> apparently happen from 127.0.0.1. This is probably some firewall rule
>> that has been setup, post migration, because I see older logs giving
>> proper IPs. I'll require Michael's help with fixing this, if possible.
>>
>> I'll continue to investigate, and update this thread with anything I find.
>>
>> ~kaushal
>> ___
>> Gluster-infra mailing list
>> Gluster-infra@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-infra
>
>
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


[Gluster-infra] Investigating random votes in Gerrit

2016-06-08 Thread Kaushal M
In addition to the builder issues we're having, we are also facing
problems with jenkins voting/commenting randomly.

The comments generally link to older jobs for older patchsets, which
were run about 2 months back (beginning of April). For example,
https://review.gluster.org/14665 has a netbsd regression +1 vote, from
a job run in April for review 13873, and which actually failed.

Another observation that I've made is that these fake votes sometimes
provide a -1 Verified. Jenkins shouldn't be using this flag anymore.

These 2 observations make me wonder if another jenkins instance is
running somewhere, from our old backups possibly? Michael, could this
be possible?

To check where these votes/comments were coming from, I tried
checking the Gerrit sshd logs. This wasn't helpful, because all logins
apparently happen from 127.0.0.1. This is probably some firewall rule
that has been setup, post migration, because I see older logs giving
proper IPs. I'll require Michael's help with fixing this, if possible.

I'll continue to investigate, and update this thread with anything I find.

~kaushal
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] Access to review.gluster.org

2016-05-19 Thread Kaushal M
I've added Nigel's keys to formicary.gluster.org, from where he can
get onto review.gluster.org.

On Wed, May 18, 2016 at 4:12 PM, Amye Scavarda  wrote:
> Nigel will be helping us out all over, so review.gluster.org is a good place
> to start.
> Not sure if the upgrade will be something we can do in a short period of
> time, but getting better debugging on gerrit will be helpful.
> - amye
>
>
> On Wed, May 18, 2016 at 3:44 PM, Kaushal M  wrote:
>>
>> Hey Nigel,
>>
>> I came to know that you needed access to review.gluster.org, so that
>> you can get started figuring out how we can go about upgrading gerrit.
>>
>> Could you share your ssh-keys so that they can be added at the relevant
>> places?
>>
>> Thanks,
>> Kaushal
>
>
>
>
> --
> Amye Scavarda | a...@redhat.com | Gluster Community Lead
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


[Gluster-infra] Access to review.gluster.org

2016-05-18 Thread Kaushal M
Hey Nigel,

I came to know that you needed access to review.gluster.org, so that
you can get started figuring out how we can go about upgrading gerrit.

Could you share your ssh-keys so that they can be added at the relevant places?

Thanks,
Kaushal
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] NetBSD machine required to debug a core

2016-05-12 Thread Kaushal M
On Thu, May 12, 2016 at 6:02 PM, Raghavendra Talur  wrote:
> slave26.cloud.gluster.org is taken down for you.
> please update here when done.

Isn't this a CentOS machine?

>
> Thanks,
> Raghavendra Talur
>
>
> On Wed, May 11, 2016 at 2:49 PM, Ravishankar N 
> wrote:
>>
>> Hello,
>>
>> I would need a NetBSD machine to debug a crash
>> https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/16749/consoleFull
>>
>> The test that is failing is a .t that I have written as a part of the
>> patch against which the regression was triggered.
>>
>> Thanks,
>>
>> Ravi
>>
>> ___
>> Gluster-infra mailing list
>> Gluster-infra@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-infra
>
>
>
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] r.g.o seems to be inaccessible

2016-05-03 Thread Kaushal M
I did it in the morning at home. Forgot to reply back here.

On Tue, May 3, 2016 at 3:45 PM, Michael Scherer  wrote:
> Le mardi 03 mai 2016 à 09:31 +0530, Atin Mukherjee a écrit :
>
> Seems to be working. If someone fixed it, please tell.
>
>
>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] [Gluster-devel] Requesting for NetBSD setup

2016-04-29 Thread Kaushal M
On Fri, Apr 29, 2016 at 12:35 PM, Emmanuel Dreyfus  wrote:
> On Fri, Apr 29, 2016 at 01:28:53AM -0400, Karthik Subrahmanya wrote:
>> I would like to ask for a NetBSD setup
>
> nbslave7[4gh] are disabled in Jenkins right now. They are labeled
> "Disconnected by kaushal", but I don't know why. Once it is confirmed
> that they are not already used for testing, you could pick one.
>
> I still do not know who the password guardian at Red Hat is, though.

I often disconnect machines that aren't in a working state, and reboot them.
If I've left something in the disconnected state, most likely those
machines didn't get back to a working state after the reboot.
Or it could be that I just forgot.

>
> --
> Emmanuel Dreyfus
> m...@netbsd.org
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] Download.gluster.org 27 April 2016 postmortem

2016-04-27 Thread Kaushal M
On Wed, Apr 27, 2016 at 5:26 PM, Kaushal M  wrote:
> On Wed, Apr 27, 2016 at 5:21 PM, Michael Scherer  wrote:
>> Le mercredi 27 avril 2016 à 14:39 +0300, Eyal Edri a écrit :
>>> Excellent post-mortem!
>>>
>>> Do you think its worth adding mirrors to gluster repos like oVirt is doing?
>>> [1]
>>>
>>> [1] http://ovirt-infra-docs.readthedocs.org/en/latest/General/Mirror.html
>>
>> That could be a solution.
>>
>> But we have the resources to host a mirror ourselves in the DC; it just
>> needs an IP address, and a migration of servers (which is taking an awful
>> lot of time to happen :/ ).
>>
>> One issue we would have with a mirror is on the download stats.
>>
>> This and the need to have a mirrorlist; not sure how that's done on the
>> dnf/yum side these days.
>>
>
> Someone recently offered to mirror download.gluster.org (I need to dig
> archives to find out who exactly). Didn't we take up their offer?

The offer was made from nluug.nl [1]. The last mail in the thread on
Mar 10 [2] said the offer was still open, and we just needed to set up
the sync.

Michael, did you just lose track of this with your long PTOs?

[1]: https://www.gluster.org/pipermail/gluster-infra/2016-February/001875.html
[2]: https://www.gluster.org/pipermail/gluster-infra/2016-March/001978.html
>
>>
>> --
>> Michael Scherer
>> Sysadmin, Community Infrastructure and Platform, OSAS
>>
>>
>>
>> ___
>> Gluster-infra mailing list
>> Gluster-infra@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-infra
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] Download.gluster.org 27 April 2016 postmortem

2016-04-27 Thread Kaushal M
On Wed, Apr 27, 2016 at 5:21 PM, Michael Scherer  wrote:
> Le mercredi 27 avril 2016 à 14:39 +0300, Eyal Edri a écrit :
>> Excellent post-mortem!
>>
>> Do you think its worth adding mirrors to gluster repos like oVirt is doing?
>> [1]
>>
>> [1] http://ovirt-infra-docs.readthedocs.org/en/latest/General/Mirror.html
>
> That could be a solution.
>
> But we have the resources to host a mirror ourselves in the DC; it just
> needs an IP address, and a migration of servers (which is taking an awful
> lot of time to happen :/ ).
>
> One issue we would have with a mirror is on the download stats.
>
> This and the need to have a mirrorlist; not sure how that's done on the
> dnf/yum side these days.
>

Someone recently offered to mirror download.gluster.org (I need to dig
archives to find out who exactly). Didn't we take up their offer?

>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] https access with gerrit

2016-04-22 Thread Kaushal M
You need to use the ssh:// URIs to push. The https:// URIs just allow git clone.

The ssh:// URIs should be in the format
`ssh://<username>@review.gluster.org/glusterfs.git`
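A minimal sketch of switching only the push URL over to ssh while keeping the https URL for fetches (the username below is a placeholder, not a real account):

```shell
# Build the ssh push URL from a placeholder username, then show the git
# command that would set it as the push-only URL for the 'origin' remote.
GERRIT_USER=someuser
PUSH_URL="ssh://${GERRIT_USER}@review.gluster.org/glusterfs.git"
echo "git remote set-url --push origin ${PUSH_URL}"
```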



On Fri, Apr 22, 2016 at 11:47 PM, Vijay Bellur  wrote:
> Hey All,
>
> Whenever I try to push changes (via rfc.sh or git push) from a https clone
> of glusterfs repository, it fails as:
>
> remote: Unauthorized
> fatal: Authentication failed for 'https://review.gluster.org/glusterfs.git/'
>
> Does this work successfully for anybody?
>
> Thanks,
> Vijay
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] r.g.o down?

2016-04-21 Thread Kaushal M
On Thu, Apr 21, 2016 at 2:58 PM, Michael Scherer  wrote:
> Le jeudi 21 avril 2016 à 14:29 +0530, Kaushal M a écrit :
>> On Thu, Apr 21, 2016 at 1:14 PM, Michael Scherer  wrote:
>> > Le jeudi 21 avril 2016 à 12:47 +0530, Kaushal M a écrit :
>> I restarted Gerrit; that should have fixed it for now.
>> >>
>> >> As this has happened several times since the migration, I thought
>> >> something wasn't done correctly.
>> >> Turns out the gerrit VM is a really-(really)-low end VM with just 1
>> >> CPU core and 2GBs of RAM!!
>> >>
>> Michael, was this intentional or did you somehow overlook this?
>> >
>> > I likely overlooked. I reused the same type of VM as the one for
>> >
>> >>  IIRC,
>> >> before the migration the VM was running with a dual-core CPU at least
>> >> and more RAM.
>> >
>> > 4G of ram and 2 cpus. So I propose to go to 6G and still 2 cpu for now ?
>>
>> Fine with me. We always have room to grow if required.
>>
>> >
>> >> The hypervisor has more than enough resources to run a beefier VM. I
>> >> was thinking of bumping the VM to a 4 Cores and 8 GB of RAM (or more).
>> >> I'll do this if there are no objections.
>> >
>> > This requires a reboot however, is this ok ?
>>
>> Doing this shouldn't take longer than ~5 minutes. We could do it any time.
>> It would have been so much easier if libvirt allowed hot-plugging cpu
>> and ram like disks.
>
> Seems to be supported in openstack, and in version we use on RHEL 7.
>
> And virsh has a command setvcpus with a --live switch, and the same for
> the ram.

TIL! I've been trying to do this all the time by editing the VM xml
config. I believe virt-manager does the same, because it fails
to add cpus live as well.
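For the record, a hedged sketch of the live-resize commands mentioned above. The domain name 'gerrit' and the sizes are illustrative placeholders, and `virsh setmem` takes KiB, hence the conversion; the helper only prints the commands rather than running them:

```shell
# Print (don't run) the virsh live-resize commands for a domain.
# Domain name, vCPU count and memory size are illustrative placeholders.
live_resize_cmds() {
    local dom="$1" vcpus="$2" mem_gib="$3"
    echo "virsh setvcpus ${dom} ${vcpus} --live"
    # setmem expects KiB: GiB * 1024 * 1024
    echo "virsh setmem ${dom} $(( mem_gib * 1024 * 1024 )) --live"
}
live_resize_cmds gerrit 2 6
```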

>
> Shall I try later today ?
> (worst case, it crash and we reboot)

Sure.

> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] r.g.o down?

2016-04-21 Thread Kaushal M
On Thu, Apr 21, 2016 at 1:14 PM, Michael Scherer  wrote:
> Le jeudi 21 avril 2016 à 12:47 +0530, Kaushal M a écrit :
>> I restarted Gerrit; that should have fixed it for now.
>>
>> As this has happened several times since the migration, I thought
>> something wasn't done correctly.
>> Turns out the gerrit VM is a really-(really)-low end VM with just 1
>> CPU core and 2GBs of RAM!!
>>
>> Michael, was this intentional or did you somehow overlook this?
>
> I likely overlooked. I reused the same type of VM as the one for
>
>>  IIRC,
>> before the migration the VM was running with a dual-core CPU at least
>> and more RAM.
>
> 4G of ram and 2 cpus. So I propose to go to 6G and still 2 cpu for now ?

Fine with me. We always have room to grow if required.

>
>> The hypervisor has more than enough resources to run a beefier VM. I
>> was thinking of bumping the VM to a 4 Cores and 8 GB of RAM (or more).
>> I'll do this if there are no objections.
>
> This requires a reboot however, is this ok ?

Doing this shouldn't take longer than ~5 minutes. We could do it any time.
It would have been so much easier if libvirt allowed hot-plugging cpu
and ram like disks.

> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] r.g.o down?

2016-04-21 Thread Kaushal M
I restarted Gerrit; that should have fixed it for now.

As this has happened several times since the migration, I thought
something wasn't done correctly.
Turns out the gerrit VM is a really-(really)-low end VM with just 1
CPU core and 2GBs of RAM!!

Michael, was this intentional or did you somehow overlook this? IIRC,
before the migration the VM was running with a dual-core CPU at least
and more RAM.

The hypervisor has more than enough resources to run a beefier VM. I
was thinking of bumping the VM to a 4 Cores and 8 GB of RAM (or more).
I'll do this if there are no objections.

~kaushal


On Thu, Apr 21, 2016 at 12:36 PM, Krutika Dhananjay  wrote:
> Working now.
>
> -Krutika
>
> On Thu, Apr 21, 2016 at 12:19 PM, Krutika Dhananjay 
> wrote:
>>
>> I get this when i try to access it: 500 Internal server error
>>
>> -Krutika
>
>
>
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] review.gluster.org down?

2016-04-10 Thread Kaushal M
Gerrit lost its database connection for some reason. I've restarted
gerrit and it's working now.

On Mon, Apr 11, 2016 at 9:01 AM, Vijay Bellur  wrote:
> Loading through HTTP errors out with "Internal Server Error".
>
> -Vijay
>
>
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] [Gluster-devel] Smoke results voting

2016-04-05 Thread Kaushal M
On 5 Apr 2016 3:43 p.m., "Kaushal M"  wrote:
>
> So gerrit voting seems to be working again. It required all the jobs
> to have the same trigger configuration.
>
> Smoke vote was given for [1] a test change that I posted for review.
> Now just need to check if it works for changes which already have
> regressions run on them.
> I'll be following [2] to see if it works.

Working now. If anyone notices anything amiss, please let gluster-infra
know of it.

>
> [1] https://review.gluster.org/13898
> [2] https://review.gluster.org/13869
>
> > On Tue, Apr 5, 2016 at 12:46 PM, Prasanna Kalever  wrote:
> > On Tue, Apr 5, 2016 at 12:39 PM, Kaushal M  wrote:
> >> On Tue, Apr 5, 2016 at 11:26 AM, Kaushal M  wrote:
> >>>> On Tue, Apr 5, 2016 at 11:10 AM, Atin Mukherjee  wrote:
> >>>>
> >>>>
> >>>> On 04/05/2016 11:06 AM, Kaushal M wrote:
> >>>>> I did some changes so that all smoke jobs (linux, *bsd smoke jobs,
> >>>>> devrpm jobs etc) are triggered for `recheck smoke`.
> >>>>>
> >>>>> The collated results are being reported back and the 'Smoke' flag is
> >>>>> being set. But sometimes, if regression jobs have been already run
on
> >>>>> the patchset, jenkins is collating those results as well.
> >>>>> When this happens, a '-Verified' flag is being set.
> >>>>>
> >>>>> Jenkins and its gerrit plugin collate results for jobs launched by the
> >>>>> same event. The regression and smoke jobs should be triggered for
> >>>>> different events,
> >>>>> but for some reason jenkins is assuming that they're being triggered
> >>>>> by the same event and collating all of them together.
> >>>>>
> >>>>> I need some time to figure out why this is happening, and fix it.
> >>>> Is there a way that this doesn't impact the merging as until we get all
> >>>> the positive votes, web interface doesn't provide a submit button.
> >>>
> >>> This would require manually running the gerrit ssh command as the
> >>> build user to set the flag, which requires sudo access on
> >>> build.gluster.org.
> >>
> >> Alternatively, administrators can spoof other users. Administrators can do
> >> `ssh @review.gluster.org  suexec --as
> >> jenk...@build.gluster.org -- gerrit review --label Smoke=+1
> >> ,`
> >
> > As Kaushal mentioned above
> >
> > 1. users are free to comment "recheck smoke" (which will  trigger smoke)
> > 2. only after success on step 1, administrators will get +1 done with 'ssh ...'
> >
> >
> > Thanks,
> > --
> > Prasanna
> >
> >
> >
> >>
> >> I'm still figuring out how to solve it properly though.
> >>
> >>>
> >>>
> >>>>>
> >>>>> ~kaushal
> >>>>>
> >>>>>
> >>>>> On Mon, Apr 4, 2016 at 10:39 PM, Prasanna Kalever <pkale...@redhat.com> wrote:
> >>>>>> On Mon, Apr 4, 2016 at 9:58 PM, Atin Mukherjee
> >>>>>>  wrote:
> >>>>>>> Did anyone notice that for few of the patches smoke results are not voted
> >>>>>>> back?
> >>>>>>>
> >>>>>>> http://review.gluster.org/#/c/13869 is one of them.
> >>>>>>
> >>>>>> +1
> >>>>>>
> >>>>>> Here is another one http://review.gluster.org/#/c/11083/
> >>>>>> I have also re-triggered it with "recheck smoke", after which
> >>>>>> it return success "http://build.gluster.org/job/smoke/26373/ : SUCCESS"
> >>>>>>
> >>>>>> but again failed to report back ...
> >>>>>>
> >>>>>> --
> >>>>>> Prasanna
> >>>>>>
> >>>>>>
> >>>>>>>
> >>>>>>> -Atin
> >>>>>>> Sent from one plus one
> >>>>>>>
> >>>>>>>
> >>>>>>> ___
> >>>>>>> Gluster-infra mailing list
> >>>>>>> Gluster-infra@gluster.org
> >>>>>>> http://www.gluster.org/mailman/listinfo/gluster-infra
> >>>>>> ___
> >>>>>> Gluster-devel mailing list
> >>>>>> gluster-de...@gluster.org
> >>>>>> http://www.gluster.org/mailman/listinfo/gluster-devel
> >>>>> ___
> >>>>> Gluster-devel mailing list
> >>>>> gluster-de...@gluster.org
> >>>>> http://www.gluster.org/mailman/listinfo/gluster-devel
> >>>>>
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] [Gluster-devel] Smoke results voting

2016-04-05 Thread Kaushal M
So gerrit voting seems to be working again. It required all the jobs
to have the same trigger configuration.

Smoke vote was given for [1] a test change that I posted for review.
Now just need to check if it works for changes which already have
regressions run on them.
I'll be following [2] to see if it works.

[1] https://review.gluster.org/13898
[2] https://review.gluster.org/13869

On Tue, Apr 5, 2016 at 12:46 PM, Prasanna Kalever  wrote:
> On Tue, Apr 5, 2016 at 12:39 PM, Kaushal M  wrote:
>> On Tue, Apr 5, 2016 at 11:26 AM, Kaushal M  wrote:
>>> On Tue, Apr 5, 2016 at 11:10 AM, Atin Mukherjee  wrote:
>>>>
>>>>
>>>> On 04/05/2016 11:06 AM, Kaushal M wrote:
>>>>> I did some changes so that all smoke jobs (linux, *bsd smoke jobs,
>>>>> devrpm jobs etc) are triggered for `recheck smoke`.
>>>>>
>>>>> The collated results are being reported back and the 'Smoke' flag is
>>>>> being set. But sometimes, if regression jobs have been already run on
>>>>> the patchset, jenkins is collating those results as well.
>>>>> When this happens, a '-Verified' flag is being set.
>>>>>
>>>>> Jenkins and its gerrit plugin collate results for jobs launched by the
>>>>> same event. The regression and smoke jobs should be triggered for
>>>>> different events,
>>>>> but for some reason jenkins is assuming that they're being triggered
>>>>> by the same event and collating all of them together.
>>>>>
>>>>> I need some time to figure out why this is happening, and fix it.
>>>> Is there a way that this doesn't impact the merging as until we get all
>>>> the positive votes, web interface doesn't provide a submit button.
>>>
>>> This would require manually running the gerrit ssh command as the
>>> build user to set the flag, which requires sudo access on
>>> build.gluster.org.
>>
>> Alternatively, administrators can spoof other users. Administrators can do
>> `ssh @review.gluster.org  suexec --as
>> jenk...@build.gluster.org -- gerrit review --label Smoke=+1
>> ,`
>
> As Kaushal mentioned above
>
> 1. users are free to comment "recheck smoke" (which will  trigger smoke)
> 2. only after success on step 1, administrators will get +1 done with 'ssh 
> ...'
>
>
> Thanks,
> --
> Prasanna
>
>
>
>>
>> I'm still figuring out how to solve it properly though.
>>
>>>
>>>
>>>>>
>>>>> ~kaushal
>>>>>
>>>>>
>>>>> On Mon, Apr 4, 2016 at 10:39 PM, Prasanna Kalever  
>>>>> wrote:
>>>>>> On Mon, Apr 4, 2016 at 9:58 PM, Atin Mukherjee
>>>>>>  wrote:
>>>>>>> Did anyone notice that for few of the patches smoke results are not 
>>>>>>> voted
>>>>>>> back?
>>>>>>>
>>>>>>> http://review.gluster.org/#/c/13869 is one of them.
>>>>>>
>>>>>> +1
>>>>>>
>>>>>> Here is another one http://review.gluster.org/#/c/11083/
>>>>>> I have also re-triggered it with "recheck smoke", after which
>>>>>> it return success "http://build.gluster.org/job/smoke/26373/ : SUCCESS"
>>>>>>
>>>>>> but again failed to report back ...
>>>>>>
>>>>>> --
>>>>>> Prasanna
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> -Atin
>>>>>>> Sent from one plus one
>>>>>>>
>>>>>>>
>>>>>>> ___
>>>>>>> Gluster-infra mailing list
>>>>>>> Gluster-infra@gluster.org
>>>>>>> http://www.gluster.org/mailman/listinfo/gluster-infra
>>>>>> ___
>>>>>> Gluster-devel mailing list
>>>>>> gluster-de...@gluster.org
>>>>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>>>> ___
>>>>> Gluster-devel mailing list
>>>>> gluster-de...@gluster.org
>>>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>>>>
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] [Gluster-devel] Smoke results voting

2016-04-05 Thread Kaushal M
On Tue, Apr 5, 2016 at 11:26 AM, Kaushal M  wrote:
> On Tue, Apr 5, 2016 at 11:10 AM, Atin Mukherjee  wrote:
>>
>>
>> On 04/05/2016 11:06 AM, Kaushal M wrote:
>>> I did some changes so that all smoke jobs (linux, *bsd smoke jobs,
>>> devrpm jobs etc) are triggered for `recheck smoke`.
>>>
>>> The collated results are being reported back and the 'Smoke' flag is
>>> being set. But sometimes, if regression jobs have been already run on
>>> the patchset, jenkins is collating those results as well.
>>> When this happens, a '-Verified' flag is being set.
>>>
>>> Jenkins and its gerrit plugin collate results for jobs launched by the
>>> same event. The regression and smoke jobs should be triggered for
>>> different events,
>>> but for some reason jenkins is assuming that they're being triggered
>>> by the same event and collating all of them together.
>>>
>>> I need some time to figure out why this is happening, and fix it.
>> Is there a way to keep this from blocking merges? Until we get all
>> the positive votes, the web interface doesn't provide a submit button.
>
> This would require manually running the gerrit ssh command as the
> build user to set the flag, which requires sudo access on
> build.gluster.org.

Alternatively, administrators can spoof other users by running
`ssh @review.gluster.org  suexec --as
jenk...@build.gluster.org -- gerrit review --label Smoke=+1
,`

I'm still figuring out how to solve it properly though.
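Filled in with illustrative values, the whole invocation would look something like the following sketch (the change number, patchset, and both account names are hypothetical; the command is only echoed here rather than executed):

```shell
# Sketch of the admin workaround described above. CHANGE, PATCHSET and
# both account names are illustrative; the ssh command is echoed as a
# dry run instead of being executed.
CHANGE=13869
PATCHSET=2
CMD="ssh admin@review.gluster.org suexec --as jenkins@build.gluster.org \
-- gerrit review --label Smoke=+1 ${CHANGE},${PATCHSET}"
echo "$CMD"
```

Running the real command needs Gerrit administrator rights, since `suexec` acts on behalf of another account.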

>
>
>>>
>>> ~kaushal
>>>
>>>
>>> On Mon, Apr 4, 2016 at 10:39 PM, Prasanna Kalever  
>>> wrote:
>>>> On Mon, Apr 4, 2016 at 9:58 PM, Atin Mukherjee
>>>>  wrote:
>>>>> Did anyone notice that for few of the patches smoke results are not voted
>>>>> back?
>>>>>
>>>>> http://review.gluster.org/#/c/13869 is one of them.
>>>>
>>>> +1
>>>>
>>>> Here is another one http://review.gluster.org/#/c/11083/
>>>> I have also re-triggered it with "recheck smoke", after which
>>>> it returned success "http://build.gluster.org/job/smoke/26373/ : SUCCESS"
>>>>
>>>> but again failed to report back ...
>>>>
>>>> --
>>>> Prasanna
>>>>
>>>>
>>>>>
>>>>> -Atin
>>>>> Sent from one plus one
>>>>>
>>>>>
>>>>> ___
>>>>> Gluster-infra mailing list
>>>>> Gluster-infra@gluster.org
>>>>> http://www.gluster.org/mailman/listinfo/gluster-infra
>>>> ___
>>>> Gluster-devel mailing list
>>>> gluster-de...@gluster.org
>>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>> ___
>>> Gluster-devel mailing list
>>> gluster-de...@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>>


Re: [Gluster-infra] [Gluster-devel] Smoke results voting

2016-04-04 Thread Kaushal M
On Tue, Apr 5, 2016 at 11:10 AM, Atin Mukherjee  wrote:
>
>
> On 04/05/2016 11:06 AM, Kaushal M wrote:
>> I made some changes so that all smoke jobs (linux, *bsd smoke jobs,
>> devrpm jobs, etc.) are triggered for `recheck smoke`.
>>
>> The collated results are being reported back and the 'Smoke' flag is
>> being set. But sometimes, if regression jobs have been already run on
>> the patchset, jenkins is collating those results as well.
>> When this happens, a '-Verified' flag is being set.
>>
>> Jenkins and its gerrit plugin collate results for jobs launched by the
>> same event. The regression and smoke jobs should be triggered for
>> different events,
>> but for some reason jenkins is assuming that they're being triggered
>> by the same event and collating all of them together.
>>
>> I need some time to figure out why this is happening, and fix it.
> Is there a way to keep this from blocking merges? Until we get all
> the positive votes, the web interface doesn't provide a submit button.

This would require manually running the gerrit ssh command as the
build user to set the flag, which requires sudo access on
build.gluster.org.


>>
>> ~kaushal
>>
>>
>> On Mon, Apr 4, 2016 at 10:39 PM, Prasanna Kalever  
>> wrote:
>>> On Mon, Apr 4, 2016 at 9:58 PM, Atin Mukherjee
>>>  wrote:
>>>> Did anyone notice that for few of the patches smoke results are not voted
>>>> back?
>>>>
>>>> http://review.gluster.org/#/c/13869 is one of them.
>>>
>>> +1
>>>
>>> Here is another one http://review.gluster.org/#/c/11083/
>>> I have also re-triggered it with "recheck smoke", after which
>>> it returned success "http://build.gluster.org/job/smoke/26373/ : SUCCESS"
>>>
>>> but again failed to report back ...
>>>
>>> --
>>> Prasanna
>>>
>>>
>>>>
>>>> -Atin
>>>> Sent from one plus one
>>>>
>>>>
>>>> ___
>>>> Gluster-infra mailing list
>>>> Gluster-infra@gluster.org
>>>> http://www.gluster.org/mailman/listinfo/gluster-infra
>>> ___
>>> Gluster-devel mailing list
>>> gluster-de...@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-devel
>> ___
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-devel
>>


Re: [Gluster-infra] [Gluster-devel] Smoke results voting

2016-04-04 Thread Kaushal M
I made some changes so that all smoke jobs (linux, *bsd smoke jobs,
devrpm jobs, etc.) are triggered for `recheck smoke`.

The collated results are being reported back and the 'Smoke' flag is
being set. But sometimes, if regression jobs have been already run on
the patchset, jenkins is collating those results as well.
When this happens, a '-Verified' flag is being set.

Jenkins and its gerrit plugin collate results for jobs launched by the
same event. The regression and smoke jobs should be triggered for
different events,
but for some reason jenkins is assuming that they're being triggered
by the same event and collating all of them together.

I need some time to figure out why this is happening, and fix it.

~kaushal


On Mon, Apr 4, 2016 at 10:39 PM, Prasanna Kalever  wrote:
> On Mon, Apr 4, 2016 at 9:58 PM, Atin Mukherjee
>  wrote:
>> Did anyone notice that for few of the patches smoke results are not voted
>> back?
>>
>> http://review.gluster.org/#/c/13869 is one of them.
>
> +1
>
> Here is another one http://review.gluster.org/#/c/11083/
> I have also re-triggered it with "recheck smoke", after which
> it returned success "http://build.gluster.org/job/smoke/26373/ : SUCCESS"
>
> but again failed to report back ...
>
> --
> Prasanna
>
>
>>
>> -Atin
>> Sent from one plus one
>>
>>
>> ___
>> Gluster-infra mailing list
>> Gluster-infra@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-infra
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-infra] Jenkins migrated to the RH DC

2016-03-31 Thread Kaushal M
This was even smoother than the gerrit move!
Planning it out helps.

Awesome job!


On Thu, Mar 31, 2016 at 6:20 PM, Michael Scherer  wrote:
> Hi,
>
> so, without any big problem, Jenkins was migrated from Iweb to the RDU 2
> DC of RH, running as a VM on formicary.gluster.org.
>
> So far, everything seems to be ok, except that jenkins lost its
> queue of jobs (despite documentation and bugs saying it shouldn't), but
> Kaushal has a list and is looking at retriggering the important ones.
>
> The DNS was changed and should be propagated by now.
>
> So if anything weird occurs, please tell us.
>
> We will be looking the coming weeks to do more change (like upgrading
> jenkins, using a real db, setting more protection on jenkins side, etc,
> etc), so stay tuned.
>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] Signing in on review.g.o returns a "Server Error"

2016-03-27 Thread Kaushal M
This was a problem with Java not correctly picking up the CA certs bundle
(at least that's what Google says).
Didn't find much help on how to solve it specifically for Gerrit, so I just
restarted Gerrit.
And now login is working again!


On Mon, Mar 28, 2016 at 12:00 PM, Kaushal M  wrote:
> I checked again in an incognito window and it failed. Seems to be a
> reverse-proxying failure. I'll check.
>
> On Mon, Mar 28, 2016 at 11:57 AM, Prasanna Kalever  
> wrote:
>> On Mon, Mar 28, 2016 at 11:56 AM, Anoop C S  wrote:
>>>
>>> On Sun, 2016-03-27 at 08:57 +0200, Niels de Vos wrote:
>>> > Hi,
>>> >
>>> > I'm trying to sign in on review.gluster.org, but when I click the
>>> > "Sign-in with GitHub" link, I get an almost empty page with
>>> >
>>> > Server Error
>>> >
>>> > on it.
>>> >
>>> > It would be nice if someone can fix that :)
>>> >
>>>
>>> I think this issue still exists. I am unable to log in at
>>> review.gluster.org via github. Can somebody please take a look?
>>
>> +1
>>
>> --
>> Prasanna
>>
>>
>>>
>>> > Thanks!
>>> > Niels
>>> > ___
>>> > Gluster-infra mailing list
>>> > Gluster-infra@gluster.org
>>> > http://www.gluster.org/mailman/listinfo/gluster-infra
>>> ___
>>> Gluster-infra mailing list
>>> Gluster-infra@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-infra
>> ___
>> Gluster-infra mailing list
>> Gluster-infra@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] Signing in on review.g.o returns a "Server Error"

2016-03-27 Thread Kaushal M
I checked again in an incognito window and it failed. Seems to be a
reverse-proxying failure. I'll check.

On Mon, Mar 28, 2016 at 11:57 AM, Prasanna Kalever  wrote:
> On Mon, Mar 28, 2016 at 11:56 AM, Anoop C S  wrote:
>>
>> On Sun, 2016-03-27 at 08:57 +0200, Niels de Vos wrote:
>> > Hi,
>> >
>> > I'm trying to sign in on review.gluster.org, but when I click the
>> > "Sign-in with GitHub" link, I get an almost empty page with
>> >
>> > Server Error
>> >
>> > on it.
>> >
>> > It would be nice if someone can fix that :)
>> >
>>
>> I think this issue still exists. I am unable to log in at
>> review.gluster.org via github. Can somebody please take a look?
>
> +1
>
> --
> Prasanna
>
>
>>
>> > Thanks!
>> > Niels
>> > ___
>> > Gluster-infra mailing list
>> > Gluster-infra@gluster.org
>> > http://www.gluster.org/mailman/listinfo/gluster-infra
>> ___
>> Gluster-infra mailing list
>> Gluster-infra@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-infra
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] Signing in on review.g.o returns a "Server Error"

2016-03-27 Thread Kaushal M
What exact issue are you seeing? I'm logged in right now, and don't
see any problem.


On Mon, Mar 28, 2016 at 11:56 AM, Anoop C S  wrote:
> On Sun, 2016-03-27 at 08:57 +0200, Niels de Vos wrote:
>> Hi,
>>
>> I'm trying to sign in on review.gluster.org, but when I click the
>> "Sign-in with GitHub" link, I get an almost empty page with
>>
>> Server Error
>>
>> on it.
>>
>> It would be nice if someone can fix that :)
>>
>
> I think this issue still exists. I am unable to log in at
> review.gluster.org via github. Can somebody please take a look?
>
>> Thanks!
>> Niels
>> ___
>> Gluster-infra mailing list
>> Gluster-infra@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-infra
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] r.g.o is down

2016-03-24 Thread Kaushal M
Should be back online now. Can you check and let me know?

On Fri, Mar 25, 2016 at 11:49 AM, Anuradha Talur  wrote:
> Hi,
>
> review.gluster.org times out "Server error" on access.
> Can anyone please look into it?
>
> --
> Thanks,
> Anuradha.
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] build.gluster.org, no route to host!

2016-03-19 Thread Kaushal M
On Wed, Mar 16, 2016 at 4:10 PM, Michael Scherer  wrote:
> On Wednesday, 16 March 2016 at 11:23 +0100, Michael Scherer wrote:
>> On Wednesday, 16 March 2016 at 10:51 +0100, Niels de Vos wrote:
>> > On Wed, Mar 16, 2016 at 11:54:00AM +0530, Kaushal M wrote:
>> > > This is fixed now.
>> > >
>> > > This had happened previously when we migrated Gerrit. By default, the
>> > > nameserver of the Jenkins VM is set to its hypervisor. The hypervisor
>> > > is configured to return the local IP for the old gerrit VM that was
>> > > also running on it.
>> > > I've updated the nameserver to use the Google DNS server, as I'd done
>> > > before. Now I need to find a way to make this permanent, and
>> > > withstand reboots.
>> >
>> > Add the nameserver to /etc/sysconfig/network ? IIRC the valid options
>> > for this file are stored in the /usr/share/doc files from the sysvinit
>> > package.
>>
>> For now, we have 127.0.0.1 on
>> # grep DNS  /etc/sysconfig/network-scripts/ifcfg-eth0
>> DNS1="127.0.0.1"
>>
>> we can add DNS2="8.8.8.8".
>>
>> IIRC, we tried to use a local resolver to see if that could solve the
>> network problem, but it didn't work.
>
> I also added PEERDNS="no", to disable the dns given by dhcp (ie, the
> libvirt host)

This is something new I learnt today! I didn't know that changes in
/etc/sysconfig/network-scripts took effect with NetworkManager.
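Pulling the thread together, the persistent settings would look roughly like this (a sketch of the ifcfg file quoted above; the values are the ones discussed):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (relevant lines only)
DNS1="127.0.0.1"   # local resolver
DNS2="8.8.8.8"     # fallback: Google public DNS
PEERDNS="no"       # ignore DNS servers pushed by DHCP (the libvirt host)
```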

>
>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] [Gluster-devel] Gerrit trigger connection to jenkins not working

2016-03-15 Thread Kaushal M
Fixed now. Check my reply to Kaleb's earlier mail for the details.

On Wed, Mar 16, 2016 at 11:07 AM, Raghavendra Talur  wrote:
> Root cause: I see the following error when I do "test connection"
> Connection error : com.jcraft.jsch.JSchException:
> java.net.NoRouteToHostException: No route to host
>
>
> On Wed, Mar 16, 2016 at 10:57 AM, Raghavendra Talur 
> wrote:
>>
>> Hi All,
>>
>> I see that jenkins isn't getting any triggers from gerrit. This includes
>> patch set update trigger, so basically no patch is automatically tested. I
>> am looking into it. Will update in this thread.
>>
>> For anything critical and urgent, please mail in gluster-devel asking
>> jenkins maintainers to trigger build from you.
>>
>> Thanks,
>> Raghavendra Talur
>
>
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-infra] build.gluster.org, no route to host!

2016-03-15 Thread Kaushal M
This is fixed now.

This had happened previously when we migrated Gerrit. By default, the
nameserver of the Jenkins VM is set to its hypervisor. The hypervisor
is configured to return the local IP for the old gerrit VM that was
also running on it.
I've updated the nameserver to use the Google DNS server, as I'd done
before. Now I need to find a way to make this permanent, and
withstand reboots.

~kaushal

On Wed, Mar 16, 2016 at 8:53 AM, Amye Scavarda  wrote:
> Adding misc to this thread for visibility.
>
> On Tue, Mar 15, 2016 at 6:21 PM, Kaleb Keithley  wrote:
>>
>> AFAIK no jobs have run since build.gluster.org came back on line.
>>
>> Attempts to query and schedule jobs give
>>com.jcraft.jsch.JSchException: java.net.NoRouteToHostException: No
>> route to host
>> ___
>> Gluster-infra mailing list
>> Gluster-infra@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-infra
>
>
>
>
> --
> Amye Scavarda | a...@redhat.com | Gluster Community Lead
>
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] Requesting CentOS regression machine to debug!

2016-03-13 Thread Kaushal M
Kotresh,
Are you done with this machine? I'll add it back to the pool.

~kaushal

On Thu, Mar 10, 2016 at 7:26 PM, Kaushal M  wrote:
> You can use slave23.cloud.gluster.org. I've marked it offline in
> Jenkins for now. Let us know once you're done, so that it can be
> re-enabled.
>
> ~kaushal
>
> On Thu, Mar 10, 2016 at 6:27 PM, Kotresh Hiremath Ravishankar
>  wrote:
>> Hi,
>>
>> Please provide me CentOS regression machine offline to debug the regression 
>> failure.
>>
>> Thanks and Regards,
>> Kotresh H R
>>
>> ___
>> Gluster-infra mailing list
>> Gluster-infra@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] Gerrit down, disk full and more complication

2016-03-13 Thread Kaushal M
On Mon, Mar 14, 2016 at 9:35 AM, Niels de Vos  wrote:
> On Sat, Mar 12, 2016 at 12:23:52PM +0100, Michael Scherer wrote:
>> On Saturday, 12 March 2016 at 12:08 +0100, Michael Scherer wrote:
>> > On Saturday, 12 March 2016 at 10:11 +0100, Michael Scherer wrote:
>> > > Hi,
>> > >
>> > > so ndevos pinged me on irc this morning (around 7) since gerrit was
>> > > down. I poked around it once I woke up, and found that the disk was
>> > > full. However, after adding a new disk to the volume group, the web
>> > > interface also refused to start.
>> > >
>> > > I am currently looking into it, but there's no ETA until I find what is wrong.
>> >
>> > TLDR: So gerrit is back.
>> >
>> > Root cause was a wrong interaction between gerrit config and apache.
>> >
>> > For a reason I do not quite understand (since we restart gerrit quite
>> > often), this:
>> > canonicalWebUrl = https://review.gluster.org/
>> > stopped working after the reboot.
>> >
>> > instead, you have to have this:
>> > canonicalWebUrl = http://review.gluster.org/
>> >
>> > for a reason that I do not understand, this resulted in a redirection
>> > loop with // url at the end.
>> >
>> > I tried various variations (such as removing the /, adding /foo,
>> > or /foo/), but the only working way was to tell gerrit to advertise
>> > the http url.
>> >
>> > But, apache redirect to https fine, and do not complain at all, so maybe
>> > that's a issue on the proxyPass side.
>>
>> So this is interesting, 'cause this works:
>> canonicalWebUrl = https://review.gluster.org/

This is my fault. Sorry.

I changed it sometime late last week (Thursday I think), so that the
URL returned when pushing a change, and the URLs used in the
commit-message when a change is merged used `https`.
I thought that this setting only affected the UI, and didn't expect it
to cause problems.
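For reference, the combination that ended up working (quoted below) can be sketched as a gerrit.config fragment; the section layout follows Gerrit's documented `gerrit.canonicalWebUrl` and `httpd.listenUrl` settings, and the local port is illustrative:

```
# etc/gerrit.config -- sketch of the working combination
[gerrit]
    canonicalWebUrl = https://review.gluster.org/
[httpd]
    # proxy-https tells Gerrit it sits behind a TLS-terminating proxy
    listenUrl = proxy-https://127.0.0.1:8080/
```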

>> if you have proxy-https in the ListenUrl setting.
>>
>> And it seems it made gerrit a bit faster (potentially due to lack of
>> redirection), even if maybe that's just the restart.
>
> Interesting, thanks for looking into this!
>
> Niels
>
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra

Re: [Gluster-infra] Requesting CentOS regression machine to debug!

2016-03-10 Thread Kaushal M
You can use slave23.cloud.gluster.org. I've marked it offline in
Jenkins for now. Let us know once you're done, so that it can be
re-enabled.

~kaushal

On Thu, Mar 10, 2016 at 6:27 PM, Kotresh Hiremath Ravishankar
 wrote:
> Hi,
>
> Please provide me CentOS regression machine offline to debug the regression 
> failure.
>
> Thanks and Regards,
> Kotresh H R
>
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] [Gluster-Maintainers] Need merge access to Gluster repo

2016-03-09 Thread Kaushal M
I've added you to the maintainers lists now.

On Wed, Mar 9, 2016 at 11:15 AM, Vijaikumar Mallikarjuna
 wrote:
> Hi,
>
> I will be the maintainer for quota and marker component and the same is
> updated in the Maintainer's List.
> Could you please provide merge access to the Gluster repo?
>
> Thanks,
> Vijay
>
>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>


Re: [Gluster-infra] Restore github mirror sync for libgfapi-python and gluster-swift

2016-03-08 Thread Kaushal M
This happened because of the changes we did with ssh-keys used for
replication. I'd forgotten that this had been done, and was expecting
pushes to happen automatically.

Earlier, a single user's key (the key of one of the admins of the
gluster organization, Avati initially) was used to push changes from
gerrit to github.
This enabled all repositories in gerrit to be automatically pushed to
their replicas in github ( in gerrit would be automatically pushed
to github/).
This key got changed/deleted, which caused replication to fail a while back.

To avoid tying pushes to a single user like this, a separate key was
set up for each github repo, and gerrit was set up to use the
respective keys to push to each repo.
This requires manual replication configuration. This change was
implemented only for the glusterfs and glusterfs-specs repos.
All other repos require manual configuration.

We'll try to come up with a better way to do this replication. If we
can't, we'll need to manually set up the keys and encryption for each
repo.
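As a rough sketch of what the per-repo setup involves (the host alias, key path, and remote name below are illustrative, not the actual Gluster configuration):

```
# ~gerrit/.ssh/config -- one host alias per GitHub repo, each with its own key
Host github-glusterfs
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa.glusterfs

# replication.config -- push the gerrit repo to its GitHub replica
[remote "glusterfs"]
    url = github-glusterfs:gluster/glusterfs.git
    push = +refs/heads/*:refs/heads/*
    push = +refs/tags/*:refs/tags/*
```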

~kaushal

On Wed, Mar 9, 2016 at 12:22 PM, Prashanth Pai  wrote:
> Hi,
>
> The github mirror[1] sync is broken for libgfapi-python and gluster-swift 
> repos[2].
> Can someone please take a look ?
>
> Thanks.
>
> [1]: Github mirrors:
> https://github.com/gluster/libgfapi-python
> https://github.com/gluster/gluster-swift
>
> [2]: Gerrit repos:
> git://review.gluster.org/libgfapi-python
> git://review.gluster.org/gluster-swift
>
> Regards,
>  -Prashanth Pai
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] Request for SSH access to review.gluster.org

2016-03-01 Thread Kaushal M
On Mon, Feb 29, 2016 at 9:42 PM, Michael Scherer  wrote:
> On Wednesday, 17 February 2016 at 13:01 +0530, Kaushal M wrote:
>> Since the migration to the RH data center, SSH access to
>> review.gluster.org has been revoked for everyone, including me.
>>
>> Michael, can you please restore access for me? So that I can help with
>> any problems, when you're not online. My ssh keys are attached.
>
> So for some reason, freeipa admin password is not working (I suspect the
> hack i had to do with keytab disabled it...), so I have added your key
> in the root keyring for the time being.
>
> To access review.gluster.org:
> # ssh r...@formicary.gluster.org

This times out. I can't even ping this system.

> # virsh console dev.gluster.org
>
> then login on the console.
>
>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>

Re: [Gluster-infra] Giving Sahina temporary owner rights to github.com/gluster

2016-02-22 Thread Kaushal M
On Tue, Feb 23, 2016 at 12:35 PM, Kaushal M  wrote:
> Thanks for the update Shina.

I meant Sahina. Sorry for the typo.

> I've revoked your owner rights, but I've made you the maintainer for
> gluster-nagios Github group, so that you can manage the repositories.
>
> On Tue, Feb 23, 2016 at 12:29 PM, Sahina Bose  wrote:
>> Thanks, Kaushal! Repositories have been transferred.
>>
>>
>> On 02/23/2016 12:09 PM, Kaushal M wrote:
>>>
>>> Hi all,
>>>
>>> Sahina would like to transfer the gluster nagios repositories to the
>>> Gluster organization in Github. I'm temporarily giving Sahina owner
>>> rights so she can perform the transfer.
>>>
>>> Regards,
>>> Kaushal
>>
>>


Re: [Gluster-infra] Giving Sahina temporary owner rights to github.com/gluster

2016-02-22 Thread Kaushal M
Thanks for the update Shina.
I've revoked your owner rights, but I've made you the maintainer for
gluster-nagios Github group, so that you can manage the repositories.

On Tue, Feb 23, 2016 at 12:29 PM, Sahina Bose  wrote:
> Thanks, Kaushal! Repositories have been transferred.
>
>
> On 02/23/2016 12:09 PM, Kaushal M wrote:
>>
>> Hi all,
>>
>> Sahina would like to transfer the gluster nagios repositories to the
>> Gluster organization in Github. I'm temporarily giving Sahina owner
>> rights so she can perform the transfer.
>>
>> Regards,
>> Kaushal
>
>


[Gluster-infra] Giving Sahina temporary owner rights to github.com/gluster

2016-02-22 Thread Kaushal M
Hi all,

Sahina would like to transfer the gluster nagios repositories to the
Gluster organization in Github. I'm temporarily giving Sahina owner
rights so she can perform the transfer.

Regards,
Kaushal


[Gluster-infra] review.gluster.com no longer available, use review.gluster.org

2016-02-18 Thread Kaushal M
After the migration, it turns out that we only updated the DNS records
for review.gluster.org, and not those for review.gluster.com.

This has led to git pull/push and rfc.sh failures for people who had
cloned their repositories from review.gluster.com. Anyone who's still
using review.gluster.com git remotes needs to update the remote to
point to review.gluster.org. [1]
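In practice the update is a one-liner; here is a sketch, shown against a scratch repository so it can be dry-run (the remote name and ssh URL form are illustrative, so match whatever your clone uses):

```shell
# Demonstrated in a throwaway repo; in a real clone only the set-url
# line is needed. "origin" and the URL form are examples.
cd "$(mktemp -d)" && git init -q .
git remote add origin ssh://user@review.gluster.com/glusterfs.git   # old remote
git remote set-url origin ssh://user@review.gluster.org/glusterfs.git
git remote -v   # now lists the review.gluster.org URL
```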

~kaushal

PS: We will not be enabling review.gluster.com again. It's been over 4
years since the transition to the .org domain, which is plenty of time
for everyone to have moved to the new domain.

[1] `man git-remote` for instructions on how to change remote url.


[Gluster-infra] Request for SSH access to review.gluster.org

2016-02-16 Thread Kaushal M
Since the migration to the RH data center, SSH access to
review.gluster.org has been revoked for everyone, including me.

Michael, can you please restore access for me? So that I can help with
any problems, when you're not online. My ssh keys are attached.

~kaushal


kaushal-rgo-rsa.pub
Description: application/vnd.ms-publisher


kaushal-rgo-ed25519.pub
Description: application/vnd.ms-publisher

Re: [Gluster-infra] Migrating Gerrit and Jenkins out of iWeb

2016-02-16 Thread Kaushal M
On Feb 12, 2016 10:50 PM, "Kaushal M"  wrote:
>
> On Fri, Feb 12, 2016 at 10:47 PM, Michael Scherer  wrote:
> > On Friday, 12 February 2016 at 12:16 +0100, Michael Scherer wrote:
> >> On Wednesday, 10 February 2016 at 19:18 +0530, Kaushal M wrote:
> >> > On Wed, Feb 10, 2016 at 1:53 PM, Michael Scherer  wrote:
> >> > > On Wednesday, 10 February 2016 at 11:44 +0530, Kaushal M wrote:
> >> > >> On Sun, Feb 7, 2016 at 2:34 PM, Michael Scherer  wrote:
> >> > >> > On Saturday, 6 February 2016 at 18:17 -0500, Vijay Bellur wrote:
> >> > >> >> I think starting around 0900 UTC on Friday of next week (12th Feb)
> >> > >> >> should be possible. We should be done with 3.7.8 before that and can
> >> > >> >> afford a bit of downtime then. In case any assistance is needed post the
> >> > >> >> migration, we can have folks around the clock to help.
> >> > >> >>
> >> > >> >> If migration fails for an unforeseen reason, would we be able to
> >> > >> >> rollback and maintain status quo?
> >> > >> >
> >> > >> > Yes. Worst case, I think people would just redo a few reviews
> >> > >> > or push patches again.
> >> > >> >
> >> > >> > Also, since gerrit is critical, I wonder what is the support of
> >> > >> > gerrit for slaves and replication.
> >> > >>
> >> > >> Michael are you okay with the time? If you are, I think we should
> >> > >> announce the migration.
> >> > >
> >> > > I am ok.
> >> >
> >> > So as discussed in the community meeting, we will be announcing the
> >> > migration and downtime. Vijay, you said you required the exact
> >> > schedule? This is what I expect
> >> >
> >> > Friday 12th Feb 2016
> >> >
> >> > 0900 UTC : build.gluster.org and review.gluster.org are taken down and
> >> > migration begins.
> >> > <1-2 hrs?>: Michael copies over data onto the RH community infra and
> >> > sets up the VMs.
> >> > : The DNS records are updated, and some time is for it to
> >> > propagate.
> >> > <3hr?>: Verify everything is working well (I can help with this). We'd
> >> > possibly need to run a regression job, so this will take longest I
> >> > think.
> >> >
> >> > 1700UTC : We announce the finish of the migration and open
> >> > the services back up. If migration failed, we bring the existing
> >> > servers back on, and continue on.
> >>
> >> So, for people wanting to know, the migration has started.
> >>
> >> 1) gerrit
> >> --
> >> gerrit seems to be ok, it has a new IP 66.187.224.201
> >>
> >> people who used to have access there need to contact me so I can
> >> explain/create required account on the new virt host.
> >>
> >> the old VM is shutdown for now. I will make a backup copy of the disk.
> >>
> >> 2) jenkins
> >> --
> >>
>> so the preparation of the migration didn't work as well, so we have to
>> copy the VM when offline, and then prepare it later (ie, import in
> >> libvirt, adjust network, etc). So jenkins master is offline (sorry manu
> >> for your tcpdump monitoring), we are making a copy of the disk that is
> >> gonna take between 3 and 14h (most likely < 4h, but rsync estimation is
> >> moving a lot), and then restart like before.
> >
> > So it turns out I was wrong, and the copy is taking a much longer time.
> > Despite trying to investigate why, it didn't copy everything.
> >
> > So the new plan is to stop the VM, make a local copy of the disk, start
> > the VM, and then copy that one over to the new server, prepare the new
> > host (firewall, public ip, etc), and sync from the old VM to the new
> > host.
> >
> > All is ok with this plan ?
>
> Sounds good to me. No need to wait for the copy to happen and the
> downtime should be minimal like with gerrit.
>

Can we schedule this minimal downtime sometime tomorrow, or if possible
tonight itself? Since we don't require a long downtime, we can do this
anytime we like.

I'd like to get the migration done by the end of this week (if possible)
before Michael goes on vacation.

> > --
> > Michael Scherer
> > Sysadmin, Community Infrastructure and Platform, OSAS
> >
> >

[Gluster-infra] FYI: resolv.conf needs to be updated on build.gluster.org after reboot

2016-02-12 Thread Kaushal M
The resolv.conf generated by NetworkManager on build.gluster.org sets
the hypervisor as the nameserver. The hypervisor always returns the
old, local ip for review.gluster.org. Because of this jenkins will not
be able to connect to the migrated gerrit instance.

To fix this, I've modified /etc/resolv.conf to use the google dns
servers, and restarted jenkins.

The exact steps I followed,

1. Added `nameserver 8.8.8.8` as the first nameserver in /etc/resolv.conf
2. Restarted jenkins.

This needs to be done after every restart of build.g.o until it is migrated
(so it probably won't ever need to be done again).

If I find a way to make this stick through reboots, I'll update here.
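The steps above can be sketched as follows. This is a safe local demonstration on a copy of the file, not /etc/resolv.conf itself; the hypervisor IP used here is a made-up example, and the persistence options in the trailing comments are assumptions about what could be done, not what was actually done on build.gluster.org.

```shell
# Work on a local copy so the steps can be run without touching the real file.
RESOLV=resolv.conf.demo
printf 'search gluster.org\nnameserver 192.168.122.1\n' > "$RESOLV"

# Prepend the public resolver so it is consulted first.
printf 'nameserver 8.8.8.8\n' | cat - "$RESOLV" > "$RESOLV.tmp" && mv "$RESOLV.tmp" "$RESOLV"
cat "$RESOLV"

# To make this survive reboots on the real host, one option (an assumption,
# not what was done here) is to stop NetworkManager from rewriting the file:
# set dns=none under [main] in /etc/NetworkManager/NetworkManager.conf, or
# PEERDNS=no in the interface's ifcfg file.
```

Resolvers are tried in file order, so placing the public server first is what makes the stale hypervisor entry harmless.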

~kaushal


Re: [Gluster-infra] Migrating Gerrit and Jenkins out of iWeb

2016-02-12 Thread Kaushal M
On Fri, Feb 12, 2016 at 10:47 PM, Michael Scherer  wrote:
> Le vendredi 12 février 2016 à 12:16 +0100, Michael Scherer a écrit :
>> Le mercredi 10 février 2016 à 19:18 +0530, Kaushal M a écrit :
>> > On Wed, Feb 10, 2016 at 1:53 PM, Michael Scherer  
>> > wrote:
>> > > Le mercredi 10 février 2016 à 11:44 +0530, Kaushal M a écrit :
>> > >> On Sun, Feb 7, 2016 at 2:34 PM, Michael Scherer  
>> > >> wrote:
>> > >> > Le samedi 06 février 2016 à 18:17 -0500, Vijay Bellur a écrit :
>> > >> >> I think starting around 0900 UTC on Friday of next week (12th Feb)
>> > >> >> should be possible. We should be done with 3.7.8 before that and can
>> > >> >> afford a bit of downtime then. In case any assistance is needed post 
>> > >> >> the
>> > >> >> migration, we can have folks around the clock to help.
>> > >> >>
>> > >> >> If migration fails for an unforeseen reason, would we be able to
>> > >> >> rollback and maintain status quo?
>> > >> >
>> > >> > Yes. Worst case, I think people would just redo a few reviews or push
>> > >> > again patch.
>> > >> >
>> > >> > Also, since gerrit is critical, I wonder what is the support of gerrit
>> > >> > for slaves and replication.
>> > >>
>> > >> Michael are you okay with the time? If you are, I think we should
>> > >> announce the migration.
>> > >
>> > > I am ok.
>> >
>> > So as discussed in the community meeting, we will be announcing the
>> > migration and downtime. Vijay, you said you required the exact
>> > schedule? This is what I expect
>> >
>> > Friday 12th Feb 2016
>> >
>> > 0900 UTC : build.gluster.org and review.gluster.org are taken down and
>> > migration begins.
>> > <1-2 hrs?>: Michael copies over data onto the RH community infra and
>> > sets up the VMs.
>> > : The DNS records are updated, and some time is allowed for them to propagate.
>> > <3hr?>: Verify everything is working well (I can help with this). We'd
>> > possibly need to run a regression job, so this will take longest I
>> > think.
>> >
>> > 1700UTC : We announce the finish of the migration and open
>> > the services back up. If migration failed, we bring the existing
>> > servers back on, and continue on.
>>
>> So, for people wanting to know, the migration has started.
>>
>> 1) gerrit
>> --
>> gerrit seems to be ok, it has a new IP 66.187.224.201
>>
>> people who used to have access there need to contact me so I can
>> explain/create required account on the new virt host.
>>
>> the old VM is shutdown for now. I will make a backup copy of the disk.
>>
>> 2) jenkins
>> --
>>
>> so the preparation of the migration didn't work as well, so we have to
>> copy the VM when offline, and then prepare it later ( ie, import in
>> libvirt, adjust network, etc). So jenkins master is offline (sorry manu
>> for your tcpdump monitoring), we are making a copy of the disk that is
>> gonna take between 3 and 14h (most likely < 4h, but rsync estimation is
>> moving a lot), and then restart like before.
>
> So it turns out I was wrong, and the copy is taking a much longer time.
> Despite me trying to investigate why, it didn't copy everything.
>
> So the new plan is to stop the VM, make a local copy of the disk, start
> the VM, and then copy that one over the new server, prepare for the new
> host (firewall, public ip, etc), and sync from the old VM to the new
> host.
>
> All is ok with this plan ?

Sounds good to me. No need to wait for the copy to happen and the
downtime should be minimal like with gerrit.

> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>

Re: [Gluster-infra] Migrating Gerrit and Jenkins out of iWeb

2016-02-10 Thread Kaushal M
On Wed, Feb 10, 2016 at 1:53 PM, Michael Scherer  wrote:
> Le mercredi 10 février 2016 à 11:44 +0530, Kaushal M a écrit :
>> On Sun, Feb 7, 2016 at 2:34 PM, Michael Scherer  wrote:
>> > Le samedi 06 février 2016 à 18:17 -0500, Vijay Bellur a écrit :
>> >> I think starting around 0900 UTC on Friday of next week (12th Feb)
>> >> should be possible. We should be done with 3.7.8 before that and can
>> >> afford a bit of downtime then. In case any assistance is needed post the
>> >> migration, we can have folks around the clock to help.
>> >>
>> >> If migration fails for an unforeseen reason, would we be able to
>> >> rollback and maintain status quo?
>> >
>> > Yes. Worst case, I think people would just redo a few reviews or push
>> > again patch.
>> >
>> > Also, since gerrit is critical, I wonder what is the support of gerrit
>> > for slaves and replication.
>>
>> Michael are you okay with the time? If you are, I think we should
>> announce the migration.
>
> I am ok.

So as discussed in the community meeting, we will be announcing the
migration and downtime. Vijay, you said you required the exact
schedule? This is what I expect:

Friday 12th Feb 2016

0900 UTC : build.gluster.org and review.gluster.org are taken down and
migration begins.
<1-2 hrs?>: Michael copies over data onto the RH community infra and
sets up the VMs.
: The DNS records are updated, and some time is allowed for them to propagate.
<3hr?>: Verify everything is working well (I can help with this). We'd
possibly need to run a regression job, so this will take longest I
think.

1700 UTC: We announce the completion of the migration and open
the services back up. If migration failed, we bring the existing
servers back on, and continue on.


Also, Michael, are you happy migrating just one server or both? In case
you can only do one server, we'd like Jenkins to be migrated out.


> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>

Re: [Gluster-infra] [Gluster-Maintainers] Gluster community and external test infra

2016-02-10 Thread Kaushal M
On Wed, Feb 10, 2016 at 6:29 PM, Raghavendra Talur  wrote:
> Hi,
>
> I read about openstack community and how they allow external test infra to
> report back on the main CI.
>
> Read more here:
> https://www.mirantis.com/blog/setting-external-openstack-testing-system-part-1/
>
>
> I have setup a jenkins server in RedHat blr lab which has 27 slaves.
> Also, the jenkins server is intelligent enough to distribute tests across 27
> nodes for the same patch set so that the run completes in ~30mins.
>
>
> Now onto the questions:
> 1. Would gluster community accept a +1 from this jenkins, if we follow
> similar policy like openstack does with external test infra?
> 2. Would RedHat allow a such thing if it is promised that trigger won't
> happen from Gerrit server but manually from within the VPN of RedHat?
>
> NOTE: This mail is sent to maintain...@gluster.org keeping in view the
> security concerns before asking opinion from wider audience. Also don't
> reply back with any RedHat confidential information.
>
> Thanks
> Raghavendra Talur
>

+gluster-infra

>
>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://www.gluster.org/mailman/listinfo/maintainers
>
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] Migrating Gerrit and Jenkins out of iWeb

2016-02-09 Thread Kaushal M
On Sun, Feb 7, 2016 at 2:34 PM, Michael Scherer  wrote:
> Le samedi 06 février 2016 à 18:17 -0500, Vijay Bellur a écrit :
>> I think starting around 0900 UTC on Friday of next week (12th Feb)
>> should be possible. We should be done with 3.7.8 before that and can
>> afford a bit of downtime then. In case any assistance is needed post the
>> migration, we can have folks around the clock to help.
>>
>> If migration fails for an unforeseen reason, would we be able to
>> rollback and maintain status quo?
>
> Yes. Worst case, I think people would just redo a few reviews or push
> again patch.
>
> Also, since gerrit is critical, I wonder what is the support of gerrit
> for slaves and replication.

Michael are you okay with the time? If you are, I think we should
announce the migration.

> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>

[Gluster-infra] Using software-factory for our infra

2016-02-06 Thread Kaushal M
Thanks misc for letting me know of this.

So software-factory[1][2] is a collection of tools (gerrit, jenkins,
etherpad, pastebin etc.) that help build and test software. It
provides all these tools in a well-integrated manner with single
sign-on to all.

I think using this for our infra would be very useful. What do other
people feel? Would you be interested in getting this set up for the
Gluster project?

~kaushal


[1]: http://softwarefactory-project.io/docs/
[2]: http://softwarefactory-project.io/


[Gluster-infra] Migrating Gerrit and Jenkins out of iWeb

2016-02-06 Thread Kaushal M
Hi Vijay.

I spoke with Michael about migrating off of iWeb.

He's already done trial runs of migrating the b.g.o and r.g.o VMs into
the community hardware we have lying around. It's easier to migrate
these VMs into the community cage than into Rackspace.

We know we can do this migration. But this migration would require a
downtime of about 8 hours for both Gerrit and Jenkins. If we can come
up with a time for this downtime, we can get the migration done by the
end of the upcoming week. One way we could do this (credit to misc)
would be to spend a day as a documentation day, when all developers,
instead of developing, work on getting documentation fixed up.

We also need to update Gerrit, but Michael suggested doing it after
the migration. That would possibly also require some downtime later.

In any case, what we want now is a time set for doing the migration.
Can you help us set this time?

Thanks,
Kaushal


Re: [Gluster-infra] [Gluster-devel] Expired SSL certs - review.gluster.org

2016-02-05 Thread Kaushal M
This is working now. Thanks Michael!

On Fri, Feb 5, 2016 at 7:14 AM, Michael Scherer  wrote:
> Le jeudi 04 février 2016 à 19:20 +0530, Kaushal M a écrit :
>> Michael,
>> Do you have an update on the SSL cert situation for review.gluster.org?
>> We've been in this state for nearly 5 months now.
>
> so I did update the cert today, so it should be good now.
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>

[Gluster-infra] Adding Sahina to the Gluster Github organization

2016-02-05 Thread Kaushal M
Sahina will be creating the gluster-nagios repositories under the
Gluster organization.


Re: [Gluster-infra] Gerrit is not responding

2016-02-05 Thread Kaushal M
I think Misc has restarted gerrit. It's working now.
On Feb 5, 2016 12:00 PM, "Raghavendra Talur"  wrote:

> $subject, can someone have a look at it?
>
>
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra
>

Re: [Gluster-infra] [Gluster-devel] Expired SSL certs - review.gluster.org

2016-02-04 Thread Kaushal M
Michael,
Do you have an update on the SSL cert situation for review.gluster.org?
We've been in this state for nearly 5 months now.

~kaushal

On Thu, Feb 4, 2016 at 6:46 PM, Glomski, Patrick
 wrote:
> FYI,
>
> The wildcard certificate for *.gluster.org used by review.gluster.org
> expired on 2015/09/25. I figure if one goes through the trouble of
> configuring SSL on the site, one may as well keep the certs valid...
>
> Patrick
>
> ==
> Certificate:
> Data:
> Version: 3 (0x2)
> Serial Number:
> 08:9e:45:06:63:aa:98:b0:2f:6b:39:7d:aa:bc:38:28
> Signature Algorithm: sha1WithRSAEncryption
> Issuer: C=US, O=DigiCert Inc, OU=www.digicert.com, CN=DigiCert High
> Assurance CA-3
> Validity
> Not Before: Sep  3 00:00:00 2013 GMT
> ***Not After : Sep 10 12:00:00 2015 GMT***
> Subject: C=US, ST=North Carolina, L=Raleigh, O=Red Hat Inc.,
> CN=*.gluster.org
> Subject Public Key Info:
> Public Key Algorithm: rsaEncryption
> Public-Key: (2048 bit)
> Modulus:
> 00:c4:a1:76:b9:12:92:04:4b:62:ef:ac:c8:70:9b:
> 06:8f:00:85:b4:a8:ef:87:70:a5:8b:eb:6b:56:c8:
> 33:d0:37:54:92:92:46:da:a2:ee:c2:ac:48:b1:98:
> 48:20:91:b2:e7:a6:dd:60:a6:02:77:58:3c:e2:11:
> e7:4f:2b:ae:4f:65:6c:37:f7:45:bf:bf:31:98:5e:
> ea:17:96:9a:6d:95:9a:eb:09:b1:cf:89:ca:ba:bc:
> 70:0a:26:c3:a4:a4:ce:0e:33:d0:fd:6f:2e:c7:27:
> b6:2e:e8:48:2f:e1:a1:99:2a:0b:c2:ae:98:e9:f8:
> d6:fd:c2:52:0f:85:de:9d:25:28:af:02:7e:db:dd:
> e7:68:8b:5e:68:75:f2:05:1e:47:99:5d:9f:60:e7:
> 6a:3d:d8:ea:b8:af:8a:f2:1d:2e:00:ad:26:75:ca:
> 82:e6:45:9d:cc:25:98:24:3e:ef:50:fb:57:af:ac:
> a0:95:6f:ff:ff:6e:ad:ce:e3:9b:72:db:61:25:bd:
> 20:4a:ad:33:aa:e6:4d:ab:1b:c8:80:1c:42:21:60:
> d0:cc:ce:22:39:f8:93:24:e9:83:2d:bb:ec:bf:15:
> 45:37:55:f2:27:26:0f:c6:8d:7f:e3:5b:86:0f:c5:
> 73:74:af:c2:07:8a:bc:df:1f:3d:d5:72:ff:22:e7:
> d8:1d
> Exponent: 65537 (0x10001)
> X509v3 extensions:
> X509v3 Authority Key Identifier:
>
> keyid:50:EA:73:89:DB:29:FB:10:8F:9E:E5:01:20:D4:DE:79:99:48:83:F7
>
> X509v3 Subject Key Identifier:
> B4:16:D1:76:1F:AA:C6:8B:BD:9A:45:B4:AC:14:FD:0B:F4:3D:3F:9E
> X509v3 Subject Alternative Name:
> DNS:*.gluster.org, DNS:gluster.org
> X509v3 Key Usage: critical
> Digital Signature, Key Encipherment
> X509v3 Extended Key Usage:
> TLS Web Server Authentication, TLS Web Client Authentication
> X509v3 CRL Distribution Points:
>
> Full Name:
>   URI:http://crl3.digicert.com/ca3-g28.crl
>
> Full Name:
>   URI:http://crl4.digicert.com/ca3-g28.crl
>
> X509v3 Certificate Policies:
> Policy: 2.16.840.1.114412.1.1
>   CPS: https://www.digicert.com/CPS
>
> Authority Information Access:
> OCSP - URI:http://ocsp.digicert.com
> CA Issuers -
> URI:http://cacerts.digicert.com/DigiCertHighAssuranceCA-3.crt
>
> X509v3 Basic Constraints: critical
> CA:FALSE
> Signature Algorithm: sha1WithRSAEncryption
>  77:11:b2:5f:e6:77:4b:a6:a5:4a:4f:35:c4:95:d6:d1:72:29:
>  9b:6c:f1:2b:f6:0e:ec:63:43:9e:5d:25:19:4b:ab:6b:a0:be:
>  86:14:cc:54:bc:be:41:f1:23:26:8e:d7:32:1b:69:59:f0:dd:
>  36:8a:3b:b2:81:b4:3d:90:07:6c:31:4c:4f:dc:f4:67:d3:d6:
>  49:d9:f5:7c:ab:0b:fc:58:bf:5f:df:fd:22:53:de:1d:7f:9a:
>  95:f7:c8:8b:b3:ed:e9:fa:0a:76:22:7e:c5:c2:ba:34:4f:9b:
>  75:1a:3c:c0:7c:ad:b3:d6:65:f0:5e:cc:5b:1e:ca:15:80:21:
>  c7:af:26:bf:2e:a6:03:6a:95:28:a2:8b:84:33:86:7d:61:35:
>  9b:86:30:7c:c8:c3:08:44:0a:6b:82:d1:dc:4e:bc:df:2d:d3:
>  b5:9c:59:76:5a:68:8e:cd:46:a8:9c:6f:1e:2c:a4:f4:1d:fc:
>  43:e1:cf:dc:1e:54:42:cc:01:d3:d5:ec:45:63:b6:c2:12:55:
>  fd:87:3c:cc:36:de:de:47:21:46:7b:14:be:cf:13:95:0e:df:
>  15:6f:4f:22:e4:47:48:d0:1a:9f:95:6e:d0:39:2b:92:e2:e5:
>  d8:2e:a6:35:59:87:cc:fa:9e:c6:2f:19:3c:36:7b:1f:5b:e9:
>  c7:1f:8a:9e
>
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-infra] Add gluster-nagios* to github

2016-02-04 Thread Kaushal M
Hi Sahina,

The gluster-nagios project currently lives in 3 repositories on
review.gluster.org. Finding these projects on gerrit is not easy.

To be more visible, it would be better if it were added to github under
the gluster organization. We can help set gerrit up to replicate to
the github repo. This will help with visibility of the project a lot.
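For reference, replication from Gerrit to GitHub is normally set up via the replication plugin's replication.config. A minimal sketch, written to a demo file here; the remote name, repository path, and refspecs are illustrative assumptions, not the actual Gluster configuration:

```shell
# On the real server this file would live in Gerrit's etc/ directory as
# replication.config; ${name} is expanded by the plugin to the project name.
cat > replication.config.demo <<'EOF'
[remote "github"]
    url = git@github.com:gluster/${name}.git
    push = +refs/heads/*:refs/heads/*
    push = +refs/tags/*:refs/tags/*
EOF
cat replication.config.demo
```

With such an entry in place, every ref update that lands in Gerrit gets mirrored to the matching GitHub repository automatically.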

~kaushal


Re: [Gluster-infra] Code-Review+2 and Verified+1 cause multiple retriggers on Jenkins

2016-02-04 Thread Kaushal M
I'm okay with this.


On Thu, Feb 4, 2016 at 3:34 PM, Raghavendra Talur  wrote:
> Hi,
>
> We recently changed the jenkins builds to be triggered on the following
> triggers.
>
> 1. Verified+1
> 2. Code-review+2
> 3. recheck (netbsd|centos|smoke)
>
> There is a bug in 1 and 2.
>
> Multiple triggers of 1 or 2 would result in re-runs even when not intended.
>
> I would like to replace 1 and 2 with a comment "run-all-regression" or
> something like that.
> Thoughts?
>
>
> Thanks
> Raghavendra Talur
>
>
> ___
> Gluster-infra mailing list
> Gluster-infra@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] [Gluster-devel] Jenkins accounts for all devs.

2016-01-22 Thread Kaushal M
On Fri, Jan 22, 2016 at 2:41 PM, Michael Scherer  wrote:
> Le vendredi 22 janvier 2016 à 11:31 +0530, Ravishankar N a écrit :
>> On 01/14/2016 12:16 PM, Kaushal M wrote:
>> > On Thu, Jan 14, 2016 at 10:33 AM, Raghavendra Talur  
>> > wrote:
>> >>
>> >> On Thu, Jan 14, 2016 at 10:32 AM, Ravishankar N 
>> >> wrote:
>> >>> On 01/08/2016 12:03 PM, Raghavendra Talur wrote:
>> >>>> P.S: Stop using the "universal" jenkins account to trigger jenkins build
>> >>>> if you are not a maintainer.
>> >>>> If you are a maintainer and don't have your own jenkins account then get
>> >>>> one soon!
>> >>>>
>> >>> I would request for a jenkins account for non-maintainers too, at least
>> >>> for the devs who are actively contributing code (as opposed to random
>> >>> one-off commits from persons). That way, if the regression failure is
>> >>> *definitely* not in my patch (or) is a spurious failure (or) is something
>> >>> that I need to take a netbsd slave offline to debug etc.,  I don't have 
>> >>> to
>> >>> be blocked on the Maintainer. Since the accounts are anyway tied to an
>> >>> individual, it should be easy to spot if someone habitually re-trigger
>> >>> regressions without any initial debugging.
>> >>>
>> >> +1
>> > We'd like to give everyone accounts. But the way we're providing
>> > accounts now gives admin accounts to all. This is not very secure.
>> >
>> > This was one of the reasons misc setup freeipa.gluster.org, to provide
>> > controlled accounts for all. But it hasn't been used yet. We would
>> > need to integrate jenkins and the slaves with freeipa, which would
>> > give everyone easy access.
>>
>> Hi Michael,
>> Do you think it is possible to have this integration soon so that all
>> contributors can re-trigger/initiate builds by themselves?
>
> The thing that is missing is still the same, how do we consider that
> someone is a contributor. IE, do we want people just say "add me" and
> get root access to all our jenkins builder (because that's also what go
> with jenkins way of restarting a build for now) ?
>
> I did the technical stuff, but so far, no one did the organisational
> part of giving a criteria for who has access to what. Without clear
> process, I can't do much.

The need right now is for some of the developers to be able to
re-trigger jobs in Jenkins. Access to the builders is not
required right away (this would also require changes to the builders
IIRC). What I had in mind was to create 3 groups - admins,
maintainers, developers. Then we configure Jenkins to give admins full
access, maintainers access to manually trigger and retrigger, and
developers the ability to retrigger. Jenkins can do this with either
unix groups, using build.gluster.org's groups and users, or via LDAP.
Since we'd like to move to freeipa later, I thought it'd be better not
to create more users/groups on build.gluster.org.
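The three-tier scheme could be expressed with Jenkins' matrix-based security. The fragment below (written to a demo file) is an assumed shape, not the actual build.gluster.org configuration; the group names admins/maintainers/developers come from this mail, the permission IDs are Jenkins' standard ones, and mapping the Gerrit Trigger "retrigger" action to the Build permission is an assumption:

```shell
# Sketch of the <authorizationStrategy> fragment as it would appear in
# Jenkins' config.xml, granting full access to admins and build/retrigger
# rights to maintainers and developers.
cat > jenkins-auth.demo.xml <<'EOF'
<authorizationStrategy class="hudson.security.GlobalMatrixAuthorizationStrategy">
  <permission>hudson.model.Hudson.Administer:admins</permission>
  <permission>hudson.model.Item.Build:maintainers</permission>
  <permission>hudson.model.Item.Build:developers</permission>
  <permission>hudson.model.Item.Read:authenticated</permission>
</authorizationStrategy>
EOF
cat jenkins-auth.demo.xml
```

Whichever backend supplies the groups (unix groups or LDAP/FreeIPA), only the group-to-permission mapping above would need to change.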

> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>

Re: [Gluster-infra] Requesting separate labels in Gerrit for better testing results

2016-01-18 Thread Kaushal M
On Mon, Jan 18, 2016 at 1:03 PM, Raghavendra Talur  wrote:
>
>
> On Fri, Jan 15, 2016 at 4:22 PM, Niels de Vos  wrote:
>>
>> On Thu, Jan 14, 2016 at 10:26:46PM +0530, Kaushal M wrote:
>> > I'd pushed the config to a new branch instead of updating the
>> > `refs/meta/config` branch. I've corrected this now.
>> >
>> > The 3 new labels are,
>> > - Smoke
>> > - CentOS-regression
>> > - NetBSD-regression
>> >
>> > The new labels are active now. Changes cannot be merged without all of
>> > them being +1. Only the bot accounts (Gluster Build System and NetBSD
>> > Build System) can set them.
>
>
> Thanks Kaushal !
>
>>
>>
>> It seems that Verified is also a label that is required. Because this is
>> now the label for manual testing by reviewers/qa, I do not think it
>> should be a requirement anymore.
>>
>> Could the labels that are needed for merging be setup like this?
>>
>>   Code-Review=+2 && (Verified=+1 || (Smoke=+1 && CentOS-regression=+1 &&
>> NetBSD-regression=+1))
>
>
> I would prefer not having Verified=+1 here. A dev should not be allowed to
> override the restrictions.

I've made the Verified flag a `NoBlock` flag. Changes are now
merge-able only with (Code-Review+2 && Smoke+1 && CentOS-regression+1
&& NetBSD-regression+1).
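A non-blocking Verified label of this kind is declared in the project.config kept on refs/meta/config. A sketch, assuming the value names follow Gerrit's config-labels documentation (written to a demo file here):

```shell
# "function = NoBlock" means the label is recorded but never prevents submit.
cat > verified-label.demo.config <<'EOF'
[label "Verified"]
    function = NoBlock
    value = -1 Fails
    value =  0 No score
    value = +1 Verified
EOF
grep 'function' verified-label.demo.config
```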

>
>>
>>
>> I managed to get http://review.gluster.org/13208 merged now, please
>> check if the added tags in the commit message are ok, or need to get
>> modified.
>>
>> Thanks,
>> Niels
>>
>>
>> >
>> > On Thu, Jan 14, 2016 at 9:22 PM, Kaushal M  wrote:
>> > > On Thu, Jan 14, 2016 at 5:12 PM, Niels de Vos 
>> > > wrote:
>> > >> On Thu, Jan 14, 2016 at 03:46:02PM +0530, Kaushal M wrote:
>> > >>> On Thu, Jan 14, 2016 at 2:43 PM, Niels de Vos 
>> > >>> wrote:
>> > >>> > On Thu, Jan 14, 2016 at 11:51:15AM +0530, Raghavendra Talur wrote:
>> > >>> >> On Tue, Jan 12, 2016 at 7:59 PM, Atin Mukherjee
>> > >>> >> 
>> > >>> >> wrote:
>> > >>> >>
>> > >>> >> > -Atin
>> > >>> >> > Sent from one plus one
>> > >>> >> > On Jan 12, 2016 7:41 PM, "Niels de Vos" 
>> > >>> >> > wrote:
>> > >>> >> > >
>> > >>> >> > > On Tue, Jan 12, 2016 at 07:21:37PM +0530, Raghavendra Talur
>> > >>> >> > > wrote:
>> > >>> >> > > > We have now changed the gerrit-jenkins workflow as follows:
>> > >>> >> > > >
>> > >>> >> > > > 1. Developer works on a new feature/bug fix and tests it
>> > >>> >> > > > locally(run
>> > >>> >> > > > run-tests.sh completely).
>> > >>> >> > > > 2. Developer sends the patch to gerrit using rfc.sh.
>> > >>> >> > > >
>> > >>> >> > > > +++Note that no regression runs have started automatically
>> > >>> >> > > > for this
>> > >>> >> > patch
>> > >>> >> > > > at this point.+++
>> > >>> >> > > >
>> > >>> >> > > > 3. Developer marks the patch as +1 verified on gerrit as a
>> > >>> >> > > > promise of
>> > >>> >> > > > having tested the patch completely. For cases where patches
>> > >>> >> > > > don't have
>> > >>> >> > a +1
>> > >>> >> > > > verified from the developer, maintainer has the following
>> > >>> >> > > > options
>> > >>> >> > > > a. just do the code-review and award a +2 code review.
>> > >>> >> > > > b. pull the patch locally and test completely and award a
>> > >>> >> > > > +1 verified.
>> > >>> >> > > > Both the above actions would result in triggering of
>> > >>> >> > > > regression runs
>> > >>> >> > for
>> > >>> >> > > > the patch.
>> > >>> >> > >
>> > >>> &

Re: [Gluster-infra] Requesting separate labels in Gerrit for better testing results

2016-01-14 Thread Kaushal M
On Thu, Jan 14, 2016 at 10:26 PM, Kaushal M  wrote:
> I'd pushed the config to a new branch instead of updating the
> `refs/meta/config` branch. I've corrected this now.
>
> The 3 new labels are,
> - Smoke
> - CentOS-regression
> - NetBSD-regression
>
> The new labels are active now. Changes cannot be merged without all of
> them being +1. Only the bot accounts (Gluster Build System and NetBSD
> Build System) can set them.

I've also enabled copying of the regression flags' states on trivial
rebases and no-code-change updates. I'll disable this if desired.
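In the project.config this copying behaviour corresponds to two per-label options; a sketch using one of the labels from this thread (written to a demo file, option names per Gerrit's config-labels documentation):

```shell
# With these set, an existing score on the label survives a trivial rebase
# or a new patchset that changes no code.
cat > copy-scores.demo.config <<'EOF'
[label "CentOS-regression"]
    copyAllScoresOnTrivialRebase = true
    copyAllScoresIfNoCodeChange = true
EOF
cat copy-scores.demo.config
```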

> On Thu, Jan 14, 2016 at 9:22 PM, Kaushal M  wrote:
>> On Thu, Jan 14, 2016 at 5:12 PM, Niels de Vos  wrote:
>>> On Thu, Jan 14, 2016 at 03:46:02PM +0530, Kaushal M wrote:
>>>> On Thu, Jan 14, 2016 at 2:43 PM, Niels de Vos  wrote:
>>>> > On Thu, Jan 14, 2016 at 11:51:15AM +0530, Raghavendra Talur wrote:
>>>> >> On Tue, Jan 12, 2016 at 7:59 PM, Atin Mukherjee 
>>>> >> 
>>>> >> wrote:
>>>> >>
>>>> >> > -Atin
>>>> >> > Sent from one plus one
>>>> >> > On Jan 12, 2016 7:41 PM, "Niels de Vos"  wrote:
>>>> >> > >
>>>> >> > > On Tue, Jan 12, 2016 at 07:21:37PM +0530, Raghavendra Talur wrote:
>>>> >> > > > We have now changed the gerrit-jenkins workflow as follows:
>>>> >> > > >
>>>> >> > > > 1. Developer works on a new feature/bug fix and tests it 
>>>> >> > > > locally(run
>>>> >> > > > run-tests.sh completely).
>>>> >> > > > 2. Developer sends the patch to gerrit using rfc.sh.
>>>> >> > > >
>>>> >> > > > +++Note that no regression runs have started automatically for 
>>>> >> > > > this
>>>> >> > patch
>>>> >> > > > at this point.+++
>>>> >> > > >
>>>> >> > > > 3. Developer marks the patch as +1 verified on gerrit as a 
>>>> >> > > > promise of
>>>> >> > > > having tested the patch completely. For cases where patches don't 
>>>> >> > > > have
>>>> >> > a +1
>>>> >> > > > verified from the developer, maintainer has the following options
>>>> >> > > > a. just do the code-review and award a +2 code review.
>>>> >> > > > b. pull the patch locally and test completely and award a +1 
>>>> >> > > > verified.
>>>> >> > > > Both the above actions would result in triggering of regression 
>>>> >> > > > runs
>>>> >> > for
>>>> >> > > > the patch.
>>>> >> > >
>>>> >> > > Would it not help if anyone giving +1 code-review starts the 
>>>> >> > > regression
>>>> >> > > tests too? When developers ask me to review, I prefer to see reviews
>>>> >> > > done by others first, and any regression failures should have been 
>>>> >> > > fixed
>>>> >> > > by the time I look at the change.
>>>> >> > When this idea was originated (long back) I was in favour of having
>>>> >> > regression triggered on a +1, however verified flag set by the 
>>>> >> > developer
>>>> >> > would still trigger the regression. Being a maintainer I would always
>>>> >> > prefer to look at a patch when its verified  flag is +1 which means 
>>>> >> > the
>>>> >> > regression result would also be available.
>>>> >> >
>>>> >>
>>>> >>
>>>> >> Niels requested in IRC that it is good have a mechanism of getting all
>>>> >> patches that have already passed all regressions before starting review.
>>>> >> Here is what I found
>>>> >> a. You can use the search string
>>>> >> status:open label:Verified+1,user=build AND 
>>>> >> label:Verified+1,user=nb7build
>>>> >> b. You can bookmark this link and it will take you directly to the page
>>>> >> with list of such patches.
>>>> >>
>>>> >> http://review.gluster.org/#/q/status:open+label:Verified%252B1%252Cuser%253Dbuild+AND+label:Verified%252B1%252Cuser%253Dnb7build
>>>> >
>>>> > Hmm, copy/pasting this URL does not work for me, I get an error:
>>>> >
>>>> > Code Review - Error
>>>> > line 1:26 no viable alternative at character '%'
>>>> > [Continue]
>>>> >
>>>> >
>>>> > Kaushal, could you add the following labels to gerrit, so that we can
>>>> > update the Jenkins jobs and they can start setting their own labels?
>>>> >
>>>> > http://review.gluster.org/Documentation/config-labels.html#label_custom
>>>> >
>>>> > - Smoke: misc smoke testing, compile, bug check, posix, ..
>>>> > - NetBSD: NetBSD-7 regression
>>>> > - Linux: Linux regression on CentOS-6
>>>>
>>>> I added these labels to the gluster projects' project.config, but they
>>>> don't seem to be showing up. I'll check once more when I get back
>>>> home.
>>>
>>> Might need a restart/reload of Gerrit? It seems required for the main
>>> gerrit.config file too:
>>>
>>>   
>>> http://review.gluster.org/Documentation/config-gerrit.html#_file_code_etc_gerrit_config_code
>>
>> I was using Chromium and did a restart. Both hadn't helped. I'll try again.
>>>
>>> Niels


Re: [Gluster-infra] Requesting separate labels in Gerrit for better testing results

2016-01-14 Thread Kaushal M
I'd pushed the config to a new branch instead of updating the
`refs/meta/config` branch. I've corrected this now.

The 3 new labels are:
- Smoke
- CentOS-regression
- NetBSD-regression

The new labels are active now. Changes cannot be merged without all of
them being +1. Only the bot accounts (Gluster Build System and NetBSD
Build System) can set them.
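A blocking label restricted to the bot accounts is declared along these lines in project.config. This is a sketch (written to a demo file): the label name comes from this thread, the syntax follows Gerrit's config-labels documentation, and the group name in the access section is an example placeholder for whatever group holds the bot accounts:

```shell
# "function = MaxWithBlock" blocks submit on -1 and requires a +1;
# the access stanza limits who may vote on the label.
cat > regression-label.demo.config <<'EOF'
[label "CentOS-regression"]
    function = MaxWithBlock
    value = -1 Fails
    value =  0 No score
    value = +1 Verified

[access "refs/heads/*"]
    label-CentOS-regression = -1..+1 group Build Bots
EOF
cat regression-label.demo.config
```

The same pattern repeated for Smoke and NetBSD-regression gives the "all three must be +1" submit requirement described above.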

On Thu, Jan 14, 2016 at 9:22 PM, Kaushal M  wrote:
> On Thu, Jan 14, 2016 at 5:12 PM, Niels de Vos  wrote:
>> On Thu, Jan 14, 2016 at 03:46:02PM +0530, Kaushal M wrote:
>>> On Thu, Jan 14, 2016 at 2:43 PM, Niels de Vos  wrote:
>>> > On Thu, Jan 14, 2016 at 11:51:15AM +0530, Raghavendra Talur wrote:
>>> >> On Tue, Jan 12, 2016 at 7:59 PM, Atin Mukherjee 
>>> >> 
>>> >> wrote:
>>> >>
>>> >> > -Atin
>>> >> > Sent from one plus one
>>> >> > On Jan 12, 2016 7:41 PM, "Niels de Vos"  wrote:
>>> >> > >
>>> >> > > On Tue, Jan 12, 2016 at 07:21:37PM +0530, Raghavendra Talur wrote:
>>> >> > > > We have now changed the gerrit-jenkins workflow as follows:
>>> >> > > >
>>> >> > > > 1. Developer works on a new feature/bug fix and tests it 
>>> >> > > > locally(run
>>> >> > > > run-tests.sh completely).
>>> >> > > > 2. Developer sends the patch to gerrit using rfc.sh.
>>> >> > > >
>>> >> > > > +++Note that no regression runs have started automatically for this
>>> >> > patch
>>> >> > > > at this point.+++
>>> >> > > >
>>> >> > > > 3. Developer marks the patch as +1 verified on gerrit as a promise 
>>> >> > > > of
>>> >> > > > having tested the patch completely. For cases where patches don't 
>>> >> > > > have
>>> >> > a +1
>>> >> > > > verified from the developer, maintainer has the following options
>>> >> > > > a. just do the code-review and award a +2 code review.
>>> >> > > > b. pull the patch locally and test completely and award a +1 
>>> >> > > > verified.
>>> >> > > > Both the above actions would result in triggering of regression 
>>> >> > > > runs
>>> >> > for
>>> >> > > > the patch.
>>> >> > >
>>> >> > > Would it not help if anyone giving +1 code-review starts the 
>>> >> > > regression
>>> >> > > tests too? When developers ask me to review, I prefer to see reviews
>>> >> > > done by others first, and any regression failures should have been 
>>> >> > > fixed
>>> >> > > by the time I look at the change.
>>> >> > When this idea was originated (long back) I was in favour of having
>>> >> > regression triggered on a +1, however verified flag set by the 
>>> >> > developer
>>> >> > would still trigger the regression. Being a maintainer I would always
>>> >> > prefer to look at a patch when its verified  flag is +1 which means the
>>> >> > regression result would also be available.
>>> >> >
>>> >>
>>> >>
>>> >> Niels requested in IRC that it is good have a mechanism of getting all
>>> >> patches that have already passed all regressions before starting review.
>>> >> Here is what I found
>>> >> a. You can use the search string
>>> >> status:open label:Verified+1,user=build AND 
>>> >> label:Verified+1,user=nb7build
>>> >> b. You can bookmark this link and it will take you directly to the page
>>> >> with list of such patches.
>>> >>
>>> >> http://review.gluster.org/#/q/status:open+label:Verified%252B1%252Cuser%253Dbuild+AND+label:Verified%252B1%252Cuser%253Dnb7build
>>> >
>>> > Hmm, copy/pasting this URL does not work for me, I get an error:
>>> >
>>> > Code Review - Error
>>> > line 1:26 no viable alternative at character '%'
>>> > [Continue]
>>> >
>>> >
>>> > Kaushal, could you add the following labels to gerrit, so that we can
>>> > update the Jenkins jobs and they can start setting their own labels?
>>> >
>>> > http://review.gluster.org/Documentation/config-labels.html#label_custom
>>> >
>>> > - Smoke: misc smoke testing, compile, bug check, posix, ..
>>> > - NetBSD: NetBSD-7 regression
>>> > - Linux: Linux regression on CentOS-6
>>>
>>> I added these labels to the gluster projects' project.config, but they
>>> don't seem to be showing up. I'll check once more when I get back
>>> home.
>>
>> Might need a restart/reload of Gerrit? It seems required for the main
>> gerrit.config file too:
>>
>>   
>> http://review.gluster.org/Documentation/config-gerrit.html#_file_code_etc_gerrit_config_code
>
> I was using Chromium and did a restart. Both hadn't helped. I'll try again.
>>
>> Niels
___
Gluster-infra mailing list
Gluster-infra@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-infra] Requesting separate labels in Gerrit for better testing results

2016-01-14 Thread Kaushal M
On Thu, Jan 14, 2016 at 5:12 PM, Niels de Vos  wrote:
> On Thu, Jan 14, 2016 at 03:46:02PM +0530, Kaushal M wrote:
>> On Thu, Jan 14, 2016 at 2:43 PM, Niels de Vos  wrote:
>> > On Thu, Jan 14, 2016 at 11:51:15AM +0530, Raghavendra Talur wrote:
>> >> On Tue, Jan 12, 2016 at 7:59 PM, Atin Mukherjee 
>> >> 
>> >> wrote:
>> >>
>> >> > -Atin
>> >> > Sent from one plus one
>> >> > On Jan 12, 2016 7:41 PM, "Niels de Vos"  wrote:
>> >> > >
>> >> > > On Tue, Jan 12, 2016 at 07:21:37PM +0530, Raghavendra Talur wrote:
>> >> > > > We have now changed the gerrit-jenkins workflow as follows:
>> >> > > >
>> >> > > > 1. Developer works on a new feature/bug fix and tests it locally(run
>> >> > > > run-tests.sh completely).
>> >> > > > 2. Developer sends the patch to gerrit using rfc.sh.
>> >> > > >
>> >> > > > +++Note that no regression runs have started automatically for this
>> >> > patch
>> >> > > > at this point.+++
>> >> > > >
>> >> > > > 3. Developer marks the patch as +1 verified on gerrit as a promise 
>> >> > > > of
>> >> > > > having tested the patch completely. For cases where patches don't 
>> >> > > > have
>> >> > a +1
>> >> > > > verified from the developer, maintainer has the following options
>> >> > > > a. just do the code-review and award a +2 code review.
>> >> > > > b. pull the patch locally and test completely and award a +1 
>> >> > > > verified.
>> >> > > > Both the above actions would result in triggering of regression runs
>> >> > for
>> >> > > > the patch.
>> >> > >
>> >> > > Would it not help if anyone giving +1 code-review starts the 
>> >> > > regression
>> >> > > tests too? When developers ask me to review, I prefer to see reviews
>> >> > > done by others first, and any regression failures should have been 
>> >> > > fixed
>> >> > > by the time I look at the change.
>> >> > When this idea was originated (long back) I was in favour of having
>> >> > regression triggered on a +1, however verified flag set by the developer
>> >> > would still trigger the regression. Being a maintainer I would always
>> >> > prefer to look at a patch when its verified  flag is +1 which means the
>> >> > regression result would also be available.
>> >> >
>> >>
>> >>
>> >> Niels requested in IRC that it is good have a mechanism of getting all
>> >> patches that have already passed all regressions before starting review.
>> >> Here is what I found
>> >> a. You can use the search string
>> >> status:open label:Verified+1,user=build AND label:Verified+1,user=nb7build
>> >> b. You can bookmark this link and it will take you directly to the page
>> >> with list of such patches.
>> >>
>> >> http://review.gluster.org/#/q/status:open+label:Verified%252B1%252Cuser%253Dbuild+AND+label:Verified%252B1%252Cuser%253Dnb7build
>> >
>> > Hmm, copy/pasting this URL does not work for me, I get an error:
>> >
>> > Code Review - Error
>> > line 1:26 no viable alternative at character '%'
>> > [Continue]
>> >
>> >
>> > Kaushal, could you add the following labels to gerrit, so that we can
>> > update the Jenkins jobs and they can start setting their own labels?
>> >
>> > http://review.gluster.org/Documentation/config-labels.html#label_custom
>> >
>> > - Smoke: misc smoke testing, compile, bug check, posix, ..
>> > - NetBSD: NetBSD-7 regression
>> > - Linux: Linux regression on CentOS-6
>>
>> I added these labels to the gluster projects' project.config, but they
>> don't seem to be showing up. I'll check once more when I get back
>> home.
>
> Might need a restart/reload of Gerrit? It seems required for the main
> gerrit.config file too:
>
>   
> http://review.gluster.org/Documentation/config-gerrit.html#_file_code_etc_gerrit_config_code

I was using Chromium and did a restart. Neither had helped. I'll try again.
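The "no viable alternative at character '%'" error quoted above is Gerrit's query parser choking on a bookmark URL that was percent-encoded twice ("%252B" is "%2B" encoded again). A small Python sketch, not part of the original exchange, showing the repaired encoding:

```python
from urllib.parse import quote, unquote

# The raw Gerrit search query from Raghavendra's mail.
query = ("status:open label:Verified+1,user=build "
         "AND label:Verified+1,user=nb7build")

# Encode exactly once: '+' -> %2B, ',' -> %2C, '=' -> %3D, and spaces
# become '+' (the separator Gerrit uses in its #/q/ fragment).
encoded = quote(query, safe=":").replace("%20", "+")
bookmark = "http://review.gluster.org/#/q/" + encoded

# The pasted URL was encoded twice ("%252B" instead of "%2B");
# undoing one layer of encoding recovers the working form.
broken = ("status:open+label:Verified%252B1%252Cuser%253Dbuild"
          "+AND+label:Verified%252B1%252Cuser%253Dnb7build")
assert unquote(broken) == encoded
```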
>
> Niels


Re: [Gluster-infra] Requesting separate labels in Gerrit for better testing results

2016-01-14 Thread Kaushal M
On Thu, Jan 14, 2016 at 2:43 PM, Niels de Vos  wrote:
> On Thu, Jan 14, 2016 at 11:51:15AM +0530, Raghavendra Talur wrote:
>> On Tue, Jan 12, 2016 at 7:59 PM, Atin Mukherjee 
>> wrote:
>>
>> > -Atin
>> > Sent from one plus one
>> > On Jan 12, 2016 7:41 PM, "Niels de Vos"  wrote:
>> > >
>> > > On Tue, Jan 12, 2016 at 07:21:37PM +0530, Raghavendra Talur wrote:
>> > > > We have now changed the gerrit-jenkins workflow as follows:
>> > > >
>> > > > 1. Developer works on a new feature/bug fix and tests it locally(run
>> > > > run-tests.sh completely).
>> > > > 2. Developer sends the patch to gerrit using rfc.sh.
>> > > >
>> > > > +++Note that no regression runs have started automatically for this
>> > patch
>> > > > at this point.+++
>> > > >
>> > > > 3. Developer marks the patch as +1 verified on gerrit as a promise of
>> > > > having tested the patch completely. For cases where patches don't have
>> > a +1
>> > > > verified from the developer, maintainer has the following options
>> > > > a. just do the code-review and award a +2 code review.
>> > > > b. pull the patch locally and test completely and award a +1 verified.
>> > > > Both the above actions would result in triggering of regression runs
>> > for
>> > > > the patch.
>> > >
>> > > Would it not help if anyone giving +1 code-review starts the regression
>> > > tests too? When developers ask me to review, I prefer to see reviews
>> > > done by others first, and any regression failures should have been fixed
>> > > by the time I look at the change.
>> > When this idea was originated (long back) I was in favour of having
>> > regression triggered on a +1, however verified flag set by the developer
>> > would still trigger the regression. Being a maintainer I would always
>> > prefer to look at a patch when its verified  flag is +1 which means the
>> > regression result would also be available.
>> >
>>
>>
>> Niels requested in IRC that it is good have a mechanism of getting all
>> patches that have already passed all regressions before starting review.
>> Here is what I found
>> a. You can use the search string
>> status:open label:Verified+1,user=build AND label:Verified+1,user=nb7build
>> b. You can bookmark this link and it will take you directly to the page
>> with list of such patches.
>>
>> http://review.gluster.org/#/q/status:open+label:Verified%252B1%252Cuser%253Dbuild+AND+label:Verified%252B1%252Cuser%253Dnb7build
>
> Hmm, copy/pasting this URL does not work for me, I get an error:
>
> Code Review - Error
> line 1:26 no viable alternative at character '%'
> [Continue]
>
>
> Kaushal, could you add the following labels to gerrit, so that we can
> update the Jenkins jobs and they can start setting their own labels?
>
> http://review.gluster.org/Documentation/config-labels.html#label_custom
>
> - Smoke: misc smoke testing, compile, bug check, posix, ..
> - NetBSD: NetBSD-7 regression
> - Linux: Linux regression on CentOS-6

I added these labels to the gluster projects' project.config, but they
don't seem to be showing up. I'll check once more when I get back
home.
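For anyone following along: custom labels like these live in the project.config file on the project's refs/meta/config branch. A sketch of what one such stanza might look like — the function, values, and group name here are illustrative assumptions, not the stanzas actually committed (see the config-labels documentation linked above for the authoritative syntax):

```ini
# project.config (refs/meta/config) -- hypothetical Smoke label;
# NetBSD and Linux would get analogous stanzas.
[label "Smoke"]
    function = MaxWithBlock
    value = -1 Fails
    value =  0 No score
    value = +1 Verified

[access "refs/heads/*"]
    # Restrict voting to the CI bot accounts so developers keep
    # using the plain Verified label for manual testing.
    label-Smoke = -1..+1 group Non-Interactive Users
```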

>
> Users/developers should not be able to set these labels, only the
> Jenkins accounts are allowed to.
>
> The standard Verified label can then be used for manual verification by
> developers, qa and reviewers.
>
> Thanks,
> Niels


Re: [Gluster-infra] Need help with gerrit trigger plugin

2016-01-11 Thread Kaushal M
Just clear the password field. I don't know why, but I think Jenkins
fills in the hash of the empty password there.

On Tue, Jan 12, 2016 at 11:32 AM, Raghavendra Talur  wrote:
> I am reconfiguring the events to trigger jenkins build on. The events are
> not getting triggered though.
>
> Checking the config page for trigger plugin I see this
>
>
> May be this is the reason for it, could someone check the ssh file and
> password?
>
> Thanks,
> Raghavendra Talur
>


Re: [Gluster-infra] Switching from salt to ansible ?

2016-01-05 Thread Kaushal M
On Tue, Jan 5, 2016 at 2:52 PM, Niels de Vos  wrote:
> On Mon, Jan 04, 2016 at 05:27:10PM +0100, Michael Scherer wrote:
>> Hi,
>>
>> so over the holidays, I was pondering on moving to ansible from salt.
>>
>> So the reasons are numerous:
>>
>> - I am personally much more efficient with ansible than with salt
>> (despite using salt for 1 year). While the 2 tools do the same basic
>> stuff, there is always some difference (like user vs owner), and there
>> is a totally different philosophy when it come to multiple servers
>> orchestration (one example is how I do deploy freeipa on salt vs
>> ansible).
>
> Many of us are more familiar with Ansible than Salt. It'll be easier to
> get contributions from developers when Ansible is used.
>
>> - Since I use ansible for most others projects, I do already have a few
>> roles for most of the thing I want to deploy (freeipa, nagios, etc), and
>> adding features on them and then again on salt is not very efficient.
>>
>> - One of the initial reasons to choose salt was a tiny margin of people
>> who know it in the community, vs ansible. I suspect this is no longer
>> valid. For example, the vagrant image for developper is made using
>> ansible, and I know a few people in the dev community who use ansible. I
>> still think no one grok salt.
>>
>> - Another of the reason of using salt vs ansible is that salt was much
>> faster to apply configuration, especially if done on git commit. While
>> that's true, I managed to make it good enough on manageiq.org using
>> smart post-commit hook, and salt is getting also slower the more stuff
>> we add to configuration.
>>
>> - salt in epel is still using a old version ( for dependencies reasons
>> ). While this is working well enough, it make contributing quite
>> difficult, and prevent using some new features that are needed.
>>
>> - having a client/server model is something that caused trouble with
>> puppet when they decided to support only 1 version of ruby (around the
>> ruby 2.0 time frame). And given the transition of python2 and 3 is
>> happening right now in Fedora, I foresee this might be the same kind of
>> issue for salt.
>>
>> - Fedora is using ansible, and while we can't reuse their code that
>> much, we can at least take it and adapt.
>
> And can ask the Fedora sysadmins for help/ideas, or discuss the general
> approach of a role/task. If something in our Ansible doesn't work well
> enough, they might be able to share their thoughts. Fedora Infra is
> interested in Gluster and they would surely assist with some bits in
> return for our help ;-)
>
>> Now, there is a few downsides:
>>
>> - it mean rewriting most of the stuff we already have
>>
>> - it mean that we depend on sshd to be running. IE, if we screwed ssh
>> config (happened in the past), we can't just use salt to fix it.
>
> Having ssh running, or the salt-minion, does not make much of a
> difference to me.
>
>> - it also mean that we will have a ssh key to connect as root on a
>> server, and i am not that confortable with the idea (provided that we
>> use the regular method of using ansible, ie push based)
>
> Or (a dedicated ansible user and) use sudo? Might make auditing a little
> easier. I think its even possible to use sshd/Match on a username and
> only allow certain logins from selected sources (like a management
> server).
>
>> and maybe other I didn't think of.
>>
>> Any opinions ?
>
> I prefer the move to Ansible, it would allow me to contribute changes
> without learning Salt. Fixing/improving Ansible (in Python) is also
> something that I can do, but I'll stay away from patching Salt (in
> Ruby).

Salt is also a Python-based tool. You were probably thinking about
Puppet or Chef. :)

>
> Niels
>


Re: [Gluster-infra] Jenkins has lost its connection to Gerrit again :-/

2016-01-04 Thread Kaushal M
On Mon, Jan 4, 2016 at 4:58 PM, Niels de Vos  wrote:
> On Wed, Dec 30, 2015 at 11:47:14AM +0100, Niels de Vos wrote:
>> Jenkins is not triggering smoke and regression tests at the moment. I
>> have triggered smoke and regression tests for many patch sets manually
>> through:
>>
>>  1. open https://build.gluster.org/gerrit_manual_trigger/
>>  2. search: "status:open after:2015-12-27 label:Verified=0 NOT 
>> label:Code-Review=-1"
>>  3. select all changes
>>  4. click "trigger selected"
>>
>> Could someone look into getting the connection between Jenkins and
>> Gerrit up again?
>>
>> Thanks,
>> Niels
>
> This does not seem to have been resolved yet. I've executed the above
> procedure again. Many regression tests have been scheduled now.
>
> Niels
>

I've scheduled a Jenkins restart, which should fix the connection.
I've also aborted the following hung NetBSD regression runs; they can
be retriggered after Jenkins restarts.
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/13124/
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/13126/
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/13130/
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/13132/

~kaushal



Re: [Gluster-infra] Request for a Gerrit account for the CentOS CI Jenkins

2016-01-04 Thread Kaushal M
On Tue, Jan 5, 2016 at 10:52 AM, Kaushal M  wrote:
> On Tue, Jan 5, 2016 at 9:30 AM, Kaushal M  wrote:
>> On Mon, Jan 4, 2016 at 9:58 PM, Michael Scherer  wrote:
>>> On Monday 04 January 2016 at 16:33 +0100, Niels de Vos wrote:
>>>> On Mon, Jan 04, 2016 at 04:16:14PM +0100, Michael Scherer wrote:
>>>> > On Monday 04 January 2016 at 09:24 +0530, Kaushal M wrote:
>>>> > > On Sun, Jan 3, 2016 at 6:34 PM, Niels de Vos  wrote:
>>>> > > > Hi!
>>>> > > >
>>>> > > > I would like to request a Gerrit account for the CentOS CI Jenkins
>>>> > > > (ci.centos.org) so that we can setup jobs based on Gerrit triggers.
>>>> > > > Could you please create an account with the following details?
>>>> > > >
>>>> > > >   Username: centos-ci
>>>> > > >   Email: (hmm, maybe a new list, or alias like 
>>>> > > > centos...@gluster.org?)
>>>> > > >   Public-key: ssh-rsa 
>>>> > > > B3NzaC1yc2EDAQABAAABAQDCu9qWPHYJm+s4Nq1seE82Q+m3Ilq82Z+GkK88tgy7aNMeJ5DWeHMTo+jCu+sV68uXXAGIC0IvGeQPeTae1Rk6WYyPz+l/LQUME051i/ke0wG/1SaaWkduK6KDqnC9xi4Ud/ZDF+/StIqlSKM7/FIPzcqOV3TAEU4B82MRA3NaNJrTHLgdTqqDXZc9snEp7pBsWfTu4ojeV/Nv2dcdfcdWkT9VSDIUmhfGYODAnEQACrw0P4V17gkPMYV96jgU06HIXiSz20JnA1E6PazAtHLEPOfQR4D5csJ+3DpoBek3PUB8E4kqRAouz4qIWzum4Sc2d1AE8UTkxIWBXjf5bkGd
>>>> > > >  centos...@review.gluster.org
>>>> > > >   See-also: 
>>>> > > > http://review.gluster.org/Documentation/access-control.html#examples_cisystem
>>>> > > >
>>>> > >
>>>> > > I can create the bot account, but someone else (Misc) needs to create
>>>> > > the mail alias/list.
>>>> >
>>>> > Well, what do we want to do with it ?
>>>> >
>>>> > And what kind of information will be sent to that email ?
>>>>
>>>> Not sure... Anything that gets sent to Gerrit users, I guess? If a
>>>> Gerrit account does not require an email address, that would probably be
>>>> easier. Email addresses need to be unique, I think, otherwise it'll be a
>>>> weird auto-competion in the Gerrit UI.
>>>
>>> But so, who need to read that, and is there some "reset password" system
>>> using this email ? ( in which case, we shouldn't maybe push that to a
>>> public ml)
>>
>> AFAIK emails are only sent if you register for alerts. Bot accounts
>> are created under  the 'Non interactive users' group by gerrit admins.
>> I believe you wouldn't be able to login to the account via the
>> web-interface, so there is no way to register for alerts. I'll check
>> if I can create  the account without providing an email address.
>
> Huh, our old version of gerrit doesn't have a non-interactive users group :-(
> Another reason to update gerrit.
>

I created the account and added it to the 'Event Streaming Users'
group. This should allow the ci.centos.org Jenkins to stream events
from review.gluster.org. It did not require an email address to be
created. We can add an address later if required.
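For context on what the 'Event Streaming Users' membership enables: the CI side opens an SSH connection (roughly `ssh -p 29418 centos-ci@review.gluster.org gerrit stream-events`) and receives one JSON event per line. A minimal sketch of consuming such a line — the sample payload below is made up for illustration, not a captured event:

```python
import json

# One line as emitted by `gerrit stream-events` (sample values only).
line = json.dumps({
    "type": "patchset-created",
    "change": {"project": "glusterfs", "branch": "master",
               "url": "http://review.gluster.org/13000"},
    "patchSet": {"number": "1", "revision": "deadbeef"},
})

event = json.loads(line)
if event["type"] == "patchset-created":
    change = event["change"]
    # A Jenkins trigger would schedule a build for this change here.
    print("trigger build for", change["project"], change["branch"], change["url"])
```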

>>
>>> --
>>> Michael Scherer
>>> Sysadmin, Community Infrastructure and Platform, OSAS
>>>
>>>

Re: [Gluster-infra] Switching from salt to ansible ?

2016-01-04 Thread Kaushal M
On Mon, Jan 4, 2016 at 9:57 PM, Michael Scherer  wrote:
> Hi,
>
> so over the holidays, I was pondering on moving to ansible from salt.

I'd like this as well, as I'm more familiar with Ansible as well. If
switching to Ansible helps reduce the burden on you, I think we should
do the switch.

>
> So the reasons are numerous:
>
> - I am personally much more efficient with ansible than with salt
> (despite using salt for 1 year). While the 2 tools do the same basic
> stuff, there is always some difference (like user vs owner), and there
> is a totally different philosophy when it come to multiple servers
> orchestration (one example is how I do deploy freeipa on salt vs
> ansible).
>
> - Since I use ansible for most others projects, I do already have a few
> roles for most of the thing I want to deploy (freeipa, nagios, etc), and
> adding features on them and then again on salt is not very efficient.
>
> - One of the initial reasons to choose salt was a tiny margin of people
> who know it in the community, vs ansible. I suspect this is no longer
> valid. For example, the vagrant image for developper is made using
> ansible, and I know a few people in the dev community who use ansible. I
> still think no one grok salt.
>
> - Another of the reason of using salt vs ansible is that salt was much
> faster to apply configuration, especially if done on git commit. While
> that's true, I managed to make it good enough on manageiq.org using
> smart post-commit hook, and salt is getting also slower the more stuff
> we add to configuration.
>
> - salt in epel is still using a old version ( for dependencies reasons
> ). While this is working well enough, it make contributing quite
> difficult, and prevent using some new features that are needed.
>
> - having a client/server model is something that caused trouble with
> puppet when they decided to support only 1 version of ruby (around the
> ruby 2.0 time frame). And given the transition of python2 and 3 is
> happening right now in Fedora, I foresee this might be the same kind of
> issue for salt.

I think this would be a problem with Ansible as well, as it depends on
python2. I've faced some small hiccups getting Ansible to manage Arch
Linux, which uses python3 by default.

>
> - Fedora is using ansible, and while we can't reuse their code that
> much, we can at least take it and adapt.
>
> Now, there is a few downsides:
>
> - it mean rewriting most of the stuff we already have
>
> - it mean that we depend on sshd to be running. IE, if we screwed ssh
> config (happened in the past), we can't just use salt to fix it.
>
> - it also mean that we will have a ssh key to connect as root on a
> server, and i am not that confortable with the idea (provided that we
> use the regular method of using ansible, ie push based)
>
> and maybe other I didn't think of.

I've sometimes had Ansible modules fail to run on the remote host
because it lacked some Python packages. This could be a problem for
Ansible. I've not faced this with Salt yet, maybe because the remote
hosts need to have Salt installed, which pulls in all the dependencies.
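One common workaround for that bootstrapping gap — a sketch under assumptions (the hostgroup and package names are placeholders), not something the project runs — is Ansible's raw module, which needs only ssh and so can install Python before any regular module is used:

```yaml
# bootstrap.yml: get Python onto a bare host before using real modules.
- hosts: new-slaves
  gather_facts: no            # fact gathering itself needs Python
  tasks:
    - name: Install python2 and the dnf bindings (e.g. on Fedora 23)
      raw: dnf install -y python2 python2-dnf
    - name: Regular modules (and facts) work from here on
      setup:
```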

>
> Any opinions ?
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>


Re: [Gluster-infra] Request for a Gerrit account for the CentOS CI Jenkins

2016-01-04 Thread Kaushal M
On Tue, Jan 5, 2016 at 9:30 AM, Kaushal M  wrote:
> On Mon, Jan 4, 2016 at 9:58 PM, Michael Scherer  wrote:
>> On Monday 04 January 2016 at 16:33 +0100, Niels de Vos wrote:
>>> On Mon, Jan 04, 2016 at 04:16:14PM +0100, Michael Scherer wrote:
>>> > On Monday 04 January 2016 at 09:24 +0530, Kaushal M wrote:
>>> > > On Sun, Jan 3, 2016 at 6:34 PM, Niels de Vos  wrote:
>>> > > > Hi!
>>> > > >
>>> > > > I would like to request a Gerrit account for the CentOS CI Jenkins
>>> > > > (ci.centos.org) so that we can setup jobs based on Gerrit triggers.
>>> > > > Could you please create an account with the following details?
>>> > > >
>>> > > >   Username: centos-ci
>>> > > >   Email: (hmm, maybe a new list, or alias like centos...@gluster.org?)
>>> > > >   Public-key: ssh-rsa 
>>> > > > B3NzaC1yc2EDAQABAAABAQDCu9qWPHYJm+s4Nq1seE82Q+m3Ilq82Z+GkK88tgy7aNMeJ5DWeHMTo+jCu+sV68uXXAGIC0IvGeQPeTae1Rk6WYyPz+l/LQUME051i/ke0wG/1SaaWkduK6KDqnC9xi4Ud/ZDF+/StIqlSKM7/FIPzcqOV3TAEU4B82MRA3NaNJrTHLgdTqqDXZc9snEp7pBsWfTu4ojeV/Nv2dcdfcdWkT9VSDIUmhfGYODAnEQACrw0P4V17gkPMYV96jgU06HIXiSz20JnA1E6PazAtHLEPOfQR4D5csJ+3DpoBek3PUB8E4kqRAouz4qIWzum4Sc2d1AE8UTkxIWBXjf5bkGd
>>> > > >  centos...@review.gluster.org
>>> > > >   See-also: 
>>> > > > http://review.gluster.org/Documentation/access-control.html#examples_cisystem
>>> > > >
>>> > >
>>> > > I can create the bot account, but someone else (Misc) needs to create
>>> > > the mail alias/list.
>>> >
>>> > Well, what do we want to do with it ?
>>> >
>>> > And what kind of information will be sent to that email ?
>>>
>>> Not sure... Anything that gets sent to Gerrit users, I guess? If a
>>> Gerrit account does not require an email address, that would probably be
>>> easier. Email addresses need to be unique, I think, otherwise it'll be a
>>> weird auto-competion in the Gerrit UI.
>>
>> But so, who need to read that, and is there some "reset password" system
>> using this email ? ( in which case, we shouldn't maybe push that to a
>> public ml)
>
> AFAIK emails are only sent if you register for alerts. Bot accounts
> are created under  the 'Non interactive users' group by gerrit admins.
> I believe you wouldn't be able to login to the account via the
> web-interface, so there is no way to register for alerts. I'll check
> if I can create  the account without providing an email address.

Huh, our old version of gerrit doesn't have a non-interactive users group :-(
Another reason to update gerrit.

>
>> --
>> Michael Scherer
>> Sysadmin, Community Infrastructure and Platform, OSAS
>>
>>

Re: [Gluster-infra] Request for a Gerrit account for the CentOS CI Jenkins

2016-01-04 Thread Kaushal M
On Mon, Jan 4, 2016 at 9:58 PM, Michael Scherer  wrote:
> On Monday 04 January 2016 at 16:33 +0100, Niels de Vos wrote:
>> On Mon, Jan 04, 2016 at 04:16:14PM +0100, Michael Scherer wrote:
>> > On Monday 04 January 2016 at 09:24 +0530, Kaushal M wrote:
>> > > On Sun, Jan 3, 2016 at 6:34 PM, Niels de Vos  wrote:
>> > > > Hi!
>> > > >
>> > > > I would like to request a Gerrit account for the CentOS CI Jenkins
>> > > > (ci.centos.org) so that we can setup jobs based on Gerrit triggers.
>> > > > Could you please create an account with the following details?
>> > > >
>> > > >   Username: centos-ci
>> > > >   Email: (hmm, maybe a new list, or alias like centos...@gluster.org?)
>> > > >   Public-key: ssh-rsa 
>> > > > B3NzaC1yc2EDAQABAAABAQDCu9qWPHYJm+s4Nq1seE82Q+m3Ilq82Z+GkK88tgy7aNMeJ5DWeHMTo+jCu+sV68uXXAGIC0IvGeQPeTae1Rk6WYyPz+l/LQUME051i/ke0wG/1SaaWkduK6KDqnC9xi4Ud/ZDF+/StIqlSKM7/FIPzcqOV3TAEU4B82MRA3NaNJrTHLgdTqqDXZc9snEp7pBsWfTu4ojeV/Nv2dcdfcdWkT9VSDIUmhfGYODAnEQACrw0P4V17gkPMYV96jgU06HIXiSz20JnA1E6PazAtHLEPOfQR4D5csJ+3DpoBek3PUB8E4kqRAouz4qIWzum4Sc2d1AE8UTkxIWBXjf5bkGd
>> > > >  centos...@review.gluster.org
>> > > >   See-also: 
>> > > > http://review.gluster.org/Documentation/access-control.html#examples_cisystem
>> > > >
>> > >
>> > > I can create the bot account, but someone else (Misc) needs to create
>> > > the mail alias/list.
>> >
>> > Well, what do we want to do with it ?
>> >
>> > And what kind of information will be sent to that email ?
>>
>> Not sure... Anything that gets sent to Gerrit users, I guess? If a
>> Gerrit account does not require an email address, that would probably be
>> easier. Email addresses need to be unique, I think, otherwise it'll be a
>> weird auto-competion in the Gerrit UI.
>
> But so, who need to read that, and is there some "reset password" system
> using this email ? ( in which case, we shouldn't maybe push that to a
> public ml)

AFAIK emails are only sent if you register for alerts. Bot accounts
are created under the 'Non interactive users' group by Gerrit admins.
I believe you wouldn't be able to log in to the account via the web
interface, so there is no way to register for alerts. I'll check
if I can create the account without providing an email address.

> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>

Re: [Gluster-infra] Request for a Gerrit account for the CentOS CI Jenkins

2016-01-03 Thread Kaushal M
On Sun, Jan 3, 2016 at 6:34 PM, Niels de Vos  wrote:
> Hi!
>
> I would like to request a Gerrit account for the CentOS CI Jenkins
> (ci.centos.org) so that we can setup jobs based on Gerrit triggers.
> Could you please create an account with the following details?
>
>   Username: centos-ci
>   Email: (hmm, maybe a new list, or alias like centos...@gluster.org?)
>   Public-key: ssh-rsa 
> B3NzaC1yc2EDAQABAAABAQDCu9qWPHYJm+s4Nq1seE82Q+m3Ilq82Z+GkK88tgy7aNMeJ5DWeHMTo+jCu+sV68uXXAGIC0IvGeQPeTae1Rk6WYyPz+l/LQUME051i/ke0wG/1SaaWkduK6KDqnC9xi4Ud/ZDF+/StIqlSKM7/FIPzcqOV3TAEU4B82MRA3NaNJrTHLgdTqqDXZc9snEp7pBsWfTu4ojeV/Nv2dcdfcdWkT9VSDIUmhfGYODAnEQACrw0P4V17gkPMYV96jgU06HIXiSz20JnA1E6PazAtHLEPOfQR4D5csJ+3DpoBek3PUB8E4kqRAouz4qIWzum4Sc2d1AE8UTkxIWBXjf5bkGd
>  centos...@review.gluster.org
>   See-also: 
> http://review.gluster.org/Documentation/access-control.html#examples_cisystem
>

I can create the bot account, but someone else (Misc) needs to create
the mail alias/list.

> Once this account is in place, I will request the CentOS CI admins to
> add our Gerrit instance to their Jenkins.
>
> Thanks,
> Niels
>


Re: [Gluster-infra] [Bug 1291537] New: [RFE] Provide mechanism to spin up reproducible test environment for all developers

2015-12-15 Thread Kaushal M
On Tue, Dec 15, 2015 at 6:06 PM, Michael Scherer  wrote:
> On Tuesday 15 December 2015 at 05:39 +, bugzi...@redhat.com wrote:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1291537
>>
>> Bug ID: 1291537
>>Summary: [RFE] Provide mechanism to spin up reproducible test
>> environment for all developers
>>Product: GlusterFS
>>Version: mainline
>>  Component: project-infrastructure
>>   Assignee: b...@gluster.org
>>   Reporter:
>> CC: gluster-b...@redhat.com, gluster-infra@gluster.org
>>
>>
>>
>> Description of problem:
>>
>> There should be a way for developers to obtain/spin up a test environment 
>> which
>> is identical to our upstream test infrastructure.
>
> I suspect that having a different code base for the Vagrant VM and the
> regular one will just cause trouble.
>
> Why did people decide to rewrite the jenkins slave installation in
> ansible when we published the salt based stuff, and if there was any
> issues no one did ever post anything on the gluster-infra list ?
>

Michael, I'm still working on fixing the Salt state for the Jenkins slave.
I'm nearly done with it; I just need to fix ~5 tests each on el7 and f23.
I hope that this will become the standard base for any Gluster regression test.

Raghavendra had started this particular effort by himself before the
Salt states were published.
He probably chose Ansible, as it's much easier to use Ansible with Vagrant.

I'll work with him to see if we can converge on a solution.

>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>

Re: [Gluster-infra] Restarting build.gluster.org

2015-12-08 Thread Kaushal M
On Tue, Dec 8, 2015 at 8:27 PM, Kaushal M  wrote:
> The gerrit-trigger plugin is not triggering automatically again. And
> as before, I have no idea why.
>
> I've scheduled a Jenkins restart, which should happen after running
> jobs finish. Hopefully, this fixes the problem.

The restart has been cancelled. Vijay wants to take a crack at fixing
the problem.

>
> ~kaushal


[Gluster-infra] Restarting build.gluster.org

2015-12-08 Thread Kaushal M
The gerrit-trigger plugin is not triggering automatically again. And
as before, I have no idea why.

I've scheduled a Jenkins restart, which should happen after running
jobs finish. Hopefully, this fixes the problem.

~kaushal


Re: [Gluster-infra] Rackspace VM conversion : supercolony.gluster.org

2015-12-07 Thread Kaushal M
On Mon, Dec 7, 2015 at 2:11 PM, Michael Scherer  wrote:
> On Monday 07 December 2015 at 10:02 +0530, Kaushal M wrote:
>> Some others on the list also should have received this mail from
>> Rackspace. The supercolony VM, which hosts the mailing lists IIRC,
>> needs to be converted from a first generation VM to a second
>> generation vm.
>>
>> This conversion can be done automatically by soft-rebooting the VM,
>> the docs mention that this will try to preserve everything as is. Or
>> we could do a manual migration. If we haven't done the migration by
>> 5th Jan, Rackspace will do it automatically, probably using the
>> soft-rebooting method.
>>
>> I suggest that we do this ourselves, so that we can handle any
>> eventualities that arise.
>
> I can take care, but is there a time where it would be easier for people
> to do it ?
>

The mailing lists are generally quiet during the weekends. That might
be a good time.

>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>

[Gluster-infra] Rackspace VM conversion : supercolony.gluster.org

2015-12-06 Thread Kaushal M
Some others on the list also should have received this mail from
Rackspace. The supercolony VM, which hosts the mailing lists IIRC,
needs to be converted from a first-generation VM to a second-generation
VM.

This conversion can be done automatically by soft-rebooting the VM;
the docs mention that this will try to preserve everything as is. Or
we could do a manual migration. If we haven't done the migration by
5th Jan, Rackspace will do it automatically, probably using the
soft-rebooting method.

I suggest that we do this ourselves, so that we can handle any
eventualities that arise.

~kaushal


Re: [Gluster-infra] Gerrit sending errors message

2015-11-25 Thread Kaushal M
I checked [1]. None of the listed users have active reviews open. Maybe they
had watched projects set up, which is sending out these emails. I don't
know how we could remove these watches, though.

[1] 
https://review.gluster.org/#/q/reviewer:%22Dhandapani+Gopal+%253Cdhandapani%2540gluster.com%253E%22+OR+reviewer:%22Selvasundaram+%253Cselvam%2540gluster.com%253E%22++OR+reviewer:%22Pavan+T+C+%253Ctcp%2540gluster.com%253E%22+OR++reviewer:%22lakshmipathi+%253Clakshmipathi%2540gluster.com%253E%22
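
For what it's worth, on a 2.x Gerrit with the SQL-backed ReviewDb, per-user
project watches lived in the account_project_watches table, so an admin could
in principle inspect and clear them over the gsql interface. This is only a
sketch: the table/column names are from the 2.x schema (verify against the
running version first), and the account id below is a placeholder to look up:

```shell
# List watch rows, then delete those for one user (needs admin capability).
echo "SELECT account_id, project_name FROM account_project_watches;" |
  ssh -p 29418 admin@review.gluster.org gerrit gsql

echo "DELETE FROM account_project_watches WHERE account_id = 1000123;" |
  ssh -p 29418 admin@review.gluster.org gerrit gsql
```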

On Wed, Nov 25, 2015 at 5:17 PM, Michael Scherer  wrote:
> Hi,
>
> it seems there is a few people and review still assigned to various
> gluster.com email, and the RH MX is refusing mail:
> dhandapani, lakshmipathi, selvam, tcp (all @gluster.com)
>
> Should we see if they are listed as reviewer somewhere in Gerrit and
> remove them, so it doesn't fill the server mail spool ?
>
> Or there is another way to deal with that (like migrating their email to
> a new @redhat.com one, for example) ?
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>


Re: [Gluster-infra] glusterfs github is out of sync.

2015-11-25 Thread Kaushal M
On Wed, Nov 25, 2015 at 4:55 PM, Michael Scherer  wrote:
> Le mercredi 25 novembre 2015 à 11:52 +0100, Michael Scherer a écrit :
>> Le mercredi 25 novembre 2015 à 15:47 +0530, Kaushal M a écrit :
>> > On Wed, Nov 25, 2015 at 3:42 PM, Michael Scherer  
>> > wrote:
>> > > Le mercredi 25 novembre 2015 à 11:05 +0530, Kaushal M a écrit :
>> > >> On Wed, Nov 25, 2015 at 12:22 AM, Michael Scherer  
>> > >> wrote:
>> > >> > Le lundi 23 novembre 2015 à 11:56 +0100, Michael Scherer a écrit :
>> > >> >> Le jeudi 29 octobre 2015 à 11:24 +, Niels de Vos a écrit :
>> > >> >> > Humble Chirammal  writes:
>> > >> >> > >
>> > >> >> > >
>> > >> >> > > Hi,
>> > >> >> > > FYI,
>> > >> >> > > It seems that, https://github.com/gluster/glusterfs code
>> > >> >> > > repo has not synced from gerrit for some time and the last 
>> > >> >> > > commit (
>> > >> >> > > 51632e1eec3ff88d19867dc8d266068dd7db432a)
>> > >> >> > > recorded on  Sep 10 .
>> > >> >> > > Does any one know, why the gerrit plugin is broken? --
>> > >> >> >
>> > >> >> > I've now manually synced the glusterfs repo on GitHub with the 
>> > >> >> > current state
>> > >> >> > from Gerrit. If the automatic sync takes time to get 
>> > >> >> > configured/setup again, we
>> > >> >> > might need to do this on regular basis? Anyway, ping me if it 
>> > >> >> > needs to be done
>> > >> >> > again (not for each commit though!).
>> > >> >>
>> > >> >> One solution is to just add a ssh key on the repo, it can be found in
>> > >> >> the settings.
>> > >> >>
>> > >> >> That's how I plan to do for the salt repo, but github is being 
>> > >> >> annoying
>> > >> >> a requires 1 different ssh key per repo, and I do need to take that 
>> > >> >> in
>> > >> >> account.
>> > >> >> (the joy of having a SaaS service...)
>> > >> >
>> > >> > So I was looking at that, and just want to be sure I do publish the 
>> > >> > good
>> > >> > repository.
>> > >> >
>> > >> > So we are looking at pushing the git repo from dev.gluster.org ( ie,
>> > >> > gerrit ), in /review/review.gluster.org/git
>> > >> >
>> > >> > And there is all of them:
>> > >> >
>> > >> > All-Projects.git
>> > >> > automation.git
>> > >> > glusterfs-afrv1.git
>> > >> > glusterfs.git
>> > >> > glusterfs-hadoop.old.git
>> > >> > glusterfs-nsr.git
>> > >> > glusterfs-quota.git
>> > >> > glusterfs-snapshot.git
>> > >> > glusterfs-specs.git
>> > >> > gluster-nagios-addons.git
>> > >> > gluster-nagios-common.git
>> > >> > gluster-nagios.git
>> > >> > gluster-operations-guide.git
>> > >> > gluster-swift.git
>> > >> > gmc.git
>> > >> > libgfapi-python.git
>> > >> > nagios-gluster-addons.git
>> > >> > nagios-server-addons.git
>> > >> > qa.git
>> > >> > regression.git
>> > >> > swiftkrbauth.git
>> > >> >
>> > >> > Should all be synced ?
>> > >>
>> > >> Most of these are inactive. We'll need to check with the owners to
>> > >> find out if they need to be replicated.
>> > >> The most important and active repos here would be glusterfs.git and
>> > >> glusterfs-specs.git. These need to be replicated.
>> > >
>> > > Ok. And I assume I can sync the anonymous clone, and use git push -f ?
>> > >
>> > > I propose to have a separate small VM for that, so the ssh keys to push
>> > > on github are separate from any complex application like github. We can
>> > > later move it back on a regular server ( or just use some magic
>> > > kubernetes/atomic system 

Re: [Gluster-infra] glusterfs github is out of sync.

2015-11-25 Thread Kaushal M
On Wed, Nov 25, 2015 at 4:44 PM, Michael Scherer  wrote:
> Le mercredi 25 novembre 2015 à 11:52 +0100, Michael Scherer a écrit :
>> Le mercredi 25 novembre 2015 à 15:47 +0530, Kaushal M a écrit :
>> > On Wed, Nov 25, 2015 at 3:42 PM, Michael Scherer  
>> > wrote:
>> > > Le mercredi 25 novembre 2015 à 11:05 +0530, Kaushal M a écrit :
>> > >> On Wed, Nov 25, 2015 at 12:22 AM, Michael Scherer  
>> > >> wrote:
>> > >> > Le lundi 23 novembre 2015 à 11:56 +0100, Michael Scherer a écrit :
>> > >> >> Le jeudi 29 octobre 2015 à 11:24 +, Niels de Vos a écrit :
>> > >> >> > Humble Chirammal  writes:
>> > >> >> > >
>> > >> >> > >
>> > >> >> > > Hi,
>> > >> >> > > FYI,
>> > >> >> > > It seems that, https://github.com/gluster/glusterfs code
>> > >> >> > > repo has not synced from gerrit for some time and the last 
>> > >> >> > > commit (
>> > >> >> > > 51632e1eec3ff88d19867dc8d266068dd7db432a)
>> > >> >> > > recorded on  Sep 10 .
>> > >> >> > > Does any one know, why the gerrit plugin is broken? --
>> > >> >> >
>> > >> >> > I've now manually synced the glusterfs repo on GitHub with the 
>> > >> >> > current state
>> > >> >> > from Gerrit. If the automatic sync takes time to get 
>> > >> >> > configured/setup again, we
>> > >> >> > might need to do this on regular basis? Anyway, ping me if it 
>> > >> >> > needs to be done
>> > >> >> > again (not for each commit though!).
>> > >> >>
>> > >> >> One solution is to just add a ssh key on the repo, it can be found in
>> > >> >> the settings.
>> > >> >>
>> > >> >> That's how I plan to do for the salt repo, but github is being 
>> > >> >> annoying
>> > >> >> a requires 1 different ssh key per repo, and I do need to take that 
>> > >> >> in
>> > >> >> account.
>> > >> >> (the joy of having a SaaS service...)
>> > >> >
>> > >> > So I was looking at that, and just want to be sure I do publish the 
>> > >> > good
>> > >> > repository.
>> > >> >
>> > >> > So we are looking at pushing the git repo from dev.gluster.org ( ie,
>> > >> > gerrit ), in /review/review.gluster.org/git
>> > >> >
>> > >> > And there is all of them:
>> > >> >
>> > >> > All-Projects.git
>> > >> > automation.git
>> > >> > glusterfs-afrv1.git
>> > >> > glusterfs.git
>> > >> > glusterfs-hadoop.old.git
>> > >> > glusterfs-nsr.git
>> > >> > glusterfs-quota.git
>> > >> > glusterfs-snapshot.git
>> > >> > glusterfs-specs.git
>> > >> > gluster-nagios-addons.git
>> > >> > gluster-nagios-common.git
>> > >> > gluster-nagios.git
>> > >> > gluster-operations-guide.git
>> > >> > gluster-swift.git
>> > >> > gmc.git
>> > >> > libgfapi-python.git
>> > >> > nagios-gluster-addons.git
>> > >> > nagios-server-addons.git
>> > >> > qa.git
>> > >> > regression.git
>> > >> > swiftkrbauth.git
>> > >> >
>> > >> > Should all be synced ?
>> > >>
>> > >> Most of these are inactive. We'll need to check with the owners to
>> > >> find out if they need to be replicated.
>> > >> The most important and active repos here would be glusterfs.git and
>> > >> glusterfs-specs.git. These need to be replicated.
>> > >
>> > > Ok. And I assume I can sync the anonymous clone, and use git push -f ?
>> > >
>> > > I propose to have a separate small VM for that, so the ssh keys to push
>> > > on github are separate from any complex application like github. We can
>> > > later move it back on a regular server ( or just use some magic
>> > > kubernetes/atomic system ).
>> >
>> > Gerrit itself is capable of pushing changes to mirrors and this
>> > capability was being used till the ssh-keys were removed. I don't know
>> > if it's possible to use a different ssh-key for each mirror in gerrit,
>> > but if it is then we should be able to do this easily. No need to set
>> > up a different VM for that.
>>
>> Ok, seems faster. I was looking at a generic solution for the git of
>> salt too, but maybe I do over engineer stuff :)
>>
>> Let me see on gerrit side, and then I will look at a work around if it
>> doesn't work.
>
> Just to be sure, we can remove the sync to gitorious ?

We should be ok. Amye announced that gitorious would be decommissioned
at the end of November, and I don't believe anyone is using the
gitorious mirror.

>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>

Re: [Gluster-infra] glusterfs github is out of sync.

2015-11-25 Thread Kaushal M
On Wed, Nov 25, 2015 at 3:42 PM, Michael Scherer  wrote:
> Le mercredi 25 novembre 2015 à 11:05 +0530, Kaushal M a écrit :
>> On Wed, Nov 25, 2015 at 12:22 AM, Michael Scherer  
>> wrote:
>> > Le lundi 23 novembre 2015 à 11:56 +0100, Michael Scherer a écrit :
>> >> Le jeudi 29 octobre 2015 à 11:24 +, Niels de Vos a écrit :
>> >> > Humble Chirammal  writes:
>> >> > >
>> >> > >
>> >> > > Hi,
>> >> > > FYI,
>> >> > > It seems that, https://github.com/gluster/glusterfs code
>> >> > > repo has not synced from gerrit for some time and the last commit 
>> >> > > (
>> >> > > 51632e1eec3ff88d19867dc8d266068dd7db432a)
>> >> > > recorded on  Sep 10 .
>> >> > > Does any one know, why the gerrit plugin is broken? --
>> >> >
>> >> > I've now manually synced the glusterfs repo on GitHub with the current 
>> >> > state
>> >> > from Gerrit. If the automatic sync takes time to get configured/setup 
>> >> > again, we
>> >> > might need to do this on regular basis? Anyway, ping me if it needs to 
>> >> > be done
>> >> > again (not for each commit though!).
>> >>
>> >> One solution is to just add a ssh key on the repo, it can be found in
>> >> the settings.
>> >>
>> >> That's how I plan to do for the salt repo, but github is being annoying
>> >> a requires 1 different ssh key per repo, and I do need to take that in
>> >> account.
>> >> (the joy of having a SaaS service...)
>> >
>> > So I was looking at that, and just want to be sure I do publish the good
>> > repository.
>> >
>> > So we are looking at pushing the git repo from dev.gluster.org ( ie,
>> > gerrit ), in /review/review.gluster.org/git
>> >
>> > And there is all of them:
>> >
>> > All-Projects.git
>> > automation.git
>> > glusterfs-afrv1.git
>> > glusterfs.git
>> > glusterfs-hadoop.old.git
>> > glusterfs-nsr.git
>> > glusterfs-quota.git
>> > glusterfs-snapshot.git
>> > glusterfs-specs.git
>> > gluster-nagios-addons.git
>> > gluster-nagios-common.git
>> > gluster-nagios.git
>> > gluster-operations-guide.git
>> > gluster-swift.git
>> > gmc.git
>> > libgfapi-python.git
>> > nagios-gluster-addons.git
>> > nagios-server-addons.git
>> > qa.git
>> > regression.git
>> > swiftkrbauth.git
>> >
>> > Should all be synced ?
>>
>> Most of these are inactive. We'll need to check with the owners to
>> find out if they need to be replicated.
>> The most important and active repos here would be glusterfs.git and
>> glusterfs-specs.git. These need to be replicated.
>
> Ok. And I assume I can sync the anonymous clone, and use git push -f ?
>
> I propose to have a separate small VM for that, so the ssh keys to push
> on github are separate from any complex application like github. We can
> later move it back on a regular server ( or just use some magic
> kubernetes/atomic system ).

Gerrit itself is capable of pushing changes to mirrors and this
capability was being used till the ssh-keys were removed. I don't know
if it's possible to use a different ssh-key for each mirror in gerrit,
but if it is then we should be able to do this easily. No need to set
up a different VM for that.
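
If the replication plugin is brought back for this, per-mirror keys can in
principle be handled with SSH host aliases, since the plugin honours the
Gerrit system user's ~/.ssh/config. A sketch only; the remote name, host
alias, key path and project list below are illustrative, not the live config:

```ini
# etc/replication.config on the Gerrit server
[remote "github"]
  url = github-gluster:gluster/${name}.git
  push = +refs/heads/*:refs/heads/*
  push = +refs/tags/*:refs/tags/*
  projects = glusterfs
  projects = glusterfs-specs

# ~gerrit/.ssh/config -- one host alias (and key) per mirror
Host github-gluster
  Hostname github.com
  User git
  IdentityFile ~/.ssh/id_github_gluster
```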

>
> (Ideally, i would even add support for the github api to deploy the key,
> but it might be overkill for now)
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>

Re: [Gluster-infra] [gluster.org_salt_states] Want to refactor jenkins.slave

2015-11-25 Thread Kaushal M
On Tue, Nov 24, 2015 at 4:42 AM, Michael Scherer  wrote:
> Le lundi 23 novembre 2015 à 16:39 +0530, Kaushal M a écrit :
>> Hi all
>>
>> Now that we have public access to the gluster-infra salt states [1], (thank
>> you Misc) I'd like to start contributing to it. I'd like to begin by
>> refactoring the `jenkins.slave`[2] state.
>>
>> My aim with the refactoring is to pull out the general test environment
>> configuration, from the gluster infra specific configuration. The reason to
>> do this is mainly two fold,
>> 1. To make it easier for developers to contribute changes to the GlusterFS
>> testing environment.
>> 2. To make it easier to deploy local testing environments.
>
> local testing, like vagrant based for example ?

Yes. It doesn't matter whether it's Vagrant, standalone VMs, or even
possibly the developer's own system. It should be possible to easily
prepare a system to successfully run these tests.

>
>> I need some questions on which I'd like feedback.
>> 1. How do I contribute changes back? As I understand the github repos of
>> the salt-states and salt-pillar are just mirrors.
>
> For now, I think patch by mail would be ok, that's the stuff that
> requires the less setup.
>
> Ideally, I would maybe start to use gerrit, since that's what is used by
> the rest of the project, but I do not have strong opinion on what to do
> for review.
>
> Where I am a bit less happy is what happen after the review. IE, should
> we pull from github and trigger a run, or use cron, or write a webhook
> somewhere, etc, etc.
>

Okay. I'll send a mail about this once I've got something useful.

>
>> 2. I'd like to have this test environment state separate from gluster.org
>> infra states, ie., outside the current salt-states tree, in a separate
>> repo, or within the glusterfs repo. This would help developers contribute
>> to it, without hurting the infra states. Would this be okay? If yes, which
>> repo should this be put into.
>
> So, I am not against splitting stuff more on salt, the only reason I
> didn't is because I am not sure how it work in practice (salt docs are a
> bit messy, and so I need to look a bit more on examples to grasp the
> details).
>
> But rather than splitting, what about cleaning our environment (like,
> getting ride of /d/ directory and various symlink around) to have almost
> nothing gluster-infra specific ?

These are things I'd like to do as a part of this refactoring effort.
But to clean and improve, I first need to do this split.

>
> Another issue I see, we have salt config to disable ipv6
> ( 
> https://github.com/gluster/gluster.org_salt_states/blob/master/jenkins/disable_ipv6.sls
>  ) and the 2nd admin card from rackspace VM ( .
>
> Where would this kind of tweaks go for the goal of letting people run
> test locally ?
>
> On one hand, they are gluster infra specific (an not even distro
> agnostic yet), so should be out.
>
> On the other hand, if ipv6 make tests fail, then it should be disabled
> if we want people to use the test suite.
>
> Of course, the best solution is to make ipv6 working with gluster :)

We are planning to have IPv6 ready for GlusterFS-3.8. So maybe we can
get rid of this customization.

> The same go for the 2nd card, but I also think that's not easy or it
> would have been done.
>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>

Re: [Gluster-infra] glusterfs github is out of sync.

2015-11-24 Thread Kaushal M
On Wed, Nov 25, 2015 at 12:22 AM, Michael Scherer  wrote:
> Le lundi 23 novembre 2015 à 11:56 +0100, Michael Scherer a écrit :
>> Le jeudi 29 octobre 2015 à 11:24 +, Niels de Vos a écrit :
>> > Humble Chirammal  writes:
>> > >
>> > >
>> > > Hi,
>> > > FYI,
>> > > It seems that, https://github.com/gluster/glusterfs code
>> > > repo has not synced from gerrit for some time and the last commit (
>> > > 51632e1eec3ff88d19867dc8d266068dd7db432a)
>> > > recorded on  Sep 10 .
>> > > Does any one know, why the gerrit plugin is broken? --
>> >
>> > I've now manually synced the glusterfs repo on GitHub with the current 
>> > state
>> > from Gerrit. If the automatic sync takes time to get configured/setup 
>> > again, we
>> > might need to do this on regular basis? Anyway, ping me if it needs to be 
>> > done
>> > again (not for each commit though!).
>>
>> One solution is to just add a ssh key on the repo, it can be found in
>> the settings.
>>
>> That's how I plan to do for the salt repo, but github is being annoying
>> a requires 1 different ssh key per repo, and I do need to take that in
>> account.
>> (the joy of having a SaaS service...)
>
> So I was looking at that, and just want to be sure I do publish the good
> repository.
>
> So we are looking at pushing the git repo from dev.gluster.org ( ie,
> gerrit ), in /review/review.gluster.org/git
>
> And there is all of them:
>
> All-Projects.git
> automation.git
> glusterfs-afrv1.git
> glusterfs.git
> glusterfs-hadoop.old.git
> glusterfs-nsr.git
> glusterfs-quota.git
> glusterfs-snapshot.git
> glusterfs-specs.git
> gluster-nagios-addons.git
> gluster-nagios-common.git
> gluster-nagios.git
> gluster-operations-guide.git
> gluster-swift.git
> gmc.git
> libgfapi-python.git
> nagios-gluster-addons.git
> nagios-server-addons.git
> qa.git
> regression.git
> swiftkrbauth.git
>
> Should all be synced ?

Most of these are inactive. We'll need to check with the owners to
find out if they need to be replicated.
The most important and active repos here would be glusterfs.git and
glusterfs-specs.git. These need to be replicated.

> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>

Re: [Gluster-infra] glusterfs github is out of sync.

2015-11-24 Thread Kaushal M
On Tue, Nov 24, 2015 at 7:37 PM, Michael Scherer  wrote:
> Le lundi 23 novembre 2015 à 15:28 +0530, Kaushal M a écrit :
>> On Mon, Nov 23, 2015 at 3:09 PM, Michael Scherer 
>> wrote:
>> > Le lundi 23 novembre 2015 à 09:18 +0530, Kaushal M a écrit :
>> >> On Wed, Oct 28, 2015 at 4:48 PM, Michael Scherer 
>> wrote:
>> >> > Le mercredi 28 octobre 2015 à 16:24 +0530, Kaushal M a écrit :
>> >> >> On Wed, Oct 28, 2015 at 4:01 PM, Michael Scherer 
>> wrote:
>> >> >> > Le mercredi 28 octobre 2015 à 15:57 +0530, Kaushal M a écrit :
>> >> >> >> Authentication failure is causing the replication plugin pushes to
>> fail.
>> >> >> >>
>> >> >> >> The gerrit ssh key was attached to Avati's account IIRC. Maybe he's
>> >> >> >> removed the key by mistake.
>> >> >> >
>> >> >> > Nope.
>> >> >> > The key was removed due to the compromise Amye posted about on
>> >> >> > gluster-users. You can ask her details, cause she will likely be
>> much
>> >> >> > more polite than me to explain the whole topic :)
>> >> >>
>> >> >> An ssh key pair is still present on review.gluster.org though.
>> >> >
>> >> > So maybe not the same key. Which one is it ?
>> >> >
>> >> >> >
>> >> >> >> It'll be good if we add keys to an org instead of individual user's
>> >> >> >> account.  But as we can't do that, what do people feel about
>> creating
>> >> >> >> a `glusterbot` or `glusterant` account controlled by the community?
>> >> >> >
>> >> >> > I am ok with the idea but:
>> >> >> > - it need to be a account using a email the project can recover,
>> not a
>> >> >> > personal one
>> >> >>
>> >> >> I think we can setup an email alias or a mailing list on the gluster
>> >> >> mail infra, which includes the admins of the Gluster org in Github.
>> >> >
>> >> > I am fine with the idea. But now, that mean we will have a official
>> >> > group of people, and I rather not have a group "admin of github and
>> adin
>> >> > of gerrit and admin of rackspace and admin of the infra".
>> >> >
>> >> > So if we go this way, I will likely start to remove people access and
>> >> > centralize all in a ldap. (sync between github/rackspace and that list
>> >> > is a open problem).
>> >> >
>> >>
>> >> Michael, can we get this done now?
>> >>  Or do we need to wait for the
>> >> community ldap to be setup?
>> >
>> > The ldap is here ( need to create accounts, but I need to know who want
>> > one, so we get back on the start, ie a list of people).
>> >
>> > But there is no need for a ldap to decide who is in that group. You just
>> > need to decide who go in the group, and what the group do cover, which
>> > is something that can be done by mail.
>> >
>> > rest is a technical details I can fix later, but I can't (or rather I do
>> > not want) decide who is volunteer for the duty.
>> >
>>
>> We can start by creating accounts for people managing the infra. The
>> members of the Admin/owners groups in Github/Gerrit can be used as a seed
>> for this.
>>
>> >> We don't want to keep pushing to Github
>> >> manually.
>> >
>> > So count that as a incentive for giving the list of people, and how we
>> > decide who go in the group.
>>
>> Well I had to manually push this time. I wouldn't want to do it again.
>> Comparing the two lists above, I get the following common names.
>> - Avati
>> - Humble
>> - Justin
>> - Me
>> - You
>> - Vijay
>>
>> Avati and Justin are no longer heavily involved in the community, so they
>> could be dropped. I'd add Neils and Kaleb, as they've also done some(~)
>> infra management. And Amye as well.
>
> Perfect :)
>
> So I need for each:
> - preferred username
> - a email where I can send the temp password
> - if people want to have some information

Cool! I'll ask everyone to do this right away.

>
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>

Re: [Gluster-infra] glusterfs github is out of sync.

2015-11-23 Thread Kaushal M
On Mon, Nov 23, 2015 at 4:26 PM, Michael Scherer  wrote:
>
> Le jeudi 29 octobre 2015 à 11:24 +, Niels de Vos a écrit :
> > Humble Chirammal  writes:
> > >
> > >
> > > Hi,
> > > FYI,
> > > It seems that, https://github.com/gluster/glusterfs code
> > > repo has not synced from gerrit for some time and the last commit (
> > > 51632e1eec3ff88d19867dc8d266068dd7db432a)
> > > recorded on  Sep 10 .
> > > Does any one know, why the gerrit plugin is broken? --
> >
> > I've now manually synced the glusterfs repo on GitHub with the current state
> > from Gerrit. If the automatic sync takes time to get configured/setup 
> > again, we
> > might need to do this on regular basis? Anyway, ping me if it needs to be 
> > done
> > again (not for each commit though!).
>
> One solution is to just add a ssh key on the repo, it can be found in
> the settings.
>

I always thought that this was for things like automated deployment
and mirroring. I didn't know it worked for pushing to the repo.
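
For reference, the manual sync Niels did amounts to a mirror clone from
Gerrit followed by a mirror push to GitHub. A sketch (the URLs are the
project's public ones; pushing needs a key the GitHub repo accepts):

```shell
# One-shot sync of the canonical Gerrit repo to the GitHub mirror.
SRC=https://review.gluster.org/glusterfs          # canonical (Gerrit)
DST=git@github.com:gluster/glusterfs.git          # mirror (GitHub)

git clone --mirror "$SRC" glusterfs-sync.git      # bare copy of all refs
git --git-dir=glusterfs-sync.git push --mirror "$DST"  # make mirror identical
```

`push --mirror` force-updates every ref on the destination, so the mirror
ends up byte-for-byte identical to the source, including tags.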

>
> That's how I plan to do for the salt repo, but github is being annoying
> a requires 1 different ssh key per repo, and I do need to take that in
> account.
> (the joy of having a SaaS service...)
> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
>

[Gluster-infra] [gluster.org_salt_states] Want to refactor jenkins.slave

2015-11-23 Thread Kaushal M
Hi all

Now that we have public access to the gluster-infra salt states [1], (thank
you Misc) I'd like to start contributing to it. I'd like to begin by
refactoring the `jenkins.slave`[2] state.

My aim with the refactoring is to pull out the general test-environment
configuration from the gluster-infra-specific configuration. The reasons for
doing this are twofold:
1. To make it easier for developers to contribute changes to the GlusterFS
testing environment.
2. To make it easier to deploy local testing environments.

I have some questions on which I'd like feedback.
1. How do I contribute changes back? As I understand it, the github repos of
the salt-states and salt-pillar are just mirrors.
2. I'd like to have this test-environment state separate from the gluster.org
infra states, i.e., outside the current salt-states tree, in a separate
repo or within the glusterfs repo. This would help developers contribute
to it without hurting the infra states. Would this be okay? If yes, which
repo should this be put into?

~kaushal

[1]: https://github.com/gluster/gluster.org_salt_states
[2]:
https://github.com/gluster/gluster.org_salt_states/blob/master/jenkins/slave.sls
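
One possible shape for the split (state and package names here are
hypothetical, not the current tree layout): a generic state that prepares any
machine to run the regression tests, which the gluster.org jenkins.slave
state then includes and extends:

```yaml
# testenv/init.sls -- generic: anything a machine needs to run the tests
glusterfs-build-deps:
  pkg.installed:
    - pkgs:
      - gcc
      - automake
      - libtool

# jenkins/slave.sls -- gluster.org-specific bits, layered on the generic state
include:
  - testenv

jenkins:
  user.present:
    - home: /home/jenkins
```

A developer could then apply just the generic part on a local box in
masterless mode, e.g. `salt-call --local state.apply testenv`, leaving the
infra-only states to the gluster.org salt master.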

Re: [Gluster-infra] glusterfs github is out of sync.

2015-11-23 Thread Kaushal M
On Mon, Nov 23, 2015 at 3:09 PM, Michael Scherer 
wrote:
> Le lundi 23 novembre 2015 à 09:18 +0530, Kaushal M a écrit :
>> On Wed, Oct 28, 2015 at 4:48 PM, Michael Scherer 
wrote:
>> > Le mercredi 28 octobre 2015 à 16:24 +0530, Kaushal M a écrit :
>> >> On Wed, Oct 28, 2015 at 4:01 PM, Michael Scherer 
wrote:
>> >> > Le mercredi 28 octobre 2015 à 15:57 +0530, Kaushal M a écrit :
>> >> >> Authentication failure is causing the replication plugin pushes to
fail.
>> >> >>
>> >> >> The gerrit ssh key was attached to Avati's account IIRC. Maybe he's
>> >> >> removed the key by mistake.
>> >> >
>> >> > Nope.
>> >> > The key was removed due to the compromise Amye posted about on
>> >> > gluster-users. You can ask her details, cause she will likely be
much
>> >> > more polite than me to explain the whole topic :)
>> >>
>> >> An ssh key pair is still present on review.gluster.org though.
>> >
>> > So maybe not the same key. Which one is it ?
>> >
>> >> >
>> >> >> It'll be good if we add keys to an org instead of individual user's
>> >> >> account.  But as we can't do that, what do people feel about
creating
>> >> >> a `glusterbot` or `glusterant` account controlled by the community?
>> >> >
>> >> > I am ok with the idea but:
>> >> > - it need to be a account using a email the project can recover,
not a
>> >> > personal one
>> >>
>> >> I think we can setup an email alias or a mailing list on the gluster
>> >> mail infra, which includes the admins of the Gluster org in Github.
>> >
>> > I am fine with the idea. But now, that mean we will have a official
>> > group of people, and I rather not have a group "admin of github and
adin
>> > of gerrit and admin of rackspace and admin of the infra".
>> >
>> > So if we go this way, I will likely start to remove people access and
>> > centralize all in a ldap. (sync between github/rackspace and that list
>> > is a open problem).
>> >
>>
>> Michael, can we get this done now?
>>  Or do we need to wait for the
>> community ldap to be setup?
>
> The ldap is here ( need to create accounts, but I need to know who want
> one, so we get back on the start, ie a list of people).
>
> But there is no need for a ldap to decide who is in that group. You just
> need to decide who go in the group, and what the group do cover, which
> is something that can be done by mail.
>
> rest is a technical details I can fix later, but I can't (or rather I do
> not want) decide who is volunteer for the duty.
>

We can start by creating accounts for people managing the infra. The
members of the Admin/owners groups in Github/Gerrit can be used as a seed
for this.

>> We don't want to keep pushing to Github
>> manually.
>
> So count that as a incentive for giving the list of people, and how we
> decide who go in the group.

Well I had to manually push this time. I wouldn't want to do it again.
Comparing the two lists above, I get the following common names.
- Avati
- Humble
- Justin
- Me
- You
- Vijay

Avati and Justin are no longer heavily involved in the community, so they
could be dropped. I'd add Niels and Kaleb, as they've also done some
infra management. And Amye as well.

> --
> Michael Scherer
> Sysadmin, Community Infrastructure and Platform, OSAS
>
>
