[Gluster-devel] Access to Jenkins

2016-06-15 Thread Nigel Babu
Hello folks,

We have 45 people with access to the Jenkins UI. Pretty much everyone will be
losing this access in the next couple of weeks.

At the moment, I understand that access to the UI is essential for configuring
jobs. I'm going to change this in the near future. Jenkins Job Builder[1] will
talk to Jenkins to create/update jobs. Job information will be managed as a
collection of YAML files. If you want a new job, you can give us a pull request
in the correct format. The jobs will then be updated (probably via Jenkins).
You will then no longer need access to Jenkins to create or manage jobs. In
fact, editing the jobs in the UI would put them out of sync with the YAML files.
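For illustration, a JJB job definition is a YAML fragment roughly like the
following. This is a hypothetical sketch: the job name, node label, and build
steps are invented here, not the actual Gluster job layout.

```yaml
# Hypothetical Jenkins Job Builder definition.
# Job name, node label, and shell steps are illustrative only.
- job:
    name: glusterfs-smoke-example
    node: builder
    builders:
      - shell: |
          ./autogen.sh
          ./configure
          make -j2
```

With definitions like this kept in a repository, a command along the lines of
`jenkins-jobs update <config-dir>` pushes the change to Jenkins, which is
exactly why hand-edits made in the UI would drift away from the YAML.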

Before we start this process, I’d love to know if you use your Jenkins access.
If you do use it, please let me know off-list what you use it for.

[1]: http://docs.openstack.org/infra/system-config/jjb.html

--
nigelb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Gluster weekly community meeting 15-Jun-2016

2016-06-15 Thread Kaushal M
Thanks again, all the attendees of today's meeting. The meeting logs
can be found at the following links,
Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-06-15/weekly_community_meeting_15-jun-2016.2016-06-15-11.59.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-06-15/weekly_community_meeting_15-jun-2016.2016-06-15-11.59.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-06-15/weekly_community_meeting_15-jun-2016.2016-06-15-11.59.log.html

Next week's meeting will be held at the same time and in the same
place. See you all next week.

Thanks,
Kaushal

Meeting summary
---
* Rollcall  (kshlm, 11:59:48)

* GlusterFS-3.9  (kshlm, 12:06:05)
  * ACTION: ndevos will call for 3.9 release-maintainers on the
maintainers list  (kshlm, 12:15:18)

* GlusterFS-3.8  (kshlm, 12:15:40)

* GlusterFS-3.7  (kshlm, 12:32:15)
  * ACTION: kshlm to start a separate thread for maintainer feedback on
3.7.12rc  (kshlm, 12:37:08)

* GlusterFS-3.6  (kshlm, 12:37:34)
  * LINK:
http://download.gluster.org/pub/gluster/glusterfs/download-stats.html
(kkeithley, 12:47:25)
  * ACTION: Start a mailing list discussion on EOLing 3.6  (kshlm,
12:51:33)

* GlusterFS-3.5  (kshlm, 12:53:57)
  * LINK: https://en.wikipedia.org/wiki/File:Taps_on_bugle.ogg
(jdarcy, 12:55:24)

* NFS-Ganesha  (kshlm, 12:56:23)

* Samba  (kshlm, 12:59:13)

* Last week's AIs  (kshlm, 13:00:04)

* rastar to look at 3.6 builds failures on BSD  (kshlm, 13:00:49)

* Open floor  (kshlm, 13:05:03)
  * Bug self triage. When you open a bug for yourself, assign it (to
yourself) and add the keyword "Triaged"  (kshlm, 13:07:48)
  * If it's not for yourself, but you know who it does belong to, assign
it to them and add the keyword "Triaged"  (kshlm, 13:07:48)
  * If you submit a patch for a bug, set the bug state to POST.  (kshlm,
13:07:48)
  * If your patch gets committed/merged, and the committer forgets, set
the bug state to MODIFIED  (kshlm, 13:07:48)
  * LINK:
http://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-Triage/
(ndevos, 13:09:02)
  * LINK:

http://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-report-Life-Cycle/
(ndevos, 13:09:25)
  * ACTION: kkeithley and Saravanakmr, with nigelb, will set up Coverity,
clang, etc. on a public-facing machine and run it regularly  (kshlm,
13:10:37)

Meeting ended at 13:12:49 UTC.




Action Items

* ndevos will call for 3.9 release-maintainers on the maintainers list
* kshlm to start a separate thread for maintainer feedback on 3.7.12rc
* Start a mailing list discussion on EOLing 3.6
* kkeithley and Saravanakmr, with nigelb, will set up Coverity, clang, etc. on
  a public-facing machine and run it regularly




Action Items, by person
---
* kkeithley
  * kkeithley and Saravanakmr, with nigelb, will set up Coverity, clang, etc.
on a public-facing machine and run it regularly
* kshlm
  * kshlm to start a separate thread for maintainer feedback on 3.7.12rc
* ndevos
  * ndevos will call for 3.9 release-maintainers on the maintainers list
* nigelb
  * kkeithley and Saravanakmr, with nigelb, will set up Coverity, clang, etc.
on a public-facing machine and run it regularly
* **UNASSIGNED**
  * Start a mailing list discussion on EOLing 3.6




People Present (lines said)
---
* kshlm (139)
* ndevos (59)
* kkeithley (27)
* post-factum (22)
* jdarcy (13)
* rastar_ (8)
* ira_ (8)
* glusterbot (6)
* atinm (5)
* zodbot (3)
* jiffin (2)
* nigelb (2)
* skoduri (1)
* kotreshhr (1)
* aravindavk (1)
* samikshan (1)


Re: [Gluster-devel] Glusterfs 3.7.11 with LibGFApi in Qemu on Ubuntu Xenial does not work

2016-06-15 Thread André Bauer
Hi Prasanna,

On 15.06.2016 at 12:09, Prasanna Kalever wrote:

>
> I think you have missed enabling insecure binds, which is needed for
> libgfapi access. Please try again after following the steps below:
>
> => edit /etc/glusterfs/glusterd.vol by adding "option
> rpc-auth-allow-insecure on" #(on all nodes)
> => gluster vol set $volume server.allow-insecure on
> => systemctl restart glusterd #(on all nodes)
>

No, that's not the case. All services are up and running correctly,
allow-insecure is set, and the volume works fine with libgfapi access
from my Ubuntu 14.04 KVM/Qemu servers.

Just the server that was updated to Ubuntu 16.04 can't access the
volume via libgfapi anymore (a fuse mount still works).

The GlusterFS logs are empty when trying to access the GlusterFS nodes, so
I think the requests are blocked on the client side.

Maybe apparmor again?

Regards
André

>
> --
> Prasanna
>
>>
>> I don't see anything in the apparmor logs when setting everything to
>> complain or audit.
>>
>> It also seems the GlusterFS servers don't get any requests, because the
>> brick logs don't complain about anything.
>>
>> Any hints?
>>
>>
>> --
>> Regards
>> André Bauer
>>
>


-- 
Kind regards
André Bauer

MAGIX Software GmbH
André Bauer
Administrator
August-Bebel-Straße 48
01219 Dresden
GERMANY

tel.: 0351 41884875
e-mail: aba...@magix.net
aba...@magix.net 
www.magix.com 

Geschäftsführer | Managing Directors: Dr. Arnd Schröder, Klaus Schmidt
Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205

--
The information in this email is intended only for the addressee named
above. Access to this email by anyone else is unauthorized. If you are
not the intended recipient of this message any disclosure, copying,
distribution or any action taken in reliance on it is prohibited and
may be unlawful. MAGIX does not warrant that any attachments are free
from viruses or other defects and accepts no liability for any losses
resulting from infected email transmissions. Please note that any
views expressed in this email may be those of the originator and do
not necessarily represent the agenda of the company.
--


[Gluster-devel] Fwd: [Gluster-infra] Switching to Bugzilla for infra failures

2016-06-15 Thread Kaushal M
Sharing this with -devel as well, where more developers are.


-- Forwarded message --
From: Nigel Babu 
Date: Wed, Jun 15, 2016 at 4:56 PM
Subject: [Gluster-infra] Switching to Bugzilla for infra failures
To: gluster-in...@gluster.org


Hello,

We've had a bugzilla component for a long time and it's been unused. With
effect from today, we've decided to resurrect it. Please file a bug in the
glusterfs -> Project-infrastructure component for infra issues.

We're doing this so that

* We can track bugs to their logical conclusion and things don't get missed in
  the noise.
* We can have a quick post-mortem in the bug about the issue.
* We can also get an idea of what sort of requests take up most of our time and
  how to reduce these issues.

File a bug: 
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS=project-infrastructure
List of open bugs:
https://bugzilla.redhat.com/buglist.cgi?cmdtype=dorem_id=5294544=gluster-infra=run_id=396097


Thanks,
nigelb and misc
___
Gluster-infra mailing list
gluster-in...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-infra


Re: [Gluster-devel] Glusterfs 3.7.11 with LibGFApi in Qemu on Ubuntu Xenial does not work

2016-06-15 Thread Prasanna Kalever
On Wed, Jun 15, 2016 at 2:41 PM, André Bauer  wrote:
>
> Hi Lists,
>
> I just updated one of my Ubuntu KVM servers from 14.04 (Trusty) to 16.04
> (Xenial).
>
> I use the Glusterfs packages from the official Ubuntu PPA and my own
> Qemu packages (
> https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs-3.7 )
> which have libgfapi enabled.
>
> On Ubuntu 14.04 everything is working fine. I only had to add the
> following lines to the Apparmor config in
> /etc/apparmor.d/abstractions/libvirt-qemu to get it working:
>
> # for glusterfs
> /proc/sys/net/ipv4/ip_local_reserved_ports r,
> /usr/lib/@{multiarch}/glusterfs/**.so mr,
> /tmp/** rw,
>
> In Ubuntu 16.04 I'm not able to start my VMs via libvirt or to
> create new images via qemu-img using libgfapi.
>
> Mounting the volume via fuse does work without problems.
>
> Examples:
>
> qemu-img create gluster://storage.mydomain/vmimages/kvm2test.img 1G
> Formatting 'gluster://storage.intdmz.h1.mdd/vmimages/kvm2test.img',
> fmt=raw size=1073741824
> [2016-06-15 08:15:26.710665] E [MSGID: 108006]
> [afr-common.c:4046:afr_notify] 0-vmimages-replicate-0: All subvolumes
> are down. Going offline until atleast one of them comes back up.
> [2016-06-15 08:15:26.710736] E [MSGID: 108006]
> [afr-common.c:4046:afr_notify] 0-vmimages-replicate-1: All subvolumes
> are down. Going offline until atleast one of them comes back up.
>
> Libvirtd log:
>
> [2016-06-13 16:53:57.055113] E [MSGID: 104007]
> [glfs-mgmt.c:637:glfs_mgmt_getspec_cbk] 0-glfs-mgmt: failed to fetch
> volume file (key:vmimages) [Invalid argument]
> [2016-06-13 16:53:57.055196] E [MSGID: 104024]
> [glfs-mgmt.c:738:mgmt_rpc_notify] 0-glfs-mgmt: failed to connect with
> remote-host: storage.intdmz.h1.mdd (Permission denied) [Permission denied]
> 2016-06-13T16:53:58.049945Z qemu-system-x86_64: -drive
> file=gluster://storage.intdmz.h1.mdd/vmimages/checkbox.qcow2,format=qcow2,if=none,id=drive-virtio-disk0,cache=writeback:
> Gluster connection failed for server=storage.intdmz.h1.mdd port=0
> volume=vmimages image=checkbox.qcow2 transport=tcp: Permission denied

I think you have missed enabling insecure binds, which is needed for
libgfapi access. Please try again after following the steps below:

=> edit /etc/glusterfs/glusterd.vol by adding "option
rpc-auth-allow-insecure on" #(on all nodes)
=> gluster vol set $volume server.allow-insecure on
=> systemctl restart glusterd #(on all nodes)
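For the glusterd.vol edit in the first step, a scripted sketch could look
like the following. Note this deliberately works on a throwaway demo file
with illustrative contents, not the live /etc/glusterfs/glusterd.vol.

```shell
#!/bin/sh
# Sketch only: operate on a demo copy of glusterd.vol, not the live file.
# The demo contents below are illustrative, not taken from a real node.
VOLFILE=$(mktemp)
printf 'volume management\n    type mgmt/glusterd\nend-volume\n' > "$VOLFILE"

# Idempotently add the option just before the closing "end-volume" line.
if ! grep -q 'option rpc-auth-allow-insecure on' "$VOLFILE"; then
    sed -i 's/^end-volume$/    option rpc-auth-allow-insecure on\nend-volume/' "$VOLFILE"
fi

cat "$VOLFILE"
```

Because the edit is guarded by the grep, re-running it on an already-patched
file leaves the volfile unchanged, so it is safe to repeat on all nodes.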

In case this does not work, please help us with the output of the
commands below, along with the logfiles:
# gluster vol info
# gluster vol status
# gluster peer status

--
Prasanna

>
> I don't see anything in the apparmor logs when setting everything to
> complain or audit.
>
> It also seems the GlusterFS servers don't get any requests, because the
> brick logs don't complain about anything.
>
> Any hints?
>
>
> --
> Regards
> André Bauer
>

[Gluster-devel] Glusterfs 3.7.11 with LibGFApi in Qemu on Ubuntu Xenial does not work

2016-06-15 Thread André Bauer
Hi Lists,

I just updated one of my Ubuntu KVM servers from 14.04 (Trusty) to 16.04
(Xenial).

I use the Glusterfs packages from the official Ubuntu PPA and my own
Qemu packages (
https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs-3.7 )
which have libgfapi enabled.

On Ubuntu 14.04 everything is working fine. I only had to add the
following lines to the Apparmor config in
/etc/apparmor.d/abstractions/libvirt-qemu to get it working:

# for glusterfs
/proc/sys/net/ipv4/ip_local_reserved_ports r,
/usr/lib/@{multiarch}/glusterfs/**.so mr,
/tmp/** rw,
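On 16.04 it may be worth double-checking that these lines actually made it
into the abstraction after the upgrade (release upgrades can replace
apparmor files), and reloading the profile afterwards, e.g. with
apparmor_parser -r. A sketch of such a check, run here against a stand-in
temp file rather than the live /etc/apparmor.d tree:

```shell
#!/bin/sh
# Sketch: check/append the glusterfs lines on a stand-in file, not the
# live /etc/apparmor.d/abstractions/libvirt-qemu.
ABSFILE=$(mktemp)

# Append the glusterfs rules only if a marker line is missing.
if ! grep -q 'ip_local_reserved_ports' "$ABSFILE"; then
    cat >> "$ABSFILE" <<'EOF'
  # for glusterfs
  /proc/sys/net/ipv4/ip_local_reserved_ports r,
  /usr/lib/@{multiarch}/glusterfs/**.so mr,
  /tmp/** rw,
EOF
fi

# Two lines mention glusterfs: the comment and the .so rule.
grep -c 'glusterfs' "$ABSFILE"
```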

In Ubuntu 16.04 I'm not able to start my VMs via libvirt or to
create new images via qemu-img using libgfapi.

Mounting the volume via fuse does work without problems.

Examples:

qemu-img create gluster://storage.mydomain/vmimages/kvm2test.img 1G
Formatting 'gluster://storage.intdmz.h1.mdd/vmimages/kvm2test.img',
fmt=raw size=1073741824
[2016-06-15 08:15:26.710665] E [MSGID: 108006]
[afr-common.c:4046:afr_notify] 0-vmimages-replicate-0: All subvolumes
are down. Going offline until atleast one of them comes back up.
[2016-06-15 08:15:26.710736] E [MSGID: 108006]
[afr-common.c:4046:afr_notify] 0-vmimages-replicate-1: All subvolumes
are down. Going offline until atleast one of them comes back up.

Libvirtd log:

[2016-06-13 16:53:57.055113] E [MSGID: 104007]
[glfs-mgmt.c:637:glfs_mgmt_getspec_cbk] 0-glfs-mgmt: failed to fetch
volume file (key:vmimages) [Invalid argument]
[2016-06-13 16:53:57.055196] E [MSGID: 104024]
[glfs-mgmt.c:738:mgmt_rpc_notify] 0-glfs-mgmt: failed to connect with
remote-host: storage.intdmz.h1.mdd (Permission denied) [Permission denied]
2016-06-13T16:53:58.049945Z qemu-system-x86_64: -drive
file=gluster://storage.intdmz.h1.mdd/vmimages/checkbox.qcow2,format=qcow2,if=none,id=drive-virtio-disk0,cache=writeback:
Gluster connection failed for server=storage.intdmz.h1.mdd port=0
volume=vmimages image=checkbox.qcow2 transport=tcp: Permission denied

I don't see anything in the apparmor logs when setting everything to
complain or audit.

It also seems the GlusterFS servers don't get any requests, because the
brick logs don't complain about anything.

Any hints?


-- 
Regards
André Bauer
