Re: [Gluster-devel] Building GD2 from glusterd2-v5.0-0-vendor.tar.xz fails on CentOS-7

2018-10-30 Thread Kaleb S. KEITHLEY
On 10/30/18 5:10 AM, Niels de Vos wrote:

> 
> Thanks! But even on x86_64 there only seems to be
> golang-1.8.3-1.2.1.el7.x86_64 in the buildroot. I can not find
> golang-1.9.4, can you check where it comes from? The build details are
> in https://cbs.centos.org/koji/taskinfo?taskID=595140 and you can check
> the root.log for the packages+versions that get installed.

It's because golang-1.8 was tagged into storage7-gluster-common-candidate:

% cbs list-tagged storage7-gluster-common-candidate

Build                      Tag                                  Built by
-------------------------  -----------------------------------  --------
...
golang-1.8.3-1.2.1.el7     storage7-gluster-common-candidate    tdawson
...

Not sure why it was ever tagged into storage7-gluster-common-candidate.
I untagged it. gd2 builds should get golang-1.9 now.
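
For the record, the untag step looks roughly like this ('cbs' is the
CentOS wrapper around the koji CLI; exact invocation from memory, so it
may differ slightly):

  % cbs untag-build storage7-gluster-common-candidate golang-1.8.3-1.2.1.el7
  % cbs list-tagged storage7-gluster-common-candidate    # verify golang is gone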

I tried to resubmit the task for your build but only the owner or an
admin can do that.

Thanks to arrfab for helping me untangle the tags.

--

Kaleb

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Who is the package maintainer for GlusterFS 4.1?

2018-10-29 Thread Kaleb S. KEITHLEY
On 10/29/18 6:31 AM, mabi wrote:
> Hello,
> 
> I would like to know how I can contact the package maintainer for the 
> GlusterFS 4.1.x packages?
> 
> I have noticed that Debian 8 (jessie) is missing here:
> 
> https://download.gluster.org/pub/gluster/glusterfs/4.1/4.1.5/Debian/
> 
> Thank you very much in advance.

Community GlusterFS packages are built by multiple volunteers.

GlusterFS 4.0, 4.1, and 5.0 packages aren't missing; they have never
been built for Debian 8 jessie. One reason is that jessie doesn't have a
new enough golang compiler (even in backports) to build glusterd2.

If you want to build packages without glusterd2 for jessie the packaging
files are at https://github.com/gluster/glusterfs-debian.

The distributions that packages are built for are listed at
https://docs.gluster.org/en/latest/Install-Guide/Community_Packages/
History for this page is in github at
https://github.com/gluster/glusterdocs/blob/master/docs/Install-Guide/Community_Packages.md

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Fwd: Compiling GLFS Source Tree under SLES15

2018-10-16 Thread Kaleb S. KEITHLEY




 Forwarded Message 
Subject: Re: Compiling GLFS Source Tree under SLES15
Date: Tue, 16 Oct 2018 09:48:21 -0400
From: Kaleb S. KEITHLEY 
To: David Spisla 

On 10/16/18 9:30 AM, David Spisla wrote:
> Hello Kaleb,
> I've heard that you are responsible for building Gluster RPMs for SUSE.
> At the moment I am working with SLES15 and do some xlator experiments.
> Therefore I am compiling the Source Tree to get all dependencies to
> compile some xlators manually.
> 
> Is there any recommendation from the Gluster community to compile
> options under SUSE?
> I use at the moment:
> 
> ./autogen.sh
> ./configure --without-libtirpc
> make


The rpm .spec I use for sles-15 is
https://github.com/gluster/glusterfs-suse/blob/sles15-glusterfs-4.1/glusterfs.spec

which is similar to what you are using.

IMO you should not be using --without-libtirpc on newer distribution
releases like sles-15.
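
A minimal sketch of what I'd suggest instead, assuming libtirpc-devel
(and the rest of the build dependencies) are installed:

  ./autogen.sh
  ./configure          # let configure detect libtirpc on its own
  make -j$(nproc)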

> 
> with gcc version 7.3.1. There is a file attached with the output of the
> build process. In the autogen.sh
> part there is a warning concerning 'subdir-objects' and in the make part a
> lot of 'warnings'. Do you think this
> could be a problem? 

No, those warnings are benign AFAICT.

--

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Python3 build process

2018-09-28 Thread Kaleb S. KEITHLEY
On 9/28/18 6:12 AM, Niels de Vos wrote:
> Is it really needed to have this as an option? Instead of an option in
> configure.ac, can it not be a post-install task in a Makefile.am? 

I don't fully understand how .pyc and .pyo files are used, or how the
new-in-python3 __pycache__ directories are used, but they seem to be
created during the build and/or the install.

What does it mean to build+install the .pyc, .pyo, and __pycache__ files
and then go in after and whack the shebangs of the .py files?
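
For illustration, this is the behavior I mean (foo.py is just a
placeholder module):

  $ echo pass > foo.py
  $ python2 -c 'import foo'   # writes bytecode to ./foo.pyc
  $ python3 -c 'import foo'   # writes bytecode to ./__pycache__/foo.cpython-NN.pyc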

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Python3 build process

2018-09-28 Thread Kaleb S. KEITHLEY
On 9/28/18 9:38 AM, Niels de Vos wrote:
>>>
>>> Tests should just run with "$PYTHON run-the-test.py" instead of
>>> ./run-the-test.py with a #!/usr/bin/python shebang. The testing
>>> framework can also find out what version of python is available.
>>
>> If we back up a bit here, if all shebangs are cleared, then we do not
>> need anything. That is not the situation at the moment, and neither do I
>> know if that state can be reached.
> 
> Not all shebangs need to go away, only the ones for the test-cases. A
> post-install hook can modify the shebangs from python3 to python2
> depending on what ./configure detected.

None of the .py files in .../tests/... have shebangs (master and
release-5 branch).

They are all invoked with ...$PYTHON some-test.py ...

All but one of them were always invoked that way. socket-as-fifo.py was
the only one that was not, and that has been fixed.

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Python3 build process

2018-09-27 Thread Kaleb S. KEITHLEY
On 9/27/18 8:57 AM, Kaleb S. KEITHLEY wrote:
> On 9/27/18 8:40 AM, Shyam Ranganathan wrote:
>> On 09/27/2018 08:07 AM, Kaleb S. KEITHLEY wrote:
>>>> The thought is,
>>>> - Add a configure option "--enable-py-version-correction" to configure,
>>>> that is disabled by default
>>> "correction" implies there's something that's incorrect. How about
>>> "conversion" or perhaps just --enable-python2
>>>
>>
>> I would not like to go with --enable-python2 as that implies it is a
>> conscious choice with the understanding that py2 is on the box. Given
>> the current ability to detect and hence correct the python shebangs, I
>> would think we should retain it as a more detect and modify the shebangs
>> option name. (I am looking at this more as an option that does the right
>> thing implicitly than someone/tool using this checking explicitly, which
>> can mean different things to different people, if that makes sense)
>>
>> Now "correction" seems like an overkill, maybe "conversion"?
>>
> 
> I guess I don't really care what the option is called.
> 
> The only conversion is _to_ python2 and the only place it ever _needs_
> to be done is RHEL < 8 and eventually CentOS < 8.

That should read "... RHEL < 8 and CentOS (eventually CentOS < 8)."

> 
> (You could argue that python3 -> python3 is a conversion.) Are you
> saying that if you do `./configure --enable-py-version-conversion` on,
> e.g. a Fedora < 30 box, that the shebangs will still be
> #!/usr/bin/python3 because python3 is on the box? I think that would be
> surprising.
> 
> I can imagine that someone might want to convert them on Fedora < 30 and
> Debian/Ubuntu/etc because they want to _test_ that the python bits still
> work with python2. (Or they actually want to use python2 over python3
> for some reason.)
> 
> --
> 
> Kaleb
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
> 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Python3 build process

2018-09-27 Thread Kaleb S. KEITHLEY
On 9/27/18 8:40 AM, Shyam Ranganathan wrote:
> On 09/27/2018 08:07 AM, Kaleb S. KEITHLEY wrote:
>>> The thought is,
>>> - Add a configure option "--enable-py-version-correction" to configure,
>>> that is disabled by default
>> "correction" implies there's something that's incorrect. How about
>> "conversion" or perhaps just --enable-python2
>>
> 
> I would not like to go with --enable-python2 as that implies it is a
> conscious choice with the understanding that py2 is on the box. Given
> the current ability to detect and hence correct the python shebangs, I
> would think we should retain it as a more detect and modify the shebangs
> option name. (I am looking at this more as an option that does the right
> thing implicitly than someone/tool using this checking explicitly, which
> can mean different things to different people, if that makes sense)
> 
> Now "correction" seems like an overkill, maybe "conversion"?
> 

I guess I don't really care what the option is called.

The only conversion is _to_ python2 and the only place it ever _needs_
to be done is RHEL < 8 and eventually CentOS < 8.

(You could argue that python3 -> python3 is a conversion.) Are you
saying that if you do `./configure --enable-py-version-conversion` on,
e.g. a Fedora < 30 box, that the shebangs will still be
#!/usr/bin/python3 because python3 is on the box? I think that would be
surprising.

I can imagine that someone might want to convert them on Fedora < 30 and
Debian/Ubuntu/etc because they want to _test_ that the python bits still
work with python2. (Or they actually want to use python2 over python3
for some reason.)

--

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Python3 build process

2018-09-27 Thread Kaleb S. KEITHLEY
On 9/26/18 8:28 PM, Shyam Ranganathan wrote:
> Hi,
> 
> With the introduction of default python 3 shebangs and the change in
> configure.ac to correct these to py2 if the build is being attempted on
> a machine that does not have py3, there are a couple of issues
> uncovered. Here is the plan to fix the same, suggestions welcome.
> 
> Issues:
> - A configure job is run when creating the dist tarball, and this runs
> on non py3 platforms, hence changing the dist tarball to basically have
> py2 shebangs, as a result the release-new build job always outputs py
> files with the py2 shebang. See tarball in [1]
> 
> - All regression hosts are currently py2 and so if we do not run the py
> shebang correction during configure (as we do not build and test from
> RPMS), we would be running with incorrect py3 shebangs (although this
> seems to work, see [2]. @kotresh can we understand why?)

Is it because we don't test any of the python in the regression tests?

Or because when we do, we invoke python scripts with `python foo.py` or
`$PYTHON foo.py` everywhere? The shebangs are ignored when scripts are
invoked this way.

> 
> Plan to address the above is detailed in this bug [3].
> 
> The thought is,
> - Add a configure option "--enable-py-version-correction" to configure,
> that is disabled by default

"correction" implies there's something that's incorrect. How about
"conversion" or perhaps just --enable-python2

> 
> - All regression jobs will run with the above option, and hence this
> will correct the py shebangs in the regression machines. In the future
> as we run on both py2 and py3 machines, this will run with the right
> python shebangs on these machines.
> 
> - The packaging jobs will now run the py version detection and shebang
> correction during actual build and packaging, Kaleb already has put up a
> patch for the same [2].
> 
> Thoughts?
> 

Also note that until --enable-whatever is added to configure(.ac), if
you're building and testing any of the python bits on RHEL or CentOS
you'll need to convert the shebangs. Perhaps the easiest way to do that
now (master branch and release-5 branch) is to build+install rpms.
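
(If you'd rather convert by hand, a rough sketch, assuming GNU sed:

  find . -name '*.py' -exec sed -i '1s|^#!/usr/bin/python3|#!/usr/bin/python2|' {} +

which is more or less what the configure-time correction amounts to.)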

If you're currently doing

  `git clone; ./autogen.sh; ./configure; make; make install`

then change that to

  `git clone; ./autogen.sh; ./configure; make -C extras/LinuxRPMS glusterrpms`

and then yum install those rpms. The added advantage is that it's easier
to remove rpms than anything installed with `make install`.

If you're developing on Fedora (hopefully 27 or later) or Debian or
Ubuntu you don't need to do anything different as they all have python3.

--

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] glusterd2 4.1 packages now available for Debian stretch and buster, and Ubuntu bionic and cosmic

2018-09-15 Thread Kaleb S. KEITHLEY


Packages built from the -vendor source tar file are now available for
Debian stretch and buster at [1]; and for Ubuntu xenial, bionic, and
cosmic at [2].

(The existing glusterd2 4.1 packages for Fedora, CentOS, RHEL, and
various SUSE and OpenSUSE are still available as always.)

[1] http://download.gluster.org/pub/gluster/glusterd2
[2] https://launchpad.net/~gluster

--

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Coverity covscan for 2018-07-20-8d7be4ac (master branch)

2018-07-20 Thread Kaleb S. KEITHLEY
On 07/20/2018 05:18 PM, staticanaly...@gluster.org wrote:
> 
> GlusterFS Coverity covscan results for the master branch are available from
> http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2018-07-20-8d7be4ac/
> 
> Coverity covscan results for other active branches are also available at
> http://download.gluster.org/pub/gluster/glusterfs/static-analysis/
> 

FYI, as of this alert, Coverity has been updated to cov-sa2018-06.

--

Kaleb

___
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Gluster -- Storhaug NFS HA

2018-06-27 Thread Kaleb S. KEITHLEY
Not sure what you're running. Storhaug-1.0 doesn't use pcs and there is 
no storhaug.conf



On 06/27/2018 09:13 PM, anirudh narayan wrote:

Team,

I am trying to set up storhaug on a 3 node Centos 7.4 gluster set up. I 
have created the gluster volume and have exposed the volume using NFS 
Ganesha. However, I am not able to set up HA using storhaug. Is there an 
admin doc for this that I can use? Or can anyone let me know the correct 
sequence?


[root@sp9pool4 sysconfig]# rpm -qa | grep -i storh
storhaug-nfs-1.0-1.el7.noarch
storhaug-1.0-1.el7.noarch

[root@sp9pool4 sysconfig]# gluster volume list
ani_test_nfs

[root@sp9pool4 sysconfig]# showmount -e
Export list for sp9pool4:
/ani_test_nfs (everyone)

[root@sp9pool4 sysconfig]# storhaug --setup
Setting up
ERROR: Insufficient servers for HA, aborting


[root@sp9pool4 log]# pcs cluster status
Cluster Status:
  Stack: corosync
  Current DC: sp9pool4 (version 1.1.18-11.el7_5.2-2b07d5c5a9) - 
partition with quorum

  Last updated: Wed Jun 27 11:42:35 2018
  Last change: Wed Jun 27 11:42:14 2018 by hacluster via crmd on sp9pool4
  3 nodes configured
  0 resources configured

PCSD Status:
   sp9pool4: Online
   sp9pool6: Online
   sp9pool5: Online



cat /etc/sysconfig/storhaug.conf
# Name of the HA cluster created.
HA_NAME="ani_nfs"

# Password of the hacluster user
HA_PASSWORD="cvadmin"

# The server on which cluster-wide configuration is managed.
# IP/Hostname
HA_SERVER="sp9pool4"

# The set of nodes that forms the HA cluster.
# Comma-deliminated IP/Hostname list
HA_CLUSTER_NODES="sp9pool4,sp9pool5,sp9pool6"

# [OPTIONAL] A subset of HA nodes that will serve as storage servers.
# Comma-deliminated IP/Hostname list
STORAGE_NODES="sp9pool4,sp9pool5,sp9pool6""

# Virtual IPs of each of the nodes specified above.
# Whitespace-deliminated IP address list
HA_VIPS="172.24.25.200"

# Managed access methods
# Whitespace-delimited list. Valid values:
#   nfs
#   smb
HA_SERVICES="nfs"



Regards,
Ani


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-18 Thread Kaleb S. KEITHLEY
On 06/18/2018 12:03 PM, Kaushal M wrote:
> 
> GD2 packages have been built for Fedora 28 and available from the
> updates-testing repo, and soon from the updates repo
> Packages are also available for Fedora 29/Rawhide.
> 
I built GD2 rpms for Fedora 27 using the -vendor tar file. They are
available at [1].

Attempts to build from the non-vendor tar file failed. Logs from one of
the failed builds are at [2] for anyone who cares to examine them to see
why they failed.


[1] https://download.gluster.org/pub/gluster/glusterd2/4.1/
[2] https://koji.fedoraproject.org/koji/taskinfo?taskID=27705828


-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] storhaug, a utility for setting up gluster+nfs-ganesha highly available NFSv3, NFSv4, NFSv4.1

2018-06-18 Thread Kaleb S. KEITHLEY
On 06/15/2018 03:37 PM, Jim Kinney wrote:
> YAY!!!
> 
> Glad to see this!
> 
> Now for a specific use case question:
> 
> I have a 3-node gluster service in replica 3. Each node has multiple
> network interfaces: a 40G ethernet and a 40G Infiniband with TCP. The
> infiniband is a separate IP network from the 40G ethernet. There is no
> (known) way to bridge the two networks.
> 
> How do I get the HA to work with dual (or more) networks?
The CTDB docs
(https://wiki.samba.org/index.php/Adding_public_IP_addresses) seem to
suggest you can have multiple IPs per node, i.e. on multiple NICs,
managed by CTDB.

I'd guess that the /etc/ctdb/public_addresses file might look something
like this:

  192.168.122.85 eth0
  192.168.123.85 ib0
  192.168.122.86 eth0
  192.168.123.86 ib0
  192.168.122.87 eth0
  192.168.123.87 ib0
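
And, if memory serves, you point ctdbd at that file with something like
the following (the variable name and path may differ by distribution and
version):

  # in /etc/sysconfig/ctdb
  CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses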

This might be a better question to ask in a Samba/CTDB forum. Or perhaps
Günther or Michael can point to some better documentation than what I
was able to find.

> 
> ascii art:
>
>                                       +- ib -- gluster storage node 1 -- eth -+
> IB gluster clients --- | IB switch |--+- ib -- gluster storage node 2 -- eth -+--| big ethernet switch |--- large number of gluster clients
>                                       +- ib -- gluster storage node 3 -- eth -+
> 
> 
> 
> 
> On Fri, 2018-06-15 at 11:39 -0400, Kaleb S. KEITHLEY wrote:
>> storhaug-1.0 is available now. Packages for Fedora[1], RHEL/CentOS[2],
>> and SUSE/OpenSUSE[3] are available now. Packages for Debian and Ubuntu
>> are coming soon¹.
>>
>> storhaug uses CTDB to monitor the ganesha.nfsds in a cluster and
>> manage the associated floating IP addresses (VIPs). storhaug is a
>> replacement for the old ganesha-ha utility that was in GlusterFS-3.10
>> and earlier. storhaug may be used with GlusterFS-3.12 and later and any
>> version of NFS-Ganesha.
>>
>> storhaug is much easier to set up than the old ganesha-ha utility. There
>> is a write-up describing how to set it up and use it at [4].
>>
>> Open issues for storhaug at [5]. Pull requests at [6]. Ask questions
>> here on the lists or on IRC in #gluster or #gluster-dev.
>>
>> [1] Fedora Updates-Testing and Updates repos.
>> [2] https://wiki.centos.org/SpecialInterestGroup/Storage
>> [3] https://build.opensuse.org/project/subprojects/home:glusterfs
>> [4] https://github.com/gluster/storhaug/wiki
>> [5] https://github.com/gluster/storhaug/issues
>> [6] https://github.com/gluster/storhaug/pulls
>>
>> ¹ For some definition of soon.
>>
>>
>>
> -- 
> 
> James P. Kinney III
> 
> Every time you stop a school, you will have to build a jail. What you
> gain at one end you lose at the other. It's like feeding a dog on his
> own tail. It won't fatten the dog.
> - Speech 11/23/1900 Mark Twain
> 
> http://heretothereideas.blogspot.com/
> 

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] storhaug, a utility for setting up gluster+nfs-ganesha highly available NFSv3, NFSv4, NFSv4.1

2018-06-15 Thread Kaleb S. KEITHLEY

storhaug-1.0 is available now. Packages for Fedora[1], RHEL/CentOS[2],
and SUSE/OpenSUSE[3] are available now. Packages for Debian and Ubuntu
are coming soon¹.

storhaug uses CTDB to monitor the ganesha.nfsds in a cluster and
manage the associated floating IP addresses (VIPs). storhaug is a
replacement for the old ganesha-ha utility that was in GlusterFS-3.10
and earlier. storhaug may be used with GlusterFS-3.12 and later and any
version of NFS-Ganesha.

storhaug is much easier to set up than the old ganesha-ha utility. There
is a write-up describing how to set it up and use it at [4].

Open issues for storhaug at [5]. Pull requests at [6]. Ask questions
here on the lists or on IRC in #gluster or #gluster-dev.

[1] Fedora Updates-Testing and Updates repos.
[2] https://wiki.centos.org/SpecialInterestGroup/Storage
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] https://github.com/gluster/storhaug/wiki
[5] https://github.com/gluster/storhaug/issues
[6] https://github.com/gluster/storhaug/pulls

¹ For some definition of soon.


-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-15 Thread Kaleb S. KEITHLEY
what about packages built in Fedora?

On 06/15/2018 07:33 AM, Kaushal M wrote:
> In Tue, Jun 12, 2018 at 10:15 PM Niels de Vos  wrote:
>>
>> On Tue, Jun 12, 2018 at 11:26:33AM -0400, Shyam Ranganathan wrote:
>>> On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
 As brick-mux tests were failing (and still are on master), this was
 holding up the release activity.

 We now have a final fix [1] for the problem, and the situation has
 improved over a series of fixes and reverts on the 4.1 branch as well.

 So we hope to branch RC0 today, and give a week for package and upgrade
 testing, before getting to GA. The revised calendar stands as follows,

 - RC0 Tagging: 31st May, 2018
 - RC0 Builds: 1st June, 2018
 - June 4th-8th: RC0 testing
 - June 8th: GA readiness callout
 - June 11th: GA tagging
>>>
>>> GA has been tagged today, and is off to packaging.
>>
>> The glusterfs packages should land in the testing repositories from the
>> CentOS Storage SIG soon. Currently glusterd2 is still on rc0 though.
>> Please test with the instructions from
>> http://lists.gluster.org/pipermail/packaging/2018-June/000553.html
>>
>> Thanks!
>> Niels
> 
> GlusterD2-v4.1.0 has been tagged and released [1].
> 
> [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0
> 
>> ___
>> maintainers mailing list
>> maintain...@gluster.org
>> http://lists.gluster.org/mailman/listinfo/maintainers
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
> 

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-04 Thread Kaleb S. KEITHLEY
On 06/04/2018 11:32 AM, Kaushal M wrote:

> 
> We have a proper release this time. Source tarballs are available from [1].
> 
> [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0
> 

I didn't wait for you to do COPR builds.

There are rpm packages for RHEL/CentOS 7, Fedora 27, Fedora 28, and
Fedora 29 at [1].

If you really want to use COPR builds instead, let me know and I'll
replace the ones I built with your COPR builds.

I think you will find (as I did) that Fedora 28 (still) doesn't have all
the dependencies and you'll need to build from the -vendor tar file.
Ditto for Fedora 27. If you believe this should not be the case please
let me know.

[1] https://download.gluster.org/pub/gluster/glusterd2/qa-releases/4.1rc0/

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-04 Thread Kaleb S. KEITHLEY
On 06/04/2018 11:05 AM, Kaushal M wrote:

> 
>> And it's good that RC0 was tagged in a timely manner. Who is building
>> those packages?
> 
> I can build the RPMs. I'll build them on the COPR I've been
> maintaining. But I don't believe that those can be used as the
> official RPM sources.
> So where should I build and how should they be distributed?

Test packages don't need to be "official" and can be built anywhere
AFAIC. COPR is fine. Or koji scratch builds.

Once they're built tell me where and I'll sign them and put them on
download.gluster.org

Thanks,

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-04 Thread Kaleb S. KEITHLEY
On 06/02/2018 07:47 AM, Niels de Vos wrote:
> On Sat, Jun 02, 2018 at 12:11:55AM +0530, Kaushal M wrote:
>> On Fri, Jun 1, 2018 at 6:19 PM Shyam Ranganathan  wrote:
>>>
>>> On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
 As brick-mux tests were failing (and still are on master), this was
 holding up the release activity.

 We now have a final fix [1] for the problem, and the situation has
 improved over a series of fixes and reverts on the 4.1 branch as well.

 So we hope to branch RC0 today, and give a week for package and upgrade
 testing, before getting to GA. The revised calendar stands as follows,

 - RC0 Tagging: 31st May, 2018
>>>
>>> RC0 Tagged and off to packaging!
>>
>> GD2 has been tagged as well. [1]
>>
>> [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0
> 
> What is the status of the RPM for GD2? Can the Fedora RPM be rebuilt
> directly on CentOS, or does it need additional dependencies? (Note that
> CentOS does not allow dependencies from Fedora EPEL.)
> 

My recollection of how this works is that one would need to build from
the "bundled vendor" tarball.

Except when I tried to download the vendor bundle tarball I got the same
bits as the unbundled tarball.

ISTR Kaushal had to do something extra to generate the vendor bundled
tarball. It doesn't appear that that occurred.

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 4.1: Branched

2018-06-04 Thread Kaleb S. KEITHLEY
On 06/02/2018 07:47 AM, Niels de Vos wrote:
> On Sat, Jun 02, 2018 at 12:11:55AM +0530, Kaushal M wrote:
>> On Fri, Jun 1, 2018 at 6:19 PM Shyam Ranganathan  wrote:
>>>
>>> On 05/31/2018 09:22 AM, Shyam Ranganathan wrote:
 As brick-mux tests were failing (and still are on master), this was
 holding up the release activity.

 We now have a final fix [1] for the problem, and the situation has
 improved over a series of fixes and reverts on the 4.1 branch as well.

 So we hope to branch RC0 today, and give a week for package and upgrade
 testing, before getting to GA. The revised calendar stands as follows,

 - RC0 Tagging: 31st May, 2018
>>>
>>> RC0 Tagged and off to packaging!
>>
>> GD2 has been tagged as well. [1]
>>
>> [1]: https://github.com/gluster/glusterd2/releases/tag/v4.1.0-rc0
> 
> What is the status of the RPM for GD2? Can the Fedora RPM be rebuilt
> directly on CentOS, or does it need additional dependencies? (Note that
> CentOS does not allow dependencies from Fedora EPEL.)
> 

I checked, and was surprised to see that gd2 made it into Fedora[1]. I
guess I missed the announcement.

But I was disappointed to see that packages have only been built for
Fedora29/rawhide. We've been shipping glusterfs-4.0 in Fedora28 and even
if [2] didn't say so, I would think it would be obvious that we should
have packages for gd2 in F28 too.

And it's good that RC0 was tagged in a timely manner. Who is building
those packages?

[1] https://koji.fedoraproject.org/koji/packageinfo?packageID=26508
[2] https://docs.gluster.org/en/latest/Install-Guide/Community_Packages/
-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Proposal to change the version numbers of Gluster project

2018-03-13 Thread Kaleb S. KEITHLEY
On 03/12/2018 02:32 PM, Shyam Ranganathan wrote:
> On 03/12/2018 10:34 AM, Atin Mukherjee wrote:
>>   *
>>
>> After 4.1, we want to move to either continuous numbering (like
>> Fedora), or time based (like ubuntu etc) release numbers. Which
>> is the model we pick is not yet finalized. Happy to hear opinions.
>>
>>
>> Not sure how the time based release numbers would make more sense than
>> the one which Fedora follows. But before I comment further on this I
>> need to first get a clarity on how the op-versions will be managed. I'm
>> assuming once we're at GlusterFS 4.1, post that the releases will be
>> numbered as GlusterFS5, GlusterFS6 ... So from that perspective, are we
>> going to stick to our current numbering scheme of op-version where for
>> GlusterFS5 the op-version will be 5?
> 
> Say, yes.
> 
> The question is why tie the op-version to the release number? That
> mental model needs to break IMO.
> 
> With current options like,
> https://docs.gluster.org/en/latest/Upgrade-Guide/op_version/ it is
> easier to determine the op-version of the cluster and what it should be,
> and hence this need not be tied to the gluster release version.
> 
> Thoughts?

I'm okay with that, but——

Just to play the Devil's Advocate, having an op-version that bears some
resemblance to the _version_ number may make it easy/easier to determine
what the op-version ought to be.

We aren't going to run out of numbers, so there's no reason to be
"efficient" here. Let's try to make it easy. (Easy to not make a mistake.)

My 2¢

--

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] GlusterD2 - 4.0.0rc1 (warning: we have a blocker for GD2)

2018-03-02 Thread Kaleb S. KEITHLEY
On 03/02/2018 04:24 AM, Kaushal M wrote:
> [snip]
> I was able to create libglusterfsd, with just the pmap_signout and
> autoscale functions.
> Turned out to be easy enough to do in the end.
> I've pushed a patch for review [1] on master.
> 
> I've also opened new bugs to track the fixes for master[2] and
> release-4.0[3]. They have been made blockers to the glusterfs-4.0.0
> tracker bug [4].

I really don't like creating this libglusterfsd.so with just two
functions to get around this. It feels like a quick-and-dirty hack.
(There's never time to do it right, but there's always time to do it
over. Except there isn't.)

I've posted a change at https://review.gluster.org/19664 that moves
those two functions to libgfrpc.so. It works on my f28/rawhide box and
the various centos and fedora smoke test boxes. No tricky linker flags,
or anything else, required. Regression is running now.

(And truth be told I'd like to also move glusterfs_mgmt_pmap_signin()
into libgfrpc.so too. Just for (foolish) consistency/symmetry.)

> 
> Shyam,
> To backport the fix from master to release-4.0, also requires
> backporting one more change [5].
> Would you be okay with backporting that as well, in a single patch?
> 
> [1]: https://review.gluster.org/19657
> [2]: https://bugzilla.redhat.com/show_bug.cgi?id=1550895
> [3]: https://bugzilla.redhat.com/show_bug.cgi?id=1550894
> [4]: https://bugzilla.redhat.com/show_bug.cgi?id=1539842
> [5]: https://review.gluster.org/19337
> 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GlusterD2 - 4.0.0rc1 (warning: we have a blocker for GD2)

2018-02-28 Thread Kaleb S. KEITHLEY
On 02/28/2018 10:49 AM, Kaushal M wrote:
> We have a GlusterD2-4.0.0rc1 release.
> 
> Aravinda, Prashanth and the rest of the GD2 developers have been
> working hard on getting more stuff merged into GD2 before the 4.0
> release.
> 
> At the same time I have been working on getting GD2 packaged for Fedora.
> I've been able to get all the required dependencies updated and have
> submitted to the package maintainer for merging.
> I'm now waiting on the maintainer to accept those updates. Once the
> updates have been accepted, the GD2 spec can get accepted [2].
> I expect this to take at least another week on the whole.
> 
> In the meantime, I've been building all the updated dependencies and
> glusterd2-v4.0.0rc1, on the GD2 copr [3].
> 
> I tried to test out the GD2 package with the GlusterFS v4.0.0rc1
> release from [4]. And this is where I hit the blocker.
> 
> GD2 does not start with the packaged glusterfs-v4.0.0rc1 bits. I've
> opened an issue on the GD2 issue tracker for it [5].
> In short, GD2 fails to read options from xlators, as dlopen fails with
> a missing symbol error.
> 
> ```
> FATA[2018-02-28 15:02:53.345686] Failed to load xlator options
> error="dlopen(/usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so)
> failed; dlerror =
> /usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so: undefined
> symbol: glusterfs_mgmt_pmap_signout" source="[main.go:79:main.main]"


see https://review.gluster.org/#/c/19225/


glusterfs_mgmt_pmap_signout() is in glusterfsd. When glusterfsd dlopens
server.so the run-time linker can resolve the symbol — for now.

Tighter run-time linker semantics coming in, e.g., Fedora 28, mean this
will stop working in the near future even when RTLD_LAZY is passed as a
flag. (As I understand the proposed changes.)

It should still work, e.g., on Fedora 27 and el7 though.

glusterfs_mgmt_pmap_signout() (and glusterfs_autoscale_threads()) really
need to be moved to libglusterfs. ASAP. Doing that will resolve this issue.
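
You can see the dangling dependency from the shell, e.g.:

  # 'U' marks symbols server.so expects the loading process to provide
  nm -D /usr/lib64/glusterfs/4.0.0rc1/xlator/protocol/server.so | grep pmap
  #   U glusterfs_mgmt_pmap_signout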

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-Maintainers] Release 4.0: RC1 tagged

2018-02-27 Thread Kaleb S. KEITHLEY
On 02/26/2018 02:03 PM, Shyam Ranganathan wrote:
> Hi,
> 
> RC1 is tagged in the code, and the request for packaging the same is on
> its way.
> 
> We should have packages as early as today, and request the community to
> test the same and return some feedback.
> 
> We have about 3-4 days (till Thursday) for any pending fixes and the
> final release to happen, so shout out in case you face any blockers.
> 
> The RC1 packages should land here:
> https://download.gluster.org/pub/gluster/glusterfs/qa-releases/4.0rc1/
> and like so for CentOS,
> CentOS7:
>   # yum install
> http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm
>   # yum install glusterfs-server

Packages for:

* Fedora 27 and 28 (x86_64, aarch64, etc.) are at [1]; Fedora 29
(rawhide) are in rawhide.

* Debian stretch and buster (amd64) are at [1].

* CentOS 7 (x86_64, aarch64, ppc64le) are at [2]. They have been tagged
for testing and should appear soon at [3].

Please test and give feedback on gluster-d...@gluster.org or
#gluster-dev on freenode.

[1] https://download.gluster.org/pub/gluster/glusterfs/qa-releases/4.0rc1/
[2] https://cbs.centos.org/koji/taskinfo?taskID=340364
[3] https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-4.0/
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Preview of glusterfs packaging with glusterd2 (Fedora and CentOS RPMs)

2018-01-29 Thread Kaleb S. KEITHLEY
This is built on top of glusterfs-3.12. Obviously this will change to
4.0 (4.0rc0, etc.). This is derived from the
.../extras/rpms/glusterd2.spec in the glusterd2 source.

see https://koji.fedoraproject.org/koji/taskinfo?taskID=24543030

(Having to "bundle" the generated source and the -vendor source tar
files does make for a big .src.rpm.)

Question for Debian and Ubuntu users: would you want to see the
glusterd2 bits included in the -common or -server sub-package or would
you like to see a separate sub-package for glusterd2?

Thanks,

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [gluster-packaging] Release 4.0: Making it happen! (GlusterD2)

2018-01-11 Thread Kaleb S. KEITHLEY
On 01/11/2018 11:34 AM, Shyam Ranganathan wrote:
>>>
>>> One thing not covered above is what happens when GD2 fixes a high priority
>>> bug between releases of glusterfs.
>>>
>>> One option is we wait until the next release of glusterfs to include the
>>> update to GD2.
>>>
>>> Or we can respin (rerelease) the glusterfs packages with the updated GD2.
>>> I.e. glusterfs-4.0.0-1 (containing GD2-1.0.0) -> glusterfs-4.0.0-2
>>> (containing GD2-1.0.1).
>>>
>>> Or we can decide not to make a hard rule and do whatever makes the most
>>> sense at the time. If the fix is urgent, we respin. If the fix is not urgent
>>> it waits for the next Gluster release. (From my perspective though I'd
>>> rather not do respins, I've already got plenty of work doing the regular
>>> releases.)
> 
> I would think we follow what we need to do for the gluster package (and
> its sub-packages) as it stands now. If there is an important enough fix
> (critical/security etc.) that requires a one-off build (ie. not a
> maintenance release or a regular release) we respin the whole thing
> (which is more work).
> 
> I think if it is a GD2 specific fix then just re-spinning that
> sub-package makes more sense and possibly less work.

RPM (and Debian) packaging is an all or nothing proposition. There is no
respinning just the -glusterd2 sub-package.

> I am going to leave the decision of re-spinning the whole thing or just
> the GD2 package to the packaging folks, but state that re-spin rules do
> not change, IOW, if something is critical enough we re-spin as we do today.

I think my real question was what should happen when GD2 discovers/fixes
a severe bug between the regular release dates.

If we take the decision that it needs to be released immediately (with
packages built), do we:

  a) make a whole new glusterfs Release with just the GD2 fix. I.e.
glusterfs-4.0.4-1.rpm  ->  glusterfs-4.0.5-1.rpm. IOW we bump the _V_ in
the NVR? (This implies tagging the glusterfs source with the new tag at
the same location as the previous tag.)

or

  b) "respin" the existing glusterfs release, also with just the GD2
fix. I.e. glusterfs-4.0.4-1.rpm  ->  glusterfs-4.0.4-2.rpm. IOW we bump
the _R_ in the NVR?


Obviously (or is it?) if we find serious bugs in core gluster and GD2
that we want to release we can update both and that would be a new
Version (_V_ in the NVR).
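
To make option b) concrete, a sketch of the spec change for a respin
(values are illustrative):

  Version: 4.0.4
  Release: 2%{?dist}    # was 1%{?dist}; rebuilt only to pick up the GD2 fix

plus a %changelog entry; option a) instead bumps Version and resets
Release to 1.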

--

Kaleb

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [gluster-packaging] Release 4.0: Making it happen! (GlusterD2)

2018-01-10 Thread Kaleb S. KEITHLEY
comments inline

On 01/10/2018 02:08 PM, Shyam Ranganathan wrote:
> Hi, (GD2 team, packaging team, please read)
>
> Here are some things we need to settle so that we can ship/release GD2
> along with the Gluster 4.0 release (considering this is a separate
> repository as of now).
>
> 1) Generating a release package (read as RPM for now) to go with the
> Gluster 4.0 release
>
> Proposal:
> - GD2 makes github releases, as in [1]
> - GD2 releases (tagging etc.) are made in tandem with Gluster releases
> - So, when a beta1/RC0 is tagged for a gluster release, this will
>   receive a coordinated release (if required) from the GD2 team
> - The GD2 team will receive *at least* 24h notice of a tentative
>   Gluster tagging date/time, to aid the GD2 team in preparing the
>   required release tarball in github

This is a no-op. In github creating a tag or a release automatically
creates the tar source file.

> - Once a gluster tag is created, and the subsequent release job is run
>   for gluster 4.0, the packaging team will be notified about which GD2
>   tag to pick up for packaging with this gluster release
> - IOW, a response to the Jenkins-generated packaging job, with the GD2
>   version/tag/release to pick up
> - GD2 will be packaged as a sub-package of the glusterfs package, and
>   hence will have appropriate changes to the glusterfs spec file (or
>   other variants of packaging as needed), to generate one more package
>   (RPM) to post in the respective download location
> - The GD2 sub-package version would be the same as the release version
>   that GD2 makes (it will not be the gluster package version, at least
>   for now)

IMO it's clearer if the -glusterd2 sub-package has the same version as
the rest of the glusterfs-* packages.

The -glusterd2 sub-package's Summary and/or its %description can be used
to identify the version of GD2.

Emphasis on IMO. It is possible for the -glusterd2 sub-package to have a
version that's different than the parent package(s).

> - For now, none of the gluster RPMs would be dependent on the GD2 RPM
>   in the downloads, so any user wanting to use GD2 would have to
>   install the package specifically and then proceed as needed
> - (thought/concern) The Jenkins smoke job (or other jobs) that builds
>   RPMs will not build GD2 (as the source is not available) and will
>   continue as is (which means there is enough spec file magic here that
>   we can specify during release packaging to additionally build GD2)
>
> 2) Generate a quick start or user guide, to aid using GD2 with 4.0
>
> @Kaushal if this is generated earlier (say with beta builds of 4.0
> itself) we could get help from the community to test drive the same
> and provide feedback to improve the guide for users by the release
> (as discussed in the maintainers meeting)

One thing not covered above is what happens when GD2 fixes a high
priority bug between releases of glusterfs.

One option is we wait until the next release of glusterfs to include
the update to GD2.

Or we can respin (rerelease) the glusterfs packages with the updated
GD2. I.e. glusterfs-4.0.0-1 (containing GD2-1.0.0) -> glusterfs-4.0.0-2
(containing GD2-1.0.1).

Or we can decide not to make a hard rule and do whatever makes the most
sense at the time. If the fix is urgent, we respin. If the fix is not
urgent it waits for the next Gluster release. (From my perspective
though I'd rather not do respins, I've already got plenty of work doing
the regular releases.)

The alternative to all of the above is to package GD2 in its own
package. This entails opening a New Package Request and going through
the packaging reviews. All in all it's a lot of work. If GD2 source is
eventually going to be moved into the main glusterfs source though this
probably doesn't make sense.

--

Kaleb



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] 2018 - Plans and Expectations on Gluster Community

2018-01-03 Thread Kaleb S. KEITHLEY

On 01/02/2018 11:03 PM, Vijay Bellur wrote:
>> The people who were writing storhaug never finished it. Keep using
>> 3.10 until storhaug gets finished.
>
> Since 3.10 will be EOL in approximately 2 months from now, what would be
> our answer for NFS HA if storhaug is not finished by then?
>
>   - Use ctdb
>   - Restore nfs.ganesha CLI support
>   - Something else?
>
> Have we already documented upgrade instructions for those users
> utilizing nfs.ganesha CLI in 3.8? If not already done so, it would be
> useful to have them listed somewhere.




I have a pretty high degree of confidence that I can have storhaug 
usable by or before 4.0. The bits I have on my devel box are almost 
ready to post on github.


I'd like to abandon the github repo at 
https://github.com/linux-ha-storage/storhaug; and create a new repo 
under https://github.com/gluster/storhaug. I dare say there are other 
Linux storage solutions besides gluster+ganesha+samba that storhaug 
doesn't handle.


And upgrade instructions for what? Upgrading/switching from legacy 
glusterd to storhaug? No, not yet. Doesn't make sense since there's no 
(usable) storhaug yet.


--

Kaleb


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Coverity fixes

2017-11-03 Thread Kaleb S. KEITHLEY
On 11/02/2017 10:19 AM, Atin Mukherjee wrote:
> While I appreciate the folks to contribute lot of coverity fixes over
> last few days, I have an observation for some of the patches the
> coverity issue id(s) are *not* mentioned which gets maintainers in a
> difficult situation to understand the exact complaint coming out of the
> coverity. From my past experience in fixing coverity defects, sometimes
> the fixes might look simple but they are not.
> 
> May I request all the developers to include the defect id in the commit
> message for all the coverity fixes?
> 

How does that work? AFAIK the defect IDs are constantly changing as some
get fixed and new ones get added.

(And I know everyone looks at the coverity report after their new code
is committed to see if they might have added a new issue.)

Today's defect ID 435 might be 436 or 421 tomorrow.


-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Suggestion to Improve performance

2017-09-27 Thread Kaleb S. KEITHLEY
On 09/27/2017 04:17 AM, Mohit Agrawal wrote:
> Niels,
> 
>    Thanks for your reply. I think these built-in functions are provided
> by gcc and should support most architectures.
>    In your view, which architectures might not support these built-in
> functions?
see

  https://gcc.gnu.org/onlinedocs/gcc-7.2.0/gcc/_005f_005fsync-Builtins.html#g_t_005f_005fsync-Builtins
  https://gcc.gnu.org/onlinedocs/gcc-7.2.0/gcc/_005f_005fatomic-Builtins.html#g_t_005f_005fatomic-Builtins
  https://llvm.org/docs/Atomics.html

The _legacy_ __sync*() functions have been superseded by the __atomic*()
functions.

A quick search seems to suggest that ARM has atomic insns since armv6.
Fedora supports armv7hl and aarch64 and IMO ARM should be okay in this
regard.

A ten year old gcc bug report
(https://gcc.gnu.org/bugzilla/show_bug.cgi?id=34115) speaks to __sync*
(and now __atomic* ?) not being supported on the i386 and by extension
to i686 because of how glibc was built for most distributions back then.
We (gluster) are conservative in this regard, but frankly, since hardly
anyone runs 32-bit these days, the perf hit of not using atomic is not a
serious concern.

It's easy to fall into the trap of saying "it works on my box." Where
"my box" is a x86_64 machine running latest {Fedora,Debian,whatever} but
please keep in mind that even Fedora runs on i686, x86_64, armv7hl,
aarch64, ppc64, ppc64le, and s390x these days. IMO we don't want to fill
our source with lots of #ifdef glop if we can avoid it.

Clang/LLVM supports __sync*() and __atomic*(), and between Clang and gcc
I'd say that covers pretty much all the distributions we care about,
e.g. Linux and *BSD, and even many we don't care so much about any more.

I'd say proceed with caution, but proceed. ;-)


>     
> 
> Regards
> Mohit Agrawal
> 
> On Wed, Sep 27, 2017 at 1:20 PM, Niels de Vos wrote:
> 
> On Wed, Sep 27, 2017 at 12:55:37PM +0530, Mohit Agrawal wrote:
> > Hi,
> >
> >    I was checking code of internal data structures (dict,fd,rpc_clnt 
> etc.)
> > those we use in glusterfs to store data.
> >    Usually we use common pattern to take reference of data structure in
> > xlator level, in ref function we do take lock(mutex_lock)
> >    and update(increase) reference counter and in unref function we do 
> take
> > lock and decrease reference counter and
> >    check if ref counter is become 0 after decrease then destroy object.
> >
> >    I think to update reference counter we don't need to take a lock, we 
> can
> > use atomic in-built function those
> >    can improve performance
> 
> The below is not portable for all architectures. However we have
> refcount.h in libglusterfs/src/ which hides the portability things. One
> of the big advantages to use this, is that the code for reference
> counting is the same everywhere. Some structures have been updated with
> GF_REF_* macros, more can surely be done.
> 
> For other more basic counters that do not function as reference counter,
> the libglusterfs/src/atomic.h macros can be used. The number of lock
> instructions on modern architectures can be reduced considerably this
> way. It will likely come with a performance increase, but the usage of a
> standard API makes the code simpler to understand and that is my main
> interest :)
> 
> Obviously I'm all for replacing the lock+count+unlock sequences for many
> structures!
> 
> Thanks,
> Niels
> 
> 
> >
> >    For ex: Below is an example specific to dict_ref/unref.
> >    To increase the refcount we can use the built-in functions below:
> >    dict_ref
> >    {
> >        __atomic_add_fetch (&dict->refcount, 1, __ATOMIC_SEQ_CST);
> >    }
> >
> >    dict_unref
> >    {
> >       __atomic_sub_fetch (&dict->refcount, 1, __ATOMIC_SEQ_CST);
> >       __atomic_load (&dict->refcount, &refcount, __ATOMIC_SEQ_CST);
> >    }
> >
> >    In the same way we can use these for all the other shared data
> > structures when taking/releasing references.
> >
> >    I have not tested yet how much performance improvement we can gain 
> but i
> > think there should be some improvement.
> >   Please share your input on this, appreciate your input.
> >
> > Regards
> > Mohit Agrawal
> 
> > ___
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org 
> > http://lists.gluster.org/mailman/listinfo/gluster-devel
> 
> 
> 
> 
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
> 

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Permission for glusterfs logs.

2017-09-20 Thread Kaleb S. KEITHLEY
On 09/18/2017 09:22 PM, ABHISHEK PALIWAL wrote:
> Any suggestion would be appreciated...
> 
> On Sep 18, 2017 15:05, "ABHISHEK PALIWAL" wrote:
> 
> Any quick suggestion.?
> 
> On Mon, Sep 18, 2017 at 1:50 PM, ABHISHEK PALIWAL
> > wrote:
> 
> Hi Team,
> 
> As you can see permission for the glusterfs logs in
> /var/log/glusterfs is 600.
> 
> drwxr-xr-x 3 root root  140 Jan  1 00:00 ..
> *-rw--- 1 root root    0 Jan  3 20:21 cmd_history.log*
> drwxr-xr-x 2 root root   40 Jan  3 20:21 bricks
> drwxr-xr-x 3 root root  100 Jan  3 20:21 .
> *-rw--- 1 root root 2102 Jan  3 20:21
> etc-glusterfs-glusterd.vol.log*
> 
> Due to that non-root user is not able to access these logs
> files, could you please let me know how can I change these
> permission. So that non-root user can also access these log files.
>

There is no "quick fix."  Gluster creates the log files with 0600 — like
nearly everything else in /var/log.

The admin can chmod the files, but when the logs rotate the new log
files will be 0600 again.

You'd have to patch the source and rebuild to get different permission bits.

You can probably do something with ACLs, but as above, when the logs
rotate the new files won't have the ACLs.



-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] FYI: longevity cluster updated

2017-09-19 Thread Kaleb S. KEITHLEY

The longevity cluster has been updated to GlusterFS-3.12.1 and
NFS-Ganesha-2.5.2.

The cluster consists of eight servers, with an eight brick 4x2
distribute+replica volume, w/ sharding enabled.

NFS-Ganesha with FSAL_GLUSTER is running on the first server. Previously
ACLs had been disabled; now they are enabled.

The client mounts both NFS and Gluster native (i.e. FUSE) file systems
and runs a modest create-write-read-delete workload on each mount point.

Memory consumption (RSS, VSZ) of the GlusterFS and NFS-Ganesha daemons
is sampled hourly on the servers and on the client. The results are
logged at[1]. Gluster state dumps are also collected and are available
at the same location[1]. Check back periodically to watch for memory
leaks or even just unexpected excessive memory consumption.

[1] https://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/
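
The hourly sampling amounts to something like this (a sketch; the actual
collection script differs in its details):

  ps -C glusterfsd,glusterfs,ganesha.nfsd -o pid,rss,vsz,comm >> mem-samples.log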

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Debian 3.9 repository missing?

2017-08-17 Thread Kaleb S. KEITHLEY


Up until a few days ago they hadn't been moved.

I suppose I should've warned people. I guess I consider them warned now.

I can put them back if enough people think I should, but for the moment 
I'm inclined to keep it this way.


On 08/17/2017 03:27 PM, Shane StClair wrote:
Ah, thanks, I didn't realize old versions were moved to 
https://download.gluster.org/pub/gluster/glusterfs/old-releases/. Moving 
old repos does break apt updates on clients which isn't great for wide 
deployments, but I suppose it does force end users to upgrade to a 
supported version.


On Thu, Aug 17, 2017 at 12:02 PM Kaleb S. KEITHLEY <kkeit...@redhat.com> wrote:


3.9 is EOL several months ago.

The old repos are still there.

https://download.gluster.org/pub/gluster/glusterfs/old-releases/3.9/


On 08/17/2017 02:56 PM, Shane StClair wrote:
 > Just wanted to alert Gluster devs that the 3.9 Debian repository seems
 > to have recently disappeared, e.g. this source
 >
 > deb http://download.gluster.org/pub/gluster/glusterfs/3.9/3.9.0/Debian/jessie/apt jessie main
 >
 > no longer works and breaks apt-get updates for servers configured with it.
 >
 > https://download.gluster.org/pub/gluster/glusterfs/
 >
 > Understood that Debian packages are created by volunteers as time
allows
 > and that Gluster 3.9 is EOL, just wasn't sure if it was supposed
to be
 > completely removed from the Debian repositories.
 >
 > Thanks,
 > Shane St Clair
 > Axiom Data Science



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Debian 3.9 repository missing?

2017-08-17 Thread Kaleb S. KEITHLEY

On 08/17/2017 03:01 PM, Kaleb S. KEITHLEY wrote:

3.9 was a STM release and reached ...

... EOL several months ago.

The old repos are still there.

https://download.gluster.org/pub/gluster/glusterfs/old-releases/3.9/


On 08/17/2017 02:56 PM, Shane StClair wrote:
Just wanted to alert Gluster devs that the 3.9 Debian repository seems 
to have recently disappeared, e.g. this source


deb http://download.gluster.org/pub/gluster/glusterfs/3.9/3.9.0/Debian/jessie/apt jessie main


no longer works and breaks apt-get updates for servers configured with 
it.


https://download.gluster.org/pub/gluster/glusterfs/

Understood that Debian packages are created by volunteers as time 
allows and that Gluster 3.9 is EOL, just wasn't sure if it was 
supposed to be completely removed from the Debian repositories.


Thanks,
Shane St Clair
Axiom Data Science


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Debian 3.9 repository missing?

2017-08-17 Thread Kaleb S. KEITHLEY

3.9 is EOL several months ago.

The old repos are still there.

https://download.gluster.org/pub/gluster/glusterfs/old-releases/3.9/


On 08/17/2017 02:56 PM, Shane StClair wrote:
Just wanted to alert Gluster devs that the 3.9 Debian repository seems 
to have recently disappeared, e.g. this source


deb http://download.gluster.org/pub/gluster/glusterfs/3.9/3.9.0/Debian/jessie/apt jessie main


no longer works and breaks apt-get updates for servers configured with it.

https://download.gluster.org/pub/gluster/glusterfs/

Understood that Debian packages are created by volunteers as time allows 
and that Gluster 3.9 is EOL, just wasn't sure if it was supposed to be 
completely removed from the Debian repositories.


Thanks,
Shane St Clair
Axiom Data Science


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Adding xxhash to gluster code base

2017-06-27 Thread Kaleb S. KEITHLEY


xxhash doesn't seem to change much. Last update to the non-test code was 
six months ago.


Bundling giant (for some definition of giant) packages/projects would be
bad. Bundling two (three if you count the test) C files doesn't seem too
bad when you consider that there are already three or four packages in
Fedora (perl, python, R-digest, ghc (GNU Haskell)) that have
implementations of xxhash or murmur but didn't bother to package a C
implementation and use it.


I'd be for packaging it in Fedora rather than bundling it in gluster. 
But then we get to "carry" it in rhgs as we do with userspace-rcu.




On 06/27/2017 04:08 AM, Niels de Vos wrote:

On Tue, Jun 27, 2017 at 12:25:11PM +0530, Kotresh Hiremath Ravishankar wrote:

Hi,

We were looking for faster non-cryptographic hash to be used for the
gfid2path infra [1]
The initial testing was done with md5 128bit checksum which was a slow,
cryptographic hash
and using it makes software not complaint to FIPS [2]

On searching online a bit we found out xxhash [3] seems to be faster from
the results of
benchmark tests shared and lot of projects use it. So we have decided to us
xxHash
and added following files to gluster code base with the patch [4]

 BSD 2-Clause License:
contrib/xxhash/xxhash.c
contrib/xxhash/xxhash.h

 GPL v2 License:
tests/utils/xxhsum.c

NOTE: We have ignored the code guideline check for these files as
maintaining it
further becomes difficult.

Please comment on the same if there are any issues around it.


How performance critical is the hashing for gfid2path?

What is the plan to keep these files maintained? At minimal we need to
add these files to MAINTAINERS and the maintainers need to cherry-pick
updates and bugfixes from the original project. The few patches a year
makes this a recurring task that should not be forgoten. It would be
much better to use this as an external library that is provided by the
distributions. We already rely on OpenSSL, does this library not provide
an alternative 'FIPS approved' hashing that performs reasonably well?

Some distributions are very strict on bundling external projects, and we
need to inform the packagers about the additions so that they can handle
it correctly. Adding an external project to contrib/ should be mentioned
in the release notes at the very least.

Note that none of the symbols of any public functions in Gluster may
collide with functions in standard distribution libraries. This causes
for regular problems with gfapi applications. All exposed symbols that
get imported in contrib/ should have a gf_ prefix.

Thanks,
Niels




[1] Issue: https://github.com/gluster/glusterfs/issues/139
[2] https://en.wikipedia.org/wiki/Federal_Information_Processing_Standards
[3] http://cyan4973.github.io/xxHash/
[4] https://review.gluster.org/#/c/17488/10



--
Thanks and Regards,
Kotresh H R and Aravinda VK
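
A footnote on the symbol-prefix point: xxHash ships an XXH_NAMESPACE build
macro for exactly this purpose, so the bundled copy can expose gf_-prefixed
symbols without source edits. A minimal sketch, assuming the bundled copy is
recent enough to provide the macro (it must also be defined when compiling
contrib/xxhash/xxhash.c so caller and library agree):

```c
#include <stdint.h>
#include <stddef.h>

/* Define before including the bundled header (or pass
 * -DXXH_NAMESPACE=gf_ in CPPFLAGS for the whole tree). */
#define XXH_NAMESPACE gf_
#include "xxhash.h"

uint64_t
hash_of(const void *buf, size_t len)
{
        /* the source still says XXH64(); the linker sees gf_XXH64() */
        return XXH64(buf, len, 0);
}
```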


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Release 3.11.1: Scheduled for 20th of June

2017-06-26 Thread Kaleb S. KEITHLEY
On 06/26/2017 10:31 AM, Shyam wrote:
> 
> In the future, we would like to stick with the release calendar, as that
> is published and well known, than delay releases. Hence, when raising
> blockers for a release or delaying the release, expect more questions
> and diligence required around the same in the future.
> 

+1.

Let's stick to the schedule as much as possible.

We can — and have — put out another release shortly after. Our decision
to have scheduled releases does not preclude having extra releases
between the scheduled ones when we have a legitimate need.

We can also respin packages with a patch. We've done that on several
occasions.

And finally, waiting four weeks for the next release is really not that bad.

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Community Meeting minutes, 2017-06-21

2017-06-21 Thread Kaleb S. KEITHLEY
===
#gluster-meeting: Gluster Community Meeting
===


Meeting started by kkeithley at 15:13:53 UTC. The full logs are
available at
https://meetbot.fedoraproject.org/gluster-meeting/2017-06-21/gluster_community_meeting.2017-06-21-15.13.log.html
.



Meeting summary
---
* roll call  (kkeithley, 15:14:12)

* AIs from last meeting  (kkeithley, 15:19:21)

* related projects  (kkeithley, 15:33:56)
  * ACTION: JoeJulian to invite Harsha to next community meeting to
discuss Minio  (kkeithley, 15:50:21)
  *

https://review.openstack.org/#/q/status:open+project:openstack/swift3,n,z
(kkeithley, 15:50:49)
  * there's definetely versioning work going on,  bunch of patches that
needs reviews...  (kkeithley, 15:50:57)
  * The infra for simplified reverts is done btw.  (kkeithley, 15:51:30)

* open floor  (kkeithley, 15:54:32)

Meeting ended at 16:07:14 UTC.




Action Items

* JoeJulian to invite Harsha to next community meeting to discuss Minio




Action Items, by person
---
* JoeJulian
  * JoeJulian to invite Harsha to next community meeting to discuss
Minio
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kkeithley (54)
* ndevos (40)
* nigelb (35)
* JoeJulian (10)
* tdasilva (9)
* shyam (7)
* zodbot (3)
* jstrunk (1)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot



-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] The glusterfs (and nfs-ganesha) longevity cluster updated

2017-06-08 Thread Kaleb S. KEITHLEY


FYI,

The glusterfs and nfs-ganesha longevity cluster has been updated to 
glusterfs-3.11.0. (It continues to run nfs-ganesha-2.4.3 and 
libntirpc-1.4.3)


Periodic samples of the RSZ and VSZ of glusterd, glusterfsd, and 
ganesha.nfsd on the servers, and of glusterfs on the client, are taken while 
running under a modest workload. You can see the cumulative logs at [1].


You can see the results from the previous run of glusterfs-3.10.1, which 
ran from 2017-04-05 through 2017-06-08 at [2]




[1] 
https://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/
[2] 
https://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity3101/

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GFID2 - Proposal to add extra byte to existing GFID

2017-05-15 Thread Kaleb S. KEITHLEY
On 05/15/2017 12:48 PM, Xavier Hernandez wrote:
>> [ snip ]
>> Also, I have a question, What are the chances of uuid collision if we
> take just 3 bits from the first byte ?
>>
>> 000 - Unspecified (can be anything).
>> 001 - Directory
>> 010 - Regular File
>> 011 - Special files (symlink, Block and Char devices, socket files etc).
>> {100 - 111} - Reserved.
> 
> This cannot be done. Since we are currently using random UUIDs, on
> average, one of every eight randomly generated ids will start with each
> one of the combinations.
> 
> Already existing GFIDs will be a problem when updating. The only thing
> that can avoid the problem is to create new GFIDs in a format that won't
> collide with existing ones, and this can only be done safely if we use
> the special fields of the UUID itself.
> 
>>
>> As a side-effect, it reduces the number of directories created as
> metadata inside the .glusterfs directory. (Will be 50% of the current
> load).
> 
> Maybe we can find a better way to store the GFIDs using the standard
> fields instead of relying on the first bits, which is not a valid solution.
> 
> We can think more about this.

How about using a variation of Version 5 UUIDs? Or define our own Version 6?

Strictly speaking, Version 5 hashes a NamespaceUUID + Name. That won't
work as we'd have too many collisions in the Name part. Instead we could
hash NamespaceUUID + Time + Name; or we could just use Time, like a
Version 1 UUID; or random bits, like a Version 4 UUID.

And store the bits described above in the clock-seq-low part of the GFID.

E.g.:
74738ff5-5367-5958-91ee-98fffdcd1876
              ^ '5' indicates Version 5
                   ^ required for Type 5: first two bits set to 1 and 0
                     ^^ clock-seq-low byte: 0001 for directory
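
To make the encoding concrete, a minimal sketch in C, assuming libuuid and
the type values listed earlier in the thread (the names are illustrative,
not an agreed API):

```c
#include <stdint.h>
#include <uuid/uuid.h>

/* type values proposed earlier in the thread */
enum gf_gfid_type {
        GF_GFID_UNSPEC  = 0,
        GF_GFID_DIR     = 1,
        GF_GFID_REG     = 2,
        GF_GFID_SPECIAL = 3,
};

/* Random GFID with the RFC 4122 version/variant fields set, and the
 * file type carried in the clock-seq-low byte (byte 9 of the uuid_t). */
static void
gf_gfid_generate(uuid_t gfid, enum gf_gfid_type type)
{
        uuid_generate_random(gfid);
        gfid[6] = (gfid[6] & 0x0f) | 0x50; /* version nibble := 5   */
        gfid[8] = (gfid[8] & 0x3f) | 0x80; /* variant bits   := 10  */
        gfid[9] = (uint8_t)type;           /* type in clock-seq-low */
}
```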
-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-infra] Cleaning up Jenkins

2017-04-20 Thread Kaleb S. KEITHLEY

On 04/20/2017 08:17 AM, Shyam wrote:

On 04/20/2017 01:27 AM, Nigel Babu wrote:

Hello folks,

As I was testing the Jenkins upgrade, I realized we store quite a lot
of old
builds on Jenkins that doesn't seem to be useful. I'm going to start
cleaning
them slowly in anticipation of moving Jenkins over to a CentOS 7
server in the
not-so-distant future.

* Old and disabled jobs will be deleted completely.
* Discard regression logs older than 90 days.
* Discard smoke and dev RPM logs older than 30 days.
* Discard post-build RPM jobs older than 10 days.
* Release job will be unaffected. We'll store all logs.


Above decisions seem fair enough to me. +1 from my end.


Agreed.





If we want to archive the old regression logs, I might looking at
storing them
some place that's not the Jenkins machine. If you have concerns or
comments,
please let me know.


Get rid of them after 30 days. I'd be amazed if anyone ever looks at them.

--

Kaleb


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] MINUTES: Gluster Community Bug Triage

2017-04-18 Thread Kaleb S. KEITHLEY

The minutes of the 18 April meeting:

Meeting summary
---
* roll call  (kkeithley, 12:01:10)

* next week's host  (kkeithley, 12:03:55)

* AIs from previous meetings  (kkeithley, 12:05:04)

* bug triage  (kkeithley, 12:06:27)
  * LINK: http://bit.ly/gluster-bugs-to-triage   (kkeithley, 12:07:25)

Meeting ended at 12:14:49 UTC.




Action Items






Action Items, by person
---
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kkeithley (24)
* jiffin (4)
* ndevos (3)
* zodbot (3)
* hgowtham (2)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot

Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-04-18/gluster_community_bug_triage.2017-04-18-12.01.html
Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2017-04-18/gluster_community_bug_triage.2017-04-18-12.01.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-04-18/gluster_community_bug_triage.2017-04-18-12.01.log.html

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Community Meeting minutes for 12 April 2017

2017-04-13 Thread Kaleb S. KEITHLEY


No meeting was held on 2017-03-29. Zero people responded to the roll 
call. (Possibly due to many being at the Vault storage conference.)


Also a very low turnout for this meeting — only five people and myself.

The next meeting is on 26 April 2017 at 15:00 UTC  (11AM EDT, 8AM PDT) 
or `date -d "15:00 UTC"` at the shell prompt for your local timezone.


Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-04-12/gluster_community_weekly_meeting.2017-04-12-15.00.html
Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2017-04-12/gluster_community_weekly_meeting.2017-04-12-15.00.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-04-12/gluster_community_weekly_meeting.2017-04-12-15.00.log.html


==
#gluster-meeting: Gluster community weekly meeting
==


Meeting started by kkeithley at 15:00:45 UTC. The full logs are
available at
https://meetbot.fedoraproject.org/gluster-meeting/2017-04-12/gluster_community_weekly_meeting.2017-04-12-15.00.log.html
.



Meeting summary
---
* roll call  (kkeithley, 15:01:19)

* next meeting's host  (kkeithley, 15:05:59)
  * kshlm will host in two weeks  (kkeithley, 15:07:21)

* old pending reviews  (kkeithley, 15:07:56)
  * ACTION: nigelb to start deleting old patches in gerrit  (kkeithley,
15:11:36)

* snapshot on btrfs  (kkeithley, 15:12:18)
  * JoeJulian will check with major on status of snapshot-on-btrfs
(kkeithley, 15:16:00)

* AIs from last meeting  (kkeithley, 15:16:28)

* jdarcy and nigelb to make reverts easier  (kkeithley, 15:17:06)

* nigelb will document packaging  (kkeithley, 15:17:29)

* shyam backport whine job and feetback  (kkeithley, 15:29:25)

* amye and vbellur to work on revised maintainers' draft?  (kkeithley,
  15:31:27)

* rafi will start discussion of abandoning old reviews in gerrit
  (kkeithley, 15:33:13)
  * shyam will send a 3.11 feature nag  (kkeithley, 15:35:49)
  * 3.12 and 4.0 scope and dates to be out by end of the week
(kkeithley, 15:36:03)
  * Software Defined Storage meetup tomorrow:
https://www.meetup.com/Seattle-Storage-Meetup/events/238684916/
(kkeithley, 15:36:36)

* Open Floor  (kkeithley, 15:42:54)

Meeting ended at 15:44:30 UTC.




Action Items

* nigelb to start deleting old patches in gerrit




Action Items, by person
---
* **UNASSIGNED**
  * nigelb to start deleting old patches in gerrit




People Present (lines said)
---
* kkeithley (75)
* ndevos (25)
* JoeJulian (19)
* shyam (17)
* kshlm (10)
* amye (7)
* zodbot (5)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] GlusterFS+NFS-Ganesha longevity cluster

2017-04-06 Thread Kaleb S. KEITHLEY


The longevity cluster has been updated to glusterfs-3.10.1 (from 3.8.5).

General information on the longevity cluster is at [1].

In the previous update sharding was enabled on the gluster volume. This 
time I have added a NFS-Ganesha NFS server on one server. Its memory 
usage is being sampled along with gluster's memory usage.


fsstress is used to run an I/O load over both the glusterfs and NFS mounts.

Snapshots of RSZ and VSZ are collected hourly for glusterd, the 
glusterfsd brick processes, the glusterfs SHD processes, and the 
NFS-Ganesha ganesha.nfsd process. There are also hourly statedumps of 
the glusterfsd brick processes and the nfs-ganesha gluster FSAL which 
uses gfapi.
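
For anyone wanting to reproduce the sampling, the collection loop amounts to
little more than this sketch (the actual longevity scripts may differ; the
interval and file name are illustrative):

```sh
# hourly RSZ/VSZ snapshots of the gluster and ganesha processes
while sleep 3600; do
    date
    ps -C glusterd,glusterfsd,glusterfs,ganesha.nfsd \
       -o comm=,pid=,rsz=,vsz=
done >> memory-samples.log
```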


You can see the collected data at [2], or follow the link on [1]

[1] https://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis
[2] 
https://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/


--

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] MINUTES: Gluster Community Bug Triage meeting at 12:00 UTC (today)

2017-03-28 Thread Kaleb S. KEITHLEY

Hi,

There was no meeing on 21 March.

The minutes of 28 March's meeting:

Meeting summary

roll call (kkeithley, 12:01:21)

agenda: https://github.com/gluster/glusterfs/wiki/Bug-Triage-Meeting 
(Saravanakmr, 12:04:06)

http://bit.ly/gluster-bugs-to-triage (Saravanakmr, 12:05:25)

next week's host (kkeithley, 12:05:25)
ACTION: Saravanakmr will host next week (kkeithley, 12:06:25)

action items (kkeithley, 12:06:37)
group triage (kkeithley, 12:07:49)
open floor (kkeithley, 12:30:29)



Meeting ended at 12:31:26 UTC (full logs).

Action items

Saravanakmr will host next week

Action Items

* ndevos need to decide on how to provide/use debug builds
* jiffin  needs to send the changes to check-bugs.py


Action Items, by person
---
* jiffin
  * jiffin  needs to send the changes to check-bugs.py
* ndevos
  * ndevos need to decide on how to provide/use debug builds

People present (lines said)

kkeithley (33)
ndevos (13)
Saravanakmr (8)
hgowtham (8)
zodbot (3)
rafi (2)

Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-03-28/gluster_community_bug_triage.2017-03-28-12.01.html
Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2017-03-28/gluster_community_bug_triage.2017-03-28-12.01.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2017-03-28/gluster_community_bug_triage.2017-03-28-12.01.log.html

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Dropping nightly build from download.gluster.org ?

2017-03-24 Thread Kaleb S. KEITHLEY
On 03/24/2017 09:39 AM, Niels de Vos wrote:
> On Thu, Mar 23, 2017 at 05:29:05PM -0400, Michael Scherer wrote:
>> Another example:
>> pub/gluster/glusterfs has various directory for versions of glusterfs,
>> but also do have libvirt, vagrant and nfs-ganesha, who are not version,
>> and might be rather served from a directory upstream (in fact,
>> nfs-ganesha and glusterfs-coreutils are also on
>> https://download.gluster.org/pub/gluster/ )
> 
> A cleanup is much appreciated! Maybe come up with a proposed directory
> structure and see from there what makes sense to keep or remove?
> 

What does "from a directory upstream" mean? There is no upstream
nfs-ganesha server.

There are versions of nfs-ganesha, even if there aren't very many, and
thus not to the same level of granularity as gluster.

And .../pub/gluster/glusterfs/nfs-ganesha is merely a symlink to
.../pub/gluster/nfs-ganesha — just a convenience. There are not two
copies of it. Likewise for -coreutils and the others.

I'm not opposed reorganizing the directories, but I don't believe
there's really anything wrong with it, per se, the way it is now.

-- 

Kaleb



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] GlusterFS 3.9.1 released

2017-01-18 Thread Kaleb S. KEITHLEY

GlusterFS 3.9.1 is a regular bug fix release for GlusterFS-3.9. The
release-notes for this release can be read here[1].

The source tar file and community provided packages[2] for several
popular Linux distributions can obtained from download.gluster.org[3].
The CentOS Storage SIG[4] packages are being built and will be available
soon in the centos-gluster39 repository.

Reminder: GlusterFS-3.9 is a Short Term Maintenance (STM) release and is
scheduled[5] to reach EOL shortly after the release of GlusterFS-3.10,
which scheduled for mid-February 2017.

[1]:
https://github.com/gluster/glusterfs/blob/release-3.9/doc/release-notes/3.9.1.md
[2]:
https://gluster.readthedocs.io/en/latest/Install-Guide/Community_Packages/
[3]: https://download.gluster.org/pub/gluster/glusterfs/3.9/3.9.1/
[4]: https://wiki.centos.org/SpecialInterestGroup/Storage
[5]: https://www.gluster.org/community/release-schedule/

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] release-3.9 branch is frozen pending release

2017-01-16 Thread Kaleb S. KEITHLEY
Hi,

Please do not merge any changes to the release-3.9 branch until after
the 3.9.1 release

Thanks

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Get GFID for a file through libgfapi

2017-01-10 Thread Kaleb S. KEITHLEY

On 01/10/2017 06:33 AM, Niels de Vos wrote:

On Tue, Jan 10, 2017 at 10:42:36AM +, Ankireddypalle Reddy wrote:

Neils,
 Thanks a lot. Will use this for extracting GFID.


Note that this should not be the final solution for fetching the GFID.
We will add an API for this to make it easier to use and have it well
defined and stable for the future.


A 3.10 feature?

Please write it up.

Thanks




Niels



Ram

-Original Message-
From: Niels de Vos [mailto:nde...@redhat.com]
Sent: Tuesday, January 10, 2017 4:26 AM
To: Ankireddypalle Reddy
Cc: Gluster Devel (gluster-devel@gluster.org); integrat...@gluster.org
Subject: Re: [Gluster-devel] Get GFID for a file through libgfapi

On Mon, Jan 09, 2017 at 08:58:07PM +, Ankireddypalle Reddy wrote:

Neils,
Thanks for pointing to the sample code. The use case is to use  
GFID as a unique key for maintaining indexing information about a file when it 
is being backed up from GlusterFS storage.  Extending GFAPI to extract GFID for 
a file would be great.
 As  per the example If I need to find GFID for a path p1/p2/p3/p4 
on a glusterfs volume then should I do a look up for every level?
 LOOKUP (/)->LOOKUP(p1)-> LOOKUP(p2)-> LOOKUP(p3)->
LOOKUP(p4)


No, that is not required. You can use glfs_h_lookupat() with the full path. 
Note that glfs_h_extract_handle() just does a memcpy() of the GFID into the 
> given (unsigned char*); the format is that of 'uuid_t'.

Attached is the modified test that shows the UUID without the need for a lookup 
of each component of the directory (a LOOKUP will be done by gfapi if needed).

  $ make CFLAGS="-lgfapi -luuid" resolve
  cc -lgfapi -luuidresolve.c   -o resolve
  $ ./resolve storage.example.com media resolve.log
  Starting libgfapi_fini
  glfs_set_volfile_server : returned 0
  glfs_set_logging : returned 0
  glfs_init : returned 0
  glfs_set_volfile_server : returned 0
  glfs_set_logging : returned 0
  glfs_init : returned 0
  glfs_h_extract_handle : returned 0
  UUID of /installation/CentOS-7-x86_64-Everything-1503-01.iso: 
b1b20352-c71c-4579-b678-a7a38b0e9a84
  glfs_fini : returned 0
  End of libgfapi_fini

  $ getfattr -n glusterfs.gfid -ehex 
/lan/storage.example.com/media/installation/CentOS-7-x86_64-Everything-1503-01.iso
  getfattr: Removing leading '/' from absolute path names
  # file: 
lan/storage.example.com/media/installation/CentOS-7-x86_64-Everything-1503-01.iso
  glusterfs.gfid=0xb1b20352c71c4579b678a7a38b0e9a84

HTH,
Niels
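
Pulling Niels' pointers together, a minimal standalone sketch (the volume
name, server, and path are placeholders, and the exact glfs_h_lookupat()
signature has varied across gfapi releases, so treat this as illustrative
rather than copy-paste ready):

```c
/* gfid-of.c -- sketch only; build roughly as:
 *   gcc gfid-of.c -o gfid-of -lgfapi -luuid */
#include <stdio.h>
#include <sys/stat.h>
#include <uuid/uuid.h>
#include <glusterfs/api/glfs.h>
#include <glusterfs/api/glfs-handles.h>

int
main(void)
{
        struct stat st;
        unsigned char gfid[GFAPI_HANDLE_LENGTH]; /* 16 bytes, uuid_t layout */
        char gfid_str[37];

        glfs_t *fs = glfs_new("media");          /* volume name: placeholder */
        if (!fs)
                return 1;
        glfs_set_volfile_server(fs, "tcp", "storage.example.com", 24007);
        if (glfs_init(fs))
                return 1;

        /* one lookup with the full path; no per-component LOOKUPs needed */
        struct glfs_object *obj =
                glfs_h_lookupat(fs, NULL, "/p1/p2/p3/p4", &st, 0);
        if (!obj)
                return 1;

        /* copies the 16-byte GFID into the buffer */
        glfs_h_extract_handle(obj, gfid, GFAPI_HANDLE_LENGTH);
        uuid_unparse(gfid, gfid_str);
        printf("GFID: %s\n", gfid_str);

        glfs_h_close(obj);
        glfs_fini(fs);
        return 0;
}
```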




Thanks and Regards,
Ram


-Original Message-
From: Niels de Vos [mailto:nde...@redhat.com]
Sent: Monday, January 09, 2017 3:39 PM
To: Ankireddypalle Reddy
Cc: Gluster Devel (gluster-devel@gluster.org); integrat...@gluster.org
Subject: Re: [Gluster-devel] Get GFID for a file through libgfapi

On Mon, Jan 09, 2017 at 05:53:03PM +, Ankireddypalle Reddy wrote:

Hi,
I am trying to extract the GFID for a file through libgfapi interface. 
When I try to extract the value of extended attribute glusterfs.gfid through 
libgfapi I get the errorno: 95.  This works for FUSE though. Is there a way to 
extract the GFID for a file through libgfapi.


It seems that this is a case where FUSE handles the xatts special. The
glusterfs.gfid and glusterfs.gfid.string (VIRTUAL_GFID_XATTR_KEY and
VIRTUAL_GFID_XATTR_KEY_STR) are specifically handled in 
xlators/mount/fuse/src/fuse-bridge.c.

There is a way to get the GFID, but it probably is rather a cumbersome
workaround for you. The handle-API is used heavily by NFS-Ganesha
(because NFS uses filehandles more than filenames), and extracts the
GFID from the 'struct glfs_object' with glfs_h_extract_handle(). A
basic example of how to obtain and extract the handle is in
https://github.com/gluster/glusterfs/blob/master/tests/basic/gfapi/bug
1291259.c

Could you explain the need for knowing the GFID in the application? We can 
extend gfapi with fetching the GFID if that would help you.

Niels


PS: we have a new integrat...@gluster.org where external projects can ask gfapi 
related questions. The gluster-devel list tends to be a little heavy on traffic 
for non-Gluster developers.

Re: [Gluster-devel] What is the answer to the 3.9.1 release question?

2017-01-09 Thread Kaleb S. KEITHLEY
On 01/09/2017 09:05 AM, Niels de Vos wrote:
> 
> I just filed https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.9.1
> Assuming that Kalebs offer still is valid, this bug has been assigned to
> him.

Yes, it's still valid.

3.9 updates are supposed to happen on the 20th of each month. (But maybe
I'll do it sooner, given that we missed in December.)

-- 

Kaleb



signature.asc
Description: OpenPGP digital signature
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] What is the answer to the 3.9.1 release question?

2017-01-06 Thread Kaleb S. KEITHLEY
On 01/06/2017 06:42 AM, Pranith Kumar Karampuri wrote:
> 
> 
> On Fri, Jan 6, 2017 at 4:51 PM, Kaleb Keithley wrote:
> 
> 
> Nothing?
> 
> 
> The reason I asked for 2 maintainers for the release is so that there
> will be load distribution. But unfortunately the pairing was bad, both
> of us are impacted by the same work which is leading to not enough time
> for upstream release maintenance. Last time I was loaded a bit less so
> took care of most of the things at the end with help from Amye and
> Vijay. But this time I am swamped with work too. Please suggest how we
> can get the release out.
> 
> May be Aravinda can add if he is a bit free to do this.

I'd certainly be willing to step in and help. I don't have time either
to do an extensive round of testing.

I'm not convinced that an STM release update needs huge amounts of
testing either. (But feel free to disagree with me. ;-))

If you and Aravinda are okay with it, I'll do some minimal testing, tag,
and release.

Just so we can get _something_ out!?!  What do you think?

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] What is the answer to the 3.9.1 release question?

2017-01-05 Thread Kaleb S. KEITHLEY


There was considerable discussion in the community meeting yesterday.

If we're not going to get one (any time soon) I'm contemplating a 
3.9.0-n+1 update in Fedora, Ubuntu Launchpad PPA, etc., that would 
consist of 3.9.0 plus all the commits to the release-3.9 branch to date.


Obviously I'd rather have an official 3.9.1 release by the maintainers.

--

Kaleb


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] static analysis updated

2016-12-19 Thread Kaleb S. KEITHLEY
On 12/19/2016 01:06 PM, Jeff Darcy wrote:
>> Thank you Kaleb. Shall we start somewhere in terms of automation?
>>
>> The cppcheck results look the shortest[1]. If we can commit to fixing all of
>> them in the next 1 month, I can kick off a non-voting smoke job. We'll make
>> it
>> vote after 1 month. I guess the cli and experimental xlator (and a few more
>> xlators, please check log) owners need to confirm that this is something we
>> can
>> target to fix. And commit to keeping fixed.
> 
> Before we get to automation, shouldn't we have a discussion about what
> "defects" we should filter out?  For example, we already have a problem
> with compilers spitting out warnings about unused variables in generated
> code, and then promoting those warnings to errors.  Fixing those is more
> trouble than it's worth.  Static analyzers are going to produce even
> more reports of Things That Don't Really Matter, along with a few about
> Actual Serious Problems.  It's a characteristic of the genre.  If we
> don't make any explicit decisions about priorities, it will actually
> take us longer to fix all of the null-pointer errors and array overflows
> and memory leaks as people wade through a sea of lesser defects.
> 

At the moment we're only talking about cppcheck. cppcheck is medium
interesting because a) it's the least picky of all of them, and b) it's
what Ubuntu looks at and wants fixed before they'll ship it (versus our
PPA packages.)

Here's what we're looking at now on the master branch:

[cli/src/cli.c:504]: (error) va_list 'ap' used before va_start() was called.
[cli/src/cli.c:530]: (error) va_list 'ap' used before va_start() was called.
[contrib/libexecinfo/execinfo.c:359]: (error) Memory leak: rval
[extras/create_new_xlator/new-xlator-tmpl.c:13]: (error) syntax error
[extras/test/test-ffop.c:27]: (error) Buffer overrun possible for long
command line arguments.
[libglusterfs/src/logging.c:2315]: (error) va_list 'ap' used before
va_start() was called.
[tests/basic/fops-sanity.c:63]: (error) Buffer overrun possible for long
command line arguments.
[tests/bugs/replicate/bug-1250170-fsync.c:39]: (error) Memory leak: buffer
[xlators/experimental/fdl/src/dump-tmpl.c] ->
[xlators/experimental/fdl/src/dump-tmpl.c]: (error) syntax error
[xlators/experimental/fdl/src/recon-tmpl.c] ->
[xlators/experimental/fdl/src/recon-tmpl.c]: (error) syntax error
[xlators/experimental/jbr-client/src/fop-template.c] ->
[xlators/experimental/jbr-client/src/fop-template.c]: (error) syntax error
[xlators/experimental/jbr-server/src/all-templates.c] ->
[xlators/experimental/jbr-server/src/all-templates.c]: (error) syntax error
[xlators/features/changelog/lib/src/gf-history-changelog.c:803]: (error)
Null pointer dereference: this
[xlators/mount/fuse/src/fuse-helpers.c:253]: (error) Resource leak: fp
[xlators/storage/posix/src/posix-helpers.c:1097]: (error) Invalid number
of character '{' when these macros are defined: 'GF_DARWIN_HOST_OS'.
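
For the va_list entries at the top of that list, the complaint is about
ordering only. The correct skeleton, as a sketch rather than the actual
cli.c/logging.c code:

```c
#include <stdarg.h>
#include <stdio.h>

void
log_fmt(FILE *out, const char *fmt, ...)
{
        va_list ap;

        va_start(ap, fmt); /* must run before ap is used in any way */
        vfprintf(out, fmt, ap);
        va_end(ap);
}
```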


and since it wasn't clear in Nigel's mail, as a voting test, it would
only fail if someone causes an incremental warning over the status quo,
where the above is the current status quo.

Also it would be a non-voting test until we agree it's ready to be a
voting test. Translation: it's informational only, until we decide it's
ready for prime time, which might be never.

It does beg the question of how we adjust the status quo upward if we
find something that's a false positive. cppcheck doesn't have many false
positives in my experience, so maybe it's a non-issue.

clang compile is also fairly forgiving, but clang analyze is too picky;
I don't expect to use it as a voting test. clang compile would also be
an "incremental" test.

coverity is a whole 'nuther ball of wax.

Waiting until we solve these problems just means we'll be waiting —
forever. IMO.

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] static analysis updated

2016-12-19 Thread Kaleb S. KEITHLEY
On 12/19/2016 12:56 PM, Kaleb S. KEITHLEY wrote:
> On 12/19/2016 12:49 PM, Nigel Babu wrote:
>> Thank you Kaleb. Shall we start somewhere in terms of automation?
>>
>> The cppcheck results look the shortest[1]. If we can commit to fixing all of
>> them in the next 1 month, I can kick off a non-voting smoke job. We'll make 
>> it
>> vote after 1 month. I guess the cli and experimental xlator (and a few more
>> xlators, please check log) owners need to confirm that this is something we 
>> can
>> target to fix. And commit to keeping fixed.
> 
> Hi,
> 
> It would be great to fix those, but——
> 
> IIRC we discussed a two compile before-and-after test, i.e. compile the
> tree before and after applying the patch. If the second compile has more
> (or different) warnings than the first, then the test scores a fail.
> 
> If we do that we don't have to wait (as long) to make it a voting test.
> 
> Once we have a voting test, then maintainers have no choice but to keep
> things fixed. ;-)

And we can work on fixing the existing warnings in parallel.

-- 

Kaleb



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] static analysis updated

2016-12-19 Thread Kaleb S. KEITHLEY
On 12/19/2016 12:49 PM, Nigel Babu wrote:
> Thank you Kaleb. Shall we start somewhere in terms of automation?
> 
> The cppcheck results look the shortest[1]. If we can commit to fixing all of
> them in the next 1 month, I can kick off a non-voting smoke job. We'll make it
> vote after 1 month. I guess the cli and experimental xlator (and a few more
> xlators, please check log) owners need to confirm that this is something we 
> can
> target to fix. And commit to keeping fixed.

Hi,

It would be great to fix those, but——

IIRC we discussed a two compile before-and-after test, i.e. compile the
tree before and after applying the patch. If the second compile has more
(or different) warnings than the first, then the test scores a fail.

If we do that we don't have to wait (as long) to make it a voting test.

Once we have a voting test, then maintainers have no choice but to keep
things fixed. ;-)
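
In shell terms, the before-and-after test amounts to roughly this sketch
(paths and commands are illustrative, not the actual smoke job):

```sh
#!/bin/sh
# fail only when the patch introduces new compiler warnings
git checkout -q HEAD^
make -s clean >/dev/null; make -s 2> before.log
git checkout -q -
make -s clean >/dev/null; make -s 2> after.log

before=$(grep -c 'warning:' before.log)
after=$(grep -c 'warning:' after.log)
if [ "$after" -gt "$before" ]; then
    echo "FAIL: $((after - before)) new warning(s) introduced"
    exit 1
fi
```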


> 
> Thoughts?
> 
> 
> [1]: 
> https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-cppcheck/2016-12-19-2bb23136/cppcheck.txt
> 
> 
> On Mon, Dec 19, 2016 at 11:33:16AM -0500, Kaleb S. KEITHLEY wrote:
>> Hi,
>>
>> Nightly runs of static analyzers have been migrated to a new host
>> (inside Red Hat) running Fedora 25. (The old host was running F23.)
>>
>> With that update comes clang-3.8 (from 3.7) and cppcheck-1.75 (from 1.70).
>>
>> Independent of that, coverity was updated from 7.7.0 to 8.6.0.
>>
>> As always the results of coverity scan, clang compile, clang analyzer,
>> and cppcheck are available at
>> https://download.gluster.org/pub/gluster/glusterfs/static-analysis/
>>
>> Longer term plans still include migrating these nightly tasks to the
>> "community cage" and better integration with Gerrit and Jenkins.
>>
>> --
>>
>> Kaleb
> 
> --
> nigelb
> 

-- 

Kaleb



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Notice: https://download.gluster.org:/pub/gluster/glusterfs/LATEST has changed

2016-11-16 Thread Kaleb S. KEITHLEY
Hi,

As some of you may have noticed, GlusterFS-3.9.0 was released. Watch
this space for the official announcement soon.

If you are using Community GlusterFS packages from download.gluster.org
you should check your package metadata to be sure that an update doesn't
inadvertently update your system to 3.9.

There is a new symlink:
https://download.gluster.org:/pub/gluster/glusterfs/LTM-3.8 which will
remain pointed at the GlusterFS-3.8 packages. Use this instead of
.../LATEST to keep getting 3.8 updates without risk of accidentally
getting 3.9. There is also a new LTM-3.7 symlink that you can use for
3.7 updates.
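
For example, on a yum/dnf setup using the repo files from
download.gluster.org, the switch is a one-line edit of the baseurl (the repo
file name here is an assumption; adjust it to whatever your machine uses):

```sh
# follow the 3.8 stream instead of whatever LATEST points at
sed -i 's,glusterfs/LATEST,glusterfs/LTM-3.8,' \
    /etc/yum.repos.d/glusterfs-epel.repo
```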

Also note that there is a new package signing key for the 3.9 packages
that are on download.gluster.org. The old key remains the same for 3.8
and earlier packages. New releases of 3.8 and 3.7 packages will continue
to use the old key.

GlusterFS-3.9 is the first "short term" release; it will be supported
for approximately six months. 3.7 and 3.8 are Long Term Maintenance
(LTM) releases. 3.9 will be followed by 3.10; 3.10 will be a LTM release
and 3.9 and 3.7 will be End-of-Life (EOL) at that time.


-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Gluster Test Thursday - Release 3.9

2016-10-27 Thread Kaleb S. KEITHLEY


Ack on nfs-ganesha bits. Tentative ack on gnfs bits.

Conditional ack on build, see:
  http://review.gluster.org/15726
  http://review.gluster.org/15733
  http://review.gluster.org/15737
  http://review.gluster.org/15743

There will be backports to 3.9 of the last three soon. Timely reviews of 
the last three will accelerate the availability of backports.


On 10/26/2016 10:34 AM, Aravinda wrote:

Gluster 3.9.0rc2 tarball is available here
http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.9.0rc2.tar.gz

regards
Aravinda

On Tuesday 25 October 2016 04:12 PM, Aravinda wrote:

Hi,

Since Automated test framework for Gluster is in progress, we need
help from Maintainers and developers to test the features and bug
fixes to release Gluster 3.9.

In last maintainers meeting Shyam shared an idea about having a Test
day to accelerate the testing and release.

Please participate in testing your component(s) on Oct 27, 2016. We
will prepare the rc2 build by tomorrow and share the details before
Test day.

RC1 Link:
http://www.gluster.org/pipermail/maintainers/2016-September/001442.html
Release Checklist:
https://public.pad.fsfe.org/p/gluster-component-release-checklist


Thanks and Regards
Aravinda and Pranith



___
maintainers mailing list
maintain...@gluster.org
http://www.gluster.org/mailman/listinfo/maintainers


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Gluster Test Thursday - Release 3.9

2016-10-25 Thread Kaleb S. KEITHLEY
On 10/25/2016 12:11 PM, Niels de Vos wrote:
> On Tue, Oct 25, 2016 at 07:51:47AM -0400, Kaleb S. KEITHLEY wrote:
>> On 10/25/2016 06:46 AM, Atin Mukherjee wrote:
>>>
>>>
>>> On Tue, Oct 25, 2016 at 4:12 PM, Aravinda <avish...@redhat.com> wrote:
>>>
>>> Hi,
>>>
>>> Since Automated test framework for Gluster is in progress, we need
>>> help from Maintainers and developers to test the features and bug
>>> fixes to release Gluster 3.9.
>>>
>>> In last maintainers meeting Shyam shared an idea about having a Test
>>> day to accelerate the testing and release.
>>>
>>> Please participate in testing your component(s) on Oct 27, 2016. We
>>> will prepare the rc2 build by tomorrow and share the details before
>>   ^^^
>>> Test day.
>>>
>>> RC1 Link:
>>> http://www.gluster.org/pipermail/maintainers/2016-September/001442.html
>>>
>>>
>>> I don't think testing RC1 would be ideal as 3.9 head has moved forward
>>> with significant number of patches. I'd recommend of having an RC2 here.
>>>
>>
>> BTW, please tag RC2 as 3.9.0rc2 (versus 3.9rc2).  It makes building
>> packages for Fedora much easier.
>>
>> I know you were following what was done for 3.8rcX. That was a pain. :-}
> 
> Can you explain what the problem is with 3.9rc2 and 3.9.0? The huge
> advantage is that 3.9.0 is seen as a version update to 3.9rc2. When
> 3.9.0rc2 is used, 3.9.0 is *not* an update for that, and rc2 packages
> will stay installed until 3.9.1 is released...
> 
> You can check this easily with the rpmdev-vercmp command:
> 
>$ rpmdev-vercmp 3.9.0rc2 3.9.0
>3.9.0rc2 > 3.9.0
>$ rpmdev-vercmp 3.9rc2 3.9.0
>3.9rc2 < 3.9.0

Those aren't really very realistic RPM NVRs IMO.

> 
> So, at least for RPM packaging, 3.9rc2 is recommended, and 3.9.0rc2 is
> problematic.

That's not the only thing recommended.

Last I knew, one of several things that are recommended is, e.g.,
3.9.0-0.2rc2; 3.9.0-1 > 3.9.0-0.2rc2.

The RC (and {qa,alpha,beta}) packages (that I've) built for Fedora for
several years have had NVRs in that form.

This scheme was what was suggested to me on the fedora-devel mailing
list several years ago.

When RCs are tagged as 3.9rc1, I have to make non-trivial and
counter-intuitive changes to the .spec file to build packages with NVRs
like 3.9.0-0.XrcY. If they are tagged 3.9.0rc1, the changes are much
more straightforward and simpler.
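
For completeness, the same rpmdev-vercmp check as above shows that form
ordering correctly; the pre-release Release field sorts below the final
build:

```
$ rpmdev-vercmp 3.9.0-0.2rc2 3.9.0-1
3.9.0-0.2rc2 < 3.9.0-1
```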

-- 

Kaleb



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Minutes from today's Gluster Community Bug Triage meeting (Oct 25 2016)

2016-10-25 Thread Kaleb S. KEITHLEY

There were no meetings on Oct 11 or Oct 18 due to small number of
attendees. There is no meeting next week (Nov 1) due to holiday in
Bangalore. The next meeting will be Nov 8th.

Please find the minutes of today's Gluster Community Bug Triage meeting
at the links posted below.

Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-10-25/bug_triage.2016-10-25-12.00.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-10-25/bug_triage.2016-10-25-12.00.txt
Log:
https://meetbot.fedoraproject.org/gluster-meeting/2016-10-25/bug_triage.2016-10-25-12.00.log.html


#gluster-meeting: bug triage



Meeting started by kkeithley at 12:00:07 UTC. The full logs are
available at
https://meetbot.fedoraproject.org/gluster-meeting/2016-10-25/bug_triage.2016-10-25-12.00.log.html
.



Meeting summary
---
* roll call  (kkeithley, 12:00:20)

* Action Items  (kkeithley, 12:02:50)
  * Saravanakmr will host  (kkeithley, 12:03:55)
  * Saravanakmr will host on 2016/11/8 ?  (kkeithley, 12:06:10)
  * bug triage on 2016/11/1 is cancelled due to holiday in Bangalore
(kkeithley, 12:08:43)
  * LINK: https://public.pad.fsfe.org/p/gluster-bugs-to-triage
(kkeithley, 12:11:18)

Meeting ended at 12:29:47 UTC.




Action Items






Action Items, by person
---
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kkeithley (45)
* Saravanakmr (11)
* jiffin (8)
* hgowtham (6)
* zodbot (3)
* ashiq (1)
* rafi (1)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot



-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-Maintainers] Gluster Test Thursday - Release 3.9

2016-10-25 Thread Kaleb S. KEITHLEY
On 10/25/2016 06:46 AM, Atin Mukherjee wrote:
> 
> 
> On Tue, Oct 25, 2016 at 4:12 PM, Aravinda wrote:
> 
> Hi,
> 
> Since Automated test framework for Gluster is in progress, we need
> help from Maintainers and developers to test the features and bug
> fixes to release Gluster 3.9.
> 
> In last maintainers meeting Shyam shared an idea about having a Test
> day to accelerate the testing and release.
> 
> Please participate in testing your component(s) on Oct 27, 2016. We
> will prepare the rc2 build by tomorrow and share the details before
  ^^^
> Test day.
> 
> RC1 Link:
> http://www.gluster.org/pipermail/maintainers/2016-September/001442.html
> 
> 
> 
> I don't think testing RC1 would be ideal as 3.9 head has moved forward
> with significant number of patches. I'd recommend of having an RC2 here.
> 

BTW, please tag RC2 as 3.9.0rc2 (versus 3.9rc2).  It makes building
packages for Fedora much easier.

I know you were following what was done for 3.8rcX. That was a pain. :-}

3.7 and 3.6 were all 3.X.0betaY or 3.X.0qaY.

If for some reason 3.9 doesn't get released soon, I'll need to package
the RC to get 3.9 into Fedora 25 before its GA and having a packaging
friendly tag will make it that much easier for me to get that done.

(See the community packaging matrix I sent to the mailing lists and/or
at
http://gluster.readthedocs.io/en/latest/Install-Guide/Community_Packages/)

N.B. This will serve as the email part of the RC tagging discussion
action item I have.

Thanks.


-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Weekly Community Meeting 19 Oct 2016 - Minutes

2016-10-19 Thread Kaleb S. KEITHLEY
Hi all,

Thank you to all the participants in today's community meeting. The next
meeting is scheduled next week (October 26th) at #gluster-meeting on
freenode.

The minutes, logs and a summary for today's meeting can be found below.

Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-10-19/gluster_community.2016-10-19-12.00.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-10-19/gluster_community.2016-10-19-12.00.txt
Log:
https://meetbot.fedoraproject.org/gluster-meeting/2016-10-19/gluster_community.2016-10-19-12.00.log.html

===
#gluster-meeting: Gluster Community
===


Meeting started by kkeithley at 12:00:49 UTC. The full logs are
available at
https://meetbot.fedoraproject.org/gluster-meeting/2016-10-19/gluster_community.2016-10-19-12.00.log.html
.



Meeting summary
---
* roll call  (kkeithley, 12:01:06)

* : host for next week  (kkeithley, 12:05:13)

* Gluster 4.0  (kkeithley, 12:05:32)
  * LINK:
http://www.gluster.org/pipermail/gluster-devel/2016-October/051198.html
(post-factum, 12:06:47)
  * LINK:
https://www.gluster.org/pipermail/gluster-devel/2016-October/051198.html
(kshlm, 12:06:47)
  *
https://www.gluster.org/pipermail/gluster-devel/2016-October/051198.html
(kkeithley, 12:07:11)

* Gluster 3.9  (kkeithley, 12:12:09)
  * ACTION: atinm to poke 3.9 release mgrs to finish and release
(kkeithley, 12:20:51)

* Gluster 3.8  (kkeithley, 12:21:15)
  * LINK:
https://www.gluster.org/pipermail/maintainers/2016-October/001562.html
(kshlm, 12:22:30)
  * The 3.8.5 release is planned to get announced later today. 3.8.6
should be following the normal schedule of approx. 10th of November.
(kkeithley, 12:22:32)
  *
https://www.gluster.org/pipermail/maintainers/2016-October/001562.html
(kkeithley, 12:22:40)

* Gluster 3.7  (kkeithley, 12:23:18)
  * LINK:
https://www.gluster.org/pipermail/gluster-devel/2016-October/051187.html
(kshlm, 12:24:20)

* Gluster 3.6  (kkeithley, 12:27:26)
  * ACTION: kshlm or samikshan will send 3.7.17 reminder  (kkeithley,
12:27:48)

* Infrastructure  (kkeithley, 12:28:35)

* : nfs-ganesha  (kkeithley, 12:30:19)

* Samba  (kkeithley, 12:32:30)
  * md-cache improvements speed up samba metadata-heavy workloads a
*lot* on gluster  (kkeithley, 12:36:06)
  * ACTION: obnox to starting discussion of Samba memory solutions
(kkeithley, 12:45:49)

* Heketi  (kkeithley, 12:46:33)

* last week's action items  (kkeithley, 12:51:45)
  * ACTION: kkeithley to document RC tagging guidelines in release steps
document  (kkeithley, 12:54:45)

* open floor  (kkeithley, 12:55:17)
  * ACTION: : jdarcy to discuss dbench smoke test failures on email
(kkeithley, 13:03:55)
  * Go bindings to gfapi are moving to github/gluster  (kkeithley,
13:05:12)

* recurring topics  (kkeithley, 13:05:45)
  * ACTION: kshlm to send email about go bindings to gfapi  (kkeithley,
13:06:04)

Meeting ended at 13:07:04 UTC.




Action Items

* atinm to poke 3.9 release mgrs to finish and release
* kshlm or samikshan will send 3.7.17 reminder
* obnox to starting discussion of Samba memory solutions
* kkeithley to document RC tagging guidelines in release steps document
* : jdarcy to discuss dbench smoke test failures on email
* kshlm to send email about go bindings to gfapi




Action Items, by person
---
* atinm
  * atinm to poke 3.9 release mgrs to finish and release
* jdarcy
  * : jdarcy to discuss dbench smoke test failures on email
* kkeithley
  * kkeithley to document RC tagging guidelines in release steps
document
* kshlm
  * kshlm or samikshan will send 3.7.17 reminder
  * kshlm to send email about go bindings to gfapi
* obnox
  * obnox to starting discussion of Samba memory solutions
* samikshan
  * kshlm or samikshan will send 3.7.17 reminder
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kkeithley (105)
* obnox (56)
* kshlm (35)
* post-factum (24)
* jdarcy (18)
* atinm (10)
* shyam (10)
* misc (4)
* skoduri (3)
* samikshan (3)
* zodbot (3)
* karthik_us (1)
* msvbhat (1)
* Saravanakmr (1)
* jiffin (1)
* amye (1)
* rjoseph (1)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot


-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Missing Debian packages

2016-10-18 Thread Kaleb S. KEITHLEY

I'll just leave this here

  https://download.gluster.org/pub/gluster/glusterfs/DOWNLOAD.README

On 10/17/2016 06:56 PM, Shane StClair wrote:
> Hi all,
> 
> Debian packages are missing on download.gluster.org
>  for almost all versions of Gluster. Examples:
> 
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/
> https://download.gluster.org/pub/gluster/glusterfs/3.8/LATEST/Debian/
> https://download.gluster.org/pub/gluster/glusterfs/3.8/3.8.5/Debian/
> https://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/Debian/
> 
> According to the documentation these apt repo URLs are correct:
> 
> https://gluster.readthedocs.io/en/latest/Install-Guide/Install/#for-debian
> 
> This is breaking apt updates on our servers with the following errors:
> 
> ```
> $ sudo apt-get update
> ...
> Err http://download.gluster.org jessie/main amd64 Packages  
>
>   404  Not Found [IP: 23.253.208.221 80]
> ...
> W: Failed to fetch
> http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/Debian/jessie/apt/dists/jessie/main/binary-amd64/Packages
>  404  Not Found [IP: 23.253.208.221 80]
> 
> E: Some index files failed to download. They have been ignored, or old
> ones used instead.
> $ echo $?
> 100
> ```
> 
> I believe this problem happened recently because our deployments ran
> normally until this morning.
> 
> Thanks,
> Shane
> 
> 
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Weekly Community Meeting 12 Oct 2016 - Minutes

2016-10-13 Thread Kaleb S. KEITHLEY

Hi all,

Thank you to the five participants in today's community meeting. The 
next meeting is scheduled next week (October 19th) at #gluster-meeting.


The minutes, logs and a summary for today's meeting can be found below.

Minutes: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-10-12/gluster_community_meeting.2016-10-12-12.00.html
Minutes (text): 
https://meetbot.fedoraproject.org/gluster-meeting/2016-10-12/gluster_community_meeting.2016-10-12-12.00.txt
Log: 
https://meetbot.fedoraproject.org/gluster-meeting/2016-10-12/gluster_community_meeting.2016-10-12-12.00.log.html



Meeting summary
---
* next week's chair  (kkeithley, 12:06:28)
  * kkeithley will chair the meeting on 19 October (will take timeout
from NFS Bake-a-thon)  (kkeithley, 12:08:12)

* GlusterFS 4.0  (kkeithley, 12:08:20)

* GlusterFS 3.9 update  (kkeithley, 12:09:15)

* GlusterFS 3.8 update  (kkeithley, 12:09:44)
  * 3.8.5 on 13 October maybe  (kkeithley, 12:11:13)

* 3.7  (kkeithley, 12:13:46)

* infrastructure  (kkeithley, 12:19:18)

* NFS Ganesha  (kkeithley, 12:25:23)

* Samba  (kkeithley, 12:27:04)

* Action Items from last week  (kkeithley, 12:28:04)

* Open Floor  (kkeithley, 12:29:30)
  * LINK:

http://gluster.readthedocs.io/en/latest/Install-Guide/Community_Packages/
- community packages in the left menu  (ndevos, 12:32:08)

* the usual recurring topics  (kkeithley, 12:35:30)

Meeting ended at 12:36:51 UTC.




Action Items






Action Items, by person
---
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kkeithley (66)
* ndevos (20)
* post-factum (17)
* atinm (7)
* zodbot (3)
* skoduri (2)
* misc (2)


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Community Gluster Package Matrix, updated

2016-09-28 Thread Kaleb S. KEITHLEY
Hi,

With the imminent release of 3.9 in a week or two, here's a summary of
the Community packages for various Linux distributions that are
tentatively planned going forward.

Note that 3.6 will reach end-of-life (EOL) when 3.9 is released, and no
further releases will be made on the release-3.6 branch.

N.B. Fedora 23 and Ubuntu Wily are nearing EOL.

(I haven't included NetBSD or FreeBSD here, only because they're not
Linux and we have little control over them.)

An X means packages are planned to be in the repository.
A — means we have no plans to build the version for the repository.
d.g.o means packages will (also) be provided on https://download.gluster.org
DNF/YUM means the packages are included in the Fedora updates or
updates-testing repos.



                                          3.9      3.8      3.7        3.6
CentOS Storage SIG¹   el5                 —        —        d.g.o      d.g.o
                      el6                 X        X        X, d.g.o   X, d.g.o
                      el7                 X        X        X, d.g.o   X, d.g.o

Fedora                F23                 —        d.g.o    DNF/YUM    d.g.o
                      F24                 d.g.o    DNF/YUM  d.g.o      d.g.o
                      F25                 DNF/YUM  d.g.o    d.g.o      d.g.o
                      F26                 DNF/YUM  d.g.o    d.g.o      d.g.o

Ubuntu Launchpad²     Precise (12.04 LTS) —        —        X          X
                      Trusty (14.04 LTS)  —        X        X          X
                      Wily (15.10)        —        X        X          X
                      Xenial (16.04 LTS)  X        X        X          X
                      Yakkety (16.10)     X        X        —          —

Debian                Wheezy (7)          —        —        d.g.o      d.g.o
                      Jessie (8)          d.g.o    d.g.o    d.g.o      d.g.o
                      Stretch (9)         d.g.o    d.g.o    d.g.o      d.g.o

SuSE Build System³    OpenSuSE13          X        X        X          X
                      Leap 42.X           X        X        X          —
                      SLES11              —        —        —          X
                      SLES12              X        X        X          X

¹ https://wiki.centos.org/SpecialInterestGroup/Storage
² https://launchpad.net/~gluster
³ https://build.opensuse.org/project/subprojects/home:kkeithleatredhat

-- Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] GlusterFs upstream bugzilla components Fine graining

2016-09-28 Thread Kaleb S. KEITHLEY
On 09/28/2016 01:54 AM, Muthu Vigneshwaran wrote:
> Hi,
> as we find that the above-mentioned components are either
> deprecated or use GitHub for bugs/issues filing, and also planned to add
> the following components as the main component
> 
> - common-ha

common-ha is to (eventually) be replaced with storhaug, which I believe
uses github issues.

But if you want to keep common-ha for now, that's okay with me.

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] GlusterFs upstream bugzilla components Fine graining

2016-09-28 Thread Kaleb S. KEITHLEY
On 09/28/2016 02:10 AM, Soumya Koduri wrote:
> Hi,
> 
> On 09/28/2016 11:24 AM, Muthu Vigneshwaran wrote:
> 
>> +- Component GlusterFS
>> |
>> |
>> |  +Subcomponent nfs
> 
> Maybe its time to change it to 'gluster-NFS/native NFS'. Niels/Kaleb?

IIRC there is a separate nfs-ganesha subcomponent already. Correct?

But I agree with calling it gluster-nfs, or anything that makes the
distinction between gluster-nfs and nfs-ganesha clear.

> 
>> +- Component gdeploy
>>
>> |  |
>>
>> |  +Subcomponent samba
>>
>> |  +Subcomponent hyperconvergence

I don't know what hyper-convergence is in the context of gdeploy.

>>
>> |  +Subcomponent RHSC 2.0
> 
> gdeploy has support for 'ganesha' configuration as well. Also would it
> help if we have additional subcomponent 'glusterfs' as well, may be as
> the default one (any new support being added can fall under that
> category)? Request Sac to comment.

Yes, we need a ganesha or nfs-ganesha subcomponent here.

Thanks

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] make install again compiling source

2016-09-19 Thread Kaleb S. KEITHLEY
On 09/19/2016 07:48 AM, Anoop C S wrote:
> On Mon, 2016-09-19 at 17:07 +0530, Avra Sengupta wrote:
>> Hi,
>>
>> I ran "make -j" on the latest master, followed by make install. The
>> make 
>> install, by itself is doing a fresh compile every time (and totally 
>> ignoring the make i did before it).
> 
> Yeah..hit the same issue for me too.
> 
>> Is there any recent change, which 
>> would cause this. Thanks.
>>

It's probably related to the out-of-tree build changes that were merged
over the weekend.

I'm looking at it now.


-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Libunwind

2016-09-08 Thread Kaleb S. KEITHLEY

On 09/08/2016 10:33 AM, Vijay Bellur wrote:

On Thu, Sep 8, 2016 at 9:07 AM, Jeff Darcy  wrote:

In a few places in our code (e.g. gf_log_callingfn) we use the "backtrace" and 
"backtrace_symbols" functions from libc to log stack traces.  Unfortunately, these 
functions don't seem very smart about dynamically loaded libraries - such as translators, where 
most of our code lives.  They give us the object plus offset from where the object was loaded into 
memory, which isn't that easy to turn into a function name (let alone a file and line number).  It 
seems like libunwind can do better, getting at least to the function name.  AFAICT it's supported 
and packaged on all of our platforms, though there might be version differences.  Newer versions 
can supposedly get to file and line, which would be even better.  Before I get further into this, 
two questions for all of you:

(1) Has somebody already gone down this path?  Does it work?

(2) Are there any other reasons we wouldn't want to switch?



Cannot think of any. The BSD platforms seem to have libunwind and Mac
OS X doesn't have it apparently [1].

I have been thinking of fixing the recent Mac OS X compilation
problems and can address issues related to libunwind as part of that
activity.


I have libunwind.h in the XCode headers and /usr/lib/libunwind.dylib.

Brew also has a libunwind-headers package, but I didn't look at what it 
provides vis-a-vis the XCode headers.
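
For reference, a minimal sketch of the libunwind approach Jeff describes
(link with -lunwind; error handling elided):

```c
#define UNW_LOCAL_ONLY
#include <libunwind.h>
#include <stdio.h>

/* Walk the current stack and print function name + offset, which
 * backtrace_symbols() cannot resolve for dlopen()ed xlators. */
static void
print_backtrace_unwind(void)
{
        unw_context_t ctx;
        unw_cursor_t cursor;
        unw_word_t off;
        char name[256];

        unw_getcontext(&ctx);
        unw_init_local(&cursor, &ctx);
        while (unw_step(&cursor) > 0) {
                if (unw_get_proc_name(&cursor, name, sizeof(name), &off) == 0)
                        fprintf(stderr, "%s+0x%lx\n", name,
                                (unsigned long)off);
                else
                        fprintf(stderr, "?? (no symbol)\n");
        }
}
```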


--

Kaleb

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Anyone wants to maintain Mac-OSX port of gluster?

2016-09-06 Thread Kaleb S. KEITHLEY

On 09/06/2016 08:03 AM, Emmanuel Dreyfus wrote:

On Tue, Sep 06, 2016 at 07:30:08AM -0400, Kaleb S. KEITHLEY wrote:

Mac OS X doesn't build at the present time because its sed utility (used in
the xdrgen/rpcgen part of the build) doesn't support the (linux compatible)
'-r' command line option. (NetBSD and FreeBSD do.)

(There's an easy fix)


Easy fix, replace sed -r by $SED_R and
SED_R="sed -r" on Linux vs SED_R="sed -E" on BSDs, including OSX.



Even easier: don't use an extended regex; then you won't need `sed -r` 
or `sed -E`.


See the regex I used in 
http://review.gluster.org/#/c/14085/14/rpc/xdr/src/Makefile.am  (line 48)


--

Kaleb

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Anyone wants to maintain Mac-OSX port of gluster?

2016-09-06 Thread Kaleb S. KEITHLEY

On 09/02/2016 03:49 PM, Pranith Kumar Karampuri wrote:

hi,
 As per MAINTAINERS file this port doesn't have maintainer. If you
want to take up the responsibility of maintaining the port please let us
know how you want to go about doing it and what should be the checklist
of things that should be done before every release upstream. It is
extremely healthy to have more than one maintainer for the port. Even if
multiple people already responded and you still want to be part of it,
don't feel shy to respond. More the merrier.


Mac OS X doesn't build at the present time because its sed utility (used 
in the xdrgen/rpcgen part of the build) doesn't support the (linux 
compatible) '-r' command line option. (NetBSD and FreeBSD do.)


(There's an easy fix)

--

Kaleb


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Checklist for glusterfs packaging/build for upstream release

2016-09-06 Thread Kaleb S. KEITHLEY

On 09/02/2016 03:40 PM, Pranith Kumar Karampuri wrote:

hi,
  In the past we have had issues where some of the functionality
didn't work on debian/ubuntu because 'glfsheal' binary was not packaged.


It was? That seems strange to me because our Debian packaging is less 
"selective" than our RPM.


Less selective in that it wildcards pretty much everything that gets 
installed, unlike the Fedora/RHEL/CentOS packaging.


But perhaps I'm just not remembering this particular incident.


What do you guys as packaging/build maintainers on different distros
suggest that we do to make sure we catch such mistakes before the
releases are made?


Short of a trial build of Debian packages before the release, coupled 
with some kind of audit of what's in them, and compare that to what's in 
the RPMs?


And for the record, I'm trying not to be the packaging maintainer for so 
many different distributions.




Please suggest them here so that we can add them at
https://public.pad.fsfe.org/p/gluster-component-release-checklist after
the discussion is complete


And BTW, at some point we should compare our current Debian/Ubuntu 
package files with Patrick's and get them back in sync again if they 
have diverged.


--

Kaleb


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] adding smoke test to catch printf-style format string errors

2016-08-31 Thread Kaleb S. KEITHLEY

Hi,

FYI, we are adding a new smoke test that builds on a 32-bit platform to
catch printf-style format string errors.

We have cleaned up these errors in the past, but they're creeping in
again in new code and fixes.

The test will start out as a non-voting test. After we get the source
cleaned up again the test will be changed to a voting test, i.e. smoke
will fail if there are format string errors.
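
To illustrate the class of error the test catches (a made-up example,
not code from the tree): size_t and uint64_t have different underlying
types on 32-bit targets, so gcc's -Wformat flags the commented-out
printf below, while the %zu/PRIu64 forms are portable:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        size_t   len = 42;
        uint64_t off = 1024;

        /* Wrong: quiet on 64-bit Linux, but on 32-bit size_t is
         * unsigned int and uint64_t is unsigned long long, so
         * -Wformat reports both arguments. */
        /* printf("len=%lu off=%lu\n", len, off); */

        /* Right: %zu for size_t, PRIu64 for uint64_t. */
        printf("len=%zu off=%" PRIu64 "\n", len, off);
        return 0;
    }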

Thanks,

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] CFP Gluster Developer Summit

2016-08-24 Thread Kaleb S. KEITHLEY
On 08/23/2016 07:29 PM, Amye Scavarda wrote:
> 
> 
> On Tue, Aug 23, 2016 at 7:41 AM, Kaleb S. KEITHLEY <kkeit...@redhat.com> wrote:
> 
> On 08/17/2016 09:56 AM, Kaleb S. KEITHLEY wrote:
>> I propose to present on one or more of the following topics:
>>
>> * NFS-Ganesha Architecture, Roadmap, and Status, Jiffin Thotton 
>> copresenter.
>>
>> * Architecture of the High Availability
>> Solution for Ganesha and Samba - detailed walk through and demo of
>> current implementation - difference between the current and
>> storhaug implementations
>>
>> * High Level Overview of autoconf/automake/libtool configuration 
>> (I gave a presentation in BLR in 2015, so this is perhaps less 
>> interesting?)
>>
>> * Packaging Howto — RPMs and .debs (maybe a breakout session or a 
>> BOF. Would like to (re)enlist volunteers to help build packages.)
> 
> 
> Note addition of Jiffin as copresenter.  Thank you.
> 
> 
> --
> 
> 
> Kaleb
> 
> 
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 
> 
> Out of these three? I think there's three, which one would you be most
> interested in giving?

Well four, but

I'd prefer to do the HA architecture and then let Jiffin give the
Ganesha architecture and roadmap.


-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] CFP Gluster Developer Summit

2016-08-23 Thread Kaleb S. KEITHLEY

On 08/17/2016 09:56 AM, Kaleb S. KEITHLEY wrote:

I propose to present on one or more of the following topics:

* NFS-Ganesha Architecture, Roadmap, and Status,Jiffin Thotton copresenter.
* Architecture of the High Availability Solution for Ganesha and Samba
  - detailed walk through and demo of current implementation
  - difference between the current and storhaug implementations
* High Level Overview of autoconf/automake/libtool configuration
  (I gave a presentation in BLR in 2015, so this is perhaps less
interesting?)
* Packaging Howto — RPMs and .debs
  (maybe a breakout session or a BOF. Would like to (re)enlist volunteers
to help build packages.)




Note addition of Jiffin as copresenter.  Thank you.


--


Kaleb

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] release-3.6 end of life

2016-08-19 Thread Kaleb S. KEITHLEY
On 08/19/2016 11:17 AM, Kaleb S. KEITHLEY wrote:
> On 08/19/2016 08:59 AM, Diego Remolina wrote:
> 
>> My issue is trying to install a particular minor version, or after
>> doing an update to the latest minor version change, i.e. 3.6.5 to
>> 3.6.9, trying to go back to an older release, if there is a problem
>> with the latest.
>>
>> How does one do that in Ubuntu?
>>
>> This shows 3.6.9 is available:
>>
>> https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.6/+packages
>>
>> But nothing under 3.6.9 is there, or if it is how do I get those specific 
>> ones?
>>
> 
> You don't. Launchpad doesn't keep old versions. We (we = gluster
> community) have no control over that.
> 
> If you think you're going to want to install an older version then
> you'll need to save copies while they're available.
> 
> The GlusterFS Community decided a long time ago that using Launchpad was
> the preferred way to go.
> 

And you can build your own. Anyone can get a Launchpad account and
create their own PPAs.

The packaging files to build your own packages are in the git repo at
https://github.com/gluster/glusterfs-debian .  You can build any version
of GlusterFS you want.


-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] release-3.6 end of life

2016-08-19 Thread Kaleb S. KEITHLEY
On 08/19/2016 08:59 AM, Diego Remolina wrote:

> My issue is trying to install a particular minor version, or after
> doing an update to the latest minor version change, i.e. 3.6.5 to
> 3.6.9, trying to go back to an older release, if there is a problem
> with the latest.
> 
> How does one do that in Ubuntu?
> 
> This shows 3.6.9 is available:
> 
> https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.6/+packages
> 
> But nothing under 3.6.9 is there, or if it is how do I get those specific 
> ones?
> 

You don't. Launchpad doesn't keep old versions. We (we = gluster
community) have no control over that.

If you think you're going to want to install an older version then
you'll need to save copies while they're available.

The GlusterFS Community decided a long time ago that using Launchpad was
the preferred way to go.
-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Help needed during installation of Gluster 3.8.1

2016-08-19 Thread Kaleb S. KEITHLEY
On 08/19/2016 07:27 AM, Shekhar Berry wrote:
>>
>> Can you check and share which "userspace-rcu" version is installed in
>> your machine?
> rpm -qa | grep userspace-rcu
> userspace-rcu-0.7.16-1.el7.x86_64

To build from source you need userspace-rcu-devel.

Or you could just install prebuilt RPMs from the CentOS Storage SIG.
See https://wiki.centos.org/SpecialInterestGroup/Storage

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] release-3.6 end of life

2016-08-18 Thread Kaleb S. KEITHLEY

On 08/18/2016 01:22 PM, Kaleb S. KEITHLEY wrote:

On 08/18/2016 01:10 PM, Joe Julian wrote:

I'd like to plead with the community to continue to support 3.6 as a
"lts" release. It's the last release version that can be used on Ubuntu
14.04 (Trusty Tahr) LTS which many users may be stuck using for quite
some time (eol of April 2019).


What's wrong with 3.7 on Trusty?

https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7/+packages


Or 3.8?

https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.8/+packages

--

Kaleb


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] release-3.6 end of life

2016-08-18 Thread Kaleb S. KEITHLEY

On 08/18/2016 01:10 PM, Joe Julian wrote:

I'd like to plead with the community to continue to support 3.6 as a
"lts" release. It's the last release version that can be used on Ubuntu
14.04 (Trusty Tahr) LTS which many users may be stuck using for quite
some time (eol of April 2019).


What's wrong with 3.7 on Trusty?

https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7/+packages

--

Kaleb

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] FYI: change in bugzilla version(s) starting with the 3.9 release

2016-08-17 Thread Kaleb S. KEITHLEY
On 08/17/2016 12:11 PM, Atin Mukherjee wrote:
> 
> 
> On Wednesday 17 August 2016, Kaleb S. KEITHLEY <kkeit...@redhat.com> wrote:
> 
> Hi,
> 
> In today's Gluster Community Meeting it was tentatively agreed that we
> will change how we report the GlusterFS version for glusterfs bugs.
> 
> Starting with the 3.9 release there will only be a "3.9" version in
> bugzilla; compared to the current scheme where there are, e.g., 3.8.0,
> 3.8.1, ..., 3.8.x versions in bugzilla.
> 
> 
> May I ask what is the benefit we are going to get from this change?
> Personally I am more inclined towards the existing option as users can
> select the version and do not have to mention it in the comment.
> Sometimes users may forget to report the version in the comment and we
> need to do back and forth on this, instead having a specific release
> version on which the bug can be filed looks a better option IMO.

Personally I'm ambivalent about it, but a handful of people want it.

It's a nuisance to add new versions to bugzilla every few weeks, but
it's not "End of the World" hard, it's just a detail.

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] CFP Gluster Developer Summit

2016-08-17 Thread Kaleb S. KEITHLEY
I propose to present on one or more of the following topics:

* NFS-Ganesha Architecture, Roadmap, and Status
* Architecture of the High Availability Solution for Ganesha and Samba
 - detailed walk through and demo of current implementation
 - difference between the current and storhaug implementations
* High Level Overview of autoconf/automake/libtool configuration
 (I gave a presentation in BLR in 2015, so this is perhaps less
interesting?)
* Packaging Howto — RPMs and .debs
 (maybe a breakout session or a BOF. Would like to (re)enlist volunteers
to help build packages.)


-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] FYI: change in bugzilla version(s) starting with the 3.9 release

2016-08-17 Thread Kaleb S. KEITHLEY
Hi,

In today's Gluster Community Meeting it was tentatively agreed that we
will change how we report the GlusterFS version for glusterfs bugs.

Starting with the 3.9 release there will only be a "3.9" version in
bugzilla; compared to the current scheme where there are, e.g., 3.8.0,
3.8.1, ..., 3.8.x versions in bugzilla.

When filing a new bug report, the exact version can — and should — be
entered in the comments section of the report.

For 3.8 and earlier we will retain the old scheme for the lifetime of
that release.

If you have any questions or comments about this change you can reply to
this email or raise them in IRC #gluster-dev (on freenode)


-- 

Kaleb


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] readdir() harmful in threaded code

2016-07-25 Thread Kaleb S. KEITHLEY
On 07/25/2016 07:26 AM, Kaleb S. KEITHLEY wrote:
> On 07/23/2016 10:32 AM, Emmanuel Dreyfus wrote:
>> Pranith Kumar Karampuri <pkara...@redhat.com> wrote:
>>
>>> So should we do readdir() with external locks for everything instead?
>>
>> readdir() with a per-directory lock is safe. However, it may come with a
>> performance hit in some scenarios, since two threads cannot read the
>> same directory at once. But I am not sure it can happen in GlusterFS.
>>
>> I am a bit disturbed by readdir_r() being planned for deprecation. The
>> Open Group does not say that, or I missed it:
>> http://pubs.opengroup.org/onlinepubs/9699919799/functions/readdir.html
> 
> You should take that concern up, perhaps, with the glibc people.
> 
> As for GlusterFS, the recent change I made only affects Linux/glibc.
> 
> Non-linux platforms are unchanged; they use the same old hodgepodge of
> readdir(3)/readdir_r(3) they always have.

I take that back. Non-linux platforms now use readdir_r(3) exclusively.
Which seems to me to be better than the old hodgepodge of readdir(3) and
readdir_r(3).

> 
> As such I don't understand what it is that you're concerned about.
> 

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] readdir() harmful in threaded code

2016-07-25 Thread Kaleb S. KEITHLEY
On 07/23/2016 10:32 AM, Emmanuel Dreyfus wrote:
> Pranith Kumar Karampuri <pkara...@redhat.com> wrote:
> 
>> So should we do readdir() with external locks for everything instead?
> 
> readdir() with a per-directory lock is safe. However, it may come with a
> performance hit in some scenarios, since two threads cannot read the
> same directory at once. But I am not sure it can happen in GlusterFS.
> 
> I am a bit disturbed by readdir_r() being planned for deprecation. The
> Open Group does not say that, or I missed it:
> http://pubs.opengroup.org/onlinepubs/9699919799/functions/readdir.html

You should take that concern up, perhaps, with the glibc people.

As for GlusterFS, the recent change I made only affects Linux/glibc.

Non-linux platforms are unchanged; they use the same old hodgepodge of
readdir(3)/readdir_r(3) they always have.

As such I don't understand what it is that you're concerned about.
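
For reference, the per-directory external lock described above is a
pattern like this (a minimal sketch, assuming one mutex per DIR
stream; the struct and names are illustrative, not the gluster code):

    #include <dirent.h>
    #include <pthread.h>
    #include <stdio.h>

    /* Plain readdir() returns a pointer into per-stream state, so two
     * threads must never read the same DIR at once. Serializing both
     * the readdir() call and the use of the entry is safe. */
    struct locked_dir {
        DIR *dirp;
        pthread_mutex_t lock;
    };

    static void list_dir(struct locked_dir *d)
    {
        struct dirent *entry;

        pthread_mutex_lock(&d->lock);
        while ((entry = readdir(d->dirp)) != NULL)
            printf("%s\n", entry->d_name);  /* used under the lock */
        pthread_mutex_unlock(&d->lock);
    }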

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gluster weekly community meeting minutes 20-Jul-2016

2016-07-20 Thread Kaleb S. KEITHLEY
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-20/gluster_community_weekly_meeting.2016-07-20-12.00.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-20/gluster_community_weekly_meeting.2016-07-20-12.00.txt
Log:
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-20/gluster_community_weekly_meeting.2016-07-20-12.00.log.html

Next week's meeting will be held at 12:00 UTC  27 July 2016 in
#gluster-meeting on freenode.  See you all next week.

===
#gluster-meeting: Community Meeting
===


Meeting started by kkeithley at 12:00:27 UTC. The full logs are
available at
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-13/community_meeting.2016-07-13-12.00.log.html
.



Meeting summary
---
* roll call  (kkeithley, 12:00:56)

* GlusterFS 4.0  (kkeithley, 12:03:50)

* next week's host  (kkeithley, 12:04:27)

* GlusterFS 4.0  (kkeithley, 12:07:17)

* GlusterFS 3.9  (kkeithley, 12:11:42)

* GlusterFS 3.8  (kkeithley, 12:14:48)
  * LINK:
https://download.gluster.org/pub/gluster/glusterfs/download-stats.html
(kkeithley, 12:17:04)

* GlusterFS 3.7  (kkeithley, 12:17:51)
  * ACTION: kshlm and ndevos to respond to
http://www.gluster.org/pipermail/maintainers/2016-July/001063.html
(kkeithley, 12:23:49)
  * problem with the order in which packages are installed: the geo-rep
package is installed after the server package, but the server calls
gsyncd provided by geo-rep, resulting in the older version of the
binary being used.  (kkeithley, 12:30:11)

* next week's meeting chair  (kkeithley, 12:31:19)

* GlusterFS 3.6  (kkeithley, 12:34:45)

* Infrastructure  (kkeithley, 12:37:42)

* NFS-Ganesha  (kkeithley, 12:42:08)

* Samba  (kkeithley, 12:42:51)

* AIs from last week  (kkeithley, 12:44:00)
  * ACTION: kshlm, csim to chat with nigelb about setting up faux/pseudo
user email for gerrit, bugzilla, github  (kkeithley, 12:47:43)
  * ACTION: rastar to look at 3.6 builds failures on BSD  (kkeithley,
12:48:32)
  * ACTION: kshlm will start a mailing list discussion on EOLing 3.6
(kkeithley, 12:49:58)
  * ACTION: kshlm to setup GD2 CI on centos-ci  (kkeithley, 12:53:02)

* chair for next week's meeting  (kkeithley, 12:53:17)

* Open Floor  (kkeithley, 12:55:20)
  * IDEA: quick summary of our release - what went well, what we can
improve, what we did improve this time.  (kkeithley, 12:59:30)

Meeting ended at 13:00:55 UTC.




Action Items

* kshlm and ndevos to respond to
  http://www.gluster.org/pipermail/maintainers/2016-July/001063.html
* kshlm, csim to chat with nigelb about setting up faux/pseudo user
  email for gerrit, bugzilla, github
* rastar to look at 3.6 builds failures on BSD
* kshlm will start a mailing list discussion on EOLing 3.6
* kshlm to setup GD2 CI on centos-ci




Action Items, by person
---
* nigelb
  * kshlm, csim to chat with nigelb about setting up faux/pseudo user
email for gerrit, bugzilla, github
* rastar
  * rastar to look at 3.6 builds failures on BSD
* **UNASSIGNED**
  * kshlm and ndevos to respond to
http://www.gluster.org/pipermail/maintainers/2016-July/001063.html
  * kshlm will start a mailing list discussion on EOLing 3.6
  * kshlm to setup GD2 CI on centos-ci




People Present (lines said)
---
* kkeithley (113)
* nigelb (16)
* post-factum (16)
* atinm (8)
* jdarcy (6)
* kotreshhr (6)
* aravindavk (6)
* rastar (6)
* partner (4)
* skoduri (3)
* zodbot (3)
* ira (2)
* msvbhat (1)
* Saravanakmr (1)
* karthik_ (1)
* ramky (1)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gluster weekly community meeting minutes 13-Jul-2016

2016-07-13 Thread Kaleb S. KEITHLEY
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-13/community_meeting.2016-07-13-12.00.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-13/community_meeting.2016-07-13-12.00.txt
Log:
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-13/community_meeting.2016-07-13-12.00.log.html

Next week's meeting will be held at 12:00 UTC  20 July 2016 in
#gluster-meeting on freenode.  See you all next week.

===
#gluster-meeting: Community Meeting
===


Meeting started by kkeithley at 12:00:27 UTC. The full logs are
available at
https://meetbot.fedoraproject.org/gluster-meeting/2016-07-13/community_meeting.2016-07-13-12.00.log.html
.



Meeting summary
---
* roll call  (kkeithley, 12:00:56)

* GlusterFS 4.0  (kkeithley, 12:03:50)

* next week's host  (kkeithley, 12:04:27)

* GlusterFS 4.0  (kkeithley, 12:07:17)

* GlusterFS 3.9  (kkeithley, 12:11:42)

* GlusterFS 3.8  (kkeithley, 12:14:48)
  * LINK:
https://download.gluster.org/pub/gluster/glusterfs/download-stats.html
(kkeithley, 12:17:04)

* GlusterFS 3.7  (kkeithley, 12:17:51)
  * ACTION: kshlm and ndevos to respond to
http://www.gluster.org/pipermail/maintainers/2016-July/001063.html
(kkeithley, 12:23:49)
  * problem with the order in which packages are installed: the geo-rep
package is installed after the server package, but the server calls
gsyncd provided by geo-rep, resulting in the older version of the
binary being used.  (kkeithley, 12:30:11)

* next week's meeting chair  (kkeithley, 12:31:19)

* GlusterFS 3.6  (kkeithley, 12:34:45)

* Infrastructure  (kkeithley, 12:37:42)

* NFS-Ganesha  (kkeithley, 12:42:08)

* Samba  (kkeithley, 12:42:51)

* AIs from last week  (kkeithley, 12:44:00)
  * ACTION: kshlm, csim to chat with nigelb about setting up faux/pseudo
user email for gerrit, bugzilla, github  (kkeithley, 12:47:43)
  * ACTION: rastar to look at 3.6 builds failures on BSD  (kkeithley,
12:48:32)
  * ACTION: kshlm will start a mailing list discussion on EOLing 3.6
(kkeithley, 12:49:58)
  * ACTION: kshlm to setup GD2 CI on centos-ci  (kkeithley, 12:53:02)

* chair for next week's meeting  (kkeithley, 12:53:17)

* Open Floor  (kkeithley, 12:55:20)
  * IDEA: quick summary of our release - what went well, what we can
improve, what we did improve this time.  (kkeithley, 12:59:30)

Meeting ended at 13:00:55 UTC.




Action Items

* kshlm and ndevos to respond to
  http://www.gluster.org/pipermail/maintainers/2016-July/001063.html
* kshlm, csim to chat with nigelb about setting up faux/pseudo user
  email for gerrit, bugzilla, github
* rastar to look at 3.6 builds failures on BSD
* kshlm will start a mailing list discussion on EOLing 3.6
* kshlm to setup GD2 CI on centos-ci




Action Items, by person
---
* nigelb
  * kshlm, csim to chat with nigelb about setting up faux/pseudo user
email for gerrit, bugzilla, github
* rastar
  * rastar to look at 3.6 builds failures on BSD
* **UNASSIGNED**
  * kshlm and ndevos to respond to
http://www.gluster.org/pipermail/maintainers/2016-July/001063.html
  * kshlm will start a mailing list discussion on EOLing 3.6
  * kshlm to setup GD2 CI on centos-ci


People Present (lines said)
---
* kkeithley (113)
* nigelb (16)
* post-factum (16)
* atinm (8)
* jdarcy (6)
* kotreshhr (6)
* aravindavk (6)
* rastar (6)
* partner (4)
* skoduri (3)
* zodbot (3)
* ira (2)
* msvbhat (1)
* Saravanakmr (1)
* karthik_ (1)
* ramky (1)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot



-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gluster weekly community meeting minutes 22-Jun-2016

2016-06-22 Thread Kaleb S. KEITHLEY
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-06-22/weekly_community_meeting_-_22-jun-2016.2016-06-22-12.01.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-06-22/weekly_community_meeting_-_22-jun-2016.2016-06-22-12.01.txt
Log:
https://meetbot.fedoraproject.org/gluster-meeting/2016-06-22/weekly_community_meeting_-_22-jun-2016.2016-06-22-12.01.log.html

Next week's meeting will be held at 12:00 UTC  29 June 2016 in
#gluster-meeting on freenode.  See you all next week.



#gluster-meeting: Weekly Community meeting - 22-Jun-2016



Meeting started by kshlm at 12:01:29 UTC. The full logs are available at
https://meetbot.fedoraproject.org/gluster-meeting/2016-06-22/weekly_community_meeting_-_22-jun-2016.2016-06-22-12.01.log.html
.



Meeting summary
---
* RollCall  (kshlm, 12:02:01)

* GlusterFS 4.0  (kshlm, 12:05:44)
  * LINK: http://review.gluster.org/#/c/14763/   (atinm, 12:10:25)

* GlusterFS 3.8  (kshlm, 12:12:14)
  * LINK: http://blog.gluster.org/2016/06/glusterfs-3-8-released/
(anoopcs, 12:14:48)
  * ACTION: jiffin will announce 3.8 on the mailing lists.  (kshlm,
12:15:28)
  * ACTION: aravindavk will ping amye to link release-notes to 3.8
release announcement on blog  (kshlm, 12:17:38)

* GlusterFS-3.9  (kshlm, 12:17:49)
  * LINK:
http://www.gluster.org/pipermail/maintainers/2016-June/000951.html
(aravindavk, 12:19:20)

* GlusterFS 3.7  (kshlm, 12:21:09)
  * AGREED: we will release 3.7.12 following the meeting  (kkeithley,
12:29:29)
  * AGREED: hagarth tentative release manager for 3.7.13  (kkeithley,
12:33:39)

* GlusterFS 3.6  (kkeithley, 12:34:24)
  * AGREED: we only fix critical bugs in 3.6  (kkeithley, 12:43:48)

* NFS-Ganesha + Gluster  (kkeithley, 12:45:49)

* Samba and GlusterFS  (kkeithley, 12:48:45)

* AIs from last week  (kkeithley, 12:50:11)

* Open Floor  (kkeithley, 12:55:31)
  * glusterfs-coreutils is now available in fedora 22, 23 and 24 stable
repositories  (kkeithley, 12:59:01)

Meeting ended at 13:02:40 UTC.




Action Items

* jiffin will announce 3.8 on the mailing lists.
* aravindavk will ping amye to link release-notes to 3.8 release
  announcement on blog




Action Items, by person
---
* aravindavk
  * aravindavk will ping amye to link release-notes to 3.8 release
announcement on blog
* jiffin
  * jiffin will announce 3.8 on the mailing lists.
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kkeithley (87)
* kshlm (85)
* anoopcs (10)
* aravindavk (9)
* atinm (6)
* partner (6)
* jiffin (4)
* zodbot (4)
* rjoseph (4)
* jdarcy (3)
* ira (3)
* mchangir (2)
* post-factum (1)
* karthik___ (1)
* hgowtham_ (1)
* msvbhat (1)
* samikshan (1)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot



-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Fwd: Gluster Package Matrix, tentative

2016-05-09 Thread Kaleb S. KEITHLEY
Resend.

N.B. Fedora 24 just entered Beta. The plan is to update GlusterFS to 3.8
in Fedora24 before Fedora24 GA. GlusterFS-3.8 will be the standard
version of GlusterFS for the life of Fedora24. Other versions of
GlusterFS will be available for Fedora24 on download.gluster.org.

N.B. Starting with GlusterFS-3.8, packages for RHEL and CentOS will be
available only from the CentOS Storage SIG repos. (They will NOT be
available on download.gluster.org, just like we do for GlusterFS package
for Ubuntu and SuSE.)


 Forwarded Message 
Subject:[Gluster-devel] Gluster Package Matrix, tentative
Date:   Fri, 1 Apr 2016 09:44:31 -0400



Hi,

With the imminent release of 3.8 in a few weeks, here's a summary of the
Linux packages that are
tentatively planned going forward.

Note that 3.5 will reach end-of-life (EOL) when 3.8 is released, and no
further releases will be
made on the release-3.5 branch.

(I haven't included NetBSD or FreeBSD here, only because they're not
Linux and we have little control
over them.)

An X means packages are planned to be in the repository.
A — means we have no plans to build the version for the repository.
d.g.o means packages will (also) be provided on https://download.gluster.org
DNF/YUM means the packages are included in the Fedora updates or
updates-testing repos.
 


                                         3.8      3.7       3.6       3.5
CentOS Storage SIG¹  el5                 —        d.g.o     d.g.o     d.g.o
                     el6                 X        X, d.g.o  X, d.g.o  d.g.o
                     el7                 X        X, d.g.o  X, d.g.o  d.g.o

Fedora               F22                 —        d.g.o     DNF/YUM   d.g.o
                     F23                 d.g.o    DNF/YUM   d.g.o     d.g.o
                     F24                 DNF/YUM  d.g.o     d.g.o     d.g.o
                     F25                 DNF/YUM  d.g.o     d.g.o     d.g.o

Ubuntu Launchpad²    Precise (12.04 LTS) —        X         X         X
                     Trusty (14.04 LTS)  X        X         X         X
                     Wily (15.10)        X        X         X         X
                     Xenial (16.04 LTS)  X        X         X         —

Debian               Wheezy (7)          —        d.g.o     d.g.o     d.g.o
                     Jessie (8)          d.g.o    d.g.o     d.g.o     d.g.o
                     Stretch (9)         d.g.o    d.g.o     d.g.o     —

SuSE Build System³   OpenSuSE13          X        X         X         X
                     Leap 42.1           X        X         —         —
                     SLES11              —        —         X         X
                     SLES12              X        X         X         —


¹ https://wiki.centos.org/SpecialInterestGroup/Storage
² https://launchpad.net/~gluster
³ https://build.opensuse.org/project/subprojects/home:kkeithleatredhat

-- Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] How use Gluster/NFS

2016-04-29 Thread Kaleb S. KEITHLEY
On 04/29/2016 07:34 AM, Rick Macklem wrote:
> Abhishek Paliwal wrote:
>> Hi Team,
>>
>> I want to use gluster NFS and export this gluster volume using 'mount -t nfs
>> -o acl' command.
>>
>> i have done the following changes:
>> 1. Enable the NFS using nfs.disable off
>> 2. Enable the ACL using nfs.acl on
>> 3. RPCbind is also running
>> 4. Kernel NFS is stopped
>>
> You could try setting
>  nfs.register-with-portmap on
> I thought it was enabled by default, but maybe that changed
> when the default for nfs.disable changed?

The default for nfs.disable is _only_ changing starting with GlusterFS-3.8.

GlusterFS-3.8 HASN'T BEEN RELEASED YET.

IOW the default for nfs.disable has _not_ changed in GlusterFS-3.7 and
nfs.register-with-portmap _remains_ enabled by default; and will remain
enabled by default even in GlusterFS-3.8.


-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-22 Thread Kaleb S. KEITHLEY
On 01/22/2016 01:20 PM, Joe Julian wrote:
> 
> 
> On 01/22/16 09:53, Kaleb S. KEITHLEY wrote:
>> On 01/22/2016 12:43 PM, Oleksandr Natalenko wrote:
>>> On Friday, 22 January 2016 12:32:01 EET Kaleb S. KEITHLEY wrote:
>>>> I presume by this you mean you're not seeing the "kernel notifier loop
>>>> terminated" error in your logs.
>>> Correct, but only with simple traversing. Have to test under rsync.
>> Without the patch I'd get "kernel notifier loop terminated" within a few
>> minutes of starting I/O.  With the patch I haven't seen it in 24 hours
>> of beating on it.
>>
>>>> Hmmm.  My system is not leaking. Last 24 hours the RSZ and VSZ are
>>>> stable:
>>>> http://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity
>>>>
>>>> /client.out
>>> What ops do you perform on mounted volume? Read, write, stat? Is that
>>> 3.7.6 +
>>> patches?
>> I'm running an internally developed I/O load generator written by a guy
>> on our perf team.
>>
>> it does, create, write, read, rename, stat, delete, and more.
>>
> Github link?

I looked for one before posting. I don't think he has shared it.

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-22 Thread Kaleb S. KEITHLEY
On 01/22/2016 12:15 PM, Oleksandr Natalenko wrote:
> OK, compiles and runs well now,

I presume by this you mean you're not seeing the "kernel notifier loop
terminated" error in your logs.

> but still leaks. 

Hmmm.  My system is not leaking. Last 24 hours the RSZ and VSZ are
stable:
http://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/client.out

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] Memory leak in GlusterFS FUSE client

2016-01-22 Thread Kaleb S. KEITHLEY
On 01/22/2016 12:43 PM, Oleksandr Natalenko wrote:
> On Friday, 22 January 2016 12:32:01 EET Kaleb S. KEITHLEY wrote:
>> I presume by this you mean you're not seeing the "kernel notifier loop
>> terminated" error in your logs.
> 
> Correct, but only with simple traversing. Have to test under rsync.

Without the patch I'd get "kernel notifier loop terminated" within a few
minutes of starting I/O.  With the patch I haven't seen it in 24 hours
of beating on it.

> 
>> Hmmm.  My system is not leaking. Last 24 hours the RSZ and VSZ are
>> stable:
>> http://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity
>> /client.out
> 
> What ops do you perform on mounted volume? Read, write, stat? Is that 3.7.6 + 
> patches?

I'm running an internally developed I/O load generator written by a guy
on our perf team.

it does, create, write, read, rename, stat, delete, and more.

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] Python 3 is coming!

2016-01-18 Thread Kaleb S. KEITHLEY

Python 3.5 is approved for Fedora 24[1], which is scheduled to ship in
May[2]

We have several places in the source where Python 2 is an explicit
requirement.

Do we want to rethink the hard requirement for Python 2?  I suspect we
should.

[1]https://fedoraproject.org/wiki/Releases/24/ChangeSet

[2] https://fedoraproject.org/wiki/Releases/24/Schedule

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] FreeBSD port of GlusterFS racks up a lot of CPU usage

2016-01-08 Thread Kaleb S. KEITHLEY
On 12/30/2015 01:22 PM, Hubbard Jordan wrote:

> I also have a broader question to go with the specific one:  We 
> (at iXsystems) were attempting to engage with some of the Red Hat 
> folks back when the FreeBSD port was first done, in the hope of 
> getting it more “officially supported” for FreeBSD and perhaps even 
> donating some more serious stress-testing and integration work for 
> it, but when those Red Hat folks moved on we lost continuity and 
> the effort stalled.  Who at Red Hat would / could we work with in 
> getting this back on track?  We’d like to integrate glusterfs with 
> FreeNAS 10, and in fact have already done so but it’s still early 
> days and we’re not even really sure what we have yet.
> 

Hi,

To me, from a community standpoint, to be "officially supported" I'd
venture to say that what it takes is being visibly involved in the
project. That can take many forms, e.g., do regular builds on your
platform, submit bug reports (to our bugzilla) and associated fixes (to
our gerrit), implement and contribute new features, review other
people's patches in gerrit, build packages for your platform, evangelize
GlusterFS, answer questions in IRC and the mailing lists, etc., etc.

Everything that goes on in the community is done by volunteers. There are
no Red Hat employees whose sole responsibility is to work on Community
GlusterFS. (Excepting our Community Manager, Amye.) The Red Hat mantra
is "upstream first" so every feature and every bug fix that Red Hat
employees work on does indeed go into Community GlusterFS first; a lot
does get done as a side effect of that policy, but nobody should take it
for granted that _everything_ (or anything) will just get done.

Nobody would say no to having serious stress testing and integration
work. If it plugs into our current gerrit and jenkins infrastructure, so
much the better. If there are people in your community who can help
maintain and/or grow our infrastructure, we could use a lot of help there.

With that level of involvement, I could imagine eventually FreeBSD
having more of a, I don't know, for lack of a better word, 'standing' in
the GlusterFS community. We do compile every patch on FreeBSD to ensure
that we don't break that level of portability, but that's the extent of
it. Maybe elevated to running regressions, as we do for NetBSD, which
has a bit of a legacy standing in the community due to Emmanuel Dreyfus'
long time participation.

Anyway, that's my opinion. (Emphasis on my and opinion. Perhaps others
will weigh in with their opinions.) I look forward to your involvement
in the community. Look for us at FOSDEM, a couple of us will be there.

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] FreeBSD port of GlusterFS racks up a lot of CPU usage

2016-01-08 Thread Kaleb S. KEITHLEY
On 01/08/2016 08:58 AM, Kaleb S. KEITHLEY wrote:
> On 12/30/2015 01:22 PM, Hubbard Jordan wrote:
> 
>> I also have a broader question to go with the specific one:  We 
>> (at iXsystems) were attempting to engage with some of the Red Hat 
>> folks back when the FreeBSD port was first done, in the hope of 
>> getting it more “officially supported” for FreeBSD and perhaps even 
>> donating some more serious stress-testing and integration work for 
>> it, but when those Red Hat folks moved on we lost continuity and 
>> the effort stalled.  Who at Red Hat would / could we work with in 
>> getting this back on track?  We’d like to integrate glusterfs with 
>> FreeNAS 10, and in fact have already done so but it’s still early 
>> days and we’re not even really sure what we have yet.
>>
> 
> Hi,
> 
> To me, from a community standpoint, to be "officially supported" I'd
> venture to say that what it takes is being visibly involved in the
> project. That can take many forms, e.g., do regular builds on your
> platform, submit bug reports (to our bugzilla) and associated fixes (to
> our gerrit), implement and contribute new features, review other
> people's patches in gerrit, build packages for your platform, evangelize
> GlusterFS, answer questions in IRC and the mailing lists, etc., etc.
> 
> Everything that goes on in the community is done by volunteers. There are
> no Red Hat employees whose sole responsibility is to work on Community
> GlusterFS. (Excepting our Community Manager, Amye.) The Red Hat mantra
> is "upstream first" so every feature and every bug fix that Red Hat
> employees work on does indeed go into Community GlusterFS first; a lot
> does get done as a side effect of that policy, but nobody should take it
> for granted that _everything_ (or anything) will just get done.
> 
> Nobody would say no to having serious stress testing and integration
> work. If it plugs into our current gerrit and jenkins infrastructure, so
> much the better. If there are people in your community who can help
> maintain and/or grow our infrastructure, we could use a lot of help there.

Just to be clear, by "our infrastructure" I mean Community GlusterFS
infrastructure.

> 
> With that level of involvement, I could imagine eventually FreeBSD
> having more of a, I don't know, for lack of a better word, 'standing' in
> the GlusterFS community. We do compile every patch on FreeBSD to ensure
> that we don't break that level of portability, but that's the extent of
> it. Maybe elevated to running regressions, as we do for NetBSD, which
> has a bit of a legacy standing in the community due to Emmanuel Dreyfus'
> long time participation.
> 
> Anyway, that's my opinion. (Emphasis on my and opinion. Perhaps others
> will weigh in with their opinions.) I look forward to your involvement
> in the community. Look for us at FOSDEM, a couple of us will be there.
> 

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] building/installing on FreeBSD

2015-12-07 Thread Kaleb S. KEITHLEY
On 12/07/2015 08:42 AM, Rick Macklem wrote:

> to avoid an undefined reference for xdr_auth_glusterfs_parms_v2.
> (I also found that building without libxml2 doesn't work, because fields
>  #if HAVE_LIB_XML are used in the code. Maybe it would be nicer if configure
>  failed when libxml2 isn't installed, like it does for Bison, etc.)

File a bug[1], and/or submit a patch[2]

> 
> Now, I can build/install it, but it isn't building any shared *.so files.
> As such, the binaries basically fail.
> 
> I have zero experience with libtool. So, does someone happen to know what
> it takes to get it to build the shared libraries?
> I didn't do autogen.sh. I just used configure. Do I need to run autogen.sh?
> 

I pretty much always run autogen.sh. On FreeBSD 10, my builds produce
shared libs and binaries. Your patch looked a little suspect.

I was able to build both 3.7.6 from the tarball, and the head of the
release-3.7 branch in git, with `./autogen.sh && ./configure
--disable-tiering && make`

[1]https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
[2]http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] libgfapi changes to add lk_owner and lease ID

2015-12-04 Thread Kaleb S. KEITHLEY
On 12/04/2015 07:51 AM, Ira Cooper wrote:
>

>>>> why not use storage in the client_t instead of thread local?
>>>
>>> It comes down to the use case.  For Samba, the right spot is almost
>>> certainly the fd, because lease keys are a per-handle (which we map to
>>> fd) property.
>>>
>>> client_t is a horror show, due to race conditions between threads, IMHO.
>>
>> If there are known races, should we not address that? Got a bug that
>> explains it in more detail?
> 
> Niels,
> 
> For samba, if we do multi-threaded open, Kaleb's proposal is a
> race-condition.  I haven't gone through every use of client_t and seen
> if it is racy.
> 
> The race here is pretty simple:
> 
> Thread 1: Sets lease_id
> Thread 2: Sets lease_id
> Thread 1: Opens file. (wrong lease_id)
> 
> If these two threads represent requests from different clients, client_t
> won't work, unless there's a client_t per-thread.

client_t is, as one might guess from the name, per client (connection).
If smbd has a single connection, then there's a single client_t for it.

> 
> For global things on the connection, client_t is fine, and appropriate.
> For this?  No.
> 
> This is a property per-open, and belongs in the glfs_fd and glfs_object,
> IMHO.
> 
> Thanks,
> 
> -Ira
> 
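
To make the race concrete, here's a contrived sketch (hypothetical
stand-ins for client_t and glfs_fd, not the real structures):

    #include <string.h>

    struct conn {       /* shared per-connection state, like client_t */
        char lease_id[16];
    };

    struct handle {     /* per-open state, like a glfs_fd */
        char lease_id[16];
    };

    /* Racy: threads share conn->lease_id, so another thread can
     * overwrite it between this memcpy() and the open that follows,
     * and the open goes out with the wrong lease. */
    static void open_via_conn(struct conn *c, const char *lease)
    {
        memcpy(c->lease_id, lease, sizeof(c->lease_id));
        /* ... another thread may run here ... */
        /* do_open(c->lease_id); */
    }

    /* Safe: the lease travels with the handle itself, so concurrent
     * opens from different clients cannot clobber each other. */
    static void open_via_handle(struct handle *h, const char *lease)
    {
        memcpy(h->lease_id, lease, sizeof(h->lease_id));
        /* do_open(h->lease_id); */
    }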

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] what rpm sub-package do /usr/{libexec, sbin}/gfind_missing_files belong to?

2015-11-09 Thread Kaleb S. KEITHLEY

the in-tree glusterfs.spec(.in) has them immediately following the
geo-rep sub-package, but outside the %if ... %endif.

Are they part of geo-rep? Or something else?

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Glusterfs mainline BZs to close...

2015-10-26 Thread Kaleb S. KEITHLEY
On 10/26/2015 05:14 AM, Niels de Vos wrote:
> On Wed, Oct 21, 2015 at 05:22:49AM -0400, Nagaprasad Sathyanarayana wrote:
>> I came across the following BZs which are still open in mainline.  But
>> they are fixed and made available in a upstream release.  Planning to
>> close them this week, unless there are any objections.
> 
> We have a policy to close bugs when their patches land in a released
> version. The bugs against mainline will get closed when a release is
> made that contains those fixes. For many of the current mainline bugs,
> this would be the case when glusterfs-3.8 is released.

Some of those bugs are pretty old. We haven't been good about closing
them when a 3.X.0 release has occurred. I sent email to the lists about
closing some of the older ones with a note to reopen them if they're
still valid.

The bugs with RFE in the subject, or RFE or FutureFeature as a keyword
need to be left open after a 3.X.0 release if appropriate.

I'm going to take a stab at guessing — based on the date the bug was
filed — at changing the version of the non-RFE bugs filed against mainline.

> 
> What is the concern of having bugs against the mainline version open
> until a release contains that particular fix? There are many bugs that
> also get backports to stable versions (3.7, 3.6 and 3.5). Those bugs get
> closed with each minor releases for that stable version.
> 
> Of course we can change our policy to close mainline bugs earlier. But,
> we need to be consistent in setting the rules for that, and document
> them well. There is an ongoing task about automatically changing the
> status of bugs when patches get posted/merged and releases made. Closing
> mainline bugs should be part of that process.
> 
> When do you suggest that a bug against mainline should be closed, when
> one of the stable releases containst the fix, or when all of them do?
> What version with fix should we point the users at when we closed a
> mainline bug?
> 
>   Our shiny docs (or my bookmarks?) are broken again...
>   
> http://gluster.readthedocs.org/en/latest/Developer-guide/Bug%20report%20Life%20Cycle/
>   http://gluster.readthedocs.org/en/latest/Developer-guide/Bug%20Triage/
> 
>   This is the old contents:
>   
> http://www.gluster.org/community/documentation/index.php/Bug_report_life_cycle
>   http://www.gluster.org/community/documentation/index.php/Bug_triage
> 
> I was about to suggest to send a pull request so that we can discuss
> your proposal during the weekly Bug Triage meeting on Tuesdays.
> Unfortunately I don't know where the latest documents moved to, so
> please send your changes by email.
> 
> Could you explain what Bugzilla query you used to find these bugs? We
> have some keywords (like "Tracking") that should be handled with care,
> and probably should not get closed even when patches were merged in
> mainline and stable releases.
> 
> Thanks,
> Niels
> 
> 
>>
>> 1211836,1212398,1211132,1215486,1213542,1212385,1210687,1209818,1210690,1215187,1215161,1214574,1218120,1217788,1216067,1213773,1210684,1209104,1217937,1200262,1204651,1211913,1211594,1163561,1176062,
>> 1219784,1176837,1208131,1200704,1220329,1221095,1172430,1219732,1219738,1213295,1212253,1211808,1207615,1216931,1224290,1217701,1223213,1223889,1223385,1221104,1221696,1219442,1224596,1165041,1225491,
>> 1221938,1226367,1215002,1222379,1221889,1220332,1223338,1224600,1222126,1212413,1211123,1225793,1226551,1218055,1220713,1223772,1222013,1227646,1228635,1227884,1224016,1223432,1227904,1228952,1228613,
>> 1209461,1226507,1225572,1227449,1220670,1225564,1225424,1200267,1229825,1230121,1228696,1228680,1229609,1229134,1231425,1229172,1232729,1208482,1169317,1180545,1231197,1188242,1229658,1232686,1234842,
>> 1235216,1235359,1235195,1235007,1232238,1233617,1235542,1233162,1233258,1193388,1238072,1236270,1237381,1238508,1230007,1210689,1240254,1240564,1241153,1193636,1132465,1226717,1242609,1238747,1242875,
>> 1226279,1240210,1232678,1242254,1232391,1242570,1235231,1240654,1240284,1215117,1240184,1228520,1244165,1243187,1243774,1209430,1196027,1232572,1202244,1229297,1246052,1246082,1238135,1245547,1234819,
>> 1224611,1221914,1207134,1245981,1246432,1178619,1243890,1240598,1240949,1247930,1247108,1245544,1238936,1232420,1245142,1226223,1250441,1229860,1245276,1246275,1250582,1249499,1231437,1241274,1212437,
>> 1245558,1250855,1226829,1230015,1251042,1209329,1235582,1251449,1248415,1245895,1221490,1250797,1240991,1243391,1240218,1207829,1236009,1244613,1255599,1213349,1232378,1225465,1254127,1250170,1240244,
>> 1245045,1254863,1242819,1242421,1256580,1251824,1252808,1251454,1258334,1205596,1242742,1205037,1212823,1209735,1210712,1229948,1232001,1234474,1242030,1241133,1241480,1242041,1252410,1232430,1231876,
>> 1218573,1240952,1233544,1244109,1239269,1218164,1218060,1211640,1204641,1230090,1225571,1231205,1232666,1234882,1235292,1234694,1233411,1231789,1240229,1239044,1247529,1207735,1251346,1200254,1200265,
>> 

Re: [Gluster-devel] RHEL-5 Client build failed

2015-10-16 Thread Kaleb S. KEITHLEY

There is already a bugzilla open for this:
https://bugzilla.redhat.com/show_bug.cgi?id=1258594 for 3.7.x, and
https://bugzilla.redhat.com/show_bug.cgi?id=1258883 for the master branch


On 10/16/2015 06:25 AM, Vijay Bellur wrote:
> On Friday 16 October 2015 12:52 PM, Milind Changire wrote:
>> Following commit to release-3.7 branch causes RHEL-5 Client build to
>> fail because there isn't any <openssl/ecdh.h> available on RHEL-5
>>
>> ca5b466d rpc/rpc-transport/socket/src/socket.h
>>   (Emmanuel Dreyfus   2015-07-30 14:02:43 +0200  22) #include
>> <openssl/ecdh.h>
>>
> 
> This is a conditional inclusion. Can you please check if ERR_R_ECDH_LIB
> is defined in RHEL 5?
>>
>> This commit is also not available in upstream master yet.
>>
>> Link to failed build:
>> RHGS-3.1.2-CLIENT-RHEL-5:
>> http://brewweb.devel.redhat.com/brew/taskinfo?taskID=9962404
>>
> 
> This looks like an internal link to me. Please share relevant
> information from your build on something like fpaste.
> 
>>
>> Looks like we need to upgrade RHEL-5 with latest OpenSSL headers and
>> libraries.
>> How else do we fix this?
>>
> 
> Updating RHEL 5 with a later version is beyond our scope. Can you check
> if some additional conditional compilation is needed in RHEL 5 for the
> build to go through?
> 
> Regards,
> Vijay
> 
> 
> ___
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel

-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] RHEL-5 Client build failed

2015-10-16 Thread Kaleb S. KEITHLEY
On 10/16/2015 06:25 AM, Vijay Bellur wrote:
> On Friday 16 October 2015 12:52 PM, Milind Changire wrote:
>> Following commit to release-3.7 branch causes RHEL-5 Client build to
>> fail because there isn't any <openssl/ecdh.h> available on RHEL-5
>>
>> ca5b466d rpc/rpc-transport/socket/src/socket.h
>>   (Emmanuel Dreyfus   2015-07-30 14:02:43 +0200  22) #include
>> <openssl/ecdh.h>
>>
> 
> This is a conditional inclusion. Can you please check if ERR_R_ECDH_LIB
> is defined in RHEL 5?
>>

It is defined, but

...
mkdir .libs
 gcc -DHAVE_CONFIG_H -I. -I. -I../../../.. -I/usr/include/uuid
-D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE -DGF_LINUX_HOST_OS
-I../../../../libglusterfs/src -DUSE_INSECURE_OPENSSL
-I../../../../libglusterfs/src -I../../../../rpc/rpc-lib/src/
-I../../../../rpc/xdr/src/ -Wall -g -O2 -g -O2 -MT socket.lo -MD -MP -MF
.deps/socket.Tpo -c socket.c  -fPIC -DPIC -o .libs/socket.o
In file included from socket.c:17:
socket.h:22:26: error: openssl/ecdh.h: No such file or directory
In file included from socket.c:30:
../../../../rpc/xdr/src/glusterfs3-xdr.h:19: warning: ignoring #pragma
GCC diagnostic
../../../../rpc/xdr/src/glusterfs3-xdr.h:20: warning: ignoring #pragma
GCC diagnostic
socket.c: In function 'socket_init':
socket.c:3999: error: 'SSL_OP_NO_TICKET' undeclared (first use in this
function)
socket.c:3999: error: (Each undeclared identifier is reported only once
socket.c:3999: error: for each function it appears in.)
socket.c:4000: error: 'SSL_OP_NO_COMPRESSION' undeclared (first use in
this function)
socket.c:4036: error: 'EC_KEY' undeclared (first use in this function)
socket.c:4036: error: 'ecdh' undeclared (first use in this function)
socket.c:4042: warning: implicit declaration of function
'EC_KEY_new_by_curve_name'
socket.c:4048: warning: implicit declaration of function 'EC_KEY_free'
make[5]: *** [socket.lo] Error 1
...
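
One possible shape for the conditional compilation Vijay suggests (a
sketch only, assuming the ECDH headers appear with OpenSSL 1.0.0 while
RHEL-5 ships 0.9.8; this is not the actual socket.c fix):

    #include <openssl/opensslv.h>
    #include <openssl/ssl.h>

    #if OPENSSL_VERSION_NUMBER >= 0x10000000L  /* 1.0.0 and newer */
    #include <openssl/objects.h>
    #include <openssl/ecdh.h>
    #define HAVE_OPENSSL_ECDH 1
    #endif

    static void setup_ecdh(SSL_CTX *ctx)
    {
    #ifdef HAVE_OPENSSL_ECDH
        EC_KEY *ecdh = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
        if (ecdh != NULL) {
            SSL_CTX_set_tmp_ecdh(ctx, ecdh);
            EC_KEY_free(ecdh);
        }
    #else
        (void)ctx;   /* no usable ECDH on this OpenSSL; skip it */
    #endif
    }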




-- 

Kaleb
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

