Re: [Gluster-users] Right way to use community Gluster on genuine RHEL?

2022-08-16 Thread Kaleb Keithley
CentOS's build system and the Community Build Service (CBS) — where the SIG
packages get built — use incarnations of Fedora's Koji. (I presume it comes
from Fedora anyway, but I could be wrong.) E.g. Stream is on
https://kojihub.stream.centos.org/koji and the SIG packages get built on
CBS at https://cbs.centos.org/koji/. There was a different Koji instance
for C8 (before Stream 8) and for C7 and earlier, whose name escapes me.

The CLIs that packagers use are fedpkg and koji for Fedora, and cbs for
CBS. I'm not sure what the CLI for Stream is; I don't use it and don't
work on Stream.

There are a lot of other pieces too; e.g. bodhi handles update management
in Fedora, I think, and there's account management. CentOS had its own
account management for a while and later switched to Fedora's FAS. CBS
doesn't use bodhi; updates there are managed by the packagers. There are
other backend pieces
that sign the RPMs and build the repos. I honestly don't even know what all
the pieces are.

There are people on #centos-devel (on Libera.chat) like arrfab, bstinson,
hughesjr, and carlwgeorge to name a few, who know all the details. You
should make friends with them. ;-)

But you don't really need all that just to build packages. You could just
set up a Stream8 box and do all your builds, signing, and repo creation on
it, which is what is/was done for gluster's Debian packages. For bonus
points, set it up as a container and use k8s or Ansible to fire up an image
and do the builds when you need them.
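The do-it-yourself flow above can be sketched roughly as below. This is only an illustration under stated assumptions: the mock config name, paths, and glusterfs version are all made up, and the steps that need root or a build chroot are left commented.

```shell
#!/bin/sh
# Hedged sketch of a one-box build pipeline: stage RPMs into a repo tree,
# then (commented, since they need root and a build chroot) build, sign,
# and index them. All names and paths here are illustrative assumptions.
set -e
REPO="${1:-/tmp/myrepo}/gluster/x86_64"
mkdir -p "$REPO"
# mock -r centos-stream-8-x86_64 --rebuild glusterfs-*.src.rpm --resultdir "$REPO"
# rpmsign --addsign "$REPO"/*.rpm
# createrepo_c "$(dirname "$REPO")"   # generate repodata clients can point at
echo "staged repo tree: $REPO"
```

Running the same script from a container image's entrypoint, fired by Ansible when needed, is the "bonus points" variant described above.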


On Tue, Aug 16, 2022 at 6:43 PM Strahil Nikolov 
wrote:

> Hey Kaleb,
> thanks for your reply. I was afraid of that.
>
> How can I get more info about the build process ? Maybe we can replicate
> the setup as a Rocky SIG .
>
> Best Regards,
> Strahil Nikolov
>
> On Mon, Aug 15, 2022 at 15:15, Kaleb Keithley
>  wrote:
>
> On Sun, Aug 14, 2022 at 8:31 AM Strahil Nikolov 
> wrote:
>
>
> Yet, when I asked the CentOS Storage SIG about the situation after the
> CentOS Stream goes end of life -> there was no definitive answer.
>
>
> I'm not sure asking the Storage SIG is the right thing to do.  The SIGs
> are subject to whatever CentOS decides.
>
> E.g. if you look at C6 vs C7 on http://mirror.centos.org/centos/, the SIG
> repos are there (obviously) for C7, but *everything* is gone for C6 (and
> earlier) including all the SIG repos.  (But it is all archived on
> https://vault.centos.org/ so it's not truly gone.)
>
> If everything in Stream8 and everything related to Stream8 gets removed at
> Stream8's EOL by CentOS, that's nothing to do with the SIGs.
>
> I kinda suspect that some Stream8 stuff will stick around for a little
> while, but we — i.e. the SIGs — are not part of that decision process.
>
> And I pretty much guarantee that all the build infra around Stream8 will
> get torn down pretty quickly and there won't be any further updates to any
> of the software.
>
> --
>
> Kaleb
>
>

-- 

Kaleb




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Right way to use community Gluster on genuine RHEL?

2022-08-15 Thread Kaleb Keithley
On Sun, Aug 14, 2022 at 8:31 AM Strahil Nikolov 
wrote:

>
> Yet, when I asked the CentOS Storage SIG about the situation after the
> CentOS Stream goes end of life -> there was no definitive answer.
>

I'm not sure asking the Storage SIG is the right thing to do.  The SIGs are
subject to whatever CentOS decides.

E.g. if you look at C6 vs C7 on http://mirror.centos.org/centos/, the SIG
repos are there (obviously) for C7, but *everything* is gone for C6 (and
earlier) including all the SIG repos.  (But it is all archived on
https://vault.centos.org/ so it's not truly gone.)

If everything in Stream8 and everything related to Stream8 gets removed at
Stream8's EOL by CentOS, that's nothing to do with the SIGs.

I kinda suspect that some Stream8 stuff will stick around for a little
while, but we — i.e. the SIGs — are not part of that decision process.

And I pretty much guarantee that all the build infra around Stream8 will
get torn down pretty quickly and there won't be any further updates to any
of the software.

-- 

Kaleb






Re: [Gluster-users] Right way to use community Gluster on genuine RHEL?

2022-07-18 Thread Kaleb Keithley
On Mon, Jul 18, 2022 at 11:46 AM Yaniv Kaul  wrote:

>
>
> On Mon, Jul 18, 2022 at 6:34 PM Thomas Cameron <
> thomas.came...@camerontech.com> wrote:
>
>> On 7/18/22 09:18, Péter Károly JUHÁSZ wrote:
>> > The best would be officially pre built rpms for RHEL.
>>
>> Where are there official Red Hat Gluster 10 RPMs for RHEL?
>>
>
> There's no such thing. Let's not confuse the upstream Gluster project and
> Red Hat product - RHGS (Red Hat Gluster Storage), which has a different
> version[1] and lifecycle[2] than the project.
> Red Hat does not build upstream project official RPMs for RHEL.
>
> That being said, I'm somewhat surprised the CentOS RPMs don't work on RHEL
> - is that indeed the case?
> Y.
>
>
 No.  The CentOS packages work fine on RHEL.

-- 

Kaleb






Re: [Gluster-users] Right way to use community Gluster on genuine RHEL?

2022-07-18 Thread Kaleb Keithley
The only "official" supported RPMs I know about come from Red Hat with a
RHGS subscription.

Community packages are built from the upstream source by volunteers; they
are built on third party build systems, e.g. SUSE OBS, Ubuntu Launchpad,
Fedora Koji, and CentOS CBS.  The packages built in CentOS CBS work on
CentOS and RHEL, and probably on Rocky too.

On Mon, Jul 18, 2022 at 11:34 AM Thomas Cameron <
thomas.came...@camerontech.com> wrote:

> On 7/18/22 09:18, Péter Károly JUHÁSZ wrote:
> > The best would be officially pre built rpms for RHEL.
>
> Where are there official Red Hat Gluster 10 RPMs for RHEL?
>
> Thomas
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>


-- 

Kaleb






Re: [Gluster-users] Right way to use community Gluster on genuine RHEL?

2022-07-18 Thread Kaleb Keithley
On Sat, Jul 16, 2022 at 5:42 PM Thomas Cameron <
thomas.came...@camerontech.com> wrote:

> All -
>
> Is there a way to install community packages on genuine RHEL? ... It seems
> like I need to install
> centos-release-gluster9-1.0-1.el8.noarch.rpm,
> centos-release-storage-common-2-2.el8.noarch.rpm, and maybe centos-release?
>
>

Péter Károly JUHÁSZ wrote:
>I don't know what is the correct way but what I did on my RHEL7 (I assume
8 and 9 are more or less the same):
>
>  * Added this repo
http://mirror.centos.org/centos/7/storage/x86_64/gluster-9/
>  * Then yum install glusterfs-server
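In repo-file form, Péter's two steps look something like the following. The repo id and filename are my own inventions for the example, and gpgcheck is disabled only because the sketch skips key import; the baseurl is the one quoted above.

```shell
# Write a yum repo definition pointing at the Storage SIG gluster-9 tree
# quoted above. Written to /tmp so the sketch needs no root; the real file
# belongs in /etc/yum.repos.d/.
cat > /tmp/centos-gluster9.repo <<'EOF'
[centos-gluster9]
name=CentOS Storage SIG - Gluster 9
baseurl=http://mirror.centos.org/centos/7/storage/x86_64/gluster-9/
enabled=1
gpgcheck=0
EOF
# cp /tmp/centos-gluster9.repo /etc/yum.repos.d/ && yum install glusterfs-server
grep -c '^baseurl=' /tmp/centos-gluster9.repo   # prints 1
```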

Strahil Nikolov wrote:
> You can built the rpms from source.
> https://docs.gluster.org/en/main/Developer-guide/Building-GlusterFS/

Those all work. Building from source is maybe the hardest, but it's not
that hard.

Packages are nice because they're easy to install, update, and remove.

What is the "correct" way of using gluster on RHEL 8 or, preferably, 9?
>

There isn't any one correct or official way. If packages make sense to you,
use them. If building from source works for you, do that. Building your own
RPMs gives you the best of both.

The glusterfs.spec (and related files) is at
https://git.centos.org/rpms/glusterfs/ if you want to build your own rpms.

-- 

Kaleb






Re: [Gluster-users] ubuntu 18.04 PPA and fuse3

2022-02-16 Thread Kaleb Keithley
On Wed, Feb 16, 2022 at 7:48 AM Maarten van Baarsel <
mrten_glusterus...@ii.nl> wrote:

> ...
> installing 10.1 from the PPA but I stumble:
> ...

Where do I report this, besides on this list?
>

It feels like I answer this question at least once a month.

 https://github.com/gluster/glusterfs-debian/issues

But no need to file an issue. It's fixed and there are new packages for
bionic in the PPA.

-- 

Kaleb






Re: [Gluster-users] Announcing Gluster release 10.1

2022-02-15 Thread Kaleb Keithley
On Tue, Feb 15, 2022 at 9:08 AM lejeczek  wrote:

> there are no releases in repos(including EPEL's) for any
> gluster version in C9 as of today.
>

They won't ever be in EPEL. Nor will they be in CentOS Stream 9 proper.

They will be in CentOS Storage SIG repos for Stream 9.

The centos-release-gluster9 and centos-release-gluster10 packages will be in
the CentOS Stream 9 Extras repo. Once you install one of those, you'll be
able to install gluster easily.
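The convenient path, once the release package is available, would look roughly like this. A hedged sketch only: the package names come from the message above, but everything else (that it must run as root on Stream 9, the service name) is an assumption; it's wrapped in a function so nothing privileged runs when the file is merely sourced.

```shell
# Hedged sketch of the easy install path on CentOS Stream 9.
install_gluster10() {
  dnf -y install centos-release-gluster10   # drops the SIG repo file into /etc/yum.repos.d/
  dnf -y install glusterfs-server
  systemctl enable --now glusterd
}
# Uncomment to actually run (as root):
# install_gluster10
```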

And you can always install it the hard way, by manually downloading from the
CentOS CBS build system at
https://cbs.centos.org/koji/packageinfo?packageID=5.  Packages for Stream 9
have been there for quite some time now — since September of last year.

-- 

Kaleb






Re: [Gluster-users] Announcing Gluster release 10.1

2022-02-14 Thread Kaleb Keithley
On Sun, Feb 13, 2022 at 3:08 AM Yaniv Kaul  wrote:

>
>
> Personally, I would like to see CentOS 9 Stream support soon.
> Y.
>

 glusterfs-10.1 (and 10.0) were built for Stream 9[1][2][3]. And are even
tagged for release.

glusterfs-9.5 was too[1][4]

Maybe what's missing are the centos-release-gluster{9,10} packages in
CentOS-Extras? Those are the convenient way to get gluster packages for
Stream 9 but it is possible to get them without the centos-release-*
packages.

[1] https://cbs.centos.org/koji/packageinfo?packageID=5
[2] https://cbs.centos.org/koji/buildinfo?buildID=35960
[3] https://cbs.centos.org/koji/buildinfo?buildID=36893
[4] https://cbs.centos.org/koji/buildinfo?buildID=36987

-- 

Kaleb






Re: [Gluster-users] GlusterFS 9 and Debian 11

2021-09-22 Thread Kaleb Keithley
On Wed, Sep 22, 2021 at 7:51 AM Taste-Of-IT  wrote:

> Hi,
>
> i installed fresh Debian 11 stable and use GlusterFS latest sources. At
> installing glusterfs-server i got error missing libreadline7 Paket, which
> is not in Debian 11.
>
> Is GF 9 not Debian 11 ready?
>

Our Debian 11 box has readline-common 8.1-1 and libreadline8 8.1-1 and
glusterfs 9 builds fine for us.

What "latest sources" are you using?

-- 

Kaleb






Re: [Gluster-users] Using Ganesha v2.8.4 with Gluster v5.11 ???

2021-03-22 Thread Kaleb Keithley
I was wrong:  nfs-ganesha-2.8's fsal_gluster calls glfs_ftruncate() and
glfs_fsync(), which appeared in glusterfs-6.0.

Sorry for any confusion.

--

Kaleb




On Mon, Mar 22, 2021 at 10:07 AM Kaleb Keithley  wrote:

>
> GFAPI_6.0 is a reference to a set of versioned symbols in
> gluster's libgfapi.
>
> As the version implies, you need at least glusterfs-6.0 to run
> nfs-ganesha-2.8.x.
>
> Although it's not clear — without further investigation — why the rpm has
> derived that dependency. I'm not seeing that the gluster FSAL in
> ganesha-2.8.x calls any of the GFAPI_6.0 apis. Or any of the later
> GFAPI_6.x apis.
>
> It seems to me like nfs-ganesha-2.8.x could be compiled with glusterfs-5
> and would work fine.
>
> --
>
> Kaleb
>
> On Mon, Mar 22, 2021 at 8:15 AM David Spisla  wrote:
>
>> Dear Gluster Community and Devels,
>> at the moment we are using Ganesha 2.7.6 with Gluster v5.11
>>
>> Now we want to update ganesha from 2.7.6 to 2.8.4 . I just tried to
>> update ganesha on a 2-node SLES15SP1 cluster with the above mentioned
>> versions. I got the packages from here:
>>
>> https://download.opensuse.org/repositories/home:/nfs-ganesha:/SLES15SP1-nfs-ganesha-2.8/SLE_15_SP1/x86_64/
>>
>> But I got the following dependency error:
>>
>>> fs-davids-c3-n1:~ # zypper install libntirpc1_8-1.8.1-2.2.x86_64.rpm
>>> nfs-ganesha-2.8.4-5.2.x86_64.rpm nfs-ganesha-gluster-2.8.4-5.2.x86_64.rpm
>>> nfs-ganesha-vfs-2.8.4-5.2.x86_64.rpm
>>> Loading repository data...
>>> Reading installed packages...
>>> Resolving package dependencies...
>>>
>>> Problem: nothing provides libgfapi.so.0(GFAPI_6.0)(64bit) needed by
>>> nfs-ganesha-gluster-2.8.4-5.2.x86_64
>>>  Solution 1: do not install nfs-ganesha-gluster-2.8.4-5.2.x86_64
>>>  Solution 2: break nfs-ganesha-gluster-2.8.4-5.2.x86_64 by ignoring some
>>> of its dependencies
>>>
>>> Choose from above solutions by number or cancel [1/2/c/d/?] (c): c
>>>
>>
>> Does anybody of you know to which Gluster version GFAPI_6.0 refers?
>> Is it possible at all to run ganesha 2.8.4 with gluster 5.11?
>> Regards
>> David Spisla
>> 
>>
>>
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://meet.google.com/cpu-eiue-hvk
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>






Re: [Gluster-users] Using Ganesha v2.8.4 with Gluster v5.11 ???

2021-03-22 Thread Kaleb Keithley
GFAPI_6.0 is a reference to a set of versioned symbols in
gluster's libgfapi.

As the version implies, you need at least glusterfs-6.0 to run
nfs-ganesha-2.8.x.

Although it's not clear — without further investigation — why the rpm has
derived that dependency. I'm not seeing that the gluster FSAL in
ganesha-2.8.x calls any of the GFAPI_6.0 apis. Or any of the later
GFAPI_6.x apis.

It seems to me like nfs-ganesha-2.8.x could be compiled with glusterfs-5
and would work fine.
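One way to see where a dependency like that comes from is to compare what the consumer requires with what the installed gluster provides. The rpm query options below are standard, but the exact package names are assumptions; the runnable tail just shows how the versioned-symbol name maps to a minimum glusterfs release.

```shell
# Commented: the real queries, run against the actual packages.
#   rpm -qp --requires nfs-ganesha-gluster-2.8.4-5.2.x86_64.rpm | grep GFAPI
#   rpm -q --provides glusterfs-api | grep GFAPI   # package name may differ per distro
# The versioned-symbol name encodes the glusterfs release that introduced it,
# so a requirement like this one means "glusterfs >= 6.0":
need='libgfapi.so.0(GFAPI_6.0)(64bit)'
case "$need" in
  *GFAPI_6.0*) echo "needs glusterfs >= 6.0" ;;   # prints: needs glusterfs >= 6.0
esac
```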

--

Kaleb

On Mon, Mar 22, 2021 at 8:15 AM David Spisla  wrote:

> Dear Gluster Community and Devels,
> at the moment we are using Ganesha 2.7.6 with Gluster v5.11
>
> Now we want to update ganesha from 2.7.6 to 2.8.4 . I just tried to update
> ganesha on a 2-node SLES15SP1 cluster with the above mentioned versions. I
> got the packages from here:
>
> https://download.opensuse.org/repositories/home:/nfs-ganesha:/SLES15SP1-nfs-ganesha-2.8/SLE_15_SP1/x86_64/
>
> But I got the following dependency error:
>
>> fs-davids-c3-n1:~ # zypper install libntirpc1_8-1.8.1-2.2.x86_64.rpm
>> nfs-ganesha-2.8.4-5.2.x86_64.rpm nfs-ganesha-gluster-2.8.4-5.2.x86_64.rpm
>> nfs-ganesha-vfs-2.8.4-5.2.x86_64.rpm
>> Loading repository data...
>> Reading installed packages...
>> Resolving package dependencies...
>>
>> Problem: nothing provides libgfapi.so.0(GFAPI_6.0)(64bit) needed by
>> nfs-ganesha-gluster-2.8.4-5.2.x86_64
>>  Solution 1: do not install nfs-ganesha-gluster-2.8.4-5.2.x86_64
>>  Solution 2: break nfs-ganesha-gluster-2.8.4-5.2.x86_64 by ignoring some
>> of its dependencies
>>
>> Choose from above solutions by number or cancel [1/2/c/d/?] (c): c
>>
>
> Does anybody of you know to which Gluster version GFAPI_6.0 refers?
> Is it possible at all to run ganesha 2.8.4 with gluster 5.11?
> Regards
> David Spisla
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>






Re: [Gluster-users] Export gluster with NFS

2020-11-12 Thread Kaleb Keithley
On Mon, Nov 9, 2020 at 10:17 AM Strahil Nikolov 
wrote:

> Hi Alex,
>
> I have been playing arround with the NFS Ganesha on EL8 and I was
> surprised that the solution deploys the cluster with a 'portblock' resource
> which is relying on IPTABLES, when the default is NFTABLES...
>

portblock is an off-the-shelf pacemaker resource agent.

I suggest you file a bug report against it if it's not correct for EL8.

--

Kaleb






Re: [Gluster-users] Export gluster with NFS

2020-11-09 Thread Kaleb Keithley
https://docs.gluster.org/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/
On Mon, Nov 9, 2020 at 3:08 AM Alex K  wrote:

>
> https://docs.gluster.org/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/
>


> Reading the HA setup at the above link, it mentions the use of
> ganesha-ha.conf which incorporates some  HA_CLUSTER_NODES  etc parameters.
> I do not see the reason to go like that since HA is managed from gluster
> already.
>

gluster's HA is for gluster.

HA for nfs-ganesha is managed by pacemaker as mentioned  in the doc you
referenced. The two are unrelated.

storhaug is incomplete, but it works well enough for some people. YMMV.

--

Kaleb






Re: [Gluster-users] missing mount-shared-storage.sh on glusterfs 7.8-2

2020-10-23 Thread Kaleb Keithley
On Fri, Oct 23, 2020 at 6:15 AM peter knezel  wrote:

> Hello All,
>
> can somebody responsible confirm me, that mount-shared-storage.sh
> is really missing from glusterfs 7.8-2 packages?
> Will new version be created?
>

It is fixed in 7.8-3, which has been on download.gluster.org for several
days now.


>
> I have recreated this file manually on my two glusterfs servers
> (i took the mount-shared-storage.sh file from a server that was OS updated
> from debian stretch to debian buster (10.5)) - with 7.8-1 at that time:
>
> root@server1:~# cd /usr/lib/x86_64-linux-gnu/glusterfs
> root@server1:/usr/lib/x86_64-linux-gnu/glusterfs# ls -ltr|grep storage.sh
> -rwxr-xr-x 1 root root  1259 Oct 23 09:28 mount-shared-storage.sh
> root@server1:/usr/lib/x86_64-linux-gnu/glusterfs# cat
> mount-shared-storage.sh
> #!/bin/bash
> #Post reboot there is a chance in which mounting of shared storage will
> fail
> #This will impact starting of features like NFS-Ganesha. So this script
> will
> #try to mount the shared storage if it fails
>
> exitStatus=0
>
> while IFS= read -r glm
> do
> IFS=$' \t' read -r -a arr <<< "$glm"
>
> #Validate storage type is glusterfs
> if [ "${arr[2]}" == "glusterfs" ]
> then
>
> #check whether shared storage is mounted
> #if it is mounted then mountpoint -q will return a 0
> success code
> if mountpoint -q "${arr[1]}"
> then
> echo "${arr[1]} is already mounted"
> continue
> fi
>
> mount -t glusterfs "${arr[0]}" "${arr[1]}"
> #wait for few seconds
> sleep 10
>
> #recheck mount got succeed
> if mountpoint -q "${arr[1]}"
> then
> echo "${arr[1]} has been mounted"
> continue
> else
> echo "${arr[1]} failed to mount"
> exitStatus=1
> fi
> fi
> done <<< "$(sed '/^#/ d' < /etc/fstab)"
> exit $exitStatus
> root@server1:/usr/lib/x86_64-linux-gnu/glusterfs#
>
> root@server1:/usr/lib/x86_64-linux-gnu/glusterfs# uname -a
> Linux server1 4.19.0-10-amd64 #1 SMP Debian 4.19.132-1 (2020-07-24) x86_64
> GNU/Linux
> root@server1:/usr/lib/x86_64-linux-gnu/glusterfs# cat /etc/debian_version
> 10.5
> root@server1:/usr/lib/x86_64-linux-gnu/glusterfs#
>
>
> Then I enabled and started glusterfssharedstorage.service.
> It works.
> NOTE: same done on server2 as well.
>
>
>
> Kind regards,
> peterk
>
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>






Re: [Gluster-users] gluster daemons missing in debian buster 10.5 after glusterfs 7.8-1 installed?

2020-10-15 Thread Kaleb Keithley
On Thu, Oct 15, 2020 at 8:26 AM peter knezel  wrote:

>
> But why i do not see same daemons?
> was there a crucial change between 7.7-1 and 7.8-1?
>

No, it's just a bug in the packaging.

It would have helped if you had simply indicated which service was running
in 7.7 that wasn't running in 7.8.

Showing a list of things and asking what's missing isn't really helpful.

Look for new packages shortly.


>
> ii  glusterfs-server   7.7-1
> amd64clustered file-system (server package)
>
> # systemctl --all|grep gluster
>   ...
>   glusterfssharedstorage.service
>   loadedactive   running   Mount
> glusterfs sharedstorage
>   ...
>


> on a clear buster server installed from scratch:
>
> # dpkg -l|grep gluster
>
> ii  glusterfs-server 7.8-1
>   amd64clustered file-system (server package)
>


> # systemctl --all|grep gluster
>  ...
>






Re: [Gluster-users] gluster daemons missing in debian buster 10.5 after glusterfs 7.8-1 installed?

2020-10-14 Thread Kaleb Keithley
tl;dr: some Linux distribution packaging guidelines say it's bad to
automatically start services after installing.

long answer:
The package layout of the gluster packages changed in the base packages of
newer Debian and Ubuntu releases. To be compatible — e.g. to update from
Buster's base glusterfs to the gluster community packages — the Gluster
community packages switched to the same layout. If they used to start
automatically before, and they don't now, that's almost certainly why.

For the most part we just copied the base package debian files, so whatever
the base packages do, or don't do, that's what the community packages also
do, including starting, or not starting the daemons.
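So after installing the community packages, enabling the daemons is up to you. A minimal sketch, using the service names that appear elsewhere in this thread; it prints the commands rather than running them, so the example needs no root (drop the echo to actually apply).

```shell
# Print the enable commands for the gluster daemons rather than running them.
# glusterfssharedstorage is only relevant if you use
# cluster.enable-shared-storage, so it's left out here.
for svc in glusterd glustereventsd; do
  echo systemctl enable --now "$svc"
done
```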


On Wed, Oct 14, 2020 at 8:39 AM peter knezel  wrote:

> Hello Kaleb,
> thanks for your email.
>
> i think i found the problem.
> In previous releases, when gluster packages are installed, they are
> started.
> In the case of 7.8-1 version i needed to check and enable them:
> systemctl is-enabled xx
> systemctl enable xx
> systemctl start xx
>
> where xx={glusterd.service, glustereventsd.service}
>
> But i still miss the glusterfssharedstorage.service - i think it will be
> activated when a glusterfs client locally mounts a created volume.
>
> I will go on and recheck all my steps. i will update this thread later.
> Kind regards,
> peterk
>
>
> On Wed, 14 Oct 2020 at 14:16, Kaleb Keithley  wrote:
>
>> On Wed, Oct 14, 2020 at 7:56 AM peter knezel 
>> wrote:
>>
>>> Hello All,
>>>
>>> i have installed 7.8-1 version of glusterfs packages on a VM with debian
>>> buster 10.5 and see no glusterfs daemons present.
>>>
>>> root@buster:~# dpkg -l|grep gluster
>>> ii  glusterfs-client 7.8-1
>>> amd64clustered file-system (client package)
>>> ii  glusterfs-common 7.8-1
>>> amd64GlusterFS common libraries and translator modules
>>> ii  glusterfs-server 7.8-1
>>> amd64clustered file-system (server package)
>>> ii  libglusterfs0:amd64  7.8-1
>>> amd64GlusterFS shared library
>>> root@buster:~# cat /etc/debian_version
>>> 10.5
>>> root@buster:~# uname -a
>>> Linux buster 4.19.0-10-amd64 #1 SMP Debian 4.19.132-1 (2020-07-24)
>>> x86_64 GNU/Linux
>>> root@buster:~# systemctl --all|grep gluster
>>> root@buster:~#
>>>
>>> root@buster:/etc/systemd/system/multi-user.target.wants# pwd
>>> /etc/systemd/system/multi-user.target.wants
>>> root@buster:/etc/systemd/system/multi-user.target.wants# ls -ltr|grep
>>> gluster
>>> root@buster:/etc/systemd/system/multi-user.target.wants#
>>>
>>>
>>> root@buster:/lib/systemd/system# ls -ltr|grep gluster
>>> -rw-r--r-- 1 root root  400 Sep 29 04:01 glustereventsd.service
>>> -rw-r--r-- 1 root root  466 Sep 29 04:01 glusterd.service
>>> root@buster:/lib/systemd/system#
>>>
>>> on updated server from stretch to buster:
>>> root@stretchtobluster:/lib/systemd/system# ls -ltr|grep gluster
>>> -rw-r--r-- 1 root root  425 Jul 21 05:16 gluster-ta-volume.service
>>> -rw-r--r-- 1 root root  301 Jul 21 05:16 glusterfssharedstorage.service
>>> -rw-r--r-- 1 root root  400 Jul 21 05:16 glustereventsd.service
>>> -rw-r--r-- 1 root root  464 Jul 21 05:16 glusterd.service
>>> root@stretchtobluster:/lib/systemd/system#
>>>
>>> Is something missing here?
>>>
>>
>> The Gluster executables (including the daemons: glusterd, glusterfsd,
>> glusterfs, and glustereventsd) aren't installed in /lib/systemd/...  and
>> never have been.
>>
>> They are installed in /usr/sbin/, and /usr/lib/$arch-linux-gnu/glusterfs/
>> or on newer Debian in /usr/libexec/glusterfs/
>>
>> What do you think is missing?
>>
>> --
>>
>> Kaleb
>>
>






Re: [Gluster-users] gluster daemons missing in debian buster 10.5 after glusterfs 7.8-1 installed?

2020-10-14 Thread Kaleb Keithley
On Wed, Oct 14, 2020 at 7:56 AM peter knezel  wrote:

> Hello All,
>
> i have installed 7.8-1 version of glusterfs packages on a VM with debian
> buster 10.5 and see no glusterfs daemons present.
>
> root@buster:~# dpkg -l|grep gluster
> ii  glusterfs-client 7.8-1
>   amd64clustered file-system (client package)
> ii  glusterfs-common 7.8-1
>   amd64GlusterFS common libraries and translator modules
> ii  glusterfs-server 7.8-1
>   amd64clustered file-system (server package)
> ii  libglusterfs0:amd64  7.8-1
>   amd64GlusterFS shared library
> root@buster:~# cat /etc/debian_version
> 10.5
> root@buster:~# uname -a
> Linux buster 4.19.0-10-amd64 #1 SMP Debian 4.19.132-1 (2020-07-24) x86_64
> GNU/Linux
> root@buster:~# systemctl --all|grep gluster
> root@buster:~#
>
> root@buster:/etc/systemd/system/multi-user.target.wants# pwd
> /etc/systemd/system/multi-user.target.wants
> root@buster:/etc/systemd/system/multi-user.target.wants# ls -ltr|grep
> gluster
> root@buster:/etc/systemd/system/multi-user.target.wants#
>
>
> root@buster:/lib/systemd/system# ls -ltr|grep gluster
> -rw-r--r-- 1 root root  400 Sep 29 04:01 glustereventsd.service
> -rw-r--r-- 1 root root  466 Sep 29 04:01 glusterd.service
> root@buster:/lib/systemd/system#
>
> on updated server from stretch to buster:
> root@stretchtobluster:/lib/systemd/system# ls -ltr|grep gluster
> -rw-r--r-- 1 root root  425 Jul 21 05:16 gluster-ta-volume.service
> -rw-r--r-- 1 root root  301 Jul 21 05:16 glusterfssharedstorage.service
> -rw-r--r-- 1 root root  400 Jul 21 05:16 glustereventsd.service
> -rw-r--r-- 1 root root  464 Jul 21 05:16 glusterd.service
> root@stretchtobluster:/lib/systemd/system#
>
> Is something missing here?
>

The Gluster executables (including the daemons: glusterd, glusterfsd,
glusterfs, and glustereventsd) aren't installed in /lib/systemd/...  and
never have been.

They are installed in /usr/sbin/, and /usr/lib/$arch-linux-gnu/glusterfs/
or on newer Debian in /usr/libexec/glusterfs/

What do you think is missing?

--

Kaleb






Re: [Gluster-users] Official Bugzilla?

2020-07-03 Thread Kaleb Keithley
On Fri, Jul 3, 2020 at 6:46 AM lejeczek  wrote:

> hi guys,
>
> where those of use who run gluster from(via) EPEL repo
>

glusterfs rpms aren't in EPEL, and haven't been for something like six or
seven years.

Perhaps you meant from the CentOS Storage SIG repo?


> should go to report bugs?
>

https://github.com/gluster/glusterfs/issues

If you're really still running glusterfs from old EPEL RPMs then you're
running a very very old version of gluster and you should upgrade to
something more recent. I can pretty much guarantee any bugs in something
that old have been fixed in a newer version.




>
> many thanks, L.
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>






Re: [Gluster-users] Gluster Test Day

2020-06-24 Thread Kaleb Keithley
On Wed, Jun 24, 2020 at 8:32 AM Strahil Nikolov 
wrote:

> Hi Rinku,
>
> can you tell me how the packages for CentOS 7 are built, as I had issues
> yesterday building both the latest and v7 branches?
>

Generally speaking, CentOS packages are built in CBS (CentOS Build System).
They are built using the glusterfs.spec in the CentOS dist-git repo at
https://git.centos.org/rpms/glusterfs. Each gluster release has a separate
dist-git .spec on a branch for the CentOS release, i.e. 6, 7, and 8. These
.specs are, generally, very close to the .specs in the glusterfs tree.

There is a README.md file there that describes how the rpms are built. In a
nutshell, a .src.rpm is created using rpmbuild, and then the rpms are built
in mock in CBS.

There aren't branches for glusterfs-8 yet, so in this case the gluster-8
RC0 packages that are on download.gluster.org were built using upstream
glusterfs.spec.  There are .src.rpms on download.gluster.org from which you
can extract the glusterfs.spec that was used to build the rpms.
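Replicating that flow locally can be sketched as below. The dist-git URL is the one given above; the mock config name is an assumption, and the runnable tail just shows pulling the name-version-release out of a .src.rpm filename (itself a made-up example) before submitting it.

```shell
# Commented: the build steps, per the README.md flow described above.
#   git clone https://git.centos.org/rpms/glusterfs.git && cd glusterfs
#   rpmbuild --define "_topdir $PWD" -bs SPECS/glusterfs.spec   # make the .src.rpm
#   mock -r centos-stream-8-x86_64 --rebuild SRPMS/glusterfs-*.src.rpm
# Runnable bit: derive name-version-release from a src.rpm filename.
srpm="glusterfs-8.0-0.1.rc0.el8.src.rpm"
nvr=${srpm%.src.rpm}                     # glusterfs-8.0-0.1.rc0.el8
ver=${nvr#glusterfs-}; ver=${ver%%-*}    # 8.0
echo "$nvr (version $ver)"               # prints: glusterfs-8.0-0.1.rc0.el8 (version 8.0)
```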

--

Kaleb






Re: [Gluster-users] [rhgs-devel] Announcing Gluster release 5.12

2020-03-31 Thread Kaleb Keithley
Support of upstream, community-built packages is pretty nebulous. If it
builds with little or no work, we typically package it. Actual support, as
in help with problems, comes from the "community."

Niels and I discussed building glusterfs-5 for C8 and decided we'd wait and
see if anyone actually asked for it.

Typical places to reach the CentOS Storage SIG people would be the
#centos-devel channel on FreeNode IRC, the centos-devel@ and/or the
centos-storage-...@centos.org mailing lists, and also here, to a lesser
extent, on gluster-de...@gluster.org.

--

Kaleb


On Tue, Mar 31, 2020 at 4:39 AM Yaniv Kaul  wrote:

>
>
> On Tue, Mar 31, 2020 at 10:06 AM Alan Orth  wrote:
>
>> Thanks, Hari! Do you know where the CentOS Storage SIG does their release
>> planning? I'm curious as they have released CentOS 8 packages for Gluster 6
>> and Guster 7, but not Gluster 5.
>>
>
> I'm not sure it makes sense to support Gluster 5 with CentOS 8.
> I would not have supported 6 as well?
> Y.
>
>>
>> http://mirror.centos.org/centos/8/storage/x86_64/
>>
>> Regards,
>>
>> On Mon, Mar 2, 2020 at 10:20 AM Hari Gowtham  wrote:
>>
>>> Hi,
>>>
>>> The Gluster community is pleased to announce the release of Gluster
>>> 5.12 (packages available at [1]).
>>>
>>> Release notes for the release can be found at [2].
>>>
>>> Major changes, features and limitations addressed in this release:
>>> None
>>>
>>> Thanks,
>>> Gluster community
>>>
>>> [1] Packages for 5.12:
>>> https://download.gluster.org/pub/gluster/glusterfs/5/5.12/
>>>
>>> [2] Release notes for 5.12:
>>> https://docs.gluster.org/en/latest/release-notes/5.12/
>>>
>>>
>>> --
>>> Regards,
>>> Hari Gowtham.
>>> 
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Alan Orth
>> alan.o...@gmail.com
>> https://picturingjordan.com
>> https://englishbulgaria.net
>> https://mjanja.ch
>> 
>>
>>
>>
>>
>






Re: [Gluster-users] Repo NFS-Ganesha for SLES 15 SP1

2020-01-16 Thread Kaleb Keithley
On Thu, Jan 16, 2020 at 9:04 AM Christian Meyer 
wrote:

> Hello everyone!
>
> I'm looking for NFS-Ganesha 2.7 packages for SLES 15 SP1.
>
> I found the following repo, but it's empty.
>
> https://download.opensuse.org/repositories/home:/glusterfs:/SLES15SP1-nfs-ganesha-2.7/SLE_15_SP1/
>
> Since the repo is created, I assume that there should actually be
> packages. Since I don't know the maintainer, my question into the
> round if the repo is maintained and if NFS-Ganesha 2.7 packages for
> SLES 15 SP1 are available.
>

The NFS-Ganesha packages moved to their own OBS project some time ago.

No ganesha 2.7.x packages were ever built under the glusterfs project for
SLES15SP1; ISTR that SLES15SP1 was released after ganesha 2.7 reached EOL.

There are Ganesha packages for 2.7.x built for SLES15. AFAIK they should
work fine on SLES15SP1.

There are Ganesha 2.8 and Ganesha 3 packages under the nfs-ganesha project
for SLES15SP1. We recommend you use 2.8.x or 3.x, which are actively
maintained as of this writing.  See
https://build.opensuse.org/users/nfs-ganesha or
https://download.nfs-ganesha.org/NFS-Ganesha.README

There are no plans to build any more ganesha 2.7.x packages.

--

Kaleb


Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] No gluster NFS server on localhost

2020-01-06 Thread Kaleb Keithley
On Mon, Jan 6, 2020 at 8:19 AM Yaniv Kaul  wrote:

>
>
> On Mon, Jan 6, 2020 at 2:27 PM Xie Changlong  wrote:
>
>> Hi Birgit
>>
>>
>>  Gnfs is not build in glusterfs 7.0 by default, you can build the
>> source code with: ./autogen.sh; ./configure  --enable-gnfs
>>
>> to enable it.
>>
>
> I'm not sure that explains it.
> Gluster 7 was released in November[1].
> My patch for this [2] was merged only to Master, late December.
> It was never backported to 7.x.
>


Installed from where?

And by 7.0 all the Gluster community packages* finally disabled gnfs after
years of warning that gnfs was deprecated and that the gnfs bits would
eventually not be built and packaged.

FWIW I know that Debian, Ubuntu, and SUSE build their own "official"
packages. They may or may not have disabled gnfs in their packages.

* E.g. from download.gluster.org, the Gluster Launchpad PPA, the Gluster
OpenSUSE OBS, and the CentOS Storage SIG.

--

Kaleb




Re: [Gluster-users] Gluster 7.0 (CentOS7) issue with hooks

2019-12-30 Thread Kaleb Keithley
On Sun, Dec 29, 2019 at 4:49 PM Strahil Nikolov 
wrote:

> Hello Community,
>
> After upgrading from Gluster v6.6 to 7.0 I have noticed that some gluster
> hooks are wrongly named.
>
> For example:
> [root@ovirt1 post]# pwd
> /var/lib/glusterd/hooks/1/start/post
> [root@ovirt1 post]# ll
> total 12
> -rwxr-xr-x. 1 root root 2334 Oct 16 13:57 D29CTDBsetup.sh
> -rwxr-xr-x. 1 root root 4137 Oct 16 13:57 D30samba-start.sh
> [root@ovirt1 post]# rpm -qf D30samba-start.sh
> file /var/lib/glusterd/hooks/1/start/post/D30samba-start.sh is not owned
> by any package
> [root@ovirt1 post]# rpm -qf S30samba-start.sh
> glusterfs-server-7.0-1.el7.x86_64
> [root@ovirt1 post]# mv D30samba-start.sh S30samba-start.sh
>
> Can you reproduce the issue ?
>


I can't.  The CentOS (and Fedora) glusterfs-server-7.0 rpm contains (`rpm
-qlp glusterfs-server`):
   ...
   /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
  ...

The RPM .spec file used to build has:
   %attr(0755,-,-)
%{_sharedstatedir}/glusterd/hooks/1/start/post/S30samba-start.sh

In the -release-7 branch of the source the Makefile.am has
   S30samba-start.sh

And in the tree the file itself is named: S30samba-start.sh

Also on my C7 box, `rpm -qf
/var/lib/glusterd/hooks/1/start/post/S30samba-start.sh` gives
   glusterfs-server-7.0-1.el7.x86_64

I can't imagine how you managed to get a file named D30samba-start.sh on
your system.

--

Kaleb




Re: [Gluster-users] Gluster v7 in CentOS7

2019-12-18 Thread Kaleb Keithley
On Wed, Dec 18, 2019 at 7:27 AM Strahil  wrote:

> Hello Community,
>
> Can someone update me what is the situation in CentOS 7 with Gluster v7.
>
> It seems that there is no centos-release-gluster7  rpm in the stable repos.
>

Good question. It's built, and tagged for release in the CentOS extras
repo.

I've asked the CentOS folks why it hasn't been pushed to the repos yet.

In the meantime you can download it from
https://cbs.centos.org/koji/buildinfo?buildID=27732

--

Kaleb




Re: [Gluster-users] [Gluster-Maintainers] Proposal to change gNFS status

2019-11-21 Thread Kaleb Keithley
Independent of anything else—

Maintain it. Send patches to gerrit. Get the requisite +2 reviews on the
patches. Amar still has commit privs AFAIK; he can merge anything that gets
two votes.

It's open source meritocracy.

If there's real support for it then it makes a stronger case for adding it
back to the community packages.

On Thu, Nov 21, 2019 at 4:14 PM Kaleb Keithley  wrote:

> I personally wouldn't call three years ago — when we started to deprecate
> it, in glusterfs-3.9 — a recent change.
>
> As a community the decision was made to move to NFS-Ganesha as the
> preferred NFS solution, but it was agreed to keep the old code in the tree
> for those who wanted it. There have been plans to drop it from the
> community packages for most of those three years, but we didn't follow
> through across the board until fairly recently. Perhaps the most telling
> piece of data is that it's been gone from the packages in the CentOS
> Storage SIG in glusterfs-4.0, -4.1, -5, -6, and -7 with no complaints ever,
> that I can recall.
>
> Ganesha is a preferable solution because it supports NFSv4, NFSv4.1,
> NFSv4.2, and pNFS, in addition to legacy NFSv3. More importantly, it is
> actively developed, maintained, and supported, both in the community and
> commercially. There are several vendors selling it, or support for it; and
> there are community packages for it for all the same distributions that
> Gluster packages are available for.
>
> Out in the world, the default these days is NFSv4. Specifically v4.2 or
> v4.1 depending on how recent your linux kernel is. In the linux kernel,
> client mounts start negotiating for v4.2 and work down to v4.1, v4.0, and
> only as a last resort v3. NFSv3 client support in the linux kernel largely
> exists at this point only because of the large number of legacy servers
> still running that can't do anything higher than v3. The linux NFS
> developers would drop the v3 support in a heartbeat if they could.
>
> IMO, providing it, and calling it maintained, only encourages people to
> keep using a dead end solution. Anyone in favor of bringing back NFSv2,
> SSHv1, or X10R4? No? I didn't think so.
>
> The recent issue[1] where someone built gnfs in glusterfs-7.0 on CentOS7
> strongly suggests to me that gnfs is not actually working well. Three years
> of no maintenance seems to have taken its toll.
>
> Other people are more than welcome to build their own packages from the
> src.rpms and/or tarballs that are available from gluster — and support
> them. It's still in the source and there are no plans to remove it. (Unlike
> most of the other deprecated features which were recently removed in
> glusterfs-7.)
>
>
>
> [1] https://github.com/gluster/glusterfs/issues/764
>
> On Thu, Nov 21, 2019 at 5:31 AM Amar Tumballi  wrote:
>
>> Hi All,
>>
>> As per the discussion on https://review.gluster.org/23645, recently we
>> changed the status of gNFS (gluster's native NFSv3 support) feature to
>> 'Depricated / Orphan' state. (ref:
>> https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L185..L189).
>> With this email, I am proposing to change the status again to 'Odd Fixes'
>> (ref: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L22)
>>
>> TL;DR;
>>
>> I understand the current maintainers are not able to focus on maintaining
>> it as the focus of the project, as earlier described, is keeping
>> NFS-Ganesha based integration with glusterfs. But, I am volunteering along
>> with Xie Changlong (currently working at Chinamobile), to keep the feature
>> running as it used to in previous versions. Hence the status of 'Odd
>> Fixes'.
>>
>> Before sending the patch to make these changes, I am proposing it here
>> now, as gNFS is not even shipped with latest glusterfs-7.0 releases. I have
>> heard from some users that it was working great for them with earlier
>> releases, as all they wanted was NFS v3 support, and not much of features
>> from gNFS. Also note that, even though the packages are not built, none of
>> the regression tests using gNFS are stopped with latest master, so it is
>> working the same as it has for at least the last 2 years.
>>
>> I request the package maintainers to please add '--with gnfs' (or
>> --enable-gnfs) back to their release script through this email, so those
>> users wanting to use gNFS can happily continue to use it. Also, a point for
>> users/admins is that the status is 'Odd Fixes', so don't expect any
>> 'enhancements' on the features provided by gNFS.
>>
>> Happy to hear feedback, if any.
>>
>> Regards,
>> Amar
>>
>> ___
>> maintainers mailing 

Re: [Gluster-users] [Gluster-Maintainers] Proposal to change gNFS status

2019-11-21 Thread Kaleb Keithley
I personally wouldn't call three years ago — when we started to deprecate
it, in glusterfs-3.9 — a recent change.

As a community the decision was made to move to NFS-Ganesha as the
preferred NFS solution, but it was agreed to keep the old code in the tree
for those who wanted it. There have been plans to drop it from the
community packages for most of those three years, but we didn't follow
through across the board until fairly recently. Perhaps the most telling
piece of data is that it's been gone from the packages in the CentOS
Storage SIG in glusterfs-4.0, -4.1, -5, -6, and -7 with no complaints ever,
that I can recall.

Ganesha is a preferable solution because it supports NFSv4, NFSv4.1,
NFSv4.2, and pNFS, in addition to legacy NFSv3. More importantly, it is
actively developed, maintained, and supported, both in the community and
commercially. There are several vendors selling it, or support for it; and
there are community packages for it for all the same distributions that
Gluster packages are available for.

Out in the world, the default these days is NFSv4. Specifically v4.2 or
v4.1 depending on how recent your linux kernel is. In the linux kernel,
client mounts start negotiating for v4.2 and work down to v4.1, v4.0, and
only as a last resort v3. NFSv3 client support in the linux kernel largely
exists at this point only because of the large number of legacy servers
still running that can't do anything higher than v3. The linux NFS
developers would drop the v3 support in a heartbeat if they could.

IMO, providing it, and calling it maintained, only encourages people to
keep using a dead end solution. Anyone in favor of bringing back NFSv2,
SSHv1, or X10R4? No? I didn't think so.

The recent issue[1] where someone built gnfs in glusterfs-7.0 on CentOS7
strongly suggests to me that gnfs is not actually working well. Three years
of no maintenance seems to have taken its toll.

Other people are more than welcome to build their own packages from the
src.rpms and/or tarballs that are available from gluster — and support
them. It's still in the source and there are no plans to remove it. (Unlike
most of the other deprecated features which were recently removed in
glusterfs-7.)



[1] https://github.com/gluster/glusterfs/issues/764

On Thu, Nov 21, 2019 at 5:31 AM Amar Tumballi  wrote:

> Hi All,
>
> As per the discussion on https://review.gluster.org/23645, recently we
> changed the status of gNFS (gluster's native NFSv3 support) feature to
> 'Depricated / Orphan' state. (ref:
> https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L185..L189).
> With this email, I am proposing to change the status again to 'Odd Fixes'
> (ref: https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L22)
>
> TL;DR;
>
> I understand the current maintainers are not able to focus on maintaining
> it as the focus of the project, as earlier described, is keeping
> NFS-Ganesha based integration with glusterfs. But, I am volunteering along
> with Xie Changlong (currently working at Chinamobile), to keep the feature
> running as it used to in previous versions. Hence the status of 'Odd
> Fixes'.
>
> Before sending the patch to make these changes, I am proposing it here
> now, as gNFS is not even shipped with latest glusterfs-7.0 releases. I have
> heard from some users that it was working great for them with earlier
> releases, as all they wanted was NFS v3 support, and not much of features
> from gNFS. Also note that, even though the packages are not built, none of
> the regression tests using gNFS are stopped with latest master, so it is
> working the same as it has for at least the last 2 years.
>
> I request the package maintainers to please add '--with gnfs' (or
> --enable-gnfs) back to their release script through this email, so those
> users wanting to use gNFS can happily continue to use it. Also, a point for
> users/admins is that the status is 'Odd Fixes', so don't expect any
> 'enhancements' on the features provided by gNFS.
>
> Happy to hear feedback, if any.
>
> Regards,
> Amar
>
> ___
> maintainers mailing list
> maintain...@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
>




Re: [Gluster-users] rsa.pub at https://download.gluster.org/pub/gluster/glusterfs/LATEST/?

2019-10-29 Thread Kaleb Keithley
On Mon, Oct 28, 2019 at 1:03 PM Shane St Savage 
wrote:

> Adding rsa.pub at
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub would
> allow bootstrapping Debian servers with the following repo/key:
>
> deb
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/${RELEASE}/amd64/apt
> ${RELEASE} main
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub
>
> In other words, only LATEST would have to be referenced instead of LATEST
> and some specific version for the key.
>

I'm not a Debian packaging expert. (Even if sometimes I play one on TV.)
Why is this preferable to what's in the README.txt (i.e. "wget -O -
https://download.gluster.org/pub/gluster/glusterfs/5/rsa.pub | apt-key add
-") ?

You import the key once, and it works for every update after that? That's
what Louis Zuckerman (a.k.a. semiosis), the original gluster debian
packager, suggested. I don't know enough to say why your deb line is better
than semiosis' apt-key add command.

Also the fact that the key hasn't actually changed since glusterfs-5 means,
among other things, you only need to change the
/etc/apt/sources.list.d/gluster.list and updates to -6 or -7 will just keep
working with the key you already imported.
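Concretely, the import-once flow might look like the following sketch. The version-pinned repo path is an assumption extrapolated from the LATEST-based deb line quoted earlier in this thread, and the Debian codename is illustrative.

```shell
# Import the signing key once (per the README.txt approach), then pin a
# release series in the repo file so automatic updates stay on that series.
# The pinned path and the "buster" codename are illustrative assumptions.
wget -O - https://download.gluster.org/pub/gluster/glusterfs/5/rsa.pub | apt-key add -
echo "deb https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/Debian/buster/amd64/apt buster main" \
    > /etc/apt/sources.list.d/gluster.list
apt-get update
```

Because the key is unchanged since glusterfs-5, only the gluster.list line needs to change when moving between release series.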

> As an example of why this is useful, Gluster 7 has been released since my
> original mail, so now the key for LATEST is at
> https://download.gluster.org/pub/gluster/glusterfs/7/rsa.pub instead of
> https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub. Every time
> a new verison of Gluster is released the recipe for installing the latest
> Gluster client has to be updated.
>

I have the possibly mistaken impression that not everyone wants to always
use .../glusterfs/LATEST.  Some people want to install glusterfs-6 and stay
on -6, i.e. .../glusterfs/6/LATEST. And they'd be really upset if they came
in one morning to find that an automatic update (however good or bad an
idea that is) had updated them to glusterfs-7 when they weren't ready for
it. (And worse, if it broke their system.)

And apropos of nothing in particular, perhaps we should create a new key
for glusterfs-8 when that time comes; it's probably time.



> On Mon, Sep 9, 2019 at 11:42 PM Kaleb Keithley 
> wrote:
>
>> Hi,
>>
>> What is the issue that this would solve?
>>
>> The Debian README.txt files and RPM repo files for 6.x all say the
>> rsa.pub is at
>> https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub and have
>> since day one.
>>
>> (Likewise the rsa.pub for 5.x is at
>> https://download.gluster.org/pub/gluster/glusterfs/5/rsa.pub)
>>
>>
>> On Mon, Sep 9, 2019 at 10:29 PM Shane St Savage <
>> sh...@axiomdatascience.com> wrote:
>>
>>> Hello,
>>>
>>> Any chance of getting an rsa.pub available in
>>>
>>> https://download.gluster.org/pub/gluster/glusterfs/LATEST/
>>>
>>> at
>>>
>>> https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub
>>>
>>> ?
>>>
>>> (in this case, it should be
>>> https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub).
>>>
>>> Thanks,
>>> Shane
>>>
>>


Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] [Gluster-Maintainers] GlusterFS - 7.0RC1 - Test day (26th Sep 2019)

2019-09-20 Thread Kaleb Keithley
On Fri, Sep 20, 2019 at 8:39 AM Rinku Kothiya  wrote:

> Hi,
>
> Release-7 RC1 packages are built. We are planning to have a test day on
> 26-Sep-2019, we request your participation. Do post on the lists any
> testing done and feedback for the same.
>
> Packages for Fedora 29, Fedora 30, RHEL 8, CentOS  at
> https://download.gluster.org/pub/gluster/glusterfs/qa-releases/7.0rc1/
>
> Packages are signed. The public key is at
> https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub
>

FYI, there are no CentOS packages there, but there are Debian stretch and
Debian buster packages.

Packages for CentOS 7 are built in  CentOS CBS at
https://cbs.centos.org/koji/buildinfo?buildID=26538 but I don't see them in
https://buildlogs.centos.org/centos/7/storage/x86_64/.

@Niels, shouldn't we expect them in buildlogs?

--

Kaleb




Re: [Gluster-users] rsa.pub at https://download.gluster.org/pub/gluster/glusterfs/LATEST/?

2019-09-10 Thread Kaleb Keithley
Hi,

What is the issue that this would solve?

The Debian README.txt files and RPM repo files for 6.x all say the rsa.pub
is at https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub and have
since day one.

(Likewise the rsa.pub for 5.x is at
https://download.gluster.org/pub/gluster/glusterfs/5/rsa.pub)



On Mon, Sep 9, 2019 at 10:29 PM Shane St Savage 
wrote:

> Hello,
>
> Any chance of getting an rsa.pub available in
>
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/
>
> at
>
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub
>
> ?
>
> (in this case, it should be
> https://download.gluster.org/pub/gluster/glusterfs/6/rsa.pub).
>
> Thanks,
> Shane
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Issues with Geo-replication (GlusterFS 6.3 on Ubuntu 18.04)

2019-09-02 Thread Kaleb Keithley
Fixes on master (before or after the release-7 branch was taken) almost
certainly warrant a backport IMO to at least release-6, and probably
release-5 as well.

We used to have a "tracker" BZ for each minor release (e.g. 6.6) to keep
track of backports by cloning the original BZ and changing the Version, and
adding that BZ to the tracker. I'm not sure what happened to that practice.
The last ones I can find are for 6.3 and 5.7;
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.3 and
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.7

It isn't enough to just backport recent fixes on master to release-7. We
are supposedly continuing to maintain release-6 and release-5 after
release-7 GAs. If that has changed, I haven't seen an announcement to that
effect. I don't know why our developers don't automatically backport to all
the actively maintained releases.

Even if there isn't a tracker BZ, you can always create a backport BZ by
cloning the original BZ and change the release to 6. That'd be a good place
to start.

On Sun, Sep 1, 2019 at 8:45 AM Alexander Iliev 
wrote:

> Hi Strahil,
>
> Yes, this might be right, but I would still expect fixes like this to be
> released for all supported major versions (which should include 6.) At
> least that's how I understand https://www.gluster.org/release-schedule/.
>
> Anyway, let's wait for Sunny to clarify.
>
> Best regards,
> alexander iliev
>
> On 9/1/19 2:07 PM, Strahil Nikolov wrote:
> > Hi Alex,
> >
> > I'm not very deep into bugzilla stuff, but for me NEXTRELEASE means v7.
> >
> > Sunny,
> > Am I understanding it correctly ?
> >
> > Best Regards,
> > Strahil Nikolov
> >
> > В неделя, 1 септември 2019 г., 14:27:32 ч. Гринуич+3, Alexander Iliev
> >  написа:
> >
> >
> > Hi Sunny,
> >
> > Thank you for the quick response.
> >
> > It's not clear to me however if the fix has been already released or not.
> >
> > The bug status is CLOSED NEXTRELEASE and according to [1] the
> > NEXTRELEASE resolution means that the fix will be included in the next
> > supported release. The bug is logged against the mainline version
> > though, so I'm not sure what this means exactly.
> >
> >  From the 6.4[2] and 6.5[3] release notes it seems it hasn't been
> > released yet.
> >
> > Ideally I would not like to patch my systems locally, so if you have an
> > ETA on when this will be out officially I would really appreciate it.
> >
> > Links:
> > [1] https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_status
> > [2] https://docs.gluster.org/en/latest/release-notes/6.4/
> > [3] https://docs.gluster.org/en/latest/release-notes/6.5/
> >
> > Thank you!
> >
> > Best regards,
> >
> > alexander iliev
> >
> > On 8/30/19 9:22 AM, Sunny Kumar wrote:
> >  > Hi Alexander,
> >  >
> >  > Thanks for pointing that out!
> >  >
> >  > But this issue is fixed now you can see below link for bz-link and
> patch.
> >  >
> >  > BZ - https://bugzilla.redhat.com/show_bug.cgi?id=1709248
> >  >
> >  > Patch - https://review.gluster.org/#/c/glusterfs/+/22716/
> >  >
> >  > Hope this helps.
> >  >
> >  > /sunny
> >  >
> >  > On Fri, Aug 30, 2019 at 2:30 AM Alexander Iliev
> >  > mailto:glus...@mamul.org>> wrote:
> >  >>
> >  >> Hello dear GlusterFS users list,
> >  >>
> >  >> I have been trying to set up geo-replication between two clusters for
> >  >> some time now. The desired state is (Cluster #1) being replicated to
> >  >> (Cluster #2).
> >  >>
> >  >> Here are some details about the setup:
> >  >>
> >  >> Cluster #1: three nodes connected via a local network (
> 172.31.35.0/24),
> >  >> one replicated (3 replica) volume.
> >  >>
> >  >> Cluster #2: three nodes connected via a local network (
> 172.31.36.0/24),
> >  >> one replicated (3 replica) volume.
> >  >>
> >  >> The two clusters are connected to the Internet via separate network
> >  >> adapters.
> >  >>
> >  >> Only SSH (port 22) is open on cluster #2 nodes' adapters connected to
> >  >> the Internet.
> >  >>
> >  >> All nodes are running Ubuntu 18.04 and GlusterFS 6.3 installed from
> [1].
> >  >>
> >  >> The first time I followed the guide[2] everything went fine up until
> I
> >  >> reached the "Create the session" step. That was like a month ago,
> then I
> >  >> had to temporarily stop working in this and now I am coming back to
> it.
> >  >>
> >  >> Currently, if I try to see the mountbroker status I get the
> following:
> >  >>
> >  >>> # gluster-mountbroker status
> >  >>> Traceback (most recent call last):
> >  >>>File "/usr/sbin/gluster-mountbroker", line 396, in 
> >  >>>  runcli()
> >  >>>File
> > "/usr/lib/python3/dist-packages/gluster/cliutils/cliutils.py", line 225,
> > in runcli
> >  >>>  cls.run(args)
> >  >>>File "/usr/sbin/gluster-mountbroker", line 275, in run
> >  >>>  out = execute_in_peers("node-status")
> >  >>>File
> "/usr/lib/python3/dist-packages/gluster/cliutils/cliutils.py",
> >  >> line 127, in execute_in_peers
> >  >>>  raise GlusterCmdException((rc, out, err, " 

Re: [Gluster-users] Important: Debian and Ubuntu packages are changing

2019-08-09 Thread Kaleb Keithley
On Thu, Aug 8, 2019 at 4:56 PM Ingo Fischer  wrote:

> Hi Kaleb,
>
> I'm currently experiencing this issue while trying to upgrade my Proxmox
> servers where gluster is installed too.
>
> Thank you for the official information for the community, but what
> exactly do this mean?
>
> Will upgrades from 5.8 to 5.9 work or what exactly needs to be done in
> order to get the update done?
>

I expect they will work as well as updating from, e.g., gluster's old style
glusterfs_5.4 debs to debian's new style glusterfs_5.5 debs.  IOW probably
not very well. My guess is that you will probably need to uninstall 5.8
followed by installing 5.9.
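A hedged sketch of that uninstall-then-reinstall path follows. The package names are the usual Debian glusterfs binary packages and are an assumption; they may not match what is installed on a given system.

```shell
# Remove the old-style 5.8 debs, then install the new-style 5.9 debs
# from the updated repo. Package names are an assumption; check
# `dpkg -l | grep gluster` for what is actually installed first.
apt-get remove glusterfs-server glusterfs-client glusterfs-common
apt-get update
apt-get install glusterfs-server glusterfs-client
```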

Here at Red Hat, as one might guess, we don't use a lot of Debian or
Ubuntu. My experience with Debian and Ubuntu has been limited to building
the packages. (FWIW, in a previous job I used SLES and OpenSuSE, and before
that I used Slackware.)

These are "community" packages and they're free. I personally do feel like
the community really should shoulder some of the burden to test them and
report any problems. Give them a try. Let us know what does or doesn't
work. And send PRs.

> Debian Stretch is not affected?
>

TL;DNR: if it was, I would have said so. ;-)

The Debian packager didn't change the packaging on stretch or bionic and
xenial. The gluster community packages for those distributions are the same
as they've always been.



>
> Thank you for additional information
>
> Ingo
>
> Am 07.08.19 um 19:38 schrieb Kaleb Keithley:
> > *TL;DNR: *updates from glusterfs-5.8 to glusterfs-5.9 and from
> > glusterfs-6.4 to glusterfs-6.5, — using the package repos on
> > https://download.gluster.org  or the Gluster PPA on Launchpad— on
> > buster, bullseye/sid, and some Ubuntu releases may not work, or may not
> > work smoothly. Consider yourself warned. Plan accordingly.
> >
> > *Longer Answer*: updates from glusterfs-5.8 to glusterfs-5.9 and from
> > glusterfs-6.4 to glusterfs-6.5, — using the package repos on
> > https://download.gluster.org or the Gluster PPA on Launchpad — on
> > buster, bullseye, and some Ubuntu releases may not work, or may not work
> > smoothly.
> >
> > *Why*: The original packaging bits were contributed by the Debian
> > maintainer of GlusterFS. For those that know Debian packaging, these did
> > not follow normal Debian packaging conventions and best practices.
> > Recently — for some definition of recent —  the powers that be in Debian
> > apparently insisted that the packaging actually start to follow the
> > conventions and best practices, and the packaging bits were rewritten
> > for Debian. The only problem is that nobody bothered to notify the
> > Gluster Community that this was happening. Nor did they send their new
> > bits to GlusterFS. We were left to find out about it the hard way.
> >
> > *The Issue*: people who have used the packages from
> > https://download.gluster.org are experiencing issues updating other
> > software that depends on glusterfs.
> >
> > *The Change*: Gluster Community packages will now be built using
> > packaging bits derived from the Debian packaging bits, which now follow
> > Debian packaging conventions and best practices.
> >
> > *Conclusion*: This may be painful, but it's better in the long run for
> > everyone. The volunteers who generously build packages in their copious
> > spare time for the community appreciate your patience and understanding.
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
> >
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Important: Debian and Ubuntu packages are changing

2019-08-07 Thread Kaleb Keithley
On Wed, Aug 7, 2019 at 1:38 PM Kaleb Keithley  wrote:

> *... *and some Ubuntu releases
>

Specifically Ubuntu Disco and Eoan.
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Important: Debian and Ubuntu packages are changing

2019-08-07 Thread Kaleb Keithley
*TL;DNR: *updates from glusterfs-5.8 to glusterfs-5.9 and from
glusterfs-6.4 to glusterfs-6.5 — using the package repos on
https://download.gluster.org or the Gluster PPA on Launchpad — on buster,
bullseye/sid, and some Ubuntu releases may not work, or may not work
smoothly. Consider yourself warned. Plan accordingly.

*Longer Answer*: updates from glusterfs-5.8 to glusterfs-5.9 and from
glusterfs-6.4 to glusterfs-6.5 — using the package repos on
https://download.gluster.org or the Gluster PPA on Launchpad — on buster,
bullseye, and some Ubuntu releases may not work, or may not work smoothly.

*Why*: The original packaging bits were contributed by the Debian
maintainer of GlusterFS. For those that know Debian packaging, these did
not follow normal Debian packaging conventions and best practices. Recently
— for some definition of recent — the powers that be in Debian apparently
insisted that the packaging actually start to follow the conventions and
best practices, and the packaging bits were rewritten for Debian. The only
problem is that nobody bothered to notify the Gluster Community that this
was happening. Nor did they send their new bits to GlusterFS. We were left
to find out about it the hard way.

*The Issue*: people who have used the packages from
https://download.gluster.org are experiencing issues updating other
software that depends on glusterfs.

*The Change*: Gluster Community packages will now be built using packaging
bits derived from the Debian packaging bits, which now follow Debian
packaging conventions and best practices.

*Conclusion*: This may be painful, but it's better in the long run for
everyone. The volunteers who generously build packages in their copious
spare time for the community appreciate your patience and understanding.
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Recommended gluster stable version

2019-07-23 Thread Kaleb Keithley
On Tue, Jul 23, 2019 at 4:36 AM Gionatan Danti  wrote:

> Hi list,
> I have a question about recommended gluster stable version for using to
> host virtual disk images.
>
>  From my understanding, current RHGS uses the latest 3.x gluster branch.
> This is also the same version provided by default in RHEL/CentOS;
>
> [root@localhost ~]# yum info glusterfs.x86_64
> ...
> Name: glusterfs
> Arch: x86_64
> Version : 3.12.2
> Release : 18.el7
> Size: 542 k
> Repo: base/7/x86_64
> ...
>
> At the same time, CentOS SIG enables version 4.0, 4.1, 5.x and 6.x.
>


That's true, but the glusterfs-3.12.x packages are still available on the
CentOS mirrors.

I'm not sure why the centos-release-gluster312 package has been removed
from the mirrors, but you can still get it from
https://cbs.centos.org/koji/packageinfo?packageID=6530

It seems odd to me that centos-release-gluster312 would have been removed
when RHEL is still shipping — for a little while longer anyway —
glusterfs-3.12 (client side) packages.

--

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Create Gluster RPMs on a SLES15 machine

2019-05-10 Thread Kaleb Keithley
Seems I accidentally omitted gluster-users in my first reply.

On Thu, May 9, 2019 at 3:19 PM Kaleb Keithley  wrote:

> On Thu, May 9, 2019 at 8:53 AM David Spisla  wrote:
>
>> Hello Kaleb,
>>
>> I am trying to create my own Gluster v5.5 RPMs for SLES15 and I am using
>> a SLES15 system to create them. I got the following error message:
>>
>> rpmbuild --define '_topdir
>>> /home/davids/glusterfs/extras/LinuxRPM/rpmbuild' --with gnfs -bb
>>> rpmbuild/SPECS/glusterfs.spec
>>> warning: bogus date in %changelog: Tue Apr 17 2019 kkeithle at
>>> redhat.com
>>> warning: bogus date in %changelog: Fri Sep 19 2018 kkeithle at
>>> redhat.com
>>> error: Failed build dependencies:
>>> rpcgen is needed by glusterfs-5.5-100.x86_64
>>> make: *** [Makefile:579: rpms] Error 1
>>>
>>>
>> In the corresponding glusterfs.spec file (branch sles15-glusterfs-5 in
>> Repo glusterfs-suse) there is rpcgen listed as dependency. But
>> unfortunately there is no rpcgen package provided on SLES15. Or in other
>> words:
>> I did only find RPMs for other SUSE distributions, but not for SLES15.
>>
>> Do you know that issue?
>>
>
> I'm afraid I don't.
>
>
>> What is the name of the distribution which you are using to create
>> Packages for SLES15?
>>
>
> The community packages are built on the OpenSUSE OBS and they are built on
> SLES15 — the one that OBS provides. I don't know any details beyond that. It
> could be a real SLES15 system, or it could be a build in mock, or SUSE's
> chroot build tool if they don't have mock.
>
> You can see the build logs from the community builds of glusterfs-5.5 and
> glusterfs-5.6 for SLES15 at [1] and [2] respectively. AFAIK it's a
> completely "vanilla" SLES15 and seems to have rpcgen-1.3-2.18 available.
> Finding things in the OBS repos seems to be hit or miss sometimes. I can't
> find the SLE_15 rpcgen package.
>
> (Back in SLES11 days I had a free eval license that let me update and
> install add-on packages on my own system. I tried to get a similar license
> for SLES12 and was advised to just use OBS. I haven't even bothered trying
> to get one for SLES15. It makes it harder IMO to figure things out.)
>
> I recommend asking the OBS team on #opensuse-buildservice on (freenode)
> IRC. They've always been very helpful to me.
>

Miuku on #opensuse-buildservice poked around and found that the unbundled
rpcgen in SLE_15 comes from the rpcsvc-proto rpm. (Not the rpcgen rpm as it
does in Fedora and RHEL8.)

All the gluster community packages for SLE_15 going back to glusterfs-5.0
in October 2018 have used the unbundled rpcgen. You can do the same, or
remove the BuildRequires: rpcgen line and use the glibc bundled rpcgen.
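For anyone hitting the same build error, the second fix Kaleb mentions can be scripted. A sketch against a stand-in spec file (the real one is rpmbuild/SPECS/glusterfs.spec from the sles15-glusterfs-5 branch of the glusterfs-suse repo):

```shell
# Sketch only: delete the "BuildRequires: rpcgen" line so rpmbuild falls back
# to the glibc-bundled rpcgen. The spec created here is a tiny stand-in, not
# the real glusterfs.spec.
spec=$(mktemp)
printf '%s\n' 'Name: glusterfs' 'BuildRequires: rpcgen' 'BuildRequires: libtool' > "$spec"
sed -i '/^BuildRequires: rpcgen$/d' "$spec"
grep -c '^BuildRequires:' "$spec"   # prints 1
```

The other route, per the message above, is installing the rpcsvc-proto package, which is where SLE_15's unbundled rpcgen lives.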

HTH

--

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] One more way to contact Gluster team - Slack (gluster.slack.com)

2019-04-26 Thread Kaleb Keithley
On Fri, Apr 26, 2019 at 8:21 AM Harold Miller  wrote:

> Has Red Hat security cleared the Slack systems for confidential / customer
> information?
>
> If not, it will make it difficult for support to collect/answer questions.
>

I'm pretty sure Amar meant as a replacement for the freenode #gluster and
#gluster-dev channels, given that he sent this to the public gluster
mailing lists @gluster.org. Nobody should have even been posting
confidential and/or customer information to any of those lists or channels.
And AFAIK nobody ever has.

Amar, would you like to clarify which IRC channels you meant?


> Harold Miller, Associate Manager,
> Red Hat, Enterprise Cloud Support
> Desk - US (650) 254-4346
>
>
>
> On Fri, Apr 26, 2019 at 6:00 AM Scott Worthington <
> scott.c.worthing...@gmail.com> wrote:
>
>> Hello, are you not _BOTH_ Red Hat FTEs or contractors?
>>
>> On Fri, Apr 26, 2019, 3:16 AM Michael Scherer 
>> wrote:
>>
>>> Le vendredi 26 avril 2019 à 13:24 +0530, Amar Tumballi Suryanarayan a
>>> écrit :
>>> > Hi All,
>>> >
>>> > We wanted to move to Slack from IRC for our official communication
>>> > channel
>>> > for some time, but couldn't as we didn't have a proper URL for us to
>>> > register. 'gluster' was taken and we didn't know who had it
>>> > registered.
>>> > Thanks to constant ask from Satish, Slack team has now agreed to let
>>> > us use
>>> > https://gluster.slack.com and I am happy to invite you all there.
>>> > (Use this
>>> > link
>>> > <
>>> >
>>> https://join.slack.com/t/gluster/shared_invite/enQtNjIxMTA1MTk3MDE1LWIzZWZjNzhkYWEwNDdiZWRiOTczMTc4ZjdiY2JiMTc3MDE5YmEyZTRkNzg0MWJiMWM3OGEyMDU2MmYzMTViYTA
>>> > >
>>> > to
>>> > join)
>>> >
>>> > Please note that, it won't be a replacement for mailing list. But can
>>> > be
>>> > used by all developers and users for quick communication. Also note
>>> > that,
>>> > no information there would be 'stored' beyond 10k lines as we are
>>> > using the
>>> > free version of Slack.
>>>
>>> Aren't we concerned about the ToS of slack ? Last time I did read them,
>>> they were quite scary (like, if you use your corporate email, you
>>> engage your employer, and that wasn't the worst part).
>>>
>>> Also, to anticipate the question, my employer Legal department told me
>>> to not setup a bridge between IRC and slack, due to the said ToS.
>>>
>>> --
>>> Michael Scherer
>>> Sysadmin, Community Infrastructure
>>>
>>>
>>>
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> --
>
> HAROLD MILLER
>
> ASSOCIATE MANAGER, ENTERPRISE CLOUD SUPPORT
>
> Red Hat
>
> 
>
> har...@redhat.comT: (650)-254-4346
> 
> TRIED. TESTED. TRUSTED. 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Pre-Historic Gluster RPM's

2019-03-08 Thread Kaleb Keithley
https://download.gluster.org/pub/gluster/glusterfs/old-releases/

On Fri, Mar 8, 2019 at 2:24 AM Ersen E.  wrote:

> Hi,
>
> I do have some RHEL5 clients still. I will update OS's but not now.
> Meantime I am looking for a way to update at least to latest version
> available.
> Is there any web site still keeping RHEL5/Centos RPM's ?
>
> Regards,
> Ersen E.
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Load balanced VIP

2017-07-12 Thread Kaleb Keithley


- Original Message -
> From: "Anthony Valentine" 
> 
> I am working on implementing my first Gluster/Ganesha NFS setup and I am
> flowing this guide:
> http://blog.gluster.org/2015/10/linux-scale-out-nfsv4-using-nfs-ganesha-and-glusterfs-one-step-at-a-time
>  
> 
> Everything is working fine. I’ve got Gluster and Ganesha NFS working and I
> have VIPs on each node and it is failing over fine, if a little slowly.
> However, the VIPs don’t behave as I was expecting.
> 
> I was expecting a single VIP that clients could connect to that would load
> balance amongst all the active nodes, instead of having a VIP on each node.
> Is it possible to configure it to behave this way? I’m happy adding a proxy
> layer, such as an F5 or HAProxy, however I want to make sure that Ganesha
> doesn’t handle this on its own before I head in that direction. Also, if it
> does require a proxy, is there a particular product that is known to work
> well and is there a guide for setting that up somewhere?
>

The Gluster/Ganesha HA provides a simple Active/Active HA.

I'm not aware that Pacemaker has any load-balancing capability in the IPaddr RA.

I think you will have to implement that yourself.
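If you do go the HAProxy route, the shape of it is a plain TCP frontend in front of the per-node VIPs. A hypothetical haproxy.cfg fragment — all addresses are made up, and this has not been tested against Ganesha:

```
# Hypothetical fragment: one client-facing address, TCP-balanced across the
# two Ganesha VIPs. NFSv4.x over TCP only; addresses are examples.
frontend nfs_in
    bind 192.0.2.100:2049
    mode tcp
    default_backend ganesha
backend ganesha
    mode tcp
    balance source
    server node1 10.0.2.1:2049 check
    server node2 10.0.2.2:2049 check
```

Note that NFS client state (locks, client IDs, the grace period) makes naive round-robin balancing risky; `balance source` pins each client to one backend, which is the safer choice here.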

--

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] nfs-ganesha rsa.pub download give 403

2017-01-24 Thread Kaleb Keithley

Just did a restorecon and am able to download it now.



- Original Message -
> From: "Cedric Lemarchand" 
> To: "gluster-users" 
> Sent: Monday, January 23, 2017 1:37:03 PM
> Subject: [Gluster-users] nfs-ganesha rsa.pub download give 403
> 
> Hello,
> 
> It seems there is some rights problem with
> https://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/rsa.pub :
> 
> wget -O /dev/null
> https://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/rsa.pub
> --2017-01-23 19:28:47--
> https://download.gluster.org/pub/gluster/glusterfs/nfs-ganesha/rsa.pub
> Resolving download.gluster.org ( download.gluster.org )... 23.253.208.221,
> 2001:4801:7824:104:be76:4eff:fe10:23d8
> Connecting to download.gluster.org ( download.gluster.org
> )|23.253.208.221|:443... connected.
> HTTP request sent, awaiting response... 403 Forbidden
> 2017-01-23 19:28:48 ERROR 403: Forbidden.
> 
> Cheers,
> 
> —
> Cédric Lemarchand
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] HA with nfs-ganesha and Virtual IP

2017-01-23 Thread Kaleb Keithley
a couple comments in-line

- Original Message -
> From: "David Spisla" 
> 
> 
> 
> Hello,
> 
> 
> 
> I have two ec2-instances with CentOS and I want to create a cluster
> infrastructure with nfs-ganesha, pacemaker and corosync. I read different
> instructions but in some points I am not really sure how to do.
> 
> At the moment my configuration is not running. The system says:
> 
> 
> 
> Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the trusted
> pool. Do you still want to continue?
> 
> (y/n) y
> 
> This will take a few minutes to complete. Please wait ..
> 
> nfs-ganesha : success
> 
> 
> 
> But nothing happened. Ganesha is not starting. I think there is a problem with
> my ganesha-ha.conf file.
> 
> 
> 
> 1. What about that Virtual IPs? How I can create them (maybe /etc/hosts) ???

You can't just make them up. In your case you must get them from somewhere in 
AWS so that you don't have a conflict with other AWS users.

("Virtual" is a bad name. They're real IPs, and they are managed by pacemaker.)

> 
> 2. Should I use always use the same ganesha.ha.conf file on all nodes or
> should I change entries. You can see as follows my two ganesha.ha.conf files

You only need to create the ganesha-ha.conf (note the '-') once, on one host, 
namely the same host you issue the gluster commands. Gluster will propagate it 
to the other nodes in the cluster.

> 
> 
> 
> First ec2-instance:
> 
> # Name of the HA cluster created.
> 
> # must be unique within the subnet
> 
> HA_NAME="ganesha-ha-360"
> 
> #
> 
> # The gluster server from which to mount the shared data volume.
> 
> HA_VOL_SERVER="ec2-52-209-xxx-xxx.eu-west-1.compute.amazonaws.com"
> 
> #
> 
> # N.B. you may use short names or long names; you may not use IP addrs.
> 
> # Once you select one, stay with it as it will be mildly unpleasant to
> 
> # clean up if you switch later on. Ensure that all names - short and/or
> 
> # long - are in DNS or /etc/hosts on all machines in the cluster.
> 
> #
> 
> # The subset of nodes of the Gluster Trusted Pool that form the ganesha
> 
> # HA cluster. Hostname is specified.
> 
> HA_CLUSTER_NODES="ec2-52-209-xxx-xxx.eu-west-1.compute.amazonaws.com,ec2-52-18-xxx-xxx.eu-west-1.compute.amazonaws.com"
> 
> #HA_CLUSTER_NODES="server1.lab.redhat.com,server2.lab.redhat.com,..."
> 
> #
> 
> # Virtual IPs for each of the nodes specified above.
> 
> VIP_ec2-52-209-xxx-xxx.eu-west-1.compute.amazonaws.com="10.0.2.1"
> 
> VIP_ec2-52-18-xxx-xxx.eu-west-1.compute.amazonaws.com="10.0.2.2"
> 
> #VIP_server1_lab_redhat_com="10.0.2.1"
> 
> #VIP_server2_lab_redhat_com="10.0.2.2"
> 
> 
> 
> Second ec2-instance:
> 
> # Name of the HA cluster created.
> 
> # must be unique within the subnet
> 
> HA_NAME="ganesha-ha-360"
> 
> #
> 
> # The gluster server from which to mount the shared data volume.
> 
> HA_VOL_SERVER="ec2-52-18-xxx-xxx.eu-west-1.compute.amazonaws.com"
> 
> #
> 
> # N.B. you may use short names or long names; you may not use IP addrs.
> 
> # Once you select one, stay with it as it will be mildly unpleasant to
> 
> # clean up if you switch later on. Ensure that all names - short and/or
> 
> # long - are in DNS or /etc/hosts on all machines in the cluster.
> 
> #
> 
> # The subset of nodes of the Gluster Trusted Pool that form the ganesha
> 
> # HA cluster. Hostname is specified.
> 
> HA_CLUSTER_NODES="ec2-52-18-xxx-xxx.eu-west-1.compute.amazonaws.com,ec2-52-209-xxx-xxx.eu-west-1.compute.amazonaws.com"
> 
> #HA_CLUSTER_NODES="server1.lab.redhat.com,server2.lab.redhat.com,..."
> 
> #
> 
> # Virtual IPs for each of the nodes specified above.
> 
> VIP_ec2-52-209-xxx-xxx.eu-west-1.compute.amazonaws.com="10.0.2.1"
> 
> VIP_ec2-52-18-xxx-xxx.eu-west-1.compute.amazonaws.com="10.0.2.2"
> 
> #VIP_server1_lab_redhat_com="10.0.2.1"
> 
> #VIP_server2_lab_redhat_com="10.0.2.2"
> 
> 
> 
> Thank you for your attention
> 
> 
> 
> 
> 
> 
> 
> David Spisla
> 
> Software Developer
> 
> david.spi...@iternity.com
> 
> www.iTernity.com
> 
> Tel: +49 761-590 34 841
> 
> 
> 
> 
> 
> 
> 
> iTernity GmbH
> Heinrich-von-Stephan-Str. 21
> 79100 Freiburg – Germany
> ---
> unseren technischen Support erreichen Sie unter +49 761-387 36 66
> ---
> 
> Geschäftsführer: Ralf Steinemann
> Eingetragen beim Amtsgericht Freiburg: HRB-Nr. 701332
> USt.Id de-24266431
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Meeting Gluster users and dev at FOSDEM 2017?

2017-01-23 Thread Kaleb Keithley


- Original Message -
> From: "Olivier Lambert" 
> To: "gluster-users" 
> Sent: Monday, January 23, 2017 7:58:27 AM
> Subject: [Gluster-users] Meeting Gluster users and dev at FOSDEM 2017?
> 
> Hi there,
> 
> My team and I will be at FOSDEM on the Xen booth (for the project "Xen
> Orchestra"). Is there a way to meet Gluster devs and users there to talk
> about it? I have a nice pile of questions, and experience + devs point of
> view would be really useful.


Absolutely. Several of us will be there. We have a booth and a dev workshop.

See you there.

--

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] nfs service not detected

2017-01-23 Thread Kaleb Keithley


- Original Message -
> From: "Matthew Ma 馬耀堂 (奧圖碼)" 
> 
> 
> Hi all,
> 
> 
> 
> I have created two gluster-servers and applied following cmd:
> 
> gluster volume create fs-disk replica 2 transport tcp,rdma
> sgnfs-ser1:/mnt/dev/ sgnfs-ser2:/mnt/dev/
> 

It helps to know what version you are using. Starting with 3.8 gluster NFS is
disabled by default as we begin transitioning to Ganesha NFS.

Use `gluster volume set $vol nfs.disable false` to enable the legacy gnfs.


> 
> 
> I touched a file under /mnt/dev in sgnfs-ser2, but there is nothing in sgnfs-ser1.

And as others have already mentioned, don't do that.

> 
> Then I troubleshooting with following cmd:
> 
> 
> 
> -
> 
> 
> 
> root@sgnfs-ser1:~# gluster volume status
> 
> Status of volume: fs-disk
> 
> Gluster process Port Online Pid
> 
> --
> 
> Brick sgnfs-ser1:/mnt/dev 49152 Y 12395
> 
> Brick sgnfs-ser2:/mnt/dev 49152 Y 5791
> 
> NFS Server on localhost N/A N N/A
> 
> Self-heal Daemon on localhost N/A Y 12407
> 
> NFS Server on sgnfs-ser2 N/A N N/A
> 
> Self-heal Daemon on sgnfs-ser2 N/A Y 5804
> 
> 
> 
> There are no active volume tasks
> 
> 
> 
> root@sgnfs-ser1:~# gluster peer status
> 
> Number of Peers: 1
> 
> 
> 
> Hostname: sgnfs-ser2
> 
> Uuid: 539bb70a-7819-457d-9dc9-cc07a85c008e
> 
> State: Peer in Cluster (Connected)
> 
> 
> 
> root@sgnfs-ser1:~# /etc/init.d/glusterfs-server status
> 
> glusterfs-server start/running, process 11672
> 
> 
> 
> -
> 
> 
> 
> I recognized that it maybe due to NFS service.
> 
> However, my NFS service is running.
> 
> Is there anything I should be configure? Or I can check?
> 
> 
> 
> Thanks all
> 
> 
> 
> This e-mail transmission and its attachment are intended only for the use of
> the individual or entity to which it is addressed, and may contain
> information that is privileged, confidential and exempted from disclosure
> under applicable law. If the reader is not the intended recipient, you are
> hereby notified that any disclosure, dissemination, distribution or copying
> of this communication, in part or entirety, is strictly prohibited. If you
> are not the intended recipient for this confidential e-mail, delete it
> immediately without keeping or distributing any copy and notify the sender
> immediately. The hard copies should also be destroyed. Thank you for your
> cooperation. It is advisable that any unauthorized use of confidential
> information of this Company is strictly prohibited; and any information in
> this email that does not relate to the official business of this Company
> shall be deemed as neither given nor endorsed by this Company.
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] install Gluster 3.9 on CentOS

2017-01-02 Thread Kaleb Keithley

- Original Message -
> From: "Ramesh Nachimuthu" <rnach...@redhat.com>
>  
> - Original Message -
> > From: "Niels de Vos" <nde...@redhat.com>
> > 
> > On Wed, Dec 28, 2016 at 06:40:35AM -0500, Kaleb Keithley wrote:
> > > Hi,
> > > 
> > > Just send Niels (nde...@redhat.com) an email telling him you've tested
> > > it.
> > 
> > Yes, that's correct. There is no web interface for giving karma to
> > packages like there is for Fedora. It is best to inform the 3.9 release
> > maintainers on one of the lists and put me in CC. Once someone other
> > than me checked the packages and is happy with them, I can mark them for
> > signing and releasing by the CentOS release engineering team.
> > 
> 
> Thanks Niels. I'm not sure about who is the 3.9 maintainer but I have
> personally verified the basic things like volume creation in 3.9 and it
> works. May me I will I include Kasturi, Sas to give their feedback on
> gluster 3.9.
> 
> Kasturi, Sas: Are you using gluster 3.9 in your testing?. If not, can you
> include it in some tests and share your feedback with Niels?.
> 

I'm not sure we have settled on who is _maintaining_ 3.9. Pranith and Aravinda 
made the initial release, but that might be the end of their involvement. Or 
not. Either way you can just send an email to gluster-users or gluster-dev 
saying 3.9 looks good and whoever it is will get the message.

GlusterFS 3.9 is a Short Term Maintenance release. I wouldn't expect Red Hat QE 
to be using it in their "day job" testing. STM releases are meant to get new 
features into the hands of community users early.

The packages in the CentOS Storage SIG are _upstream_ packages provided by and 
for the GlusterFS community, just like the packages that are provided by the 
community in Fedora, Ubuntu Launchpad, the SuSE Build System, and on 
download.gluster.org.

FWIW, CentOS, Debian, Ubuntu, and SuSE have their own processes for managing 
the packages they provide – the GlusterFS community packages are independent of 
those processes. Independent of all that, Niels does ask for a thumbs up from 
someone in the community before he promotes packages in the CentOS Storage SIG. 
(Although I personally think Niels could promote them after a couple weeks even 
if nobody gives a thumbs up; as can be done with packages in Fedora.)

The GlusterFS community should not expect Red Hat QE to give the thumbs up that 
Niels is looking for; it's really intended, and expected, to come from one or 
more community users.

HTH.

--

Kaleb


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] install Gluster 3.9 on CentOS

2016-12-28 Thread Kaleb Keithley
Hi,

Just send Niels (nde...@redhat.com) an email telling him you've tested it.

Thanks


- Original Message -
> From: "Ramesh Nachimuthu" 
> To: "Kaleb S. KEITHLEY" 
> Cc: "Grant Ridder" , gluster-users@gluster.org
> Sent: Wednesday, December 28, 2016 12:26:03 AM
> Subject: Re: [Gluster-users] install Gluster 3.9 on CentOS
> 
> Hi Kaleb,
> 
> I can give the karma. Do you know where these builds are listed in Centos
> Update system?
> 
> 
> Regards,
> Ramesh
> 
> 
> 
> - Original Message -
> > From: "Kaleb S. KEITHLEY" 
> > To: "Grant Ridder" , gluster-users@gluster.org
> > Sent: Wednesday, December 21, 2016 12:06:33 AM
> > Subject: Re: [Gluster-users] install Gluster 3.9 on CentOS
> > 
> > On 12/20/2016 12:19 PM, Grant Ridder wrote:
> > > Hi,
> > >
> > > I am not seeing 3.9 in the Storage SIG for CentOS 6 or 7
> > > http://mirror.centos.org/centos/7.2.1511/storage/x86_64/
> > > http://mirror.centos.org/centos/6.8/storage/x86_64/
> > >
> > > However, i do see it
> > > here: http://buildlogs.centos.org/centos/7/storage/x86_64/
> > >
> > > Is that expected?
> > 
> > Yes.
> > 
> > > did the Storage SIG repo change locations?
> > 
> > No.
> > 
> > Until someone tests and gives positive feedback they remain in buildlogs.
> > 
> > Much the same way Fedora RPMs remain in Updates-Testing until they
> > receive +3 karma (or wait for 14 days).
> > 
> > --
> > 
> > Kaleb
> > 
> > 
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
> > 
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-21 Thread Kaleb KEITHLEY
On 07/21/2016 02:38 PM, Samuli Heinonen wrote:
> Hi all,
> 
> I’m running oVirt 3.6 and Gluster 3.7 with ZFS backend. 
> ...
> Afaik ZFS on Linux doesn’t support aio. Has there been some changes to 
> GlusterFS regarding aio?
> 

Boy, if that isn't a smoking gun, I don't know what is.

--

Kaleb

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] 3.7.13 & proxmox/qemu

2016-07-21 Thread Kaleb KEITHLEY
On 07/21/2016 10:19 AM, David Gossage wrote:
> Has their been any release notes or bug reports about the removal of aio
> support being intentional? 

Build logs of 3.7.13 on Fedora and Ubuntu PPA (Launchpad) show that when
`configure` ran during the build it reported that Linux AIO was enabled.

What packages are you using? On which Linux distribution?

You might like to file a bug report at
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS


--

Kaleb

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Error 404 ?

2016-07-11 Thread Kaleb Keithley

Starting with the 3.8 releases EPEL packages are in the CentOS Storage SIG 
repos.

If you want to stay on 3.7, edit your /etc/yum.repos.d/glusterfs-epel.repo file 
and change .../LATEST/... to .../3.7/LATEST/...
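The edit can be done with one sed command. A sketch against a temporary copy of the repo file (on a real box the target is /etc/yum.repos.d/glusterfs-epel.repo; the baseurl layout below is illustrative):

```shell
# Sketch: rewrite .../glusterfs/LATEST/... to .../glusterfs/3.7/LATEST/...
# in a copy of the repo file. Single quotes keep $releasever/$basearch
# literal, as yum expects.
repo=$(mktemp)
echo 'baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/' > "$repo"
sed -i 's|/glusterfs/LATEST/|/glusterfs/3.7/LATEST/|' "$repo"
grep '/glusterfs/3.7/LATEST/' "$repo"
```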

(There have been several emails to gluster-users and gluster-devel mailing 
lists about this.)

See http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/EPEL.README 
for more info.


- Original Message -
> From: "Nicolas Ecarnot" 
> To: "gluster-users" 
> Sent: Monday, July 11, 2016 6:18:36 AM
> Subject: [Gluster-users] Error 404 ?
> 
> Hello,
> 
> When trying a yum upgrade, I see that :
> 
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo
> 
> is leading to :
> 
> Not Found
> 
> The requested URL /pub/gluster/glusterfs/LATEST/RHEL/glusterfs-epel.repo
> was not found on this server.
> 
> What did I do wrong?
> 
> (it was working for years...)
> 
> Thx
> 
> --
> Nicolas ECARNOT
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Kaleb KEITHLEY
On 06/30/2016 06:53 PM, Lindsay Mathieson wrote:
> On 30/06/2016 10:31 PM, Kaushal M wrote:
>> The pve-qemu-kvm package was last built or updated in January this
>> year[1]. And I think it was built against glusterfs-3.5.2, which is
>> the latest version of glusterfs in the proxmox sources [2].
>> Maybe the pve-qemu-kvm package needs a rebuild.
> 
> Does qemu static link libglusterfs?
> 

There isn't a libglusterfs.a that it could static link to.

So no.

--

Kaleb

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Kaleb KEITHLEY
On 06/30/2016 11:23 AM, Kaleb KEITHLEY wrote:
> On 06/30/2016 11:18 AM, Vijay Bellur wrote:
>> On Thu, Jun 30, 2016 at 8:31 AM, Kaushal M <kshlms...@gmail.com> wrote:
>>> On Thu, Jun 30, 2016 at 5:47 PM, Kevin Lemonnier <lemonni...@ulrar.net> 
>>> wrote:
>>>>>
>>>>> Replicated the problem with 3.7.12 *and* 3.8.0 :(
>>>>>
>>>>
>>>> Yeah, I tried 3.8 when it came out too and I had to use the fuse mount 
>>>> point
>>>> to get the VMs to work. I just assumed proxmox wasn't compatible yet with 
>>>> 3.8 (since
>>>> the menu were a bit wonky anyway) but I guess it was the same bug.
>>>>
>>>
>>> I was able to reproduce the hang as well against 3.7.12.
>>>
>>> I tested by installing the pve-qemu-kvm package from the Proxmox
>>> repositories in a Debian Jessie container, as the default Debian qemu
>>> packages don't link with glusterfs.
>>> I used the 3.7.11 and 3.7.12 gluster repos from download.gluster.org.
>>>
>>> I tried to create an image on a simple 1 brick gluster volume using 
>>> qemu-img.
>>> The qemu-img command succeeded against a 3.7.11 volume, but hung
>>> against 3.7.12 to finally timeout and fail after ping-timeout.
>>>
>>> We can at-least be happy that this issue isn't due to any bugs in AFR.
>>>
>>> I was testing this with Raghavendra, and we are wondering if this is
>>> probably a result of changes to libglusterfs and libgfapi that have
>>> been introduced in 3.7.12 and 3.8.
>>> Any app linking with libgfapi also needs to link with libglusterfs.
>>> While we have some sort of versioning for libgfapi, we don't have any
>>> for libglusterfs.
>>> This has caused problems before (I cannot find any links for this
>>> right now though).
>>>
>>
>> Did any function signatures change between 3.7.11 and 3.7.12?
> 
> In gfapi? No. And (as I'm sure you're aware) they're all versioned, so
> things that linked with the old version-signature continue to do so.
> 
> I don't know about libglusterfs.
> 

And I'm not sure I want to suggest that we version libglusterfs for 4.0;
but perhaps we ought to?
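For context, the versioning gfapi already has is done with a linker version script passed via --version-script. A hypothetical minimal sketch of what one for libglusterfs might look like — the symbol patterns are illustrative, not the real export list:

```
/* Hypothetical version map for libglusterfs. Exported symbols get a
   version node; everything else is hidden. */
LIBGLUSTERFS_4.0 {
    global:
        gf_*;
        syncop_*;
    local:
        *;
};
```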

--

Kaleb


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Kaleb KEITHLEY
On 06/30/2016 11:18 AM, Vijay Bellur wrote:
> On Thu, Jun 30, 2016 at 8:31 AM, Kaushal M  wrote:
>> On Thu, Jun 30, 2016 at 5:47 PM, Kevin Lemonnier  
>> wrote:

 Replicated the problem with 3.7.12 *and* 3.8.0 :(

>>>
>>> Yeah, I tried 3.8 when it came out too and I had to use the fuse mount point
>>> to get the VMs to work. I just assumed proxmox wasn't compatible yet with 
>>> 3.8 (since
>>> the menus were a bit wonky anyway) but I guess it was the same bug.
>>>
>>
>> I was able to reproduce the hang as well against 3.7.12.
>>
>> I tested by installing the pve-qemu-kvm package from the Proxmox
>> repositories in a Debian Jessie container, as the default Debian qemu
>> packages don't link with glusterfs.
>> I used the 3.7.11 and 3.7.12 gluster repos from download.gluster.org.
>>
>> I tried to create an image on a simple 1 brick gluster volume using qemu-img.
>> The qemu-img command succeeded against a 3.7.11 volume, but hung
>> against 3.7.12 to finally timeout and fail after ping-timeout.
>>
>> We can at least be happy that this issue isn't due to any bugs in AFR.
>>
>> I was testing this with Raghavendra, and we are wondering if this is
>> probably a result of changes to libglusterfs and libgfapi that have
>> been introduced in 3.7.12 and 3.8.
>> Any app linking with libgfapi also needs to link with libglusterfs.
>> While we have some sort of versioning for libgfapi, we don't have any
>> for libglusterfs.
>> This has caused problems before (I cannot find any links for this
>> right now though).
>>
> 
> Did any function signatures change between 3.7.11 and 3.7.12?

In gfapi? No. And (as I'm sure you're aware) they're all versioned, so
things that linked with the old version-signature continue to do so.

I don't know about libglusterfs.

--

Kaleb




Re: [Gluster-users] Disappearance of glusterfs-3.7.11-2.el6.x86_64 and dependencies

2016-06-30 Thread Kaleb KEITHLEY
On 06/30/2016 07:03 AM, Milos Kurtes wrote:
> Hi,
> 
> yesterday and the day before the package was there, but now it is not.
> 
> http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/EPEL.repo/epel-6/x86_64/glusterfs-3.7.11-2.el6.x86_64.rpm:
^^^
3.7.x packages are (still) at

http://download.gluster.org/pub/gluster/glusterfs/3.7/


> [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not
> Found"
> 
> The package is still in yum list getting from the repository.
> 
> What happened?
> 
> When will it be available again?

And after the release of 3.7.12, LATEST is now

http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-6/x86_64/glusterfs-3.7.12-1.el6.x86_64.rpm

If you absolutely want 3.7.11, you can still get it from

http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.11/EPEL.repo/epel-6/x86_64/glusterfs-3.7.11-2.el6.x86_64.rpm

--

Kaleb



Re: [Gluster-users] 3.8 Release

2016-06-23 Thread Kaleb KEITHLEY
On 06/23/2016 06:18 PM, Joe Julian wrote:
> 3.8.0 has a bug that prevents certain operations with libgfapi which
> will affect the self-heal daemon, nfs, and any applications built to use
> the api.
> 
> I would wait for 3.8.1. I'm not sure if a fix will be in 3.7.12, it's
> broken in 3.7.12rc2.

There's a patch. http://review.gluster.org/#/c/14779/ if I'm not mistaken.

If that passes review I'll respin all the 3.8.0 packages with it while
we wait for 3.8.1.

And if 3.7.12 gets released without it, I'll add it to the 3.7.12
package builds too.

--

Kaleb



Re: [Gluster-users] implementation of RPC in glusterFS

2016-06-05 Thread Kaleb Keithley


- Original Message -
> From: "袁仲"
> 
> 
> I have checked the source code of GlusterFS; the communication between the
> cli and glusterd, and between the daemons, depends on RPC. But it is different
> from the RPC programs I have written myself, where I use rpcgen to create the
> client stub and server stub; GlusterFS does not. So, my question is:
> 
> Does GlusterFS implement RPC by itself, and if it does, what is the
> difference from an RPC program that uses rpcgen?

GlusterFS uses rpcgen too.

The protocol is defined by the XDR files in .../rpc/xdr/src/*.x.

The stubs are generated using rpcgen. See .../build-aux/xdrgen
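
For a sense of what those .x files look like, here is a minimal XDR fragment in the style of rpc/xdr/src/*.x. It is illustrative only; the struct, program, and procedure names are invented, not the actual Gluster protocol:

```
/* example.x -- illustrative XDR/RPCL, not the real Gluster protocol */
struct example_req {
        opaque   gfid[16];     /* fixed-size file id */
        string   path<>;       /* variable-length path */
};

program EXAMPLE_PROG {
        version EXAMPLE_V1 {
                example_req EXAMPLE_LOOKUP(example_req) = 1;
        } = 1;
} = 0x20000001;
```

Running rpcgen on such a file emits the header (`rpcgen -h`), the XDR (de)serialization routines (`rpcgen -c`), and the client/server stubs, which is what build-aux/xdrgen automates.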

--

Kaleb

[Gluster-users] Minutes of Gluster Community Bug Triage meeting at 12:00 UTC on 10 May, 2016

2016-05-10 Thread Kaleb KEITHLEY

Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-10/bug_triage.2016-05-10-12.04.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-10/bug_triage.2016-05-10-12.04.txt
Log:
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-10/bug_triage.2016-05-10-12.04.log.html


#gluster-meeting: bug triage



Meeting started by kkeithley_ at 12:04:24 UTC. The full logs are
available at
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-10/bug_triage.2016-05-10-12.04.log.html
.



Meeting summary
---
* rollcall  (kkeithley_, 12:04:54)

* last week's action items  (kkeithley_, 12:08:16)
  * LINK: http://review.gluster.org/14240   (ndevos, 12:15:33)

* chair for next week's meeting  (kkeithley_, 12:18:54)

* group triage  (kkeithley_, 12:22:55)
  * LINK: https://public.pad.fsfe.org/p/gluster-bugs-to-triage
(kkeithley_, 12:23:23)

* fix bad bug status  (kkeithley_, 12:43:13)

* Open Floor  (kkeithley_, 12:51:04)

Meeting ended at 12:53:04 UTC.




Action Items






Action Items, by person
---
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* kkeithley_ (64)
* ndevos (24)
* post-factum (3)
* zodbot (3)
* Manikandan (3)
* skoduri (2)
* jiffin (1)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot


Re: [Gluster-users] Question about the number of nodes

2016-04-19 Thread Kaleb KEITHLEY
On 04/19/2016 07:55 AM, Kevin Lemonnier wrote:
> Hi,
> 
> As stated in another thread, we currently have a 3 nodes cluster with 
> sharding enabled used for storing VM disks.
> I am migrating that to a new 3.7.11 cluster to hopefully fix the problems 
> we had on Friday, but since those 3
> nodes are nearly full we'd like to expand.
> 
> We have 3 nodes with a replica 3. What would be better: go to 5 nodes and use 
> a replica 2 (so "wasting" one node),
> or go to 6 nodes still with a replica 3? It seems like having 3 replicas is 
> better for safety, but can someone confirm
> that what's important for quorum is the number of bricks in a replica set, not 
> the total number of nodes?
> We would hate to get into a split brain because we upgraded to an even number of 
> nodes.

I believe you could set up a `2x2 replica 2` cluster and use the fifth
node as an arbiter node to prevent/minimize split brain.

--

Kaleb




Re: [Gluster-users] ganesha-nfs v2.3.2 request

2016-04-19 Thread Kaleb KEITHLEY
On 04/19/2016 08:40 AM, Serkan Çoban wrote:
> Yes I build the ganesha rpms, I can use these rpms with 3.7.11 right?
> or I also should build gluster rpms too?

No, you don't need to build gluster rpms (unless you want to.)

nfs-ganesha-2.3.2 should work just fine with 3.7.11.

--

Kaleb




Re: [Gluster-users] ganesha-nfs v2.3.2 request

2016-04-19 Thread Kaleb KEITHLEY
On 04/19/2016 01:53 AM, Serkan Çoban wrote:
> Hi Jiffin,
> 
> I see v2.3.2 stable  nfs-ganesha is released. Is there any plans to
> include 2.3.2 in gluster?

Include nfs-ganesha in Gluster? No, it's its own project and is packaged
independently from Gluster.

Be patient, it was only released a couple days ago. Packages are built
by _volunteers_, in their copious spare time.

--

Kaleb



[Gluster-users] RFC: beginning to phase out legacy Gluster NFS, to be eventually replaces with NFS-Ganesha

2016-04-12 Thread Kaleb Keithley
Hi,

Some of you may have noticed that one of the roadmap items for GlusterFS 3.8 is 
to change the default for the volume option nfs.disable from 'off' to 'on'.

If you haven't noticed, then this email will serve to call your attention to it.

Changing the nfs.disable volume option from 'off' to 'on' means that when a 
volume is started (i.e. exported) that the Gluster NFS (or gnfs) server is not 
automatically started at the same time.

This change is being made to support the transition from gnfs to NFS-Ganesha. 
NFS-Ganesha[1] is a user-space NFS server that supports NFSv3, NFSv4, NFSv4.1, 
pNFS, and beyond; while gnfs is NFSv3 only.

More users are asking for NFSv4 and pNFS and the features they provide, 
especially as the implementations in the Linux (and other) kernels and the 
NFS-Ganesha mature.

The gnfs implementation isn't well suited to being extended to support NFSv4 – 
NFSv4 is a very different protocol – and little effort is being devoted these 
days to maintaining the gnfs implementation. There is a fairly substantial 
effort being devoted to NFS-Ganesha.

Starting with GlusterFS-3.7 preliminary support was added for NFS-Ganesha, but 
legacy gnfs was kept as the default – using NFS-Ganesha means explicitly 
disabling gnfs. The next phase, proposed for GlusterFS 3.8, as noted above, is 
the change to the default for nfs.disable. This means when a volume is created, 
if you want to use gnfs, you will need to explicitly enable it by issuing a 
`gluster volume set $volname nfs.disable off` command before starting the 
volume. Anyone who has scripts or other automation for creating gluster volumes 
that wants to keep using gnfs will need to modify their tooling accordingly.

At least through GlusterFS-3.8 the gnfs server (nfs-server xlator, gluster and 
glusterd pieces) will continue to be available, i.e. they will be compiled and 
included in the community GlusterFS packages that are provided. In some future 
version (perhaps in whatever version follows 3.8, either 3.9 or 4.0) those 
parts may no longer be compiled by default; there will be a configuration 
option to enable compiling them.

N.B. we have no plans at this time to remove the sources from the tree, however 
ongoing maintenance of them will need to be taken up by the community.

If you have any questions or concerns about this migration to NFS-Ganesha or 
the change to the default value for the nfs.disable option, please feel free to 
post them here on gluster-users@gluster.org.

Regards,

--

Kaleb KEITHLEY

[1] https://github.com/nfs-ganesha/nfs-ganesha/

Re: [Gluster-users] GlusterFS 3.7.9 released

2016-03-22 Thread Kaleb KEITHLEY

On 03/23/2016 06:13 AM, Alan Millar wrote:

Anyone have any success in updating to 3.7.9 on Debian Jessie?

I'm seeing dependency problems when trying to install 3.7.9 using the Debian 
Jessie packages on download.gluster.org.


For example, it says it wants liburcu4.

  Depends: liburcu4 (>= 0.8.4) but it is not installable

I can only find liburcu2 for Jessie.


https://packages.debian.org/search?searchon=names&keywords=liburcu

It looks similar for some of the other dependencies also, like libtinfo5 and 
libssl1.0.2

Did the Jessie packages accidentally get built with the spec file for sid or 
stretch, possibly?  Or is my system broken and I'm looking at the wrong thing?


Looks like the build machine's pbuilder apt-cache got polluted somehow. I've 
rebuilt the apt-cache and rebuilt the packages.


They're on download.gluster.org now.

--

Kaleb



Re: [Gluster-users] GlusterFS 3.7.9 released

2016-03-22 Thread Kaleb KEITHLEY

On 03/22/2016 11:55 AM, ML mail wrote:

And a thank you from me too for this release, I am looking forward to a working 
geo-replication...

btw: where can I find the changelog for this release? I always somehow forget 
where it is located.



Footnote [3], below, has the URL of the patch that will become the 
release notes.






On Tuesday, March 22, 2016 4:19 AM, Vijay Bellur  wrote:
Hi all,

GlusterFS 3.7.9 has been released and the tarball can be found at [1]. Release 
notes will appear at [2] once the patch [3] gets merged into the repository.

Fedora-22, EPEL-[567], and Debian {Jessie,Stretch} packages are on 
download.gluster.org
(wheezy coming soon).

Packages are in Ubuntu Launchpad for Trusty and Wily.

Packages are in SuSE Build System for Leap42.1, OpenSuSE, and SLES-12.

Packages for Fedora 23 are queued for testing, and packages for Fedora
{24,25} are live.

Appreciate your feedback about this release as ever.

Thanks,
Vijay

[1] 
https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.9/glusterfs-3.7.9.tar.gz

[2] 
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.9.md

[3] http://review.gluster.org/13802



Re: [Gluster-users] Question about a libglusterfs0 package ?

2016-03-08 Thread Kaleb Keithley


- Original Message -
> From: "Michael H Martel" 
> 
> Greetings!
> 
> After a reboot, our gluster server is consuming memory at an incredible rate
> and will routinely run out of memory and crash.
> 
> In looking at the installed packages (debian wheezy) I see these packages.
> The libglusterfs0 seems to me to be an old package and probably not needed.

That's correct. Gluster's Debian packages have not had a libglusterfs0 package 
since 3.4.

I'd get rid of it.

--

Kaleb


[Gluster-users] Fwd: qemu-block: deprecated/defunct.. really?

2016-03-07 Thread Kaleb Keithley


- Forwarded Message -
From: "Niels de Vos" <nde...@redhat.com>
On Mon, Mar 07, 2016 at 12:19:51PM -0500, Kaleb Keithley wrote:
> Hi,
> 
> No, I only proposed, via an RFC, to remove it.
> 
> I also posted emails to gluster-devel[1] and gluster-users[2] asking
> people to speak up and let us know if they are still using it.
> 
> Independent of that, doesn't qemu (or libvirt) use the newer gfapi
> functionality to access images on gluster volumes?

Yes, it does. The removal of qemu-block from the Gluster sources should
not affect the ability to use the gluster://volume/dir/image.qcow2 URLs
with QEMU. QEMU uses libgfapi for this (see block/gluster.c in the QEMU
sources if interested).

> Please send an email either or both of the gluster-users or
> gluster-devel mailings lists and register your opinion about the
> proposal.

I would like to see the question there as well. We can send an other
confirmation about it. I am pretty confident that if one user has a
quesion like this, many others would like to know the answer too.

Thanks,
Niels


> 
> Thanks,
> 
> --
> 
> Kaleb
> 
> 
> [1]https://www.gluster.org/pipermail/gluster-devel/2016-March/048556.html
> [2]https://www.gluster.org/pipermail/gluster-users/2016-March/025665.html
> 
> 
> - Original Message -
> > From: "Aleš Kapica" <kap...@fel.cvut.cz>
> > To: kkeit...@redhat.com
> > Cc: jda...@redhat.com
> > Sent: Monday, March 7, 2016 8:49:57 AM
> > Subject: qemu-block: deprecated/defunct.. really?
> > 
> > In recent changes to the git repository you removed the qemu-block xlator from
> > GlusterFS, with the comment "qemu-block xlator is not used by anyone, or so
> > I'm told." (commit 6860968 from 18.2.2016).
> > 
> > That concerned me.
> > 
> > I use the GlusterFS QEMU API to boot virtual machines from storage, using an
> > image file from a GlusterFS volume that is stored in the same volume as the
> > system files. Other files are delivered over NFS-Ganesha. Will your patch
> > affect this, or not?  An example of how I use it:
> > 
> > qemu-system-x86_64 ... -device virtio-scsi-pci -drive
> > file=gluster+tcp://10.0.0.216/diskless/k2-boot.img,if=none,index=0,id=gluster0,cache=none,media=disk,format=raw
> > -device scsi-hd,drive=gluster0 ...
> > 
> > Root of volume diskless..
> > /k2-boot.img
> > /system
> > |- bin
> > |- boot (mountpoint for block device gluster0 aka /dev/sda1)
> > |- dev
> > ... & etc.
> > 
> > Thank you of your answer.
> > 
> > Aleš Kapica, DCE FEL CVUT
> > 
> > --
> > Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
> > 

Re: [Gluster-users] 3.7.8-1 vs 3.7.8-3

2016-03-07 Thread Kaleb Keithley


- Original Message -
> From: "Dj Merrill" 
> 
> I noticed a release 3.7.8-3 appear for Centos 7 in the glusterfs repo
> over the weekend.  Are there any release notes available noting the
> changes between 3.7.8-1 and 3.7.8-3?  I am probably just looking in the
> wrong place.
> 

The %changelog of the glusterfs.spec file used to build the rpms!

`rpm -q --changelog glusterfs` (after updating).

Or the Fedora dist-git repo used to build the packages at 
http://pkgs.fedoraproject.org/cgit/rpms/glusterfs.git/tree/glusterfs.spec

Here's the relevant part:

...
%changelog
* Fri Mar 4 2016  Kaleb S. KEITHLEY  - 3.7.8-3
- Requires /bin/dbus -> dbus
- quiet %%post server (1312897)
- syslog dependency (1310437)

* Fri Feb 26 2016 Niels de Vos  - 3.7.8-2
- Just run /sbin/ldconfig without arguments, not as interpreter (#1312374)

* Mon Feb 8 2016  Kaleb S. KEITHLEY  - 3.7.8-1
- GlusterFS 3.7.8 GA
...






Re: [Gluster-users] Glister 3.7.8 RPM install dependency error

2016-03-04 Thread Kaleb Keithley


- Original Message -
> From: "Gmail" 
> 
> I’ve tried symlinks and it’s still not working.
> 
> ln -s /bin/dbus-* /usr/bin/

ln doesn't do globbing (wildcards) like that.

`ln -s /bin/dbus-send /usr/bin/dbus-send` should work for the purposes of installing.
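
One explicit way to create such links is one file at a time. The demo below runs in a scratch directory so it touches nothing under /usr/bin; in the real case you would link /bin/dbus-* into /usr/bin:

```shell
# Scratch directories standing in for /bin and /usr/bin.
mkdir -p /tmp/lndemo/bin /tmp/lndemo/usr/bin
touch /tmp/lndemo/bin/dbus-send /tmp/lndemo/bin/dbus-monitor

# Create one named symlink per matching file, explicitly.
for f in /tmp/lndemo/bin/dbus-*; do
  ln -sf "$f" "/tmp/lndemo/usr/bin/$(basename "$f")"
done

ls /tmp/lndemo/usr/bin   # lists dbus-monitor and dbus-send
```

The loop makes the link names explicit, so there is no ambiguity about what the shell's glob expansion hands to ln.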

--

Kaleb



Re: [Gluster-users] Glister 3.7.8 RPM install dependency error

2016-03-04 Thread Kaleb Keithley


- Original Message -
> From: 
> 
> Hi,
> 
> I’m trying to install Gluster 3.7.8 RPMs on CentOS 6.5 and I get the
> following error:
> 
> Error: Package: glusterfs-ganesha-3.7.8-1.el6.x86_64
> (/glusterfs-ganesha-3.7.8-1.el6.x86_64)
> Requires: /usr/bin/dbus-send
> 
> I’ve checked if dbus is installed or not, and I found those RPMs installed:
> 
> dbus-libs-1.2.24-8.0.1.el6_6.x86_64
> dbus-glib-0.86-6.el6_4.x86_64
> dbus-1.2.24-8.0.1.el6_6.x86_64
> 
> 
> I can’t find dbus-send anywhere on the system, so what am I missing?!
> 

It's /bin/dbus-send on RHEL and CentOS, in the dbus RPM (`rpm -ql dbus`).

On Fedora it's /usr/bin/dbus-send, also in the dbus RPM. (and /bin is a symlink 
to /usr/bin)

This is why I don't particularly like dependencies on binaries – I prefer the 
dependency on the RPM itself.
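
A small portable check for where a binary actually lives, without assuming /bin is merged into /usr/bin (using sh as a stand-in here, since dbus-send may not be installed everywhere):

```shell
# On Fedora /bin is a symlink to /usr/bin, so both hits appear;
# on RHEL6/CentOS6 they are separate directories.
tool=sh
for d in /bin /usr/bin; do
  if [ -e "$d/$tool" ]; then
    echo "found: $d/$tool"
  fi
done
```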

It appears that you got your gluster RPMs from the CentOS Storage SIG repos, is 
that correct?

Looks like we'll be respinning the 3.7.8 RPMs soon, unless 3.7.9 gets released 
first.

--

Kaleb

[Gluster-users] Fwd: [Gluster-devel] proposal to remove the qemu-block xlator from master branch

2016-03-03 Thread Kaleb Keithley


- Forwarded Message -

It's not clear to some of us that anyone is using this xlator.

The associated contrib/qemu sources are very old, and there is nobody currently 
maintaining it. It would take a substantial effort to update it – to what end, 
if nobody actually uses it?

Bundling it in the source, the way it is now, is strongly discouraged by at least 
one major Linux distribution.

The patch under review as an RFC at http://review.gluster.org/13473 proposes to 
remove the source from the glusterfs master branch. Thus it would not be in the 
upcoming 3.8 and later releases.

Any objections? Any comments? Please reply to the list 
mailto:gluster-de...@gluster.org 




___
Gluster-devel mailing list
gluster-de...@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-users] Replicated Volume (mirror) on 17 nodes.

2016-02-25 Thread Kaleb KEITHLEY
On 02/24/2016 04:34 PM, Simone Taliercio wrote:
> Hi all :)
> 
> I would need soon to create a pool of 17 nodes. Each node requires a
> copy of the same file "locally" so that can be accessed from the
> deployed application.
> 
> * Do you see any performance issue in creating a Replica Set on 17 nodes
> ? Any best practice that I should follow ?
> 
> * An other question: in case there's no problem then on this line
> 
> gluster volume create gv0 replica*2* server1:/data/brick1/gv0
> server2:/data/brick1/gv0
> 
> do i need to provide 17 instead of 2 ?

Yes, you'll need to provide _all_ the bricks participating in the
volume. It will be hard to get a "replica 2" volume with an odd number
of bricks. You will need to rethink that part of your solution.
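
The constraint can be checked with simple arithmetic before running `gluster volume create`: the total brick count must be a multiple of the replica count. A quick pure-shell sanity check (numbers chosen to match the 17-node question; no gluster installation needed):

```shell
replica=2
bricks=17
if [ $(( bricks % replica )) -eq 0 ]; then
  echo "ok: $(( bricks / replica )) replica sets"
else
  echo "cannot split $bricks bricks into replica-$replica sets"
fi
# prints: cannot split 17 bricks into replica-2 sets
```

With 18 bricks, replica 2 (9 sets) or replica 3 (6 sets) both divide evenly, which is why adding a node resolves the mismatch.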

--

Kaleb




Re: [Gluster-users] rpmbuild on sles 11 sp4

2016-02-23 Thread Kaleb KEITHLEY
On 02/23/2016 04:11 PM, Kaleb KEITHLEY wrote:
> On 02/23/2016 03:18 PM, Dan Castelhano wrote:
>> Hi,
>>
>> I'm getting the error below when trying to build rpms on sles 11 sp4
>> (64bit).
>>
>> Has anyone successfully compiled gluster on sles 11 and/or know how to
>> fix this rpmbuild error? I get the same error with 3.5.7 and 3.6.7.
> 
> Gluster's official community packages for SLES 11sp4 are in the SuSE
> Build System repos at [1].
> 
> Yes, the SuSE packages (and the spec file used to build them) are
> different than the Fedora/RHEL/CentOS packages that are built with the
> spec file in the source tree. The spec file(s) used for [1] are based
> on the spec file(s) that SuSE uses when they build GlusterFS for their
> own zipper repos.
> 
> These spec files are in the github repo at [2].
> 
> If you're determined to use (or stuck using) SuSE, I recommend using
> these, because a) they're known to work, and b) they're a better match
> for SuSE's GlusterFS packages.
> 

After that, if you're still determined to build Fedora-style RPMs on
SuSE, you can probably find useful hints in the spec file inside the
src.rpm at [1].

HTH,


[1]
http://download.gluster.org/pub/gluster/glusterfs/3.6/3.6.1/SLES11sp3/glusterfs-3.6.1-1.src.rpm

--

Kaleb




> 
> 
> [1]
> http://download.opensuse.org/repositories/home:/kkeithleatredhat:/SLES11-3.6/SLE_11_SP4/
> [2] https://github.com/gluster/glusterfs-suse
> 
>>
>> rpmbuild commands used:
>> "rpmbuild -ba glusterfs.spec" and "rpmbuild -ba glusterfs.spec --without bd"
>>
>>
>> Thanks,
>> Dan
>>
>> Processing files: glusterfs-server-3.6.8-0.0
>> error: File not found:
>> /var/tmp/glusterfs-3.6.8-0.0-CDwT09/usr/com/glusterd/groups
>> error: File not found:
>> /var/tmp/glusterfs-3.6.8-0.0-CDwT09/usr/com/glusterd/groups/virt
>> Executing(%doc): /bin/sh -e /var/tmp/rpm-tmp.4296
>> + umask 022
>> + cd /usr/src/packages/BUILD
>> + cd glusterfs-3.6.8
>> +
>> DOCDIR=/var/tmp/glusterfs-3.6.8-0.0-CDwT09/usr/share/doc/packages/glusterfs-server
>> + export DOCDIR
>> + rm -rf
>> /var/tmp/glusterfs-3.6.8-0.0-CDwT09/usr/share/doc/packages/glusterfs-server
>> + /bin/mkdir -p
>> /var/tmp/glusterfs-3.6.8-0.0-CDwT09/usr/share/doc/packages/glusterfs-server
>> + cp -pr extras/clear_xattrs.sh
>> /var/tmp/glusterfs-3.6.8-0.0-CDwT09/usr/share/doc/packages/glusterfs-server
>> + exit 0
>> Checking for unpackaged file(s): /usr/lib/rpm/check-files
>> /var/tmp/glusterfs-3.6.8-0.0-CDwT09
>> error: Installed (but unpackaged) file(s) found:
>>/usr/local/lib64/python2.6/site-packages/gluster/__init__.py
>>/usr/local/lib64/python2.6/site-packages/gluster/__init__.pyc
>>   
>> /usr/local/lib64/python2.6/site-packages/glusterfs_glupy-3.6.8-py2.6.egg-info
>>/usr/share/doc/glusterfs/benchmarking/README
>>/usr/share/doc/glusterfs/benchmarking/glfs-bm.c
>>/usr/share/doc/glusterfs/benchmarking/launch-script.sh
>>/usr/share/doc/glusterfs/benchmarking/local-script.sh
>>/usr/share/doc/glusterfs/benchmarking/rdd.c
>>/usr/share/doc/glusterfs/glusterfs-mode.el
>>/usr/share/doc/glusterfs/glusterfs.vim
>>/var/lib/glusterd/groups/virt
>>
>>
>> RPM build errors:
>> File not found by glob:
>> /var/tmp/glusterfs-3.6.8-0.0-CDwT09/usr/lib64/python2.6/site-packages/glusterfs_glupy*.egg-info
>> File not found:
>> /var/tmp/glusterfs-3.6.8-0.0-CDwT09/usr/com/glusterd/groups
>> File not found:
>> /var/tmp/glusterfs-3.6.8-0.0-CDwT09/usr/com/glusterd/groups/virt
>> Installed (but unpackaged) file(s) found:
>>/usr/local/lib64/python2.6/site-packages/gluster/__init__.py
>>/usr/local/lib64/python2.6/site-packages/gluster/__init__.pyc
>>   
>> /usr/local/lib64/python2.6/site-packages/glusterfs_glupy-3.6.8-py2.6.egg-info
>>/usr/share/doc/glusterfs/benchmarking/README
>>/usr/share/doc/glusterfs/benchmarking/glfs-bm.c
>>/usr/share/doc/glusterfs/benchmarking/launch-script.sh
>>/usr/share/doc/glusterfs/benchmarking/local-script.sh
>>/usr/share/doc/glusterfs/benchmarking/rdd.c
>>/usr/share/doc/glusterfs/glusterfs-mode.el
>>/usr/share/doc/glusterfs/glusterfs.vim
>>/var/lib/glusterd/groups/virt
>>
>>
>>
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 



Re: [Gluster-users] rpmbuild on sles 11 sp4

2016-02-23 Thread Kaleb KEITHLEY
On 02/23/2016 03:18 PM, Dan Castelhano wrote:
> Hi,
> 
> I'm getting the error below when trying to build rpms on sles 11 sp4
> (64bit).
> 
> Has anyone successfully compiled gluster on sles 11 and/or know how to
> fix this rpmbuild error? I get the same error with 3.5.7 and 3.6.7.

Gluster's official community packages for SLES 11sp4 are in the SuSE
Build System repos at [1].

Yes, the SuSE packages (and the spec file used to build them) are
different than the Fedora/RHEL/CentOS packages that are built with the
spec file in the source tree. The spec file(s) used for [1] are based
on the spec file(s) that SuSE uses when they build GlusterFS for their
own zipper repos.

These spec files are in the github repo at [2].

If you're determined to use (or stuck using) SuSE, I recommend using
these, because a) they're known to work, and b) they're a better match
for SuSE's GlusterFS packages.



[1]
http://download.opensuse.org/repositories/home:/kkeithleatredhat:/SLES11-3.6/SLE_11_SP4/
[2] https://github.com/gluster/glusterfs-suse

> 
> rpmbuild commands used:
> "rpmbuild -ba glusterfs.spec" and "rpmbuild -ba glusterfs.spec --without bd"
> 
> 
> Thanks,
> Dan
> 
> Processing files: glusterfs-server-3.6.8-0.0
> error: File not found:
> /var/tmp/glusterfs-3.6.8-0.0-CDwT09/usr/com/glusterd/groups
> error: File not found:
> /var/tmp/glusterfs-3.6.8-0.0-CDwT09/usr/com/glusterd/groups/virt
> Executing(%doc): /bin/sh -e /var/tmp/rpm-tmp.4296
> + umask 022
> + cd /usr/src/packages/BUILD
> + cd glusterfs-3.6.8
> +
> DOCDIR=/var/tmp/glusterfs-3.6.8-0.0-CDwT09/usr/share/doc/packages/glusterfs-server
> + export DOCDIR
> + rm -rf
> /var/tmp/glusterfs-3.6.8-0.0-CDwT09/usr/share/doc/packages/glusterfs-server
> + /bin/mkdir -p
> /var/tmp/glusterfs-3.6.8-0.0-CDwT09/usr/share/doc/packages/glusterfs-server
> + cp -pr extras/clear_xattrs.sh
> /var/tmp/glusterfs-3.6.8-0.0-CDwT09/usr/share/doc/packages/glusterfs-server
> + exit 0
> Checking for unpackaged file(s): /usr/lib/rpm/check-files
> /var/tmp/glusterfs-3.6.8-0.0-CDwT09
> error: Installed (but unpackaged) file(s) found:
>/usr/local/lib64/python2.6/site-packages/gluster/__init__.py
>/usr/local/lib64/python2.6/site-packages/gluster/__init__.pyc
>   
> /usr/local/lib64/python2.6/site-packages/glusterfs_glupy-3.6.8-py2.6.egg-info
>/usr/share/doc/glusterfs/benchmarking/README
>/usr/share/doc/glusterfs/benchmarking/glfs-bm.c
>/usr/share/doc/glusterfs/benchmarking/launch-script.sh
>/usr/share/doc/glusterfs/benchmarking/local-script.sh
>/usr/share/doc/glusterfs/benchmarking/rdd.c
>/usr/share/doc/glusterfs/glusterfs-mode.el
>/usr/share/doc/glusterfs/glusterfs.vim
>/var/lib/glusterd/groups/virt
> 
> 
> RPM build errors:
> File not found by glob:
> /var/tmp/glusterfs-3.6.8-0.0-CDwT09/usr/lib64/python2.6/site-packages/glusterfs_glupy*.egg-info
> File not found:
> /var/tmp/glusterfs-3.6.8-0.0-CDwT09/usr/com/glusterd/groups
> File not found:
> /var/tmp/glusterfs-3.6.8-0.0-CDwT09/usr/com/glusterd/groups/virt
> Installed (but unpackaged) file(s) found:
>/usr/local/lib64/python2.6/site-packages/gluster/__init__.py
>/usr/local/lib64/python2.6/site-packages/gluster/__init__.pyc
>   
> /usr/local/lib64/python2.6/site-packages/glusterfs_glupy-3.6.8-py2.6.egg-info
>/usr/share/doc/glusterfs/benchmarking/README
>/usr/share/doc/glusterfs/benchmarking/glfs-bm.c
>/usr/share/doc/glusterfs/benchmarking/launch-script.sh
>/usr/share/doc/glusterfs/benchmarking/local-script.sh
>/usr/share/doc/glusterfs/benchmarking/rdd.c
>/usr/share/doc/glusterfs/glusterfs-mode.el
>/usr/share/doc/glusterfs/glusterfs.vim
>/var/lib/glusterd/groups/virt
> 
> 
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
> 



Re: [Gluster-users] GlusterFS-3.7.6-2 packages for Debian Wheezy now available

2016-02-10 Thread Kaleb Keithley

Please attach the logs to https://bugzilla.redhat.com/show_bug.cgi?id=1304348
(or mail them to Kaushal and/or me.

Thanks

- Original Message -
> From: "Ronny Adsetts" <ronny.adse...@amazinginternet.com>
> To: "Kaleb Keithley" <kkeit...@redhat.com>, "Gluster Users" 
> <gluster-users@gluster.org>, "Gluster Devel"
> <gluster-de...@gluster.org>
> Sent: Wednesday, February 10, 2016 10:50:43 AM
> Subject: Re: [Gluster-users] GlusterFS-3.7.6-2 packages for Debian Wheezy now 
> available
> 
> Kaleb Keithley wrote on 04/02/2016 06:40:
> > 
> > If you're a Debian Wheezy user please give the new packages a try.
> 
> Hi Kaleb,
> 
> Apologies for the delay in getting back to you. I tried the upgrade on one
> node last week and it failed but I hadn't had the time to try it again
> without the feeling of panic around my neck :-).
> 
> So, I did the upgrade again on one node only but the node does not restart
> without error.
> 
> Excerpt of etc-glusterfs-glusterd.vol.log follows. Other than the errors, the
> thing that sticks out to me is in the management volume definition where it
> says "option transport-type rdma" as we're not using rdma. This may of
> course be a red herring.
> 
> I've now uninstalled the 3.7.6 packages and reinstalled 3.6.8.
> 
> If you need any further information please do let me know. I can try the
> upgrade again if there are changes you'd like me to make.
> 
> Thanks for your work on this so far.
> 
> Ronny
> 
> 
> [2016-02-10 09:24:42.479807] W [glusterfsd.c:1211:cleanup_and_exit] (--> 0-:
> received signum (15), shutting down
> [2016-02-10 09:27:14.031377] I [MSGID: 100030] [glusterfsd.c:2318:main]
> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.7.6
> (args: /usr/sbin/glusterd -p /var/run/glusterd.pid)
> [2016-02-10 09:27:14.035512] I [MSGID: 106478] [glusterd.c:1350:init]
> 0-management: Maximum allowed open file descriptors set to 65536
> [2016-02-10 09:27:14.035554] I [MSGID: 106479] [glusterd.c:1399:init]
> 0-management: Using /var/lib/glusterd as working directory
> [2016-02-10 09:27:14.040817] W [MSGID: 103071]
> [rdma.c:4592:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event
> channel creation failed [No such device]
> [2016-02-10 09:27:14.040848] W [MSGID: 103055] [rdma.c:4899:init]
> 0-rdma.management: Failed to initialize IB Device
> [2016-02-10 09:27:14.040860] W [rpc-transport.c:359:rpc_transport_load]
> 0-rpc-transport: 'rdma' initialization failed
> [2016-02-10 09:27:14.040921] W [rpcsvc.c:1597:rpcsvc_transport_create]
> 0-rpc-service: cannot create listener, initing the transport failed
> [2016-02-10 09:27:14.040937] E [MSGID: 106243] [glusterd.c:1623:init]
> 0-management: creation of 1 listeners failed, continuing with succeeded
> transport
> [2016-02-10 09:27:15.725220] I [MSGID: 106513]
> [glusterd-store.c:2047:glusterd_restore_op_version] 0-glusterd: retrieved
> op-version: 1
> [2016-02-10 09:27:15.900057] I [MSGID: 106498]
> [glusterd-handler.c:3579:glusterd_friend_add_from_peerinfo] 0-management:
> connect returned 0
> [2016-02-10 09:27:15.900138] I [rpc-clnt.c:984:rpc_clnt_connection_init]
> 0-management: setting frame-timeout to 600
> [2016-02-10 09:27:15.900735] W [socket.c:869:__socket_keepalive] 0-socket:
> failed to set TCP_USER_TIMEOUT -1000 on socket 13, Invalid argument
> [2016-02-10 09:27:15.900749] E [socket.c:2965:socket_connect] 0-management:
> Failed to set keep-alive: Invalid argument
> [2016-02-10 09:27:15.900922] I [MSGID: 106194]
> [glusterd-store.c:3487:glusterd_store_retrieve_missed_snaps_list]
> 0-management: No missed snaps list.
> [2016-02-10 09:27:15.901746] I [MSGID: 106544]
> [glusterd.c:159:glusterd_uuid_init] 0-management: retrieved UUID:
> 79083345-b45a-466b-97f3-612ebfac7fe9
> Final graph:
> +--+
>   1: volume management
>   2: type mgmt/glusterd
>   3: option rpc-auth.auth-glusterfs on
>   4: option rpc-auth.auth-unix on
>   5: option rpc-auth.auth-null on
>   6: option rpc-auth-allow-insecure on
>   7: option transport.socket.listen-backlog 128
>   8: option ping-timeout 30
>   9: option transport.socket.read-fail-log off
>  10: option transport.socket.keepalive-interval 2
>  11: option transport.socket.keepalive-time 10
>  12: option transport-type rdma
>  13: option working-directory /var/lib/glusterd
>  14: end-volume
>  15:
> +--+
> [2016-02-10 09:27:15.903840] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: 

Re: [Gluster-users] GlusterFS-3.7.6-2 packages for Debian Wheezy now available

2016-02-04 Thread Kaleb Keithley


- Original Message -
> From: "Ronny Adsetts" <ronny.adse...@amazinginternet.com>
> 
> Kaleb Keithley wrote on 04/02/2016 06:40:
> > 
> > If you're a Debian Wheezy user please give the new packages a try.
> 
> Thanks for this Kaleb, I'll give it a try in the next few days.
> 
> I realise this is a newbie question but, just for my sanity, what's the best
> procedure to upgrade my two nodes? Ideally I'd prefer to do the upgrade
> without having to take the volumes offline if at all possible... thanks.
> 

If your volume is a replica 2, then you can take one brick off-line, update it, 
restart it, and let it heal.

Then repeat for the other brick.

If it's a DHT (distribute) volume then you don't really have a choice, it has 
to be a disruptive update. If you can add two bricks you can turn it into a 
distribute+replica volume (let it heal), and then do the updates.
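The per-node sequence above can be sketched as a script. This is a hedged sketch, not an official procedure: the `node1`/`node2` hostnames, package name, and volume name are placeholders, and the `run` wrapper only echoes each command so you can review everything before executing it for real.

```shell
#!/bin/sh
# Dry-run sketch of a rolling upgrade for a replica-2 volume.
# Hostnames, package name, and volume name are placeholders.
run() { echo "+ $*"; }    # swap 'echo' for real execution when ready

for node in node1 node2; do
    run ssh "$node" 'service glusterfs-server stop'    # take this brick offline
    run ssh "$node" 'apt-get update && apt-get install glusterfs-server'
    run ssh "$node" 'service glusterfs-server start'
    # Wait until self-heal reports no pending entries before the next node.
    run ssh "$node" 'gluster volume heal myvol info'
done
```

The important part is serializing the nodes: don't start on the second brick until `gluster volume heal <vol> info` on the first shows nothing left to heal.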

--

Kaleb


[Gluster-users] GlusterFS-3.7.6-2 packages for Debian Wheezy now available

2016-02-03 Thread Kaleb Keithley

Hi,

If you're a Debian Wheezy user please give the new packages a try.

Thanks

--

Kaleb


Re: [Gluster-users] Installing 3.7.6 in Debian Wheezy

2016-01-26 Thread Kaleb Keithley


- Original Message -
> From: "Benjamin Wilson" 
> 
> Hi Kaleb et al.,
> 
> Do you have any feel for whether or not getting the Wheezy package working
> will be a quick fix? I was planning deploying the client to a number of our
> Wheezy servers, but if it’s going to be a while I might look at setting up
> NFS-Ganesha instead.
> 

Hi,

I think we decided it will probably work with the older version. The fix isn't 
hard, per se.

I'm on vacation this week, and at FOSDEM on Sat/Sun. Maybe I can find a couple 
hours to come up with a quick fix if nobody else gets to it first.

--

Kaleb

Re: [Gluster-users] Installing 3.7.6 in Debian Wheezy

2016-01-21 Thread Kaleb Keithley


- Original Message -
> From: "Atin Mukherjee" 
> > 
> > Wheezy has an old version of liburcu [1]. Gluster requires at-least
> > v0.7. I don't know how Wheezy packages even got built without the
> > required version of liburcu being present.
> Probably Kaleb might be knowing it?
> > 
> > [1] https://packages.debian.org/search?searchon=names&keywords=urcu
> > 

Good question. The debian package build runs configure, so the build _should_ 
have failed.

We hadn't built 3.7.x up until 3.7.6 because of other dependencies that were 
finally resolved in 3.7.6.

Will have to investigate why the build did not fail on the incorrect urcu.

I'll pull the wheezy bits from d.g.o.

Thanks

--

Kaleb



Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2016-01-21 Thread Kaleb KEITHLEY
On 01/20/2016 04:08 AM, Oleksandr Natalenko wrote:
> Yes, there are couple of messages like this in my logs too (I guess one 
> message per each remount):
> 
> ===
> [2016-01-18 23:42:08.742447] I [fuse-bridge.c:3875:notify_kernel_loop] 0-
> glusterfs-fuse: kernel notifier loop terminated
> ===
>

Bug reports and fixes for master and release-3.7 branches are:

master)
 https://bugzilla.redhat.com/show_bug.cgi?id=1288857
 http://review.gluster.org/12886

release-3.7)
 https://bugzilla.redhat.com/show_bug.cgi?id=1288922
 http://review.gluster.org/12887

The release-3.7 fix will be in glusterfs-3.7.7 when it's released.

I think even with the above fixes applied there are still some
issues remaining. I have submitted additional/revised fixes on top of
the above fixes at:

 master: http://review.gluster.org/13274
 release-3.7: http://review.gluster.org/13275

I invite you to review the patches in gerrit (review.gluster.org).

Regards,

--

Kaleb



Re: [Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client

2016-01-21 Thread Kaleb KEITHLEY
On 01/21/2016 06:59 PM, Oleksandr Natalenko wrote:
> I see extra GF_FREE (node); added with two patches:
> 
> ===
> $ git diff HEAD~2 | gist
> https://gist.github.com/9524fa2054cc48278ea8
> ===
> 
> Is that intentionally? I guess I face double-free issue.
> 

I presume you're referring to the release-3.7 branch.

Yup, bad edit. Long day. That's why we review. ;-)

Please try the latest.

Thanks,

--

Kaleb



Re: [Gluster-users] installing glusterfs-coreutils on ubuntu

2015-12-23 Thread Kaleb Keithley

> 
> The documentation requires installing "glusterfs-api-devel", which isn't
> available in the Ubuntu PPA.

As Kaushal pointed out, the headers are in the glusterfs-common .deb. The 
people who did the Debian/Ubuntu packaging don't like -dev packages. I'm not 
sure why. :-/

You appear to have glusterfs-common installed because you do have the header 
files. What version of GlusterFS are you using?

> However, I do have the header files that I
> believe ./configure is looking for installed but can't seem to point
> ./configure to them using CFLAGS, CXXFLAGS, etc. I also tried to use the
> --with-glusterfs ./configure option as well. What's the proper way to do
> this?
> 
> output of ./configure where it halts:
> 
> checking for GLFS... no
> configure: error: cannot find glusterfs api headers

You could look at configure(.ac) and try to suss out how configure is looking 
for GLFS. It could be using pkgconfig, or looking for the header file, or 
looking for a symbol in the shared library, or something else.
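One quick way to see whether the pkg-config route applies is to ask pkg-config directly. A hedged sketch — it assumes the installed packages ship a `glusterfs-api.pc` file (upstream GlusterFS installs one, but a given .deb may not), and the `CPPFLAGS` fallback is illustrative, not something configure is guaranteed to honor for this check:

```shell
# Probe for gfapi dev files the way configure's PKG_CHECK_MODULES would.
probe_glfs() {
    if pkg-config --exists glusterfs-api 2>/dev/null; then
        echo "glusterfs-api.pc found: $(pkg-config --cflags glusterfs-api)"
    else
        echo "glusterfs-api.pc not found; maybe try ./configure CPPFLAGS=-I/usr/include/glusterfs"
    fi
}
probe_glfs
```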

Otherwise I'll take a look when I get to the office in a bit.

> 
> existing header location:
> $:~/glusterfs-coreutils$ sudo find /usr -name "glfs.h" -print
> /usr/include/glusterfs/api/glfs.h
> 

--

Kaleb


Re: [Gluster-users] gluster nfs-ganesha enable fails and is driving me crazy

2015-12-08 Thread Kaleb KEITHLEY
On 12/08/2015 03:46 AM, Marco Antonio Carcano wrote:
> Hi,
> 

> 
> /etc/ganesha/ganesha-ha.conf
> 
> HA_NAME="ganesha-ha-360"
> HA_VOL_SERVER="glstr01.carcano.local"
> HA_CLUSTER_NODES="glstr01.carcano.local,glstr02.carcano.local"
> VIP_server1="192.168.65.250"
> VIP_server2="192.168.65.251"
> 

change your /etc/ganesha/ganesha-ha.conf file:

HA_NAME="ganesha-ha-360"
HA_VOL_SERVER="glstr01.carcano.local"
HA_CLUSTER_NODES="glstr01.carcano.local,glstr02.carcano.local"
VIP_glstr01.carcano.local="192.168.65.250"
VIP_glstr02.carcano.local="192.168.65.251"

I'd change the HA_NAME to something else too, but as long as you don't
set up another cluster on the same network you should be fine.
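The failure mode here — VIP_* suffixes that don't match the names in HA_CLUSTER_NODES — is easy to lint for. A hedged sketch (the embedded conf is the corrected example above; pacemaker may impose additional naming rules that this check doesn't cover):

```shell
# Lint a ganesha-ha.conf: every node in HA_CLUSTER_NODES needs a VIP_<node> entry.
conf=$(mktemp)
cat > "$conf" <<'EOF'
HA_NAME="ganesha-ha-360"
HA_VOL_SERVER="glstr01.carcano.local"
HA_CLUSTER_NODES="glstr01.carcano.local,glstr02.carcano.local"
VIP_glstr01.carcano.local="192.168.65.250"
VIP_glstr02.carcano.local="192.168.65.251"
EOF

nodes=$(sed -n 's/^HA_CLUSTER_NODES="\(.*\)"/\1/p' "$conf" | tr ',' ' ')
status=OK
for n in $nodes; do
    grep -q "^VIP_$n=" "$conf" || { echo "missing VIP_$n"; status=BAD; }
done
echo "$status"
rm -f "$conf"
```

Swap the here-document for your real /etc/ganesha/ganesha-ha.conf to lint an actual cluster.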

--

Kaleb



Re: [Gluster-users] CentOS EPEL 6 GlusterFS 3.7 GPG Key

2015-11-25 Thread Kaleb KEITHLEY
On 11/25/2015 01:02 PM, Jason Woods wrote:
> Hi,
> 
> I recently attempted to update GlusterFS 3.7 to the latest
> revision on a CentOS 6 system and received the following error
> message:
> 
> warning: rpmts_HdrFromFdno: Header V4 RSA/SHA256 Signature, key ID 
> d5dc52dc: NOKEY Retrieving key from 
> file:///etc/pki/rpm-gpg/RPM-GPG-KEY-glusterfs-epel
> 
> 
> The GPG keys listed for the "GlusterFS is a clustered file-system 
> capable of scaling to several petabytes." repository are already 
> installed but they are not correct for this package. Check that
> the correct key URLs are configured for this repository.
> 
> Does anyone know if the GPG key was changed?
> 

http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.6/NEW_PUBLIC_KEY.README


Re: [Gluster-users] vol set ganesha.enable errors out

2015-11-17 Thread Kaleb KEITHLEY
On 11/17/2015 09:08 AM, Surya K Ghatty wrote:
> Hi:
> 
> I am running into the following error when trying to enable ganesha on
> my system. This seems to be the same message as the one here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1004332.
> 
> [root@conv-gls002 ~]# gluster volume set gvol0 ganesha.enable on
> volume set: failed: Staging failed on gluster1. Error: One or more
> connected clients cannot support the feature being set. These clients
> need to be upgraded or disconnected before running this command again

Some "client" — client usually means a gluster fuse client mount, or a
gfapi client like the nfs-ganesha server — isn't using 3.7.x.

Based on what I think your setup is that seems really unlikely. But see
my comment/question at the end.
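One place to start hunting for the mismatch, on each server, is the version glusterd has persisted in its store. A hedged sketch — the path is the stock packaging default and may differ on your distro:

```shell
# Show the op-version glusterd has persisted on this node.
show_opver() {
    f=/var/lib/glusterd/glusterd.info
    if [ -r "$f" ]; then
        grep '^operating-version=' "$f" || echo "operating-version not recorded in $f"
    else
        echo "no $f on this host (not a gluster server, or non-default paths)"
    fi
}
show_opver
```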


> 
> However, I can execute some of the other gluster vol set commands.
> 
> Here is the log:
> [2015-11-17 13:51:48.629507] E [MSGID: 106289]
> [glusterd-syncop.c:1871:gd_sync_task_begin] 0-management: Failed to
> build payload for operation 'Volume Set'
> [2015-11-17 13:51:56.698145] E [MSGID: 106022]
> [glusterd-utils.c:10154:glusterd_check_client_op_version_support]
> 0-management: One or more clients don't support the required op-version
> [2015-11-17 13:51:56.698193] E [MSGID: 106301]
> [glusterd-syncop.c:1274:gd_stage_op_phase] 0-management: Staging of
> operation 'Volume Set' failed on localhost : One or more connected
> clients cannot support the feature being set. These clients need to be
> upgraded or disconnected before running this command again
> [2015-11-17 13:54:32.759969] E [MSGID: 106022]
> [glusterd-utils.c:10154:glusterd_check_client_op_version_support]
> 0-management: One or more clients don't support the required op-version
> [2015-11-17 13:54:32.760017] E [MSGID: 106301]
> [glusterd-syncop.c:1274:gd_stage_op_phase] 0-management: Staging of
> operation 'Volume Set' failed on localhost : One or more connected
> clients cannot support the feature being set. These clients need to be
> upgraded or disconnected before running this command again
> [2015-11-17 13:55:15.930722] E [MSGID: 106022]
> [glusterd-utils.c:10154:glusterd_check_client_op_version_support]
> 0-management: One or more clients don't support the required op-version
> [2015-11-17 13:55:15.930733] E [MSGID: 106301]
> [glusterd-syncop.c:1274:gd_stage_op_phase] 0-management: Staging of
> operation 'Volume Set' failed on localhost : One or more connected
> clients cannot support the feature being set. These clients need to be
> upgraded or disconnected before running this command again
> 
> 
> The work around seems to upgrade the "clients" to a certain level or
> disconnect them. What client is this message referring to? I am running
> in a HA mode, and have two glusterfs nodes. Both have gluster at the
> same level. (3.7.6). There are no lingering mounts, as far as I can tell.
> 
> [root@conv-gls001 glusterfs]# gluster --version
> glusterfs 3.7.6 built on Nov 9 2015 15:20:26
> ... 
> [root@conv-gls001 ~]# gluster --version
> glusterfs 3.7.6 built on Nov 9 2015 15:20:26

These look like the same machine, i.e. conv-gls001.  Is that really
correct? Let's see the output from the _other_ machine.

--

Kaleb






Re: [Gluster-users] vol set ganesha.enable errors out

2015-11-17 Thread Kaleb KEITHLEY
On 11/17/2015 09:30 AM, Surya K Ghatty wrote:
> Hi Kaleb,
> 
> Sorry... here is the version from the other machine. Both have the same
> version.
> 
> [root@conv-gls002 glusterfs]# gluster --version
> glusterfs 3.7.6 built on Nov 9 2015 15:20:26
> Repository revision: git://git.gluster.com/glusterfs.git
> Copyright (c) 2006-2011 Gluster Inc.
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> You may redistribute copies of GlusterFS under the terms of the GNU
> General Public License.
> 

Yup, I kinda suspected. ;-)

As a work-around, try skipping the `gluster volume set gvol0
ganesha.enable on`.

Write your own /etc/ganesha/ganesha.conf file. (Example in my blog post
at
http://blog.gluster.org/2015/10/linux-scale-out-nfsv4-using-nfs-ganesha-and-glusterfs-one-step-at-a-time/)
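For reference, a minimal hand-written export block for the Gluster FSAL might look like the following. This is a hedged sketch based on ganesha 2.2-era examples: the Export_Id, paths, and hostname are placeholders you must adapt, and option names can vary between ganesha releases.

```
EXPORT {
        Export_Id = 1;                  # any unique id
        Path = "/gvol0";
        Pseudo = "/gvol0";
        Access_Type = RW;
        Squash = No_root_squash;
        SecType = "sys";

        FSAL {
                Name = GLUSTER;
                hostname = "glusterA";  # a node in the trusted pool
                volume = "gvol0";
        }
}
```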

And if you wouldn't mind filing a bug at
https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS we would
appreciate it.

Thanks

--

Kaleb



Re: [Gluster-users] Configuring Ganesha and gluster on separate nodes?

2015-11-17 Thread Kaleb KEITHLEY
On 11/17/2015 11:51 AM, Surya K Ghatty wrote:
> Hi:
> 
> I am trying to understand if it is technically feasible to have gluster
> nodes on one machine, and export a volume from one of these nodes using
> a nfs-ganesha server installed on a totally different machine?


It should work, but it's definitely outside the envelope of anything we
have tested. You're on your own here.


> I tried
> the below and showmount -e does not show my volume exported. Any
> suggestions will be appreciated.
> 
> 1. Here is my configuration:
> 
> Gluster nodes: glusterA and glusterB on individual bare metals - both in
> Trusted pool, with volume gvol0 up and running.
> Ganesha node: on bare metal ganeshaA.
> 
> 2. my ganesha.conf looks like this with IP address of glusterA in the FSAL.
> 
> FSAL {
> Name = GLUSTER;
> 
> # IP of one of the nodes in the trusted pool
> *hostname = "WW.ZZ.XX.YY" --> IP address of GlusterA.*
> 
> # Volume name. Eg: "test_volume"
> volume = "gvol0";
> }
> 
> 3. I disabled nfs on gvol0. As you can see, *nfs.disable is set to on.*
> 
> [root@glusterA ~]# gluster vol info
> 
> Volume Name: gvol0
> Type: Distribute
> Volume ID: 16015bcc-1d17-4ef1-bb8b-01b7fdf6efa0
> Status: Started
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: glusterA:/data/brick0/gvol0
> Options Reconfigured:
> *nfs.disable: on*
> nfs.export-volumes: off
> features.quota-deem-statfs: on
> features.inode-quota: on
> features.quota: on
> performance.readdir-ahead: on
> 
> 4. I then ran ganesha.nfsd -f /etc/ganesha/ganesha.conf -L
> /var/log/ganesha.log -N NIV_FULL_DEBUG
> Ganesha server was put in grace, no errors.
> 
> 17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA:
> nfs-ganesha-26426[reaper] fridgethr_freeze :RW LOCK :F_DBG :Released
> mutex 0x7f21a92818d0 (>mtx) at
> /builddir/build/BUILD/nfs-ganesha-2.2.0/src/support/fridgethr.c:484
> 17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA:
> nfs-ganesha-26426[reaper] nfs_in_grace :RW LOCK :F_DBG :Acquired mutex
> 0x7f21ad1f18e0 (_mutex) at
> /builddir/build/BUILD/nfs-ganesha-2.2.0/src/SAL/nfs4_recovery.c:129
> *17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA :
> nfs-ganesha-26426[reaper] nfs_in_grace :STATE :DEBUG :NFS Server IN GRACE*
> 17/11/2015 10:44:40 : epoch 564b5964 : ganeshaA :
> nfs-ganesha-26426[reaper] nfs_in_grace :RW LOCK :F_DBG :Released mutex
> 0x7f21ad1f18e0 (_mutex) at
> /builddir/build/BUILD/nfs-ganesha-2.2.0/src/SAL/nfs4_recovery.c:141
> 
> 5. [root@ganeshaA glusterfs]# showmount -e
> Export list for ganeshaA:
> 
> 
> Any suggestions on what I am missing?
> 
> Regards,
> 
> Surya Ghatty
> 
> "This too shall pass"
> 
> Surya Ghatty | Software Engineer | IBM Cloud Infrastructure Services
> Development | tel: (507) 316-0559 | gha...@us.ibm.com
> 
> 
> 



Re: [Gluster-users] Question on HA Active-Active Ganesha setup

2015-11-05 Thread Kaleb KEITHLEY
On 11/05/2015 10:13 AM, Surya K Ghatty wrote:
> All... I need your help! I am trying to setup Highly available
> Active-Active Ganesha configuration on two glusterfs nodes based on
> instructions here:
> 
> https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Configuring%20HA%20NFS%20Server/
> and
> http://www.slideshare.net/SoumyaKoduri/high-49117846 and
> https://www.youtube.com/watch?v=Z4mvTQC-efM.
> 
> 
> *My questions:*
> 
> 1. what is the expected behvaior? Is the cluster.enable-shared-storage
> command expected to create shared storage? It seems odd to return a
> success message without creating the shared volume.
> 2. Any suggestions on how to get past this problem?
> 
> *Details:*
> I am using glusterfs 3.7.5 and Ganesha 2.2.0.6 installable packages. I'm
> installing
> 
> Also, I am using the following command
> 
> gluster volume set all cluster.enable-shared-storage enable
> 
> that would automatically setup the shared_storage directory under
> /run/gluster/ and automounts the shared volume for HA.
> 
> This command was working perfectly fine, and I was able to setup ganesha
> HA successfully on cent OS 7.0 running on bare metals - until now.
> 
> 
> 
> [root@qint-tor01-c7 gluster]# gluster vol set all
> cluster.enable-shared-storage enable
> volume set: success
> 
> [root@qint-tor01-c7 gluster]# pwd
> /run/gluster
> 
> [root@qint-tor01-c7 gluster]# ls
> 5027ba011969a8b2eca99ca5c9fb77ae.socket shared_storage
> changelog-9fe3f3fdd745db918d7d5c39fbe94017.sock snaps
> changelog-a9bf0a82aba38610df80c75a9adc45ad.sock
> 
> 
> Yesterday, we tried to deploy Ganesha HA with Gluster FSAL on a
> different cloud. and when I run the same command there, (same version of
> glusterfs and ganesha, same cent OS 7) - the command returned
> successfully, but it did not auto create the shared_storage directory.
> There were no logs either in
> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
> 
> or /var/log/ganesha.log related to the command.
> 
> However, I do see these logs written to the etc-glusterfs-glusterd.vol.log
> 
> [2015-11-05 14:43:00.692762] W [socket.c:588:__socket_rwv] 0-nfs: readv
> on /var/run/gluster/9d5e1ba5e44bd1aa3331d2ee752a806a.socket failed
> (Invalid argument)
> 
> on both ganesha nodes independent of the commands I execute.
> 
> regarding this error, I did a ss -x | grep
> /var/run/gluster/9d5e1ba5e44bd1aa3331d2ee752a806a.socket
> 
> and it appears that no process was using these sockets, on either machines.
> 
> My questions:
> 
> 1. what is the expected behvaior? Is the cluster.enable-shared-storage
> command expected to create shared storage? It seems odd to return a
> success message without creating the shared volume.
> 2. Any suggestions on how to get past this problem?
> Regards,

The answer everyone hates to hear: It works for me.

I suspect it's not working in your case because it wants to create a
"replica 3" volume and you only have two nodes.

My blog at
http://blog.gluster.org/2015/10/linux-scale-out-nfsv4-using-nfs-ganesha-and-glusterfs-one-step-at-a-time/
documents what I did recently to set up a four node HA ganesha cluster
for testing at the NFS Bake-a-thon that Red Hat hosted recently.


> 
> Surya Ghatty
> 
> "This too shall pass"
> 
> Surya Ghatty | Software Engineer | IBM Cloud Infrastructure Services
> Development | tel: (507) 316-0559 | gha...@us.ibm.com
> 
> 
> 



[Gluster-users] Community Meeting minutes for 2015-10-28

2015-11-03 Thread Kaleb KEITHLEY

Sorry for the late mailing

Meeting summary

Roll Call (kkeithley, 12:00:56)
kshlm to check back with misc on the new jenkins slaves (kkeithley,
12:04:10)
ACTION: kshlm to check back with misc on the new jenkins slaves
(kkeithley, 12:05:03)

krishnan_p and atinmu will remind developers to not work in personal
repositories, but request one for github.com/gluster (kkeithley, 12:05:14)
ACTION: krishnan_p and atinmu will remind developers to not work
in personal repositories, but request one for github.com/gluster
(kkeithley, 12:06:01)

ndevos send out a reminder to the maintainers about more actively
enforcing backports of bugfixes (kkeithley, 12:06:44)
skoduri, poornimag and obnox_ to post SDC trip report on
gluster-devel (kkeithley, 12:09:58)
ACTION: skoduri, poornimag and obnox_ to forward SDC trip
report(s) to gluster-devel (kkeithley, 12:12:44)

raghu to call for volunteers and help from maintainers for doing
backports listed by rwareing to 3.6.7 (kkeithley, 12:14:29)
kshlm to clean up 3.7.4 tracker bug (kkeithley, 12:16:21)
ACTION: kshlm to clean up 3.7.4 tracker bug by next week for
sure (kkeithley, 12:16:34)

hagarth to post a tracking page on gluster.org for 3.8 by next
week's meeting (kkeithley, 12:17:07)
rafi to setup a doodle poll for bug triage meeting (kkeithley, 12:17:28)
ACTION: rafi to setup a doodle poll for bug triage meeting
(kkeithley, 12:18:31)

rastar and msvbhat to publish a test exit criterion for major/minor
releases on gluster.org (kkeithley, 12:21:59)
ACTION: rastar and msvbhat to publish a test exit criterion for
major/minor releases on gluster.org (kkeithley, 12:22:46)

hagarth to review http://review.gluster.org/#/c/12210/ (kkeithley,
12:23:24)
ACTION: hagarth to finish review
http://review.gluster.org/#/c/12210/ (kkeithley, 12:23:48)

atinm to send a monthly update for 4.0 initiatives (kkeithley, 12:24:11)
ACTION: atinm to send a monthly update for 4.0 initiatives,
including summarize last couple of months (kkeithley, 12:25:45)

GlusterFS 3.7 (kkeithley, 12:26:05)
ACTION: rastar will open a BZ for 3.7.5 upgrade issue with
glusterd commands (kkeithley, 12:40:34)

GlusterFS 3.6 (kkeithley, 12:42:52)
https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.6.7
(kshlm, 12:44:35)

GlusterFS 3.5 (kkeithley, 12:49:52)
GlusterFS 3.8 (kkeithley, 12:52:04)
GlusterFS 4.0 (kkeithley, 12:53:02)
ACTION: atinm will also put up the GlusterD 2.0 design document
for review in a week or two (kkeithley, 12:55:47)
ACTION: overclk to review http://review.gluster.org/#/c/12321/
(kkeithley, 13:02:27)

open floor (kkeithley, 13:09:39)



Meeting ended at 13:19:14 UTC (full logs).

Action items

kshlm to check back with misc on the new jenkins slaves
krishnan_p and atinmu will remind developers to not work in personal
repositories, but request one for github.com/gluster
skoduri, poornimag and obnox_ to forward SDC trip report(s) to
gluster-devel
kshlm to clean up 3.7.4 tracker bug by next week for sure
rafi to setup a doodle poll for bug triage meeting
rastar and msvbhat to publish a test exit criterion for major/minor
releases on gluster.org
hagarth to finish review http://review.gluster.org/#/c/12210/
atinm to send a monthly update for 4.0 initiatives, including
summarize last couple of months
rastar will open a BZ for 3.7.5 upgrade issue with glusterd commands
atinm will also put up the GlusterD 2.0 design document for review
in a week or two
overclk to review http://review.gluster.org/#/c/12321/



Action items, by person

atinm
krishnan_p and atinmu will remind developers to not work in
personal repositories, but request one for github.com/gluster
atinm to send a monthly update for 4.0 initiatives, including
summarize last couple of months
atinm will also put up the GlusterD 2.0 design document for
review in a week or two
kshlm
kshlm to check back with misc on the new jenkins slaves
kshlm to clean up 3.7.4 tracker bug by next week for sure
obnox
skoduri, poornimag and obnox_ to forward SDC trip report(s) to
gluster-devel
overclk
overclk to review http://review.gluster.org/#/c/12321/
poornimag
skoduri, poornimag and obnox_ to forward SDC trip report(s) to
gluster-devel
rastar
rastar and msvbhat to publish a test exit criterion for
major/minor releases on gluster.org
rastar will open a BZ for 3.7.5 upgrade issue with glusterd commands
skoduri
skoduri, poornimag and obnox_ to forward SDC trip report(s) to
gluster-devel
UNASSIGNED
rafi to setup a doodle poll for bug triage meeting
hagarth to finish review http://review.gluster.org/#/c/12210/



People present (lines said)

kkeithley (128)
kshlm (30)
atinm (25)
rastar (22)
overclk (20)

Re: [Gluster-users] glusterfs 3.7.5 compile error

2015-11-03 Thread Kaleb KEITHLEY
Install liburcu-dev: `apt-get install liburcu-dev`
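The runtime `liburcu-bp.so.0` alone isn't enough — configure's pkg-config check wants the `.pc` file that only the dev package installs, which is why machine 1 fails while its `ldconfig` output looks fine. A quick hedged check (package names are the usual Debian/Fedora ones, but may differ on your distro):

```shell
# Distinguish "runtime library present" from "dev files present".
check_urcu() {
    if pkg-config --exists liburcu-bp 2>/dev/null; then
        echo "liburcu-bp dev files present ($(pkg-config --modversion liburcu-bp))"
    else
        echo "liburcu-bp dev files missing: install liburcu-dev (or userspace-rcu-devel)"
    fi
}
check_urcu
```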

On 11/03/2015 02:42 AM, 黄平 wrote:
> on machine 1:
> 
> #./configure  --disable-tiering
> 
> building glupy with -isystem /usr/include/python2.6 -l python2.6
> checking for URCU... configure: error: Package requirements (liburcu-bp)
> were not met:
> 
> No package 'liburcu-bp' found
> 
> Consider adjusting the PKG_CONFIG_PATH environment variable if you
> installed software in a non-standard prefix.
> 
> Alternatively, you may set the environment variables URCU_CFLAGS
> and URCU_LIBS to avoid the need to call pkg-config.
> See the pkg-config man page for more details.
> 
> 
> #root@debian:~/glusterfs-3.7.5# ldconfig -v | grep  liburcu
> liburcu-signal.so.0 -> liburcu-signal.so.0.0.0
> liburcu-qsbr.so.0 -> liburcu-qsbr.so.0.0.0
> liburcu-bp.so.0 -> liburcu-bp.so.0.0.0
> liburcu-mb.so.0 -> liburcu-mb.so.0.0.0
> liburcu.so.0 -> liburcu.so.0.0.0
> liburcu-defer.so.0 -> liburcu-defer.so.0.0.0
> 
> on machine 2:
> #./configure is ok, 
> 
> @me:~$ sudo ldconfig -v | grep   urcu
> liburcu-qsbr.so.1 -> liburcu-qsbr.so.1.0.0
> liburcu-signal.so.1 -> liburcu-signal.so.1.0.0
> liburcu-cds.so.1 -> liburcu-cds.so.1.0.0
> liburcu-bp.so.1 -> liburcu-bp.so.1.0.0
> liburcu.so.1 -> liburcu.so.1.0.0
> liburcu-common.so.1 -> liburcu-common.so.1.0.0
> liburcu-mb.so.1 -> liburcu-mb.so.1.0.0 
> 
> 
> on machine 1,why cannot find liburcu-bp  ?
> 
> Thanks you for any help ...
> 
> Norbert
> 
> 
> 
> 


Re: [Gluster-users] Gluster 3.7 wheezy repo?

2015-10-19 Thread Kaleb KEITHLEY
On 10/19/2015 08:15 AM, Pranith Kumar Karampuri wrote:
> Added kaleb who knows more about it
> 
> Pranith
> 
> On 10/17/2015 04:12 AM, Lindsay Mathieson wrote:
>> Any chance of that happening?
>>

Unlikely IMO. Wheezy is too old. Recent changes in Gluster use more
secure OpenSSL APIs that don't exist on Wheezy.

Same for RHEL/CentOS 5 and Ubuntu Trusty.

There's a bug open for it at
https://bugzilla.redhat.com/show_bug.cgi?id=1258883.

If someone fixes it then we'll start packaging for Wheezy, Trusty, and EL5.

--

Kaleb




Re: [Gluster-users] Can you please help us in installing Gluster FS(Have errors in istalling)

2015-10-06 Thread Kaleb KEITHLEY

You need to install gcc.
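In this particular log, gcc itself runs but its cc1 backend can't load libgmp.so.3, so reinstalling/repairing gcc together with its gmp dependency is the likely fix. A hedged sketch for diagnosing which shared library is unresolved (assumes gcc is on PATH; package names vary by distro):

```shell
# Find cc1 and list any shared libraries it can't resolve.
diagnose_cc1() {
    cc1=$(gcc -print-prog-name=cc1 2>/dev/null)
    if [ -z "$cc1" ] || [ ! -x "$cc1" ]; then
        echo "cc1 not found: install or repair gcc"
        return
    fi
    # Lines like "libgmp.so.3 => not found" point at the missing dependency.
    ldd "$cc1" | grep 'not found' || echo "all of cc1's shared libraries resolve"
}
diagnose_cc1
```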

On 10/06/2015 12:24 AM, M.Tarkeshwar Rao wrote:
> Hi,
> 
> While trying to configure GlusterFS tar on my Linux server (
> 
> Linux sb6270x1803-2 3.10.0-123.el7.x86_64 #1 SMP Mon May 5 11:16:57 EDT
> 2014 x86_64 x86_64 x86_64 GNU/Linux
> 
> ),
> sb6270x1803-2:/home/pag/glusterfs-3.7.4# ./configure 
> checking for a BSD-compatible install... /usr/bin/install -c 
> checking whether build environment is sane... yes 
> checking for a thread-safe mkdir -p... /usr/bin/mkdir -p 
> checking for gawk... gawk 
> checking whether make sets $(MAKE)... yes 
> checking how to create a pax tar archive... gnutar 
> checking build system type... x86_64-unknown-linux-gnu 
> checking host system type... x86_64-unknown-linux-gnu 
> checking for gcc... gcc 
> checking for C compiler default output file name... 
> configure: error: in `/home/pag/glusterfs-3.7.4': 
> configure: error: C compiler cannot create executables 
> See `config.log' for more details.
> 
> in Logs config.log, getting following exception -
> 
> configure:3354: checking for C compiler default output file name
> 
> configure:3376: gcc conftest.c >&5
> 
> /usr/libexec/gcc/x86_64-redhat-linux/4.4.5/cc1: error while loading
> shared libraries: libgmp.so.3: cannot open shared object file: No such
> file or directory
> 
> configure:3380: $? = 1
> 
> configure:3418: result:
> 
> configure: failed program was:
> 
> | /* confdefs.h.  */
> 
> | #define PACKAGE_NAME "glusterfs"
> 
> | #define PACKAGE_TARNAME "glusterfs"
> 
> | #define PACKAGE_VERSION "3.7.4"
> 
> | #define PACKAGE_STRING "glusterfs 3.7.4"
> 
> | #define PACKAGE_BUGREPORT "gluster-users@gluster.org
> "
> 
> | #define PACKAGE "glusterfs"
> 
> | #define VERSION "3.7.4"
> 
> | /* end confdefs.h.  */
> 
> |
> 
> | int
> 
> | main ()
> 
> | {
> 
> |
> 
> |   ;
> 
> |   return 0;
> 
> | }
> 
> configure:3424: error: in `/home/pag/glusterfs-3.7.4':
> 
> configure:3427: error: C compiler cannot create executables
> 
> See `config.log' for more details.
> 
> 
> Regards
> Tarkeshwar
> 
> 
> 


Re: [Gluster-users] Need Info about concurrent access

2015-10-01 Thread Kaleb KEITHLEY
Yes, what you read is correct. It does.


On 10/01/2015 05:56 AM, wodel youchi wrote:
> Hi again,
> 
> any one?
> 
> 2015-09-30 11:34 GMT+01:00 wodel youchi:
> 
> Hi,
> 
> I am a newbie, I am implementing an Alfresco solution with 3 servers
> all in active mode
> 
> I want to use glusterfs as storage for the 3 servers, to store the
> index and documents.
> 
> 
> from what I have read, glusterfs does manage concurrent access to files.
> 
> I want to be sure of that
> 
> 
> thanks in advance.
> 
> 
> 
> 


Re: [Gluster-users] Fwd: nfs-ganesha HA with arbiter volume

2015-09-22 Thread Kaleb Keithley

Hi,

IIRC, the setup is two gluster+ganesha nodes plus an arbiter node for
gluster quorum.

Have I remembered that correctly?

The Ganesha HA in 3.7 requires a minimum of three servers running ganesha and 
pacemaker. Two might work if you change the ganesha-ha.sh to not enable 
pacemaker quorum, but I haven't tried that myself. I'll try and find time in 
the next couple of days to update the documentation or write a blog post.
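For anyone trying the two-node variant anyway, the quorum relaxation described above might look roughly like this (untested sketch; it trades safety for availability):

```shell
# Untested sketch: with only two pacemaker nodes, losing one node loses
# quorum and pacemaker would normally stop all resources. This property
# tells it to carry on regardless (use with care):
pcs property set no-quorum-policy=ignore
# corosync's votequorum side is covered by "two_node: 1" in the quorum{}
# section of /etc/corosync/corosync.conf.
```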



----- Original Message -----
> 
> 
> 
> On 21/09/15 21:21, Tiemen Ruiten wrote:
> > Whoops, replied off-list.
> >
> > Additionally I noticed that the generated corosync config is not
> > valid, as there is no interface section:
> >
> > /etc/corosync/corosync.conf
> >
> > totem {
> > version: 2
> > secauth: off
> > cluster_name: rd-ganesha-ha
> > transport: udpu
> > }
> >
> > nodelist {
> >   node {
> > ring0_addr: cobalt
> > nodeid: 1
> >}
> >   node {
> > ring0_addr: iron
> > nodeid: 2
> >}
> > }
> >
> > quorum {
> > provider: corosync_votequorum
> > two_node: 1
> > }
> >
> > logging {
> > to_syslog: yes
> > }
> >
> >
> >
> 
> Maybe Kaleb can help you out.
> >
> > -- Forwarded message --
> > From: Tiemen Ruiten
> > Date: 21 September 2015 at 17:16
> > Subject: Re: [Gluster-users] nfs-ganesha HA with arbiter volume
> > To: Jiffin Tony Thottan
> >
> >
> > Could you point me to the latest documentation? I've been struggling
> > to find something up-to-date. I believe I have all the prerequisites:
> >
> > - shared storage volume exists and is mounted
> > - all nodes in hosts files
> > - Gluster-NFS disabled
> > - corosync, pacemaker and nfs-ganesha rpm's installed
> >
> > Anything I missed?
> >
> > Everything has been installed by RPM so is in the default locations:
> > /usr/libexec/ganesha/ganesha-ha.sh
> > /etc/ganesha/ganesha.conf (empty)
> > /etc/ganesha/ganesha-ha.conf
> >
> 
> Looks fine for me.
> 
> > After I started the pcsd service manually, nfs-ganesha could be
> > enabled successfully, but there was no virtual IP present on the
> > interfaces and looking at the system log, I noticed corosync failed to
> > start:
> >
> > - on the host where I issued the gluster nfs-ganesha enable command:
> >
> > Sep 21 17:07:18 iron systemd: Starting NFS-Ganesha file server...
> > Sep 21 17:07:19 iron systemd: Started NFS-Ganesha file server.
> > Sep 21 17:07:19 iron rpc.statd[2409]: Received SM_UNMON_ALL request
> > from iron.int.rdmedia.com  while not
> > monitoring any hosts
> > Sep 21 17:07:20 iron systemd: Starting Corosync Cluster Engine...
> > Sep 21 17:07:20 iron corosync[3426]: [MAIN  ] Corosync Cluster Engine
> > ('2.3.4'): started and ready to provide service.
> > Sep 21 17:07:20 iron corosync[3426]: [MAIN  ] Corosync built-in
> > features: dbus systemd xmlconf snmp pie relro bindnow
> > Sep 21 17:07:20 iron corosync[3427]: [TOTEM ] Initializing transport
> > (UDP/IP Unicast).
> > Sep 21 17:07:20 iron corosync[3427]: [TOTEM ] Initializing
> > transmit/receive security (NSS) crypto: none hash: none
> > Sep 21 17:07:20 iron corosync[3427]: [TOTEM ] The network interface
> > [10.100.30.38] is now up.
> > Sep 21 17:07:20 iron corosync[3427]: [SERV  ] Service engine loaded:
> > corosync configuration map access [0]
> > Sep 21 17:07:20 iron corosync[3427]: [QB] server name: cmap
> > Sep 21 17:07:20 iron corosync[3427]: [SERV  ] Service engine loaded:
> > corosync configuration service [1]
> > Sep 21 17:07:20 iron corosync[3427]: [QB] server name: cfg
> > Sep 21 17:07:20 iron corosync[3427]: [SERV  ] Service engine loaded:
> > corosync cluster closed process group service v1.01 [2]
> > Sep 21 17:07:20 iron corosync[3427]: [QB] server name: cpg
> > Sep 21 17:07:20 iron corosync[3427]: [SERV  ] Service engine loaded:
> > corosync profile loading service [4]
> > Sep 21 17:07:20 iron corosync[3427]: [QUORUM] Using quorum provider
> > corosync_votequorum
> > Sep 21 17:07:20 iron corosync[3427]: [VOTEQ ] Waiting for all cluster
> > members. Current votes: 1 expected_votes: 2
> > Sep 21 17:07:20 iron corosync[3427]: [SERV  ] Service engine loaded:
> > corosync vote quorum service v1.0 [5]
> > Sep 21 17:07:20 iron corosync[3427]: [QB] server name: votequorum
> > Sep 21 17:07:20 iron corosync[3427]: [SERV  ] Service engine loaded:
> > corosync cluster quorum service v0.1 [3]
> > Sep 21 17:07:20 iron corosync[3427]: [QB] server name: quorum
> > Sep 21 17:07:20 iron corosync[3427]: [TOTEM ] adding new UDPU member
> > {10.100.30.38}
> > Sep 21 17:07:20 iron corosync[3427]: [TOTEM ] adding new UDPU member
> > {10.100.30.37}
> > Sep 21 17:07:20 iron corosync[3427]: [TOTEM ] A new membership
> > (10.100.30.38:104 ) was formed. Members joined: 1
> > Sep 21 17:07:20 iron corosync[3427]: [VOTEQ ] Waiting for all cluster
> > members. Current votes: 1 

Re: [Gluster-users] xattrs not supported?

2015-09-03 Thread Kaleb KEITHLEY
On 09/03/2015 02:07 AM, Jan Písačka wrote:
> Hi everyone,
> 
> is it normal to see the following sort of errors in the brick's logs?
> 
> 2015-09-02 21:26:40.808486] E [posix.c:1864:posix_create]
> 0-repsilo-posix: setting xattrs on
> /mnt/glusterRawL/CDB_data/10704/RAW_DATA/PCIE_ATCA_ADC_01.BOARD_12.CHANNEL_019.1.h5
> failed (Operation not supported)
> 
> The back-ends are xfs:
> 
>  ~ # grep glusterRaw /proc/mounts
> /dev/sda5 /mnt/glusterRawL xfs
> rw,noatime,nodiratime,attr2,delaylog,inode64,noquota 0 0
> /dev/sdb5 /mnt/glusterRawR xfs
> rw,noatime,nodiratime,attr2,delaylog,inode64,noquota 0 0
> 
> We are running glusterfs-3.4.7 on CentOS 6.5
> 

I'm not surprised. There have been bugs where glusterfsd was trying to
set invalid xattrs. (Gluster uses faux xattrs to communicate state
between xlators; they aren't always excluded when real xattrs are set.)
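A quick way to tell whether the brick filesystem rejects xattrs in general, as opposed to gluster requesting an invalid one, is to set a test attribute by hand. The brick path below is taken from the report; substitute your own, and note that gluster's own attributes live in the trusted.* namespace, which needs root:

```shell
# Probe xattr support on a brick filesystem by hand.
BRICK=/mnt/glusterRawL
probe=$(mktemp "$BRICK/xattr-probe.XXXXXX")
setfattr -n user.probe -v ok "$probe"
getfattr --only-values -n user.probe "$probe"   # prints "ok" if xattrs work
setfattr -x user.probe "$probe"                 # remove the test attribute
rm -f "$probe"
```

If the setfattr step fails with "Operation not supported", the problem is the mount options or filesystem, not gluster.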

--

Kaleb



Re: [Gluster-users] Gluster 3.7 for Ubuntu-12.04

2015-08-04 Thread Kaleb KEITHLEY
On 08/03/2015 10:07 PM, Prasun Gera wrote:
> Does the 3.7 client work for precise?
There is no 3.7 client for precise. The build is 'all or nothing'.

3.6 for precise ought to work with 3.7 servers. Try it and see.
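A hedged sketch of trying that on a precise client (the PPA name follows the community's usual Launchpad convention but is an assumption, as are the server and volume names):

```shell
# Assumptions: ppa:gluster/glusterfs-3.6 provides precise packages, and
# "server1:/myvol" is a volume served by 3.7 servers.
add-apt-repository ppa:gluster/glusterfs-3.6
apt-get update
apt-get install -y glusterfs-client
mkdir -p /mnt/myvol
mount -t glusterfs server1:/myvol /mnt/myvol
```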

It would be great if someone in the community wanted to step up and
figure out how to build a 'client only' release of 3.7 for precise or
any other distributions.

FWIW, the Gluster Community provides the following packages:

                            3.5  3.6  3.7
Fedora 21                    ¹    ×    ×
Fedora 22                    ×    ¹    ×
Fedora 23                    ×    ×    ¹
Fedora 24                    ²    ²    ¹
RHEL/CentOS 5                ×    ×
RHEL/CentOS 6                ×    ×    ×
RHEL/CentOS 7                ×    ×    ×
Ubuntu 12.04 LTS (precise)   ×    ×
Ubuntu 14.04 LTS (trusty)    ×    ×    ×
Ubuntu 14.10 (utopic)        ×    ×    ³
Ubuntu 15.04 (vivid)              ×    ×
Debian 7 (wheezy)            ×    ×
Debian 8 (jessie)            ×    ×    ×
Debian 9 (stretch)           ×    ×    ×
SLES 11                      ×    ×
SLES 12                           ×    ×
OpenSuSE 13                  ×    ×    ×
RHELSA 7                               ×
(There are also NetBSD and maybe FreeBSD pkgs available.)

That's 44 sets of packages, not counting a few one-offs like Raspbian,
etc., and old, end-of-life 3.4 and 3.3 pkgs that are still available on
download.gluster.org.

As can be seen, a couple of the old distributions don't have pkgs of the
latest GlusterFS, usually due to dependencies that are too old or
missing — this is not unique to precise. Also some of the newer
distributions don't have pkgs of the oldest GlusterFS.


[1] In Fedora or Fedora Updates.
[2] Not now, but probably for the next releases.
[3] Has 3.7.2. Launchpad has stopped accepting updates for Utopic.

On Mon, Aug 3, 2015 at 6:14 PM, Kaleb Keithley kkeit...@redhat.com wrote:
> > From: John S bun...@gmail.com
> > Hi All,
> > Is there any gluster 3.7 version available for Ubuntu 12.04? Currently
> > we have all production/test servers running on Ubuntu 12.04. Saw
> > glusterfs-3.7 for Ubuntu 14.04, 14.10 and 15 versions. Please help.
>
> 3.7.x does not build on 12.04 (Precise). Some of the dependencies
> apparently either don't exist or are too old.
>
> As you noted, there are 3.7.x packages for 14.04 LTS (Trusty). If you
> absolutely need 3.7 then you'll need to update your servers.
>
> --
>
> Kaleb

Re: [Gluster-users] 3.5 Debian Wheezy packages?

2015-07-10 Thread Kaleb Keithley

Yup, that's about the gist of it. Community packages are built by volunteers. 


----- Original Message -----
 From: Paul Osborne paul.osbo...@canterbury.ac.uk
 To: gluster-users@gluster.org
 Sent: Friday, July 10, 2015 3:47:36 AM
 Subject: Re: [Gluster-users] 3.5 Debian Wheezy packages?
 
 Just noticed that 3.5.5 has been released; perhaps the LATEST build has
 not been run for Wheezy yet?
 
 
 Paul
 
 
 From: gluster-users-boun...@gluster.org on behalf of Osborne, Paul
 paul.osbo...@canterbury.ac.uk
 Sent: 10 July 2015 08:15
 To: gluster-users@gluster.org
 Subject: [Gluster-users] 3.5 Debian Wheezy packages?
 
 Hi,
 
 It appears that the Debian Wheezy (7) packages for 3.5 are no longer in the
 repository, where there are packages for Jessie (8).
 
 Is this deliberate?
 
 http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/Debian/wheezy/apt/dists/wheezy/main/binary-amd64/Packages
 
 I really hope that this is temporary as I am loath to move forward to
 Jessie at present due to its immaturity.
 
 Many thanks
 
 Paul
 
 Paul Osborne
 Senior Systems Engineer
 Canterbury Christ Church University
 Tel: 01227 782751
 


[Gluster-users] GlusterFS 3.7.2 RPMs for aarch64, Red Hat Linux Server for ARM

2015-06-30 Thread Kaleb KEITHLEY


In case anyone is interested, there are now aarch64 RPMs for RHELSA 7 at 
http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHELSA/


--

Kaleb


Re: [Gluster-users] Repos for EPEL 5

2015-06-16 Thread Kaleb KEITHLEY

On 06/16/2015 10:42 AM, Gene Liverman wrote:

I have servers set to pull from
http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-5Server/x86_64
yet when I go there and work back up the path to the EPEL.repo folder I
only see 6 & 7 now. Is this a mistake, or was support for EPEL 5 dropped?



Neither. LATEST now points at 3.7/3.7.1. 3.7.1 doesn't build on EPEL 5:
Python is too old and some other prerequisite packages are not available.


If you were running 3.6.x, then you need to pull from
.../glusterfs/3.6/LATEST/... where EPEL 5 is still supported.
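Pinning those hosts to the 3.6 tree could look roughly like this. The baseurl is derived from the LATEST path quoted above and should be verified against download.gluster.org before relying on it:

```shell
# Sketch: point EPEL 5 hosts at the 3.6 tree rather than LATEST (which
# now resolves to 3.7). Verify the URL on download.gluster.org first.
cat > /etc/yum.repos.d/glusterfs-36.repo <<'EOF'
[glusterfs-3.6-epel]
name=GlusterFS 3.6 community packages
baseurl=http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/EPEL.repo/epel-5Server/$basearch/
enabled=1
gpgcheck=0
EOF
yum clean metadata && yum update glusterfs
```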


--

Kaleb




Re: [Gluster-users] [Nfs-ganesha-devel] Questions on ganesha HA and shared storage size

2015-06-15 Thread Kaleb Keithley


----- Original Message -----
 From: Malahal Naineni mala...@us.ibm.com
...
 
 PS: there were some efforts to make ntirpc an rpm by itself. Not sure where
 that stands.
 

google[1] will tell you that libntirpc is in fact a stand-alone package in 
Fedora and EPEL, and as you can see at [2] it's even available for EPEL7

But note that nfs-ganesha in EPEL[67] is built with a) glusterfs-api-3.6.x from 
Red Hat's downstream glusterfs, and b) the bundled static version of 
ntirpc, not the shared lib in the stand-alone package above. If you're trying 
to use these packages with glusterfs-3.7.x then I guess I'm not too surprised 
if something isn't working. Look for nfs-ganesha packages built against 
glusterfs-3.7.x in the CentOS Storage SIG or watch for the same on 
download.gluster.org. They're not there yet, but they will be eventually.

Another thing to note is that the Fedora and EPEL builds of nfs-ganesha do not 
use the nfs-ganesha.spec.cmake.in spec file from the nfs-ganesha source tree. 
It is based on it, but it's not the same, for a number of reasons.

I'll look at the EPEL nfs-ganesha when I have time. I do have a $dayjob, which 
takes priority over wrangling the community bits in Fedora and EPEL. Your 
patience and understanding are appreciated.


[1] https://www.google.com/?gws_rd=ssl#q=fedora+koji+libntirpc
[2] http://koji.fedoraproject.org/koji/packageinfo?packageID=20199

--

Kaleb



Re: [Gluster-users] Support for SLES 11 SP3

2015-06-03 Thread Kaleb KEITHLEY

On 06/03/2015 03:34 AM, Morrison, Gerald wrote:

Hi all,

I do not find rpms for SLES 11 SP3 in the download section for 3.7
anymore. Does this mean there will be no packages for SLES 11 SP3 at
all? Will you continue to deliver patches for SLES 11 SP3 for version 3.6?


Hi,

SLES11SP3 doesn't have, or doesn't have new enough versions of several 
packages necessary to build 3.7.x.


I have every expectation that the Gluster Community will continue to 
provide 3.6.x packages for SLES11SP3.


--

Kaleb



Re: [Gluster-users] Glusterfs 3.7 Compile Error

2015-05-19 Thread Kaleb KEITHLEY

On 05/19/2015 10:17 AM, Mohamed Pakkeer wrote:

Hi GlusterFS experts,

I am trying to compile GlusterFS 3.7 on Ubuntu 14.04 and getting
following error

checking sys/acl.h usability... no
checking sys/acl.h presence... no
checking for sys/acl.h... no
configure: error: Support for POSIX ACLs is required
node001:~/glusterfs-3.7.0$

Ubuntu 14.04 enables the acl by default on root partition. I enabled acl
manually on root partition and still it is showing same error.


You don't have the libacl1-dev pkg installed.

% apt-get install libacl1-dev
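In case configure then stops on other missing headers, a reasonable starting set of build dependencies on Ubuntu 14.04 looks roughly like the following; the list is an assumption rather than the official 3.7 build documentation, and configure will name anything still missing:

```shell
# Common build dependencies for compiling GlusterFS from source on
# Ubuntu 14.04 (starting set, not exhaustive):
apt-get install -y build-essential flex bison pkg-config \
    libacl1-dev libssl-dev libxml2-dev libreadline-dev \
    liblvm2-dev libaio-dev python-dev uuid-dev
./configure
```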

--

Kaleb




Re: [Gluster-users] [Gluster-devel] Gluster 3.7.0 released

2015-05-15 Thread Kaleb Keithley



 From: Niels de Vos nde...@redhat.com
 On Fri, May 15, 2015 at 02:41:50AM +0100, Justin Clift wrote:
  On 14 May 2015, at 10:19, Vijay Bellur vbel...@redhat.com wrote:
   Hi All,
   
   I am happy to announce that Gluster 3.7.0 is now generally
  
  3.7.0 won't be packaged into Ubuntu LTS nor CentOS (sic) EPEL will
  it? (I'm meaning their official external repos, not
  download.gluster.org)
  
 The packages are not in Fedora EPEL (which is what CentOS uses too?).
 

Just to expand on Niels' reply a teeny bit: Fedora/EPEL policy prohibits 
shipping packages in EPEL when (another version of) the packages is already in 
RHEL.

RHEL ships the client-side Gluster packages.

--

Kaleb


[Gluster-users] Minutes of today's Gluster Community meeting

2015-04-15 Thread Kaleb KEITHLEY

On 04/15/2015 04:39 PM, Kaleb KEITHLEY wrote:

Hi All,

In about 20 minutes from now we will have the regular weekly Gluster
Community meeting.

Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 8:00 EDT, 12:00 UTC, 14:00 CEST, 17:30 IST
(in your terminal, run: date -d "12:00 UTC")
- agenda: available at [1]

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Gluster 3.6
* Gluster 3.5
* Gluster Next
* Open Floor
- bring your own topic!

The last topic has space for additions. If you have a suitable topic to
discuss, please add it to the agenda.



Thanks all who joined! Meeting minutes and logs are available in fancy
html format and plain-text from these URLs, but also included below for
improved searchability and reminding the people about their action
items.

Minutes: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-04-15/gluster-meeting.2015-04-15-12.01.html
Minutes (text): 
http://meetbot.fedoraproject.org/gluster-meeting/2015-04-15/gluster-meeting.2015-04-15-12.01.txt
Log: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-04-15/gluster-meeting.2015-04-15-12.01.log.html



[Gluster-users] REMINDER: Weekly Gluster Community meeting today at 12:00 UTC (~50 minutes from now)

2015-04-15 Thread Kaleb KEITHLEY

Hi All,

In about 20 minutes from now we will have the regular weekly Gluster 
Community meeting.


Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 8:00 EDT, 12:00 UTC, 14:00 CEST, 17:30 IST
   (in your terminal, run: date -d "12:00 UTC")
- agenda: available at [1]

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Gluster 3.6
* Gluster 3.5
* Gluster Next
* Open Floor
   - bring your own topic!

The last topic has space for additions. If you have a suitable topic to
discuss, please add it to the agenda.

Thanks,

--

Kaleb

[1] https://public.pad.fsfe.org/p/gluster-community-meetings

