[Gluster-users] Official word on gluster replace brick for 3.6?

2014-10-30 Thread B.K.Raghuram
I just wanted to check if 'gluster replace brick commit force' is
officially deprecated in 3.6? Is there any other way to do a planned
replace of just one of the bricks in a replica pair? Add/remove brick
requires that new bricks be added in replica-count multiples, which may not
always be available..

Thanks,
-Ram
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Official word on gluster replace brick for 3.6?

2014-10-30 Thread Kaushal M
'replace-brick commit force' is still going to be available. It's just
that data-migration for replace-brick is going to be removed. So
'replace-brick (start|stop)' are going to be deprecated.

To replace a brick in a replica set, you'll need to do a
'replace-brick commit force'. Self-healing will take care of filling
the new brick with data.
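A minimal sketch of the procedure Kaushal describes; the volume name and brick paths below are made-up examples, not taken from the thread:

```shell
# Swap an old brick for a new, empty one in a single step
# (volume "myvol" and brick paths are hypothetical):
gluster volume replace-brick myvol \
    server1:/bricks/old server1:/bricks/new commit force

# Trigger a full self-heal so the new brick is populated
# from the surviving replica:
gluster volume heal myvol full

# Check what still needs healing:
gluster volume heal myvol info
```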

~kaushal

On Thu, Oct 30, 2014 at 12:56 PM, B.K.Raghuram bkr...@gmail.com wrote:
 I just wanted to check if gluster replace brick commit force is officially
 deprecated in 3.6? Is there any other way to do a planned replace of just
 one of the bricks in a replica pair? Add/remove brick requires that new
 bricks be added in replica count multiples which may not be always
 available..

 Thanks,
 -Ram

 ___
 Gluster-devel mailing list
 gluster-de...@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-users] [Gluster-devel] Dependency issue while installing glusterfs-3.6beta with vdsm

2014-10-30 Thread Anders Blomdell
On 2014-10-28 20:33, Niels de Vos wrote:
 On Tue, Oct 28, 2014 at 05:52:38PM +0100, Anders Blomdell wrote:
 On 2014-10-28 17:30, Niels de Vos wrote:
 On Tue, Oct 28, 2014 at 08:42:00AM -0400, Kaleb S. KEITHLEY wrote:
 On 10/28/2014 07:48 AM, Darshan Narayana Murthy wrote:
 Hi,
 Installation of glusterfs-3.6beta with vdsm (vdsm-4.14.8.1-0.fc19.x86_64)
 fails on f19 & f20 because of dependency issues with qemu packages.

 I installed vdsm-4.14.8.1-0.fc19.x86_64 which installs 
 glusterfs-3.5.2-1.fc19.x86_64
 as dependency. Now when I try to update glusterfs by downloading rpms 
 from :
 http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.6.0beta3/Fedora/fedora-19/
 It fails with following error:

 Error: Package: 2:qemu-system-lm32-1.4.2-15.fc19.x86_64 (@updates)
Requires: libgfapi.so.0()(64bit)
Removing: glusterfs-api-3.5.2-1.fc19.x86_64 (@updates)
libgfapi.so.0()(64bit)
Updated By: glusterfs-api-3.6.0-0.5.beta3.fc19.x86_64 (/glusterfs-api-3.6.0-0.5.beta3.fc19.x86_64)
   libgfapi.so.7()(64bit)
Available: glusterfs-api-3.4.0-0.5.beta2.fc19.x86_64 (fedora)
libgfapi.so.0()(64bit)

 Full output at: http://ur1.ca/ikvk8

 For having snapshot and geo-rep management through ovirt, we need 
 glusterfs-3.6 to be
 installed with vdsm, which is currently failing.

 Can you please provide your suggestions to resolve this issue.

 Hi,

 Starting in 3.6 we have bumped the SO_VERSION of libgfapi.

 You need to install glusterfs-api-devel-3.6.0... first and build vdsm.

 But we are (or were) not planning to release glusterfs-3.6.0 on f19 and
 f20...

 Off hand I don't believe there's anything in glusterfs-api-3.6.0 that vdsm
 needs. vdsm with glusterfs-3.5.x on f19 and f20 should be okay.

 Is there something new in vdsm-4-14 that really needs glusterfs-3.6? If so
 we can revisit whether we release 3.6 to fedora 19 and 20.

 The chain of dependencies is like this:

   vdsm -> qemu -> libgfapi.so.0

 I think a rebuild of QEMU should be sufficient. I'm planning to put
 glusterfs-3.6 and rebuilds of related packages in a Fedora COPR. This
 would make it possible for Fedora users to move to 3.6 before they
 switch to Fedora 22.
 AFAICT the only difference between libgfapi.so.0 and libgfapi.so.7 are
 two added symbols (glfs_get_volfile, glfs_h_access) and __THROW on
 functions. Wouldn't it be possible to provide a compatibility libgfapi.so.0
 to ease migration?
 
 That is possible, sure. I think that rebuilding related packages is just
 easier, there are only a few needed. Users that would like to run 3.6
 before it is made available with Fedora 22 need to add a repository for
 the glusterfs-3.6 packages anyway, using the same repository to provide
 related packages is simple enough.
Except that you have to manually bump the version in those packages if yum
should automatically pick up the new version (just realized that tonight's
rebuild of qemu was useless, since the version is the same :-(, sigh).

I think a compat package would make the coupling between server and client
looser (i.e. one could run old clients on the same machine as a new server).

Due to limited time and a dependency on qemu on some of my testing machines,
I still have not been able to test 3.6.0beta3. A -compat package would have
helped me a lot (but maybe given you more bugs to fix :-)).

 
 But, if there is a strong interest in having a -compat package, we can
 discuss that during tomorrow's (Wednesday's) meeting.
Sorry that I missed the meeting (due to the DST change and not doing
date -d '12:00 UTC' [should be in the etherpad])

/Anders
-- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone:+46 46 222 4625
P.O. Box 118 Fax:  +46 46 138118
SE-221 00 Lund, Sweden



Re: [Gluster-users] Recomended underlying filesystem

2014-10-30 Thread Justin Clift
On Tue, 28 Oct 2014 11:35:49 -
John Hearns john.hea...@viglen.co.uk wrote:
<snip>
 Gluster on ZFS is of course very interesting.
 http://www.gluster.org/community/documentation/index.php/GlusterOnZFS
 
 I guess this must get discussed here a lot!
 Anyone care to comment on the roadmap for this?

As far as I understand, it's not possible to legally add ZFS to Linux
and then distribute it.  So, places are ok to do it themselves for their
own purposes, but it's not something that a support company could
provide to their end users then support.

So, GlusterFS with ZFS on Linux isn't as widespread as it otherwise
might be. People have been looking into GlusterFS on BtrFS as the
new-upcoming ZFS kinda-equivalent, but there's nothing even close to
official yet. :/

Does that help? :)

+ Justin

-- 
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift


Re: [Gluster-users] [Gluster-devel] Dependency issue while installing glusterfs-3.6beta with vdsm

2014-10-30 Thread Kaleb KEITHLEY

On 10/30/2014 04:36 AM, Anders Blomdell wrote:


I think a compat package would make the coupling between server and client
looser, (i.e. one could run old clients on the same machine as a new server).

Due to limited time and dependency on qemu on some of my testing machines, I
still have not been able to test 3.6.0beta3. A -compat package would have helped
me a lot (but maybe given you more bugs to fix :-)).


Hi,

Here's an experimental respin of 3.6.0beta3 with a -compat RPM.

http://koji.fedoraproject.org/koji/taskinfo?taskID=7981431

Please let us know how it works. The 3.6 release is coming very soon and 
if this works we'd like to include it in our Fedora and EPEL packages.


Thanks,

--

Kaleb



Re: [Gluster-users] Initial sync

2014-10-30 Thread Andreas Hollaus
Hi,

Thanks! Seems like an interesting document. Although I've read blogs about how
extended attributes are used as a change log, this seems like a more
comprehensive document.

I won't write directly to any brick. That's the reason I first have to create
a volume which consists of only one brick, until the other server is
available, and then add that second brick. I don't want to delay the file
system clients until the second server is available, hence the reason for
add-brick.

I guess that this procedure is only needed the first time the volume is
configured, right? If any of these bricks would fail later on, the change log
would keep track of all changes to the file system even though only one of
the bricks is available(?). After a restart, volume settings stored in the
configuration file would be accepted even though not all servers were up and
running yet at that time, wouldn't they?

Speaking about configuration files. When are these copied to each server?
If I create a volume which consists of two bricks, I guess that those servers
will create the configuration files, independently of each other, from the
information sent from the client (gluster volume create...).

In case I later on add a brick, I guess that the settings have to be copied
to the new brick after they have been modified on the first one, right (or
will they be recreated on all servers from the information specified by the
client, like in the previous case)?

Will configuration files be copied in other situations as well, for instance
in case one of the servers which is part of the volume for some reason would
be missing those files? In my case, the root file system is recreated from an
image at each reboot, so everything created in /etc will be lost. Will
GlusterFS settings be restored from the other server automatically or do I
need to back up and restore those myself? Even though the brick doesn't know
that it is part of a volume in case it loses the configuration files, both
the other server(s) and the client(s) will probably recognize it as being
part of the volume. I therefore believe that such a self-healing would
actually be possible, even though it may not be implemented.


Regards
Andreas
 

On 10/30/14 05:21, Ravishankar N wrote:
 On 10/28/2014 03:58 PM, Andreas Hollaus wrote:
 Hi,

 I'm curious about how GlusterFS manages to sync the bricks in the initial
 phase, when the volume is created or extended.

 I first create a volume consisting of only one brick, which clients will
 start to read and write. After a while I add a second brick to the volume
 to create a replicated volume.

 If this new brick is empty, I guess that files will be copied from the
 first brick to get the bricks in sync, right?

 However, if the second brick is not empty but rather contains a subset of
 the files on the first brick, I don't see how GlusterFS will solve the
 problem of syncing the bricks.

 I guess that all files which lack extended attributes could be removed in
 this scenario, because they were created when the disk was not part of a
 GlusterFS volume. However, in case the brick was used in the volume
 previously, for instance before that server restarted, there will be
 extended attributes for the files on the second brick which weren't updated
 during the downtime (when the volume consisted of only one brick). There
 could be multiple changes to the files during this time. In this case I
 don't understand how the extended attributes could be used to determine
 which of the bricks contains the most recent file.

 Can anyone explain how this works? Is it only allowed to add empty bricks
 to a volume?

  
 It is allowed to add only empty bricks to the volume. Writing directly to
 bricks is not supported. One needs to access the volume only from a mount
 point or using libgfapi.

 After adding a brick to increase the distribute count, you need to run the
 volume rebalance command so that some of the existing files are hashed
 (moved) to this newly added brick.

 After adding a brick to increase the replica count, you need to run the
 volume heal full command to sync the files from the other replica into the
 newly added brick.

 https://github.com/gluster/glusterfs/blob/master/doc/features/afr-v1.md
 will give you an idea of how the replicate translator uses xattrs to keep
 files in sync.

 HTH,
 Ravi
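The add-brick-then-heal sequence Ravi describes might look like this; the volume name, server, and brick path are hypothetical placeholders:

```shell
# Grow a single-brick volume into a 2-way replica, then sync the data
# ("myvol", "server2", and the brick path are made-up examples):
gluster volume add-brick myvol replica 2 server2:/bricks/myvol

# Copy existing files from the original brick to the new one:
gluster volume heal myvol full

# Monitor which files still need healing:
gluster volume heal myvol info
```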



[Gluster-users] Firewall ports with v 3.5.2 grumble time

2014-10-30 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
Hi,

I have a requirement to run my gluster hosts within a firewalled section of 
network and where the consumer hosts are in a different segment due to IP 
address preservation, part of our security policy requires that we run local 
firewalls on every host so I have to get the network access locked down 
appropriately.

I am running 3.5.2 using the packages provided in the Gluster package 
repository as my Linux distribution only includes packages for 3.2 which seems 
somewhat ancient.

Following the documentation here:
http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting

I opened up the relevant ports: 

34865 – 34867  for gluster
111 for the portmapper
24009 – 24012 as I am using 2 bricks

This though contradicts:

http://gluster.org/community/documentation/index.php/Gluster_3.2:_Installing_GlusterFS_on_Red_Hat_Package_Manager_(RPM)_Distributions

Which says:

Ensure that TCP ports 111, 24007, 24008, 24009-(24009 + number of bricks
across all volumes) are open on all Gluster servers. If you will be using
NFS, open additional ports 38465 to 38467.

What has not been helpful is that there was no mention of port: 2049 for NFS 
over TCP - which would have been helpful and probably my own mistake as I 
should have known.

To really confuse matters I noticed that the bricks were not syncing anyway, 
and a look at the logs reveals:

/var/log/glusterfs/glfsheal-www.log:[2014-10-30 07:39:48.428286] I 
[client-handshake.c:1462:client_setvolume_cbk] 0-www-client-1: Connected to 
111.222.333.444:49154, attached to remote volume '/srv/hod/lampe-www'.

along with other entries that show that I also actually need ports 49154 and
49155 open.

Even 'gluster volume status' reveals some of the ports:

gluster volume status
Status of volume: www
Gluster process PortOnline  Pid
--
Brick 194.82.210.140:/srv/hod/lampe-www 49154   Y   3035
Brick 194.82.210.130:/srv/hod/lampe-www 49155   Y   16160
NFS Server on localhost 2049Y   16062
Self-heal Daemon on localhost   N/A Y   16072
NFS Server on gfse-isr-01   2049Y   3040
Self-heal Daemon on gfse-isr-01 N/A Y   3045
 
Task Status of Volume www
--
There are no active volume tasks


So my query here is: if the bricks are actually using 49154 & 49155 (which
they appear to be), why is this not mentioned in the documentation, and are
there any other ports that I should be aware of?

Thanks

Paul
--

Paul Osborne
Senior Systems Engineer
Infrastructure Services
IT Department
Canterbury Christ Church University


Re: [Gluster-users] [Gluster-devel] Dependency issue while installing glusterfs-3.6beta with vdsm

2014-10-30 Thread Anders Blomdell
On 2014-10-30 14:52, Kaleb KEITHLEY wrote:
 On 10/30/2014 04:36 AM, Anders Blomdell wrote:

 I think a compat package would make the coupling between server and client
 looser, (i.e. one could run old clients on the same machine as a new server).

 Due to limited time and dependency on qemu on some of my testing machines, I
 still have not been able to test 3.6.0beta3. A -compat package would have 
 helped
 me a lot (but maybe given you more bugs to fix :-)).
 
 Hi,
 
 Here's an experimental respin of 3.6.0beta3 with a -compat RPM.
 
 http://koji.fedoraproject.org/koji/taskinfo?taskID=7981431
 
 Please let us know how it works. The 3.6 release is coming very soon and if 
 this works we'd like to include it in our Fedora and EPEL packages.
And I just got mine ready:  http://review.gluster.org/9014
I'll try to test it (any reference to a git patch?)

/Anders

-- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone:+46 46 222 4625
P.O. Box 118 Fax:  +46 46 138118
SE-221 00 Lund, Sweden



Re: [Gluster-users] RHEL6.6 provides Gluster 3.6.0-28.2 ?

2014-10-30 Thread Chad Feller

On 10/27/2014 07:05 AM, Prasun Gera wrote:
Just wanted to check if this issue was reproduced by RedHat. I don't 
see any updates on Satellite yet that resolve this.


To exacerbate this issue, CentOS 6.6 has now dropped, including
GlusterFS 3.6.0, making the problem more widespread.





[Gluster-users] More GlusterFS Developer companies added to Bitergia stats :D

2014-10-30 Thread Justin Clift
The Bitergia team have updated their GlusterFS Community stats page,
so it now shows more of the companies in our Community. :)

  
http://bitergia.com/projects/redhat-glusterfs-dashboard/browser/scm-companies.html

Previously it was:

 #1 Red Hat
 #2 IBM
 #3 CERN

Now it's:

  #1 Red Hat
  #2 DataLab
  #3 IBM
  #4 GoodData
  #5 Lunds University
  #6 Stepping Stone
  #7 CERN

That looks a lot better. :)

Thanks Xavi for prompting the update for this page, and thank you
everyone for all of your efforts in making GlusterFS better! :)

Regards and best wishes,

Justin Clift

-- 
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift


Re: [Gluster-users] [Gluster-devel] Official word on gluster replace brick for 3.6?

2014-10-30 Thread Joe Julian
Which means, of course, no redundancy until that self-heal is completed.

Furthermore, replace-brick start stopped working altogether some versions
ago, so the removal of start and stop may as well just happen.

On October 30, 2014 12:41:17 AM PDT, Kaushal M kshlms...@gmail.com wrote:
'replace-brick commit force' is still going to be available. It's just
that data-migration for replace-brick is going to be removed. So
'replace-brick (start|stop)' are going to be deprecated.

To replace a brick in a replica set, you'll need to do a
'replace-brick commit force'. Self-healing will take care of filling
the new brick with data.

~kaushal

On Thu, Oct 30, 2014 at 12:56 PM, B.K.Raghuram bkr...@gmail.com wrote:
 I just wanted to check if gluster replace brick commit force is officially
 deprecated in 3.6? Is there any other way to do a planned replace of just
 one of the bricks in a replica pair? Add/remove brick requires that new
 bricks be added in replica count multiples which may not be always
 available..

 Thanks,
 -Ram

 ___
 Gluster-devel mailing list
 gluster-de...@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel


-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: [Gluster-users] [Gluster-devel] Dependency issue while installing glusterfs-3.6beta with vdsm

2014-10-30 Thread Anders Blomdell
On 2014-10-30 14:52, Kaleb KEITHLEY wrote:
 On 10/30/2014 04:36 AM, Anders Blomdell wrote:

 I think a compat package would make the coupling between server and client
 looser, (i.e. one could run old clients on the same machine as a new server).

 Due to limited time and dependency on qemu on some of my testing machines, I
 still have not been able to test 3.6.0beta3. A -compat package would have 
 helped
 me a lot (but maybe given you more bugs to fix :-)).
 
 Hi,
 
 Here's an experimental respin of 3.6.0beta3 with a -compat RPM.
 
 http://koji.fedoraproject.org/koji/taskinfo?taskID=7981431
 
 Please let us know how it works. The 3.6 release is coming very soon and if 
 this works we'd like to include it in our Fedora and EPEL packages.
Nope, it does not work, since running /usr/lib/rpm/find-provides
(or /usr/lib/rpm/redhat/find-provides) on the symlink does not
yield the proper provides [which for my system should be
libgfapi.so.0()(64bit)]. So no cigar :-(
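A quick way to check what a candidate -compat package actually exports is to query its provides directly; the RPM file name below is a hypothetical example:

```shell
# Inspect the automatically generated provides of the compat RPM
# (the package file name here is a made-up placeholder):
rpm -qp --provides glusterfs-api-compat-3.6.0-0.5.beta3.fc19.x86_64.rpm \
    | grep libgfapi
# A usable compat package should list: libgfapi.so.0()(64bit)
```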

Have you checked my more heavyhanded http://review.gluster.org/9014 ?

/Anders

-- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone:+46 46 222 4625
P.O. Box 118 Fax:  +46 46 138118
SE-221 00 Lund, Sweden



Re: [Gluster-users] Firewall ports with v 3.5.2 grumble time

2014-10-30 Thread Todd Stansell
This is because in 3.4, they changed the brick port range.  It's mentioned
on
https://forge.gluster.org/gluster-docs-project/pages/GlusterFS_34_Release_Notes:

  Brick ports will now listen from 49152 onwards (instead of 24009 onwards
as with previous releases). The brick port assignment scheme is now
compliant with IANA guidelines.

Sadly, in my experience it is very difficult to find what you need in the
Gluster documentation.

Todd

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Osborne, Paul
(paul.osbo...@canterbury.ac.uk)
Sent: Thursday, October 30, 2014 6:59 AM
To: gluster-users@gluster.org
Subject: [Gluster-users] Firewall ports with v 3.5.2 grumble time




Re: [Gluster-users] configure hostname/ip for bricks

2014-10-30 Thread Harshavardhana
On Wed, Oct 29, 2014 at 1:18 PM, craig w codecr...@gmail.com wrote:
 I am trying to run a GlusterFS server in a Docker container. It works fine,
 the problem I have is when I create a volume it's associated with the
 private IP address of the container, which is not accessible to other hosts
 in the network.

 If I start the docker container using --net host, I can work around the
 issue, however, I was wondering if there was a configuration option I could
 set that would specify the hostname/ip to use when creating volumes.


--net host is the valid way to do things for docker; docker by itself
is not multi-node aware.

So --net host is a cheap hack to get around the problem; you should
use Kubernetes and see if it fixes the problem.

There are other solutions produced by a few people using openvswitch
etc., but none are known to be quite stable.
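For reference, the two approaches differ only in which network namespace glusterd binds; the image and container names below are placeholders:

```shell
# Default: container gets a private network namespace, so bricks
# register with an IP other hosts cannot reach
# (image name "some/glusterfs-image" is a made-up placeholder):
docker run -d --name gluster1 some/glusterfs-image

# Host networking: glusterd binds the host's own interfaces, so
# volumes are created with addresses reachable from other hosts:
docker run -d --name gluster1 --net host some/glusterfs-image
```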

-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes


Re: [Gluster-users] Firewall ports with v 3.5.2 grumble time

2014-10-30 Thread Jeremy Young
Hi Paul,

I will agree from experience that finding accurate, up-to-date
documentation on how to do some basic configuration of a Gluster volume can
be difficult.  However, this blog post mentions the updated firewall ports.

http://www.jamescoyle.net/how-to/457-glusterfs-firewall-rules

Get rid of 24009-24012 in your firewall configuration and replace them with
49152-4915X.  If you don't actually need NFS, you can exclude the 3486X
ports that you've opened as well.
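For example, with firewalld the updated rules might look like this; the brick count is an assumption, and since 3.4 each brick listens on its own port counting up from 49152:

```shell
# Management ports stay at 24007-24008; brick ports start at 49152
# (one per brick, IANA-compliant range since GlusterFS 3.4).
BRICKS=2                          # assumed bricks on this server
BASE=49152
LAST=$((BASE + BRICKS - 1))       # 2 bricks -> 49152-49153
echo "open 24007-24008/tcp and ${BASE}-${LAST}/tcp"

# Hypothetical firewalld invocation (run on every Gluster server):
# firewall-cmd --permanent --add-port=24007-24008/tcp
# firewall-cmd --permanent --add-port=${BASE}-${LAST}/tcp
# firewall-cmd --reload
```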


From: gluster-users-boun...@gluster.org gluster-users-boun...@gluster.org
on behalf of Osborne, Paul (paul.osbo...@canterbury.ac.uk) 
paul.osbo...@canterbury.ac.uk
Sent: Thursday, October 30, 2014 8:58 AM
To: gluster-users@gluster.org
Subject: [Gluster-users] Firewall ports with v 3.5.2 grumble time


-- 
Jeremy Young jrm16...@gmail.com, M.S., RHCSA

[Gluster-users] Gluster Night Paris 2014

2014-10-30 Thread Dave McAllister

Hey folks

We're having a Gluster event in Paris on 4-November, to get together and 
celebrate a number of new technologies, including the GlusterFS 3.6.0 
release.


Sign up here: 
http://www.eventbrite.com/e/inscription-gluster-night-paris-10413255327?aff=es2&rank=0


Our agenda is pretty packed:

Event opens at 18:00

18:00 Drinks, snacks and chats

18:15 : Overview of 3.6.0 new features

18:25 : Manila-GlusterFS

18:45 : Cinder-GlusterFS

19:05 : Swift-GlusterFS

19:30 : Socializing, Networking, Questions

20:00 Event ends

We look forward to seeing you next week.

Sign up:
http://www.eventbrite.com/e/inscription-gluster-night-paris-10413255327?aff=es2&rank=0


davemc




___
Announce mailing list
annou...@gluster.org
http://supercolony.gluster.org/mailman/listinfo/announce


Re: [Gluster-users] Firewall ports with v 3.5.2 grumble time

2014-10-30 Thread Joe Julian

https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_settingup_clients.md#installing-on-red-hat-package-manager-rpm-distributions
(also restated again under "Installing on Debian-based Distributions")

Any thoughts on what would make this any easier to find?

On 10/30/2014 11:21 AM, Todd Stansell wrote:

This is because in 3.4, they changed the brick port range.  It's mentioned
on
https://forge.gluster.org/gluster-docs-project/pages/GlusterFS_34_Release_Notes:

   Brick ports will now listen from 49152 onwards (instead of 24009 onwards
as with previous releases). The brick port assignment scheme is now
compliant with IANA guidelines.

Sadly, documentation for gluster is very difficult to find what you need, in
my experience.

Todd

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Osborne, Paul
(paul.osbo...@canterbury.ac.uk)
Sent: Thursday, October 30, 2014 6:59 AM
To: gluster-users@gluster.org
Subject: [Gluster-users] Firewall ports with v 3.5.2 grumble time

Hi,

I have a requirement to run my gluster hosts within a firewalled section of
network and where the consumer hosts are in a different segment due to IP
address preservation, part of our security policy requires that we run local
firewalls on every host so I have to get the network access locked down
appropriately.

I am running 3.5.2 using the packages provided in the Gluster package
repository, as my Linux distribution only includes packages for 3.2, which
seems somewhat ancient.

Following the documentation here:
http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting

I opened up the relevant ports:

34865 - 34867  for gluster
111 for the portmapper
24009 - 24012 as I am using 2 bricks

This though contradicts:

http://gluster.org/community/documentation/index.php/Gluster_3.2:_Installing_GlusterFS_on_Red_Hat_Package_Manager_(RPM)_Distributions

Which says:

Ensure that TCP ports 111, 24007, 24008, 24009-(24009 + number of bricks
across all volumes) are open on all Gluster servers. If you will be using
NFS, open additional ports 38465 to 38467

What has not been helpful is that there was no mention of port 2049 for NFS
over TCP, which would have been useful to know; probably my own mistake, as I
should have known.

To really confuse matters I noticed that the bricks were not syncing anyway,
and a look at the logs reveals:

/var/log/glusterfs/glfsheal-www.log:[2014-10-30 07:39:48.428286] I
[client-handshake.c:1462:client_setvolume_cbk] 0-www-client-1: Connected to
111.222.333.444:49154, attached to remote volume '/srv/hod/lampe-www'.

along with other entries that show that I also need ports 49154 and 49155
open.

Even gluster volume status reveals some of the ports:

gluster volume status
Status of volume: www
Gluster process                          Port   Online  Pid
------------------------------------------------------------------
Brick 194.82.210.140:/srv/hod/lampe-www  49154  Y       3035
Brick 194.82.210.130:/srv/hod/lampe-www  49155  Y       16160
NFS Server on localhost                  2049   Y       16062
Self-heal Daemon on localhost            N/A    Y       16072
NFS Server on gfse-isr-01                2049   Y       3040
Self-heal Daemon on gfse-isr-01          N/A    Y       3045

Task Status of Volume www
------------------------------------------------------------------
There are no active volume tasks


So my query here is: if the bricks are actually using 49154 and 49155 (which
they appear to be), why is this not mentioned in the documentation, and are
there any other ports that I should be aware of?

Thanks

Paul
--

Paul Osborne
Senior Systems Engineer
Infrastructure Services
IT Department
Canterbury Christ Church University
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Dependency issue while installing glusterfs-3.6beta with vdsm

2014-10-30 Thread Kaleb KEITHLEY

On 10/30/2014 01:50 PM, Anders Blomdell wrote:

On 2014-10-30 14:52, Kaleb KEITHLEY wrote:

On 10/30/2014 04:36 AM, Anders Blomdell wrote:


I think a compat package would make the coupling between server and client
looser, (i.e. one could run old clients on the same machine as a new server).

Due to limited time and dependency on qemu on some of my testing machines, I
still have not been able to test 3.6.0beta3. A -compat package would have helped
me a lot (but maybe given you more bugs to fix :-)).


Hi,

Here's an experimental respin of 3.6.0beta3 with a -compat RPM.

http://koji.fedoraproject.org/koji/taskinfo?taskID=7981431

Please let us know how it works. The 3.6 release is coming very soon and if 
this works we'd like to include it in our Fedora and EPEL packages.

Nope, does not work, since running /usr/lib/rpm/find-provides
(or /usr/lib/rpm/redhat/find-provides) on the symlink does not
yield the proper provides [which for my system should be
libgfapi.so.0()(64bit)]. So no cigar :-(


Hi,

1) I erred on the symlink in the -compat RPM. It should have been
/usr/lib64/libgfapi.so.0 -> libgfapi.so.7(.0.0).


2) find-provides is just a wrapper that greps the SO_NAME from the 
shared lib. And if you pass symlinks such as /usr/lib64/libgfapi.so.7 or 
/usr/lib64/libgfapi.so.0 to it, they both return the same result, i.e. 
the null string. The DSO run-time does not check that the SO_NAME matches.


I have a revised set of rpms with a correct symlink available 
http://koji.fedoraproject.org/koji/taskinfo?taskID=7984220. The main 
test (that I'm interested in) is whether qemu built against 3.5.x works 
with it or not.




Have you checked my more heavyhanded http://review.gluster.org/9014 ?


I have. A) it's, well, heavy handed ;-) mainly due to, B) there's a lot of
duplicated code for no real purpose, and C) for whatever reason it's not
making it through our smoke and regression tests (although I can't
imagine how a new and otherwise unused library would break those.)


If it comes to it, I personally would rather take a different route and 
use versioned symbols in the library and not bump the SO_NAME. Because 
the old APIs are unchanged and all we've done is add new APIs.


But we're running out of time for the 3.6 release (which is already 
months overdue.)


I don't know if anyone here looked at doing versioned symbols before we
made the decision to bump the SO_NAME. I've looked at Ulrich Drepper's
https://software.intel.com/sites/default/files/m/a/1/e/dsohowto.pdf
write-up and it's not very hard.
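For what the versioned-symbols approach might look like, here is a rough sketch of a linker version script; the symbol names and version tags are assumptions for illustration, not Gluster's actual ABI:

```text
/* gfapi.map -- hypothetical linker version script for libgfapi */
GFAPI_3.4.0 {
    global:
        glfs_new;
        glfs_init;
        glfs_fini;
    local:
        *;
};

/* New 3.6 entry points get their own version node; the old symbols keep
   theirs, so the SO_NAME could stay libgfapi.so.0. */
GFAPI_3.6.0 {
    global:
        glfs_h_lookupat;
} GFAPI_3.4.0;
```

The library would then be linked with something like `gcc -shared -Wl,--version-script=gfapi.map ...`, and existing binaries that reference the unversioned symbols would keep working.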


--

Kaleb


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Dependency issue while installing glusterfs-3.6beta with vdsm

2014-10-30 Thread Anders Blomdell
On 2014-10-30 20:55, Kaleb KEITHLEY wrote:
 On 10/30/2014 01:50 PM, Anders Blomdell wrote:
 On 2014-10-30 14:52, Kaleb KEITHLEY wrote:
 On 10/30/2014 04:36 AM, Anders Blomdell wrote:
 
 I think a compat package would make the coupling between server
 and client looser, (i.e. one could run old clients on the same
 machine as a new server).
 
 Due to limited time and dependency on qemu on some of my
 testing machines, I still have not been able to test
 3.6.0beta3. A -compat package would have helped me a lot (but
 maybe given you more bugs to fix :-)).
 
 Hi,
 
 Here's an experimental respin of 3.6.0beta3 with a -compat RPM.
 
 http://koji.fedoraproject.org/koji/taskinfo?taskID=7981431
 
 Please let us know how it works. The 3.6 release is coming very
 soon and if this works we'd like to include it in our Fedora and
 EPEL packages.
 Nope, does not work, since the running usr/lib/rpm/find-provides 
 (or /usr/lib/rpm/redhat/find-provides) on the symlink does not 
 yield the proper provides [which for my system should be 
 libgfapi.so.0()(64bit)]. So no cigar :-(
 
 Hi,
 
 1) I erred on the symlink in the -compat RPM. It should have been
 /usr/lib64/libgfapi.so.0 - libgfapi.so.7(.0.0).
Noticed that, not the main problem though :-)

 2) find-provides is just a wrapper that greps the SO_NAME from the 
 shared lib. And if you pass symlinks such as
 /usr/lib64/libgfapi.so.7 or /usr/lib64/libgfapi.so.0 to it, they both
 return the same result, i.e. the null string. The DSO run-time does
 not check that the SO_NAME matches.

No, but yum checks for libgfapi.so.0()(64bit) / libgfapi.so.0, so
i think something like this is needed for yum to cope with upgrades.

%ifarch x86_64
Provides: libgfapi.so.0()(64bit)
%else
Provides: libgfapi.so.0
%endif
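Combining that with the symlink, a -compat subpackage might look roughly like the following spec fragment (a sketch only; the package name, library version, and paths are assumptions):

```text
%package api-compat
Summary: Compatibility symlink for binaries linked against libgfapi.so.0
%ifarch x86_64
Provides: libgfapi.so.0()(64bit)
%else
Provides: libgfapi.so.0
%endif

# in %install: create the compat symlink next to the real library
ln -sf libgfapi.so.7.0.0 %{buildroot}%{_libdir}/libgfapi.so.0

%files api-compat
%{_libdir}/libgfapi.so.0
```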


 I have a revised set of rpms with a correct symlink available
 http://koji.fedoraproject.org/koji/taskinfo?taskID=7984220. The main
 test (that I'm interested in) is whether qemu built against 3.5.x
 works with it or not.
First thing is to get a yum upgrade to succeed.

 Have you checked my more heavyhanded http://review.gluster.org/9014
 ?
 
 I have. A) it's, well, heavy handed ;-) mainly due to, 
 B) there's lot of duplicated code for no real purpose, and 
Agreed, quick fix to avoid soname hell (and me being unsure of
what problems __THROW could give rise to).

 C) for whatever reason it's not making it through our smoke and
 regression tests (although I can't imagine how a new and otherwise
 unused library would break those.)
Me neither, but I'm good at getting smoke :-)

 If it comes to it, I personally would rather take a different route
 and use versioned symbols in the library and not bump the SO_NAME.
 Because the old APIs are unchanged and all we've done is add new
 APIs.
I guess we have passed that point, since 3.6.0 is out in the wild (RHEL),
and there is no way to bump down the version number.


 But we're running out of time for the 3.6 release (which is already
 months overdue.)
 
 I don't know if anyone here looked at doing versioned symbols before
 we made the decision to bump the SO_NAME. I've looked at Ulrich
 Dreppers
 https://software.intel.com/sites/default/files/m/a/1/e/dsohowto.pdf
 write-up and it's not very hard.
Too late now, though?

-- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone:+46 46 222 4625
P.O. Box 118 Fax:  +46 46 138118
SE-221 00 Lund, Sweden

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Dependency issue while installing glusterfs-3.6beta with vdsm

2014-10-30 Thread Kaleb KEITHLEY

On 10/30/2014 04:34 PM, Anders Blomdell wrote:

On 2014-10-30 20:55, Kaleb KEITHLEY wrote:

On 10/30/2014 01:50 PM, Anders Blomdell wrote:

On 2014-10-30 14:52, Kaleb KEITHLEY wrote:

On 10/30/2014 04:36 AM, Anders Blomdell wrote:


I think a compat package would make the coupling between server
and client looser, (i.e. one could run old clients on the same
machine as a new server).

Due to limited time and dependency on qemu on some of my
testing machines, I still have not been able to test
3.6.0beta3. A -compat package would have helped me a lot (but
maybe given you more bugs to fix :-)).


Hi,

Here's an experimental respin of 3.6.0beta3 with a -compat RPM.

http://koji.fedoraproject.org/koji/taskinfo?taskID=7981431

Please let us know how it works. The 3.6 release is coming very
soon and if this works we'd like to include it in our Fedora and
EPEL packages.

Nope, does not work, since the running usr/lib/rpm/find-provides
(or /usr/lib/rpm/redhat/find-provides) on the symlink does not
yield the proper provides [which for my system should be
libgfapi.so.0()(64bit)]. So no cigar :-(


Hi,

1) I erred on the symlink in the -compat RPM. It should have been
/usr/lib64/libgfapi.so.0 - libgfapi.so.7(.0.0).

Noticed that, not the main problem though :-)


2) find-provides is just a wrapper that greps the SO_NAME from the
shared lib. And if you pass symlinks such as
/usr/lib64/libgfapi.so.7 or /usr/lib64/libgfapi.so.0 to it, they both
return the same result, i.e. the null string. The DSO run-time does
not check that the SO_NAME matches.


No, but yum checks for libgfapi.so.0()(64bit) / libgfapi.so.0, so
i think something like this is needed for yum to cope with upgrades.

%ifarch x86_64
Provides: libgfapi.so.0()(64bit)
%else
Provides: libgfapi.so.0
%endif



That's already in the glusterfs-api-compat RPM that I sent you. The 
64-bit part anyway. Yes, a complete fix would include the 32-bit too.






I have a revised set of rpms with a correct symlink available
http://koji.fedoraproject.org/koji/taskinfo?taskID=7984220. The main
test (that I'm interested in) is whether qemu built against 3.5.x
works with it or not.

First thing is to get a yum upgrade to succeed.


What was the error?





Have you checked my more heavyhanded http://review.gluster.org/9014
?


I have. A) it's, well, heavy handed ;-) mainly due to,
B) there's lot of duplicated code for no real purpose, and

Agreed, quick fix to avoid soname hell (and me being unsure of
what problems __THROW could give rise to).


In C, that's a no-op. In C++, it tells the compiler that the function 
does not throw exceptions and can optimize accordingly.





C) for whatever reason it's not making it through our smoke and
regression tests (although I can't imagine how a new and otherwise
unused library would break those.)

Me neither, but I'm good at getting smoke :-)


If it comes to it, I personally would rather take a different route
and use versioned symbols in the library and not bump the SO_NAME.
Because the old APIs are unchanged and all we've done is add new
APIs.

I guess we have passed that point since 3.6.0 is out in the wild (RHEL),
and no way to bump down the version number.


That's RHS-Gluster, not community gluster. There's been some discussion 
of not packaging 3.6.0 and releasing and packaging 3.6.1 in short order. 
We might have a small window of opportunity. (Because there's never time 
to do it right the first time, but there's always time to do it over. ;-)






But we're running out of time for the 3.6 release (which is already
months overdue.)

I don't know if anyone here looked at doing versioned symbols before
we made the decision to bump the SO_NAME. I've looked at Ulrich
Dreppers
https://software.intel.com/sites/default/files/m/a/1/e/dsohowto.pdf
write-up and it's not very hard.

Too late now, though?


Perhaps not.

--

Kaleb



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Dependency issue while installing glusterfs-3.6beta with vdsm

2014-10-30 Thread Anders Blomdell
On 2014-10-30 22:06, Kaleb KEITHLEY wrote:
 On 10/30/2014 04:34 PM, Anders Blomdell wrote:
 On 2014-10-30 20:55, Kaleb KEITHLEY wrote:
 On 10/30/2014 01:50 PM, Anders Blomdell wrote:
 On 2014-10-30 14:52, Kaleb KEITHLEY wrote:
 On 10/30/2014 04:36 AM, Anders Blomdell wrote:

 I think a compat package would make the coupling between server
 and client looser, (i.e. one could run old clients on the same
 machine as a new server).

 Due to limited time and dependency on qemu on some of my
 testing machines, I still have not been able to test
 3.6.0beta3. A -compat package would have helped me a lot (but
 maybe given you more bugs to fix :-)).

 Hi,

 Here's an experimental respin of 3.6.0beta3 with a -compat RPM.

 http://koji.fedoraproject.org/koji/taskinfo?taskID=7981431

 Please let us know how it works. The 3.6 release is coming very
 soon and if this works we'd like to include it in our Fedora and
 EPEL packages.
 Nope, does not work, since the running usr/lib/rpm/find-provides
 (or /usr/lib/rpm/redhat/find-provides) on the symlink does not
 yield the proper provides [which for my system should be
 libgfapi.so.0()(64bit)]. So no cigar :-(

 Hi,

 1) I erred on the symlink in the -compat RPM. It should have been
 /usr/lib64/libgfapi.so.0 - libgfapi.so.7(.0.0).
 Noticed that, not the main problem though :-)

 2) find-provides is just a wrapper that greps the SO_NAME from the
 shared lib. And if you pass symlinks such as
 /usr/lib64/libgfapi.so.7 or /usr/lib64/libgfapi.so.0 to it, they both
 return the same result, i.e. the null string. The DSO run-time does
 not check that the SO_NAME matches.

 No, but yum checks for libgfapi.so.0()(64bit) / libgfapi.so.0, so
 i think something like this is needed for yum to cope with upgrades.

 %ifarch x86_64
 Provides: libgfapi.so.0()(64bit)
 %else
 Provides: libgfapi.so.0
 %endif
 
 
 That's already in the glusterfs-api-compat RPM that I sent you. The 64-bit 
 part anyway. Yes, a complete fix would include the 32-bit too.
I looked at/tried the wrong RPM

 


 I have a revised set of rpms with a correct symlink available
 http://koji.fedoraproject.org/koji/taskinfo?taskID=7984220. The main
 test (that I'm interested in) is whether qemu built against 3.5.x
 works with it or not.
 First thing is to get a yum upgrade to succeed.
 
 What was the error?
Me :-( (putting files in the wrong location)

Unfortunately hard to test: my libvirtd (1.1.3.6) seems to lack gluster
support (even though qemu is linked against libvirtd). Any recommended
version of libvirtd to compile?

 Have you checked my more heavyhanded http://review.gluster.org/9014
 ?

 I have. A) it's, well, heavy handed ;-) mainly due to,
 B) there's lot of duplicated code for no real purpose, and
 Agreed, quick fix to avoid soname hell (and me being unsure of
 what problems __THROW could give rise to).
 
 In C, that's a no-op. In C++, it tells the compiler that the function does 
 not throw exceptions and can optimize accordingly.
OK, no problems there then.

 

 C) for whatever reason it's not making it through our smoke and
 regression tests (although I can't imagine how a new and otherwise
 unused library would break those.)
 Me neither, but I'm good at getting smoke :-)

 If it comes to it, I personally would rather take a different route
 and use versioned symbols in the library and not bump the SO_NAME.
 Because the old APIs are unchanged and all we've done is add new
 APIs.
 I guess we have passed that point since 3.6.0 is out in the wild (RHEL),
 and no way to bump down the version number.
 
 That's RHS-Gluster, not community gluster. There's been some discussion of 
 not packaging 3.6.0 and releasing and packaging 3.6.1 in short order. We 
 might have a small window of opportunity. (Because there's never time to do 
 it right the first time, but there's always time to do it over. ;-)
 


 But we're running out of time for the 3.6 release (which is already
 months overdue.)
So is my testing, hope I'm not the bottleneck here :-)


 I don't know if anyone here looked at doing versioned symbols before
 we made the decision to bump the SO_NAME. I've looked at Ulrich
 Dreppers
 https://software.intel.com/sites/default/files/m/a/1/e/dsohowto.pdf
 write-up and it's not very hard.
 Too late now, though?
 
 Perhaps not.
+2



/Anders

-- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone:+46 46 222 4625
P.O. Box 118 Fax:  +46 46 138118
SE-221 00 Lund, Sweden

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-devel] Dependency issue while installing glusterfs-3.6beta with vdsm

2014-10-30 Thread Anders Blomdell
On 2014-10-30 22:44, Anders Blomdell wrote:
 On 2014-10-30 22:06, Kaleb KEITHLEY wrote:
 On 10/30/2014 04:34 PM, Anders Blomdell wrote:
 On 2014-10-30 20:55, Kaleb KEITHLEY wrote:
 On 10/30/2014 01:50 PM, Anders Blomdell wrote:
 On 2014-10-30 14:52, Kaleb KEITHLEY wrote:
 On 10/30/2014 04:36 AM, Anders Blomdell wrote:

 I think a compat package would make the coupling between server
 and client looser, (i.e. one could run old clients on the same
 machine as a new server).

 Due to limited time and dependency on qemu on some of my
 testing machines, I still have not been able to test
 3.6.0beta3. A -compat package would have helped me a lot (but
 maybe given you more bugs to fix :-)).

 Hi,

 Here's an experimental respin of 3.6.0beta3 with a -compat RPM.

 http://koji.fedoraproject.org/koji/taskinfo?taskID=7981431

 Please let us know how it works. The 3.6 release is coming very
 soon and if this works we'd like to include it in our Fedora and
 EPEL packages.
 Nope, does not work, since the running usr/lib/rpm/find-provides
 (or /usr/lib/rpm/redhat/find-provides) on the symlink does not
 yield the proper provides [which for my system should be
 libgfapi.so.0()(64bit)]. So no cigar :-(

 Hi,

 1) I erred on the symlink in the -compat RPM. It should have been
 /usr/lib64/libgfapi.so.0 - libgfapi.so.7(.0.0).
 Noticed that, not the main problem though :-)

 2) find-provides is just a wrapper that greps the SO_NAME from the
 shared lib. And if you pass symlinks such as
 /usr/lib64/libgfapi.so.7 or /usr/lib64/libgfapi.so.0 to it, they both
 return the same result, i.e. the null string. The DSO run-time does
 not check that the SO_NAME matches.

 No, but yum checks for libgfapi.so.0()(64bit) / libgfapi.so.0, so
 i think something like this is needed for yum to cope with upgrades.

 %ifarch x86_64
 Provides: libgfapi.so.0()(64bit)
 %else
 Provides: libgfapi.so.0
 %endif


 That's already in the glusterfs-api-compat RPM that I sent you. The 64-bit 
 part anyway. Yes, a complete fix would include the 32-bit too.
 A looked/tried at the wrong RPM
 



 I have a revised set of rpms with a correct symlink available
 http://koji.fedoraproject.org/koji/taskinfo?taskID=7984220. The main
 test (that I'm interested in) is whether qemu built against 3.5.x
 works with it or not.
 First thing is to get a yum upgrade to succeed.

 What was the error?
 Me :-( (putting files in the wrong location)
 
 Unfortunately hard to test, my libvirtd (1.1.3.6) seems to lack gluster 
 support
 (even though qemu is linked against libvirtd), any recommended version of 
 libvirtd to 
 compile?
With (srpms from fc21)

libvirt-client-1.2.9-3.fc20.x86_64
libvirt-daemon-1.2.9-3.fc20.x86_64
libvirt-daemon-driver-interface-1.2.9-3.fc20.x86_64
libvirt-daemon-driver-network-1.2.9-3.fc20.x86_64
libvirt-daemon-driver-nodedev-1.2.9-3.fc20.x86_64
libvirt-daemon-driver-nwfilter-1.2.9-3.fc20.x86_64
libvirt-daemon-driver-qemu-1.2.9-3.fc20.x86_64
libvirt-daemon-driver-secret-1.2.9-3.fc20.x86_64
libvirt-daemon-driver-storage-1.2.9-3.fc20.x86_64
libvirt-daemon-kvm-1.2.9-3.fc20.x86_64
libvirt-daemon-qemu-1.2.9-3.fc20.x86_64
libvirt-devel-1.2.9-3.fc20.x86_64
libvirt-docs-1.2.9-3.fc20.x86_64
libvirt-gconfig-0.1.7-2.fc20.x86_64
libvirt-glib-0.1.7-2.fc20.x86_64
libvirt-gobject-0.1.7-2.fc20.x86_64
libvirt-python-1.2.7-2.fc20.x86_64

I can create a gluster pool, but when trying to create an image I get the
error "Libvirt version does not support storage cloning".
Will continue tomorrow.
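For reference, a gluster-backed libvirt storage pool definition looks roughly like this (a sketch; the host and volume names are made up), loaded with `virsh pool-define`:

```xml
<pool type='gluster'>
  <name>gv0</name>
  <source>
    <host name='gluster.example.com'/>
    <dir path='/'/>
    <name>gv0</name>
  </source>
</pool>
```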

Qemus that do not touch gluster work OK, so the installation is OK right
now :-)


 
 Have you checked my more heavyhanded http://review.gluster.org/9014
 ?

 I have. A) it's, well, heavy handed ;-) mainly due to,
 B) there's lot of duplicated code for no real purpose, and
 Agreed, quick fix to avoid soname hell (and me being unsure of
 what problems __THROW could give rise to).

 In C, that's a no-op. In C++, it tells the compiler that the function does 
 not throw exceptions and can optimize accordingly.
 OK, no problems there then.
 


 C) for whatever reason it's not making it through our smoke and
 regression tests (although I can't imagine how a new and otherwise
 unused library would break those.)
 Me neither, but I'm good at getting smoke :-)

 If it comes to it, I personally would rather take a different route
 and use versioned symbols in the library and not bump the SO_NAME.
 Because the old APIs are unchanged and all we've done is add new
 APIs.
 I guess we have passed that point since 3.6.0 is out in the wild (RHEL),
 and no way to bump down the version number.

 That's RHS-Gluster, not community gluster. There's been some discussion of 
 not packaging 3.6.0 and releasing and packaging 3.6.1 in short order. We 
 might have a small window of opportunity. (Because there's never time to do 
 it right the first time, but there's always time to do it over. ;-)



 But we're running out of time for the 3.6 release (which is already
 months overdue.)
 So is my