[Gluster-users] GlusterFS storage driver deprecation in Kubernetes.

2022-08-11 Thread Humble Chirammal
Hey Gluster Community,

As you might be aware, there is an effort in the Kubernetes community to
remove in-tree storage plugins in order to reduce external dependencies and
security concerns in core Kubernetes. As part of this, we are gradually
deprecating all the in-tree external storage plugins and will eventually
remove them from the core Kubernetes codebase. GlusterFS was one of the very
first dynamic provisioners; it was made part of the Kubernetes v1.4 (2016)
release via https://github.com/kubernetes/kubernetes/pull/30888 . Since then,
many deployments have been making use of this driver to consume GlusterFS
volumes in Kubernetes/OpenShift clusters.

As part of this effort, we are planning to deprecate the GlusterFS in-tree
plugin in the 1.25 release and to remove the Heketi code from the Kubernetes
code base in a subsequent release. This code removal will not follow
Kubernetes' normal deprecation policy [1] and will be treated as an
exception [2]. The main reason for this exception is that Heketi is in
"Deep Maintenance" [3]; please also see [4] for the latest pushback from the
Heketi team on the changes we would need in order to keep vendoring Heketi
into kubernetes/kubernetes. We cannot keep Heketi in the Kubernetes code base
while Heketi itself is going away. The current plan is to declare the
deprecation in Kubernetes v1.25 and remove the code in v1.26.

If you are using the GlusterFS driver in your cluster setup, please reply
with the information below before 16-Aug-2022, either to the d...@kubernetes.io
ML on the thread "Deprecation of in-tree GlusterFS driver in 1.25" or to this
thread. Your input will help us decide when to completely remove this code
base from the repo.

- What version of Kubernetes are you running in your setup?

- How often do you upgrade your cluster?

- Which vendor or distro are you using? Is it a (downstream) product offering,
or is the upstream GlusterFS driver used directly in your setup?
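
If you are not sure whether a cluster still depends on the in-tree driver, a
quick check along these lines can help. This is only a rough sketch, not an
official procedure; the in-tree provisioner is named kubernetes.io/glusterfs,
and in-tree volumes show up under the glusterfs volume source on
PersistentVolumes:

# StorageClasses that still use the in-tree provisioner
kubectl get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner \
    | grep kubernetes.io/glusterfs

# PersistentVolumes backed by the in-tree GlusterFS volume source (rough count)
kubectl get pv -o json | grep -c '"glusterfs"'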

Awaiting your feedback.

Thanks,

Humble

[1] https://kubernetes.io/docs/reference/using-api/deprecation-policy/

[2] https://kubernetes.io/docs/reference/using-api/deprecation-policy/#exceptions

[3] https://github.com/heketi/heketi#maintenance-status

[4] https://github.com/heketi/heketi/pull/1904#issuecomment-1197100513
[5] https://github.com/kubernetes/kubernetes/issues/100897



Re: [Gluster-users] GlusterFS and Kafka

2017-05-25 Thread Humble Chirammal
Hi Christopher,

We are experimenting with a few other options to get rid of this issue. We
will provide an update as soon as we have it.
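
In the meantime, one interim workaround is to apply the options by hand on
the volume Heketi has already provisioned. This is only a sketch (it assumes a
Heketi-provisioned PVC and a GlusterFS pod running in the cluster; the PVC and
pod names are placeholders), reusing the performance.* volume-set commands
Vijay lists elsewhere in this thread:

# find the Gluster volume name behind the claim
PV=$(kubectl get pvc kafka-data-0 -o jsonpath='{.spec.volumeName}')
VOL=$(kubectl get pv "$PV" -o jsonpath='{.spec.glusterfs.path}')

# run the disable commands from inside one of the gluster pods
kubectl exec -it glusterfs-node-1 -- gluster volume set "$VOL" performance.write-behind off
kubectl exec -it glusterfs-node-1 -- gluster volume set "$VOL" performance.stat-prefetch off
# ...repeat for the remaining performance.* options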

On Thu, May 25, 2017 at 7:18 PM, Christopher Schmidt 
wrote:

> Hi Humble,
>
> thanks for that, it is really appreciated.
>
> In the meanwhile, using K8s 1.5, what can I do to disable the performance
> translator that doesn't work with Kafka? Maybe something while generating
> the Glusterfs container for Kubernetes?
>
> Best Christopher
>
> Humble Chirammal  wrote on Thu., 25 May 2017,
> 09:36:
>
>> On Thu, May 25, 2017 at 12:57 PM, Raghavendra Talur 
>> wrote:
>>
>>> On Thu, May 25, 2017 at 11:21 AM, Christopher Schmidt
>>>  wrote:
>>> > So this change of the Gluster Volume Plugin will make it into K8s 1.7
>>> or
>>> > 1.8. Unfortunately too late for me.
>>> >
>>> > Does anyone know how to disable performance translators by default?
>>>
>>> Humble,
>>>
>>> Do you know of any way Christopher can proceed here?
>>>
>>
>> I am trying to get it in 1.7 branch, will provide an update here as soon
>> as its available.
>>
>>>
>>> >
>>> >
>>> > Raghavendra Talur  wrote on Wed., 24 May 2017,
>>> 19:30:
>>> >>
>>> >> On Wed, May 24, 2017 at 4:08 PM, Christopher Schmidt <
>>> fakod...@gmail.com>
>>> >> wrote:
>>> >> >
>>> >> >
>>> >> > Vijay Bellur  wrote on Wed., 24 May 2017 at
>>> >> > 05:53:
>>> >> >>
>>> >> >> On Tue, May 23, 2017 at 1:39 AM, Christopher Schmidt
>>> >> >> 
>>> >> >> wrote:
>>> >> >>>
>>> >> >>> OK, seems that this works now.
>>> >> >>>
>>> >> >>> A couple of questions:
>>> >> >>> - What do you think, are all these options necessary for Kafka?
>>> >> >>
>>> >> >>
>>> >> >> I am not entirely certain what subset of options will make it work
>>> as I
>>> >> >> do
>>> >> >> not understand the nature of failure with  Kafka and the default
>>> >> >> gluster
>>> >> >> configuration. It certainly needs further analysis to identify the
>>> list
>>> >> >> of
>>> >> >> options necessary. Would it be possible for you to enable one
>>> option
>>> >> >> after
>>> >> >> the other and determine the configuration that ?
>>> >> >>
>>> >> >>
>>> >> >>>
>>> >> >>> - You wrote that there have to be kind of application profiles.
>>> So to
>>> >> >>> find out, which set of options work is currently a matter of
>>> testing
>>> >> >>> (and
>>> >> >>> hope)? Or are there any experiences for MongoDB / ProstgreSQL /
>>> >> >>> Zookeeper
>>> >> >>> etc.?
>>> >> >>
>>> >> >>
>>> >> >> Application profiles are work in progress. We have a few that are
>>> >> >> focused
>>> >> >> on use cases like VM storage, block storage etc. at the moment.
>>> >> >>
>>> >> >>>
>>> >> >>> - I am using Heketi and Dynamik Storage Provisioning together with
>>> >> >>> Kubernetes. Can I set this volume options somehow by default or by
>>> >> >>> volume
>>> >> >>> plugin?
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> Adding Raghavendra and Michael to help address this query.
>>> >> >
>>> >> >
>>> >> > For me it would be sufficient to disable some (or all) translators,
>>> for
>>> >> > all
>>> >> > volumes that'll be created, somewhere here:
>>> >> > https://github.com/gluster/gluster-containers/tree/master/CentOS
>>> >> > This is the container used by the GlusterFS DaemonSet for
>>> Kubernetes.
>>> >>
>>> >> Work is in progress to give such option at volume plugin level. We
>>> >> currently have a patch[1] in

Re: [Gluster-users] GlusterFS and Kafka

2017-05-25 Thread Humble Chirammal
On Thu, May 25, 2017 at 12:57 PM, Raghavendra Talur 
wrote:

> On Thu, May 25, 2017 at 11:21 AM, Christopher Schmidt
>  wrote:
> > So this change of the Gluster Volume Plugin will make it into K8s 1.7 or
> > 1.8. Unfortunately too late for me.
> >
> > Does anyone know how to disable performance translators by default?
>
> Humble,
>
> Do you know of any way Christopher can proceed here?
>

I am trying to get it into the 1.7 branch and will provide an update here as
soon as it's available.
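
For context on what that change would enable: once Heketi accepts volume
options at create time, a StorageClass can carry them so that every
dynamically provisioned volume comes up with the right settings. The sketch
below is only an illustration of that idea, not something that works today;
the parameter name (volumeoptions) and the resturl value are assumptions to be
checked against your Heketi and Kubernetes versions:

cat <<'EOF' | kubectl create -f -
apiVersion: storage.k8s.io/v1          # storage.k8s.io/v1beta1 on older clusters
kind: StorageClass
metadata:
  name: glusterfs-kafka
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"   # placeholder Heketi endpoint
  # hypothetical per-volume options; extend with the remaining performance.*
  # settings from Vijay's list as needed
  volumeoptions: "performance.quick-read off, performance.write-behind off"
EOF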

>
> >
> >
> > Raghavendra Talur  wrote on Wed., 24 May 2017,
> 19:30:
> >>
> >> On Wed, May 24, 2017 at 4:08 PM, Christopher Schmidt <
> fakod...@gmail.com>
> >> wrote:
> >> >
> >> >
> > > Vijay Bellur  wrote on Wed., 24 May 2017 at
> > > 05:53:
> >> >>
> >> >> On Tue, May 23, 2017 at 1:39 AM, Christopher Schmidt
> >> >> 
> >> >> wrote:
> >> >>>
> >> >>> OK, seems that this works now.
> >> >>>
> >> >>> A couple of questions:
> >> >>> - What do you think, are all these options necessary for Kafka?
> >> >>
> >> >>
> >> >> I am not entirely certain what subset of options will make it work
> as I
> >> >> do
> >> >> not understand the nature of failure with  Kafka and the default
> >> >> gluster
> >> >> configuration. It certainly needs further analysis to identify the
> list
> >> >> of
> >> >> options necessary. Would it be possible for you to enable one option
> >> >> after
> >> >> the other and determine the configuration that ?
> >> >>
> >> >>
> >> >>>
> >> >>> - You wrote that there have to be kind of application profiles. So
> to
> >> >>> find out, which set of options work is currently a matter of testing
> >> >>> (and
> >> >>> hope)? Or are there any experiences for MongoDB / ProstgreSQL /
> >> >>> Zookeeper
> >> >>> etc.?
> >> >>
> >> >>
> >> >> Application profiles are work in progress. We have a few that are
> >> >> focused
> >> >> on use cases like VM storage, block storage etc. at the moment.
> >> >>
> >> >>>
> >> >>> - I am using Heketi and Dynamik Storage Provisioning together with
> >> >>> Kubernetes. Can I set this volume options somehow by default or by
> >> >>> volume
> >> >>> plugin?
> >> >>
> >> >>
> >> >>
> >> >> Adding Raghavendra and Michael to help address this query.
> >> >
> >> >
> >> > For me it would be sufficient to disable some (or all) translators,
> for
> >> > all
> >> > volumes that'll be created, somewhere here:
> >> > https://github.com/gluster/gluster-containers/tree/master/CentOS
> >> > This is the container used by the GlusterFS DaemonSet for Kubernetes.
> >>
> >> Work is in progress to give such option at volume plugin level. We
> >> currently have a patch[1] in review for Heketi that allows users to
> >> set Gluster options using heketi-cli instead of going into a Gluster
> >> pod. Once this is in, we can add options in storage-class of
> >> Kubernetes that pass down Gluster options for every volume created in
> >> that storage-class.
> >>
> >> [1] https://github.com/heketi/heketi/pull/751
> >>
> >> Thanks,
> >> Raghavendra Talur
> >>
> >> >
> >> >>
> >> >>
> >> >> -Vijay
> >> >>
> >> >>
> >> >>
> >> >>>
> >> >>>
> >> >>> Thanks for you help... really appreciated.. Christopher
> >> >>>
> >> >>> Vijay Bellur  wrote on Mon., 22 May 2017 at
> >> >>> 16:41:
> >> 
> >>  Looks like a problem with caching. Can you please try by disabling
> >>  all
> >>  performance translators? The following configuration commands would
> >>  disable
> >>  performance translators in the gluster client stack:
> >> 
> >>  gluster volume set  performance.quick-read off
> >>  gluster volume set  performance.io-cache off
> >>  gluster volume set  performance.write-behind off
> >>  gluster volume set  performance.stat-prefetch off
> >>  gluster volume set  performance.read-ahead off
> >>  gluster volume set  performance.readdir-ahead off
> >>  gluster volume set  performance.open-behind off
> >>  gluster volume set  performance.client-io-threads off
> >> 
> >>  Thanks,
> >>  Vijay
> >> 
> >> 
> >> 
> >>  On Mon, May 22, 2017 at 9:46 AM, Christopher Schmidt
> >>   wrote:
> >> >
> >> > Hi all,
> >> >
> >> > has anyone ever successfully deployed a Kafka (Cluster) on
> GlusterFS
> >> > volumes?
> >> >
> >> > I my case it's a Kafka Kubernetes-StatefulSet and a Heketi
> >> > GlusterFS.
> >> > Needless to say that I am getting a lot of filesystem related
> >> > exceptions like this one:
> >> >
> >> > Failed to read `log header` from file channel
> >> > `sun.nio.ch.FileChannelImpl@67afa54a`. Expected to read 12 bytes,
> >> > but
> >> > reached end of file after reading 0 bytes. Started read from
> >> > position
> >> > 123065680.
> >> >
> >> > I limited the amount of exceptions with the
> >> > log.flush.interval.messages=1 option, but not all...
> >> >
> >> > best Christopher
> >> >
> >> >
> >> > ___

[Gluster-users] GlusterFS Containers with Docker, Kubernetes and Openshift

2016-03-23 Thread Humble Chirammal
Hi All,

I would like to provide a status update on the developments with GlusterFS
containers and their presence in projects like Docker, Kubernetes and
OpenShift.

We have containerized GlusterFS with CentOS and Fedora base images, and the
images are available on Docker Hub [1]. The Dockerfile of the image can be
found on GitHub [2]. You can pull the images with:

# docker pull gluster/gluster-centos
# docker pull gluster/gluster-fedora

The exact steps to run a GlusterFS container are described here [3]. We can
deploy GlusterFS pods in a Kubernetes environment, and an example blog about
this setup can be found here [4]. There is a GlusterFS volume plugin available
in Kubernetes and OpenShift v3 which provides Persistent Volumes to containers
in the environment; how to use GlusterFS containers for Persistent Volumes and
Persistent Volume Claims in OpenShift has been recorded at [5].

[1] https://hub.docker.com/r/gluster/
[2] https://github.com/gluster/docker/
[3] http://tinyurl.com/jupgene
[4] http://tinyurl.com/zsrz36y
[5] http://tinyurl.com/hne8g7o

Please let us know if you have any comments/suggestions/feedback.
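
For anyone who wants to try the image quickly, a minimal run sketch follows.
It assumes host networking and privileged mode, which the container typically
needs in order to run glusterd; the exact bind mounts for persistent
configuration, bricks and logs are described in the steps at [3]:

# docker run -d --name gluster --privileged --net=host gluster/gluster-centos
# docker exec -it gluster gluster --version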


--
Cheers,
Humble

Senior Software  Engineer
RH India


[Gluster-users] Gluster Conference at NMAMIT, Nitte on April 11th 2015.

2015-03-26 Thread Humble Chirammal

Hi All,

We are conducting a Gluster Conference at NMAMIT, Nitte (Mangalore) on April
11th, 2015.

We are expecting Engineering/MCA students and faculty from nearby colleges to
attend the conference.

The conference will mainly focus on Open Source Software Defined Storage with
GlusterFS as an example. It will include a deep dive into GlusterFS followed
by a workshop on GlusterFS development, and will be a great opportunity for
the students and faculty to interact with the Gluster community and exchange
ideas/thoughts.

Please find the tentative schedule,

Event Schedule:
~~~

-
Inauguration: 9.30 AM - 10.00 AM
-
-
Keynote Session : Introduction to Opensource
-
Speaker : Niels De Vos
Time: 10 AM - 10.45 AM
Duration: 45 mins
This session will give an introduction to Open Source. It will focus on the
what, why and how of Open Source, the advantages of Open Source, and how
important Open Source has become in the current software era. The talk will
also cover the different tools used to develop/collaborate/review/maintain/test
open source software.

~~~
Tea/Coffee/Snacks Break 15 mins
~~~

-
Technical Talk  : Software Defined Storage : GlusterFS as an example
-
Speaker : Dan Lambright
Time: 11 AM - 11.45 AM
Duration: 45 mins
This talk will cover system storage as a whole and the different types of
storage and storage protocols. It will emphasize Software Defined Storage and
how disruptive it has become, its advantages, and why open source Software
Defined Storage is the most disruptive and hence the most desired. It will
showcase GlusterFS as an example of Open Source Software Defined Storage.


--
Lightning Talk : Quality of Service in Storage Systems
--
Speakers: Karthik U.S and Sukumar Poojary [1]
Time: 11.45 AM - 12.15 PM
Duration: 30 mins

--
Lightning Talk : Deduplication in Storage Systems
--
Speakers: Ewen Pinto and Srinivas Billava [1]
Time: 12.15 PM - 12.45 PM
Duration: 30 mins


~
Lunch Break : 12.45 PM - 1.45 PM
~


-
Technical Talk  : Introduction to GlusterFS(Internals)
-
Speaker : Kaleb Keithley
Time: 2.00 PM - 3.00 PM
Duration: 1hr
GlusterFS is an open source, distributed file system capable of scaling to 
several petabytes
and handling thousands of clients. GlusterFS clusters together storage
building blocks over Infiniband RDMA or TCP/IP interconnect, aggregating disk 
and memory resources
and managing data in a single global namespace. GlusterFS is based on a 
stackable user space design
and can deliver exceptional performance for diverse workloads.
This talk will provide an overview of GlusterFS and its features. We will also
talk about the internals of GlusterFS.
This will be a prequel session to the GlusterFS Development Workshop that follows.

~~~
Tea/Coffee/Snacks Break 15 mins
~~~

-
Technical Talk  : GlusterFS Development Workshop
-
Speaker : Joseph Elwin Fernandes & Niels De Vos
Time: 3.15 PM -5.45 PM
Duration: 2.30 hr
1) GlusterFS Development workflow.
2) Trying out GlusterFS by adding a new xlator (GlusterFS module).
3) Setting up GlusterFS and testing the newly added xlator.
4) The patch submission and review process.


[1]

Lightning Talk Speakers:

1) Karthik U.S, Student of MCA (4th Sem), NMAMIT, Nitte, India.

2) Sukumar Poojary, Student of MCA (4th Sem), NMAMIT, Nitte, India.

3) Ewen Pinto, Student of MCA (6th Sem), NMAMIT, Nitte, India.

4) Srinivas Billava, Student of MCA (6th Sem), NMAMIT, Nitte, India.


Please let us know if you would like to contribute to the event by sharing
talks/workshops.

--
Cheers,
Humble

Senior Software  Engineer
RH India


Re: [Gluster-users] Proposal for more sub-maintainers

2014-12-10 Thread Humble Chirammal


- Original Message -
| From: "Vijay Bellur" 
| To: "gluster-users Discussion List" , "Gluster 
Devel" , "Humble
| Chirammal" 
| Sent: Tuesday, December 9, 2014 3:21:47 PM
| Subject: Re: [Gluster-users] Proposal for more sub-maintainers
| 
| On 11/28/2014 01:08 PM, Vijay Bellur wrote:
| > Hi All,
| >
| > To supplement our ongoing effort of better patch management, I am
| > proposing the addition of more sub-maintainers for various components.
| > The rationale behind this proposal & the responsibilities of maintainers
| > continue to be the same as discussed in these lists a while ago [1].
| > Here is the proposed list:
| >
| > Build - Kaleb Keithley & Niels de Vos
| >
| > DHT   - Raghavendra Gowdappa & Shyam Ranganathan
| >
| > docs  - Humble Chirammal & Lalatendu Mohanty
| >
| > gfapi - Niels de Vos & Shyam Ranganathan
| >
| > index & io-threads - Pranith Karampuri
| >
| > posix - Pranith Karampuri & Raghavendra Bhat
| >
| > We intend to update Gerrit with this list by 8th of December. Please let
| > us know if you have objections, concerns or feedback on this process by
| > then.
| 
| 
| We have not seen any objections to this list. Kudos to the new
| maintainers and good luck for maintaining these components!
| 
| Humble - can you please update gerrit to reflect this?
| 

Done. Please cross-check.  

--Humble


Re: [Gluster-users] [Gluster-devel] Dependency issue while installing glusterfs-3.6beta with vdsm

2014-10-31 Thread Humble Chirammal


On 10/31/2014 06:00 PM, Anders Blomdell wrote:

On 2014-10-31 13:18, Kaleb S. KEITHLEY wrote:

On 10/31/2014 08:15 AM, Humble Chirammal wrote:

I can create a gluster pool, but when trying to create
an image I get the error "Libvirt version does not support storage
cloning".
Will continue tomorrow.

As you noticed, this looks to be a libvirt version specific issue.

Qemu's that does not touch gluster works OK, so the installation is OK
right now :-)

I read it as, 'compat package works' !



It works, but I think most of us think it's a hack.

Yes, but workable in the interim :-)
Yep, this fills the gap and avoids breakage. Anyway, I am not in the
group who thinks this is a hack :)

I'm going to cobble up a libgfapi with versioned symbols, without the
SO_NAME bump. Since we're not going to package 3.6.0 anyway, we have
a bit of breathing room.

Could we get the compat package into master/release-3.6 in the interim
to ease tracking/testing?

Thanks for all the work so far.

/Anders




--
Cheers,
Humble

Senior Software  Engineer
RH India



Re: [Gluster-users] [Gluster-devel] Dependency issue while installing glusterfs-3.6beta with vdsm

2014-10-31 Thread Humble Chirammal


On 10/31/2014 03:46 AM, Anders Blomdell wrote:

On 2014-10-30 22:44, Anders Blomdell wrote:

On 2014-10-30 22:06, Kaleb KEITHLEY wrote:

On 10/30/2014 04:34 PM, Anders Blomdell wrote:

On 2014-10-30 20:55, Kaleb KEITHLEY wrote:

On 10/30/2014 01:50 PM, Anders Blomdell wrote:

On 2014-10-30 14:52, Kaleb KEITHLEY wrote:

On 10/30/2014 04:36 AM, Anders Blomdell wrote:

I think a compat package would make the coupling between server
and client looser, (i.e. one could run old clients on the same
machine as a new server).

Due to limited time and dependency on qemu on some of my
testing machines, I still have not been able to test
3.6.0beta3. A -compat package would have helped me a lot (but
maybe given you more bugs to fix :-)).

Hi,

Here's an experimental respin of 3.6.0beta3 with a -compat RPM.

http://koji.fedoraproject.org/koji/taskinfo?taskID=7981431

Please let us know how it works. The 3.6 release is coming very
soon and if this works we'd like to include it in our Fedora and
EPEL packages.

Nope, does not work, since running /usr/lib/rpm/find-provides
(or /usr/lib/rpm/redhat/find-provides) on the symlink does not
yield the proper provides [which for my system should be
"libgfapi.so.0()(64bit)"]. So no cigar :-(

Hi,

1) I erred on the symlink in the -compat RPM. It should have been
/usr/lib64/libgfapi.so.0 -> libgfapi.so.7(.0.0).

Noticed that, not the main problem though :-)


2) find-provides is just a wrapper that greps the SO_NAME from the
shared lib. And if you pass symlinks such as
/usr/lib64/libgfapi.so.7 or /usr/lib64/libgfapi.so.0 to it, they both
return the same result, i.e. the null string. The DSO run-time does
not check that the SO_NAME matches.

No, but yum checks for "libgfapi.so.0()(64bit) / libgfapi.so.0", so
I think something like this is needed for yum to cope with upgrades.

%ifarch x86_64
Provides: libgfapi.so.0()(64bit)
%else
Provides: libgfapi.so.0
%endif
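
For anyone following along, a quick way to see what a given build actually
advertises is to read the SONAME straight from the library and the Provides
straight from the packages (a sketch; paths assume the 64-bit layout discussed
above, and the -compat file name is only illustrative):

readelf -d /usr/lib64/libgfapi.so.7 | grep SONAME
rpm -q --provides glusterfs-api | grep libgfapi
rpm -qp --provides glusterfs-api-compat-*.rpm    # for a not-yet-installed rpm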


That's already in the glusterfs-api-compat RPM that I sent you. The 64-bit part 
anyway. Yes, a complete fix would include the 32-bit too.

Ah, I looked at / tried the wrong RPM.




I have a revised set of rpms with a correct symlink available
http://koji.fedoraproject.org/koji/taskinfo?taskID=7984220. The main
test (that I'm interested in) is whether qemu built against 3.5.x
works with it or not.

First thing is to get a yum upgrade to succeed.

What was the error?

Me :-( (putting files in the wrong location)

Unfortunately hard to test: my libvirtd (1.1.3.6) seems to lack gluster support
(even though qemu is linked against libvirtd). Any recommended version of
libvirtd to compile?

With (srpms from fc21)

libvirt-client-1.2.9-3.fc20.x86_64
libvirt-daemon-1.2.9-3.fc20.x86_64
libvirt-daemon-driver-interface-1.2.9-3.fc20.x86_64
libvirt-daemon-driver-network-1.2.9-3.fc20.x86_64
libvirt-daemon-driver-nodedev-1.2.9-3.fc20.x86_64
libvirt-daemon-driver-nwfilter-1.2.9-3.fc20.x86_64
libvirt-daemon-driver-qemu-1.2.9-3.fc20.x86_64
libvirt-daemon-driver-secret-1.2.9-3.fc20.x86_64
libvirt-daemon-driver-storage-1.2.9-3.fc20.x86_64
libvirt-daemon-kvm-1.2.9-3.fc20.x86_64
libvirt-daemon-qemu-1.2.9-3.fc20.x86_64
libvirt-devel-1.2.9-3.fc20.x86_64
libvirt-docs-1.2.9-3.fc20.x86_64
libvirt-gconfig-0.1.7-2.fc20.x86_64
libvirt-glib-0.1.7-2.fc20.x86_64
libvirt-gobject-0.1.7-2.fc20.x86_64
libvirt-python-1.2.7-2.fc20.x86_64

I can create a gluster pool, but when trying to create
an image I get the error "Libvirt version does not support storage cloning".
Will continue tomorrow.


As you noticed, this looks to be a libvirt version specific issue.


Qemu's that does not touch gluster works OK, so the installation is OK right 
now :-)


I read it as, 'compat package works' !




Have you checked my more heavyhanded http://review.gluster.org/9014
?

I have. A) it's, well, heavy-handed ;-) mainly due to
B) there's a lot of duplicated code for no real purpose, and

Agreed, quick fix to avoid soname hell (and me being unsure of
what problems __THROW could give rise to).

In C, that's a no-op. In C++, it tells the compiler that the function does not 
throw exceptions and can optimize accordingly.

OK, no problems there then.


C) for whatever reason it's not making it through our smoke and
regression tests (although I can't imagine how a new and otherwise
unused library would break those.)

Me neither, but I'm good at getting smoke :-)


If it comes to it, I personally would rather take a different route
and use versioned symbols in the library and not bump the SO_NAME.
Because the old APIs are unchanged and all we've done is add new
APIs.

I guess we have passed that point since 3.6.0 is out in the wild (RHEL),
and there is no way to bump down the version number.

That's RHS-Gluster, not community gluster. There's been some discussion of not 
packaging 3.6.0 and releasing and packaging 3.6.1 in short order. We might have 
a small window of opportunity. (Because there's never time to do it right the 
first time, but there's always time to do it over. ;-

Re: [Gluster-users] [Gluster-devel] Dependency issue while installing glusterfs-3.6beta with vdsm

2014-10-29 Thread Humble Chirammal



- Original Message -
| From: "Niels de Vos" 
| To: "Humble Devassy Chirammal" 
| Cc: "Anders Blomdell" , 
"Gluster-users@gluster.org List" ,
| "Gluster Devel" , "Humble Chirammal" 

| Sent: Wednesday, October 29, 2014 1:52:35 PM
| Subject: Re: [Gluster-devel] [Gluster-users] Dependency issue while 
installing glusterfs-3.6beta with vdsm
| 
| On Wed, Oct 29, 2014 at 09:26:36AM +0530, Humble Devassy Chirammal wrote:
| > Rebuilding related packages looks to be a solution , however it may not be
| > possible for each and every release builds of GlusterFS. lets discuss it in
| > community meeting and act.
| 
| No, these rebuilds are only needed for major releases. The SONAME for
| libgfapi is not supposed to change in a stable release. That means that
| the next rebuild would potentially be needed when glusterfs-3.7 gets
| released.
| 

Not really. AFAICT, the SONAME jump can happen if there is a new API exposed
via libgfapi, which can happen even within the same release version of GlusterFS.

On second thought, what happens if qemu or dependent packages release a new
version in between? Don't we need to rebuild?

Also, we need to build these related packages for f19, f20 and f21.

--Humble


| 
| > 
| > 
| > On Wed, Oct 29, 2014 at 1:03 AM, Niels de Vos  wrote:
| > 
| > > On Tue, Oct 28, 2014 at 05:52:38PM +0100, Anders Blomdell wrote:
| > > > On 2014-10-28 17:30, Niels de Vos wrote:
| > > > > On Tue, Oct 28, 2014 at 08:42:00AM -0400, Kaleb S. KEITHLEY wrote:
| > > > >> On 10/28/2014 07:48 AM, Darshan Narayana Murthy wrote:
| > > > >>> Hi,
| > > > >>> Installation of glusterfs-3.6beta with vdsm
| > > (vdsm-4.14.8.1-0.fc19.x86_64) fails on
| > > > >>> f19 & f20 because of dependency issues with qemu packages.
| > > > >>>
| > > > >>> I installed vdsm-4.14.8.1-0.fc19.x86_64 which installs
| > > glusterfs-3.5.2-1.fc19.x86_64
| > > > >>> as dependency. Now when I try to update glusterfs by downloading
| > > rpms from :
| > > > >>>
| > > 
http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.6.0beta3/Fedora/fedora-19/
| > > > >>> It fails with following error:
| > > > >>>
| > > > >>> Error: Package: 2:qemu-system-lm32-1.4.2-15.fc19.x86_64 (@updates)
| > > > >>>Requires: libgfapi.so.0()(64bit)
| > > > >>>Removing: glusterfs-api-3.5.2-1.fc19.x86_64 (@updates)
| > > > >>>libgfapi.so.0()(64bit)
| > > > >>>Updated By: glusterfs-api-3.6.0-0.5.beta3.fc19.x86_64
| > > (/glusterfs-api-3.6.0-0.5.beta3.fc19.x86_64)
| > > > >>>   ~libgfapi.so.7()(64bit)
| > > > >>>Available: glusterfs-api-3.4.0-0.5.beta2.fc19.x86_64
| > > (fedora)
| > > > >>>libgfapi.so.0()(64bit)
| > > > >>>
| > > > >>> Full output at: http://ur1.ca/ikvk8
| > > > >>>
| > > > >>> For having snapshot and geo-rep management through ovirt, we
| > > need glusterfs-3.6 to be
| > > > >>> installed with vdsm, which is currently failing.
| > > > >>>
| > > > >>> Can you please provide your suggestions to resolve this issue.
| > > > >>
| > > > >> Hi,
| > > > >>
| > > > >> Starting in 3.6 we have bumped the SO_VERSION of libgfapi.
| > > > >>
| > > > >> You need to install glusterfs-api-devel-3.6.0... first and build
| > > > >> vdsm.
| > > > >>
| > > > >> But  we are (or were) not planning to release glusterfs-3.6.0 on
| > > f19 and
| > > > >> f20...
| > > > >>
| > > > >> Off hand I don't believe there's anything in glusterfs-api-3.6.0
| > > > >> that
| > > vdsm
| > > > >> needs. vdsm with glusterfs-3.5.x on f19 and f20 should be okay.
| > > > >>
| > > > >> Is there something new in vdsm-4-14 that really needs glusterfs-3.6?
| > > If so
| > > > >> we can revisit whether we release 3.6 to fedora 19 and 20.
| > > > >
| > > > > The chain of dependencies is like this:
| > > > >
| > > > >vdsm -> qemu -> libgfapi.so.0
| > > > >
| > > > > I think a rebuild of QEMU should be sufficient. I'm planning to put
| > > > > glusterfs-3.6 and rebuilds of related packages in a Fedora COPR. This
| &
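
To bridge the gap locally until such rebuilds are available, rebuilding qemu
against the new glusterfs-api-devel is roughly the following (a sketch, not a
supported procedure; package names follow the Fedora versions discussed in
this thread):

yum install glusterfs-api-devel          # the 3.6.0 -devel package
yumdownloader --source qemu
yum-builddep qemu-*.src.rpm
rpmbuild --rebuild qemu-*.src.rpm        # the rebuilt qemu should now require libgfapi.so.7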

Re: [Gluster-users] libgfapi failover problem on replica bricks

2014-08-27 Thread Humble Chirammal


- Original Message -
| From: "Pranith Kumar Karampuri" 
| To: "Humble Chirammal" 
| Cc: "Roman" , gluster-users@gluster.org, "Niels de Vos" 

| Sent: Wednesday, August 27, 2014 12:34:22 PM
| Subject: Re: [Gluster-users] libgfapi failover problem on replica bricks
| 
| 
| On 08/27/2014 12:24 PM, Roman wrote:
| > root@stor1:~# ls -l /usr/sbin/glfsheal
| > ls: cannot access /usr/sbin/glfsheal: No such file or directory
| > Seems like not.
| Humble,
|   Seems like the binary is still not packaged?

Checking with Kaleb on this.

| 
| 
| >
| >
| > 2014-08-27 9:50 GMT+03:00 Pranith Kumar Karampuri  <mailto:pkara...@redhat.com>>:
| >
| >
| > On 08/27/2014 11:53 AM, Roman wrote:
| >> Okay.
| >> so here are first results:
| >>
| >> after I disconnected the first server, I've got this:
| >>
| >> root@stor2:~# gluster volume heal HA-FAST-PVE1-150G info
| >> Volume heal failed
| > Can you check if the following binary is present?
| > /usr/sbin/glfsheal
| >
| > Pranith
| >>
| >>
| >> but
| >> [2014-08-26 11:45:35.315974] I
| >> [afr-self-heal-common.c:2868:afr_log_self_heal_completion_status]
| >> 0-HA-FAST-PVE1-150G-replicate-0:  foreground data self heal  is
| >> successfully completed,  data self heal from
| >> HA-FAST-PVE1-150G-client-1  to sinks  HA-FAST-PVE1-150G-client-0,
| >> with 16108814336 bytes on HA-FAST-PVE1-150G-client-0, 16108814336
| >> bytes on HA-FAST-PVE1-150G-client-1,  data - Pending matrix:  [ [
| >> 0 0 ] [ 348 0 ] ]  on 
| >>
| >> something wrong during upgrade?
| >>
| >> I've got two VM-s on different volumes: one with HD on and other
| >> with HD off.
| >> Both survived the outage and both seemed synced.
| >>
| >> but today I've found kind of a bug with log rotation.
| >>
| >> logs rotated both on server and client sides, but logs are being
| >> written in *.log.1 file :)
| >>
| >> /var/log/glusterfs/mnt-pve-HA-MED-PVE1-1T.log.1
| >> /var/log/glusterfs/glustershd.log.1
| >>
| >> such behavior came after upgrade.
| >>
| >> logrotate.d conf files include the HUP for gluster pid-s.
| >>
| >> client:
| >> /var/log/glusterfs/*.log {
| >> daily
| >> rotate 7
| >> delaycompress
| >> compress
| >> notifempty
| >> missingok
| >> postrotate
| >> [ ! -f /var/run/glusterd.pid ] || kill -HUP `cat
| >> /var/run/glusterd.pid`
| >> endscript
| >> }
| >>
| >> but I'm not able to ls the pid on client side (should it be
| >> there?) :(
| >>
| >> and servers:
| >> /var/log/glusterfs/*.log {
| >> daily
| >> rotate 7
| >> delaycompress
| >> compress
| >> notifempty
| >> missingok
| >> postrotate
| >> [ ! -f /var/run/glusterd.pid ] || kill -HUP `cat
| >> /var/run/glusterd.pid`
| >> endscript
| >> }
| >>
| >>
| >> /var/log/glusterfs/*/*.log {
| >> daily
| >> rotate 7
| >> delaycompress
| >> compress
| >> notifempty
| >> missingok
| >> copytruncate
| >> postrotate
| >> [ ! -f /var/run/glusterd.pid ] || kill -HUP `cat
| >> /var/run/glusterd.pid`
| >> endscript
| >> }
| >>
| >> I do have /var/run/glusterd.pid on server side.
| >>
| >> Should I change something? Logrotation seems to be broken.
| >>
| >>
| >>
| >>
| >>
| >>
| >> 2014-08-26 9:29 GMT+03:00 Pranith Kumar Karampuri
| >> mailto:pkara...@redhat.com>>:
| >>
| >>
| >> On 08/26/2014 11:55 AM, Roman wrote:
| >>> Hello all again!
| >>> I'm back from vacation and I'm pretty happy with 3.5.2
| >>> available for wheezy. Thanks! Just made my updates.
| >>> For 3.5.2 do I still have to set cluster.self-heal-daemon to
| >>> off?
| >> Welcome back :-). If you set it to off, the test case you
| >> execute will work(Validate please :-) ). But we need to test
| >> it with self-heal-daemon 'on'
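
On the logrotate problem described above (client-side logs keep being written
to *.log.1 because there is no glusterd.pid on the client, so the HUP in
postrotate never reaches the writing process), a minimal client-side stanza
that avoids signalling a pid altogether is sketched below; copytruncate trades
a small window in which a few log lines can be lost for signal-free rotation:

/var/log/glusterfs/*.log {
    daily
    rotate 7
    delaycompress
    compress
    notifempty
    missingok
    copytruncate
}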

Re: [Gluster-users] libgfapi failover problem on replica bricks

2014-08-06 Thread Humble Chirammal



- Original Message -
| From: "Pranith Kumar Karampuri" 
| To: "Roman" 
| Cc: gluster-users@gluster.org, "Niels de Vos" , "Humble 
Chirammal" 
| Sent: Wednesday, August 6, 2014 12:09:57 PM
| Subject: Re: [Gluster-users] libgfapi failover problem on replica bricks
| 
| Roman,
|  The file went into split-brain. I think we should do these tests
| with 3.5.2. Where monitoring the heals is easier. Let me also come up
| with a document about how to do this testing you are trying to do.
| 
| Humble/Niels,
|  Do we have debs available for 3.5.2? In 3.5.1 there was packaging
| issue where /usr/bin/glfsheal is not packaged along with the deb. I
| think that should be fixed now as well?
| 
Pranith,

The 3.5.2 packages for Debian are not available yet. We are coordinating
internally to get them processed.
I will update the list once they are available.

--Humble
| 
| On 08/06/2014 11:52 AM, Roman wrote:
| > good morning,
| >
| > root@stor1:~# getfattr -d -m. -e hex
| > /exports/fast-test/150G/images/127/vm-127-disk-1.qcow2
| > getfattr: Removing leading '/' from absolute path names
| > # file: exports/fast-test/150G/images/127/vm-127-disk-1.qcow2
| > trusted.afr.HA-fast-150G-PVE1-client-0=0x
| > trusted.afr.HA-fast-150G-PVE1-client-1=0x0132
| > trusted.gfid=0x23c79523075a4158bea38078da570449
| >
| > getfattr: Removing leading '/' from absolute path names
| > # file: exports/fast-test/150G/images/127/vm-127-disk-1.qcow2
| > trusted.afr.HA-fast-150G-PVE1-client-0=0x0004
| > trusted.afr.HA-fast-150G-PVE1-client-1=0x
| > trusted.gfid=0x23c79523075a4158bea38078da570449
| >
| >
| >
| > 2014-08-06 9:20 GMT+03:00 Pranith Kumar Karampuri  <mailto:pkara...@redhat.com>>:
| >
| >
| > On 08/06/2014 11:30 AM, Roman wrote:
| >> Also, this time files are not the same!
| >>
| >> root@stor1:~# md5sum
| >> /exports/fast-test/150G/images/127/vm-127-disk-1.qcow2
| >> 32411360c53116b96a059f17306caeda
| >>  /exports/fast-test/150G/images/127/vm-127-disk-1.qcow2
| >>
| >> root@stor2:~# md5sum
| >> /exports/fast-test/150G/images/127/vm-127-disk-1.qcow2
| >> 65b8a6031bcb6f5fb3a11cb1e8b1c9c9
| >>  /exports/fast-test/150G/images/127/vm-127-disk-1.qcow2
| > What is the getfattr output?
| >
| > Pranith
| >
| >>
| >>
| >> 2014-08-05 16:33 GMT+03:00 Roman > <mailto:rome...@gmail.com>>:
| >>
| >> Nope, it is not working. But this time it went a bit other way
| >>
| >> root@gluster-client:~# dmesg
| >> Segmentation fault
| >>
| >>
| >> I was not able even to start the VM after I done the tests
| >>
| >> Could not read qcow2 header: Operation not permitted
| >>
| >> And it seems, it never starts to sync files after first
| >> disconnect. VM survives first disconnect, but not second (I
| >> waited around 30 minutes). Also, I've
| >> got network.ping-timeout: 2 in volume settings, but logs
| >> react on first disconnect around 30 seconds. Second was
| >> faster, 2 seconds.
| >>
| >> Reaction was different also:
| >>
| >> slower one:
| >> [2014-08-05 13:26:19.558435] W [socket.c:514:__socket_rwv]
| >> 0-glusterfs: readv failed (Connection timed out)
| >> [2014-08-05 13:26:19.558485] W
| >> [socket.c:1962:__socket_proto_state_machine] 0-glusterfs:
| >> reading from socket failed. Error (Connection timed out),
| >> peer (10.250.0.1:24007 <http://10.250.0.1:24007>)
| >> [2014-08-05 13:26:21.281426] W [socket.c:514:__socket_rwv]
| >> 0-HA-fast-150G-PVE1-client-0: readv failed (Connection timed out)
| >> [2014-08-05 13:26:21.281474] W
| >> [socket.c:1962:__socket_proto_state_machine]
| >> 0-HA-fast-150G-PVE1-client-0: reading from socket failed.
| >> Error (Connection timed out), peer (10.250.0.1:49153
| >> <http://10.250.0.1:49153>)
| >> [2014-08-05 13:26:21.281507] I
| >> [client.c:2098:client_rpc_notify]
| >> 0-HA-fast-150G-PVE1-client-0: disconnected
| >>
| >> the fast one:
| >> 2014-08-05 12:52:44.607389] C
| >> [client-handshake.c:127:rpc_client_ping_timer_expired]
| >> 0-HA-fast-150G-PVE1-client-1: server 10.250.0.2:49153
| >> <http://10.250.0.2:49153> has not responded in the last 2
| >> seconds, d