Re: [Gluster-devel] Announcing Gluster for Container Storage (GCS)

2018-08-24 Thread Michael Adam
On 2018-08-23 at 13:54 -0700, Joe Julian wrote:
> Personally, I'd like to see the glusterd service replaced by a k8s native 
> controller (named "kluster").

If you are exclusively interested in gluster for kubernetes
storage, this might seem the right approach.  But I think
this is much too narrow. The standalone, non-k8s deployments
are still important and will be for some time.

So what we've always tried to achieve (this is my personal
very firm credo, and I think several of the other gluster
developers are on the same page) is to keep any business
logic of *how* to manage bricks, how to create volumes, how
to do a mount, how to grow and shrink volumes and clusters,
etc... close to the core gluster project, so that these
features are usable irrespective of whether gluster is
used in kubernetes or not.

The kubernetes components just need to make use of these,
and so they can stay nicely small, too:

* The provisioners and csi drivers mainly do api translation
  between k8s and gluster (heketi in the old style) and are
  rather trivial (see the sketch right after this list).

* The operator would implement the logic of "when" and "why"
  to invoke the gluster operations, but should imho not
  bother about the "how".
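
To make the "api translation" point a bit more concrete, here is a
minimal sketch of what the old-style provisioning path essentially
boils down to, written as CLI calls (the endpoint and size are
made-up examples, and the real provisioner of course talks to
heketi's REST api rather than shelling out):

# a 10GiB PVC arrives in k8s; the provisioner asks heketi for a volume ...
heketi-cli --server http://heketi:8080 volume create --size=10
# ... and wraps the returned volume name / mount path into a
# PersistentVolume object that it hands back to kubernetes.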

What can not be implemented with that nice separation
of responsibilities?


Thinking about this a bit more, I do actually feel
more and more that it would be wrong to put all of
gluster into k8s even if we were only interested
in k8s. And I'm really curious how you want to do
that: I think you would have to rewrite more parts
of how gluster actually works. Currently glusterd
manages (spawns) other gluster processes. Clients
for mounting first connect to glusterd to get the
volfile and maintain a connection to glusterd
throughout the whole lifetime of the mount, etc...
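
(For illustration, this is roughly what a mount looks like today;
server and volume names are placeholders:)

# the mount helper hands the work to the fuse client ...
mount -t glusterfs server1:/myvol /mnt/myvol
# ... which is roughly equivalent to:
glusterfs --volfile-server=server1 --volfile-id=myvol /mnt/myvol
# i.e. the client fetches the volfile from glusterd on server1 and
# keeps a connection to glusterd for the lifetime of the mount.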

Really interested to hear your thoughts about the above!


Cheers - Michael




> I'm hoping to use this vacation I'm currently on to write up a design doc.
> 
> On August 23, 2018 12:58:03 PM PDT, Michael Adam  wrote:
> >On 2018-07-25 at 06:38 -0700, Vijay Bellur wrote:
> >> Hi all,
> >
> >Hi Vijay,
> >
> >Thanks for announcing this to the public and making everyone
> >more aware of Gluster's focus on container storage!
> >
> >I would like to add an additional perspective to this,
> >giving some background about the history and origins:
> >
> >Integrating Gluster with kubernetes for providing
> >persistent storage for containerized applications is
> >not new. We have been working on this since more than
> >two years now, and it is used by many community users
> >and many customers (of Red Hat) in production.
> >
> >The original software stack used heketi
> >(https://github.com/heketi/heketi) as a high level service
> >interface for gluster to facilitate the easy self-service for
> >provisioning volumes in kubernetes. Heketi implemented some ideas
> >that were originally part of the glusterd2 plans already in a
> >separate, much more narrowly scoped project to get us started
> >with these efforts in the first place, and also went beyond those
> >original ideas.  These features are now being merged into
> >glusterd2 which will in the future replace heketi in the
> >container storage stack.
> >
> >We were also working on kubernetes itself, writing the
> >provisioners for various forms of gluster volumes in kubernetes
> >proper (https://github.com/kubernetes/kubernetes) and also the
> >external storage repo
> >(https://github.com/kubernetes-incubator/external-storage).
> >Those provisioners will eventually be replaced by the mentioned
> >csi drivers. The expertise of the original kubernetes
> >development is now flowing into the CSI drivers.
> >
> >The gluster-containers repository was created and used
> >for this original container-storage effort already.
> >
> >The mentioned https://github.com/gluster/gluster-kubernetes
> >repository was not only the place for storing the deployment
> >artefacts and tools, but it was actually intended to be the
> >upstream home of the gluster-container-storage project.
> >
> >In this view, I see the GCS project announced here
> >as a GCS version 2. The original first version,
> >even though never officially announced that widely in a formal
> >introduction like this, and never given a formal release
> >or version number (let me call it version one), was the
> >software stack described above and homed at the
> >gluster-kubernetes repository. If you look at this project
> >(and heketi), you see that it has a nice level of popularity.
> >
> >I think we should make use of this traction instead of
> >ignoring the legacy

Re: [Gluster-devel] Announcing Gluster for Container Storage (GCS)

2018-08-23 Thread Michael Adam
On 2018-07-25 at 06:38 -0700, Vijay Bellur wrote:
> Hi all,

Hi Vijay,

Thanks for announcing this to the public and making everyone
more aware of Gluster's focus on container storage!

I would like to add an additional perspective to this,
giving some background about the history and origins:

Integrating Gluster with kubernetes for providing
persistent storage for containerized applications is
not new. We have been working on this for more than
two years now, and it is used by many community users
and many customers (of Red Hat) in production.

The original software stack used heketi
(https://github.com/heketi/heketi) as a high-level service
interface for gluster, to facilitate easy self-service
provisioning of volumes in kubernetes. Heketi implemented,
in a separate and much more narrowly scoped project, some
ideas that were originally part of the glusterd2 plans, in
order to get us started with these efforts in the first
place, and it also went beyond those original ideas.  These
features are now being merged into glusterd2, which will in
the future replace heketi in the container storage stack.
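
As a reminder of what that service interface looks like from the
consumer side, here is a minimal sketch (server, credentials and
size are placeholders):

# ask heketi for a 10GiB replica-3 volume, ready to be handed to a PVC
heketi-cli --server http://heketi:8080 --user admin --secret SECRET \
    volume create --size=10 --replica=3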

We were also working on kubernetes itself, writing the
provisioners for various forms of gluster volumes in kubernetes
proper (https://github.com/kubernetes/kubernetes) and also in the
external-storage repo
(https://github.com/kubernetes-incubator/external-storage).
Those provisioners will eventually be replaced by the mentioned
csi drivers. The expertise from the original kubernetes
provisioner development is now flowing into the CSI drivers.
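
For reference, the glue those in-tree provisioners provide is what
makes the following possible: a StorageClass pointing kubernetes at
heketi is all a user needs for dynamic provisioning (the resturl
below is a placeholder):

kubectl create -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi:8080"
EOF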

The gluster-containers repository was already created and
used for this original container-storage effort.

The mentioned https://github.com/gluster/gluster-kubernetes
repository was not only the place for storing the deployment
artefacts and tools, but it was actually intended to be the
upstream home of the gluster-container-storage project.

In this view, I see the GCS project announced here
as a GCS version 2. The original first version,
even though it was never officially announced this widely
in a formal introduction like this, and never given a formal
release or version number (let me call it version one), was
the software stack described above, homed at the
gluster-kubernetes repository. If you look at this project
(and heketi), you see that it has a nice level of popularity.

I think we should make use of this traction instead of
ignoring the legacy, and turn gluster-kubernetes into the
home of GCS (v2). In my view, GCS (v2) will be about:

* replacing some of the components with newer ones, i.e.
  - glusterd2 instead of the heketi and glusterd1 combo
  - csi drivers (the new standard) instead of native
    kubernetes plugins
* adding the operator feature
  (even though we are currently also working on an operator
  for the current stack with heketi and traditional gluster,
  since this will become important in production before
  this v2 is ready).

These are my 2 cents on this topic.
I hope someone finds them useful.

I am very excited to (finally) see the broader gluster
community getting more aligned behind the idea of bringing
our great SDS system into the space of containers! :-)

Cheers - Michael





> We would like to let you  know that some of us have started focusing on an
> initiative called ‘Gluster for Container Storage’ (in short GCS). As of
> now, one can already use Gluster as storage for containers by making use of
> different projects available in github repositories associated with gluster
> & Heketi.
> The goal of the GCS initiative is to provide an easier integration of these
> projects so that they can be consumed together as designed. We are
> primarily focused on integration with Kubernetes (k8s) through this
> initiative.
> 
> Key projects for GCS include:
> Glusterd2 (GD2)
> 
> Repo: https://github.com/gluster/glusterd2
> 
> The challenge we have with current management layer of Gluster (glusterd)
> is that it is not designed for a service oriented architecture. Heketi
> overcame this limitation and made Gluster consumable in k8s by providing
> all the necessary hooks needed for supporting Persistent Volume Claims.
> 
> Glusterd2 provides a service oriented architecture for volume & cluster
> management. Gd2 also intends to provide many of the Heketi functionalities
> needed by Kubernetes natively. Hence we are working on merging Heketi with
> gd2 and you can follow more of this action in the issues associated with
> the gd2 github repository.
> gluster-block
> 
> Repo: https://github.com/gluster/gluster-block
> 
> This project intends to expose files in a gluster volume as block devices.
> Gluster-block enables supporting ReadWriteOnce (RWO) PVCs and the
> corresponding workloads in Kubernetes using gluster as the underlying
> storage technology.
> 
> Gluster-block is intended to be consumed by stateful RWO applications like
> databases and k8s infrastructure services like logging, metrics

[Gluster-devel] Heketi v6.0.0 available for download

2018-02-21 Thread Michael Adam
Hi all,

Heketi v6.0.0 is now available [1].

This is the new stable version of Heketi.
Older versions are discontinued.

The main additions in this release are the block-volume API, a
great deal of stabilization work to prevent database inconsistencies
and out-of-sync situations, and tooling to do disaster recovery
when the database is bad.

Changelog

* Add support for gluster-block volumes.
* Add device resync API.
* A lot of internal restructuring and code cleanup.
* Greatly improved robustness, preventing inconsistent database state.
* Add a database import and export feature.
* Add a database repair mode (cleaning orphaned bricks).
* Allow to set heketi's log level through the HEKETI_GLUSTERAPP_LOGLEVEL 
environment variable.
* Many other bug fixes and improvements.
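
To illustrate the new log level switch (the variable name is taken
from the changelog above; the image tag and level value are just
examples), one could run the container like this:

docker run -d -p 8080:8080 \
    -e HEKETI_GLUSTERAPP_LOGLEVEL=debug \
    heketi/heketi:6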


Michael and the Heketi team


[1] https://github.com/heketi/heketi/releases/tag/v6.0.0



[Gluster-devel] Heketi v5.0.1 security release available for download

2017-12-18 Thread Michael Adam

Heketi v5.0.1 is now available.


This release[1] fixes a flaw that was found in the heketi API
that permits issuing OS commands through specially crafted
requests, possibly leading to escalation of privileges. More
details can be found in CVE-2017-15103. [2]

If authentication is turned "on" in the heketi configuration, the
flaw can be exploited only by those who possess an authentication
key. If you have a deployment with authentication not enabled, we
recommend that you turn it on and also upgrade to the version
with the fix.
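
(For those unsure what "authentication turned on" refers to: it is
the use_auth/jwt section of heketi.json, plus matching credentials
on the client side. A minimal, illustrative client call with
authentication in use could look like this; server, user and key
are placeholders:)

heketi-cli --server http://heketi:8080 \
    --user admin --secret 'ADMIN-KEY' volume list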


We thank Markus Krell of NTT Security for identifying
the vulnerability and notifying us about it.

The fix was provided by Raghavendra Talur of Red Hat.


Note that previous versions of Heketi are discontinued
and users are strongly recommended to upgrade to Heketi 5.0.1.


Michael Adam on behalf of the Heketi team


[1] https://github.com/heketi/heketi/releases/tag/v5.0.1
[2] https://cve.mitre.org/cgi-bin/cvename.cgi?name=2017-15103



[Gluster-devel] R.Talur heketi maintainer.

2017-10-24 Thread Michael Adam
Hi all,

This is to let you know that Raghavendra Talur
has recently become co-maintainer of heketi,
i.e. he can merge patches.

Thanks Talur for your good work on heketi and for
taking this up! And remember: with great power
comes great responsibility. :-)

A corresponding maintainers file has been added
to the repository.

Cheers - Michael



Re: [Gluster-devel] [heketi-devel] Nos vemos!

2017-09-26 Thread Michael Adam
On 2017-09-25 at 09:47 -0700, Luis Pabon wrote:
> Hi community,
>   It has been a great time working together on GlusterFS and Heketi these
> past few years. I remember starting on Heketi to enable GlusterFS for
> Manila, but now I will be moving on to work at Portworx and their
> containerized storage product.
> I am passing leadership of the Heketi project to Michael Adam, and I know
> it is in good hands at Red Hat.
> 
> Thank you all again for making Heketi awesome.

Thanks Luis!

All the best for your future endeavors!
I'm sure our paths will cross again at
some conference or project... :-)

Michael


> PS: If you didn't know, Heketi is a Taino word (indigenous people of the
> Caribbean) which means One. Other Taino words are: Barbecue,  hurricane,
> hammock and many others.





Re: [Gluster-devel] [heketi-devel] Heketi version 5 has been released!

2017-09-20 Thread Michael Adam
Update to Heketi release 5:

The downloads of the heketi 5.0.0 release
https://github.com/heketi/heketi/releases/tag/v5.0.0
have been updated:

1) The binaries were not reflecting the exact release but
   had a few additional (innocuous) patches on top and
   hence had an awkward file name (including githash ...).

   This has been fixed: Correct binaries have been uploaded
   with the naming scheme of heketi-[client-]v5.0.0.$OS.$ARCH.tar.gz

2) Since the new version does not have checked-in vendor
   dependencies, we have provided an additional source tarball
   with all the required dependencies:

   
https://github.com/heketi/heketi/releases/download/v5.0.0/heketi-deps-v5.0.0.tar.gz

   This is intended to make the life of packagers easier.
   Its purpose is to be extracted into the $GOPATH/src
   directory. Alternatively, the vendor directory in the
   untarred sources might work as well.

   An example of how this can be used can be seen here:

   https://github.com/CentOS-Storage-SIG/heketi/pull/2
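
   A minimal sketch of how a packager might consume the deps
   tarball, following the description above (paths are examples):

   export GOPATH=$HOME/go
   mkdir -p $GOPATH/src
   tar -C $GOPATH/src -xzf heketi-deps-v5.0.0.tar.gz
   # with the heketi sources checked out under
   # $GOPATH/src/github.com/heketi/heketi, a plain 'go build'
   # should now find all dependencies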

Michael


On 2017-09-14 at 23:32 +0200, Michael Adam wrote:
> Hi all,
> 
> 
> Heketi version 5 has just been released.
> 
> Here is the high-level changelog since version 4:
> 
> - Set Gluster Volume options based on user input.
> - Allow disperse volumes to be 2+1.
> - Use glide instead of godeps for build dependencies.
> - Increase test coverage.
> - Compress database when storing in a K8S Secret.
> - Introduce switch `backup_db_to_kube_secret`, defaulting to false.
> - Add functionality to remove a device.
> - Add functionality to remove a node.
> - Environmental variable support for config of sshexec.
> 
> Downloads of source code and binaries:
> 
> https://github.com/heketi/heketi/releases/tag/v5.0.0
> 
> Docker image:
> 
> docker pull heketi/heketi
> 
> or
> 
> docker pull heketi/heketi:5
> docker pull heketi/heketi:latest
> 
> 
> - Michael (for the heketi team)







Re: [Gluster-devel] Heketi version 5 has been released!

2017-09-20 Thread Michael Adam
On 2017-09-15 at 11:08 +0200, Niels de Vos wrote:
> On Thu, Sep 14, 2017 at 11:32:02PM +0200, Michael Adam wrote:
> > Hi all,
> > 
> > 
> > Heketi version 5 has just been released.
> 
> Congrats with the new release!
> 
> > Here is the high-level changelog since version 4:
> > 
> > - Set Gluster Volume options based on user input.
> > - Allow disperse volumes to be 2+1.
> > - Use glide instead of godeps for build dependencies.
> > - Increase test coverage.
> > - Compress database when storing in a K8S Secret.
> > - Introduce switch `backup_db_to_kube_secret`, defaulting to false.
> > - Add functionality to remove a device.
> > - Add functionality to remove a node.
> > - Environmental variable support for config of sshexec.
> > 
> > Downloads of source code and binaries:
> > 
> > https://github.com/heketi/heketi/releases/tag/v5.0.0
> 
> Could someone send a pull request to update the packaging for the CentOS
> Storage SIG? I'm not very familiar with Golang yet, and the change from
> godeps to glide is not something I can do in a couple of minutes.
> 
> Contents of the 'dist-git like' packaging files can be found here:
>   
> https://github.com/CentOS-Storage-SIG/heketi/tree/sig-storage7-gluster-common

I will be looking into it (probably with Jose).

Cheers - Michael




[Gluster-devel] Heketi version 5 has been released!

2017-09-14 Thread Michael Adam
Hi all,


Heketi version 5 has just been released.

Here is the high-level changelog since version 4:

- Set Gluster Volume options based on user input.
- Allow disperse volumes to be 2+1.
- Use glide instead of godeps for build dependencies.
- Increase test coverage.
- Compress database when storing in a K8S Secret.
- Introduce switch `backup_db_to_kube_secret`, defaulting to false.
- Add functionality to remove a device.
- Add functionality to remove a node.
- Environmental variable support for config of sshexec.

Downloads of source code and binaries:

https://github.com/heketi/heketi/releases/tag/v5.0.0

Docker image:

docker pull heketi/heketi

or

docker pull heketi/heketi:5
docker pull heketi/heketi:latest


- Michael (for the heketi team)



Re: [Gluster-devel] some gluster-block fixes

2017-06-22 Thread Michael Adam
On 2017-06-22 at 16:26 +0200, Michael Adam wrote:
> Hi all,
> 
> I have created a few patches to gluster-block.
> But am  a little bit at a loss how to create
> gerrit review requests. Hence I have created
> github PRs.
> 
> https://github.com/gluster/gluster-block/pull/29
> https://github.com/gluster/gluster-block/pull/30
> 
> Prasanna, I hope you can convert those to gerrit
> again... :-D

Ok, thanks to Niels, I was able to move them to gerrit:

https://review.gluster.org/#/c/17613/
https://review.gluster.org/#/c/17614/

Updated according to review commits on github.

Cheers - Michael



[Gluster-devel] some gluster-block fixes

2017-06-22 Thread Michael Adam
Hi all,

I have created a few patches for gluster-block.
But I am a little bit at a loss as to how to create
gerrit review requests. Hence I have created
github PRs.

https://github.com/gluster/gluster-block/pull/29
https://github.com/gluster/gluster-block/pull/30

Prasanna, I hope you can convert those to gerrit
again... :-D

Thanks - Michael



[Gluster-devel] [heketi-devel] Gluster-Block provisioning end-to-end design

2017-05-22 Thread Michael Adam
Hi all,

Initially, I forgot to send this to gluster-devel, too.
For those who are interested in learning about and
participating in the efforts of using the new gluster-block
mechanism to provide better RWO volumes for Kubernetes,
here is the design document PR.

Cheers - Michael

- Forwarded message from Michael Adam  -

Date: Sat, 20 May 2017 09:14:55 +0200
From: Michael Adam 
To: heketi-de...@gluster.org
Subject: [heketi-devel] Gluster-Block provisioning end-to-end design
User-Agent: Mutt/1.8.0 (2017-02-23)

Hi all,

Yesterday, I posted to gluster-kubernetes a
design proposal for the impending gluster-block
based provisioning of RWO volumes. This is the
distillate of a lot of discussions and previous
versions of such a document.

https://github.com/gluster/gluster-kubernetes/pull/268

Please have a look and possibly comment.

A skeleton heketi implementation can be found here:

https://github.com/obnoxxx/heketi/tree/block-wip

Cheers - Michael





- End forwarded message -



Re: [Gluster-devel] github:gluster/container-storage - team create request

2016-11-04 Thread Michael Adam
On 2016-11-04 at 11:27 +0100, Michael Scherer wrote:
> Le jeudi 03 novembre 2016 à 19:04 +0100, Michael Adam a écrit :
> > Hi all,
> > 
> > recently a new repo was created under the github/gluster org:
> > 
> > github.com/gluster/container-storage
> > 
> > This is supposed to become the home of gluster's container
> > storage project. This is the project which brings
> > gluster into the kubernetes/openshift container platform
> > as a provider of perstistent storage volumes for the
> > application containers, with gluster's service interface
> > heketi (github.com/heketi/heketi) as the central hub
> > between kubernetes/openshift and glusterfs.
> > 
> > As of now, we only have the repo, and I hereby suggest
> > the creation of a container-storage-admin team
> > with admin powers on that repo, and I would request to be
> > made a member of that team.
> 
> We already have a ton of team, any reason to not reuse a existing one ?
> 
> In the end, this is starting to become a mess (and while that's a lost
> battle, I am all to fight against entropy), and we have to drain that
> swamp some day, so better start now.

I don't technically need a new team for that.
I thought it would be a simple way to give more
fine grained privileges to the new repo.

I don't care *how* I get the privileges.
So please feel free to do it better. What I need:

  full admin and write access to the gluster-storage repo
  including the right to manage other people's rights.

Thanks,

Michael



Re: [Gluster-devel] github:gluster/container-storage - team create request

2016-11-03 Thread Michael Adam
On 2016-11-03 at 14:13 -0400, Vijay Bellur wrote:
> On 11/03/2016 02:04 PM, Michael Adam wrote:
> > Hi all,
> > 
> > recently a new repo was created under the github/gluster org:
> > 
> > github.com/gluster/container-storage
> > 
> > This is supposed to become the home of gluster's container
> > storage project. This is the project which brings
> > gluster into the kubernetes/openshift container platform
> > as a provider of perstistent storage volumes for the
> > application containers, with gluster's service interface
> > heketi (github.com/heketi/heketi) as the central hub
> > between kubernetes/openshift and glusterfs.
> > 
> > As of now, we only have the repo, and I hereby suggest
> > the creation of a container-storage-admin team
> > with admin powers on that repo, and I would request to be
> > made a member of that team.
> > 
> 
> I have done the necessary changes.

Thanks, Vijay!

> Please let me know in case you encounter
> any difficulties in modifying the repo.

Will do. Seems to work well so far.

Cheers - Michael



[Gluster-devel] github:gluster/container-storage - team create request

2016-11-03 Thread Michael Adam
Hi all,

recently a new repo was created under the github/gluster org:

github.com/gluster/container-storage

This is supposed to become the home of gluster's container
storage project. This is the project which brings
gluster into the kubernetes/openshift container platform
as a provider of persistent storage volumes for the
application containers, with gluster's service interface
heketi (github.com/heketi/heketi) as the central hub
between kubernetes/openshift and glusterfs.

As of now, we only have the repo, and I hereby suggest
the creation of a container-storage-admin team
with admin powers on that repo, and I would request to be
made a member of that team.

Thanks - Michael



Re: [Gluster-devel] problem with recent change to glfs_realpath

2016-10-21 Thread Michael Adam
On 2016-10-21 at 10:22 +0200, Niels de Vos wrote:
> On Fri, Oct 21, 2016 at 01:15:50AM +0200, Michael Adam wrote:
> > Hi all,
> > 
> > Anoop has brought to my attention that
> > recently glfs_realpath was changed in an incompatible way:
> 
> Interesting, Rajesh and I had an email discussion about that yesterday
> too... Unfortunately this (or the Samba) list was not on CC :-/

Yeah, Anoop brought it up, but Anoop, Rajesh and I discussed
the problem and the solution. Since I saw no mail on the list,
I raised it, but it's got that common background. :-)

> > Previously, glfs_realpath returned an allocated string
> > that the caller would have to free with 'free'. Now  after
> > the change, free segfaults on the returned string, and
> > the caller needs to call glfs_free.
> > 
> > That change makes no sense, imho, because the result from
> > a realpath implementation may be used by the application
> > using libgfapi, outside of the scope of the actual libgfapi
> > using code. E.g. in samba, the gfapi calls are hidden behind
> > the vfs api in the gluster backend. But the realpath result
> > is used outside the vfs module. I think this should be quite
> > normal a use case, and hence glfs_realpath should behave
> > as one would expect from a realpath implementation
> > (and as described in the realpath(3) manpage): return a string
> > that needs to be freed with 'free'...
> 
> With libgfapi and other applications we can not guarantee that the
> malloc() that libgfapi uses matches the free() from the application. It
> is possible for applications to link with alternative implementations
> (libjemalloc for example). All structures allocated and returned by
> libgfapi should be free'd by libgfapi as well. We have seen problems
> like this with NFS-Ganesha before, and that triggered us to introduce
> glfs_free().

Hm. Ok, I see what you're saying here, but doesn't that apply
to many other libraries as well? libc functions allocate
memory for you frequently (e.g. libc's realpath). So no
application should ever use these? It kind of questions the
fundamentals of many such libs.

> > Now for samba, after thorough discussion with Anoop and Rajesh,
> > we have proposed a fix/workaround by using the variant of
> > glfs_realpath that hands in a pre-allocated result string.
> > This renders us independent of the allocation method used by
> > glfs_realpath. But we wanted to point this out to the list, since
> > it is a potential problem for other users, too.
> 
> That is the correct approach. I am very sorry that I missed to check
> samba/vfs_gluster for glfs_realpath() usage, none of the other projects
> that I am aware of use this function.

No damage done, just some amount of initial confusion.
The change is only in master so far; Anoop found it through
early (manual) upstream testing. We should aim to include samba
in some upstream automated sanity test at least.

> I was (and still am!) planning to write a blog post and email about this
> general glfs_free() addition/change.

Sure, that might be a good idea!

Cheers - Michael



Re: [Gluster-devel] problem with recent change to glfs_realpath

2016-10-20 Thread Michael Adam
On 2016-10-21 at 01:15 +0200, Michael Adam wrote:
> Hi all,
> 
> Anoop has brought to my attention that
> recently glfs_realpath was changed in an incompatible way:
> 
> Previously, glfs_realpath returned an allocated string
> that the caller would have to free with 'free'. Now  after
> the change, free segfaults on the returned string, and
> the caller needs to call glfs_free.

I meant to give reference:

85e959052148ec481823d55c8b91cdee36da2b43 (master commit)

https://review.gluster.org/#/c/15332/



[Gluster-devel] problem with recent change to glfs_realpath

2016-10-20 Thread Michael Adam
Hi all,

Anoop has brought to my attention that
recently glfs_realpath was changed in an incompatible way:

Previously, glfs_realpath returned an allocated string
that the caller would have to free with 'free'. Now, after
the change, free segfaults on the returned string, and
the caller needs to call glfs_free.

That change makes no sense, imho, because the result from
a realpath implementation may be used by the application
using libgfapi outside of the scope of the actual
libgfapi-using code. E.g. in samba, the gfapi calls are hidden
behind the vfs api in the gluster backend, but the realpath
result is used outside the vfs module. I think this should be
quite a normal use case, and hence glfs_realpath should behave
as one would expect from a realpath implementation
(and as described in the realpath(3) manpage): return a string
that needs to be freed with 'free'...

Now for samba, after thorough discussion with Anoop and Rajesh,
we have proposed a fix/workaround by using the variant of
glfs_realpath that hands in a pre-allocated result string.
This renders us independent of the allocation method used by
glfs_realpath. But we wanted to point this out to the list, since
it is a potential problem for other users, too.

Cheers - Michael



Re: [Gluster-devel] [Gluster-Maintainers] 'Reviewd-by' tag for commits

2016-10-16 Thread Michael Adam
On 2016-10-16 at 02:04 +0530, Pranith Kumar Karampuri wrote:
> Which review-tool do you suggest Michael? Any other alternatives that are
> better? Don't tell me email :-)

Well, no tool/vehicle is perfect; each sucks in some respect.
Quite frankly, of the few I have seen so far, email just sucks least,
and gerrit sucks most. That's just me, and I could elaborate, but
I won't bother you since I am obviously not one of the main
contributors to Gluster, and those should probably have the
strongest voice! :-)

Here is an interesting read on the topic:

https://lwn.net/SubscriberLink/702177/d0f5decfbb3cb619/

And I am certainly not trying to convince you to
move away from Gerrit right now - there is more
important stuff to do - but my advice is to refrain
from getting involved deeper with Gerrit by forking
it and customizing the code.

The git logs will survive, and with them, any tags in
the commit messages -- no matter which tool created them.

Cheers - Michael


> On Sun, Oct 16, 2016 at 1:20 AM, Michael Adam  wrote:
> 
> > On 2016-10-14 at 11:44 +0200, Niels de Vos wrote:
> > > On Fri, Oct 14, 2016 at 02:21:23PM +0530, Nigel Babu wrote:
> > > > I've said on this thread before, none of this is easy to do. It needs
> > us to
> > > > fork Gerrit to make our own changes. I would argue that depending on
> > the
> > > > data from the commit message is folly.
> > >
> > > Eventhough we all seem to agree that statistics based on commit messages
> > > is not correct,
> >
> > I think it is the best we can currently offer.
> > Let's be honest: Gerrit sucks. Big time!
> > If gerrit is no more, the git logs will survive.
> > Git is the common denominator that will last,
> > with all the tags that the commit messages carry.
> > So for now, I'd say the more tags we can fit into
> > git commit mesages the better... :-)
> >
> > > it looks like it is an incentive to get reviewing valued
> > > more. We need to promote the reviewing work somehow, and this is one way
> > > to do it.
> > >
> > > Forking Gerrit is surely not the right thing.
> >
> > Right. Avoid it if possible. Did I mention gerrit sucks? ;-)
> >
> > Cheers - Michael
> >
> >
> 
> 
> -- 
> Pranith



Re: [Gluster-devel] [Gluster-Maintainers] 'Reviewd-by' tag for commits

2016-10-15 Thread Michael Adam
On 2016-10-14 at 11:44 +0200, Niels de Vos wrote:
> On Fri, Oct 14, 2016 at 02:21:23PM +0530, Nigel Babu wrote:
> > I've said on this thread before, none of this is easy to do. It needs us to
> > fork Gerrit to make our own changes. I would argue that depending on the
> > data from the commit message is folly.
> 
> Eventhough we all seem to agree that statistics based on commit messages
> is not correct,

I think it is the best we can currently offer.
Let's be honest: Gerrit sucks. Big time!
If gerrit is no more, the git logs will survive.
Git is the common denominator that will last,
with all the tags that the commit messages carry.
So for now, I'd say the more tags we can fit into
git commit messages the better... :-)

> it looks like it is an incentive to get reviewing valued
> more. We need to promote the reviewing work somehow, and this is one way
> to do it.
> 
> Forking Gerrit is surely not the right thing.

Right. Avoid it if possible. Did I mention gerrit sucks? ;-)

Cheers - Michael




Re: [Gluster-devel] [Gluster-Maintainers] 'Reviewd-by' tag for commits

2016-10-05 Thread Michael Adam
On 2016-10-05 at 09:45 -0400, Ira Cooper wrote:
> "Feedback-given-by: "

I like that one - thanks! :-)

Michael

> - Original Message -
> > On 2016-09-30 at 17:52 +0200, Niels de Vos wrote:
> > > On Fri, Sep 30, 2016 at 08:50:12PM +0530, Ravishankar N wrote:
> > > > On 09/30/2016 06:38 PM, Niels de Vos wrote:
> > > > > On Fri, Sep 30, 2016 at 07:11:51AM +0530, Pranith Kumar Karampuri
> > > > > wrote:
> > > ...
> > > > > Maybe we can add an additional tag that mentions all the people that
> > > > > did do reviews of older versions of the patch. Not sure what the tag
> > > > > would be, maybe just CC?
> > > > It depends on what tags would be processed to obtain statistics on 
> > > > review
> > > > contributions.
> > > 
> > > Real statistics would come from Gerrit, not from the 'git log' output.
> > > We do have a ./extras/who-wrote-glusterfs/ in the sources, but that is
> > > only to get an idea about the changes that were made and should not be
> > > used for serious statistics.
> > > 
> > > It is possible to feed the Gerrit comment-stream into things like
> > > Elasticsearch and get an accurate impression how many reviews people do
> > > (and much more). I hope we can get some contribution diagrams from
> > > someting like this at one point.
> > > 
> > > Would some kind of Gave-feedback tag for people that left a comment on
> > > earlier versions of the patch be appreciated by others? It will show in
> > > the 'git log' who was involved in some way or form.
> > 
> > I think this would be fair.
> > 
> > Reviewed-by tags should imho be reserved for the final
> > incarnation of the patch. Those mean that the person named
> > in the tag has aproved this version of the patch for getting
> > into the official tree. A previous version of the patch can
> > have been entirely different, so a reviewed-by for that
> > previous version may not actually apply to the new version at all
> > and hence create a false impression!
> > 
> > It is also difficult to track all activities by tags,
> > and anyone who wants to measure performance and contributions
> > only by looking at git commit tags will not be doing several
> > people justice. We could add 'discussed-with' or 'designed-by'
> > tags, etc ... ;-)
> > 
> > On a serious note, in Samba we use 'Pair-programmed-with' tags,
> > because we do pair-programming a lot, but only one person can
> > be an author of a git commit ...
> > 
> > The 'Gave-feedback' tag I do like. even though it does
> > not quite match with the foobar-by pattern of other tags.
> > 
> > Michael
> > 



Re: [Gluster-devel] [Gluster-Maintainers] 'Reviewd-by' tag for commits

2016-10-05 Thread Michael Adam
On 2016-09-30 at 17:52 +0200, Niels de Vos wrote:
> On Fri, Sep 30, 2016 at 08:50:12PM +0530, Ravishankar N wrote:
> > On 09/30/2016 06:38 PM, Niels de Vos wrote:
> > > On Fri, Sep 30, 2016 at 07:11:51AM +0530, Pranith Kumar Karampuri wrote:
> ...
> > > Maybe we can add an additional tag that mentions all the people that
> > > did do reviews of older versions of the patch. Not sure what the tag
> > > would be, maybe just CC?
> > It depends on what tags would be processed to obtain statistics on review
> > contributions.
> 
> Real statistics would come from Gerrit, not from the 'git log' output.
> We do have a ./extras/who-wrote-glusterfs/ in the sources, but that is
> only to get an idea about the changes that were made and should not be
> used for serious statistics.
> 
> It is possible to feed the Gerrit comment-stream into things like
> Elasticsearch and get an accurate impression how many reviews people do
> (and much more). I hope we can get some contribution diagrams from
> someting like this at one point.
> 
> Would some kind of Gave-feedback tag for people that left a comment on
> earlier versions of the patch be appreciated by others? It will show in
> the 'git log' who was involved in some way or form.

I think this would be fair.

Reviewed-by tags should imho be reserved for the final
incarnation of the patch. Those mean that the person named
in the tag has approved this version of the patch for getting
into the official tree. A previous version of the patch can
have been entirely different, so a reviewed-by for that
previous version may not actually apply to the new version at all
and hence create a false impression!

It is also difficult to track all activities by tags,
and anyone who wants to measure performance and contributions
only by looking at git commit tags will not be doing several
people justice. We could add 'discussed-with' or 'designed-by'
tags, etc ... ;-)

On a serious note, in Samba we use 'Pair-programmed-with' tags,
because we do pair-programming a lot, but only one person can
be an author of a git commit ...

The 'Gave-feedback' tag I do like, even though it does
not quite match the foobar-by pattern of other tags.

Michael



Re: [Gluster-devel] [Heketi] How pushing to heketi happens - especially about squashing

2016-09-20 Thread Michael Adam
sure if there is more lurking. ;-)

> 3) (As a consequence of the above,) If we push delta-patches
>to update PRs, that can usually not be the final push, but
>needs a final iteration of force-pushing an amended patchset.
> [lpabon] Do not amend patches.
> 
> NOTE on amended patches.  If I notice another one, I will *not* merge
> the change.  Sorry to be a pain about that, but it makes it almost
> impossible to review.  This is not Gerrit, this is Github, it
> is something new, but in my opinion, it is more natural git workflow.

I disagree. And I stand by my point that while we can do those
delta-patches for intermediate WIP-updates of a patchset, the ultimate version
of the patchset *needs* to have these delta-patches squashed into
the patches and be a patchset that can be merged as-is. Those delta
patches should imho *never* be pushed to the upstream repo. This
would be a bad habit. They need to be squashed/amended into the patchset,
because their existence means that the original patch was not
complete / correct. This is a necessary step in a good git workflow.
The end result is what counts most.
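
(In git terms, what I mean is roughly this, before the final push of
the PR branch; the branch name and the sha are placeholders:)

git commit --fixup=<sha-of-the-patch-being-amended>  # record feedback as a fixup
git rebase -i --autosquash origin/master             # fold fixups into their patches
git push --force-with-lease origin my-pr-branch      # update the PR with the clean patchset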

So are you saying that you will reject amended patchsets when
pushed to the *same* PR, but accept them when the original PR
is closed and the amended patchset is pushed to a *new* PR?

I don't understand the reasoning, because that sacrifices the
context and history of the original PR, which might be helpful,
and it seems to create extra amount of book-keeping.
But I might be able to live with that as a compromise.

I understand that sometimes it might be helpful to see those delta
patches to verify that change requests have been addressed, but
I at least am more confused by them than by updated (amended) patchsets.
I usually need to see the full patches more than I need the delta.
So when someone does that delta patch update, once I realize what's going
on, I pull the branch and view the patch in its full beauty of a combined
diff locally. ;-)

Hope that made it a little more clear what I was suggesting.

Michael



> - Original Message -
> From: "Michael Adam" 
> To:"Luis Pabón" 
> Sent: Tuesday, September 20, 2016 4:50:01 AM
> Subject: [RFC] [upstream] How pushing to heketi happens - especially about
> squashing
> 
> Hi all, hi Luis,
> 
> Since we do not have a real upstream ML yet (see my other mail), I want
> use this list now for discussing about the way patches are merged into
> heketi upstream.
> 
> [ tl;dr ? --> look for "summing up" at the bottom... ;-) ]
> 
> This is after a few weeks of working on the projects with you all
> especially with Luis, and seeing how he does the project. And there
> have been a few surprises on both ends.
> 
> While I still don't fully like or trust the github UI, it is
> for instance better than gerrit (But as José sais: "That bar
> is really low..." ;-). One point where it is better is it can
> deal with patchsets, i.e. multiple patches submitted as one PR.
> 
> But github has the feature of instead of squashing the patches
> instead of merging the patches as they are. This can be useful
> or remotely correct in one situation, but I think generally it
> should be avoided for reasons detailed below.
> 
> 
> So in this mail, I am sharing a few observations from the past
> few weeks, and a few concerns or problems I am having. I think
> it is important with the growing team to clearly formulate
> how both reviewers and patch submitters expect the process to work.
> 
> 
> At least when I propose a patchset, I propose it exactly the way
> I send it. Coming from Samba and Gluster development, for me as a
> contributor and as a reviewer, the content of the commits, i.e.
> the actual diffs as well as the layout into patches and the commit
> messages are 'sacred' in the sense that this is what the patch
> submitter proposed and signed-off on for pushing. Hence the reviewer
> should imho not change the general layout of patches (e.g. by squashing
> them) without consulting the author. Here are two examples where
> pull request with two patches were squashed with the heketi method:
> 
> https://github.com/heketi/heketi/commit/bbc513ef214c5ec81b6cdb0a3a024944c9fe12ba
> https://github.com/heketi/heketi/commit/bccab2ee8f70f6862d9bfee3a8cbdf6e47b5a8bf
> 
> You see what github does: it prints the title of the PR as main commit
> message and creates a bullet list of the original commit messages.
> Hence, it really creates pretty bad commits (A commit called
> "Two minor patches (#499)" - really??)... :-)
> This is not how these were intended by the authors. The actual result of
> how the commits looks like in git after they have been merged.
> (Btw, I don't look at the git log / code in github: it is 

Re: [Gluster-devel] 3.9. feature freeze status check

2016-08-31 Thread Michael Adam
On 2016-08-26 at 21:38 +0530, Pranith Kumar Karampuri wrote:
> hi,
>   Now that we are almost near the feature freeze date (31st of Aug),
> want to get a sense if any of the status of the features.
> 
> Please respond with:
> 1) Feature already merged
> 2) Undergoing review will make it by 31st Aug
> 3) Undergoing review, but may not make it by 31st Aug
> 4) Feature won't make it for 3.9.
> 
> I added the features that were not planned(i.e. not in the 3.9 roadmap
> page) but made it to the release and not planned but may make it to release
> at the end of this mail.
> If you added a feature on master that will be released as part of 3.9.0 but
> forgot to add it to roadmap page, please let me know I will add it.
> 
> Here are the features planned as per the roadmap:
>
> ...
> 
> 8) Integrate with external resource management software
> Feature owners: Kaleb Keithley, Jose Rivera

I still don't understand what there really is to do inside
gluster. My understanding is that you take (e.g.) storhaug
(https://github.com/linux-ha-storage/storhaug) and start
using it.

Cheers - Michael





Re: [Gluster-devel] CFP for Gluster Developer Summit

2016-08-23 Thread Michael Adam
a second proposal:

topic: "Multi-Protocol support for Gluster"

Background:
Multi-Protocol support refers to the idea of
accessing the same data with different access protocols;
in the gluster case these are fuse, nfs, and smb.
This is a much-requested feature, which is currently
not supported.

Contents:
The purpose of this presentation is to explain the challenges,
both in general and specific to Gluster, and to give an overview of
where we currently are.
Very big emphasis will be put on the aspect of testing.

Presenter:
Michael Adam
ob...@samba.org / ob...@redhat.com

Copresenter(s): Rajesh Joseph would be an ideal copresenter.
Alternatively, Poornima or R.Talur who have all been working
on the Gluster-underpinnings for multi-protocol.



On 2016-08-12 at 15:48 -0400, Vijay Bellur wrote:
> Hey All,
> 
> Gluster Developer Summit 2016 is fast approaching [1] on us. We are looking
> to have talks and discussions related to the following themes in the summit:
> 
> 1. Gluster.Next - focusing on features shaping the future of Gluster
> 
> 2. Experience - Description of real world experience and feedback from:
>a> Devops and Users deploying Gluster in production
>b> Developers integrating Gluster with other ecosystems
> 
> 3. Use cases  - focusing on key use cases that drive Gluster.today and
> Gluster.Next
> 
> 4. Stability & Performance - focusing on current improvements to reduce our
> technical debt backlog
> 
> 5. Process & infrastructure  - focusing on improving current workflow,
> infrastructure to make life easier for all of us!
> 
> If you have a talk/discussion proposal that can be part of these themes,
> please send out your proposal(s) by replying to this thread. Please clearly
> mention the theme for which your proposal is relevant when you do so. We
> will be ending the CFP by 12 midnight PDT on August 31st, 2016.
> 
> If you have other topics that do not fit in the themes listed, please feel
> free to propose and we might be able to accommodate some of them as
> lightening talks or something similar.
> 
> Please do reach out to me or Amye if you have any questions.
> 
> Thanks!
> Vijay
> 
> [1] https://www.gluster.org/events/summit2016/



Re: [Gluster-devel] CFP for Gluster Developer Summit

2016-08-23 Thread Michael Adam
I'd like to propose a general presentation about Samba
to increase the understanding of what Samba does and how
it does it.

working title: "Samba, alien imposer of strange workloads"

Items to be covered:

- samba in general: history and overview
- SMB: some details about the protocol
- layout of process model and async code in samba
- samba-clustering with ctdb
- samba<->gluster interaction (vfs module)

This would cover some basics of how to set up
samba on top of gluster, but the main focus would be
to give developers a better understanding of samba's
interaction with gluster, and specifically of why Samba
imposes so many notoriously hard workloads.
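
(For those who have not seen it: the setup basics referred to are
essentially one smb.conf share stanza using the gluster vfs module.
A minimal, illustrative example with placeholder names:)

cat >> /etc/samba/smb.conf <<'EOF'
[gluster-share]
    path = /
    vfs objects = glusterfs
    glusterfs:volume = myvol
    glusterfs:volfile_server = localhost
    kernel share modes = no
EOF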

Michael Adam
ob...@samba.org / ob...@redhat.com



On 2016-08-12 at 15:48 -0400, Vijay Bellur wrote:
> Hey All,
> 
> Gluster Developer Summit 2016 is fast approaching [1] on us. We are looking
> to have talks and discussions related to the following themes in the summit:
> 
> 1. Gluster.Next - focusing on features shaping the future of Gluster
> 
> 2. Experience - Description of real world experience and feedback from:
>a> Devops and Users deploying Gluster in production
>b> Developers integrating Gluster with other ecosystems
> 
> 3. Use cases  - focusing on key use cases that drive Gluster.today and
> Gluster.Next
> 
> 4. Stability & Performance - focusing on current improvements to reduce our
> technical debt backlog
> 
> 5. Process & infrastructure  - focusing on improving current workflow,
> infrastructure to make life easier for all of us!
> 
> If you have a talk/discussion proposal that can be part of these themes,
> please send out your proposal(s) by replying to this thread. Please clearly
> mention the theme for which your proposal is relevant when you do so. We
> will be ending the CFP by 12 midnight PDT on August 31st, 2016.
> 
> If you have other topics that do not fit in the themes listed, please feel
> free to propose and we might be able to accommodate some of them as
> lightening talks or something similar.
> 
> Please do reach out to me or Amye if you have any questions.
> 
> Thanks!
> Vijay
> 
> [1] https://www.gluster.org/events/summit2016/



Re: [Gluster-devel] Gluster Developer Summit Program Committee

2016-08-16 Thread Michael Adam
On 2016-08-16 at 11:30 -0700, Amye Scavarda wrote:
> Hi all,
> As we get closer to the CfP wrapping up (August 31, per
> http://www.gluster.org/pipermail/gluster-users/2016-August/028002.html) -
> we'll be looking for 3-4 people for the program committee to help arrange
> the schedule.
> 
> Go ahead and respond here if you're interested, and I'll work to gather us
> together after September 1st.
> Thanks!
> - amye

If you're interested in someone who's looking from a few miles higher
than many hard-core gluster engineers, I would help out.
But happy to step back if enough high profile Gluster people
speak up! :-)

Cheers - Michael



Re: [Gluster-devel] md-cache improvements

2016-08-16 Thread Michael Adam
Hi all,

On 2016-08-15 at 22:39 -0400, Vijay Bellur wrote:
> Hi Poornima, Dan -
> 
> Let us have a hangout/bluejeans session this week to discuss the planned
> md-cache improvements, proposed timelines and sort out open questions if
> any.

Because the initial mail creates the impression that this is
a topic that people are merely discussing, let me point out
that it has actually moved way beyond that stage already:

Poornima has been working hard on these cache improvements
since late 2015 at least. (And desperately looking for review
and support since at least springtime..) See all her patches
that have now finally gone into master
(e.g. http://review.gluster.org/#/c/12951/ for an old one
that has just been merged)
and all the patches that she has still up for review
(e.g. http://review.gluster.org/#/c/15002/ for a big one).

These changes were mainly motivated by samba-workloads,
since the chatty, md-heavy smb protocol is suffering most
notably from the lack of proper caching of this metadata.
The good news is that it recently started getting more
attention and we are seeing very, very promising performance
test results!
Full functional and regression testing is also underway.

Discussing the state of affairs in a real call
could be very useful indeed. Sometimes this can be
less awkward than using the list..

> Would 11:00 UTC on Wednesday work for everyone in the To: list?

Not on the To: list myself, but would work for me.. :-)
Although I have to admit it may really be very short notice for
some...

And since Poornima drove the project thus far, and was mainly
supported by Rajesh J and R.Talur from the gluster side for long
stretches of time, afaict, I think these three should be present
at a bare minimum.

Thanks - Michael


> On 08/11/2016 01:04 AM, Poornima Gurusiddaiah wrote:
> > 
> > My comments inline.
> > 
> > Regards,
> > Poornima
> > 
> > - Original Message -
> > > From: "Dan Lambright" 
> > > To: "Gluster Devel" 
> > > Sent: Wednesday, August 10, 2016 10:35:58 PM
> > > Subject: [Gluster-devel] md-cache improvements
> > > 
> > > 
> > > There have been recurring discussions within the gluster community to 
> > > build
> > > on existing support for md-cache and upcalls to help performance for small
> > > file workloads. In certain cases, "lookup amplification" dominates data
> > > transfers, i.e. the cumulative round trip times of multiple LOOKUPs from 
> > > the
> > > client mitigates benefits from faster backend storage.
> > > 
> > > To tackle this problem, one suggestion is to more aggressively utilize
> > > md-cache to cache inodes on the client than is currently done. The inodes
> > > would be cached until they are invalidated by the server.
> > > 
> > > Several gluster development engineers within the DHT, NFS, and Samba teams
> > > have been involved with related efforts, which have been underway for some
> > > time now. At this juncture, comments are requested from gluster 
> > > developers.
> > > 
> > > (1) .. help call out where additional upcalls would be needed to 
> > > invalidate
> > > stale client cache entries (in particular, need feedback from DHT/AFR
> > > areas),
> > > 
> > > (2) .. identify failure cases, when we cannot trust the contents of 
> > > md-cache,
> > > e.g. when an upcall may have been dropped by the network
> > 
> > Yes, this needs to be handled.
> > It can happen only when there is a one way disconnect, where the server 
> > cannot
> > reach client and notify fails. We can have a retry for the same until the 
> > cache
> > expiry time.
> > 
> > > 
> > > (3) .. point out additional improvements which md-cache needs. For 
> > > example,
> > > it cannot be allowed to grow unbounded.
> > 
> > This is being worked on, and will be targetted for 3.9
> > 
> > > 
> > > Dan
> > > 
> > > - Original Message -
> > > > From: "Raghavendra Gowdappa" 
> > > > 
> > > > List of areas where we need invalidation notification:
> > > > 1. Any changes to xattrs used by xlators to store metadata (like dht 
> > > > layout
> > > > xattr, afr xattrs etc).
> > 
> > Currently, md-cache will negotiate(using ipc) with the brick, a list of 
> > xattrs
> > that it needs invalidation for. Other xlators can add the xattrs they are 
> > interested
> > in to the ipc. But then these xlators need to manage their own caching and 
> > processing
> > the invalidation request, as md-cache will be above all cluater xlators.
> > reference: http://review.gluster.org/#/c/15002/
> > 
> > > > 2. Scenarios where an individual xlator feels it needs a lookup. For
> > > > example, a failed directory creation on the non-hashed subvol in dht
> > > > during mkdir. Though dht reports the mkdir as successful, it would be
> > > > better not to cache this inode, as a subsequent lookup will heal the
> > > > directory and make things better.
> > 
> > For this, these xlators can specify an indicator in the dict of
> > the fop cbk, to not cache. This should be fairly simple to implement.
> > 
> > > > 3. remo
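
To make the caching scheme discussed in this thread concrete, here is a
minimal, self-contained model of a client-side metadata cache that combines
both mechanisms mentioned above: entries are dropped when the server sends an
invalidation, and they additionally expire after a configurable time, so that
an invalidation lost to a one-way disconnect can only serve stale data for a
bounded period. This is illustrative Python, not GlusterFS code, and all names
in it are made up for the example.

    import time

    class MetadataCache:
        def __init__(self, expiry_seconds=60.0):
            self.expiry = expiry_seconds
            self.entries = {}              # path -> (metadata, cached_at)

        def store(self, path, metadata):
            self.entries[path] = (metadata, time.monotonic())

        def invalidate(self, path):
            # Called when an upcall/invalidation notification arrives.
            self.entries.pop(path, None)

        def lookup(self, path):
            entry = self.entries.get(path)
            if entry is None:
                return None                # miss: caller must do a real LOOKUP
            metadata, cached_at = entry
            if time.monotonic() - cached_at > self.expiry:
                # Expired: even if no invalidation was ever seen (it may
                # have been dropped by the network), stop trusting it.
                del self.entries[path]
                return None
            return metadata

    cache = MetadataCache(expiry_seconds=30.0)
    cache.store("/dir/file", {"size": 4096, "mode": 0o644})
    print(cache.lookup("/dir/file"))       # served from cache, no round trip
    cache.invalidate("/dir/file")          # server notified us of a change
    print(cache.lookup("/dir/file"))       # None -> forces a fresh LOOKUP

The per-xattr negotiation and the "do not cache this inode" hint discussed
above would map onto the same structure as extra conditions checked around
store(), but they are left out here to keep the model small.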

Re: [Gluster-devel] Gerrit review, submit type and Jenkins testing

2016-01-11 Thread Michael Adam
On 2016-01-12 at 08:46 +0530, Raghavendra Talur wrote:
> On Jan 12, 2016 3:44 AM, "Michael Adam"  wrote:
> >
> > On 2016-01-08 at 12:03 +0530, Raghavendra Talur wrote:
> > > Top posting, this is a very old thread.
> > >
> > > Keeping in view the recent NetBSD problems and the number of bugs
> creeping
> > > in, I suggest we do these things right now:
> > >
> > > a. Change the gerrit merge type to fast forward only.
> > > As explained below in the thread, with our current setup even if both
> > > PatchA and PatchB pass regression separately when both are merged it is
> > > possible that a functional bug creeps in.
> > > This is the only solution to prevent that from happening.
> > > I will work with Kaushal to get this done.
> > >
> > > b. In Jenkins, remove gerrit trigger and make it a manual operation
> > >
> > > Too many developers use the upstream infra as a test cluster and it is
> > > *not*.
> > > It is a verification mechanism for maintainers to ensure that the patch
> > > does not cause regression.
> > >
> > > It is required that all developers run full regression on their machines
> > > before asking for reviews.
> >
> > Hmm, I am not 100% sure I would underwrite that.
> > I am coming from the Samba process, where we have exactly
> > that: A developer should have run full selftest before
> > submitting the change for review. Then after two samba
> > team developers have given their review+ (counting the
> > author), it can be pushed to our automatism that keeps
> > rebasing on current upstream and running selftest until
> > either selftest succeeds and is pushed as a fast forward
> > or selftest fails.
> >
> > The reality is that people are lazy and think they know
> > when they can skip selftest. But people are deceived and
> > overlook problems.  Hence either reviewers run into failures
> > or the automatic pre-push selftest fails. The problem
> > I see with this is that it wastes the precious time of
> > the reviewers.
> >
> > When I started contributing to Gluster, I found it to
> > be a big, big plus to have automatic regression runs
> > as a first step after submission, so that a reviewer
> > has the option to only start looking at the patch once
> > automatic tests have passed.
> >
> > I completely agree that the fast-forward-only and
> > post-review-pre-merge-regression-run approach
> > is the way to go; only this way can the original problem
> > described by Talur be avoided.
> >
> > But would it be possible to keep and even require some
> > amount of automatic pre-review test run (build and at
> > least some amount of runtime test)?
> > It really prevents wasting the time of reviewers/maintainers.
> >
> > The problem with this is of course that it can increase
> > the (real) time needed to complete a review from submission
> > until upstream merge.
> >
> > Just a few thoughts...
> >
> > Cheers - Michael
> >
> 
> We had the same concern from many other maintainers. I guess it would be better
> if tests run both before and after review.

Yes. That would be ideal, imho, if the additional delay
is acceptable.

> With these changes we would have removed test runs of work
> in progress patches.

'these changes' being not running tests before review?
If test runs of WIP patches are not desired, then this
is mostly a matter of education... :-)

With the new vagrant-based in-tree selftest infrastructure,
it is now really easy to run tests locally. On the other
hand, it is also very convenient to have some online
platform where one can just submit WIP patches for testing.
Maybe one could have one jenkins instance for this purpose
which uses resources separate from the review-push-jenkins?

Another 2 cents of mine..

Michael
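
As a rough sketch of the automatism quoted above (keep rebasing the reviewed
change onto current upstream, run the full selftest, and push it only as a
fast forward), the post-review gate could look roughly like the following.
This is illustrative Python, not existing Gluster or Samba CI code, and the
./run-tests.sh command is only a placeholder for whatever the project's
regression entry point is.

    import subprocess
    import sys

    def run(*cmd):
        print("+", " ".join(cmd))
        return subprocess.run(cmd).returncode

    def merge_after_review(test_cmd=("./run-tests.sh",), max_attempts=3):
        # test_cmd is a placeholder for the project's regression suite.
        for _ in range(max_attempts):
            if run("git", "fetch", "origin") != 0:
                sys.exit("fetch failed")
            if run("git", "rebase", "origin/master") != 0:
                run("git", "rebase", "--abort")
                sys.exit("change no longer applies cleanly; manual rebase needed")
            if run(*test_cmd) != 0:
                sys.exit("selftest failed on top of current master")
            # A plain push only succeeds as a fast forward; if master moved
            # while the tests ran, it is rejected and we go around again.
            if run("git", "push", "origin", "HEAD:master") == 0:
                return
        sys.exit("master kept moving; giving up")

    if __name__ == "__main__":
        merge_after_review()

The important property is that the tests always run against the tree that will
actually become master, which is exactly what closes the gap where two patches
pass regression separately but break things when combined.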



Re: [Gluster-devel] Gerrit review, submit type and Jenkins testing

2016-01-11 Thread Michael Adam
On 2016-01-08 at 12:03 +0530, Raghavendra Talur wrote:
> Top posting, this is a very old thread.
> 
> Keeping in view the recent NetBSD problems and the number of bugs creeping
> in, I suggest we do these things right now:
> 
> a. Change the gerrit merge type to fast forward only.
> As explained below in the thread, with our current setup even if both
> PatchA and PatchB pass regression separately when both are merged it is
> possible that a functional bug creeps in.
> This is the only solution to prevent that from happening.
> I will work with Kaushal to get this done.
> 
> b. In Jenkins, remove gerrit trigger and make it a manual operation
> 
> Too many developers use the upstream infra as a test cluster and it is
> *not*.
> It is a verification mechanism for maintainers to ensure that the patch
> does not cause regression.
>
> It is required that all developers run full regression on their machines
> before asking for reviews.

Hmm, I am not 100% sure I would underwrite that.
I am coming from the Samba process, where we have exactly
that: A developer should have run full selftest before
submitting the change for review. Then after two samba
team developers have given their review+ (counting the
author), it can be pushed to our automatism that keeps
rebasing on current upstream and running selftest until
either selftest succeeds and is pushed as a fast forward
or selftest fails.

The reality is that people are lazy and think they know
when they can skip selftest. But people are deceived and
overlook problems.  Hence either reviewers run into failures
or the automatic pre-push selftest fails. The problem
I see with this is that it wastes the precious time of
the reviewers.

When I started contributing to Gluster, I found it to
be a big, big plus to have automatic regression runs
as a first step after submission, so that a reviewer
has the option to only start looking at the patch once
automatic tests have passed.

I completely agree that the fast-forward-only and
post-review-pre-merge-regression-run approach
is the way to go; only this way can the original problem
described by Talur be avoided.

But would it be possible to keep and even require some
amount of automatic pre-review test run (build and at
least some amount of runtime test)?
It really prevents wasting the time of reviewers/maintainers.

The problem with this is of course that it can increase
the (real) time needed to complete a review from submission
until upstream merge.

Just a few thoughts...

Cheers - Michael




[Gluster-devel] snapshot/bug-1227646.t throws core [rev...@dev.gluster.org: Change in glusterfs[master]: hook-scripts: reconsile mount, fixing manual mount]

2016-01-04 Thread Michael Adam
https://build.gluster.org/job/rackspace-regression-2GB-triggered/17291/console

- Forwarded message from "Gluster Build System (Code Review)" 
 -

Date: Mon, 4 Jan 2016 15:29:51 -0800
From: "Gluster Build System (Code Review)" 
To: Michael Adam 
Subject: Change in glusterfs[master]: hook-scripts: reconsile mount, fixing 
manual mount
User-Agent: Gerrit/2.9.4

Gluster Build System has posted comments on this change.

Change subject: hook-scripts: reconsile mount, fixing manual mount
..


Patch Set 1: Verified-1

http://build.gluster.org/job/rackspace-regression-2GB-triggered/17291/consoleFull
 : FAILED

-- 
To view, visit http://review.gluster.org/13170
To unsubscribe, visit http://review.gluster.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: Ibb7613b1b1278ab13745846baa79268db226ef19
Gerrit-PatchSet: 1
Gerrit-Project: glusterfs
Gerrit-Branch: master
Gerrit-Owner: Michael Adam 
Gerrit-Reviewer: Gluster Build System 
Gerrit-HasComments: No

- End forwarded message -



[Gluster-devel] intermittent failure: tests/basic/tier/locked_file_migration.t

2015-12-10 Thread Michael Adam
tests/basic/tier/locked_file_migration.t fails spuriously:

https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/12602/consoleFull

triggered by

http://review.gluster.org/12938

collecting a few before I try and add more tests to the bad
list...

Cheers - Michael



[Gluster-devel] intermittent failure/core: tests/bugs/unclassified/bug-1034085.t

2015-12-10 Thread Michael Adam

this one throws core: tests/bugs/unclassified/bug-1034085.t
(but prints "PASS"...)

https://build.gluster.org/job/rackspace-regression-2GB-triggered/16713/consoleFull

triggered by

http://review.gluster.org/12930

Collecting ...

Michael



[Gluster-devel] submitted one patch for marking several tests bad

2015-12-09 Thread Michael Adam
Since each of those patches (each adding a single test to the
bad tests list) prevents the others' regression runs from
succeeding, I created one patch to add all of the tests that I
have seen failing recently:

http://review.gluster.org/#/c/12933/

If it is too much for your taste, I'll reduce... :-)

Cheers - Michael



[Gluster-devel] test throws core intermittently: tests/bugs/snapshot/bug-1140162-file-snapshot-features-encrypt-opts-validation.t

2015-12-09 Thread Michael Adam
by

https://build.gluster.org/job/rackspace-regression-2GB-triggered/16674/consoleFull



[Gluster-devel] intermittent failure: tests/bugs/tier/bug-1279376-rename-demoted-file.t

2015-12-09 Thread Michael Adam
Another one?

https://build.gluster.org/job/rackspace-regression-2GB-triggered/16675/console

Triggered by:

http://review.gluster.org/12930

Cheers - Michael



Re: [Gluster-devel] intermittent failure: tests/basic/afr/split-brain-healing.t

2015-12-09 Thread Michael Adam
On 2015-12-09 at 17:00 +0100, Michael Adam wrote:
> 
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/16652/consoleFull
> 
> triggered by
> 
> http://review.gluster.org/#/c/12826/

More of these happen.

E.g.:

https://build.gluster.org/job/rackspace-regression-2GB-triggered/16680/consoleFull

Created a bug

https://bugzilla.redhat.com/show_bug.cgi?id=1290245

and a patch to mark the test as bad:

http://review.gluster.org/#/c/12932/

Thanks - Michael




Re: [Gluster-devel] intermittent failure: tests/features/weighted-rebalance.t

2015-12-09 Thread Michael Adam
On 2015-12-09 at 19:59 +0100, Michael Adam wrote:
> https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/12530/consoleFull
> 
> http://review.gluster.org/#/c/12929/
> 
> Michael


Having eliminated arbiter-statfs.t (in the review request above),
this seems to be the next suspect.

https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/12538/consoleFull

Created a BZ:

https://bugzilla.redhat.com/show_bug.cgi?id=1290204

and a patch to mark it bad:

http://review.gluster.org/12931

Cheers - Michael




[Gluster-devel] intermittent failure: tests/features/weighted-rebalance.t

2015-12-09 Thread Michael Adam
https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/12530/consoleFull

http://review.gluster.org/#/c/12929/

Michael



Re: [Gluster-devel] Netbsd failures on ./tests/basic/afr/arbiter-statfs.t

2015-12-09 Thread Michael Adam
On 2015-12-09 at 10:17 -0500, Vijay Bellur wrote:
> On 08/24/2015 07:01 AM, Susant Palai wrote:
> >Ravi,
> >  The test case ./tests/basic/afr/arbiter-statfs.t is failing frequently on
> > the netbsd machine. Requesting to take a look.
> >
> 
> tests/basic/afr/arbiter-statfs.t seems to be affecting most NetBSD runs now.
> Ravi - can you please take a look in?
> 
> Sample test run that got affected by this test unit:
> 
> https://build.gluster.org/job/rackspace-netbsd7-regression-triggered/12516/consoleFull

This seems to prevent any NetBSD regression run from succeeding
currently. Have seen it many times since your mail.

I have created a bug:

https://bugzilla.redhat.com/show_bug.cgi?id=1290125

and a patch to add the test to bad tests for now:

http://review.gluster.org/12929

Michael



[Gluster-devel] intermittent failure: tests/basic/afr/split-brain-healing.t

2015-12-09 Thread Michael Adam

https://build.gluster.org/job/rackspace-regression-2GB-triggered/16652/consoleFull

triggered by

http://review.gluster.org/#/c/12826/

Michael



Re: [Gluster-devel] intermittent test failure: tests/basic/afr/sparse-file-self-heal.t

2015-12-09 Thread Michael Adam
On 2015-12-09 at 14:49 +0100, Michael Adam wrote:
> On 2015-12-09 at 13:20 +0100, Michael Adam wrote:
> > On 2015-12-09 at 09:19 +0100, Michael Adam wrote:
> > > Another one:
> > > 
> > > https://build.gluster.org/job/rackspace-regression-2GB-triggered/16601/consoleFull
> > > 
> > > by
> > > 
> > > http://review.gluster.org/#/c/12826/
> > > 
> > > Cheers - Michael
> > 
> > 
> > Again:
> > 
> > https://build.gluster.org/job/rackspace-regression-2GB-triggered/16644/consoleFull
> 
> and again:
> 
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/16652/consoleFull

Forget that -- it is a different test.

Michael




Re: [Gluster-devel] intermittent test failure: tests/basic/afr/sparse-file-self-heal.t

2015-12-09 Thread Michael Adam
On 2015-12-09 at 13:20 +0100, Michael Adam wrote:
> On 2015-12-09 at 09:19 +0100, Michael Adam wrote:
> > Another one:
> > 
> > https://build.gluster.org/job/rackspace-regression-2GB-triggered/16601/consoleFull
> > 
> > by
> > 
> > http://review.gluster.org/#/c/12826/
> > 
> > Cheers - Michael
> 
> 
> Again:
> 
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/16644/consoleFull

and again:

https://build.gluster.org/job/rackspace-regression-2GB-triggered/16652/consoleFull



Re: [Gluster-devel] intermittent test failure: tests/basic/afr/sparse-file-self-heal.t

2015-12-09 Thread Michael Adam
On 2015-12-09 at 09:19 +0100, Michael Adam wrote:
> Another one:
> 
> https://build.gluster.org/job/rackspace-regression-2GB-triggered/16601/consoleFull
> 
> by
> 
> http://review.gluster.org/#/c/12826/
> 
> Cheers - Michael


Again:

https://build.gluster.org/job/rackspace-regression-2GB-triggered/16644/consoleFull

same patch (rebased)




[Gluster-devel] intermittent test failure: tests/basic/afr/sparse-file-self-heal.t

2015-12-09 Thread Michael Adam
Another one:

https://build.gluster.org/job/rackspace-regression-2GB-triggered/16601/consoleFull

by

http://review.gluster.org/#/c/12826/

Cheers - Michael



[Gluster-devel] intermittent test failure: tests/bugs/tier/bug-1279376-rename-demoted-file.t

2015-12-09 Thread Michael Adam
Hi,

found another one. See

https://build.gluster.org/job/rackspace-regression-2GB-triggered/16603/consoleFull

Run by http://review.gluster.org/#/c/12830/
which should not change any test result.

Michael



[Gluster-devel] intermittent test failure: tests/basic/tier/record-metadata-heat.t ?

2015-12-07 Thread Michael Adam
FYI: tests/basic/tier/record-metadata-heat.t failed in

https://build.gluster.org/job/rackspace-regression-2GB-triggered/16562/consoleFull

triggered for

http://review.gluster.org/#/c/12830/

I can see no relation.
(In fact, that patch should not add any new failures.)

Michael



[Gluster-devel] intermittent test failure: sparse-file-self-heal.t ?

2015-12-07 Thread Michael Adam
Here is a failure of sparse-file-self-heal.t:

https://build.gluster.org/job/rackspace-regression-2GB-triggered/16561/consoleFull

triggered by http://review.gluster.org/#/c/12826/

I can't see how this is related.

Michael



[Gluster-devel] (late) report from SDC 2015

2015-11-02 Thread Michael Adam
September 19 through September 25, several developers from Red Hat
Storage were in Santa Clara, CA, attending and presenting at
SNIA's Storage Developer Conference (SDC), and also participating in
the colocated SMB plugfest. For the Samba community, this is one of
the two major events of the year (the other being sambaXP - the annual
Samba developer and user conference held in spring - www.sambaxp.org).

http://www.snia.org/events/storage-developer

This is my report -- better late than never. :-)

We had a very good presence from Red Hat Storage.
Here are the presentations listed in chronological order:

- Poornima Gurusiddaiah and Soumya Koduri gave a great
  presentation about consistent client caching in Gluster.
  This includes the leases, which are an important part
  of the SMB and NFS teams' combined effort to implement
  multi-protocol access. They form the foundation inside
  Gluster for implementing SMB leases and NFS delegations in
  the protocols on top, in such a way that they can be used
  concurrently.

- Ira Cooper gave a really nice comprehensive overview talk
  about the status of the implementation of SMB3 in Samba.
  Ira's talk also served as an 'appetizer' for the more
  specific Samba talks that followed.

- I gave a presentation about the implementation of
  SMB3-Multi-Channel (my current project) which is some
  kind of SMB-client side channel bonding. It included a
  live demo that ran code I finished hacking the night
  before the talk. :-)

- Günther Deschner and José Rivera gave a great talk
  about the implementation of the SMB3 witness service.
  This is at the heart of the SMB3 clustering features and
  serves for faster client failovers. The talk also included
  a tremendous live demo of some "hot" code.

- Finally, Soumya Koduri gave an awesome talk about
  NFS Ganesha on Gluster. She explained the basics of
  Ganesha and how it works specifically on top of Gluster.

There was also an interesting talk about GlusterFS in Manila
by Veda Shankar from RH together with Ramnath Sai Sagar
from Mellanox.

The conference overall was pretty good this year. The usual
SMB protocol updates from Microsoft. A couple of good
Samba and linux-cifs related talks by Volker Lendecke,
Jeremy Allison and Steve French. Non-volatile memory and
shingled disks were big topics this year, as well as
software-defined storage in general and all things where
storage meets cloud. An interesting keynote was the
introduction of the concept of Ethernet-connected JBOD,
called EBOD, by Jim Pinkerton of Microsoft. It is basically
a new proposal for where to draw the line between
storage and compute.

The second aspect of the event was the SMB plugfest, which
is an NDA area where engineers of several companies and projects
that implement SMB servers and/or clients (Microsoft, NetApp,
EMC, Apple, LoadDynamix, ... to name just a few) meet to test
their implementations against one another, to discuss the
protocol, and hack.

Like in past years, this has been a very good and energizing
event, for me mostly discussing, hacking, and fixing bugs together
with other Samba people. I spent the week essentially jumping
back and forth between selected conference sessions and the
plugfest.

Cheers - Michael




Re: [Gluster-devel] [Gluster-users] Got a slogan idea?

2015-04-01 Thread Michael Adam
On 2015-04-01 at 15:48 +0200, Niels de Vos wrote:
> On Wed, Apr 01, 2015 at 06:59:15PM +0530, Vijay Bellur wrote:
> > On 04/01/2015 05:44 PM, Tom Callaway wrote:
> > >Hello Gluster Ant People!
> > >
> > >Right now, if you go to gluster.org, you see our current slogan in giant
> > >text:
> > >
> > >Write once, read everywhere
> > >
> > >However, no one seems to be super-excited about that slogan. It doesn't
> > >really help differentiate gluster from a portable hard drive or a
> > >paperback book. I am going to work with Red Hat's branding geniuses to
> > >come up with some possibilities, but sometimes, the best ideas come from
> > >the people directly involved with a project.
> > >
> > >What I am saying is that if you have a slogan idea for Gluster, I want
> > >to hear it. You can reply on list or send it to me directly. I will
> > >collect all the proposals (yours and the ones that Red Hat comes up
> > >with) and circle back around for community discussion in about a month
> > >or so.
> > >
> > 
> > I also think that we should start calling ourselves Gluster or GlusterDS
> > (Gluster Distributed Storage) instead of GlusterFS by default. We are
> > certainly not file storage only, we have object, api & block interfaces too
> > and the FS in GlusterFS seems to imply a file storage connotation alone.
> 
> My preference goes to Gluster, to capture the whole community.

I also would prefer Gluster over GlusterDS.

Regarding the slogan, I am thinking about something that
incorporates

SOS - [S]cale [O]ut [S]torage ...

Cheers - Michael

