Re: [Gluster-devel] [Heketi] Mailing list

2016-09-20 Thread Luis Pabón
You are completely correct, Jeff.  We will move to a Google Group email list.
I have updated the Heketi site with the new information:

https://github.com/heketi/heketi#community

We will continue to update gluster-devel as we work together, for example
on iSCSI and similar projects.

Thanks all,

- Luis

- Original Message -
From: "Jeff Darcy" 
To: "Luis Pabón" 
Cc: "gluster-devel" 
Sent: Tuesday, September 20, 2016 4:17:09 PM
Subject: Re: [Gluster-devel] [Heketi] Mailing list

> Hi gluster-devel,
>   At the Heketi project, we wanted to get better communication with the
> GlusterFS community.  We are a young project and didn't have our own
> mailing list, so we asked if we could also be part of the gluster-devel mailing
> list.  The plan is to send Heketi-specific emails to gluster-devel using the
> subject tag '[Heketi]'.  This is what is done in OpenStack, where they
> all share the same mailing list, and use the subject line tag for
> separate projects.
>   I consider this a pilot, nothing is set in stone, but I wanted to ask
> your opinion on the matter.

Personally, I'd rather see Heketi get its own mailing list(s) forthwith.
While it's fine for things that affect both projects to be crossposted,
putting general (potentially non-Gluster-related) Heketi traffic on a
Gluster mailing list has the following effects.

 * Gluster developers who have some interest in Heketi will have to
   "manually filter" which Heketi messages are actually relevant.

 * Gluster developers who have *no* interest in Heketi (yes, they
   exist) will have to set up more automatic filters.

 * Non-Gluster developers who want to follow Heketi will have to
   join a Gluster mailing list which has lots of stuff they couldn't
   care less about.

 * Searching for Heketi-related email gets weird, with lots of false
   positives on "Gluster" just because it's on our list.

 * Heketi developers might feel constrained in what they can say about
   Gluster, as compared to what they might say on a Heketi-specific
   list (even if public).

IMO the best place for any project XYZ to have its discussions is on
XYZ's own mailing list(s).
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] [Heketi] Mailing list

2016-09-20 Thread Luis Pabón
Hi gluster-devel,
  At the Heketi project, we wanted to get better communication with the
GlusterFS community.  We are a young project and didn't have our own
mailing list, so we asked if we could also be part of the gluster-devel mailing
list.  The plan is to send Heketi-specific emails to gluster-devel using the
subject tag '[Heketi]'.  This is what is done in OpenStack, where they
all share the same mailing list, and use the subject line tag for
separate projects.
  I consider this a pilot, nothing is set in stone, but I wanted to ask
your opinion on the matter.

Regards,

- Luis
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Heketi] Block store related API design discussion

2016-09-20 Thread Luis Pabón
Awesome, thanks guys.

- Luis

- Original Message -
From: "Pranith Kumar Karampuri" 
To: "Niels de Vos" 
Cc: "Luis Pabón" , "gluster-devel" 
, "Stephen Watt" , "Ramakrishna 
Yekulla" , "Humble Chirammal" 
Sent: Tuesday, September 20, 2016 5:53:30 AM
Subject: Re: [Gluster-devel] [Heketi] Block store related API design discussion

On Mon, Sep 19, 2016 at 9:22 PM, Niels de Vos  wrote:

> On Mon, Sep 19, 2016 at 10:31:11AM -0400, Luis Pabón wrote:
> > Using qemu is interesting, but the I/O should go through the I/O path of
> > the QEMU block API.  If not,
> > TCMU would not know how to work with QEMU's dynamic QCOW2 files.
> >
> > Now, if TCMU already has this, then that would be great!
>
> It has a qcow2 header, maybe you guys are lucky!
>   https://github.com/open-iscsi/tcmu-runner/blob/master/qcow2.h


Sent the earlier mail before seeing this one :-). So yes, what we
discussed is to find out whether this qemu code in tcmu can internally use
gfapi for doing the operations.


>
>
> Niels
>
> >
> > - Luis
> >
> > - Original Message -
> > From: "Prasanna Kalever" 
> > To: "Niels de Vos" 
> > Cc: "Luis Pabón" , "Stephen Watt" ,
> "gluster-devel" , "Ramakrishna Yekulla" <
> rre...@redhat.com>, "Humble Chirammal" 
> > Sent: Monday, September 19, 2016 7:13:36 AM
> > Subject: Re: [Gluster-devel] [Heketi] Block store related API design
> discussion
> >
> > On Mon, Sep 19, 2016 at 4:09 PM, Niels de Vos  wrote:
> > >
> > > On Mon, Sep 19, 2016 at 03:34:29PM +0530, Prasanna Kalever wrote:
> > > > On Mon, Sep 19, 2016 at 10:13 AM, Niels de Vos 
> wrote:
> > > > > On Tue, Sep 13, 2016 at 12:06:00PM -0400, Luis Pabón wrote:
> > > > >> Very good points.  Thanks Prasanna for putting this together.  I
> agree with
> > > > >> your comments in that Heketi is the high level abstraction API
> and it should have
> > > > >> an API similar to what is described by Prasanna.
> > > > >>
> > > > >> I definitely do not think any File Api should be available in
> Heketi,
> > > > >> because that is an implementation of the Block API.  The Heketi
> API should
> > > > >> be similar to something like OpenStack Cinder.
> > > > >>
> > > > >> I think that the actual management of the Volumes used for Block
> storage
> > > > >> and the files in them should be all managed by Heketi.  How they
> are
> > > > >> actually created is still to be determined, but we could have
> Heketi
> > > > >> create them, or have helper programs do that.
> > > > >
> > > > > Maybe a tool like qemu-img? If whatever iscsi service understands the
> > > > > format (at the very least 'raw'), you could get functionality like
> > > > > snapshots pretty simply.
> > > >
> > > > Niels,
> > > >
> > > > This is brilliant, and a subset of the idea matches one of my own
> > > > thoughts; the only concern is about adding a qemu build dependency to
> > > > Heketi.
> > > > But it comes with the advantage of an easy and cool snapshot solution.
> > >
> > > And well tested, as I understand that oVirt is moving to use qemu-img as
> > > well. Other tools are able to use the qcow2 format; maybe the iscsi
> > > service that gets used does so too.
> > >
> > > Has there already been a decision on what Heketi will configure as the
> > > iSCSI service? I am aware of the tgt [1] and LIO/TCMU [2] projects.
> >
> > Niels,
> >
> > Yes, we will be using TCMU (kernel module) and TCMU-runner (user-space
> > service) to expose a file in a Gluster volume as an iSCSI target.
> > more at [1], [2] & [3]
> >
> > [1] https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
> > [2] https://pkalever.wordpress.com/2016/06/29/non-shared-persistent-gluster-storage-with-kubernetes/
> > [3] https://pkalever.wordpress.com/2016/08/16/read-write-once-persistent-storage-for-openshift-origin-using-gluster/
> >
> > --
> > Prasanna
> >
> > >
> > > Niels
> > >
> > > 1. http://stgt.sourceforge.net/
> > > 2. https://github.com/open-iscsi/tcmu-runner
> > >http://blog.gluster.org/2016/04/using-lio-with-gluster/
> > >
> >

[Gluster-devel] [Heketi] How pushing to heketi happens - especially about squashing

2016-09-20 Thread Luis Pabón
Hi Michael,
  We have a new mailing list: gluster-devel with [Heketi] in the
subject.  I will probably add this to the communications wiki page.

On the concept of github: it is always interesting to compare something
new against what we know and are used to. In Github,
we do not need to let Github squash at all.  I was doing that as a 'pilot'.
The real method is for patches to be added to a PR, and if too many
patches accumulate, for the author to squash them and send a new one.
This is documented in the Development Guide in the Wiki.

The author should also note that the first patch/commit sent as
a PR provides the information used as the PR description.
with almost no information, and I have let this happen because most
people are still ramping up.

There is no reason why commit messages cannot be as detailed as
those from Gerrit.  Here is an example: 
https://github.com/heketi/heketi/pull/393 .

The process to update changes is to update the forked
branch, and not to amend the same change.  Amending makes it impossible
to determine the changes from patch to patch, and makes it extremely hard
on reviewers (me).

Here are my thoughts on your questions below:

1) The reviewer should not squash the author's commits unless
   the author explicitly requests or approves that.
[lpabon] Absolutely.  The pilot, although it worked well technically,
confuses those who come from other source control systems.

2) We should avoid using github to merge because this creates
   bad commit messages.
[lpabon] I'm not sure what you mean by this, but I would not
"avoid" github in any way.  That is like saying "avoid Gerrit".

3) (As a consequence of the above,) If we push delta-patches
   to update PRs, that can usually not be the final push, but
   needs a final iteration of force-pushing an amended patchset.
[lpabon] Do not amend patches.

NOTE on amended patches.  If I notice another one, I will *not* merge
the change.  Sorry to be a pain about that, but it makes it almost
impossible to review.  This is not Gerrit, this is Github, it
is something new, but in my opinion, it is a more natural git workflow.

- Luis

- Original Message -
From: "Michael Adam" 
To:"Luis Pabón" 
Sent: Tuesday, September 20, 2016 4:50:01 AM
Subject: [RFC] [upstream] How pushing to heketi happens - especially about 
squashing

Hi all, hi Luis,

Since we do not have a real upstream ML yet (see my other mail), I want to
use this list now for discussing the way patches are merged into
heketi upstream.

[ tl;dr ? --> look for "summing up" at the bottom... ;-) ]

This is after a few weeks of working on the projects with you all
especially with Luis, and seeing how he does the project. And there
have been a few surprises on both ends.

While I still don't fully like or trust the github UI, it is
for instance better than gerrit (but as José says: "That bar
is really low..." ;-). One point where it is better is that it can
deal with patchsets, i.e. multiple patches submitted as one PR.

But github has the feature of squashing the patches instead of
merging the patches as they are. This can be useful
or remotely correct in one situation, but I think generally it
should be avoided for reasons detailed below.


So in this mail, I am sharing a few observations from the past
few weeks, and a few concerns or problems I am having. I think
it is important with the growing team to clearly formulate
how both reviewers and patch submitters expect the process to work.


At least when I propose a patchset, I propose it exactly the way
I send it. Coming from Samba and Gluster development, for me as a
contributor and as a reviewer, the content of the commits, i.e.
the actual diffs as well as the layout into patches and the commit
messages are 'sacred' in the sense that this is what the patch
submitter proposed and signed-off on for pushing. Hence the reviewer
should imho not change the general layout of patches (e.g. by squashing
them) without consulting the author. Here are two examples where
pull requests with two patches were squashed with the heketi method:

https://github.com/heketi/heketi/commit/bbc513ef214c5ec81b6cdb0a3a024944c9fe12ba
https://github.com/heketi/heketi/commit/bccab2ee8f70f6862d9bfee3a8cbdf6e47b5a8bf

You see what github does: it prints the title of the PR as the main commit
message and creates a bullet list of the original commit messages.
Hence, it really creates pretty bad commits (a commit called
"Two minor patches (#499)" - really??)... :-)
This is not how these commits were intended by the authors; it is just the
result of how they look in git after they have been merged.
(Btw, I don't look at the git log / code in github: it is difficult to see
the relevant things there. I look at it in a local git checkout in the shell.
This is the "everlasting", "sacred" content.)

So 

Re: [Gluster-devel] [Heketi] Block store related API design discussion

2016-09-19 Thread Luis Pabón
Using qemu is interesting, but the I/O should go through the I/O path of the
QEMU block API.  If not,
TCMU would not know how to work with QEMU's dynamic QCOW2 files.

Now, if TCMU already has this, then that would be great!

- Luis

- Original Message -
From: "Prasanna Kalever" 
To: "Niels de Vos" 
Cc: "Luis Pabón" , "Stephen Watt" , 
"gluster-devel" , "Ramakrishna Yekulla" 
, "Humble Chirammal" 
Sent: Monday, September 19, 2016 7:13:36 AM
Subject: Re: [Gluster-devel] [Heketi] Block store related API design discussion

On Mon, Sep 19, 2016 at 4:09 PM, Niels de Vos  wrote:
>
> On Mon, Sep 19, 2016 at 03:34:29PM +0530, Prasanna Kalever wrote:
> > On Mon, Sep 19, 2016 at 10:13 AM, Niels de Vos  wrote:
> > > On Tue, Sep 13, 2016 at 12:06:00PM -0400, Luis Pabón wrote:
> > >> Very good points.  Thanks Prasanna for putting this together.  I agree 
> > >> with
> > >> your comments in that Heketi is the high level abstraction API and it 
> > >> should have
> > >> an API similar to what is described by Prasanna.
> > >>
> > >> I definitely do not think any File Api should be available in Heketi,
> > >> because that is an implementation of the Block API.  The Heketi API 
> > >> should
> > >> be similar to something like OpenStack Cinder.
> > >>
> > >> I think that the actual management of the Volumes used for Block storage
> > >> and the files in them should be all managed by Heketi.  How they are
> > >> actually created is still to be determined, but we could have Heketi
> > >> create them, or have helper programs do that.
> > >
> > > Maybe a tool like qemu-img? If whatever iscsi service understands the
> > > format (at the very least 'raw'), you could get functionality like
> > > snapshots pretty simply.
> >
> > Niels,
> >
> > This is brilliant, and a subset of the idea matches one of my own
> > thoughts; the only concern is about adding a qemu build dependency to
> > Heketi.
> > But it comes with the advantage of an easy and cool snapshot solution.
>
> And well tested, as I understand that oVirt is moving to use qemu-img as
> well. Other tools are able to use the qcow2 format; maybe the iscsi
> service that gets used does so too.
>
> Has there already been a decision on what Heketi will configure as the
> iSCSI service? I am aware of the tgt [1] and LIO/TCMU [2] projects.

Niels,

Yes, we will be using TCMU (kernel module) and TCMU-runner (user-space
service) to expose a file in a Gluster volume as an iSCSI target.
more at [1], [2] & [3]

[1] 
https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
[2] 
https://pkalever.wordpress.com/2016/06/29/non-shared-persistent-gluster-storage-with-kubernetes/
[3] 
https://pkalever.wordpress.com/2016/08/16/read-write-once-persistent-storage-for-openshift-origin-using-gluster/

--
Prasanna

>
> Niels
>
> 1. http://stgt.sourceforge.net/
> 2. https://github.com/open-iscsi/tcmu-runner
>http://blog.gluster.org/2016/04/using-lio-with-gluster/
>
> >
> > --
> > Prasanna
> >
> > >
> > > Niels
> > >
> > >
> > >> We also need to document the exact workflow to enable a file in
> > >> a Gluster volume to be exposed as a block device.  This will help
> > >> determine where the creation of the file could take place.
> > >>
> > >> We can capture our decisions from these discussions in the
> > >> following page:
> > >>
> > >> https://github.com/heketi/heketi/wiki/Proposed-Changes
> > >>
> > >> - Luis
> > >>
> > >>
> > >> - Original Message -
> > >> From: "Humble Chirammal" 
> > >> To: "Raghavendra Talur" 
> > >> Cc: "Prasanna Kalever" , "gluster-devel" 
> > >> , "Stephen Watt" , "Luis 
> > >> Pabon" , "Michael Adam" , 
> > >> "Ramakrishna Yekulla" , "Mohamed Ashiq Liyazudeen" 
> > >> 
> > >> Sent: Tuesday, September 13, 2016 2:23:39 AM
> > >> Subject: Re: [Gluster-devel] [Heketi] Block store related API design 
> > >> discussion
> > >>
> > >>
> > >>
> > >>
> > >>
> > >> - Ori

Re: [Gluster-devel] [Heketi] Block store related API design discussion

2016-09-18 Thread Luis Pabón
Hi Prasanna,
  I started the wiki page with the documentation on the API.  More
information still needs to be added, and we still need to work
on the workflow, but at least it is a start.

Please take a look at the wiki:

https://github.com/heketi/heketi/wiki/Proposed-API:-Block-Storage
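
As a rough illustration of the request/response shapes such a block API
implies (every type and field name below is an assumption made for this
sketch, not the actual proposal on the wiki), in Go:

// blockapi sketches hypothetical block-storage request/response types.
package blockapi

// BlockVolumeCreateRequest asks for a new block device of a given size.
type BlockVolumeCreateRequest struct {
    Size     int      `json:"size"`               // size in GB
    Clusters []string `json:"clusters,omitempty"` // optional cluster restriction
}

// BlockVolumeInfo describes a provisioned block device, including the
// iSCSI connection details an initiator would need.
type BlockVolumeInfo struct {
    ID                 string   `json:"id"`
    Size               int      `json:"size"`
    TargetIQN          string   `json:"target_iqn"`           // iSCSI qualified name of the export
    Portals            []string `json:"portals"`              // host:port endpoints serving the LUN
    BlockHostingVolume string   `json:"block_hosting_volume"` // Gluster volume holding the backing file
}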

- Luis

- Original Message -
From: "Luis Pabón" 
To: "Humble Chirammal" 
Cc: "gluster-devel" , "Stephen Watt" 
, "Ramakrishna Yekulla" 
Sent: Tuesday, September 13, 2016 12:06:00 PM
Subject: Re: [Gluster-devel] [Heketi] Block store related API design discussion

Very good points.  Thanks Prasanna for putting this together.  I agree with
your comments in that Heketi is the high-level abstraction API and it should
have an API similar to what is described by Prasanna.

I definitely do not think any File API should be available in Heketi,
because that is an implementation of the Block API.  The Heketi API should
be similar to something like OpenStack Cinder.

I think that the actual management of the Volumes used for Block storage
and the files in them should be all managed by Heketi.  How they are
actually created is still to be determined, but we could have Heketi
create them, or have helper programs do that.

We also need to document the exact workflow to enable a file in
a Gluster volume to be exposed as a block device.  This will help
determine where the creation of the file could take place.

We can capture our decisions from these discussions in the
following page:

https://github.com/heketi/heketi/wiki/Proposed-Changes

- Luis


- Original Message -
From: "Humble Chirammal" 
To: "Raghavendra Talur" 
Cc: "Prasanna Kalever" , "gluster-devel" 
, "Stephen Watt" , "Luis Pabon" 
, "Michael Adam" , "Ramakrishna Yekulla" 
, "Mohamed Ashiq Liyazudeen" 
Sent: Tuesday, September 13, 2016 2:23:39 AM
Subject: Re: [Gluster-devel] [Heketi] Block store related API design discussion





- Original Message -
| From: "Raghavendra Talur" 
| To: "Prasanna Kalever" 
| Cc: "gluster-devel" , "Stephen Watt" 
, "Luis Pabon" ,
| "Michael Adam" , "Humble Chirammal" , 
"Ramakrishna Yekulla"
| , "Mohamed Ashiq Liyazudeen" 
| Sent: Tuesday, September 13, 2016 11:08:44 AM
| Subject: Re: [Gluster-devel] [Heketi] Block store related API design 
discussion
| 
| On Mon, Sep 12, 2016 at 11:30 PM, Prasanna Kalever 
| wrote:
| 
| > Hi all,
| >
| > This mail is open for discussion on gluster block store integration with
| > heketi and its REST API interface design constraints.
| >
| >
| >                          ___ Volume Request ...
| >                         |
| > PVC claim -> Heketi --->|
| >                         |                            __ BlockCreate
| >                         |                           |
| >                         |                           |__ BlockInfo
| >                         |                           |
| >                         |___ Block Request (APIS)-> |__ BlockResize
| >                                                     |
| >                                                     |__ BlockList
| >                                                     |
| >                                                     |__ BlockDelete
| >
| > Heketi will have a block API and a volume API. When a user submits a Persistent
| > Volume Claim, the Kubernetes provisioner, based on the storage class (from the PVC),
| > talks to heketi for storage; heketi in turn calls the block or volume APIs
| > based on the request.
| >
| 
| This is probably wrong. It won't be Heketi calling block or volume APIs. It
| would be Kubernetes calling block or volume API *of* Heketi.
| 
| 
| > With my limited understanding, heketi currently creates clusters from
| > provided nodes, creates volumes, and hands them over to the user.
| > For block-related APIs, it has to deal with files, right?
| >
| > Here is how the block APIs would look, in short:
| > Create: heketi has to create a file in the volume, export it as an iscsi
| > target device, and hand it over to the user.
| > Info: show block store information across all the clusters: connection
| > info, size, etc.
| > Resize: resize the file in the volume, refresh connections from the initiator
| > side
| > List: list the connections
| > Delete: log out the connections and delete the file in the gluster volume
| >
| > Couple of questions:
| > 1. Should Block API have sub API's such as FileCreate, FileList,
| > FileResize, File delete and etc then get it used in B

[Gluster-devel] [Heketi] Kubernetes Dynamic Provisioner for Gluster/Heketi

2016-09-13 Thread Luis Pabón
Hi all,
  I was able to spend some time setting up the environment to test
the Gluster/Heketi dynamic provisioner in Kubernetes.  It took me
a little while, but once I figured it out, I was able to start the
tests.  It is actually really easy to set up, so I wrote two
blogs[1][2] about how I was able to accomplish this.

Everything worked very well, but I did find one issue[3].  I will
also determine how hard it would be to add this to the Heketi
CI functional tests.

[1] Setting up Minikube: http://bit.ly/2cFfaue
[2] Testing Gluster/Heketi Dynamic Provisioner: http://bit.ly/2cvl92V
[3] https://github.com/heketi/heketi/issues/494

- Luis
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Heketi] Block store related API design discussion

2016-09-13 Thread Luis Pabón
Hi Steve, 
  Good questions.  We still need to investigate the security
concerns around block storage.

- Luis

- Original Message -
From: "Stephen Watt" 
To: "Luis Pabón" 
Cc: "Humble Chirammal" , "Raghavendra Talur" 
, "Prasanna Kalever" , "gluster-devel" 
, "Michael Adam" , "Ramakrishna 
Yekulla" , "Mohamed Ashiq Liyazudeen" , 
"Engineering discussions on containers & RHS" 
Sent: Tuesday, September 13, 2016 12:10:54 PM
Subject: Re: [Gluster-devel] [Heketi] Block store related API design discussion

+ rhs-containers list

Also, some important requirements to figure out/think about are:

- How are you managing locking a block device against a container (or a
host)?
- Will your implementation work with OpenShift volume security for block
devices (FSGroups + Recursive chown, chmod and SELinux labeling)

If these aren't already figured out, would it be possible to create
separate cards in your trello board so we can track the progress on the
resolution of these two topics?

On Tue, Sep 13, 2016 at 11:06 AM, Luis Pabón  wrote:

> Very good points.  Thanks Prasanna for putting this together.  I agree with
> your comments in that Heketi is the high level abstraction API and it
> should have
> an API similar to what is described by Prasanna.
>
> I definitely do not think any File Api should be available in Heketi,
> because that is an implementation of the Block API.  The Heketi API should
> be similar to something like OpenStack Cinder.
>
> I think that the actual management of the Volumes used for Block storage
> and the files in them should be all managed by Heketi.  How they are
> actually created is still to be determined, but we could have Heketi
> create them, or have helper programs do that.
>
> We also need to document the exact workflow to enable a file in
> a Gluster volume to be exposed as a block device.  This will help
> determine where the creation of the file could take place.
>
> We can capture our decisions from these discussions in the
> following page:
>
> https://github.com/heketi/heketi/wiki/Proposed-Changes
>
> - Luis
>
>
> - Original Message -
> From: "Humble Chirammal" 
> To: "Raghavendra Talur" 
> Cc: "Prasanna Kalever" , "gluster-devel" <
> gluster-devel@gluster.org>, "Stephen Watt" , "Luis
> Pabon" , "Michael Adam" ,
> "Ramakrishna Yekulla" , "Mohamed Ashiq Liyazudeen" <
> mliya...@redhat.com>
> Sent: Tuesday, September 13, 2016 2:23:39 AM
> Subject: Re: [Gluster-devel] [Heketi] Block store related API design
> discussion
>
>
>
>
>
> - Original Message -
> | From: "Raghavendra Talur" 
> | To: "Prasanna Kalever" 
> | Cc: "gluster-devel" , "Stephen Watt" <
> sw...@redhat.com>, "Luis Pabon" ,
> | "Michael Adam" , "Humble Chirammal" <
> hchir...@redhat.com>, "Ramakrishna Yekulla"
> | , "Mohamed Ashiq Liyazudeen" 
> | Sent: Tuesday, September 13, 2016 11:08:44 AM
> | Subject: Re: [Gluster-devel] [Heketi] Block store related API design
> discussion
> |
> | On Mon, Sep 12, 2016 at 11:30 PM, Prasanna Kalever 
> | wrote:
> |
> | > Hi all,
> | >
> | > This mail is open for discussion on gluster block store integration
> with
> | > heketi and its REST API interface design constraints.
> | >
> | >
> | >                          ___ Volume Request ...
> | >                         |
> | > PVC claim -> Heketi --->|
> | >                         |                            __ BlockCreate
> | >                         |                           |
> | >                         |                           |__ BlockInfo
> | >                         |                           |
> | >                         |___ Block Request (APIS)-> |__ BlockResize
> | >                                                     |
> | >                                                     |__ BlockList
> | >                                                     |
> | >                                                     |__ BlockDelete
> | >
> | > Heketi will have a block API and a volume API. When a user submits a
> | > Persistent Volume Claim, the Kubernetes provisioner, based on the
> | > storage class (from the PVC), talks to heketi for storage; heketi in
> | > turn calls the block or volume APIs
> | 

Re: [Gluster-devel] [Heketi] Block store related API design discussion

2016-09-13 Thread Luis Pabón
Very good points.  Thanks Prasanna for putting this together.  I agree with
your comments in that Heketi is the high-level abstraction API and it should
have an API similar to what is described by Prasanna.

I definitely do not think any File API should be available in Heketi,
because that is an implementation of the Block API.  The Heketi API should
be similar to something like OpenStack Cinder.

I think that the actual management of the Volumes used for Block storage
and the files in them should be all managed by Heketi.  How they are
actually created is still to be determined, but we could have Heketi
create them, or have helper programs do that.

We also need to document the exact workflow to enable a file in
a Gluster volume to be exposed as a block device.  This will help
determine where the creation of the file could take place.
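
To make the first half of that workflow concrete, here is a minimal Go
sketch of creating the backing file. It assumes the block-hosting Gluster
volume is FUSE-mounted at a known path (the mount point below is made up),
and it deliberately stops before the export step, since deciding where the
export happens is exactly the open question:

package main

import (
    "fmt"
    "log"
    "os"
    "path/filepath"
)

// createBackingFile creates a sparse, fixed-size raw file on a FUSE-mounted
// Gluster volume. Exporting it as an iSCSI LUN (tcmu-runner/LIO) is a
// separate step, not shown here.
func createBackingFile(mountpoint, name string, sizeGB int64) (string, error) {
    path := filepath.Join(mountpoint, name)
    f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
    if err != nil {
        return "", err
    }
    defer f.Close()
    // Truncate sets the logical size without writing data (sparse file).
    if err := f.Truncate(sizeGB << 30); err != nil {
        os.Remove(path)
        return "", err
    }
    return path, nil
}

func main() {
    path, err := createBackingFile("/mnt/blockhosting", "block-vol-0001", 10)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("created backing file:", path)
}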

We can capture our decisions from these discussions in the
following page:

https://github.com/heketi/heketi/wiki/Proposed-Changes

- Luis


- Original Message -
From: "Humble Chirammal" 
To: "Raghavendra Talur" 
Cc: "Prasanna Kalever" , "gluster-devel" 
, "Stephen Watt" , "Luis Pabon" 
, "Michael Adam" , "Ramakrishna Yekulla" 
, "Mohamed Ashiq Liyazudeen" 
Sent: Tuesday, September 13, 2016 2:23:39 AM
Subject: Re: [Gluster-devel] [Heketi] Block store related API design discussion





- Original Message -
| From: "Raghavendra Talur" 
| To: "Prasanna Kalever" 
| Cc: "gluster-devel" , "Stephen Watt" 
, "Luis Pabon" ,
| "Michael Adam" , "Humble Chirammal" , 
"Ramakrishna Yekulla"
| , "Mohamed Ashiq Liyazudeen" 
| Sent: Tuesday, September 13, 2016 11:08:44 AM
| Subject: Re: [Gluster-devel] [Heketi] Block store related API design 
discussion
| 
| On Mon, Sep 12, 2016 at 11:30 PM, Prasanna Kalever 
| wrote:
| 
| > Hi all,
| >
| > This mail is open for discussion on gluster block store integration with
| > heketi and its REST API interface design constraints.
| >
| >
| >                          ___ Volume Request ...
| >                         |
| > PVC claim -> Heketi --->|
| >                         |                            __ BlockCreate
| >                         |                           |
| >                         |                           |__ BlockInfo
| >                         |                           |
| >                         |___ Block Request (APIS)-> |__ BlockResize
| >                                                     |
| >                                                     |__ BlockList
| >                                                     |
| >                                                     |__ BlockDelete
| >
| > Heketi will have a block API and a volume API. When a user submits a Persistent
| > Volume Claim, the Kubernetes provisioner, based on the storage class (from the PVC),
| > talks to heketi for storage; heketi in turn calls the block or volume APIs
| > based on the request.
| >
| 
| This is probably wrong. It won't be Heketi calling block or volume APIs. It
| would be Kubernetes calling block or volume API *of* Heketi.
| 
| 
| > With my limited understanding, heketi currently creates clusters from
| > provided nodes, creates volumes, and hands them over to the user.
| > For block-related APIs, it has to deal with files, right?
| >
| > Here is how the block APIs would look, in short:
| > Create: heketi has to create a file in the volume, export it as an iscsi
| > target device, and hand it over to the user.
| > Info: show block store information across all the clusters: connection
| > info, size, etc.
| > Resize: resize the file in the volume, refresh connections from the initiator
| > side
| > List: list the connections
| > Delete: log out the connections and delete the file in the gluster volume
| >
| > A couple of questions:
| > 1. Should the Block API have sub-APIs such as FileCreate, FileList,
| > FileResize, FileDelete, etc., and then use them in the Block API, as they
| > mostly deal with files?
| >
| 
| IMO, Heketi should not expose any File-related API. It should only have
| APIs to service requests for block devices; how the block devices are
| created and modified is an implementation detail.
| 
| 
| > 2. How do we create the actual file in the volume, meaning using a FUSE
| > mount (which may involve an extra process running) or gfapi? Again, if gfapi,
| > should we go with the C APIs, Python bindings, or Go bindings?
| >
| > 3. Should we get targetcli-related (LUN exporting) setup done from heketi
| > or do we seek help from gdeploy for this ?
| >
| 
| I would prefer to either have it in Heketi or in Kubernetes. If the API in
| Heketi promises just the creation of block device, then the rest of the
| implementation should be in Kubernetes(the export part). If the API in
| Heketi promises create and export both, I would say Heketi should have the
| implementation within itself.
| 
| 

IMO, we should not think a

Re: [Gluster-devel] gdploy + Heketi

2015-12-11 Thread Luis Pabón
My proposal is for gdeploy to communicate with Heketi, glusterd, and the 
system itself to service requests from the administrator. Communicate 
with Heketi for all volume allocation/deallocation, with glusterd for 
any modifications on the volume, and with the node operating system (if 
really necessary) for any required setup.


The following is just a brainstorm, not a spec file by any means. Just 
an idea of what the workflow could be like.


Here is a possible workflow:

# Admin: Create SSH keys
# Admin: Setup Heketi service
  - Heketi configured with private SSH key.
# Admin: Raw nodes are set up only with the gluster service and the
public ssh key.

# Admin: Create topology.json with clusters, nodes, zones, and devices.
Admin needs to create a topology.json file.  See example in
https://github.com/heketi/vagrant-heketi/blob/master/roles/heketi/files/topology_libvirt.json
* gdeploy topology load -json=topology.json
  - Assume that the location of the Heketi server is known, either by an
    environment variable, configuration file, or switch.
  - At this point Heketi has been loaded with the configuration of the 
data center.

# Display topology
* gdeploy topology show
Cluster [2345235]
 |- Node [my.node.com]
     |- Device [/dev/sdb]
     |- Device [/dev/sdc]
Cluster [F54DD]
 |- Node...
...

# Display node information
* gdeploy node info [hostname or uuid]

# Create a volume
* gdeploy volume create -size=100

# Create volumes from a configuration file
* gdeploy volume create -c volumes.conf

$ cat volumes.conf
[volume]
action=create
volname=Gdeploy_test  <-- optional
transport=tcp,rdma  <-- would need to be added to Heketi
replica=yes
replica_count=2

[clients]
action=mount
#volname=glustervol (if not specified earlier in the 'volume' section)
hosts=node2.redhat.com
fstype=glusterfs
client_mount_points=/mnt/gluster


# Set volume options, snapshots, etc.
These would first talk to Heketi to determine which servers are 
servicing this volume.
gdeploy can then communicate with glusterd to execute the volume 
modifications.

* gdeploy volume options  
* gdeploy volume options -c options.conf

# Destroy a volume
Here gdeploy would first check for snapshots.  If there are none, then
it would request the work from Heketi.
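
For the volume operations, gdeploy would essentially be a thin client of
Heketi's REST interface. A minimal Go sketch of such a request follows;
the endpoint and JSON body follow Heketi's volume-create call as I
understand it, but the exact contract should be verified against the
Heketi API docs for the version in use, and the server URL here is made up:

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

// volumeCreateRequest is the JSON body for a volume-create call.
type volumeCreateRequest struct {
    Size int `json:"size"` // size in GB
}

func main() {
    body, err := json.Marshal(volumeCreateRequest{Size: 100})
    if err != nil {
        log.Fatal(err)
    }
    // The Heketi server location is assumed known (environment variable,
    // configuration file, or switch, as described above).
    resp, err := http.Post("http://heketi.example.com:8080/volumes",
        "application/json", bytes.NewReader(body))
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    // Heketi handles requests asynchronously; a real client would poll
    // until the volume is ready and then read back the volume info.
    fmt.Println("status:", resp.Status)
}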


These are just some possible methods of how they could interact.

- Luis

On 12/11/2015 02:16 AM, Sachidananda URS wrote:



On Fri, Dec 11, 2015 at 12:31 PM, Luis Pabon wrote:


I think the simplest would be to specify workflow examples and
how gdeploy+Heketi would satisfy them.  I will be sending out some
possible workflows tomorrow.


Awesome, I will see if we can add something to it.


Also, there is a python Heketi client in the works right now which
would benefit gdeploy: https://github.com/heketi/heketi/pull/251 .


Cool, will check this out.

-sac

- Luis

- Original Message -
From: "Sachidananda URS" mailto:s...@redhat.com>>
To: "Luis Pabon" mailto:lpa...@redhat.com>>
Cc: "Gluster Devel" mailto:gluster-devel@gluster.org>>
Sent: Friday, December 11, 2015 1:54:18 AM
Subject: Re: gdploy + Heketi

Hi Luis,

On Fri, Dec 11, 2015 at 12:01 PM, Luis Pabon <lpa...@redhat.com> wrote:

> Hi Sachidananda,
>   I think there is a great opportunity to enhance GlusterFS
management by
> using gdeploy as a service which uses Heketi for volume management.
> Currently, gdeploy sets up nodes, file systems, bricks, and
volumes.  It
> does all this with input from the administrator, but it does not
support
> automated brick allocation management, failure domains, or multiple
> clusters.  On the other hand, it does have support for mounting
volumes in
> clients, and setting up multiple options on a specified volume.
>
>   I would like to add support for Heketi in the gdeploy
workflow.  This
> would enable administrators to manage clusters, nodes, disks,
and volumes
> with gdeploy based on Heketi.
>
> What do you guys think?
>


That would be great. Please let us know if you already have a plan
on how
to make these two work.

-sac




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Open source SPC-1 Workload IO Pattern

2014-11-20 Thread Luis Pabón

Hi Michael,
I noticed the code on the fio branch (that is where I grabbed the 
spc1.[hc] files :-) ).  Do you know why that branch has not been merged
to master?


- Luis

On 11/18/2014 11:56 PM, Michael O'Sullivan wrote:

Hi Justin & Luis,

We did a branch of fio that implemented this SPC-1 trace a few years ago. I can
dig up the code and the paper we wrote if it would be useful.

Cheers, Mike


On 19/11/2014, at 4:21 pm, "Justin Clift"  wrote:

Nifty. :)

(Yeah, catching up on old unread email, as the wifi in this hotel is so
bad I can barely do anything else.  8-10 second ping times to
www.gluster.org. :/)

As a thought, would there be useful analysis/visualisation capabilities
if you stored the data into a time series database (eg InfluxDB) then
used Grafana (http://grafana.org) on it?

+ Justin


On Fri, 07 Nov 2014 12:01:56 +0100
Luis Pabón  wrote:


Hi guys,
I created a simple test program to visualize the I/O pattern of
NetApp’s open source spc-1 workload generator. SPC-1 is an enterprise
OLTP type workload created by the Storage Performance Council
(http://www.storageperformance.org/results).  Some of the results are
published and available here:
http://www.storageperformance.org/results/benchmark_results_spc1_active .

NetApp created an open source version of this workload and described
it in their publication "A portable, open-source implementation of
the SPC-1 workload" (
http://www3.lrgl.uqam.ca/csdl/proceedings/iiswc/2005/9461/00/01526014.pdf
)

The code is available on Github: https://github.com/lpabon/spc1 .  All
it does at the moment is capture the pattern, no real IO is
generated. I will be working on a command line program to enable
usage on real block storage systems.  I may either extend fio or
create a tool specifically tailored to the requirements needed to run
this workload.

On github, I have an example IO pattern for a simulation running 50
mil IOs using HRRW_V2. The simulation ran with an ASU1 (Data Store)
size of 45GB, ASU2 (User Store) size of 45GB, and ASU3 (Log) size of
10GB.

- Luis

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel



--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Open source SPC-1 Workload IO Pattern

2014-11-19 Thread Luis Pabón
Interesting, I never knew about those tools.  They look great!  I
will check them out.  Thanks Justin.


- Luis


On 11/18/2014 10:20 PM, Justin Clift wrote:

Nifty. :)

(Yeah, catching up on old unread email, as the wifi in this hotel is so
bad I can barely do anything else.  8-10 second ping times to
www.gluster.org. :/)

As a thought, would there be useful analysis/visualisation capabilities
if you stored the data into a time series database (eg InfluxDB) then
used Grafana (http://grafana.org) on it?

+ Justin


On Fri, 07 Nov 2014 12:01:56 +0100
Luis Pabón  wrote:


Hi guys,
I created a simple test program to visualize the I/O pattern of
NetApp’s open source spc-1 workload generator. SPC-1 is an enterprise
OLTP type workload created by the Storage Performance Council
(http://www.storageperformance.org/results).  Some of the results are
published and available here:
http://www.storageperformance.org/results/benchmark_results_spc1_active .

NetApp created an open source version of this workload and described
it in their publication "A portable, open-source implementation of
the SPC-1 workload" (
http://www3.lrgl.uqam.ca/csdl/proceedings/iiswc/2005/9461/00/01526014.pdf
)

The code is available on Github: https://github.com/lpabon/spc1 .  All
it does at the moment is capture the pattern, no real IO is
generated. I will be working on a command line program to enable
usage on real block storage systems.  I may either extend fio or
create a tool specifically tailored to the requirements needed to run
this workload.

On github, I have an example IO pattern for a simulation running 50
mil IOs using HRRW_V2. The simulation ran with an ASU1 (Data Store)
size of 45GB, ASU2 (User Store) size of 45GB, and ASU3 (Log) size of
10GB.

- Luis

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel





___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Open source SPC-1 Workload IO Pattern

2014-11-07 Thread Luis Pabón

Hi guys,
I created a simple test program to visualize the I/O pattern of NetApp’s 
open source spc-1 workload generator. SPC-1 is an enterprise OLTP type 
workload created by the Storage Performance Council 
(http://www.storageperformance.org/results).  Some of the results are 
published and available here: 
http://www.storageperformance.org/results/benchmark_results_spc1_active .


NetApp created an open source version of this workload and described it 
in their publication "A portable, open-source implementation of the 
SPC-1 workload" ( 
http://www3.lrgl.uqam.ca/csdl/proceedings/iiswc/2005/9461/00/01526014.pdf )


The code is available on Github: https://github.com/lpabon/spc1 .  All it
does at the moment is capture the pattern, no real IO is generated. I 
will be working on a command line program to enable usage on real block 
storage systems.  I may either extend fio or create a tool specifically 
tailored to the requirements needed to run this workload.


On github, I have an example IO pattern for a simulation running 50 mil 
IOs using HRRW_V2. The simulation ran with an ASU1 (Data Store) size of 
45GB, ASU2 (User Store) size of 45GB, and ASU3 (Log) size of 10GB.
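
For anyone who wants a feel for consuming such a stream, here is a small
self-contained Go sketch that tallies generated requests per ASU. The IO
struct is a stand-in shape invented for this example; the real generator's
types may be named differently:

package main

import "fmt"

// IO is one generated request: which Application Storage Unit it targets,
// the offset, the length in blocks, and whether it is a read.
type IO struct {
    ASU    int // 1 = Data Store, 2 = User Store, 3 = Log
    Offset int64
    Blocks int
    Read   bool
}

func main() {
    // A real run would pull millions of these from the generator; a few
    // hard-coded requests are enough to show the tallying loop.
    stream := []IO{
        {ASU: 1, Offset: 4096, Blocks: 8, Read: true},
        {ASU: 3, Offset: 0, Blocks: 16, Read: false},
        {ASU: 2, Offset: 1 << 20, Blocks: 8, Read: true},
    }
    reads := make(map[int]int)
    writes := make(map[int]int)
    for _, io := range stream {
        if io.Read {
            reads[io.ASU]++
        } else {
            writes[io.ASU]++
        }
    }
    for asu := 1; asu <= 3; asu++ {
        fmt.Printf("ASU%d: %d reads, %d writes\n", asu, reads[asu], writes[asu])
    }
}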


- Luis

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Languages (was Re: Proposal for GlusterD-2.0)

2014-09-10 Thread Luis Pabón
I think the real question is: why do we depend on core files?  What do
they provide?  If we rethink how we do debugging, we may realize that
we only require core files because we are used to them and they are familiar
to us.  Now, I am not saying that core files are not useful, but I am
saying that we may be able to do most of the necessary debugging by 
other means.


For example, debugging systems running OpenStack Swift which uses Python 
stack traces has been much easier than analyzing C core files.  Just my 
experience.


I would not say that, because Go, Java, Ruby, or Python do not create
core files, it would be hard to debug them.  Instead we need to learn
new ways of debugging.  Just my $0.02 :-)
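
To give one concrete example of such means: in Go, a full dump of every
goroutine's stack is a single call (and sending SIGQUIT to a running Go
program prints much the same thing), which covers a lot of what one would
otherwise dig out of a core file. A minimal sketch:

package main

import (
    "fmt"
    "runtime"
)

// dumpGoroutines prints the stack of every goroutine in the process.
func dumpGoroutines() {
    buf := make([]byte, 1<<20) // 1 MiB is plenty for a small process
    n := runtime.Stack(buf, true)
    fmt.Printf("%s\n", buf[:n])
}

func main() {
    done := make(chan struct{})
    go func() { <-done }() // a parked goroutine, to make the dump interesting
    dumpGoroutines()
    close(done)
}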


- Luis

On 09/10/2014 09:35 PM, Justin Clift wrote:

On 11/09/2014, at 1:47 AM, Luis Pabón wrote:

Hi guys, I wanted to share my experiences with Go.  I have been using it for 
the past few months and I have to say I am very impressed.  Instead of writing 
a massive email I created a blog entry:

http://goo.gl/g9abOi

Hope this helps.


With this:

   * Core files: I have not found a way yet to create a core file
 which has enough information about the running goroutines.  I
 have been able to get a core file, but most of the information
 saved is about the Go environment running the application,
 instead of the application itself.

Is there a workaround, or some other approach that replaces core
files?

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Languages (was Re: Proposal for GlusterD-2.0)

2014-09-10 Thread Luis Pabón
Hi guys, I wanted to share my experiences with Go.  I have been using it 
for the past few months and I have to say I am very impressed.  Instead 
of writing a massive email I created a blog entry:


http://goo.gl/g9abOi

Hope this helps.

- Luis

On 09/05/2014 11:44 AM, Jeff Darcy wrote:

Does this mean we'll need to learn Go as well as C and Python?

As KP points out, the fact that consul is written in Go doesn't mean our
code needs to be ... unless we need to contribute code upstream e.g. to
add new features.  Ditto for etcd also being written in Go, ZooKeeper
being written in Java, and so on.  It's probably more of an issue that
these all require integration into our build/test environments.  At
least Go, unlike Java, doesn't require any new *run time* support.
Python kind of sits in between - it does require runtime support, but
it's much less resource-intensive and onerous than Java (no GC-tuning
hell).  Between that and the fact that it's almost always present
already, it just doesn't seem to provoke the same kind of allergic
reaction that Java does.

However, this is as good a time as any to think about what languages
we're going to use for the project going forward.  While there are many
good reasons for our I/O path to remain in Plain Old C (yes I'm
deliberately avoiding the C++ issue), many of those reasons apply only
weakly to other parts of the code - not only management code, but also
"offline" processes like self heal and rebalancing.  Some people might
already be aware that I've used Python for the reconciliation component
of NSR, for example, and that version is in almost every way better than
the C version it replaces.  When we need to interface with code written
in other languages, or even interact with communities where other
languages are spoken more fluently than C, it's pretty natural to
consider using those languages ourselves.  Let's look at some of the
alternatives.

  * C++
Code is highly compatible with C, programming styles and idioms less
so.  Not prominent in most areas we care about.

  * Java
The "old standard" for a lot of distributed systems - e.g.  the
entire Hadoop universe, Cassandra, etc.  Also a great burden as
discussed previously.

  * Go
Definitely the "up and comer" in distributed systems, for which it
was (partly) designed.  Easy for C programmers to pick up, and also
popular among (former?) Python folks.  Light on resources and
dependencies.

  * JavaScript
Ubiquitous.  Common in HTTP-ish "microservice" situations, but not so
much in true distributed systems.

  * Ruby
Much like JavaScript as far as we're concerned, but less ubiquitous.

  * Erlang
Functional, designed for highly reliable distributed systems,
significant use in related areas (e.g. Riak).

Obviously, there are many more, but issues of compatibility and talent
availability weigh heavier for most than for Erlang (which barely made
the list as it is despite its strengths).  Of these, the ones without
serious drawbacks are JavaScript and Go.  As popular as JS is in other
specialties, I just don't feel any positive "pull" to use it in anything
we do.  As a language it's notoriously loose about many things (e.g.
equality comparisons) and prone to the same "callback hell" from which
we already suffer.

Go is an entirely different story.  We're already bumping up against
other projects that use it, and that's no surprise considering how
strong the uptake has been among other systems programmers.
Language-wise, goroutines might help get us out of callback hell, and it
has other features such as channels and "defer" that might also support
a more productive style for our own code.  I know that several in the
group are already eager to give it a try.  While we shouldn't do so for
the "cool factor" alone, for new code that's not in the I/O path the
potential productivity benefits make it an option well worth exploring.
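
To make the "out of callback hell" point concrete, here is a small sketch
(not from any Gluster code) of the style goroutines and channels allow:
each operation runs concurrently, and the continuation is plain
straight-line code instead of a chain of callbacks:

package main

import "fmt"

// fetch stands in for an operation that would take a completion callback
// in C; here it just delivers its result over a channel.
func fetch(name string, out chan<- string) {
    out <- "result of " + name
}

func main() {
    out := make(chan string)
    ops := []string{"lookup", "open", "read"}
    for _, op := range ops {
        go fetch(op, out) // each operation runs in its own goroutine
    }
    for range ops {
        fmt.Println(<-out) // the "continuation" is ordinary sequential code
    }
}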
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Feature proposal - FS-Cache support in FUSE

2014-09-05 Thread Luis Pabón

Hi Vimal,
I have a simple suggestion.  I think it would be great to first 
test FS-Cache with the NFS and Samba servers in GlusterFS.  I think just 
proving these technologies work well together without any issues would
be a great start. I am not sure if this information is already available
with GlusterFS specifically, but it still would be great to gain this 
experience.
Once you have enough experience with FS-Cache and the standard
protocols for GlusterFS, then you can battle FUSE.


Just my $0.02. Either way it should be fun and I look forward to your 
results :-).


- Luis

On 09/03/2014 03:25 AM, Vimal A R wrote:

David / Dan / Anand,

Thank you for all the suggestions. I will make a note of the points, 
and will get back here if I have anything to report or doubts.


Thanks a lot,

Vimal


On Tuesday, 2 September 2014 8:50 PM, Anand Avati  
wrote:






On Mon, Sep 1, 2014 at 6:07 AM, Vimal A R wrote:


Hello fuse-devel / fs-cache / gluster-devel lists,

I would like to propose the idea of implementing FS-Cache support
in the fuse kernel module, which I am planning to do as part of my
UG university course. This proposal is by no means final, since I
have just started to look into this.

There are several user-space filesystems which are based on the
FUSE kernel module. As of now, if I understand correctly, the only
networked filesystems having FS-Cache support are NFS and AFS.

Implementing support hooks for fs-cache in the fuse module would
provide networked filesystems such as GlusterFS the benefit of  a
client-side caching mechanism, which should decrease the access times.


If you are planning to test this with GlusterFS, note that one of the 
first challenges would be to have persistent filehandles in FUSE. 
While GlusterFS has a notion of a persistent handle (GFID, 128bit) 
which is constant across clients and remounts, the FUSE kernel module 
is presented a transient LONG (64/32 bit) which is specific to the 
mount instance (actually, the address of the userspace inode_t within 
glusterfs process - allows for constant time filehandle resolution).


This would be a challenge with any FUSE based filesystem which has 
persistent filehandles larger than 64bit.
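
A small Go sketch of the mismatch being described (names invented for the
example): the persistent 128-bit GFID has to be squeezed through a 64-bit
handle that is only meaningful to the mount instance that minted it, so
the handle is useless as a persistent cache key:

package main

import "fmt"

// GFID is GlusterFS's persistent 128-bit file identity, constant across
// clients and remounts.
type GFID [16]byte

// handleTable mints transient 64-bit handles for GFIDs, the way a mount
// instance does (glusterfs actually uses the address of the userspace
// inode_t). A remount starts with an empty table and new handles.
type handleTable struct {
    next   uint64
    byGFID map[GFID]uint64
}

func (t *handleTable) handleFor(g GFID) uint64 {
    if h, ok := t.byGFID[g]; ok {
        return h
    }
    t.next++
    t.byGFID[g] = t.next
    return t.next
}

func main() {
    t := &handleTable{byGFID: make(map[GFID]uint64)}
    var g GFID
    copy(g[:], "0123456789abcdef")
    fmt.Println(t.handleFor(g)) // stable within this mount instance...
    fmt.Println(t.handleFor(g)) // ...but a remount would mint a fresh one
}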


Thanks

When enabled, FS-Cache would maintain a virtual indexing tree to
cache the data or object-types per network FS. Indices in the tree
are used by FS-Cache to find objects faster. The tree or index
structure under the main network FS index depends on the
filesystem. Cookies are used to represent the indices, the pages etc..

The tree structure would be as following:

a) The virtual index tree maintained by fs-cache would look like:

* FS-Cache master index -> The network-filesystem indice (NFS/AFS
etc..) -> per-share indices -> File-handle indices -> Page indices

b) In case of FUSE-based filesystems, the tree would be similar to :

* FS-Cache master index -> FUSE indice -> Per FS indices ->
file-handle indices -> page indices.

c) In case of FUSE based filesystems as GlusterFS, the tree would as :

* FS-Cache master index -> FUSE indice (fuse.glusterfs) ->
GlusterFS volume ID (a UUID exists for each volume) - > GlusterFS
file-handle indices (based on the GFID of a file) -> page indices.

The idea is to enable FUSE to work with the FS-Cache network
filesystem API, which is documented at

'https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/filesystems/caching/netfs-api.txt'.

The implementation of FS-Cache support in NFS can be taken as a
guideline to understand and start off.

I will reply to this mail with any other updates that would come
up whilst pursuing this further. I request any sort of
feedback/suggestions, ideas, any pitfalls etc.. that can help in
taking this further.

Thank you,

Vimal

References:
*

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/filesystems/caching/fscache.txt
*

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/filesystems/caching/netfs-api.txt
*

https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/filesystems/caching/object.txt
* http://people.redhat.com/dhowells/fscache/FS-Cache.pdf
* http://people.redhat.com/steved/fscache/docs/HOWTO.txt
* https://en.wikipedia.org/wiki/CacheFS
* https://lwn.net/Articles/160122/
* http://www.linux-mag.com/id/7378/
___
Gluster-devel mailing list
Gluster-devel@gluster.org 
http://supercolony.gluster.org/mailman/listinfo/gluster-devel






___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] NetBSD autobuild and cmockery2

2014-07-23 Thread Luis Pabón
Hi Emmanuel.  I have a bug and a fix where cmockery2 was being linked 
with all glusterfs applications.  Maybe this fixes your issue:


http://review.gluster.org/#/c/8340/

- Luis

On 07/23/2014 11:47 AM, Emmanuel Dreyfus wrote:

On Wed, Jul 23, 2014 at 01:09:57PM +, Emmanuel Dreyfus wrote:

I need help here: that restores the build, but I also had to fiddle with
CFLAGS and LIBS, and I am not sure I did it in the intended way. I am
probably wrong since now glusterd breaks on startup because of cmockery2:
Guard block of 0xbb28e080 size=0 allocated by (null):0 at 0xbb28e070 is corrupt
ERROR: logging.c:2077 Failure!

It chokes on a FREE (msgstr) that is perfectly valid. The pointer was
obtained by vasprintf(); is it possible it fails to catch allocations
through vasprintf() and considers the block was not allocated?




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Cmockery2 in GlusterFS

2014-07-22 Thread Luis Pabón

Hi Lala,
No problem at all, I just want to make sure that developers 
understand the importance of the tool.  On the topic of RPMs, they have 
a really cool section called "%check", which is currently being used to 
run the unit tests after the glusterfs RPM is created. Normally 
developers test only on certain systems and certain architectures, but 
by having the "%check" section, we can guarantee a level of quality when 
an RPM is created on an architecture or operating system version which 
is not normally used for development.  This actually worked really well 
for cmockery2 when the RPM was first introduced to Fedora.  The %check 
section ran the unit tests on two architectures that I do not have, and 
both of them found issues on ARM32 and s390 architectures.  Without the 
%check section, cmockery2 would have been released but would not have
been usable.  This is why cmockery2 is set in the "BuildRequires"
section.
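
(For anyone unfamiliar with the mechanism: a %check section in a spec file
is usually just the "%check" marker followed by the command that runs the
test suite, e.g. "make check"; if that command fails, the RPM build aborts.
This is an illustrative note, not a verbatim quote of glusterfs.spec.)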


- Luis

On 07/22/2014 07:34 AM, Lalatendu Mohanty wrote:

On 07/22/2014 04:35 PM, Luis Pabón wrote:
I understand that when something is new and different, it is most 
likely blamed for anything wrong that happens.  I strongly propose that
we do not do this, and instead work to learn more about the tool.


Cmockery2 is a tool that is as important as the compiler.  It provides
an extremely easy method to determine the quality of the software
after it has been constructed, and therefore it has been made a
requirement of the build.  Making it optional undermines its
importance, and could in turn make it useless.




Hey Luis,

The intention was not to undermine or give less importance to
Cmockery2. Sorry if it looked like that.


However, I was thinking from a flexibility point of view. I am assuming
that in the future it would be part of the upstream regression test suite,
so each patch will go through full unit testing by default. So when somebody
is creating RPMs from pristine sources, we should be able to do that
without Cmockery2, because the tests were already run through
Jenkins/gerrit.


The question is: do we need Cmockery every time we compile the glusterfs
source? If the answer is yes, then I am fine with the current code.


Cmockery2 is available for all supported EPEL/Fedora versions.  For 
any other distribution or operating system, it takes about 3 mins to 
download and compile.


Please let me know if you have any other questions.

- Luis

On 07/22/2014 02:23 AM, Lalatendu Mohanty wrote:

On 07/21/2014 10:48 PM, Harshavardhana wrote:

Cmockery2 is a hard dependency before GlusterFS can be compiled in
upstream master now - could we make it conditional
and enable it if necessary, since we know we do not have the cmockery2
packages available on all systems?


+1, we need to make it conditional and enable it if necessary.  I am 
also not sure if we have "cmockery2-devel" in el5, el6. If not, the build
will fail.


On Mon, Jul 21, 2014 at 10:16 AM, Luis Pabon  
wrote:

Niels you are correct. Let me take a look.

Luis


-Original Message-
From: Niels de Vos [nde...@redhat.com]
Received: Monday, 21 Jul 2014, 10:41AM
To: Luis Pabon [lpa...@redhat.com]
CC: Anders Blomdell [anders.blomd...@control.lth.se];
gluster-devel@gluster.org
Subject: Re: [Gluster-devel] Cmockery2 in GlusterFS


On Mon, Jul 21, 2014 at 04:27:18PM +0200, Anders Blomdell wrote:

On 2014-07-21 16:17, Anders Blomdell wrote:

On 2014-07-20 16:01, Niels de Vos wrote:

On Fri, Jul 18, 2014 at 02:52:18PM -0400, Luis Pabón wrote:

Hi all,
 A few months ago, the unit test framework based on 
cmockery2 was
in the repo for a little while, then removed while we improved 
the

packaging method.  Now support for cmockery2 (
http://review.gluster.org/#/c/7538/ ) has been merged into the 
repo

again.  This will most likely require you to install cmockery2 on
your development systems by doing the following:

* Fedora/EPEL:
$ sudo yum -y install cmockery2-devel

* All other systems please visit the following page:
https://github.com/lpabon/cmockery2/blob/master/doc/usage.md#installation 



Here is also some information about Cmockery2 and how to use it:

* Introduction to Unit Tests in C Presentation:
http://slides-lpabon.rhcloud.com/feb24_glusterfs_unittest.html#/
* Cmockery2 Usage Guide:
https://github.com/lpabon/cmockery2/blob/master/doc/usage.md
* Using Cmockery2 with GlusterFS:
https://github.com/gluster/glusterfs/blob/master/doc/hacker-guide/en-US/markdown/unittest.md 




When starting out writing unit tests, I would suggest writing 
unit
tests for non-xlator interface files when you start. Once you 
feel
more comfortable writing unit tests, then move to writing them 
for

the xlators interface files.
Awesome, many thanks! I'd like to add some unittests for the 
RPC and

NFS
layer. Several functions (like ip-address/netmask matching for 
ACLs)

look very suitable.

Did you have any particular functions in mind that you would 
like to

see
unittests for? If so, maybe you c

Re: [Gluster-devel] Cmockery2 in GlusterFS

2014-07-22 Thread Luis Pabón
I understand that when something is new and different, it is most likely 
blamed for anything wrong that happens.  I strongly propose that we do not
do this, and instead work to learn more about the tool.


Cmockery2 is a tool that is as important as the compiler.  It provides an 
extremely easy method to determine the quality of the software after it 
has been constructed, and therefore it has been made a requirement of the 
build.  Making it optional undermines its importance, and could in turn 
make it useless.


Cmockery2 is available for all supported EPEL/Fedora versions.  For any 
other distribution or operating system, it takes about 3 mins to 
download and compile.


Please let me know if you have any other questions.

- Luis

On 07/22/2014 02:23 AM, Lalatendu Mohanty wrote:

On 07/21/2014 10:48 PM, Harshavardhana wrote:

Cmockery2 is a hard dependency before GlusterFS can be compiled in
upstream master now - could we make it conditional and enable it if
necessary, since we know we do not have the cmockery2 packages
available on all systems?


+1, we need to make it conditional and enable it if necessary.  I am 
also not sure if we have "cmockery2-devel" in el5, el6. If not, the 
build will fail.



On Mon, Jul 21, 2014 at 10:16 AM, Luis Pabon  wrote:

Niels you are correct. Let me take a look.

Luis


-Original Message-
From: Niels de Vos [nde...@redhat.com]
Received: Monday, 21 Jul 2014, 10:41AM
To: Luis Pabon [lpa...@redhat.com]
CC: Anders Blomdell [anders.blomd...@control.lth.se];
gluster-devel@gluster.org
Subject: Re: [Gluster-devel] Cmockery2 in GlusterFS


On Mon, Jul 21, 2014 at 04:27:18PM +0200, Anders Blomdell wrote:

On 2014-07-21 16:17, Anders Blomdell wrote:

On 2014-07-20 16:01, Niels de Vos wrote:

On Fri, Jul 18, 2014 at 02:52:18PM -0400, Luis Pabón wrote:

Hi all,
 A few months ago, the unit test framework based on cmockery2 was
in the repo for a little while, then removed while we improved the
packaging method.  Now support for cmockery2 (
http://review.gluster.org/#/c/7538/ ) has been merged into the repo
again.  This will most likely require you to install cmockery2 on
your development systems by doing the following:

* Fedora/EPEL:
$ sudo yum -y install cmockery2-devel

* All other systems please visit the following page:
https://github.com/lpabon/cmockery2/blob/master/doc/usage.md#installation 



Here is also some information about Cmockery2 and how to use it:

* Introduction to Unit Tests in C Presentation:
http://slides-lpabon.rhcloud.com/feb24_glusterfs_unittest.html#/
* Cmockery2 Usage Guide:
https://github.com/lpabon/cmockery2/blob/master/doc/usage.md
* Using Cmockery2 with GlusterFS:
https://github.com/gluster/glusterfs/blob/master/doc/hacker-guide/en-US/markdown/unittest.md 




When starting out writing unit tests, I would suggest writing unit
tests for non-xlator interface files when you start. Once you feel
more comfortable writing unit tests, then move to writing them for
the xlators interface files.

Awesome, many thanks! I'd like to add some unittests for the RPC and NFS
layer. Several functions (like ip-address/netmask matching for ACLs)
look very suitable.

Did you have any particular functions in mind that you would like to see
unittests for? If so, maybe you can file some bugs for the different
tests so that we won't forget about it? Depending on the tests, these
bugs may get the EasyFix keyword if there is a clear description and
some pointers to examples.
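
As a concrete illustration of the kind of test that suggestion implies, 
here is a rough sketch; mask_match() is a made-up stand-in, not the real 
RPC/NFS ACL helper, and it assumes cmockery2's usual install layout 
(header under cmockery/, linked with -lcmockery):

#include <stdarg.h>
#include <stddef.h>
#include <setjmp.h>
#include <stdint.h>
#include <cmockery/cmockery.h>

/* Hypothetical helper: does addr fall inside net/bits? */
static int mask_match(uint32_t addr, uint32_t net, int bits)
{
        uint32_t mask = bits ? 0xffffffffu << (32 - bits) : 0;
        return (addr & mask) == (net & mask);
}

static void test_mask_match(void **state)
{
        (void) state;
        assert_true(mask_match(0x0a000001, 0x0a000000, 8));  /* 10.0.0.1 in 10/8 */
        assert_false(mask_match(0x0b000001, 0x0a000000, 8)); /* 11.0.0.1 is not */
        assert_true(mask_match(0xdeadbeef, 0x00000000, 0));  /* /0 matches all */
}

int main(void)
{
        const UnitTest tests[] = { unit_test(test_mask_match) };
        return run_tests(tests);
}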

Looks like parts of cmockery were forgotten in glusterfs.spec.in:

# rpm -q -f  `which gluster`
glusterfs-cli-3.7dev-0.9.git5b8de97.fc20.x86_64
# ldd `which gluster`
  linux-vdso.so.1 =>  (0x74dfe000)
  libglusterfs.so.0 => /lib64/libglusterfs.so.0 (0x7fe034cc4000)
  libreadline.so.6 => /lib64/libreadline.so.6 (0x7fe034a7d000)
  libncurses.so.5 => /lib64/libncurses.so.5 (0x7fe034856000)
  libtinfo.so.5 => /lib64/libtinfo.so.5 (0x7fe03462c000)
  libgfxdr.so.0 => /lib64/libgfxdr.so.0 (0x7fe034414000)
  libgfrpc.so.0 => /lib64/libgfrpc.so.0 (0x7fe0341f8000)
  libxml2.so.2 => /lib64/libxml2.so.2 (0x7fe033e8f000)
  libz.so.1 => /lib64/libz.so.1 (0x7fe033c79000)
  libm.so.6 => /lib64/libm.so.6 (0x7fe033971000)
  libdl.so.2 => /lib64/libdl.so.2 (0x7fe03376d000)
  libcmockery.so.0 => not found
  libpthread.so.0 => /lib64/libpthread.so.0 (0x7fe03354f000)
  libcrypto.so.10 => /lib64/libcrypto.so.10 (0x7fe033168000)
  libc.so.6 => /lib64/libc.so.6 (0x7fe032da9000)
  libcmockery.so.0 => not found
  libcmockery.so.0 => not found
  libcmockery.so.0 => not found
  liblzma.so.5 => /lib64/liblzma.so.5 (0x7fe032b82000)
  /lib64/ld-linux-x86-64.so.2 (0x7fe0351f1000)

Should I file a bug report or could someone on the fast-lane fix this?

My bad (installation with --nodeps --force :-()


Re: [Gluster-devel] Cmockery2 in GlusterFS

2014-07-21 Thread Luis Pabón
Yes, it is a simple bug.  I filed 
https://bugzilla.redhat.com/show_bug.cgi?id=1121822 , thank you very 
much for finding this, Anders.  I have sent a fix.


- Luis

On 07/21/2014 01:16 PM, Luis Pabon wrote:

Niels you are correct. Let me take a look.

Luis


-Original Message-
From: Niels de Vos [nde...@redhat.com]
Received: Monday, 21 Jul 2014, 10:41AM
To: Luis Pabon [lpa...@redhat.com]
CC: Anders Blomdell [anders.blomd...@control.lth.se]; 
gluster-devel@gluster.org

Subject: Re: [Gluster-devel] Cmockery2 in GlusterFS


On Mon, Jul 21, 2014 at 04:27:18PM +0200, Anders Blomdell wrote:
> On 2014-07-21 16:17, Anders Blomdell wrote:
> > On 2014-07-20 16:01, Niels de Vos wrote:
> >> On Fri, Jul 18, 2014 at 02:52:18PM -0400, Luis Pabón wrote:
> >>> Hi all,
> >>> A few months ago, the unit test framework based on cmockery2 was
> >>> in the repo for a little while, then removed while we improved the
> >>> packaging method.  Now support for cmockery2 (
> >>> http://review.gluster.org/#/c/7538/ ) has been merged into the repo
> >>> again.  This will most likely require you to install cmockery2 on
> >>> your development systems by doing the following:
> >>>
> >>> * Fedora/EPEL:
> >>> $ sudo yum -y install cmockery2-devel
> >>>
> >>> * All other systems please visit the following page:
> >>> https://github.com/lpabon/cmockery2/blob/master/doc/usage.md#installation
> >>>
> >>> Here is also some information about Cmockery2 and how to use it:
> >>>
> >>> * Introduction to Unit Tests in C Presentation:
> >>> http://slides-lpabon.rhcloud.com/feb24_glusterfs_unittest.html#/
> >>> * Cmockery2 Usage Guide:
> >>> https://github.com/lpabon/cmockery2/blob/master/doc/usage.md
> >>> * Using Cmockery2 with GlusterFS:
> >>> https://github.com/gluster/glusterfs/blob/master/doc/hacker-guide/en-US/markdown/unittest.md
> >>>
> >>>
> >>> When starting out writing unit tests, I would suggest writing unit
> >>> tests for non-xlator interface files when you start.  Once you feel
> >>> more comfortable writing unit tests, then move to writing them for
> >>> the xlators interface files.
> >>
> >> Awesome, many thanks! I'd like to add some unittests for the RPC and NFS
> >> layer. Several functions (like ip-address/netmask matching for ACLs)
> >> look very suitable.
> >>
> >> Did you have any particular functions in mind that you would like to see
> >> unittests for? If so, maybe you can file some bugs for the different
> >> tests so that we won't forget about it? Depending on the tests, these
> >> bugs may get the EasyFix keyword if there is a clear description and
> >> some pointers to examples.
> >
> > Looks like parts of cmockery were forgotten in glusterfs.spec.in:
> >
> > # rpm -q -f  `which gluster`
> > glusterfs-cli-3.7dev-0.9.git5b8de97.fc20.x86_64
> > # ldd `which gluster`
> >  linux-vdso.so.1 =>  (0x74dfe000)
> >  libglusterfs.so.0 => /lib64/libglusterfs.so.0 (0x7fe034cc4000)
> >  libreadline.so.6 => /lib64/libreadline.so.6 (0x7fe034a7d000)
> >  libncurses.so.5 => /lib64/libncurses.so.5 (0x7fe034856000)
> >  libtinfo.so.5 => /lib64/libtinfo.so.5 (0x7fe03462c000)
> >  libgfxdr.so.0 => /lib64/libgfxdr.so.0 (0x7fe034414000)
> >  libgfrpc.so.0 => /lib64/libgfrpc.so.0 (0x7fe0341f8000)
> >  libxml2.so.2 => /lib64/libxml2.so.2 (0x7fe033e8f000)
> >  libz.so.1 => /lib64/libz.so.1 (0x7fe033c79000)
> >  libm.so.6 => /lib64/libm.so.6 (0x7fe033971000)
> >  libdl.so.2 => /lib64/libdl.so.2 (0x7fe03376d000)
> >  libcmockery.so.0 => not found
> >  libpthread.so.0 => /lib64/libpthread.so.0 (0x7fe03354f000)
> >  libcrypto.so.10 => /lib64/libcrypto.so.10 (0x7fe033168000)
> >  libc.so.6 => /lib64/libc.so.6 (0x7fe032da9000)
> >  libcmockery.so.0 => not found
> >  libcmockery.so.0 => not found
> >  libcmockery.so.0 => not found
> >  liblzma.so.5 => /lib64/liblzma.so.5 (0x7fe032b82000)
> >  /lib64/ld-linux-x86-64.so.2 (0x7fe0351f1000)
> >
> > Should I file a bug report or could someone on the fast-lane fix this?
> My bad (installation with --nodeps --force :-()

Actually, I was not expecting a dependency on cmockery2. My
understanding was that only some temporary test-applications would be
linked with libcmockery and not any binaries that would get packaged in
the RPMs.

Luis, could you clarify that?

Thanks,
Niels


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Cmockery2 in GlusterFS

2014-07-18 Thread Luis Pabón

Hi all,
A few months ago, the unit test framework based on cmockery2 was in 
the repo for a little while, then removed while we improved the 
packaging method.  Now support for cmockery2 ( 
http://review.gluster.org/#/c/7538/ ) has been merged into the repo 
again.  This will most likely require you to install cmockery2 on your 
development systems by doing the following:


* Fedora/EPEL:
$ sudo yum -y install cmockery2-devel

* All other systems please visit the following page: 
https://github.com/lpabon/cmockery2/blob/master/doc/usage.md#installation


Here is also some information about Cmockery2 and how to use it:

* Introduction to Unit Tests in C Presentation: 
http://slides-lpabon.rhcloud.com/feb24_glusterfs_unittest.html#/
* Cmockery2 Usage Guide: 
https://github.com/lpabon/cmockery2/blob/master/doc/usage.md
* Using Cmockery2 with GlusterFS: 
https://github.com/gluster/glusterfs/blob/master/doc/hacker-guide/en-US/markdown/unittest.md 



When starting out writing unit tests, I would suggest writing unit tests 
for non-xlator interface files when you start.  Once you feel more 
comfortable writing unit tests, then move to writing them for the 
xlators interface files.
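
To make the first step concrete, a minimal test might look like the 
sketch below; first_test.c and int_add() are made-up names for 
illustration, and it assumes the packaging above puts the header under 
cmockery/ and the library links as -lcmockery:

/* first_test.c -- build with: gcc -o first_test first_test.c -lcmockery */
#include <stdarg.h>
#include <stddef.h>
#include <setjmp.h>
#include <cmockery/cmockery.h>

/* int_add() stands in for a small non-xlator helper under test. */
static int int_add(int a, int b)
{
        return a + b;
}

static void test_int_add(void **state)
{
        (void) state;
        assert_int_equal(int_add(2, 2), 4);
        assert_int_equal(int_add(-2, 2), 0);
}

int main(void)
{
        const UnitTest tests[] = { unit_test(test_int_add) };
        return run_tests(tests);
}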


Please let me know if you have any questions.

- Luis
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel