[Gluster-devel] Nos vemos!

2017-09-25 Thread Luis Pabon
Hi community,
  It has been a great time working together on GlusterFS and Heketi these
past few years. I remember starting on Heketi to enable GlusterFS for
Manila, but now I will be moving on to work at Portworx and their
containerized storage product.
  I am passing leadership of the Heketi project to Michael Adam, and I know
it is in good hands at Red Hat.

Thank you all again for making Heketi awesome.

- Luis

PS: If you didn't know, Heketi is a Taino word (the Taino are the indigenous
people of the Caribbean) which means "one". Other Taino words include barbecue,
hurricane, hammock, and many others.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] Renaming Heketi CLI

2016-04-04 Thread Luis Pabon
Thanks Niels.  I was explaining to another person offline that Heketi is like a 
GlusterFS cloud volume manager.  So maybe we should call it gcvm:

$ gcvm volume create ...?

What do you think?

- Original Message -
From: "Niels de Vos" <nde...@redhat.com>
To: "Luis Pabon" <lpa...@redhat.com>
Cc: "Gluster Devel" <gluster-devel@gluster.org>
Sent: Monday, April 4, 2016 5:53:43 AM
Subject: Re: [Gluster-devel] Renaming Heketi CLI

On Mon, Apr 04, 2016 at 12:36:56AM -0400, Luis Pabon wrote:
> Hi all,
>   As you may know, Heketi (https://github.com/heketi/heketi) is a service 
> that allows volumes to be created on demand from any number of GlusterFS 
> clusters.  The program heketi-cli was at first created as a sample 
> application for developers, but now that it will be used by end users, I am 
> not sure the name is still appropriate; it is probably confusing.  My 
> question is: should the program still be called "heketi-cli"?  I am thinking 
> it should be called something along the lines of a GlusterFS-family-style 
> program:
> 
> Here are some names for the cli:
> 
> $ glfs-heketi volume create ...

I like this one best. "glfs" is a common abbreviation for GlusterFS.

> or
> 
> $ glfs volume create ...

When naming the binary "glfs", it needs to be very modular, similar to
how "git" can run additional binaries. Other projects may want to use
the common "glfs" name too.

> or
> 
> $ glusterfs-cloud volume create ...

My 2nd choice.

> or
> 
> $ gfsc volume create ...

GFS is like the Global File System, not GlusterFS.

> or
> 
> $ gfscloud volume create ...
> 
> What do you think?

Thanks,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Renaming Heketi CLI

2016-04-03 Thread Luis Pabon
Hi all,
  As you may know, Heketi (https://github.com/heketi/heketi) is a service that 
allows volumes to be created on demand from any number of GlusterFS clusters.  
The program heketi-cli was at first created as a sample application for 
developers, but now that it will be used by end users, I am not sure the name 
is still appropriate; it is probably confusing.  My question is: should the 
program still be called "heketi-cli"?  I am thinking it should be called 
something along the lines of a GlusterFS-family-style program:

Here are some names for the cli:

$ glfs-heketi volume create ...

or

$ glfs volume create ...

or

$ glusterfs-cloud volume create ...

or

$ gfsc volume create ...

or

$ gfscloud volume create ...

What do you think?

- Luis
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] REST API authentication: JWT - Shared Token vs Shared Secret

2016-03-04 Thread Luis Pabon
For the shared secret approach, you can create a simple 10-line Python program. 
Here is an example: https://github.com/heketi/heketi/wiki/API#example
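For illustration, here is a minimal sketch of such a client, assuming the PyJWT 
and requests libraries; the endpoint, user, and secret below are placeholders, 
and the exact claims expected are described on the wiki page above:

    import hashlib
    import time

    import jwt       # PyJWT
    import requests

    SERVER = 'http://localhost:8080'   # placeholder Heketi endpoint
    USER = 'admin'                     # placeholder issuer
    SECRET = 'my secret'               # placeholder shared secret

    def signed_headers(method, path):
        # qsh ties the token to this particular request (method + path)
        qsh = hashlib.sha256(('%s&%s' % (method, path)).encode()).hexdigest()
        claims = {'iss': USER,
                  'iat': int(time.time()),
                  'exp': int(time.time()) + 600,
                  'qsh': qsh}
        token = jwt.encode(claims, SECRET, algorithm='HS256')
        if isinstance(token, bytes):   # older PyJWT versions return bytes
            token = token.decode('utf-8')
        return {'Authorization': 'bearer ' + token}

    r = requests.get(SERVER + '/clusters',
                     headers=signed_headers('GET', '/clusters'))
    print(r.status_code, r.text)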

- Luis

- Original Message -
From: "Aravinda" <avish...@redhat.com>
To: "Kaushal M" <kshlms...@gmail.com>
Cc: "Luis Pabon" <lpa...@redhat.com>, "Kanagaraj Mayilsamy" 
<kmayi...@redhat.com>, "Gluster Devel" <gluster-devel@gluster.org>, "Kaushal 
Madappa" <kmada...@redhat.com>
Sent: Thursday, March 3, 2016 11:04:17 PM
Subject: Re: [Gluster-devel] REST API authentication: JWT - Shared Token vs 
Shared Secret


regards
Aravinda

On 03/03/2016 05:58 PM, Kaushal M wrote:
> On Thu, Mar 3, 2016 at 2:39 PM, Aravinda <avish...@redhat.com> wrote:
>> Thanks.
>>
>> We can use the shared secret approach if the https requirement can be
>> completely avoided. I am not sure how to use the same SSL certificates on
>> all the nodes of the cluster. (REST API server patch set 2 was written
>> based on the shared secret method with custom HMAC signing:
>> http://review.gluster.org/#/c/13214/2/in_progress/management_rest_api.md)
>>
>> Listing the steps involved in each side with both the
>> approaches. (Skipping Register steps since it is common to both)
>>
>> Shared Token:
>> -
>> Client side:
>> 1. Add the saved token to the Authorization header and initiate a REST call.
>> 2. If unauthorized, call /token to get the access_token again and repeat
>> step 1.
>>
>> Server side:
>> 1. Verify JWT using the Server's secret.
> You forgot the part where server generates the token. :)
Oh Yes. I missed that step :)
>
>>
>> Shared Secret:
>> --
>> Client side:
>> 1. Hash the Method + URL + Params and include it in the qsh claim of the JWT
>> 2. Using the shared secret, create the JWT.
>> 3. Add the generated JWT to the Authorization header and initiate the
>> REST call
>>
>> Server side:
>> 1. Recalculate the hash using the same details (Method + URL + Params) and
>> verify it against the received qsh
>> 2. Do not trust any claims; validate against the values stored on the
>> server (role/group/capabilities)
>> 3. Verify the JWT using the shared secret
>>
> Anyways, I'm still not sure which of the two approaches I like better.
> My google research on this topic (ReST api authentication) led to many
> results which followed a token approach.
> This causes me to lean slightly towards shared tokens.
Yeah, the shared token is a widely used method, but https is mandatory to 
avoid URL tampering, and using SSL certs in a multi-node setup is 
tricky (using the same cert files across all nodes).

With the shared secret approach, it is difficult to test using curl or 
Postman (https://www.getpostman.com/).

>
> Since I can't decide, I plan to write down the workflows involved for
> both and try to compare them that way.
> It would probably help arrive at a decision. I'll try to share this
> ASAP (probably this weekend).
Thanks.
>
>> regards
>> Aravinda
>>
>>
>> On 03/03/2016 11:49 AM, Luis Pabon wrote:
>>> Hi Aravinda,
>>> Very good summary.  I would like to rephrase a few parts.
>>>
>>> On the shared token approach, the disadvantage is that the server will be
>>> more complicated (not *really* complicated, just more than with the shared
>>> secret approach), because it would need a login mechanism.  The server
>>> would have to both authenticate and authorize the user.  Once this has
>>> occurred, a token with an expiration date can be handed back to the caller.
>>>
>>> On the shared secret approach, I do not consider the client creating a JWT
>>> a disadvantage (unless you are doing it in C); it is pretty trivial for
>>> programs written in Python, Go, JavaScript, etc. to create a JWT on each call.
>>>
>>> - Luis
>>>
>>> - Original Message -
>>> From: "Aravinda" <avish...@redhat.com>
>>> To: "Gluster Devel" <gluster-devel@gluster.org>
>>> Cc: "Kaushal Madappa" <kmada...@redhat.com>, "Atin Mukherjee"
>>> <amukh...@redhat.com>, "Luis Pabon" <lpa...@redhat.com>,
>>> kmayi...@redhat.com, "Prashanth Pai" <p...@redhat.com>
>>> Sent: Wednesday, March 2, 2016 1:53:00 AM
>>> Subject: REST API authentication: JWT - Shared Token vs Shared Secret
>>>
>>> Hi,
>>>
>>> For Gluster REST project we are planning to use JSON Web Token for
>>> authentication. There are two approaches to use JWT, please help us to
>>> evaluate between these two options.
>>>

Re: [Gluster-devel] REST API authentication: JWT - Shared Token vs Shared Secret

2016-03-02 Thread Luis Pabon
Hi Aravinda,
  Very good summary.  I would like to rephrase a few parts.

On the shared token approach, the disadvantage is that the server will be more 
complicated (not *really* complicated, just more than with the shared secret 
approach), because it would need a login mechanism.  The server would have to 
both authenticate and authorize the user.  Once this has occurred, a token 
with an expiration date can be handed back to the caller.

On the shared secret approach, I do not consider the client creating a JWT a 
disadvantage (unless you are doing it in C); it is pretty trivial for programs 
written in Python, Go, JavaScript, etc. to create a JWT on each call.

- Luis

- Original Message -
From: "Aravinda" <avish...@redhat.com>
To: "Gluster Devel" <gluster-devel@gluster.org>
Cc: "Kaushal Madappa" <kmada...@redhat.com>, "Atin Mukherjee" 
<amukh...@redhat.com>, "Luis Pabon" <lpa...@redhat.com>, kmayi...@redhat.com, 
"Prashanth Pai" <p...@redhat.com>
Sent: Wednesday, March 2, 2016 1:53:00 AM
Subject: REST API authentication: JWT - Shared Token vs Shared Secret

Hi,

For Gluster REST project we are planning to use JSON Web Token for
authentication. There are two approaches to use JWT, please help us to
evaluate between these two options.

http://jwt.io/

For both approaches, the user/app will register with a Username and Secret.

Shared Token Approach: (default as per the JWT website 
http://jwt.io/introduction/)
--
The server will generate a JWT with a pre-configured expiry once the user logs
in to the server by providing the Username and Secret. The Secret is encrypted
and stored on the server. Clients should include that JWT in all requests.

Advantages:
1. Clients need not worry about JWT signing.
2. A single secret on the server side can be used for all token verification.
3. This is a stateless authentication mechanism, as the user state is
never saved in server memory (http://jwt.io/introduction/).
4. The Secret is encrypted and stored on the server.

Disadvantages:
1. URL tampering can be prevented only by using HTTPS.

Shared Secret Approach:
---
The Secret will not be encrypted on the server side because the secret is
required for JWT signing and verification. Clients will sign every
request using the Secret and send that signature along with the
request. The server will sign again using the same secret to check that
the signatures match.

Advantages:
1. Protection against URL tampering without HTTPS.
2. Different expiry time management based on the issued time.

Disadvantages:
1. Clients must be aware of JWT and signing.
2. Shared secrets will be stored in plain text format on the server.
3. Every request requires a lookup of the shared secret for the user.
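As a rough sketch of what the server side of the shared secret approach could 
look like (assuming the PyJWT 2.x library; the per-user secret store and the 
qsh claim name are taken from the discussion in this thread, not from an 
actual implementation):

    import hashlib

    import jwt   # PyJWT

    SECRETS = {'admin': 'my secret'}   # hypothetical per-user secret store (see disadvantage 3)

    def verify_request(method, path, auth_header):
        token = auth_header.split(' ', 1)[1]
        # Read the issuer claim without verifying, so the right secret can be looked up.
        issuer = jwt.decode(token, options={'verify_signature': False})['iss']
        # Verify the signature using that user's shared secret.
        claims = jwt.decode(token, SECRETS[issuer], algorithms=['HS256'])
        # Recalculate the request hash and compare it with the received qsh claim.
        expected = hashlib.sha256(('%s&%s' % (method, path)).encode()).hexdigest()
        if claims.get('qsh') != expected:
            raise ValueError('qsh mismatch: possible URL tampering')
        return issuer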

-- 
regards
Aravinda

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Gluster REST - Management REST APIs for Glusterd 1.0

2016-01-04 Thread Luis Pabon
I highly encourage the projects to have almost 100% equal APIs (including 
returned JSON objects, errors, etc).  Asking clients to change their code due 
to a change in the server would most likely be received in a negative way.  On 
the other hand, if the v1 and v2 servers use the same API, there is no need to 
change the client.

Let me see if I can describe it in another way.  Changing the API is normally 
done due to a feature change, and not due to a change in the implementation of 
the API.


- Luis

- Original Message -
From: "Aravinda" 
To: "Vijay Bellur" , "Gluster Devel" 

Sent: Monday, December 28, 2015 2:57:58 AM
Subject: Re: [Gluster-devel] Gluster REST - Management REST APIs for
Glusterd 1.0


regards
Aravinda

On 12/28/2015 01:01 PM, Vijay Bellur wrote:
> On 12/21/2015 05:51 AM, Aravinda wrote:
>> Hi,
>>
>> In the past I submitted a feature([1] and [2]) to provide REST interface
>> to Gluster
>> CLI commands. I abandoned the patch because Glusterd 2.0 had plan to
>> include REST API support natively. Now I started again since other
>> projects like "skyrings"[3] is looking for REST API support for
>> Gluster.
>>
>> Created a gist[4] to discuss the REST API formats, Please review and let
>> me know your thoughts. Document is still in WIP, will update the
>> document by next week.
>>
>
> Are these APIs in sync with the ones in review for glusterd 2.0? Even 
> with api versioning, I would like us to have the APIs similar to 2.0 
> as much as possible.

I am trying to keep the APIs in sync with glusterd 2.0 for easy 
migration, but I have some confusion when choosing the HTTP method for 
some URLs.
For example: Volume Creation.

Glusterd 2.0 puts the volume name in the body instead of in the URL, 
since the volume id is the unique identifier, which can be used to perform 
other volume operations like start/stop.
But in Glusterd 1.0, operations based on ID are not possible using the CLI, 
so two calls are needed per request.

POST /volumes/:id/start

In Glusterd 1.0:
 Get the Volname from gluster volume info
 Use that Volname to construct the Gluster CLI command (gluster volume start :name)

In Glusterd 2.0:
 It may be directly possible to perform the action using the id.

To avoid this extra processing, I used the volume name as part of the URL, since 
the volume name is a unique identifier per cluster and these REST APIs are for 
single-cluster management.

In Glusterd 1.0
 PUT /volumes/:name

In Glusterd 2.0
 POST /volumes

Once I submit the draft of the REST API doc, we can discuss these issues.
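To make the difference concrete, a client for the two styles could look roughly 
like the sketch below (Python with the requests library; the endpoints and 
payload fields are only illustrative, based on the draft above rather than a 
finalized API):

    import requests

    BASE = 'http://localhost:8080'   # placeholder REST endpoint

    # Glusterd 1.0 style: the volume name is part of the URL.
    requests.put(BASE + '/volumes/gv0', json={'size': 100, 'replica': 3})
    requests.post(BASE + '/volumes/gv0/start')

    # Glusterd 2.0 style: the name goes in the body, and the server returns an
    # id which is then used for operations such as start/stop.
    r = requests.post(BASE + '/volumes',
                      json={'name': 'gv0', 'size': 100, 'replica': 3})
    vol_id = r.json()['id']
    requests.post(BASE + '/volumes/%s/start' % vol_id)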


>
> Thanks,
> Vijay
>
>> [1]
>> http://www.gluster.org/community/documentation/index.php/Features/rest-api 
>>
>> [2] http://review.gluster.org/#/c/7860/
>> [3] https://github.com/skyrings/
>> [4] https://gist.github.com/aravindavk/6961564b4b1d966b8845
>>
>

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] gdeploy + Heketi

2015-12-10 Thread Luis Pabon
I think the simplest approach would be to specify workflow examples and how 
gdeploy+Heketi would satisfy them.  I will be sending out some possible 
workflows tomorrow.

Also, there is a Python Heketi client in the works right now which would 
benefit gdeploy:  https://github.com/heketi/heketi/pull/251 .

- Luis

- Original Message -
From: "Sachidananda URS" <s...@redhat.com>
To: "Luis Pabon" <lpa...@redhat.com>
Cc: "Gluster Devel" <gluster-devel@gluster.org>
Sent: Friday, December 11, 2015 1:54:18 AM
Subject: Re: gdeploy + Heketi

Hi Luis,

On Fri, Dec 11, 2015 at 12:01 PM, Luis Pabon <lpa...@redhat.com> wrote:

> Hi Sachidananda,
>   I think there is a great opportunity to enhance GlusterFS management by
> using gdeploy as a service which uses Heketi for volume management.
> Currently, gdeploy sets up nodes, file systems, bricks, and volumes.  It
> does all this with input from the administrator, but it does not support
> automated brick allocation management, failure domains, or multiple
> clusters.  On the other hand, it does have support for mounting volumes in
> clients, and setting up multiple options on a specified volume.
>
>   I would like to add support for Heketi in the gdeploy workflow.  This
> would enable administrators to manage clusters, nodes, disks, and volumes
> with gdeploy based on Heketi.
>
> What do you guys think?
>


That would be great. Please let us know if you already have a plan on how
to make these two work.

-sac
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] gdeploy + Heketi

2015-12-10 Thread Luis Pabon
Hi Sachidananda,
  I think there is a great opportunity to enhance GlusterFS management by using 
gdeploy as a service which uses Heketi for volume management.  Currently, 
gdeploy sets up nodes, file systems, bricks, and volumes.  It does all this 
with input from the administrator, but it does not support automated brick 
allocation management, failure domains, or multiple clusters.  On the other 
hand, it does have support for mounting volumes in clients, and setting up 
multiple options on a specified volume.

  I would like to add support for Heketi in the gdeploy workflow.  This would 
enable administrators to manage clusters, nodes, disks, and volumes with 
gdeploy based on Heketi.

What do you guys think?

- Luis
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Heketi now available in EPEL/Fedora

2015-11-16 Thread Luis Pabon
Hi all,
  Heketi [1] is now available in EPEL/Fedora.  Installation instructions are 
available https://github.com/heketi/heketi/wiki/Usage-Guide .  For those of you 
new to Heketi:

Heketi provides a RESTful management interface which can be used to manage the 
life cycle of GlusterFS volumes.  With Heketi, cloud services like OpenStack 
Manila, Kubernetes, and OpenShift can dynamically provision GlusterFS volumes 
with any of the supported durability types.  Heketi will automatically 
determine the location for bricks across the cluster, making sure to place 
bricks and their replicas across different failure domains.  Heketi also supports 
any number of GlusterFS clusters, allowing cloud services to provide network 
file storage without being limited to a single GlusterFS cluster.

[1] https://github.com/heketi/heketi

- Luis
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Heketi Release 1 Available for use

2015-10-02 Thread Luis Pabon
Hi all,
  Release 1 of Heketi is now available for use.  

  Heketi is a service used to manage the lifecycle of GlusterFS volumes.  With 
Heketi, cloud services like OpenStack Manila, Kubernetes, and OpenShift can 
dynamically provision GlusterFS volumes with any of the supported durability 
types.  Heketi will automatically determine the location for bricks across the 
cluster, making sure to place bricks and their replicas across different failure 
domains.  Heketi also supports any number of GlusterFS clusters, allowing cloud 
services to provide network file storage without being limited to a single 
GlusterFS cluster.

Website:  https://github.com/heketi/heketi
Documentation:  https://github.com/heketi/heketi/wiki
Demo:  https://github.com/heketi/heketi/wiki/Demo
Download:  https://github.com/heketi/heketi/releases

Questions can be asked over email, using Github issues, or using Gitter 
(https://gitter.im/heketi/heketi).

Please try it out and let us know if you find any issues or have any comments. 

If you find a bug, checkout https://github.com/heketi/heketi/wiki/Filing-a-bug. 

Thank you!

- Luis
 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] GlusterD 2.0 status updates

2015-09-07 Thread Luis Pabon
Hi Atin,
  This looks interesting.  Currently, Heketi needs to ssh into a system to 
send commands to glusterfs.  It would be great to determine the interfaces 
needed and how this would work with programs like Heketi.  Do you guys have a 
few paragraphs on how the glusterd2 REST interface will interact with other 
applications?  Also, REST calls normally return JSON or XML.  Is there 
a reason to use binary interfaces?  Do you see a need to use protobuf?

Also, I have quite a few comments on the code itself.  Should I create GitHub 
issues for them or should I post them here on the email thread?

Thanks Atin,

- Luis

- Original Message -
From: "Atin Mukherjee" 
To: "Gluster Devel" 
Sent: Tuesday, September 1, 2015 1:04:35 AM
Subject: [Gluster-devel] GlusterD 2.0 status updates

Here is a quick summary of what we accomplished over the last month:

1. The skeleton of the GlusterD 2.0 codebase is now available @ [1] and is
integrated with GerritHub.

2. REST endpoints for basic commands like volume
create/start/stop/delete/info/list have been implemented. They need a little
more polishing to strictly follow the Heketi APIs.

3. The team has worked on a cross-language, lightweight RPC framework using
pbrpc, which can be found at [2]. It also has a pbcodec package which provides
a protobuf-based rpc.ClientCodec and rpc.ServerCodec that can be used with the
rpc package in Go's standard library.

4. We also worked on the first cut of volfile generation, and it is
integrated in the repository.


The plan for next month is as follows:

1. Focus on the documentation along with publishing the design document
2. Unit tests
3. Come up with the initial design & a basic prototype for transaction
framework.

[1] https://github.com/kshlm/glusterd2
[2] https://github.com/kshlm/pbrpc

Thanks,
Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] An update on GlusterD-2.0

2015-06-25 Thread Luis Pabon
Kaushal, 
  This is really cool stuff! I think Heketi will provide the IVC service 
Glusterd-2.0 requires. I also look forward to working with you and KP if you 
guys have the time (and anyone else who wants to help, for that matter).  Just 
letting you know that, at least until October, Heketi will be focused on 
providing a solution for Manila and Kubernetes as the priority.  There may be 
other features that Glusterd-2.0 requires which may have to wait, but I think 
it would be great to participate and get used to how Heketi works.  Currently I 
am trying to finalize the API, and I will be sending a follow-up email 
requesting a review.  Also, as Deepak pointed out, Heketi will need to deal 
with GlusterFS certificates somehow.  That is still TBD.

I look forward to working with you guys.

- Luis

- Original Message -
From: Kaushal M kshlms...@gmail.com
To: Gluster Devel gluster-devel@gluster.org
Sent: Wednesday, June 17, 2015 11:05:14 AM
Subject: [Gluster-devel] An update on GlusterD-2.0

At the Gluster Summit, everyone agreed that GlusterD should be the
first component to be targeted for GlusterFS-4.0. We had good
discussions on what GlusterD lacks currently and what is required for
GlusterD-2.0. KP and I had promised to send an update to the mailing list
summarizing the discussions, and this is it.

Along with the summary, we'll also be discussing our plans for Manila
integration and how we are planning to do it with GlusterD-2.0.

## Gluster Summit Summary
In the summit, KP and I presented a talk titled *GLUSTERD NEXT GEN
OR: HOW WE CAN STOP BEING A ROADBLOCK AND INSTEAD BECOME THE ROAD TO
SCALABILITY*. The slides can be viewed at [1][1]. There is no video
recording of it unfortunately.

The summary of the presentation is below.

### Problems
GlusterD, as it is currently, is not scalable, which will prevent
GlusterFS as a whole from scaling. The scaling issues can be classified
into,
- Node scalability
- Maintenance scalability
- Integration scalability
- Contribution scalability

 Node scalability
This is the general scalability we all think about: scale in terms of
nodes/machines/clients. GlusterD scalability is held back here
because of the store, transaction, and synchronization mechanisms used
in GlusterD.

 Maintenance scalability
Maintenance scalability has to do with the problems we as GlusterD
maintainers have faced. This is mainly related to the huge, monolithic code
base of the current GlusterD, which makes splitting maintenance and
ownership tasks hard.

 Integration scalability
Integration scalability can be split into internal and external integration.
Internal integration is the integration dealing with new features
being added to GlusterFS. Every new feature being added needs to touch
GlusterD or the CLI in some way. This has generally been done with
copy/paste coding, which has added to the maintenance overhead.
External integration is the integration of Gluster with other projects
or other projects with Gluster. This is hurt by the
non-availability of a proper API for GlusterD operations. All
interaction with GlusterD currently happens only via the CLI, and the
output we provide is generally too inconsistent to be programmatically
parsed.

 Contribution scalability
Getting new contributors for GlusterD is hard. New contributors are
put off because GlusterD is hard to understand, and because there
isn't enough documentation.


So GlusterD-2.0 will need to
- be scalable to 1000s of nodes
- have lower maintenance costs
- enable better external and internal integrations
- make it easier for newer contributors

### Design characteristics for GlusterD-2.0
For GlusterD-2.0 to satisfy the above listed requirements and solve
the problems listed before, it should have the following
characteristics.

 Centralized store
This should help with our numbers scalability issues. GlusterD-2.0
will be built around a centralized store. This means, instead of the
GlusterD volume and peer information being persisted on all nodes, it
will be stored only on a subset of the nodes in a trusted storage
pool.

We are looking at solutions like etcd and consul, both of which
provide a distributed key/value store (and some more useful features
on top), to provide the centralized store. The transaction mechanism
for Gluster operations will be built around this centralized store.

Moving to an external store provider and a transaction framework built
around it will reduce a lot of the complexity in GlusterD.

 Plugins
This is mainly for the maintainability and internal integration aspects.
GlusterD-2.0 will have a pluggable, modular design. We expect all the
commands of GlusterD to be implemented as plugins, along with certain other
parts of GlusterD, including things like volgen, volume-set, the REST API, etc.
This will allow new features to be integrated into GlusterD easily.
The code for these plugins is expected to live with the feature, and
not in GlusterD.

Doing a plugin design requires the defining of well 

Re: [Gluster-devel] Introducing Heketi: Storage Management Framework with Plugins for GlusterFS volumes

2015-06-18 Thread Luis Pabon
Hi Jay! I have comments below:

- Original Message -
From: Jay Vyas jayunit...@gmail.com
To: Joseph Fernandes josfe...@redhat.com
Cc: John Spray jsp...@redhat.com, Gluster Devel 
gluster-devel@gluster.org
Sent: Thursday, June 18, 2015 9:33:12 AM
Subject: Re: [Gluster-devel] Introducing Heketi: Storage Management 
Framework with Plugins  for GlusterFS volumes

Thanks for letting us know about this so early on; I don't fully 
understand the plugin functionality, so I have some general questions, but I 
am very interested in this.

0) I notice it's largely written in Go, compared to the gluster ecosystem which 
is mostly Python and C.  Any reason why?  I like Go, but I am just curious whether 
there are some technical requirements driving this and if Go might become a more 
integral part of the gluster core in the future.
[LP] Why Go? 
http://www.g33knotes.org/2014/09/why-i-think-go-provides-excellent.html ;-) ... 
Maybe GlusterFS will start using Go.  But joking aside, it provides an 
excellent platform for RESTful interfaces and unmatched deployment.

1) Is Heketi something that would allow things like RAID and LVM and so on to 
be moved from core dependencies into extensions, thus modularizing 
Gluster's core?
[LP] Maybe.  I'm not sure if heketi should be an orchestrator of storage, but I 
guess the plugin can do whatever it needs to do.

2) Will Heketi actually be co-evolving / working hand in hand with Gluster so 
that some of the storage administration stuff in Gluster's code base is moved 
into a broader framework?
[LP] I hope so.  The goal is for Heketi to provide the framework to allow 
plugins for storage systems (GlusterFS being one) to accept a set of standard 
REST commands and expose any others they need.

- Luis


 On Jun 18, 2015, at 9:07 AM, Joseph Fernandes josfe...@redhat.com wrote:
 
 Agreed, we need not be dependent on ONE technology for the above.
 But LVM is a strong contender as a single stable underlying technology that 
 provides the following.
 We can make it plugin based :), so that people who have LVM and are happy with 
 it can use it.
 And we can still have other technology plugins developed in parallel, but let's 
 have a single API standard defined for all.
 
 ~Joe
 
 - Original Message -
 From: Jeff Darcy jda...@redhat.com
 To: Joseph Fernandes josfe...@redhat.com
 Cc: Luis Pabon lpa...@redhat.com, Gluster Devel 
 gluster-devel@gluster.org, John Spray jsp...@redhat.com
 Sent: Thursday, June 18, 2015 5:15:37 PM
 Subject: Re: Introducing Heketi: Storage Management Framework with Plugins
 for GlusterFS volumes
 
 LVM or Volume Manager Dependencies:
 1) SNAPSHOTS: Gluster snapshots are LVM based
 
 The current implementation is LVM-centric, which is one reason uptake has
 been so low.  The intent was always to make it more generic, so that other
 mechanisms could be used as well.
 
 
 
 2) PROVISIONING and ENFORCEMENT:
 As of today Gluster does not have any control on the size of the brick. It
 will consume the brick (xfs)mount point given
 to it without checking on how much it needs to consume. LVM (or any other
 volume manager) will be required to do space provisioning per brick and
 enforce
 limits on size of bricks.
 
 Some file systems have quota, or we can enforce our own.
 
 3) STORAGE SEGREGATION:
 LVM pools can be used to have storage segregation i.e having primary storage
 pools and secondary(for Gluster replica) pools,
 So that we can crave out proper space from the physical disks attached to
 each node.
 At a high level (i.e Heketi's User) disk space can be viewed as storage
 pools,(i.e by aggreating disk space per pool per node using glusterd)
 To start with we can have Primary pool and secondary pool(for Gluster
 replica) , where each file serving node in the cluster participates in
 these pools via the local LVM pools.
 
 This functionality in no way depends on LVM.  In many cases, mere
 subdirectories are sufficient.
 
 4) DATA PROTECTION:
   Further data protection using LVM RAID. Pools can be marked to have RAID
   support on them, courtesy of LVM RAID.
   
 https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/raid_volumes.html
 
 Given that we already have replication, erasure coding, etc. many users would
 prefer not to reduce storage utilization even further with RAID.  Others
 would prefer to get the same functionality without LVM, e.g. with ZFS.  That's
 why RAID has always been - and should remain - optional.
 
 It's fine that we *can* use LVM features when and where they're available.
 Building in *dependencies* on it has been a mistake every time, and repeating
 a mistake doesn't make it anything else.
 
 JOE: 
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http

Re: [Gluster-devel] Introducing Heketi: Storage Management Framework with Plugins for GlusterFS volumes

2015-06-16 Thread Luis Pabon
Great! I look forward to it. In the near term, we are planning on using 
OpenStack Swift's Ring to determine brick placement in the cluster.  I look 
forward to your plan and to seeing how we can work together. (Maybe Glusterd-2.0 is 
going to be written in Go and we can share code, or even integrate Heketi into 
glusterd ;-) ).

- Luis

- Original Message -
From: Kaushal M kshlms...@gmail.com
To: Luis Pabon lpa...@redhat.com
Cc: gluster-devel@gluster.org  Gluster Devel gluster-devel@gluster.org
Sent: Tuesday, June 16, 2015 5:19:02 AM
Subject: Re: [Gluster-devel] Introducing Heketi: Storage Management Framework 
with Plugins for GlusterFS volumes

Hey Luis,

You just announced this right when we were about to make public our
plans for Manila integration w.r.t GlusterD-2.0.

We are about to kickstart GlusterD-2.0, and the first piece we wanted
to implement was a method to create volumes intelligently. This is
pretty much what heketi is attempting to do now, so maybe we could
collaborate.

I'll be publishing our full plan later today. Let me know what you
think about it.

Cheers.

~kaushal

On Mon, Jun 15, 2015 at 9:31 PM, Luis Pabon lpa...@redhat.com wrote:
 Hi all,

   As we continue forward with OpenStack Manila and Kubernetes integration 
 with GlusterFS, we require a simpler method of managing volumes and bricks.  
 We introduce Heketi ( https://github.com/heketi ), a RESTful storage 
 management framework which enables certain storage systems to be easily 
 managed. Heketi positions itself not as an abstraction layer, but as a 
 framework for developers to enhance their existing storage system management 
 through a plugin interface. Heketi also provides a RESTful interface which 
 enables callers to easily manage the storage system.  For GlusterFS, 
 currently we only plan on managing the creation and deletion of volumes in 
 GlusterFS through a simple RESTful interface.  Heketi will also manage the 
 brick location and life cycle.

 Heketi allows users to create volumes with a simple command and not worry 
 about which bricks to use.  A simple command to create a 1 TB volume would 
 look like this:

  POST on http://server/volumes, with JSON { "size" : 10 }

 To delete the volume:

  DELETE on http://server/volumes/<id of volume>


 * A demo is available https://github.com/heketi/vagrant-heketi , so please 
 try it out and let us know what you think.


 ** Heketi is still quite young, but the plan is to finish by early September.

 Please let us know what you think.

 - Luis
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Introducing Heketi: Storage Management Framework with Plugins for GlusterFS volumes

2015-06-15 Thread Luis Pabon
Hi all,

  As we continue forward with OpenStack Manila and Kubernetes integration with 
GlusterFS, we require a simpler method of managing volumes and bricks.  We 
introduce Heketi ( https://github.com/heketi ), a RESTful storage management 
framework which enables certain storage systems to be easily managed. Heketi 
positions itself not as an abstraction layer, but as a framework for 
developers to enhance their existing storage system management through a plugin 
interface. Heketi also provides a RESTful interface which enables callers to 
easily manage the storage system.  For GlusterFS, currently we only plan on 
managing the creation and deletion of volumes in GlusterFS through a simple 
RESTful interface.  Heketi will also manage the brick location and life cycle.

Heketi allows users to create volumes with a simple command and not worry about 
which bricks to use.  A simple command to create a 1 TB volume would look like 
this:

POST on http://server/volumes, with JSON { "size" : 10 }

To delete the volume:

DELETE on http://server/volumes/<id of volume>
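
For example, a minimal Python sketch of these two calls using the requests 
library (the server address is a placeholder, and the response fields are 
assumptions for this early version of the API):

    import requests

    SERVER = 'http://server:8080'   # placeholder

    # Create a volume by POSTing the requested size.
    r = requests.post(SERVER + '/volumes', json={'size': 10})
    r.raise_for_status()
    volume = r.json()
    print('created volume:', volume)

    # Delete the volume using the id returned by the create call.
    requests.delete(SERVER + '/volumes/' + volume['id'])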


* A demo is available https://github.com/heketi/vagrant-heketi , so please try 
it out and let us know what you think.


** Heketi is still quite young, but the plan is to finish by early September.

Please let us know what you think.

- Luis
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] The Manila RFEs and why so

2015-06-15 Thread Luis Pabon
I agree Vijay, that is why we created Heketi, due to the urgency in Gluster and 
Manila integration for Liberty.  I think the next steps are to determine which 
of the RFEs below can be satisfied by Heketi.

Luis 



 On Jun 15, 2015, at 3:26 AM, Vijay Bellur vbel...@redhat.com wrote:
 
 On Tuesday 09 June 2015 04:42 PM, Ramana Raja wrote:
 - Vijay Bellur vbel...@redhat.com wrote:
 
 Would you be able to provide more light on the nature of features/APIs
 planned to be exposed through Manila in Liberty? Having that information
 can play an important part in prioritizing and arriving at a decision.
 
 Regards,
 Vijay
 
 Sure! The preliminary list of APIs that a Manila share driver, which
 talks to the storage backend, must support to be included in Liberty,
 the upcoming Manila release in Sep/Oct 2015, would be available to
 the Manila community sometime later this  week. But it can be
 inferred from the Manila mailing lists and the Manila community
 meetings that the driver APIs for actions such as
 - snapshotting a share,
 - creating a share from a snapshot,
 - providing read-only access level to a share,
 - resizing (extend or shrink) a share,
 besides the basic ones such as creating/deleting a share,
 allowing/denying access to a share would mostly likely be in the list
 of must-haves.
 
 There are two GlusterFS based share drivers in the current Manila
 release, glusterfs, and glusterfs_native that support NFS and
 native protocol access of shares respectively. The glusterfs driver
 treats a top-level directory in a GlusterFS volume as a share (dir
 mapped share layout) and performs share actions at the directory level
 in the GlusterFS backend. And the gluster_native driver treats a
 GlusterFS volume as a share (vol mapped share layout) and performs
 share actions at the volume level. But for the Liberty release we'd
 make both the drivers be able to work with either one of the share
 layouts depending on a configurable.
 
 Our first target is to make both our drivers support the must-have
 APIs for Liberty. We figured that if the volume based layout is used
 by both the drivers, then with existing GlusterFS features it would
 be possible for the drivers to support the must-have APIs, but with
 one caveat - the drivers would have to continue using a work around
 that makes the cloud/storage admin tasks in OpenStack deployments
 cumbersome and has to be done away with in the upcoming release i.e.,
 to create a share of specific size, pick a GlusterFS volume from among
 many already created in various Gluster clusters. The limitation can
 be overcome (as csaba mentioned earlier in this thread),
 We need a volume creation operation that creates a volume just by
 passing the name and the prospective size of it. The RFE
 for create_share API,
 Bug 1226772 – [RFE] GlusterFS Smart volume management
 
 It's also possible for the drivers to have the minimum API set
 using the directory based share layout provided GlusterFS supports
 the following operations needed for
 - create_snapshot API,
 Bug 1226207 – [RFE] directory level snapshot create
 - create_share_from_snapshot API,
 Bug 1226210 – [RFE] directory level snapshot clone
 - allow/deny_access APIs in gluster_native driver, as the driver
   relies on GlusterFS's TLS support to provide secure access to the
   shares,
 Bug 1226220 – [RFE] directory level SSL/TLS auth
 - read-only access to shares,
 Bug 1226788 – [RFE] per-directory read-only accesss
 
 And for a clean Manila-GlusterFS integration we'd like to have
 high-level query features,
 Bug 1226225 – [RFE] volume size query support
 Bug 1226776 – [RFE] volume capability query
 
 Hope this helps the community to let us know the feature sets -
 smart volume management, directory level features, query features -
 GlusterFS can support by early August and those that it can
 support later, while we strive to increase GlusterFS's adoption in
 OpenStack (Manila) cloud deployments.
 
 
 
 Given the current timelines, I am more inclined to go with the volume mapped 
 share layout (further referred to as volume layout)  for both drivers as it 
 already seems to support the desired features for the Liberty cycle.
 
 I also feel that Manila drivers are doing a lot more orchestration for 
 supporting shares with gluster than they need to today. Going ahead, I am 
 thinking about a higher level service in gluster that exposes a ReSTful 
 interface for manila drivers to call out to and have the intelligence 
 embedded there.
 
 For instance, the workflow for a share create request in Manila could look 
 like this in the new model:
 
 
 Users  create_share (size...) (manila with gluster driver)
|
|
  create_gluster_share (size...)
|
|
- gluster-storage-as-a-service-daemon
 |
 |
 ---  Transaction over gluster CLI 

Re: [Gluster-devel] Object Quota feature proposal for GlusterFS-3.7

2015-02-22 Thread Luis Pabon
Quick question, 
  Is this feature still necessary since gluster-swift is now SwiftOnFile and is 
deployed as a storage policy of Swift which uses the Swift container database?

- Luis

- Original Message -
From: Prashanth Pai p...@redhat.com
To: Vijaikumar M vmall...@redhat.com
Cc: gluster-devel@gluster.org  Gluster Devel gluster-devel@gluster.org
Sent: Tuesday, February 17, 2015 9:47:20 PM
Subject: Re: [Gluster-devel] Object Quota feature proposal for GlusterFS-3.7

This might be helpful for better integration with Swift. Swift maintains 
accounting metadata per account and per container. From the glusterfs perspective, 
accounts are (represented as) first-level directories in the root and containers 
are second-level directories. So accounts contain containers.

Each account has following metadata:
X-Account-Object-Count
X-Account-Bytes-Used
X-Account-Container-Count

Each container has following metadata:
X-Container-Object-Count
X-Container-Bytes-Used
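
As an illustration, these counters can be read back with plain HEAD requests 
against the account and container URLs; a rough Python sketch with the requests 
library (the storage URL, token, and container name are placeholders):

    import requests

    STORAGE_URL = 'http://swift.example.com:8080/v1/AUTH_test'   # placeholder
    TOKEN = 'placeholder-auth-token'

    headers = {'X-Auth-Token': TOKEN}

    # Account-level accounting metadata
    acct = requests.head(STORAGE_URL, headers=headers).headers
    print('objects:',    acct.get('X-Account-Object-Count'))
    print('bytes:',      acct.get('X-Account-Bytes-Used'))
    print('containers:', acct.get('X-Account-Container-Count'))

    # Container-level accounting metadata
    cont = requests.head(STORAGE_URL + '/mycontainer', headers=headers).headers
    print('objects:', cont.get('X-Container-Object-Count'))
    print('bytes:',   cont.get('X-Container-Bytes-Used'))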

I've got a few questions:
Will there be separate counts stored for files vs. directories?
Does the count represent the entire subtree or just the immediate directory 
contents?
Would a getxattr on a dir for the count aggregate counts from all distributed 
bricks?

Thanks.

Regards,
 -Prashanth Pai

- Original Message -
From: Vijaikumar M vmall...@redhat.com
To: gluster-devel@gluster.org  Gluster Devel gluster-devel@gluster.org
Sent: Tuesday, February 17, 2015 5:22:34 PM
Subject: [Gluster-devel] Object Quota feature proposal for GlusterFS-3.7

Hi All, 

We are proposing files/objects quota feature for GlusterFS-3.7. 

Here is the feature page: 
http://www.gluster.org/community/documentation/index.php/Features/Object_Count 


'Object Quotas' is an enhancement to the existing 'File Usage Quotas' and has 
the following benefits: 


* Easy to query number of objects present in a volume. 
* Can serve as an accounting mechanism for quota enforcement based on 
number of Inodes. 
* This interface will be useful for integration with OpenStack Swift and 
Ceilometer. 

We can set the following Quota object limits, similar to file usage: 


* Directory level - limit the number of files at the directory level 
* Volume level - limit the number of files at the volume level 



Looking forward to your questions/feedback. 

Thanks, 
Vijay 


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Open source SPC-1 Workload IO Pattern

2014-11-07 Thread Luis Pabon




 On Nov 7, 2014, at 1:53 PM, Mark Nelson mark.nel...@inktank.com wrote:
 
 On 11/07/2014 05:01 AM, Luis Pabón wrote:
 Hi guys,
 I created a simple test program to visualize the I/O pattern of NetApp’s
 open source spc-1 workload generator. SPC-1 is an enterprise OLTP type
 workload created by the Storage Performance Council
 (http://www.storageperformance.org/results).  Some of the results are
 published and available here:
 http://www.storageperformance.org/results/benchmark_results_spc1_active .
 
 NetApp created an open source version of this workload and described it
 in their publication "A portable, open-source implementation of the
 SPC-1 workload" (
 http://www3.lrgl.uqam.ca/csdl/proceedings/iiswc/2005/9461/00/01526014.pdf )
 
 The code is available on GitHub: https://github.com/lpabon/spc1 .  All it
 does at the moment is capture the pattern, no real IO is generated. I
 will be working on a command line program to enable usage on real block
 storage systems.  I may either extend fio or create a tool specifically
 tailored to the requirements needed to run this workload.
 
 Neat!  Integration with fio could be interesting.  We could then use any of 
 the engines, including the librbd one (I think there is some kind of gluster 
 engine as well?)

Good point.

 
 
 On github, I have an example IO pattern for a simulation running 50 mil
 IOs using HRRW_V2. The simulation ran with an ASU1 (Data Store) size of
 45GB, ASU2 (User Store) size of 45GB, and ASU3 (Log) size of 10GB.
 
 Out of curiosity have you looked at how fast you can generate IOs before CPU 
 is a bottleneck?

Good question, I will check that when I have the IO generation tool. 

 
 
 - Luis
 
 --
 To unsubscribe from this list: send the line unsubscribe ceph-devel in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html
 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Cmockery2 in GlusterFS

2014-07-21 Thread Luis Pabon
The cmockery2 rpm is only available for the currently supported Fedora versions, 
which are 19 and 20.

Have you tried installing cmockery2 from the source? 

Luis 


-Original Message-
From: Santosh Pradhan [sprad...@redhat.com]
Received: Monday, 21 Jul 2014, 10:45AM
To: Luis Pabón [lpa...@redhat.com]
CC: gluster-devel@gluster.org
Subject: Re: [Gluster-devel] Cmockery2 in GlusterFS

Hi Luis,
I am using Fedora 18 on my laptop and, after your patch, I am not able to 
compile gluster from source. Yum install also does not find the 
cmockery2 package.

How do I fix it?

Thanks,
Santosh

On 07/21/2014 07:57 PM, Anders Blomdell wrote:
 On 2014-07-21 16:17, Anders Blomdell wrote:
 On 2014-07-20 16:01, Niels de Vos wrote:
 On Fri, Jul 18, 2014 at 02:52:18PM -0400, Luis Pabón wrote:
 Hi all,
  A few months ago, the unit test framework based on cmockery2 was
 in the repo for a little while, then removed while we improved the
 packaging method.  Now support for cmockery2 (
 http://review.gluster.org/#/c/7538/ ) has been merged into the repo
 again.  This will most likely require you to install cmockery2 on
 your development systems by doing the following:

 * Fedora/EPEL:
 $ sudo yum -y install cmockery2-devel

 * All other systems please visit the following page: 
 https://github.com/lpabon/cmockery2/blob/master/doc/usage.md#installation

 Here is also some information about Cmockery2 and how to use it:

 * Introduction to Unit Tests in C Presentation:
 http://slides-lpabon.rhcloud.com/feb24_glusterfs_unittest.html#/
 * Cmockery2 Usage Guide:
 https://github.com/lpabon/cmockery2/blob/master/doc/usage.md
 * Using Cmockery2 with GlusterFS: 
 https://github.com/gluster/glusterfs/blob/master/doc/hacker-guide/en-US/markdown/unittest.md


 When starting out writing unit tests, I would suggest writing unit
 tests for non-xlator interface files when you start.  Once you feel
 more comfortable writing unit tests, then move to writing them for
 the xlators interface files.
 Awesome, many thanks! I'd like to add some unittests for the RPC and NFS
 layer. Several functions (like ip-address/netmask matching for ACLs)
 look very suitable.

 Did you have any particular functions in mind that you would like to see
 unittests for? If so, maybe you can file some bugs for the different
 tests so that we won't forget about it? Depending on the tests, these
 bugs may get the EasyFix keyword if there is a clear description and
 some pointers to examples.
 Looks like parts of cmockery were forgotten in glusterfs.spec.in:

 # rpm -q -f  `which gluster`
 glusterfs-cli-3.7dev-0.9.git5b8de97.fc20.x86_64
 # ldd `which gluster`
 linux-vdso.so.1 =>  (0x74dfe000)
 libglusterfs.so.0 => /lib64/libglusterfs.so.0 (0x7fe034cc4000)
 libreadline.so.6 => /lib64/libreadline.so.6 (0x7fe034a7d000)
 libncurses.so.5 => /lib64/libncurses.so.5 (0x7fe034856000)
 libtinfo.so.5 => /lib64/libtinfo.so.5 (0x7fe03462c000)
 libgfxdr.so.0 => /lib64/libgfxdr.so.0 (0x7fe034414000)
 libgfrpc.so.0 => /lib64/libgfrpc.so.0 (0x7fe0341f8000)
 libxml2.so.2 => /lib64/libxml2.so.2 (0x7fe033e8f000)
 libz.so.1 => /lib64/libz.so.1 (0x7fe033c79000)
 libm.so.6 => /lib64/libm.so.6 (0x7fe033971000)
 libdl.so.2 => /lib64/libdl.so.2 (0x7fe03376d000)
 libcmockery.so.0 => not found
 libpthread.so.0 => /lib64/libpthread.so.0 (0x7fe03354f000)
 libcrypto.so.10 => /lib64/libcrypto.so.10 (0x7fe033168000)
 libc.so.6 => /lib64/libc.so.6 (0x7fe032da9000)
 libcmockery.so.0 => not found
 libcmockery.so.0 => not found
 libcmockery.so.0 => not found
 liblzma.so.5 => /lib64/liblzma.so.5 (0x7fe032b82000)
 /lib64/ld-linux-x86-64.so.2 (0x7fe0351f1000)

 Should I file a bug report or could someone on the fast-lane fix this?
 My bad (installation with --nodeps --force :-()


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Cmockery2 in GlusterFS

2014-07-21 Thread Luis Pabon
Niels you are correct. Let me take a look. 

Luis 


-Original Message-
From: Niels de Vos [nde...@redhat.com]
Received: Monday, 21 Jul 2014, 10:41AM
To: Luis Pabon [lpa...@redhat.com]
CC: Anders Blomdell [anders.blomd...@control.lth.se]; gluster-devel@gluster.org
Subject: Re: [Gluster-devel] Cmockery2 in GlusterFS

On Mon, Jul 21, 2014 at 04:27:18PM +0200, Anders Blomdell wrote:
 On 2014-07-21 16:17, Anders Blomdell wrote:
  On 2014-07-20 16:01, Niels de Vos wrote:
  On Fri, Jul 18, 2014 at 02:52:18PM -0400, Luis Pabón wrote:
  Hi all,
  A few months ago, the unit test framework based on cmockery2 was
  in the repo for a little while, then removed while we improved the
  packaging method.  Now support for cmockery2 (
  http://review.gluster.org/#/c/7538/ ) has been merged into the repo
  again.  This will most likely require you to install cmockery2 on
  your development systems by doing the following:
 
  * Fedora/EPEL:
  $ sudo yum -y install cmockery2-devel
 
  * All other systems please visit the following page: 
  https://github.com/lpabon/cmockery2/blob/master/doc/usage.md#installation
 
  Here is also some information about Cmockery2 and how to use it:
 
  * Introduction to Unit Tests in C Presentation:
  http://slides-lpabon.rhcloud.com/feb24_glusterfs_unittest.html#/
  * Cmockery2 Usage Guide:
  https://github.com/lpabon/cmockery2/blob/master/doc/usage.md
  * Using Cmockery2 with GlusterFS: 
  https://github.com/gluster/glusterfs/blob/master/doc/hacker-guide/en-US/markdown/unittest.md
 
 
  When starting out writing unit tests, I would suggest writing unit
  tests for non-xlator interface files when you start.  Once you feel
  more comfortable writing unit tests, then move to writing them for
  the xlators interface files.
 
  Awesome, many thanks! I'd like to add some unittests for the RPC and NFS
  layer. Several functions (like ip-address/netmask matching for ACLs) 
  look very suitable.
 
  Did you have any particular functions in mind that you would like to see 
  unittests for? If so, maybe you can file some bugs for the different 
  tests so that we won't forget about it? Depending on the tests, these 
  bugs may get the EasyFix keyword if there is a clear description and 
  some pointers to examples.
  
  Looks like parts of cmockery were forgotten in glusterfs.spec.in:
  
  # rpm -q -f  `which gluster`
  glusterfs-cli-3.7dev-0.9.git5b8de97.fc20.x86_64
  # ldd `which gluster`
  linux-vdso.so.1 =>  (0x74dfe000)
  libglusterfs.so.0 => /lib64/libglusterfs.so.0 (0x7fe034cc4000)
  libreadline.so.6 => /lib64/libreadline.so.6 (0x7fe034a7d000)
  libncurses.so.5 => /lib64/libncurses.so.5 (0x7fe034856000)
  libtinfo.so.5 => /lib64/libtinfo.so.5 (0x7fe03462c000)
  libgfxdr.so.0 => /lib64/libgfxdr.so.0 (0x7fe034414000)
  libgfrpc.so.0 => /lib64/libgfrpc.so.0 (0x7fe0341f8000)
  libxml2.so.2 => /lib64/libxml2.so.2 (0x7fe033e8f000)
  libz.so.1 => /lib64/libz.so.1 (0x7fe033c79000)
  libm.so.6 => /lib64/libm.so.6 (0x7fe033971000)
  libdl.so.2 => /lib64/libdl.so.2 (0x7fe03376d000)
  libcmockery.so.0 => not found
  libpthread.so.0 => /lib64/libpthread.so.0 (0x7fe03354f000)
  libcrypto.so.10 => /lib64/libcrypto.so.10 (0x7fe033168000)
  libc.so.6 => /lib64/libc.so.6 (0x7fe032da9000)
  libcmockery.so.0 => not found
  libcmockery.so.0 => not found
  libcmockery.so.0 => not found
  liblzma.so.5 => /lib64/liblzma.so.5 (0x7fe032b82000)
  /lib64/ld-linux-x86-64.so.2 (0x7fe0351f1000)
  
  Should I file a bug report or could someone on the fast-lane fix this?
 My bad (installation with --nodeps --force :-()

Actually, I was not expecting a dependency on cmockery2. My 
understanding was that only some temporary test-applications would be 
linked with libcmockery and not any binaries that would get packaged in 
the RPMs.

Luis, could you clarify that?

Thanks,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] rackspace-regression job history disappeared?

2014-05-30 Thread Luis Pabon

:-D Nice job JC.

- Luis

On 05/30/2014 10:54 AM, Justin Clift wrote:

On 29/05/2014, at 5:58 AM, Justin Clift wrote:
snip

Any idea what causes the job history for a project
(eg rackspace-regression) to disappear?


Asked on #jenkins IRC, and it seems to be a bug in Jenkins. :(

They said possibly this one:

   https://issues.jenkins-ci.org/browse/JENKINS-16845

But there are others in their issue tracker with similar
symptoms.  The job history itself is actually still on
disk on the master (under
/var/lib/jenkins/jobs/rackspace-regression/builds).  It's
just not showing up in the web UI.

We may need to upgrade Jenkins.  More likely, set up a new
Jenkins instance in Rackspace.

(this has really turned into a strange-and-unusual-punishment
thing hasn't it? ;)

+ Justin

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-infra] Progress report for regression tests in Rackspace

2014-05-15 Thread Luis Pabon

Should we create bugs for each of these, and divide-and-conquer?

- Luis

On 05/15/2014 10:27 AM, Niels de Vos wrote:

On Thu, May 15, 2014 at 06:05:00PM +0530, Vijay Bellur wrote:

On 04/30/2014 07:03 PM, Justin Clift wrote:

Hi us,

Was trying out the GlusterFS regression tests in Rackspace VMs last
night for each of the release-3.4, release-3.5, and master branches.

The regression test is just a run of run-tests.sh, from a git
checkout of the appropriate branch.

The good news is we're adding a lot of testing code with each release:

  * release-3.4 -  6303 lines  (~30 mins to run test)
  * release-3.5 -  9776 lines  (~85 mins to run test)
  * master  - 11660 lines  (~90 mins to run test)

(lines counted using:
  $ find tests -type f -iname *.t -exec cat {} >> a \;; wc -l a; rm -f a)

The bad news is the tests only kind of pass now.  I say kind of because
although the regression run *can* pass for each of these branches, it's
inconsistent. :(

Results from testing overnight:

  * release-3.4 - 20 runs - 17 PASS, 3 FAIL. 85% success.
* bug-857330/normal.t failed in one run
* bug-887098-gmount-crash.t failed in one run
* bug-857330/normal.t failed in one run

  * release-3.5 - 20 runs, 18 PASS, 2 FAIL. 90% success.
* bug-857330/xml.t failed in one run
* bug-1004744.t failed in another run (same vm for both failures)

  * master - 20 runs, 6 PASS, 14 FAIL. 30% success.
* bug-1070734.t failed in one run
* bug-1087198.t & bug-860663.t failed in one run (same vm as bug-1070734.t 
failure above)
* bug-1087198.t & bug-857330/normal.t failed in one run (new vm, a 
subsequent run on same vm passed)
* bug-1087198.t & bug-948686.t failed in one run (new vm)
* bug-1070734.t & bug-1087198.t failed in one run (new vm)
* bug-860663.t failed in one run
* bug-1023974.t & bug-1087198.t & bug-948686.t failed in one run (new vm)
* bug-1004744.t & bug-1023974.t & bug-1087198.t & bug-948686.t failed in 
one run (new vm)
* bug-948686.t failed in one run (new vm)
* bug-1070734.t failed in one run (new vm)
* bug-1023974.t failed in one run (new vm)
* bug-1087198.t & bug-948686.t failed in one run (new vm)
* bug-1070734.t failed in one run (new vm)
* bug-1087198.t failed in one run (new vm)

The occasional failing tests aren't completely random, suggesting
something is going on.  Possible race conditions maybe? (no idea).

  * 8 failures - bug-1087198.t
  * 5 failures - bug-948686.t
  * 4 failures - bug-1070734.t
  * 3 failures - bug-1023974.t
  * 3 failures - bug-857330/normal.t
  * 2 failures - bug-860663.t
  * 2 failures - bug-1004744.t
  * 1 failures - bug-857330/xml.t
  * 1 failures - bug-887098-gmount-crash.t

Anyone have suggestions on how to make this work reliably?



I think it would be a good idea to arrive at a list of test cases that
are failing at random and assign owners to address them (default owner
being the submitter of the test case). In addition to these, I have
also seen tests like bd.t and xml.t fail pretty regularly.

Justin - can we publish a consolidated list of regression tests that
fail and owners for them on an etherpad or similar?

Fixing these test cases will enable us to bring in more jenkins
instances for parallel regression runs etc. and will also provide more
determinism for our regression tests. Your help to address the
regression test suite problems will be greatly appreciated!

Indeed, getting the regression tests stable seems like a blocker before
we can move to a scalable Jenkins solution. Unfortunately, it may not be
trivial to debug these test cases... Any suggestion on capturing useful
data that helps in figuring out why the test cases don't pass?

Thanks,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Automatically building RPMs upon patch submission?

2014-05-12 Thread Luis Pabon
My $0.02: anything that can be automated should be done without 
waiting for human intervention.  If we can help parallelize the 
workflow, even better.  The only 'blocker' should be a human review and 
verification.


- Luis

On 05/12/2014 04:04 PM, Anand Avati wrote:




On Mon, May 12, 2014 at 11:26 AM, Anand Avati av...@gluster.org wrote:





On Mon, May 12, 2014 at 11:21 AM, Kaleb S. KEITHLEY
kkeit...@redhat.com wrote:


Then maybe we should run regression tests on check-in. I'm
getting tired of queuing up regression tests. (And I know I'm
not the only one doing it.)

Or run them after they pass the smoke test,

Or


If we can make the regression test trigger automatically, conditional
on the smoke test passing, that would be great. Last time I checked I
couldn't figure out how to (did not look very deep) and left it as a
manual trigger.



And yeah, the other reason: if a dev pushes a series/set of dependent 
patches, regression needs to run only on the last one (regression 
test/voting is cumulative for the set). Running regression on all the 
individual patches (like a smoke test) would be very wasteful, and 
tricky to avoid (this was the part which I couldn't solve)


Avati

Avati




On 05/12/2014 02:17 PM, Anand Avati wrote:

It is much better to code review after regression tests pass (premise
being human eye time is more precious than build server run time).


On Mon, May 12, 2014 at 10:53 AM, Kaleb S. KEITHLEY
kkeit...@redhat.com wrote:

<top-post>
How about also an auto run of regression at +1 or +2 code review?
</top-post>


On 05/12/2014 01:49 PM, Raghavendra G wrote:

+1 to create RPMs when there is at least a +1 on code review.


On Mon, May 5, 2014 at 8:06 PM, Niels de Vos nde...@redhat.com wrote:

 On Mon, May 05, 2014 at 07:13:14PM +0530, Lalatendu Mohanty wrote:
   On 05/02/2014 04:07 PM, Niels de Vos wrote:
   Hi all,

   at the moment we have some duplicate RPM-building tests running:

   1. upon patch submission, next to smoke (compile+posix) tests
   2. rpm.t in the regression tests framework

   Both have their own advantages, and both cover a little different
   use-case.

   Notes and observations for 1:

  The advantage of (1) is that the built rpm-packages are made
  available for downloading, and users can test the change easier.

  It is unclear to me how often this is used, many patches need
  several revisions before they get accepted, each new revision gets
  new packages built (takes time for each patch submission). I do
  not know how long these packages are kept, or when they are
  deleted.

  Building is done for EPEL-6 and Fedora (exact version unclear
  to me).


   Notes and observations for 2:

  Building is only done when there are changes related to the
  packaging. When there are only changes in source code or
  documentation, there is no need to try and build the rpms
  (saves ca. 5 minutes).

  The packages are built for EPEL-5 and