Re: [Gluster-devel] Patch needs merging

2015-06-09 Thread Krishnan Parthasarathi
> Hi,
> 
> Can you please merge the following patches:
> 
> http://review.gluster.org/#/c/11087/

Avra,

I think you should maintain the snapshot scheduler feature
and shouldn't depend on me, as a glusterd maintainer, for
merging changes. I am not really maintaining the snapshot scheduler
in any sense of the word, so I should not be merging its patches either :)

Vijay,
Is it too late to add Avra to the list of proposed maintainers? Avra
has also worked extensively in glusterd on geo-replication, volume-snapshot,
volume-locks (core), the mgmt-v3 transaction framework (core), etc. He is the
only one sending patches to the snapshot-scheduler feature. I have been merging
those patches only because the feature is built on top of volume-snapshot,
which I don't think I should be doing. Thoughts?
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Patch needs merging

2015-06-09 Thread Avra Sengupta

Hi,

Can you please merge the following patches:

http://review.gluster.org/#/c/11087/

Regards,
Avra

On 06/09/2015 08:06 PM, Avra Sengupta wrote:

Thanks KP :)

On 06/09/2015 07:51 PM, Krishnan Parthasarathi wrote:

http://review.gluster.org/#/c/11042/
http://review.gluster.org/#/c/11100/

Merged.




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] tests/bugs/protocol/bug-808400-stripe.t spurious failure

2015-06-09 Thread Krishnan Parthasarathi

> The test mentioned in $Subj has failed in [1]
> 
> [1]
> http://build.gluster.org/job/rackspace-regression-2GB-triggered/10343/consoleFull

This was fixed in master by Jeff - http://review.gluster.org/11037. I have
backported it to release-3.7 - http://review.gluster.org/11145. We need to
merge that backport to fix this intermittent failure.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] tests/bugs/snapshot/bug-1162498.t spurious failure

2015-06-09 Thread Atin Mukherjee
The above one failed in [1]

[1]
http://build.gluster.org/job/rackspace-regression-2GB-triggered/10334/consoleFull
-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] tests/bugs/protocol/bug-808400-stripe.t spurious failure

2015-06-09 Thread Atin Mukherjee
The test mentioned in $Subj has failed in [1]

[1]
http://build.gluster.org/job/rackspace-regression-2GB-triggered/10343/consoleFull

-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Patch needs merging

2015-06-09 Thread Avra Sengupta

Thanks KP :)

On 06/09/2015 07:51 PM, Krishnan Parthasarathi wrote:

http://review.gluster.org/#/c/11042/
http://review.gluster.org/#/c/11100/

Merged.


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Patch needs merging

2015-06-09 Thread Krishnan Parthasarathi

> http://review.gluster.org/#/c/11042/
> http://review.gluster.org/#/c/11100/

Merged. 
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Patch needs merging

2015-06-09 Thread Avra Sengupta

Hi,

Could you please merge the following patches into the release-3.7 branch? They
have code review +1s, and all regressions have passed.


http://review.gluster.org/#/c/11042/
http://review.gluster.org/#/c/11100/

Regards,
Avra


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Expanding Volumes

2015-06-09 Thread Atin Mukherjee
Rebalance is broken in 3.7.1. It will be fixed in 3.7.2. Sorry for the
inconvenience.

Regards,
Atin

Sent from Samsung Galaxy S4
On 9 Jun 2015 18:34, "Jonhnny Weslley"  wrote:

> Hi guys,
>
> I'm trying to create a pool of 4 nodes using CentOS 7 and Gluster 3.7 in a
> vagrant-based test environment. First, I create and start a replicated
> volume using only 2 nodes (replica 2). Then I mount the volume using
> fuse and copy some files. Everything works fine.
>
> Then, I try to expand the volume previously created using the command:
>
> sudo gluster volume add-brick jged 10.10.50.73:/home/vagrant/brick
> 10.10.50.74:/home/vagrant/brick force
>
> And again it works:
>
> sudo gluster volume info
>
> Volume Name: jged
> Type: Distributed-Replicate
> Volume ID: 862ab9b7-4753-4682-ba44-cbe481b1b7df
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: 10.10.50.71:/home/vagrant/brick
> Brick2: 10.10.50.72:/home/vagrant/brick
> Brick3: 10.10.50.73:/home/vagrant/brick
> Brick4: 10.10.50.74:/home/vagrant/brick
> Options Reconfigured:
> performance.readdir-ahead: on
>
>
> But when I try to rebalance the volume (sudo gluster volume rebalance jged
> start), the glusterd process on the node where the command was executed
> dies and doesn't start again after running 'systemctl start glusterd'. I
> looked at the log file (/var/log/glusterfs/etc-glusterfs-glusterd.vol.log)
> but I can't figure out what is wrong! :(
>
> [glusterd log tail snipped; the full log appears in the original message below]

[Gluster-devel] Expanding Volumes

2015-06-09 Thread Jonhnny Weslley
Hi guys,

I'm trying to create a pool of 4 nodes using CentOS 7 and Gluster 3.7 in a
vagrant-based test environment. First, I create and start a replicated
volume using only 2 nodes (replica 2). Then I mount the volume using
fuse and copy some files. Everything works fine.

Then, I try to expand the volume previously created using the command:

sudo gluster volume add-brick jged 10.10.50.73:/home/vagrant/brick
10.10.50.74:/home/vagrant/brick force

And again it works:

sudo gluster volume info

Volume Name: jged
Type: Distributed-Replicate
Volume ID: 862ab9b7-4753-4682-ba44-cbe481b1b7df
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.10.50.71:/home/vagrant/brick
Brick2: 10.10.50.72:/home/vagrant/brick
Brick3: 10.10.50.73:/home/vagrant/brick
Brick4: 10.10.50.74:/home/vagrant/brick
Options Reconfigured:
performance.readdir-ahead: on


But when I try to rebalance the volume (sudo gluster volume rebalance jged
start), the glusterd process on the node where the command was executed
dies and doesn't start again after running 'systemctl start glusterd'. I
looked at the log file (/var/log/glusterfs/etc-glusterfs-glusterd.vol.log)
but I can't figure out what is wrong! :(

Here is the tail of the log file:

[2015-06-09 12:30:56.197802] I [MSGID: 100030] [glusterfsd.c:2294:main]
0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.7.1
(args: /usr/sbin/glusterd -p /var/run/glusterd.pid)
[2015-06-09 12:30:56.207596] I [glusterd.c:1282:init] 0-management: Maximum
allowed open file descriptors set to 65536
[2015-06-09 12:30:56.207653] I [glusterd.c:1327:init] 0-management: Using
/var/lib/glusterd as working directory
[2015-06-09 12:30:56.211505] E [rpc-transport.c:291:rpc_transport_load]
0-rpc-transport: /usr/lib64/glusterfs/3.7.1/rpc-transport/rdma.so: cannot
open shared object file: No such file or directory
[2015-06-09 12:30:56.211521] W [rpc-transport.c:295:rpc_transport_load]
0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not
valid or not found on this machine
[2015-06-09 12:30:56.211528] W [rpcsvc.c:1595:rpcsvc_transport_create]
0-rpc-service: cannot create listener, initing the transport failed
[2015-06-09 12:30:56.211535] E [glusterd.c:1515:init] 0-management:
creation of 1 listeners failed, continuing with succeeded transport
[2015-06-09 12:30:56.213311] I
[glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: geo-replication
module not installed in the system
[2015-06-09 12:30:56.213454] I
[glusterd-store.c:1986:glusterd_restore_op_version] 0-glusterd: retrieved
op-version: 30700
[2015-06-09 12:30:56.213523] I [glusterd.c:154:glusterd_uuid_init]
0-management: retrieved UUID: f264d968-5a14-459b-8f3b-569aa15c3ce2
[2015-06-09 12:30:56.213568] I [rpc-clnt.c:972:rpc_clnt_connection_init]
0-glustershd: setting frame-timeout to 600
[2015-06-09 12:30:56.213675] I [rpc-clnt.c:972:rpc_clnt_connection_init]
0-nfs: setting frame-timeout to 600
[2015-06-09 12:30:56.213801] I [rpc-clnt.c:972:rpc_clnt_connection_init]
0-quotad: setting frame-timeout to 600
[2015-06-09 12:30:56.213896] I [rpc-clnt.c:972:rpc_clnt_connection_init]
0-bitd: setting frame-timeout to 600
[2015-06-09 12:30:56.213979] I [rpc-clnt.c:972:rpc_clnt_connection_init]
0-scrub: setting frame-timeout to 600
[2015-06-09 12:30:56.214094] I [rpc-clnt.c:972:rpc_clnt_connection_init]
0-snapd: setting frame-timeout to 600
[2015-06-09 12:30:56.987649] I
[glusterd-handler.c:3387:glusterd_friend_add_from_peerinfo] 0-management:
connect returned 0
[2015-06-09 12:30:56.987711] I
[glusterd-handler.c:3387:glusterd_friend_add_from_peerinfo] 0-management:
connect returned 0
[2015-06-09 12:30:56.987755] I
[glusterd-handler.c:3387:glusterd_friend_add_from_peerinfo] 0-management:
connect returned 0
[2015-06-09 12:30:56.987801] I [rpc-clnt.c:972:rpc_clnt_connection_init]
0-management: setting frame-timeout to 600
[2015-06-09 12:30:56.989874] W [socket.c:923:__socket_keepalive] 0-socket:
failed to set TCP_USER_TIMEOUT -1000 on socket 13, Invalid argument
[2015-06-09 12:30:56.989890] E [socket.c:3015:socket_connect] 0-management:
Failed to set keep-alive: Invalid argument
[2015-06-09 12:30:56.990051] I [rpc-clnt.c:972:rpc_clnt_connection_init]
0-management: setting frame-timeout to 600
[2015-06-09 12:30:56.992360] W [socket.c:923:__socket_keepalive] 0-socket:
failed to set TCP_USER_TIMEOUT -1000 on socket 14, Invalid argument
[2015-06-09 12:30:56.992419] E [socket.c:3015:socket_connect] 0-management:
Failed to set keep-alive: Invalid argument
[2015-06-09 12:30:56.992629] I [rpc-clnt.c:972:rpc_clnt_connection_init]
0-management: setting frame-timeout to 600
[2015-06-09 12:30:56.994163] W [socket.c:923:__socket_keepalive] 0-socket:
failed to set TCP_USER_TIMEOUT -1000 on socket 15, Invalid argument
[2015-06-09 12:30:56.994177] E [socket.c:3015:socket_connect] 0-management:
Failed to set keep-alive: Invalid argument
Final graph:
+--+
  1: v

[Gluster-devel] Minutes from todays Gluster Community Bug Triage meeting

2015-06-09 Thread Mohammed Rafi K C
Minutes: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-06-09/gluster-meeting.2015-06-09-12.00.html
Minutes (text): 
http://meetbot.fedoraproject.org/gluster-meeting/2015-06-09/gluster-meeting.2015-06-09-12.00.txt
Log: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-06-09/gluster-meeting.2015-06-09-12.00.log.html



Meeting summary
---
* roll call  (atinmu, 12:01:53)
* Roll Call  (atinmu, 12:02:36)
  * Agenda https://public.pad.fsfe.org/p/gluster-bug-triage  (atinmu,
12:02:49)

* Action Items from last week  (atinmu, 12:04:03)

* ndevos needs to look into building nightly debug rpms that can be used
  for testing  (atinmu, 12:04:24)
  * ACTION: ndevos to still follow up on building nightly debug rpms
(atinmu, 12:05:36)

* Group Triage  (atinmu, 12:05:51)
  * 13 new bugs that have not been triaged yet : http://goo.gl/WuDQun
(atinmu, 12:06:34)

* Open Floor  (atinmu, 12:24:13)
  * just FYI, there are 47 bugs which are still in Needinfo and not
closed  (atinmu, 12:24:46)
  * https://goo.gl/08IHrT tracks all the bugs which are in needinfo
(atinmu, 12:25:24)

Meeting ended at 12:37:23 UTC.






Action Items

* ndevos to still follow up on building nightly debug rpms




Action Items, by person
---
* ndevos
  * ndevos to still follow up on building nightly debug rpms
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* atinmu (42)
* ndevos (23)
* rafi1 (17)
* zodbot (3)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot



On 06/09/2015 04:14 PM, Atin Mukherjee wrote:
> Hi all,
>
> This meeting is scheduled for anyone that is interested in learning more
> about, or assisting with the Bug Triage.
>
> Meeting details:
> - location: #gluster-meeting on Freenode IRC
> ( https://webchat.freenode.net/?channels=gluster-meeting )
> - date: every Tuesday
> - time: 12:00 UTC
> (in your terminal, run: date -d "12:00 UTC")
> - agenda: https://public.pad.fsfe.org/p/gluster-bug-triage
>
> Currently the following items are listed:
> * Roll Call
> * Status of last week's action items
> * Group Triage
> * Open Floor
>
> The last two topics have space for additions. If you have a suitable bug
> or topic to discuss, please add it to the agenda.
>
> Appreciate your participation.

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] The Manila RFEs and why so

2015-06-09 Thread Jeff Darcy
> As noted, the "Smart volume management" group is a
> singleton, but that single element is tricky. We
> have heard promises of a glusterd rewrite that would
> include the intelligence / structure for such a feature;

I would love to solve this problem for you in 4.0 (which includes
rewrites for at least some parts of glusterd).  Unfortunately, the main
thing blocking progress on 4.0 is the continuing accumulation of 3.x
work.  If there's a "right way" to solve a problem which is only
possible in 4.0, then trying to solve it the "not right way" in 3.x is -
in the long term - a waste of time.  I understand why short-term needs
might dictate that we do it anyway, but whenever it's at all possible we
should try to defer work from 3.x into 4.0 instead of doing it twice.

> also we toyed around implementing a partial version of
> it with configuration management software (Ansible) but
> that was too experimental (the whole concept) to dedicate
> ourselves to it, so we discontinued that.
> 
> OTOH, the directory level features are many but can
> possibly be addressed with a single well chosen volume
> variant (something like lv-s for all top level
> directories?) -- plus the UI would needed to be tailored
> to them.

I don't think auto-provisioning volumes is actually all that hard.  All
we need is a list of places where we can create new directories to be
bricks.  This is not so different from what I did for HekaFS years ago,
and it was one of the easiest parts of the project.  The "gotcha" is
that with lots of tenants we could end up with lots of uncoordinated
glusterfsd processes, with a significant negative impact on performance.
That's why HekaFS had its own infrastructure to generate volfiles and
manage multi-brick daemons.  Such multiplexing is planned for 4.0 but -
again - 4.0 is being delayed by ongoing 3.x work.
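To make that concrete, here is a minimal sketch of what such an
auto-provisioning step could look like, assuming nothing more than a
configured list of brick roots and the standard gluster CLI; the brick-root
list, per-tenant volume naming, ssh-based directory creation and replica
count are illustrative assumptions, not the HekaFS code or a 4.0 design.

#!/usr/bin/env python
# Minimal sketch: auto-provision a per-tenant volume from a fixed list of
# brick roots. Hostnames, paths, naming and replica count are illustrative.
import os
import subprocess

BRICK_ROOTS = [                # places where new brick directories may be made
    "server1:/export/pool",
    "server2:/export/pool",
]

def provision_volume(tenant, replica=2):
    """Create one brick directory per root and build a volume from them."""
    bricks = []
    for root in BRICK_ROOTS:
        host, path = root.split(":", 1)
        brick_dir = os.path.join(path, tenant)
        # Create the brick directory on the remote host (over ssh here,
        # purely to keep the sketch self-contained).
        subprocess.check_call(["ssh", host, "mkdir", "-p", brick_dir])
        bricks.append("%s:%s" % (host, brick_dir))
    volname = "manila-" + tenant
    subprocess.check_call(["gluster", "volume", "create", volname,
                           "replica", str(replica)] + bricks + ["force"])
    subprocess.check_call(["gluster", "volume", "start", volname])

if __name__ == "__main__":
    provision_volume("tenant42")

The hard part, as noted above, is not this provisioning step but keeping the
resulting per-tenant glusterfsd processes coordinated.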

Regardless of whether we allocate new shares *within* existing volumes
or *as* new volumes, resizable per-share snapshots and clones are going
to be "interesting" in our LVM-centric snapshot model.  It seems to me
that we'll end up snapshotting a whole LV even when we only intend to
use a tiny portion.  We already have this problem - which results in
wasted space and extra COW activity - somewhat, but it's likely to get
worse under Manila.  Is there something we can do to address it, or is
it just something we'll have to live with?
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Python bindings to libgfapi

2015-06-09 Thread Prashanth Pai
Hi,

The libgfapi-python
(http://review.gluster.org/#/admin/projects/libgfapi-python) project has been
under development for some time now. Before it is widely used and
integrated into OpenStack projects, we would like to ensure that the consumer
APIs are user-friendly, intuitive and "Pythonic".

The idea was to make the libgfapi-python APIs mimic the ones provided by the
following Python modules so that programmers find it easy to adapt and use:
* os module (https://docs.python.org/2/library/os.html)
* Built-in File object 
(https://docs.python.org/2/library/stdtypes.html#file-objects)
* shutil (https://docs.python.org/2/library/shutil.html)

Here's the API matrix which states the current status of APIs:
https://www.ethercalc.org/0psqmoqm8r

The File class 
(https://github.com/gluster/libgfapi-python/blob/master/gluster/gfapi.py#L19) 
as of today is a thin wrapper around the glfd object. But unlike Python's
built-in File object, there is no I/O buffering involved.

Example workflow as of today (it's a mix of built-in File object and os module 
usage, which is easy to use but inconsistent):

>>> import os
>>> from gluster import gfapi
>>> v = gfapi.Volume(host, volume, protocol, port)
>>> v.mount()
>>> f = v.open("path/to/file", os.O_WRONLY)
>>> f.write("hello world")
>>> f.close()

We want to clearly demarcate the APIs that mimic the built-in File object from
the ones that mimic the os module:

Example 1 (more like os module):
>>> glfd = v.open("path/to/file", os.O_RDONLY)
>>> v.write(glfd, "hello world", 11)
>>> v.close(glfd)

Example 2 (like python's File object):
>>> f = File("path/to/file", 'r')
>>> f.write("hello world")
>>> f.close()

We would also want to do away with return values (like 0 or -1). The Pythonic 
way is: if something did not succeed, raise an exception (OSError or IOError).
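For example, a caller-side sketch of that exception-raising style, following
the workflow above (the Volume arguments are placeholders, and the exact
exception types raised are an assumption about the future API, not the
current bindings):

import errno
import os
from gluster import gfapi

v = gfapi.Volume("host", "volname", "tcp", 24007)
v.mount()

try:
    f = v.open("path/to/file", os.O_WRONLY)
    f.write("hello world")
    f.close()
except OSError as err:
    if err.errno == errno.ENOENT:
        # Failure surfaces as an exception instead of a -1 return value.
        print("file does not exist")
    else:
        raise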

What do you guys think (as consumers)?

Regards,
 -Prashanth Pai
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] The Manila RFEs and why so

2015-06-09 Thread Ramana Raja
- Vijay Bellur  wrote:
> 
> Would you be able to provide more light on the nature of features/APIs 
> planned to be exposed through Manila in Liberty? Having that information 
> can play an important part in prioritizing and arriving at a decision.
> 
> Regards,
> Vijay

Sure! The preliminary list of APIs that a Manila share driver (which
talks to the storage backend) must support to be included in Liberty,
the upcoming Manila release in Sep/Oct 2015, would be available to
the Manila community later this week. But it can be inferred from the
Manila mailing lists and the Manila community meetings that the driver
APIs for actions such as
- snapshotting a share,
- creating a share from a snapshot,
- providing read-only access level to a share,
- resizing (extend or shrink) a share,
besides the basic ones such as creating/deleting a share and
allowing/denying access to a share, would most likely be in the list
of must-haves.

There are two GlusterFS-based share drivers in the current Manila
release, "glusterfs" and "glusterfs_native", which support NFS and
native-protocol access to shares respectively. The "glusterfs" driver
treats a top-level directory in a GlusterFS volume as a share
(directory-mapped share layout) and performs share actions at the
directory level in the GlusterFS backend. The "glusterfs_native" driver
treats a GlusterFS volume as a share (volume-mapped share layout) and
performs share actions at the volume level. For the Liberty release we'd
make both drivers able to work with either share layout, depending on a
configuration option.

Our first target is to make both drivers support the must-have APIs
for Liberty. We figured that if the volume-based layout is used by both
drivers, then with existing GlusterFS features it would be possible for
the drivers to support the must-have APIs, but with one caveat: the
drivers would have to keep using a workaround that makes cloud/storage
admin tasks in OpenStack deployments cumbersome and that has to be done
away with in the upcoming release, namely, to create a share of a
specific size, pick a GlusterFS volume from among the many already
created in various Gluster clusters. The limitation can be overcome (as
csaba mentioned earlier in this thread): "We need a volume creation
operation that creates a volume just by passing the name and the
prospective size of it." The RFE for the create_share API:
Bug 1226772 – [RFE] GlusterFS Smart volume management

It's also possible for the drivers to offer the minimum API set
using the directory-based share layout, provided GlusterFS supports
the following operations, needed for:
- the create_snapshot API,
Bug 1226207 – [RFE] directory level snapshot create
- the create_share_from_snapshot API,
Bug 1226210 – [RFE] directory level snapshot clone
- the allow/deny_access APIs in the glusterfs_native driver, as the driver
  relies on GlusterFS's TLS support to provide secure access to the
  shares,
Bug 1226220 – [RFE] directory level SSL/TLS auth
- read-only access to shares,
Bug 1226788 – [RFE] per-directory read-only access

And for a clean Manila-GlusterFS integration we'd like to have
high-level query features,
Bug 1226225 – [RFE] volume size query support
Bug 1226776 – [RFE] volume capability query

We hope this helps the community let us know which of these feature
sets (smart volume management, directory-level features, query features)
GlusterFS can support by early August and which it can support later,
while we strive to increase GlusterFS's adoption in OpenStack (Manila)
cloud deployments.

Thanks,

Ramana
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting today at 12:00 UTC (~in 75 minutes)

2015-06-09 Thread Atin Mukherjee
Hi all,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.
-- 
~Atin
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Temporarily disabled netbsd-regression-triggered

2015-06-09 Thread Kaushal M
I was able to reboot 3 of the slaves and have re-enabled the project.
Unfortunately, jobs that had been queued have been lost and will need
to be retriggered.

If someone needs the NetBSD regression jobs run immediately on their
changes, please ask anyone with admin access to Jenkins. If you are an
admin, you can trigger a job by following these steps (copying Atin's
instructions from another mail thread); a scripted equivalent is
sketched after the steps.
1. Find the refspec of the patch from the download box at the top
right corner of the Gerrit interface, e.g. refs/changes/29/11129/1.
Please note that you need to copy the refspec of the latest patch set.

2. Log in to build.gluster.org

3. Goto http://build.gluster.org/job/rackspace-netbsd7-regression-triggered/

4. Click on "Build with Parameters", paste the refspec into the
GERRIT_REFSPEC field, and click on "Build".
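For anyone who prefers scripting this, the same trigger can in principle be
issued through Jenkins' buildWithParameters endpoint. A rough sketch, assuming
the job accepts GERRIT_REFSPEC as a build parameter and that your account has
an API token with permission to build (the user/token values are placeholders):

# Rough sketch: trigger the NetBSD regression job remotely instead of via
# the web UI. Job name and parameter name come from the steps above.
import requests

JENKINS = "http://build.gluster.org"
JOB = "rackspace-netbsd7-regression-triggered"

def trigger(refspec, user, api_token):
    url = "%s/job/%s/buildWithParameters" % (JENKINS, JOB)
    resp = requests.post(url,
                         auth=(user, api_token),
                         data={"GERRIT_REFSPEC": refspec})
    resp.raise_for_status()
    # Jenkins normally answers with a Location header pointing at the
    # queued item.
    print("queued: %s" % resp.headers.get("Location"))

# Example (refspec copied from the Gerrit download box):
# trigger("refs/changes/29/11129/1", "yourname", "your-api-token")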

~kaushal

On Tue, Jun 9, 2015 at 12:44 PM, Kaushal M  wrote:
> The six netbsd slaves were hung on jobs for over 15h each. I've
> disabled the netbsd-regression project so that I can kill the hung vms
> and restart them. I'll re-enable the project once the vms have been
> restarted.
>
> ~kaushal
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Shared resource pool for libgfapi

2015-06-09 Thread Krishnan Parthasarathi

> 
> Initially ctx had a one-to-one mapping with both the process and the
> volume/mount, but with libgfapi and libgfchangelog, ctx has lost the
> one-to-one association with the process. The question is: do we want to
> retain a one-to-one mapping between ctx and process, or between ctx and
> volume?

Yes and no. The problem is with viewing 'ctx' as a panacea. We need to break it
into objects which have their own independent per-volume and per-process
relationships.

ctx as it stands abstracts the following.

- Process identity information - process_uuid, cmd_args etc.

- Volume specific information  - graph_id, list of graphs of a given volume,
  connection to volfile server, client_table etc.

- Aggregate of resources   - memory pools, event pool, syncenv (for
  synctasks) etc.

This proposal already does part of the above, i.e., it breaks the abstraction of
an aggregate of resources out into a resource-pool structure. I wouldn't decide
on the approach based on the number of code sites that this change would impact;
more change is not necessarily bad. I would decide based on the ease of
abstraction and extension it brings with it, for the kind of features that are to
come in with 4.0. For example, imagine how this would pave the way to enhance
glusterfs to support multiple (sub-)volumes being served from a single (virtual)
brick process. This is crudely similar to the multiple-'ctx' problem gfapi is
trying to solve now.
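As a purely conceptual illustration of that split (Python used as pseudocode
for brevity; the real change would be in glusterfs' C code, and every name
here is hypothetical):

class ProcessIdentity(object):
    """Per-process data: one instance per process, shared by every volume."""
    def __init__(self, process_uuid, cmd_args):
        self.process_uuid = process_uuid
        self.cmd_args = cmd_args

class ResourcePool(object):
    """Aggregate of resources (mem pools, event pool, syncenv) that can be
    shared process-wide instead of being duplicated per 'ctx'."""
    def __init__(self):
        self.mem_pools = {}
        self.event_pool = None
        self.syncenv = None

class VolumeContext(object):
    """Per-volume data: graphs, volfile-server connection, client table."""
    def __init__(self, volname, identity, resources):
        self.volname = volname
        self.graphs = []
        self.volfile_server_conn = None
        self.client_table = {}
        self.identity = identity      # shared, one per process
        self.resources = resources    # shared, one per process

# A gfapi process mounting two volumes would then share identity/resources
# instead of carrying two full-blown ctx copies:
identity = ProcessIdentity("uuid-1234", ["glfs-example"])
pool = ResourcePool()
vol_a = VolumeContext("vol-a", identity, pool)
vol_b = VolumeContext("vol-b", identity, pool)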
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] [FAILED] /opt/qa/tools/posix-compliance/tests/chmod/00.t

2015-06-09 Thread Milind Changire
Job Console Output: http://build.gluster.org/job/smoke/18470/console

My patch is Python code and does not change Gluster internals behavior.
This test failure doesn't seem to be directly related to my patch
implementation.

Please look into the issue.

-

Test Summary Report
---
/opt/qa/tools/posix-compliance/tests/chmod/00.t   (Wstat: 0 Tests: 58 Failed: 1)
  Failed test:  43
Files=191, Tests=1960, 110 wallclock secs ( 0.70 usr  0.27 sys +  4.57
cusr  1.63 csys =  7.17 CPU)
Result: FAIL
+ finish
+ RET=1
+ '[' 1 -ne 0 ']'
++ date +%Y%m%d%T
+ filename=/d/logs/smoke/glusterfs-logs-2015060906:40:27.tgz
+ tar -czf /d/logs/smoke/glusterfs-logs-2015060906:40:27.tgz
/build/install/var/log
tar: Removing leading `/' from member names
tar: tar (child): /d/logs/smoke/glusterfs-logs-2015060906\:40\:27.tgz:
Cannot open/build/install/var/log: Cannot stat: No such file or
directory
: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
smoke.sh returned 2
Build step 'Execute shell' marked build as failure
Finished: FAILURE

-

Regards,
Milind
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel