[Gluster-devel] New tracker for glusterfs-3.7.2

2015-06-02 Thread Niels de Vos
Hi all,

All the bugs that have not made it into the 3.7.1 release have been moved
to the glusterfs-3.7.2 tracker. New bugs that should get fixed in
3.7.2 should have glusterfs-3.7.2 in the Blocks field.

Note that only bugs with version 3.7.x are valid for blocking the
glusterfs-3.7.2 tracker. You will also need a bug filed against the
mainline version to get the patches into the master branch. The bug
against mainline is a blocker for the 3.7.x clone, and only the
3.7.x clone should be blocking glusterfs-3.7.2.

The dependency tree of glusterfs-3.7.2 can be found here:


https://bugzilla.redhat.com/showdependencytree.cgi?hide_resolved=1&id=glusterfs-3.7.2

Let me know if there are any questions,
Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] DHTv2 design discussion

2015-06-02 Thread Jeff Darcy
I've put together a document which I hope captures the most recent discussions 
I've had, particularly those in Barcelona.  Commenting should be open to 
anyone, so please feel free to weigh in before too much code is written.  ;)

https://docs.google.com/document/d/1nJuG1KHtzU99HU9BK9Qxoo1ib9VXf2vwVuHzVQc_lKg/edit?usp=sharing


Re: [Gluster-devel] using GlusterFS to build an NFSv4.1 pNFS server

2015-06-02 Thread Niels de Vos
On Tue, Jun 02, 2015 at 06:18:54PM -0400, Rick Macklem wrote:
 Jiffin Tony Thottan wrote:
  
  Hi Rick,
  
  There is already support for pNFS in gluster volumes using
  nfs-ganesha :
  http://gluster.readthedocs.org/en/latest/Features/mount_gluster_volume_using_pnfs/
  It supports normal FILE_LAYOUT architecture.
 Yes, I am aware of this (although I'll admit I noticed it in the docs after I
 posted the email).
 
 Just fyi, if I wanted to set up a (near) production NFSv4.1/pNFS server,
 this would be fine, but that's not me ;-)
 I'm interested in extending the NFSv4.1 server I've already written to do
 pNFS. Why? Well, mostly because it interests me. (I've never been paid any $$
 to do any of the FreeBSD NFS work I've done, so I pretty much do it as a 
 hobby.)
 If the result never works or never performs well enough to be useful for
 production environments then...oh well, it was an interesting experiment.

Definitely sounds interesting! I don't have much to do with FreeBSD, but
I'm certainly happy to help on the Gluster side if you have any
questions.

 If it ever is useful for (near) production environments, I suspect it would be
 users that have set up a FreeBSD NFS server and it is outgrowing what a single
 server can handle. In other words, they would come from the FreeBSD NFS server
 side and not the GlusterFS side.
 
  Other comments are inline
  
  On 02/06/15 05:18, Rick Macklem wrote:
   Hi,
  
   Btw, I do most of the FreeBSD NFSv4 work.
   I am interested in trying to use GlusterFS
   to build a FreeBSD NFSv4.1 pNFS server.
   My hope is that, by directing the NFSv4.1 client
   to the host where the file resides, the client will
   be able to do I/O on it efficiently via the NFSv3
   server. (The new layout type called flex files allows
   an NFSv3 server to be a storage/data server for pNFS.)
  
  It will be good to use gluster-nfs as a data server (which is more
  tightly coupled with the bricks).
  CCing Anand, who has a better idea about the flex file layout architecture.
  
 Flex file is pretty straightforward. It simply allows the NFSv3 server
 to be what they call a storage server. All that it does is use a fake
 uid/gid that is allowed rw/ro access to the file. (This implies that
 the client is responsible for deciding if a user is allowed access to
 the file. Not a big deal for AUTH_SYS, since the server trusts the
 client's choice of uid/gid anyhow.)
 -- As such, the NFSv3 server needs to have a small change applied to
 it to allow access via this fake uid/gid.

This sounds simple enough to do. File a feature request and describe how
you can use this. Patches are welcome too, of course, but we can likely
code something up quickly.

https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&component=nfs

 Basically, the NFSv4.1 server needs to know what the NFSv3 server's
 host IP address is and what FH to use for the file on it. (I do see
 the code in the NFS xlator for generating an FH, but haven't looked
 much yet.) As noted below in the original post.

The FH in Gluster/NFS is based on the volume-id and the GFID. Both are
UUIDs. The volume-id is a unique identifier for the volume, and the GFID
is like a volume-wide inode number (volumes consist of multiple bricks
with their own filesystems; a storage server can host multiple bricks).

There is no way to know which brick should handle an FH. Looking for the
GFID on all the bricks that participate in the volume is a rather
expensive operation (many LOOKUPs). You will always need to find the
location of the file with a request through FUSE.
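To make that concrete, here is a rough sketch of such a file handle built from the two UUIDs. This is purely illustrative Python, not the NFS xlator's actual on-the-wire format, which may carry extra fields:

```python
import uuid

# Illustrative only: an FH composed of two UUIDs, with no brick
# information in it -- which is why the file's location has to be
# resolved with a separate lookup.

def make_fh(volume_id: uuid.UUID, gfid: uuid.UUID) -> bytes:
    # volume-id identifies the volume; the GFID is the volume-wide
    # "inode number" of the file
    return volume_id.bytes + gfid.bytes

vol_id = uuid.uuid4()   # stand-in for a real volume-id
gfid = uuid.uuid4()     # stand-in for a real GFID
fh = make_fh(vol_id, gfid)
assert len(fh) == 32    # two 16-byte UUIDs
```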

   To do this, I need to be able to poke the
   glusterfs server and get the following information:
   - The NFSv3 file handle and the IP address for
  the host(s) the file lives on.
  -- Using this, I am planning on creating a layout
  that tells the NFSv4.1 client to use NFSv3 to
  do I/O on the file. (What NFSv4.1 calls a storage
  server, although some RFCs might call it a data
  server.)
   - I hope to use the fuse interface for the NFSv4.1 metadata
  server.
  
  I don't know how feasible it is to implement a metadata server using
  a fuse interface.
  
 I guess I'll find out;-). The FreeBSD NFSv4.1 server is kernel based
 and exports any local file system that has a VFS/VOP interface. So,
 hopefully FUSE won't provide too many surprises.
 I am curious to see how well it performs.

I have no idea how FreeBSD handles FUSE, but I'm sure you won't have an
issue with figuring that out. You should be able to get the details
about the location of the file through GETXATTR calls. In NFS-Ganesha,
these two functions parse the output:
 - get_pathinfo_host
 - glfs_get_ds_addr

These can be found here:

https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/FSAL/FSAL_GLUSTER/mds.c#L482


   If anyone can point me to the area in the GlusterFS sources
   that I should look at to do this and/or suggest a mechanism
   for getting the above 

Re: [Gluster-devel] using GlusterFS to build an NFSv4.1 pNFS server

2015-06-02 Thread Rick Macklem
Jiffin Tony Thottan wrote:
 
 Hi Rick,
 
 There is already support for pNFS in gluster volumes using
 nfs-ganesha :
 http://gluster.readthedocs.org/en/latest/Features/mount_gluster_volume_using_pnfs/
 It supports normal FILE_LAYOUT architecture.
Yes, I am aware of this (although I'll admit I noticed it in the docs after I
posted the email).

Just fyi, if I wanted to set up a (near) production NFSv4.1/pNFS server,
this would be fine, but that's not me ;-)
I'm interested in extending the NFSv4.1 server I've already written to do
pNFS. Why? Well, mostly because it interests me. (I've never been paid any $$
to do any of the FreeBSD NFS work I've done, so I pretty much do it as a hobby.)
If the result never works or never performs well enough to be useful for
production environments then...oh well, it was an interesting experiment.

If it ever is useful for (near) production environments, I suspect it would be
users that have set up a FreeBSD NFS server and it is outgrowing what a single
server can handle. In other words, they would come from the FreeBSD NFS server
side and not the GlusterFS side.

 Other comments are inline
 
 On 02/06/15 05:18, Rick Macklem wrote:
  Hi,
 
  Btw, I do most of the FreeBSD NFSv4 work.
  I am interested in trying to use GlusterFS
  to build a FreeBSD NFSv4.1 pNFS server.
  My hope is that, by directing the NFSv4.1 client
  to the host where the file resides, the client will
  be able to do I/O on it efficiently via the NFSv3
  server. (The new layout type called flex files allows
  an NFSv3 server to be a storage/data server for pNFS.)
 
 It will be good to use gluster-nfs as a data server (which is more
 tightly coupled with the bricks).
 CCing Anand, who has a better idea about the flex file layout architecture.
 
Flex file is pretty straightforward. It simply allows the NFSv3 server
to be what they call a storage server. All that it does is use a fake
uid/gid that is allowed rw/ro access to the file. (This implies that
the client is responsible for deciding if a user is allowed access to
the file. Not a big deal for AUTH_SYS, since the server trusts the
client's choice of uid/gid anyhow.)
-- As such, the NFSv3 server needs to have a small change applied to
it to allow access via this fake uid/gid.

Basically, the NFSv4.1 server needs to know what the NFSv3 server's
host IP address is and what FH to use for the file on it. (I do see
the code in the NFS xlator for generating an FH, but haven't looked
much yet.) As noted below in the original post.

  To do this, I need to be able to poke the
  glusterfs server and get the following information:
  - The NFSv3 file handle and the IP address for
 the host(s) the file lives on.
 -- Using this, I am planning on creating a layout
 that tells the NFSv4.1 client to use NFSv3 to
 do I/O on the file. (What NFSv4.1 calls a storage
 server, although some RFCs might call it a data
 server.)
  - I hope to use the fuse interface for the NFSv4.1 metadata
 server.
 
 I don't know how feasible it is to implement a metadata server using
 a fuse interface.
 
I guess I'll find out;-). The FreeBSD NFSv4.1 server is kernel based
and exports any local file system that has a VFS/VOP interface. So,
hopefully FUSE won't provide too many surprises.
I am curious to see how well it performs.

  If anyone can point me to the area in the GlusterFS sources
  that I should look at to do this and/or suggest a mechanism
  for getting the above information out of the GlusterFS server,
  please let me know.
 
  Also, any comments w.r.t. the above plan are welcome.
 
 In my opinion, a hybrid approach would be better. Use the current
 metadata server implemented in Ganesha (support for flex files is
 already added in Ganesha); some tweaks might be needed in the write,
 read, and commit APIs of gluster-nfs. In this implementation, we should
 keep the metadata server out of the trusted storage pool (TSP), i.e. a
 dedicated server is required for the MDS.
 
I think I answered this above. Also, I doubt ganesha-nfs is ported to
FreeBSD.

Thanks for your comments, rick
ps: Given ganesha-nfs etc, I'll understand if GlusterFS isn't interested
in this. Any patches that I'll generate are a long way off anyhow.

  Thanks in advance for any information, rick
  ps: I haven't written any code yet, but I think the above
   might be feasible.
 
 You are most welcome to take part in the coding :).
 
 If you face any issues implementing the current pNFS server for gluster
 volumes, please feel free to ask.
 
 Regards,
 Jiffin

Re: [Gluster-devel] DHTv2 design discussion

2015-06-02 Thread Krishnan Parthasarathi

 I've put together a document which I hope captures the most recent
 discussions I've had, particularly those in Barcelona.  Commenting should be
 open to anyone, so please feel free to weigh in before too much code is
 written.  ;)

Thanks Jeff. This document summarises the discussion we had in the DHT
break-out session and highlights some less-discussed aspects of the early
4.0 proposals. The DHT2 proposal also needs to shed some light on how it
would affect tiering as it is implemented today. I will add this question
to the Google doc too.


Re: [Gluster-devel] answer_list in EC xlator

2015-06-02 Thread Pranith Kumar Karampuri



On 06/02/2015 08:08 PM, fanghuang.d...@yahoo.com wrote:

Hi all,

While reading the source code of the EC xlator, I am confused by the
cbk_list and answer_list defined in struct _ec_fop_data. Why do we
need two lists to combine the results of the callbacks?


Especially for the answer_list: it is initialized
in ec_fop_data_allocate, then the nodes are added
in ec_cbk_data_allocate. Without being accessed at all during the
lifetime of the fop, the whole list is finally released in ec_fop_cleanup.
Am I missing something about the answer_list?

+Xavi.

hi,
The only reason I found is that it is easier to clean up the cbks using
the answer_list. You can check the ec_fop_cleanup() function on latest
master to see how this works. Combining the cbks is a bit involved until
you understand it, but once you do, it is amazing. I tried to add comments
for this part of the code and sent a patch, but we forgot to merge it :-)
http://review.gluster.org/9982. If you think we can add more comments, or
change this part of the code in a way that makes it easier, let us
know. We would love your feedback :-). Wait for Xavi's response as well.
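As a toy illustration of the cleanup argument (plain Python, not GlusterFS code): one list groups answers for combining, while the other simply remembers every allocated answer, so cleanup can free everything in one flat pass regardless of how the grouping turned out:

```python
# Toy model of the two-list pattern: cbk_list holds answers selected
# for combination; answer_list tracks every allocated answer so that
# cleanup never has to reason about the grouping.

class Fop:
    def __init__(self):
        self.cbk_list = []     # answers grouped for combination
        self.answer_list = []  # every answer ever allocated

def add_answer(fop, answer):
    fop.answer_list.append(answer)   # always tracked for cleanup
    if not fop.cbk_list or fop.cbk_list[-1] == answer:
        fop.cbk_list.append(answer)  # naive "matching answers" stand-in

def cleanup(fop):
    # one flat walk frees everything, no matter how cbk_list was built
    freed = len(fop.answer_list)
    fop.answer_list.clear()
    fop.cbk_list.clear()
    return freed

fop = Fop()
for ans in ("ok", "ok", "eio"):
    add_answer(fop, ans)
assert cleanup(fop) == 3
```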


Pranith

Regards,
Fang Huang




Re: [Gluster-devel] DHTv2 design discussion

2015-06-02 Thread Pranith Kumar Karampuri



On 06/03/2015 01:14 AM, Jeff Darcy wrote:

I've put together a document which I hope captures the most recent discussions 
I've had, particularly those in Barcelona.  Commenting should be open to 
anyone, so please feel free to weigh in before too much code is written.  ;)

https://docs.google.com/document/d/1nJuG1KHtzU99HU9BK9Qxoo1ib9VXf2vwVuHzVQc_lKg/edit?usp=sharing

Jeff,
 Do you guys have a date by which the comments need to be
given? It helps me prioritize against other work. I would love to
make time, go through this in detail, and ask questions.


Pranith



Re: [Gluster-devel] How to find total number of gluster mounts?

2015-06-02 Thread Pranith Kumar Karampuri



On 06/01/2015 11:07 AM, Bipin Kunal wrote:

Hi All,

  Is there a way to find the total number of gluster mounts?

  If not, what would be the complexity of this RFE?

  As far as I understand, finding the number of fuse mounts should be possible
but seems unfeasible for nfs and samba mounts.
True. Bricks have connections from each of the clients. Each of the
fuse/nfs/glustershd/quotad/gfapi-based clients (samba/glfsheal) would
have a separate client-context set on the bricks, so we can get this
information. But like you said, I am not sure how it can be done for the
nfs server/samba. Adding more people.
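One possible starting point for the countable part (assuming the "clients" status sub-command of the CLI; "myvol" is a placeholder volume name): each brick reports its client connections, so listing them per volume shows the fuse, self-heal, and gfapi consumers in one place. End-user NFS/SMB mounts behind the Gluster NFS or Samba servers would still be invisible here, as noted above.

```shell
# Sketch: list the client connections each brick of "myvol" currently
# has; fuse mounts, glustershd, quotad and gfapi consumers all show up
# as separate connections.
gluster volume status myvol clients
```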


Pranith


  Please let me know your precious thoughts on this.

Thanks,
Bipin Kunal


[Gluster-devel] Only netbsd regressions seem to be triggered

2015-06-02 Thread Raghavendra Gowdappa
All,

It seems only the NetBSD regressions are triggered; the Linux-based
regressions seem not to be triggered. I've observed this with two patches
[1][2]. Pranith also feels the same. Have any of you seen a similar issue?

[1]http://review.gluster.org/#/c/10943/
[2]http://review.gluster.org/#/c/10834/

regards,
Raghavendra


[Gluster-devel] self-heald.t failures

2015-06-02 Thread Vijay Bellur

self-heald.t seems to fail intermittently.

One such instance was seen recently [1]. Can somebody look into this, please?

./tests/basic/afr/self-heald.t (Wstat: 0 Tests: 83 Failed: 1) Failed 
test: 78


Thanks,
Vijay

[1] http://build.gluster.org/job/rackspace-regression-2GB-triggered/10029/consoleFull



Re: [Gluster-devel] Only netbsd regressions seem to be triggered

2015-06-02 Thread Pranith Kumar Karampuri



On 06/03/2015 10:26 AM, Raghavendra Gowdappa wrote:

All,

It seems only the NetBSD regressions are triggered; the Linux-based
regressions seem not to be triggered. I've observed this with two patches
[1][2]. Pranith also feels the same. Have any of you seen a similar issue?
I saw it happen in reverse. I think the NetBSD jobs on my patches failed
more because they couldn't fetch the patch from gerrit. It does happen
quite a bit though.


Pranith


[1]http://review.gluster.org/#/c/10943/
[2]http://review.gluster.org/#/c/10834/

regards,
Raghavendra


[Gluster-devel] [FAILED] tests/bugs/glusterd/bug-857330/xml.t

2015-06-02 Thread Milind Changire
Please see
http://build.gluster.org/job/rackspace-regression-2GB-triggered/9994/consoleFull
for details

Kindly advise regarding resolution

--
Milind


[Gluster-devel] REMINDER: Gluster Community Bug Triage meeting today at 12:00 UTC (~in 60 minutes)

2015-06-02 Thread Niels de Vos
Hi all,

This meeting is scheduled for anyone that is interested in learning more
about, or assisting with the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Niels


[Gluster-devel] answer_list in EC xlator

2015-06-02 Thread fanghuang.data
Hi all,
While reading the source code of the EC xlator, I am confused by the cbk_list
and answer_list defined in struct _ec_fop_data. Why do we need two lists to
combine the results of the callbacks?
Especially for the answer_list: it is initialized in ec_fop_data_allocate, then
the nodes are added in ec_cbk_data_allocate. Without being accessed at all
during the lifetime of the fop, the whole list is finally released in
ec_fop_cleanup. Am I missing something about the answer_list?

Regards,
Fang Huang


Re: [Gluster-devel] Failure in volume-snapshot.t

2015-06-02 Thread Atin Mukherjee
Sent from Samsung Galaxy S4
On 2 Jun 2015 18:50, Krutika Dhananjay kdhan...@redhat.com wrote:

 volume-snapshot.t failed this time on my patch at
http://build.gluster.org/job/rackspace-regression-2GB-triggered/9958/consoleFull,
at test 39 again.
 A spurious failure perhaps?
Though it's spurious, it does raise an alarm. Avra, mind chipping in?

 -Krutika
 

 From: Atin Mukherjee amukh...@redhat.com
 To: Anand Nekkunti anekk...@redhat.com, asen  Avra Sengupta 
aseng...@redhat.com
 Cc: Gluster Devel gluster-devel@gluster.org
 Sent: Monday, June 1, 2015 2:45:06 PM
 Subject: Re: [Gluster-devel] Failure in volume-snapshot.t




 On 06/01/2015 11:41 AM, Anand Nekkunti wrote:
  Hi Atin,
    It seems like a spurious failure (it failed one time). My patch has
  nothing to do with snapshots. I will resend my patch with a rebase.
 Then, I would request Avra to take a look at it.
  Regards
  Anand.N
 
  On 05/29/2015 07:17 PM, Atin Mukherjee wrote:
  Anand,
 
  Could you check if your patch [1] fails this regression every time?
  Otherwise I would request Avra to take a look at [2].
 
  Snapshot delete failed with following error:
 
  snapshot delete: failed: Snapshot
patchy2_snap1_GMT-2015.05.29-13.40.17
  might not be in an usable state.
  volume delete: patchy2: failed: Staging failed on 127.1.1.3. Error:
  Cannot delete Volume patchy2 ,as it has 1 snapshots. To delete the
  volume, first delete all the snapshots under it.
 
  [1] http://review.gluster.org/10586
  [2]
 
http://build.gluster.org/job/rackspace-regression-2GB-triggered/9789/consoleFull
 
 

 --
 ~Atin