Re: [Gluster-devel] regarding message for '-1' on gerrit

2014-07-07 Thread Justin Clift
On 07/07/2014, at 2:50 AM, Pranith Kumar Karampuri wrote:
 On 07/06/2014 11:05 PM, Vijay Bellur wrote:
 On 07/06/2014 07:47 PM, Pranith Kumar Karampuri wrote:
 hi Justin/Vijay,
  I always felt '-1' saying 'I prefer you didn't submit this' is a
 bit harsh. Most of the time all it means is 'Need some more changes'. Do
 you think we can change this message?
 
 The message can be changed. What would everyone like to see as appropriate 
 messages accompanying values '-1' and '-2'?
 For '-1' - 'Please address the comments and Resubmit.'

That sounds good. :)

 I am not sure about '-2'

Maybe something like?

  I have strong doubts about this approach

(seems to reflect its usage)
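
If we go with those, the wording would presumably just be set on the
Code-Review label in project.config (a sketch only, assuming Gerrit's
per-project label configuration; the 0/+1/+2 entries would keep the stock
wording):

  [label "Code-Review"]
      # only the negative scores change; 0, +1 and +2 keep Gerrit's defaults
      value = -2 I have strong doubts about this approach
      value = -1 Please address the comments and resubmit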

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regarding message for '-1' on gerrit

2014-07-07 Thread Pranith Kumar Karampuri


On 07/07/2014 03:11 PM, Justin Clift wrote:

On 07/07/2014, at 2:50 AM, Pranith Kumar Karampuri wrote:

On 07/06/2014 11:05 PM, Vijay Bellur wrote:

On 07/06/2014 07:47 PM, Pranith Kumar Karampuri wrote:

hi Justin/Vijay,
  I always felt '-1' saying 'I prefer you didn't submit this' is a
bit harsh. Most of the time all it means is 'Need some more changes'. Do
you think we can change this message?

The message can be changed. What would everyone like to see as appropriate 
messages accompanying values '-1' and '-2'?

For '-1' - 'Please address the comments and Resubmit.'

That sounds good. :)


I am not sure about '-2'

Maybe something like?

   I have strong doubts about this approach
(seems to reflect its usage)

Agree :-)

Pranith


+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] XML support for snapshot command

2014-07-07 Thread Rajesh Joseph
Hi all,

The following patch provides --xml support for all the snapshot commands. I need
some reviewers to take a look at the patch and provide +1.
This patch has been pending for quite a long time.

http://review.gluster.org/#/c/7663

Thanks & Regards,
Rajesh
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regarding message for '-1' on gerrit

2014-07-07 Thread Lalatendu Mohanty

On 07/07/2014 03:11 PM, Justin Clift wrote:

On 07/07/2014, at 2:50 AM, Pranith Kumar Karampuri wrote:

On 07/06/2014 11:05 PM, Vijay Bellur wrote:

On 07/06/2014 07:47 PM, Pranith Kumar Karampuri wrote:

hi Justin/Vijay,
  I always felt '-1' saying 'I prefer you didn't submit this' is a
bit harsh. Most of the time all it means is 'Need some more changes'. Do
you think we can change this message?

The message can be changed. What would everyone like to see as appropriate 
messages accompanying values '-1' and '-2'?

For '-1' - 'Please address the comments and Resubmit.'


+1

That sounds good. :)


I am not sure about '-2'

Maybe something like?

   I have strong doubts about this approach

+1

(seems to reflect its usage)



Thanks to Pranith for bringing it up :).

-Lala
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

2014-07-07 Thread Pranith Kumar Karampuri


On 07/07/2014 06:18 PM, Pranith Kumar Karampuri wrote:

Joseph,
Any updates on this? It failed 5 regressions today.
http://build.gluster.org/job/rackspace-regression-2GB/541/consoleFull
http://build.gluster.org/job/rackspace-regression-2GB-triggered/175/consoleFull 

http://build.gluster.org/job/rackspace-regression-2GB-triggered/173/consoleFull 

http://build.gluster.org/job/rackspace-regression-2GB-triggered/166/consoleFull 

http://build.gluster.org/job/rackspace-regression-2GB-triggered/172/consoleFull 



One more : http://build.gluster.org/job/rackspace-regression-2GB/543/console

Pranith



CC some more folks who work on snapshot.

Pranith

On 07/05/2014 11:19 AM, Pranith Kumar Karampuri wrote:

hi Joseph,
The test above failed on a documentation patch, so it has got to 
be a spurious failure.
Check 
http://build.gluster.org/job/rackspace-regression-2GB-triggered/150/consoleFull 
for more information


Pranith
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Locale problem in master

2014-07-07 Thread Anders Blomdell

Due to the line (commit 040319d8bced2f25bf25d8f6b937901c3a40e34b):

  ./libglusterfs/src/logging.c:503:setlocale(LC_ALL, "");

The command 

  env -i LC_NUMERIC=sv_SE.utf8 /usr/sbin/glusterfs ...

will fail because the Swedish decimal separator is ',' rather than '.',
i.e. _gf_string2double will fail because strtod("1.0", &tail) will leave
the tail '.0'.
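
A minimal standalone reproducer (an illustration only, not glusterfs code;
it assumes the sv_SE.utf8 locale is installed):

  #include <locale.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main (void)
  {
          char   *tail = NULL;
          double  val;

          setlocale (LC_ALL, "");          /* same call as logging.c:503 */
          val = strtod ("1.0", &tail);

          /* With LC_NUMERIC=sv_SE.utf8 the decimal separator is ',', so
           * parsing stops at the '.' and tail ends up as ".0" -- exactly
           * the condition that makes _gf_string2double reject the value. */
          printf ("parsed %d, tail '%s'\n", (int) val, tail);
          return 0;
  }

Running it as 'env -i LC_NUMERIC=sv_SE.utf8 ./a.out' prints parsed 1, tail '.0',
while under the C locale the tail is empty.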

/Anders

-- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone:+46 46 222 4625
P.O. Box 118 Fax:  +46 46 138118
SE-221 00 Lund, Sweden

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Locale problem in master

2014-07-07 Thread Pranith Kumar Karampuri

Including Bala, who is the author of the commit.

Pranith
On 07/07/2014 10:18 PM, Anders Blomdell wrote:

Due to the line (commit 040319d8bced2f25bf25d8f6b937901c3a40e34b):

   ./libglusterfs/src/logging.c:503:setlocale(LC_ALL, "");

The command

   env -i LC_NUMERIC=sv_SE.utf8 /usr/sbin/glusterfs ...

will fail because the Swedish decimal separator is ',' rather than '.',
i.e. _gf_string2double will fail because strtod("1.0", &tail) will leave
the tail '.0'.

/Anders



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Locale problem in master

2014-07-07 Thread Anders Blomdell
On 2014-07-07 18:57, Pranith Kumar Karampuri wrote:
 Including Bala who is the author of the commit
 
 Pranith
 On 07/07/2014 10:18 PM, Anders Blomdell wrote:
 Due to the line (commit 040319d8bced2f25bf25d8f6b937901c3a40e34b):

./libglusterfs/src/logging.c:503:setlocale(LC_ALL, "");

 The command

env -i LC_NUMERIC=sv_SE.utf8 /usr/sbin/glusterfs ...

 will fail because the Swedish decimal separator is ',' rather than '.',
 i.e. _gf_string2double will fail because strtod("1.0", &tail) will leave
 the tail '.0'.

 /Anders

 
Simple fix:


--- a/libglusterfs/src/logging.c
+++ b/libglusterfs/src/logging.c
@@ -501,6 +501,7 @@ gf_openlog (const char *ident, int option, int facility)
 
         /* TODO: Should check for errors here and return appropriately */
         setlocale(LC_ALL, "");
+        setlocale(LC_NUMERIC, "C");
         /* close the previous syslog if open as we are changing settings */
         closelog ();
         openlog(ident, _option, _facility);


-- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone:+46 46 222 4625
P.O. Box 118 Fax:  +46 46 138118
SE-221 00 Lund, Sweden

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] RPM's for latest glusterfs does not install on Fedora-20

2014-07-07 Thread Anders Blomdell
On 2014-07-07 18:17, Niels de Vos wrote:
 On Mon, Jul 07, 2014 at 04:48:18PM +0200, Anders Blomdell wrote:
 On 2014-07-07 15:08, Lalatendu Mohanty wrote:
 # rpm -U ./glusterfs-server-3.5qa2-0.722.git2c5eb5c.fc20.x86_64.rpm 
 ./glusterfs-fuse-3.5qa2-0.722.git2c5eb5c.fc20.x86_64.rpm 
 ./glusterfs-cli-3.5qa2-0.722.git2c5eb5c.fc20.x86_64.rpm 
 ./glusterfs-libs-3.5qa2-0.722.git2c5eb5c.fc20.x86_64.rpm 
 ./glusterfs-3.5qa2-0.722.git2c5eb5c.fc20.x86_64.rpm 
 ./glusterfs-api-3.5qa2-0.722.git2c5eb5c.fc20.x86_64.rpm
 error: Failed dependencies:
 libgfapi.so.0()(64bit) is needed by (installed) 
 qemu-common-2:1.6.2-6.fc20.x86_64
 libgfapi.so.0()(64bit) is needed by (installed) 
 qemu-system-x86-2:1.6.2-6.fc20.x86_64
 libgfapi.so.0()(64bit) is needed by (installed) 
 qemu-img-2:1.6.2-6.fc20.x86_64


 Feature or bug?

 /Anders

 Hey Anders,

 You should not see this issue as glusterfs-api provides 
 /usr/lib64/libgfapi.so.0 .

 $rpm -ql glusterfs-api
 /usr/lib/python2.7/site-packages/gluster/__init__.py
 /usr/lib/python2.7/site-packages/gluster/__init__.pyc
 /usr/lib/python2.7/site-packages/gluster/__init__.pyo
 /usr/lib/python2.7/site-packages/gluster/gfapi.py
 /usr/lib/python2.7/site-packages/gluster/gfapi.pyc
 /usr/lib/python2.7/site-packages/gluster/gfapi.pyo
 /usr/lib/python2.7/site-packages/glusterfs_api-3.5.0-py2.7.egg-info
 /usr/lib64/glusterfs/3.5.0/xlator/mount/api.so
 /usr/lib64/libgfapi.so.0
 /usr/lib64/libgfapi.so.0.0.0
 True for branch release-3.5 from git://git.gluster.org/glusterfs.git, but 
 not for master.
 
 Indeed, the master branch uses SONAME-versioning. Any applications (like 
 qemu) using libgfapi.so need to get re-compiled to use the update.
Can do without these for my current testing :-)

 

 Is there any specific reason you want to use
 glusterfs-server-3.5qa2-0.722.git2c5eb5c.fc20.x86_64.rpm? 3.5.1 is
 available in the Fedora 20 updates repo, so if you do a yum update
 (i.e. yum update glusterfs-server) you should get the glusterfs 3.5.1 GA
 version, which is likely to be more stable than
 glusterfs-server-3.5qa2-0.722.
 I would like to track the progress of the fixes for my bugs and at the same
 time experiment with reverting 3dc56cbd16b1074d7ca1a4fe4c5bf44400eb63ff
 to re-add IPv6 support, and I figured that master is what will eventually
 become 3.6.0.
 
 Yes, you are correct. The master branch will become glusterfs-3.6. You 
 should be able to recompile qemu (and other?) packages on a system that 
 has glusterfs-devel-3.6 installed. I can also recommend the use of 
 'mockchain' (from the 'mock' package) for rebuilding glusterfs and other 
 dependent packages.
I'll keep tracking master (exposing as many bugs as I can); I'll redirect
followups to gluster-devel@gluster.org and drop gluster-us...@gluster.org
after this message.

Thanks!


/Anders
 



-- 
Anders Blomdell  Email: anders.blomd...@control.lth.se
Department of Automatic Control
Lund University  Phone:+46 46 222 4625
P.O. Box 118 Fax:  +46 46 138118
SE-221 00 Lund, Sweden

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] RPM's for latest glusterfs does not install on Fedora-20

2014-07-07 Thread Niels de Vos
On Mon, Jul 07, 2014 at 07:29:15PM +0200, Anders Blomdell wrote:
 On 2014-07-07 18:17, Niels de Vos wrote:
  On Mon, Jul 07, 2014 at 04:48:18PM +0200, Anders Blomdell wrote:
  On 2014-07-07 15:08, Lalatendu Mohanty wrote:
  # rpm -U ./glusterfs-server-3.5qa2-0.722.git2c5eb5c.fc20.x86_64.rpm 
  ./glusterfs-fuse-3.5qa2-0.722.git2c5eb5c.fc20.x86_64.rpm 
  ./glusterfs-cli-3.5qa2-0.722.git2c5eb5c.fc20.x86_64.rpm 
  ./glusterfs-libs-3.5qa2-0.722.git2c5eb5c.fc20.x86_64.rpm 
  ./glusterfs-3.5qa2-0.722.git2c5eb5c.fc20.x86_64.rpm 
  ./glusterfs-api-3.5qa2-0.722.git2c5eb5c.fc20.x86_64.rpm
  error: Failed dependencies:
  libgfapi.so.0()(64bit) is needed by (installed) 
  qemu-common-2:1.6.2-6.fc20.x86_64
  libgfapi.so.0()(64bit) is needed by (installed) 
  qemu-system-x86-2:1.6.2-6.fc20.x86_64
  libgfapi.so.0()(64bit) is needed by (installed) 
  qemu-img-2:1.6.2-6.fc20.x86_64
 
 
  Feature or bug?
 
  /Anders
 
  Hey Anders,
 
  You should not see this issue as glusterfs-api provides 
  /usr/lib64/libgfapi.so.0 .
 
  $rpm -ql glusterfs-api
  /usr/lib/python2.7/site-packages/gluster/__init__.py
  /usr/lib/python2.7/site-packages/gluster/__init__.pyc
  /usr/lib/python2.7/site-packages/gluster/__init__.pyo
  /usr/lib/python2.7/site-packages/gluster/gfapi.py
  /usr/lib/python2.7/site-packages/gluster/gfapi.pyc
  /usr/lib/python2.7/site-packages/gluster/gfapi.pyo
  /usr/lib/python2.7/site-packages/glusterfs_api-3.5.0-py2.7.egg-info
  /usr/lib64/glusterfs/3.5.0/xlator/mount/api.so
  /usr/lib64/libgfapi.so.0
  /usr/lib64/libgfapi.so.0.0.0
  True for branch release-3.5 from git://git.gluster.org/glusterfs.git, but 
  not for master.
  
  Indeed, the master branch uses SONAME-versioning. Any applications (like 
  qemu) using libgfapi.so need to get re-compiled to use the update.
 Can do without these for my current testing :-)

Of course, removing packages you don't need is a nice workaround :)

 
  
 
  Is there any specific reason you want to use
  glusterfs-server-3.5qa2-0.722.git2c5eb5c.fc20.x86_64.rpm? 3.5.1 is
  available in the Fedora 20 updates repo, so if you do a yum update
  (i.e. yum update glusterfs-server) you should get the glusterfs 3.5.1 GA
  version, which is likely to be more stable than
  glusterfs-server-3.5qa2-0.722.
  I would like to track the progress of the fixes for my bugs and at the
  same time experiment with reverting
  3dc56cbd16b1074d7ca1a4fe4c5bf44400eb63ff to re-add IPv6 support, and I
  figured that master is what will eventually become 3.6.0.
  
  Yes, you are correct. The master branch will become glusterfs-3.6. You 
  should be able to recompile qemu (and other?) packages on a system that 
  has glusterfs-devel-3.6 installed. I can also recommend the use of 
  'mockchain' (from the 'mock' package) for rebuilding glusterfs and other 
  dependent packages.
 I'll keep tracking master (exposing as many bugs as I can), I'll redirect 
 followups to gluster-devel@gluster.org  and drop gluster-us...@gluster.org
 after this message.

Great, thanks for the testing!

Niels
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] inode linking in GlusterFS NFS server

2014-07-07 Thread Raghavendra Bhat


Hi,

As per my understanding, the nfs server is not doing inode linking in the
readdirp callback. Because of this there might be some errors while
dealing with virtual inodes (or gfids). As of now the meta, gfid-access and
snapview-server (used for user serviceable snapshots) xlators make use
of virtual inodes with random gfids. The situation is this:


Say the user serviceable snapshot feature has been enabled and there are 2
snapshots (snap1 and snap2). Let /mnt/nfs be the nfs mount. The
snapshots can be accessed by entering the .snaps directory.  Now if the snap1
directory is entered and *ls -l* is done (i.e. cd
/mnt/nfs/.snaps/snap1 and then ls -l),  the readdirp fop is sent to
the snapview-server xlator (which is part of a daemon running for the
volume), which talks to the corresponding snapshot volume and gets the
dentry list. Before unwinding, it would have generated random gfids for
those dentries.


Now the nfs server, upon getting the readdirp reply, will associate the gfid
with the filehandle created for the entry. But without linking the inode, it
sends the readdirp reply back to the nfs client. The next time the nfs
client performs some operation on one of those filehandles, the nfs server
tries to resolve it by finding the inode for the gfid present in the
filehandle. But since the inode was not linked in readdirp, the inode_find
operation fails and it tries to do a hard resolution by sending a
lookup operation on that gfid to the normal main graph. (The information
on whether the call should be sent to the main graph or snapview-server
would be present in the inode context. But here the lookup has come on a
gfid with a newly created inode where the context is not there yet, so
the call would be sent to the main graph itself.) But since the gfid is
a randomly generated virtual gfid (not present on disk), the lookup
operation fails, giving an error.


As per my understanding this can happen with any xlator that deals with 
virtual inodes (by generating random gfids).


I can think of these 2 methods to handle this:
1)  do inode linking for readdirp also in the nfs server (a rough sketch
follows this list)
2)  If the lookup operation fails, the snapview-client xlator (which actually
redirects fops aimed at the snapshot world to snapview-server by looking into
the inode context) should check whether the failed lookup is a nameless
lookup. If so, AND the gfid of the inode is NULL AND the lookup has come
from the main graph, then instead of unwinding the lookup with failure, send
it to snapview-server, which might be able to find the inode for the gfid
(as the gfid was generated by snapview-server itself, it should be able to
find the inode for that gfid unless it has been purged from the inode table).
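
A rough, untested sketch of what (1) could look like (illustration only; the
helper name is made up, and it just uses the generic libglusterfs
inode_link()/inode_lookup() API rather than the exact nfs xlator code paths):

  /* Hypothetical helper, called from the nfs readdirp callback with the
   * parent inode and the entry list returned by the lower xlators.
   * Assumes libglusterfs's "inode.h" and "gf-dirent.h". */
  static void
  nfs_link_readdirp_entries (inode_t *parent, gf_dirent_t *entries)
  {
          gf_dirent_t *entry  = NULL;
          inode_t     *linked = NULL;

          list_for_each_entry (entry, &entries->list, list) {
                  if (!entry->inode)
                          continue;

                  /* Link the inode into the inode table so that a later
                   * gfid-based resolve (inode_find) succeeds instead of
                   * falling back to a fresh lookup on the main graph. */
                  linked = inode_link (entry->inode, parent, entry->d_name,
                                       &entry->d_stat);
                  if (linked) {
                          inode_lookup (linked);
                          inode_unref (linked);
                  }
          }
  }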



Please let me know if I have missed anything. Please provide feedback.

Regards,
Raghavendra Bhat
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] inode linking in GlusterFS NFS server

2014-07-07 Thread Anand Avati
On Mon, Jul 7, 2014 at 12:48 PM, Raghavendra Bhat rab...@redhat.com wrote:


 Hi,

 As per my understanding nfs server is not doing inode linking in readdirp
 callback. Because of this there might be some errors while dealing with
 virtual inodes (or gfids). As of now meta, gfid-access and snapview-server
 (used for user serviceable snapshots) xlators makes use of virtual inodes
 with random gfids. The situation is this:

 Say User serviceable snapshot feature has been enabled and there are 2
 snapshots (snap1 and snap2). Let /mnt/nfs be the nfs mount. Now the
 snapshots can be accessed by entering .snaps directory.  Now if snap1
 directory is entered and *ls -l* is done (i.e. cd /mnt/nfs/.snaps/snap1
 and then ls -l),  the readdirp fop is sent to the snapview-server xlator
 (which is part of a daemon running for the volume), which talks to the
 corresponding snapshot volume and gets the dentry list. Before unwinding it
 would have generated random gfids for those dentries.

 Now nfs server upon getting readdirp reply, will associate the gfid with
 the filehandle created for the entry. But without linking the inode, it
 would send the readdirp reply back to nfs client. Now next time when nfs
 client makes some operation on one of those filehandles, nfs server tries
 to resolve it by finding the inode for the gfid present in the filehandle.
 But since the inode was not linked in readdirp, inode_find operation fails
 and it tries to do a hard resolution by sending the lookup operation on
 that gfid to the normal main graph. (The information on whether the call
 should be sent to main graph or snapview-server would be present in the
 inode context. But here the lookup has come on a gfid with a newly created
 inode where the context is not there yet. So the call would be sent to the
 main graph itself). But since the gfid is a randomly generated virtual gfid
 (not present on disk), the lookup operation fails giving error.

 As per my understanding this can happen with any xlator that deals with
 virtual inodes (by generating random gfids).

 I can think of these 2 methods to handle this:
 1)  do inode linking for readdirp also in nfs server
 2)  If lookup operation fails, snapview-client xlator (which actually
 redirects the fops on snapshot world to snapview-server by looking into the
 inode context) should check if the failed lookup is a nameless lookup. If
 so, AND the gfid of the inode is NULL AND lookup has come from main graph,
 then instead of unwinding the lookup with failure, send it to
 snapview-server which might be able to find the inode for the gfid (as the
 gfid was generated by itself, it should be able to find the inode for that
 gfid unless and until it has been purged from the inode table).


 Please let me know if I have missed anything. Please provide feedback.



That's right. NFS server should be linking readdirp_cbk inodes just like
FUSE or protocol/server. It has been OK without virtual gfids thus far.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] inode linking in GlusterFS NFS server

2014-07-07 Thread Raghavendra Bhat

On Tuesday 08 July 2014 01:21 AM, Anand Avati wrote:
On Mon, Jul 7, 2014 at 12:48 PM, Raghavendra Bhat rab...@redhat.com wrote:



Hi,

As per my understanding nfs server is not doing inode linking in
readdirp callback. Because of this there might be some errors
while dealing with virtual inodes (or gfids). As of now meta,
gfid-access and snapview-server (used for user serviceable
snapshots) xlators makes use of virtual inodes with random gfids.
The situation is this:

Say User serviceable snapshot feature has been enabled and there
are 2 snapshots (snap1 and snap2). Let /mnt/nfs be the nfs
mount. Now the snapshots can be accessed by entering .snaps
directory.  Now if snap1 directory is entered and *ls -l* is done
(i.e. cd /mnt/nfs/.snaps/snap1 and then ls -l),  the readdirp
fop is sent to the snapview-server xlator (which is part of a
daemon running for the volume), which talks to the corresponding
snapshot volume and gets the dentry list. Before unwinding it
would have generated random gfids for those dentries.

Now nfs server upon getting readdirp reply, will associate the
gfid with the filehandle created for the entry. But without
linking the inode, it would send the readdirp reply back to nfs
client. Now next time when nfs client makes some operation on one
of those filehandles, nfs server tries to resolve it by finding
the inode for the gfid present in the filehandle. But since the
inode was not linked in readdirp, inode_find operation fails and
it tries to do a hard resolution by sending the lookup operation
on that gfid to the normal main graph. (The information on whether
the call should be sent to main graph or snapview-server would be
present in the inode context. But here the lookup has come on a
gfid with a newly created inode where the context is not there
yet. So the call would be sent to the main graph itself). But
since the gfid is a randomly generated virtual gfid (not present
on disk), the lookup operation fails giving error.

As per my understanding this can happen with any xlator that deals
with virtual inodes (by generating random gfids).

I can think of these 2 methods to handle this:
1)  do inode linking for readdirp also in nfs server
2)  If lookup operation fails, snapview-client xlator (which
actually redirects the fops on snapshot world to snapview-server
by looking into the inode context) should check if the failed
lookup is a nameless lookup. If so, AND the gfid of the inode is
NULL AND lookup has come from main graph, then instead of
unwinding the lookup with failure, send it to snapview-server
which might be able to find the inode for the gfid (as the gfid
was generated by itself, it should be able to find the inode for
that gfid unless and until it has been purged from the inode table).


Please let me know if I have missed anything. Please provide feedback.



That's right. NFS server should be linking readdirp_cbk inodes just 
like FUSE or protocol/server. It has been OK without virtual gfids 
thus far.


I did the changes to link inodes in readdirp_cbk in the nfs server. It seems
to work fine. Do we need the second change as well (i.e. the change in
snapview-client to redirect fresh nameless lookups to
snapview-server)? With the nfs server linking the inodes in readdirp, I
think the second change might not be needed.


Regards,
Raghavendra Bhat
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t

2014-07-07 Thread Joseph Fernandes
Hi Pranith,

I am looking into this issue. Will keep you posted on the progress by EOD.

Regards,
~Joe

- Original Message -
From: Pranith Kumar Karampuri pkara...@redhat.com
To: josfe...@redhat.com
Cc: Gluster Devel gluster-devel@gluster.org, Rajesh Joseph 
rjos...@redhat.com, Sachin Pandit span...@redhat.com, aseng...@redhat.com
Sent: Monday, July 7, 2014 8:42:24 PM
Subject: Re: [Gluster-devel] regarding spurious failure tests/bugs/bug-1112559.t


On 07/07/2014 06:18 PM, Pranith Kumar Karampuri wrote:
 Joseph,
 Any updates on this? It failed 5 regressions today.
 http://build.gluster.org/job/rackspace-regression-2GB/541/consoleFull
 http://build.gluster.org/job/rackspace-regression-2GB-triggered/175/consoleFull
  

 http://build.gluster.org/job/rackspace-regression-2GB-triggered/173/consoleFull
  

 http://build.gluster.org/job/rackspace-regression-2GB-triggered/166/consoleFull
  

 http://build.gluster.org/job/rackspace-regression-2GB-triggered/172/consoleFull
  


One more : http://build.gluster.org/job/rackspace-regression-2GB/543/console

Pranith


 CC some more folks who work on snapshot.

 Pranith

 On 07/05/2014 11:19 AM, Pranith Kumar Karampuri wrote:
 hi Joseph,
 The test above failed on a documentation patch, so it has got to 
 be a spurious failure.
 Check 
 http://build.gluster.org/job/rackspace-regression-2GB-triggered/150/consoleFull
  
 for more information

 Pranith
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel