Re: [Gluster-devel] Can anyone else shed any light on this warning?

2014-07-26 Thread Joe Julian


On 07/26/2014 12:02 AM, Pranith Kumar Karampuri wrote:


On 07/26/2014 11:06 AM, Pranith Kumar Karampuri wrote:


On 07/26/2014 03:06 AM, Joe Julian wrote:
How can it come about? Is this from replacing a brick days ago? Can 
I prevent it from happening?


[2014-07-25 07:00:29.287680] W [fuse-resolve.c:546:fuse_resolve_fd] 0-fuse-resolve: migration of basefd (ptr:0x7f17cb846444 inode-gfid:87544fde-9bad-46d8-b610-1a8c93b85113) did not complete, failing fop with EBADF (old-subvolume:gv-nova-3 new-subvolume:gv-nova-4)


It's critical because it causes a segfault every time. :(

Joe,
 This is the fd migration code. When the brick layout changes (a 
graph change), the file needs to be re-opened in the new graph. That 
re-open seems to have failed, and it probably leads to a crash because 
of an extra unref in the failure code path. Could you add the brick and 
mount logs to the bug https://bugzilla.redhat.com/show_bug.cgi?id=1123289? 
What is the configuration of the volume?
I checked the code; I don't see any extra unrefs as of now. Please 
provide the details I asked for in the bug.
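
To make the suspected failure mode concrete, here is a minimal sketch 
assuming a simple reference-counted fd. The names (model_fd, migrate_fd, 
reopen_on_new_graph) and the refcounting model are illustrative, not 
GlusterFS source:

/* Illustrative sketch only: the fd is reference-counted; if the
 * failing re-open path drops a reference and the caller later drops
 * its own reference again, the count underflows. In real code the fd
 * would already have been freed at zero, so the second unref is a
 * use-after-free that typically shows up later as a segfault. */
#include <stdio.h>
#include <stdlib.h>

struct model_fd {
    int refcount;
};

static void fd_unref(struct model_fd *fd)
{
    if (--fd->refcount < 0) {
        /* Real code would have freed the fd when the count hit zero;
         * here we just report the extra unref instead of crashing. */
        fprintf(stderr, "refcount underflow: extra unref detected\n");
        exit(1);
    }
}

/* Re-open of the basefd on the new graph; simulate the failed
 * migration (the EBADF case) from the log above. */
static int reopen_on_new_graph(struct model_fd *fd)
{
    (void)fd;
    return -1;
}

static int migrate_fd(struct model_fd *fd)
{
    if (reopen_on_new_graph(fd) < 0) {
        fd_unref(fd); /* hypothetical bug: failure path drops a ref */
        return -1;
    }
    return 0;
}

int main(void)
{
    struct model_fd fd = { .refcount = 1 }; /* caller's reference */

    if (migrate_fd(&fd) < 0)
        fprintf(stderr, "migration failed, fop returns EBADF\n");

    fd_unref(&fd); /* caller drops its ref too: underflow here */
    return 0;
}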


CC Raghavendra G, Raghavendra Bhat who know this code path a bit more.

Added the log and volume info.

I assume I can prevent any more nodes from crashing by migrating any VMs 
whose image was hosted on the former brick to another compute node, 
which would establish a whole new fd?
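
As a toy model of why a fresh open would sidestep the failing migration 
entirely (the generation-tagged fd below is my own illustration, not how 
GlusterFS implements this):

/* Toy model: only fds opened on an older graph go through basefd
 * migration on access. An fd created after the graph switch (e.g. by
 * a VM restarted elsewhere, which re-opens its image file) is already
 * bound to the current graph, so there is nothing to migrate. */
#include <stdio.h>

static int current_generation = 2; /* after the brick replacement */

struct model_fd {
    int generation; /* graph generation the fd was opened on */
};

static int needs_migration(const struct model_fd *fd)
{
    return fd->generation < current_generation;
}

int main(void)
{
    struct model_fd old_fd = { .generation = 1 }; /* opened pre-switch */
    struct model_fd new_fd = { .generation = 2 }; /* fresh open */

    printf("old fd needs migration: %s\n",
           needs_migration(&old_fd) ? "yes (this is what fails)" : "no");
    printf("new fd needs migration: %s\n",
           needs_migration(&new_fd) ? "yes" : "no");
    return 0;
}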

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Can anyone else shed any light on this warning?

2014-07-26 Thread Pranith Kumar Karampuri


On 07/26/2014 11:06 AM, Pranith Kumar Karampuri wrote:


On 07/26/2014 03:06 AM, Joe Julian wrote:
How can it come about? Is this from replacing a brick days ago? Can I 
prevent it from happening?


[2014-07-25 07:00:29.287680] W [fuse-resolve.c:546:fuse_resolve_fd] 0-fuse-resolve: migration of basefd (ptr:0x7f17cb846444 inode-gfid:87544fde-9bad-46d8-b610-1a8c93b85113) did not complete, failing fop with EBADF (old-subvolume:gv-nova-3 new-subvolume:gv-nova-4)


It's critical because it causes a segfault every time. :(

Joe,
 This is the fd migration code. When the brick layout changes (a 
graph change), the file needs to be re-opened in the new graph. That 
re-open seems to have failed, and it probably leads to a crash because 
of an extra unref in the failure code path. Could you add the brick and 
mount logs to the bug https://bugzilla.redhat.com/show_bug.cgi?id=1123289? 
What is the configuration of the volume?
I checked the code; I don't see any extra unrefs as of now. Please 
provide the details I asked for in the bug.


CC Raghavendra G, Raghavendra Bhat who know this code path a bit more.

Pranith

pranith


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] Can anyone else shed any light on this warning?

2014-07-25 Thread Pranith Kumar Karampuri


On 07/26/2014 03:06 AM, Joe Julian wrote:
How can it come about? Is this from replacing a brick days ago? Can I 
prevent it from happening?


[2014-07-25 07:00:29.287680] W [fuse-resolve.c:546:fuse_resolve_fd] 0-fuse-resolve: migration of basefd (ptr:0x7f17cb846444 inode-gfid:87544fde-9bad-46d8-b610-1a8c93b85113) did not complete, failing fop with EBADF (old-subvolume:gv-nova-3 new-subvolume:gv-nova-4)


It's critical because it causes a segfault every time. :(

Joe,
 This is the fd migration code. When the brick layout changes (a 
graph change), the file needs to be re-opened in the new graph. That 
re-open seems to have failed, and it probably leads to a crash because 
of an extra unref in the failure code path. Could you add the brick and 
mount logs to the bug https://bugzilla.redhat.com/show_bug.cgi?id=1123289? 
What is the configuration of the volume?


pranith


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel

