Re: [Gluster-users] tune2fs exited with non-zero exit status

2015-03-24 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
Ah, that is handy to know.

Will this patch get applied to the 3.5 release stream, or am I going to have to 
look at moving onto 3.6 at some point?

Thanks

Paul

--
Paul Osborne
Senior Systems Engineer
Infrastructure Services
IT Department
Canterbury Christ Church University

-----Original Message-----
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Atin Mukherjee
Sent: 18 March 2015 16:24
To: Vitaly Lipatov; gluster-users@gluster.org
Subject: Re: [Gluster-users] tune2fs exited with non-zero exit status



On 03/18/2015 08:04 PM, Vitaly Lipatov wrote:
  
 
 Osborne, Paul (paul.osbo...@canterbury.ac.uk) wrote on 2015-03-16 19:22:
 
 Hi,

 I am just looking through my logs and am seeing a lot of entries of the form:
 
 [2015-03-16 16:02:55.553140] I [glusterd-handler.c:3530:__glusterd_handle_status_volume] 0-management: Received status volume req for volume wiki
 [2015-03-16 16:02:55.561173] E [glusterd-utils.c:5140:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
 [2015-03-16 16:02:55.561204] E [glusterd-utils.c:5166:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
 
 Having had a rummage I *SUSPECT* it is because gluster is trying to get the volume status by querying the superblock of the filesystem for a brick volume. However, this is an issue because the volume was created in the form:
 
 I believe it is a bug (a missing device argument to tune2fs) introduced by this patch: http://review.gluster.org/#/c/8134/
Could you send a patch to fix this problem? You can refer to [1] for the 
workflow to send a patch.


[1]
http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow


~Atin
 
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users
 

--
~Atin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] tune2fs exited with non-zero exit status

2015-03-18 Thread Vitaly Lipatov
 

Osborne, Paul (paul.osbo...@canterbury.ac.uk) wrote on 2015-03-16 19:22:

 Hi, 
 
 I am just looking through my logs and am seeing a lot of entries of the form:
 
 [2015-03-16 16:02:55.553140] I [glusterd-handler.c:3530:__glusterd_handle_status_volume] 0-management: Received status volume req for volume wiki
 [2015-03-16 16:02:55.561173] E [glusterd-utils.c:5140:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
 [2015-03-16 16:02:55.561204] E [glusterd-utils.c:5166:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
 
 Having had a rummage I *SUSPECT* it is because gluster is trying to get the volume status by querying the superblock of the filesystem for a brick volume. However, this is an issue because the volume was created in the form:

I believe it is a bug (a missing device argument to tune2fs) introduced by this patch: http://review.gluster.org/#/c/8134/
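
For anyone wanting to confirm exactly what glusterd hands to tune2fs when these messages appear, a trace along these lines should show the spawned command (this is just a suggestion on my part, not something already covered in the thread, and it assumes a single glusterd process):

# show the tune2fs invocation glusterd makes while the status request is re-run (run as root)
strace -f -e trace=execve -p "$(pidof glusterd)" 2>&1 | grep tune2fs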

-- 
Vitaly Lipatov
Etersoft

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] tune2fs exited with non-zero exit status

2015-03-18 Thread Atin Mukherjee


On 03/18/2015 08:04 PM, Vitaly Lipatov wrote:
  
 
 Osborne, Paul (paul.osbo...@canterbury.ac.uk) wrote on 2015-03-16 19:22:
 
 Hi, 

 I am just looking through my logs and am seeing a lot of entries of the form:
 
 [2015-03-16 16:02:55.553140] I [glusterd-handler.c:3530:__glusterd_handle_status_volume] 0-management: Received status volume req for volume wiki
 [2015-03-16 16:02:55.561173] E [glusterd-utils.c:5140:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
 [2015-03-16 16:02:55.561204] E [glusterd-utils.c:5166:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size
 
 Having had a rummage I *SUSPECT* it is because gluster is trying to get the volume status by querying the superblock of the filesystem for a brick volume. However, this is an issue because the volume was created in the form:
 
 I believe it is a bug (a missing device argument to tune2fs) introduced by this patch: http://review.gluster.org/#/c/8134/
Could you send a patch to fix this problem? You can refer to [1] for the
workflow to send a patch.


[1]
http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow


~Atin
 
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://www.gluster.org/mailman/listinfo/gluster-users
 

-- 
~Atin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] tune2fs exited with non-zero exit status

2015-03-16 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
Hi,

I am just looking through my logs and am seeing a lot of entries of the form:

[2015-03-16 16:02:55.553140] I [glusterd-handler.c:3530:__glusterd_handle_status_volume] 0-management: Received status volume req for volume wiki
[2015-03-16 16:02:55.561173] E [glusterd-utils.c:5140:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2015-03-16 16:02:55.561204] E [glusterd-utils.c:5166:glusterd_add_inode_size_to_dict] 0-management: failed to get inode size

Having had a rummage, I *suspect* it is because gluster is trying to get the volume status by querying the superblock of the filesystem for a brick volume. However, this is an issue because the volume was created in the form:

root@gfsi-rh-01:/mnt# gluster volume create gfs1 replica 2 transport tcp \
 gfsi-rh-01:/srv/hod/wiki \
 gfsi-isr-01:/srv/hod/wiki force

Where those paths to the bricks are not the raw device paths but rather the mount points on the local server.
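
For reference, the mount point can be mapped back to its backing device with something like the following (assuming util-linux's findmnt is available on the brick servers; on this host it should print the /dev/mapper/bricks-wiki device shown further down):

# resolve the device backing the brick's mount point
findmnt -n -o SOURCE /srv/hod/wiki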

Volume status returns:
gluster volume status wiki
Status of volume: wiki
Gluster process                                          Port    Online  Pid
------------------------------------------------------------------------------
Brick gfsi-rh-01.core.canterbury.ac.uk:/srv/hod/wiki     49157   Y       3077
Brick gfsi-isr-01.core.canterbury.ac.uk:/srv/hod/wiki    49156   Y       3092
Brick gfsi-cant-01.core.canterbury.ac.uk:/srv/hod/wiki   49152   Y       2908
NFS Server on localhost                                  2049    Y       35065
Self-heal Daemon on localhost                            N/A     Y       35073
NFS Server on gfsi-cant-01.core.canterbury.ac.uk         2049    Y       2920
Self-heal Daemon on gfsi-cant-01.core.canterbury.ac.uk   N/A     Y       2927
NFS Server on gfsi-isr-01.core.canterbury.ac.uk          2049    Y       32680
Self-heal Daemon on gfsi-isr-01.core.canterbury.ac.uk    N/A     Y       32687

Task Status of Volume wiki
------------------------------------------------------------------------------
There are no active volume tasks

Which is what I would expect.

Interestingly, to check my thinking:

# tune2fs -l /srv/hod/wiki/
tune2fs 1.42.5 (29-Jul-2012)
tune2fs: Attempt to read block from filesystem resulted in short read while trying to open /srv/hod/wiki/
Couldn't find valid filesystem superblock.

This does what I expect, as it is checking a mount point, and it is what gluster appears to be trying to do.

But:

# tune2fs -l /dev/mapper/bricks-wiki
tune2fs 1.42.5 (29-Jul-2012)
Filesystem volume name:   wiki
Last mounted on:  /srv/hod/wiki
Filesystem UUID:  a75306ac-31fa-447d-9da7-23ef66d9756b
Filesystem magic number:  0xEF53
Filesystem revision #:1 (dynamic)
Filesystem features:  has_journal ext_attr resize_inode dir_index filetype 
needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg 
dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options:user_xattr acl
Filesystem state: clean
[snipped]
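
So the inode size glusterd is after is available if the query is pointed at the device rather than the mount point; chaining the two steps manually (purely as an illustration of what I think glusterd intends to do, and again assuming findmnt is present) works:

# look up the backing device for the brick mount point, then read the inode size from its superblock
tune2fs -l "$(findmnt -n -o SOURCE /srv/hod/wiki)" | grep -i 'inode size'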

This leaves me with a couple of questions:

Is there any way that I can get this sorted in the gluster configuration so 
that it actually checks the raw volume rather than the local mount point for 
that volume?

Should the volume have been created using the raw path (/dev/mapper/...) rather than the mount point?

Or should I have created the volume (as I *now* see in the Red Hat Storage Administration Guide) under a subdirectory below the mounted filestore (i.e. /srv/hod/wiki/brick)?

If I need to move the data and recreate the bricks, that is not an issue for me, as this is still a proof of concept for what we are doing; what I need to know is whether doing so will stop the continual log churn.
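
If recreating is the way to go, I assume it would look roughly like the following, mirroring the original create command but pointing at a brick subdirectory instead of the mount point itself (please correct me if this is not the recommended approach):

# on each brick server: add a brick directory below the mounted filesystem
mkdir /srv/hod/wiki/brick

# recreate the volume against the subdirectory rather than the mount point
gluster volume create gfs1 replica 2 transport tcp \
    gfsi-rh-01:/srv/hod/wiki/brick \
    gfsi-isr-01:/srv/hod/wiki/brick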

Many thanks

Paul


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users