[Gluster-users] VM crash, store in glusterfs

2015-07-02 Thread Gregor Burck

Hi,

I would like to do some tests with GlusterFS and VirtualBox.
The goal is to store the VirtualBox files on a GlusterFS volume and
access them from two machines.


In my setup, the servers are also the clients. I hope my description
is understandable:


Each machine mounts the volume back from itself.


two machines:
gf001 and gf002

Both:
debian 8
glusterfs 3.7

/dev/sda - root
/dev/sdb - /export/vbstore

My volume info:
root@gf001:~# gluster volume info vbstore

Volume Name: vbstore
Type: Replicate
Volume ID: 7bf8aa42-8fd9-4535-888d-dacea4f14a83
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gf001.mvz.ffm:/export/vbstore
Brick2: gf002.mvz.ffm:/export/vbstore
Options Reconfigured:
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on


Mounting on gf001:
mount -t glusterfs gf001:/vbstore /import/vbstore
or via fstab:
gf001:/vbstore /import/vbstore glusterfs defaults 0 0
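
As far as I understand, the server named at mount time is only used to
fetch the volume file; afterwards the client talks to all bricks directly.
A backup volfile server can be listed so the mount still succeeds when the
named server is down. A minimal sketch, assuming glusterfs 3.7 accepts the
backup-volfile-servers mount option:

mount -t glusterfs -o backup-volfile-servers=gf002 gf001:/vbstore /import/vbstore
or via fstab:
gf001:/vbstore /import/vbstore glusterfs defaults,_netdev,backup-volfile-servers=gf002 0 0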

The same on gf002, with gf002 as the server.

I create a VirtualBox VM in /import/vbstore.

Starting the VM on gf001 OR gf002 works fine.

But when the VM is running on one machine AND I shut down the other, the
VM crashes with an ATA failure.



Could this be a matter of mount options or gluster volume settings?
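
Two things I plan to test, as a sketch (both are assumptions on my side,
not verified): the client waits network.ping-timeout seconds (42 by
default) before giving up on a dead brick, which is longer than a typical
guest's ATA timeout, so lowering it might help:

gluster volume set vbstore network.ping-timeout 10

And with cluster.server-quorum-type set to server on only two nodes,
shutting one node down may drop the survivor below server quorum and kill
its brick as well, so the VM would lose its disk entirely.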

Thank you for your help,

Gregor



Re: [Gluster-users] Expanding a replicated volume

2015-07-02 Thread Sjors Gielen
2015-07-02 14:25 GMT+02:00 Sjors Gielen sj...@sjorsgielen.nl:

 At this point, /local/glustertest/stor1 is still filled on mallorca, and
 empty on hawaii (except for .glusterfs). Here is the actual question: how
 do I sync the contents of the two?


I found another way: running `du -hc /stor1` on Mallorca makes all files
instantly appear on Hawaii as well. Bizarrely, this only works when
running `du` as root on Mallorca; running it as another user gives the
correct output but does not make the files appear on Hawaii.

Sjors

[Gluster-users] Expanding a replicated volume

2015-07-02 Thread Sjors Gielen
Hi all,

I'm doing a test setup where I have a 1-brick volume that I want to
expand online into a 2-brick replicated volume. It's pretty hard to find
information on this; usually, only distributed volume setups are
discussed.

I'm using two machines for testing: mallorca and hawaii. They have been
added into each other's trusted pools.

First, I create a 1-brick volume on Mallorca:

# mkdir -p /local/glustertest/stor1 /stor1
# gluster volume create stor1 mallorca:/local/glustertest/stor1
# gluster volume start stor1
# mount -t glusterfs mallorca:/stor1 /stor1
# echo "This is file A" > /stor1/fileA.txt
# echo "This is file B" > /stor1/fileB.txt
# dd if=/dev/zero of=/stor1/largefile.img bs=1M count=100

So now /stor1 and /local/glustertest/stor1 contain these files.

Then, I expand the volume into a 2-brick replicated setup:

# mkdir -p /local/glustertest/stor1 /stor1
# gluster volume add-brick stor1 replica 2 hawaii:/local/glustertest/stor1
# mount -t glusterfs hawaii:/stor1 /stor1
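
To confirm the expansion took effect, a quick check (my addition; the
expected output is an assumption based on the docs):

# gluster volume info stor1

This should now report Type: Replicate and Number of Bricks: 1 x 2 = 2.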

At this point, /local/glustertest/stor1 is still filled on mallorca, and
empty on hawaii (except for .glusterfs). Here is the actual question: how
do I sync the contents of the two?

I tried:
* 'volume sync', but it's only for syncing of volume info between peers,
not volume contents
* 'volume rebalance', but it's only for Distribute volumes
* 'volume heal stor1 full', which finishes successfully but didn't move
anything
* 'volume replace-brick', which I found online used to move some contents,
but now only supports switching the actual brick pointer with 'commit force'
* listing actual file names on Hawaii.

The last one is the only one that had some effect: after listing
/stor1/fileA.txt on Hawaii, the file appeared in /stor1 and
/local/glustertest/stor1. The other files are still missing. So, a
potential fix could be to get a list of all filenames from Mallorca and
`ls` them all so they are synced. But this seems like a silly solution.
There are two other viable solutions I could come up with:

* Stopping the volume, rsyncing the contents, adding the brick, starting
the volume. But that's offline (and it feels wrong to be poking around
inside Gluster's brick directory).
* Removing the volume, moving the brick directory, recreating the volume
with 2 replicas, and moving the old contents of the brick directory back
onto the new mount.

At some point, suddenly all files did appear on Hawaii, probably because of
the self-heal daemon. Is there some way to trigger the daemon to walk over
all files? Why didn't the explicit full self-heal do this?
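
For completeness, here is the sequence I would expect to do the job
(whether 'heal info' really lists the pending entries is a guess on my
part):

# gluster volume status stor1
# gluster volume heal stor1 full
# gluster volume heal stor1 info

The first command should confirm the Self-heal Daemon is running on both
peers; the last should list entries still waiting to be healed.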

Thanks,
Sjors

Re: [Gluster-users] Expanding a replicated volume

2015-07-02 Thread M S Vishwanath Bhat
On 2 July 2015 at 18:35, Sjors Gielen sj...@sjorsgielen.nl wrote:

 2015-07-02 14:25 GMT+02:00 Sjors Gielen sj...@sjorsgielen.nl:

 At this point, /local/glustertest/stor1 is still filled on mallorca, and
 empty on hawaii (except for .glusterfs). Here is the actual question: how
 do I sync the contents of the two?


 I found another way: running `du -hc /stor1` on Mallorca makes all files
 instantly appear on Hawaii as well. Bizarrely, this only works when
 running `du` as root on Mallorca; running it as another user gives the
 correct output but does not make the files appear on Hawaii.


AFAIK there are two ways you can trigger the self-heal:

1. Use the gluster CLI heal command. I'm not sure why it didn't work for
you; that needs to be investigated.

2. Run 'stat' on the files through the gluster volume mountpoint. If you
run stat on the entire mountpoint, the files should be properly synced
across all the replica bricks.
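
Something like this should do it (an untested sketch, assuming a FUSE
mount at /stor1):

find /stor1 -exec stat {} + > /dev/null

The lookup that stat triggers on each path is what makes AFR notice the
missing replica and schedule the heal.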

*my two cents*

Cheers,
Vishwanath


 Sjors



Re: [Gluster-users] Can't start geo-replication with version 3.6.3

2015-07-02 Thread M S Vishwanath Bhat
On 2 July 2015 at 21:33, John Ewing johnewi...@gmail.com wrote:

 Hi,

 I'm trying to build a new geo-replicated cluster using CentOS 6.6 and
 Gluster 3.6.3.

 I've got as far as creating a replicated volume with two peers on site,
 and a slave volume in EC2.

 I've set up passwordless ssh from one of the pair to the slave server, and
 I've run

 gluster system:: execute gsec_create


 When I try to create the geo-replication relationship between the
 servers, I get:


 gluster volume geo-replication myvol X.X.X.X::myvol create push-pem force

  Unable to fetch slave volume details. Please check the slave cluster and
 slave volume.
  geo-replication command failed


I remember seeing this error when the slave volume is either not created,
not started, or not present on your X.X.X.X host.

Can you check if the slave volume is started?
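
Something along these lines on the slave, assuming the slave volume is
also named myvol:

gluster volume info myvol     # Status should be 'Started'
gluster volume start myvol    # only if it is still in the 'Created' state

Also, if I remember right, the create step mounts the slave volume from
the master over the native protocol to verify it, so glusterd on X.X.X.X
must be reachable on port 24007; only the ongoing sync traffic runs over
ssh. That would explain the connection timeout at the end of your log.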

Best Regards,
Vishwanath




 The geo-replication-slaves log file from the master looks like this


 [2015-07-02 15:13:37.324823] I [rpc-clnt.c:1761:rpc_clnt_reconfig]
 0-myvol-client-0: changing port to 49152 (from 0)
 [2015-07-02 15:13:37.334874] I
 [client-handshake.c:1413:select_server_supported_programs]
 0-myvol-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
 [2015-07-02 15:13:37.335419] I
 [client-handshake.c:1200:client_setvolume_cbk] 0-myvol-client-0: Connected
 to myvol-client-0, attached to remote volume '/export/sdb1/brick,'.
 [2015-07-02 15:13:37.335493] I
 [client-handshake.c:1210:client_setvolume_cbk] 0-myvol-client-0: Server and
 Client lk-version numbers are not same, reopening the fds
 [2015-07-02 15:13:37.336050] I [MSGID: 108005]
 [afr-common.c:3669:afr_notify] 0-myvol-replicate-0: Subvolume
 'myvol-client-0' came back up; going online.
 [2015-07-02 15:13:37.336170] I [rpc-clnt.c:1761:rpc_clnt_reconfig]
 0-myvol-client-1: changing port to 49152 (from 0)
 [2015-07-02 15:13:37.336298] I
 [client-handshake.c:188:client_set_lk_version_cbk] 0-myvol-client-0: Server
 lk version = 1
 [2015-07-02 15:13:37.343247] I
 [client-handshake.c:1413:select_server_supported_programs]
 0-myvol-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
 [2015-07-02 15:13:37.343964] I
 [client-handshake.c:1200:client_setvolume_cbk] 0-myvol-client-1: Connected
 to myvol-client-1, attached to remote volume '/export/sdb1/brick'.
 [2015-07-02 15:13:37.344043] I
 [client-handshake.c:1210:client_setvolume_cbk] 0-myvol-client-1: Server and
 Client lk-version numbers are not same, reopening the fds
 [2015-07-02 15:13:37.351151] I [fuse-bridge.c:5080:fuse_graph_setup]
 0-fuse: switched to graph 0
 [2015-07-02 15:13:37.351491] I
 [client-handshake.c:188:client_set_lk_version_cbk] 0-myvol-client-1: Server
 lk version = 1
 [2015-07-02 15:13:37.352078] I [fuse-bridge.c:4009:fuse_init]
 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel
 7.14
 [2015-07-02 15:13:37.355056] I [afr-common.c:1477:afr_local_discovery_cbk]
 0-myvol-replicate-0: selecting local read_child myvol-client-0
 [2015-07-02 15:13:37.396403] I [fuse-bridge.c:4921:fuse_thread_proc]
 0-fuse: unmounting /tmp/tmp.NPixVv7xk9
 [2015-07-02 15:13:37.396922] W [glusterfsd.c:1194:cleanup_and_exit] (--
 0-: received signum (15), shutting down
 [2015-07-02 15:13:37.396970] I [fuse-bridge.c:5599:fini] 0-fuse:
 Unmounting '/tmp/tmp.NPixVv7xk9'.
 [2015-07-02 15:13:37.412584] I [MSGID: 100030] [glusterfsd.c:2018:main]
 0-glusterfs: Started running glusterfs version 3.6.3 (args: glusterfs
 --xlator-option=*dht.lookup-unhashed=off --volfile-server X.X.X.X
 --volfile-id myvol -l /var/log/glusterfs/geo-replication-slaves/slave.log
 /tmp/tmp.am6rnOYxE7)
 [2015-07-02 15:14:40.423812] E [socket.c:2276:socket_connect_finish]
 0-glusterfs: connection to X.X.X.X:24007 failed (Connection timed out)
 [2015-07-02 15:14:40.424077] E [glusterfsd-mgmt.c:1811:mgmt_rpc_notify]
 0-glusterfsd-mgmt: failed to connect with remote-host: X.X.X.X (Transport
 endpoint is not connected)
 [2015-07-02 15:14:40.424119] I [glusterfsd-mgmt.c:1817:mgmt_rpc_notify]
 0-glusterfsd-mgmt: Exhausted all volfile servers
 [2015-07-02 15:14:40.424557] W [glusterfsd.c:1194:cleanup_and_exit] (--
 0-: received signum (1), shutting down
 [2015-07-02 15:14:40.424626] I [fuse-bridge.c:5599:fini] 0-fuse:
 Unmounting '/tmp/tmp.am6rnOYxE7'.


 I'm confused by the error message about not being able to connect to the
 slave on port 24007. Should it not be connecting over ssh?

 Thanks

 John.



[Gluster-users] Quota cleanup

2015-07-02 Thread Ryan Clough
I disabled quota on my volume about one week ago and the cleanup routine
started. Yesterday, I had to reboot the servers, so the quota cleanup did
not finish. Is there a way for me to start it again manually? If not, is
there any danger in the cleanup process not completing? Also, while the
cleanup was running I noticed lots and lots of permission-denied error
messages when the script tried to modify the xattrs of files. Here is
one example:

[MSGID: 115060] [server-rpc-fops.c:868:_gf_server_log_setxattr_failure]
0-export_volume-server: 629529723: SETXATTR
/ssimon/psdv/storm7/scan_data/90/417/002/vtkobjectspb-rohitbg/protobuf.PMTrackFeatureExtractor_VTKPolyDataObjectsEpochTopic
(517b1a20-c384-4c1c-9c35-91de5520889a) == glusterfs.quota-xattr-cleanup

Should I be worried about the stability of my Gluster volume?
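
For what it's worth, this is how I have been spot-checking a brick to see
what is left over (assuming these are the right xattr names; adjust the
brick path):

getfattr -d -m 'trusted.glusterfs.quota' -e hex /export/brick/path/to/file

My understanding is that once cleanup is done, no trusted.glusterfs.quota.*
xattrs should remain on the brick files.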

Thank you for your time,
___
¯\_(ツ)_/¯
Ryan Clough
Information Systems
Decision Sciences International Corporation
http://www.decisionsciencescorp.com/
