Re: [Gluster-users] Setup with replica 3 for imagestorage
For now I made a setup with drbd and cman; it seems to work for me, though I still hope to solve the problems with glusterfs, because glusterfs has nice features and is handy to set up. If you still need some tests from me, I'm on the list, and I'll test glusterfs on...

Bye Gregor

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] Setup with replica 3 for imagestorage
OK:
1. stop the gluster service on node edgf006
2. kill the remaining processes
3. restart the node
4. start the service

The VM is still alive, even though I created some files with dd during the process. Not a solution, but a workaround for now! The next things to test: kill the node with a hard poweroff (pull the power supply) and take down the network...

Bye Gregor
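The four steps above can be sketched as a small dry-run script. The service name glusterfs-server is assumed from the Debian 8 setup in this thread; the commands are printed rather than executed so the sequence can be reviewed before running it on a node.

```shell
#!/bin/sh
# Dry-run sketch of the maintenance workaround above (assumed Debian 8
# service name "glusterfs-server"; glusterfsd are the brick daemons
# that survive a plain service stop). Printed, not executed.
NODE=edgf006
STEPS="service glusterfs-server stop
pkill glusterfsd
reboot
service glusterfs-server start"
printf '%s\n' "$STEPS" | while read -r step; do
    echo "[$NODE] $step"
done
```

Replacing the echo with an actual invocation (e.g. via ssh to the node) would run the steps for real.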
Re: [Gluster-users] Setup with replica 3 for imagestorage
I tested a new scenario. When I stop and start the glusterfs-server service on edgf006, I see in 'gluster volume status all' and 'gluster volume heal vbstore info' that the node disconnects and reconnects. This has no influence on the VM. But when I stop the service, disable its autostart on boot, and reboot, the same problem happens as before. I also noticed that after stopping the gluster service, there are still glusterfs-server processes running on the node? I am using debian 8 at the moment; could this in the end be a problem with this distro?

Bye Gregor
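The leftover processes Gregor reports can be checked for directly. This is a small sketch, assuming the usual gluster process names (glusterd for the management daemon, glusterfsd for the brick daemons, which the Debian init script may not stop):

```shell
#!/bin/sh
# Sketch: after "service glusterfs-server stop", count any gluster
# processes that survived. The brick daemons (glusterfsd) are separate
# processes that the init script does not necessarily kill.
LEFTOVER=$(ps -e -o comm= | grep -c '^gluster' || true)
if [ "$LEFTOVER" -gt 0 ]; then
    echo "still running: $LEFTOVER gluster process(es); kill with: pkill glusterfsd"
else
    echo "no gluster processes left"
fi
```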
Re: [Gluster-users] Setup with replica 3 for imagestorage
On 07/17/2015 07:01 PM, Gregor Burck wrote:
> Hi,
> I tested the following:
> 1. start the VM
> 2. shut down one node (edgf006)
> 3. remove-brick edgf006
> 4. start edgf006
> 5. add-brick edgf006
> 6. gluster volume heal vbstore full
> The VM is still running... That is not a solution but maybe a workaround. It would be nice if we found a solution for shutting down a brick and restarting it without removing it from the cluster. What should I do for a maintenance shutdown of one brick, e.g. to change a fan or other hardware?

Rebooting a node should not cause the VM to hang, Gregor. I wonder why the mount log did not show anything in your earlier mail. Could you check whether you were looking at the correct log? Don't clear any of the logs while you are testing. About your query on timestamps: gluster logs are written in UTC, no problem with that.
-Ravi

> Is there something like 'gluster volume maintenance-brick' planned?
> Bye Gregor
Re: [Gluster-users] Setup with replica 3 for imagestorage
Hi,

I tested the following:
1. start the VM
2. shut down one node (edgf006)
3. remove-brick edgf006
4. start edgf006
5. add-brick edgf006
6. gluster volume heal vbstore full

The VM is still running... That is not a solution but maybe a workaround. It would be nice if we found a solution for shutting down a brick and restarting it without removing it from the cluster. What should I do for a maintenance shutdown of one brick, e.g. to change a fan or other hardware? Is there something like 'gluster volume maintenance-brick' planned?

Bye Gregor
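The remove/add cycle above can be written out as commands. A sketch, assuming gluster 3.7 syntax, where remove-brick on a replica volume must also lower the replica count (here 3 -> 2); printed as a dry run for review:

```shell
#!/bin/sh
# Dry-run of the remove/add workaround above. Volume and brick names
# are taken from the thread; remove-brick needs "replica 2" to shrink
# the replica count, and "force" because no data migration is wanted.
VOL=vbstore
BRICK=edgf006:/export/vbstore
CMD_REMOVE="gluster volume remove-brick $VOL replica 2 $BRICK force"
CMD_ADD="gluster volume add-brick $VOL replica 3 $BRICK"
CMD_HEAL="gluster volume heal $VOL full"
echo "$CMD_REMOVE"   # step 3: drop the brick before maintenance
echo "$CMD_ADD"      # step 5: re-add it once the node is back up
echo "$CMD_HEAL"     # step 6: trigger a full self-heal
```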
Re: [Gluster-users] Setup with replica 3 for imagestorage
One point is the difference between the log file time (UTC) and the system time (Europe/Berlin). Could that be a hint?

Bye Gregor
Re: [Gluster-users] Setup with replica 3 for imagestorage
Hi Ravi,

> Please retain the CC to gluster-users when you reply.
Sorry, I pressed "answer", but there is some misbehaviour in the mailing list email headers. But not now, I'm going home ;-) Maybe I'll look at it from home, or reproduce the error again tomorrow!

Thank You Gregor
Re: [Gluster-users] Setup with replica 3 for imagestorage
On 07/15/2015 07:39 PM, Gregor Burck wrote:
> Hi Ravi,
Hi Gregor,
Please retain the CC to gluster-users when you reply.

> > That should not be the case, can you provide the client (mount) logs and the brick logs when this happens? The replicate translator usually returns EROFS when quorum is not met.
> Which is the mount log? Is it this: /var/log/glusterfs/import-vbstore.log ?
Right.

> The brick log I think is this: /var/log/glusterfs/bricks/export-vbstore.log
Right again. Please include these logs from all the nodes when you reboot the 3rd node and the VM goes read only.

> Here I see something special: the time is out of sync.
> date: Mi 15. Jul 16:04:38 CEST 2015
> log file entry: [2015-07-15 14:03:56.009191]
> The system time should be Europe/Berlin, but in the log I get Greenwich time?
> If these logfiles are the right ones, I'll clear them and then restart a node.
> Thank you for the help!
> Gregor
Re: [Gluster-users] Setup with replica 3 for imagestorage
On 07/15/2015 06:41 PM, Gregor Burck wrote:
> Hi Ravi,
> > You can create a normal replica 3 volume and then use it for VM images, instead of doing an add-brick (thus avoiding the need to heal the VM image file to the newly added brick).
> That is not the problem; I have already done the initial heal after the add. But what about taking a node down? For example for maintenance, or a power supply failure, or...? After the node comes back, the VM goes readonly.

That should not be the case, can you provide the client (mount) logs and the brick logs when this happens? The replicate translator usually returns EROFS when quorum is not met.

> This is what I get after one brick (edgf006) was restarted:
>
> Brick edgf004:/export/vbstore/
> /gftest/gftest.vdi - Possibly undergoing heal
> Number of entries: 1
>
> Brick edgf005:/export/vbstore/
> /gftest/gftest.vdi - Possibly undergoing heal
> Number of entries: 1
>
> Brick edgf006:/export/vbstore/
> Number of entries: 0
>
> Why do the two nodes which are still alive show 'Possibly undergoing heal'? That I don't understand.

The healthy bricks always record the list of files that need to be healed to the other node, which is why it shows up in both bricks. "Possibly undergoing heal" means that, out of the list of files that need heal, this one is currently being healed by the self-heal daemon.
-Ravi

> > Client quorum (cluster.quorum-type) should be set to `auto`.
> I've done this with no change.
> > glusterfs 3.7 onwards has support for a special type of replica 3 configuration called arbiter volumes, where the disk space consumed is less than in a conventional replica 3 volume. It would be great if you can try that out for your VM images and provide some feedback! Details on arbiter volumes can be found here: https://github.com/gluster/glusterfs/blob/master/doc/features/afr-arbiter-volumes.md
> I'll have a look at that and test it,
> Bye, Gregor
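For reference, the arbiter volume Ravi mentions would be created roughly like this. A sketch assuming glusterfs >= 3.7 and the node/brick names from this thread; the syntax follows the linked afr-arbiter-volumes document, and the command is echoed rather than run:

```shell
#!/bin/sh
# Sketch: create a replica 3 arbiter volume (glusterfs >= 3.7). The
# last brick listed becomes the arbiter; it stores only file metadata,
# so it needs far less disk space than the two data bricks.
CREATE="gluster volume create vbstore replica 3 arbiter 1 \
edgf004:/export/vbstore edgf005:/export/vbstore edgf006:/export/vbstore"
echo "$CREATE"
echo "gluster volume start vbstore"
```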
Re: [Gluster-users] Setup with replica 3 for imagestorage
On 07/15/2015 05:48 PM, Gregor Burck wrote:
> Hi,
> I am still testing glusterfs for storing images of virtual machines. I started with 2 machines as storage and VM host:
>
> two machines: gf001 and gf002
> Both: debian 8, glusterfs 3.7
> /dev/sda - root
> /dev/sdb - /export/vbstore
>
> My volume info:
> root@gf001:~# gluster volume info vbstore
> Volume Name: vbstore
> Type: Replicate
> Volume ID: 7bf8aa42-8fd9-4535-888d-dacea4f14a83
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: gf001.mvz.ffm:/export/vbstore
> Brick2: gf002.mvz.ffm:/export/vbstore
>
> Mounting on gf001:
> mount -t glusterfs gf001:/vbstore /import/vbstore
> or via fstab:
> gf001:/vbstore /import/vbstore glusterfs defaults 0 0
> The same on gf002 with gf002 as server.
>
> I created a virtualbox VM in /import/vbstore. Starting the VM on gf001 OR gf002 works fine. After I learned that the filesystem goes readonly if I restart a server, I added an additional brick:
> gluster volume add-brick vbstore replica 3 edgf006:/export/vbstore
> and set the following options:
> cluster.quorum-count: 1
> cluster.server-quorum-type: server
> cluster.quorum-type: none
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> performance.readdir-ahead: on
> cluster.server-quorum-ratio: 51
>
> When I restart gf004 or gf005, the virtual machine gets an error and makes the filesystem readonly. 'gluster volume heal vbstore info' gives this:
>
> Brick edgf004:/export/vbstore/
> Number of entries: 0
>
> Brick edgf005:/export/vbstore/
> /gftest/gftest.vdi - Possibly undergoing heal
> Number of entries: 1
>
> Brick edgf006:/export/vbstore/
> /gftest/gftest.vdi - Possibly undergoing heal
> Number of entries: 1
>
> My questions:
> 1. Which options should I use for storing virtual images?

You can create a normal replica 3 volume and then use it for VM images, instead of doing an add-brick (thus avoiding the need to heal the VM image file to the newly added brick). Client quorum (cluster.quorum-type) should be set to `auto`. glusterfs 3.7 onwards has support for a special type of replica 3 configuration called arbiter volumes, where the disk space consumed is less than in a conventional replica 3 volume. It would be great if you can try that out for your VM images and provide some feedback! Details on arbiter volumes can be found here: https://github.com/gluster/glusterfs/blob/master/doc/features/afr-arbiter-volumes.md

> 2. How do I reset all options? I think I've played around a little too much.

gluster volume reset

Thanks,
Ravi

> Bye Gregor
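The reset Ravi suggests takes the volume name, and optionally a single option to reset instead of all of them. A sketch (echoed as a dry run; the option name is just one of those set earlier in the thread):

```shell
#!/bin/sh
# Dry-run of "gluster volume reset": without an option name, every
# reconfigured option on the volume goes back to its default; with an
# option name, only that one is reset.
RESET_ALL="gluster volume reset vbstore"
RESET_ONE="gluster volume reset vbstore cluster.quorum-type"
echo "$RESET_ALL"    # reset every changed option
echo "$RESET_ONE"    # reset just one option
```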
[Gluster-users] Setup with replica 3 for imagestorage
Hi,

I am still testing glusterfs for storing images of virtual machines. I started with 2 machines as storage and VM host:

two machines: gf001 and gf002
Both: debian 8, glusterfs 3.7
/dev/sda - root
/dev/sdb - /export/vbstore

My volume info:
root@gf001:~# gluster volume info vbstore
Volume Name: vbstore
Type: Replicate
Volume ID: 7bf8aa42-8fd9-4535-888d-dacea4f14a83
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gf001.mvz.ffm:/export/vbstore
Brick2: gf002.mvz.ffm:/export/vbstore

Mounting on gf001:
mount -t glusterfs gf001:/vbstore /import/vbstore
or via fstab:
gf001:/vbstore /import/vbstore glusterfs defaults 0 0
The same on gf002 with gf002 as server.

I created a virtualbox VM in /import/vbstore. Starting the VM on gf001 OR gf002 works fine. After I learned that the filesystem goes readonly if I restart a server, I added an additional brick:
gluster volume add-brick vbstore replica 3 edgf006:/export/vbstore
and set the following options:
cluster.quorum-count: 1
cluster.server-quorum-type: server
cluster.quorum-type: none
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
cluster.server-quorum-ratio: 51

When I restart gf004 or gf005, the virtual machine gets an error and makes the filesystem readonly. 'gluster volume heal vbstore info' gives this:

Brick edgf004:/export/vbstore/
Number of entries: 0

Brick edgf005:/export/vbstore/
/gftest/gftest.vdi - Possibly undergoing heal
Number of entries: 1

Brick edgf006:/export/vbstore/
/gftest/gftest.vdi - Possibly undergoing heal
Number of entries: 1

My questions:
1. Which options should I use for storing virtual images?
2. How do I reset all options? I think I've played around a little too much.

Bye Gregor
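The create command for the replica 2 volume above is not shown in the mail; it would have looked roughly like this. A sketch, with brick paths taken from the volume info output and commands echoed as a dry run:

```shell
#!/bin/sh
# Sketch of how the replica 2 volume above would have been created
# (hostnames and brick paths from the "gluster volume info" output;
# add-brick to replica 3 comes later, as described in the mail).
CREATE="gluster volume create vbstore replica 2 \
gf001.mvz.ffm:/export/vbstore gf002.mvz.ffm:/export/vbstore"
echo "$CREATE"
echo "gluster volume start vbstore"
echo "mount -t glusterfs gf001:/vbstore /import/vbstore"
```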