[Gluster-users] bind glusterd to specified interface
Hi, after some time away I'm playing around with GlusterFS again. Now I want to bind glusterd to a specific interface/IP address: I want a management network where the service is not reachable, and a cluster network where the service runs. I read something about defining this in the /etc/glusterfs/glusterfsd.vol file, but found no valid description of it, neither on https://docs.gluster.org nor in a man page...

Bye
Gregor

Community Meeting Calendar:
Schedule - Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
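For what it's worth, glusterd reads its own volfile at /etc/glusterfs/glusterd.vol (not glusterfsd.vol), and there is a transport.socket.bind-address option that is sometimes used for exactly this. A sketch, where 10.0.7.53 is a placeholder for the cluster-net IP; note that in some versions binding glusterd this way can break local 'gluster' CLI access via 127.0.0.1, so treat it as an experiment:

```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket
    # bind the management daemon to the cluster-net address only
    option transport.socket.bind-address 10.0.7.53
    # (other existing options unchanged)
end-volume
```

glusterd must be restarted after editing the file for the change to take effect.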
Re: [Gluster-users] Use GlusterFS as storage for images of virtual machines - available issues
I tested something like this; I think there should be a dependency in the [Unit] section:

[Unit]
Description=Kill gluster client access before shutdown
Requires=glusterd.service

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true
ExecStop=/root/bin/killgluster.sh

[Install]
WantedBy=multi-user.target

Community Meeting Calendar:
APAC Schedule - Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968
NA/EMEA Schedule - Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] Use GlusterFS as storage for images of virtual machines - available issues
Hi, some additional observations: the problems also occurred when I mounted the volume on a fourth machine as a client. When I remove a node from the volume and only then shut it down, everything goes fine. Is there a setting to avoid this behavior? The quorum? Here are all the settings:

root@ph-pvetest001:~# gluster volume get glustervol001 all
Option                                   Value
------                                   -----
cluster.lookup-unhashed                  on
cluster.lookup-optimize                  on
cluster.min-free-disk                    10%
cluster.min-free-inodes                  5%
cluster.rebalance-stats                  off
cluster.subvols-per-directory            (null)
cluster.readdir-optimize                 off
cluster.rsync-hash-regex                 (null)
cluster.extra-hash-regex                 (null)
cluster.dht-xattr-name                   trusted.glusterfs.dht
cluster.randomize-hash-range-by-gfid     off
cluster.rebal-throttle                   normal
cluster.lock-migration                   off
cluster.force-migration                  off
cluster.local-volume-name                (null)
cluster.weighted-rebalance               on
cluster.switch-pattern                   (null)
cluster.entry-change-log                 on
cluster.read-subvolume                   (null)
cluster.read-subvolume-index             -1
cluster.read-hash-mode                   1
cluster.background-self-heal-count       8
cluster.metadata-self-heal               off
cluster.data-self-heal                   off
cluster.entry-self-heal                  off
cluster.self-heal-daemon                 on
cluster.heal-timeout                     600
cluster.self-heal-window-size            1
cluster.data-change-log                  on
cluster.metadata-change-log              on
cluster.data-self-heal-algorithm         full
cluster.eager-lock                       enable
disperse.eager-lock                      on
disperse.other-eager-lock                on
disperse.eager-lock-timeout              1
disperse.other-eager-lock-timeout        1
cluster.quorum-type                      auto
cluster.quorum-count                     (null)
cluster.choose-local                     off
cluster.self-heal-readdir-size           1KB
cluster.post-op-delay-secs               1
cluster.ensure-durability                on
cluster.consistent-metadata              no
cluster.heal-wait-queue-length           128
cluster.favorite-child-policy            none
cluster.full-lock                        yes
diagnostics.latency-measurement          off
diagnostics.dump-fd-stats                off
diagnostics.count-fop-hits               off
diagnostics.brick-log-level              INFO
diagnostics.client-log-level             INFO
diagnostics.brick-sys-log-level          CRITICAL
diagnostics.client-sys-log-level         CRITICAL
diagnostics.brick-logger                 (null)
diagnostics.client-logger                (null)
diagnostics.brick-log-format             (null)
[Gluster-users] Use GlusterFS as storage for images of virtual machines - available issues
Hi, I'm testing GlusterFS in a setup for storing images of virtual machines.

My setup: three nodes, all Debian buster/Proxmox virtualization. On each node I dedicate a separate HDD to a brick. Then I create a replicated volume over the three nodes and mount it in the Proxmox cluster. There I can set up VMs and manage them (restart, migration, and so on). But when I shut down or restart one of the nodes, I get hangs. This is what I don't understand; I thought GlusterFS is redundant and highly available?

What I do:
1. Everything is running.
2. I stop one node.
3. When I do something with a VM, I get problems; it seems like the VMs lost their storage.
4. After some time I can work with the VMs again.

Could it be a quorum thing? My goal is a highly available KVM cluster where one node can die without the VMs being hurt by it.

Bye
Gregor
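The symptom in step 3/4 often matches clients waiting out network.ping-timeout (42 seconds by default) before declaring the stopped brick dead. A sketch of the usual tuning, assuming the volume name glustervol001 from the settings dump; the predefined "virt" profile ships with the gluster packages as /var/lib/glusterd/groups/virt:

```shell
# Apply the stock virt profile (quorum, o-direct, cache settings for VM images)
gluster volume set glustervol001 group virt
# Shorten how long clients wait before giving up on an unreachable brick
gluster volume set glustervol001 network.ping-timeout 10
# Verify the effective value
gluster volume get glustervol001 network.ping-timeout
```

A very low ping-timeout makes brief network blips look like brick failures, so values below ~10 seconds are generally discouraged.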
[Gluster-users] Multiple volumes over different interfaces
Hi, I'm trying to create two Gluster volumes over two nodes with two separate networks. The names are in the hosts file of each node:

root@gluster01:~# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 gluster01.peiker-cee.de gluster01
10.0.2.54 gluster02g1.peiker-cee.de gluster02g1
10.0.7.54 gluster02g2.peiker-cee.de gluster02g2
10.0.2.53 gluster01g1.peiker-cee.de gluster01g1
10.0.7.53 gluster01g2.peiker-cee.de gluster01g2

Then I peer:

root@gluster01:~# gluster peer probe gluster02g1
peer probe: success.
root@gluster01:~# gluster peer probe gluster02g2
peer probe: success. Host gluster02g2 port 24007 already in peer list
root@gluster01:~# gluster volume create g1 transport tcp gluster01g1:/glusterstore/g1 gluster02g1:/glusterstore/g1
volume create: g1: success: please start the volume to access data
root@gluster01:~# gluster volume start g1
volume start: g1: success
root@gluster01:~# gluster volume create g2 transport tcp gluster01g2:/glusterstore/g2 gluster02g2:/glusterstore/g2
volume create: g2: failed: Staging failed on gluster02g1. Error: Host gluster01g2 is not in 'Peer in Cluster' state

Then I connect to the second node:

root@gluster02:~# gluster peer status
Number of Peers: 1

Hostname: gluster01g1.peiker-cee.de
Uuid: 32090055-0183-4f2c-b742-9e120b72573a
State: Peer in Cluster (Connected)
root@gluster02:~# gluster peer probe gluster01g2
peer probe: success. Host gluster01g2 port 24007 already in peer list
root@gluster02:~# gluster peer status
Number of Peers: 1

Hostname: gluster01g1.peiker-cee.de
Uuid: 32090055-0183-4f2c-b742-9e120b72573a
State: Peer in Cluster (Connected)
Other names:
gluster01g2

After recreating the folders (they were already part of a volume before), I managed to create and start the second volume. So I wonder why I had to probe the peer from the second node and wasn't able to do it all from the first one?

Bye
Gregor
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
[Gluster-users] Different volumes on different interfaces
Hi, I run a Proxmox system with a Gluster volume over three nodes. I'm thinking about setting up a second volume, but I want to use the other interfaces on the nodes. Is this recommended or possible?

Bye
Gregor
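One pattern that reportedly works (all hostnames below are placeholders) is to give each node a second DNS/hosts name resolving to the second network's IP, probe those names so glusterd knows them as additional addresses, and then create the new volume's bricks under the second names:

```shell
# node1b, node2b, node3b are hypothetical names resolving to the
# second network's addresses of the same three nodes.
gluster peer probe node1b
gluster peer probe node2b
gluster peer probe node3b
# Bricks addressed via the second-network names
gluster volume create vol2 replica 3 \
    node1b:/bricks/vol2 node2b:/bricks/vol2 node3b:/bricks/vol2
gluster volume start vol2
```

Clients mounting vol2 via the second names then do their data traffic over that network, though the management traffic on port 24007 is not strictly confined to it.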
Re: [Gluster-users] Migrate from existing NFS share to Gluster
On Wednesday, 29 July 2015, 13:05:05, Clay Stuckey wrote:
> Can I convert the existing folder to a brick without moving the data? Will I need to create the empty gluster brick and migrate the data over?

Yes, there is no other way.

Bye
Gregor
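A minimal sketch of that migration, assuming the old NFS export is still mounted at /mnt/nfs and the new Gluster volume is mounted at /mnt/gluster (both paths are placeholders): the copy must go through the Gluster client mount, never directly into the brick directory.

```shell
# Copying through the client mount lets Gluster create its metadata
# (gfid xattrs, .glusterfs hardlinks) for every file; writing straight
# into the brick path would bypass that.
rsync -aHAX --progress /mnt/nfs/ /mnt/gluster/
```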
Re: [Gluster-users] Gluster healing VM images
On Friday, 24 July 2015, 21:28:01, Andrus, Brian Contractor wrote:
> I had this as well. It was a BIG pain for me. FWIW, after upgrading to gluster 3.7, I have not had an issue. YMMV

Hmm, with 3.7 I've had it too, but maybe it's caused by my settings. Could you paste your option settings? Maybe I played around too much in my test environment...

Bye
Gregor
Re: [Gluster-users] Setup with replica 3 for imagestorage
For now I made a setup with DRBD and cman; that seems to work for me, even though I still hope to solve the problems with GlusterFS, because GlusterFS has nice features and is handy to set up. If you still need some tests from me, I'm on the list, and I'll keep testing GlusterFS...

Bye
Gregor
Re: [Gluster-users] Setup with replica 3 for imagestorage
I tested a new scenario. When I stop and start the glusterfs-server service on edgf006, I see in 'gluster volume status all' and 'gluster volume heal vbstore info' that the node disconnects and reconnects. This has no influence on the VM. But when I stop the service, deactivate its autostart on boot, and reboot, the same problem happens as before. I then noticed that after stopping the gluster service, there are still glusterfs processes running on the node. I'm using Debian 8 at the moment; could this in the end be a problem with this distro?

Bye
Gregor
Re: [Gluster-users] Setup with replica 3 for imagestorage
OK:
1. Stop the gluster service on node edgf006.
2. Kill the remaining processes.
3. Restart the node.
4. Start the service.

The VM is still alive, even though I wrote some files with dd during the process. Not a solution, but a workaround for now! The next things to test: kill the node with a hard poweroff (pull the power supply) and cut the network...

Bye
Gregor
Re: [Gluster-users] Gluster healing VM images
> Gluster Servers: CentOS 7
> VMs: A mix of CentOS 6 / 7 / Windows Server 2008
> VM image size 20GB to 250GB

In my case, Debian 8 for nodes and VM guests; the image is 8GB. I found an old mail thread from 07.07.2014: "self-heal stops some vms (virtual machines)". I think this is similar to our problem.

Bye
Gregor
Re: [Gluster-users] Setup with replica 3 for imagestorage
Hi, I now tested the following:
1. Start the VM.
2. Shut down one node (edgf006).
3. remove-brick edgf006
4. Start edgf006.
5. add-brick edgf006
6. gluster volume heal vbstore full

The VM keeps running... That is not a solution, but maybe a workaround. It would be nice to find a solution for shutting down a brick and restarting it without removing it from the cluster. What should I do for a maintenance shutdown of one brick, even just to change a fan or other hardware? Is something like 'gluster volume maintenance-brick VOLUME' planned?

Bye
Gregor
Re: [Gluster-users] Setup with replica 3 for imagestorage
One point is the difference between the log file time (UTC) and the system time (Europe/Berlin). Could that be a hint?

Bye
Gregor
[Gluster-users] Setup with replica 3 for imagestorage
Hi, I'm still testing GlusterFS for storing images of virtual machines. I started with two machines serving as both storage and VM host:

two machines: gf001 and gf002
Both: Debian 8, glusterfs 3.7
/dev/sda - root
/dev/sdb - /export/vbstore

My volume info:

root@gf001:~# gluster volume info vbstore

Volume Name: vbstore
Type: Replicate
Volume ID: 7bf8aa42-8fd9-4535-888d-dacea4f14a83
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gf001.mvz.ffm:/export/vbstore
Brick2: gf002.mvz.ffm:/export/vbstore

Mounting on gf001:

mount -t glusterfs gf001:/vbstore /import/vbstore

or via fstab:

gf001:/vbstore /import/vbstore glusterfs defaults 0 0

The same on gf002 with gf002 as server. I create a VirtualBox VM in /import/vbstore. Starting the VM on gf001 OR gf002 works fine.

After I learned that the filesystem goes read-only when I restart a server, I added an additional brick:

gluster volume add-brick vbstore replica 3 edgf006:/export/vbstore

and set the following options:

cluster.quorum-count: 1
cluster.server-quorum-type: server
cluster.quorum-type: none
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
cluster.server-quorum-ratio: 51

When I restart gf004 or gf005, the virtual machine gets an error and makes its filesystem read-only. A 'gluster volume heal vbstore info' gives this:

Brick edgf004:/export/vbstore/
Number of entries: 0

Brick edgf005:/export/vbstore/
/gftest/gftest.vdi - Possibly undergoing heal
Number of entries: 1

Brick edgf006:/export/vbstore/
/gftest/gftest.vdi - Possibly undergoing heal
Number of entries: 1

My questions:
1. Which options should I use for storing virtual images?
2. How do I reset all options? I think I've played around a little too much.

Bye
Gregor
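For question 2, a minimal sketch (assuming the volume name vbstore from above): the gluster CLI can reset a single reconfigured option or all of them at once.

```shell
# Reset one option back to its default...
gluster volume reset vbstore cluster.quorum-type
# ...or reset every reconfigured option on the volume in one go.
gluster volume reset vbstore
```

'gluster volume info' afterwards should show an empty "Options Reconfigured" section.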
Re: [Gluster-users] Setup with replica 3 for imagestorage
Hi Ravi,

> Please retain the CC to gluster-users when you reply.

Sorry, I pressed reply, but there is some misbehavior in the mailing list email headers. But not now, I'm going home ;-) Maybe I'll look at it from home, or reproduce the error again tomorrow!

Thank you
Gregor
Re: [Gluster-users] VM crash, store in glusterfs
Hi Satheesaran,

> Coming back to your problem, do you run VMs on the same machine where gluster is installed?

Yes.

> Could you elaborate on your setup? That could help me understand the problem.

Here is my evaluation setup. I deleted the volume settings, because I had played around with them too much. My opinion is to keep it simple, but still be able to survive the crash of one single host.

setup
In my setup, the servers are the clients too. I hope you can understand my description: I mount the share back onto the same machines.

two machines: gf001 and gf002
Both: Debian 8, glusterfs 3.7
/dev/sda - root
/dev/sdb - /export/vbstore

My volume info:

root@gf001:~# gluster volume info vbstore

Volume Name: vbstore
Type: Replicate
Volume ID: 7bf8aa42-8fd9-4535-888d-dacea4f14a83
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gf001.mvz.ffm:/export/vbstore
Brick2: gf002.mvz.ffm:/export/vbstore

Mounting on gf001:

mount -t glusterfs gf001:/vbstore /import/vbstore

or via fstab:

gf001:/vbstore /import/vbstore glusterfs _netdev,defaults 0 0

The same on gf002 with gf002 as server. I create a VirtualBox VM in /import/vbstore. Starting the VM on gf001 OR gf002 works fine. I can also teleport a VBox VM from one host to the other.
/setup

If I understand it right: taking down one of the gluster nodes in a replica 2 setup makes the filesystem read-only on the client side until the node comes back. I have no problem adding a third node to avoid the read-only problem, or separating the virtualization host from the gluster host. But what settings do I have to set on the gluster volume?

Bye, and thanks a lot for the help
Gregor
[Gluster-users] VM crash, store in glusterfs
Hi, I would like to do some tests with GlusterFS and VirtualBox. The goal is to store the VirtualBox files in a GlusterFS store and access them from two machines. In my setup, the servers are the clients too. I hope you can understand my description: I mount the share back onto the same machines.

two machines: gf001 and gf002
Both: Debian 8, glusterfs 3.7
/dev/sda - root
/dev/sdb - /export/vbstore

My volume info:

root@gf001:~# gluster volume info vbstore

Volume Name: vbstore
Type: Replicate
Volume ID: 7bf8aa42-8fd9-4535-888d-dacea4f14a83
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gf001.mvz.ffm:/export/vbstore
Brick2: gf002.mvz.ffm:/export/vbstore
Options Reconfigured:
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on

Mounting on gf001:

mount -t glusterfs gf001:/vbstore /import/vbstore

or via fstab:

gf001:/vbstore /import/vbstore glusterfs defaults 0 0

The same on gf002 with gf002 as server. I create a VirtualBox VM in /import/vbstore. Starting the VM on gf001 OR gf002 works fine. But with the VM running on one host AND the other shut down, the VM crashes with an ATA failure. Maybe mount or gluster option settings?

Thank you for your help,
Gregor
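One plausible reading of the crash: with replica 2 and cluster.quorum-type auto, losing either brick drops the client below quorum, writes start failing, and the guest sees that as an ATA error. The usual way out is a third replica; newer gluster releases even allow an arbiter brick (metadata only, so no third copy of the image data). A sketch, where gf003 is a hypothetical third host; on 3.7 the arbiter could only be set at volume create time, so a plain third data replica is the fallback there:

```shell
# Grow the replica-2 volume to replica 3 with an arbiter brick, so
# quorum survives the loss of any one brick.
gluster volume add-brick vbstore replica 3 arbiter 1 gf003:/export/vbstore
# Populate the new brick
gluster volume heal vbstore full
```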
[Gluster-users] glusterfs 3.6.2: gluster volume add-brick: wrong op-version
Hi, I tried to add an additional brick to a volume. On both hosts I installed glusterfs 3.6.2 via the Launchpad repositories:

glusterfs-server:
  Installed: 3.6.2-ubuntu1~trusty3
  Candidate: 3.6.2-ubuntu1~trusty3
  Version table:
 *** 3.6.2-ubuntu1~trusty3 0
        500 http://ppa.launchpad.net/gluster/glusterfs-3.6/ubuntu/ trusty/main amd64 Packages

Both systems are Ubuntu 14.04.

Error message:
volume add-brick: failed: One or more nodes do not support the required op-version. Cluster op-version must atleast be 30600.

I don't understand what op-version stands for?

Bye
Gregor
Re: [Gluster-users] glusterfs 3.6.2: gluster volume add-brick: wrong op-version
Hi, the existing nodes were updated from a former version, see my other email. On all nodes I get operating-version=30501.

Bye
Gregor
Re: [Gluster-users] glusterfs 3.6.2: gluster volume add-brick: wrong op-version
Hi,

> I don't understand what op-version stands for?

OK, I found the documentation: http://www.gluster.org/community/documentation/index.php/Features/Opversion

I'm not sure I tried to increase the op-version the right way: do I probe all existing nodes again? I still get the error message. In the existing cluster I have three nodes: edgf001, edgf002 and edgf003.

gluster volume info samba1:

Volume Name: samba1
Type: Replicate
Volume ID: 4283053e-f50b-41c0-8b77-aa9985649b66
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: edgf001:/export/samba1
Brick2: edgf002:/export/samba1
Brick3: edgf003:/export/samba1

I installed the nodes as 3.5.x and successfully upgraded all nodes to 3.6.2. Where can I read the op-version of a node?

Bye
Gregor
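To answer the last question: each glusterd records the cluster op-version it is running at in its info file, so it can be read on every node (path as used by the stock packages):

```shell
# Shows a line such as operating-version=30501
grep operating-version /var/lib/glusterd/glusterd.info
```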
Re: [Gluster-users] glusterfs 3.6.2: gluster volume add-brick: wrong op-version
OK, I found the command:

gluster volume set all cluster.op-version 30600

Now it works.
Gregor

(I see there is a problem with reply-to-list, maybe my mail client...)
[Gluster-users] Performance and blocksize
Hi, I tested different block sizes on a local mount. With dd and bs=1M the performance is much better than with bs=1K; I read here that a too-small block size is poison for performance. I plan to create a Gluster cluster for virtual image files; what would be the best block size for the virtual machines? The same as the nodes' block size? And how do I calculate it?

Bye
Gregor
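For reference, a minimal sketch of the comparison described (writing to /tmp here; point the path at the Gluster mount to measure the volume instead):

```shell
# Write 16 MiB once with large and once with small blocks; dd reports
# the throughput of each run on stderr (shown by the trailing tail).
dd if=/dev/zero of=/tmp/ddtest bs=1M count=16 conv=fsync 2>&1 | tail -n 1
dd if=/dev/zero of=/tmp/ddtest bs=1K count=16384 conv=fsync 2>&1 | tail -n 1
rm -f /tmp/ddtest
```

With small blocks each write becomes its own round trip through FUSE and the network, which is why bs=1K looks so much worse on a Gluster mount than on a local disk.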
[Gluster-users] Multiple nodes on one host with multiple interfaces
Hi, I wonder if the following setup would improve my storage, given my hardware: on each host, stripe over two disks reachable via two interfaces, and replicate between the hosts:

host1.sdb:eth0 -- stripe -- host1.sdc:eth1
       |       replica        |
host2.sdb:eth0 -- stripe -- host2.sdc:eth1

Maybe we'll have a host with 4 interfaces, so we can go up to 4...

Bye,
Gregor
[Gluster-users] Mounting for user access
Hi, I installed glusterfs 3.5.0 for testing. As root everything seems right. How do I mount it for user access? I tried fuse-opt=allow_other, but I get:

[2014-05-27 09:54:06.577317] E [mount.c:162:fuse_mount_fusermount] 0-glusterfs-fuse: failed to exec fusermount: No such file or directory
[2014-05-27 09:54:06.577707] E [mount.c:298:gf_fuse_mount] 0-glusterfs-fuse: mount of edsv027:/gluster01 to /home/vibox/VirtualBox (default_permissions,fuse-opt=allow_other,allow_other,max_read=131072) failed
[2014-05-27 09:54:06.578225] E [glusterfsd.c:1793:daemonize] 0-daemonize: mount failed

fuse is installed...

Bye
Gregor
--
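The log's "failed to exec fusermount: No such file or directory" points at the fusermount helper binary rather than the kernel module, so "fuse is installed" may only cover the latter. A quick check, assuming Debian/Ubuntu package names:

```shell
# The userspace helper must be on the PATH for allow_other mounts.
command -v fusermount || echo "fusermount missing - install the fuse package"
# allow_other additionally requires this line in /etc/fuse.conf.
grep -q '^user_allow_other' /etc/fuse.conf || echo "user_allow_other not enabled"
```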
Re: [Gluster-users] Create a volume with one brick
I guess I found a solution:
1. Add an additional virtual interface with its own IP.
2. Create two folders to export.
3. volume create test replica 2 IP1:/folder1 IP2:/folder2

It seems to work, so I only need one machine... and if necessary I can later add another brick on another machine.

Bye
Gregor
--
[Gluster-users] Create a volume with one brick
Hi, I wonder if it's possible to create a volume with one brick. I know it is strange to use a cluster filesystem with one machine, but it could be helpful for testing, or for replacing a brick when there are only two... Is there a trick to do it? Maybe with an additional virtual interface, so the current machine has two IP addresses?

Thank you for the information,
Gregor
--
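For what it's worth, a plain single-brick volume is possible without any interface tricks; hostname and brick path below are placeholders, and 'force' is only needed when the brick sits on the root filesystem:

```shell
# One brick, no replication - fine for testing
gluster volume create test host1:/bricks/test force
gluster volume start test
```

More bricks (and replication) can be added to it later with add-brick.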
[Gluster-users] go on with testing glusterfs - 3.3.1
Hi, I'm still testing GlusterFS. I'm using Ubuntu with the semiosis PPA. I set up a cluster of two bricks:

glustersv001:/export/daten
glustersv002:/export/daten

So far so good. Now I want to test some scenarios:

1. Remove one server from the volume. I think this would work with:

gluster volume remove-brick daten replica 1 glustersv002.mvz.ffm:/export/daten commit

or should there be more than one brick left in the cluster? But what should I do when one server breaks in production? Set up another one and replace the broken one? Should the new one have a different name, or can I set up a new machine with one of the old names?

Bye
Gregor
--
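For the broken-server case, a sketch of swapping a dead brick for one on a fresh host (glustersv003 is a placeholder; note that on the 3.3.x releases replace-brick went through start/status/commit stages, while newer releases use a single 'commit force' followed by self-heal):

```shell
# Make the replacement host part of the trusted pool
gluster peer probe glustersv003
# Swap the dead brick for the new one
gluster volume replace-brick daten \
    glustersv002.mvz.ffm:/export/daten glustersv003:/export/daten commit force
# Copy the data onto the new brick
gluster volume heal daten full
```

Reusing the old hostname for the new machine also works, but then the old node's UUID state has to be cleaned up, so a new name is usually simpler.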
[Gluster-users] Exist data in brick
Hi, is it possible to use the existing data in a brick? I mean, to add a folder that already contains data as a brick to a volume.

Bye
Gregor
--
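In general Gluster expects empty bricks: files that pre-exist in the brick directory lack the gfid xattrs and .glusterfs entries the volume relies on, so they are not reliably tracked. The usual advice is to create the volume on an empty brick and copy the data in through a client mount; a sketch, with hostname, volume name, and paths as placeholders:

```shell
# Mount the (empty) volume through the client...
mount -t glusterfs host1:/myvol /mnt/myvol
# ...and copy the pre-existing data in through the mount, so Gluster
# creates its metadata for every file.
cp -a /data/old/. /mnt/myvol/
```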
[Gluster-users] gluster 3.2x and 3.3 compatible?
Hi, I'm testing gluster on two Debian machines (3.2.7) and one Ubuntu machine (3.2.5). So far it works. Then I tried to upgrade the Debian machines to 3.3.0 from experimental. With this, I can't use the Ubuntu one as a peer anymore. Aren't those versions compatible?

Bye
Gregor
--
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users