Re: [Gluster-users] GlusterFS cluster of 2 nodes is disconnected after nodes reboot
> What linux distro ? Anything special about your network configuration ? Any chance your server is taking too long to release networking and gluster is starting before network is ready ? Can you completely disable iptables and test again ?

Both nodes are CentOS 6.5 VMs running on VMware ESXi 5.5.0. There is nothing special about the network configuration, just static IPs. Ping and ssh work fine.

I added "iptables -F" to /etc/rc.local. After a simultaneous reboot, "gluster peer status" on both nodes shows Connected and replication works fine. But "gluster volume status" states that the NFS server and self-heal daemon on one of them aren't running, so I need to restart glusterd to get them running.

Another issue: when everything is OK after "service glusterd restart" on both nodes, I reboot one node and then see this on the rebooted node (ipset02):

[root@ipset02 etc]# gluster peer status
Number of Peers: 1

Hostname: ipset01
Uuid: 6313a4dd-f736-46ff-9836-bdf05c886ffd
State: Peer in Cluster (Connected)

[root@ipset02 etc]# gluster volume status
Status of volume: ipset-gv
Gluster process                            Port    Online  Pid
--------------------------------------------------------------
Brick ipset01:/usr/local/etc/ipset         49152   Y       1615
Brick ipset02:/usr/local/etc/ipset         49152   Y       2282
NFS Server on localhost                    2049    Y       2289
Self-heal Daemon on localhost              N/A     Y       2296
NFS Server on ipset01                      2049    Y       2258
Self-heal Daemon on ipset01                N/A     Y       2262

There are no active volume tasks

[root@ipset02 etc]# tail -17 /var/log/glusterfs/glustershd.log
[2014-03-26 07:55:48.982456] E [client-handshake.c:1742:client_query_portmap_cbk] 0-ipset-gv-client-1: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2014-03-26 07:55:48.982532] W [socket.c:514:__socket_rwv] 0-ipset-gv-client-1: readv failed (No data available)
[2014-03-26 07:55:48.982555] I [client.c:2097:client_rpc_notify] 0-ipset-gv-client-1: disconnected
[2014-03-26 07:55:48.982572] I [rpc-clnt.c:1676:rpc_clnt_reconfig] 0-ipset-gv-client-0: changing port to 49152 (from 0)
[2014-03-26 07:55:48.982627] W [socket.c:514:__socket_rwv] 0-ipset-gv-client-0: readv failed (No data available)
[2014-03-26 07:55:48.986252] I [client-handshake.c:1659:select_server_supported_programs] 0-ipset-gv-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2014-03-26 07:55:48.986551] I [client-handshake.c:1456:client_setvolume_cbk] 0-ipset-gv-client-0: Connected to 192.168.1.180:49152, attached to remote volume '/usr/local/etc/ipset'.
[2014-03-26 07:55:48.986566] I [client-handshake.c:1468:client_setvolume_cbk] 0-ipset-gv-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2014-03-26 07:55:48.986628] I [afr-common.c:3698:afr_notify] 0-ipset-gv-replicate-0: Subvolume 'ipset-gv-client-0' came back up; going online.
[2014-03-26 07:55:48.986743] I [client-handshake.c:450:client_set_lk_version_cbk] 0-ipset-gv-client-0: Server lk version = 1
[2014-03-26 07:55:52.975670] I [rpc-clnt.c:1676:rpc_clnt_reconfig] 0-ipset-gv-client-1: changing port to 49152 (from 0)
[2014-03-26 07:55:52.975717] W [socket.c:514:__socket_rwv] 0-ipset-gv-client-1: readv failed (No data available)
[2014-03-26 07:55:52.978961] I [client-handshake.c:1659:select_server_supported_programs] 0-ipset-gv-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2014-03-26 07:55:52.979128] I [client-handshake.c:1456:client_setvolume_cbk] 0-ipset-gv-client-1: Connected to 192.168.1.181:49152, attached to remote volume '/usr/local/etc/ipset'.
[2014-03-26 07:55:52.979143] I [client-handshake.c:1468:client_setvolume_cbk] 0-ipset-gv-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2014-03-26 07:55:52.979269] I [client-handshake.c:450:client_set_lk_version_cbk] 0-ipset-gv-client-1: Server lk version = 1
[2014-03-26 07:55:52.980284] I [afr-self-heald.c:1180:afr_dir_exclusive_crawl] 0-ipset-gv-replicate-0: Another crawl is in progress for ipset-gv-client-1

And on the node that wasn't rebooted:

[root@ipset01 ~]# gluster peer status
Number of Peers: 1

Hostname: ipset02
Uuid: ff14ab0e-53cf-4015-9e49-fb60698c56db
State: Peer in Cluster (Disconnected)

[root@ipset01 ~]# gluster volume status
Status of volume: ipset-gv
Gluster process                            Port    Online  Pid
--------------------------------------------------------------
Brick ipset01:/usr/local/etc/ipset         49152   Y       1615
NFS Server on localhost                    2049    Y       2258
Self-heal Daemon on localhost              N/A     Y       2262

There are no active volume tasks

[root@ipset01 ~]# tail -3 /var/log/glusterfs/glustershd.log
[2014-03-26 07:50:28.881369] W [socket.c:514:__socket_rwv] 0-ipset-gv-client-1: readv failed (Connection reset by peer)
[2014-03-26
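The workaround described above (flush iptables at boot, then restart glusterd when the NFS server or self-heal daemon failed to come up) can be sketched as a small post-boot check. This is only an illustrative sketch, not from the original thread: the retry logic is an assumption, and only the volume name "ipset-gv" and the commands come from the messages above.

```shell
#!/bin/sh
# Hedged sketch of the post-boot workaround from the thread: flush
# firewall rules, then restart glusterd if "gluster volume status"
# shows the NFS server or self-heal daemon offline (Online column "N").
VOLUME=ipset-gv

# Only act on a box that actually has the gluster CLI installed.
if command -v gluster >/dev/null 2>&1; then
    iptables -F

    # In "gluster volume status" output the Online flag is the
    # second-to-last column; "N" means the daemon is not running.
    if gluster volume status "$VOLUME" \
        | awk '/NFS Server|Self-heal Daemon/ && $(NF-1) == "N" {bad=1}
               END {exit !bad}'; then
        service glusterd restart
    fi
fi
```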
Re: [Gluster-users] GlusterFS API documentation
On 25/03/2014, at 11:06 PM, Viktor Villafuerte wrote:
> Hi all, are there any docs for GlusterFS 3.4.2 and its API? I've been looking around but cannot find anything. I saw a post with the same question for GlusterFS 3.3, but the links there no longer exist. Would anybody have any good links or tips?

Which language are you looking for, and what are you looking to do? :) Asking because our API docs are one of the things I need to get into making shortly. There's not much quality documentation around this yet. :(

If you're wanting to use something other than C (eg, Python, Java, Go, or Ruby), this page points to the various language binding projects for Libgfapi:

http://www.gluster.org/community/documentation/index.php/Language_Bindings

Hopefully those projects themselves can get you up and running. :)

+ Justin
--
Open Source and Standards @ Red Hat
twitter.com/realjustinclift

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] Gluster 3.4.2 on Redhat 6.5
Hi All,

I've got to the bottom of it. By running glusterd in the foreground with debug enabled, I was able to see two error messages when the command was being run... it appears that it was requiring the xfsprogs package, which I did not have installed. Once I installed it, zombie processes are no longer being created.

Cheers,
Steve

From: Carlos Capriotti [mailto:capriotti.car...@gmail.com]
Sent: 25 March 2014 12:30
To: Steve Thomas
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.4.2 on Redhat 6.5

Steve:

Tested that myself - not the nagios part, but the gluster commands you posted later - and no errors or zombies. Somebody else reported the same, so it sounds consistent. There must be another process there biting your gluster, turning it into a haunted scenario.

Cheers,
Carlos

On Thu, Mar 20, 2014 at 12:19 PM, Steve Thomas <stho...@rpstechnologysolutions.co.uk> wrote:

Hi,

I'm running Gluster 3.4.2 on Redhat 6.5 with 4 servers and a brick on each. The brick is mounted locally and used by apache to serve audio files for an IVR system; each audio file is typically around 80-100 KB. The system appears to be working ok in terms of health and status via the gluster CLI.

The system is monitored by nagios, which includes checks for zombie processes and the gluster status. Over a 24-hour period the number of zombie processes on the box has increased and is continually increasing; investigating, these are glusterd processes. I'm making an assumption, but I'd suspect the regular nagios checks are causing the increase in zombie processes, as they query the glusterd process.

The commands the nagios plugin runs are:

# Check heal status
gluster volume heal audio info

# Check volume status
gluster volume status audio detail

Does anyone have any suggestions as to why glusterd is producing these zombie processes?

Thanks for help in advance,
Steve
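To confirm whether the zombies piling up really belong to glusterd, a quick check with standard ps/awk can help. This is a hedged sketch: matching on a process name containing "gluster" is an assumption based on the thread, not a gluster-provided diagnostic.

```shell
#!/bin/sh
# Count zombie (defunct) processes overall, and how many of them have
# a gluster-related command name. A STAT field starting with "Z" marks
# a zombie; "comm" is the executable name.
total_zombies=$(ps -e -o stat= | grep -c '^Z' || true)
gluster_zombies=$(ps -e -o stat= -o comm= \
    | awk '$1 ~ /^Z/ && $2 ~ /gluster/' | wc -l)
echo "zombies: $total_zombies (gluster-related: $gluster_zombies)"
```

On a healthy box both counts should stay near zero; a steadily growing gluster-related count would support the theory that the monitoring checks are leaving unreaped children behind.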
[Gluster-users] glusterfs 3.4.2 + CentOS6.5 + Infiniband /(rdma)
Does anyone have experience running iozone on RDMA? Just want to avoid known issues or bugs.
Re: [Gluster-users] glusterfs 3.4.2 + CentOS6.5 + Infiniband /(rdma)
On 26/03/2014, at 1:49 PM, yang feng wrote:
> Does anyone have experience running iozone on RDMA? Just want to avoid known issues or bugs.

As a thought, use IPoIB instead of the native RDMA transport type when creating Gluster volumes. The native RDMA code in Gluster is known to be buggy at the moment; we're planning to fix it for Gluster 3.6.

Hope that helps. :)

+ Justin
--
Open Source and Standards @ Red Hat
twitter.com/realjustinclift
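The IPoIB suggestion above amounts to creating the volume with the ordinary tcp transport, but pointing the bricks at hostnames that resolve to the IPoIB (ib0) addresses. A hedged sketch - the host names "node1-ib"/"node2-ib", the brick paths, and the volume name are hypothetical:

```shell
#!/bin/sh
# Sketch: tcp transport over the IPoIB interface instead of
# "transport rdma". Only runs where the gluster CLI is installed.
HAVE_GLUSTER=0
if command -v gluster >/dev/null 2>&1; then
    HAVE_GLUSTER=1
    # node1-ib / node2-ib are assumed to resolve to ib0 addresses.
    gluster volume create testvol replica 2 transport tcp \
        node1-ib:/export/brick1 node2-ib:/export/brick1
    gluster volume start testvol
fi
```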
[Gluster-users] Reminder: Weekly Gluster Community meeting is in one hour
Reminder!!!

The weekly Gluster Community meeting is in 1 hour, in #gluster-meeting on IRC. This is a completely public meeting; everyone is encouraged to attend and be a part of it. :)

To add Agenda items
***
Just add them to the main text of the Etherpad, and be at the meeting. :)

http://titanpad.com/gluster-community-meetings

The Agenda so far
* [Note: not including action items from previous meetings]
* 3.5
  * We don't seem to have user docs for the new features in 3.5. We shouldn't release 3.5 until that's fixed.
  * We don't seem to have user docs for the features in 3.4 either (that are also in 3.5). We shouldn't release 3.5 until the 3.4 features in it have docs too.
* 3.4
  * http://review.gluster.org/#/c/6737 has received +2 (thanks Jeff), but the matching fixes for release-3.5 and master, http://review.gluster.org/6736 and http://review.gluster.org/5075 respectively, still need +2 and merging. I am still reluctant to take this fix into release-3.4 unless I'm certain the corresponding fixes will also be taken into release-3.5 and master. This is going on 3+ weeks now!
  * Otherwise I'm ready to do beta2, and release a while later
* Other agenda items
  * Gluster Forge: upgrade gitorious or switch to gitlab
  * / on build.gluster.org keeps filling up. Cores in /, large /var/log/httpd/access*, and regression, smoke, and rh-bugid build results are the biggest causes. :-(
  * Do we have someone to take over RPM packaging for Fedora and download.gluster.org?

Regards and best wishes,
Justin Clift
--
Open Source and Standards @ Red Hat
twitter.com/realjustinclift
Re: [Gluster-users] Gluster 3.4.2 on Redhat 6.5
I see two separate bugs there:

1. A missing package requirement.
2. The process hanging in a reproducible way.

Could you please file these bugs in bugzilla?

On March 26, 2014 6:11:50 AM PDT, Steve Thomas <stho...@rpstechnologysolutions.co.uk> wrote:
> Hi All,
>
> I've got to the bottom of it. By running glusterd in the foreground with debug enabled, I was able to see two error messages when the command was being run... it appears that it was requiring the xfsprogs package, which I did not have installed. Once I installed it, zombie processes are no longer being created.
>
> Cheers,
> Steve

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
Re: [Gluster-users] [Gluster-devel] Reminder: Weekly Gluster Community meeting is in one hour
New etherpad URL:

http://titanpad.com/gluster-community-meetings

TitanPad keeps crashing during our previous meetings, so let's see how the above one goes instead. :)

+ Justin
--
Open Source and Standards @ Red Hat
twitter.com/realjustinclift
Re: [Gluster-users] [Gluster-devel] Reminder: Weekly Gluster Community meeting is in one hour
Sorry all, bad cut-n-paste. THIS Etherpad:

http://beta.etherpad.org/gluster-community-meetings

+ Justin

On 26/03/2014, at 2:15 PM, Justin Clift wrote:
> New etherpad URL: http://titanpad.com/gluster-community-meetings
>
> TitanPad keeps crashing during our previous meetings, so let's see how the above one goes instead. :)

--
Open Source and Standards @ Red Hat
twitter.com/realjustinclift
[Gluster-users] Upgrade glusterfs from 3.3.1 to 3.4.2
Hello,

I have 5 glusterfs nodes on 3.3.1; last week I upgraded them to version 3.4.2 and everything works. This week I tried to add more nodes with version 3.4.2, but whatever I did I could not join the new nodes to the cluster; it always says "State: Peer Rejected (Connected)".

Finally I cleaned up everything on the new nodes and installed version 3.3.1, and I managed to add the new node to the cluster; then I did the upgrade. After this, I tried to add bricks to a volume, and glusterfs reported a failure. The following is the log:

/var/log/glusterfs/glustershd.log
[2014-03-26 14:07:00.853501] W [socket.c:514:__socket_rwv] 0-puppet-bucket-client-6: readv failed (No data available)
[2014-03-26 14:07:00.853536] I [client.c:2097:client_rpc_notify] 0-puppet-bucket-client-6: disconnected
[2014-03-26 14:07:00.854137] W [socket.c:514:__socket_rwv] 0-puppet-bucket-client-7: readv failed (No data available)
[2014-03-26 14:07:03.856700] W [socket.c:514:__socket_rwv] 0-puppet-ssl-client-6: readv failed (No data available)
[2014-03-26 14:07:03.856739] I [client.c:2097:client_rpc_notify] 0-puppet-ssl-client-6: disconnected
[2014-03-26 14:07:03.857543] W [socket.c:514:__socket_rwv] 0-puppet-dist-client-6: readv failed (No data available)
[2014-03-26 14:07:03.857592] I [client.c:2097:client_rpc_notify] 0-puppet-dist-client-6: disconnected
[2014-03-26 14:07:03.858523] W [socket.c:514:__socket_rwv] 0-puppet-bucket-client-6: readv failed (No data available)
[2014-03-26 14:07:03.858559] I [client.c:2097:client_rpc_notify] 0-puppet-bucket-client-6: disconnected
[2014-03-26 14:07:03.859330] W [socket.c:514:__socket_rwv] 0-puppet-bucket-client-7: readv failed (No data available)

/var/log/glusterfs/bricks/opt-gluster-data-puppet-ssl.log
[2014-03-26 13:34:02.302321] I [server-handshake.c:567:server_setvolume] 0-puppet-ssl-server: accepted client from pup-p-cfg002-6006-2014/03/25-20:29:51:414670-puppet-ssl-client-5-0 (version: 3.4.2)
[2014-03-26 13:34:02.390477] I [server-handshake.c:567:server_setvolume] 0-puppet-ssl-server: accepted client from pup-p-cfg002-5999-2014/03/25-20:29:50:410326-puppet-ssl-client-5-0 (version: 3.4.2)
[2014-03-26 13:35:00.968658] I [server.c:762:server_rpc_notify] 0-puppet-ssl-server: disconnecting connectionfrom pup-p-cfg006-3084-2014/03/26-13:27:50:521718-puppet-ssl-client-5-0
[2014-03-26 13:35:00.968697] I [server-helpers.c:729:server_connection_put] 0-puppet-ssl-server: Shutting down connection pup-p-cfg006-3084-2014/03/26-13:27:50:521718-puppet-ssl-client-5-0
[2014-03-26 13:35:00.968719] I [server-helpers.c:617:server_connection_destroy] 0-puppet-ssl-server: destroyed connection of pup-p-cfg006-3084-2014/03/26-13:27:50:521718-puppet-ssl-client-5-0
[2014-03-26 13:35:00.973661] I [server.c:762:server_rpc_notify] 0-puppet-ssl-server: disconnecting connectionfrom pup-p-cfg006-3079-2014/03/26-13:27:50:514210-puppet-ssl-client-5-0
[2014-03-26 13:35:00.973707] I [server-helpers.c:729:server_connection_put] 0-puppet-ssl-server: Shutting down connection pup-p-cfg006-3079-2014/03/26-13:27:50:514210-puppet-ssl-client-5-0
[2014-03-26 13:35:00.973731] I [server-helpers.c:617:server_connection_destroy] 0-puppet-ssl-server: destroyed connection of pup-p-cfg006-3079-2014/03/26-13:27:50:514210-puppet-ssl-client-5-0
[2014-03-26 13:35:35.916298] I [server-handshake.c:567:server_setvolume] 0-puppet-ssl-server: accepted client from pup-p-cfg006-6366-2014/03/26-13:35:31:901289-puppet-ssl-client-5-0 (version: 3.3.1)
[2014-03-26 13:35:35.917272] I [server-handshake.c:567:server_setvolume] 0-puppet-ssl-server: accepted client from pup-p-cfg006-6372-2014/03/26-13:35:31:907067-puppet-ssl-client-5-0 (version: 3.3.1)

--
Yang
Orange Key: 35745318S1
BBM PIN: 7457867C
[Gluster-users] Not able to list quota
Hello,

I'm not able to list quotas anymore.

# gluster volume quota Hyxi0xegevajekenohatatewenuhaxa list
operation failed
Quota command failed

From the log:

[2014-03-26 15:50:24.100972] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:1021)

Any ideas?
Re: [Gluster-users] [Gluster-devel] Reminder: Weekly Gluster Community meeting is in one hour
On 26/03/2014, at 2:03 PM, Justin Clift wrote:
> Reminder!!! The weekly Gluster Community meeting is in 1 hour, in #gluster-meeting on IRC. This is a completely public meeting, everyone is encouraged to attend and be a part of it. :)

Thanks for participating everyone! :)

Meeting minutes available here:

http://meetbot.fedoraproject.org/gluster-meeting/2014-03-26/gluster-meeting.2014-03-26-15.01.html

Regards and best wishes,
Justin Clift
--
Open Source and Standards @ Red Hat
twitter.com/realjustinclift
[Gluster-users] glusterfs-3.4.3beta2 released
SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.4.3beta2.tar.gz

This release is made off jenkins-release-64

--
Gluster Build System
[Gluster-users] Failed to setup geo-replication
I set up 2 VMs, both CentOS 6.4, and installed GlusterFS 3.4.2 (server, client, and geo-replication) on both. No firewalls, all using the 'root' account, and both are in the same subnet. After starting geo-replication, I keep getting:

[2014-03-26 20:38:26.401585] I [monitor(monitor):80:monitor] Monitor:
[2014-03-26 20:38:26.402067] I [monitor(monitor):81:monitor] Monitor: starting gsyncd worker
[2014-03-26 20:38:26.442378] I [gsyncd:404:main_i] <top>: syncing: gluster://localhost:mirror -> ssh://10.1.10.52:/data/mirror
[2014-03-26 20:38:27.786715] I [master:60:gmaster_builder] <top>: setting up master for normal sync mode
[2014-03-26 20:38:28.808115] I [master:679:crawl] _GMaster: new master is 379cfc2c-257d-4be1-9719-6fe163197a0c
[2014-03-26 20:38:28.808331] I [master:683:crawl] _GMaster: primary master with volume id 379cfc2c-257d-4be1-9719-6fe163197a0c ...
[2014-03-26 20:38:29.302051] E [syncdutils:174:log_raise_exception] <top>: execution of rsync failed with ENOENT (No such file or directory)
[2014-03-26 20:38:29.302360] I [syncdutils:148:finalize] <top>: exiting.

rsync is in /usr/bin. Why did rsync fail with ENOENT (No such file or directory)?

I basically followed the GlusterFS 3.3.0 admin guide. What did I miss? I have tried on Debian and CentOS, and both failed. All documents and Internet postings show that it is so easy to set up geo-replication, but apparently not for me. I have spent a few days and cannot get geo-replication to work.

BTW, under CentOS 6.4, I cannot even stop the geo-replication; I get 'geo-replication command failed'. But I can stop geo-replication on a volume in Debian's GlusterFS.

Looking for your help, and thanks in advance.
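For reference, the 3.4-era CLI sequence being attempted looks roughly like the sketch below. The volume name "mirror", the slave host 10.1.10.52, and the slave path are taken from the log above; the helper-binary check at the end addresses the ENOENT error, which can mean gsyncd failed to locate rsync or ssh on the node - that is an assumption about this report, not a confirmed diagnosis.

```shell
#!/bin/sh
# Sketch of the geo-replication commands from the thread; the gluster
# parts only run where the CLI is actually installed.
if command -v gluster >/dev/null 2>&1; then
    gluster volume geo-replication mirror ssh://10.1.10.52:/data/mirror start
    gluster volume geo-replication mirror ssh://10.1.10.52:/data/mirror status
fi

# ENOENT from gsyncd can mean a helper binary was not found; confirm
# rsync and ssh resolve on the master node.
missing=0
for bin in rsync ssh; do
    if command -v "$bin" >/dev/null 2>&1; then
        echo "$bin: $(command -v "$bin")"
    else
        echo "$bin: NOT FOUND"
        missing=$((missing + 1))
    fi
done
echo "missing helpers: $missing"
```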
Re: [Gluster-users] Openstack Havana Gluster /var/lib/instances issues
If anyone has encountered this issue - I was graciously given a solution by some of the gluster folks. The short of it is:

gluster volume set yournovashared remote-dio on

https://ask.openstack.org/en/question/25371/gluster-volume-for-nova-instance/

On Mar 18, 2014, at 5:04 PM, Ryan Aydelott <ry...@anl.gov> wrote:
> I was wondering if anyone had seen similar issues before. When booting a vm with a root volume on local disk, things work fine. When I try to do the same with /var/lib/nova/instances mounted via gluster/fuse, I get the following:
>
> http://paste.ubuntu.com/7116594/
>
> I have of course ensured perms were correct with "chown -R nova:nova /var/lib/nova/instances"; additionally AppArmor is completely unloaded/disabled. Has anyone encountered this before? Gluster version is 3.4.2
>
> -ryan
Re: [Gluster-users] GlusterFS API documentation
Hey Justin,

thanks for the link. I'm not 100% sure what I want/would like to do, because I don't know what's possible :) I wanted to have a look at what the API offers and see if anything can be used in our environment, or what we could use. We are a Perl shop here, so Perl would be the preferred language of choice, but C would be ok too, or Python. I'll have a look through the docs and will possibly ask more questions about this.

v

On Wed 26 Mar 2014 10:33:50, Justin Clift wrote:
> Which language are you looking for, and what are you looking to do? :) Asking because our API docs are one of the things I need to get into making shortly. There's not much quality documentation around this yet. :(
>
> If you're wanting to use something other than C (eg, Python, Java, Go, or Ruby), this page points to the various language binding projects for Libgfapi: http://www.gluster.org/community/documentation/index.php/Language_Bindings
>
> Hopefully those projects themselves can get you up and running. :)

--
Regards
Viktor Villafuerte
Optus Internet Engineering
t: 02 808-25265
Re: [Gluster-users] glusterfs 3.4.2 + CentOS6.5 + Infiniband /(rdma)
Thank you, very helpful.

Sent from my Xiaomi phone

Justin Clift <jus...@gluster.org> wrote:
> As a thought, use IPoIB instead of the native RDMA transport type when creating Gluster volumes. The native RDMA code in Gluster is known to be buggy at the moment. We're planning to fix it for Gluster 3.6.
>
> Hope that helps. :)
>
> + Justin