[Gluster-users] errors in the 'bricks' log file
Hello, I have quite a few errors in the brick log files (see below). Are they important? They are produced every five minutes!

P.S. I am still redirected to a Red Hat page when trying to access the gluster website. Does the DNS problem still exist?

With best wishes, Alexander Zvyagin.

$ tail bricks/gluster-logs-vol1.log
[2011-12-14 10:00:04.362315] E [io-stats.c:2063:conditional_dump] 0-/gluster/logs-vol1: failed to open for writing
[2011-12-14 10:00:04.366964] I [server-helpers.c:1504:gf_server_check_setxattr_cmd] 0-stats: total-read 10343012882, total-write 6473657770
[2011-12-14 10:00:04.367056] E [io-stats.c:2060:conditional_dump] (--/opt/glusterfs/3.2.5/lib64/glusterfs/3.2.5/xlator/protocol/server.so(server_setxattr_resume+0x10a) [0x2aaab001338a] (--/opt/glusterfs/3.2.5/lib64/glusterfs/3.2.5/xlator/debug/io-stats.so(io_stats_setxattr+0x58) [0x2bcbbd28] (--/opt/glusterfs/3.2.5/lib64/libglusterfs.so.0(dict_foreach+0x36) [0x2b2a7afeaa56]))) 0-: Assertion failed: logfp
[2011-12-14 10:00:04.367075] E [io-stats.c:2063:conditional_dump] 0-/gluster/logs-vol1: failed to open for writing
[2011-12-14 10:00:04.588516] I [server-helpers.c:1504:gf_server_check_setxattr_cmd] 0-stats: total-read 10343038653, total-write 6473707318
[2011-12-14 10:00:04.588593] E [io-stats.c:2060:conditional_dump] (--/opt/glusterfs/3.2.5/lib64/glusterfs/3.2.5/xlator/protocol/server.so(server_setxattr_resume+0x10a) [0x2aaab001338a] (--/opt/glusterfs/3.2.5/lib64/glusterfs/3.2.5/xlator/debug/io-stats.so(io_stats_setxattr+0x58) [0x2bcbbd28] (--/opt/glusterfs/3.2.5/lib64/libglusterfs.so.0(dict_foreach+0x36) [0x2b2a7afeaa56]))) 0-: Assertion failed: logfp
[2011-12-14 10:00:04.588612] E [io-stats.c:2063:conditional_dump] 0-/gluster/logs-vol1: failed to open for writing
[2011-12-14 10:00:04.600713] I [server-helpers.c:1504:gf_server_check_setxattr_cmd] 0-stats: total-read 10343039813, total-write 6473707698
[2011-12-14 10:00:04.600817] E [io-stats.c:2060:conditional_dump] (--/opt/glusterfs/3.2.5/lib64/glusterfs/3.2.5/xlator/protocol/server.so(server_setxattr_resume+0x10a) [0x2aaab001338a] (--/opt/glusterfs/3.2.5/lib64/glusterfs/3.2.5/xlator/debug/io-stats.so(io_stats_setxattr+0x58) [0x2bcbbd28] (--/opt/glusterfs/3.2.5/lib64/libglusterfs.so.0(dict_foreach+0x36) [0x2b2a7afeaa56]))) 0-: Assertion failed: logfp
[2011-12-14 10:00:04.600836] E [io-stats.c:2063:conditional_dump] 0-/gluster/logs-vol1: failed to open for writing
[2011-12-14 10:05:04.636463] I [server-helpers.c:1504:gf_server_check_setxattr_cmd] 0-stats: total-read 10379097713, total-write 6494169926
[2011-12-14 10:05:04.636538] E [io-stats.c:2060:conditional_dump] (--/opt/glusterfs/3.2.5/lib64/glusterfs/3.2.5/xlator/protocol/server.so(server_setxattr_resume+0x10a) [0x2aaab001338a] (--/opt/glusterfs/3.2.5/lib64/glusterfs/3.2.5/xlator/debug/io-stats.so(io_stats_setxattr+0x58) [0x2bcbbd28] (--/opt/glusterfs/3.2.5/lib64/libglusterfs.so.0(dict_foreach+0x36) [0x2b2a7afeaa56]))) 0-: Assertion failed: logfp
[2011-12-14 10:05:04.636556] E [io-stats.c:2063:conditional_dump] 0-/gluster/logs-vol1: failed to open for writing
[2011-12-14 10:05:04.664731] I [server-helpers.c:1504:gf_server_check_setxattr_cmd] 0-stats: total-read 10379098873, total-write 6494170306
[2011-12-14 10:05:04.664832] E [io-stats.c:2060:conditional_dump] (--/opt/glusterfs/3.2.5/lib64/glusterfs/3.2.5/xlator/protocol/server.so(server_setxattr_resume+0x10a) [0x2aaab001338a] (--/opt/glusterfs/3.2.5/lib64/glusterfs/3.2.5/xlator/debug/io-stats.so(io_stats_setxattr+0x58) [0x2bcbbd28] (--/opt/glusterfs/3.2.5/lib64/libglusterfs.so.0(dict_foreach+0x36) [0x2b2a7afeaa56]))) 0-: Assertion failed: logfp
[2011-12-14 10:05:04.664882] E [io-stats.c:2063:conditional_dump] 0-/gluster/logs-vol1: failed to open for writing
[2011-12-14 10:05:04.867472] I [server-helpers.c:1504:gf_server_check_setxattr_cmd] 0-stats: total-read 10379100033, total-write 6494170686
[2011-12-14 10:05:04.867551] E [io-stats.c:2060:conditional_dump] (--/opt/glusterfs/3.2.5/lib64/glusterfs/3.2.5/xlator/protocol/server.so(server_setxattr_resume+0x10a) [0x2aaab001338a] (--/opt/glusterfs/3.2.5/lib64/glusterfs/3.2.5/xlator/debug/io-stats.so(io_stats_setxattr+0x58) [0x2bcbbd28] (--/opt/glusterfs/3.2.5/lib64/libglusterfs.so.0(dict_foreach+0x36) [0x2b2a7afeaa56]))) 0-: Assertion failed: logfp
[2011-12-14 10:05:04.867570] E [io-stats.c:2063:conditional_dump] 0-/gluster/logs-vol1: failed to open for writing
[2011-12-14 10:05:04.892668] I [server-helpers.c:1504:gf_server_check_setxattr_cmd] 0-stats: total-read 10379101193, total-write 6494171066
[2011-12-14 10:05:04.892763] E [io-stats.c:2060:conditional_dump] (--/opt/glusterfs/3.2.5/lib64/glusterfs/3.2.5/xlator/protocol/server.so(server_setxattr_resume+0x10a) [0x2aaab001338a] (--/opt/glusterfs/3.2.5/lib64/glusterfs/3.2.5/xlator/debug/io-stats.so(io_stats_setxattr+0x58) [0x2bcbbd28] (--/opt/glusterfs/3.2.5/lib64/libglusterfs.so.0(dict_foreach+0x36) [0x2b2a7afeaa56]))) 0-: Assertion failed: logfp
[Gluster-users] Limit access to a volume
Hi, I created a volume and now have to restrict access to it. Only one client should be allowed to access the volume, so I tried:

# gluster volume set sesam auth.allow 192.168.20.1

But I can still mount the volume from clients other than 192.168.20.1. When I try to access the directory from a disallowed machine, my shell hangs. If I try to unmount, it says that the device is busy. The only way to get it off the client is to kill the glusterfs process.

And one more question: if I set an option with gluster volume set - how do I unset it?

Regards, Marc

___ Gluster-users mailing list Gluster-users@gluster.org http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
[Gluster-users] Speed of glusterfs
Hi, we're currently testing glusterfs but are getting a really slow connection with it. I set up an environment with 2 replicated storage servers. Both machines and the client are connected at 1 Gbit, but if I copy a 2.5 GB file from the client to the volume mounted over the glusterfs protocol, I never get more than 35-40 MB/sec. On the two storage servers I also run an NFS server. If I copy the same file from the client to one storage server over NFS (the NFS server shipped with RHEL6), I never get less than 90-95 MB/sec. So the network/HDDs are capable of providing a faster transfer.

Is there a chance to get glusterfs to about the same speed? And why is it so slow? Does glusterfs send the data directly to both storage servers? That would explain why it is so slow. But is there a chance then to send the data to just one server and have that one replicate it (over a second NIC) to the second one?

Regards, Marc
Re: [Gluster-users] glusterfs crash when the one of replicate node restart
On 12/14/2011 03:06 PM, Changliang Chen wrote:

Hi, we have used glusterfs for two years. After upgrading to 3.2.5, we discovered that when one of the replicate nodes reboots and starts up the glusterd daemon, gluster will crash because the other replicate node's CPU usage reaches 100%.

Our gluster info:
Type: Distributed-Replicate
Status: Started
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Options Reconfigured:
performance.cache-size: 3GB
performance.cache-max-file-size: 512KB
network.frame-timeout: 30
network.ping-timeout: 25
cluster.min-free-disk: 10%

Our hardware: Dell R710, 6 x 600 GB SAS, 3 x 8 GB RAM.

The error info:
[2011-12-14 13:24:10.483812] E [rdma.c:4813:init] 0-rdma.management: Failed to initialize IB Device
[2011-12-14 13:24:10.483828] E [rpc-transport.c:742:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2011-12-14 13:24:10.483841] W [rpcsvc.c:1288:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
[2011-12-14 13:24:11.967621] E [glusterd-store.c:1820:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
[2011-12-14 13:24:11.967665] E [glusterd-store.c:1820:glusterd_store_retrieve_volume] 0-: Unknown key: brick-1
[2011-12-14 13:24:11.967681] E [glusterd-store.c:1820:glusterd_store_retrieve_volume] 0-: Unknown key: brick-2
[2011-12-14 13:24:11.967695] E [glusterd-store.c:1820:glusterd_store_retrieve_volume] 0-: Unknown key: brick-3
[2011-12-14 13:24:11.967709] E [glusterd-store.c:1820:glusterd_store_retrieve_volume] 0-: Unknown key: brick-4
[2011-12-14 13:24:11.967723] E [glusterd-store.c:1820:glusterd_store_retrieve_volume] 0-: Unknown key: brick-5
[2011-12-14 13:24:11.967736] E [glusterd-store.c:1820:glusterd_store_retrieve_volume] 0-: Unknown key: brick-6
[2011-12-14 13:24:11.967750] E [glusterd-store.c:1820:glusterd_store_retrieve_volume] 0-: Unknown key: brick-7
[2011-12-14 13:24:11.967764] E [glusterd-store.c:1820:glusterd_store_retrieve_volume] 0-: Unknown key: brick-8
[2011-12-14 13:24:11.96] E [glusterd-store.c:1820:glusterd_store_retrieve_volume] 0-: Unknown key: brick-9
[2011-12-14 13:24:12.465565] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (10.1.1.17:1013)
[2011-12-14 13:24:12.465623] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (10.1.1.8:1013)
[2011-12-14 13:24:12.465656] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (10.1.1.10:1013)
[2011-12-14 13:24:12.465686] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (10.1.1.11:1013)
[2011-12-14 13:24:12.465716] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (10.1.1.125:1013)
[2011-12-14 13:24:12.633288] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (10.1.1.65:1006)
[2011-12-14 13:24:13.138150] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (10.1.1.1:1013)
[2011-12-14 13:24:13.284665] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (10.1.1.3:1013)
[2011-12-14 13:24:15.790805] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (10.1.1.8:1013)
[2011-12-14 13:24:16.113430] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (10.1.1.125:1013)
[2011-12-14 13:24:16.259040] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (10.1.1.10:1013)
[2011-12-14 13:24:16.392058] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (10.1.1.17:1013)
[2011-12-14 13:24:16.429444] W [socket.c:1494:__socket_proto_state_machine] 0-socket.management: reading from socket failed. Error (Transport endpoint is not connected), peer (10.1.1.11:1013)
[2011-12-14 13:26:05.787680] W [glusterfsd.c:727:cleanup_and_exit]
Re: [Gluster-users] Speed of glusterfs
Hi Marc,

The GlusterFS client DOES send the data directly to both storage servers; that's why your speed is about half the NFS speed. Concerning tweaks to write to only one server and then replicate from that server to the other: I think it's not possible given the architecture of GlusterFS. If someone knows more details, or tweaks for this, I would be very interested.

Regards, Raphaël.
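A quick back-of-the-envelope sketch shows why "about half" is exactly what the arithmetic predicts (the 1 Gbit/s link and replica count 2 are taken from the thread; everything else here is an assumption for illustration):

```python
# Rough ceiling for client-side writes to a replica-2 volume when both
# copies leave the client over the same 1 Gbit/s NIC (assumption: the
# client writes to both bricks itself, as described in this thread).
link_mb_per_s = 1_000_000_000 / 8 / 1_000_000  # 1 Gbit/s wire rate = 125.0 MB/s
replica_count = 2

# The client's uplink carries replica_count copies of every byte, so the
# usable application throughput is at most the wire rate divided by it.
ceiling = link_mb_per_s / replica_count
print(f"replica-{replica_count} write ceiling: {ceiling:.1f} MB/s")  # 62.5 MB/s
```

Protocol and FUSE overhead push the observed 35-40 MB/s below this ~62 MB/s ceiling, while single-copy NFS can approach the full 125 MB/s.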
Re: [Gluster-users] Speed of glusterfs
On 14.12.2011 11:50, Raphaël Hoareau wrote:
> The GlusterFS client DOES send the data directly to both storage servers; that's why your speed is about half the NFS speed.

Can I double the speed of GlusterFS with NIC teaming (2x 1 Gbit)?
Re: [Gluster-users] Speed of glusterfs
In theory yes, but the bandwidth will also be limited by the nodes between the client and the servers. Unless every part of your network can handle 2 Gbit, it won't be a good solution.

You could also try to use two NICs with specific routes: NIC1 knows the route to Server1 and NIC2 knows the route to Server2. Something like this:

eth0: route add -net 192.168.0.0 netmask 255.255.255.0 -i eth0   # assuming 192.168.0.0 = network containing Server1
eth1: route add -net 192.168.1.0 netmask 255.255.255.0 -i eth1   # assuming 192.168.1.0 = network containing Server2

But I really don't know if it would work, as I have never tried it. And, as I said, it also depends on the nodes in your network. The most basic configuration would look like this:

Server1        Server2
   |              |
   | 1Gbit        | 1Gbit
   |              |
  eth0           eth1
     \           /
      \         /
        Client

In that case, you would double your speed. But again, it's just an idea. If you have everything needed to try it (NIC teaming and/or NIC-specific routes), go ahead. And please give me your results, as I'm very interested in GlusterFS performance.

Regards, Raphaël.
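As a hedged sketch of the split-route idea above (device names and subnets are the ones assumed in the example; note that net-tools route(8) takes the interface via the `dev` keyword, not `-i`):

```shell
# Pin each server's subnet to its own NIC (net-tools syntax):
route add -net 192.168.0.0 netmask 255.255.255.0 dev eth0   # network containing Server1
route add -net 192.168.1.0 netmask 255.255.255.0 dev eth1   # network containing Server2

# Equivalent iproute2 commands:
ip route add 192.168.0.0/24 dev eth0
ip route add 192.168.1.0/24 dev eth1
```

These are network-configuration commands and need root; verify the result with `route -n` or `ip route show`.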
Re: [Gluster-users] Speed of glusterfs
On 14.12.2011 14:00, Raphaël Hoareau wrote:
> Unless every part of your network can handle 2Gbit, it won't be a good solution.

There's just one Cisco 3750 in between, and this switch can be configured for teaming.

> You could also try to use two NICs with specific routes.

This is a good idea. I'll try this way first.

> And, please, give me your results as I'm very interested in GlusterFS performances.

I'll post my results on the list.
Re: [Gluster-users] Limit access to a volume
Hi Marc,

To unset any option, you can use the volume reset command. Usage: volume reset VOLNAME [option] [force].

We will look into the other question.
Re: [Gluster-users] Limit access to a volume
On 14.12.2011 14:48, Vijaykumar Koppad wrote:
> To unset any option, you can use the volume reset command. Usage: volume reset VOLNAME [option] [force].

I saw that. But this unsets all the options I have set, right? Is there a way to unset just _one_ setting?
Re: [Gluster-users] Speed of glusterfs
On 14.12.2011 14:00, Raphaël Hoareau wrote:
> You could also try to use two NICs with specific routes. NIC1 knows the route to Server1 and NIC2 knows the route to Server2.

Could it be that this can't be done, or am I missing something?

Node 1: 192.168.29.14 and 192.168.20.14
Node 2: 192.168.29.15 and 192.168.20.15

Each IP is on its own NIC in both machines. Connections over both NICs are fine:

# traceroute 192.168.20.14
traceroute to 192.168.20.14 (192.168.20.14), 30 hops max, 60 byte packets
 1  192.168.20.14 (192.168.20.14)  0.162 ms  0.148 ms  0.141 ms
# traceroute 192.168.29.14
traceroute to 192.168.29.14 (192.168.29.14), 30 hops max, 60 byte packets
 1  192.168.29.14 (192.168.29.14)  0.189 ms  0.172 ms  0.244 ms

But (with the nodes on different NICs/subnets):

# gluster peer probe 192.168.29.14
Probe successful
# gluster volume create test replica 2 transport tcp 192.168.29.14:/mnt/ 192.168.20.15:/mnt/
Operation failed on 192.168.29.14

With both nodes connected over the same NIC/subnet:

# gluster peer probe 192.168.20.14
Probe successful
# gluster volume create test replica 2 transport tcp 192.168.20.14:/mnt/ 192.168.20.15:/mnt/
Creation of volume test has been successful. ...

It also works if I create the volume over the 192.168.29.0/24 subnet - but not if I mix them.
The log says:

[2011-12-14 14:53:45.253837] I [glusterd-handler.c:448:glusterd_handle_cluster_lock] 0-glusterd: Received LOCK from uuid: 4e765edc-9ca6-4404-8757-ca170ce938df
[2011-12-14 14:53:45.253888] I [glusterd-utils.c:243:glusterd_lock] 0-glusterd: Cluster lock held by 4e765edc-9ca6-4404-8757-ca170ce938df
[2011-12-14 14:53:45.253938] I [glusterd-handler.c:2651:glusterd_op_lock_send_resp] 0-glusterd: Responded, ret: 0
[2011-12-14 14:53:45.255287] I [glusterd-handler.c:488:glusterd_req_ctx_create] 0-glusterd: Received op from uuid: 4e765edc-9ca6-4404-8757-ca170ce938df
[2011-12-14 14:53:45.256600] E [glusterd-op-sm.c:366:glusterd_op_stage_create_volume] 0-glusterd: cannot resolve brick: 192.168.20.15:/mnt
[2011-12-14 14:53:45.256635] E [glusterd-op-sm.c:7370:glusterd_op_ac_stage_op] 0-: Validate failed: 1
[2011-12-14 14:53:45.256681] I [glusterd-handler.c:2743:glusterd_op_stage_send_resp] 0-glusterd: Responded to stage, ret: 0
[2011-12-14 14:53:45.256994] I [glusterd-handler.c:2693:glusterd_handle_cluster_unlock] 0-glusterd: Received UNLOCK from uuid: 4e765edc-9ca6-4404-8757-ca170ce938df
[2011-12-14 14:53:45.257049] I [glusterd-handler.c:2671:glusterd_op_unlock_send_resp] 0-glusterd: Responded to unlock, ret: 0
Re: [Gluster-users] Limit access to a volume
Suppose you have set the option data-self-heal-algorithm to diff and want to reset it to its default value: use volume reset VOLNAME data-self-heal-algorithm. This unsets only that option. Note that this per-option form is currently available only in master.
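The full set/unset round-trip then looks roughly like this (the volume name is the one from this thread; the option key and the per-option reset are as described here, and since per-option reset was only in the development branch at the time, treat this as illustrative):

```shell
gluster volume set sesam data-self-heal-algorithm diff   # set an option
gluster volume info sesam                                # lists it under "Options Reconfigured"
gluster volume reset sesam data-self-heal-algorithm      # unset only this option (master branch only)
gluster volume reset sesam                               # with no option: resets ALL reconfigured options
```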
Re: [Gluster-users] Limit access to a volume
On 14.12.2011 14:57, Vijaykumar Koppad wrote:
> ... use volume reset VOLNAME data-self-heal-algorithm. This will unset only that option, and this form is available only in master.

Ok, thanks. I had just called reset without arguments, which resets all changes.
Re: [Gluster-users] Speed of glusterfs
On 14 December 2011 14:54, Marc Muehlfeld marc.muehlf...@medizinische-genetik.de wrote:

> Could it be possible that this can't be done or do I miss something?
> Node 1: 192.168.29.14 and 192.168.20.14
> Node 2: 192.168.29.15 and 192.168.20.15
> Each IP is on its own NIC in both machines. Connections over both NICs are fine:
> # traceroute 192.168.20.14
> # traceroute 192.168.29.14

Could you specify on which host each command is executed? It would help a lot. (The same goes for the output above.)

> But (both nodes on different NICs/subnets):
> # gluster peer probe 192.168.29.14
> Probe successful
> # gluster volume create test replica 2 transport tcp 192.168.29.14:/mnt/ 192.168.20.15:/mnt/
> Operation failed on 192.168.29.14
> It also works if I create the volume over the 192.168.29.0/24 subnet. But not if I mix them.
> The log says:
> [2011-12-14 14:53:45.256600] E [glusterd-op-sm.c:366:glusterd_op_stage_create_volume] 0-glusterd: cannot resolve brick: 192.168.20.15:/mnt

It seems that 192.168.29.14 can't resolve 192.168.20.15. Can you show the result of route -n executed on the host that produced that log?

I don't have much time right now, but I'll try to help you a bit more later.

Regards, Raphaël.
Re: [Gluster-users] Speed of glusterfs
On 14.12.2011 14:54, Marc Muehlfeld wrote:
> Could it be possible that this can't be done or do I miss something?

(For my speed test results, see the end of this mail.)

I saw what was wrong: when I run # gluster peer probe ... from one node to the other, glusterfs automatically allows the node's own IP from *the same subnet*. If I probe both servers from a client, it works:

# gluster peer probe 192.168.20.14
Probe successful
# gluster peer probe 192.168.29.15
Probe successful
# gluster volume create test replica 2 transport tcp 192.168.20.14:/mnt/ 192.168.29.15:/mnt/
Creation of volume test has been successful. ...

Netstat also shows that the client is connected to both nodes, each on its own IP in a separate subnet (and NIC):

# netstat -taunp | grep glusterfs
tcp  0  0 192.168.29.1:1022  192.168.29.15:24011  ESTABLISHED  23488/glusterfs
tcp  0  0 192.168.20.1:1019  192.168.20.14:24011  ESTABLISHED  23310/glusterfs

Here are the results. I'm writing a 10 GB file to the cluster:

time dd if=/dev/zero of=/mnt/test.10G bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (10 GB) copied, 106.981 s, 98.0 MB/s

real  1m47.018s
user  0m0.010s
sys   0m5.801s

This is a good result for me on a 1 Gbit connection.
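Reproducing dd's arithmetic confirms the reported rate (the count and byte total in the quoted dd output lost digits in the archive; count=10000 at bs=1M, i.e. 10485760000 bytes, is an assumption that matches the reported 10 GB and 98.0 MB/s):

```python
# dd reports its rate as bytes / elapsed seconds, printed in decimal MB/s.
bytes_written = 10_000 * 1024 * 1024   # assumed: bs=1M count=10000 (not in the garbled output)
elapsed_s = 106.981                    # from the quoted output
rate_mb_s = bytes_written / elapsed_s / 1e6
print(f"{rate_mb_s:.1f} MB/s")  # 98.0 MB/s, matching dd's summary line
```

At ~98 MB/s the client is close to the 1 Gbit wire rate on each NIC, which is consistent with the two replicas now leaving over separate NICs instead of sharing one link.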
Re: [Gluster-users] Limit access to a volume
And if you want to unset it in the 3.2.x releases, you have to set the option back to its default value; the defaults for all options are shown in volume set help.
Re: [Gluster-users] where's the API docs?
Ah - OK, thanks Jeff. I was looking for the Swift and REST API docs; I assumed those API interfaces were in 3.2.5. I'll go dig up some roadmap info.

Erik Redding
Systems Programmer, RHCE
Core Systems
Texas State University-San Marcos

On Dec 13, 2011, at 8:39 PM, Jeff Darcy wrote:
> On 12/13/2011 04:31 PM, Redding, Erik wrote:
>> Where could I find the API documentation for GlusterFS 3.2.5? I don't see anything on the gluster.org site.
>
> What kind of API docs are you looking for? Neither the S3/Swift object interface nor the Hadoop locality interface is in 3.2.5. Neither is the REST API for the management console. That leaves the CLI (documented in the user manual) or the internal translator API (not well documented, but I've written extensively about it at hekafs.org).
Re: [Gluster-users] where's the API docs?
- Original Message -
> Ah - Ok thanks, Jeff. I was looking for the Swift and REST API docs. I assumed there were API interfaces in 3.2.5. I'll go dig up some roadmap info.

I guess the first question to ask is: what are you looking to do?

-JM
Re: [Gluster-users] Limit access to a volume
On 12/14/2011 08:48 AM, Vijaykumar Koppad wrote:
>> I created a volume and have to restrict access now. Just one client should be allowed to access the volume. So I tried:
>> # gluster volume set sesam auth.allow 192.168.20.1
>> But I can still mount the volume from clients other than 192.168.20.1. When I try to access the directory from a disallowed machine, my shell hangs. If I try to unmount, it says that the device is busy. The only way to get it off the client is to kill the glusterfs process.

This looks like https://bugzilla.redhat.com/show_bug.cgi?id=765240 for which a patch is under review.
Re: [Gluster-users] Limit access to a volume
Is there any reason why this bug is Access Denied to us mortals? I was denied even after I logged in to the Red Hat bugzilla.

Bill Sebok
Computer Software Manager, Univ. of Maryland, Astronomy
Internet: w...@astro.umd.edu  URL: http://furo.astro.umd.edu/

On Wed, Dec 14, 2011 at 12:21:02PM -0500, Jeff Darcy wrote:
> This looks like https://bugzilla.redhat.com/show_bug.cgi?id=765240 for which a patch is under review.
Re: [Gluster-users] Limit access to a volume
On 12/14/2011 01:55 PM, William L. Sebok wrote:
> Is there any reason why this bug is Access Denied to us mortals? I was also denied after I logged in to Redhat bugzilla.

I have no idea. I see nothing in the content that should justify it being private, but I'm reluctant to clear that flag unilaterally. FWIW, it refers to a report on the community site...

http://community.gluster.org/q/re-how-to-restrict-access-to-glusterfs-native-clients-by-ip-address/

...and the patch for it doesn't seem to be private either...

http://review.gluster.com/#change,398
Re: [Gluster-users] Limit access to a volume
- Original Message -
> Is there any reason why this bug is Access Denied to us mortals? I was also denied after I logged in to Redhat bugzilla.

I'm not entirely sure. I'll send a request to the bugzilla maintainer.

-JM
Re: [Gluster-users] where's the API docs?
I'm working on a proposal for a 1 PB Gluster rig and I wanted to be able to present programmers with the API docs to sweeten the deal and to be able to programmatically manage it. I assumed there was a REST API for interfacing with the filesystem, either management or actual file I/O, because it appears in the feature list from time to time. I'm trying to dig around on how to pull the 3.3 beta, but with the Red Hat transition I'm finding all of those offerings have disappeared. I'm digging around and

Erik Redding
Systems Programmer, RHCE
Core Systems
Texas State University-San Marcos

On Dec 14, 2011, at 11:19 AM, John Mark Walker wrote:
> - Original Message -
> > Ah - Ok thanks, Jeff. I was looking for the Swift and REST API docs. I
> > assumed there were API interfaces in 3.2.5. I'll go dig up some roadmap
> > info.
>
> I guess the first question to ask is, what are you looking to do?
>
> -JM
Re: [Gluster-users] where's the API docs?
I'll proceed with my foot insertion into mouth:

http://community.gluster.org/a/install-glusterfs-3-3-beta-1-with-unified-file-and-object-storage/

Erik Redding
Systems Programmer, RHCE
Core Systems
Texas State University-San Marcos

On Dec 14, 2011, at 1:54 PM, Redding, Erik wrote:
> I'm working on a proposal for a 1 PB Gluster rig and I wanted to be able
> to present programmers with the API docs to sweeten the deal and to be
> able to programmatically manage it. [...]
Re: [Gluster-users] where's the API docs?
this still works for me..

http://download.gluster.com/pub/gluster/glusterfs/qa-releases/3.3-beta-2/

hjm

On Wednesday 14 December 2011 11:54:26 Redding, Erik wrote:
> I'm working on a proposal for a 1 PB Gluster rig and I wanted to be able
> to present programmers with the API docs to sweeten the deal and to be
> able to programmatically manage it. [...]

--
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[ZOT 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)
--
This signature has been OCCUPIED!
Re: [Gluster-users] Renaming permission denied
On Dec 13, 2011, at 7:54 PM, Pranith Kumar K wrote:
> On 12/13/2011 10:56 PM, Brian Rosner wrote:
> > On Dec 12, 2011, at 1:41 AM, Pranith Kumar K wrote:
> > > Seems like an issue with that specific file your application is trying
> > > to rename. Could you check if that file has correct permissions on the
> > > backends?
> >
> > It is difficult for me to test that because the file is created and
> > unlinked during the reindexing process. The fact that it has permission
> > to unlink but not rename seems very odd to me.
> >
> > Brian Rosner
>
> Brian,
> Could you give the permissions/ownership of the directory the file is
> being created in, and the permissions/ownership of the file that is being
> created? Is the group of the file a secondary group? It is very difficult
> for us to find the root cause without a good test case.

Ok, I have a bit more info. I was able to reproduce it with a tool that I can easily modify. Here is some info from this new case:

log:

[2011-12-14 21:53:15.373241] W [fuse-bridge.c:1348:fuse_rename_cbk] 0-glusterfs-fuse: 265: /i130/whoosh_index/_MAIN_1.toc.1323899595.36 - /i130/whoosh_index/_MAIN_1.toc = -1 (Permission denied)

% sudo ls -l /mnt/i130/whoosh_index/_MAIN_1.toc.1323899595.36
-rw-r- 1 1630 nogroup 1522 2011-12-14 21:52 /mnt/i130/whoosh_index/_MAIN_1.toc.1323899595.36
% sudo ls -l /mnt/i130/
drwxr-x--- 2 1630 nogroup 8192 2011-12-14 21:52 whoosh_index
% sudo ls -l /mnt | grep i130
drwxr-x--x 4 1630 anduin 8192 2011-12-14 21:52 i130

I've confirmed the user and group of the process are 1630 and nogroup.

Brian Rosner
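A note on the semantics being checked here: POSIX rename() needs write and search (execute) permission on the parent directory, not on the file itself, which is why unlink can succeed while rename fails would be surprising for the same directory. The sketch below is a minimal, self-contained version of that mode-bit check, using Brian's whoosh_index directory (mode 750, owner uid 1630) as the worked case; the non-owner uid 2000 and the gid 65534 for nogroup are assumptions for illustration.

```shell
# Decide whether a given uid/gid could rename inside a directory, from the
# directory's octal mode and owner/group ids (pure arithmetic, no filesystem).
can_write_dir() {
  mode=$1 owner=$2 group=$3 uid=$4 gid=$5
  # Pick the relevant permission triplet: owner, group, or other.
  if [ "$uid" -eq "$owner" ]; then bits=$(( (0$mode >> 6) & 7 ))
  elif [ "$gid" -eq "$group" ]; then bits=$(( (0$mode >> 3) & 7 ))
  else bits=$(( 0$mode & 7 )); fi
  # rename() needs write+execute on the parent directory (bits -wx = 3).
  [ $(( bits & 3 )) -eq 3 ] && echo yes || echo no
}

# whoosh_index is drwxr-x--- (750), owned by uid 1630, group nogroup (65534 assumed)
can_write_dir 750 1630 65534 1630 65534   # the owning uid
can_write_dir 750 1630 65534 2000 65534   # a group member, r-x only
```

By this check the process (uid 1630, matching the owner) should be able to rename, which is what makes the EACCES in the fuse log above puzzling; the remaining suspects are the brick-side copies of these directories having different modes or ownership than what the client mount shows.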