Re: [Gluster-users] Problem with distribute translator.
Hi -

> It sends all the data on machine 1 instead of sending the data to machine 1 and 2 both.

The distribute translator places each whole file on a single brick; it spreads files across bricks rather than copying them. Use AFR (replicate) instead of distribute if you want the data on both machines. For more, check http://gluster.com/community/documentation/index.php/Gluster_3.2:_Configuring_Distributed_Replicated_Volumes

> I am using GlusterFS 2.0.9 for Fedora 12.

2.0.9 is a very old version. Please use 3.2.x.

--
Cheers,
Lakshmipathi.G
FOSS Programmer.

----- Original Message -----
From: sonali.gupta <sonali.gu...@99acres.com>
To: Gluster-users@gluster.org
Sent: Wednesday, May 18, 2011 4:56:50 PM
Subject: [Gluster-users] Problem with distribute translator.

Hi,

I am a newbie at GlusterFS. I have two machines: machine 1 (client1 and server1) and machine 2 (server2). My client configuration file has two bricks, server1 and server2, and I have added the distribute translator to it. Checking the disk usage gives the correct output, but when I write data it does not distribute: all the data goes to machine 1 instead of being spread across machines 1 and 2.

I am using GlusterFS 2.0.9 on Fedora 12. Please help; I am unable to track down the issue.

Regards,
Sonali.

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
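For reference, the change the reply suggests amounts to swapping the cluster/distribute section of the 2.0.x client volfile for cluster/replicate. A rough sketch of such a volfile is below; the translator and option names follow the legacy 2.0.x volfile format, while the volume names (client1, client2, replicate0) and the remote-subvolume name (brick) are illustrative, not taken from the poster's actual configuration:

```
volume client1
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume brick
end-volume

volume client2
  type protocol/client
  option transport-type tcp
  option remote-host server2
  option remote-subvolume brick
end-volume

# Replicate (AFR) keeps a full copy of every file on each subvolume,
# unlike distribute, which places each file on exactly one subvolume.
volume replicate0
  type cluster/replicate
  subvolumes client1 client2
end-volume
```

On 3.2.x there is no need to hand-write volfiles; the equivalent is created with something like `gluster volume create testvol replica 2 server1:/export/brick server2:/export/brick`.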
Re: [Gluster-users] Géo-rep fail
Hi -

Do you have passwordless ssh login to the slave machine? After setting up passwordless login, please try:

# gluster volume geo-replication athena root@$(hostname):/soft/venus start

or

# gluster volume geo-replication athena $(hostname):/soft/venus start

Wait a few seconds, then verify the status. For the minimum requirements, check out http://www.gluster.com/community/documentation/index.php/Gluster_3.2:_Checking_Geo-replication_Minimum_Requirements

HTH
--
Cheers,
Lakshmipathi.G
FOSS Programmer.

----- Original Message -----
From: anthony garnier <sokar6...@hotmail.com>
To: gluster-users@gluster.org
Sent: Monday, May 16, 2011 5:06:22 PM
Subject: [Gluster-users] Géo-rep fail

Hi,

I'm currently trying to use geo-rep from the local data node into a directory, but it fails with status "faulty".

Volume:

Volume Name: athena
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ylal3020:/users/exp1
Brick2: yval1010:/users/exp3
Brick3: ylal3030:/users/exp2
Brick4: yval1000:/users/exp4
Options Reconfigured:
geo-replication.indexing: on
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
performance.cache-max-file-size: 256MB
network.ping-timeout: 5
performance.cache-size: 512MB
performance.cache-refresh-timeout: 60
nfs.port: 2049

I've run these commands:

# gluster volume geo-replication athena /soft/venus config
# gluster volume geo-replication athena /soft/venus start
# gluster volume geo-replication athena /soft/venus status

MASTER  SLAVE        STATUS
athena  /soft/venus  faulty

Here is the log file in debug mode:

[2011-05-16 13:28:55.268006] I [monitor(monitor):42:monitor] Monitor:
[2011-05-16 13:28:55.268281] I [monitor(monitor):43:monitor] Monitor: starting gsyncd worker
[2011-05-16 13:28:55.326309] I [gsyncd:287:main_i] <top>: syncing: gluster://localhost:athena -> file:///soft/venus
[2011-05-16 13:28:55.327905] D [repce:131:push] RepceClient: call 10888:47702589471600:1305545335.33 __repce_version__() ...
[2011-05-16 13:28:55.462613] D [repce:141:__call__] RepceClient: call 10888:47702589471600:1305545335.33 __repce_version__ -> 1.0
[2011-05-16 13:28:55.462886] D [repce:131:push] RepceClient: call 10888:47702589471600:1305545335.46 version() ...
[2011-05-16 13:28:55.463330] D [repce:141:__call__] RepceClient: call 10888:47702589471600:1305545335.46 version -> 1.0
[2011-05-16 13:28:55.480202] D [resource:381:connect] GLUSTER: auxiliary glusterfs mount in place
[2011-05-16 13:28:55.682863] D [resource:393:connect] GLUSTER: auxiliary glusterfs mount prepared
[2011-05-16 13:28:55.684926] D [monitor(monitor):57:monitor] Monitor: worker got connected in 0 sec, waiting 59 more to make sure it's fine
[2011-05-16 13:28:55.685096] D [repce:131:push] RepceClient: call 10888:1115703616:1305545335.68 keep_alive(None,) ...
[2011-05-16 13:28:55.685859] D [repce:141:__call__] RepceClient: call 10888:1115703616:1305545335.68 keep_alive -> 1
[2011-05-16 13:28:59.546574] D [master:167:volinfo_state_machine] <top>: (None, None) (None, 28521f8f) -> (None, 28521f8f)
[2011-05-16 13:28:59.546863] I [master:184:crawl] GMaster: new master is 28521f8f-49d3-4e2a-b984-f664f44f5289
[2011-05-16 13:28:59.547034] I [master:191:crawl] GMaster: primary master with volume id 28521f8f-49d3-4e2a-b984-f664f44f5289 ...
[2011-05-16 13:28:59.547180] D [master:199:crawl] GMaster: entering .
[2011-05-16 13:28:59.548289] D [repce:131:push] RepceClient: call 10888:47702589471600:1305545339.55 xtime('.', '28521f8f-49d3-4e2a-b984-f664f44f5289') ...
[2011-05-16 13:28:59.596978] E [syncdutils:131:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 152, in twrap
    tf(*aa)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 118, in listen
    rid, exc, res = recv(self.inf)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 42, in recv
    return pickle.load(inf)
EOFError

Has anyone already seen these errors?
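A note on the traceback above: the `EOFError` comes from `pickle.load` hitting an empty stream. gsyncd's repce layer unpickles replies from the remote worker, so an EOF there generally means the peer process exited or was never reachable, which is consistent with the passwordless-ssh theory in the reply. A minimal sketch of that failure mode (standalone Python, not gsyncd itself):

```python
import io
import pickle

# gsyncd's repce recv() does essentially: return pickle.load(inf)
# If the peer closed the connection without sending anything, the
# stream is empty and pickle.load raises EOFError.
try:
    pickle.load(io.BytesIO(b""))
except EOFError:
    print("EOFError: peer sent no data (connection closed?)")
```
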
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
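The passwordless-ssh prerequisite mentioned in the reply above is set up with the standard OpenSSH tools. The sketch below generates the keypair into a temporary directory so it can run anywhere; on a real master you would normally use ~/.ssh/id_rsa, and `slavehost` is a placeholder for the actual slave machine:

```shell
# Generate a passphrase-less keypair (temporary path so this sketch
# is self-contained; on a real master use the default ~/.ssh/id_rsa).
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$keydir/id_rsa"
ls "$keydir"
# On a real setup, copy the public key to the slave (placeholder host):
#   ssh-copy-id -i "$keydir/id_rsa.pub" root@slavehost
# and verify that login needs no password:
#   ssh -i "$keydir/id_rsa" root@slavehost true
```
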
Re: [Gluster-users] Error rpmbuild Glusterfs 3.1.3
Hi -

Please apply the patch (by Joe) from http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2279 and rebuild again.

--
Cheers,
Lakshmipathi.G
FOSS Programmer.

----- Original Message -----
From: Jürgen Winkler <juergen.wink...@xidras.com>
To: gluster-users@gluster.org
Sent: Thursday, March 31, 2011 1:21:13 PM
Subject: [Gluster-users] Error rpmbuild Glusterfs 3.1.3

Hi,

I have a lot of trouble when I try to build RPMs out of the glusterfs 3.1.3 tgz on my SLES servers (SLES 10.1 and SLES 11.1). Everything runs fine, I guess, until it tries to build the RPMs. Then I always run into this error:

RPM build errors:
File not found: /var/tmp/glusterfs-3.1.3-1-root/opt/glusterfs/3.1.3/local/libexec/gsyncd
File not found by glob: /var/tmp/glusterfs-3.1.3-1-root/opt/glusterfs/3.1.3/local/libexec/python/syncdaemon/*

Are there missing dependencies or something?

Thx

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
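The suggested fix is the usual patch-then-rebuild workflow: unpack the source, apply the patch from the bug report with `patch -p1`, and re-run rpmbuild (for a tarball build, something like `rpmbuild -ta glusterfs-3.1.3.tar.gz`). The sketch below illustrates only the `patch -p1` step with a self-contained toy tree and toy patch, since the real patch file comes from the bug report:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
# Stand-in for a file in the unpacked source tree:
printf 'hello\n' > file.txt
# Stand-in for the patch downloaded from the bug report:
cat > fix.patch <<'EOF'
--- a/file.txt
+++ b/file.txt
@@ -1 +1 @@
-hello
+hello, patched
EOF
# Apply from the tree root; -p1 strips the a/ and b/ prefixes:
patch -p1 < fix.patch
cat file.txt
```
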
Re: [Gluster-users] Files not synced after peer reboot
Hi -

Did you trigger self-heal by running ls -lR /mntpt on the client? For more, refer to http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Triggering_Self-Heal_on_Replicate

--
Cheers,
Lakshmipathi.G
FOSS Programmer.

----- Original Message -----
From: Panos Kontopoulos <pa...@kontopoulos.org>
To: gluster-users@gluster.org
Sent: Monday, March 21, 2011 12:53:00 PM
Subject: [Gluster-users] Files not synced after peer reboot

Hi,

I am testing GlusterFS 3.1 in an EC2 environment with 2 servers in replica mode:

sudo gluster volume create gluster-volume replica 2 transport tcp ip-10-227-166-178:/export/brick01 ip-10-48-79-22:/export/brick02

I connected to the file system from a third client and created a test file, which appears on both servers. I then reboot my 2nd GlusterFS server and at the same time change the contents of the test file, which look fine on the 1st, still-running server. When the 2nd server comes back, I run sudo /etc/init.d/glusterd start and remount my drive/brick. However, the contents of the test file on the 2nd server are still the old, pre-reboot ones. When I open the test file from the client (where it is the correct, latest version, BTW), update it, and save it, both copies are in sync again.

Is this how GlusterFS works by default? Is there a way to re-sync files without updating the actual file?

Thanx in advance,
Panos Kontopoulos

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
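In 3.1, replicate heals a stale copy when a client next looks at the file, which is why the docs suggest a recursive listing of the mount point. The sketch below shows the shape of such a walk; it uses a temp directory as a stand-in for the real client mount point so it is self-contained:

```shell
# Stand-in for the GlusterFS client mount point:
MNT=$(mktemp -d)
mkdir -p "$MNT/sub"
echo data > "$MNT/sub/file1"
# Recursively listing every entry makes the client look at each file,
# which lets the replicate translator heal stale copies:
ls -lR "$MNT"
# An equivalent stat-based walk over the same tree:
find "$MNT" -print0 | xargs -0 stat > /dev/null
```
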
Re: [Gluster-users] Fedora release 3.1.3
Hi -

It's uploaded now; the Fedora RPMs are available at http://download.gluster.com/pub/gluster/glusterfs/3.1/3.1.3/

--
Cheers,
Lakshmipathi.G
FOSS Programmer.

----- Original Message -----
From: Activepage Gmail <activep...@gmail.com>
To: gluster-users@gluster.org
Sent: Saturday, March 19, 2011 1:26:52 PM
Subject: [Gluster-users] Fedora release 3.1.3

Will there be a Fedora release of 3.1.3? Because at http://download.gluster.com/pub/gluster/glusterfs/3.1/3.1.3/ there is no Fedora RPM.

Thanks

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users