Yes, it is!
Here's the volfile:
cat /mnt/gluster/brick0/config/vols/storage0/storage0-fuse.vol:
volume storage0-client-0
type protocol/client
option remote-host de-dc1-c1-pserver3
option remote-subvolume /mnt/gluster/brick0/storage
option transport-type rdma
option ping-time
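For reference, a complete protocol/client block in a glusterfs volfile closes
with end-volume; the following is only a generic sketch with placeholder values,
not the remainder of the file above:

volume storage0-client-0
  type protocol/client
  option remote-host <server-hostname>    # placeholder: the server exporting the brick
  option remote-subvolume <brick-path>    # placeholder: the exported brick directory
  option transport-type rdma              # or tcp, depending on the setup
  option ping-timeout 42                  # 42 seconds is the usual default
end-volume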
Martin,
Is this a distributed-replicate setup? Could you attach the vol-file of
the client?
Pranith
- Original Message -
From: "Martin Schenker"
To: gluster-users@gluster.org
Sent: Monday, May 16, 2011 2:49:29 PM
Subject: [Gluster-users] Client and server file "view", different re
Hi Remi,
Would it be possible to post the client logs, so that we can find out
what issue you are running into?
Pranith
- Original Message -
From: "Remi Broemeling"
To: gluster-users@gluster.org
Sent: Monday, May 16, 2011 10:47:33 PM
Subject: [Gluster-users] Rebuild Distributed/Re
It seems like you have run into the glusterd lock problem, most probably because
you ran a script with both peer probes and volume operations.
Can you check whether the volumes/bricks you created exist on all the peers? If
yes, just restart the glusterds on all the machines and you should be fine.
Pr
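A hedged sketch of that check-and-restart sequence, assuming root ssh access to
each peer and the stock glusterd init script (hostnames are placeholders):

for host in server1 server2 server3; do
  ssh root@$host "gluster volume info"            # the created volumes/bricks should show up on every peer
done
for host in server1 server2 server3; do
  ssh root@$host "/etc/init.d/glusterd restart"   # restart glusterd once the state looks consistent
done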
Not sure if this is related, but do you know why you are seeing
"(127.0.0.1:1020)"? Can you look at gluster peer status on all the
hosts and see if they can see each other?
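A minimal sketch of that peer check (hostnames are placeholders); every host
should list the other peers as connected, with no stray 127.0.0.1 entries:

for host in natty3 natty4 natty5; do
  echo "== $host =="
  ssh root@$host "gluster peer status"   # compare peer lists and connection state across hosts
done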
On Mon, May 16, 2011 at 11:17 AM, mkey wrote:
> Hi,
> I am trying to use glusterfs_3.2.0 on Ubuntu Natty (11.04).
>
Hi,
I am trying to use glusterfs_3.2.0 on Ubuntu Natty (11.04).
I have 3 servers, and 2 of them have already been added as peers with the
following commands.
> root@natty3:~# gluster peer probe natty3
> root@natty3:~# gluster peer probe natty4
Also, I created a volume.
> root@natty3:~# gluster volume crea
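For context, a distributed-replicated create in 3.2 generally takes this shape;
the volume name, replica count and brick paths below are placeholders, not the
command that was actually run:

root@natty3:~# gluster volume create testvol replica 2 natty3:/data/brick natty4:/data/brick
root@natty3:~# gluster volume start testvol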
Hi,
I've got a distributed/replicated GlusterFS v3.1.2 (installed via RPM) setup
across two servers (web01 and web02) with the following vol config:
volume shared-application-data-client-0
type protocol/client
option remote-host web01
option remote-subvolume /var/glusterfs/bricks/shar
As far as I know, those options should be the same between 3.1 and 3.2.
On Mon, May 16, 2011 at 9:11 AM, Justice London wrote:
> Hey, are there any optimization options for 3.2 like there were for 3.1
> versions? I specifically need to have more client connections and server
> connections. Without thes
What happens when you read the file? Do you see the right contents? These
look like linked files created in order to locate the files on the
right server. Did you recently upgrade or add/remove bricks?
Can you also look at the gfid on these files from the server side?
Run
getfattr -dm -
On Mon, May 16
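For the gfid check above, a hedged sketch run directly against the brick on the
server side (the path is a placeholder for the file in question):

getfattr -d -m trusted.gfid -e hex /path/on/brick/<file>   # show only the trusted.gfid xattr, hex-encoded
getfattr -d -m . -e hex /path/on/brick/<file>              # or dump every xattr, including trusted.afr.*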
Hey, are there any optimization options for 3.2 like there were for the 3.1
versions? I specifically need to have more client connections and server
connections. Without these, 3.1 would lock up all the time, as I was using it
for web image storage. After allowing more than the stock number of
connection
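In 3.1 and 3.2 this kind of tuning is applied with gluster volume set; a minimal
sketch (the volume name and value are placeholders, and the option names the
installed version supports can be listed rather than guessed):

gluster volume set help                                      # list the settable options, if this form is available
gluster volume set <volname> performance.cache-size 256MB   # example: raise the io-cache size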
Lakshmi,
On 05/16/11 17:32, Lakshmipathi.G wrote:
Hi -
Do you have passwordless ssh login to the slave machine? After setting up
passwordless login, please try this -
#gluster volume geo-replication athena root@$(hostname):/soft/venus start
or
#gluster volume geo-replication athena $(hostname):/soft
On 05/16/11 17:06, anthony garnier wrote:
Hi,
I'm currently trying to use geo-rep on the local data node into a
directory, but it fails with the status "faulty".
[...]
I've run these commands:
# gluster volume geo-replication athena /soft/venus config
# gluster volume geo-replication athena /soft/venus
Hi,
Yes, my machine has passwordless ssh login, but it still doesn't work. I also
meet the requirements, with the right version of the software.
> Date: Mon, 16 May 2011 07:02:57 -0500
> From: lakshmipa...@gluster.com
> To: sokar6...@hotmail.com
> CC: gluster-users@gluster.org
> Subject: Re: [Gluster-us
Hi -
Do you have passwordless ssh login to the slave machine? After setting up
passwordless login, please try this -
#gluster volume geo-replication athena root@$(hostname):/soft/venus start
or
#gluster volume geo-replication athena $(hostname):/soft/venus start
Wait for a few seconds, then verify the s
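A hedged sketch of the whole sequence suggested above, assuming key-based ssh is
used for the passwordless login (the slave here is the local host, as in the
thread):

ssh-keygen -t rsa                                    # create a key pair if none exists yet
ssh-copy-id root@$(hostname)                         # allow passwordless root ssh to the slave
gluster volume geo-replication athena root@$(hostname):/soft/venus start
gluster volume geo-replication athena root@$(hostname):/soft/venus status   # should settle into a non-faulty state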
Hi all!
Here we have another mismatch between the client "view" and the server
mounts:
From the server side everything seems fine; the 20G file is visible and the
attributes seem to match:
0 root@pserver5:~ # getfattr -R -d -e hex -m "trusted.afr."
/mnt/gluster/brick1/storage/images/207
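To compare the two sides directly, widening the match from trusted.afr. to
trusted. also shows trusted.gfid; a minimal sketch (the second hostname and the
file name are placeholders):

for host in pserver5 <second-server>; do
  echo "== $host =="
  ssh root@$host 'getfattr -R -d -e hex -m "trusted." /mnt/gluster/brick1/storage/images/<file>'
done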