Hi,
It is failing to get the virtual xattr value of
"trusted.glusterfs.volume-mark" at the master volume root.
Could you share the geo-replication logs under
/var/log/glusterfs/geo-replication/*.gluster.log ?
If these are transient errors, I think stopping geo-rep and restarting the
master volume should clear them.
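To confirm whether the xattr itself is readable, you can query it directly and grep the geo-rep log for the failure. This is a sketch; the mount point /mnt/master is an assumption (use wherever the master volume is mounted), and the log path is the one mentioned above:

```shell
# The volume-mark xattr is virtual (served by glusterd), so query it on a
# glusterfs client mount of the master volume, not on a brick path.
# /mnt/master is a hypothetical mount point.
getfattr -n trusted.glusterfs.volume-mark -e hex /mnt/master

# Look for the corresponding errors in the geo-rep logs:
grep -i "volume-mark" /var/log/glusterfs/geo-replication/*.gluster.log | tail -20
```

If getfattr returns a value here but geo-rep still fails, the problem is likely on the geo-rep worker side rather than in the volume itself.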
On 03/06/2018 05:50 PM, Paul Anderson wrote:
> When I follow the directions at
> http://docs.gluster.org/en/latest/Install-Guide/Install/ to install
> the latest gluster on a debian 9 docker container, I get the following
> error:
Files in the .../3.13/3.13.2 directory had the wrong owner/group,
Yes, this was the case.
Thanks much.
--
Regards,
Sherin
From: Hari Gowtham
Sent: Monday, March 5, 2018 4:17 PM
To: Sherin George
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Why files goes to hot tier and cold tier at same
Step 6/15 : RUN echo deb [arch=amd64]
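The RUN step above is cut off. For reference, the apt setup from the install guide looks roughly like the sketch below; the exact URL path, Debian release name (stretch), and key location are assumptions — check the install guide page for your release before using it:

```shell
# Sketch of the Dockerfile steps implied above (paths/versions are assumptions):
echo "deb [arch=amd64] https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/stretch/amd64/apt stretch main" \
    > /etc/apt/sources.list.d/gluster.list
wget -O - https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/stretch/amd64/apt/rsa.pub \
    | apt-key add -
apt-get update && apt-get install -y glusterfs-server
```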
> On Mar 5, 2018, at 6:41 PM, Atin Mukherjee wrote:
> I'm tempted to repeat - down things, copy the checksum the "good" ones agree
> on, start things; but given that this has turned into a balloon-squeezing
> exercise, I want to make sure I'm not doing this the wrong way.
Just following up on the below after having some time to track down the
differences.
On the bad peer, the `tier-enabled=0` line in .../vols//info was
removed after I copied it over, and, as mentioned, the cksum file changed to a
value that doesn't match the others. The logs only complain about
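A quick way to see which peer disagrees is to compare the volume definition and its checksum across the pool. This is a sketch; the hostnames (peer1..peer3) and volume name (myvol) are placeholders for your own:

```shell
# Compare the on-disk volume definition and its checksum on every peer.
# Hostnames and the volume name are placeholders.
for host in peer1 peer2 peer3; do
    echo "== $host =="
    ssh "$host" 'cat /var/lib/glusterd/vols/myvol/cksum;
                 grep tier-enabled /var/lib/glusterd/vols/myvol/info'
done
```

If one peer's info file is missing the tier-enabled line the others have, its cksum will differ, which matches the rejected-peer symptom described above.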
> On Mar 5, 2018, at 6:41 PM, Atin Mukherjee wrote:
> On Tue, Mar 6, 2018 at 6:00 AM, Jamie Lawrence
> wrote:
> Hello,
>
> So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume.
>
> It actually began as the same
On Tue, Mar 6, 2018 at 10:58 PM, Raghavendra Gowdappa
wrote:
> On Tue, Mar 6, 2018 at 10:22 PM, Paul Anderson wrote:
>
>> Raghavendra,
>>
>> I've committed my test case to https://github.com/powool/gluster.git -
>> it's grungy, and a work in progress,
Raghavendra,
I've committed my test case to https://github.com/powool/gluster.git -
it's grungy, and a work in progress, but I am happy to take change
suggestions, especially if it will save folks significant time.
For the rest, I'll reply inline below...
On Mon, Mar 5, 2018 at 10:39 PM,
Hi All,
I know this isn't the Ganesha mailing list, but I wondered if anyone can help.
I'm having an issue with file creation over NFS. I have a gluster volume "vol1"
presented via Ganesha with the following config:

EXPORT {
    Export_Id = 20;
    Path = "/vol1";
    FSAL {
        name
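The quoted config is cut off at the FSAL block. For reference, a minimal gluster export in nfs-ganesha typically looks like the sketch below; the fields beyond those quoted above (Pseudo, Access_Type, Hostname) are assumptions, not taken from the original message:

```
EXPORT {
    Export_Id = 20;
    Path = "/vol1";
    Pseudo = "/vol1";           # assumed
    Access_Type = RW;           # assumed
    FSAL {
        Name = "GLUSTER";
        Hostname = "localhost"; # assumed: any server in the trusted pool
        Volume = "vol1";
    }
}
```

Comparing a known-good export like this against the live one often turns up the missing field (Pseudo and Access_Type are common culprits for create failures).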
Hi list,
I am wondering why we need the Ganesha user-land NFS server in order to get pNFS
working.
I understand Ganesha is necessary on the MDS, but a standard kernel-based NFS
server should be sufficient on the DS bricks (which should give us additional
performance), right?
Could someone clarify?
Hi,
I'm trying to create two gluster volumes over two nodes with two
separate networks:
The names are in the hosts file of each node:
root@gluster01:~# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 gluster01.peiker-cee.de gluster01
10.0.2.54 gluster02g1.peiker-cee.de
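To pin each volume to one network, the usual approach is to probe the peers and name the bricks using the hostname that resolves on the network you want that volume to use. This is only a sketch; the `g1` hostnames and brick paths below are assumptions based on the hosts file above, and every node must be able to resolve every name used:

```shell
# Run from gluster01. Probe the peer by its name on the first network:
gluster peer probe gluster02g1.peiker-cee.de

# Create a volume whose bricks are addressed via that network
# (hostnames and brick paths are placeholders):
gluster volume create vol-net1 replica 2 \
    gluster01g1.peiker-cee.de:/bricks/vol-net1 \
    gluster02g1.peiker-cee.de:/bricks/vol-net1
```

A second volume would then use the hostnames from the second network in its brick specification.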
Hi,
I have problems with geo-replication on glusterfs 3.12.6 / Ubuntu 16.04.
I can see a "master volinfo unavailable" message in the master logfile.
Any ideas?
Master:
Status of volume: testtomcat
Gluster process TCP Port RDMA Port Online Pid
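When geo-rep reports "master volinfo unavailable", the first things worth checking are the session status and whether glusterd can serve the master volume's info at all. A sketch, with the slave host and volume as placeholders (only testtomcat comes from the message above):

```shell
# Session status; <slavehost> and <slavevol> are placeholders for your session:
gluster volume geo-replication testtomcat <slavehost>::<slavevol> status detail

# Confirm glusterd can return the master volume's definition:
gluster volume info testtomcat
```

If `volume info` works but the geo-rep status shows the workers as Faulty, the per-worker logs under /var/log/glusterfs/geo-replication/ usually name the failing step.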