On Mon, 2020-02-10 at 14:50 +0100, Stefan Kania wrote:
> I have:
> root@cluster-02:~# ls /var/lib/glusterd/groups
> db-workload gluster-block metadata-cache nl-cache virt
>
> and root@cluster-02:~# apt-file list glusterfs-server gives:
> ...
> glusterfs-server: /var/lib/glusterd/groups/db-workload
Hi Ulrich,
Thank you for letting us know. Glad to hear that your system is back to
normal.
Regards,
Karthik
On Mon, Feb 10, 2020 at 9:51 PM Ulrich Pötter
wrote:
> Hello Karthik,
>
> thank you very much. That was exactly the problem.
> Running the command (cat
> /<mountpoint>/.meta/graphs/active/<volname>-client-*/private | egrep -i 'connected')
My question: Are the errors and anomalies below something I need to
investigate? Or should I not be worried?
I installed a test cluster on gluster 7.2 to run some tests, preparing
to see if we gain confidence to put this on the 5,120-node
supercomputer instead of gluster 4.1.6.
I started with a
Closing the loop in case someone does a search on this...
I have an update. I am getting some time on 1,000 nodes soon, so I have
started to validate jumping to gluster 7.2 on my small lab machine.
I switched the packages to my own build of gluster 7.2 with gnfs.
I re-installed my leader node (glus
Hello list,
I have been running a geo-replication session for some time now, but at
some point I noticed that the /var/lib/misc/gluster directory is eating
up the storage on my root partition.
I moved the folder away to another partition, but I don't seem to
remember reading any specific space requirements
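For anyone hitting the same issue: by default geo-replication keeps its per-session working files (changelog processing state, status files) under /var/lib/misc/gluster. A rough sketch for sizing it up follows; the relocation step is commented out and hedged, because the `working_dir` option name and session syntax should be verified against the `config` output on your gluster version.

```shell
# Sketch: measure what geo-replication keeps under /var/lib/misc/gluster
# (run on the primary/master side; paths and sizes differ per setup).
du -sh /var/lib/misc/gluster/ 2>/dev/null          # total footprint
du -sh /var/lib/misc/gluster/gsyncd/* 2>/dev/null  # per-session working dirs

# Hypothetical relocation (option name assumed; stop the session first and
# check `gluster volume geo-replication ... config` on your version):
# gluster volume geo-replication MASTERVOL user@slavehost::SLAVEVOL stop
# gluster volume geo-replication MASTERVOL user@slavehost::SLAVEVOL \
#     config working_dir /data/gluster-gsyncd
# gluster volume geo-replication MASTERVOL user@slavehost::SLAVEVOL start
```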
On February 10, 2020 5:32:29 PM GMT+02:00, Matthias Schniedermeyer
wrote:
>On 10.02.20 16:21, Strahil Nikolov wrote:
>> On February 10, 2020 2:25:17 PM GMT+02:00, Matthias Schniedermeyer
> wrote:
>>> Hi
>>>
>>>
>>> I would describe our basic use case for gluster as:
>>> "data-store for a cold-standby application".
Hello Karthik,
thank you very much. That was exactly the problem.
Running the command (cat
/<mountpoint>/.meta/graphs/active/<volname>-client-*/private | egrep -i
'connected') on the clients revealed that a few were not connected to
all bricks.
After restarting them, everything went back to normal.
Regards,
Ulrich
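For reference, the check Ulrich describes can be looped over all the client translators of a mount at once. `/mnt/glustervol` and `myvol` are placeholders for the mount point and volume name; run this on each client.

```shell
# Sketch of the per-client brick connectivity check from this thread.
MOUNT=/mnt/glustervol   # placeholder: your fuse mount point
VOL=myvol               # placeholder: your volume name
for f in "$MOUNT"/.meta/graphs/active/"$VOL"-client-*/private; do
    # Print which client translator (i.e. which brick) each line refers to.
    printf '%s: ' "$(basename "$(dirname "$f")")"
    grep -i 'connected' "$f"
done
```

Any brick reporting not connected is a candidate for the client remount/restart that fixed things here.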
On February 10, 2020 3:53:08 PM GMT+02:00, Alberto Bengoa
wrote:
>Hello guys,
>
>We are running GlusterFS 6.6 in Replicate mode (1 x 3). After a
>split-brain
>and a massive heal process, we noticed that our app started to receive
>thousands of permission-denied errors while trying to access files and
>directories.
On 10.02.20 16:21, Strahil Nikolov wrote:
On February 10, 2020 2:25:17 PM GMT+02:00, Matthias Schniedermeyer
wrote:
Hi
I would describe our basic use case for gluster as:
"data-store for a cold-standby application".
A specific application is installed on 2 hardware machines, the data is
kept in-sync between the 2 machines by a replica-2 gluster volume.
Hello guys,
We are running GlusterFS 6.6 in Replicate mode (1 x 3). After a split-brain
and a massive heal process, we noticed that our app started to receive
thousands of permission-denied errors while trying to access files and
directories.
Example log of a failed access attempt to a specific directory
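A sketch of the usual heal-state inspection commands for a situation like this ("myvol" is a placeholder volume name); the brick-level `stat` cross-check at the end is a suggestion, not something from the thread.

```shell
# Sketch: inspect heal status after a split-brain on volume "myvol".
gluster volume heal myvol info               # entries still pending heal
gluster volume heal myvol info split-brain   # entries currently in split-brain
gluster volume heal myvol statistics heal-count

# Permission-denied errors after a big heal are worth cross-checking
# against the actual mode/owner of the file on each brick, e.g.:
# stat /bricks/brick1/path/to/file    # run on every replica node
```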
I have:
root@cluster-02:~# ls /var/lib/glusterd/groups
db-workload gluster-block metadata-cache nl-cache virt
and root@cluster-02:~# apt-file list glusterfs-server gives:
...
glusterfs-server: /var/lib/glusterd/groups/db-workload
glusterfs-server: /var/lib/glusterd/groups/gluster-block
gluster
Hi
I would describe our basic use case for gluster as:
"data-store for a cold-standby application".
A specific application is installed on 2 hardware machines, the data is
kept in-sync between the 2 machines by a replica-2 gluster volume.
(IOW: "RAID 1")
At any one time only 1 machine has the
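The layout described above would be created roughly like this (host names, volume name, and brick paths are placeholders); the arbiter variant is a common hedge against the split-brain risk inherent in plain replica 2.

```shell
# Sketch of the replica-2 ("RAID 1") layout described above.
gluster volume create appdata replica 2 \
    node1:/bricks/appdata node2:/bricks/appdata
gluster volume start appdata

# A plain replica 2 can split-brain if the nodes lose contact; an arbiter
# brick on a third machine avoids that at little storage cost:
# gluster volume create appdata replica 3 arbiter 1 \
#     node1:/bricks/appdata node2:/bricks/appdata node3:/bricks/arbiter
```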
On Sun, 2020-02-09 at 15:44 +0100, Stefan Kania wrote:
>
> Am 08.02.20 um 11:33 schrieb Anoop C S:
> > # gluster volume set <volname> group samba
> When I tried to set the option, I got the following error:
> Unable to open file '/var/lib/glusterd/groups/samba'. Error: No such
> file or directory
Do you have
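A sketch of the diagnosis for a missing group file, using the Debian-style tools already shown in this thread ("myvol" is a placeholder volume name). If the packaged version simply doesn't ship a "samba" group, note that these files are plain lists of volume options and can be created by hand.

```shell
# Sketch: check whether the package ships the group file the CLI wants.
ls /var/lib/glusterd/groups/
apt-file list glusterfs-server | grep groups/samba

# If the file is shipped by the package but missing locally, reinstalling
# should restore it:
# apt-get install --reinstall glusterfs-server

# Once the file exists, the original command should work:
# gluster volume set myvol group samba
```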