Hello,
I have the following problem:
If I reboot a node in a Gluster cluster on a Debian or Ubuntu system,
the FUSE mount does not come up if I just put the mount options into
/etc/fstab. For this reason I wrote a systemd unit, as follows:
--
[Unit]
Description = Data dir
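A common fstab-only alternative that survives reboots is to let systemd
manage the entry as an automount; a sketch, using the server and volume
names from the unit shown in full later in the thread:
--
knoten-1:/gv1  /glusterfs  glusterfs  defaults,acl,_netdev,noauto,x-systemd.automount  0  0
--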
Hello,
I just installed Gluster version 4.1.1 from the gluster.org repository.
I tested the snapshot function and now I'm having the following problem:
When I do a "gluster volume info" BEFORE the snapshot I get:
--
root@sambabuch-c2:~# gluster snapshot create snap1 gv1
tion as 1*3. If we use the same path, then the
> third path is not in our gluster space.
>
> To avoid this scenario, when a snapshot is restored, we use the same
> snapshot bricks, i.e. the volume bricks are made to point to the
> snapshot bricks.
>
>
> Regards
>
> Rafi KC
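A sketch of the restore flow described here, using the volume and
snapshot names from this thread (gv1, snap1); the volume must be stopped
for the restore:
--
gluster volume stop gv1
gluster snapshot restore snap1
gluster volume start gv1
gluster volume info gv1   # brick paths now point at the snapshot bricks under /run/gluster/snaps/
--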
No one uses gluster 4.1.1 with snapshots?
On 10.07.18 at 14:11, Stefan Kania wrote:
> Hello,
>
> I just installed Gluster version 4.1.1 from the gluster.org repository.
> I tested the snapshot function and now I'm having the following problem:
>
> When I do a "glu
I wrote my own systemd mount unit:
--
[Unit]
Description = Data dir
# order after the network and the gluster daemon; Requires= (not "Required=") pulls in the network
After=network-online.target glusterfs-server.service
Requires=network-online.target

[Mount]
What=knoten-1:/gv1
Where=/glusterfs
Type=glusterfs
Options=defaults,acl

[Install]
WantedBy=multi-user.target
--
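Note that systemd only picks up a mount unit whose file name matches its
Where= path, so this one has to be installed as glusterfs.mount:
--
# Where=/glusterfs -> the unit file must be named glusterfs.mount
cp data-dir.mount /etc/systemd/system/glusterfs.mount   # the source file name here is illustrative
systemctl daemon-reload
systemctl enable --now glusterfs.mount
--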
Me again :-)
On 12.12.18 at 14:53, Stefan Kania wrote:
> I have configured geo-replication with a non-privileged user, I used
> this documentation:
I now set up the geo-replication with user root and everything worked.
So it must have something to do with the non-privileged user. An
Hello,
I have configured geo-replication with a non-privileged user, using
this documentation:
https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/
My setup:
(Master)
2-Node Gluster replicated volume with Debian 9 and gluster 5.1 (from
gluster.org)
(Slave)
2-Node Gluster
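For reference, the non-privileged setup in that guide prepares the
secondary side with the mountbroker; a sketch, with group, user, and
volume names as placeholders:
--
# on the secondary (slave) nodes, as root
groupadd geogroup
useradd -m -G geogroup geoaccount
gluster-mountbroker setup /var/mountbroker-root geogroup
gluster-mountbroker add slavevol geoaccount
systemctl restart glusterd
--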
Hello,
A few months ago I read about an option to set most of the Samba
options on a gluster volume. It was something like "samba-group" or so.
Am I right? If yes, can someone please give me the option?
Thank you
Stefan
you please confirm whether it is the case or
> not?
>
> /var/lib/glusterd/groups/samba
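The group file is applied per volume; a sketch (gv1 is a placeholder
volume name):
--
# applies all key=value pairs from /var/lib/glusterd/groups/samba at once
gluster volume set gv1 group samba
--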
cal repository for Debian buster. Am I missing a package? I just
installed glusterfs-server with its dependencies.
On 08.02.20 at 11:33, Anoop C S wrote:
> On Sat, 2020-02-08 at 09:11 +0100, Stefan Kania wrote:
>> Hello,
>>
>> A few months ago I read about an option to set most of the Samba
>> options
>> on a gluster volume. It was something like "samba-group" or s
glusterfs-server 7.2-1 amd64 clustered file-system (server package)
It seems to me the file is missing :-(
On 10.02.20 at 09:53, Anoop C S wrote:
> On Sun, 2020-02-09 at 15:44 +0100, Stefan Kania wrote:
>>
>> On 08.02.20 at 11:33, Anoop C S wrote:
>>> # gluster volume set gro
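If the package really does not ship the file, it can be created by hand
on each node; the options below follow the extras/group-samba file in
the glusterfs sources, so verify them against your version before use:
--
cat > /var/lib/glusterd/groups/samba << 'EOF'
features.cache-invalidation=on
features.cache-invalidation-timeout=600
performance.cache-samba-metadata=on
performance.stat-prefetch=on
performance.md-cache-timeout=600
network.inode-lru-limit=200000
performance.nl-cache=on
performance.nl-cache-timeout=600
performance.readdir-ahead=on
performance.parallel-readdir=on
EOF
--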
Hi to all,
I have a replicated cluster with three nodes, currently running Gluster
6.x. I would like to upgrade to 10.x. After upgrading I would like to
change the disks from HDD to SSD. Can I just remove one of the bricks,
replace the HDD with an SSD, then add the brick back to the volume
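Replacing one brick at a time and waiting for self-heal to finish
between nodes is the usual approach; a sketch with placeholder volume
and brick paths:
--
gluster volume replace-brick gv1 node1:/hdd/brick node1:/ssd/brick commit force
gluster volume heal gv1 info    # wait until no entries remain before touching the next node
--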
Hello,
we have a gluster volume (replica 3), running Gluster 6 on Ubuntu 18.04.
We removed one node and detached the peer:
gluster v remove-brick gv1 replica 2 c3:/gluster/brick force
gluster peer detach c3
We installed Ubuntu 20.04 and Gluster 9.
We then had a volume with two nodes up and running.
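Presumably the reinstalled node is brought back along these lines
(names taken from the commands above; the brick directory must be empty
or a freshly created filesystem):
--
gluster peer probe c3
gluster volume add-brick gv1 replica 3 c3:/gluster/brick
gluster volume heal gv1 info    # self-heal copies the data back onto the new brick
--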
Hi to all,
I would like to replace the operating system on one gluster brick. Is it
possible to keep the data on the brick? If yes, how can I connect the
data partition back to the new brick?
I will remove the brick from the volume and remove the peer from the
pool first. Then set up the
same gluster package version and restore the 2 dirs
from backup.
Of course, a proper backup is always a good idea.
Best Regards,
Strahil Nikolov
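Assuming the "2 dirs" are the usual Gluster state and config
directories, a minimal sketch:
--
# assumption: the two directories are /var/lib/glusterd and /etc/glusterfs
tar czf gluster-state.tar.gz /var/lib/glusterd /etc/glusterfs
# after reinstalling the OS and the same gluster package version:
tar xzf gluster-state.tar.gz -C /
systemctl restart glusterd
--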
Hello,
I have a strange problem on a gluster volume.
If I do an "ls -l" in a directory inside a mounted gluster volume I
see, only for some files, question marks for the permissions, the owner,
the size, and the date.
Looking at the same directory on the brick itself, everything is OK.
After
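Question marks in "ls -l" on a FUSE mount usually point at entries that
still need healing; a first diagnostic step (volume name is a
placeholder):
--
gluster volume heal gv1 info                 # files pending heal
gluster volume heal gv1 info split-brain     # files in split-brain
--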
When will gluster 9.x reach EOL?
Which would be the best version to update to?
Hi to all,
I have a volume where one peer is dead and I now would like to remove
the dead peer, but there are still snapshots on the volume. When I try
to remove the peer I get:
root@glfs1:/run/gluster/snaps# gluster peer detach c-03.heartbeat.net
All clients mounted through the
Fixed:
setting both quorum settings to "none" and restarting glusterd; then I
could remove the snapshots and the peer.
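Presumably the two quorum settings were these (volume name gv1 is a
placeholder; the snapshot and peer commands reuse the names from the
message above):
--
gluster volume set gv1 cluster.server-quorum-type none
gluster volume set gv1 cluster.quorum-type none
systemctl restart glusterd
gluster snapshot delete all                  # or delete the snapshots one by one
gluster peer detach c-03.heartbeat.net
--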
Hi to all,
The doc is telling me:
To guarantee crash consistency some of the fops are blocked during a
snapshot operation.
These fops are blocked till the snapshot is complete. All other fops
are passed through.
---
I could not find which fops are
/blog/gluster-volume-setup-binnacle/
--
Thanks and Regards
Aravinda
Kadalu Technologies
On Wed, 31 Jan 2024 22:01:24 +0530, Stefan Kania wrote ---
Hi Aravinda,
I'm not so into Docker :-( So I just looked at your commands and I saw
that you did exactly the same as I did. I even
---
as expected. Reinstalling rsync and everything is fine again :-). So the
{error=12} came from /bin/sh as the default shell. The missing rsync was
not shown because geo-replication changed to faulty before rsync was used.
Stefan
On 14.02.24 at 13:34, Stefan Kania wrote:
Hi A
Hi Anant,
shame on me ^.^ I forgot to install rsync on that host. Switching to
log-level DEBUG helped me to find the problem. Without log-level DEBUG
the host does not show the missing rsync. Maybe that could be changed.
So thank you for the hint.
Stefan
On 13.02.24 at 20:32,
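The DEBUG switch and the underlying check look roughly like this (volume
names as used elsewhere in this thread, user and host are placeholders,
and the exact config key may vary between versions):
--
gluster volume geo-replication privol01 geoaccount@sec01::secvol01 config log-level DEBUG
# the actual root cause, checked directly on the secondary host:
command -v rsync || apt install rsync
--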
Hi to all,
Yes, I saw that there is a thread about geo-replication with nearly the
same problem. I read it, but I think my problem is a bit different.
I created two volumes, the primary volume "privol01" and the secondary
volume "secvol01". All hosts have the same packages installed,
Hi Aravinda
On 26.01.24 at 17:01, Aravinda wrote:
Does the combined glusterfs.ca include the client node's pem? Also,
this file needs to be placed on the client node as well.
Yes, I put all the gluster-node certificates AND the client certificate
into the glusterfs.ca file. And I put the file on all
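Building and distributing the combined CA file is plain concatenation;
a sketch with placeholder certificate names:
--
# concatenate the certs of all gluster nodes plus the client cert
cat c01.pem c02.pem client.pem > glusterfs.ca
# the result has to end up on every server AND on the client:
cp glusterfs.ca /etc/ssl/glusterfs.ca
--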
On 28.01.24 at 08:44, Strahil Nikolov wrote:
Usually with certificates it's always a pain. I would ask you to regenerate
the certificates, but adding the FQDN of the system and the IP used by the
clients to reach the brick in the 'SANs' section of the cert. Also, set the
validity to 365 days for the
On Mon, 29 Jan 2024 22:10:50 +0530, Stefan Kania wrote ---
Hi Strahil, hi Aravinda
On 28.01.24 at 23:03, Strahil Nikolov wrote:
You didn't specify the IP correctly in the SANs, but I'm not sure if
that's the root cause.
In the SANs section, specify all hosts plus their IPs:
IP.1 = 1.2.3.4
IP.2 = 2.3.4.5
DNS.1 = c01.gluster
DNS.2 = c02.gluster
Ahh OK, I can try it, but I don't think
That's what I did
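One way to get those SANs into a self-signed certificate (requires
OpenSSL 1.1.1 or newer; names and IPs taken from Strahil's example):
--
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout glusterfs.key -out glusterfs.pem \
  -subj "/CN=c01.gluster" \
  -addext "subjectAltName=DNS:c01.gluster,DNS:c02.gluster,IP:1.2.3.4,IP:2.3.4.5"
--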
Hi Varun,
On 23.01.24 at 01:37, Varun wrote:
I'm not sure which doc you are referring to. It would help if you can
share it.
Here,
https://docs.gluster.org/en/main/Administrator-Guide/Managing-Snapshots/#pre-requisites
and all the places where this page is copied to ;-)
Thanks for the
repository.
https://github.com/aravindavk/gluster-tests?tab=readme-ov-file#gluster-tls-tests
--
Aravinda
Kadalu Technologies
On Mon, 29 Jan 2024 22:10:50 +0530, Stefan Kania wrote ---
Hi
Hi to all,
The system is running Debian 12 with Gluster 10. All systems are using
the same versions.
I am trying to encrypt the communication between the peers and the clients
via TLS. The encryption between the peers works, but when I try to mount
the volume on the client I always get an error.
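For comparison, the usual minimal TLS setup looks like this (volume name
and allowed CNs are placeholders; the pem/key/ca files go to /etc/ssl as
discussed above):
--
# on every server and on the client (also enables TLS on the management path):
touch /var/lib/glusterd/secure-access
# per volume:
gluster volume set gv1 client.ssl on
gluster volume set gv1 server.ssl on
gluster volume set gv1 auth.ssl-allow 'c01.gluster,c02.gluster,client1'
--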