Re: [Gluster-users] Performance Questions - not only small files

2021-05-14 Thread Felix Kölzow
Dear Rupert, can you provide the output of gluster volume info volumeName, the xfs_info of your brick mountpoints, and cat /etc/fstab? Regards, Felix Community Meeting Calendar: Schedule - Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC Bridge: https://meet
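
The three diagnostics Felix asks for can be gathered in one pass; a minimal sketch, where the volume name myVolume and the brick mountpoint /gluster/brick1 are placeholders:

```shell
# Volume layout and any reconfigured options
gluster volume info myVolume

# XFS geometry (inode size, allocation groups) of the brick filesystem
xfs_info /gluster/brick1

# Mount options the bricks are mounted with
cat /etc/fstab
```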

Re: [Gluster-users] [EXT] upgrading OS on gluster servers

2021-05-12 Thread Felix Kölzow
Dear Stefan, what is your expectation regarding upgrading the OS to CentOS 8 Stream? Are there any advantages? Regards, Felix On 12/05/2021 12:02, Stefan Solbrig wrote: Hi Strahil, Thank you for the quick answer!  Sorry, I have to ask again: as far as I can see, Gluster keeps all information a

Re: [Gluster-users] Slow performance over rsync in Replicated-Distributed Setup

2021-03-06 Thread Felix Kölzow
Dear Shubhank, small-file performance is usually slow on GlusterFS. Can you provide more details about your setup (zfs settings, bonding, tuned-adm profile, etc.)? From a gluster point of view, setting performance.write-behind-window-size to 128MB increases performance.
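
The tuning mentioned in the message is applied per volume; a sketch, where the volume name myVolume is a placeholder and 128MB follows the message's suggestion:

```shell
# Enlarge the write-behind buffer so many small writes get batched
# before hitting the bricks (the default is much smaller)
gluster volume set myVolume performance.write-behind-window-size 128MB

# Verify the reconfigured option
gluster volume get myVolume performance.write-behind-window-size
```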

Re: [Gluster-users] no progress in geo-replication

2021-03-03 Thread Felix Kölzow
Dear Dietmar, I am very interested in helping you with that geo-replication, since we also have a setup where geo-replication is crucial for the backup procedure. I just had a quick look at this, and for the moment I can only suggest: is there any suitable setting in the gluster-environme

Re: [Gluster-users] Geo-replication status Faulty

2020-10-27 Thread Felix Kölzow
Dear Gilberto, if I am right, you ran into server-quorum because you started a 2-node replica and shut down one host. From my perspective, it's fine. Please correct me if I am wrong here. Regards, Felix On 27/10/2020 01:46, Gilberto Nunes wrote: Well, I did not reboot the host. I shut down the ho

[Gluster-users] Explanation of gluster vol geo-repliation options

2020-10-07 Thread Felix Kölzow
Dear Community, currently I am looking for a more detailed explanation of all geo-replication options, which can be obtained by the command: gluster volume geo-replication master-vol slave-node::slave-volume config I only found https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20
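
The same command also reads and writes individual options; a sketch reusing the message's own naming (master-vol, slave-node::slave-volume), with sync-jobs as one example option whose exact spelling may differ between releases:

```shell
# Dump every geo-replication config option and its current value
gluster volume geo-replication master-vol slave-node::slave-volume config

# Read a single option (number of parallel sync jobs; may be spelled
# sync_jobs on older releases)
gluster volume geo-replication master-vol slave-node::slave-volume config sync-jobs

# Set the same option to a new value
gluster volume geo-replication master-vol slave-node::slave-volume config sync-jobs 4
```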

Re: [Gluster-users] Gluster7 GeoReplication Operation not permitted and incomplete sync

2020-10-05 Thread Felix Kölzow
ctoria, UH1 PO Box 1800, STN CSC Victoria, BC, V8W 2Y2 Phone: +1-250-721-8432 Email: matth...@uvic.ca On 10/5/20 12:53 PM, Felix Kölzow wrote: Dear Matthew, this is our configuration: zfs get all mypool mypool  xattr sa  local mypool  aclty

Re: [Gluster-users] Gluster7 GeoReplication Operation not permitted and incomplete sync

2020-10-05 Thread Felix Kölzow
ty of Victoria, UH1 PO Box 1800, STN CSC Victoria, BC, V8W 2Y2 Phone: +1-250-721-8432 Email: matth...@uvic.ca On 10/5/20 1:39 AM, Felix Kölzow wrote: Dear Matthew, can you provide more information regarding the geo-replication brick logs? These files are also located in: /var/log

Re: [Gluster-users] Gluster7 GeoReplication Operation not permitted and incomplete sync

2020-10-05 Thread Felix Kölzow
Dear Matthew, can you provide more information regarding the geo-replication brick logs? These files are also located in: /var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/ Usually, these log files are more precise for figuring out the root cause of the error. Additionally
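
The per-brick worker logs in that session directory can be scanned in one go; a minimal sketch using the directory from the message (the grep pattern and filenames inside it are assumptions):

```shell
# Each brick of the master volume gets its own worker log in the
# geo-replication session directory
ls -l /var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/

# Pull the most recent error lines across all worker logs
grep -iE 'error|traceback|faulty' \
  /var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/*.log | tail -n 20
```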

Re: [Gluster-users] How to fix I/O error ? (resend)

2020-08-28 Thread Felix Kölzow
Dear Diego, I faced a similar issue on gluster 6.0 and was able to resolve it (at least in my case). Observation: I found a directory where a simple ls led to an input/output error. I cd'd into the corresponding directory on the brick, ran ls, and it worked. I got a list of all

Re: [Gluster-users] Reliable Geo-Replication

2020-07-13 Thread Felix Kölzow
, Felix On 09/07/2020 13:29, Shwetha Acharya wrote: Hi Felix, Find my reply inline. Regards, Shwetha On Thu, Jun 25, 2020 at 12:25 PM Felix Kölzow mailto:felix.koel...@gmx.de>> wrote: Dear Gluster-users, I deleted a further the geo-replication session with [reset-sync-time]

Re: [Gluster-users] Geo-replication completely broken

2020-07-03 Thread Felix Kölzow
, Please share the *-changes.log files and brick logs, which will help in analysis of the issue. Regards, Shwetha On Thu, Jun 25, 2020 at 1:26 PM Felix Kölzow mailto:felix.koel...@gmx.de>> wrote: Hey Rob, same issue for our third volume. Have a look at the logs just from rig

Re: [Gluster-users] volume process does not start - glusterfs is happy with it?

2020-07-01 Thread Felix Kölzow
Hey, what about the device mapper? Was everything mounted properly during the reboot? This happened to me when the LVM device mapper got a timeout during the reboot process while mounting the brick itself. Regards, Felix On 01/07/2020 16:46, lejeczek wrote: On 30/06/2020 11:31, Barak Sason Rofman w

Re: [Gluster-users] Latest NFS-Ganesha Gluster Integration docs

2020-06-30 Thread Felix Kölzow
Dear Users, on this list I keep seeing comments that VM performance is better on NFS, and a general dissatisfaction with FUSE. So we are looking to see for ourselves if NFS would be an improvement. Can anyone provide information (performance tests) on how big the improvement is using ganes

Re: [Gluster-users] Geo-replication completely broken

2020-06-25 Thread Felix Kölzow
Hey Rob, same issue for our third volume. Have a look at the logs from just now (below). Question: you removed the htime files and the old changelogs. Did you just rm the files, or is there something to pay attention to before removing the changelog files and the htime file? Regards, Felix [

Re: [Gluster-users] Reliable Geo-Replication

2020-06-24 Thread Felix Kölzow
Dear Gluster-users, I deleted a further geo-replication session with the [reset-sync-time] option. Afterwards, I recreated the session and, as expected, the session starts in the hybrid crawl. I can see some sync jobs running in the gsyncd.log file, and after a couple of hours there are no su

Re: [Gluster-users] Reliable Geo-Replication

2020-06-22 Thread Felix Kölzow
elog crawl? Regards, Felix On 22/06/2020 13:11, Shwetha Acharya wrote: Hi Felix, File path is the path from the mount point. Need not include any other options. Regards, Shwetha On Mon, Jun 22, 2020 at 3:15 PM Felix Kölzow wrote: Dear Shwetha, > O

Re: [Gluster-users] Reliable Geo-Replication

2020-06-22 Thread Felix Kölzow
I got the error "operation not supported", so I was somehow confused. Now it worked, and On 22/06/2020 13:11, Shwetha Acharya wrote: Hi Felix, File path is the path from the mount point. Need not include any other options. Regards, Shwetha On Mon, Jun 22, 2020 at 3:15 PM Felix Kölz

Re: [Gluster-users] Reliable Geo-Replication

2020-06-22 Thread Felix Kölzow
Dear Shwetha, One more alternative would be to trigger sync on individual files: # setfattr -n glusterfs.geo-rep.trigger-sync -v "1" So, how do I do this exactly, and what is the path here? Is it a gluster mount point with certain mount options, or is this the brick path? Furthermore, does it work for direc
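
Shwetha's reply later in this thread says the path is relative to the volume mount point; a sketch, with /mnt/master as a hypothetical master-volume mount:

```shell
# Ask geo-replication to resync a single file; the argument is the file
# as seen on the mounted master volume, not the brick path
setfattr -n glusterfs.geo-rep.trigger-sync -v "1" /mnt/master/path/to/file
```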

Re: [Gluster-users] Reliable Geo-Replication

2020-06-22 Thread Felix Kölzow
, 2020 at 12:19 PM Felix Kölzow mailto:felix.koel...@gmx.de>> wrote: > Would it be possible for you to provide more specifics about the > issues you see and the release in which these are seen? Some files on the slave side still exist, while they are deleted on the

Re: [Gluster-users] Reliable Geo-Replication

2020-06-21 Thread Felix Kölzow
Would it be possible for you to provide more specifics about the issues you see and the release in which these are seen? Some files still exist on the slave side even though they were deleted on the master side many months ago. Due to another issue, I would like to initiate a re-sync just to assure th

[Gluster-users] Reliable Geo-Replication

2020-06-19 Thread Felix Kölzow
Dear Gluster-Users, as I am seeing some issues with geo-replication, I would like to collect some more user experience and workarounds. Some workarounds are already documented, but they seem not to be detailed enough. So I would like to know how reliable your geo-rep

Re: [Gluster-users] Readdirp (ls -l) Performance Improvement

2020-05-27 Thread Felix Kölzow
Dear Rafi KC, let's suppose I am going to spend some time on testing. How would I install glusterfs-server including your feature? Maybe this is an easy procedure, but I am not familiar with it. Regards, Felix On 27/05/2020 07:56, RAFI KC wrote: Hi All, I have been working on a POC to

Re: [Gluster-users] Readdirp (ls -l) Performance Improvement

2020-05-27 Thread Felix Kölzow
Dear Rafi, thanks for your effort. I think this is of great interest to many gluster users. Thus, I would really encourage you to test and further improve this feature. Maybe it would be beneficial to create a guideline for which things should be tested to make this feature really ready for p

Re: [Gluster-users] Gluster on top of xfs inode size 1024

2020-05-11 Thread Felix Kölzow
Dear Strahil, thanks for the hint. I found some references. Additionally, I would be glad to hear about some experiences from other users. Regards, Felix On 12/05/2020 06:30, Strahil Nikolov wrote: On May 11, 2020 11:39:48 PM GMT+03:00, "Felix Kölzow" wrote: Dear List, we ar

[Gluster-users] Gluster on top of xfs inode size 1024

2020-05-11 Thread Felix Kölzow
Dear List, we are planning to use gluster on top of XFS bricks using an inode size of 1024, since it turns out that an inode size of 512 is not sufficient in our case without allocating additional blocks. Is anyone running the same setup, or has anyone had issues with it? Regards, Felix
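
The inode size is fixed at format time; a sketch, where the device /dev/vg_gluster/brick1 and the mountpoint /gluster/brick1 are placeholders:

```shell
# Format the brick device with 1024-byte inodes so gluster's xattrs
# (and posix ACLs) fit inline without extra block allocations
mkfs.xfs -f -i size=1024 /dev/vg_gluster/brick1

# Confirm the inode size on the mounted filesystem afterwards
xfs_info /gluster/brick1 | grep isize
```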

Re: [Gluster-users] MTU 9000 question

2020-05-06 Thread Felix Kölzow
Dear List, same question and same setup from my side. Regards, Felix On 06/05/2020 16:09, Erik Jacobson wrote: It is inconvenient for us to use MTU 9K for our gluster servers for various reasons. We typically have bonded 10G interfaces. We use distribute/replicate and gluster NFS for compu

Re: [Gluster-users] Extremely slow file listing in folders with many files

2020-04-30 Thread Felix Kölzow
Dear Artem, sorry for the noise, since you already provided the xfs_info. Could you provide the output of getfattr -d -m. -e hex /DirectoryPathOfInterest_onTheBrick/ Felix On 30/04/2020 18:01, Felix Kölzow wrote: Dear Artem, can you also provide some information w.r.t. your xfs filesystem

Re: [Gluster-users] Extremely slow file listing in folders with many files

2020-04-30 Thread Felix Kölzow
Dear Artem, can you also provide some information w.r.t. your XFS filesystem, i.e. the xfs_info of your block device? Regards, Felix On 30/04/2020 17:27, Artem Russakovskii wrote: Hi Strahil, in the original email I included both the times for the first and subsequent reads on the fuse mounted gl

[Gluster-users] gluster geo-replication to gluster volume on top of zfs: no posix-axls replicated

2020-04-06 Thread Felix Kölzow
Dear List, we currently have a geo-replication setup that replicates different volumes to a server providing corresponding gluster volumes on top of zfsOnLinux. Unfortunately, the posix acls are not replicated to zfs. I am able to set the acls manually, but they are not transferred. Glust

Re: [Gluster-users] cannot remove empty directory on gluster file system

2020-03-25 Thread Felix Kölzow
Dear Mauro, I also faced this issue several times, even on a host with a single brick and a single volume. The solution for me was: I figured out the leftover file names and directory names in the brick directory. Let's suppose the file name is hiddenFile and the directory is hiddenDirectory. Afterwards, go to th
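
Cleanups like this usually also involve the matching gfid entry under the brick's .glusterfs directory; a sketch under stated assumptions (brick at /gluster/brick1, names from the message, gfid value hypothetical), not a definitive procedure:

```shell
# Read the gfid of the leftover entry directly on the brick
getfattr -n trusted.gfid -e hex /gluster/brick1/some/dir/hiddenFile

# A gfid of 0xaabbcc... maps to .glusterfs/aa/bb/aabbcc-...; remove
# that hardlink first, then the visible entry itself
rm /gluster/brick1/.glusterfs/aa/bb/aabbcc-rest-of-gfid
rm /gluster/brick1/some/dir/hiddenFile
```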

[Gluster-users] Failed to synchronize cache for repo 'centos-gluster6'

2020-03-24 Thread Felix Kölzow
Dear Gluster-Community, I would like to install a specific glusterfs client on CentOS 8, and I am wondering if I am doing something wrong here, since I observe this error message: dnf install centos-release-gluster6 -y dnf install glusterfs-fuse Error: Failed to synchronize cache for repo 'cent
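
A few checks that narrow down such a repo-cache failure; a sketch, in which the .repo file name is an assumption that may differ per release:

```shell
# Which gluster repos are defined, and which are enabled?
dnf repolist --all | grep -i gluster

# Inspect the baseurl/mirrorlist the failing repo points at
cat /etc/yum.repos.d/CentOS-Gluster-6.repo

# Retry against that repo alone with a clean metadata cache
dnf clean all
dnf --disablerepo='*' --enablerepo='centos-gluster6' makecache
```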

Re: [Gluster-users] Gluster Performance Issues

2020-03-10 Thread Felix Kölzow
ry 20, 2020 10:39:47 PM GMT+02:00, "Felix Kölzow" wrote: Dear Gluster-Experts, we created a three-node setup with two bricks each and a dispersed volume that can be accessed via the native client (glusterfs --version = 6.0). The nodes are connected via 10Gbps (cat6,bonding m

Re: [Gluster-users] dispersed volume + cifs export does not work (replicated + cifs works fine)

2020-03-10 Thread Felix Kölzow
Setting performance.stat-prefetch to on solved that issue. Reference: https://access.redhat.com/solutions/4558341 On 20/10/2019 18:26, Felix Kölzow wrote: Dear Gluster-Users, short story: Two volumes are exported via smb/cifs with (almost) the same configuration with respect to smb.conf. The

Re: [Gluster-users] writing to fuse device failed: No such file or directory

2020-03-02 Thread Felix Kölzow
. Regards, Felix On 02/03/2020 23:25, Strahil Nikolov wrote: Hi Felix, can you test /on a non-prod system/ the latest minor version of gluster v6? Best Regards, Strahil Nikolov On Monday, 2 March 2020, 21:43:48 GMT+2, Felix Kölzow wrote: Dear Community, this message appears

Re: [Gluster-users] writing to fuse device failed: No such file or directory

2020-03-02 Thread Felix Kölzow
Dear Community, this message appears for me too, on GlusterFS 6.0. Before that we had GlusterFS 3.12, and the client log file was almost empty. After upgrading to 6.0 we are facing these log entries. Regards, Felix On 02/03/2020 15:17, mabi wrote: Hello, On the FUSE clients of my GlusterFS 5

[Gluster-users] Gluster Performance Issues

2020-02-20 Thread Felix Kölzow
Dear Gluster-Experts, we created a three-node setup with two bricks each and a dispersed volume that can be accessed via the native client (glusterfs --version = 6.0). The nodes are connected via 10Gbps (cat6, bonding mode 6). If we run a performance test using the smallfile benchmark, t

Re: [Gluster-users] Low performance of Gluster

2020-02-19 Thread Felix Kölzow
Dear Mark, we also have a gluster 3-node setup, with a replica-3 volume exported via cifs and two dispersed volumes exported via cifs and fuse. As already known, the small-file performance could be better; large-file performance is ok. We have similar volume options reconfig

[Gluster-users] Possible to Export Dispersed Volume via SMB/CIFS

2019-11-01 Thread Felix Kölzow
Dear Gluster-Community, at this time I just have a short question. Who among you shares a dispersed gluster volume via cifs+smb using vfs_object = glusterfs? I have been struggling with this setup for two weeks, and I would like to know whether this is possible or not. Maybe someone can share h

[Gluster-users] dispersed volume + cifs export does not work (replicated + cifs works fine)

2019-10-20 Thread Felix Kölzow
Dear Gluster-Users, short story: Two volumes are exported via smb/cifs with (almost) the same configuration with respect to smb.conf. The replicated volume is easily accessible via cifs and fuse. The dispersed volume is accessible via fuse, but not via cifs. Error message fro

[Gluster-users] Share Experience with Productive Gluster Setup

2019-07-18 Thread Felix Kölzow
Dear Gluster-Community, we are trying to implement a gluster setup in a productive environment. During the development process, we found this nice thread regarding gluster storage experience: https://forums.overclockers.com.au/threads/glusterfs-800tb-and-growing.1078674/ This thread seems to b

[Gluster-users] Replica 3: Client access via FUSE failed if two bricks are down

2019-04-12 Thread Felix Kölzow
Dear Gluster-Community, I created a test environment to test a gluster volume with replica 3. Afterwards, I am able to manually mount the gluster volume using FUSE. Mount command: mount -t glusterfs -o backup-volfile-servers=gluster01:gluster02 gluster00:/ifwFuse /mnt/glusterfs/ifwFuse Ju
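
The same mount can be made persistent; a sketch reusing the hostnames from the message (note that the backup servers are only consulted while fetching the volfile at mount time):

```shell
# One-shot mount with volfile-server fallback
mount -t glusterfs -o backup-volfile-servers=gluster01:gluster02 \
  gluster00:/ifwFuse /mnt/glusterfs/ifwFuse

# Equivalent /etc/fstab line:
# gluster00:/ifwFuse  /mnt/glusterfs/ifwFuse  glusterfs  defaults,_netdev,backup-volfile-servers=gluster01:gluster02  0 0
```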

Re: [Gluster-users] Gluster and LVM

2019-04-08 Thread Felix Kölzow
Thank you very much for your response. I fully agree that using LVM has great advantages. Maybe there is a misunderstanding, but I really got the recommendation not to use (plain) LVM in combination with gluster to grow the volume. Maybe someone in the community has some good or bad exper