Dear Rupert,
can you provide the output of
gluster volume info volumeName
and in addition the
xfs_info of your brick mount points
and furthermore
cat /etc/fstab
Regards,
Felix
Dear Stefan,
what do you expect from upgrading the OS to CentOS 8 Stream?
Are there any advantages?
Regards,
Felix
On 12/05/2021 12:02, Stefan Solbrig wrote:
Hi Strahil,
Thank you for the quick answer! Sorry I have to ask again: as far as
I can see, Gluster keeps all information a
Dear Shubhank,
small-file performance on glusterfs usually appears to be slow.
Can you provide more details about your setup (zfs settings,
bonding, tuned-adm profile, etc.)?
From a gluster point of view, setting performance.write-behind-window
to 128MB increases performance.
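For illustration, a minimal sketch of how such a change could be applied,
assuming the option meant here is performance.write-behind-window-size and
that myvol is just a placeholder volume name:
# set the write-behind window to 128MB on a placeholder volume
gluster volume set myvol performance.write-behind-window-size 128MB
# verify the value that is now in effect
gluster volume get myvol performance.write-behind-window-size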
Dear Dietmar,
I am very interested in helping you with that geo-replication, since we
also have a setup with geo-replication that is crucial for the
backup procedure. I just had a quick look at this and, for the moment, I
can only suggest:
is there any suitable setting in the gluster-environme
Dear Gilberto,
If I am right, you ran into the server quorum when you started a 2-node
replica and shut down one host.
From my perspective, it's fine.
Please correct me if I am wrong here.
Regards,
Felix
On 27/10/2020 01:46, Gilberto Nunes wrote:
Well I do not reboot the host. I shut down the ho
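As a hedged sketch of how the quorum settings involved here could be
inspected and adjusted (myvol and the 51% ratio are only placeholders):
# show the server-quorum type configured for the volume
gluster volume get myvol cluster.server-quorum-type
# the cluster-wide quorum ratio is set on "all", for example:
gluster volume set all cluster.server-quorum-ratio 51%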
Dear Community,
actually, I am looking for a more detailed explanation of all
geo-replication options, which can be listed with the command:
gluster volume geo-replication master-vol slave-node::slave-volume config
I only found
https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20
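For reference, a small sketch of listing and changing a single
geo-replication option for that session (volume and host names are the
placeholders from above; the checkpoint option is just one example from
the admin guide):
# list all config options of the session
gluster volume geo-replication master-vol slave-node::slave-volume config
# set a checkpoint as an example of changing one option
gluster volume geo-replication master-vol slave-node::slave-volume config checkpoint now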
On 10/5/20 12:53 PM, Felix Kölzow wrote:
Dear Matthew,
this is our configuration:
zfs get all mypool
mypool xattr sa local
mypool aclty
Dear Matthew,
can you provide more information regarding the geo-replication brick
logs?
These files are also located in:
/var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/
Usually, these log files are more helpful for figuring out the root cause
of the error.
Additionally
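A minimal sketch for scanning those logs (using the example session
directory from above; the search patterns are only a starting point):
# search the geo-replication logs of this session for errors
grep -iE "error|traceback" \
  /var/log/glusterfs/geo-replication/storage_10.0.231.81_pcic-backup/*.log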
Dear Diego,
I faced a similar issue on gluster 6.0 and I was able to resolve it (at
least in my case).
Observation:
I found a directory where a simple ls led to an input/output error.
I cd'd into the corresponding directory on the brick, ran an ls
command, and it worked.
I got a list of all
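For comparison, a hedged sketch of how such a directory can be inspected
directly on a brick (the brick path and directory name are placeholders):
# list the directory on the brick and dump its extended attributes
ls -la /gluster/brick1/problem-directory
getfattr -d -m . -e hex /gluster/brick1/problem-directory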
,
Felix
On 09/07/2020 13:29, Shwetha Acharya wrote:
Hi Felix,
Find my reply inline.
Regards,
Shwetha
On Thu, Jun 25, 2020 at 12:25 PM Felix Kölzow <felix.koel...@gmx.de> wrote:
Dear Gluster-users,
I deleted a further geo-replication session with the [reset-sync-time]
,
Please share the *-changes.log files and brick logs, which will help
in analysis of the issue.
Regards,
Shwetha
On Thu, Jun 25, 2020 at 1:26 PM Felix Kölzow <felix.koel...@gmx.de> wrote:
Hey Rob,
same issue for our third volume. Have a look at the logs just from
right now (below).
Hey,
what about the device mapper? Was everything mounted properly during the reboot?
It happened to me when the lvm device mapper got a timeout during the reboot
process while mounting the brick itself.
Regards,
Felix
On 01/07/2020 16:46, lejeczek wrote:
On 30/06/2020 11:31, Barak Sason Rofman w
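A small sketch of the checks meant here, assuming /gluster/brick1 as a
placeholder brick mount point:
# is the brick filesystem actually mounted?
findmnt /gluster/brick1
# look for device-mapper or mount timeouts from the current boot
journalctl -b | grep -iE "device-mapper|timed out"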
Dear Users,
On this list I keep on seeing comments that VM performance is better
on NFS and a general dissatisfaction with Fuse. So we are looking to
see for ourselves if NFS would be an improvement
Can anyone provide information (performance tests) on how big the
improvement is using ganes
Hey Rob,
same issue for our third volume. Have a look at the logs just from right
now (below).
Question: you removed the htime files and the old changelogs. Did you just rm
the files, or is there something to pay more attention to
before removing the changelog files and the htime file?
Regards,
Felix
[
Dear Gluster-users,
I deleted a further geo-replication session with the [reset-sync-time]
option. Afterwards,
I recreated the session, and as expected, the session starts in the
hybrid crawl.
I can see some sync jobs are running in the gsyncd.log file and after a
couple of hours,
there are no su
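For reference, a hedged sketch of the delete/recreate cycle described
above (volume and host names are placeholders):
# drop the old session including its sync time, then recreate and restart it
gluster volume geo-replication master-vol slave-node::slave-volume delete reset-sync-time
gluster volume geo-replication master-vol slave-node::slave-volume create push-pem force
gluster volume geo-replication master-vol slave-node::slave-volume start
gluster volume geo-replication master-vol slave-node::slave-volume status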
elog crawl?
Regards,
Felix
On 22/06/2020 13:11, Shwetha Acharya wrote:
Hi Felix,
File path is the path from mount point. Need not include any other
options.
Regards,
Shwetha
On Mon, Jun 22, 2020 at 3:15 PM Felix Kölzow <felix.koel...@gmx.de> wrote:
Dear Shwetha,
> O
I got the error "operation not supported" so I was somehow
confused.
Now, it worked and
Dear Shwetha,
One more alternative would be to trigger sync on individual files,
# setfattr -n glusterfs.geo-rep.trigger-sync -v "1" <file-path>
So, how to do this exactly, and what is <file-path>? Is it a gluster mount
point with certain mount options
or is this the brick path? Furthermore, does it work for direc
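A minimal sketch of how this is presumably meant, i.e. running setfattr
on a path below a fuse mount of the master volume (host, volume and file
path are placeholders):
# mount the master volume via fuse and trigger a sync for a single file
mount -t glusterfs master-node:/master-vol /mnt/master-vol
setfattr -n glusterfs.geo-rep.trigger-sync -v "1" /mnt/master-vol/path/to/file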
Would it be possible for you to provide more specifics about the
issues you see and the release in which these are seen?
Some files on the slave side still exist, while they were deleted on the
master side many months ago. Due to
another issue, I would like to initiate a re-sync just to assure th
Dear Gluster-Users,
as I am seeing some issues with the geo-replication, I would like to
collect some more user experience and workarounds for when issues occur.
Some workarounds are already documented, but they do not seem
detailed enough.
So I would like to know how reliable is your geo-rep
Dear Rafi KC,
let's suppose I am going to spend some time on testing. How would I install
glusterfs-server including your feature?
Maybe this is an easy procedure, but I am actually not familiar with it.
Regards,
Felix
On 27/05/2020 07:56, RAFI KC wrote:
Hi All,
I have been working on POC to
Dear Rafi,
thanks for your effort. I think this is of great interest to many
gluster users. Thus, I would really encourage you to
test and further improve this feature. Maybe it is beneficial to
create a guideline on which things should be tested
to make this feature really ready for p
Dear Strahil,
thanks for the hint. I found some references.
Additionally, I would be glad to hear about some experiences from other
users.
Regards,
Felix
On 12/05/2020 06:30, Strahil Nikolov wrote:
On May 11, 2020 11:39:48 PM GMT+03:00, "Felix Kölzow"
wrote:
Dear List,
we ar
Dear List,
we are planning to use gluster on top of xfs bricks with an inode size
of 1024, since it turned out
that an inode size of 512 is not sufficient in our case without
allocating additional blocks.
Is anyone running the same setup, or does anyone have issues with it?
Regards,
Felix
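For illustration, a hedged sketch of creating such a brick filesystem
(device and mount point are placeholders):
# create the brick filesystem with 1024-byte inodes, mount it and verify
mkfs.xfs -f -i size=1024 /dev/vg_bricks/brick1
mount /dev/vg_bricks/brick1 /gluster/brick1
xfs_info /gluster/brick1 | grep isize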
Dear List,
same question and same setup from my side.
Regards,
Felix
On 06/05/2020 16:09, Erik Jacobson wrote:
It is inconvenient for us to use MTU 9K for our gluster servers for
various reasons. We typically have bonded 10G interfaces.
We use distribute/replicate and gluster NFS for compu
Dear Artem,
sorry for the noise, since you already provided the xfs_info.
Could you provide the output of
getfattr -d -m. -e hex /DirectoryPathOfInterest_onTheBrick/
Felix
On 30/04/2020 18:01, Felix Kölzow wrote:
Dear Artem,
can you also provide some information w.r.t your xfs filesystem
Dear Artem,
can you also provide some information w.r.t your xfs filesystem, i.e.
xfs_info of your block device?
Regards,
Felix
On 30/04/2020 17:27, Artem Russakovskii wrote:
Hi Strahil, in the original email I included both the times for the
first and subsequent reads on the fuse mounted gl
Dear List,
we actually have a geo-replication setup that replicates different
volumes to
a server that provides corresponding gluster volumes on top of zfsOnLinux.
Unfortunately, the posix acls are not replicated to zfs. I am able to
set the acls manually, but they
are not transferred. Glust
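For context, a hedged sketch of the zfs properties that posix acls on the
slave datasets usually require (the pool/dataset name is a placeholder);
this matches the xattr=sa / acltype settings quoted elsewhere in this
thread, though it may not by itself fix the replication transfer:
# store xattrs efficiently and enable posix acls on the slave dataset
zfs set xattr=sa mypool/georep
zfs set acltype=posixacl mypool/georep
zfs get xattr,acltype mypool/georep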
Dear Mauro,
I also faced this issue several times, even on a host with a single brick
and a single volume.
The solution for me was: I figured out the leftover file names and
directory names in the brick directory.
Let's suppose the file and directory names are hiddenFile and hiddenDirectory.
Afterwards, go to
th
Dear Gluster-Community,
I would like to install a specific glusterfs-client on CentOS 8 and I am
wondering
whether I am doing something wrong here, since I observe this error message:
dnf install centos-release-gluster6 -y
dnf install glusterfs-fuse
Error: Failed to synchronize cache for repo 'cent
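As a first hedged check (these commands only refresh repository metadata,
they do not change any packages):
# rebuild the dnf metadata and list the repositories that are enabled
dnf clean all
dnf makecache
dnf repolist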
performance.stat-prefetch on
solved that issue.
Reference:
https://access.redhat.com/solutions/4558341
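For completeness, a minimal sketch of the setting referred to above (the
volume name is a placeholder):
# enable stat prefetching on the volume and confirm it
gluster volume set myvol performance.stat-prefetch on
gluster volume get myvol performance.stat-prefetch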
On 20/10/2019 18:26, Felix Kölzow wrote:
Dear Gluster-Users,
short story:
Two volumes are exported via smb/cifs with
(almost) the same configuration with respect to smb.conf. The
.
Regards,
Felix
On 02/03/2020 23:25, Strahil Nikolov wrote:
Hi Felix,
can you test /on non-prod system/ the latest minor version of gluster v6 ?
Best Regards,
Strahil Nikolov
On Monday, 2 March 2020, 21:43:48 GMT+2, Felix Kölzow
wrote:
Dear Community,
this message appears
Dear Community,
this message appears for me too on GlusterFS 6.0.
Before that, we had GlusterFS 3.12 and the client log file was almost
empty. After
upgrading to 6.0 we are facing these log entries.
Regards,
Felix
On 02/03/2020 15:17, mabi wrote:
Hello,
On the FUSE clients of my GlusterFS 5
Dear Gluster-Experts,
we created a three-node setup with two bricks each and a dispersed
volume that can be
accessed via the native client (glusterfs --version = 6.0).
The nodes are connected via 10Gbps (cat6,bonding mode 6).
If we run a performance test using the smallfile benchmark t
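For reproducibility, a hedged sketch of a typical smallfile invocation
(mount point and parameters are placeholders, not the exact ones used here):
# create 10000 64KB files per thread below the fuse mount
python3 smallfile_cli.py --top /mnt/glustervol/smallfile-test \
  --operation create --threads 8 --files 10000 --file-size 64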
Dear Mark,
we also have a gluster 3-node setup with a replica-3 volume which is
exported via cifs, and two dispersed volumes which
are exported via cifs and fuse. As already known, the small-file
performance could be better. Large-file performance is OK.
We have similar volume options reconfig
Dear Gluster-Community,
At this time, I just have a short question. Who among you shares
a _dispersed-gluster_ volume via cifs+smb using the vfs_object = glusterfs?
I have been struggling with this setup for two weeks and I would like to know
whether this is possible or not. Maybe someone can share h
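For reference, a hedged sketch of a share definition using the glusterfs
vfs module (share and volume names are placeholders; see the vfs_glusterfs
man page for details):
# smb.conf excerpt
[dispersed-share]
   path = /
   vfs objects = glusterfs
   glusterfs:volume = dispvol
   glusterfs:volfile_server = localhost
   kernel share modes = no
   read only = no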
Dear Gluster-Users,
short story:
Two volumes are exported via smb/cifs with
(almost) the same configuration with respect to smb.conf. The replicated
volume is easily accessible via cifs and fuse. The dispersed volume is
accessible via fuse, but not via cifs.
Error message fro
Dear Gluster-Community,
we are trying to implement a gluster setup in a production environment.
During the development process, we found this nice thread regarding
gluster storage experience:
https://forums.overclockers.com.au/threads/glusterfs-800tb-and-growing.1078674/
This thread seems to b
Dear Gluster-Community,
I created a test environment to test a gluster volume with replica 3.
Afterwards, I was able to manually mount the gluster volume using FUSE.
mount command:
mount -t glusterfs -o backup-volfile-servers=gluster01:gluster02
gluster00:/ifwFuse /mnt/glusterfs/ifwFuse
Ju
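To make this mount persistent, a hedged sketch of the matching /etc/fstab
line (same hosts and paths as in the mount command above):
# fstab entry for the fuse mount with backup volfile servers
gluster00:/ifwFuse  /mnt/glusterfs/ifwFuse  glusterfs  defaults,_netdev,backup-volfile-servers=gluster01:gluster02  0 0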
Thank you very much for your response.
I fully agree that using LVM has great advantages. Maybe there is a
misunderstanding,
but I really got the recommendation not to use (normal) LVM in
combination with gluster to
increase the volume. Maybe someone in the community has some good or
bad exper
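For what it is worth, a hedged sketch of how growing a brick with plain
LVM would look (device and mount point are placeholders; whether this is
advisable is exactly the question above):
# extend the logical volume under the brick and grow the xfs on top of it
lvextend -L +100G /dev/vg_bricks/brick1
xfs_growfs /gluster/brick1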