Thanks for the advice.
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Thu, Dec 14, 2023 at 09:54, Strahil Nikolov wrote:
Hi Gilberto,
Have you checked
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-configuring_red_hat_storage_for_enhancing_performance
?
I think you will need to test the virt profile, as its settings will
prevent some bad situations.
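For reference, a minimal sketch of applying that profile to the VMS volume discussed in this thread; the exact options the group sets can be reviewed in /var/lib/glusterd/groups/virt on the server nodes (the volume name is taken from the thread, everything else is standard Gluster CLI):

# Apply the predefined "virt" option group (sharding, quorum, eager-lock, etc.)
gluster volume set VMS group virt

# Review what the profile changed
gluster volume get VMS all | grep -Ei 'shard|quorum|eager-lock|remote-dio'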
Hi all
Aravinda, I usually set the following in a two-server environment and never get split-brain:
gluster vol set VMS cluster.heal-timeout 5
gluster vol heal VMS enable
gluster vol set VMS cluster.quorum-reads false
gluster vol set VMS cluster.quorum-count 1
gluster vol set VMS network.ping-timeout 2
gluster vol
Hi Aravinda,
Based on the output it’s a ‘replica 3 arbiter 1’ type.
Gilberto, what's the latency between the nodes?
Best Regards,
Strahil Nikolov
On Wednesday, December 13, 2023, 7:36 AM, Aravinda wrote:
Only Replica 2 or Distributed Gluster volumes can be created with two servers.
There is a high chance of split-brain with Replica 2 compared to a Replica 3 volume.
For NFS Ganesha, there is no issue exporting the volume even if only one server is
available. Run the NFS Ganesha servers on the Gluster server nodes and NFS
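To make the two-server options concrete, a minimal sketch of the volume create commands (hostnames and brick paths are placeholders, not from this thread):

# Plain two-way replica across two servers; Gluster warns about the split-brain risk
gluster volume create VMS replica 2 node1:/data/brick1/vms node2:/data/brick1/vms
gluster volume start VMS

# If a small third machine is available, an arbiter brick avoids most split-brain cases
gluster volume create VMS replica 3 arbiter 1 node1:/data/brick1/vms node2:/data/brick1/vms node3:/data/arbiter/vms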
Ah, that's nice.
Does somebody know if this can be achieved with two servers?
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Tue, Dec 12, 2023 at 17:08, Danny wrote:
Wow, HUGE improvement with NFS-Ganesha!
sudo dnf -y install glusterfs-ganesha
sudo vim /etc/ganesha/ganesha.conf
NFS_CORE_PARAM {
mount_path_pseudo = true;
Protocols = 3,4;
}
EXPORT_DEFAULTS {
Access_Type = RW;
}
LOG {
Default_Log_Level = WARN;
}
EXPORT {
Export_Id = 1;
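# The rest of this EXPORT block was truncated in the archive; a typical
# FSAL_GLUSTER export for the VMS volume might continue roughly like this
# (pseudo path, hostname and volume name are assumptions, not from the thread):
Path = "/";
Pseudo = "/VMS";
Squash = No_root_squash;
SecType = "sys";
FSAL {
    Name = GLUSTER;
    Hostname = "localhost";  # Gluster server this Ganesha instance connects to
    Volume = "VMS";
}
}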
FUSE has some overhead there.
Take a look at libgfapi:
https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/libgfapi/
I know this doc is somewhat out of date, but it could be a hint.
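For instance, QEMU's gluster block driver talks to a volume over libgfapi with a gluster:// URL instead of going through the FUSE mount; a small sketch (host, volume and image path are placeholders):

# Create a VM disk image directly on the VMS volume via libgfapi
qemu-img create -f qcow2 gluster://node1/VMS/images/vm1.qcow2 20G
# The same gluster:// URL can then be used for the guest's disk in QEMU or libvirt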
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Tue, Dec 12, 2023
Nope, not a caching thing. I've tried multiple different types of fio
tests; all produce the same results: Gbps when hitting the disks locally,
slow MB/s when hitting the Gluster FUSE mount.
I've been reading up on gluster-ganesha, and will give that a try.
On Tue, Dec 12, 2023 at 1:58 PM Ramon
Dismiss my first question: you have SAS 12Gbps SSDs. Sorry!
On 12/12/23 at 19:52, Ramon Selga wrote:
May I ask which kind of disks you have in this setup? Rotational, SSD
SAS/SATA, NVMe?
Is there a RAID controller with write-back caching?
It seems to me your fio test on the local brick has an unclear result due to some
caching.
Try something like the following (you can consider increasing the test file size):
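The fio command itself was truncated in the archive; a sketch of the kind of direct-I/O run that sidesteps most caching (directory, file size and job counts are assumptions):

fio --name=seqwrite --directory=/data/brick1/fiotest \
    --rw=write --bs=1M --size=8G --numjobs=4 \
    --ioengine=libaio --direct=1 --iodepth=32 \
    --group_reporting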
Sorry, I noticed that too after I posted, so I instantly upgraded to 10.
Issue remains.
On Tue, Dec 12, 2023 at 1:09 PM Gilberto Ferreira <gilberto.nune...@gmail.com> wrote:
I strongly suggest you update to version 10 or higher.
It comes with significant performance improvements.
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Tue, Dec 12, 2023 at 13:03, Danny wrote:
MTU is already 9000, and as you can see from the IPERF results, I've got a
nice, fast connection between the nodes.
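For anyone repeating that check, a quick sketch of an iperf3 run between two Gluster peers (hostname is a placeholder):

# On one node
iperf3 -s
# On the peer: 30-second test with 4 parallel streams
iperf3 -c node1 -t 30 -P 4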
On Tue, Dec 12, 2023 at 9:49 AM Strahil Nikolov wrote:
Hi,
Let’s try the simple things:
Check if you can use MTU 9000 and, if possible, set it on the bond slaves
and the bond devices, then verify with: ping GLUSTER_PEER -c 10 -M do -s 8972
Then try to follow the recommendations from
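A minimal sketch of that with iproute2 (interface names are placeholders; persist the change in your network configuration as well):

# Raise the MTU on the bond slaves first, then on the bond itself
ip link set dev eno1 mtu 9000
ip link set dev eno2 mtu 9000
ip link set dev bond0 mtu 9000
# 8972 bytes of ICMP payload + 28 bytes of headers = 9000, so this must not fragment
ping GLUSTER_PEER -c 10 -M do -s 8972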