On 11/26/20 8:14 PM, Gionatan Danti wrote:
So I think you simply are CPU limited. I remember doing some tests with loopback RAM disks and finding that Gluster used 100% CPU (ie: full load on an entire core) when doing 4K random writes. Side
note: using synchronized (ie: fsync) 4k writes, I only
Silly question to all though -
Akin to the problems that Linus Tech Tips experienced with ZFS and a multi-disk
NVMe SSD array -- is GlusterFS written with how NVMe SSDs operate
in mind?
(i.e. that the code itself might have to wait and/or wait for synchronous commands
to finish first
Yes, I compared the client count like this:
gluster volume status clients |grep -B1 connected
I ran the find command on each client before and after shutting down the
problematic daemon to determine any file count differences:
find /mount/point |wc -l
After my last post I discovered that one
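The before/after comparison described above can be scripted roughly like this (a sketch; /mount/point is the placeholder path from the original mail, and the brick-stopping step is left as a comment since it depends on the setup):

```shell
# Sketch of the file-count comparison from the thread
# (/mount/point is the placeholder used in the original mail).
count_files() {
    # count every entry (files and directories) under a mount point
    find "$1" | wc -l
}
# Illustrative usage, assuming a GlusterFS FUSE mount at /mount/point:
#   before=$(count_files /mount/point)
#   ...stop the suspect brick daemon here...
#   after=$(count_files /mount/point)
#   echo "entries lost: $((before - after))"
```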
On 2020-11-26 09:47, Dmitry Antipov wrote:
On 11/26/20 11:29 AM, Gionatan Danti wrote:
Can you detail your exact client and server CPU models?
Desktop is 8x of:
model name : Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
Server is 32x of:
model name : Intel(R) Xeon(R) Silver 4110
Erm... that's not correct.
Put them on the same line:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 trick
Best Regards,
Strahil Nikolov
At 12:00 +0300 on 26.11.2020 (Thu), Dmitry Antipov wrote:
> On 11/26/20 11:42 AM, Strahil Nikolov wrote:
>
> > And you
To whom it may be interesting, this paper says that ~80K IOPS (4K random
writes) is real:
https://archive.fosdem.org/2018/schedule/event/optimizing_sds/attachments/slides/2300/export/events/attachments/optimizing_sds/slides/2300/GlusterOnNVMe_FOSDEM2018.pdf
On the same-class server hardware,
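The exact job file behind the ~80K IOPS number isn't in the thread; a fio job along these lines is the usual way to measure 4K random-write IOPS (filename, size, iodepth, and numjobs below are illustrative assumptions, not the paper's settings):

```ini
; 4K random-write measurement, sketched after the test in the slides
[global]
ioengine=libaio
direct=1
bs=4k
rw=randwrite
time_based=1
runtime=60
group_reporting=1

[randwrite-test]
; assumed paths/sizes -- point filename at your Gluster mount
filename=/mnt/gluster/fio.test
size=4g
iodepth=32
numjobs=4
```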
On Thu, Nov 26, 2020 at 2:31 PM Dmitry Antipov wrote:
> On 11/26/20 12:49 PM, Yaniv Kaul wrote:
>
> > I run a slightly different command, which hides the kernel stuff and
> focuses on the user mode functions:
> > sudo perf record --call-graph dwarf -j any --buildid-all --all-user -p
> `pgrep
And your gluster bricks are localhost:/brick1 , localhost:/brick2 and
localhost:/brick3 ?
If not, add the hostname used for the bricks on the line starting with
127.0.0.1 and try again.
Best Regards,
Strahil Nikolov
At 11:18 +0300 on 26.11.2020 (Thu), Dmitry Antipov wrote:
> On 11/26/20 9:05 AM,
On 11/26/20 12:49 PM, Yaniv Kaul wrote:
I run a slightly different command, which hides the kernel stuff and focuses on
the user mode functions:
sudo perf record --call-graph dwarf -j any --buildid-all --all-user -p `pgrep
-d\, gluster` -F 2000 -ag
Thanks.
BTW, how much is the overhead of
On 26/11/20 4:00 pm, Olaf Buitelaar wrote:
Hi Ravi,
I could try that, but I can only try a setup on VMs, and will not be
able to set up an environment like our production environment,
which runs on physical machines, has actual production load, etc.
So the 2 setups would be quite
Hi Ravi,
I could try that, but I can only try a setup on VMs, and will not be able
to set up an environment like our production environment,
which runs on physical machines, has actual production load, etc. So the
2 setups would be quite different.
Personally I think it would be best to debug the
Can you test by adding entries in /etc/hosts for the loopback ip
(127.0.0.1)
something like this:
127.0.0.1 localhost localhost.localdomain
localhost4 localhost4.localdomain4 server
Best Regards,
Strahil Nikolov
At 08:14 +0300 on 26.11.2020 (Thu), Dmitry Antipov wrote:
> On 11/26/20 6:33 AM,
On Thu, Nov 26, 2020 at 11:44 AM Dmitry Antipov wrote:
> BTW, did someone try to profile the brick process? I did, and got this
> for the default replica 3 volume ('perf record -F 2500 -g -p [PID]'):
>
I run a slightly different command, which hides the kernel stuff and
focuses on the user mode
BTW, did someone try to profile the brick process? I did, and got this
for the default replica 3 volume ('perf record -F 2500 -g -p [PID]'):
+    3.29%  0.02%  glfs_epoll001  [kernel.kallsyms]  [k] entry_SYSCALL_64_after_hwframe
+    3.17%  0.01%  glfs_epoll001  [kernel.kallsyms]
On 11/26/20 11:42 AM, Strahil Nikolov wrote:
And your gluster bricks are localhost:/brick1 , localhost:/brick2 and
localhost:/brick3 ?
If not, add the hostname used for the bricks on the line starting with
127.0.0.1 and try again.
Same thing with:
127.0.0.1 trick trick.localdomain trick4
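A quick way to confirm whether a hosts entry like the one above is actually being picked up by the resolver (a sketch; "trick" is the hostname from this thread, substitute your own):

```shell
# Check that the hostname used for the bricks resolves to loopback.
check_loopback() {
    # getent ahosts lists every resolved address, IPv4 and IPv6
    getent ahosts "$1" | grep -q '^127\.0\.0\.1' \
        && echo "$1 resolves to loopback" \
        || echo "$1 does NOT resolve to loopback -- add it to /etc/hosts"
}
check_loopback trick
```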
On 11/26/20 11:29 AM, Gionatan Danti wrote:
Can you detail your exact client and server CPU models?
Desktop is 8x of:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 94
model name : Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
stepping: 3
On 2020-11-26 06:14, Dmitry Antipov wrote:
In my test setup, all bricks and client workload (fio) are running on
the same host. So
all network traffic should be routed through the loopback interface,
which is CPU-bound.
Since the server is 32-core and has plenty of RAM, loopback should be
On 11/26/20 9:05 AM, Strahil Nikolov wrote:
Can you test by adding entries in /etc/hosts for the loopback ip
(127.0.0.1)
something like this:
127.0.0.1 localhost localhost.localdomain
localhost4 localhost4.localdomain4 server
On both systems, my /etc/hosts is:
127.0.0.1 localhost