[ceph-users] Re: Proxmox+Ceph Benchmark 2020

2020-10-14 Thread Alwin Antreich
On Wed, Oct 14, 2020 at 02:09:22PM +0200, Andreas John wrote:
> Hello Alwin,
> 
> do you know if it makes a difference to disable "all green computing" in
> the BIOS vs. setting the governor to "performance" in the OS?
Well, for one, the governor cannot influence all BIOS settings (e.g.
Infinity Fabric). And the defaults of the governor may change, or it may
behave differently over time. Since a BIOS usually receives fewer
updates than the kernel, the likelihood of change is lower.

My recommendation is to set it in the BIOS.
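For completeness, if you prefer to set it from the OS instead, a minimal
sketch (assuming a Linux kernel that exposes the cpufreq sysfs
interface; run as root) could look like this:

    #!/usr/bin/env python3
    """Set the cpufreq governor to "performance" on all online CPUs.

    A sketch only; it assumes the cpufreq sysfs interface is present
    and must run with root privileges.
    """
    import glob

    for path in glob.glob(
            "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"):
        with open(path, "w") as f:
            f.write("performance")

Note this only changes the frequency governor; BIOS-level options such
as Infinity Fabric power management remain untouched.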

--
Cheers,
Alwin
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Proxmox+Ceph Benchmark 2020

2020-10-14 Thread Andreas John
Hello Alwin,

do you know if it makes a difference to disable "all green computing" in
the BIOS vs. setting the governor to "performance" in the OS?

If not, I think I will schedule some service cycles to set our
Proxmox Ceph nodes up correctly.


Best Regards,

Andreas


On 14.10.20 08:39, Alwin Antreich wrote:
> On Tue, Oct 13, 2020 at 11:19:33AM -0500, Mark Nelson wrote:
>> Thanks for the link Alwin!
>>
>>
>> On Intel platforms disabling C/P state transitions can have a really big
>> impact on IOPS (on RHEL, for instance, using the network-latency or
>> latency-performance tuned profiles).  It would be very interesting to know
>> if AMD EPYC platforms see similar benefits.  I don't have any in house,
>> but if you happen to have a chance it would be an interesting addendum to
>> your report.
> Thanks for the suggestion. I did indeed do a run before disabling the C/P
> states in the BIOS, but unfortunately I didn't keep the results. :/
>
> As far as I remember though, there was a visible improvement after
> disabling them.
>
> I will have a look once I have some time to do more benchmarks.
>
> --
> Cheers,
> Alwin
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
-- 
Andreas John
net-lab GmbH  |  Frankfurter Str. 99  |  63067 Offenbach
Geschaeftsfuehrer: Andreas John | AG Offenbach, HRB40832
Tel: +49 69 8570033-1 | Fax: -2 | http://www.net-lab.net

Facebook: https://www.facebook.com/netlabdotnet
Twitter: https://twitter.com/netlabdotnet
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Proxmox+Ceph Benchmark 2020

2020-10-14 Thread Anthony D'Atri


>> 
>> Very nice and useful document. One thing is not clear to me: the fio
>> parameters in appendix 5,
>> --numjobs=<1|4> --iodepths=<1|32>
>> It is not clear if/when the iodepth was set to 32. Was it used with all
>> tests with numjobs=4, or was it:
>> --numjobs=<1|4> --iodepths=1
> We have a script that permutes the values and runs fio.

On the topic of permuting fio runs, here is a useful tool that does
exactly that and also generates various graphs from the results.

https://github.com/louwrentius/fio-plot

— aad
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Proxmox+Ceph Benchmark 2020

2020-10-14 Thread Alwin Antreich
On Tue, Oct 13, 2020 at 09:09:27PM +0200, Maged Mokhtar wrote:
> 
> Very nice and useful document. One thing is not clear to me: the fio
> parameters in appendix 5,
> --numjobs=<1|4> --iodepths=<1|32>
> It is not clear if/when the iodepth was set to 32. Was it used with all
> tests with numjobs=4, or was it:
> --numjobs=<1|4> --iodepths=1
We have a script that permutes the values and runs fio.

But the iodepth results are not shown in the paper. They were very often
close together or, in the case of Windows, showed lazy writing in action.
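For illustration, a minimal sketch of such a permutation wrapper (not
the actual script used for the paper; the fio options and the target
device below are assumptions) could look like this:

    #!/usr/bin/env python3
    """Permute numjobs/iodepth and run fio once per combination.

    A sketch only; the options are illustrative and assume the libaio
    engine on a Linux host. Adjust --filename to the device under test.
    """
    import itertools
    import subprocess

    for numjobs, iodepth in itertools.product((1, 4), (1, 32)):
        subprocess.run([
            "fio",
            "--name", f"bench-j{numjobs}-d{iodepth}",
            "--ioengine=libaio",
            "--direct=1",
            "--rw=randwrite",
            "--bs=4k",
            "--runtime=60",
            "--time_based",
            "--filename=/dev/nvme0n1",  # hypothetical test device
            f"--numjobs={numjobs}",
            f"--iodepth={iodepth}",
        ], check=True)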

--
Cheers,
Alwin
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Proxmox+Ceph Benchmark 2020

2020-10-14 Thread Alwin Antreich
On Tue, Oct 13, 2020 at 11:19:33AM -0500, Mark Nelson wrote:
> Thanks for the link Alwin!
> 
> 
> On Intel platforms disabling C/P state transitions can have a really big
> impact on IOPS (on RHEL, for instance, using the network-latency or
> latency-performance tuned profiles).  It would be very interesting to know
> if AMD EPYC platforms see similar benefits.  I don't have any in house,
> but if you happen to have a chance it would be an interesting addendum to
> your report.
Thanks for the suggestion. I did indeed do a run before disabling the C/P
states in the BIOS, but unfortunately I didn't keep the results. :/

As far as I remember though, there was a visible improvement after
disabling them.

I will have a look once I have some time to do more benchmarks.
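In the meantime, one way to verify from the OS whether a BIOS change to
the C-states actually took effect is to list what the kernel exposes; a
small sketch (assuming the Linux cpuidle sysfs interface is available):

    #!/usr/bin/env python3
    """List the C-states the kernel currently exposes on CPU 0.

    A sketch only; assumes the cpuidle sysfs interface is present.
    """
    import glob
    import pathlib

    for state in sorted(glob.glob(
            "/sys/devices/system/cpu/cpu0/cpuidle/state*")):
        name = pathlib.Path(state, "name").read_text().strip()
        disabled = pathlib.Path(state, "disable").read_text().strip()
        print(f"{state}: {name} (disabled={disabled})")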

--
Cheers,
Alwin
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Proxmox+Ceph Benchmark 2020

2020-10-13 Thread Maged Mokhtar



Very nice and useful document. One thing is not clear to me: the fio
parameters in appendix 5,

--numjobs=<1|4> --iodepths=<1|32>

It is not clear if/when the iodepth was set to 32. Was it used with all
tests with numjobs=4, or was it:

--numjobs=<1|4> --iodepths=1

/maged

On 13/10/2020 12:17, Alwin Antreich wrote:

Hello fellow Ceph users,

we have released our new Ceph benchmark paper [0]. The platform and
hardware used are Proxmox VE 6.2 with Ceph Octopus on a new AMD EPYC
Zen 2 CPU with U.2 SSDs (details in the paper).

The paper illustrates the performance that is possible with a
three-node cluster without significant tuning.

I welcome everyone to share their experience and add to the discussion,
preferably in our forum thread [1] with our fellow Proxmox VE users.

--
Cheers,
Alwin

[0] https://proxmox.com/en/downloads/item/proxmox-ve-ceph-benchmark-2020-09
[1] 
https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Proxmox+Ceph Benchmark 2020

2020-10-13 Thread Alex Gorbachev
Alwin, this is excellent info. We have an AMD lab with a similar NVMe
setup on Proxmox and will try these benchmarks as well.

--
Alex Gorbachev
Intelligent Systems Services Inc. STORCIUM


On Tue, Oct 13, 2020 at 6:18 AM Alwin Antreich wrote:

> Hello fellow Ceph users,
>
> we have released our new Ceph benchmark paper [0]. The platform and
> hardware used are Proxmox VE 6.2 with Ceph Octopus on a new AMD EPYC
> Zen 2 CPU with U.2 SSDs (details in the paper).
>
> The paper illustrates the performance that is possible with a
> three-node cluster without significant tuning.
>
> I welcome everyone to share their experience and add to the discussion,
> preferably in our forum thread [1] with our fellow Proxmox VE users.
>
> --
> Cheers,
> Alwin
>
> [0]
> https://proxmox.com/en/downloads/item/proxmox-ve-ceph-benchmark-2020-09
> [1]
> https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Proxmox+Ceph Benchmark 2020

2020-10-13 Thread Mark Nelson

Thanks for the link Alwin!


On Intel platforms disabling C/P state transitions can have a really big
impact on IOPS (on RHEL, for instance, using the network-latency or
latency-performance tuned profiles).  It would be very interesting to
know if AMD EPYC platforms see similar benefits.  I don't have any in
house, but if you happen to have a chance it would be an interesting
addendum to your report.
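For reference, switching to one of those tuned profiles and verifying
it takes one command each; a small sketch (assuming RHEL/CentOS with
the tuned package installed; run as root):

    #!/usr/bin/env python3
    """Apply the latency-performance tuned profile and show the result.

    A sketch only; assumes tuned-adm is installed and in PATH.
    """
    import subprocess

    subprocess.run(["tuned-adm", "profile", "latency-performance"],
                   check=True)
    subprocess.run(["tuned-adm", "active"], check=True)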



Mark


On 10/13/20 5:17 AM, Alwin Antreich wrote:

Hello fellow Ceph users,

we have released our new Ceph benchmark paper [0]. The platform and
hardware used are Proxmox VE 6.2 with Ceph Octopus on a new AMD EPYC
Zen 2 CPU with U.2 SSDs (details in the paper).

The paper illustrates the performance that is possible with a
three-node cluster without significant tuning.

I welcome everyone to share their experience and add to the discussion,
preferably in our forum thread [1] with our fellow Proxmox VE users.

--
Cheers,
Alwin

[0] https://proxmox.com/en/downloads/item/proxmox-ve-ceph-benchmark-2020-09
[1] 
https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2020-09-hyper-converged-with-nvme.76516/
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io