[ceph-users] RBD image QoS rbd_qos_write_bps_limit and rbd_qos_bps_limit and mkfs performance

2023-07-19 Thread Engelmann Florian
Hi,

I noticed an incredibly high performance drop with mkfs.ext4 (as well as
mkfs.xfs) when setting (almost) any value for rbd_qos_write_bps_limit (or
rbd_qos_bps_limit).

Baseline: 4 TB rbd volume, rbd_qos_write_bps_limit = 0
mkfs.ext4:
real    0m6.688s
user    0m0.000s
sys     0m0.006s

50 GB/s: 4 TB rbd volume, rbd_qos_write_bps_limit = 53687091200
mkfs.ext4:
real    1m22.217s
user    0m0.009s
sys     0m0.000s

5 GB/s: 4 TB rbd volume, rbd_qos_write_bps_limit = 5368709120
mkfs.ext4:
real    13m39.770s
user    0m0.008s
sys     0m0.034s

500 MB/s: 4 TB rbd volume, rbd_qos_write_bps_limit = 524288000
mkfs.ext4:
test still running... I can provide the result if needed.

The tests are running on a client vm (Ubuntu 22.04) using Qemu/libvirt.
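
For reference, this is roughly how the limit was applied between runs (a
sketch; the pool name "volumes", image name "vol-test" and guest device
/dev/vdb are placeholders):

# set a 5 GB/s write limit on the image (librbd QoS, evaluated on the hypervisor)
rbd config image set volumes/vol-test rbd_qos_write_bps_limit 5368709120
# reset to unlimited
rbd config image set volumes/vol-test rbd_qos_write_bps_limit 0
# then, inside the guest:
time mkfs.ext4 /dev/vdb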

Using the same values with Qemu/libvirt QoS does not affect mkfs performance.
https://libvirt.org/formatdomain.html#block-i-o-tuning
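
The libvirt-side limit was applied roughly like this (a sketch; the domain
name "testvm" and target device "vdb" are placeholders):

# same 5 GB/s write limit, but enforced by QEMU/libvirt instead of librbd
virsh blkdeviotune testvm vdb --write-bytes-sec 5368709120 --live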

Ceph Version: 16.2.11
Qemu: 6.2.0
Libvirt: 8.0.0
Kernel (hypervisor host): 5.19.0-35-generic 
librbd1 (hypervisor host): 17.2.5

Could anyone please confirm this and explain what's going on?

All the best,
Florian


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: RBD image QoS rbd_qos_write_bps_limit and rbd_qos_bps_limit and mkfs performance

2023-07-19 Thread Engelmann Florian
Hi Ilya,

thank you for your fast response! I knew about those mkfs parameters, but the
possibility to exclude discards from RBD QoS was new to me. It looks like this
option is not available in Pacific, only in Quincy, so we will have to upgrade
our clusters first.

Is it possible to exclude discards by default for ALL RBD images (or all images
in a pool), or is it a per-image setting? If it is a per-image setting, we will
have to extend Cinder (OpenStack) to support it.
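
For reference, these are the two forms I have in mind, assuming the option is
available after the upgrade to Quincy (pool and image names are placeholders):

# per pool: would cover all images (and thus all Cinder volumes) in the pool
rbd config pool set volumes rbd_qos_exclude_ops discard
# per image: Cinder would have to set this on every volume it creates
rbd config image set volumes/volume-1234 rbd_qos_exclude_ops discard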

All the best,
Florian


From: Ilya Dryomov 
Sent: Wednesday, July 19, 2023 3:16:20 PM
To: Engelmann Florian
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] RBD image QoS rbd_qos_write_bps_limit and 
rbd_qos_bps_limit and mkfs performance

On Wed, Jul 19, 2023 at 11:01 AM Engelmann Florian
 wrote:
>
> Hi,
>
> I noticed an incredibly high performance drop with mkfs.ext4 (as well as
> mkfs.xfs) when setting (almost) any value for rbd_qos_write_bps_limit (or
> rbd_qos_bps_limit).
>
> Baseline: 4 TB rbd volume, rbd_qos_write_bps_limit = 0
> mkfs.ext4:
> real    0m6.688s
> user    0m0.000s
> sys     0m0.006s
>
> 50 GB/s: 4 TB rbd volume, rbd_qos_write_bps_limit = 53687091200
> mkfs.ext4:
> real    1m22.217s
> user    0m0.009s
> sys     0m0.000s
>
> 5 GB/s: 4 TB rbd volume, rbd_qos_write_bps_limit = 5368709120
> mkfs.ext4:
> real    13m39.770s
> user    0m0.008s
> sys     0m0.034s
>
> 500 MB/s: 4 TB rbd volume, rbd_qos_write_bps_limit = 524288000
> mkfs.ext4:
> test still running... I can provide the result if needed.
>
> The tests are running on a client vm (Ubuntu 22.04) using Qemu/libvirt.
>
> Using the same values with Qemu/libvirt QoS does not affect mkfs performance.
> https://libvirt.org/formatdomain.html#block-i-o-tuning
>
> Ceph Version: 16.2.11
> Qemu: 6.2.0
> Libvirt: 8.0.0
> Kernel (hypervisor host): 5.19.0-35-generic
> librbd1 (hypervisor host): 17.2.5
>
> Could anyone please confirm this and explain what's going on?

Hi Florian,

RBD QoS write limits apply to all write-like operations, including
discards.  By default, both mkfs.ext4 and mkfs.xfs attempt to discard
the entire partition/device, and the librbd QoS machinery treats that as
4 TB worth of writes.

RBD images are thin-provisioned, so if you are creating a filesystem on
a freshly created image, you can skip discarding with "-E nodiscard" for
mkfs.ext4 or "-K" for mkfs.xfs.
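
For example (the device name /dev/vdb is a placeholder):

# ext4: skip the initial discard of the whole device
mkfs.ext4 -E nodiscard /dev/vdb
# xfs: -K disables discarding blocks at mkfs time
mkfs.xfs -K /dev/vdb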

Alternatively, you can waive QoS limits for discards (or even an
arbitrary combination of operations) by setting the rbd_qos_exclude_ops
option [1] appropriately.
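
For instance, something like this should exempt discards while keeping the
byte limit for regular writes (the pool/image spec is a placeholder):

# per image
rbd config image set mypool/myimage rbd_qos_exclude_ops discard
# or globally for all librbd clients
rbd config global set global rbd_qos_exclude_ops discard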

[1] 
https://docs.ceph.com/en/latest/rbd/rbd-config-ref/#confval-rbd_qos_exclude_ops

Thanks,

Ilya


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] build nautilus 14.2.13 packages and container

2020-11-13 Thread Engelmann Florian
Hi,

I was not able to find any complete guide on how to build ceph (14.2.x) from 
source, create packages and build containers based on those packages.

Ubuntu or CentOS, it does not matter.

What I have tried so far:
###
docker pull centos:7
docker run -ti centos:7 /bin/bash

yum install -y git rpm-build rpmdevtools wget epel-release
yum install -y python-virtualenv python-pip jq cmake3 make gcc-c++ rpm-build \
    which sudo createrepo

git clone https://github.com/ceph/ceph
cd ceph
git checkout v14.2.13
./make-srpm.sh
./install-deps.sh
#

but install-deps.sh fails with:
Error: No Package found for python-scipy

The following error message appeared before:

http://vault.centos.org/centos/7/sclo/Source/rh/repodata/repomd.xml: [Errno 14] 
HTTP Error 404 - Not Found
Trying other mirror.
To address this issue please refer to the below wiki article

https://wiki.centos.org/yum-errors

If above article doesn't help to resolve this issue please use 
https://bugs.centos.org/.

http://vault.centos.org/centos/7/sclo/Source/sclo/repodata/repomd.xml: [Errno 
14] HTTP Error 404 - Not Found


centos:8 fails as well, with a dependency error:
Error:
 Problem: package 
python36-rpm-macros-3.6.8-2.module_el8.1.0+245+c39af44f.noarch conflicts with 
python-modular-rpm-macros > 3.6 provided by 
python38-rpm-macros-3.8.0-6.module_el8.2.0+317+61fa6e7d.noarch
  - conflicting requests

Any helpful links?

All the best,
Florian

EveryWare AG
Florian Engelmann
Cloud Platform Architect
Zurlindenstrasse 52a
CH-8003 Zürich

T  +41 44 466 60 00
F  +41 44 466 60 10

florian.engelm...@everyware.ch
www.everyware.ch


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: build nautilus 14.2.13 packages and container

2020-11-16 Thread Engelmann Florian
I was able to fix this dependency problem by deleting the 'BuildRequires:  
python%{_python_buildid}-scipy'  line from the ceph.spec.in file:



docker pull centos:7.7.1908
docker run -ti centos:7.7.1908 /bin/bash

cd root
yum install -y epel-release
yum install -y git wget sudo which jq
yum install -y rpm-build rpmdevtools createrepo cmake3
yum install -y python-pip python-virtualenv
yum install -y centos-release-scl
# devtoolset-8 provides a newer gcc/g++ toolchain than stock CentOS 7
yum -y install devtoolset-8
scl enable devtoolset-8 bash

git clone https://github.com/ceph/ceph
cd ceph
git checkout v14.2.13
# drop the scipy build dependency that cannot be resolved on CentOS 7
sed -i -e '/BuildRequires:  python%{_python_buildid}-scipy/d' ceph.spec.in
./make-srpm.sh
./install-deps.sh
# rebuild binary RPMs from the generated source RPM
CEPH=$(ls ceph-14.2*.src.rpm)
rpmbuild --rebuild $CEPH
###
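
In case it helps anyone else: the rebuilt packages end up under the default
rpmbuild topdir, and createrepo (installed above) can turn that directory into
a yum repository for a container build. Roughly:

# resulting binary packages (default %_topdir)
ls ~/rpmbuild/RPMS/x86_64/
# generate repo metadata so the directory can be consumed as a yum repo
createrepo ~/rpmbuild/RPMS/x86_64/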
________
From: Engelmann Florian 
Sent: Friday, November 13, 2020 3:51:10 PM
To: ceph-users
Subject: [ceph-users] build nautilus 14.2.13 packages and container



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Storage class usage stats

2021-10-28 Thread Engelmann Florian
Is there any PR in progress to add such counters to bucket stats? The
rados-level stats are not an option if those counters are needed for, e.g.,
rating/billing.
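
(For context, the counters we rate on today come from something like the
following, which has no per-storage-class breakdown; the bucket name is a
placeholder:)

radosgw-admin bucket stats --bucket=mybucket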


From: Casey Bodley 
Sent: Wednesday, September 9, 2020 7:50:12 PM
To: Tobias Urdin
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: Storage class usage stats

That's right, radosgw doesn't do accounting per storage class. All you
have to go on is the rados-level pool stats for those storage classes.
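
For example, since each storage class is backed by its own data pool, the
pool-level view looks roughly like this (the pool name is a placeholder):

# per-pool usage, which in practice is per-storage-class usage
ceph df detail
# or object/byte counts for a specific data pool
rados df -p default.rgw.buckets.data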

On Mon, Sep 7, 2020 at 7:05 AM Tobias Urdin  wrote:
>
> Hello,
>
> Anybody have any feedback or ways they have resolved this issue?
>
> Best regards
> 
> From: Tobias Urdin 
> Sent: Wednesday, August 26, 2020 3:01:49 PM
> To: ceph-users@ceph.io
> Subject: [ceph-users] Storage class usage stats
>
> Hello,
>
> I've been trying to understand if there is any way to get usage information 
> based on storage classes for buckets.
>
> Since there is no information available from the "radosgw-admin bucket stats"
> command nor any other endpoint, I tried to browse the source code but couldn't
> find any references where the storage class would be exposed in such a way.
>
> It also seems that RadosGW today is not saving any counters on the number of
> objects stored in each storage class when it's collecting usage stats, which
> means there is no such metadata saved for a bucket.
>
>
> I was hoping it was at least saved but not exposed, because then it would have
> been an easier fix than adding support for counting the number of objects in
> storage classes based on operations, which would involve a lot of places and
> mean writing to the bucket metadata on each op :(
>
>
> Are my assumptions correct that there is no way to retrieve such information,
> meaning there is no way to measure such usage?
>
> If the answer is yes, I assume the only way to get something that could be
> measured would be to instead have multiple placement targets, since those are
> exposed in the bucket info. The downside, though, is that you lose a lot of
> functionality related to lifecycle and moving a single object to another
> storage class.
>
> Best regards
> Tobias
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io