[ceph-users] Re: MDS stuck in "up:replay"

2023-01-23 Thread Venky Shankar
On Tue, Jan 24, 2023 at 1:34 AM  wrote:
>
> Hello Thomas,
>
> I have the same issue with the MDS that you describe, and the Ceph version is
> the same. Did the up:replay state ever finish in your case?

There is probably something going on with Thomas's cluster that is
blocking the MDS from making progress. Could you upload logs here? -
https://tracker.ceph.com/issues/58489

>
> Thx
> Aleksandar
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>


-- 
Cheers,
Venky


[ceph-users] Re: ceph cluster iops low

2023-01-23 Thread Mark Nelson

Hi Peter,

I'm not quite sure whether your cluster is fully backed by NVMe drives 
based on your description, but you might be interested in the CPU 
scaling article we posted last fall.  It's available here:


https://ceph.io/en/news/blog/2022/ceph-osd-cpu-scaling/


That gives a good overview of what kind of performance you can get out 
of Ceph in a good environment with all NVMe drives.  We also have a 
tuning article for using QEMU KVM which shows how big of a difference 
various tuning options in the whole IO pipeline can make:


https://ceph.io/en/news/blog/2022/qemu-kvm-tuning/

The gist of it is that there are a lot of things that can negatively 
affect performance, but if you can isolate and fix them it's possible to 
get reasonably high performance in the end.


If you have HDD based OSDs with NVMe only for DB/WAL, you will 
ultimately be limited by the random IO performance of the HDDs.  The WAL 
can help a little but not like a full tiering solution.  We have some 
ideas regarding how to improve this in the future.  If you have a test 
cluster or are simply experimenting, you could try deploying on top of 
Intel's OpenCAS or bcache.  There have been reports of improvements for 
HDD backed clusters using these solutions, though they are not currently 
supported officially by the project afaik.
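For anyone curious about the experimental route, a bcache setup typically looks something like the sketch below. The device names are examples, and as noted this is not an officially supported configuration; test it on a scratch cluster first.

```shell
# Create a backing device on the HDD and a cache device on an NVMe
# partition (requires bcache-tools; device names are examples).
make-bcache -B /dev/sdb          # HDD backing device
make-bcache -C /dev/nvme0n1p1    # NVMe cache device

# Attach the cache set to the backing device. The cache-set UUID can be
# read with 'bcache-super-show /dev/nvme0n1p1'.
echo "$CSET_UUID" > /sys/block/bcache0/bcache/attach

# Enable writeback caching, then deploy the OSD on top of /dev/bcache0.
echo writeback > /sys/block/bcache0/bcache/cache_mode
```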


Mark


On 1/23/23 14:58, peter...@raksmart.com wrote:

My Ceph IOPS are very low: 48 SSDs backed by NVMe for DB/WAL across four
physical servers, yet the whole cluster delivers only about 20K IOPS total. It
looks like the IOs are being throttled by a bottleneck somewhere. Dstat shows a
lot of context switches and interrupts, over 150K, while I am running an FIO 4K
128QD benchmark.
Each SSD shows only about 40 MB/s of throughput at around 250 IOPS. The network
is 20G total and not saturated. CPUs are around 50% idle with 2*E5 2950v2 per node.
Is it normal for the context switches to be that high, and how can I reduce
them? Where else could the bottleneck be?



[ceph-users] rbd_mirroring_delete_delay not removing images with snaps

2023-01-23 Thread Tyler Brekke
We use rbd-mirror as a way to migrate volumes between clusters.
The process is: enable mirroring on the image to migrate, demote it on the
primary cluster, promote it on the secondary cluster, and then disable
mirroring on the image.
When we started using `rbd_mirroring_delete_delay` so we could retain a
backup of the source image, we noticed that volumes with unprotected snapshots
do not get purged from the trash. Previously, the image and all its snapshots
would be successfully removed after disabling mirroring.
I would expect the same behavior when using `rbd_mirroring_delete_delay`
as well. Is rbd trash just being overly cautious here?
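For reference, the migration flow described above looks roughly like the following; pool and image names are placeholders, and this assumes per-image mirroring is already enabled on the pools:

```shell
# On the primary (source) cluster: enable mirroring for the image.
rbd mirror image enable mypool/myimage

# Demote on the primary, then promote on the secondary (destination).
rbd mirror image demote mypool/myimage     # run on the primary cluster
rbd mirror image promote mypool/myimage    # run on the secondary cluster

# Disable mirroring; with rbd_mirroring_delete_delay set, the source
# image should land in the trash instead of being deleted immediately.
rbd mirror image disable mypool/myimage    # run on the primary cluster

# Inspect the trash; images with unprotected snapshots appear to linger here.
rbd trash ls --all mypool
```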

-- 
Tyler Brekke
Senior Engineer I
tbre...@digitalocean.com



[ceph-users] ceph cluster iops low

2023-01-23 Thread petersun
My Ceph IOPS are very low: 48 SSDs backed by NVMe for DB/WAL across four
physical servers, yet the whole cluster delivers only about 20K IOPS total. It
looks like the IOs are being throttled by a bottleneck somewhere. Dstat shows a
lot of context switches and interrupts, over 150K, while I am running an FIO 4K
128QD benchmark.
Each SSD shows only about 40 MB/s of throughput at around 250 IOPS. The network
is 20G total and not saturated. CPUs are around 50% idle with 2*E5 2950v2 per node.
Is it normal for the context switches to be that high, and how can I reduce
them? Where else could the bottleneck be?
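For reproducibility, a 4K QD128 test along the lines described above might look like this; the target device and job parameters are assumptions, so adjust to match the actual setup, and never point it at a device holding real data:

```shell
# 4K random-write test at iodepth 128 against an RBD-backed block device
# (device path is an example; direct=1 bypasses the page cache).
fio --name=4k-randwrite --filename=/dev/vdb \
    --rw=randwrite --bs=4k --iodepth=128 \
    --ioengine=libaio --direct=1 --numjobs=1 \
    --runtime=60 --time_based --group_reporting
```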


[ceph-users] Re: rbd-mirror ceph quincy Not able to find rbd_mirror_journal_max_fetch_bytes config in rbd mirror

2023-01-23 Thread ankit raikwar
Hello,
   I tried all of the options, but it's not working; the replication speed
over the network is still the same. Can you suggest any other way to increase
the replication performance?
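Not an authoritative answer, but a couple of knobs that are sometimes tried to increase rbd-mirror sync throughput; the option names and values below are suggestions to verify against your quincy build before applying:

```shell
# Increase the number of images the rbd-mirror daemon syncs in parallel
ceph config set global rbd_mirror_concurrent_image_syncs 5

# Allow more concurrent management/copy operations per image in librbd
ceph config set global rbd_concurrent_management_ops 20
```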


[ceph-users] Re: MDS stuck in "up:replay"

2023-01-23 Thread adjurdjevic
Hello Thomas,

I have the same issue with the MDS that you describe, and the Ceph version is
the same. Did the up:replay state ever finish in your case?

Thx
Aleksandar


[ceph-users] Re: Ceph Disk Prediction module issues

2023-01-23 Thread Nikhil Shah
Hey, did you ever find a resolution for this?


[ceph-users] Set async+rdma in Ceph cluster

2023-01-23 Thread Aristide Bekroundjo


Hi,

I am trying to enable RDMA in a cluster of 6 nodes (3 MONs and 3 OSD nodes with
10 OSDs on each OSD node).
OS: CentOS Stream release 8.

I followed the steps below, but I got an error.

[root@mon1 ~]# cephadm shell
Inferring fsid 9414e1bc-9061-11ed-90fc-00163e4f92ad
Using recent ceph image 
quay.io/ceph/ceph@sha256:3cd25ee2e1589bf534c24493ab12e27caf634725b4449d50408fd5ad4796bbfa
[ceph: root@mon1 /]# ceph config set global ms_type async+rdma
2023-01-21T11:11:49.182+ 7fab5922e700 -1 Infiniband verify_prereq!!! 
WARNING !!! For RDMA to work properly user memlock (ulimit -l) must be big 
enough to allow large amount of registered memory. We recommend setting this 
parameter to infinity
/usr/include/c++/8/bits/stl_vector.h:932: std::vector<_Tp, _Alloc>::reference 
std::vector<_Tp, _Alloc>::operator[](std::vector<_Tp, _Alloc>::size_type) [with 
_Tp = Worker*; _Alloc = std::allocator; std::vector<_Tp, 
_Alloc>::reference = Worker*&; std::vector<_Tp, _Alloc>::size_type = long 
unsigned int]: Assertion '__builtin_expect(__n < this->size(), true)' failed.
Aborted (core dumped)
[ceph: root@mon1 /]#

The error suggests raising the memlock ulimit, but with a containerized
deployment, how can I properly configure async+rdma?
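One approach that may work with cephadm is passing a ulimit flag to the daemon containers via `extra_container_args` in a service spec. This is a sketch, not a verified recipe: the service id is made up, `extra_container_args` is only honored in recent cephadm releases, and whether `data_devices: all: true` matches your drives needs checking.

```shell
# Apply an OSD spec that raises the memlock limit inside the containers
cat <<EOF | ceph orch apply -i -
service_type: osd
service_id: rdma_osds
placement:
  host_pattern: '*'
data_devices:
  all: true
extra_container_args:
  - "--ulimit"
  - "memlock=-1"
EOF
```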

Best regards,



[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-23 Thread Yuri Weinstein
Ilya, Venky

rbd, krbd, and fs reruns are almost ready; please review/approve

On Mon, Jan 23, 2023 at 2:30 AM Ilya Dryomov  wrote:
>
> On Fri, Jan 20, 2023 at 5:38 PM Yuri Weinstein  wrote:
> >
> > The overall progress on this release is looking much better and if we
> > can approve it we can plan to publish it early next week.
> >
> > Still seeking approvals
> >
> > rados - Neha, Laura
> > rook - Sébastien Han
> > cephadm - Adam
> > dashboard - Ernesto
> > rgw - Casey
> > rbd - Ilya (full rbd run in progress now)
> > krbd - Ilya
>
> Hi Yuri,
>
> There are 12 infra-related failures in rbd and a few in krbd.  Please
> rerun the failed and dead jobs in:
>
> https://pulpito.ceph.com/yuriw-2023-01-20_16:09:11-rbd-pacific_16.2.11_RC6.6-distro-default-smithi/
> https://pulpito.ceph.com/yuriw-2023-01-15_16:16:11-krbd-pacific_16.2.11_RC6.6-testing-default-smithi/
>
> Thanks,
>
> Ilya
>


[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-23 Thread Ilya Dryomov
On Fri, Jan 20, 2023 at 5:38 PM Yuri Weinstein  wrote:
>
> The overall progress on this release is looking much better and if we
> can approve it we can plan to publish it early next week.
>
> Still seeking approvals
>
> rados - Neha, Laura
> rook - Sébastien Han
> cephadm - Adam
> dashboard - Ernesto
> rgw - Casey
> rbd - Ilya (full rbd run in progress now)
> krbd - Ilya

Hi Yuri,

There are 12 infra-related failures in rbd and a few in krbd.  Please
rerun the failed and dead jobs in:

https://pulpito.ceph.com/yuriw-2023-01-20_16:09:11-rbd-pacific_16.2.11_RC6.6-distro-default-smithi/
https://pulpito.ceph.com/yuriw-2023-01-15_16:16:11-krbd-pacific_16.2.11_RC6.6-testing-default-smithi/

Thanks,

Ilya


[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-23 Thread Venky Shankar
Hey Yuri,

On Fri, Jan 20, 2023 at 10:08 PM Yuri Weinstein  wrote:
>
> The overall progress on this release is looking much better and if we
> can approve it we can plan to publish it early next week.
>
> Still seeking approvals
>
> rados - Neha, Laura
> rook - Sébastien Han
> cephadm - Adam
> dashboard - Ernesto
> rgw - Casey
> rbd - Ilya (full rbd run in progress now)
> krbd - Ilya
> fs - Venky, Patrick

Seems like not all failed/dead jobs were rerun. Did you miss updating
the rerun link in the tracker?

> upgrade/nautilus-x (pacific) - passed thx Adam Kraitman!
> upgrade/octopus-x (pacific) - almost passed, still running 1 job
> upgrade/pacific-p2p - Neha (same as in 16.2.8)
> powercycle - Brad (see new SELinux denials)
>
> On Tue, Jan 17, 2023 at 10:45 AM Yuri Weinstein  wrote:
> >
> > OK I will rerun failed jobs filtering rhel in
> >
> > Thx!
> >
> > On Tue, Jan 17, 2023 at 10:43 AM Adam Kraitman  wrote:
> > >
> > > Hey the satellite issue was fixed
> > >
> > > Thanks
> > >
> > > On Tue, Jan 17, 2023 at 7:43 PM Laura Flores  wrote:
> > >>
> > >> This was my summary of rados failures. There was nothing new or amiss,
> > >> although it is important to note that runs were done with filtering out
> > >> rhel 8.
> > >>
> > >> I will leave it to Neha for final approval.
> > >>
> > >> Failures:
> > >> 1. https://tracker.ceph.com/issues/58258
> > >> 2. https://tracker.ceph.com/issues/58146
> > >> 3. https://tracker.ceph.com/issues/58458
> > >> 4. https://tracker.ceph.com/issues/57303
> > >> 5. https://tracker.ceph.com/issues/54071
> > >>
> > >> Details:
> > >> 1. rook: kubelet fails from connection refused - Ceph - Orchestrator
> > >> 2. test_cephadm.sh: Error: Error initializing source docker://
> > >> quay.ceph.io/ceph-ci/ceph:master - Ceph - Orchestrator
> > >> 3. qa/workunits/post-file.sh: postf...@drop.ceph.com: Permission 
> > >> denied
> > >> - Ceph
> > >> 4. rados/cephadm: Failed to fetch package version from
> > >> https://shaman.ceph.com/api/search/?status=ready=ceph=default=ubuntu%2F22.04%2Fx86_64=b34ca7d1c2becd6090874ccda56ef4cd8dc64bf7
> > >> - Ceph - Orchestrator
> > >> 5. rados/cephadm/osds: Invalid command: missing required parameter
> > >> hostname() - Ceph - Orchestrator
> > >>
> > >> On Tue, Jan 17, 2023 at 9:48 AM Yuri Weinstein  
> > >> wrote:
> > >>
> > >> > Please see the test results on the rebased RC 6.6 in this comment:
> > >> >
> > >> > https://tracker.ceph.com/issues/58257#note-2
> > >> >
> > >> > We're still having infrastructure issues making testing difficult.
> > >> > Therefore all reruns were done excluding the rhel 8 distro
> > >> > ('--filter-out rhel_8')
> > >> >
> > >> > Also, the upgrades failed and Adam is looking into this.
> > >> >
> > >> > Seeking new approvals
> > >> >
> > >> > rados - Neha, Laura
> > >> > rook - Sébastien Han
> > >> > cephadm - Adam
> > >> > dashboard - Ernesto
> > >> > rgw - Casey
> > >> > rbd - Ilya
> > >> > krbd - Ilya
> > >> > fs - Venky, Patrick
> > >> > upgrade/nautilus-x (pacific) - Adam Kraitman
> > >> > upgrade/octopus-x (pacific) - Adam Kraitman
> > >> > upgrade/pacific-p2p - Neha - Adam Kraitman
> > >> > powercycle - Brad
> > >> >
> > >> > Thx
> > >> >
> > >> > On Fri, Jan 6, 2023 at 8:37 AM Yuri Weinstein  
> > >> > wrote:
> > >> > >
> > >> > > Happy New Year all!
> > >> > >
> > >> > > This release remains to be in "progress"/"on hold" status as we are
> > >> > > sorting all infrastructure-related issues.
> > >> > >
> > >> > > Unless I hear objections, I suggest doing a full rebase/retest QE
> > >> > > cycle (adding PRs merged lately) since it's taking much longer than
> > >> > > anticipated when sepia is back online.
> > >> > >
> > >> > > Objections?
> > >> > >
> > >> > > Thx
> > >> > > YuriW
> > >> > >
> > >> > > On Thu, Dec 15, 2022 at 9:14 AM Yuri Weinstein 
> > >> > wrote:
> > >> > > >
> > >> > > > Details of this release are summarized here:
> > >> > > >
> > >> > > > https://tracker.ceph.com/issues/58257#note-1
> > >> > > > Release Notes - TBD
> > >> > > >
> > >> > > > Seeking approvals for:
> > >> > > >
> > >> > > > rados - Neha (https://github.com/ceph/ceph/pull/49431 is still 
> > >> > > > being
> > >> > > > tested and will be merged soon)
> > >> > > > rook - Sébastien Han
> > >> > > > cephadm - Adam
> > >> > > > dashboard - Ernesto
> > >> > > > rgw - Casey (rwg will be rerun on the latest SHA1)
> > >> > > > rbd - Ilya, Deepika
> > >> > > > krbd - Ilya, Deepika
> > >> > > > fs - Venky, Patrick
> > >> > > > upgrade/nautilus-x (pacific) - Neha, Laura
> > >> > > > upgrade/octopus-x (pacific) - Neha, Laura
> > >> > > > upgrade/pacific-p2p - Neha - Neha, Laura
> > >> > > > powercycle - Brad
> > >> > > > ceph-volume - Guillaume, Adam K
> > >> > > >
> > >> > > > Thx
> > >> > > > YuriW
> > >> > ___
> > >> > Dev mailing list -- d...@ceph.io
> > >> > To unsubscribe send an email to dev-le...@ceph.io
> > >> >
> > >>
> > >>
> > >> --
> > >>
> > >> Laura 

[ceph-users] Re: Pools and classes

2023-01-23 Thread Massimo Sgaravatto
Thanks a lot
Cheers, Massimo

On Mon, Jan 23, 2023 at 9:55 AM Robert Sander 
wrote:

> On 23.01.23 at 09:44, Massimo Sgaravatto wrote:
>
> >> This triggered the remapping of some pgs and therefore some data
> movement.
> >> Is this normal/expected, since for the time being I have only hdd osds ?
>
> This is expected behaviour as the cluster map has changed. Internally
> the device classes are represented through "shadow" trees of the cluster
> topology.
>
> Regards
> --
> Robert Sander
> Heinlein Consulting GmbH
> Schwedter Str. 8/9b, 10119 Berlin
>
> http://www.heinlein-support.de
>
> Tel: 030 / 405051-43
> Fax: 030 / 405051-19
>
> Zwangsangaben lt. §35a GmbHG:
> HRB 220009 B / Amtsgericht Berlin-Charlottenburg,
> Geschäftsführer: Peer Heinlein -- Sitz: Berlin


[ceph-users] Re: trouble deploying custom config OSDs

2023-01-23 Thread Guillaume Abrioux
On Fri, 20 Jan 2023 at 13:12, seccentral  wrote:

> Hello,
> Thank you for the valuable info, and especially for the slack link (it's
> not listed on the community page)
> The ceph-volume command was issued in the following manner :
> login to my 1st vps from which I performed the boostrap with cephadm
> exec
>
> sudo cephadm shell
>
> which gets me root shell inside the container and then ceph-volume [...]
> etc.
>

You shouldn't run ceph-volume commands like that to create OSDs from a
cephadm shell; that will result in OSDs that are not managed by cephadm
as they should be.
You probably want to run `ceph orch daemon add osd
:data_devices=/dev/sdb,db_devices=/dev/mapper/ssd0-ssd0_0,method=raw`
or use a service spec like the following:

```
---
service_type: osd
service_id: mix_raw_lvm
placement:
  hosts:
- node123
data_devices:
  paths:
- /dev/sdb
db_devices:
  paths:
- /dev/mapper/ssd0-ssd0_0
method: raw
```
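If it helps, a spec like the one above can be previewed before cephadm acts on it; the filename is an example:

```shell
# Preview which OSDs cephadm would create from the spec, then apply it
ceph orch apply -i osd_spec.yaml --dry-run
ceph orch apply -i osd_spec.yaml
```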



> -
> I nuked my environment to recreate the issues and paste them here so my
> new vg/lv names are different
>
> root@dev0:/# ceph-volume raw prepare --bluestore --data /dev/sdb
> --block.db /dev/mapper/ssd0-ssd0_0
> Running command: /usr/bin/ceph-authtool --gen-print-key
> Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd
> --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
> 0011a2a8-084b-4d79-ab8f-2503dfc2c804
>  stderr: 2023-01-20T11:50:23.495+ 7fdeebd02700 -1 auth: unable to find
> a keyring on /var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file or
> directory
>  stderr: 2023-01-20T11:50:23.495+ 7fdeebd02700 -1
> AuthRegistry(0x7fdee4060d70) no keyring found at
> /var/lib/ceph/bootstrap-osd/ceph.keyring, disabling cephx
>  stderr: 2023-01-20T11:50:23.495+ 7fdeebd02700 -1 auth: unable to find
> a keyring on /var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file or
> directory
>  stderr: 2023-01-20T11:50:23.495+ 7fdeebd02700 -1
> AuthRegistry(0x7fdee4064440) no keyring found at
> /var/lib/ceph/bootstrap-osd/ceph.keyring, disabling cephx
>  stderr: 2023-01-20T11:50:23.499+ 7fdeebd02700 -1 auth: unable to find
> a keyring on /var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file or
> directory
>  stderr: 2023-01-20T11:50:23.499+ 7fdeebd02700 -1
> AuthRegistry(0x7fdeebd00ea0) no keyring found at
> /var/lib/ceph/bootstrap-osd/ceph.keyring, disabling cephx
>  stderr: 2023-01-20T11:50:23.503+ 7fdee929d700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support [1]
>  stderr: 2023-01-20T11:50:23.503+ 7fdeea29f700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support [1]
>  stderr: 2023-01-20T11:50:23.503+ 7fdee9a9e700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support [1]
>  stderr: 2023-01-20T11:50:23.503+ 7fdeebd02700 -1 monclient:
> authenticate NOTE: no keyring found; disabled cephx authentication
>  stderr: [errno 13] RADOS permission denied (error connecting to the
> cluster)
> -->  RuntimeError: Unable to create a new OSD id
>
> After manually running ln -s /etc/ceph/ceph.keyring /var/lib/ceph/bootstrap-osd/ I
> got the credentials from ceph auth ls and added them to the keyring
> file, respecting its syntax
>

I wouldn't do that, I think if you have the admin keyring you can do
something like `ceph auth get client.bootstrap-osd -o
/var/lib/ceph/bootstrap-osd/ceph.keyring`, but again, you don't need to do
that if you create your OSD with cephadm as I described above.

[client.bootstrap-osd]
> key = AQA5vcdj/pClABAAt9hDro+HC73wrZysJSHyAg==
> caps mon = "allow profile bootstrap-osd"
>
> Then it worked:
>
> root@dev0:/# ceph-volume raw prepare --bluestore --data /dev/sdb
> --block.db /dev/mapper/ssd0-ssd0_0
> Running command: /usr/bin/ceph-authtool --gen-print-key
> Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd
> --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
> 4d47af7e-cf8c-451a-8773-894854e3ce8a
> Running command: /usr/bin/ceph-authtool --gen-print-key
> Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
> Running command: /usr/bin/chown -R ceph:ceph /dev/sdb
> Running command: /usr/bin/ln -s /dev/sdb /var/lib/ceph/osd/ceph-3/block
> Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd
> --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o
> /var/lib/ceph/osd/ceph-3/activate.monmap
>  stderr: got monmap epoch 3
> --> Creating keyring file for osd.3
> Running command: /usr/bin/chown -R ceph:ceph
> /var/lib/ceph/osd/ceph-3/keyring
> Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
> Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ssd0-ssd0_0
> Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
> Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore
> bluestore --mkfs -i 3 --monmap 

[ceph-users] Re: Pools and classes

2023-01-23 Thread Massimo Sgaravatto
Any feedback? I would just like to be sure that I am using the right
procedure ...

Thanks, Massimo

On Fri, Jan 20, 2023 at 11:28 AM Massimo Sgaravatto <
massimo.sgarava...@gmail.com> wrote:

> Dear all
>
> I have a ceph cluster where so far all OSDs have been rotational hdd disks
> (actually there are some SSDs, used only for block.db and wal.db)
>
> I now want to add some SSD disks to be used as OSD. My use case is:
>
> 1) for the existing pools, keep using only hdd disks
> 2) create some new pools using only ssd disks
>
>
> Let's start with 1 (I haven't added the ssd disks to the cluster yet)
>
> I have some replicated pools and some ec pools. The replicated pools are
> using a replicated_ruleset rule [*].
> I created a new "replicated_hdd" rule [**] using the command:
>
> ceph osd crush rule create-replicated replicated_hdd default host hdd
>
> I then changed the crush rule of a existing pool (that was using
> 'replicated_ruleset') using the command:
>
>
> ceph osd pool set   crush_rule replicated_hdd
>
> This triggered the remapping of some pgs and therefore some data movement.
> Is this normal/expected, since for the time being I have only hdd osds ?
>
> Thanks, Massimo
>
>
>
> [*]
> rule replicated_ruleset {
> id 0
> type replicated
> min_size 1
> max_size 10
> step take default
> step chooseleaf firstn 0 type host
> step emit
> }
>
> [**]
> rule replicated_hdd {
> id 7
> type replicated
> min_size 1
> max_size 10
> step take default class hdd
> step chooseleaf firstn 0 type host
> step emit
> }
>
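For step 2, the analogous commands would presumably be the following; the pool name and pg counts are example values:

```shell
# Create a replicated rule restricted to the ssd device class
ceph osd crush rule create-replicated replicated_ssd default host ssd

# Create a new pool using that rule (pg/pgp counts are examples)
ceph osd pool create mypool_ssd 128 128 replicated replicated_ssd
```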


[ceph-users] Re: Pools and classes

2023-01-23 Thread Robert Sander

On 23.01.23 at 09:44, Massimo Sgaravatto wrote:


This triggered the remapping of some pgs and therefore some data movement.
Is this normal/expected, since for the time being I have only hdd osds ?


This is expected behaviour as the cluster map has changed. Internally 
the device classes are represented through "shadow" trees of the cluster 
topology.
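If you want to see those shadow trees for yourself, they can be listed with:

```shell
# Show the per-device-class shadow hierarchies (e.g. default~hdd, default~ssd)
ceph osd crush tree --show-shadow
```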


Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 220009 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin