[ceph-users] Re: ceph cluster iops low

2023-01-24 Thread Konstantin Shalygin
Hi, Your SSD is a "desktop" SSD, not an "enterprise" SSD, see [1]. These are mostly not suitable for Ceph. [1] https://yourcmc.ru/wiki/Ceph_performance#CAPACITORS.21 k > On 25 Jan 2023, at 05:35, peter...@raksmart.com wrote: > > Hi Mark, > Thanks for your response, it helps! > Our Ceph cluster
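The linked page rates drives by single-threaded sync-write latency, which consumer drives without power-loss-protection capacitors handle poorly. A minimal sketch of that kind of test (the device path is a placeholder, and the test is destructive to data on the device):
  # 4k queue-depth-1 sync writes, the pattern Ceph journals/WAL generate
  fio --name=journal-test --filename=/dev/sdX --ioengine=libaio \
      --direct=1 --sync=1 --rw=write --bs=4k --iodepth=1 --runtime=60 --time_based
Enterprise SSDs with capacitors typically sustain tens of thousands of IOPS here; desktop models often drop to a few hundred.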

[ceph-users] Re: Mount ceph using FQDN

2023-01-24 Thread Konstantin Shalygin
Hi, Do you think the kernel should care about DNS resolution? k > On 24 Jan 2023, at 19:07, kushagra.gu...@hsc.com wrote: > > Hi team, > > We have a ceph cluster with 3 storage nodes: > 1. storagenode1 - abcd:abcd:abcd::21 > 2. storagenode2 - abcd:abcd:abcd::22 > 3. storagenode3 -

[ceph-users] Re: ceph cluster iops low

2023-01-24 Thread petersun
Hi Mark, Thanks for your response, it helps! Our Ceph cluster uses Samsung SSD 870 EVO drives, all backed with NVMe drives: 12 SSDs to 2 NVMe drives per storage node, each 4TB SSD backed by a 283G NVMe LVM partition as DB. Now cluster throughput is only 300M write, and around 5K IOPS. I could see NVMe
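To narrow down whether the SATA SSDs or the shared NVMe DB devices are the bottleneck, a hedged starting point (standard tools, not commands from the thread):
  # per-OSD commit/apply latency as reported by the cluster
  ceph osd perf
  # per-device utilization and await times on a storage node (sysstat package)
  iostat -xm 1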

[ceph-users] Re: Problems with autoscaler (overlapping roots) after changing the pool class

2023-01-24 Thread Eugen Block
Hi, what you can’t change with EC pools is the EC profile; the pool’s ruleset you can change. The fix is the same as for the replicated pools: assign a ruleset with the hdd class, and after some data movement the autoscaler should not complain anymore. Regards Eugen Zitat von Massimo
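For the EC pool itself the profile cannot be changed, but a new device-class-specific rule can be created and assigned; a sketch with placeholder names, where k/m must match the pool's existing profile:
  # the new profile is only used as a template for the rule
  ceph osd erasure-code-profile set ec-hdd-profile k=<k> m=<m> crush-device-class=hdd
  ceph osd crush rule create-erasure ec-hdd-rule ec-hdd-profile
  ceph osd pool set <ec-pool> crush_rule ec-hdd-rule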

[ceph-users] Octopus mgr doesn't resume after boot

2023-01-24 Thread Renata Callado Borges
Dear all, I have a two-host setup, and I recently rebooted a mgr machine without the "set noout" and "set norebalance" commands. The "darkside2" machine is the cephadm machine, and "darkside3" is the improperly rebooted mgr. Now the darkside3 machine does not resume ceph configuration:
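A hedged first pass for a cephadm-managed mgr that did not come back after a reboot (daemon names below are placeholders):
  # list the daemons cephadm expects on each host and their current state
  ceph orch ps
  # restart the mgr daemon reported for darkside3, e.g.
  ceph orch daemon restart mgr.darkside3.xxxxxx
  # if a stale instance is still shown as the active mgr, force a failover
  ceph mgr fail <active-mgr>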

[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-24 Thread Josh Durgin
Looks good to go! On Tue, Jan 24, 2023 at 7:57 AM Yuri Weinstein wrote: > Josh, this is ready for your final review/approval and publishing > > Release notes - https://github.com/ceph/ceph/pull/49839 > > On Tue, Jan 24, 2023 at 4:00 AM Venky Shankar wrote: > > > > On Mon, Jan 23, 2023 at 11:22

[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-24 Thread Yuri Weinstein
Josh, this is ready for your final review/approval and publishing Release notes - https://github.com/ceph/ceph/pull/49839 On Tue, Jan 24, 2023 at 4:00 AM Venky Shankar wrote: > > On Mon, Jan 23, 2023 at 11:22 PM Yuri Weinstein wrote: > > > > Ilya, Venky > > > > rbd, krbd, fs reruns are almost

[ceph-users] Mount ceph using FQDN

2023-01-24 Thread kushagra . gupta
Hi team, We have a ceph cluster with 3 storage nodes: 1. storagenode1 - abcd:abcd:abcd::21 2. storagenode2 - abcd:abcd:abcd::22 3. storagenode3 - abcd:abcd:abcd::23 We have a DNS server with IP abcd:abcd:abcd::31 which resolves the above IPs to a single hostname. The resolution is as
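To confirm that the hostname really returns all three monitor addresses, a hedged check against the DNS server described above (the hostname itself is not given in the preview, so it is a placeholder):
  # query the AAAA records directly from the cluster's DNS server
  dig +short AAAA storagenode.example.com @abcd:abcd:abcd::31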

[ceph-users] Problems with autoscaler (overlapping roots) after changing the pool class

2023-01-24 Thread Massimo Sgaravatto
Dear all, I have just changed the crush rule for all the replicated pools in the following way: ceph osd crush rule create-replicated replicated_hdd default host hdd ceph osd pool set crush_rule replicated_hdd See also this [*] thread. Before applying this change, these pools were all using the
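The autoscaler's "overlapping roots" complaint comes from pools whose rules resolve to different (shadow) roots such as default and default~hdd; two hedged commands to see which pools still reference which root:
  # autoscaler view, including the CRUSH root each pool maps to
  ceph osd pool autoscale-status
  # CRUSH tree including the per-device-class shadow roots (default~hdd, default~ssd)
  ceph osd crush tree --show-shadow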

[ceph-users] Integrating openstack/swift to ceph cluster

2023-01-24 Thread Michel Niyoyita
Hello team, I have deployed a ceph pacific cluster using ceph-ansible running on ubuntu 20.04, which has 3 OSD hosts and 3 mons; on each OSD host we have 20 OSDs. I am integrating swift in the cluster but I fail to find the policy and upload objects in the container. I have deployed
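A minimal, hedged sequence for creating a Swift-style user on radosgw and uploading to a container (user, endpoint and container names are placeholders, not taken from this deployment):
  radosgw-admin user create --uid=swiftuser --display-name="Swift test user"
  radosgw-admin subuser create --uid=swiftuser --subuser=swiftuser:swift --access=full
  radosgw-admin key create --subuser=swiftuser:swift --key-type=swift --gen-secret
  # then, using the generated swift secret key:
  swift -A http://rgw-host:8080/auth/1.0 -U swiftuser:swift -K '<swift_key>' upload testcontainer ./file.txt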

[ceph-users] OSDs fail to start after stopping them with ceph osd stop command

2023-01-24 Thread Stefan Hanreich
We encountered the following problems while trying to perform maintenance on a Ceph cluster: The cluster consists of 7 nodes with 10 OSDs each. There are 4 pools on it: 3 of them are replicated pools with 3/2 size/min_size and one is an erasure coded pool with m=2 and k=5. The following
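For reference, ceph osd stop only asks the daemons to shut down; it does not bring them back, so after maintenance the daemons have to be restarted on the host side, e.g. (OSD id and deployment style are assumptions):
  # cephadm-managed cluster
  ceph orch daemon restart osd.12
  # package/systemd-managed cluster
  systemctl restart ceph-osd@12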

[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-01-24 Thread Robert Sander
Hi, you can also use SRV records in DNS to publish the IPs of the MONs. Read https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/ for more info. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 /
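Per that documentation page, clients look up SRV records for the service name ceph-mon (configurable via mon_dns_srv_name); a hedged sketch of what the zone entries look like for a placeholder domain example.com:
  _ceph-mon._tcp.example.com. 60 IN SRV 10 60 6789 storagenode1.example.com.
  _ceph-mon._tcp.example.com. 60 IN SRV 10 60 6789 storagenode2.example.com.
  _ceph-mon._tcp.example.com. 60 IN SRV 10 60 6789 storagenode3.example.com.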

[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-01-24 Thread Robert Sander
Hi, On 24.01.23 15:02, Lokendra Rathour wrote: My /etc/ceph/ceph.conf is as follows: [global] fsid = 7969b8a3-1df7-4eae-8ccf-2e5794de87fe mon host =
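For reference, with IPv6 monitors the mon host syntax brackets each address and can list both messenger versions, roughly as below (reusing the addresses from the thread; ports are the defaults and the actual file may differ):
  mon host = [v2:[abcd:abcd:abcd::21]:3300,v1:[abcd:abcd:abcd::21]:6789],[v2:[abcd:abcd:abcd::22]:3300,v1:[abcd:abcd:abcd::22]:6789],[v2:[abcd:abcd:abcd::23]:3300,v1:[abcd:abcd:abcd::23]:6789]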

[ceph-users] Re: Mds crash at cscs

2023-01-24 Thread Venky Shankar
On Thu, Jan 19, 2023 at 9:07 PM Lo Re Giuseppe wrote: > > Dear all, > > We have started to use more intensively cephfs for some wlcg related workload. > We have 3 active mds instances spread on 3 servers, > mds_cache_memory_limit=12G, most of the other configs are default ones. > One of them has
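For reference, the cache limit mentioned above is set in bytes; a hedged example of setting and inspecting it centrally (12G corresponds to 12884901888 bytes):
  ceph config set mds mds_cache_memory_limit 12884901888
  ceph config get mds mds_cache_memory_limit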

[ceph-users] deploying Ceph using FQDN for MON / MDS Services

2023-01-24 Thread Lokendra Rathour
Hi Team, We have a ceph cluster with 3 storage nodes: 1. storagenode1 - abcd:abcd:abcd::21 2. storagenode2 - abcd:abcd:abcd::22 3. storagenode3 - abcd:abcd:abcd::23 The requirement is to mount ceph using the domain name of the MON node. Note: we resolved the domain name via a DNS server. For
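A hedged example of what such a mount looks like once the name resolves to the monitors (hostname, mount point and secret file are placeholders; the userspace mount.ceph helper resolves the name and hands the kernel the resulting IPs):
  mount -t ceph storagenode.example.com:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret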

[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-24 Thread Venky Shankar
On Mon, Jan 23, 2023 at 11:22 PM Yuri Weinstein wrote: > > Ilya, Venky > > rbd, krbd, fs reruns are almost ready, pls review/approve fs approved. > > On Mon, Jan 23, 2023 at 2:30 AM Ilya Dryomov wrote: > > > > On Fri, Jan 20, 2023 at 5:38 PM Yuri Weinstein wrote: > > > > > > The overall

[ceph-users] Re: 16.2.11 pacific QE validation status

2023-01-24 Thread Ilya Dryomov
On Mon, Jan 23, 2023 at 6:51 PM Yuri Weinstein wrote: > > Ilya, Venky > > rbd, krbd, fs reruns are almost ready, pls review/approve rbd and krbd approved. Thanks, Ilya ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe