[ceph-users] ceph-iscsi issue after upgrading from nautilus to octopus

2021-04-15 Thread icy chan
Hi, I have several clusters running Nautilus that are pending an upgrade to Octopus. I am now testing the upgrade steps for a Ceph cluster from Nautilus to Octopus using cephadm adopt in a lab, following the link below: - https://docs.ceph.com/en/octopus/cephadm/adoption/ Lab environment: 3 all-in-one
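For readers following along, the adoption flow in the linked guide looks roughly like the sketch below (hostnames and OSD IDs are placeholders; the guide itself is authoritative on ordering and prerequisites):

    # confirm cephadm can see the legacy (package-deployed) daemons on each host
    cephadm ls
    # adopt the monitor and manager daemons on each host
    cephadm adopt --style legacy --name mon.$(hostname -s)
    cephadm adopt --style legacy --name mgr.$(hostname -s)
    # enable the orchestrator once the mgr daemons are adopted
    ceph mgr module enable cephadm
    ceph orch set backend cephadm
    # then adopt the OSDs, host by host
    cephadm adopt --style legacy --name osd.0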

[ceph-users] Re: Swift Stat Timeout

2021-04-15 Thread Dylan Griff
Just some more info on this: it started happening after they added several thousand objects to their buckets. While the client side times out, the operation seems to proceed in Ceph for a very long time, happily working away getting the stat info for their objects. It doesn't appear to be

[ceph-users] Re: Fresh install of Ceph using Ansible

2021-04-15 Thread Philip Brown
Erm, use ceph-ansible? :) Go to GitHub and find the correct branch associated with the particular release of Ceph you want to use, then try to follow the ceph-ansible setup docs. For example, to use Ceph Octopus you are best off with the "STABLE-5" branch. After that, a hint: just try
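As a rough illustration of those steps (the exact branch name and the inventory file are assumptions here; check the ceph-ansible docs for your release):

    git clone https://github.com/ceph/ceph-ansible.git
    cd ceph-ansible
    git checkout stable-5.0          # the Octopus branch referred to as "STABLE-5" above
    pip install -r requirements.txt  # installs the Ansible version matching the branch
    cp site.yml.sample site.yml      # then fill in group_vars/ and an inventory file
    ansible-playbook -i hosts.ini site.yml   # "hosts.ini" is a placeholder inventory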

[ceph-users] Re: Cephadm upgrade to Pacific problem

2021-04-15 Thread Eneko Lacunza
Hi Dave, I see now what the problem was. Thanks a lot for the link. Cheers On 15/4/21 at 14:55, Dave Hall wrote: Eneko, For clarification, this is the link I used to fix my particular Docker issue: https://github.com/Debian/docker.io/blob/master/debian/README.Debian

[ceph-users] Fresh install of Ceph using Ansible

2021-04-15 Thread Jared Jacob
I am looking to rebuild my Ceph cluster using Ansible. What is the best way to start this process?

[ceph-users] Re: Cephadm upgrade to Pacific problem

2021-04-15 Thread Dave Hall
Eneko, For clarification, this is the link I used to fix my particular docker issue: https://github.com/Debian/docker.io/blob/master/debian/README.Debian The specific issue for me was as follows: My cluster is running: - Debian 10 with DefaultRelease=buster-backports - Ceph packages

[ceph-users] Re: s3 requires twice the space it should use

2021-04-15 Thread Boris Behrens
Cheers, [root@s3db1 ~]# ceph daemon osd.23 perf dump | grep numpg "numpg": 187, "numpg_primary": 64, "numpg_replica": 121, "numpg_stray": 2, "numpg_removing": 0, On Thu, 15 Apr 2021 at 18:18, 胡 玮文 wrote: > Hi Boris, > > Could you check something

[ceph-users] Re: s3 requires twice the space it should use

2021-04-15 Thread 胡 玮文
Hi Boris, Could you check something like ceph daemon osd.23 perf dump | grep numpg to see if there are any stray or removing PGs? Weiwen Hu > On 15 Apr 2021, at 22:53, Boris Behrens wrote: > > Ah you are right. > [root@s3db1 ~]# ceph daemon osd.23 config get bluestore_min_alloc_size_hdd > { >
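A quick way to run that check across every OSD on a host might look like this (a sketch; it assumes the default admin socket location under /var/run/ceph):

    for sock in /var/run/ceph/ceph-osd.*.asok; do
        echo "${sock}:"
        ceph daemon "$sock" perf dump | grep -E '"numpg_(stray|removing)"'
    done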

[ceph-users] How to handle bluestore fragmentation

2021-04-15 Thread David Caro
Reading the thread "s3 requires twice the space it should use", Boris pointed out that the fragmentation for the osds is around 0.8-0.9: > On Thu, Apr 15, 2021 at 8:06 PM Boris Behrens wrote: >> I also checked the fragmentation on the bluestore OSDs and it is around >> 0.80 - 0.89 on most
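One way to read a BlueStore OSD's fragmentation score is the allocator score command on the admin socket (a sketch; osd.23 is only an example):

    ceph daemon osd.23 bluestore allocator score block
    # prints a fragmentation rating between 0 (none) and 1 (fully fragmented)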

[ceph-users] Re: s3 requires twice the space it should use

2021-04-15 Thread Boris Behrens
Ah you are right. [root@s3db1 ~]# ceph daemon osd.23 config get bluestore_min_alloc_size_hdd { "bluestore_min_alloc_size_hdd": "65536" } But I also checked how many objects our S3 holds and the numbers just do not add up. There are only 26509200 objects, which would result in around 1 TB of "waste"
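Rough arithmetic behind that estimate (a back-of-envelope figure, before replication): with a 64 KiB allocation unit, each RADOS object loses at most one unit, and about half a unit on average, to padding in its final partial block:

    26,509,200 objects x 64 KiB ≈ 1.6 TiB   (worst case)
    26,509,200 objects x 32 KiB ≈ 0.8 TiB   (average case)

so roughly 1 TB of allocation overhead is plausible, yet nowhere near enough on its own to explain a doubling of the used space.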

[ceph-users] Re: DocuBetter Meeting This Week -- 1630 UTC

2021-04-15 Thread Mike Perez
Here's the recording for the meeting: https://www.youtube.com/watch?v=5bemZ8opdhs On Wed, Apr 14, 2021 at 1:34 AM John Zachary Dover wrote: > > This week's meeting will focus on the ongoing rewrite of the cephadm > documentation and the upcoming Google Season of Docs project. > > Meeting:

[ceph-users] Re: s3 requires twice the space it should use

2021-04-15 Thread Boris Behrens
So, do I need to live with it? Does a value of zero mean the default is used? [root@s3db1 ~]# ceph daemon osd.23 config get bluestore_min_alloc_size { "bluestore_min_alloc_size": "0" } I also checked the fragmentation on the bluestore OSDs and it is around 0.80 - 0.89 on most OSDs. Yikes. [root@s3db1

[ceph-users] s3 requires twice the space it should use

2021-04-15 Thread Boris Behrens
Hi, maybe it is just a problem with my understanding, but it looks like our S3 requires twice the space it should use. I ran "radosgw-admin bucket stats", added up all the "size_kb_actual" values, and converted to TB (/1024/1024/1024). The resulting space is 135.1636733 TB. When I triple it because
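If jq is available, that summation can be done in one line (a sketch, assuming each bucket reports its usage under the "rgw.main" section):

    radosgw-admin bucket stats | jq '[.[].usage["rgw.main"].size_kb_actual // 0] | add'
    # prints the total in KiB; divide by 1024^3 for TiB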

[ceph-users] Re: [External Email] Cephadm upgrade to Pacific problem

2021-04-15 Thread Eneko Lacunza
Hi Dave, On 14/4/21 at 19:15, Dave Hall wrote: Radoslav, I ran into the same. For Debian 10 - recent updates - you have to add 'cgroup_enable=memory swapaccount=1' to the kernel command line (/etc/default/grub). The reference I found said that Debian decided to disable this by default
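On Debian, the change described above usually amounts to something like the following (a sketch; back up /etc/default/grub first and merge the parameters into any existing GRUB_CMDLINE_LINUX value):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
    # then regenerate the bootloader config and reboot
    update-grub && reboot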

[ceph-users] Re: has anyone enabled bdev_enable_discard?

2021-04-15 Thread Wido den Hollander
On 13/04/2021 11:07, Dan van der Ster wrote: On Tue, Apr 13, 2021 at 9:00 AM Wido den Hollander wrote: On 4/12/21 5:46 PM, Dan van der Ster wrote: Hi all, bdev_enable_discard has been in ceph for several major releases now but it is still off by default. Did anyone try it recently --
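For anyone who wants to experiment, the option can be flipped with the config subsystem (a sketch only: it is off by default, the related bdev_async_discard option may also be of interest, and BlueStore likely only picks it up when the OSD starts, so a restart is needed):

    ceph config set osd bdev_enable_discard true
    # restart the OSDs for the setting to take effect; test on a small set of OSDs first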

[ceph-users] Re: _delete_some new onodes has appeared since PG removal started

2021-04-15 Thread Dan van der Ster
Thanks Igor and Neha for the quick responses. I posted an osd log with debug_osd 10 and debug_bluestore 20: ceph-post-file: 09094430-abdb-4248-812c-47b7babae06c Hope that helps, Dan On Thu, Apr 15, 2021 at 1:27 AM Neha Ojha wrote: > > We saw this warning once in testing >
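The debug levels and log upload mentioned above can typically be reproduced along these lines, run on the OSD's host (a sketch; osd.0 and the log path are placeholders):

    ceph daemon osd.0 config set debug_osd 10
    ceph daemon osd.0 config set debug_bluestore 20
    # reproduce the issue, then upload the log for the developers
    ceph-post-file /var/log/ceph/ceph-osd.0.log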