Hi,
I have several clusters running Nautilus that are pending an upgrade to
Octopus. I am now testing the upgrade steps from Nautilus to Octopus
using cephadm adopt in a lab, following the link below:
- https://docs.ceph.com/en/octopus/cephadm/adoption/
Lab environment:
3 all-in-one nodes
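For reference, a rough sketch of the adoption commands from that page, assuming one mon, one mgr and a few OSDs per host (adjust the daemon names to your setup):

# list the legacy daemons cephadm can see on this host
cephadm ls
# adopt the mon and mgr first, one host at a time
cephadm adopt --style legacy --name mon.$(hostname -s)
cephadm adopt --style legacy --name mgr.$(hostname -s)
# then adopt each OSD on the host
cephadm adopt --style legacy --name osd.1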
Just some more info on this: it started happening after they added several
thousand objects to their buckets. While the client side times out, the
operation seems to keep running in Ceph for a very long time, happily
working away getting the stat info for their objects. It doesn't
appear to be
erm
Use ceph-ansible? :)
Go to GitHub and find the branch associated with the particular
release of Ceph you want to use, then try to follow the ceph-ansible docs on setup.
For example, to use Ceph Octopus, you are best off with the "STABLE-5" branch.
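A minimal sketch of getting that branch (assuming the Octopus branch on GitHub is named stable-5.0):

git clone https://github.com/ceph/ceph-ansible.git
cd ceph-ansible
git checkout stable-5.0
# each stable branch pins its own ansible version
pip install -r requirements.txt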
After that, a hint:
Just try
Hi Dave,
I see now what the problem was. Thanks a lot for the link.
Cheers
On 15/4/21 at 14:55, Dave Hall wrote:
Eneko,
For clarification, this is the link I used to fix my particular
docker issue:
https://github.com/Debian/docker.io/blob/master/debian/README.Debian
I am looking to rebuild my ceph cluster using Ansible. What is the best way
to start this process?
Eneko,
For clarification, this is the link I used to fix my particular
docker issue:
https://github.com/Debian/docker.io/blob/master/debian/README.Debian
The specific issue for me was as follows:
My cluster is running:
- Debian 10 with DefaultRelease=buster-backports
- Ceph packages
Cheers,
[root@s3db1 ~]# ceph daemon osd.23 perf dump | grep numpg
"numpg": 187,
"numpg_primary": 64,
"numpg_replica": 121,
"numpg_stray": 2,
"numpg_removing": 0,
On Thu, 15 Apr 2021 at 18:18, 胡 玮文 wrote:
> Hi Boris,
>
> Could you check something
Hi Boris,
Could you check something like
ceph daemon osd.23 perf dump | grep numpg
to see if there are some stray or removing PG?
Weiwen Hu
> On 15 Apr 2021, at 22:53, Boris Behrens wrote:
>
> Ah you are right.
> [root@s3db1 ~]# ceph daemon osd.23 config get bluestore_min_alloc_size_hdd
> {
>
Reading the thread "s3 requires twice the space it should use", Boris pointed
out that the fragmentation for the OSDs is around 0.8-0.9:
> On Thu, Apr 15, 2021 at 8:06 PM Boris Behrens wrote:
>> I also checked the fragmentation on the bluestore OSDs and it is around
>> 0.80 - 0.89 on most
Ah you are right.
[root@s3db1 ~]# ceph daemon osd.23 config get bluestore_min_alloc_size_hdd
{
"bluestore_min_alloc_size_hdd": "65536"
}
But I also checked how many objects our S3 holds, and the numbers just do not
add up.
There are only 26509200 objects, which would result in around 1 TB of "waste".
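A back-of-the-envelope check, assuming each RADOS object wastes on average half of a 64 KiB allocation unit (worst case a full unit):

echo $(( 26509200 * 32768 / 1024 / 1024 / 1024 ))   # ~809 GiB average-case waste
echo $(( 26509200 * 65536 / 1024 / 1024 / 1024 ))   # ~1618 GiB worst-case waste

So roughly 0.8-1.6 TB of allocation overhead, which matches the ~1 TB estimate above but cannot explain the missing space.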
Here's the recording for the meeting:
https://www.youtube.com/watch?v=5bemZ8opdhs
On Wed, Apr 14, 2021 at 1:34 AM John Zachary Dover wrote:
>
> This week's meeting will focus on the ongoing rewrite of the cephadm
> documentation and the upcoming Google Season of Docs project.
>
> Meeting:
So I need to live with it? Does a value of zero mean the default is used?
[root@s3db1 ~]# ceph daemon osd.23 config get bluestore_min_alloc_size
{
"bluestore_min_alloc_size": "0"
}
I also checked the fragmentation on the bluestore OSDs, and it is around
0.80-0.89 on most OSDs. Yikes.
[root@s3db1
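For reference, a sketch of how that fragmentation rating can be queried (assuming the allocator score admin socket command is available in your release; run it on the OSD's host):

ceph daemon osd.23 bluestore allocator score block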
Hi,
maybe it is just a problem with my understanding, but it looks like our S3
requires twice the space it should use.
I ran "radosgw-admin bucket stats", added up all the "size_kb_actual" values,
and divided to get TB (/1024/1024/1024).
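A sketch of that sum as a one-liner, assuming the stats come back as a JSON array and the per-bucket usage lives under usage["rgw.main"] (jq required):

radosgw-admin bucket stats \
  | jq '[.[].usage["rgw.main"].size_kb_actual // 0] | add / 1024 / 1024 / 1024'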
The resulting space is 135.1636733 TB. When I triple it because
Hi Dave,
On 14/4/21 at 19:15, Dave Hall wrote:
Radoslav,
I ran into the same thing. For Debian 10 with recent updates, you have to add
'cgroup_enable=memory swapaccount=1' to the kernel command line
(/etc/default/grub). The reference I found said that Debian decided to
disable this by default
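Roughly, on each node (a sketch; keep whatever parameters are already on the line):

# /etc/default/grub
GRUB_CMDLINE_LINUX="... cgroup_enable=memory swapaccount=1"

# then regenerate the grub config and reboot
update-grub
reboot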
On 13/04/2021 11:07, Dan van der Ster wrote:
On Tue, Apr 13, 2021 at 9:00 AM Wido den Hollander wrote:
On 4/12/21 5:46 PM, Dan van der Ster wrote:
Hi all,
bdev_enable_discard has been in Ceph for several major releases now,
but it is still off by default.
Did anyone try it recently --
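If anyone wants to try it, a minimal sketch for a single test OSD (osd.123 is a placeholder; as far as I know the option is only picked up when the OSD starts, so a restart is needed):

ceph config set osd.123 bdev_enable_discard true
ceph orch daemon restart osd.123    # or: systemctl restart ceph-osd@123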
Thanks Igor and Neha for the quick responses.
I posted an osd log with debug_osd 10 and debug_bluestore 20:
ceph-post-file: 09094430-abdb-4248-812c-47b7babae06c
Hope that helps,
Dan
On Thu, Apr 15, 2021 at 1:27 AM Neha Ojha wrote:
>
> We saw this warning once in testing
>