[ceph-users] v16.2.1 Pacific released

2021-04-19 Thread David Galloway
This is the first bugfix release in the Pacific stable series. It addresses a security vulnerability in the Ceph authentication framework. We recommend users update to this release. For detailed release notes with links and a changelog, please refer to the official blog entry at
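For reference, a minimal upgrade sketch for a cephadm-managed cluster (the version string is this release; hosts and scheduling are site-specific, and package-based clusters upgrade via the usual distro channels instead):

   # check which versions the daemons are currently running
   ceph versions
   # start a rolling upgrade to the fixed release (cephadm-managed clusters only)
   ceph orch upgrade start --ceph-version 16.2.1
   # follow progress
   ceph orch upgrade status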

[ceph-users] v14.2.20 Nautilus released

2021-04-19 Thread David Galloway
This is the 20th bugfix release in the Nautilus stable series. It addresses a security vulnerability in the Ceph authentication framework. We recommend users update to this release. For detailed release notes with links and a changelog, please refer to the official blog entry at

[ceph-users] v15.2.11 Octopus released

2021-04-19 Thread David Galloway
This is the 11th bugfix release in the Octopus stable series. It addresses a security vulnerability in the Ceph authentication framework. We recommend users update to this release. For detailed release notes with links and a changelog, please refer to the official blog entry at

[ceph-users] EC Backfill Observations

2021-04-19 Thread Josh Baergen
Hey all, I wanted to confirm my understanding of some of the mechanics of backfill in EC pools. I've yet to find a document that outlines this in detail; if there is one, please send it my way. :) Some of what I write below is likely in the "well, duh" category, but I tended towards completeness.
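A few commands that are handy when watching this in practice (illustrative only; pool and profile names are placeholders):

   # which PGs are currently backfilling
   ceph pg dump pgs_brief | grep -i backfill
   # the erasure-code profile (k, m, plugin) behind a given pool
   ceph osd pool get <ec-pool> erasure_code_profile
   ceph osd erasure-code-profile get <profile-name>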

[ceph-users] Re: any experience on using Bcache on top of HDD OSD

2021-04-19 Thread Richard Bade
Hi, I have also used bcache extensively on filestore with journals on SSD for at least 5 years. This has worked very well in all versions up to Luminous. The IOPS improvement was definitely beneficial for VM disk images in RBD. I am also using it under BlueStore with DB/WAL on NVMe on both
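For anyone wanting to try this, a rough bcache setup sketch (device names are examples only; udev normally registers the devices automatically, and writeback mode should be tested carefully before trusting it with OSD data):

   # create a backing device on the HDD and a cache device on the SSD/NVMe
   make-bcache -B /dev/sdb
   make-bcache -C /dev/nvme0n1p1
   # attach the cache set to the backing device (cset UUID from bcache-super-show)
   echo <cset-uuid> > /sys/block/bcache0/bcache/attach
   # optionally switch to writeback caching
   echo writeback > /sys/block/bcache0/bcache/cache_mode
   # then hand /dev/bcache0 to ceph-volume as the OSD data device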

[ceph-users] Re: HBA vs caching Raid controller

2021-04-19 Thread Marc
This is what I have when I query Prometheus; most HDDs are still SATA 5400 rpm, and there are also some SSDs. I also did not optimize CPU frequency settings. (Ignore the instance=c03 label; that is just because the data comes from mgr c03, and these drives are on different hosts.)
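For anyone wanting to reproduce the numbers, the mgr prometheus module exports per-OSD latency gauges; a query along these lines should give a similar picture (assuming your mgr version exports a device_class label on ceph_osd_metadata):

   ceph_osd_commit_latency_ms
   ceph_osd_apply_latency_ms
   # average commit latency per device class
   avg by (device_class) (ceph_osd_commit_latency_ms * on (ceph_daemon) group_left (device_class) ceph_osd_metadata)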

[ceph-users] Re: HBA vs caching Raid controller

2021-04-19 Thread Nico Schottelius
Marc writes: >> For the background: we have many Perc H800+MD1200 [1] systems running >> with >> 10TB HDDs (raid0, read ahead, writeback cache). >> One server has LSI SAS3008 [0] instead of the Perc H800, >> which comes with 512MB RAM + BBU. On most servers latencies are around >> 4-12ms

[ceph-users] Re: HBA vs caching Raid controller

2021-04-19 Thread Marc
> For the background: we have many Perc H800+MD1200 [1] systems running > with > 10TB HDDs (raid0, read ahead, writeback cache). > One server has LSI SAS3008 [0] instead of the Perc H800, > which comes with 512MB RAM + BBU. On most servers latencies are around > 4-12ms (average 6ms), on the system

[ceph-users] HBA vs caching Raid controller

2021-04-19 Thread Nico Schottelius
Good evening, I have to tackle an old, probably recurring topic: HBAs vs. RAID controllers. While generally speaking many people in the Ceph field recommend going with HBAs, it seems that in our infrastructure the only server we phased in with an HBA rather than a RAID controller is actually doing worse in
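For comparing the two setups at the Ceph level rather than per device, these are quick to run (the OSD-to-host mapping is of course site-specific):

   # per-OSD commit/apply latency as reported by the OSDs themselves
   ceph osd perf
   # map OSDs to hosts so HBA and RAID-controller machines can be compared
   ceph osd tree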

[ceph-users] Re: BlueFS spillover detected (Nautilus 14.2.16)

2021-04-19 Thread by morphin
Thanks for the answer. It seems very easy. I've never played with RocksDB options before; I have always used the defaults. I think I need to experiment with them more, but I couldn't find a good config reference for understanding the Ceph side. Can I use this guide instead?

[ceph-users] Logging to Graylog

2021-04-19 Thread Andrew Walker-Brown
Hi All, I want to send Ceph logs out to an external Graylog server. I’ve configured the Graylog host IP using “ceph config set global log_graylog_host x.x.x.x” and enabled logging through the Ceph dashboard (I’m running Octopus 15.2.9 – container based). I’ve also set up a GELF UDP input on
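For completeness, the remaining knobs that appear to be involved (option names from the Ceph config reference; host and port values are examples):

   ceph config set global log_to_graylog true
   ceph config set global err_to_graylog true
   ceph config set global log_graylog_host x.x.x.x
   ceph config set global log_graylog_port 12201
   # the cluster log (what 'ceph -w' shows) can be sent as well
   ceph config set global clog_to_graylog true
   ceph config set global mon_cluster_log_to_graylog true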

[ceph-users] Re: Documentation of the LVM metadata format

2021-04-19 Thread Dimitri Savineau
So that's a bug ;) https://github.com/ceph/ceph/blob/master/src/ceph-volume/ceph_volume/devices/lvm/activate.py#L248-L251 This doesn't honor the --no-systemd flag. But this should work when you're not using the --all option. Dimitri On Mon, Apr 19, 2021 at 10:41 AM Nico Schottelius <

[ceph-users] Re: any experience on using Bcache on top of HDD OSD

2021-04-19 Thread Matthias Ferdinand
On Sun, Apr 18, 2021 at 10:31:30PM +0200, huxia...@horebdata.cn wrote: > Dear Cephers, > > Just curious about any one who has some experience on using Bcache on top of > HDD OSD to accelerate IOPS performance? > > If any, how about the stability and the performance improvement, and for how >

[ceph-users] Re: Upgrade and lost osds Operation not permitted

2021-04-19 Thread Behzad Khoshbakhti
Thanks, by commenting out the ProtectClock directive the issue is resolved. Thanks for the support. On Sun, Apr 18, 2021 at 9:28 AM Lomayani S. Laizer wrote: > Hello, > > Uncomment ProtectClock=true in /lib/systemd/system/ceph-osd@.service > should fix the issue > > > > On Thu, Apr 8, 2021 at 9:49
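Rather than editing the packaged unit file (which the next package upgrade may overwrite), a systemd drop-in that overrides the directive achieves the same; a sketch:

   systemctl edit ceph-osd@.service
   # in the editor that opens, add:
   #   [Service]
   #   ProtectClock=false
   systemctl daemon-reload
   systemctl restart ceph-osd@<id>.service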

[ceph-users] Re: Documentation of the LVM metadata format

2021-04-19 Thread Nico Schottelius
Hey Dimitri, because --no-systemd still requires systemd: [19:03:00] server20.place6:~# ceph-volume lvm activate --all --no-systemd --> Executable systemctl not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin --> FileNotFoundError: [Errno 2] No such file or directory:

[ceph-users] Re: Documentation of the LVM metadata format

2021-04-19 Thread Dimitri Savineau
Hi, > My background is that ceph-volume activate does not work on non-systemd Linux distributions Why not use the --no-systemd option with the ceph-volume activate command? The systemd part only enables and starts the service, but the tmpfs part should work if you're not using systemd

[ceph-users] BlueFS spillover detected (Nautilus 14.2.16)

2021-04-19 Thread by morphin
Hello. I'm trying to fix a misconfigured cluster deployment (Nautilus 14.2.16). Cluster usage is 40%, an EC pool with RGW. Every node has: 20 x OSD = TOSHIBA MG08SCA16TEY 16.0TB, 2 x DB = NVMe PM1725b 1.6TB (Linux mdadm RAID1). NVMe usage always goes to around 90-99%. With "iostat -xdh 1" r/s w/s
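A couple of commands that help quantify the spillover before touching any RocksDB options (the OSD id is a placeholder; the second command runs on the OSD host):

   # which OSDs report spillover, and how much
   ceph health detail | grep -i spillover
   # DB vs. slow-device usage for a single OSD
   ceph daemon osd.<id> perf dump bluefs | grep -E 'db_|slow_'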

[ceph-users] Re: Documentation of the LVM metadata format

2021-04-19 Thread Nico Schottelius
The best questions are the ones that one can answer oneself. The great documentation on https://docs.ceph.com/en/latest/dev/ceph-volume/lvm/ gives the right pointers. The right search term is "lvm list tags", which results in something like this: [15:56:04] server20.place6:~# lvs -o lv_tags
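To make that concrete, a rough sketch of what a manual, systemd-less activation of a BlueStore OSD can look like once the tags are known (paths and IDs are placeholders; treat this as an outline of what ceph-volume does, not a recipe):

   # read the tags ceph-volume stored on the LV
   lvs -o lv_tags,lv_path
   # recreate the tmpfs-backed OSD directory and populate it from the block device
   mkdir -p /var/lib/ceph/osd/ceph-<id>
   mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-<id>
   ceph-bluestore-tool prime-osd-dir --dev /dev/<vg>/<lv> --path /var/lib/ceph/osd/ceph-<id>
   ln -sf /dev/<vg>/<lv> /var/lib/ceph/osd/ceph-<id>/block
   chown -R ceph:ceph /var/lib/ceph/osd/ceph-<id>
   ceph-osd --id <id> --setuser ceph --setgroup ceph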

[ceph-users] Re: cephadm: how to create more than 1 rgw per host

2021-04-19 Thread i...@z1storage.com
Hi Sebastian, Thank you. Is there a way to create more than 1 rgw per host until this new feature is released? On 2021/04/19 11:39, Sebastian Wagner wrote: Hi Ivan, this is a feature that is not yet released in Pacific. It seems the documentation is a bit ahead of time right now.

[ceph-users] Re: [Suspicious newsletter] cleanup multipart in radosgw

2021-04-19 Thread Szabo, Istvan (Agoda)
Hi, you have two options. The first is to use the S3 Browser app: in its menu, select the multipart uploads and clean them up. The other is like this: set a lifecycle policy. On the client: vim lifecyclepolicy <LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
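For reference, an abort-incomplete-multipart lifecycle policy looks roughly like this (bucket name, day count and the s3cmd invocation are examples; aws-cli or boto work just as well):

   cat > lifecyclepolicy <<'EOF'
   <LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
     <Rule>
       <ID>abort-incomplete-mpu</ID>
       <Filter><Prefix></Prefix></Filter>
       <Status>Enabled</Status>
       <AbortIncompleteMultipartUpload>
         <DaysAfterInitiation>7</DaysAfterInitiation>
       </AbortIncompleteMultipartUpload>
     </Rule>
   </LifecycleConfiguration>
   EOF
   s3cmd setlifecycle lifecyclepolicy s3://<bucket>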

[ceph-users] Documentation of the LVM metadata format

2021-04-19 Thread Nico Schottelius
Good morning, is there any documentation available regarding the metadata stored within LVM that ceph-volume manages / creates? My background is that ceph-volume activate does not work on non-systemd Linux distributions, but if I know how to recreate the tmpfs, we can easily start the OSD

[ceph-users] Re: Octopus - unbalanced OSDs

2021-04-19 Thread Ml Ml
Anyone have an idea? :) On Fri, Apr 16, 2021 at 3:09 PM Ml Ml wrote: > > Hello List, > > any ideas why my OSDs are that unbalanced ? > > root@ceph01:~# ceph -s > cluster: > id: 5436dd5d-83d4-4dc8-a93b-60ab5db145df > health: HEALTH_WARN > 1 nearfull osd(s) > 4

[ceph-users] Re: [Suspicious newsletter] cleanup multipart in radosgw

2021-04-19 Thread Boris Behrens
Hi Istvan, both of them require bucket access, correct? Is there a way to add the LC policy globally? Cheers Boris On Mon, 19 Apr 2021 at 11:58, Szabo, Istvan (Agoda) < istvan.sz...@agoda.com> wrote: > Hi, > > You have 2 ways: > > First is using s3vrowser app and in the menu select the

[ceph-users] Radosgw - WARNING: couldn't find acl header for object, generating default

2021-04-19 Thread by morphin
Hello. I have an RGW bucket (versioning=on), and there were objects like this: radosgw-admin object stat --bucket=xdir --object=f5492238-50cb-4bc2-93fa-424869018946 { "name": "f5492238-50cb-4bc2-93fa-424869018946", "size": 0, "tag": "", "attrs": { "user.rgw.manifest": "",

[ceph-users] cleanup multipart in radosgw

2021-04-19 Thread Boris Behrens
Hi, is there a way to remove multipart uploads that are older than X days? It doesn't need to be built into Ceph or fully automated; just something I don't need to build on my own. I'm currently trying to debug a problem where Ceph reports a lot more used space than it actually requires (

[ceph-users] Re: cephadm: how to create more than 1 rgw per host

2021-04-19 Thread Sebastian Wagner
Hi Ivan, this is a feature that is not yet released in Pacific. It seems the documentation is a bit ahead of the release right now. Sebastian On Fri, Apr 16, 2021 at 10:58 PM i...@z1storage.com wrote: > Hello, > > According to the documentation, there's count-per-host key to 'ceph > orch', but it
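For readers finding this thread later: once the feature is available, the documented approach is a placement spec with count_per_host, along these lines (service id, hosts and the count are examples):

   cat > rgw.yaml <<'EOF'
   service_type: rgw
   service_id: myrgw
   placement:
     count_per_host: 2
     hosts:
       - host1
       - host2
   EOF
   ceph orch apply -i rgw.yaml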

[ceph-users] Re: Octopus - unbalanced OSDs

2021-04-19 Thread Dan van der Ster
This should help: ceph config set mgr mgr/balancer/upmap_max_deviation 1 On Mon, Apr 19, 2021 at 10:17 AM Ml Ml wrote: > > Anyone an idea? :) > > On Fri, Apr 16, 2021 at 3:09 PM Ml Ml wrote: > > > > Hello List, > > > > any ideas why my OSDs are that unbalanced ? > > > > root@ceph01:~# ceph -s
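And to verify the effect once the balancer has had a chance to run (purely illustrative):

   ceph balancer status
   # per-OSD utilisation spread; the STDDEV and MIN/MAX VAR columns show the imbalance
   ceph osd df tree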