[ceph-users] Re: cephadm - How to deploy ceph cluster with a partition on SSD for block.db

2020-09-08 Thread klemen
I found out that it's already possible to specify a storage path in the OSD service specification YAML. It works for data_devices, but unfortunately not for db_devices and wal_devices, at least not in my case. service_type: osd service_id: osd_spec_default placement: host_pattern: '*' data_devices:
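For reference, below is a sketch of the kind of spec being discussed, applied through the cephadm orchestrator. The device paths, service_id and file name are placeholders, and per this thread the paths filter is currently only honoured for data_devices, not for db_devices/wal_devices.

cat <<'EOF' > osd_spec.yml
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  paths:
    - /dev/sdb          # data device given as an explicit path
db_devices:
  paths:
    - /dev/nvme0n1p1    # intended block.db partition (not picked up yet, per the report above)
EOF
ceph orch apply -i osd_spec.yml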

[ceph-users] Spam here still

2020-09-08 Thread Marc Roos
Do know that this is the only mailing list I am subscribed to that sends me so much spam. Maybe the list admin should finally have a word with the other list admins about how they are managing their lists.

[ceph-users] Re: Spam here still

2020-09-08 Thread Lindsay Mathieson
On 8/09/2020 5:30 pm, Marc Roos wrote: Do know that this is the only mailing list I am subscribed to, that sends me so much spam. Maybe the list admin should finally have a word with other list admins on how they are managing their lists

[ceph-users] Syncing cephfs from Ceph to Ceph

2020-09-08 Thread Simon Sutter
Hello, Is it possible to somehow sync a Ceph cluster from one site to a Ceph cluster at another site? I'm just using the CephFS feature and no block devices. Being able to sync CephFS pools between two sites would be great for a hot backup, in case one site fails. Thanks in advance, Simon

[ceph-users] Re: Syncing cephfs from Ceph to Ceph

2020-09-08 Thread Stefan Kooman
On 2020-09-08 11:22, Simon Sutter wrote: > Hello, > > > Is it possible to somehow sync a ceph from one site to a ceph from another > site? > I'm just using the cephfs feature and no block devices. > > Being able to sync cephfs pools between two sites would be great for a hot > backup, in case

[ceph-users] Re: Syncing cephfs from Ceph to Ceph

2020-09-08 Thread Simon Sutter
Thanks Stefan. First of all, for a bit more context: we use this Ceph cluster just for hot backups, so 99% write / 1% read and no need for low latency. OK, so the snapshot function would mean we get something like a colder backup, just like a snapshot of a VM, without any incremental functionality, whic
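For reference, one way to get a point-in-time copy of a CephFS tree onto a second cluster is a snapshot plus rsync. A minimal sketch, assuming snapshots are enabled and both filesystems are mounted on one relay host (all paths are placeholders):

SNAP=nightly-$(date +%F)
mkdir /mnt/src-cephfs/backups/.snap/$SNAP      # creating a directory under .snap takes a CephFS snapshot
rsync -aH --delete /mnt/src-cephfs/backups/.snap/$SNAP/ /mnt/dst-cephfs/backups/

Repeated runs only transfer files that changed since the previous copy, but each run still leaves a single full tree on the destination rather than an incremental chain.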

[ceph-users] Re: Spam here still

2020-09-08 Thread Gerhard W. Recher
Just my 5 cents: the admin should disable postings via the web interface ... all the spam is injected via HyperKitty!! Since there is no parameter to accomplish this, the admin should hack into "post_to_list" and raise an exception upon posting attempts to mitigate this! Regards, Gerhard W. Recher n

[ceph-users] Re: Spam here still

2020-09-08 Thread Gerhard W. Recher
Update: the admin should consider using version 1.3.4 https://hyperkitty.readthedocs.io/en/latest/news.html * Implemented a new HYPERKITTY_ALLOW_WEB_POSTING setting that allows disabling the web posting feature. (Closes #264) Gerhard W. Recher net4sec UG (haftungsbeschränkt) Leitenweg 6 8692
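For the archives: with HyperKitty >= 1.3.4 that flag goes into the Mailman 3 web (Django) settings. A sketch, assuming the common /etc/mailman3/settings.py location and service name (both are distro-dependent):

echo 'HYPERKITTY_ALLOW_WEB_POSTING = False' >> /etc/mailman3/settings.py   # disable posting via the web UI
systemctl restart mailman3-web                                             # reload the web frontend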

[ceph-users] Re: cephadm - How to deploy ceph cluster with a partition on SSD for block.db

2020-09-08 Thread Dimitri Savineau
https://tracker.ceph.com/issues/46558

[ceph-users] Multipart uploads with partsizes larger than 16MiB failing on Nautilus

2020-09-08 Thread shubjero
Hey all, I'm creating a new post for this issue as we've narrowed the problem down to a part-size limitation on multipart uploads. We have discovered in our production Nautilus (14.2.11) cluster and our lab Nautilus (14.2.10) cluster that multipart uploads with a configured part size greater
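For anyone trying to reproduce this, the part size used by the aws CLI can be pinned explicitly. A sketch with placeholder endpoint, bucket and file names:

aws configure set default.s3.multipart_chunksize 32MB                       # anything above the failing 16MiB threshold
aws --endpoint-url http://rgw.example.com:8080 s3 cp ./bigfile s3://testbucket/bigfile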

[ceph-users] Re: cephadm didn't create journals

2020-09-08 Thread Dimitri Savineau
journal_devices is for FileStore, and FileStore isn't supported with cephadm.

[ceph-users] Re: Multipart uploads with partsizes larger than 16MiB failing on Nautilus

2020-09-08 Thread shubjero
I had been looking into this issue all day and during testing found that a specific configuration option we had been setting for years was the culprit. Not setting this value and letting it fall back to the default seems to have fixed our issue with multipart uploads. If you are curious, the confi

[ceph-users] ceph pgs inconsistent, always the same checksum

2020-09-08 Thread David Orman
Hi, I've got a Ceph cluster: 7 nodes, 168 OSDs, with 96G of RAM on each server. Ceph has been instructed to set a memory target of 3G until we increase RAM to 128G per node. Available memory tends to hover around 14G. I do see a tiny bit (KB) of swap utilization per ceph-osd process, but there's n
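For reference, the per-OSD memory target described above is normally driven by the osd_memory_target option; a sketch, with the 3 GiB value taken from the description above:

ceph config set osd osd_memory_target 3221225472   # 3 GiB per ceph-osd daemon
ceph config get osd.0 osd_memory_target            # verify what an individual OSD picked up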

[ceph-users] Re: Multipart uploads with partsizes larger than 16MiB failing on Nautilus

2020-09-08 Thread Matt Benjamin
Thanks, Shubjero. Would you consider creating a Ceph tracker issue for this? Regards, Matt On Tue, Sep 8, 2020 at 4:13 PM shubjero wrote: > > I had been looking into this issue all day and during testing found > that a specific configuration option we had been setting for years was > the culpri

[ceph-users] The confusing output of ceph df command

2020-09-08 Thread norman kern
Hi, I have changed most of the pools from 3-replica to EC 4+2 in my cluster. When I use the ceph df command to show the used capacity of the cluster:
RAW STORAGE:
    CLASS    SIZE       AVAIL      USED       RAW USED    %RAW USED
    hdd      1.8 PiB    788 TiB    1.0 PiB    1.0 PiB
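As a rough sanity check for figures like these, raw usage scales with the data-protection overhead; a back-of-the-envelope comparison (the 100 TiB stored is just an example figure):

ceph df detail    # per-pool STORED vs USED makes the replica/EC overhead visible
# 3-replica : raw used = stored * 3         -> 100 TiB stored ~ 300 TiB raw
# EC 4+2    : raw used = stored * (4+2)/4   -> 100 TiB stored ~ 150 TiB raw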

[ceph-users] How to delete OSD benchmark data

2020-09-08 Thread Jayesh Labade
Dear Ceph Users, I am testing my 3-node Proxmox + Ceph cluster. I have performed an OSD benchmark with the command below. # ceph tell osd.0 bench Do I need to perform any cleanup to delete the benchmark data from the OSD? I have googled for this but found nothing on the steps to take after the osd benchmark comman
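For reference, the bench command also takes optional size arguments; a sketch spelling out the documented defaults (1 GiB written in 4 MiB chunks):

ceph tell osd.0 bench 1073741824 4194304   # total bytes to write, bytes per write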

[ceph-users] Re: ceph pgs inconsistent, always the same checksum

2020-09-08 Thread Janne Johansson
I googled "got 0x6706be76, expected" and found some hits regarding Ceph, so whatever it is, you are not the first, and that number has some internal meaning. A Red Hat solution for a similar issue says that checksum is what you get when reading all zeroes, and hints at a bad write cache on the controller or something
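For completeness, the usual way to pin down which object and shard produced that checksum, and to repair it, looks like this (the PG id 2.7f is a placeholder):

ceph health detail                                      # lists the PGs flagged inconsistent
rados list-inconsistent-obj 2.7f --format=json-pretty   # shows the failing shard and the expected/got checksums
ceph pg repair 2.7f                                     # rewrite the bad copy from an authoritative replica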