[ceph-users] Re: High io wait when osd rocksdb is compacting

2020-07-29 Thread Mark Nelson
On 7/29/20 7:47 PM, Raffael Bachmann wrote: Hi Mark, I think it's 15 hours, not 15 days. But the compaction time really seems to be slow. I'm destroying and recreating all NVMe OSDs one by one, and the recreated ones don't have latency problems and are also much faster at compacting the disk.

[ceph-users] Re: High io wait when osd rocksdb is compacting

2020-07-29 Thread Raffael Bachmann
Hi Mark, I think it's 15 hours, not 15 days. But the compaction time really seems to be slow. I'm destroying and recreating all NVMe OSDs one by one, and the recreated ones don't have latency problems and are also much faster at compacting the disk. This is from the last two hours: Compaction Statistics
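A minimal sketch of that destroy-and-recreate cycle, assuming a ceph-volume based deployment; the OSD id (1) and device path (/dev/nvme0n1) are placeholders:

    ceph osd out 1                                          # let data drain off the OSD first
    systemctl stop ceph-osd@1                               # stop the daemon once it is safe to do so
    ceph osd destroy 1 --yes-i-really-mean-it               # keep the id, drop the auth key and state
    ceph-volume lvm zap /dev/nvme0n1 --destroy              # wipe the old LVM/bluestore metadata
    ceph-volume lvm create --osd-id 1 --data /dev/nvme0n1   # recreate the OSD with the same id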

[ceph-users] Re: High io wait when osd rocksdb is compacting

2020-07-29 Thread Raffael Bachmann
Hi Igor, Thanks for your answer. All the disks had latency warnings. "Had", because I think the problem is solved. After moving some data and almost losing the nearfull NVMe pool, because one disk had so much latency that Ceph decided to mark it out, I could start destroying and recreating

[ceph-users] Re: cephadm and disk partitions

2020-07-29 Thread David Orman
cephadm will handle the LVM for you when you deploy using an OSD specification. For example, we have NVMe and rotational drives, and cephadm will automatically deploy servers with the DB/WAL on NVMe and the data on the rotational drives, with a limit of 12 rotational drives per NVMe - it handles all the
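A hedged sketch of what such an OSD service specification can look like; the service id, host pattern and the db_slots field are assumptions and depend on the cephadm/drive-group version in use:

    cat > osd_spec.yml <<'EOF'
    service_type: osd
    service_id: hdd_with_nvme_db      # hypothetical name
    placement:
      host_pattern: '*'
    data_devices:
      rotational: 1                   # rotational drives carry the data
    db_devices:
      rotational: 0                   # NVMe devices carry DB/WAL
    db_slots: 12                      # assumption: at most 12 DB slots per NVMe
    EOF
    ceph orch apply osd -i osd_spec.yml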

[ceph-users] Re: Setting rbd_default_data_pool through the config store

2020-07-29 Thread Wido den Hollander
On 29/07/2020 16:54, Wido den Hollander wrote: On 29/07/2020 16:00, Jason Dillaman wrote: On Wed, Jul 29, 2020 at 9:07 AM Jason Dillaman wrote: On Wed, Jul 29, 2020 at 9:03 AM Wido den Hollander wrote: On 29/07/2020 14:54, Jason Dillaman wrote: On Wed, Jul 29, 2020 at 6:23 AM Wido

[ceph-users] Re: [EXTERNAL] Re: S3 bucket lifecycle not deleting old objects

2020-07-29 Thread Alex Hussein-Kershaw
Hi Robin, Thanks for the reply. I'm currently testing this on a bucket with a single object, on a Ceph cluster with a very tiny amount of data. I've done what you suggested and run the `radosgw-admin lc process` command and turned up the RGW logs - but I saw nothing. [qs-admin@portala0
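For reference, a small sketch of the lifecycle checks being discussed here; the bucket name is a placeholder:

    radosgw-admin lc list                            # per-bucket lifecycle status
    radosgw-admin lc process                         # run a lifecycle pass now instead of waiting
    radosgw-admin bucket list --bucket=my-bucket     # see whether the expired objects are still there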

[ceph-users] Stuck removing osd with orch

2020-07-29 Thread Ml Ml
Hello, yesterday I did: ceph osd purge 32 --yes-i-really-mean-it I also started to upgrade: ceph orch upgrade start --ceph-version 15.2.4 It seems it's really gone: ceph osd crush remove osd.32 => device 'osd.32' does not appear in the crush map ceph orch ps: osd.32
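A hedged sketch for checking whether the orchestrator still tracks the purged daemon and forcing it out; exact command availability varies between Octopus point releases:

    ceph osd tree | grep osd.32 || echo "gone from CRUSH"   # confirm the OSD is really removed
    ceph orch ps --daemon-type osd                          # what cephadm still thinks is deployed
    ceph orch daemon rm osd.32 --force                      # drop a stale daemon entry if one remains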

[ceph-users] Re: mimic: much more raw used than reported

2020-07-29 Thread Frank Schilder
Dear Igor, please find below data from "ceph osd df tree" and per-OSD bluestore stats pasted together with the script for extraction for reference. We have now: df USED: 142 TB bluestore_stored: 190.9TB (142*8/6 = 189, so matches) bluestore_allocated: 275.2TB osd df tree USE: 276.1 (so matches
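For readability, a quick tally of those numbers, assuming the 8/6 factor means an EC 6+2 pool:

    user data (df USED):     142 TB
    expected raw for EC 6+2: 142 TB * 8/6  = ~189 TB    -> matches bluestore_stored 190.9 TB
    allocated minus stored:  275.2 - 190.9 = ~84 TB     -> ~44% allocation overhead on top of stored
    osd df tree USE:         276.1 TB                   -> matches bluestore_allocated 275.2 TB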

[ceph-users] Re: Usable space vs. Overhead

2020-07-29 Thread Janne Johansson
On Wed, 29 Jul 2020 at 16:34, David Orman wrote: > Thank you, everyone, for the help. I absolutely was mixing up the two, > which is why I was asking for guidance. The example made it clear. The > question I was trying to answer was: what would the capacity of the cluster > be, for actual data,

[ceph-users] Re: High io wait when osd rocksdb is compacting

2020-07-29 Thread Mark Nelson
Wow, that's crazy. You only had 13 compaction events for that OSD over roughly 15 days, but the average compaction time was 116 seconds! Notice too, though, that the average compaction output size is 422MB with an average output throughput of 3.5MB/s! That's really slow with RocksDB sitting on
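For what it's worth, the throughput figure follows directly from the other two averages: 422 MB per compaction / 116 s per compaction is about 3.6 MB/s, i.e. the ~3.5 MB/s quoted above.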

[ceph-users] Re: Setting rbd_default_data_pool through the config store

2020-07-29 Thread Wido den Hollander
On 29/07/2020 16:00, Jason Dillaman wrote: On Wed, Jul 29, 2020 at 9:07 AM Jason Dillaman wrote: On Wed, Jul 29, 2020 at 9:03 AM Wido den Hollander wrote: On 29/07/2020 14:54, Jason Dillaman wrote: On Wed, Jul 29, 2020 at 6:23 AM Wido den Hollander wrote: Hi, I'm trying to have

[ceph-users] Re: High io wait when osd rocksdb is compacting

2020-07-29 Thread Igor Fedotov
Hi Raffael, wondering if all OSDs are suffering from slow compaction or just the one which is "near full"? Do other OSDs have those "log_latency_fn slow operation observed for" lines? Have you tried the "osd bench" command for your OSDs? Does it show similar numbers for every OSD? You might want
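A minimal sketch of the comparison being suggested; OSD ids and the log path are placeholders:

    ceph tell osd.0 bench      # by default writes 1 GiB in 4 MiB blocks and reports throughput
    ceph tell osd.1 bench
    grep 'log_latency' /var/log/ceph/ceph-osd.*.log | tail    # look for the slow-operation warnings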

[ceph-users] Re: Usable space vs. Overhead

2020-07-29 Thread David Orman
Hi, Thank you, everyone, for the help. I absolutely was mixing up the two, which is why I was asking for guidance. The example made it clear. The question I was trying to answer was: what would the capacity of the cluster be, for actual data, based on the raw disk space + server/drive count +

[ceph-users] Re: mimic: much more raw used than reported

2020-07-29 Thread Igor Fedotov
Frank, so you have a pretty high amount of small writes indeed. More than half of the written volume (in bytes) is done via small writes, and there are 6x more small requests. This looks pretty odd for a sequential write pattern and is likely to be the root cause of that space overhead. I can

[ceph-users] Re: Setting rbd_default_data_pool through the config store

2020-07-29 Thread Jason Dillaman
On Wed, Jul 29, 2020 at 9:07 AM Jason Dillaman wrote: > > On Wed, Jul 29, 2020 at 9:03 AM Wido den Hollander wrote: > > > > > > > > On 29/07/2020 14:54, Jason Dillaman wrote: > > > On Wed, Jul 29, 2020 at 6:23 AM Wido den Hollander wrote: > > >> > > >> Hi, > > >> > > >> I'm trying to have

[ceph-users] Re: High io wait when osd rocksdb is compacting

2020-07-29 Thread Raffael Bachmann
Hi Mark, Unfortunately it is the production cluster and I don't have another one :-( This is the output of the log parser; I have nothing to compare it to. Stupid me has no more logs from before the upgrade. python ceph_rocksdb_log_parser.py ceph-osd.1.log Compaction Statistics
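One way to get a fresh data point to compare against, assuming the admin socket is reachable (the OSD id is a placeholder and the command name may vary by release), is to trigger a compaction by hand and time it:

    time ceph daemon osd.1 compact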

[ceph-users] Re: High io wait when osd rocksdb is compacting

2020-07-29 Thread Mark Nelson
Hi Raffael, Adam made a PR this year that shards rocksdb data across different column families to help reduce compaction overhead.  The goal is to reduce write-amplification during compaction by storing multiple small LSM hierarchies rather than 1 big one.  We've seen evidence that this

[ceph-users] Re: High io wait when osd rocksdb is compacting

2020-07-29 Thread Raffael Bachmann
Hi Wido, Thanks for the quick answer. They are all Intel P3520: https://ark.intel.com/content/www/us/en/ark/products/88727/intel-ssd-dc-p3520-series-2-0tb-2-5in-pcie-3-0-x4-3d1-mlc.html And this is ceph df: RAW STORAGE: CLASS SIZE AVAIL USED RAW USED %RAW USED

[ceph-users] Re: Setting rbd_default_data_pool through the config store

2020-07-29 Thread Jason Dillaman
On Wed, Jul 29, 2020 at 9:03 AM Wido den Hollander wrote: > > > > On 29/07/2020 14:54, Jason Dillaman wrote: > > On Wed, Jul 29, 2020 at 6:23 AM Wido den Hollander wrote: > >> > >> Hi, > >> > >> I'm trying to have clients read the 'rbd_default_data_pool' config > >> option from the config store

[ceph-users] Re: High io wait when osd rocksdb is compacting

2020-07-29 Thread Wido den Hollander
On 29/07/2020 14:52, Raffael Bachmann wrote: Hi All, I'm kind of crossposting this from here: https://forum.proxmox.com/threads/i-o-wait-after-upgrade-5-x-to-6-2-and-ceph-luminous-to-nautilus.73581/ But since I'm more and more sure that it's a ceph problem I'll try my luck here. Since

[ceph-users] Re: Setting rbd_default_data_pool through the config store

2020-07-29 Thread Wido den Hollander
On 29/07/2020 14:54, Jason Dillaman wrote: On Wed, Jul 29, 2020 at 6:23 AM Wido den Hollander wrote: Hi, I'm trying to have clients read the 'rbd_default_data_pool' config option from the config store when creating an RBD image. This doesn't seem to work and I'm wondering if somebody

[ceph-users] Re: Setting rbd_default_data_pool through the config store

2020-07-29 Thread Jason Dillaman
On Wed, Jul 29, 2020 at 6:23 AM Wido den Hollander wrote: > > Hi, > > I'm trying to have clients read the 'rbd_default_data_pool' config > option from the config store when creating an RBD image. > > This doesn't seem to work and I'm wondering if somebody knows why. It looks like all string-based

[ceph-users] High io wait when osd rocksdb is compacting

2020-07-29 Thread Raffael Bachmann
Hi All, I'm kind of crossposting this from here: https://forum.proxmox.com/threads/i-o-wait-after-upgrade-5-x-to-6-2-and-ceph-luminous-to-nautilus.73581/ But since I'm more and more sure that it's a ceph problem I'll try my luck here. Since updating from Luminous to Nautilus I have a big

[ceph-users] Re: mimic: much more raw used than reported

2020-07-29 Thread Igor Fedotov
Hi Frank, you might want to proceed with the perf counters' dump analysis in the following way: for 2-3 arbitrary OSDs, save the current perf counter dump, reset the perf counters, leave the OSD under regular load for a while, dump the perf counters again, and share both the saved and new dumps and/or
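A minimal sketch of that procedure on one OSD node; osd.2 is a placeholder id:

    ceph daemon osd.2 perf dump > perf_before.json   # save the current counters
    ceph daemon osd.2 perf reset all                 # reset them
    # ... leave the OSD under regular load for a while ...
    ceph daemon osd.2 perf dump > perf_after.json    # dump again and share both files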

[ceph-users] Re: Usable space vs. Overhead

2020-07-29 Thread Benoît Knecht
Aren't you just looking at the same thing from two different perspectives? In one case you say: I have 100% of useful data, and I need to add 50% of parity for a total of 150% raw data. In the other, you say: Out of 100% of raw data, 2/3 is useful data, 1/3 is parity, which gives you your 33.3%
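In numbers, for the k=4, m=2 example used elsewhere in this thread:

    raw    = data * (k+m)/k = data * 6/4 = 1.5 * data    -> "50% overhead" relative to the data
    data   = raw * k/(k+m)  = raw * 4/6  = ~66.7% of raw
    parity = raw * m/(k+m)  = raw * 2/6  = ~33.3% of raw -> the "33.3%" figure relative to raw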

[ceph-users] Setting rbd_default_data_pool through the config store

2020-07-29 Thread Wido den Hollander
Hi, I'm trying to have clients read the 'rbd_default_data_pool' config option from the config store when creating an RBD image. This doesn't seem to work and I'm wondering if somebody knows why. I tried: $ ceph config set client rbd_default_data_pool rbd-data $ ceph config set global
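While the config-store route is being debugged, a hedged workaround sketch is to name the data pool explicitly at image-creation time; the pool name is taken from the post above, the image name is just an example:

    rbd create --size 1G --data-pool rbd-data rbd/test-img
    rbd info rbd/test-img | grep data_pool           # should show the separate data pool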

[ceph-users] Re: Not able to access radosgw S3 bucket creation with AWS java SDK. Caused by: java.net.UnknownHostException: issue.

2020-07-29 Thread sathvik vutukuri
Thanks, I'll check it out. On Wed, 29 Jul 2020, 13:35 Chris Palmer, wrote: > This works for me (the code switches between AWS and RGW according to > whether s3Endpoint is set). You need the pathStyleAccess unless you have > wildcard DNS names etc. > > String s3Endpoint =

[ceph-users] Re: Not able to access radosgw S3 bucket creation with AWS java SDK. Caused by: java.net.UnknownHostException: issue.

2020-07-29 Thread Chris Palmer
This works for me (the code switches between AWS and RGW according to whether s3Endpoint is set). You need the pathStyleAccess unless you have wildcard DNS names etc. String s3Endpoint = "http://my.host:80"; AmazonS3ClientBuilder s3b = AmazonS3ClientBuilder.standard

[ceph-users] Re: Not able to access radosgw S3 bucket creation with AWS java SDK. Caused by: java.net.UnknownHostException: issue.

2020-07-29 Thread Zhenshi Zhou
It's maybe a DNS issue, I guess. sathvik vutukuri <7vik.sath...@gmail.com> wrote on Wed, Jul 29, 2020 at 3:21 PM: > Hi All, > > Any update on this from anyone? > > On Tue, Jul 28, 2020 at 4:00 PM sathvik vutukuri <7vik.sath...@gmail.com> > wrote: > > > Hi All, > > > > radosgw-admin is configured in
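Given the UnknownHostException, a quick sanity check of the endpoint name from the client machine; the hostname is the placeholder used earlier in the thread:

    getent hosts my.host           # does the endpoint resolve at all?
    curl -sI http://my.host:80/    # does the RGW answer on that port?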

[ceph-users] Re: Usable space vs. Overhead

2020-07-29 Thread Janne Johansson
On Wed, 29 Jul 2020 at 03:17, David Orman wrote: > That's what the formula on the ceph link arrives at, a 2/3 or 66.66% > overhead. But if a 4-byte object is split into 4x 1-byte data chunks (4 > bytes total) + 2x 1-byte parity chunks (2 bytes total), you arrive at 6 > bytes, which is 50% more

[ceph-users] Re: Not able to access radosgw S3 bucket creation with AWS java SDK. Caused by: java.net.UnknownHostException: issue.

2020-07-29 Thread sathvik vutukuri
Hi All, Any update on this from anyone? On Tue, Jul 28, 2020 at 4:00 PM sathvik vutukuri <7vik.sath...@gmail.com> wrote: > Hi All, > > radosgw-admin is configured in ceph-deploy, created a few buckets from the > Ceph dashboard, but when accessing through Java AWS S3 code to create a new >

[ceph-users] mon tried to load "000000.sst" which doesn't exist when recovering from osds

2020-07-29 Thread Yu Wei
Hi, I deployed rook v0.8.3 with Ceph 12.2.7. This is a production system that has been deployed for a long time. For an unknown reason, the mons couldn't form quorum anymore, and I tried to restore the mon from the OSDs by following the document below,
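For reference, the core step of the usual 'recover the mon store from OSDs' procedure looks roughly like the following; paths are placeholders and the exact steps for a rook deployment will differ:

    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
        --op update-mon-db --mon-store-path /tmp/mon-store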
