[ceph-users] Re: ceph-ansible question

2020-04-28 Thread Szabo, Istvan (Agoda)
Hi, So you've actually created many LVs in the DB VG for the OSDs? That is what I want to avoid, because if some of the OSDs are not in use, they are still holding the space, aren't they? Istvan Szabo Senior Infrastructure Engineer --- Agoda

[ceph-users] Re: rados buckets copy

2020-04-28 Thread Andrei Mikhailovsky
Hi Manuel, My replica is 2, hence about 10TB of unaccounted usage. Andrei - Original Message - > From: "EDH - Manuel Rios" > To: "Andrei Mikhailovsky" > Sent: Tuesday, 28 April, 2020 23:57:20 > Subject: RE: rados buckets copy > Is your replica x3? 9x3 = 27... plus some overhead

[ceph-users] rados buckets copy

2020-04-28 Thread Andrei Mikhailovsky
Hello, I have a problem with radosgw service where the actual disk usage (ceph df shows 28TB usage) is way more than reported by the radosgw-admin bucket stats (9TB usage). I have tried to get to the end of the problem, but no one seems to be able to help. As a last resort I will attempt to
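
A minimal sketch of how the two numbers can be compared side by side, assuming a default default.rgw.* pool layout and that jq is available; field names may differ between releases:

   # raw pool usage as seen by the cluster (after replication, including data awaiting GC)
   ceph df detail
   # sum of per-bucket usage as reported by RGW (object-level accounting, before replication)
   radosgw-admin bucket stats | jq '[.[].usage["rgw.main"].size_actual // 0] | add'
   # objects queued for garbage collection can account for part of the gap
   radosgw-admin gc list --include-all | head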

[ceph-users] Re: manually configure radosgw

2020-04-28 Thread Ken Dreyer
On Mon, Apr 27, 2020 at 11:21 AM Patrick Dowler wrote: > > I am trying to manually create a radosgw instance for a small development > installation. I was able to muddle through and get a working mon, mgr, and > osd (x2), but the docs for radosgw are based on ceph-deploy which is not > part of

[ceph-users] Re: Nautilus upgrade causes spike in MDS latency

2020-04-28 Thread Josh Haft
This issue did subside after restarting the original primary daemon and failing back to it. I've since enabled multi-MDS and latencies overall have decreased even further. Thanks for your assistance. On Wed, Apr 15, 2020 at 8:32 AM Josh Haft wrote: > > Thanks for the assistance. > > I restarted
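
For reference, a sketch of how a second active MDS is typically enabled, assuming the filesystem is named cephfs:

   # allow two active MDS ranks; a standby will take over the new rank
   ceph fs set cephfs max_mds 2
   # confirm both ranks are active and watch request load per daemon
   ceph fs status cephfs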

[ceph-users] Re: ceph-ansible question

2020-04-28 Thread Robert LeBlanc
I'm sure there is a simpler way, but I wanted DBs of a certain size and a data OSD on the NVMe as well. I wrote a script to create all the VGs and LVs of the sizes that I wanted, then added this to my Ansible inventory (I prefer to have as much config in the inventory rather than scattered
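
A rough sketch of that kind of pre-provisioning script; device, VG and LV names and the sizes are placeholders, not the ones used above:

   # one VG on the NVMe, carved into fixed-size DB LVs for the HDD OSDs
   vgcreate ceph-nvme /dev/nvme0n1
   for i in 0 1 2 3; do
       lvcreate -L 60G -n db-$i ceph-nvme
   done
   # leftover space becomes a data LV for an OSD on the NVMe itself
   lvcreate -l 100%FREE -n data-nvme ceph-nvme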

[ceph-users] Re: Upgrading to Octopus

2020-04-28 Thread Gert Wieberdink
Sorry for the typo: must be journalctl -f instead of syslogctl -f. -gw On Tue, 2020-04-28 at 19:12 +, Gert Wieberdink wrote: > Hello Simon, ceph-mgr and dashboard installation should > be straightforward. These are tough ones (internal server error 500). > Did you create a self-signed cert for

[ceph-users] Re: Upgrading to Octopus

2020-04-28 Thread Gert Wieberdink
Hello Simon, ceph-mgr and dashboard installation should be straightforward. These are tough ones (internal server error 500). Did you create a self-signed cert for dashboard? Did you check firewalld (port 8443) and/or SELinux? Does syslogctl -f show anything? rgds, -gw On Tue, 2020-04-28 at 12:17
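
The checks suggested above roughly correspond to commands like these (unit and host names are examples):

   # open the dashboard port if firewalld is active
   firewall-cmd --permanent --add-port=8443/tcp && firewall-cmd --reload
   # generate a self-signed certificate for the dashboard
   ceph dashboard create-self-signed-cert
   # follow the active mgr's log while reproducing the error 500
   journalctl -f -u ceph-mgr@$(hostname -s)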

[ceph-users] 4.14 kernel or greater recommendation for multiple active MDS

2020-04-28 Thread Robert LeBlanc
In the Nautilus manual it recommends >= 4.14 kernel for multiple active MDSes. What are the potential issues for running the 4.4 kernel with multiple MDSes? We are in the process of upgrading the clients, but at times overrun the capacity of a single MDS server. MULTIPLE ACTIVE METADATA SERVERS
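
A quick, hedged way to gauge whether old kernel clients are still connected before enabling multiple active MDSes:

   # kernel version on a given client
   uname -r
   # feature/release bits of the clients currently connected, as seen by the monitors
   ceph features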

[ceph-users] Re: RGW and the orphans

2020-04-28 Thread EDH - Manuel Rios
I'm pretty sure that you hit the same issue we already reported: https://tracker.ceph.com/issues/43756 Garbage upon garbage stored in our OSDs without being able to clean it up, wasting a lot of space. As you can see it's solved in the new versions, but... the last version didn't have any "scrub"

[ceph-users] Re: adding block.db to OSD

2020-04-28 Thread Stefan Priebe - Profihost AG
Hi Igor, but the performance issue is still present even on the recreated OSD. # ceph tell osd.38 bench -f plain 12288000 4096 bench: wrote 12 MiB in blocks of 4 KiB in 1.63389 sec at 7.2 MiB/sec 1.84k IOPS vs. # ceph tell osd.10 bench -f plain 12288000 4096 bench: wrote 12 MiB in blocks of 4

[ceph-users] Re: adding block.db to OSD

2020-04-28 Thread Stefan Priebe - Profihost AG
Hi Igor, On 27.04.20 at 15:03, Igor Fedotov wrote: > Just left a comment at https://tracker.ceph.com/issues/44509 > > Generally bdev-new-db performs no migration; RocksDB might eventually do > that, but there is no guarantee it moves everything. > > One should use bluefs-bdev-migrate to do actual
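
For reference, the migration command mentioned above is usually invoked like this (OSD id and paths are placeholders; the OSD must be stopped first):

   systemctl stop ceph-osd@38
   # move BlueFS data from the main device onto the new DB device
   ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-38 \
       --devs-source /var/lib/ceph/osd/ceph-38/block --dev-target /var/lib/ceph/osd/ceph-38/block.db
   systemctl start ceph-osd@38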

[ceph-users] Re: osd crashing and rocksdb corruption

2020-04-28 Thread Igor Fedotov
Short update - please consider the bluefs_sync_write parameter instead of bdev-aio. Changing the latter isn't supported, in fact. On 4/28/2020 7:35 PM, Igor Fedotov wrote: Francois, here are some observations gathered from your log. 1) Rocksdb reports an error on the following .sst file:    -35>
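
If one wanted to experiment with that setting, a hedged sketch (the option name comes from the message above; the scope and the need for a restart are assumptions):

   # flip bluefs_sync_write for a single OSD and restart it so the change takes effect
   ceph config set osd.5 bluefs_sync_write true
   systemctl restart ceph-osd@5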

[ceph-users] Re: osd crashing and rocksdb corruption

2020-04-28 Thread Mark Nelson
Excellent analysis Igor! Mark On 4/28/20 11:35 AM, Igor Fedotov wrote: Francois, here are some observations gathered from your log. 1) Rocksdb reports an error on the following .sst file:    -35> 2020-04-28 15:23:47.612 7f4856e82a80 -1 rocksdb: Corruption: Bad table magic number: expected

[ceph-users] Re: Lock errors in iscsi gateway

2020-04-28 Thread Mike Christie
On 4/28/20 2:21 AM, Simone Lazzaris wrote: > On Monday, 27 April 2020 18:46:09 CEST, Mike Christie wrote: > [snip] >> Are you using the ceph-iscsi tools with tcmu-runner or did you set up >> tcmu-runner directly with targetcli? > I followed this guide: >

[ceph-users] Re: osd crashing and rocksdb corruption

2020-04-28 Thread Igor Fedotov
Francois, here are some observations gathered from your log. 1) Rocksdb reports an error on the following .sst file:    -35> 2020-04-28 15:23:47.612 7f4856e82a80 -1 rocksdb: Corruption: Bad table magic number: expected 9863518390377041911, found 12950032858166034944 in db/068269.sst 2) which

[ceph-users] Re: osd crashing and rocksdb corruption

2020-04-28 Thread Francois Legrand
Here is the output of ceph-bluestore-tool bluefs-bdev-sizes inferring bluefs devices from bluestore path  slot 1 /var/lib/ceph/osd/ceph-5/block -> /dev/dm-17 1 : device size 0x746c000 : own 0x[37e1eb0~4a8290] = 0x4a8290 : using 0x5bc78(23 GiB) the result of the

[ceph-users] Re: Upgrading to Octopus

2020-04-28 Thread Simon Sutter
Hello, Yes, I upgraded the system to CentOS 8 and now I can install the dashboard module. But the problem now is that I cannot log in to the dashboard. I deleted every cached file on my end and reinstalled the mgr and dashboard several times. If I try to log in with a wrong password, it tells me
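
In case the stored credentials simply went stale during the upgrade, resetting them usually looks like this (username is an example; recent releases require the password to be read from a file):

   echo -n 'NewSecret123' > /tmp/dashboard_pass
   ceph dashboard ac-user-set-password admin -i /tmp/dashboard_pass
   rm /tmp/dashboard_pass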

[ceph-users] Re: osd crashing and rocksdb corruption

2020-04-28 Thread Igor Fedotov
Hi Francois, Could you please share the OSD startup log with debug-bluestore (and debug-bluefs) set to 20. Also please run ceph-bluestore-tool's bluefs-bdev-sizes command and share the output. Thanks, Igor On 4/28/2020 12:55 AM, Francois Legrand wrote: Hi all, *** Short version *** Is
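
The requested information can typically be gathered along these lines (OSD id is a placeholder):

   # with the OSD stopped, report how much of each device BlueFS owns and uses
   systemctl stop ceph-osd@5
   ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-5
   # raise the log levels, start the OSD again and capture its startup log
   ceph config set osd.5 debug_bluestore 20
   ceph config set osd.5 debug_bluefs 20
   systemctl start ceph-osd@5
   journalctl -u ceph-osd@5 --since "10 min ago"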

[ceph-users] Re: Bucket sync across available DCs

2020-04-28 Thread Matt Benjamin
Hi Szabo, Per-bucket sync with improved AWS compatibility was added in Octopus. regards, Matt On Mon, Apr 27, 2020 at 11:18 PM Szabo, Istvan (Agoda) wrote: > > Hi, > > is there a way to synchronize a specific bucket by Ceph across the available > datacenters? > I've just found multi site
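
For what it's worth, in Octopus the per-bucket granularity is expressed through sync policies; a heavily simplified sketch with made-up group, zone and bucket names:

   # zonegroup-level policy that allows, but does not start, sync between two zones
   radosgw-admin sync group create --group-id=group1 --status=allowed
   radosgw-admin sync group flow create --group-id=group1 --flow-id=dc1-dc2 \
       --flow-type=symmetrical --zones=dc1,dc2
   radosgw-admin sync group pipe create --group-id=group1 --pipe-id=all \
       --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'
   # bucket-level policy that actually enables sync for one bucket
   radosgw-admin sync group create --bucket=mybucket --group-id=bucket-group --status=enabled
   radosgw-admin sync group pipe create --bucket=mybucket --group-id=bucket-group \
       --pipe-id=pipe1 --source-zones='*' --dest-zones='*'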

[ceph-users] ceph-ansible question

2020-04-28 Thread Szabo, Istvan (Agoda)
Hi, I've tried to create a Ceph Luminous cluster for testing purposes with ceph-ansible on my 3 Hyper-V VMs, but I got the below error with the following OSD configuration: --- dummy: osd_scenario: lvm lvm_volumes: - data: osd1lv data_vg: osd1 db:

[ceph-users] Bucket dynamically resharded to 65521 shards - resharding manually won't work

2020-04-28 Thread gl
Hello, running Ceph Nautilus 14.2.4, we encountered this documented dynamic resharding issue: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-November/037531.html We disabled dynamic resharding in the configuration, and attempted to reshard to 1 shard: # radosgw-admin reshard add
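
The sequence being attempted normally looks like this; the bucket name is a placeholder, and whether the option goes into ceph.conf or the config database depends on the setup:

   # turn off dynamic resharding for the RGW daemons
   ceph config set client.rgw rgw_dynamic_resharding false
   # queue a manual reshard and run it
   radosgw-admin reshard add --bucket=bigbucket --num-shards=1
   radosgw-admin reshard list
   radosgw-admin reshard process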

[ceph-users] Re: manually configure radosgw

2020-04-28 Thread Marc Roos
This is outdated but will get you through it (especially the pools and civetweb):
yum install ceph-radosgw
ceph osd pool create default.rgw 8
ceph osd pool create default.rgw.meta 8
ceph osd pool create default.rgw.control 8
ceph osd pool create default.rgw.log 8
ceph osd pool create
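
The steps that usually follow the pool creation - a keyring and the systemd unit - might look like this; the instance name rgw.gw1 is an example:

   mkdir -p /var/lib/ceph/radosgw/ceph-rgw.gw1
   ceph auth get-or-create client.rgw.gw1 mon 'allow rw' osd 'allow rwx' \
       -o /var/lib/ceph/radosgw/ceph-rgw.gw1/keyring
   # ceph.conf:  [client.rgw.gw1]  rgw_frontends = "civetweb port=7480"
   systemctl enable --now ceph-radosgw@rgw.gw1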

[ceph-users] Re: manually configure radosgw

2020-04-28 Thread tdados
Can you please share the keyring you use in the radosgw containers and also the ceph config? It seems like an authentication issue, or your containers don't pick up your ceph config.

[ceph-users] Re: is ceph balancer doing anything?

2020-04-28 Thread tdados
Hello Andrei, I have kind of the same problem, but because it's production I don't want to make sudden moves that would cause data redistribution and affect clients (only with change approval and such). From what I tried on other test clusters and according to the documentation... you need to
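
The usual sequence from the documentation, for reference (the mode choice depends on how recent the clients are):

   ceph balancer status
   # score the current distribution without changing anything
   ceph balancer eval
   # upmap needs luminous+ clients; crush-compat is the fallback mode
   ceph balancer mode upmap
   ceph balancer on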

[ceph-users] Re: Lock errors in iscsi gateway

2020-04-28 Thread tdados
You can check the lock list on each RBD and you can try removing the lock, but only when the VM is shut down and the RBD is not in use:
rbd lock list pool/volume-id
rbd lock rm pool/volume-id "lock_id" client_id
This was a bug in the Luminous upgrade, I believe, and I found it back in the day from this

[ceph-users] Re: Lock errors in iscsi gateway

2020-04-28 Thread Simone Lazzaris
On Monday, 27 April 2020 18:46:09 CEST, Mike Christie wrote: [snip] > Are you using the ceph-iscsi tools with tcmu-runner or did you set up > tcmu-runner directly with targetcli? > I followed this guide: https://docs.ceph.com/docs/master//rbd/iscsi-target-cli/[1] and configured the

[ceph-users] Re: RGW and the orphans

2020-04-28 Thread Katarzyna Myrek
Hi all, I am afraid that there is even more trash around - running rgw-orphan-list does not find everything. For instance, I still have broken multiparts -> when I do s3cmd multipart I get a list of "pending/interrupted multiparts". When I try to cancel such a multipart I get a 404. Does anyone have a
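
A hedged sketch of the clean-up attempts described above, with bucket and object names as placeholders:

   # list pending multipart uploads for a bucket
   s3cmd multipart s3://mybucket
   # abort one of them by object key and upload id (the call that returns 404 in the report above)
   s3cmd abortmp s3://mybucket/path/to/object UPLOAD_ID
   # server-side consistency check that can also clean up some leftovers
   radosgw-admin bucket check --bucket=mybucket --check-objects --fix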