Re: [ceph-users] RGW: Is 'radosgw-admin reshard stale-instances rm' safe?

2019-06-25 Thread Konstantin Shalygin
On 6/25/19 12:46 AM, Rudenko Aleksandr wrote: Hi, Konstantin. Thanks for the reply. I know about stale instances and that they remained from a prior version. I am asking about the "marker" of the bucket. I have a bucket "clx" and I can see its current marker in the stale-instances list. As I know,

[ceph-users] Fwd: [lca-announce] linux.conf.au 2020 - Call for Sessions and Miniconfs now open!

2019-06-25 Thread Tim Serong
Here we go again! As usual the conference theme is intended to inspire, not to restrict; talks on any topic in the world of free and open source software, hardware, etc. are most welcome, and Ceph talks definitely fit. I've added this to https://pad.ceph.com/p/cfp-coordination as well.

Re: [ceph-users] rebalancing ceph cluster

2019-06-25 Thread Matthew H
If you are running Luminous or newer, you can simply enable the balancer module [1]. [1] http://docs.ceph.com/docs/luminous/mgr/balancer/ From: ceph-users on behalf of Robert LeBlanc Sent: Tuesday, June 25, 2019 5:22 PM To: jinguk.k...@ungleich.ch Cc:
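For reference, enabling the balancer is typically just a few commands (a sketch; the available modes and defaults vary slightly by release, and 'upmap' requires all clients to be Luminous or newer):

```shell
# Enable the mgr balancer module and turn on automatic balancing.
ceph mgr module enable balancer     # usually already enabled on Luminous+
ceph balancer mode crush-compat     # or 'upmap' with Luminous+ clients only
ceph balancer on
ceph balancer status                # check plan/mode and whether it is active
```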

[ceph-users] pgs incomplete

2019-06-25 Thread ☣Adam
How can I tell ceph to give up on "incomplete" PGs? I have 12 pgs which are "inactive, incomplete" that won't recover. I think this is because in the past I have carelessly pulled disks too quickly without letting the system recover. I suspect the disks that have the data for these are long
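For completeness, the usual last-resort sequence looks roughly like this. The PG id 2.5 below is hypothetical, force-create-pg permanently discards whatever data the PG held, and the exact flags vary by release:

```shell
# Inspect why the PG is stuck incomplete (look at 'peering_blocked_by').
ceph pg 2.5 query

# Give up on the PG and recreate it empty. THIS LOSES THE PG'S DATA.
ceph osd force-create-pg 2.5 --yes-i-really-mean-it
```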

Re: [ceph-users] CephFS : Kernel/Fuse technical differences

2019-06-25 Thread Robert LeBlanc
There may also be more memory copying involved instead of just passing pointers around, but I'm not 100% sure. Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Mon, Jun 24, 2019 at 10:28 AM Jeff Layton wrote: > On Mon, 2019-06-24 at

Re: [ceph-users] rebalancing ceph cluster

2019-06-25 Thread Robert LeBlanc
The placement of PGs is random in the cluster and takes into account any CRUSH rules, which may also skew the distribution. Having more PGs will give more options for placing PGs, but it still may not be adequate. It is recommended to have between 100 and 150 PGs per OSD, and you are pretty close.
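The 100-150 figure counts PG replicas (shards) hosted per OSD, so the back-of-the-envelope check looks like this (pool sizes below are hypothetical):

```python
def pgs_per_osd(pools, num_osds):
    """Average PG replicas hosted per OSD.

    pools: list of (pg_num, size) tuples, where size is the replica count
    for replicated pools or k+m for erasure-coded pools.
    """
    total_pg_shards = sum(pg_num * size for pg_num, size in pools)
    return total_pg_shards / num_osds

# Hypothetical cluster: one 3x-replicated pool with 1024 PGs on 24 OSDs.
print(pgs_per_osd([(1024, 3)], 24))  # 128.0 -- inside the 100-150 target
```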

Re: [ceph-users] Thoughts on rocksdb and erasurecode

2019-06-25 Thread Christian Wuerdig
The sizes are determined by rocksdb settings - some details can be found here: https://tracker.ceph.com/issues/24361 One thing to note, in this thread http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-October/030775.html it's noted that rocksdb could use up to 100% extra space during
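The oft-quoted "useful" DB partition sizes fall out of the level arithmetic: a level only helps if it fits on the fast device entirely. A sketch assuming the common defaults of a 256 MiB level-1 base and a 10x per-level multiplier (your rocksdb settings may differ):

```python
def rocksdb_level_sizes(base_bytes, multiplier, levels):
    """Per-level capacities and the cumulative space needed to hold
    everything up to and including each level on fast storage."""
    sizes = [base_bytes * multiplier ** i for i in range(levels)]
    cumulative, total = [], 0
    for s in sizes:
        total += s
        cumulative.append(total)
    return sizes, cumulative

GiB = 1024 ** 3
# Assumed defaults: 256 MiB base, 10x growth, four levels shown.
sizes, cum = rocksdb_level_sizes(256 * 2 ** 20, 10, 4)
print([round(s / GiB, 2) for s in sizes])  # [0.25, 2.5, 25.0, 250.0]
```

This is why a DB device sized just under one of the cumulative thresholds wastes the remainder: the next level spills to slow storage anyway.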

Re: [ceph-users] Client admin socket for RBD

2019-06-25 Thread Alex Litvak
Thank you for the explanation, Jason, and thank you for opening a ticket for my issue. On 6/25/2019 1:56 PM, Jason Dillaman wrote: On Tue, Jun 25, 2019 at 2:40 PM Tarek Zegar <tze...@us.ibm.com> wrote: Sasha, Sorry I don't get it, the documentation for the command states that in

Re: [ceph-users] CEPH pool statistics MAX AVAIL

2019-06-25 Thread Mohamad Gebai
MAX AVAIL is the amount of data you can still write to the cluster before *any one of your OSDs* becomes near-full. If MAX AVAIL is not what you expect it to be, look at the data distribution using ceph osd tree and make sure you have a uniform distribution. Mohamad On 6/25/19 11:46 AM, Davis
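As a rough illustration of why the fullest OSD dictates MAX AVAIL, here is a deliberately simplified model. It ignores CRUSH weights and per-pool roots, assumes new data spreads evenly across all OSDs, and the 0.95 full ratio is an assumed default:

```python
def max_avail(osds, full_ratio=0.95, replicas=3):
    """Rough MAX AVAIL estimate: with even placement, the pool fills the
    moment the *fullest* OSD crosses full_ratio.

    osds: list of (capacity_bytes, used_bytes) tuples.
    """
    # Headroom of the most-full OSD bounds what every OSD can still take.
    headroom = min(cap * full_ratio - used for cap, used in osds)
    raw_writable = headroom * len(osds)   # even placement writes to all OSDs
    return raw_writable / replicas        # usable bytes after replication

TiB = 1024 ** 4
# Hypothetical: three 1 TiB OSDs, one of them much fuller than the others.
osds = [(TiB, 0.2 * TiB), (TiB, 0.2 * TiB), (TiB, 0.8 * TiB)]
print(round(max_avail(osds) / TiB, 3))  # limited by the 80%-full OSD
```

Even though two OSDs are mostly empty, the single 80%-full OSD caps the whole estimate, which is why fixing the distribution raises MAX AVAIL.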

Re: [ceph-users] Client admin socket for RBD

2019-06-25 Thread Jason Dillaman
On Tue, Jun 25, 2019 at 2:40 PM Tarek Zegar wrote: > Sasha, > > Sorry I don't get it, the documentation for the command states that in > order to see the config DB for all do: *"ceph config dump"* > To see what's in the config DB for a particular daemon do: *"ceph config > get "* > To see what's

Re: [ceph-users] Client admin socket for RBD

2019-06-25 Thread Tarek Zegar
Sasha, Sorry I don't get it, the documentation for the command states that in order to see the config DB for all do: "ceph config dump" To see what's in the config DB for a particular daemon do: "ceph config get " To see what's set for a particular daemon (be it from the config db, override,
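For anyone following along, the three commands differ roughly like this. Note that ceph config show requires Mimic or later; on Luminous the equivalent is ceph daemon osd.0 config show via the admin socket (osd.0 here is just an example daemon):

```shell
ceph config dump        # everything stored in the mon config database
ceph config get osd.0   # config-db values that would apply to osd.0
ceph config show osd.0  # what osd.0 is actually running with
                        # (config db + ceph.conf + runtime overrides)
```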

Re: [ceph-users] radosgw-admin list bucket based on "last modified"

2019-06-25 Thread M Ranga Swami Reddy
Thank you. Looking into the URL... On Tue, 25 Jun, 2019, 12:18 PM Torben Hørup, wrote: > Hi > > You could look into the radosgw elasticsearch sync module, and use that > to find the objects last modified. > > http://docs.ceph.com/docs/master/radosgw/elastic-sync-module/ > > /Torben > > On

Re: [ceph-users] Cannot delete bucket

2019-06-25 Thread J. Eric Ivancich
On 6/24/19 1:49 PM, David Turner wrote: > It's aborting incomplete multipart uploads that were left around. First > it will clean up the cruft like that and then it should start actually > deleting the objects visible in stats. That's my understanding of it > anyway. I'm in the middle of cleaning

Re: [ceph-users] Client admin socket for RBD

2019-06-25 Thread Jason Dillaman
On Mon, Jun 24, 2019 at 4:30 PM Alex Litvak wrote: > > Jason, > > What are you suggesting to do ? Removing this line from the config database > and keeping in config files instead? I think it's a hole right now in the MON config store that should be addressed. I've opened a tracker ticket [1]

[ceph-users] CEPH pool statistics MAX AVAIL

2019-06-25 Thread Davis Mendoza Paco
Hi all, I have installed Ceph Luminous with 43 OSDs (3 TB each). Checking pool statistics with ceph df detail:

GLOBAL:
    SIZE     AVAIL    RAW USED  %RAW USED  OBJECTS
    117TiB   69.3TiB  48.0TiB   40.91      4.20M
POOLS:
    NAME  ID  QUOTA OBJECTS  QUOTA

Re: [ceph-users] Client admin socket for RBD

2019-06-25 Thread Sasha Litvak
Tarek, Of course you are correct about the client nodes. I executed this command inside the container that runs a mon, or it can be done on the bare-metal node that runs a mon. You are essentially querying the mon configuration database. On Tue, Jun 25, 2019 at 8:53 AM Tarek Zegar wrote: > "config get"

Re: [ceph-users] Changing the release cadence

2019-06-25 Thread Alfredo Deza
On Mon, Jun 17, 2019 at 4:09 PM David Turner wrote: > > This was a little long to respond with on Twitter, so I thought I'd share my > thoughts here. I love the idea of a 12 month cadence. I like October because > admins aren't upgrading production within the first few months of a new >

Re: [ceph-users] Radosgw federation replication

2019-06-25 Thread Marcelo Mariano Miziara
Hi... this page is for an old version (Jewel). It's called federated multi-site nowadays. Read this one instead: http://docs.ceph.com/docs/master/radosgw/multisite/ . There are some instructions at the end about migrating a single site
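Condensed, the master-zone side of the multi-site setup looks roughly like this (the realm/zonegroup/zone names and endpoints below are made up; see the linked docs for the full procedure, including creating the secondary zone and syncing keys):

```shell
# On the master cluster: create realm, master zonegroup, and master zone.
radosgw-admin realm create --rgw-realm=myrealm --default
radosgw-admin zonegroup create --rgw-zonegroup=us \
    --endpoints=http://rgw1:80 --master --default
radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
    --endpoints=http://rgw1:80 --master --default
# Commit the period so the configuration takes effect.
radosgw-admin period update --commit
```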

[ceph-users] [events] Ceph Day CERN September 17 - CFP now open!

2019-06-25 Thread Mike Perez
Hey everyone, Ceph CERN Day will be a full-day event dedicated to fostering Ceph's research and non-profit user communities. The event is hosted by the Ceph team from the CERN IT department. We invite this community to meet and discuss the status of the Ceph project, recent improvements, and

Re: [ceph-users] OSDs taking a long time to boot due to 'clear_temp_objects', even with fresh PGs

2019-06-25 Thread Thomas Byrne - UKRI STFC
I hadn't tried manual compaction, but it did the trick. The db shrunk down to 50MB and the OSD booted instantly. Thanks! I'm confused as to why the OSDs weren't doing this themselves, especially as the operation only took a few seconds. But for now I'm happy that this is easy to rectify if we
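For anyone hitting the same thing, manual compaction can be triggered either online or offline (the OSD id and data path below are hypothetical):

```shell
# Online, via the admin socket of a running OSD:
ceph daemon osd.12 compact

# Offline, with the OSD stopped (BlueStore):
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-12 compact
```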

[ceph-users] Radosgw federation replication

2019-06-25 Thread Behnam Loghmani
Hi there, I have a Ceph cluster with radosgw and have been using it in my production environment for a while. Now I have decided to set up another cluster in another geographic location to have a disaster recovery plan. I read some docs like http://docs.ceph.com/docs/jewel/radosgw/federated-config/, but all of them are

Re: [ceph-users] radosgw-admin list bucket based on "last modified"

2019-06-25 Thread Torben Hørup
Hi You could look into the radosgw elasticsearch sync module, and use that to find the objects' last-modified times. http://docs.ceph.com/docs/master/radosgw/elastic-sync-module/ /Torben On 25.06.2019 08:19, M Ranga Swami Reddy wrote: Thanks for the reply. Btw, one my customer wants to get the

Re: [ceph-users] radosgw-admin list bucket based on "last modified"

2019-06-25 Thread M Ranga Swami Reddy
Thanks for the reply. Btw, one of my customers wants to list objects based on the last-modified date field. How can we achieve this? On Thu, Jun 13, 2019 at 7:09 PM Paul Emmerich wrote: > There's no (useful) internal ordering of these entries, so there isn't a > more efficient way than getting
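Since RGW keeps no server-side last-modified ordering, the alternative to the elasticsearch module is to list everything and sort client-side. A minimal sketch, assuming boto3-style listing entries (the keys and dates are made up):

```python
from datetime import datetime

def newest_first(listing):
    """Client-side sort of an S3-style listing by last-modified time.

    `listing` is a list of dicts shaped like boto3 list_objects_v2
    'Contents' entries: {'Key': ..., 'LastModified': datetime}.
    """
    return sorted(listing, key=lambda o: o["LastModified"], reverse=True)

objs = [
    {"Key": "a.txt", "LastModified": datetime(2019, 6, 1)},
    {"Key": "b.txt", "LastModified": datetime(2019, 6, 25)},
]
print([o["Key"] for o in newest_first(objs)])  # ['b.txt', 'a.txt']
```

Note this still requires a full listing pass, so it only works for buckets small enough to enumerate.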

Re: [ceph-users] Thoughts on rocksdb and erasurecode

2019-06-25 Thread Rafał Wądołowski
Why did you select these specific sizes? Is there any testing/research behind them? Best Regards, Rafał Wądołowski On 24.06.2019 13:05, Konstantin Shalygin wrote: > >> Hi >> >> Have been thinking a bit about rocksdb and EC pools: >> >> Since a RADOS object written to a EC(k+m) pool is split into