Re: [ceph-users] Is rbd caching safe to use in the current ceph-iscsi 3.0 implementation

2019-06-24 Thread Jason Dillaman
On Mon, Jun 24, 2019 at 4:05 PM Paul Emmerich wrote: > > No. > > tcmu-runner disables the cache automatically, overriding your ceph.conf > setting. Correct. For safety purposes, we don't want to support a writeback cache when failover between different gateways is possible. > > Paul > > -- >
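For reference, a hedged sketch of what that means in practice (the pool name "rbd" below is an assumption, and "rbd config pool" needs Mimic or newer):

    # Whatever ceph.conf on the gateway says about "rbd cache", tcmu-runner
    # opens images with the cache disabled, so this setting has no effect there.
    grep -i "rbd cache" /etc/ceph/ceph.conf

    # Optionally make the intent explicit by pinning the option at the pool level.
    rbd config pool set rbd rbd_cache false
    rbd config pool get rbd rbd_cache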

Re: [ceph-users] Client admin socket for RBD

2019-06-24 Thread Alex Litvak
Jason, What are you suggesting we do? Removing this line from the config database and keeping it in config files instead? On 6/24/2019 1:12 PM, Jason Dillaman wrote: On Mon, Jun 24, 2019 at 2:05 PM Alex Litvak wrote: Jason, Here you go: WHO MASK LEVEL OPTION
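A minimal sketch of the "keep it in the config file" alternative being discussed, assuming the usual /etc/ceph/ceph.conf on the client (adjust path and section to taste):

    # /etc/ceph/ceph.conf on the client -- local override instead of the
    # cluster-wide config database entry.
    [client]
    admin socket = /var/run/ceph/$name.$pid.asok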

Re: [ceph-users] Is rbd caching safe to use in the current ceph-iscsi 3.0 implementation

2019-06-24 Thread Paul Emmerich
No. tcmu-runner disables the cache automatically, overriding your ceph.conf setting. Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49 89 1896585 90 On Mon, Jun 24, 2019 at 9:43 PM

[ceph-users] Is rbd caching safe to use in the current ceph-iscsi 3.0 implementation

2019-06-24 Thread Wesley Dillingham
Is it safe to have RBD cache enabled on all the gateways in the latest ceph 14.2+ and ceph-iscsi 3.0 setup? Assuming clients are using multipath as outlined here: http://docs.ceph.com/docs/nautilus/rbd/iscsi-initiators/ Thanks. Respectfully, Wes Dillingham wdilling...@godaddy.com Site

[ceph-users] Ceph Multi-site control over sync

2019-06-24 Thread Marcelo Mariano Miziara
Hello! I set up a lab with 2 separate clusters, each one with one zone. The tests were OK: if I put a file in a bucket in one zone, I could see it in the other. My question is whether it's possible to have more control over this sync. I want every sync to be disabled by default, but if it's
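If the goal is per-bucket control, something along these lines may work, depending on the release (the bucket name is a placeholder; verify the behaviour in the lab first):

    # Stop replicating a specific bucket between zones, then re-enable it later.
    radosgw-admin bucket sync disable --bucket=mybucket
    radosgw-admin bucket sync enable --bucket=mybucket

    # Check overall replication state from either zone.
    radosgw-admin sync status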

Re: [ceph-users] Cannot delete bucket

2019-06-24 Thread Sergei Genchev
Thank you David! I will give it a whirl and see if running it long enough will do it. On Mon, Jun 24, 2019 at 12:49 PM David Turner wrote: > > It's aborting incomplete multipart uploads that were left around. First it > will clean up the cruft like that and then it should start actually

Re: [ceph-users] Client admin socket for RBD

2019-06-24 Thread Jason Dillaman
On Mon, Jun 24, 2019 at 2:05 PM Alex Litvak wrote: > > Jason, > > Here you go: > > WHO MASK LEVEL OPTION VALUE RO > client advanced admin_socket /var/run/ceph/$name.$pid.asok * This is the offending config option that
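For reference, a minimal sketch of removing that entry from the cluster config database and confirming it is gone (Mimic/Nautilus "ceph config" commands):

    ceph config rm client admin_socket
    ceph config dump | grep admin_socket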

Re: [ceph-users] Client admin socket for RBD

2019-06-24 Thread Alex Litvak
Jason, Here you go: WHO MASK LEVEL OPTION VALUE RO client advanced admin_socket /var/run/ceph/$name.$pid.asok * global advanced cluster_network 10.0.42.0/23 * global advanced

Re: [ceph-users] Cannot delete bucket

2019-06-24 Thread David Turner
It's aborting incomplete multipart uploads that were left around. First it will clean up the cruft like that and then it should start actually deleting the objects visible in stats. That's my understanding of it anyway. I'm in the middle of cleaning up some buckets right now doing this same thing.
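The deletion being discussed is presumably along these lines (the bucket name is a placeholder); the second command is one way to watch the object count drop:

    # Delete a bucket and purge its objects; aborting leftover multipart
    # uploads happens first and can take a long time on large buckets.
    radosgw-admin bucket rm --bucket=mybucket --purge-objects

    # In another shell, check progress periodically.
    radosgw-admin bucket stats --bucket=mybucket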

Re: [ceph-users] RGW: Is 'radosgw-admin reshard stale-instances rm' safe?

2019-06-24 Thread Rudenko Aleksandr
Hi, Konstantin. Thanks for the reply. I know about stale instances and that they remain from a prior version. I am asking about the “marker” of a bucket. I have a bucket “clx” and I can see its current marker in the stale-instances list. As far as I know, the stale-instances list should contain only previous marker IDs.
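One way to compare the live marker against the stale-instances output before trusting an "rm", assuming the bucket really is named clx:

    # Current (live) bucket metadata, including its marker and bucket_id.
    radosgw-admin metadata get bucket:clx

    # Instances the reshard code considers stale.
    radosgw-admin reshard stale-instances list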

Re: [ceph-users] CephFS : Kernel/Fuse technical differences

2019-06-24 Thread Jeff Layton
On Mon, 2019-06-24 at 15:51 +0200, Hervé Ballans wrote: > Hi everyone, > > We have been using Ceph successfully here for several years now, and more recently, > CephFS. > > From the same CephFS server, I notice a big difference between a fuse > mount and a kernel mount (10 times faster for the kernel

Re: [ceph-users] Client admin socket for RBD

2019-06-24 Thread Jason Dillaman
On Sun, Jun 23, 2019 at 4:27 PM Alex Litvak wrote: > > Hello everyone, > > I encounter this with the Nautilus client and not with Mimic. Removing the admin > socket entry from the config on the client makes no difference. > > Error: > > rbd ls -p one > 2019-06-23 12:58:29.344 7ff2710b0700 -1 set_mon_vals failed

Re: [ceph-users] OSDs taking a long time to boot due to 'clear_temp_objects', even with fresh PGs

2019-06-24 Thread Gregory Farnum
On Mon, Jun 24, 2019 at 9:06 AM Thomas Byrne - UKRI STFC wrote: > > Hi all, > > > > Some bluestore OSDs in our Luminous test cluster have started becoming > unresponsive and booting very slowly. > > > > These OSDs have been used for stress testing of hardware destined for our > production

[ceph-users] OSDs taking a long time to boot due to 'clear_temp_objects', even with fresh PGs

2019-06-24 Thread Thomas Byrne - UKRI STFC
Hi all, Some bluestore OSDs in our Luminous test cluster have started becoming unresponsive and booting very slowly. These OSDs have been used for stress testing of hardware destined for our production cluster, so they have had a number of pools on them with many, many objects in the past.
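A rough way to see where a slow-booting OSD is spending its time, assuming default log paths and that osd.12 is one of the affected OSDs (the ID is a placeholder):

    # On the affected host: bump OSD debug logging for one OSD, then restart it.
    #
    # /etc/ceph/ceph.conf:
    #   [osd.12]
    #   debug osd = 10
    systemctl restart ceph-osd@12

    # Watch boot progress in the log; the exact messages depend on the debug level.
    tail -f /var/log/ceph/ceph-osd.12.log | grep -i -e clear_temp -e boot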

Re: [ceph-users] Using Ceph Ansible to Add Nodes to Cluster at Weight 0

2019-06-24 Thread Brett Chancellor
I have used the gentle reweight script many times in the past. But more recently, I expanded one cluster from 334 to 1114 OSDs by just changing the crush weight 100 OSDs at a time. Once all PGs from those 100 were stable and backfilling, I'd add another hundred. I stopped at 500 and let the backfill
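A rough sketch of that batched approach (the OSD IDs and target crush weight are placeholders; let backfill settle and check ceph -s between batches):

    # Raise a batch of newly added OSDs from crush weight 0 to their final weight.
    for id in $(seq 334 433); do
        ceph osd crush reweight osd.${id} 7.3
    done

    # Let the resulting backfill settle before starting the next batch of 100.
    ceph -s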

[ceph-users] CephFS : Kernel/Fuse technical differences

2019-06-24 Thread Hervé Ballans
Hi everyone, We have been using Ceph successfully here for several years now, and more recently, CephFS. From the same CephFS server, I notice a big difference between a fuse mount and a kernel mount (10 times faster for the kernel mount). It makes sense to me (an additional fuse library versus a direct
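For comparison, the two mount paths look roughly like this (the monitor address, user name, and secret file are placeholders):

    # Kernel client.
    mount -t ceph 192.168.1.1:6789:/ /mnt/cephfs \
        -o name=cephfs_user,secretfile=/etc/ceph/cephfs_user.secret

    # FUSE client for the same filesystem.
    ceph-fuse -n client.cephfs_user /mnt/cephfs-fuse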

[ceph-users] About available space ceph blue in store

2019-06-24 Thread gjprabu
Hi Team, We have 9 OSDs, and when we run ceph osd df it shows TOTAL SIZE 31 TiB, USE 13 TiB, AVAIL 18 TiB, %USE 42.49. When checked on a client machine it shows Size 14T, Use 6.5T, Avail 6.6T, so around 3 TB is missing. We are using replication size 2. Any
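A rough back-of-the-envelope with replication size 2, using the numbers from the message (rounding is approximate):

    # Raw capacity from "ceph osd df":        31 TiB
    # Usable capacity at size=2:              31 / 2 ~= 15.5 TiB
    # Raw used 13 TiB => actual data stored:  13 / 2 ~=  6.5 TiB (matches the client's USE)
    # The "Size" the client reports also depends on the pool's MAX AVAIL
    # calculation, so compare against this rather than the raw totals:
    ceph df detail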

Re: [ceph-users] rebalancing ceph cluster

2019-06-24 Thread Maged Mokhtar
On 24/06/2019 11:25, jinguk.k...@ungleich.ch wrote: Hello everyone, We have some OSDs in the Ceph cluster. Some OSDs' usage is more than 77% while another OSD's usage is 39% on the same host. I wonder why the OSDs' usage is so different (the difference is large) and how I can fix it? ID CLASS WEIGHT
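Two common ways to even that out, depending on the release (the balancer module needs Luminous or newer; treat this as a sketch, not a prescription):

    # Inspect the spread first.
    ceph osd df tree

    # Option 1: one-shot reweight of the most utilized OSDs (dry run first).
    ceph osd test-reweight-by-utilization
    ceph osd reweight-by-utilization

    # Option 2: let the balancer handle it continuously (upmap needs Luminous+ clients).
    ceph balancer mode upmap
    ceph balancer on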

Re: [ceph-users] Thoughts on rocksdb and erasurecode

2019-06-24 Thread Konstantin Shalygin
Hi, I have been thinking a bit about RocksDB and EC pools: since a RADOS object written to an EC(k+m) pool is split into several smaller pieces, each OSD will receive many more, smaller objects compared to what it would receive in a replicated setup. This must mean that the RocksDB will

Re: [ceph-users] rebalancing ceph cluster

2019-06-24 Thread Konstantin Shalygin
Hello everyone, We have some OSDs in the Ceph cluster. Some OSDs' usage is more than 77% while another OSD's usage is 39% on the same host. I wonder why the OSDs' usage is so different (the difference is large) and how I can fix it? ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS TYPE NAME

[ceph-users] Thoughts on rocksdb and erasurecode

2019-06-24 Thread Torben Hørup
Hi, I have been thinking a bit about RocksDB and EC pools: since a RADOS object written to an EC(k+m) pool is split into several smaller pieces, each OSD will receive many more, smaller objects compared to what it would receive in a replicated setup. This must mean that the RocksDB will
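A small worked example of that point, assuming 4 MiB RADOS objects and an EC 4+2 profile (purely illustrative numbers):

    # 3x replication: one 4 MiB object -> 3 copies -> each OSD stores a 4 MiB object per onode.
    # EC 4+2:         one 4 MiB object -> 6 shards -> each OSD stores only a ~1 MiB shard per onode.
    # Onodes per TiB of data stored on a single OSD:
    echo $(( 1024 * 1024 / 4 ))   # replicated, 4 MiB objects  ->  262144
    echo $(( 1024 * 1024 / 1 ))   # EC 4+2, 1 MiB shards       -> 1048576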

[ceph-users] rebalancing ceph cluster

2019-06-24 Thread jinguk.k...@ungleich.ch
Hello everyone, We have some OSDs in the Ceph cluster. Some OSDs' usage is more than 77% while another OSD's usage is 39% on the same host. I wonder why the OSDs' usage is so different (the difference is large) and how I can fix it? ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS TYPE NAME