[ceph-users] Re: Dirlisting hangs with cephfs

2019-10-28 Thread Patrick Donnelly
On Mon, Oct 28, 2019 at 12:17 PM Kári Bertilsson wrote: > > Hello Patrick, > > Here is output from those commands > https://pastebin.com/yUmuQuYj > > 5 clients have the file system mounted, but only 2 of them have most of the > activity. Have you modified any CephFS configurations? A copy of
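
A minimal sketch of how non-default settings can be checked, assuming the admin socket is reachable on the MDS host and mds.a stands in for the actual daemon name:
ceph config dump                  # cluster-wide settings changed from defaults (Mimic and later)
ceph daemon mds.a config diff     # settings the running MDS sees that differ from defaults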

[ceph-users] Re: Dirlisting hangs with cephfs

2019-10-28 Thread Kári Bertilsson
Any ideas or tips on how to debug further? On Mon, Oct 28, 2019 at 7:17 PM Kári Bertilsson wrote: > Hello Patrick, > > Here is output from those commands > https://pastebin.com/yUmuQuYj > > 5 clients have the file system mounted, but only 2 of them have most of > the activity. > > > > On Mon,
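
A minimal sketch of the commands typically used to narrow down a hung directory listing, assuming the active MDS daemon is mds.a (placeholder name):
ceph fs status                          # which MDS is active, client count
ceph daemon mds.a ops                   # operations currently in flight
ceph daemon mds.a dump_blocked_ops      # operations blocked and for how long
ceph daemon mds.a session ls            # per-client session state and caps held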

[ceph-users] Re: Static website hosting with RGW

2019-10-28 Thread Oliver Freyermuth
Am 28.10.19 um 15:48 schrieb Casey Bodley: > > On 10/24/19 8:38 PM, Oliver Freyermuth wrote: >> Dear Cephers, >> >> I have a question concerning static websites with RGW. >> To my understanding, it is best to run >=1 RGW client for "classic" S3 and >> in addition operate >=1 RGW client for

[ceph-users] Help

2019-10-28 Thread Sumit Gaur
On Tue, 29 Oct 2019 at 1:50 am, wrote: > Send ceph-users mailing list submissions to > ceph-users@ceph.io > > To subscribe or unsubscribe via email, send a message with subject or > body 'help' to > ceph-users-requ...@ceph.io > > You can reach the person managing the list at >

[ceph-users] Bogus Entries in RGW Usage Log / Large omap object in rgw.log pool

2019-10-28 Thread David Monschein
Hi All, Running an object storage cluster, originally deployed with Nautilus 14.2.1 and now running 14.2.4. Last week I was alerted to a new warning from my object storage cluster: [root@ceph1 ~]# ceph health detail HEALTH_WARN 1 large omap objects LARGE_OMAP_OBJECTS 1 large omap objects 1
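
A minimal sketch of how such a warning is usually tracked down and the usage log trimmed; pool and object names below are placeholders and depend on the zone configuration:
ceph health detail                                 # health detail / cluster log point at the offending pool and object
rados -p default.rgw.log ls | grep usage           # the usage log lives in the log pool as usage.* objects
rados -p default.rgw.log listomapkeys usage.17 | wc -l
radosgw-admin usage trim --start-date=2019-01-01 --end-date=2019-09-30   # dates are examples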

[ceph-users] Re: Dirlisting hangs with cephfs

2019-10-28 Thread Kári Bertilsson
Hello Patrick, Here is output from those commands https://pastebin.com/yUmuQuYj 5 clients have the file system mounted, but only 2 of them have most of the activity. On Mon, Oct 28, 2019 at 6:54 PM Patrick Donnelly wrote: > Hello Kári, > > On Mon, Oct 28, 2019 at 11:14 AM Kári Bertilsson >

[ceph-users] Re: iSCSI write performance

2019-10-28 Thread Mike Christie
On 10/25/2019 03:25 PM, Ryan wrote: > Can you point me to the directions for the kernel mode iscsi backend. I > was following these directions > https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/ > If you just wanted to use the krbd device /dev/rbd$N and export it with iscsi from a single
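
For reference, a minimal sketch of the krbd-plus-LIO approach described above, exporting a mapped RBD device from a single node with targetcli; pool, image and IQN names are placeholders, and this gives no multi-gateway failover:
rbd map mypool/myimage                               # appears as /dev/rbd0 (or /dev/rbd$N)
targetcli /backstores/block create name=myimage dev=/dev/rbd0
targetcli /iscsi create iqn.2019-10.com.example:target1
targetcli /iscsi/iqn.2019-10.com.example:target1/tpg1/luns create /backstores/block/myimage
# portals and ACLs still need to be configured before an initiator can log in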

[ceph-users] Re: Dirlisting hangs with cephfs

2019-10-28 Thread Patrick Donnelly
Hello Kári, On Mon, Oct 28, 2019 at 11:14 AM Kári Bertilsson wrote: > This seems to happen mostly when listing folders containing 10k+ folders. > > The dirlisting hangs indefinitely or until I restart the active MDS and then > the hanging "ls" command will finish running. > > Every time

[ceph-users] Correct Migration Workflow Replicated -> Erasure Code

2019-10-28 Thread Mac Wynkoop
Hi Everyone, So, I'm in the process of trying to migrate our rgw.buckets.data pool from a replicated rule pool to an erasure coded pool. I've gotten the EC pool set up, good EC profile and crush ruleset, pool created successfully, but when I go to "rados cppool xxx.rgw.buckets.data
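
A minimal sketch of the copy-and-rename approach implied by "rados cppool"; the EC profile name and PG counts are placeholders, RGW should be stopped for the duration of the copy, and writes made during the copy would be lost:
ceph osd pool create xxx.rgw.buckets.data.ec 256 256 erasure myprofile
rados cppool xxx.rgw.buckets.data xxx.rgw.buckets.data.ec
ceph osd pool rename xxx.rgw.buckets.data xxx.rgw.buckets.data.old
ceph osd pool rename xxx.rgw.buckets.data.ec xxx.rgw.buckets.data
ceph osd pool application enable xxx.rgw.buckets.data rgw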

[ceph-users] Dirlisting hangs with cephfs

2019-10-28 Thread Kári Bertilsson
This seems to happen mostly when listing folders containing 10k+ folders. The dirlisting hangs indefinitely or until I restart the active MDS and then the hanging "ls" command will finish running. Every time, restarting the active MDS fixes the problem for a while.

[ceph-users] Ceph monitor start error: monitor data filesystem reached concerning levels of available storage space

2019-10-28 Thread Thomas Schneider
Hi, I'm facing an issue with starting 1 monitor. In the relevant error log the issue seems to be clear: root@ld5506:~# tail -n 30 /var/log/ceph/ceph-mon.ld5506.log 2019-10-28 17:14:46.471 7f157b097440  0 set uid:gid to 64045:64045 (ceph:ceph) 2019-10-28 17:14:46.471 7f157b097440  0 ceph version
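
A minimal sketch of what is usually checked when a monitor refuses to run for lack of space; the mon directory and values are placeholders:
df -h /var/lib/ceph/mon
du -sh /var/lib/ceph/mon/ceph-ld5506/store.db      # the mon store usually dominates usage
# in ceph.conf, either compact the store on start or relax the threshold temporarily
[mon]
mon_compact_on_start = true
mon_data_avail_crit = 1      # default is 5 (percent free)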

[ceph-users] Lower mem radosgw config?

2019-10-28 Thread Dan van der Ster
Hi all, Does anyone have a good config for lower memory radosgw machines? We have 16GB VMs and our radosgw's go OOM when we have lots of parallel clients (e.g. I see around 500 objecter_ops via the rgw asok). Maybe lowering rgw_thread_pool_size from 512 would help? (This is running latest
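
A minimal sketch of the knobs commonly tuned for memory-constrained radosgw instances; the section name and values are illustrative only, not recommendations:
[client.rgw.myhost]
rgw_thread_pool_size = 128               # down from the default 512
objecter_inflight_ops = 512              # caps outstanding ops to the OSDs
objecter_inflight_op_bytes = 134217728   # caps bytes held for in-flight ops
rgw_cache_lru_size = 5000                # metadata cache entries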

[ceph-users] Re: Strange RBD images created

2019-10-28 Thread Randall Smith
On Fri, 25 Oct 2019 12:56:24 -0600, Wido den Hollander wrote: > [snip] > > What kind of application are you running on top of Ceph? I'm running KVM and Docker with the rexray storage driver. I kinda figured that something else triggered the creation, it's just weird. > > Wido

[ceph-users] radosgw recovering shards

2019-10-28 Thread Frank R
Hi all, Apologies for all the messages to the list over the past few days. After an upgrade from 12.2.7 to 12.2.12 (inherited cluster) for an RGW multisite active/active setup I am almost constantly seeing 1-10 "recovering shards" when running "radosgw-admin sync status", ie: -- #
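
A minimal sketch of the commands typically used to see whether those shards are actually making progress; the shard ID and zone name are placeholders:
radosgw-admin sync status
radosgw-admin sync error list
radosgw-admin data sync status --shard-id=31 --source-zone=us-east-1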

[ceph-users] Re: Static website hosting with RGW

2019-10-28 Thread Casey Bodley
On 10/24/19 8:38 PM, Oliver Freyermuth wrote: Dear Cephers, I have a question concerning static websites with RGW. To my understanding, it is best to run >=1 RGW client for "classic" S3 and in addition operate >=1 RGW client for website serving (potentially with HAProxy or its friends in
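
A minimal sketch of what a dedicated website-serving RGW instance tends to look like in ceph.conf; the instance name and domains are placeholders:
[client.rgw.website1]
rgw_enable_static_website = true
rgw_enable_apis = s3website
rgw_dns_s3website_name = website.example.com
rgw_dns_name = s3.example.com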

[ceph-users] After deleting 8.5M objects in a bucket, still 500K left

2019-10-28 Thread EDH - Manuel Rios Fernandez
Hi Cephers! We started deleting a bucket several days ago. Total size 47TB / 8.5M objects. Now the CLI bucket rm appears stuck, and the console drops these messages: [root@ceph-rgw03 ~]# 2019-10-28 13:55:43.880 7f0dd92c9700 0 abort_bucket_multiparts WARNING : aborted 1000 incomplete
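
A minimal sketch of how the leftover objects are usually inspected and the removal retried; the bucket name is a placeholder:
radosgw-admin bucket stats --bucket=mybucket       # num_objects / size still accounted to the bucket
radosgw-admin bucket rm --bucket=mybucket --purge-objects --bypass-gc
# the abort_bucket_multiparts lines above are incomplete multipart uploads being cleaned up in batches of 1000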

[ceph-users] Re: subtrees have overcommitted (target_size_bytes / target_size_ratio)

2019-10-28 Thread Lars Täuber
Is there a way to get rid of these warnings with the autoscaler activated, besides adding new OSDs? So far I couldn't get a satisfactory answer to the question of why this happens. ceph osd pool autoscale-status : POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET RATIO BIAS
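
A minimal sketch of the commands involved; the pool name and values are placeholders:
ceph osd pool autoscale-status
ceph osd pool set mypool target_size_bytes 0       # clear an explicit size hint
ceph osd pool set mypool target_size_ratio 0.2     # or lower the ratio so the sum stays <= 1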

[ceph-users] Re: RDMA Bug?

2019-10-28 Thread Max Krasilnikov
Good day! Sat, Oct 26, 2019 at 01:04:28AM +0800, changcheng.liu wrote: > What's your ceph version? Have you verified whether the problem could be > reproduced on master branch? It could be a Jumbo Frames related bug. I had completely disabled JF in order to use RDMA over
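
For reference, a minimal sketch of the settings usually in play when mixing RDMA and MTU changes; the interface and device names are placeholders:
ip link show eth0 | grep mtu          # confirm the MTU actually in effect end to end
# ceph.conf messenger options for RDMA
ms_type = async+rdma
ms_async_rdma_device_name = mlx5_0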

[ceph-users] Re: rgw recovering shards

2019-10-28 Thread Konstantin Shalygin
On 10/27/19 6:01 AM, Frank R wrote: I hate to be a pain but I have one more question. After I run radosgw-admin reshard stale-instances rm if I run radosgw-admin reshard stale-instances list some new entries appear for a bucket that no longer exists. Is there a way to cancel the operation
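
A minimal sketch of the resharding commands relevant here; the bucket name is a placeholder:
radosgw-admin reshard list                         # resharding operations still queued
radosgw-admin reshard cancel --bucket=mybucket     # cancel a pending entry
radosgw-admin reshard stale-instances list
radosgw-admin reshard stale-instances rm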