On Mon, Oct 28, 2019 at 12:17 PM Kári Bertilsson wrote:
>
> Hello Patrick,
>
> Here is output from those commands
> https://pastebin.com/yUmuQuYj
>
> 5 clients have the file system mounted, but only 2 of them have most of the
> activity.
Have you modified any CephFS configurations?
A copy of
Any ideas or tips on how to debug further?
On Mon, Oct 28, 2019 at 7:17 PM Kári Bertilsson
wrote:
> Hello Patrick,
>
> Here is output from those commands
> https://pastebin.com/yUmuQuYj
>
> 5 clients have the file system mounted, but only 2 of them have most of
> the activity.
>
>
>
> On Mon,
On 28.10.19 at 15:48, Casey Bodley wrote:
>
> On 10/24/19 8:38 PM, Oliver Freyermuth wrote:
>> Dear Cephers,
>>
>> I have a question concerning static websites with RGW.
>> To my understanding, it is best to run >=1 RGW client for "classic" S3 and
>> in addition operate >=1 RGW client for
On Tue, 29 Oct 2019 at 1:50 am, wrote:
> Send ceph-users mailing list submissions to
> ceph-users@ceph.io
>
> To subscribe or unsubscribe via email, send a message with subject or
> body 'help' to
> ceph-users-requ...@ceph.io
>
> You can reach the person managing the list at
>
Hi All,
Running an object storage cluster, originally deployed with Nautilus 14.2.1
and now running 14.2.4.
Last week I was alerted to a new warning from my object storage cluster:
[root@ceph1 ~]# ceph health detail
HEALTH_WARN 1 large omap objects
LARGE_OMAP_OBJECTS 1 large omap objects
1
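A hedged sketch of how the offending object is usually tracked down (the log paths and the assumption that an RGW index pool is involved are mine, not details from this message):

    # the detecting OSD reports the object in the cluster/OSD logs
    grep "Large omap object found" /var/log/ceph/ceph.log /var/log/ceph/ceph-osd.*.log
    # for RGW, check which buckets are over the per-shard object limit
    radosgw-admin bucket limit check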
Hello Patrick,
Here is output from those commands
https://pastebin.com/yUmuQuYj
5 clients have the file system mounted, but only 2 of them have most of the
activity.
On Mon, Oct 28, 2019 at 6:54 PM Patrick Donnelly
wrote:
> Hello Kári,
>
> On Mon, Oct 28, 2019 at 11:14 AM Kári Bertilsson
>
On 10/25/2019 03:25 PM, Ryan wrote:
> Can you point me to the directions for the kernel mode iscsi backend. I
> was following these directions
> https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/
>
If you just wanted to use the krbd device /dev/rbd$N and export it with
iscsi from a single
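For the single-gateway krbd case mentioned above, a minimal sketch using LIO/targetcli (the pool, image name and IQN are made up for illustration):

    # map the image with the kernel RBD client
    rbd map rbd/disk1
    # export the resulting /dev/rbd0 as an iSCSI LUN
    targetcli /backstores/block create name=disk1 dev=/dev/rbd0
    targetcli /iscsi create iqn.2019-10.com.example:disk1
    targetcli /iscsi/iqn.2019-10.com.example:disk1/tpg1/luns create /backstores/block/disk1

Note that this plain LIO path has none of the multi-gateway failover handling provided by the ceph-iscsi tooling in the linked directions.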
Hello Kári,
On Mon, Oct 28, 2019 at 11:14 AM Kári Bertilsson wrote:
> This seems to happen mostly when listing folders containing 10k+ folders.
>
> The dirlisting hangs indefinitely or until I restart the active MDS and then
> the hanging "ls" command will finish running.
>
> Every time
Hi Everyone,
So, I'm in the process of trying to migrate our rgw.buckets.data pool from
a replicated rule pool to an erasure coded pool. I've gotten the EC pool
set up, good EC profile and crush ruleset, pool created successfully, but
when I go to "rados cppool xxx.rgw.buckets.data
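For reference, the EC profile and pool setup described above usually looks roughly like this (profile name, k/m values and PG counts are illustrative only):

    ceph osd erasure-code-profile set rgw-ec k=4 m=2 crush-failure-domain=host
    ceph osd pool create default.rgw.buckets.data.new 256 256 erasure rgw-ec
    ceph osd pool application enable default.rgw.buckets.data.new rgw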
This seems to happen mostly when listing folders containing 10k+ folders.
The dirlisting hangs indefinitely or until I restart the active MDS and
then the hanging "ls" command will finish running.
Every time restarting the active MDS fixes the problem for a while.
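When an ls hangs like this, the in-flight operations and client sessions on the active MDS are usually the first thing to look at; a hedged sketch using the admin socket (the MDS name is a placeholder):

    ceph daemon mds.mds1 dump_ops_in_flight
    ceph daemon mds.mds1 session ls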
Hi,
I'm facing an issue with starting 1 monitor.
In the relevant error log the issue seems to be clear:
root@ld5506:~# tail -n 30 /var/log/ceph/ceph-mon.ld5506.log
2019-10-28 17:14:46.471 7f157b097440 0 set uid:gid to 64045:64045
(ceph:ceph)
2019-10-28 17:14:46.471 7f157b097440 0 ceph version
Hi all,
Does anyone have a good config for lower memory radosgw machines?
We have 16GB VMs and our radosgw's go OOM when we have lots of
parallel clients (e.g. I see around 500 objecter_ops via the rgw
asok).
Maybe lowering rgw_thread_pool_size from 512 would help?
(This is running latest
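A minimal ceph.conf sketch for reining in RGW concurrency and the objecter queue (the section name and the values are guesses, not a tested recommendation for 16GB):

    [client.rgw.gateway1]
        rgw thread pool size = 256
        objecter inflight ops = 1024
        objecter inflight op bytes = 134217728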
On Fri, 25 Oct 2019 12:56:24 -0600,
Wido den Hollander wrote:
>
[snip]
>
> What kind of application are you running on top of Ceph?
I'm running KVM and Docker with the rexray storage driver. I kinda
figured that something else triggered the creation; it's just weird.
>
> Wido
Hi all,
Apologies for all the messages to the list over the past few days.
After an upgrade from 12.2.7 to 12.2.12 (inherited cluster) for an RGW
multisite active/active setup, I am almost constantly seeing 1-10 "recovering
shards" when running "radosgw-admin sync status", i.e.:
--
#
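The usual follow-up for persistent recovering shards is to look at the sync error log and the per-shard data sync state; a sketch with placeholder zone and shard values:

    radosgw-admin sync error list
    radosgw-admin data sync status --source-zone=us-east --shard-id=0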
On 10/24/19 8:38 PM, Oliver Freyermuth wrote:
Dear Cephers,
I have a question concerning static websites with RGW.
To my understanding, it is best to run >=1 RGW client for "classic" S3 and in
addition operate >=1 RGW client for website serving
(potentially with HAProxy or its friends in
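As a rough sketch, the website-serving RGW instance is typically given the s3website options in ceph.conf (the instance name and hostname here are assumptions):

    [client.rgw.website1]
        rgw enable static website = true
        rgw dns s3website name = s3-website.example.com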
Hi Ceph's!
We started deleting a bucket several days ago. Total size 47TB / 8.5M
objects.
Now the CLI bucket rm appears stuck, and the console keeps printing these messages.
[root@ceph-rgw03 ~]# 2019-10-28 13:55:43.880 7f0dd92c9700 0
abort_bucket_multiparts WARNING : aborted 1000 incomplete
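For what it's worth, large-bucket removals are often run with the purge and GC-bypass flags, roughly like this (the bucket name is a placeholder; --bypass-gc deletes objects directly instead of queueing them for garbage collection):

    radosgw-admin bucket rm --bucket=big-bucket --purge-objects --bypass-gc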
Is there a way to get rid of these warnings with the autoscaler activated, besides
adding new OSDs?
So far I haven't gotten a satisfactory answer as to why this happens at all.
ceph osd pool autoscale-status:
POOL  SIZE  TARGET SIZE  RATE  RAW CAPACITY  RATIO  TARGET RATIO  BIAS
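If this is the usual target-ratio/overcommit warning, it can typically be silenced without adding OSDs by adjusting the pool's target or switching the autoscaler off for that pool; a sketch with a placeholder pool name and ratio:

    ceph osd pool set mypool target_size_ratio 0.2
    # or stop the autoscaler from acting/warning on this pool
    ceph osd pool set mypool pg_autoscale_mode off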
Good day!
Sat, Oct 26, 2019 at 01:04:28AM +0800, changcheng.liu wrote:
> What's your ceph version? Have you verified whether the problem could be
> reproduced on master branch?
One possibility is a Jumbo Frames related bug. I had completely disabled JF
in order to use RDMA over
On 10/27/19 6:01 AM, Frank R wrote:
I hate to be a pain but I have one more question.
After I run
radosgw-admin reshard stale-instances rm
if I run
radosgw-admin reshard stale-instances list
some new entries appear for a bucket that no longer exists. Is there a
way to cancel the operation
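There is a cancel subcommand for pending reshard entries; whether it helps for a bucket that no longer exists is unclear, but as a sketch (the bucket name is a placeholder):

    radosgw-admin reshard cancel --bucket=old-bucket
    radosgw-admin reshard list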