I get a HEALTH_WARN when I run `ceph status`. It says
health HEALTH_WARN
mds0: Many clients (17) failing to respond to cache pressure
I have 50 OSDs, 3 MONs, and 1 MDS. I only use CephFS, mounted on 20~30
clients via the kernel client.
I wonder how to locate those "many clients" that are failing to respond to cache pressure.
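One way to narrow this down (a sketch, assuming the MDS admin socket is reachable on the MDS host and the daemon is named mds.0 - adjust the name to your deployment) is to dump the client sessions and look at the cap counts and client addresses:

# on the MDS host: list every client session with its id, address and num_caps
ceph daemon mds.0 session ls

The sessions holding an unusually large num_caps are usually the ones the warning is about; the address/inst field tells you which machines they are.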
Hello,
On Wed, 9 Nov 2016 21:56:08 +0100 Andreas Gerstmayr wrote:
> Hello,
>
> >> 2 parallel jobs with one job simulating the journal (sequential
> >> writes, ioengine=libaio, direct=1, sync=1, iodepth=128, bs=1MB) and the
> >> other job simulating the datastore (random writes of 1MB)?
> >>
> >
Hi Yoann,
I am running 10.2.3 on all nodes.
Andrei
- Original Message -
> From: "Yoann Moulin"
> To: "ceph-users"
> Sent: Wednesday, 9 November, 2016 21:20:45
> Subject: Re: [ceph-users] radosgw - http status 400 while creating a bucket
> Hello,
>
>> many thanks for your help. I've t
Hello,
> many thanks for your help. I've tried setting the zone to master, followed by
> the period update --commit command. This is what I've got:
Maybe it's related to this issue:
http://tracker.ceph.com/issues/16839 (fixed in Jewel 10.2.3)
or this one:
http://tracker.ceph.com/issues/17239
Hi Yehuda,
many thanks for your help. I've tried setting the zone to master, followed by
the period update --commit command. This is what I've got:
root@arh-ibstorage1-ib:~# radosgw-admin zonegroup get --rgw-zonegroup=default
{
"id": "default",
"name": "default",
"api_name": "",
Hello,
2 parallel jobs with one job simulating the journal (sequential
writes, ioengine=libaio, direct=1, sync=1, iodepth=128, bs=1MB) and the
other job simulating the datastore (random writes of 1MB)?
To test against a single HDD?
Yes, something like that; the first fio job would need to go against
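For reference, a minimal sketch of such a two-job fio run (the device names /dev/sdX1 and /dev/sdX2 are placeholders - point them at the journal partition and the data partition, or at two test files):

fio --name=journal --filename=/dev/sdX1 --rw=write --bs=1M \
    --ioengine=libaio --iodepth=128 --direct=1 --sync=1 \
    --runtime=300 --time_based \
    --name=datastore --filename=/dev/sdX2 --rw=randwrite --bs=1M \
    --ioengine=libaio --iodepth=1 --direct=1 \
    --runtime=300 --time_based

fio runs the two jobs in parallel by default; the first approximates the sequential, synchronous journal writes and the second the 1MB random writes of the datastore, as described above.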
Here it is after running overnight (~9h): http://ix.io/1DNi
On Tue, Nov 8, 2016 at 11:00 PM, bobobo1...@gmail.com
wrote:
> Ah, I was actually mistaken. After running without Valgrind, it seems
> I had just misjudged how much it was slowed down. I'll leave it to run
> overnight as suggested.
>
> On Tue, No
On Wed, Nov 9, 2016 at 1:30 AM, Andrei Mikhailovsky wrote:
> Hi Yehuda,
>
> just tried to run the command to set the master_zone to default followed by
> the bucket create without doing the restart and I still have the same error
> on the client:
>
> encoding="UTF-8"?><Error><Code>InvalidArgument</Code><BucketName>my-new-buck
That recommendation has changed to upgrading OSDs first, then Monitors, due to
http://tracker.ceph.com/issues/17386#note-6
Ian
On Wed, Nov 9, 2016 at 3:11 PM, Peter Maloney <
peter.malo...@brockmann-consult.de> wrote:
> On 11/09/16 15:06, Alexander Walker wrote:
>
> Hello,
>
> I've a cluster of three no
Hi,
I'm configuring ceph as the storage for our openstack install. One thing
we might want to do in the future is have a second openstack instance
(e.g. to test the next release of openstack); we might well want to have
this talk to our existing ceph cluster.
I could do this by giving each stack
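One way to do that (a sketch under assumptions: the pool and user names volumes-test, images-test and client.cinder-test are made up, and the caps just follow the usual OpenStack/RBD pattern) is to give the second stack its own pools and its own cephx user, so it cannot touch the production pools:

# pools for the second (test) OpenStack instance
ceph osd pool create volumes-test 128
ceph osd pool create images-test 128

# a cephx user restricted to those pools
ceph auth get-or-create client.cinder-test \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes-test, allow rwx pool=images-test'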
Hi,
I have a Jewel/Ubuntu 16.04 Ceph cluster. I attempted to add some
radosgws, having already made the pools I thought they would need per
http://docs.ceph.com/docs/jewel/radosgw/config-ref/#pools
i.e. .rgw and so on:
.rgw
.rgw.control
.rgw.gc
.log
.intent-log
.usage
.
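For anyone following along, pre-creating those pools by hand looks roughly like the sketch below (the pg count of 8 is just a placeholder). Note that a Jewel radosgw may create and use differently named pools on its own, so it is worth checking afterwards which pools the gateway actually uses:

# placeholder pg count; adjust to your cluster
for p in .rgw .rgw.control .rgw.gc .log .intent-log .usage; do
    ceph osd pool create "$p" 8
done

# after starting radosgw, see which pools exist / are in use
rados lspools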
On 11/09/16 15:06, Alexander Walker wrote:
>
> Hello,
>
> I have a cluster of three nodes (two OSDs on each node). First I've
> updated one node - the OSD is OK and running, but ceph-mon crashed.
>
What does that mean... you updated a mon and an osd, and other mons and
osds are not upgraded? I think you sh
Hello,
I have a cluster of three nodes (two OSDs on each node). First I've updated
one node - the OSD is OK and running, but ceph-mon crashed.
cephus@ceph3:~$ sudo /usr/bin/ceph-mon --cluster=ceph -i ceph3 -f
--setuser ceph --setgroup ceph --debug_mon 20
starting mon.ceph3 rank 2 at 192.168.49.103:678
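A quick way to see how far the upgrade actually got (a sketch; the mon name ceph1 is assumed from the naming above, and the admin socket only answers for daemons that are still running) is to compare the installed package with what the running daemons report:

# package version installed on this node
ceph --version

# versions reported by the running daemons
ceph tell osd.* version
ceph daemon mon.ceph1 version    # run on a node whose mon is still up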
Hi Mehmet,
It won't let me adjust the PGs because there are "creating" tasks not done
yet.
---
[root@avatar0-ceph0 ~]# !157
ceph osd pool set rbd pgp_num 300
Error EBUSY: currently creating pgs, wait
[root@avatar0-ceph0 ~]#
---
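The usual way out of this (just a sketch) is to wait for the PG creation from the earlier pg_num increase to finish and then retry, e.g.:

# watch until "creating" disappears from the status output
watch -n 10 'ceph status | grep -i creating'

# once creation has settled, retry
ceph osd pool set rbd pgp_num 300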
Regards,
Vladimir FS Blando
Cloud Operations Manager
www.morphl
> -Original Message-
> From: Gregory Farnum [mailto:gfar...@redhat.com]
> Sent: 08 November 2016 22:55
> To: Nick Fisk
> Cc: Ceph Users
> Subject: Re: [ceph-users] MDS Problems - Solved but reporting for benefit of
> others
>
> On Wed, Nov 2, 2016 at 2:49 PM, Nick Fisk wrote:
> > A bit
Hi Yehuda,
just tried to run the command to set the master_zone to default followed by the
bucket create without doing the restart and I still have the same error on the
client:
<Error><Code>InvalidArgument</Code><BucketName>my-new-bucket-31337</BucketName><RequestId>tx00010-005822ebbd-9951ad8-default</RequestId><HostId>9951ad8-default-default</HostId></Error>
Andrei
Hi Yehuda,
I've tried that and after performed:
# radosgw-admin zonegroup get --rgw-zonegroup=default
{
"id": "default",
"name": "default",
"api_name": "",
"is_master": "true",
"endpoints": [],
"hostnames": [],
"hostnames_s3website": [],
"master_zone": "default",