I found there is an option `mds_health_summarize_threshold` so that it can
show the clients that are lagging.
I increased the default value. I ran `ceph daemon perf dump` to make sure
`inodes < inodes_max`. The problem still persists. I'll try looking into
the code for clues.
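For reference, the checks I ran look roughly like this (mds.a is a placeholder
daemon name, and some options may only take effect after being set in
ceph.conf and restarting the MDS):

ceph daemon mds.a config get mds_health_summarize_threshold
ceph daemon mds.a config set mds_health_summarize_threshold 60
ceph daemon mds.a perf dump | grep -E '"inodes"|"inodes_max"'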
Thanks
On Fri, Nov 1
Doesn't the mds log tell you which client ids have problems?
Does your mds have enough RAM so that you can increase the default value
of the mds cache size?
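As a sketch (mds.a and the value 300000 are placeholders; on Jewel,
mds_cache_size is an inode count), the cache size could be raised like this:

ceph daemon mds.a config set mds_cache_size 300000
# or persistently, in ceph.conf on the MDS host:
# [mds]
#     mds cache size = 300000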
Cheers
G.
From: Yutian Li [l...@megvii.com]
Sent: 11 November 2016 14:03
To: Goncalo Borges; c
As for now, when I run `dump_ops_in_flight`, `ops` is empty and `num_ops`
is 0.
But when I run `ceph status`, I still get 15 clients failing to respond to
cache pressure.
Where should I start solving this problem?
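For reference, a sketch of what I can check from here (mds.a is a placeholder
for the daemon name):

ceph health detail        # names the client ids behind the cache pressure warning
ceph daemon mds.a session ls   # per-client sessions, including how many caps each holds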
On Thu, Nov 10, 2016 at 6:16 PM Goncalo Borges
wrote:
> Hi
>
> "ceph daemon mds.
Hi Dan,
I know there are path restriction issues in the kernel client. See the
discussion here.
http://lists.opennebula.org/pipermail/ceph-users-ceph.com/2016-June/010656.html
http://tracker.ceph.com/issues/16358
Cheers
Goncalo
From: ceph-users [ceph
greetings
When I removed a single large rbd snap today from a 20 TB rbd, my OSDs
had very high load for a while. During this period of high load, multiple
OSDs were marked down and marked themselves up again, and 2 of my OSDs
crashed; these do not want to start again.
The log does not
Hi Daniel,
how would I check what server is the master and how do I set it?
cheers
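For what it's worth, my best guess at a sketch (Jewel-era radosgw-admin; the
zone/zonegroup names below are the defaults and may not match my setup):

radosgw-admin zonegroup get --rgw-zonegroup=default   # check "master_zone"
radosgw-admin zone modify --rgw-zone=default --master --default
radosgw-admin period update --commit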
- Original Message -
> From: "Daniel Gryniewicz"
> To: "ceph-users"
> Sent: Thursday, 10 November, 2016 15:08:55
> Subject: Re: [ceph-users] radosgw - http status 400 while creating a bucket
> Your RGW
Orit, here is the output:
root@arh-ibstorage2-ib:~# rados ls -p .rgw.root
region_map
default.zone.5b41b1b2-0f92-463d-b582-07552f83e66c
realms.5b41b1b2-0f92-463d-b582-07552f83e66c
zonegroups_names.default
zone_names.default
periods.a9543371-a073-4d73-ab6d-0f54991c7ad9.1
realms_names.default
realms
Hi all, Hi Zheng,
We're seeing a strange issue with the kernel cephfs clients, combined
with a path-restricted mds cap. It seems that files/dirs are
intermittently not created, failing with permission denied.
For example, when I untar a kernel into cephfs, we see ~1 in 1000 files
fail to open/mkdir.
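For context, a path-restricted cap of the kind described above looks roughly
like this (client name, path and pool are placeholders, not our actual ones):

ceph auth get-or-create client.restricted \
    mds 'allow rw path=/restricted' \
    mon 'allow r' \
    osd 'allow rw pool=cephfs_data'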
Clie
On Thu, Nov 10, 2016 at 3:32 PM, Andrei Mikhailovsky wrote:
> Orit, true.
>
> yeah, all my servers are running 10.2.3-1xenial or 10.2.3-1trusty. I have a
> small cluster and I always update all servers at once.
>
> I don't have any Hammer releases of ceph anywhere on the network.
>
can you run:
Your RGW doesn't think it's the master, and cannot connect to the
master, thus the create fails.
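A quick sketch of how to check that (assuming the default names): compare the
master zone id recorded in the current period with the local zone's id.

radosgw-admin period get | grep master_zone
radosgw-admin zone get --rgw-zone=default | grep '"id"'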
Daniel
On 11/08/2016 06:36 PM, Andrei Mikhailovsky wrote:
Hello
I am having issues with creating buckets in radosgw. It started with an
upgrade to version 10.2.x
When I am creating a bucket I get
Orit, true.
yeah, all my servers are running 10.2.3-1xenial or 10.2.3-1trusty. I have a
small cluster and I always update all servers at once.
I don't have any Hammer releases of ceph anywhere on the network.
Is 10.2.4 out already? I didn't see an update package for that.
Thanks
Andrei
-
On Thu, Nov 10, 2016 at 2:55 PM, Andrei Mikhailovsky wrote:
> Orit,
>
> Here is what i've done just now:
>
> root@arh-ibstorage1-ib:~# service ceph-radosgw@radosgw.gateway stop
>
> (the above command was run on both radosgw servers). Checked with ps and no
> radosgw services were running. After t
Orit,
Here is what i've done just now:
root@arh-ibstorage1-ib:~# service ceph-radosgw@radosgw.gateway stop
(the above command was run on both radosgw servers). Checked with ps and no
radosgw services were running. After that I've done:
root@arh-ibstorage1-ib:~# ./ceph-zones-fix.sh
+ RADOSGW_
Hi Orit,
I have two radosgw services running on two physical servers within the same
zone. This was done to minimise downtime while the maintenance is done. The
http proxy sits on top and balances the links. The clients connect to http
proxy.
I will double check, but I think I did shut down bo
On Thu, Nov 10, 2016 at 2:24 PM, Andrei Mikhailovsky wrote:
>
> Hi Orit,
>
> Thanks for the links.
>
> I've had a look at the link that you've sent
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-July/011157.html and
> followed the instructions. Created the script as depicted in the e
Hi Orit,
Thanks for the links.
I've had a look at the link that you've sent
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-July/011157.html and
followed the instructions. Created the script as depicted in the email. Changed
the realm name to something relevant. The script ran withou
Hi,
> On 10 Nov 2016, at 12:17, han vincent wrote:
>
> Hello, all:
>Recently, I have a plan to build a large-scale ceph cluster in
> production for OpenStack. I want to build the cluster as large as
> possible.
> In the following mailing-list thread, Karol asked a question about
> "largest cep
Hello, all:
Recently, I have a plan to build a large-scale ceph cluster in
production for OpenStack. I want to build the cluster as large as
possible.
In the following mailing-list thread, Karol asked a question about
"largest ceph cluster":
http://lists.ceph.com/pipermail/ceph-users-ce
Hi
"ceph daemon mds. session ls", executed in your mds server, should give you
hostname and client id of all your cephfs clients.
"ceph daemon mds. dump_ops_in_flight" should give you operations not
completed or pending to complete for certain clients ids. In case of problems,
that those probl
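Spelled out with a placeholder daemon name, those two commands would be:

ceph daemon mds.a session ls
ceph daemon mds.a dump_ops_in_flight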
On Wed, Nov 9, 2016 at 10:20 PM, Yoann Moulin wrote:
> Hello,
>
>> many thanks for your help. I've tried setting the zone to master, followed
>> by the period update --commit command. This is what i've had:
>
> maybe it's related to this issue :
>
> http://tracker.ceph.com/issues/16839 (fixe in J
***bump***
this is pretty broken and urgent. thanks
- Original Message -
> From: "Andrei Mikhailovsky"
> To: "Yoann Moulin"
> Cc: "ceph-users"
> Sent: Wednesday, 9 November, 2016 23:27:17
> Subject: Re: [ceph-users] radosgw - http status 400 while creating a bucket
> Hi Yoann,
>
>
https://www.suse.com/documentation/ses-3/book_storage_admin/data/ceph_rgw_manual.html
- this is an example of how documentation should look :-).
Regards
--
Jarek
--
Jarosław Owsiewski
2016-11-09 15:48 GMT+01:00 Matthew Vernon :
> Hi,
>
> I have a jewel/Ubuntu 16.04 ceph cluster. I attempted to