Re: [ceph-users] cephfs kernel client blocks when removing large files

2018-11-19 Thread Dylan McCulloch
> On Mon, Oct 22, 2018 at 7:47 PM Dylan McCulloch wrote: >> On Mon, Oct 22, 2018 at 2:37 PM Dylan McCulloch wrote: >>> On Mon, Oct 22, 2018 at 9:46 AM Dylan McCulloch unimelb.edu.au> wrote: >>>> On Mon, Oct 8, 2018 at 2:57 PM

Re: [ceph-users] Huge latency spikes

2018-11-19 Thread Alex Litvak
I went through a raid controller firmware update. I replaced a pair of SSDs with new ones. Nothing has changed. Per the controller card utility, no patrol read is happening and the battery backup is in good shape. The cache policy is WriteBack. I am aware of the bad-battery effect, but it
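If the card is a MegaRAID/PERC, the cache policy, patrol-read state and BBU health can also be checked from the OS; a rough sketch, assuming MegaCli64 is installed (the adapter/LD selectors are the usual defaults, not taken from the original post):

    # cache policy of all logical drives
    MegaCli64 -LDGetProp -Cache -LAll -aALL
    # patrol read state and schedule
    MegaCli64 -AdpPR -Info -aALL
    # battery backup unit status
    MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL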

Re: [ceph-users] mon:failed in thread_name:safe_timer

2018-11-19 Thread Patrick Donnelly
On Mon, Nov 19, 2018 at 7:17 PM 楼锴毅 wrote: > Sorry to disturb, but recently when using Ceph (12.2.8) I found that the > leader monitor always fails in thread_name:safe_timer. > [...] Try upgrading the mons to v12.2.9 (but see the recent warnings concerning upgrades to v12.2.9 for the OSDs):
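A rough outline of the suggested mon upgrade, one monitor at a time (package manager command and unit name depend on the distro; this is a sketch, not taken from the thread):

    # on each monitor host in turn
    yum update ceph-mon              # or the distro's equivalent
    systemctl restart ceph-mon@$(hostname -s)
    # confirm every mon reports 12.2.9 before moving to the next host
    ceph versions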

Re: [ceph-users] read performance, separate client CRUSH maps or limit osd read access from each client

2018-11-19 Thread Vlad Kopylov
Yes. Using GlusterFS now. But Ceph has the best write replication, which I am struggling to get the Gluster guys to implement. If this read-replica selection issue could be fixed, Ceph could be a good cloud FS, not just a local-network RAID. On Mon, Nov 19, 2018 at 2:54 AM Konstantin Shalygin wrote: > On 11/17/18

Re: [ceph-users] Huge latency spikes

2018-11-19 Thread Brendan Moloney
Hi, > Raid card for journal disks is a Perc H730 (Megaraid), RAID 1, battery-backed cache is on > > Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU > Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU > > I have 2 other nodes

Re: [ceph-users] Migrate OSD journal to SSD partition

2018-11-19 Thread David Turner
For this the procedure is generally to stop the OSD, flush the journal, update the journal symlink on the OSD to point at the new location, run mkjournal, and start the OSD again. You shouldn't need to change anything in the ceph.conf file. On Thu, Nov 8, 2018 at 2:41 AM wrote: > Hi all, > > I have been trying to
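A minimal sketch of that sequence for a filestore OSD, assuming OSD id 12 and a new journal partition /dev/nvme0n1p1 (both placeholders, not from the original mail):

    systemctl stop ceph-osd@12
    # flush the old journal to the data store
    ceph-osd -i 12 --flush-journal
    # point the journal symlink at the new partition
    ln -sf /dev/nvme0n1p1 /var/lib/ceph/osd/ceph-12/journal
    # create a fresh journal there and start the OSD again
    ceph-osd -i 12 --mkjournal
    systemctl start ceph-osd@12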

[ceph-users] radosgw, Keystone integration, and the S3 API

2018-11-19 Thread Florian Haas
Hi everyone, I've recently started a documentation patch to better explain Swift compatibility and OpenStack integration for radosgw; a WIP PR is at https://github.com/ceph/ceph/pull/25056/. I have, however, run into an issue that I would really *like* to document, except I don't know whether
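For context, a radosgw Keystone setup typically boils down to a handful of ceph.conf options like the sketch below (URL, credentials and section name are placeholders); rgw s3 auth use keystone is the knob that extends Keystone authentication to the S3 API as well:

    [client.rgw.gateway1]
    rgw keystone url = http://keystone.example.com:5000
    rgw keystone api version = 3
    rgw keystone admin user = rgw
    rgw keystone admin password = secret
    rgw keystone admin domain = Default
    rgw keystone admin project = service
    rgw keystone accepted roles = Member, admin
    # also authenticate S3 API requests against Keystone
    rgw s3 auth use keystone = true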

Re: [ceph-users] openstack swift multitenancy problems with ceph RGW

2018-11-19 Thread Florian Haas
On 18/11/2018 22:08, Dilip Renkila wrote: > Hi all, > > We are provisioning the OpenStack Swift API through Ceph RGW (Mimic). We have problems when trying to create two containers with the same name in two projects. After scraping the web, I came to know that I have to enable *

Re: [ceph-users] Some pgs stuck unclean in active+remapped state

2018-11-19 Thread Burkhard Linke
Hi, On 11/19/18 12:49 PM, Thomas Klute wrote: Hi, we have a production cluster (3 nodes) stuck unclean after we had to replace one osd. Cluster recovered fine except some pgs that are stuck unclean for about 2-3 days now: *snipsnap* [root@ceph1 ~]# fgrep remapp /tmp/pgdump.txt 3.83   

[ceph-users] Some pgs stuck unclean in active+remapped state

2018-11-19 Thread Thomas Klute
Hi, we have a production cluster (3 nodes) stuck unclean after we had to replace one osd. Cluster recovered fine except some pgs that are stuck unclean for about 2-3 days now: [root@ceph1 ~]# ceph health detail HEALTH_WARN 7 pgs stuck unclean; recovery 8/8565617 objects degraded (0.000%);
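A few commands that are commonly used to narrow down which OSDs the remapped PGs are waiting on (a sketch; PG 3.83 is taken from the pg dump excerpt quoted in the replies):

    # list stuck PGs with their up/acting OSD sets
    ceph pg dump_stuck unclean
    # inspect one of the remapped PGs in detail
    ceph pg 3.83 query
    # check CRUSH weight/reweight of the replaced OSD and its host
    ceph osd tree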

Re: [ceph-users] get cephfs mounting clients' infomation

2018-11-19 Thread Zhenshi Zhou
Hi Yan, I can get the usage of a subdirectory on the client side. Is there a way I can get it from the server? Thanks. Yan, Zheng wrote on Mon, Nov 19, 2018 at 3:08 PM: > On Mon, Nov 19, 2018 at 3:06 PM Zhenshi Zhou wrote: > > Many thanks Yan! > > This command can get IP, hostname, mounting point
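For reference, a sketch covering both sides (the MDS name and mount path are placeholders):

    # on the MDS server: list connected clients (IP, hostname, mount root, ...)
    ceph daemon mds.ceph-mds1 session ls
    # on a client: recursive size of a subdirectory, maintained by the MDS rstats
    getfattr -n ceph.dir.rbytes /mnt/cephfs/some/subdir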