Re: [ceph-users] ceph tool in interactive mode: not work

2018-11-16 Thread Liu, Changcheng
Thanks. Watching this problem.

From: Ashley Merrick [mailto:singap...@amerrick.co.uk]
Sent: Saturday, November 17, 2018 3:47 PM
To: Liu, Changcheng
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph tool in interactive mode: not work

http://tracker.ceph.com/issues/36358 On Sat, 17

Re: [ceph-users] ceph tool in interactive mode: not work

2018-11-16 Thread Ashley Merrick
http://tracker.ceph.com/issues/36358

On Sat, 17 Nov 2018 at 3:43 PM, Liu, Changcheng wrote:
> Thanks Ashley Merrick. Does this problem have a bug tracker id?
>
> *From:* Ashley Merrick [mailto:singap...@amerrick.co.uk]
> *Sent:* Saturday, November 17, 2018 3:41 PM
> *To:* Liu, Changcheng >

Re: [ceph-users] ceph tool in interactive mode: not work

2018-11-16 Thread Liu, Changcheng
Thanks Ashley Merrick. Does this problem have a bug tracker id?

From: Ashley Merrick [mailto:singap...@amerrick.co.uk]
Sent: Saturday, November 17, 2018 3:41 PM
To: Liu, Changcheng
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph tool in interactive mode: not work

It is a bug that will be

Re: [ceph-users] ceph tool in interactive mode: not work

2018-11-16 Thread Ashley Merrick
It is a bug that will be fixed in the next point release.

On Sat, 17 Nov 2018 at 3:38 PM, Liu, Changcheng wrote:
> Hi all,
>
> I'm running the ceph tool in interactive mode. However, there's no output.
>
> Does anyone know how to solve it?
>
> *jerry@nstcloud:~$ ls -l /etc/ceph/*
>
> *total

[ceph-users] ceph tool in interactive mode: not work

2018-11-16 Thread Liu, Changcheng
Hi all, I'm running the ceph tool in interactive mode. However, there's no output. Does anyone know how to solve it?

jerry@nstcloud:~$ ls -l /etc/ceph/
total 12
-rw------- 1 root root 151 Nov 13 16:50 ceph.client.admin.keyring
-rw-r--r-- 1 root root 232 Nov 13 16:50 ceph.conf
-rw-r--r-- 1
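
For anyone hitting the same thing, a quick way to confirm it is the interactive shell rather than the cluster is to run the same command non-interactively; a minimal sketch (assuming a working admin keyring, and that this is the bug from tracker #36358):

    # interactive mode: commands such as "status" print nothing on affected builds
    jerry@nstcloud:~$ ceph
    ceph> status
    ceph> quit

    # workaround until the fix lands: run the same command non-interactively
    jerry@nstcloud:~$ ceph status
    jerry@nstcloud:~$ ceph -s    # short form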

Re: [ceph-users] Mimic - EC and crush rules - clarification

2018-11-16 Thread David Turner
The difference for 2+2 vs 2x replication isn't in the amount of space being used or saved, but in the number of OSDs you can safely lose without any data loss or outages. 2x replication is generally considered very unsafe for data integrity, but 2+2 is as resilient as 3x replication while
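
To make that concrete, a minimal sketch of creating such a pool (profile name, pool name, and PG count are placeholders):

    # k=2 data chunks + m=2 coding chunks: 200% raw usage, the same as 2x
    # replication, but the pool survives the loss of any two failure domains
    ceph osd erasure-code-profile set ec22 k=2 m=2 crush-failure-domain=host

    # create an erasure-coded pool using that profile
    ceph osd pool create ecpool 64 64 erasure ec22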

Re: [ceph-users] read performance, separate client CRUSH maps or limit osd read access from each client

2018-11-16 Thread Vlad Kopylov
This is what Jean suggested. I understand it and it works with primary. *But what I need is for all clients to access the same files, not separate sets (like red blue green)* Thanks Konstantin. On Fri, Nov 16, 2018 at 3:43 AM Konstantin Shalygin wrote: > On 11/16/18 11:57 AM, Vlad Kopylov wrote: >

Re: [ceph-users] pg 17.36 is active+clean+inconsistent head expected clone 1 missing?

2018-11-16 Thread Steve Anthony
Looks similar to a problem I had after several OSDs crashed while trimming snapshots. In my case, the primary OSD thought the snapshot was gone, but some of the replicas were still there, so scrubbing flagged it. First I purged all snapshots and then ran ceph pg repair on the problematic placement
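
For reference, the usual inspect-then-repair sequence looks roughly like this (a sketch, using the PG id from the subject line; list-inconsistent-obj needs a recent deep-scrub to have data):

    # show exactly which objects/clones scrub flagged as inconsistent
    rados list-inconsistent-obj 17.36 --format=json-pretty

    # then ask the primary OSD to repair the placement group
    ceph pg repair 17.36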

[ceph-users] Checking cephfs compression is working

2018-11-16 Thread Rhian Resnick
How do you confirm that cephfs files and rados objects are being compressed? I don't see how in the docs.
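
One way people typically check is via the BlueStore perf counters on an OSD's admin socket; a minimal sketch (osd.0 is a placeholder, and this assumes compression is enabled on the pool and the OSDs are BlueStore):

    # per-OSD compression statistics
    ceph daemon osd.0 perf dump | grep -E 'compress'
    # bluestore_compressed           - compressed bytes currently stored
    # bluestore_compressed_original  - original size of the compressed data
    # bluestore_compressed_allocated - space actually allocated for it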

Re: [ceph-users] cephday berlin slides

2018-11-16 Thread Lenz Grimmer
Hi Serkan, On 11/16/18 11:29 AM, Serkan Çoban wrote: > Does anyone know if slides/recordings will be available online? Unfortunately, the presentations were not recorded. However, the slides are usually made available on the corresponding event page, https://ceph.com/cephdays/ceph-day-berlin/

[ceph-users] cephday berlin slides

2018-11-16 Thread Serkan Çoban
Hi, Does anyone know if slides/recordings will be available online? Thanks, Serkan

Re: [ceph-users] pg 17.36 is active+clean+inconsistent head expected clone 1 missing?

2018-11-16 Thread Marc Roos
I am not sure that is going to work, because I have had this error for quite some time, from before I added the 4th node. On the 3-node cluster it was: osdmap e18970 pg 17.36 (17.36) -> up [9,0,12] acting [9,0,12] If I understand correctly what you intend to do, moving the data around. This
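
If "moving the data around" means remapping the PG onto different OSDs, a minimal sketch on Luminous and later is the upmap interface (the target OSD id is a placeholder):

    # clients must understand upmap before it can be used
    ceph osd set-require-min-compat-client luminous

    # remap pg 17.36 so that osd.9 is replaced by osd.15 in its set
    ceph osd pg-upmap-items 17.36 9 15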

Re: [ceph-users] Librbd performance VS KRBD performance

2018-11-16 Thread 赵赵贺东
Thank you very much, Jason. Our cluster's target workload is something like a monitoring-system data center: we need to save a lot of video streams into the cluster. I have to reconsider the test case. Besides, there are a lot of tests to do on the config parameters you mentioned. This helps me a lot, thanks. On

Re: [ceph-users] read performance, separate client CRUSH maps or limit osd read access from each client

2018-11-16 Thread Konstantin Shalygin
On 11/16/18 11:57 AM, Vlad Kopylov wrote: Exactly. But write operations should go to all nodes. This can be set via primary affinity [1]: when a ceph client reads or writes data, it always contacts the primary OSD in the acting set. If you want to totally segregate IO, you can use device
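
A minimal sketch of steering reads with primary affinity (the OSD ids and weights are placeholders; pre-Luminous releases may also need "mon osd allow primary affinity = true"):

    # never choose osd.2 as a primary (affinity ranges from 0.0 to 1.0)
    ceph osd primary-affinity osd.2 0.0

    # leave osd.0 fully eligible to be chosen as primary
    ceph osd primary-affinity osd.0 1.0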

Re: [ceph-users] Migration osds to Bluestore on Ubuntu 14.04 Trusty

2018-11-16 Thread Zhenshi Zhou
Hi Klimenko, I did a migration from filestore to bluestore on CentOS 7 with ceph version 12.2.5. As it's the production environment, I removed and recreated the OSDs on one server at a time, online. Although I migrated on CentOS, I created the OSDs manually, so you can give it a try. Except one raid1 disk for
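
Roughly, the manual per-OSD sequence looks like this (a sketch only; osd.3 and /dev/sdb are placeholders, each OSD must finish backfilling before you touch the next, and on Trusty the stop command will differ since it uses upstart rather than systemd):

    # drain the filestore OSD and wait for all PGs to be active+clean
    ceph osd out 3
    systemctl stop ceph-osd@3
    ceph osd purge 3 --yes-i-really-mean-it

    # recreate the same slot as a bluestore OSD
    ceph-volume lvm create --bluestore --data /dev/sdb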