[ceph-users] ceph OSD journal (with dmcrypt) replacement

2017-09-04 Thread M Ranga Swami Reddy
Hello, how do I replace an OSD's journal created with dmcrypt, moving it from one drive to another, when the current journal drive has failed? Thanks, Swami
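A rough sketch of the non-encrypted part of such a replacement, assuming OSD id 12 and a new journal partition already mapped through dm-crypt as /dev/mapper/new-journal (both names are placeholders). The cryptsetup/key handling depends on how ceph-disk stored the dmcrypt key for that OSD, and if the old journal cannot be flushed some operators prefer to rebuild the OSD entirely rather than risk an inconsistent filestore:

    ceph osd set noout                      # avoid rebalancing while the OSD is down
    systemctl stop ceph-osd@12
    # point the OSD at the new (already-mapped) dm-crypt journal device
    ln -sf /dev/mapper/new-journal /var/lib/ceph/osd/ceph-12/journal
    ceph-osd -i 12 --mkjournal              # create a fresh journal on the new device
    systemctl start ceph-osd@12
    ceph osd unset noout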

Re: [ceph-users] Jewel (10.2.7) osd suicide timeout while deep-scrub

2017-09-04 Thread Andreas Calminder
Hi! Thanks for the pointer about leveldb_compact_on_mount, it took a while to get everything compacted but after that the deep scrub of the offending pg went smoothly without any suicides. I'm considering using the compact-on-mount feature for all our OSDs in the cluster since they're kind of large
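A minimal ceph.conf sketch of what is being described, using the option name mentioned in the thread (verify it against your release and try it on one OSD before enabling it cluster-wide):

    [osd]
    # compact the omap leveldb store when the OSD starts
    leveldb_compact_on_mount = true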

[ceph-users] Ceph on ARM meeting cancelled

2017-09-04 Thread Leonardo Vaz
Hey Cephers, Sorry for the short notice, but the Ceph on ARM meeting scheduled for today (Sep 5) has been cancelled. Kindest regards, Leo -- Leonardo Vaz Ceph Community Manager Open Source and Standards Team

Re: [ceph-users] ceph pgs state forever stale+active+clean

2017-09-04 Thread Hyun Ha
Thank you for the response. Yes, I know that we can lose data in this scenario and cannot guarantee recovering it. But, in my opinion, we need to make the Ceph cluster healthy in spite of the data loss. In this scenario, the Ceph cluster has some stuck+stale PGs and goes into an error state. From the perspective of op
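For reference, the usual last-resort commands for accepting the loss and recreating empty PGs look roughly like this (OSD id 5 and pg 1.23 are placeholders, the syntax shown is the pre-Luminous form, and both steps discard the missing data irrevocably):

    ceph osd lost 5 --yes-i-really-mean-it   # declare the dead OSD permanently lost
    ceph pg force_create_pg 1.23             # recreate the stale pg as an empty pg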

Re: [ceph-users] ceph pgs state forever stale+active+clean

2017-09-04 Thread David Turner
I'm not sure it's clear yet what you're asking for. You understand that this scenario is going to have lost data that you cannot get back, correct? Some of the information for the RBD might have been in the PGs that you no longer have any copy of. Any RBD that has objects that are n

Re: [ceph-users] ceph pgs state forever stale+active+clean

2017-09-04 Thread Hyun Ha
Hi, I'm still having trouble with the above issue. Has anybody else hit the same issue or resolved it? Thanks. 2017-08-21 22:51 GMT+09:00 Hyun Ha : > Thanks for the response. > > I can understand why a size of 2 and min_size of 1 is not acceptable in > production, > but I just want to make the situat
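If the cluster is later rebuilt with safer defaults, the replication settings the thread refers to are plain pool properties (the pool name rbd below is a placeholder):

    ceph osd pool set rbd size 3       # keep three copies of each object
    ceph osd pool set rbd min_size 2   # refuse I/O with fewer than two copies available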

Re: [ceph-users] Bad IO performance CephFS vs. NFS for block size 4k/128k

2017-09-04 Thread Christian Balzer
Hello, On Mon, 04 Sep 2017 15:27:34 + c.mo...@web.de wrote: > Hello! > > I'm validating IO performance of CephFS vs. NFS. > Well, at this point you seem to be comparing apples to bananas. You're telling us results, but your mail lacks crucial information required to give you a qualified an

Re: [ceph-users] crushmap rule for not using all buckets

2017-09-04 Thread David Turner
I am unaware of any way to accomplish having one pool span all 3 racks and another pool use only 2 of them. If you could put the same OSD in 2 different roots, or have a CRUSH rule choose from 2 different roots, then this might work out. To my knowledge neither of these is possible. What is your rea

Re: [ceph-users] Bad IO performance CephFS vs. NFS for block size 4k/128k

2017-09-04 Thread David
On Mon, Sep 4, 2017 at 4:27 PM, wrote: > Hello! > > I'm validating IO performance of CephFS vs. NFS. > > Therefore I have mounted the relevant filesystems on the same client. > Then I start fio with the following parameters: > action = randwrite randrw > blocksize = 4k 128k 8m > rwmixread = 7

Re: [ceph-users] How to distribute data

2017-09-04 Thread Oscar Segarra
Hi, For a VDI (Windows 10) use case... is there any documentation about the recommended configuration with rbd? Thanks a lot! 2017-08-18 15:40 GMT+02:00 Oscar Segarra : > Hi, > > Yes, you are right, the idea is cloning a snapshot taken from the base > image... > > And yes, I'm working with the current
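The clone-from-a-base-image workflow mentioned above looks roughly like this (pool and image names are made up for illustration):

    rbd snap create vms/win10-base@gold             # snapshot the prepared base image
    rbd snap protect vms/win10-base@gold            # clones require a protected snapshot
    rbd clone vms/win10-base@gold vms/desktop-001   # thin clone for one VDI desktop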

[ceph-users] Bad IO performance CephFS vs. NFS for block size 4k/128k

2017-09-04 Thread c . monty
Hello! I'm validating IO performance of CephFS vs. NFS. Therefore I have mounted the relevant filesystems on the same client. Then I start fio with the following parameters: action = randwrite randrw; blocksize = 4k 128k 8m; rwmixread = 70 50 30; 32 jobs run in parallel. The NFS share is stripin
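One of the described combinations expressed as a fio job file might look like the sketch below (the directory path is a placeholder; repeat with the other block sizes and mix ratios, and point it at the NFS mount for the comparison run):

    [global]
    ioengine=libaio
    direct=1
    time_based
    runtime=60
    numjobs=32          ; 32 jobs in parallel, as in the original test
    group_reporting
    directory=/mnt/cephfs/fiotest

    [randrw-4k]
    rw=randrw
    rwmixread=70
    bs=4k
    size=1G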

[ceph-users] crushmap rule for not using all buckets

2017-09-04 Thread Andreas Herrmann
Hello, I'm building a 5-server cluster over three rooms/racks. Each server has 8 960GB SSDs used as BlueStore OSDs. Ceph version 12.1.2 is used. rack1: server1 (mon), server2; rack2: server3 (mon), server4; rack3: server5 (mon). The crushmap was built this way: ceph osd
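For context, a rack-level replicated rule in decompiled-crushmap form looks roughly like the sketch below (rule name and id are arbitrary; on pre-Luminous releases the id line reads ruleset instead). Whether a second rule can be restricted to only two of the three racks is exactly what the rest of the thread discusses:

    rule replicated_over_racks {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type rack   # one replica per rack
        step emit
    }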

[ceph-users] Re: How to enable ceph-mgr dashboard

2017-09-04 Thread 许雪寒
Thanks for your quick reply :-) I checked the open ports and 7000 is not open, and all of my machines have SELinux disabled. Could there be other causes? Thanks :-) -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 许雪寒 Sent: September 4, 2017 17:38 To: ceph-users@lists.ceph.
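A quick way to check whether the dashboard is actually active and where it is listening (assuming a Luminous mgr host):

    ceph mgr module ls            # "dashboard" should appear under enabled_modules
    ceph mgr services             # prints the URL the dashboard is serving, if any
    ss -tlnp | grep ceph-mgr      # confirm the mgr process is listening on a port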

Re: [ceph-users] How to enable ceph-mgr dashboard

2017-09-04 Thread John Spray
On Mon, Sep 4, 2017 at 10:38 AM, 许雪寒 wrote: > Hi, everyone. > > I'm trying to enable the mgr dashboard on Luminous. However, when I modified the > configuration and restarted ceph-mgr, the following error came up: > > Sep 4 17:33:06 rg1-ceph7 ceph-mgr: 2017-09-04 17:33:06.495563 7fc49b3fc700 > -1 mgr

[ceph-users] How to enable ceph-mgr dashboard

2017-09-04 Thread 许雪寒
Hi, everyone. I'm trying to enable the mgr dashboard on Luminous. However, when I modified the configuration and restarted ceph-mgr, the following error came up: Sep 4 17:33:06 rg1-ceph7 ceph-mgr: 2017-09-04 17:33:06.495563 7fc49b3fc700 -1 mgr handle_signal *** Got signal Terminated *** Sep 4 17:33
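For reference, enabling the Luminous dashboard is normally just the module switch plus an optional bind address (the values below are placeholders, and the mgr needs a restart after changing them):

    ceph mgr module enable dashboard
    ceph config-key set mgr/dashboard/server_addr 0.0.0.0   # bind address (optional)
    ceph config-key set mgr/dashboard/server_port 7000      # 7000 is the default port
    systemctl restart ceph-mgr.target                       # or restart the specific ceph-mgr@<id>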

Re: [ceph-users] Power outages!!! help!

2017-09-04 Thread hjcho616
Hmm.. I hope I don't really need anything from osd.0. =P  # ceph-objectstore-tool --op export --pgid 2.35 --data-path /var/lib/ceph/osd/ceph-0 --journal-path /var/lib/ceph/osd/ceph-0/journal --file 2.35.export  Failure to read OSD superblock: (2) No such file or directory  # ceph-objectstore-tool --
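For comparison, the same export against an OSD whose data directory is still intact, plus the matching import into another OSD, would look roughly like this (OSD ids are placeholders; both OSDs must be stopped while the tool runs):

    ceph-objectstore-tool --op export --pgid 2.35 \
        --data-path /var/lib/ceph/osd/ceph-1 \
        --journal-path /var/lib/ceph/osd/ceph-1/journal \
        --file /root/2.35.export
    ceph-objectstore-tool --op import \
        --data-path /var/lib/ceph/osd/ceph-2 \
        --journal-path /var/lib/ceph/osd/ceph-2/journal \
        --file /root/2.35.export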

Re: [ceph-users] Power outages!!! help!

2017-09-04 Thread hjcho616
Ronny, While letting the cluster replicate (looks like this might take a while), I decided to look into where those pgs are missing. From "ceph health detail" I found the pgs that are unfound. Then I found the directories that held those pgs, pasted to the right of that detail message below.. pg 2.35 is
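The usual commands for chasing unfound objects, roughly as used here (pg 2.35 is taken from the message; the mark_unfound_lost step is irreversible and only for when no surviving copy can be brought back online):

    ceph health detail | grep unfound       # list pgs reporting unfound objects
    ceph pg 2.35 query                      # see which OSDs peering still wants to probe
    ceph pg 2.35 mark_unfound_lost revert   # last resort: revert (or delete) the unfound objects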