[ceph-users] How to migrate ms_type to async ?

2018-01-25 Thread 周 威
Hi all, I have a cluster on jewel that was upgraded from hammer. I want to migrate ms_type to async, but I can't find any documentation about it. Does somebody know how to do that? Thanks Choury

Re: [ceph-users] How to migrate ms_type to async ?

2018-01-25 Thread Gregory Farnum
You just have to set the config option ("ms_type = async", I think?) and restart the daemons. Both messengers use the same protocol, and there's no migration in terms of data or setting up compatibility. -Greg On Thu, Jan 25, 2018 at 9:43 AM 周 威 wrote: > Hi all, > > I have a cluster in jewel which i
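For anyone finding this thread in the archive, a minimal sketch of the change Greg describes (the option name ms_type is taken from his reply; the [global] section and the systemd unit name assume a standard jewel deployment, and the OSD id is just an example):

    # ceph.conf, on every daemon host
    [global]
    ms_type = async

    # then restart each daemon, e.g. for one OSD:
    systemctl restart ceph-osd@12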

[ceph-users] Re: How to migrate ms_type to async ?

2018-01-25 Thread 周 威
Hi Greg, Can I restart the daemons one by one online? Or will there be downtime before I restart all of them? Thanks Choury From: Gregory Farnum Sent: January 25, 2018 16:52:46 To: 周 威 Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] How to migrate ms_type

Re: [ceph-users] How to migrate ms_type to async ?

2018-01-25 Thread Gregory Farnum
Yes, you can (and should!) restart them one at a time to avoid downtime. On Thu, Jan 25, 2018 at 9:57 AM 周 威 wrote: > Hi Greg, > > Can I restart the daemons one by one online? Or will there be downtime > before I restart all of them? > > Thanks > Choury
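A rough sketch of what a rolling restart on one OSD host could look like, pausing for the cluster to settle between daemons (the OSD ids are hypothetical; if the cluster carries unrelated warnings, check 'ceph -s' by hand instead of grepping for HEALTH_OK):

    # restart this host's OSDs one at a time
    for id in 0 1 2 3; do
        systemctl restart ceph-osd@$id
        # wait until the cluster reports healthy again before the next one
        until ceph health | grep -q HEALTH_OK; do
            sleep 10
        done
    done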

Re: [ceph-users] How to migrate ms_type to async ?

2018-01-25 Thread 周 威
I've tested that, and yes: the cluster works fine with some nodes using the async messenger while others still use the simple messenger. Thank you, Greg. Yes, you can (and should!) restart them one at a time to avoid downtime. On Thu, Jan 25, 2018 at 9:57 AM 周 威 wrote: Hi Greg, Can I rest

Re: [ceph-users] SPDK for BlueStore rocksDB

2018-01-25 Thread jorpilo
Well, I can absolutely try that, or I can even make a mix of SPDK and non-SPDK OSDs and test them individually. I just wanted to know how stable it is and whether I could use it for RocksDB. Thanks for the help. Original message From: Igor Fedotov Date: 24/1/18 12:26 p.m. (GMT+01:00) To

[ceph-users] Ceph Tech Talk Canceled

2018-01-25 Thread Leonardo Vaz
Hey Cephers, Sorry for the short notice, but the Ceph Tech Talk scheduled for today (January 25th) has been canceled because of a calendar conflict. We will be posting the schedule for the upcoming Ceph Tech Talks here soon. If you have an interesting topic to talk about, feel free to contact me. K

[ceph-users] Cephalocon APAC Call for Proposals

2018-01-25 Thread Leonardo Vaz
Hey Cephers, This is a friendly reminder that the Call for Proposals for the Cephalocon APAC 2018[1] ends next Wednesday, January 31st. [1] http://cephalocon.doit.com.cn/guestreg_en.html If you haven't submitted your proposal so far, you still have time! Kindest regards, Leo -- Leonardo Vaz

Re: [ceph-users] Luminous - bad performance

2018-01-25 Thread Steven Vacaroaia
Hi, setting the application on the pool helped - the performance is not skewed anymore (i.e. the SSD pool is better than the HDD one). However, latency when using more threads is still very high. I am getting 9.91 Gbits/sec when testing with iperf. Not sure what else I should check. As always, your help will be greatly a
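For reference, the pool application tag mentioned above is set in Luminous with 'ceph osd pool application enable'; a short sketch with hypothetical pool names:

    # tag each pool with the application that uses it (Luminous and later)
    ceph osd pool application enable ssdpool rbd
    ceph osd pool application enable hddpool rbd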

Re: [ceph-users] How to remove deactivated cephFS

2018-01-25 Thread Thomas Bennett
Hey Eugen, Pleasure. Glad your problem resolved itself. Regards, Tom On Wed, Jan 24, 2018 at 5:29 PM, Eugen Block wrote: > Hi Tom, > > thanks for the detailed steps. > > I think our problem literally vanished. A couple of days after my email I > noticed that the web interface suddenly listed o

[ceph-users] Signature check failures.

2018-01-25 Thread Cary
Hello, We are running Luminous 12.2.2: 6 OSD hosts with 12 1TB OSDs, and 64GB RAM. Each host has an SSD for BlueStore's block.wal and block.db. There are also 5 monitor nodes with 32GB RAM. All servers run Gentoo with kernel 4.12.12-gentoo. When I export an image using: rbd export pool-name/
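The export command is cut off above; for context, the general shape of an rbd export is shown below (the image name and destination file are hypothetical, the pool name is the placeholder from the message):

    # export an RBD image to a local file
    rbd export pool-name/image-name /tmp/image-name.img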

[ceph-users] OSDs missing from cluster all from one node

2018-01-25 Thread Andre Goree
Yesterday I noticed some OSDs were missing from our cluster (96 OSDs total; 84 up/84 in is what showed). After drilling down to determine which node and the cause, I found that all the OSDs on that node (12 total) were in fact down. I entered 'systemctl status ceph-osd@$osd_number' to determine e
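A short sketch of the per-OSD check being described (it assumes a standard systemd deployment; $osd_number stands for one of the down OSD ids):

    # inspect one of the down OSDs on the affected node
    systemctl status ceph-osd@$osd_number
    journalctl -u ceph-osd@$osd_number --since "1 hour ago"

    # if the daemon simply died, try bringing it back up
    systemctl restart ceph-osd@$osd_number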

Re: [ceph-users] OSDs missing from cluster all from one node

2018-01-25 Thread Andre Goree
On 2018/01/25 2:03 pm, Andre Goree wrote: Yesterday I noticed some OSDs were missing from our cluster (96 OSDs total; 84 up/84 in is what showed). After drilling down to determine which node and the cause, I found that all the OSDs on that node (12 total) were in fact down. I entered 'systemctl s

[ceph-users] Two issues remaining after luminous upgrade

2018-01-25 Thread Matthew Stroud
The first and hopefully easy one: I have a situation where I have two pools that are rarely used (a third will be in use after I can get through these issues), but they need to be present at the whims of our cloud team. Is there a way I can turn off ‘2 pools have many more objects per pg than av
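If the aim is only to quiet that specific warning, the threshold behind it is, I believe, mon_pg_warn_max_object_skew; depending on the release it is read by the mon or the mgr, so treat the sketch below as an assumption to verify against your version's docs:

    # ceph.conf -- raise the object-skew threshold (0 disables the check in
    # recent releases; both the section and the value should be verified)
    [mon]
    mon pg warn max object skew = 0

    # or inject it at runtime
    ceph tell mon.* injectargs '--mon_pg_warn_max_object_skew 0'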

Re: [ceph-users] client with uid

2018-01-25 Thread Patrick Donnelly
On Wed, Jan 24, 2018 at 7:47 AM, Keane Wolter wrote: > Hello all, > > I was looking at the Client Config Reference page > (http://docs.ceph.com/docs/master/cephfs/client-config-ref/) and there was > mention of a flag --client_with_uid. The way I read it is that you can > specify the UID of a user

Re: [ceph-users] OSDs missing from cluster all from one node

2018-01-25 Thread Brad Hubbard
On Fri, Jan 26, 2018 at 5:47 AM, Andre Goree wrote: > On 2018/01/25 2:03 pm, Andre Goree wrote: >> >> Yesterday I noticed some OSDs were missing from our cluster (96 OSDs >> total, 84 up/84 in is what showed). >> >> After drilling down to determine which node and the cause, I found >> that all the O

[ceph-users] ceph-volume raw disks

2018-01-25 Thread Nathan Dehnel
The doc at http://docs.ceph.com/docs/master/ceph-volume/lvm/prepare/#ceph-volume-lvm-prepare says I can pass a physical device to ceph-volume. But when I try to do that: gentooserver ~ # ceph-volume lvm create --bluestore --data /dev/sdb usage: ceph-volume lvm create [-h] [--journal JOURNAL] --dat

Re: [ceph-users] ceph-volume raw disks

2018-01-25 Thread David Turner
Did you wipe all of the existing partitions and such first? Which version of ceph? The below commands are what I ran to re-add my osds as bluestore after moving all data off of them. ceph-volume lvm zap /dev/sdb ceph-volume lvm create --bluestore --data /dev/sdb On Thu, Jan 25, 2018 at 9:41 PM
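For readability, the two commands from the reply as a block (the device name is the one from the thread; zap is destructive, so double-check the device first):

    # wipe any existing partitions/LVM metadata on the disk (destroys data!)
    ceph-volume lvm zap /dev/sdb

    # create a new BlueStore OSD directly on the raw device
    ceph-volume lvm create --bluestore --data /dev/sdb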

[ceph-users] Fwd: ceph-volume raw disks

2018-01-25 Thread Nathan Dehnel
-- Forwarded message -- From: Nathan Dehnel Date: Thu, Jan 25, 2018 at 9:49 PM Subject: Re: [ceph-users] ceph-volume raw disks To: David Turner >Did you wipe all of the existing partitions and such first? I tried it both before and after creating an lvm physical partition. >Whi

[ceph-users] Snapshot trimming

2018-01-25 Thread Karun Josy
Hi, We have set the noscrub and nodeep-scrub flags on a Ceph cluster. When we are deleting snapshots we are not seeing any change in used space. I understand that Ceph OSDs delete data asynchronously, so deleting a snapshot doesn't free up the disk space immediately. But we are not seeing any change
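A few commands that may help confirm whether snapshot trimming is actually progressing (this assumes Luminous-style PG states; adapt as needed):

    # clear the scrub flags in case they are getting in the way
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub

    # look for PGs in snaptrim / snaptrim_wait states
    ceph pg dump pgs_brief | grep -i snaptrim

    # watch overall usage as trimming proceeds
    ceph df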

[ceph-users] How ceph client read data from ceph cluster

2018-01-25 Thread shadow_lin
Hi List, I read an old article about how a Ceph client reads from the Ceph cluster. It said the client only reads from the primary OSD. Since a Ceph cluster in replicated mode has several copies of the data, reading from only one copy seems to waste the potential performance of concurrent reads from all the copies. But that article

Re: [ceph-users] Weird issues related to (large/small) weights in mixed nvme/hdd pool

2018-01-25 Thread Thomas Bennett
Hi Peter, Not sure if you have got to the bottom of your problem, but I seem to have found what might be a similar issue. I recommend reading below, as there could be a hidden problem lurking. Yesterday our cluster went into *HEALTH_WARN* state and I noticed that one of my PGs was listed a