Re: [ceph-users] Mix HDDs and SSDs together

2017-03-06 Thread Christian Balzer
Hello, On Mon, 6 Mar 2017 16:06:51 +0700 Vy Nguyen Tan wrote: > Hi Jiajia zhong, > > I'm running mixed SSDs and HDDs on the same node, following > https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/, > and I haven't had any problems running SSDs and HDDs on the same node. …

Re: [ceph-users] Mix HDDs and SSDs together

2017-03-06 Thread Vy Nguyen Tan
Hi Jiajia zhong, I'm running mixed SSDs and HDDs on the same node, following https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/, and I haven't had any problems running SSDs and HDDs on the same node. Now I want to increase Ceph throughput by increasing network int…
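The approach in that article comes down to giving SSDs and HDDs separate roots in the CRUSH map and pointing a placement rule at each root. A minimal sketch of the relevant fragment of a decompiled CRUSH map, assuming hypothetical host buckets node1-ssd and node1-hdd defined elsewhere in the map, and the pre-Luminous rule syntax of this thread's era:

    # Separate roots keep SSD and HDD OSDs in disjoint placement trees
    root ssd {
        id -10                  # bucket ids are negative and must be unique
        alg straw
        hash 0                  # rjenkins1
        item node1-ssd weight 1.000
    }
    root hdd {
        id -11
        alg straw
        hash 0
        item node1-hdd weight 4.000
    }
    # Rule that places replicas only under the "ssd" root
    rule ssd_rule {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
    }

A pool is then bound to the rule with "ceph osd pool set <pool> crush_ruleset 1" (the parameter is named crush_rule on Luminous and later).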

Re: [ceph-users] Mix HDDs and SSDs together

2017-03-05 Thread jiajia zhong
We are using a mixed setup too: 8x Intel PCIe 400 GB SSDs for the metadata pool and the cache-tier pool of our CephFS. Plus: 'osd crush update on start = false', as Vladimir replied. 2017-03-03 20:33 GMT+08:00 Дробышевский, Владимир: > Hi, Matteo! > > Yes, I'm running a mixed cluster in production, but it's …
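For context, the setting jiajia quotes lives in ceph.conf on the OSD nodes. By default an OSD re-registers itself under its detected host and the default root at startup, which would silently undo a hand-built SSD/HDD split and trigger rebalancing; this turns that off:

    [osd]
    # keep manually placed OSDs where the admin put them in the CRUSH map
    osd crush update on start = false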

Re: [ceph-users] Mix HDDs and SSDs together

2017-03-03 Thread Дробышевский, Владимир
Hi, Matteo! Yes, I'm running a mixed cluster in production, but it's pretty small at the moment. I made a small step-by-step manual for myself when I did this for the first time and have now published it as a gist: https://gist.github.com/vheathen/cf2203aeb53e33e3f80c8c64a02263bc#file-manual-txt. Probably it …
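The gist itself is the reference; as a rough, hedged sketch of the kind of steps such a manual walks through, using hypothetical bucket and OSD names (an ssd root, a node1-ssd host bucket, osd.12):

    # Create an SSD-only subtree in the CRUSH map
    ceph osd crush add-bucket ssd root
    ceph osd crush add-bucket node1-ssd host
    ceph osd crush move node1-ssd root=ssd

    # Place an OSD into that subtree with an explicit weight
    ceph osd crush set osd.12 0.400 root=ssd host=node1-ssd

Combined with 'osd crush update on start = false', the placement then survives OSD restarts.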

Re: [ceph-users] Mix HDDs and SSDs together

2017-03-03 Thread Maxime Guyot
…separately (ceph osd df tree). Shameless plug: I wrote an article about this a couple of months ago: http://www.root314.com/2017/01/15/Ceph-storage-tiers/; hope it helps. Cheers, Maxime. From: ceph-users on behalf of Matteo Dacrema; Date: Friday 3 March 2017 12:30; To: ceph-users; Subject: [ceph-…
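The command Maxime mentions in passing is the quickest way to verify such a split; both forms are stock Ceph:

    ceph osd tree       # bucket hierarchy: one subtree per root (ssd, hdd, ...)
    ceph osd df tree    # same hierarchy plus per-OSD utilisation and weights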

[ceph-users] Mix HDDs and SSDs together

2017-03-03 Thread Matteo Dacrema
Hi all, does anyone run a production cluster with a CRUSH map modified to create two pools, one backed by HDDs and one by SSDs? What's the best method: modifying the CRUSH map via the ceph CLI, or via a text editor? Will the modifications to the CRUSH map be persistent across reboots and maintenance operations? …
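Both routes Matteo asks about end up in the same place: the compiled map is stored by the monitors, so the change persists across reboots either way (subject to the 'osd crush update on start' caveat discussed above). The text-editor route, as a sketch with hypothetical file names:

    # Export and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # ... edit crushmap.txt: add roots, rules, move hosts ...

    # Recompile and inject it back into the cluster
    crushtool -c crushmap.txt -o crushmap-new.bin
    ceph osd setcrushmap -i crushmap-new.bin

The CLI route (ceph osd crush add-bucket / move / set, as above) makes the same edits one command at a time, without the decompile/recompile round trip.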