[ceph-users] Slow Requests when deep scrubbing PGs that hold Bucket Index

2018-07-10 Thread Christian Wimmer
Hi, I'm using ceph primarily for block storage (which works quite well) and as an object gateway using the S3 API. Here is some info about my system: Ceph: 12.2.4, OS: Ubuntu 18.04, OSD: Bluestore. 6 servers in total, about 60 OSDs, 2TB SSDs each, no HDDs, CFQ scheduler, 20 GBit private network, 20 G

Re: [ceph-users] Journel SSD recommendation

2018-07-10 Thread Konstantin Shalygin
I have lots of Samsung 850 EVO but they are consumer drives. Do you think a consumer drive would be good for the journal? No. Since the fall of 2017, purchasing an Intel P3700 has not been difficult; you should buy it if you can. k

Re: [ceph-users] Erasure coding RBD pool for OpenStack Glance, Nova and Cinder

2018-07-10 Thread Konstantin Shalygin
So if you want, two more questions for you: - How do you handle your ceph.conf configuration (default data pool per user) and its distribution? Manually, config management, openstack-ansible...? - Did you make comparisons or benchmarks between replicated pools and EC pools on the same hardware / drives
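One common way to handle the "default data pool by user" part of that question is per-client sections in ceph.conf, pushed out by whatever config management is in use; a minimal sketch, where the client names and the pool name are illustrative assumptions rather than values from the thread:

    # ceph.conf fragment distributed to the OpenStack nodes
    [client.glance]
    rbd default data pool = erasure_rbd_data
    [client.cinder]
    rbd default data pool = erasure_rbd_data
    [client.nova]
    rbd default data pool = erasure_rbd_data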

Re: [ceph-users] Journel SSD recommendation

2018-07-10 Thread Satish Patel
I am planning to use an Intel 3700 (200GB) for the journal and a 500GB Samsung 850 EVO for the OSD; do you think this design makes sense? On Tue, Jul 10, 2018 at 3:04 PM, Simon Ironside wrote: > > On 10/07/18 19:32, Robert Stanford wrote: >> >> >> Do the recommendations apply to both data and journal SSDs eq

Re: [ceph-users] CephFS - How to handle "loaded dup inode" errors

2018-07-10 Thread Linh Vu
Hi John, Thanks for the explanation, that command is a lot more impacting than I thought! I hope the change of name for the verb "reset" comes through in the next version, because that is very easy to misunderstand. "The first question is why we're talking about running it at all. What chain o

Re: [ceph-users] CephFS - How to handle "loaded dup inode" errors

2018-07-10 Thread Linh Vu
Thanks John :) Has it - asserting out on dupe inode - already been logged as a bug yet? I could put one in if needed. Cheers, Linh From: John Spray Sent: Tuesday, 10 July 2018 7:11 PM To: Linh Vu Cc: Wido den Hollander; ceph-users@lists.ceph.com Subject: Re:

Re: [ceph-users] size of journal partitions pretty small

2018-07-10 Thread Paul Emmerich
1) yes, 5 GB is the default. You can control this with the 'osd journal size' option during creation. (Or partition the disk manually) 2) no, well, maybe a little bit in weird edge cases with tuned configs but that's rarely advisable. But using Bluestore instead of Filestore might help with the p
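For reference, the option mentioned here is just a ceph.conf setting that is read when an OSD's journal is created; a minimal sketch, with the 20 GB value purely as an example:

    # ceph.conf on the OSD hosts (FileStore only)
    [osd]
    osd journal size = 20480    # MB; only affects journals created after this change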

Re: [ceph-users] Journel SSD recommendation

2018-07-10 Thread Simon Ironside
On 10/07/18 19:32, Robert Stanford wrote:  Do the recommendations apply to both data and journal SSDs equally? Search the list for "Many concurrent drive failures - How do I activate pgs?" to read about the Intel DC S4600 failure story. The OP had several 2TB models of these fail when use

[ceph-users] size of journal partitions pretty small

2018-07-10 Thread Robert Stanford
I installed my OSDs using ceph-disk. The journals are SSDs and are 1TB. I notice that Ceph has only dedicated 5GB each to the four OSDs that use the journal. 1) Is this normal? 2) Would performance increase if I made the partitions bigger? Thank you

Re: [ceph-users] Journel SSD recommendation

2018-07-10 Thread Simon Ironside
On 10/07/18 18:59, Satish Patel wrote: Thanks, I would also like to know about the Intel SSD 3700 (Intel SSD SC 3700 Series SSDSC2BA400G3P), the price is also looking promising. Do you have an opinion on it? I can't quite tell from Google what exactly that is. If it's the Intel DC S3700 then I believe thos

Re: [ceph-users] Erasure coding RBD pool for OpenStack Glance, Nova and Cinder

2018-07-10 Thread Paul Emmerich
2018-07-10 6:26 GMT+02:00 Konstantin Shalygin : > > rbd default data pool = erasure_rbd_data > > > Keep in mind, your minimal client version is Luminous. > specifically, it's 12.2.2 or later for the clients! 12.2.0/1 clients have serious bugs in the rbd ec code that will ruin your day as soon
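For context, setting up such a pool on Luminous looks roughly like the sketch below; the PG counts and image details are illustrative, and only the pool name 'erasure_rbd_data' comes from the thread:

    # EC pools need overwrites enabled before RBD can use them as a data pool
    ceph osd pool create erasure_rbd_data 1024 1024 erasure
    ceph osd pool set erasure_rbd_data allow_ec_overwrites true
    ceph osd pool application enable erasure_rbd_data rbd
    # clients then use it via 'rbd default data pool = erasure_rbd_data' in ceph.conf,
    # or explicitly per image:
    rbd create --size 10240 --data-pool erasure_rbd_data rbd/myimage   # 10 GB image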

Re: [ceph-users] Looking for some advise on distributed FS: Is Ceph the right option for me?

2018-07-10 Thread Paul Emmerich
Yes, Ceph is probably a good fit for what you are planning. The documentation should answer your questions: http://docs.ceph.com/docs/master/ Look for erasure coding, crush rules, and CephFS-specific pages in particular. Paul 2018-07-10 18:40 GMT+02:00 Jones de Andrade : > Hi all. > > I'm lo

Re: [ceph-users] Journel SSD recommendation

2018-07-10 Thread Robert Stanford
Do the recommendations apply to both data and journal SSDs equally? On Tue, Jul 10, 2018 at 12:59 PM, Satish Patel wrote: > On Tue, Jul 10, 2018 at 11:51 AM, Simon Ironside > wrote: > > Hi, > > > > On 10/07/18 16:25, Satish Patel wrote: > >> > >> Folks, > >> > >> I am in middle or ordering har

Re: [ceph-users] Journel SSD recommendation

2018-07-10 Thread Satish Patel
On Tue, Jul 10, 2018 at 11:51 AM, Simon Ironside wrote: > Hi, > > On 10/07/18 16:25, Satish Patel wrote: >> >> Folks, >> >> I am in middle or ordering hardware for my Ceph cluster, so need some >> recommendation from communities. >> >> - What company/Vendor SSD is good ? > > > Samsung SM863a is th

Re: [ceph-users] Recovering from no quorum (2/3 monitors down) via 1 good monitor

2018-07-10 Thread Syahrul Sazli Shaharir
Hi Paul, Yes that's what I did - caused some errors. In the end I had to delete the /var/lib/ceph/mon/* directory in the bad node and run inject with --mkfs argument to recreate the database. I am good now - thanks. :) On Tue, Jul 10, 2018 at 10:46 PM, Paul Emmerich wrote: > easy: > > 1. make su

[ceph-users] Looking for some advise on distributed FS: Is Ceph the right option for me?

2018-07-10 Thread Jones de Andrade
Hi all. I'm looking for some information on several distributed filesystems for our application. It looks like it finally came down to two candidates, Ceph being one of them. But there are still a few questions about it that I would really like to clarify, if possible. Our plan, initially on 6 w

Re: [ceph-users] rbd lock remove unable to parse address

2018-07-10 Thread Kevin Olbrich
2018-07-10 14:37 GMT+02:00 Jason Dillaman : > On Tue, Jul 10, 2018 at 2:37 AM Kevin Olbrich wrote: > >> 2018-07-10 0:35 GMT+02:00 Jason Dillaman : >> >>> Is the link-local address of "fe80::219:99ff:fe9e:3a86%eth0" at least >>> present on the client computer you used? I would have expected the OS

Re: [ceph-users] Journel SSD recommendation

2018-07-10 Thread Simon Ironside
Hi, On 10/07/18 16:25, Satish Patel wrote: Folks, I am in the middle of ordering hardware for my Ceph cluster, so I need some recommendations from the community. - What company/vendor SSD is good? Samsung SM863a is the current favourite I believe. The Intel DC S4600 is one to specifically avoid at

[ceph-users] OSDs stalling on Intel SSDs

2018-07-10 Thread Shawn Iverson
Hi everybody, I have a situation that occurs under moderate I/O load on Ceph Luminous: 2018-07-10 10:27:01.257916 mon.node4 mon.0 172.16.0.4:6789/0 15590 : cluster [INF] mon.node4 is new leader, mons node4,node5,node6,node7,node8 in quorum (ranks 0,1,2,3,4) 2018-07-10 10:27:01.306329 mon.node4 mo

Re: [ceph-users] Journel SSD recommendation

2018-07-10 Thread Anton Aleksandrov
I think you will get some useful information from this link: https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/ Even though it is dated 2014, it gives you an approximate direction. Anton On 10.07.2018 18:25, Satish Patel wrote: Folks, I am
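The core of the test described in that post is a single-job, queue-depth-1 O_DSYNC write with fio, roughly as follows (destructive: it writes directly to the device, and /dev/sdX is a placeholder):

    # WARNING: overwrites data on the target device
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test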

[ceph-users] Journel SSD recommendation

2018-07-10 Thread Satish Patel
Folks, I am in the middle of ordering hardware for my Ceph cluster, so I need some recommendations from the community. - What company/vendor SSD is good? - What size should be good for the journal (for BlueStore)? I have lots of Samsung 850 EVO but they are consumer drives. Do you think a consumer drive should be goo

Re: [ceph-users] Recovering from no quorum (2/3 monitors down) via 1 good monitor

2018-07-10 Thread Paul Emmerich
easy: 1. make sure that none of the mons are running 2. extract the monmap from the good one 3. use monmaptool to remove the two other mons from it 4. inject the mon map back into the good mon 5. start the good mon 6. you now have a running cluster with only one mon, add two new ones Paul 20
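Expressed as commands, with 'node1' as the surviving mon and 'node2'/'node3' as the dead ones (all names are placeholders):

    # on the good mon host, with all mon daemons stopped
    ceph-mon -i node1 --extract-monmap /tmp/monmap
    monmaptool /tmp/monmap --rm node2
    monmaptool /tmp/monmap --rm node3
    ceph-mon -i node1 --inject-monmap /tmp/monmap
    systemctl start ceph-mon@node1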

Re: [ceph-users] CephFS - How to handle "loaded dup inode" errors

2018-07-10 Thread John Spray
On Tue, Jul 10, 2018 at 3:14 PM Dennis Kramer (DBS) wrote: > > Hi John, > > On Tue, 2018-07-10 at 10:11 +0100, John Spray wrote: > > On Tue, Jul 10, 2018 at 12:43 AM Linh Vu wrote: > > > > > > > > > We're affected by something like this right now (the dup inode > > > causing MDS to crash via asse

Re: [ceph-users] CephFS - How to handle "loaded dup inode" errors

2018-07-10 Thread Dennis Kramer (DBS)
Hi John, On Tue, 2018-07-10 at 10:11 +0100, John Spray wrote: > On Tue, Jul 10, 2018 at 12:43 AM Linh Vu wrote: > > > > > > We're affected by something like this right now (the dup inode > > causing MDS to crash via assert(!p) with add_inode(CInode) > > function). > > > > In terms of behaviour

Re: [ceph-users] rbd lock remove unable to parse address

2018-07-10 Thread Jason Dillaman
On Tue, Jul 10, 2018 at 2:37 AM Kevin Olbrich wrote: > 2018-07-10 0:35 GMT+02:00 Jason Dillaman : > >> Is the link-local address of "fe80::219:99ff:fe9e:3a86%eth0" at least >> present on the client computer you used? I would have expected the OSD to >> determine the client address, so it's odd th
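For anyone landing on this thread later, the lock operations being discussed are the standard rbd CLI ones; the pool, image, lock ID and locker below are placeholders:

    rbd lock list mypool/myimage
    # then remove the stale lock using the ID and locker shown in the listing
    rbd lock remove mypool/myimage "my-lock-id" client.4567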

Re: [ceph-users] Erasure coding RBD pool for OpenStack Glance, Nova and Cinder

2018-07-10 Thread Gilles Mocellin
On 2018-07-10 06:26, Konstantin Shalygin wrote: Has someone used EC pools with OpenStack in production? By chance, I found that link: https://www.reddit.com/r/ceph/comments/72yc9m/ceph_openstack_with_ec/ Yes, this is a good post. My configuration is: cinder.conf: [erasure-rbd-hdd]
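A minimal sketch of what such a cinder.conf backend section can look like; apart from the section name quoted above, every value here is a placeholder, not taken from the thread:

    [erasure-rbd-hdd]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = erasure-rbd-hdd
    rbd_pool = volumes-hdd                # replicated metadata pool
    rbd_user = cinder
    rbd_ceph_conf = /etc/ceph/ceph.conf   # where 'rbd default data pool' is set
    rbd_secret_uuid = <libvirt secret uuid>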

Re: [ceph-users] Mimic 13.2.1 release date

2018-07-10 Thread Martin Overgaard Hansen
> Den 9. jul. 2018 kl. 17.12 skrev Wido den Hollander : > > Hi, > > Is there a release date for Mimic 13.2.1 yet? > > There are a few issues which currently make deploying with Mimic 13.2.0 > a bit difficult, for example: > > - https://tracker.ceph.com/issues/24423 > - https://github.com/ceph

[ceph-users] Add Partitions to Ceph Cluster

2018-07-10 Thread Dimitri Roschkowski
Hi, is it possible to use just a partition instead of a whole disk for OSD? On a server I already use hdb for Ceph and want to add hda4 to be used in the Ceph Cluster, but it didn’t work for me. On the server with the partition I tried: ceph-disk prepare /dev/sda4 and ceph-disk activate /d
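ceph-disk can generally be pointed at a partition, but the partition type GUID usually has to be the Ceph OSD data type for the udev-based activation to work; a hedged sketch using the device from the post (the GUID is the standard 'ceph data' type code):

    # tag partition 4 on /dev/sda as Ceph OSD data, then prepare/activate it
    sgdisk --typecode=4:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sda
    ceph-disk prepare --bluestore /dev/sda4
    ceph-disk activate /dev/sda4
    # on Luminous, ceph-volume is an alternative that takes partitions directly:
    # ceph-volume lvm create --bluestore --data /dev/sda4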

Re: [ceph-users] Luminous 12.2.6 release date?

2018-07-10 Thread Sean Purdy
Hi Sean, On Tue, 10 Jul 2018, Sean Redmond said: > Can you please link me to the tracker 12.2.6 fixes? I have disabled > resharding in 12.2.5 due to it running endlessly. http://tracker.ceph.com/issues/22721 Sean > Thanks > > On Tue, Jul 10, 2018 at 9:07 AM, Sean Purdy > wrote: > > > While

Re: [ceph-users] CephFS - How to handle "loaded dup inode" errors

2018-07-10 Thread John Spray
On Tue, Jul 10, 2018 at 2:49 AM Linh Vu wrote: > > While we're on this topic, could someone please explain to me what > `cephfs-table-tool all reset inode` does? The inode table stores an interval set of free inode numbers. Active MDS daemons consume inode numbers as they create files. Resetti
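For reference, the invocations under discussion look like this ('all' addresses every MDS rank; the filesystem should be taken offline/failed first, per the disaster-recovery documentation):

    # inspect the inode table(s) before touching them
    cephfs-table-tool all show inode
    # reset them -- dangerous: the table forgets which inode numbers are already in use
    cephfs-table-tool all reset inode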

Re: [ceph-users] CephFS - How to handle "loaded dup inode" errors

2018-07-10 Thread John Spray
On Tue, Jul 10, 2018 at 12:43 AM Linh Vu wrote: > > We're affected by something like this right now (the dup inode causing MDS to > crash via assert(!p) with add_inode(CInode) function). > > In terms of behaviours, shouldn't the MDS simply skip to the next available > free inode in the event of

Re: [ceph-users] Luminous 12.2.6 release date?

2018-07-10 Thread Sean Redmond
Hi Sean (Good name btw), Can you please link me to the tracker 12.2.6 fixes? I have disabled resharding in 12.2.5 due to it running endlessly. Thanks On Tue, Jul 10, 2018 at 9:07 AM, Sean Purdy wrote: > While we're at it, is there a release date for 12.2.6? It fixes a > reshard/versioning bug
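For anyone wondering how resharding gets disabled in the meantime: dynamic resharding is controlled by an RGW option, roughly as below (the instance name is a placeholder):

    # ceph.conf on the RGW hosts, then restart the radosgw service
    [client.rgw.<instance-name>]
    rgw dynamic resharding = false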

Re: [ceph-users] Mimic 13.2.1 release date

2018-07-10 Thread Steffen Winther Sørensen
> On 9 Jul 2018, at 17.11, Wido den Hollander wrote: > > Hi, > > Is there a release date for Mimic 13.2.1 yet? > > There are a few issues which currently make deploying with Mimic 13.2.0 > a bit difficult, for example: > > - https://tracker.ceph.com/issues/24423 > - https://github.com/ceph/c

[ceph-users] Luminous 12.2.6 release date?

2018-07-10 Thread Sean Purdy
While we're at it, is there a release date for 12.2.6? It fixes a reshard/versioning bug for us. Sean

[ceph-users] ceph poor performance when compress files

2018-07-10 Thread Mostafa Hamdy Abo El-Maty El-Giar
Hi Ceph experts, When I compress files stored in the ceph cluster using the gzip command, the command takes a long time. The poor performance occurs only when zipping files stored on ceph. Please, any ideas about this problem? Thank you