Re: disabling buffer::raw crc cache

2015-11-11 Thread Ning Yao
>> the code paths that touch the crc cache are bufferlist::crc32c and invalidate_crc.
> Also for pg_log::_write_log(), but it seems it always misses and is used at once, so no need to cache the crc actually?
Oh, no, it will be hit in FileJournal writing.
Regards, Ning Yao
2015-11-11 18:03 GMT+08:00 Ning Yao

Re: disabling buffer::raw crc cache

2015-11-11 Thread Evgeniy Firsov
RB-tree construction and insertion, which need memory allocation and mutex lock/unlock, are more CPU-expensive than a streamlined crc calculation over sometimes 100 bytes or less. On 11/11/15, 12:03 AM, "池信泽" wrote:
> Ah, I am confused about why the crc cache logic would consume so much CPU.
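For context, the cache under discussion looks roughly like the following simplified C++ sketch (not Ceph's exact code): buffer::raw keeps a map of byte ranges to crcs behind a lock, so every hit or store pays for locking plus a red-black-tree walk, and every store may also allocate a node.

    #include <cstddef>
    #include <cstdint>
    #include <map>
    #include <mutex>
    #include <utility>

    // Simplified stand-in for the crc cache kept by buffer::raw.
    class raw_crc_cache {
      // key: (first byte, one-past-last byte) of the checksummed range
      // value: (seed crc, resulting crc)
      std::map<std::pair<size_t, size_t>, std::pair<uint32_t, uint32_t>> crc_map;
      std::mutex lock;

    public:
      bool get_crc(std::pair<size_t, size_t> range,
                   std::pair<uint32_t, uint32_t>* crc) {
        std::lock_guard<std::mutex> g(lock);  // locking even on the read path
        auto it = crc_map.find(range);        // O(log n) tree walk
        if (it == crc_map.end())
          return false;
        *crc = it->second;
        return true;
      }
      void set_crc(std::pair<size_t, size_t> range,
                   std::pair<uint32_t, uint32_t> crc) {
        std::lock_guard<std::mutex> g(lock);  // lock + node allocation + insert
        crc_map[range] = crc;
      }
      void invalidate_crc() {
        std::lock_guard<std::mutex> g(lock);
        crc_map.clear();                      // paid on every buffer mutation
      }
    };

For a ~100-byte buffer, crc32c itself (especially with hardware CRC32 instructions) can be cheaper than this bookkeeping, which is Evgeniy's point.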

Re: disabling buffer::raw crc cache

2015-11-11 Thread Ning Yao
>> the code paths that touch the crc cache are bufferlist::crc32c and invalidate_crc.
Also for pg_log::_write_log(), but it seems it always misses and is used at once, so no need to cache the crc actually? So we may need to add some option to enable or disable it, or some identifier to instruct bufferlist whether

Re: disabling buffer::raw crc cache

2015-11-11 Thread Ning Yao
>>> the code paths that touch the crc cache are bufferlist::crc32c and invalidate_crc.
>> Also for pg_log::_write_log(), but it seems it always misses and is used at once, so no need to cache the crc actually?
> Oh, no, it will be hit in FileJournal writing
Still a miss, as the buffer::ptr length differs with
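A tiny standalone illustration of the miss Ning Yao describes: the cache is keyed by the exact byte range, so a later lookup over a buffer::ptr with a different length never matches, even though the data overlaps (the 4096/4000 numbers below are invented for the example).

    #include <cassert>
    #include <cstddef>
    #include <cstdint>
    #include <map>
    #include <utility>

    int main() {
      std::map<std::pair<size_t, size_t>, uint32_t> crc_map;
      crc_map[{0, 4096}] = 0x1234abcd;  // crc cached for the full 4 KB ptr
      // A FileJournal-style write covering a different slice of the same
      // raw buffer: the exact-range lookup misses despite the overlap.
      assert(crc_map.find({0, 4000}) == crc_map.end());
      return 0;
    }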

merge commits reminder

2015-11-11 Thread Sage Weil
Just a reminder: we'd like to generate the release changelog from the merge commits. Whenever merging a pull request, please remember to:
- edit the first line to be what will appear in the changelog (see the example below). Prefix it with the subsystem and give it a short, meaningful description.
- if the
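For instance, a changelog-ready first line for a merge commit might look like this (a made-up example, not a real change):

    osd: fix scrub scheduling race during pg removal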

Re: new scrub and repair discussion

2015-11-11 Thread kefu chai
On Wed, Nov 11, 2015 at 10:43 PM, 王志强 wrote:
> 2015-11-11 19:44 GMT+08:00 kefu chai :
>> currently, scrub and repair are pretty primitive. there are several
>> improvements which need to be made:
>>
>> - user should be able to initialize scrub of a PG or an

Re: new scrub and repair discussion

2015-11-11 Thread kefu chai
On Wed, Nov 11, 2015 at 9:25 PM, Sage Weil wrote:
> On Wed, 11 Nov 2015, kefu chai wrote:
>> currently, scrub and repair are pretty primitive. there are several
>> improvements which need to be made:
>>
>> - user should be able to initialize scrub of a PG or an object
>> -

Re: new scrub and repair discussion

2015-11-11 Thread 王志强
2015-11-11 19:44 GMT+08:00 kefu chai :
> currently, scrub and repair are pretty primitive. there are several
> improvements which need to be made:
>
> - user should be able to initialize scrub of a PG or an object
> - int scrub(pg_t, AioCompletion*)
> - int scrub(const

11/11/2015 Weekly Ceph Performance Meeting IS ON!

2015-11-11 Thread Mark Nelson
8AM PST as usual! (i.e. in 10 minutes; sorry for the late notice) Discussion topics include newstore block and anything else folks want to talk about. See you there! Here are the links: Etherpad URL: http://pad.ceph.com/p/performance_weekly To join the Meeting: https://bluejeans.com/268261044

Re: disabling buffer::raw crc cache

2015-11-11 Thread 池信泽
the code paths that touch the crc cache are bufferlist::crc32c and invalidate_crc. we call bufferlist::crc32c when sending or receiving a message and when writing the filejournal. Am I missing something critical? I agree with you that the benefit from that crc cache is very limited. 2015-11-11 16:25 GMT+08:00
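Those call sites reduce to something like the sketch below (assumes the Ceph source tree for include/buffer.h; the seed value of 0 matches typical usage):

    #include "include/buffer.h"  // ceph::bufferlist (requires the Ceph tree)

    // Checksumming a message payload or journal entry: each underlying
    // buffer::raw consults its crc cache here, and on a miss computes
    // crc32c over the ptr's bytes and stores the result for reuse.
    uint32_t checksum_payload(const bufferlist& bl) {
      return bl.crc32c(0 /* seed */);
    }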

Re: disabling buffer::raw crc cache

2015-11-11 Thread Ning Yao
2015-11-11 21:13 GMT+08:00 Sage Weil :
> On Wed, 11 Nov 2015, Ning Yao wrote:
>> >>> the code paths that touch the crc cache are bufferlist::crc32c and invalidate_crc.
>> >> Also for pg_log::_write_log(), but it seems it always misses and is used at once, so no need to cache the crc

Re: disabling buffer::raw crc cache

2015-11-11 Thread Sage Weil
On Wed, 11 Nov 2015, Ning Yao wrote:
> 2015-11-11 21:13 GMT+08:00 Sage Weil :
> > On Wed, 11 Nov 2015, Ning Yao wrote:
> >> >>> the code paths that touch the crc cache are bufferlist::crc32c and invalidate_crc.
> >> >> Also for pg_log::_write_log(), but it seems it always

new scrub and repair discussion

2015-11-11 Thread kefu chai
currently, scrub and repair are pretty primitive. there are several improvements which need to be made:
- user should be able to initialize scrub of a PG or an object
  - int scrub(pg_t, AioCompletion*)
  - int scrub(const string& pool, const string& nspace, const string& locator, const
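The archive truncates the proposed signatures; a hypothetical reconstruction follows (everything after "const string& locator," in the second overload is a guess, since the original text is cut off):

    #include <string>

    namespace librados { class AioCompletion; }
    struct pg_t;  // Ceph's placement-group id type

    // scrub an entire PG asynchronously, signalling the completion when done
    int scrub(pg_t pgid, librados::AioCompletion* c);

    // scrub a single object; the oid and completion parameters here are
    // assumed, not part of the archived text
    int scrub(const std::string& pool, const std::string& nspace,
              const std::string& locator, const std::string& oid,
              librados::AioCompletion* c);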

Re: new scrub and repair discussion

2015-11-11 Thread Sage Weil
On Wed, 11 Nov 2015, kefu chai wrote:
> currently, scrub and repair are pretty primitive. there are several
> improvements which need to be made:
>
> - user should be able to initialize scrub of a PG or an object
> - int scrub(pg_t, AioCompletion*)
> - int scrub(const string& pool, const

Re: disabling buffer::raw crc cache

2015-11-11 Thread Sage Weil
On Wed, 11 Nov 2015, Ning Yao wrote:
> >>> the code paths that touch the crc cache are bufferlist::crc32c and invalidate_crc.
> >> Also for pg_log::_write_log(), but it seems it always misses and is used at once, so no need to cache the crc actually?
> > Oh, no, it will be hit in FileJournal writing
>

[PATCH] mm: Allow GFP_IOFS for page_cache_read page cache allocation

2015-11-11 Thread mhocko
From: Michal Hocko

page_cache_read has historically been using page_cache_alloc_cold to allocate a new page. This means that mapping_gfp_mask is used as the base for the gfp_mask. Many filesystems set this mask to GFP_NOFS to prevent fs recursion issues.

Re: Increasing # Shards vs multi-OSDs per device

2015-11-11 Thread Robert LeBlanc
I should have the weighted round robin queue ready in the next few days. I'm shaking out a few bugs from converting it from my Hammer patch, and I need to write a test suite, but I can get you the branch before then. I'd be interested to see what

test

2015-11-11 Thread Somnath Roy
Sorry for the spam, having some issues with devel

Re: test

2015-11-11 Thread Mark Nelson
whatever you did, it appears to work. :) On 11/11/2015 05:44 PM, Somnath Roy wrote:
> Sorry for the spam, having some issues with devel

RE: 11/11/2015 Weekly Ceph Performance Meeting IS ON!

2015-11-11 Thread James (Fei) Liu-SSI
Hi Mark, I have been busy this morning and missed the meeting today. Would it be possible to upload the meeting recording to the site at your convenience? Thanks, James

Re: Increasing # Shards vs multi-OSDs per device

2015-11-11 Thread Mark Nelson
Hi Stephen, That's about what I expected to see, other than the write performance drop with more shards. We clearly still have some room for improvement. Good job doing the testing! Mark
On 11/11/2015 02:57 PM, Blinick, Stephen L wrote:
> Sorry about the microphone issues in the performance

Increasing # Shards vs multi-OSDs per device

2015-11-11 Thread Blinick, Stephen L
Sorry about the microphone issues in the performance meeting today. This is a followup to the 11/4 performance meeting, where we discussed increasing the worker thread count in the OSDs vs. making multiple OSDs (and partitions/filesystems) per device. We did the high level
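For reference, the worker/shard counts being varied map to these OSD options (the values below are illustrative, not recommendations; the Hammer-era defaults are 5 shards with 2 threads per shard):

    [osd]
    osd op num shards = 10              # number of sharded op queues per OSD
    osd op num threads per shard = 2    # worker threads servicing each shard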

RE: Increasing # Shards vs multi-OSDs per device

2015-11-11 Thread Somnath Roy
Thanks for the data, Stephen. Some feedback:
1. I don't think a single OSD is yet able to serve 460K read IOPS, irrespective of how many shards/threads you are running. I didn't have your NVMe data earlier :-). But probably for 50/60K SAS SSD IOPS a single OSD per drive is good enough. I hope

RE: Increasing # Shards vs multi-OSDs per device

2015-11-11 Thread Blinick, Stephen L
Thanks for taking a look! First, the original slides are on the Ceph slideshare here: http://www.slideshare.net/Inktank_Ceph/accelerating-cassandra-workloads-on-ceph-with-allflash-pcie-ssds That should show the 1/2/4 partition comparison and the overall performance #s, latency, and data

Re: disabling buffer::raw crc cache

2015-11-11 Thread 池信泽
Ah, I am confused about why the crc cache logic would consume so much CPU. 2015-11-11 15:27 GMT+08:00 Evgeniy Firsov :
> Hello, Guys!
>
> While running a CPU-bound 4k block workload, I found that disabling the crc
> cache in buffer::raw gives around a 7% performance