>>the code logic that would touch the crc cache is bufferlist::crc32c and invalidate_crc.
>Also for pg_log::_write_log(), but it seems it is always a miss and used
>only once, so there is actually no need to cache the crc?
Oh, no, it will be hit in FileJournal writing
Regards
Ning Yao
2015-11-11 18:03 GMT+08:00 Ning Yao
Rb-tree construction and insertion, which need memory allocation and mutex
lock/unlock, are more CPU-expensive than a streamlined crc calculation over
sometimes 100 bytes or less.
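For a rough, self-contained illustration of that trade-off, here is a
micro-benchmark sketch (my own code, not Ceph's; note the bitwise software
crc32c below is orders of magnitude slower than the table-driven/SSE4.2
paths Ceph actually uses, so mentally shrink the crc column accordingly):

    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <map>
    #include <mutex>
    #include <utility>

    // Plain bitwise crc32c (Castagnoli polynomial, reflected form).
    static uint32_t crc32c_sw(uint32_t crc, const uint8_t* p, size_t n) {
      crc = ~crc;
      while (n--) {
        crc ^= *p++;
        for (int k = 0; k < 8; ++k)
          crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
      }
      return ~crc;
    }

    int main() {
      uint8_t buf[100] = {0};
      const int iters = 200000;
      using clk = std::chrono::steady_clock;

      // Side A: streamlined crc over ~100 bytes, no allocation, no locking.
      auto t0 = clk::now();
      uint32_t c = 0;
      for (int i = 0; i < iters; ++i)
        c = crc32c_sw(c, buf, sizeof(buf));
      auto t1 = clk::now();

      // Side B: model of the cache path, a mutex round-trip plus an rb-tree
      // (std::map) insert that heap-allocates a node per new range key.
      std::mutex mtx;
      std::map<std::pair<size_t, size_t>, uint32_t> cache;
      auto t2 = clk::now();
      for (int i = 0; i < iters; ++i) {
        std::lock_guard<std::mutex> lock(mtx);
        cache.emplace(std::make_pair(static_cast<size_t>(i),
                                     static_cast<size_t>(i) + 100), c);
      }
      auto t3 = clk::now();

      auto us = [](clk::duration d) {
        return std::chrono::duration_cast<std::chrono::microseconds>(d).count();
      };
      std::printf("sw crc32c over 100B x%d: %lld us\n", iters, (long long)us(t1 - t0));
      std::printf("locked map insert x%d:  %lld us\n", iters, (long long)us(t3 - t2));
      return 0;
    }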
On 11/11/15, 12:03 AM, "池信泽" wrote:
>Ah, I'm confused about why the crc cache logic would consume so much CPU.
>>the code logic that would touch the crc cache is bufferlist::crc32c and invalidate_crc.
Also for pg_log::_write_log(), but it seems it is always a miss and used
only once, so there is actually no need to cache the crc?
So we may need to add some option to enable or disable it, or some
identifier to instruct bufferlist whether
>>>the code logic that would touch the crc cache is bufferlist::crc32c and
>>>invalidate_crc.
>>Also for pg_log::_write_log(), but it seems it is always a miss and used
>>only once, so there is actually no need to cache the crc?
> Oh, no, it will be hit in FileJournal writing
Still a miss, as the buffer::ptr length differs with
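To make the hit/miss condition concrete, here is a much-simplified model of
the per-raw crc cache (the real buffer::raw entry also stores a base crc so
results can be chained; names and sizes here are illustrative only):

    #include <cstdint>
    #include <cstdio>
    #include <map>
    #include <utility>

    // Simplified crc cache on a buffer::raw: maps a (start, end) byte range
    // to the crc cached for exactly that range.
    struct raw_model {
      std::map<std::pair<size_t, size_t>, uint32_t> crc_map;

      bool get_crc(size_t off, size_t end, uint32_t* out) const {
        auto it = crc_map.find(std::make_pair(off, end));
        if (it == crc_map.end()) return false;  // different range => miss
        *out = it->second;
        return true;
      }
      void set_crc(size_t off, size_t end, uint32_t crc) {
        crc_map[std::make_pair(off, end)] = crc;
      }
    };

    int main() {
      raw_model r;
      r.set_crc(0, 4096, 0xdeadbeef);  // cached when the log entry was built
      uint32_t crc;
      // FileJournal later views the same raw through a ptr of another length,
      // so the lookup key no longer matches and the crc is recomputed:
      std::printf("(0,4096): %s\n", r.get_crc(0, 4096, &crc) ? "hit" : "miss");
      std::printf("(0,4000): %s\n", r.get_crc(0, 4000, &crc) ? "hit" : "miss");
      return 0;
    }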
Just a reminder: we'd like to generate the release changelog from the
merge commits. Whenever merging a pull request, please remember to:
- edit the first line to be what will appear in the changelog.
Prefix it with the subsystem and give it a short, meaningful description.
- if the
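For instance, a first line in the desired shape might look like this
(made-up example, not a real commit):

    osd: fix requeue order of replayed ops

rather than the default "Merge pull request #NNNN from user/branch".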
On Wed, Nov 11, 2015 at 10:43 PM, 王志强 wrote:
> 2015-11-11 19:44 GMT+08:00 kefu chai :
>> currently, scrub and repair are pretty primitive. there are several
>> improvements which need to be made:
>>
>> - users should be able to initiate a scrub of a PG or an
On Wed, Nov 11, 2015 at 9:25 PM, Sage Weil wrote:
> On Wed, 11 Nov 2015, kefu chai wrote:
>> currently, scrub and repair are pretty primitive. there are several
>> improvements which need to be made:
>>
>> - users should be able to initiate a scrub of a PG or an object
>> -
2015-11-11 19:44 GMT+08:00 kefu chai :
> currently, scrub and repair are pretty primitive. there are several
> improvements which need to be made:
>
> - users should be able to initiate a scrub of a PG or an object
> - int scrub(pg_t, AioCompletion*)
> - int scrub(const
8AM PST as usual! (ie in 10 minutes, sorry for the late notice)
Discussion topics include newstore block and anything else folks want to
talk about. See you there!
Here are the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
the code logic that would touch the crc cache is bufferlist::crc32c and invalidate_crc.
We call bufferlist::crc32c when sending or receiving messages and when
writing the filejournal.
Am I missing something critical?
I agree with you that the benefit from that crc cache is very limited.
2015-11-11 16:25 GMT+08:00
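For reference, the two entry points in question, exercised directly (a
compile sketch assuming a Ceph source tree on the include path; exact
namespace spelling varies by release):

    #include <cstdint>
    #include <string>
    #include "include/buffer.h"

    int main() {
      ceph::bufferlist bl;
      bl.append(std::string("a message or pg_log payload"));
      uint32_t crc = bl.crc32c(0);  // computes the crc; may populate the cache
      bl.invalidate_crc();          // drops cached crcs after the list changes
      (void)crc;
      return 0;
    }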
2015-11-11 21:13 GMT+08:00 Sage Weil :
> On Wed, 11 Nov 2015, Ning Yao wrote:
>> >>>the code logic that would touch the crc cache is bufferlist::crc32c and
>> >>>invalidate_crc.
>> >>Also for pg_log::_write_log(), but it seems it is always a miss and used
>> >>only once, so there is actually no need to cache the crc
On Wed, 11 Nov 2015, Ning Yao wrote:
> 2015-11-11 21:13 GMT+08:00 Sage Weil :
> > On Wed, 11 Nov 2015, Ning Yao wrote:
> >> >>>the code logic that would touch the crc cache is bufferlist::crc32c and
> >> >>>invalidate_crc.
> >> >>Also for pg_log::_write_log(), but it seems it is always
currently, scrub and repair are pretty primitive. there are several
improvements which need to be made:
- users should be able to initiate a scrub of a PG or an object
- int scrub(pg_t, AioCompletion*)
- int scrub(const string& pool, const string& nspace, const
string& locator, const
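A sketch of how those entry points might be declared (the second overload is
truncated in the original mail, so the trailing oid and completion parameters
are my guesses, and pg_t/AioCompletion below are stand-ins for the real
librados types):

    #include <string>

    struct pg_t { unsigned pool; unsigned seed; };  // stand-in
    class AioCompletion;                            // stand-in

    struct ScrubApi {
      // scrub a whole placement group; the completion fires when it finishes
      virtual int scrub(pg_t pgid, AioCompletion* c) = 0;
      // scrub a single object addressed by pool/namespace/locator/oid
      virtual int scrub(const std::string& pool,
                        const std::string& nspace,
                        const std::string& locator,
                        const std::string& oid,   // hypothetical
                        AioCompletion* c) = 0;    // hypothetical
      virtual ~ScrubApi() {}
    };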
On Wed, 11 Nov 2015, kefu chai wrote:
> currently, scrub and repair are pretty primitive. there are several
> improvements which need to be made:
>
> - users should be able to initiate a scrub of a PG or an object
> - int scrub(pg_t, AioCompletion*)
> - int scrub(const string& pool, const
On Wed, 11 Nov 2015, Ning Yao wrote:
> >>>the code logic that would touch the crc cache is bufferlist::crc32c and
> >>>invalidate_crc.
> >>Also for pg_log::_write_log(), but it seems it is always a miss and used
> >>only once, so there is actually no need to cache the crc?
> > Oh, no, it will be hit in FileJournal writing
>
From: Michal Hocko
page_cache_read has been historically using page_cache_alloc_cold to
allocate a new page. This means that mapping_gfp_mask is used as the
base for the gfp_mask. Many filesystems set this mask to
GFP_NOFS to prevent fs recursion issues.
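A toy userspace model of the relationship being described (illustrative bit
values only; the real masks live in include/linux/gfp.h and differ across
kernel versions):

    #include <cstdio>

    enum gfp_bits : unsigned {
      __GFP_IO   = 1u << 0,
      __GFP_FS   = 1u << 1,
      GFP_KERNEL = __GFP_IO | __GFP_FS,
      GFP_NOFS   = __GFP_IO,  // GFP_KERNEL with __GFP_FS cleared
    };

    // Model of page_cache_read()'s allocation: the mapping's gfp mask is the
    // base, so a filesystem that set the mapping to GFP_NOFS keeps reclaim
    // from re-entering filesystem code underneath the allocation.
    static unsigned page_cache_alloc_mask(unsigned mapping_gfp_mask) {
      return mapping_gfp_mask;
    }

    int main() {
      unsigned mask = page_cache_alloc_mask(GFP_NOFS);
      std::printf("reclaim may call back into the fs: %s\n",
                  (mask & __GFP_FS) ? "yes" : "no");  // prints "no"
      return 0;
    }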
I should have the weighted round robin queue ready in the next few
days. I'm shaking out a few bugs from converting it from my Hammer patch
and I need to write a test suite, but I can get you the branch before
then. I'd be interested to see what
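Since the branch isn't posted yet, here is a generic sketch of the queuing
discipline being discussed (not the actual patch): each op class gets a
weight, and per round a class may dequeue up to that many ops before
yielding to the next class.

    #include <cstdio>
    #include <deque>
    #include <map>
    #include <string>
    #include <utility>

    // Minimal weighted round robin over named op classes. When all classes
    // are backlogged, a class with weight w gets w ops per round.
    template <typename T>
    class WrrQueue {
      struct Lane { unsigned weight = 0; unsigned used = 0; std::deque<T> q; };
      std::map<std::string, Lane> lanes_;
    public:
      // every class must be given a nonzero weight before it is drained
      void set_weight(const std::string& cls, unsigned w) { lanes_[cls].weight = w; }
      void enqueue(const std::string& cls, T item) {
        lanes_[cls].q.push_back(std::move(item));
      }
      bool dequeue(T* out) {
        for (int pass = 0; pass < 2; ++pass) {
          for (auto& kv : lanes_) {
            Lane& lane = kv.second;
            if (!lane.q.empty() && lane.used < lane.weight) {
              *out = std::move(lane.q.front());
              lane.q.pop_front();
              ++lane.used;
              return true;
            }
          }
          // every backlogged lane used up its quota: start a new round
          for (auto& kv : lanes_) kv.second.used = 0;
        }
        return false;  // all lanes empty
      }
    };

    int main() {
      WrrQueue<int> q;
      q.set_weight("client", 3);  // 3:1 share in favor of client ops
      q.set_weight("scrub", 1);
      for (int i = 0; i < 4; ++i) {
        q.enqueue("client", i);
        q.enqueue("scrub", 100 + i);
      }
      int v;
      while (q.dequeue(&v)) std::printf("%d ", v);  // 0 1 2 100 3 101 102 103
      std::printf("\n");
      return 0;
    }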
Sorry for the spam, having some issues with devel
whatever you did, it appears to work. :)
On 11/11/2015 05:44 PM, Somnath Roy wrote:
Sorry for the spam, having some issues with devel
Hi Mark,
I have been busy this morning and missed the meeting today. Would it be
possible to upload the meeting recording to the site at your convenience?
Thanks,
James
Hi Stephen,
That's about what I expected to see, other than the write performance
drop with more shards. We clearly still have some room for improvement.
Good job doing the testing!
Mark
On 11/11/2015 02:57 PM, Blinick, Stephen L wrote:
Sorry about the microphone issues in the performance
Sorry about the microphone issues in the performance meeting today.
This is a followup to the 11/4 performance meeting where we discussed
increasing the worker thread count in the OSDs vs. making multiple OSDs (and
partitions/filesystems) per device. We did the high level
Thanks for the data Stephen. Some feedback:
1. I don't think a single OSD can serve 460K read IOPS, irrespective
of how many shards/threads you are running. I didn't have your NVMe data
earlier :-). But probably, for 50/60K SAS SSD IOPS, a single OSD per drive is good
enough. I hope
Thanks for taking a look!
First, the original slides are on the Ceph slideshare here:
http://www.slideshare.net/Inktank_Ceph/accelerating-cassandra-workloads-on-ceph-with-allflash-pcie-ssds
That should show the 1/2/4 partition comparison and the overall performance
#'s, latency, and data
Ah, I'm confused about why the crc cache logic would consume so much CPU.
2015-11-11 15:27 GMT+08:00 Evgeniy Firsov :
> Hello, Guys!
>
> While running a CPU-bound 4k block workload, I found that disabling the crc
> cache in buffer::raw gives around 7% performance