8AM PST as usual (ie in 18 minutes)! Discussion topics today include
bluestore testing results and a potential performance regression in
CentOS/RHEL 7.1 kernels. Please feel free to add your own topics!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the
8AM PST as usual (ie in 10 minutes)! No set agenda for today. Please
feel free to add your own topics!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser:
https://bluejeans.com/268261044/browser
On 12/02/2015 12:23 PM, Gregory Farnum wrote:
On Tue, Dec 1, 2015 at 5:23 AM, Vimal wrote:
Hello,
This mail is to discuss the feature request at
http://tracker.ceph.com/issues/13578.
If done, such a tool should help point out several misconfigurations that
may cause
che.
Nick
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Sage Weil
Sent: 25 November 2015 20:41
To: Nick Fisk <n...@fisk.me.uk>
Cc: 'ceph-users' <ceph-us...@lists.ceph.com>; ceph-devel@vger.kernel.org;
'Mar
solution - Intel
CAS with NVMe PCIe SSD.
I would like to present this approach and show some results.
Maciej
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Wednesday, November 18, 2015 4:05 PM
To: ceph-devel
FWIW, if you've got collectl per-process logs, you might look for major
pagefaults associated with the osd processes. I've seen process
swapping cause heartbeat timeouts in the past. Not to say that's the
issue, but worth confirming it's not happening.
Mark
On 11/23/2015 01:03 PM, Robert
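For anyone without collectl logs handy, a rough equivalent can be pulled
straight from procfs. A minimal sketch in Python (the field offsets are the
standard /proc/<pid>/stat layout, nothing Ceph-specific):

  import os

  def major_faults(pid):
      # majflt is field 12 of /proc/<pid>/stat; after splitting off the
      # parenthesized comm, it sits at index 9
      with open("/proc/%s/stat" % pid) as f:
          fields = f.read().rsplit(")", 1)[1].split()
      return int(fields[9])

  for pid in filter(str.isdigit, os.listdir("/proc")):
      try:
          with open("/proc/%s/comm" % pid) as f:
              if f.read().strip() == "ceph-osd":
                  print("pid %s majflt=%d" % (pid, major_faults(pid)))
      except (IOError, OSError):
          pass  # process exited while we were scanning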
8AM PST as usual! I won't be there as I'm out giving a talk at SC15.
Sage will be there though, so go talk to him about newstore-block!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser:
8AM PST as usual! (ie in 10 minutes, sorry for the late notice)
Discussion topics include newstore block and anything else folks want to
talk about. See you there!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
whatever you did, it appears to work. :)
On 11/11/2015 05:44 PM, Somnath Roy wrote:
Sorry for the spam, having some issues with devl
Hi Stephen,
That's about what I expected to see, other than the write performance
drop with more shards. We clearly still have some room for improvement.
Good job doing the testing!
Mark
On 11/11/2015 02:57 PM, Blinick, Stephen L wrote:
Sorry about the microphone issues in the performance
Hi Robert,
It definitely is exciting I think. Keep up the good work! :)
Mark
On 11/05/2015 09:14 AM, Robert LeBlanc wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
Thanks Gregory,
People are most likely busy and haven't had time to digest this and I
may be expecting more excitement
8AM PST as usual! Discussion topics include newstore changes, outdated
benchmarks (doh!) and anything else folks want to talk about. See you
there!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via
Various folks are going to be away at Openstack Summit and I don't have
anything on the agenda yet, so let's cancel this week. See you next week!
Mark
Hi Paul,
Sorry for the late reply, Sage will be out and I imagine a number of
other folks will be too. I don't have any topics on the agenda, so
let's just cancel and wait until next week.
Thanks,
Mark
On 10/21/2015 11:55 AM, Paul Von-Stamwitz wrote:
Hi Mark,
In light of OpenStack
On 10/21/2015 05:06 AM, Allen Samuels wrote:
I agree that moving newStore to raw block is going to be a significant
development effort. But the current scheme of using a KV store combined with a
normal file system is always going to be problematic (FileStore or NewStore).
This is caused by
allen.samu...@sandisk.com
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Tuesday, October 20, 2015 6:20 AM
To: Sage Weil <sw...@redhat.com>; Chen, Xiaoxi <
On 10/21/2015 06:24 AM, Ric Wheeler wrote:
On 10/21/2015 06:06 AM, Allen Samuels wrote:
I agree that moving newStore to raw block is going to be a significant
development effort. But the current scheme of using a KV store
combined with a normal file system is always going to be problematic
On 10/21/2015 10:51 AM, Ric Wheeler wrote:
On 10/21/2015 10:14 AM, Mark Nelson wrote:
On 10/21/2015 06:24 AM, Ric Wheeler wrote:
On 10/21/2015 06:06 AM, Allen Samuels wrote:
I agree that moving newStore to raw block is going to be a significant
development effort. But the current scheme
On 10/20/2015 07:30 AM, Sage Weil wrote:
On Tue, 20 Oct 2015, Chen, Xiaoxi wrote:
+1, nowadays K-V DBs care more about very small key-value pairs, say
several bytes to a few KB, but in the SSD case we only care about 4KB or
8KB. In this way, NVMKV is a good design and it seems some of the SSD
vendor
performance as async but using much less
memory?
-Xiaoxi
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Mark Nelson
Sent: Tuesday, October 13, 2015 9:03 PM
To: Haomai Wang
Cc: ceph-devel; ceph-us...@lists.ceph.com
Subject: Re: [ceph-users] Initial
...@vger.kernel.org] On Behalf Of Curley, Matthew
Sent: Thursday, October 01, 2015 1:09 PM
To: Mark Nelson; ceph-devel@vger.kernel.org
Subject: RE: Reproducing allocator performance differences
Thanks a bunch for the feedback Mark. I'll push this back to the guy doing the
test runs and get more data
wrote:
resend
On Tue, Oct 13, 2015 at 10:56 AM, Haomai Wang <haomaiw...@gmail.com> wrote:
COOL
Interesting that async messenger will consume more memory than simple; in my
mind I always thought async should use less memory. I will take a look at this
On Tue, Oct 13, 2015 at 12:50 AM, Mark
On 10/12/2015 11:12 PM, Gregory Farnum wrote:
On Mon, Oct 12, 2015 at 9:50 AM, Mark Nelson <mnel...@redhat.com> wrote:
Hi Guys,
Given all of the recent data on how different memory allocator
configurations improve SimpleMessenger performance (and the effect of memory
allocators and trans
Hi Guys,
Given all of the recent data on how different memory allocator
configurations improve SimpleMessenger performance (and the effect of
memory allocators and transparent hugepages on RSS memory usage), I
thought I'd run some tests looking how AsyncMessenger does in
comparison. We spoke
On 10/01/2015 10:32 AM, Curley, Matthew wrote:
We've been trying to reproduce the allocator performance impact on 4K random
reads seen in the Hackathon (and more recent tests). At this point though,
we're not seeing any significant difference between tcmalloc and jemalloc so
we're looking
8AM PST as usual! Discussion topics include Somnath's writepath PR and
more updates on transparent huge pages testing and async messenger
testing. Please feel free to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
Yay! Very much looking forward to checking this out Somnath! I'll let
you know how it goes.
Mark
On 09/29/2015 01:14 PM, Somnath Roy wrote:
Hi Mark,
I have sent out the following pull request for my write path changes.
https://github.com/ceph/ceph/pull/6112
Meanwhile, if you want to give
public.
Mark
On 09/23/2015 11:44 AM, Alexandre DERUMIER wrote:
Hi Mark,
can you post the video records of previous meetings ?
Thanks
Alexandre
- Original Message -
From: "Mark Nelson" <mnel...@redhat.com>
To: "ceph-devel" <ceph-devel@vger.kernel.org>
Sent
Hi Everyone,
A while back Alexandre Derumier posted some test results looking at how
transparent huge pages can reduce memory usage with jemalloc. I went
back and ran a number of new tests on the community performance cluster
to verify his findings and also look at how performance and cpu
8AM PST as usual! Discussion topics include an update on transparent
huge pages testing and I think Ben would like to talk a bit about CBT
PRs. Please feel free to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
FWIW, we've got some 40GbE Intel cards in the community performance
cluster on a Mellanox 40GbE switch that appear (knock on wood) to be
running fine with 3.10.0-229.7.2.el7.x86_64. We did get feedback from
Intel that older drivers might cause problems though.
Here's ifconfig from one of the
On 09/23/2015 01:25 PM, Gregory Farnum wrote:
On Wed, Sep 23, 2015 at 11:19 AM, Sage Weil wrote:
On Wed, 23 Sep 2015, Deneau, Tom wrote:
Hi all --
Looking for guidance with perf counters...
I am trying to see whether the perf counters can tell me anything about the
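A convenient starting point is to dump everything from the daemon's admin
socket and filter from there; a minimal sketch using the stock CLI (which
counters exist varies by version and backend, so "filestore" below is just an
example key):

  import json
  import subprocess

  # `ceph daemon osd.0 perf dump` returns all of osd.0's counters as JSON
  out = subprocess.check_output(["ceph", "daemon", "osd.0", "perf", "dump"])
  counters = json.loads(out.decode())

  # print one subsystem's counters; available keys depend on the build
  print(json.dumps(counters.get("filestore", {}), indent=2))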
8AM PST as usual! No set discussion topics this week. Please feel free
to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser:
https://bluejeans.com/268261044/browser
To join with
On 09/08/2015 02:19 PM, Sage Weil wrote:
On Tue, Sep 8, 2015 at 9:58 PM, Haomai Wang wrote:
Hi Sage,
I noticed your post on the rocksdb page about making rocksdb aware of
short-lived key/value pairs.
I think it would be great if one keyvalue db impl could support
different
Excellent investigation Alexandre! Have you noticed any performance
difference with tp=never?
Mark
On 09/08/2015 06:33 PM, Alexandre DERUMIER wrote:
I have done a small benchmark with tcmalloc and jemalloc, transparent
hugepage=always|never.
For tcmalloc, there is no difference.
but for
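For anyone reproducing this, both the THP mode and a process's actual
hugepage usage are visible in standard sysfs/procfs locations; a small sketch
(the pid is a placeholder for a running ceph-osd):

  with open("/sys/kernel/mm/transparent_hugepage/enabled") as f:
      print("THP mode:", f.read().strip())  # e.g. "[always] madvise never"

  pid = 12345  # placeholder: pid of a ceph-osd process
  # AnonHugePages lines in smaps show how much of each mapping is THP-backed
  total_kb = sum(int(line.split()[1])
                 for line in open("/proc/%d/smaps" % pid)
                 if line.startswith("AnonHugePages:"))
  print("AnonHugePages total: %d kB" % total_kb)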
Also, for what it's worth, I did analysis during recovery (though not
with different transparent hugepage settings). You can see it on slide
#13 here:
http://nhm.ceph.com/mark_nelson_ceph_tech_talk.odp
On 09/08/2015 06:49 PM, Mark Nelson wrote:
Excellent investigation Alexandre! Have you
On 09/03/2015 11:23 AM, Robert LeBlanc wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
Somnath,
I'm having a hard time with your slide deck. Am I understanding
correctly that the default Hammer install was performed on SSDs with
co-located journals, but the optimized code was performed
8AM PST as usual! No set topics for this morning. Please feel free to
add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser:
https://bluejeans.com/268261044/browser
To join with Lync:
8AM PST Still! Discussion topics include kernel RBD client readahead
issues, and memory allocator testing under recovery. Please feel free
to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join
This is a really neat idea Loic! Do you think at some point it will
make sense to just set up the sepia lab as cloud infrastructure that lets
folks run teuthology on it in the same fashion?
Mark
On 08/24/2015 06:51 AM, Loic Dachary wrote:
Hi Sam,
Maybe we can start an experiment to enable
8AM PST. Let's followup with what was discussed at the hackathon and
look over the tcmalloc/jemalloc data from the new community cluster!
Also, discuss potentially moving the meeting 15 minutes later to not
conflict with the daily core standup. Please feel free to add your own
topic as well!
On 08/19/2015 07:36 AM, Dałek, Piotr wrote:
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Wednesday, August 19, 2015 2:17 PM
The RSS memory usage in the report is per OSD I guess (really?). It
can't
...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Tuesday, August 18, 2015 9:46 PM
To: ceph-devel
Subject: Ceph Hackathon: More Memory Allocator Testing
Hi Everyone,
One of the goals at the Ceph Hackathon last week was to examine how to improve
Ceph Small IO
Nope! So in this case it's just server side.
On 08/19/2015 01:33 AM, Stefan Priebe - Profihost AG wrote:
Thanks for sharing. Do those tests use jemalloc for fio too? Otherwise
librbd on the client side is running with tcmalloc again.
Stefan
Am 19.08.2015 um 06:45 schrieb Mark Nelson:
Hi
probably need to run the tests.
Thanks Regards
Somnath
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Tuesday, August 18, 2015 9:46 PM
To: ceph-devel
Subject: Ceph Hackathon: More Memory Allocator Testing
Hi Guys,
About a month ago we were going through the process of trying to figure
out how to replace some of the hardware in the community laboratory that
runs all of the nightly Teuthology tests. Given a limited budget to
replace the existing nodes, we wanted to understand how the current QA
19, 2015 10:30 AM
To: Alexandre DERUMIER
Cc: Mark Nelson; ceph-devel
Subject: RE: Ceph Hackathon: More Memory Allocator Testing
Yes, it should be 1 per OSD...
There is no doubt that TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES is relative to the
number of threads running..
But, I don't know if number
or the other.
My 3.3cts ;-)
On 19/08/2015 21:46, Mark Nelson wrote:
Hi Guys,
About a month ago we were going through the process of trying to figure out how
to replace some of the hardware in the community laboratory that runs all of
the nightly Teuthology tests. Given a limited budget to replace
Hi Everyone,
One of the goals at the Ceph Hackathon last week was to examine how to
improve Ceph Small IO performance. Jian Zhang presented findings
showing a dramatic improvement in small random IO performance when Ceph
is used with jemalloc. His results build upon Sandisk's original
Meeting is canceled this week due to the Ceph Hackathon. See you next
week guys!
Hi Srikanth,
Can you make a ticket on tracker.ceph.com for this? We'd like to not
lose track of it.
Thanks!
Mark
On 08/05/2015 07:01 PM, Srikanth Madugundi wrote:
Hi,
After upgrading to Hammer and moving from apache to civetweb, we
started seeing high PUT latency in the order of 2 sec
8AM PST as usual (that's in 13 minutes folks!) No specific topics for
this week, please feel free to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser:
On 08/05/2015 04:26 PM, Sage Weil wrote:
Today I learned that syncfs(2) does an O(n) search of the superblock's
inode list searching for dirty items. I've always assumed that it was
only traversing dirty inodes (e.g., a list of dirty inodes), but that
appears not to be the case, even on the
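One way to see this is to time syncfs(2) on a filesystem holding a large
number of cached but clean inodes; a minimal harness via ctypes (the mount
point is a placeholder, and syncfs needs glibc >= 2.14):

  import ctypes
  import os
  import time

  libc = ctypes.CDLL("libc.so.6", use_errno=True)

  fd = os.open("/mnt/osd0", os.O_RDONLY)  # placeholder mount point
  t0 = time.time()
  if libc.syncfs(fd) != 0:
      raise OSError(ctypes.get_errno(), "syncfs failed")
  # with nothing dirty this should be near zero; an O(n) scan shows up as
  # time that grows with the number of cached inodes
  print("syncfs took %.6fs" % (time.time() - t0))
  os.close(fd)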
Just here to provide moral support. Go CMake go! :)
Mark
On 07/30/2015 02:01 PM, Ali Maredia wrote:
After discussing with several other Ceph developers and Sage, I wanted
to start a discussion about making CMake the primary build system for Ceph.
CMake works just fine as it is (make -j4 on
My first reading of the topic was Citerias (ie a project named
Citerias) becoming a Ceph project. It wasn't until I re-read it more
closely that I realized it was criteria. :)
On 07/28/2015 12:51 PM, Joao Eduardo Luis wrote:
On 07/28/2015 07:59 AM, Loic Dachary wrote:
The title sound even
Hi Konstantin,
It might be best to move this to the cbt mailing list for discussion so
that we don't end up filling up ceph-devel. Would you mind re-posting
there?
Mark
On 07/24/2015 07:21 AM, Konstantin Danilov wrote:
Hi all,
This is a BP/summary of the changes to cbt that we discussed at the previous
On 07/23/2015 06:12 PM, Travis Rhoden wrote:
HI Everyone,
I’m working on ways to improve Ceph installation with ceph-deploy, and a common
hurdle we have hit involves dependency issues between ceph.com hosted RPM
repos, and packages within EPEL. For a while we were able to manage this with
Haha, yes. I still use yum for everything. :D
Mark
On 07/24/2015 08:18 PM, Shinobu Kinjo wrote:
If there's really no better way around this, I think we need to
communicate to the Yum/DNF team(s) what the problem is and that we
need to come up with some better way to control the
On 07/23/2015 07:37 AM, John Spray wrote:
On 23/07/15 12:56, Mark Nelson wrote:
I had similar thoughts on the benchmarking side, which is why I
started writing cbt a couple years ago. I needed the ability to
quickly spin up clusters and run benchmarks on arbitrary sets of
hardware
Hi John,
I had similar thoughts on the benchmarking side, which is why I started
writing cbt a couple years ago. I needed the ability to quickly spin up
clusters and run benchmarks on arbitrary sets of hardware. The outcome
isn't perfect, but it's been extremely useful for running
8AM PST as usual! Topics today include a new ceph_test_rados benchmark
being added to CBT. Please feel free to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser:
8AM PST as usual! Topics today include proposed additions to cbt by
Mirantis! Please feel free to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser:
/2015 03:11 AM, Konstantin Danilov wrote:
Mark,
is the Wednesday performance meeting a good place for discussion, or
do we need a separate one?
On Mon, Jul 13, 2015 at 6:16 PM, Mark Nelson mnel...@redhat.com wrote:
Hi Konstantin,
I'm definitely interested in looking at your tools and seeing if we
FWIW,
It would be very interesting to see the output of:
https://github.com/ceph/cbt/blob/master/tools/readpgdump.py
if you see something that looks anomalous. I'd like to make sure that
I'm detecting issues like this.
Mark
On 07/09/2015 06:03 PM, Samuel Just wrote:
I've seen some odd
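For context, a crude approximation of this kind of distribution check can be
built straight from the pg dump; a sketch (not readpgdump.py itself; it
assumes the usual pg_stats/up layout of the JSON output, with an arbitrary
25% threshold):

  import json
  import subprocess
  from collections import Counter

  dump = json.loads(subprocess.check_output(
      ["ceph", "pg", "dump", "--format", "json"]).decode())

  counts = Counter()
  for pg in dump["pg_stats"]:
      for osd in pg["up"]:  # OSDs in each PG's up set
          counts[osd] += 1

  mean = sum(counts.values()) / float(len(counts))
  for osd, n in sorted(counts.items()):
      if abs(n - mean) > 0.25 * mean:  # flag OSDs far from the mean PG count
          print("osd.%d: %d PGs (mean %.1f)" % (osd, n, mean))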
Hi Konstantin,
I'm definitely interested in looking at your tools and seeing if we can
merge them into cbt! One of the things we lack right now in cbt is any
kind of real openstack integration. Right now CBT basically just
assumes you've already launched VMs and specified them as clients in
with 30 osds).
It seems that our distribution is slightly better than expected in
your code.
Thanks.
On Mon, Jul 13, 2015 at 6:20 PM, Mark Nelson mnel...@redhat.com wrote:
FWIW,
It would be very interesting to see the output of:
https://github.com
Absolutely!
On 07/10/2015 03:00 AM, Konstantin Danilov wrote:
Can I propose a topic for the meeting by adding it to the etherpad?
On Wed, Jul 8, 2015 at 5:30 PM, Mark Nelson mnel...@redhat.com wrote:
8AM PST as usual! Let's discuss the topics that we didn't cover
8AM PST as usual! Let's discuss the topics that we didn't cover last
week due to CDS: New cache tiering promote probability investigation and
wbthrottle auto-tuning. Please feel free to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the
8AM PST as usual! Also join us later today and tomorrow for the Ceph
Developer Summit where we will be discussing many different performance
related blueprints!
Please feel free to add discussion topics to the etherpad! Current
topics include new cache tiering promote probability
Hi Guys,
Looks like I got my timezone conversion off and we are overlapping with
CDS today, so let's cancel and push this off to next week. Enjoy CDS
everyone!
Mark
On 07/01/2015 09:20 AM, Mark Nelson wrote:
8AM PST as usual! Also join us later today and tomorrow for the Ceph
Developer
It would be fantastic if folks decided to work on this and got it pushed
upstream into fio proper. :D
Mark
On 06/30/2015 04:19 PM, James (Fei) Liu-SSI wrote:
Hi Casey,
Thanks a lot.
Regards,
James
-Original Message-
From: Casey Bodley [mailto:cbod...@gmail.com]
Sent:
this
Fujitsu presenting on bufferlist tuning
- about a 2X savings in overall CPU time with the new code.
- Original Message -
From: Robert LeBlanc rob...@leblancnet.us
To: Mark Nelson mnel...@redhat.com
Cc: ceph-devel ceph-devel@vger.kernel.org
Sent: Thursday, June 25, 2015 17:58:10
Subject: Re: 06/24
8AM PST as usual!
Please feel free to add discussion topics to the etherpad!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser:
https://bluejeans.com/268261044/browser
To join with Lync:
8AM PST as usual! Discussion topics for this week include:
- Alexandre's tcmalloc / memory pressure QEMU performance tests.
- Continuation of SimpleMessenger fastpath discussion?
Please feel free to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To
On 06/16/2015 03:48 PM, GuangYang wrote:
Thanks Sage for the quick response.
It is on Firefly v0.80.4.
While trying to put with *rados* directly, the xattrs can be inlined. The
problem comes to light when using radosgw, since we have a bunch of metadata to
keep via xattrs, including:
As an alternative, would dm-cache or bcache on the client VM be an option?
Mark
On 06/16/2015 06:43 AM, Mike wrote:
16.06.2015 12:54, Sebastien Han пишет:
This is not possible at the moment, but long ago a BP was registered to allow
multiple backend functionality. (can’t find it anymore)
8AM PST as usual! Discussion topics for this week include:
- Throttling cache tier promotion.
- SimpleMessenger fastpath.
Please feel free to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join
Hi All,
A couple of folks have asked for a recording of the performance meeting
this week as there was an excellent discussion today regarding
simplemessenger optimization with Sage.
Here's a link to the recording: https://bluejeans.com/s/8knV/
You can access this recording and all previous
Hi All,
In the past we've hit some performance issues with RBD cache that we've
fixed, but we've never really tried pushing a single VM beyond 40+K read
IOPS in testing (or at least I never have). I suspect there's a couple
of possibilities as to why it might be slower, but perhaps joshd can
8AM PST as usual! Discussion topics include: cache tiering update.
Please feel free to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser:
https://bluejeans.com/268261044/browser
To
On 05/20/2015 11:13 AM, Andreas Bluemle wrote:
Hi,
as discussed in today's performance meeting: attached is the
spreadsheet which I had shown during the meeting.
@Sam: you are right concerning the make_blist part: this
probably is expensive due to message signing being turned on.
For the osd
8AM PST as usual! Discussion topics include: Xiaoxi's newstore latency
tests, Andrea's microbenchmark results, Cache tiering. Please feel free
to add your own!
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To
On 05/27/2015 12:40 PM, Robert LeBlanc wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
With all the talk of tcmalloc and jemalloc, I decided to do some
testing of the different memory allocation technologies between KVM
and Ceph. These tests were done on a pre-production system so I've
On 05/26/2015 06:57 AM, John Spray wrote:
On 26/05/2015 07:55, Yan, Zheng wrote:
the reason for slow file creations is that bonnie++ calls fsync(2)
after each creat(2). fsync() waits for safe replies to the create
requests. The MDS sends the safe reply when the log event for the request gets
journaled
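For reference, the pattern being described is essentially this toy
reproduction of bonnie++'s behavior (run from a CephFS mount):

  import os

  # each create is followed by an fsync, so every iteration blocks until
  # the MDS has journaled the create and sent its "safe" reply
  for i in range(100):
      fd = os.open("file%d" % i, os.O_CREAT | os.O_WRONLY, 0o644)
      os.fsync(fd)
      os.close(fd)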
On 05/24/2015 06:21 AM, Casier David wrote:
Hello everybody,
I have some suggestions to improve Ceph performance in the case of
using the Rados Block Device.
On FileStore:
- Remove all metadata from the HDD and use omap on SSD. This reduces IOPS
and increases throughput.
This may matter more
On 05/11/2015 08:41 PM, kernel neophyte wrote:
Hi,
I am trying to use newstore + rocksdb + jemalloc. Currently rocksdb in
ceph compiles only with tcmalloc and not with jemalloc.
Is there any reason for this limitation?
Heya, I'm guessing it's not by design, simply no one's tried doing it
On 05/11/2015 12:03 PM, Sage Weil wrote:
On Mon, 11 May 2015, Deneau, Tom wrote:
I have noticed the following while running rados bench seq read tests with a
40M object size
single rados bench, 4 concurrent ops, bandwidth = 190 MB/s
4 copies of rados bench, 1
FWIW, an easily buildable libcrush would be fantastic for simulation
purposes (and things like avalanche analysis!) as well.
Mark
On 05/08/2015 12:40 PM, James (Fei) Liu-SSI wrote:
Hi Yuan,
Very interesting. Would it be possible to know why the application needs to
access the crush map directly
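To illustrate the simulation use case: a straw2-style choice is just a
deterministic hash per (object, device) pair, so placement questions can be
prototyped offline even before a libcrush exists. A toy sketch, emphatically
not the real CRUSH algorithm:

  import hashlib

  def choose(obj, osds, replicas=3):
      # draw a deterministic "straw" for each (object, osd) pair and keep
      # the largest ones -- the same idea straw2 buckets use
      def straw(osd):
          h = hashlib.sha256(("%s:%d" % (obj, osd)).encode()).digest()
          return int.from_bytes(h[:8], "big")
      return sorted(osds, key=straw, reverse=True)[:replicas]

  print(choose("rbd_data.1234", range(10)))  # stable across runs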
there?
Xiaoxi.
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Wednesday, April 29, 2015 7:25 AM
To: ceph-devel
Subject: newstore performance update
Hi Guys,
Sage has been furiously working away
Hi Alex,
Thanks! So far we've gotten a report that asyncmessenger was a little
slower than simple messenger, but not this bad! I imagine Greg will
have lots of questions. :)
Mark
On 04/28/2015 03:36 AM, Alexandre DERUMIER wrote:
Hi,
here's a small 4k randread bench of simple messenger vs
Hi Guys,
Sage has been furiously working away at fixing bugs in newstore and
improving performance. Specifically we've been focused on write
performance as newstore was lagging filestore by quite a bit
previously. A lot of work has gone into implementing libaio behind the
scenes and as a
Any baseline numbers to put things into perspective?
Like, what is it on XFS or on librados?
JV
On Tue, Apr 28, 2015 at 4:25 PM, Mark Nelson mnel...@redhat.com wrote:
Hi Guys,
Sage has been furiously working away at fixing bugs in newstore and
improving performance. Specifically we've been
On 04/28/2015 06:25 PM, Mark Nelson wrote:
Hi Guys,
Sage has been furiously working away at fixing bugs in newstore and
improving performance. Specifically we've been focused on write
performance as newstore was lagging filestore by quite a bit
previously. A lot of work has gone
On 04/27/2015 03:24 PM, Milosz Tanski wrote:
On 4/27/15 8:06 AM, Alexandre DERUMIER wrote:
Hi,
I'm hitting the tcmalloc issue even with the patch applied.
It mainly occurs when I try to bench fio with a lot of jobs (20 - 40 jobs).
Do I need to tune something in the osd environment variables?
I
better than jemalloc.
- Original Message -
From: aderumier aderum...@odiso.com
To: Mark Nelson mnel...@redhat.com
Cc: ceph-users ceph-us...@lists.ceph.com, ceph-devel ceph-devel@vger.kernel.org,
Milosz Tanski mil...@adfin.com
Sent: Monday, April 27, 2015 07:01:21
Subject: Re: [ceph-users] strange
Looks like the default is 16MB:
http://gperftools.googlecode.com/svn/trunk/doc/tcmalloc.html
On 04/27/2015 09:53 AM, Milosz Tanski wrote:
On 4/27/15 9:21 AM, Alexandre DERUMIER wrote:
Seems that starting the osd with:
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=128M /usr/bin/ceph-osd
fixes it.
I
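For anyone scripting this rather than editing init scripts, the workaround
boils down to exporting the variable before the OSD starts; a hypothetical
launcher:

  import os
  import subprocess

  env = dict(os.environ)
  env["TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES"] = str(128 * 1024 * 1024)

  # -i selects the OSD id, -f keeps the daemon in the foreground
  subprocess.check_call(["/usr/bin/ceph-osd", "-i", "0", "-f"], env=env)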
/results I'm missing?
Mark
- Original Message -
From: Mark Nelson mnel...@redhat.com
To: aderumier aderum...@odiso.com
Cc: ceph-users ceph-us...@lists.ceph.com, ceph-devel ceph-devel@vger.kernel.org,
Milosz Tanski mil...@adfin.com
Sent: Monday, April 27, 2015 16:54:34
Subject: Re: [ceph-users
: aderumier aderum...@odiso.com, Mark Nelson mnel...@redhat.com,
ceph-users ceph-us...@lists.ceph.com, ceph-devel ceph-devel@vger.kernel.org,
Milosz Tanski mil...@adfin.com
to help in my case (maybe not from 100-300k, but
250k-300k).
I need to run longer tests to confirm that.
- Original Message -
From: Srinivasula Maram srinivasula.ma...@sandisk.com
To: Mark Nelson mnel...@redhat.com, aderumier aderum...@odiso.com, Milosz
Tanski mil...@adfin.com
Cc: ceph-devel ceph
8AM PST as usual! Discussion topics include: Newstore updates, NUMA
affinity performance issues, and (hopefully) asyncmessenger performance.
Here's the links:
Etherpad URL:
http://pad.ceph.com/p/performance_weekly
To join the Meeting:
https://bluejeans.com/268261044
To join via Browser: