>>I just want to know if this is sufficient to wipe a RBD image?
AFAIK, ceph writes zeroes in the rados objects when discard is used.
There is an option to skip the zero writes if needed:
OPTION(rbd_skip_partial_discard, OPT_BOOL, false) // when trying to discard a
range inside an object, set to true to skip the zero write
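As an illustration (not from the original thread), one way to exercise that discard path end to end is to issue a full-device discard against the RBD-backed disk, for example with util-linux blkdiscard; the device and image names below are placeholders:

# inside the guest, assuming the RBD-backed disk shows up as /dev/vdb
blkdiscard /dev/vdb
# or on the host, against a kernel-mapped image
rbd map mypool/myimage
blkdiscard /dev/rbd0

Whether that counts as "wiping" then comes back to the rbd_skip_partial_discard behaviour described above.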
Thanks for sharing !
Results are impressive, great to see that writes are finally improving.
I just wonder how much you could get with rbd_cache=false.
Also, with such a high load,
maybe using jemalloc for fio could help too (I have seen around a 20% improvement
on the fio client)
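In case it is useful, a minimal sketch of what I mean by trying jemalloc on the fio client (the library path is the Debian/Ubuntu one and may differ on your system):

apt-get install libjemalloc1
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 fio rbd-randread.fio

Here rbd-randread.fio is just a placeholder for whatever job file you are already using.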
Sorry, my fault, I had an old --without-lttng flag in my build packages.
- Original Message -
From: "aderumier"
To: "ceph-devel"
Sent: Tuesday, November 10, 2015 15:06:19
Subject: infernalis build package on debian jessie : dh_install: ceph missing
Hi,
I'm trying to build infernalis packages on debian jessie,
and I get this error during the package build:
dh_install: ceph missing files (usr/lib/libos_tp.so.*), aborting
I think it's related to the lttng change from here
https://github.com/ceph/ceph/pull/6135
Maybe it is missing an option in
the Ceph tracker ticket I opened
[1], that would be very helpful.
[1] http://tracker.ceph.com/issues/13726
Thanks,
Jason
- Original Message -
> From: "Alexandre DERUMIER" <aderum...@odiso.com>
> To: "ceph-devel" <ceph-devel@vger.kernel.org>
>
h-devel"
<ceph-devel@vger.kernel.org>, "qemu-devel" <qemu-de...@nongnu.org>
Sent: Monday, November 9, 2015 08:22:34
Subject: Re: [Qemu-devel] qemu : rbd block driver internal snapshot and vm_stop
is hanging forever
On 11/09/2015 10:19 AM, Denis V. Lunev wrote:
Also,
this occurs only with rbd_cache=false or qemu drive cache=none.
If I use rbd_cache=true or qemu drive cache=writeback, I don't have this bug.
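For reference, a sketch of the two qemu drive configurations being compared here (pool/image and id are placeholders; qemu maps cache=writeback to rbd_cache=true and cache=none to rbd_cache=false unless overridden):

# hangs on snapshot_blkdev_internal / vm_stop
-drive format=raw,if=virtio,cache=none,file=rbd:rbd/myimage:id=admin
# no hang
-drive format=raw,if=virtio,cache=writeback,file=rbd:rbd/myimage:id=admin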
- Original Message -
From: "aderumier"
To: "ceph-devel" , "qemu-devel"
Hi,
the debian repository seems to be missing the librbd1 package for debian jessie
http://download.ceph.com/debian-infernalis/pool/main/c/ceph/
(ubuntu trusty librbd1 is present)
- Original Message -
From: "Sage Weil"
To: ceph-annou...@ceph.com, "ceph-devel"
Something is really wrong,
because the guest is also freezing with a simple snapshot, with cache=none /
rbd_cache=false
qemu monitor : snapshot_blkdev_internal drive-virtio0 snap1
or
rbd command : rbd --image myrbdvolume snap create --snap snap1
Then the guest can't read/write to disk
can enable the support and adjust their
SELinux / AppArmor rules to accommodate.
[1] https://github.com/ceph/ceph/pull/6135
--
Jason Dillaman
- Original Message -
> From: "Paul HEWLETT (Paul)" <paul.hewl...@alcatel-lucent.com>
> To: "Alexandre DERUMIER&q
Hi,
it seems that since this commit
https://github.com/ceph/ceph/pull/4261/files
lttng is enabled by default.
But this gives an error with qemu when apparmor/selinux is enabled.
That's why ubuntu and redhat now disable it in their own packages.
https://bugzilla.redhat.com/show_bug.cgi?id=1223319
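For anyone rebuilding their own packages in the meantime, a sketch of disabling it at build time (using the --without-lttng flag mentioned earlier in this thread, and assuming the autotools build of that era):

./autogen.sh
./configure --without-lttng
make -j$(nproc)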
I was able to reproduce it with this config a lot of times: 4k randread,
intel s3610 ssd, testing with a small rbd which can be kept in buffer memory.
At around 150-200k iops per osd, I was able to trigger it easily.
auth_cluster_required = none
auth_service_required = none
>>also - more clients would be better (or worse, depending on how you look at
>>it).
It's quite possible; if I remember correctly, I could trigger it more easily with fio with a
lot of numjobs (30-40)
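For context, a sketch of the kind of fio job that reproduces it (pool/image names and exact values are placeholders, not the actual job file used):

[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=smalltest
rw=randread
bs=4k
iodepth=32
numjobs=40
group_reporting

[rbd-randread]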
- Original Message -
From: "Dałek, Piotr"
To: "Curley, Matthew"
ocator performance differences
> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Alexandre DERUMIER
> Sent: Friday, October 02, 2015 8:55 AM
>
> >>also - more clients would be better (or wo
the etherpad:
http://pad.ceph.com/p/performance_weekly
Again, sorry for the delay on these. I can't find any way to make
bluejeans default to making the meetings public.
Mark
On 09/23/2015 11:44 AM, Alexandre DERUMIER wrote:
> Hi Mark,
>
> can you post the video records of previ
Hi Mark,
can you post the video recordings of previous meetings ?
Thanks
Alexandre
- Original Message -
From: "Mark Nelson"
To: "ceph-devel"
Sent: Wednesday, September 23, 2015 15:51:21
Objet: 09/23/2015 Weekly Ceph Performance Meeting IS ON!
Hi,
Paolo Bonzini from the qemu team has finally applied my qemu jemalloc patch
to his for-upstream branch
https://github.com/bonzini/qemu/releases/tag/for-upstream
https://github.com/bonzini/qemu/tree/for-upstream
So it'll be in qemu master soon and ready for qemu 2.5.
I have written some small
Hi,
I have found an interesting article about jemalloc and transparent hugepages
https://www.digitalocean.com/company/blog/transparent-huge-pages-and-alternative-memory-allocators/
It could be great to see if disabling transparent hugepages helps to get lower
jemalloc memory usage.
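For anyone who wants to try it, a minimal sketch of toggling THP at runtime (the sysfs path below is the usual one, but it can vary slightly between kernels/distros):

cat /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/enabled
# revert with: echo always > /sys/kernel/mm/transparent_hugepage/enabled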
Regards,
I have done a small benchmark with tcmalloc and jemalloc, transparent
hugepage=always|never.
For tcmalloc, there is no difference,
but for jemalloc the difference is huge (around 25% lower with tp=never).
jemalloc 4.6.0+tp=never vs tcmalloc uses 10% more RSS memory
jemalloc 4.0+tp=never almost
nsparent hugepage
Excellent investigation Alexandre! Have you noticed any performance
difference with tp=never?
Mark
On 09/08/2015 06:33 PM, Alexandre DERUMIER wrote:
> I have done small benchmark with tcmalloc and jemalloc, transparent
> hugepage=always|never.
>
> for tcmal
ath@sandisk.com>
Sent: Wednesday, September 9, 2015 04:07:59
Subject: Re: [ceph-users] jemalloc and transparent hugepage
On Wed, 9 Sep 2015, Alexandre DERUMIER wrote:
> >>Have you noticed any performance difference with tp=never?
>
> No difference.
>
> I think
Mark Nelson" <mnel...@redhat.com>, "ceph-devel"
<ceph-devel@vger.kernel.org>, "ceph-users" <ceph-us...@lists.ceph.com>,
"Somnath Roy" <somnath@sandisk.com>
Sent: Wednesday, September 9, 2015 04:07:59
Subject: Re: [ceph-users] jemall
mer, Giant) it means that there will be a
Jessie package for them for new versions only.
Let me know if you have any questions.
Thanks!
Alfredo
On Mon, Aug 31, 2015 at 1:27 AM, Alexandre DERUMIER < aderum...@odiso.com >
wrote:
Hi,
any news for to add an official debian
Hi,
any news about adding an official debian jessie repository on ceph.com?
The gitbuilder repository has been available for some weeks:
http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/ref/
Is there something blocking the release of the packages ?
Regards,
Alexandre
I am not sure, in your case, whether the benefit you are seeing is because qemu is
more efficient with tcmalloc/jemalloc or because the entire client stack is.
-Original Message-
From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
Sent: Saturday, August 22, 2015 9:57 AM
To: Somnath Roy
Cc: Sage
(but did link only OSDs/mon/RGW) ?
Thanks Regards
Somnath
-Original Message-
From: Sage Weil [mailto:s...@newdream.net]
Sent: Saturday, August 22, 2015 6:56 AM
To: Milosz Tanski
Cc: Shishir Gowda; Somnath Roy; Stefan Priebe; Alexandre DERUMIER; Mark Nelson;
ceph-devel
Subject: Re
Hackathon: More Memory Allocator Testing
How about making a sheet for testing patterns?
Shinobu
- Original Message -
From: Stephen L Blinick stephen.l.blin...@intel.com
To: Alexandre DERUMIER aderum...@odiso.com, Somnath Roy
somnath@sandisk.com
Cc: Mark Nelson mnel
Thanks Marc,
Results match exactly what I have seen with tcmalloc 2.1 vs 2.4 vs
jemalloc,
and indeed tcmalloc, even with a bigger cache, seems to decrease over time.
What is funny is that I see exactly the same behaviour on the client librbd side, with
qemu and multiple iothreads.
Switching both
I was listening to today's meeting,
and it seems that the blocker to having jemalloc as the default
is that it uses more memory per osd (around 300MB?),
and some people could have boxes with 60 disks.
I just wonder if the memory increase is related to the
osd_op_num_shards/osd_op_threads value ?
Seem
usage (?).
Thanks Regards
Somnath
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Alexandre DERUMIER
Sent: Wednesday, August 19, 2015 9:06 AM
To: Mark Nelson
Cc: ceph-devel
Subject: Re: Ceph Hackathon: More Memory
OSD...
There is no doubt that TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES is relative to the
number of threads running.
But I don't know if the number of threads is a factor for jemalloc.
Thanks Regards
Somnath
-Original Message-
From: Alexandre DERUMIER [mailto:aderum...@odiso.com
: Stephen L Blinick stephen.l.blin...@intel.com
To: Alexandre DERUMIER aderum...@odiso.com, Somnath Roy
somnath@sandisk.com
Cc: Mark Nelson mnel...@redhat.com, ceph-devel
ceph-devel@vger.kernel.org
Sent: Thursday, August 20, 2015 10:09:36 AM
Subject: RE: Ceph Hackathon: More Memory
Hi Loic,
I wonder if we could backport
https://github.com/ceph/ceph/pull/3979
to hammer.
It's mainly to add support for recent kernels' cephx_require_signatures and
tcp_nodelay mount options.
For example, since kernel 3.19 cephx_require_signatures is the default,
and there is no way to mount
Hi,
since some days, the sepia gitbuilder for debian jessie seems to be down.
http://gitbuilder.sepia.ceph.com/gitbuilder-ceph-deb-jessie-amd64-basic/
I don't see any new update since 6 August in the repository
http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/ref/
(BTW, is there any roadmap to
As I still haven't heard or seen about any upstream distros for Debian
Jessie (see also [1]),
Gitbuilder is already done for jessie
http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/
@Sage : I don't know if something is blocking releasing the packages officially ?
- Original Message -
Hi,
the debian jessie gitbuilder has been ok for 2 weeks now,
http://gitbuilder.sepia.ceph.com/gitbuilder-ceph-deb-jessie-amd64-basic
Is it possible to push packages to the repositories ?
http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/
?
Alexandre
Hi,
This was fixed a little while back when qemu was converted to using
coroutines and more than one thread, but it would make usage in other
applications with simpler threading models easier
AFAIK, currently qemu can use 1 thread per disk, with the qemu iothread feature.
But this works with
So, this could be another step ahead of removing tcmalloc as Ceph's default
allocator and moving to jemalloc.
+1 for this.
I'm beginning to be worried by all these tcmalloc problems.
- Original Message -
From: Somnath Roy somnath@sandisk.com
To: Gregory Farnum g...@gregs42.com
Cc: ceph-devel
Hi,
I like the idea of having a compress option implemented in e.g. librbd
and rgw, both of these cases involve scale-out clients and so concerns
of performance overhead can be largely brushed aside (e.g., most
OpenStack hypervisors seem to have plenty of free CPU).
Keep in mind that qemu uses
as I can tell. If I get behind just give me a poke (or
public shaming works!) and I'll make sure they get posted. ;)
Mark
On 06/25/2015 11:13 AM, Alexandre DERUMIER wrote:
I would like to have them too ;)
I have missed the yesterday session,
I would like to have infos about
I would like to have them too ;)
I missed yesterday's session;
I would like to have info about this:
Fujitsu presenting on bufferlist tuning
- about 2X savings in overall CPU Time with new code.
- Original Message -
From: Robert LeBlanc rob...@leblancnet.us
To: Mark Nelson
will be limited to 30-40K.
On Tue, Jun 16, 2015 at 10:08 PM, Alexandre DERUMIER
aderum...@odiso.com wrote:
Hi,
some news about qemu with tcmalloc vs jemalloc.
I'm testing with multiple disks (with iothreads) in 1 qemu guest.
And while tcmalloc is a little faster than jemalloc,
I have hit a lot
at least in the configuration file?
Or does it entail a change in the KVM source code?
Thanks.
2015-06-22 11:54 GMT+03:00 Alexandre DERUMIER aderum...@odiso.com :
It is already possible to do this in proxmox 3.4 (with the latest qemu-kvm 2.2.x
updates). But it is necessary to set it in the conf
drives the ambiguous behavior of productivity.
2015-06-22 10:12 GMT+03:00 Stefan Priebe - Profihost AG s.pri...@profihost.ag
:
On 22.06.2015 at 09:08, Alexandre DERUMIER aderum...@odiso.com wrote:
Just an update, there seems to be no proper way to pass iothread
parameter from
Hi,
I have sent a patch to the qemu-devel mailing list to add support for jemalloc linking:
http://lists.nongnu.org/archive/html/qemu-devel/2015-06/msg05265.html
Help is welcome to get it upstream !
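For anyone who wants to try it before it is merged, a sketch of how it is meant to be used (the configure switch name is the one added by the patch; otherwise LD_PRELOAD is a rough stopgap):

./configure --enable-jemalloc --target-list=x86_64-softmmu
make -j$(nproc)
# without the patch, preloading gives an approximation:
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 qemu-system-x86_64 ...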
40k
Thanks, posted the question in openstack list. Hopefully will get some
expert opinion.
On Fri, Jun 12, 2015 at 11:33 AM, Alexandre DERUMIER
aderum...@odiso.com wrote:
Hi,
here is a libvirt xml sample from the libvirt src
(you need to define the iothreads number, then assign them in the disks
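A cut-down sketch of what that looks like in the domain XML (image name, counts and auth details are placeholders):

<domain type='kvm'>
  ...
  <iothreads>2</iothreads>
  <devices>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' iothread='1'/>
      <source protocol='rbd' name='rbd/vm1-disk1'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    ...
  </devices>
</domain>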
, Alexandre DERUMIER wrote:
Hi Loic,
I'm always playing with cloudinit currently,
and I never can get working resolv_conf module too (with configdrive
datasource)
Finaly, I manage it with this configdrive:
/latest/meta_data.json
{
uuid: c5240fed-76a8-48d9-b417-45b46599d999
in domain.xml.
Could you suggest a way to set the same?
-Pushpesh
On Wed, Jun 10, 2015 at 12:59 PM, Alexandre DERUMIER
aderum...@odiso.com wrote:
I need to try out the performance on qemu soon and may come back to you if I
need some qemu setting trick :-)
Sure no problem.
(BTW, I can reach
Looking at resolvconf cloud-init src:
https://github.com/number5/cloud-init/blob/74e61ab27addbfcceac4eba254f739ef9964b0ed/cloudinit/config/cc_resolv_conf.py
As Debian/Ubuntu will, by default, utilize
#resolvconf, and similarly RedHat will use sysconfig, this module is
#likely to be of
Hi Loic,
I'm still playing with cloud-init currently,
and I never could get the resolv_conf module working either (with the configdrive datasource).
Finally, I managed it with this configdrive:
/latest/meta_data.json
{
  "uuid": "c5240fed-76a8-48d9-b417-45b46599d999",
  "network_config": { "content_path":
Thanks for sharing the data.
I need to try out the performance on qemu soon and may come back to you if I
need some qemu setting trick :-)
Regards
Somnath
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Alexandre DERUMIER
Sent: Tuesday, June
to performance.
It would be nice to root cause the problem if that is the case.
On Tue, Jun 9, 2015 at 11:21 AM, Alexandre DERUMIER aderum...@odiso.com
wrote:
Hi,
I'm doing a benchmark (ceph master branch), with randread 4k qdepth=32,
and rbd_cache=true seems to limit the iops to around 40k
no cache
in as he's more familiar with what that code looks like.
Frankly, I'm a little impressed that without RBD cache we can hit 80K
IOPS from 1 VM! How fast are the SSDs in those 3 OSDs?
Mark
On 06/09/2015 03:36 AM, Alexandre DERUMIER wrote:
It seems that the limit mainly appears at high queue depth
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Tue, Jun 9, 2015 at 6:02 AM, Alexandre DERUMIER aderum...@odiso.com wrote:
Frankly, I'm a little impressed that without RBD
At high queue-depths and high IOPS, I would suspect that the bottleneck is
the single, coarse-grained mutex protecting the cache data structures. It's
been a back burner item to refactor the current cache mutex into
finer-grained locks.
Jason
Thanks for the explanation, Jason.
Anyway, inside
Alexandre DERUMIER aderum...@odiso.com :
Hi,
I have tested qemu with the latest tcmalloc 2.4, and the improvement is huge with
iothread: 50k iops (+45%) !
qemu : no iothread : glibc : iops=33395
qemu : no-iothread : tcmalloc (2.2.1) : iops=34516 (+3%)
qemu : no-iothread : jemalloc : iops=42226
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Tue, Jun 9, 2015 at 6:02 AM, Alexandre DERUMIER aderum
Hi,
I'm doing a benchmark (ceph master branch), with randread 4k qdepth=32,
and rbd_cache=true seems to limit the iops to around 40k
no cache
1 client - rbd_cache=false - 1osd : 38300 iops
1 client - rbd_cache=false - 2osd : 69073 iops
1 client - rbd_cache=false - 3osd : 78292 iops
cache
Hi,
I'm currently comparing results of rados bench vs fio-librbd,
and I notice that with rados bench (4k randread, 1 osd):
rados bench -p pooltest 300 rand -t 128
Bandwidth (MB/sec): 284
Average IOPS: 72750
Increasing the threads number doesn't improve results after 128 threads
but
think the performance improvement will be picked up soon (May?), the
problem is clear, I think.
On Wed, Apr 29, 2015 at 2:10 PM, Alexandre DERUMIER aderum...@odiso.com
wrote:
Thanks! So far we've gotten a report that async messenger was a little
slower than simple messenger, but not this bad! I
, but not this bad! I imagine Greg will
have lots of questions. :)
Mark
On 04/28/2015 03:36 AM, Alexandre DERUMIER wrote:
Hi,
here a small bench 4k randread of simple messenger vs async messenger
This is with 2 osd, and 15 fio jobs on a single rbd volume
simple messager : 345kiops
Hi,
Debian Jessie has been released this weekend,
any plans to add jessie repositories soon ?
Regards,
Alexandre
Hi,
I am trying to measure 4k RW performance on Newstore, and I am not
anywhere close to the numbers you are getting!
Could you share your ceph.conf for these tests ?
I'll also try to help with testing newstore on my ssd cluster.
What is used for the benchmark ? rados bench ?
Any command line to
Hi,
here is a small 4k randread bench of simple messenger vs async messenger.
This is with 2 osds, and 15 fio jobs on a single rbd volume.
simple messenger : 345kiops
async messenger : 139kiops
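For reference, switching between the two for this kind of test is just a ceph.conf setting (a sketch, assuming the ms_type option available in master at that time):

[global]
ms_type = async        # or "simple", the default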
Regards,
Alexandre
simple messenger
---
^Cbs: 15 (f=15): [r(15)] [0.0% done]
)
Thanks in advance,
JV
On Sun, Apr 26, 2015 at 10:46 PM, Alexandre DERUMIER
aderum...@odiso.com wrote:
I'll retest tcmalloc, because I was pretty sure I had patched it correctly.
OK, I really think I have patched tcmalloc wrongly.
I have repatched it, reinstalled it, and now I'm getting
Hi,
I'm hitting the tcmalloc bug even with the patch applied.
It mainly occurs when I try to bench fio with a lot of jobs (20 - 40 jobs).
Do I need to tune something in the osd environment variables ?
I double-checked it with
#g++ -o gperftest gperftest.c -ltcmalloc
# export
It seems that starting the osd with:
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=128M /usr/bin/ceph-osd
fixes it.
I don't know if it's the right way ?
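In case it helps others, the full sketch of what I mean (the value written out in bytes, and the osd id is just an example):

export TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728   # 128 MB
/usr/bin/ceph-osd -i 0 --cluster ceph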
- Original Message -
From: aderumier aderum...@odiso.com
To: ceph-devel ceph-devel@vger.kernel.org, Somnath Roy
somnath@sandisk.com
Sent: Monday, 27
conclusions about
jemalloc vs tcmalloc until we can figure out what went wrong.
Mark
On 04/27/2015 12:46 AM, Alexandre DERUMIER wrote:
I'll retest tcmalloc, because I was prety sure to have patched it
correctly.
Ok, I really think I have patched tcmalloc wrongly.
I have repatched
http://gperftools.googlecode.com/svn/trunk/doc/tcmalloc.html
On 04/27/2015 09:53 AM, Milosz Tanski wrote:
On 4/27/15 9:21 AM, Alexandre DERUMIER wrote:
Seem that starting osd with:
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=128M /usr/bin/ceph-osd
fix it.
I don't known if it's the right way ?
Do
April 2015 18:34:50
Subject: Re: [ceph-users] strange benchmark problem : restarting osd daemon
improve performance from 100k iops to 300k iops
On 04/27/2015 10:11 AM, Alexandre DERUMIER wrote:
Is it possible that you were suffering from the bug during the first
test but once reinstalled you
it a shot. :D
Mark
On 04/24/2015 12:36 PM, Stefan Priebe - Profihost AG wrote:
Is jemalloc recommended in general? Does it also work for firefly?
Stefan
Excuse my typo sent from my mobile phone.
On 24.04.2015 at 18:38, Alexandre DERUMIER aderum...@odiso.com wrote:
mailto:aderum
.
Thanks Regards
Somnath
-Original Message-
From: ceph-users [mailto: ceph-users-boun...@lists.ceph.com ] On Behalf Of
Alexandre DERUMIER
Sent: Thursday, April 23, 2015 4:56 AM
To: Mark Nelson
Cc: ceph-users; ceph-devel; Milosz Tanski
Subject: Re: [ceph-users] strange benchmark problem
,
Srinivas
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark
Nelson
Sent: Wednesday, April 22, 2015 7:31 PM
To: Alexandre DERUMIER; Milosz Tanski
Cc: ceph-devel; ceph-users
Subject: Re: [ceph-users] strange benchmark problem
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark
Nelson
Sent: Wednesday, April 22, 2015 7:31 PM
To: Alexandre DERUMIER; Milosz Tanski
Cc: ceph-devel; ceph-users
Subject: Re: [ceph-users] strange benchmark problem : restarting osd daemon
improve performance from 100k iops to 300k
how it does.
In some ways I'm glad it turned out not to be NUMA. I still suspect we
will have to deal with it at some point, but perhaps not today. ;)
Mark
On 04/23/2015 05:58 AM, Alexandre DERUMIER wrote:
Maybe it's tcmalloc related
I thought I had patched it correctly, but perf shows
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark
Nelson
Sent: Wednesday, April 22, 2015 7:31 PM
To: Alexandre DERUMIER; Milosz Tanski
Cc: ceph-devel; ceph-users
Subject: Re: [ceph-users] strange benchmark problem : restarting osd daemon
improve performance from 100k iops to 300k iops
: this is one of the reasons I like small nodes with single sockets
and fewer OSDs.
Mark
On 04/22/2015 08:56 AM, Alexandre DERUMIER wrote:
Hi,
I have done a lot of tests today, and it seems indeed numa related.
My numastat was:
# numastat
              node0        node1
numa_hit   99075422    153976877
numa_miss
from 100k iops to 300k iops
On Wed, Apr 22, 2015 at 5:01 AM, Alexandre DERUMIER aderum...@odiso.com
wrote:
I wonder if it could be numa related,
I'm using centos 7.1,
and auto numa balacning is enabled
cat /proc/sys/kernel/numa_balancing = 1
Maybe osd daemon access to buffer on wrong
Hi,
I was doing some benchmarks,
and I found a strange behaviour.
Using fio with the rbd engine, I was able to reach around 100k iops
(osd data in the linux buffer cache, iostat shows 0% disk access),
then after restarting all osd daemons,
the same fio benchmark now shows around 300k iops
(osd data in
I wonder if it could be numa related.
I'm using centos 7.1,
and auto numa balancing is enabled:
cat /proc/sys/kernel/numa_balancing = 1
Maybe the osd daemon accesses its buffers on the wrong numa node.
I'll try to reproduce the problem
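One way I could try to rule that out (a sketch, not something I have tested yet) is to pin an osd and its memory to a single node with numactl:

numactl --hardware                                   # list the numa nodes
numactl --cpunodebind=0 --membind=0 /usr/bin/ceph-osd -i 0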
- Original Message -
From: aderumier aderum...@odiso.com
To:
).
- Original Message -
From: Josh Durgin jdur...@redhat.com
To: aderumier aderum...@odiso.com, ceph-devel ceph-devel@vger.kernel.org
Sent: Wednesday, April 15, 2015 01:12:38
Subject: Re: how to test hammer rbd objectmap feature ?
On 04/14/2015 12:48 AM, Alexandre DERUMIER wrote:
Hi,
I would like
Hi,
I would like to know how to enable object map on hammer.
I found a post-hammer commit here:
https://github.com/ceph/ceph/commit/3a7b28d9a2de365d515ea1380ee9e4f867504e10
rbd: add feature enable/disable support
- Specifies which RBD format 2 features are to be enabled when creating
-
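Going by the commit above, a sketch of what enabling it on an existing image should look like once that CLI support is available (image name is a placeholder, and object-map also needs exclusive-lock):

rbd feature enable rbd/myimage exclusive-lock
rbd feature enable rbd/myimage object-map
rbd info rbd/myimage    # "features:" should now list exclusive-lock, object-map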
Hi,
does this bug also apply to rhel7(.1)/centos7(.1) ?
In the Inktank repository, I see it
in the rhel7 repository:
http://ceph.com/rpm-giant/rhel7/x86_64/gperftools-libs-2.1-1.el7.x86_64.rpm
But I can't find it in the centos repository
http://ceph.com/rpm-giant/el7/x86_64/
?
Will Inktank
Hi Somnath,
is it only on osds ?
or also clients ?
What is the tcmalloc version on ubuntu ? (I would like to know if the problem
also exists on debian)
Regards,
Alexandre
- Original Message -
From: Somnath Roy somnath@sandisk.com
To: ceph-devel ceph-devel@vger.kernel.org
Sent:
\n,
+ (unsigned long) cache_sz, (unsigned long) DEFAULT_CACHE_SIZE);
+ }
+ } else {
+ CLS_LOG(0, "TCMALLOC-ENV: Not Found\n");
+ }
+}
+/* EOF */
--
1.9.1
Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Alexandre
Hi,
thank you very much ! It's a very nice article.
I didn't know about the nojournal feature.
Does it make sense for a full ssd setup, or is it only a test/debug option ?
- Original Message -
From: Marcel Lauhoff m...@irq0.org
To: ceph-devel ceph-devel@vger.kernel.org
Sent: Tuesday, March 24
Hi,
I'm not sure it helps for snapshot rollback, but hammer has a new
object_map feature:
https://wiki.ceph.com/Planning/Blueprints/Hammer/librbd%3A_shared_flag,_object_map
which helps for resize, flatten, ...
- Original Message -
From: 徐昕 xuxinh...@gmail.com
To: ceph-devel
numbers?
I also did some tests on fdcache, though just glancing at the results it
doesn't look like tweaking those parameters had much effect.
Mark
On 03/01/2015 08:38 AM, Alexandre DERUMIER wrote:
Hi Mark,
I found a previous bench from Vu Pham (it was about simplemessenger vs
,
In the Ceph Dumpling/Firefly/Hammer SSD/Memstore performance comparison
thread, Alexandre DERUMIER wondered if changing the default shard and
threads per shard OSD settings might have a positive effect on
performance in our tests. I went back and used one of the PCIe SSDs
from our previous tests
2015 22:49:23
Subject: Re: [ceph-users] Ceph Hammer OSD Shard Tuning Test Results
Can I ask what xio and simple messenger are and the differences?
Kind regards
Kevin Walker
+968 9765 1742
On 1 Mar 2015, at 18:38, Alexandre DERUMIER aderum...@odiso.com wrote:
Hi Mark,
I found an previous
comparison
thread, Alexandre DERUMIER wondered if changing the default shard and
threads per shard OSD settings might have a positive effect on
performance in our tests. I went back and used one of the PCIe SSDs
from our previous tests to experiment with a recent master pull. I
wanted to know how
Perhaps part of this might be to just try to get a better idea of which
tools folks are using to do performance monitoring on their existing
clusters (ceph or otherwise). I've heard zabbix come up quite a bit
recently.
Hi, we are using graphite here with collectd to retrieve host stats.
It's
: Friday, February 20, 2015 17:12:46
Subject: Re: Memstore performance improvements v0.90 vs v0.87
On 02/20/2015 10:03 AM, Alexandre DERUMIER wrote:
http://rhelblog.redhat.com/2015/01/12/mysteries-of-numa-memory-management-revealed/
It's possible that this could be having an effect on the results
...@canonical.com, Leann Ogasawara
leann.ogasaw...@canonical.com
Sent: Friday, February 20, 2015 17:12:46
Subject: Re: Memstore performance improvements v0.90 vs v0.87
On 02/20/2015 10:03 AM, Alexandre DERUMIER wrote:
http://rhelblog.redhat.com/2015/01/12/mysteries-of-numa-memory-management
http://rhelblog.redhat.com/2015/01/12/mysteries-of-numa-memory-management-revealed/
It's possible that this could be having an effect on the results.
Isn't auto numa balancing enabled by default since kernel 3.8 ?
it can be checked with
cat /proc/sys/kernel/numa_balancing
- Mail
Nice work Mark !
I don't see any tuning of sharding in the config file sample
(osd_op_num_threads_per_shard, osd_op_num_shards, ...).
As you only use 1 ssd for the bench, I think it should improve results for
hammer ?
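For reference, a sketch of those hammer-era sharding knobs in ceph.conf (the values are only illustrative, not recommendations):

[osd]
osd_op_num_shards = 10
osd_op_num_threads_per_shard = 2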
- Original Message -
From: Mark Nelson mnel...@redhat.com
To:
? It could
be an interesting follow-up paper.
Mark
On 02/18/2015 02:34 AM, Alexandre DERUMIER wrote:
Nice Work Mark !
I don't see any tuning about sharding in the config file sample
(osd_op_num_threads_per_shard,osd_op_num_shards,...)
as you only use 1 ssd for the bench, I think
the 2nd try
On 16.02.2015 at 23:18, Alexandre DERUMIER wrote:
Just tested write. This might be the result of higher CPU load of the
ceph-osd processes under firefly.
Dumpling 180% per process vs. firefly 220%
Oh yes, indeed, that's what I think too. (and more cpu - less ios in qemu
-devel
ceph-devel@vger.kernel.org
Sent: Monday, February 16, 2015 15:50:56
Subject: Re: speed decrease since firefly,giant,hammer the 2nd try
Hi Mark,
Hi Alexandre,
On 16.02.2015 at 10:11, Alexandre DERUMIER wrote:
Hi Stefan,
It could be interesting to see if you have the same speed decrease