> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Sage Weil
> Sent: Monday, October 19, 2015 9:49 PM
>
> The current design is based on two simple ideas:
>
> 1) a key/value interface is a better way to manage all of our
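The snippet breaks off here, but the gist of the key/value idea can be sketched. The following is illustrative only (KVStore and KVTransaction are invented names, not Ceph's actual KeyValueDB API): metadata management reduces to get/set/remove on sorted keys, with mutations batched into atomic transactions.

#include <map>
#include <string>
#include <utility>
#include <vector>

struct KVTransaction {
  std::vector<std::pair<std::string, std::string>> sets;  // key -> value
  std::vector<std::string> rms;                           // keys to remove
};

class KVStore {
  std::map<std::string, std::string> db;  // stand-in for leveldb/rocksdb
public:
  bool get(const std::string& k, std::string* v) const {
    auto it = db.find(k);
    if (it == db.end()) return false;
    *v = it->second;
    return true;
  }
  // Apply a whole batch atomically; a real backend would go through a WAL.
  void submit(const KVTransaction& t) {
    for (const auto& kv : t.sets) db[kv.first] = kv.second;
    for (const auto& k : t.rms) db.erase(k);
  }
};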
> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Chen, Xiaoxi
> Sent: Friday, October 16, 2015 8:26 AM
> To: Xusangdi
> Cc: ceph-devel@vger.kernel.org
> Subject: RE: chooseleaf may cause some unnecessary pg migrations
>
> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Loic Dachary
> Sent: Thursday, October 15, 2015 11:09 AM
>
> Hi,
>
> TL;DR: the make check bot is fixed (no more delays) but only keeps the last
> 30 runs
> [..]
> What
> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Somnath Roy
> Sent: Tuesday, October 13, 2015 8:46 AM
>
> Thanks Haomai..
> Since Async messenger always uses a constant number of threads, there
> could be a
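The snippet is truncated, but the threading model being contrasted can be sketched. This is a hedged illustration (Worker and Center are invented names, not AsyncMessenger's real classes): a fixed pool of event-loop threads, with each connection pinned to one worker, so the thread count stays constant no matter how many clients connect.

#include <array>
#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class Worker {
  std::queue<std::function<void()>> events;
  std::mutex m;
  std::condition_variable cv;
  bool done = false;
  std::thread th;  // declared last so the other members exist before loop() runs
public:
  Worker() : th([this] { loop(); }) {}
  ~Worker() {
    { std::lock_guard<std::mutex> l(m); done = true; }
    cv.notify_one();
    th.join();
  }
  void post(std::function<void()> ev) {
    { std::lock_guard<std::mutex> l(m); events.push(std::move(ev)); }
    cv.notify_one();
  }
private:
  void loop() {
    std::unique_lock<std::mutex> l(m);
    for (;;) {
      cv.wait(l, [this] { return done || !events.empty(); });
      if (done && events.empty()) return;
      auto ev = std::move(events.front());
      events.pop();
      l.unlock();
      ev();  // run the event outside the lock
      l.lock();
    }
  }
};

class Center {
  std::array<Worker, 4> workers;  // the constant number of threads
  std::size_t next = 0;
public:
  // Each new connection is pinned to one worker; the pool never grows.
  Worker& assign() { return workers[next++ % workers.size()]; }
};

A connection's read/write events are then posted to its assigned worker via center.assign().post(...), rather than spawning fresh threads per connection.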
> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Curley, Matthew
> Sent: Thursday, October 01, 2015 5:33 PM
>
> We've been trying to reproduce the allocator performance impact on 4K
> random reads seen in the Hackathon
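A toy reproduction along these lines is sketched below, under the assumption that a 4K random-read workload mostly stresses small, short-lived allocations; this is not the Hackathon benchmark itself. Run it once per allocator (e.g. by LD_PRELOADing tcmalloc or jemalloc) and compare wall time.

#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <thread>
#include <vector>

int main() {
  const int nthreads = 8, iters = 1 << 20;
  auto start = std::chrono::steady_clock::now();
  std::vector<std::thread> ts;
  for (int t = 0; t < nthreads; ++t)
    ts.emplace_back([&] {
      for (int i = 0; i < iters; ++i) {
        void* p = malloc(4096);        // 4K buffer, like a 4K read
        ((volatile char*)p)[0] = 1;    // touch it so it isn't optimized out
        free(p);
      }
    });
  for (auto& t : ts) t.join();
  auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
      std::chrono::steady_clock::now() - start).count();
  printf("%d threads x %d alloc/free: %lld ms\n", nthreads, iters, (long long)ms);
}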
> -Original Message-
> From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
> Sent: Friday, October 02, 2015 1:26 PM
> To: Dałek, Piotr
> Cc: Curley, Matthew; ceph-devel
> Subject: Re: Reproducing allocator performance differences
>
> >>I use rados ben
> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Alexandre DERUMIER
> Sent: Friday, October 02, 2015 8:55 AM
>
> >>also - more clients would be better (or worse, depending on how you look
> at it).
>
> It's quite
> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Mark Nelson
> Sent: Tuesday, September 29, 2015 3:22 PM
>
> >> https://drive.google.com/file/d/0B2gTBZrkrnpZY3U3TUU3RkJVeVk/view
> >
> > From my point of view, this
> -Original Message-
> From: Gregory Farnum [mailto:gfar...@redhat.com]
> Sent: Tuesday, September 29, 2015 3:53 PM
> To: Dałek, Piotr
>
> On Tue, Sep 29, 2015 at 2:21 AM, Dałek, Piotr <piotr.da...@ts.fujitsu.com>
> wrote:
> > Hello,
> >
> >
> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Haomai Wang
> Sent: Tuesday, September 08, 2015 3:58 PM
> To: Sage Weil
> Hi Sage,
>
> I notice your post in rocksdb page about make rocksdb aware of short alive
>
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Sage Weil
Sent: Tuesday, August 25, 2015 7:43 PM
I have built rpms from the tarball http://ceph.com/download/ceph-9.0.3.tar.bz2.
I have done this for fedora 21 x86_64
-Original Message-
From: Deneau, Tom [mailto:tom.den...@amd.com]
Sent: Wednesday, August 26, 2015 5:23 PM
To: Dałek, Piotr; Sage Weil
There have been some recent changes to rados bench... Piotr, does
this seem like it might be caused by your changes?
Yes. My PR #4690 (https
-Original Message-
From: Haomai Wang [mailto:haomaiw...@gmail.com]
Sent: Thursday, August 20, 2015 9:09 AM
And it takes me just a few minutes with rados bench to reproduce this
issue on a mixed-storage node (SSDs, SAS disks, high-capacity SATA disks, etc.).
See here:
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Blinick, Stephen L
Sent: Wednesday, August 19, 2015 6:58 PM
[..]
Regarding the all-HDD or high density HDD nodes, is it certain these issues
with tcmalloc don't apply,
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Allen Samuels
Sent: Wednesday, August 19, 2015 8:20 PM
It was a surprising result that the memory allocator is making such a large
difference in performance. All of the
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Wednesday, August 19, 2015 2:17 PM
The RSS memory usage in the report is per OSD I guess (really?). It
can't be ignored since it's really a great
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Wednesday, August 19, 2015 2:45 PM
Have you tried running these tests again with TCMalloc after applying
patches from
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Sage Weil
Sent: Tuesday, August 11, 2015 10:11 PM
I went ahead and implemented both of these pieces. See
https://github.com/ceph/ceph/pull/5534
My benchmark
-Original Message-
From: Haomai Wang [mailto:haomaiw...@gmail.com]
Sent: Wednesday, August 12, 2015 4:56 AM
To: Dałek, Piotr
On Wed, Aug 12, 2015 at 5:48 AM, Dałek, Piotr piotr.da...@ts.fujitsu.com
wrote:
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
This is a pretty low-level approach; what I was actually wondering is
whether we can reduce the amount of memory (de)allocations at a higher level,
like improving the message lifecycle logic (from receiving a message to
performing the actual operation and finishing it), so it wouldn't involve so many
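One hedged sketch of that higher-level idea (illustrative, not Ceph's actual message lifecycle code): recycle fixed-size message buffers through a free list, so the receive-process-finish path stops paying for a malloc/free pair per message.

#include <cstddef>
#include <mutex>
#include <vector>

class BufferPool {
  const size_t buf_size;
  std::vector<char*> free_list;
  std::mutex m;
public:
  explicit BufferPool(size_t sz) : buf_size(sz) {}
  ~BufferPool() { for (char* b : free_list) delete[] b; }
  char* get() {
    std::lock_guard<std::mutex> l(m);
    if (free_list.empty()) return new char[buf_size];  // slow path, cold only
    char* b = free_list.back();
    free_list.pop_back();
    return b;                                          // fast path: no malloc
  }
  void put(char* b) {
    std::lock_guard<std::mutex> l(m);
    free_list.push_back(b);                            // recycle, don't free
  }
};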
-Original Message-
From: Sage Weil [mailto:s...@newdream.net]
Sent: Monday, August 10, 2015 9:20 PM
To: Константин Сахинов
On Mon, 10 Aug 2015, Константин Сахинов wrote:
Sage, Piotr, sorry for taking your time and thanks for your help.
Memtest showed me red results. Will be digging for
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Sage Weil
Sent: Monday, August 10, 2015 3:52 PM
To: Константин Сахинов
On Mon, 10 Aug 2015, Константин Сахинов wrote:
Uploaded another corrupted piece.
2015-08-10
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Sage Weil
Sent: Sunday, August 09, 2015 7:39 PM
[..]
Well, I spoke too soon. The builds still aren't completing (the older boost's
shared_ptr is buggy on c++11, and
-Original Message-
From: Haomai Wang [mailto:haomaiw...@gmail.com]
Sent: Thursday, July 02, 2015 7:26 PM
Yes, some fields are only used for special ops. But a union may increase the
complexity of the struct.
Not by that much; the actual changes would be only in encoding and decoding,
the rest of the
Hello,
In ObjectStore.h we have the following struct:
struct Op {
  __le32 op;
  __le32 cid;
  __le32 oid;
  __le64 off;
  __le64 len;
  __le32 dest_cid;
  __le32 dest_oid;  // OP_CLONE, OP_CLONERANGE
  __le64 dest_off;
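For comparison, the union variant under discussion might look roughly like this. A sketch only: the typedefs stand in for the ceph wire types, and the grouping of ops into the two arms is an assumption for illustration, not an actual ObjectStore patch.

#include <cstdint>
typedef uint32_t __le32;  // stand-in for the ceph wire type
typedef uint64_t __le64;  // stand-in for the ceph wire type

struct Op {
  __le32 op;
  __le32 cid;
  __le32 oid;
  union {
    struct {              // extent-style ops (OP_WRITE, OP_ZERO, ...)
      __le64 off;
      __le64 len;
    } extent;
    struct {              // OP_CLONE, OP_CLONERANGE
      __le32 dest_cid;
      __le32 dest_oid;
      __le64 dest_off;
    } clone;
  } u;
};

The trade-off matches the point above: non-clone ops stop paying for the dest_* fields, but encode/decode must now switch on op to know which arm of the union is live.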
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of James (Fei) Liu-SSI
Sent: Friday, June 26, 2015 12:01 AM
Hi Cephers,
It is not easy to ask when Ceph is going to support inline
dedup/compression across OSDs in
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Daniel Swarbrick
Sent: Monday, June 29, 2015 1:31 PM
Yes, we have our own CRC32 checksum because long ago (before I
started!) Sage saw a lot of network corruption
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Somnath Roy
Sent: Friday, June 26, 2015 7:52 PM
ceph_crc32c_intel_fast is ~6 times faster than ceph_crc32c_sctp. If you are
not using intel cpus or you have older intel
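The speed gap comes from SSE4.2's dedicated CRC32C instruction: the hot loop is one crc32 per 8 bytes instead of table lookups. A minimal sketch of that approach (illustrative, not the actual ceph_crc32c_intel_fast code; compile with -msse4.2):

#include <nmmintrin.h>  // SSE4.2 intrinsics
#include <cstddef>
#include <cstdint>
#include <cstring>

uint32_t crc32c_hw(uint32_t crc, const uint8_t* data, size_t len) {
  uint64_t c = crc ^ 0xffffffffu;
  while (len >= 8) {               // one instruction per 8 bytes
    uint64_t chunk;
    std::memcpy(&chunk, data, 8);  // avoid unaligned-access UB
    c = _mm_crc32_u64(c, chunk);
    data += 8;
    len -= 8;
  }
  uint32_t c32 = (uint32_t)c;
  while (len--)                    // byte-at-a-time tail
    c32 = _mm_crc32_u8(c32, *data++);
  return c32 ^ 0xffffffffu;
}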
-Original Message-
From: David Zafman [mailto:dzaf...@redhat.com]
Sent: Friday, June 26, 2015 5:21 PM
This is a dangerous command because it can remove all your objects.
Well, that's the point of it, isn't it?
At least
it can only do one namespace at a time. It was intended to
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Erik G. Burrows
Sent: Friday, June 26, 2015 6:49 PM
All,
Can someone explain to me the rationale for performing in-software CRC32
hashes of all messages through the
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Deneau, Tom
Sent: Wednesday, June 24, 2015 6:44 PM
I have benchmarking situations where I want to leave a pool around but
delete a lot of objects from the pool. Is
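The question is truncated, but one way to do exactly that from librados is to iterate the pool and remove objects one by one, leaving the pool itself in place. A hedged sketch (pool and user names are placeholders, error handling omitted, link with -lrados):

#include <rados/librados.hpp>

int main() {
  librados::Rados cluster;
  cluster.init("admin");                  // placeholder user
  cluster.conf_read_file(nullptr);        // default ceph.conf
  cluster.connect();

  librados::IoCtx io;
  cluster.ioctx_create("benchpool", io);  // placeholder pool name

  // nobjects_begin() iterates objects in the current namespace
  for (librados::NObjectIterator it = io.nobjects_begin();
       it != io.nobjects_end(); ++it) {
    io.remove(it->get_oid());             // delete the object, keep the pool
  }
  cluster.shutdown();
  return 0;
}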
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Dan van der Ster
Hi all,
We've recently experienced a broken router that was corrupting packets in a
way that the TCP checksums were still valid. There has been some
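The reasoning step worth spelling out: TCP's checksum is a 16-bit ones'-complement sum, so it is order-independent, and swapping (or offsetting-flipping) 16-bit words of a payload leaves it valid. That is exactly how a broken router can corrupt data undetected, while Ceph's CRC32C still catches it. A small self-contained worked example:

#include <cstddef>
#include <cstdint>
#include <cstdio>

uint16_t inet_checksum(const uint16_t* words, size_t n) {
  uint32_t sum = 0;
  for (size_t i = 0; i < n; ++i) {
    sum += words[i];
    sum = (sum & 0xffff) + (sum >> 16);  // fold carries (ones' complement)
  }
  return (uint16_t)~sum;
}

int main() {
  uint16_t a[4] = {0x1234, 0x5678, 0x9abc, 0xdef0};
  uint16_t b[4] = {0x9abc, 0x5678, 0x1234, 0xdef0};  // words 0 and 2 swapped
  // Prints the same checksum twice: the corruption is invisible to TCP.
  printf("%04x %04x\n", inet_checksum(a, 4), inet_checksum(b, 4));
}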
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
I'm digging into perf and the code to see where/how I might be able to
improve performance for small I/O around 16K.
I ran fio with rados and used perf to record. Looking through the report,
there is a
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Haomai Wang
This is so because you use SimpleMessenger, which can't handle small I/O
well.
Indeed, threads are problematic with it, as well as memory allocation.
I
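For contrast with the fixed-thread model sketched earlier, SimpleMessenger's design dedicates threads per connection (a Pipe has a reader and a writer), so thread count, stacks, and allocator churn all grow with the client count. A schematic sketch, not the real Pipe code:

#include <thread>
#include <vector>

struct Pipe {
  std::thread reader, writer;  // two dedicated threads per connection
  Pipe()
    : reader([] { /* blocking read loop would live here */ }),
      writer([] { /* blocking write loop would live here */ }) {}
  ~Pipe() {
    reader.join();
    writer.join();
  }
};

int main() {
  // 100 connections already mean ~200 threads; small I/O pays the
  // context-switch and per-message allocation costs on every request.
  std::vector<Pipe> pipes(100);
}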
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Deneau, Tom
Sent: Friday, May 29, 2015 1:10 AM
To: ceph-devel
Subject: rados bench throughput with no disk or network activity
I've noticed that
* with a single