Hi, cephers:
Now I want to reduce the CPU usage of the OSDs in a full-SSD
cluster. In my test case, Ceph runs out of CPU; the CPU idle is about
10%.
The CPU in my cluster is an Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz.
Can you give me some suggestions?
Thanks.
Here are the CPU
Sage Weil wrote:
Calculating the ln() function is a bit annoying because it is a floating
point function and CRUSH is all fixed-point arithmetic (integer-based).
The current draft implementation uses a 128 KB lookup table (2 bytes per
entry for 16 bits of input precision). It seems to be
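(For illustration only, here is a rough sketch of the kind of table being described, not the actual CRUSH code: 2^16 entries of 2 bytes each, 128 KB total, built once with floating point and then consulted with integer-only lookups. The names and the 12-bit output scale are invented for the example.)

#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define LN_BITS  16                     /* 16 bits of input precision           */
#define LN_SIZE  (1u << LN_BITS)        /* 65536 entries                        */
#define LN_SCALE 4096                   /* output fixed point: 12 fraction bits */

static uint16_t neg_ln_table[LN_SIZE];  /* 65536 * 2 bytes = 128 KB             */

/* Built once (e.g. at build time and baked in as a static array); this is
 * the only place floating point is touched. */
static void build_table(void)
{
    for (uint32_t i = 0; i < LN_SIZE; i++) {
        double u = (double)(i + 1) / LN_SIZE;            /* u in (0, 1]         */
        neg_ln_table[i] = (uint16_t)lround(-log(u) * LN_SCALE);
    }
}

/* Integer-only lookup: h16 in 1..65536 stands for u = h16/65536, and the
 * result is -ln(u) * LN_SCALE, so the hot path never needs an FPU. */
static uint32_t neg_ln_fixed(uint32_t h16)
{
    return neg_ln_table[(h16 - 1) & (LN_SIZE - 1)];
}

int main(void)
{
    build_table();
    /* -ln(0.5) = 0.6931...; expect roughly 0.6931 * 4096 = 2839 */
    printf("-ln(0.5) ~= %u/%d\n", neg_ln_fixed(LN_SIZE / 2), LN_SCALE);
    return 0;
}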
Hi,
Please wait for the patch that adds support for building with jemalloc.
That should make the tcmalloc issues disappear.
Please see the thread
http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/22260 for further
information
With regards,
Shishir
On Fri, 12 Dec 2014, ??? wrote:
Hi, cephers:
Now I want to reduce the CPU usage of the OSDs in a full-SSD
cluster. In my test case, Ceph runs out of CPU; the CPU idle is about
10%.
The CPU in my cluster is an Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz.
Can you give me some
On Fri, 12 Dec 2014, Thorsten Behrens wrote:
Sage Weil wrote:
Calculating the ln() function is a bit annoying because it is a floating
point function and CRUSH is all fixed-point arithmetic (integer-based).
The current draft implementation uses a 128 KB lookup table (2 bytes per
entry
Hi Sam & Sage,
In the context of http://tracker.ceph.com/issues/9566 I'm inclined to think the
best solution would be that the AsyncReserver choose a PG instead of just
picking the next one in the list when there is a free slot. It would always
choose a PG that must move to/from an OSD for
On 12/12/2014 09:46 AM, Sage Weil wrote:
On Fri, 12 Dec 2014, Thorsten Behrens wrote:
Sage Weil wrote:
Calculating the ln() function is a bit annoying because it is a floating
point function and CRUSH is all fixed-point arithmetic (integer-based).
The current draft implementation uses a 128 KB
On Fri, Dec 12, 2014 at 9:46 AM, Sage Weil sw...@redhat.com wrote:
On Fri, 12 Dec 2014, Thorsten Behrens wrote:
Sage Weil wrote:
Calculating the ln() function is a bit annoying because it is a floating
point function and CRUSH is all fixed-point arithmetic (integer-based).
The current
On Fri, 12 Dec 2014, Loic Dachary wrote:
Hi Sam & Sage,
In the context of http://tracker.ceph.com/issues/9566 I'm inclined to
think the best solution would be that the AsyncReserver choose a PG
instead of just picking the next one in the list when there is a free
slot. It would always
On Fri, 12 Dec 2014, Joe Landman wrote:
On 12/12/2014 09:46 AM, Sage Weil wrote:
On Fri, 12 Dec 2014, Thorsten Behrens wrote:
Sage Weil wrote:
Calculating the ln() function is a bit annoying because it is a floating
point function and CRUSH is all fixed-point arithmetic
On 12/12/2014 11:20 AM, Sage Weil wrote:
We can't use floating point. The code needs to run in the kernel. We also
need the results to be perfectly deterministic and consistent across all
architectures; I'm not sure if all floating point implementations (and log
implementations) will do that?
Hi folks,
The Apache fork that we ship on Ceph.com
(https://github.com/ceph/apache2) is several versions behind upstream
and has a couple CVEs by now.
I've heard one of the developers (I don't remember if it was Dan, Yehuda,
or someone else) refer on IRC to the idea that the changes in our Ceph
On Mon, Dec 8, 2014 at 3:48 PM, Sage Weil sw...@redhat.com wrote:
Hey everyone,
When I was writing the original CRUSH code ages ago, I made several
different bucket types, each using a different 'choose' algorithm for
pseudorandomly selecting an item. Most of these were modeled after the
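(As a side note for readers following the ln() sub-thread: the draw-based selection it feeds looks roughly like the sketch below. This is not the CRUSH source; the hash is invented, and doubles are used only for readability where the in-kernel version would use the fixed-point ln table. Each item gets an independent pseudorandom draw scaled by its weight and the best draw wins, which is why reweighting one item only moves data to or from that item.)

#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Placeholder integer mixer, invented for the example; CRUSH has its own hash. */
static uint32_t mix_hash(uint32_t input, uint32_t item, uint32_t r)
{
    uint32_t h = input ^ (item * 2654435761u) ^ (r * 0x9e3779b9u);
    h ^= h >> 16; h *= 0x7feb352du;
    h ^= h >> 15; h *= 0x846ca68bu;
    h ^= h >> 16;
    return h;
}

/* Pick one of n items with probability proportional to weight[i]. */
static int straw2_choose(uint32_t input, uint32_t r,
                         const double *weight, int n)
{
    int best = -1;
    double best_draw = -INFINITY;

    for (int i = 0; i < n; i++) {
        if (weight[i] <= 0.0)
            continue;
        /* u in (0,1], so ln(u) <= 0; dividing by a larger weight pulls the
         * draw toward 0, i.e. makes this item more likely to win. */
        double u = (mix_hash(input, (uint32_t)i, r) + 1.0) / 4294967296.0;
        double draw = log(u) / weight[i];
        if (draw > best_draw) {
            best_draw = draw;
            best = i;
        }
    }
    return best;
}

int main(void)
{
    double weights[] = { 1.0, 1.0, 2.0 };   /* item 2 should win ~half the time */
    int hits[3] = { 0, 0, 0 };

    for (uint32_t x = 0; x < 100000; x++)
        hits[straw2_choose(x, 0, weights, 3)]++;

    printf("hits: %d %d %d\n", hits[0], hits[1], hits[2]);
    return 0;
}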
On Fri, Dec 12, 2014 at 9:12 AM, Ken Dreyer kdre...@redhat.com wrote:
Hi folks,
The Apache fork that we ship on Ceph.com
(https://github.com/ceph/apache2) is several versions behind upstream
and has a couple CVEs by now.
I've heard one of the developers (I don't remember if it was Dan,
On 12/12/2014 17:12, Sage Weil wrote:
On Fri, 12 Dec 2014, Loic Dachary wrote:
Hi Sam & Sage,
In the context of http://tracker.ceph.com/issues/9566 I'm inclined to
think the best solution would be that the AsyncReserver choose a PG
instead of just picking the next one in the list when
On 12/12/2014 11:20 AM, Sage Weil wrote:
We can't use floating point. The code needs to run in the kernel. We
also need the results to be perfectly deterministic and consistent
across all architectures; I'm not sure if all floating point
implementations (and log implementations) will do that?
On Fri, 12 Dec 2014, Joe Landman wrote:
On 12/12/2014 11:20 AM, Sage Weil wrote:
We can't use floating point. The code needs to run in the kernel. We also
need the results to be perfectly deterministic and consistent across all
architectures; I'm not sure if all floating point
ceph-disk currently does not support a journal device given as a
/dev/disk/by-path path, since the device name contains a colon. This was
reported over a year ago:
http://tracker.ceph.com/issues/5283
But it was set to 'Won't fix' a few months ago; is there a particular reason?
From the user's perspective, after
On 12/12/2014 11:27 AM, Yehuda Sadeh wrote:
On Mon, Dec 8, 2014 at 3:48 PM, Sage Weil sw...@redhat.com wrote:
Hey everyone,
When I was writing the original CRUSH code ages ago, I made several
different bucket types, each using a different 'choose' algorithm for
pseudorandomly selecting an
Hi Sheldon,
In the context of http://tracker.ceph.com/issues/9566 do you remember what the
value of max_backfills was?
https://github.com/ceph/ceph/blob/giant/src/common/config_opts.h#L409 It
probably makes a difference if it is set to 1 instead of the default.
Cheers
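(For reference, and without assuming anything about what that cluster actually had configured: the throttle in question is osd_max_backfills from the config_opts.h line above. Pinning it to 1 in ceph.conf would look like the fragment below; it can also be changed on a running cluster with something like: ceph tell osd.* injectargs '--osd-max-backfills 1'.)

[osd]
    # cap the number of concurrent backfills per OSD
    osd max backfills = 1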
On Fri, 12 Dec 2014, Fred Yang wrote:
ceph-disk currently does not support a journal device given as a
/dev/disk/by-path path, since the device name contains a colon. This was
reported over a year ago:
http://tracker.ceph.com/issues/5283
But it was set to 'Won't fix' a few months ago; is there a