Hi all,
I saw that the Ceph RADOS Gateway doc
(http://ceph.com/docs/master/radosgw/config/#configuring-print-continue) says:
"On CentOS/RHEL distributions, turn off print continue. If you have it set to
true, you may encounter problems with PUT operations."
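For reference, turning it off is a one-line setting in ceph.conf; a minimal sketch, assuming the gateway is configured under a [client.radosgw.gateway] section (the section name depends on your setup):

[client.radosgw.gateway]
    rgw print continue = false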
But when I set this config item to
Hi all,
take a look at this link:
http://www.ceph.com/docs/master/architecture/#smart-daemons-enable-hyperscale
Could you explain points 2 and 3 in that picture?
1.
At points 2 and 3, before the primary writes data to the next OSD, where is the
data? Is it in memory or on disk already?
2. Where is the code
Hi all,
My question comes from a test of mine.
Let's take an example: object1 (4MB) -- pg 0.1 -- osd 1,2,3, p1
When the client is writing object1, osd1 goes down during the write. Let's
suppose 2MB has been written.
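(As a side note, the object-to-PG-to-OSD mapping assumed in this example can be checked from the command line; a quick sketch, assuming the object lives in a pool named "rbd":

ceph osd map rbd object1

This prints the PG the object maps to and the up/acting OSD sets.)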
1.
When the connection to osd1 is down, what does the client do? Ask the
monitor for a new osdmap? Or only the
On Mon, Sep 22, 2014 at 7:06 PM, Florian Haas flor...@hastexo.com wrote:
On Sun, Sep 21, 2014 at 9:52 PM, Sage Weil sw...@redhat.com wrote:
On Sun, 21 Sep 2014, Florian Haas wrote:
So yes, I think your patch absolutely still has merit, as would any
means of reducing the number of snapshots an
On Mon, Sep 22, 2014 at 5:01 PM, Alex Elder el...@ieee.org wrote:
On 09/05/2014 03:42 AM, Ilya Dryomov wrote:
Both not yet registered (r_linger && list_empty(&r_linger_item)) and
registered linger requests should use the new tid on resend to avoid
the dup op detection logic on the OSDs, yet we
Hi Sage,
I have created the following setup in order to examine how a single OSD
behaves when, say, ~80-90% of the IOs are hitting the SSDs.
My test includes the following steps (a rough sketch of the commands for
step 2 follows the list).
1. Created a single OSD cluster.
2. Created two rbd images (110GB each) on 2 different pools.
3.
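A rough sketch of what step 2 could look like from the command line (pool names, PG counts, and image names are illustrative, not from the original mail):

ceph osd pool create pool1 128 128
ceph osd pool create pool2 128 128
rbd create image1 --pool pool1 --size 112640   # 110GB; --size is in MB
rbd create image2 --pool pool2 --size 112640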
Somnath,
I wonder if there's a bottleneck or a point of contention for the
kernel. For an entirely uncached workload I expect the page cache
lookup to cause a slowdown (since the lookup should be wasted). What
I wouldn't expect is a 45% performance drop. Memory speed should be
one magnitude
Milosz,
Thanks for the response. I will see if I can get any information out of perf.
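A system-wide profile along these lines is probably the place to start (just a sketch; the 30-second window is arbitrary):

perf record -a -g -- sleep 30
perf report --stdio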
Here is my OS information.
root@emsclient:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 13.10
Release:        13.10
Codename: saucy
root@emsclient:~# uname
On Tue, 23 Sep 2014, Somnath Roy wrote:
Milosz,
Thanks for the response. I will see if I can get any information out of perf.
Here is my OS information.
root@emsclient:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 13.10
Release:
On Tue, Sep 23, 2014 at 6:20 AM, Florian Haas flor...@hastexo.com wrote:
On Mon, Sep 22, 2014 at 7:06 PM, Florian Haas flor...@hastexo.com wrote:
On Sun, Sep 21, 2014 at 9:52 PM, Sage Weil sw...@redhat.com wrote:
On Sun, 21 Sep 2014, Florian Haas wrote:
So yes, I think your patch absolutely
Sam and I discussed this on IRC, and we have what we think are two simpler patches that
solve the problem more directly. See wip-9487. Queued for testing now.
Once that passes we can backport and test for firefly and dumpling too.
Note that this won't make the next dumpling or firefly point releases
Good point, but have you considered the impact on write ops? And if we
skip the page cache, is FileStore then responsible for the data cache?
On Wed, Sep 24, 2014 at 3:29 AM, Sage Weil sw...@redhat.com wrote:
On Tue, 23 Sep 2014, Somnath Roy wrote:
Milosz,
Thanks for the response. I will see if I
Haomai,
I am only considering random reads, and the changes I made only affect
reads. For writes, I have not measured yet. But, yes, the page cache may be
helpful for write coalescing. I still need to evaluate how it behaves compared
to direct_io on SSD though. I think the Ceph code path will be
I agree that direct reads will help for disk reads. But if the read
data is hot and small enough to fit in memory, the page cache is a good
place to hold the data cache. If we discard the page cache, we need to
implement a cache that provides an effective lookup implementation.
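To make the trade-off concrete, here is a minimal sketch (plain C, not Ceph code) of a read that bypasses the page cache via O_DIRECT; the 4096-byte alignment is an assumption about the device's logical block size:

/* Read one 4 KB block with O_DIRECT so the kernel page cache is bypassed.
 * O_DIRECT requires the buffer, offset, and length to be block-aligned. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    void *buf;
    if (posix_memalign(&buf, 4096, 4096)) {   /* aligned buffer required */
        close(fd);
        return 1;
    }
    ssize_t n = pread(fd, buf, 4096, 0);      /* not inserted into the page cache */
    if (n < 0)
        perror("pread");
    else
        printf("read %zd bytes directly from the device\n", n);
    free(buf);
    close(fd);
    return 0;
}

A buffered read of the same hot block would normally be served straight from the page cache, which is exactly the lookup-cost vs. cache-hit trade-off being discussed here.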
BTW, as to whether to use direct IO, we can refer to