On 02/20/2013 10:01 AM, Gandalf Corvotempesta wrote:
I'm trying to configure RGW following this guide:
http://ceph.com/docs/master/radosgw/config/
I have some questions about keyrings. Should the keyring files be copied
to each cluster node?
For example, these commands:
sudo ceph-authtool
On 02/25/2013 12:16 AM, Josh Durgin wrote:
On 02/23/2013 02:33 AM, Loic Dachary wrote:
Hi,
In anticipation of the next OpenStack summit
http://www.openstack.org/summit/portland-2013/, I proposed a session to
discuss OpenStack and Ceph integration. Our meeting during FOSDEM earlier
this
Hi Neil,
I've added RBD backups to secondary clusters within OpenStack to the list of
blueprints. Do you have links to mail threads / chat logs related to this
topic?
I moved the content of the session to an etherpad for collaborative editing
Ok thanks guys. Hope we will find something :-).
--
Regards,
Sébastien Han.
On Mon, Feb 25, 2013 at 8:51 AM, Wido den Hollander w...@42on.com wrote:
On 02/25/2013 01:21 AM, Sage Weil wrote:
On Mon, 25 Feb 2013, Sébastien Han wrote:
Hi Sage,
Sorry, it's a production system, so I can't test
On 02/22/2013 09:15 AM, Alex Elder wrote:
The following patches address some issues that were found while
investigating a kernel rbd client performance issue this week.
These patches are available in the branch test/wip-4234,5,7,8
in the ceph-client git repository.
On 02/22/2013 09:18 AM, Alex Elder wrote:
I'm re-posting these patches because I've updated them to be
based on the patches I just posted (Four miscellaneous patches).
These patches are available in the branch test/wip-4184 in
the ceph-client git repository. That branch is based on
branch
Any word on what the status of this is? I just ran into it myself,
all on 0.56.3, latest KVM/qemu for Ubuntu 12.04.
Looking at the bug in tracker, it's resolved. Is this going to be
backported to bobtail?
I'm booting VMs directly off of RBD, and this bug takes a few of them
down at startup. I
If an invalid layout is provided to ceph_osdc_new_request(), its
call to calc_layout() might return an error. At that point in the
function we've already allocated an osd request structure, so we
need to free it (drop a reference) in the event such an error
occurs.
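The leak being fixed can be sketched in plain userspace C. All names below (osd_request_alloc(), osd_request_put(), the refcount field) are illustrative stand-ins, not the kernel API; only the shape of the error path matters:

```c
#include <stdlib.h>

/* Illustrative stand-in for the kernel's osd request structure. */
struct osd_request {
    int refcount;
};

static struct osd_request *osd_request_alloc(void)
{
    struct osd_request *req = calloc(1, sizeof(*req));
    if (req)
        req->refcount = 1;
    return req;
}

static void osd_request_put(struct osd_request *req)
{
    if (req && --req->refcount == 0)
        free(req);
}

/* Pretend layout validation: returns 0 on success, -1 on a bad layout. */
static int calc_layout(int layout_valid)
{
    return layout_valid ? 0 : -1;
}

/*
 * The fix: if calc_layout() fails after the request has already been
 * allocated, drop the reference before returning so the request
 * structure is not leaked.
 */
static struct osd_request *new_request(int layout_valid)
{
    struct osd_request *req = osd_request_alloc();

    if (!req)
        return NULL;
    if (calc_layout(layout_valid) < 0) {
        osd_request_put(req);   /* this cleanup was the missing piece */
        return NULL;
    }
    return req;
}
```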
The only other value
The bio_seg field is used by the ceph messenger in iterating through
a bio. It should never have a negative value, so make it an
unsigned.
Change variables used to hold bio_seg values to all be unsigned as
well. Change two variable names in init_bio_iter() to match the
convention used
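A minimal sketch of why the unsigned type is the right choice, using made-up structure and function names (the real messenger state is more involved). It also demonstrates the classic signed/unsigned comparison pitfall that signed index variables invite:

```c
#include <stddef.h>

/* Simplified view of the messenger's bio iteration state. */
struct bio_iter {
    unsigned int seg;      /* segment index: never negative, so unsigned */
    unsigned int seg_cnt;  /* total number of segments in the bio */
};

/* Advance to the next segment; returns 1 while segments remain. */
static int bio_iter_next(struct bio_iter *it)
{
    if (it->seg + 1 >= it->seg_cnt)
        return 0;
    it->seg++;
    return 1;
}

/*
 * The pitfall a signed index invites: comparing a signed -1 against an
 * unsigned count converts -1 to a huge unsigned value, so this "range
 * check" quietly reports an out-of-range index as out of range for the
 * wrong reason, and mixed comparisons elsewhere can go the other way.
 */
static int signed_index_in_range(int seg, unsigned int seg_cnt)
{
    return seg < seg_cnt;   /* -1 becomes UINT_MAX here */
}
```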
This series refactors the code involved with identifying the
details of the name, offset, and length of an object involved
with an osd request based on a file layout. It makes the focus
of calc_layout() be filling in an osd op structure based on the
file layout it is provided. The caller
Have calc_layout() pass the computed object number back to its
caller. (This is a small step to simplify review.)
Signed-off-by: Alex Elder el...@inktank.com
---
net/ceph/osd_client.c | 13 +++--
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/net/ceph/osd_client.c
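The out-parameter style the patch introduces can be sketched as below. This deliberately assumes trivial striping (object number = offset / object size); the real calc_layout() handles full striping parameters, so treat this as shape, not substance:

```c
#include <stdint.h>

/*
 * Simplified calc_layout(): report the computed object number back to
 * the caller through an out-parameter instead of consuming it
 * internally.  Returns 0 on success, -1 for an invalid layout.
 */
static int calc_layout(uint64_t off, uint64_t object_size, uint64_t *objnum)
{
    if (object_size == 0)
        return -1;              /* invalid layout */
    *objnum = off / object_size;
    return 0;
}
```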
Move the formatting of the object name (oid) to use for an object
request into the caller of calc_layout(). This makes the vino
parameter no longer necessary, so get rid of it.
Signed-off-by: Alex Elder el...@inktank.com
---
net/ceph/osd_client.c | 12 +---
1 file changed, 5
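What "formatting the oid in the caller" looks like, roughly: the caller combines the inode number with the object number calc_layout() computed, so calc_layout() never needs the vino. The "<ino>.<objnum>" form below follows the usual ceph object-naming convention, but the function name and buffer size are illustrative:

```c
#include <stdio.h>
#include <stdint.h>

#define MAX_OBJ_NAME_SIZE 100   /* illustrative buffer size */

/* Format an object name from an inode number and object number. */
static int format_oid(char *buf, size_t len, uint64_t ino, uint64_t objnum)
{
    return snprintf(buf, len, "%llx.%08llx",
                    (unsigned long long)ino,
                    (unsigned long long)objnum);
}
```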
The only remaining reason to pass the osd request to calc_layout()
is to fill in its r_num_pages and r_page_alignment fields. Once it
fills those in, it doesn't do anything more with them.
We can therefore move those assignments into the caller, and get rid
of the req parameter entirely.
Note,
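The two values in question are cheap for the caller to compute itself, which is what lets the req parameter go away. The helpers below mirror the arithmetic in simplified form (a fixed 4 KiB page; names are only suggestive of the kernel's):

```c
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Number of pages an extent [off, off + len) touches. */
static uint32_t calc_pages_for(uint64_t off, uint64_t len)
{
    return (uint32_t)(((off + len + PAGE_SIZE - 1) >> PAGE_SHIFT) -
                      (off >> PAGE_SHIFT));
}

/* Offset of the data within its first page. */
static uint32_t page_alignment(uint64_t off)
{
    return (uint32_t)(off & (PAGE_SIZE - 1));
}
```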
This series makes the fields related to the data portion of
a ceph message not get manipulated by code outside the ceph
messenger. It implements some interface functions that can
be used to assign data-related fields. Doing this will allow
the way message data is managed to be changed
Use distinct fields for tracking the number of pages in a message's
page array and in a message's page list. Currently only one or the
other is used at a time, but that will be changing soon.
Signed-off-by: Alex Elder el...@inktank.com
---
fs/ceph/mds_client.c |4 ++--
The page alignment field for a request is currently set in
ceph_osdc_build_request(). It's not needed at that point
nor do either of its callers need that value assigned at
any point before they call ceph_osdc_start_request().
So move that assignment into ceph_osdc_start_request().
Define a function ceph_msg_data_set_pages(), which more clearly
abstracts the assignment of page-related fields for data in a ceph
message structure. These fields should never be set more than once
(add BUG_ON() calls to guarantee that). Use this new function in
the osd client and mds client.
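The set-once discipline can be sketched in userspace C, with assert() standing in for the kernel's BUG_ON(). The structure and field names below are simplified stand-ins for the real message fields:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified message: just the page-array fields the setter manages. */
struct msg {
    void **pages;
    size_t page_count;
};

/*
 * Setter in the spirit of ceph_msg_data_set_pages(): the data fields
 * must only ever be assigned once, so a repeated assignment trips the
 * assertion (BUG_ON() in the kernel).
 */
static void msg_data_set_pages(struct msg *m, void **pages, size_t count)
{
    assert(m->pages == NULL);   /* BUG_ON(msg->pages) in the kernel */
    m->pages = pages;
    m->page_count = count;
}
```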
Define ceph_msg_data_set_pagelist(), ceph_msg_data_set_bio(), and
ceph_msg_data_set_trail() to clearly abstract the assignment of the
remaining data-related fields in a ceph message structure. These
fields should never be set more than once; add BUG_ON() calls to
guarantee this. Use the new
On Fri, Feb 22, 2013 at 8:31 PM, Yan, Zheng zheng.z@intel.com wrote:
On 02/23/2013 02:54 AM, Gregory Farnum wrote:
I haven't spent that much time in the kernel client, but this patch
isn't working out for me. In particular, I'm pretty sure we need to
preserve this:
diff --git
On 02/26/2013 08:01 AM, Gregory Farnum wrote:
On Fri, Feb 22, 2013 at 8:31 PM, Yan, Zheng zheng.z@intel.com wrote:
On 02/23/2013 02:54 AM, Gregory Farnum wrote:
I haven't spent that much time in the kernel client, but this patch
isn't working out for me. In particular, I'm pretty sure we
On Tue, 26 Feb 2013, Yan, Zheng wrote:
It looks to me like truncates can get queued for later, so that's not the
case?
And how could the client receive a truncate while in the middle of
writing? Either it's got the write caps (in which case nobody else can
truncate), or it shouldn't be