My requirement is to have full disaster recovery, business continuity,
and failover of automated services in a second datacenter, not on the
same Ceph cluster.
The datacenters have a dedicated 10GbE link for communication, and there
is the option to expand the cluster across both datacenters, but that is
not what I mean.
Hello,
Ceph RPM packages are available up to Fedora 17.
May I know when the Fedora 18 RPM package release is scheduled?
Thanks,
Kiran Patil.
On Tue, Feb 19, 2013 at 02:02:03PM +1100, Chris Dunlop wrote:
> On Sun, Feb 17, 2013 at 05:44:29PM -0800, Sage Weil wrote:
>> On Mon, 18 Feb 2013, Chris Dunlop wrote:
>>> On Sat, Feb 16, 2013 at 09:05:21AM +1100, Chris Dunlop wrote:
>>>> On Thu, Feb 14, 2013 at 08:57:11PM -0800, Sage Weil wrote:
On Tue, Feb 19, 2013 at 5:00 PM, Kevin Decherf wrote:
> On Tue, Feb 19, 2013 at 10:15:48AM -0800, Gregory Farnum wrote:
>> Looks like you've got ~424k dentries pinned, and it's trying to keep
>> 400k inodes in cache. So you're still a bit oversubscribed, yes. This
>> might just be the issue where
An error occurring on a ceph connection is treated as a fault,
causing the connection to be reset. The initial part of this fault
handling has to be done while holding the connection mutex, but
it must then be dropped for the last part.
Separate the part of this fault handling that executes without the
connection mutex held.
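As a rough illustration of the split being described, here is a minimal
sketch; the helper names con_fault_locked() and con_fault_unlocked() are
hypothetical, not the actual messenger.c functions:

static void con_fault(struct ceph_connection *con)
{
	/* first part: must run with con->mutex held */
	con_fault_locked(con);

	/* last part: must run after con->mutex is dropped */
	mutex_unlock(&con->mutex);
	con_fault_unlocked(con);
}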
On Tue, Feb 19, 2013 at 10:15:48AM -0800, Gregory Farnum wrote:
> Looks like you've got ~424k dentries pinned, and it's trying to keep
> 400k inodes in cache. So you're still a bit oversubscribed, yes. This
> might just be the issue where your clients are keeping a bunch of
> inodes cached for the
This is just the second part of a two-part patch. It simply
indents a block of code. This patch is going to be merged into
its predecessor following review.
Signed-off-by: Alex Elder
---
net/ceph/messenger.c | 80 +-
1 file changed, 40 insertio
This just converts a manually-implemented loop into a do..while loop
in con_work(). It also moves the handling of EAGAIN inside the blocks
where it has already been determined that an error code was returned.
NOTE:
This was done in two steps in order to facilitate review.
This patch will be squashed into its predecessor.
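Here is a hedged sketch of the loop shape this describes. try_read() and
try_write() are the existing messenger helpers, but the surrounding
structure is illustrative rather than the literal patch:

static void con_work(struct work_struct *work)
{
	struct ceph_connection *con = container_of(work,
			struct ceph_connection, work.work);
	int ret;

	mutex_lock(&con->mutex);
	do {
		ret = try_read(con);
		if (ret < 0) {
			if (ret != -EAGAIN)
				break;	/* real error: handle as a fault below */
			ret = 1;	/* EAGAIN: just go around again */
			continue;
		}

		ret = try_write(con);
		if (ret < 0) {
			if (ret != -EAGAIN)
				break;
			ret = 1;
			continue;
		}
	} while (ret > 0);		/* progress was made; see if there is more */
	mutex_unlock(&con->mutex);

	if (ret < 0)
		con_fault(con);		/* fault handling, as in the earlier patches */
}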
This just rearranges the logic in con_work() a little bit so that a
flag is used to indicate a fault has occurred. This allows both the
fault and non-fault case to be handled the same way and avoids a
couple of nearly consecutive gotos.
Signed-off-by: Alex Elder
---
net/ceph/messenger.c | 24
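A minimal sketch of the flag idea, assuming a ret value handed in from the
read/write loop; the helper name con_work_finish() is made up for
illustration:

static void con_work_finish(struct ceph_connection *con, int ret)
{
	bool fault = (ret < 0);		/* did the read/write loop fail? */

	if (fault)
		con->error_msg = "socket error on read/write";
	mutex_unlock(&con->mutex);

	if (fault)
		con_fault(con);		/* fault and non-fault share one exit path */
	con->ops->put(con);
}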
Collect the code that tests for and implements a backoff delay for a
ceph connection into a new function, ceph_backoff().
Make the debug output messages in that part of the code report
things consistently by reporting a message in the socket closed
case, and by making the one for PREOPEN state rep
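A hedged sketch of what such a helper might look like; the flag handling
and exact signature are assumptions, not the patch itself:

static bool ceph_backoff(struct ceph_connection *con)
{
	if (!test_and_clear_bit(CON_FLAG_BACKOFF, &con->flags))
		return false;			/* no backoff requested */

	dout("%s: con %p backing off for %lu jiffies\n",
	     __func__, con, con->delay);
	queue_delayed_work(ceph_msgr_wq, &con->work, con->delay);
	return true;				/* caller should stop processing for now */
}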
This series cleans up con_work() a bit. The original motivation
was to get rid of a warning issued by the sparse utility, but
addressing that required a little rework, and once that was done it
was fairly straightforward to make the function fairly simple.
The problem sparse reported was really d
Eliminate most of the problems in the libceph code that cause sparse
to issue warnings.
- Convert functions that are never referenced externally to have
static scope.
- Pass NULL rather than 0 for a pointer argument in one spot in
ceph_monc_delete_snapid()
This partially resolves:
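For illustration only (a generic example, not the actual libceph hunks),
the two kinds of change look like this:

/* sparse: "symbol 'helper' was not declared. Should it be static?" */
static int helper(void)		/* only used in this file, so give it static scope */
{
	return 0;
}

static void example(void)
{
	int *snap_arg = NULL;	/* pass NULL, not 0, where a pointer is expected */

	(void)snap_arg;
	(void)helper();
}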
Fix the causes for sparse warnings reported in the ceph file system
code. Here there are only two (and they're sort of silly but
they're easy to fix).
This partially resolves:
http://tracker.ceph.com/issues/4184
Reported-by: Fengguang Wu
Signed-off-by: Alex Elder
---
fs/ceph/xattr.c |
Fengguang Wu reminded me that there were outstanding sparse reports
in the ceph and rbd code. This patch fixes the problems in rbd
that led to those reports:
- Convert functions that are never referenced externally to have
static scope.
- Add a lockdep annotation to rbd_request_fn
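As an illustration of the kind of locking annotation involved: one common
form is the sparse context annotation below, which tells static checkers
that a function intentionally releases and re-acquires a lock it was
called with. Whether the real rbd patch uses exactly this form or a
lockdep assertion is not visible in this excerpt; the function is made up
for the example:

static void do_work_unlocked(struct request_queue *q)
	__releases(q->queue_lock) __acquires(q->queue_lock)
{
	spin_unlock_irq(q->queue_lock);
	/* ... work that must run without the queue lock held ... */
	spin_lock_irq(q->queue_lock);
}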
What follows is a few series of patches that get rid of code
issues that lead to warnings from the sparse utility.
The first three patches address the warnings in the rbd, ceph
file system, and libceph code respectively. After that, one
warning remains in libceph, and that is addressed by a series
On Tue, Feb 19, 2013 at 4:39 PM, Sage Weil wrote:
> On Tue, 19 Feb 2013, Noah Watkins wrote:
>> On Feb 19, 2013, at 2:22 PM, Gregory Farnum wrote:
>> > On Tue, Feb 19, 2013 at 2:10 PM, Noah Watkins
>> > wrote:
>> >
>> > That is just truly annoying. Is this described anywhere in their docs?
>>
>>
On Tue, 19 Feb 2013, Noah Watkins wrote:
> On Feb 19, 2013, at 2:22 PM, Gregory Farnum wrote:
> > On Tue, Feb 19, 2013 at 2:10 PM, Noah Watkins
> > wrote:
> >
> > That is just truly annoying. Is this described anywhere in their docs?
>
> Not really. It's just there in the code--I can figure out the metric if you're interested.
On Feb 19, 2013, at 2:22 PM, Gregory Farnum wrote:
> On Tue, Feb 19, 2013 at 2:10 PM, Noah Watkins wrote:
>
> That is just truly annoying. Is this described anywhere in their docs?
Not really. It's just there in the code--I can figure out the metric if you're
interested. I suspect it is loca
We've been spending a lot of time working on bobtail-related stabilization
and bug fixes, but our next development release v0.57 is finally here!
Notable changes include:
* osd: default to libaio for the journal (some performance boost)
* osd: validate snap collections on startup
* osd: ceph-
It recently occurred to me that I had messed up an OSD's storage, and
I decided that the easiest way to bring it back was to roll it back to
an earlier snapshot I'd taken (along the lines of clustersnap) and let
it recover from there.
The problem with that idea was that the cluster had advanced too much
On Tue, Feb 19, 2013 at 2:10 PM, Noah Watkins wrote:
> Here is the information that I've found so far regarding the operation of
> Hadoop w.r.t. DNS/topology. There are two parts, the file system client
> requirements, and other consumers of topology information.
>
> -- File System Client --
>
>
Here is the information that I've found so far regarding the operation of
Hadoop w.r.t. DNS/topology. There are two parts, the file system client
requirements, and other consumers of topology information.
-- File System Client --
The relevant interface between the Hadoop VFS and its underlying
On 19.02.13 20:23, Samuel Just wrote:
> Can you confirm that the memory size reported is res?
> -Sam
I think it was virtual, seeing as it was the SIZE column in ps.
However, we ran into massive slow request issues as soon as the memory
started ballooning.
--ck
Can you confirm that the memory size reported is res?
-Sam
On Mon, Feb 18, 2013 at 8:46 AM, Christopher Kunz wrote:
> On 16.02.13 10:09, Wido den Hollander wrote:
>> On 02/16/2013 08:09 AM, Andrey Korolyov wrote:
>>> Can anyone who hit this bug please confirm that your system contains
>>> libc
Reviewed-by: Josh Durgin
On 02/08/2013 08:20 AM, Alex Elder wrote:
Add support for CEPH_OSD_OP_STAT operations in the osd client
and in rbd.
This operation sends no data to the osd; everything required is
encoded in the identity of the target object.
The result will be ENOENT if the object doesn't exist.
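As a rough sketch of the semantics (the struct and helper below are
illustrative assumptions, not the patch's actual interface), the STAT
reply carries just the object's size and mtime, and a missing object
surfaces as -ENOENT:

struct stat_reply {
	__le64 size;			/* object size in bytes */
	struct ceph_timespec mtime;	/* last modification time */
} __attribute__ ((packed));

static bool object_exists(int stat_result)
{
	if (stat_result == -ENOENT)
		return false;		/* object does not exist */
	return stat_result >= 0;	/* any other error is not "exists" either */
}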
On 02/08/2013 08:31 AM, Alex Elder wrote:
This series does some cleanup related to some functions that
implement ceph page vectors. Together, these resolve:
http://tracker.ceph.com/issues/4053
-Alex
[PATCH 1/5] ceph: remove a few bogus declarations
Reviewed-by: Josh Durgin
On 02/08/2013 08:18 AM, Alex Elder wrote:
The for_each_obj_request*() macros should parenthesize their uses of
the ireq parameter.
Signed-off-by: Alex Elder
---
drivers/block/rbd.c |6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers
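For illustration, the before/after shape of such a fix; the field names
follow how the macro appears to be used in rbd.c, but this is a
reconstruction rather than the literal hunk:

/*
 * before: the bare 'ireq' misbinds if a caller passes a non-trivial
 * expression for the parameter
 *
 *	#define for_each_obj_request(ireq, oreq) \
 *		list_for_each_entry(oreq, &ireq->obj_requests, links)
 */

/* after: parenthesize the parameter at its point of use */
#define for_each_obj_request(ireq, oreq) \
	list_for_each_entry(oreq, &(ireq)->obj_requests, links)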
The d_path() and related kernel functions currently take a writer
lock on rename_lock because they need to follow pointers. By changing
rename_lock to be the new sequence read/write lock, a reader lock
can be taken and multiple d_path() threads can proceed concurrently
without blocking each other.
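The intent, sketched with a plain rwlock standing in for the proposed
sequence read/write lock (the real primitive differs; this only shows why
a reader lock helps):

static DEFINE_RWLOCK(example_rename_lock);	/* stand-in, not the real rename_lock */

static void build_path_read_side(void)
{
	read_lock(&example_rename_lock);	/* shared: many d_path() callers at once */
	/* follow dentry->d_parent pointers and copy out the name components */
	read_unlock(&example_rename_lock);
}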
Here's a fix for bug 1435. The MDS didn't recover default_layout for
subdirs from dir entries encoded in the parent dir, so layout policies
could be lost upon MDS restart if they didn't happen to be encoded in
some other change still present in the MDS journal.
Fix restoring dir layouts from dir
On Sat, Feb 16, 2013 at 10:24 AM, Kevin Decherf wrote:
> On Sat, Feb 16, 2013 at 11:36:09AM -0600, Sam Lang wrote:
>> On Fri, Feb 15, 2013 at 7:02 PM, Kevin Decherf wrote:
>> > It seems better now, I didn't see any storm so far.
>> >
>> > But we observe high latency on some of our clients (with n
Hi,
First of all, I have some questions about your setup:
* What are your requirements?
* Are the DCs far from each other?
If they are reasonably close to each other, you can set up a single
cluster with replicas across both DCs and manage the RBD devices with
Pacemaker.
Cheers.
--
Regards,
Sé