Hi all,
after some days of successfully creating and destroying RBDs, snapshots, and
clones and migrating formats, all of a sudden one of the monitors doesn't work
anymore.
I tried to remove the monitor from the cluster and re-add it, but that doesn't
seem to work either.
Here's the situation:
I liked
On 09/03/2013 04:52 PM, majianpeng wrote:
For the readv/preadv sync operation, ceph only processes the first iov
and ignores the other iovs. Now implement this.
Signed-off-by: Jianpeng Ma majianp...@gmail.com
---
fs/ceph/file.c | 175
-
1 file
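[For illustration only: a minimal userspace sketch of the idea above, walking
every entry of the iovec array instead of only iov[0]. The names and the
pread-based emulation are made up for the example; this is not the actual
fs/ceph/file.c code.]

#include <fcntl.h>
#include <stdio.h>
#include <sys/uio.h>
#include <unistd.h>

/* Illustrative only: emulate a sync readv by walking every iov entry,
 * the way the patch extends the sync read path beyond iov[0]. */
static ssize_t sync_readv_all(int fd, const struct iovec *iov, int iovcnt,
                              off_t off)
{
    ssize_t total = 0;

    for (int i = 0; i < iovcnt; i++) {
        ssize_t n = pread(fd, iov[i].iov_base, iov[i].iov_len, off + total);
        if (n < 0)
            return n;                    /* propagate the error */
        total += n;
        if ((size_t)n < iov[i].iov_len)
            break;                       /* short read: stop early */
    }
    return total;
}

int main(int argc, char **argv)
{
    char a[16], b[16];
    struct iovec iov[2] = { { a, sizeof(a) }, { b, sizeof(b) } };
    int fd = argc > 1 ? open(argv[1], O_RDONLY) : -1;

    if (fd < 0)
        return 1;
    printf("read %zd bytes\n", sync_readv_all(fd, iov, 2, 0));
    close(fd);
    return 0;
}

[Builds with a plain cc invocation and reads the first 32 bytes of the file
named on the command line, filling both iovs rather than just the first.]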
Hi,
Thank you for the patch.
On 09/03/2013 04:52 PM, majianpeng wrote:
For the writev/pwritev sync operation, ceph only processes the first iov
and ignores the other iovs. Now implement this.
I divided the sync write operation into two functions: one for
direct write, the other for non-direct (buffered) sync write. This
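[Again just a sketch, not the patch itself: the split described above, with a
hypothetical dispatcher choosing between a direct-write helper and a buffered
(non-direct) sync-write helper. The function names are invented for
illustration and the userspace pwritev/fsync calls only stand in for the real
kernel paths.]

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdbool.h>
#include <sys/uio.h>
#include <unistd.h>

static ssize_t sync_write_direct(int fd, const struct iovec *iov, int iovcnt,
                                 off_t off)
{
    /* direct path: push the data straight out */
    return pwritev(fd, iov, iovcnt, off);
}

static ssize_t sync_write_buffered(int fd, const struct iovec *iov, int iovcnt,
                                   off_t off)
{
    /* non-direct path: write through the cache, then flush to stable storage */
    ssize_t n = pwritev(fd, iov, iovcnt, off);

    if (n >= 0 && fsync(fd) < 0)
        return -1;
    return n;
}

static ssize_t sync_writev(int fd, const struct iovec *iov, int iovcnt,
                           off_t off, bool direct)
{
    /* single entry point that dispatches to one of the two helpers */
    return direct ? sync_write_direct(fd, iov, iovcnt, off)
                  : sync_write_buffered(fd, iov, iovcnt, off);
}

int main(void)
{
    char msg[] = "hello from the sync write sketch\n";
    struct iovec iov[1] = { { msg, sizeof(msg) - 1 } };
    int fd = open("sync_write_demo.tmp", O_CREAT | O_WRONLY | O_TRUNC, 0644);

    if (fd < 0)
        return 1;
    ssize_t n = sync_writev(fd, iov, 1, 0, false);  /* buffered path */
    close(fd);
    return n < 0;
}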
On 09/04/2013 09:46 AM, Mike Dawson wrote:
There are currently a few metavariables available for use in ceph.conf:
http://ceph.com/docs/master/rados/configuration/ceph-conf/#metavariables
In addition to those listed in that document, $pid was added in Bobtail.
That allows me to get RBD admin
There are currently a few metavariables available for use in ceph.conf:
http://ceph.com/docs/master/rados/configuration/ceph-conf/#metavariables
In addition to those listed in that document, $pid was added in Bobtail.
That allows me to get RBD admin sockets for libvirt/qemu guests by
Milosz Tanski mil...@adfin.com wrote:
Am I on the right path here?
I think you are! :-)
Sorry for not getting back to you earlier, but August has been somewhat
compressed due to PTO.
David
On Wed, 4 Sep 2013, bernhard glomm wrote:
Hi all,
after some days of successfully creating and destroying RBDs, snapshots, and
clones and migrating formats, all of a sudden one of the monitors doesn't work
anymore.
I tried to remove the monitor from the cluster and re-add it, but that doesn't
Thanks for the feature request, Mark!
To illustrate where my understanding ends, here is a patch to stub in
this functionality:
From 137ebc2d5326ca710a5e99e2899fd851b02c10f7 Mon Sep 17 00:00:00 2001
From: Mike Dawson mdaw...@gammacode.com
Date: Wed, 4 Sep 2013 11:39:05 -0400
Subject: [PATCH]
Hi Linus,
Please pull this first batch of Ceph updates for 3.12 from
git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git for-linus
This first batch includes several important RBD bug fixes (more coming in
part 2), hole punch support, several different cleanups in the page cache
Hi David!
On Wed, 4 Sep 2013, David Howells wrote:
Sage Weil s...@inktank.com wrote:
David, are the fscache patches here ready for the next merge window? Do
you have a preference for whose tree they go through?
There's only one problem - patch 1 needs to come _after_ patch 2 to avoid
Sage Weil s...@inktank.com wrote:
David, are the fscache patches here ready for the next merge window? Do
you have a preference for whose tree they go through?
There's only one problem - patch 1 needs to come _after_ patch 2 to avoid
breaking git bisect. Plus these patches 2 and 4 extend
Hi Mike,
On Wed, 4 Sep 2013, Mike Dawson wrote:
Thanks for the feature request, Mark!
To illustrate where my understanding ends, here is a patch to stub in this
functionality:
From 137ebc2d5326ca710a5e99e2899fd851b02c10f7 Mon Sep 17 00:00:00 2001
From: Mike Dawson mdaw...@gammacode.com
On Wed, 4 Sep 2013, Sage Weil wrote:
Hi David!
On Wed, 4 Sep 2013, David Howells wrote:
Sage Weil s...@inktank.com wrote:
David, are the fscache patches here ready for the next merge window? Do
you have a preference for whose tree they go through?
There's only one problem -
David,
If the cache is withdrawn and we're starting anew, I would consider
that to be okay. I would consider an empty page cache for a cookie to be
consistent since there's nothing stale that I can read. Unless there's
another synchronization issue that I'm missing in fscache.
Thanks,
- Milosz
On
Milosz Tanski mil...@adfin.com wrote:
If the cache is withdrawn and we're starting anew, I would consider
that to be okay. I would consider an empty page cache for a cookie to be
consistent since there's nothing stale that I can read. Unless there's
another synchronization issue that I'm missing
David,
Is it as simple as sticking a mutex at the top of the
__fscache_check_consistency function before we try to find the object?
This code should be called from a context that can sleep; in the Ceph
code we call it from a delayed work queue (the revalidate queue).
-- Milosz
On Wed, Sep 4, 2013 at
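[A rough guess at the shape of the change Milosz is asking about, not David's
actual code; it assumes the 2013-era prototype that takes only the cookie, and
the mutex name is invented.]

#include <linux/mutex.h>
#include <linux/fscache.h>

/* Sketch only: serialize consistency checks so a racing cache withdrawal
 * cannot pull the backing object out from under us mid-check.  Taking a
 * mutex is acceptable here because Ceph calls this from its revalidate
 * work queue, which is allowed to sleep. */
static DEFINE_MUTEX(check_consistency_mutex);

int __fscache_check_consistency(struct fscache_cookie *cookie)
{
	int ret;

	mutex_lock(&check_consistency_mutex);
	/* ... existing object lookup and backend consistency check ... */
	ret = 0;
	mutex_unlock(&check_consistency_mutex);
	return ret;
}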
Our first development release after Dumpling has arrived! It includes a
lot of small cleanups and documentation improvements, as well as the first
blueprint from the Emperor CDS to land (support for ZFS). There are also
new sample implementations for RADOS classes (cls_hello) and a sample
David,
Is there any way I can call you (at your desk) during EST hours? I'd
like to talk this part through since I think we're going a bit in
circles. I'd like to get this fixed so we can submit the fscache code
for Ceph to the upstream kernel in the merge window.
Best,
-- Milosz
On Wed, Sep
Hi Matt,
I've just rebased wip-libcephfs on master and will be kicking off some
tests shortly. Please take a look and let me know if there is anything
missing or if you've run into any issues since this branch was prepared.
If everything looks okay I'd like to pull it in this week.
Thanks!
Where are you looking? ceph.com/debian-testing has 0.68
On 09/04/2013 07:12 PM, 이주헌 wrote:
Debian/Ubuntu packages are still at 0.67.2.
--
Dan Mick, Filesystem Engineering
Inktank Storage, Inc. http://inktank.com
Ceph docs: http://ceph.com/docs
Setup:
hosts: ceph1, ceph2
Command steps:
$ ceph-deploy new ceph1
$ ceph-deploy mon create ceph1
$ ceph-deploy gatherkeys ceph1
$ ceph-deploy disk zap ceph1:/dev/vdb
$ ceph-deploy disk zap ceph1:/dev/vdc
$ ceph-deploy disk zap ceph2:/dev/vdb
$ ceph-deploy disk zap ceph2:/dev/vdc
$ ceph-deploy