Hi Bob:
mkcephfs is still usable in 0.72 with a small patch. We are still
using mkcephfs on 0.72 because ceph-deploy is not good enough for us.
You need to patch mkcephfs.in and init-ceph.in to do this.
The patch to mkcephfs.in only requires modifying three symbols:
BINDIR=/usr/bin
LIBDIR=/us
On 11/11/2013 06:51 PM, Dave (Bob) wrote:
The utility mkcephfs seemed to work, it was very simple to use and
apparently effective.
It has been deprecated in favour of something called ceph-deploy, which
does not work for me.
I've ignored the deprecation messages until now, but in going from 70 to
72 I find that mkcephfs has finally gone.
Hi all,
The OpenStack Nova master branch still has a bug when you boot a VM with a
specified root disk size and the Nova storage backend is rbd. For
example, you boot a VM and specify 10G as the root disk size, but the image is only
1G. Then the VM will be spawned and the root disk size will
this one looks quite good,
Thank you
Yan, Zheng
Reviewed-by: Yan, Zheng
On 11/11/2013 03:18 PM, Guangliang Zhao wrote:
> v5: handle the roll back in ceph_set_acl(), correct
> ceph_get/set_cached_acl()
>
> v4: check the validity before set/get_cached_acl()
>
> v3: handle the attr change in ce
(resending as plain text)
I'm away and on my phone, I'll make it short. Overall direction is ok.
Run git submodule update because a submodule change snuck in.
I'd move the header dumping into a new RGWOp::pre_exec() callback and
also fold the dump_continue() into it.
Note that dumping bucket is on
Hi Loic,
I am finally working on the benchmark tool, and I found a bunch of incorrect parameter
checks which can make the whole thing SEGV.
All the RAID-6 codes have restrictions on their parameters, but they are not
correctly enforced for the Liberation & Blaum-Roth codes in the Ceph wrapper class
... see te
Hi Gregory,
On 12/11/13 10:13, Gregory Farnum wrote:
On Mon, Nov 11, 2013 at 3:04 AM, John Spray wrote:
This is a really useful summary from Malcolm.
In addition to the coordinator/copytool interface, there is the question of
where the policy engine gets its data from. Lustre has the MDS ch
On Mon, Nov 11, 2013 at 7:00 AM, Atchley, Scott wrote:
> On Nov 9, 2013, at 4:18 AM, Sage Weil wrote:
>
>> The SimpleMessenger implementation of the Messenger interface has grown
>> organically over many years and is one of the cruftier bits of code in
>> Ceph. The idea of building a fresh imple
On Mon, Nov 11, 2013 at 3:04 AM, John Spray wrote:
> This is a really useful summary from Malcolm.
>
> In addition to the coordinator/copytool interface, there is the question of
> where the policy engine gets its data from. Lustre has the MDS changelog,
> which Robinhood uses to replicate metada
For those of you working on fscache.
-- Forwarded message --
From: David Howells
Date: Mon, Nov 11, 2013 at 5:18 PM
Subject: [PATCH] FS-Cache: Fix handling of an attempt to store a page
that is now beyond EOF
To: torva...@linux-foundation.org
Cc: linux-cach...@redhat.com, linux-k
On 11/06/2013 10:12 PM, Yehuda Sadeh wrote:
On Wed, Nov 6, 2013 at 11:33 AM, Wido den Hollander wrote:
Hi,
I'm working on an RGW setup where I'm using Varnish[0] to cache objects, but
when doing so you run into the problem that a lot of (cached) requests will
not reach the RGW itself so the acc
Hi,
Loic posted a script he uses for testing setups without ceph-deploy:
http://www.spinics.net/lists/ceph-devel/msg16895.html
http://dachary.org/wp-uploads/2013/10/micro-osd.txt
it probably has enough steps in it for you to adapt.
Regards
Mark
P.s: what *is* your platform? It might not be t
The utility mkcephfs seemed to work, it was very simple to use and
apparently effective.
It has been deprecated in favour of something called ceph-deploy, which
does not work for me.
I've ignored the deprecation messages until now, but in going from 70 to
72 I find that mkcephfs has finally gone.
Li,
I recommend you CC David Howells who maintains FSCache directly on your emails.
Thanks,
- Milosz
On Mon, Nov 11, 2013 at 10:27 AM, Li Wang wrote:
> Currently, the page allocated into fscache in readpage()
> for Cifs and Ceph is not uncached if no data is read due
> to an I/O error. This patch
Sorry for a maybe dumb idea,
but since this seems to be the beginning of the discussion
about integrating HSM systems, it might still be the right place for it.
Integrating an HSM is surely interesting, but the price of a commercial
system (like SAMFS) exceeds the capacity of a wide range of custo
-- All Branches --
Alfredo Deza
2013-09-27 10:33:52 -0400 wip-5900
Dan Mick
2012-12-18 12:27:36 -0800 wip-rbd-striping
2013-07-16 23:00:06 -0700 wip-5634
Danny Al-Gaaf
2013-11-04 23:35:09 +0100 wip-da-fix-galois-warning
2013-11-05 22:01:42 +0100
Currently, if a page is allocated into fscache in readpage() but no data
is read, it is not uncached. This patch fixes this.
Signed-off-by: Li Wang
---
fs/ceph/addr.c |1 +
1 file changed, 1 insertion(+)
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 6df8bd4..be5f4b6 100644
--- a
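The diff itself is cut off above. As a rough sketch only (not the literal one-liner; the helper name is assumed from the rest of this series, and the surrounding code is simplified), the idea is that the ceph readpage path stops leaving a no-data page in fscache when the read fails:

/*
 * Sketch of the shape of the fix, not the actual patch: a simplified
 * readpage tail for fs/ceph/addr.c.  'err' is the result of the OSD
 * read into 'page'; ceph_fscache_readpage_cancel() is the single-page
 * cancel wrapper introduced elsewhere in this series.
 */
static int ceph_readpage_finish_sketch(struct inode *inode,
                                       struct page *page, int err)
{
        if (err < 0) {
                /* nothing was read: drop the page that readpage()
                 * pre-allocated into fscache instead of leaving it
                 * marked as cached */
                ceph_fscache_readpage_cancel(inode, page);
                return err;
        }

        /* success path unchanged: publish the data to fscache as before */
        ceph_readpage_to_fscache(inode, page);
        SetPageUptodate(page);
        return 0;
}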
Introduce a new API, fscache_readpage_cancel(), for uncaching a single
no-data page from fscache.
Signed-off-by: Li Wang
---
include/linux/fscache.h | 11 +++
1 file changed, 11 insertions(+)
diff --git a/include/linux/fscache.h b/include/linux/fscache.h
index 115bb81..f1ed21f 100644
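The header diff is truncated above; a minimal sketch of what such a wrapper could look like in include/linux/fscache.h, assuming it follows the existing netfs-facing wrapper pattern (only call into the core when the cookie is backed by a cache):

/*
 * Sketch, assuming the usual fscache wrapper pattern: the inline only
 * forwards to the core routine when the cookie actually has a backing
 * cache object, so netfs callers can invoke it unconditionally.
 */
static inline void fscache_readpage_cancel(struct fscache_cookie *cookie,
                                           struct page *page)
{
        if (fscache_cookie_valid(cookie))
                __fscache_readpage_cancel(cookie, page);
}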
Similar to the routine for multiple pages, except
that it takes a page * as input rather than a list_head *.
Signed-off-by: Li Wang
---
fs/fscache/page.c |8
1 file changed, 8 insertions(+)
diff --git a/fs/fscache/page.c b/fs/fscache/page.c
index 7f5c658..0c69f72 100644
--- a/fs/fscache
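The body is truncated here as well; a hedged sketch of the single-page variant in fs/fscache/page.c, assuming it only needs to drop the page if it is still marked as owned by fscache (the multi-page fscache_readpages_cancel() walks a list_head and does the equivalent per page):

/*
 * Sketch of the single-page counterpart of fscache_readpages_cancel():
 * if readpage() allocated the page into the cache but no data arrived,
 * clear its fscache ownership so it is not left looking cached.
 */
void __fscache_readpage_cancel(struct fscache_cookie *cookie,
                               struct page *page)
{
        if (PageFsCache(page))
                __fscache_uncache_page(cookie, page);
}
EXPORT_SYMBOL(__fscache_readpage_cancel);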
Currently, the page allocated into fscache in readpage()
for Cifs and Ceph is not uncached if no data is read due
to an I/O error. This patch fixes this. fscache_readpages_cancel()
does this kind of job but takes a list_head * as input, so
a new routine that takes a page * as input is introduced.
Li Wan
Introduce a routine for uncaching a single no-data page, typically
in readpage().
Signed-off-by: Li Wang
---
fs/ceph/cache.h | 13 +
1 file changed, 13 insertions(+)
diff --git a/fs/ceph/cache.h b/fs/ceph/cache.h
index ba94940..eb0ec76 100644
--- a/fs/ceph/cache.h
+++ b/fs/ceph/cach
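The header diff is truncated; a minimal sketch of the ceph-side wrapper, assuming it simply translates the inode into its fscache cookie and hands the page to the new single-page cancel API:

/*
 * Sketch of the ceph wrapper (fs/ceph/cache.h): look up the inode's
 * fscache cookie and forward to the new single-page cancel call.
 */
static inline void ceph_fscache_readpage_cancel(struct inode *inode,
                                                struct page *page)
{
        struct ceph_inode_info *ci = ceph_inode(inode);

        if (PageFsCache(page))
                fscache_readpage_cancel(ci->fscache, page);
}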
Implement the routine for uncaching a single no-data page, typically
in readpage().
Signed-off-by: Li Wang
---
fs/cifs/fscache.c |7 +++
1 file changed, 7 insertions(+)
diff --git a/fs/cifs/fscache.c b/fs/cifs/fscache.c
index 8d4b7bc..168f184 100644
--- a/fs/cifs/fscache.c
+++ b/fs/cifs/f
Introduce a routine for uncaching a single no-data page, typically
in readpage().
Signed-off-by: Li Wang
---
fs/cifs/fscache.h | 13 +
1 file changed, 13 insertions(+)
diff --git a/fs/cifs/fscache.h b/fs/cifs/fscache.h
index 24794b6..c712f42 100644
--- a/fs/cifs/fscache.h
+++ b/fs/c
Currently, if a page is allocated into fscache in readpage() but no data
is read, it is not uncached. This patch fixes this.
Signed-off-by: Li Wang
---
fs/cifs/file.c |4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 7f2..1
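The cifs caller is truncated too; the usage mirrors the ceph change, roughly along these lines (sketch, with the helper name assumed from the two cifs patches above):

/*
 * Sketch of the cifs call site (fs/cifs/file.c): on a failed read,
 * cancel the page that readpage() had allocated into fscache rather
 * than leaving behind an apparently cached, empty page.
 */
static void cifs_readpage_error_sketch(struct file *file,
                                       struct page *page, int rc)
{
        if (rc < 0)
                cifs_fscache_readpage_cancel(file_inode(file), page);
}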
On Nov 9, 2013, at 4:18 AM, Sage Weil wrote:
> The SimpleMessenger implementation of the Messenger interface has grown
> organically over many years and is one of the cruftier bits of code in
> Ceph. The idea of building a fresh implementation has come up several
> times in the past, but is n
Hi Sage:
We always use mkcephfs to create our ceph cluster. Before
0.72, all was OK.
But in 0.72, we found the OSDs can't start up using "service ceph -a
start". After looking at the init-ceph.in script, I found that a return
value error prevented the OSDs from starting up. And if we modify a little co
This is a really useful summary from Malcolm.
In addition to the coordinator/copytool interface, there is the question of
where the policy engine gets its data from. Lustre has the MDS changelog,
which Robinhood uses to replicate metadata into its MySQL database with all
the indices that it wants
> - Keeping the tape drives busy is always difficult… tape drives are
> now regularly exceeding 250MB/s on a single stream so the storage
> system needs to be able to maintain a high data rate. Tape drive
> performance drops rapidly when the drives have to stop and then
> restart as the buffers fil
It may be even more lightweight than that, depending on the Mass Storage
system's intrinsic capabilities.
Here at CERN, we are testing the use of ceph as the disk cache in our
Mass Storage system, and the only requirement we had was to store and
retrieve a file (put/get functionality). We've thus use
Congratulations Cephers, it's great news.
@sage - Can we consider cephfs production ready now?
Many thanks
Karan Singh
- Original Message -
From: "Sage Weil"
To: ceph-devel@vger.kernel.org, ceph-us...@ceph.com
Sent: Saturday, 9 November, 2013 7:40:04 AM
Subject: [ceph-users] v0.72 Empe