In commit bcc74879a, Loic modified the parameter name but forgot to update it
in qa/workunits/erasure-code/bench.sh. This patch renames it there as well so
that bench.sh works again.
Signed-off-by: Ma Jianpeng jianpeng...@intel.com
---
qa/workunits/erasure-code/bench.sh | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
Thanks Sage. I'm not sure what is going on with librados2 (including
rados and rbd for that matter).
I'll see what I can do.
- Luis
On 07/15/2014 12:39 AM, Sage Weil wrote:
Hey Luis,
I pushed wip-dencoder, which moves ceph-dencoder to ceph from ceph-common.
This avoids the dependency for
On Tue, Jul 15, 2014 at 9:44 AM, Lakshminarayana Mavunduri
lakshminarayana.mavund...@sandisk.com wrote:
Hi,
With the following set of steps, we are seeing data loss when reading from
clones.
1) Create an image with image format 2 (in this case we made the size
1024MB).
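For reference, a format-2 image and a clone of it are typically set up with
commands along these lines (the pool, image, and snapshot names below are
placeholders, not necessarily the ones used in this report):

  rbd create --image-format 2 --size 1024 rbd/parent   # 1024MB format-2 image
  rbd snap create rbd/parent@snap1                     # snapshot the parent
  rbd snap protect rbd/parent@snap1                    # protect it so it can be cloned
  rbd clone rbd/parent@snap1 rbd/child                 # create the clone to read from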
Hi All,
There is a new release of ceph-deploy, the easy deployment tool
for Ceph.
There is a minor cleanup for when ceph-deploy disconnects from remote hosts, which
was previously creating some tracebacks. And there is a new flag for the `new`
subcommand that lets you specify an fsid for the cluster.
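For example, to bootstrap a new cluster with a pre-chosen fsid, the invocation
would look something like this (the uuid and hostname are placeholders):

  uuidgen                                                  # generate a uuid to use as the fsid
  ceph-deploy new --fsid 3f2a8b9e-1c4d-4e5a-9b6f-7d8c0a1b2c3d mon1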
The full
Well I did end up putting the data in different pools for custom
placement. However, I run into trouble during retrieval. The messy way
is to query every pool to check where the data is stored. This
requires many round trips to machines in far-off racks. Is it
possible this information is
One of Ceph's design tentpoles is *avoiding* a central metadata lookup
table. The Ceph MDS maintains a filesystem hierarchy but doesn't
really handle the sort of thing you're talking about, either. If you
want some kind of lookup, you'll need to build it yourself — although
you could make use of
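As a rough sketch of the do-it-yourself route, one option is to keep a small
index object in a dedicated pool and store name-to-pool mappings in its omap,
e.g. with the rados CLI (the pool, object, and key names below are made up):

  rados -p index_pool setomapval name_index myobject data_pool_a   # record which pool holds myobject
  rados -p index_pool getomapval name_index myobject               # look it up later without querying every pool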
Hi Sage,
Looks like it is a configuration issue in configure.ac. I'm fixing
the issue now and will be sending a patch soon.
- Luis
On 07/15/2014 12:39 AM, Sage Weil wrote:
Hey Luis,
I pushed wip-dencoder, which moves ceph-dencoder to ceph from ceph-common.
This avoids the dependency for
Hi,
This is part of a larger pull request (https://github.com/ceph/ceph/pull/1875),
specifically
https://github.com/dachary/ceph/commit/64175d9d6e1c8f6cbdc5396e85ba8c94168afba8#diff-8a5d2cfe0db7cb75615a5f6b2245148bL53.
But it's worth applying as suggested until it gets merged, to avoid
This Firefly point release fixes a potential data corruption problem
when ceph-osd daemons run on top of XFS and service Firefly librbd
clients. A recently added allocation hint that RBD utilizes triggers
an XFS bug on some kernels (Linux 3.2, and likely others) that leads
to data corruption and