On 12/22/2015 01:55 PM, Wido den Hollander wrote:
On 12/21/2015 11:51 PM, Josh Durgin wrote:
On 12/22/2015 05:34 AM, Wido den Hollander wrote:
On 21-12-15 23:51, Josh Durgin wrote:
On 12/21/2015 11:00 AM, Wido den Hollander wrote:
My discard code now works, but I wanted to verify. If I understand Jason
correctly it would be a matter of figuring out the 'order' of an image
and calling rbd_discard in a loop until you reach the end of the image.
You'd need to get the order via
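For reference, a minimal sketch of that loop against librbd's C API, assuming
rbd_stat() is the call that snippet was about to name (error handling
abbreviated):

#include <rbd/librbd.h>
#include <stdint.h>

/* discard an entire image in object-sized chunks */
int discard_whole_image(rbd_image_t image)
{
    rbd_image_info_t info;
    int r = rbd_stat(image, &info, sizeof(info));
    if (r < 0)
        return r;
    uint64_t obj_size = 1ULL << info.order;   /* same as info.obj_size */
    for (uint64_t off = 0; off < info.size; off += obj_size) {
        uint64_t len = info.size - off < obj_size ? info.size - off : obj_size;
        r = rbd_discard(image, off, len);
        if (r < 0)
            return r;
    }
    return 0;
}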
On 12/21/2015 11:06 AM, Wido den Hollander wrote:
Hi,
While implementing the buildvolfrom method in libvirt for RBD I'm stuck
at some point.
$ virsh vol-clone --pool myrbdpool image1 image2
This would clone image1 to a new RBD image called 'image2'.
The code I've written now does:
1. Create
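The message is cut off here, but a hedged sketch of the usual librbd C
sequence behind such a clone (not necessarily what the libvirt patch does;
the snapshot name below is hypothetical):

#include <rbd/librbd.h>

int clone_volume(rados_ioctx_t io, const char *src, const char *dst)
{
    rbd_image_t img;
    int order = 0;
    int r = rbd_open(io, src, &img, NULL);
    if (r < 0)
        return r;
    /* a clone needs a protected snapshot as its parent */
    r = rbd_snap_create(img, "clone-snap");        /* hypothetical name */
    if (r == 0)
        r = rbd_snap_protect(img, "clone-snap");
    rbd_close(img);
    if (r < 0)
        return r;
    return rbd_clone(io, src, "clone-snap", io, dst,
                     RBD_FEATURE_LAYERING, &order);
}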
On 12/21/2015 07:09 AM, Jason Dillaman wrote:
You will have to ensure that your writes are properly aligned with the object
size (or object set if fancy striping is used on the RBD volume). In that
case, the discard is translated to remove operations on each individual backing
object. The
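In other words, only the object-aligned middle of a discard can become
removes; a sketch of the rounding, assuming the object size is a power of
two:

#include <stdint.h>

/* which part of a discard [off, off+len) covers whole objects? */
void aligned_extent(uint64_t off, uint64_t len, int order,
                    uint64_t *first, uint64_t *last)
{
    uint64_t obj_size = 1ULL << order;
    *first = (off + obj_size - 1) & ~(obj_size - 1); /* round start up */
    *last = (off + len) & ~(obj_size - 1);           /* round end down */
    /* objects in [*first, *last) can be removed; the unaligned head
     * and tail have to be zeroed or truncated instead */
}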
. However, rbd_img_request_create()
consumes a ref on snapc, so calling ceph_put_snap_context() after
a successful rbd_img_request_create() leads to an extra put. Fix it.
Cc: sta...@vger.kernel.org # 3.18+
Signed-off-by: Ilya Dryomov <idryo...@gmail.com>
Whoops!
Reviewed-by: J
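The fix amounts to treating the reference as owned by the request on
success; a simplified sketch of the corrected call-site pattern in
drivers/block/rbd.c:

snapc = ceph_get_snap_context(rbd_dev->header.snapc);
img_request = rbd_img_request_create(rbd_dev, offset, length,
                                     OBJ_OP_WRITE, snapc);
if (!img_request) {
    ceph_put_snap_context(snapc);   /* put only on failure */
    return -ENOMEM;
}
/* on success the request owns the snapc ref, so no extra put */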
On 10/24/2015 05:09 AM, Loic Dachary wrote:
Hi Josh,
The next firefly release as found at https://github.com/ceph/ceph/tree/firefly
passed the rbd suite (http://tracker.ceph.com/issues/11644#note-105 and
http://tracker.ceph.com/issues/11644#note-120). Do you think the firefly branch
is ready
On 10/16/2015 08:37 PM, Deneau, Tom wrote:
On an ubuntu trusty system,
* I installed v9.1.0 and could bring up a single node cluster with it.
* I did a git checkout of v9.1.0, followed by ./autogen.sh; ./configure;
make
Then when I try to run, for example, the rados I just built using
On 10/15/2015 06:45 AM, Sage Weil wrote:
On Thu, 15 Oct 2015, Mykola Golub wrote:
On Thu, Oct 15, 2015 at 08:47:58AM -0400, Jason Dillaman wrote:
But we don't need them to match between different platforms, no? Is
linking 64bit code with 32bit possible (supported)?
Also, for this particular
-by: Josh Durgin <jdur...@redhat.com>
Signed-off-by: Ilya Dryomov <idryo...@gmail.com>
---
drivers/block/rbd.c | 33 +++--
1 file changed, 23 insertions(+), 10 deletions(-)
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index ccbc3cbbf24e..07f666f
On 10/14/2015 12:34 PM, Jason Dillaman wrote:
In general, I like the approach.
I am concerned about passing a void* + length to specify the option value since
you really can't protect against the user providing data in the incorrect
format. For example, if the backend treated
On 09/22/2015 02:55 PM, Shinobu Kinjo wrote:
Hello,
Do any of you know why *MAX_RBD_IMAGES* was changed from 16 to 128?
I hope that Dan remembers -;
http://resources.ustack.com/ceph/ceph/commit/2a6dcabf7f1b7550a0fa4fd223970ffc24ad7870
I don't think there's a reason for the exact limit.
l Message -
From: "Josh Durgin" <josh.dur...@inktank.com>
To: "Huamin Chen" <hc...@redhat.com>, ceph-devel@vger.kernel.org
Sent: Monday, September 14, 2015 6:35:49 PM
Subject: Re: rbd lock list command failure
On 09/09/2015 09:26 AM, Huamin Chen wrote:
Hi
Runn
On 09/09/2015 09:26 AM, Huamin Chen wrote:
Hi
Running "rbd lock list" inside a Docker container yields mixed results.
Sometimes I can get the right results but most times I just get errors.
A good run is like this:
[root@host server]# docker run --privileged --net=host -v /dev:/dev -v
On 08/28/2015 12:16 PM, Loic Dachary wrote:
Hi Abhishek,
We've just had an example of a backport merged into hammer although it did not
follow the procedure: https://github.com/ceph/ceph/pull/5691
It's a key aspect of backports: we're bound to follow procedure, but
developers are allowed
It'd be nice to get omap into the Python bindings; it's been a pain not
having it several times:
https://github.com/ceph/ceph/pull/5272
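For reference, the librados C calls such a binding would wrap (a minimal
sketch; the object and key names are made up):

#include <rados/librados.h>

int set_omap_pair(rados_ioctx_t ioctx)
{
    const char *keys[] = { "mykey" };
    const char *vals[] = { "myval" };
    size_t lens[] = { 5 };               /* value lengths */
    rados_write_op_t op = rados_create_write_op();
    rados_write_op_omap_set(op, keys, vals, lens, 1);
    int r = rados_write_op_operate(op, ioctx, "myobject", NULL, 0);
    rados_release_write_op(op);
    return r;
}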
On 08/12/2015 02:20 PM, Sage Weil wrote:
The infernalis feature freeze is coming up Real Soon Now. I've marked
some of the pull requests on github that I would
On 08/03/2015 08:44 AM, Loic Dachary wrote:
Hi Josh,
The next hammer release as found at https://github.com/ceph/ceph/tree/hammer
passed the rbd suite (http://tracker.ceph.com/issues/11990#rbd). Do you think
it is ready for QE to start their own round of testing?
Looks ready to me. Thanks!
On 07/21/2015 12:22 PM, Stefan Priebe wrote:
On 21.07.2015 at 19:19, Jason Dillaman wrote:
Does this still occur if you export the images to the console (i.e.
rbd export cephstor/disk-116@snap - > dump_file)?
Would it be possible for you to provide logs from the two rbd export
runs on your
to firefly :(. We'll need to be
more vigilant about checking non-trivial backports when we're
going through all the bugs periodically.
Josh
On 07/21/2015 12:52 PM, Stefan Priebe wrote:
So this is really this old bug?
http://tracker.ceph.com/issues/9806
Stefan
On 21.07.2015 at 21:46, Josh
On 07/13/2015 11:42 AM, Sage Weil wrote:
On Mon, 13 Jul 2015, Jason Dillaman wrote:
But it doesn't provide an easily composable way
of integrating waiting on other events in the application. eventfd is
easy to embed in your (e)poll loop or any kind of event library
(libev).
Agreed -- which
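A minimal sketch of that pattern, using only standard Linux APIs: the
librbd completion callback signals an eventfd that an epoll loop watches:

#include <rbd/librbd.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <stdint.h>
#include <unistd.h>

/* librbd completion callback: wake up the event loop */
static void on_complete(rbd_completion_t c, void *arg)
{
    uint64_t one = 1;
    write(*(int *)arg, &one, sizeof(one));
}

/* register the wakeup fd with an existing epoll instance */
int add_wakeup_fd(int epfd, int *efd)
{
    *efd = eventfd(0, EFD_NONBLOCK);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = *efd };
    return epoll_ctl(epfd, EPOLL_CTL_ADD, *efd, &ev);
}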
On 07/07/2015 08:18 AM, Haomai Wang wrote:
Hi All,
Currently librbd supports aio_read/write with a specified
callback (AioCompletion). It would be nice for simple caller logic, but
it also has some problems:
1. Performance bottleneck: Create/Free AioCompletion and librbd
internal finisher thread
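For context, the per-request completion flow being discussed, in librbd's C
API (a sketch; each request creates and frees its own AioCompletion):

#include <rbd/librbd.h>

int write_via_aio(rbd_image_t image, uint64_t off, size_t len, const char *buf)
{
    rbd_completion_t c;
    int r = rbd_aio_create_completion(NULL, NULL, &c); /* or pass a callback */
    if (r < 0)
        return r;
    r = rbd_aio_write(image, off, len, buf, c);
    if (r == 0) {
        rbd_aio_wait_for_complete(c);
        r = rbd_aio_get_return_value(c);
    }
    rbd_aio_release(c);      /* caller frees the completion */
    return r;
}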
, end, b->num_nodes, bad);
+ ceph_decode_8_safe(p, end, b->num_nodes, bad);
b->node_weights = kcalloc(b->num_nodes, sizeof(u32), GFP_NOFS);
if (b->node_weights == NULL)
return -ENOMEM;
Reviewed-by: Josh Durgin <jdur...@redhat.com>
Note that ceph's ObjectStore api is internal, and not stable. Maybe not
the best thing to include in packages, though keeping it as an option
at build time might make sense. The api itself isn't that prone to
change, but internal symbols (hooray c++...) certainly are, so any
update of ceph would
On 06/16/2015 10:13 AM, Josh Durgin wrote:
On 06/16/2015 12:02 AM, Xue, Chendi wrote:
HI, Josh and Andrew
I just rebased the wip-blkin branch onto 9.0.1, and did a rados bench
test on that
https://github.com/ceph/ceph/pull/4963
Thanks! Reset the branch in ceph.git.
Lttng result is collected
;
}
+ if (msg->hdr.version >= 2)
+ ceph_decode_32_safe(p, end, return_code, bad);
+
+ if (msg->hdr.version >= 3)
+ ceph_decode_32_safe(p, end, notifier_gid, bad);
+
This should be ceph_decode_64_safe. With that fixed,
Reviewed-by: Josh Durgin <jdur...@redhat.com>
/rados/test_pool_quota.sh
qa/workunits/rados/cls.sh
These resulted in osd crashes in the last rados suite run linked
earlier in the thread.
Josh
Best Regards,
-Chendi
-----Original Message-----
From: Josh Durgin [mailto:jdur...@redhat.com]
Sent: Tuesday, June 9, 2015 10:45 PM
To: Xue, Chendi; ceph
...@cs.wisc.edu
Reviewed-by: Josh Durgin <jdur...@redhat.com>
---
drivers/block/rbd.c | 3 ++-
include/linux/ceph/osd_client.h | 7 +--
net/ceph/osd_client.c | 21 -
3 files changed, 23 insertions(+), 8 deletions(-)
diff --git a/drivers/block/rbd.c b
,
Reviewed-by: Josh Durgin <jdur...@redhat.com>
/*
* an individual object operation. each may be accompanied by some data
* payload
@@ -450,10 +466,14 @@ struct ceph_osd_op {
} __attribute__ ((packed)) snap;
struct {
__le64 cookie
On 06/12/2015 08:56 AM, Douglas Fuller wrote:
Change unused ceph_osd_event structure to refer to pending watch/notify2
messages. Watch events include the separate watch and watch error callbacks
used for watch/notify2. Update rbd to use separate watch and watch error
callbacks via the new watch
Hi Chendi,
On 06/09/2015 04:59 AM, Xue, Chendi wrote:
Hi, Josh and Andrew
Today, I applied the wip-blkin branch to my 4-node ceph setup, and created
zipkin-based lttng results successfully.
Since we want to use this latency-analyzing methodology on further releases or
on the keyvalue store and
On 05/22/2015 01:01 PM, Loic Dachary wrote:
Hi Josh,
The next firefly release as found at https://github.com/ceph/ceph/tree/firefly
(68211f695941ee128eb9a7fd0d80b615c0ded6cf) passed the rbd suite
(http://tracker.ceph.com/issues/11090#rbd). Do you think it is ready for QE to
start their own
On 05/21/2015 07:56 AM, Chris H wrote:
I am assuming your clients do proper syncs and flushes? What about
applications that traditionally write to a BBU-backed RAID card and
don't explicitly call sync and flushes? I just ask because the only way
I can think of handling this is to force
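librbd's cache does have a guard aimed at clients that never flush: it stays
in writethrough mode until it sees the first flush from the guest. A hedged
ceph.conf sketch:

[client]
rbd cache = true
rbd cache writethrough until flush = true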
, but they're each still
operating correctly individually. This kind of higher-level monitoring
info for each site's health could perhaps come from calamari.
Josh
On Wed, May 13, 2015 at 10:21 PM, Josh Durgin <jdur...@redhat.com> wrote:
On 05/13/2015 01:07 AM, Haomai
On 04/15/2015 02:49 AM, Loic Dachary wrote:
Hi Josh,
The next giant release as found at https://github.com/ceph/ceph/tree/giant
passed the rbd suite (http://tracker.ceph.com/issues/11153#rbd). Do you think
it is ready for QE to start their own round of testing?
Yup, looks ready to me.
On 04/16/2015 09:42 AM, Sage Weil wrote:
I think the simplest way to address this is to talk about compatibility in
terms of the upstream stable releases (firefly, hammer, etc.), and test
that compatibility with teuthology tests from ceph-qa-suite.git. We have
some basic inter-version
PM, Josh Durgin <jdur...@redhat.com> wrote:
I don't see any commits that would be likely to affect that between
0.80.7 and 0.80.9.
Is this after upgrading an existing cluster?
Could this be due to fs aging beneath your osds?
How are you measuring create
On 04/14/2015 12:48 AM, Alexandre DERUMIER wrote:
Hi,
I would like to know how to enable the object map on hammer?
I found a post-hammer commit here:
https://github.com/ceph/ceph/commit/3a7b28d9a2de365d515ea1380ee9e4f867504e10
rbd: add feature enable/disable support
- Specifies which RBD
--with-radosgw --with-libatomic-ops
--without-lttng --disable-static --without-cryptopp --with-tcmalloc
if you change it to --with-lttng it will fail
-Neo
On Wed, Apr 8, 2015 at 4:20 PM, Josh Durgin <jdur...@redhat.com> wrote:
Are you still seeing the same error on the latest master?
The gitbuilder
Are you still seeing the same error on the latest master?
The gitbuilder is building successfully now:
http://gitbuilder.sepia.ceph.com/gitbuilder-ceph-tarball-trusty-i386-basic/log.cgi?log=21f60a9d26f821ba1cd1db8bb79f8aff2a028582
On 04/08/2015 04:07 PM, kernel neophyte wrote:
Need help! This
...@redhat.com wrote:
I found that I could not build the docs on Ubuntu 14.10 with the proper
packages installed. Kefu is looking into Asphyxiate, which is very
temperamental. I installed an Ubuntu 11.10 in order to generate the docs.
David
On 3/17/15 10:11 AM, Sage Weil wrote:
On Tue, 17 Mar 2015, Josh
On 03/17/2015 01:58 AM, Loic Dachary wrote:
On 17/03/2015 09:45, Xinze Chi wrote:
Sorry, I have not measured it.
But I think it should really reduce latency when we hit a miss in the
cache pool and do_proxy_read.
Interesting. I bet Jason or Josh have an opinion about this.
Yes, it sounds like a
On 03/17/2015 09:40 AM, Ken Dreyer wrote:
I had a question about the way that we're handling man pages.
In 356a749f63181d401d16371446bb8dc4f196c2a6, 'rbd: regenerate rbd(8)
man page', it looks like man/rbd.8 was regenerated from doc/man/8/rbd.rst.
It seems like it would be more efficient to
On 03/03/2015 03:28 PM, Ken Dreyer wrote:
On 03/03/2015 04:19 PM, Sage Weil wrote:
Hi,
This is just a heads up that we've identified a performance regression in
v0.80.8 from previous firefly releases. A v0.80.9 is working its way
through QA and should be out in a few days. If you haven't
----- Original Message -----
From: Loic Dachary <l...@dachary.org>
To: Josh Durgin <jdur...@redhat.com>
Cc: Ceph Development <ceph-devel@vger.kernel.org>
Sent: Saturday, February 28, 2015 3:30:28 PM
Subject: rbd and the next firefly release
Hi Josh,
The rbd teuthology suite for the next
----- Original Message -----
From: Loic Dachary <l...@dachary.org>
To: Yuri Weinstein <ywein...@redhat.com>
Cc: Ceph Development <ceph-devel@vger.kernel.org>, Tamil Muthamizhan
<tmuth...@redhat.com>
Sent: Wednesday, February 18, 2015 9:56:14 AM
Subject: Re: dumpling integration branch for v0.67.12
On 02/11/2015 02:24 PM, Yuri Weinstein wrote:
rbd
['45365', '45366', '45367']
http://tracker.ceph.com/issues/10842
unable to connect to apt-mirror.front.sepia.ceph.com
['45349', '45350', '45351', '45355', '45356', '45357', '45363']
http://tracker.ceph.com/issues/10802
error:
On 02/10/2015 07:17 AM, Loic Dachary wrote:
Hi Josh,
The rbd teuthology suite for the next giant release as found in
https://github.com/ceph/ceph/commits/giant-backports came back (see
http://pulpito.ceph.com/loic-2015-02-02_23:28:17-rbd-giant-backports---basic-multi)
with one ceph-qa-suite
On 02/09/2015 12:27 PM, Jason Dillaman wrote:
I would agree with your assessment that
http://tracker.ceph.com/issues/10560#teuthology-runs-on-3944c77c404c4a05886fe8276d5d0dd7e4f20410-6-february
sounds like a repeat of http://tracker.ceph.com/issues/4959.
Josh, thoughts?
Yeah, that looks
On 02/05/2015 02:50 PM, Sage Weil wrote:
I wonder if we should simplify the cds workflow a bit to go straight to an
etherpad outline of the blueprint instead of the wiki blueprint doc. I
find it a bit disorienting to be flipping between the two, and after the
fact find it frustrating that there
image_name foo
snap_id 2
snap_name snap
overlap 0
Signed-off-by: Ilya Dryomov <idryo...@redhat.com>
Reviewed-by: Josh Durgin <jdur...@redhat.com>
turning
unconditional rbd_dev_parent_put() into a no-op.
Fixes: http://tracker.ceph.com/issues/10352
Cc: sta...@vger.kernel.org # 3.11+
Signed-off-by: Ilya Dryomov <idryo...@redhat.com>
Reviewed-by: Josh Durgin <jdur...@redhat.com>
miserable
situation as far as proper locking goes regardless.
Yeah, looks like we need some refactoring to read parent_overlap safely
in the I/O path in a few places.
Reviewed-by: Josh Durgin <jdur...@redhat.com>
Cc: sta...@vger.kernel.org # 3.11+
Signed-off-by: Ilya Dryomov <idryo...@redhat.com>
On 01/12/2015 09:13 AM, Ilya Dryomov wrote:
Speaking of fsx and thrasher. Last time I tried, fsx (both kernel and
librbd) couldn't survive thrashing due to watch/notify troubles. Are
all the new watch/notify pieces that were supposed to fix this in?
There's just one bug left afaik:
On 01/12/2015 06:26 AM, Loic Dachary wrote:
Hi Josh,
While looking at errors from a giant (plus backports) run of the RBD suite, I
stumbled upon:
http://tracker.ceph.com/issues/10513
and the error is unclear (at least to me ;-). Would you have an idea? For
information the backports
On 12/08/2014 09:03 AM, Sage Weil wrote:
The current RADOS behavior is that reads (on any given object) are always
processed in the order they are submitted by the client. This causes a
few headaches for the cache tiering that it would be nice to avoid. It
also occurs to me that there are
On 12/08/2014 03:48 PM, Sage Weil wrote:
- Use floating point log function. This is problematic for the kernel
implementation (no floating point), is slower than the lookup table, and
makes me worry about whether the floating point calculations are
consistent across architectures (the mapping
On 09/17/2014 01:55 PM, Somnath Roy wrote:
Hi Sage,
We are experiencing severe librbd performance degradation in Giant over the
firefly release. Here is the experiment we did to isolate it as a librbd problem.
1. Single OSD is running latest Giant and client is running fio rbd on top of
firefly
. Now, it is similar to firefly throughput!
So, looks like rbd_cache=true was the culprit.
Thanks Josh !
Regards
Somnath
-----Original Message-----
From: Josh Durgin [mailto:josh.dur...@inktank.com]
Sent: Wednesday, September 17, 2014 2:20 PM
To: Somnath Roy; ceph-devel@vger.kernel.org
Subject
On 08/11/2014 07:50 PM, Haomai Wang wrote:
Hi Sage, Josh:
ImageIndex is aimed at holding each object's location info, which avoids
extra checking for non-existing objects. It's only used when the image flags
include LIBRBD_CREATE_NONSHARED. Otherwise, ImageIndex will become gawp and
have no effect.
I
On 07/18/2014 02:45 AM, Dimitris Bliablias wrote:
Extend the rbd utility with a new option named '--force'. This option
will be used by the 'rbd import' command to allow overwriting an
existing rbd image, something which is currently forbidden. If the image
has snapshots, the command returns an
On 06/12/2014 01:15 AM, Ma, Jianpeng wrote:
In func do_bench_write, if io_size is zero, it can cause a floating point exception.
Signed-off-by: Jianpeng Ma jianpeng...@intel.com
Applied, thanks!
Josh
---
src/rbd.cc | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/src/rbd.cc
for the purposes of the
overlap check.
Signed-off-by: Ilya Dryomov <ilya.dryo...@inktank.com>
---
Good catch! This should be included in any stable kernels 3.10 or later
too.
Reviewed-by: Josh Durgin <josh.dur...@inktank.com>
drivers/block/rbd.c | 10 +-
1 file changed, 9 insertions(+), 1
add a blueprint for
it on the wiki:
https://wiki.ceph.com/Planning/Blueprints/Submissions
Josh
On Tue, Jun 10, 2014 at 9:16 AM, Josh Durgin
<josh.dur...@inktank.com> wrote:
On 06/05/2014 12:01 AM, Haomai Wang wrote:
Hi,
Previously I sent a mail about the difficult of rbd snapshot size
On 06/10/2014 01:56 AM, Vilobh Meshram wrote:
How does Ceph guarantee data isolation for volumes which are not meant
to be shared in an OpenStack tenant?
When used with OpenStack, data isolation is provided at the
OpenStack level, so that all users who are part of the same tenant will be
able to
On 06/05/2014 12:01 AM, Haomai Wang wrote:
Hi,
Previously I sent a mail about the difficulty of rbd snapshot size
statistics. The main solution is using an object map to store the changes.
The problem is we can't handle concurrent modification by multiple clients.
Lack of an object map (like the pointer map in
On 06/02/2014 10:22 AM, Sage Weil wrote:
Ideally the change comes from Josh, who originally put the notice there,
but I think it shouldn't matter. We relicensed rbd.cc as LGPL2 a while
back (it was GPL due to a header we used?) and got confirmations from all
authors. It might be worth doing a
On 05/27/2014 03:19 PM, Thorsten Behrens wrote:
I wrote:
Sage Weil wrote:
If anybody is interested in helping with that effort, pull requests
are very welcome! :)
[snip]
As hinted at in the patch, something like boost::program_options would
be nice, but that's a chunk of work I'd rather
On 05/21/2014 03:03 PM, Olivier Bonvalet wrote:
On Wednesday, May 21, 2014 at 08:20 -0700, Sage Weil wrote:
You're certain that that is the correct prefix for the rbd image you
removed? Do you see the objects listed when you do 'rados -p rbd ls - |
grep <prefix>'?
I'm pretty sure, yes: since I
On 04/28/2014 04:40 PM, Gregory Farnum wrote:
The buffer changes appear to have some unnecessary Xio class
declarations in buffer.h, and it’d be nice if all of that was guarded
by HAVE_XIO blocks.
buffer.h is exposed by the librados c++ api, so hopefully nothing
xio-specific really needs to be
On 04/01/2014 07:22 AM, Guangliang Zhao wrote:
This patch adds discard support for the rbd driver.
There are three types of operations in the driver:
1. The objects would be removed if they are completely contained
within the discard range.
2. The objects would be truncated if they are partly contained
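A simplified sketch of that per-object decision, with opcode names from the
kernel's rados headers:

/* inside the discard path, for one backing object: */
if (obj_off == 0 && obj_len == obj_size)
    opcode = CEPH_OSD_OP_DELETE;    /* 1: fully contained, remove it */
else if (obj_off + obj_len == obj_size)
    opcode = CEPH_OSD_OP_TRUNCATE;  /* 2: covers the tail, truncate */
else
    opcode = CEPH_OSD_OP_ZERO;      /* otherwise zero the range */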
rbd: add discard support for rbd
Josh Durgin (8):
rbd: access snapshot context and mapping size safely
rbd: read image size for discard check safely
rbd: fix snapshot context reference count for discards
rbd: tolerate -ENOENT for discard operations
rbd: make discard trigger copy-on-write
,
rbd_parent_request_create() can just pass NULL for snapc, since the
snapshot context is only relevant for writes.
Signed-off-by: Josh Durgin <josh.dur...@inktank.com>
---
drivers/block/rbd.c | 31 +++
1 file changed, 19 insertions(+), 12 deletions(-)
diff --git a/drivers/block/rbd.c
Discards take a reference to the snapshot context of an image when
they are created. This reference needs to be cleaned up when the
request is done just as it is for regular writes.
Signed-off-by: Josh Durgin <josh.dur...@inktank.com>
---
drivers/block/rbd.c | 3 ++-
1 file changed, 2
From: Guangliang Zhao <lucienc...@gmail.com>
It needs to copy up the parent's content when doing a layered write,
but an entire-object write would overwrite it, so skip the copyup.
Signed-off-by: Guangliang Zhao <lucienc...@gmail.com>
Reviewed-by: Josh Durgin <josh.dur...@inktank.com>
Reviewed-by: Alex Elder el
the semaphore in this function.
Signed-off-by: Josh Durgin <josh.dur...@inktank.com>
---
drivers/block/rbd.c | 18 --
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 9dc33d9..486e4b5 100644
--- a/drivers/block/rbd.c
+++ b
From: Guangliang Zhao <lucienc...@gmail.com>
It can only handle read and write operations now;
extend it for the coming discard support.
Signed-off-by: Guangliang Zhao <lucienc...@gmail.com>
Reviewed-by: Josh Durgin <josh.dur...@inktank.com>
Reviewed-by: Alex Elder <el...@linaro.org>
---
drivers
discard requests as
well.
Signed-off-by: Josh Durgin <josh.dur...@inktank.com>
---
drivers/block/rbd.c | 47 +++
1 file changed, 19 insertions(+), 28 deletions(-)
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 7bcdeda..c8fc8fc 100644
the original operations should be the same as when first
sending them, so move it to a helper function.
op_type only needs to be checked once, so create a helper for that as
well and call it outside the loop in rbd_img_request_fill().
Signed-off-by: Josh Durgin <josh.dur...@inktank.com>
---
drivers
://tracker.ceph.com/issues/190
Signed-off-by: Guangliang Zhao <lucienc...@gmail.com>
Reviewed-by: Josh Durgin <josh.dur...@inktank.com>
Reviewed-by: Alex Elder <el...@linaro.org>
---
drivers/block/rbd.c | 109 +++
1 file changed, 92 insertions(+), 17 deletions
Discard requests are a form of write, so they should go through the
same process as plain write requests and trigger copy-on-write for
layered images.
Signed-off-by: Josh Durgin <josh.dur...@inktank.com>
---
drivers/block/rbd.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git
On 04/04/2014 09:17 PM, Sage Weil wrote:
Hi everyone,
This is what we have queued up for Linus for 3.15-rc1. Is there anything
missing? I pulled David Howells' patch out of for-linus (since I
think it should go in via Al's tree) and rebased. Is there anything
missing? Josh, do you want to
, but
discards done e.g. by mkfs are more likely to benefit from this
optimization. Nice refactoring of that conditional too (thanks for
suggesting that Alex)!
Reviewed-by: Josh Durgin <josh.dur...@inktank.com>
drivers/block/rbd.c | 49 ++---
1 file
On 03/12/2014 08:21 PM, Guangliang Zhao wrote:
It can only handle read and write operations now;
extend it for the coming discard support.
Signed-off-by: Guangliang Zhao <lucienc...@gmail.com>
---
Looks good.
Reviewed-by: Josh Durgin <josh.dur...@inktank.com>
drivers/block/rbd.c | 96
On 03/12/2014 08:21 PM, Guangliang Zhao wrote:
This patch adds discard support for the rbd driver.
There are three types of operations in the driver:
1. The objects would be removed if they are completely contained
within the discard range.
2. The objects would be truncated if they are partly contained
good.
Reviewed-by: Josh Durgin <josh.dur...@inktank.com>
That's a good idea. This particular assert in a Mutex is almost always
a use-after-free of the Mutex or structure containing it though.
On 02/25/2014 09:33 AM, Noah Watkins wrote:
Perhaps using gtest-style asserts (ASSERT_EQ(r, 0)) in Ceph would be
useful so we can see parameter values to the
ICL was added in 1.46 according to http://www.boost.org/doc/libs/.
This also fails on debian wheezy with only 1.42 available:
http://tracker.ceph.com/issues/7422
On 02/14/2014 04:26 PM, Matt W. Benjamin wrote:
according to google, 1.41 should have it.
- Sage Weil <s...@inktank.com> wrote:
On 12/31/2013 08:59 AM, Noah Watkins wrote:
Thanks for testing that Josh. Before cleaning up this patch set, I
have a few questions.
I'm still not clear on how to handle the std::tr1::shared_ptr<ObjListCtx>
ctx; in librados.hpp. If we change this to
ceph::shared_ptr, then we'll also need to
On 11/27/2013 12:49 AM, Rutger ter Borg wrote:
On 2013-10-01 00:52, Josh Durgin wrote:
I'm fine applying this now (with one fix). It's a nice cleanup
even if things change more soon.
For the C interface, the return value stored in the AioCompletionImpl
needs to be the length read, so
On 12/28/2013 06:34 PM, James Harper wrote:
Is the rbd locking feature-compatible with scsi3 persistent reservations?
It wasn't designed with scsi-3 exactly in mind, but using exclusive
locks and fencing might suffice.
To fence a client, you'd get its address from rbd_list_lockers() and
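A hedged sketch of that fencing flow with the librbd/librados C APIs
(fixed-size buffers for brevity; the lists come back '\0'-separated, so the
first entry of each is used here):

#include <rbd/librbd.h>
#include <rados/librados.h>
#include <stdio.h>
#include <sys/types.h>

int fence_first_locker(rados_t cluster, rbd_image_t image)
{
    char tag[128], clients[1024], cookies[1024], addrs[1024];
    size_t tag_len = sizeof(tag), clients_len = sizeof(clients);
    size_t cookies_len = sizeof(cookies), addrs_len = sizeof(addrs);
    int exclusive = 0;

    ssize_t n = rbd_list_lockers(image, &exclusive, tag, &tag_len,
                                 clients, &clients_len,
                                 cookies, &cookies_len,
                                 addrs, &addrs_len);
    if (n <= 0)
        return (int)n;               /* no lockers, or an error */

    int r = rbd_break_lock(image, clients, cookies);
    if (r < 0)
        return r;

    /* blacklist the client's address via a mon command */
    char cmd[1200];
    snprintf(cmd, sizeof(cmd),
             "{\"prefix\": \"osd blacklist\", \"blacklistop\": \"add\","
             " \"addr\": \"%s\"}", addrs);
    const char *cmds[] = { cmd };
    return rados_mon_command(cluster, cmds, 1, NULL, 0,
                             NULL, NULL, NULL, NULL);
}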
On 12/30/2013 09:56 AM, Noah Watkins wrote:
It looks like we may be outgrowing the use of export-symbols-regex and
friends to control symbol visibility for published shared libraries.
On Linux, ld seems to be quite content linking against hidden symbols,
but at least on OSX with Clang it seems
On 12/27/2013 03:34 PM, Noah Watkins wrote:
On Wed, Oct 30, 2013 at 2:02 PM, Josh Durgin <josh.dur...@inktank.com> wrote:
On 10/29/2013 03:51 PM, Noah Watkins wrote:
unsafe to me. Could you check whether you can run 'rados ls' compiled
against an old librados, but dynamically loading librados
On 12/28/2013 05:59 AM, Bjørnar Ness wrote:
The current code only lists the first 512 k/v pairs; patch attached.
Thanks for looking into this!
I moved the last_read assignment out of the conditional so it applies
to nonprintable values too, and added a simple test case:
changed, 28 insertions(+), 13 deletions(-)
These look good to me.
Reviewed-by: Josh Durgin <josh.dur...@inktank.com>
the monitor if any of these
flags are set, so paused requests can be unblocked as soon as
possible.
Fixes: http://tracker.ceph.com/issues/6079
Signed-off-by: Josh Durgin <josh.dur...@inktank.com>
---
include/linux/ceph/osd_client.h | 1 +
net/ceph/osd_client.c | 29
this at all, it is left for future work.
These patches can also be found in the wip-full-6938 branch of
ceph-client.git.
Josh Durgin (2):
libceph: block I/O when PAUSE or FULL osd map flags are set
libceph: resend all writes after the osdmap loses the full flag
include/linux/ceph
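The core of the blocking side is a small map-flag check; a sketch with the
helper and flag names the kernel's libceph used in that era:

/* should a write be queued until a new osdmap arrives? */
static bool writes_paused(struct ceph_osd_client *osdc)
{
    return ceph_osdmap_flag(osdc->osdmap, CEPH_OSDMAP_PAUSEWR) ||
           ceph_osdmap_flag(osdc->osdmap, CEPH_OSDMAP_FULL);
}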
to be fixed to avoid the
race. Old clients talking to osds with this fix may hang instead of
returning EIO and potentially corrupting an fs. New clients talking to
old osds have the same behavior as before if they encounter this race.
Fixes: http://tracker.ceph.com/issues/6938
Signed-off-by: Josh
devices behave this way as well, blocking when
they run out of space until more space is available. Do you have an
idea for avoiding this?
On 2013/12/4 7:12, Josh Durgin wrote:
The PAUSEWR and PAUSERD flags are meant to stop the cluster from
processing writes and reads, respectively. The FULL flag
On 12/06/2013 06:24 PM, Gregory Farnum wrote:
On Fri, Dec 6, 2013 at 6:16 PM, Josh Durgin <josh.dur...@inktank.com> wrote:
On 12/05/2013 08:58 PM, Gregory Farnum wrote:
On Thu, Dec 5, 2013 at 5:47 PM, Josh Durgin <josh.dur...@inktank.com>
wrote:
On 12/03/2013 03:12 PM, Josh Durgin wrote