Hi Sage,
I hope these two get a chance; they are both straightforward and
useful for cache tiering.
https://github.com/ceph/ceph/pull/5362
https://github.com/ceph/ceph/pull/5473
Cheers,
Li Wang
On 2015/8/13 5:20, Sage Weil wrote:
The infernalis feature freeze is coming up Real Soon Now. I've
Hi Loic,
Thanks very much, we will give it a try ASAP.
Cheers,
Li Wang
On 2015/8/10 21:09, Loic Dachary wrote:
Hi,
You should be able to follow the instructions at
https://github.com/dachary/teuthology/tree/wip-6502-v3-dragonfly#openstack-backend
on your OpenStack cluster. I expect
to publish the motivation and the idea behind it
on the mailing list as well; hope that is not annoying :)
Cheers,
Li Wang
On 2015/6/13 10:06, Sage Weil wrote:
Hi Li,
Reviewing this now! See comments on the PR.
Just FYI, the current convention is to send kernel patches to the list,
and to use github
the bandwidth contention
with user traffic.
Signed-off-by: Mingxin Liu mingxin...@ubuntukylin.com
Reviewed-by: Li Wang liw...@ubuntukylin.com
Suggested-by: Xinze Chi xmdx...@gmail.com
The code:
https://github.com/ceph/ceph/pull/5362
Mingxin Liu (5):
osd: parse and queue user specified flushing
to a consistent state
With this process, it is transparent to scrub. We will describe it in
detail at the blueprint page later.
What do you think?
Cheers,
Li Wang
On 2015/6/3 8:49, Sage Weil wrote:
On Tue, 2 Jun 2015, Li Wang wrote:
I think for scrub, we have a relatively easy way to solve it,
add
the content to consistent.
Cheers,
Li Wang
On 2015/6/18 22:14, Sage Weil wrote:
On Thu, 18 Jun 2015, Li Wang wrote:
Hi Sage,
I think we can process the write in the following steps:
(1) Submit transaction A to the journal, including a PGLog update and a
write-zero operation at (offset, length)
(2
Hi Sage and Samuel,
We have updated the status of our design and implementation of
multi-object transaction support at
http://tracker.ceph.com/projects/ceph/wiki/Rados_-_multi-object_transaction_support,
your comments
are appreciated.
Cheers,
Li Wang
your comments.
Cheers,
Li Wang
On 2015/6/13 10:06, Samuel Just wrote:
In the Infernalis CDS, we had a session on RADOS multi-object transactions.
I'd like to continue the discussion at the upcoming Jewel CDS. I thought I'd
prime the discussion by asking: if librados supported multi-object read
Just had a quick look; the idea behind it seems to be to give
flexible, very fine-grained object-level behavior control,
for example, how long an object will stay in a pool.
However, it is not very convincing whether it is worth the
effort to do this fine-grained control; the benefit may
think new store is great for some of the scenarios,
while metadata-only is desirable for some others; they do not
contradict each other. What do you think?
Cheers,
Li Wang
On 2015/6/1 8:39, Sage Weil wrote:
On Fri, 29 May 2015, Li Wang wrote:
An important usage of Ceph is to integrate
its distributed
file system on top of ext4 will do journaling itself; the
double journaling degrades performance. Similarly, the guest
file system on top of RBD will do journaling itself, if necessary,
so there is theoretically no problem with turning off the data journaling of RADOS.
Cheers,
Li Wang
the feedback of the community, and we may submit it as a
blueprint for discussion in the coming CDS.
Cheers,
Li Wang
--
To unsubscribe from this list: send the line unsubscribe ceph-devel in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo
This patch is to do writeback throttling for cache tiering,
which is similar to what the Linux kernel does for
page cache writeback. The motivation and original idea were
proposed by Nick Fisk, detailed in his email below. In our
implementation, we introduce a parameter
file performance is generally
better than small-file performance. If we introduce fragments, it looks like the
object storage itself now cares about object data allocation.
What is the community's opinion?
Cheers,
Li Wang
hit_set_grade_decay_rate option to 'osd pool set/get'
- add 'osd tier cache-measure'
Also, the latter could use an explanatory commit message.
Aside from that, I don't see anything obviously wrong with the patch.
-Joao
On 05/21/2015 02:34 PM, Li Wang wrote:
From: MingXin Liu mingxin
From: Min Chen minc...@ubuntukylin.com
Signed-off-by: Min Chen minc...@ubuntukylin.com
Reviewed-by: Li Wang liw...@ubuntukylin.com
---
src/include/rados.h| 1 +
src/include/rados/librados.h | 1 +
src/include/rados/librados.hpp | 1 +
src/librados/librados.cc | 2 ++
src/osd
From: Min Chen minc...@ubuntukylin.com
Signed-off-by: Min Chen minc...@ubuntukylin.com
Reviewed-by: Li Wang liw...@ubuntukylin.com
---
src/test/librados/tier.cc | 176 ++
1 file changed, 176 insertions(+)
diff --git a/src/test/librados/tier.cc b/src
The conventional I/O hints given via fadvise() give applications
a chance to manage the page cache. This patch extends
the I/O hint ability to control cache pool behavior to avoid cache
pollution. For example, under WRITEBACK mode, consider the
following operation series: WRITE A; WRITE B; READ A,
From: Min Chen minc...@ubuntukylin.com
Signed-off-by: Min Chen minc...@ubuntukylin.com
Reviewed-by: Li Wang liw...@ubuntukylin.com
---
doc/man/8/rbd.rst | 1 +
src/rbd.cc| 2 ++
2 files changed, 3 insertions(+)
diff --git a/doc/man/8/rbd.rst b/doc/man/8/rbd.rst
index 4552951..3fb747f
From: Min Chen minc...@ubuntukylin.com
rbd_copyup_request is used only when copy_on_read is enabled
for RBD child images. It is independent of rbd_obj_request.
Signed-off-by: Min Chen minc...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
Signed-off-by: Yunchuan Wen yunchuan
From: Min Chen minc...@ubuntukylin.com
Signed-off-by: Min Chen minc...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
---
drivers/block/rbd.c | 128
1 file changed, 128
From: Min Chen minc...@ubuntukylin.com
Signed-off-by: Min Chen minc...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
---
drivers/block/rbd.c | 21 +
1 file changed, 21 insertions(+)
diff --git a/drivers
From: Min Chen minc...@ubuntukylin.com
Signed-off-by: Min Chen minc...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
---
drivers/block/rbd.c | 186 +++-
1 file changed, 183
Updated at https://github.com/ceph/ceph/pull/4735,
thanks for the comments
On 2015/5/21 17:12, Ilya Dryomov wrote:
On Thu, May 21, 2015 at 10:16 AM, Li Wang liw...@ubuntukylin.com wrote:
From: Min Chen minc...@ubuntukylin.com
Signed-off-by: Min Chen minc...@ubuntukylin.com
Reviewed-by: Li
This is an implementation of a temperature-based object
eviction policy for cache tiering. The current eviction
policy is based only on the latest access time, without
considering access frequency in the history. This
policy is apt to leave the just-accessed object in the
cache pool, even
From: MingXin Liu mingxin...@ubuntukylin.com
Signed-off-by: MingXin Liu mingxin...@ubuntukylin.com
Reviewed-by: Li Wang liw...@ubuntukylin.com
---
src/mon/MonCommands.h | 8 +++--
src/mon/OSDMonitor.cc | 87 ---
2 files changed, 88 insertions
From: MingXin Liu mingxin...@ubuntukylin.com
Signed-off-by: MingXin Liu mingxin...@ubuntukylin.com
Reviewed-by: Li Wang liw...@ubuntukylin.com
---
src/osd/ReplicatedPG.cc | 110 +---
1 file changed, 58 insertions(+), 52 deletions(-)
diff --git a/src
From: MingXin Liu mingxin...@ubuntukylin.com
Signed-off-by: MingXin Liu mingxin...@ubuntukylin.com
Reviewed-by: Li Wang liw...@ubuntukylin.com
---
src/osd/osd_types.cc | 32 ++--
src/osd/osd_types.h | 49 +
2 files
From: MingXin Liu mingxin...@ubuntukylin.com
Signed-off-by: MingXin Liu mingxin...@ubuntukylin.com
Reviewed-by: Li Wang liw...@ubuntukylin.com
---
doc/dev/cache-pool.rst | 4
doc/man/8/ceph.rst | 12 +---
doc/rados/operations/pools.rst | 7 +++
qa/workunits
From: MingXin Liu mingxin...@ubuntukylin.com
Signed-off-by: MingXin Liu mingxin...@ubuntukylin.com
Reviewed-by: Li Wang liw...@ubuntukylin.com
---
src/common/config_opts.h | 2 ++
src/mon/OSDMonitor.cc| 14 +-
2 files changed, 15 insertions(+), 1 deletion(-)
diff --git a/src
This is a new feature of RBD layering: when reading an object
from the child, if it does not exist, the kernel RBD client will not only
request the object from the parent, but also write it to the child,
and these jobs are done asynchronously. Therefore,
subsequent accesses to this object will hit the child
a slave. For example, in Step 3, the master will also
check whether there is a conflicting in-flight transaction.
Cheers,
Li Wang
On 2015/3/5 15:49, Sage Weil wrote:
On Thu, 5 Mar 2015, Li Wang wrote:
On 2015/3/5 8:56, Sage Weil wrote:
On Wed, 4 Mar 2015, Li Wang wrote:
Hi Sage, Please take a look
On 2015/3/5 8:56, Sage Weil wrote:
On Wed, 4 Mar 2015, Li Wang wrote:
Hi Sage, Please take a look if the below works,
[...]
I think this works. A few notes:
1- I don't think there's a need to persist the txn on the master until the
slaves reply with PREPARE_ACK.
I think the txn must
in
Cheers,
Li Wang
My thoughts: 'rbd info' is intended to describe
the STATIC, intrinsic properties of an image, while
the watcher information is dynamic, something like
'rbd showmapped'. What is your opinion?
Cheers,
Li Wang
On 2015/1/22 9:35, Sage Weil wrote:
On Wed, 21 Jan 2015, Li Wang wrote:
Currently
Currently, RBD does not provide an easy way to
find out who has opened a specified image, which complicates
cloud maintenance: sometimes the administrator
finds that an RBD image cannot be deleted, with
an error 'image has watchers', but no further
information is available. RADOS has a command
This feature has been done and is currently undergoing review
in its 3rd version. Nevertheless, we are pleased to give any help.
BTW, Sage, could you please take a little time to review the patch
we submitted one month ago?
Cheers,
Li Wang
On 2014/9/5 10:39, Cheng Cheng wrote:
Hi Ceph
, and any suggestions, tests,
and technical involvement are welcome, to make it ready to
be merged upstream.
Cheers,
Li Wang
On 2014/5/6 11:54, Sage Weil wrote:
On Mon, 5 May 2014, Justin Erenkrantz wrote:
On Thu, May 1, 2014 at 12:32 PM, Patrick McGarry patr...@inktank.com wrote:
People like Jim Jagielski, Brian Stevens, Michael Tiemann, and many
others are bound to have years of experience that can help us make
pgp_num is the upper bound on the number of OSD combinations, right?
So we can reduce pgp_num to constrain the possible combinations,
and the data loss probability depends only on pgp_num,
say, pgp_num/Pn(replica_num) (since (a, b) and (b, a) are different PGs,
it is a permutation rather than
Provided 3 osds are down simultaneously
On 2014/3/7 11:51, Li Wang wrote:
Just had a quick look. It seems crush could meet the demand,
say, if we have 100 osds, replica_num is 3, then we partition the
100 osds into 3 trees, 'take' iterates on the 3 trees, for each tree,
select 1 osd
Just had a quick look. It seems CRUSH could meet the demand:
say, if we have 100 OSDs and replica_num is 3, then we partition the
100 OSDs into 3 trees, 'take' iterates over the 3 trees, and for each tree
we select 1 OSD. Then the probability of losing data is at most n*n*n/Cn3;
can we make it better?
Then it seems that Coverity is only able to perform intra-procedural
checks; is there an inter-procedural check option to turn on?
On 2014/3/4 6:53, John Spray wrote:
On Mon, Mar 3, 2014 at 10:23 PM, Sage Weil s...@inktank.com wrote:
** CID 1188299: Data race condition (MISSING_LOCK)
:)
Cheers,
Li Wang
On 2014/2/13 5:17, Patrick McGarry wrote:
Hey Ceph developers,
We are getting ready to submit our project list to be a mentoring
organization for GSoC 2014 and Sage suggested that perhaps there might
be a few more mentors/projects out there. At the very least perhaps
some of you
will be the Year of the Linux Desktop^W^W^WCephFS! To
that end, we
should schedule a daily standup to coordinate development
activities.
The regular participants are probably:
Zheng Yan (Shanghai, China)
Li Wang (Changsha, China)
Sage Weil
-by: Li Wang liw...@ubuntukylin.com
---
Yunchuan Wen (25):
ceph: Add quota feature flags
ceph: Add quota_info_t to store quota info
mds: Add quota field to inode_t
mds: Shutdown old mds without quota support
mds: Handle quota update
ceph: Add MClientQuota message type
ceph: Add
This patch implements inline data support for Ceph.
Review at:
https://github.com/ceph/ceph/pull/1081
Pull at:
https://github.com/kylinstorage/ceph.git wip-inline
Li Wang (23):
ceph: Add inline data feature
ceph: Add inline state definition
Signed-off-by: Yunchuan Wen yunchuan
This patch implements inline data support for Ceph.
Review at:
https://github.com/ceph/ceph/pull/1021
Pull at:
https://github.com/kylinstorage/ceph.git wip-inline
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
Against v4:
Forbid old mds
From: Yunchuan Wen yunchuan...@ubuntukylin.com
Synchronize object->store_limit[_l] with the new inode->i_size after file writing.
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Min Chen minc...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
fs/ceph/file.c
, then immediately followed
by a write, the initialization may not have completed, and the code will
reach the ASSERT in fscache_submit_exclusive_op(), causing a kernel
bug.
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Min Chen minc...@ubuntukylin.com
Signed-off-by: Li Wang liw
From: Yunchuan Wen yunchuan...@ubuntukylin.com
The following script can easily panic the kernel:
#!/bin/bash
mount -t ceph -o fsc MONADDR:/ cephfs
rm -rf cephfs/foo
dd if=/dev/zero of=cephfs/foo bs=8 count=512
echo 3 > /proc/sys/vm/drop_caches
dd if=cephfs/foo of=/dev/null bs=8 count=1024
Currently, if a new page is allocated into fscache in readpage() but
no data is read into it due to an error encountered while reading from the OSDs,
the slot in fscache is not uncached. This patch fixes that.
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
fs/ceph/addr.c |1 +
1 file
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
fs/ceph/cache.h | 13 +
1 file changed, 13 insertions(+)
diff --git a/fs/ceph/cache.h b/fs/ceph/cache.h
index ba94940..da95f61 100644
--- a/fs/ceph/cache.h
+++ b/fs/ceph/cache.h
@@ -67,6 +67,14 @@ static inline int
Currently, if a new page is allocated into fscache in readpage() but
no data is read into it due to an error encountered while reading from the OSDs,
the slot in fscache is not uncached. This patch fixes that.
Li Wang (2):
ceph: Introduce a routine for uncaching single no data page from
Personally, I don't think there is an issue with the current implementation,
either. If there is no ACTIVE MDS, the mount process is put to wait until an
updated MDS map is received; when an active MDS is indicated in the map, it
will be woken up and continue the mount process; otherwise, EIO is returned on
timeout. If
the client be uselessly woken up ...
On 2013/12/9 22:26, Li Wang wrote:
Personally, I don't think there is issue for current implementation,
either. If no ACTIVE mds, the mount process put to wait, until updated
MDS map received and with active mds present indicated in the map, it
will be waked
by ceph.mount client for
printing simple message about what's going on.
2013/12/9 Li Wang liw...@ubuntukylin.com mailto:liw...@ubuntukylin.com
Personally, I don't think there is issue for current implementation,
either. If no ACTIVE mds, the mount process put to wait, until
updated MDS map
I just had a quick look and did not think it through thoroughly.
(1) If possible, there is a race condition: an earlier write gets
blocked by FULL, a later write is lucky enough to be sent to the OSD after the
FULL -> NOFULL transition,
then the earlier write is resent, causing the old data to overwrite the new
data.
(2) If it
the inline
threshold.
Cheers,
Li Wang
On 11/28/2013 11:02 AM, Yan, Zheng wrote:
On Wed, Nov 27, 2013 at 9:40 PM, Li Wang liw...@ubuntukylin.com wrote:
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
src/client/Client.cc | 55
length be zero, which will
capture some situations and almost eliminate the migration overhead (v4);
(3) It can be implicitly turned off at mount time by the client (v4);
(4) It can be turned off globally by configuring the MDS (v4).
v4 is coming soon.
Cheers,
Li Wang
On 11/30/2013 01:01 AM, Matt W
This patch implements inline data support for Ceph.
Review at:
https://github.com/ceph/ceph/pull/884
Pull at:
https://github.com/kylinstorage/ceph.git inline
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
Against v3:
Add the inline switch
Please install the libboost-program-options-dev package before compiling;
for example, on Ubuntu:
sudo apt-get install libboost-program-options-dev
On 12/02/2013 09:57 AM, charles L wrote:
Please, can someone help? I'm compiling Ceph... I did the make -j2 command and got this:
cannot find
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
src/include/ceph_fs.h |3 +++
1 file changed, 3 insertions(+)
diff --git a/src/include/ceph_fs.h b/src/include/ceph_fs.h
index 47ec1f1..07a78b8 100644
--- a/src/include/ceph_fs.h
+++ b
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
src/mds/mdstypes.cc | 12 ++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/src/mds/mdstypes.cc b/src/mds/mdstypes.cc
index df6cd8e..01a04e8 100644
--- a/src/mds
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
src/mds/CInode.cc | 15 +++
src/messages/MClientReply.h |7 +++
2 files changed, 22 insertions(+)
diff --git a/src/mds/CInode.cc b/src/mds/CInode.cc
index
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
src/mds/Capability.h |2 ++
1 file changed, 2 insertions(+)
diff --git a/src/mds/Capability.h b/src/mds/Capability.h
index fb6b3dc..995ea3a 100644
--- a/src/mds/Capability.h
+++ b/src
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
src/osdc/Objecter.h | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/src/osdc/Objecter.h b/src/osdc/Objecter.h
index 41973dd..40f03de 100644
--- a/src/osdc
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
src/messages/MClientReply.h |2 ++
1 file changed, 2 insertions(+)
diff --git a/src/messages/MClientReply.h b/src/messages/MClientReply.h
index 896245f..47908e9 100644
--- a/src/messages
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
src/client/Inode.h |5 +
1 file changed, 5 insertions(+)
diff --git a/src/client/Inode.h b/src/client/Inode.h
index cc054a6..bb17706 100644
--- a/src/client/Inode.h
+++ b/src/client
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
src/client/Client.cc | 55 +-
1 file changed, 54 insertions(+), 1 deletion(-)
diff --git a/src/client/Client.cc b/src/client/Client.cc
index
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
src/client/Client.cc | 41 +
src/client/Client.h |3 +++
2 files changed, 44 insertions(+)
diff --git a/src/client/Client.cc b/src/client
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
src/messages/MClientCaps.h | 19 ++-
1 file changed, 18 insertions(+), 1 deletion(-)
diff --git a/src/messages/MClientCaps.h b/src/messages/MClientCaps.h
index 117f241
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
src/mds/Locker.cc |7 +++
1 file changed, 7 insertions(+)
diff --git a/src/mds/Locker.cc b/src/mds/Locker.cc
index 63e0e08..4b02a56 100644
--- a/src/mds/Locker.cc
+++ b/src/mds
This patch implements inline data support for Ceph.
It is also available to be pulled from:
https://github.com/kylinstorage/ceph.git inline
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
Against v2:
Streamline the inline data migration
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
src/ceph_mds.cc |1 +
src/include/ceph_features.h |2 ++
2 files changed, 3 insertions(+)
diff --git a/src/ceph_mds.cc b/src/ceph_mds.cc
index 88b807b..dac676f 100644
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
src/mds/CInode.cc |7 +++
1 file changed, 7 insertions(+)
diff --git a/src/mds/CInode.cc b/src/mds/CInode.cc
index c8b00ef..4756865 100644
--- a/src/mds/CInode.cc
+++ b/src/mds
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
src/client/Client.cc |5 +
1 file changed, 5 insertions(+)
diff --git a/src/client/Client.cc b/src/client/Client.cc
index 19d31e0..3beab8f 100644
--- a/src/client/Client.cc
+++ b/src
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
src/client/Client.cc | 99 ++
1 file changed, 76 insertions(+), 23 deletions(-)
diff --git a/src/client/Client.cc b/src/client/Client.cc
Wake up possible waiters, invoke the callback, if any, and unregister the request
Signed-off-by: Li Wang liw...@ubuntukylin.com
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
---
net/ceph/osd_client.c |7 +++
1 file changed, 7 insertions(+)
diff --git a/net/ceph/osd_client.c b/net
Clean up if an error occurred, rather than going through the normal process
Signed-off-by: Li Wang liw...@ubuntukylin.com
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
---
fs/ceph/addr.c |3 +++
1 file changed, 3 insertions(+)
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 1e561c0
Signed-off-by: Li Wang liw...@ubuntukylin.com
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Li Wang (2):
ceph: Clean up if error occurred in finish_read()
ceph: Add necessary clean up if invalid reply received in
handle_reply()
fs/ceph/addr.c|3 +++
net/ceph
Hi Gregory,
Thanks for your comments.
On 11/13/2013 01:58 AM, Gregory Farnum wrote:
On Tue, Nov 12, 2013 at 6:10 AM, Li Wang liw...@ubuntukylin.com wrote:
Hi,
We want to implement encryption support for Ceph.
Currently, we have the draft design,
1 When user mount a ceph directory
Hi Alex,
Thanks for your comments.
On 11/13/2013 09:07 AM, Alex Elsayed wrote:
Li Wang wrote:
Hi,
We want to implement encryption support for Ceph.
Currently, we have the draft design,
1 When user mount a ceph directory for the first time, he can specify a
passphrase
Close file before return.
Fix coverity issue: CID 1019571
Signed-off-by: Li Wang liw...@ubuntukylin.com
Reported-by: Xianxia Xiao xianxiax...@ubuntukylin.com
---
src/mds/MDCache.cc |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/mds/MDCache.cc b/src/mds/MDCache.cc
Fix two coverity issues.
Li Wang (2):
rbd: Release resource before return
mds: Release resource before return
src/mds/MDCache.cc |2 +-
src/rbd.cc |6 +++---
2 files changed, 4 insertions(+), 4 deletions(-)
--
1.7.9.5
Hi Yan,
zero_user_segment() has already invoked flush_dcache_page() for us; we do
not want to flush the d-cache twice.
Cheers,
Li Wang
On 11/13/2013 09:19 PM, Yan, Zheng wrote:
On Wed, Nov 13, 2013 at 3:22 PM, Li Wang liw...@ubuntukylin.com wrote:
If the length of data to be read in readpage
acl also as mount-time
option. Maybe you could take it into consideration.
Cheers,
Li Wang
On 11/11/2013 03:18 PM, Guangliang Zhao wrote:
v5: handle the roll back in ceph_set_acl(), correct
ceph_get/set_cached_acl()
v4: check the validity before set/get_cached_acl()
v3: handle the attr
, with
encryption enabled, the same file is not allowed to be opened by a second
writer; alternatively, we enforce O_LAZYIO on the file, but the application
is supposed to be aware of this.
We plan to submit it as a blueprint for the incoming CDS, comments are
welcome.
Cheers,
Li Wang
If the length of data to be read in readpage() is exactly
PAGE_CACHE_SIZE, the original code does not flush the d-cache
for data consistency after finishing reading. This patch fixes
this.
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
fs/ceph/addr.c |8 ++--
1 file changed, 6
Currently, if a page is allocated into fscache in readpage() but no data is
read into it, it is not uncached. This patch fixes that.
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
fs/cifs/file.c |4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/fs/cifs/file.c b/fs/cifs
Introduce a routine for uncaching a single no-data page, typically
in readpage().
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
fs/cifs/fscache.h | 13 +
1 file changed, 13 insertions(+)
diff --git a/fs/cifs/fscache.h b/fs/cifs/fscache.h
index 24794b6..c712f42 100644
--- a/fs
Implement the routine for uncaching a single no-data page, typically
in readpage().
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
fs/cifs/fscache.c |7 +++
1 file changed, 7 insertions(+)
diff --git a/fs/cifs/fscache.c b/fs/cifs/fscache.c
index 8d4b7bc..168f184 100644
--- a/fs/cifs
Introduce a routine for uncaching a single no-data page, typically
in readpage().
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
fs/ceph/cache.h | 13 +
1 file changed, 13 insertions(+)
diff --git a/fs/ceph/cache.h b/fs/ceph/cache.h
index ba94940..eb0ec76 100644
--- a/fs/ceph
Currently, the page allocated into fscache in readpage()
for CIFS and Ceph is not uncached if no data is read due
to an I/O error. This patch fixes that. fscache_readpages_cancel()
does this kind of job but takes a list_head * as input, so
a new routine taking a page * as input is introduced.
Li
Introduce a new API, fscache_readpage_cancel(), for uncaching a single
no-data page from fscache.
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
include/linux/fscache.h | 11 +++
1 file changed, 11 insertions(+)
diff --git a/include/linux/fscache.h b/include/linux/fscache.h
index
Similar to the routine for multiple pages, except
that it takes a page * as input rather than a list_head *.
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
fs/fscache/page.c |8
1 file changed, 8 insertions(+)
diff --git a/fs/fscache/page.c b/fs/fscache/page.c
index 7f5c658..0c69f72
Currently, if a page is allocated into fscache in readpage() but no data is
read into it, it is not uncached. This patch fixes that.
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
fs/ceph/addr.c |1 +
1 file changed, 1 insertion(+)
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index
Hi,
It seems to me there are three issues; you can take a look below to see
whether they are really there.
On 11/08/2013 01:23 PM, Guangliang Zhao wrote:
v4: check the validity before set/get_cached_acl()
v3: handle the attr change in ceph_set_acl()
v2: remove some redundant code in ceph_setattr()
ceph_osdc_readpages() returns the number of bytes read; currently,
the code only allocates a full-zero page into fscache. This patch
fixes that.
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
fs/ceph/addr.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/ceph/addr.c b/fs
Free allocated memory before return.
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
src/os/chain_xattr.cc | 14 +-
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/src/os/chain_xattr.cc b/src/os/chain_xattr.cc
index 8ca8156..c020c9d 100644
--- a/src/os