ceph_sync_read does not call file_accessed() to update atime, but the
buffered read path does. So add the call.
Signed-off-by: Jianpeng Ma majianp...@gmail.com
---
fs/ceph/file.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index fa74e6f..b0e6f0b 100644
---
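The fix is presumably along these lines (a sketch; the exact call site
inside the sync read path of fs/ceph/file.c is an assumption):

	/* The buffered path updates atime via generic_file_aio_read();
	 * the sync path has to call file_accessed() itself.  filp is
	 * the struct file * being read. */
	if (ret >= 0)
		file_accessed(filp);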
Hi Sage,
Andrey has amended our changes according to your comments, except the one
regarding re-fetching xattrs from the MDS after setting or removing an
extended attribute of a filesystem object, because that requires some
resources we do not have at the moment (Andrey is sick until Thursday) -
On Jul 2, 2013, at 10:12 PM, Paul Von-Stamwitz pvonstamw...@us.fujitsu.com
wrote:
Scott,
You make a good point comparing (5/3) RS with Xorbas, but a small nit:
The I/O to recover from a single failure is 5 blocks for both schemes, so
it is as efficient as Xorbas.
Maybe not. You would
'Twas brillig, and Sage Weil at 03/07/13 04:06 did gyre and gimble:
Hi everyone,
I have a sysvinit script on Fedora 18 (systemd 195) with
### BEGIN INIT INFO
# Provides: ceph
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Required-Start: $remote_fs $named
On Wed, 3 Jul 2013, Ilya Storozhilov wrote:
Hi Sage,
Andrey has amended our changes according to your comments, except the one
regarding re-fetching xattrs from the MDS after setting or removing an
extended attribute of a filesystem object, because that requires some
resources we do not
On Wed, 3 Jul 2013, Li Wang wrote:
This patch gives a preliminary implementation of inline data support for Ceph.
Comments are appreciated.
Signed-off-by: Li Wang liw...@ubuntukylin.com
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
A few comments below (although I didn't have time
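For context, inline data means storing a small file's contents alongside
its inode on the MDS, so a read can be served without an OSD round trip.
A rough sketch of the read-path check (the identifiers below are
assumptions, not necessarily what the patch uses):

	/* Sketch only: if the MDS supplied the file data inline with
	 * the inode, serve the read locally instead of issuing OSD
	 * reads.  i_inline_version and CEPH_INLINE_NONE are assumed
	 * names. */
	struct ceph_inode_info *ci = ceph_inode(inode);

	if (ci->i_inline_version != CEPH_INLINE_NONE)
		ret = read_inline_data(ci, buf, len, off);  /* hypothetical helper */
	else
		ret = ceph_sync_read(file, buf, len, &off, &checkeof);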
Hi Sage,
no problem I'll review the pull request.
Danny
On 03.07.2013 07:12, Sage Weil wrote:
Hi Danny,
Can you review wip-5492? The original ceph_sbindir was introduced by your
patch fixing up python install locations,
4d16f38f48e276497190c8bc03abc55c40e18eed.
Hi Scott,
Point taken.
I was thinking about Loic's description of decode, where k+m blocks are
requested and the data is decoded once k blocks have been received. But he
was referring to full-stripe reads, where all the memory is allocated.
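The control flow being described is roughly the following (a sketch in
kernel-style C, not actual Ceph code):

	#include <stdbool.h>

	#define K 5	/* data chunks per stripe */
	#define M 3	/* coding chunks per stripe */

	struct stripe_read {
		bool arrived[K + M];	/* which chunk reads completed */
		int  arrived_count;
		bool decoded;
	};

	/* Called on each chunk-read completion; returns true once any
	 * K of the K+M chunks are in and the stripe can be decoded. */
	static bool stripe_chunk_arrived(struct stripe_read *sr, int idx)
	{
		if (!sr->arrived[idx]) {
			sr->arrived[idx] = true;
			sr->arrived_count++;
		}
		if (!sr->decoded && sr->arrived_count >= K) {
			sr->decoded = true;
			return true;	/* caller runs the RS decode now */
		}
		return false;
	}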
Degraded reads and block repair are a different matter.
pvs
David,
I took your suggestions and updated my wip branch (on bitbucket) with
a handful of fixes except for the locking around registering the
cookie. I'm not sure what's the correct thing to do there.
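One possible shape for that locking, purely as a sketch (the lock field is
hypothetical; ceph_fscache_inode_object_def is assumed to be the per-inode
index definition):

	/* Serialize per-inode cookie registration so two concurrent
	 * opens don't race to fscache_acquire_cookie() for the same
	 * inode. */
	mutex_lock(&ci->i_fscache_mutex);	/* hypothetical field */
	if (!ci->fscache)
		ci->fscache = fscache_acquire_cookie(fsc->fscache,
						     &ceph_fscache_inode_object_def,
						     ci);
	mutex_unlock(&ci->i_fscache_mutex);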
On Tue, Jul 2, 2013 at 7:40 PM, David Howells dhowe...@redhat.com wrote:
Okay, my analysis of
Because of the changes made in the dcache.h header file, files that use
the d_lock and d_count fields of the dentry structure need to be
changed accordingly. All spin_lock() and spin_unlock() calls on d_lock
are replaced by the corresponding d_lock() and d_unlock() calls.
References to d_count are
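As an illustration, the conversion is mechanical (example only, not a
hunk from the patch):

	/* before */
	spin_lock(&dentry->d_lock);
	dentry->d_flags |= DCACHE_REFERENCED;
	spin_unlock(&dentry->d_lock);

	/* after */
	d_lock(dentry);
	dentry->d_flags |= DCACHE_REFERENCED;
	d_unlock(dentry);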
Hi Matt,
This hit a problem in QA.. running
ceph.git/qa/workunits/kernel_untar_build.sh on ceph-fuse crashes with
2013-07-03 12:51:14.176096 7fae7ee93780 10 client.4106 _lookup
1004031.head(ref=4 cap_refs={} open={} mode=40775 size=0 mtime=2012-02-29
16:32:49.00
Hi Yan-
On Mon, 1 Jul 2013, Sage Weil wrote:
On Mon, 1 Jul 2013, Yan, Zheng wrote:
ping
I think this patch should go into 3.11, or the issue should be fixed by other means
Applied this to the testing branch, thanks. Let me know if there are any
others I missed!
This broke rbd, which was using
Yan,
Can you help me understand how this change fixes:
http://tracker.ceph.com/issues/2019 ? The symptom on the client is
that the processes get stuck waiting in ceph_mdsc_do_request according
to /proc/PID/stack.
Thanks in advance,
- Milosz
On Wed, Jul 3, 2013 at 5:57 PM, Sage Weil
On Wed, 3 Jul 2013, Milosz Tanski wrote:
Yan,
Can you help me understand how this change fixes:
http://tracker.ceph.com/issues/2019 ? The symptom on the client is
that the processes get stuck waiting in ceph_mdsc_do_request according
to /proc/PID/stack.
Note that the blocked request is a
On 07/03/2013 04:57 PM, Sage Weil wrote:
Hi Yan-
On Mon, 1 Jul 2013, Sage Weil wrote:
On Mon, 1 Jul 2013, Yan, Zheng wrote:
ping
I think this patch should go into 3.11, or the issue should be fixed by other means
Applied this to the testing branch, thanks. Let me know if there are any
others I
On Thu, Jul 4, 2013 at 5:57 AM, Sage Weil s...@inktank.com wrote:
Hi Yan-
On Mon, 1 Jul 2013, Sage Weil wrote:
On Mon, 1 Jul 2013, Yan, Zheng wrote:
ping
I think this patch should go into 3.11, or the issue should be fixed by other means
Applied this to the testing branch, thanks. Let me know if
On Wed, 3 Jul 2013, Sage Weil wrote:
On Thu, 4 Jul 2013, Yan, Zheng wrote:
On Thu, Jul 4, 2013 at 5:57 AM, Sage Weil s...@inktank.com wrote:
Hi Yan-
On Mon, 1 Jul 2013, Sage Weil wrote:
On Mon, 1 Jul 2013, Yan, Zheng wrote:
ping
I think this patch should go into 3.11
On Thu, Jul 4, 2013 at 6:07 AM, Milosz Tanski mil...@adfin.com wrote:
Yan,
Can you help me understand how this change fixes:
http://tracker.ceph.com/issues/2019 ? The symptom on the client is
that the processes get stuck waiting in ceph_mdsc_do_request according
to /proc/PID/stack.
The bug
Milosz Tanski mil...@adfin.com wrote:
Looking at your index structure, ceph has per-fsid indices under the top
ceph index and then per-inode indices under those? Are fsids universally
unique - or just for a given server/cell/whatever?
It's my understanding that it's a UUID assigned to the
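For reference, the hierarchy being discussed would look roughly like this
with the 3.x fscache API (a sketch based on the in-progress ceph fscache
work, not necessarily the exact patch):

	/* top-level "ceph" netfs index, registered at module init */
	extern struct fscache_netfs ceph_cache_netfs;

	/* one index per fsid (i.e. per cluster) under it */
	static const struct fscache_cookie_def ceph_fscache_fsid_object_def = {
		.name = "CEPH.fsid",
		.type = FSCACHE_COOKIE_TYPE_INDEX,
	};

	fsc->fscache = fscache_acquire_cookie(ceph_cache_netfs.primary_index,
					      &ceph_fscache_fsid_object_def,
					      fsc);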
Hi, all!
It's time to start planning our Ceph Developer Summit again! This summit
is where planning for the upcoming Emperor release will happen, and
attendance is (as always) open to all. It will be a virtual summit using
IRC, Etherpads, and Google Hangouts.
Here's our high-level summit
On Thu, 4 Jul 2013, David Howells wrote:
Milosz Tanski mil...@adfin.com wrote:
Looking at your index structure, ceph has per-fsid indices under the top
ceph index and then per-inode indices under those? Are fsids universally
unique - or just for a given server/cell/whatever?
Scott, et al.
Here is an interesting paper from Usenix HotStorage Conference which provides
local codes without additional capacity overhead.
Check it out. (abstract with links to paper and slides)
Because of the d_count name change made in dcache.h, all references
to d_count have to be changed to d_refcount. There is no change in
logic and everything should just work.
Signed-off-by: Waiman Long waiman.l...@hp.com
---
fs/ceph/inode.c |4 ++--
fs/ceph/mds_client.c |2 +-
2
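An illustrative hunk (not taken from the actual patch) showing the
mechanical rename:

-	if (dentry->d_count > 1)
+	if (dentry->d_refcount > 1)
 		d_drop(dentry);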