Now, when a cap message is sent for an inode/snap, it does not include the
ctime of the inode/snap, so the mtime can get ahead of the ctime.
BTW, for snap, I'm not sure whether to send the ctime of the snap or null.
Signed-off-by: Jianpeng Ma majianp...@gmail.com
---
fs/ceph/caps.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
---
man/ceph-mds.8 | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/man/ceph-mds.8 b/man/ceph-mds.8
index 0d34f75..5399308 100644
--- a/man/ceph-mds.8
+++ b/man/ceph-mds.8
@@ -81,7 +81,7 @@ Connect to specified monitor (instead of looking through
.UNINDENT
.SH AVAILABILITY
On 06/24/2013 01:41 AM, Yan, Zheng wrote:
From: Yan, Zheng zheng.z@intel.com
Sorry it took so long, I intended to take a look at this
for you sooner.
I would also like to thank you for this nice clear
description. It made it very easy to understand
why you were proposing the change, and to
On Jul 1, 2013, at 7:00 PM, Loic Dachary l...@dachary.org wrote:
Hi,
Today Sam pointed out that the API for LRC ( Xorbas Hadoop Project Page,
Locally Repairable Codes (LRC) http://smahesh.com/HadoopUSC/ for instance )
would need to be different from the one initially proposed:
An
On Tue, Jul 2, 2013 at 9:07 PM, Alex Elder alex.el...@linaro.org wrote:
On 06/24/2013 01:41 AM, Yan, Zheng wrote:
From: Yan, Zheng zheng.z@intel.com
Sorry it took so long, I intended to take a look at this
for you sooner.
I would also like to thank you for this nice clear
description.
On Tue, 2 Jul 2013, Yan, Zheng wrote:
From: Yan, Zheng zheng.z@intel.com
The locking order for pending vmtruncate is wrong; it can lead to the
following race:
write                    vmtruncate work
--
lock i_mutex
check
Reviewed-by: Sage Weil s...@inktank.com
On Tue, 2 Jul 2013, Yan, Zheng wrote:
From: Yan, Zheng zheng.z@intel.com
If caps are being revoked by the auth MDS, don't consider them as
issued even if they are still issued by the non-auth MDS. The non-auth
MDS should also be revoking/exporting these
Reviewed-by: Sage Weil s...@inktank.com
On Tue, 2 Jul 2013, Yan, Zheng wrote:
From: Yan, Zheng zheng.z@intel.com
If we receive new caps from the auth MDS and the non-auth MDS is
revoking the newly issued caps, we should release the caps from
the non-auth MDS. The scenario is filelock's
On Tue, 2 Jul 2013, majianpeng wrote:
Currently, atime is updated only for CEPH_CAP_FILE_EXCL. Change this to
also update it for CEPH_CAP_FILE_RD.
Can we introduce a global config option (bool mds_atime in
common/config_opts.h, maybe) so that users can turn this off? And/or add
a 'relatime' option? More users won't want
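As a sketch of what such a knob might look like in common/config_opts.h (hedged: the option name and default here are hypothetical, just following the suggestion above):

```cpp
// Hypothetical addition to common/config_opts.h -- name and default
// are illustrative, per the mds_atime suggestion above.
OPTION(mds_atime, OPT_BOOL, true)  // allow the MDS to update atime on read
```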
On Tue, 2 Jul 2013, majianpeng wrote:
Ceph currently doesn't support updating atime after a read operation if the
open mode is CEPH_CAP_FILE_RD. There are two reasons:
1: the fs client doesn't set a dirty cap for CEPH_CAP_FILE_RD.
2: the MDS only updates the atime if the condition
dirty
On Jul 2, 2013, at 10:07 AM, Atchley, Scott atchle...@ornl.gov wrote:
On Jul 1, 2013, at 7:00 PM, Loic Dachary l...@dachary.org wrote:
Hi,
Today Sam pointed out that the API for LRC ( Xorbas Hadoop Project Page,
Locally Repairable Codes (LRC) http://smahesh.com/HadoopUSC/ for instance )
On Tue, 2 Jul 2013, Alex Elder wrote:
On 06/24/2013 01:41 AM, Yan, Zheng wrote:
From: Yan, Zheng zheng.z@intel.com
Sorry it took so long, I intended to take a look at this
for you sooner.
I would also like to thank you for this nice clear
description. It made it very easy to
On 07/02/2013 01:10 PM, Sage Weil wrote:
On Tue, 2 Jul 2013, Alex Elder wrote:
On 06/24/2013 01:41 AM, Yan, Zheng wrote:
From: Yan, Zheng zheng.z@intel.com
Sorry it took so long, I intended to take a look at this
for you sooner.
I would also like to thank you for this nice clear
David,
I just looked and saw that it's been pulled a couple hours ago.
Can I also trouble you into looking at my patches for Ceph for the
FSCache? We're using it (in production starting today actually); we're
not able to find any bugs with the current iteration. But it's always
nice to have an
Milosz Tanski mil...@adfin.com wrote:
I just looked and saw that it's been pulled a couple hours ago.
Can I also trouble you into looking at my patches for Ceph for the
FSCache? We're using it (in production starting today actually); we're
not able to find any bugs with the current
David,
It hasn't changed since the patch I posted inline 4 days ago (same as
the one that went out to linux-cachefs mailing list). You can also get
the 'wip-ceph-fscache' branch from my gitrepo:
https://bitbucket.org/adfin/linux-fs.git. Finally, you can take a look
at the changes in the browser
Milosz Tanski mil...@adfin.com wrote:
You can also get the 'wip-ceph-fscache' branch from my gitrepo:
There's only one patch from you there. Shouldn't there be at least two as you
posted?
David
--
To unsubscribe from this list: send the line unsubscribe ceph-devel in
the body of a message to
I've combined them into one.
On Tue, Jul 2, 2013 at 4:49 PM, David Howells dhowe...@redhat.com wrote:
Milosz Tanski mil...@adfin.com wrote:
You can also get the 'wip-ceph-fscache' branch from my gitrepo:
There's only one patch from you there. Shouldn't there be at least two as you
posted?
I think we should be able to cover most cases by adding an interface like:
set<int> minimum_to_read(const set<int> &want_to_read,
                         const set<int> &available_chunks);
which returns the smallest set required to read/rebuild the chunks in
want_to_read given the chunks in available_chunks. Alternately, we
Hi Sage (et al),
We have rebased the former wip-libcephfs branch, on the model of the
rebased example branch, as planned, and also pulled it up to Ceph's
v65 tag/master, also as planned.
In addition to cross checking this, Adam has updated our Ganesha client
driver to use the ll v2 API, and this
Hi Matt,
On Tue, 2 Jul 2013, Matt W. Benjamin wrote:
Hi Sage (et al),
We have rebased the former wip-libcephfs branch, on the model of the
rebased example branch, as planned, and also pulled it up to Ceph's
v65 tag/master, also as planned.
In addition to cross checking this, Adam has
Okay, my analysis of the patch:
Looking at your index structure, ceph has per-fsid indices under the top
ceph index and then per-inode indices under those? Are fsids universally
unique - or just for a given server/cell/whatever?
+#ifdef CONFIG_CEPH_FSCACHE
+ if (PageFsCache(page))
+
Scott,
You make a good point comparing (5/3) RS with Xorbas, but a small nit:
The I/O to recover from a single failure for both schemes is 5 blocks so it is
as efficient as Xorbas.
Maybe not. You would probably issue I/O to all the remaining 7 blocks to cover
for the possibility of double
Hi everyone,
I have a sysvinit script on fedora 18 (systemd 195) with
### BEGIN INIT INFO
# Provides: ceph
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Required-Start: $remote_fs $named $network $time
# Required-Stop: $remote_fs $named $network $time
#
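For comparison (a hedged sketch: the unit name, paths, and Type here are illustrative, not an official Ceph unit), systemd maps those LSB facilities to targets roughly like this:

```ini
# Hypothetical /etc/systemd/system/ceph.service equivalent of the
# LSB header above; names and paths are illustrative only.
[Unit]
Description=Ceph distributed storage
# Required-Start: $remote_fs $named $network $time
After=remote-fs.target nss-lookup.target network.target time-sync.target

[Service]
Type=forking
ExecStart=/etc/init.d/ceph start
ExecStop=/etc/init.d/ceph stop

[Install]
# Default-Start: 2 3 4 5 corresponds to the multi-user runlevels
WantedBy=multi-user.target
```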
Hi Danny,
Can you review wip-5492? The original ceph_sbindir was introduced by your
patch fixing up python install locations,
4d16f38f48e276497190c8bc03abc55c40e18eed.
http://tracker.ceph.com/issues/5492
https://github.com/ceph/ceph/pull/389
Thanks!
sage
Ceph currently doesn't update atime after a read, so add this function.
Signed-off-by: Jianpeng Ma majianp...@gmail.com
---
fs/ceph/file.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index 656e169..fa74e6f 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@