Signed-off-by: Brian Chrisman
---
src/client/testceph.cc | 17 +
1 files changed, 17 insertions(+), 0 deletions(-)
diff --git a/src/client/testceph.cc b/src/client/testceph.cc
index da4b7a2..520fa75 100644
--- a/src/client/testceph.cc
+++ b/src/client/testceph.cc
@@ -174,6 +17
Signed-off-by: Brian Chrisman
---
src/client/Client.cc |5 +
1 files changed, 5 insertions(+), 0 deletions(-)
diff --git a/src/client/Client.cc b/src/client/Client.cc
index 10a6829..6ab4643 100644
--- a/src/client/Client.cc
+++ b/src/client/Client.cc
@@ -3267,6 +3267,11 @@ int Client::_
libceph lookup of the self-referencing '.' directory fails.
The patch makes the Client class handle '.' specially, as it already does '..'.
testceph is updated to check the special cases of lstat(".") and lstat("/.");
a minimal sketch of those checks follows the shortlog below.
Brian Chrisman (2):
Add analogous special case for "." directory alongside ".." in
_lookup
upd
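The testceph.cc hunk above is truncated, and the real test goes through the
libceph client API, so this is only a minimal sketch in plain C, against a
kernel mount, of the two cases being checked; the mount point is hypothetical.

#include <limits.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Check that lstat() resolves the self-referencing "." entry, both
 * relative to the current directory and anchored at the mount root. */
static int check_dot_lookups(const char *mountpoint)
{
        char rootdot[PATH_MAX];
        struct stat st;

        if (chdir(mountpoint) < 0) {
                perror("chdir");
                return -1;
        }
        if (lstat(".", &st) < 0) {                      /* lstat(.)  */
                perror("lstat(.)");
                return -1;
        }
        snprintf(rootdot, sizeof(rootdot), "%s/.", mountpoint);
        if (lstat(rootdot, &st) < 0) {                  /* lstat(/.) */
                perror("lstat(/.)");
                return -1;
        }
        printf("'.' lookups OK\n");
        return 0;
}

int main(void)
{
        /* Hypothetical mount point; adjust to wherever ceph is mounted. */
        return check_dot_lookups("/mnt/ceph") ? 1 : 0;
}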
I created a ticket - http://tracker.newdream.net/issues/1084 and
uploaded the mds log where I found the problem.
(Full log is huge. I just uploaded the critical part. Let me know if
you need the full one.)
Henry
2011/5/12 Henry C Chang :
> This one I'm not sure about. Do you have an MDS log for this case? I
> would expect the cap issue to happen when we drop_locks(mut) and the lock
> state changes.
>
OK. I'll try to reproduce it with debug log on.
Henry
On Wed, 11 May 2011, Sage Weil wrote:
> [half written email from wrong patch directory]
Hi Al, Christoph,
Once the dentry_unhash series is merged, the VFS won't be doing any
hashing or unhashing of dentries on behalf of file systems; that will be
almost solely their responsibility. The respective i_o
When the VFS prunes a dentry from the cache, clear the D_COMPLETE flag
on the parent dentry. Do this for the live and snapshotted namespaces. Do
not bother for the .snap dir contents, since we do not cache that.
Signed-off-by: Sage Weil
---
fs/ceph/dir.c | 28
f
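The dir.c diff is cut off above, so here is only a rough sketch (not the
literal hunk) of a d_prune hook that clears the completeness hint on the
parent. CEPH_D_COMPLETE, ceph_dentry() and the .snap exclusion come from the
series; the exact guard conditions are my assumption.

/* Sketch only: a child is being pruned from the dcache, so the parent
 * directory's contents are no longer completely cached. */
static void ceph_d_prune(struct dentry *dentry)
{
        /* The root has no parent whose hint we could clear. */
        if (IS_ROOT(dentry))
                return;

        /* .snap dir contents are not cached, so there is nothing to do. */
        if (ceph_snap(dentry->d_parent->d_inode) == CEPH_SNAPDIR)
                return;

        /* ->d_prune() runs with d_lock held, so d_parent is stable here. */
        clear_bit(CEPH_D_COMPLETE, &ceph_dentry(dentry->d_parent)->flags);
}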
We used to use a flag on the directory inode to track whether the dcache
contents for a directory were a complete cached copy. Switch to a dentry
flag CEPH_D_COMPLETE that can be safely updated by ->d_prune().
Signed-off-by: Sage Weil
---
fs/ceph/caps.c |8 ++
fs/ceph/dir.c
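One plausible shape for the helpers this implies (set after a full readdir,
tested before answering a lookup from the dcache, cleared by ->d_prune());
names and signatures other than CEPH_D_COMPLETE are illustrative, not the
actual patch.

/* Sketch: the completeness hint lives on the directory's dentry (in
 * ceph_dentry_info.flags) rather than on the inode, so ->d_prune(),
 * which is handed a dentry, can clear it safely. */
static inline void ceph_dir_set_complete(struct dentry *dentry)
{
        set_bit(CEPH_D_COMPLETE, &ceph_dentry(dentry)->flags);
}

static inline bool ceph_dir_test_complete(struct dentry *dentry)
{
        return test_bit(CEPH_D_COMPLETE, &ceph_dentry(dentry)->flags);
}

static inline void ceph_dir_clear_complete(struct dentry *dentry)
{
        clear_bit(CEPH_D_COMPLETE, &ceph_dentry(dentry)->flags);
}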
The Ceph client is told by the server when it has the entire contents of
a directory in cache, and is notified prior to any changes. However,
the current VFS infrastructure does not allow the client to handle a
lookup on a non-existent entry in a non-racy way.
The first patch adds a new d_pru
This adds a d_prune dentry operation that is called by the VFS prior to
pruning (i.e. unhashing and killing) a hashed dentry from the dcache. This
will be used by Ceph to maintain a flag indicating whether the complete
contents of a directory are contained in the dcache, allowing it to satisfy
loo
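For context, opting in is just a matter of filling in one more
dentry_operations member; a minimal sketch based on the operations ceph
already registers:

/* Sketch: wiring the new ->d_prune() callback into ceph's existing
 * dentry operations table. */
static const struct dentry_operations ceph_dentry_ops = {
        .d_revalidate   = ceph_d_revalidate,
        .d_release      = ceph_d_release,
        .d_prune        = ceph_d_prune,
};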
Hi Linus,
Please pull the following bug fixes from
git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git for-linus
These patches came in pretty late but fix a few crashes and deadlocks that
turned up under blogbench.
Thanks!
sage
Henry C Chang (3):
ceph: print debug mess
On 05/10/2011 06:54 PM, Simon Tian wrote:
> If this is not helpful, I will get more trace info.
> BTW, where could I get the debug packages?
Unfortunately, the backtrace isn't very useful without debugging
symbols, and I don't see any Fedora packages that include them.
You can create a package with debuggi
The ability to list objects in rados pools is something we originally
added to round out the librados interface and to support the requirements
of radosgw (S3 and swift both let you list objects in buckets). The
current interface is stateless. You iterate over the PGs in the pool, and
for eac
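For reference, this is roughly how the listing interface is consumed from the
librados C API; the call names are from memory (check librados.h for the exact
signatures) and the pool name is just an example.

#include <stdio.h>
#include <rados/librados.h>

int main(void)
{
        rados_t cluster;
        rados_ioctx_t io;
        rados_list_ctx_t ctx;
        const char *entry;
        int ret = 1;

        /* Connect using the default config and keyring search paths. */
        if (rados_create(&cluster, NULL) < 0)
                return 1;
        if (rados_conf_read_file(cluster, NULL) < 0 ||
            rados_connect(cluster) < 0)
                goto out;

        if (rados_ioctx_create(cluster, "data", &io) < 0)
                goto out;

        /* Iterate every object in the pool; internally this walks the
         * pool's PGs one at a time. */
        if (rados_objects_list_open(io, &ctx) == 0) {
                while (rados_objects_list_next(ctx, &entry, NULL) == 0)
                        printf("%s\n", entry);
                rados_objects_list_close(ctx);
                ret = 0;
        }

        rados_ioctx_destroy(io);
out:
        rados_shutdown(cluster);
        return ret;
}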
On Wed, May 11, 2011 at 1:47 PM, Mark Nigh wrote:
> Some additional testing shows that the underlying filesystem btrfs does fail
> thus the daemon appropriately fails.
>
> The way I am simulating a failed HDD is by removing the HDD. The failure is
> working,
> but the problem is when I reinsert
On Wed, 11 May 2011, Mark Nigh wrote:
> Some additional testing shows that the underlying filesystem btrfs does
> fail thus the daemon appropriately fails.
>
> The way I am simulating a failed HDD is by removing the HDD. The failure
> is working, but the problem is when I reinsert the HDD. I thi
Some additional testing shows that the underlying filesystem btrfs does fail
thus the daemon appropriately fails.
The way I am simulating a failed HDD is by removing the HDD. The failure is
working, but the problem is when I reinsert the HDD. I think I see the BTRFS
filesystem recovery (btrfs f
On Wed, 11 May 2011, Henry C Chang wrote:
> Fix the following scenario that happens occasionally when running
> blogbench:
>
> client released caps on one inode. Then, the inode's ifile was wrlocked
> during updating client range. Before the update had finished, client
> re-opened the file again
Applied all three of these. Thanks, Henry! I'll send them to Linus today
or tomorrow so they'll make 2.6.39.
sage
On Wed, 11 May 2011, Henry C Chang wrote:
> The mds session, s, could be freed during ceph_put_mds_session.
> Move dout before ceph_put_mds_session.
>
> Signed-off-by: Henry C C
Thanks Brian, applied these. (Also broke out the namespace thing into a
separate patch.)
sage
On Tue, 10 May 2011, Brian Chrisman wrote:
> Expands libceph to handle xattr calls including underlying Client methods.
> testceph is expanded to verify libceph xattr calls work.
>
> Brian Chrisman (
Hi Sage,
after some digging we set:
sysctl -w vm.min_free_kbytes=262144
(the default was around 16000)
This solved our problem and rados bench survived a 5 minute torture
with no single failure:
min lat: 0.036177 max lat: 299.924 avg lat: 0.553904
sec Cur ops started finished avg MB/s cur MB/s
Fix the following scenario that happens occasionally when running
blogbench:
client released caps on one inode. Then, the inode's ifile was wrlocked
during updating client range. Before the update had finished, client
re-opened the file again for reading. Since ifile was wrlocked, the client
was n
The mds session, s, could be freed during ceph_put_mds_session.
Move dout before ceph_put_mds_session.
Signed-off-by: Henry C Chang
---
fs/ceph/mds_client.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
index f60b07b..d0fae4
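The hunk itself is truncated above, so this is only an illustrative sketch of
the ordering the fix enforces; the surrounding function and message text are
hypothetical, not the actual mds_client.c code.

/* Sketch: 's' may be freed once the last reference is dropped, so any
 * debug output that dereferences it has to happen first. */
static void example_done_with_session(struct ceph_mds_session *s)
{
        dout("done with session %p (mds%d)\n", s, s->s_mds); /* use s first */
        ceph_put_mds_session(s);                              /* may free s */
}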
We increment i_wrbuffer_ref when taking the Fb cap. This breaks
the dirty page accounting and causes looping in
__ceph_do_pending_vmtruncate, and the ceph client hangs.
This bug can be reproduced occasionally by running blogbench.
Add a new field i_wb_ref to inode and dedicate it to Fb reference
cou
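The description is cut off above; as a sketch of the direction it points in,
something like the following (i_wb_ref comes from the description, the helper
name, the ihold-on-first-reference detail and the elided locking are my
assumptions):

/* Sketch: count Fb (buffered-write) cap references in their own field,
 * i_wb_ref, instead of piggybacking on the dirty-page counter
 * i_wrbuffer_ref, which __ceph_do_pending_vmtruncate waits on. */
static void take_fb_cap_ref(struct ceph_inode_info *ci)
{
        /* Caller is assumed to hold the inode's cap-state spinlock. */
        if (ci->i_wb_ref == 0)
                ihold(&ci->vfs_inode);  /* pin the inode while Fb is held */
        ci->i_wb_ref++;
}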
Hi Sage,
we were running rados bench like this:
# rados -p data bench 60 write -t 128
Maintaining 128 concurrent writes of 4194304 bytes for at least 60 seconds.
sec Cur ops started finished avg MB/s cur MB/s last lat avg lat
0 0 0 0 0 0
Signed-off-by: Henry C Chang
---
fs/ceph/snap.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c
index e86ec11..24067d6 100644
--- a/fs/ceph/snap.c
+++ b/fs/ceph/snap.c
@@ -206,7 +206,7 @@ void ceph_put_snap_realm(struct ceph_mds_client *md
I was performing a few failure tests with the osd by removing a HDD from one of
the osd hosts. All was well; the cluster noticed the failure and re-balanced the
data, but when I replaced the HDD in the host, the cosd crashed.
Here is my setup: 6 osd hosts with 4 HDDs each (4 cosd daemons running for ea