Signed-off-by: Brian Chrisman
---
src/client/testceph.cc | 36 +++-
1 files changed, 35 insertions(+), 1 deletions(-)
diff --git a/src/client/testceph.cc b/src/client/testceph.cc
index 520fa75..c24cc03 100644
--- a/src/client/testceph.cc
+++ b/src/client/testceph.cc
Signed-off-by: Brian Chrisman
---
src/client/Client.cc | 8 ++--
1 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/src/client/Client.cc b/src/client/Client.cc
index 8b8fcf1..dbac2c9 100644
--- a/src/client/Client.cc
+++ b/src/client/Client.cc
@@ -4330,8 +4330,12 @@ static int
_readdir_single_dirent_cb is invoked with zeroed pointers when called beneath
readdir_r rather than directly from readdirplus_r.
Those pointers are then dereferenced in assignment.
There is still a problem in readdir_r, so I extended the basic scenario in
testceph.cc.
Methods readdir_r and readdirplus
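For illustration, here is a minimal sketch of the kind of NULL guard such a fix
applies. The struct and callback signature below are simplified assumptions for
this sketch, not the actual Client.cc definitions:

#include <sys/types.h>
#include <sys/stat.h>
#include <dirent.h>

// Holds the caller-supplied output slots for a single readdir entry.
// st and stmask are only provided on the readdirplus_r path and may be
// NULL when the callback runs beneath plain readdir_r.
struct single_readdir {
  struct dirent *de;
  struct stat *st;
  int *stmask;
  bool full;
};

static int readdir_single_dirent_cb(void *p, struct dirent *de,
                                    struct stat *st, int stmask, off_t off)
{
  single_readdir *c = static_cast<single_readdir *>(p);
  if (c->full)
    return -1;           // only room for one entry per call
  *c->de = *de;
  if (c->st)             // guard: stat output is optional
    *c->st = *st;
  if (c->stmask)         // guard: stat mask output is optional
    *c->stmask = stmask;
  c->full = true;
  return 0;
}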
It occurs to me that, in the non-recovery case, we are relying on the
readdir() guarantee that each object will be returned at most once. If
that is so, why not just create a dangling symlink or something from
the PG directory to represent the objects that are in recovery at the
moment. Because we k
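A rough sketch of the dangling-symlink idea, with made-up paths and helper
names (not the actual FileStore layout): drop a symlink in the PG directory
whose target intentionally does not exist to mark an object as in recovery,
then tell markers apart from real object files with lstat() versus stat().

#include <sys/stat.h>
#include <unistd.h>
#include <cerrno>
#include <string>

// Mark an object as in recovery by dropping a dangling symlink next to it.
static int mark_in_recovery(const std::string &pg_dir, const std::string &oid)
{
  std::string link = pg_dir + "/" + oid + ".recovering";
  return ::symlink("in-recovery", link.c_str()) == 0 ? 0 : -errno;
}

// A dangling symlink is visible to lstat() but stat() fails with ENOENT,
// so a directory scan can distinguish recovery markers from object files.
static bool is_recovery_marker(const std::string &path)
{
  struct stat ls, s;
  if (::lstat(path.c_str(), &ls) < 0 || !S_ISLNK(ls.st_mode))
    return false;
  return ::stat(path.c_str(), &s) < 0 && errno == ENOENT;
}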
---
net/ceph/messenger.c | 13 +++--
1 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index e15a82c..db12abc 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -502,8 +502,8 @@ static void prepare_write_message(struct
Under heavy write load, ceph OSDs get messages from clients with bad
message tags. The result is that the client/OSD connection stalls
for a while until the connection state gets sorted out.
The client-side sequence of events that has this result is as follows:
Due to the heavy write load, message p
So it occurs to me that one thing we could do in the Objecter layer,
to make this a bit more sane, is to optionally enable caching of the
missing objects. Then the OSD could send out all the missing objects
with a flag saying that they're missing, the Objecter could cache this
list, go through the
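As a sketch of the caching idea only (names and locking here are hypothetical,
not the actual Objecter code): keep a client-side set of the object names an
OSD has flagged as missing, and let later operations consult it before issuing
requests.

#include <mutex>
#include <set>
#include <string>

class MissingObjectCache {
public:
  // Replace the cached set with the missing list the OSD sent.
  void update(const std::set<std::string> &missing) {
    std::lock_guard<std::mutex> l(lock);
    cache = missing;
  }
  // Listing or read paths can check here before going to the OSD.
  bool is_missing(const std::string &oid) const {
    std::lock_guard<std::mutex> l(lock);
    return cache.count(oid) > 0;
  }
private:
  mutable std::mutex lock;
  std::set<std::string> cache;
};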
Hi!
Like the previous report, but with ceph fs instead of rbd
(i.e. iozone with a 4G file).
[ 783.295035] ceph: loaded (mds proto 32)
[ 783.300122] libceph: client4125 fsid ff352dfd-078c-e65f-a769-d25abb384d92
[ 783.300642] libceph: mon0 77.120.112.193:6789 session established
[ 941.278185] libceph: msg_new
On Thu, 12 May 2011, Colin McCabe wrote:
> On Wed, May 11, 2011 at 2:57 PM, Sage Weil wrote:
> > The ability to list objects in rados pools is something we originally
> > added to round out the librados interface and to support the requirements
> > of radosgw (S3 and swift both let you list object
On Wed, May 11, 2011 at 2:57 PM, Sage Weil wrote:
> The ability to list objects in rados pools is something we originally
> added to round out the librados interface and to support the requirements
> of radosgw (S3 and swift both let you list objects in buckets). The
> current interface is statel
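For reference, a minimal sketch of listing a pool's objects through the
librados C API. The entry points below follow later librados releases and are
an assumption here, not necessarily the exact interface under discussion;
error handling is omitted.

#include <rados/librados.h>
#include <cstdio>

int main()
{
  rados_t cluster;
  rados_create(&cluster, NULL);         // default client identity
  rados_conf_read_file(cluster, NULL);  // default ceph.conf search path
  rados_connect(cluster);

  rados_ioctx_t io;
  rados_ioctx_create(cluster, "data", &io);  // "data" pool is just an example

  rados_list_ctx_t ctx;
  rados_objects_list_open(io, &ctx);
  const char *oid;
  while (rados_objects_list_next(ctx, &oid, NULL) == 0)
    printf("%s\n", oid);                // one object name per line
  rados_objects_list_close(ctx);

  rados_ioctx_destroy(io);
  rados_shutdown(cluster);
  return 0;
}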
On Thu, 12 May 2011, Sage Weil wrote:
> equation. I'm running iozone on ext3 now and not having any problems.
I take it back.. I just reproduced a similar error on ext2:
(iozone output truncated: random, bkwd, record, stride column headers)
Hi Fyodor,
> Hi!
>
> Latest (git pulled) version of 2.6 kernel. Ceph - 0.27.1
>
> Still having trouble with rbd. Now with ocfs2 there are no messages in syslog,
> but iozone still returns an error:
>
> #df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda1 237G 15G 210G 7% /
Applied, thanks!
On Wed, 11 May 2011, Brian Chrisman wrote:
> libceph lookup of the self-referencing '.' directory fails.
> Patch makes Client class handle '.' specially like it does '..'.
> testceph updated to check the special cases of lstat(.) and lstat(/.).
>
> Brian Chrisman (2):
> Add an
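For illustration, a minimal sketch of that kind of check through libcephfs
(abbreviated error handling; this mirrors the libcephfs calls but is not the
exact testceph.cc code):

#include <cephfs/libcephfs.h>
#include <sys/stat.h>
#include <cstdio>

int main()
{
  struct ceph_mount_info *cmount;
  ceph_create(&cmount, NULL);          // default client id
  ceph_conf_read_file(cmount, NULL);   // default ceph.conf search path
  if (ceph_mount(cmount, "/") < 0) {
    fprintf(stderr, "ceph_mount failed\n");
    return 1;
  }

  struct stat st;
  int r1 = ceph_lstat(cmount, ".", &st);   // self-referencing '.'
  int r2 = ceph_lstat(cmount, "/.", &st);  // '.' at the root
  printf("lstat(.) = %d, lstat(/.) = %d\n", r1, r2);  // both should be 0

  ceph_shutdown(cmount);
  return (r1 == 0 && r2 == 0) ? 0 : 1;
}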
Hi!
Latest (git pulled) version of 2.6 kernel. Ceph - 0.27.1
Still having trouble with rbd. Now with ocfs2 there are no messages in syslog,
but iozone still returns an error:
#df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 237G 15G 210G 7% /
none 2.0G 164K