Here's a (yet untested) patch for the rbd error path:
diff --git a/drivers/block-rbd.c b/drivers/block-rbd.c
index 68fbed7..ab2d2c5 100644
--- a/drivers/block-rbd.c
+++ b/drivers/block-rbd.c
@@ -560,6 +560,9 @@ err:
if (c)
rbd_aio_release(c);
+
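For context, a minimal sketch (not the driver's actual code) of how such a guarded release fits into a librbd AIO submission path; submit_read() and read_done() are made-up names:

/* Hedged sketch: submit an async read via librbd and clean up safely on error. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <rbd/librbd.h>

static void read_done(rbd_completion_t c, void *arg)
{
        /* Completion callback: check the result, then drop the completion. */
        if (rbd_aio_get_return_value(c) < 0)
                fprintf(stderr, "aio read failed\n");
        rbd_aio_release(c);
        free(arg);
}

static int submit_read(rbd_image_t image, uint64_t off, size_t len)
{
        rbd_completion_t c = NULL;
        char *buf = malloc(len);
        int rc = -1;

        if (!buf)
                goto err;
        rc = rbd_aio_create_completion(buf, read_done, &c);
        if (rc < 0)
                goto err;
        rc = rbd_aio_read(image, off, len, buf, c);
        if (rc < 0)
                goto err;
        return 0;

err:
        /* Only release the completion if it was actually created --
         * exactly the kind of guard the patch above adds. */
        if (c)
                rbd_aio_release(c);
        free(buf);
        return rc;
}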
Hi,
I've had a few occasions where tapdisk has segfaulted:
tapdisk[9180]: segfault at 7f7e3a5c8c10 ip 7f7e387532d4 sp
7f7e3a5c8c10 error 4 in libpthread-2.13.so[7f7e38748000+17000]
tapdisk:9180 blocked for more than 120 seconds.
tapdisk D 88043fc13540 0 9180 1
Hi,
Yes the procedure didn't change.
If you're on Debian I could also send you prebuilt .debs for blktap
and for a patched xen version that includes userspace RBD support.
If you have any issue, I can be found on ceph's IRC under 'tnt' nick.
Cheers,
Sylvain
Hi,
It's working great so far. I just pulled the source and built it then copied
blktap in.
Good to hear :)
I've been using it more and more recently and it'll be good for me
too, even with live migrations.
For some reason I already had a tapdisk in /usr/sbin, as well as the one in
/usr/bin, which confused the issue for a while. I must have installed
something manually but I don't remember what.
What distribution are you using ?
Debian Wheezy
James
On Mon, Aug 05, 2013 at 01:01:35PM +0200, Sylvain Munaut wrote:
Any chance this will be rolled into the main blktap sources?
I'd like to ... but I have no idea how or even who to contact for that
... blktap is so fragmented ...
You have blktap2 which is in the main Xen tree. But that's
I think I saw an announcement recently on xen-devel that blktap3 development
has been stopped..
Oh :(
In the mail it speaks about QEMU but is it possible to use the QEMU
driver model when booting PV domains ? (and not PVHVM).
Cheers,
Sylvain
Hi George,
Yes; qemu knows how to be a Xen PV block back-end.
Very interesting. Is there documentation about this somewhere ?
I had a look some time ago and it was really not very clear.
Things like which Xen versions support this. And with which features (
indirect descriptors, persistent
I'm about to start trying this out. Has anything changed since this email
http://www.mail-archive.com/ceph-devel@vger.kernel.org/msg13984.html ?
Thanks
James
This patch changes fsx.sh to pull a better fsx.c from the xfstests site
to support hole-punching tests.
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
qa/workunits/suites/fsx.sh | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
Hi again,
However when rbd cache is enabled with:
[client]
rbd_cache = true
the tapdisk process crashes if I do this in the domU:
dd if=/dev/xvda bs=1M > /dev/null
I tested this locally and couldn't reproduce the issue.
Doing reads doesn't do anything bad AFAICT.
Doing writes OTOH seems to
I've installed debug symbols, perhaps that will give a better idea what
is going on?
#0 __GI___libc_free (mem=0x7f516065) at malloc.c:2970
#1 0x7f515f3ac84b in ~raw_posix_aligned (this=0x7f513c418f20,
__in_chrg=<optimised out>) at common/buffer.cc:152
#2
I've been testing this on Ubuntu 12.04.02 64-bit with kernel 3.2.0-48
and ceph 0.61.4
With rbd cache disabled, it works well enough in initial testing.
However when rbd cache is enabled with:
[client]
rbd_cache = true
the tapdisk process crashes if I do this in the domU:
dd if=/dev/xvda bs=1M
I'm currently away, but I'll try to set up a test and see
if I can reproduce the issue locally.
I never really tried with the cache enabled.
Cheers,
Sylvain
Hi,
I just wanted to mention that I implemented a simple request merging
strategy to counteract the request splitting done by the Xen blkif
protocol.
The results are pretty good. When comparing to using the rbd kernel
module, I can now get 2-4x better write performance and 2x read
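For illustration only, a rough sketch of the merging idea (this is not the actual blktap code; struct req and merge_contiguous() are made-up names): consecutive requests whose extents are back-to-back on the image are coalesced into one larger request before being handed to librbd.

/* Hypothetical sketch: coalesce contiguous extents in a sorted request batch. */
#include <stddef.h>
#include <stdint.h>

struct req {
        uint64_t offset;        /* byte offset into the RBD image */
        size_t   length;        /* length in bytes */
};

/* Returns the number of requests left after merging in place. */
static size_t merge_contiguous(struct req *reqs, size_t n)
{
        size_t out = 0;

        for (size_t i = 0; i < n; i++) {
                if (out > 0 &&
                    reqs[out - 1].offset + reqs[out - 1].length == reqs[i].offset) {
                        /* Extends the previous extent: grow it in place. */
                        reqs[out - 1].length += reqs[i].length;
                } else {
                        reqs[out++] = reqs[i];
                }
        }
        return out;
}

In the real driver the data buffers of the merged segments also have to be carried along (or the merged extent re-split on completion), which is where most of the actual work is.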
Hi,
Is this in the blktap layer or in librbd? FWIW, when rbd cache = true,
the writes will get merged by the cache and written out in large extents
on flush.
In the blktap layer.
I don't have the cache enabled because FLUSH requests from the VM are
not forwarded down to that layer, when you
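(As a hedged aside: if FLUSH can't be forwarded, librbd's cache can still be run effectively in writethrough mode by capping dirty data, e.g.

[client]
rbd cache = true
rbd cache max dirty = 0

with 0 meaning no write is acknowledged before it is safely on the OSDs.)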
Hi,
We can test this, but just a couple of lines of input might be needed to get
us going with this without digging through all the code.
Ok, so I added proper argument parsing (using the same format as the
qemu rbd driver) now, so it's easier to test.
First off, you need a working blktap
/keyring with the user key if
you use cephx
Once that's done, you should be able to attach a disk to a running VM using:
$ xm block-attach test_vm tap2:tapdisk:rbd:rbd/test xvda2 w
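For completeness, a sketch of the client-side ceph.conf this assumes (monitor addresses and paths are examples, not taken from the original mails):

[global]
auth supported = cephx
mon host = 192.168.0.1,192.168.0.2,192.168.0.3

[client.admin]
keyring = /etc/ceph/keyring   ; the keyring holding the user key when cephx is enabled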
After replacing the /usr/sbin/tapdisk2 binary with the newly-built one from the
git repo, I'm unable to attach
format
(tap2:tapdisk:aio fails identically, as does tap2:rbd:rbd/test). In my
/var/log/messages file, I see the following:
Apr 23 10:34:51 se004922 kernel: [ 6328.409708] blktap_control_allocate_tap:
allocated tap 8807a4898800
Apr 23 10:34:51 se004922 tapdisk[32464]: tapdisk-control: init
On 23 Apr 2013, at 17:06, Sylvain Munaut s.mun...@whatever-company.com wrote:
Hi,
- client dom0: a simple quick debootstrap, and a low amount of memory to
bypass buffers
I assume you meant domU ?
You ran those tests in a VM right ?
Right! DomU indeed, not dom0. The benchmarks are
binaries, along with several
other tapdisk-related binaries. openSuSE uses this with a few minor patches.
This is attempting to just use tap2:aio for the image format
(tap2:tapdisk:aio fails identically, as does tap2:rbd:rbd/test). In my
/var/log/messages file, I see the following:
Apr
in the code). I'll get to that next, once I've figured out
the best format for arguments.
If you have time to write up some lines about steps required to test this,
that'd be nice, it'll help people to test this stuff.
Thanks,
-- Pasi
Hi,
My Xen is kind of rusty, last time I used it was about 3 years ago, but
can't you do something similar to what Qemu does? Just submit all the arguments
semicolon-separated?
Yes probably, I just didn't get to it. I wanted to check first if this
approach was solving the issues I had with RBD
If you have time to write up some lines about steps required to test this,
that'd be nice, it'll help people to test this stuff.
To quickly test, I compiled the package and just replaced the tapdisk
binary from my normal blktap install with the newly compiled one.
Then you need to setup a RBD
We can test this, but just a couple of lines of input might be needed to get us
going with this without digging through all the code.
Rgds,
Bernard
Openminds BVBA
On 19 Apr 2013, at 16:37, Sylvain Munaut s.mun...@whatever-company.com wrote:
Hi,
My Xen is kind of rusty, last time I used
On Thu, 18 Apr 2013, Stefan Priebe - Profihost AG wrote:
Am 17.04.2013 um 23:14 schrieb Brian Behlendorf behlendo...@llnl.gov:
On 04/17/2013 01:16 PM, Mark Nelson wrote:
I'll let Brian talk about the virtues of ZFS,
I think the virtues of ZFS have been discussed at length in various
Hi,
I've been working on a blktap driver that allows accessing
ceph RBD block devices without relying on the RBD kernel driver,
and it finally got to a point where it works and is testable.
Some of the advantages are:
- Easier to update to newer RBD versions
- Allows functionality
Hey,
Can you test with the wip-debug-xattr branch? Set debug filestore = 30
and it will dump the xattr values to the log on set and get, so we can see
what is going on.
Also/alternatively, strace with -f -v -x, which will (I think) include the
full value of the get/setxattr args..
Thanks
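Spelled out, the two suggestions look roughly like this (the osd section and pid are placeholders):

# in ceph.conf on the affected OSD, then restart it
[osd]
debug filestore = 30

# or attach strace to the running OSD; -v -x print the full xattr values
strace -f -v -x -p <osd-pid> -o /tmp/osd.strace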
I looked into this problem earlier. The problem is that zfs does not
return ERANGE when the size of value buffer passed to getxattr is too
small. zfs returns with truncated xattr value.
Regards,
Henry
2013/4/17 Sage Weil s...@inktank.com:
Hey,
Can you test with the wip-debug-xattr branch
Henry C Chang wrote:
I looked into this problem earlier. The problem is that zfs does not
return ERANGE when the size of value buffer passed to getxattr is too
small. zfs returns with truncated xattr value.
Is this a bug in ZFS, or simply different behavior?
I've used ZFSonLinux quite a bit
The getxattr Linux man page says ERANGE will be returned if the size of
the value buffer is too small to hold the result. Thus, I think it is
a bug in ZFS (or ZOL, at least).
2013/4/18 Jeff Mitchell jeffrey.mitch...@gmail.com:
Henry C Chang wrote:
I looked into this problem earlier. The problem is
Adding Brian Behlendorf to the CC list, as we were just talking about this
yesterday at LUG. :)
I suspect this is a bug; the posix docs indicate ERANGE is appropriate
here:
[ERANGE] value (as indicated by size) is too small to hold the
extended attribute
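As a small userspace illustration of that contract (file path and xattr name are arbitrary examples):

/* A too-small buffer must yield ERANGE, not a silently truncated value. */
#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/xattr.h>

int main(int argc, char **argv)
{
        const char *path = argc > 1 ? argv[1] : "testfile";
        const char *name = "user.test";
        char tiny[4];

        /* Size probe: a zero-length buffer returns the value's length. */
        ssize_t len = getxattr(path, name, NULL, 0);
        if (len < 0) {
                perror("getxattr");
                return 1;
        }

        /* Undersized read: expected to fail with ERANGE if len > sizeof(tiny). */
        ssize_t r = getxattr(path, name, tiny, sizeof(tiny));
        if (r < 0 && errno == ERANGE)
                printf("ok: ERANGE for a %zd-byte value\n", len);
        else
                printf("unexpected: r=%zd errno=%d\n", r, errno);
        return 0;
}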
[Adding Brian to CC list again :)]
On Wed, 17 Apr 2013, Yehuda Sadeh wrote:
On Wed, Apr 17, 2013 at 9:37 AM, Jeff Mitchell
jeffrey.mitch...@gmail.com wrote:
Henry C Chang wrote:
I looked into this problem earlier. The problem is that zfs does not
return ERANGE when the size of value
Well, looking at the code again it's not going to work, as setxattr is
going to fail with ERANGE.
Why? We support an arbitrary number of maximum sized xattrs (65536). What
am I missing here?
Incidentally, does anybody know of a good xattr test suite we could add to
our regression tests?
Thanks,
Brian
diff --git a/module/zfs/zpl_xattr.c b/module/zfs/zpl_xattr.c
index c03764f..9f4d63c 100644
--- a/module/zfs/zpl_xattr.c
+++ b/module/zfs/zpl_xattr.c
On 04/17/2013 02:09 PM, Stefan Priebe wrote:
Sorry to disturb, but what is the reason / advantage of using zfs for
ceph?
A few things off the top of my head:
1) Very mature filesystem with full xattr support (this bug
notwithstanding) and copy-on-write snapshots. While the port to Linux
if (!size)
return (nv_size);
- memcpy(value, nv_value, MIN(size, nv_size));
+ if (size < nv_size)
+ return (-ERANGE);
Note that zpl_xattr_get_sa() is called by __zpl_xattr_get(), which can
also be called by zpl_xattr_get() to test for xattr existence. So it
needs to make sure that zpl_xattr_set() doesn't fail
On 17.04.2013 at 23:14, Brian Behlendorf behlendo...@llnl.gov wrote:
On 04/17/2013 01:16 PM, Mark Nelson wrote:
I'll let Brian talk about the virtues of ZFS,
I think the virtues of ZFS have been discussed at length in various other
forums. But in short it brings some nice functionality
Hi Xiaxi,
Thanks for your answer!
FIO test:
4MB Sequential write (numjobs=1) : 203 MB/s (close to rados bench write)
4MB Random write (numjobs=8): 145 MB/s
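For reference, a fio job of the shape those two numbers imply might look like this (the original setup isn't shown, so the ioengine, size and target directory are pure assumptions):

; hypothetical fio job file
[global]
ioengine=libaio
direct=1
bs=4m
size=2g
directory=/mnt/rbd-test

[seq-write]
rw=write
numjobs=1

[rand-write]
stonewall
rw=randwrite
numjobs=8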
but I still have some questions about write performance
According to this message:
http://www.mail-archive.com/ceph-devel@vger.kernel.org
On 03/17/2013 05:18 AM, kelvin_hu...@wiwynn.com wrote:
Hi, all
Hi,
...
My question is:
1. Is the I/O pause state normal when ceph is recovering?
I have experienced the same issue. This works as designed, and is
probably because of the heartbeat-timeout in osd heartbeat grace
period set to
Hi, all
I have some problems after an availability test
Setup:
Linux kernel: 3.2.0
OS: Ubuntu 12.04
Storage server : 11 HDD (each storage server has 11 osd, 7200 rpm, 1T) + 10GbE
NIC
RAID card: LSI MegaRAID SAS 9260-4i For every HDD: RAID0, Write Policy: Write
Back with BBU, Read Policy
Hi,
any comments on these patches?
Danny
On 04.02.2013 18:22, Danny Al-Gaaf wrote:
The ceph-test package contains files which are installed to /usr/bin,
but these files have names which don't give any hint that they are
part of ceph.
Rename the files of the ceph-test package to indicate
The files from the ceph-test subpackage are installed to /usr/bin;
give them more useful names to make sure that users know they
belong to ceph. Add a 'ceph_' prefix and change some test* binaries
to ceph_test_*.
Signed-off-by: Danny Al-Gaaf danny.al-g...@bisect.de
---
.gitignore
The ceph-test package contains files which are installed to /usr/bin,
but these files have names which don't give any hint that they are
part of ceph.
Rename the files of the ceph-test package to indicate they belong
to ceph. Prefix them with ceph_. Update packaging files like
RPM spec and debian
You can change the number of replicas at runtime with the following command:
$ ceph osd pool set {poolname} size {num-replicas}
Xing
On 01/15/2013 03:00 PM, Gandalf Corvotempesta wrote:
Is it possible to change the number of replicas in realtime?
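For example, to move a pool named 'rbd' to 3 replicas (pool name chosen arbitrarily):

$ ceph osd pool set rbd size 3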
It seems to be: Ceph will shuffle data to rebalance in situations such
as when we change the replica num or when some nodes or disks are down.
Xing
On 01/15/2013 03:26 PM, Gandalf Corvotempesta wrote:
So it's absolutely safe to start with just 2 servers, make all the
necessary tests and when
Hi,
I am working on a sample app to test the object storage and connectivity with
the CEPH cluster (basically an app that talks directly to the Ceph object store).
Initially the aim is to keep 1 CEPH cluster on 1 VM and try the object
operations through a 2nd VM. We won't be using RADOS Gateway, CEPH
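For what it's worth, a minimal librados client of the kind described could look like this (the pool and object names are placeholders, and the cluster is assumed to be reachable via /etc/ceph/ceph.conf):

/* Tiny librados sample: connect, write one object, read it back. */
#include <stdio.h>
#include <string.h>
#include <rados/librados.h>

int main(void)
{
        rados_t cluster;
        rados_ioctx_t io;
        char buf[64];
        int r;

        if (rados_create(&cluster, NULL) < 0)           /* client.admin by default */
                return 1;
        rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
        if (rados_connect(cluster) < 0)
                return 1;

        if (rados_ioctx_create(cluster, "data", &io) < 0) {
                rados_shutdown(cluster);
                return 1;
        }

        rados_write_full(io, "hello-object", "hello ceph", strlen("hello ceph"));
        r = rados_read(io, "hello-object", buf, sizeof(buf) - 1, 0);
        if (r >= 0) {
                buf[r] = '\0';
                printf("read back: %s\n", buf);
        }

        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return 0;
}

Build with something like: cc sample.c -o sample -lrados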
that this was an old-style image that we've just removed the last trace of and
returns success)
This breaks test_rbd.test_remove_dne in test/pybind/test_rbd.py
We could change librbd obviously. Did you scan for other users,
though?...maybe there are more lurking
Thoughts?
I have a 3 line change to the file qa/workunits/libcephfs-java/test.sh that
tweaks how LD_LIBRARY_PATH is set for the test execution.
The branch is wip-java-test in ceph.git.
Best,
-Joe Buck
There's a test for null rq pointer inside the while loop in
rbd_rq_fn() that's not needed. That same test already occurred
in the immediately preceding loop condition test.
Signed-off-by: Alex Elder el...@inktank.com
---
drivers/block/rbd.c | 4 ----
1 file changed, 4 deletions(-)
diff --git
If an rbd image header is read and it doesn't begin with the
expected magic information, a warning is displayed. This is
a fairly simple test, but it could be extended at some point.
Fix the comparison so it actually looks at the text field
rather than the front of the structure.
In any case
Hey-
The librados api tests were calling a dummy test_exec method in cls_rbd
that apparently got removed. We probably want to replace the test with
*something*, though... maybe a version or similar command that just
returns the version of the class? Or an OSD built-in dummy class
On Mon, Jun 4, 2012 at 4:43 PM, udit agarwal fzdu...@gmail.com wrote:
Sorry, the link is: [...]
If you run iozone again, does the bug happen again?
Comparing your iozone run with our test suite, we don't currently do
-i 2 (random read/write), and we only run a few specific record sizes
to save
Hi,
Thanks for your reply.
After you hinted that this problem may have been caused by using a record size of
4k, I ran an iozone test on a 5m file with record length 4k and, to my
surprise, the ceph system hung again. Then, I tried the same with record
lengths from 8k to 1m and everything
Hi,
Thank you all for your support in resolving the issue. As I mentioned in my
previous post, I ran my iozone test again on a 5G file excluding the 4k
record size. And now, I have successfully completed this test with record sizes
of 8k and 16k and hopefully all others will also work fine
On Mon, Jun 4, 2012 at 2:52 PM, udit agarwal fzdu...@gmail.com wrote:
I ran the 5g iozone test on my ceph system and I got the following output on
the
terminal:
...
Message from syslogd@hp1 at Jun 4 22:19:03 ...
kernel:[ 7627.132065] Oops: [#1] PREEMPT SMP
Message from syslogd@hp1
Hi,
Thanks for your reply.
Please follow this link
https://docs.google.com/document/
d/1mYVyI75FGMYqPes5T5fkI0aUX8h2q6TFeWdoV9uJdQI/edit?pli=1
to find the whole message. (Please concatenate both strings for the link, as I
wasn't able to post it in whole, i.e. the link is https://?pli=1 .)
Sorry, the link is:
https://docs.google.com/document/d/
1mYVyI75FGMYqPes5T5fkI0aUX8h2q6TFeWdoV9uJdQI/edit?pli=1
hope you can help me in this matter.
--Udit Agarwal
Hi,
I have set up a ceph system with a client, mon and mds on one system which is
connected to 2 osds. I ran an iozone test with a 10G file and it ran fine. But when
I ran the iozone test with a 5G file, the process got killed and our ceph system
hung. Can anyone please help me with this?
Thanks
Hi,
thanks for your reply.
The output of 'modinfo ceph' is as follows:
filename: /lib/modules/3.1.10-1.9-desktop/kernel/fs/ceph/ceph.ko
license:        GPL
description:    Ceph filesystem for Linux
author: Patience Warnick patie...@newdream.net
author: Yehuda Sadeh
Hi,
I'm building my rados test cluster:
3 servers, each with 1 mon and 5 osds.
The mon and osd daemons are started, but when I use the ceph command,
client.admin.keyring is missing:
root@cephtest1:/etc/ceph# ceph -w
2012-05-30 09:05:35.255619 7fd1e9cfa760 -1 auth: failed to open keyring
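(As a hedged aside: the usual fix is to make sure the client can find the admin keyring that mkcephfs generated; the path below is a guess, adjust it to wherever your keyring actually lives.)

# point the ceph CLI at the keyring, either on the command line ...
ceph -k /etc/ceph/keyring.admin -w

# ... or permanently in ceph.conf
[client.admin]
keyring = /etc/ceph/keyring.admin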
On 30.05.2012 09:20, Alexandre DERUMIER wrote:
Hi,
I'm building my rados test cluster,
3 servers,with on each server : 1 mon - 5 osd
mon daemon and osd are started, but when i use ceph command, it's missing
client.admin.keyring
root@cephtest1:/etc/ceph# ceph -w
2012-05-30 09
= AQCQwcVPGIAwHhAAuS5Veg7GoOyzh59zq2TKag==
is it an error in the documentation?
----- Original Message -----
From: Stefan Priebe - Profihost AG s.pri...@profihost.ag
To: Alexandre DERUMIER aderum...@odiso.com
Cc: ceph-devel@vger.kernel.org
Sent: Wednesday, 30 May 2012 09:25:56
Subject: Re: building test cluster
----- Original Message -----
From: Alexandre DERUMIER aderum...@odiso.com
To: Stefan Priebe - Profihost AG s.pri...@profihost.ag
Cc: ceph-devel@vger.kernel.org
Sent: Wednesday, 30 May 2012 09:33:40
Subject: Re: building test cluster : missing /etc/ceph/client.admin.keyring, need
help
ok, thanks
I had
10240M -r 1M -t 1 -F f3 -i 0 -i 1
The first two run to completion without a problem. The third
one runs for a while and then reports something like what's
below, and then hangs the test (system is still operational).
I see this in the syslog, but I'm not sure its timing is aligned
with the failure:
[ 3925.501128] libceph: osd1 10.214.133.32:6800
On Friday, 24 February 2012, 10:51:10, Tommi Virtanen wrote:
On Fri, Feb 24, 2012 at 00:58, madhusudhana
madhusudhana.u.acha...@gmail.com wrote:
4. If you don't mind, can you please give me a bit of insight on the cluster
network: what it is and how I can configure one for my ceph cluster?
On Wed, Mar 7, 2012 at 03:31, Wido den Hollander w...@widodh.nl wrote:
You should take a look at:
* public_addr
* cluster_addr
* public_network
* cluster_network
(From: src/common/config_opts.h)
[osd]
cluster network = 192.168.0.0/16
public network = 172.16.0.0/16
[osd.1]
cluster
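The fragment above cuts off; spelled out a bit further it would look roughly like this (subnets reused from above, the per-OSD addresses are examples):

[global]
public network  = 172.16.0.0/16
cluster network = 192.168.0.0/16

[osd.1]
; optionally pin one OSD to explicit addresses
public addr  = 172.16.0.11
cluster addr = 192.168.0.11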
CASE 1:[root@ceph-node-9 ~]# dd if=/dev/zero of=/mnt/ceph-test/wtest bs=4k
count=100
...
As you can see from the above output, for a 4G file of 4k blocks, the speed clocked in at
1GB/s; it gradually decreased when I increased the file size above 10G.
Also, if I run back-to-back dd with the CASE 1 options
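(A hedged aside: numbers like that usually mean the page cache is what's being measured. Forcing the data out, e.g. with

dd if=/dev/zero of=/mnt/ceph-test/wtest bs=4k count=1000000 conv=fdatasync

where the count is chosen arbitrarily, gives a figure closer to what the cluster can actually sustain.)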
one known case where btrfs's internal structures
get fragmented, and its performance starts degrading. You might want
to make sure you start your test with freshly-mkfs'ed btrfses.
3. All hosts (including OSD) in my ceph cluster are running 3.0.9 ver
[root@ceph-node-8 ~]# uname -r
what I am using for iozone test
/opt/iozone/bin/iozone -R -e -l i -u 1 -r 4096k -s 1024m -F /mnt/ceph-
test/ceph.iozone
If I'm reading it correctly you're using Direct IO? That's almost
certainly just going to be slow...
When I see the result, the value from ceph cluster is not at all
coming
for testing the performance against NetApp filer
Below is the command I am using for the iozone test
/opt/iozone/bin/iozone -R -e -l i -u 1 -r 4096k -s 1024m -F /mnt/ceph-
test/ceph.iozone
1. Make sure you have only 1 active MDS, multi-MDS is an extra
complication you're better off skipping
hi all,
has this bug been fixed yet? I can't find any information about it.
thanks for your help.
On Sun, Oct 30, 2011 at 6:21 PM, mowang da whooya@gmail.com wrote:
hi all,
this bug has been fixed yet? i can't find any information of it.
thanks for your help.
Yes, it was fixed a while ago in our master branch and is fixed in our
last couple of releases. (Our newest is v0.37.)
As the
Thanks. If there is only one client, can we use local flock to
replace the MDS flock?
2011/10/31 Gregory Farnum gregory.far...@dreamhost.com:
On Sun, Oct 30, 2011 at 6:21 PM, mowang da whooya@gmail.com wrote:
hi all,
this bug has been fixed yet? i can't find any information of it.
thanks for
Not unless you want to hack the code to remove its flock
implementation. I'm not sure why you'd want to, though -- flock is
unlikely to be in the critical path for applications.
(If it is important, you can also use the userspace client -- unlike
the kernel client, it doesn't yet implement