[redirecting to ceph-devel].
Hi,
On 14/12/2015 21:20, Abe Asraoui wrote:
> Hi All,
>
> Does anyone know if this bug # 13191 has been resolved ??
http://tracker.ceph.com/issues/13191 has not been resolved. Could you please
comment on it? A short explanation of why you need it resolved will
FAIL: unittest_on_exit
PASS: unittest_readahead
PASS: unittest_tableformatter
PASS: unittest_bit_vector
FAIL: ceph-detect-init/run-tox.sh
FAIL: test/erasure-code/test-erasure-code.sh
FAIL: test/erasure-code/test-erasure-eio.sh
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On 5-12-2015 14:02, Xinze Chi (信泽) wrote:
I think "const int k = 12; const int m = 4" would pass the compile?
Are these sizes big enough??
--WjW
2015-12-05 20:56 GMT+08:00 Willem Jan Withagen <w...@digiware.nl>:
src/test/erasure-code/TestErasureCodeIsa.cc
contains snippets, function definitions like:
On 7-12-2015 23:19, Michal Jarzabek wrote:
Hi Willem,
If you look at lines 411 and 412 you will see the variables k and m
defined. They are not changed anywhere (I think), so the sizes must be
big enough.
As Xinze mentioned, just add const in front of them:
const int k = 12;
const int m = 4;
and it
src/test/erasure-code/TestErasureCodeIsa.cc
contains snippets with function definitions like:
buffer::ptr enc[k + m];
// create buffers with a copy of the original data to be able to
// compare it after decoding
{
  for (int i = 0; i < (k + m); i++) {
Clang refuses because the [k + m] s
Sorry for the spam, having some issues with devel
whatever you did, it appears to work. :)
On 11/11/2015 05:44 PM, Somnath Roy wrote:
Sorry for the spam, having some issues with devel
Hi,
Thank you, that makes sense for testing, but I'm afraid not in my case.
Even if I test on a volume that has already been tested many times, the IOPS will
not go up
again. Yeah, I mean, this VM is broken; the IOPS of the VM will never go up.
Thanks
We now have a gitbuilder up and running building test packages for arm64
(aarch64). The hardware for these builds has been graciously provided by
Cavium (thank you!).
Trusty aarch64 users can now install packages with
ceph-deploy install --dev BRANCH HOST
and build results are visible
tend another SmPL script?
Related topic:
scripts/coccinelle/free: Delete NULL test before freeing functions
https://systeme.lip6.fr/pipermail/cocci/2015-May/001960.html
https://www.mail-archive.com/cocci@systeme.lip6.fr/msg01855.html
> If these changes are OK, I will address the remainder later
On Sun, Sep 13, 2015 at 3:15 PM, Julia Lawall <julia.law...@lip6.fr> wrote:
> Remove unneeded NULL test.
Remove unneeded NULL test.
The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)
//
@@ expression x; @@
-if (x != NULL) {
\(kmem_cache_destroy\|mempool_destroy\|dma_pool_destroy\)(x);
x = NULL;
-}
//
Signed-off-by: Julia Lawall <julia.law...@lip6
that I found
easy to compile test. If these changes are OK, I will address the
remainder later.
---
arch/x86/kvm/mmu.c    | 6 --
block/bio-integrity.c | 7 --
block/bio.c           | 7 --
block/blk
[adding ceph-devel as this may also be an inconvenient to others]
On 09/09/2015 10:23, Ma, Jianpeng wrote:
> Hi Loic:
> Today, I ran test/cephtool-test-mds.sh, because my code has a bug that causes
> an osd to go down. I only saw "osd o down" and so on from the screen. But I don't
>
On Thu, 27 Aug 2015, wenjunh wrote:
Hi
I had a try of newstore to explore its performance, and it shows its
performance is much poorer than filestore's.
I tested my cluster using fio. Here is a comparison of the two stores in
randread and randwrite scenarios:
rw=randread, bs=4K, numjobs=1:
newstore: bw=2280.7KB/s, iops=570
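For reference, the rw=randread, bs=4K, numjobs=1 case could be expressed as a fio job file along these lines (a sketch only: the ioengine, pool, and image names are assumptions, since the original job file was not shown):

```ini
; hypothetical fio job reproducing the rw=randread, bs=4K, numjobs=1 case
[global]
ioengine=rbd        ; assumption -- could equally be libaio against a mapped device
pool=rbd            ; assumed pool name
rbdname=testimg     ; assumed image name
direct=1

[randread-4k]
rw=randread
bs=4k
numjobs=1
```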
[ RUN      ] TestLibRBD.ObjectMapConsistentSnap
using new format!
test/librbd/test_librbd.cc:2790: Failure
Value of: passed
  Actual: false
Expected: true
[  FAILED  ] TestLibRBD.ObjectMapConsistentSnap (396 ms)
[----------] Global test environment tear-down
[==========] 98 tests from 6 test cases ran. (10554 ms total)
[  PASSED  ] 97
-qa-suite/pull/518) and hope to
fix this point.
On Fri, Jul 31, 2015 at 5:50 PM, Haomai Wang haomaiw...@gmail.com wrote:
Hi all,
I ran a test
suite(http://pulpito.ceph.com/haomai-2015-07-29_11:40:40-rados-master-distro-basic-multi/)
and found the failed jobs are failed by 2015-07-29 10:52:35.313197
7f16ae655780 -1 unrecognized ms_type 'async'
Then I found the failed jobs(like
http://pulpito.ceph.com/haomai-2015
From: Mike Christie micha...@cs.wisc.edu
The next patches add a couple new commands that have write data.
This patch adds a helper to combine all the IMG_REQ write tests.
Signed-off-by: Mike Christie micha...@cs.wisc.edu
---
drivers/block/rbd.c | 15 +--
1 file changed, 9
On Wed, Jul 22, 2015 at 12:34 AM, Deneau, Tom tom.den...@amd.com wrote:
I was trying to do an rpmbuild of v9.0.2 for aarch64 and got the following
error:
test/perf_local.cc: In function 'double div32()':
test/perf_local.cc:396:31: error: impossible constraint in 'asm'
cc);
Probably should have an #if defined(__i386__) around it.
-- Tom
Mark
test,
called 'wally' -
https://github.com/Mirantis/disk_perf_test_tool.
It has some nice features, like:
* Openstack and FUEL integration (can spawn VMs for tests, gather HW
info, etc)
* A set of tests, joined into a suite, which measures different performance
aspects and
creates a joined report
processing is also something that will be very useful. We
have a couple of folks really interested in this area as well.
Mark
On 07/11/2015 03:02 AM, Konstantin Danilov wrote:
Hi all,
We (the Mirantis ceph team) have a tool for block storage performance testing,
called 'wally' -
https://github.com/Mirantis/disk_perf_test_tool.
It has some nice features, like:
* Openstack and FUEL integration (can spawn VMs for tests, gather HW info, etc)
* A set of tests, joined into a suite
, but thus far we haven't found
any.
Is there a unit test which validates this mechanism, e.g. one which
intentionally corrupts a Message then confirms that the crc code drops it? I
didn't find anything relevant in src/test/, but I'm not too familiar with the
framework.
the Ceph messenger crc32c code.
We have crc32c enabled (as default), and I expected to find some bad
crc messages logged on the clients and/or osds, but thus far we
haven't found any.
Is there a unit test which validates this mechanism, e.g. one which
intentionally corrupts a Message then confirms
On Thu, Jun 18, 2015 at 9:53 AM, Dałek, Piotr
piotr.da...@ts.fujitsu.com wrote:
Actually, all it takes is just to disable CRC in the configuration on one node
(or even
one daemon). It'll cause zeros to be put in the CRC fields of all messages sent,
triggering
CRC check failures cluster-wide (on the remaining,
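Piotr's trick could be sketched as a ceph.conf fragment applied to a single node (the option names are an assumption based on the messenger CRC settings of that era; verify them against your release's documentation):

```ini
; hypothetical fragment for ONE daemon/node only -- the rest of the
; cluster keeps CRC enabled and will log crc mismatches on messages
; received from this one
[osd]
ms crc data = false      ; assumed option name; disables data-payload crc
ms crc header = true     ; leave header crc alone
```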
We have a bunch of teuthology tests which build cache pool on top of an ec base
pool, and do partial object write. This is ok with the current cache tiering
implementation. But with proxy write, this won't work. In my testing, the error
message is something like below:
2015-05-20
] v0'0 uv0 ondisk = -95 ((95) Operation not supported)) v6 --
?+0 0xa2e23c0 con 0x9355760
What should we do with these tests?
I think the test is fine... but the OSD should refuse to proxy the write if
the base tier won't support the write operation in question. I believe we
recently renamed
Yes, we can do the force promotion check in init_op_flags, as we did before.
-Original Message-
From: Sage Weil [mailto:sw...@redhat.com]
Sent: Tuesday, May 26, 2015 10:19 AM
To: Wang, Zhiqiang
Cc: ceph-devel@vger.kernel.org
Subject: RE: Cache pool on top of ec base pool teuthology test
From: Min Chen minc...@ubuntukylin.com
Signed-off-by: Min Chen minc...@ubuntukylin.com
Reviewed-by: Li Wang liw...@ubuntukylin.com
---
src/test/librados/tier.cc | 176 ++
1 file changed, 176 insertions(+)
diff --git a/src/test/librados/tier.cc b/src
From: MingXin Liu mingxin...@ubuntukylin.com
Signed-off-by: MingXin Liu mingxin...@ubuntukylin.com
Reviewed-by: Li Wang liw...@ubuntukylin.com
---
doc/dev/cache-pool.rst | 4
doc/man/8/ceph.rst | 12 +---
doc/rados/operations/pools.rst | 7 +++
Well, I do run it on Linux Mint, but the rest of the tests pass without
any problems. So I was wondering if there was any simple way to fix
this one as well.
On Sat, May 16, 2015 at 10:30 PM, David Zafman dzaf...@redhat.com wrote:
Is something really broken? Or are you just on an unsupported
set the hit : miss rate for the test. Is there any way to
do this?
Regards
Ning Yao
).
- Original Message -
From: Josh Durgin jdur...@redhat.com
To: aderumier aderum...@odiso.com, ceph-devel ceph-devel@vger.kernel.org
Sent: Wednesday, April 15, 2015 01:12:38
Subject: Re: how to test hammer rbd objectmap feature ?
On 04/14/2015 12:48 AM, Alexandre DERUMIER wrote:
Hi,
I would like to know how to enable the object map on hammer.
I found a post-hammer commit here:
https://github.com/ceph/ceph/commit/3a7b28d9a2de365d515ea1380ee9e4f867504e10
rbd: add feature enable/disable support
- Specifies which RBD format 2 features are to be enabled when creating
-
I'm trying to create a CRUSH ruleset and I'm using crushtool to test
the rules, but it doesn't seem to be mapping things correctly. I have two
roots, one for spindles and another for SSDs. I have two rules, one for
each root. The output of crushtool on rule 0 shows objects being
mapped to SSD OSDs when
-users
ceph-us...@lists.ceph.com
Sent: Monday, March 2, 2015 15:39:24
Subject: Re: [ceph-users] Ceph Hammer OSD Shard Tuning Test Results
Hi Alex,
I see I even responded in the same thread! This would be a good thing
to bring up in the meeting on Wednesday. Those are far faster single
OSD results than
Subject: Re: [ceph-users] Ceph Hammer OSD Shard Tuning Test Results
Thanks Mark for the results,
the default values seem to be quite reasonable indeed.
I also wonder if cpu frequency can have an impact on latency or not.
I'm going to benchmark on dual xeon 10-core 3.1ghz nodes in coming weeks,
I'll try
2015 22:49:23
Subject: Re: [ceph-users] Ceph Hammer OSD Shard Tuning Test Results
Can I ask what xio and simple messenger are and the differences?
Kind regards
Kevin Walker
+968 9765 1742
On 1 Mar 2015, at 18:38, Alexandre DERUMIER aderum...@odiso.com wrote:
Hi Mark,
I found a previous
From: Mark Nelson mnel...@redhat.com
To: ceph-devel ceph-devel@vger.kernel.org, ceph-users
ceph-us...@lists.ceph.com
Sent: Thursday, February 26, 2015 05:44:15
Subject: [ceph-users] Ceph Hammer OSD Shard Tuning Test Results
Hi Everyone,
In the Ceph Dumpling/Firefly/Hammer SSD/Memstore performance
Hi, all
Is there any guideline that describes how to run the ceph unit tests, and their
basic architecture?
On Tue, Dec 9, 2014 at 1:50 AM, Nicheal zay11...@gmail.com wrote:
You can run them all by executing make check [-j N]. The executables
run as part of that are specified in the makefiles
Hi All,
I am using the giant branch for development purposes.
One of the teuthology smoke test cases,
'ceph-qa-suite/suites/smoke/basic/tasks/rados_python.yaml', is failing on it.
From teuthology.log
==
2014-10-14T11:31:09.461
INFO:teuthology.task.workunit.client.0.plana02
Got it. Thanks for the reply Greg.
Regards,
Aanchal
-Original Message-
From: Gregory Farnum [mailto:g...@inktank.com]
Sent: Monday, October 20, 2014 10:46 PM
To: Aanchal Agrawal
Cc: ceph-devel@vger.kernel.org
Subject: Re: Teuthology smoke test case(tasks/rados_python.yaml) failed
in my
local setup with a few quick ugly hacks. The main set of changes were,
1. Explicitly named my test systems as plana01, plana02, plana03.
Some of the teuthology code which checks for VM instances does compare with
known set/class of machine names
2. In lock_machines() routine
I read the docs on the teuthology github, but still cannot figure out
how to run teuthology on my local test servers.
Any experience? Any advice?
--
Thanks,
Gu Ping
Hi,
I had gotten teuthology to work some time back to a reasonable extent in my
local setup with a few quick ugly hacks. The main set of changes were,
1. Explicitly named my test systems as plana01, plana02, plana03. Some
of the teuthology code which checks for VM instances does compare
call last):
2014-06-27T19:37:39.442 INFO:teuthology.orchestra.run.plana64.stderr: File
/home/ubuntu/cephtest/swift/test/functional/tests.py, line 104, in setUp
2014-06-27T19:37:39.442 INFO:teuthology.orchestra.run.plana64.stderr:
cls.env.setUp()
2014-06-27T19:37:39.442
Hi Sam,
TL;DR: what oneliner do you recommend to run upgrade tests for
https://github.com/ceph/ceph/pull/1890 ?
Running the rados suite can be done with:
./schedule_suite.sh rados wip-8071 testing l...@dachary.org basic master
plana
or something else since ./schedule_suite.sh was
Loic
I don't intend to answer all questions, but some info, see inline
On Fri, Jun 27, 2014 at 8:16 AM, Loic Dachary l...@dachary.org wrote:
Hi Sam,
TL;DR: what oneliner do you recommend to run upgrade tests for
https://github.com/ceph/ceph/pull/1890 ?
Running the rados suite can be done
The conference room is open for the next 30 mins to test BlueJeans before CDS
next week.
https://bluejeans.com/362952863
Best Regards,
Patrick McGarry
Director Ceph Community, Red Hat
http://ceph.com || http://community.redhat.com
@scuttlemonkey || @ceph
On June 16, 2014 at 7:41:26 PM
Hey Cephers,
As you know the next Ceph Developer Summit is fast approaching! (stay tuned for
the schedule later in the week) This summit is going to be utilizing our new
video conferencing system “BlueJeans.” In order to ensure that things go
smoothly on summit day I’ll be running a few test
Hi guys,
I am now trying to use crushtool.cc to test the crush algorithm. First, I
build a new crush map using crushtool.cc, and all the devices have the
maximum weight (0x1). Then I assign different weights to devices
using the --weight option and run the test() function. It seems that
during
Hi,
What sort of memory are your instances using?
I just had a look. Around 120 MB. Which indeed is a bit higher than I'd like.
I haven't turned on any caching so I assume it's disabled.
Yes.
Cheers,
Sylvain
Hi James,
Are you still working on this in any way?
Well I'm using it, but I haven't worked on it. I never was able to
reproduce any issue with it locally ...
In prod, I do run it with cache disabled though since I never took the
time to check using the cache was safe in the various
...@vger.kernel.org] On Behalf Of Sylvain Munaut
Sent: Saturday, 20 April 2013 12:41 AM
To: Pasi Kärkkäinen
Cc: ceph-devel@vger.kernel.org; xen-de...@lists.xen.org
Subject: Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to
test ? :p
If you have time to write up some lines about steps
and release cmount at the end of tests
Signed-off-by: Xing Lin xing...@cs.utah.edu
---
src/test/libcephfs/readdir_r_cb.cc | 4
1 file changed, 4 insertions(+)
diff --git a/src/test/libcephfs/readdir_r_cb.cc
b/src/test/libcephfs/readdir_r_cb.cc
index 788260b..4a99f10 100644
Hi Sage,
Thanks for applying these two patches. I will try to accumulate more fixes and
submit pull requests via github later.
Thanks,
Xing
On Nov 3, 2013, at 12:17 AM, Sage Weil s...@inktank.com wrote:
Applied this one too!
BTW, an easier workflow than sending patches to the list is to
Agree, it was too few PGs; I have now re-adjusted and it is busy
backfilling and evening out the data distribution across the OSDs.
My overall point is that the out-of-the-box defaults don't provide a
stable test deployment (whereas older versions like 0.61 did), and so
minimally perhaps ceph-deploy
Hi Sylvain,
I'm not quite sure what you mean, can you give some more information on how I do
this? I compiled tapdisk with ./configure CFLAGS=-g, but I'm not sure this
is what you meant.
Yes, ./configure CFLAGS=-g LDFLAGS=-g is a good start.
...
Then once you have a core file, you can use gdb
Hi,
I just tested with tap2:aio and that worked (had an old image of the VM
on
lvm still so just tested with that). Switching back to rbd and it crashes
every
time, just as postgres is starting in the vm. Booting into single user mode,
waiting 30 seconds, then letting the boot
I just had a crash since upgrading to dumpling, and will disable merging
tonight.
Still crashes with merging disabled.
James
On 13-08-13 17:39, Sylvain Munaut wrote:
It's actually strange that it changes anything at all.
Can you try adding an ERROR(HERE\n); in that error path processing
and check syslog to see if it's triggered at all ?
A traceback would be great if you can get a core file. And possibly
compile
Hi Frederik,
A traceback would be great if you can get a core file. And possibly
compile tapdisk with debug symbols.
I'm not quite sure what you mean, can you give some more information on how I do
this? I compiled tapdisk with ./configure CFLAGS=-g, but I'm not sure this
is what you meant.
Yes,
FWIW, I can confirm via printf's that this error path is never hit in at
least
some of the crashes I'm seeing.
Ok thanks.
Are you using cache btw ?
I hope not. How could I tell? It's not something I've explicitly enabled.
Thanks
James
Hi,
I hope not. How could I tell? It's not something I've explicitly enabled.
It's disabled by default.
So you'd have to have enabled it either in ceph.conf or directly in
the device path in the xen config. (option is 'rbd cache',
http://ceph.com/docs/next/rbd/rbd-config-ref/ )
Cheers,
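For completeness, a sketch of the relevant ceph.conf fragment (per the rbd-config-ref page linked above):

```ini
; rbd caching is off by default; it only turns on if you ask for it,
; either here or in the device path in the xen config
[client]
rbd cache = false          ; the default
;rbd cache = true          ; what you would have had to set to enable it
```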
in libpthread-2.13.so[7f749d059000+17000]
domU:
Same as before patch.
I would like to add that I have the time to test this; we are happy to
help you in any way possible. However, since I am no C developer, I
won't be able to do much more than testing.
Regards
Frederik
On 13-08-13 11
Hi,
I have been testing this for a while now, and just finished testing your
untested patch. The rbd caching problem still persists.
Yes, I wouldn't expect it to change anything for caching. But I still
don't understand why caching would change anything at all ... all of
it should be handled within
-company.com]
Sent: Tuesday, 13 August 2013 7:20 PM
To: James Harper
Cc: Pasi Kärkkäinen; ceph-devel@vger.kernel.org; xen-de...@lists.xen.org
Subject: Re: [Xen-devel] Xen blktap driver for Ceph RBD : Anybody wants to
test ? :p
Hi,
I hope not. How could I tell? It's not something I've explicitly
I think I have a separate problem too - tapdisk will segfault almost
immediately upon starting but seemingly only for Linux PV DomU's. Once it has
started doing this I have to wait a few hours to a day before it starts working
again. My Windows DomU's appear to be able to start normally though.
This patch changes fsx.sh to pull a better fsx.c from the xfstests site
to support hole punching tests.
Signed-off-by: Yunchuan Wen yunchuan...@ubuntukylin.com
Signed-off-by: Li Wang liw...@ubuntukylin.com
---
qa/workunits/suites/fsx.sh | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions
Hi,
tapdisk[9180]: segfault at 7f7e3a5c8c10 ip 7f7e387532d4 sp
7f7e3a5c8c10 error 4 in libpthread-2.13.so[7f7e38748000+17000]
tapdisk:9180 blocked for more than 120 seconds.
tapdisk D 88043fc13540 0 9180 1 0x
You can try generating a core file by