Hi everyone,
I’m working on ways to improve Ceph installation with ceph-deploy, and a common
hurdle we have hit involves dependency issues between the ceph.com-hosted RPM
repos and packages within EPEL. For a while we were able to manage this with
the priorities plugin, but then EPEL shipped
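The priorities approach mentioned above works by giving the ceph.com repo a lower (more preferred) priority number than EPEL. A minimal sketch of what that looks like; the repo name, baseurl, and priority values here are illustrative, not taken from the thread:

```ini
; /etc/yum/pluginconf.d/priorities.conf -- enable the plugin
[main]
enabled = 1

; /etc/yum.repos.d/ceph.repo -- lower priority number wins over EPEL's default (99)
[ceph]
name=Ceph packages
baseurl=http://ceph.com/rpm-hammer/el7/x86_64/
priority=1
enabled=1
gpgcheck=1
```

With this in place, a package present in both repos resolves from the ceph.com repo, which is what breaks down when EPEL ships a package the Ceph repo's dependencies don't expect.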
On 07/23/2015 09:44 AM, Sage Weil wrote:
On Thu, 23 Jul 2015, Deneau, Tom wrote:
I wanted to register for tracker.ceph.com to enter a few issues but never
got the confirming email and my registration is now in some stuck state
(not complete but name/email in use so can't re-register). Any
Why not zero?
If the answer is it can't be used, then, what arbitrary minimum size
is too small?
(also, given that resize exists, it can be used for storage after a resize.)
On 07/23/2015 06:03 AM, Jason Dillaman wrote:
According to the git history, support for zero MB images for the
On 23/07/15 12:56, Mark Nelson wrote:
I had similar thoughts on the benchmarking side, which is why I
started writing cbt a couple years ago. I needed the ability to
quickly spin up clusters and run benchmarks on arbitrary sets of
hardware. The outcome isn't perfect, but it's been
On 07/23/2015 07:37 AM, John Spray wrote:
On 23/07/15 12:56, Mark Nelson wrote:
I had similar thoughts on the benchmarking side, which is why I
started writing cbt a couple years ago. I needed the ability to
quickly spin up clusters and run benchmarks on arbitrary sets of
hardware. The
According to the git history, support for zero MB images for the create/resize
commands was explicitly added by commit 08f47a4. Dan Mick or Josh Durgin could
probably better explain the history behind the change since it was before my
time.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
On 23/07/2015 14:34, John Spray wrote:
On 23/07/15 12:23, Loic Dachary wrote:
You may be interested by
https://github.com/ceph/ceph/blob/master/src/test/ceph-disk-root.sh
which is conditionally included
https://github.com/ceph/ceph/blob/master/src/test/Makefile.am#L86
by
Hi John,
You may be interested by
https://github.com/ceph/ceph/blob/master/src/test/ceph-disk-root.sh
which is conditionally included
https://github.com/ceph/ceph/blob/master/src/test/Makefile.am#L86
by --enable-root-make-check
https://github.com/ceph/ceph/blob/master/configure.ac#L414
If
Hi John,
I had similar thoughts on the benchmarking side, which is why I started
writing cbt a couple years ago. I needed the ability to quickly spin up
clusters and run benchmarks on arbitrary sets of hardware. The outcome
isn't perfect, but it's been extremely useful for running
On 23/07/15 12:23, Loic Dachary wrote:
You may be interested by
https://github.com/ceph/ceph/blob/master/src/test/ceph-disk-root.sh
which is conditionally included
https://github.com/ceph/ceph/blob/master/src/test/Makefile.am#L86
by --enable-root-make-check
-----Original Message-----
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Mark Nelson
Sent: Thursday, July 23, 2015 2:51 PM
To: John Spray; ceph-devel@vger.kernel.org
Subject: Re: vstart runner for cephfs tests
On 07/23/2015 07:37 AM,
Sorry for the double-send; I forgot to send in plain text for ceph-devel.
Hi Shinobu,
Thanks for the response.
On Jul 23, 2015, at 5:05 PM, Shinobu Kinjo shinobu...@gmail.com wrote:
Hi Travis,
Is this what you are talking about:
``dnf [options] list obsoletes [package-name-specs...]``
``dnf
I have no further questions now; I understand it, thank you.
-----Original Message-----
From: Dan Mick [mailto:dm...@redhat.com]
Sent: July 24, 2015 9:21
To: zhengbin 08747 (RD); ceph-devel
Subject: Re: Re: hello, I am confused about a question of rbd
Adding back ceph-devel
My point was, ok, ruling out 0, then we
Adding back ceph-devel
My point was, ok, ruling out 0, then we can create a block device of
size 1 byte. Is that useful? No, it is not. How about 10 bytes?
1000? 1MB?
There's no good reason to rule out zero. Is it causing a problem somehow?
On 07/23/2015 06:06 PM, zhengbin.08...@h3c.com
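The arbitrariness argument above can be made concrete with a toy size check. This is purely illustrative logic, not Ceph's actual validation code: once zero is ruled out, any positive cutoff is equally hard to justify, and zero-size images remain usable via resize anyway.

```python
def valid_rbd_size(size_bytes: int, min_bytes: int = 0) -> bool:
    """Toy validator: reject sizes below an (arbitrary) minimum.

    With min_bytes = 0, every non-negative size is allowed -- the only
    cutoff that needs no justification.  Any positive cutoff (1 byte,
    10, 1000, 1 MB) is just as arbitrary as the next.
    """
    if size_bytes < 0:
        raise ValueError("size must be non-negative")
    return size_bytes >= min_bytes

# A zero-byte image passes with no minimum, and can later be resized:
assert valid_rbd_size(0)

# Pick any positive minimum and "useless" tiny sizes still pass,
# while zero -- which resize makes useful -- is rejected:
assert valid_rbd_size(1, min_bytes=1)      # a 1-byte image: allowed, but useful?
assert not valid_rbd_size(0, min_bytes=1)  # zero rejected for no clear gain
```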
(Adding devel list to the CC)
Hi Eric,
To add more context to the problem:
min_size was set to 1 and the replication size was 2.
There was a flaky power connection to one of the enclosures. With min_size 1,
we were able to continue the I/Os, and recovery became active once the power
came back. But
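The min_size/size interaction described above can be sketched as a toy model (illustrative logic only, not Ceph's actual peering code): with size=2 and min_size=1, a placement group keeps serving I/O with a single surviving replica, which is why writes continued while one enclosure had lost power.

```python
def pg_accepts_io(active_replicas: int, min_size: int) -> bool:
    """Toy model of Ceph's rule: a placement group serves I/O only
    while at least min_size replicas are active."""
    return active_replicas >= min_size

# size=2 pool, one enclosure loses power -> 1 active replica left
assert pg_accepts_io(1, min_size=1)      # I/O continues (the case above)
assert not pg_accepts_io(1, min_size=2)  # min_size=2 would have blocked writes
```

The trade-off is that accepting writes on a single replica means those writes exist in only one place until recovery completes.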
correct.
Best Regards,
Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com || http://community.redhat.com
@scuttlemonkey || @ceph
On Tue, Jul 21, 2015 at 6:03 PM, Gregory Farnum g...@gregs42.com wrote:
On Tue, Jul 21, 2015 at 6:09 PM, Patrick McGarry pmcga...@redhat.com
I wanted to register for tracker.ceph.com to enter a few issues but never
got the confirming email and my registration is now in some stuck state
(not complete but name/email in use so can't re-register). Any suggestions?
-- Tom Deneau
On Thu, Jul 23, 2015 at 11:00:57AM +0100, John Spray wrote:
Audience: anyone working on cephfs, general testing interest.
The tests in ceph-qa-suite/tasks/cephfs are growing in number, but kind of
inconvenient to run because they require teuthology (and therefore require
built packages,
Hey cephers,
Since Ceph Days for both Chicago and Raleigh are fast approaching, I
wanted to put another call out on the mailing lists for anyone who
might be interested in sharing their Ceph experiences with the
community at either location. If you have something to share
(integration, use case,
Hi all,
We hit a performance degradation in one of our clusters. Our
randwrite latency degraded from 1 ms to 5 ms (fio -ioengine=rbd
iodepth=1).
The cluster has about 200 OSDs running on Intel 3500 SSDs; we run
both qemu and ceph-osd on the hosts. The network for Ceph is 10GbE.
While the
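The latency test above can be expressed as a fio job file using the rbd ioengine. A minimal sketch; the pool and image names here are assumptions, not details from the report:

```ini
; single-depth randwrite latency test against an RBD image
[global]
ioengine=rbd
clientname=admin
pool=rbdbench       ; assumed pool name
rbdname=fio-test    ; assumed pre-created image name
rw=randwrite
bs=4k
iodepth=1

[latency-job]
time_based=1
runtime=60
```

At iodepth=1 the reported completion latency tracks per-write round-trip time directly, which is what makes a 1 ms to 5 ms shift visible.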
On Thu, 23 Jul 2015, Deneau, Tom wrote:
I wanted to register for tracker.ceph.com to enter a few issues but never
got the confirming email and my registration is now in some stuck state
(not complete but name/email in use so can't re-register). Any suggestions?
It does that sometimes... not
Audience: anyone working on cephfs, general testing interest.
The tests in ceph-qa-suite/tasks/cephfs are growing in number, but kind
of inconvenient to run because they require teuthology (and therefore
require built packages, locked nodes, etc). Most of them don't actually
require