On Fri, Aug 31, 2012 at 11:02 PM, Ryan Nicholson
ryan.nichol...@kcrg.com wrote:
Secondly: through some trials, I've found that if a cluster loses all of
its monitors in a way that their disks are lost as well, the cluster as a
whole is essentially lost. I would like to recommend a lower-priority shift in
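Ryan's scenario - every monitor gone together with its disk - is a good argument for keeping out-of-band backups of at least one monitor's store. A minimal sketch of that idea, assuming the default mon data layout (`/var/lib/ceph/mon/<cluster>-<id>`) and a monitor that is stopped while its store is copied; this is an illustration, not an official recovery tool:

```python
import os
import tarfile
import time

def backup_mon_store(mon_data_dir, backup_dir):
    """Archive a (stopped) monitor's data directory to a timestamped tarball.

    mon_data_dir: e.g. /var/lib/ceph/mon/ceph-a (assumed default layout).
    The monitor daemon must not be running while its store is copied,
    or the copy may be inconsistent.
    """
    os.makedirs(backup_dir, exist_ok=True)
    name = os.path.basename(mon_data_dir.rstrip("/"))
    out = os.path.join(backup_dir, "%s-%d.tar.gz" % (name, int(time.time())))
    with tarfile.open(out, "w:gz") as tar:
        # Store the directory under its own name so restores are unambiguous.
        tar.add(mon_data_dir, arcname=name)
    return out
```

Restoring would be the reverse: stop the daemon, unpack the tarball back into place, start the monitor.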
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Ross Turk
Sent: Tuesday, August 28, 2012 1:12 PM
To: ceph-devel@vger.kernel.org
Subject: Integration work
Hi, ceph-devel! It's me, your friendly community guy.
Inktank has an engineering team dedicated to Ceph, and we want to work
On Tuesday 28 August 2012 you wrote:
On Tue, Aug 28, 2012 at 11:51 AM, Dieter Kasper d.kas...@kabelmail.de
wrote:
Hi Ross,
focusing on core stability and feature expansion for RBD was the right
approach in the past, and I feel you have reached an adequate maturity
level here.
On 08/29/2012 10:20 AM, Sylvain Munaut wrote:
Hi,
How about Xen?
I vote for this :)
Using RBD storage for Xen VM images / disks is IMHO a very nice fit,
the same way people do with QEMU. This should even allow live
migration of VMs.
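For comparison, QEMU's rbd block driver already accepts drive specifications of the form `rbd:<pool>/<image>`. A small sketch of assembling such an argument; the pool and image names are made up, and the idea that a Xen HVM guest running through qemu-dm could consume the same syntax is an assumption on my part, not something stock Xen shipped in 2012:

```python
def qemu_rbd_drive(pool, image, fmt="raw", cache="writeback"):
    """Build a -drive argument string for QEMU's rbd block driver.

    QEMU (compiled with rbd support) accepts file=rbd:<pool>/<image>.
    """
    return "format=%s,file=rbd:%s/%s,cache=%s" % (fmt, pool, image, cache)

# e.g.:  qemu -m 1024 -drive format=raw,file=rbd:rbd/vm-disk0,cache=writeback
print(qemu_rbd_drive("rbd", "vm-disk0"))
```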
Correct me if I'm wrong, but when I was at Citrix in May this year somebody
there told me that Xen was going 100% Qemu?
Huh ... I've never heard this. Also the guys in ##xen haven't either.
I'm not really involved in xen dev and don't follow it closely but
that seems unlikely. The few slides I
On Wed, Aug 29, 2012 at 9:40 AM, Wido den Hollander w...@widodh.nl wrote:
Huh ... I've never heard this. Also the guys in ##xen haven't either.
I'm not really involved in xen dev and don't follow it closely but
that seems unlikely. The few slides I looked at from the Xen Summit a
couple days
On Tuesday, August 28, 2012, Ross Turk wrote:
Over the past several months, Inktank's engineers have focused on core
stability, radosgw, and feature expansion for RBD. At the same time,
they have been regularly allocating cycles to integration work.
Recently, this has consisted of improvements to the way Ceph works
within OpenStack (even though OpenStack isn't the only technology that
we think Ceph should play nicely with).
What other sorts
On 28.08.2012 20:51, Dieter Kasper wrote:
Performance enhancements - especially to reduce the latency of a single IO /
increase IOPS - and a stronger engagement on the CephFS client would be very
much appreciated. A stable and fast CephFS client would allow an efficient
integration with
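The latency/IOPS relationship Dieter is driving at is worth spelling out: for a synchronous writer (queue depth 1), IOPS is simply the reciprocal of per-operation latency, so shaving latency is the only way to speed that workload up; higher queue depths can hide latency but don't help a single dependent stream. A quick illustration of that arithmetic (the numbers are hypothetical):

```python
def iops(latency_ms, queue_depth=1):
    """IOPS achievable at a given per-op latency, assuming the backend can
    keep queue_depth operations in flight (Little's law:
    throughput = concurrency / latency)."""
    return queue_depth * 1000.0 / latency_ms

# A single synchronous writer at 10 ms per commit:
print(iops(10))        # 100.0
# The same 10 ms latency hidden behind 32 outstanding ops:
print(iops(10, 32))    # 3200.0
```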
like to check in with you to
make sure that we are.
On Tue, Aug 28, 2012 at 5:03 PM, Florian Haas flor...@hastexo.com wrote:
I for my part, in the documentation space, would love for the admin
tools to become self-documenting. For example, I would love a help
subcommand at any level of the ceph shell, listing the supported
subcommands in that
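What Florian describes maps naturally onto nested sub-command parsers, where every level gets a generated help screen for free. A toy sketch of the pattern using Python's argparse; the command names are placeholders of mine, not the real ceph tool's command table:

```python
import argparse

def build_parser():
    """Toy 'ceph'-style CLI where each level lists its own subcommands.

    'ceph --help' lists osd/mon; 'ceph osd --help' lists tree/down; and
    so on - the help text is derived from the parser structure itself.
    """
    parser = argparse.ArgumentParser(prog="ceph")
    top = parser.add_subparsers(dest="section")

    osd = top.add_parser("osd", help="OSD management commands")
    osd_cmds = osd.add_subparsers(dest="command")
    osd_cmds.add_parser("tree", help="print the OSD hierarchy")
    osd_cmds.add_parser("down", help="mark an OSD down")

    mon = top.add_parser("mon", help="monitor commands")
    mon_cmds = mon.add_subparsers(dest="command")
    mon_cmds.add_parser("stat", help="summarize monitor status")
    return parser

parser = build_parser()
print(parser.format_help())
```

Because the help is generated from the same structure that dispatches the commands, it can never drift out of date the way hand-written docs do.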