The current version of the RPM spec for Ceph removes the whole /etc/ceph
directory on uninstall:
https://github.com/ceph/ceph/blob/master/ceph.spec.in#L557-L562
I don't think the contents of /etc/ceph are disposable, and they should
not be silently discarded like that.
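For comparison, packaging guidelines usually mark such files as config
so that rpm preserves local changes; a minimal sketch of that approach
(hypothetical spec lines, not the actual ceph.spec.in contents):

%files
%dir /etc/ceph
%config(noreplace) /etc/ceph/ceph.conf

With %config(noreplace), rpm keeps a modified file on upgrade (the new
default lands as .rpmnew) and saves it as .rpmsave on erase, instead of
a %postun scriptlet removing the directory outright.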
--
Dmitry Borodaenko
> …presence of patches about what their
> impact is and what the system's stability is. These are largely
> cleaning up rough edges around user interfaces, and smoothing out
> issues in the new functionality that a standard deployment isn't going
> to experience. :)
> -Greg
>
On Tue, Sep 30, 2014 at 6:49 PM, Dmitry Borodaenko wrote:
> Last stable Firefly release (v0.80.5) was tagged on July 29 (over 2
> months ago). Since then, there were twice as many commits merged into
> the firefly branch as existed on the branch before v0.80.5:
>
> $ git log --oneline --no-merges v0.80.5..firefly | wc -l
> 227
Is this a one-time aberration in the process, or should we expect the
gap between maintenance updates for LTS releases of Ceph to keep
growing?
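(For anyone who wants to reproduce the comparison, something like the
following should work; the tag marking the start of the firefly branch
is an assumption, adjust to taste:

$ git log --oneline --no-merges v0.80.5..firefly | wc -l
$ git log --oneline --no-merges v0.80..v0.80.5 | wc -l

The first count is the commits merged since v0.80.5, the second the
commits that shipped in releases up to it.)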
Thanks,
--
Dmitry Borodaenko
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Dmitry Borodaenko
>> Sent: Wednesday, July 16, 2014 11:18 PM
>> To: ceph-users@lists.ceph.com
>> Cc: OpenStack Development Mailing List (not for usage questions)
> …I'm using
> the default Ubuntu packages since Icehouse lives in core, and I'm not
> sure how to apply the patch series. I would love to test and review it.
>
> With regards,
>
> Dennis
>
> On 07/16/2014 11:18 PM, Dmitry Borodaenko wrote:
>> I've got a bit …
>> …d about this patch series on the ceph-users ML:
>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-March/028097.html
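Regarding Dennis's question about applying the series on top of the
stock Ubuntu packages, a rough sketch of one way to do it, assuming the
packaging uses the usual debian/patches quilt layout (package version
and patch paths are placeholders):

$ apt-get source nova
$ cd nova-2014.1*/
$ QUILT_PATCHES=debian/patches quilt import ~/rbd-ephemeral-clone/*.patch
$ dpkg-buildpackage -us -uc
$ sudo dpkg -i ../python-nova_*.deb

This is untested; building packages from the patched git branch is
another option.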
--
Dmitry Borodaenko
> …would need to be applied for the full
> Icehouse/Ceph integration?
>
> cheers
> jc
>
> On 01.05.2014, at 01:23, Dmitry Borodaenko wrote:
>
> I've re-proposed the rbd-clone-image-handler blueprint via nova-specs:
> https://review.openstack.org/91486
>
> In other news, … with RBD backed
> ephemeral drives, which will need a bit more work and a separate
> blueprint.
On Mon, Apr 28, 2014 at 7:44 PM, Dmitry Borodaenko wrote:
> I have decoupled the Nova rbd-ephemeral-clone branch from the
> multiple-image-location patch; the result can be found at the same
> location…
…maintaining multiple-image-location, just leaving it
out there to save some rebasing effort for whoever decides to pick it
up.
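(For the record, the decoupling itself is mechanically something like
the following; a sketch, with master as the assumed base:

$ git rebase --onto master multiple-image-location rbd-ephemeral-clone

which replays only the commits unique to rbd-ephemeral-clone onto
master, dropping the multiple-image-location commits underneath.)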
-DmitryB
On Fri, Mar 21, 2014 at 1:12 PM, Josh Durgin wrote:
> On 03/20/2014 07:03 PM, Dmitry Borodaenko wrote:
>>
>> On Thu, Mar 20, 2014 at 3:43 PM, Josh Durgin wrote:
On Thu, Mar 20, 2014 at 3:43 PM, Josh Durgin wrote:
> On 03/20/2014 02:07 PM, Dmitry Borodaenko wrote:
>> The patch series that implemented the clone operation for RBD backed
>> ephemeral volumes in Nova did not make it into Icehouse. We have tried
>> our best to help it land, …
…OpenStack 5.0, which
will be based on Icehouse.
If you're interested in this feature, please review and test. Bug
reports and patches are welcome, as long as their scope is limited to
this patch series and not applicable to mainline OpenStack.
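For testing, the relevant nova.conf settings look roughly like this
(a sketch only; option names and sections varied between releases, so
verify against the patched tree, and the pool/user values are
placeholders):

$ cat >> /etc/nova/nova.conf <<'EOF'
[libvirt]
images_type = rbd
images_rbd_pool = ephemeral
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <your libvirt secret uuid>
EOF

Then restart nova-compute and boot an instance to confirm its disk is
created in the RBD pool.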
--
Dmitry Borodaenko
On Tue, Jan 21, 2014 at 10:38 AM, Dmitry Borodaenko wrote:
> On Tue, Jan 21, 2014 at 2:23 AM, Lalitha Maruthachalam wrote:
>> Can someone please let me know whether there is any documentation for
>> installing the Havana release of OpenStack along with Ceph.
>
> These slides … including some gotchas and troubleshooting pointers:
> http://files.meetup.com/11701852/fuel-ceph.pdf
--
Dmitry Borodaenko
… Do you think we should keep the commit
history in its current form, or would it be easier to squash it down
to a more manageable number of patches?
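(If squashing wins, an interactive rebase over the merge base would do
it; a sketch, branch names assumed:

$ git rebase -i $(git merge-base master rbd-ephemeral-clone)

then mark the follow-up commits of each logical change as squash or
fixup in the todo list.)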
Thanks,
--
Dmitry Borodaenko
> …contains live-migration,
> incorrect filesystem size fix and ceph-snapshot support in a few days.
Can't wait to see this patch! Are you getting rid of the shared
storage requirement for live-migration?
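(That is, with every disk already in the Ceph cluster, the hope would
be that a plain live migration works without --block-migrate, roughly:

$ nova live-migration <instance> <target-host>

whether that actually holds with this patch series is exactly the
question above.)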
Thanks,
--
Dmitry Borodaenko
Still working on it, watch this space :)
On Tue, Nov 12, 2013 at 3:44 PM, Dinu Vlad wrote:
> Out of curiosity - can you live-migrate instances with this setup?
>
>
>
> On Nov 12, 2013, at 10:38 PM, Dmitry Borodaenko wrote:
>
>> And to answer my own question, I was …
>> …client.images >
>> /etc/ceph/ceph.client.images.keyring' and reverting the images caps back
>> to their original state, it all works!
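For anyone landing here with the same problem, the general shape of the
fix (the caps shown are illustrative; adjust pool names to your setup):

$ ceph auth get-or-create client.images \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' \
    > /etc/ceph/ceph.client.images.keyring

so that nova/libvirt can authenticate as client.images instead of
falling back to client.admin.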
On Tue, Nov 12, 2013 at 12:19 PM, Dmitry Borodaenko wrote:
> I can get ephemeral storage for Nova to work with the RBD backend, but I
> don't understand why it only works with the admin c…
…setting rbd_user to images or volumes doesn't work.
What am I missing?
Thanks,
--
Dmitry Borodaenko
…anything but raw.
Background:
https://bugs.launchpad.net/fuel/+bug/1246219
Thoughts?
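(The usual workaround, for reference, is converting images to raw
before uploading them to Glance; a sketch:

$ qemu-img convert -f qcow2 -O raw cirros.qcow2 cirros.raw
$ glance image-create --name cirros --disk-format raw \
    --container-format bare --file cirros.raw

at the cost of larger uploads, since raw images lose the qcow2
compaction.)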
--
Dmitry Borodaenko