On 11/30/17 14:04, Fabian Grünbichler wrote:
> point is - you should not purposefully attempt to annoy users and/or
> downstreams by changing behaviour in the middle of an LTS release cycle,
Exactly. Upgrading the patch level (x.y.z to x.y.z+1) should IMHO never
introduce a behaviour change,
On Thu, Nov 30, 2017 at 07:04:33AM -0500, Alfredo Deza wrote:
> On Thu, Nov 30, 2017 at 6:31 AM, Fabian Grünbichler
> wrote:
> > On Tue, Nov 28, 2017 at 10:39:31AM -0800, Vasu Kulkarni wrote:
> >> On Tue, Nov 28, 2017 at 9:22 AM, David Turner
>
On Thu, Nov 30, 2017 at 6:31 AM, Fabian Grünbichler
wrote:
> On Tue, Nov 28, 2017 at 10:39:31AM -0800, Vasu Kulkarni wrote:
>> On Tue, Nov 28, 2017 at 9:22 AM, David Turner wrote:
>> > Doesn't marking something as deprecated mean that there is
On Tue, Nov 28, 2017 at 10:39:31AM -0800, Vasu Kulkarni wrote:
> On Tue, Nov 28, 2017 at 9:22 AM, David Turner wrote:
> > Doesn't marking something as deprecated mean that there is a better option
> > that we want you to use, and that you should switch to it sooner rather than later? I
On 27/11/2017 at 14:36, Alfredo Deza wrote:
> For the upcoming Luminous release (12.2.2), ceph-disk will be
> officially in 'deprecated' mode (bug fixes only). A large banner with
> deprecation information has been added, which will try to raise
> awareness.
>
> We are strongly suggesting using
On Tue, Nov 28, 2017 at 9:22 AM, David Turner wrote:
> Doesn't marking something as deprecated mean that there is a better option
> that we want you to use, and that you should switch to it sooner rather than later? I
> don't understand how this is ready to be marked as such if
Doesn't marking something as deprecated mean that there is a better option
that we want you to use, and that you should switch to it sooner rather than later? I
don't understand how this is ready to be marked as such if ceph-volume
can't be switched to for all supported use cases. If ZFS, encryption,
FreeBSD,
On 28-11-2017 13:32, Alfredo Deza wrote:
I understand that this would involve a significant effort to fully
port over and drop ceph-disk entirely, and I don't think that dropping
ceph-disk in Mimic is set in stone (yet).
Alfredo,
When I expressed my concerns about deprecating ceph-disk, I was
On 11/28/2017 12:52 PM, Alfredo Deza wrote:
On Tue, Nov 28, 2017 at 7:38 AM, Joao Eduardo Luis wrote:
On 11/28/2017 11:54 AM, Alfredo Deza wrote:
On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander wrote:
On 27 November 2017 at 14:36, Alfredo Deza wrote:
Thanks!
I'll start looking into rebuilding my roles once 12.2.2 is out then.
On 28 November 2017 at 13:37, Alfredo Deza wrote:
> On Tue, Nov 28, 2017 at 7:22 AM, Andreas Calminder
> wrote:
>>> For the `simple` sub-command there is no
On Tue, Nov 28, 2017 at 7:38 AM, Joao Eduardo Luis wrote:
> On 11/28/2017 11:54 AM, Alfredo Deza wrote:
>>
>> On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander wrote:
>>>
>>>
On 27 November 2017 at 14:36, Alfredo Deza wrote:
On 11/28/2017 11:54 AM, Alfredo Deza wrote:
On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander wrote:
On 27 November 2017 at 14:36, Alfredo Deza wrote:
For the upcoming Luminous release (12.2.2), ceph-disk will be
officially in 'deprecated' mode (bug
On Tue, Nov 28, 2017 at 7:22 AM, Andreas Calminder
wrote:
>> For the `simple` sub-command there is no prepare/activate, it is just
>> a way of taking over management of an already deployed OSD. For *new*
>> OSDs, yes, we are implying that we are going only with
On Tue, Nov 28, 2017 at 3:39 AM, Piotr Dałek wrote:
> On 17-11-28 09:12 AM, Wido den Hollander wrote:
>>
>>
>>> On 27 November 2017 at 14:36, Alfredo Deza wrote:
>>>
>>>
>>> For the upcoming Luminous release (12.2.2), ceph-disk will be
>>> officially
> On 28 November 2017 at 12:54, Alfredo Deza wrote:
>
>
> On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander wrote:
> >
> >> On 27 November 2017 at 14:36, Alfredo Deza wrote:
> >>
> >>
> >> For the upcoming Luminous release (12.2.2),
> For the `simple` sub-command there is no prepare/activate, it is just
> a way of taking over management of an already deployed OSD. For *new*
> OSDs, yes, we are implying that we are going only with Logical Volumes
> for data devices. It is a bit more flexible for Journals, block.db,
> and
I tend to agree with Wido. Many of us still rely on ceph-disk and hope
to see it live a little longer.
Maged
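To make the quoted point about Logical Volumes a bit more concrete, here is a rough sketch of what the new-OSD flow looks like with ceph-volume's lvm subcommand. The VG/LV and device names are made-up examples and the flags are from memory, so double-check them against the 12.2.2 docs:

  # bluestore OSD: data on a logical volume, block.db on a faster partition
  ceph-volume lvm create --bluestore --data ceph-vg/osd-data-0 --block.db /dev/nvme0n1p1

  # or the two-step variant that mirrors ceph-disk's prepare/activate split
  ceph-volume lvm prepare --bluestore --data ceph-vg/osd-data-0
  ceph-volume lvm activate <osd-id> <osd-fsid>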
On 2017-11-28 13:54, Alfredo Deza wrote:
> On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander wrote:
> On 27 November 2017 at 14:36, Alfredo Deza wrote:
On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander wrote:
>
>> On 27 November 2017 at 14:36, Alfredo Deza wrote:
>>
>>
>> For the upcoming Luminous release (12.2.2), ceph-disk will be
>> officially in 'deprecated' mode (bug fixes only). A large banner with
>>
On Tue, Nov 28, 2017 at 1:56 AM, Andreas Calminder
wrote:
> Hello,
> Thanks for the heads-up. As someone who is currently maintaining a
> Jewel cluster and is in the process of setting up a shiny new
> Luminous cluster, writing Ansible roles along the way to make
On 17-11-28 09:12 AM, Wido den Hollander wrote:
On 27 November 2017 at 14:36, Alfredo Deza wrote:
For the upcoming Luminous release (12.2.2), ceph-disk will be
officially in 'deprecated' mode (bug fixes only). A large banner with
deprecation information has been added,
> On 27 November 2017 at 14:36, Alfredo Deza wrote:
>
>
> For the upcoming Luminous release (12.2.2), ceph-disk will be
> officially in 'deprecated' mode (bug fixes only). A large banner with
> deprecation information has been added, which will try to raise
> awareness.
>
Hello,
Thanks for the heads-up. As someone who is currently maintaining a
Jewel cluster and is in the process of setting up a shiny new
Luminous cluster, writing Ansible roles along the way to make
setup reproducible, I immediately proceeded to look into ceph-volume
and I've some
For the upcoming Luminous release (12.2.2), ceph-disk will be
officially in 'deprecated' mode (bug fixes only). A large banner with
deprecation information has been added, which will try to raise
awareness.
We are strongly suggesting using ceph-volume for new (and old) OSD
deployments. The only
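For anyone wondering what taking over an "old" ceph-disk OSD looks like in practice, my understanding of the simple takeover flow is roughly the following; the OSD id, fsid and path are illustrative examples, not values from the thread:

  # capture the metadata of a running ceph-disk OSD into /etc/ceph/osd/<id>-<fsid>.json
  ceph-volume simple scan /var/lib/ceph/osd/ceph-0

  # from then on let ceph-volume, rather than the ceph-disk udev rules, bring the OSD up
  ceph-volume simple activate 0 4f8a2c61-1b3e-4d2a-9c0e-123456789abc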