Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)

2017-12-03 Thread Stefan Kooman
Quoting Alfredo Deza (ad...@redhat.com):
> 
Looks like there is a tag in there that broke it. Let's follow up on a
> tracker issue so that we don't hijack this thread?
> 
> http://tracker.ceph.com/projects/ceph-volume/issues/new

Issue 22305 made for this: http://tracker.ceph.com/issues/22305

You are right, sorry for hijacking this thread.

Gr. Stefan

P.s. co-worker of Dennis Lijnsveld

-- 
| BIT BV   http://www.bit.nl/   Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)

2017-12-01 Thread Alfredo Deza
On Fri, Dec 1, 2017 at 3:28 AM, Stefan Kooman  wrote:
> Quoting Fabian Grünbichler (f.gruenbich...@proxmox.com):
>> I think the above roadmap is a good compromise for all involved parties,
>> and I hope we can use the remainder of Luminous to prepare for a
>> seamless and painless transition to ceph-volume in time for the Mimic
>> release, and then finally retire ceph-disk for good!
>
> Will the upcoming 12.2.2 release ship with a ceph-volume capable of
> doing bluestore on top of LVM?

Yes, see the open PR for it (https://github.com/ceph/ceph-deploy/pull/455).

> Eager to use ceph-volume for that, and
> skip entirely over ceph-disk and our manual osd prepare process ...

Please note that the API will change in a non-backwards-compatible way,
so a major release of ceph-deploy will be done after that is merged.

>
> Gr. Stefan
>
> --
> | BIT BV   http://www.bit.nl/   Kamer van Koophandel 09090351
> | GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)

2017-12-01 Thread Alfredo Deza
On Fri, Dec 1, 2017 at 11:35 AM, Dennis Lijnsveld  wrote:
> On 12/01/2017 01:45 PM, Alfredo Deza wrote:
>>> Will the upcoming 12.2.2 release ship with a ceph-volume capable of
>>> doing bluestore on top of LVM? Eager to use ceph-volume for that, and
>>> skip entirely over ceph-disk and our manual osd prepare process ...
>>
>> Yes. I think this was the case for 12.2.1 as well; in 12.2.2 it is
>> the default.
>
> Just updated Ceph to 12.2.2 and afterwards tried to prepare an OSD
> with the following command:
>
> ceph-volume lvm prepare --bluestore --data osd.9/osd.9
>
> in which osd.9 is the name of both the VG and the LV. After running
> the command I got this error on screen:
>
> -->  ValueError: need more than 1 value to unpack
>
> I checked the log /var/log/ceph-volume.log, which gave me the output
> below. Am I hitting some kind of bug, or am I perhaps doing something
> wrong?

Looks like there is a tag in there that broke it. Let's follow up on a
tracker issue so that we don't hijack this thread?

http://tracker.ceph.com/projects/ceph-volume/issues/new
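
For reference, the failure appears to come from the tag parser expecting
every entry in lv_tags to be a key=value pair: in the lvs output quoted
below, the lv_tags column for osd.9 seems to carry a bare value with no
'=' in it, which is exactly what trips the unpack. A minimal sketch of
the failing path (a simplified stand-in for illustration, not the actual
ceph-volume code):

# Illustration only: simplified stand-in for ceph_volume.api.lvm.parse_tags,
# showing why a tag entry without '=' raises "need more than 1 value to unpack".
def parse_tags_sketch(lv_tags):
    tags = {}
    for tag_assignment in lv_tags.split(','):
        key, value = tag_assignment.split('=', 1)  # ValueError if there is no '='
        tags[key] = value
    return tags

parse_tags_sketch('ceph.osd_id=25,ceph.type=block')        # parses fine (example values)
parse_tags_sketch('d77bfa9f-4d8d-40df-852a-692a94929ed2')  # ValueError, as in the log below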

>
> [2017-12-01 17:25:25,234][ceph_volume.process][INFO  ] Running command: ceph-authtool --gen-print-key
> [2017-12-01 17:25:25,278][ceph_volume.process][INFO  ] stdout AQB1giFayoNDEBAAtOCZgErrB02Hrs370zBDcA==
> [2017-12-01 17:25:25,279][ceph_volume.process][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 063c7de3-d4b2-463b-9f56-7a76b0b48197
> [2017-12-01 17:25:25,940][ceph_volume.process][INFO  ] stdout 25
> [2017-12-01 17:25:25,940][ceph_volume.process][INFO  ] Running command: sudo lvs --noheadings --separator=";" -o lv_tags,lv_path,lv_name,vg_name,lv_uuid
> [2017-12-01 17:25:25,977][ceph_volume.process][INFO  ] stdout ";"/dev/LVM0/CEPH";"CEPH";"LVM0";"y4Al1c-SFHH-VARl-XQf3-Qsc8-H3MN-LLIIj4
> [2017-12-01 17:25:25,978][ceph_volume.process][INFO  ] stdout ";"/dev/LVM0/ROOT";"ROOT";"LVM0";"31V3cd-E2b1-LcDz-2loq-egvh-lz4e-3u20ZN
> [2017-12-01 17:25:25,978][ceph_volume.process][INFO  ] stdout ";"/dev/LVM0/SWAP";"SWAP";"LVM0";"hI3cNL-sddl-yXFB-BOXT-5R6j-fDtZ-kNixYa
> [2017-12-01 17:25:25,978][ceph_volume.process][INFO  ] stdout d77bfa9f-4d8d-40df-852a-692a94929ed2";"/dev/osd.9/osd.9";"osd.9";"osd.9";"3NAmK8-U3Fx-KUOm-f8x8-aEtO-MbYh-uPGHhR
> [2017-12-01 17:25:25,979][ceph_volume][ERROR ] exception caught by decorator
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
>     return f(*a, **kw)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/main.py", line 144, in main
>     terminal.dispatch(self.mapper, subcommand_args)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/terminal.py", line 131, in dispatch
>     instance.main()
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/main.py", line 38, in main
>     terminal.dispatch(self.mapper, self.argv)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/terminal.py", line 131, in dispatch
>     instance.main()
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py", line 293, in main
>     self.prepare(args)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py", line 16, in is_root
>     return func(*a, **kw)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py", line 206, in prepare
>     block_lv = self.get_lv(args.data)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py", line 102, in get_lv
>     return api.get_lv(lv_name=lv_name, vg_name=vg_name)
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line 162, in get_lv
>     lvs = Volumes()
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line 411, in __init__
>     self._populate()
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line 416, in _populate
>     self.append(Volume(**lv_item))
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line 638, in __init__
>     self.tags = parse_tags(kw['lv_tags'])
>   File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line 66, in parse_tags
>     key, value = tag_assignment.split('=', 1)
> ValueError: need more than 1 value to unpack
>
> --
> Dennis Lijnsveld
> BIT BV - http://www.bit.nl
> Kvk: 09090351
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)

2017-12-01 Thread Dennis Lijnsveld
On 12/01/2017 01:45 PM, Alfredo Deza wrote:
>> Will the upcoming 12.2.2 release ship with a ceph-volume capable of
>> doing bluestore on top of LVM? Eager to use ceph-volume for that, and
>> skip entirely over ceph-disk and our manual osd prepare process ...
> 
> Yes. I think this was the case for 12.2.1 as well; in 12.2.2 it is
> the default.

Just updated Ceph to 12.2.2 and afterwards tried to prepare an OSD
with the following command:

ceph-volume lvm prepare --bluestore --data osd.9/osd.9

in which osd.9 is the name of both the VG and the LV. After running
the command I got this error on screen:

-->  ValueError: need more than 1 value to unpack

I checked the log /var/log/ceph-volume.log, which gave me the output
below. Am I hitting some kind of bug, or am I perhaps doing something
wrong?

[2017-12-01 17:25:25,234][ceph_volume.process][INFO  ] Running command: ceph-authtool --gen-print-key
[2017-12-01 17:25:25,278][ceph_volume.process][INFO  ] stdout AQB1giFayoNDEBAAtOCZgErrB02Hrs370zBDcA==
[2017-12-01 17:25:25,279][ceph_volume.process][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 063c7de3-d4b2-463b-9f56-7a76b0b48197
[2017-12-01 17:25:25,940][ceph_volume.process][INFO  ] stdout 25
[2017-12-01 17:25:25,940][ceph_volume.process][INFO  ] Running command: sudo lvs --noheadings --separator=";" -o lv_tags,lv_path,lv_name,vg_name,lv_uuid
[2017-12-01 17:25:25,977][ceph_volume.process][INFO  ] stdout ";"/dev/LVM0/CEPH";"CEPH";"LVM0";"y4Al1c-SFHH-VARl-XQf3-Qsc8-H3MN-LLIIj4
[2017-12-01 17:25:25,978][ceph_volume.process][INFO  ] stdout ";"/dev/LVM0/ROOT";"ROOT";"LVM0";"31V3cd-E2b1-LcDz-2loq-egvh-lz4e-3u20ZN
[2017-12-01 17:25:25,978][ceph_volume.process][INFO  ] stdout ";"/dev/LVM0/SWAP";"SWAP";"LVM0";"hI3cNL-sddl-yXFB-BOXT-5R6j-fDtZ-kNixYa
[2017-12-01 17:25:25,978][ceph_volume.process][INFO  ] stdout d77bfa9f-4d8d-40df-852a-692a94929ed2";"/dev/osd.9/osd.9";"osd.9";"osd.9";"3NAmK8-U3Fx-KUOm-f8x8-aEtO-MbYh-uPGHhR
[2017-12-01 17:25:25,979][ceph_volume][ERROR ] exception caught by decorator
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/main.py", line 144, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/terminal.py", line 131, in dispatch
    instance.main()
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/main.py", line 38, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/terminal.py", line 131, in dispatch
    instance.main()
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py", line 293, in main
    self.prepare(args)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py", line 206, in prepare
    block_lv = self.get_lv(args.data)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/devices/lvm/prepare.py", line 102, in get_lv
    return api.get_lv(lv_name=lv_name, vg_name=vg_name)
  File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line 162, in get_lv
    lvs = Volumes()
  File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line 411, in __init__
    self._populate()
  File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line 416, in _populate
    self.append(Volume(**lv_item))
  File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line 638, in __init__
    self.tags = parse_tags(kw['lv_tags'])
  File "/usr/lib/python2.7/dist-packages/ceph_volume/api/lvm.py", line 66, in parse_tags
    key, value = tag_assignment.split('=', 1)
ValueError: need more than 1 value to unpack
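
In case it helps others hitting the same trace, below is a small, hedged
diagnostic sketch (ad hoc, not part of ceph-volume) that queries the same
lvs columns ceph-volume asks for (using a plain ';' separator for
simplicity) and reports any LV whose lv_tags field contains an entry that
is not a key=value pair:

# Hedged diagnostic sketch: list LVs whose lv_tags contain an entry
# without '='. Run as root (or adjust to call sudo) so lvs sees all LVs.
import subprocess

def find_malformed_lv_tags():
    out = subprocess.check_output([
        'lvs', '--noheadings', '--separator=;',
        '-o', 'lv_tags,lv_path,lv_name,vg_name,lv_uuid',
    ]).decode()
    for line in out.splitlines():
        fields = line.strip().split(';')
        if len(fields) < 5:
            continue
        lv_tags, lv_path = fields[0], fields[1]
        bad = [t for t in lv_tags.split(',') if t and '=' not in t]
        if bad:
            print('%s has malformed lv_tags entries: %s' % (lv_path, ', '.join(bad)))

if __name__ == '__main__':
    find_malformed_lv_tags()

If a stray tag does show up, it can presumably be dropped with
lvchange --deltag <tag> <vg>/<lv> before retrying the prepare, but treat
that as a workaround suggestion rather than the official fix.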

-- 
Dennis Lijnsveld
BIT BV - http://www.bit.nl
Kvk: 09090351
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)

2017-12-01 Thread Dietmar Rieder
On 12/01/2017 01:45 PM, Alfredo Deza wrote:
> On Fri, Dec 1, 2017 at 3:28 AM, Stefan Kooman  wrote:
>> Quoting Fabian Grünbichler (f.gruenbich...@proxmox.com):
>>> I think the above roadmap is a good compromise for all involved parties,
>>> and I hope we can use the remainder of Luminous to prepare for a
>>> seamless and painless transition to ceph-volume in time for the Mimic
>>> release, and then finally retire ceph-disk for good!
>>
>> Will the upcoming 12.2.2 release ship with a ceph-volume capable of
>> doing bluestore on top of LVM? Eager to use ceph-volume for that, and
>> skip entirely over ceph-disk and our manual osd prepare process ...
> 
> Yes. I think this was the case for 12.2.1 as well; in 12.2.2 it is
> the default.


...and will ceph-deploy be ceph-volume capable and default to it in the
12.2.2 release?

Dietmar



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)

2017-12-01 Thread Alfredo Deza
On Fri, Dec 1, 2017 at 3:28 AM, Stefan Kooman  wrote:
> Quoting Fabian Grünbichler (f.gruenbich...@proxmox.com):
>> I think the above roadmap is a good compromise for all involved parties,
>> and I hope we can use the remainder of Luminous to prepare for a
>> seamless and painless transition to ceph-volume in time for the Mimic
>> release, and then finally retire ceph-disk for good!
>
> Will the upcoming 12.2.2 release ship with a ceph-volume capable of
> doing bluestore on top of LVM? Eager to use ceph-volume for that, and
> skip entirely over ceph-disk and our manual osd prepare process ...

Yes. I think this was the case for 12.2.1 as well; in 12.2.2 it is
the default.


>
> Gr. Stefan
>
> --
> | BIT BV   http://www.bit.nl/   Kamer van Koophandel 09090351
> | GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)

2017-12-01 Thread Stefan Kooman
Quoting Fabian Grünbichler (f.gruenbich...@proxmox.com):
> I think the above roadmap is a good compromise for all involved parties,
> and I hope we can use the remainder of Luminous to prepare for a
> seamless and painless transition to ceph-volume in time for the Mimic
> release, and then finally retire ceph-disk for good!

Will the upcoming 12.2.2 release ship with a ceph-volume capable of
doing bluestore on top of LVM? Eager to use ceph-volume for that, and
skip entirely over ceph-disk and our manual osd prepare process ...

Gr. Stefan

-- 
| BIT BV   http://www.bit.nl/   Kamer van Koophandel 09090351
| GPG: 0xD14839C6   +31 318 648 688 / i...@bit.nl
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)

2017-12-01 Thread Fabian Grünbichler
On Thu, Nov 30, 2017 at 11:25:03AM -0500, Alfredo Deza wrote:
> Thanks all for your feedback on deprecating ceph-disk, we are very
> excited to be able to move forwards on a much more robust tool and
> process for deploying and handling activation of OSDs, removing the
> dependency on UDEV which has been a tremendous source of constant
> issues.
> 
> Initially (see "killing ceph-disk" thread [0]) we planned for removal
> in Mimic, but we didn't want to introduce the deprecation warnings up
> until we had an out for those who had OSDs deployed in previous
> releases with ceph-disk (we are now able to handle those as well).
> That is the reason ceph-volume, although present since the first
> Luminous release, hasn't been pushed forward much.
> 
> Now that we feel like we can cover almost all cases, we would really
> like to see a wider usage so that we can improve on issues/experience.
> 
> Given that 12.2.2 is already in the process of getting released, we
> can't undo the deprecation warnings for that version, but we will
> remove them for 12.2.3, add them back again in Mimic, which will mean
> ceph-disk will be kept around a bit longer, and finally fully removed
> by N.
> 
> To recap:
> 
> * ceph-disk deprecation warnings will stay for 12.2.2
> * deprecation warnings will be removed in 12.2.3 (and from all later
> Luminous releases)
> * deprecation warnings will be added again in ceph-disk for all Mimic releases
> * ceph-disk will no longer be available for the 'N' release, along
> with the UDEV rules
> 
> I believe these four points address most of the concerns voiced in
> this thread, and should give enough time to port clusters over to
> ceph-volume.
> 
> [0] 
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021358.html

Thank you for listening to the feedback - I think most of us know that
the balance between moving a project forward and decrufting a code base,
versus providing a stable enough interface for users, is not always easy
to find.

I think the above roadmap is a good compromise for all involved parties,
and I hope we can use the remainder of Luminous to prepare for a
seamless and painless transition to ceph-volume in time for the Mimic
release, and then finally retire ceph-disk for good!

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)

2017-11-30 Thread Peter Woodman
How quickly are you planning to cut 12.2.3?

On Thu, Nov 30, 2017 at 4:25 PM, Alfredo Deza  wrote:
> Thanks all for your feedback on deprecating ceph-disk, we are very
> excited to be able to move forwards on a much more robust tool and
> process for deploying and handling activation of OSDs, removing the
> dependency on UDEV which has been a tremendous source of constant
> issues.
>
> Initially (see "killing ceph-disk" thread [0]) we planned for removal
> in Mimic, but we didn't want to introduce the deprecation warnings up
> until we had an out for those who had OSDs deployed in previous
> releases with ceph-disk (we are now able to handle those as well).
> That is the reason ceph-volume, although present since the first
> Luminous release, hasn't been pushed forward much.
>
> Now that we feel like we can cover almost all cases, we would really
> like to see a wider usage so that we can improve on issues/experience.
>
> Given that 12.2.2 is already in the process of getting released, we
> can't undo the deprecation warnings for that version, but we will
> remove them for 12.2.3, add them back again in Mimic, which will mean
> ceph-disk will be kept around a bit longer, and finally fully removed
> by N.
>
> To recap:
>
> * ceph-disk deprecation warnings will stay for 12.2.2
> * deprecation warnings will be removed in 12.2.3 (and from all later
> Luminous releases)
> * deprecation warnings will be added again in ceph-disk for all Mimic releases
> * ceph-disk will no longer be available for the 'N' release, along
> with the UDEV rules
>
> I believe these four points address most of the concerns voiced in
> this thread, and should give enough time to port clusters over to
> ceph-volume.
>
> [0] 
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021358.html
>
> On Thu, Nov 30, 2017 at 8:22 AM, Daniel Baumann  wrote:
>> On 11/30/17 14:04, Fabian Grünbichler wrote:
>>> point is - you should not purposefully attempt to annoy users and/or
>>> downstreams by changing behaviour in the middle of an LTS release cycle,
>>
>> exactly. upgrading the patch level (x.y.z to x.y.z+1) should imho never
>> introduce a behaviour-change, regardless if it's "just" adding new
>> warnings or not.
>>
>> this is a stable update we're talking about, even more so since it's an
>> LTS release. you never know how people use stuff (e.g. by parsing stupid
>> things), so such behaviour-change will break stuff for *some* people
>> (granted, most likely a really low number).
>>
>> my expectation of a stable release is that it stays, literally, stable.
>> that's the whole point of having it in the first place. otherwise we
>> would all be running git snapshots and update randomly to newer ones.
>>
>> adding deprecation messages in mimic makes sense, and getting rid of
>> it / not providing support for it in mimic+1 is reasonable.
>>
>> Regards,
>> Daniel
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)

2017-11-30 Thread Alfredo Deza
Thanks all for your feedback on deprecating ceph-disk, we are very
excited to be able to move forwards on a much more robust tool and
process for deploying and handling activation of OSDs, removing the
dependency on UDEV which has been a tremendous source of constant
issues.

Initially (see "killing ceph-disk" thread [0]) we planned for removal
in Mimic, but we didn't want to introduce the deprecation warnings up
until we had an out for those who had OSDs deployed in previous
releases with ceph-disk (we are now able to handle those as well).
That is the reason ceph-volume, although present since the first
Luminous release, hasn't been pushed forward much.

Now that we feel like we can cover almost all cases, we would really
like to see a wider usage so that we can improve on issues/experience.

Given that 12.2.2 is already in the process of getting released, we
can't undo the deprecation warnings for that version, but we will
remove them for 12.2.3, add them back again in Mimic, which will mean
ceph-disk will be kept around a bit longer, and finally fully removed
by N.

To recap:

* ceph-disk deprecation warnings will stay for 12.2.2
* deprecation warnings will be removed in 12.2.3 (and from all later
Luminous releases)
* deprecation warnings will be added again in ceph-disk for all Mimic releases
* ceph-disk will no longer be available for the 'N' release, along
with the UDEV rules

I believe these four points address most of the concerns voiced in
this thread, and should give enough time to port clusters over to
ceph-volume.

[0] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021358.html

On Thu, Nov 30, 2017 at 8:22 AM, Daniel Baumann  wrote:
> On 11/30/17 14:04, Fabian Grünbichler wrote:
>> point is - you should not purposefully attempt to annoy users and/or
>> downstreams by changing behaviour in the middle of an LTS release cycle,
>
> exactly. upgrading the patch level (x.y.z to x.y.z+1) should imho never
> introduce a behaviour-change, regardless if it's "just" adding new
> warnings or not.
>
> this is a stable update we're talking about, even more so since it's an
> LTS release. you never know how people use stuff (e.g. by parsing stupid
> things), so such behaviour-change will break stuff for *some* people
> (granted, most likely a really low number).
>
> my expectation of a stable release is that it stays, literally, stable.
> that's the whole point of having it in the first place. otherwise we
> would all be running git snapshots and update randomly to newer ones.
>
> adding deprecation messages in mimic makes sense, and getting rid of
> it / not providing support for it in mimic+1 is reasonable.
>
> Regards,
> Daniel
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com