We're glad to announce the second bugfix release of the Luminous v12.2.x
stable release series. It contains a range of bug fixes and a few
features across BlueStore, CephFS, RBD, and RGW. We recommend that all
users of the 12.2.x series update.
For more detailed information, see the blog[1] and the complete
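A quick way to confirm the update has actually reached every daemon (the
command has been available since Luminous):

  ceph versions    # per daemon type, which ceph versions are currently running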
On Fri, Dec 1, 2017 at 3:28 AM, Stefan Kooman wrote:
> Quoting Fabian Grünbichler (f.gruenbich...@proxmox.com):
>> I think the above roadmap is a good compromise for all involved parties,
>> and I hope we can use the remainder of Luminous to prepare for a
>> seamless and painless
On Fri, Dec 1, 2017 at 11:35 AM, Dennis Lijnsveld wrote:
> On 12/01/2017 01:45 PM, Alfredo Deza wrote:
>>> Will the upcoming 12.2.2 release ship with a ceph-volume capable of
>>> doing bluestore on top of LVM? Eager to use ceph-volume for that, and
>>> skip entirely over ceph-disk
On 12/01/2017 01:45 PM, Alfredo Deza wrote:
>> Will the upcoming 12.2.2 release ship with a ceph-volume capable of
>> doing bluestore on top of LVM? Eager to use ceph-volume for that, and
>> skip entirely over ceph-disk and our manual osd prepare process ...
>
> Yes. I think that for 12.2.1 this
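For anyone wanting to try this once 12.2.2 lands, a minimal sketch of the
single-step ceph-volume flow; the device and VG/LV names below are only
examples, not taken from this thread:

  # let ceph-volume create the volume group and logical volume itself
  ceph-volume lvm create --bluestore --data /dev/sdb
  # or point it at an existing logical volume (vg/lv)
  ceph-volume lvm create --bluestore --data ceph-vg/osd-lv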
On 12/01/2017 01:45 PM, Alfredo Deza wrote:
> On Fri, Dec 1, 2017 at 3:28 AM, Stefan Kooman wrote:
>> Quoting Fabian Grünbichler (f.gruenbich...@proxmox.com):
>>> I think the above roadmap is a good compromise for all involved parties,
>>> and I hope we can use the remainder of
Hi,
Ceph version 10.2.5
I have had a Ceph cluster running for a few months, with iSCSI servers
that are linked to Ceph by RBD.
All of a sudden the ESXi server will lose the iSCSI datastore (disk
space goes to 0 B), and I can only fix this by rebooting the iSCSI
server.
When checking
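Without knowing what the checks showed, a generic first pass for this kind
of drop-out (pool and image names are placeholders, not from the report)
would be to see who still holds watches on the RBD image and whether any
client ended up blacklisted:

  rbd status rbd/esxi-datastore   # list current watchers on the image
  ceph osd blacklist ls           # any blacklisted client addresses
  ceph -s                         # overall cluster state when the datastore disappears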
On 17-12-01 12:23 PM, Maged Mokhtar wrote:
Hi all,
I believe most existing setups use 1 disk per OSD. Is this going to be the
most common setup in the future? With the move to LVM, will this favor the
use of multiple disks per OSD? On the other hand, I also see NVMe vendors
recommending
Hi all,
I believe most existing setups use 1 disk per OSD. Is this going to be
the most common setup in the future? With the move to LVM, will this
favor the use of multiple disks per OSD? On the other hand, I also see
NVMe vendors recommending multiple OSDs (2, 4) per disk as disks are
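On the NVMe side, the move to LVM arguably makes several OSDs per disk
easier rather than harder, since each OSD simply becomes its own logical
volume. A rough sketch; device, VG and LV names are invented for
illustration:

  vgcreate ceph-nvme0 /dev/nvme0n1
  lvcreate -l 50%VG -n osd0 ceph-nvme0
  lvcreate -l 100%FREE -n osd1 ceph-nvme0
  ceph-volume lvm create --bluestore --data ceph-nvme0/osd0
  ceph-volume lvm create --bluestore --data ceph-nvme0/osd1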
On Fri, Dec 1, 2017 at 7:28 PM, nokia ceph wrote:
> Thanks Brad, that worked.. :)
No problem.
I created http://tracker.ceph.com/issues/22297
>
> On Fri, Dec 1, 2017 at 12:18 PM, Brad Hubbard wrote:
>>
>>
>>
>> On Thu, Nov 30, 2017 at 5:30 PM,
Thanks Brad, that worked.. :)
On Fri, Dec 1, 2017 at 12:18 PM, Brad Hubbard wrote:
>
>
> On Thu, Nov 30, 2017 at 5:30 PM, nokia ceph
> wrote:
> > Hello,
> >
> > I'm following
> > http://docs.ceph.com/docs/master/ceph-volume/lvm/
>
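For completeness, the two-step flow on that page looks roughly like the
following; the device is a placeholder, and the osd id and fsid passed to
'activate' come from the prepare/list output:

  ceph-volume lvm prepare --bluestore --data /dev/sdc
  ceph-volume lvm list                            # shows the osd id and osd fsid just prepared
  ceph-volume lvm activate <osd-id> <osd-fsid>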
Quoting Fabian Grünbichler (f.gruenbich...@proxmox.com):
> I think the above roadmap is a good compromise for all involved parties,
> and I hope we can use the remainder of Luminous to prepare for a
> seamless and painless transition to ceph-volume in time for the Mimic
> release, and then finally
On Thu, Nov 30, 2017 at 11:25:03AM -0500, Alfredo Deza wrote:
> Thanks all for your feedback on deprecating ceph-disk, we are very
> excited to be able to move forward on a much more robust tool and
> process for deploying and handling activation of OSDs, removing the
> dependency on UDEV which
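For clusters with existing ceph-disk OSDs, my understanding is that nothing
has to be re-created: the 'simple' sub-command captures the OSD metadata
into a JSON file and activates the OSD through systemd without udev. A
sketch, with an example partition path:

  ceph-volume simple scan /dev/sdb1                  # writes /etc/ceph/osd/<id>-<fsid>.json
  ceph-volume simple activate <osd-id> <osd-fsid>    # id and fsid come from the scan output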