Yes, correct, but the main issue is that the OSD configuration gets lost after
every reboot.
On Fri, May 4, 2018 at 6:11 PM, Alfredo Deza wrote:
> On Fri, May 4, 2018 at 1:22 AM, Akshita Parekh wrote:
> > Steps followed while installing Ceph:
> > 1) Installing rpms
> >
>
> […] when configuring your OSDs.
>
>
> On Fri, May 4, 2018, 12:14 AM Akshita Parekh wrote:
>
>> Ceph v10.2.0 (Jewel). Why is ceph-disk or ceph-volume required to
>> configure disks? And where is encryption configured?
>>
>> On Thu, May 3, 2018 at 6:24 PM, David Turner wrote:
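To the question above: on Jewel (v10.2.x), ceph-disk is the supported tool (ceph-volume only arrived later, in Luminous), and it is what makes an OSD survive a reboot — "prepare" tags the partitions with Ceph's GPT type GUIDs, and the udev rules then mount and activate them at boot. A sketch, assuming an empty data disk at /dev/sdb (adjust the device for your hardware):

```shell
# Sketch for Jewel; /dev/sdb is an assumed, empty data disk.
# "prepare" partitions the disk, creates an xfs filesystem, and tags the
# partitions with Ceph GPT type GUIDs so udev re-activates them at boot.
sudo ceph-disk prepare --cluster ceph --fs-type xfs /dev/sdb

# "activate" mounts the data partition under /var/lib/ceph/osd/ceph-<id>
# and starts the OSD; after this, a reboot re-mounts it automatically.
sudo ceph-disk activate /dev/sdb1

# Encryption (the other part of the question): ceph-disk can set up
# dm-crypt during prepare.
sudo ceph-disk prepare --dmcrypt /dev/sdb
```

If the partitions were instead created and mounted by hand, nothing re-creates that mount at boot, which matches the symptom described below.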
Hi All,
after every reboot the superblock and the other OSD files get deleted from
/var/lib/ceph/osd/ceph-0 (ceph-1, etc.), so I have to prepare and activate the
OSDs after every reboot. Any suggestions?
ceph.target and ceph-osd are enabled.
Thanks in advance!
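A common cause of exactly this symptom is that the OSD data directory was never a separate, persistently mounted partition, so everything under it actually lives on the root filesystem. A quick check plus a persistent-mount sketch — the device name and fstab options here are assumptions, substitute your real partition:

```shell
# Assumed values -- substitute your real data partition and OSD id.
DEV=/dev/sdb1
OSD_ID=0
MNT=/var/lib/ceph/osd/ceph-$OSD_ID

# 1) Is the data dir its own mount? If nothing matches in /proc/mounts,
#    its contents sit on the root filesystem.
grep -qs " $MNT " /proc/mounts || echo "$MNT is not a separate mount"

# 2) The /etc/fstab line that would re-mount the partition at every boot
#    (with ceph-disk-prepared disks this is normally handled by udev
#    instead, so only add it for manually prepared partitions):
printf '%s %s xfs defaults,noatime 0 0\n' "$DEV" "$MNT"
```

If the check prints "not a separate mount" while the OSD is supposedly up, the data was written to the root fs and will not be there after the partition is mounted properly.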
Hi all,
I configured the cluster two weeks back. The OSDs always showed a failed
status, so I removed the OSDs and added them again, almost by the same process
as given at
http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/
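For reference, the removal-and-re-add sequence from that page boils down to the following (a sketch for Jewel; osd.0 and /dev/sdb are assumptions, repeat per OSD):

```shell
# Remove osd.0 cleanly (per the add-or-rm-osds doc linked above):
ceph osd out 0                   # let data rebalance off the OSD first
sudo systemctl stop ceph-osd@0   # stop the daemon
ceph osd crush remove osd.0      # drop it from the CRUSH map
ceph auth del osd.0              # remove its auth key
ceph osd rm 0                    # remove it from the cluster

# Re-add it with ceph-disk so the mount and activation persist across
# reboots (assumed device: /dev/sdb):
sudo ceph-disk prepare /dev/sdb
sudo ceph-disk activate /dev/sdb1
```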
My cluster has 2 OSDs, 1 monitor, 1 admin and 1 client. After removing and
adding