If we change the current (failed) journal path to point at an existing
journal and then restart all of the failed/stopped OSDs, will that work?
(Not tested, just assuming.)
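
For reference, a rough, untested sketch of what I have in mind for a single
filestore OSD (the OSD id, the replacement partition /dev/sdX1 and the
systemd unit name are placeholders; the default /var/lib/ceph/osd layout
is assumed):

    # stop the affected OSD
    systemctl stop ceph-osd@<id>

    # point the OSD at the replacement journal partition
    ln -sf /dev/sdX1 /var/lib/ceph/osd/ceph-<id>/journal

    # initialise the new journal (any transactions that were only in the
    # failed journal are lost)
    ceph-osd -i <id> --mkjournal

    # bring the OSD back up
    systemctl start ceph-osd@<id>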

Thanks
Swami


On Mon, Jan 25, 2016 at 9:18 PM, Loris Cuoghi <l...@stella-telecom.fr> wrote:
> On 25/01/2016 15:28, Mihai Gheorghe wrote:
>>
>> As far as I know you will not lose data, but it will be inaccessible
>> until you bring the journal back online.
>
>
> http://www.sebastien-han.fr/blog/2014/11/27/ceph-recover-osds-after-ssd-journal-failure/
>
> After this, we should be able to restart the OSDs and wait for recovery.
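>
> A rough sketch of that flow (untested; the OSD ids are placeholders, and
> the per-OSD journal re-creation is done as described in the post above):
>
>     ceph osd set noout                # avoid unnecessary rebalancing
>     for id in 3 7 11 15; do           # OSDs whose journals were on the dead SSD
>         systemctl stop ceph-osd@$id
>         ceph-osd -i $id --mkjournal   # after re-pointing the journal symlink
>         systemctl start ceph-osd@$id
>     done
>     ceph osd unset noout
>     ceph -w                           # then watch the cluster recover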
>
> Never done this, anyone has experience to share?
>
>>
>> 2016-01-25 16:23 GMT+02:00 Daniel Schwager <daniel.schwa...@dtnet.de>:
>>
>>     Hi,
>>     > OK... the OSD stops. Any reason why the OSD stops? (I assume that if
>>     > the journal disk fails, the OSD should keep working without a
>>     > journal. Isn't that so?)
>>
>>     No. In my understanding, if a journal fails, all the OSDs attached to
>>     that journal disk fail as well.
>>
>>     E.g. if you have 4 OSDs with their 4 journals located on one SSD,
>>     the failure of that SSD will also crash/fail all 4 OSDs.
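>>
>>     (As a quick check of which OSDs share a given journal device,
>>     something like this should work, assuming the default filestore
>>     layout where the journal is a symlink in the OSD's data directory:
>>
>>         ls -l /var/lib/ceph/osd/ceph-*/journal
>>
>>     Each symlink points at the journal partition backing that OSD.)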
>>
>>     regards
>>     Danny
>>
>>
>>      >
>>      > I don't understand why the OSD data is lost. Do you mean data lost
>>      > during the transaction time, or total OSD data loss?
>>      >
>>      > Thanks
>>      > Swami
>>      >
>>      > On Mon, Jan 25, 2016 at 7:06 PM, Jan Schermer <j...@schermer.cz> wrote:
>>      > > OSD stops.
>>      > > And you pretty much lose all data on the OSD if you lose the
>>     journal.
>>      > >
>>      > > Jan
>>      > >
>>      > >> On 25 Jan 2016, at 14:04, M Ranga Swami Reddy <swamire...@gmail.com> wrote:
>>      > >>
>>      > >> Hello,
>>      > >>
>>      > >> If a journal disk fails (due to a crash, power failure, etc.),
>>      > >> what happens to OSD operations?
>>      > >>
>>      > >> PS: Assume that the journal and the OSD data are on separate drives.
>>      > >>
>>      > >> Thanks
>>      > >> Swami
>>
>>
>>
>>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
