The massive rebalancing does not treat the SSDs kindly either. But from
what I've gathered the Pro should be fine. Were there massive amounts of
write errors in the logs?

/Josef
On 17 Apr 2015 21:07, "Andrija Panic" <andrija.pa...@gmail.com> wrote:

> nah... Samsung 850 PRO 128GB - dead after 3 months - 2 of these died...
> the wear level is at 96%, so only 4% worn... (yes, I know these are not
> enterprise-grade, etc...)
>
> On 17 April 2015 at 21:01, Josef Johansson <jose...@gmail.com> wrote:
>
>> Tough luck, I hope everything comes up OK afterwards. What models are the
>> SSDs?
>>
>> /Josef
>> On 17 Apr 2015 20:05, "Andrija Panic" <andrija.pa...@gmail.com> wrote:
>>
>>> An SSD that hosted the journals for 6 OSDs died - 2 SSDs died in total, so
>>> 12 OSDs are down, and the rebalancing is about to finish... after which I
>>> need to fix the OSDs.
>>>
>>> On 17 April 2015 at 19:01, Josef Johansson <jo...@oderland.se> wrote:
>>>
>>>> Hi,
>>>>
>>>> Did the other 6 OSDs go down during the re-adding?
>>>>
>>>> /Josef
>>>>
>>>> On 17 Apr 2015, at 18:49, Andrija Panic <andrija.pa...@gmail.com>
>>>> wrote:
>>>>
>>>> 12 OSDs are down - I expect recreating the journals is less work than
>>>> removing and re-adding the OSDs?
>>>> On Apr 17, 2015 6:35 PM, "Krzysztof Nowicki" <
>>>> krzysztof.a.nowi...@gmail.com> wrote:
>>>>
>>>>> Why not just wipe out the OSD filesystem, run ceph-osd --mkfs with the
>>>>> existing OSD UUID, copy the keyring and let it populate itself?
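>>>>>
>>>>> A rough sketch of that, assuming the OSD's data disk is /dev/sdX1 and
>>>>> that $ID and $UUID stand in for the OSD id and its original fsid (the
>>>>> UUID is shown in the output of ceph osd dump):
>>>>>
>>>>> cp /var/lib/ceph/osd/ceph-$ID/keyring /root/keyring.osd.$ID  # save first!
>>>>> umount /var/lib/ceph/osd/ceph-$ID
>>>>> mkfs.xfs -f /dev/sdX1                    # wipe the OSD's data filesystem
>>>>> mount /dev/sdX1 /var/lib/ceph/osd/ceph-$ID
>>>>> ceph-osd -i $ID --mkfs --osd-uuid $UUID  # recreate the store, same UUID
>>>>> cp /root/keyring.osd.$ID /var/lib/ceph/osd/ceph-$ID/keyring
>>>>> service ceph start osd.$ID               # it then backfills from peers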
>>>>>
>>>>> Fri, 17 Apr 2015 at 18:31, Andrija Panic <
>>>>> andrija.pa...@gmail.com> wrote:
>>>>>
>>>>>> Thanks guys, that's what I will be doing in the end.
>>>>>>
>>>>>> Cheers
>>>>>> On Apr 17, 2015 6:24 PM, "Robert LeBlanc" <rob...@leblancnet.us>
>>>>>> wrote:
>>>>>>
>>>>>>> Delete and re-add all six OSDs.
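>>>>>>>
>>>>>>> Loosely, for each dead OSD id (the standard removal steps; double-check
>>>>>>> against your own deployment before running anything):
>>>>>>>
>>>>>>> ceph osd out $ID               # already down, so no extra data movement
>>>>>>> ceph osd crush remove osd.$ID  # drop it from the CRUSH map
>>>>>>> ceph auth del osd.$ID          # remove its auth key
>>>>>>> ceph osd rm $ID                # remove it from the cluster
>>>>>>> # then re-create each one as a fresh OSD, e.g. with ceph-deploy osd create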
>>>>>>>
>>>>>>> On Fri, Apr 17, 2015 at 3:36 AM, Andrija Panic <
>>>>>>> andrija.pa...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi guys,
>>>>>>>>
>>>>>>>> I have 1 SSD that hosted 6 OSD's Journals, that is dead, so 6 OSD
>>>>>>>> down, ceph rebalanced etc.
>>>>>>>>
>>>>>>>> Now I have a new SSD installed, which I will partition etc. - but I
>>>>>>>> would like to know how to proceed with recreating the journals for
>>>>>>>> those 6 OSDs that are down now.
>>>>>>>>
>>>>>>>> Should I flush the journals (to where? the journals don't exist any
>>>>>>>> more...), or just recreate the journals from scratch (making the
>>>>>>>> symbolic links again: ln -s /dev/$DISK$PART
>>>>>>>> /var/lib/ceph/osd/ceph-$ID/journal) and start the OSDs?
>>>>>>>>
>>>>>>>> I expect the following procedure, but would like confirmation please:
>>>>>>>>
>>>>>>>> rm -f /var/lib/ceph/osd/ceph-$ID/journal   # remove the stale symlink
>>>>>>>> ln -s /dev/SDAxxx /var/lib/ceph/osd/ceph-$ID/journal  # new partition
>>>>>>>> ceph-osd -i $ID --mkjournal                # initialize the new journal
>>>>>>>> ls -l /var/lib/ceph/osd/ceph-$ID/journal   # verify the symlink
>>>>>>>> service ceph start osd.$ID
>>>>>>>>
>>>>>>>> Any thoughts greatly appreciated!
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>> --
>>>>>>>>
>>>>>>>> Andrija Panić
>>>>>>>>
>>>
>>>
>>> --
>>>
>>> Andrija Panić
>>>
>
>
> --
>
> Andrija Panić
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
