Nope, no problem here.
> On 13 Sep 2017, at 22:05, Anthony D'Atri wrote:
>
> For a couple of weeks now, digests have been appearing to me off and on with only a
> few sets of MIME headers and maybe 1-2 messages visible. When I look at the raw text,
> the whole digest is in there.
>
> Screencap below. Anyon
x55e157f72510]
Sep 08 18:48:05 proxmox1 ceph-osd[3954]: 30: (()+0x7494) [0x7f5ca93f9494]
Sep 08 18:48:05 proxmox1 ceph-osd[3954]: 31: (clone()+0x3f) [0x7f5ca8480aff]
Sep 08 18:48:05 proxmox1 ceph-osd[3954]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Sep 08 18
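The note in the trace asks for a disassembly of the crashing binary; a minimal
sketch of producing one, assuming the OSD binary lives at /usr/bin/ceph-osd
(the path is an assumption, not something given in the log):

    # disassemble with relocations and interleaved source, which is what the
    # "objdump -rdS" note above refers to
    objdump -rdS /usr/bin/ceph-osd > ceph-osd.dis

The addresses printed in the backtrace can then be looked up in ceph-osd.dis;
frames that live in shared libraries need the same treatment applied to the
corresponding .so files.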
So on IRC I was asked to add this log from the OSD that was marked as missing
during scrub:
https://pastebin.com/raw/YQj3Drzi
for a full recovery, at which
>> point I just deleted the RBD.
>>
>> The possibilities afforded by Ceph inception are endless. ☺
>>
>>
>>
>> Steve Taylor | Senior Software Engineer | StorageCraft Technology Corporation
>> 380 Data Drive Suite 300 | Dr
I may try that.. =) Do
>>>> they last longer? Ones that fit the UPS original battery spec
>>>> didn't last very long... part of the reason why I gave up on them..
>>>> =P My wife probably won't like the idea of a car battery hanging out
>>>> th
So nobody has any clue on this one?
Should I take this one to the dev mailing list?
> On 27 Aug 2017, at 01:49, Tomasz Kusmierz wrote:
>
> Hi,
> for purposes of experimenting I’m running a home cluster that consists of a
> single node and 4 OSDs (weights in the crush map are true to actual hdd size).
t be safe to
> add another OSD host?
>
> Regards,
> Hong
>
>
>
> On Monday, August 28, 2017 4:43 PM, Tomasz Kusmierz
> wrote:
>
>
> Sorry for being brutal … anyway
> 1. get the battery for the UPS (a car battery will do as well, I’ve modded on ups
> i
0 scrub errors; mds cluster is degraded; no legacy OSD
> present but 'sortbitwise' flag is not set
>
>
>
> Regards,
> Hong
>
>
> On Monday, August 28, 2017 4:18 PM, Tomasz Kusmierz
> wrote:
>
>
> So, to decode a few things about your disk:
>
So, to decode a few things about your disk:
  1 Raw_Read_Error_Rate     0x002f   100   100   051    Pre-fail  Always       -       37
37 read errors and only one sector marked as pending - fun disk :/
181 Program_Fail_Cnt_Total  0x0022   099   099   000    Old_age   Always       -       35325174
S
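A table like the one quoted above is typically pulled with smartctl; a minimal
sketch, assuming the suspect drive is /dev/sdb (the device path is an
assumption, not something given in the thread):

    # dump the vendor SMART attribute table (ID, VALUE/WORST/THRESH, raw value)
    smartctl -A /dev/sdb
    # start a long (surface) self-test and read back the result once it finishes
    smartctl -t long /dev/sdb
    smartctl -l selftest /dev/sdb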
I think you are looking at something more like this:
https://www.google.co.uk/imgres?imgurl=https%3A%2F%2Fthumbs.dreamstime.com%2Fz%2Fhard-drive-being-destroyed-hammer-16668693.jpg&imgrefurl=https%3A%2F%2Fwww.dreamstime.com%2Fstock-photos-hard-drive-being-destroyed-hammer-image16668693&docid=Ofi7
Analyzing a Faulty Hard Disk using Smartctl - Thomas-Krenn-Wiki
>
> <https://www.thomas-krenn.com/en/wiki/Analyzing_a_Faulty_Hard_Disk_using_Smartctl>
>
>
>
> On Monday, August 28, 2017 3:24 PM, Tomasz Kusmierz
> wrote:
>
>
> I think you’ve got your answer:
>
I think you’ve got your answer:
197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       1
> On 28 Aug 2017, at 21:22, hjcho616 wrote:
>
> Steve,
>
> I thought that was odd too..
>
> Below is from the log. This captures the transition from good to bad. It looks
> like a production environment, but it is just driven by me. =)
>
> Do you have any suggestions to get any of those osd.3, osd.4, osd.5, and
> osd.8 to come back up without removing them? I have a feeling I can get some
> data back with some of them intact.
>
> Thank you!
>
> Regar
Personally I would suggest:
- change the replication failure domain to OSD (from the default of host); see
the sketch below
- remove the OSDs from the host with all those "down OSDs" (note that they are
down, not out, which makes it more weird)
- let the single-node cluster stabilise; yes, performance will suck, but at
least you will h
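For the first point, a minimal sketch of switching the failure domain from
host to osd on Luminous (the rule name replicated_osd and the pool name rbd
are illustrative assumptions, not taken from this thread):

    # create a replicated CRUSH rule that picks OSDs rather than hosts
    ceph osd crush rule create-replicated replicated_osd default osd
    # point a pool at the new rule (repeat for each pool)
    ceph osd pool set rbd crush_rule replicated_osd

With the failure domain at osd, replicas are allowed to land on different
disks of the same host, which is what lets a single-node test cluster reach
HEALTH_OK.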
Hi,
for purposes of experimenting I’m running a home cluster that consists of a
single node and 4 OSDs (weights in the crush map are true to actual hdd size). I
prefer to test all new stuff on home equipment before getting egg on my face
at work :)
Anyway, recently I’ve upgraded to Luminous, and replace