Yeah, it's usually hanging in some low-level LVM tool (lvs, usually).
They unfortunately like to get stuck indefinitely on some hardware
failures, but there isn't really anything that can be done.
But we've found that it's far more reliable to just call lvs ourselves
instead of relying on ceph-volume.
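The approach Paul describes above can be sketched as a small wrapper: run the LVM query under coreutils `timeout` so a hung disk makes the call fail fast instead of blocking forever. This is an illustrative sketch, not what ceph-volume itself does; the `lvs` options in the comment are standard LVM ones.

```shell
# with_timeout SECONDS CMD...: run CMD, but give up if it does not
# finish in time (non-zero exit on timeout). This keeps a hung
# low-level LVM tool from blocking the whole script.
with_timeout() {
    t=$1; shift
    timeout "$t" "$@"
}
# On a real host, e.g.:
#   with_timeout 10 lvs --noheadings -o lv_path,lv_tags
```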
On Mon, Oct 8, 2018 at 5:04 PM Paul Emmerich wrote:
>
> ceph-volume unfortunately doesn't handle completely hanging IOs too
> well compared to ceph-disk.
Not sure I follow, would you mind expanding on what you mean by
"ceph-volume unfortunately doesn't handle completely hanging IOs" ?
Hi Jakub,
"ceph osd metadata X" is perfect! It also lists the multipath devices
I was looking for!
Kevin
On Mon, Oct 8, 2018 at 9:16 PM Jakub Jaszewski <
jaszewski.ja...@gmail.com> wrote:
> Hi Kevin,
> Have you tried ceph osd metadata OSDid ?
>
> Jakub
>
> On Mon, Oct 8, 2018 at 7:32 PM Alfredo Deza wrote:
ceph-volume unfortunately doesn't handle completely hanging IOs too
well compared to ceph-disk. It needs to read actual data from each
disk and it'll just hang completely if any of the disks doesn't
respond.
The low-level command to get the information from LVM is:
lvs -o lv_tags
this allows you to map each LV to its OSD via the tags.
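Building on the `lvs -o lv_tags` output above, a minimal sketch of mapping an OSD id back to its LV. It assumes ceph-volume's `ceph.osd_id=<id>` tag naming; verify the tag names with `lvs -o lv_tags` on your own host.

```shell
# find_osd_lv OSDID: read "lv_path lv_tags" lines on stdin (the shape
# printed by `lvs --noheadings -o lv_path,lv_tags`) and print the LV
# whose tags contain ceph.osd_id=OSDID. The tag name is assumed from
# ceph-volume's tagging scheme.
find_osd_lv() {
    awk -v id="$1" '$2 ~ ("(^|,)ceph\\.osd_id=" id "(,|$)") { print $1 }'
}
# On a real host:
#   lvs --noheadings -o lv_path,lv_tags | find_osd_lv 3
```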
Hi Kevin,
Have you tried ceph osd metadata OSDid ?
Jakub
On Mon, Oct 8, 2018 at 7:32 PM Alfredo Deza wrote:
> On Mon, Oct 8, 2018 at 6:09 AM Kevin Olbrich wrote:
> >
> > Hi!
> >
> > Yes, thank you. At least on one node this works, the other node just
> > freezes but this might be caused by a bad disk that I am trying to find.
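Jakub's `ceph osd metadata` suggestion can be scripted as well. A hedged sketch: the command prints JSON for one OSD, and a `devices` field typically names the backing disk(s); the field name is an assumption here, so inspect your own output first.

```shell
# metadata_devices: read the JSON printed by `ceph osd metadata <id>`
# on stdin and print its "devices" field (field name assumed; check
# your cluster's actual metadata keys).
metadata_devices() {
    python3 -c 'import json, sys; print(json.load(sys.stdin).get("devices", ""))'
}
# On a real cluster:
#   ceph osd metadata 12 | metadata_devices
```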
On Mon, Oct 8, 2018 at 6:09 AM Kevin Olbrich wrote:
>
> Hi!
>
> Yes, thank you. At least on one node this works, the other node just freezes
> but this might be caused by a bad disk that I am trying to find.
If it is freezing, you could maybe try running the command where it
freezes?
Hi!
Yes, thank you. At least on one node this works, the other node just
freezes but this might be caused by a bad disk that I am trying to find.
Kevin
On Mon, Oct 8, 2018 at 12:07 PM Wido den Hollander wrote:
> Hi,
>
> $ ceph-volume lvm list
>
> Does that work for you?
>
> Wido
>
> On 10/08/2018 12:01 PM, Kevin Olbrich wrote:
Hi,
$ ceph-volume lvm list
Does that work for you?
Wido
On 10/08/2018 12:01 PM, Kevin Olbrich wrote:
> Hi!
>
> Is there an easy way to find raw disks (e.g. sdd/sdd1) by OSD id?
> Before I migrated from filestore with simple-mode to bluestore with lvm,
> I was able to find the raw disk with "df".
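`ceph-volume lvm list` also has a machine-readable mode (`--format json`). The sketch below assumes the JSON is keyed by OSD id, each value a list of LV entries carrying a `devices` list; that shape is an assumption, so check the real output before relying on it.

```shell
# list_osd_devices: read `ceph-volume lvm list --format json` on stdin
# and print one "osd_id devices" line per OSD. The JSON shape (dict
# keyed by OSD id, each value a list of LV dicts with a "devices"
# list) is assumed - inspect the real output first.
list_osd_devices() {
    python3 -c '
import json, sys
data = json.load(sys.stdin)
for osd_id, lvs in sorted(data.items()):
    devs = [d for lv in lvs for d in lv.get("devices", [])]
    print(osd_id, ",".join(devs))'
}
# On a real host:
#   ceph-volume lvm list --format json | list_osd_devices
```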
Hi!
Is there an easy way to find raw disks (e.g. sdd/sdd1) by OSD id?
Before I migrated from filestore with simple-mode to bluestore with lvm, I
was able to find the raw disk with "df".
Now, I need to go from LVM LV to PV to disk every time I need to
check/smartctl a disk.
Kevin
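The LV -> PV -> disk hop Kevin describes can be shortened: `lvs -o +devices` prints the PVs each LV sits on directly. The suffix-stripping helper below is a naive sketch for sdX-style names and, as noted in the comment, mishandles NVMe naming.

```shell
# dev_to_disk: turn an LVM "devices" entry such as "/dev/sdd1(0)" into
# the whole-disk name ("sdd") for smartctl. Naive sketch: strips the
# "(extent)" suffix, the directory path, and trailing partition digits.
# It does NOT handle NVMe names like nvme0n1p1 correctly.
dev_to_disk() {
    printf '%s\n' "$1" | sed -e 's/([0-9]*)$//' -e 's|.*/||' -e 's/[0-9]*$//'
}
# On a real host:
#   lvs --noheadings -o lv_path,devices   # e.g. "/dev/vg/osd  /dev/sdd1(0)"
#   dev_to_disk "/dev/sdd1(0)"            # -> sdd
```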