On Fri, Nov 30, 2018 at 3:10 PM Paul Emmerich <paul.emmer...@croit.io> wrote:
>
> On Mon, Oct 8, 2018 at 11:34 PM Alfredo Deza <ad...@redhat.com> wrote:
> >
> > On Mon, Oct 8, 2018 at 5:04 PM Paul Emmerich <paul.emmer...@croit.io> wrote:
> > >
> > > Unfortunately, ceph-volume doesn't handle completely hanging IO as
> > > well as ceph-disk did.
> >
> > Not sure I follow, would you mind expanding on what you mean by
> > "ceph-volume unfortunately doesn't handle completely hanging IOs" ?
> >
> > ceph-volume just provisions the OSD, nothing else. If LVM is hanging,
> > there is nothing we can do about it, just like ceph-disk couldn't do
> > anything if the partitioning tool hung.
>
> Another follow-up on this, since I ran into issues with ceph-volume
> again a few times over the last weeks. I've opened issues for the
> main problems we have been seeing since switching to ceph-volume:
>
> http://tracker.ceph.com/issues/37490
> http://tracker.ceph.com/issues/37487
> http://tracker.ceph.com/issues/37492
>
> The summary is that most operations need to access *all* disks, which
> causes problems if even one of them is misbehaving.
> ceph-disk didn't have this particular problem (though it had plenty of
> other problems; overall we are happier with ceph-volume).

Paul, thank you so much for opening these issues. It is sometimes hard
to anticipate these sorts of "real world" usage problems.

None of them seems hard to tackle; I anticipate they will be fixed and
merged rather quickly.
>
> Paul
>
> >
> >
> >
> > > It needs to read actual data from each
> > > disk and it'll just hang completely if any of the disks doesn't
> > > respond.
> > >
> > > The low-level command to get the information from LVM is:
> > >
> > > lvs -o lv_tags
> > >
> > > this allows you to map a LV to an OSD id.
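> > >
> > > For example, something like the following (a rough sketch; the OSD id
> > > "12" is just a placeholder, ceph-volume stores the id in the
> > > "ceph.osd_id" LV tag):
> > >
> > > lvs -o lv_name,vg_name,devices,lv_tags | grep 'ceph.osd_id=12'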
> > >
> > >
> > > Paul
> > > On Mon, Oct 8, 2018 at 12:09 PM Kevin Olbrich <k...@sv01.de> wrote:
> > > >
> > > > Hi!
> > > >
> > > > Yes, thank you. At least on one node this works; the other node just
> > > > freezes, but that might be caused by a bad disk that I'm trying to find.
> > > >
> > > > Kevin
> > > >
> > > > On Mon, Oct 8, 2018 at 12:07 PM Wido den Hollander <w...@42on.com> wrote:
> > > >>
> > > >> Hi,
> > > >>
> > > >> $ ceph-volume lvm list
> > > >>
> > > >> Does that work for you?
> > > >>
> > > >> Wido
> > > >>
> > > >> On 10/08/2018 12:01 PM, Kevin Olbrich wrote:
> > > >> > Hi!
> > > >> >
> > > >> > Is there an easy way to find raw disks (e.g. sdd/sdd1) by OSD id?
> > > >> > Before I migrated from filestore with simple-mode to bluestore with
> > > >> > lvm, I was able to find the raw disk with "df".
> > > >> > Now I need to go from LVM LV to PV to disk every time I need to
> > > >> > check/smartctl a disk.
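> > > >> > Roughly (the device name here is just an example): "lvs -o lv_tags"
> > > >> > to find the LV for the OSD, then "pvs -o pv_name,vg_name" to see
> > > >> > which disk backs that VG, and finally "smartctl -a /dev/sdd" on
> > > >> > that disk.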
> > > >> >
> > > >> > Kevin
> > > >> >
> > > >> >
> > > >
> > >
> > >
> > >
> > > --
> > > Paul Emmerich
> > >
> > > Looking for help with your Ceph cluster? Contact us at https://croit.io
> > >
> > > croit GmbH
> > > Freseniusstr. 31h
> > > 81247 München
> > > www.croit.io
> > > Tel: +49 89 1896585 90
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
