[ovirt-users] Re: [Qemu-block] Re: Debugging ceph access

2018-06-07 Thread Jason Dillaman
On Wed, Jun 6, 2018 at 7:20 AM, Bernhard Dick  wrote:
> Hi,
>
> On 05.06.2018 at 22:11, Nir Soffer wrote:
>>
>> On Fri, Jun 1, 2018 at 3:54 PM Stefan Hajnoczi wrote:
>>
>> On Thu, May 31, 2018 at 11:02:01PM +0300, Nir Soffer wrote:
>>  > On Thu, May 31, 2018 at 1:55 AM Bernhard Dick wrote:
>>  >
>>  > > Hi,
>>  > >
>>  > > I found the reason for my timeout problems: it is the version of
>>  > > librbd1 (which is 0.94.5) in conjunction with my Ceph test
>>  > > environment, which is running the Luminous release.
>>  > > When I install the librbd1 (and librados2) packages from the
>>  > > centos-ceph-luminous repository (version 12.2.5), I'm able to
>>  > > start and migrate VMs between the hosts.
>>  > >
>>  >
>>  > vdsm does not require librbd since qemu brings this dependency,
>>  > and vdsm does not access ceph directly yet.
>>  >
>>  > Maybe qemu should require a newer version of librbd?
>>
>> Upstream QEMU builds against any librbd version that exports the
>> necessary APIs.
>>
>> The choice of library versions is mostly up to distro package
>> maintainers.
>>
>> Have you filed a bug against Ceph on the distro you are using?
>>
>>
>> Thanks for clearing this up, Stefan.
>>
>> Bernhard, can you give more info about your Linux version and
>> installed packages (e.g. qemu-*)?
>
> Sure. I have two test systems. The first is running a stock oVirt Node 4.3,
> which reports "CentOS Linux release 7.5.1804 (Core)" as its version string.
> The qemu and ceph packages are:
> Name: qemu-img-ev
> Arch: x86_64
> Epoch   : 10
> Version : 2.10.0
> Release : 21.el7_5.3.1
>
> Name: qemu-kvm-common-ev
> Arch: x86_64
> Epoch   : 10
> Version : 2.10.0
> Release : 21.el7_5.3.1
>
> Name: qemu-kvm-ev
> Arch: x86_64
> Epoch   : 10
> Version : 2.10.0
> Release : 21.el7_5.3.1
>
> Name: librados2
> Arch: x86_64
> Epoch   : 1
> Version : 0.94.5
> Release : 2.el7
>
> Name: librbd1
> Arch: x86_64
> Epoch   : 1
> Version : 0.94.5
> Release : 2.el7
>
> The CentOS 7 system is a CentOS minimal installation with the following
> repos being enabled:
> CentOS-7 - Base
> CentOS-7 - Updates
> CentOS-7 - Extras
> ovirt-4.2-epel
> ovirt-4.2-centos-gluster123
> ovirt-4.2-virtio-win-latest
> ovirt-4.2-centos-qemu-ev
> ovirt-4.2-centos-opstools
> centos-sclo-rh-release
> ovirt-4.2-centos-ovirt42
> ovirt-4.2
>
> The version numbers for the qemu packages are the same as above, since
> they're from the ovirt-4.2-centos-qemu-ev repository. The version numbers
> for librados2 and librbd1 also match, though they're from the CentOS base
> (instead of an oVirt) repository.
>
> When I activate the centos-ceph-luminous repository, librbd1 and librados2
> get upgraded to the following versions (leaving the qemu packages untouched,
> which is as expected):
> Name: librados2
> Arch: x86_64
> Epoch   : 2
> Version : 12.2.5
> Release : 0.el7
>
> Name: librbd1
> Arch: x86_64
> Epoch   : 2
> Version : 12.2.5
> Release : 0.el7
>
> So from my perspective, on the oVirt side some thought should be given to
> shipping a more recent Ceph client library version in the oVirt Node image,
> as adding extra repositories there is not a common task (and I'm not sure
> whether that might break the image-based upgrade path).
>
> I will go for CentOS-based hosts in my case, as I'm a bit more flexible
> there, so at least for me there is no real need to get the above implemented
> quickly :-)

Without knowing more about your setup, I suspect you might be using
unsupported CRUSH tunables, which would prevent the Hammer (0.94.x)
clients from connecting to the Luminous cluster. Note that upstream we
really only test client/server backwards compatibility across N - 2
releases (it used to be N - 1) [1]. Therefore, ideally you should pick
the Ceph client repo that matches your Ceph cluster version.
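For reference, a minimal way to check and (if needed) relax the tunables
profile from a cluster node; this is only a sketch, and note that changing
tunables cluster-wide can trigger data movement:

  # show the tunables profile currently set in the CRUSH map
  ceph osd crush show-tunables

  # if the profile requires features the Hammer (0.94.x) client lacks,
  # it can be relaxed to a Hammer-compatible profile:
  ceph osd crush tunables hammer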

There is an open BZ against RHEL 7 to upgrade the base Ceph client
RPMs to a more recent version [2], but in the interim, if you need
newer client RPMs for CentOS 7, you will need to get them from
upstream Ceph by picking the appropriate release repository.
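On CentOS 7, one way to do that is through the CentOS Storage SIG; a
sketch, assuming the centos-release-ceph-luminous release package is
available from the Extras repository:

  # enable the Storage SIG repository for Luminous clients
  yum install -y centos-release-ceph-luminous

  # upgrade the client libraries and verify the result
  yum update -y librbd1 librados2
  rpm -q librbd1 librados2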

>   Regards
> Bernhard
>

[1] http://docs.ceph.com/docs/master/releases/schedule/#stable-releases-x-2-z
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1421783

-- 
Jason


[ovirt-users] Re: [Qemu-block] Re: Debugging ceph access

2018-06-06 Thread Bernhard Dick

Hi,

On 05.06.2018 at 22:11, Nir Soffer wrote:
On Fri, Jun 1, 2018 at 3:54 PM Stefan Hajnoczi wrote:


On Thu, May 31, 2018 at 11:02:01PM +0300, Nir Soffer wrote:
 > On Thu, May 31, 2018 at 1:55 AM Bernhard Dick wrote:
 >
 > > Hi,
 > >
 > > I found the reason for my timeout problems: it is the version of
 > > librbd1 (which is 0.94.5) in conjunction with my Ceph test
 > > environment, which is running the Luminous release.
 > > When I install the librbd1 (and librados2) packages from the
 > > centos-ceph-luminous repository (version 12.2.5), I'm able to
 > > start and migrate VMs between the hosts.
 > >
 >
 > vdsm does not require librbd since qemu brings this dependency,
 > and vdsm does not access ceph directly yet.
 >
 > Maybe qemu should require a newer version of librbd?

Upstream QEMU builds against any librbd version that exports the
necessary APIs.

The choice of library versions is mostly up to distro package
maintainers.

Have you filed a bug against Ceph on the distro you are using?


Thanks for clearing this up, Stefan.

Bernhard, can you give more info about your Linux version and
installed packages (e.g. qemu-*)?
Sure. I have two test systems. The first is running a stock oVirt Node
4.3, which reports "CentOS Linux release 7.5.1804 (Core)" as its version
string. The qemu and ceph packages are:

Name: qemu-img-ev
Arch: x86_64
Epoch   : 10
Version : 2.10.0
Release : 21.el7_5.3.1

Name: qemu-kvm-common-ev
Arch: x86_64
Epoch   : 10
Version : 2.10.0
Release : 21.el7_5.3.1

Name: qemu-kvm-ev
Arch: x86_64
Epoch   : 10
Version : 2.10.0
Release : 21.el7_5.3.1

Name: librados2
Arch: x86_64
Epoch   : 1
Version : 0.94.5
Release : 2.el7

Name: librbd1
Arch: x86_64
Epoch   : 1
Version : 0.94.5
Release : 2.el7

The CentOS 7 system is a CentOS minimal installation with the following
repos being enabled:

CentOS-7 - Base
CentOS-7 - Updates
CentOS-7 - Extras
ovirt-4.2-epel
ovirt-4.2-centos-gluster123
ovirt-4.2-virtio-win-latest
ovirt-4.2-centos-qemu-ev
ovirt-4.2-centos-opstools
centos-sclo-rh-release
ovirt-4.2-centos-ovirt42
ovirt-4.2

The version numbers for the qemu packages are the same as above, since
they're from the ovirt-4.2-centos-qemu-ev repository. The version numbers
for librados2 and librbd1 also match, though they're from the CentOS base
(instead of an oVirt) repository.


When I activate the centos-ceph-luminous repository, librbd1 and
librados2 get upgraded to the following versions (leaving the qemu
packages untouched, which is as expected):

Name: librados2
Arch: x86_64
Epoch   : 2
Version : 12.2.5
Release : 0.el7

Name: librbd1
Arch: x86_64
Epoch   : 2
Version : 12.2.5
Release : 0.el7
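
One caveat worth noting: a running qemu-kvm process keeps the librbd it
loaded at startup, so VMs have to be restarted (or migrated) to pick up
the new library. A quick check, with an illustrative pgrep pattern:

  # show which librbd a running VM process actually has mapped
  lsof -p "$(pgrep -f qemu-kvm | head -n1)" | grep librbd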

So from my perspective, on the oVirt side some thought should be given to
shipping a more recent Ceph client library version in the oVirt Node image,
as adding extra repositories there is not a common task (and I'm not sure
whether that might break the image-based upgrade path).


I will go for CentOS-based hosts in my case, as I'm a bit more flexible
there, so at least for me there is no real need to get the above implemented
quickly :-)


  Regards
Bernhard


[ovirt-users] Re: [Qemu-block] Re: Debugging ceph access

2018-06-06 Thread Bernhard Dick

Hi,

On 01.06.2018 at 14:54, Stefan Hajnoczi wrote:

On Thu, May 31, 2018 at 11:02:01PM +0300, Nir Soffer wrote:

On Thu, May 31, 2018 at 1:55 AM Bernhard Dick  wrote:


Hi,

I found the reason for my timeout problems: it is the version of librbd1
(which is 0.94.5) in conjunction with my Ceph test environment, which is
running the Luminous release.
When I install the librbd1 (and librados2) packages from the
centos-ceph-luminous repository (version 12.2.5), I'm able to start and
migrate VMs between the hosts.



vdsm does not require librbd since qemu brings this dependency, and vdsm
does not access ceph directly yet.

Maybe qemu should require a newer version of librbd?


Upstream QEMU builds against any librbd version that exports the
necessary APIs.

The choice of library versions is mostly up to distro package
maintainers.

Have you filed a bug against Ceph on the distro you are using?
I didn't file a bug yet, as I'm not sure whether this is even desired,
and whether supplying a newer version of librbd within the base
repository would lead to problems with older clusters. The 0.94.5 version
is from the base repository (on CentOS 7 and oVirt Node 4.3).
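
A quick way to compare client and cluster versions before deciding; the
"ceph versions" subcommand assumes a Luminous or newer cluster:

  # on the client host: installed Ceph client libraries
  rpm -q librbd1 librados2

  # on a cluster node: versions of all running Ceph daemons
  ceph versions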


  Regards
Bernhard


[ovirt-users] Re: [Qemu-block] Re: Debugging ceph access

2018-06-05 Thread Nir Soffer
On Fri, Jun 1, 2018 at 3:54 PM Stefan Hajnoczi  wrote:

> On Thu, May 31, 2018 at 11:02:01PM +0300, Nir Soffer wrote:
> > On Thu, May 31, 2018 at 1:55 AM Bernhard Dick  wrote:
> >
> > > Hi,
> > >
> > > I found the reason for my timeout problems: it is the version of
> > > librbd1 (which is 0.94.5) in conjunction with my Ceph test
> > > environment, which is running the Luminous release.
> > > When I install the librbd1 (and librados2) packages from the
> > > centos-ceph-luminous repository (version 12.2.5), I'm able to start
> > > and migrate VMs between the hosts.
> > >
> >
> > vdsm does not require librbd since qemu brings this dependency, and vdsm
> > does not access ceph directly yet.
> >
> > Maybe qemu should require a newer version of librbd?
>
> Upstream QEMU builds against any librbd version that exports the
> necessary APIs.
>
> The choice of library versions is mostly up to distro package
> maintainers.
>
> Have you filed a bug against Ceph on the distro you are using?
>

Thanks for clearing this up, Stefan.

Bernhard, can you give more info about your Linux version and
installed packages (e.g. qemu-*)?

 Nir

> > On 25.05.2018 at 17:08, Bernhard Dick wrote:
> > > > Hi,
> > > >
> > > > As you might already know, I am trying to use Ceph with OpenStack
> > > > in an oVirt test environment. I'm able to create and remove
> > > > volumes. But if I try to run a VM which contains a Ceph volume, it
> > > > stays in the "Wait for launch" state for a very long time and then
> > > > goes into the "down" state again. The qemu log states:
> > > >
> > > > 2018-05-25T15:03:41.100401Z qemu-kvm: -drive
> > > > file=rbd:rbd/volume-3bec499e-d0d0-45ef-86ad-2c187cdb2811:id=cinder:auth_supported=cephx\;none:mon_host=[mon0]\:6789\;[mon1]\:6789,file.password-secret=scsi0-0-0-0-secret0,format=raw,if=none,id=drive-scsi0-0-0-0,serial=3bec499e-d0d0-45ef-86ad-2c187cdb2811,cache=none,werror=stop,rerror=stop,aio=threads:
> > > > error connecting: Connection timed out
> > > >
> > > > 2018-05-25 15:03:41.109+0000: shutting down, reason=failed
> > > >
> > > > On the monitor hosts I see traffic on the ceph-mon port, but not
> > > > on other ports (the OSDs, for example). In the Ceph logs, however,
> > > > I don't really see what happens.
> > > > Do you have some tips on how to debug this problem?
> > > >
> > > >Regards
> > > >  Bernhard


[ovirt-users] Re: [Qemu-block] Re: Debugging ceph access

2018-06-04 Thread Stefan Hajnoczi
On Thu, May 31, 2018 at 11:02:01PM +0300, Nir Soffer wrote:
> On Thu, May 31, 2018 at 1:55 AM Bernhard Dick  wrote:
> 
> > Hi,
> >
> > I found the reason for my timeout problems: it is the version of librbd1
> > (which is 0.94.5) in conjunction with my Ceph test environment, which is
> > running the Luminous release.
> > When I install the librbd1 (and librados2) packages from the
> > centos-ceph-luminous repository (version 12.2.5), I'm able to start and
> > migrate VMs between the hosts.
> >
> 
> vdsm does not require librbd since qemu brings this dependency, and vdsm
> does not access ceph directly yet.
> 
> Maybe qemu should require a newer version of librbd?

Upstream QEMU builds against any librbd version that exports the
necessary APIs.

The choice of library versions is mostly up to distro package
maintainers.
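
For reference, one way to see which librbd a given QEMU binary loads at
runtime; the binary path below is the usual CentOS 7 location for
qemu-kvm-ev and may differ elsewhere:

  # resolve the librbd shared object qemu-kvm links against
  ldd /usr/libexec/qemu-kvm | grep -i rbd

  # and find the package that owns it
  rpm -qf /usr/lib64/librbd.so.1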

Have you filed a bug against Ceph on the distro you are using?

Stefan

> >
> >Regards
> >  Bernhard
> >
> > On 25.05.2018 at 17:08, Bernhard Dick wrote:
> > > Hi,
> > >
> > > As you might already know, I am trying to use Ceph with OpenStack in an
> > > oVirt test environment. I'm able to create and remove volumes. But if I
> > > try to run a VM which contains a Ceph volume, it stays in the "Wait for
> > > launch" state for a very long time and then goes into the "down" state
> > > again. The qemu log states:
> > >
> > > 2018-05-25T15:03:41.100401Z qemu-kvm: -drive
> > > file=rbd:rbd/volume-3bec499e-d0d0-45ef-86ad-2c187cdb2811:id=cinder:auth_supported=cephx\;none:mon_host=[mon0]\:6789\;[mon1]\:6789,file.password-secret=scsi0-0-0-0-secret0,format=raw,if=none,id=drive-scsi0-0-0-0,serial=3bec499e-d0d0-45ef-86ad-2c187cdb2811,cache=none,werror=stop,rerror=stop,aio=threads:
> > > error connecting: Connection timed out
> > >
> > > 2018-05-25 15:03:41.109+0000: shutting down, reason=failed
> > >
> > > On the monitor hosts I see traffic on the ceph-mon port, but not on
> > > other ports (the OSDs, for example). In the Ceph logs, however, I don't
> > > really see what happens.
> > > Do you have some tips on how to debug this problem?
> > >
> > >Regards
> > >  Bernhard
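
One way to reproduce the connection attempt outside of QEMU is the rbd
CLI with client-side debug logging; a sketch, with the monitor address
and the cinder client id as placeholders taken from the command line
above:

  # try listing the pool with the same identity QEMU uses
  rbd --id cinder -m mon0:6789 --debug-ms 1 --debug-rados 20 ls rbd

  # or hand the exact rbd: spec to qemu-img to test it in isolation
  qemu-img info 'rbd:rbd/volume-3bec499e-d0d0-45ef-86ad-2c187cdb2811:id=cinder:mon_host=mon0\:6789'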

