[dpdk-dev] dpdk/vpp and cross-version migration for vhost

2016-11-29 Thread Maxime Coquelin


On 11/28/2016 11:18 PM, Thomas Monjalon wrote:
> 2016-11-28 16:28, Maxime Coquelin:
>> On 11/24/2016 04:24 PM, Kavanagh, Mark B wrote:
>>> DPDK v16.04 added support for vHost User TSO; as such, by default,
>>> TSO is advertised to guest devices as an available feature during
>>> feature negotiation with QEMU.
>>> However, while the vHost user backend sets up the majority of the
>>> mbuf fields that are required for TSO, there is still a reliance
>>> on the associated DPDK application (i.e. in this case OvS-DPDK)
>>> to set the remaining flags and/or offsets.
>>> Since OvS-DPDK doesn't currently provide that functionality, it is
>>> necessary to explicitly disable TSO; otherwise, undefined behaviour
>>> will ensue.
>>
>> Thanks Mark for the clarification.
>>
>> In this case, maybe we could add a DPDK build option to disable Vhost's
>> TSO support, that would be selected for OVS packages?
>
> Why do you prefer a build-time option rather than the run-time config
> with rte_vhost_feature_disable()? Because we need to lock the features?

Right, we need to know what the backend supports before it is started,
so that the management tool can check where the VM could be migrated to.

>
> Reminder: build-time configuration options are forbidden in DPDK for
> such usage. It would prevent other applications from using the feature
> in a given distribution, just because it is not implemented in OVS.

I understand, this is not the right solution.
I proposed this because I misunderstood how the distributions package
OVS+DPDK.

>
>> Does that sound reasonable?
>
> Maybe I'm missing something but I feel it is more reasonable to implement
> the missing code in OVS.

Yes, that would be the ideal solution.
OVS implements TSO, and we let the management tool decide whether or not
to enable the feature.

While this is done, we could deprecate rte_vhost_feature_disable, and
print an error message notifying the user that it should be done by the
management tool.

> If something is missing in DPDK, do not hesitate to request or add more
> helper functions.

Sure.

Thanks,
Maxime
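
For reference, a minimal sketch of the run-time API under discussion, as
used by OVS later in this thread (DPDK 16.x vhost API; the wrapper
function itself is illustrative, not part of DPDK):

    #include <linux/virtio_net.h>
    #include <rte_virtio_net.h>

    /* Disable TSO/CSUM offloads before any vhost device connects.
     * Note the mask applies globally, not per device -- one reason the
     * thread argues this belongs in the management tool instead. */
    static void
    disable_vhost_tso(void)
    {
        rte_vhost_feature_disable(1ULL << VIRTIO_NET_F_HOST_TSO4
                                  | 1ULL << VIRTIO_NET_F_HOST_TSO6
                                  | 1ULL << VIRTIO_NET_F_CSUM);
    }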


[dpdk-dev] dpdk/vpp and cross-version migration for vhost

2016-11-28 Thread Thomas Monjalon
2016-11-28 16:28, Maxime Coquelin:
> On 11/24/2016 04:24 PM, Kavanagh, Mark B wrote:
> > DPDK v16.04 added support for vHost User TSO; as such, by default,
> > TSO is advertised to guest devices as an available feature during
> > feature negotiation with QEMU.
> > However, while the vHost user backend sets up the majority of the
> > mbuf fields that are required for TSO, there is still a reliance
> > on the associated DPDK application (i.e. in this case OvS-DPDK)
> > to set the remaining flags and/or offsets.
> > Since OvS-DPDK doesn't currently provide that functionality, it is
> > necessary to explicitly disable TSO; otherwise, undefined behaviour
> > will ensue.
> 
> Thanks Mark for the clarification.
> 
> In this case, maybe we could add a DPDK build option to disable Vhost's
> TSO support, that would be selected for OVS packages?

Why do you prefer a build-time option rather than the run-time config
with rte_vhost_feature_disable()? Because we need to lock the features?

Reminder: build-time configuration options are forbidden in DPDK for
such usage. It would prevent other applications from using the feature
in a given distribution, just because it is not implemented in OVS.

> Does that sound reasonable?

Maybe I'm missing something but I feel it is more reasonable to implement
the missing code in OVS.
If something is missing in DPDK, do not hesitate to request or add more
helper functions.



[dpdk-dev] dpdk/vpp and cross-version migration for vhost

2016-11-28 Thread Maxime Coquelin


On 11/24/2016 04:24 PM, Kavanagh, Mark B wrote:
>>
>> On 11/24/2016 12:47 PM, Maxime Coquelin wrote:
>>>
>>>
>>> On 11/24/2016 01:33 PM, Yuanhan Liu wrote:
 On Thu, Nov 24, 2016 at 09:30:49AM +0000, Kevin Traynor wrote:
>> On 11/24/2016 06:31 AM, Yuanhan Liu wrote:
 On Tue, Nov 22, 2016 at 04:53:05PM +0200, Michael S. Tsirkin wrote:
>> You keep assuming that you have the VM started first and
>> figure out things afterwards, but this does not work.
>>
>> Think about a cluster of machines. You want to start a VM in
>> a way that will ensure compatibility with all hosts
>> in a cluster.

 I see. I was considering more the case where the dst
 host (including the qemu and dpdk combo) is given, and
 we then determine whether the migration will be
 successful or not.

 And you are asking that we need to know which host could
 be a good candidate before starting the migration. In such
 case, we indeed need some inputs from both the qemu and
 vhost-user backend.

 For DPDK, I think it could be simple, just as you said, it
 could be either a tiny script, or even a macro defined in
 the source code file (we extend it every time we add a
 new feature) to let libvirt read it. Or something
 else.
>>
>> There's the issue of APIs that tweak features as Maxime
>> suggested.

 Yes, it's a good point.

>> Maybe the only thing to do is to deprecate it,

 Looks like so.

>> but I feel some way for the application to pass info into
>> the guest might be beneficial.

 The two APIs are just for tweaking feature bits DPDK supports
>> before
 any device gets connected. It's another way to disable some features
 (the other obvious way is through QEMU command lines).

 IMO, it's a bit handy only in a case like: we have a bunch of VMs.
>> Instead
 of disabling something through qemu one by one, we could disable it
 once in DPDK.

 But I doubt the usefulness of it. It's only used in DPDK's vhost
>> example
 after all. It is used neither in the vhost pmd nor in OVS.
>>
>> rte_vhost_feature_disable() is currently used in OVS,
> lib/netdev-dpdk.c
 Hmmm. I must have checked very old code ...
>>
>> netdev_dpdk_vhost_class_init(void)
>> {
>> static struct ovsthread_once once = OVSTHREAD_ONCE_INITIALIZER;
>>
>> /* This function can be called for different classes.  The
>> initialization
>>  * needs to be done only once */
>> if (ovsthread_once_start(&once)) {
>> rte_vhost_driver_callback_register(&virtio_net_device_ops);
>> rte_vhost_feature_disable(1ULL << VIRTIO_NET_F_HOST_TSO4
>>   | 1ULL << VIRTIO_NET_F_HOST_TSO6
>>   | 1ULL << VIRTIO_NET_F_CSUM);
 I saw the commit that introduced this change, but it gives no reason why
 it was added.
>>>
>>> I'm also interested to know the reason.
>>
>> I can't remember off hand, added Mark K or Michal W who should be able
>> to shed some light on it.
>
> DPDK v16.04 added support for vHost User TSO; as such, by default, TSO is 
> advertised to guest devices as an available feature during feature 
> negotiation with QEMU.
> However, while the vHost user backend sets up the majority of the mbuf fields 
> that are required for TSO, there is still a reliance on the associated DPDK 
> application (i.e. in this case OvS-DPDK) to set the remaining flags and/or 
> offsets. Since OvS-DPDK doesn't currently provide that functionality, it is 
> necessary to explicitly disable TSO; otherwise, undefined behaviour will 
> ensue.
Thanks Mark for the clarification.

In this case, maybe we could add a DPDK build option to disable Vhost's
TSO support, that would be selected for OVS packages?

Does that sound reasonable?

Cheers,
Maxime

>>
>>> In any case, I think this is something that can/should be managed by
>>> the management tool, which should disable it in cmd parameters.
>>>
>>> Kevin, do you agree?
>>
>> I think it's best to find out the reason first, because if there is no
>> reason to disable it in the code, then there is no need to debate!
>>
>>>
>>> Cheers,
>>> Maxime
>


[dpdk-dev] dpdk/vpp and cross-version migration for vhost

2016-11-24 Thread Yuanhan Liu
On Thu, Nov 24, 2016 at 09:30:49AM +0000, Kevin Traynor wrote:
> On 11/24/2016 06:31 AM, Yuanhan Liu wrote:
> > On Tue, Nov 22, 2016 at 04:53:05PM +0200, Michael S. Tsirkin wrote:
>  You keep assuming that you have the VM started first and
>  figure out things afterwards, but this does not work.
> 
>  Think about a cluster of machines. You want to start a VM in
>  a way that will ensure compatibility with all hosts
>  in a cluster.
> >>>
> >>> I see. I was considering more the case where the dst
> >>> host (including the qemu and dpdk combo) is given, and
> >>> we then determine whether the migration will be
> >>> successful or not.
> >>>
> >>> And you are asking that we need to know which host could
> >>> be a good candidate before starting the migration. In such
> >>> case, we indeed need some inputs from both the qemu and
> >>> vhost-user backend.
> >>>
> >>> For DPDK, I think it could be simple, just as you said, it
> >>> could be either a tiny script, or even a macro defined in
> >>> the source code file (we extend it every time we add a
> >>> new feature) to let libvirt read it. Or something
> >>> else.
> >>
> >> There's the issue of APIs that tweak features as Maxime
> >> suggested.
> > 
> > Yes, it's a good point.
> > 
> >> Maybe the only thing to do is to deprecate it,
> > 
> > Looks like so.
> > 
> >> but I feel some way for the application to pass info into
> >> the guest might be beneficial.
> > 
> > The two APIs are just for tweaking feature bits DPDK supports before
> > any device gets connected. It's another way to disable some features
> > (the other obvious way is through QEMU command lines).
> > 
> > IMO, it's a bit handy only in a case like: we have a bunch of VMs. Instead
> > of disabling something through qemu one by one, we could disable it
> > once in DPDK.
> > 
> > But I doubt the usefulness of it. It's only used in DPDK's vhost example
> > after all. It is used neither in the vhost pmd nor in OVS.
> 
> rte_vhost_feature_disable() is currently used in OVS, lib/netdev-dpdk.c

Hmmm. I must have checked very old code ...
> 
> netdev_dpdk_vhost_class_init(void)
> {
> static struct ovsthread_once once = OVSTHREAD_ONCE_INITIALIZER;
> 
> /* This function can be called for different classes.  The
> initialization
>  * needs to be done only once */
> if (ovsthread_once_start(&once)) {
> rte_vhost_driver_callback_register(&virtio_net_device_ops);
> rte_vhost_feature_disable(1ULL << VIRTIO_NET_F_HOST_TSO4
>   | 1ULL << VIRTIO_NET_F_HOST_TSO6
>   | 1ULL << VIRTIO_NET_F_CSUM);

I saw the commit that introduced this change, but it gives no reason why
it was added.

commit 362ca39639ae871806be5ae97d55e1cbb14afd92
Author: mweglicx 
Date:   Thu Apr 14 17:40:06 2016 +0100

Update relevant artifacts to add support for DPDK 16.04.

Following changes are applied:
 - INSTALL.DPDK.md: CONFIG_RTE_BUILD_COMBINE_LIBS step has been
   removed because it is no longer present in DPDK configuration
   (combined library is created by default),
 - INSTALL.DPDK.md: VHost Cuse configuration is updated,
 - netdev-dpdk.c: Link speed definition is changed in DPDK and
   netdev_dpdk_get_features is updated accordingly,
 - netdev-dpdk.c: TSO and checksum offload has been disabled for
   vhostuser device.
 - .travis/linux-build.sh: DPDK version is updated and legacy
   flags have been removed in configuration.

Signed-off-by: Michal Weglicki 
Signed-off-by: Panu Matilainen 
Acked-by: Daniele Di Proietto 

--yliu


[dpdk-dev] dpdk/vpp and cross-version migration for vhost

2016-11-24 Thread Kavanagh, Mark B
>
>On 11/24/2016 12:47 PM, Maxime Coquelin wrote:
>>
>>
>> On 11/24/2016 01:33 PM, Yuanhan Liu wrote:
>>> On Thu, Nov 24, 2016 at 09:30:49AM +0000, Kevin Traynor wrote:
 > On 11/24/2016 06:31 AM, Yuanhan Liu wrote:
> > > On Tue, Nov 22, 2016 at 04:53:05PM +0200, Michael S. Tsirkin wrote:
 >  You keep assuming that you have the VM started first and
 >  figure out things afterwards, but this does not work.
 > 
 >  Think about a cluster of machines. You want to start a VM in
 >  a way that will ensure compatibility with all hosts
 >  in a cluster.
>>> > >>>
>>> > >>> I see. I was considering more the case where the dst
>>> > >>> host (including the qemu and dpdk combo) is given, and
>>> > >>> we then determine whether the migration will be
>>> > >>> successful or not.
>>> > >>>
>>> > >>> And you are asking that we need to know which host could
>>> > >>> be a good candidate before starting the migration. In such
>>> > >>> case, we indeed need some inputs from both the qemu and
>>> > >>> vhost-user backend.
>>> > >>>
>>> > >>> For DPDK, I think it could be simple, just as you said, it
>>> > >>> could be either a tiny script, or even a macro defined in
>>> > >>> the source code file (we extend it every time we add a
>>> > >>> new feature) to let libvirt read it. Or something
>>> > >>> else.
>> > >>
>> > >> There's the issue of APIs that tweak features as Maxime
>> > >> suggested.
> > >
> > > Yes, it's a good point.
> > >
>> > >> Maybe the only thing to do is to deprecate it,
> > >
> > > Looks like so.
> > >
>> > >> but I feel some way for the application to pass info into
>> > >> the guest might be beneficial.
> > >
> > > The two APIs are just for tweaking feature bits DPDK supports
> before
> > > any device gets connected. It's another way to disable some features
> > > (the other obvious way is through QEMU command lines).
> > >
> > > IMO, it's a bit handy only in a case like: we have a bunch of VMs.
> Instead
> > > of disabling something through qemu one by one, we could disable it
> > > once in DPDK.
> > >
> > > But I doubt the usefulness of it. It's only used in DPDK's vhost
> example
> > > after all. It is used neither in the vhost pmd nor in OVS.
 >
 > rte_vhost_feature_disable() is currently used in OVS,
 lib/netdev-dpdk.c
>>> Hmmm. I must have checked very old code ...
 >
 > netdev_dpdk_vhost_class_init(void)
 > {
 > static struct ovsthread_once once = OVSTHREAD_ONCE_INITIALIZER;
 >
 > /* This function can be called for different classes.  The
 > initialization
 >  * needs to be done only once */
 > if (ovsthread_once_start(&once)) {
 > rte_vhost_driver_callback_register(&virtio_net_device_ops);
 > rte_vhost_feature_disable(1ULL << VIRTIO_NET_F_HOST_TSO4
 >   | 1ULL << VIRTIO_NET_F_HOST_TSO6
 >   | 1ULL << VIRTIO_NET_F_CSUM);
>>> I saw the commit that introduced this change, but it gives no reason why
>>> it was added.
>>
>> I'm also interested to know the reason.
>
>I can't remember off hand, added Mark K or Michal W who should be able
>to shed some light on it.

DPDK v16.04 added support for vHost User TSO; as such, by default, TSO is 
advertised to guest devices as an available feature during feature negotiation 
with QEMU.
However, while the vHost user backend sets up the majority of the mbuf fields 
that are required for TSO, there is still a reliance on the associated DPDK 
application (i.e. in this case OvS-DPDK) to set the remaining flags and/or 
offsets. Since OvS-DPDK doesn't currently provide that functionality, it is 
necessary to explicitly disable TSO; otherwise, undefined behaviour will ensue.

>
>> In any case, I think this is something that can/should be managed by
>> the management tool, which should disable it in cmd parameters.
>>
>> Kevin, do you agree?
>
>I think it's best to find out the reason first, because if there is no
>reason to disable it in the code, then there is no need to debate!
>
>>
>> Cheers,
>> Maxime



[dpdk-dev] dpdk/vpp and cross-version migration for vhost

2016-11-24 Thread Kevin Traynor
On 11/24/2016 12:47 PM, Maxime Coquelin wrote:
> 
> 
> On 11/24/2016 01:33 PM, Yuanhan Liu wrote:
>> On Thu, Nov 24, 2016 at 09:30:49AM +0000, Kevin Traynor wrote:
>>> > On 11/24/2016 06:31 AM, Yuanhan Liu wrote:
 > > On Tue, Nov 22, 2016 at 04:53:05PM +0200, Michael S. Tsirkin wrote:
>>> >  You keep assuming that you have the VM started first and
>>> >  figure out things afterwards, but this does not work.
>>> > 
>>> >  Think about a cluster of machines. You want to start a VM in
>>> >  a way that will ensure compatibility with all hosts
>>> >  in a cluster.
>> > >>>
>> > >>> I see. I was considering more the case where the dst
>> > >>> host (including the qemu and dpdk combo) is given, and
>> > >>> we then determine whether the migration will be
>> > >>> successful or not.
>> > >>>
>> > >>> And you are asking that we need to know which host could
>> > >>> be a good candidate before starting the migration. In such
>> > >>> case, we indeed need some inputs from both the qemu and
>> > >>> vhost-user backend.
>> > >>>
>> > >>> For DPDK, I think it could be simple, just as you said, it
>> > >>> could be either a tiny script, or even a macro defined in
>> > >>> the source code file (we extend it every time we add a
>> > >>> new feature) to let libvirt read it. Or something
>> > >>> else.
> > >>
> > >> There's the issue of APIs that tweak features as Maxime
> > >> suggested.
 > >
 > > Yes, it's a good point.
 > >
> > >> Maybe the only thing to do is to deprecate it,
 > >
 > > Looks like so.
 > >
> > >> but I feel some way for the application to pass info into
> > >> the guest might be beneficial.
 > >
 > > The two APIs are just for tweaking feature bits DPDK supports
 before
 > > any device gets connected. It's another way to disable some features
 > > (the other obvious way is through QEMU command lines).
 > >
 > > IMO, it's a bit handy only in a case like: we have a bunch of VMs.
 Instead
 > > of disabling something through qemu one by one, we could disable it
 > > once in DPDK.
 > >
 > > But I doubt the usefulness of it. It's only used in DPDK's vhost
 example
 > > after all. It is used neither in the vhost pmd nor in OVS.
>>> >
>>> > rte_vhost_feature_disable() is currently used in OVS,
>>> lib/netdev-dpdk.c
>> Hmmm. I must have checked very old code ...
>>> >
>>> > netdev_dpdk_vhost_class_init(void)
>>> > {
>>> > static struct ovsthread_once once = OVSTHREAD_ONCE_INITIALIZER;
>>> >
>>> > /* This function can be called for different classes.  The
>>> > initialization
>>> >  * needs to be done only once */
>>> > if (ovsthread_once_start(&once)) {
>>> > rte_vhost_driver_callback_register(&virtio_net_device_ops);
>>> > rte_vhost_feature_disable(1ULL << VIRTIO_NET_F_HOST_TSO4
>>> >   | 1ULL << VIRTIO_NET_F_HOST_TSO6
>>> >   | 1ULL << VIRTIO_NET_F_CSUM);
>> I saw the commit that introduced this change, but it gives no reason why
>> it was added.
> 
> I'm also interested to know the reason.

I can't remember off hand, added Mark K or Michal W who should be able
to shed some light on it.

> In any case, I think this is something that can/should be managed by
> the management tool, which should disable it in cmd parameters.
> 
> Kevin, do you agree?

I think it's best to find out the reason first, because if there is no
reason to disable it in the code, then there is no need to debate!

> 
> Cheers,
> Maxime



[dpdk-dev] dpdk/vpp and cross-version migration for vhost

2016-11-24 Thread Yuanhan Liu
On Tue, Nov 22, 2016 at 04:53:05PM +0200, Michael S. Tsirkin wrote:
> > > You keep assuming that you have the VM started first and
> > > figure out things afterwards, but this does not work.
> > > 
> > > Think about a cluster of machines. You want to start a VM in
> > > a way that will ensure compatibility with all hosts
> > > in a cluster.
> > 
> > I see. I was considering more the case where the dst
> > host (including the qemu and dpdk combo) is given, and
> > we then determine whether the migration will be
> > successful or not.
> > 
> > And you are asking that we need to know which host could
> > be a good candidate before starting the migration. In such
> > case, we indeed need some inputs from both the qemu and
> > vhost-user backend.
> > 
> > For DPDK, I think it could be simple, just as you said, it
> > could be either a tiny script, or even a macro defined in
> > the source code file (we extend it every time we add a
> > new feature) to let libvirt read it. Or something
> > else.
> 
> There's the issue of APIs that tweak features as Maxime
> suggested.

Yes, it's a good point.

> Maybe the only thing to do is to deprecate it,

Looks like so.

> but I feel some way for the application to pass info into
> the guest might be beneficial.

The two APIs are just for tweaking feature bits DPDK supports before
any device gets connected. It's another way to disable some features
(the other obvious way is through QEMU command lines).

IMO, it's a bit handy only in a case like: we have a bunch of VMs. Instead
of disabling something through qemu one by one, we could disable it
once in DPDK.

But I doubt the usefulness of it. It's only used in DPDK's vhost example
after all. It is used neither in the vhost pmd nor in OVS.

> > > If you don't, guest visible interface will change
> > > and you won't be able to migrate.
> > > 
> > > It does not make sense to discuss feature bits specifically
> > > since that is not the only part of interface.
> > > For example, max ring size supported might change.
> > 
> > I don't quite understand why we have to consider the max ring
> > size here. Isn't it a virtio device attribute, for which QEMU could
> > provide such compatibility information?
> >
> > I mean, DPDK is supposed to support varying vring sizes; it's up to
> > QEMU to give a specific value.
> 
> If backend supports s/g of any size up to 2^16, there's no issue.

I don't know about others, but I see no issues in DPDK.

> ATM some backends might be assuming up to 1K s/g since
> QEMU never supported bigger ones. We might classify this
> as a bug, or not and add a feature flag.
> 
> But it's just an example. There might be more values at issue
> in the future.

Yeah, maybe. But we could analyze them one by one.

> > > Let me describe how it works in qemu/libvirt.
> > > When you install a VM, you can specify compatibility
> > > level (aka "machine type"), and you can query the supported compatibility
> > > levels. Management uses that to find the supported compatibility
> > > and stores the compatibility in XML that is migrated with the VM.
> > > There's also a way to find the latest level which is the
> > > default unless overridden by user, again this level
> > > is recorded and then
> > > - management can make sure migration destination is compatible
> > > - management can avoid migration to hosts without that support
> > 
> > Thanks for the info, it helps.
> > 
> > ...
> > > > > >>As version here is an opaque string for libvirt and qemu,
> > > > > >>anything can be used - but I suggest either a list
> > > > > >>of values defining the interface, e.g.
> > > > > >>any_layout=on,max_ring=256
> > > > > >>or a version including the name and vendor of the backend,
> > > > > >>e.g. "org.dpdk.v4.5.6".
> > 
> > The version scheme may not be ideal here. Assume a QEMU is supposed
> > to work with a specific DPDK version; however, the user may disable some
> > newer features through the qemu command line, so that it could also work
> > with an older DPDK version. Using the version scheme would not allow us
> > to do such a migration to an older DPDK version. The MTU is a live example
> > here (when the MTU feature is provided by QEMU but is actually disabled
> > by the user, QEMU could also work with an older DPDK without MTU support).
> > 
> > --yliu
> 
> OK, so does a list of values look better to you then?

Yes, if there is no better way.

And I think it may be better not to list all those features literally.
Instead, using a number would be better, say, features=0xdeadbeef.

Listing the feature names means we have to come to an agreement among all
components involved here (QEMU, libvirt, DPDK, VPP, and maybe more
backends) to use the exact same feature names. Though it
may not be a big deal, it lacks some flexibility.

A feature bitmask will not have this issue.
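
To make the numeric-mask idea concrete, here is a minimal sketch of the
compatibility test a management tool could apply to two such masks
(illustrative only; this function is not from DPDK or QEMU):

    #include <stdbool.h>
    #include <stdint.h>

    /* Migration from src to dst is safe only if every feature acked on
     * the source is also offered by the destination backend, i.e. no
     * bit is set in src_acked that dst_supported lacks. */
    static bool
    vhost_migration_compatible(uint64_t src_acked, uint64_t dst_supported)
    {
        return (src_acked & ~dst_supported) == 0;
    }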

--yliu

> 
> 
> > > > > >>
> > > > > >>Note that typically the list of supported versions can only be
> > > > > >>extended, not shrunk. Also, if the host/guest interface
> > > > > >>does not change, don't change the current version as
> > > > > >>this just creates work for everyone.

[dpdk-dev] dpdk/vpp and cross-version migration for vhost

2016-11-24 Thread Maxime Coquelin


On 11/24/2016 01:33 PM, Yuanhan Liu wrote:
> On Thu, Nov 24, 2016 at 09:30:49AM +0000, Kevin Traynor wrote:
>> > On 11/24/2016 06:31 AM, Yuanhan Liu wrote:
>>> > > On Tue, Nov 22, 2016 at 04:53:05PM +0200, Michael S. Tsirkin wrote:
>> >  You keep assuming that you have the VM started first and
>> >  figure out things afterwards, but this does not work.
>> > 
>> >  Think about a cluster of machines. You want to start a VM in
>> >  a way that will ensure compatibility with all hosts
>> >  in a cluster.
> > >>>
> > >>> I see. I was considering more the case where the dst
> > >>> host (including the qemu and dpdk combo) is given, and
> > >>> we then determine whether the migration will be
> > >>> successful or not.
> > >>>
> > >>> And you are asking that we need to know which host could
> > >>> be a good candidate before starting the migration. In such
> > >>> case, we indeed need some inputs from both the qemu and
> > >>> vhost-user backend.
> > >>>
> > >>> For DPDK, I think it could be simple, just as you said, it
> > >>> could be either a tiny script, or even a macro defined in
> > >>> the source code file (we extend it every time we add a
> > >>> new feature) to let libvirt read it. Or something
> > >>> else.
 > >>
 > >> There's the issue of APIs that tweak features as Maxime
 > >> suggested.
>>> > >
>>> > > Yes, it's a good point.
>>> > >
 > >> Maybe the only thing to do is to deprecate it,
>>> > >
>>> > > Looks like so.
>>> > >
 > >> but I feel some way for the application to pass info into
 > >> the guest might be beneficial.
>>> > >
>>> > > The two APIs are just for tweaking feature bits DPDK supports before
>>> > > any device gets connected. It's another way to disable some features
>>> > > (the other obvious way is through QEMU command lines).
>>> > >
>>> > > IMO, it's a bit handy only in a case like: we have a bunch of VMs. Instead
>>> > > of disabling something through qemu one by one, we could disable it
>>> > > once in DPDK.
>>> > >
>>> > > But I doubt the usefulness of it. It's only used in DPDK's vhost example
>>> > > after all. It is used neither in the vhost pmd nor in OVS.
>> >
>> > rte_vhost_feature_disable() is currently used in OVS, lib/netdev-dpdk.c
> Hmmm. I must have checked very old code ...
>> >
>> > netdev_dpdk_vhost_class_init(void)
>> > {
>> > static struct ovsthread_once once = OVSTHREAD_ONCE_INITIALIZER;
>> >
>> > /* This function can be called for different classes.  The
>> > initialization
>> >  * needs to be done only once */
>> > if (ovsthread_once_start(&once)) {
>> > rte_vhost_driver_callback_register(&virtio_net_device_ops);
>> > rte_vhost_feature_disable(1ULL << VIRTIO_NET_F_HOST_TSO4
>> >   | 1ULL << VIRTIO_NET_F_HOST_TSO6
>> >   | 1ULL << VIRTIO_NET_F_CSUM);
> I saw the commit that introduced this change, but it gives no reason why
> it was added.

I'm also interested to know the reason.
In any case, I think this is something that can/should be managed by
the management tool, which should disable it in cmd parameters.

Kevin, do you agree?

Cheers,
Maxime


[dpdk-dev] dpdk/vpp and cross-version migration for vhost

2016-11-24 Thread Kevin Traynor
On 11/24/2016 06:31 AM, Yuanhan Liu wrote:
> On Tue, Nov 22, 2016 at 04:53:05PM +0200, Michael S. Tsirkin wrote:
 You keep assuming that you have the VM started first and
 figure out things afterwards, but this does not work.

 Think about a cluster of machines. You want to start a VM in
 a way that will ensure compatibility with all hosts
 in a cluster.
>>>
>>> I see. I was considering more the case where the dst
>>> host (including the qemu and dpdk combo) is given, and
>>> we then determine whether the migration will be
>>> successful or not.
>>>
>>> And you are asking that we need to know which host could
>>> be a good candidate before starting the migration. In such
>>> case, we indeed need some inputs from both the qemu and
>>> vhost-user backend.
>>>
>>> For DPDK, I think it could be simple, just as you said, it
>>> could be either a tiny script, or even a macro defined in
>>> the source code file (we extend it every time we add a
>>> new feature) to let libvirt read it. Or something
>>> else.
>>
>> There's the issue of APIs that tweak features as Maxime
>> suggested.
> 
> Yes, it's a good point.
> 
>> Maybe the only thing to do is to deprecate it,
> 
> Looks like so.
> 
>> but I feel some way for the application to pass info into
>> the guest might be beneficial.
> 
> The two APIs are just for tweaking feature bits DPDK supports before
> any device gets connected. It's another way to disable some features
> (the other obvious way is through QEMU command lines).
> 
> IMO, it's a bit handy only in a case like: we have a bunch of VMs. Instead
> of disabling something through qemu one by one, we could disable it
> once in DPDK.
> 
> But I doubt the usefulness of it. It's only used in DPDK's vhost example
> after all. It is used neither in the vhost pmd nor in OVS.

rte_vhost_feature_disable() is currently used in OVS, lib/netdev-dpdk.c

netdev_dpdk_vhost_class_init(void)
{
static struct ovsthread_once once = OVSTHREAD_ONCE_INITIALIZER;

/* This function can be called for different classes.  The
initialization
 * needs to be done only once */
if (ovsthread_once_start(&once)) {
rte_vhost_driver_callback_register(&virtio_net_device_ops);
rte_vhost_feature_disable(1ULL << VIRTIO_NET_F_HOST_TSO4
  | 1ULL << VIRTIO_NET_F_HOST_TSO6
  | 1ULL << VIRTIO_NET_F_CSUM);

> 
 If you don't, guest visible interface will change
 and you won't be able to migrate.

 It does not make sense to discuss feature bits specifically
 since that is not the only part of interface.
 For example, max ring size supported might change.
>>>
>>> I don't quite understand why we have to consider the max ring
>>> size here. Isn't it a virtio device attribute, for which QEMU could
>>> provide such compatibility information?
>>>
>>> I mean, DPDK is supposed to support varying vring sizes; it's up to
>>> QEMU to give a specific value.
>>
>> If backend supports s/g of any size up to 2^16, there's no issue.
> 
> I don't know about others, but I see no issues in DPDK.
> 
>> ATM some backends might be assuming up to 1K s/g since
>> QEMU never supported bigger ones. We might classify this
>> as a bug, or not and add a feature flag.
>>
>> But it's just an example. There might be more values at issue
>> in the future.
> 
> Yeah, maybe. But we could analyze them one by one.
> 
 Let me describe how it works in qemu/libvirt.
 When you install a VM, you can specify compatibility
 level (aka "machine type"), and you can query the supported compatibility
 levels. Management uses that to find the supported compatibility
 and stores the compatibility in XML that is migrated with the VM.
 There's also a way to find the latest level which is the
 default unless overridden by user, again this level
 is recorded and then
 - management can make sure migration destination is compatible
 - management can avoid migration to hosts without that support
>>>
>>> Thanks for the info, it helps.
>>>
>>> ...
 As version here is an opaque string for libvirt and qemu,
 anything can be used - but I suggest either a list
 of values defining the interface, e.g.
 any_layout=on,max_ring=256
 or a version including the name and vendor of the backend,
 e.g. "org.dpdk.v4.5.6".
>>>
>>> The version scheme may not be ideal here. Assume a QEMU is supposed
>>> to work with a specific DPDK version; however, the user may disable some
>>> newer features through the qemu command line, so that it could also work
>>> with an older DPDK version. Using the version scheme would not allow us
>>> to do such a migration to an older DPDK version. The MTU is a live example
>>> here (when the MTU feature is provided by QEMU but is actually disabled
>>> by the user, QEMU could also work with an older DPDK without MTU support).
>>>
>>> --yliu
>>
>> OK, so does a list of values look better to you then?
> 
> Yes, if there is no better way.

[dpdk-dev] dpdk/vpp and cross-version migration for vhost

2016-11-22 Thread Yuanhan Liu
On Thu, Nov 17, 2016 at 07:37:09PM +0200, Michael S. Tsirkin wrote:
> On Thu, Nov 17, 2016 at 05:49:36PM +0800, Yuanhan Liu wrote:
> > On Thu, Nov 17, 2016 at 09:47:09AM +0100, Maxime Coquelin wrote:
> > > 
> > > 
> > > On 11/17/2016 09:29 AM, Yuanhan Liu wrote:
> > > >As usual, sorry for the late response :/
> > > >
> > > >On Thu, Oct 13, 2016 at 08:50:52PM +0300, Michael S. Tsirkin wrote:
> > > >>Hi!
> > > >>So it looks like we face a problem with cross-version
> > > >>migration when using vhost. It's not new but became more
> > > >>acute with the advent of vhost user.
> > > >>
> > > >>For users to be able to migrate between different versions
> > > >>of the hypervisor the interface exposed to guests
> > > >>by hypervisor must stay unchanged.
> > > >>
> > > >>The problem is that a qemu device is connected
> > > >>to a backend in another process, so the interface
> > > >>exposed to guests depends on the capabilities of that
> > > >>process.
> > > >>
> > > >>Specifically, for vhost user interface based on virtio, this includes
> > > >>the "host features" bitmap that defines the interface, as well as more
> > > >>host values such as the max ring size.  Adding new features/changing
> > > >>values to this interface is required to make progress, but on the other
> > > >>hand we need ability to get the old host features to be compatible.
> > > >
> > > >It looks like the same issue as vhost-user reconnect to me. For
> > > >example,
> > > >
> > > >- start dpdk 16.07 & qemu 2.5
> > > >- kill dpdk
> > > >- start dpdk 16.11
> > > >
> > > >Though DPDK 16.11 has more features compared to dpdk 16.07 (say,
> > > >indirect descriptors),
> > > >the above should work, because qemu saves the negotiated features before
> > > >the disconnect and restores them after the reconnection.
> > > >
> > > >commit a463215b087c41d7ca94e51aa347cde523831873
> > > >Author: Marc-André Lureau 
> > > >Date:   Mon Jun 6 18:45:05 2016 +0200
> > > >
> > > >vhost-net: save & restore vhost-user acked features
> > > >
> > > >The initial vhost-user connection sets the features to be 
> > > > negotiated
> > > >with the driver. Renegotiation isn't possible without device 
> > > > reset.
> > > >
> > > >To handle reconnection of vhost-user backend, ensure the same 
> > > > set of
> > > >features are provided, and reuse already acked features.
> > > >
> > > >Signed-off-by: Marc-André Lureau 
> > > >
> > > >
> > > >So we could do something similar for vhost-user? I mean, save the acked
> > > >features before migration and restore them after it. This should be able
> > > >to keep the compatibility. If the user downgrades the DPDK version, that
> > > >also could easily be detected, and we then exit with an error to the
> > > >user: migration failed due to incompatible vhost features.
> > > >
> > > >Just some rough thoughts. Makes tiny sense?
> > > 
> > > My understanding is that the management tool has to know whether
> > > versions are compatible before initiating the migration:
> > 
> > Makes sense. How about getting and restoring the acked features through
> > qemu command lines then, say, through the monitor interface?
> > 
> > With that, it would be something like:
> > 
> > - start vhost-user backend (DPDK, VPP, or whatever) & qemu in the src host
> > 
> > - read the acked features (through monitor interface)
> > 
> > - start vhost-user backend in the dst host
> > 
> > - start qemu in the dst host with the just queried acked features
> > 
> >   QEMU then is expected to use this feature set for the later vhost-user
> >   feature negotiation. Exit if feature compatibility is broken.
> > 
> > Thoughts?
> > 
> > --yliu
> 
> 
> You keep assuming that you have the VM started first and
> figure out things afterwards, but this does not work.
> 
> Think about a cluster of machines. You want to start a VM in
> a way that will ensure compatibility with all hosts
> in a cluster.

I see. I was considering more the case where the dst
host (including the qemu and dpdk combo) is given, and
we then determine whether the migration will be
successful or not.

And you are asking that we need to know which host could
be a good candidate before starting the migration. In such
case, we indeed need some inputs from both the qemu and
vhost-user backend.

For DPDK, I think it could be simple, just as you said, it
could be either a tiny script, or even a macro defined in
the source code file (we extend it every time we add a
new feature) to let libvirt read it. Or something
else.
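
As an aside, a minimal sketch of the "macro in the source" idea above: a
single mask that is extended whenever the backend gains a feature, which
a tiny script or libvirt could then extract. The macro name and the bit
selection are purely illustrative, not existing DPDK code:

    #include <linux/virtio_net.h>

    /* Extend this mask every time the vhost backend gains a feature;
     * an external tool can then read the advertised set from one place. */
    #define VHOST_BACKEND_SUPPORTED_FEATURES          \
        ((1ULL << VIRTIO_NET_F_MRG_RXBUF)             \
         | (1ULL << VIRTIO_NET_F_CSUM)                \
         | (1ULL << VIRTIO_NET_F_HOST_TSO4)           \
         | (1ULL << VIRTIO_NET_F_HOST_TSO6))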

> If you don't, guest visible interface will change
> and you won't be able to migrate.
> 
> It does not make sense to discuss feature bits specifically
> since that is not the only part of interface.
> For example, max ring size supported might change.

I don't quite understand why we have to consider the max ring
size here. Isn't it a virtio device attribute, for which QEMU could
provide such compatibility information?

I mean, DPDK is supposed to support varying vring sizes; it's up to
QEMU to give a specific value.

[dpdk-dev] dpdk/vpp and cross-version migration for vhost

2016-11-17 Thread Michael S. Tsirkin
On Thu, Nov 17, 2016 at 05:49:36PM +0800, Yuanhan Liu wrote:
> On Thu, Nov 17, 2016 at 09:47:09AM +0100, Maxime Coquelin wrote:
> > 
> > 
> > On 11/17/2016 09:29 AM, Yuanhan Liu wrote:
> > >As usual, sorry for the late response :/
> > >
> > >On Thu, Oct 13, 2016 at 08:50:52PM +0300, Michael S. Tsirkin wrote:
> > >>Hi!
> > >>So it looks like we face a problem with cross-version
> > >>migration when using vhost. It's not new but became more
> > >>acute with the advent of vhost user.
> > >>
> > >>For users to be able to migrate between different versions
> > >>of the hypervisor the interface exposed to guests
> > >>by hypervisor must stay unchanged.
> > >>
> > >>The problem is that a qemu device is connected
> > >>to a backend in another process, so the interface
> > >>exposed to guests depends on the capabilities of that
> > >>process.
> > >>
> > >>Specifically, for vhost user interface based on virtio, this includes
> > >>the "host features" bitmap that defines the interface, as well as more
> > >>host values such as the max ring size.  Adding new features/changing
> > >>values to this interface is required to make progress, but on the other
> > >>hand we need ability to get the old host features to be compatible.
> > >
> > >It looks like the same issue as vhost-user reconnect to me. For example,
> > >
> > >- start dpdk 16.07 & qemu 2.5
> > >- kill dpdk
> > >- start dpdk 16.11
> > >
> > >Though DPDK 16.11 has more features compared to dpdk 16.07 (say,
> > >indirect descriptors),
> > >the above should work, because qemu saves the negotiated features before
> > >the disconnect and restores them after the reconnection.
> > >
> > >commit a463215b087c41d7ca94e51aa347cde523831873
> > >Author: Marc-André Lureau 
> > >Date:   Mon Jun 6 18:45:05 2016 +0200
> > >
> > >vhost-net: save & restore vhost-user acked features
> > >
> > >The initial vhost-user connection sets the features to be 
> > > negotiated
> > >with the driver. Renegotiation isn't possible without device reset.
> > >
> > >To handle reconnection of vhost-user backend, ensure the same set 
> > > of
> > >features are provided, and reuse already acked features.
> > >
> > >Signed-off-by: Marc-André Lureau 
> > >
> > >
> > >So we could do something similar for vhost-user? I mean, save the acked
> > >features before migration and restore them after it. This should be able
> > >to keep the compatibility. If the user downgrades the DPDK version, that
> > >also could easily be detected, and we then exit with an error to the
> > >user: migration failed due to incompatible vhost features.
> > >
> > >Just some rough thoughts. Makes tiny sense?
> > 
> > My understanding is that the management tool has to know whether
> > versions are compatible before initiating the migration:
> 
> Makes sense. How about getting and restoring the acked features through
> qemu command lines then, say, through the monitor interface?
> 
> With that, it would be something like:
> 
> - start vhost-user backend (DPDK, VPP, or whatever) & qemu in the src host
> 
> - read the acked features (through monitor interface)
> 
> - start vhost-user backend in the dst host
> 
> - start qemu in the dst host with the just queried acked features
> 
>   QEMU then is expected to use this feature set for the later vhost-user
>   feature negotiation. Exit if feature compatibility is broken.
> 
> Thoughts?
> 
>   --yliu


You keep assuming that you have the VM started first and
figure out things afterwards, but this does not work.

Think about a cluster of machines. You want to start a VM in
a way that will ensure compatibility with all hosts
in a cluster.

If you don't, guest visible interface will change
and you won't be able to migrate.

It does not make sense to discuss feature bits specifically
since that is not the only part of interface.
For example, max ring size supported might change.


Let me describe how it works in qemu/libvirt.
When you install a VM, you can specify compatibility
level (aka "machine type"), and you can query the supported compatibility
levels. Management uses that to find the supported compatibility
and stores the compatibility in XML that is migrated with the VM.
There's also a way to find the latest level which is the
default unless overridden by user, again this level
is recorded and then
- management can make sure migration destination is compatible
- management can avoid migration to hosts without that support


We absolutely can have QEMU be in control here, but what
is missing is the ability to query compatibility as above.



> >  1. The downtime could be unpredictable if a VM has to move from host
> > to host multiple times, which is problematic, especially for NFV.
> >  2. If migration is not possible, maybe the management tool would
> > prefer not to interrupt the VM on the current host.
> > 
> > I have little experience with migration though, so I could be mistaken.
> > 
> > Thanks,
> > Maxime
> > 
> > >
> > >   --yliu

[dpdk-dev] dpdk/vpp and cross-version migration for vhost

2016-11-17 Thread Yuanhan Liu
On Thu, Nov 17, 2016 at 09:47:09AM +0100, Maxime Coquelin wrote:
> 
> 
> On 11/17/2016 09:29 AM, Yuanhan Liu wrote:
> >As usual, sorry for the late response :/
> >
> >On Thu, Oct 13, 2016 at 08:50:52PM +0300, Michael S. Tsirkin wrote:
> >>Hi!
> >>So it looks like we face a problem with cross-version
> >>migration when using vhost. It's not new but became more
> >>acute with the advent of vhost user.
> >>
> >>For users to be able to migrate between different versions
> >>of the hypervisor the interface exposed to guests
> >>by hypervisor must stay unchanged.
> >>
> >>The problem is that a qemu device is connected
> >>to a backend in another process, so the interface
> >>exposed to guests depends on the capabilities of that
> >>process.
> >>
> >>Specifically, for vhost user interface based on virtio, this includes
> >>the "host features" bitmap that defines the interface, as well as more
> >>host values such as the max ring size.  Adding new features/changing
> >>values to this interface is required to make progress, but on the other
> >>hand we need ability to get the old host features to be compatible.
> >
> >It looks like the same issue as vhost-user reconnect to me. For example,
> >
> >- start dpdk 16.07 & qemu 2.5
> >- kill dpdk
> >- start dpdk 16.11
> >
> >Though DPDK 16.11 has more features compared to dpdk 16.07 (say, indirect
> >descriptors),
> >the above should work, because qemu saves the negotiated features before
> >the disconnect and restores them after the reconnection.
> >
> >commit a463215b087c41d7ca94e51aa347cde523831873
> >Author: Marc-André Lureau 
> >Date:   Mon Jun 6 18:45:05 2016 +0200
> >
> >vhost-net: save & restore vhost-user acked features
> >
> >The initial vhost-user connection sets the features to be negotiated
> >with the driver. Renegotiation isn't possible without device reset.
> >
> >To handle reconnection of vhost-user backend, ensure the same set of
> >features are provided, and reuse already acked features.
> >
> >Signed-off-by: Marc-André Lureau 
> >
> >
> >So we could do something similar for vhost-user? I mean, save the acked
> >features before migration and restore them after it. This should be able
> >to keep the compatibility. If the user downgrades the DPDK version, that
> >also could easily be detected, and we then exit with an error to the
> >user: migration failed due to incompatible vhost features.
> >
> >Just some rough thoughts. Makes tiny sense?
> 
> My understanding is that the management tool has to know whether
> versions are compatible before initiating the migration:

Makes sense. How about getting and restoring the acked features through
qemu command lines then, say, through the monitor interface?

With that, it would be something like:

- start vhost-user backend (DPDK, VPP, or whatever) & qemu in the src host

- read the acked features (through monitor interface)

- start vhost-user backend in the dst host

- start qemu in the dst host with the just queried acked features

  QEMU then is expected to use this feature set for the later vhost-user
  feature negotiation. Exit if feature compatibility is broken.

Thoughts?

--yliu

>  1. The downtime could be unpredictable if a VM has to move from host
> to host multiple times, which is problematic, especially for NFV.
>  2. If migration is not possible, maybe the management tool would
> prefer not to interrupt the VM on the current host.
> 
> I have little experience with migration though, so I could be mistaken.
> 
> Thanks,
> Maxime
> 
> >
> > --yliu
> >>
> >>To solve this problem within qemu, qemu has a versioning system based on
> >>a machine type concept which fundamentally is a version string, by
> >>specifying that string one can get hardware compatible with a previous
> >>qemu version. QEMU also reports the latest version and list of versions
> >>supported so libvirt records the version at VM creation and then is
> >>careful to use this machine version whenever it migrates a VM.
> >>
> >>One might wonder how this is solved with a kernel vhost backend. The
> >>answer is that it mostly isn't - instead an assumption is made, that
> >>qemu versions are deployed together with the kernel - this is generally
> >>true for downstreams.  Thus whenever qemu gains a new feature, it is
> >>already supported by the kernel as well.  However, if one attempts
> >>migration with a new qemu from a system with a new kernel to one with
> >>an old kernel, one would get a failure.
> >>
> >>In the world where we have multiple userspace backends, with some of
> >>these supplied by ISVs, this seems non-realistic.
> >>
> >>IMO we need to support vhost backend versioning, ideally
> >>in a way that will also work for vhost kernel backends.
> >>
> >>So I'd like to get some input from both backend and management
> >>developers on what a good solution would look like.
> >>
> >>If we want to emulate the qemu solution, this involves adding the
> >>concept of interface versions to dpdk.  For example, dpdk could supply a
> >>file (or utility printing?) with a list of versions: latest and versions
> >>supported.

[dpdk-dev] dpdk/vpp and cross-version migration for vhost

2016-11-17 Thread Yuanhan Liu
As usual, sorry for the late response :/

On Thu, Oct 13, 2016 at 08:50:52PM +0300, Michael S. Tsirkin wrote:
> Hi!
> So it looks like we face a problem with cross-version
> migration when using vhost. It's not new but became more
> acute with the advent of vhost user.
> 
> For users to be able to migrate between different versions
> of the hypervisor the interface exposed to guests
> by hypervisor must stay unchanged.
> 
> The problem is that a qemu device is connected
> to a backend in another process, so the interface
> exposed to guests depends on the capabilities of that
> process.
> 
> Specifically, for vhost user interface based on virtio, this includes
> the "host features" bitmap that defines the interface, as well as more
> host values such as the max ring size.  Adding new features/changing
> values to this interface is required to make progress, but on the other
> hand we need ability to get the old host features to be compatible.

It looks like the same issue as vhost-user reconnect to me. For example,

- start dpdk 16.07 & qemu 2.5
- kill dpdk
- start dpdk 16.11

Though DPDK 16.11 has more features compared to dpdk 16.07 (say, indirect
descriptors),
the above should work, because qemu saves the negotiated features before
the disconnect and restores them after the reconnection.

commit a463215b087c41d7ca94e51aa347cde523831873
Author: Marc-André Lureau 
Date:   Mon Jun 6 18:45:05 2016 +0200

vhost-net: save & restore vhost-user acked features

The initial vhost-user connection sets the features to be negotiated
with the driver. Renegotiation isn't possible without device reset.

To handle reconnection of vhost-user backend, ensure the same set of
features are provided, and reuse already acked features.

Signed-off-by: Marc-André Lureau 


So we could do something similar for vhost-user? I mean, save the acked
features before migration and restore them after it. This should be able
to keep the compatibility. If the user downgrades the DPDK version, that
also could easily be detected, and we then exit with an error to the
user: migration failed due to incompatible vhost features.

Just some rough thoughts. Makes tiny sense?

--yliu
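
A minimal sketch of the save/restore idea above, modeled loosely on QEMU
commit a463215b ("vhost-net: save & restore vhost-user acked features");
the struct and function here are illustrative, not the QEMU code:

    #include <stdbool.h>
    #include <stdint.h>

    struct vhost_dev_state {
        uint64_t acked_features;   /* saved at first successful negotiation */
        bool     has_acked;
    };

    /* On (re)connection or on the migration destination: succeed only if
     * the backend still offers every feature the guest already acked. */
    static bool
    vhost_check_acked_features(const struct vhost_dev_state *s,
                               uint64_t backend_features)
    {
        if (!s->has_acked) {
            return true;           /* first connection: negotiate freely */
        }
        return (s->acked_features & ~backend_features) == 0;
    }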
> 
> To solve this problem within qemu, qemu has a versioning system based on
> a machine type concept which fundamentally is a version string, by
> specifying that string one can get hardware compatible with a previous
> qemu version. QEMU also reports the latest version and list of versions
> supported so libvirt records the version at VM creation and then is
> careful to use this machine version whenever it migrates a VM.
> 
> One might wonder how this is solved with a kernel vhost backend. The
> answer is that it mostly isn't - instead an assumption is made, that
> qemu versions are deployed together with the kernel - this is generally
> true for downstreams.  Thus whenever qemu gains a new feature, it is
> already supported by the kernel as well.  However, if one attempts
> migration with a new qemu from a system with a new kernel to one with
> an old kernel, one would get a failure.
> 
> In the world where we have multiple userspace backends, with some of
> these supplied by ISVs, this seems non-realistic.
> 
> IMO we need to support vhost backend versioning, ideally
> in a way that will also work for vhost kernel backends.
> 
> So I'd like to get some input from both backend and management
> developers on what a good solution would look like.
> 
> If we want to emulate the qemu solution, this involves adding the
> concept of interface versions to dpdk.  For example, dpdk could supply a
> file (or utility printing?) with list of versions: latest and versions
> supported. libvirt could read that and
> - store latest version at vm creation
> - pass it around with the vm
> - pass it to qemu
> 
> From here, qemu could pass this over the vhost-user channel,
> thus making sure it's initialized with the correct
> compatible interface.
> 
> As version here is an opaque string for libvirt and qemu,
> anything can be used - but I suggest either a list
> of values defining the interface, e.g.
> any_layout=on,max_ring=256
> or a version including the name and vendor of the backend,
> e.g. "org.dpdk.v4.5.6".
> 
> Note that typically the list of supported versions can only be
> extended, not shrunk. Also, if the host/guest interface
> does not change, don't change the current version as
> this just creates work for everyone.
> 
> Thoughts? Would this work well for management? dpdk? vpp?
> 
> Thanks!
> 
> -- 
> MST
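
To make the "list of values" option concrete, here is a minimal sketch of
parsing the suggested opaque string ("any_layout=on,max_ring=256") into
key/value pairs a management tool could compare between hosts. Purely
illustrative; no such parser exists in DPDK or libvirt:

    #include <stdio.h>
    #include <string.h>

    /* Splits "key=value,key=value,..." in place and prints each pair. */
    static void
    parse_backend_version(char *spec)
    {
        char *saveptr = NULL;

        for (char *kv = strtok_r(spec, ",", &saveptr); kv != NULL;
             kv = strtok_r(NULL, ",", &saveptr)) {
            char *eq = strchr(kv, '=');
            if (eq == NULL)
                continue;          /* skip malformed entries */
            *eq = '\0';
            printf("key=%s value=%s\n", kv, eq + 1);
        }
    }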


[dpdk-dev] dpdk/vpp and cross-version migration for vhost

2016-11-17 Thread Maxime Coquelin


On 11/17/2016 09:29 AM, Yuanhan Liu wrote:
> As usual, sorry for the late response :/
>
> On Thu, Oct 13, 2016 at 08:50:52PM +0300, Michael S. Tsirkin wrote:
>> Hi!
>> So it looks like we face a problem with cross-version
>> migration when using vhost. It's not new but became more
>> acute with the advent of vhost user.
>>
>> For users to be able to migrate between different versions
>> of the hypervisor the interface exposed to guests
>> by hypervisor must stay unchanged.
>>
>> The problem is that a qemu device is connected
>> to a backend in another process, so the interface
>> exposed to guests depends on the capabilities of that
>> process.
>>
>> Specifically, for vhost user interface based on virtio, this includes
>> the "host features" bitmap that defines the interface, as well as more
>> host values such as the max ring size.  Adding new features/changing
>> values to this interface is required to make progress, but on the other
>> hand we need ability to get the old host features to be compatible.
>
> It looks like the same issue as vhost-user reconnect to me. For example,
>
> - start dpdk 16.07 & qemu 2.5
> - kill dpdk
> - start dpdk 16.11
>
> Though DPDK 16.11 has more features compared to dpdk 16.07 (say, indirect
> descriptors),
> the above should work, because qemu saves the negotiated features before
> the disconnect and restores them after the reconnection.
>
> commit a463215b087c41d7ca94e51aa347cde523831873
> Author: Marc-André Lureau 
> Date:   Mon Jun 6 18:45:05 2016 +0200
>
> vhost-net: save & restore vhost-user acked features
>
> The initial vhost-user connection sets the features to be negotiated
> with the driver. Renegotiation isn't possible without device reset.
>
> To handle reconnection of vhost-user backend, ensure the same set of
> features are provided, and reuse already acked features.
>
> Signed-off-by: Marc-André Lureau 
>
>
> So we could do something similar for vhost-user? I mean, save the acked
> features before migration and restore them after it. This should be able
> to keep the compatibility. If the user downgrades the DPDK version, that
> also could easily be detected, and we then exit with an error to the
> user: migration failed due to incompatible vhost features.
>
> Just some rough thoughts. Makes tiny sense?

My understanding is that the management tool has to know whether
versions are compatible before initiating the migration:
  1. The downtime could be unpredictable if a VM has to move from host
 to host multiple times, which is problematic, especially for NFV.
  2. If migration is not possible, maybe the management tool would
 prefer not to interrupt the VM on the current host.

I have little experience with migration though, so I could be mistaken.

Thanks,
Maxime

>
>   --yliu
>>
>> To solve this problem within qemu, qemu has a versioning system based on
>> a machine type concept which fundamentally is a version string, by
>> specifying that string one can get hardware compatible with a previous
>> qemu version. QEMU also reports the latest version and list of versions
>> supported so libvirt records the version at VM creation and then is
>> careful to use this machine version whenever it migrates a VM.
>>
>> One might wonder how this is solved with a kernel vhost backend. The
>> answer is that it mostly isn't - instead an assumption is made, that
>> qemu versions are deployed together with the kernel - this is generally
>> true for downstreams.  Thus whenever qemu gains a new feature, it is
>> already supported by the kernel as well.  However, if one attempts
>> migration with a new qemu from a system with a new kernel to one with
>> an old kernel, one would get a failure.
>>
>> In the world where we have multiple userspace backends, with some of
>> these supplied by ISVs, this seems non-realistic.
>>
>> IMO we need to support vhost backend versioning, ideally
>> in a way that will also work for vhost kernel backends.
>>
>> So I'd like to get some input from both backend and management
>> developers on what a good solution would look like.
>>
>> If we want to emulate the qemu solution, this involves adding the
>> concept of interface versions to dpdk.  For example, dpdk could supply a
>> file (or utility printing?) with list of versions: latest and versions
>> supported. libvirt could read that and
>> - store latest version at vm creation
>> - pass it around with the vm
>> - pass it to qemu
>>
>> From here, qemu could pass this over the vhost-user channel,
>> thus making sure it's initialized with the correct
>> compatible interface.
>>
>> As version here is an opaque string for libvirt and qemu,
>> anything can be used - but I suggest either a list
>> of values defining the interface, e.g.
>> any_layout=on,max_ring=256
>> or a version including the name and vendor of the backend,
>> e.g. "org.dpdk.v4.5.6".
>>
>> Note that typically the list of supported versions can only be
>> extended, not shrunk. Also, if the host/guest interface
>> does not change, don't change the current version as
>> this just creates work for everyone.

[dpdk-dev] dpdk/vpp and cross-version migration for vhost

2016-11-16 Thread Maxime Coquelin
Hi Michael,

On 10/13/2016 07:50 PM, Michael S. Tsirkin wrote:
> Hi!
> So it looks like we face a problem with cross-version
> migration when using vhost. It's not new but became more
> acute with the advent of vhost user.
>
> For users to be able to migrate between different versions
> of the hypervisor the interface exposed to guests
> by hypervisor must stay unchanged.
>
> The problem is that a qemu device is connected
> to a backend in another process, so the interface
> exposed to guests depends on the capabilities of that
> process.
>
> Specifically, for the vhost-user interface based on virtio, this
> includes the "host features" bitmap that defines the interface, as well
> as other host values such as the max ring size.  Adding new features or
> changing values in this interface is required to make progress, but on
> the other hand we need the ability to get the old host features to stay
> compatible.
>
> To solve this problem within qemu, qemu has a versioning system based
> on a machine type concept, which fundamentally is a version string; by
> specifying that string, one can get hardware compatible with a previous
> qemu version. QEMU also reports the latest version and the list of
> versions supported, so libvirt records the version at VM creation and
> is then careful to use this machine version whenever it migrates a VM.
>
> One might wonder how this is solved with a kernel vhost backend. The
> answer is that it mostly isn't - instead, an assumption is made that
> qemu versions are deployed together with the kernel - this is generally
> true for downstreams.  Thus whenever qemu gains a new feature, it is
> already supported by the kernel as well.  However, if one attempts
> migration with a new qemu from a system with a new kernel to one with
> an old kernel, one would get a failure.
>
> In a world where we have multiple userspace backends, some of them
> supplied by ISVs, this seems unrealistic.
>
> IMO we need to support vhost backend versioning, ideally
> in a way that will also work for vhost kernel backends.
>
> So I'd like to get some input from both backend and management
> developers on what a good solution would look like.
>
> If we want to emulate the qemu solution, this involves adding the
> concept of interface versions to dpdk.  For example, dpdk could supply
> a file (or a utility printing it?) with a list of versions: the latest
> and those supported. libvirt could read that and

So if I understand correctly, it would be generated at build time?
One problem I see is that DPDK's vhost-user lib API provides a way
to disable features:
"
rte_vhost_feature_disable/rte_vhost_feature_enable(feature_mask)

This function disables/enables some features. For example, it can be 
used to disable mergeable buffers and TSO features, which both are 
enabled by default.
"

I think we should not have this capability on the host side; it should
be the guest's decision whether or not to use some features. And if it
has to be done on the host, QEMU already provides a way to disable
features (moreover per-device, which is not the case with
rte_vhost_feature_disable).
IMHO, we should consider deprecating this API in v17.02.

That said, the API is here, and it would break migration if the version
file advertises some features the vSwitch has disabled at runtime.

> - store latest version at vm creation
> - pass it around with the vm
> - pass it to qemu
> From here, qemu could pass this over the vhost-user channel,
> thus making sure it's initialized with the correct
> compatible interface.

Using vhost-user protocol features, I guess?

> As the version here is an opaque string for libvirt and qemu,
> anything can be used - but I suggest either a list
> of values defining the interface, e.g.
> any_layout=on,max_ring=256
> or a version including the name and vendor of the backend,
> e.g. "org.dpdk.v4.5.6".

I think the first option provides more flexibility.
For example, we could imagine migrating from a process using DPDK's
vhost-user lib to another process using its own implementation (VPP
currently has its own implementation, if I'm not mistaken).
Maybe this scenario does not make sense, but in this case, exposing
values directly would avoid the need for synchronization between
vhost-user implementations.
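
As a sketch of what checking the first option could look like on the
receiving side - the key set and the per-key rules here are made up for
illustration:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Capabilities of the local backend (illustrative values only). */
static int local_any_layout = 1;      /* any_layout=on */
static unsigned local_max_ring = 256; /* max ring size */

/*
 * Check an interface string such as "any_layout=on,max_ring=256".
 * Note the rules are per-key: a boolean must match exactly, while a
 * larger local max_ring could still accept a smaller requested one.
 */
static int
check_iface_string(char *spec) /* modified in place by strtok() */
{
	char *tok, *val;

	for (tok = strtok(spec, ","); tok; tok = strtok(NULL, ",")) {
		val = strchr(tok, '=');
		if (val == NULL)
			return -1;
		*val++ = '\0';

		if (strcmp(tok, "any_layout") == 0) {
			if (local_any_layout != (strcmp(val, "on") == 0))
				return -1;
		} else if (strcmp(tok, "max_ring") == 0) {
			if (strtoul(val, NULL, 10) > local_max_ring)
				return -1;
		} else {
			return -1; /* unknown key: refuse, to stay safe */
		}
	}
	return 0;
}

int
main(void)
{
	char spec[] = "any_layout=on,max_ring=256";

	printf("compatible: %s\n", check_iface_string(spec) == 0 ? "yes" : "no");
	return 0;
}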

>
> Note that typically the list of supported versions can only be
> extended, not shrunk. Also, if the host/guest interface
> does not change, don't change the current version as
> this just creates work for everyone.
>
> Thoughts? Would this work well for management? dpdk? vpp?

One thing I'm not clear on is how it will work for the MTU feature, if
the process the VM is migrated to exposes a larger MTU than the guest
supports (if it has sized its receive buffers to the pre-migration MTU,
for example).
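
If so, the management tool would probably need something like the
following check before migrating (hypothetical; the strictest policy is
to require an exact MTU match):

#include <stdint.h>

/*
 * Hypothetical pre-migration check: the guest sized its receive buffers
 * for src_mtu, so the destination must not expose a larger one; to be
 * strict, require an exact match.
 */
static int
mtu_migration_ok(uint16_t src_mtu, uint16_t dst_mtu)
{
	return dst_mtu == src_mtu;
}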

Thanks,
Maxime


[dpdk-dev] dpdk/vpp and cross-version migration for vhost

2016-10-13 Thread Michael S. Tsirkin
Hi!
So it looks like we face a problem with cross-version
migration when using vhost. It's not new, but it became
more acute with the advent of vhost-user.

For users to be able to migrate between different versions
of the hypervisor, the interface exposed to guests
by the hypervisor must stay unchanged.

The problem is that a qemu device is connected
to a backend in another process, so the interface
exposed to guests depends on the capabilities of that
process.

Specifically, for the vhost-user interface based on virtio, this
includes the "host features" bitmap that defines the interface, as well
as other host values such as the max ring size.  Adding new features or
changing values in this interface is required to make progress, but on
the other hand we need the ability to get the old host features to stay
compatible.

To solve this problem within qemu, qemu has a versioning system based
on a machine type concept, which fundamentally is a version string; by
specifying that string, one can get hardware compatible with a previous
qemu version. QEMU also reports the latest version and the list of
versions supported, so libvirt records the version at VM creation and
is then careful to use this machine version whenever it migrates a VM.

One might wonder how this is solved with a kernel vhost backend. The
answer is that it mostly isn't - instead, an assumption is made that
qemu versions are deployed together with the kernel - this is generally
true for downstreams.  Thus whenever qemu gains a new feature, it is
already supported by the kernel as well.  However, if one attempts
migration with a new qemu from a system with a new kernel to one with
an old kernel, one would get a failure.

In a world where we have multiple userspace backends, some of them
supplied by ISVs, this seems unrealistic.

IMO we need to support vhost backend versioning, ideally
in a way that will also work for vhost kernel backends.

So I'd like to get some input from both backend and management
developers on what a good solution would look like.

If we want to emulate the qemu solution, this involves adding the
concept of interface versions to dpdk.  For example, dpdk could supply
a file (or a utility printing it?) with a list of versions: the latest
and those supported. libvirt could read that and
- store latest version at vm creation
- pass it around with the vm
- pass it to qemu
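
For instance, such a file (or the output of such a utility) might look
like this - the path, keys and version names here are purely
hypothetical:

# /usr/share/dpdk/vhost-user-versions (hypothetical)
latest: org.dpdk.v16.11
supported: org.dpdk.v16.11 org.dpdk.v16.07 org.dpdk.v16.04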

From here, qemu could pass this over the vhost-user channel,
thus making sure it's initialized with the correct
compatible interface.

As the version here is an opaque string for libvirt and qemu,
anything can be used - but I suggest either a list
of values defining the interface, e.g.
any_layout=on,max_ring=256
or a version including the name and vendor of the backend,
e.g. "org.dpdk.v4.5.6".

Note that typically the list of supported versions can only be
extended, not shrunk. Also, if the host/guest interface
does not change, don't change the current version as
this just creates work for everyone.

Thoughts? Would this work well for management? dpdk? vpp?

Thanks!

-- 
MST