On 5/14/20 12:44 PM, Michael S. Tsirkin wrote:
> On Thu, May 14, 2020 at 09:33:32AM +0200, Maxime Coquelin wrote:
>> It is useful for the Vhost-user backend to know
>> about the Virtio device status updates,
>> especially when the driver sets the DRIVER_OK bit.
>>
>> With that information, there is no more need to make
>> hazardous assumptions about when the driver is done
>> with the device configuration.
>>
>> Signed-off-by: Maxime Coquelin <maxime.coque...@redhat.com>
>> ---
>>
>> This patch applies on top of Cindy's "vDPA support in qemu"
>> series, which introduces the .vhost_set_state vhost-backend
>> ops.
>>
>> docs/interop/vhost-user.rst | 12 ++++++++++++
>> hw/net/vhost_net.c | 10 +++++-----
>> hw/virtio/vhost-user.c | 35 +++++++++++++++++++++++++++++++++++
>> 3 files changed, 52 insertions(+), 5 deletions(-)
>>
>> diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
>> index 3b1b6602c7..f108de7458 100644
>> --- a/docs/interop/vhost-user.rst
>> +++ b/docs/interop/vhost-user.rst
>> @@ -815,6 +815,7 @@ Protocol features
>> #define VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD 12
>> #define VHOST_USER_PROTOCOL_F_RESET_DEVICE 13
>> #define VHOST_USER_PROTOCOL_F_INBAND_NOTIFICATIONS 14
>> + #define VHOST_USER_PROTOCOL_F_STATUS 15
>>
>> Master message types
>> --------------------
>> @@ -1263,6 +1264,17 @@ Master message types
>>
>> The state.num field is currently reserved and must be set to 0.
>>
>> +``VHOST_USER_SET_STATUS``
>> + :id: 36
>> + :equivalent ioctl: VHOST_VDPA_SET_STATUS
>> + :slave payload: N/A
>> + :master payload: ``u64``
>> +
>> + When the ``VHOST_USER_PROTOCOL_F_STATUS`` protocol feature has been
>> + successfully negotiated, this message is submitted by the master to
>> + notify the backend of the updated device status, as defined in the
>> + Virtio specification.
>> +
>> Slave message types
>> -------------------
>>
>> diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
>> index 463e333531..37f3156dbc 100644
>> --- a/hw/net/vhost_net.c
>> +++ b/hw/net/vhost_net.c
>> @@ -517,10 +517,10 @@ int vhost_set_state(NetClientState *nc, int state)
>> {
>> struct vhost_net *net = get_vhost_net(nc);
>> struct vhost_dev *hdev = &net->dev;
>> - if (nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
>> - if (hdev->vhost_ops->vhost_set_state) {
>> - return hdev->vhost_ops->vhost_set_state(hdev, state);
>> - }
>> - }
>> +
>> + if (hdev->vhost_ops->vhost_set_state) {
>> + return hdev->vhost_ops->vhost_set_state(hdev, state);
>> + }
>> +
>> return 0;
>> }
>> diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
>> index ec21e8fbe8..b7e52d97fc 100644
>> --- a/hw/virtio/vhost-user.c
>> +++ b/hw/virtio/vhost-user.c
>> @@ -59,6 +59,7 @@ enum VhostUserProtocolFeature {
>> VHOST_USER_PROTOCOL_F_HOST_NOTIFIER = 11,
>> VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD = 12,
>> VHOST_USER_PROTOCOL_F_RESET_DEVICE = 13,
>> + VHOST_USER_PROTOCOL_F_STATUS = 15,
>> VHOST_USER_PROTOCOL_F_MAX
>> };
>>
>> @@ -100,6 +101,7 @@ typedef enum VhostUserRequest {
>> VHOST_USER_SET_INFLIGHT_FD = 32,
>> VHOST_USER_GPU_SET_SOCKET = 33,
>> VHOST_USER_RESET_DEVICE = 34,
>> + VHOST_USER_SET_STATUS = 36,
>> VHOST_USER_MAX
>> } VhostUserRequest;
>>
>> @@ -1886,6 +1888,38 @@ static int vhost_user_set_inflight_fd(struct vhost_dev *dev,
>> return 0;
>> }
>>
>> +static int vhost_user_set_state(struct vhost_dev *dev, int state)
>> +{
>> + bool reply_supported = virtio_has_feature(dev->protocol_features,
>> + VHOST_USER_PROTOCOL_F_REPLY_ACK);
>> +
>> + VhostUserMsg msg = {
>> + .hdr.request = VHOST_USER_SET_STATUS,
>> + .hdr.flags = VHOST_USER_VERSION,
>> + .hdr.size = sizeof(msg.payload.u64),
>> + .payload.u64 = (uint64_t)state,
>> + };
>> +
>> + if (!virtio_has_feature(dev->protocol_features,
>> + VHOST_USER_PROTOCOL_F_STATUS)) {
>> + return -1;
>> + }
>> +
>> + if (reply_supported) {
>> + msg.hdr.flags |= VHOST_USER_NEED_REPLY_MASK;
>> + }
>> +
>> + if (vhost_user_write(dev, &msg, NULL, 0) < 0) {
>> + return -1;
>> + }
>> +
>> + if (reply_supported) {
>> + return process_message_reply(dev, &msg);
>> + }
>
> So since we are doing this anyway, let's give backend the opportunity
> to validate features and fail FEATURES_OK?
Just to be sure I understand: the feature negotiation happens,
then when the backend receives FEATURES_OK, it uses the REPLY_ACK
protocol feature to NACK?

In the DPDK backend, we already check that the features set by
SET_FEATURES are supported by the backend. If they are not, either
the master set the NEED_REPLY flag and an error is reported back to
it, or the NEED_REPLY flag isn't set in the message and the backend
disconnects.

Looking at the QEMU side, it does not seem to set the NEED_REPLY flag
on SET_FEATURES, so the backend closes the socket if that happens.

I wonder whether it would be better for QEMU to set the NEED_REPLY
flag on SET_FEATURES, so that when the backend is running and the
VHOST_F_LOG_ALL feature bit is set, we can be sure the master starts
the live migration only once SET_FEATURES is fully handled on the
backend side.

What do you think?
>> +
>> + return 0;
>> +}
>> +
>> bool vhost_user_init(VhostUserState *user, CharBackend *chr, Error **errp)
>> {
>> if (user->chr) {
>> @@ -1947,4 +1981,5 @@ const VhostOps user_ops = {
>> .vhost_backend_mem_section_filter = vhost_user_mem_section_filter,
>> .vhost_get_inflight_fd = vhost_user_get_inflight_fd,
>> .vhost_set_inflight_fd = vhost_user_set_inflight_fd,
>> + .vhost_set_state = vhost_user_set_state,
>> };
>> --
>> 2.25.4
>