On 2015/7/10 21:05, Paolo Bonzini wrote:
On 26/06/2015 11:22, Thibaut Collet wrote:
Some vhost clients/backends are able to support live migration.
To provide this service the following features must be added:
1. Add the VIRTIO_NET_F_GUEST_ANNOUNCE capability to vhost-net when netdev
On 2015/2/2 7:29, Paolo Bonzini wrote:
On 17/12/2014 07:02, haifeng@huawei.com wrote:
From: linhaifeng haifeng@huawei.com
If we create a VM with two or more NUMA nodes, qemu will create two
or more hugepage files, but qemu only sends one hugepage file fd
to vhost-user when the VM's
On 2015/2/15 5:03, Michael S. Tsirkin wrote:
On Fri, Feb 13, 2015 at 09:45:37PM +0800, linhaifeng wrote:
@@ -35,7 +39,7 @@ consists of 3 header fields and a payload:
* Request: 32-bit type of the request
* Flags: 32-bit bit field:
- - Lower 2 bits are the version (currently 0x01
Hi, Michael
I'm trying to install guest OS with PXE(vhost-user backend), it was failed.
Is there any plans to support it?
--
Regards,
Haifeng
From: Linhaifeng haifeng@huawei.com
Every message needs a reply.
This patch just updates vhost-user.txt to version 0x6.
Signed-off-by: Linhaifeng haifeng@huawei.com
---
docs/specs/vhost-user.txt | 21 -
1 file changed, 16 insertions(+), 5 deletions(-)
diff --git
From: Linhaifeng haifeng@huawei.com
If the slave's version is bigger than 0x5 we will wait for a reply.
Signed-off-by: Linhaifeng haifeng@huawei.com
---
hw/virtio/vhost-user.c | 42 +-
1 file changed, 41 insertions(+), 1 deletion(-)
diff --git a/hw/virtio
From: Linhaifeng haifeng@huawei.com
We don't need VHOST_USER_REPLY_MASK, so the base version is now 0x5.
- update the version to 0x6.
- change the name from flag to version.
Signed-off-by: Linhaifeng haifeng@huawei.com
---
hw/virtio/vhost-user.c | 24
1
From: Linhaifeng haifeng@huawei.com
Mostly the same as ioctl: the master needs the return value to
decide whether to go on or not. So we add these patches for
safer communication.
change log:
v1-v2: modify the annotation about the slave's version.
v2-v3: update the description of version in vhost-user.txt
From: Linhaifeng haifeng@huawei.com
We don't need VHOST_USER_REPLY_MASK, so the base version is now 0x5.
- update the version to 0x6.
- change the name from flag to version.
Signed-off-by: Linhaifeng haifeng@huawei.com
---
hw/virtio/vhost-user.c | 24
1
From: Linhaifeng haifeng@huawei.com
Mostly the same as ioctl: the master needs the return value to
decide whether to go on or not. So we add these patches for
safer communication.
Linhaifeng (3):
vhost-user: add reply to make the protocol safer.
vhost-user: update the version to 0x6
vhost
From: Linhaifeng haifeng@huawei.com
Mostly the same as ioctl: the master needs the return value to
decide whether to go on or not. So we add these patches for
safer communication.
change log:
v1-v2: modify the annotation about the slave's version.
Linhaifeng (3):
vhost-user: update the protocol.
vhost
From: Linhaifeng haifeng@huawei.com
We don't need VHOST_USER_REPLY_MASK, so the base version is now 0x5.
- update the version to 0x6.
- change the name from flag to version.
Signed-off-by: Linhaifeng haifeng@huawei.com
---
hw/virtio/vhost-user.c | 24
1
From: Linhaifeng haifeng@huawei.com
Every message needs a reply.
This patch just updates vhost-user.txt to version 0x6.
Signed-off-by: Linhaifeng haifeng@huawei.com
---
docs/specs/vhost-user.txt | 17 +++--
1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/docs
From: Linhaifeng haifeng@huawei.com
If the slave's version is bigger than 0x5 we will wait for a reply.
Signed-off-by: Linhaifeng haifeng@huawei.com
---
hw/virtio/vhost-user.c | 40
1 file changed, 40 insertions(+)
diff --git a/hw/virtio/vhost-user.c b
From: Linhaifeng haifeng@huawei.com
Every message needs a reply.
Signed-off-by: Linhaifeng haifeng@huawei.com
---
docs/specs/vhost-user.txt | 19 +--
1 file changed, 17 insertions(+), 2 deletions(-)
diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
index
No. Maybe the existing slaves need to add the reply in their code.
So that's not good. We need a way to negotiate the capability,
we can't just deadlock with legacy slaves.
Hi,Michael
Do you have any suggestions?
On 2015/2/10 20:04, Michael S. Tsirkin wrote:
So that's not good. We need a way to negotiate the capability,
we can't just deadlock with legacy slaves.
Should we wait for several seconds and, if the slave does not reply, just return an error?
--
Regards,
Haifeng
On 2015/2/10 16:46, Michael S. Tsirkin wrote:
On Tue, Feb 10, 2015 at 01:48:12PM +0800, linhaifeng wrote:
From: Linhaifeng haifeng@huawei.com
The slave should reply to the master, setting u64 to 0 if
mmap of all regions succeeded, otherwise setting u64 to 1.
Signed-off-by: Linhaifeng haifeng
On 2015/2/10 18:41, Michael S. Tsirkin wrote:
On Tue, Feb 10, 2015 at 06:27:04PM +0800, Linhaifeng wrote:
On 2015/2/10 16:46, Michael S. Tsirkin wrote:
On Tue, Feb 10, 2015 at 01:48:12PM +0800, linhaifeng wrote:
From: Linhaifeng haifeng@huawei.com
The slave should reply to the master and set
On 2015/2/10 20:04, Michael S. Tsirkin wrote:
So that's not good. We need a way to negotiate the capability,
we can't just deadlock with legacy slaves.
Or add a new message to query the slave's version: if the slave does not reply we don't wait;
otherwise, if the version is the same as QEMU's, we wait for the reply.
From: Linhaifeng haifeng@huawei.com
If u64 is not 0 we should return -1 to tell qemu not to go on.
Signed-off-by: Linhaifeng haifeng@huawei.com
---
hw/virtio/vhost-user.c | 13 -
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/hw/virtio/vhost-user.c b/hw/virtio
From: Linhaifeng haifeng@huawei.com
If u64 is not 0 we should return -1 to tell qemu not to go on.
Remove some unnecessary '\n' in error_report.
Signed-off-by: Linhaifeng haifeng@huawei.com
---
hw/virtio/vhost-user.c | 33 ++---
1 file changed, 22 insertions
From: Linhaifeng haifeng@huawei.com
The slave should reply to the master, setting u64 to 0 if
mmap of all regions succeeded, otherwise setting u64 to 1.
Signed-off-by: Linhaifeng haifeng@huawei.com
---
docs/specs/vhost-user.txt | 1 +
1 file changed, 1 insertion(+)
diff --git a/docs/specs/vhost-user.txt
On 2015/2/10 11:57, Gonglei wrote:
On 2015/2/10 11:24, linhaifeng wrote:
From: Linhaifeng haifeng@huawei.com
If u64 is not 0 we should return -1 to tell qemu not to go on.
Signed-off-by: Linhaifeng haifeng@huawei.com
---
hw/virtio/vhost-user.c | 13 -
1 file changed
From: Linhaifeng haifeng@huawei.com
The slave should reply to the master, setting u64 to 0 if
mmap of all regions succeeded, otherwise setting u64 to 1.
Signed-off-by: Linhaifeng haifeng@huawei.com
---
docs/specs/vhost-user.txt | 1 +
1 file changed, 1 insertion(+)
diff --git a/docs/specs/vhost-user.txt
On 2015/2/10 14:35, Gonglei wrote:
On 2015/2/10 13:48, linhaifeng wrote:
From: Linhaifeng haifeng@huawei.com
If u64 is not 0 we should return -1 to tell qemu not to go on.
Remove some unnecessary '\n' in error_report.
Hi, haifeng:
You'd better split this out into a separate patch to do this work
On 2015/2/2 7:29, Paolo Bonzini wrote:
On 17/12/2014 07:02, haifeng@huawei.com wrote:
From: linhaifeng haifeng@huawei.com
If we create a VM with two or more NUMA nodes, qemu will create two
or more hugepage files, but qemu only sends one hugepage file fd
to vhost-user when the VM's
On 2015/1/29 18:51, Michael S. Tsirkin wrote:
On Thu, Jan 29, 2015 at 11:58:08AM +0800, Linhaifeng wrote:
Hi,Michael S.Tsirkin
The vhost-user device will not work if there are two NUMA nodes in the VM.
Should we fix this bug or ignore it?
I suggest we fix this bug.
I saw that you
Hi,Michael S.Tsirkin
The vhost-user device will not work if there are two NUMA nodes in the VM.
Should we fix this bug or ignore it?
On 2014/12/18 13:06, Linhaifeng wrote:
On 2014/12/17 14:02, haifeng@huawei.com wrote:
From: linhaifeng haifeng@huawei.com
If we create VM with two
On 2014/12/17 14:02, haifeng@huawei.com wrote:
From: linhaifeng haifeng@huawei.com
If we create a VM with two or more NUMA nodes, qemu will create two
or more hugepage files, but qemu only sends one hugepage file fd
to vhost-user when the VM's memory size is 2G with two NUMA nodes
= 0xC
size = 2146697216
ua = 0x2acc
offset = 786432
region fd[7]:
gpa = 0x0
size = 655360
ua = 0x2ac0
offset = 0
we can see the memory region only contains one hugepage. Is this a bug?
On 2014/12/11 11:10, Linhaifeng wrote
Hi,all
Yesterday I tested the set_mem_table message and found that qemu does not send all the
memory info (fd and size) when the
VM memory size is 2G with two NUMA nodes (two hugepage files). If the VM memory
size is 4G with two NUMA nodes it
will send all the memory info.
Here is my understanding; is this right?
On 2014/12/11 11:10, Linhaifeng wrote:
Hi,all
Yesterday I tested the set_mem_table message and found that qemu does not send all the
memory info (fd and size) when the
VM memory size is 2G with two NUMA nodes (two hugepage files). If the VM memory
size is 4G with two NUMA nodes it
will send all
Good job! Passed the test with a VM bigger than 3.5G.
On 2014/11/3 2:01, Michael S. Tsirkin wrote:
qemu_get_ram_block_host_ptr should get ram_addr_t,
vhost-user passes in GPA.
That's very wrong.
Signed-off-by: Michael S. Tsirkin m...@redhat.com
---
hw/virtio/vhost-user.c | 2 +-
1 file changed, 1
Hi,all
A VM using the vhost-user backend cannot start up when memory is bigger than 3.5G. The log
prints "Bad ram offset 1". Is this a bug?
log:
[2014-11-01T08:39:07.245324Z] virtio_set_status:524 virtio-net device status is
1 that means ACKNOWLEDGE
[2014-11-01T08:39:07.247225Z] virtio_set_status:524
[18446650253340835840] failed
On Wed, Sep 17, 2014 at 10:33:23AM +0800, Linhaifeng wrote:
Hi,
There are two memory regions when receiving the VHOST_SET_MEM_TABLE message:
region[0]
gpa = 0x0
size = 655360
ua = 0x2ac0
offset = 0
region[1]
gpa
Hi,
I ran qemu 2.1 with 1G hugepages and found that the VM can't start (or starts too slowly?) but
can start quickly with 2M hugepages.
kernel version:3.10.0-123.6.3.el7.x86_64
command:qemu-kvm -name vm1 -enable-kvm -smp 2 -m 2048 -object
memory-backend-file,id=mem,size=2048M,mem-path=/dev/hugepages,share=on
On 2014/10/20 13:32, Wen Congyang wrote:
On 10/20/2014 12:48 PM, Linhaifeng wrote:
On 2014/10/20 10:12, Wen Congyang wrote:
On 10/18/2014 11:20 AM, Linhaifeng wrote:
On 2014/10/17 21:26, Daniel P. Berrange wrote:
On Fri, Oct 17, 2014 at 04:57:27PM +0800, Linhaifeng wrote:
On 2014/10
the f_count when we call send.
2. Kernel calls when we call recv:
unix_stream_recvmsg -> scm_fp_dup -> get_file -> atomic_long_inc(&f->f_count)
On 2014/10/20 14:26, Wen Congyang wrote:
On 10/20/2014 02:17 PM, Linhaifeng wrote:
On 2014/10/20 13:32, Wen Congyang wrote:
On 10/20/2014 12:48 PM, Linhaifeng wrote
On 2014/10/20 10:12, Wen Congyang wrote:
On 10/18/2014 11:20 AM, Linhaifeng wrote:
On 2014/10/17 21:26, Daniel P. Berrange wrote:
On Fri, Oct 17, 2014 at 04:57:27PM +0800, Linhaifeng wrote:
On 2014/10/17 16:33, Daniel P. Berrange wrote:
On Fri, Oct 17, 2014 at 04:27:17PM +0800, haifeng
On 2014/10/17 16:33, Daniel P. Berrange wrote:
On Fri, Oct 17, 2014 at 04:27:17PM +0800, haifeng@huawei.com wrote:
From: linhaifeng haifeng@huawei.com
A VM started with shared hugepages should close the hugepage file fd
when it exits, because the hugepage fd may be sent to another process,
e.g.
On 2014/10/17 16:56, Gonglei wrote:
On 2014/10/17 16:33, Daniel P. Berrange wrote:
On Fri, Oct 17, 2014 at 04:27:17PM +0800, haifeng@huawei.com wrote:
From: linhaifeng haifeng@huawei.com
A VM started with shared hugepages should close the hugepage file fd
when it exits, because the hugepage
On 2014/10/17 16:57, Linhaifeng wrote:
On 2014/10/17 16:33, Daniel P. Berrange wrote:
On Fri, Oct 17, 2014 at 04:27:17PM +0800, haifeng@huawei.com wrote:
From: linhaifeng haifeng@huawei.com
A VM started with shared hugepages should close the hugepage file fd
when it exits, because
On 2014/10/17 21:26, Daniel P. Berrange wrote:
On Fri, Oct 17, 2014 at 04:57:27PM +0800, Linhaifeng wrote:
On 2014/10/17 16:33, Daniel P. Berrange wrote:
On Fri, Oct 17, 2014 at 04:27:17PM +0800, haifeng@huawei.com wrote:
From: linhaifeng haifeng@huawei.com
The VM start
On 2014/10/17 16:43, zhanghailiang wrote:
On 2014/10/17 16:27, haifeng@huawei.com wrote:
From: linhaifeng haifeng@huawei.com
A VM started with shared hugepages should close the hugepage file fd
when it exits, because the hugepage fd may be sent to another process,
e.g. vhost-user. If qemu does not close
On 2014/10/14 20:02, Linhaifeng wrote:
Hi,all
I was trying to use hugepages with a VM and found that the hugepages were not freed
when the VM was closed.
1. Before starting the VM, /proc/meminfo shows:
AnonHugePages:124928 kB
HugePages_Total:4096
HugePages_Free: 3072
HugePages_Rsvd:0
Hi,all
I was trying to use hugepages with a VM and found that the hugepages were not freed when
the VM was closed.
1. Before starting the VM, /proc/meminfo shows:
AnonHugePages:124928 kB
HugePages_Total:4096
HugePages_Free: 3072
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize: 2048 kB
On 2014/10/14 20:08, Daniel P. Berrange wrote:
On Tue, Oct 14, 2014 at 08:02:38PM +0800, Linhaifeng wrote:
Hi,all
I was trying to use hugepages with a VM and found that the hugepages were not freed
when the VM was closed.
1. Before starting the VM, /proc/meminfo shows:
AnonHugePages:124928 kB
I want to add this message for when the vhost-user backend's memory changes. Any
suggestions?
* VHOST_USER_CLEAR_MEM_TABLE
Id: 15
Equivalent ioctl: VHOST_USER_CLEAR_MEM_TABLE
Master payload: u64
Clear the memory regions on the slave when the forwarded memory is
freed, e.g. when
On 2014/9/18 13:16, Michael S. Tsirkin wrote:
On Thu, Sep 18, 2014 at 08:45:37AM +0800, Linhaifeng wrote:
On 2014/9/17 17:56, Michael S. Tsirkin wrote:
On Wed, Sep 17, 2014 at 05:39:04PM +0800, Linhaifeng wrote:
I think maybe it is not needed for the backend to wait for a response
Sorry, it is not needed to add VHOST_NET_SET_BACKEND. I found I can reset in
VHOST_GET_VRING_BASE, because before the backend cleanup it will send
VHOST_GET_VRING_BASE.
On 2014/9/17 13:50, Linhaifeng wrote:
Hi,
The VHOST_NET_SET_BACKEND could tell the user when the backend is created
I think maybe it is not needed for the backend to wait for a response.
There is another way: vhost-user sends VHOST_GET_MEM_TABLE to qemu, then qemu
sends VHOST_SET_MEM_TABLE to update the regions of vhost-user, the same as other
commands.
If qemu could respond to the request of the vhost-user, the vhost-user
On 2014/9/17 17:56, Michael S. Tsirkin wrote:
On Wed, Sep 17, 2014 at 05:39:04PM +0800, Linhaifeng wrote:
I think maybe it is not needed for the backend to wait for a response.
There is another way: vhost-user sends VHOST_GET_MEM_TABLE to qemu, then qemu
sends VHOST_SET_MEM_TABLE to update
On 2014/9/17 17:56, Michael S. Tsirkin wrote:
On Wed, Sep 17, 2014 at 05:39:04PM +0800, Linhaifeng wrote:
I think maybe it is not needed for the backend to wait for a response.
There is another way: vhost-user sends VHOST_GET_MEM_TABLE to qemu, then qemu
sends VHOST_SET_MEM_TABLE to update
Hi,
I write the data to the rx-ring and write to the kick fd to notify the guest, but there
are no interrupts in the guest.
my notify code:
uint64_t kick_it = 1;
write(vring[0]->kickfd, &kick_it, sizeof(kick_it));
cat /proc/interrupts in the guest:
41: 0 PCI-MSI-EDGE virtio0-input
42: 0
On 2014/9/15 2:41, Michael S. Tsirkin wrote:
From: Damjan Marion damar...@cisco.com
Header length check should happen only if backend is kernel. For user
backend there is no reason to reset this bit.
vhost-user code does not define .has_vnet_hdr_len so
VIRTIO_NET_F_MRG_RXBUF cannot be
Hi,
There are two memory regions when receiving the VHOST_SET_MEM_TABLE message:
region[0]
gpa = 0x0
size = 655360
ua = 0x2ac0
offset = 0
region[1]
gpa = 0xC
size = 2146697216
ua = 0x2acc
offset = 786432
region[0]
Hi,
The VHOST_NET_SET_BACKEND could tell the user when the backend is created or
destroyed. It is useful for the user, but this command is missing from the protocol
How to test sending data to the VM?
On 2014/7/18 7:44, Michael S. Tsirkin wrote:
From: Nikolay Nikolaev n.nikol...@virtualopensystems.com
A new field mmap_offset was added in the vhost-user message, we need to
reflect
this change in the test too.
Signed-off-by: Nikolay Nikolaev
qemu supports multiple net devices; the vhost-user module should support multiple net
devices too.
-Original Message-
From: Nikolay Nikolaev [mailto:n.nikol...@virtualopensystems.com]
Sent: Wednesday, September 10, 2014 1:54 AM
To: Linhaifeng; Daniel Raho
Cc: qemu-devel; m...@redhat.com Michael S
From: Michael S. Tsirkin [mailto:m...@redhat.com]
Sent: Wednesday, September 10, 2014 4:41 AM
To: Nikolay Nikolaev
Cc: Linhaifeng; Daniel Raho; qemu-devel; Lilijun (Jerry); Paolo Bonzini;
Damjan Marion; VirtualOpenSystems Technical Team
Subject: Re: Re: the userspace process vapp mmap