On Tue, Feb 10, 2015 at 07:59:16PM +0800, Linhaifeng wrote:
> On 2015/2/10 18:41, Michael S. Tsirkin wrote:
> > On Tue, Feb 10, 2015 at 06:27:04PM +0800, Linhaifeng wrote:
> >> On 2015/2/10 16:46, Michael S. Tsirkin wrote:
> >>> On Tue, Feb 10, 2015 at 01:48:12PM +0800, linhaifeng wrote:
> >>>> From: Linhaifeng <haifeng....@huawei.com>
> >>>>
> >>>> The slave should reply to the master and set u64 to 0 if it
> >>>> mmaps all regions successfully, otherwise set u64 to 1.
> >>>>
> >>>> Signed-off-by: Linhaifeng <haifeng....@huawei.com>
> >>>
> >>> How does this work with existing slaves though?
> >>>
> >> Slaves should work like this:
> >>
> >> int set_mem_table(...)
> >> {
> >>     ....
> >>     for (idx = 0, i = 0; idx < memory.nregions; idx++) {
> >>         ....
> >>         mem = mmap(..);
> >>         if (MAP_FAILED == mem) {
> >>             msg->msg.u64 = 1;
> >>             msg->msg.size = MEMB_SIZE(VhostUserMsg, u64);
> >>             return 1;
> >>         }
> >>     }
> >>
> >>     ....
> >>
> >>     msg->msg.u64 = 0;
> >>     msg->msg.size = MEMB_SIZE(VhostUserMsg, u64);
> >>     return 1;
> >> }
> >>
> >> If slaves do not reply, QEMU will wait forever.
> >
> > Are you sure existing slaves reply?
>
> No. Maybe the existing slaves need to add a reply in their code.
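A minimal sketch of the master side that this proposal implies (assumption:
the layout and names here -- SketchMsg, wait_set_mem_table_ack, fd -- are
made up for illustration, this is not QEMU's actual vhost-user code).  It
shows where a legacy slave that never replies would leave the master blocked:

#include <stdint.h>
#include <unistd.h>

#define VHOST_USER_SET_MEM_TABLE 5

typedef struct {
    uint32_t request;
    uint32_t flags;
    uint32_t size;   /* payload size */
    uint64_t u64;    /* proposed reply: 0 = success, >0 = failure */
} SketchMsg;

/* Block until the slave reports the result of SET_MEM_TABLE.
 * Returns 0 if all regions were mapped, -1 otherwise. */
static int wait_set_mem_table_ack(int fd)
{
    SketchMsg reply;

    /* A legacy slave that never sends this reply leaves the master
     * stuck in this read() forever -- the deadlock discussed below. */
    if (read(fd, &reply, sizeof(reply)) != sizeof(reply)) {
        return -1;
    }
    return reply.u64 == 0 ? 0 : -1;
}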
So that's not good.  We need a way to negotiate the capability,
we can't just deadlock with legacy slaves.

> >>>> ---
> >>>>  docs/specs/vhost-user.txt | 1 +
> >>>>  1 file changed, 1 insertion(+)
> >>>>
> >>>> diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
> >>>> index 650bb18..c96bf6b 100644
> >>>> --- a/docs/specs/vhost-user.txt
> >>>> +++ b/docs/specs/vhost-user.txt
> >>>> @@ -171,6 +171,7 @@ Message types
> >>>>       Id: 5
> >>>>       Equivalent ioctl: VHOST_SET_MEM_TABLE
> >>>>       Master payload: memory regions description
> >>>> +     Slave payload: u64 (0:success >0:failed)
> >>>>
> >>>>       Sets the memory map regions on the slave so it can translate the vring
> >>>>       addresses. In the ancillary data there is an array of file descriptors
> >>>> --
> >>>> 1.7.12.4
>
> --
> Regards,
> Haifeng
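One possible shape for the capability negotiation asked for above, sketched
under stated assumptions: the feature bit VHOST_USER_F_SET_MEM_TABLE_REPLY
and the send_set_mem_table helper are hypothetical names, not something the
spec defines.  The master only blocks for the u64 status when the slave has
acknowledged the capability, so legacy slaves keep the old no-reply
behaviour:

#include <stdint.h>

#define VHOST_USER_F_SET_MEM_TABLE_REPLY (1ULL << 30)   /* hypothetical bit */

struct master_state {
    uint64_t acked_features;   /* feature bits the slave acknowledged */
    int fd;                    /* vhost-user socket */
};

int send_set_mem_table(int fd);        /* hypothetical: sends regions + fds */
int wait_set_mem_table_ack(int fd);    /* from the sketch above */

static int master_set_mem_table(struct master_state *m)
{
    if (send_set_mem_table(m->fd) < 0) {
        return -1;
    }

    /* Only wait for the u64 status if the slave negotiated the bit;
     * legacy slaves that never reply are not waited on, avoiding the
     * deadlock raised above. */
    if (m->acked_features & VHOST_USER_F_SET_MEM_TABLE_REPLY) {
        return wait_set_mem_table_ack(m->fd);
    }
    return 0;
}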