On Mon, 21 Nov 2016, Andrii Anisov wrote:
> > Why is it not a fair comparison? Because the design is different or
> > because of the settings?
>
> Because of the design difference.
> It is not about memcpy vs mapping within the same stack (design). And
> you measured interdomain communication only, not involving hardware
> interfaces.
>
> > I am happy to

On Thu, 17 Nov 2016, Stefano Stabellini wrote:
> > > I have just run the numbers on ARM64 (APM m400) and it is still much
> > > faster than netfront/netback. This is what I get by running iperf -c in
> > > a VM and iperf -s in Dom0:
> > >
> > > PVCalls Netfront/Netback
> > > -P
On Wed, 16 Nov 2016, Andrii Anisov wrote:
> > For example, take a look at PVCalls, which is entirely based on data
> > copies:
> >
> > http://marc.info/?l=xen-devel=147639616310487
> >
> > I have already shown that it performs better than netfront/netback on
> > x86 in this blog post:
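
The "memcpy vs mapping" point above can be made concrete with a small sketch.
The code below is not the actual PVCalls ring protocol: struct data_ring,
ring_write(), ring_read(), the ring size and the absence of memory barriers
and event-channel notifications are all simplifying assumptions. It only
shows the essence of a data-copy design: the shared ring is mapped once when
the connection is set up, and every request afterwards is just a copy into or
out of it, with no per-request grant operation.

#include <stdint.h>
#include <stddef.h>

#define RING_SIZE 4096                   /* one shared page, power of two */
#define RING_MASK (RING_SIZE - 1)

/* Lives in a page shared between frontend and backend (granted once). */
struct data_ring {
    uint32_t prod;                       /* total bytes written (frontend) */
    uint32_t cons;                       /* total bytes read (backend)     */
    uint8_t  buf[RING_SIZE];             /* payload area                   */
};

/* Frontend side: copy payload into the shared ring, return bytes written. */
static size_t ring_write(struct data_ring *r, const uint8_t *data, size_t len)
{
    uint32_t space = RING_SIZE - (r->prod - r->cons);
    size_t i;

    if (len > space)
        len = space;                     /* partial write when the ring is full */
    for (i = 0; i < len; i++)
        r->buf[(r->prod + i) & RING_MASK] = data[i];
    r->prod += (uint32_t)len;            /* real code pairs this with a write
                                            barrier and an event-channel kick */
    return len;
}

/* Backend side: copy payload out of the shared ring, return bytes read. */
static size_t ring_read(struct data_ring *r, uint8_t *data, size_t len)
{
    uint32_t avail = r->prod - r->cons;
    size_t i;

    if (len > avail)
        len = avail;
    for (i = 0; i < len; i++)
        data[i] = r->buf[(r->cons + i) & RING_MASK];
    r->cons += (uint32_t)len;            /* real code pairs this with a barrier
                                            and an event-channel kick */
    return len;
}

A mapping-based design, by contrast, pays grant map/unmap work (hypercalls
and TLB maintenance) per request, which is the trade-off the measurements in
this thread are comparing.
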
Julien,

> > What we estimate now is a thin Dom0, without any drivers, running with a
> > ramdisk. All drivers would be moved to a special guest domain.
>
> You may want to have a look at what has been done on x86 with the "Dedicated
> hardware domain".

I have to look into that.

> Another solution is

On Mon, 14 Nov 2016, Andrii Anisov wrote:
> Could you define an unacceptable performance drop? Have you tried to
> measure what the impact would be?
>
> I know it can be bad, depending on the class of protocols. I think that
> if numbers were provided to demonstrate that bounce buffers (the swiotlb
> in Linux) are too slow for a given use
>
> You could also exhaust the memory of the backend domain.
>
> The problem with this is not so much the code changes as the risk of
> exhausting Dom0 memory. I think the approach you proposed previously,
> explicitly giving memory below 4G to DomUs, is better.

I see the point.
Sincerely,
Andrii
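
To make the two concerns above concrete, the extra copy and the pre-reserved
low memory, here is a conceptual sketch of what a bounce buffer does for a
device that can only address the low 4GB. It is not the Linux swiotlb
implementation: dma_map_for_device(), low_pool and POOL_SIZE are invented for
the illustration, and a real pool is carved out of RAM below 4GB at boot
rather than being a static array.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define DMA_LIMIT  (1ULL << 32)          /* device only reaches below 4GB    */
#define POOL_SIZE  (1024 * 1024)         /* low memory set aside at boot     */

static uint8_t low_pool[POOL_SIZE];      /* stand-in for RAM below 4GB       */
static size_t  low_pool_used;

/*
 * Prepare 'buf' (whose physical address is 'buf_phys') for a transfer to
 * the device, and return the address the device should actually use.
 */
static void *dma_map_for_device(void *buf, uint64_t buf_phys, size_t len)
{
    if (buf_phys + len <= DMA_LIMIT)
        return buf;                      /* fast path: no copy needed         */

    if (low_pool_used + len > POOL_SIZE)
        return NULL;                     /* pool exhausted: the I/O must wait */

    void *bounce = &low_pool[low_pool_used];
    low_pool_used += len;
    memcpy(bounce, buf, len);            /* the extra copy on every request   */
    return bounce;
}

For device-to-memory transfers the data also has to be copied back out of the
bounce buffer on unmap, so the copy cost is paid in both directions; and the
pool itself is low memory that is reserved up front and can run out, which is
exactly the exhaustion risk mentioned above.
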
> There are many reasons: for example, because you want Dom0 to be Linux
> and the storage driver domain to be FreeBSD. Or because you want the
> network driver domain to be QNX.

What we estimate now is a thin Dom0, without any drivers, running with a
ramdisk. All drivers would be moved to a special guest domain.

> Without an SMMU, driver domains are not about security anymore; they are
> about disaggregation and componentization.

That is our case, and it is what we can provide to customers on chips
without an SMMU.

Sincerely,
Andrii Anisov.

On Fri, 11 Nov 2016, Julien Grall wrote:
> > > The guest should be IPA agnostic and not care how the physical device is
> > > working when using PV drivers. So for me,
> > > this should be fixed in the DOM0 OS.
> > Do you consider driver domain guests?
>
> The main point of driver domain is
Sorry for the confusion. The sentence:
> Also it does answer to the next question:
should have read:
> Also it does NOT answer to the next question:
> > The guest should be IPA agnostic and not care how the physical device is
> > working when using PV drivers. So for me,
> > this should be
Hello Julien,

Please see my comments below:

> From my understanding of what you say, the problem is not because domU is
> using memory above 4GB but the fact that the backend driver does not take
> the right decision

Yep, the problem could be treated in such a way.

> (e.g. using bounce
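
The excerpt is cut off above, but the decision it refers to can be sketched.
choose_dma_strategy() and device_dma_mask are invented names for the
illustration; the point is only that the backend, which is the one that knows
the device's addressing limit, decides per request whether a granted page can
be used directly or has to go through a bounce buffer, so the guest itself can
stay IPA-agnostic.

#include <stdint.h>
#include <stddef.h>

enum dma_strategy {
    DMA_DIRECT,      /* device can address the granted page: use it as-is    */
    DMA_BOUNCE,      /* device cannot: copy through a buffer below its limit */
};

/*
 * Only the backend knows the device's DMA mask, so only the backend can
 * take this decision; the guest never has to know where its memory sits
 * in the host address map.
 */
static enum dma_strategy choose_dma_strategy(uint64_t page_phys, size_t len,
                                             uint64_t device_dma_mask)
{
    if (page_phys + len - 1 <= device_dma_mask)
        return DMA_DIRECT;
    return DMA_BOUNCE;
}
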
Hello,

On 11/11/16 11:35, Andrii Anisov wrote:
> Sorry for the late intrusion into this discussion. I would like to introduce
> my view of the issues behind 32-bit addressing DMA controllers in ARMv7/v8
> SoCs.
>
> > On AArch64 SoCs, some IPs may only have the capability to access a
> > 32-bit address space. The physical memory assigned to Dom0 may
> > not
Hi Stefano, Julien,

Any comments on this v4 patch?

Thanks,
Peng

On Fri, Sep 23, 2016 at 10:55:34AM +0800, Peng Fan wrote:
> On AArch64 SoCs, some IPs may only have the capability to access a
> 32-bit address space. The physical memory assigned to Dom0 may not be
> in the 4GB address space, and then those IPs will not work properly.
> So we need to allocate memory under 4GB for Dom0.
> There is no restriction on how much lowmem needs to
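
A simplified sketch of the idea behind the patch, purely for illustration:
reserve part of Dom0's RAM from below 4GB so that the DMA-limited IPs can
reach it, and take the rest from anywhere. struct membank as used here, the
free_ram layout, alloc_ram_below() and build_dom0_banks() are invented for
this example; the real code works in terms of Xen memory banks and page
allocations, not plain byte counts.

#include <stdint.h>
#include <stddef.h>

#define GB(x)      ((uint64_t)(x) << 30)
#define LOW_LIMIT  GB(4)                 /* boundary 32-bit IPs can reach */

struct membank {
    uint64_t start;                      /* physical start address */
    uint64_t size;                       /* size in bytes          */
};

/* Toy model of the host's free RAM (addresses invented for the example). */
static struct membank free_ram[] = {
    { GB(1), GB(2) },                    /* 2GB available below 4GB */
    { GB(8), GB(8) },                    /* 8GB of high memory      */
};

/* First-fit: carve 'size' bytes that end below 'limit', or return 0. */
static uint64_t alloc_ram_below(uint64_t size, uint64_t limit)
{
    for (size_t i = 0; i < sizeof(free_ram) / sizeof(free_ram[0]); i++) {
        struct membank *b = &free_ram[i];
        if (b->size >= size && b->start + size <= limit) {
            uint64_t start = b->start;
            b->start += size;            /* carve from the front of the bank */
            b->size  -= size;
            return start;
        }
    }
    return 0;                            /* nothing suitable found */
}

/* Give Dom0 'lowmem_size' bytes below 4GB first, the rest from anywhere. */
static int build_dom0_banks(struct membank banks[2],
                            uint64_t total_size, uint64_t lowmem_size)
{
    banks[0].start = alloc_ram_below(lowmem_size, LOW_LIMIT);
    if (banks[0].start == 0)
        return -1;                       /* no suitable low RAM found */
    banks[0].size = lowmem_size;

    banks[1].start = alloc_ram_below(total_size - lowmem_size, UINT64_MAX);
    if (banks[1].start == 0)
        return -1;
    banks[1].size = total_size - lowmem_size;
    return 0;
}
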