Re: [Xen-devel] Question about mapping between domains

2015-07-22 Thread Oleksandr Dmytryshyn
On Fri, Jul 17, 2015 at 11:59 AM, Ian Campbell  wrote:
> Does this mean everything is working as you need, or is there a further
> issue which needs addressing?
Everything is working as needed. Thank you.



Re: [Xen-devel] Question about mapping between domains

2015-07-17 Thread Ian Campbell
On Fri, 2015-07-17 at 10:43 +0300, Oleksandr Dmytryshyn wrote:
> Hi, Ian. Thank you for the tips.
> 
> On Mon, Jul 13, 2015 at 12:04 PM, Ian Campbell  
> wrote:
> > There is an additional quirk for a 1:1 mapped dom0 which is that we
> > don't actually decrease reservation when ballooning, but keep the 1:1
> > mfn in anticipation of ballooning it back in later.
> Currently we have this quirk enabled in DomD (the driver domain).
> 
> > If you can't arrange to use already ballooned buffers for your DMA
> > buffer then you will need to manually balloon it out before and balloon
> > it back in later.
> I've tried this and everything works (I can map and then unmap memory in both
> directions: DomU -> DomD and DomD -> DomU).
> 
> > You may also want to extend the dom0 1:1 quirk described above to your
> > 1:1 mapped domD.
> Currently this quirk is enabled in DomD. In this case I can map memory from
> DomU to DomD (as is done in all PV drivers). But if this quirk is
> enabled in DomU, I can also map memory from DomD to DomU.

Does this mean everything is working as you need, or is there a further
issue which needs addressing?

Ian.




Re: [Xen-devel] Question about mapping between domains

2015-07-17 Thread Oleksandr Dmytryshyn
Hi, Ian. Thank you for the tips.

On Mon, Jul 13, 2015 at 12:04 PM, Ian Campbell  wrote:
> There is an additional quirk for a 1:1 mapped dom0 which is that we
> don't actually decrease reservation when ballooning, but keep the 1:1
> mfn in anticipation of ballooning it back in later.
Currently we have this quirk enabled in DomD (the driver domain).

> If you can't arrange to use already ballooned buffers for your DMA
> buffer then you will need to manually balloon it out before and balloon
> it back in later.
I've tried this and everything works (I can map and then unmap memory in both
directions: DomU -> DomD and DomD -> DomU).

> You may also want to extend the dom0 1:1 quirk described above to your
> 1:1 mapped domD.
Currently this quirk is enabled in DomD. In this case I can map memory from
DomU to DomD (as is done in all PV drivers). But if this quirk is
enabled in DomU, I can also map memory from DomD to DomU.



Re: [Xen-devel] Question about mapping between domains

2015-07-15 Thread Ian Campbell
On Wed, 2015-07-15 at 12:51 +0100, Stefano Stabellini wrote:
> On Wed, 15 Jul 2015, Oleksandr Dmytryshyn wrote:
> > Hi, Ian. Thank you for the response.
> > 
> > > Look at how the balloon driver does it, the hypercalls you want are
> > > XENMEM_(increase|decrease)_reservation.
> > I'll try to use those hypercalls.
> 
> In modern Linux kernels, you just need to call gnttab_alloc_pages
> (see drivers/xen/grant-table.c:gnttab_alloc_pages).

The problem here is grant mapping pages to fill an existing buffer which
is already allocated/supplied elsewhere (in the GPU stack, I suppose).




Re: [Xen-devel] Question about mapping between domains

2015-07-15 Thread Stefano Stabellini
On Wed, 15 Jul 2015, Oleksandr Dmytryshyn wrote:
> Hi, Ian. Thank you for the response.
> 
> > Look at how the balloon driver does it, the hypercalls you want are
> > XENMEM_(increase|decrease)_reservation.
> I'll try to use those hypercalls.

In modern Linux kernels, you just need to call gnttab_alloc_pages
(see drivers/xen/grant-table.c:gnttab_alloc_pages).
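
A minimal sketch of that pattern (assuming a kernel new enough to have
gnttab_alloc_pages/gnttab_free_pages; the helper name and the fixed batch
size here are illustrative, and error handling is simplified):

#include <linux/mm.h>
#include <xen/grant_table.h>

static int map_foreign_refs(grant_ref_t *refs, domid_t otherend,
                            struct page **pages, int nr)
{
        struct gnttab_map_grant_ref map[16];    /* assumes nr <= 16 */
        int i, err;

        /* The pages come already ballooned out, so grant mapping over
         * them loses no backing mfn. */
        err = gnttab_alloc_pages(nr, pages);
        if (err)
                return err;

        for (i = 0; i < nr; i++)
                gnttab_set_map_op(&map[i],
                                  (unsigned long)pfn_to_kaddr(page_to_pfn(pages[i])),
                                  GNTMAP_host_map, refs[i], otherend);

        err = gnttab_map_refs(map, NULL, pages, nr);
        if (err)
                gnttab_free_pages(nr, pages);   /* balloon them back in */
        return err;
}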



Re: [Xen-devel] Question about mapping between domains

2015-07-15 Thread Oleksandr Dmytryshyn
Hi, Ian. Thank you for the response.

> Look at how the balloon driver does it, the hypercalls you want are
> XENMEM_(increase|decrease)_reservation.
I'll try to use those hypercalls.



Re: [Xen-devel] Question about mapping between domains

2015-07-14 Thread Ian Campbell
On Tue, 2015-07-14 at 18:41 +0300, Oleksandr Dmytryshyn wrote:
> On Tue, Jul 14, 2015 at 6:31 PM, Oleksandr Dmytryshyn
>  wrote:
> >
> > Hi, Ian. Thank you for the response.
> >
> > We currently have 3 kernels: a thin Dom0 (privileged), DomD (a privileged
> > driver domain), and DomU (unprivileged).
> >
> > On Mon, Jul 13, 2015 at 12:04 PM, Ian Campbell  
> > wrote:
> > > The way we deal with this elsewhere in the kernel is that we only ever
> > > do grant mappings over ballooned out pages, which are allocated via
> > > gnttab_alloc_pages. That way when they are unmapped the page is expected
> > > to be empty and no backing mfn is lost. The page can then subsequently
> > > be ballooned back in as normal.
> > We cannot use this approach because our DRM driver has already allocated the
> > memory which will be mapped later.
> >
> > > There is an additional quirk for a 1:1 mapped dom0 which is that we
> > > don't actually decrease reservation when ballooning, but keep the 1:1
> > > mfn in anticipation of ballooning it back in later.
> > Could you please tell me a bit more about this quirk? How can it be
> > enabled?
> >
> > > If you can't arrange to use already ballooned buffers for your DMA
> > > buffer then you will need to manually balloon it out before and balloon
> > > it back in later.
> > This is my case. I'll try to do this.
> Here is one question.
> Could anybody tell me how to manually balloon a page in/out?

Look at how the balloon driver does it, the hypercalls you want are
XENMEM_(increase|decrease)_reservation.
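
Roughly, the balloon driver's use of those hypercalls boils down to
something like the sketch below (the helper names are mine, error handling
is simplified, and on an auto-translated ARM guest the frame here is a
gpfn):

#include <linux/errno.h>
#include <xen/interface/memory.h>
#include <asm/xen/hypercall.h>

/* Balloon one frame out: hand its backing page back to Xen. */
static int balloon_out_one(xen_pfn_t frame)
{
        struct xen_memory_reservation r = {
                .nr_extents   = 1,
                .extent_order = 0,
                .domid        = DOMID_SELF,
        };

        set_xen_guest_handle(r.extent_start, &frame);
        return HYPERVISOR_memory_op(XENMEM_decrease_reservation, &r) == 1
                ? 0 : -EBUSY;
}

/* Balloon it back in: ask Xen to populate the frame again. */
static int balloon_in_one(xen_pfn_t frame)
{
        struct xen_memory_reservation r = {
                .nr_extents   = 1,
                .extent_order = 0,
                .domid        = DOMID_SELF,
        };

        set_xen_guest_handle(r.extent_start, &frame);
        return HYPERVISOR_memory_op(XENMEM_increase_reservation, &r) == 1
                ? 0 : -ENOMEM;
}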

Ian.




Re: [Xen-devel] Question about mapping between domains

2015-07-14 Thread Ian Campbell
On Tue, 2015-07-14 at 18:31 +0300, Oleksandr Dmytryshyn wrote:
> > There is an additional quirk for a 1:1 mapped dom0 which is that we
> > don't actually decrease reservation when ballooning, but keep the 1:1
> > mfn in anticipation of ballooning it back in later.
> Could you please tell me a bit more about this quirk? How can it be
> enabled?

It's enabled by the same dom0_11_mapping which the dom0 domain_builder
uses; look for uses of is_domain_direct_mapped, in particular the ones
in xen/common/memory.c.
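
The direct-mapped branch there has roughly this shape (a paraphrase of the
idea, not a verbatim copy of populate_physmap()):

    if ( is_domain_direct_mapped(d) )
    {
        /* 1:1 domain: the "new" frame is not an arbitrary free page but
         * the mfn equal to the requested gpfn, so a balloon-out/balloon-in
         * cycle preserves the 1:1 layout. */
        mfn = gpfn;
        page = mfn_to_page(mfn);
        if ( !get_page(page, d) )   /* the frame must still belong to d */
            goto out;
    }
    else
        page = alloc_domheap_pages(d, a->extent_order, a->memflags);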

Ian.




Re: [Xen-devel] Question about mapping between domains

2015-07-14 Thread Oleksandr Dmytryshyn
On Tue, Jul 14, 2015 at 6:31 PM, Oleksandr Dmytryshyn
 wrote:
>
> Hi, Ian. Thank you for the response.
>
> We currently have 3 kernels: a thin Dom0 (privileged), DomD (a privileged
> driver domain), and DomU (unprivileged).
>
> On Mon, Jul 13, 2015 at 12:04 PM, Ian Campbell  
> wrote:
> > The way we deal with this elsewhere in the kernel is that we only ever
> > do grant mappings over ballooned out pages, which are allocated via
> > gnttab_alloc_pages. That way when they are unmapped the page is expected
> > to be empty and no backing mfn is lost. The page can then subsequently
> > be ballooned back in as normal.
> We cannot use this approach because our DRM driver has already allocated the
> memory which will be mapped later.
>
> > There is an additional quirk for a 1:1 mapped dom0 which is that we
> > don't actually decrease reservation when ballooning, but keep the 1:1
> > mfn in anticipation of ballooning it back in later.
> Could you please tell me a bit more about this quirk? How can it be
> enabled?
>
> > If you can't arrange to use already ballooned buffers for your DMA
> > buffer then you will need to manually balloon it out before and balloon
> > it back in later.
> This is my case. I'll try to do this.
Here is one question.
Could anybody tell me how to manually balloon a page in/out?

> > You may also want to extend the dom0 1:1 quirk described above to your
> > 1:1 mapped domD.
> I will certainly do this.
>
> > If you have sufficient control over/knowledge of the domD IPA space then
> > you could also try and arrange that the region used for these mappings
> > does not correspond to any real RAM in the guest (i.e. stick it in an
> > MMIO hole). That depends on you never needing to find an associated
> > struct page though, which will depend on your use case.
> I will certainly do this.



Re: [Xen-devel] Question about mapping between domains

2015-07-14 Thread Oleksandr Dmytryshyn
Hi, Ian. Thank you for the response.

We currently have 3 kernels: a thin Dom0 (privileged), DomD (a privileged
driver domain), and DomU (unprivileged).

On Mon, Jul 13, 2015 at 12:04 PM, Ian Campbell  wrote:
> The way we deal with this elsewhere in the kernel is that we only ever
> do grant mappings over ballooned out pages, which are allocated via
> gnttab_alloc_pages. That way when they are unmapped the page is expected
> to be empty and no backing mfn is lost. The page can then subsequently
> be ballooned back in as normal.
We cannot use this approach because our DRM driver has already allocated the
memory which will be mapped later.

> There is an additional quirk for a 1:1 mapped dom0 which is that we
> don't actually decrease reservation when ballooning, but keep the 1:1
> mfn in anticipation of ballooning it back in later.
Could you please tell me a bit more about this quirk? How can it be
enabled?

> If you can't arrange to use already ballooned buffers for your DMA
> buffer then you will need to manually balloon it out before and balloon
> it back in later.
This is my case. I'll try to do this.

> You may also want to extend the dom0 1:1 quirk described above to your
> 1:1 mapped domD.
I will certainly do this.

> If you have sufficient control over/knowledge of the domD IPA space then
> you could also try and arrange that the region used for these mappings
> does not correspond to any real RAM in the guest (i.e. stick it in an
> MMIO hole). That depends on you never needing to find an associated
> struct page though, which will depend on your use case.
I will certainly do this.


Oleksandr Dmytryshyn | Product Engineering and Development
GlobalLogic
M +38.067.382.2525
www.globallogic.com

http://www.globallogic.com/email_disclaimer.txt



Re: [Xen-devel] Question about mapping between domains

2015-07-13 Thread Ian Campbell
On Thu, 2015-07-09 at 16:31 +0300, Oleksandr Dmytryshyn wrote:
> I have some questions:
> 1. Is this a correct solution?
> 2. Can this solution be considered normal (not a hack)?
> 3. If not, could anybody help me implement this the right way?

The way we deal with this elsewhere in the kernel is that we only ever
do grant mappings over ballooned out pages, which are allocated via
gnttab_alloc_pages. That way when they are unmapped the page is expected
to be empty and no backing mfn is lost. The page can then subsequently
be ballooned back in as normal.
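
The unmap half is symmetric: since the pages were ballooned out to begin
with, unmapping leaves them empty and they can simply be handed back. A
sketch under the same assumptions as the gnttab_alloc_pages sketch earlier
in the thread (the fixed batch size is illustrative):

#include <linux/bug.h>
#include <linux/mm.h>
#include <xen/grant_table.h>

static void unmap_foreign_refs(grant_handle_t *handles,
                               struct page **pages, int nr)
{
        struct gnttab_unmap_grant_ref unmap[16];        /* assumes nr <= 16 */
        int i;

        for (i = 0; i < nr; i++)
                gnttab_set_unmap_op(&unmap[i],
                                    (unsigned long)pfn_to_kaddr(page_to_pfn(pages[i])),
                                    GNTMAP_host_map, handles[i]);

        /* After this the pages hold no foreign mfn any more... */
        if (gnttab_unmap_refs(unmap, NULL, pages, nr))
                BUG();

        /* ...so they can go back to the ballooned-page pool. */
        gnttab_free_pages(nr, pages);
}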

There is an additional quirk for a 1:1 mapped dom0 which is that we
don't actually decrease reservation when ballooning, but keep the 1:1
mfn in anticipation of ballooning it back in later.

If you can't arrange to use already ballooned buffers for your DMA
buffer then you will need to manually balloon it out before and balloon
it back in later.

You may also want to extend the dom0 1:1 quirk described above to your
1:1 mapped domD.

If you have sufficient control over/knowledge of the domD IPA space then
you could also try and arrange that the region used for these mappings
does not correspond to any real RAM in the guest (i.e. stick it in an
MMIO hole). That depends on you never needing to find an associated
struct page though, which will depend on your use case.

Ian.




[Xen-devel] Question about mapping between domains

2015-07-09 Thread Oleksandr Dmytryshyn
Hi to all.

I'm trying to map and then unmap some memory from one domain to another,
for example from DomU to DomD. DomU is an unprivileged domain; DomD is a
privileged driver domain, and DomD is mapped 1:1. I use the typical
approach: allocate grant references and grant foreign access in DomU, then
map by grant references in DomD. Afterwards I unmap the mapped memory.

I want to map/unmap memory onto an existing buffer in DomD, and this
map/unmap procedure should be done many times. I've used the virtual block
device (VBD) driver as a reference, but there is a difference compared to
the VBD driver: I use a buffer which was previously allocated in another
driver (the DRM driver). I need to map a DRM dumb buffer from DomU into
DomD, whereas the VBD backend driver uses pages taken from
__get_free_pages().

Here is my mapping code (in DomD):

/* map dumb fb */
paddr = cma_obj->paddr;
for (i = 0; i < n_mfns; i++) {
        cur_pfn = __phys_to_pfn(paddr);
        vaddr = (unsigned long)pfn_to_kaddr(cur_pfn);

        pages_mfns[i] = pfn_to_page(cur_pfn);

        gnttab_set_map_op(&map_mfns[i], vaddr, GNTMAP_host_map,
                          gnt_mfns[i], args->fe_domid);

        paddr += PAGE_SIZE;
}
ret = gnttab_map_refs(map_mfns, NULL, pages_mfns, n_mfns);
BUG_ON(ret);

Where 'cma_obj' is a real object allocated in the DRM driver.

After mapping, everything works fine.

Here is my unmapping code (in DomD):

paddr = cma_obj->paddr;
cur_idx = 0;
for (i = 0; i < n_mfns; i++) {
        if (handles_mfns[i] == DRMFRONT_INVALID_HANDLE) {
                /* for now */
                dev_err(dev->dev,
                        "invalid handle[%d] -- could not use it\n", i);
                continue;
        }

        gnttab_set_unmap_op(&unmap_mfns[cur_idx],
                            (unsigned long)phys_to_virt(paddr),
                            GNTMAP_host_map,
                            handles_mfns[i]);

        handles_mfns[i] = DRMFRONT_INVALID_HANDLE;

        cur_idx++;
        paddr += PAGE_SIZE;

        if (cur_idx == MAX_MAP_OP_COUNT || i == n_mfns - 1) {
                ret = gnttab_unmap_refs(unmap_mfns, NULL,
                                        &pages_mfns[i + 1 - cur_idx],
                                        cur_idx);
                BUG_ON(ret);

                cur_idx = 0;
        }
}


The following crash appears after the unmap (in DomD):

Unhandled fault: terminal exception (0x002) at 0xcdbfb000
Internal error: : 2 [#1] PREEMPT SMP ARM
CPU: 1 PID: 853 Comm: drmback Not tainted 
3.14.33-0-ivi-arm-rcar-m2-rt31-00060-g653c5ff-dirty #173
task: cfa9d800 ti: ce298000 task.ti: ce298000
PC is at __copy_from_user+0xcc/0x3b0
LR is at 0x6
pc : []lr : [<0006>]psr: 0013
sp : ce299ef4  ip : 001c  fp : ce299f44
r10: b652e9a4  r9 : ce298000  r8 : 0004
r7 : cdbfb000  r6 : cfaa3580  r5 : b652e9a4  r4 : 0004
r3 :   r2 : ffe4  r1 : b652e9a8  r0 : cdbfb000
Flags: nzcv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment user
Control: 10c5307d  Table: 5e23806a  DAC: 0015
Process drmback (pid: 853, stack limit = 0xce298240)
Stack: (0xce299ef4 to 0xce29a000)
9ee0:  b652e9a4 cfaa3580 cdbfb000
9f00: 0004 cdbfb000 0004  0004 c01efdec ce299f78 ce228700
9f20: 0004 b652e9a4 ce299f78 0004 ce298000 b652e9a4 ce299f74 ce299f48
9f40: c00ca158 c01efd88 c00e339c c00e28d8   ce228700 ce228701
9f60: 0004 b652e9a4 ce299fa4 ce299f78 c00ca2cc c00ca094  
9f80: 00018208 0001 b652eb2c 0004 c000f944   ce299fa8
9fa0: c000f7c0 c00ca294 00018208 0001 0006 b652e9a4 0004 b652e9a4
9fc0: 00018208 0001 b652eb2c 0004 0002   b652e9bc
9fe0:  b652e998 b6ea0f94 b6ea0fa4 8010 0006 18140681 076136f5
Backtrace: 
[] (evtchn_write) from [] (vfs_write+0xd0/0x17c)
 r10:b652e9a4 r9:ce298000 r8:0004 r7:ce299f78 r6:b652e9a4 r5:0004
 r4:ce228700 r3:ce299f78
[] (vfs_write) from [] (SyS_write+0x44/0x84)
 r10:b652e9a4 r8:0004 r7:ce228701 r6:ce228700 r5: r4:
[] (SyS_write) from [] (ret_fast_syscall+0x0/0x30)
 r10: r8:c000f944 r7:0004 r6:b652eb2c r5:0001 r4:00018208
Code: e4803004 e4804004 e4805004 e4806004 (e4807004) 
---[ end trace 0002 ]---
[ cut here ]
Unhandled fault: terminal exception (0x002) at 0xcd7fc000
Internal error: : 2 [#2] PREEMPT SMP ARM
CPU: 1 PID: 852 Comm: drmback Tainted: G  D W
3.14.33-0-ivi-arm-rcar-m2-rt31-00060-g653c5ff-dirty #173
task: cfa9ee00 ti: ce28a000 task.ti: ce28a0