On Tue, 2012-10-16 at 10:46 -0700, Mukesh Rathor wrote:
> On Tue, 16 Oct 2012 17:27:01 +0100
> Ian Campbell wrote:
>
> > On Fri, 2012-10-12 at 09:57 +0100, Ian Campbell wrote:
> > > > +int xen_unmap_domain_mfn_range(struct vm_area_struct *vma)
> > > > +{
> > > + int numpgs = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
On Tue, 16 Oct 2012 17:27:01 +0100
Ian Campbell wrote:
> On Fri, 2012-10-12 at 09:57 +0100, Ian Campbell wrote:
> > > +int xen_unmap_domain_mfn_range(struct vm_area_struct *vma)
> > > +{
> > > + int numpgs = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
> > + struct page **pages = vma ? vma->vm_private_data : NULL;
On Fri, 2012-10-12 at 09:57 +0100, Ian Campbell wrote:
> > +int xen_unmap_domain_mfn_range(struct vm_area_struct *vma)
> > +{
> > + int numpgs = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
> > + struct page **pages = vma ? vma->vm_private_data : NULL;
>
> I thought we agreed to keep u
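
As an aside for readers without the full patch: below is a minimal sketch of how a helper with the declarations quoted above might be completed. The xen_feature(XENFEAT_auto_translated_physmap) test and free_xenballooned_pages() are standard kernel/Xen interfaces; the per-page helper pvh_clear_foreign_p2m() is purely hypothetical, standing in for whatever p2m-clearing hypercall the real patch issues, and the sketch assumes vma is non-NULL and that the backing pages came from the balloon driver at map time.

    #include <linux/mm.h>
    #include <xen/features.h>
    #include <xen/balloon.h>

    /* Hypothetical helper: ask Xen to drop one frame from our p2m. */
    extern void pvh_clear_foreign_p2m(unsigned long pfn);

    /*
     * Sketch only: tear down a privcmd foreign mapping in an auto-translated
     * (PVH) guest.  The pages backing the VMA are assumed to have been
     * allocated with alloc_xenballooned_pages() at map time and stashed in
     * vma->vm_private_data.
     */
    int xen_unmap_domain_mfn_range(struct vm_area_struct *vma)
    {
        int numpgs = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
        struct page **pages = vma->vm_private_data;
        int i;

        /* Nothing to do for a classic PV guest, or if nothing was mapped. */
        if (!pages || !xen_feature(XENFEAT_auto_translated_physmap))
            return 0;

        /* Drop each foreign frame from our p2m (hypothetical helper). */
        for (i = 0; i < numpgs; i++)
            pvh_clear_foreign_p2m(page_to_pfn(pages[i]));

        /* Hand the now-unbacked pages back to the balloon driver. */
        free_xenballooned_pages(numpgs, pages);
        return 0;
    }

Ballooned pages are used here because they give an auto-translated guest empty gpfn slots to hang the foreign frames on while the mapping exists.
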
On Fri, 12 Oct 2012 09:57:56 +0100
Ian Campbell wrote:
> On Thu, 2012-10-11 at 22:58 +0100, Mukesh Rathor wrote:
> > @@ -2177,8 +2210,19 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
> >
> > void __init xen_init_mmu_ops(void)
> > {
> > - x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
On Thu, 2012-10-11 at 22:58 +0100, Mukesh Rathor wrote:
> @@ -2177,8 +2210,19 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
>
> void __init xen_init_mmu_ops(void)
> {
> - x86_init.mapping.pagetable_reserve = xen_mapping_pagetable_reserve;
> x86_init.paging.pageta
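
The hunk header above (-2177,8 +2210,19) shows xen_init_mmu_ops() growing by roughly a dozen lines while the x86_init.mapping.pagetable_reserve override is removed or moved; the added lines themselves are not visible in these excerpts. A plausible shape, consistent with the "PVH uses mostly native mmu ops" changelog quoted below, is an early return for auto-translated guests. The following is a sketch under that assumption, not the actual hunk; the referenced statics (xen_pagetable_init, xen_mmu_ops, dummy_mapping) live in arch/x86/xen/mmu.c, and field names are taken from kernels of that era.

    #include <linux/string.h>
    #include <asm/paravirt.h>
    #include <asm/x86_init.h>
    #include <xen/features.h>

    void __init xen_init_mmu_ops(void)
    {
        x86_init.paging.pagetable_init = xen_pagetable_init;

        /* Assumption: PVH is detected via the auto-translated-physmap
         * feature and keeps the native MMU ops, so the PV-only
         * overrides below are skipped entirely. */
        if (xen_feature(XENFEAT_auto_translated_physmap))
            return;

        /* Existing PV-only setup continues as before. */
        pv_mmu_ops = xen_mmu_ops;

        memset(dummy_mapping, 0xff, PAGE_SIZE);
    }
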
PVH: This patch implements the MMU changes for PVH. First, the set/clear
mmio pte function makes a hypercall to update the p2m in Xen with a 1:1
mapping. PVH mostly uses native mmu ops. Two local functions are
introduced to add to the xen physmap for the xen remap interface. A xen unmap
interface is introduced so t
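
To make the "add to xen physmap" helpers concrete: on an auto-translated guest, remapping another domain's frame means asking Xen to install that foreign frame in our own p2m at the gpfn of a page we reserved locally (typically via the balloon driver). Below is a minimal sketch modelled on the XENMEM_add_to_physmap_range / XENMAPSPACE_gmfn_foreign interface that later mainline kernels use for this; whether this particular patch issues exactly that hypercall is not visible from these excerpts, and the helper name is invented for illustration.

    #include <xen/interface/xen.h>
    #include <xen/interface/memory.h>
    #include <asm/xen/hypercall.h>
    #include <asm/xen/interface.h>

    /*
     * Sketch only: install one frame of @foreign_domid into this guest's
     * p2m at local @gpfn.  A real implementation would batch frames and
     * unwind partial failures.
     */
    static int pvh_add_foreign_to_physmap(domid_t foreign_domid,
                                          xen_ulong_t fgmfn, /* frame in the foreign guest */
                                          xen_pfn_t gpfn)    /* where it appears locally   */
    {
        int err = 0, rc;
        struct xen_add_to_physmap_range xatp = {
            .domid = DOMID_SELF,            /* update our own p2m...     */
            .foreign_domid = foreign_domid, /* ...with the foreign frame */
            .space = XENMAPSPACE_gmfn_foreign,
            .size = 1,
        };

        set_xen_guest_handle(xatp.idxs, &fgmfn);
        set_xen_guest_handle(xatp.gpfns, &gpfn);
        set_xen_guest_handle(xatp.errs, &err);

        rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap_range, &xatp);
        return rc ? rc : err;
    }

The unmap interface mentioned above is then the inverse operation: drop each foreign frame from the p2m again and hand the backing pages back to the balloon driver, along the lines of the xen_unmap_domain_mfn_range() sketch earlier in the thread.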