On Thursday, May 26, 2016 11:57 PM, Jan Beulich wrote:
> >>> "Xu, Quan" 05/26/16 12:38 PM >>>
> >On May 25, 2016 4:30 PM, Jan Beulich wrote:
> >> The patch getting too large is easy to deal with: Split it at a
> >> reasonable boundary.
> >
> >Jan,
> >If I follow the below rule, I need to merge most of the patches into
> >this one. I can't find a reasonable boundary.
> >I recall your suggestion: top one first, then low level o
On May 23, 2016 9:31 PM, Jan Beulich wrote:
> >>> On 18.05.16 at 10:08, wrote:
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -1391,13 +1399,26 @@ int domain_context_mapping_one(
> > spin_unlock(&iommu->lock);
> >
> > /* Context entry
On May 25, 2016 4:30 PM, Jan Beulich wrote:
> >>> On 25.05.16 at 10:04, wrote:
> > On May 23, 2016 11:43 PM, Jan Beulich wrote:
> >> >>> On 23.05.16 at 17:22, wrote:
> >> > On May 23, 2016 9:31 PM, Jan Beulich wrote:
> >> >> >>> On 18.05.16 at 10:08, wrote:
> >> >> > --- a/xen/drivers/passthrough/vtd/iommu.c
> >> >> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> >> >> > @@ -557,14 +557,16 @@ static void iommu_flush_all(void)
> >> >> >         }
> >> >> >     }
> >> >> >
> >> >> > -static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
> >> >> > -int dma_old_
The value propagated from the IOMMU flush interfaces may be positive,
which indicates that the caller needs to flush the cache; it is not one
of the failure values.
When the propagated value is positive, this patch fixes the flush issue
as follows:
  - call iommu_flush_write_buffer() to flush the cache.
  - return zero.
Signed-of