On Tue, 2016-03-22 at 12:48 +1100, Alexey Kardashevskiy wrote:
>
> I suppose a GPU from guest1 could trigger DMA from the NPU to guest2 memory.
> Which puts a constraint on management tools not to pass an NPU without its
> GPU counterpart.
Management tools will not be taught such constraints. The plan always was to
make sure they are in the same group. So they should be.

> The host can be affected as bypass is not disabled on the NPU when the GPU
> is taken by VFIO, I'll fix this.
>
> >> If I put them in the same group as the GPUs, I would have to have an
> >> IODA2-linked-to-NPU bridge type with different iommu_table_group_ops,
> >> or have multiple hacks everywhere in IODA2 to enable/disable bypass,
> >> etc.
> >
> > Well.. I suspect it would mean no longer having a 1:1 correspondence
> > between user-visible IOMMU groups and the internal iommu_table.
>
> Right. They can share the table too...
>
> Right now each GPU sits on a separate PHB and has its own PE. And all
> NPUs sit on a separate PHB, and each pair of NPUs (the 2 links of the same
> GPU) gets a PE.
>
> So we have separate PEs (struct pnv_ioda_pe) already, each with its own
> iommu_table_group_ops carrying all these VFIO IOMMU callbacks. So to make
> this all appear as one IOMMU group in sysfs, I would need to stop embedding
> iommu_table_group in pnv_ioda_pe and make it a pointer with reference
> counting, etc. Quite a massive change...

Or you just put a quirk flag of some sort and a pointer to the "linked"
PE... sometimes that's a lot easier than lifting up the whole
infrastructure.

> >>>> ---
> >>>> arch/powerpc/platf

_______________________________________________
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev