RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-12-10 Thread bharat.bhus...@freescale.com


 -Original Message-
 From: Alex Williamson [mailto:alex.william...@redhat.com]
 Sent: Tuesday, December 10, 2013 11:23 AM
 To: Bhushan Bharat-R65777
 Cc: Wood Scott-B07421; linux-...@vger.kernel.org; ag...@suse.de; Yoder Stuart-
 B08248; io...@lists.linux-foundation.org; bhelg...@google.com; linuxppc-
 d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
 Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
 
 On Tue, 2013-12-10 at 05:37 +, bharat.bhus...@freescale.com wrote:
 
   -Original Message-
   From: Alex Williamson [mailto:alex.william...@redhat.com]
   Sent: Saturday, December 07, 2013 1:00 AM
   To: Wood Scott-B07421
   Cc: Bhushan Bharat-R65777; linux-...@vger.kernel.org; ag...@suse.de;
   Yoder Stuart-B08248; io...@lists.linux-foundation.org;
   bhelg...@google.com; linuxppc- d...@lists.ozlabs.org;
   linux-ker...@vger.kernel.org
   Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
   IOMMU (PAMU)
  
   On Fri, 2013-12-06 at 12:59 -0600, Scott Wood wrote:
On Thu, 2013-12-05 at 22:11 -0600, Bharat Bhushan wrote:

  -Original Message-
  From: Wood Scott-B07421
  Sent: Friday, December 06, 2013 5:52 AM
  To: Bhushan Bharat-R65777
  Cc: Alex Williamson; linux-...@vger.kernel.org; ag...@suse.de;
  Yoder Stuart- B08248; io...@lists.linux-foundation.org;
  bhelg...@google.com; linuxppc- d...@lists.ozlabs.org;
  linux-ker...@vger.kernel.org
  Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for
  Freescale IOMMU (PAMU)
 
  On Thu, 2013-11-28 at 03:19 -0600, Bharat Bhushan wrote:
  
-Original Message-
From: Bhushan Bharat-R65777
Sent: Wednesday, November 27, 2013 9:39 PM
To: 'Alex Williamson'
Cc: Wood Scott-B07421; linux-...@vger.kernel.org;
ag...@suse.de; Yoder Stuart- B08248;
io...@lists.linux-foundation.org; bhelg...@google.com;
linuxppc- d...@lists.ozlabs.org;
linux-ker...@vger.kernel.org
Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for
Freescale IOMMU (PAMU)
   
If we just provide the size of MSI bank to userspace then
userspace cannot do anything wrong.
  
   So userspace does not know address, so it cannot mmap and
   cause any
  interference by directly reading/writing.
 
  That's security through obscurity...  Couldn't the malicious
  user find out the address via other means, such as
  experimentation on another system over which they have full
  control?  What would happen if the user reads from their
  device's PCI config space?  Or gets the information via some
  back door in the PCI device they own?  Or pokes throughout the
  address space looking for something that
   generates an interrupt to its own device?

 So how to solve this problem, Any suggestion ?

 We have to map one window in PAMU for MSIs and a malicious user
 can ask its device to do DMA to MSI window region with any pair
 of address and data, which can lead to unexpected MSIs in system?
   
I don't think there are any solutions other than to limit each
bank to one user, unless the admin turns some knob that says
they're OK with the partial loss of isolation.
  
   Even if the admin does opt-in to an allow_unsafe_interrupts options,
   it should still be reasonably difficult for one guest to interfere
   with the other.  I don't think we want to rely on the blind luck of
   making the full MSI bank accessible to multiple guests and hoping they 
   don't
 step on each other.
 
  Not sure how to solve in this case (sharing MSI page)
 
That probably means that vfio needs to manage the space rather than the
 guest.
 
  What you mean by  vfio needs to manage the space rather than the guest?
 
 I mean there needs to be some kernel component managing the contents of the 
 MSI
 page rather than just handing it out to the user and hoping for the best.  The
 user API also needs to remain the same whether the user has the MSI page
 exclusively or it's shared with others (kernel or users).  Thanks,

We have a limited number of MSI banks, so we cannot give a dedicated MSI bank
to each VM.
Below is a summary of the MSI allocation/ownership model I am thinking of:

Option-1: Userspace aware of MSI banks
======================================
1) Userspace calls GET_MSI_REGION (requesting a number of MSI banks)
   - VFIO allocates the requested number of MSI banks;
   - If the allocation succeeds, return the number of banks;
   - If the allocation fails, check the opt-in flag set by the administrator
     (allow_unsafe_interrupts):
       allow_unsafe_interrupts == 0: not allowed to share; return failure
       (-ENODEV);
       otherwise share the MSI bank of another VM.

2) Userspace adjusts the geometry size according to the number of banks and
   calls SET_GEOMETRY

3) Userspace does DMA_MAP for its memory

4) Userspace does MSI_MAP for the number of banks
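A minimal userspace sketch of that flow, assuming hypothetical ioctl numbers
and structures (GET_MSI_REGION, SET_GEOMETRY and MSI_MAP are names proposed
above, not ioctls from the posted series):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>		/* for VFIO_TYPE, VFIO_BASE */

/* Hypothetical ioctls, for illustration only. */
#define VFIO_IOMMU_PAMU_GET_MSI_REGION	_IO(VFIO_TYPE, VFIO_BASE + 20)
#define VFIO_IOMMU_PAMU_SET_GEOMETRY	_IO(VFIO_TYPE, VFIO_BASE + 21)
#define VFIO_IOMMU_PAMU_MSI_MAP		_IO(VFIO_TYPE, VFIO_BASE + 22)

struct pamu_msi_region_req {
	uint32_t nr_banks;	/* in: banks requested; out: banks granted */
};

static int setup_msi(int container, uint32_t want, uint64_t guest_mem,
		     uint64_t bank_size)
{
	struct pamu_msi_region_req req = { .nr_banks = want };
	uint64_t aperture, iova;
	uint32_t i;

	/* 1) Request MSI banks; fails with -ENODEV when none are free and
	 * allow_unsafe_interrupts is not set. */
	if (ioctl(container, VFIO_IOMMU_PAMU_GET_MSI_REGION, &req) < 0)
		return -1;

	/* 2) Size the aperture for guest RAM plus one window per bank. */
	aperture = guest_mem + req.nr_banks * bank_size;
	if (ioctl(container, VFIO_IOMMU_PAMU_SET_GEOMETRY, &aperture) < 0)
		return -1;

	/* 3) DMA_MAP guest memory as usual (VFIO_IOMMU_MAP_DMA, elided). */

	/* 4) Map each granted MSI bank just after guest memory. */
	for (i = 0; i < req.nr_banks; i++) {
		iova = guest_mem + i * bank_size;
		if (ioctl(container, VFIO_IOMMU_PAMU_MSI_MAP, &iova) < 0)
			return -1;
	}
	return 0;
}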

Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-12-10 Thread Scott Wood
My e-mail address is scottw...@freescale.com, not
IMCEAEX-_O=MMS_OU=EXTERNAL+20+28FYDIBOHF25SPDLT
+29_CN=RECIPIENTS_CN=f0faac8d7e74473a9ee1c45b068d8...@namprd03.prod.outlook.com

On Tue, 2013-12-10 at 05:37 +, bharat.bhus...@freescale.com wrote:
 
  -Original Message-
  From: Wood Scott-B07421
  Sent: Saturday, December 07, 2013 12:55 AM
  To: Bhushan Bharat-R65777
  Cc: Alex Williamson; linux-...@vger.kernel.org; ag...@suse.de; Yoder Stuart-
  B08248; io...@lists.linux-foundation.org; bhelg...@google.com; linuxppc-
  d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
  Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  
  If the administrator does not opt into this partial loss of isolation, then 
  once
  you run out of MSI groups, new users should not be able to set up MSIs.
 
 So you mean vfio should fall back to legacy interrupts when we run out of MSI banks?

Yes, if the administrator hasn't granted permission to share.

-Scott





RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-12-09 Thread bharat.bhus...@freescale.com


 -Original Message-
 From: Wood Scott-B07421
 Sent: Saturday, December 07, 2013 12:55 AM
 To: Bhushan Bharat-R65777
 Cc: Alex Williamson; linux-...@vger.kernel.org; ag...@suse.de; Yoder Stuart-
 B08248; io...@lists.linux-foundation.org; bhelg...@google.com; linuxppc-
 d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
 Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
 
 On Thu, 2013-12-05 at 22:17 -0600, Bharat Bhushan wrote:
 
   -Original Message-
   From: Wood Scott-B07421
   Sent: Friday, December 06, 2013 5:31 AM
   To: Bhushan Bharat-R65777
   Cc: Alex Williamson; linux-...@vger.kernel.org; ag...@suse.de; Yoder
   Stuart- B08248; io...@lists.linux-foundation.org;
   bhelg...@google.com; linuxppc- d...@lists.ozlabs.org;
   linux-ker...@vger.kernel.org
   Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
   IOMMU (PAMU)
  
   On Sun, 2013-11-24 at 23:33 -0600, Bharat Bhushan wrote:
   
 -Original Message-
 From: Alex Williamson [mailto:alex.william...@redhat.com]
 Sent: Friday, November 22, 2013 2:31 AM
 To: Wood Scott-B07421
 Cc: Bhushan Bharat-R65777; linux-...@vger.kernel.org;
 ag...@suse.de; Yoder Stuart-B08248;
 io...@lists.linux-foundation.org; bhelg...@google.com; linuxppc-
 d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
 Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
 IOMMU (PAMU)

 On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
  They can interfere.
   
Want to be sure of how they can interfere?
  
   If more than one VFIO user shares the same MSI group, one of the
   users can send MSIs to another user, by using the wrong interrupt
   within the bank.  Unexpected MSIs could cause misbehavior or denial of
 service.
  
  With this hardware, the only way to prevent that
  is to make sure that a bank is not shared by multiple protection
 contexts.
  For some of our users, though, I believe preventing this is
  less important than the performance benefit.
   
So should we let this patch series in without protection?
  
   No, there should be some sort of opt-in mechanism similar to
   IOMMU-less VFIO -- but not the same exact one, since one is a much
   more serious loss of isolation than the other.
 
  Can you please elaborate opt-in mechanism?
 
 The system should be secure by default.  If the administrator wants to relax
 protection in order to accomplish some functionality, that should require an
 explicit request such as a write to a sysfs file.
 
 I think we need some sort of ownership model around the msi banks 
 then.
 Otherwise there's nothing preventing another userspace from
 attempting an MSI based attack on other users, or perhaps even
 on the host.  VFIO can't allow that.  Thanks,
   
We have very few (3 MSI bank on most of chips), so we can not
assign one to each userspace.
  
   That depends on how many users there are.
 
  What I think we can do is:
   - Reserve one MSI region for host. Host will not share MSI region with 
  Guest.
   - For upto 2 Guest (MAX msi with host - 1) give then separate MSI sub
  regions
   - Additional Guest will share MSI region with other guest.
 
  Any better suggestion are most welcome.
 
 If the administrator does not opt into this partial loss of isolation, then 
 once
 you run out of MSI groups, new users should not be able to set up MSIs.

So you mean vfio should fall back to legacy interrupts when we run out of MSI banks?

Thanks
-Bharat

 
 -Scott
 



RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-12-09 Thread bharat.bhus...@freescale.com


 -Original Message-
 From: Alex Williamson [mailto:alex.william...@redhat.com]
 Sent: Saturday, December 07, 2013 1:00 AM
 To: Wood Scott-B07421
 Cc: Bhushan Bharat-R65777; linux-...@vger.kernel.org; ag...@suse.de; Yoder
 Stuart-B08248; io...@lists.linux-foundation.org; bhelg...@google.com; 
 linuxppc-
 d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
 Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
 
 On Fri, 2013-12-06 at 12:59 -0600, Scott Wood wrote:
  On Thu, 2013-12-05 at 22:11 -0600, Bharat Bhushan wrote:
  
-Original Message-
From: Wood Scott-B07421
Sent: Friday, December 06, 2013 5:52 AM
To: Bhushan Bharat-R65777
Cc: Alex Williamson; linux-...@vger.kernel.org; ag...@suse.de;
Yoder Stuart- B08248; io...@lists.linux-foundation.org;
bhelg...@google.com; linuxppc- d...@lists.ozlabs.org;
linux-ker...@vger.kernel.org
Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
IOMMU (PAMU)
   
On Thu, 2013-11-28 at 03:19 -0600, Bharat Bhushan wrote:

  -Original Message-
  From: Bhushan Bharat-R65777
  Sent: Wednesday, November 27, 2013 9:39 PM
  To: 'Alex Williamson'
  Cc: Wood Scott-B07421; linux-...@vger.kernel.org;
  ag...@suse.de; Yoder Stuart- B08248;
  io...@lists.linux-foundation.org; bhelg...@google.com;
  linuxppc- d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
  Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for
  Freescale IOMMU (PAMU)
 
  If we just provide the size of MSI bank to userspace then
  userspace cannot do anything wrong.

 So userspace does not know address, so it cannot mmap and cause
 any
interference by directly reading/writing.
   
That's security through obscurity...  Couldn't the malicious user
find out the address via other means, such as experimentation on
another system over which they have full control?  What would
happen if the user reads from their device's PCI config space?  Or
gets the information via some back door in the PCI device they
own?  Or pokes throughout the address space looking for something that
 generates an interrupt to its own device?
  
   So how to solve this problem, Any suggestion ?
  
   We have to map one window in PAMU for MSIs and a malicious user can
   ask its device to do DMA to MSI window region with any pair of
   address and data, which can lead to unexpected MSIs in system?
 
  I don't think there are any solutions other than to limit each bank to
  one user, unless the admin turns some knob that says they're OK with
  the partial loss of isolation.
 
 Even if the admin does opt-in to an allow_unsafe_interrupts options, it should
 still be reasonably difficult for one guest to interfere with the other.  I
 don't think we want to rely on the blind luck of making the full MSI bank
 accessible to multiple guests and hoping they don't step on each other.

I am not sure how to solve this in the shared MSI page case.

  That probably means that vfio needs to manage the space rather than the 
 guest.

What do you mean by "vfio needs to manage the space rather than the guest"?

Thanks
-Bharat

 Thanks,
 
 Alex
 



Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-12-09 Thread Alex Williamson
On Tue, 2013-12-10 at 05:37 +, bharat.bhus...@freescale.com wrote:
 
  -Original Message-
  From: Alex Williamson [mailto:alex.william...@redhat.com]
  Sent: Saturday, December 07, 2013 1:00 AM
  To: Wood Scott-B07421
  Cc: Bhushan Bharat-R65777; linux-...@vger.kernel.org; ag...@suse.de; Yoder
  Stuart-B08248; io...@lists.linux-foundation.org; bhelg...@google.com; 
  linuxppc-
  d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
  Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  
  On Fri, 2013-12-06 at 12:59 -0600, Scott Wood wrote:
   On Thu, 2013-12-05 at 22:11 -0600, Bharat Bhushan wrote:
   
 -Original Message-
 From: Wood Scott-B07421
 Sent: Friday, December 06, 2013 5:52 AM
 To: Bhushan Bharat-R65777
 Cc: Alex Williamson; linux-...@vger.kernel.org; ag...@suse.de;
 Yoder Stuart- B08248; io...@lists.linux-foundation.org;
 bhelg...@google.com; linuxppc- d...@lists.ozlabs.org;
 linux-ker...@vger.kernel.org
 Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
 IOMMU (PAMU)

 On Thu, 2013-11-28 at 03:19 -0600, Bharat Bhushan wrote:
 
   -Original Message-
   From: Bhushan Bharat-R65777
   Sent: Wednesday, November 27, 2013 9:39 PM
   To: 'Alex Williamson'
   Cc: Wood Scott-B07421; linux-...@vger.kernel.org;
   ag...@suse.de; Yoder Stuart- B08248;
   io...@lists.linux-foundation.org; bhelg...@google.com;
   linuxppc- d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
   Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for
   Freescale IOMMU (PAMU)
  
   If we just provide the size of MSI bank to userspace then
   userspace cannot do anything wrong.
 
  So userspace does not know address, so it cannot mmap and cause
  any
 interference by directly reading/writing.

 That's security through obscurity...  Couldn't the malicious user
 find out the address via other means, such as experimentation on
 another system over which they have full control?  What would
 happen if the user reads from their device's PCI config space?  Or
 gets the information via some back door in the PCI device they
 own?  Or pokes throughout the address space looking for something that
  generates an interrupt to its own device?
   
So how to solve this problem, Any suggestion ?
   
We have to map one window in PAMU for MSIs and a malicious user can
ask its device to do DMA to MSI window region with any pair of
address and data, which can lead to unexpected MSIs in system?
  
   I don't think there are any solutions other than to limit each bank to
   one user, unless the admin turns some knob that says they're OK with
   the partial loss of isolation.
  
  Even if the admin does opt-in to an allow_unsafe_interrupts options, it 
  should
  still be reasonably difficult for one guest to interfere with the other.  I
  don't think we want to rely on the blind luck of making the full MSI bank
  accessible to multiple guests and hoping they don't step on each other.
 
 I am not sure how to solve this in the shared MSI page case.
 
   That probably means that vfio needs to manage the space rather than the 
  guest.
 
 What do you mean by "vfio needs to manage the space rather than the guest"?

I mean there needs to be some kernel component managing the contents of
the MSI page rather than just handing it out to the user and hoping for
the best.  The user API also needs to remain the same whether the user
has the MSI page exclusively or it's shared with others (kernel or
users).  Thanks,

Alex
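A minimal sketch of the kind of kernel-side management described above,
assuming a hypothetical per-bank ownership table (names and sizes are
illustrative, not from the series):

#include <errno.h>
#include <stddef.h>

#define IRQS_PER_BANK 32	/* illustrative; hardware-dependent */

struct msi_bank {
	void *owner[IRQS_PER_BANK];	/* NULL = interrupt slot free */
};

/* The kernel hands out one interrupt inside the bank; the user never
 * writes the MSI page directly, and the call looks the same whether
 * the bank is exclusive to this user or shared with others. */
static int msi_bank_alloc_irq(struct msi_bank *bank, void *user)
{
	int i;

	for (i = 0; i < IRQS_PER_BANK; i++) {
		if (bank->owner[i] == NULL) {
			bank->owner[i] = user;
			return i;
		}
	}
	return -ENOSPC;
}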



Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-12-06 Thread Scott Wood
On Thu, 2013-12-05 at 22:11 -0600, Bharat Bhushan wrote:
 
  -Original Message-
  From: Wood Scott-B07421
  Sent: Friday, December 06, 2013 5:52 AM
  To: Bhushan Bharat-R65777
  Cc: Alex Williamson; linux-...@vger.kernel.org; ag...@suse.de; Yoder Stuart-
  B08248; io...@lists.linux-foundation.org; bhelg...@google.com; linuxppc-
  d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
  Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
 
  On Thu, 2013-11-28 at 03:19 -0600, Bharat Bhushan wrote:
  
-Original Message-
From: Bhushan Bharat-R65777
Sent: Wednesday, November 27, 2013 9:39 PM
To: 'Alex Williamson'
Cc: Wood Scott-B07421; linux-...@vger.kernel.org; ag...@suse.de;
Yoder Stuart- B08248; io...@lists.linux-foundation.org;
bhelg...@google.com; linuxppc- d...@lists.ozlabs.org;
linux-ker...@vger.kernel.org
Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale
IOMMU (PAMU)
   
If we just provide the size of MSI bank to userspace then userspace
cannot do anything wrong.
  
   So userspace does not know address, so it cannot mmap and cause any
  interference by directly reading/writing.
 
  That's security through obscurity...  Couldn't the malicious user find out 
  the
  address via other means, such as experimentation on another system over 
  which
  they have full control?  What would happen if the user reads from their 
  device's
  PCI config space?  Or gets the information via some back door in the PCI 
  device
  they own?  Or pokes throughout the address space looking for something that
  generates an interrupt to its own device?
 
 So how do we solve this problem? Any suggestions?
 
 We have to map one window in PAMU for MSIs and a malicious user can ask
 its device to do DMA to MSI window region with any pair of address and
 data, which can lead to unexpected MSIs in system?

I don't think there are any solutions other than to limit each bank to
one user, unless the admin turns some knob that says they're OK with the
partial loss of isolation.

-Scott





Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-12-06 Thread Scott Wood
On Thu, 2013-12-05 at 22:17 -0600, Bharat Bhushan wrote:
 
  -Original Message-
  From: Wood Scott-B07421
  Sent: Friday, December 06, 2013 5:31 AM
  To: Bhushan Bharat-R65777
  Cc: Alex Williamson; linux-...@vger.kernel.org; ag...@suse.de; Yoder Stuart-
  B08248; io...@lists.linux-foundation.org; bhelg...@google.com; linuxppc-
  d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
  Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
 
  On Sun, 2013-11-24 at 23:33 -0600, Bharat Bhushan wrote:
  
-Original Message-
From: Alex Williamson [mailto:alex.william...@redhat.com]
Sent: Friday, November 22, 2013 2:31 AM
To: Wood Scott-B07421
Cc: Bhushan Bharat-R65777; linux-...@vger.kernel.org; ag...@suse.de;
Yoder Stuart-B08248; io...@lists.linux-foundation.org;
bhelg...@google.com; linuxppc- d...@lists.ozlabs.org;
linux-ker...@vger.kernel.org
Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
IOMMU (PAMU)
   
On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
 They can interfere.
  
   Want to be sure of how they can interfere?
 
  If more than one VFIO user shares the same MSI group, one of the users can 
  send
  MSIs to another user, by using the wrong interrupt within the bank.  
  Unexpected
  MSIs could cause misbehavior or denial of service.
 
 With this hardware, the only way to prevent that
 is to make sure that a bank is not shared by multiple protection 
 contexts.
 For some of our users, though, I believe preventing this is less
 important than the performance benefit.
  
   So should we let this patch series in without protection?
 
  No, there should be some sort of opt-in mechanism similar to IOMMU-less 
  VFIO --
  but not the same exact one, since one is a much more serious loss of 
  isolation
  than the other.
 
 Can you please elaborate on the opt-in mechanism?

The system should be secure by default.  If the administrator wants to
relax protection in order to accomplish some functionality, that should
require an explicit request such as a write to a sysfs file.
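For comparison, a minimal sketch of such a knob, modeled on the existing
allow_unsafe_interrupts module parameter of vfio_iommu_type1 (the PAMU-side
function name and check here are assumptions, not posted code):

#include <linux/module.h>
#include <linux/errno.h>

static bool allow_unsafe_interrupts;
module_param_named(allow_unsafe_interrupts, allow_unsafe_interrupts,
		   bool, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(allow_unsafe_interrupts,
		 "Allow multiple VFIO users to share an MSI bank (partial loss of isolation)");

/* Hypothetical check at MSI-bank setup time: sharing is refused unless
 * the admin wrote 1 to
 * /sys/module/<module>/parameters/allow_unsafe_interrupts. */
static int pamu_msi_bank_may_share(void)
{
	return allow_unsafe_interrupts ? 0 : -EPERM;
}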

I think we need some sort of ownership model around the msi banks then.
Otherwise there's nothing preventing another userspace from
attempting an MSI based attack on other users, or perhaps even on
the host.  VFIO can't allow that.  Thanks,
  
   We have very few (3 MSI bank on most of chips), so we can not assign
   one to each userspace.
 
  That depends on how many users there are.
 
 What I think we can do is:
  - Reserve one MSI region for host. Host will not share MSI region with Guest.
  - For upto 2 Guest (MAX msi with host - 1) give then separate MSI sub regions
  - Additional Guest will share MSI region with other guest.
 
 Any better suggestion are most welcome.

If the administrator does not opt into this partial loss of isolation,
then once you run out of MSI groups, new users should not be able to set
up MSIs.

-Scott





Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-12-06 Thread Alex Williamson
On Fri, 2013-12-06 at 12:59 -0600, Scott Wood wrote:
 On Thu, 2013-12-05 at 22:11 -0600, Bharat Bhushan wrote:
  
   -Original Message-
   From: Wood Scott-B07421
   Sent: Friday, December 06, 2013 5:52 AM
   To: Bhushan Bharat-R65777
   Cc: Alex Williamson; linux-...@vger.kernel.org; ag...@suse.de; Yoder 
   Stuart-
   B08248; io...@lists.linux-foundation.org; bhelg...@google.com; linuxppc-
   d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
   Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU 
   (PAMU)
  
   On Thu, 2013-11-28 at 03:19 -0600, Bharat Bhushan wrote:
   
 -Original Message-
 From: Bhushan Bharat-R65777
 Sent: Wednesday, November 27, 2013 9:39 PM
 To: 'Alex Williamson'
 Cc: Wood Scott-B07421; linux-...@vger.kernel.org; ag...@suse.de;
 Yoder Stuart- B08248; io...@lists.linux-foundation.org;
 bhelg...@google.com; linuxppc- d...@lists.ozlabs.org;
 linux-ker...@vger.kernel.org
 Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale
 IOMMU (PAMU)

 If we just provide the size of MSI bank to userspace then userspace
 cannot do anything wrong.
   
So userspace does not know address, so it cannot mmap and cause any
   interference by directly reading/writing.
  
   That's security through obscurity...  Couldn't the malicious user find 
   out the
   address via other means, such as experimentation on another system over 
   which
   they have full control?  What would happen if the user reads from their 
   device's
   PCI config space?  Or gets the information via some back door in the PCI 
   device
   they own?  Or pokes throughout the address space looking for something 
   that
   generates an interrupt to its own device?
  
  So how to solve this problem, Any suggestion ?
  
  We have to map one window in PAMU for MSIs and a malicious user can ask
  its device to do DMA to MSI window region with any pair of address and
  data, which can lead to unexpected MSIs in system?
 
 I don't think there are any solutions other than to limit each bank to
 one user, unless the admin turns some knob that says they're OK with the
 partial loss of isolation.

Even if the admin does opt in to an allow_unsafe_interrupts option, it
should still be reasonably difficult for one guest to interfere with the
other.  I don't think we want to rely on the blind luck of making the
full MSI bank accessible to multiple guests and hoping they don't step
on each other.  That probably means that vfio needs to manage the space
rather than the guest.  Thanks,

Alex



Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-12-06 Thread Scott Wood
On Fri, 2013-12-06 at 12:30 -0700, Alex Williamson wrote:
 On Fri, 2013-12-06 at 12:59 -0600, Scott Wood wrote:
  On Thu, 2013-12-05 at 22:11 -0600, Bharat Bhushan wrote:
   
-Original Message-
From: Wood Scott-B07421
Sent: Friday, December 06, 2013 5:52 AM
To: Bhushan Bharat-R65777
Cc: Alex Williamson; linux-...@vger.kernel.org; ag...@suse.de; Yoder 
Stuart-
B08248; io...@lists.linux-foundation.org; bhelg...@google.com; linuxppc-
d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU 
(PAMU)
   
On Thu, 2013-11-28 at 03:19 -0600, Bharat Bhushan wrote:

  -Original Message-
  From: Bhushan Bharat-R65777
  Sent: Wednesday, November 27, 2013 9:39 PM
  To: 'Alex Williamson'
  Cc: Wood Scott-B07421; linux-...@vger.kernel.org; ag...@suse.de;
  Yoder Stuart- B08248; io...@lists.linux-foundation.org;
  bhelg...@google.com; linuxppc- d...@lists.ozlabs.org;
  linux-ker...@vger.kernel.org
  Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale
  IOMMU (PAMU)
 
  If we just provide the size of MSI bank to userspace then userspace
  cannot do anything wrong.

 So userspace does not know address, so it cannot mmap and cause any
interference by directly reading/writing.
   
That's security through obscurity...  Couldn't the malicious user find 
out the
address via other means, such as experimentation on another system over 
which
they have full control?  What would happen if the user reads from their 
device's
PCI config space?  Or gets the information via some back door in the 
PCI device
they own?  Or pokes throughout the address space looking for something 
that
generates an interrupt to its own device?
   
   So how to solve this problem, Any suggestion ?
   
   We have to map one window in PAMU for MSIs and a malicious user can ask
   its device to do DMA to MSI window region with any pair of address and
   data, which can lead to unexpected MSIs in system?
  
  I don't think there are any solutions other than to limit each bank to
  one user, unless the admin turns some knob that says they're OK with the
  partial loss of isolation.
 
 Even if the admin does opt-in to an allow_unsafe_interrupts options, it
 should still be reasonably difficult for one guest to interfere with the
 other.  I don't think we want to rely on the blind luck of making the
 full MSI bank accessible to multiple guests and hoping they don't step
 on each other.  That probably means that vfio needs to manage the space
 rather than the guest.  Thanks,

Yes, the MSIs within a given bank would be allocated by the host kernel
in any case (presumably by the MSI driver, not VFIO itself).  This is
just about what happens if the MSI page is written to outside of the
normal mechanism.

-Scott





Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-12-05 Thread Scott Wood
On Sun, 2013-11-24 at 23:33 -0600, Bharat Bhushan wrote:
 
  -Original Message-
  From: Alex Williamson [mailto:alex.william...@redhat.com]
  Sent: Friday, November 22, 2013 2:31 AM
  To: Wood Scott-B07421
  Cc: Bhushan Bharat-R65777; linux-...@vger.kernel.org; ag...@suse.de; Yoder
  Stuart-B08248; io...@lists.linux-foundation.org; bhelg...@google.com; 
  linuxppc-
  d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
  Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
 
  On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
   On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
On Thu, 2013-11-21 at 11:20 +, Bharat Bhushan wrote:

  -Original Message-
  From: Alex Williamson [mailto:alex.william...@redhat.com]
  Sent: Thursday, November 21, 2013 12:17 AM
  To: Bhushan Bharat-R65777
  Cc: j...@8bytes.org; bhelg...@google.com; ag...@suse.de; Wood
  Scott-B07421; Yoder Stuart-B08248;
  io...@lists.linux-foundation.org; linux- p...@vger.kernel.org;
  linuxppc-dev@lists.ozlabs.org; linux- ker...@vger.kernel.org;
  Bhushan Bharat-R65777
  Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
  IOMMU (PAMU)
 
  Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each
  vfio user has $COUNT regions at their disposal exclusively)?

 Number of msi-bank count is system wide and not per aperture, But 
 will be
  setting windows for banks in the device aperture.
 So say if we are direct assigning 2 pci device (both have different 
 iommu
  group, so 2 aperture in iommu) to VM.
 Now qemu can make only one call to know how many msi-banks are there 
 but
  it must set sub-windows for all banks for both pci device in its respective
  aperture.
   
I'm still confused.  What I want to make sure of is that the banks
are independent per aperture.  For instance, if we have two separate
userspace processes operating independently and they both chose to
use msi bank zero for their device, that's bank zero within each
aperture and doesn't interfere.  Or another way to ask is can a
malicious user interfere with other users by using the wrong bank.
Thanks,
  
   They can interfere.
 
 Want to be sure of how they can interfere?

If more than one VFIO user shares the same MSI group, one of the users
can send MSIs to another user, by using the wrong interrupt within the
bank.  Unexpected MSIs could cause misbehavior or denial of service.

   With this hardware, the only way to prevent that
   is to make sure that a bank is not shared by multiple protection contexts.
   For some of our users, though, I believe preventing this is less
   important than the performance benefit.
 
 So should we let this patch series in without protection?

No, there should be some sort of opt-in mechanism similar to IOMMU-less
VFIO -- but not the same exact one, since one is a much more serious
loss of isolation than the other.

  I think we need some sort of ownership model around the msi banks then.
  Otherwise there's nothing preventing another userspace from attempting an 
  MSI
  based attack on other users, or perhaps even on the host.  VFIO can't allow
  that.  Thanks,
 
 We have very few (3 MSI bank on most of chips), so we can not assign
 one to each userspace.

That depends on how many users there are.

-Scott





Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-12-05 Thread Scott Wood
On Thu, 2013-11-28 at 03:19 -0600, Bharat Bhushan wrote:
 
  -Original Message-
  From: Bhushan Bharat-R65777
  Sent: Wednesday, November 27, 2013 9:39 PM
  To: 'Alex Williamson'
  Cc: Wood Scott-B07421; linux-...@vger.kernel.org; ag...@suse.de; Yoder 
  Stuart-
  B08248; io...@lists.linux-foundation.org; bhelg...@google.com; linuxppc-
  d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
  Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
 
 
 
   -Original Message-
   From: Alex Williamson [mailto:alex.william...@redhat.com]
   Sent: Monday, November 25, 2013 10:08 PM
   To: Bhushan Bharat-R65777
   Cc: Wood Scott-B07421; linux-...@vger.kernel.org; ag...@suse.de; Yoder
   Stuart- B08248; io...@lists.linux-foundation.org; bhelg...@google.com;
   linuxppc- d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
   Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU
   (PAMU)
  
   On Mon, 2013-11-25 at 05:33 +, Bharat Bhushan wrote:
   
 -Original Message-
 From: Alex Williamson [mailto:alex.william...@redhat.com]
 Sent: Friday, November 22, 2013 2:31 AM
 To: Wood Scott-B07421
 Cc: Bhushan Bharat-R65777; linux-...@vger.kernel.org;
 ag...@suse.de; Yoder Stuart-B08248;
 io...@lists.linux-foundation.org; bhelg...@google.com; linuxppc-
 d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
 Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
 IOMMU (PAMU)

 On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
  On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
   On Thu, 2013-11-21 at 11:20 +, Bharat Bhushan wrote:
   
 -Original Message-
 From: Alex Williamson [mailto:alex.william...@redhat.com]
 Sent: Thursday, November 21, 2013 12:17 AM
 To: Bhushan Bharat-R65777
 Cc: j...@8bytes.org; bhelg...@google.com; ag...@suse.de;
 Wood Scott-B07421; Yoder Stuart-B08248;
 io...@lists.linux-foundation.org; linux-
 p...@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; linux-
 ker...@vger.kernel.org; Bhushan Bharat-R65777
 Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for
 Freescale IOMMU (PAMU)

 Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie.
 each vfio user has $COUNT regions at their disposal 
 exclusively)?
   
Number of msi-bank count is system wide and not per
aperture, But will be
 setting windows for banks in the device aperture.
So say if we are direct assigning 2 pci device (both have
different iommu
 group, so 2 aperture in iommu) to VM.
Now qemu can make only one call to know how many msi-banks
are there but
 it must set sub-windows for all banks for both pci device in its
 respective aperture.
  
   I'm still confused.  What I want to make sure of is that the
   banks are independent per aperture.  For instance, if we have
   two separate userspace processes operating independently and
   they both chose to use msi bank zero for their device, that's
   bank zero within each aperture and doesn't interfere.  Or
   another way to ask is can a malicious user interfere with
   other users by
   using the wrong bank.
   Thanks,
 
  They can interfere.
   
Want to be sure of how they can interfere?
  
   What happens if more than one user selects the same MSI bank?
   Minimally, wouldn't that result in the IOMMU blocking transactions
   from the previous user once the new user activates their mapping?
 
  Yes and no; With current implementation yes but with a minor change no. 
  Later in
  this response I will explain how.
 
  
  With this hardware, the only way to prevent that
  is to make sure that a bank is not shared by multiple protection
  contexts.
  For some of our users, though, I believe preventing this is less
  important than the performance benefit.
   
So should we let this patch series in without protection?
  
   No.
  

 I think we need some sort of ownership model around the msi banks 
 then.
 Otherwise there's nothing preventing another userspace from
 attempting an MSI based attack on other users, or perhaps even on
 the host.  VFIO can't allow that.  Thanks,
   
We have very few (3 MSI bank on most of chips), so we can not assign
one to each userspace. What we can do is host and userspace does not
share a MSI bank while userspace will share a MSI bank.
  
   Then you probably need VFIO to own the MSI bank and program devices
   into it rather than exposing the MSI banks to userspace to let them have
  direct access.
 
  Overall idea of exposing the details of msi regions to userspace are
   1) User space can define the aperture size to fit MSI mapping in IOMMU.
   2) setup iova for a MSI banks; which is just after guest memory

RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-12-05 Thread Bharat Bhushan


 -Original Message-
 From: Wood Scott-B07421
 Sent: Friday, December 06, 2013 5:52 AM
 To: Bhushan Bharat-R65777
 Cc: Alex Williamson; linux-...@vger.kernel.org; ag...@suse.de; Yoder Stuart-
 B08248; io...@lists.linux-foundation.org; bhelg...@google.com; linuxppc-
 d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
 Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
 
 On Thu, 2013-11-28 at 03:19 -0600, Bharat Bhushan wrote:
 
   -Original Message-
   From: Bhushan Bharat-R65777
   Sent: Wednesday, November 27, 2013 9:39 PM
   To: 'Alex Williamson'
   Cc: Wood Scott-B07421; linux-...@vger.kernel.org; ag...@suse.de;
   Yoder Stuart- B08248; io...@lists.linux-foundation.org;
   bhelg...@google.com; linuxppc- d...@lists.ozlabs.org;
   linux-ker...@vger.kernel.org
   Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale
   IOMMU (PAMU)
  
  
  
-Original Message-
From: Alex Williamson [mailto:alex.william...@redhat.com]
Sent: Monday, November 25, 2013 10:08 PM
To: Bhushan Bharat-R65777
Cc: Wood Scott-B07421; linux-...@vger.kernel.org; ag...@suse.de;
Yoder
Stuart- B08248; io...@lists.linux-foundation.org;
bhelg...@google.com;
linuxppc- d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
IOMMU
(PAMU)
   
On Mon, 2013-11-25 at 05:33 +, Bharat Bhushan wrote:

  -Original Message-
  From: Alex Williamson [mailto:alex.william...@redhat.com]
  Sent: Friday, November 22, 2013 2:31 AM
  To: Wood Scott-B07421
  Cc: Bhushan Bharat-R65777; linux-...@vger.kernel.org;
  ag...@suse.de; Yoder Stuart-B08248;
  io...@lists.linux-foundation.org; bhelg...@google.com;
  linuxppc- d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
  Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for
  Freescale IOMMU (PAMU)
 
  On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
   On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
On Thu, 2013-11-21 at 11:20 +, Bharat Bhushan wrote:

  -Original Message-
  From: Alex Williamson
  [mailto:alex.william...@redhat.com]
  Sent: Thursday, November 21, 2013 12:17 AM
  To: Bhushan Bharat-R65777
  Cc: j...@8bytes.org; bhelg...@google.com;
  ag...@suse.de; Wood Scott-B07421; Yoder Stuart-B08248;
  io...@lists.linux-foundation.org; linux-
  p...@vger.kernel.org; linuxppc-dev@lists.ozlabs.org;
  linux- ker...@vger.kernel.org; Bhushan Bharat-R65777
  Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for
  Freescale IOMMU (PAMU)
 
  Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie.
  each vfio user has $COUNT regions at their disposal
 exclusively)?

 Number of msi-bank count is system wide and not per
 aperture, But will be
  setting windows for banks in the device aperture.
 So say if we are direct assigning 2 pci device (both
 have different iommu
  group, so 2 aperture in iommu) to VM.
 Now qemu can make only one call to know how many
 msi-banks are there but
  it must set sub-windows for all banks for both pci device in
  its respective aperture.
   
I'm still confused.  What I want to make sure of is that
the banks are independent per aperture.  For instance, if
we have two separate userspace processes operating
independently and they both chose to use msi bank zero for
their device, that's bank zero within each aperture and
doesn't interfere.  Or another way to ask is can a
malicious user interfere with other users by
using the wrong bank.
Thanks,
  
   They can interfere.

 Want to be sure of how they can interfere?
   
What happens if more than one user selects the same MSI bank?
Minimally, wouldn't that result in the IOMMU blocking transactions
from the previous user once the new user activates their mapping?
  
   Yes and no; With current implementation yes but with a minor change
   no. Later in this response I will explain how.
  
   
   With this hardware, the only way to prevent that
   is to make sure that a bank is not shared by multiple
   protection
   contexts.
   For some of our users, though, I believe preventing this is
   less important than the performance benefit.

 So should we let this patch series in without protection?
   
No.
   
 
  I think we need some sort of ownership model around the msi banks
 then.
  Otherwise there's nothing preventing another userspace from
  attempting an MSI based attack on other users, or perhaps even
  on the host.  VFIO can't allow that.  Thanks,

 We have very few (3 MSI bank on most of chips), so we can

RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-12-05 Thread Bharat Bhushan


 -Original Message-
 From: Wood Scott-B07421
 Sent: Friday, December 06, 2013 5:31 AM
 To: Bhushan Bharat-R65777
 Cc: Alex Williamson; linux-...@vger.kernel.org; ag...@suse.de; Yoder Stuart-
 B08248; io...@lists.linux-foundation.org; bhelg...@google.com; linuxppc-
 d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
 Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
 
 On Sun, 2013-11-24 at 23:33 -0600, Bharat Bhushan wrote:
 
   -Original Message-
   From: Alex Williamson [mailto:alex.william...@redhat.com]
   Sent: Friday, November 22, 2013 2:31 AM
   To: Wood Scott-B07421
   Cc: Bhushan Bharat-R65777; linux-...@vger.kernel.org; ag...@suse.de;
   Yoder Stuart-B08248; io...@lists.linux-foundation.org;
   bhelg...@google.com; linuxppc- d...@lists.ozlabs.org;
   linux-ker...@vger.kernel.org
   Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
   IOMMU (PAMU)
  
   On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
 On Thu, 2013-11-21 at 11:20 +, Bharat Bhushan wrote:
 
   -Original Message-
   From: Alex Williamson [mailto:alex.william...@redhat.com]
   Sent: Thursday, November 21, 2013 12:17 AM
   To: Bhushan Bharat-R65777
   Cc: j...@8bytes.org; bhelg...@google.com; ag...@suse.de;
   Wood Scott-B07421; Yoder Stuart-B08248;
   io...@lists.linux-foundation.org; linux-
   p...@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; linux-
   ker...@vger.kernel.org; Bhushan Bharat-R65777
   Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for
   Freescale IOMMU (PAMU)
  
   Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each
   vfio user has $COUNT regions at their disposal exclusively)?
 
  Number of msi-bank count is system wide and not per aperture,
  But will be
   setting windows for banks in the device aperture.
  So say if we are direct assigning 2 pci device (both have
  different iommu
   group, so 2 aperture in iommu) to VM.
  Now qemu can make only one call to know how many msi-banks are
  there but
   it must set sub-windows for all banks for both pci device in its
   respective aperture.

 I'm still confused.  What I want to make sure of is that the
 banks are independent per aperture.  For instance, if we have
 two separate userspace processes operating independently and
 they both chose to use msi bank zero for their device, that's
 bank zero within each aperture and doesn't interfere.  Or
 another way to ask is can a malicious user interfere with other users 
 by
 using the wrong bank.
 Thanks,
   
They can interfere.
 
  Want to be sure of how they can interfere?
 
 If more than one VFIO user shares the same MSI group, one of the users can 
 send
 MSIs to another user, by using the wrong interrupt within the bank.  
 Unexpected
 MSIs could cause misbehavior or denial of service.
 
With this hardware, the only way to prevent that
is to make sure that a bank is not shared by multiple protection 
contexts.
For some of our users, though, I believe preventing this is less
important than the performance benefit.
 
  So should we let this patch series in without protection?
 
 No, there should be some sort of opt-in mechanism similar to IOMMU-less VFIO 
 --
 but not the same exact one, since one is a much more serious loss of isolation
 than the other.

Can you please elaborate on the opt-in mechanism?

 
   I think we need some sort of ownership model around the msi banks then.
   Otherwise there's nothing preventing another userspace from
   attempting an MSI based attack on other users, or perhaps even on
   the host.  VFIO can't allow that.  Thanks,
 
  We have very few (3 MSI bank on most of chips), so we can not assign
  one to each userspace.
 
 That depends on how many users there are.

What I think we can do is:
 - Reserve one MSI region for the host. The host will not share an MSI region with a guest.
 - For up to 2 guests (max MSI banks minus the host's one), give them separate MSI regions.
 - Additional guests will share MSI regions with the other guests.

Any better suggestions are most welcome; a minimal sketch of this policy follows.
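A sketch of that policy, assuming 3 banks and a hypothetical per-bank user
count (this illustrates the idea above; it is not code from the series):

#include <errno.h>
#include <stdbool.h>

#define NR_MSI_BANKS	3	/* most chips have 3 MSI banks */
#define HOST_BANK	0	/* reserved, never given to userspace */

static unsigned int bank_users[NR_MSI_BANKS];	/* users per bank */
static bool allow_unsafe_interrupts;		/* admin opt-in knob */

static int assign_msi_bank(void)
{
	unsigned int i, best = HOST_BANK + 1;

	/* Prefer a free bank: exclusive ownership when possible. */
	for (i = HOST_BANK + 1; i < NR_MSI_BANKS; i++) {
		if (bank_users[i] == 0) {
			bank_users[i]++;
			return i;
		}
	}

	/* Out of exclusive banks: share only if the admin opted in,
	 * otherwise the new user must fall back to legacy interrupts. */
	if (!allow_unsafe_interrupts)
		return -ENODEV;

	/* Share the least-loaded guest bank. */
	for (i = HOST_BANK + 2; i < NR_MSI_BANKS; i++)
		if (bank_users[i] < bank_users[best])
			best = i;
	bank_users[best]++;
	return best;
}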

Thanks
-Bharat
 
 -Scott
 



RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-11-28 Thread Bharat Bhushan


 -Original Message-
 From: Bhushan Bharat-R65777
 Sent: Wednesday, November 27, 2013 9:39 PM
 To: 'Alex Williamson'
 Cc: Wood Scott-B07421; linux-...@vger.kernel.org; ag...@suse.de; Yoder Stuart-
 B08248; io...@lists.linux-foundation.org; bhelg...@google.com; linuxppc-
 d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
 Subject: RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
 
 
 
  -Original Message-
  From: Alex Williamson [mailto:alex.william...@redhat.com]
  Sent: Monday, November 25, 2013 10:08 PM
  To: Bhushan Bharat-R65777
  Cc: Wood Scott-B07421; linux-...@vger.kernel.org; ag...@suse.de; Yoder
  Stuart- B08248; io...@lists.linux-foundation.org; bhelg...@google.com;
  linuxppc- d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
  Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU
  (PAMU)
 
  On Mon, 2013-11-25 at 05:33 +, Bharat Bhushan wrote:
  
-Original Message-
From: Alex Williamson [mailto:alex.william...@redhat.com]
Sent: Friday, November 22, 2013 2:31 AM
To: Wood Scott-B07421
Cc: Bhushan Bharat-R65777; linux-...@vger.kernel.org;
ag...@suse.de; Yoder Stuart-B08248;
io...@lists.linux-foundation.org; bhelg...@google.com; linuxppc-
d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
IOMMU (PAMU)
   
On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
 On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
  On Thu, 2013-11-21 at 11:20 +, Bharat Bhushan wrote:
  
-Original Message-
From: Alex Williamson [mailto:alex.william...@redhat.com]
Sent: Thursday, November 21, 2013 12:17 AM
To: Bhushan Bharat-R65777
Cc: j...@8bytes.org; bhelg...@google.com; ag...@suse.de;
Wood Scott-B07421; Yoder Stuart-B08248;
io...@lists.linux-foundation.org; linux-
p...@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; linux-
ker...@vger.kernel.org; Bhushan Bharat-R65777
Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for
Freescale IOMMU (PAMU)
   
Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie.
each vfio user has $COUNT regions at their disposal 
exclusively)?
  
   Number of msi-bank count is system wide and not per
   aperture, But will be
setting windows for banks in the device aperture.
   So say if we are direct assigning 2 pci device (both have
   different iommu
group, so 2 aperture in iommu) to VM.
   Now qemu can make only one call to know how many msi-banks
   are there but
it must set sub-windows for all banks for both pci device in its
respective aperture.
 
  I'm still confused.  What I want to make sure of is that the
  banks are independent per aperture.  For instance, if we have
  two separate userspace processes operating independently and
  they both chose to use msi bank zero for their device, that's
  bank zero within each aperture and doesn't interfere.  Or
  another way to ask is can a malicious user interfere with
  other users by
  using the wrong bank.
  Thanks,

 They can interfere.
  
   Want to be sure of how they can interfere?
 
  What happens if more than one user selects the same MSI bank?
  Minimally, wouldn't that result in the IOMMU blocking transactions
  from the previous user once the new user activates their mapping?
 
 Yes and no; With current implementation yes but with a minor change no. Later 
 in
 this response I will explain how.
 
 
 With this hardware, the only way to prevent that
 is to make sure that a bank is not shared by multiple protection
 contexts.
 For some of our users, though, I believe preventing this is less
 important than the performance benefit.
  
   So should we let this patch series in without protection?
 
  No.
 
   
I think we need some sort of ownership model around the msi banks then.
Otherwise there's nothing preventing another userspace from
attempting an MSI based attack on other users, or perhaps even on
the host.  VFIO can't allow that.  Thanks,
  
   We have very few (3 MSI bank on most of chips), so we can not assign
   one to each userspace. What we can do is host and userspace does not
   share a MSI bank while userspace will share a MSI bank.
 
  Then you probably need VFIO to own the MSI bank and program devices
  into it rather than exposing the MSI banks to userspace to let them have
 direct access.
 
 Overall idea of exposing the details of msi regions to userspace are
  1) User space can define the aperture size to fit MSI mapping in IOMMU.
  2) setup iova for a MSI banks; which is just after guest memory.
 
 But currently we expose the size and address of MSI banks, passing address
 is of no use and can be problematic.

I am sorry, the above information is not correct. Currently neither we

RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-11-27 Thread Bharat Bhushan


 -Original Message-
 From: Alex Williamson [mailto:alex.william...@redhat.com]
 Sent: Monday, November 25, 2013 10:08 PM
 To: Bhushan Bharat-R65777
 Cc: Wood Scott-B07421; linux-...@vger.kernel.org; ag...@suse.de; Yoder Stuart-
 B08248; io...@lists.linux-foundation.org; bhelg...@google.com; linuxppc-
 d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
 Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
 
 On Mon, 2013-11-25 at 05:33 +, Bharat Bhushan wrote:
 
   -Original Message-
   From: Alex Williamson [mailto:alex.william...@redhat.com]
   Sent: Friday, November 22, 2013 2:31 AM
   To: Wood Scott-B07421
   Cc: Bhushan Bharat-R65777; linux-...@vger.kernel.org; ag...@suse.de;
   Yoder Stuart-B08248; io...@lists.linux-foundation.org;
   bhelg...@google.com; linuxppc- d...@lists.ozlabs.org;
   linux-ker...@vger.kernel.org
   Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
   IOMMU (PAMU)
  
   On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
 On Thu, 2013-11-21 at 11:20 +, Bharat Bhushan wrote:
 
   -Original Message-
   From: Alex Williamson [mailto:alex.william...@redhat.com]
   Sent: Thursday, November 21, 2013 12:17 AM
   To: Bhushan Bharat-R65777
   Cc: j...@8bytes.org; bhelg...@google.com; ag...@suse.de;
   Wood Scott-B07421; Yoder Stuart-B08248;
   io...@lists.linux-foundation.org; linux-
   p...@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; linux-
   ker...@vger.kernel.org; Bhushan Bharat-R65777
   Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for
   Freescale IOMMU (PAMU)
  
   Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each
   vfio user has $COUNT regions at their disposal exclusively)?
 
  Number of msi-bank count is system wide and not per aperture,
  But will be
   setting windows for banks in the device aperture.
  So say if we are direct assigning 2 pci device (both have
  different iommu
   group, so 2 aperture in iommu) to VM.
  Now qemu can make only one call to know how many msi-banks are
  there but
   it must set sub-windows for all banks for both pci device in its
   respective aperture.

 I'm still confused.  What I want to make sure of is that the
 banks are independent per aperture.  For instance, if we have
 two separate userspace processes operating independently and
 they both chose to use msi bank zero for their device, that's
 bank zero within each aperture and doesn't interfere.  Or
 another way to ask is can a malicious user interfere with other users 
 by
 using the wrong bank.
 Thanks,
   
They can interfere.
 
  Want to be sure of how they can interfere?
 
 What happens if more than one user selects the same MSI bank?
 Minimally, wouldn't that result in the IOMMU blocking transactions from the
 previous user once the new user activates their mapping?

Yes and no; with the current implementation yes, but with a minor change no.
Later in this response I will explain how.

 
With this hardware, the only way to prevent that
is to make sure that a bank is not shared by multiple protection 
contexts.
For some of our users, though, I believe preventing this is less
important than the performance benefit.
 
  So should we let this patch series in without protection?
 
 No.
 
  
   I think we need some sort of ownership model around the msi banks then.
   Otherwise there's nothing preventing another userspace from
   attempting an MSI based attack on other users, or perhaps even on
   the host.  VFIO can't allow that.  Thanks,
 
  We have very few (3 MSI bank on most of chips), so we can not assign
  one to each userspace. What we can do is host and userspace does not
  share a MSI bank while userspace will share a MSI bank.
 
 Then you probably need VFIO to own the MSI bank and program devices into it
 rather than exposing the MSI banks to userspace to let them have direct 
 access.

The overall idea of exposing the details of MSI regions to userspace is:
 1) Userspace can define the aperture size to fit the MSI mappings in the IOMMU.
 2) Userspace sets up the iova for the MSI banks, placed just after guest memory.

But currently we expose both the size and the address of the MSI banks; passing
the address is of no use and can be problematic.
If we just provide the size of an MSI bank to userspace, then userspace cannot
do anything wrong.

Since it is still the responsibility of the host (MSI driver + VFIO) to compose
the MSI address and MSI data, I think this should be fine. A sketch of the
resulting iova layout is below.
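A minimal sketch of the layout userspace would compute under this scheme; the
power-of-two rounding reflects PAMU's windowing constraints, and the constants
are illustrative, not taken from the series:

#include <stdint.h>
#include <stdio.h>

static uint64_t round_up_pow2(uint64_t x)
{
	uint64_t p = 1;

	while (p < x)
		p <<= 1;
	return p;
}

int main(void)
{
	uint64_t guest_mem = 512ULL << 20;	/* 512 MiB of guest RAM  */
	uint64_t bank_size = 4096;		/* one page per MSI bank */
	unsigned int nr_banks = 2, i;

	/* The aperture must cover guest RAM plus one window per bank. */
	uint64_t aperture = round_up_pow2(guest_mem + nr_banks * bank_size);

	printf("aperture size: 0x%llx\n", (unsigned long long)aperture);
	for (i = 0; i < nr_banks; i++)
		printf("MSI bank %u iova: 0x%llx\n", i,
		       (unsigned long long)(guest_mem + i * bank_size));
	return 0;
}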

 Thanks,
 
 Alex
 



Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-11-25 Thread Alex Williamson
On Mon, 2013-11-25 at 05:33 +, Bharat Bhushan wrote:
 
  -Original Message-
  From: Alex Williamson [mailto:alex.william...@redhat.com]
  Sent: Friday, November 22, 2013 2:31 AM
  To: Wood Scott-B07421
  Cc: Bhushan Bharat-R65777; linux-...@vger.kernel.org; ag...@suse.de; Yoder
  Stuart-B08248; io...@lists.linux-foundation.org; bhelg...@google.com; 
  linuxppc-
  d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
  Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  
  On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
   On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
On Thu, 2013-11-21 at 11:20 +, Bharat Bhushan wrote:

  -Original Message-
  From: Alex Williamson [mailto:alex.william...@redhat.com]
  Sent: Thursday, November 21, 2013 12:17 AM
  To: Bhushan Bharat-R65777
  Cc: j...@8bytes.org; bhelg...@google.com; ag...@suse.de; Wood
  Scott-B07421; Yoder Stuart-B08248;
  io...@lists.linux-foundation.org; linux- p...@vger.kernel.org;
  linuxppc-dev@lists.ozlabs.org; linux- ker...@vger.kernel.org;
  Bhushan Bharat-R65777
  Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
  IOMMU (PAMU)
 
  Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each
  vfio user has $COUNT regions at their disposal exclusively)?

 Number of msi-bank count is system wide and not per aperture, But 
 will be
  setting windows for banks in the device aperture.
 So say if we are direct assigning 2 pci device (both have different 
 iommu
  group, so 2 aperture in iommu) to VM.
 Now qemu can make only one call to know how many msi-banks are there 
 but
  it must set sub-windows for all banks for both pci device in its respective
  aperture.
   
I'm still confused.  What I want to make sure of is that the banks
are independent per aperture.  For instance, if we have two separate
userspace processes operating independently and they both chose to
use msi bank zero for their device, that's bank zero within each
aperture and doesn't interfere.  Or another way to ask is can a
malicious user interfere with other users by using the wrong bank.
Thanks,
  
   They can interfere.
 
 Want to be sure of how they can interfere?

What happens if more than one user selects the same MSI bank?
Minimally, wouldn't that result in the IOMMU blocking transactions from
the previous user once the new user activates their mapping?

   With this hardware, the only way to prevent that
   is to make sure that a bank is not shared by multiple protection contexts.
   For some of our users, though, I believe preventing this is less
   important than the performance benefit.
 
 So should we let this patch series in without protection?

No.

  
  I think we need some sort of ownership model around the msi banks then.
  Otherwise there's nothing preventing another userspace from attempting an 
  MSI
  based attack on other users, or perhaps even on the host.  VFIO can't allow
  that.  Thanks,
 
 We have very few MSI banks (3 on most chips), so we cannot assign one
 to each userspace. What we can do is ensure that the host and userspace
 never share an MSI bank, while userspace instances may share MSI banks
 among themselves.

Then you probably need VFIO to own the MSI bank and program devices
into it rather than exposing the MSI banks to userspace to let them have
direct access.  Thanks,

Alex
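
One plausible shape for such an ownership model, assuming VFIO records
which IOMMU domain (protection context) holds each bank, is sketched
below; every identifier is invented for illustration and none of this is
code from the series.

/*
 * Hedged sketch: VFIO claims an MSI bank for one IOMMU domain and
 * refuses to hand the same bank to another protection context.
 */
#include <linux/errno.h>
#include <linux/iommu.h>
#include <linux/mutex.h>

#define FSL_MSI_MAX_BANKS 3	/* "3 MSI banks on most chips", per thread */

static struct {
	struct iommu_domain *owner;	/* NULL when the bank is free */
	unsigned int refs;
} msi_banks[FSL_MSI_MAX_BANKS];
static DEFINE_MUTEX(msi_bank_lock);

static int msi_bank_claim(unsigned int bank, struct iommu_domain *domain)
{
	int ret = 0;

	if (bank >= FSL_MSI_MAX_BANKS)
		return -EINVAL;

	mutex_lock(&msi_bank_lock);
	if (msi_banks[bank].owner && msi_banks[bank].owner != domain) {
		/* Bank already belongs to another protection context. */
		ret = -EBUSY;
	} else {
		msi_banks[bank].owner = domain;
		msi_banks[bank].refs++;
	}
	mutex_unlock(&msi_bank_lock);
	return ret;
}

static void msi_bank_release(unsigned int bank, struct iommu_domain *domain)
{
	mutex_lock(&msi_bank_lock);
	if (bank < FSL_MSI_MAX_BANKS && msi_banks[bank].owner == domain &&
	    --msi_banks[bank].refs == 0)
		msi_banks[bank].owner = NULL;	/* bank is free again */
	mutex_unlock(&msi_bank_lock);
}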



RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-11-24 Thread Bharat Bhushan


 -Original Message-
 From: Alex Williamson [mailto:alex.william...@redhat.com]
 Sent: Friday, November 22, 2013 2:31 AM
 To: Wood Scott-B07421
 Cc: Bhushan Bharat-R65777; linux-...@vger.kernel.org; ag...@suse.de; Yoder
 Stuart-B08248; io...@lists.linux-foundation.org; bhelg...@google.com; 
 linuxppc-
 d...@lists.ozlabs.org; linux-ker...@vger.kernel.org
 Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
 
 On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
  On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
   On Thu, 2013-11-21 at 11:20 +, Bharat Bhushan wrote:
   
 -Original Message-
 From: Alex Williamson [mailto:alex.william...@redhat.com]
 Sent: Thursday, November 21, 2013 12:17 AM
 To: Bhushan Bharat-R65777
 Cc: j...@8bytes.org; bhelg...@google.com; ag...@suse.de; Wood
 Scott-B07421; Yoder Stuart-B08248;
 io...@lists.linux-foundation.org; linux- p...@vger.kernel.org;
 linuxppc-dev@lists.ozlabs.org; linux- ker...@vger.kernel.org;
 Bhushan Bharat-R65777
 Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale
 IOMMU (PAMU)

 Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each
 vfio user has $COUNT regions at their disposal exclusively)?
   
The number of MSI banks is system-wide, not per aperture, but the windows
for the banks are set within each device's aperture.
Say we are direct-assigning 2 PCI devices (each in a different IOMMU
group, hence 2 apertures in the IOMMU) to a VM.
QEMU can make just one call to learn how many MSI banks there are, but it
must set sub-windows for all banks for both PCI devices, each in its
respective aperture.
  
   I'm still confused.  What I want to make sure of is that the banks
   are independent per aperture.  For instance, if we have two separate
   userspace processes operating independently and they both chose to
   use msi bank zero for their device, that's bank zero within each
   aperture and doesn't interfere.  Or another way to ask is can a
   malicious user interfere with other users by using the wrong bank.
   Thanks,
 
  They can interfere.

I want to be sure: how can they interfere?

  With this hardware, the only way to prevent that
  is to make sure that a bank is not shared by multiple protection contexts.
  For some of our users, though, I believe preventing this is less
  important than the performance benefit.

So should we let this patch series in without protection?

 
 I think we need some sort of ownership model around the msi banks then.
 Otherwise there's nothing preventing another userspace from attempting an MSI
 based attack on other users, or perhaps even on the host.  VFIO can't allow
 that.  Thanks,

We have very few MSI banks (3 on most chips), so we cannot assign one to
each userspace. What we can do is ensure that the host and userspace never
share an MSI bank, while userspace instances may share MSI banks among
themselves.


Thanks
-Bharat

 
 Alex
 



RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-11-21 Thread Varun Sethi


 -Original Message-
 From: iommu-boun...@lists.linux-foundation.org [mailto:iommu-
 boun...@lists.linux-foundation.org] On Behalf Of Alex Williamson
 Sent: Thursday, November 21, 2013 12:17 AM
 To: Bhushan Bharat-R65777
 Cc: linux-...@vger.kernel.org; ag...@suse.de; Yoder Stuart-B08248; Wood
 Scott-B07421; io...@lists.linux-foundation.org; bhelg...@google.com;
 linuxppc-dev@lists.ozlabs.org; linux-ker...@vger.kernel.org
 Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU
 (PAMU)
 
 On Tue, 2013-11-19 at 10:47 +0530, Bharat Bhushan wrote:
  From: Bharat Bhushan bharat.bhus...@freescale.com
 
  PAMU (the FSL IOMMU) has a concept of a primary window and subwindows.
  The primary window corresponds to the complete guest IOVA address
  space (including MSI space); with respect to the IOMMU API this is
  termed the geometry. The IOVA base of a subwindow is determined from
  the number of subwindows (configurable using the IOMMU API).
  The MSI I/O page must be within the geometry and the maximum supported
  subwindows, so the MSI I/O page is set up just after the guest memory
  IOVA space.
 
  So patches 1/9-4/9 (inclusive) define the interface to get:
  - the number of MSI regions (which is the number of MSI banks for
    powerpc)
  - the MSI-region address range: the physical page which has the
    address/addresses used for generating MSI interrupts, and the size
    of the page.
 
  Patches 5/9-7/9 (inclusive) define the interface for setting up an MSI
  IOVA base for an MSI region (bank) for a device, so that this
  configured IOVA is used when the MSI message is composed.
  Earlier we were using the IOMMU interface for getting the configured
  IOVA, which was not correct; Alex Williamson suggested this type of
  interface.
 
  Patch 8/9 moves some common functions into a separate file so that
  they can be used by the FSL_PAMU implementation (the next patch uses
  them). These will later be used for the iommu-none implementation.
  I believe we can do more of this, but we will take it step by step.
 
  Finally, the last patch actually adds the support for FSL-PAMU :)
 
 Patches 1-3: msi_get_region needs to return an error (probably
 -EINVAL) if called on a path where there's no backend implementation.
 Otherwise the caller doesn't know that the data in the region pointer
 isn't valid.
 
 Patches 5 & 6: same as above for msi_set_iova; return an error if no
 backend implementation.
 
 Patch 7: Why does fsl_msi_del_iova_device bother to return anything if
 it's always zero?  Return -ENODEV when not found?
 
 Patch 9:
 
 vfio_handle_get_attr() passes random kernel data back to userspace in the
 event of iommu_domain_get_attr() error.
 
 vfio_handle_set_attr(): I don't see any data validation happening, is
 iommu_domain_set_attr() really that safe?
[Sethi Varun-B16395] The parameter validation can be left to the
lower-level iommu driver. The attribute could be specific to given
hardware.

-Varun



RE: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-11-21 Thread Bharat Bhushan


 -Original Message-
 From: Alex Williamson [mailto:alex.william...@redhat.com]
 Sent: Thursday, November 21, 2013 12:17 AM
 To: Bhushan Bharat-R65777
 Cc: j...@8bytes.org; bhelg...@google.com; ag...@suse.de; Wood Scott-B07421;
 Yoder Stuart-B08248; io...@lists.linux-foundation.org; linux-
 p...@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; linux-
 ker...@vger.kernel.org; Bhushan Bharat-R65777
 Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
 
 On Tue, 2013-11-19 at 10:47 +0530, Bharat Bhushan wrote:
  From: Bharat Bhushan bharat.bhus...@freescale.com
 
  PAMU (the FSL IOMMU) has a concept of a primary window and subwindows.
  The primary window corresponds to the complete guest IOVA address
  space (including MSI space); with respect to the IOMMU API this is
  termed the geometry. The IOVA base of a subwindow is determined from
  the number of subwindows (configurable using the IOMMU API).
  The MSI I/O page must be within the geometry and the maximum supported
  subwindows, so the MSI I/O page is set up just after the guest memory
  IOVA space.
 
  So patches 1/9-4/9 (inclusive) define the interface to get:
  - the number of MSI regions (which is the number of MSI banks for
    powerpc)
  - the MSI-region address range: the physical page which has the
    address/addresses used for generating MSI interrupts, and the size
    of the page.
 
  Patches 5/9-7/9 (inclusive) define the interface for setting up an MSI
  IOVA base for an MSI region (bank) for a device, so that this
  configured IOVA is used when the MSI message is composed.
  Earlier we were using the IOMMU interface for getting the configured
  IOVA, which was not correct; Alex Williamson suggested this type of
  interface.
 
  Patch 8/9 moves some common functions into a separate file so that
  they can be used by the FSL_PAMU implementation (the next patch uses
  them). These will later be used for the iommu-none implementation.
  I believe we can do more of this, but we will take it step by step.
 
  Finally, the last patch actually adds the support for FSL-PAMU :)
 
 Patches 1-3: msi_get_region needs to return an error (probably
 -EINVAL) if called on a path where there's no backend implementation.
 Otherwise the caller doesn't know that the data in the region pointer isn't
 valid.

will correct.

 
 Patches 5 & 6: same as above for msi_set_iova; return an error if no backend
 implementation.

Ok

 
 Patch 7: Why does fsl_msi_del_iova_device bother to return anything if it's
 always zero?  Return -ENODEV when not found?

Will make -ENODEV.

 
 Patch 9:
 
 vfio_handle_get_attr() passes random kernel data back to userspace in the 
 event
 of iommu_domain_get_attr() error.

Will correct.

 
 vfio_handle_set_attr(): I don't see any data validation happening, is
 iommu_domain_set_attr() really that safe?

We do not need any data validation here; the iommu driver does whatever
validation is needed. So yes, iommu_domain_set_attr() is safe.

 
 For both of those, drop the pr_err on unknown attribute; it's sufficient
 to return an error.

ok

 
 Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each vfio user has
 $COUNT regions at their disposal exclusively)?

The number of MSI banks is system-wide, not per aperture, but the windows
for the banks are set within each device's aperture.
Say we are direct-assigning 2 PCI devices (each in a different IOMMU
group, hence 2 apertures in the IOMMU) to a VM.
QEMU can make just one call to learn how many MSI banks there are, but it
must set sub-windows for all banks for both PCI devices, each in its
respective aperture.
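
A hedged sketch of the QEMU-side shape this implies follows;
map_msi_bank() is the hypothetical helper from the sketch in the
2013-12-10 message above, and the rest is likewise illustrative.

/*
 * One VFIO container per IOMMU group (aperture); the bank count is
 * queried once, yet every aperture must program its own sub-window
 * for every bank.
 */
#include <stdint.h>

int map_msi_bank(int container, uint32_t bank, uint64_t iova);

static int setup_all_apertures(const int *containers, int ncontainers,
			       uint32_t bank_count, uint64_t ram_size,
			       uint64_t msi_page_size)
{
	int i;
	uint32_t bank;

	for (i = 0; i < ncontainers; i++) {
		/* Same system-wide bank numbering, programmed per aperture. */
		for (bank = 0; bank < bank_count; bank++) {
			uint64_t iova = ram_size +
					(uint64_t)bank * msi_page_size;

			if (map_msi_bank(containers[i], bank, iova) < 0)
				return -1;
		}
	}
	return 0;
}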

Thanks
-Bharat

  Thanks,
 
 Alex
 
  v1-v2
   - Added an interface for setting the msi iova for an msi region for
     a device. Earlier I added an iommu interface for the same, but per
     review comments that was removed and a direct interface between
     vfio and msi was created.
   - Incorporated review comments (details are in the individual patches)
 
  Bharat Bhushan (9):
pci:msi: add weak function for returning msi region info
pci: msi: expose msi region information functions
powerpc: pci: Add arch specific msi region interface
powerpc: msi: Extend the msi region interface to get info from
  fsl_msi
pci/msi: interface to set an iova for a msi region
powerpc: pci: Extend msi iova page setup to arch specific
pci: msi: Extend msi iova setting interface to powerpc arch
vfio: moving some functions in common file
vfio pci: Add vfio iommu implementation for FSL_PAMU
 
   arch/powerpc/include/asm/machdep.h |   10 +
   arch/powerpc/kernel/msi.c  |   28 +
   arch/powerpc/sysdev/fsl_msi.c  |  132 +-
   arch/powerpc/sysdev/fsl_msi.h  |   25 +-
   drivers/pci/msi.c  |   35 ++
   drivers/vfio/Kconfig   |6 +
   drivers/vfio/Makefile  |5 +-
   drivers/vfio/vfio_iommu_common.c   |  227 
   drivers/vfio/vfio_iommu_common.h   |   27 +
   drivers/vfio/vfio_iommu_fsl_pamu.c | 1003
 
   drivers/vfio/vfio_iommu_type1.c|  206

Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-11-21 Thread Scott Wood
On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
 On Thu, 2013-11-21 at 11:20 +, Bharat Bhushan wrote:
  
   -Original Message-
   From: Alex Williamson [mailto:alex.william...@redhat.com]
   Sent: Thursday, November 21, 2013 12:17 AM
   To: Bhushan Bharat-R65777
   Cc: j...@8bytes.org; bhelg...@google.com; ag...@suse.de; Wood 
   Scott-B07421;
   Yoder Stuart-B08248; io...@lists.linux-foundation.org; linux-
   p...@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; linux-
   ker...@vger.kernel.org; Bhushan Bharat-R65777
   Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU 
   (PAMU)
   
   Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each vfio user has
   $COUNT regions at their disposal exclusively)?
  
  The number of MSI banks is system-wide, not per aperture, but the
  windows for the banks are set within each device's aperture.
  Say we are direct-assigning 2 PCI devices (each in a different IOMMU
  group, hence 2 apertures in the IOMMU) to a VM.
  QEMU can make just one call to learn how many MSI banks there are,
  but it must set sub-windows for all banks for both PCI devices, each
  in its respective aperture.
 
 I'm still confused.  What I want to make sure of is that the banks are
 independent per aperture.  For instance, if we have two separate
 userspace processes operating independently and they both chose to use
 msi bank zero for their device, that's bank zero within each aperture
 and doesn't interfere.  Or another way to ask is can a malicious user
 interfere with other users by using the wrong bank.  Thanks,

They can interfere.  With this hardware, the only way to prevent that is
to make sure that a bank is not shared by multiple protection contexts.
For some of our users, though, I believe preventing this is less
important than the performance benefit.

-Scott





Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-11-21 Thread Alex Williamson
On Thu, 2013-11-21 at 14:47 -0600, Scott Wood wrote:
 On Thu, 2013-11-21 at 13:43 -0700, Alex Williamson wrote:
  On Thu, 2013-11-21 at 11:20 +, Bharat Bhushan wrote:
   
-Original Message-
From: Alex Williamson [mailto:alex.william...@redhat.com]
Sent: Thursday, November 21, 2013 12:17 AM
To: Bhushan Bharat-R65777
Cc: j...@8bytes.org; bhelg...@google.com; ag...@suse.de; Wood 
Scott-B07421;
Yoder Stuart-B08248; io...@lists.linux-foundation.org; linux-
p...@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; linux-
ker...@vger.kernel.org; Bhushan Bharat-R65777
Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU 
(PAMU)

Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each vfio user 
has
$COUNT regions at their disposal exclusively)?
   
    The number of MSI banks is system-wide, not per aperture, but the
    windows for the banks are set within each device's aperture.
    Say we are direct-assigning 2 PCI devices (each in a different IOMMU
    group, hence 2 apertures in the IOMMU) to a VM.
    QEMU can make just one call to learn how many MSI banks there are,
    but it must set sub-windows for all banks for both PCI devices, each
    in its respective aperture.
  
  I'm still confused.  What I want to make sure of is that the banks are
  independent per aperture.  For instance, if we have two separate
  userspace processes operating independently and they both chose to use
  msi bank zero for their device, that's bank zero within each aperture
  and doesn't interfere.  Or another way to ask is can a malicious user
  interfere with other users by using the wrong bank.  Thanks,
 
 They can interfere.  With this hardware, the only way to prevent that is
 to make sure that a bank is not shared by multiple protection contexts.
 For some of our users, though, I believe preventing this is less
 important than the performance benefit.

I think we need some sort of ownership model around the msi banks then.
Otherwise there's nothing preventing another userspace from attempting
an MSI based attack on other users, or perhaps even on the host.  VFIO
can't allow that.  Thanks,

Alex



Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-11-21 Thread Alex Williamson
On Thu, 2013-11-21 at 11:20 +, Bharat Bhushan wrote:
 
  -Original Message-
  From: Alex Williamson [mailto:alex.william...@redhat.com]
  Sent: Thursday, November 21, 2013 12:17 AM
  To: Bhushan Bharat-R65777
  Cc: j...@8bytes.org; bhelg...@google.com; ag...@suse.de; Wood Scott-B07421;
  Yoder Stuart-B08248; io...@lists.linux-foundation.org; linux-
  p...@vger.kernel.org; linuxppc-dev@lists.ozlabs.org; linux-
  ker...@vger.kernel.org; Bhushan Bharat-R65777
  Subject: Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)
  
  On Tue, 2013-11-19 at 10:47 +0530, Bharat Bhushan wrote:
   From: Bharat Bhushan bharat.bhus...@freescale.com
  
   PAMU (the FSL IOMMU) has a concept of a primary window and subwindows.
   The primary window corresponds to the complete guest IOVA address
   space (including MSI space); with respect to the IOMMU API this is
   termed the geometry. The IOVA base of a subwindow is determined from
   the number of subwindows (configurable using the IOMMU API).
   The MSI I/O page must be within the geometry and the maximum
   supported subwindows, so the MSI I/O page is set up just after the
   guest memory IOVA space.
  
   So patches 1/9-4/9 (inclusive) define the interface to get:
   - the number of MSI regions (which is the number of MSI banks for
     powerpc)
   - the MSI-region address range: the physical page which has the
     address/addresses used for generating MSI interrupts, and the size
     of the page.
  
   Patches 5/9-7/9 (inclusive) define the interface for setting up an
   MSI IOVA base for an MSI region (bank) for a device, so that this
   configured IOVA is used when the MSI message is composed.
   Earlier we were using the IOMMU interface for getting the configured
   IOVA, which was not correct; Alex Williamson suggested this type of
   interface.
  
   Patch 8/9 moves some common functions into a separate file so that
   they can be used by the FSL_PAMU implementation (the next patch uses
   them). These will later be used for the iommu-none implementation.
   I believe we can do more of this, but we will take it step by step.
  
   Finally, the last patch actually adds the support for FSL-PAMU :)
  
  Patches 1-3: msi_get_region needs to return an error (probably
  -EINVAL) if called on a path where there's no backend implementation.
  Otherwise the caller doesn't know that the data in the region pointer isn't
  valid.
 
 will correct.
 
  
  Patches 5 & 6: same as above for msi_set_iova; return an error if no backend
  implementation.
 
 Ok
 
  
  Patch 7: Why does fsl_msi_del_iova_device bother to return anything if it's
  always zero?  Return -ENODEV when not found?
 
 Will make -ENODEV.
 
  
  Patch 9:
  
  vfio_handle_get_attr() passes random kernel data back to userspace in the 
  event
  of iommu_domain_get_attr() error.
 
 Will correct.
 
  
  vfio_handle_set_attr(): I don't see any data validation happening, is
  iommu_domain_set_attr() really that safe?
 
 We do not need any data validation here; the iommu driver does whatever
 validation is needed. So yes, iommu_domain_set_attr() is safe.
 
  
  For both of those, drop the pr_err on unknown attribute; it's
  sufficient to return an error.
 
 ok
 
  
  Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each vfio user has
  $COUNT regions at their disposal exclusively)?
 
 The number of MSI banks is system-wide, not per aperture, but the
 windows for the banks are set within each device's aperture.
 Say we are direct-assigning 2 PCI devices (each in a different IOMMU
 group, hence 2 apertures in the IOMMU) to a VM.
 QEMU can make just one call to learn how many MSI banks there are, but
 it must set sub-windows for all banks for both PCI devices, each in its
 respective aperture.

I'm still confused.  What I want to make sure of is that the banks are
independent per aperture.  For instance, if we have two separate
userspace processes operating independently and they both chose to use
msi bank zero for their device, that's bank zero within each aperture
and doesn't interfere.  Or another way to ask is can a malicious user
interfere with other users by using the wrong bank.  Thanks,

Alex



Re: [PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-11-20 Thread Alex Williamson
On Tue, 2013-11-19 at 10:47 +0530, Bharat Bhushan wrote:
 From: Bharat Bhushan bharat.bhus...@freescale.com
 
 PAMU (the FSL IOMMU) has a concept of a primary window and subwindows.
 The primary window corresponds to the complete guest IOVA address space
 (including MSI space); with respect to the IOMMU API this is termed the
 geometry. The IOVA base of a subwindow is determined from the number of
 subwindows (configurable using the IOMMU API).
 The MSI I/O page must be within the geometry and the maximum supported
 subwindows, so the MSI I/O page is set up just after the guest memory
 IOVA space.
 
 So patches 1/9-4/9 (inclusive) define the interface to get:
   - the number of MSI regions (which is the number of MSI banks for
     powerpc)
   - the MSI-region address range: the physical page which has the
     address/addresses used for generating MSI interrupts, and the size
     of the page.
 
 Patches 5/9-7/9 (inclusive) define the interface for setting up an MSI
 IOVA base for an MSI region (bank) for a device, so that this
 configured IOVA is used when the MSI message is composed.
 Earlier we were using the IOMMU interface for getting the configured
 IOVA, which was not correct; Alex Williamson suggested this type of
 interface.
 
 Patch 8/9 moves some common functions into a separate file so that they
 can be used by the FSL_PAMU implementation (the next patch uses them).
 These will later be used for the iommu-none implementation. I believe
 we can do more of this, but we will take it step by step.
 
 Finally, the last patch actually adds the support for FSL-PAMU :)

Patches 1-3: msi_get_region needs to return an error (probably
-EINVAL) if called on a path where there's no backend implementation.
Otherwise the caller doesn't know that the data in the region pointer
isn't valid.
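
The requested default could be as small as the following sketch;
msi_get_region and msi_get_region_count follow the patch titles, while
the msi_region layout is a guess from the cover letter.

/*
 * Hedged sketch: the weak default (used when an arch provides no
 * backend) must return an error so callers never trust the unfilled
 * region data.
 */
#include <linux/errno.h>
#include <linux/types.h>

struct msi_region {
	int region_num;		/* MSI bank number */
	dma_addr_t addr;	/* physical address of the MSI page */
	size_t size;		/* size of the MSI page */
};

int __weak msi_get_region_count(void)
{
	return 0;		/* no backend: no MSI regions */
}

int __weak msi_get_region(int region_num, struct msi_region *region)
{
	return -EINVAL;		/* no backend: *region was not filled */
}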

Patches 5 & 6: same as above for msi_set_iova; return an error if no
backend implementation.

Patch 7: Why does fsl_msi_del_iova_device bother to return anything if
it's always zero?  Return -ENODEV when not found?
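
A minimal sketch of that change, assuming hypothetical per-device list
bookkeeping (the entry type and list name are invented):

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/slab.h>

struct fsl_msi_iova_entry {		/* hypothetical bookkeeping */
	struct device *dev;
	struct list_head list;
};

static LIST_HEAD(fsl_msi_iova_list);

static int fsl_msi_del_iova_device(struct device *dev)
{
	struct fsl_msi_iova_entry *entry, *tmp;

	list_for_each_entry_safe(entry, tmp, &fsl_msi_iova_list, list) {
		if (entry->dev == dev) {
			list_del(&entry->list);
			kfree(entry);
			return 0;
		}
	}
	return -ENODEV;		/* nothing was bound for this device */
}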

Patch 9:

vfio_handle_get_attr() passes random kernel data back to userspace in
the event of iommu_domain_get_attr() error.

vfio_handle_set_attr(): I don't see any data validation happening, is
iommu_domain_set_attr() really that safe?

For both of those, drop the pr_err on unknown attribute; it's sufficient
to return an error.
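
A hedged sketch of the first Patch-9 fix; iommu_domain_get_attr() is the
real API of this era, while the function shape and the vfio_pamu_attr
layout are illustrative.

/*
 * Copy the attribute out only after iommu_domain_get_attr() succeeds,
 * so no uninitialized kernel data reaches userspace on error.
 */
#include <linux/errno.h>
#include <linux/iommu.h>
#include <linux/uaccess.h>

struct vfio_pamu_attr {			/* hypothetical uapi layout */
	u32 argsz;
	u32 attribute;
	u64 value;
};

static long vfio_handle_get_attr(struct iommu_domain *domain,
				 struct vfio_pamu_attr __user *uattr,
				 enum iommu_attr attr)
{
	u64 val;
	int ret;

	ret = iommu_domain_get_attr(domain, attr, &val);
	if (ret)
		return ret;	/* never copy out unfilled stack data */

	if (copy_to_user(&uattr->value, &val, sizeof(val)))
		return -EFAULT;
	return 0;
}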

Is VFIO_IOMMU_PAMU_GET_MSI_BANK_COUNT per aperture (ie. each vfio user
has $COUNT regions at their disposal exclusively)?  Thanks,

Alex

 v1-v2
  - Added an interface for setting the msi iova for an msi region for a
    device. Earlier I added an iommu interface for the same, but per
    review comments that was removed and a direct interface between vfio
    and msi was created.
  - Incorporated review comments (details are in the individual patches)
 
 Bharat Bhushan (9):
   pci:msi: add weak function for returning msi region info
   pci: msi: expose msi region information functions
   powerpc: pci: Add arch specific msi region interface
   powerpc: msi: Extend the msi region interface to get info from
 fsl_msi
   pci/msi: interface to set an iova for a msi region
   powerpc: pci: Extend msi iova page setup to arch specific
   pci: msi: Extend msi iova setting interface to powerpc arch
   vfio: moving some functions in common file
   vfio pci: Add vfio iommu implementation for FSL_PAMU
 
  arch/powerpc/include/asm/machdep.h |   10 +
  arch/powerpc/kernel/msi.c  |   28 +
  arch/powerpc/sysdev/fsl_msi.c  |  132 +-
  arch/powerpc/sysdev/fsl_msi.h  |   25 +-
  drivers/pci/msi.c  |   35 ++
  drivers/vfio/Kconfig   |6 +
  drivers/vfio/Makefile  |5 +-
  drivers/vfio/vfio_iommu_common.c   |  227 
  drivers/vfio/vfio_iommu_common.h   |   27 +
  drivers/vfio/vfio_iommu_fsl_pamu.c | 1003 
 
  drivers/vfio/vfio_iommu_type1.c|  206 +
  include/linux/msi.h|   14 +
  include/linux/pci.h|   21 +
  include/uapi/linux/vfio.h  |  100 
  14 files changed, 1623 insertions(+), 216 deletions(-)
  create mode 100644 drivers/vfio/vfio_iommu_common.c
  create mode 100644 drivers/vfio/vfio_iommu_common.h
  create mode 100644 drivers/vfio/vfio_iommu_fsl_pamu.c
 
 





[PATCH 0/9 v2] vfio-pci: add support for Freescale IOMMU (PAMU)

2013-11-18 Thread Bharat Bhushan
From: Bharat Bhushan bharat.bhus...@freescale.com

PAMU (the FSL IOMMU) has a concept of a primary window and subwindows.
The primary window corresponds to the complete guest IOVA address space
(including MSI space); with respect to the IOMMU API this is termed the
geometry. The IOVA base of a subwindow is determined from the number of
subwindows (configurable using the IOMMU API).
The MSI I/O page must be within the geometry and the maximum supported
subwindows, so the MSI I/O page is set up just after the guest memory
IOVA space.
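
In IOMMU-API terms of this era, that setup might look roughly as
follows; DOMAIN_ATTR_GEOMETRY and DOMAIN_ATTR_WINDOWS are the real
attributes, while the sizing logic is illustrative.

/*
 * Hedged sketch: the "geometry" is the primary window covering guest
 * RAM plus the MSI pages placed right after it; DOMAIN_ATTR_WINDOWS
 * then picks how many sub-windows the aperture is divided into.
 */
#include <linux/iommu.h>

static int pamu_geometry_sketch(struct iommu_domain *domain,
				u64 ram_size, u64 msi_size,
				u32 nr_subwindows)
{
	struct iommu_domain_geometry geom = {
		.aperture_start = 0,
		.aperture_end   = ram_size + msi_size - 1,
		.force_aperture = true,
	};
	int ret;

	ret = iommu_domain_set_attr(domain, DOMAIN_ATTR_GEOMETRY, &geom);
	if (ret)
		return ret;

	/* Enough sub-windows for RAM plus one per MSI bank. */
	return iommu_domain_set_attr(domain, DOMAIN_ATTR_WINDOWS,
				     &nr_subwindows);
}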

So patches 1/9-4/9 (inclusive) define the interface to get:
  - the number of MSI regions (which is the number of MSI banks for
    powerpc)
  - the MSI-region address range: the physical page which has the
    address/addresses used for generating MSI interrupts, and the size
    of the page.
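
A hedged sketch of that get-interface as a consumer might use it; the
function names follow the patch titles and the struct layout is a guess.

#include <linux/kernel.h>
#include <linux/types.h>

struct msi_region {			/* guessed layout */
	int region_num;		/* MSI bank number */
	dma_addr_t addr;	/* physical address of the MSI page */
	size_t size;		/* size of the page */
};

int msi_get_region_count(void);
int msi_get_region(int region_num, struct msi_region *region);

static void dump_msi_regions(void)
{
	struct msi_region region;
	int i, count = msi_get_region_count();

	for (i = 0; i < count; i++) {
		if (msi_get_region(i, &region))
			continue;	/* backend refused or absent */
		pr_info("MSI bank %d: addr 0x%llx size %zu\n", i,
			(unsigned long long)region.addr, region.size);
	}
}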

Patches 5/9-7/9 (inclusive) define the interface for setting up an MSI
IOVA base for an MSI region (bank) for a device, so that this configured
IOVA is used when the MSI message is composed.
Earlier we were using the IOMMU interface for getting the configured
IOVA, which was not correct; Alex Williamson suggested this type of
interface.
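
The intended compose-time effect might look like this sketch; all
signatures below are guesses from the description above.

/*
 * Hedged sketch: when VFIO has configured an IOVA for the device's MSI
 * bank, the MSI message is composed with that IOVA instead of the
 * physical MSI address.
 */
#include <linux/kernel.h>
#include <linux/msi.h>
#include <linux/pci.h>

/* hypothetical interface added by patches 5/9-7/9 */
int msi_set_iova(struct pci_dev *pdev, dma_addr_t iova);

static void compose_msi_msg_sketch(dma_addr_t msi_phys, dma_addr_t iova,
				   bool have_iova, u32 data,
				   struct msi_msg *msg)
{
	/* Prefer the VFIO-configured IOVA; fall back to the physical
	 * MSI address when none was set for this device. */
	dma_addr_t addr = have_iova ? iova : msi_phys;

	msg->address_hi = upper_32_bits(addr);
	msg->address_lo = lower_32_bits(addr);
	msg->data = data;
}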

Patch 8/9 moves some common functions into a separate file so that they
can be used by the FSL_PAMU implementation (the next patch uses them).
These will later be used for the iommu-none implementation. I believe we
can do more of this, but we will take it step by step.

Finally, the last patch actually adds the support for FSL-PAMU :)

v1-v2
 - Added an interface for setting the msi iova for an msi region for a
   device. Earlier I added an iommu interface for the same, but per
   review comments that was removed and a direct interface between vfio
   and msi was created.
 - Incorporated review comments (details are in the individual patches)

Bharat Bhushan (9):
  pci:msi: add weak function for returning msi region info
  pci: msi: expose msi region information functions
  powerpc: pci: Add arch specific msi region interface
  powerpc: msi: Extend the msi region interface to get info from
fsl_msi
  pci/msi: interface to set an iova for a msi region
  powerpc: pci: Extend msi iova page setup to arch specific
  pci: msi: Extend msi iova setting interface to powerpc arch
  vfio: moving some functions in common file
  vfio pci: Add vfio iommu implementation for FSL_PAMU

 arch/powerpc/include/asm/machdep.h |   10 +
 arch/powerpc/kernel/msi.c  |   28 +
 arch/powerpc/sysdev/fsl_msi.c  |  132 +-
 arch/powerpc/sysdev/fsl_msi.h  |   25 +-
 drivers/pci/msi.c  |   35 ++
 drivers/vfio/Kconfig   |6 +
 drivers/vfio/Makefile  |5 +-
 drivers/vfio/vfio_iommu_common.c   |  227 
 drivers/vfio/vfio_iommu_common.h   |   27 +
 drivers/vfio/vfio_iommu_fsl_pamu.c | 1003 
 drivers/vfio/vfio_iommu_type1.c|  206 +
 include/linux/msi.h|   14 +
 include/linux/pci.h|   21 +
 include/uapi/linux/vfio.h  |  100 
 14 files changed, 1623 insertions(+), 216 deletions(-)
 create mode 100644 drivers/vfio/vfio_iommu_common.c
 create mode 100644 drivers/vfio/vfio_iommu_common.h
 create mode 100644 drivers/vfio/vfio_iommu_fsl_pamu.c

