Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Fri, Sep 30, 2016 at 03:59:55PM -0400, Paul Moore wrote:
> > We also have iwarp vs rocee where AFAIK iwarp should get the vlan tag
> > from the IP socket that is allocated against the eth interface.
>
> Sigh.
>
> So we've got RDMA over IB (does this have an acronym? my googling

We just call that IB

> isn't showing anything ...), RoCEv1 which appears to be RDMA over

Technically (IIRC) RoCEv1 is exactly the IB protocol with an ethernet MAC
header tacked in front. It even has a slot for a pkey value, but no
switches will inspect it.

> Ethernet (although it looks like it might still use an IP header?),

The 'IP' header is an IB GRH, which is identical to an IPv6 header. We
call the IPv6 address in this header a GID.

> RoCEv2 which appears to be RDMA over UDP, and iWARP which seems to be
> RDMA over TCP/SCTP. Are there any others?

RoCEv2 is the IB protocol with a UDP header added in. iWARP is a unique
protocol that runs RDMA inside TCP.

> We've already talked about the RDMA/IB's pkeys and RoCEv1's GID/VLANs,
> but RoCEv2 and iWARP are a little more interesting as they ride on top
> of a routable network transport. Do RoCEv2 and iWARP use the kernel's
> stack, or is that off-loaded?

Generally all off-loaded. There is one software implementation, but it is
not used for anything serious. Well, maybe two IB drivers don't offload
this, I'm not sure.

> Actually, now that I think of it, RoCEv2 and iWARP are probably
> implemented as userspace libraries aren't they?

Nope, there is a userspace library component, but the kernel is largely in
charge. They are sort of distinct from the netstack, but part of the RDMA
stack. It is very confusing because netdev is ideologically opposed (for
good reason) to any form of offload, so even though these devices use the
same physical network port, and use IP headers, they are not very well
integrated.

Eg iWARP calls out to a userspace process which opens a socket to reserve
a port number and then feeds that back into the kernel to set up IP
headers which are safe to use. :\ The NIC steals those packets before the
kernel ever sees them, processes them with an internal 'CPU', and then
feeds the QP infrastructure. (This is what is meant by the term offload.)

This also means that likely all the SELinux protections that apply to
ethernet are merrily voided by all this offload hardware, and AFAIK nobody
has done any work to try and do something about that.

So Liran is right, when we talk about iWARP/RoCEv2 the SELinux stuff
should follow the ethernet stack.

However, every IB port typically has some number of child ipoib netdevices
as well, and those devices also specify a pkey. This is where the
namespace patches source their pkey information from. I don't know why a
different approach is proposed for selinux. (Well, aside from the fact the
namespace patches were never completed and basically don't work for strong
isolation..)

.. and that is my basic concern, that selinux will get one patch series
and be left essentially incomplete like namespaces were.

Jason

___
Selinux mailing list
Selinux@tycho.nsa.gov
To unsubscribe, send email to selinux-le...@tycho.nsa.gov.
To get help, send an email containing "help" to selinux-requ...@tycho.nsa.gov.
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Thu, Sep 29, 2016 at 6:41 PM, Jason Gunthorpe wrote:
> On Thu, Sep 29, 2016 at 06:16:03PM -0400, Paul Moore wrote:
>> The queue pair (QP) concept lives in the RDMA layer and isn't tied to
>> any particular transport. They appear to be somewhat analogous to
>> network sockets, although I'm guessing they can't be shared/passed
>> between processes like a network socket, yes?
>
> Yes

Okay, that should make life easier.

>> The IB partition is similar to an ethernet VLAN in that it provides
>> enforced separation across the network; IB uses partition keys, VLANs
>> use tags/IDs. IB partition keys are a 16 bit number,
>>
>> GIDs appear to be a 16 byte number created from some combination of
>> IP address, MAC address, and VLAN ID.
>
> There are several gid formats
>
> IB/OPA: 128 bit IPv6 address
> RoCEv1: Sort of a link-local IPv6 (?), vlan is specified directly
>         by apps
> RoCEv2: Some sort of label that also implies a vlan tag

Thanks for the extra information, but at this point I don't think the
exact format is important; I'm just trying to get a basic understanding of
what we might need to do.

> We also have iwarp vs rocee where AFAIK iwarp should get the vlan tag
> from the IP socket that is allocated against the eth interface.

Sigh.

So we've got RDMA over IB (does this have an acronym? my googling isn't
showing anything ...), RoCEv1 which appears to be RDMA over Ethernet
(although it looks like it might still use an IP header?), RoCEv2 which
appears to be RDMA over UDP, and iWARP which seems to be RDMA over
TCP/SCTP. Are there any others?

We've already talked about the RDMA/IB's pkeys and RoCEv1's GID/VLANs, but
RoCEv2 and iWARP are a little more interesting as they ride on top of a
routable network transport. Do RoCEv2 and iWARP use the kernel's stack, or
is that off-loaded?

Actually, now that I think of it, RoCEv2 and iWARP are probably
implemented as userspace libraries, aren't they? The kernel probably
doesn't know or care about these protocols at all, or does it?

>> In the case of RDMA over IB, we want to control QP access to
>> partitions/pkeys; in the case of RDMA over ethernet we want to control
>> QP access to VLANs/GIDs.
>
> Broadly, yes, and I don't know what restriction iwarp would
> need. Probably restrict access based on the eth device, but that will
> probably need additional selinux checking in the rdma core.
>
> There are also UD QPs which are like UDP sockets, so every address
> handle creation will need a security check too.

--
paul moore
www.paul-moore.com
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Thu, Sep 29, 2016 at 06:16:03PM -0400, Paul Moore wrote:
> The queue pair (QP) concept lives in the RDMA layer and isn't tied to
> any particular transport. They appear to be somewhat analogous to
> network sockets, although I'm guessing they can't be shared/passed
> between processes like a network socket, yes?

Yes

> The IB partition is similar to an ethernet VLAN in that it provides
> enforced separation across the network; IB uses partition keys, VLANs
> use tags/IDs. IB partition keys are a 16 bit number,
>
> GIDs appear to be a 16 byte number created from some combination of
> IP address, MAC address, and VLAN ID.

There are several gid formats

IB/OPA: 128 bit IPv6 address
RoCEv1: Sort of a link-local IPv6 (?), vlan is specified directly by apps
RoCEv2: Some sort of label that also implies a vlan tag

We also have iwarp vs rocee where AFAIK iwarp should get the vlan tag
from the IP socket that is allocated against the eth interface.

> In the case of RDMA over IB, we want to control QP access to
> partitions/pkeys; in the case of RDMA over ethernet we want to control
> QP access to VLANs/GIDs.

Broadly, yes, and I don't know what restriction iwarp would need.
Probably restrict access based on the eth device, but that will
probably need additional selinux checking in the rdma core.

There are also UD QPs which are like UDP sockets, so every address
handle creation will need a security check too.

Jason
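[Editorially added sketch] Since the thread keeps returning to "subnet prefix" as a labeling key, a minimal illustration of the GID layout Jason describes may help: an IB/OPA GID is 16 bytes shaped like an IPv6 address, and the subnet prefix is its upper 64 bits. The helper name and sample GID below are invented for illustration, not part of any real API.

```python
# An IB GID is 16 bytes with the same layout as an IPv6 address:
# the top 64 bits are the subnet prefix (assigned by the SM; the
# default, link-local-style prefix is fe80::/64) and the bottom
# 64 bits are the port's interface ID.
DEFAULT_GID_PREFIX = 0xFE80_0000_0000_0000


def gid_subnet_prefix(gid: bytes) -> int:
    """Return the 64-bit subnet prefix of a 16-byte GID."""
    assert len(gid) == 16, "a GID is exactly 16 bytes"
    return int.from_bytes(gid[:8], "big")


# A made-up GID on the default subnet: prefix fe80::/64 plus an
# EUI-64-style interface ID.
gid = bytes.fromhex("fe80000000000000" "0002c90300a1b2c3")
assert gid_subnet_prefix(gid) == DEFAULT_GID_PREFIX
```

This is why a (subnet prefix, pkey) tuple can be derived from any port's GID table without referring to local device names.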
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Fri, Sep 23, 2016 at 9:26 AM, Daniel Jurgens wrote:
> On 9/20/2016 6:43 PM, Paul Moore wrote:
>> On Tue, Sep 6, 2016 at 4:02 PM, Jason Gunthorpe wrote:
>>> On Thu, Sep 01, 2016 at 02:06:46PM -0400, Paul Moore wrote:
>>>> Jason and/or Daniel, I think it would be helpful if you could explain
>>>> both the InfiniBand and IP based approaches for those of us who know
>>>> SELinux, but not necessarily the RDMA and InfiniBand portions of this
>>>> discussion. Be verbose and explain it as if we were idiots (I get
>>>> called that enough, it must be true).
>>> Well, I'm not really familiar with SELinux, I know a little bit about
>>> how labels are applied in the netstack, but not that much...
>>>
>>> The RDMA subsystem supports 4 different networking standards, and they
>>> each have their own objects..
>> All right, I'm done traveling for a bit and it seems like this
>> discussion has settled into a stalemate, so let's try to pick things
>> back up and sort this out.
>>
>> Starting with a better RDMA education for me.
>>
>> So far the discussion has been around providing access controls at the
>> transport layer; are there any RDMA entities that are transport
>> agnostic that might be better suited for what we are trying to do? Or
>> is it simply that the RDMA layer is tied so tightly to the underlying
>> transport that we can't separate the two and have to consider them as
>> one?
>
> Welcome back Paul.
>
> I don't think there is a transport agnostic way to provide the kind of
> control I use in this patch set, which is very Infiniband specific. RoCE
> uses VLANs and they are conceptually similar to subnet partitions, but
> the means of using them is completely different. To use a different VLAN
> the user must select a GID for that VLAN. One could provide a means to
> control RoCE access to VLANs by labeling GIDs and controlling them in a
> similar way to how I do PKeys. That approach doesn't help with
> Infiniband partitions though, because the same GID can be used on
> multiple partitions. It's also not very desirable from a policy writer's
> perspective because it makes it so a bespoke policy is required per node.
>
> Regardless of any other approaches one might like to use to provide
> access control for RDMA non-Infiniband transport, I think controlling
> access to Infiniband PKeys is still a desirable feature and I don't see
> any other way to have that.

Let me try to summarize and work through some of this stuff, please
correct me if any of this is wrong.

The queue pair (QP) concept lives in the RDMA layer and isn't tied to any
particular transport. They appear to be somewhat analogous to network
sockets, although I'm guessing they can't be shared/passed between
processes like a network socket, yes?

The IB partition is similar to an ethernet VLAN in that it provides
enforced separation across the network; IB uses partition keys, VLANs use
tags/IDs. IB partition keys are a 16 bit number,

GIDs appear to be a 16 byte number created from some combination of IP
address, MAC address, and VLAN ID.

In the case of RDMA over IB, we want to control QP access to
partitions/pkeys; in the case of RDMA over ethernet we want to control QP
access to VLANs/GIDs.

Is the above correct?

--
paul moore
www.paul-moore.com
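[Editorially added sketch] One detail about the 16-bit partition keys mentioned above that matters for any labeling scheme: the high bit of a pkey is the membership bit (full vs limited member), so only the low 15 bits actually name the partition. A toy illustration (the function name is invented):

```python
# An IB pkey is 16 bits: bit 15 is the membership bit (1 = full
# member, 0 = limited member) and the low 15 bits identify the
# partition. Two pkeys refer to the same partition when their low
# 15 bits match, regardless of membership.
FULL_MEMBER_BIT = 0x8000
PARTITION_MASK = 0x7FFF


def same_partition(pkey_a: int, pkey_b: int) -> bool:
    """True if two pkeys name the same partition."""
    return (pkey_a & PARTITION_MASK) == (pkey_b & PARTITION_MASK)


# The default pkey 0xFFFF (full member) and 0x7FFF (limited member)
# name the same partition.
assert same_partition(0xFFFF, 0x7FFF)
assert not same_partition(0x8001, 0x8002)
```

A policy that keys on raw 16-bit pkey values would treat full and limited membership in one partition as distinct objects, which may or may not be what a policy writer intends.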
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Tue, Sep 20, 2016 at 07:43:34PM -0400, Paul Moore wrote:
> So far the discussion has been around providing access controls at the
> transport layer; are there any RDMA entities that are transport
> agnostic that might be better suited for what we are trying to do? Or
> is it simply that the RDMA layer is tied so tightly to the underlying
> transport that we can't separate the two and have to consider them as
> one?

The generic RDMA layer is called 'rdmacm', and it is the layer that
Mellanox's already-applied RDMA namespace enablement patches worked at.

Jason
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On 9/20/2016 6:43 PM, Paul Moore wrote:
> On Tue, Sep 6, 2016 at 4:02 PM, Jason Gunthorpe wrote:
>> On Thu, Sep 01, 2016 at 02:06:46PM -0400, Paul Moore wrote:
>>> Jason and/or Daniel, I think it would be helpful if you could explain
>>> both the InfiniBand and IP based approaches for those of us who know
>>> SELinux, but not necessarily the RDMA and InfiniBand portions of this
>>> discussion. Be verbose and explain it as if we were idiots (I get
>>> called that enough, it must be true).
>> Well, I'm not really familiar with SELinux, I know a little bit about
>> how labels are applied in the netstack, but not that much...
>>
>> The RDMA subsystem supports 4 different networking standards, and they
>> each have their own objects..
> All right, I'm done traveling for a bit and it seems like this
> discussion has settled into a stalemate, so let's try to pick things
> back up and sort this out.
>
> Starting with a better RDMA education for me.
>
> So far the discussion has been around providing access controls at the
> transport layer; are there any RDMA entities that are transport
> agnostic that might be better suited for what we are trying to do? Or
> is it simply that the RDMA layer is tied so tightly to the underlying
> transport that we can't separate the two and have to consider them as
> one?

Welcome back Paul.

I don't think there is a transport agnostic way to provide the kind of
control I use in this patch set, which is very Infiniband specific. RoCE
uses VLANs and they are conceptually similar to subnet partitions, but the
means of using them is completely different. To use a different VLAN the
user must select a GID for that VLAN. One could provide a means to control
RoCE access to VLANs by labeling GIDs and controlling them in a similar
way to how I do PKeys. That approach doesn't help with Infiniband
partitions though, because the same GID can be used on multiple
partitions. It's also not very desirable from a policy writer's
perspective because it makes it so a bespoke policy is required per node.

Regardless of any other approaches one might like to use to provide access
control for RDMA non-Infiniband transport, I think controlling access to
Infiniband PKeys is still a desirable feature and I don't see any other
way to have that.
RE: [PATCH v3 0/9] SELinux support for Infiniband RDMA
> From: ira.weiny [mailto:ira.we...@intel.com]
>
> > It really isn't. net ports and service_ids are global things that do
> > not need machine-specific customizations while subnet prefix or device
> > name/port are both machine-local information.
>
> I agree that service_ids are more analogous to net ports.
>
> However, subnet prefixes are _not_ machine-local. They are controlled by
> the Admin of the fabric by a central entity (the SM). This is more
> helpful than in ethernet, where if you configure the wrong port with the
> wrong subnet things just don't work. In IB I can physically plug my
> network into any IB port I want and the system is _told_ which "subnet"
> that port belongs to. (OPA is the same way.)
>
> So for IB/OPA a subnet prefix is a really good way to ID which network
> (subnet) you want to use. Unfortunately, I'm not sure how to translate
> that to iwarp/roce seamlessly except to have some concept of "domain" as
> I mentioned in my other email.

Exactly.

The identity of both the "domain" (the subnet ID) and the "label" stem
from a central entity - the SM. It would be very natural to have IB/OPA
subnet policies that are configured in all hosts and the SM. These
policies are automatically enforced for any port connected to the subnet.

Not everything needs to be related to IP interfaces. I can envision
multiple jobs in the cluster, running on distinct partitions using
distinct security tags, without configuring IP interfaces on these
partitions.

Partition security is a useful and an effective measure that is applicable
to IB/OPA networks. That's it.

Ethernet VLANs are a totally different thing --- SELinux *already* handles
them for Ethernet interfaces. There is nothing special from an admin's
point of view regarding how SELinux applies to RDMA over Ethernet
(RoCE/iWarp). RDMA is just another transport, and any Ethernet L2 policies
should apply to it seamlessly.

--Liran
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Thu, Sep 08, 2016 at 01:32:35PM -0600, Jason Gunthorpe wrote:
> On Thu, Sep 08, 2016 at 06:59:13PM +0000, Daniel Jurgens wrote:
> > > > Net has variety of means of enforcement, one of which is
> > > > controlling access to ports, which is the most like what
> > > > I'm doing here.
> > > No, the analog to the tcp/udp port number is the service_id
> > I should have been clearer here. From the SELinux perspective this
> > scheme is very similar to net ports.
>
> It really isn't. net ports and service_ids are global things that do
> not need machine-specific customizations while subnet prefix or device
> name/port are both machine-local information.

I agree that service_ids are more analogous to net ports.

However, subnet prefixes are _not_ machine-local. They are controlled by
the Admin of the fabric by a central entity (the SM). This is more helpful
than in ethernet, where if you configure the wrong port with the wrong
subnet things just don't work. In IB I can physically plug my network into
any IB port I want and the system is _told_ which "subnet" that port
belongs to. (OPA is the same way.)

So for IB/OPA a subnet prefix is a really good way to ID which network
(subnet) you want to use. Unfortunately, I'm not sure how to translate
that to iwarp/roce seamlessly except to have some concept of "domain" as I
mentioned in my other email.

Ira
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Tue, Sep 6, 2016 at 4:02 PM, Jason Gunthorpe wrote:
> On Thu, Sep 01, 2016 at 02:06:46PM -0400, Paul Moore wrote:
>
>> Jason and/or Daniel, I think it would be helpful if you could explain
>> both the InfiniBand and IP based approaches for those of us who know
>> SELinux, but not necessarily the RDMA and InfiniBand portions of this
>> discussion. Be verbose and explain it as if we were idiots (I get
>> called that enough, it must be true).
>
> Well, I'm not really familiar with SELinux, I know a little bit about
> how labels are applied in the netstack, but not that much...
>
> The RDMA subsystem supports 4 different networking standards, and they
> each have their own objects..

All right, I'm done traveling for a bit and it seems like this discussion
has settled into a stalemate, so let's try to pick things back up and sort
this out.

Starting with a better RDMA education for me.

So far the discussion has been around providing access controls at the
transport layer; are there any RDMA entities that are transport agnostic
that might be better suited for what we are trying to do? Or is it simply
that the RDMA layer is tied so tightly to the underlying transport that we
can't separate the two and have to consider them as one?

--
paul moore
www.paul-moore.com
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Thu, Sep 08, 2016 at 01:35:12PM -0600, Jason Gunthorpe wrote:
> On Thu, Sep 08, 2016 at 03:14:57PM -0400, ira.weiny wrote:
> > On Thu, Sep 08, 2016 at 10:19:48AM -0600, Jason Gunthorpe wrote:
> > > On Thu, Sep 08, 2016 at 02:12:48PM +0000, Daniel Jurgens wrote:
> > > > It would have to include the port, but the idea of using a device
> > > > name for this is pretty ugly. Using the subnet prefix makes it
> > > > very easy to write a policy that can be deployed widely. Using
> > > > device names could require many different policies depending on
> > > > the configuration of each machine.
> > > What does net do? Should we have a way to uniformly label the rdma
> > > ports?
> > Uniformly label them on the local node or across a cluster?
>
> However we want. If the argument comes down to 'we stupidly choose to
> call our devices mlx5_0', then let's allow the admin to rename that to
> 'rdma0' and a cluster-wide config file will apply uniformly. This
> approach applies to all configuration related to rdma, not just
> SELinux.

I'm not sure I like the idea of trying to use "rdmaX". It seems like this
has been a confusion point for things like drives and NICs in the past.
(Where the order of device discovery is an issue.) But I guess with more
network types coming online we may have to have something generic. That
said, in the netdev world not all things are called eth0. Some are called
wlanX, etc... Does anyone know why they have names based on network type?

So I could see where having a global "name" for a subnet would be nice...
But isn't something like that called a domain name? Does SELinux work in
conjunction with domain names in the netdev stack?

This may be a bit off topic, but has anyone thought about adding
GID-specific DNS record types? I have experimented with just putting a GID
in an IPv6 record and the things I tried work quite well. Should we have a
method to map a domain name to a subnet prefix? If the domain name mapped
to a subnet prefix it would imply a set of port GIDs on IB/OPA devices,
and if it mapped to an IPv4/v6 subnet it would be iwarp/roce/usnic.

For this series and others, the kernel could continue to use the correct
"subnet" information and user space could translate as appropriate?

Would this series work looking at a "subnet prefix" of an IPv6 address in
RoCE?

Ira
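[Editorially added sketch] Ira's name-to-subnet idea could be pictured roughly like this; the mapping table, names, and classification rule are all hypothetical, invented for illustration, not an existing interface:

```python
# Hypothetical: an admin-maintained mapping from a fabric "name"
# to a subnet. An IPv6 link-local-style prefix implies an IB/OPA
# subnet prefix; an IPv4/v6 routable subnet implies iwarp/roce.
import ipaddress

fabric_map = {
    "hpc-fabric": ipaddress.ip_network("fe80::/64"),     # IB/OPA subnet prefix
    "storage-net": ipaddress.ip_network("10.2.0.0/16"),  # iWARP/RoCE subnet
}


def transport_family(name: str) -> str:
    """Classify which RDMA transport family a named subnet implies."""
    net = fabric_map[name]
    if net.version == 6 and net.subnet_of(ipaddress.ip_network("fe80::/10")):
        return "IB/OPA"
    return "iWARP/RoCE"


assert transport_family("hpc-fabric") == "IB/OPA"
assert transport_family("storage-net") == "iWARP/RoCE"
```

Under this sketch, policy would be written against stable names while the kernel keeps using the underlying subnet information, as Ira suggests.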
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Thu, Sep 08, 2016 at 03:14:57PM -0400, ira.weiny wrote:
> On Thu, Sep 08, 2016 at 10:19:48AM -0600, Jason Gunthorpe wrote:
> > On Thu, Sep 08, 2016 at 02:12:48PM +0000, Daniel Jurgens wrote:
> > > It would have to include the port, but the idea of using a device
> > > name for this is pretty ugly. Using the subnet prefix makes it very
> > > easy to write a policy that can be deployed widely. Using device
> > > names could require many different policies depending on the
> > > configuration of each machine.
> >
> > What does net do? Should we have a way to uniformly label the rdma
> > ports?
>
> Uniformly label them on the local node or across a cluster?

However we want. If the argument comes down to 'we stupidly choose to call
our devices mlx5_0', then let's allow the admin to rename that to 'rdma0'
and a cluster-wide config file will apply uniformly. This approach applies
to all configuration related to rdma, not just SELinux.

> > If they are not written to disk I don't see the problem, the dynamic
> > injector will have to figure out what interface is what.
>
> Who is the "dynamic injector"?

Docker, for instance.

Jason
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Thu, Sep 08, 2016 at 06:59:13PM +0000, Daniel Jurgens wrote:
> > > Net has variety of means of enforcement, one of which is controlling
> > > access to ports, which is the most like what I'm doing here.
> > No, the analog to the tcp/udp port number is the service_id
> I should have been clearer here. From the SELinux perspective this
> scheme is very similar to net ports.

It really isn't. net ports and service_ids are global things that do not
need machine-specific customizations, while subnet prefix or device
name/port are both machine-local information.

> > > with this aside from it being where the policy is stored before
> > > being loaded. What is this dynamic injector you are talking about?
> > The container projects (eg docker) somehow setup selinux on the
> > fly for each container. I'm not sure how.
> SELinux policy is modular and can be changed or updated while
> running. I'm not very familiar with docker, so I'm not sure what they
> do regarding SELinux. I'm also not sure it's relevant to the issues
> at hand.

docker and the like would seem to be the #1 user of this kind of feature;
it goes hand in hand with the ipoib namespace work that does a similar
(but less complete) thing. This is a great way to create a container and
constrain it to a single pkey/vlan/ipoib device, which would be the basic
capability needed to sensibly use rdma and containers together. This is
why thinking about how to fully support the pkey/vlan concept across all
the rdma drivers seems so critical. I'm surprised this isn't your use
case.

Again, I wish you'd think more broadly before designing new uapis.
selinux enabling the rdma subsystem is a whole new uapi aspect for rdma
that we have to live with forever.

> > > called mlx5_0, another mlx4_0 and you want to grant access to
> > > system administrators.
> > So do this in userspace? Why should the kernel do the translation?
> I'm still not clear on what translation you are talking about.

Converting the subnet prefix to a list of physical ports.

Jason
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On 9/8/2016 1:38 PM, Jason Gunthorpe wrote:
> On Thu, Sep 08, 2016 at 05:47:46PM +0000, Liran Liss wrote:
>
>> This patch-set enables partition-based isolation for Infiniband
>> networks in a very intuitive manner, that's it.
>> IB partitions don't have anything to do with VLANs.
> You guys need to do a better job at supporting the whole subsystem
> when you propose new uapi features.
>
> Jason

The uapi of this subsystem isn't changed.
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Thu, Sep 08, 2016 at 10:19:48AM -0600, Jason Gunthorpe wrote:
> On Thu, Sep 08, 2016 at 02:12:48PM +0000, Daniel Jurgens wrote:
>
> > It would have to include the port, but the idea of using a device name
> > for this is pretty ugly. Using the subnet prefix makes it very easy to
> > write a policy that can be deployed widely. Using device names could
> > require many different policies depending on the configuration of each
> > machine.
>
> What does net do? Should we have a way to uniformly label the rdma
> ports?

Uniformly label them on the local node or across a cluster?

I think Daniel has a point here. Given a node with multiple device/ports,
using the local device names is IMO wrong.

> How do you imagine these policies working anyhow? They cannot be
> shipped from a distro. Are these going to be labeled on filesystem
> objects? (how does that work??) Or somehow injected when starting a
> container?
>
> If they are not written to disk I don't see the problem, the dynamic
> injector will have to figure out what interface is what.

Who is the "dynamic injector"?

Ira
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On 9/8/2016 1:36 PM, Jason Gunthorpe wrote:
> On Thu, Sep 08, 2016 at 04:44:36PM +0000, Daniel Jurgens wrote:
>
>> Net has variety of means of enforcement, one of which is controlling
>> access to ports, which is the most like what I'm doing here.
> No, the analog to the tcp/udp port number is the service_id

I should have been clearer here. From the SELinux perspective this scheme
is very similar to net ports.

>> It will work like any other SELinux policy. You label the things
>> you want to control with a type and setup rules about which
>> roles/types can interact with them and how. I'm sure the default
>> policy from distros will be to not restrict access. Policy is
>> loaded into the kernel, the disk and filesystem has nothing to do
> Eh? I thought the main utility of selinux was using the labels written
> to the filesystem to constrain access, eg I might label
> /usr/bin/apache in a way that gets the policy applied to it.

Filesystems can be labeled, but so can other things without a filesystem
representation.

>> with this aside from it being where the policy is stored before
>> being loaded. What is this dynamic injector you are talking about?
> The container projects (eg docker) somehow setup selinux on the
> fly for each container. I'm not sure how.

SELinux policy is modular and can be changed or updated while running. I'm
not very familiar with docker, so I'm not sure what they do regarding
SELinux. I'm also not sure it's relevant to the issues at hand.

>> Assume you have machines on one subnet (0xfe80::), one has a device
>> called mlx5_0, another mlx4_0, and you want to grant access to
>> system administrators.
> So do this in userspace? Why should the kernel do the translation?

I'm still not clear on what translation you are talking about. To look up
a label for something, the kernel uses the same attributes the policy
writer used to create the label. In this patch set, when modify_qp is
called there is a search of all the labels for pkeys for one that matches
the subnet prefix for the relevant port and the pkey number.
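[Editorially added sketch] Daniel's description of the modify_qp-time search can be pictured with a toy model; the table, type names, and default label below are invented for illustration (the real implementation is kernel C and differs):

```python
# Toy model of the lookup Daniel describes: the security hook
# searches the loaded policy's pkey labels for an entry matching
# (subnet prefix of the relevant port, pkey number), falling back
# to a default label when nothing matches.
DEFAULT_LABEL = "system_u:object_r:default_ib_pkey_t"

# (subnet_prefix, pkey) -> label; contents are made up.
pkey_labels = {
    (0xFE80_0000_0000_0000, 0x8001): "system_u:object_r:admin_pkey_t",
    (0xFE80_0000_0000_0000, 0x8022): "system_u:object_r:job42_pkey_t",
}


def lookup_pkey_label(subnet_prefix: int, pkey: int) -> str:
    """Return the label governing a QP's access to this pkey."""
    return pkey_labels.get((subnet_prefix, pkey), DEFAULT_LABEL)


assert lookup_pkey_label(0xFE80_0000_0000_0000, 0x8022) \
    == "system_u:object_r:job42_pkey_t"
assert lookup_pkey_label(0xFE80_0000_0000_0000, 0x9999) == DEFAULT_LABEL
```

The key point of the design argument is visible here: because the lookup keys on the subnet prefix rather than a device name, the same table can be deployed unchanged on a node with mlx5_0 and a node with mlx4_0.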
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Thu, Sep 08, 2016 at 05:47:46PM +0000, Liran Liss wrote:
> This patch-set enables partition-based isolation for Infiniband networks
> in a very intuitive manner, that's it.
> IB partitions don't have anything to do with VLANs.

You guys need to do a better job at supporting the whole subsystem when
you propose new uapi features.

Jason
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Thu, Sep 08, 2016 at 04:44:36PM +0000, Daniel Jurgens wrote:
> Net has variety of means of enforcement, one of which is controlling
> access to ports, which is the most like what I'm doing here.

No, the analog to the tcp/udp port number is the service_id

> It will work like any other SELinux policy. You label the things
> you want to control with a type and setup rules about which
> roles/types can interact with them and how. I'm sure the default
> policy from distros will be to not restrict access. Policy is
> loaded into the kernel, the disk and filesystem has nothing to do

Eh? I thought the main utility of selinux was using the labels written to
the filesystem to constrain access, eg I might label /usr/bin/apache in a
way that gets the policy applied to it.

> with this aside from it being where the policy is stored before
> being loaded. What is this dynamic injector you are talking about?

The container projects (eg docker) somehow set up selinux on the fly for
each container. I'm not sure how.

> Assume you have machines on one subnet (0xfe80::), one has a device
> called mlx5_0, another mlx4_0, and you want to grant access to
> system administrators.

So do this in userspace? Why should the kernel do the translation?

Jason
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Thu, Sep 08, 2016 at 02:12:48PM +, Daniel Jurgens wrote: > On 9/7/2016 7:01 PM, ira.weiny wrote: > > On Tue, Sep 06, 2016 at 03:55:48PM -0600, Jason Gunthorpe wrote: > >> On Tue, Sep 06, 2016 at 08:35:56PM +, Daniel Jurgens wrote: > >> > >>> I think to control access to a VLAN for RoCE there would have to > >>> be labels for GIDs, since that's how you select which VLAN to use. > >> Since people are talking about using GIDs for containers adding a GID > >> constraint for all technologies makes sense to me.. > >> > >> But rocev1 (at least mlx4) does not use vlan ids from the GID, the > >> vlan id is set directly in the id, so it still seems to need direct > >> containment. I also see vlan related stuff in the iwarp providers, so > >> they probably have a similar requirement. > >> > >>> required. RDMA device handle labeling isn't granular enough for > >>> what I'm trying to accomplish. We want users with different levels > >>> of permission to be able to use the same device, but restrict who > >>> they can communicate with by isolating them to separate partitions. > >> Sure, but maybe you should use the (device handle:pkey/vlan_id) as your > >> labeling tuple not (Subnet Prefix, pkey) > > Would "device handle" here specify the port? > > > > Ira > > It would have to include the port, but the idea of using a device name for this > is pretty ugly. Using the subnet prefix makes it very easy to write a policy > that can be deployed widely. Using a device name could require many > different policies depending on the configuration of each machine. > I agree that this seems weird. Using the Subnet prefix seems much safer in an IB/OPA environment. That would be my vote. Unfortunately I don't have enough knowledge of the net stack to know how this would work with RoCE or iWarp. > I've added Liran Liss, he devised the approach that's implemented. This > would be a pretty big change, with worse usability so I'd like to get his > feedback.
> Sounds good, Ira
RE: [PATCH v3 0/9] SELinux support for Infiniband RDMA
> From: Daniel Jurgens > It would have to include the port, but the idea of using a device name for this is > pretty ugly. Using the subnet prefix makes it very easy to write a policy that > can be deployed widely. Using a device name could require many different > policies depending on the configuration of each machine. > > I've added Liran Liss, he devised the approach that's implemented. This would > be a pretty big change, with worse usability so I'd like to get his feedback. This patch-set enables partition-based isolation for Infiniband networks in a very intuitive manner, that's it. IB partitions don't have anything to do with VLANs. --Liran
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On 9/8/2016 11:20 AM, Jason Gunthorpe wrote: > On Thu, Sep 08, 2016 at 02:12:48PM +, Daniel Jurgens wrote: > >> It would have to include the port, but the idea of using a device name >> for this is pretty ugly. Using the subnet prefix makes it very easy to >> write a policy that can be deployed widely. Using a device name >> could require many different policies depending on the configuration >> of each machine. > What does net do? Should we have a way to uniformly label the rdma ports? > > How do you imagine these policies working anyhow? They cannot be > shipped from a distro. Are these going to be labeled on filesystem > objects? (how does that work??) Or somehow injected when starting a > container? > > If they are not written to disk I don't see the problem, the dynamic > injector will have to figure out what interface is what. > > Jason > Net has a variety of means of enforcement, one of which is controlling access to ports, which is the most like what I'm doing here. They also have other enforcement options that can't work for RDMA because it bypasses the kernel. It will work like any other SELinux policy. You label the things you want to control with a type and set up rules about which roles/types can interact with them and how. I'm sure the default policy from distros will be to not restrict access. Policy is loaded into the kernel; the disk and filesystem have nothing to do with this aside from it being where the policy is stored before being loaded. What is this dynamic injector you are talking about? Assume you have machines on one subnet (0xfe80::), one has a device called mlx5_0, another mlx4_0, and you want to grant access to system administrators. This hypothetical policy could be deployed on both:

pkeycon 0xfe80:: 0x gen_context(system_u:object_r:default_pkey_t);
allow sysadm_t default_pkey_t access;

If we use a device name you'd need to write separate policy for each node.
pkeyvlancon mlx4_0 1 0x gen_context(system_u:object_r:default_pkey_t);

or

pkeyvlancon mlx5_0 1 0x gen_context(system_u:object_r:default_pkey_t);
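[Editor's note: the two labeling schemes in the examples above differ only in the key used to look up a security context. A minimal Python sketch of the subnet-prefix variant; all names and the 0xFFFF pkey value are illustrative assumptions (the mail's actual value is truncated), and the real policy is compiled and loaded into the kernel, not evaluated in userspace like this.]

```python
# Hypothetical table mirroring a pkeycon statement:
# (subnet prefix, pkey) -> SELinux context. Names are illustrative only.
PKEY_CONTEXTS = {
    ("fe80::", 0xFFFF): "system_u:object_r:default_pkey_t",
}

DEFAULT_CONTEXT = "system_u:object_r:unlabeled_t"

def pkey_context(subnet_prefix: str, pkey: int) -> str:
    """Resolve the context for a (subnet prefix, pkey) pair."""
    return PKEY_CONTEXTS.get((subnet_prefix, pkey), DEFAULT_CONTEXT)
```

Keyed this way, the same table works on the machine with mlx5_0 and the one with mlx4_0; keyed by device name, each machine would need its own entries.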
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Thu, Sep 08, 2016 at 02:12:48PM +, Daniel Jurgens wrote: > It would have to include the port, but the idea of using a device name > for this is pretty ugly. Using the subnet prefix makes it very easy to > write a policy that can be deployed widely. Using a device name > could require many different policies depending on the configuration > of each machine. What does net do? Should we have a way to uniformly label the rdma ports? How do you imagine these policies working anyhow? They cannot be shipped from a distro. Are these going to be labeled on filesystem objects? (how does that work??) Or somehow injected when starting a container? If they are not written to disk I don't see the problem, the dynamic injector will have to figure out what interface is what. Jason
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On 9/7/2016 7:01 PM, ira.weiny wrote: > On Tue, Sep 06, 2016 at 03:55:48PM -0600, Jason Gunthorpe wrote: >> On Tue, Sep 06, 2016 at 08:35:56PM +, Daniel Jurgens wrote: >> >>> I think to control access to a VLAN for RoCE there would have to >>> be labels for GIDs, since that's how you select which VLAN to use. >> Since people are talking about using GIDs for containers adding a GID >> constraint for all technologies makes sense to me.. >> >> But rocev1 (at least mlx4) does not use vlan ids from the GID, the >> vlan id is set directly in the id, so it still seems to need direct >> containment. I also see vlan related stuff in the iwarp providers, so >> they probably have a similar requirement. >> >>> required. RDMA device handle labeling isn't granular enough for >>> what I'm trying to accomplish. We want users with different levels >>> of permission to be able to use the same device, but restrict who >>> they can communicate with by isolating them to separate partitions. >> Sure, but maybe you should use the (device handle:pkey/vlan_id) as your >> labeling tuple not (Subnet Prefix, pkey) > Would "device handle" here specify the port? > > Ira It would have to include the port, but the idea of using a device name for this is pretty ugly. Using the subnet prefix makes it very easy to write a policy that can be deployed widely. Using a device name could require many different policies depending on the configuration of each machine. I've added Liran Liss, he devised the approach that's implemented. This would be a pretty big change, with worse usability so I'd like to get his feedback.
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Tue, Sep 06, 2016 at 03:55:48PM -0600, Jason Gunthorpe wrote: > On Tue, Sep 06, 2016 at 08:35:56PM +, Daniel Jurgens wrote: > > > I think to control access to a VLAN for RoCE there would have to > > be labels for GIDs, since that's how you select which VLAN to use. > > Since people are talking about using GIDs for containers adding a GID > constraint for all technologies makes sense to me.. > > But rocev1 (at least mlx4) does not use vlan ids from the GID, the > vlan id is set directly in the id, so it still seems to need direct > containment. I also see vlan related stuff in the iwarp providers, so > they probably have a similar requirement. > > > required. RDMA device handle labeling isn't granular enough for > > what I'm trying to accomplish. We want users with different levels > > of permission to be able to use the same device, but restrict who > > they can communicate with by isolating them to separate partitions. > > Sure, but maybe you should use the (device handle:pkey/vlan_id) as your > labeling tuple not (Subnet Prefix, pkey) Would "device handle" here specify the port? Ira > > Jason
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Tue, Sep 06, 2016 at 08:35:56PM +, Daniel Jurgens wrote: > I think to control access to a VLAN for RoCE there would have to > be labels for GIDs, since that's how you select which VLAN to use. Since people are talking about using GIDs for containers adding a GID constraint for all technologies makes sense to me.. But rocev1 (at least mlx4) does not use vlan ids from the GID, the vlan id is set directly in the id, so it still seems to need direct containment. I also see vlan related stuff in the iwarp providers, so they probably have a similar requirement. > required. RDMA device handle labeling isn't granular enough for > what I'm trying to accomplish. We want users with different levels > of permission to be able to use the same device, but restrict who > they can communicate with by isolating them to separate partitions. Sure, but maybe you should use the (device handle:pkey/vlan_id) as your labeling tuple not (Subnet Prefix, pkey) Jason
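[Editor's note: for background on the GID-based labeling discussed in this subthread: an IB GID is a 128-bit address formatted like IPv6, with the subnet prefix in the top 64 bits (fe80::/64 is the default prefix), so subnet-prefix labeling amounts to a prefix match. A sketch; the example GID values are made up.]

```python
import ipaddress

def gid_in_subnet(gid: str, prefix: str = "fe80::/64") -> bool:
    # A GID is written like an IPv6 address; the IB subnet prefix
    # occupies the top 64 bits, so this is an ordinary prefix match.
    return ipaddress.IPv6Address(gid) in ipaddress.IPv6Network(prefix)
```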
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On 9/6/2016 3:02 PM, Jason Gunthorpe wrote: > On Thu, Sep 01, 2016 at 02:06:46PM -0400, Paul Moore wrote: > >> Jason and/or Daniel, I think it would be helpful if you could explain >> both the InfiniBand and IP based approaches for those of us who know >> SELinux, but not necessarily the RDMA and InfiniBand portions of this >> discussion. Be verbose and explain it as if we were idiots (I get >> called that enough, it must be true). > Well, I'm not really familiar with SELinux, I know a little bit about > how labels are applied in the netstack, but not that much... > > The RDMA subsystem supports 4 different networking standards, and they > each have their own objects.. > > Just focusing on the pkey/vlan ideas. Every packet placed on the > network has either a pkey or vlan label, the networking switches and > receivers use these labels to create strong access control. > > The labels are not global, they are isolated to a site, or even a > single network within a site. > > ipoib also uses pkey&vlan in the same way netdev does (with these > patches it looks like a userspace can still access a pkey via ipoib > even if selinux is restricting access to it). > > Daniel's patch also touched on the QP1 and QP0 concepts. Packets can > be labeled as being for QP0/1 and the receivers process them under the > assumption they were sent by something with privilege (eg like the low > port numbers in IP) > > So, from my perspective, we shouldn't be talking about doing pkey > without also addressing vlan. It sounds like Daniel's concern is how to > identify the number space (eg he is using a GID prefix for IB, which > won't work on anything else, maybe rdma device handle is a better choice) > > Jason > I think to control access to a VLAN for RoCE there would have to be labels for GIDs, since that's how you select which VLAN to use. It'd be very similar to how the pkey labels work, but it doesn't help with Infiniband, so I think the pkey labeling scheme is still required.
RDMA device handle labeling isn't granular enough for what I'm trying to accomplish. We want users with different levels of permission to be able to use the same device, but restrict who they can communicate with by isolating them to separate partitions.
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Thu, Sep 01, 2016 at 02:06:46PM -0400, Paul Moore wrote: > Jason and/or Daniel, I think it would be helpful if you could explain > both the InfiniBand and IP based approaches for those of us who know > SELinux, but not necessarily the RDMA and InfiniBand portions of this > discussion. Be verbose and explain it as if we were idiots (I get > called that enough, it must be true). Well, I'm not really familiar with SELinux, I know a little bit about how labels are applied in the netstack, but not that much... The RDMA subsystem supports 4 different networking standards, and they each have their own objects.. Just focusing on the pkey/vlan ideas. Every packet placed on the network has either a pkey or vlan label, the networking switches and receivers use these labels to create strong access control. The labels are not global, they are isolated to a site, or even a single network within a site. ipoib also uses pkey&vlan in the same way netdev does (with these patches it looks like a userspace can still access a pkey via ipoib even if selinux is restricting access to it). Daniel's patch also touched on the QP1 and QP0 concepts. Packets can be labeled as being for QP0/1 and the receivers process them under the assumption they were sent by something with privilege (eg like the low port numbers in IP) So, from my perspective, we shouldn't be talking about doing pkey without also addressing vlan. It sounds like Daniel's concern is how to identify the number space (eg he is using a GID prefix for IB, which won't work on anything else, maybe rdma device handle is a better choice) Jason
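[Editor's note: as background for the pkey enforcement discussed above: per the InfiniBand spec, a P_Key is 16 bits, with the low 15 bits naming the partition and the top bit marking full (1) versus limited (0) membership; two ports can communicate only if their partition numbers match and at least one side is a full member. A sketch of that matching rule:]

```python
FULL_MEMBER_BIT = 0x8000  # bit 15: full vs. limited membership
PARTITION_MASK = 0x7FFF   # bits 0-14: the partition number

def pkeys_can_communicate(a: int, b: int) -> bool:
    if (a & PARTITION_MASK) != (b & PARTITION_MASK):
        return False  # different partitions never talk
    # two limited members of the same partition also cannot talk;
    # at least one side must carry the full-membership bit
    return bool((a | b) & FULL_MEMBER_BIT)
```

This is why enforcing which pkeys a QP may use is an effective isolation boundary: the switches and receivers drop traffic that fails this check.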
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Thu, Sep 1, 2016 at 12:34 PM, Jason Gunthorpe wrote: > On Tue, Aug 30, 2016 at 07:10:12PM +, Daniel Jurgens wrote: >> On 8/30/2016 1:56 PM, Jason Gunthorpe wrote: >> > >> > Are subsystems usually SELinux enabled in such a piecemeal way? >> > >> > Are you sure the 'partition' SELinux label should not be more general >> > to cover more of the similar RDMA cases? > >> In order to label something you have to be able to describe >> something unique about an instance of it, like a Subnet Prefix/PKey >> value pair. What other thing could we label more generally to >> control access to a partition/VLAN? > > IP prefix / vlan #? How does it work in net? > > Shouldn't you at least have a plan for how this will expand to cover > the whole subsystem?? Jason and/or Daniel, I think it would be helpful if you could explain both the InfiniBand and IP based approaches for those of us who know SELinux, but not necessarily the RDMA and InfiniBand portions of this discussion. Be verbose and explain it as if we were idiots (I get called that enough, it must be true). -- paul moore www.paul-moore.com
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Tue, Aug 30, 2016 at 07:10:12PM +, Daniel Jurgens wrote: > On 8/30/2016 1:56 PM, Jason Gunthorpe wrote: > > > > Are subsystems usually SELinux enabled in such a piecemeal way? > > > > Are you sure the 'partition' SELinux label should not be more general > > to cover more of the similar RDMA cases? > In order to label something you have to be able to describe > something unique about an instance of it, like a Subnet Prefix/PKey > value pair. What other thing could we label more generally to > control access to a partition/VLAN? IP prefix / vlan #? How does it work in net? Shouldn't you at least have a plan for how this will expand to cover the whole subsystem?? Jason
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On 8/30/2016 1:56 PM, Jason Gunthorpe wrote: > > Are subsystems usually SELinux enabled in such a piecemeal way? > > Are you sure the 'partition' SELinux label should not be more general > to cover more of the similar RDMA cases? > > Jason > In order to label something you have to be able to describe something unique about an instance of it, like a Subnet Prefix/PKey value pair. What other thing could we label more generally to control access to a partition/VLAN?
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Tue, Aug 30, 2016 at 06:52:28PM +, Daniel Jurgens wrote: > On 8/30/2016 1:46 PM, Jason Gunthorpe wrote: > > On Tue, Aug 30, 2016 at 02:06:53PM +, Daniel Jurgens wrote: > > > >> I don't think this will be useful, RoCE doesn't have partitions/PKeys > >> because it uses Ethernet as the transport instead of Infiniband. > > The vlan stuff in roce should be just as restricted as the pkey is in > > IB > This patch set introduces a mechanism for controlling access to > Infiniband partitions. If someone is interested in writing SELinux > tests regarding RoCE and VLANs then RXE may very well be useful for > them. It just doesn't apply here. Are subsystems usually SELinux enabled in such a piecemeal way? Are you sure the 'partition' SELinux label should not be more general to cover more of the similar RDMA cases? Jason
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On 8/30/2016 1:46 PM, Jason Gunthorpe wrote: > On Tue, Aug 30, 2016 at 02:06:53PM +, Daniel Jurgens wrote: > >> I don't think this will be useful, RoCE doesn't have partitions/PKeys >> because it uses Ethernet as the transport instead of Infiniband. > The vlan stuff in roce should be just as restricted as the pkey is in > IB > > Jason > This patch set introduces a mechanism for controlling access to Infiniband partitions. If someone is interested in writing SELinux tests regarding RoCE and VLANs then RXE may very well be useful for them. It just doesn't apply here.
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Tue, Aug 30, 2016 at 02:06:53PM +, Daniel Jurgens wrote: > I don't think this will be useful, RoCE doesn't have partitions/PKeys > because it uses Ethernet as the transport instead of Infiniband. The vlan stuff in roce should be just as restricted as the pkey is in IB Jason
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Tue, Aug 30, 2016 at 10:46 AM, Leon Romanovsky wrote: > On Mon, Aug 29, 2016 at 08:00:32PM -0400, Paul Moore wrote: >> On Mon, Aug 29, 2016 at 5:48 PM, Daniel Jurgens wrote: >> > On 8/29/2016 4:40 PM, Paul Moore wrote: >> >> On Fri, Jul 29, 2016 at 9:53 AM, Dan Jurgens wrote: >> >>> From: Daniel Jurgens >> >> ... >> >> >> >>> Daniel Jurgens (9): >> >>> IB/core: IB cache enhancements to support Infiniband security >> >>> IB/core: Enforce PKey security on QPs >> >>> selinux lsm IB/core: Implement LSM notification system >> >>> IB/core: Enforce security on management datagrams >> >>> selinux: Create policydb version for Infiniband support >> >>> selinux: Allocate and free infiniband security hooks >> >>> selinux: Implement Infiniband PKey "Access" access vector >> >>> selinux: Add IB Port SMP access vector >> >>> selinux: Add a cache for quicker retreival of PKey SIDs >> >> Hi Daniel, >> >> >> >> My apologies for such a long delay in responding to this latest >> >> patchset; conferences, travel, and vacation have made for a very busy >> >> August. After you posted the v2 patchset we had an off-list >> >> discussion regarding testing the SELinux/IB integration; unfortunately >> >> we realized that IB hardware would be needed to test this (no IB >> >> loopback device), but we agreed that having tests would be beneficial. >> >> >> >> Have you done any work yet towards adding SELinux/IB tests to the >> >> selinux-testsuite project? >> >> >> >> * https://github.com/SELinuxProject/selinux-testsuite >> > >> > Hi Paul, I've not started doing that yet. I've been waiting for feedback >> > of any kind from the RDMA list. I thought the test updates would be more >> > appropriate around the time I'm submitting the changes to the user space >> > utilities to allow labeling the new types. >> Okay, no problem. I just want the tests in place and functional when >> we merge the kernel code. > Hi Paul, > IMHO, you can use Soft RoCE (RXE) [1] for it. 
If I got it right, little if any of this patch set is applicable to RoCE ports; this is about IB ports. Daniel, can you comment? Or.
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Tue, Aug 30, 2016 at 02:06:53PM +, Daniel Jurgens wrote: > On 8/30/2016 8:53 AM, Paul Moore wrote: > > On Tue, Aug 30, 2016 at 3:46 AM, Leon Romanovsky wrote: > >> On Mon, Aug 29, 2016 at 08:00:32PM -0400, Paul Moore wrote: > >>> On Mon, Aug 29, 2016 at 5:48 PM, Daniel Jurgens > >>> wrote: > On 8/29/2016 4:40 PM, Paul Moore wrote: > > On Fri, Jul 29, 2016 at 9:53 AM, Dan Jurgens > > wrote: > >> From: Daniel Jurgens > > ... > > > >> Daniel Jurgens (9): > >> IB/core: IB cache enhancements to support Infiniband security > >> IB/core: Enforce PKey security on QPs > >> selinux lsm IB/core: Implement LSM notification system > >> IB/core: Enforce security on management datagrams > >> selinux: Create policydb version for Infiniband support > >> selinux: Allocate and free infiniband security hooks > >> selinux: Implement Infiniband PKey "Access" access vector > >> selinux: Add IB Port SMP access vector > >> selinux: Add a cache for quicker retreival of PKey SIDs > > Hi Daniel, > > > > My apologies for such a long delay in responding to this latest > > patchset; conferences, travel, and vacation have made for a very busy > > August. After you posted the v2 patchset we had an off-list > > discussion regarding testing the SELinux/IB integration; unfortunately > > we realized that IB hardware would be needed to test this (no IB > > loopback device), but we agreed that having tests would be beneficial. > > > > Have you done any work yet towards adding SELinux/IB tests to the > > selinux-testsuite project? > > > > * https://github.com/SELinuxProject/selinux-testsuite > Hi Paul, I've not started doing that yet. I've been waiting for > feedback of any kind from the RDMA list. I thought the test updates > would be more appropriate around the time I'm submitting the changes to > the user space utilities to allow labeling the new types. > >>> Okay, no problem. I just want the tests in place and functional when > >>> we merge the kernel code. 
> >> Hi Paul, > >> > >> IMHO, you can use Soft RoCE (RXE) [1] for it. > >> > >> > >> Soft RoCE (RXE) - The software RoCE driver > >> > >> ib_rxe implements the RDMA transport and registers to the RDMA core > >> device as a kernel verbs provider. It also implements the packet IO > >> layer. On the other hand ib_rxe registers to the Linux netdev stack > >> as a udp encapsulating protocol, in that case RDMA, for sending and > >> receiving packets over any Ethernet device. This yields an RDMA > >> transport over the UDP/Ethernet network layer forming a RoCEv2 > >> compatible device. > >> > >> The configuration procedure of the Soft RoCE drivers requires > >> binding to any existing Ethernet network device. This is done with > >> the /sys interface. > >> > >> > >> [1] > >> https://git.kernel.org/cgit/linux/kernel/git/dledford/rdma.git/tree/drivers/infiniband/sw/rxe > > Hi Leon, > > > > It looks like v4.8 will have all the necessary pieces for this, yes? > > Is there any documentation on this other than the git log? Keep in > > mind I'm looking at this from the SELinux side, I'm very Infiniband > > ignorant at the moment; although Daniel has been very patient in > > walking me through some of the basics. > > > > Daniel, does this look like something we might be able to use? > > > I don't think this will be useful, RoCE doesn't have partitions/PKeys because it > uses Ethernet as the transport instead of Infiniband. > Yeah, sorry for the noise.
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On 8/30/2016 8:53 AM, Paul Moore wrote: > On Tue, Aug 30, 2016 at 3:46 AM, Leon Romanovsky wrote: >> On Mon, Aug 29, 2016 at 08:00:32PM -0400, Paul Moore wrote: >>> On Mon, Aug 29, 2016 at 5:48 PM, Daniel Jurgens >>> wrote: On 8/29/2016 4:40 PM, Paul Moore wrote: > On Fri, Jul 29, 2016 at 9:53 AM, Dan Jurgens wrote: >> From: Daniel Jurgens > ... > >> Daniel Jurgens (9): >> IB/core: IB cache enhancements to support Infiniband security >> IB/core: Enforce PKey security on QPs >> selinux lsm IB/core: Implement LSM notification system >> IB/core: Enforce security on management datagrams >> selinux: Create policydb version for Infiniband support >> selinux: Allocate and free infiniband security hooks >> selinux: Implement Infiniband PKey "Access" access vector >> selinux: Add IB Port SMP access vector >> selinux: Add a cache for quicker retreival of PKey SIDs > Hi Daniel, > > My apologies for such a long delay in responding to this latest > patchset; conferences, travel, and vacation have made for a very busy > August. After you posted the v2 patchset we had an off-list > discussion regarding testing the SELinux/IB integration; unfortunately > we realized that IB hardware would be needed to test this (no IB > loopback device), but we agreed that having tests would be beneficial. > > Have you done any work yet towards adding SELinux/IB tests to the > selinux-testsuite project? > > * https://github.com/SELinuxProject/selinux-testsuite Hi Paul, I've not started doing that yet. I've been waiting for feedback of any kind from the RDMA list. I thought the test updates would be more appropriate around the time I'm submitting the changes to the user space utilities to allow labeling the new types. >>> Okay, no problem. I just want the tests in place and functional when >>> we merge the kernel code. >> Hi Paul, >> >> IMHO, you can use Soft RoCE (RXE) [1] for it. 
>> >> Soft RoCE (RXE) - The software RoCE driver >> ib_rxe implements the RDMA transport and registers to the RDMA core >> device as a kernel verbs provider. It also implements the packet IO >> layer. On the other hand ib_rxe registers to the Linux netdev stack >> as a udp encapsulating protocol, in that case RDMA, for sending and >> receiving packets over any Ethernet device. This yields an RDMA >> transport over the UDP/Ethernet network layer forming a RoCEv2 >> compatible device. >> >> The configuration procedure of the Soft RoCE drivers requires >> binding to any existing Ethernet network device. This is done with >> the /sys interface. >> >> >> [1] >> https://git.kernel.org/cgit/linux/kernel/git/dledford/rdma.git/tree/drivers/infiniband/sw/rxe > Hi Leon, > > It looks like v4.8 will have all the necessary pieces for this, yes? > Is there any documentation on this other than the git log? Keep in > mind I'm looking at this from the SELinux side, I'm very Infiniband > ignorant at the moment; although Daniel has been very patient in > walking me through some of the basics. > > Daniel, does this look like something we might be able to use? > I don't think this will be useful, RoCE doesn't have partitions/PKeys because it uses Ethernet as the transport instead of Infiniband.
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Tue, Aug 30, 2016 at 3:46 AM, Leon Romanovsky wrote: > On Mon, Aug 29, 2016 at 08:00:32PM -0400, Paul Moore wrote: >> On Mon, Aug 29, 2016 at 5:48 PM, Daniel Jurgens wrote: >> > On 8/29/2016 4:40 PM, Paul Moore wrote: >> >> On Fri, Jul 29, 2016 at 9:53 AM, Dan Jurgens wrote: >> >>> From: Daniel Jurgens >> >> ... >> >> >> >>> Daniel Jurgens (9): >> >>> IB/core: IB cache enhancements to support Infiniband security >> >>> IB/core: Enforce PKey security on QPs >> >>> selinux lsm IB/core: Implement LSM notification system >> >>> IB/core: Enforce security on management datagrams >> >>> selinux: Create policydb version for Infiniband support >> >>> selinux: Allocate and free infiniband security hooks >> >>> selinux: Implement Infiniband PKey "Access" access vector >> >>> selinux: Add IB Port SMP access vector >> >>> selinux: Add a cache for quicker retreival of PKey SIDs >> >> Hi Daniel, >> >> >> >> My apologies for such a long delay in responding to this latest >> >> patchset; conferences, travel, and vacation have made for a very busy >> >> August. After you posted the v2 patchset we had an off-list >> >> discussion regarding testing the SELinux/IB integration; unfortunately >> >> we realized that IB hardware would be needed to test this (no IB >> >> loopback device), but we agreed that having tests would be beneficial. >> >> >> >> Have you done any work yet towards adding SELinux/IB tests to the >> >> selinux-testsuite project? >> >> >> >> * https://github.com/SELinuxProject/selinux-testsuite >> > >> > Hi Paul, I've not started doing that yet. I've been waiting for feedback >> > of any kind from the RDMA list. I thought the test updates would be more >> > appropriate around the time I'm submitting the changes to the user space >> > utilities to allow labeling the new types. >> >> Okay, no problem. I just want the tests in place and functional when >> we merge the kernel code. > > Hi Paul, > > IMHO, you can use Soft RoCE (RXE) [1] for it. 
> > > Soft RoCE (RXE) - The software RoCE driver > > ib_rxe implements the RDMA transport and registers to the RDMA core > device as a kernel verbs provider. It also implements the packet IO > layer. On the other hand ib_rxe registers to the Linux netdev stack > as a udp encapsulating protocol, in that case RDMA, for sending and > receiving packets over any Ethernet device. This yields an RDMA > transport over the UDP/Ethernet network layer forming a RoCEv2 > compatible device. > > The configuration procedure of the Soft RoCE drivers requires > binding to any existing Ethernet network device. This is done with > the /sys interface. > > > [1] > https://git.kernel.org/cgit/linux/kernel/git/dledford/rdma.git/tree/drivers/infiniband/sw/rxe Hi Leon, It looks like v4.8 will have all the necessary pieces for this, yes? Is there any documentation on this other than the git log? Keep in mind I'm looking at this from the SELinux side, I'm very Infiniband ignorant at the moment; although Daniel has been very patient in walking me through some of the basics. Daniel, does this look like something we might be able to use? -- paul moore www.paul-moore.com
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Mon, Aug 29, 2016 at 08:00:32PM -0400, Paul Moore wrote: > On Mon, Aug 29, 2016 at 5:48 PM, Daniel Jurgens wrote: > > On 8/29/2016 4:40 PM, Paul Moore wrote: > >> On Fri, Jul 29, 2016 at 9:53 AM, Dan Jurgens wrote: > >>> From: Daniel Jurgens > >> ... > >> > >>> Daniel Jurgens (9): > >>> IB/core: IB cache enhancements to support Infiniband security > >>> IB/core: Enforce PKey security on QPs > >>> selinux lsm IB/core: Implement LSM notification system > >>> IB/core: Enforce security on management datagrams > >>> selinux: Create policydb version for Infiniband support > >>> selinux: Allocate and free infiniband security hooks > >>> selinux: Implement Infiniband PKey "Access" access vector > >>> selinux: Add IB Port SMP access vector > >>> selinux: Add a cache for quicker retreival of PKey SIDs > >> Hi Daniel, > >> > >> My apologies for such a long delay in responding to this latest > >> patchset; conferences, travel, and vacation have made for a very busy > >> August. After you posted the v2 patchset we had an off-list > >> discussion regarding testing the SELinux/IB integration; unfortunately > >> we realized that IB hardware would be needed to test this (no IB > >> loopback device), but we agreed that having tests would be beneficial. > >> > >> Have you done any work yet towards adding SELinux/IB tests to the > >> selinux-testsuite project? > >> > >> * https://github.com/SELinuxProject/selinux-testsuite > > > > Hi Paul, I've not started doing that yet. I've been waiting for feedback > > of any kind from the RDMA list. I thought the test updates would be more > > appropriate around the time I'm submitting the changes to the user space > > utilities to allow labeling the new types. > > Okay, no problem. I just want the tests in place and functional when > we merge the kernel code. Hi Paul, IMHO, you can use Soft RoCE (RXE) [1] for it. 
Soft RoCE (RXE) - The software RoCE driver

ib_rxe implements the RDMA transport and registers to the RDMA core
device as a kernel verbs provider. It also implements the packet IO
layer. On the other hand ib_rxe registers to the Linux netdev stack
as a udp encapsulating protocol, in that case RDMA, for sending and
receiving packets over any Ethernet device. This yields a RDMA
transport over the UDP/Ethernet network layer forming a RoCEv2
compatible device.

The configuration procedure of the Soft RoCE drivers requires
binding to any existing Ethernet network device. This is done with
/sys interface.

[1] https://git.kernel.org/cgit/linux/kernel/git/dledford/rdma.git/tree/drivers/infiniband/sw/rxe

> -- 
> paul moore
> www.paul-moore.com
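Leon's note that RXE is configured by binding it to an existing Ethernet device via the /sys interface can be sketched as below. This is a hedged illustration, not part of the patch series: the module and device names (`rdma_rxe`, `eth0`, `rxe0`) and the exact sysfs path are assumptions, and the interface has varied across kernel releases (an `add` module parameter early on, the iproute2 `rdma` tool later).

```shell
# Load the Soft RoCE driver (module name assumed to be rdma_rxe).
modprobe rdma_rxe

# Early-style interface: bind RXE to an Ethernet device via a module
# parameter exposed in sysfs (path is illustrative).
echo eth0 > /sys/module/rdma_rxe/parameters/add

# Later iproute2-style interface, shown for comparison:
# rdma link add rxe0 type rxe netdev eth0

# A software RDMA device should now appear next to any hardware HCAs.
ls /sys/class/infiniband/
```

Because the resulting device registers with the RDMA core as an ordinary verbs provider, it could in principle exercise the PKey enforcement paths in this series without IB hardware, which is the point of Leon's suggestion.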
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Mon, Aug 29, 2016 at 5:48 PM, Daniel Jurgens wrote:
> On 8/29/2016 4:40 PM, Paul Moore wrote:
>> On Fri, Jul 29, 2016 at 9:53 AM, Dan Jurgens wrote:
>>> From: Daniel Jurgens
>> ...
>>
>>> Daniel Jurgens (9):
>>>   IB/core: IB cache enhancements to support Infiniband security
>>>   IB/core: Enforce PKey security on QPs
>>>   selinux lsm IB/core: Implement LSM notification system
>>>   IB/core: Enforce security on management datagrams
>>>   selinux: Create policydb version for Infiniband support
>>>   selinux: Allocate and free infiniband security hooks
>>>   selinux: Implement Infiniband PKey "Access" access vector
>>>   selinux: Add IB Port SMP access vector
>>>   selinux: Add a cache for quicker retrieval of PKey SIDs
>> Hi Daniel,
>>
>> My apologies for such a long delay in responding to this latest
>> patchset; conferences, travel, and vacation have made for a very busy
>> August. After you posted the v2 patchset we had an off-list
>> discussion regarding testing the SELinux/IB integration; unfortunately
>> we realized that IB hardware would be needed to test this (no IB
>> loopback device), but we agreed that having tests would be beneficial.
>>
>> Have you done any work yet towards adding SELinux/IB tests to the
>> selinux-testsuite project?
>>
>> * https://github.com/SELinuxProject/selinux-testsuite
>
> Hi Paul, I've not started doing that yet. I've been waiting for feedback
> of any kind from the RDMA list. I thought the test updates would be more
> appropriate around the time I'm submitting the changes to the user space
> utilities to allow labeling the new types.

Okay, no problem. I just want the tests in place and functional when
we merge the kernel code.

-- 
paul moore
www.paul-moore.com
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On 8/29/2016 4:40 PM, Paul Moore wrote:
> On Fri, Jul 29, 2016 at 9:53 AM, Dan Jurgens wrote:
>> From: Daniel Jurgens
> ...
>
>> Daniel Jurgens (9):
>>   IB/core: IB cache enhancements to support Infiniband security
>>   IB/core: Enforce PKey security on QPs
>>   selinux lsm IB/core: Implement LSM notification system
>>   IB/core: Enforce security on management datagrams
>>   selinux: Create policydb version for Infiniband support
>>   selinux: Allocate and free infiniband security hooks
>>   selinux: Implement Infiniband PKey "Access" access vector
>>   selinux: Add IB Port SMP access vector
>>   selinux: Add a cache for quicker retrieval of PKey SIDs
> Hi Daniel,
>
> My apologies for such a long delay in responding to this latest
> patchset; conferences, travel, and vacation have made for a very busy
> August. After you posted the v2 patchset we had an off-list
> discussion regarding testing the SELinux/IB integration; unfortunately
> we realized that IB hardware would be needed to test this (no IB
> loopback device), but we agreed that having tests would be beneficial.
>
> Have you done any work yet towards adding SELinux/IB tests to the
> selinux-testsuite project?
>
> * https://github.com/SELinuxProject/selinux-testsuite

Hi Paul, I've not started doing that yet. I've been waiting for feedback
of any kind from the RDMA list. I thought the test updates would be more
appropriate around the time I'm submitting the changes to the user space
utilities to allow labeling the new types.
Re: [PATCH v3 0/9] SELinux support for Infiniband RDMA
On Fri, Jul 29, 2016 at 9:53 AM, Dan Jurgens wrote:
> From: Daniel Jurgens
...
> Daniel Jurgens (9):
>   IB/core: IB cache enhancements to support Infiniband security
>   IB/core: Enforce PKey security on QPs
>   selinux lsm IB/core: Implement LSM notification system
>   IB/core: Enforce security on management datagrams
>   selinux: Create policydb version for Infiniband support
>   selinux: Allocate and free infiniband security hooks
>   selinux: Implement Infiniband PKey "Access" access vector
>   selinux: Add IB Port SMP access vector
>   selinux: Add a cache for quicker retrieval of PKey SIDs

Hi Daniel,

My apologies for such a long delay in responding to this latest
patchset; conferences, travel, and vacation have made for a very busy
August. After you posted the v2 patchset we had an off-list discussion
regarding testing the SELinux/IB integration; unfortunately we realized
that IB hardware would be needed to test this (no IB loopback device),
but we agreed that having tests would be beneficial.

Have you done any work yet towards adding SELinux/IB tests to the
selinux-testsuite project?

* https://github.com/SELinuxProject/selinux-testsuite

-- 
paul moore
www.paul-moore.com
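For readers unfamiliar with the selinux-testsuite project mentioned above, the usual workflow is roughly as follows. This is a hedged sketch; the exact make targets and prerequisites are described in the project's own README and may differ from what is shown here:

```shell
# Fetch the testsuite (URL from the message above).
git clone https://github.com/SELinuxProject/selinux-testsuite
cd selinux-testsuite

# Build and load the test policy, then run the tests; both steps
# require root and an SELinux-enabled kernel (target names assumed).
make -C policy load
make test
```

Any future SELinux/IB tests would presumably slot into this same harness as a new test directory with its own policy module.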
[PATCH v3 0/9] SELinux support for Infiniband RDMA
From: Daniel Jurgens

The selinux next tree is missing some patches for IB/core. This series
applies cleanly to ib-next, and should apply cleanly to selinux-next
once the IB patches are merged.

Currently there is no way to provide granular access control to an
Infiniband fabric. By providing the ability to restrict user access to
specific virtual subfabrics, administrators can limit access to
bandwidth and isolate users on the fabric.

The approach for controlling access to Infiniband is to control access
to partitions. A partition is similar in concept to a VLAN: each data
packet carries the partition key (PKey) in its header, and isolation is
enforced by the hardware. The partition key is not a cryptographic key;
it is a 16-bit number identifying the partition. By controlling access
to PKeys, users can be isolated on the fabric.

Every Infiniband fabric must have a subnet manager. The subnet manager
provisions the partitions and configures the end nodes. Each end port
has a PKey table containing the partitions it can access. In order to
enforce access to partitions, the subnet management interface (SMI)
must also be controlled to prevent unauthorized changes to the fabric
configuration.

To support this there must be a capability to provide security contexts
for two new types of objects: PKeys and IB ports. A PKey label consists
of a subnet prefix and a range of PKey values, and is similar to the
labeling mechanism for netports. Because each Infiniband port can
reside on a different subnet, labeling the PKey values for specific
subnet prefixes gives the user maximum flexibility. There is a single
access vector for PKeys called "access". An Infiniband port is labeled
by name and port number; there is a single access vector for IB ports
called "manage_subnet".

Because RDMA allows kernel bypass, enforcement must be done during
connection setup. Communication over RDMA requires a send and receive
queue, together called a queue pair (QP).
During the creation of a QP it is initialized before it can be used to
send or receive data. During initialization the user must provide the
PKey and port the QP will use, and at this time access can be enforced.

Because the enforcement settings or security policy can change, a means
of notifying the ib_core module of such changes is required. To
facilitate this, a generic notification callback mechanism is added to
the LSM. One callback is registered for checking the QP PKey
associations when the policy changes. MAD agents also register a
callback; they cache the permission to send and receive SMPs to avoid
another per-packet call to the LSM. Because frequent accesses to the
same PKey's SID are expected, a cache is implemented which is very
similar to the netport cache.

In order to properly enforce security when the PKey table, security
policy, or enforcement settings change, ib_core must track which QPs
are using which port, pkey index, and alternate path for every IB
device. This makes operations that used to be atomic transactional.
When modifying a QP, ib_core must associate it with the PKey index,
port, and alternate path specified. If the QP was already associated
with different settings, the QP is added to the new list prior to the
modification. If the modify succeeds, the old listing is removed; if
the modify fails, the new listing is removed and the old listing
remains unchanged.

When destroying a QP, the ib_qp structure is freed by the
device-specific driver (e.g. mlx4_ib) if the destroy is successful.
This requires storing security-related information in a separate
structure. While a destroy request is in process the ib_qp structure is
in an undefined state, so if there are changes to the security policy
or PKey table, the security checks cannot reset the QP if it doesn't
have permission for the new setting. If the destroy fails, security for
that QP must be enforced again and its status in the list is restored.
If the destroy succeeds, the security info can be cleaned up and freed.

There are a number of locks required to protect the QP security
structure and the QP to device/port/pkey index lists. If multiple locks
are required, the safe locking order is: the QP security structure
mutex first, followed by any list locks needed, which are sorted first
by port and then by pkey index.

---
v2:
 - Use void* blobs in the LSM hooks. Paul Moore
 - Make the policy change callback generic. Yuval Shaia, Paul Moore
 - Squash LSM changes into the patches where the calls are added. Paul Moore
 - Don't add new initial SIDs. Stephen Smalley
 - Squash MAD agent PKey and SMI patches. Dan Jurgens
 - Changed ib_end_port to ib_port. Paul Moore
 - Changed ib_port access vector from smp to manage_subnet. Paul Moore
 - Added pkey and ib_port details to the audit log. Paul Moore
 - See individual patches for more detail.

v3:
 - ib_port -> ib_endport. Paul Moore
 - use notifier chains for LSM notifications. Paul Moore
 - reorder parameters in hooks to put securi
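To make the labeling scheme in the cover letter concrete, here is a hedged sketch of how an administrator might one day assign contexts to PKeys and IB ports. The `semanage ibpkey`/`semanage ibendport` subcommands, flags, and all type and class names below are assumptions modeled on the netport labeling analogy the cover letter draws; the userspace changes are explicitly described as not yet submitted:

```shell
# Label PKeys 0x8002-0x8005 on the subnet with prefix fe80::
# (a PKey label = subnet prefix + range of PKey values).
semanage ibpkey -a -t fabric_a_pkey_t -x fe80:: 0x8002-0x8005

# Label port 1 of the device named mlx4_0, so that sending subnet
# management packets (SMPs) through it can be restricted.
semanage ibendport -a -t opensm_ibendport_t -z mlx4_0 1

# Policy would then grant the two access vectors described above,
# e.g. (class/permission names illustrative):
#   allow user_t  fabric_a_pkey_t     : infiniband_pkey    access;
#   allow opensm_t opensm_ibendport_t : infiniband_endport manage_subnet;
```

With labels in place, the QP-initialization and MAD-agent checks described earlier decide, at connection setup time, whether the calling domain may use a given PKey or manage the subnet through a given port.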