Re: [PATCH v14 2/6] namei: LOOKUP_IN_ROOT: chroot-like path resolution

2019-10-11 Thread Aleksa Sarai
On 2019-10-12, Aleksa Sarai  wrote:
> On 2019-10-10, Linus Torvalds  wrote:
> > On Wed, Oct 9, 2019 at 10:42 PM Aleksa Sarai  wrote:
> > >
> > > --- a/fs/namei.c
> > > +++ b/fs/namei.c
> > > @@ -2277,6 +2277,11 @@ static const char *path_init(struct nameidata *nd, unsigned flags)
> > >
> > > nd->m_seq = read_seqbegin(&mount_lock);
> > >
> > > +   /* LOOKUP_IN_ROOT treats absolute paths as being relative-to-dirfd. */
> > > +   if (flags & LOOKUP_IN_ROOT)
> > > +   while (*s == '/')
> > > +   s++;
> > > +
> > > /* Figure out the starting path and root (if needed). */
> > > if (*s == '/') {
> > > error = nd_jump_root(nd);
> > 
> > Hmm. Wouldn't this make more sense all inside the if (*s == '/') test?
> > That way it would be where we check for "should we start at the root",
> > which seems to make more sense conceptually.
> 
> I don't really agree (though I do think that both options are pretty
> ugly). Doing it before the block makes it clear that absolute paths are
> just treated relative-to-dirfd -- doing it inside the block makes it
> look more like "/" is a special-case for nd_jump_root(). And while that

Sorry, I meant "special-case for LOOKUP_IN_ROOT".

> is somewhat true, this is just a side-effect of making the code more
> clean -- my earlier versions reworked the dirfd handling to always grab
> nd->root first if LOOKUP_IS_SCOPED. I switched to this method based on
> Al's review.
> 
> In fairness, I do agree that the lonely while loop looks ugly.

And with the old way I did it (where we grabbed nd->root first) the
semantics were slightly more clear -- stripping leading "/"s doesn't
really look as "clearly obvious" as grabbing nd->root beforehand and
treating "/"s normally. But the code was also needlessly more complex.

> > That test for '/' currently has a "} else if (..)", but that's
> > pointless since it ends with a "return" anyway. So the "else" logic is
> > just noise.
> 
> This depends on the fact that LOOKUP_BENEATH always triggers -EXDEV for
> nd_jump_root() -- if we ever add another "scoped lookup" flag then the
> logic will have to be further reworked.
> 
> (It should be noted that the new version doesn't always end with a
> "return", but you could change it to act that way given the above
> assumption.)
> 
> > And if you get rid of the unnecessary else, moving the LOOKUP_IN_ROOT
> > inside the if-statement works fine.
> > 
> > So this could be something like
> > 
> > --- a/fs/namei.c
> > +++ b/fs/namei.c
> > @@ -2194,11 +2196,19 @@ static const char *path_init(struct nameidata *nd, unsigned flags)
> > 
> > nd->m_seq = read_seqbegin(&mount_lock);
> > if (*s == '/') {
> > -   set_root(nd);
> > -   if (likely(!nd_jump_root(nd)))
> > -   return s;
> > -   return ERR_PTR(-ECHILD);
> > -   } else if (nd->dfd == AT_FDCWD) {
> > +   /* LOOKUP_IN_ROOT treats absolute paths as being relative-to-dirfd. */
> > +   if (!(flags & LOOKUP_IN_ROOT)) {
> > +   set_root(nd);
> > +   if (likely(!nd_jump_root(nd)))
> > +   return s;
> > +   return ERR_PTR(-ECHILD);
> > +   }
> > +
> > +   /* Skip initial '/' for LOOKUP_IN_ROOT */
> > +   do { s++; } while (*s == '/');
> > +   }
> > +
> > +   if (nd->dfd == AT_FDCWD) {
> > if (flags & LOOKUP_RCU) {
> > struct fs_struct *fs = current->fs;
> > unsigned seq;
> > 
> > instead. The patch ends up slightly bigger (due to the re-indentation)
> > but now it handles all the "start at root" in the same place. Doesn't
> > that make sense?
> 
> It is correct (though I'd need to clean it up a bit to handle
> nd_jump_root() correctly), and if you really would like me to change it
> I will -- but I just don't agree that it's cleaner.

-- 
Aleksa Sarai
Senior Software Engineer (Containers)
SUSE Linux GmbH





Re: [PATCH v14 2/6] namei: LOOKUP_IN_ROOT: chroot-like path resolution

2019-10-11 Thread Aleksa Sarai
On 2019-10-10, Linus Torvalds  wrote:
> On Wed, Oct 9, 2019 at 10:42 PM Aleksa Sarai  wrote:
> >
> > --- a/fs/namei.c
> > +++ b/fs/namei.c
> > @@ -2277,6 +2277,11 @@ static const char *path_init(struct nameidata *nd, unsigned flags)
> >
> > nd->m_seq = read_seqbegin(&mount_lock);
> >
> > +   /* LOOKUP_IN_ROOT treats absolute paths as being relative-to-dirfd. */
> > +   if (flags & LOOKUP_IN_ROOT)
> > +   while (*s == '/')
> > +   s++;
> > +
> > /* Figure out the starting path and root (if needed). */
> > if (*s == '/') {
> > error = nd_jump_root(nd);
> 
> Hmm. Wouldn't this make more sense all inside the if (*s == '/') test?
> That way it would be where we check for "should we start at the root",
> which seems to make more sense conceptually.

I don't really agree (though I do think that both options are pretty
ugly). Doing it before the block makes it clear that absolute paths are
just treated relative-to-dirfd -- doing it inside the block makes it
look more like "/" is a special-case for nd_jump_root(). And while that
is somewhat true, this is just a side-effect of making the code more
clean -- my earlier versions reworked the dirfd handling to always grab
nd->root first if LOOKUP_IS_SCOPED. I switched to this method based on
Al's review.

In fairness, I do agree that the lonely while loop looks ugly.

> That test for '/' currently has a "} else if (..)", but that's
> pointless since it ends with a "return" anyway. So the "else" logic is
> just noise.

This depends on the fact that LOOKUP_BENEATH always triggers -EXDEV for
nd_jump_root() -- if we ever add another "scoped lookup" flag then the
logic will have to be further reworked.

(It should be noted that the new version doesn't always end with a
"return", but you could change it to act that way given the above
assumption.)

> And if you get rid of the unnecessary else, moving the LOOKUP_IN_ROOT
> inside the if-statement works fine.
> 
> So this could be something like
> 
> --- a/fs/namei.c
> +++ b/fs/namei.c
> @@ -2194,11 +2196,19 @@ static const char *path_init(struct nameidata *nd, unsigned flags)
> 
> nd->m_seq = read_seqbegin(&mount_lock);
> if (*s == '/') {
> -   set_root(nd);
> -   if (likely(!nd_jump_root(nd)))
> -   return s;
> -   return ERR_PTR(-ECHILD);
> -   } else if (nd->dfd == AT_FDCWD) {
> +   /* LOOKUP_IN_ROOT treats absolute paths as being relative-to-dirfd. */
> +   if (!(flags & LOOKUP_IN_ROOT)) {
> +   set_root(nd);
> +   if (likely(!nd_jump_root(nd)))
> +   return s;
> +   return ERR_PTR(-ECHILD);
> +   }
> +
> +   /* Skip initial '/' for LOOKUP_IN_ROOT */
> +   do { s++; } while (*s == '/');
> +   }
> +
> +   if (nd->dfd == AT_FDCWD) {
> if (flags & LOOKUP_RCU) {
> struct fs_struct *fs = current->fs;
> unsigned seq;
> 
> instead. The patch ends up slightly bigger (due to the re-indentation)
> but now it handles all the "start at root" in the same place. Doesn't
> that make sense?

It is correct (though I'd need to clean it up a bit to handle
nd_jump_root() correctly), and if you really would like me to change it
I will -- but I just don't agree that it's cleaner.

-- 
Aleksa Sarai
Senior Software Engineer (Containers)
SUSE Linux GmbH





[PATCH] crypto: powerpc - convert SPE AES algorithms to skcipher API

2019-10-11 Thread Eric Biggers
From: Eric Biggers 

Convert the glue code for the PowerPC SPE implementations of AES-ECB,
AES-CBC, AES-CTR, and AES-XTS from the deprecated "blkcipher" API to the
"skcipher" API.

Tested with:

export ARCH=powerpc CROSS_COMPILE=powerpc-linux-gnu-
make mpc85xx_defconfig
cat >> .config << EOF
# CONFIG_MODULES is not set
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set
CONFIG_DEBUG_KERNEL=y
CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_CBC=y
CONFIG_CRYPTO_CTR=y
CONFIG_CRYPTO_ECB=y
CONFIG_CRYPTO_XTS=y
CONFIG_CRYPTO_AES_PPC_SPE=y
EOF
make olddefconfig
make -j32
qemu-system-ppc -M mpc8544ds -cpu e500 -nographic \
-kernel arch/powerpc/boot/zImage \
-append cryptomgr.fuzz_iterations=1000

Note that xts-ppc-spe still fails the comparison tests due to the lack
of ciphertext stealing support.  This is not addressed by this patch.

Signed-off-by: Eric Biggers 
---
 arch/powerpc/crypto/aes-spe-glue.c | 416 +
 crypto/Kconfig |   1 +
 2 files changed, 186 insertions(+), 231 deletions(-)

diff --git a/arch/powerpc/crypto/aes-spe-glue.c b/arch/powerpc/crypto/aes-spe-glue.c
index 3a4ca7d32477..374e3e51e998 100644
--- a/arch/powerpc/crypto/aes-spe-glue.c
+++ b/arch/powerpc/crypto/aes-spe-glue.c
@@ -17,6 +17,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
 /*
@@ -86,17 +87,13 @@ static void spe_end(void)
preempt_enable();
 }
 
-static int ppc_aes_setkey(struct crypto_tfm *tfm, const u8 *in_key,
-   unsigned int key_len)
+static int expand_key(struct ppc_aes_ctx *ctx,
+ const u8 *in_key, unsigned int key_len)
 {
-   struct ppc_aes_ctx *ctx = crypto_tfm_ctx(tfm);
-
if (key_len != AES_KEYSIZE_128 &&
key_len != AES_KEYSIZE_192 &&
-   key_len != AES_KEYSIZE_256) {
-   tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+   key_len != AES_KEYSIZE_256)
return -EINVAL;
-   }
 
switch (key_len) {
case AES_KEYSIZE_128:
@@ -114,17 +111,40 @@ static int ppc_aes_setkey(struct crypto_tfm *tfm, const u8 *in_key,
}
 
ppc_generate_decrypt_key(ctx->key_dec, ctx->key_enc, key_len);
+   return 0;
+}
 
+static int ppc_aes_setkey(struct crypto_tfm *tfm, const u8 *in_key,
+   unsigned int key_len)
+{
+   struct ppc_aes_ctx *ctx = crypto_tfm_ctx(tfm);
+
+   if (expand_key(ctx, in_key, key_len) != 0) {
+   tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+   return -EINVAL;
+   }
+   return 0;
+}
+
+static int ppc_aes_setkey_skcipher(struct crypto_skcipher *tfm,
+  const u8 *in_key, unsigned int key_len)
+{
+   struct ppc_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+   if (expand_key(ctx, in_key, key_len) != 0) {
+   crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+   return -EINVAL;
+   }
return 0;
 }
 
-static int ppc_xts_setkey(struct crypto_tfm *tfm, const u8 *in_key,
+static int ppc_xts_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
   unsigned int key_len)
 {
-   struct ppc_xts_ctx *ctx = crypto_tfm_ctx(tfm);
+   struct ppc_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
int err;
 
-   err = xts_check_key(tfm, in_key, key_len);
+   err = xts_verify_key(tfm, in_key, key_len);
if (err)
return err;
 
@@ -133,7 +153,7 @@ static int ppc_xts_setkey(struct crypto_tfm *tfm, const u8 *in_key,
if (key_len != AES_KEYSIZE_128 &&
key_len != AES_KEYSIZE_192 &&
key_len != AES_KEYSIZE_256) {
-   tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+   crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
return -EINVAL;
}
 
@@ -178,208 +198,154 @@ static void ppc_aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
spe_end();
 }
 
-static int ppc_ecb_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
-  struct scatterlist *src, unsigned int nbytes)
+static int ppc_ecb_crypt(struct skcipher_request *req, bool enc)
+static int ppc_ecb_crypt(struct skcipher_request *req, bool enc)
 {
-   struct ppc_aes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
-   struct blkcipher_walk walk;
-   unsigned int ubytes;
+   struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+   struct ppc_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
+   struct skcipher_walk walk;
+   unsigned int nbytes;
int err;
 
-   desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
-   blkcipher_walk_init(&walk, dst, src, nbytes);
-   err = blkcipher_walk_virt(desc, &walk);
+   err = skcipher_walk_virt(&walk, req, false);
 
-   while ((nbytes = walk.nbytes)) {
-   ubytes = nbytes > MAX_BYTES ?
-  

Re: [PATCH 0/2] virtio: Support encrypted memory on powerpc secure guests

2019-10-11 Thread Ram Pai
Hmm.. git-send-email forgot to CC  Michael Tsirkin, and Thiago; the
original author, who is on vacation.

Adding them now.
RP

On Fri, Oct 11, 2019 at 06:25:17PM -0700, Ram Pai wrote:
>  **We would like the patches to be merged through the virtio tree.  Please
>  review, and ack merging the DMA mapping change through that tree. Thanks!**
> 
>  The memory of powerpc secure guests can't be accessed by the hypervisor /
>  virtio device except for a few memory regions designated as 'shared'.
> 
>  At the moment, Linux uses bounce-buffering to communicate with the
>  hypervisor, with a bounce buffer marked as shared. This is how the DMA API
>  is implemented on this platform.
> 
>  In particular, the most convenient way to use virtio on this platform is by
>  making virtio use the DMA API: in fact, this is exactly what happens if the
>  virtio device exposes the flag VIRTIO_F_ACCESS_PLATFORM.  However, bugs in 
> the
>  hypervisor on the powerpc platform do not allow setting this flag, with some
>  hypervisors already in the field that don't set this flag. At the moment they
>  are forced to use emulated devices when guest is in secure mode; virtio is
>  only useful when guest is not secure.
> 
>  Normally, both device and driver must support VIRTIO_F_ACCESS_PLATFORM:
>  if one of them doesn't, the other mustn't assume it for communication
>  to work.
> 
>  However, a guest-side work-around is possible to enable virtio
>  for these hypervisors with guest in secure mode: it so happens that on
>  powerpc secure platform the DMA address is actually a physical address -
>  that of the bounce buffer. For these platforms we can make the virtio
>  driver go through the DMA API even though the device itself ignores
>  the DMA API.
> 
>  These patches implement this work around for virtio: we detect that
>  - secure guest mode is enabled - so we know that since we don't share
>most memory and Hypervisor has not enabled VIRTIO_F_ACCESS_PLATFORM,
>regular virtio code won't work.
>  - DMA API is giving us addresses that are actually also physical
>addresses.
>  - Hypervisor has not enabled VIRTIO_F_ACCESS_PLATFORM.
> 
>  and if all conditions are true, we force all data through the bounce
>  buffer.
> 
>  To put it another way, from hypervisor's point of view DMA API is
>  not required: hypervisor would be happy to get access to all of guest
>  memory. That's why it does not set VIRTIO_F_ACCESS_PLATFORM. However,
>  guest decides that it does not trust the hypervisor and wants to force
>  a bounce buffer for its own reasons.
> 
> 
> Thiago Jung Bauermann (2):
>   dma-mapping: Add dma_addr_is_phys_addr()
>   virtio_ring: Use DMA API if memory is encrypted
> 
>  arch/powerpc/include/asm/dma-mapping.h | 21 +
>  arch/powerpc/platforms/pseries/Kconfig |  1 +
>  drivers/virtio/virtio.c| 18 ++
>  drivers/virtio/virtio_ring.c   |  8 
>  include/linux/dma-mapping.h| 20 
>  include/linux/virtio_config.h  | 14 ++
>  kernel/dma/Kconfig |  3 +++
>  7 files changed, 85 insertions(+)
> 
> -- 
> 1.8.3.1

-- 
Ram Pai



[PATCH 1/2] dma-mapping: Add dma_addr_is_phys_addr()

2019-10-11 Thread Ram Pai
From: Thiago Jung Bauermann 

In order to safely use the DMA API, virtio needs to know whether DMA
addresses are in fact physical addresses and for that purpose,
dma_addr_is_phys_addr() is introduced.

cc: Benjamin Herrenschmidt 
cc: David Gibson 
cc: Michael Ellerman 
cc: Paul Mackerras 
cc: Michael Roth 
cc: Alexey Kardashevskiy 
cc: Paul Burton 
cc: Robin Murphy 
cc: Bartlomiej Zolnierkiewicz 
cc: Marek Szyprowski 
cc: Christoph Hellwig 
Suggested-by: Michael S. Tsirkin 
Signed-off-by: Ram Pai 
Signed-off-by: Thiago Jung Bauermann 
---
 arch/powerpc/include/asm/dma-mapping.h | 21 +
 arch/powerpc/platforms/pseries/Kconfig |  1 +
 include/linux/dma-mapping.h| 20 
 kernel/dma/Kconfig |  3 +++
 4 files changed, 45 insertions(+)

diff --git a/arch/powerpc/include/asm/dma-mapping.h b/arch/powerpc/include/asm/dma-mapping.h
index 565d6f7..f92c0a4b 100644
--- a/arch/powerpc/include/asm/dma-mapping.h
+++ b/arch/powerpc/include/asm/dma-mapping.h
@@ -5,6 +5,8 @@
 #ifndef _ASM_DMA_MAPPING_H
 #define _ASM_DMA_MAPPING_H
 
+#include 
+
 static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
 {
/* We don't handle the NULL dev case for ISA for now. We could
@@ -15,4 +17,23 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
return NULL;
 }
 
+#ifdef CONFIG_ARCH_HAS_DMA_ADDR_IS_PHYS_ADDR
+/**
+ * dma_addr_is_phys_addr - check whether a device DMA address is a physical
+ * address
+ * @dev:   device to check
+ *
+ * Returns %true if any DMA address for this device happens to also be a valid
+ * physical address (not necessarily of the same page).
+ */
+static inline bool dma_addr_is_phys_addr(struct device *dev)
+{
+   /*
+* Secure guests always use the SWIOTLB, therefore DMA addresses are
+* actually the physical address of the bounce buffer.
+*/
+   return is_secure_guest();
+}
+#endif
+
 #endif /* _ASM_DMA_MAPPING_H */
diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index 9e35cdd..0108150 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -152,6 +152,7 @@ config PPC_SVM
select SWIOTLB
select ARCH_HAS_MEM_ENCRYPT
select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+   select ARCH_HAS_DMA_ADDR_IS_PHYS_ADDR
help
 There are certain POWER platforms which support secure guests using
 the Protected Execution Facility, with the help of an Ultravisor
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index f7d1eea..6df5664 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -693,6 +693,26 @@ static inline bool dma_addressing_limited(struct device *dev)
dma_get_required_mask(dev);
 }
 
+#ifndef CONFIG_ARCH_HAS_DMA_ADDR_IS_PHYS_ADDR
+/**
+ * dma_addr_is_phys_addr - check whether a device DMA address is a physical
+ * address
+ * @dev:   device to check
+ *
+ * Returns %true if any DMA address for this device happens to also be a valid
+ * physical address (not necessarily of the same page).
+ */
+static inline bool dma_addr_is_phys_addr(struct device *dev)
+{
+   /*
+* Except in very specific setups, DMA addresses exist in a different
+* address space from CPU physical addresses and cannot be directly used
+* to reference system memory.
+*/
+   return false;
+}
+#endif
+
 #ifdef CONFIG_ARCH_HAS_SETUP_DMA_OPS
 void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
const struct iommu_ops *iommu, bool coherent);
diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 9decbba..6209b46 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -51,6 +51,9 @@ config ARCH_HAS_DMA_MMAP_PGPROT
 config ARCH_HAS_FORCE_DMA_UNENCRYPTED
bool
 
+config ARCH_HAS_DMA_ADDR_IS_PHYS_ADDR
+   bool
+
 config DMA_NONCOHERENT_CACHE_SYNC
bool
 
-- 
1.8.3.1



[PATCH 2/2] virtio_ring: Use DMA API if memory is encrypted

2019-10-11 Thread Ram Pai
From: Thiago Jung Bauermann 

Normally, virtio enables DMA API with VIRTIO_F_IOMMU_PLATFORM, which must
be set by both device and guest driver. However, as a hack, when DMA API
returns physical addresses, guest driver can use the DMA API; even though
device does not set VIRTIO_F_IOMMU_PLATFORM and just uses physical
addresses.

Doing this works-around POWER secure guests for which only the bounce
buffer is accessible to the device, but which don't set
VIRTIO_F_IOMMU_PLATFORM due to a set of hypervisor and architectural bugs.
To guard against platform changes, breaking any of these assumptions down
the road, we check at probe time and fail if that's not the case.

cc: Benjamin Herrenschmidt 
cc: David Gibson 
cc: Michael Ellerman 
cc: Paul Mackerras 
cc: Michael Roth 
cc: Alexey Kardashevskiy 
cc: Jason Wang 
cc: Christoph Hellwig 
Suggested-by: Michael S. Tsirkin 
Signed-off-by: Ram Pai 
Signed-off-by: Thiago Jung Bauermann 
---
 drivers/virtio/virtio.c   | 18 ++
 drivers/virtio/virtio_ring.c  |  8 
 include/linux/virtio_config.h | 14 ++
 3 files changed, 40 insertions(+)

diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
index a977e32..77a3baf 100644
--- a/drivers/virtio/virtio.c
+++ b/drivers/virtio/virtio.c
@@ -4,6 +4,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
 /* Unique numbering for virtio devices. */
@@ -245,6 +246,23 @@ static int virtio_dev_probe(struct device *_d)
if (err)
goto err;
 
+   /*
+* If memory is encrypted, but VIRTIO_F_IOMMU_PLATFORM is not set, then
+* the device is broken: DMA API is required for these platforms, but
+* the only way using the DMA API is going to work at all is if the
+* device is ready for it. So we need a flag on the virtio device,
+* exposed by the hypervisor (or hardware for hw virtio devices) that
+* says: hey, I'm real, don't take a shortcut.
+*
+* There's one exception where guest can make things work, and that is
+* when DMA API is guaranteed to always return physical addresses.
+*/
+   if (mem_encrypt_active() && !virtio_can_use_dma_api(dev)) {
+   dev_err(_d, "virtio: device unable to access encrypted memory\n");
+   err = -EINVAL;
+   goto err;
+   }
+
err = drv->probe(dev);
if (err)
goto err;
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index c8be1c4..9c56b61 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -255,6 +255,14 @@ static bool vring_use_dma_api(struct virtio_device *vdev)
if (xen_domain())
return true;
 
+   /*
+* Also, if guest memory is encrypted the host can't access it
+* directly. We need to either use an IOMMU or do bounce buffering.
+* Both are done via the DMA API.
+*/
+   if (mem_encrypt_active() && virtio_can_use_dma_api(vdev))
+   return true;
+
return false;
 }
 
diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
index bb4cc49..57bc25c 100644
--- a/include/linux/virtio_config.h
+++ b/include/linux/virtio_config.h
@@ -4,6 +4,7 @@
 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -174,6 +175,19 @@ static inline bool virtio_has_iommu_quirk(const struct virtio_device *vdev)
return !virtio_has_feature(vdev, VIRTIO_F_IOMMU_PLATFORM);
 }
 
+/**
+ * virtio_can_use_dma_api - determine whether the DMA API can be used
+ * @vdev: the device
+ *
+ * The DMA API can be used either when the device doesn't have the IOMMU quirk,
+ * or when the DMA API is guaranteed to always return physical addresses.
+ */
+static inline bool virtio_can_use_dma_api(const struct virtio_device *vdev)
+{
+   return !virtio_has_iommu_quirk(vdev) ||
+  dma_addr_is_phys_addr(vdev->dev.parent);
+}
+
 static inline
 struct virtqueue *virtio_find_single_vq(struct virtio_device *vdev,
vq_callback_t *c, const char *n)
-- 
1.8.3.1



[PATCH 0/2] virtio: Support encrypted memory on powerpc secure guests

2019-10-11 Thread Ram Pai
 **We would like the patches to be merged through the virtio tree.  Please
 review, and ack merging the DMA mapping change through that tree. Thanks!**

 The memory of powerpc secure guests can't be accessed by the hypervisor /
 virtio device except for a few memory regions designated as 'shared'.
 
 At the moment, Linux uses bounce-buffering to communicate with the
 hypervisor, with a bounce buffer marked as shared. This is how the DMA API
 is implemented on this platform.
 
 In particular, the most convenient way to use virtio on this platform is by
 making virtio use the DMA API: in fact, this is exactly what happens if the
 virtio device exposes the flag VIRTIO_F_ACCESS_PLATFORM.  However, bugs in the
 hypervisor on the powerpc platform do not allow setting this flag, with some
 hypervisors already in the field that don't set this flag. At the moment they
 are forced to use emulated devices when guest is in secure mode; virtio is
 only useful when guest is not secure.
 
 Normally, both device and driver must support VIRTIO_F_ACCESS_PLATFORM:
 if one of them doesn't, the other mustn't assume it for communication
 to work.
 
 However, a guest-side work-around is possible to enable virtio
 for these hypervisors with guest in secure mode: it so happens that on
 powerpc secure platform the DMA address is actually a physical address -
 that of the bounce buffer. For these platforms we can make the virtio
 driver go through the DMA API even though the device itself ignores
 the DMA API.
 
 These patches implement this work around for virtio: we detect that
 - secure guest mode is enabled - so we know that since we don't share
   most memory and Hypervisor has not enabled VIRTIO_F_ACCESS_PLATFORM,
   regular virtio code won't work.
 - DMA API is giving us addresses that are actually also physical
   addresses.
 - Hypervisor has not enabled VIRTIO_F_ACCESS_PLATFORM.
 
 and if all conditions are true, we force all data through the bounce
 buffer.
 
 To put it another way, from hypervisor's point of view DMA API is
 not required: hypervisor would be happy to get access to all of guest
 memory. That's why it does not set VIRTIO_F_ACCESS_PLATFORM. However,
 guest decides that it does not trust the hypervisor and wants to force
 a bounce buffer for its own reasons.


Thiago Jung Bauermann (2):
  dma-mapping: Add dma_addr_is_phys_addr()
  virtio_ring: Use DMA API if memory is encrypted

 arch/powerpc/include/asm/dma-mapping.h | 21 +
 arch/powerpc/platforms/pseries/Kconfig |  1 +
 drivers/virtio/virtio.c| 18 ++
 drivers/virtio/virtio_ring.c   |  8 
 include/linux/dma-mapping.h| 20 
 include/linux/virtio_config.h  | 14 ++
 kernel/dma/Kconfig |  3 +++
 7 files changed, 85 insertions(+)

-- 
1.8.3.1



Re: [PATCH v8 1/5] kasan: support backing vmalloc space with real shadow memory

2019-10-11 Thread Andrey Ryabinin



On 10/1/19 9:58 AM, Daniel Axtens wrote:
 
>  core_initcall(kasan_memhotplug_init);
>  #endif
> +
> +#ifdef CONFIG_KASAN_VMALLOC
> +static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
> +   void *unused)
> +{
> + unsigned long page;
> + pte_t pte;
> +
> + if (likely(!pte_none(*ptep)))
> + return 0;
> +
> + page = __get_free_page(GFP_KERNEL);
> + if (!page)
> + return -ENOMEM;
> +
> + memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
> + pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);
> +
> + /*
> +  * Ensure poisoning is visible before the shadow is made visible
> +  * to other CPUs.
> +  */
> + smp_wmb();

I don't quite understand what this barrier does and why it's needed.
And if it's really needed, there should be a pairing barrier
on the other side, which I don't see.

> +
> + spin_lock(&init_mm.page_table_lock);
> + if (likely(pte_none(*ptep))) {
> + set_pte_at(&init_mm, addr, ptep, pte);
> + page = 0;
> + }
> + spin_unlock(&init_mm.page_table_lock);
> + if (page)
> + free_page(page);
> + return 0;
> +}
> +


...

> @@ -754,6 +769,8 @@ merge_or_add_vmap_area(struct vmap_area *va,
>   }
>  
>  insert:
> + kasan_release_vmalloc(orig_start, orig_end, va->va_start, va->va_end);
> +
>   if (!merged) {
>   link_va(va, root, parent, link, head);
>   augment_tree_propagate_from(va);
> @@ -2068,6 +2085,22 @@ static struct vm_struct *__get_vm_area_node(unsigned long size,
>  
>   setup_vmalloc_vm(area, va, flags, caller);
>  
> + /*
> +  * For KASAN, if we are in vmalloc space, we need to cover the shadow
> +  * area with real memory. If we come here through VM_ALLOC, this is
> +  * done by a higher level function that has access to the true size,
> +  * which might not be a full page.
> +  *
> +  * We assume module space comes via VM_ALLOC path.
> +  */
> + if (is_vmalloc_addr(area->addr) && !(area->flags & VM_ALLOC)) {
> + if (kasan_populate_vmalloc(area->size, area)) {
> + unmap_vmap_area(va);
> + kfree(area);
> + return NULL;
> + }
> + }
> +
>   return area;
>  }
>  
> @@ -2245,6 +2278,9 @@ static void __vunmap(const void *addr, int deallocate_pages)
>   debug_check_no_locks_freed(area->addr, get_vm_area_size(area));
>   debug_check_no_obj_freed(area->addr, get_vm_area_size(area));
>  
> + if (area->flags & VM_KASAN)
> + kasan_poison_vmalloc(area->addr, area->size);
> +
>   vm_remove_mappings(area, deallocate_pages);
>  
>   if (deallocate_pages) {
> @@ -2497,6 +2533,9 @@ void *__vmalloc_node_range(unsigned long size, unsigned long align,
>   if (!addr)
>   return NULL;
>  
> + if (kasan_populate_vmalloc(real_size, area))
> + return NULL;
> +

KASAN itself uses __vmalloc_node_range() to allocate and map shadow in the
memory online callback. So we should either skip non-vmalloc and non-module
addresses here, or teach KASAN's memory online/offline callbacks not to use
__vmalloc_node_range() (do something similar to kasan_populate_vmalloc(),
perhaps?).


Re: [PATCH v2 -next] ASoC: fsl_mqs: Move static keyword to the front of declarations

2019-10-11 Thread Nicolin Chen
On Fri, Oct 11, 2019 at 10:35:38PM +0800, YueHaibing wrote:
> gcc warns about this:
> 
> sound/soc/fsl/fsl_mqs.c:146:1: warning:
>  static is not at beginning of declaration [-Wold-style-declaration]
> 
> Signed-off-by: YueHaibing 

Acked-by: Nicolin Chen 

> ---
> v2: Fix patch title
> ---
>  sound/soc/fsl/fsl_mqs.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/sound/soc/fsl/fsl_mqs.c b/sound/soc/fsl/fsl_mqs.c
> index f7fc44e..0c813a4 100644
> --- a/sound/soc/fsl/fsl_mqs.c
> +++ b/sound/soc/fsl/fsl_mqs.c
> @@ -143,7 +143,7 @@ static void fsl_mqs_shutdown(struct snd_pcm_substream *substream,
>  MQS_EN_MASK, 0);
>  }
>  
> -const static struct snd_soc_component_driver soc_codec_fsl_mqs = {
> +static const struct snd_soc_component_driver soc_codec_fsl_mqs = {
>   .idle_bias_on = 1,
>   .non_legacy_dai_naming  = 1,
>  };
> -- 
> 2.7.4
> 
> 


Re: [PATCH] net/ibmvnic: Fix EOI when running in XIVE mode.

2019-10-11 Thread Thomas Falcon

On 10/11/19 12:52 AM, Cédric Le Goater wrote:

pSeries machines on POWER9 processors can run with the XICS (legacy)
interrupt mode or with the XIVE exploitation interrupt mode. These
interrupt controllers have different interfaces for interrupt
management: XICS uses hcalls, while XIVE uses loads and stores on a page.
Since H_EOI is a XICS interface, the enable_scrq_irq() routine can fail
when the machine runs in XIVE mode.

Fix that by calling the EOI handler of the interrupt chip.

Fixes: f23e0643cd0b ("ibmvnic: Clear pending interrupt after device reset")
Signed-off-by: Cédric Le Goater 
---


Thank you for this fix, Cedric!

Tom


  drivers/net/ethernet/ibm/ibmvnic.c | 8 +++-
  1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 2b073a3c0b84..f59d9a8e35e2 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -2878,12 +2878,10 @@ static int enable_scrq_irq(struct ibmvnic_adapter *adapter,
  
	if (test_bit(0, &adapter->resetting) &&
	    adapter->reset_reason == VNIC_RESET_MOBILITY) {
-		u64 val = (0xff000000) | scrq->hw_irq;
+   struct irq_desc *desc = irq_to_desc(scrq->irq);
+   struct irq_chip *chip = irq_desc_get_chip(desc);
  
-		rc = plpar_hcall_norets(H_EOI, val);
-		if (rc)
-			dev_err(dev, "H_EOI FAILED irq 0x%llx. rc=%ld\n",
-				val, rc);
+		chip->irq_eoi(&desc->irq_data);
}
  
  	rc = plpar_hcall_norets(H_VIOCTL, adapter->vdev->unit_address,


Re: [PATCH v2 01/29] powerpc: Rename "notes" PT_NOTE to "note"

2019-10-11 Thread Segher Boessenkool
On Fri, Oct 11, 2019 at 09:11:43AM -0700, Kees Cook wrote:
> On Fri, Oct 11, 2019 at 03:25:19AM -0500, Segher Boessenkool wrote:
> > On Thu, Oct 10, 2019 at 05:05:41PM -0700, Kees Cook wrote:
> > > The Program Header identifiers are internal to the linker scripts. In
> > > preparation for moving the NOTES segment declaration into RO_DATA,
> > > standardize the identifier for the PT_NOTE entry to "note" as used by
> > > all other architectures that emit PT_NOTE.
> > 
> > All other archs are wrong, and "notes" is a much better name.  This
> > segment does not contain a single "note", but multiple "notes".
> 
> True, but the naming appears to be based off the Program Header name of
> "PT_NOTE".

Ah, so that's why the kernel segment (which isn't text btw, it's rwx) is
called "load" :-P

(Not convinced.  Some arch just got it wrong, and many others blindly
copied it?  That sounds a lot more likely imo.)

> Regardless, it is an entirely internal-to-the-linker-script
> identifier, so I am just consolidating on a common name with the least
> number of collateral changes.

Yes, that's what I'm complaining about.

Names *matter*, internal names doubly so.  So why replace a good name with
a worse name?  Because it is slightly less work for you?


Segher

p.s. Thanks for doing this, removing the powerpc workaround etc. :-)


Re: [PATCH v2 01/29] powerpc: Rename "notes" PT_NOTE to "note"

2019-10-11 Thread Kees Cook
On Fri, Oct 11, 2019 at 03:25:19AM -0500, Segher Boessenkool wrote:
> On Thu, Oct 10, 2019 at 05:05:41PM -0700, Kees Cook wrote:
> > The Program Header identifiers are internal to the linker scripts. In
> > preparation for moving the NOTES segment declaration into RO_DATA,
> > standardize the identifier for the PT_NOTE entry to "note" as used by
> > all other architectures that emit PT_NOTE.
> 
> All other archs are wrong, and "notes" is a much better name.  This
> segment does not contain a single "note", but multiple "notes".

True, but the naming appears to be based off the Program Header name of
"PT_NOTE". Regardless, it is an entirely internal-to-the-linker-script
identifier, so I am just consolidating on a common name with the least
number of collateral changes.

-- 
Kees Cook


Re: [PATCH v2 02/29] powerpc: Remove PT_NOTE workaround

2019-10-11 Thread Kees Cook
On Fri, Oct 11, 2019 at 05:07:04PM +1100, Michael Ellerman wrote:
> Kees Cook  writes:
> > In preparation for moving NOTES into RO_DATA, remove the PT_NOTE
> > workaround since the kernel requires at least gcc 4.6 now.
> >
> > Signed-off-by: Kees Cook 
> > ---
> >  arch/powerpc/kernel/vmlinux.lds.S | 24 ++--
> >  1 file changed, 2 insertions(+), 22 deletions(-)
> 
> Acked-by: Michael Ellerman 

Thanks!

> For the archives, Joel tried a similar patch a while back which caused
> some problems, see:
> 
>   https://lore.kernel.org/linuxppc-dev/20190321003253.22100-1-j...@jms.id.au/
> 
> and a v2:
> 
>   https://lore.kernel.org/linuxppc-dev/20190329064453.12761-1-j...@jms.id.au/
> 
> This is similar to his v2. The only outstanding comment on his v2 was
> from Segher:
>   (And I do not know if there are any tools that expect the notes in a phdr,
>   or even specifically the second phdr).
> 
> But this patch solves that by not changing the note.

Ah yes. Agreed: I'm retaining the note and dropping the workarounds.
FWIW, this builds happily for me in my tests.

-Kees

-- 
Kees Cook


[PATCH v2 -next] ASoC: fsl_mqs: Move static keyword to the front of declarations

2019-10-11 Thread YueHaibing
gcc warn about this:

sound/soc/fsl/fsl_mqs.c:146:1: warning:
 static is not at beginning of declaration [-Wold-style-declaration]

Signed-off-by: YueHaibing 
---
v2: Fix patch title
---
 sound/soc/fsl/fsl_mqs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sound/soc/fsl/fsl_mqs.c b/sound/soc/fsl/fsl_mqs.c
index f7fc44e..0c813a4 100644
--- a/sound/soc/fsl/fsl_mqs.c
+++ b/sound/soc/fsl/fsl_mqs.c
@@ -143,7 +143,7 @@ static void fsl_mqs_shutdown(struct snd_pcm_substream *substream,
   MQS_EN_MASK, 0);
 }
 
-const static struct snd_soc_component_driver soc_codec_fsl_mqs = {
+static const struct snd_soc_component_driver soc_codec_fsl_mqs = {
.idle_bias_on = 1,
.non_legacy_dai_naming  = 1,
 };
-- 
2.7.4




Re: [PATCH -next] ASoC: fsl_mqs: fix old-style function declaration

2019-10-11 Thread Yuehaibing
On 2019/10/11 21:12, Andreas Schwab wrote:
> On Okt 11 2019, YueHaibing  wrote:
> 
>> gcc warn about this:
>>
>> sound/soc/fsl/fsl_mqs.c:146:1: warning:
>>  static is not at beginning of declaration [-Wold-style-declaration]
> 
> It's not a function, though.

Oh..., will fix this, thanks!

> 
> Andreas.
> 



Re: [PATCH v7 7/8] ima: check against blacklisted hashes for files with modsig

2019-10-11 Thread Mimi Zohar
On Mon, 2019-10-07 at 21:14 -0400, Nayna Jain wrote:
> Asymmetric private keys are used to sign multiple files. The kernel
> currently supports checking against the blacklisted keys. However, if the
> public key is blacklisted, any file signed by the blacklisted key will
> automatically fail signature verification. We might not want to blacklist
> all the files signed by a particular key, but just a single file.
> Blacklisting the public key does not provide fine enough granularity.
> 
> This patch adds support for blacklisting binaries with appended signatures,
> based on the IMA policy.  Defined is a new policy option
> "appraise_flag=check_blacklist".

The blacklisted hash is not the same as the file hash, but is the file
hash without the appended signature.  Are there tools for calculating
the blacklisted hash?  Can you provide an example?

> 
> Signed-off-by: Nayna Jain 
> ---
>  Documentation/ABI/testing/ima_policy  |  1 +
>  security/integrity/ima/ima.h  |  9 +++
>  security/integrity/ima/ima_appraise.c | 39 +++
>  security/integrity/ima/ima_main.c | 12 ++---
>  security/integrity/ima/ima_policy.c   | 10 +--
>  security/integrity/integrity.h|  1 +
>  6 files changed, 66 insertions(+), 6 deletions(-)
> 
> diff --git a/Documentation/ABI/testing/ima_policy 
> b/Documentation/ABI/testing/ima_policy
> index 29ebe9afdac4..4c97afcc0f3c 100644
> --- a/Documentation/ABI/testing/ima_policy
> +++ b/Documentation/ABI/testing/ima_policy
> @@ -25,6 +25,7 @@ Description:
>   lsm:[[subj_user=] [subj_role=] [subj_type=]
>[obj_user=] [obj_role=] [obj_type=]]
>   option: [[appraise_type=]] [template=] [permit_directio]
> + [appraise_flag=[check_blacklist]]
>   base:   func:= 
> [BPRM_CHECK][MMAP_CHECK][CREDS_CHECK][FILE_CHECK][MODULE_CHECK]
>   [FIRMWARE_CHECK]
>   [KEXEC_KERNEL_CHECK] [KEXEC_INITRAMFS_CHECK]
> diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
> index ed86c1f70d7f..63e20ccc91ce 100644
> --- a/security/integrity/ima/ima.h
> +++ b/security/integrity/ima/ima.h
> @@ -256,6 +256,8 @@ int ima_policy_show(struct seq_file *m, void *v);
>  #define IMA_APPRAISE_KEXEC   0x40
>  
>  #ifdef CONFIG_IMA_APPRAISE
> +int ima_check_blacklist(struct integrity_iint_cache *iint,
> + const struct modsig *modsig, int action, int pcr);
>  int ima_appraise_measurement(enum ima_hooks func,
>struct integrity_iint_cache *iint,
>struct file *file, const unsigned char *filename,
> @@ -271,6 +273,13 @@ int ima_read_xattr(struct dentry *dentry,
>  struct evm_ima_xattr_data **xattr_value);
>  
>  #else
> +static inline int ima_check_blacklist(struct integrity_iint_cache *iint,
> +   const struct modsig *modsig, int action,
> +   int pcr)
> +{
> + return 0;
> +}
> +
>  static inline int ima_appraise_measurement(enum ima_hooks func,
>  struct integrity_iint_cache *iint,
>  struct file *file,
> diff --git a/security/integrity/ima/ima_appraise.c 
> b/security/integrity/ima/ima_appraise.c
> index 136ae4e0ee92..fe34d64a684c 100644
> --- a/security/integrity/ima/ima_appraise.c
> +++ b/security/integrity/ima/ima_appraise.c
> @@ -12,6 +12,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  #include "ima.h"
>  
> @@ -303,6 +304,44 @@ static int modsig_verify(enum ima_hooks func, const 
> struct modsig *modsig,
>   return rc;
>  }
>  
> +/*
> + * ima_blacklist_measurement - Checks whether the binary is blacklisted. If
> + * yes, then adds the hash of the blacklisted binary to the measurement list.
> + *
> + * Returns -EPERM if the hash is blacklisted.
> + */
> +int ima_check_blacklist(struct integrity_iint_cache *iint,
> + const struct modsig *modsig, int action, int pcr)
> +{
> + enum hash_algo hash_algo;
> + const u8 *digest = NULL;
> + u32 digestsize = 0;
> + u32 secid;
> + int rc = 0;
> + struct ima_template_desc *template_desc;
> +
> + template_desc = lookup_template_desc("ima-buf");
> + template_desc_init_fields(template_desc->fmt, &(template_desc->fields),
> +   &(template_desc->num_fields));

Before using template_desc, make sure that template_desc isn't NULL.
 For completeness, check the return code of
template_desc_init_fields()

> +
> + if (!(iint->flags & IMA_CHECK_BLACKLIST))
> + return 0;

Move this check before getting the template_desc and make sure that
modsig isn't NULL.

> +
> + if (iint->flags & IMA_MODSIG_ALLOWED) {
> + security_task_getsecid(current, &secid);

secid isn't being used.

> + ima_get_modsig_digest(modsig, 

Re: [PATCH v7 8/8] powerpc/ima: update ima arch policy to check for blacklist

2019-10-11 Thread Mimi Zohar
On Mon, 2019-10-07 at 21:14 -0400, Nayna Jain wrote:
> This patch updates the arch specific policies for PowernV systems
> to add check against blacklisted binary hashes before doing the
> verification.

This sentence explains how you're doing something.  A simple tweak in
the wording provides the motivation.

^to make sure that the binary hash is not blacklisted.

> 
> Signed-off-by: Nayna Jain 

Reviewed-by: Mimi Zohar 

> ---
>  arch/powerpc/kernel/ima_arch.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/ima_arch.c b/arch/powerpc/kernel/ima_arch.c
> index 88bfe4a1a9a5..4fa41537b846 100644
> --- a/arch/powerpc/kernel/ima_arch.c
> +++ b/arch/powerpc/kernel/ima_arch.c
> @@ -25,9 +25,9 @@ bool arch_ima_get_secureboot(void)
>  static const char *const arch_rules[] = {
>   "measure func=KEXEC_KERNEL_CHECK template=ima-modsig",
>   "measure func=MODULE_CHECK template=ima-modsig",
> - "appraise func=KEXEC_KERNEL_CHECK appraise_type=imasig|modsig",
> + "appraise func=KEXEC_KERNEL_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig",
>  #if !IS_ENABLED(CONFIG_MODULE_SIG_FORCE)
> - "appraise func=MODULE_CHECK appraise_type=imasig|modsig",
> + "appraise func=MODULE_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig",
>  #endif
>   NULL
>  };



Re: [PATCH v7 6/8] certs: add wrapper function to check blacklisted binary hash

2019-10-11 Thread Mimi Zohar
On Mon, 2019-10-07 at 21:14 -0400, Nayna Jain wrote:
> The existing is_hash_blacklisted() function returns -EKEYREJECTED
> error code for both the blacklisted keys and binaries.
> 
> This patch adds a wrapper function is_binary_blacklisted() to check
> against binary hashes and returns -EPERM.    
> 
> Signed-off-by: Nayna Jain 

This patch description describes what you're doing, not the
motivation.

Reviewed-by: Mimi Zohar 

> ---
>  certs/blacklist.c | 9 +
>  include/keys/system_keyring.h | 6 ++
>  2 files changed, 15 insertions(+)
> 
> diff --git a/certs/blacklist.c b/certs/blacklist.c
> index ec00bf337eb6..6514f9ebc943 100644
> --- a/certs/blacklist.c
> +++ b/certs/blacklist.c
> @@ -135,6 +135,15 @@ int is_hash_blacklisted(const u8 *hash, size_t hash_len, 
> const char *type)
>  }
>  EXPORT_SYMBOL_GPL(is_hash_blacklisted);
>  
> +int is_binary_blacklisted(const u8 *hash, size_t hash_len)
> +{
> + if (is_hash_blacklisted(hash, hash_len, "bin") == -EKEYREJECTED)
> + return -EPERM;
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(is_binary_blacklisted);
> +
>  /*
>   * Initialise the blacklist
>   */
> diff --git a/include/keys/system_keyring.h b/include/keys/system_keyring.h
> index c1a96fdf598b..fb8b07daa9d1 100644
> --- a/include/keys/system_keyring.h
> +++ b/include/keys/system_keyring.h
> @@ -35,12 +35,18 @@ extern int restrict_link_by_builtin_and_secondary_trusted(
>  extern int mark_hash_blacklisted(const char *hash);
>  extern int is_hash_blacklisted(const u8 *hash, size_t hash_len,
>  const char *type);
> +extern int is_binary_blacklisted(const u8 *hash, size_t hash_len);
>  #else
>  static inline int is_hash_blacklisted(const u8 *hash, size_t hash_len,
> const char *type)
>  {
>   return 0;
>  }
> +
> +static inline int is_binary_blacklisted(const u8 *hash, size_t hash_len)
> +{
> + return 0;
> +}
>  #endif
>  
>  #ifdef CONFIG_IMA_BLACKLIST_KEYRING



Re: [PATCH v7 5/8] ima: make process_buffer_measurement() generic

2019-10-11 Thread Mimi Zohar
[Cc'ing Prakhar Srivastava]

On Mon, 2019-10-07 at 21:14 -0400, Nayna Jain wrote:
> An additional measurement record is needed to indicate the blacklisted
> binary. The record will measure the blacklisted binary hash.
> 
> This patch makes the function process_buffer_measurement() generic to be
> called by the blacklisting function. It modifies the function to handle
> more than just the KEXEC_CMDLINE.

The purpose of this patch is to make process_buffer_measurement() more
generic.  The patch description should simply say,
process_buffer_measurement() is limited to measuring the kexec boot
command line.  This patch makes process_buffer_measurement() more
generic, allowing it to measure other types of buffer data (eg.
blacklisted binary hashes).

Mimi

> 
> Signed-off-by: Nayna Jain 
> ---
>  security/integrity/ima/ima.h  |  3 +++
>  security/integrity/ima/ima_main.c | 29 ++---
>  2 files changed, 17 insertions(+), 15 deletions(-)
> 
> diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
> index 3689081aaf38..ed86c1f70d7f 100644
> --- a/security/integrity/ima/ima.h
> +++ b/security/integrity/ima/ima.h
> @@ -217,6 +217,9 @@ void ima_store_measurement(struct integrity_iint_cache 
> *iint, struct file *file,
>  struct evm_ima_xattr_data *xattr_value,
>  int xattr_len, const struct modsig *modsig, int pcr,
>  struct ima_template_desc *template_desc);
> +void process_buffer_measurement(const void *buf, int size,
> + const char *eventname, int pcr,
> + struct ima_template_desc *template_desc);
>  void ima_audit_measurement(struct integrity_iint_cache *iint,
>  const unsigned char *filename);
>  int ima_alloc_init_template(struct ima_event_data *event_data,
> diff --git a/security/integrity/ima/ima_main.c 
> b/security/integrity/ima/ima_main.c
> index 60027c643ecd..77115e884496 100644
> --- a/security/integrity/ima/ima_main.c
> +++ b/security/integrity/ima/ima_main.c
> @@ -626,14 +626,14 @@ int ima_load_data(enum kernel_load_data_id id)
>   * @buf: pointer to the buffer that needs to be added to the log.
>   * @size: size of buffer(in bytes).
>   * @eventname: event name to be used for the buffer entry.
> - * @cred: a pointer to a credentials structure for user validation.
> - * @secid: the secid of the task to be validated.
> + * @pcr: pcr to extend the measurement
> + * @template_desc: template description
>   *
>   * Based on policy, the buffer is measured into the ima log.
>   */
> -static void process_buffer_measurement(const void *buf, int size,
> -const char *eventname,
> -const struct cred *cred, u32 secid)
> +void process_buffer_measurement(const void *buf, int size,
> + const char *eventname, int pcr,
> + struct ima_template_desc *template_desc)
>  {
>   int ret = 0;
>   struct ima_template_entry *entry = NULL;
> @@ -642,19 +642,11 @@ static void process_buffer_measurement(const void *buf, 
> int size,
>   .filename = eventname,
>   .buf = buf,
>   .buf_len = size};
> - struct ima_template_desc *template_desc = NULL;
>   struct {
>   struct ima_digest_data hdr;
>   char digest[IMA_MAX_DIGEST_SIZE];
>   } hash = {};
>   int violation = 0;
> - int pcr = CONFIG_IMA_MEASURE_PCR_IDX;
> - int action = 0;
> -
> - action = ima_get_action(NULL, cred, secid, 0, KEXEC_CMDLINE, &pcr,
> - &template_desc);
> - if (!(action & IMA_MEASURE))
> - return;
>  
>   iint.ima_hash = &hash.hdr;
>   iint.ima_hash->algo = ima_hash_algo;
> @@ -686,12 +678,19 @@ static void process_buffer_measurement(const void *buf, 
> int size,
>   */
>  void ima_kexec_cmdline(const void *buf, int size)
>  {
> + int pcr = CONFIG_IMA_MEASURE_PCR_IDX;
> + struct ima_template_desc *template_desc = NULL;
> + int action;
>   u32 secid;
>  
>   if (buf && size != 0) {
>   security_task_getsecid(current, &secid);
> - process_buffer_measurement(buf, size, "kexec-cmdline",
> -current_cred(), secid);
> + action = ima_get_action(NULL, current_cred(), secid, 0,
> + KEXEC_CMDLINE, &pcr, &template_desc);
> + if (!(action & IMA_MEASURE))
> + return;
> + process_buffer_measurement(buf, size, "kexec-cmdline", pcr,
> +template_desc);
>   }
>  }
>  



Re: [PATCH v7 2/8] powerpc: add support to initialize ima policy rules

2019-10-11 Thread Mimi Zohar
On Mon, 2019-10-07 at 21:14 -0400, Nayna Jain wrote:
> PowerNV systems uses kernel based bootloader, thus its secure boot
> implementation uses kernel IMA security subsystem to verify the kernel
> before kexec. 

^use a Linux based bootloader, which rely on the IMA subsystem to
enforce different secure boot modes.

> Since the verification policy might differ based on the
> secure boot mode of the system, the policies are defined at runtime.

^the policies need to be defined at runtime.
> 
> This patch implements the arch-specific support to define the IMA policy
> rules based on the runtime secure boot mode of the system.
> 
> This patch provides arch-specific IMA policies if PPC_SECURE_BOOT
> config is enabled.
> 
> Signed-off-by: Nayna Jain 
> ---
>  arch/powerpc/Kconfig   |  2 ++
>  arch/powerpc/kernel/Makefile   |  2 +-
>  arch/powerpc/kernel/ima_arch.c | 33 +
>  include/linux/ima.h|  3 ++-
>  4 files changed, 38 insertions(+), 2 deletions(-)
>  create mode 100644 arch/powerpc/kernel/ima_arch.c
> 
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index b4a221886fcf..deb19ec6ba3d 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -938,6 +938,8 @@ config PPC_SECURE_BOOT
>   prompt "Enable secure boot support"
>   bool
>   depends on PPC_POWERNV
> + depends on IMA
> + depends on IMA_ARCH_POLICY

As IMA_ARCH_POLICY is dependent on IMA, I don't see a need for
depending on both IMA and IMA_ARCH_POLICY.

Mimi



Re: [PATCH -next] ASoC: fsl_mqs: fix old-style function declaration

2019-10-11 Thread Andreas Schwab
On Okt 11 2019, YueHaibing  wrote:

> gcc warn about this:
>
> sound/soc/fsl/fsl_mqs.c:146:1: warning:
>  static is not at beginning of declaration [-Wold-style-declaration]

It's not a function, though.

Andreas.

-- 
Andreas Schwab, sch...@linux-m68k.org
GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510  2552 DF73 E780 A9DA AEC1
"And now for something completely different."


Re: [PATCH v6] numa: make node_to_cpumask_map() NUMA_NO_NODE aware

2019-10-11 Thread Peter Zijlstra
On Fri, Oct 11, 2019 at 11:27:54AM +0800, Yunsheng Lin wrote:
> But I failed to see why the above is related to making node_to_cpumask_map()
> NUMA_NO_NODE aware?

Your initial bug is for hns3, which is a PCI device, which really _MUST_
have a node assigned.

It not having one, is a straight up bug. We must not silently accept
NO_NODE there, ever.


[PATCH -next] ASoC: fsl_mqs: fix old-style function declaration

2019-10-11 Thread YueHaibing
gcc warn about this:

sound/soc/fsl/fsl_mqs.c:146:1: warning:
 static is not at beginning of declaration [-Wold-style-declaration]

Signed-off-by: YueHaibing 
---
 sound/soc/fsl/fsl_mqs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sound/soc/fsl/fsl_mqs.c b/sound/soc/fsl/fsl_mqs.c
index f7fc44e..0c813a4 100644
--- a/sound/soc/fsl/fsl_mqs.c
+++ b/sound/soc/fsl/fsl_mqs.c
@@ -143,7 +143,7 @@ static void fsl_mqs_shutdown(struct snd_pcm_substream *substream,
   MQS_EN_MASK, 0);
 }
 
-const static struct snd_soc_component_driver soc_codec_fsl_mqs = {
+static const struct snd_soc_component_driver soc_codec_fsl_mqs = {
.idle_bias_on = 1,
.non_legacy_dai_naming  = 1,
 };
-- 
2.7.4




Re: [PATCH 0/2] ocxl: Move SPA and TL definitions

2019-10-11 Thread christophe lombard

On 11/10/2019 10:06, christophe lombard wrote:

On 11/10/2019 00:34, Andrew Donnellan wrote:

On 10/10/19 2:11 am, christophe lombard wrote:
This series moves the definition and the management of scheduled 
process area
(SPA) and of the templates (Transaction Layer) for an ocxl card, 
using the
OCAPI interface. The code is now located in the specific arch powerpc 
platform.
These patches will help with a future implementation of the ocxl driver
in QEMU.


Could you explain more about this?



The Scheduled Processes Area and the configuration of the Transaction
Layer are specific to the AFU and more generally to the Opencapi
device.
Running the ocxl module in a guest environment, and later in several
guests in parallel, using the same OpenCAPI device and the same AFUs,
requires having common code handling the SPA. This explains why these
parts of the ocxl driver will move to the powerpc platform code running
on the host.


Thanks.



The implementation of the ocxl driver running in a QEMU guest environment
will be detailed in the following patches. Basically, a new ocxl vfio
driver, running in the host, will interact on one side with the SPA,
using the pnv_* API(s), and on the other side with the guest(s) through
ioctl commands. Ocxl, running in the guest, will configure the device
through hcalls (handled by QEMU) and interact with the vfio driver
through ioctl commands.






Andrew




The Open Coherently Attached Processor Interface (OCAPI) is used to
allow an Attached Functional Unit (AFU) to connect to the Processor
Chip's system bus in a high speed and cache coherent manner.

It builds on top of the existing ocxl driver.

It has been tested in a bare-metal environment using the memcpy and
the AFP AFUs.

christophe lombard (2):
   powerpc/powernv: ocxl move SPA definition
   powerpc/powernv: ocxl move TL definition

  arch/powerpc/include/asm/pnv-ocxl.h   |  30 +-
  arch/powerpc/platforms/powernv/ocxl.c | 378 +++---
  drivers/misc/ocxl/afu_irq.c   |   1 -
  drivers/misc/ocxl/config.c    |  89 +-
  drivers/misc/ocxl/link.c  | 347 +++
  drivers/misc/ocxl/ocxl_internal.h |  12 -
  drivers/misc/ocxl/trace.h |  34 +--
  7 files changed, 467 insertions(+), 424 deletions(-)









Re: [PATCH 1/3] powernv/iov: Ensure the pdn for VFs always contains a valid PE number

2019-10-11 Thread Michael Ellerman
"Oliver O'Halloran"  writes:
> On Tue, Oct 1, 2019 at 3:09 AM Bjorn Helgaas  wrote:
>> On Mon, Sep 30, 2019 at 12:08:46PM +1000, Oliver O'Halloran wrote:
>> This is all powerpc, so I assume Michael will handle this.  Just
>> random things I noticed; ignore if they don't make sense:
>>
>> > On PowerNV we use the pcibios_sriov_enable() hook to do two things:
>> >
>> > 1. Create a pci_dn structure for each of the VFs, and
>> > 2. Configure the PHB's internal BARs that map MMIO ranges to PEs
>> >so that each VF has it's own PE. Note that the PE also determines
>>
>> s/it's/its/
>>
>> >the IOMMU table the HW uses for the device.
>> >
>> > Currently we do not set the pe_number field of the pci_dn immediately after
>> > assigning the PE number for the VF that it represents. Instead, we do that
>> > in a fixup (see pnv_pci_dma_dev_setup) which is run inside the
>> > pcibios_add_device() hook which is run prior to adding the device to the
>> > bus.
>> >
>> > On PowerNV we add the device to it's IOMMU group using a bus notifier and
>>
>> s/it's/its/
>>
>> > in order for this to work the PE number needs to be known when the bus
>> > notifier is run. This works today since the PE number is set in the fixup
>> > which runs before adding the device to the bus. However, if we want to move
>> > the fixup to a later stage this will break.
>> >
>> > We can fix this by setting the pdn->pe_number inside of
>> > pcibios_sriov_enable(). There's no good to avoid this since we already have
>>
>> s/no good/no good reason/ ?
>>
>> Not quite sure what "this" refers to ... "no good reason to avoid
>> setting pdn->pe_number in pcibios_sriov_enable()"?  The double
>> negative makes it a little hard to parse.
>
> I agree it's a bit vague, I'll re-word it.

So I'm expecting a v2?

cheers


Re: [PATCH v2 01/29] powerpc: Rename "notes" PT_NOTE to "note"

2019-10-11 Thread Segher Boessenkool
On Thu, Oct 10, 2019 at 05:05:41PM -0700, Kees Cook wrote:
> The Program Header identifiers are internal to the linker scripts. In
> preparation for moving the NOTES segment declaration into RO_DATA,
> standardize the identifier for the PT_NOTE entry to "note" as used by
> all other architectures that emit PT_NOTE.

All other archs are wrong, and "notes" is a much better name.  This
segment does not contain a single "note", but multiple "notes".


Segher


Re: [PATCH] spufs: fix a crash in spufs_create_root()

2019-10-11 Thread Michael Ellerman
On Tue, 2019-10-08 at 14:13:42 UTC, Emmanuel Nicolet wrote:
> The spu_fs_context was not set in fc->fs_private, this caused a crash
> when accessing ctx->mode in spufs_create_root().
> 
> Signed-off-by: Emmanuel Nicolet 

Applied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/2272905a4580f26630f7d652cc33935b59f96d4c

cheers


Re: [PATCH] powerpc/kvm: Fix kvmppc_vcore->in_guest value in kvmhv_switch_to_host

2019-10-11 Thread Michael Ellerman
On Fri, 2019-10-04 at 02:53:17 UTC, Jordan Niethe wrote:
> kvmhv_switch_to_host() in arch/powerpc/kvm/book3s_hv_rmhandlers.S needs
> to set kvmppc_vcore->in_guest to 0 to signal secondary CPUs to continue.
> This happens after resetting the PCR. Before commit 13c7bb3c57dc
> ("powerpc/64s: Set reserved PCR bits"), r0 would always be 0 before it
> was stored to kvmppc_vcore->in_guest. However because of this change in
> the commit:
> 
> /* Reset PCR */
> ld  r0, VCORE_PCR(r5)
> -   cmpdi   r0, 0
> +   LOAD_REG_IMMEDIATE(r6, PCR_MASK)
> +   cmpld   r0, r6
> beq 18f
> -   li  r0, 0
> -   mtspr   SPRN_PCR, r0
> +   mtspr   SPRN_PCR, r6
>  18:
> /* Signal secondary CPUs to continue */
> stb r0,VCORE_IN_GUEST(r5)
> 
> We are no longer comparing r0 against 0 and loading it with 0 if it
> contains something else. Hence when we store r0 to
> kvmppc_vcore->in_guest, it might not be 0.  This means that secondary
> CPUs will not be signalled to continue. Those CPUs get stuck and errors
> like the following are logged:
> 
> KVM: CPU 1 seems to be stuck
> KVM: CPU 2 seems to be stuck
> KVM: CPU 3 seems to be stuck
> KVM: CPU 4 seems to be stuck
> KVM: CPU 5 seems to be stuck
> KVM: CPU 6 seems to be stuck
> KVM: CPU 7 seems to be stuck
> 
> This can be reproduced with:
> $ for i in `seq 1 7` ; do chcpu -d $i ; done ;
> $ taskset -c 0 qemu-system-ppc64 -smp 8,threads=8 \
>-M pseries,accel=kvm,kvm-type=HV -m 1G -nographic -vga none \
>-kernel vmlinux -initrd initrd.cpio.xz
> 
> Fix by making sure r0 is 0 before storing it to kvmppc_vcore->in_guest.
> 
> Fixes: 13c7bb3c57dc ("powerpc/64s: Set reserved PCR bits")
> Reported-by: Alexey Kardashevskiy 
> Signed-off-by: Jordan Niethe 

Applied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/7fe4e1176dfe47a243d8edd98d26abd11f91b042

cheers


Re: [PATCH] selftests/powerpc: Fix compiling error on tlbie_test due to newer gcc

2019-10-11 Thread Michael Ellerman
On Thu, 2019-10-03 at 21:10:10 UTC, "Desnes A. Nunes do Rosario" wrote:
> Newer versions of GCC demand that the size of the string to be copied must
> be explicitly smaller than the size of the destination. Thus, the NULL
> char has to be taken into account on strncpy.
> 
> This will avoid the following compiling error:
> 
>   tlbie_test.c: In function 'main':
>   tlbie_test.c:639:4: error: 'strncpy' specified bound 100 equals destination size [-Werror=stringop-truncation]
>   strncpy(logdir, optarg, LOGDIR_NAME_SIZE);
>   ^
>   cc1: all warnings being treated as errors
> 
> Signed-off-by: Desnes A. Nunes do Rosario 

Reapplied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/5b216ea1c40cf06eead15054c70e238c9bd4729e

cheers


Re: [PATCH] powerpc/pseries: Remove confusing warning message.

2019-10-11 Thread Michael Ellerman
On Tue, 2019-10-01 at 13:29:28 UTC, Laurent Dufour wrote:
> Since the commit 1211ee61b4a8 ("powerpc/pseries: Read TLB Block Invalidate
> Characteristics"), a warning message is displayed when booting a guest on
> top of KVM:
> 
> lpar: arch/powerpc/platforms/pseries/lpar.c pseries_lpar_read_hblkrm_characteristics Error calling get-system-parameter (0xfffd)
> 
> This message is displayed because this hypervisor is not supporting the
> H_BLOCK_REMOVE hcall and thus is not exposing the corresponding feature.
> 
> Reading the TLB Block Invalidate Characteristics should not be done if the
> feature is not exposed.
> 
> Fixes: 1211ee61b4a8 ("powerpc/pseries: Read TLB Block Invalidate Characteristics")
> Reported-by: Stephen Rothwell 
> Signed-off-by: Laurent Dufour 

Reapplied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/4ab8a485f7bc69e04e3e8d75f62bdcac5f4ed02e

cheers


Re: linux-next: build failure after merge of the powerpc tree

2019-10-11 Thread Michael Ellerman
On Mon, 2019-09-30 at 00:13:42 UTC, Stephen Rothwell wrote:
> Hi all,
> 
> After merging the powerpc tree, today's linux-next build (powerpc64
> allnoconfig) failed like this:
> 
> arch/powerpc/mm/book3s64/pgtable.c: In function 'flush_partition':
> arch/powerpc/mm/book3s64/pgtable.c:216:3: error: implicit declaration of function 'radix__flush_all_lpid_guest'; did you mean 'radix__flush_all_lpid'? [-Werror=implicit-function-declaration]
>   216 |   radix__flush_all_lpid_guest(lpid);
>   |   ^~~
>   |   radix__flush_all_lpid
> 
> Caused by commit
> 
>   99161de3a283 ("powerpc/64s/radix: tidy up TLB flushing code")
> 
> radix__flush_all_lpid_guest() is only declared for CONFIG_PPC_RADIX_MMU
> which is not set for this build.
> 
> I am not sure why this did not show up earlier (maybe a Kconfig
> change?).
> 
> I added the following hack for today.
> 
> From: Stephen Rothwell 
> Date: Mon, 30 Sep 2019 10:09:17 +1000
> Subject: [PATCH] powerpc/64s/radix: fix for "tidy up TLB flushing code" and
>  !CONFIG_PPC_RADIX_MMU
> 
> Signed-off-by: Stephen Rothwell 

Reapplied to powerpc fixes, thanks.

https://git.kernel.org/powerpc/c/18217da36103c25d87870624dfa569e6b9906a90

cheers


Re: [PATCH 0/2] ocxl: Move SPA and TL definitions

2019-10-11 Thread christophe lombard

On 11/10/2019 00:34, Andrew Donnellan wrote:

On 10/10/19 2:11 am, christophe lombard wrote:
This series moves the definition and the management of scheduled 
process area
(SPA) and of the templates (Transaction Layer) for an ocxl card, using 
the
OCAPI interface. The code is now located in the specific arch powerpc 
platform.
These patches will help with a future implementation of the ocxl driver
in QEMU.


Could you explain more about this?



The Scheduled Processes Area and the configuration of the Transaction
Layer are specific to the AFU and more generally to the Opencapi
device.
Running the ocxl module in a guest environment, and later in several
guests in parallel, using the same OpenCAPI device and the same AFUs,
requires having common code handling the SPA. This explains why these
parts of the ocxl driver will move to the powerpc platform code running
on the host.


Thanks.




Andrew




The Open Coherently Attached Processor Interface (OCAPI) is used to
allow an Attached Functional Unit (AFU) to connect to the Processor
Chip's system bus in a high speed and cache coherent manner.

It builds on top of the existing ocxl driver.

It has been tested in a bare-metal environment using the memcpy and
the AFP AFUs.

christophe lombard (2):
   powerpc/powernv: ocxl move SPA definition
   powerpc/powernv: ocxl move TL definition

  arch/powerpc/include/asm/pnv-ocxl.h   |  30 +-
  arch/powerpc/platforms/powernv/ocxl.c | 378 +++---
  drivers/misc/ocxl/afu_irq.c   |   1 -
  drivers/misc/ocxl/config.c    |  89 +-
  drivers/misc/ocxl/link.c  | 347 +++
  drivers/misc/ocxl/ocxl_internal.h |  12 -
  drivers/misc/ocxl/trace.h |  34 +--
  7 files changed, 467 insertions(+), 424 deletions(-)







Re: [PATCH v2 12/29] vmlinux.lds.h: Replace RO_DATA_SECTION with RO_DATA

2019-10-11 Thread Geert Uytterhoeven
On Fri, Oct 11, 2019 at 2:07 AM Kees Cook  wrote:
> Finish renaming RO_DATA_SECTION to RO_DATA. (Calling this a "section"
> is a lie, since it's multiple sections and section flags cannot be
> applied to the macro.)
>
> Signed-off-by: Kees Cook 

>  arch/m68k/kernel/vmlinux-nommu.lds  | 2 +-

For m68k:
Acked-by: Geert Uytterhoeven 

Gr{oetje,eeting}s,

Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- ge...@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds


Re: [PATCH v2 13/29] vmlinux.lds.h: Replace RW_DATA_SECTION with RW_DATA

2019-10-11 Thread Geert Uytterhoeven
On Fri, Oct 11, 2019 at 2:07 AM Kees Cook  wrote:
> Rename RW_DATA_SECTION to RW_DATA. (Calling this a "section" is a lie,
> since it's multiple sections and section flags cannot be applied to
> the macro.)
>
> Signed-off-by: Kees Cook 

>  arch/m68k/kernel/vmlinux-nommu.lds   | 2 +-
>  arch/m68k/kernel/vmlinux-std.lds | 2 +-
>  arch/m68k/kernel/vmlinux-sun3.lds| 2 +-

For m68k:
Acked-by: Geert Uytterhoeven 

Gr{oetje,eeting}s,

Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- ge...@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds


[PATCH V5 2/2] mm/debug: Add tests validating architecture page table helpers

2019-10-11 Thread Anshuman Khandual
This adds tests which will validate architecture page table helpers and
other accessors for their compliance with expected generic MM semantics.
This will help various architectures in validating changes to existing
page table helpers or the addition of new ones.

The test page table, and the memory pages backing its entries at various
levels, are all allocated from system memory with the required sizes and
alignments. If memory pages with the required size and alignment cannot be
allocated, all the individual tests that depend on them are simply skipped.
The test gets called right after init_mm_internals(), which is required for
alloc_contig_range() to work correctly.

This gets built and run when CONFIG_DEBUG_VM_PGTABLE is selected along with
CONFIG_DEBUG_VM. Architectures willing to subscribe to this test also need
to select CONFIG_ARCH_HAS_DEBUG_VM_PGTABLE, which for now is limited to x86
and arm64. Going forward, other architectures can also enable this after
fixing any build or runtime problems with their page table helpers.

Cc: Andrew Morton 
Cc: Vlastimil Babka 
Cc: Greg Kroah-Hartman 
Cc: Thomas Gleixner 
Cc: Mike Rapoport 
Cc: Jason Gunthorpe 
Cc: Dan Williams 
Cc: Peter Zijlstra 
Cc: Michal Hocko 
Cc: Mark Rutland 
Cc: Mark Brown 
Cc: Steven Price 
Cc: Ard Biesheuvel 
Cc: Masahiro Yamada 
Cc: Kees Cook 
Cc: Tetsuo Handa 
Cc: Matthew Wilcox 
Cc: Sri Krishna chowdary 
Cc: Dave Hansen 
Cc: Russell King - ARM Linux 
Cc: Michael Ellerman 
Cc: Paul Mackerras 
Cc: Martin Schwidefsky 
Cc: Heiko Carstens 
Cc: "David S. Miller" 
Cc: Vineet Gupta 
Cc: James Hogan 
Cc: Paul Burton 
Cc: Ralf Baechle 
Cc: Kirill A. Shutemov 
Cc: Gerald Schaefer 
Cc: Christophe Leroy 
Cc: linux-snps-...@lists.infradead.org
Cc: linux-m...@vger.kernel.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-i...@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: x...@kernel.org
Cc: linux-ker...@vger.kernel.org

Tested-by: Christophe Leroy # PPC32
Suggested-by: Catalin Marinas 
Signed-off-by: Christophe Leroy 
Signed-off-by: Anshuman Khandual 
---
 .../debug/debug-vm-pgtable/arch-support.txt|  34 ++
 arch/arm64/Kconfig |   1 +
 arch/x86/Kconfig   |   1 +
 arch/x86/include/asm/pgtable_64.h  |   6 +
 include/asm-generic/pgtable.h  |   6 +
 init/main.c|   1 +
 lib/Kconfig.debug  |  21 +
 mm/Makefile|   1 +
 mm/debug_vm_pgtable.c  | 438 +
 9 files changed, 509 insertions(+)
 create mode 100644 Documentation/features/debug/debug-vm-pgtable/arch-support.txt
 create mode 100644 mm/debug_vm_pgtable.c

diff --git a/Documentation/features/debug/debug-vm-pgtable/arch-support.txt 
b/Documentation/features/debug/debug-vm-pgtable/arch-support.txt
new file mode 100644
index 000..d6b8185
--- /dev/null
+++ b/Documentation/features/debug/debug-vm-pgtable/arch-support.txt
@@ -0,0 +1,34 @@
+#
+# Feature name:  debug-vm-pgtable
+# Kconfig:   ARCH_HAS_DEBUG_VM_PGTABLE
+# description:   arch supports pgtable tests for semantics compliance
+#
+---
+| arch |status|
+---
+|   alpha: | TODO |
+| arc: | TODO |
+| arm: | TODO |
+|   arm64: |  ok  |
+| c6x: | TODO |
+|csky: | TODO |
+|   h8300: | TODO |
+| hexagon: | TODO |
+|ia64: | TODO |
+|m68k: | TODO |
+|  microblaze: | TODO |
+|mips: | TODO |
+|   nds32: | TODO |
+|   nios2: | TODO |
+|openrisc: | TODO |
+|  parisc: | TODO |
+| powerpc: | TODO |
+|   riscv: | TODO |
+|s390: | TODO |
+|  sh: | TODO |
+|   sparc: | TODO |
+|  um: | TODO |
+|   unicore32: | TODO |
+| x86: |  ok  |
+|  xtensa: | TODO |
+---
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 950a56b..8a3b3ea 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -11,6 +11,7 @@ config ARM64
select ACPI_PPTT if ACPI
select ARCH_CLOCKSOURCE_DATA
select ARCH_HAS_DEBUG_VIRTUAL
+   select ARCH_HAS_DEBUG_VM_PGTABLE
select ARCH_HAS_DEVMEM_IS_ALLOWED
select ARCH_HAS_DMA_COHERENT_TO_PFN
select ARCH_HAS_DMA_PREP_COHERENT
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index abe822d..13c9bd9 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -61,6 +61,7 @@ config X86
select ARCH_CLOCKSOURCE_INIT
select ARCH_HAS_ACPI_TABLE_UPGRADE  if ACPI
select ARCH_HAS_DEBUG_VIRTUAL
+   select ARCH_HAS_DEBUG_VM_PGTABLE
select ARCH_HAS_DEVMEM_IS_ALLOWED
select 

[PATCH V5 1/2] mm/hugetlb: Make alloc_gigantic_page() available for general use

2019-10-11 Thread Anshuman Khandual
alloc_gigantic_page() implements an allocation method that scans over
various zones looking for a large contiguous memory block which could not
have been allocated through the buddy allocator. A subsequent patch which
tests arch page table helpers needs such a method to allocate a PUD_SIZE
sized memory block. In the future such methods might have other use cases
as well. So alloc_gigantic_page() has been split, carving out the actual
memory allocation method, now available via the new alloc_gigantic_page_order().

Cc: Andrew Morton 
Cc: Vlastimil Babka 
Cc: Greg Kroah-Hartman 
Cc: Thomas Gleixner 
Cc: Mike Rapoport 
Cc: Mike Kravetz 
Cc: Jason Gunthorpe 
Cc: Dan Williams 
Cc: Peter Zijlstra 
Cc: Michal Hocko 
Cc: Mark Rutland 
Cc: Mark Brown 
Cc: Steven Price 
Cc: Ard Biesheuvel 
Cc: Masahiro Yamada 
Cc: Kees Cook 
Cc: Tetsuo Handa 
Cc: Matthew Wilcox 
Cc: Sri Krishna chowdary 
Cc: Dave Hansen 
Cc: Russell King - ARM Linux 
Cc: Michael Ellerman 
Cc: Paul Mackerras 
Cc: Martin Schwidefsky 
Cc: Heiko Carstens 
Cc: "David S. Miller" 
Cc: Vineet Gupta 
Cc: James Hogan 
Cc: Paul Burton 
Cc: Ralf Baechle 
Cc: Kirill A. Shutemov 
Cc: Gerald Schaefer 
Cc: Christophe Leroy 
Cc: linux-snps-...@lists.infradead.org
Cc: linux-m...@vger.kernel.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-i...@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: x...@kernel.org
Cc: linux-ker...@vger.kernel.org
Signed-off-by: Anshuman Khandual 
---
 include/linux/hugetlb.h |  9 +
 mm/hugetlb.c| 24 ++--
 2 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 4c5a16b..7ff1e36 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -298,6 +298,9 @@ static inline bool is_file_hugepages(struct file *file)
 }
 
 
+struct page *
+alloc_gigantic_page_order(unsigned int order, gfp_t gfp_mask,
+ int nid, nodemask_t *nodemask);
 #else /* !CONFIG_HUGETLBFS */
 
 #define is_file_hugepages(file)false
@@ -309,6 +312,12 @@ hugetlb_file_setup(const char *name, size_t size, vm_flags_t acctflag,
return ERR_PTR(-ENOSYS);
 }
 
+static inline struct page *
+alloc_gigantic_page_order(unsigned int order, gfp_t gfp_mask,
+ int nid, nodemask_t *nodemask)
+{
+   return NULL;
+}
 #endif /* !CONFIG_HUGETLBFS */
 
 #ifdef HAVE_ARCH_HUGETLB_UNMAPPED_AREA
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 977f9a3..2996e44 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1066,10 +1066,9 @@ static bool zone_spans_last_pfn(const struct zone *zone,
return zone_spans_pfn(zone, last_pfn);
 }
 
-static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
+struct page *alloc_gigantic_page_order(unsigned int order, gfp_t gfp_mask,
int nid, nodemask_t *nodemask)
 {
-   unsigned int order = huge_page_order(h);
unsigned long nr_pages = 1 << order;
unsigned long ret, pfn, flags;
struct zonelist *zonelist;
@@ -1105,6 +1104,14 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
return NULL;
 }
 
+static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
+   int nid, nodemask_t *nodemask)
+{
+   unsigned int order = huge_page_order(h);
+
+   return alloc_gigantic_page_order(order, gfp_mask, nid, nodemask);
+}
+
 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid);
 static void prep_compound_gigantic_page(struct page *page, unsigned int order);
 #else /* !CONFIG_CONTIG_ALLOC */
@@ -1113,6 +1120,12 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 {
return NULL;
 }
+
+struct page *alloc_gigantic_page_order(unsigned int order, gfp_t gfp_mask,
+  int nid, nodemask_t *nodemask)
+{
+   return NULL;
+}
 #endif /* CONFIG_CONTIG_ALLOC */
 
 #else /* !CONFIG_ARCH_HAS_GIGANTIC_PAGE */
@@ -1121,6 +1134,13 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 {
return NULL;
 }
+
+struct page *alloc_gigantic_page_order(unsigned int order, gfp_t gfp_mask,
+  int nid, nodemask_t *nodemask)
+{
+   return NULL;
+}
+
 static inline void free_gigantic_page(struct page *page, unsigned int order) { }
 static inline void destroy_compound_gigantic_page(struct page *page,
unsigned int order) { }
-- 
2.7.4



[PATCH V5 0/2] mm/debug: Add tests validating architecture page table helpers

2019-10-11 Thread Anshuman Khandual
This series adds tests validating architecture-exported page table
helpers. A patch in the series adds basic transformation tests at various
levels of the page table. Before that, it exports the gigantic page
allocation function from HugeTLB.

This test was originally suggested by Catalin during the arm64 THP
migration RFC discussion earlier. Going forward it can include more
specific tests for various generic MM functions such as THP and HugeTLB,
as well as platform-specific tests.

https://lore.kernel.org/linux-mm/20190628102003.ga56...@arrakis.emea.arm.com/

Changes in V5:

- Redefined and moved X86 mm_p4d_folded() into a different header per Kirill/Ingo
- Updated the config option comment per Ingo and dropped 'kernel module' reference
- Updated the commit message and dropped 'kernel module' reference
- Changed DEBUG_ARCH_PGTABLE_TEST into DEBUG_VM_PGTABLE per Ingo
- Moved config option from mm/Kconfig.debug into lib/Kconfig.debug
- Renamed core test function arch_pgtable_tests() as debug_vm_pgtable()
- Renamed mm/arch_pgtable_test.c as mm/debug_vm_pgtable.c
- debug_vm_pgtable() gets called from kernel_init_freeable() after init_mm_internals()
- Added an entry in Documentation/features/debug/ per Ingo
- Enabled the test on arm64 and x86 platforms for now

Changes in V4: (https://patchwork.kernel.org/project/linux-mm/list/?series=183465)

- Disable DEBUG_ARCH_PGTABLE_TEST for ARM and IA64 platforms

Changes in V3: (https://lore.kernel.org/patchwork/project/lkml/list/?series=411216)

- Changed test trigger from module format into late_initcall()
- Marked all functions with __init to be freed after completion
- Changed all __PGTABLE_PXX_FOLDED checks as mm_pxx_folded()
- Folded in PPC32 fixes from Christophe

Changes in V2:

https://lore.kernel.org/linux-mm/1568268173-31302-1-git-send-email-anshuman.khand...@arm.com/T/#t

- Fixed small typo error in MODULE_DESCRIPTION()
- Fixed m64k build problems for lvalue concerns in pmd_xxx_tests()
- Fixed dynamic page table level folding problems on x86 as per Kirril
- Fixed second pointers during pxx_populate_tests() per Kirill and Gerald
- Allocate and free pte table with pte_alloc_one/pte_free per Kirill
- Modified pxx_clear_tests() to accommodate s390 lower 12 bits situation
- Changed RANDOM_NZVALUE value from 0xbe to 0xff
- Changed allocation, usage, free sequence for saved_ptep
- Renamed VMA_FLAGS as VMFLAGS
- Implemented a new method for random vaddr generation
- Implemented some other cleanups
- Dropped extern reference to mm_alloc()
- Created and exported new alloc_gigantic_page_order()
- Dropped the custom allocator and used new alloc_gigantic_page_order()

Changes in V1:

https://lore.kernel.org/linux-mm/1567497706-8649-1-git-send-email-anshuman.khand...@arm.com/

- Added fallback mechanism for PMD aligned memory allocation failure

Changes in RFC V2:

https://lore.kernel.org/linux-mm/1565335998-22553-1-git-send-email-anshuman.khand...@arm.com/T/#u

- Moved test module and its config from lib/ to mm/
- Renamed config TEST_ARCH_PGTABLE as DEBUG_ARCH_PGTABLE_TEST
- Renamed file from test_arch_pgtable.c to arch_pgtable_test.c
- Added relevant MODULE_DESCRIPTION() and MODULE_AUTHOR() details
- Dropped loadable module config option
- Basic tests now use memory blocks with required size and alignment
- PUD aligned memory block gets allocated with alloc_contig_range()
- If PUD aligned memory could not be allocated it falls back on PMD aligned
  memory block from page allocator and pud_* tests are skipped
- Clear and populate tests now operate on real in memory page table entries
- Dummy mm_struct gets allocated with mm_alloc()
- Dummy page table entries get allocated with [pud|pmd|pte]_alloc_[map]()
- Simplified [p4d|pgd]_basic_tests(), now has random values in the entries

Original RFC V1:

https://lore.kernel.org/linux-mm/1564037723-26676-1-git-send-email-anshuman.khand...@arm.com/

Cc: Andrew Morton 
Cc: Vlastimil Babka 
Cc: Greg Kroah-Hartman 
Cc: Thomas Gleixner 
Cc: Mike Rapoport 
Cc: Jason Gunthorpe 
Cc: Dan Williams 
Cc: Peter Zijlstra 
Cc: Michal Hocko 
Cc: Mark Rutland 
Cc: Mark Brown 
Cc: Steven Price 
Cc: Ard Biesheuvel 
Cc: Masahiro Yamada 
Cc: Kees Cook 
Cc: Tetsuo Handa 
Cc: Matthew Wilcox 
Cc: Sri Krishna chowdary 
Cc: Dave Hansen 
Cc: Russell King - ARM Linux 
Cc: Michael Ellerman 
Cc: Paul Mackerras 
Cc: Martin Schwidefsky 
Cc: Heiko Carstens 
Cc: "David S. Miller" 
Cc: Vineet Gupta 
Cc: James Hogan 
Cc: Paul Burton 
Cc: Ralf Baechle 
Cc: Kirill A. Shutemov 
Cc: Gerald Schaefer 
Cc: Christophe Leroy 
Cc: Mike Kravetz 
Cc: linux-snps-...@lists.infradead.org
Cc: linux-m...@vger.kernel.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-i...@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Cc: linux-s...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: x...@kernel.org
Cc: linux-ker...@vger.kernel.org

Anshuman Khandual (2):
  mm/hugetlb: Make alloc_gigantic_page() available for general use
  mm/debug: Add tests 

Re: [PATCH v2 03/29] powerpc: Rename PT_LOAD identifier "kernel" to "text"

2019-10-11 Thread Michael Ellerman
Kees Cook  writes:
> In preparation for moving NOTES into RO_DATA, rename the linker script
> internal identifier for the PT_LOAD Program Header from "kernel" to
> "text" to match other architectures.
>
> Signed-off-by: Kees Cook 
> ---
>  arch/powerpc/kernel/vmlinux.lds.S | 12 ++--
>  1 file changed, 6 insertions(+), 6 deletions(-)

Acked-by: Michael Ellerman 

cheers

> diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
> index a3c8492b2b19..e184a63aa5b0 100644
> --- a/arch/powerpc/kernel/vmlinux.lds.S
> +++ b/arch/powerpc/kernel/vmlinux.lds.S
> @@ -18,7 +18,7 @@
>  ENTRY(_stext)
>  
>  PHDRS {
> - kernel PT_LOAD FLAGS(7); /* RWX */
> + text PT_LOAD FLAGS(7); /* RWX */
>   note PT_NOTE FLAGS(0);
>  }
>  
> @@ -63,7 +63,7 @@ SECTIONS
>  #else /* !CONFIG_PPC64 */
>   HEAD_TEXT
>  #endif
> - } :kernel
> + } :text
>  
>   __head_end = .;
>  
> @@ -112,7 +112,7 @@ SECTIONS
>   __got2_end = .;
>  #endif /* CONFIG_PPC32 */
>  
> - } :kernel
> + } :text
>  
>   . = ALIGN(ETEXT_ALIGN_SIZE);
>   _etext = .;
> @@ -163,9 +163,9 @@ SECTIONS
>  #endif
>   EXCEPTION_TABLE(0)
>  
> - NOTES :kernel :note
> + NOTES :text :note
>   /* Restore program header away from PT_NOTE. */
> - .dummy : { *(.dummy) } :kernel
> + .dummy : { *(.dummy) } :text
>  
>  /*
>   * Init sections discarded at runtime
> @@ -180,7 +180,7 @@ SECTIONS
>  #ifdef CONFIG_PPC64
>   *(.tramp.ftrace.init);
>  #endif
> - } :kernel
> + } :text
>  
>   /* .exit.text is discarded at runtime, not link time,
>* to deal with references from __bug_table
> -- 
> 2.17.1


Re: [PATCH v2 02/29] powerpc: Remove PT_NOTE workaround

2019-10-11 Thread Michael Ellerman
Kees Cook  writes:
> In preparation for moving NOTES into RO_DATA, remove the PT_NOTE
> workaround since the kernel requires at least gcc 4.6 now.
>
> Signed-off-by: Kees Cook 
> ---
>  arch/powerpc/kernel/vmlinux.lds.S | 24 ++--
>  1 file changed, 2 insertions(+), 22 deletions(-)

Acked-by: Michael Ellerman 

For the archives, Joel tried a similar patch a while back which caused
some problems, see:

  https://lore.kernel.org/linuxppc-dev/20190321003253.22100-1-j...@jms.id.au/

and a v2:

  https://lore.kernel.org/linuxppc-dev/20190329064453.12761-1-j...@jms.id.au/

This is similar to his v2. The only outstanding comment on his v2 was
from Segher:
  (And I do not know if there are any tools that expect the notes in a phdr,
  or even specifically the second phdr).

But this patch solves that by not changing the note.

cheers

> diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
> index 81e672654789..a3c8492b2b19 100644
> --- a/arch/powerpc/kernel/vmlinux.lds.S
> +++ b/arch/powerpc/kernel/vmlinux.lds.S
> @@ -20,20 +20,6 @@ ENTRY(_stext)
>  PHDRS {
>   kernel PT_LOAD FLAGS(7); /* RWX */
>   note PT_NOTE FLAGS(0);
> - dummy PT_NOTE FLAGS(0);
> -
> - /* binutils < 2.18 has a bug that makes it misbehave when taking an
> -ELF file with all segments at load address 0 as input.  This
> -happens when running "strip" on vmlinux, because of the AT() magic
> -in this linker script.  People using GCC >= 4.2 won't run into
> -this problem, because the "build-id" support will put some data
> -into the "notes" segment (at a non-zero load address).
> -
> -To work around this, we force some data into both the "dummy"
> -segment and the kernel segment, so the dummy segment will get a
> -non-zero load address.  It's not enough to always create the
> -"notes" segment, since if nothing gets assigned to it, its load
> -address will be zero.  */
>  }
>  
>  #ifdef CONFIG_PPC64
> @@ -178,14 +164,8 @@ SECTIONS
>   EXCEPTION_TABLE(0)
>  
>   NOTES :kernel :note
> -
> - /* The dummy segment contents for the bug workaround mentioned above
> -near PHDRS.  */
> - .dummy : AT(ADDR(.dummy) - LOAD_OFFSET) {
> - LONG(0)
> - LONG(0)
> - LONG(0)
> - } :kernel :dummy
> + /* Restore program header away from PT_NOTE. */
> + .dummy : { *(.dummy) } :kernel
>  
>  /*
>   * Init sections discarded at runtime
> -- 
> 2.17.1


Re: [PATCH v2 01/29] powerpc: Rename "notes" PT_NOTE to "note"

2019-10-11 Thread Michael Ellerman
Kees Cook  writes:
> The Program Header identifiers are internal to the linker scripts. In
> preparation for moving the NOTES segment declaration into RO_DATA,
> standardize the identifier for the PT_NOTE entry to "note" as used by
> all other architectures that emit PT_NOTE.
>
> Signed-off-by: Kees Cook 
> ---
>  arch/powerpc/kernel/vmlinux.lds.S | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)

Acked-by: Michael Ellerman 

cheers

> diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
> index 060a1acd7c6d..81e672654789 100644
> --- a/arch/powerpc/kernel/vmlinux.lds.S
> +++ b/arch/powerpc/kernel/vmlinux.lds.S
> @@ -19,7 +19,7 @@ ENTRY(_stext)
>  
>  PHDRS {
>   kernel PT_LOAD FLAGS(7); /* RWX */
> - notes PT_NOTE FLAGS(0);
> + note PT_NOTE FLAGS(0);
>   dummy PT_NOTE FLAGS(0);
>  
>   /* binutils < 2.18 has a bug that makes it misbehave when taking an
> @@ -177,7 +177,7 @@ SECTIONS
>  #endif
>   EXCEPTION_TABLE(0)
>  
> - NOTES :kernel :notes
> + NOTES :kernel :note
>  
>   /* The dummy segment contents for the bug workaround mentioned above
>  near PHDRS.  */
> -- 
> 2.17.1