[PATCH v2 5/6] kselftest/arm64: Verify KSM page merge for MTE pages

2020-10-02 Thread Amit Daniel Kachhap
Add a testcase to check that KSM does not merge pages that contain the
same data but different MTE tag values, while pages whose data and tags
both match may still merge.

This testcase has one positive test and passes if page merging happens
according to the above rule. It also saves and restores any modified KSM
sysfs entries.

Cc: Shuah Khan 
Cc: Catalin Marinas 
Cc: Will Deacon 
Signed-off-by: Amit Daniel Kachhap 
---
Changes in v2:
* Moved read/write sysfs functions from global to local functions.

 tools/testing/selftests/arm64/mte/.gitignore  |   1 +
 .../selftests/arm64/mte/check_ksm_options.c   | 159 ++
 2 files changed, 160 insertions(+)
 create mode 100644 tools/testing/selftests/arm64/mte/check_ksm_options.c

diff --git a/tools/testing/selftests/arm64/mte/.gitignore 
b/tools/testing/selftests/arm64/mte/.gitignore
index 79a215d3bbd0..44e9bfdaeca6 100644
--- a/tools/testing/selftests/arm64/mte/.gitignore
+++ b/tools/testing/selftests/arm64/mte/.gitignore
@@ -2,3 +2,4 @@ check_buffer_fill
 check_tags_inclusion
 check_child_memory
 check_mmap_options
+check_ksm_options
diff --git a/tools/testing/selftests/arm64/mte/check_ksm_options.c 
b/tools/testing/selftests/arm64/mte/check_ksm_options.c
new file mode 100644
index 000000000000..bc41ae630c86
--- /dev/null
+++ b/tools/testing/selftests/arm64/mte/check_ksm_options.c
@@ -0,0 +1,159 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2020 ARM Limited
+
+#define _GNU_SOURCE
+
+#include <errno.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <ucontext.h>
+#include <sys/mman.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "kselftest.h"
+#include "mte_common_util.h"
+#include "mte_def.h"
+
+#define TEST_UNIT  10
+#define PATH_KSM   "/sys/kernel/mm/ksm/"
+#define MAX_LOOP   4
+
+static size_t page_sz;
+static unsigned long ksm_sysfs[5];
+
+static unsigned long read_sysfs(char *str)
+{
+   FILE *f;
+   unsigned long val = 0;
+
+   f = fopen(str, "r");
+   if (!f) {
+   ksft_print_msg("ERR: missing %s\n", str);
+   return 0;
+   }
+   fscanf(f, "%lu", &val);
+   fclose(f);
+   return val;
+}
+
+static void write_sysfs(char *str, unsigned long val)
+{
+   FILE *f;
+
+   f = fopen(str, "w");
+   if (!f) {
+   ksft_print_msg("ERR: missing %s\n", str);
+   return;
+   }
+   fprintf(f, "%lu", val);
+   fclose(f);
+}
+
+static void mte_ksm_setup(void)
+{
+   ksm_sysfs[0] = read_sysfs(PATH_KSM "merge_across_nodes");
+   write_sysfs(PATH_KSM "merge_across_nodes", 1);
+   ksm_sysfs[1] = read_sysfs(PATH_KSM "sleep_millisecs");
+   write_sysfs(PATH_KSM "sleep_millisecs", 0);
+   ksm_sysfs[2] = read_sysfs(PATH_KSM "run");
+   write_sysfs(PATH_KSM "run", 1);
+   ksm_sysfs[3] = read_sysfs(PATH_KSM "max_page_sharing");
+   write_sysfs(PATH_KSM "max_page_sharing", ksm_sysfs[3] + TEST_UNIT);
+   ksm_sysfs[4] = read_sysfs(PATH_KSM "pages_to_scan");
+   write_sysfs(PATH_KSM "pages_to_scan", ksm_sysfs[4] + TEST_UNIT);
+}
+
+static void mte_ksm_restore(void)
+{
+   write_sysfs(PATH_KSM "merge_across_nodes", ksm_sysfs[0]);
+   write_sysfs(PATH_KSM "sleep_millisecs", ksm_sysfs[1]);
+   write_sysfs(PATH_KSM "run", ksm_sysfs[2]);
+   write_sysfs(PATH_KSM "max_page_sharing", ksm_sysfs[3]);
+   write_sysfs(PATH_KSM "pages_to_scan", ksm_sysfs[4]);
+}
+
+static void mte_ksm_scan(void)
+{
+   int cur_count = read_sysfs(PATH_KSM "full_scans");
+   int scan_count = cur_count + 1;
+   int max_loop_count = MAX_LOOP;
+
+   while ((cur_count < scan_count) && max_loop_count) {
+   sleep(1);
+   cur_count = read_sysfs(PATH_KSM "full_scans");
+   max_loop_count--;
+   }
+#ifdef DEBUG
+   ksft_print_msg("INFO: pages_shared=%lu pages_sharing=%lu\n",
+   read_sysfs(PATH_KSM "pages_shared"),
+   read_sysfs(PATH_KSM "pages_sharing"));
+#endif
+}
+
+static int check_madvise_options(int mem_type, int mode, int mapping)
+{
+   char *ptr;
+   int err, ret;
+
+   err = KSFT_FAIL;
+   if (access(PATH_KSM, F_OK) == -1) {
+   ksft_print_msg("ERR: Kernel KSM config not enabled\n");
+   return err;
+   }
+
+   mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
+   ptr = mte_allocate_memory(TEST_UNIT * page_sz, mem_type, mapping, true);
+   if (check_allocated_memory(ptr, TEST_UNIT * page_sz, mem_type, false) 
!= KSFT_PASS)
+   return KSFT_FAIL;
+
+   /* Insert same data in all the pages */
+   memset(ptr, 'A', TEST_UNIT * page_sz);
+   ret = madvise(ptr, TEST_UNIT * page_sz, MADV_MERGEABLE);
+   if (ret) {
+   ksft_print_msg("ERR: madvise failed to set MADV_MERGEABLE\n");

[PATCH v2 1/6] kselftest/arm64: Add utilities and a test to validate mte memory

2020-10-02 Thread Amit Daniel Kachhap
This test checks that the memory tag is present after mte allocation and
that the memory is accessible with those tags. This testcase verifies all
the sync, async and none mte error reporting modes. The allocated mte
buffers are verified for the allocated range (no error expected while
accessing the buffer), the underflow range and the overflow range.
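
For example, the underflow check can be sketched with the helpers from
mte_common_util.h/mte_def.h (a tag-check fault is expected in sync mode):

  mte_initialize_current_context(mode, (uintptr_t)ptr, -MT_GRANULE_SIZE);
  memset(ptr - MT_GRANULE_SIZE, '2', MT_GRANULE_SIZE); /* one granule below */
  mte_wait_after_trig();
  /* cur_mte_cxt.fault_valid is expected to be true here */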

Different test scenarios covered here are,
* Verify that mte memory is accessible at byte/block level.
* Force underflow and overflow to occur and check the data consistency.
* Check copies to/from tagged and untagged memory.
* Check that initially allocated memory has tag 0.

This change also creates the necessary infrastructure to add mte test
cases. MTE kselftests can use the several utility functions provided here
to add a wide variety of mte test scenarios.

The GCC compiler needs the flag '-march=armv8.5-a+memtag', so support for
this flag is verified before compilation.

The mte testcases can be launched with kselftest framework as,

make TARGETS=arm64 ARM64_SUBTARGETS=mte kselftest

or compiled as,

make -C tools/testing/selftests TARGETS=arm64 ARM64_SUBTARGETS=mte CC='compiler'

Cc: Shuah Khan 
Cc: Catalin Marinas 
Cc: Will Deacon 
Co-developed-by: Gabor Kertesz 
Signed-off-by: Gabor Kertesz 
Signed-off-by: Amit Daniel Kachhap 
---
Changes in v2:
* Redefined MTE kernel header definitions to decouple kselftest compilations.
* Removed gmi masking instructions in mte_insert_random_tag assembly
  function. This simplifies the tag inclusion mask test with only GCR
  mask register used.
* Now use /dev/shm/* to hold temporary tmpfs files.
* Few Code and comment clean-ups.

 tools/testing/selftests/arm64/Makefile|   2 +-
 tools/testing/selftests/arm64/mte/.gitignore  |   1 +
 tools/testing/selftests/arm64/mte/Makefile|  29 ++
 .../selftests/arm64/mte/check_buffer_fill.c   | 475 ++
 .../selftests/arm64/mte/mte_common_util.c | 341 +
 .../selftests/arm64/mte/mte_common_util.h | 117 +
 tools/testing/selftests/arm64/mte/mte_def.h   |  60 +++
 .../testing/selftests/arm64/mte/mte_helper.S  | 114 +
 8 files changed, 1138 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/arm64/mte/.gitignore
 create mode 100644 tools/testing/selftests/arm64/mte/Makefile
 create mode 100644 tools/testing/selftests/arm64/mte/check_buffer_fill.c
 create mode 100644 tools/testing/selftests/arm64/mte/mte_common_util.c
 create mode 100644 tools/testing/selftests/arm64/mte/mte_common_util.h
 create mode 100644 tools/testing/selftests/arm64/mte/mte_def.h
 create mode 100644 tools/testing/selftests/arm64/mte/mte_helper.S

diff --git a/tools/testing/selftests/arm64/Makefile 
b/tools/testing/selftests/arm64/Makefile
index 93b567d23c8b..3723d9dea11a 100644
--- a/tools/testing/selftests/arm64/Makefile
+++ b/tools/testing/selftests/arm64/Makefile
@@ -4,7 +4,7 @@
 ARCH ?= $(shell uname -m 2>/dev/null || echo not)
 
 ifneq (,$(filter $(ARCH),aarch64 arm64))
-ARM64_SUBTARGETS ?= tags signal
+ARM64_SUBTARGETS ?= tags signal mte
 else
 ARM64_SUBTARGETS :=
 endif
diff --git a/tools/testing/selftests/arm64/mte/.gitignore 
b/tools/testing/selftests/arm64/mte/.gitignore
new file mode 100644
index 000000000000..3f8c1f6c82b9
--- /dev/null
+++ b/tools/testing/selftests/arm64/mte/.gitignore
@@ -0,0 +1 @@
+check_buffer_fill
diff --git a/tools/testing/selftests/arm64/mte/Makefile 
b/tools/testing/selftests/arm64/mte/Makefile
new file mode 100644
index 000000000000..2480226dfe57
--- /dev/null
+++ b/tools/testing/selftests/arm64/mte/Makefile
@@ -0,0 +1,29 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2020 ARM Limited
+
+CFLAGS += -std=gnu99 -I.
+SRCS := $(filter-out mte_common_util.c,$(wildcard *.c))
+PROGS := $(patsubst %.c,%,$(SRCS))
+
+#Add mte compiler option
+ifneq ($(shell $(CC) --version 2>&1 | head -n 1 | grep gcc),)
+CFLAGS += -march=armv8.5-a+memtag
+endif
+
+#check if the compiler works well
+mte_cc_support := $(shell if ($(CC) $(CFLAGS) -E -x c /dev/null -o /dev/null 
2>&1) then echo "1"; fi)
+
+ifeq ($(mte_cc_support),1)
+# Generated binaries to be installed by top KSFT script
+TEST_GEN_PROGS := $(PROGS)
+
+# Get Kernel headers installed and use them.
+KSFT_KHDR_INSTALL := 1
+endif
+
+# Include KSFT lib.mk.
+include ../../lib.mk
+
+ifeq ($(mte_cc_support),1)
+$(TEST_GEN_PROGS): mte_common_util.c mte_common_util.h mte_helper.S
+endif
diff --git a/tools/testing/selftests/arm64/mte/check_buffer_fill.c 
b/tools/testing/selftests/arm64/mte/check_buffer_fill.c
new file mode 100644
index 000000000000..242635d79035
--- /dev/null
+++ b/tools/testing/selftests/arm64/mte/check_buffer_fill.c
@@ -0,0 +1,475 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2020 ARM Limited
+
+#define _GNU_SOURCE
+
+#include <stdint.h>
+#include <string.h>
+#include <ucontext.h>
+
+#include "kselftest.h"
+#include "mte_common_util.h"
+#include "mte_def.h"
+
+#define OVERFLOW_RANGE MT_GRANULE_SIZE
+
+static int sizes[] = {
+   1, 555, 1033, MT_GR

[PATCH v2 6/6] kselftest/arm64: Check mte tagged user address in kernel

2020-10-02 Thread Amit Daniel Kachhap
Add a testcase to check that a user address with valid/invalid
mte tags works in kernel mode. This test verifies that the kernel
APIs __arch_copy_from_user/__arch_copy_to_user work by considering
whether the user pointer has valid or invalid allocation tags.

In MTE sync mode, file memory read/write and other similar interfaces
fail if user memory with an invalid tag is accessed in the kernel. In
async mode no such failure occurs.
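
The pass criteria can be sketched as below, mirroring the test body
(bad_ptr is a hypothetical buffer whose second page carries a stale tag):

  read_len = read(fd, bad_ptr, len);
  mte_wait_after_trig();
  if (mode == MTE_SYNC_ERR && !cur_mte_cxt.fault_valid && read_len < len)
          err = KSFT_PASS;  /* kernel stopped at the tag mismatch */
  else if (mode == MTE_ASYNC_ERR && !cur_mte_cxt.fault_valid && read_len == len)
          err = KSFT_PASS;  /* async mode reports later, read completes */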

Cc: Shuah Khan 
Cc: Catalin Marinas 
Cc: Will Deacon 
Signed-off-by: Amit Daniel Kachhap 
---
Changes in v2:
* Updated the test to handle the error properly in case of failure
  in accessing memory with invalid tag in kernel. errno check is not
  done now and read length checks are sufficient.

 tools/testing/selftests/arm64/mte/.gitignore  |   1 +
 .../selftests/arm64/mte/check_user_mem.c  | 111 ++
 .../selftests/arm64/mte/mte_common_util.h |   1 +
 .../testing/selftests/arm64/mte/mte_helper.S  |  14 +++
 4 files changed, 127 insertions(+)
 create mode 100644 tools/testing/selftests/arm64/mte/check_user_mem.c

diff --git a/tools/testing/selftests/arm64/mte/.gitignore 
b/tools/testing/selftests/arm64/mte/.gitignore
index 44e9bfdaeca6..bc3ac63f3314 100644
--- a/tools/testing/selftests/arm64/mte/.gitignore
+++ b/tools/testing/selftests/arm64/mte/.gitignore
@@ -3,3 +3,4 @@ check_tags_inclusion
 check_child_memory
 check_mmap_options
 check_ksm_options
+check_user_mem
diff --git a/tools/testing/selftests/arm64/mte/check_user_mem.c 
b/tools/testing/selftests/arm64/mte/check_user_mem.c
new file mode 100644
index 000000000000..594e98e76880
--- /dev/null
+++ b/tools/testing/selftests/arm64/mte/check_user_mem.c
@@ -0,0 +1,111 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2020 ARM Limited
+
+#define _GNU_SOURCE
+
+#include <errno.h>
+#include <fcntl.h>
+#include <signal.h>
+#include <stdio.h>
+#include <string.h>
+#include <ucontext.h>
+#include <unistd.h>
+#include <sys/mman.h>
+#include <sys/types.h>
+
+#include "kselftest.h"
+#include "mte_common_util.h"
+#include "mte_def.h"
+
+static size_t page_sz;
+
+static int check_usermem_access_fault(int mem_type, int mode, int mapping)
+{
+   int fd, i, err;
+   char val = 'A';
+   size_t len, read_len;
+   void *ptr, *ptr_next;
+
+   err = KSFT_FAIL;
+   len = 2 * page_sz;
+   mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
+   fd = create_temp_file();
+   if (fd == -1)
+   return KSFT_FAIL;
+   for (i = 0; i < len; i++)
+   write(fd, &val, sizeof(val));
+   lseek(fd, 0, 0);
+   ptr = mte_allocate_memory(len, mem_type, mapping, true);
+   if (check_allocated_memory(ptr, len, mem_type, true) != KSFT_PASS) {
+   close(fd);
+   return KSFT_FAIL;
+   }
+   mte_initialize_current_context(mode, (uintptr_t)ptr, len);
+   /* Copy from file into buffer with valid tag */
+   read_len = read(fd, ptr, len);
+   mte_wait_after_trig();
+   if (cur_mte_cxt.fault_valid || read_len < len)
+   goto usermem_acc_err;
+   /* Verify same pattern is read */
+   for (i = 0; i < len; i++)
+   if (*(char *)(ptr + i) != val)
+   break;
+   if (i < len)
+   goto usermem_acc_err;
+
+   /* Tag the next half of memory with different value */
+   ptr_next = (void *)((unsigned long)ptr + page_sz);
+   ptr_next = mte_insert_new_tag(ptr_next);
+   mte_set_tag_address_range(ptr_next, page_sz);
+
+   lseek(fd, 0, 0);
+   /* Copy from file into buffer with invalid tag */
+   read_len = read(fd, ptr, len);
+   mte_wait_after_trig();
+   /*
+* Accessing user memory with an invalid tag from the kernel should make
+* the syscall fail (short read) without a user-visible fault in sync
+* mode, but it may not fail at all in async mode, as per the MTE
+* userspace support implemented in the arm64 kernel.
+*/
+   if (mode == MTE_SYNC_ERR &&
+   !cur_mte_cxt.fault_valid && read_len < len) {
+   err = KSFT_PASS;
+   } else if (mode == MTE_ASYNC_ERR &&
+  !cur_mte_cxt.fault_valid && read_len == len) {
+   err = KSFT_PASS;
+   }
+usermem_acc_err:
+   mte_free_memory((void *)ptr, len, mem_type, true);
+   close(fd);
+   return err;
+}
+
+int main(int argc, char *argv[])
+{
+   int err;
+
+   page_sz = getpagesize();
+   if (!page_sz) {
+   ksft_print_msg("ERR: Unable to get page size\n");
+   return KSFT_FAIL;
+   }
+   err = mte_default_setup();
+   if (err)
+   return err;
+   /* Register signal handlers */
+   mte_register_signal(SIGSEGV, mte_default_handler);
+
+   evaluate_test(check_usermem_access_fault(USE_MMAP, MTE_SYNC_ERR, 
MAP_PRIVATE),
+   "Check memory access from kernel in sync mode, private mapping 
and mmap memory\n");
+   evaluate_test(check_usermem_access_fault(USE_MMAP, MTE_SYNC_ERR, 
MAP_SHARED),

[PATCH v2 4/6] kselftest/arm64: Verify all different mmap MTE options

2020-10-02 Thread Amit Daniel Kachhap
This testcase checks the different unsupported/supported options for mmap
when used with the PROT_MTE memory protection flag. These checks are,

* Neither enabling PSTATE.TCO nor the prctl PR_MTE_TCF_NONE option should
  cause any tag mismatch faults.
* Different combinations of anonymous/file memory mmap, mprotect,
  sync/async error mode and private/shared mappings should work.
* mprotect should not be able to clear the PROT_MTE page property (see
  the sketch below).
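
A sketch of the mprotect() expectation (PROT_MTE comes from the arm64
uapi headers; error handling omitted):

  ptr = mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_MTE,
             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  mprotect(ptr, size, PROT_READ | PROT_WRITE);   /* PROT_MTE not passed */
  /* Tagged accesses through ptr are still expected to be checked. */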

Cc: Shuah Khan 
Cc: Catalin Marinas 
Cc: Will Deacon 
Co-developed-by: Gabor Kertesz 
Signed-off-by: Gabor Kertesz 
Signed-off-by: Amit Daniel Kachhap 
---
 tools/testing/selftests/arm64/mte/.gitignore  |   1 +
 .../selftests/arm64/mte/check_mmap_options.c  | 262 ++
 2 files changed, 263 insertions(+)
 create mode 100644 tools/testing/selftests/arm64/mte/check_mmap_options.c

diff --git a/tools/testing/selftests/arm64/mte/.gitignore 
b/tools/testing/selftests/arm64/mte/.gitignore
index b5fcc0fb4d97..79a215d3bbd0 100644
--- a/tools/testing/selftests/arm64/mte/.gitignore
+++ b/tools/testing/selftests/arm64/mte/.gitignore
@@ -1,3 +1,4 @@
 check_buffer_fill
 check_tags_inclusion
 check_child_memory
+check_mmap_options
diff --git a/tools/testing/selftests/arm64/mte/check_mmap_options.c 
b/tools/testing/selftests/arm64/mte/check_mmap_options.c
new file mode 100644
index 000000000000..33b13b86199b
--- /dev/null
+++ b/tools/testing/selftests/arm64/mte/check_mmap_options.c
@@ -0,0 +1,262 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2020 ARM Limited
+
+#define _GNU_SOURCE
+
+#include <errno.h>
+#include <fcntl.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <ucontext.h>
+#include <unistd.h>
+#include <sys/mman.h>
+#include <sys/stat.h>
+
+#include "kselftest.h"
+#include "mte_common_util.h"
+#include "mte_def.h"
+
+#define RUNS   (MT_TAG_COUNT)
+#define UNDERFLOW  MT_GRANULE_SIZE
+#define OVERFLOW   MT_GRANULE_SIZE
+#define TAG_CHECK_ON   0
+#define TAG_CHECK_OFF  1
+
+static size_t page_size;
+static int sizes[] = {
+   1, 537, 989, 1269, MT_GRANULE_SIZE - 1, MT_GRANULE_SIZE,
+   /* page size - 1*/ 0, /* page_size */ 0, /* page size + 1 */ 0
+};
+
+static int check_mte_memory(char *ptr, int size, int mode, int tag_check)
+{
+   mte_initialize_current_context(mode, (uintptr_t)ptr, size);
+   memset(ptr, '1', size);
+   mte_wait_after_trig();
+   if (cur_mte_cxt.fault_valid == true)
+   return KSFT_FAIL;
+
+   mte_initialize_current_context(mode, (uintptr_t)ptr, -UNDERFLOW);
+   memset(ptr - UNDERFLOW, '2', UNDERFLOW);
+   mte_wait_after_trig();
+   if (cur_mte_cxt.fault_valid == false && tag_check == TAG_CHECK_ON)
+   return KSFT_FAIL;
+   if (cur_mte_cxt.fault_valid == true && tag_check == TAG_CHECK_OFF)
+   return KSFT_FAIL;
+
+   mte_initialize_current_context(mode, (uintptr_t)ptr, size + OVERFLOW);
+   memset(ptr + size, '3', OVERFLOW);
+   mte_wait_after_trig();
+   if (cur_mte_cxt.fault_valid == false && tag_check == TAG_CHECK_ON)
+   return KSFT_FAIL;
+   if (cur_mte_cxt.fault_valid == true && tag_check == TAG_CHECK_OFF)
+   return KSFT_FAIL;
+
+   return KSFT_PASS;
+}
+
+static int check_anonymous_memory_mapping(int mem_type, int mode, int mapping, 
int tag_check)
+{
+   char *ptr, *map_ptr;
+   int run, result, map_size;
+   int item = sizeof(sizes)/sizeof(int);
+
+   mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
+   for (run = 0; run < item; run++) {
+   map_size = sizes[run] + OVERFLOW + UNDERFLOW;
+   map_ptr = (char *)mte_allocate_memory(map_size, mem_type, 
mapping, false);
+   if (check_allocated_memory(map_ptr, map_size, mem_type, false) 
!= KSFT_PASS)
+   return KSFT_FAIL;
+
+   ptr = map_ptr + UNDERFLOW;
+   mte_initialize_current_context(mode, (uintptr_t)ptr, 
sizes[run]);
+   /* Only mte enabled memory will allow tag insertion */
+   ptr = mte_insert_tags((void *)ptr, sizes[run]);
+   if (!ptr || cur_mte_cxt.fault_valid == true) {
+   ksft_print_msg("FAIL: Insert tags on anonymous mmap 
memory\n");
+   munmap((void *)map_ptr, map_size);
+   return KSFT_FAIL;
+   }
+   result = check_mte_memory(ptr, sizes[run], mode, tag_check);
+   mte_clear_tags((void *)ptr, sizes[run]);
+   mte_free_memory((void *)map_ptr, map_size, mem_type, false);
+   if (result == KSFT_FAIL)
+   return KSFT_FAIL;
+   }
+   return KSFT_PASS;
+}
+
+static int check_file_memory_mapping(int mem_type, int mode, int mapping, int 
tag_check)
+{
+   char *ptr, *map_ptr;
+   int run, fd, map_size;
+   int total = sizeof(sizes)/

[PATCH v2 3/6] kselftest/arm64: Check forked child mte memory accessibility

2020-10-02 Thread Amit Daniel Kachhap
This test covers the mte memory behaviour of the forked process with
different mapping properties and flags. It checks that all bytes of the
forked child's memory are accessible with the same tag as that of the
parent, and that memory accessed outside the tag range causes a fault
to occur.
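
The inheritance check can be sketched as below, mirroring
check_child_tag_inheritance() (MT_FETCH_TAG and mte_get_tag_address come
from the series' mte_def.h/mte_common_util.h):

  parent_tag = MT_FETCH_TAG((uintptr_t)ptr);
  if (fork() == 0) {
          memset(ptr, '1', size);         /* force copy-on-write */
          child_tag = MT_FETCH_TAG((uintptr_t)mte_get_tag_address(ptr));
          _exit(parent_tag == child_tag ? 0 : 1);
  }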

Cc: Shuah Khan 
Cc: Catalin Marinas 
Cc: Will Deacon 
Co-developed-by: Gabor Kertesz 
Signed-off-by: Gabor Kertesz 
Signed-off-by: Amit Daniel Kachhap 
---
 tools/testing/selftests/arm64/mte/.gitignore  |   1 +
 .../selftests/arm64/mte/check_child_memory.c  | 195 ++
 2 files changed, 196 insertions(+)
 create mode 100644 tools/testing/selftests/arm64/mte/check_child_memory.c

diff --git a/tools/testing/selftests/arm64/mte/.gitignore 
b/tools/testing/selftests/arm64/mte/.gitignore
index c3fca255d3d6..b5fcc0fb4d97 100644
--- a/tools/testing/selftests/arm64/mte/.gitignore
+++ b/tools/testing/selftests/arm64/mte/.gitignore
@@ -1,2 +1,3 @@
 check_buffer_fill
 check_tags_inclusion
+check_child_memory
diff --git a/tools/testing/selftests/arm64/mte/check_child_memory.c 
b/tools/testing/selftests/arm64/mte/check_child_memory.c
new file mode 100644
index 000000000000..97bebdecd29e
--- /dev/null
+++ b/tools/testing/selftests/arm64/mte/check_child_memory.c
@@ -0,0 +1,195 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2020 ARM Limited
+
+#define _GNU_SOURCE
+
+#include <errno.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <ucontext.h>
+#include <sys/wait.h>
+
+#include "kselftest.h"
+#include "mte_common_util.h"
+#include "mte_def.h"
+
+#define BUFFER_SIZE(5 * MT_GRANULE_SIZE)
+#define RUNS   (MT_TAG_COUNT)
+#define UNDERFLOW  MT_GRANULE_SIZE
+#define OVERFLOW   MT_GRANULE_SIZE
+
+static size_t page_size;
+static int sizes[] = {
+   1, 537, 989, 1269, MT_GRANULE_SIZE - 1, MT_GRANULE_SIZE,
+   /* page size - 1*/ 0, /* page_size */ 0, /* page size + 1 */ 0
+};
+
+static int check_child_tag_inheritance(char *ptr, int size, int mode)
+{
+   int i, parent_tag, child_tag, fault, child_status;
+   pid_t child;
+
+   parent_tag = MT_FETCH_TAG((uintptr_t)ptr);
+   fault = 0;
+
+   child = fork();
+   if (child == -1) {
+   ksft_print_msg("FAIL: child process creation\n");
+   return KSFT_FAIL;
+   } else if (child == 0) {
+   mte_initialize_current_context(mode, (uintptr_t)ptr, size);
+   /* Do copy on write */
+   memset(ptr, '1', size);
+   mte_wait_after_trig();
+   if (cur_mte_cxt.fault_valid == true) {
+   fault = 1;
+   goto check_child_tag_inheritance_err;
+   }
+   for (i = 0 ; i < size ; i += MT_GRANULE_SIZE) {
+   child_tag = 
MT_FETCH_TAG((uintptr_t)(mte_get_tag_address(ptr + i)));
+   if (parent_tag != child_tag) {
+   ksft_print_msg("FAIL: child mte tag 
mismatch\n");
+   fault = 1;
+   goto check_child_tag_inheritance_err;
+   }
+   }
+   mte_initialize_current_context(mode, (uintptr_t)ptr, 
-UNDERFLOW);
+   memset(ptr - UNDERFLOW, '2', UNDERFLOW);
+   mte_wait_after_trig();
+   if (cur_mte_cxt.fault_valid == false) {
+   fault = 1;
+   goto check_child_tag_inheritance_err;
+   }
+   mte_initialize_current_context(mode, (uintptr_t)ptr, size + 
OVERFLOW);
+   memset(ptr + size, '3', OVERFLOW);
+   mte_wait_after_trig();
+   if (cur_mte_cxt.fault_valid == false) {
+   fault = 1;
+   goto check_child_tag_inheritance_err;
+   }
+check_child_tag_inheritance_err:
+   _exit(fault);
+   }
+   /* Wait for child process to terminate */
+   wait(&child_status);
+   if (WIFEXITED(child_status))
+   fault = WEXITSTATUS(child_status);
+   else
+   fault = 1;
+   return (fault) ? KSFT_FAIL : KSFT_PASS;
+}
+
+static int check_child_memory_mapping(int mem_type, int mode, int mapping)
+{
+   char *ptr;
+   int run, result;
+   int item = sizeof(sizes)/sizeof(int);
+
+   mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
+   for (run = 0; run < item; run++) {
+   ptr = (char *)mte_allocate_memory_tag_range(sizes[run], 
mem_type, mapping,
+   UNDERFLOW, 
OVERFLOW);
+   if (check_allocated_memory_range(ptr, sizes[run], mem_type,
+UNDERFLOW, OVERFLOW) != 
KSFT_PASS)
+   return KSFT_FAIL;
+   result = check_child_tag_inheritance(ptr, sizes[run], mode);
+   

[PATCH v2 0/6] kselftest: arm64/mte: Tests for user-space MTE

2020-10-02 Thread Amit Daniel Kachhap
This patch series adds the below kselftests to test the user-space support
for the ARMv8.5 Memory Tagging Extension present in the arm64 tree [1]. The
series is based on Linux v5.9-rc3.

1) This test-case verifies that the memory allocated by the kernel mmap
interface can support tagged memory access. It first checks the presence
of tags at address[56:59] and then proceeds with read and write, as
sketched below. The pass criteria for this test is that no tag fault
exception should happen.
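
For reference, the tag layout these tests rely on can be sketched as below
(names mirror the series' mte_def.h):

  /* The 4-bit logical tag lives in address bits 59:56. */
  #define MT_TAG_SHIFT        56
  #define MT_FETCH_TAG(addr)  (((addr) >> MT_TAG_SHIFT) & 0xfUL)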

2) This test-case crosses from valid memory into invalid memory. In this
memory area valid tags are not inserted, so reads and writes should not
pass. The pass criteria for this test is that a tag fault exception should
happen for all the illegal addresses. This test also verifies that
PSTATE.TCO works properly.

3) This test-case verifies that the memory inherited by the child process
from the parent process has the same tags copied. The pass criteria for
this test is that no tag fault exception should happen.

4) This test checks different mmap flags with PROT_MTE memory protection.

5) This testcase checks that KSM should not merge pages containing
different MTE tag values. However, if the tags are the same then the pages
may merge. This testcase uses the generic ksm sysfs interfaces to verify
the MTE behaviour, so it is not foolproof and may be impacted by other
load in the system.

6) The sixth test verifies that syscalls such as read/write work by
considering whether the user pointer has valid or invalid allocation tags.

Changes since v1 [2]:
* Redefined MTE kernel header definitions to decouple kselftest compilations.
* Removed gmi masking instructions in mte_insert_random_tag assembly
  function. This simplifies the tag inclusion mask test with only GCR
  mask register used.
* Created a new mte_insert_random_tag function with gmi instruction.
  This is useful for the 6th test which reuses the original tag.
* Now use /dev/shm/* to hold temporary files.
* Updated the 6th test to handle the error properly in case of failure
  in accessing memory with invalid tag in kernel.
* Code and comment clean-ups.

Thanks,
Amit Daniel

[1]: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git 
for-next/mte
[2]: https://patchwork.kernel.org/patch/11747791/

Amit Daniel Kachhap (6):
  kselftest/arm64: Add utilities and a test to validate mte memory
  kselftest/arm64: Verify mte tag inclusion via prctl
  kselftest/arm64: Check forked child mte memory accessibility
  kselftest/arm64: Verify all different mmap MTE options
  kselftest/arm64: Verify KSM page merge for MTE pages
  kselftest/arm64: Check mte tagged user address in kernel

 tools/testing/selftests/arm64/Makefile|   2 +-
 tools/testing/selftests/arm64/mte/.gitignore  |   6 +
 tools/testing/selftests/arm64/mte/Makefile|  29 ++
 .../selftests/arm64/mte/check_buffer_fill.c   | 475 ++
 .../selftests/arm64/mte/check_child_memory.c  | 195 +++
 .../selftests/arm64/mte/check_ksm_options.c   | 159 ++
 .../selftests/arm64/mte/check_mmap_options.c  | 262 ++
 .../arm64/mte/check_tags_inclusion.c  | 185 +++
 .../selftests/arm64/mte/check_user_mem.c  | 111 
 .../selftests/arm64/mte/mte_common_util.c | 341 +
 .../selftests/arm64/mte/mte_common_util.h | 118 +
 tools/testing/selftests/arm64/mte/mte_def.h   |  60 +++
 .../testing/selftests/arm64/mte/mte_helper.S  | 128 +
 13 files changed, 2070 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/arm64/mte/.gitignore
 create mode 100644 tools/testing/selftests/arm64/mte/Makefile
 create mode 100644 tools/testing/selftests/arm64/mte/check_buffer_fill.c
 create mode 100644 tools/testing/selftests/arm64/mte/check_child_memory.c
 create mode 100644 tools/testing/selftests/arm64/mte/check_ksm_options.c
 create mode 100644 tools/testing/selftests/arm64/mte/check_mmap_options.c
 create mode 100644 tools/testing/selftests/arm64/mte/check_tags_inclusion.c
 create mode 100644 tools/testing/selftests/arm64/mte/check_user_mem.c
 create mode 100644 tools/testing/selftests/arm64/mte/mte_common_util.c
 create mode 100644 tools/testing/selftests/arm64/mte/mte_common_util.h
 create mode 100644 tools/testing/selftests/arm64/mte/mte_def.h
 create mode 100644 tools/testing/selftests/arm64/mte/mte_helper.S

-- 
2.17.1



[PATCH v2 2/6] kselftest/arm64: Verify mte tag inclusion via prctl

2020-10-02 Thread Amit Daniel Kachhap
This testcase verifies that the tag generated with the "irg" instruction
contains only included tags. This is done via a prctl call.

This test covers 4 scenarios,
* At least one included tag.
* More than one included tag.
* All included.
* None included.
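
A sketch of the prctl interface exercised here (constants from the Linux
uapi prctl.h; include_mask selects the tags "irg" may generate):

  unsigned long ctrl = PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC |
                       (include_mask << PR_MTE_TAG_SHIFT);
  prctl(PR_SET_TAGGED_ADDR_CTRL, ctrl, 0, 0, 0);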

Cc: Shuah Khan 
Cc: Catalin Marinas 
Cc: Will Deacon 
Co-developed-by: Gabor Kertesz 
Signed-off-by: Gabor Kertesz 
Signed-off-by: Amit Daniel Kachhap 
---
Changes in v2:
* Small fix to check all tags in check_multiple_included_tags function.

 tools/testing/selftests/arm64/mte/.gitignore  |   1 +
 .../arm64/mte/check_tags_inclusion.c  | 185 ++
 2 files changed, 186 insertions(+)
 create mode 100644 tools/testing/selftests/arm64/mte/check_tags_inclusion.c

diff --git a/tools/testing/selftests/arm64/mte/.gitignore 
b/tools/testing/selftests/arm64/mte/.gitignore
index 3f8c1f6c82b9..c3fca255d3d6 100644
--- a/tools/testing/selftests/arm64/mte/.gitignore
+++ b/tools/testing/selftests/arm64/mte/.gitignore
@@ -1 +1,2 @@
 check_buffer_fill
+check_tags_inclusion
diff --git a/tools/testing/selftests/arm64/mte/check_tags_inclusion.c 
b/tools/testing/selftests/arm64/mte/check_tags_inclusion.c
new file mode 100644
index 000000000000..94d245a0ed56
--- /dev/null
+++ b/tools/testing/selftests/arm64/mte/check_tags_inclusion.c
@@ -0,0 +1,185 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2020 ARM Limited
+
+#define _GNU_SOURCE
+
+#include <errno.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <ucontext.h>
+#include <sys/wait.h>
+
+#include "kselftest.h"
+#include "mte_common_util.h"
+#include "mte_def.h"
+
+#define BUFFER_SIZE(5 * MT_GRANULE_SIZE)
+#define RUNS   (MT_TAG_COUNT * 2)
+#define MTE_LAST_TAG_MASK  (0x7FFF)
+
+static int verify_mte_pointer_validity(char *ptr, int mode)
+{
+   mte_initialize_current_context(mode, (uintptr_t)ptr, BUFFER_SIZE);
+   /* Check the validity of the tagged pointer */
+   memset((void *)ptr, '1', BUFFER_SIZE);
+   mte_wait_after_trig();
+   if (cur_mte_cxt.fault_valid)
+   return KSFT_FAIL;
+   /* Proceed further for nonzero tags */
+   if (!MT_FETCH_TAG((uintptr_t)ptr))
+   return KSFT_PASS;
+   mte_initialize_current_context(mode, (uintptr_t)ptr, BUFFER_SIZE + 1);
+   /* Check the validity outside the range */
+   ptr[BUFFER_SIZE] = '2';
+   mte_wait_after_trig();
+   if (!cur_mte_cxt.fault_valid)
+   return KSFT_FAIL;
+   else
+   return KSFT_PASS;
+}
+
+static int check_single_included_tags(int mem_type, int mode)
+{
+   char *ptr;
+   int tag, run, result = KSFT_PASS;
+
+   ptr = (char *)mte_allocate_memory(BUFFER_SIZE + MT_GRANULE_SIZE, 
mem_type, 0, false);
+   if (check_allocated_memory(ptr, BUFFER_SIZE + MT_GRANULE_SIZE,
+  mem_type, false) != KSFT_PASS)
+   return KSFT_FAIL;
+
+   for (tag = 0; (tag < MT_TAG_COUNT) && (result == KSFT_PASS); tag++) {
+   mte_switch_mode(mode, MT_INCLUDE_VALID_TAG(tag));
+   /* Try to catch an excluded tag by a number of tries. */
+   for (run = 0; (run < RUNS) && (result == KSFT_PASS); run++) {
+   ptr = (char *)mte_insert_tags(ptr, BUFFER_SIZE);
+   /* Check tag value */
+   if (MT_FETCH_TAG((uintptr_t)ptr) == tag) {
+   ksft_print_msg("FAIL: wrong tag = 0x%x with 
include mask=0x%x\n",
+  MT_FETCH_TAG((uintptr_t)ptr),
+  MT_INCLUDE_VALID_TAG(tag));
+   result = KSFT_FAIL;
+   break;
+   }
+   result = verify_mte_pointer_validity(ptr, mode);
+   }
+   }
+   mte_free_memory_tag_range((void *)ptr, BUFFER_SIZE, mem_type, 0, 
MT_GRANULE_SIZE);
+   return result;
+}
+
+static int check_multiple_included_tags(int mem_type, int mode)
+{
+   char *ptr;
+   int tag, run, result = KSFT_PASS;
+   unsigned long excl_mask = 0;
+
+   ptr = (char *)mte_allocate_memory(BUFFER_SIZE + MT_GRANULE_SIZE, 
mem_type, 0, false);
+   if (check_allocated_memory(ptr, BUFFER_SIZE + MT_GRANULE_SIZE,
+  mem_type, false) != KSFT_PASS)
+   return KSFT_FAIL;
+
+   for (tag = 0; (tag < MT_TAG_COUNT - 1) && (result == KSFT_PASS); tag++) 
{
+   excl_mask |= 1 << tag;
+   mte_switch_mode(mode, MT_INCLUDE_VALID_TAGS(excl_mask));
+   /* Try to catch an excluded tag by a number of tries. */
+   for (run = 0; (run < RUNS) && (result == KSFT_PASS); run++) {
+   ptr = (char *)mte_insert_tags(ptr, BUFFER_SIZE);
+   /* Check tag value */
+   i

[PATCH 5/6] kselftest/arm64: Verify KSM page merge for MTE pages

2020-09-01 Thread Amit Daniel Kachhap
Add a testcase to check that KSM does not merge pages that contain the
same data but different MTE tag values, while pages whose data and tags
both match may still merge.

This testcase has one positive test and passes if page merging happens
according to the above rule. It also saves and restores any modified KSM
sysfs entries.

Cc: Shuah Khan 
Cc: Catalin Marinas 
Cc: Will Deacon 
Signed-off-by: Amit Daniel Kachhap 
---
 tools/testing/selftests/arm64/mte/.gitignore  |   1 +
 .../selftests/arm64/mte/check_ksm_options.c   | 131 ++
 2 files changed, 132 insertions(+)
 create mode 100644 tools/testing/selftests/arm64/mte/check_ksm_options.c

diff --git a/tools/testing/selftests/arm64/mte/.gitignore 
b/tools/testing/selftests/arm64/mte/.gitignore
index 79a215d3bbd0..44e9bfdaeca6 100644
--- a/tools/testing/selftests/arm64/mte/.gitignore
+++ b/tools/testing/selftests/arm64/mte/.gitignore
@@ -2,3 +2,4 @@ check_buffer_fill
 check_tags_inclusion
 check_child_memory
 check_mmap_options
+check_ksm_options
diff --git a/tools/testing/selftests/arm64/mte/check_ksm_options.c 
b/tools/testing/selftests/arm64/mte/check_ksm_options.c
new file mode 100644
index 000000000000..33fcf77f3501
--- /dev/null
+++ b/tools/testing/selftests/arm64/mte/check_ksm_options.c
@@ -0,0 +1,131 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2020 ARM Limited
+
+#define _GNU_SOURCE
+
+#include <errno.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <ucontext.h>
+#include <sys/mman.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+
+#include "kselftest.h"
+#include "mte_common_util.h"
+#include "mte_def.h"
+
+#define TEST_UNIT  10
+#define PATH_KSM   "/sys/kernel/mm/ksm/"
+#define MAX_LOOP   4
+
+static size_t page_sz;
+static unsigned long ksm_sysfs[5];
+
+static void mte_ksm_setup(void)
+{
+   ksm_sysfs[0] = read_sysfs(PATH_KSM "merge_across_nodes");
+   write_sysfs(PATH_KSM "merge_across_nodes", 1);
+   ksm_sysfs[1] = read_sysfs(PATH_KSM "sleep_millisecs");
+   write_sysfs(PATH_KSM "sleep_millisecs", 0);
+   ksm_sysfs[2] = read_sysfs(PATH_KSM "run");
+   write_sysfs(PATH_KSM "run", 1);
+   ksm_sysfs[3] = read_sysfs(PATH_KSM "max_page_sharing");
+   write_sysfs(PATH_KSM "max_page_sharing", ksm_sysfs[3] + TEST_UNIT);
+   ksm_sysfs[4] = read_sysfs(PATH_KSM "pages_to_scan");
+   write_sysfs(PATH_KSM "pages_to_scan", ksm_sysfs[4] + TEST_UNIT);
+}
+
+static void mte_ksm_restore(void)
+{
+   write_sysfs(PATH_KSM "merge_across_nodes", ksm_sysfs[0]);
+   write_sysfs(PATH_KSM "sleep_millisecs", ksm_sysfs[1]);
+   write_sysfs(PATH_KSM "run", ksm_sysfs[2]);
+   write_sysfs(PATH_KSM "max_page_sharing", ksm_sysfs[3]);
+   write_sysfs(PATH_KSM "pages_to_scan", ksm_sysfs[4]);
+}
+
+static void mte_ksm_scan(void)
+{
+   int cur_count = read_sysfs(PATH_KSM "full_scans");
+   int scan_count = cur_count + 1;
+   int max_loop_count = MAX_LOOP;
+
+   while ((cur_count < scan_count) && max_loop_count) {
+   sleep(1);
+   cur_count = read_sysfs(PATH_KSM "full_scans");
+   max_loop_count--;
+   }
+#ifdef DEBUG
+   ksft_print_msg("INFO: pages_shared=%lu pages_sharing=%lu\n",
+   read_sysfs(PATH_KSM "pages_shared"),
+   read_sysfs(PATH_KSM "pages_sharing"));
+#endif
+}
+
+static int check_madvise_options(int mem_type, int mode, int mapping)
+{
+   char *ptr;
+   int err, ret;
+
+   err = KSFT_FAIL;
+   if (access(PATH_KSM, F_OK) == -1) {
+   ksft_print_msg("ERR: Kernel KSM config not enabled\n");
+   return err;
+   }
+
+   mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
+   ptr = mte_allocate_memory(TEST_UNIT * page_sz, mem_type, mapping, true);
+   if (check_allocated_memory(ptr, TEST_UNIT * page_sz, mem_type, false) 
!= KSFT_PASS)
+   return KSFT_FAIL;
+
+   /* Insert same data in all the pages */
+   memset(ptr, 'A', TEST_UNIT * page_sz);
+   ret = madvise(ptr, TEST_UNIT * page_sz, MADV_MERGEABLE);
+   if (ret) {
+   ksft_print_msg("ERR: madvise failed to set MADV_UNMERGEABLE\n");
+   goto madvise_err;
+   }
+   mte_ksm_scan();
+   /* Tagged pages should not merge */
+   if ((read_sysfs(PATH_KSM "pages_shared") < 1) ||
+   (read_sysfs(PATH_KSM "pages_sharing") < (TEST_UNIT - 1)))
+   err = KSFT_PASS;
+madvise_err:
+   mte_free_memory(ptr, TEST_UNIT * page_sz, mem_type, true);
+   return err;
+}
+
+int main(int argc, char *argv[])
+{
+   int err;
+
+   err = mte_default_setup();
+   if (err)
+   return err;
+   page_sz = getpagesize();
+   if (!page_sz) {
+   ksft_print_msg("ERR: 

[PATCH 6/6] kselftest/arm64: Check mte tagged user address in kernel

2020-09-01 Thread Amit Daniel Kachhap
Add a testcase to check that a user address with valid/invalid
mte tags works in kernel mode. This test verifies that the kernel
APIs __arch_copy_from_user/__arch_copy_to_user work by considering
whether the user pointer has valid or invalid allocation tags.

In MTE sync mode a SIGSEGV fault is generated if user memory with
an invalid tag is accessed in the kernel. In async mode no such
fault occurs.
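
The failure is detected as sketched below, mirroring the test body (any of
the signal context, EFAULT or a short read counts as the expected fault):

  read_len = read(fd, ptr, len);
  ret = errno;
  mte_wait_after_trig();
  if (cur_mte_cxt.fault_valid || ret == EFAULT || read_len < len)
          err = KSFT_PASS;   /* invalidly tagged access was refused */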

Cc: Shuah Khan 
Cc: Catalin Marinas 
Cc: Will Deacon 
Signed-off-by: Amit Daniel Kachhap 
---
 tools/testing/selftests/arm64/mte/.gitignore  |   1 +
 .../selftests/arm64/mte/check_user_mem.c  | 118 ++
 2 files changed, 119 insertions(+)
 create mode 100644 tools/testing/selftests/arm64/mte/check_user_mem.c

diff --git a/tools/testing/selftests/arm64/mte/.gitignore 
b/tools/testing/selftests/arm64/mte/.gitignore
index 44e9bfdaeca6..bc3ac63f3314 100644
--- a/tools/testing/selftests/arm64/mte/.gitignore
+++ b/tools/testing/selftests/arm64/mte/.gitignore
@@ -3,3 +3,4 @@ check_tags_inclusion
 check_child_memory
 check_mmap_options
 check_ksm_options
+check_user_mem
diff --git a/tools/testing/selftests/arm64/mte/check_user_mem.c 
b/tools/testing/selftests/arm64/mte/check_user_mem.c
new file mode 100644
index 000000000000..9df0681af5bd
--- /dev/null
+++ b/tools/testing/selftests/arm64/mte/check_user_mem.c
@@ -0,0 +1,118 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2020 ARM Limited
+
+#define _GNU_SOURCE
+
+#include <errno.h>
+#include <fcntl.h>
+#include <signal.h>
+#include <stdio.h>
+#include <string.h>
+#include <ucontext.h>
+#include <unistd.h>
+#include <sys/mman.h>
+#include <sys/types.h>
+
+#include "kselftest.h"
+#include "mte_common_util.h"
+#include "mte_def.h"
+
+static size_t page_sz;
+
+static int check_usermem_access_fault(int mem_type, int mode, int mapping)
+{
+   int fd, ret, i, err;
+   char val = 'A';
+   size_t len, read_len;
+   void *ptr, *ptr_next;
+   bool fault;
+
+   len = 2 * page_sz;
+   err = KSFT_FAIL;
+   /*
+* Accessing user memory with an invalid tag from the kernel should
+* fault in sync mode but may not fault in async mode, as per the MTE
+* support implemented in the arm64 kernel.
+*/
+   if (mode == MTE_ASYNC_ERR)
+   fault = false;
+   else
+   fault = true;
+   mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
+   fd = create_temp_file();
+   if (fd == -1)
+   return KSFT_FAIL;
+   for (i = 0; i < len; i++)
+   write(fd, &val, sizeof(val));
+   lseek(fd, 0, 0);
+   ptr = mte_allocate_memory(len, mem_type, mapping, true);
+   if (check_allocated_memory(ptr, len, mem_type, true) != KSFT_PASS) {
+   close(fd);
+   return KSFT_FAIL;
+   }
+   mte_initialize_current_context(mode, (uintptr_t)ptr, len);
+   /* Copy from file into buffer with valid tag */
+   read_len = read(fd, ptr, len);
+   ret = errno;
+   mte_wait_after_trig();
+   if ((cur_mte_cxt.fault_valid == true) || ret == EFAULT || read_len < 
len)
+   goto usermem_acc_err;
+   /* Verify same pattern is read */
+   for (i = 0; i < len; i++)
+   if (*(char *)(ptr + i) != val)
+   break;
+   if (i < len)
+   goto usermem_acc_err;
+
+   /* Tag the next half of memory with different value */
+   ptr_next = (void *)((unsigned long)ptr + page_sz);
+   ptr_next = mte_insert_tags(ptr_next, page_sz);
+   if (!ptr_next)
+   goto usermem_acc_err;
+   lseek(fd, 0, 0);
+   /* Copy from file into buffer with invalid tag */
+   read_len = read(fd, ptr, len);
+   ret = errno;
+   mte_wait_after_trig();
+   if ((fault == true) &&
+   (cur_mte_cxt.fault_valid == true || ret == EFAULT || read_len < 
len)) {
+   err = KSFT_PASS;
+   } else if ((fault == false) &&
+  (cur_mte_cxt.fault_valid == false && read_len == len)) {
+   err = KSFT_PASS;
+   }
+usermem_acc_err:
+   mte_free_memory((void *)ptr, len, mem_type, true);
+   close(fd);
+   return err;
+}
+
+int main(int argc, char *argv[])
+{
+   int err;
+
+   page_sz = getpagesize();
+   if (!page_sz) {
+   ksft_print_msg("ERR: Unable to get page size\n");
+   return KSFT_FAIL;
+   }
+   err = mte_default_setup();
+   if (err)
+   return err;
+   /* Register signal handlers */
+   mte_register_signal(SIGSEGV, mte_default_handler);
+
+   evaluate_test(check_usermem_access_fault(USE_MMAP, MTE_SYNC_ERR, 
MAP_PRIVATE),
+   "Check memory access from kernel in sync mode, private mapping 
and mmap memory\n");
+   evaluate_test(check_usermem_access_fault(USE_MMAP, MTE_SYNC_ERR, 
MAP_SHARED),
+   "Check memory access from kernel in sync mode, shared mapping 
and mmap memory\n");
+
+   evaluate_test(check_usermem_ac

[PATCH 4/6] kselftest/arm64: Verify all different mmap MTE options

2020-09-01 Thread Amit Daniel Kachhap
This testcase checks the different unsupported/supported options for mmap
when used with the PROT_MTE memory protection flag. These checks are,

* Neither enabling PSTATE.TCO nor the prctl PR_MTE_TCF_NONE option should
  cause any tag mismatch faults.
* Different combinations of anonymous/file memory mmap, mprotect,
  sync/async error mode and private/shared mappings should work.
* mprotect should not be able to clear the PROT_MTE page property.

Cc: Shuah Khan 
Cc: Catalin Marinas 
Cc: Will Deacon 
Co-developed-by: Gabor Kertesz 
Signed-off-by: Gabor Kertesz 
Signed-off-by: Amit Daniel Kachhap 
---
 tools/testing/selftests/arm64/mte/.gitignore  |   1 +
 .../selftests/arm64/mte/check_mmap_options.c  | 262 ++
 2 files changed, 263 insertions(+)
 create mode 100644 tools/testing/selftests/arm64/mte/check_mmap_options.c

diff --git a/tools/testing/selftests/arm64/mte/.gitignore 
b/tools/testing/selftests/arm64/mte/.gitignore
index b5fcc0fb4d97..79a215d3bbd0 100644
--- a/tools/testing/selftests/arm64/mte/.gitignore
+++ b/tools/testing/selftests/arm64/mte/.gitignore
@@ -1,3 +1,4 @@
 check_buffer_fill
 check_tags_inclusion
 check_child_memory
+check_mmap_options
diff --git a/tools/testing/selftests/arm64/mte/check_mmap_options.c 
b/tools/testing/selftests/arm64/mte/check_mmap_options.c
new file mode 100644
index 000000000000..33b13b86199b
--- /dev/null
+++ b/tools/testing/selftests/arm64/mte/check_mmap_options.c
@@ -0,0 +1,262 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2020 ARM Limited
+
+#define _GNU_SOURCE
+
+#include <errno.h>
+#include <fcntl.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <ucontext.h>
+#include <unistd.h>
+#include <sys/mman.h>
+#include <sys/stat.h>
+
+#include "kselftest.h"
+#include "mte_common_util.h"
+#include "mte_def.h"
+
+#define RUNS   (MT_TAG_COUNT)
+#define UNDERFLOW  MT_GRANULE_SIZE
+#define OVERFLOW   MT_GRANULE_SIZE
+#define TAG_CHECK_ON   0
+#define TAG_CHECK_OFF  1
+
+static size_t page_size;
+static int sizes[] = {
+   1, 537, 989, 1269, MT_GRANULE_SIZE - 1, MT_GRANULE_SIZE,
+   /* page size - 1*/ 0, /* page_size */ 0, /* page size + 1 */ 0
+};
+
+static int check_mte_memory(char *ptr, int size, int mode, int tag_check)
+{
+   mte_initialize_current_context(mode, (uintptr_t)ptr, size);
+   memset(ptr, '1', size);
+   mte_wait_after_trig();
+   if (cur_mte_cxt.fault_valid == true)
+   return KSFT_FAIL;
+
+   mte_initialize_current_context(mode, (uintptr_t)ptr, -UNDERFLOW);
+   memset(ptr - UNDERFLOW, '2', UNDERFLOW);
+   mte_wait_after_trig();
+   if (cur_mte_cxt.fault_valid == false && tag_check == TAG_CHECK_ON)
+   return KSFT_FAIL;
+   if (cur_mte_cxt.fault_valid == true && tag_check == TAG_CHECK_OFF)
+   return KSFT_FAIL;
+
+   mte_initialize_current_context(mode, (uintptr_t)ptr, size + OVERFLOW);
+   memset(ptr + size, '3', OVERFLOW);
+   mte_wait_after_trig();
+   if (cur_mte_cxt.fault_valid == false && tag_check == TAG_CHECK_ON)
+   return KSFT_FAIL;
+   if (cur_mte_cxt.fault_valid == true && tag_check == TAG_CHECK_OFF)
+   return KSFT_FAIL;
+
+   return KSFT_PASS;
+}
+
+static int check_anonymous_memory_mapping(int mem_type, int mode, int mapping, 
int tag_check)
+{
+   char *ptr, *map_ptr;
+   int run, result, map_size;
+   int item = sizeof(sizes)/sizeof(int);
+
+   mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
+   for (run = 0; run < item; run++) {
+   map_size = sizes[run] + OVERFLOW + UNDERFLOW;
+   map_ptr = (char *)mte_allocate_memory(map_size, mem_type, 
mapping, false);
+   if (check_allocated_memory(map_ptr, map_size, mem_type, false) 
!= KSFT_PASS)
+   return KSFT_FAIL;
+
+   ptr = map_ptr + UNDERFLOW;
+   mte_initialize_current_context(mode, (uintptr_t)ptr, 
sizes[run]);
+   /* Only mte enabled memory will allow tag insertion */
+   ptr = mte_insert_tags((void *)ptr, sizes[run]);
+   if (!ptr || cur_mte_cxt.fault_valid == true) {
+   ksft_print_msg("FAIL: Insert tags on anonymous mmap 
memory\n");
+   munmap((void *)map_ptr, map_size);
+   return KSFT_FAIL;
+   }
+   result = check_mte_memory(ptr, sizes[run], mode, tag_check);
+   mte_clear_tags((void *)ptr, sizes[run]);
+   mte_free_memory((void *)map_ptr, map_size, mem_type, false);
+   if (result == KSFT_FAIL)
+   return KSFT_FAIL;
+   }
+   return KSFT_PASS;
+}
+
+static int check_file_memory_mapping(int mem_type, int mode, int mapping, int 
tag_check)
+{
+   char *ptr, *map_ptr;
+   int run, fd, map_size;
+   int total = sizeof(sizes)/

[PATCH 1/6] kselftest/arm64: Add utilities and a test to validate mte memory

2020-09-01 Thread Amit Daniel Kachhap
This test checks that the memory tag is present after mte allocation and
that the memory is accessible with those tags. This testcase verifies the
sync, async and none mte error modes. The allocated mte buffers are
verified for the Allocated range (no error expected while accessing the
buffer), the Underflow range (a SIGSEGV fault is expected with the
SEGV_MTESERR signal code) and the Overflow range (a SIGSEGV fault is
expected with the SEGV_MTEAERR signal code).
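
A sketch of how the two signal codes can be told apart in a SIGSEGV
handler registered with SA_SIGINFO (SEGV_MTESERR/SEGV_MTEAERR are the
arm64 MTE si_codes):

  static void mte_handler(int sig, siginfo_t *si, void *uc)
  {
          if (si->si_code == SEGV_MTESERR)
                  ksft_print_msg("sync tag fault at %p\n", si->si_addr);
          else if (si->si_code == SEGV_MTEAERR)
                  ksft_print_msg("async tag fault (no fault address)\n");
  }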

Different test scenarios covered here are,
* Verify that mte memory is accessible at byte/block level.
* Force underflow and overflow to occur and check the data consistency.
* Check copies to/from tagged and untagged memory.
* Check that initially allocated memory has tag 0.

This change also creates the necessary infrastructure to add MTE test
cases. MTE kselftests can use the several utility functions provided
here to add a wide variety of MTE test scenarios.

The GCC compiler needs the flag '-march=armv8.5-a+memtag', so support for
this flag is verified before compilation.

The mte testcases can be launched with kselftest framework as,

make TARGETS=arm64 ARM64_SUBTARGETS=mte kselftest

or compiled as,

make -C tools/testing/selftests TARGETS=arm64 ARM64_SUBTARGETS=mte CC='compiler'

Cc: Shuah Khan 
Cc: Catalin Marinas 
Cc: Will Deacon 
Co-developed-by: Gabor Kertesz 
Signed-off-by: Gabor Kertesz 
Signed-off-by: Amit Daniel Kachhap 
---
 tools/testing/selftests/arm64/Makefile|   2 +-
 tools/testing/selftests/arm64/mte/.gitignore  |   1 +
 tools/testing/selftests/arm64/mte/Makefile|  29 ++
 .../selftests/arm64/mte/check_buffer_fill.c   | 476 ++
 .../selftests/arm64/mte/mte_common_util.c | 374 ++
 .../selftests/arm64/mte/mte_common_util.h | 135 +
 tools/testing/selftests/arm64/mte/mte_def.h   |  26 +
 .../testing/selftests/arm64/mte/mte_helper.S  | 116 +
 8 files changed, 1158 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/arm64/mte/.gitignore
 create mode 100644 tools/testing/selftests/arm64/mte/Makefile
 create mode 100644 tools/testing/selftests/arm64/mte/check_buffer_fill.c
 create mode 100644 tools/testing/selftests/arm64/mte/mte_common_util.c
 create mode 100644 tools/testing/selftests/arm64/mte/mte_common_util.h
 create mode 100644 tools/testing/selftests/arm64/mte/mte_def.h
 create mode 100644 tools/testing/selftests/arm64/mte/mte_helper.S

diff --git a/tools/testing/selftests/arm64/Makefile 
b/tools/testing/selftests/arm64/Makefile
index 93b567d23c8b..3723d9dea11a 100644
--- a/tools/testing/selftests/arm64/Makefile
+++ b/tools/testing/selftests/arm64/Makefile
@@ -4,7 +4,7 @@
 ARCH ?= $(shell uname -m 2>/dev/null || echo not)
 
 ifneq (,$(filter $(ARCH),aarch64 arm64))
-ARM64_SUBTARGETS ?= tags signal
+ARM64_SUBTARGETS ?= tags signal mte
 else
 ARM64_SUBTARGETS :=
 endif
diff --git a/tools/testing/selftests/arm64/mte/.gitignore 
b/tools/testing/selftests/arm64/mte/.gitignore
new file mode 100644
index 000000000000..3f8c1f6c82b9
--- /dev/null
+++ b/tools/testing/selftests/arm64/mte/.gitignore
@@ -0,0 +1 @@
+check_buffer_fill
diff --git a/tools/testing/selftests/arm64/mte/Makefile 
b/tools/testing/selftests/arm64/mte/Makefile
new file mode 100644
index 000000000000..2480226dfe57
--- /dev/null
+++ b/tools/testing/selftests/arm64/mte/Makefile
@@ -0,0 +1,29 @@
+# SPDX-License-Identifier: GPL-2.0
+# Copyright (C) 2020 ARM Limited
+
+CFLAGS += -std=gnu99 -I.
+SRCS := $(filter-out mte_common_util.c,$(wildcard *.c))
+PROGS := $(patsubst %.c,%,$(SRCS))
+
+#Add mte compiler option
+ifneq ($(shell $(CC) --version 2>&1 | head -n 1 | grep gcc),)
+CFLAGS += -march=armv8.5-a+memtag
+endif
+
+#check if the compiler works well
+mte_cc_support := $(shell if ($(CC) $(CFLAGS) -E -x c /dev/null -o /dev/null 
2>&1) then echo "1"; fi)
+
+ifeq ($(mte_cc_support),1)
+# Generated binaries to be installed by top KSFT script
+TEST_GEN_PROGS := $(PROGS)
+
+# Get Kernel headers installed and use them.
+KSFT_KHDR_INSTALL := 1
+endif
+
+# Include KSFT lib.mk.
+include ../../lib.mk
+
+ifeq ($(mte_cc_support),1)
+$(TEST_GEN_PROGS): mte_common_util.c mte_common_util.h mte_helper.S
+endif
diff --git a/tools/testing/selftests/arm64/mte/check_buffer_fill.c 
b/tools/testing/selftests/arm64/mte/check_buffer_fill.c
new file mode 100644
index 000000000000..9710854585ad
--- /dev/null
+++ b/tools/testing/selftests/arm64/mte/check_buffer_fill.c
@@ -0,0 +1,476 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2020 ARM Limited
+
+#define _GNU_SOURCE
+
+#include <stdint.h>
+#include <string.h>
+#include <ucontext.h>
+
+#include "kselftest.h"
+#include "mte_common_util.h"
+#include "mte_def.h"
+
+#define OVERFLOW_RANGE MT_GRANULE_SIZE
+
+static int sizes[] = {
+   1, 555, 1033, MT_GRANULE_SIZE - 1, MT_GRANULE_SIZE,
+   /* page size - 1*/ 0, /* page_size */ 0, /* page size + 1 */ 0
+};
+
+enum mte_block_test_alloc {
+   UNTAGGED_TAGGED,
+   TAGGED_UNTAGGED,
+   TAGGED_TAGGED,
+   BLOCK_ALLOC_MAX,

[PATCH 3/6] kselftest/arm64: Check forked child mte memory accessibility

2020-09-01 Thread Amit Daniel Kachhap
This test covers the mte memory behaviour of the forked process with
different mapping properties and flags. It checks that all bytes of
forked child memory are accessible with the same tag as that of the
parent and memory accessed outside the tag range causes fault to
occur.

Cc: Shuah Khan 
Cc: Catalin Marinas 
Cc: Will Deacon 
Co-developed-by: Gabor Kertesz 
Signed-off-by: Gabor Kertesz 
Signed-off-by: Amit Daniel Kachhap 
---
 tools/testing/selftests/arm64/mte/.gitignore  |   1 +
 .../selftests/arm64/mte/check_child_memory.c  | 195 ++
 2 files changed, 196 insertions(+)
 create mode 100644 tools/testing/selftests/arm64/mte/check_child_memory.c

diff --git a/tools/testing/selftests/arm64/mte/.gitignore 
b/tools/testing/selftests/arm64/mte/.gitignore
index c3fca255d3d6..b5fcc0fb4d97 100644
--- a/tools/testing/selftests/arm64/mte/.gitignore
+++ b/tools/testing/selftests/arm64/mte/.gitignore
@@ -1,2 +1,3 @@
 check_buffer_fill
 check_tags_inclusion
+check_child_memory
diff --git a/tools/testing/selftests/arm64/mte/check_child_memory.c 
b/tools/testing/selftests/arm64/mte/check_child_memory.c
new file mode 100644
index 000000000000..97bebdecd29e
--- /dev/null
+++ b/tools/testing/selftests/arm64/mte/check_child_memory.c
@@ -0,0 +1,195 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2020 ARM Limited
+
+#define _GNU_SOURCE
+
+#include <errno.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <ucontext.h>
+#include <sys/wait.h>
+
+#include "kselftest.h"
+#include "mte_common_util.h"
+#include "mte_def.h"
+
+#define BUFFER_SIZE(5 * MT_GRANULE_SIZE)
+#define RUNS   (MT_TAG_COUNT)
+#define UNDERFLOW  MT_GRANULE_SIZE
+#define OVERFLOW   MT_GRANULE_SIZE
+
+static size_t page_size;
+static int sizes[] = {
+   1, 537, 989, 1269, MT_GRANULE_SIZE - 1, MT_GRANULE_SIZE,
+   /* page size - 1*/ 0, /* page_size */ 0, /* page size + 1 */ 0
+};
+
+static int check_child_tag_inheritance(char *ptr, int size, int mode)
+{
+   int i, parent_tag, child_tag, fault, child_status;
+   pid_t child;
+
+   parent_tag = MT_FETCH_TAG((uintptr_t)ptr);
+   fault = 0;
+
+   child = fork();
+   if (child == -1) {
+   ksft_print_msg("FAIL: child process creation\n");
+   return KSFT_FAIL;
+   } else if (child == 0) {
+   mte_initialize_current_context(mode, (uintptr_t)ptr, size);
+   /* Do copy on write */
+   memset(ptr, '1', size);
+   mte_wait_after_trig();
+   if (cur_mte_cxt.fault_valid == true) {
+   fault = 1;
+   goto check_child_tag_inheritance_err;
+   }
+   for (i = 0 ; i < size ; i += MT_GRANULE_SIZE) {
+   child_tag = 
MT_FETCH_TAG((uintptr_t)(mte_get_tag_address(ptr + i)));
+   if (parent_tag != child_tag) {
+   ksft_print_msg("FAIL: child mte tag 
mismatch\n");
+   fault = 1;
+   goto check_child_tag_inheritance_err;
+   }
+   }
+   mte_initialize_current_context(mode, (uintptr_t)ptr, 
-UNDERFLOW);
+   memset(ptr - UNDERFLOW, '2', UNDERFLOW);
+   mte_wait_after_trig();
+   if (cur_mte_cxt.fault_valid == false) {
+   fault = 1;
+   goto check_child_tag_inheritance_err;
+   }
+   mte_initialize_current_context(mode, (uintptr_t)ptr, size + 
OVERFLOW);
+   memset(ptr + size, '3', OVERFLOW);
+   mte_wait_after_trig();
+   if (cur_mte_cxt.fault_valid == false) {
+   fault = 1;
+   goto check_child_tag_inheritance_err;
+   }
+check_child_tag_inheritance_err:
+   _exit(fault);
+   }
+   /* Wait for child process to terminate */
+   wait(_status);
+   if (WIFEXITED(child_status))
+   fault = WEXITSTATUS(child_status);
+   else
+   fault = 1;
+   return (fault) ? KSFT_FAIL : KSFT_PASS;
+}
+
+static int check_child_memory_mapping(int mem_type, int mode, int mapping)
+{
+   char *ptr;
+   int run, result;
+   int item = sizeof(sizes)/sizeof(int);
+
+   mte_switch_mode(mode, MTE_ALLOW_NON_ZERO_TAG);
+   for (run = 0; run < item; run++) {
+   ptr = (char *)mte_allocate_memory_tag_range(sizes[run], 
mem_type, mapping,
+   UNDERFLOW, 
OVERFLOW);
+   if (check_allocated_memory_range(ptr, sizes[run], mem_type,
+UNDERFLOW, OVERFLOW) != 
KSFT_PASS)
+   return KSFT_FAIL;
+   result = check_child_tag_inheritance(ptr, sizes[run], mode);
+   

[PATCH 2/6] kselftest/arm64: Verify mte tag inclusion via prctl

2020-09-01 Thread Amit Daniel Kachhap
This testcase verifies that the tag generated with the "irg" instruction
contains only included tags. This is done via a prctl call.

This test covers 4 scenarios,
* At least one included tag.
* More than one included tag.
* None included.
* All included.

Cc: Shuah Khan 
Cc: Catalin Marinas 
Cc: Will Deacon 
Co-developed-by: Gabor Kertesz 
Signed-off-by: Gabor Kertesz 
Signed-off-by: Amit Daniel Kachhap 
---
 tools/testing/selftests/arm64/mte/.gitignore  |   1 +
 .../arm64/mte/check_tags_inclusion.c  | 183 ++
 2 files changed, 184 insertions(+)
 create mode 100644 tools/testing/selftests/arm64/mte/check_tags_inclusion.c

diff --git a/tools/testing/selftests/arm64/mte/.gitignore 
b/tools/testing/selftests/arm64/mte/.gitignore
index 3f8c1f6c82b9..c3fca255d3d6 100644
--- a/tools/testing/selftests/arm64/mte/.gitignore
+++ b/tools/testing/selftests/arm64/mte/.gitignore
@@ -1 +1,2 @@
 check_buffer_fill
+check_tags_inclusion
diff --git a/tools/testing/selftests/arm64/mte/check_tags_inclusion.c 
b/tools/testing/selftests/arm64/mte/check_tags_inclusion.c
new file mode 100644
index 000000000000..9614988e2e75
--- /dev/null
+++ b/tools/testing/selftests/arm64/mte/check_tags_inclusion.c
@@ -0,0 +1,183 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2020 ARM Limited
+
+#define _GNU_SOURCE
+
+#include <errno.h>
+#include <signal.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <ucontext.h>
+#include <sys/wait.h>
+
+#include "kselftest.h"
+#include "mte_common_util.h"
+#include "mte_def.h"
+
+#define BUFFER_SIZE(5 * MT_GRANULE_SIZE)
+#define RUNS   (MT_TAG_COUNT * 2)
+#define MTE_LAST_TAG_MASK  (0x7FFF)
+
+static int verify_mte_pointer_validity(char *ptr, int mode)
+{
+   mte_initialize_current_context(mode, (uintptr_t)ptr, BUFFER_SIZE);
+   /* Check the validity of the tagged pointer */
+   memset((void *)ptr, '1', BUFFER_SIZE);
+   mte_wait_after_trig();
+   if (cur_mte_cxt.fault_valid)
+   return KSFT_FAIL;
+   /* Proceed further for nonzero tags */
+   if (!MT_FETCH_TAG((uintptr_t)ptr))
+   return KSFT_PASS;
+   mte_initialize_current_context(mode, (uintptr_t)ptr, BUFFER_SIZE + 1);
+   /* Check the validity outside the range */
+   ptr[BUFFER_SIZE] = '2';
+   mte_wait_after_trig();
+   if (!cur_mte_cxt.fault_valid)
+   return KSFT_FAIL;
+   else
+   return KSFT_PASS;
+}
+
+static int check_single_included_tags(int mem_type, int mode)
+{
+   char *ptr;
+   int tag, run, result = KSFT_PASS;
+
+   ptr = (char *)mte_allocate_memory(BUFFER_SIZE + MT_GRANULE_SIZE, 
mem_type, 0, false);
+   if (check_allocated_memory(ptr, BUFFER_SIZE + MT_GRANULE_SIZE,
+  mem_type, false) != KSFT_PASS)
+   return KSFT_FAIL;
+
+   for (tag = 0; (tag < MT_TAG_COUNT) && (result == KSFT_PASS); tag++) {
+   mte_switch_mode(mode, MT_INCLUDE_VALID_TAG(tag));
+   /* Try to catch an excluded tag by a number of tries. */
+   for (run = 0; (run < RUNS) && (result == KSFT_PASS); run++) {
+   ptr = (char *)mte_insert_tags(ptr, BUFFER_SIZE);
+   /* Check tag value */
+   if (MT_FETCH_TAG((uintptr_t)ptr) == tag) {
+   ksft_print_msg("FAIL: wrong tag = 0x%x with 
include mask=0x%x\n",
+  MT_FETCH_TAG((uintptr_t)ptr),
+  MT_INCLUDE_VALID_TAG(tag));
+   result = KSFT_FAIL;
+   break;
+   }
+   result = verify_mte_pointer_validity(ptr, mode);
+   }
+   }
+   mte_free_memory_tag_range((void *)ptr, BUFFER_SIZE, mem_type, 0, MT_GRANULE_SIZE);
+   return result;
+}
+
+static int check_multiple_included_tags(int mem_type, int mode)
+{
+   char *ptr;
+   int tag, run, result = KSFT_PASS;
+   unsigned long excl_mask = 0;
+
+   ptr = (char *)mte_allocate_memory(BUFFER_SIZE + MT_GRANULE_SIZE, mem_type, 0, false);
+   if (check_allocated_memory(ptr, BUFFER_SIZE + MT_GRANULE_SIZE,
+  mem_type, false) != KSFT_PASS)
+   return KSFT_FAIL;
+
+   for (tag = 0; (tag < MT_TAG_COUNT - 1) && (result == KSFT_PASS); tag++) {
+   excl_mask |= 1 << tag;
+   mte_switch_mode(mode, MT_INCLUDE_VALID_TAGS(excl_mask));
+   /* Try to catch an excluded tag by a number of tries. */
+   for (run = 0; (run < RUNS) && (result == KSFT_PASS); run++) {
+   /* If excl_mask is all except last then clear tag for the next loop */
+   if (excl_mask == MTE_LAST_TAG_MASK && run)
+   ptr = (cha

[PATCH 0/6] kselftest: arm64/mte: Tests for user-space MTE

2020-09-01 Thread Amit Daniel Kachhap
This patch series adds the below kselftests to test the user-space support for
the ARMv8.5 Memory Tagging Extension present in the arm64 tree [1].

1) This test-case verifies that the memory allocated by the kernel mmap
interface supports tagged memory access. It first checks the presence of tags
at address bits [56:59] and then proceeds with reads and writes. The pass
criterion for this test is that no tag fault exception should happen.
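
For illustration, extracting the tag under test is just a shift and mask; a
minimal sketch of what the suite's MT_FETCH_TAG() helper is assumed to do:

#include <stdint.h>

/* The MTE allocation tag lives in address bits [59:56]. */
static inline unsigned long mte_fetch_tag(void *ptr)
{
	return ((uintptr_t)ptr >> 56) & 0xfUL;
}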

2) This test-case accesses beyond the valid memory range into invalid memory.
In this memory area valid tags are not inserted, so reads and writes should
not pass. The pass criterion for this test is that a tag fault exception
should happen for all the illegal addresses. This test also verifies that
PSTATE.TCO works properly.
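
A sketch of toggling PSTATE.TCO from user space, assuming an assembler that
understands the MTE extension; with TCO set, loads and stores are not
tag-checked, so no tag fault should be raised:

static inline void mte_set_tco(void)
{
	asm volatile(".arch armv8.5-a+memtag\n\tmsr tco, #1" ::: "memory");
}

static inline void mte_clear_tco(void)
{
	asm volatile(".arch armv8.5-a+memtag\n\tmsr tco, #0" ::: "memory");
}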

3) This test-case verifies that the memory inherited by a child process from
its parent process has the same tags copied. The pass criterion for this test
is that no tag fault exception should happen.
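
A minimal sketch of that check, assuming "tagged" points into a PROT_MTE
mapping with a non-zero tag already inserted (the helper name is
illustrative):

#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static int check_child_sees_tags(char *tagged, size_t len)
{
	int status;

	memset(tagged, 'a', len);	/* parent writes through the tag */
	if (fork() == 0)		/* child reuses the same tagged pointer */
		_exit(tagged[0] == 'a' ? 0 : 1);
	wait(&status);
	return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}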

4) This test checks different mmap flags with PROT_MTE memory protection.
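
A sketch of the basic mapping used by such tests; the PROT_MTE value is taken
from the MTE series and is an assumption here:

#include <sys/mman.h>

#ifndef PROT_MTE
#define PROT_MTE	0x20	/* arm64-specific */
#endif

static void *mte_map(size_t len)
{
	return mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_MTE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}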

5) This testcase checks that KSM should not merge pages containing different
MTE tag values. However, if the tags are the same then the pages may merge.
This testcase uses the generic ksm sysfs interfaces to verify the MTE
behaviour, so it is not foolproof and may be impacted by other load on the
system.
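
Besides the sysfs knobs, candidate pages have to be registered with KSM
before the scanner will consider them; a minimal sketch:

#include <sys/mman.h>

static int ksm_register(void *ptr, size_t len)
{
	return madvise(ptr, len, MADV_MERGEABLE);
}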

6) This test verifies that syscalls such as read/write work correctly when
the user pointer carries valid/invalid allocation tags.
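
A sketch of the invalid-tag case, assuming "buf" comes from a PROT_MTE
mapping with its correct tag in the pointer and that uaccess tag-check
failures are reported as EFAULT:

#include <errno.h>
#include <stdint.h>
#include <unistd.h>

static int check_tagged_user_mem(int fd, char *buf, size_t len)
{
	char *bad = (char *)((uintptr_t)buf ^ (1UL << 56)); /* corrupt tag */

	if (read(fd, buf, len) < 0)	/* matching tag: should work */
		return -1;
	if (read(fd, bad, len) >= 0 || errno != EFAULT)
		return -1;		/* mismatching tag: should fault */
	return 0;
}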

To simplify the testing, a copy of the patchset on top of a recent linux
tree can be found at [2].


Thanks,
Amit Daniel

[1]: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git 
for-next/mte
[2]: 
http://linux-arm.org/git?p=linux-ak.git;a=shortlog;h=refs/heads/kselftest-mte-mainline-v1


Amit Daniel Kachhap (6):
  kselftest/arm64: Add utilities and a test to validate mte memory
  kselftest/arm64: Verify mte tag inclusion via prctl
  kselftest/arm64: Check forked child mte memory accessibility
  kselftest/arm64: Verify all different mmap MTE options
  kselftest/arm64: Verify KSM page merge for MTE pages
  kselftest/arm64: Check mte tagged user address in kernel

 tools/testing/selftests/arm64/Makefile|   2 +-
 tools/testing/selftests/arm64/mte/.gitignore  |   6 +
 tools/testing/selftests/arm64/mte/Makefile|  29 ++
 .../selftests/arm64/mte/check_buffer_fill.c   | 476 ++
 .../selftests/arm64/mte/check_child_memory.c  | 195 +++
 .../selftests/arm64/mte/check_ksm_options.c   | 131 +
 .../selftests/arm64/mte/check_mmap_options.c  | 262 ++
 .../arm64/mte/check_tags_inclusion.c  | 183 +++
 .../selftests/arm64/mte/check_user_mem.c  | 118 +
 .../selftests/arm64/mte/mte_common_util.c | 374 ++
 .../selftests/arm64/mte/mte_common_util.h | 135 +
 tools/testing/selftests/arm64/mte/mte_def.h   |  26 +
 .../testing/selftests/arm64/mte/mte_helper.S  | 116 +
 13 files changed, 2052 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/arm64/mte/.gitignore
 create mode 100644 tools/testing/selftests/arm64/mte/Makefile
 create mode 100644 tools/testing/selftests/arm64/mte/check_buffer_fill.c
 create mode 100644 tools/testing/selftests/arm64/mte/check_child_memory.c
 create mode 100644 tools/testing/selftests/arm64/mte/check_ksm_options.c
 create mode 100644 tools/testing/selftests/arm64/mte/check_mmap_options.c
 create mode 100644 tools/testing/selftests/arm64/mte/check_tags_inclusion.c
 create mode 100644 tools/testing/selftests/arm64/mte/check_user_mem.c
 create mode 100644 tools/testing/selftests/arm64/mte/mte_common_util.c
 create mode 100644 tools/testing/selftests/arm64/mte/mte_common_util.h
 create mode 100644 tools/testing/selftests/arm64/mte/mte_def.h
 create mode 100644 tools/testing/selftests/arm64/mte/mte_helper.S

-- 
2.17.1



[PATCH v3 2/2] Documentation/vmcoreinfo: Add documentation for 'KERNELPACMASK'

2020-05-11 Thread Amit Daniel Kachhap
Add documentation for the KERNELPACMASK variable being added to the vmcoreinfo.

It indicates the PAC bits mask of signed kernel pointers if the
Armv8.3-A Pointer Authentication feature is present.

Cc: Catalin Marinas 
Cc: Will Deacon 
Cc: Mark Rutland 
Cc: Dave Young 
Cc: Baoquan He 
Signed-off-by: Amit Daniel Kachhap 
---
 Documentation/admin-guide/kdump/vmcoreinfo.rst | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/Documentation/admin-guide/kdump/vmcoreinfo.rst b/Documentation/admin-guide/kdump/vmcoreinfo.rst
index 007a6b8..e4ee8b2 100644
--- a/Documentation/admin-guide/kdump/vmcoreinfo.rst
+++ b/Documentation/admin-guide/kdump/vmcoreinfo.rst
@@ -393,6 +393,12 @@ KERNELOFFSET
 The kernel randomization offset. Used to compute the page offset. If
 KASLR is disabled, this value is zero.
 
+KERNELPACMASK
+-
+
+The mask to extract the Pointer Authentication Code from a kernel virtual
+address.
+
 arm
 ===
 
-- 
2.7.4



[PATCH v3 1/2] arm64/crash_core: Export KERNELPACMASK in vmcoreinfo

2020-05-11 Thread Amit Daniel Kachhap
Recently arm64 linux kernel added support for Armv8.3-A Pointer
Authentication feature. If this feature is enabled in the kernel and the
hardware supports address authentication then the return addresses are
signed and stored in the stack to prevent ROP kind of attack. Kdump tool
will now dump the kernel with signed lr values in the stack.

Any user analysis tool for this kernel dump may need the kernel pac mask
information in vmcoreinfo to generate the correct return address for
stacktrace purposes as well as to resolve symbol names.
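
A sketch of what such a tool might do with the mask: for kernel pointers the
bits covered by KERNELPACMASK are all ones in an unsigned address, so the
original return address can be recovered by OR-ing the mask back in:

static unsigned long strip_kernel_pac(unsigned long lr, unsigned long pac_mask)
{
	return lr | pac_mask;
}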

This patch is similar to commit ec6e822d1a22d0eef ("arm64: expose user PAC
bit positions via ptrace") which exposes pac mask information via ptrace
interfaces.

The config guard ARM64_PTR_AUTH is removed from asm/compiler.h so macros
like ptrauth_kernel_pac_mask can be used unguarded. This config protection
is confusing as the pointer authentication feature may be missing at
runtime even though this config is present.

Cc: Catalin Marinas 
Cc: Will Deacon 
Cc: Mark Rutland 
Signed-off-by: Amit Daniel Kachhap 
---
Changes since v2:
* Removed CONFIG_ARM64_PTR_AUTH config guard as suggested by Will.
* A commit log change.

An implementation of this new KERNELPACMASK vmcoreinfo field used by the
crash tool can be found here [1]. This change has been accepted by the
crash utility maintainer [2].

[1]: https://www.redhat.com/archives/crash-utility/2020-April/msg00095.html
[2]: https://www.redhat.com/archives/crash-utility/2020-April/msg00099.html

 arch/arm64/include/asm/compiler.h | 4 
 arch/arm64/kernel/crash_core.c| 4 
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/compiler.h b/arch/arm64/include/asm/compiler.h
index eece20d..51a7ce8 100644
--- a/arch/arm64/include/asm/compiler.h
+++ b/arch/arm64/include/asm/compiler.h
@@ -2,8 +2,6 @@
 #ifndef __ASM_COMPILER_H
 #define __ASM_COMPILER_H
 
-#if defined(CONFIG_ARM64_PTR_AUTH)
-
 /*
  * The EL0/EL1 pointer bits used by a pointer authentication code.
  * This is dependent on TBI0/TBI1 being enabled, or bits 63:56 would also 
apply.
@@ -19,6 +17,4 @@
 #define __builtin_return_address(val)  \
(void *)(ptrauth_clear_pac((unsigned long)__builtin_return_address(val)))
 
-#endif /* CONFIG_ARM64_PTR_AUTH */
-
 #endif /* __ASM_COMPILER_H */
diff --git a/arch/arm64/kernel/crash_core.c b/arch/arm64/kernel/crash_core.c
index ca4c3e1..1f646b0 100644
--- a/arch/arm64/kernel/crash_core.c
+++ b/arch/arm64/kernel/crash_core.c
@@ -5,6 +5,7 @@
  */
 
 #include 
+#include 
 #include 
 
 void arch_crash_save_vmcoreinfo(void)
@@ -16,4 +17,7 @@ void arch_crash_save_vmcoreinfo(void)
vmcoreinfo_append_str("NUMBER(PHYS_OFFSET)=0x%llx\n",
PHYS_OFFSET);
vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
+   vmcoreinfo_append_str("NUMBER(KERNELPACMASK)=0x%llx\n",
+   system_supports_address_auth() ?
+   ptrauth_kernel_pac_mask() : 0);
 }
-- 
2.7.4



Re: [kvmtool PATCH v10 5/5] KVM: arm/arm64: Add a vcpu feature for pointer authentication

2019-05-28 Thread Amit Daniel Kachhap

Hi Dave,

On 5/28/19 3:41 PM, Dave Martin wrote:

On Wed, Apr 24, 2019 at 02:41:21PM +0100, Dave Martin wrote:

On Wed, Apr 24, 2019 at 12:32:22PM +0530, Amit Daniel Kachhap wrote:

Hi,

On 4/23/19 9:16 PM, Dave Martin wrote:


[...]


diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 7780251..acd1d5f 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -68,6 +68,18 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
}
+   /* Check Pointer Authentication command line arguments. */
+   if (kvm->cfg.arch.enable_ptrauth && kvm->cfg.arch.disable_ptrauth)
+   die("Both enable-ptrauth and disable-ptrauth option cannot be 
present");


Preferably, print the leading dashes, the same as the user would see
on the command line (e.g., --enable-ptrauth, --disable-ptrauth).

For brevity, we could write something like:

die("--enable-ptrauth conflicts with --disable-ptrauth");


[...]


@@ -106,8 +118,12 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
die("Unable to find matching target");
}
-   if (err || target->init(vcpu))
-   die("Unable to initialise vcpu");
+   if (err || target->init(vcpu)) {
+   if (kvm->cfg.arch.enable_ptrauth)
+   die("Unable to initialise vcpu with pointer authentication 
feature");


We don't special-case this error message for any other feature yet:
there are a variety of reasons why we might have failed, so suggesting
that the failure is something to do with ptrauth may be misleading to
the user.

If we want to be more informative, we could do something like the
following:

bool supported;

supported = kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC);

if (kvm->cfg.arch.enable_ptrauth && !supported)
die("--enable-ptrauth not supported on this host");

if (supported && !kvm->cfg.arch.disable_ptrauth)
vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;

/* ... */

if (err || target->init(vcpu))
die("Unable to initialise vcpu");

We don't do this for any other feature today, but since it helps the
user to understand what went wrong it's probably a good idea.

Yes, this is clearer. As Mark has picked up the core guest ptrauth patches, I
will post these changes as standalone.


Sounds good.  (I also need to do that separately for SVE...)


Were you planning to repost this?

Alternatively, I can fix up the diagnostic messages discussed here and
post it together with the SVE support.  I'll do that locally for now,
but let me know what you plan to do.  I'd like to get the SVE support
posted soon so that people can test it.

I will clean up the print messages as you suggested and repost it shortly.

Thanks,
Amit Daniel


Cheers
---Dave



Re: [PATCH v7 7/10] KVM: arm/arm64: context-switch ptrauth registers

2019-03-25 Thread Amit Daniel Kachhap

Hi,

On 3/26/19 1:34 AM, Kristina Martsenko wrote:

On 19/03/2019 08:30, Amit Daniel Kachhap wrote:

From: Mark Rutland 

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

Pointer authentication feature is only enabled when VHE is built
in the kernel and present in the CPU implementation so only VHE code
paths are modified.

When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys, and enable the feature. The next time the
vcpu is scheduled out/in, we start again. However the host key save is
optimized and implemented inside ptrauth instruction/register access
trap.
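
A sketch of the lazy enable/disable described above via the HCR_EL2 API/APK
trap bits (helper placement is illustrative):

static void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
{
	vcpu->arch.ctxt.hcr_el2 &= ~(HCR_API | HCR_APK);  /* trap guest use */
}

static void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
{
	vcpu->arch.ctxt.hcr_el2 |= (HCR_API | HCR_APK);   /* switch eagerly */
}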

Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). Hence, this patch expects both type of
authentication to be present in a cpu.

This switch of key is done from guest enter/exit assembly as preparation
for the upcoming in-kernel pointer authentication support. Hence, these
key switching routines are not implemented in C code as they may cause
pointer authentication key signing errors in some situations.

Signed-off-by: Mark Rutland 
[Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
, save host key in ptrauth exception trap]
Signed-off-by: Amit Daniel Kachhap 
Reviewed-by: Julien Thierry 
Cc: Marc Zyngier 
Cc: Christoffer Dall 
Cc: kvm...@lists.cs.columbia.edu


[...]


+/* SPDX-License-Identifier: GPL-2.0
+ * arch/arm64/include/asm/kvm_ptrauth_asm.h: Guest/host ptrauth save/restore
+ * Copyright 2019 Arm Limited
+ * Author: Mark Rutland 
+ * Amit Daniel Kachhap 
+ */


I think the license needs to be in its own comment, like

/* SPDX-License-Identifier: GPL-2.0 */

yes this is indeed the format followed.

/* arch/arm64/include/asm/kvm_ptrauth_asm.h: ...
  * ...
  */


+
+#ifndef __ASM_KVM_ASM_PTRAUTH_H
+#define __ASM_KVM_ASM_PTRAUTH_H


__ASM_KVM_PTRAUTH_ASM_H ? (to match the file name)


+   if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) ||
+   test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) {
+   /* Verify that KVM startup matches the conditions for ptrauth */
+   if (WARN_ON(!vcpu_has_ptrauth(vcpu)))
+   return -EINVAL;
+   }


I think this now needs to have "goto out;" instead of "return -EINVAL;",
since 5.1-rcX contains commit e761a927bc9a ("KVM: arm/arm64: Reset the
VCPU without preemption and vcpu state loaded") which changed some of
this code.

ok missed the changes for this commit.



@@ -385,6 +385,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
vcpu_clear_wfe_traps(vcpu);
else
vcpu_set_wfe_traps(vcpu);
+
+   kvm_arm_vcpu_ptrauth_setup_lazy(vcpu);


This version of the series seems to have lost the arch/arm/ definition
of kvm_arm_vcpu_ptrauth_setup_lazy (previously
kvm_arm_vcpu_ptrauth_reset), so KVM no longer compiles for arch/arm/ :(


ok my bad.

Thanks,
Amit D


Thanks,
Kristina



Re: [kvmtool PATCH v6 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication

2019-03-04 Thread Amit Daniel Kachhap



Hi Dave,

On 3/1/19 4:54 PM, Dave P Martin wrote:

On Fri, Mar 01, 2019 at 10:37:54AM +, Amit Daniel Kachhap wrote:

Hi,

On 2/21/19 9:24 PM, Dave Martin wrote:

On Tue, Feb 19, 2019 at 02:54:31PM +0530, Amit Daniel Kachhap wrote:


[...]


diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h 
b/arm/aarch64/include/kvm/kvm-config-arch.h
index 04be43d..2074684 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -8,7 +8,9 @@
"Create PMUv3 device"),   \
OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,\
"Specify random seed for Kernel Address Space "   \
-   "Layout Randomization (KASLR)"),
+   "Layout Randomization (KASLR)"),  \
+   OPT_BOOLEAN('\0', "ptrauth", &(cfg)->has_ptrauth,  \
+   "Enable address authentication"),


Nit: doesn't this enable address *and* generic authentication?  The
discussion on what capababilities and enables the ABI exposes probably
needs to conclude before we can finalise this here.

ok.


However, I would recommend that we provide a single option here that
turns both address authentication and generic authentication on, even
if the ABI treats them independently.  This is expected to be the common
case by far.

ok


We can always add more fine-grained options later if it turns out to be
necessary.

Mark suggested to provide 2 flags [1] for Address and Generic
authentication so I was thinking of adding 2 features like,

+#define KVM_ARM_VCPU_PTRAUTH_ADDR  4 /* CPU uses pointer address authentication */
+#define KVM_ARM_VCPU_PTRAUTH_GENERIC   5 /* CPU uses pointer generic authentication */

And supply both of them together at the VCPU_INIT stage. Kernel KVM
would expect both features to be requested together.


Seems reasonable.  Do you mean the kernel would treat it as an error if
only one of these flags is passed to KVM_ARM_VCPU_INIT, or would KVM
simply treat them as independent?
Ptrauth is only enabled if both flags are passed together; otherwise it is
kept disabled. This is just to finalize the user-side ABI for now, and KVM
can be updated later.
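
A sketch of that VCPU_INIT handling; vcpu_enable_ptrauth() is a hypothetical
helper:

	bool addr = test_bit(KVM_ARM_VCPU_PTRAUTH_ADDR, vcpu->arch.features);
	bool gen = test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features);

	if (addr && gen)
		vcpu_enable_ptrauth(vcpu);
	/* one flag without the other: ptrauth stays disabled */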


Thanks,
Amit D


[...]


diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 7780251..4ac80f8 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -68,6 +68,12 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
}
   
+	/* Set KVM_ARM_VCPU_PTRAUTH if available */

+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH)) {
+   if (kvm->cfg.arch.has_ptrauth)
+   vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
+   }
+


I'm not too keen on requiring a dummy #define for AArch32 here.  How do
we handle other subarch-specific feature flags?  Is there something we
can reuse?

I will check it.


OK

Cheers
---Dave



Re: [PATCH v6 4/6] arm64/kvm: add a userspace option to enable pointer authentication

2019-03-04 Thread Amit Daniel Kachhap

Hi James,

On 2/27/19 12:03 AM, James Morse wrote:

Hi Amit,

On 19/02/2019 09:24, Amit Daniel Kachhap wrote:

This feature will allow the KVM guest to allow the handling of
pointer authentication instructions or to treat them as undefined
if not set. It uses the existing vcpu API KVM_ARM_VCPU_INIT to
supply this parameter instead of creating a new API.

A new register is not created to pass this parameter via
SET/GET_ONE_REG interface as just a flag (KVM_ARM_VCPU_PTRAUTH)
supplied is enough to enable this feature.


and an attempt to restore the id register with the other version would fail.



diff --git a/Documentation/arm64/pointer-authentication.txt 
b/Documentation/arm64/pointer-authentication.txt
index a25cd21..0529a7d 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -82,7 +82,8 @@ pointers).
  Virtualization
  --
  
-Pointer authentication is not currently supported in KVM guests. KVM

-will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
-the feature will result in an UNDEFINED exception being injected into
-the guest.



+Pointer authentication is enabled in KVM guest when virtual machine is
+created by passing a flag (KVM_ARM_VCPU_PTRAUTH)


(This is still mixing VM and VCPU)



+ requesting this feature to be enabled.


.. on each vcpu?



+Without this flag, pointer authentication is not enabled
+in KVM guests and attempted use of the feature will result in an UNDEFINED
+exception being injected into the guest.


'guests' here suggests its a VM property. If you set it on some VCPU but not 
others KVM
will generate undefs instead of enabling the feature. (which is the right thing 
to do)

I think it needs to be clear this is a per-vcpu property.

ok.




diff --git a/arch/arm64/include/uapi/asm/kvm.h 
b/arch/arm64/include/uapi/asm/kvm.h
index 97c3478..5f82ca1 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -102,6 +102,7 @@ struct kvm_regs {
  #define KVM_ARM_VCPU_EL1_32BIT1 /* CPU running a 32bit VM */
  #define KVM_ARM_VCPU_PSCI_0_2 2 /* CPU uses PSCI v0.2 */
  #define KVM_ARM_VCPU_PMU_V3   3 /* Support guest PMUv3 */



+#define KVM_ARM_VCPU_PTRAUTH   4 /* VCPU uses address authentication */


Just address authentication? I agree with Mark we should have two bits to match 
what gets
exposed to EL0. One would then be address, the other generic.

ok.




diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
index 528ee6e..6846a23 100644
--- a/arch/arm64/kvm/hyp/ptrauth-sr.c
+++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
@@ -93,9 +93,23 @@ void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)



+/**
+ * kvm_arm_vcpu_ptrauth_allowed - checks if ptrauth feature is allowed by user
+ *
+ * @vcpu: The VCPU pointer
+ *
+ * This function will be used to check userspace option to have ptrauth or not
+ * in the guest kernel.
+ */
+bool kvm_arm_vcpu_ptrauth_allowed(const struct kvm_vcpu *vcpu)
+{
+   return kvm_supports_ptrauth() &&
+   test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features);
+}


This isn't used from world-switch, could it be moved to guest.c?

yes sure.




diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 12529df..f7bcc60 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1055,7 +1055,7 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
  }
  
  /* Read a sanitised cpufeature ID register by sys_reg_desc */

-static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
+static u64 read_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_desc const *r, bool raz)


(It might be easier on the reviewer to move these mechanical changes to an 
earlier patch)

Yes, with the inclusion of some of Dave's SVE patches this won't be required.

Thanks,
Amit D



Looks good,

Thanks,

James



Re: [PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers

2019-03-04 Thread Amit Daniel Kachhap

Hi James,

On 2/27/19 12:01 AM, James Morse wrote:

Hi Amit,

On 19/02/2019 09:24, Amit Daniel Kachhap wrote:

From: Mark Rutland 

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

Pointer authentication feature is only enabled when VHE is built
in the kernel and present into CPU implementation so only VHE code
paths are modified.



When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys, and enable the feature. The next time the
vcpu is scheduled out/in, we start again.



However the host key registers
are saved in vcpu load stage as they remain constant for each vcpu
schedule.


(I think we can get away with doing this later ... with the hope of doing it 
never!)



Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). Hence, this patch expects both type of
authentication to be present in a cpu.




diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 2f1bb86..1bacf78 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -146,6 +146,18 @@ enum vcpu_sysreg {



+static inline bool kvm_supports_ptrauth(void)
+{
+   return has_vhe() && system_supports_address_auth() &&
+   system_supports_generic_auth();
+}


Do we intend to support the implementation defined algorithm? I'd assumed not.

system_supports_address_auth() is defined as:
| static inline bool system_supports_address_auth(void)
| {
|   return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
|   (cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) ||
|   cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_IMP_DEF));
| }


So we could return true from kvm_supports_ptrauth() even if we only support the 
imp-def
algorithm.

I think we should hide the imp-def ptrauth support as KVM hides all other 
imp-def
features. This lets us avoid trying to migrate values that have been signed 
with the
imp-def algorithm.
I suppose the imp-def algorithm should not make any difference in the
migration case even if the two systems use different imp-def algorithms. As
the LR PAC field generation happens at runtime, the only things that matter
are the key value and SP, which are taken care of. Also the model on which I
am testing uses the imp-def algorithm. Or am I missing something?


I'm worried that it could include some value that we can't migrate (e.g. the 
SoC serial
number). Does the ARM-ARM say this can't happen?

All I can find is D5.1.5 "Pointer authentication in AArch64 state" of 
DDI0487D.a which has
this clause for the imp-def algorithm:
| For a set of arguments passed to the function, must give the same result for 
all PEs
| that a thread of execution could migrate between.

... with KVM we've extended the scope of migration significantly.


Could we check the cpus_have_const_cap() values for the two architected 
algorithms directly?
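
Something like the following, assuming a matching generic-auth capability
(ARM64_HAS_GENERIC_AUTH_ARCH) is available:

static inline bool kvm_supports_ptrauth(void)
{
	return has_vhe() &&
	       cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) &&
	       cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH);
}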



diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 6e65cad..09e061a 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -153,6 +153,13 @@ bool __fpsimd_enabled(void);
  void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
  void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
  
+void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,

+  struct kvm_cpu_context *host_ctxt,
+  struct kvm_cpu_context *guest_ctxt);
+void __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,
+ struct kvm_cpu_context *guest_ctxt,
+ struct kvm_cpu_context *host_ctxt);



Why do you need the vcpu and the guest_ctxt?
Would it be possible for these to just take the vcpu, and to pull the host 
context from
the per-cpu variable?
This would avoid any future bugs where the ctxt's are the wrong way round, 
taking two is
unusual in KVM, but necessary here.

As you're calling these from asm you want the compiler to do as much of the 
type mangling
a

Re: [PATCH v6 1/6] arm64/kvm: preserve host HCR_EL2 value

2019-03-02 Thread Amit Daniel Kachhap

Hi,

On 2/25/19 11:09 PM, James Morse wrote:

Hi Amit,

On 19/02/2019 09:24, Amit Daniel Kachhap wrote:

From: Mark Rutland 

When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
is a constant value. This works today, as the host HCR_EL2 value is
always the same, but this will get in the way of supporting extensions
that require HCR_EL2 bits to be set conditionally for the host.

To allow such features to work without KVM having to explicitly handle
every possible host feature combination, this patch has KVM save/restore
for the host HCR when switching to/from a guest HCR. The saving of the
register is done once during cpu hypervisor initialization state and is
just restored after switch from guest.

For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
kvm_call_hyp and is helpful in NHVE case.

For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
to toggle the TGE bit with a RMW sequence, as we already do in
__tlb_switch_to_guest_vhe().
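
A sketch of that RMW sequence on the return-to-host path (abridged; the real
function also restores the rest of the host context):

static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm)
{
	u64 val = read_sysreg(hcr_el2);

	val |= HCR_TGE;			/* keep the saved host bits intact */
	write_sysreg(val, hcr_el2);
}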

The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
and guest can now use this field in a common way.




diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index ca56537..05706b4 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -273,6 +273,8 @@ static inline void __cpu_init_stage2(void)
kvm_call_hyp(__init_stage2_translation);
  }
  
+static inline void __cpu_copy_hyp_conf(void) {}

+


I agree Mark's suggestion of adding 'host_ctxt' in here makes it clearer what 
it is.

ok.




diff --git a/arch/arm64/include/asm/kvm_emulate.h 
b/arch/arm64/include/asm/kvm_emulate.h
index 506386a..0dbe795 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h


Hmmm, there is still a fair amount of churn due to moving the struct 
definition, but its
easy enough to ignore as its mechanical. A preparatory patch that switched as 
may as
possible to '*vcpu_hcr() = ' would cut the churn down some more, but I don't 
think its
worth the extra effort.



diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index a80a7ef..6e65cad 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -151,7 +151,7 @@ void __fpsimd_restore_state(struct user_fpsimd_state 
*fp_regs);
  bool __fpsimd_enabled(void);
  
  void activate_traps_vhe_load(struct kvm_vcpu *vcpu);

-void deactivate_traps_vhe_put(void);
+void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);


I've forgotten why this is needed. You don't add a user of vcpu to
deactivate_traps_vhe_put() in this patch.



diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index b0b1478..006bd33 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -191,7 +194,7 @@ void activate_traps_vhe_load(struct kvm_vcpu *vcpu)



-void deactivate_traps_vhe_put(void)
+void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu)
  {
u64 mdcr_el2 = read_sysreg(mdcr_el2);
  


Why does deactivate_traps_vhe_put() need the vcpu?
vcpu is needed for the next patch which saves/restores mdcr_el2. I will
add it in that patch.




diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 7732d0b..1b2e05b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -458,6 +459,16 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,

  static inline void __cpu_init_stage2(void) {}

+/**
+ * __cpu_copy_hyp_conf - copy the boot hyp configuration registers
+ *
+ * It is called once per-cpu during CPU hyp initialisation.
+ */


Is it just the boot cpu?



+static inline void __cpu_copy_hyp_conf(void)
+{
+   kvm_call_hyp(__kvm_populate_host_regs);
+}
+




diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index 68d6f7c..68ddc0f 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -21,6 +21,7 @@
  #include 
  #include 
  #include 
+#include 


... what's kvm_mmu.h needed for?
The __hyp_this_cpu_ptr() you add comes from kvm_asm.h.

/me tries it.

Heh, hyp_symbol_addr(). kvm_asm.h should include this, but can't because the
kvm_ksym_ref() dependency is the other-way round. This is just going to bite us 
somewhere
else later!
If we want to fix it now, moving hyp_symbol_addr() to kvm_asm.h would fix it. 
It's
generating adrp/add so the 'asm' label is fair, and it really should live with 
its EL1
counterpart kvm_ksym_ref().


Yes moving hyp_symbol_addr() fixes the dependency error.



@@ -294,7 +295,7 @@ void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu)
if (!has_vhe())
return;
  
-	deactivate_traps_vhe_put();

+   deactivate_traps_vhe_put(vcpu);
  
  	__sysreg_save_el1_state(guest_ctxt);

__sysreg_save_user_state(guest_ctxt);
@@ -316,3 +317,21 @@ void __hyp_text __kvm_enable_ssbs(void)
"msr   sctlr_el2, %0"
: &quo

Re: [kvmtool PATCH v6 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication

2019-03-01 Thread Amit Daniel Kachhap

Hi,

On 2/21/19 9:24 PM, Dave Martin wrote:

On Tue, Feb 19, 2019 at 02:54:31PM +0530, Amit Daniel Kachhap wrote:

This is a runtime capabality for KVM tool to enable Armv8.3 Pointer
Authentication in guest kernel. A command line option --ptrauth is
required for this.

Signed-off-by: Amit Daniel Kachhap 
---
  arm/aarch32/include/kvm/kvm-cpu-arch.h| 1 +
  arm/aarch64/include/asm/kvm.h | 1 +
  arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
  arm/aarch64/include/kvm/kvm-cpu-arch.h| 1 +
  arm/include/arm-common/kvm-config-arch.h  | 1 +
  arm/kvm-cpu.c | 6 ++
  include/linux/kvm.h   | 1 +
  7 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h 
b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index d28ea67..520ea76 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -13,4 +13,5 @@
  #define ARM_CPU_ID0, 0, 0
  #define ARM_CPU_ID_MPIDR  5
  
+#define ARM_VCPU_PTRAUTH_FEATURE	0

  #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index 97c3478..1068fd1 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -102,6 +102,7 @@ struct kvm_regs {
  #define KVM_ARM_VCPU_EL1_32BIT1 /* CPU running a 32bit VM */
  #define KVM_ARM_VCPU_PSCI_0_2 2 /* CPU uses PSCI v0.2 */
  #define KVM_ARM_VCPU_PMU_V3   3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_PTRAUTH   4 /* CPU uses pointer authentication */
  
  struct kvm_vcpu_init {

__u32 target;
diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h 
b/arm/aarch64/include/kvm/kvm-config-arch.h
index 04be43d..2074684 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -8,7 +8,9 @@
"Create PMUv3 device"),   \
OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed,\
"Specify random seed for Kernel Address Space "   \
-   "Layout Randomization (KASLR)"),
+   "Layout Randomization (KASLR)"),  \
+   OPT_BOOLEAN('\0', "ptrauth", &(cfg)->has_ptrauth,  \
+   "Enable address authentication"),


Nit: doesn't this enable address *and* generic authentication?  The
discussion on what capababilities and enables the ABI exposes probably
needs to conclude before we can finalise this here.

ok.


However, I would recommend that we provide a single option here that
turns both address authentication and generic authentication on, even
if the ABI treats them independently.  This is expected to be the common
case by far.

ok


We can always add more fine-grained options later if it turns out to be
necessary.
Mark suggested to provide 2 flags [1] for Address and Generic 
authentication so I was thinking of adding 2 features like,


+#define KVM_ARM_VCPU_PTRAUTH_ADDR		4 /* CPU uses pointer address authentication */
+#define KVM_ARM_VCPU_PTRAUTH_GENERIC		5 /* CPU uses pointer generic authentication */


And supply both of them together at the VCPU_INIT stage. Kernel KVM
would expect both features to be requested together.


[1]: https://www.spinics.net/lists/arm-kernel/msg709181.html



  #include "arm-common/kvm-config-arch.h"
  
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h b/arm/aarch64/include/kvm/kvm-cpu-arch.h

index a9d8563..496ece8 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -17,4 +17,5 @@
  #define ARM_CPU_CTRL  3, 0, 1, 0
  #define ARM_CPU_CTRL_SCTLR_EL10
  
+#define ARM_VCPU_PTRAUTH_FEATURE	(1UL << KVM_ARM_VCPU_PTRAUTH)

  #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/include/arm-common/kvm-config-arch.h 
b/arm/include/arm-common/kvm-config-arch.h
index 5734c46..5badcbd 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/kvm-config-arch.h
@@ -10,6 +10,7 @@ struct kvm_config_arch {
boolaarch32_guest;
boolhas_pmuv3;
u64 kaslr_seed;
+   boolhas_ptrauth;
enum irqchip_type irqchip;
u64 fw_addr;
  };
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 7780251..4ac80f8 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -68,6 +68,12 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned long cpu_id)
vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
}
  
+	/* Set KVM_ARM_VCPU_PTRAUTH if available */

+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH)) {
+   if (kvm->cfg.arch.has_ptrauth)
+   vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
+   }

Re: [PATCH v6 4/6] arm64/kvm: add a userspace option to enable pointer authentication

2019-03-01 Thread Amit Daniel Kachhap

Hi,

On 2/21/19 9:23 PM, Dave Martin wrote:

On Tue, Feb 19, 2019 at 02:54:29PM +0530, Amit Daniel Kachhap wrote:

This feature will allow the KVM guest to allow the handling of
pointer authentication instructions or to treat them as undefined
if not set. It uses the existing vcpu API KVM_ARM_VCPU_INIT to
supply this parameter instead of creating a new API.

A new register is not created to pass this parameter via
SET/GET_ONE_REG interface as just a flag (KVM_ARM_VCPU_PTRAUTH)
supplied is enough to enable this feature.

Signed-off-by: Amit Daniel Kachhap 
Cc: Mark Rutland 
Cc: Marc Zyngier 
Cc: Christoffer Dall 
Cc: kvm...@lists.cs.columbia.edu
---
  Documentation/arm64/pointer-authentication.txt |  9 +
  Documentation/virtual/kvm/api.txt  |  4 
  arch/arm64/include/asm/kvm_host.h  |  3 ++-
  arch/arm64/include/uapi/asm/kvm.h  |  1 +
  arch/arm64/kvm/handle_exit.c   |  2 +-
  arch/arm64/kvm/hyp/ptrauth-sr.c| 16 +++-
  arch/arm64/kvm/reset.c |  3 +++
  arch/arm64/kvm/sys_regs.c  | 26 +-
  include/uapi/linux/kvm.h   |  1 +
  9 files changed, 45 insertions(+), 20 deletions(-)



[...]


diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 1bacf78..2768a53 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -43,7 +43,7 @@
  
  #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
  
-#define KVM_VCPU_MAX_FEATURES 4

+#define KVM_VCPU_MAX_FEATURES 5
  
  #define KVM_REQ_SLEEP \

KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
@@ -451,6 +451,7 @@ static inline bool kvm_arch_requires_vhe(void)
return false;
  }
  
+bool kvm_arm_vcpu_ptrauth_allowed(const struct kvm_vcpu *vcpu);

  static inline bool kvm_supports_ptrauth(void)
  {
return has_vhe() && system_supports_address_auth() &&


[...]


diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
index 528ee6e..6846a23 100644
--- a/arch/arm64/kvm/hyp/ptrauth-sr.c
+++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
@@ -93,9 +93,23 @@ void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)


[...]


+/**
+ * kvm_arm_vcpu_ptrauth_allowed - checks if ptrauth feature is allowed by user
+ *
+ * @vcpu: The VCPU pointer
+ *
+ * This function will be used to check userspace option to have ptrauth or not
+ * in the guest kernel.
+ */
+bool kvm_arm_vcpu_ptrauth_allowed(const struct kvm_vcpu *vcpu)
+{
+   return kvm_supports_ptrauth() &&
+   test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features);
+}


Nit: for SVE is called the equivalent helper vcpu_has_sve(vcpu).

Neither naming is more correct, but it would make sense to be
consistent.  We will likely accumulate more of these vcpu feature
predicates over time.

Given that this is trivial and will be used all over the place, it
probably makes sense to define it in kvm_host.h rather than having it
out of line in a separate C file.

Ok, I checked the SVE implementation. A vcpu_has_ptrauth macro makes more sense.



diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index b72a3dd..987e0c3c 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -91,6 +91,9 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_ARM_VM_IPA_SIZE:
r = kvm_ipa_limit;
break;
+   case KVM_CAP_ARM_PTRAUTH:
+   r = kvm_supports_ptrauth();
+   break;
default:
r = 0;
}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 12529df..f7bcc60 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1055,7 +1055,7 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
  }
  
  /* Read a sanitised cpufeature ID register by sys_reg_desc */

-static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
+static u64 read_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_desc const *r, bool raz)
  {
u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
 (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
@@ -1071,7 +1071,7 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
-   if (!kvm_supports_ptrauth()) {
+   if (!kvm_arm_vcpu_ptrauth_allowed(vcpu)) {
kvm_debug("ptrauth unsupported for guests, 
suppressing\n");
val &= ~ptrauth_mask;
}
@@ -1095,7 +1095,7 @@ static bool __access_id_reg(struct kvm_vcpu *vcpu,
if (p->is_write)
return write_to_re

Re: [PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers

2019-03-01 Thread Amit Daniel Kachhap

Hi,

On 2/21/19 9:23 PM, Dave Martin wrote:

On Tue, Feb 19, 2019 at 02:54:28PM +0530, Amit Daniel Kachhap wrote:

From: Mark Rutland 

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

Pointer authentication feature is only enabled when VHE is built
in the kernel and present into CPU implementation so only VHE code
paths are modified.

When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys, and enable the feature. The next time the
vcpu is scheduled out/in, we start again. However the host key registers
are saved in vcpu load stage as they remain constant for each vcpu
schedule.

Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). Hence, this patch expects both type of
authentication to be present in a cpu.

Signed-off-by: Mark Rutland 
[Only VHE, key switch from from assembly, kvm_supports_ptrauth
checks, save host key in vcpu_load]
Signed-off-by: Amit Daniel Kachhap 
Reviewed-by: Julien Thierry 
Cc: Marc Zyngier 
Cc: Christoffer Dall 
Cc: kvm...@lists.cs.columbia.edu
---
  arch/arm/include/asm/kvm_host.h   |   1 +
  arch/arm64/include/asm/kvm_host.h |  23 +
  arch/arm64/include/asm/kvm_hyp.h  |   7 +++
  arch/arm64/kernel/traps.c |   1 +
  arch/arm64/kvm/handle_exit.c  |  21 +---
  arch/arm64/kvm/hyp/Makefile   |   1 +
  arch/arm64/kvm/hyp/entry.S|  17 +++
  arch/arm64/kvm/hyp/ptrauth-sr.c   | 101 ++
  arch/arm64/kvm/sys_regs.c |  37 +-
  virt/kvm/arm/arm.c|   2 +
  10 files changed, 201 insertions(+), 10 deletions(-)
  create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c


[...]


diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
new file mode 100644
index 000..528ee6e
--- /dev/null
+++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
@@ -0,0 +1,101 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * arch/arm64/kvm/hyp/ptrauth-sr.c: Guest/host ptrauth save/restore
+ *
+ * Copyright 2018 Arm Limited
+ * Author: Mark Rutland 
+ * Amit Daniel Kachhap 
+ */
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+static __always_inline bool __ptrauth_is_enabled(struct kvm_vcpu *vcpu)
+{
+   return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
+   vcpu->arch.ctxt.hcr_el2 & (HCR_API | HCR_APK);
+}
+
+#define __ptrauth_save_key(regs, key)					\
+({									\
+   regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
+   regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
+})
+
+static __always_inline void __ptrauth_save_state(struct kvm_cpu_context *ctxt)


Why __always_inline?


+{
+   __ptrauth_save_key(ctxt->sys_regs, APIA);
+   __ptrauth_save_key(ctxt->sys_regs, APIB);
+   __ptrauth_save_key(ctxt->sys_regs, APDA);
+   __ptrauth_save_key(ctxt->sys_regs, APDB);
+   __ptrauth_save_key(ctxt->sys_regs, APGA);
+}
+
+#define __ptrauth_restore_key(regs, key)				\
+({									\
+   write_sysreg_s(regs[key ## KEYLO_EL1], SYS_ ## key ## KEYLO_EL1);	\
+   write_sysreg_s(regs[key ## KEYHI_EL1], SYS_ ## key ## KEYHI_EL1);	\
+})
+
+static __always_inline void __ptrauth_restore_state(struct kvm_cpu_context 
*ctxt)


Same here.  I would hope these just need to be marked with the correct
function attribute to disable ptrauth by the compiler.  I don't see why
it makes a difference whether it's inline or not.

If the compiler semantics are not sufficiently clear, make it a macro.

ok.


(Bikeshedding here, so it you feel this has already been discussed to
death I'm happy for this to stay as-is.)


+{
+   __ptrauth_restore_key(ctxt->sys_regs, APIA);
+   __ptrauth_restore_key(ctxt->sys_regs, APIB);
+   

Re: [PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers

2019-02-28 Thread Amit Daniel Kachhap




On 2/21/19 9:21 PM, Dave Martin wrote:

On Thu, Feb 21, 2019 at 12:29:42PM +, Mark Rutland wrote:

On Tue, Feb 19, 2019 at 02:54:28PM +0530, Amit Daniel Kachhap wrote:

From: Mark Rutland 

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

Pointer authentication feature is only enabled when VHE is built
in the kernel and present into CPU implementation so only VHE code
paths are modified.


Nit: s/into/in the/



When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys, and enable the feature. The next time the
vcpu is scheduled out/in, we start again. However the host key registers
are saved in vcpu load stage as they remain constant for each vcpu
schedule.

Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). Hence, this patch expects both type of
authentication to be present in a cpu.

Signed-off-by: Mark Rutland 
[Only VHE, key switch from from assembly, kvm_supports_ptrauth
checks, save host key in vcpu_load]


Hmm, why do we need to do the key switch in assembly, given it's not
used in-kernel right now?

Is that in preparation for in-kernel pointer auth usage? If so, please
call that out in the commit message.


[...]


Huh, so we're actually doing the switch in C code...


  # KVM code is run at a different exception code with a different map, so
  # compiler instrumentation that inserts callbacks or checks into the code may
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 675fdc1..b78cc15 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -64,6 +64,12 @@ ENTRY(__guest_enter)
  
  	add	x18, x0, #VCPU_CONTEXT
  
+#ifdef	CONFIG_ARM64_PTR_AUTH

+   // Prepare parameter for __ptrauth_switch_to_guest(vcpu, host, guest).
+   mov x2, x18
+   bl  __ptrauth_switch_to_guest
+#endif


... and conditionally *calling* that switch code from assembly ...


+
// Restore guest regs x0-x17
ldp x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
ldp x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
@@ -118,6 +124,17 @@ ENTRY(__guest_exit)
  
  	get_host_ctxt	x2, x3
  
+#ifdef	CONFIG_ARM64_PTR_AUTH

+   // Prepare parameter for __ptrauth_switch_to_host(vcpu, guest, host).
+   // Save x0, x2 which are used later in callee saved registers.
+   mov x19, x0
+   mov x20, x2
+   sub x0, x1, #VCPU_CONTEXT
+   ldr x29, [x2, #CPU_XREG_OFFSET(29)]
+   bl  __ptrauth_switch_to_host
+   mov x0, x19
+   mov x2, x20
+#endif


... which adds a load of boilerplate for no immediate gain.

Do we really need to do this in assembly today?


If we will need to move this to assembly when we add in-kernel ptrauth
support, it may be best to have it in assembly from the start, to reduce
unnecessary churn.

But having a mix of C and assembly is likely to make things more
complicated: we should go with one or the other IMHO.

ok, I will check on this.

Thanks,
Amit D


Cheers
---Dave



Re: [PATCH v6 2/6] arm64/kvm: preserve host MDCR_EL2 value

2019-02-28 Thread Amit Daniel Kachhap

Hi,

On 2/21/19 9:21 PM, Dave Martin wrote:

On Tue, Feb 19, 2019 at 02:54:27PM +0530, Amit Daniel Kachhap wrote:

Save host MDCR_EL2 value during kvm HYP initialisation and restore
after every switch from host to guest. There should not be any
change in functionality due to this.

The value of mdcr_el2 is now stored in struct kvm_cpu_context as
both host and guest can now use this field in a common way.


Is MDCR_EL2 somehow relevant to pointer auth?

It's not entirely clear why this patch is here.

If this is a cleanup to align the handling of this register with
how HCR_EL2 is handled, it would be good to explain that in the commit
message.

Agreed, I will add more information in the commit message.



Signed-off-by: Amit Daniel Kachhap 
Cc: Marc Zyngier 
Cc: Mark Rutland 
Cc: Christoffer Dall 
Cc: kvm...@lists.cs.columbia.edu
---
  arch/arm/include/asm/kvm_host.h   |  1 -
  arch/arm64/include/asm/kvm_host.h |  6 ++
  arch/arm64/kvm/debug.c| 28 ++--
  arch/arm64/kvm/hyp/switch.c   | 17 -
  arch/arm64/kvm/hyp/sysreg-sr.c|  6 ++
  virt/kvm/arm/arm.c|  1 -
  6 files changed, 18 insertions(+), 41 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 05706b4..704667e 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -294,7 +294,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu 
*vcpu) {}
  static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
  static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
  
-static inline void kvm_arm_init_debug(void) {}

  static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
  static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {}
  static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 1b2e05b..2f1bb86 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -205,6 +205,8 @@ struct kvm_cpu_context {
  
  	/* HYP host/guest configuration */

u64 hcr_el2;
+   u32 mdcr_el2;
+


ARMv8-A says MDCR_EL2 is a 64-bit register.

Bits [63:20] are currently RES0, so this is probably not a big deal.
But it would be better to make this 64-bit to prevent future accidents.
It may be better to make that change in a separate patch.

Yes, this is a potential issue. I will fix it in a separate patch.

Thanks,
Amit D


This is probably non-urgent, since this is clearly not causing problems
for anyone today.

[...]

Cheers
---Dave



Re: [PATCH v6 1/6] arm64/kvm: preserve host HCR_EL2 value

2019-02-28 Thread Amit Daniel Kachhap

Hi,

On 2/21/19 9:19 PM, Dave Martin wrote:

On Tue, Feb 19, 2019 at 02:54:26PM +0530, Amit Daniel Kachhap wrote:

From: Mark Rutland 

When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
is a constant value. This works today, as the host HCR_EL2 value is
always the same, but this will get in the way of supporting extensions
that require HCR_EL2 bits to be set conditionally for the host.

To allow such features to work without KVM having to explicitly handle
every possible host feature combination, this patch has KVM save/restore
for the host HCR when switching to/from a guest HCR. The saving of the
register is done once during cpu hypervisor initialization state and is
just restored after switch from guest.

For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
kvm_call_hyp and is helpful in NHVE case.


Minor nit: NVHE misspelled.  This looks a bit like it's naming an arch
feature rather than a kernel implementation detail though.  Maybe write
"non-VHE".

yes.



For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
to toggle the TGE bit with a RMW sequence, as we already do in
__tlb_switch_to_guest_vhe().

The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
and guest can now use this field in a common way.

Signed-off-by: Mark Rutland 
[Added __cpu_copy_hyp_conf, hcr_el2 field in struct kvm_cpu_context]
Signed-off-by: Amit Daniel Kachhap 
Cc: Marc Zyngier 
Cc: Christoffer Dall 
Cc: kvm...@lists.cs.columbia.edu
---
  arch/arm/include/asm/kvm_host.h  |  2 ++
  arch/arm64/include/asm/kvm_asm.h |  2 ++
  arch/arm64/include/asm/kvm_emulate.h | 22 +++---
  arch/arm64/include/asm/kvm_host.h| 13 -
  arch/arm64/include/asm/kvm_hyp.h |  2 +-
  arch/arm64/kvm/guest.c   |  2 +-
  arch/arm64/kvm/hyp/switch.c  | 23 +--
  arch/arm64/kvm/hyp/sysreg-sr.c   | 21 -
  arch/arm64/kvm/hyp/tlb.c |  6 +-
  virt/kvm/arm/arm.c   |  1 +
  10 files changed, 68 insertions(+), 26 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index ca56537..05706b4 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -273,6 +273,8 @@ static inline void __cpu_init_stage2(void)
kvm_call_hyp(__init_stage2_translation);
  }
  
+static inline void __cpu_copy_hyp_conf(void) {}

+
  static inline int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
  {
return 0;
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index f5b79e9..8acd73f 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -80,6 +80,8 @@ extern void __vgic_v3_init_lrs(void);
  
  extern u32 __kvm_get_mdcr_el2(void);
  
+extern void __kvm_populate_host_regs(void);

+
  /* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
 #define __hyp_this_cpu_ptr(sym)						\
({  \
diff --git a/arch/arm64/include/asm/kvm_emulate.h 
b/arch/arm64/include/asm/kvm_emulate.h
index 506386a..0dbe795 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -50,25 +50,25 @@ void kvm_inject_pabt32(struct kvm_vcpu *vcpu, unsigned long 
addr);
  
  static inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)

  {
-   return !(vcpu->arch.hcr_el2 & HCR_RW);
+   return !(vcpu->arch.ctxt.hcr_el2 & HCR_RW);


Putting hcr_el2 into struct kvm_cpu_context creates a lot of splatter
here, and I'm wondering whether it's really necessary.  Otherwise,
we could just put the per-vcpu guest HCR_EL2 value in struct
kvm_vcpu_arch.
I did it like that in the V4 version [1] but comments were raised that this
was a repetition of the hcr_el2 field in two places and could be avoided.

[1]: https://lkml.org/lkml/2019/1/4/433


Is the *host* hcr_el2 value really different per-vcpu?  That looks
odd.  I would have thought this is fixed across the system at KVM
startup time.

Having a single global host hcr_el2 would also avoid the need for
__kvm_populate_host_regs(): instead, we just decide what HCR_EL2 is to
be ahead of time and set a global variable that we map into Hyp.
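
A hypothetical shape of that suggestion: decide the value once, keep it in a
single global, and map it read-only into Hyp (names are illustrative):

u64 kvm_host_hcr_el2;

static int __init kvm_decide_host_hcr(void)
{
	kvm_host_hcr_el2 = has_vhe() ? HCR_HOST_VHE_FLAGS
				     : HCR_HOST_NVHE_FLAGS;
	/* plus any conditionally-required bits */
	return create_hyp_mappings(&kvm_host_hcr_el2,
				   &kvm_host_hcr_el2 + 1, PAGE_HYP_RO);
}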


Or does the host HCR_EL2 need to vary at runtime for some reason I've
missed?
This patch makes the host hcr_el2 stop using fixed values such as
HCR_HOST_NVHE_FLAGS/HCR_HOST_VHE_FLAGS during the context switch, and
instead saves the real value at boot time. It is only preparation for
configuring the host hcr_el2 dynamically; currently the value is the
same on all cpus.


I suppose it is better to keep the host hcr_el2 per-cpu to take care of
heterogeneous systems. The host mdcr_el2 is already stored on a per-cpu
basis (arch/arm64/kvm/debug.c).
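
For reference, a minimal sketch of the per-cpu approach being discussed
(names follow the patch; this is an illustration, not the final code):

void __hyp_text __kvm_populate_host_regs(void)
{
	struct kvm_cpu_context *host_ctxt;

	/* Per-cpu host context, resolved at EL2 for both VHE and nVHE. */
	host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
	host_ctxt->hcr_el2 = read_sysreg(hcr_el2);
}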


[...]

+void __hyp_text __kvm_populate_host_regs(void)
+{
+ 

Re: [PATCH v6 4/6] arm64/kvm: add a userspace option to enable pointer authentication

2019-02-28 Thread Amit Daniel Kachhap

Hi,

On 2/21/19 6:04 PM, Mark Rutland wrote:

On Tue, Feb 19, 2019 at 02:54:29PM +0530, Amit Daniel Kachhap wrote:

This feature allows the KVM guest to handle pointer authentication
instructions, or to treat them as undefined if the flag is not set. It
uses the existing vcpu API KVM_ARM_VCPU_INIT to supply this parameter
instead of creating a new API.

No new register is created to pass this parameter via the
SET/GET_ONE_REG interface, as a single flag (KVM_ARM_VCPU_PTRAUTH) is
enough to enable this feature.

Signed-off-by: Amit Daniel Kachhap 
Cc: Mark Rutland 
Cc: Marc Zyngier 
Cc: Christoffer Dall 
Cc: kvm...@lists.cs.columbia.edu
---
  Documentation/arm64/pointer-authentication.txt |  9 +
  Documentation/virtual/kvm/api.txt  |  4 
  arch/arm64/include/asm/kvm_host.h  |  3 ++-
  arch/arm64/include/uapi/asm/kvm.h  |  1 +
  arch/arm64/kvm/handle_exit.c   |  2 +-
  arch/arm64/kvm/hyp/ptrauth-sr.c| 16 +++-
  arch/arm64/kvm/reset.c |  3 +++
  arch/arm64/kvm/sys_regs.c  | 26 +-
  include/uapi/linux/kvm.h   |  1 +
  9 files changed, 45 insertions(+), 20 deletions(-)

diff --git a/Documentation/arm64/pointer-authentication.txt 
b/Documentation/arm64/pointer-authentication.txt
index a25cd21..0529a7d 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -82,7 +82,8 @@ pointers).
  Virtualization
  --
  
-Pointer authentication is not currently supported in KVM guests. KVM

-will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
-the feature will result in an UNDEFINED exception being injected into
-the guest.
+Pointer authentication is enabled in KVM guest when virtual machine is
+created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting this feature
+to be enabled. Without this flag, pointer authentication is not enabled
+in KVM guests and attempted use of the feature will result in an UNDEFINED
+exception being injected into the guest.
diff --git a/Documentation/virtual/kvm/api.txt 
b/Documentation/virtual/kvm/api.txt
index 356156f..1e646fb 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2642,6 +2642,10 @@ Possible features:
  Depends on KVM_CAP_ARM_PSCI_0_2.
- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
  Depends on KVM_CAP_ARM_PMU_V3.
+   - KVM_ARM_VCPU_PTRAUTH: Emulate Pointer authentication for the CPU.
+ Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
+ set, then the KVM guest allows the execution of pointer authentication
+ instructions. Otherwise, KVM treats these instructions as undefined.


I think that we should have separate flags for address auth and generic
auth, to match the ID register split.

For now, we can have KVM only support the case where both are set, but
it gives us freedom to support either in isolation if we have to in
future, without an ABI break.
Yes, I agree with you about having two separate flags for address and
generic ptrauth. Will add them in the next iteration.


Thanks,
Amit D


Thanks,
Mark.



Re: [PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers

2019-02-28 Thread Amit Daniel Kachhap

Hi Mark,

On 2/21/19 5:59 PM, Mark Rutland wrote:

On Tue, Feb 19, 2019 at 02:54:28PM +0530, Amit Daniel Kachhap wrote:

From: Mark Rutland 

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

Pointer authentication feature is only enabled when VHE is built
in the kernel and present into CPU implementation so only VHE code
paths are modified.


Nit: s/into/in the/

ok.




When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys, and enable the feature. The next time the
vcpu is scheduled out/in, we start again. However, the host key registers
are saved at the vcpu_load stage, as they remain constant while each vcpu
is scheduled.

Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). Hence, this patch expects both types of
authentication to be present in a CPU.

Signed-off-by: Mark Rutland 
[Only VHE, key switch from assembly, kvm_supports_ptrauth
checks, save host key in vcpu_load]


Hmm, why do we need to do the key switch in assembly, given it's not
used in-kernel right now?

Is that in preparation for in-kernel pointer auth usage? If so, please
call that out in the commit message.

ok sure.


[...]


diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 4e2fb87..5cac605 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -749,6 +749,7 @@ static const char *esr_class_str[] = {
[ESR_ELx_EC_CP14_LS]= "CP14 LDC/STC",
[ESR_ELx_EC_FP_ASIMD]   = "ASIMD",
[ESR_ELx_EC_CP10_ID]= "CP10 MRC/VMRS",
+   [ESR_ELx_EC_PAC]= "Pointer authentication trap",


For consistency with the other strings, can we please make this "PAC"?

ok. It makes sense.


[...]


diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 82d1904..17cec99 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -19,6 +19,7 @@ obj-$(CONFIG_KVM_ARM_HOST) += switch.o
  obj-$(CONFIG_KVM_ARM_HOST) += fpsimd.o
  obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
  obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
+obj-$(CONFIG_KVM_ARM_HOST) += ptrauth-sr.o


Huh, so we're actually doing the switch in C code...


  # KVM code is run at a different exception code with a different map, so
  # compiler instrumentation that inserts callbacks or checks into the code may
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index 675fdc1..b78cc15 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -64,6 +64,12 @@ ENTRY(__guest_enter)
  
  	add	x18, x0, #VCPU_CONTEXT
  
+#ifdef	CONFIG_ARM64_PTR_AUTH
+   // Prepare parameter for __ptrauth_switch_to_guest(vcpu, host, guest).
+   mov x2, x18
+   bl  __ptrauth_switch_to_guest
+#endif


... and conditionally *calling* that switch code from assembly ...


+
// Restore guest regs x0-x17
ldp x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
ldp x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
@@ -118,6 +124,17 @@ ENTRY(__guest_exit)
  
  	get_host_ctxt	x2, x3
  
+#ifdef	CONFIG_ARM64_PTR_AUTH
+   // Prepare parameter for __ptrauth_switch_to_host(vcpu, guest, host).
+   // Save x0, x2 which are used later in callee saved registers.
+   mov x19, x0
+   mov x20, x2
+   sub x0, x1, #VCPU_CONTEXT
+   ldr x29, [x2, #CPU_XREG_OFFSET(29)]
+   bl  __ptrauth_switch_to_host
+   mov x0, x19
+   mov x2, x20
+#endif


... which adds a load of boilerplate for no immediate gain.
Some parameter optimization may be possible here, as the guest and host
ctxt can be derived from the vcpu itself, as James suggested in other
review comments. I thought about doing all of the save/restore in
assembly, but as an optimization the host keys are now saved at the
vcpu_load stage in C, so those C functions are reused here as well.


Again, all of this code is beneficial with in-kernel ptrauth, so in case
of strong objection I may revert to the old way.


Do we really need to d

Re: [PATCH v6 1/6] arm64/kvm: preserve host HCR_EL2 value

2019-02-27 Thread Amit Daniel Kachhap

Hi,

On 2/21/19 5:20 PM, Mark Rutland wrote:

Hi,

On Tue, Feb 19, 2019 at 02:54:26PM +0530, Amit Daniel Kachhap wrote:

From: Mark Rutland 

When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
is a constant value. This works today, as the host HCR_EL2 value is
always the same, but this will get in the way of supporting extensions
that require HCR_EL2 bits to be set conditionally for the host.

To allow such features to work without KVM having to explicitly handle
every possible host feature combination, this patch has KVM save/restore
for the host HCR when switching to/from a guest HCR. The register is
saved once during CPU hypervisor initialisation and simply restored
after each switch back from the guest.

For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
kvm_call_hyp; this is helpful in the NHVE case.

For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
to toggle the TGE bit with a RMW sequence, as we already do in
__tlb_switch_to_guest_vhe().

The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
and guest can now use this field in a common way.

Signed-off-by: Mark Rutland 
[Added __cpu_copy_hyp_conf, hcr_el2 field in struct kvm_cpu_context]
Signed-off-by: Amit Daniel Kachhap 
Cc: Marc Zyngier 
Cc: Christoffer Dall 
Cc: kvm...@lists.cs.columbia.edu


[...]


+/**
+ * __cpu_copy_hyp_conf - copy the boot hyp configuration registers
+ *
+ * It is called once per-cpu during CPU hyp initialisation.
+ */
+static inline void __cpu_copy_hyp_conf(void)


I think this would be better named as something like:

   cpu_init_host_ctxt()

... as that makes it a bit clearer as to what is being initialized.

OK, agreed with the name.
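
For context, a sketch of the helper being renamed (as posted; a thin
per-cpu init hook that asks Hyp to record the boot values):

static inline void __cpu_copy_hyp_conf(void)
{
	kvm_call_hyp(__kvm_populate_host_regs);
}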


[...]


+/**
+ * __kvm_populate_host_regs - Stores host register values
+ *
+ * This function acts as a function handler parameter for kvm_call_hyp and
+ * may be called from EL1 exception level to fetch the register value.
+ */
+void __hyp_text __kvm_populate_host_regs(void)
+{
+   struct kvm_cpu_context *host_ctxt;
+
+   if (has_vhe())
+   host_ctxt = this_cpu_ptr(&kvm_host_cpu_state);
+   else
+   host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);


Do we need the has_vhe() check here?

Can't we always do:

host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);

... regardless of VHE? Or is that broken for VHE somehow?

Yes it works fine for VHE. I missed this.

Thanks,
Amit


Thanks,
Mark.



[PATCH v6 0/6] Add ARMv8.3 pointer authentication for kvm guest

2019-02-19 Thread Amit Daniel Kachhap
Hi,

This patch series adds pointer authentication support for KVM guests and
is based on top of Linux 5.0-rc6. The basic patches in this series were
originally posted by Mark Rutland earlier [1,2], and those threads contain
some of the history of this work.

Extension Overview:
=

The ARMv8.3 pointer authentication extension adds functionality to detect
modification of pointer values, mitigating certain classes of attack such as
stack smashing, and making return oriented programming attacks harder.

The extension introduces the concept of a pointer authentication code (PAC),
which is stored in some upper bits of pointers. Each PAC is derived from the
original pointer, another 64-bit value (e.g. the stack pointer), and a secret
128-bit key.
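
For a concrete picture, a sketch of the five 128-bit keys and how one is
programmed, along the lines of the kernel's asm/pointer_auth.h (details
hedged; names may differ):

struct ptrauth_key {
	unsigned long lo, hi;
};

struct ptrauth_keys {
	struct ptrauth_key apia;	/* instruction key A */
	struct ptrauth_key apib;	/* instruction key B */
	struct ptrauth_key apda;	/* data key A */
	struct ptrauth_key apdb;	/* data key B */
	struct ptrauth_key apga;	/* generic (PACGA) key */
};

#define __ptrauth_key_install(k, v)				\
do {								\
	struct ptrauth_key __pki_v = (v);			\
	write_sysreg_s(__pki_v.lo, SYS_ ## k ## KEYLO_EL1);	\
	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
} while (0)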

New instructions are added which can be used to:

* Insert a PAC into a pointer
* Strip a PAC from a pointer
* Authenticate and strip a PAC from a pointer

The detailed description of the ARMv8.3 pointer authentication support in
userspace/kernel can be found in Kristina's generic pointer authentication
patch series [3].

KVM guest work:
==

If pointer authentication is enabled for KVM guests, the new PAC
instructions will not trap to EL2. If it is not enabled, they may either
be ignored (if in the HINT region) or trapped to EL2 as illegal
instructions. Since each KVM guest vcpu runs as a thread, it has keys
initialized for use by PAC. When a world switch happens between the host
and the guest, these keys are context-switched.

There were some review comments by Christoffer Dall on the original
series [1,2,3], and this patch series tries to address them.

The current v6 patch series incorporates most of the suggestions from
James Morse, Kristina, Julien and Dave.

This patch series is based on just a single patch from Dave Martin [8],
which adds control checks for accessing sys registers.

Changes since v5 [7]: Major changes are listed below.

* Split hcr_el2 and mdcr_el2 save/restore in two patches.
* Reverted to the save/restore of sys-reg keys as done in v4 [5]. There was
  a suggestion by James Morse to keep the ptrauth utilities in a single place
  in the arm core and use them from kvm. However, that change deviates from
  the existing sys-reg implementations and is not scalable.
* Invoked the key switch C functions from __guest_enter/__guest_exit assembly.
* Host key save is now done inside vcpu_load.
* Reverted the masking of cpufeature ID registers for ptrauth when disabled
  from userspace.
* Reset of ptrauth key registers is not done conditionally.
* Code and Documentation cleanup.

Changes since v4 [6]: Several suggestions from James Morse
* Move host registers to be saved/restored inside struct kvm_cpu_context.
* Similar to hcr_el2, save/restore the mdcr_el2 register as well.
* Added save routines for ptrauth keys in the generic arm core and
  use them during the KVM context switch.
* Defined a GCC attribute __no_ptrauth which prevents the compiler from
  generating ptrauth instructions in a function. This is taken from
  Kristina's earlier kernel pointer authentication support patches [4].
* Dropped the patch to mask the cpufeature when not enabled from userspace;
  now only the key registers are masked from the register list.

Changes since v3 [5]:
* Use pointer authentication only when VHE is present, as ARMv8.3 implies
  the ARMv8.1 features are present.
* Added back the lazy context handling of ptrauth instructions from the v2
  version.
* Added more details in Documentation.

Changes since v2 [1,2]:
* Allow host and guest to have different HCR_EL2 settings and not just constant
  value HCR_HOST_VHE_FLAGS or HCR_HOST_NVHE_FLAGS.
* Optimise the reading of HCR_EL2 in the host/guest switch by fetching it
  once during KVM initialisation and using it later.
* Context switch pointer authentication keys when switching between guest
  and host. Pointer authentication was enabled in a lazy context earlier [2];
  that is removed now to keep things simple. However, it can be revisited
  later if there is a significant performance issue.
* Added a userspace option to choose pointer authentication.
* Based on the userspace option, ptrauth cpufeature will be visible.
* Based on the userspace option, ptrauth key registers will be accessible.
* A small document is added on how to enable pointer authentication from
  userspace KVM API.

Looking for feedback and comments.

Thanks,
Amit

[1]: https://lore.kernel.org/lkml/20171127163806.31435-11-mark.rutl...@arm.com/
[2]: https://lore.kernel.org/lkml/20171127163806.31435-10-mark.rutl...@arm.com/
[3]: https://lkml.org/lkml/2018/12/7/666
[4]: 
https://lore.kernel.org/lkml/20181005084754.20950-1-kristina.martse...@arm.com/
[5]: https://lkml.org/lkml/2018/10/17/594
[6]: https://lkml.org/lkml/2018/12/18/80
[7]: https://lkml.org/lkml/2019/1/28/49
[8]: 
https://lore.kernel.org/linux-arm-kernel/1547757219-19439-13-git-send-email-dave.mar...@arm.com/


Linux (5.0-rc6 based):

Amit Daniel Kachhap (5):
  arm64/kvm: preserve host HCR_EL2 value
  arm64/kvm: preserve host

[PATCH v6 3/6] arm64/kvm: context-switch ptrauth registers

2019-02-19 Thread Amit Daniel Kachhap
From: Mark Rutland 

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

Pointer authentication feature is only enabled when VHE is built
in the kernel and present into CPU implementation so only VHE code
paths are modified.

When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys, and enable the feature. The next time the
vcpu is scheduled out/in, we start again. However, the host key registers
are saved at the vcpu_load stage, as they remain constant while each vcpu
is scheduled.
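
To make the semi-lazy scheme concrete, a sketch of its two ends (function
names follow the patch; the enable/disable helpers toggle HCR_API/HCR_APK
in the vcpu's hcr_el2):

/* On vcpu load: start with ptrauth trapped, keys not switched. */
void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)
{
	if (kvm_supports_ptrauth())
		kvm_arm_vcpu_ptrauth_disable(vcpu);
}

/* On the first trapped use: flip to eager key switching. */
void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
{
	if (kvm_supports_ptrauth())
		kvm_arm_vcpu_ptrauth_enable(vcpu);
	else
		kvm_inject_undefined(vcpu);
}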

Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). Hence, this patch expects both types of
authentication to be present in a CPU.

Signed-off-by: Mark Rutland 
[Only VHE, key switch from assembly, kvm_supports_ptrauth
checks, save host key in vcpu_load]
Signed-off-by: Amit Daniel Kachhap 
Reviewed-by: Julien Thierry 
Cc: Marc Zyngier 
Cc: Christoffer Dall 
Cc: kvm...@lists.cs.columbia.edu
---
 arch/arm/include/asm/kvm_host.h   |   1 +
 arch/arm64/include/asm/kvm_host.h |  23 +
 arch/arm64/include/asm/kvm_hyp.h  |   7 +++
 arch/arm64/kernel/traps.c |   1 +
 arch/arm64/kvm/handle_exit.c  |  21 +---
 arch/arm64/kvm/hyp/Makefile   |   1 +
 arch/arm64/kvm/hyp/entry.S|  17 +++
 arch/arm64/kvm/hyp/ptrauth-sr.c   | 101 ++
 arch/arm64/kvm/sys_regs.c |  37 +-
 virt/kvm/arm/arm.c|   2 +
 10 files changed, 201 insertions(+), 10 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 704667e..b200c14 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -345,6 +345,7 @@ static inline int kvm_arm_have_ssbd(void)
 
 static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu) {}
 
 #define __KVM_HAVE_ARCH_VM_ALLOC
 struct kvm *kvm_arch_alloc_vm(void);
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 2f1bb86..1bacf78 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -146,6 +146,18 @@ enum vcpu_sysreg {
PMSWINC_EL0,/* Software Increment Register */
PMUSERENR_EL0,  /* User Enable Register */
 
+   /* Pointer Authentication Registers */
+   APIAKEYLO_EL1,
+   APIAKEYHI_EL1,
+   APIBKEYLO_EL1,
+   APIBKEYHI_EL1,
+   APDAKEYLO_EL1,
+   APDAKEYHI_EL1,
+   APDBKEYLO_EL1,
+   APDBKEYHI_EL1,
+   APGAKEYLO_EL1,
+   APGAKEYHI_EL1,
+
/* 32bit specific registers. Keep them at the end of the range */
DACR32_EL2, /* Domain Access Control Register */
IFSR32_EL2, /* Instruction Fault Status Register */
@@ -439,6 +451,17 @@ static inline bool kvm_arch_requires_vhe(void)
return false;
 }
 
+static inline bool kvm_supports_ptrauth(void)
+{
+   return has_vhe() && system_supports_address_auth() &&
+   system_supports_generic_auth();
+}
+
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
+
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
 static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 6e65cad..09e061a 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -153,6 +153,13 @@ bool __fpsimd_enabled(void);
 void activate_traps_vhe_load(struct kvm_vcpu *vcpu);
 void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu);
 
+void __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
+

[PATCH v6 2/6] arm64/kvm: preserve host MDCR_EL2 value

2019-02-19 Thread Amit Daniel Kachhap
Save the host MDCR_EL2 value during KVM HYP initialisation and restore
it after every switch back from the guest. There should not be any
functional change due to this.

The value of mdcr_el2 is now stored in struct kvm_cpu_context as
both host and guest can now use this field in a common way.
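
A sketch of the resulting guest-exit path, hedged (per the diffstat, the
real change lands in hyp/switch.c):

static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
{
	struct kvm_cpu_context *host_ctxt = vcpu->arch.host_cpu_context;

	/* ... */
	/* Restore the host's saved MDCR_EL2 instead of rebuilding it. */
	write_sysreg(host_ctxt->mdcr_el2, mdcr_el2);
}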

Signed-off-by: Amit Daniel Kachhap 
Cc: Marc Zyngier 
Cc: Mark Rutland 
Cc: Christoffer Dall 
Cc: kvm...@lists.cs.columbia.edu
---
 arch/arm/include/asm/kvm_host.h   |  1 -
 arch/arm64/include/asm/kvm_host.h |  6 ++
 arch/arm64/kvm/debug.c| 28 ++--
 arch/arm64/kvm/hyp/switch.c   | 17 -
 arch/arm64/kvm/hyp/sysreg-sr.c|  6 ++
 virt/kvm/arm/arm.c|  1 -
 6 files changed, 18 insertions(+), 41 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 05706b4..704667e 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -294,7 +294,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu 
*vcpu) {}
 static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
 
-static inline void kvm_arm_init_debug(void) {}
 static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 1b2e05b..2f1bb86 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -205,6 +205,8 @@ struct kvm_cpu_context {
 
/* HYP host/guest configuration */
u64 hcr_el2;
+   u32 mdcr_el2;
+
struct kvm_vcpu *__hyp_running_vcpu;
 };
 
@@ -213,9 +215,6 @@ typedef struct kvm_cpu_context kvm_cpu_context_t;
 struct kvm_vcpu_arch {
struct kvm_cpu_context ctxt;
 
-   /* HYP configuration */
-   u32 mdcr_el2;
-
/* Exception Information */
struct kvm_vcpu_fault_info fault;
 
@@ -446,7 +445,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu 
*vcpu) {}
 static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
 
-void kvm_arm_init_debug(void);
 void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
index f39801e..99dc0a4 100644
--- a/arch/arm64/kvm/debug.c
+++ b/arch/arm64/kvm/debug.c
@@ -32,8 +32,6 @@
DBG_MDSCR_KDE | \
DBG_MDSCR_MDE)
 
-static DEFINE_PER_CPU(u32, mdcr_el2);
-
 /**
  * save/restore_guest_debug_regs
  *
@@ -65,21 +63,6 @@ static void restore_guest_debug_regs(struct kvm_vcpu *vcpu)
 }
 
 /**
- * kvm_arm_init_debug - grab what we need for debug
- *
- * Currently the sole task of this function is to retrieve the initial
- * value of mdcr_el2 so we can preserve MDCR_EL2.HPMN which has
- * presumably been set-up by some knowledgeable bootcode.
- *
- * It is called once per-cpu during CPU hyp initialisation.
- */
-
-void kvm_arm_init_debug(void)
-{
-   __this_cpu_write(mdcr_el2, kvm_call_hyp(__kvm_get_mdcr_el2));
-}
-
-/**
  * kvm_arm_reset_debug_ptr - reset the debug ptr to point to the vcpu state
  */
 
@@ -111,6 +94,7 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu)
 
 void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 {
+   kvm_cpu_context_t *host_cxt = this_cpu_ptr(&kvm_host_cpu_state);
bool trap_debug = !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY);
unsigned long mdscr;
 
@@ -120,8 +104,8 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 * This also clears MDCR_EL2_E2PB_MASK to disable guest access
 * to the profiling buffer.
 */
-   vcpu->arch.mdcr_el2 = __this_cpu_read(mdcr_el2) & MDCR_EL2_HPMN_MASK;
-   vcpu->arch.mdcr_el2 |= (MDCR_EL2_TPM |
+   vcpu->arch.ctxt.mdcr_el2 = host_cxt->mdcr_el2 & MDCR_EL2_HPMN_MASK;
+   vcpu->arch.ctxt.mdcr_el2 |= (MDCR_EL2_TPM |
MDCR_EL2_TPMS |
MDCR_EL2_TPMCR |
MDCR_EL2_TDRA |
@@ -130,7 +114,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
/* Is Guest debugging in effect? */
if (vcpu->guest_debug) {
/* Route all software debug exceptions to EL2 */
-   vcpu->arch.mdcr_el2 |= MDCR_EL2_TDE;
+   vcpu->arch.ctxt.mdcr_el2 |= MDCR_EL2_TDE;
 
/* Save guest debug state */
save_guest_debug_regs(vcpu);
@@ -202,13 +186,13 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu)
 
/* Trap debug register access */
if (trap_debug)
-   vcpu->arch.mdcr

[PATCH v6 4/6] arm64/kvm: add a userspace option to enable pointer authentication

2019-02-19 Thread Amit Daniel Kachhap
This feature allows the KVM guest to handle pointer authentication
instructions, or to treat them as undefined if the flag is not set. It
uses the existing vcpu API KVM_ARM_VCPU_INIT to supply this parameter
instead of creating a new API.

No new register is created to pass this parameter via the
SET/GET_ONE_REG interface, as a single flag (KVM_ARM_VCPU_PTRAUTH) is
enough to enable this feature.
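
For context, a minimal sketch of how userspace would request this through
the existing ioctls (hypothetical snippet; error handling omitted):

struct kvm_vcpu_init init;

ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init);
if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_PTRAUTH) > 0)
	init.features[0] |= 1UL << KVM_ARM_VCPU_PTRAUTH;
ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);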

Signed-off-by: Amit Daniel Kachhap 
Cc: Mark Rutland 
Cc: Marc Zyngier 
Cc: Christoffer Dall 
Cc: kvm...@lists.cs.columbia.edu
---
 Documentation/arm64/pointer-authentication.txt |  9 +
 Documentation/virtual/kvm/api.txt  |  4 
 arch/arm64/include/asm/kvm_host.h  |  3 ++-
 arch/arm64/include/uapi/asm/kvm.h  |  1 +
 arch/arm64/kvm/handle_exit.c   |  2 +-
 arch/arm64/kvm/hyp/ptrauth-sr.c| 16 +++-
 arch/arm64/kvm/reset.c |  3 +++
 arch/arm64/kvm/sys_regs.c  | 26 +-
 include/uapi/linux/kvm.h   |  1 +
 9 files changed, 45 insertions(+), 20 deletions(-)

diff --git a/Documentation/arm64/pointer-authentication.txt 
b/Documentation/arm64/pointer-authentication.txt
index a25cd21..0529a7d 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -82,7 +82,8 @@ pointers).
 Virtualization
 --
 
-Pointer authentication is not currently supported in KVM guests. KVM
-will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
-the feature will result in an UNDEFINED exception being injected into
-the guest.
+Pointer authentication is enabled in KVM guest when virtual machine is
+created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting this feature
+to be enabled. Without this flag, pointer authentication is not enabled
+in KVM guests and attempted use of the feature will result in an UNDEFINED
+exception being injected into the guest.
diff --git a/Documentation/virtual/kvm/api.txt 
b/Documentation/virtual/kvm/api.txt
index 356156f..1e646fb 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2642,6 +2642,10 @@ Possible features:
  Depends on KVM_CAP_ARM_PSCI_0_2.
- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
  Depends on KVM_CAP_ARM_PMU_V3.
+   - KVM_ARM_VCPU_PTRAUTH: Emulate Pointer authentication for the CPU.
+ Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
+ set, then the KVM guest allows the execution of pointer authentication
+ instructions. Otherwise, KVM treats these instructions as undefined.
 
 
 4.83 KVM_ARM_PREFERRED_TARGET
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 1bacf78..2768a53 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -43,7 +43,7 @@
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
-#define KVM_VCPU_MAX_FEATURES 4
+#define KVM_VCPU_MAX_FEATURES 5
 
 #define KVM_REQ_SLEEP \
KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
@@ -451,6 +451,7 @@ static inline bool kvm_arch_requires_vhe(void)
return false;
 }
 
+bool kvm_arm_vcpu_ptrauth_allowed(const struct kvm_vcpu *vcpu);
 static inline bool kvm_supports_ptrauth(void)
 {
return has_vhe() && system_supports_address_auth() &&
diff --git a/arch/arm64/include/uapi/asm/kvm.h 
b/arch/arm64/include/uapi/asm/kvm.h
index 97c3478..5f82ca1 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -102,6 +102,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2  2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3	3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_PTRAUTH   4 /* VCPU uses address authentication */
 
 struct kvm_vcpu_init {
__u32 target;
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 7622ab3..d9f583b 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -179,7 +179,7 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run 
*run)
  */
 void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
 {
-   if (kvm_supports_ptrauth())
+   if (kvm_arm_vcpu_ptrauth_allowed(vcpu))
kvm_arm_vcpu_ptrauth_enable(vcpu);
else
kvm_inject_undefined(vcpu);
diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
index 528ee6e..6846a23 100644
--- a/arch/arm64/kvm/hyp/ptrauth-sr.c
+++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
@@ -93,9 +93,23 @@ void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)
 {
struct kvm_cpu_context *host_ctxt;
 
-   if (kvm_supports_ptrauth()) {
+   if (kvm_arm_vcpu_ptrauth_allowed(vcpu)) {
kvm_arm_vcpu_ptrauth_disable(vcpu);

[PATCH v6 5/6] arm64/kvm: control accessibility of ptrauth key registers

2019-02-19 Thread Amit Daniel Kachhap
Depending on the userspace settings, the ptrauth key registers are
conditionally present in the guest system register list, based on the
user-specified flag KVM_ARM_VCPU_PTRAUTH.

The reset routines still set these registers to default values, but they
are left like that as they are only conditionally accessible (set/get).

Signed-off-by: Amit Daniel Kachhap 
Cc: Mark Rutland 
Cc: Marc Zyngier 
Cc: Christoffer Dall 
Cc: kvm...@lists.cs.columbia.edu
---
This patch depends on patch [1] by Dave Martin, which adds a feature for
managing register accessibility in a scalable way.

[1]: 
https://lore.kernel.org/linux-arm-kernel/1547757219-19439-13-git-send-email-dave.mar...@arm.com/
 

 Documentation/arm64/pointer-authentication.txt | 4 
 arch/arm64/kvm/sys_regs.c  | 7 ++-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/Documentation/arm64/pointer-authentication.txt 
b/Documentation/arm64/pointer-authentication.txt
index 0529a7d..996e435 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -87,3 +87,7 @@ created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting 
this feature
 to be enabled. Without this flag, pointer authentication is not enabled
 in KVM guests and attempted use of the feature will result in an UNDEFINED
 exception being injected into the guest.
+
+Additionally, when KVM_ARM_VCPU_PTRAUTH is not set then KVM will filter
+out the Pointer Authentication system key registers from KVM_GET/SET_REG_*
+ioctls.
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f7bcc60..c2f4974 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1005,8 +1005,13 @@ static bool trap_ptrauth(struct kvm_vcpu *vcpu,
return false;
 }
 
+static bool check_ptrauth(const struct kvm_vcpu *vcpu, const struct 
sys_reg_desc *rd)
+{
+   return kvm_arm_vcpu_ptrauth_allowed(vcpu);
+}
+
 #define __PTRAUTH_KEY(k)   \
-   { SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k }
+   { SYS_DESC(SYS_## k), trap_ptrauth, reset_unknown, k , .check_present = 
check_ptrauth}
 
 #define PTRAUTH_KEY(k) \
__PTRAUTH_KEY(k ## KEYLO_EL1),  \
-- 
2.7.4



[kvmtool PATCH v6 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication

2019-02-19 Thread Amit Daniel Kachhap
This is a runtime capability for the KVM tool to enable ARMv8.3 Pointer
Authentication in the guest kernel. The command line option --ptrauth is
required for this.

Signed-off-by: Amit Daniel Kachhap 
---
 arm/aarch32/include/kvm/kvm-cpu-arch.h| 1 +
 arm/aarch64/include/asm/kvm.h | 1 +
 arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h| 1 +
 arm/include/arm-common/kvm-config-arch.h  | 1 +
 arm/kvm-cpu.c | 6 ++
 include/linux/kvm.h   | 1 +
 7 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h 
b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index d28ea67..520ea76 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -13,4 +13,5 @@
 #define ARM_CPU_ID 0, 0, 0
 #define ARM_CPU_ID_MPIDR   5
 
+#define ARM_VCPU_PTRAUTH_FEATURE   0
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index 97c3478..1068fd1 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -102,6 +102,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2  2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3	3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_PTRAUTH   4 /* CPU uses pointer authentication */
 
 struct kvm_vcpu_init {
__u32 target;
diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h 
b/arm/aarch64/include/kvm/kvm-config-arch.h
index 04be43d..2074684 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -8,7 +8,9 @@
"Create PMUv3 device"), \
OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed, \
"Specify random seed for Kernel Address Space " \
-   "Layout Randomization (KASLR)"),
+   "Layout Randomization (KASLR)"),\
+   OPT_BOOLEAN('\0', "ptrauth", &(cfg)->has_ptrauth,   \
+   "Enable address authentication"),
 
 #include "arm-common/kvm-config-arch.h"
 
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h 
b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index a9d8563..496ece8 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -17,4 +17,5 @@
 #define ARM_CPU_CTRL   3, 0, 1, 0
 #define ARM_CPU_CTRL_SCTLR_EL1 0
 
+#define ARM_VCPU_PTRAUTH_FEATURE   (1UL << KVM_ARM_VCPU_PTRAUTH)
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/include/arm-common/kvm-config-arch.h 
b/arm/include/arm-common/kvm-config-arch.h
index 5734c46..5badcbd 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/kvm-config-arch.h
@@ -10,6 +10,7 @@ struct kvm_config_arch {
	bool	aarch32_guest;
	bool	has_pmuv3;
	u64	kaslr_seed;
+	bool	has_ptrauth;
	enum irqchip_type irqchip;
	u64	fw_addr;
 };
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 7780251..4ac80f8 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -68,6 +68,12 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
}
 
+   /* Set KVM_ARM_VCPU_PTRAUTH if available */
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH)) {
+   if (kvm->cfg.arch.has_ptrauth)
+   vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
+   }
+
/*
 * If the preferred target ioctl is successful then
 * use preferred target else try each and every target type
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index 6d4ea4b..a553477 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -988,6 +988,7 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_ARM_VM_IPA_SIZE 165
 #define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
 #define KVM_CAP_HYPERV_CPUID 167
+#define KVM_CAP_ARM_PTRAUTH 168
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.7.4



[PATCH v6 1/6] arm64/kvm: preserve host HCR_EL2 value

2019-02-19 Thread Amit Daniel Kachhap
From: Mark Rutland 

When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
is a constant value. This works today, as the host HCR_EL2 value is
always the same, but this will get in the way of supporting extensions
that require HCR_EL2 bits to be set conditionally for the host.

To allow such features to work without KVM having to explicitly handle
every possible host feature combination, this patch has KVM save/restore
for the host HCR when switching to/from a guest HCR. The register is
saved once during CPU hypervisor initialisation and simply restored
after each switch back from the guest.

For fetching HCR_EL2 during kvm initialisation, a hyp call is made using
kvm_call_hyp; this is helpful in the NHVE case.

For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
to toggle the TGE bit with a RMW sequence, as we already do in
__tlb_switch_to_guest_vhe().
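
A sketch of that RMW sequence, hedged (mirroring what
__tlb_switch_to_guest_vhe() already does for clearing TGE):

static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm)
{
	/*
	 * Leave the VMID behind and set TGE back without clobbering
	 * whatever other bits the saved host HCR_EL2 carries.
	 */
	write_sysreg(0, vttbr_el2);
	write_sysreg(read_sysreg(hcr_el2) | HCR_TGE, hcr_el2);
}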

The value of hcr_el2 is now stored in struct kvm_cpu_context as both host
and guest can now use this field in a common way.

Signed-off-by: Mark Rutland 
[Added __cpu_copy_hyp_conf, hcr_el2 field in struct kvm_cpu_context]
Signed-off-by: Amit Daniel Kachhap 
Cc: Marc Zyngier 
Cc: Christoffer Dall 
Cc: kvm...@lists.cs.columbia.edu
---
 arch/arm/include/asm/kvm_host.h  |  2 ++
 arch/arm64/include/asm/kvm_asm.h |  2 ++
 arch/arm64/include/asm/kvm_emulate.h | 22 +++---
 arch/arm64/include/asm/kvm_host.h| 13 -
 arch/arm64/include/asm/kvm_hyp.h |  2 +-
 arch/arm64/kvm/guest.c   |  2 +-
 arch/arm64/kvm/hyp/switch.c  | 23 +--
 arch/arm64/kvm/hyp/sysreg-sr.c   | 21 -
 arch/arm64/kvm/hyp/tlb.c |  6 +-
 virt/kvm/arm/arm.c   |  1 +
 10 files changed, 68 insertions(+), 26 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index ca56537..05706b4 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -273,6 +273,8 @@ static inline void __cpu_init_stage2(void)
kvm_call_hyp(__init_stage2_translation);
 }
 
+static inline void __cpu_copy_hyp_conf(void) {}
+
 static inline int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 {
return 0;
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index f5b79e9..8acd73f 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -80,6 +80,8 @@ extern void __vgic_v3_init_lrs(void);
 
 extern u32 __kvm_get_mdcr_el2(void);
 
+extern void __kvm_populate_host_regs(void);
+
 /* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
 #define __hyp_this_cpu_ptr(sym) \
({  \
diff --git a/arch/arm64/include/asm/kvm_emulate.h 
b/arch/arm64/include/asm/kvm_emulate.h
index 506386a..0dbe795 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -50,25 +50,25 @@ void kvm_inject_pabt32(struct kvm_vcpu *vcpu, unsigned long 
addr);
 
 static inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
 {
-   return !(vcpu->arch.hcr_el2 & HCR_RW);
+   return !(vcpu->arch.ctxt.hcr_el2 & HCR_RW);
 }
 
 static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 {
-   vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
+   vcpu->arch.ctxt.hcr_el2 = HCR_GUEST_FLAGS;
if (is_kernel_in_hyp_mode())
-   vcpu->arch.hcr_el2 |= HCR_E2H;
+   vcpu->arch.ctxt.hcr_el2 |= HCR_E2H;
if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN)) {
/* route synchronous external abort exceptions to EL2 */
-   vcpu->arch.hcr_el2 |= HCR_TEA;
+   vcpu->arch.ctxt.hcr_el2 |= HCR_TEA;
/* trap error record accesses */
-   vcpu->arch.hcr_el2 |= HCR_TERR;
+   vcpu->arch.ctxt.hcr_el2 |= HCR_TERR;
}
if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
-   vcpu->arch.hcr_el2 |= HCR_FWB;
+   vcpu->arch.ctxt.hcr_el2 |= HCR_FWB;
 
if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
-   vcpu->arch.hcr_el2 &= ~HCR_RW;
+   vcpu->arch.ctxt.hcr_el2 &= ~HCR_RW;
 
/*
 * TID3: trap feature register accesses that we virtualise.
@@ -76,22 +76,22 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 * are currently virtualised.
 */
if (!vcpu_el1_is_32bit(vcpu))
-   vcpu->arch.hcr_el2 |= HCR_TID3;
+   vcpu->arch.ctxt.hcr_el2 |= HCR_TID3;
 }
 
 static inline unsigned long *vcpu_hcr(struct kvm_vcpu *vcpu)
 {
-   return (unsigned long *)&vcpu->arch.hcr_el2;
+   return (unsigned long *)&vcpu->arch.ctxt.hcr_el2;
 }
 
 static inline void vcpu_clear_wfe_traps(struct kvm_vcpu *vcpu)
 {
-   vc

Re: [PATCH v5 5/5] arm64/kvm: control accessibility of ptrauth key registers

2019-02-14 Thread Amit Daniel Kachhap

Hi,

On 2/13/19 11:24 PM, Dave P Martin wrote:

On Wed, Feb 13, 2019 at 05:35:46PM +, Kristina Martsenko wrote:

On 28/01/2019 06:58, Amit Daniel Kachhap wrote:

Depending on the userspace settings, the ptrauth key registers are
conditionally present in the guest system register list, based on the
user-specified flag KVM_ARM_VCPU_PTRAUTH.

Signed-off-by: Amit Daniel Kachhap 
Cc: Mark Rutland 
Cc: Christoffer Dall 
Cc: Marc Zyngier 
Cc: Kristina Martsenko 
Cc: kvm...@lists.cs.columbia.edu
Cc: Ramana Radhakrishnan 
Cc: Will Deacon 
---
  Documentation/arm64/pointer-authentication.txt |  3 ++
  arch/arm64/kvm/sys_regs.c  | 42 +++---
  2 files changed, 34 insertions(+), 11 deletions(-)



[...]


diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c


[...]


@@ -2487,18 +2493,22 @@ static int walk_one_sys_reg(const struct sys_reg_desc 
*rd,
  }
  
  /* Assumed ordered tables, see kvm_sys_reg_table_init. */

-static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
+static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind,
+   const struct sys_reg_desc *desc, unsigned int len)
  {
const struct sys_reg_desc *i1, *i2, *end1, *end2;
unsigned int total = 0;
size_t num;
int err;
  
+	if (desc == ptrauth_reg_descs && !kvm_arm_vcpu_ptrauth_allowed(vcpu))
+		return total;
+
/* We check for duplicates here, to allow arch-specific overrides. */
	i1 = get_target_table(vcpu->arch.target, true, &num);
end1 = i1 + num;
-   i2 = sys_reg_descs;
-   end2 = sys_reg_descs + ARRAY_SIZE(sys_reg_descs);
+   i2 = desc;
+   end2 = desc + len;
  
  	BUG_ON(i1 == end1 || i2 == end2);
  
@@ -2526,7 +2536,10 @@ unsigned long kvm_arm_num_sys_reg_descs(struct kvm_vcpu *vcpu)

  {
return ARRAY_SIZE(invariant_sys_regs)
+ num_demux_regs()
-   + walk_sys_regs(vcpu, (u64 __user *)NULL);
+   + walk_sys_regs(vcpu, (u64 __user *)NULL, sys_reg_descs,
+   ARRAY_SIZE(sys_reg_descs))
+   + walk_sys_regs(vcpu, (u64 __user *)NULL, ptrauth_reg_descs,
+   ARRAY_SIZE(ptrauth_reg_descs));
  }
  
  int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)

@@ -2541,7 +2554,12 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, 
u64 __user *uindices)
uindices++;
}
  
-	err = walk_sys_regs(vcpu, uindices);

+   err = walk_sys_regs(vcpu, uindices, sys_reg_descs, 
ARRAY_SIZE(sys_reg_descs));
+   if (err < 0)
+   return err;
+   uindices += err;
+
+   err = walk_sys_regs(vcpu, uindices, ptrauth_reg_descs, 
ARRAY_SIZE(ptrauth_reg_descs));
if (err < 0)
return err;
uindices += err;
@@ -2575,6 +2593,7 @@ void kvm_sys_reg_table_init(void)
BUG_ON(check_sysreg_table(cp15_regs, ARRAY_SIZE(cp15_regs)));
BUG_ON(check_sysreg_table(cp15_64_regs, ARRAY_SIZE(cp15_64_regs)));
BUG_ON(check_sysreg_table(invariant_sys_regs, 
ARRAY_SIZE(invariant_sys_regs)));
+   BUG_ON(check_sysreg_table(ptrauth_reg_descs, 
ARRAY_SIZE(ptrauth_reg_descs)));
  
  	/* We abuse the reset function to overwrite the table itself. */

for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++)
@@ -2616,6 +2635,7 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
  
  	/* Generic chip reset first (so target could override). */

reset_sys_reg_descs(vcpu, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+   reset_sys_reg_descs(vcpu, ptrauth_reg_descs, 
ARRAY_SIZE(ptrauth_reg_descs));
  
	table = get_target_table(vcpu->arch.target, true, &num);

reset_sys_reg_descs(vcpu, table, num);


This isn't very scalable, since we'd need to duplicate all the above
code every time we add new system registers that are conditionally
accessible.


Agreed, putting feature-specific checks in walk_sys_regs() is probably
best avoided.  Over time we would likely accumulate a fair number of
these checks.


It seems that the SVE patches [1] solved this problem by adding a
check_present() callback into struct sys_reg_desc. It probably makes
sense to rebase onto that patch and just implement the callback for the
ptrauth key registers as well.

[1] 
https://lore.kernel.org/linux-arm-kernel/1547757219-19439-13-git-send-email-dave.mar...@arm.com/


Note, I'm currently refactoring this so that enumeration through
KVM_GET_REG_LIST can be disabled independently of access to the
register.  This may not be the best approach, but for SVE I have a
feature-specific ID register to handle too (ID_AA64ZFR0_EL1), which
needs to be hidden from the enumeration but still accessible with
read-as-zero behaviour.

This changes the API a bit: I move to a .restrictions() callback which
returns flags to say what is disabled, and this callback is used in the
common code so that you don't have to repeat your "fea

Re: [PATCH v5 4/5] arm64/kvm: add a userspace option to enable pointer authentication

2019-02-14 Thread Amit Daniel Kachhap

Hi,

On 2/13/19 11:05 PM, Kristina Martsenko wrote:

On 28/01/2019 06:58, Amit Daniel Kachhap wrote:

This feature allows the KVM guest to handle pointer authentication
instructions, or to treat them as undefined if the flag is not set. It
uses the existing vcpu API KVM_ARM_VCPU_INIT to supply this parameter
instead of creating a new API.

No new register is created to pass this parameter via the
SET/GET_ONE_REG interface, as a single flag (KVM_ARM_VCPU_PTRAUTH) is
enough to enable this feature.


[...]


diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index b200c14..b6950df 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -346,6 +346,10 @@ static inline int kvm_arm_have_ssbd(void)
  static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
  static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
  static inline void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu) {}
+static inline bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu)
+{
+   return false;
+}


It seems like this is only ever called from arm64 code, so do we need an
arch/arm/ definition?

Yes, not required. Nice catch.



+/**
+ * kvm_arm_vcpu_ptrauth_allowed - checks if ptrauth feature is present in vcpu
+ *
+ * @vcpu: The VCPU pointer
+ *
+ * This function will be used to enable/disable ptrauth in guest as configured
+ * by the KVM userspace API.
+ */
+bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu)
+{
+   return test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features);
+}


I'm not sure, but should there also be something like

if (test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features) &&
 !kvm_supports_ptrauth())
return -EINVAL;

in kvm_reset_vcpu?
Yes, makes sense. I missed it; with Dave Martin's patch this may be done
cleanly.
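
A sketch of where that check would land, purely illustrative:

int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
{
	/* ... */
	/* Reject the feature if this system cannot honour it. */
	if (test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features) &&
	    !kvm_supports_ptrauth())
		return -EINVAL;
	/* ... */
}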


Thanks,
Amit D



Thanks,
Kristina



Re: [PATCH v5 3/5] arm64/kvm: context-switch ptrauth register

2019-02-14 Thread Amit Daniel Kachhap

Hi,

On 2/13/19 11:05 PM, Kristina Martsenko wrote:

On 31/01/2019 16:25, James Morse wrote:

Hi Amit,

On 28/01/2019 06:58, Amit Daniel Kachhap wrote:

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.


[...]


+void __no_ptrauth __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
+ struct kvm_cpu_context *host_ctxt,
+ struct kvm_cpu_context *guest_ctxt)
+{
+   if (!__ptrauth_is_enabled(vcpu))
+   return;
+



+   ptrauth_keys_store((struct ptrauth_keys *) &host_ctxt->sys_regs[APIAKEYLO_EL1]);


We can't cast part of an array to a structure like this. What happens if the
compiler inserts padding in struct-ptrauth_keys, or the struct randomization
thing gets hold of it: https://lwn.net/Articles/722293/

If we want to use the helpers that take a struct-ptrauth_keys, we need to keep
the keys in a struct-ptrauth_keys. To do this we'd need to provide accessors so
that GET_ONE_REG() of APIAKEYLO_EL1 comes from the struct-ptrauth_keys, instead
of the sys_reg array.


If I've understood correctly, the idea is to have a struct ptrauth_keys
in struct kvm_vcpu_arch, instead of having the keys in the
kvm_cpu_context->sys_regs array. This is to avoid having similar code in
__ptrauth_key_install/ptrauth_keys_switch and
__ptrauth_restore_key/__ptrauth_restore_state, and so that future
patches (that add pointer auth in the kernel) would only need to update
one place instead of two.

Yes your observation is correct.
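
For illustration, a sketch of the layout being weighed up here
(hypothetical; the series ultimately kept the keys in sys_regs):

/* Keys as a first-class struct instead of a cast into sys_regs[]. */
struct kvm_vcpu_arch {
	struct kvm_cpu_context ctxt;
	struct ptrauth_keys ptrauth_keys;
	/* ... */
};

/* GET/SET_ONE_REG for APIAKEYLO_EL1 etc. would then need accessors
 * that redirect into vcpu->arch.ptrauth_keys rather than sys_regs[]. */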


But it also means we'll have to special case pointer auth in
kvm_arm_sys_reg_set_reg/kvm_arm_sys_reg_get_reg and kvm_vcpu_arch. Is it
worth it? I'd prefer to keep the slight code duplication but avoid the
special casing.
In my local implementation I did the above by separating the ptrauth
registers from the sys registers, but if I use the new way suggested by
Dave [1] then that is not possible, as the reg ID is used for matching
entries.


So I will stick to the single sys_reg list for the next iteration, using [1].

[1]: 
https://lore.kernel.org/linux-arm-kernel/1547757219-19439-11-git-send-email-dave.mar...@arm.com/





Wouldn't the host keys be available somewhere else? (they must get transferred
to secondary CPUs somehow). Can we skip the save step when switching from the host?



+   ptrauth_keys_switch((struct ptrauth_keys *) &guest_ctxt->sys_regs[APIAKEYLO_EL1]);
+}




[...]




diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 03b36f1..301d332 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -483,6 +483,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
sysreg_restore_guest_state_vhe(guest_ctxt);
__debug_switch_to_guest(vcpu);
  
+	__ptrauth_switch_to_guest(vcpu, host_ctxt, guest_ctxt);

+
__set_guest_arch_workaround_state(vcpu);
  
  	do {

@@ -494,6 +496,8 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
  
  	__set_host_arch_workaround_state(vcpu);
  
+	__ptrauth_switch_to_host(vcpu, host_ctxt, guest_ctxt);

+
sysreg_save_guest_state_vhe(guest_ctxt);
  
  	__deactivate_traps(vcpu);


...This makes me nervous...

__guest_enter() is a function that (might) change the keys, then we change them
again here. We can't have any signed return address between these two points. I
don't trust the compiler not to generate any.

~

I had a chat with some friendly compiler folk... because there are two identical
sequences in kvm_vcpu_run_vhe() and __kvm_vcpu_run_nvhe(), the compiler could
move the common code to a function it then calls. Apparently this is called
'function outlining'.

If the compiler does this, and the guest changes the keys, I think we would fail
the return address check.

Painting the whole thing with __no_ptrauth would solve this, but this code then
becomes a target.
Because the compiler can't anticipate the keys changing, we ought to treat them
the same way we do the callee saved registers, stack-pointer etc, and
save/restore them in the __guest_enter() assembly code.

(we can still keep the save/restore in C, but call it from assembly so we know
nothing new is going on the stack).


I agree that this should be called from assembly if we were building the
kernel with pointer auth. But as we are not doing that yet in this
series, can't we keep the calls in kvm_vcpu_run_vhe for now?
Well, if we keep them in kvm_vcpu_run_vhe then there is not much issue
in calling those C functions from the assembly guest_enter/guest_exit
either. It works fine in my local implementation. This will also avoid
churning the code again when kernel ptrauth support is added. The only
extra change required is to add the __no_ptrauth attribute to those
functions. I will document this properly in the function descriptions.


In general I would prefer if the keys were

Re: [PATCH v5 3/5] arm64/kvm: context-switch ptrauth registers

2019-02-14 Thread Amit Daniel Kachhap

Hi,

On 2/13/19 11:04 PM, Kristina Martsenko wrote:

Hi Amit,

(Please always Cc: everyone who commented on previous versions of the
series.)

On 28/01/2019 06:58, Amit Daniel Kachhap wrote:

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

Pointer authentication feature is only enabled when VHE is built
into the kernel and present into CPU implementation so only VHE code
paths are modified.

When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys, and enable the feature. The next time the
vcpu is scheduled out/in, we start again.

Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). When the guest is scheduled on a
physical CPU lacking the feature, these attempts will result in an UNDEF
being taken by the guest.


[...]


  /*
+ * Handle the guest trying to use a ptrauth instruction, or trying to access a
+ * ptrauth register.
+ */
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
+{
+   if (has_vhe() && kvm_supports_ptrauth())
+   kvm_arm_vcpu_ptrauth_enable(vcpu);
+   else
+   kvm_inject_undefined(vcpu);
+}
+
+/*
   * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
- * a NOP).
+ * a NOP), or guest EL1 access to a ptrauth register.


Doesn't guest EL1 access of ptrauth registers go through trap_ptrauth
instead?

Yes you are right.



   */
  static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu, struct kvm_run *run)
  {
-   /*
-* We don't currently support ptrauth in a guest, and we mask the ID
-* registers to prevent well-behaved guests from trying to make use of
-* it.
-*
-* Inject an UNDEF, as if the feature really isn't present.
-*/
-   kvm_inject_undefined(vcpu);
+   kvm_arm_vcpu_ptrauth_trap(vcpu);
return 1;
  }
  


[...]


+static __always_inline bool __hyp_text __ptrauth_is_enabled(struct kvm_vcpu 
*vcpu)
+{
+   return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
+   vcpu->arch.ctxt.hcr_el2 & (HCR_API | HCR_APK);
+}
+
+void __no_ptrauth __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
+ struct kvm_cpu_context *host_ctxt,
+ struct kvm_cpu_context *guest_ctxt)
+{
+   if (!__ptrauth_is_enabled(vcpu))
+   return;
+
+   ptrauth_keys_store((struct ptrauth_keys *) &host_ctxt->sys_regs[APIAKEYLO_EL1]);
+   ptrauth_keys_switch((struct ptrauth_keys *) &guest_ctxt->sys_regs[APIAKEYLO_EL1]);
+}
+
+void __no_ptrauth __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu *vcpu,


We don't call this code in the !VHE case anymore, so are the __hyp_text
annotations still needed?

Yes they can be removed.



+struct kvm_cpu_context *host_ctxt,
+struct kvm_cpu_context *guest_ctxt)
+{
+   if (!__ptrauth_is_enabled(vcpu))
+   return;
+
+   ptrauth_keys_store((struct ptrauth_keys *) &guest_ctxt->sys_regs[APIAKEYLO_EL1]);
+   ptrauth_keys_switch((struct ptrauth_keys *) &host_ctxt->sys_regs[APIAKEYLO_EL1]);
+}


[...]


@@ -1040,14 +1066,6 @@ static u64 read_id_reg(struct sys_reg_desc const *r, 
bool raz)
kvm_debug("SVE unsupported for guests, suppressing\n");
  
  		val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);

-   } else if (id == SYS_ID_AA64ISAR1_EL1) {
-   const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
-(0xfUL << ID_AA64ISAR1_API_SHIFT) |
-(0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
-(0xfUL << ID_AA64ISAR1_GPI_SHIFT);
-   if (val & ptrauth_mask)
-   kvm_debug("ptrauth unsupported for guests, 
suppressing\n");
-   val &= ~ptrauth_mask;


If all CPUs support address authenti

Re: [PATCH v5 2/5] arm64/kvm: preserve host HCR_EL2/MDCR_EL2 value

2019-02-14 Thread Amit Daniel Kachhap

Hi,

On 2/13/19 11:04 PM, Kristina Martsenko wrote:

On 28/01/2019 06:58, Amit Daniel Kachhap wrote:

When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
is a constant value. This works today, as the host HCR_EL2 value is
always the same, but this will get in the way of supporting extensions
that require HCR_EL2 bits to be set conditionally for the host.

To allow such features to work without KVM having to explicitly handle
every possible host feature combination, this patch has KVM save/restore
the host HCR when switching to/from a guest HCR. The host value of the
register is saved once during per-CPU hypervisor initialisation and is
simply restored after each switch back from the guest.


Why is this patch needed? I couldn't find anything in this series that
sets HCR_EL2 conditionally for the host. It seems like the kernel still
always sets it to HCR_HOST_VHE_FLAGS/HCR_HOST_NVHE_FLAGS.


This patch is not directly related to pointer authentication; it is just a 
helper to optimise save/restore. This way the save can be avoided on every 
switch and only the restore is done. Patch 3 does set HCR_EL2 in the VHE run path.


Looking back at v2 of the userspace pointer auth series, it seems that
the API/APK bits were set conditionally [1], so this patch would have
been needed to preserve HCR_EL2. But as of v3 of that series, the bits
have been set unconditionally through HCR_HOST_NVHE_FLAGS [2].

Is there something else I've missed?
Now HCR_EL2 is modified at switch time, and NVHE doesn't support 
ptrauth, so [2] doesn't make sense.


//Amit D


Thanks,
Kristina

[1] 
https://lore.kernel.org/linux-arm-kernel/20171127163806.31435-6-mark.rutl...@arm.com/
[2] 
https://lore.kernel.org/linux-arm-kernel/20180417183735.56985-5-mark.rutl...@arm.com/
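
To make the save/restore being discussed concrete, a hedged sketch of what
the world-switch side amounts to, assuming the host context is reachable
from the vcpu (field names are illustrative, not quoted from the patch):

	static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
	{
		struct kvm_cpu_context *host_ctxt = vcpu->arch.host_cpu_context;

		/* restore the saved host value instead of HCR_HOST_VHE_FLAGS */
		write_sysreg(host_ctxt->hcr_el2, hcr_el2);
		write_sysreg(host_ctxt->mdcr_el2, mdcr_el2);
	}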



Re: [PATCH v5 1/5] arm64: Add utilities to save restore pointer authentication keys

2019-02-14 Thread Amit Daniel Kachhap

Hi,

On 2/13/19 11:02 PM, Kristina Martsenko wrote:

On 31/01/2019 16:20, James Morse wrote:

Hi Amit,

On 28/01/2019 06:58, Amit Daniel Kachhap wrote:

The keys can be switched either inside an assembly or such
functions which do not have pointer authentication checks, so a GCC
attribute is added to enable it.

A function ptrauth_keys_store is added which is similar to existing
function ptrauth_keys_switch but saves the key values in memory.
This may be useful for save/restore scenarios when CPU changes
privilege levels, suspend/resume etc.




diff --git a/arch/arm64/include/asm/pointer_auth.h 
b/arch/arm64/include/asm/pointer_auth.h
index 15d4951..98441ce 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -11,6 +11,13 @@
  
  #ifdef CONFIG_ARM64_PTR_AUTH

  /*
+ * Compile the function without pointer authentication instructions. This
+ * allows pointer authentication to be enabled/disabled within the function
+ * (but leaves the function unprotected by pointer authentication).
+ */
+#define __no_ptrauth   __attribute__((target("sign-return-address=none")))


The documentation[0] for this says 'none' is the default. Will this only
take-effect once the kernel supports pointer-auth for the host? (Is this just
documentation until then?)


Yes, I don't think this should be in this series, since we're not
building the kernel with pointer auth yet.
I added it to stress that some functions should be pointer-authentication 
safe. Yes, this can be dropped and a small comment may do as well.


//Amit D
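
As a usage illustration (a hedged sketch, not from the posted patches), the
attribute matters for any C function that swaps keys while live on the call
stack:

	/* Compiled without paciasp/autiasp, so changing the keys here cannot
	 * break the authentication of this function's own return address. */
	void __no_ptrauth example_switch_keys(struct ptrauth_keys *next)
	{
		ptrauth_keys_switch(next);
	}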




('noptrauth' would fit with 'notrace' slightly better)


(But worse with e.g. __noreturn, __notrace_funcgraph, __init,
__always_inline, __exception. Not sure what the pattern is. Would
__noptrauth be better?)

Thanks,
Kristina



[0]
https://gcc.gnu.org/onlinedocs/gcc/AArch64-Function-Attributes.html#AArch64-Function-Attributes





Re: [PATCH v5 4/5] arm64/kvm: add a userspace option to enable pointer authentication

2019-02-14 Thread Amit Daniel Kachhap



Hi,
On 1/31/19 9:57 PM, James Morse wrote:

Hi Amit,

On 28/01/2019 06:58, Amit Daniel Kachhap wrote:

This feature will allow the KVM guest to allow the handling of
pointer authentication instructions or to treat them as undefined
if not set. It uses the existing vcpu API KVM_ARM_VCPU_INIT to
supply this parameter instead of creating a new API.

A new register is not created to pass this parameter via
SET/GET_ONE_REG interface as just a flag (KVM_ARM_VCPU_PTRAUTH)
supplied is enough to enable this feature.




diff --git a/Documentation/arm64/pointer-authentication.txt

b/Documentation/arm64/pointer-authentication.txt

index a25cd21..0529a7d 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -82,7 +82,8 @@ pointers).
  Virtualization
  --

-Pointer authentication is not currently supported in KVM guests. KVM
-will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
-the feature will result in an UNDEFINED exception being injected into
-the guest.
+Pointer authentication is enabled in KVM guest when virtual machine is
+created by passing a flag (KVM_ARM_VCPU_PTRAUTH)


Isn't that a VCPU flag? Shouldn't this be when each VCPU is created?

Yes, it is a VCPU flag.




requesting this feature
+to be enabled. Without this flag, pointer authentication is not enabled
+in KVM guests and attempted use of the feature will result in an UNDEFINED
+exception being injected into the guest.


... what happens if KVM's user-space enables ptrauth on some vcpus, but not on
others?
Yes, that seems to be an issue. Let me check whether there are other ways 
of passing the userspace parameter, such as a CREATE_VM-type ioctl.
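
For reference, a hedged sketch of how a VMM would request the feature
per-vcpu with the existing API (vm_fd/vcpu_fd, headers and error handling
are assumed/elided):

	static void enable_vcpu_ptrauth(int vm_fd, int vcpu_fd)
	{
		struct kvm_vcpu_init init;

		ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init);
		init.features[0] |= 1UL << KVM_ARM_VCPU_PTRAUTH;
		ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
	}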


You removed the id-register suppression in the previous patch, but it doesn't
get hooked up to kvm_arm_vcpu_ptrauth_allowed() here. (you could add
kvm_arm_vcpu_ptrauth_allowed() earlier, and default it to true to make it 
easier).

Doesn't this mean that if the CPU supports pointer auth, but user-space doesn't
specify this flag, the guest gets mysterious undef's whenever it tries to use
the advertised feature?

Agreed, the ID registers should be masked when userspace disables the feature.


(whether we support big/little virtual-machines is probably a separate issue,
but the id registers need to be consistent with our trap-and-undef behaviour)



diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index c798d0c..4a6ec40 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -453,14 +453,15 @@ static inline bool kvm_arch_requires_vhe(void)
  
  void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);

  void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu);
  
  static inline void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)

  {
/* Disable ptrauth and use it in a lazy context via traps */
-   if (has_vhe() && kvm_supports_ptrauth())
+   if (has_vhe() && kvm_supports_ptrauth()
+   && kvm_arm_vcpu_ptrauth_allowed(vcpu))
kvm_arm_vcpu_ptrauth_disable(vcpu);
  }
-
  void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
  



diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 5b980e7..c0e5dcd 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -179,7 +179,8 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run 
*run)
   */
  void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
  {
-   if (has_vhe() && kvm_supports_ptrauth())
+   if (has_vhe() && kvm_supports_ptrauth()
+   && kvm_arm_vcpu_ptrauth_allowed(vcpu))


Duplication. If has_vhe() moved into kvm_supports_ptrauth(), and
kvm_supports_ptrauth() was called from kvm_arm_vcpu_ptrauth_allowed() it would
be clearer that use of this feature was becoming user-controlled policy.

(We don't need to list the dependencies at every call site)

ok.




diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
index 0576c01..369624f 100644
--- a/arch/arm64/kvm/hyp/ptrauth-sr.c
+++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
@@ -42,3 +42,16 @@ void __no_ptrauth __hyp_text __ptrauth_switch_to_host(struct 
kvm_vcpu *vcpu,
	ptrauth_keys_store((struct ptrauth_keys *) &guest_ctxt->sys_regs[APIAKEYLO_EL1]);
	ptrauth_keys_switch((struct ptrauth_keys *) &host_ctxt->sys_regs[APIAKEYLO_EL1]);
  }
+
+/**
+ * kvm_arm_vcpu_ptrauth_allowed - checks if ptrauth feature is present in vcpu


('enabled by KVM's user-space' may be clearer. 'Present in vcpu' could be down
to a cpufeature thing)

ok.




+ *
+ * @vcpu: The VCPU pointer
+ *
+ * This function will be used to enable/disable ptrauth in guest as configured


... but it just tests the bit ...


+ * by the KVM userspace API.
+ */
+bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu)
+{
+	if (test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features))
+		return true;
+	else
+		return false;
+}

Re: [PATCH v5 3/5] arm64/kvm: context-switch ptrauth register

2019-02-14 Thread Amit Daniel Kachhap

Hi James,

On 1/31/19 9:55 PM, James Morse wrote:

Hi Amit,

On 28/01/2019 06:58, Amit Daniel Kachhap wrote:

When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

Pointer authentication feature is only enabled when VHE is built
into the kernel and present into CPU implementation so only VHE code


~s/into/in the/?


paths are modified.

When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys, and enable the feature. The next time the
vcpu is scheduled out/in, we start again.



Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.




Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). When the guest is scheduled on a
physical CPU lacking the feature, these attempts will result in an UNDEF
being taken by the guest.


This won't be fun. Can't KVM check that both are supported on all CPUs to avoid
this? ...
The above message is confusing, as both checks are actually present. I will 
update it.




diff --git a/arch/arm64/include/asm/cpufeature.h 
b/arch/arm64/include/asm/cpufeature.h
index dfcfba7..e1bf2a5 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -612,6 +612,11 @@ static inline bool system_supports_generic_auth(void)
 cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF));
  }
  
+static inline bool kvm_supports_ptrauth(void)

+{
+   return system_supports_address_auth() && system_supports_generic_auth();
+}


... oh you do check. Could you cover this in the commit message? (to avoid an
UNDEF being taken by the guest we ... )

cpufeature.h is a strange place to put this, there are no other kvm symbols in
there. But there are users of system_supports_foo() in kvm_host.h.

ok will check.




diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c b/arch/arm64/kvm/hyp/ptrauth-sr.c
new file mode 100644
index 000..0576c01
--- /dev/null
+++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
@@ -0,0 +1,44 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * arch/arm64/kvm/hyp/ptrauth-sr.c: Guest/host ptrauth save/restore
+ *
+ * Copyright 2018 Arm Limited
+ * Author: Mark Rutland 
+ *     Amit Daniel Kachhap 
+ */
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+static __always_inline bool __hyp_text __ptrauth_is_enabled(struct kvm_vcpu 
*vcpu)


Why __always_inline? Doesn't the compiler decide for 'static' symbols in C 
files?
This is to make the function pointer-authentication safe. Although it is 
placed before the key switch, so it may not be required.




+{
+   return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
+   vcpu->arch.ctxt.hcr_el2 & (HCR_API | HCR_APK);
+}
+
+void __no_ptrauth __hyp_text __ptrauth_switch_to_guest(struct kvm_vcpu *vcpu,
+ struct kvm_cpu_context *host_ctxt,
+ struct kvm_cpu_context *guest_ctxt)
+{
+   if (!__ptrauth_is_enabled(vcpu))
+   return;
+



+	ptrauth_keys_store((struct ptrauth_keys *) &host_ctxt->sys_regs[APIAKEYLO_EL1]);


We can't cast part of an array to a structure like this. What happens if the
compiler inserts padding in struct-ptrauth_keys, or the struct randomization
thing gets hold of it: https://lwn.net/Articles/722293/

Yes, this has an issue.


If we want to use the helpers that take a struct-ptrauth_keys, we need to keep
the keys in a struct-ptrauth_keys. To do this we'd need to provide accessors so
that GET_ONE_REG() of APIAKEYLO_EL1 comes from the struct-ptrauth_keys, instead
of the sys_reg array.

ok.



Wouldn't the host keys be available somewhere else? (they must get transfer to
secondary CPUs somehow). Can we skip the save step when switching from the host?
Yes, the host save can be done during vcpu_load and it works fine. However, it 
does not work during the hypervisor configuration stage (i.e. where HCR_EL2 is 
saved/restored now), as the keys are different.
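
A hedged sketch of the accessor approach suggested above (types and field
names are hypothetical): keep the keys in a real struct ptrauth_keys and
translate register IDs in the GET/SET_ONE_REG path instead of casting into
the sys_regs array:

	/* Hypothetical: the context carries a properly-typed key struct. */
	struct kvm_ptrauth_state {
		struct ptrauth_keys keys;	/* apia/apib/apda/apdb/apga pairs */
	};

	static u64 ptrauth_key_read(struct kvm_ptrauth_state *st, int reg)
	{
		switch (reg) {
		case APIAKEYLO_EL1:
			return st->keys.apia.lo;
		case APIAKEYHI_EL1:
			return st->keys.apia.hi;
		/* ... APIB/APDA/APDB/APGA handled the same way ... */
		default:
			return 0;
		}
	}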



+	ptrauth_keys_switch((struct ptrauth_keys *) &guest_ctxt->sys_regs[APIAKEYLO_EL1]);
+}



diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include

Re: [PATCH v5 2/5] arm64/kvm: preserve host HCR_EL2/MDCR_EL2 value

2019-02-14 Thread Amit Daniel Kachhap



Hi James,

Little late in replying as some issue in my mail settings.
On 1/31/19 9:52 PM, James Morse wrote:

Hi Amit,

On 28/01/2019 06:58, Amit Daniel Kachhap wrote:

When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
is a constant value. This works today, as the host HCR_EL2 value is
always the same, but this will get in the way of supporting extensions
that require HCR_EL2 bits to be set conditionally for the host.

To allow such features to work without KVM having to explicitly handle
every possible host feature combination, this patch has KVM save/restore
the host HCR when switching to/from a guest HCR. The host value of the
register is saved once during per-CPU hypervisor initialisation and is
simply restored after each switch back from the guest.

To fetch HCR_EL2 during KVM initialisation, a hyp call is made using
kvm_call_hyp; this is needed in the NVHE case.



For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
to toggle the TGE bit with a RMW sequence, as we already do in
__tlb_switch_to_guest_vhe().




While at it, the host MDCR_EL2 value is fetched in a similar way and restored
after every switch back from the guest. There should not be any functional
change due to this.


Could this step be done as a separate subsequent patch? It would make review
easier! The MDCR stuff would be a simplification if done second, done in one go
like this its pretty noisy.

Ok, agree.


There ought to be some justification for moving hcr/mdcr into the cpu_context in
the commit message.

Oh, I missed adding it to the commit message; I only added it to the cover letter.



If you're keeping Mark's 'Signed-off-by' its would be normal to keep Mark as the
author in git. This shows up a an extra 'From:' when you post the patch, and
gets picked up when the maintainer runs git-am.

This patch has changed substantially from Mark's version:
https://lkml.org/lkml/2017/11/27/675

If you keep the signed-off-by, could you add a [note] in the signed-off area
with a terse summary. Something like:

Signed-off-by: Mark Rutland 

[ Move hcr to cpu_context, added __cpu_copy_hyp_conf()]

Signed-off-by: Amit Daniel Kachhap 


(9c06602b1b92 is a good picked-at-random example for both of these)

Thanks for the information.




diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index f5b79e9..2da6e43 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -80,6 +80,8 @@ extern void __vgic_v3_init_lrs(void);
  
  extern u32 __kvm_get_mdcr_el2(void);
  
+extern u64 __kvm_get_hcr_el2(void);


Do we need these in separate helpers? For non-vhe this means two separate trips
to EL2. Something like kvm_populate_host_context(void), and an __ version for
the bit at EL2?

Yes, one wrapper for each of them will do.


We don't need to pass the host-context to EL2 as once kvm is loaded we can
access host per-cpu variables at EL2 using __hyp_this_cpu_read(). This will save
passing the vcpu around.
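
A hedged sketch of that suggestion (one trip to EL2, host context found via
the per-cpu variable rather than passed in):

	static void __hyp_text __kvm_populate_host_regs(void)
	{
		struct kvm_cpu_context *host_ctxt;

		host_ctxt = __hyp_this_cpu_ptr(kvm_host_cpu_state);
		host_ctxt->hcr_el2 = read_sysreg(hcr_el2);
		host_ctxt->mdcr_el2 = read_sysreg(mdcr_el2);
	}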



@@ -458,6 +457,25 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
  
  static inline void __cpu_init_stage2(void) {}
  
+/**

+ * __cpu_copy_hyp_conf - copy the boot hyp configuration registers
+ *
+ * It is called once per-cpu during CPU hyp initialisation.
+ */
+static inline void __cpu_copy_hyp_conf(void)
+{
+	kvm_cpu_context_t *host_cxt = this_cpu_ptr(&kvm_host_cpu_state);
+
+   host_cxt->hcr_el2 = kvm_call_hyp(__kvm_get_hcr_el2);
+
+   /*
+* Retrieve the initial value of mdcr_el2 so we can preserve
+* MDCR_EL2.HPMN which has presumably been set-up by some
+* knowledgeable bootcode.
+*/
+   host_cxt->mdcr_el2 = kvm_call_hyp(__kvm_get_mdcr_el2);
+}


Its strange to make this an inline in a header. kvm_arm_init_debug() is a
static-inline for arch/arm, but a regular C function for arch/arm64. Can't we do
the same?



diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index 68d6f7c..22c854a 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -316,3 +316,14 @@ void __hyp_text __kvm_enable_ssbs(void)
"msr   sctlr_el2, %0"
: "=" (tmp) : "L" (SCTLR_ELx_DSSBS));
  }
+
+/**
+ * __read_hyp_hcr_el2 - Returns hcr_el2 register value
+ *
+ * This function acts as a function handler parameter for kvm_call_hyp and
+ * may be called from EL1 exception level to fetch the register value.
+ */
+u64 __hyp_text __kvm_get_hcr_el2(void)
+{
+   return read_sysreg(hcr_el2);
+}


While I'm all in favour of kernel-doc comments for functions, it may be
over-kill in this case!



diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 9e350fd3..2d65ada 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -1327,10 +1327,10 @@ static void cpu_hyp_reinit(void)
else
cpu_init_hyp_mode(NULL);
  
-	kvm_arm_init_debug();

-
if (vgic_present)
kvm_vgic_init_cpu_hardware();
+
+	__cpu_copy_hyp_conf();

Re: [PATCH v4 4/6] arm64/kvm: enable pointer authentication cpufeature conditionally

2019-01-27 Thread Amit Daniel Kachhap
Hi James,
On Fri, Jan 4, 2019 at 11:32 PM James Morse  wrote:
>
> Hi Amit,
>
> On 18/12/2018 07:56, Amit Daniel Kachhap wrote:
> > According to userspace settings, pointer authentication cpufeature
> > is enabled/disabled from guests.
>
> This reads like the guest is changing something in the host. Isn't this hiding
> the id-register values from the guest?
I dropped this patch altogether in the v5 series; now only the key
registers are masked if userspace disables the feature.

Thanks,
Amit Daniel
>
>
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 6af6c7d..ce6144a 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -1066,6 +1066,15 @@ static u64 read_id_reg(struct sys_reg_desc const *r, 
> > bool raz)
> >   kvm_debug("SVE unsupported for guests, 
> > suppressing\n");
> >
> >   val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
> > + } else if (id == SYS_ID_AA64ISAR1_EL1) {
> > + const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
> > +  (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> > +  (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> > +  (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> > + if (!kvm_arm_vcpu_ptrauth_allowed(vcpu)) {
> > + kvm_debug("ptrauth unsupported for guests, 
> > suppressing\n");
> > + val &= ~ptrauth_mask;
> > + }
>
> I think this hunk should have been in the previous patch as otherwise its a
> bisection oddity.
>
> Could you merge this hunk with the previous patch, and move the mechanical 
> bits
> that pass vcpu around to a prior preparatory patch.
>
> (I'm still unsure if we need to hide this as a user-controlled policy)
>
>
> Thanks,
>
> James


[kvmtool PATCH v5 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication

2019-01-27 Thread Amit Daniel Kachhap
This is a runtime feature and can be enabled by the --ptrauth option.

Signed-off-by: Amit Daniel Kachhap 
Cc: Mark Rutland 
Cc: Christoffer Dall 
Cc: Marc Zyngier 
Cc: Kristina Martsenko 
Cc: kvm...@lists.cs.columbia.edu
Cc: Ramana Radhakrishnan 
Cc: Will Deacon 
---
 arm/aarch32/include/kvm/kvm-cpu-arch.h| 2 ++
 arm/aarch64/include/asm/kvm.h | 3 +++
 arm/aarch64/include/kvm/kvm-arch.h| 1 +
 arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h| 2 ++
 arm/aarch64/kvm-cpu.c | 5 +
 arm/include/arm-common/kvm-config-arch.h  | 1 +
 arm/kvm-cpu.c | 7 +++
 include/linux/kvm.h   | 1 +
 9 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h 
b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index d28ea67..5779767 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -13,4 +13,6 @@
 #define ARM_CPU_ID 0, 0, 0
 #define ARM_CPU_ID_MPIDR   5
 
+unsigned int kvm__cpu_ptrauth_get_feature(void) { return 0; }
+
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index c286035..0fd183d 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -98,6 +98,9 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_PSCI_0_2  2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3	3 /* Support guest PMUv3 */
 
+/* CPU uses address authentication and A key */
+#define KVM_ARM_VCPU_PTRAUTH   4
+
 struct kvm_vcpu_init {
__u32 target;
__u32 features[7];
diff --git a/arm/aarch64/include/kvm/kvm-arch.h 
b/arm/aarch64/include/kvm/kvm-arch.h
index 9de623a..bd566cb 100644
--- a/arm/aarch64/include/kvm/kvm-arch.h
+++ b/arm/aarch64/include/kvm/kvm-arch.h
@@ -11,4 +11,5 @@
 
 #include "arm-common/kvm-arch.h"
 
+
 #endif /* KVM__KVM_ARCH_H */
diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h 
b/arm/aarch64/include/kvm/kvm-config-arch.h
index 04be43d..2074684 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -8,7 +8,9 @@
"Create PMUv3 device"), \
OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed, \
"Specify random seed for Kernel Address Space " \
-   "Layout Randomization (KASLR)"),
+   "Layout Randomization (KASLR)"),\
+   OPT_BOOLEAN('\0', "ptrauth", &(cfg)->has_ptrauth,   \
+   "Enable address authentication"),
 
 #include "arm-common/kvm-config-arch.h"
 
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h 
b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index a9d8563..f7b64b7 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -17,4 +17,6 @@
 #define ARM_CPU_CTRL   3, 0, 1, 0
 #define ARM_CPU_CTRL_SCTLR_EL1 0
 
+unsigned int kvm__cpu_ptrauth_get_feature(void);
+
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/kvm-cpu.c b/arm/aarch64/kvm-cpu.c
index 1b29374..10da2cb 100644
--- a/arm/aarch64/kvm-cpu.c
+++ b/arm/aarch64/kvm-cpu.c
@@ -123,6 +123,11 @@ void kvm_cpu__reset_vcpu(struct kvm_cpu *vcpu)
return reset_vcpu_aarch64(vcpu);
 }
 
+unsigned int kvm__cpu_ptrauth_get_feature(void)
+{
+   return (1UL << KVM_ARM_VCPU_PTRAUTH);
+}
+
 int kvm_cpu__get_endianness(struct kvm_cpu *vcpu)
 {
struct kvm_one_reg reg;
diff --git a/arm/include/arm-common/kvm-config-arch.h 
b/arm/include/arm-common/kvm-config-arch.h
index 6a196f1..eb872db 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/kvm-config-arch.h
@@ -10,6 +10,7 @@ struct kvm_config_arch {
boolaarch32_guest;
boolhas_pmuv3;
u64 kaslr_seed;
+   boolhas_ptrauth;
enum irqchip_type irqchip;
 };
 
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 7780251..5afd727 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -68,6 +68,13 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
}
 
+	/* Set KVM_ARM_VCPU_PTRAUTH if available */
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH)) {
+   if (kvm->cfg.arch.has_ptrauth)
+   vcpu_init.features[0] |=
+   kvm__cpu_ptrauth_get_feature();
+   }
+
/*
 * If the preferred target ioctl is successful then
 * use preferred target else try each and every target type
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index f51d508..204315e 100644
--- a/include/linux/kvm.h

[PATCH v5 5/5] arm64/kvm: control accessibility of ptrauth key registers

2019-01-27 Thread Amit Daniel Kachhap
Based on the user-specified flag KVM_ARM_VCPU_PTRAUTH, the ptrauth key
registers are conditionally present in the guest system register list.

Signed-off-by: Amit Daniel Kachhap 
Cc: Mark Rutland 
Cc: Christoffer Dall 
Cc: Marc Zyngier 
Cc: Kristina Martsenko 
Cc: kvm...@lists.cs.columbia.edu
Cc: Ramana Radhakrishnan 
Cc: Will Deacon 
---
 Documentation/arm64/pointer-authentication.txt |  3 ++
 arch/arm64/kvm/sys_regs.c  | 42 +++---
 2 files changed, 34 insertions(+), 11 deletions(-)

diff --git a/Documentation/arm64/pointer-authentication.txt 
b/Documentation/arm64/pointer-authentication.txt
index 0529a7d..3be4ee1 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -87,3 +87,6 @@ created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting 
this feature
 to be enabled. Without this flag, pointer authentication is not enabled
 in KVM guests and attempted use of the feature will result in an UNDEFINED
 exception being injected into the guest.
+
+Additionally, when KVM_ARM_VCPU_PTRAUTH is not set then KVM will filter
+out the authentication key registers from userspace.
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2546a65..b46a78e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1334,12 +1334,6 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
 
-   PTRAUTH_KEY(APIA),
-   PTRAUTH_KEY(APIB),
-   PTRAUTH_KEY(APDA),
-   PTRAUTH_KEY(APDB),
-   PTRAUTH_KEY(APGA),
-
{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
@@ -1491,6 +1485,14 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x70 },
 };
 
+static const struct sys_reg_desc ptrauth_reg_descs[] = {
+   PTRAUTH_KEY(APIA),
+   PTRAUTH_KEY(APIB),
+   PTRAUTH_KEY(APDA),
+   PTRAUTH_KEY(APDB),
+   PTRAUTH_KEY(APGA),
+};
+
 static bool trap_dbgidr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
@@ -2093,6 +2095,8 @@ static int emulate_sys_reg(struct kvm_vcpu *vcpu,
r = find_reg(params, table, num);
if (!r)
r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+   if (!r && kvm_arm_vcpu_ptrauth_allowed(vcpu))
+   r = find_reg(params, ptrauth_reg_descs, 
ARRAY_SIZE(ptrauth_reg_descs));
 
if (likely(r)) {
perform_access(vcpu, params, r);
@@ -2206,6 +2210,8 @@ static const struct sys_reg_desc 
*index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
	r = find_reg_by_id(id, &params, table, num);
	if (!r)
		r = find_reg(&params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+	if (!r && kvm_arm_vcpu_ptrauth_allowed(vcpu))
+		r = find_reg(&params, ptrauth_reg_descs,
+			     ARRAY_SIZE(ptrauth_reg_descs));
 
/* Not saved in the sys_reg array and not otherwise accessible? */
if (r && !(r->reg || r->get_user))
@@ -2487,18 +2493,22 @@ static int walk_one_sys_reg(const struct sys_reg_desc 
*rd,
 }
 
 /* Assumed ordered tables, see kvm_sys_reg_table_init. */
-static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
+static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind,
+   const struct sys_reg_desc *desc, unsigned int len)
 {
const struct sys_reg_desc *i1, *i2, *end1, *end2;
unsigned int total = 0;
size_t num;
int err;
 
+   if (desc == ptrauth_reg_descs && !kvm_arm_vcpu_ptrauth_allowed(vcpu))
+   return total;
+
/* We check for duplicates here, to allow arch-specific overrides. */
i1 = get_target_table(vcpu->arch.target, true, );
end1 = i1 + num;
-   i2 = sys_reg_descs;
-   end2 = sys_reg_descs + ARRAY_SIZE(sys_reg_descs);
+   i2 = desc;
+   end2 = desc + len;
 
BUG_ON(i1 == end1 || i2 == end2);
 
@@ -2526,7 +2536,10 @@ unsigned long kvm_arm_num_sys_reg_descs(struct kvm_vcpu 
*vcpu)
 {
return ARRAY_SIZE(invariant_sys_regs)
+ num_demux_regs()
-   + walk_sys_regs(vcpu, (u64 __user *)NULL);
+   + walk_sys_regs(vcpu, (u64 __user *)NULL, sys_reg_descs,
+   ARRAY_SIZE(sys_reg_descs))
+   + walk_sys_regs(vcpu, (u64 __user *)NULL, ptrauth_reg_descs,
+   ARRAY_SIZE(ptrauth_reg_descs));
 }
 
 int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
@@ -2541,7 +2554,12 @@ int kvm

[PATCH v5 3/5] arm64/kvm: context-switch ptrauth registers

2019-01-27 Thread Amit Daniel Kachhap
When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

Pointer authentication feature is only enabled when VHE is built
into the kernel and present into CPU implementation so only VHE code
paths are modified.

When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys, and enable the feature. The next time the
vcpu is scheduled out/in, we start again.

Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). When the guest is scheduled on a
physical CPU lacking the feature, these attempts will result in an UNDEF
being taken by the guest.

Signed-off-by: Mark Rutland 
Signed-off-by: Amit Daniel Kachhap 
Cc: Marc Zyngier 
Cc: Christoffer Dall 
Cc: Kristina Martsenko 
Cc: kvm...@lists.cs.columbia.edu
Cc: Ramana Radhakrishnan 
Cc: Will Deacon 
---
 arch/arm/include/asm/kvm_host.h |  1 +
 arch/arm64/include/asm/cpufeature.h |  5 +
 arch/arm64/include/asm/kvm_host.h   | 24 
 arch/arm64/include/asm/kvm_hyp.h|  7 ++
 arch/arm64/kernel/traps.c   |  1 +
 arch/arm64/kvm/handle_exit.c| 23 +++
 arch/arm64/kvm/hyp/Makefile |  1 +
 arch/arm64/kvm/hyp/ptrauth-sr.c | 44 +
 arch/arm64/kvm/hyp/switch.c |  4 
 arch/arm64/kvm/sys_regs.c   | 40 ++---
 virt/kvm/arm/arm.c  |  2 ++
 11 files changed, 135 insertions(+), 17 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 704667e..b200c14 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -345,6 +345,7 @@ static inline int kvm_arm_have_ssbd(void)
 
 static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu) {}
 
 #define __KVM_HAVE_ARCH_VM_ALLOC
 struct kvm *kvm_arch_alloc_vm(void);
diff --git a/arch/arm64/include/asm/cpufeature.h 
b/arch/arm64/include/asm/cpufeature.h
index dfcfba7..e1bf2a5 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -612,6 +612,11 @@ static inline bool system_supports_generic_auth(void)
 cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF));
 }
 
+static inline bool kvm_supports_ptrauth(void)
+{
+   return system_supports_address_auth() && system_supports_generic_auth();
+}
+
 #define ARM64_SSBD_UNKNOWN -1
 #define ARM64_SSBD_FORCE_DISABLE   0
 #define ARM64_SSBD_KERNEL  1
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 1f2d237..c798d0c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -146,6 +146,18 @@ enum vcpu_sysreg {
PMSWINC_EL0,/* Software Increment Register */
PMUSERENR_EL0,  /* User Enable Register */
 
+   /* Pointer Authentication Registers */
+   APIAKEYLO_EL1,
+   APIAKEYHI_EL1,
+   APIBKEYLO_EL1,
+   APIBKEYHI_EL1,
+   APDAKEYLO_EL1,
+   APDAKEYHI_EL1,
+   APDBKEYLO_EL1,
+   APDBKEYHI_EL1,
+   APGAKEYLO_EL1,
+   APGAKEYHI_EL1,
+
/* 32bit specific registers. Keep them at the end of the range */
DACR32_EL2, /* Domain Access Control Register */
IFSR32_EL2, /* Instruction Fault Status Register */
@@ -439,6 +451,18 @@ static inline bool kvm_arch_requires_vhe(void)
return false;
 }
 
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+
+static inline void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)
+{
+   /* Disable ptrauth and use it in a lazy context via traps */
+   if (has_vhe() && kvm_supports_ptrauth())
+   kvm_arm_vcpu_ptrauth_disable(vcpu);
+}
+
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
+
 st

[PATCH v5 4/5] arm64/kvm: add a userspace option to enable pointer authentication

2019-01-27 Thread Amit Daniel Kachhap
This feature allows the KVM guest to handle pointer authentication
instructions, or to treat them as undefined if the flag is not set. It
uses the existing vcpu API KVM_ARM_VCPU_INIT to supply this parameter
instead of creating a new API.

A new register is not created to pass this parameter via the
SET/GET_ONE_REG interface, as supplying the flag (KVM_ARM_VCPU_PTRAUTH)
is enough to enable this feature.

Signed-off-by: Amit Daniel Kachhap 
Cc: Mark Rutland 
Cc: Marc Zyngier 
Cc: Christoffer Dall 
Cc: Kristina Martsenko 
Cc: kvm...@lists.cs.columbia.edu
Cc: Ramana Radhakrishnan 
Cc: Will Deacon 
---
 Documentation/arm64/pointer-authentication.txt |  9 +
 Documentation/virtual/kvm/api.txt  |  4 
 arch/arm/include/asm/kvm_host.h|  4 
 arch/arm64/include/asm/kvm_host.h  |  7 ---
 arch/arm64/include/uapi/asm/kvm.h  |  1 +
 arch/arm64/kvm/handle_exit.c   |  3 ++-
 arch/arm64/kvm/hyp/ptrauth-sr.c| 13 +
 arch/arm64/kvm/reset.c |  3 +++
 include/uapi/linux/kvm.h   |  1 +
 9 files changed, 37 insertions(+), 8 deletions(-)

diff --git a/Documentation/arm64/pointer-authentication.txt 
b/Documentation/arm64/pointer-authentication.txt
index a25cd21..0529a7d 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -82,7 +82,8 @@ pointers).
 Virtualization
 --
 
-Pointer authentication is not currently supported in KVM guests. KVM
-will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
-the feature will result in an UNDEFINED exception being injected into
-the guest.
+Pointer authentication is enabled in KVM guest when virtual machine is
+created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting this feature
+to be enabled. Without this flag, pointer authentication is not enabled
+in KVM guests and attempted use of the feature will result in an UNDEFINED
+exception being injected into the guest.
diff --git a/Documentation/virtual/kvm/api.txt 
b/Documentation/virtual/kvm/api.txt
index 356156f..1e646fb 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2642,6 +2642,10 @@ Possible features:
  Depends on KVM_CAP_ARM_PSCI_0_2.
- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
  Depends on KVM_CAP_ARM_PMU_V3.
+   - KVM_ARM_VCPU_PTRAUTH: Emulate Pointer authentication for the CPU.
+ Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
+ set, then the KVM guest allows the execution of pointer authentication
+ instructions or treats them as undefined if not set.
 
 
 4.83 KVM_ARM_PREFERRED_TARGET
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index b200c14..b6950df 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -346,6 +346,10 @@ static inline int kvm_arm_have_ssbd(void)
 static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu) {}
+static inline bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu)
+{
+   return false;
+}
 
 #define __KVM_HAVE_ARCH_VM_ALLOC
 struct kvm *kvm_arch_alloc_vm(void);
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index c798d0c..4a6ec40 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -43,7 +43,7 @@
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
-#define KVM_VCPU_MAX_FEATURES 4
+#define KVM_VCPU_MAX_FEATURES 5
 
 #define KVM_REQ_SLEEP \
KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
@@ -453,14 +453,15 @@ static inline bool kvm_arch_requires_vhe(void)
 
 void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
 void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu);
 
 static inline void kvm_arm_vcpu_ptrauth_reset(struct kvm_vcpu *vcpu)
 {
/* Disable ptrauth and use it in a lazy context via traps */
-   if (has_vhe() && kvm_supports_ptrauth())
+   if (has_vhe() && kvm_supports_ptrauth()
+   && kvm_arm_vcpu_ptrauth_allowed(vcpu))
kvm_arm_vcpu_ptrauth_disable(vcpu);
 }
-
 void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
 
 static inline void kvm_arch_hardware_unsetup(void) {}
diff --git a/arch/arm64/include/uapi/asm/kvm.h 
b/arch/arm64/include/uapi/asm/kvm.h
index 97c3478..5f82ca1 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -102,6 +102,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2  2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3	3 /* Support guest PMUv3 */

[PATCH v5 0/6] Add ARMv8.3 pointer authentication for kvm guest

2019-01-27 Thread Amit Daniel Kachhap
Hi,

This patch series adds pointer authentication support for KVM guest and
is based on top of Linux 5.0-rc3. The basic patches in this series was
originally posted by Mark Rutland earlier[1,2] and contains some history
of this work.

Extension Overview:
=

The ARMv8.3 pointer authentication extension adds functionality to detect
modification of pointer values, mitigating certain classes of attack such as
stack smashing, and making return oriented programming attacks harder.

The extension introduces the concept of a pointer authentication code (PAC),
which is stored in some upper bits of pointers. Each PAC is derived from the
original pointer, another 64-bit value (e.g. the stack pointer), and a secret
128-bit key.

New instructions are added which can be used to (a brief illustration
follows the list):

* Insert a PAC into a pointer
* Strip a PAC from a pointer
* Authenticate and strip a PAC from a pointer
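
A brief illustration of the A-key instructions (a sketch only; in practice
the compiler emits these in function prologues/epilogues, and they live in
the HINT space so they are NOPs on older cores):

	void example(void)
	{
		asm volatile(
			"paciasp\n\t"	/* insert a PAC into LR, keyed by APIAKey + SP */
			"autiasp"	/* authenticate LR and strip the PAC */
			::: "x30");
	}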

The detailed description of ARMv8.3 pointer authentication support in
userspace/kernel and can be found in Kristina's generic pointer authentication
patch series[3].

KVM guest work:
==

If pointer authentication is enabled for KVM guests, then the new PAC
instructions will not trap to EL2. If it is not enabled, they may be ignored
(if in the HINT region) or trapped to EL2 as illegal instructions. Since each
KVM guest vcpu runs as a host thread, it has keys initialized which are used
by PAC. When a world switch happens between host and guest, these keys are
exchanged.

There were some review comments by Christoffer Dall on the original series
[1,2,3], and this patch series tries to address them. The original series
enabled pointer authentication for both userspace and KVM guests. However,
it has since been split, and this series contains only the KVM guest support.

The current v5 patch series incorporates most of the suggestions by James
Morse. One of the suggestions was to save/restore the keys in asm during the
switch. However, this series re-uses the arm64 core function
__ptrauth_key_install, called from C functions marked with the __no_ptrauth
attribute.

Changes since v4 [6]: Several suggestions from James Morse
* Move host registers to be saved/restored inside struct kvm_cpu_context.
* Similar to hcr_el2, save/restore mdcr_el2 register also.
* Added save routines for ptrauth keys in generic arm core and
  use them during KVM context switch.
* Defined a GCC attribute __no_ptrauth which discards generating
  ptrauth instructions in a function. This is taken from Kristina's
  earlier kernel pointer authentication support patches [4].
* Dropped a patch to mask cpufeature when not enabled from userspace and
  now only key registers are masked from register list.

Changes since v3 [5]:
* Use pointer authentication only when VHE is present, as ARMv8.3 implies the
  ARMv8.1 features are present.
* Added lazy context handling of ptrauth instructions from V2 version again. 
* Added more details in Documentation.

Changes since v2 [1,2]:
* Allow host and guest to have different HCR_EL2 settings and not just constant
  value HCR_HOST_VHE_FLAGS or HCR_HOST_NVHE_FLAGS.
* Optimise the reading of HCR_EL2 in host/guest switch by fetching it once
  during KVM initialisation state and using it later.
* Context switch pointer authentication keys when switching between guest
  and host. Pointer authentication was enabled in a lazy context earlier[2] and
is removed now to keep it simple. However, it can be revisited later if there
is a significant performance issue.
* Added a userspace option to choose pointer authentication.
* Based on the userspace option, ptrauth cpufeature will be visible.
* Based on the userspace option, ptrauth key registers will be accessible.
* A small document is added on how to enable pointer authentication from
  userspace KVM API.

Looking for feedback and comments.

Thanks,
Amit

[1]: https://lore.kernel.org/lkml/20171127163806.31435-11-mark.rutl...@arm.com/
[2]: https://lore.kernel.org/lkml/20171127163806.31435-10-mark.rutl...@arm.com/
[3]: https://lkml.org/lkml/2018/12/7/666
[4]: 
https://lore.kernel.org/lkml/20181005084754.20950-1-kristina.martse...@arm.com/
[5]: https://lkml.org/lkml/2018/10/17/594
[6]: https://lkml.org/lkml/2018/12/18/80


Linux (5.0-rc3 based):

Amit Daniel Kachhap (5):
  arm64: Add utilities to save restore pointer authentication keys
  arm64/kvm: preserve host HCR_EL2/MDCR_EL2 value
  arm64/kvm: context-switch ptrauth registers
  arm64/kvm: add a userspace option to enable pointer authentication
  arm64/kvm: control accessibility of ptrauth key registers

 Documentation/arm64/pointer-authentication.txt | 12 +++--
 Documentation/virtual/kvm/api.txt  |  4 ++
 arch/arm/include/asm/kvm_host.h|  8 ++-
 arch/arm64/include/asm/cpufeature.h|  5 ++
 arch/arm64/include/asm/kvm_asm.h   |  2 +
 arch/arm64/include/asm/kvm_emulate.h   | 22 
 arch/arm64/include/asm/kvm_host.h  | 55

[PATCH v5 2/5] arm64/kvm: preserve host HCR_EL2/MDCR_EL2 value

2019-01-27 Thread Amit Daniel Kachhap
When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
is a constant value. This works today, as the host HCR_EL2 value is
always the same, but this will get in the way of supporting extensions
that require HCR_EL2 bits to be set conditionally for the host.

To allow such features to work without KVM having to explicitly handle
every possible host feature combination, this patch has KVM save/restore
the host HCR when switching to/from a guest HCR. The host value of the
register is saved once during per-CPU hypervisor initialisation and is
simply restored after each switch back from the guest.

To fetch HCR_EL2 during KVM initialisation, a hyp call is made using
kvm_call_hyp; this is needed in the NVHE case.

For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
to toggle the TGE bit with a RMW sequence, as we already do in
__tlb_switch_to_guest_vhe().

While at it, the host MDCR_EL2 value is fetched in a similar way and restored
after every switch back from the guest. There should not be any functional
change due to this.

Signed-off-by: Mark Rutland 
Signed-off-by: Amit Daniel Kachhap 
Cc: Marc Zyngier 
Cc: Christoffer Dall 
Cc: Kristina Martsenko 
Cc: kvm...@lists.cs.columbia.edu
Cc: Ramana Radhakrishnan 
Cc: Will Deacon 
---
 arch/arm/include/asm/kvm_host.h  |  3 ++-
 arch/arm64/include/asm/kvm_asm.h |  2 ++
 arch/arm64/include/asm/kvm_emulate.h | 22 ++--
 arch/arm64/include/asm/kvm_host.h| 28 -
 arch/arm64/include/asm/kvm_hyp.h |  2 +-
 arch/arm64/kvm/debug.c   | 28 ++---
 arch/arm64/kvm/guest.c   |  2 +-
 arch/arm64/kvm/hyp/switch.c  | 40 +++-
 arch/arm64/kvm/hyp/sysreg-sr.c   | 13 +++-
 arch/arm64/kvm/hyp/tlb.c |  6 +-
 virt/kvm/arm/arm.c   |  4 ++--
 11 files changed, 82 insertions(+), 68 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index ca56537..704667e 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -273,6 +273,8 @@ static inline void __cpu_init_stage2(void)
kvm_call_hyp(__init_stage2_translation);
 }
 
+static inline void __cpu_copy_hyp_conf(void) {}
+
 static inline int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 {
return 0;
@@ -292,7 +294,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu 
*vcpu) {}
 static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
 
-static inline void kvm_arm_init_debug(void) {}
 static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_clear_debug(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index f5b79e9..2da6e43 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -80,6 +80,8 @@ extern void __vgic_v3_init_lrs(void);
 
 extern u32 __kvm_get_mdcr_el2(void);
 
+extern u64 __kvm_get_hcr_el2(void);
+
 /* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
 #define __hyp_this_cpu_ptr(sym)
\
({  \
diff --git a/arch/arm64/include/asm/kvm_emulate.h 
b/arch/arm64/include/asm/kvm_emulate.h
index 506386a..0dbe795 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -50,25 +50,25 @@ void kvm_inject_pabt32(struct kvm_vcpu *vcpu, unsigned long 
addr);
 
 static inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu)
 {
-   return !(vcpu->arch.hcr_el2 & HCR_RW);
+   return !(vcpu->arch.ctxt.hcr_el2 & HCR_RW);
 }
 
 static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 {
-   vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
+   vcpu->arch.ctxt.hcr_el2 = HCR_GUEST_FLAGS;
if (is_kernel_in_hyp_mode())
-   vcpu->arch.hcr_el2 |= HCR_E2H;
+   vcpu->arch.ctxt.hcr_el2 |= HCR_E2H;
if (cpus_have_const_cap(ARM64_HAS_RAS_EXTN)) {
/* route synchronous external abort exceptions to EL2 */
-   vcpu->arch.hcr_el2 |= HCR_TEA;
+   vcpu->arch.ctxt.hcr_el2 |= HCR_TEA;
/* trap error record accesses */
-   vcpu->arch.hcr_el2 |= HCR_TERR;
+   vcpu->arch.ctxt.hcr_el2 |= HCR_TERR;
}
if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
-   vcpu->arch.hcr_el2 |= HCR_FWB;
+   vcpu->arch.ctxt.hcr_el2 |= HCR_FWB;
 
if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
-   vcpu->arch.hcr_el2 &= ~HCR_RW;
+   vcpu->arch.ctxt.hcr_el2 &= ~HCR_RW;
 
/*
	 * TID3: trap feature register accesses that we virtualise.

[PATCH v5 1/5] arm64: Add utilities to save restore pointer authentication keys

2019-01-27 Thread Amit Daniel Kachhap
The keys can be switched either inside an assembly or such
functions which do not have pointer authentication checks, so a GCC
attribute is added to enable it.

A function ptrauth_keys_store is added which is similar to existing
function ptrauth_keys_switch but saves the key values in memory.
This may be useful for save/restore scenarios when CPU changes
privilege levels, suspend/resume etc.

Signed-off-by: Amit Daniel Kachhap 
Cc: Mark Rutland 
Cc: Marc Zyngier 
Cc: Christoffer Dall 
Cc: Kristina Martsenko 
Cc: kvm...@lists.cs.columbia.edu
Cc: Ramana Radhakrishnan 
Cc: Will Deacon 
---
 arch/arm64/include/asm/pointer_auth.h | 28 
 1 file changed, 28 insertions(+)

diff --git a/arch/arm64/include/asm/pointer_auth.h 
b/arch/arm64/include/asm/pointer_auth.h
index 15d4951..98441ce 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -11,6 +11,13 @@
 
 #ifdef CONFIG_ARM64_PTR_AUTH
 /*
+ * Compile the function without pointer authentication instructions. This
+ * allows pointer authentication to be enabled/disabled within the function
+ * (but leaves the function unprotected by pointer authentication).
+ */
+#define __no_ptrauth   __attribute__((target("sign-return-address=none")))
+
+/*
  * Each key is a 128-bit quantity which is split across a pair of 64-bit
  * registers (Lo and Hi).
  */
@@ -50,6 +57,13 @@ do { 
\
write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1); \
 } while (0)
 
+#define __ptrauth_key_save(k, v)   \
+do {   \
+   struct ptrauth_key *__pkv = &(v);   \
+   __pkv->lo = read_sysreg_s(SYS_ ## k ## KEYLO_EL1); \
+   __pkv->hi = read_sysreg_s(SYS_ ## k ## KEYHI_EL1); \
+} while (0)
+
 static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
 {
if (system_supports_address_auth()) {
@@ -63,6 +77,19 @@ static inline void ptrauth_keys_switch(struct ptrauth_keys 
*keys)
__ptrauth_key_install(APGA, keys->apga);
 }
 
+static inline void ptrauth_keys_store(struct ptrauth_keys *keys)
+{
+   if (system_supports_address_auth()) {
+   __ptrauth_key_save(APIA, keys->apia);
+   __ptrauth_key_save(APIB, keys->apib);
+   __ptrauth_key_save(APDA, keys->apda);
+   __ptrauth_key_save(APDB, keys->apdb);
+   }
+
+   if (system_supports_generic_auth())
+   __ptrauth_key_save(APGA, keys->apga);
+}
+
 extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long 
arg);
 
 /*
@@ -88,6 +115,7 @@ do { 
\
ptrauth_keys_switch(&(tsk)->thread.keys_user)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
+#define __no_ptrauth
 #define ptrauth_prctl_reset_keys(tsk, arg) (-EINVAL)
 #define ptrauth_strip_insn_pac(lr) (lr)
 #define ptrauth_thread_init_user(tsk)
-- 
2.7.4
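
A usage sketch for the new helper (hypothetical caller, e.g. around a
suspend/resume-style privilege change):

	static struct ptrauth_keys saved_keys;

	void example_suspend(void)
	{
		ptrauth_keys_store(&saved_keys);	/* read keys out of the sysregs */
	}

	void example_resume(void)
	{
		ptrauth_keys_switch(&saved_keys);	/* program them back */
	}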



Re: [PATCH v4 4/6] arm64/kvm: enable pointer authentication cpufeature conditionally

2019-01-09 Thread Amit Daniel Kachhap
Hi,

On Fri, Jan 4, 2019 at 11:32 PM James Morse  wrote:
>
> Hi Amit,
>
> On 18/12/2018 07:56, Amit Daniel Kachhap wrote:
> > According to userspace settings, pointer authentication cpufeature
> > is enabled/disabled from guests.
>
> This reads like the guest is changing something in the host. Isn't this hiding
> the id-register values from the guest?
>
>
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 6af6c7d..ce6144a 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -1066,6 +1066,15 @@ static u64 read_id_reg(struct sys_reg_desc const *r, 
> > bool raz)
> >   kvm_debug("SVE unsupported for guests, 
> > suppressing\n");
> >
> >   val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
> > + } else if (id == SYS_ID_AA64ISAR1_EL1) {
> > + const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
> > +  (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> > +  (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> > +  (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> > + if (!kvm_arm_vcpu_ptrauth_allowed(vcpu)) {
> > + kvm_debug("ptrauth unsupported for guests, 
> > suppressing\n");
> > + val &= ~ptrauth_mask;
> > + }
>
> I think this hunk should have been in the previous patch as otherwise its a
> bisection oddity.
>
> Could you merge this hunk with the previous patch, and move the mechanical 
> bits
> that pass vcpu around to a prior preparatory patch.
Yes will do.
>
> (I'm still unsure if we need to hide this as a user-controlled policy)
>
>
> Thanks,
>
> James
//Amit


Re: [PATCH v4 3/6] arm64/kvm: add a userspace option to enable pointer authentication

2019-01-09 Thread Amit Daniel Kachhap
Hi,

On Sat, Jan 5, 2019 at 12:05 AM James Morse  wrote:
>
> Hi Amit,
>
> On 18/12/2018 07:56, Amit Daniel Kachhap wrote:
> > This feature will allow the KVM guest to allow the handling of
> > pointer authentication instructions or to treat them as undefined
> > if not set. It uses the existing vcpu API KVM_ARM_VCPU_INIT to
> > supply this parameter instead of creating a new API.
> >
> > A new register is not created to pass this parameter via
> > SET/GET_ONE_REG interface as just a flag (KVM_ARM_VCPU_PTRAUTH)
> > supplied is enough to select this feature.
>
> What is the motivation for doing this? Doesn't ptrauth 'just' need turning on?
> It doesn't need additional setup to be useable, or rely on some qemu support 
> to
> work properly. There isn't any hidden state that can't be migrated in the 
> usual way.
> Is it just because we don't want to commit to the ABI?
This allows migration of a guest to systems that do not support pointer
authentication, and hides the extra ptrauth registers.
Basically this suggestion was given by Christoffer
(https://lore.kernel.org/lkml/20180206123847.GY21802@cbox/).
I don't have a strong reservation about this option, and it can be
dropped if it doesn't make sense.
>
>
> > diff --git a/Documentation/arm64/pointer-authentication.txt 
> > b/Documentation/arm64/pointer-authentication.txt
> > index 5baca42..8c0f338 100644
> > --- a/Documentation/arm64/pointer-authentication.txt
> > +++ b/Documentation/arm64/pointer-authentication.txt
> > @@ -87,7 +87,8 @@ used to get and set the keys for a thread.
> >  Virtualization
> >  --
> >
> > -Pointer authentication is not currently supported in KVM guests. KVM
> > -will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
> > -the feature will result in an UNDEFINED exception being injected into
> > -the guest.
> > +Pointer authentication is enabled in KVM guest when virtual machine is
> > +created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting this feature
> > +to be enabled. Without this flag, pointer authentication is not enabled
> > +in KVM guests and attempted use of the feature will result in an UNDEFINED
> > +exception being injected into the guest.
>
> > diff --git a/Documentation/virtual/kvm/api.txt 
> > b/Documentation/virtual/kvm/api.txt
> > index cd209f7..e20583a 100644
> > --- a/Documentation/virtual/kvm/api.txt
> > +++ b/Documentation/virtual/kvm/api.txt
> > @@ -2634,6 +2634,10 @@ Possible features:
> > Depends on KVM_CAP_ARM_PSCI_0_2.
> >   - KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
> > Depends on KVM_CAP_ARM_PMU_V3.
> > + - KVM_ARM_VCPU_PTRAUTH: Emulate Pointer authentication for the CPU.
> > +   Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
> > +   set, then the KVM guest allows the execution of pointer 
> > authentication
> > +   instructions or treats them as undefined if not set.
>
> > diff --git a/arch/arm64/kvm/hyp/ptrauth-sr.c 
> > b/arch/arm64/kvm/hyp/ptrauth-sr.c
> > index 1bfaf74..03999bb 100644
> > --- a/arch/arm64/kvm/hyp/ptrauth-sr.c
> > +++ b/arch/arm64/kvm/hyp/ptrauth-sr.c
> > @@ -71,3 +71,19 @@ void __hyp_text __ptrauth_switch_to_host(struct kvm_vcpu 
> > *vcpu,
> >   __ptrauth_save_state(guest_ctxt);
> >   __ptrauth_restore_state(host_ctxt);
> >  }
> > +
> > +/**
> > + * kvm_arm_vcpu_ptrauth_allowed - checks if ptrauth feature is present in 
> > vcpu
> > + *
> > + * @vcpu: The VCPU pointer
> > + *
> > + * This function will be used to enable/disable ptrauth in guest as 
> > configured
> > + * by the KVM userspace API.
> > + */
> > +bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu)
> > +{
> > + if (test_bit(KVM_ARM_VCPU_PTRAUTH, vcpu->arch.features))
> > + return true;
> > + else
> > + return false;
> > +}
>
> Can't you return the result of test_bit() directly?
ok.
>
>
> Thanks,
>
> James

//Amit


Re: [PATCH v4 2/6] arm64/kvm: context-switch ptrauth registers

2019-01-09 Thread Amit Daniel Kachhap
Hi,

On Sat, Jan 5, 2019 at 12:05 AM James Morse  wrote:
>
> Hi Amit,
>
> On 18/12/2018 07:56, Amit Daniel Kachhap wrote:
> > When pointer authentication is supported, a guest may wish to use it.
> > This patch adds the necessary KVM infrastructure for this to work, with
> > a semi-lazy context switch of the pointer auth state.
> >
> > Pointer authentication feature is only enabled when VHE is built
> > into the kernel and present into CPU implementation so only VHE code
> > paths are modified.
> >
> > When we schedule a vcpu, we disable guest usage of pointer
> > authentication instructions and accesses to the keys. While these are
> > disabled, we avoid context-switching the keys. When we trap the guest
> > trying to use pointer authentication functionality, we change to eagerly
> > context-switching the keys, and enable the feature. The next time the
> > vcpu is scheduled out/in, we start again.
>
> Why start again?
> Taking the trap at all suggests the guest knows about pointer-auth, if it uses
> it, its probably in some context switch code. It would be good to avoid 
> trapping
> that.
Here is a pointer to the earlier discussion:
https://lore.kernel.org/lkml/20180409125818.GE10904@cbox/.
Some agreement was reached there to flip the ptrauth status on every
vcpu load/unload, as this caters to both ptrauth-enabled and
ptrauth-disabled applications; a minimal sketch of that approach follows.
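Assuming the HCR_EL2 API/APK trap controls gate guest ptrauth, the flip
could look like:

void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
{
	/* Clear API/APK so the first guest use of ptrauth traps to EL2 */
	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
}

void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
{
	/* Stop trapping; the keys are now eagerly context-switched */
	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
}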
>
>
> > Pointer authentication consists of address authentication and generic
> > authentication, and CPUs in a system might have varied support for
> > either. Where support for either feature is not uniform, it is hidden
> > from guests via ID register emulation, as a result of the cpufeature
> > framework in the host.
> >
> > Unfortunately, address authentication and generic authentication cannot
> > be trapped separately, as the architecture provides a single EL2 trap
> > covering both. If we wish to expose one without the other, we cannot
> > prevent a (badly-written) guest from intermittently using a feature
> > which is not uniformly supported (when scheduled on a physical CPU which
> > supports the relevant feature).
>
> Yuck!
:)
>
>
> > When the guest is scheduled on a
> > physical CPU lacking the feature, these attemts will result in an UNDEF
>
> (attempts)
ok.
>
> > being taken by the guest.
>
> Can we only expose the feature to a guest if both address and generic
> authentication are supported? (and hide it from the id register otherwise).
>
> This avoids having to know if this cpu supports address/generic when the 
> system
> doesn't. We would need to scrub the values to avoid guest-values being left
> loaded when something else is running.
Yes, it can be done. Currently it is done indirectly via the userspace
option. I agree with you on this; the ID register masking would look
like the sketch below.
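For illustration, the masking (as done in patch 4/6 of this series)
hides both address and generic authentication from the guest:

	/* in read_id_reg(), for SYS_ID_AA64ISAR1_EL1 */
	const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
				 (0xfUL << ID_AA64ISAR1_API_SHIFT) |
				 (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
				 (0xfUL << ID_AA64ISAR1_GPI_SHIFT);

	val &= ~ptrauth_mask;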
>
>
> > diff --git a/arch/arm64/include/asm/cpufeature.h 
> > b/arch/arm64/include/asm/cpufeature.h
> > index 1c8393f..ac7d496 100644
> > --- a/arch/arm64/include/asm/cpufeature.h
> > +++ b/arch/arm64/include/asm/cpufeature.h
> > @@ -526,6 +526,12 @@ static inline bool system_supports_generic_auth(void)
> >   cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
> >  }
> >
> > +static inline bool system_supports_ptrauth(void)
> > +{
> > + return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
> > + cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
> > +}
>
> (mainline has a system_supports_address_auth(), I guess this will be folded
> around a bit during the rebase).
>
> system_supports_ptrauth() that checks the two architected algorithm caps?
Yes, I will add that; a sketch of the combined check follows.
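Assuming the two architected-algorithm capabilities are named
ARM64_HAS_ADDRESS_AUTH_ARCH and ARM64_HAS_GENERIC_AUTH_ARCH, it could
read:

static inline bool system_supports_ptrauth(void)
{
	/* Expose ptrauth only if both address and generic auth are present */
	return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
		cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH_ARCH) &&
		cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH_ARCH);
}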
>
>
> > diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> > index ab35929..5c47a8f47 100644
> > --- a/arch/arm64/kvm/handle_exit.c
> > +++ b/arch/arm64/kvm/handle_exit.c
> > @@ -174,19 +174,25 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct 
> > kvm_run *run)
> >  }
> >
> >  /*
> > + * Handle the guest trying to use a ptrauth instruction, or trying to 
> > access a
> > + * ptrauth register. This trap should not occur as we enable ptrauth during
> > + * vcpu schedule itself but is anyway kept here for any unfortunate 
> > scenario.
>
> ... so we don't need this? Or if it ever runs its indicative of a bug?
Sorry, this comment is confusing; it is a leftover from the previous v3
version.
>
>
> > + */
> > +void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
> > +{
> > + if (system_supports_ptrauth())
> > + kvm_arm_vcpu_ptrauth_enable(vcpu);
> > + else
> > + kvm_inject_undefined(vcpu);
> > +}
> > +

Re: [PATCH v4 1/6] arm64/kvm: preserve host HCR_EL2 value

2019-01-07 Thread Amit Daniel Kachhap
Hi,

On Sat, Jan 5, 2019 at 12:05 AM James Morse  wrote:
>
> Hi Amit,
>
> On 18/12/2018 07:56, Amit Daniel Kachhap wrote:
> > When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
> > is a constant value. This works today, as the host HCR_EL2 value is
> > always the same, but this will get in the way of supporting extensions
> > that require HCR_EL2 bits to be set conditionally for the host.
> >
> > To allow such features to work without KVM having to explicitly handle
> > every possible host feature combination, this patch has KVM save/restore
> > the host HCR when switching to/from a guest HCR. The saving of the
> > register is done once during cpu hypervisor initialization state and is
> > just restored after switch from guest.
> >
> > For fetching HCR_EL2 during kvm initilisation, a hyp call is made using
>
> (initialisation)
>
>
> > kvm_call_hyp and is helpful in NHVE case.
> >
> > For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
> > to toggle the TGE bit with a RMW sequence, as we already do in
> > __tlb_switch_to_guest_vhe().
>
>
> > diff --git a/arch/arm64/include/asm/kvm_asm.h 
> > b/arch/arm64/include/asm/kvm_asm.h
> > index aea01a0..25ac9fa 100644
> > --- a/arch/arm64/include/asm/kvm_asm.h
> > +++ b/arch/arm64/include/asm/kvm_asm.h
> > @@ -73,6 +73,8 @@ extern void __vgic_v3_init_lrs(void);
> >
> >  extern u32 __kvm_get_mdcr_el2(void);
> >
> > +extern u64 __read_hyp_hcr_el2(void);
>
> How come this isn't __kvm_get_hcr_el2() like mdcr?
yes.
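The renamed accessor would then mirror __kvm_get_mdcr_el2(), e.g. (a
sketch):

u64 __hyp_text __kvm_get_hcr_el2(void)
{
	return read_sysreg(hcr_el2);
}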
>
>
> > diff --git a/arch/arm64/include/asm/kvm_host.h 
> > b/arch/arm64/include/asm/kvm_host.h
> > index 52fbc82..1b9eed9 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -196,13 +196,17 @@ enum vcpu_sysreg {
> >
> >  #define NR_COPRO_REGS(NR_SYS_REGS * 2)
> >
> > +struct kvm_cpu_init_host_regs {
> > + u64 hcr_el2;
> > +};
> > +
> >  struct kvm_cpu_context {
> >   struct kvm_regs gp_regs;
> >   union {
> >   u64 sys_regs[NR_SYS_REGS];
> >   u32 copro[NR_COPRO_REGS];
> >   };
> > -
> > + struct kvm_cpu_init_host_regs init_regs;
> >   struct kvm_vcpu *__hyp_running_vcpu;
> >  };
>
> Hmm, so we grow every vcpu's struct kvm_cpu_context with some host-only 
> registers...
>
>
> > @@ -211,7 +215,7 @@ typedef struct kvm_cpu_context kvm_cpu_context_t;
> >  struct kvm_vcpu_arch {
> >   struct kvm_cpu_context ctxt;
> >
> > - /* HYP configuration */
> > + /* Guest HYP configuration */
> >   u64 hcr_el2;
> >   u32 mdcr_el2;
>
> ... but they aren't actually host-only.
>
>
> I think it would be tidier to move these two into struct kvm_cpu_context (not 
> as
> some init_host state), as both host and vcpu's have these values.
> You could then add the mdcr_el2 stashing to your __cpu_copy_host_registers()
> too. This way they both work in the same way, otherwise one is per-cpu, the
> other is in a special bit of only the host's kvm_cpu_context.
>
Your suggestion looks doable. I will implement it in the next iteration,
perhaps along the lines sketched below.
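A rough sketch, with the field placement in struct kvm_cpu_context
assumed rather than final:

static inline void __cpu_copy_host_registers(void)
{
	kvm_cpu_context_t *host_ctxt = this_cpu_ptr(&kvm_host_cpu_state);

	host_ctxt->hcr_el2 = kvm_call_hyp(__kvm_get_hcr_el2);
	host_ctxt->mdcr_el2 = kvm_call_hyp(__kvm_get_mdcr_el2);
}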
>
> > diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> > index f6e02cc..85a2a5c 100644
> > --- a/arch/arm64/kvm/hyp/switch.c
> > +++ b/arch/arm64/kvm/hyp/switch.c
> > @@ -139,15 +139,15 @@ static void __hyp_text __activate_traps(struct 
> > kvm_vcpu *vcpu)
> >   __activate_traps_nvhe(vcpu);
> >  }
> >
> > -static void deactivate_traps_vhe(void)
> > +static void deactivate_traps_vhe(struct kvm_cpu_context *host_ctxt)
> >  {
> >   extern char vectors[];  /* kernel exception vectors */
> > - write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
> > + write_sysreg(host_ctxt->init_regs.hcr_el2, hcr_el2);
> >   write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1);
> >   write_sysreg(vectors, vbar_el1);
> >  }
> >
> > -static void __hyp_text __deactivate_traps_nvhe(void)
> > +static void __hyp_text __deactivate_traps_nvhe(struct kvm_cpu_context 
> > *host_ctxt)
> >  {
> >   u64 mdcr_el2 = read_sysreg(mdcr_el2);
> >
> > @@ -157,12 +157,15 @@ static void __hyp_text __deactivate_traps_nvhe(void)
> >   mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
> >
> >   write_sysreg(mdcr_el2, mdcr_el2);
>
> Strangely we try to rebuild the host's mdcr value here. If we had the host
> mdcr value in host_ctxt we could restore it directly.
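
That would reduce to a direct restore, e.g. (assuming mdcr_el2 is
stashed alongside hcr_el2 as suggested):

	/* in __deactivate_traps_nvhe() */
	write_sysreg(host_ctxt->mdcr_el2, mdcr_el2);
	write_sysreg(host_ctxt->hcr_el2, hcr_el2);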

Re: [PATCH v6] arm64: implement ftrace with regs

2019-01-06 Thread Amit Daniel Kachhap
On Fri, Jan 4, 2019 at 8:05 PM Torsten Duwe  wrote:
>
> Use -fpatchable-function-entry (gcc8) to add 2 NOPs at the beginning
> of each function. Replace the first NOP thus generated with a quick LR
> saver (move it to scratch reg x9), so the 2nd replacement insn, the call
> to ftrace, does not clobber the value. Ftrace will then generate the
> standard stack frames.
>
> Note that patchable-function-entry in GCC disables IPA-RA, which means
> ABI register calling conventions are obeyed *and* scratch registers
> such as x9 are available.
>
> Introduce and handle an ftrace_regs_trampoline for module PLTs, right
> after ftrace_trampoline, and double the size of this special section.
>
> Signed-off-by: Torsten Duwe 
>
> ---
>
> This patch applies on 4.20 with the additional changes
> bdb85cd1d20669dfae813555dddb745ad09323ba
> (arm64/module: switch to ADRP/ADD sequences for PLT entries)
> and
> 7dc48bf96aa0fc8aa5b38cc3e5c36ac03171e680
> (arm64: ftrace: always pass instrumented pc in x0)
> along with their respective series, or alternatively on Linus' master,
> which already has these.
>
> changes since v5:
>
> * fix mentioned pc in x0 to hold the start address of the call site,
>   not the return address or the branch address.
>   This resolves the problem found by Amit.

The function graph tracer display works fine with this version. From my side,
Tested-by: Amit Daniel Kachhap 

// Amit
>
> ---
>  arch/arm64/Kconfig|2
>  arch/arm64/Makefile   |4 +
>  arch/arm64/include/asm/assembler.h|1
>  arch/arm64/include/asm/ftrace.h   |   13 +++
>  arch/arm64/include/asm/module.h   |3
>  arch/arm64/kernel/Makefile|6 -
>  arch/arm64/kernel/entry-ftrace.S  |  131 
> ++
>  arch/arm64/kernel/ftrace.c|  125 
>  arch/arm64/kernel/module-plts.c   |3
>  arch/arm64/kernel/module.c|2
>  drivers/firmware/efi/libstub/Makefile |3
>  include/asm-generic/vmlinux.lds.h |1
>  include/linux/compiler_types.h|4 +
>  13 files changed, 262 insertions(+), 36 deletions(-)
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -131,6 +131,8 @@ config ARM64
> select HAVE_DEBUG_KMEMLEAK
> select HAVE_DMA_CONTIGUOUS
> select HAVE_DYNAMIC_FTRACE
> +   select HAVE_DYNAMIC_FTRACE_WITH_REGS \
> +   if $(cc-option,-fpatchable-function-entry=2)
> select HAVE_EFFICIENT_UNALIGNED_ACCESS
> select HAVE_FTRACE_MCOUNT_RECORD
> select HAVE_FUNCTION_TRACER
> --- a/arch/arm64/Makefile
> +++ b/arch/arm64/Makefile
> @@ -79,6 +79,10 @@ ifeq ($(CONFIG_ARM64_MODULE_PLTS),y)
>  KBUILD_LDFLAGS_MODULE  += -T $(srctree)/arch/arm64/kernel/module.lds
>  endif
>
> +ifeq ($(CONFIG_DYNAMIC_FTRACE_WITH_REGS),y)
> +  CC_FLAGS_FTRACE := -fpatchable-function-entry=2
> +endif
> +
>  # Default value
>  head-y := arch/arm64/kernel/head.o
>
> --- a/arch/arm64/include/asm/ftrace.h
> +++ b/arch/arm64/include/asm/ftrace.h
> @@ -17,6 +17,19 @@
>  #define MCOUNT_ADDR((unsigned long)_mcount)
>  #define MCOUNT_INSN_SIZE   AARCH64_INSN_SIZE
>
> +/*
> + * DYNAMIC_FTRACE_WITH_REGS is implemented by adding 2 NOPs at the beginning
> + * of each function, with the second NOP actually calling ftrace. In contrary
> + * to a classic _mcount call, the call instruction to be modified is thus
> + * the second one, and not the only one.
> + */
> +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
> +#define ARCH_SUPPORTS_FTRACE_OPS 1
> +#define REC_IP_BRANCH_OFFSET AARCH64_INSN_SIZE
> +#else
> +#define REC_IP_BRANCH_OFFSET 0
> +#endif
> +
>  #ifndef __ASSEMBLY__
>  #include 
>
> --- a/arch/arm64/kernel/Makefile
> +++ b/arch/arm64/kernel/Makefile
> @@ -7,9 +7,9 @@ CPPFLAGS_vmlinux.lds:= -DTEXT_OFFSET=$(
>  AFLAGS_head.o  := -DTEXT_OFFSET=$(TEXT_OFFSET)
>  CFLAGS_armv8_deprecated.o := -I$(src)
>
> -CFLAGS_REMOVE_ftrace.o = -pg
> -CFLAGS_REMOVE_insn.o = -pg
> -CFLAGS_REMOVE_return_address.o = -pg
> +CFLAGS_REMOVE_ftrace.o = -pg $(CC_FLAGS_FTRACE)
> +CFLAGS_REMOVE_insn.o = -pg $(CC_FLAGS_FTRACE)
> +CFLAGS_REMOVE_return_address.o = -pg $(CC_FLAGS_FTRACE)
>
>  # Object file lists.
>  arm64-obj-y:= debug-monitors.o entry.o irq.o fpsimd.o
>   \
> --- a/drivers/firmware/efi/libstub/Makefile
> +++ b/drivers/firmware/efi/libstub/Makefile
> @@ -13,7 +13,8 @@ cflags-$(CONFIG_X86)  += -m$(BITS) -D__K
>
>  # arm64 uses the full KBUILD_CFLAGS so it's necessary to explicitly
>  # disable the stackleak plugin
> -cflags-$(CONFIG_ARM64) := $(subs

[PATCH v4 3/6] arm64/kvm: add a userspace option to enable pointer authentication

2018-12-17 Thread Amit Daniel Kachhap
This feature allows the KVM guest to handle pointer authentication
instructions, or treats them as undefined if the flag is not set. It
uses the existing vcpu API KVM_ARM_VCPU_INIT to supply this parameter
instead of creating a new API.

No new register is created to pass this parameter via the
SET/GET_ONE_REG interface, as supplying just a flag (KVM_ARM_VCPU_PTRAUTH)
is enough to select this feature.

Signed-off-by: Amit Daniel Kachhap 
Cc: Mark Rutland 
Cc: Marc Zyngier 
Cc: Christoffer Dall 
Cc: kvm...@lists.cs.columbia.edu
---
 Documentation/arm64/pointer-authentication.txt |  9 +
 Documentation/virtual/kvm/api.txt  |  4 
 arch/arm/include/asm/kvm_host.h|  4 
 arch/arm64/include/asm/kvm_host.h  |  7 ---
 arch/arm64/include/uapi/asm/kvm.h  |  1 +
 arch/arm64/kvm/handle_exit.c   |  2 +-
 arch/arm64/kvm/hyp/ptrauth-sr.c| 16 
 arch/arm64/kvm/reset.c |  3 +++
 include/uapi/linux/kvm.h   |  1 +
 9 files changed, 39 insertions(+), 8 deletions(-)

diff --git a/Documentation/arm64/pointer-authentication.txt 
b/Documentation/arm64/pointer-authentication.txt
index 5baca42..8c0f338 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -87,7 +87,8 @@ used to get and set the keys for a thread.
 Virtualization
 --
 
-Pointer authentication is not currently supported in KVM guests. KVM
-will mask the feature bits from ID_AA64ISAR1_EL1, and attempted use of
-the feature will result in an UNDEFINED exception being injected into
-the guest.
+Pointer authentication is enabled in KVM guest when virtual machine is
+created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting this feature
+to be enabled. Without this flag, pointer authentication is not enabled
+in KVM guests and attempted use of the feature will result in an UNDEFINED
+exception being injected into the guest.
diff --git a/Documentation/virtual/kvm/api.txt 
b/Documentation/virtual/kvm/api.txt
index cd209f7..e20583a 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2634,6 +2634,10 @@ Possible features:
  Depends on KVM_CAP_ARM_PSCI_0_2.
- KVM_ARM_VCPU_PMU_V3: Emulate PMUv3 for the CPU.
  Depends on KVM_CAP_ARM_PMU_V3.
+   - KVM_ARM_VCPU_PTRAUTH: Emulate Pointer authentication for the CPU.
+ Depends on KVM_CAP_ARM_PTRAUTH and only on arm64 architecture. If
+ set, then the KVM guest allows the execution of pointer authentication
+ instructions or treats them as undefined if not set.
 
 
 4.83 KVM_ARM_PREFERRED_TARGET
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 02d9bfc..62a85d9 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -352,6 +352,10 @@ static inline int kvm_arm_have_ssbd(void)
 static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arm_vcpu_ptrauth_config(struct kvm_vcpu *vcpu) {}
+static inline bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu)
+{
+   return false;
+}
 
 #define __KVM_HAVE_ARCH_VM_ALLOC
 struct kvm *kvm_arch_alloc_vm(void);
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 629712d..f853a95 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -43,7 +43,7 @@
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
-#define KVM_VCPU_MAX_FEATURES 4
+#define KVM_VCPU_MAX_FEATURES 5
 
 #define KVM_REQ_SLEEP \
KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
@@ -453,14 +453,15 @@ static inline bool kvm_arch_check_sve_has_vhe(void)
 
 void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
 void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+bool kvm_arm_vcpu_ptrauth_allowed(struct kvm_vcpu *vcpu);
 
 static inline void kvm_arm_vcpu_ptrauth_config(struct kvm_vcpu *vcpu)
 {
/* Disable ptrauth and use it in a lazy context via traps */
-   if (has_vhe() && system_supports_ptrauth())
+   if (has_vhe() && system_supports_ptrauth()
+   && kvm_arm_vcpu_ptrauth_allowed(vcpu))
kvm_arm_vcpu_ptrauth_disable(vcpu);
 }
-
 void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
 
 static inline void kvm_arch_hardware_unsetup(void) {}
diff --git a/arch/arm64/include/uapi/asm/kvm.h 
b/arch/arm64/include/uapi/asm/kvm.h
index 97c3478..5f82ca1 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -102,6 +102,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2  2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V33 /* Support guest PMUv3 */
+#def

[PATCH v4 1/6] arm64/kvm: preserve host HCR_EL2 value

2018-12-17 Thread Amit Daniel Kachhap
When restoring HCR_EL2 for the host, KVM uses HCR_HOST_VHE_FLAGS, which
is a constant value. This works today, as the host HCR_EL2 value is
always the same, but this will get in the way of supporting extensions
that require HCR_EL2 bits to be set conditionally for the host.

To allow such features to work without KVM having to explicitly handle
every possible host feature combination, this patch has KVM save/restore
the host HCR when switching to/from a guest HCR. The saving of the
register is done once during cpu hypervisor initialization state and is
just restored after switch from guest.

For fetching HCR_EL2 during kvm initilisation, a hyp call is made using
kvm_call_hyp and is helpful in NHVE case.

For the hyp TLB maintenance code, __tlb_switch_to_host_vhe() is updated
to toggle the TGE bit with a RMW sequence, as we already do in
__tlb_switch_to_guest_vhe().

Signed-off-by: Mark Rutland 
Signed-off-by: Amit Daniel Kachhap 
Cc: Marc Zyngier 
Cc: Christoffer Dall 
Cc: kvm...@lists.cs.columbia.edu
---
 arch/arm/include/asm/kvm_host.h   |  2 ++
 arch/arm64/include/asm/kvm_asm.h  |  2 ++
 arch/arm64/include/asm/kvm_host.h | 14 --
 arch/arm64/kvm/hyp/switch.c   | 15 +--
 arch/arm64/kvm/hyp/sysreg-sr.c| 11 +++
 arch/arm64/kvm/hyp/tlb.c  |  6 +-
 virt/kvm/arm/arm.c|  2 ++
 7 files changed, 43 insertions(+), 9 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 5ca5d9a..0f012c8 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -273,6 +273,8 @@ static inline void __cpu_init_stage2(void)
kvm_call_hyp(__init_stage2_translation);
 }
 
+static inline void __cpu_copy_host_registers(void) {}
+
 static inline int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 {
return 0;
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index aea01a0..25ac9fa 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -73,6 +73,8 @@ extern void __vgic_v3_init_lrs(void);
 
 extern u32 __kvm_get_mdcr_el2(void);
 
+extern u64 __read_hyp_hcr_el2(void);
+
 /* Home-grown __this_cpu_{ptr,read} variants that always work at HYP */
 #define __hyp_this_cpu_ptr(sym)
\
({  \
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 52fbc82..1b9eed9 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -196,13 +196,17 @@ enum vcpu_sysreg {
 
 #define NR_COPRO_REGS  (NR_SYS_REGS * 2)
 
+struct kvm_cpu_init_host_regs {
+   u64 hcr_el2;
+};
+
 struct kvm_cpu_context {
struct kvm_regs gp_regs;
union {
u64 sys_regs[NR_SYS_REGS];
u32 copro[NR_COPRO_REGS];
};
-
+   struct kvm_cpu_init_host_regs init_regs;
struct kvm_vcpu *__hyp_running_vcpu;
 };
 
@@ -211,7 +215,7 @@ typedef struct kvm_cpu_context kvm_cpu_context_t;
 struct kvm_vcpu_arch {
struct kvm_cpu_context ctxt;
 
-   /* HYP configuration */
+   /* Guest HYP configuration */
u64 hcr_el2;
u32 mdcr_el2;
 
@@ -455,6 +459,12 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
 
 static inline void __cpu_init_stage2(void) {}
 
+static inline void __cpu_copy_host_registers(void)
+{
   kvm_cpu_context_t *host_cxt = this_cpu_ptr(&kvm_host_cpu_state);
+   host_cxt->init_regs.hcr_el2 = kvm_call_hyp(__read_hyp_hcr_el2);
+}
+
 /* Guest/host FPSIMD coordination helpers */
 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
 void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index f6e02cc..85a2a5c 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -139,15 +139,15 @@ static void __hyp_text __activate_traps(struct kvm_vcpu 
*vcpu)
__activate_traps_nvhe(vcpu);
 }
 
-static void deactivate_traps_vhe(void)
+static void deactivate_traps_vhe(struct kvm_cpu_context *host_ctxt)
 {
extern char vectors[];  /* kernel exception vectors */
-   write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
+   write_sysreg(host_ctxt->init_regs.hcr_el2, hcr_el2);
write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1);
write_sysreg(vectors, vbar_el1);
 }
 
-static void __hyp_text __deactivate_traps_nvhe(void)
+static void __hyp_text __deactivate_traps_nvhe(struct kvm_cpu_context 
*host_ctxt)
 {
u64 mdcr_el2 = read_sysreg(mdcr_el2);
 
@@ -157,12 +157,15 @@ static void __hyp_text __deactivate_traps_nvhe(void)
mdcr_el2 |= MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT;
 
write_sysreg(mdcr_el2, mdcr_el2);
-   write_sysreg(HCR_HOST_NVHE_FLAGS, hcr_el2);
+   write_sysreg(host_ctxt->init_regs.hcr_el2, hcr_el2);
write_sysreg(CPTR_EL2_DE

[PATCH v4 4/6] arm64/kvm: enable pointer authentication cpufeature conditionally

2018-12-17 Thread Amit Daniel Kachhap
Based on the userspace setting, the pointer authentication cpufeature
is enabled or disabled for guests.

Signed-off-by: Amit Daniel Kachhap 
Cc: Mark Rutland 
Cc: Christoffer Dall 
Cc: Marc Zyngier 
Cc: kvm...@lists.cs.columbia.edu
---
 Documentation/arm64/pointer-authentication.txt |  3 +++
 arch/arm64/kvm/sys_regs.c  | 33 --
 2 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/Documentation/arm64/pointer-authentication.txt 
b/Documentation/arm64/pointer-authentication.txt
index 8c0f338..a65dca2 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -92,3 +92,6 @@ created by passing a flag (KVM_ARM_VCPU_PTRAUTH) requesting 
this feature
 to be enabled. Without this flag, pointer authentication is not enabled
 in KVM guests and attempted use of the feature will result in an UNDEFINED
 exception being injected into the guest.
+
+Additionally, when KVM_ARM_VCPU_PTRAUTH is not set then KVM will mask the
+feature bits from ID_AA64ISAR1_EL1.
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 6af6c7d..ce6144a 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1055,7 +1055,7 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
 }
 
 /* Read a sanitised cpufeature ID register by sys_reg_desc */
-static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
+static u64 read_id_reg(struct kvm_vcpu *vcpu, struct sys_reg_desc const *r, 
bool raz)
 {
u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
 (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
@@ -1066,6 +1066,15 @@ static u64 read_id_reg(struct sys_reg_desc const *r, 
bool raz)
kvm_debug("SVE unsupported for guests, suppressing\n");
 
val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
+   } else if (id == SYS_ID_AA64ISAR1_EL1) {
+   const u64 ptrauth_mask = (0xfUL << ID_AA64ISAR1_APA_SHIFT) |
+(0xfUL << ID_AA64ISAR1_API_SHIFT) |
+(0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
+(0xfUL << ID_AA64ISAR1_GPI_SHIFT);
+   if (!kvm_arm_vcpu_ptrauth_allowed(vcpu)) {
+   kvm_debug("ptrauth unsupported for guests, 
suppressing\n");
+   val &= ~ptrauth_mask;
+   }
} else if (id == SYS_ID_AA64MMFR1_EL1) {
if (val & (0xfUL << ID_AA64MMFR1_LOR_SHIFT))
kvm_debug("LORegions unsupported for guests, 
suppressing\n");
@@ -1086,7 +1095,7 @@ static bool __access_id_reg(struct kvm_vcpu *vcpu,
if (p->is_write)
return write_to_read_only(vcpu, p, r);
 
-   p->regval = read_id_reg(r, raz);
+   p->regval = read_id_reg(vcpu, r, raz);
return true;
 }
 
@@ -1115,17 +1124,17 @@ static u64 sys_reg_to_index(const struct sys_reg_desc 
*reg);
  * are stored, and for set_id_reg() we don't allow the effective value
  * to be changed.
  */
-static int __get_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
-   bool raz)
+static int __get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+   void __user *uaddr, bool raz)
 {
const u64 id = sys_reg_to_index(rd);
-   const u64 val = read_id_reg(rd, raz);
+   const u64 val = read_id_reg(vcpu, rd, raz);
 
	return reg_to_user(uaddr, &val, id);
 }
 
-static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
-   bool raz)
+static int __set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+   void __user *uaddr, bool raz)
 {
const u64 id = sys_reg_to_index(rd);
int err;
@@ -1136,7 +1145,7 @@ static int __set_id_reg(const struct sys_reg_desc *rd, 
void __user *uaddr,
return err;
 
/* This is what we mean by invariant: you can't change it. */
-   if (val != read_id_reg(rd, raz))
+   if (val != read_id_reg(vcpu, rd, raz))
return -EINVAL;
 
return 0;
@@ -1145,25 +1154,25 @@ static int __set_id_reg(const struct sys_reg_desc *rd, 
void __user *uaddr,
 static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
  const struct kvm_one_reg *reg, void __user *uaddr)
 {
-   return __get_id_reg(rd, uaddr, false);
+   return __get_id_reg(vcpu, rd, uaddr, false);
 }
 
 static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
  const struct kvm_one_reg *reg, void __user *uaddr)
 {
-   return __set_id_reg(rd, uaddr, false);
+   return __set_id_reg(vcpu, rd, uaddr, false);
 }
 
 static int get_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
  const struct

[PATCH v4 5/6] arm64/kvm: control accessibility of ptrauth key registers

2018-12-17 Thread Amit Daniel Kachhap
Based on the user-specified flag KVM_ARM_VCPU_PTRAUTH, the ptrauth key
registers are conditionally present in the guest system register list.

Signed-off-by: Amit Daniel Kachhap 
Cc: Mark Rutland 
Cc: Christoffer Dall 
Cc: Marc Zyngier 
Cc: kvm...@lists.cs.columbia.edu
---
 Documentation/arm64/pointer-authentication.txt |  3 +-
 arch/arm64/kvm/sys_regs.c  | 42 +++---
 2 files changed, 33 insertions(+), 12 deletions(-)

diff --git a/Documentation/arm64/pointer-authentication.txt 
b/Documentation/arm64/pointer-authentication.txt
index a65dca2..729055a 100644
--- a/Documentation/arm64/pointer-authentication.txt
+++ b/Documentation/arm64/pointer-authentication.txt
@@ -94,4 +94,5 @@ in KVM guests and attempted use of the feature will result in 
an UNDEFINED
 exception being injected into the guest.
 
 Additionally, when KVM_ARM_VCPU_PTRAUTH is not set then KVM will mask the
-feature bits from ID_AA64ISAR1_EL1.
+feature bits from ID_AA64ISAR1_EL1 and pointer authentication key registers
+are hidden from userspace.
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index ce6144a..09302b2 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1343,12 +1343,6 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_TTBR1_EL1), access_vm_reg, reset_unknown, TTBR1_EL1 },
{ SYS_DESC(SYS_TCR_EL1), access_vm_reg, reset_val, TCR_EL1, 0 },
 
-   PTRAUTH_KEY(APIA),
-   PTRAUTH_KEY(APIB),
-   PTRAUTH_KEY(APDA),
-   PTRAUTH_KEY(APDB),
-   PTRAUTH_KEY(APGA),
-
{ SYS_DESC(SYS_AFSR0_EL1), access_vm_reg, reset_unknown, AFSR0_EL1 },
{ SYS_DESC(SYS_AFSR1_EL1), access_vm_reg, reset_unknown, AFSR1_EL1 },
{ SYS_DESC(SYS_ESR_EL1), access_vm_reg, reset_unknown, ESR_EL1 },
@@ -1500,6 +1494,14 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x70 },
 };
 
+static const struct sys_reg_desc ptrauth_reg_descs[] = {
+   PTRAUTH_KEY(APIA),
+   PTRAUTH_KEY(APIB),
+   PTRAUTH_KEY(APDA),
+   PTRAUTH_KEY(APDB),
+   PTRAUTH_KEY(APGA),
+};
+
 static bool trap_dbgidr(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
@@ -2100,6 +2102,8 @@ static int emulate_sys_reg(struct kvm_vcpu *vcpu,
r = find_reg(params, table, num);
if (!r)
r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+   if (!r && kvm_arm_vcpu_ptrauth_allowed(vcpu))
+   r = find_reg(params, ptrauth_reg_descs, 
ARRAY_SIZE(ptrauth_reg_descs));
 
if (likely(r)) {
perform_access(vcpu, params, r);
@@ -2213,6 +2217,8 @@ static const struct sys_reg_desc 
*index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
	r = find_reg_by_id(id, &params, table, num);
	if (!r)
		r = find_reg(&params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+	if (!r && kvm_arm_vcpu_ptrauth_allowed(vcpu))
+		r = find_reg(&params, ptrauth_reg_descs,
+			     ARRAY_SIZE(ptrauth_reg_descs));
 
/* Not saved in the sys_reg array and not otherwise accessible? */
if (r && !(r->reg || r->get_user))
@@ -2494,18 +2500,22 @@ static int walk_one_sys_reg(const struct sys_reg_desc 
*rd,
 }
 
 /* Assumed ordered tables, see kvm_sys_reg_table_init. */
-static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
+static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind,
+   const struct sys_reg_desc *desc, unsigned int len)
 {
const struct sys_reg_desc *i1, *i2, *end1, *end2;
unsigned int total = 0;
size_t num;
int err;
 
+   if (desc == ptrauth_reg_descs && !kvm_arm_vcpu_ptrauth_allowed(vcpu))
+   return total;
+
/* We check for duplicates here, to allow arch-specific overrides. */
	i1 = get_target_table(vcpu->arch.target, true, &num);
end1 = i1 + num;
-   i2 = sys_reg_descs;
-   end2 = sys_reg_descs + ARRAY_SIZE(sys_reg_descs);
+   i2 = desc;
+   end2 = desc + len;
 
BUG_ON(i1 == end1 || i2 == end2);
 
@@ -2533,7 +2543,10 @@ unsigned long kvm_arm_num_sys_reg_descs(struct kvm_vcpu 
*vcpu)
 {
return ARRAY_SIZE(invariant_sys_regs)
+ num_demux_regs()
-   + walk_sys_regs(vcpu, (u64 __user *)NULL);
+   + walk_sys_regs(vcpu, (u64 __user *)NULL, sys_reg_descs,
+   ARRAY_SIZE(sys_reg_descs))
+   + walk_sys_regs(vcpu, (u64 __user *)NULL, ptrauth_reg_descs,
+   ARRAY_SIZE(ptrauth_reg_descs));
 }
 
 int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
@@ -2548,7 +2561,12 @@ int kvm_arm_copy_sys_reg_indices(struct kvm_vcpu *vcpu, 
u64 __user *uindices)
uindices++;
}
 
-   err = w

[PATCH v4 6/6] arm/kvm: arm64: Add a vcpu feature for pointer authentication

2018-12-17 Thread Amit Daniel Kachhap
This is a runtime feature and can be enabled by the --ptrauth option.

Signed-off-by: Amit Daniel Kachhap 
Cc: Mark Rutland 
Cc: Christoffer Dall 
Cc: Marc Zyngier 
Cc: kvm...@lists.cs.columbia.edu
---
 arm/aarch32/include/kvm/kvm-cpu-arch.h| 2 ++
 arm/aarch64/include/asm/kvm.h | 3 +++
 arm/aarch64/include/kvm/kvm-arch.h| 1 +
 arm/aarch64/include/kvm/kvm-config-arch.h | 4 +++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h| 2 ++
 arm/aarch64/kvm-cpu.c | 5 +
 arm/include/arm-common/kvm-config-arch.h  | 1 +
 arm/kvm-cpu.c | 7 +++
 include/linux/kvm.h   | 1 +
 9 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h 
b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index d28ea67..5779767 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -13,4 +13,6 @@
 #define ARM_CPU_ID 0, 0, 0
 #define ARM_CPU_ID_MPIDR   5
 
static inline unsigned int kvm__cpu_ptrauth_get_feature(void) { return 0; }
+
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index c286035..0fd183d 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -98,6 +98,9 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_PSCI_0_2  2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V33 /* Support guest PMUv3 */
 
+/* CPU uses address authentication and A key */
+#define KVM_ARM_VCPU_PTRAUTH   4
+
 struct kvm_vcpu_init {
__u32 target;
__u32 features[7];
diff --git a/arm/aarch64/include/kvm/kvm-arch.h 
b/arm/aarch64/include/kvm/kvm-arch.h
index 9de623a..bd566cb 100644
--- a/arm/aarch64/include/kvm/kvm-arch.h
+++ b/arm/aarch64/include/kvm/kvm-arch.h
@@ -11,4 +11,5 @@
 
 #include "arm-common/kvm-arch.h"
 
+
 #endif /* KVM__KVM_ARCH_H */
diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h 
b/arm/aarch64/include/kvm/kvm-config-arch.h
index 04be43d..2074684 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -8,7 +8,9 @@
"Create PMUv3 device"), \
OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed, \
"Specify random seed for Kernel Address Space " \
-   "Layout Randomization (KASLR)"),
+   "Layout Randomization (KASLR)"),\
+   OPT_BOOLEAN('\0', "ptrauth", &(cfg)->has_ptrauth,   \
+   "Enable address authentication"),
 
 #include "arm-common/kvm-config-arch.h"
 
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h 
b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index a9d8563..f7b64b7 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -17,4 +17,6 @@
 #define ARM_CPU_CTRL   3, 0, 1, 0
 #define ARM_CPU_CTRL_SCTLR_EL1 0
 
+unsigned int kvm__cpu_ptrauth_get_feature(void);
+
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/kvm-cpu.c b/arm/aarch64/kvm-cpu.c
index 1b29374..10da2cb 100644
--- a/arm/aarch64/kvm-cpu.c
+++ b/arm/aarch64/kvm-cpu.c
@@ -123,6 +123,11 @@ void kvm_cpu__reset_vcpu(struct kvm_cpu *vcpu)
return reset_vcpu_aarch64(vcpu);
 }
 
+unsigned int kvm__cpu_ptrauth_get_feature(void)
+{
+   return (1UL << KVM_ARM_VCPU_PTRAUTH);
+}
+
 int kvm_cpu__get_endianness(struct kvm_cpu *vcpu)
 {
struct kvm_one_reg reg;
diff --git a/arm/include/arm-common/kvm-config-arch.h 
b/arm/include/arm-common/kvm-config-arch.h
index 6a196f1..eb872db 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/kvm-config-arch.h
@@ -10,6 +10,7 @@ struct kvm_config_arch {
boolaarch32_guest;
boolhas_pmuv3;
u64 kaslr_seed;
+   boolhas_ptrauth;
enum irqchip_type irqchip;
 };
 
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 7780251..5afd727 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -68,6 +68,13 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
}
 
+   /* Set KVM_ARM_VCPU_PTRAUTH_I_A if available */
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH)) {
+   if (kvm->cfg.arch.has_ptrauth)
+   vcpu_init.features[0] |=
+   kvm__cpu_ptrauth_get_feature();
+   }
+
/*
 * If the preferred target ioctl is successful then
 * use preferred target else try each and every target type
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index f51d508..ffd8f5c 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -883,6

[PATCH v4 2/6] arm64/kvm: context-switch ptrauth registers

2018-12-17 Thread Amit Daniel Kachhap
When pointer authentication is supported, a guest may wish to use it.
This patch adds the necessary KVM infrastructure for this to work, with
a semi-lazy context switch of the pointer auth state.

Pointer authentication feature is only enabled when VHE is built
into the kernel and present into CPU implementation so only VHE code
paths are modified.

When we schedule a vcpu, we disable guest usage of pointer
authentication instructions and accesses to the keys. While these are
disabled, we avoid context-switching the keys. When we trap the guest
trying to use pointer authentication functionality, we change to eagerly
context-switching the keys, and enable the feature. The next time the
vcpu is scheduled out/in, we start again.

Pointer authentication consists of address authentication and generic
authentication, and CPUs in a system might have varied support for
either. Where support for either feature is not uniform, it is hidden
from guests via ID register emulation, as a result of the cpufeature
framework in the host.

Unfortunately, address authentication and generic authentication cannot
be trapped separately, as the architecture provides a single EL2 trap
covering both. If we wish to expose one without the other, we cannot
prevent a (badly-written) guest from intermittently using a feature
which is not uniformly supported (when scheduled on a physical CPU which
supports the relevant feature). When the guest is scheduled on a
physical CPU lacking the feature, these attemts will result in an UNDEF
being taken by the guest.

Signed-off-by: Mark Rutland 
Signed-off-by: Amit Daniel Kachhap 
Cc: Marc Zyngier 
Cc: Christoffer Dall 
Cc: kvm...@lists.cs.columbia.edu
---
 arch/arm/include/asm/kvm_host.h |  1 +
 arch/arm64/include/asm/cpufeature.h |  6 +++
 arch/arm64/include/asm/kvm_host.h   | 24 
 arch/arm64/include/asm/kvm_hyp.h|  7 
 arch/arm64/kernel/traps.c   |  1 +
 arch/arm64/kvm/handle_exit.c| 24 +++-
 arch/arm64/kvm/hyp/Makefile |  1 +
 arch/arm64/kvm/hyp/ptrauth-sr.c | 73 +
 arch/arm64/kvm/hyp/switch.c |  4 ++
 arch/arm64/kvm/sys_regs.c   | 40 
 virt/kvm/arm/arm.c  |  2 +
 11 files changed, 166 insertions(+), 17 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 0f012c8..02d9bfc 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -351,6 +351,7 @@ static inline int kvm_arm_have_ssbd(void)
 
 static inline void kvm_vcpu_load_sysregs(struct kvm_vcpu *vcpu) {}
 static inline void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arm_vcpu_ptrauth_config(struct kvm_vcpu *vcpu) {}
 
 #define __KVM_HAVE_ARCH_VM_ALLOC
 struct kvm *kvm_arch_alloc_vm(void);
diff --git a/arch/arm64/include/asm/cpufeature.h 
b/arch/arm64/include/asm/cpufeature.h
index 1c8393f..ac7d496 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -526,6 +526,12 @@ static inline bool system_supports_generic_auth(void)
cpus_have_const_cap(ARM64_HAS_GENERIC_AUTH);
 }
 
+static inline bool system_supports_ptrauth(void)
+{
+   return IS_ENABLED(CONFIG_ARM64_PTR_AUTH) &&
+   cpus_have_const_cap(ARM64_HAS_ADDRESS_AUTH);
+}
+
 #define ARM64_SSBD_UNKNOWN -1
 #define ARM64_SSBD_FORCE_DISABLE   0
 #define ARM64_SSBD_KERNEL  1
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 1b9eed9..629712d 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -146,6 +146,18 @@ enum vcpu_sysreg {
PMSWINC_EL0,/* Software Increment Register */
PMUSERENR_EL0,  /* User Enable Register */
 
+   /* Pointer Authentication Registers */
+   APIAKEYLO_EL1,
+   APIAKEYHI_EL1,
+   APIBKEYLO_EL1,
+   APIBKEYHI_EL1,
+   APDAKEYLO_EL1,
+   APDAKEYHI_EL1,
+   APDBKEYLO_EL1,
+   APDBKEYHI_EL1,
+   APGAKEYLO_EL1,
+   APGAKEYHI_EL1,
+
/* 32bit specific registers. Keep them at the end of the range */
DACR32_EL2, /* Domain Access Control Register */
IFSR32_EL2, /* Instruction Fault Status Register */
@@ -439,6 +451,18 @@ static inline bool kvm_arch_check_sve_has_vhe(void)
return true;
 }
 
+void kvm_arm_vcpu_ptrauth_enable(struct kvm_vcpu *vcpu);
+void kvm_arm_vcpu_ptrauth_disable(struct kvm_vcpu *vcpu);
+
+static inline void kvm_arm_vcpu_ptrauth_config(struct kvm_vcpu *vcpu)
+{
+   /* Disable ptrauth and use it in a lazy context via traps */
+   if (has_vhe() && system_supports_ptrauth())
+   kvm_arm_vcpu_ptrauth_disable(vcpu);
+}
+
+void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu);
+
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_

[PATCH v4 0/6] Add ARMv8.3 pointer authentication for kvm guest

2018-12-17 Thread Amit Daniel Kachhap
Hi,

This patch series adds pointer authentication support for KVM guests and
is based on top of Linux 4.20-rc5 and the generic pointer authentication
patch series[1]. The first two patches in this series were originally
posted by Mark Rutland earlier[2,3] and contain some history of this work.

Extension Overview:
=

The ARMv8.3 pointer authentication extension adds functionality to detect
modification of pointer values, mitigating certain classes of attack such as
stack smashing, and making return oriented programming attacks harder.

The extension introduces the concept of a pointer authentication code (PAC),
which is stored in some upper bits of pointers. Each PAC is derived from the
original pointer, another 64-bit value (e.g. the stack pointer), and a secret
128-bit key.

New instructions are added which can be used to:

* Insert a PAC into a pointer
* Strip a PAC from a pointer
* Authenticate and strip a PAC from a pointer

The detailed description of ARMv8.3 pointer authentication support in
userspace/kernel can be found in Kristina's generic pointer authentication
patch series[1].

KVM guest work:
==

If pointer authentication is enabled for KVM guests then the new PAC
instructions will not trap to EL2. If not, they may be ignored if in the
HINT region, or trapped to EL2 as illegal instructions. Since each KVM
guest vcpu runs as a thread, it has a key initialised which will be used
by PAC. When a world switch happens between host and guest, this key is
exchanged, as sketched below.
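
A sketch of the key save path (register names as in this series'
ptrauth-sr.c; the restore side is symmetric):

#define __ptrauth_save_key(regs, key)					\
({									\
	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1); \
	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1); \
})

static void __hyp_text __ptrauth_save_state(struct kvm_cpu_context *ctxt)
{
	__ptrauth_save_key(ctxt->sys_regs, APIA);
	__ptrauth_save_key(ctxt->sys_regs, APIB);
	__ptrauth_save_key(ctxt->sys_regs, APDA);
	__ptrauth_save_key(ctxt->sys_regs, APDB);
	__ptrauth_save_key(ctxt->sys_regs, APGA);
}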

There were some review comments by Christoffer Dall on the original
series[2,3,4], and this patch series tries to address them. The original
series enabled pointer authentication for both userspace and KVM guests.
However, it is now bifurcated and this series contains only the KVM
guest support.

Changes since v3 [4]:
* Use pointer authentication only when VHE is present, as ARMv8.3 implies
  ARMv8.1 features are present.
* Added back the lazy context handling of ptrauth instructions from the v2
  version.
* Added more details in Documentation.
* Rebased to new version of generic ptrauth patches [1].

Changes since v2 [2,3]:
* Allow host and guest to have different HCR_EL2 settings and not just constant
  value HCR_HOST_VHE_FLAGS or HCR_HOST_NVHE_FLAGS.
* Optimise the reading of HCR_EL2 in host/guest switch by fetching it once
  during KVM initialisation state and using it later.
* Context switch pointer authentication keys when switching between guest
  and host. Pointer authentication was enabled in a lazy context earlier[2];
  that is removed now to keep it simple. However, it can be revisited later
  if there is a significant performance issue.
* Added a userspace option to choose pointer authentication.
* Based on the userspace option, ptrauth cpufeature will be visible.
* Based on the userspace option, ptrauth key registers will be accessible.
* A small document is added on how to enable pointer authentication from
  userspace KVM API.

Looking for feedback and comments.

Thanks,
Amit

[1]: https://lkml.org/lkml/2018/12/7/666
[2]: https://lore.kernel.org/lkml/20171127163806.31435-11-mark.rutl...@arm.com/
[3]: https://lore.kernel.org/lkml/20171127163806.31435-10-mark.rutl...@arm.com/
[4]: https://lkml.org/lkml/2018/10/17/594


Linux (4.20-rc5 based):


Amit Daniel Kachhap (5):
  arm64/kvm: preserve host HCR_EL2 value
  arm64/kvm: context-switch ptrauth registers
  arm64/kvm: add a userspace option to enable pointer authentication
  arm64/kvm: enable pointer authentication cpufeature conditionally
  arm64/kvm: control accessibility of ptrauth key registers

 Documentation/arm64/pointer-authentication.txt | 13 ++--
 Documentation/virtual/kvm/api.txt  |  4 ++
 arch/arm/include/asm/kvm_host.h|  7 ++
 arch/arm64/include/asm/cpufeature.h|  6 ++
 arch/arm64/include/asm/kvm_asm.h   |  2 +
 arch/arm64/include/asm/kvm_host.h  | 41 +++-
 arch/arm64/include/asm/kvm_hyp.h   |  7 ++
 arch/arm64/include/uapi/asm/kvm.h  |  1 +
 arch/arm64/kernel/traps.c  |  1 +
 arch/arm64/kvm/handle_exit.c   | 24 ---
 arch/arm64/kvm/hyp/Makefile|  1 +
 arch/arm64/kvm/hyp/ptrauth-sr.c| 89 +
 arch/arm64/kvm/hyp/switch.c| 19 --
 arch/arm64/kvm/hyp/sysreg-sr.c | 11 
 arch/arm64/kvm/hyp/tlb.c   |  6 +-
 arch/arm64/kvm/reset.c |  3 +
 arch/arm64/kvm/sys_regs.c  | 91 --
 include/uapi/linux/kvm.h   |  1 +
 virt/kvm/arm/arm.c |  4 ++
 19 files changed, 289 insertions(+), 42 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/ptrauth-sr.c

kvmtool:

Repo: git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git
Amit Daniel Kachhap (1

Re: [PATCH v5] arm64: implement ftrace with regs

2018-12-16 Thread Amit Daniel Kachhap
Hi,
On Sat, Dec 15, 2018 at 6:14 PM Torsten Duwe  wrote:
>
> On Fri, 14 Dec 2018 21:45:03 +0530
> Amit Daniel Kachhap  wrote:
>
> > Sorry I didn't mention my environment. I am using 4.20-rc3 and it has
> > all the above 8 extra patches
> > mentioned by you.
>
> So that should be fine.
ok thanks.
>
> > I read your change description in v3 patchset. You had mentioned there
> > that graph caller
> > is broken.
>
> No, actually I thought I had fixed that for v4. Would you care to show
> us an actual error message or some symptom?
There is no error message or crash, but there is no useful output either,
as shown below:

/sys/kernel/tracing # echo wake_up_process > set_graph_function
/sys/kernel/tracing # echo function_graph > current_tracer
/sys/kernel/tracing # cat trace
# tracer: function_graph
#
# CPU  DURATION  FUNCTION CALLS
# | |   | |   |   |   |

//Amit
>
> Torsten


Re: [PATCH v5] arm64: implement ftrace with regs

2018-12-14 Thread Amit Daniel Kachhap
Hi,
On Fri, Dec 14, 2018 at 3:18 PM Torsten Duwe  wrote:
>
> On Thu, Dec 13, 2018 at 11:01:38PM +0530, Amit Daniel Kachhap wrote:
> > On Fri, Nov 30, 2018 at 9:53 PM Torsten Duwe  wrote:
> >
> > Hi Torsten,
> >
> > I was testing your patch and it seems to work fine for function trace. 
> > However
> > function graph trace seems broken. Is it work in progress ? or I am
> > missing something.
>
> What did you base your tests on, you didn't specify?
> OTOH neither did I ;-) I precluded all addressees had the full picture.
>
> Precisely, I basically start with 4.19, but 4.20-rc shouldn't make a
> difference. BUT...
>
> > > [Changes from v4]
> > >
> > > * include Ard's feedback and pending changes:
> > >   - ADR/ADRP veneers
> > >   - ftrace_trampolines[2]
> > >   - add a .req for fp, just in case
> " [PATCH 1/2] arm64/insn: add support for emitting ADR/ADRP instructions "
> <20181122084646.3247-2-ard.biesheu...@linaro.org> et.al, esp:
> Git-commit: bdb85cd1d20669dfae813555dddb745ad09323ba
>
> > >   - comment on the pt_regs.stackframe[2] mapping
> > > * include Mark's ftrace cleanup
> > >   - GLOBAL()
> > >   - prepare_ftrace_return(pc, &lr, fp)
> > >
> " [PATCH 1/6] linkage: add generic GLOBAL() macro "
> <20181115224203.24847-2-mark.rutl...@arm.com> et.al., esp:
> Git-commit: 7dc48bf96aa0fc8aa5b38cc3e5c36ac03171e680
>
> change the API this patch set relies on. AFAIU they are on their way into
> mainline so I updated v5 accordingly. If you don't have these, just use v4;
> the other changes are only for compatibility and cosmetics.

Sorry, I didn't mention my environment. I am using 4.20-rc3 and it has
all the above 8 extra patches mentioned by you.
I read your change description in the v3 patchset. You had mentioned
there that the graph caller is broken.

//Amit
>
> HTH,
> Torsten
>


Re: [PATCH v5] arm64: implement ftrace with regs

2018-12-13 Thread Amit Daniel Kachhap
On Fri, Nov 30, 2018 at 9:53 PM Torsten Duwe  wrote:
>
> Use -fpatchable-function-entry (gcc8) to add 2 NOPs at the beginning
> of each function. Replace the first NOP thus generated with a quick LR
> saver (move it to scratch reg x9), so the 2nd replacement insn, the call
> to ftrace, does not clobber the value. Ftrace will then generate the
> standard stack frames.
>
> Note that patchable-function-entry in GCC disables IPA-RA, which means
> ABI register calling conventions are obeyed *and* scratch registers
> such as x9 are available.
>
> Introduce and handle an ftrace_regs_trampoline for module PLTs, together
> with ftrace_trampoline, and double the size of this special section
> if .text.ftrace_trampoline is present in the module.

Hi Torsten,

I was testing your patch and it seems to work fine for function trace.
However, function graph trace seems broken. Is it a work in progress, or
am I missing something?

Regards,
Amit Daniel
>
> Signed-off-by: Torsten Duwe 
>
> ---
>
> As reliable stack traces are still being discussed, here is
> dynamic ftrace with regs only, which has a value of its own.
> I can see Mark has broken down an earlier version into smaller
> patches; just let me know if you prefer that.
>
> [Changes from v4]
>
> * include Ard's feedback and pending changes:
>   - ADR/ADRP veneers
>   - ftrace_trampolines[2]
>   - add a .req for fp, just in case
>   - comment on the pt_regs.stackframe[2] mapping
> * include Mark's ftrace cleanup
>   - GLOBAL()
>   - prepare_ftrace_return(pc, &lr, fp)
>
> ---
>  arch/arm64/Kconfig|2
>  arch/arm64/Makefile   |4 +
>  arch/arm64/include/asm/assembler.h|1
>  arch/arm64/include/asm/ftrace.h   |   13 +++
>  arch/arm64/include/asm/module.h   |3
>  arch/arm64/kernel/Makefile|6 -
>  arch/arm64/kernel/entry-ftrace.S  |  130 
> ++
>  arch/arm64/kernel/ftrace.c|  125 +---
>  arch/arm64/kernel/module-plts.c   |3
>  arch/arm64/kernel/module.c|2
>  drivers/firmware/efi/libstub/Makefile |3
>  include/asm-generic/vmlinux.lds.h |1
>  include/linux/compiler_types.h|4 +
>  13 files changed, 261 insertions(+), 36 deletions(-)
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -125,6 +125,8 @@ config ARM64
> select HAVE_DEBUG_KMEMLEAK
> select HAVE_DMA_CONTIGUOUS
> select HAVE_DYNAMIC_FTRACE
> +   select HAVE_DYNAMIC_FTRACE_WITH_REGS \
> +   if $(cc-option,-fpatchable-function-entry=2)
> select HAVE_EFFICIENT_UNALIGNED_ACCESS
> select HAVE_FTRACE_MCOUNT_RECORD
> select HAVE_FUNCTION_TRACER
> --- a/arch/arm64/Makefile
> +++ b/arch/arm64/Makefile
> @@ -79,6 +79,10 @@ ifeq ($(CONFIG_ARM64_MODULE_PLTS),y)
>  KBUILD_LDFLAGS_MODULE  += -T $(srctree)/arch/arm64/kernel/module.lds
>  endif
>
> +ifeq ($(CONFIG_DYNAMIC_FTRACE_WITH_REGS),y)
> +  CC_FLAGS_FTRACE := -fpatchable-function-entry=2
> +endif
> +
>  # Default value
>  head-y := arch/arm64/kernel/head.o
>
> --- a/arch/arm64/include/asm/ftrace.h
> +++ b/arch/arm64/include/asm/ftrace.h
> @@ -17,6 +17,19 @@
>  #define MCOUNT_ADDR((unsigned long)_mcount)
>  #define MCOUNT_INSN_SIZE   AARCH64_INSN_SIZE
>
> +/*
> + * DYNAMIC_FTRACE_WITH_REGS is implemented by adding 2 NOPs at the beginning
> + * of each function, with the second NOP actually calling ftrace. In contrary
> + * to a classic _mcount call, the call instruction to be modified is thus
> + * the second one, and not the only one.
> + */
> +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
> +#define ARCH_SUPPORTS_FTRACE_OPS 1
> +#define REC_IP_BRANCH_OFFSET AARCH64_INSN_SIZE
> +#else
> +#define REC_IP_BRANCH_OFFSET 0
> +#endif
> +
>  #ifndef __ASSEMBLY__
>  #include 
>
> --- a/arch/arm64/kernel/Makefile
> +++ b/arch/arm64/kernel/Makefile
> @@ -7,9 +7,9 @@ CPPFLAGS_vmlinux.lds:= -DTEXT_OFFSET=$(
>  AFLAGS_head.o  := -DTEXT_OFFSET=$(TEXT_OFFSET)
>  CFLAGS_armv8_deprecated.o := -I$(src)
>
> -CFLAGS_REMOVE_ftrace.o = -pg
> -CFLAGS_REMOVE_insn.o = -pg
> -CFLAGS_REMOVE_return_address.o = -pg
> +CFLAGS_REMOVE_ftrace.o = -pg $(CC_FLAGS_FTRACE)
> +CFLAGS_REMOVE_insn.o = -pg $(CC_FLAGS_FTRACE)
> +CFLAGS_REMOVE_return_address.o = -pg $(CC_FLAGS_FTRACE)
>
>  # Object file lists.
>  arm64-obj-y:= debug-monitors.o entry.o irq.o fpsimd.o
>   \
> --- a/drivers/firmware/efi/libstub/Makefile
> +++ b/drivers/firmware/efi/libstub/Makefile
> @@ -13,7 +13,8 @@ cflags-$(CONFIG_X86)  += -m$(BITS) -D__K
>
>  # arm64 uses the full KBUILD_CFLAGS so it's necessary to explicitly
>  # disable the stackleak plugin
> -cflags-$(CONFIG_ARM64) := $(subst -pg,,$(KBUILD_CFLAGS)) -fpie \
> +cflags-$(CONFIG_ARM64) := $(filter-out -pg $(CC_FLAGS_FTRACE)\
> + ,$(KBUILD_CFLAGS)) -fpie \
>

[PATCH] clk: scmi: Fix the rounding of clock rate

2018-07-30 Thread Amit Daniel Kachhap
This fix rounds the clock rate properly by using the quotient and not
the remainder in the calculation. This issue was found while testing
HDMI on the Juno platform.
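
In short, do_div(x, d) divides x by d in place and returns the
remainder, so the old code multiplied the remainder back instead of the
quotient. For example, with fmin = 0 and step_size = 25000000, a
requested rate of 60000000 gives:

	ftmp = 60000000 + 25000000 - 1;	/* 84999999 */
	do_div(ftmp, 25000000);		/* quotient 3, remainder 9999999 */
	/* fixed:  3 * 25000000 + fmin = 75000000 (next step up)  */
	/* broken: 9999999 * 25000000 + fmin = a bogus huge rate  */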

Fixes: 6d6a1d82eaef7 ("clk: add support for clocks provided by SCMI")
Acked-by: Sudeep Holla 
Signed-off-by: Amit Daniel Kachhap 
---
 drivers/clk/clk-scmi.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/clk/clk-scmi.c b/drivers/clk/clk-scmi.c
index bb2a6f2..a985bf5 100644
--- a/drivers/clk/clk-scmi.c
+++ b/drivers/clk/clk-scmi.c
@@ -38,7 +38,6 @@ static unsigned long scmi_clk_recalc_rate(struct clk_hw *hw,
 static long scmi_clk_round_rate(struct clk_hw *hw, unsigned long rate,
unsigned long *parent_rate)
 {
-   int step;
u64 fmin, fmax, ftmp;
struct scmi_clk *clk = to_scmi_clk(hw);
 
@@ -60,9 +59,9 @@ static long scmi_clk_round_rate(struct clk_hw *hw, unsigned 
long rate,
 
ftmp = rate - fmin;
ftmp += clk->info->range.step_size - 1; /* to round up */
-   step = do_div(ftmp, clk->info->range.step_size);
+   do_div(ftmp, clk->info->range.step_size);
 
-   return step * clk->info->range.step_size + fmin;
+   return ftmp * clk->info->range.step_size + fmin;
 }
 
 static int scmi_clk_set_rate(struct clk_hw *hw, unsigned long rate,
-- 
2.7.4



Re: [PATCH] clk: scmi: Fix the rounding of clock rate

2018-07-30 Thread Amit Daniel Kachhap
On Mon, Jul 30, 2018 at 5:10 PM, Sudeep Holla  wrote:
> On Mon, Jul 30, 2018 at 11:03:51AM +0530, Amit Daniel Kachhap wrote:
>> Hi,
>>
>> On Fri, Jul 27, 2018 at 10:07 PM, Stephen Boyd  wrote:
>> > Quoting Amit Daniel Kachhap (2018-07-27 07:01:52)
>> >> This fix rounds the clock rate properly by using quotient and not
>> >> remainder in the calculation. This issue was found while testing HDMI
>> >> in the Juno platform.
>> >>
>> >> Signed-off-by: Amit Daniel Kachhap 
>> >
>> > Any Fixes: tag here?
>> Yes, This patch is tested with Linux v4.18-rc6 tag.
>>
>
> No Stephen meant the commit that this fixes, something like below:
>
> Fixes: 6d6a1d82eaef ("clk: add support for clocks provided by SCMI")
>
> so that it can get backported if required.

OK, my mistake. Thanks for the clarification.

>
> --
> Regards,
> Sudeep


Re: [PATCH] clk: scmi: Fix the rounding of clock rate

2018-07-29 Thread Amit Daniel Kachhap
Hi,

On Fri, Jul 27, 2018 at 10:07 PM, Stephen Boyd  wrote:
> Quoting Amit Daniel Kachhap (2018-07-27 07:01:52)
>> This fix rounds the clock rate properly by using quotient and not
>> remainder in the calculation. This issue was found while testing HDMI
>> in the Juno platform.
>>
>> Signed-off-by: Amit Daniel Kachhap 
>
> Any Fixes: tag here?
Yes, this patch is tested with the Linux v4.18-rc6 tag.
>
>> ---
>>  drivers/clk/clk-scmi.c | 5 ++---
>>  1 file changed, 2 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/clk/clk-scmi.c b/drivers/clk/clk-scmi.c
>> index bb2a6f2..a985bf5 100644
>> --- a/drivers/clk/clk-scmi.c
>> +++ b/drivers/clk/clk-scmi.c
>> @@ -60,9 +59,9 @@ static long scmi_clk_round_rate(struct clk_hw *hw, 
>> unsigned long rate,
>>
>> ftmp = rate - fmin;
>> ftmp += clk->info->range.step_size - 1; /* to round up */
>> -   step = do_div(ftmp, clk->info->range.step_size);
>> +   do_div(ftmp, clk->info->range.step_size);
>>
>> -   return step * clk->info->range.step_size + fmin;
>> +   return ftmp * clk->info->range.step_size + fmin;
>
> Good catch.
Thanks.
>
Regards,
Amit


[PATCH] clk: scmi: Fix the rounding of clock rate

2018-07-27 Thread Amit Daniel Kachhap
This fix rounds the clock rate properly by using quotient and not
remainder in the calculation. This issue was found while testing HDMI
in the Juno platform.

Signed-off-by: Amit Daniel Kachhap 
---
 drivers/clk/clk-scmi.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/clk/clk-scmi.c b/drivers/clk/clk-scmi.c
index bb2a6f2..a985bf5 100644
--- a/drivers/clk/clk-scmi.c
+++ b/drivers/clk/clk-scmi.c
@@ -38,7 +38,6 @@ static unsigned long scmi_clk_recalc_rate(struct clk_hw *hw,
 static long scmi_clk_round_rate(struct clk_hw *hw, unsigned long rate,
unsigned long *parent_rate)
 {
-   int step;
u64 fmin, fmax, ftmp;
struct scmi_clk *clk = to_scmi_clk(hw);
 
@@ -60,9 +59,9 @@ static long scmi_clk_round_rate(struct clk_hw *hw, unsigned 
long rate,
 
ftmp = rate - fmin;
ftmp += clk->info->range.step_size - 1; /* to round up */
-   step = do_div(ftmp, clk->info->range.step_size);
+   do_div(ftmp, clk->info->range.step_size);
 
-   return step * clk->info->range.step_size + fmin;
+   return ftmp * clk->info->range.step_size + fmin;
 }
 
 static int scmi_clk_set_rate(struct clk_hw *hw, unsigned long rate,
-- 
2.7.4



Re: [PATCH 10/10] scsi: ufs-exynos: add UFS host support for Exynos SoCs

2015-08-26 Thread amit daniel kachhap
On Fri, Aug 21, 2015 at 2:58 PM, Alim Akhtar  wrote:
> From: Seungwon Jeon 
>
> This patch introduces Exynos UFS host controller driver,
> which mainly handles vendor-specific operations including
> link startup, power mode change and hibernation/unhibernation.
>
> Signed-off-by: Seungwon Jeon 
> Signed-off-by: Alim Akhtar 
> ---
>  .../devicetree/bindings/ufs/ufs-exynos.txt |   92 ++
>  drivers/scsi/ufs/Kconfig   |   12 +
>  drivers/scsi/ufs/Makefile  |1 +
>  drivers/scsi/ufs/ufs-exynos-hw.c   |  147 +++
>  drivers/scsi/ufs/ufs-exynos-hw.h   |   43 +
>  drivers/scsi/ufs/ufs-exynos.c  | 1175 
> 
>  drivers/scsi/ufs/ufs-exynos.h  |  463 
>  drivers/scsi/ufs/ufshci.h  |   26 +-
>  drivers/scsi/ufs/unipro.h  |   47 +
>  9 files changed, 2005 insertions(+), 1 deletion(-)
>  create mode 100644 Documentation/devicetree/bindings/ufs/ufs-exynos.txt
>  create mode 100644 drivers/scsi/ufs/ufs-exynos-hw.c
>  create mode 100644 drivers/scsi/ufs/ufs-exynos-hw.h
>  create mode 100644 drivers/scsi/ufs/ufs-exynos.c
>  create mode 100644 drivers/scsi/ufs/ufs-exynos.h
>
> diff --git a/Documentation/devicetree/bindings/ufs/ufs-exynos.txt 
> b/Documentation/devicetree/bindings/ufs/ufs-exynos.txt
> new file mode 100644
> index 000..1a6184d
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/ufs/ufs-exynos.txt
> @@ -0,0 +1,92 @@
> +* Exynos Universal Flash Storage (UFS) Host Controller
> +
> +UFSHC nodes are defined to describe on-chip UFS host controllers.
> +Each UFS controller instance should have its own node.
> +
> +Required properties:
> +- compatible: compatible list, contains "samsung,exynos7-ufs"
> +- interrupts: <interrupt mapping for UFS host controller IRQ>
> +- reg   : <registers mapping>
> +
> +Optional properties:
> +- vdd-hba-supply: phandle to UFS host controller supply regulator 
> node
> +- vcc-supply: phandle to VCC supply regulator node
> +- vccq-supply   : phandle to VCCQ supply regulator node
> +- vccq2-supply  : phandle to VCCQ2 supply regulator node
> +- vcc-supply-1p8: For embedded UFS devices, valid VCC range is 
> 1.7-1.95V
> +  or 2.7-3.6V. This boolean property when set, 
> specifies
> + to use low voltage range of 1.7-1.95V. Note for 
> external
> + UFS cards this property is invalid and valid VCC 
> range is
> + always 2.7-3.6V.
> +- vcc-max-microamp  : specifies max. load that can be drawn from vcc 
> supply
> +- vccq-max-microamp : specifies max. load that can be drawn from vccq 
> supply
> +- vccq2-max-microamp: specifies max. load that can be drawn from vccq2 
> supply
> +- -fixed-regulator : boolean property specifying that -supply is 
> a fixed regulator
> +
> +- clocks: List of phandle and clock specifier pairs
> +- clock-names   : List of clock input name strings sorted in the same
> +  order as the clocks property.
> +- freq-table-hz: Array of <min max> operating frequencies 
> stored in the same
> +  order as the clocks property. If this property is 
> not
> + defined or a value in the array is "0" then it is 
> assumed
> + that the frequency is set by the parent clock or a
> + fixed rate clock source.
> +- pclk-freq-avail-range : specifies available frequency range(min/max) for 
> APB clock
> +- ufs,pwr-attr-mode : specifies mode value for power mode change
> +- ufs,pwr-attr-lane : specifies lane count value for power mode change
> +- ufs,pwr-attr-gear : specifies gear count value for power mode change
> +- ufs,pwr-attr-hs-series : specifies HS rate series for power mode change
> +- ufs,pwr-local-l2-timer : specifies array of local UNIPRO L2 timer values
> +   <FC0ProtectionTimeOutVal, TC0ReplayTimeOutVal, AFC0ReqTimeOutVal>
> +- ufs,pwr-remote-l2-timer : specifies array of remote UNIPR L2 timer values
s/UNIPR/UNIPRO
> +   <FC0ProtectionTimeOutVal, TC0ReplayTimeOutVal, AFC0ReqTimeOutVal>
> +- ufs-rx-adv-fine-gran-sup_en : specifies support of fine granularity of MPHY
I suppose this field is of bool type; that could be mentioned here.
> +- ufs-rx-adv-fine-gran-step : specifies granularity steps of MPHY
> +- ufs-rx-adv-min-activate-time-cap : specifies rx advanced minimum activate 
> time of MPHY
> +- ufs-pa-granularity : specifies Granularity for PA_TActivate and 
> PA_Hibern8Time
> +- ufs-pa-tacctivate : specifies time wake-up remote M-RX
> +- ufs-pa-hibern8time : specifies minimum time to wait in HIBERN8 state
> +
> +Note: If above properties are not defined it can be assumed that the supply
> +regulators or clocks are always on.
> +
> +Example:
> +   ufshc@0x1557 {
> +   compatible = "samsung,exynos7-ufs";
> +   reg = 

Re: [PATCH 09/10] scsi: ufs: return value of pwr_change_notify

2015-08-26 Thread amit daniel kachhap
On Fri, Aug 21, 2015 at 2:58 PM, Alim Akhtar  wrote:
> From: Seungwon Jeon 
>
> Behavior of the "powwer mode change" contains vendor specific
s/powwer/power
> operation known as pwr_change_notify. This change adds return
> for pwr_change_notify to find success or failure.
>
> Signed-off-by: Seungwon Jeon 
> Signed-off-by: Alim Akhtar 
> ---
>  drivers/scsi/ufs/ufshcd.c |   22 +++---
>  1 file changed, 15 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
> index 8982da9..142a927 100644
> --- a/drivers/scsi/ufs/ufshcd.c
> +++ b/drivers/scsi/ufs/ufshcd.c
> @@ -2579,14 +2579,18 @@ static int ufshcd_change_power_mode(struct ufs_hba 
> *hba,
> dev_err(hba->dev,
> "%s: power mode change failed %d\n", __func__, ret);
> } else {
> -   if (hba->vops && hba->vops->pwr_change_notify)
> -   hba->vops->pwr_change_notify(hba,
> -   POST_CHANGE, NULL, pwr_mode);
> +   if (hba->vops && hba->vops->pwr_change_notify) {
> +   ret = hba->vops->pwr_change_notify(hba,
> +   POST_CHANGE, NULL, pwr_mode);
> +   if (ret)
> +   goto out;
> +   }
>
> memcpy(&hba->pwr_info, pwr_mode,
> sizeof(struct ufs_pa_layer_attr));
> }
>
> +out:
> return ret;
>  }
>
> @@ -2601,14 +2605,18 @@ int ufshcd_config_pwr_mode(struct ufs_hba *hba,
> struct ufs_pa_layer_attr final_params = { 0 };
> int ret;
>
> -   if (hba->vops && hba->vops->pwr_change_notify)
> -   hba->vops->pwr_change_notify(hba,
> -PRE_CHANGE, desired_pwr_mode, &final_params);
> -   else
> +   if (hba->vops && hba->vops->pwr_change_notify) {
> +   ret = hba->vops->pwr_change_notify(hba,
> +   PRE_CHANGE, desired_pwr_mode, &final_params);
> +   if (ret)
> +   goto out;
> +   } else {
> memcpy(&final_params, desired_pwr_mode, sizeof(final_params));
> +   }
>
> ret = ufshcd_change_power_mode(hba, &final_params);
>
> +out:
> return ret;
>  }
>  EXPORT_SYMBOL_GPL(ufshcd_config_pwr_mode);
> --
> 1.7.10.4
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 08/10] scsi: ufs: make ufshcd_config_pwr_mode of non-static func

2015-08-26 Thread amit daniel kachhap
On Fri, Aug 21, 2015 at 2:57 PM, Alim Akhtar  wrote:
> From: Seungwon Jeon 
>
> It can be used in the vendor's driver for the specific purpose.
More description in this commit log would be useful.
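
For instance, the export enables a hypothetical vendor hook along these
lines (a sketch only, not part of this patch; the function name and
attribute values are made up, and ufshcd.h/unipro.h are assumed):

	/* hypothetical: renegotiate the link to HS-G3 on two lanes */
	static int exynos_ufs_set_hs_mode(struct ufs_hba *hba)
	{
		struct ufs_pa_layer_attr pwr = {
			.gear_rx = 3, .gear_tx = 3,
			.lane_rx = 2, .lane_tx = 2,
			.pwr_rx = FAST_MODE, .pwr_tx = FAST_MODE,
			.hs_rate = PA_HS_MODE_B,
		};

		return ufshcd_config_pwr_mode(hba, &pwr);
	}
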
>
> Signed-off-by: Seungwon Jeon 
> Signed-off-by: Alim Akhtar 
> ---
>  drivers/scsi/ufs/ufshcd.c |5 ++---
>  drivers/scsi/ufs/ufshcd.h |2 ++
>  2 files changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
> index d425ea1..8982da9 100644
> --- a/drivers/scsi/ufs/ufshcd.c
> +++ b/drivers/scsi/ufs/ufshcd.c
> @@ -185,8 +185,6 @@ static int ufshcd_uic_hibern8_ctrl(struct ufs_hba *hba, 
> bool en);
>  static inline void ufshcd_add_delay_before_dme_cmd(struct ufs_hba *hba);
>  static int ufshcd_host_reset_and_restore(struct ufs_hba *hba);
>  static irqreturn_t ufshcd_intr(int irq, void *__hba);
> -static int ufshcd_config_pwr_mode(struct ufs_hba *hba,
> -   struct ufs_pa_layer_attr *desired_pwr_mode);
>  static int ufshcd_change_power_mode(struct ufs_hba *hba,
>  struct ufs_pa_layer_attr *pwr_mode);
>
> @@ -2597,7 +2595,7 @@ static int ufshcd_change_power_mode(struct ufs_hba *hba,
>   * @hba: per-adapter instance
>   * @desired_pwr_mode: desired power configuration
>   */
> -static int ufshcd_config_pwr_mode(struct ufs_hba *hba,
> +int ufshcd_config_pwr_mode(struct ufs_hba *hba,
> struct ufs_pa_layer_attr *desired_pwr_mode)
>  {
> struct ufs_pa_layer_attr final_params = { 0 };
> @@ -2613,6 +2611,7 @@ static int ufshcd_config_pwr_mode(struct ufs_hba *hba,
>
> return ret;
>  }
> +EXPORT_SYMBOL_GPL(ufshcd_config_pwr_mode);
>
>  /**
>   * ufshcd_complete_dev_init() - checks device readiness
> diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
> index 045968e..13368e1 100644
> --- a/drivers/scsi/ufs/ufshcd.h
> +++ b/drivers/scsi/ufs/ufshcd.h
> @@ -636,6 +636,8 @@ extern int ufshcd_dme_set_attr(struct ufs_hba *hba, u32 
> attr_sel,
>u8 attr_set, u32 mib_val, u8 peer);
>  extern int ufshcd_dme_get_attr(struct ufs_hba *hba, u32 attr_sel,
>u32 *mib_val, u8 peer);
> +extern int ufshcd_config_pwr_mode(struct ufs_hba *hba,
> +   struct ufs_pa_layer_attr *desired_pwr_mode);
>
>  /* UIC command interfaces for DME primitives */
>  #define DME_LOCAL  0
> --
> 1.7.10.4
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 07/10] scsi: ufs: add add specific callback for hibern8

2015-08-26 Thread amit daniel kachhap
On Fri, Aug 21, 2015 at 2:57 PM, Alim Akhtar  wrote:
> From: Seungwon Jeon 
>
> Some host controller needs specific handling before/after
> (un)hibernation, This change adds specific callback function
> to support vendor's implementation.
>
> Signed-off-by: Seungwon Jeon 
> Signed-off-by: Alim Akhtar 
> ---
>  drivers/scsi/ufs/ufshcd.c |   36 
>  drivers/scsi/ufs/ufshcd.h |3 +++
>  2 files changed, 35 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
> index bc27f5e..d425ea1 100644
> --- a/drivers/scsi/ufs/ufshcd.c
> +++ b/drivers/scsi/ufs/ufshcd.c
> @@ -181,8 +181,7 @@ static int ufshcd_probe_hba(struct ufs_hba *hba);
>  static int __ufshcd_setup_clocks(struct ufs_hba *hba, bool on,
>  bool skip_ref_clk);
>  static int ufshcd_setup_clocks(struct ufs_hba *hba, bool on);
> -static int ufshcd_uic_hibern8_exit(struct ufs_hba *hba);
> -static int ufshcd_uic_hibern8_enter(struct ufs_hba *hba);
> +static int ufshcd_uic_hibern8_ctrl(struct ufs_hba *hba, bool en);
>  static inline void ufshcd_add_delay_before_dme_cmd(struct ufs_hba *hba);
>  static int ufshcd_host_reset_and_restore(struct ufs_hba *hba);
>  static irqreturn_t ufshcd_intr(int irq, void *__hba);
> @@ -215,6 +214,16 @@ static inline void ufshcd_disable_irq(struct ufs_hba 
> *hba)
> }
>  }
>
> +static inline int ufshcd_uic_hibern8_enter(struct ufs_hba *hba)
> +{
> +   return ufshcd_uic_hibern8_ctrl(hba, true);
> +}
> +
> +static inline int ufshcd_uic_hibern8_exit(struct ufs_hba *hba)
> +{
> +   return ufshcd_uic_hibern8_ctrl(hba, false);
> +}
> +
>  /*
>   * ufshcd_wait_for_register - wait for register value to change
>   * @hba - per-adapter interface
> @@ -2395,7 +2404,7 @@ out:
> return ret;
>  }
>
> -static int ufshcd_uic_hibern8_enter(struct ufs_hba *hba)
> +static int __ufshcd_uic_hibern8_enter(struct ufs_hba *hba)
>  {
> struct uic_command uic_cmd = {0};
>
> @@ -2404,7 +2413,7 @@ static int ufshcd_uic_hibern8_enter(struct ufs_hba *hba)
> return ufshcd_uic_pwr_ctrl(hba, &uic_cmd);
>  }
>
> -static int ufshcd_uic_hibern8_exit(struct ufs_hba *hba)
> +static int __ufshcd_uic_hibern8_exit(struct ufs_hba *hba)
>  {
> struct uic_command uic_cmd = {0};
> int ret;
> @@ -2419,6 +2428,25 @@ static int ufshcd_uic_hibern8_exit(struct ufs_hba *hba)
> return ret;
>  }
>
> +static int ufshcd_uic_hibern8_ctrl(struct ufs_hba *hba, bool en)
> +{
> +   int ret;
> +
> +   if (hba->vops && hba->vops->hibern8_notify)
> +   hba->vops->hibern8_notify(hba, en, PRE_CHANGE);
The return value of hibern8_notify is not checked here. Either check it
or make the callback's return type void.
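That is, something like this sketch of the suggested check (not the
posted code):

	ret = hba->vops->hibern8_notify(hba, en, PRE_CHANGE);
	if (ret)
		goto out;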
> +
> +   ret = en ? __ufshcd_uic_hibern8_enter(hba) :
> +   __ufshcd_uic_hibern8_exit(hba);
> +   if (ret)
> +   goto out;
> +
> +   if (hba->vops && hba->vops->hibern8_notify)
> +   hba->vops->hibern8_notify(hba, en, POST_CHANGE);
> +
> +out:
> +   return ret;
> +}
> +
>   /**
>   * ufshcd_init_pwr_info - setting the POR (power on reset)
>   * values in hba power info
> diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
> index 0b7dde0..045968e 100644
> --- a/drivers/scsi/ufs/ufshcd.h
> +++ b/drivers/scsi/ufs/ufshcd.h
> @@ -260,6 +260,8 @@ struct ufs_pwr_mode_info {
>   * @specify_nexus_t_xfer_req:
>   * @specify_nexus_t_tm_req: called before command is issued to allow vendor
>   * specific handling to be set for nexus type.
> + * @hibern8_notify: called before and after hibernate/unhibernate is carried 
> out
> + * to allow vendor spesific implementation.
>   * @suspend: called during host controller PM callback
>   * @resume: called during host controller PM callback
>   */
> @@ -276,6 +278,7 @@ struct ufs_hba_variant_ops {
> int (*pwr_change_notify)(struct ufs_hba *,
> bool, struct ufs_pa_layer_attr *,
> struct ufs_pa_layer_attr *);
> +   int (*hibern8_notify)(struct ufs_hba *, bool, bool);
> void(*specify_nexus_t_xfer_req)(struct ufs_hba *,
> int, struct scsi_cmnd *);
> void(*specify_nexus_t_tm_req)(struct ufs_hba *, int, u8);
> --
> 1.7.10.4
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 04/10] scsi: ufs: add quirk not to allow reset of interrupt aggregation

2015-08-26 Thread amit daniel kachhap
A few comments below,

On Fri, Aug 21, 2015 at 2:57 PM, Alim Akhtar  wrote:
> From: Seungwon Jeon 
>
> Some host controller supports interrupt aggregation, but doesn't
> allow to reset counter and timer by s/w.
>
> Signed-off-by: Seungwon Jeon 
> Signed-off-by: Alim Akhtar 
> ---
>  drivers/scsi/ufs/ufshcd.c |3 ++-
>  drivers/scsi/ufs/ufshcd.h |6 ++
>  2 files changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
> index b441a39..35380aa 100644
> --- a/drivers/scsi/ufs/ufshcd.c
> +++ b/drivers/scsi/ufs/ufshcd.c
> @@ -3204,7 +3204,8 @@ static void ufshcd_transfer_req_compl(struct ufs_hba 
> *hba)
>  * false interrupt if device completes another request after resetting
>  * aggregation and before reading the DB.
>  */
> -   if (ufshcd_is_intr_aggr_allowed(hba))
> +   if (ufshcd_is_intr_aggr_allowed(hba) &&
> +   !(hba->quirks & UFSHCI_QUIRK_BROKEN_RESET_INTR_AGGR))
How about renaming this quirk to UFSHCI_QUIRK_SKIP_RESET_INTR_AGGR,
since the comments above note some drawbacks of the existing reset
method as well? Alternatively, this could be expressed as an option
instead of a quirk.
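Either way, a vendor driver would then opt in from its init path with
something like this hypothetical line:

	hba->quirks |= UFSHCI_QUIRK_BROKEN_RESET_INTR_AGGR;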
> ufshcd_reset_intr_aggr(hba);
>
> tr_doorbell = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL);
> diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
> index 24245c9..7986a54 100644
> --- a/drivers/scsi/ufs/ufshcd.h
> +++ b/drivers/scsi/ufs/ufshcd.h
> @@ -471,6 +471,12 @@ struct ufs_hba {
>  */
> #define UFSHCI_QUIRK_BROKEN_REQ_LIST_CLRUFS_BIT(7)
>
> +   /*
> +* This quirk needs to be enabled if host controller doesn't allow
> +* that the interrupt aggregation timer and counter are reset by s/w.
> +*/
> +   #define UFSHCI_QUIRK_BROKEN_RESET_INTR_AGGR UFS_BIT(8)
> +
> unsigned int quirks;/* Deviations from standard UFSHCI spec. */
>
> wait_queue_head_t tm_wq;
> --
> 1.7.10.4
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 02/10] scsi: ufs: add quirk to contain unconformable utrd field

2015-08-26 Thread amit daniel kachhap
A few minor comments below,

On Fri, Aug 21, 2015 at 2:57 PM, Alim Akhtar  wrote:
> From: Seungwon Jeon 
>
> UTRD(UTP Transfer Request Descriptor)'s field such as offset/length,
> especially response's has DWORD expression. This quirk can be specified
> for host controller not to conform standard.
>
> Signed-off-by: Seungwon Jeon 
> Signed-off-by: Alim Akhtar 
> ---
>  drivers/scsi/ufs/ufshcd.c |   28 +---
>  drivers/scsi/ufs/ufshcd.h |7 +++
>  2 files changed, 28 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
> index b0ade73..f882bf0 100644
> --- a/drivers/scsi/ufs/ufshcd.c
> +++ b/drivers/scsi/ufs/ufshcd.c
> @@ -1009,7 +1009,7 @@ ufshcd_send_uic_cmd(struct ufs_hba *hba, struct 
> uic_command *uic_cmd)
>   *
>   * Returns 0 in case of success, non-zero value in case of failure
>   */
> -static int ufshcd_map_sg(struct ufshcd_lrb *lrbp)
> +static int ufshcd_map_sg(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
>  {
> struct ufshcd_sg_entry *prd_table;
> struct scatterlist *sg;
> @@ -1023,8 +1023,13 @@ static int ufshcd_map_sg(struct ufshcd_lrb *lrbp)
> return sg_segments;
>
> if (sg_segments) {
> -   lrbp->utr_descriptor_ptr->prd_table_length =
> -   cpu_to_le16((u16) (sg_segments));
> +   if (hba->quirks & UFSHCI_QUIRK_BROKEN_UTRD)
> +   lrbp->utr_descriptor_ptr->prd_table_length =
> +   cpu_to_le16((u16)(sg_segments *
> +   sizeof(struct ufshcd_sg_entry)));
> +   else
> +   lrbp->utr_descriptor_ptr->prd_table_length =
> +   cpu_to_le16((u16) (sg_segments));
>
> prd_table = (struct ufshcd_sg_entry *)lrbp->ucd_prdt_ptr;
>
> @@ -1347,7 +1352,7 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, 
> struct scsi_cmnd *cmd)
>
> /* form UPIU before issuing the command */
> ufshcd_compose_upiu(hba, lrbp);
> -   err = ufshcd_map_sg(lrbp);
> +   err = ufshcd_map_sg(hba, lrbp);
> if (err) {
> lrbp->cmd = NULL;
> clear_bit_unlock(tag, &hba->lrb_in_use);
> @@ -2035,12 +2040,21 @@ static void ufshcd_host_memory_configure(struct 
> ufs_hba *hba)
> 
> cpu_to_le32(upper_32_bits(cmd_desc_element_addr));
>
> /* Response upiu and prdt offset should be in double words */
This comment should be moved down to the else branch, since only that
case still uses double words.
> -   utrdlp[i].response_upiu_offset =
> +   if (hba->quirks & UFSHCI_QUIRK_BROKEN_UTRD) {
> +   utrdlp[i].response_upiu_offset =
> +   cpu_to_le16(response_offset);
> +   utrdlp[i].prd_table_offset =
> +   cpu_to_le16(prdt_offset);
> +   utrdlp[i].response_upiu_length =
> +   cpu_to_le16(ALIGNED_UPIU_SIZE);
> +   } else {
> +   utrdlp[i].response_upiu_offset =
> cpu_to_le16((response_offset >> 2));
> -   utrdlp[i].prd_table_offset =
> +   utrdlp[i].prd_table_offset =
> cpu_to_le16((prdt_offset >> 2));
> -   utrdlp[i].response_upiu_length =
> +   utrdlp[i].response_upiu_length =
> cpu_to_le16(ALIGNED_UPIU_SIZE >> 2);
> +   }
>
> hba->lrb[i].utr_descriptor_ptr = (utrdlp + i);
> hba->lrb[i].ucd_req_ptr =
> diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
> index c40a0e7..1fa5ac1 100644
> --- a/drivers/scsi/ufs/ufshcd.h
> +++ b/drivers/scsi/ufs/ufshcd.h
> @@ -459,6 +459,13 @@ struct ufs_hba {
>  */
> #define UFSHCD_QUIRK_BROKEN_UFS_HCI_VERSION UFS_BIT(5)
>
> +   /*
> +* This quirk needs to be enabled if host controller doesn't conform
> +* with UTRD. Some fields such as offset/length might not be in 
> double word,
> +* but in byte.
> +*/
> +   #define UFSHCI_QUIRK_BROKEN_UTRDUFS_BIT(6)
This macro could be given a more meaningful name, such as
UFSHCI_QUIRK_BYTE_ALIGN_UTRD or something similar.
> +
> unsigned int quirks;/* Deviations from standard UFSHCI spec. */
>
> wait_queue_head_t tm_wq;
> --
> 1.7.10.4
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  

Re: [PATCH 01/10] phy: exynos-ufs: add UFS PHY driver for EXYNOS SoC

2015-08-26 Thread amit daniel kachhap
Hi,

A few minor comments,

On Fri, Aug 21, 2015 at 2:57 PM, Alim Akhtar  wrote:
> From: Seungwon Jeon 
>
> This patch introduces Exynos UFS PHY driver. This driver
> supports to deal with phy calibration and power control
> according to UFS host driver's behavior.
>
> Signed-off-by: Seungwon Jeon 
> Signed-off-by: Alim Akhtar 
> Cc: Kishon Vijay Abraham I 
> ---
>  .../devicetree/bindings/phy/samsung-phy.txt|   22 ++
>  drivers/phy/Kconfig|7 +
>  drivers/phy/Makefile   |1 +
>  drivers/phy/phy-exynos-ufs.c   |  277 
> 
>  drivers/phy/phy-exynos-ufs.h   |   73 ++
>  drivers/phy/phy-exynos7-ufs.h  |   89 +++
>  include/linux/phy/phy-exynos-ufs.h |  107 
>  7 files changed, 576 insertions(+)
>  create mode 100644 drivers/phy/phy-exynos-ufs.c
>  create mode 100644 drivers/phy/phy-exynos-ufs.h
>  create mode 100644 drivers/phy/phy-exynos7-ufs.h
>  create mode 100644 include/linux/phy/phy-exynos-ufs.h
>
> diff --git a/Documentation/devicetree/bindings/phy/samsung-phy.txt 
> b/Documentation/devicetree/bindings/phy/samsung-phy.txt
> index 60c6f2a..1abe2c4 100644
> --- a/Documentation/devicetree/bindings/phy/samsung-phy.txt
> +++ b/Documentation/devicetree/bindings/phy/samsung-phy.txt
> @@ -174,3 +174,25 @@ Example:
> usbdrdphy0 = &usb3_phy0;
> usbdrdphy1 = &usb3_phy1;
> };
> +
> +Samsung Exynos7 soc serise UFS PHY Controller
> +-
> +
> +UFS PHY nodes are defined to describe on-chip UFS Physical layer controllers.
> +Each UFS PHY controller should have its own node.
> +
> +Required properties:
> +- compatible: compatible list, contains "samsung,exynos7-ufs-phy"
> +- reg : offset and length of the UFS PHY register set;
> +- reg-names : reg name(s) must be 'phy-pma';
> +- #phy-cells : must be zero
> +- samsung,syscon-phandle : a phandle to the PMU system controller, no 
> arguments
> +
> +Example:
> +   ufs_phy: ufs-phy@0x15571800 {
> +   compatible = "samsung,exynos7-ufs-phy";
> +   reg = <0x15571800 0x240>;
> +   reg-names = "phy-pma";
> +   samsung,syscon-phandle = <&pmu_system_controller>;
> +   #phy-cells = <0>;
> +   };
> diff --git a/drivers/phy/Kconfig b/drivers/phy/Kconfig
> index 6b8dd16..7449376 100644
> --- a/drivers/phy/Kconfig
> +++ b/drivers/phy/Kconfig
> @@ -358,4 +358,11 @@ config PHY_BRCMSTB_SATA
>   Enable this to support the SATA3 PHY on 28nm Broadcom STB SoCs.
>   Likely useful only with CONFIG_SATA_BRCMSTB enabled.
>
> +config PHY_EXYNOS_UFS
> +   tristate "EXYNOS SoC series UFS PHY driver"
> +   depends on OF && ARCH_EXYNOS
> +   select GENERIC_PHY
> +   help
> + Support for UFS PHY on Samsung EXYNOS chipsets.
> +
>  endmenu
> diff --git a/drivers/phy/Makefile b/drivers/phy/Makefile
> index f344e1b..7a36818 100644
> --- a/drivers/phy/Makefile
> +++ b/drivers/phy/Makefile
> @@ -45,3 +45,4 @@ obj-$(CONFIG_PHY_QCOM_UFS)+= phy-qcom-ufs-qmp-14nm.o
>  obj-$(CONFIG_PHY_TUSB1210) += phy-tusb1210.o
>  obj-$(CONFIG_PHY_BRCMSTB_SATA) += phy-brcmstb-sata.o
>  obj-$(CONFIG_PHY_PISTACHIO_USB)+= phy-pistachio-usb.o
> +obj-$(CONFIG_PHY_EXYNOS_UFS)   += phy-exynos-ufs.o
> diff --git a/drivers/phy/phy-exynos-ufs.c b/drivers/phy/phy-exynos-ufs.c
> new file mode 100644
> index 000..840375d
> --- /dev/null
> +++ b/drivers/phy/phy-exynos-ufs.c
> @@ -0,0 +1,277 @@
> +/*
> + * UFS PHY driver for Samsung EXYNOS SoC
> + *
> + * Copyright (C) 2015 Samsung Electronics Co., Ltd.
> + * Author: Seungwon Jeon 
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + */
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include "phy-exynos-ufs.h"
> +
> +#define for_each_phy_lane(phy, i) \
> +   for (i = 0; i < (phy)->lane_cnt; i++)
> +#define for_each_phy_cfg(cfg) \
> +   for (; (cfg)->id; (cfg)++)
> +
> +#define phy_pma_writel(phy, val, reg) \
> +   writel((val), (phy)->reg_pma + (reg))
> +#define phy_pma_readl(phy, reg) \
> +   readl((phy)->reg_pma + (reg))
> +
> +#define PHY_DEF_LANE_CNT   1
> +
> +static inline struct exynos_ufs_phy *get_exynos_ufs_phy(struct phy *phy)
> +{
> +   return (struct exynos_ufs_phy *)phy_get_drvdata(phy);
> +}
> +
> +static void exynos_ufs_phy_config(struct exynos_ufs_phy *phy,
> +   const struct exynos_ufs_phy_cfg *cfg, u8 lane)
> +{
> +   enum {LANE_0, LANE_1}; /* lane index */
> +
> +   switch (lane) {
> +


Re: [PATCH 04/13] thermal: Fix not emulating critical temperatures

2015-03-26 Thread amit daniel kachhap
Hi Sascha,

On Thu, Mar 26, 2015 at 9:23 PM, Sascha Hauer  wrote:
> commit e6e238c38 (thermal: sysfs: Add a new sysfs node emul_temp for
> thermal emulation)  promised not to emulate critical temperatures,
> but the check for critical temperatures is broken in multiple ways:
>
> - The code should only accept an emulated temperature when the emulated
>   temperature is lower than the critical temperature. Instead the code
>   accepts an emulated temperature whenever the real temperature is lower
>   than the critical temperature. This makes no sense and trying to
>   emulate a temperature higher than the critical temperature halts the
>   system.
Even temperatures higher than the critical temperature should be accepted.
See my further comments below.
> - When trying to emulate a higher-than-critical temperature we should either
>   limit the emulated temperature to the maximum non critical temperature
>   or refuse to emulate this temperature. Instead the code just silently
>   ignores the emulated temperature and continues with the real temperature.
>
> This patch moves the test for illegal emulated temperature to the sysfs
> write function so that we can properly refuse illegal temperatures here.
> Trying to write illegal temperatures results in an error message. While
> at it use IS_ENABLED() instead of #ifdefs.
>
> Signed-off-by: Sascha Hauer 
> ---
>  drivers/thermal/thermal_core.c | 46 
> ++
>  1 file changed, 24 insertions(+), 22 deletions(-)
>
> diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
> index dcea909..ebca854 100644
> --- a/drivers/thermal/thermal_core.c
> +++ b/drivers/thermal/thermal_core.c
> @@ -414,11 +414,6 @@ static void handle_thermal_trip(struct 
> thermal_zone_device *tz, int trip)
>  int thermal_zone_get_temp(struct thermal_zone_device *tz, unsigned long 
> *temp)
>  {
> int ret = -EINVAL;
> -#ifdef CONFIG_THERMAL_EMULATION
> -   int count;
> -   unsigned long crit_temp = -1UL;
> -   enum thermal_trip_type type;
> -#endif
>
> if (!tz || IS_ERR(tz) || !tz->ops->get_temp)
> goto exit;
> @@ -426,25 +421,10 @@ int thermal_zone_get_temp(struct thermal_zone_device 
> *tz, unsigned long *temp)
> mutex_lock(&tz->lock);
>
> ret = tz->ops->get_temp(tz, temp);
> -#ifdef CONFIG_THERMAL_EMULATION
> -   if (!tz->emul_temperature)
> -   goto skip_emul;
> -
> -   for (count = 0; count < tz->trips; count++) {
> -   ret = tz->ops->get_trip_type(tz, count, &type);
> -   if (!ret && type == THERMAL_TRIP_CRITICAL) {
> -   ret = tz->ops->get_trip_temp(tz, count, &crit_temp);
> -   break;
> -   }
> -   }
> -
> -   if (ret)
> -   goto skip_emul;
>
> -   if (*temp < crit_temp)
I guess this check is confusing. Instead of returning the emulated
temperature it returns the actual temperature. But the important thing
to note here is that the actual temperature is higher than the critical
temperature, so this check prevents the user from suppressing the
critical temperature and hence from burning up the chip.
> +   if (IS_ENABLED(CONFIG_THERMAL_EMULATION) && tz->emul_temperature)
> *temp = tz->emul_temperature;
> -skip_emul:
> -#endif
> +
> mutex_unlock(&tz->lock);
>  exit:
> return ret;
> @@ -788,10 +768,32 @@ emul_temp_store(struct device *dev, struct 
> device_attribute *attr,
> struct thermal_zone_device *tz = to_thermal_zone(dev);
> int ret = 0;
> unsigned long temperature;
> +   int trip;
> +   unsigned long crit_temp;
> +   enum thermal_trip_type type;
>
> if (kstrtoul(buf, 10, &temperature))
> return -EINVAL;
>
> +   for (trip = 0; trip < tz->trips; trip++) {
> +   ret = tz->ops->get_trip_type(tz, trip, &type);
> +   if (ret)
> +   return ret;
> +
> +   if (type != THERMAL_TRIP_CRITICAL)
> +   continue;
> +
> +   ret = tz->ops->get_trip_temp(tz, trip, &crit_temp);
> +   if (ret)
> +   return ret;
> +
> +   if (temperature >= crit_temp) {
> +   dev_err(&tz->device, "Will not emulate critical temperature %luC (tcrit=%luC)\n",
> +   temperature / 1000, crit_temp / 1000);
> +   return -EINVAL;
> +   }
Emulating the critical temperature is very much needed.
> +   }
> +
> if (!tz->ops->set_emul_temp) {
> mutex_lock(>lock);
> tz->emul_temperature = temperature;
> --
> 2.1.4
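
For reference, the knob under discussion is driven from user space roughly as
below; a minimal sketch, assuming thermal zone 0 and a kernel built with
CONFIG_THERMAL_EMULATION (the zone index is an assumption, paths may differ):

#include <stdio.h>

int main(void)
{
	const char *emul = "/sys/class/thermal/thermal_zone0/emul_temp";
	const char *temp = "/sys/class/thermal/thermal_zone0/temp";
	long val = 0;
	FILE *f;

	f = fopen(emul, "w");
	if (!f) {
		perror(emul);	/* no such zone or emulation disabled */
		return 1;
	}
	/* Emulate 45.000 C; with this patch, a value at or above the
	 * critical trip temperature would be rejected with -EINVAL. */
	fprintf(f, "%d", 45000);
	fclose(f);

	f = fopen(temp, "r");
	if (f) {
		fscanf(f, "%ld", &val);
		fclose(f);
	}
	printf("reported temperature: %ld mC\n", val);
	return 0;
}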


Re: [PATCH RFC v4 1/3] PM / Runtime: Add an API pm_runtime_set_slave

2015-02-12 Thread amit daniel kachhap
Hi Alan,

On Mon, Feb 9, 2015 at 9:28 PM, Alan Stern  wrote:
> On Mon, 9 Feb 2015, Amit Daniel Kachhap wrote:
>
>> This API creates a pm runtime slave type device which does not itself
>> participates in pm runtime but depends on the master devices to power
>> manage them.
>
> This makes no sense.  How can a master device manage a slave device?
> Devices are managed by drivers, not by other devices.
Maybe my commit message does not explain the requirements completely, and
the API name may not reflect its purpose. But if you look at the 3rd patch,
it has one clock use-case where this new feature is used and the
current pm runtime framework is not sufficient to handle it. I also
have an IOMMU use case which is not part of this patch
series.
I am not sure if this approach is final, but I looked at the runtime.c file
and it has a couple of APIs like pm_runtime_forbid/allow which
block/unblock the runtime callbacks according to driver requirements.
Along similar lines I added this new API.
>
>>  These devices should have pm runtime callbacks.
>>
>> These devices (like clock) may not implement complete pm_runtime calls
>> such as pm_runtime_get/pm_runtime_put due to subsystems interaction
>> behaviour or any other reason.
>>
>> Signed-off-by: Amit Daniel Kachhap 
>> ---
>>  drivers/base/power/runtime.c | 18 ++
>>  include/linux/pm.h   |  1 +
>>  include/linux/pm_runtime.h   |  2 ++
>>  3 files changed, 21 insertions(+)
>
> This patch is unacceptable because it does not update the runtime PM
> documentation file.
My fault. Will update in the next version.
>
> Besides, doesn't the no_callbacks flag already do more or less what you
> want?
Yes, to some extent. But I thought its purpose is different, so I added one more.

Regards,
Amit D
>
> Alan Stern
>
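
For comparison, the existing helper Alan refers to is set from a driver's
probe path roughly like this (a sketch only; the probe function is invented,
pm_runtime_no_callbacks() itself is the existing API):

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

/* Wired into a platform_driver's .probe in a real driver. */
static int example_probe(struct platform_device *pdev)
{
	/* Tell the PM core this device has no runtime callbacks of its
	 * own, so they are skipped entirely. The proposed slave flag
	 * differs: the callbacks exist, but the domain drives them. */
	pm_runtime_no_callbacks(&pdev->dev);
	pm_runtime_enable(&pdev->dev);
	return 0;
}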


[PATCH RFC v4 2/3] PM / Domains: Save restore slave pm runtime devices

2015-02-09 Thread Amit Daniel Kachhap
Based on the runtime request of the active device, the callbacks of
the passive pm runtime devices will be invoked.

Signed-off-by: Amit Daniel Kachhap 
---
 drivers/base/power/domain.c | 28 
 1 file changed, 28 insertions(+)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index c5280f2..160e74a 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -49,6 +49,7 @@
 
 static LIST_HEAD(gpd_list);
 static DEFINE_MUTEX(gpd_list_lock);
+static void __pm_genpd_restore_devices(struct generic_pm_domain *genpd);
 
 static struct generic_pm_domain *pm_genpd_lookup_name(const char *domain_name)
 {
@@ -176,6 +177,8 @@ static int genpd_power_on(struct generic_pm_domain *genpd)
pr_warn("%s: Power-%s latency exceeded, new value %lld ns\n",
genpd->name, "on", elapsed_ns);
 
+   __pm_genpd_restore_devices(genpd);
+
return ret;
 }
 
@@ -397,6 +400,9 @@ static int __pm_genpd_save_device(struct pm_domain_data 
*pdd,
struct device *dev = pdd->dev;
int ret = 0;
 
+   if (dev->power.slave == true)
+   gpd_data->need_restore = 0;
+
if (gpd_data->need_restore > 0)
return 0;
 
@@ -453,6 +459,28 @@ static void __pm_genpd_restore_device(struct 
pm_domain_data *pdd,
 }
 
 /**
+ * __pm_genpd_restore_devices - Restore the pre-suspend state of all devices
+ * according to the restore flag.
+ * @genpd: PM domain the device belongs to.
+ */
+static void __pm_genpd_restore_devices(struct generic_pm_domain *genpd)
+{
+   struct pm_domain_data *pdd;
+   struct generic_pm_domain_data *gpd_data;
+   struct device *dev;
+
+   /* Force restore the devices according to the restore flag */
+   list_for_each_entry(pdd, &genpd->dev_list, list_node) {
+   dev = pdd->dev;
+   gpd_data = to_gpd_data(pdd);
+   if (dev->power.slave == true) {
+   gpd_data->need_restore = 1;
+   __pm_genpd_restore_device(pdd, genpd);
+   }
+   }
+}
+
+/**
  * genpd_abort_poweroff - Check if a PM domain power off should be aborted.
  * @genpd: PM domain to check.
  *
-- 
2.2.0



[PATCH RFC v4 3/3] clk: samsung: Add PM runtime support for clocks.

2015-02-09 Thread Amit Daniel Kachhap
This patch adds PM runtime support for clocks associated with a Power
Domain. The PM runtime suspend/resume handlers will be called when the
power domain associated with them is turned on/off.

The registration of clocks happens in early initialisation. The probe
is called later to register the clock device with the power domain.

Signed-off-by: Amit Daniel Kachhap 
---
 drivers/clk/samsung/clk.c | 93 +++
 drivers/clk/samsung/clk.h | 11 ++
 2 files changed, 104 insertions(+)

diff --git a/drivers/clk/samsung/clk.c b/drivers/clk/samsung/clk.c
index 4bda540..0b5c82a 100644
--- a/drivers/clk/samsung/clk.c
+++ b/drivers/clk/samsung/clk.c
@@ -12,6 +12,9 @@
 */
 
 #include 
+#include 
+#include 
+#include 
 #include 
 
 #include "clk.h"
@@ -370,6 +373,92 @@ static void samsung_clk_sleep_init(void __iomem *reg_base,
unsigned long nr_rdump) {}
 #endif
 
+static int samsung_cmu_runtime_suspend(struct device *dev)
+{
+   struct samsung_clock_pd_reg_cache *reg_cache;
+
+   reg_cache = dev_get_drvdata(dev);
+   samsung_clk_save(reg_cache->reg_base, reg_cache->rdump,
+   reg_cache->rd_num);
+   return 0;
+}
+
+static int samsung_cmu_runtime_resume(struct device *dev)
+{
+   struct samsung_clock_pd_reg_cache *reg_cache;
+
+   reg_cache = dev_get_drvdata(dev);
+   samsung_clk_restore(reg_cache->reg_base, reg_cache->rdump,
+   reg_cache->rd_num);
+   return 0;
+}
+
+#define MAX_CMU_DEVICE_MATCH   50
+static int samsung_cmu_count;
+static struct of_device_id samsung_cmu_match[MAX_CMU_DEVICE_MATCH];
+MODULE_DEVICE_TABLE(of, samsung_cmu_match);
+
+static void samsung_clk_pd_init(struct device_node *np, void __iomem *reg_base,
+   struct samsung_cmu_info *cmu)
+{
+   struct samsung_clock_pd_reg_cache *pd_reg_cache;
+   const char *name;
+
+   if (samsung_cmu_count == MAX_CMU_DEVICE_MATCH)
+   panic("Maximum clock device limit reached.\n");
+
+   if (of_property_read_string_index(np, "compatible", 0, &name))
+   panic("Invalid DT node.\n");
+
+   pd_reg_cache = kzalloc(sizeof(struct samsung_clock_pd_reg_cache),
+   GFP_KERNEL);
+   if (!pd_reg_cache)
+   panic("Could not allocate register reg_cache.\n");
+
+   pd_reg_cache->rdump = samsung_clk_alloc_reg_dump(cmu->pd_clk_regs,
+   cmu->nr_pd_clk_regs);
+   if (!pd_reg_cache->rdump)
+   panic("Could not allocate register dump storage.\n");
+
+   pd_reg_cache->reg_base = reg_base;
+   pd_reg_cache->rd_num = cmu->nr_pd_clk_regs;
+
+   /* Fill up the compatible string and data */
+   samsung_cmu_match[samsung_cmu_count].data = pd_reg_cache;
+   strcpy(samsung_cmu_match[samsung_cmu_count].compatible, name);
+   samsung_cmu_count++;
+}
+
+static int __init samsung_cmu_probe(struct platform_device *pdev)
+{
+   struct device *dev = &pdev->dev;
+   struct of_device_id *match;
+
+   /* get the platform data */
+   match = (struct of_device_id *)of_match_node(samsung_cmu_match,
+   pdev->dev.of_node);
+   if (!match)
+   return 0;
+   platform_set_drvdata(pdev, (void *)match->data);
+   pm_runtime_enable(dev);
+   pm_runtime_set_slave(dev);
+   return 0;
+}
+
+static const struct dev_pm_ops samsung_cmu_pm_ops = {
+   SET_RUNTIME_PM_OPS(samsung_cmu_runtime_suspend,
+   samsung_cmu_runtime_resume, NULL)
+};
+
+static struct platform_driver samsung_cmu_driver = {
+   .driver = {
+   .name   = "exynos-clk",
+   .of_match_table = samsung_cmu_match,
+   .pm = &samsung_cmu_pm_ops,
+   },
+   .probe = samsung_cmu_probe,
+};
+
 /*
  * Common function which registers plls, muxes, dividers and gates
  * for each CMU. It also add CMU register list to register cache.
@@ -409,5 +498,9 @@ void __init samsung_cmu_register_one(struct device_node *np,
samsung_clk_sleep_init(reg_base, cmu->clk_regs,
cmu->nr_clk_regs);
 
+   if (cmu->pd_clk_regs)
+   samsung_clk_pd_init(np, reg_base, cmu);
+
samsung_clk_of_add_provider(np, ctx);
 }
+module_platform_driver(samsung_cmu_driver);
diff --git a/drivers/clk/samsung/clk.h b/drivers/clk/samsung/clk.h
index 8acabe1..7565be8 100644
--- a/drivers/clk/samsung/clk.h
+++ b/drivers/clk/samsung/clk.h
@@ -327,6 +327,12 @@ struct samsung_clock_reg_cache {
unsigned int rd_num;
 };
 
+struct samsung_clock_pd_reg_cache {
+   void __iomem *reg_base;
+   struct samsung_clk_reg_dump *rdump;
+   unsigned int rd_num;
+};
+
 struct samsung_cmu_info {
/* list of pll clocks and respective count */
st

[PATCH RFC v4 1/3] PM / Runtime: Add an API pm_runtime_set_slave

2015-02-09 Thread Amit Daniel Kachhap
This API creates a pm runtime slave type device which does not itself
participate in pm runtime but depends on the master devices to power
manage it. These devices should have pm runtime callbacks.

These devices (like clocks) may not implement the complete pm_runtime calls
such as pm_runtime_get/pm_runtime_put due to subsystem interaction
behaviour or any other reason.

Signed-off-by: Amit Daniel Kachhap 
---
 drivers/base/power/runtime.c | 18 ++
 include/linux/pm.h   |  1 +
 include/linux/pm_runtime.h   |  2 ++
 3 files changed, 21 insertions(+)

diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index 5070c4f..e247f08 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -1252,6 +1252,24 @@ void pm_runtime_no_callbacks(struct device *dev)
 EXPORT_SYMBOL_GPL(pm_runtime_no_callbacks);
 
 /**
+ * pm_runtime_set_slave - Set these devices as slave.
+ * @dev: Device to handle.
+ *
+ * Set the power.slave flag, which tells the PM core that this device is
+ * power-managed by the master device through the runtime callbacks. Since it
+ * will not manage runtime callbacks on its own, the runtime sysfs attributes
+ * are removed.
+ */
+void pm_runtime_set_slave(struct device *dev)
+{
+   spin_lock_irq(&dev->power.lock);
+   dev->power.slave = true;
+   spin_unlock_irq(&dev->power.lock);
+   if (device_is_registered(dev))
+   rpm_sysfs_remove(dev);
+}
+
+/**
  * pm_runtime_irq_safe - Leave interrupts disabled during callbacks.
  * @dev: Device to handle
  *
diff --git a/include/linux/pm.h b/include/linux/pm.h
index 8b59763..a4090ef 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -581,6 +581,7 @@ struct dev_pm_info {
unsigned intuse_autosuspend:1;
unsigned inttimer_autosuspends:1;
unsigned intmemalloc_noio:1;
+   unsigned intslave:1;
enum rpm_requestrequest;
enum rpm_status runtime_status;
int runtime_error;
diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h
index 30e84d4..0707a4b 100644
--- a/include/linux/pm_runtime.h
+++ b/include/linux/pm_runtime.h
@@ -47,6 +47,7 @@ extern void __pm_runtime_disable(struct device *dev, bool 
check_resume);
 extern void pm_runtime_allow(struct device *dev);
 extern void pm_runtime_forbid(struct device *dev);
 extern void pm_runtime_no_callbacks(struct device *dev);
+extern void pm_runtime_set_slave(struct device *dev);
 extern void pm_runtime_irq_safe(struct device *dev);
 extern void __pm_runtime_use_autosuspend(struct device *dev, bool use);
 extern void pm_runtime_set_autosuspend_delay(struct device *dev, int delay);
@@ -168,6 +169,7 @@ static inline bool pm_runtime_suspended_if_enabled(struct 
device *dev) { return
 static inline bool pm_runtime_enabled(struct device *dev) { return false; }
 
 static inline void pm_runtime_no_callbacks(struct device *dev) {}
+static inline void pm_runtime_set_slave(struct device *dev) {}
 static inline void pm_runtime_irq_safe(struct device *dev) {}
 static inline bool pm_runtime_is_irq_safe(struct device *dev) { return false; }
 
-- 
2.2.0
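
As a usage illustration, the intended call site mirrors what patch 3/3 does in
its probe; a sketch with an invented driver name (pm_runtime_set_slave is the
API proposed by this patch, not an existing one):

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int foo_runtime_suspend(struct device *dev)
{
	/* save device state */
	return 0;
}

static int foo_runtime_resume(struct device *dev)
{
	/* restore device state */
	return 0;
}

static const struct dev_pm_ops foo_pm_ops = {
	SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
};

static int foo_probe(struct platform_device *pdev)
{
	/* Provide callbacks but never call get/put ourselves; mark the
	 * device so the PM domain drives its suspend/resume instead. */
	pm_runtime_enable(&pdev->dev);
	pm_runtime_set_slave(&pdev->dev);
	return 0;
}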



[PATCH RFC v4 0/3] PM / Runtime: Feature to power manage slave devices

2015-02-09 Thread Amit Daniel Kachhap
This series is somewhat similar to the V3 version but uses the pm runtime
framework to register/fetch the information about whether a device is a slave
or a master. Since the power domain has the necessary information about its
devices, it invokes the save/restore runtime callbacks for the slave devices.

Kevin and others objected that V3 was hacky in nature, so this revision fixes
that by keeping the slave device information in the pm runtime framework.

The model works like this (a toy model in code follows the diagram):

          DEV1 (Attaches itself with the PD but has no support for
         /      pm_runtime_get and pm_runtime_put. Registers itself as a
        /       pm runtime slave device. Its local runtime_suspend/resume
       /        is invoked by the PD.)
      /
PD ------- DEV2 (Implements complete PM runtime and calls pm_runtime_get and
      \          pm_runtime_put. This in turn invokes PD On/Off.)
       \
          DEV3 (Similar to DEV1)
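
The same flow in a tiny user-space model (all names invented; DEV2's get/put
stand in for the domain on/off, and the domain saves/restores the slaves):

#include <stdio.h>

struct slave {
	const char *name;
	int saved;
};

/* Domain off: runtime-suspend every slave so its registers are saved. */
static void pd_off(struct slave *s, int n)
{
	for (int i = 0; i < n; i++) {
		s[i].saved = 1;
		printf("%s: state saved\n", s[i].name);
	}
}

/* Domain on: force-restore every slave (the need_restore case). */
static void pd_on(struct slave *s, int n)
{
	for (int i = 0; i < n; i++) {
		if (s[i].saved)
			printf("%s: state restored\n", s[i].name);
		s[i].saved = 0;
	}
}

int main(void)
{
	struct slave slaves[] = { { "DEV1", 0 }, { "DEV3", 0 } };

	pd_off(slaves, 2);	/* DEV2: pm_runtime_put -> domain powers off */
	pd_on(slaves, 2);	/* DEV2: pm_runtime_get -> domain powers on */
	return 0;
}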

This work is a continuation of earlier work:

In V3: Used the existing API pm_genpd_dev_need_restore to add a feature to force
devices to perform restore/save during every power domain on/off
operation. This API is now removed.
Link (https://lkml.org/lkml/2014/12/13/74)

In V2: Completely removed notifiers and added support for a huge clock list to
be suspended and restored. There was an issue with this, as some clocks are
not exposed and are just initialised by bootloaders; this approach
required all of them to be exposed, which is cumbersome. Also, some
clock APIs such as set_parent disable the original parent clocks,
which is not required.
Link (https://lkml.org/lkml/2014/11/24/290)

In V1: Implemented PM Domain notifiers such as PD_ON_PRE, PD_ON_POST,
PD_OFF_PRE and PD_OFF_POST. This was not supported and other
options were suggested. Link
(http://www.spinics.net/lists/linux-samsung-soc/msg38442.html)

This work may also assist the exynos iommu pm runtime support posted earlier by Marek:
http://lists.linaro.org/pipermail/linaro-mm-sig/2014-August/004099.html

Amit Daniel Kachhap (3):
  PM / Runtime: Add an API pm_runtime_set_slave
  PM / Domains: Save restore slave pm runtime devices
  clk: samsung: Add PM runtime support for clocks.

 drivers/base/power/domain.c  | 28 +
 drivers/base/power/runtime.c | 18 +
 drivers/clk/samsung/clk.c| 93 
 drivers/clk/samsung/clk.h| 11 ++
 include/linux/pm.h   |  1 +
 include/linux/pm_runtime.h   |  2 +
 6 files changed, 153 insertions(+)

-- 
2.2.0
