Hi Boris,
On 03/24/2017 12:12 PM, Borislav Petkov wrote:
}
+static inline int __init early_set_memory_decrypted(void *addr,
+ unsigned long size)
+{
+ return 1;
^^^^^^^^
return 1 when !CONFIG_AMD_MEM_ENCRYPT ?
The non-early variants return 0.
I will fix it to use the same return value (0) as the non-early variants.
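i.e. something along these lines, so that the !CONFIG_AMD_MEM_ENCRYPT stubs
match the non-early stubs (sketch only):

static inline int __init early_set_memory_decrypted(void *addr,
						     unsigned long size)
{
	return 0;	/* same as the non-early stub */
}

static inline int __init early_set_memory_encrypted(void *addr,
						     unsigned long size)
{
	return 0;	/* same as the non-early stub */
}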
+}
+
+static inline int __init early_set_memory_encrypted(void *addr,
+ unsigned long size)
+{
+ return 1;
+}
+
#define __sme_pa __pa
+ unsigned long pfn, npages;
+ unsigned long addr = (unsigned long)vaddr & PAGE_MASK;
+
+ /* We are going to change the physical page attribute from C=1 to C=0.
+ * Flush the caches to ensure that all the data with C=1 is flushed to
+ * memory. Any caching of the vaddr after function returns will
+ * use C=0.
+ */
Kernel comments style is:
/*
* A sentence ending with a full-stop.
* Another sentence. ...
* More sentences. ...
*/
I will update it to use the kernel comment style.
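So the comment would read like this (same wording, just reflowed):

	/*
	 * We are going to change the physical page attribute from C=1 to
	 * C=0. Flush the caches to ensure that all the data with C=1 is
	 * flushed to memory. Any caching of the vaddr after function
	 * returns will use C=0.
	 */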
+ clflush_cache_range(vaddr, size);
+
+ npages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+ pfn = slow_virt_to_phys((void *)addr) >> PAGE_SHIFT;
+
+ return kernel_map_pages_in_pgd(init_mm.pgd, pfn, addr, npages,
+ flags & ~sme_me_mask);
+
+}
+
+int __init early_set_memory_decrypted(void *vaddr, unsigned long size)
+{
+ unsigned long flags = get_pte_flags((unsigned long)vaddr);
So this does lookup_address()...
+ return early_set_memory_enc_dec(vaddr, size, flags & ~sme_me_mask);
... and this does it too in slow_virt_to_phys(). So you do it twice per
vaddr.
So why don't you define a __slow_virt_to_phys() helper - notice
the "__" - which returns flags in its second parameter and which
slow_virt_to_phys() calls with a NULL second parameter in the other
cases?
I will look into creating a helper function along those lines. Thanks for the suggestion.
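Something like the below is what I have in mind (completely untested sketch;
the name and the second parameter follow your suggestion, and the existing
level-aware pfn/offset calculation in slow_virt_to_phys() would stay as it is
today):

static phys_addr_t __slow_virt_to_phys(void *__virt_addr, unsigned long *flags)
{
	unsigned long virt_addr = (unsigned long)__virt_addr;
	unsigned int level;
	pte_t *pte;

	pte = lookup_address(virt_addr, &level);
	BUG_ON(!pte);

	/* Hand the PTE flags back so callers do not need a second lookup. */
	if (flags)
		*flags = pte_flags(*pte);

	/* ... existing level-aware pfn/offset calculation unchanged ... */
}

phys_addr_t slow_virt_to_phys(void *__virt_addr)
{
	return __slow_virt_to_phys(__virt_addr, NULL);
}

With that, early_set_memory_enc_dec() can get both the pfn and the PTE flags
from a single lookup_address() per vaddr.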
-Brijesh