On 2/3/21 6:00 AM, Jarkko Sakkinen wrote:
On Mon, Feb 01, 2021 at 09:26:50PM +0800, Tianjia Zhang wrote:
The spin lock of sgx_epc_section protects only the page_list. The
EREMOVE operation and the init_laundry_list do not need to be inside
the lock's critical section. This patch narrows the scope of the spin
lock in sgx_sanitize_section() so that it covers only the page_list
operation.

Suggested-by: Sean Christopherson <sea...@google.com>
Signed-off-by: Tianjia Zhang <tianjia.zh...@linux.alibaba.com>

I'm not confident that this change has any practical value.

/Jarkko


Since this code only runs during initialization, the effect of this
optimization may not be noticeable. If possible, the critical section
could instead be moved outward to protect the entire while loop.
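
For reference, here is roughly how sgx_sanitize_section() reads with
this patch applied. The parts not visible in the quoted hunk (the
declarations and the trailing list_splice()) are reconstructed from the
surrounding mainline source and may not match this tree exactly:

static void sgx_sanitize_section(struct sgx_epc_section *section)
{
	struct sgx_epc_page *page;
	LIST_HEAD(dirty);
	int ret;

	while (!list_empty(&section->init_laundry_list)) {
		if (kthread_should_stop())
			return;

		page = list_first_entry(&section->init_laundry_list,
					struct sgx_epc_page, list);

		/* EREMOVE now runs outside the critical section. */
		ret = __eremove(sgx_get_epc_virt_addr(page));
		if (!ret) {
			/* ->lock is only needed for access to ->page_list: */
			spin_lock(&section->lock);
			list_move(&page->list, &section->page_list);
			spin_unlock(&section->lock);
		} else {
			list_move_tail(&page->list, &dirty);
		}

		cond_resched();
	}

	list_splice(&dirty, &section->init_laundry_list);
}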

Best regards,
Tianjia
---
  arch/x86/kernel/cpu/sgx/main.c | 11 ++++-------
  1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/cpu/sgx/main.c b/arch/x86/kernel/cpu/sgx/main.c
index c519fc5f6948..4465912174fd 100644
--- a/arch/x86/kernel/cpu/sgx/main.c
+++ b/arch/x86/kernel/cpu/sgx/main.c
@@ -41,20 +41,17 @@ static void sgx_sanitize_section(struct sgx_epc_section *section)
                if (kthread_should_stop())
                        return;
-               /* needed for access to ->page_list: */
-               spin_lock(&section->lock);
-
                page = list_first_entry(&section->init_laundry_list,
                                        struct sgx_epc_page, list);
                ret = __eremove(sgx_get_epc_virt_addr(page));
-               if (!ret)
+               if (!ret) {
+                       spin_lock(&section->lock);
                        list_move(&page->list, &section->page_list);
-               else
+                       spin_unlock(&section->lock);
+               } else
                        list_move_tail(&page->list, &dirty);
-               spin_unlock(&section->lock);
-
                cond_resched();
        }
--
2.19.1.3.ge56e4f7

