Hi Dmitrii,

On 5/15/2024 6:12 AM, Dmitrii Kuvaiskii wrote:
> Two enclave threads may try to access the same non-present enclave page
> simultaneously (e.g., if the SGX runtime supports lazy allocation). The
> threads will end up in sgx_encl_eaug_page(), racing to acquire the
> enclave lock. The winning thread will perform EAUG, set up the page
> table entry, and insert the page into encl->page_array. The losing
> thread will then get -EBUSY on xa_insert(&encl->page_array) and proceed
> to the error handling path.
> 
> This race condition can be illustrated as follows:
> 
> /*                             /*
>  * Fault on CPU1                * Fault on CPU2
>  * on enclave page X            * on enclave page X
>  */                             */
> sgx_vma_fault() {              sgx_vma_fault() {
> 
>   xa_load(&encl->page_array)     xa_load(&encl->page_array)
>       == NULL -->                    == NULL -->
> 
>   sgx_encl_eaug_page() {         sgx_encl_eaug_page() {
> 
>     ...                            ...
> 
>     /*                             /*
>      * alloc encl_page              * alloc encl_page
>      */                             */
>                                    mutex_lock(&encl->lock);
>                                    /*
>                                     * alloc EPC page
>                                     */
>                                    epc_page = sgx_alloc_epc_page(...);
>                                    /*
>                                     * add page to enclave's xarray
>                                     */
>                                    xa_insert(&encl->page_array, ...);
>                                    /*
>                                     * add page to enclave via EAUG
>                                     * (page is in pending state)
>                                     */
>                                    /*
>                                     * add PTE entry
>                                     */
>                                    vmf_insert_pfn(...);
> 
>                                    mutex_unlock(&encl->lock);
>                                    return VM_FAULT_NOPAGE;
>                                  }
>                                }
>                                /*
>                                 * All good up to here: enclave page
>                                 * successfully added to enclave,
>                                 * ready for EACCEPT from user space
>                                 */
>     mutex_lock(&encl->lock);
>     /*
>      * alloc EPC page
>      */
>     epc_page = sgx_alloc_epc_page(...);
>     /*
>      * add page to enclave's xarray,
>      * this fails with -EBUSY as this
>      * page was already added by CPU2
>      */
>     xa_insert(&encl->page_array, ...);
> 
>   err_out_shrink:
>     sgx_encl_free_epc_page(epc_page) {
>       /*
>        * remove page via EREMOVE
>        *
>        * *BUG*: page added by CPU2 is
>        * yanked from enclave while it
>        * remains accessible from OS
>        * perspective (PTE installed)
>        */
>       /*
>        * free EPC page
>        */
>       sgx_free_epc_page(epc_page);
>     }
> 
>     mutex_unlock(&encl->lock);
>     /*
>      * *BUG*: SIGBUS is returned
>      * for a valid enclave page
>      */
>     return VM_FAULT_SIGBUS;
>   }
> }
> 
> The err_out_shrink error handling path contains two bugs: (1)
> sgx_encl_free_epc_page() is called and performs EREMOVE even though the
> enclave page was never intended to be removed, and (2) SIGBUS is sent to
> userspace even though the enclave page was correctly installed by another
> thread.
> 
> The first bug renders the enclave page perpetually inaccessible (until
> a subsequent SGX_IOC_ENCLAVE_REMOVE_PAGES ioctl). This is because the
> page is marked accessible in the PTE but is not EAUGed, so any
> subsequent access to it raises a fault: the kernel believes there to be
> a valid VMA, but the unlikely error code X86_PF_SGX encountered in the
> code path do_user_addr_fault() -> access_error() causes the SGX driver's
> sgx_vma_fault() to be skipped, and user space receives a SIGSEGV
> instead. The userspace SIGSEGV handler cannot perform EACCEPT because
> the page was not EAUGed. Thus, user space is stuck with an inaccessible
> page. The second bug is less severe: a spurious SIGBUS signal is
> unnecessarily sent to user space.
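> 
> For reference, the skip happens in the generic x86 fault handler. A
> simplified sketch of the relevant check in access_error() (paraphrased,
> not the exact upstream code) is:
> 
>   static inline int
>   access_error(unsigned long error_code, struct vm_area_struct *vma)
>   {
>           /*
>            * SGX hardware blocked the access. The kernel cannot fix the
>            * cause of the fault, so it is treated as an access error:
>            * the VMA's fault handler (here sgx_vma_fault()) is never
>            * invoked and user space receives a SIGSEGV.
>            */
>           if (unlikely(error_code & X86_PF_SGX))
>                   return 1;
> 
>           /* ... remaining permission checks ... */
>           return 0;
>   }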
> 
> Fix these two bugs (1) by returning VM_FAULT_NOPAGE to the generic Linux
> fault handler so that no signal is sent to userspace, and (2) by
> replacing sgx_encl_free_epc_page() with sgx_free_epc_page() so that no
> EREMOVE is performed.
> 
> Note that sgx_encl_free_epc_page() performs an additional WARN_ON_ONCE
> check compared to sgx_free_epc_page(): whether the EPC page is tracked
> by the reclaimer. However, the EPC page is allocated in
> sgx_encl_eaug_page() and has zeroed-out flags in all error handling
> paths. In other words, the page is marked as reclaimable only in the
> happy path of sgx_encl_eaug_page(). Therefore, in the particular code
> path affected by this commit, the "page reclaimer tracked" condition is
> always false and the warning is never printed. Thus, it is safe to
> replace sgx_encl_free_epc_page() with sgx_free_epc_page().
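> 
> For clarity, a simplified paraphrase of sgx_encl_free_epc_page() (the
> exact upstream code also reports EREMOVE failures) is:
> 
>   void sgx_encl_free_epc_page(struct sgx_epc_page *page)
>   {
>           /* the WARN_ON_ONCE discussed above: the page must not be
>            * tracked by the reclaimer at this point */
>           WARN_ON_ONCE(page->flags & SGX_EPC_PAGE_RECLAIMER_TRACKED);
> 
>           /* EREMOVE: yank the page from the enclave */
>           if (__eremove(sgx_get_epc_virt_addr(page)))
>                   return;
> 
>           /* return the EPC page to the free pool */
>           sgx_free_epc_page(page);
>   }
> 
> In the -EBUSY path no EREMOVE should be performed at all, so only the
> last step, sgx_free_epc_page(), is wanted.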
> 
> Fixes: 5a90d2c3f5ef ("x86/sgx: Support adding of pages to an initialized enclave")
> Cc: sta...@vger.kernel.org
> Reported-by: Marcelina Koƛcielnicka <m...@invisiblethingslab.com>
> Suggested-by: Reinette Chatre <reinette.cha...@intel.com>
> Signed-off-by: Dmitrii Kuvaiskii <dmitrii.kuvais...@intel.com>
> ---
>  arch/x86/kernel/cpu/sgx/encl.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
> index 279148e72459..41f14b1a3025 100644
> --- a/arch/x86/kernel/cpu/sgx/encl.c
> +++ b/arch/x86/kernel/cpu/sgx/encl.c
> @@ -382,8 +382,11 @@ static vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
>        * If ret == -EBUSY then page was created in another flow while
>        * running without encl->lock
>        */
> -     if (ret)
> +     if (ret) {
> +             if (ret == -EBUSY)
> +                     vmret = VM_FAULT_NOPAGE;
>               goto err_out_shrink;
> +     }
>  
>       pginfo.secs = (unsigned long)sgx_get_epc_virt_addr(encl->secs.epc_page);
>       pginfo.addr = encl_page->desc & PAGE_MASK;
> @@ -419,7 +422,7 @@ static vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma,
>  err_out_shrink:
>       sgx_encl_shrink(encl, va_page);
>  err_out_epc:
> -     sgx_encl_free_epc_page(epc_page);
> +     sgx_free_epc_page(epc_page);
>  err_out_unlock:
>       mutex_unlock(&encl->lock);
>       kfree(encl_page);

Thank you very much. I understand the changelog is still being discussed;
the changes themselves look good to me, so you can add:

Reviewed-by: Reinette Chatre <reinette.cha...@intel.com>

Reinette
