Re: [PATCH] arm: Fix stack alignment during interrupt handling

2022-01-13 Thread Sebastian Huber

On 13/01/2022 13:34, Sebastian Huber wrote:

On a public interface, the stack pointer must be aligned on an 8-byte
boundary.  However, it may temporarily be only aligned on a 4-byte
boundary.  The interrupt handling code must ensure that the stack
pointer is properly aligned before it calls a function.  See also:

https://developer.arm.com/documentation/den0013/d/Interrupt-Handling/External-interrupt-requests/Nested-interrupt-handling

Close #4579.


This fix doesn't work for nested interrupts.

--
embedded brains GmbH
Mr. Sebastian HUBER
Dornierstr. 4
82178 Puchheim
Germany
email: sebastian.hu...@embedded-brains.de
phone: +49-89-18 94 741 - 16
fax:   +49-89-18 94 741 - 08

Registration court: Amtsgericht München
Registration number: HRB 157899
Managing directors with power of representation: Peter Rasmussen, Thomas Dörfler
You can find our privacy policy here:
https://embedded-brains.de/datenschutzerklaerung/

Re: Malloc tests

2022-01-13 Thread zack leung
Is there a way to get the same values used to make the calculation in
malloc_usable_size?
Bump

On Fri, 7 Jan 2022 at 21:25, zack leung  wrote:

> I think the usable size in malloc is calculated differently than
> alloc_size + (alloc_size mod CPU_HEAP_ALIGNMENT).  It looks like this:
>
>  *alloc_size = (uintptr_t) next_block + HEAP_ALLOC_BONUS - alloc_begin;
>
> When I run the comparison I get 8 (with heap alignment) and the function
> gives me 12.  Is the HEAP_ALLOC_BONUS part of the current block?  (A
> worked example follows at the end of this message.)
>
> On Thu, 6 Jan 2022 at 21:28, Joel Sherrill  wrote:
>
>> On Thu, Jan 6, 2022 at 2:55 PM Gedare Bloom  wrote:
>> >
>> > On Tue, Jan 4, 2022 at 6:10 PM zack leung  wrote:
>> > >
>> > > Hello,
>> > > I'm working on a patch for malloc_usable_size right now.  So far I
>> > > have this test case for malloc: I make sure that the value is NULL,
>> > > I malloc an int, and then I call malloc_usable_size and compare the
>> > > result like this.
>> > >
>> > > static void test_malloc_usablesize( void )
>> > > {
>> > >   int *a = malloc( sizeof( int ) );
>> > >
>> > >   rtems_test_assert( (int) malloc_usable_size( a ) == 12 );
>> > >   free( a );
>> > >
>> > >   int *b = NULL;
>> > >
>> > >   rtems_test_assert( 0 == malloc_usable_size( b ) );
>> > >   free( b );
>> > > }
>> > > Is there a good amount of storage to allocate?  Also I heard someone
>> > > made a
>> >
>> > I think that this test case is quite brittle. The "usable size" will
>> > depend on the implementation details of malloc, and the conditions of
>> > the heap when the allocation request gets made. I don't have better
>> > ideas however for how to test the correctness of the usable_size
>> > function. Maybe there are certain sequences of malloc/free calls that
>> > can be relied upon to give you deterministic sizes of allocations?
>>
>> My suggestion in private chat was that you can depend on the
>> malloc heap being initialized with CPU_HEAP_ALIGNMENT. That's
>> what that macro is specifically for. It is used in
>> include/rtems/score/wkspaceinit*.h
>> and has been since the dawn of time.
>>
>> Then the size for a valid pointer from malloc() should be between
>> the allocated size and the next number on a CPU_HEAP_ALIGNMENT
>> boundary. I think the exact number is actually something like this:
>>
>> expected = alloc_size;
>> mod = alloc_size % CPU_HEAP_ALIGNMENT;
>> if ( mod != 0 )
>>   expected += CPU_HEAP_ALIGNMENT - mod;
>>
>> Adding a helper function in the test to compute the expected
>> allocated size and validate the return value may be wise if multiple
>> allocation sizes are going to be tested (see the sketch after this
>> message).
>>
>>
>> > > formatter for rtems source files.  Can someone tell me how to use
>> > > it and if it is on the main branch?
>> >
>> > This was part of a GSoC project last year.  We haven't quite made the
>> > switch over to it and the associated coding style yet, I think, but
>> > you can find the code via
>> >
>> > https://devel.rtems.org/wiki/GSoC/2021#StudentsSummerofCodeTrackingTable
>> >
>> > The student was Meh Mbeh Ida Delphine.
>> >
>> >
>>
>
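A worked example of where the extra bytes in the 8-versus-12 discrepancy
above can come from.  This is only a sketch of the arithmetic, assuming a
32-bit target where CPU_HEAP_ALIGNMENT is 8 and HEAP_ALLOC_BONUS is
sizeof(uintptr_t) == 4; the variable names are hypothetical:

#include <stdint.h>
#include <stdio.h>

int main( void )
{
  uintptr_t alloc_begin = 0;  /* aligned begin of the allocated area */
  uintptr_t request = 4;      /* malloc( sizeof( int ) ) on a 32-bit target */
  uintptr_t alignment = 8;    /* assumed CPU_HEAP_ALIGNMENT */
  uintptr_t bonus = 4;        /* assumed HEAP_ALLOC_BONUS == sizeof(uintptr_t) */

  /* The block end is rounded up to the next alignment boundary, here 8 */
  uintptr_t next_block = alloc_begin
    + ( ( request + alignment - 1 ) / alignment ) * alignment;

  /* The quoted formula: next_block + HEAP_ALLOC_BONUS - alloc_begin */
  uintptr_t usable = next_block + bonus - alloc_begin;

  /* Prints 12: the rounded-up size (8) plus the bonus (4) */
  printf( "usable size: %lu\n", (unsigned long) usable );
  return 0;
}

So the naive size-plus-remainder calculation yields 8, while the heap also
counts the HEAP_ALLOC_BONUS, which would explain the reported 12.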
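A minimal sketch of the helper Joel suggests, assuming the usual RTEMS test
harness (rtems_test_assert() from tmacros.h) and that newlib's <malloc.h>
declares malloc_usable_size().  The alignment and bonus are passed in as
parameters because HEAP_ALLOC_BONUS is internal to the heap implementation;
the helper names here are hypothetical:

#include <stdlib.h>
#include <malloc.h>   /* malloc_usable_size(), assumed to be declared here */
#include <tmacros.h>  /* rtems_test_assert(), RTEMS test harness */

/* Round a request up to the next alignment boundary */
static size_t round_up( size_t size, size_t alignment )
{
  size_t mod = size % alignment;

  return ( mod == 0 ) ? size : size + ( alignment - mod );
}

/*
 * Check that the usable size lies between the rounded-up request and that
 * value plus the allocation bonus.
 */
static void check_usable_size( size_t request, size_t alignment, size_t bonus )
{
  void   *p = malloc( request );
  size_t  minimum = round_up( request, alignment );
  size_t  usable;

  rtems_test_assert( p != NULL );
  usable = malloc_usable_size( p );
  rtems_test_assert( usable >= minimum );
  rtems_test_assert( usable <= minimum + bonus );
  free( p );
}

For example, check_usable_size( sizeof( int ), 8, 4 ) would accept both the
computed 8 and the observed 12, which keeps the test from depending on the
heap implementation details Gedare mentions.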


[PATCH] arm: Optimize interrupt handling

2022-01-13 Thread Sebastian Huber
Use the SRS (Store Return State) instruction if available.  This
considerably simplifies the context save and restore.
---
 cpukit/score/cpu/arm/arm_exc_interrupt.S  | 45 +--
 .../score/cpu/arm/include/rtems/score/arm.h   |  1 +
 .../cpu/arm/include/rtems/score/cpuimpl.h | 13 ++
 3 files changed, 56 insertions(+), 3 deletions(-)

diff --git a/cpukit/score/cpu/arm/arm_exc_interrupt.S b/cpukit/score/cpu/arm/arm_exc_interrupt.S
index 43568747b1..1dc8c6eab4 100644
--- a/cpukit/score/cpu/arm/arm_exc_interrupt.S
+++ b/cpukit/score/cpu/arm/arm_exc_interrupt.S
@@ -34,6 +34,11 @@
 
 #ifdef ARM_MULTILIB_ARCH_V4
 
+#define SELF_CPU_CONTROL r7
+#define NON_VOLATILE_SCRATCH r9
+
+#ifndef ARM_MULTILIB_HAS_STORE_RETURN_STATE
+
 #define EXCHANGE_LR r4
 #define EXCHANGE_SPSR r5
 #define EXCHANGE_CPSR r6
@@ -42,16 +47,31 @@
 #define EXCHANGE_LIST {EXCHANGE_LR, EXCHANGE_SPSR, EXCHANGE_CPSR, EXCHANGE_INT_SP}
 #define EXCHANGE_SIZE 16
 
-#define SELF_CPU_CONTROL r7
-#define NON_VOLATILE_SCRATCH r9
-
 #define CONTEXT_LIST {r0, r1, r2, r3, EXCHANGE_LR, EXCHANGE_SPSR, SELF_CPU_CONTROL, r12}
 #define CONTEXT_SIZE 32
 
+#endif /* ARM_MULTILIB_HAS_STORE_RETURN_STATE */
+
 .arm
 .globl _ARMV4_Exception_interrupt
 _ARMV4_Exception_interrupt:
 
+#ifdef ARM_MULTILIB_HAS_STORE_RETURN_STATE
+   /* Prepare return from interrupt */
+   sub lr, lr, #4
+
+   /* Save LR_irq and SPSR_irq to the SVC stack */
+   srsfd   sp!, #ARM_PSR_M_SVC
+
+   /* Switch to SVC mode */
+   cps #ARM_PSR_M_SVC
+
+   /*
+* Save the volatile registers, two non-volatile registers used for
+* interrupt processing, and the link register.
+*/
+   push    {r0-r3, SELF_CPU_CONTROL, NON_VOLATILE_SCRATCH, r12, lr}
+#else /* ARM_MULTILIB_HAS_STORE_RETURN_STATE */
/* Save exchange registers to exchange area */
stmdb   sp, EXCHANGE_LIST
 
@@ -73,6 +93,7 @@ _ARMV4_Exception_interrupt:
 */
stmdb   sp!, CONTEXT_LIST
stmdb   sp!, {NON_VOLATILE_SCRATCH, lr}
+#endif /* ARM_MULTILIB_HAS_STORE_RETURN_STATE */
 
 #ifdef ARM_MULTILIB_VFP
/* Save VFP context */
@@ -87,11 +108,13 @@ _ARMV4_Exception_interrupt:
/* Get per-CPU control of current processor */
	GET_SELF_CPU_CONTROL    SELF_CPU_CONTROL
 
+#ifndef ARM_MULTILIB_HAS_STORE_RETURN_STATE
/* Remember INT stack pointer */
mov r1, EXCHANGE_INT_SP
 
/* Restore exchange registers from exchange area */
ldmia   r1, EXCHANGE_LIST
+#endif /* ARM_MULTILIB_HAS_STORE_RETURN_STATE */
 
/* Get interrupt nest level */
ldr r2, [SELF_CPU_CONTROL, #PER_CPU_ISR_NEST_LEVEL]
@@ -99,7 +122,11 @@ _ARMV4_Exception_interrupt:
/* Switch stack if necessary and save original stack pointer */
mov NON_VOLATILE_SCRATCH, sp
cmp r2, #0
+#ifdef ARM_MULTILIB_HAS_STORE_RETURN_STATE
+   ldreq   sp, [SELF_CPU_CONTROL, #PER_CPU_INTERRUPT_STACK_HIGH]
+#else
moveq   sp, r1
+#endif
 
/* Increment interrupt nest and thread dispatch disable level */
ldr r3, [SELF_CPU_CONTROL, #PER_CPU_THREAD_DISPATCH_DISABLE_LEVEL]
@@ -208,6 +235,13 @@ _ARMV4_Exception_interrupt:
	vmsr    FPSCR, r0
 #endif /* ARM_MULTILIB_VFP */
 
+#ifdef ARM_MULTILIB_HAS_STORE_RETURN_STATE
+   /*
+* Restore the volatile registers, two non-volatile registers used for
+* interrupt processing, and the link register.
+*/
+   pop {r0-r3, SELF_CPU_CONTROL, NON_VOLATILE_SCRATCH, r12, lr}
+#else /* ARM_MULTILIB_HAS_STORE_RETURN_STATE */
/* Restore NON_VOLATILE_SCRATCH register and link register */
ldmia   sp!, {NON_VOLATILE_SCRATCH, lr}
 
@@ -238,6 +272,7 @@ _ARMV4_Exception_interrupt:
 
/* Restore EXCHANGE_LR and EXCHANGE_SPSR registers from exchange area */
ldmia   sp!, {EXCHANGE_LR, EXCHANGE_SPSR}
+#endif /* ARM_MULTILIB_HAS_STORE_RETURN_STATE */
 
 #ifdef ARM_MULTILIB_HAS_LOAD_STORE_EXCLUSIVE
/*
@@ -267,7 +302,11 @@ _ARMV4_Exception_interrupt:
 #endif
 
/* Return from interrupt */
+#ifdef ARM_MULTILIB_HAS_STORE_RETURN_STATE
+   rfefd   sp!
+#else
	subs    pc, lr, #4
+#endif
 
 #ifdef RTEMS_PROFILING
 #ifdef __thumb2__
diff --git a/cpukit/score/cpu/arm/include/rtems/score/arm.h b/cpukit/score/cpu/arm/include/rtems/score/arm.h
index b1e4b07a37..7eaa69d889 100644
--- a/cpukit/score/cpu/arm/include/rtems/score/arm.h
+++ b/cpukit/score/cpu/arm/include/rtems/score/arm.h
@@ -47,6 +47,7 @@ extern "C" {
   #define ARM_MULTILIB_HAS_WFI
   #define ARM_MULTILIB_HAS_LOAD_STORE_EXCLUSIVE
   #define ARM_MULTILIB_HAS_BARRIER_INSTRUCTIONS
+  #define ARM_MULTILIB_HAS_STORE_RETURN_STATE
 #endif
 
 #ifndef ARM_DISABLE_THREAD_ID_REGISTER_USE
diff --git a/cpukit/score/cpu/arm/include/rtems/score/cpuimpl.h b/cpukit/score/cpu/arm/include/rtems/score/cpuimpl.h
index 0f86710966..a6fe74e9ad 100644
--- a/cpukit/score/cpu/arm/include/rtems/score/cpuimpl.h
+++ b/cpu

[PATCH] arm: Fix stack alignment during interrupt handling

2022-01-13 Thread Sebastian Huber
On a public interface, the stack pointer must be aligned on an 8-byte
boundary.  However, it may temporarily be only aligned on a 4-byte
boundary.  The interrupt handling code must ensure that the stack
pointer is properly aligned before it calls a function.  See also:

https://developer.arm.com/documentation/den0013/d/Interrupt-Handling/External-interrupt-requests/Nested-interrupt-handling

Close #4579.
---
 cpukit/score/cpu/arm/arm_exc_interrupt.S | 34 
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/cpukit/score/cpu/arm/arm_exc_interrupt.S b/cpukit/score/cpu/arm/arm_exc_interrupt.S
index ddcaf945b5..43568747b1 100644
--- a/cpukit/score/cpu/arm/arm_exc_interrupt.S
+++ b/cpukit/score/cpu/arm/arm_exc_interrupt.S
@@ -7,7 +7,7 @@
  */
 
 /*
- * Copyright (c) 2009, 2016 embedded brains GmbH.  All rights reserved.
+ * Copyright (c) 2009, 2022 embedded brains GmbH.  All rights reserved.
  *
  *  embedded brains GmbH
  *  Dornierstr. 4
@@ -68,9 +68,8 @@ _ARMV4_Exception_interrupt:
/*
 * Save context.  We save the link register separately because it has
 * to be restored in SVC mode.  The other registers can be restored in
-* INT mode.  Ensure that stack remains 8 byte aligned.  Use register
-* necessary for the stack alignment for the stack pointer of the
-* interrupted context.
+* INT mode.  Provide a non-volatile scratch register which is used
+* across function calls.
 */
stmdb   sp!, CONTEXT_LIST
stmdb   sp!, {NON_VOLATILE_SCRATCH, lr}
@@ -102,9 +101,6 @@ _ARMV4_Exception_interrupt:
cmp r2, #0
moveq   sp, r1
 
-   /* Switch to Thumb-2 instructions if necessary */
-   SWITCH_FROM_ARM_TO_THUMB_2  r1
-
/* Increment interrupt nest and thread dispatch disable level */
ldr r3, [SELF_CPU_CONTROL, #PER_CPU_THREAD_DISPATCH_DISABLE_LEVEL]
add r2, #1
@@ -139,9 +135,6 @@ _ARMV4_Exception_interrupt:
/* Restore stack pointer */
mov sp, NON_VOLATILE_SCRATCH
 
-   /* Save CPSR in non-volatile register */
-   mrs NON_VOLATILE_SCRATCH, CPSR
-
/* Decrement levels and determine thread dispatch state */
eor r1, r0
sub r0, #1
@@ -160,9 +153,11 @@ _ARMV4_Exception_interrupt:
cmp r1, #0
bne .Lthread_dispatch_done
 
-   /* Thread dispatch */
+   /* Save CPSR in non-volatile register */
mrs NON_VOLATILE_SCRATCH, CPSR
 
+   /* Thread dispatch */
+
 .Ldo_thread_dispatch:
 
/* Set ISR dispatch disable and thread dispatch disable level to one */
@@ -170,19 +165,27 @@ _ARMV4_Exception_interrupt:
str r0, [SELF_CPU_CONTROL, #PER_CPU_ISR_DISPATCH_DISABLE]
str r0, [SELF_CPU_CONTROL, #PER_CPU_THREAD_DISPATCH_DISABLE_LEVEL]
 
-   /* Call _Thread_Do_dispatch(), this function will enable interrupts */
+   /*
+* Call _Thread_Do_dispatch(), this function will enable interrupts.
+* Make sure that the stack pointer is aligned on an 8-byte boundary.
+*/
mov r0, SELF_CPU_CONTROL
mov r1, NON_VOLATILE_SCRATCH
mov r2, #0x80
bic r1, r2
+   and SELF_CPU_CONTROL, sp, #0x4
+   sub sp, sp, SELF_CPU_CONTROL
BLX_TO_THUMB_1  _Thread_Do_dispatch
 
/* Disable interrupts */
msr CPSR, NON_VOLATILE_SCRATCH
 
-#ifdef RTEMS_SMP
+   /*
+* Restore stack pointer and reinitialize SELF_CPU_CONTROL.  In SMP
+* configurations, we may run on a different processor.
+*/
+   add sp, sp, SELF_CPU_CONTROL
	GET_SELF_CPU_CONTROL    SELF_CPU_CONTROL
-#endif
 
/* Check if we have to do the thread dispatch again */
	ldrb    r0, [SELF_CPU_CONTROL, #PER_CPU_DISPATCH_NEEDED]
@@ -195,9 +198,6 @@ _ARMV4_Exception_interrupt:
 
 .Lthread_dispatch_done:
 
-   /* Switch to ARM instructions if necessary */
-   SWITCH_FROM_THUMB_2_TO_ARM
-
 #ifdef ARM_MULTILIB_VFP
/* Restore VFP context */
ldmia   sp!, {r0, r1}
-- 
2.31.1

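The alignment trick in the hunk above (the and/sub on sp around
BLX_TO_THUMB_1) can be illustrated in C.  This is only a sketch of the
idea, not kernel code: here the stack pointer is always at least 4-byte
aligned, so its distance to the next lower 8-byte boundary is either 0
or 4; that distance is saved, subtracted before the call, and added back
afterwards:

#include <assert.h>
#include <stdint.h>

/* Illustration of the sp adjustment done around _Thread_Do_dispatch() */
static uintptr_t align_sp_for_call( uintptr_t sp, uintptr_t *adjustment )
{
  /* sp is at least 4-byte aligned, so sp & 0x4 is either 0 or 4 */
  *adjustment = sp & 0x4;

  /* Subtracting the adjustment yields an 8-byte aligned stack pointer */
  return sp - *adjustment;
}

int main( void )
{
  uintptr_t adjustment;
  uintptr_t sp = 0x2000fff4;  /* hypothetical 4-byte aligned stack pointer */
  uintptr_t aligned = align_sp_for_call( sp, &adjustment );

  assert( aligned % 8 == 0 );

  /* ... call into C code with the aligned stack pointer ... */

  /* Adding the adjustment back restores the original stack pointer */
  assert( aligned + adjustment == sp );
  return 0;
}

In the patch the adjustment is kept in SELF_CPU_CONTROL across the call,
which is why that register is reloaded with GET_SELF_CPU_CONTROL after
_Thread_Do_dispatch() returns.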