Re: Use of rtems_fdt_* and sp01

2022-11-17 Thread Sebastian Huber

On 18/11/2022 06:32, Kinsey Moore wrote:
On Thu, Nov 17, 2022 at 4:49 PM Chris Johns wrote:


On 18/11/2022 2:39 am, Kinsey Moore wrote:
 > I recently added FDT support to the AArch64 ZynqMP BSPs to support an
 > optional management console and managing ethernet parameters for LibBSD.
 > Use of the rtems_fdt_* functions implies use of malloc, which adds 4
 > bytes in the TLS space. The sp01 test uses rtems_task_construct and
 > specifies a maximum TLS size of 0 bytes, which causes a failure when the
 > non-zero TLS size is checked against the maximum TLS size of 0.

Does this mean there is a requirement that BSPs cannot internally
use the heap?


That appears to be the case, at least practically. It works fine, but it 
causes a test failure...


You can use the heap during system initialization. However, you should 
avoid dependencies on errno, so instead of malloc()/calloc() use 
rtems_malloc()/rtems_calloc(). Independently of this, if the BSP 
initialization can easily avoid the heap, then it should not use the heap.
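To illustrate the advice above, here is a minimal host-side sketch. The names `stub_rtems_malloc`, `bsp_init_example`, and `scratch` are invented for illustration; only the intent mirrors the real `rtems_malloc()` from `<rtems/malloc.h>`, whose relevant property is that, unlike newlib's `malloc()`, it does not touch errno and therefore pulls no errno TLS object into the BSP:

```c
#include <stddef.h>
#include <stdlib.h>

/* Host-side stand-in for rtems_malloc() (illustration only).  The real
 * function does not reference errno, so BSP code using it adds no errno
 * TLS object; this stub merely mirrors the calling convention. */
static void *stub_rtems_malloc(size_t size)
{
  if (size == 0) {
    return NULL;
  }
  return malloc(size);
}

/* BSP-initialization-style usage: allocate once during system
 * initialization, where heap use is permitted. */
static unsigned char *scratch;

void bsp_init_example(void)
{
  scratch = stub_rtems_malloc(64);
}
```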


Maybe the test needs to be adjusted?


That's part of why I sent this to the list. I was unsure whether it was 
allowed or whether the test had bad assumptions baked into it.


The tests check specific aspects of the thread-local storage support and 
this works only if the BSP does not depend on TLS objects such as errno.
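The TLS dependency being discussed can be shown with a plain `__thread` variable (the variable and function names here are invented for illustration): referencing any thread-local object, such as errno in newlib, makes the program's TLS block non-empty, which is exactly what a zero maximum-TLS-size check trips over.

```c
/* Referencing a thread-local object makes the TLS block non-empty.
 * Here a plain __thread variable stands in for newlib's errno. */
static __thread int tls_counter;

int bump_tls_counter(void)
{
  return ++tls_counter;  /* each thread increments its own copy */
}
```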


--
embedded brains GmbH
Herr Sebastian HUBER
Dornierstr. 4
82178 Puchheim
Germany
email: sebastian.hu...@embedded-brains.de
phone: +49-89-18 94 741 - 16
fax:   +49-89-18 94 741 - 08

Registergericht: Amtsgericht München
Registernummer: HRB 157899
Vertretungsberechtigte Geschäftsführer: Peter Rasmussen, Thomas Dörfler
Unsere Datenschutzerklärung finden Sie hier:
https://embedded-brains.de/datenschutzerklaerung/
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel

Re: Use of rtems_fdt_* and sp01

2022-11-17 Thread Kinsey Moore
On Thu, Nov 17, 2022 at 4:49 PM Chris Johns  wrote:

> On 18/11/2022 2:39 am, Kinsey Moore wrote:
> > I recently added FDT support to the AArch64 ZynqMP BSPs to support an
> > optional management console and managing ethernet parameters for LibBSD.
> > Use of the rtems_fdt_* functions implies use of malloc, which adds 4
> > bytes in the TLS space. The sp01 test uses rtems_task_construct and
> > specifies a maximum TLS size of 0 bytes, which causes a failure when the
> > non-zero TLS size is checked against the maximum TLS size of 0.
>
> Does this mean there is a requirement that BSPs cannot internally use the
> heap?
>

That appears to be the case, at least practically. It works fine, but it
causes a test failure...

>
> Maybe the test needs to be adjusted?
>

That's part of why I sent this to the list. I was unsure whether it was
allowed or whether the test had bad assumptions baked into it.

Kinsey

Re: Use of rtems_fdt_* and sp01

2022-11-17 Thread Chris Johns
On 18/11/2022 2:39 am, Kinsey Moore wrote:
> I recently added FDT support to the AArch64 ZynqMP BSPs to support an optional
> management console and managing ethernet parameters for LibBSD. Use of the
> rtems_fdt_* functions implies use of malloc, which adds 4 bytes in the TLS
> space. The sp01 test uses rtems_task_construct and specifies a maximum TLS
> size of 0 bytes, which causes a failure when the non-zero TLS size is checked
> against the maximum TLS size of 0.

Does this mean there is a requirement that BSPs cannot internally use the heap?

Maybe the test needs to be adjusted?
> It appears that other BSPs that use FDT data avoid the rtems_fdt_* calls,
> possibly because they weren’t available until recently, and this is the path
> that I’ll be following to resolve this issue for the moment.
> 
> I thought it would be good to bring this up if the rtems_fdt_* wrappers are to
> be more widely useful.

FDT support seems to be based around isolated BSPs and how they use the FDT. I
guess this approach was taken to minimise the impact on RTEMS, and I would
most likely have followed a similar path.

Chris

[PATCH] bsps/zynqmp: Use direct fdt_* calls

2022-11-17 Thread Kinsey Moore
This changes the ZynqMP device tree parsing over to direct libfdt calls
to avoid the inclusion of malloc() in the base BSP, which currently causes
sp01 to fail due to unexpected use of TLS space.
---
 bsps/aarch64/xilinx-zynqmp/console/console.c | 32 ++--
 1 file changed, 10 insertions(+), 22 deletions(-)

diff --git a/bsps/aarch64/xilinx-zynqmp/console/console.c b/bsps/aarch64/xilinx-zynqmp/console/console.c
index d546db8535..992b8a62da 100644
--- a/bsps/aarch64/xilinx-zynqmp/console/console.c
+++ b/bsps/aarch64/xilinx-zynqmp/console/console.c
@@ -37,7 +37,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 
 #include 
@@ -47,6 +46,7 @@
 #include 
 
 #include 
+#include 
 
 #include 
 
@@ -92,42 +92,36 @@ __attribute__ ((weak)) void zynqmp_configure_management_console(rtems_termios_de
 static void zynqmp_management_console_init(void)
 {
   /* Find the management console in the device tree */
-  rtems_fdt_handle fdt_handle;
+  const void *fdt = bsp_fdt_get();
   const uint32_t *prop;
   uint32_t outprop[4];
   int proplen;
   int node;
 
-  rtems_fdt_init_handle(&fdt_handle);
-  rtems_fdt_register(bsp_fdt_get(), &fdt_handle);
-  const char *alias = rtems_fdt_get_alias(&fdt_handle, "mgmtport");
+  const char *alias = fdt_get_alias(fdt, "mgmtport");
   if (alias == NULL) {
-    rtems_fdt_release_handle(&fdt_handle);
     return;
   }
-  node = rtems_fdt_path_offset(&fdt_handle, alias);
+  node = fdt_path_offset(fdt, alias);
 
-  prop = rtems_fdt_getprop(&fdt_handle, node, "clock-frequency", &proplen);
+  prop = fdt_getprop(fdt, node, "clock-frequency", &proplen);
   if ( prop == NULL || proplen != 4 ) {
-    rtems_fdt_release_handle(&fdt_handle);
     zynqmp_mgmt_uart_context.port = 0;
     return;
   }
   outprop[0] = rtems_uint32_from_big_endian((const uint8_t *) &prop[0]);
   zynqmp_mgmt_uart_context.clock = outprop[0];
 
-  prop = rtems_fdt_getprop(&fdt_handle, node, "current-speed", &proplen);
+  prop = fdt_getprop(fdt, node, "current-speed", &proplen);
   if ( prop == NULL || proplen != 4 ) {
-    rtems_fdt_release_handle(&fdt_handle);
     zynqmp_mgmt_uart_context.port = 0;
     return;
   }
   outprop[0] = rtems_uint32_from_big_endian((const uint8_t *) &prop[0]);
   zynqmp_mgmt_uart_context.initial_baud = outprop[0];
 
-  prop = rtems_fdt_getprop(&fdt_handle, node, "interrupts", &proplen);
+  prop = fdt_getprop(fdt, node, "interrupts", &proplen);
   if ( prop == NULL || proplen != 12 ) {
-    rtems_fdt_release_handle(&fdt_handle);
     zynqmp_mgmt_uart_context.port = 0;
     return;
   }
@@ -137,14 +131,12 @@ static void zynqmp_management_console_init(void)
   /* proplen is in bytes, interrupt mapping expects a length in 32-bit cells */
   zynqmp_mgmt_uart_context.irq = bsp_fdt_map_intr(outprop, proplen / 4);
   if ( zynqmp_mgmt_uart_context.irq == 0 ) {
-    rtems_fdt_release_handle(&fdt_handle);
     zynqmp_mgmt_uart_context.port = 0;
     return;
   }
 
-  prop = rtems_fdt_getprop(&fdt_handle, node, "reg", &proplen);
+  prop = fdt_getprop(fdt, node, "reg", &proplen);
   if ( prop == NULL || proplen != 16 ) {
-    rtems_fdt_release_handle(&fdt_handle);
     zynqmp_mgmt_uart_context.port = 0;
     return;
   }
@@ -164,26 +156,22 @@ static void zynqmp_management_console_init(void)
     return;
   }
 
-  prop = rtems_fdt_getprop(&fdt_handle, node, "reg-offset", &proplen);
+  prop = fdt_getprop(fdt, node, "reg-offset", &proplen);
   if ( prop == NULL || proplen != 4 ) {
-    rtems_fdt_release_handle(&fdt_handle);
     zynqmp_mgmt_uart_context.port = 0;
     return;
   }
   outprop[0] = rtems_uint32_from_big_endian((const uint8_t *) &prop[0]);
   zynqmp_mgmt_uart_context.port += outprop[0];
 
-  prop = rtems_fdt_getprop(&fdt_handle, node, "reg-shift", &proplen);
+  prop = fdt_getprop(fdt, node, "reg-shift", &proplen);
   if ( prop == NULL || proplen != 4 ) {
-    rtems_fdt_release_handle(&fdt_handle);
     zynqmp_mgmt_uart_context.port = 0;
     return;
   }
   outprop[0] = rtems_uint32_from_big_endian((const uint8_t *) &prop[0]);
   mgmt_uart_reg_shift = outprop[0];
 
-  rtems_fdt_release_handle(&fdt_handle);
-
   ns16550_probe(&zynqmp_mgmt_uart_context.base);
 
   zynqmp_configure_management_console(&zynqmp_mgmt_uart_context.base);
-- 
2.30.2



Use of rtems_fdt_* and sp01

2022-11-17 Thread Kinsey Moore
I recently added FDT support to the AArch64 ZynqMP BSPs to support an optional 
management console and managing ethernet parameters for LibBSD. Use of the 
rtems_fdt_* functions implies use of malloc, which adds 4 bytes in the TLS 
space. The sp01 test uses rtems_task_construct and specifies a maximum TLS size 
of 0 bytes, which causes a failure when the non-zero TLS size is checked 
against the maximum TLS size of 0.
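For reference, the failing check comes from a task configuration of the kind sp01 uses; a sketch under assumed names (the storage symbols and task name are invented, while the rtems_task_config fields and the storage macros are the documented Classic API ones):

```c
/* Sketch of an sp01-style static task configuration.  With
 * maximum_thread_local_storage_size set to 0, rtems_task_construct()
 * fails as soon as anything in the link -- such as malloc()'s errno
 * use -- makes the TLS block non-empty. */
static RTEMS_ALIGNED( RTEMS_TASK_STORAGE_ALIGNMENT ) char task_storage[
  RTEMS_TASK_STORAGE_SIZE( RTEMS_MINIMUM_STACK_SIZE,
                           RTEMS_DEFAULT_ATTRIBUTES )
];

static const rtems_task_config task_config = {
  .name = rtems_build_name( 'T', 'A', '0', '1' ),
  .initial_priority = 1,
  .storage_area = task_storage,
  .storage_size = sizeof( task_storage ),
  .maximum_thread_local_storage_size = 0, /* zero-TLS requirement */
  .initial_modes = RTEMS_DEFAULT_MODES,
  .attributes = RTEMS_DEFAULT_ATTRIBUTES
};
```

With a non-empty TLS block in the executable, constructing a task from this configuration returns an error status instead of RTEMS_SUCCESSFUL, which is the failure sp01 reports.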

It appears that other BSPs that use FDT data avoid the rtems_fdt_* calls, 
possibly because they weren't available until recently, and this is the path 
that I'll be following to resolve this issue for the moment.

I thought it would be good to bring this up if the rtems_fdt_* wrappers are to 
be more widely useful.

Kinsey

Re: [PATCH] icpp remedy

2022-11-17 Thread shijunjie
Dear all,

To give a formal description of the bug/mismatch in the implemented
protocol(s), we prepared a ticket at https://devel.rtems.org/ticket/4742#no2,
which contains the bug description and a counterexample.
I hope it helps to give us a clear overview of the bug/mismatch that we
found.

Best,
Junjie

Gedare Bloom wrote on Wednesday, September 21, 2022 at 23:50:

> CC: devel@rtems.org
>
> Preserving this discussion for posterity.
>
> On Fri, Sep 16, 2022 at 3:09 PM Jian-Jia Chen
>  wrote:
> >
> > Dear Gedare, dear Sebastian,
> >   I would like to clarify a few things.
> >
> > == historical perspectives ==
> >   First of all, let me briefly talk about the history of PCP and its
> > extension. PCP was proposed in 1989 with two advantages in comparison
> > to PIP:
> >
> > 1. Under PCP, access to resources in a nested manner is deadlock-free.
> > 2. Under PCP, a higher-priority job J_i is blocked by at most one
> > lower-priority critical section whose priority ceiling has a higher
> > priority than or the same priority as J_i.
> >
> >
> >   PCP therefore became an interesting result. However, in its original
> > design, a lower-priority job only inherits the priority of a
> > higher-priority job that it blocks. It was later noted that the original
> > PCP design can be extended to the so-called immediate ceiling priority
> > protocol (ICPP), which promotes the priority of a lower-priority job
> > immediately to its priority ceiling after the lock. This extension
> > ensures that the two important properties of PCP remain valid.
> >
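The immediate promotion that distinguishes ICPP from the original PCP can be sketched as a small host-side model (all type and function names are invented; this is not the RTEMS implementation, and it models only the promotion rule, not blocking between tasks; smaller numbers mean higher priority):

```c
/* Minimal model of ICPP's lock-time promotion.  On lock, the task's
 * effective priority is immediately raised to the mutex's priority
 * ceiling; on release, the pre-lock priority is restored. */
typedef struct { int base; int effective; } task_t;
typedef struct { int ceiling; int saved; } mutex_t;

void icpp_lock(task_t *t, mutex_t *m)
{
  m->saved = t->effective;      /* remember the pre-lock priority */
  if (m->ceiling < t->effective) {
    t->effective = m->ceiling;  /* immediate promotion to the ceiling */
  }
}

void icpp_unlock(task_t *t, mutex_t *m)
{
  t->effective = m->saved;      /* restore on release (LIFO lock order) */
}
```

Under this rule, once a task holds a mutex it already runs at the ceiling, so no other task that could contend for the resource gets to start a conflicting nested acquisition, which is how ICPP obtains PCP's deadlock-freedom.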
> >   Now, in our discussion, we should further clarify whether the
> > implementation in RTEMS is the ICPP described above or not. If yes, it
> > should be deadlock-free under any nested resource access sequence.
> > Otherwise, it is not the ICPP described above, and you may want to call
> > it "restrictive ICPP" or something similar.
> >
> >   When we submitted the paper, one reviewer also asked whether the
> > issue we found is a bug or a feature. I believe that it can be seen as
> > both because there is no documentation. If there is no documentation of
> > such a limitation, then it is a bug, as the implementation does not
> > comply with the ICPP; otherwise, it is a feature of the implemented
> > "restrictive ICPP".
> >
> > == designers’ perspectives ==
> >
> >   Now, coming to the discussion whether “acquiring mutexes in an
> arbitrary priority order is a good application design” or not, I think we
> have to also clarify two things:
> >
> > 1. The program's semantics of resource access: it is supposed to lock
> > resource R1 (guarded by mutex M1) first and then potentially resource R2
> > (guarded by mutex M2) in a nested manner, depending on the condition
> > needed.
> > 2. The resource priority ceiling: it is unrelated to the program context
> > and is just used to ensure that there is no deadlock under any request
> > order and at most one blocking from the lower-priority jobs.
> >
> >
> >   So, (1) is about the context of the program(s) and (2) is about the
> resource management schemes.
> >
> >   One can of course enforce the sequence of locks, but as soon as the
> > designer changes the priority assignment of the tasks or adds a new
> > task, the programs may have to be rewritten and may not be able to keep
> > the same behavior. Therefore, I think the beauty of PCP and ICPP for
> > achieving the deadlock-free property is destroyed if the mutex usage has
> > to follow a strict order to ensure deadlock freedom. In fact, under this
> > strict order, as there is no circular waiting, there is by definition no
> > deadlock.
> >
> > == ending ==
> >
> >   Furthermore, no matter which approach is taken, it is important to
> > document the assertions/assumptions that must hold for the implemented
> > ICPP to be correct.
> >
> >
> >   Best Regards,
> >   Jian-Jia Chen
> >
> >
> >
> > On Sep 16, 2022, at 9:47 AM, Kuan-Hsun Chen  wrote:
> >
> >
> >
> > -- Forwarded message -
> > From: Sebastian Huber 
> >
> > Date: Fri, Sep 16, 2022 at 08:09
> > Subject: Re: [PATCH] icpp remedy
> > To: Gedare Bloom , Kuan-Hsun Chen 
> > Cc: Strange369 , rtems-de...@rtems.org <
> devel@rtems.org>
> >
> >
> > On 15.09.22 00:06, Gedare Bloom wrote:
> > > On Tue, Sep 13, 2022 at 12:42 PM Kuan-Hsun Chen
> wrote:
> > >> Thanks for the prompt reply. Yes, I will guide Junjie to make a
> ticket and go through the issue.
> > >>
> > >>> Is there a test case for this?
> > >> Yes, a test case is also ready to be reviewed and can be part of the
> testsuite to test out ICPP (MrsP should also pass).
> > >>
> > >>> If it applies to 5 or 4.11, there needs to be another ticket to get
> the fix backported.
> > >> So each release version with one ticket? We only check 5.1 in this
> work. My intuitive guess is that if the functionality does not change over
> the versions, the remedy should be similar.
> > >> Let us figure it out.
> > >>
> > >> On Tue, Sep 13, 2022 at 8:21 PM Joel Sherrill  wrote:
> > >>>
> > >>>
> > >>> On Tue,