On 21.07.23 03:27, Chris Johns wrote:
On 21/7/2023 3:51 am, Sebastian Huber wrote:
On 20.07.23 18:58, Gedare Bloom wrote:
On Thu, Jul 20, 2023 at 7:42 AM Sebastian Huber
<sebastian.hu...@embedded-brains.de>  wrote:
These memory benchmark programs are not supposed to run.  Instead, they
can be analysed on the host system to measure the memory usage of
features.  See the membench module of rtems-central.

This needs some kind of documentation, probably a README inside membench
with that information.

Ok, I can add a README.md.


This appears to be about benchmarking the program size (static memory
usage) only? If so, make that clear in the README / log note. I think
it's in the Doxygen already, so that's helpful.

Yes, it measures only the static memory size required for certain operating
system services. See section 4.7 "Memory Usage Benchmarks" in:

https://ftp.rtems.org/pub/rtems/people/sebh/rtems-6-sparc-gr740-uni-6-scf.pdf
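As a minimal sketch of the host-side analysis (the toolchain prefix, the
build path, and the use of the GNU size tool with its usual Berkeley
output are assumptions for illustration, not part of the membench tooling):

# Hypothetical sketch: sum the text/data/bss sizes that the GNU "size"
# tool reports for a membench executable on the host.
import subprocess

def static_memory_usage(executable, size_tool="sparc-rtems6-size"):
    out = subprocess.run([size_tool, executable], capture_output=True,
                         text=True, check=True).stdout
    # Berkeley format: a header line, then "text data bss dec hex filename".
    text, data, bss = (int(field) for field in out.splitlines()[1].split()[:3])
    return {"text": text, "data": data, "bss": bss, "total": text + data + bss}

print(static_memory_usage(
    "build/sparc/gr740/testsuites/membench/mem-scheduler-add-cpu.norun.exe"))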

Should `static` be part of the naming?

Yes, good idea.


What happens when the membench gets built, and then someone runs
$> rtems-test build/${ARCH}/${BSP}/testsuites

Because I don't see anything that is filtering these executables.

They are filtered out due to the *.norun.* pattern:
target: testsuites/membench/mem-scheduler-add-cpu.norun.exe
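
To illustrate the effect of that pattern (this is not the rtems-test
implementation, just the idea; the build path is an assumption):

# Illustration only: skip executables whose name carries the ".norun."
# infix, mirroring the *.norun.* filtering described above.
from pathlib import Path

def runnable_tests(testsuite_dir):
    return sorted(p for p in Path(testsuite_dir).rglob("*.exe")
                  if ".norun." not in p.name)

for exe in runnable_tests("build/sparc/gr740/testsuites"):
    print(exe)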


Currently, tests tagged `norun` assume that the build fails if there is an
issue with the test. This is why we allow these tests and tag them `norun`.

We already have a couple of norun tests in libtests. This filtering is simple and works fine; why would you want to change it?


I am happy to see these tests; however, I am not comfortable with the reference
to and dependency on rtems-central to understand or analyse them. I have looked
at the test source and I do not understand their purpose. Are they generated?

Yes, they are generated in a two-step process. A script generates the specification items:

https://git.rtems.org/rtems-central/tree/generate_membench.py

The code is generated from items like this:

https://git.rtems.org/rtems-central/tree/spec/rtems/task/val/mem-wake-after.yml
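
To illustrate the idea only (this is not the rtems-central implementation,
and the item fields shown are made up, not the real specification schema):

# Step one: a generator script emits a specification item as YAML.
# Step two: C source code is rendered from the item.  Requires PyYAML.
import yaml

ITEM = {
    "name": "mem-wake-after",                       # illustrative fields,
    "brief": "Membench for rtems_task_wake_after",  # not the real schema
    "calls": ["rtems_task_wake_after"],
}

def write_item(item, path):
    with open(path, "w") as f:
        yaml.safe_dump(item, f)

def render_source(path):
    with open(path) as f:
        item = yaml.safe_load(f)
    # Reference each service so the linker pulls it into the executable.
    body = "\n".join(f"  (void) {call};" for call in item["calls"])
    return (f"/* Generated from {path} */\n"
            f"static void _Membench_Request(void)\n{{\n{body}\n}}\n")

write_item(ITEM, "mem-wake-after.yml")
print(render_source("mem-wake-after.yml"))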


Are they supposed to be checked, or are they informational? Is something going to
be added to the project, for example in rtems-tools.git, to allow these tests to
be checked?

Currently, they are just informational; however, it would be possible to check the results if limits are specified. Everything needed to analyze the results and work with the specification is in rtems-central.git. If you think this needs to be changed, I am happy to discuss it. My preference is still to get rid of all the separate repositories and move everything back to rtems.git.
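
As a hedged sketch, such a limits check in a CI job could look like this
(the limit values and dictionary layout are assumptions for illustration;
rtems-central defines its own specification format):

# Return a failure message for each executable whose measured total
# exceeds its specified limit.  All numbers here are hypothetical.
def check_limits(measurements, limits):
    return [f"{name}: {total} bytes exceeds the {limits[name]}-byte limit"
            for name, total in measurements.items()
            if name in limits and total > limits[name]]

for failure in check_limits(
        {"mem-scheduler-add-cpu.norun.exe": 300000},   # measured
        {"mem-scheduler-add-cpu.norun.exe": 262144}):  # limit
    print(failure)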


I am sorry, until I understand more about these tests, I have to say no. I would
much prefer to say yes but I will need a hand in understanding them.

My concern is simple. I make a change and it affects something in these tests,
yet I have no ability to know if I have triggered a failure.

We will be adding CI flows to improve the quality of what we do, and this
approach sidesteps that effort; however, I am not sure about this until I
understand more about them.

I don't think this approach sidesteps improving the quality of what we do. On the contrary, these static memory benchmarks and the specification approach enable you to catch static memory usage regressions, so they would be ideally suited for CI jobs. In fact, the infrastructure in rtems-central.git is an excellent toolbox for CI jobs.
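
For example, a regression gate in a CI job could compare the current
totals against a stored baseline (the baseline file name and the 5 %
tolerance are assumptions for illustration):

# Flag executables whose static memory total grew beyond the tolerance
# relative to a JSON baseline recorded by an earlier CI run.
import json

def find_regressions(baseline_path, current, tolerance=0.05):
    with open(baseline_path) as f:
        baseline = json.load(f)
    return [f"{name}: {size} bytes vs. baseline {baseline[name]} bytes"
            for name, size in current.items()
            if name in baseline and size > baseline[name] * (1 + tolerance)]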

What is the plan for the CI flows?
