Hi,

As Chris is wrapping up the work on not building the BSP/test combinations which are known to run out of memory when linking, I thought I would gather the information about another large class of test failures.
Some BSPs are for simulators, and some of those simulators do not support interrupts or a clock tick source.[1] On those BSPs, the only way to mimic the passage of time is the "sim clock idle" code. This is a clock device driver which is nothing more than an idle task calling rtems_clock_tick() until some higher priority task is unblocked and the idle task is preempted (a rough sketch of this loop is included below). Some tests were designed to assume a real clock tick, which can cause a preemption even from a busy loop. These tests may or may not genuinely need that assumption but, at the moment, they make it. I am certain that sp04 and the *intrcrit* tests do indeed require a clock tick interrupt to work.

I have attached a list of the 24 BSPs which use the simulated clock tick and the 47 tests which will never complete execution without a real, interrupt driven clock tick. When running the tests, these are known to fail, but nothing in the build or test framework accounts for that. These tests simply run until a time limit is reached; on most of our simulator scripts, that limit is 180 seconds. For a single test execution cycle, this wastes approximately 142 minutes per BSP with this characteristic. Over 24 BSPs, that is more than 56 hours wasted.

There are two possibilities for what to do:

+ not build these tests for these BSPs
+ build but do not execute these tests for these BSPs

Personally, I prefer to build them and not execute them. It is better to build all the code we can for tool and RTEMS integrity purposes.

Whatever we decide, I think we should think in generic terms: "test X requires capability Y which a BSP may or may not provide". How do we state that a BSP does not have capability Y, and how do we maintain a master list of the tests which need that capability?

My first thought was to integrate this information into rtems-testing/sim-scripts, which would build the tests but not execute them. If we take this approach, it would have to be carried over into the rtems-test framework as it matures. The lists could also be integrated into Chris' "do not build" framework which was posted this week.

I would like some discussion on this. Addressing it in a general way is important to getting accurate, high quality test results and to being able to execute the test suite in a reasonable length of time.

[1] Why use these simulators? We use them because they are sufficient to test most of RTEMS, run all of the GCC tests, usually are sufficient to debug an architecture specific problem that is not interrupt related, and are often built into GDB.

--
Joel Sherrill, Ph.D.             Director of Research & Development
[email protected]              On-Line Applications Research
Ask me about RTEMS: a free RTOS  Huntsville AL 35805
Support Available                (256) 722-9985
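For anyone unfamiliar with the simulated clock tick, here is a minimal sketch of the idea. It is not the actual driver source, and the task body name is just a placeholder; the only point is that time advances solely while this lowest-priority loop gets to run:

  #include <rtems.h>

  /*
   * Sketch of a "sim clock idle" style clock driver body.  A task
   * created at the lowest priority spins calling rtems_clock_tick().
   * Each call announces one tick to the kernel; as soon as a tick
   * unblocks a higher priority task, this task is preempted and time
   * stops advancing until nothing else is ready to run again.
   */
  rtems_task sim_clock_idle_task( rtems_task_argument ignored )
  {
    (void) ignored;

    for ( ;; ) {
      rtems_clock_tick();
    }
  }

A test that busy waits at normal priority never yields to this task, so rtems_clock_tick() never fires and the test hangs until the simulator script's time limit kills it, which is where the 180 second per test penalty above comes from.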
BSPs which use the simulated clock tick (24):

arm1136jfs arm1136js arm7tdmi arm920 armcortexa9 avrtest h8sim h8sxsim m32csim m32rsim moxiesim qemuppc simsh1 simsh2 simsh2e simsh4 niagara usiii v850e1sim v850e2sim v850e2v3sim v850esim v850essim v850sim
Tests which require an interrupt driven clock tick (47):

cpuuse psx07 psx09 psx10 psx11 psxcancel01 psxgetrusage01 psxintrcritical01 psxsignal01 psxsignal02 psxspin01 psxtime psxtimes01 sp04 sp14 sp19 sp35 sp38 sp44 sp69 spcbssched02 spcbssched03 spcontext01 spcpucounter01 spedfsched03 spintrcritical01 spintrcritical02 spintrcritical03 spintrcritical04 spintrcritical05 spintrcritical06 spintrcritical07 spintrcritical08 spintrcritical09 spintrcritical10 spintrcritical11 spintrcritical12 spintrcritical13 spintrcritical14 spintrcritical15 spintrcritical16 spintrcritical17 spintrcritical18 spintrcritical19 spintrcritical20 spnsext01 spqreslib
