Test Specification for ChibiOS/RT. ChibiOS/RT Test Suite. The purpose of this suite is to perform unit tests on the RT modules and to converge to 100% code coverage through successive improvements. Internal Tests Information. This sequence reports configuration and version information about the RT kernel. Port Info. Port-related info is reported. Prints the version string. Kernel Info. The version numbers are reported. Prints the version string. Kernel Settings. The static kernel settings are reported. Prints the configuration options settings. Internal Tests System layer and port interface. The functionality of the system layer and port interface is tested. Basic RT functionality is taken for granted or this test suite could not even be executed. Errors in implementation are detected by executing this sequence with the state checker enabled (CH_DBG_STATE_CHECKER=TRUE). System integrity functionality. The system self-test functionality is invoked in order to make an initial system state assessment and for coverage. Testing Ready List integrity. Testing Virtual Timers List integrity. Testing Registry List integrity. Testing Port-defined integrity. Critical zones functionality. The critical zones API is invoked for coverage. Testing chSysGetStatusAndLockX() and chSysRestoreStatusX(), non-reentrant case. Testing chSysGetStatusAndLockX() and chSysRestoreStatusX(), reentrant case. Testing chSysUnconditionalLock(). Testing chSysUnconditionalUnlock(). Testing from ISR context using a virtual timer. Interrupts handling functionality. The interrupts handling API is invoked for coverage. Testing chSysSuspend(), chSysDisable() and chSysEnable(). System Tick Counter functionality. The functionality of the API @p chVTGetSystemTimeX() is tested. A System Tick Counter increment is expected, the test simply hangs if it does not happen. Internal Tests Threads Functionality. This sequence tests the ChibiOS/RT functionalities related to threading.
Thread Sleep functionality. The functionality of @p chThdSleep() and derivatives is tested. The current system time is read then a sleep is performed for 100 system ticks and on exit the system time is verified again. The current system time is read then a sleep is performed for 100000 microseconds and on exit the system time is verified again. The current system time is read then a sleep is performed for 100 milliseconds and on exit the system time is verified again. The current system time is read then a sleep is performed for 1 second and on exit the system time is verified again. Function chThdSleepUntil() is tested with a timeline of "now" + 100 ticks. Ready List functionality, threads priority order. Five threads are enqueued in the ready list and atomically executed. The test expects the threads to perform their operations in correct priority order regardless of the initial order. Creating 5 threads with increasing priority, execution sequence is tested. Creating 5 threads with decreasing priority, execution sequence is tested. Creating 5 threads with pseudo-random priority, execution sequence is tested. Priority change test. A series of priority changes are performed on the current thread in order to verify that the priority change happens as expected. Thread priority is increased by one then a check is performed. Thread priority is returned to the previous value then a check is performed. Priority change test with Priority Inheritance. A series of priority changes are performed on the current thread in order to verify that the priority change happens as expected. CH_CFG_USE_MUTEXES Simulating a priority boost situation (prio > realprio): chSysLock(); chThdGetSelfX()->prio += 2; chSysUnlock(); test_assert(chThdGetPriorityX() == prio + 2, "unexpected priority level"); Raising thread priority above original priority but below the boosted level.
chThdSetPriority(prio + 1); test_assert(chThdGetPriorityX() == prio + 2, "unexpected priority level"); test_assert(chThdGetSelfX()->realprio == prio + 1, "unexpected returned real priority level"); Raising thread priority above the boosted level: chThdSetPriority(prio + 3); test_assert(chThdGetPriorityX() == prio + 3, "unexpected priority level"); test_assert(chThdGetSelfX()->realprio == prio + 3, "unexpected real priority level"); Restoring original conditions: chSysLock(); chThdGetSelfX()->prio = prio; chThdGetSelfX()->realprio = prio; chSysUnlock(); Internal Tests Suspend/Resume. This sequence tests the ChibiOS/RT functionalities related to threads suspend/resume. Suspend and Resume functionality. The functionality of chThdSuspendTimeoutS() and chThdResumeI() is tested. The function chThdSuspendTimeoutS() is invoked, the thread is remotely resumed with message @p MSG_OK. On return the message and the state of the reference are tested. The function chThdSuspendTimeoutS() is invoked, the thread is not resumed so a timeout must occur. On return the message and the state of the reference are tested. Internal Tests Counter Semaphores. This sequence tests the ChibiOS/RT functionalities related to counter semaphores. CH_CFG_USE_SEMAPHORES Semaphore primitives, no state change. Wait, Signal and Reset primitives are tested. The testing thread does not trigger a state change. The function chSemWait() is invoked, after return the counter and the returned message are tested. The function chSemSignal() is invoked, after return the counter is tested. The function chSemReset() is invoked, after return the counter is tested. Semaphore enqueuing test. Five threads with randomized priorities are enqueued to a semaphore then awakened one at a time. The test expects that the threads reach their goal in FIFO order or priority order depending on the @p CH_CFG_USE_SEMAPHORES_PRIORITY configuration setting. Five threads are created with mixed priority levels (not increasing nor decreasing). Threads enqueue on a semaphore initialized to zero. The semaphore is signaled 5 times.
The thread activation sequence is tested. Semaphore timeout test. The three possible semaphore waiting modes (do not wait, wait with timeout, wait without timeout) are explored. The test expects that the semaphore wait function returns the correct value in each of the above scenarios and that the semaphore structure status is correct after each operation. Testing special case TIME_IMMEDIATE. Testing non-timeout condition. Testing timeout condition. Testing chSemAddCounterI() functionality. The function is tested by waking up a thread then the semaphore counter value is tested. A thread is created, it goes to wait on the semaphore. The semaphore counter is increased by two, it is then tested to be one, the thread must have completed. Testing chSemWaitSignal() functionality. This test case explicitly addresses the @p chSemWaitSignal() function. A thread is created that performs a wait and a signal operation. The tester thread is awakened from an atomic wait/signal operation. The test expects that the semaphore wait function returns the correct value in each of the above scenarios and that the semaphore structure status is correct after each operation. A higher priority thread is created that performs non-atomic wait and signal operations on a semaphore. The function chSemSignalWait() is invoked by specifying the same semaphore for the wait and signal phases. The counter value must be one on exit. The function chSemSignalWait() is invoked again by specifying the same semaphore for the wait and signal phases. The counter value must be one on exit. Testing Binary Semaphores special case. This test case tests the binary semaphores functionality. The test both checks the binary semaphore status and the expected status of the underlying counting semaphore. Creating a binary semaphore in "taken" state, the state is checked. Resetting the binary semaphore in "taken" state, the state must not change. Starting a signaler thread at a lower priority.
Waiting for the binary semaphore to be signaled, the semaphore is expected to be taken. Signaling the binary semaphore, checking the binary semaphore state to be "not taken" and the underlying counter semaphore counter to be one. Signaling the binary semaphore again, the internal state must not change from "not taken". Internal Tests Mutexes, Condition Variables and Priority Inheritance. This sequence tests the ChibiOS/RT functionalities related to mutexes, condition variables and the priority inheritance algorithm. CH_CFG_USE_MUTEXES Priority enqueuing test. Five threads, with increasing priority, are enqueued on a locked mutex then the mutex is unlocked. The test expects the threads to perform their operations in increasing priority order regardless of the initial order. Getting the initial priority. Locking the mutex. Five threads are created that try to lock and unlock the mutex then terminate. The threads are created in ascending priority order. Unlocking the mutex, the threads will wake up in priority order because the mutex queue is an ordered one. Priority inheritance, simple case. Three threads are involved in the classic priority inversion scenario, a medium priority thread tries to starve a high priority thread by blocking a low priority thread inside a mutex lock zone. The test expects the threads to reach their goal in increasing priority order by rearranging their priorities in order to avoid the priority inversion trap. CH_DBG_THREADS_PROFILING Getting the system time for test duration measurement. The three contender threads are created and let run atomically, the goals sequence is tested, the threads must complete in priority order. Testing that all threads completed within the specified time windows (100mS...100mS+ALLOWED_DELAY). Priority inheritance, complex case. Five threads are involved in the complex priority inversion scenario, the priority inheritance algorithm is tested for depths greater than one.
The test expects the threads to perform their operations in increasing priority order by rearranging their priorities in order to avoid the priority inversion trap. CH_DBG_THREADS_PROFILING Getting the system time for test duration measurement. The five contender threads are created and let run atomically, the goals sequence is tested, the threads must complete in priority order. Testing that all threads completed within the specified time windows (110mS...110mS+ALLOWED_DELAY). Priority return verification. Two threads are spawned that try to lock the mutexes already locked by the tester thread with precise timing. The test expects that the priority changes caused by the priority inheritance algorithm happen at the right moment and with the right values.<br> Thread A performs wait(50), lock(m1), unlock(m1), exit. Thread B performs wait(150), lock(m2), unlock(m2), exit. Getting current thread priority P(0) and assigning to the threads A and B priorities +1 and +2. Spawning threads A and B at priorities P(A) and P(B). Locking the mutex M1 before thread A has a chance to lock it. The priority must not change because A has not yet reached chMtxLock(M1), the mutex is not locked. Waiting 100mS, this makes thread A reach chMtxLock(M1) and get the mutex. This must boost the priority of the current thread to the same level of thread A. Locking the mutex M2 before thread B has a chance to lock it. The priority must not change because B has not yet reached chMtxLock(M2), the mutex is not locked. Waiting 100mS, this makes thread B reach chMtxLock(M2) and get the mutex. This must boost the priority of the current thread to the same level of thread B. Unlocking M2, the priority should fall back to P(A). Unlocking M1, the priority should fall back to P(0). Repeated locks, non-recursive scenario. The behavior of multiple mutex locks from the same thread is tested when recursion is disabled. !CH_CFG_USE_MUTEXES_RECURSIVE Getting current thread priority for later checks.
Locking the mutex a first time, it must be possible because it is not owned. Locking the mutex a second time, it must fail because it is already owned. Unlocking the mutex then it must not be owned anymore and the queue must be empty. Testing that priority has not changed after operations. Testing chMtxUnlockAll() behavior. Testing that priority has not changed after operations. Repeated locks, recursive scenario. The behavior of multiple mutex locks from the same thread is tested when recursion is enabled. CH_CFG_USE_MUTEXES_RECURSIVE Getting current thread priority for later checks. Locking the mutex a first time, it must be possible because it is not owned. Locking the mutex a second time, it must be possible because it is recursive. Unlocking the mutex, it must still be owned because of recursion. Unlocking the mutex again, it must not be owned anymore and the queue must be empty. Testing that priority has not changed after operations. Testing consecutive chMtxTryLock()/chMtxTryLockS() calls and a final chMtxUnlockAllS(). Testing consecutive chMtxLock()/chMtxLockS() calls and a final chMtxUnlockAll(). Testing that priority has not changed after operations. Condition Variable signal test. Five threads take a mutex and then enter a condition variable queue, the tester thread then proceeds to signal the condition variable five times atomically.<br> The test expects the threads to reach their goal in increasing priority order regardless of the initial order. CH_CFG_USE_CONDVARS Starting the five threads with increasing priority, the threads will queue on the condition variable. Atomically signaling the condition variable five times then waiting for the threads to terminate in priority order, the order is tested. Condition Variable broadcast test.
Five threads take a mutex and then enter a condition variable queue, the tester thread then proceeds to broadcast the condition variable.<br> The test expects the threads to reach their goal in increasing priority order regardless of the initial order. CH_CFG_USE_CONDVARS Starting the five threads with increasing priority, the threads will queue on the condition variable. Broadcasting on the condition variable then waiting for the threads to terminate in priority order, the order is tested. Condition Variable priority boost test. This test case verifies the priority boost of a thread waiting on a condition variable queue. It tests this very specific situation in order to improve code coverage. The created threads perform the following operations: TA{lock(M2), lock(M1), wait(C1), unlock(M1), unlock(M2)}, TB{lock(M2), wait(C1), unlock(M2)}, TC{lock(M1), unlock(M1)}. CH_CFG_USE_CONDVARS Reading current base priority. Thread A is created at priority P(+1), it locks M2, locks M1 and goes to wait on C1. Thread C is created at priority P(+2), it enqueues on M1 and boosts TA priority to P(+2). Thread B is created at priority P(+3), it enqueues on M2 and boosts TA priority to P(+3). Signaling C1: TA wakes up, unlocks M1 and priority goes to P(+2). TC locks M1, unlocks M1 and completes. TA unlocks M2 and priority goes to P(+1). TB locks M2 and waits on C1. TA completes. Signaling C1: TB wakes up, unlocks M2 and completes. Checking the order of operations. Internal Tests Synchronous Messages. This module implements the test sequence for the Synchronous Messages subsystem. CH_CFG_USE_MESSAGES Messages Server loop. A messenger thread is spawned that sends four messages back to the tester thread.<br> The test expects to receive the messages in the correct sequence and to not find a fifth message waiting. Starting the messenger thread. Waiting for four messages then testing the receive order. Internal Tests Event Sources and Event Flags.
This module implements the test sequence for the Events subsystem. CH_CFG_USE_EVENTS Events registration. Two event listeners are registered on an event source and then unregistered in the same order.<br> The test expects that the event source has listeners after the registrations and after the first unregistration, then, after the second unregistration, the test expects no more listeners. An Event Source is initialized. Two Event Listeners are registered on the Event Source, the Event Source is tested to have listeners. An Event Listener is unregistered, the Event Source must still have listeners. An Event Listener is unregistered, the Event Source must not have listeners. Event Flags dispatching. The test dispatches three event flags and verifies that the associated event handlers are invoked in LSb-first order. Three event flag bits are raised then chEvtDispatch() is invoked, the sequence of handler calls is tested. Event Flags wait using chEvtWaitOne(). Functionality of chEvtWaitOne() is tested under various scenarios. Setting three event flags. Calling chEvtWaitOne() three times, each time a single flag must be returned in order of priority. Getting current time and starting a signaler thread, the thread will set an event flag after 50mS. Calling chEvtWaitOne() then verifying that the event has been received after 50mS and that the event flags mask has been emptied. Event Flags wait using chEvtWaitAny(). Functionality of chEvtWaitAny() is tested under various scenarios. Setting two, non-contiguous, event flags. Calling chEvtWaitAny() one time, the two flags must be returned. Getting current time and starting a signaler thread, the thread will set an event flag after 50mS. Calling chEvtWaitAny() then verifying that the event has been received after 50mS and that the event flags mask has been emptied. Event Flags wait using chEvtWaitAll(). Functionality of chEvtWaitAll() is tested under various scenarios. Setting two, non-contiguous, event flags.
Calling chEvtWaitAll() one time, the two flags must be returned. Setting one event flag. Getting current time and starting a signaler thread, the thread will set another event flag after 50mS. Calling chEvtWaitAll() then verifying that both event flags have been received after 50mS and that the event flags mask has been emptied. Event Flags wait timeouts. Timeout functionality is tested for chEvtWaitOneTimeout(), chEvtWaitAnyTimeout() and chEvtWaitAllTimeout(). CH_CFG_USE_EVENTS_TIMEOUT The functions are invoked first with a TIME_IMMEDIATE timeout, the timeout condition is tested. The functions are then invoked with a 50mS timeout, the timeout condition is tested. Broadcasting using chEvtBroadcast(). Functionality of chEvtBroadcast() is tested. Registering on two event sources associating them with flags 1 and 4. Getting current time and starting a broadcaster thread, the thread broadcasts the first Event Source immediately and the other after 50mS. Calling chEvtWaitAll() then verifying that both event flags have been received after 50mS and that the event flags mask has been emptied. Unregistering from the Event Sources. Internal Tests Dynamic threads. This module implements the test sequence for the dynamic thread creation APIs. CH_CFG_USE_DYNAMIC Threads creation from Memory Heap. Two threads are started by allocating the memory from the Memory Heap then a third thread is started with a huge stack requirement.<br> The test expects the first two threads to successfully start and the third one to fail. CH_CFG_USE_HEAP Getting base priority for threads. Getting heap info before the test. Creating thread 1, it is expected to succeed. Creating thread 2, it is expected to succeed. Creating thread 3 ("dyn3", priority prio-3, function dyn_thread1, argument "C") with a huge stack requirement, it is expected to fail: test_assert(threads[2] == NULL, "thread creation not failed"); Letting threads execute then checking the start order and freeing memory. Getting heap info again for verification.
Threads creation from Memory Pool. Five thread creations are attempted from a pool containing only four elements.<br> The test expects the first four threads to successfully start and the last one to fail. CH_CFG_USE_MEMPOOLS Adding four working areas to the pool. Getting base priority for threads. Creating the five threads. Testing that only the fifth thread creation failed. Letting them run, free the memory then checking the execution sequence. Testing that the pool contains four elements again. Benchmarks. This module implements a series of system benchmarks. The benchmarks are useful as a stress test and as a reference when comparing ChibiOS/RT with similar systems.<br> Objective of the test sequence is to provide a performance index for the most critical system subsystems. The performance numbers make it possible to discover performance regressions between successive ChibiOS/RT releases. The benchmark helper threads are defined as follows:

static THD_FUNCTION(bmk_thread4, p) {
  msg_t msg;
  (void)p;
  chSysLock();
  do {
    chSchGoSleepS(CH_STATE_SUSPENDED);
    msg = chThdGetSelfX()->u.rdymsg;
  } while (msg == MSG_OK);
  chSysUnlock();
}

#if CH_CFG_USE_SEMAPHORES
static THD_FUNCTION(bmk_thread7, p) {
  (void)p;
  while (!chThdShouldTerminateX())
    chSemWait(&sem1);
}
#endif

static THD_FUNCTION(bmk_thread8, p) {
  do {
    chThdYield();
    chThdYield();
    chThdYield();
    chThdYield();
    (*(uint32_t *)p) += 4;
#if defined(SIMULATOR)
    _sim_check_for_interrupts();
#endif
  } while (!chThdShouldTerminateX());
}

Messages performance #1. A message server thread is created with a lower priority than the client thread, the messages throughput per second is measured and the result printed on the output log. CH_CFG_USE_MESSAGES The messenger thread is started at a lower priority than the current thread. The number of messages exchanged is counted in a one second time window. Score is printed. Messages performance #2. A message server thread is created with a higher priority than the client thread, the messages throughput per second is measured and the result printed on the output log. CH_CFG_USE_MESSAGES The messenger thread is started at a higher priority than the current thread.
The number of messages exchanged is counted in a one second time window. Score is printed. Messages performance #3. A message server thread is created with a higher priority than the client thread, four lower priority threads crowd the ready list, the messages throughput per second is measured while the ready list is crowded, and the result is printed on the output log. CH_CFG_USE_MESSAGES The messenger thread is started at a higher priority than the current thread. Four threads are started at a lower priority than the current thread. The number of messages exchanged is counted in a one second time window. Score is printed. Context Switch performance. A thread is created that just performs a @p chSchGoSleepS() in a loop, the thread is awakened as fast as possible by the tester thread.<br> The Context Switch performance is calculated by measuring the number of iterations after a second of continuous operations. Starting the target thread at a higher priority level. Waking up the thread as fast as possible in a one second time window. Stopping the target thread. Score is printed. Threads performance, full cycle. Threads are continuously created and terminated in a loop. A full @p chThdCreateStatic() / @p chThdExit() / @p chThdWait() cycle is performed in each iteration.<br> The performance is calculated by measuring the number of iterations after a second of continuous operations. A thread is created at a lower priority level and its termination detected using @p chThdWait(). The operation is repeated continuously in a one-second time window. Score is printed. Threads performance, create/exit only. Threads are continuously created and terminated in a loop.
A partial @p chThdCreateStatic() / @p chThdExit() cycle is performed in each iteration, the @p chThdWait() is not necessary because the thread is created at a higher priority so there is no need to wait for it to terminate.<br> The performance is calculated by measuring the number of iterations after a second of continuous operations. A thread is created at a higher priority level and let terminate immediately. The operation is repeated continuously in a one-second time window. Score is printed. Mass reschedule performance. Five threads are created and atomically rescheduled by resetting the semaphore on which they are waiting. The operation is performed in a continuous loop.<br> The performance is calculated by measuring the number of iterations after a second of continuous operations. CH_CFG_USE_SEMAPHORES Five threads are created at a higher priority and immediately enqueue on a semaphore. The semaphore is reset waking up the five threads. The operation is repeated continuously in a one-second time window. The five threads are terminated. The score is printed. Round-Robin voluntary reschedule. Five threads are created at equal priority, each thread just increases a variable and yields.<br> The performance is calculated by measuring the number of iterations after a second of continuous operations. The five threads are created at a lower priority. The threads have equal priority and start calling @p chThdYield() continuously. Waiting one second then terminating the 5 threads. The score is printed. Virtual Timers set/reset performance. A virtual timer is set and immediately reset in a continuous loop.<br> The performance is calculated by measuring the number of iterations after a second of continuous operations. Two timers are set then reset without waiting for their counters to elapse. The operation is repeated continuously in a one-second time window. The score is printed.
Semaphores wait/signal performance. A counting semaphore is taken/released in a continuous loop, no Context Switch happens because the counter is always non-negative.<br> The performance is calculated by measuring the number of iterations after a second of continuous operations. CH_CFG_USE_SEMAPHORES A semaphore is taken and released. The operation is repeated continuously in a one-second time window. The score is printed. Mutexes lock/unlock performance. A mutex is locked/unlocked in a continuous loop, no Context Switch happens because there are no other threads asking for the mutex.<br> The performance is calculated by measuring the number of iterations after a second of continuous operations. CH_CFG_USE_MUTEXES A mutex is locked and unlocked. The operation is repeated continuously in a one-second time window. The score is printed. RAM Footprint. The memory size of the various kernel objects is printed. The size of the system area is printed. The size of a thread structure is printed. The size of a virtual timer structure is printed. The size of a semaphore structure is printed. The size of a mutex is printed. The size of a condition variable is printed. The size of an event source is printed. The size of an event listener is printed. The size of a mailbox is printed.