My, how time flies … again!
## Unity and doctest
Since I’m using PlatformIO, the obvious candidate for running embedded tests is
Unity. It’s small, and
well-supported with pio test. It’s also a far cry from what the
native-only doctest framework offers.
Here are some of doctest’s distinguishing features:
- Automatic registration of tests: this avoids a common gotcha in Unity, where I'd add a test but then forget to also call it from the main run code.
- Being able to describe tests using arbitrary text, instead of having to restrict each name to a valid C function name (“thisIsYetAnotherTest” …).
- A straightforward way to generate arbitrary clarifying output, some of which will only be printed when a check fails.
- Optional setup and teardown code which can be adjusted for each group of tests. In Unity, you have to create separate main apps if the setup / teardown requirements differ between tests.
- This is doctest’s killer C++ feature (gleaned from Catch2): printing not just the comparison expression which failed, but also the values on both sides of that comparison.
## Why?
Feature #1 adds considerable convenience, because you can comment out failing tests right where the code is. No need to comment out the calls (sometimes test code fails to compile, and then you really want it removed as you work through the new issues).
Feature #2 is also quite important, to reduce the friction of having to come up with new function names. Sometimes even briefly using the same name twice helps.
Feature #3 is a matter of letting logging output through, preferably in a way which can easily be turned off once such detailed output starts to distract.
Feature #4 makes it easy to refactor tests by combining common code. This tends to be a constant activity, as tests get added, adjusted, and split into two. In Unity, you end up fighting the system by placing all the setup / teardown logic in one spot, sometimes far away from where it gets used.
And lastly, feature #5 is a real treat: the code CHECK(abc == 123); will
report the value of abc when it is not 123. An ordinary assertion
will only report “abc == 123 failed”, because that’s all the C/C++
pre-processor can generate as a string.
As time went on, I got more and more dissatisfied with Unity. I looked at other test frameworks, but found nothing with these features while also being suited for embedded µC use.
And so I set out to write my own test framework (how hard could it be, right?) …
## Meet “utest” …
Like doctest, the entire framework is contained in a single source file, called
utest.h. No installation needed, it’s just a header file. Unlike doctest, I
don’t use STL, or exceptions, or even setjmp / longjmp. The source code of utest
is about 1% the size of doctest. It’s made for
JeeH (it relies on logf for its output, but see
below). I will generalise utest as the need arises … but not prematurely.
Tests are placed in header files. Here’s an example mytest.h:
```cpp
TEST(let's add) {
    int a = 1, b = 2;
    CHECK(a + b == 3);
}
```

To run this test, it needs a main.cpp file, e.g. something like this:
```cpp
#include <jee.h>
#include <jee/hal.h>
using namespace jeeh;

#include "defs.hpp"
#include "utest.h"
#include "mytest.h"

int main () {
    initBoard();
    int ms = 500;
    if (!utest::runner())
        ms = 100;
    while (true) { led.toggle(); msWait(ms); }
}
```

Several things to note here:
- The main code is very much like any main code using JeeH.
- Test output will be sent to wherever `logf` output is configured to go.
- Once all tests run to completion, the LED will blink: slow if it passed, or fast if it failed.
- The output ends with: `=====> 1 tests (0 skipped), checks: 1 OK <=====`
- Feature #1: yes, all tests automatically register themselves.
- Feature #2: test names can be just about anything, even empty or duplicates.
- Feature #3: sure, anything can be logged (although not conditionally).
- Feature #4: yep, see the next section.
- Feature #5: nope, that’s way too complicated. Logging will have to do, see #4.
## Setup and teardown
To use setup code, add a SETUP() { ... } definition before the affected tests.
Same for teardown code, using TEARDOWN() { ... }. These will be called for
all subsequent tests in the same source file. Setup and teardown functions can
be redefined as needed, again applying to all subsequent tests. To disable
either one for the rest of the file, define it with an empty body:
```cpp
SETUP() { logf("setup"); }
TEARDOWN() { logf("teardown"); }

TEST(wrapped in setup & teardown) { ... }
TEST(also wrapped in setup & teardown) { ... }

SETUP() { logf("different setup"); }

TEST(wrapped in different setup & same teardown) { ... }

SETUP() {}
TEARDOWN() {}

TEST(no more setup & teardown here) { ... }
```

## Verbose output
Normally, only failed and skipped tests are reported. To report every check
before it runs, utest::runner accepts a boolean argument. If true, the
runner switches to verbose mode:
```cpp
... utest::runner(true) ...
```

## Skipping tests
There are a number of ways to disable tests:
- perhaps the most obvious approach is to comment it out (or use `#if 0`)
- to disable all tests in a specific header, comment out its `#include` line
- a test can also be stopped at any point by inserting a `SKIPPED();` statement
- lastly, a test can be forced to explicitly fail, by inserting a `FAILED();` call
## Additional output
Since utest relies on JeeH’s logf function to report its result, all other
logf calls inside the tests will also be shown. To easily disable an entire
class of debug output, the following trick can be used:
```cpp
// uncomment first version to include output, or second to omit it
#define testf logf
//#define testf(...)

#include "mytest1.h"
#include "mytest2.h"
#include "mytest3.h"
```

Then, just call testf instead of logf to generate output which can easily be
turned off globally. When disabled, all those calls will be left out of the
build.
## PlatformIO integration
To integrate this into PlatformIO’s pio test command, place all the files in
the test/ folder:
```
.
├── platformio.ini
├── README.md
└── test
    ├── blah
    │   └── mytest.h
    ├── defs.hpp
    ├── main.cpp
    ├── mytest.h
    ├── test_custom_runner.py
    └── utest.h
```

The platformio.ini file must specify `test_framework = custom`, and works in
combination with a small test/test_custom_runner.py script which ties it all
together:
```
$ pio test
Verbosity level can be increased via `-v, -vv, or -vvv` option
Collected 1 tests

[...]
** Resetting Target **
shutdown command invoked
Testing...
If you don't see any output for the first 10 secs, please reset board (press reset button)
------------------------- main:* [FAILED] Took 2.12 seconds -------------------------

====================================== SUMMARY ======================================
Environment    Test    Status    Duration
-------------  ------  --------  ------------
main           *       FAILED    00:00:02.115

______________________________________ main:* ______________________________________
test/mytest.h:13:two is company:FAIL:
test/mytest.h:28:changed setup & teardown:FAIL:42 == 40+1
test/blah/mytest.h:10:even more!:FAIL:42 == 40+1
========== 6 test cases: 3 failed, 1 skipped, 2 succeeded in 00:00:02.115 ==========
```

## Mixing run and test modes
Sometimes it’s easier to use PlatformIO’s normal pio run command to build and
upload code, and sometimes the pio test command is more convenient:
- With `pio run`, you control when the code gets uploaded and when to connect to the board for communicating with the app: it just happens to consist of tests.
- With `pio test`, the entire process of uploading, connecting, and parsing the output is automated. Test results are summarised and a status code is returned for use from a shell script.
With a small modification, both modes can be used interchangeably:
- Place all the code in PlatformIO’s `src/` folder. Don’t create a `test/` folder.
- Insert these lines at the start of the `platformio.ini` file:

```ini
[platformio]
test_dir = src
```
This way, pio run and pio test can both be used. The former builds (and
uploads), the latter also “runs” the tests, i.e. parses the output, waits for
completion, and reports the outcome.
Note that run and test builds can also be used to build different versions
of an app within the same project: the UNIT_TEST macro is only defined in test
builds.
## Limitations
The CHECK, FAILED, and SKIPPED macros can only be used inside a TEST,
SETUP, or TEARDOWN definition, because they’ll perform a simple return to
exit a test case. There are no exceptions or “long” jumps out of the code.
Utest is not meant for use in nested function calls: test results can only be
verified at the top level. On the plus side: all C++ destructor cleanup will
take place as expected.
Unlike doctest, there are no “subcases” in utest (neither are there in Unity).
The “feature #5” magic is unique to doctest. When things fail, logf and gdb
are your friends.
## General use
The code for utest is here:
https://git.sr.ht/~jcw/doodle/tree/main/item/tester/src/utest.h.
Utest can be used without the JeeH library if there is
an implementation of logf matching this signature:
```cpp
void logf (char const* fmt, ...);
```

Since logf acts as an alias for printf, a simple trick can be used to make
utest work in any context:

```cpp
#define logf printf
#include "utest.h"
```

There are no other dependencies. It’s all implemented as a single header file.
## License
There is no license: this code lives in the public domain. Go wild!