
A case for trying Test-Driven Development as an Embedded Engineer

Intro

This blog post is a written version of a conversation I've had a few (or maybe more than a few😅) times. As an embedded software developer, I am always looking for ways to make development easier, faster, and more efficient. One of the best improvements I've made to the quality of my software was adopting Test-Driven Development.


I'll dive a little into what it is and how it may help you as an embedded engineer, but for now, I'll leave you with a hook - What if you could develop software for hardware without ever pulling out the debugger? Without needing the target device to test? And with some practice, produce cleaner code faster while shipping with confidence?


Ambitious, I know.


What is TDD?

So... what is Test-Driven Development (TDD)? Glad you asked - at its core, it's a method of developing software with the following steps:

  1. Write a failing test that captures the intended behaviour of a software module.

  2. Write just enough code to satisfy the test (even if it means hard-coding variables or not accounting for errors).

  3. Refactor the code and the test (replacing hard-coded values, adding more robust expectations, or making the test cleaner) while keeping the test passing.

  4. Repeat - writing only as much code as is ever required to pass a failing test.


By 'a test', I'm referring to a unit test, though the concept of TDD can also be extended to other types of testing, such as integration or hardware-in-the-loop.


Often, the phrase "red, green, refactor" is used to describe the cycle of TDD, where 'red' is the failing test, 'green' is making the test pass, and 'refactor' is cleaning up the code until it matches the desired requirements and spec of the software module.
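To make the cycle concrete, here's a minimal sketch of one pass through red-green-refactor in C. The module (an ADC-count-to-millivolts converter) and every name in it are hypothetical, and I'm assuming the Unity test framework, though any unit testing framework works the same way:

```c
#include <stdint.h>
#include "unity.h"

/* Normally declared in the module's header (e.g. adc_conv.h). */
uint32_t adc_to_millivolts(uint32_t counts);

/* Step 1 (red): write a failing test that captures the intended
 * behaviour. It fails at first because the function doesn't exist yet. */
void test_full_scale_count_maps_to_vref(void)
{
    /* 12-bit ADC with a 3300 mV reference. */
    TEST_ASSERT_EQUAL_UINT32(3300, adc_to_millivolts(4095));
}

/* Step 2 (green): write just enough code to pass - hard-coding is fine. */
uint32_t adc_to_millivolts(uint32_t counts)
{
    (void)counts;
    return 3300;
}

/* Steps 3-4 (refactor, repeat): the next failing test forces the
 * hard-coded value out, driving the implementation toward the real
 * conversion: return (counts * 3300u) / 4095u;                      */
void test_half_scale_count_maps_to_half_vref(void)
{
    TEST_ASSERT_EQUAL_UINT32(1650, adc_to_millivolts(2048));
}
```

Each pass through the loop adds exactly one expectation and only as much production code as that expectation demands.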


Okay... but why?

I boldly claimed at the start that TDD can help you develop software without a debugger or target device and produce cleaner code faster. In my experience writing embedded software, this holds true most of the time - let me explain why.


Unit tests usually run 'off-target' in embedded software development, which provides a significant advantage: you don't need to flash the target as often to test changes. Consider your most recent embedded project: how long did a compile-and-flash cycle take? If you're lucky, it's a couple of seconds. More often, it takes a minute, and in some cases, hours. After flashing, you still need to set up the test target, which takes additional time, especially when the software doesn't work as expected. This compile-flash-test cycle can become a blocker in delivering code quickly.


The TDD workflow instead relies on unit tests that run instantly on the host, avoiding the need to flash and set up the target for every change. Writing unit tests takes time, but usually far less than repeated compile-flash-test cycles do.
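For a sense of what 'off-target' looks like in practice (Unity assumed again; the file and test names are hypothetical), the whole test suite is just a native executable built and run with the host toolchain:

```c
/* test_main.c - host-side test runner. Build and run with the native
 * compiler; no cross-toolchain, flashing, or target hardware involved:
 *
 *     gcc -o run_tests test_main.c test_crc.c crc.c unity.c && ./run_tests
 */
#include "unity.h"

/* Unity requires these hooks; empty here as no per-test setup is needed. */
void setUp(void) {}
void tearDown(void) {}

/* Tests live in test_crc.c; declared here so RUN_TEST can call them. */
extern void test_crc8_of_empty_buffer_is_zero(void);
extern void test_crc8_matches_known_vector(void);

int main(void)
{
    UNITY_BEGIN();
    RUN_TEST(test_crc8_of_empty_buffer_is_zero);
    RUN_TEST(test_crc8_matches_known_vector);
    return UNITY_END();
}
```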


The second benefit comes from the unit test itself: it immediately alerts the developer when a result doesn't match expectations. If a test fails, the developer can debug it on the spot rather than discovering the issue later in a longer cycle. Anecdotally, practicing TDD has led to far fewer instances where I need to use a debugger on the target. James Grenning (who also wrote an excellent book on TDD for embedded C) coined the term "Debug Later Programming" in his blog post The Physics of Test Driven Development to contrast TDD with the typical compile-flash-test cycle, explaining that "In DLP, code is considered 'done' after it is designed and written. After the code is 'done' it is debugged". Exercising each change with incremental tests as the code is developed avoids debug-later programming and, in doing so, prevents situations where "the feedback, revealing programming mistakes, may take days, weeks or months to get back to the developer."


Moreover, unit tests offer free regression coverage, ensuring that refactored code still meets expectations. This makes it easier to refactor confidently and provides assurance that features won't break in the future.

What's different from just writing unit tests?

Strictly speaking, there isn't much difference: the software produced by 'just writing unit tests', assuming all code paths are tested, can be just as good as code produced with TDD.


However, I view TDD more as a tool to guide software development than as a specification that mandates particular testing behaviour. Following the red-green-refactor cycle helps keep iteration times short and, in my experience, reduces the time spent developing and debugging software while increasing the developer's confidence that the software is functioning correctly.


What TDD is bad at

While it may seem that I'm pitching TDD as a silver bullet for writing better embedded software, it is far from a complete solution that guarantees the quality of the software a given developer produces.


Some of the drawbacks I've noticed are as follows -


Can't mock complex interactions

Many of the systems I work on have complex, system-level interactions; testing a module in isolation does not guarantee that its new behaviour won't break an interaction outside of the module. This isn't a problem inherent to TDD but rather one with unit testing in general, and it is mitigated by having integration-level tests. Nevertheless, TDD does little to address this case, or it demands a system-level mock so complex that it may not be reasonable to develop.


Only asserts a programmer's expectations, and nothing more

TDD is excellent at giving programmers a tool to test their software and ensure it does what they think it should. However, humans are not perfect, and a programmer may misunderstand what the system does or should do. TDD can therefore provide false confidence that a system works correctly when, in reality, it only shows that the software works as intended - a test suite will happily assert that cows can fly. This motivates the need for good code-review practices, alongside other measures, to make sure the software does the correct thing.

Testing infrastructure is a tax

A quote I think is valuable to remember is that "software is functionally an asset, but code should be thought of as a liability". Tests are code - they require infrastructure to run and scripts to maintain, and they need proper mocks for their expectations to produce value.


What are the use-cases for TDD in embedded?

I've been able to apply TDD in various instances where I'm working on isolated software modules. For embedded systems specifically, I've found TDD works well for a diverse range of modules, from hardware-level device drivers to high-level application logic. A classic example where I encourage the use of TDD is writing a new SPI- or I2C-based device driver: it makes it easy to verify that packet processing is done correctly and that the right transceive commands are passed through the API.
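Here's a sketch of that setup, with a hypothetical sensor, register map, and bus function, and Unity assumed once more. The driver reaches the bus only through spi_transfer(), and the host build links a spy version of it that records outgoing bytes, so a test can assert exactly what would have gone out on the wire:

```c
#include <stdint.h>
#include <string.h>
#include "unity.h"

/* Spy standing in for the real SPI HAL on the host build: it records
 * the outgoing bytes instead of touching any hardware. */
static uint8_t spy_tx[8];
static size_t  spy_len;

void spi_transfer(const uint8_t *tx, uint8_t *rx, size_t len)
{
    memcpy(spy_tx, tx, len);
    spy_len = len;
    memset(rx, 0, len);  /* canned, all-zero response */
}

/* Driver under test (normally in its own .c file, linked against the
 * real spi_transfer on target and this spy on the host). */
#define REG_WHO_AM_I 0x0Fu
#define SPI_READ_BIT 0x80u

uint8_t sensor_read_register(uint8_t reg)
{
    uint8_t tx[2] = { (uint8_t)(reg | SPI_READ_BIT), 0x00u };
    uint8_t rx[2] = { 0, 0 };
    spi_transfer(tx, rx, sizeof tx);
    return rx[1];
}

void test_read_sets_read_bit_and_register_address(void)
{
    (void)sensor_read_register(REG_WHO_AM_I);
    TEST_ASSERT_EQUAL_UINT(2, spy_len);
    TEST_ASSERT_EQUAL_HEX8(0x8F, spy_tx[0]);  /* read bit | address */
}
```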


A common response I've received when mentioning TDD to embedded developers is that the software they work on straddles the hardware/software divide and doesn't lend itself to classical software testing strategies. While this can be true, using software design techniques to minimize coupling, such as hardware abstraction layers (HALs), can often provide enough of a layer above which modules can be tested.
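As a sketch of that idea (the interface and names are hypothetical): the application codes against a small HAL header; the target build links the register-level implementation, while the host test build links a fake that keeps pin state in memory, letting the logic above the HAL be tested off-target.

```c
/* gpio_hal.h - the seam the application codes against. */
#include <stdbool.h>
#include <stdint.h>

void gpio_hal_set(uint8_t pin, bool level);
bool gpio_hal_get(uint8_t pin);

/* fake_gpio.c - linked into the host test build in place of the
 * register-level implementation; stores pin levels in memory so a
 * test can inspect what the application logic actually did. */
static bool pin_state[32];

void gpio_hal_set(uint8_t pin, bool level) { pin_state[pin] = level; }
bool gpio_hal_get(uint8_t pin)             { return pin_state[pin]; }
```

A test for, say, a fault-indicator module can then drive the application logic and assert on gpio_hal_get() without any hardware present.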


