iOS SDK Unit Testing

Igor Asharenkov, June 06, 2023


Unit testing remains a contentious topic within the iOS development community. While most developers agree on why and how to write tests, the reality is that many projects end up without sufficient tests due to time constraints or project management decisions. Sometimes we simply lack the time, budget, or understanding of why testing is necessary in a particular case. When everything seems to be working fine, no critical bugs get reported, and the clients are paying on time, it can be difficult to see the need for additional “invisible” work.

What to expect
In this article, we will explore how AppSpector organizes unit testing. We’ll try to answer the main question: whether or not testing is necessary for your project, as well as what you can expect if you do decide to go the extra mile.

At AppSpector, unit testing has been our daily practice from the start, thanks in large part to the skilled and experienced developers who started the project and established a very strong engineering culture that we continue to uphold. Right from the beginning, writing tests was not seen as punishment or some kind of task to be completed in the next sprint if you had failed at a demo. It was and still is viewed as an essential tool to help achieve consistent system design, safely introduce new features, and document how different parts of the system work.

To begin our exploration of AppSpector’s approach to unit testing, let’s take a look at the infrastructure and tools that we use.

Infrastructure and tools

At AppSpector, our testing toolkit consists largely of the same tools you would use to test an everyday app.
Since our SDK is written in Objective-C, with some parts in C, we use OCMock as a mocking library; it relies on the Objective-C runtime.

For a long-term project, it’s crucial to be able to quickly understand how old code works and how to write tests for it. That is why we decided to use Expecta as a matcher framework instead of Apple’s XCTest assertions. Expecta is more expressive than XCTest and provides plenty of syntactic sugar around condition matching, which allows for more readable tests.

The third and final library that makes up our test suite is Nocilla, an HTTP testing library that provides a convenient API for request stubbing and allows testing network code against those requests. By using these tools, we are able to write efficient tests that ensure the stability and reliability of our SDK.

The organization of our tests at AppSpector is largely influenced by the structure of our SDK. Our SDK supports encryption, and the encryption-enabled version is a completely different binary built from the same source base, but with some extra code that handles the encryption logic. That build has a crypto library as a third-party dependency (because we are good at debugging tools, not crypto :) ) and some code paths that differ from the non-encrypted version. Whether the SDK is built with encryption support or without it is decided at compile time. If you try to run tests for the version without encryption against a binary built with encryption enabled, they will fail.

Due to this structure and our build process, we have two separate test targets: one that includes all tests for the ordinary SDK version and another only for the encryption-related code. Each target links to the corresponding SDK binary.
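To make the readability difference concrete, here is a small sketch comparing plain XCTest assertions with Expecta matchers. The matcher syntax is Expecta’s real API; the values being checked are made up for illustration:

```objective-c
// Plain XCTest assertions:
XCTAssertEqualObjects(response.status, @"ok");
XCTAssertTrue(items.count > 0);

// The same checks with Expecta's matcher syntax:
expect(response.status).to.equal(@"ok");
expect(items.count).to.beGreaterThan(0);

// Expecta also ships async matchers, which plain XCTest
// would express through XCTestExpectation boilerplate:
expect(connection.isReady).will.beTruthy();
```

The failure messages Expecta generates from these chains also read closer to plain English, which helps when scanning a red build log.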
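A typical Nocilla-based test stubs a request up front and then exercises the networking code against that stub. Here is a minimal sketch; the stubbing DSL and lifecycle calls are Nocilla’s real API, while the URL and the code under test are invented for illustration:

```objective-c
#import <Nocilla/Nocilla.h>

- (void)setUp {
    [super setUp];
    [[LSNocilla sharedInstance] start];
}

- (void)tearDown {
    [[LSNocilla sharedInstance] clearStubs];
    [[LSNocilla sharedInstance] stop];
    [super tearDown];
}

- (void)testSessionStartRequest {
    // Stub: any POST to this URL returns 201 with a JSON body.
    stubRequest(@"POST", @"https://api.example.com/session")
        .andReturn(201)
        .withHeader(@"Content-Type", @"application/json")
        .withBody(@"{\"session\":\"abc\"}");

    // ...then exercise the (hypothetical) networking code under test
    // and assert on how it handles the stubbed response.
}
```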
This two-target approach ensures that our tests are always accurate and relevant to the specific version of the SDK being tested.

Another great tool we use at AppSpector is Codecov, a service that visualizes test coverage and provides analytics for our tests. At this point it’s important to take a moment to explain why we track coverage as a metric. While it’s true that code coverage is often considered a vanity metric for unit tests and doesn’t necessarily reflect test suite quality, coverage is still a good way to track the code/test ratio and its rise or fall over time.
It’s not that important whether 40% or 50% of your code is covered; what matters is maintaining the same percentage while introducing new features. If your coverage falls as new code is added, it means you are writing fewer tests than before, and it may be time to pay more attention to the process.

But I digress. So why Codecov? Well, not only does it track coverage, but it also shows useful radial diagrams that help identify which areas of our code are not covered by tests. It’s very convenient, because it helps us plan further test suite development.
We highly recommend Codecov if your team is looking to improve your testing process.

[Screenshot of Codecov’s radial diagrams]

What to test

When building an app, it’s not necessary to write tests for every single line of code. Testing boilerplate view-layer code or the view controller lifecycle often doesn’t make sense. While this code may contain bugs, it’s more efficient to start testing with the networking layer, business logic, and other critical parts of the codebase. For an ordinary app, coverage above 30% may not be necessary in the real world. However, for the AppSpector SDK, every part of the codebase is crucial, and bugs can potentially crash the host app, which is our worst nightmare.

After careful consideration, we decided to start from the top and work our way down, beginning with the most frequently used code and moving on to less frequently used code. As you may recall from our blog post on how the iOS SDK is structured, the SDK consists of a core part that handles networking, message processing, encryption, and other basic features. At runtime, modules called “monitors” are plugged into the core. These modules contain the implementation of AppSpector monitors, such as Networking and CoreData. With this in mind, we decided to start testing from the core and then move down to the monitors, aiming for 50% coverage. By following this plan, we now have 67% coverage for the SDK core and 60% coverage for the monitors on average. Looking back, I can definitely say that this approach has been successful in balancing the time spent on new features against their quality.
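Conceptually, the core/monitor split can be pictured as a plug-in interface: the core owns transport and message processing, and each monitor registers itself against it. This is a hypothetical sketch; the protocol and method names are invented here and are not the real SDK API:

```objective-c
// Hypothetical plug-in boundary between the core and a monitor.
@protocol ASMonitor <NSObject>
- (NSString *)monitorID;          // e.g. @"network", @"coredata"
- (void)startWithCore:(id)core;   // core handles transport, batching, encryption
- (void)stop;
@end
```

A boundary like this is also what makes the coverage plan above tractable: core and monitors can be tested in isolation, one target at a time.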

When discussing unit testing, it’s impossible to ignore TDD. Honestly, I’ve only used the pure TDD cycle once during development of our SDK. I used it in a complex and tricky part that handles packing messages into larger batches. The logic was so difficult to understand that after writing a specification for it, I started with some failing tests and only then wrote the actual code. After repeating this a couple of times I suddenly realized that it was a classic TDD flow. So yeah - sometimes it works. But, personally, I can’t imagine writing all tests in this manner. Perhaps I’m just not smart enough.
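As an illustration of that flow, a first failing test for a message batcher might look like the sketch below, written from the spec before the batcher itself exists. The class and method names here are invented, not the real SDK code:

```objective-c
// Written first, against the spec: messages whose combined size
// exceeds the limit must be split into multiple batches.
// ASMessageBatcher did not exist yet when this test was written.
- (void)testBatcherSplitsMessagesExceedingSizeLimit {
    ASMessageBatcher *batcher = [[ASMessageBatcher alloc] initWithMaxBatchSize:1024];
    NSArray *messages = @[[self messageOfSize:700], [self messageOfSize:700]];

    NSArray *batches = [batcher batchMessages:messages];

    // 700 + 700 > 1024, so the batcher must produce two batches.
    expect(batches).to.haveCountOf(2);
}
```

The test fails to compile at first, then fails at runtime, and only then does the implementation get written to make it green: the classic red-green cycle.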

How to use tests
In general, unit testing pays off from the start. No, one test does not equal one dollar saved. But writing tests influences the design of your code tremendously: it becomes less coupled, interfaces become clearer, and writing code with tests in mind leads to more robust and well-designed software.
But there is more. Once written, tests are a great tool that helps you during development. At AppSpector, we can point to two practices we use for maximum efficiency:

  • Running tests frequently. Ideally, you should be able to run all your tests for every commit you make. That’s rarely possible within a large codebase due to the time and resources required to run the entire test suite. The AppSpector iOS SDK now has 600 unit tests, and running all of them on every local commit is obviously not practical. What we do instead is run all of them for every commit pushed to the repository: not only for pull requests or code merged to the master branch, but for every push we make. Countless times we were surprised by a “red” build while writing code that we were absolutely sure should work just fine. The earlier you catch a mistake, the easier it is to fix.
  • Writing a test covering a particular bug. Every time our users report an issue or we find one while testing new features, the first thing we ask is: “what test should have been written to catch this case during development?” And after a brief discussion we write one. This approach has some clear benefits. First, you have a test that verifies the bug is really there and that you understand it correctly. Many times, writing a test for a bug has revealed some underlying issue we were not aware of. Second, after the fix lands, your codebase has a test for one more case that wasn’t covered before, and the chances that you’ll step on the same mine again are very low.
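A bug-pinning test usually reproduces the reported conditions first, fails against the broken code, and then guards the fix forever. A minimal sketch, with the class and the bug invented for illustration:

```objective-c
// Regression test for a (hypothetical) reported bug: the formatter
// crashed when given an empty payload. This test failed before the
// fix and now pins the behavior.
- (void)testPayloadFormatterHandlesEmptyPayload {
    ASPayloadFormatter *formatter = [ASPayloadFormatter new];

    // Before the fix this call crashed; now it must return a valid value.
    NSData *result = [formatter formatPayload:@{}];
    expect(result).notTo.beNil();
}
```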

Remember, writing tests takes time and effort, but it pays off in the long run by making your code more reliable, maintainable and easier to extend. Don’t be discouraged if you don’t achieve 100% code coverage or if you encounter difficulties with writing tests. Start small, focus on the most critical parts of your code, and gradually improve your testing skills and practices.