A new toy
We keep playing with the new stuff Apple presented at WWDC. This time we are looking at MetricKit, a brand-new framework, and the tooling behind it for monitoring your app's performance.
We all know that measuring your app's performance during development is a piece of cake. Xcode shows you gauges with memory and CPU load, you can attach Instruments to the simulator or your test device, and you can even write custom instruments (for more details, see our articles about custom Instruments packages: part 1 / part 2). The only real limit on what you measure is how much you care about performance tuning. But things get complicated in the App Store environment, when your app reaches real users. No matter how thoroughly you test your app, the real world always has a bunch of surprises in store that will affect performance and user experience. Of course, there are plenty of tools out there gathering various metrics in production, but most of them are limited both by iOS SDK restrictions and by the overhead the monitoring itself adds to application behaviour.
This year, Apple decided to fill the gap and bless developers with a tool that helps them gather and analyse app performance metrics in the production environment. It consists of MetricKit (a framework that gives you access to metrics collected by the OS) and a separate tab in the Xcode 11 Organizer, where you can browse metrics from your apps. We are going to focus on MetricKit, because the metrics browser in Xcode only works with apps submitted to the App Store.
The framework architecture is simple and straightforward. At its centre is the MXMetricManager class, a singleton that provides most of the framework's API.
In general, the workflow has 3 main steps:
- You get the shared MXMetricManager instance and add a subscriber to it.
- You optionally implement custom metrics in your app using the signpost APIs.
- Finally, you handle the received metrics in the subscriber's 'didReceive(_:)' method, e.g. by sending them to your backend for further analysis.
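The three steps above boil down to very little code. Here is a minimal sketch of a subscriber; the `upload(_:)` helper is hypothetical and stands in for whatever your backend integration looks like:

```swift
import Foundation
import MetricKit

// Minimal subscriber: register with the shared manager and handle payloads.
final class MetricsSubscriber: NSObject, MXMetricManagerSubscriber {
    func start() {
        MXMetricManager.shared.add(self)
    }

    func didReceive(_ payloads: [MXMetricPayload]) {
        for payload in payloads {
            // upload(_:) is a hypothetical helper, not part of MetricKit.
            upload(payload.jsonRepresentation())
        }
    }

    private func upload(_ data: Data) {
        // Send the JSON to your own backend for further analysis.
    }
}
```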
Metrics come to you as an array of MXMetricPayload instances. A payload encapsulates a set of metrics together with metadata and timestamps. It is a simple wrapper around MXMetric subclasses, with a separate subclass for each metric type.
Metric types are pretty well documented by Apple, so we will not dwell on them for too long. However, one interesting thing is worth noticing: MXMetric provides a public API to serialize itself to an NSDictionary or to JSON, which, I have to admit, is a bit unusual.
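For instance, a whole payload can be serialized with no custom code at all, since both representations are public API (MXMetric subclasses expose the same two methods):

```swift
import Foundation
import MetricKit

// Serialize a payload to a dictionary and to JSON data using public API.
func serialize(_ payload: MXMetricPayload) -> ([AnyHashable: Any], Data) {
    let dict = payload.dictionaryRepresentation()
    let json = payload.jsonRepresentation()
    return (dict, json)
}
```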
From the outside, MetricKit looks pretty straightforward. But to me, it's always exciting to see how things work from the inside. Diving deeper and deeper into something is always more intriguing when you have a specific task in front of you. So I decided I wanted to feed MetricKit stubbed metrics and then force it to deliver metric updates to me whenever I want. Granted, it's not very useful, but it gives the research a direction.
To get started, we obviously need the MetricKit binary itself. You might think that obtaining the binary for a framework is easy, because Xcode shows it in the frameworks list once you add it via the 'link binary with libraries' dialog. That's an optimistic thought: if you open MetricKit.framework, you will find a MetricKit.tbd file inside, just 4 KB in size. Obviously, this is not what we are looking for.
So what's really happening here?
TBD stands for 'text-based dylib stub'; it is actually a YAML file containing a description of the dylib, its exported symbols, and the path to the dylib binary. Linking against tbd files keeps the SDK small; at runtime, the real dylib binary is loaded from the OS using the path provided in the tbd file. Here is what the file looks like when you open it in Xcode:
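For readers without Xcode at hand, a trimmed, illustrative sketch of what such a stub contains (the fields and class list are abridged and not the exact file contents):

```yaml
--- !tapi-tbd-v3
archs:           [ arm64 ]
platform:        ios
install-name:    '/System/Library/Frameworks/MetricKit.framework/MetricKit'
current-version: 1.0
exports:
  - archs:        [ arm64 ]
    objc-classes: [ MXMetricManager, MXMetricPayload, MXCPUMetric ]
...
```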
Using the path from the tbd file, we could easily get the MetricKit binary for further research, but there is an even simpler method.
Our app binary contains the path to each dynamically linked library in its Mach-O load commands. This info can easily be obtained with otool using the -l flag.
Here is the output for a test project I have built:
```
→ otool -l ./Metrics | grep -i metrickit
         name /System/Library/Frameworks/MetricKit.framework/MetricKit (offset 24)
```
We can see the same path we saw earlier in the tbd file. With the framework binary in hand, we can finally look at the internals. I usually use Hopper Disassembler for this: an easy-to-use yet very powerful tool for inspecting binaries.
Once we open the MetricKit binary, we navigate to the 'Proc.' tab and expand the 'Tags' list. Here we can see all the exported symbols. Selecting one of them (for example, MXMetricManager) shows all of its methods below, and selecting a method shows its disassembled content on the right:
When browsing through the MXMetricManager method list [https://gist.github.com/deszip/88a258ae21d33dc75d7cbac9569c6ec1], it's easy to notice the '_checkAndDeliverMetricReports' method. This looks like exactly what we need to call to force MetricKit to deliver updates to subscribers.
Unfortunately, calling it didn't result in a subscriber call, which probably means there is no metric data to be delivered. Looking at the method implementation, we notice a few interesting things: it iterates over the contents of the /Library/Caches/MetricKit/Reports directory, tries to unarchive an MXMetricPayload instance from each item on disk, and finally iterates over the registered subscribers and calls their 'didReceive' method with the payload list.
The problem is probably that we don't have anything under /Library/Caches/MetricKit/Reports, and we now know we need some archived MXMetricPayload instances there. So let's build them and put them on disk before calling '_checkAndDeliverMetricReports'. The plan: build an MXMetricPayload instance, build and attach a metric of any MXMetric type, and then archive the payload instance to disk. Calling '_checkAndDeliverMetricReports' after all that should result in our subscriber being called with our stub as an argument.
Looking through Apple's docs on payloads and metrics, you may notice that they don't have any public initializers and most of their properties are read-only. So how do we make an instance?
Again, we return to Hopper to look at the MXMetricPayload methods list:
Here we can see its initializers and the methods that assign metrics. Calling any of these private methods is easy with NSInvocation and 'performSelector', thanks to Objective-C's dynamic nature.
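Because Objective-C resolves methods at runtime, a private method can be invoked from Swift given nothing more than its name. A sketch of the idea; the setter selector below is an illustrative stand-in for what Hopper shows, not a documented interface:

```swift
import Foundation

// Instantiate MXMetricPayload and attach a metric through private API.
// "setCpuMetric:" is an assumed selector name for illustration only.
let payload = (NSClassFromString("MXMetricPayload") as! NSObject.Type).init()
let cpuMetric = (NSClassFromString("MXCPUMetric") as! NSObject.Type).init()

let setter = NSSelectorFromString("setCpuMetric:")
if payload.responds(to: setter) {
    _ = payload.perform(setter, with: cpuMetric)
}
```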
As an example, we’ll build a CPU metric and add it to the payload. You can find a complete code snippet here: [https://gist.github.com/deszip/a0cf877b07cc2877129e0aaef2fed1e4].
In the end, we archive the built payload instance and write it to the /Library/Caches/MetricKit/Reports directory. Now it's time to call '_checkAndDeliverMetricReports', which should finally result in a subscriber call, this time with our stubbed payload as an argument.
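The last two steps can be sketched as follows. The directory and the private selector come from the disassembly; the report file name is arbitrary, so treat this as a sketch rather than stable API:

```swift
import Foundation
import MetricKit

// Write an archived payload where MetricKit looks for reports, then poke
// the private delivery method so subscribers receive our stub.
func deliverStub(_ payload: MXMetricPayload) throws {
    let library = FileManager.default.urls(for: .libraryDirectory,
                                           in: .userDomainMask)[0]
    let reports = library.appendingPathComponent("Caches/MetricKit/Reports")
    try FileManager.default.createDirectory(at: reports,
                                            withIntermediateDirectories: true)

    let data = try NSKeyedArchiver.archivedData(withRootObject: payload,
                                                requiringSecureCoding: false)
    try data.write(to: reports.appendingPathComponent("stub-payload"))

    let deliver = NSSelectorFromString("_checkAndDeliverMetricReports")
    if MXMetricManager.shared.responds(to: deliver) {
        _ = MXMetricManager.shared.perform(deliver)
    }
}
```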
Where metrics come from
Getting metric reports is pretty easy with MetricKit, but you are probably curious how those reports end up in your app's /Library directory in the first place. Here's how.
While digging inside the MetricKit binary, I noticed the '_createXPCConnection' method. Inspecting its implementation makes things clear: it builds an NSXPCConnection to a service named 'com.apple.metrickit.xpc', with two interfaces, 'MXXPCServer' and 'MXXPCClient', for the server and client sides. If you look at the protocol description:
and at the MXMetricManager initializer, it becomes obvious that MetricKit registers itself as a client of a remote service, which presumably puts the report files into the app's container. But this post is already way too long, so we'll explore how the MetricKit XPC service works in one of our next posts.
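For illustration, here is a rough reconstruction of what '_createXPCConnection' appears to do, written against the public NSXPCConnection API (which is macOS-only; on iOS this machinery is private). The protocol members are unknown, so they are left empty:

```swift
import Foundation

// Hypothetical reconstruction; names come from the disassembly.
@objc protocol MXXPCServer {}  // exposed by the remote metrics daemon
@objc protocol MXXPCClient {}  // implemented by MetricKit in-process

let connection = NSXPCConnection(machServiceName: "com.apple.metrickit.xpc")
connection.remoteObjectInterface = NSXPCInterface(with: MXXPCServer.self)
connection.exportedInterface = NSXPCInterface(with: MXXPCClient.self)
connection.resume()
```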
MetricKit is a unique and irreplaceable tool if you care about your app's performance under real-world conditions in production.
Unfortunately, it's not yet possible to take a look at the Xcode Organizer's 'Metrics' UI, beyond what we were shown during the demo at the WWDC session.
This could be a priceless tool for moving your user experience to the next level by eliminating glitches and performance issues in your code.
One disadvantage I can see right now is the lack of detail for each metric type: the only breakdown is by app version, and you can't see metrics for a specific group of devices, OS versions, regions, etc.
But, of course, you can always send your metrics data to your own service for further processing along with any vital info you need, attach it to issues in your bug tracker, and much more. At AppSpector we are already working on extending our performance monitoring functionality with data obtained from MetricKit.