Customising the Menu Bar of a Catalyst App Using UIMenuBuilder

I'm in the process of building out a Mac app for one of my iOS side projects - Petty - using the new "Catalyst" tools announced by Apple at WWDC this year. Petty has become my iOS development playground, as I use it to experiment with new iOS technologies. This was the case with Siri Shortcuts last year. My motivation for finishing work on the Mac app for Petty is that I'll be presenting a talk, Updating Your App for iOS 13 at the /dev/world/2019 conference in Melbourne, and part of that talk will cover bringing the iOS app to macOS.

The menu bar is one of the most iconic parts of a Mac GUI. It's consistent, helpful, and rather intuitive. It helps people uncover actions, and abilities of an app, and also provides a little more for power users - keyboard shortcuts for these actions. Naturally, most Mac apps feel more "at home" on the Mac if they take advantage of system features such as the menu bar. Admittedly, Petty is a simple app with few hidden features. That said, I'd still like to customise the menu bar when Petty is run on the Mac.

This post will be about customising the menus in the menu bar programmatically, with Swift. It is also possible to achieve this using Storyboards, as demonstrated in the WWDC19 session video, Introducing iPad Apps for Mac. There's a live demo towards the end of that session video if you're interested.

Let's build and run Petty on the Mac.

Menu bar after building and running with no customisation.

By default, this is what the menu bar looks like when Petty is first built and run on the Mac. Without customisation, there's an application, file, edit, format, view, window, and help menu.

To help customise the menu bar there's a new method, buildMenu(with:), which can be overridden in the AppDelegate. The first step is to override it and call super.

override func buildMenu(with builder: UIMenuBuilder) {
    super.buildMenu(with: builder)
    /* Do something */
}

The first objective is to remove the menus that aren't wanted. There is text input in Petty (in the form of a search field), so the "Edit" menu should stay. There's no need to format text, so the "Format" menu can go. To do this, within the buildMenu method, we call builder.remove(menu: .format). Building and running again will show that the Format menu is no longer present. You'll notice in the screenshot above that there's also a "Services" sub-menu in the main application menu. This is unnecessary for Petty, so it's going to go as well. Removing it is the same as removing the Format menu, except we specify .services instead. I'm also going to remove the .toolbar-related menu items.

At this point, the buildMenu method is as follows:

override func buildMenu(with builder: UIMenuBuilder) {
    super.buildMenu(with: builder)

    builder.remove(menu: .services)
    builder.remove(menu: .format)
    builder.remove(menu: .toolbar)
}

Everything remaining in the menu at this point can stay there. The actions in the "View", "Window", and "Edit" menus are all taken care of by the system - at least for Petty.

There are a couple of actions that should be added to the menu bar. In the iOS version of Petty, it's possible to pull-to-refresh on the table view. This will reload the visible data. This action is also possible on macOS, and the Catalyst tools bring this feature across nicely. However, Mac users may expect other ways to refresh data. The Command-R keyboard shortcut is a common way to refresh data on the Mac, and it would be nice to use this shortcut to reload data in Petty on the Mac.

To add to the menu bar, either the insertChild or insertSibling method can be called on the builder object. Calling insertChild allows you to place a UIMenu object at either the start (top) or end (bottom) of an existing menu - for example, atStartOfMenu: .file. Inserting a sibling allows for more precise placement, either before (above) or after (below) another menu. For example, afterMenu: .about means we want to insert a menu after the About menu. In the case of the reload data action, it should go at the top of the File menu.

let refreshCommand = UIKeyCommand(input: "R", modifierFlags: [.command], action: #selector(reloadData))
refreshCommand.title = "Reload data"
let reloadDataMenu = UIMenu(title: "Reload data", image: nil, identifier: UIMenu.Identifier("reloadData"), options: .displayInline, children: [refreshCommand])
builder.insertChild(reloadDataMenu, atStartOfMenu: .file)

There's a bit happening in the code above. First, a UIKeyCommand is created; this is where the keyboard shortcut for the action - in this case Command-R - is specified. The command also needs a title, and this title is what's shown in the menu. Next, a UIMenu is initialised with the same title. An image can optionally be provided (ignored here), along with an identifier. Options are also specified: .displayInline tells the system that this command belongs inline in the menu we're adding it to, rather than opening yet another submenu. The menu's children are an array containing one object - the refresh command created earlier. Finally, the reloadDataMenu is inserted as a child on the builder object from the overridden buildMenu method, at the start of the File menu. Note that when constructing the refreshCommand, a selector is specified. That's the code which will run when the menu item is selected, or when the Command-R keyboard shortcut is used. If you want to specify a sender parameter on that method, its type is simply AppDelegate.
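As a sketch of the other half of this wiring, the selector the command points at might look something like the following. The method name comes from the code above, but the notification-based approach and its name are assumptions for illustration - the app could just as easily call into its view controller directly.

```swift
import UIKit

extension AppDelegate {
    // Runs when "Reload data" is chosen from the File menu,
    // or when Command-R is pressed. Must be @objc so #selector can find it.
    @objc func reloadData(_ sender: AppDelegate) {
        // Hypothetical wiring: broadcast a notification that the visible
        // view controller observes, triggering the same code path as
        // pull-to-refresh.
        NotificationCenter.default.post(
            name: Notification.Name("reloadDataRequested"),
            object: nil
        )
    }
}
```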

Success! There's now a reload action in the menu bar.

This is what the buildMenu method should look like now:

override func buildMenu(with builder: UIMenuBuilder) {
    super.buildMenu(with: builder)

    builder.remove(menu: .services)
    builder.remove(menu: .format)
    builder.remove(menu: .toolbar)

    let refreshCommand = UIKeyCommand(input: "R", modifierFlags: [.command], action: #selector(reloadData))
    refreshCommand.title = "Reload data"
    let reloadDataMenu = UIMenu(title: "Reload data", image: nil, identifier: UIMenu.Identifier("reloadData"), options: .displayInline, children: [refreshCommand])
    builder.insertChild(reloadDataMenu, atStartOfMenu: .file)
}

The second action to add to the menu bar for Petty is a quick action to open the application settings. The iOS version of the app has its own settings screen, and it can be opened by tapping the settings icon on the main screen of the app. This is also the case when using the app on macOS - settings can be opened by tapping a button. However, Mac users are accustomed to opening application preferences either via the application menu, or via the keyboard shortcut Command-, (command-comma). It makes sense to support this in Petty, too.

The code is as follows, and is pretty similar to the reload data action:

let preferencesCommand = UIKeyCommand(input: ",", modifierFlags: [.command], action: #selector(openPreferences))
preferencesCommand.title = "Preferences..."
let openPreferences = UIMenu(title: "Preferences...", image: nil, identifier: UIMenu.Identifier("openPreferences"), options: .displayInline, children: [preferencesCommand])
builder.insertSibling(openPreferences, afterMenu: .about)

Note the input value is different. The keyboard shortcut is Command-, (command-comma), not Command-R, and that's specified in the UIKeyCommand object. This shortcut is also tied to a different action - openPreferences. The other difference is that we insert it by calling the insertSibling method on the builder object, which allows us to specify that it belongs after the .about menu. The system knows what the About menu is - in this case, the "About Petty" action in the main application menu - and has placed this action in its own section of the menu.
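For completeness, here's a minimal sketch of the matching selector. How the settings screen actually gets presented is an assumption (SettingsViewController is a hypothetical name), since that depends on the app's existing navigation:

```swift
import UIKit

extension AppDelegate {
    // Runs for the "Preferences..." menu item, or Command-comma.
    @objc func openPreferences(_ sender: AppDelegate) {
        // Hypothetical wiring: find the key window's root view controller
        // and present the same settings screen the iOS app already has.
        guard let root = UIApplication.shared.windows.first?.rootViewController else { return }
        let settings = SettingsViewController() // assumed existing screen
        root.present(UINavigationController(rootViewController: settings), animated: true)
    }
}
```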

The preferences menu makes an appearance.

What has been achieved? We've programmatically modified the menu bar actions in a Catalyst Mac app. This includes removing unneeded actions, and adding our own custom actions. The power of the menu bar is far greater than has been explored in this post, but it's a decent start, and satisfies the needs for menu bar customisation while building a macOS version of Petty. Of course, there are better ways to write and manage the code when a menu bar becomes more complex, or when the items in the menu bar differ from screen-to-screen in your app. Naturally, it is also not good practice to write too much code in the AppDelegate. That said, hopefully this post acts as a helpful guide for getting started when customising the menu bar for your own macOS apps that are being brought across from iOS using Catalyst.

The finished menu bar, after the customisation. It's what's on the inside of those menus that counts!

Below is the finished Swift code from this post. Note that in order for the following code to compile, you'll need to implement reloadData and openPreferences methods, and prefix them with @objc.

override func buildMenu(with builder: UIMenuBuilder) {
    super.buildMenu(with: builder)

    builder.remove(menu: .services)
    builder.remove(menu: .format)
    builder.remove(menu: .toolbar)

    let refreshCommand = UIKeyCommand(input: "R", modifierFlags: [.command], action: #selector(reloadData))
    refreshCommand.title = "Reload data"
    let reloadDataMenu = UIMenu(title: "Reload data", image: nil, identifier: UIMenu.Identifier("reloadData"), options: .displayInline, children: [refreshCommand])
    builder.insertChild(reloadDataMenu, atStartOfMenu: .file)

    let preferencesCommand = UIKeyCommand(input: ",", modifierFlags: [.command], action: #selector(openPreferences))
    preferencesCommand.title = "Preferences..."
    let openPreferences = UIMenu(title: "Preferences...", image: nil, identifier: UIMenu.Identifier("openPreferences"), options: .displayInline, children: [preferencesCommand])
    builder.insertSibling(openPreferences, afterMenu: .about)
}

What Will Happen to Buddybuild at WWDC19?

When Apple acquired TestFlight in 2014, they integrated it into App Store Connect (called iTunes Connect at the time), but kept the purpose of the service very much the same - allowing developers to easily distribute pre-release builds of their software. I'm curious as to the direction Apple will take with Buddybuild.

I see a few ways it could go. Buddybuild could become a premium CI service, as it was previously, with a monthly or annual fee for developers to use it, with the intention of making money. I don't see this as the most likely outcome, but it's a possibility. Apple tends to build mass-market consumer services when looking to grow services revenue, not niche developer tools.

Alternatively, Buddybuild, the service that we once knew, may return and be integrated into App Store Connect. Developers might have free, or inexpensive, access to a CI/CD service that will run tests for them, build their app, and distribute it where it needs to go automatically. This approach would interest me the most, and I think it's the most likely. The fact that they didn't shut the whole service down upon acquisition shows CI still interests them, and that they have plans to continue the service, at the very least for existing customers - though I'm sure that will change. It would also mean Apple is exploring an area they haven't before: they would have direct access to source code, be it hosted by Apple itself, or pulled from a third-party code hosting service such as GitHub or Bitbucket.

A final approach which I believe is plausible would be that they take the technology behind Buddybuild (its integrations, build scripts, and hardware infrastructure) and use it for something else - such as remote code compilation. This could be especially useful should Apple be looking at bringing Xcode to iPad. I could imagine it being a bit like Google Stadia but for developers and their Xcode projects.

It's also possible that none of the three approaches mentioned above are the route Apple take. WWDC19 is going to be an interesting conference, and I look forward to seeing what Apple has been up to in the world of CI/CD.

Petty 2.1

The iOS version of Petty, my app for displaying real-time petrol prices in the state of New South Wales, was updated to v2.1 today.

The full release notes for this version are as follows:

* Adds additional Siri Shortcut for showing the real-time price at a given station. This will show up as a suggested Shortcut around iOS where it's relevant, and can also be added to Siri and can be accessed with your voice. This can be done by tapping the "Add to Siri" button at the bottom of the screen when looking at the prices for a particular station. Please note, this feature requires Petty Premium to be unlocked for it to work.
* Updates to the design of the station view - the map takes up more space and is more of a priority on this screen.
* Improved error handling if an attempt to buy the in-app purchase to unlock Petty Premium was unsuccessful.
* Updated icon on Apple Watch.
* Updates display name of Petty widget.
* Added app version and build number to the bottom of the Settings page.
* Lots of other small bug fixes and improvements throughout the app.

I'm most excited about the additional shortcut. There are now a few things you can do with Siri Shortcuts inside of Petty - such as finding out the prices at a particular station, or having Petty find the nearest or cheapest petrol around. This is especially handy if you're driving, can't look at or touch your iPhone, and need to use Siri hands-free to interact with your iPhone.

I didn't write on this blog about the initial Siri features when I added them back in September, so consider this a belated introductory post. I'm pretty happy with how the shortcuts work in Petty, and I hope you enjoy using them just as much.

Petty is available on the App Store as a free download, with a one time in-app purchase to unlock Premium mode.

Tools for an iOS Developer

Working efficiently with better tools

In the interest of becoming a better, more focused iOS software developer, I've been thinking a bit about the tools I use to work.

iOS developers are fortunate not to need a lot of expensive software to be able to get their job done. Most of the expense associated with developing for the iOS platform comes from having to buy hardware; the Mac, iPhone, iPad, and Apple Watch aren't cheap these days. Once the necessary hardware is acquired, it's possible to get most of our work done with a few "free" applications - mainly Xcode and a terminal.

While it's possible to do the job with minimal software, there are some tools which can make life easier, or boost productivity in ways not possible with the default software stack. When iOS development was exclusively a hobby for me, I never gave much thought to paid software tools. It's difficult to justify spending money on these things. Now, working in and around Xcode somewhere between 24 and 40 hours a week (depending on whether Uni is in session), I've decided to allocate myself more of a budget for these tools. The benefit is two-fold: I find it easier to do my work - which is always a nice thing - and my increased efficiency benefits the person/people/company paying me to complete said task, as I'm likely to develop and test something faster, and move onto the next thing sooner. A further bonus is that buying these tools for work means I also have access to better tools when working on side projects!

There are many tools out there for iOS developers. I've chosen three that I think are worth paying for, and am quite satisfied using myself.

Tower 3

Git is the version control system of choice for most iOS developers. The full power of Git is accessible from the command line; it's installed by default on any Mac, and can be used completely for free.

Git can be powerful but scary. No one wants to make a mistake and realise they've deleted a feature that's still in progress! Some people are amazingly skilled at using Git and all its features from the command line, but I am not one of those people. I prefer to use a GUI Git client; I find it more friendly, and it better suits how I use Git. There are a few popular GUI Git clients out there, including GitHub Desktop, Atlassian's Sourcetree, and Tower. Having used all three, Tower feels the most polished. I feel comfortable using it, and feel in control when performing actions with Git. I like having everything displayed in a GUI, and not having to rely exclusively on the command line.

I recently upgraded from Tower 2 to Tower 3. Some features are "nice to have," such as support for dark mode in macOS. There are other new features which improve my workflow and productivity noticeably. Being able to re-write commit messages, and delete specific commits - generally before having pushed to remote - is fantastic! Completing these tasks from the command line would be tedious, and I'd be fearful of making a mistake! Considering I'm in and out of a Git client all day every day at work, and just as frequently when working on side projects, it makes sense to choose something that I feel comfortable with and can use efficiently.

Although Tower is, under the hood, performing actions that you could do from a command line, or with an alternative free Git client, I see it as a worthwhile tool to have while writing software.


Charles

Charles is a proxy which captures all network traffic from either a physical iOS device or a simulator while developing, allowing you to inspect the traffic being sent and received, as well as modify requests and responses for testing and debugging purposes. There are certainly other ways to mock the traffic coming in and out of your app, but Charles is the most straightforward way I've found to do this. I'm also pretty sure Charles has a lot of great features I don't use, but even just for basic inspecting and modifying of network traffic, it's worth the money. One of my favourite features is being able to "breakpoint" a request, meaning Charles will always stop when a certain request is being sent or received, allowing you to modify it every time.

Using Charles helps me feel a little more confident when working with API calls, knowing how easy it makes it to test an app, check edge-cases, or throw data at an app during development before the backend API is ready for use. Though not an essential tool, it's one which I'm glad to have available, and it means there's one less thing to worry about when building iOS apps.


Sherlock

A newer tool on the scene for iOS developers is Sherlock, a simulator UI inspection tool. It's different from the inbuilt Xcode view hierarchy debugger in that it allows you to change the UI on the fly, while the app is running in the simulator. It's a handy way to test Auto Layout constraints, change label text, and tweak other UI attributes such as colours, across multiple device screen sizes, without having to build and run on a simulator each time. No setup is required: Sherlock handles installing itself on the simulator and attaching to the app you're currently running from Xcode. It's delightfully easy to use. While not an essential tool, I've found it speeds up my workflow dramatically when building UI, and it's now a tool I wouldn't want to work without when doing iOS development.

Wrapping up

Good tools can make your work easier, and more enjoyable. Saving a few minutes at a time, multiple times a day adds up quickly. Everyone values different tools, but recently I've been quite impressed with Tower, Charles, and Sherlock, see them as worthwhile purchases, and wouldn't want to be doing my job as an iOS developer without them.

Shipping Side Projects 🚀

I'm a huge fan of side projects. They're an opportunity to exercise creativity however you see fit. They provide you with a chance to work on something where you set 100% of the requirements. They give you a way to learn new skills and new technologies without any time constraints.

For some, side projects exist only to the creator. They aren't built to ship, and they aren't built with the thought that anyone else will ever see or use the project. For others, myself included, one of the best things about working on a side project is the moment it's ready to ship. I enjoy working on side projects and enjoy shipping them just as much.

In thinking about how to approach side projects going forward, I've come to realise that the kind of apps I want to work on as side projects typically aren't going to be fully-featured. At least at first. It's no secret that a considerable amount of my side project time over the last six months has been spent on a weekly podcast. Though it doesn't involve writing code, recording and editing the podcast has been amongst the most enjoyable side project work I've done. The podcast is time-consuming, and I don't necessarily have the time I once did to spend on more polished mobile app side projects - such as with Petty, and HeartMonitor - which were both mostly feature-complete at launch. Petty has received regular updates since its launch, adding features based on user feedback, but at the time it shipped there wasn't a whole lot more I'd hoped to add for 1.0.

There are some side projects I'd like to start working on, but to get them to a point where I'm completely satisfied would take longer than is ideal. As I said, I enjoy shipping side projects. I'm thinking about a different approach going forward. It's not a new or novel concept, but I'm coming around to the idea of shipping the bare minimum and adding to it over time. I guess it's the software MVP approach. I'm not sure if shipping software with limited features will make adding additional features more motivating than if it weren't out in the world yet, but it's something I'd like to try.

The idea of shipping more frequently can be applied to existing projects too. If new iPhones come out, why not release a small update that adds support? That's enough for one update; it doesn't have to wait until a bunch of new features are also ready.

The intention is not to ship poor-quality software, but to have a wider range of public projects, and to ship updates to them more frequently. This might mean v1.0 isn't quite where I'd like it to be, but that's okay. It can then be followed up with fairly continuous updates. Add a small feature one week? Cool, ship it. Fix a tiny bug the next? Ship that too! Updates don't have to be huge. But hopefully, the enjoyment of shipping will still be felt from shipping fairly often.

Software developers are lucky. We can build the software that we want to exist. Going forward, I'm going to try and ship more work more often, even if the project doesn't tick every feature box that I'd like for each release.

So, We're Making a Podcast

So, we're making a podcast. It's called So cast, and it's an excuse for Kai, Malin, and me to drink coffee and have a chat each week, since they moved halfway around the world.

We're all software developers so, naturally, most of our conversations end up being about something tech related. Hopefully we're able to provide unique and interesting insights through these weekly discussions, and hopefully you find the show interesting enough to subscribe to.

We've hit the ground running and have five great episodes for you to listen to.

The show kicked off when we were fortunate enough to have access to the Apple Podcast Studio at WWDC, where we spoke about our WWDC experience. The second episode covers our early thoughts on the iOS 12 and watchOS 5 beta. In the third we discuss home automation and smart assistants, with a bit about Siri Shortcuts at the end. In episode 4 we talk about new MacBook Pros, and the 10th anniversary of the App Store. And finally in the fifth, we talk about everything from buying a new Mac, to what cloud storage to use, a whole bunch about emoji, and Apple Watch move streaks.

We've got some great future topics planned, and plan on recording and releasing weekly. If the show sounds like something you'd be interested in, it's likely available in your podcast player of choice, including iTunes, Pocket Casts, and Overcast.

If you've got any feedback, or would just like to find out when we release new episodes, you can follow the show account on Twitter: @So_cast.

"Hey Siri, what on earth is a Shortcut?"

At WWDC this year, Apple introduced Siri Shortcuts, which are a new way for developers to integrate their app with Siri on iOS, and watchOS.

I'm in the process of writing a talk on Siri Shortcuts to present at the fantastic /dev/world/2018 conference in late August, and one of the things I'm struggling the most with is how to explain Shortcuts, and how to differentiate between the types of Shortcuts. From a developer's point of view, there are many things referred to as Shortcuts. This post is an attempt to clear the thoughts in my mind, and hopefully help clarify things for you too.

Shortcuts, the app

In my experience from talking to people, "Shortcuts" tends to refer to Shortcuts the app, which is essentially Apple's replacement for the Workflow app after acquiring the team behind it in early 2017. The Shortcuts app allows you to chain actions together, and run them as a group. For example, you might have a "Running late" Shortcut that sends a message to your boss saying you'll be late, starts a "hurry" playlist, and opens Maps with directions to work. A lot of the integrations in Shortcuts at the moment are first-party actions - including controls such as toggling Wi-Fi or Do Not Disturb settings or interacting with built-in Apple apps. Shortcuts will shine once it's possible to run shortcuts (yes, that means something different here) as a part of the shortcuts you can create in the Shortcuts app. Have I confused you enough yet?

Siri Shortcuts

There is a new Shortcuts API available to developers. In this context, a shortcut is an action or task that is often repeated with a somewhat predictable pattern. Developers are responsible for "donating" a shortcut to the system when a user performs a relevant action, and the system takes care of when and where to suggest the shortcut. Suggestions are displayed both on the lock screen as notifications, and under the Siri app suggestions when you swipe down on the home screen. Examples of shortcuts include suggesting a lunch order when it's nearing midday, or opening a document that you often open when you first get to work. These suggestions also appear on the Apple Watch Siri watch face.

Shortcuts can be created in one of two ways. Firstly, using an NSUserActivity. You should create these for specific activities or screens within your app where a notable action is performed, and where there is a possibility of someone needing to return to the app in that state for whatever reason. NSUserActivity is used for handoff features allowing someone to pick up on one device where they left off on another. These activities can be "donated" to the system when a user performs a relevant task, and as long as the isEligibleForPrediction flag is set to true, they will begin to surface around iOS when they are relevant. A simple example of a useful NSUserActivity would be one that's donated every time someone starts a workout in a workout app. Over time, the system will get a sense for when this action is performed - perhaps when someone gets to the gym, or every morning at 6 am - and intelligently suggest it ahead of time.
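As a sketch, donating a prediction-eligible NSUserActivity for that workout example might look like this. The activity type string and titles are assumptions for illustration; the type would also need to be listed under NSUserActivityTypes in the app's Info.plist:

```swift
import UIKit

// Call this when the user starts a workout, so the system can learn the pattern.
func donateStartWorkoutActivity(from viewController: UIViewController) {
    let activity = NSUserActivity(activityType: "com.example.workouts.start")
    activity.title = "Start a workout"
    activity.suggestedInvocationPhrase = "Start my workout"
    activity.isEligibleForSearch = true
    activity.isEligibleForPrediction = true // surfaces it as a Shortcut suggestion
    activity.persistentIdentifier = "start-workout" // lets donations be deleted later

    // Assigning the activity to a responder marks it as current,
    // which performs the donation.
    viewController.userActivity = activity
}
```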

The second way to donate a shortcut is by creating an INInteraction, built specifically for shortcut functionality. An INInteraction contains an INIntent which describes the user's request. If the purpose of the shortcut is to resume state in an app, then similar to donating an NSUserActivity, handling an intent is done from the App Delegate with the application(_:continue:restorationHandler:) method.
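A minimal sketch of an INInteraction donation follows. Here, OrderCoffeeIntent stands in for a custom intent generated from an intents definition file, so the type and its properties are assumptions:

```swift
import Intents

// Donate an interaction each time the user places an order in the app.
func donateCoffeeOrder() {
    let intent = OrderCoffeeIntent()
    intent.suggestedInvocationPhrase = "Coffee order"

    let interaction = INInteraction(intent: intent, response: nil)
    interaction.donate { error in
        if let error = error {
            // Donations can fail, e.g. if Siri is restricted on the device.
            print("Donation failed: \(error.localizedDescription)")
        }
    }
}
```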

Intents Extension

Here's where things get interesting (but maybe a tad confusing). Up until now, the shortcuts I've mentioned are good for simple suggestions. These suggestions, when interacted with, kick the user into your app, where you handle the rest of the interaction. There are a lot of practical use-cases for that, but what if you want to do more? It might not always be necessary to open the app to achieve a task. For example, if I order coffee at 9 am every morning through a shortcut, do I want the app to open every time? Not only is it slow, but I might not have an interest in customising the order - it's the same every day! This is where an Intents Extension comes into play. An intents extension allows you to run code from an extension bundle, meaning you can perform tasks without opening your main app. It is suggested that any shared code or business logic the extension needs be moved to a shared framework, and then imported into the relevant targets in your project (including the intents extension). It is also possible to start an activity from a shortcut, and then resume that activity in the app - useful if the user would otherwise be waiting too long for a server response - but it is recommended to design with the idea that the entire activity or task be completed from the extension.
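Sketching that out: an intents extension vends a handler object for each intent it supports. OrderCoffeeIntentHandling, OrderCoffeeIntentResponse, and CoffeeOrderService below are assumptions - the first two stand in for the types Xcode generates from an intents definition file, and the last for business logic living in a shared framework:

```swift
import Intents

// The handler the intents extension returns for the hypothetical
// OrderCoffeeIntent. It completes the whole task without opening the app.
class OrderCoffeeIntentHandler: NSObject, OrderCoffeeIntentHandling {
    func handle(intent: OrderCoffeeIntent,
                completion: @escaping (OrderCoffeeIntentResponse) -> Void) {
        // Shared-framework business logic places the order.
        CoffeeOrderService.placeOrder { success in
            completion(OrderCoffeeIntentResponse(
                code: success ? .success : .failure,
                userActivity: nil
            ))
        }
    }
}
```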

Intents UI extension

Sometimes, a binary success/failure response isn't enough to properly convey the result of the shortcut that just ran. For this case, Apple provides an Intents UI extension which allows your intent to show a view controller and accompany the response with custom UI. These extensions are only supported on iOS (not watchOS). The only limitation to these view controllers is that they do not receive touch events. Design with the assumption a user is not able to interact with the view. Content can be updated as required, however. Extensions that use maps (e.g. Ride sharing) will already have a map provided by the system, so it's not necessary to also display a map in the UI extension. More on the different domains later on.

Interacting with shortcuts by voice

Hopefully, at this point, you've got a reasonable understanding of a Siri shortcut, and of the different uses of the word "shortcut." I've written about intents up to this point with the idea that they're surfaced as suggestions, and run from either a lock screen notification or the shortcuts area of the Siri search page. There is another way these shortcuts can be surfaced around the operating system, and that's via voice command using Siri. You can suggest that someone adds a shortcut from your app to Siri via a button in your app that opens an INUIAddVoiceShortcutViewController. A user can also add a shortcut to Siri themselves, from the Siri options in the Settings app.

There is a convenience aspect to being able to run a shortcut from Siri, but one of the biggest advantages I've found to running them from Siri instead of manually is that you're able to provide voice feedback on the request. An Intents extension (without UI) will show the name of the shortcut it's running, as well as the app running it, then provide a success or failure indicator, or optionally kick the user back to the main iOS app. There is no other feedback. With an Intents UI extension, visual feedback can be given through the UI. When a shortcut is activated with Siri, it can provide voice feedback to the user regardless of whether there is a UI component, making for a great Siri experience.

In this way, Siri can become conversational. Asking Siri something simple such as, "When is the next bus to the city?" or, "Are the Dragons up?" could return a voice-based response, making it a great way to get an answer on the go, or when making the request from across the room. I believe that these shortcuts can be run from both HomePod and Apple Watch, even if the app with the shortcut doesn't have an Apple Watch app. This might only be with certain categories of SiriKit apps, however. I haven't been able to test it yet.
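The in-app route - a button that opens INUIAddVoiceShortcutViewController - can be sketched like this, assuming you already have an intent to wrap and a presenting controller that conforms to the delegate protocol so it can dismiss the flow:

```swift
import IntentsUI

// Present Apple's "Add to Siri" flow for a given intent.
func presentAddToSiri(from viewController: UIViewController, intent: INIntent) {
    guard let shortcut = INShortcut(intent: intent) else { return }
    let addVoiceShortcutVC = INUIAddVoiceShortcutViewController(shortcut: shortcut)
    // Assumes the presenting controller conforms to
    // INUIAddVoiceShortcutViewControllerDelegate.
    addVoiceShortcutVC.delegate = viewController as? INUIAddVoiceShortcutViewControllerDelegate
    viewController.present(addVoiceShortcutVC, animated: true)
}
```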

SiriKit Domains and Intents

Integrating your app with a voice assistant is tricky. People talk in different ways. The way I ask for directions might be different from the way you do. How does a voice assistant know what you mean? SiriKit uses intents, which are types of request the user can make. Up until now, SiriKit integration has been limited to intents in specific "domains," as Apple calls them. These domains are Messaging, Lists and Notes, Workouts, Payments, VoIP Calling, Visual Codes, Photos, Ride Booking, Car Commands, CarPlay and Restaurant Reservations. Previously, if your app didn't fit into one of these categories, you weren't able to integrate with Siri. Intents for these domains aren't new. What is new is the ability for any developer to create a custom intent, meaning any app can integrate with SiriKit. Each action needs to have a defined intent. These are defined through the new intent definition file that you can add in Xcode, and include fields for any parameters required for either the request or the response.
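Once a custom intent is defined, an app donates it to the system so it can be surfaced as a suggestion. A minimal donation sketch, assuming a hypothetical `OrderCoffeeIntent` class generated from an intent definition file:

```swift
import Intents

// Donate a custom intent so the system can suggest it as a shortcut.
// OrderCoffeeIntent is hypothetical; Xcode generates a class like this
// from your intent definition file.
func donateCoffeeOrder() {
    let intent = OrderCoffeeIntent()
    intent.suggestedInvocationPhrase = "Coffee order"

    let interaction = INInteraction(intent: intent, response: nil)
    interaction.donate { error in
        if let error = error {
            print("Intent donation failed: \(error.localizedDescription)")
        }
    }
}
```

Donating on each use of the corresponding in-app action gives the system the signal it needs to start suggesting the shortcut.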

With the old set of domains, there was room for ambiguity in the request. You, as the developer, could say that you didn't understand the request, or that you required more information. There's no room for ambiguity with custom intents, as they're triggered by a custom phrase with no opportunity for further input. At the time a shortcut is added to Siri, the person adding it must tie a custom phrase to it. For example, you, a completely sensible human, might use the phrase, "Coffee order" for your morning coffee order, whereas I, a questionably sensible person, might use the phrase, "Banana peel."

Shortcut intents can take parameters as far as you, the developer, are concerned. The same "Order coffee" intent that you define might have an option for the number of sugars someone wants with their coffee. However, that number must be set at the time the shortcut is added. A "coffee order" shortcut can't have one sugar one day, and none the next. Once a shortcut is set up, its parameters are fixed. Different shortcuts with unique trigger phrases would need to be set up by the user if they wish to have multiple options available when ordering coffee from Siri. You might suggest that a shortcut is set up for a coffee with no sugar after a user places this order a few times, but once it's added to Siri, it cannot be changed without removing the shortcut or adding a new one altogether.

Wrapping up

Many hours later, this post has certainly helped me come up with more precise distinctions between the types of shortcuts developers can build into their apps, and what it means for users of these apps. I hope it can clarify some of the questions you have, too. Maybe now the next time you're talking to someone about shortcuts, you'll better be able to pick up on exactly what aspect of shortcuts they're talking about - Shortcuts the app, the suggestions, Intents, or the Siri Shortcuts. If there's anything I've missed, or you'd like to ask a question, feel free to reach out on Twitter.

Parallel Testing in Xcode 10

Parallel testing

At the Apple Worldwide Developers Conference last week, Xcode 10 was announced with a bunch of new features and enhancements to various developer tools. One of the features that caught my eye was parallel testing.

We should all be writing tests for our code. Unit tests run relatively quickly and are used to test small sections of code, generally in isolation. UI tests are another form of test that, as the name implies, test the UI of your application. They do this by running through full flows in your app - such as a purchase flow from start to end - ensuring that all the expected UI elements are present and that each button and control works as expected. UI tests are useful for catching regressions, and for feeling confident that nothing broke after making a change to your app, but unfortunately can take a while to run. Dozens of 30-second flows in your app add up, and suddenly you might find your test suite taking 30+ minutes to run.

Enter parallel testing.

Previously only available through xcodebuild on separate simulators, parallel testing is now available for all projects in Xcode. It allows multiple tests to be run simultaneously, with the main advantage being dramatically shortened XCTest and XCUITest times.

How to enable parallel testing
  1. Select your target scheme in Xcode, and choose "Edit Scheme..."
  2. Find the settings for "Test", and select the "Info" tab
  3. You'll see a list of your Unit and UI tests; click the associated "Options..." button
  4. Select "Execute in parallel on Simulator"
  5. Optionally select "Randomize execution order"
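If you run tests from the command line or on CI, the same behaviour can be controlled through xcodebuild's parallel testing flags. The scheme and destination below are placeholders for your own project:

```shell
# Run tests in parallel from the command line.
# Scheme and destination are placeholders - substitute your own.
xcodebuild test \
  -scheme MyApp \
  -destination 'platform=iOS Simulator,name=iPhone X' \
  -parallel-testing-enabled YES \
  -parallel-testing-worker-count 4
```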
Running tests in parallel

It's only possible to run Unit tests in parallel on macOS. Both Unit tests and UI tests can be run in parallel on iOS and tvOS.

When running tests in parallel, Xcode will run them on a clone of the same simulator. Most Macs should be capable of running at least two cloned simulators in parallel. Modern Macs with more RAM and more processor cores should be capable of running even more tests simultaneously.

Tests are split up by classes and allocated to each clone of the simulator as Xcode sees fit. What this means is that your tests are only as fast as the longest-running test class. For this reason, it's important to keep each test class as concise as possible and consider splitting tests into as many classes as is practical.
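To illustrate the class-level allocation described above: one monolithic test class runs serially on a single clone, whereas splitting by flow gives Xcode independent units it can distribute. The class and test names here are hypothetical.

```swift
import XCTest

// Each class can be allocated to a different simulator clone,
// so two smaller classes can finish sooner than one large one.
final class PurchaseFlowUITests: XCTestCase {
    func testPurchaseFromProductPage() {
        // ... drive the purchase flow with XCUIApplication
    }
}

final class OnboardingFlowUITests: XCTestCase {
    func testFirstLaunchOnboarding() {
        // ... drive the onboarding flow
    }
}
```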


There are some things to consider now that tests can run in parallel, and optionally, in a random order.

Ensure that tests are able to run independently of one another and that no test relies on the test that comes before or after it to set up or clean up. Each test should be truly independent of all other tests. You are no longer able to ensure that test A will finish before test B begins, so this independence is important.

Performance tests will not achieve maximum performance when running in parallel. Apple suggests putting performance tests in their own bundle and disabling parallel testing for this bundle.

Wrapping up

Parallel testing certainly made for an impressive demo during the Platforms State of the Union at WWDC last week. It's something that will save countless hours of development time. Long testing time is something that may discourage additional tests from being written, and anything combatting that is a benefit to software quality going forward.

Introducing HeartMonitor

Update (5th May 2018): HeartMonitor is back on the App Store.

Update (12th April 2018): Late last week, HeartMonitor was removed from the App Store by Apple. I'm unsure if it will return to the App Store, but I'd like it to, and am working with Apple to hopefully find a way that can happen.

What is HeartMonitor?

HeartMonitor is an app for iPhone and Apple Watch that allows you to use the Apple Watch heart rate sensor to record "sessions" of heart rate activity - without having to start an active workout. The advantage of this is that HeartMonitor sessions have minimal impact on your activity rings.

Potential use-cases

HeartMonitor is useful for monitoring your heart rate in nervous situations, such as while giving a conference talk or presentation, in the dying moments of a football game, or while watching a horror movie. HeartMonitor is also useful if you want to keep an eye on your resting heart rate for short periods of time. I'm sure that there are other creative uses - if you think of any, let me know!


The idea

Almost two years ago, I wrote a blog post highlighting the case for a dedicated heart rate monitoring mode in watchOS - something that gives you the frequent heart rate readings of a workout session, but without having to start a workout and have your activity rings added to. There are issues with a feature like this. The original Apple Watch had mediocre battery life, and throwing in a 30-min heart rate tracking session would be enough to send the battery flat before the end of the day for most people. Since then, the Series 2 and more recent Series 3 have proven battery life is now a non-issue, with many Apple Watch wearers finishing the day with over 50% battery left. This leaves room for cool features that may use slightly more battery. It surprises me that there isn't a feature like this from Apple yet. When I wrote that original post, I was unable to find an app on the watch App Store that did this, and that was still the case up until the launch of HeartMonitor.


I had seen that people were starting a dedicated workout on their watch before an event where they wanted to measure heart rate, and then analysing that data later. There are two problems with this:

  1. It will very likely add unwanted activity data to your activity rings.
  2. There is no decent way to view the data afterwards.

HeartMonitor solves both of these problems. While it isn't perfect, it is far less likely that your exercise ring will be added to. HeartMonitor will add to your rings if your heart rate gets high enough that the activity you're doing should probably be considered exercise, but unlike the Workout app, it won't count every minute simply because you've told it you're working out.

HeartMonitor also solves the second issue of not having a place to view this data. The companion iPhone app will show you a list of sessions with detailed information and statistics for that session. From here, you can rename the session which makes it easier to remember what you were recording, and share the session with friends.


HeartMonitor is completely free. Free to download, with no advertising or in-app purchases. There is no analytics framework (first or third-party) in HeartMonitor, no third-party code, and no tracking whatsoever. You can download it assured there's no hidden catch.

So, what are you waiting for? Just open HeartMonitor on your Apple Watch to get started!

If you like the sound of HeartMonitor, you can download it for free on the App Store.

Feedback is always appreciated. I can be contacted here.

CGM Diaries: Things I Wish I Knew

I last wrote about the Dexcom Continuous Glucose Monitor (CGM) on 11 December, 2017. In that post, I highlighted what it was like after a week. The first week was not without hiccups, and the following few weren't perfect either. Things have improved dramatically after almost two months of wearing a CGM. The idea behind this post is to share things I've learnt that weren't obvious from the beginning. These are the things that had I known in December, would've made the first few weeks with the CGM a lot smoother. Your mileage may vary, but this is a collection of what has been working for me.


Get your overnight blood sugar level right. That is the single best piece of advice I can give. The biggest annoyance I had with the CGM after a week was the alarms - especially overnight. There's no such thing as perfect control, and things are going to go wrong, but if you can stabilise the night as much as possible, it'll make your life easier. If you've got a CGM, chances are you're controlling your blood sugar levels through an insulin pump. It's important to fine-tune your basal insulin to a rate that will keep you steady overnight. Easier said than done, of course, but if it works 5 or 6 nights per week, you'll be well on your way to having an easier time with the CGM. Being woken three nights a week is no fun for anyone.

But how? Before getting a CGM, I would've told you that it was impossible to have consistent levels overnight. I've since learnt this isn't true. I noticed that the nights my blood sugar level was stable before sleep (i.e. it was hovering at a constant level, and I had no active insulin - this was key) were the nights it stayed steady until morning. Short-acting insulin stays in your system for just over three hours, so it's important to avoid eating carbs in the three hours before sleeping. I've changed habits to have dinner finished by 7 pm at the latest so that insulin is out of my system by 10 pm. Now I know that if by 10 pm my blood sugar is between 5 & 7 mmol/L, it's unlikely to fluctuate, thanks to no active insulin and a good basal rate. The result? Hours of uninterrupted sleep. For the sake of your sanity, especially when it comes to alarms that can't be turned off, you'll want to ensure this stability. You may need to tweak basal insulin rates, but it's well worth the effort.

A stable night is advantageous for two reasons; it gets the day off to a good start and helps your average BGL for that day. If my average is higher than usual towards the end of the day, there's a good chance that a bad night was a contributing factor. Ultimately, the goal of using a CGM is to have better stability and control of your blood glucose, which shows in HbA1c results. If the first 6-8 hours of your day are spent at a healthy blood glucose level, that will go a long way to improving your HbA1c.

Bolus early, bolus often

The second most important piece of advice I can give is to bolus (or inject, if needles are your thing) well in advance of eating a meal. Something you'll likely notice during the early days with a CGM is the delay between giving insulin and when it takes effect. If you give insulin for food at the time you begin to eat, you'll see that the food takes effect well before the insulin does, resulting in a spike in your BGL, followed by a dip 60-120 minutes later as the bulk of the insulin takes effect. I've found that giving insulin 20-30 minutes before a meal is the best way to avoid this spike. Your mileage may vary, but I haven't had to worry about my blood sugar falling too fast when bolusing up to half an hour before a meal. Any earlier would put you at risk of a hypo before the food takes effect, so I don't recommend it. Avoiding this spike will not only make you feel better but will help stabilise your blood sugar after eating. The idea is that you then consume food around the time the insulin starts to act in your system, and they balance each other out. It's never perfect, but it may be the difference between "spiking" to 9 mmol/L rather than 13 mmol/L. Avoiding these higher blood sugar levels a few times a day will undoubtedly benefit your HbA1c result.

It's easy to bolus early when you know what you're going to eat at a prepared meal such as an at-home dinner. Eating lunch at uncertain times at work, or eating out and not knowing what to expect, can make this quite difficult. You can't bolus 30 minutes in advance of a meal when you don't know what time you'll get around to eating, nor can you bolus correctly for a meal you've never seen before - as is the case when you eat at a restaurant. Years of diabetes management have taught me to guess the carbohydrate content of meals at a glance. Apply this skill to a plate of food you haven't seen before, and you might be able to guesstimate carb quantity in advance. For example, it's a safe bet that a plate of pasta will have a non-zero amount of carbs, whereas fish and veggies will have very little. In the case of the former, I'd be looking to bolus early for approximately 50% of the carbs I expect in the pasta. A typical bowl might have 50g worth of carbs, but as you don't know the portion size, nor when the meal will arrive, I don't recommend bolusing for the full 50g in advance. It's unlikely a bowl of pasta will have less than 25g, so start by bolusing for that in advance. When the meal arrives, you can use your amazing carb-guessing skills to deliver insulin for the rest of the meal. This has been working for me. Again, it might be different for you. Don't bolus early if your BGL is low to begin with, or if your BGL tends to drop right after bolusing. In my case, I know that if I give insulin for 25g of carbs, I can afford to wait 30 minutes before eating and still avoid a hypo. It's not always practical to bolus in advance, and that's okay. It isn't the end of the world - after all, it's what you've been doing for years, isn't it?
If you're not confident guessing the carb content of a meal in advance or bolusing in advance of eating, you can either choose low-carb food to avoid a spike or just enjoy the meal and accept that a spike now and then is a part of having diabetes. Just be sure to check in with your BGL a few hours later and correct if need be.


The next thing I wish I knew is that carbs are bad. Actually, carbs are great. But carbs are bad for your blood glucose levels. You might've read the section above on sleeping and thought, "That's great, but it isn't possible for me to avoid eating within three hours of sleeping." That's fine! Just be more careful with what you choose to eat. You'll quickly realise the types of foods in your diet that need to go. Have a low (or no) carb dinner if you can. Different foods affect different people in different ways, but these are some key things I've learnt:

  • Plain Weet-Bix (a cereal with effectively no sugar [~2.5%]) and low-fat milk will send my blood sugar soaring immediately after breakfast. It is classified as a medium-GI food.
  • Protein shakes have also been cut out entirely, despite there being no sugar in the protein powder itself. 200mL of milk and a small scoop of ice-cream is enough to raise sugar levels fast.
  • Low-GI bread is a lifesaver. Two slices of Lo-GI bread from Bakers Delight have replaced Weet-Bix as my breakfast, and this has had a noticeably positive impact on my blood sugar. I rarely spike after breakfast, and if I do, it's to ~9 mmol/L instead of 14 mmol/L.

Choosing low-GI foods where possible will help to stabilise your blood sugar levels as these foods release energy slower over a longer period, which is more aligned with how insulin acts in your body. I'd also recommend not eating too much carbohydrate in a short space of time, as it's harder to guess carb content and the effect it'll have on your body. Ideally, you won't see a noticeable spike in your blood sugar after eating, but the ideal is rarely possible. The next best thing is to minimise the spike, and that can be done through a combination of giving insulin early enough, not overeating carbs, and choosing low-GI food.

The sensor

There are a few things worth noting about the sensor itself that would've been handy to know from the beginning.

  • The Dexcom CGM sensor is meant to last seven days. I've had it last for less time and fail to give readings (fortunately Dexcom were great about this and offered a replacement), but also considerably more - up to 14 days. I don't suggest leaving it in longer than the recommended seven days, but note I had success the one time I tried it. I've only been able to try it once, as often the tape has started to peel after 5 or 6 days, so the sensor is ready to be removed on day 7.
  • If you're inserting the sensor in your stomach, it's most likely that the top of the tape (closest to your chest) will peel first, so when fixing the tape, I'd recommend sealing the top edge first to ensure it's stuck in place.
  • Inserting the sensor doesn't hurt, so don't be afraid. This is a case of do as I say, not as I do - years of changing insulin pump sets has me conditioned to expect pain far too often. Surprisingly, the Dexcom sensor insertion is completely painless. There's only been once where I've found it more noticeable than a tickle, but even then it couldn't be felt after a few minutes. The insertion device is quite large and can be intimidating, but it isn't as painful as it seems.

Sanity (check)

I'm the first person to praise the convenience and luxuries afforded by having an Apple Watch. The potential of the Apple Watch as a health and fitness device is truly unlimited. A long-term hope is that the Apple Watch can perform non-invasive blood glucose monitoring, but until that day we've got invasive solutions. Dexcom's integration with both iOS and watchOS is impressive. The apps are reliable, and that's exactly what you want from a medical device. The Dexcom app for Apple Watch supports a "complication" - this means it's possible to view your blood glucose reading in real-time on the face of your Apple Watch, with no need to open the watch app to see this data. At first, I thought there could be no better feature. After a while, I realised it's unhealthy to be glancing at my watch every five minutes to get an update on my blood glucose level. Focusing on this constantly, and anxiously watching it rise or fall, is no good for anyone. You survived up until now without such frequent updates, so you'll be okay to hide the complication. Opening the watch app every hour or so should still be enough for you to see significant benefit from wearing a CGM without going insane. Again, this is a case of do as I say, not as I do. I'm a lot better at not constantly checking my watch than even this time last month, but it's still more of a distraction than I'd like it to be.

Take a break

Continuous glucose monitoring is, on the whole, fantastic. That doesn't mean it isn't overwhelming at times. There have been many occasions where I've thought it's more trouble than it's worth. Since then, things have settled down, mainly thanks to bolusing early and trying to get my levels stable before sleeping. If you feel overwhelmed, I encourage you to take a few days off from wearing it. A great time for this is between taking out one sensor and putting in a new one. There's nothing wrong with going back to doing things the old way for a while. Sure, your levels might spike, but that's been happening for years, and at least you won't be anxious as you watch your BGL climb after a meal. You'll know when you're ready to put it back in. CGM has more of a learning curve than beginning to use an insulin pump does, and like an insulin pump, it isn't a device for everyone.

Learning experience

Everything mentioned thus far is something that I've learnt from experience with the CGM. It's only been a couple of months, and I'm sure that the learning will continue to be plentiful for a long time to come. Ultimately the best way to know what works for you is to try yourself. Things won't work perfectly at first. I could not have imagined how frustrated I would be by the alarms after a few weeks of wearing the Dexcom.

What I'd like to see

I'm looking forward to an update to the Dexcom Apple Watch app which brings the ability for the Dexcom to send data directly to an Apple Watch. Currently, your phone must be in range of the Dexcom to receive updates, which are pushed from the phone to the watch. As of watchOS 4, it is possible for a Bluetooth health device to send data directly to the Apple Watch, and Dexcom have said they will support this feature.

I'd also really like to see granular control of the alerts. There's a lot that could be improved here, including:

  • Make it clear that by acknowledging an alarm in the iOS app, it will be silenced. I spent days thinking it was impossible to silence alarms, and that I was stuck with being woken every five minutes until my blood sugar rose high enough to stop the alerts.
  • Time of day settings. Living in constant fear that your phone will sound an uncontrollable alert is no fun. People have occasions where they don't want to be alerted to things - at work, in meetings, at the cinema, etc. Silent alerts are possible for all but critical low alerts on the Dexcom. This is a start but not perfect, as you then have to remember to turn them on again or else you won't be alerted when you actually want to be - such as overnight. Almost 17 years of diabetes means that if I'm awake, I can tell with 99% accuracy that my blood sugar is low or high. How high, or how low? Who knows, that's what the Dexcom (and previously, finger-pricks) are for, but if I'm awake I'll be able to feel the onset of low or high blood sugar. I'd love to be able to tell the Dexcom to send audio alerts between 10 pm and 6 am, but no other time. Silent notifications to my Apple Watch and iPhone will suffice for the other 16 hours of the day.
  • To extend on the above, there's the concept of "Low" blood sugar in the Dexcom app. This value is defined by the user (I have mine set to 3.7mmol/L), and anything under this reading will trigger an alert. There's also an "urgent low" alarm - set to 3.1mmol/L - which can't be changed, nor silenced under any circumstances. It's a long shot, and there are other things which should be a priority, but I'd like to see smarter use of these settings based on a wake-up alarm. For example, I absolutely should be alerted to a level of 3.6mmol/L at 2 am. However, if my BGL drops to 3.6mmol/L at 5:30 am, and the Dexcom knows my alarm is set to go off in the next 30 minutes, the fact that the low isn't "urgent" means it doesn't need to wake me. I have no issue with it waking me at any time if the level is considered an urgent low, of course. This is simply something I'd like, and I realise everyone would have a different preference.
  • On a similar note, a true wildcard feature would be integration with a sleep tracker on the Apple Watch (AutoWake?). This could mean that if your blood sugar was low and steady, but not falling nor urgently low, it would wait until the end of a sleep cycle to wake you. Let me tell you, Dexcom shows no mercy and will wake you 40 mins after falling asleep if it has to. That is not a time one would want to be woken if it can be avoided.

Again, these business rules are tricky to decide, and this is no easy task to tackle, but in a world where Dexcom have infinite time and resources to develop their mobile apps, I'd like to see it happen. As a general rule, fewer app settings are better, but not for a medical device like the Dexcom, which plays such an important role in the life of the wearer. Fine-grain control is exactly what's needed here, and I find the existing control over alerts to be frustratingly limiting.


Continuous glucose monitoring can benefit every person with diabetes in a slightly different way. For a young child who may not have yet developed the ability to detect low blood sugar, CGM does this for them. For that child's parents, they can send their child off to school knowing that the CGM will catch irregular blood sugar and alert the child sooner than a teacher might notice the child's pale face and scattered concentration, which are both indicators of low blood sugar. For someone my age, CGM is about the freedom and flexibility of not having to carry a blood glucose monitor everywhere I go, but more importantly, it's about smoothing the fluctuations in my blood sugar levels leading to better control which will (hopefully) mean fewer complications from diabetes later in life. For someone prone to fainting when their blood sugar falls, CGM is about saving their life, be it overnight, or just one afternoon while excitedly watching the footy and not realising their BGL has dropped. CGM has infinite potential to benefit people with diabetes, as well as those without it. Learning what works best for each individual takes time, and is no small task. Had I known the tips above this time two months ago, I would've had a much easier time using a CGM in the beginning, and I hope that someone new to using CGM finds the tips to be beneficial.