Using Fritz AI in SwiftUI
Demonstrating How by Porting Our Portrait Mode Example to SwiftUI
SwiftUI is a young yet powerful UI framework from Apple that is declarative and reactive by design. It appeals to so many developers because it is simple, modern, and not driven by Interface Builder or Storyboards.
That being said, not everything about SwiftUI is polished and mature just yet, meaning some of your features may still need to rely on UIKit.
Take, for example, our very own Fritz Pre-Trained Image Segmentation Model. In our demo code on how to build Portrait Mode (which can be found here along with this tutorial), you’ll see we rely on a ViewController. Within this ViewController, we are able to get the live camera feed, use AVKit components, run our model, and manipulate the UI accordingly. This is just not yet possible with pure SwiftUI.
This doesn’t mean it’s impossible to use cool visual models in your SwiftUI app. On the contrary, it’s quite simple to turn our demo VC into a Representable View. From there, you could build out amazing UI/UX using the power and ease of SwiftUI and include awesome Fritz AI features simply by declaring them.
Let’s learn how by taking the VC in the demo project and placing it into a new SwiftUI project!
Setup
To really understand how we can get these features working in a SwiftUI app, let's first walk through the setup. There are some key steps we'll need to follow to make our features work in SwiftUI.
Note: We will be working off of Xcode 12, SwiftUI 2.0, and CocoaPods.
Xcode and Pods
First, let’s create a new Xcode project. Choose iOS App, with Interface: SwiftUI and Life Cycle: SwiftUI App (the rest is your choice). Create your project and then close it.
Next, in your new project’s directory, add a Podfile with the following code:
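The original Podfile isn’t reproduced here, so the sketch below is an assumption based on the Fritz AI CocoaPods setup: the core Fritz pod plus a pre-trained people segmentation subspec. The target name and the exact subspec path are placeholders; check the demo project’s Podfile or the Fritz AI docs for the names matching your SDK version.

```ruby
platform :ios, '13.0'
use_frameworks!

# 'YourAppName' is a placeholder; use the target name from your Xcode project.
target 'YourAppName' do
  # Core Fritz SDK.
  pod 'Fritz'
  # Pre-trained people segmentation model; the subspec name here is an
  # assumption — confirm it against the demo project's Podfile.
  pod 'Fritz/VisionSegmentationModel/People/Fast'
end
```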
From the terminal, navigate to your directory and run pod install. Once complete, open the new .xcworkspace file, which we'll use from now on.
Permissions
In your project’s Info, be sure to add the following permission so we can use the camera: Privacy - Camera Usage Description.
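That key is Xcode’s display name; in the raw Info.plist source it corresponds to NSCameraUsageDescription. A minimal entry might look like this (the description string is our own example and can be any user-facing message):

```xml
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to run the portrait mode effect.</string>
```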
Register on Fritz AI
If you haven’t already, sign up for a new Fritz AI account. Once you do, you’ll want to register your new iOS app. To do this, you’ll need the App Name and Bundle Identifier you gave your app.
When the registration prompts you to download the Fritz-info.plist file, follow the instructions on where to place the file in your project.
Before you hit next in the registration process, we’ll want to finish setting up your app so we can complete the registration fully. Let’s do that now.
Bring Back the App Delegate
In the Fritz AI registration walkthrough, you’ll notice that it instructs you to place code in your project’s App Delegate. With SwiftUI 2.0, however, the App Delegate is essentially replaced by @main. That doesn't mean we can't still bring the App Delegate back. Insert the following in the file <yourappnamehere>App.swift, beneath the App struct:
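The article’s code block wasn’t preserved, so here is a minimal sketch of that delegate. The class name AppDelegate is our own choice; FritzCore.configure() is the Fritz SDK’s setup call referenced later in the article.

```swift
import SwiftUI
import Fritz

// A minimal UIKit app delegate so we can run Fritz setup at launch.
class AppDelegate: NSObject, UIApplicationDelegate {
    func application(
        _ application: UIApplication,
        didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]? = nil
    ) -> Bool {
        FritzCore.configure()
        return true
    }
}
```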
And in your App struct, add the following line:
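The line in question is SwiftUI 2.0’s @UIApplicationDelegateAdaptor property wrapper, which hooks a UIKit delegate into the SwiftUI lifecycle. Shown here in context (the struct name YourAppNameApp is a placeholder for whatever Xcode generated for your project):

```swift
@main
struct YourAppNameApp: App {
    // Registers our AppDelegate with the SwiftUI app lifecycle.
    @UIApplicationDelegateAdaptor(AppDelegate.self) var appDelegate

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
```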
This way, we can run FritzCore.configure() on launch. This may seem like it eliminates the benefit of @main, but we still eliminate a ton of boilerplate and unnecessary code in our project.
Complete Registration
To wrap up registration, let’s run our app in the simulator so we establish contact with Fritz AI. Once we see “Hello World,” we can go back to the registration page and hit next. Fritz will check to see that our app has at least attempted to reach out to them and, once it does, complete the registration successfully!
To get the most out of your mobile ML models, you’ll need to monitor and improve them in production. This can be complicated work, but Fritz AI Studio allows you to easily manage models and improve them over time.
Code
Now comes the fun part, though you’ll find it almost too simple, since we’ll be porting in the demo. Copy both ViewController.swift and CustomBlurView.swift from the demo project into your project. I also chose to rename ViewController to ImageSegmentationViewController to make it more identifiable, so that's how I'll be referring to it here.
Wrapping a VC with Representable
Next, we’ll create a UIViewControllerRepresentable so that SwiftUI can see and use our VC as a View. At the bottom of our VC file, add the following:
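Since the original snippet didn’t survive, here is a sketch of the standard UIViewControllerRepresentable boilerplate, assuming the VC was renamed to ImageSegmentationViewController as described above:

```swift
import SwiftUI

// Wraps our UIKit view controller so SwiftUI can treat it as a View.
struct ImageSegmenter: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> ImageSegmentationViewController {
        ImageSegmentationViewController()
    }

    func updateUIViewController(_ uiViewController: ImageSegmentationViewController,
                                context: Context) {
        // Nothing to update: the controller manages the camera feed
        // and segmentation model on its own.
    }
}
```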
That’s… actually it. We’ve now created a new View called ImageSegmenter, which contains our ImageSegmentationViewController.
Declare Our View
All that’s left to do is declare our ImageSegmenter in our UI just like we would any other View. Go to ContentView.swift, wrap the Text in a VStack, and add ImageSegmenter below the Text:
When you run the app on an actual device, you’ll see the camera feed showing and that, should it detect any people, our Portrait Mode now works!
Conclusion
While there were a number of steps to follow to get our feature up and running in our SwiftUI app, there really isn’t that much of a difference from setting it up in a traditional app. The biggest difference is what we’ve now gained: the ability to flesh out the rest of our app’s UI/UX using SwiftUI!
Resources:
Editor’s Note: Heartbeat is a contributor-driven online publication and community dedicated to exploring the emerging intersection of mobile app development and machine learning. We’re committed to supporting and inspiring developers and engineers from all walks of life.
Editorially independent, Heartbeat is sponsored and published by Fritz AI, the machine learning platform that helps developers teach devices to see, hear, sense, and think. We pay our contributors, and we don’t sell ads.
If you’d like to contribute, head on over to our call for contributors. You can also sign up to receive our weekly newsletters (Deep Learning Weekly and the Fritz AI Newsletter), join us on Slack, and follow Fritz AI on Twitter for all the latest in mobile machine learning.