We are going to have a main folder called “Features” where we are going to add all the features of our app. Inside this folder, we are going to put each feature in its own folder, dividing the classes, controllers, and storyboards like this:
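As a rough sketch, the feature-based grouping could look something like this (the feature and file names below are placeholders, not the project's actual names):

```
Features/
├── Landing/
│   ├── LandingViewController.swift
│   └── Landing.storyboard
├── Registration/
│   ├── RegistrationViewController.swift
│   └── Registration.storyboard
└── Tracker/
    ├── TrackerViewController.swift
    └── Tracker.storyboard
```

Grouping by feature rather than by type (all controllers in one folder, all storyboards in another) keeps everything a screen needs in one place as the app grows.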
Once we have the training data (images), we will send it to a web application written in Python. The back end will be a combination of two elements: a REST API implemented on top of Flask, and Turi Create, which is developed by Apple and makes it easy to build, train, and export Core ML models. Finally, Docker will allow us to deploy the back end to AWS with minimal effort. The resulting Docker container will be based on Ubuntu 16.04 and run a Gunicorn HTTP server.

Based on the requirements for the project, the design team proposed an app layout with the following main screens: a landing screen, a registration screen, a “saving user data” screen, an “uploading” screen for when the data has been successfully saved, and an “identifier” or “people tracker” screen.

To start, we’ll need Xcode 9 installed on our machines. Open Xcode and create a new project using the iOS -> Augmented Reality App template. After you select the template, click “Next” and set up your project’s settings, including your project name, team, identifier, language, etc., then click “Next” to create your project in the selected folder. At this point, if you build and run your app, you will see a default object living in a virtual space. This means that you have successfully created your first AR project.
We are going to build our app using Swift 4 and Xcode 9. We are also going to need a real device with iOS 11 and ARKit support, which could be an iPhone 6S or newer.

For this project, we want to detect the face of a person using a live video feed within the application. We can accomplish this task by using the Vision framework to perform face detection. For 25 frames in the video feed, we will request Vision to return a bounding box that will be used to crop the frame, obtaining 25 images of the user’s face.
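A minimal sketch of that detect-and-crop step, assuming the captured frame arrives as a `CGImage` (the helper name is hypothetical; the actual capture pipeline is covered later):

```swift
import Vision
import CoreGraphics

// Detects the first face in a frame and hands back the cropped face image.
func cropFace(from frame: CGImage, completion: @escaping (CGImage?) -> Void) {
    let request = VNDetectFaceRectanglesRequest { request, _ in
        guard let face = (request.results as? [VNFaceObservation])?.first else {
            completion(nil)
            return
        }
        // Vision returns a normalized bounding box with its origin at the
        // bottom-left, so convert it to pixel coordinates before cropping.
        let width = CGFloat(frame.width)
        let height = CGFloat(frame.height)
        let box = face.boundingBox
        let rect = CGRect(x: box.origin.x * width,
                          y: (1 - box.origin.y - box.height) * height,
                          width: box.width * width,
                          height: box.height * height)
        completion(frame.cropping(to: rect))
    }
    let handler = VNImageRequestHandler(cgImage: frame, options: [:])
    try? handler.perform([request])
}
```

Calling this once per sampled frame until 25 crops have been collected gives us the training set for a user.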
Augmented reality (AR) and machine learning are the hottest technologies on the market right now, so what we are going to build this time is a face recognition app that identifies people in our office and shows basic information when the app identifies someone who has been registered.

In this first post, we are going to talk about how to build our iOS app and set up everything to work with a custom-made back end. In the second post, we are going to talk about how to build the back end and put it in charge of receiving the images from the app, implementing our machine learning model, and sending the resulting data back to the app.