AR Tech You Can Use in Your App Today

Augmented Reality Developer explains the technology

From games to navigation and shopping, AR has already trickled into the mobile app world, giving users only a small taste of its potential. To the app creator’s benefit, both Android and iOS have taken the lead in making state-of-the-art AR features available for developers to use.

With ARKit and ARCore, Workinman Interactive has been developing AR apps from the outset, building experiences that feel like working magic to the average user. Our goal is to maximize the magic while keeping development costs manageable and the app monetizable.

To help guide our clients, we created this quick summary of Augmented Reality features that are readily available, as well as a quick note for each on how they may impact development costs. The list will grow as new technologies emerge, so bookmark the page and come back often for updates.

SLAM (Simultaneous Localization and Mapping)

Think of the space the camera sees as an environment. SLAM builds a virtual counterpart of that environment out of points on the things it recognizes best: usually edges, corners, high-contrast areas, surfaces, symbols, and a few things in between. With these points plotted, it connects the dots into a 3D map.

This map can then be used to anchor virtual objects, such as a race car, a piece of furniture, or a virtual pet, on a real surface. This technology works reasonably well on horizontal flat surfaces, and support for more complex geometry and vertical surfaces is becoming more reliable. Recently, SLAM has also enabled fairly accurate measurements of the mapped environments, which lets developers create measuring tools and place virtual objects in a real environment at more accurate scale.

An anchored object can be anything in the virtual world, such as a 3D model, text, an image, an animation, or a video, that is attached to a point mapped in the world the camera sees. When the camera moves, the point and the anchored object move too. This is what ties objects of the virtual world to the real one.
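To make the idea concrete, here is a minimal Python sketch of the core math behind surface detection and anchoring. This is illustrative only, not ARKit or ARCore code; the function names are our own. It fits a plane to the feature points SLAM has plotted, then anchors a virtual object at the surface’s center:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a cloud of 3D feature points.
    Returns (centroid, unit normal)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # SVD of the centered points: the direction with the smallest
    # singular value is the plane's normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

def anchor_object(points):
    """Place a virtual object at the detected surface's center,
    oriented along the surface normal."""
    centroid, normal = fit_plane(points)
    return {"position": centroid, "up": normal}
```

As the camera moves, the real SDK re-estimates these points every frame, which is why an anchored object appears to stay glued to the table.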

Possible uses: Virtual remote control car you can drive anywhere, virtual pets that move around your home’s interior, trying virtual furniture out in rooms, adding game creatures into the world with a solid sense of realism, AR portals.

Cost impact: SLAM is available on modern Apple and Android devices, allowing us to tap into it without additional costs or subscriptions. Our familiarity with SLAM makes it a fairly straightforward implementation. SLAM isn’t perfect out of the box, though, so refining the experience may take some time depending on the use requirements. For instance, for apps that need more accurate color, we may want to budget additional time for color management. For most uses, it just works!

Above: we used SLAM technology to detect a surface to use as a base for spawning characters.

360° Video and Worlds

We can completely cover the environment in 3D assets or video, offering an experience much like VR. The user can then look around the environment by moving their device. In combination with SLAM, we can let users walk around and interact with objects.
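Under the hood, looking around a 360° video is mostly a matter of mapping the device’s orientation to a spot on an equirectangular frame. A minimal sketch of that mapping, with an assumed function name and frame layout (not any SDK’s API):

```python
import math

def equirect_uv(yaw, pitch, width, height):
    """Map a device view direction (yaw and pitch, in radians) to the
    pixel in an equirectangular 360-degree frame the user is facing.
    yaw spans -pi..pi around the horizon; pitch spans -pi/2..pi/2."""
    u = int((yaw / (2 * math.pi) + 0.5) * width) % width
    v = int((0.5 - pitch / math.pi) * height)
    return u, max(0, min(height - 1, v))
```

In practice the SDK feeds the device’s orientation sensor into a lookup like this every frame, so the video appears fixed in space while the phone moves.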

Possible uses: VR-like games and experiences. AR portals, where users can “step into” an environment.

Cost impact: No real cost concerns here. The technology is straightforward and readily supported by most devices. One cost to consider: if an app centers on 360° video, budget for the production time and cost of creating that video content.

Above: For this virtual tour AR portal, we used SLAM technology to spawn a door, and then users can walk through into an environment that plays a 360° HD video.

Image Recognition, Object Recognition, and Object Tracking

Cameras can now use photos, symbols, and QR codes to spawn virtual objects in the camera’s view. Simply by aiming a camera at a screen, printout, poster, label, or billboard, entire games can be spawned on the user’s device in an instant. Once the image is recognized, we can also track it as it moves.

Using machine learning, we can teach an app to detect and track objects based on their physical features and the designs printed on them. The model also accounts for how the object looks in different lighting conditions, the angle it is viewed from, and whether it is partially obscured. We also have control over the matching tolerance, which lets us recognize a wine bottle whether full or empty, or even with the label askew.
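The tolerance idea can be sketched in a few lines of Python. This is a toy brute-force matcher, not Vuforia’s or ARKit’s actual algorithm; the function names and thresholds are our own assumptions. A target is “recognized” when enough of its feature descriptors find a close-enough counterpart in the scene:

```python
import numpy as np

def match_score(target_desc, scene_desc, tol=0.25):
    """Fraction of the target's feature descriptors that find a match in
    the scene within distance `tol`. Loosening tol lets a full or empty
    bottle, or a slightly askew label, still count as the same object."""
    hits = 0
    for d in np.asarray(target_desc, dtype=float):
        dists = np.linalg.norm(np.asarray(scene_desc, dtype=float) - d, axis=1)
        if dists.min() < tol:
            hits += 1
    return hits / len(target_desc)

def recognized(target_desc, scene_desc, tol=0.25, min_score=0.6):
    """Accept the match when enough descriptors agree."""
    return match_score(target_desc, scene_desc, tol) >= min_score
```

Raising `tol` and lowering `min_score` trades false negatives for false positives, which is exactly the dial we tune per project.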

If your AR app needs to recognize a multitude of targets, and those targets will change or grow over time, a cloud-based system is likely needed. This allows for cloud-based management of all targets and the algorithms needed to recognize them. The beauty of a cloud-based system is not having to release an app update every time you want to change content.

Area Targets use partial or complete environments to build a virtual space; it’s object recognition on a larger scale. Area Targets allow for flexible recognition of multiple things in a given space. For the AR application to recognize an environment, machine learning crunches through photos or videos of the space from all angles and in varied lighting conditions. Alternatively, a 3D model of the environment can be used to train the engine.

Possible uses: Apps that detect objects/products in a scene, such as a store shelf, or in a museum, and enhance them with more information or animation. Toys that come to life when viewed through the app.  Product labels and instructions that can play videos and demonstrations.

Cost impact: Cost generally scales with the number of objects that need to be learned. On the low end, we can design an app that recognizes and tracks a unique label or pattern on the surface of a product in less time than tracking the object as a whole.

On the high end, a cloud-based system for feeding in and managing ever-changing objects for recognition may require a third-party solution. A cloud infrastructure such as Vuforia’s is recommended for cloud-based targets, and this carries a licensing fee.

Area Targets are available through the Vuforia (PTC) SDK, which carries licensing costs. One also needs to consider the effort of capturing media and training the machine-learning algorithm.


Above: We used image recognition and tracking technology to detect sides of a cube, which was then used to spawn game elements. 

World Tracking

World Tracking uses both SLAM and object tracking to better understand a space and how to interact with it, similar to how self-driving cars identify the road and traffic signs. The ability to create and track a correspondence between real-world and virtual spaces allows for a more immersive experience. This becomes especially important when you want more interaction between the virtual and real worlds.

Positional and Orientation Tracking is a subset of technologies that play into this. They track a mobile device’s position and orientation within the world.
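A device pose is conventionally represented as a 4×4 transform combining position and orientation. The sketch below (plain Python with NumPy, not an SDK API; names are ours) shows why that representation matters: inverting the pose re-expresses a world-anchored point in the device’s own frame, so the object can be rendered in the right spot as the phone moves:

```python
import numpy as np

def device_pose(position, yaw):
    """4x4 world-from-device transform built from a tracked position
    and heading (yaw about the vertical axis, in radians)."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, 0.0, s],
                 [0.0, 1.0, 0.0],
                 [-s, 0.0, c]]
    T[:3, 3] = position
    return T

def anchor_in_device_frame(pose, world_point):
    """Express a world-anchored point in the device's frame for
    rendering; as the device moves, the world point stays put."""
    p = np.append(np.asarray(world_point, dtype=float), 1.0)
    return (np.linalg.inv(pose) @ p)[:3]
```

Real SDKs track full 3-axis rotation, of course; we limit this sketch to yaw to keep it readable.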

Possible uses: AR features on remote controlled vehicles, self-navigating vehicles, AR games where enemies or pickups are strategically positioned in the world.

Cost impact: SLAM, at its core, handles most of this. Costs vary considerably with the number and scope of interactions, and with which types of real-world objects need to be tracked to enable deeper interaction between the virtual and real worlds.

For instance, say a person walks in front of the camera, or you drive a virtual car behind the coffee table. AR systems usually don’t account for this, and the virtual objects in the scene will show through as if they were on top. This is called occlusion, and getting it wrong can break immersion to a great degree. ARKit and ARCore now support simple forms of occlusion, but more robust solutions that account for a wider range of obstacles would require additional paid integrations and development time.
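At its simplest, occlusion is a per-pixel depth test: draw the virtual object only where it is closer to the camera than the real scene. A minimal sketch (our own simplification; real SDKs do this on the GPU against a live depth map):

```python
def occlusion_mask(virtual_depth, real_depth):
    """Per-pixel visibility test. virtual_depth holds the virtual
    object's depth per pixel (None where it isn't drawn); real_depth
    is the sensed depth of the real scene. A pixel shows the virtual
    object only when it sits in front of the real surface there."""
    return [[v is not None and v < r
             for v, r in zip(vrow, rrow)]
            for vrow, rrow in zip(virtual_depth, real_depth)]
```

This is why a virtual car can convincingly disappear behind a real coffee table: the table’s pixels win the depth test.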


Face Tracking and Filters

Snapchat made them popular, and now they are in a ton of apps and part of modern culture. The core of this tech is available from both Apple and Google: we can easily track faces and apply masks and other embellishments to them. More advanced filters would warrant licensing state-of-the-art tech, like Banuba, to create spectacular effects and offer features seen in some popular apps. Note that filters for Facebook and Snapchat use those platforms’ own systems for creation and publishing.

Possible uses: Face filters. Makeover apps. Hat and glasses try-before-you buy apps. Smarter cameras that prevent eyes closed or detect smiles.

Cost impact: A lot depends on the number of filters and how interactive they are. We usually recommend the most cost-effective approach, leaning on what’s already available before reaching for high-cost licensing. Creating sets of face filters for Facebook and Snapchat can be more cost-efficient than building a custom app.

Face filters and AR masks are hugely popular and run on most devices. We can create them as a part of a greater app/game, or for platforms such as Facebook Messenger or Snapchat.

Body/Motion Tracking

Years ago, proper body tracking required a camera array. Nowadays, mobile devices can do a reasonable job of detecting a human form and mapping a simple skeletal structure to it. It works best in good lighting, with one or two people in the frame, facing the camera.

Possible uses: AR makeover and fashion apps, virtual avatars, people detection.

Cost impact: Years prior, this tech required a hefty license. Body tracking was only recently added to Apple and Android’s AR SDKs, so getting base features implemented is fairly light. Perfecting body tracking still poses a challenge in these early days of the tech, but it’s very much usable.

Multi-camera AR

Adding multiple cameras to an AR setup can improve the accuracy of distance calculations and open up new creative ways to present content. A carefully aligned second camera can act as a rangefinder, allowing for more precise distance and scale calculations. For applications that require precise measurement of real-life objects and accurate scaling of virtual objects, this is one of the better ways to go. The downside is that mobile devices typically expose only a single AR-capable camera for this use, so custom hardware or a PC would be needed. Two cameras can also be used creatively (this can be done on mobile), for instance when objects from one camera are placed in the scene of another.
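The rangefinding trick is classic stereo triangulation: a point seen by both cameras appears shifted (the disparity), and depth falls out of the formula depth = focal length × baseline ÷ disparity. A minimal sketch, with our own function name:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Two-camera rangefinding: a point's depth from the camera pair,
    given the focal length in pixels, the distance between the two
    cameras (baseline, meters), and the pixel shift of the point
    between the two images (disparity)."""
    if disparity_px <= 0:
        raise ValueError("point must appear shifted between the cameras")
    return focal_px * baseline_m / disparity_px
```

A wider baseline or a higher-resolution sensor yields finer disparity, and therefore finer distance measurements, which is why purpose-built rigs beat a single phone camera here.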

Possible uses: Measurement tools. AR architecture and furnishing apps where precision is required. Entertainment apps where feeds from two cameras are combined into one view.

Cost impact: If a custom hardware solution is needed, that cost must be factored in and can vary; many inexpensive development boards can handle this setup, as can PCs. At this time, multi-camera AR typically requires an SDK license, such as Vuforia (PTC), which can be cost-prohibitive.

Shared AR

It’s possible to network AR apps together so multiple players can experience shared objects and worlds and interact together within them. Cloud-based markers create spatial reference points so that multiple devices can use the same AR space and the virtual objects within it. Player positions are also tracked and communicated to other devices. This allows for awesome multiplayer AR games and seamless collaboration for professional use.
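The core trick is frame alignment: each device builds its own coordinate system, but both can see the same shared marker, and the difference between where each device places that marker lets one device’s objects be re-expressed in the other’s frame. A deliberately simplified Python sketch (translation only, no rotation; not the actual ARKit/ARCore cloud-anchor API):

```python
import numpy as np

def align_frames(marker_in_a, marker_in_b):
    """Both devices observe the same shared marker. The offset between
    where each device's map places it gives the translation from
    device B's world frame to device A's (rotation omitted here)."""
    return np.asarray(marker_in_a, dtype=float) - np.asarray(marker_in_b, dtype=float)

def to_device_a(point_in_b, offset):
    """Re-express an object placed by device B in device A's frame,
    so both players see it in the same real-world spot."""
    return np.asarray(point_in_b, dtype=float) + offset
```

Production systems also reconcile rotation and drift continuously, which is part of why shared AR takes extra engineering effort.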

Possible uses: Multiplayer games such as shooters and escape rooms, collaborative spaces like whiteboards and graffiti walls, AR board games and tabletop sports.

Cost impact: At a minimum, the infrastructure needed to manage shared AR is typically free on Apple and Android devices; at scale, it may require a paid system. Shared AR is cutting edge and needs extra design effort to smooth out its rough edges and limited functionality. Large shared spaces (multiple rooms, or outdoors) may require mapping and GPS functionality, which would take additional development time.

5 Awesome Ways to Promote Your Brand with AR

Need some creative inspiration for your AR apps? Check out these ideas!


Workinman Interactive for AR Development

Workinman is more than a VR/AR development house: we’re a studio of engaging game designers passionate about exploring the frontiers of technology. With our rich history of creating apps for major advertising and entertainment clients, we can bring compelling experiences to your audiences in the realm of AR and more. Talk with us today to get started.
