ARKit and ARCore Will Not Jumpstart Augmented Reality


People have gotten really excited about the new augmented reality (AR) development kits from Apple and Google, not to mention Facebook’s Camera Effects platform. Some analysts and media have significantly revised their forecasts upward for AR market adoption, both for users and revenue, based on the idea that developers will now flock to build AR-capable apps within those proven ecosystems.

Digi-Capital published a revised forecast of 900 million AR users by the end of 2018 and 3 billion AR users by the end of 2021. However, that kind of growth is not going to happen. AR adoption will creep up slowly through the end of 2018. Tractica estimates there will be somewhere between 650 million and 750 million mobile AR unique monthly active users (MAUs) by the end of 2018 and 1.8 billion unique MAUs by the end of 2021. The Digi-Capital estimate for 2018 is only slightly higher than the 881 million unique MAUs we previously forecast in our report, Mobile Augmented Reality.

Smartphones Need 3D Cameras

The main reason for this more modest adoption is that the most compelling AR use cases depend on smartphones being equipped with dedicated three-dimensional (3D) cameras, which give AR the ability to sense depth. 3D sensors and camera capabilities will enable AR developers to build apps with better depth perception and motion tracking, resulting in more precise object positioning and occlusion (digital objects being correctly hidden behind physical objects in a space). Time-of-flight (ToF) systems are now viable as image sensors for mobile AR, enabling more accurate object placement over a larger field of view and range with minimal latency. The problem is that nearly all of today’s smartphone cameras can only map two-dimensional (2D) environments.
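To make the ToF idea concrete, here is a minimal sketch of the underlying physics: a ToF sensor emits light and measures how long the reflection takes to return, and depth is simply the speed of light times the round-trip time, divided by two. The function name and sample timing below are illustrative, not tied to any particular sensor's API.

```python
# Illustrative sketch of the time-of-flight (ToF) depth principle.
# A ToF sensor emits a light pulse and times its reflection;
# depth = (speed of light * round-trip time) / 2.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_depth_m(round_trip_time_s: float) -> float:
    """Convert a measured round-trip time (seconds) into depth (meters)."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

# A reflection returning after roughly 6.67 nanoseconds
# corresponds to an object about 1 meter away.
print(round(tof_depth_m(6.67e-9), 2))
```

A real ToF camera performs this measurement per pixel, producing a depth map that AR software can use for occlusion and precise object placement.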

Tractica’s market intelligence pointed to Apple including dedicated 3D cameras in its full lineup of phones for this product cycle, which includes the iPhone 8 and the iPhone X. But the iPhone 8 does not include a dedicated 3D camera (portrait mode does not count). According to our source, a chief executive for a company that provides the 3D camera technology for key smartphone original equipment manufacturers (OEMs), the iPhone X will have some 3D capabilities; the user-facing side of the iPhone X incorporates a combination of structured light and ToF. Android phones, with the exception of the now-defunct Tango, do not have dedicated 3D cameras. Now, it appears that the market will not even have significant distribution of smartphones with dedicated 3D cameras until mid- to late 2018.

Lessons Learned and Potentially Lucrative Use Cases

AR proponents have been excited about AR before. In the early years between 2008 and 2015, AR was a technology gimmick looking for a market. Innovators struggled with technical limitations, overhyped smart glasses, and weak use cases. Starting in mid-2015, the market began a period of recalibration. The most significant developments for mobile AR since mid-2015 have been solid thinking around how to leverage AR to produce meaningful end-user benefits, the advancement of computer vision, and the integration of artificial intelligence (AI). During 2018, AR will start to become an increasingly embedded mobile capability through software development kits (SDKs) and other platformization strategies, and AR will live within social media platforms, e-commerce apps, maps, and business applications. Mobile AR capabilities will increasingly be made available to non-developers through “point and click” tools.

Most of the highly compelling use cases emerging for AR will depend heavily on depth sensing to succeed. These will include social media, games & entertainment, e-commerce, mapping/indoor navigation, visual search, and toys for the consumer market; and education, plant maintenance, field service, and business-to-business (B2B) sales tools for the enterprise market.

Tractica’s Mobile Augmented Reality report describes gaming as one of the most potentially lucrative use cases:

Due to the established business models and market demand, gaming has been the crucible for market viability and the fate of many technical innovations. AR will be no exception. Many broad-based AR barriers will be overcome and capabilities refined in games, to the benefit of those looking to leverage AR in other use cases.

The compelling attraction of mobile AR for games is interaction specific to time and place, which takes advantage of smartphones’ uniquely sophisticated sensors. A good example of such a game is an Indiegogo project that is the self-proclaimed “world’s first real-life, massive multiplayer, first person shooter” game. It combines real-time geo-localized strategy with first-person battles, allowing participants to hit targets up to 50 meters away (it is, in essence, outdoor laser tag). The game requires an add-on piece of hardware, called the Interceptor, which houses six infrared (IR) sensors with wide angles of horizontal and vertical reception for maximum shot detection. The added equipment is not ideal for mass-market adoption, and the future of such games will likely rely on sensors built into smartphone hardware.

Another potentially lucrative use case cited within the same report is mapping and indoor navigation:

There is a wide range of mobile AR use cases under the broader umbrella of mapping and indoor navigation. Some are more obvious, such as applying AR elements to existing outdoor map applications to help guide and inform users. One of the shortfalls of many current mapping applications is that they rely on smartphone GPS, which does not work well for indoor navigation. The addition of AR capabilities, using 3D sensors and cameras, solves this issue.

Does a Future Exist without Depth Sensing?

Depth sensing is important to many specific areas within e-commerce, particularly those where the selling point is the placement of an object in 3D space, such as furniture. How far will the real estate-related use cases go without depth sensing? The general capability to place a digital object in space and be able to walk around it to achieve different views requires depth sensing. The ability to place a digital object precisely in any use case requires depth sensing. So, what use cases will enable AR to experience explosive growth without depth sensing? Hopefully, developers are thinking about solving actual problems instead of creating novelty.
