CES 2017: Artificial Intelligence’s Ubiquity and Invisibility


The Consumer Electronics Show (CES) is a gadget show at its heart, showcasing the latest in next-generation televisions, refrigerators, kitchen and home appliances, smartphones, laptops, personal computers, audio-visual equipment, and other electronic gadgetry. Over the years, the focus has expanded to include wearables, drones, autonomous cars, robots, and the Internet of Things (IoT) in general. Most devices today are connected and, increasingly, many are gaining intelligence. Essentially, artificial intelligence (AI) is beginning to power devices, helping them learn and adapt around the needs of the end user.

At CES 2017, although we did not see a dedicated marketplace for AI, its impact was all-pervasive and its presence could be felt throughout the show. In most instances, AI was under the surface, part of the software that drives the product. But in some cases, companies chose to highlight or oversell its capabilities. One example was the AI-powered toothbrush that supposedly uses deep learning to learn your brushing habits and provide daily or weekly insights.


In my opinion, the companies that made the biggest splash about AI in their marketing collateral offered the least interesting use cases. To really understand the impact of AI, one had to peel back the layers. In the consumer context, the key themes of AI revolve around analysis, interaction, and autonomy. These themes were on show at CES, and they are likely to feature ever more prominently in marketing terminology in the years ahead.


Wearables provide the best example of AI, specifically machine learning, being used as an analytical tool. Smart watches, fitness trackers, and smart clothing today use algorithms that track your activity levels, sleep, and other biometrics, and suggest habit-changing routines. Under Armour (UA) launched its next generation of UA SpeedForm connected shoes, which analyze your "run readiness" based on a jumping routine and can automatically detect when you start running. This forms part of UA's data-driven, AI-infused wearables strategy, which feeds its UA Record solution, one of the largest fitness tracking platforms in the world with close to 300 million users.
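The analytical pattern described above, reducing a window of raw sensor readings to summary features and classifying the wearer's activity, can be sketched in a few lines. This is a toy illustration with made-up centroid values, not any vendor's actual algorithm; production wearables use far richer features and trained models:

```python
import math

# Toy training data: (mean, std) of accelerometer magnitude over a window,
# labeled with an activity. The values are illustrative assumptions,
# not taken from any real device.
TRAINING = [
    ((1.0, 0.05), "resting"),
    ((1.1, 0.30), "walking"),
    ((1.4, 0.80), "running"),
]

def features(window):
    """Reduce a window of accelerometer magnitudes to (mean, std)."""
    mean = sum(window) / len(window)
    var = sum((x - mean) ** 2 for x in window) / len(window)
    return mean, math.sqrt(var)

def classify(window):
    """Nearest-centroid classification of an activity window."""
    f = features(window)
    return min(TRAINING, key=lambda item: math.dist(f, item[0]))[1]

# A calm window (low variance around 1 g) looks like resting.
print(classify([1.0, 1.02, 0.98, 1.01]))  # -> resting
```

Detecting "when you start running", as the UA shoes do, amounts to noticing when successive windows shift from one class to another.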

Carnival Cruises, one of this year's exhibitors, spoke about an upcoming wearable bracelet that is, at its core, a wearable payment solution. By applying machine learning to analyze your preferences, including the places you frequent most on the ship, it can provide a more personalized, context-driven experience.

While AI-infused devices for health improvement and customer journey enhancement were on display, streamlining user productivity emerged as another by-product of machine learning algorithms. Mercedes-Benz talked about combining user data from its mobile app with AI to help users schedule their day and map daily routes based on their typical routines and calendars.



Amazon Echo, Amazon’s voice-powered personal assistant, was not officially part of CES this year, but its integrations with third-party devices, and devices with Echo services built in, were ubiquitous. According to The Verge, there were at least 20 products at CES 2017 with Echo services built in, and at least 9 products integrated with the Echo platform to allow voice-based control. The number of skills supported on Echo has grown seven-fold over the past 12 months to surpass 7,000, so it is no surprise that CES exhibitors were keen to showcase their Echo integrations.


Amazon’s Echo is one of the best examples of natural language processing (NLP) and AI making human-level interaction with machines possible. Thanks to Echo, voice is the next platform to watch, and CES 2017 was proof that Amazon has convincingly won the first round of the battle for voice assistants. Siri and Google Now have done little to drive an ecosystem forward or to support third-party devices and applications. Another category where AI-enabled voice interaction is beginning to show promise is smart earphones that act as personal assistants or provide voice-based coaching for fitness enthusiasts.
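The "skills" model behind voice platforms can be sketched with a toy intent router. This is a simplified sketch, not Amazon's actual Alexa Skills Kit API: real skills receive structured intent JSON from a cloud speech service rather than raw text, and the skill names and responses below are invented for illustration:

```python
# Hypothetical skill handlers, each mapping parsed "slots" to a reply.
SKILLS = {
    "lights": lambda slots: f"Turning the lights {slots.get('state', 'on')}.",
    "weather": lambda slots: "It is 72 degrees and sunny.",  # stubbed reply
}

def route(utterance):
    """Map a transcribed utterance to a skill via simple keyword matching."""
    words = utterance.lower().split()
    if "lights" in words:
        state = "off" if "off" in words else "on"
        return SKILLS["lights"]({"state": state})
    if "weather" in words:
        return SKILLS["weather"]({})
    return "Sorry, I don't know how to help with that."

print(route("turn the lights off"))  # -> Turning the lights off.
```

The hard part that NLP systems like Echo's actually solve is everything this sketch skips: converting speech to text, resolving intent from free-form phrasing, and extracting slot values reliably.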

Computer vision, in combination with NLP, was also on display around the show, most prominently in family and customer service robots, a category that was a big theme at CES 2017.


Autonomy is one of the most sought-after features in automobiles and drones today. Human transport and delivery logistics are expected to see major disruption in the next few decades as autonomous capabilities are built into cars, trucks, delivery vans, shuttles, drones, boats, ships, and other transportation infrastructure. This year, the Automotive section in the North Hall of CES, traditionally the stomping ground of automotive audiophiles, was transformed into an autonomous car show of the future. Autonomous transport was a key theme, and my guess is that we shall soon see the Drones marketplace, currently located deep in the South Hall, move closer to the North Hall, or the two will converge into one big marketplace with original equipment manufacturers (OEMs) and component suppliers all showcasing their wares in one place. The Mercedes drone van concept on display at CES was just the beginning of this convergence of all transport.

Computer vision and AI power many of the autonomous and driver safety capabilities, including vehicle object detection, navigation, object avoidance, swarming drones, truck platooning, surge pricing for on-demand taxis, driver facial recognition, and many others. While we dream of a fully autonomous future, we face a long period in which semi-autonomous capabilities are built into our transport infrastructure. Driver assistance and safety will see many of the near-term AI use cases, as the regulatory and ethical issues of an AI-based autonomous transport future start to dawn on us.


It was interesting to see NVIDIA showcase its AI co-pilot technology, which keeps an eye on the driver and warns of fatigue. Hyundai showcased a scenario in which its car transitions between human driving and autopilot mode, and how AI plays a role in both.


Toyota’s Dr. Gill Pratt of the $1 billion Toyota Research Institute (TRI) laid out TRI’s plans to work simultaneously on semi-autonomous (Level 2 and 3) and fully autonomous (Level 4 and 5) cars. Dr. Pratt also introduced Yui, the AI-based personal assistant that will be deployed in the Concept-i autonomous car to keep the driver engaged and maintain situational awareness.

Looking for Clues to AI’s Future at CES

In summary, AI was everywhere at CES, not as a visible trend but as a largely hidden software layer powering the next generation of technology products. I found that the companies overtly using AI as a marketing tool offered the least interesting use cases. To really understand AI, one had to look deeper.

There is no looking back. Every CES show from now on will have AI as an engine that drives technology forward. If one wants to gauge the level of AI integration in consumer products, CES will be the show to visit. AI will continue to be deeply embedded as an analytical tool in wearables. We will also see vast improvements in human-level interaction capabilities that will power robots and home automation. And finally, AI will be the autonomy engine that revolutionizes transport and logistics.
