This week Apple released its WatchKit platform for Apple Watch developers, giving us considerably more detail about the forthcoming device's functionality and usage than we had at the September announcement.
Some of the highlights include:
- Watch apps, at least for now, will run mostly on the iPhone rather than on the watch itself. Native watch apps are expected later in 2015. My guess is that most Apple Watch apps will follow this master/slave processing model even then, as it conserves the watch's battery life. That said, watch apps will have an impact on the phone's battery, which is not ideal.
- Watch apps are only extensions of phone apps. Apple recommends that only relevant information from the phone app be displayed on the watch, with minimal interaction. Essentially, the shorter the watch app interaction, the better.
- A Glance is a shortened, minimalistic form of interaction, providing only the most relevant information from the app. For example, the watch could display "time to destination" when using directions in Apple Maps. One can open the glance version of an app by simply swiping from bottom to top. If a user keeps the watch raised during a glance, after a moment it will reveal more information.
- Apple has restricted customized inputs or gestures. This limitation is not surprising, as Apple wants to standardize the inputs and have users get accustomed to the basics of watch interaction.
- Video is not supported. While this approach makes sense, there might be some value in having the watch act as a complementary video output to the phone, such as for apps like FaceTime.
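To make the "watch apps run on the phone" point concrete, here is a minimal sketch of what a WatchKit 1 interface controller looks like. This code executes in the WatchKit extension on the iPhone, while the watch itself only renders the storyboard-defined UI that the outlets update. The `etaLabel` outlet name and the idea of showing an ETA are my own hypothetical illustrations, not anything Apple has shipped:

```swift
import WatchKit

// Sketch of a WatchKit 1 interface controller. This class runs in the
// WatchKit extension on the iPhone, not on the watch; the watch displays
// a storyboard UI whose elements are updated through outlets like this one.
class ETAInterfaceController: WKInterfaceController {

    // Hypothetical label wired up in the watch app's storyboard.
    @IBOutlet weak var etaLabel: WKInterfaceLabel!

    override func awakeWithContext(context: AnyObject?) {
        super.awakeWithContext(context)
        // Keep the interaction short: surface one relevant piece of
        // information passed in from the phone app, e.g. time to destination.
        if let eta = context as? String {
            etaLabel.setText(eta)
        }
    }
}
```

Note how little logic lives here: in this model the controller mostly receives data and pushes strings and images to pre-laid-out interface elements, which is consistent with Apple's "minimal interaction" guidance above.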
It is somewhat early to draw many conclusions about WatchKit, since this is only the first version. But it is quite clear that, holding true to the Apple way of doing things, the first apps on the Apple Watch will be limited in functionality. The one big limitation I can see, especially compared to Android Wear, is context awareness. I listed some of the issues around wearable notifications in my blog post on Finding the Right Time and Place for Wearable Notifications. It looks like Apple is falling behind Google on this one.
Where Google's Android Wear is built around the concept of context awareness, running on top of the Google Now platform, Apple's WatchKit lacks context awareness for the most part. Context awareness refers to the time, location, or specific activity in which a user is currently engaged.
Google Now is essentially a context-aware platform. It alerts you when you need to leave for a meeting or catch a train, or suggests restaurants near your location, all available at a single glance in the form of scrolling cards. In effect, Google Now was built for wearables: software designed for glances, accessible in one place, eliminating the need for multiple steps to reach relevant information.
Apple, however, lacks a comparable platform of its own. Google Now does have an iOS app, but that is very different from Google using it as the building block for Android Wear. WatchKit essentially forces Apple developers, who are accustomed to building deep, interactive, engaging experiences on the phone, to design minimal, glanceable interactions on the watch. In a way, this approach is antithetical to the usual Apple user experience.
Apple is trying to recreate some of the phone UI/UX on the watch, but with restrictions added. To illustrate, WatchKit has an interface for scrolling through apps and launching them, just as on a smartphone, but the interaction with the apps is limited. Android Wear, on the other hand, launches apps automatically based on context, suggests interactions, and then closes down. Android Wear has a number of examples of these contextual interactions, which could occur at the airport, zoo, restaurant, gas station, conference, or simply while walking down the street. Apple's WatchKit lacks any such out-of-the-box contextual functionality.
The one way a WatchKit app could achieve context awareness is by implementing those features in the iPhone app itself, which can then pass the resulting information along to the watch app. I am not sure this is the most efficient way to do context awareness, especially when the device it matters to most is the smartwatch, not so much the phone.
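The workaround described above can be sketched with WatchKit 1's mechanism for talking to the parent iPhone app, `WKInterfaceController.openParentApplication(_:reply:)`, which wakes the phone app and returns a reply dictionary. The `"nearbyPlace"` request key and `"place"` reply key are hypothetical names for illustration; the phone app would answer in its app delegate's `handleWatchKitExtensionRequest(_:reply:)`:

```swift
import WatchKit

// Sketch, assuming the iPhone app computes context (e.g. a nearby place of
// interest) and the watch extension merely asks for it. Runs in the
// WatchKit extension; the request/reply keys are made up for this example.
WKInterfaceController.openParentApplication(["request": "nearbyPlace"]) {
    reply, error in
    if let place = reply?["place"] as? String {
        // Update the watch UI with the context the phone worked out.
        println("Nearby: \(place)")
    }
}
```

The round trip through the phone is exactly why this feels inefficient: every contextual decision requires waking the iPhone app, where Android Wear's model pushes context to the watch proactively.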