During 2014, the user experience (UX) group at ICS worked on our usual fare of mobile and desktop apps, but we also saw a large expansion of embedded device projects, which fell into three categories: kiosk information systems, in-vehicle infotainment (IVI) systems and robotics control systems. Each area presents unique and complex challenges for a UX designer. However, we noticed two requests common to all three areas: a preference for touchscreens and a desire to be connected to the Internet. The UX challenges therefore tend to fall into two categories: the tangible, immediate issues related to a particular touch context, and the more general issues of rich connectivity across multiple devices.
Designing for touch in the varied environments of embedded systems is a more tractable problem than the prospect of varied connectivity, in that it at least has a visible, physical aspect. Touch interaction is something we as designers can experience ourselves simply by using the systems, and the range of possible use scenarios can be reasonably defined by testing with prototypes. In kiosk design, the issues specific to touch include the size of the screen (mostly large touchscreens), the posture of the user (mostly standing) and the visibility of the interaction (mostly public).
In-vehicle devices have a different set of issues. Interaction must be easy and convenient so drivers can concentrate on the road and the traffic around them. IVI systems need to be understandable and readable at a glance, while still delivering rather complex options. The lack of tactile feedback offered by the knobs and buttons of legacy dashboards is an unfortunate shortcoming of touchscreens in this context.
Robotic control systems, by contrast, require translating complex 3-D movements into simplified, natural representations of those movements, a longstanding problem even with graphical user interfaces (GUIs). Touchscreens make 2-D representations of 3-D movement more fluid because direct touch interaction is more immediate than a mouse or other pointing device.
In contrast to the tangible and visible issue of touchscreens, constant connectivity on multiple devices is invisible, complex and rapidly evolving. It’s more difficult to encompass the range of possible use cases because we can’t predict all the ways data will be used by devices in the near future.
Kiosks, for example, remain mostly impersonal devices, with no current need for users to identify themselves. Instead, the user simply gathers information, such as what restaurants are available nearby (a hospitality kiosk) or King Tut's timeline (a museum kiosk).
This impersonal use is poised to change rapidly as companies and users alike seek more efficiency and convenience from devices; check-in kiosks at service providers are just one example. Meanwhile, devices are rapidly becoming more capable, not just receiving deliberate or inadvertent user input and making sense of it, but also gathering user activity data via sensors. For example, in-vehicle systems have the potential to increase driving safety by detecting driving habits and changes in those habits, the state of the car, and road, weather and traffic conditions.
As UX designers, we still find it difficult to predict fully what our responsibilities will be as devices become more "aware." Although ubiquitous computing, or the Internet of Things (IoT), has long seemed like something we would experience only in the future, for us 2014 felt like the year it finally arrived. I can predict with confidence that we will spend 2015 gaining a deeper understanding of the UX design issues related to IoT.