Limited interface options on wearable computers are often a product of considered design — not just features the team didn’t get around to building.
Tim Roberts, VP of interactive and design at Fitbit, told attendees at Gigaom’s Roadmap conference in San Francisco on Tuesday that many of Fitbit’s interface innovations, like double-tapping a device that has no screen, grew out of the devices’ limited capabilities.
“For our devices, from a UI perspective, we have a small bag of tricks. Fitbit Flex has five LED lights, an accelerometer and a haptic motor,” Roberts said. “So we’ve been using the accelerometer as an input method.”
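Using an accelerometer as an input method typically means watching for sharp spikes in acceleration. A minimal sketch of that idea is below — the thresholds, timing window, and function name are invented for illustration, not Fitbit’s actual algorithm:

```python
def detect_double_tap(samples, threshold=2.5, min_gap=0.1, max_gap=0.5):
    """Return True if two acceleration spikes above `threshold` (in g)
    occur between `min_gap` and `max_gap` seconds apart.

    `samples` is a list of (timestamp_seconds, magnitude_g) pairs.
    `min_gap` keeps one sustained jolt from counting as two taps.
    """
    last_spike = None
    for t, mag in samples:
        if mag < threshold:
            continue  # ordinary motion, not a tap
        if last_spike is not None and min_gap < t - last_spike <= max_gap:
            return True  # second spike landed inside the window
        last_spike = t
    return False

# Two sharp spikes 0.3 seconds apart register as a double tap.
readings = [(0.0, 1.0), (0.1, 3.1), (0.2, 1.0), (0.4, 3.4), (0.5, 1.0)]
print(detect_double_tap(readings))  # True
```

Real firmware would also filter out walking, typing, and other everyday jolts, which is part of why shipping even a simple gesture takes careful tuning.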
One way that interface designers can expand that bag of tricks is to incorporate gestures and hand motions, said Stephen Lake, the CEO of Thalmic Labs, which produces an armband for gestures called the Myo.
“One developer is using our platform to record reps at the gym, data about your muscles,” Lake said.
Even with limited input methods, designers have to be careful not to overcomplicate the interaction. “We’ve found out it’s hard to distinguish between various vibration patterns. Fitbit vibrates and users use context to figure out what that means,” Roberts said.
Wearable technology doesn’t need a screen to be overcomplicated, either. Users can be overwhelmed by the sheer volume of data that a body-worn sensor produces, said OMsignal CEO Stephane Marceau.
“Some consumers want lots of data, but the interpretation of what it all means may require too much cognitive load,” Marceau said. “So we really pared it down to four or five core variables, and held back on the other ones.”