- January 6, 2017
Decisions facing device designers
One of the decisions facing device designers is whether or not to have a display screen on the device. The decision is both aesthetic and functional.
The moment you put a screen on an object, in most instances the object feels like an electronic device. It becomes categorized in the user’s mind as a piece of equipment—a gadget—and expectations are set accordingly. The less you want your device to seem electro-digital, the less likely it should have a screen. Some devices (like wearables) also make having a screen impractical, whether because of size, shape, or context.
Of course, removing the screen doesn’t obviate the need for feedback, or for the controls (and labels for controls) that screens typically afford. You still have to let users know when something has happened, and without the “luxury” that a screen provides. There are several ways to do this, namely:
- create other visual cues
- use audio feedback
- use haptic feedback
Creating other visual cues means creating visual effects, typically via LEDs: an on/off indicator light, for example, or a status light that indicates what state the device is in. With LEDs, you have a number of variables to consider when designing: color, brightness/intensity, and pattern. Pattern can be multiple LEDs working together—for instance a string of lights that turn on in sequence—or a single LED that blinks in a distinct way, like a flashing light to indicate a warning. These variables can be used singly or in combination, such as an LED that dims, turns red, and pulses to indicate low battery.
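The dim-red-pulse example above can be sketched in code. This is a minimal illustration, not a real LED driver API: it only computes the RGB frame values you would send to a hypothetical driver, using a raised sine so the LED "breathes" rather than blinks.

```python
import math

def low_battery_pulse(steps=20, period=2.0, max_brightness=0.6):
    """One cycle of a dimmed red pulse for a low-battery warning.

    Returns a list of (r, g, b) tuples sampled over one period, each
    channel in the range 0.0-1.0. Color (red only), brightness
    (capped at max_brightness), and pattern (the pulse) are combined,
    as described in the text. The parameter values are illustrative.
    """
    frames = []
    for i in range(steps):
        t = i / steps * period
        # Raised sine: rises from 0 to max_brightness and back over one period.
        level = max_brightness * (1 - math.cos(2 * math.pi * t / period)) / 2
        frames.append((round(level, 3), 0.0, 0.0))  # red channel only
    return frames
```

In practice you would feed each frame to your LED driver at `period / steps` intervals.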
Of course, unlike a screen, where words can be displayed, an LED is open to interpretation. Does that blinking mean the device is connected to the wireless network, or not connected? Labels can help, certainly, but labels also add to the “device-ness” of the object, not to mention visual clutter.
Audio feedback can replace a screen, although it does require a speaker. (Of course, if your device already has a speaker, it can be repurposed for feedback.) Audio feedback comes in two types: sounds and voice. Sounds can be anything from the traditional beeps, boops, and whines to music, sound effects, speech-like babble, and general noises of all sorts. A little of this goes a long way: the two-second sound clip of a woodchipper to indicate data being deleted is awesome the first time you hear it, a pain the 500th. Speech presents its own complications. Unlike other sounds, we are attuned, both biologically and by training from birth, to read a lot of emotional content into voices. The choice of a male or female voice can make a huge difference to how users feel about the device. Some people don’t like their devices talking to them at all, in fact.
One final note about sound: make use of the clock if there is one (and almost every digital device has one built in). Sound feedback that is appropriate in the middle of the day may not be appropriate in the middle of the night.
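A clock-aware fallback can be as simple as a quiet-hours check. This is a sketch: the channel names and the 10 p.m.–7 a.m. window are illustrative assumptions, not a real API.

```python
from datetime import time

def choose_alert(now, quiet_start=time(22, 0), quiet_end=time(7, 0)):
    """Pick a feedback channel based on the time of day.

    During quiet hours (a window that wraps past midnight), fall back
    to a silent channel such as haptics instead of sound. The channel
    names "haptic" and "sound" are illustrative placeholders.
    """
    in_quiet = now >= quiet_start or now < quiet_end
    return "haptic" if in_quiet else "sound"
```

For example, `choose_alert(time(3, 0))` falls in the quiet window and returns the silent channel, while a midday alert plays sound as usual.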
If your device is worn or handheld, another option is haptics: vibrotactile feedback. Haptic feedback is becoming increasingly sophisticated, and a wider variety of sensations is becoming available, from brief, subtle vibrations to more complex patterns and the ability to feel simulated movement “inside” the device. Unlike sound and even visuals, haptics are discreet and personal.
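One common way to describe a haptic pattern is as alternating vibrate/pause durations, a shape several mobile vibration APIs accept. The sketch below encodes a hypothetical "short, short, long" notification buzz; the specific durations are made-up defaults, not values from any real platform.

```python
def notification_buzz(short_ms=80, long_ms=300, gap_ms=100):
    """Encode a 'short, short, long' buzz as (vibrate_ms, pause_ms) pairs.

    The durations are illustrative; a real design would tune them per
    device and per motor. This function only builds the data structure.
    """
    return [(short_ms, gap_ms), (short_ms, gap_ms), (long_ms, 0)]

def total_duration_ms(pattern):
    """Total play time of a pattern, useful for rate-limiting alerts."""
    return sum(on + off for on, off in pattern)
```

Keeping patterns as plain data like this makes them easy to preview, test, and swap without touching the motor-driving code.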
Needless to say, the kind and combination of feedback mechanisms you employ do as much as the form to give the device its character.
N.B. Another increasingly employed method of getting around a lack of screen is to simply move the screen to another platform. Your beautiful toaster doesn’t have room for a screen? No problem: make a mobile app for it. Think: iPod shuffle and iTunes.
This method of moving the display onto another platform has its merits and drawbacks. For devices with limited controls (a medical monitoring device, for instance), the screen might be perfectly acceptable on a tablet like an iPad. Where this method breaks down is that for many devices, best practice is to keep the feedback and controls near the thing you’re controlling. Having to pull out another device to control the object in front of you can be tedious in the extreme. It’s best to do a functional cartography: figure out what high-use information and controls are needed, then figure out how to get those onto the device itself. Features that can usually be moved off the device easily (or more easily, anyway) are those related to management of the device itself, i.e. settings and any content management.
Not having a screen is difficult. It makes feedback harder and more ambiguous. You can’t make it a touchscreen and use it for both display and controls. In other words, it’s a constraint. But like all constraints, it can be a source of creativity if we let it.