After David offers Leeloo some clothes, he also offers her a device for applying eye makeup. Leeloo has only the most rudimentary grasp of English at this point, so to demonstrate its use he holds it up to his eyes.
This is a clear enough signal for Leeloo, who puts the device up to her eyes like a large pair of sunglasses. She can feel the momentary button near her left fingertip and presses it. In response, a white ring around a Chanel™ logo illuminates for a second. Leeloo feels an unfamiliar sensation and pulls her face away, and we see that the device has applied complete eye makeup for her.
The industrial design of the device is brilliant. It’s sized slightly larger than the eye area, and its shape tells the user where to place it. The activation button sits exactly where the user needs it, with enough of a button-like affordance that she can find and press it without looking. The device is just heavy enough to encourage supporting it with the palms, which provides a firm base against movement on activation (and the attendant risk of eyeshadow landing right on the eye). The shiny black plastic reads as a cosmetic object, and it looks professional enough that you can presume it’s safe to use near delicate eye parts. The white ring is a simple cue to those nearby that the device is in progress and the user shouldn’t be interrupted.
A minor improvement would be to upgrade that simple on/off light to a progress ring that sweeps around. This would give a sense of how much time the application takes and how much is left, even if it’s only a second.
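To make the progress-ring idea concrete, here’s a minimal sketch of the kind of logic that would drive it, assuming (purely for illustration; nothing about the fictional device is specified) a ring of discrete LED segments that fill clockwise over the fixed, roughly one-second application time:

```python
# Hypothetical sketch of a segmented progress ring for a fixed-duration
# operation. The segment count and timing are invented for illustration.

def lit_segments(elapsed: float, duration: float, segments: int = 12) -> int:
    """Return how many ring segments should be lit, so the ring
    sweeps from empty to full as the operation progresses."""
    if duration <= 0:
        return segments
    fraction = min(max(elapsed / duration, 0.0), 1.0)  # clamp to [0, 1]
    return round(fraction * segments)
```

Even at one second, twelve discrete steps are enough for the user to perceive motion, which reads as “working” rather than just “on.”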
The main question about the device is, of course, how Leeloo specifies the details of the makeup. A quick Google image search shows that the number of parameters is…um…vast.
Of course, if the device had some kind of low-level artificial intelligence, that agentive algorithm could handle a lot of the complexity for her, deciding on the best match for her schedule, fashion trends, current outfit, and her preferred position on the fashion-aggression spectrum. (Would there be a device that went up to 11 for drag queens?) But when the agentive algorithm gets it wrong and Leeloo wants to override those settings, she’s back to needing to tell the device what she’d like instead. How does she do that?
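As a rough illustration of that agentive pattern, here’s a toy sketch, with all the look names, intensity scores, and context fields invented for the example: the algorithm proposes the look whose intensity best matches the wearer’s current context, but an explicit override always wins.

```python
# Toy sketch of agentive suggestion with user override. The looks,
# intensity scale, and context keys are all hypothetical.

def suggest_look(looks: dict, context: dict, override=None) -> str:
    """Return the user's override if given; otherwise pick the look
    whose intensity score sits closest to the context's preference."""
    if override is not None:
        return override  # the user's explicit choice always wins
    target = context.get("preferred_intensity", 5)
    return min(looks, key=lambda name: abs(looks[name] - target))
```

The interesting design problem isn’t the suggestion step, which any scoring function can fake; it’s the override step, which needs an input channel the device doesn’t visibly have.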
Which raises the question of those three buttons across the bridge of the device.
Those three buttons
Of course three momentary buttons aren’t enough to control all the variables in eye makeup. Even if these are dials that control three variables, three variables aren’t enough. (Even if they were dials, why would they look identical to the momentary buttons? Things that behave differently should look different.)
Even if these buttons are not controls for variables but presets for sets of variables (such as “Work,” “Formal wear,” or “Defeating ultimate evil”), they’re not signaling their state well. Looking at that screen grab, can you tell which one is currently selected? I can’t. The selection should be apparent at a glance, so no one accidentally applies “clubbing” makeup when they mean “funeral.” Note that a lit button alone is not enough; some descriptive text is needed. Such text would ideally appear on both the “inside” and the “outside,” so that no matter how the device was lying on a dresser, its state could be read.
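The preset idea amounts to a tiny state machine: one selected preset at a time, with the selection always exposed as readable text. Here’s a minimal sketch, with the preset names taken from the examples above and everything else assumed:

```python
from dataclasses import dataclass

@dataclass
class PresetSelector:
    """Hypothetical model of the three preset buttons. The device would
    render status() as descriptive text on both faces, alongside the
    lit button, so state is readable however the device is lying."""
    presets: tuple = ("Work", "Formal wear", "Defeating ultimate evil")
    selected: int = 0

    def press(self, index: int) -> str:
        """Pressing a button selects its preset and returns the label
        the displays should now show."""
        self.selected = index
        return self.status()

    def status(self) -> str:
        return self.presets[self.selected]
```

The point of modeling it this way is that the state lives in one place and the label is derived from it, so the two displays and the lit button can never disagree about which preset is active.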
Anyway, since those buttons aren’t sufficient for setting up the eye makeup, let’s hope the device is networked to some other device with a richer interface, like a voice interface or Cornelius’ WIMP computer, where she can have a rich interaction for configuring those presets.
With all that in mind, here’s another comp to illustrate these ideas. Admittedly, Chanel’s brand police wouldn’t be comfortable with an LED font, but it would clearly communicate that the text represents a variable and not a product name.