When Luke is driving Kee and Theo to a boat on the coast, the car’s heads-up display shows him the car’s speed as a translucent red number and speed gauge. There are also two broken, blurry gauges showing unknown information.
Suddenly the road is blocked by a flaming car rolled into its path by a then-unknown gang. In response, an IMPACT warning triangle zooms in several times to warn the driver of the danger, accompanied by a persistent dinging sound.
Later, in the brain hacking scene, we’ll hear two more sentences spoken.
Completionists: There’s also extensive use of voice output during a cyberspace search sequence, but there Johnny is wearing a headset so he is the only one who can hear it. That is sufficiently different to be left out of this discussion.
Voice is public
Sonic output in general and voice in particular have the advantage of being omnidirectional, so the user does not need to pay visual attention to the device, and, depending on volume and ambient noise, can be understood at much greater distances than a screen can be read. These same qualities are not so desirable if the user would prefer to keep the message or information private. We can’t tell whether these systems can detect the presence or absence of people, but the hotel message centre only spoke when Johnny was alone. Later in the film we will see two medical systems that don’t talk at all. This is most likely deliberate because few patients would appreciate their symptoms being broadcast to all and sundry.
Unless you’re the only one in the room
The bathroom tap is interesting because the temperature message was in English. This is a Beijing hotel, and the scientists who booked the suite are Vietnamese, so why English? It’s not because we the audience need to know this particular detail. But we do have one clue: Johnny cursed rather loudly once he was inside the bathroom. I suggest that there is a hotel computer monitoring the languages being spoken by guests within the room and adjusting voice outputs to match. Current-day word processors, web browsers, and search engines can recognise the language of typed text input and load the matching spellcheck dictionaries, so it’s a fair bet that by 2021 our computers will be able to do the same for speech.
When we first see the HUD, Tony is donning the Iron Man mask. Tony asks, “JARVIS, you there?” To which JARVIS replies, “At your service, sir.” Tony tells him to engage the heads-up display, and we see the HUD initialize. It is a dizzying mixture of blue wireframe motion graphics. Some imply system functions, such as the reticle that pinpoints Tony’s eye. Most are small dashboard-like gauges that remain small and in Tony’s peripheral vision while their information is not needed, and become larger and more central when it is. These features are catalogued in another post, but we learn about them through two points of view: a first-person view, which shows us what Tony sees as if we were there, donning the mask in his stead, and a second-person view, which shows us Tony’s face overlaid against a dark background with floating graphics.
This post is about that first-person view. Specifically, it’s about the visual design and the four awarenesses it displays.
In the Augmented Reality chapter of Make It So, I identified four types of awareness seen in the survey for Augmented Reality displays:
The Iron Man HUD illustrates all four, and they provide a useful framework for describing and critiquing the first-person view.