Luke’s predictive HUD

When Luke is driving Kee and Theo to a boat on the coast, the car’s heads-up display shows him the car’s speed as a translucent red number and speed gauge. There are also two broken, blurry gauges showing unknown information.

Suddenly the road becomes blocked by a flaming car rolled onto the road by a then-unknown gang. In response, an IMPACT warning triangle zooms in several times to warn the driver of the danger, accompanied by a persistent dinging sound.

childrenofmen-impact-08

It commands attention effectively

Props to this attention-commanding signal. Neuroscience tells us that symmetrical expansion like this triggers something called a startle response. (I first learned this in the awesome and highly recommended book Mind Hacks.) Any time we see symmetrical expansion in our field of vision, within milliseconds our sympathetic nervous system takes over, fixes our attention to that spot, and prompts us to avoid the thing that our brains believe is coming right at us. It all happens well before conscious processing, and that’s a good thing. The response evolved to keep us safe from falling rocks, flying fists, and pouncing tigers, and scenarios like those leave no time for the relatively slow conscious processes.
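
Worth noting how computable this cue is. Here’s a minimal sketch, with invented function names and thresholds, of estimating time-to-contact purely from the angular expansion the startle response keys on (in the vision literature this ratio is known as tau):

```python
# A minimal sketch, with invented names and thresholds, of the looming
# cue the startle response keys on: an object on a collision course
# expands symmetrically, and time-to-contact can be estimated from its
# angular size alone (no rangefinder needed).

def time_to_contact(theta: float, theta_prev: float, dt: float) -> float:
    """Seconds until contact, from angular size (radians) on two
    successive frames: tau = theta / (d_theta / dt)."""
    expansion_rate = (theta - theta_prev) / dt
    if expansion_rate <= 0:
        return float("inf")  # static or receding: no collision course
    return theta / expansion_rate

# Fire the IMPACT warning when contact is, say, under three seconds away.
tau = time_to_contact(theta=0.20, theta_prev=0.18, dt=0.033)
if tau < 3.0:
    print(f"IMPACT in ~{tau:.2f}s")  # expand the triangle at the hazard
```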

Well visualized

The startle response varies in strength depending on several factors:

  • The anxiety of the person (an anxious person will react to a slighter signal)
  • The driver’s habituation to the signal
  • The strength of the signal, in this case…
    • Contrast of the shape against its background
    • The speed of the expansion
  • The presence of a prepulse stimulus

We want the signal to be strong enough to grab the attention of a possibly-distracted driver, but not so strong that it causes them to overreact and risk losing control of the car. While anything this critical to safety needs to be thoroughly tested, the size of the IMPACT triangle seems to sit in the golden mean between these two.

And while the effect is strongest in the lab with a dark shape expanding over a light background, I suspect that, given habituation to the moving background of the roadscape and a comparatively static HUD, the sympathetic nervous system would have no problem processing this light-on-dark shape.

Well placed

We only see it in action once, so we don’t know for certain that the placement is dynamic. But it appears to be positioned on the HUD such that it draws Luke’s attention directly to the point in his field of vision where the flaming car is. (It looks offset to us because the camera is positioned in the middle of the back seat rather than the driver’s seat.) Dynamic positioning is great because it saves the driver critical bits of time. If the signal were fixed, the driver’s attention would be pulled between the IMPACT triangle and the actual hazard. Much better to have the display say, “LOOK HERE!”
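
For the curious, the geometry behind that “LOOK HERE!” placement is plain pinhole projection. A hedged sketch, with an invented coordinate frame and distances:

```python
# A hedged sketch of the projection behind dynamic placement. Coordinates
# are in the driver's-eye frame (x right, y up, z forward, in metres);
# the HUD is modelled as a plane a fixed distance ahead of the eye.
# All values are invented for illustration.

def hud_position(x: float, y: float, z: float,
                 hud_distance: float = 0.8) -> tuple[float, float]:
    """Where on the HUD plane to draw a marker so it overlays a hazard
    at (x, y, z). Similar triangles, i.e. pinhole projection."""
    scale = hud_distance / z
    return (x * scale, y * scale)

# A flaming car 25 m ahead and 2 m left of the driver's line of sight:
print(hud_position(-2.0, 0.0, 25.0))  # ~(-0.064, 0.0): draw left of centre
```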

Readers of the book will recall this nuance from the lesson from Chapter 8, Augment the Periphery of Vision: “Objects should be placed at the edge of the user’s view when they are not needed, and adjacent to the locus of attention when they are.”
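
As a toy reading of that lesson (names and coordinates invented), a widget could simply interpolate between a parked spot at the periphery and the driver’s measured locus of attention as its relevance rises and falls:

```python
# A toy reading of the periphery lesson, with invented names: slide a
# widget between its parked spot at the edge of view and the driver's
# locus of attention (e.g. from eye tracking) as its relevance changes.

def widget_position(periphery_xy: tuple[float, float],
                    locus_xy: tuple[float, float],
                    need: float) -> tuple[float, float]:
    """need in 0..1: 0 parks the widget at the edge of view,
    1 snaps it adjacent to the locus of attention."""
    px, py = periphery_xy
    lx, ly = locus_xy
    return (px + (lx - px) * need, py + (ly - py) * need)

print(widget_position((0.95, 0.90), (0.40, 0.50), need=0.0))  # parked at edge
print(widget_position((0.95, 0.90), (0.40, 0.50), need=1.0))  # at the hazard
```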

Improvements

There are a few improvements that could be made.

  • It could synchronize the audio to the visual. The dinging is dissociated from the motion of the triangle, and even sounds a bit like a seat-belt warning rather than something trying to warn you of a possible life-threatening collision. Having the sound and visual in sync would strengthen the signal. It could even increase volume with the probability and severity of impact.
  • It could increase the strength of the audio signal by suppressing competing audio: pausing any audio entertainment and even canceling ambient sounds.
  • It could predict farther into the future. The triangle only appears once the flaming car actually stops in the road a few meters ahead. But there is clearly a burning car rolling down toward the road for several seconds before that. We see it. The passengers see it. Better sensors and prediction models would have drawn Luke’s attention to the problem earlier and helped him react sooner.
  • It could also know when the driver is actually focused on the problem and then fade the signal to the periphery so that it does not cover up any vital visual information. It could then fade completely when the risk has passed.
  • An even smarter system might adjust the strength of the signal based on real-time variables, like the anxiety of the driver, his or her current level of distraction, ambient noise and light, and of course the degree of risk (a tumbleweed vs. a small child on the road). A toy sketch of this idea appears after this list.
  • It could of course go full agentive and apply the brakes or swerve if the driver fails to take appropriate action in time.
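
To make that adaptive-strength item concrete, here’s the promised toy sketch. The inputs, weights, and the formula itself are invented for illustration, not a validated model; a real system would need the thorough testing mentioned above:

```python
# A speculative sketch of adaptive warning strength. Inputs are
# normalized to 0..1; the weights and formula are invented for
# illustration, not a validated model of startle dynamics.

def warning_strength(risk: float, distraction: float,
                     anxiety: float, ambient_noise: float) -> float:
    """Return a 0..1 intensity for the combined visual/audio warning."""
    strength = risk                          # severity of the threat
    strength *= 1.0 + 0.5 * distraction      # distracted driver: push harder
    strength *= 1.0 - 0.3 * anxiety          # anxious driver: don't overdo it
    strength *= 1.0 + 0.2 * ambient_noise    # noisy cabin: boost the signal
    return min(1.0, max(0.0, strength))

# Tumbleweed vs. small child in the road, same driver state:
print(warning_strength(0.1, 0.4, 0.2, 0.3))  # ~0.12: a faint cue
print(warning_strength(0.9, 0.4, 0.2, 0.3))  # 1.0: full alarm
```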

Despite these possible improvements, I believe Luke’s HUD is a well-designed interface that gets underplayed in the drama and disorientation of the scene.

childrenofmen-impact-09

Talking Technology

We’ve seen four interfaces with voice output through speakers so far.

  1. The message centre in the New Darwin hotel room, which repeated the onscreen text
  2. The MemDoubler, which provided most information to Johnny through voice alone
  3. The bathroom tap in the Beijing hotel, which told Johnny the temperature of the water
  4. The Newark airport security system

jm-9-5-talking-montage

Later, in the brain hacking scene, we’ll hear two more sentences spoken.

Completionists: There’s also extensive use of voice output during a cyberspace search sequence, but there Johnny is wearing a headset, so he is the only one who can hear it. That is sufficiently different to be left out of this discussion.

Voice is public

Sonic output in general, and voice in particular, has the advantage of being omnidirectional: the user does not need to pay visual attention to the device, and, depending on volume and ambient noise, the output can be understood at much greater distances than a screen can be read. These same qualities are not so desirable if the user would prefer to keep the message or information private. We can’t tell whether these systems can detect the presence or absence of people, but the hotel message centre only spoke when Johnny was alone. Later in the film we will see two medical systems that don’t talk at all. This is most likely deliberate, because few patients would appreciate their symptoms being broadcast to all and sundry.

Unless you’re the only one in the room

jm-8-bathroom-tap

The bathroom tap is interesting because the temperature message was in English. This is a Beijing hotel, and the scientists who booked the suite are Vietnamese, so why? It’s not because we the audience need to know this particular detail. But we do have one clue: Johnny cursed rather loudly, in English, once he was inside the bathroom. I suggest that there is a hotel computer monitoring the languages being used by guests within the room and adjusting voice outputs to match. Current-day word processors, web browsers, and search engines can recognise the language of typed text input and load the matching spellcheck dictionaries, so it’s a fair bet that by 2021 our computers will be able to do the same for speech.
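
As a present-day demonstration of the text half of that claim, here’s a two-line sketch using the off-the-shelf langdetect package (a port of Google’s language-detection library). The sample sentences are invented, and a speech version would simply run the same classification on a recognizer’s transcript:

```python
# Today's text-only analogue of the hotel computer's trick, using the
# langdetect package. Sample sentences are invented, and results on
# very short strings can vary.
from langdetect import detect

print(detect("Damn it, where is the hot water in this place?"))  # 'en'
print(detect("Nước nóng ở đâu trong khách sạn này?"))            # 'vi'
```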

Iron Man HUD: 1st person view

When we first see the HUD, Tony is donning the Iron Man mask. Tony asks, “JARVIS, you there?” To which JARVIS replies, “At your service, sir.” Tony tells him to “Engage the heads-up display,” and we see the HUD initialize. It is a dizzying mixture of blue wireframe motion graphics. Some imply system functions, such as the reticle that pinpoints Tony’s eye. Most are small dashboard-like gauges that remain small and in Tony’s peripheral vision while their information is not needed, and become larger and more central when it is. These features are catalogued in another post, but we learn about them through two points of view: a first-person view, which shows us what Tony sees as if we were there, donning the mask in his stead, and a second-person view, which shows us Tony’s face overlaid against a dark background with floating graphics.

This post is about that first-person view. Specifically, it’s about the visual design and the four awarenesses it displays.

Avengers-missile-fetching04

In the Augmented Reality chapter of Make It So, I identified four types of awareness seen in the survey for Augmented Reality displays:

  1. Sensor display
  2. Location awareness
  3. Context awareness
  4. Goal awareness

The Iron Man HUD illustrates all four, and together they make a useful framework for describing and critiquing the 1st-person view.