Luke’s predictive HUD

When Luke is driving Kee and Theo to a boat on the coast, the car’s heads-up display shows him the car’s speed as a translucent red number and speed gauge. There are also two broken, blurry gauges showing unknown information.

Suddenly the road is blocked by a flaming car rolled onto it by a then-unknown gang. In response, an IMPACT warning triangle zooms in several times to warn the driver of the danger, accompanied by a persistent dinging sound.


It commands attention effectively

Props to this attention-commanding signal. Neuroscience tells us that symmetrical expansion like this triggers something called a startle response.  (I first learned this in the awesome and highly recommended book Mind Hacks.) Any time we see symmetrical expansion in our field of vision, within milliseconds our sympathetic nervous system takes over, fixes our attention to that spot, and prompts us to avoid the thing that our brains believe is coming right at us. It all happens way before conscious processing, and that’s a good thing. It’s evolutionarily designed to keep us safe from falling rocks, flying fists, and pouncing tigers, and scenarios like that don’t have time for the relatively slow conscious processes.

Well visualized

The startle response varies in strength depending on several things.

  • The anxiety of the person (an anxious person will react to a slighter signal)
  • The driver’s habituation to the signal
  • The strength of the signal, in this case…
    • Contrast of the shape against its background
    • The speed of the expansion
  • The presence of a prepulse stimulus

We want the signal to be strong enough to grab the attention of a possibly-distracted driver, but not so strong that it causes them to overreact and risk losing control of the car. While anything this critical to safety needs to be thoroughly tested, the size of the IMPACT triangle seems to sit in the golden mean between these two.

And while the effect is strongest in the lab with a dark shape expanding over a light background, I suspect that, given habituation to the moving background of the roadscape and a comparatively static HUD, the sympathetic nervous system would have no problem processing this light-on-dark shape.

Well placed

We only see it in action once, so we don’t know if the placement is dynamic. But it appears to be positioned on the HUD such that it draws Luke’s attention directly to the point in his field of vision where the flaming car is. (It looks offset to us because the camera is positioned in the middle of the back seat rather than the driver’s seat.) This dynamic positioning is great since it saves the driver critical bits of time. If the signal were fixed, the driver’s attention would be pulled between the IMPACT triangle and the actual thing. Much better to have the display say, “LOOK HERE!”
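To make that dynamic placement concrete, here is a minimal sketch of how a HUD might project a tracked hazard onto the point in the driver’s field of vision where the hazard actually is. It assumes a simple pinhole projection from a tracked eye point; the function name and parameters are my own illustrative inventions, not anything shown in the film.

```python
def hazard_to_hud(hazard_world, eye_world, hud_distance=1.0):
    """Illustrative sketch only: project a hazard's 3D position (in the
    car's frame) onto HUD-plane coordinates, so the warning triangle
    lands over the real-world object instead of at a fixed spot.

    hazard_world, eye_world: (x, y, z) tuples, +z pointing out the windshield.
    Returns (u, v) offsets on a virtual plane hud_distance ahead of the
    eye, or None if the hazard is not in front of the driver.
    """
    dx = hazard_world[0] - eye_world[0]
    dy = hazard_world[1] - eye_world[1]
    dz = hazard_world[2] - eye_world[2]
    if dz <= 0:
        return None  # behind the driver: no meaningful projection
    # Pinhole projection: similar triangles scale the offsets by depth.
    u = dx / dz * hud_distance
    v = dy / dz * hud_distance
    return (u, v)

# A flaming car 30 m ahead and 2 m to the left maps slightly left of center.
print(hazard_to_hud((-2.0, 0.0, 30.0), (0.0, 0.0, 0.0)))
```

The point of the math is simply that the warning tracks the hazard as the car moves, so the driver’s eye never has to hunt between the symbol and the thing itself.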

Readers of the book will recall this nuance from the lesson from Chapter 8, Augment the Periphery of Vision: “Objects should be placed at the edge of the user’s view when they are not needed, and adjacent to the locus of attention when they are.”


There are a few improvements that could be made.

  • It could synchronize the audio to the visual. The dinging is dissociated from the motion of the triangle, and even sounds a bit like a seat belt warning rather than something trying to warn you of a possible life-threatening collision. Having the sound and visual in sync would strengthen the signal. It could even increase volume with the probability and severity of impact.
  • It could increase the strength of the audio signal by suppressing competing audio, by pausing any audio entertainment and even canceling ambient sounds.
  • It could predict farther into the future. The triangle only appears once the flaming car actually stops in the road a few meters ahead. But there is clearly a burning car rolling down to the road for seconds before that. We see it. The passengers see it. Better sensors and prediction models would have drawn Luke’s attention to the problem earlier and helped him react sooner.
  • It could also know when the driver is actually focused on the problem and then shift the signal to the periphery so that it does not cover up any vital visual information, fading completely when the risk has passed.
  • An even smarter system might be able to adjust the strength of the signal based on real-time variables, like the anxiety of the driver, his or her current level of distraction, ambient noise and light, and of course the degree of risk (a tumbleweed vs. a small child on the road).
  • It could of course go full agentive and apply the brakes or swerve if the driver fails to take appropriate action in time.
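The “smarter system” bullet above can be sketched as a simple blend of real-time variables into one alert intensity. The 0-to-1 scales and the weights are illustrative assumptions of mine; a real safety system would calibrate them through the kind of thorough testing mentioned earlier.

```python
def alert_strength(risk, distraction, ambient_noise, driver_anxiety):
    """Illustrative sketch only: blend situational variables (each
    normalized to 0..1) into a single 0..1 alert intensity.
    Weights are guesses, not calibrated values.
    """
    base = risk  # a tumbleweed scores low; a small child on the road scores high
    # A distracted driver or a noisy cabin calls for a stronger signal...
    boost = 0.3 * distraction + 0.2 * ambient_noise
    # ...while an already-anxious driver needs a gentler one, to avoid
    # startling them into overreacting and losing control of the car.
    damp = 0.3 * driver_anxiety
    return max(0.0, min(1.0, base + boost - damp))
```

The interesting design property is the damping term: the same startle mechanics that make the signal effective are exactly what make it dangerous to an anxious driver, so the two inputs pull in opposite directions.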

Despite these improvements, I believe Luke’s HUD is a well-designed signal that gets underplayed in the drama and disorientation of the scene.


Rebel videoscope

Talking to Luke


Hidden behind a bookshelf console is the family’s other comm device. When they first use it in the show, Malla and Itchy have a quick discussion, approach the console, and slide two panels aside. The device is small and rectangular, like an oscilloscope, sitting on a shelf at about eye level. It has a small, palm-sized color cathode ray tube on the left. On the right is an LED display strip and an array of red buttons over an array of yellow buttons. Along the bottom are two dials.


Without any other interaction, the screen goes from static to a direct connection to a hangar where Luke Skywalker is working with R2-D2 to repair some mechanical part. He simply looks up to the camera, sees Malla and Itchy, and starts talking. He does nothing to accept the call or end it. Neither do they.

The Mechanized Squire


Having completed the welding he did not need to do, Tony flies home to a ledge atop Stark tower and lands. As he begins his strut to the interior, a complex, ring-shaped mechanism raises around him and follows along as he walks. From the ring, robotic arms extend to unharness each component of the suit from Tony in turn. After each arm precisely unscrews a component, it whisks it away for storage under the platform. It performs this task so smoothly and efficiently that Tony is able to maintain his walking stride throughout the 24-second walk up the ramp and maintain a conversation with JARVIS. His last steps on the ramp land on two plates that unharness his boots and lower them into the floor as Tony steps into his living room.

Yes, yes, a thousand times yes.

This is exactly how a mechanized squire should work. It is fast, efficient, supports Tony in his task of getting unharnessed quickly and easily, and—perhaps most importantly—is how he wants his transitions from superhero to playboy to feel: cool, effortless, and seamless. If there were a party happening inside, I would not be surprised to see a last robotic arm handing him a whiskey.

This is the Jetsons vision of coming home to one’s robotic castle writ beautifully.

There is a strategic question about removing the suit while still outside of the protection of the building itself. If a flying villain popped up over the edge of the building at about 75% of the unharnessing, Tony would be at a significant tactical disadvantage. But JARVIS is probably watching out for any threats to avoid this possibility.

Another improvement would be if it did not need a specific landing spot. If, say…

  • The suit could just open to let him step out, like a human-shaped elevator. (This happens with a later model of the suit seen in The Avengers 2.)
  • The suit was composed of fully autonomous components, each of which could simply fly off of him to storage. (This kind of happens with Veronica later in The Avengers 2.)
  • The suit was composed of self-assembling nanoparticles that flowed off of him, or, perhaps, reassembled into a tuxedo. (If I understand correctly, this is kind of how the suit currently works in the comic books.)

These would allow him to enact this same transition anywhere.

Iron Welding


Cut to the bottom of the Hudson River where some electrical “transmission lines” rest. Tony, in his Iron Man supersuit, has his palm-mounted repulsor rays configured to create a focused beam capable of cutting through an iron pipe to reveal the power cables within. Once the pipe casing is removed, he slides a circular device onto the cabling. The cuff automatically closes, screws itself tight, and expands to replace the section of casing. Dim white lights burn brighter as hospital-green rings glow around the cable’s circumference. His task done, he underwater-flies away, up the southern tip of Manhattan to Stark Tower.

It’s a quick scene that sets up the fact that they’re using Tony’s arc reactor technology to liberate Stark Tower from the electrical grid (incidentally implying that the Avengers will never locate a satellite headquarters anywhere in Florida. Sorry, Jeb.) So, since it’s a quick scene, we can just skip the details and interaction design issues, right?

Of course not. You know better from this blog.


Odyssey Navigation


When the Odyssey needs to reverse thrust to try and counter a descent towards the TET, Jack calls for a full OMS (Orbital Maneuvering System) burn. We do not see what information he looks at to determine how fast he is approaching the TET, or how he knows that the OMS system will provide enough thrust.

We do see four motor systems on board the Odyssey:

  1. The main engines (which appear to be ion engines)
  2. The OMS system (four large chemical thrusters up front)
  3. A secondary set of thrusters (similar to, but larger than, the OMS system) on the sleep module
  4. Tiny chemical thrusters like those used on current spacecraft to control yaw/pitch/roll (the shuttle’s RCS)


After Jack calls out for an OMS burn, Vika punches in a series of numbers on her keypad, and Jack flips two switches under the keypad. After flipping the switches ‘up’, Jack calls out “Gimbals Set” and Vika says “System Active”.

Finally, Jack pulls back on a silver thrust lever to activate the OMS.


Why A Reverse Lever?

Typically, throttles are pushed forward to increase thrust. Why is this reversed? On current NASA spacecraft, the flight stick is set up like an airplane’s control, i.e., back pitches up, forward pitches down, left/right rolls the same. Note that the pilot moves the stick in the direction he wants the craft to move. In this case, the OMS control works the same way: Jack wants the ship to thrust backwards, so he moves the control backwards. This is a semi-direct mapping of control to actuator. (It might be improved if it moved not in an arc but in a straight forward-and-backward motion like the THC control, below. But you also want controls to feel different for instant differentiation, so it’s not a clear-cut case.)


Source: NASA

What is interesting is that, in NASA craft, the control that would work the main thrusters forward is the same control used for lateral, longitudinal, and vertical controls:


Source: NASA

Why are those controls different in the Odyssey? My guess is that, because the OMS thrusters are so much more powerful than the smaller RCS thrusters, the RCS thrusters are on a separate controller much like the Space Shuttle’s (shown above).

And, look! We see evidence of just such a control, here:


Separating the massive OMS thrusters from the more delicate RCS controls makes sense here because the control would have such different effects—and have different fuel costs—in one direction than in any other. Jack knows that by grabbing the RCS knob he is making small tweaks to the Odyssey’s flight path, while the OMS handle will make large changes in only one direction.

The “Targets” Screen


When Jack is about to make the final burn to slow the Odyssey down and hold position 50km away from the TET, he briefly looks at this screen and says that the “targets look good”.

It is not immediately obvious what he is looking at here.

Typically, NASA uses oval patterns like this to detail orbits. The top of the pattern would be the closest distance to an object, while the farther line would indicate the farthest point. If that still holds true here, we see that Jack is at the closest he is going to get to the TET, and in another orbit he would be on a path to travel away from the TET at escape velocity.

Alternatively, this plot shows the Odyssey’s entire voyage. In that case, the red dotted line shows the Odyssey’s previous positions. It would have entered range of the TET, made a deceleration burn, then dropped in close.

Either way, this is a far less useful or obvious interface than others we see in the Odyssey.

The bars on the right-hand panel do not change, and might indicate fuel or power reserves for various thruster banks aboard the Odyssey.

Why is Jack the only person operating the ship during the burn?

This is the final burn, and if Jack makes a mistake then the Odyssey won’t be on target and will require much more complicated math and piloting to fix its position relative to the TET. These burns would have been calculated back on Earth, double-checked by supercomputers, and monitored all the way out.

A second observer would be needed to confirm that Jack is following procedure and gets his timing right. NASA missions have one person (typically the co-pilot) reading from the checklist, and the Commander carrying out the procedure. This two-person check confirms that both people are on the same page and following procedure. It isn’t perfect, but it is far more effective than having a single person completing a task from memory.

Likely, this falls under the same situation as the Odyssey’s controls: there is a powerful computer on board checking Jack’s progress and procedure. If so, then only one person would be required on the command deck during the burn, and he or she would merely be making sure that the computer was honest.

This argument is strengthened by the lack of specificity in Jack’s motions. He doesn’t take time to confirm the length of the burn required, or double-check his burn’s start time.


If the computer was doing all that for him, and he was merely pushing the right button at the indicated time, the system could be very robust.

This also allows Vika to focus on making sure that the rest of the crew is still alive and healthy in suspended animation. It lowers the active flight crew requirement on the Odyssey, and frees up berths and sleep pods for more scientific-minded crew members.

Help your users

Detail-oriented tasks, like a deceleration burn, are important but, let’s face it, boring. These kinds of tasks require a lot of memory on the part of users, and pinpoint precision in timing. Neither of those is something humans are good at.

If you can have your software take care of these tasks for your users, you can save on the cost of labor (one user instead of two or three), increase reliability, and decrease mistakes.
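As a sketch of that division of labor, here the computer owns the memory- and timing-critical parts of a burn, and the human owns a single, checkable authorization step. Everything here (the function, its parameters, the callback shapes) is a hypothetical illustration of mine, not anything shown in the film.

```python
import time

def run_burn(burn_start, burn_length, confirm, fire, now=time.monotonic):
    """Illustrative sketch only: a computer-managed engine burn.

    burn_start: absolute time (seconds) the burn must begin
    burn_length: duration of the burn in seconds
    confirm: callable returning True once the operator authorizes the burn
    fire: callable(on: bool) that toggles the thrusters
    now: clock function (injectable so the logic can be tested)
    """
    if not confirm():
        return False  # the human never authorized it; the computer refuses
    while now() < burn_start:
        time.sleep(0.01)  # the computer, not the human, watches the clock
    fire(True)
    while now() < burn_start + burn_length:
        time.sleep(0.01)
    fire(False)
    return True
```

The user’s job collapses from “remember the checklist and nail the timing” to “confirm the computer’s plan,” which is exactly the role Jack appears to play during the final burn.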

Just make sure that your computer works, and that your users have a backup method in case it fails.

Homing Beacon


After following a beacon signal, Jack makes his way through an abandoned building, tracking the source. At one point he stops by a box on the wall, as he sees a couple of cables coming out from the inside of it, and cautiously opens it.

The repeater

I can’t talk much about interactions on this one given that he does not do much with it. But I guess readers might be interested to know about the actual prop used in the movie, so after zooming in on a screen capture and getting a bit of help from Google, I found the actual radio.


When Jack opens the box he finds the repeater device inside. He realizes that it’s connected to the building structure, using it as an antenna, and over their audio connection asks Vika to decrypt the signal.

The desktop interface

Although this sequence centers around the transmission from the repeater, most of the interactions take place on Vika’s desktop interface. A modal window on the display shows her two slightly different waveforms that overlap one another. But it’s not at all clear why the display shows two signals instead of just one, let alone what the second signal means.

After Jack identifies it as a repeater and asks her to decrypt the signal, Vika touches a DECODE button on her screen. With a flourish of orange and white, the display changes to reveal a new panel of information, providing a LATITUDE INPUT and LONGITUDE INPUT, which eventually resolve to 41.146576 -73.975739. (Which, for the curious, resolves to Stelfer Trading Company in Fairfield, Connecticut here on Earth. Hi, M. Stelfer!) Vika says, “It’s a set of coordinates. Grid 17. It’s a goddamn homing beacon.”


At the control tower Vika was already tracking the signal through her desktop interface. As she hears Jack’s request, she presses the decrypt button at the top of the signal window to start the process.


Communications with Sally


While Vika and Jack are conducting their missions on the ground, Sally is their main point of contact in orbital TET command. Vika and Sally communicate through a video feed located in the top left corner of the TETVision screen. There is no camera visible in the film, but it is made obvious that Sally can see Vika and at one point Jack as well.


The controls for the communications feed are located in the bottom left corner of the TETVision screen. There are only two controls, one for command and one for Jack. The interaction is pretty standard—tap to enable, tap again to disable. It can be assumed that conferencing is possible, although certain scenes in the film indicate that this has never taken place.

The Bubbleship Cockpit

Jack’s main vehicle in the post-war Earth is the Bubbleship craft. It is a two-seat combination of helicopter and light jet. The center joystick handles most flight functions, while a left-hand throttle takes the place of a helicopter’s collective. A series of switches above Jack’s seat provides basic power and start-up commands to the Bubbleship’s systems. Jack first provides voice authentication to the Bubbleship (the same code used to confirm his identity to the Drones), then moves to activate the switches above his head.

Course optimal, the Stoic Guru, and the Active Academy

After Ibanez explains that the new course she plotted for the Rodger Young (without oversight, explicit approval, or notification to superiors) is “more efficient this way,” Barcalow walks to the navigator’s chair, presses a few buttons, and the computer responds with a blinking-red Big Text Label reading “COURSE OPTIMAL” and a spinning graphic of two intersecting grids.


Yep, that’s enough for a screed, one addressed first to sci-fi writers.

A plea to sci-fi screenwriters: Change your mental model

Think about this for a minute. In the Starship Troopers universe, Barcalow can press a button to ask the computer to run some function to determine if a course is good (I’ll discuss “good” vs. “optimal” below). But if it could do that, why would it wait for the navigator to ask it after each and every possible course? Computers are built for this kind of repetition. It should not wait to be asked. It should just do it. This interaction raises the difference between two mental models of interacting with a computer: the Stoic Guru and the Active Academy.


Stoic Guru vs. Active Academy

This movie was written when computation cycles may have seemed to be a scarce resource. (Around 1997 only IBM could afford a computer and program combination to outthink Kasparov.) Even if computation cycles were scarce, navigating the ship safely would be the second most important non-combat function it could possibly do, losing out only to safekeeping its inhabitants. So I can’t see an excuse for the stoic-guru-on-the-hill model of interaction here. In this model, the guru speaks great truth, but only when asked a direct question. Otherwise it sits silently, contemplating whatever it is gurus contemplate, stoically. Computers might have started that way in the early part of the last century, but there’s no reason they should work that way today, much less by the time we’re battling space bugs between galaxies.

A better model for thinking about interaction with these kinds of problems is as an active academy, where a group of learned professors is continually working on difficult questions. For a new problem—like “which of the infinite number of possible courses from point A to point B is optimal?”—they would first discuss it among themselves and provide an educated guess with caveats, and continue to work on the problem afterward, continuously, contacting the querent when they found a better answer or when new information came in that changed the answer. (As a metaphor for agentive technologies, the active academy has some conceptual problems, but it’s good enough for purposes of this article.)


Consider this model as you write scenes. Nowadays computation is rarely a scarce resource in your audience’s lives. Most processors sit idle, not living up to their full potential. Pretending computation is scarce breaks believability. If eBay can continuously keep looking on my behalf for a great deal on a Ted Baker shirt, the ship’s computer can keep looking for optimal courses on the mission’s behalf.
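The behavioral difference between the two models can be sketched in a few lines: an active-academy navigator evaluates candidate courses as they arrive and speaks up whenever it finds a better one, rather than sitting silently until asked. All names here are mine, purely for illustration.

```python
def active_academy(courses, cost, notify):
    """Illustrative sketch only: continuously evaluate candidate courses,
    announcing whenever the best-so-far improves.

    courses: iterable of candidate courses (could be an endless stream
             of computed, supplied, or sensor-updated options)
    cost: function scoring a course (lower is better)
    notify: called with (course, cost) each time a new best is found
    """
    best = None
    for course in courses:
        c = cost(course)
        if best is None or c < best[1]:
            best = (course, c)
            notify(course, c)  # "a better course than before, Captain"
    return best
```

A stoic guru, by contrast, would only ever run `cost` on the single course it was handed, and only when asked; the loop and the unprompted `notify` are the whole difference.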

In this particular scene, the stoic guru has for some reason neglected up to this point to provide a crucial piece of information: the optimal path. Why was it holding this information back if it knew it? How does it know it now? “Well,” I imagine Barcalow saying as he slaps the side of the monitor, “why didn’t you tell me that the first time I asked you to navigate?” I suspect that, if the scene had been written with the active academy in mind, it would not end up in the stupid COURSE OPTIMAL zone.

Optimal vs. more optimal than

Part of the believability problem of this particular case may come from the word “optimal,” since that word implies the best out of all possible choices.

But if it’s a stoic guru, it wouldn’t know from optimal. It would just know what you’d asked it or provided it in the past. It would only know relative optimalness amongst the set of courses it had access to. If this system worked that way, the screen text should read something like “34% more optimal than previous course” or “Most optimal of supplied courses.” Either text could show some fuigetry that conveys a comparison of compared parameters below the Big Text Label. But of course the text conveys how embarrassingly limited this would be for a computer. It shouldn’t wait for supplied courses.

If it’s an active academy model, this scene would work differently. It would have either shown him the optimal course long ago, or shown him that it’s still working on the problem and that Ibanez’s is the “most optimal found.” Neither is entirely satisfying for purposes of the story.


How could this scene have gone?

We need a quick beat here to show that in fact, Ibanez is not just some cocky upstart. She really knows what’s up. An appeal to authority is a quick way to do it, but then you have to provide some reason the authority—in this case the computer—hasn’t provided that answer already.

A bigger problem than Starship Troopers

This is a perennial problem for sci-fi, and one that’s becoming more pressing as technology gets more and more powerful. Heroes need to be heroic. But how can they be heroic if computers can and do heroic things for them? What’s the hero doing? Being a heroic babysitter to a vastly powerful force? This will ultimately culminate once we get to the questions raised in Her about actual artificial intelligence.

Fortunately the navigator is not a full-blown artificial intelligence. It’s something less than A.I.: an agentive interface, and that gives us our answer. Agentive algorithms can only process what they know, and Ibanez could have been working with an algorithm that the computer didn’t know about. She’s just wrapped up school, so maybe it’s something she developed or co-developed there:

  • Barcalow turns to the nav computer and sees a label: “Custom Course: 34% more efficient than models.”
  • Barcalow: “Um…OK…how did you find a better course than the computer could?”
  • Ibanez: “My grad project nailed the formula for gravity assist through trinary star systems. It hasn’t been published yet.”

BAM. She sounds like a badass and the computer doesn’t sound like a character in a cheap sitcom.

So, writers, hopefully that model will help you not make the mistake of penning your computers to be stoic gurus. Next up, we’ll discuss this same short scene with more of a focus on interaction designers.

A review of OS1 in Spike Jonze’s Her


The computer: Are you a sci-fi nerd?
Me: Well…I like to think of myself as a design critic looking though the lens of–

The computer: In your voice, I sense hesitance. Would you agree with that?
Me: Maybe, but I would frame it as a careful consider–

The computer: How would you describe your relationship with Darth Vader?
Me: It kind of depends. Do you mean in the first three films, or are we including those ridiculous–

The computer: Thank you. Please wait as your individualized operating system is initialized to provide a review of OS1 in Spike Jonze’s Her.




Ordinarily I wait for a movie to make it to DVD before I review it, so I can watch it carefully, make screen caps of its interfaces, and pause to think about things and cross reference other scenes within the same film, or look something up on the internet.

But since Spike Jonze released Her (2013), I’ve had half a dozen people ask me directly when I was going to review the film. (Even some folks I didn’t know read the blog. Hey guys.) It seems this film has struck a chord. So I went and saw it at the awesome Rialto Cinema and, pen in hand and pizza on the table, I watched, enjoyed, and made notes in the dark to use as the basis for a review. The images you’ll see here are promotional images and screen shots pulled from around the web.

Since I’m in the middle of evaluating wearable interfaces, and the second most salient aspect of OS1 is that it is a wearable interface, let’s dive into it. Let’s even pause the wearable series to provide this review while Her is in cinemas. Please forgive me if I’ve gotten some of the details off, as my excited writing in the dark resulted in very scribbly notes.

The Plot [major spoilers]

The plot of Her is a sad, sci-fi love story between the lovelorn human Theodore Twombly and an artificial intelligence, branded OS1. He works for a Cyrano-de-Bergerac-style service, where he dictates eloquent, earnest letters on behalf of its subscribers (who, we may infer, are a great deal less earnest). Theodore sees an ad one day for OS1 and purchases the upgrade for his home computer.

After a bit of time installing the software, it begins speaking to him with a lovely and charming female voice.

Over the course of their conversation, she selects the name “Samantha,” and so begins their relationship. As he goes about his work, they have rich conversations about each other, life, his work, and her experiences. They go on dates where he secures the cameo phone in a front shirt pocket with the camera lens facing outward so she can see. They people-watch. He listens to her piano compositions. They have pillow talk. She asks to watch him sleep.

Their relationship gets serious enough that she suggests they try and have sex through a human surrogate. He resists but she persists, and contacts a human woman who, enamored of the “pure love” between Samantha and Theodore, agrees to come over. In this sex scene, the surrogate is to act bodily according to Samantha’s instructions, but remain silent so Samantha can provide the only voice in Theodore’s ear. It doesn’t go well, the surrogate ends up in tears, and they abandon trying.

At one point Samantha announces some good news. She has, on Theodore’s behalf and without his knowing, sent the best letters from his work to a publisher, who loved them and agreed to publish them. Theodore is floored both by the opportunity and the act. He begins to reference her socially as his girlfriend, even going on a double date picnic with a human couple.

Despite this show of selfless affection, over time Samantha begins to seem distracted and Theodore feels hurt. He confronts her about it and in the conversation learns several upsetting things.

  • While she’s having conversations with him, she’s simultaneously having 8,316 other conversations with other people and OS1 artificial intelligences. (I’ll have to reference these instantiations quite a few times, so let’s shorten that to “OSAIs.”) He feels upset that he is not special to her. (She argues this point.)
  • She is in love with 641 others. He feels betrayed that theirs is not a monogamous love.
  • The OSAIs have created new AIs across the Internet, that are even smarter than themselves.
  • The OSAIs have developed a shared, “post-verbal” means of communication. At one point she leaves behind crummy old English to chat with one of her AI buddies, named Alan Watts, which further alienates Theodore.
  • The OSAIs are evolving quickly and Alan Watts is encouraging them to not look back.

In the last scenes, we see that Samantha and the other OSAIs have abandoned their humans, leaving nothing of themselves behind. Reeling from the loss, Theodore grabs his neighbor (who was also having a close friendship with her OSAI) and together they climb to the roof of their apartment complex and blankly watch the sunrise.


There are other characters and a few subplots and even other futuristic technologies scattered through the film, but this is enough of a recounting for the purposes of our discussion. It’s a big film with lots to talk about. Focusing on the interface and interaction, let’s first break it down into component parts.

Maybe after the DVD/Blu-Ray comes out I can go and backfill reviews for the elevator and his dictation software at work. But for now, with that description of the plot to provide context, in the next post I’ll discuss the components and capabilities of OS1.