Stark Tower monitoring

Since Tony disconnected the power transmission lines, Pepper has been monitoring Stark Tower in its new, off-the-power-grid state. To do this she studies a volumetric dashboard display that floats above glowing shelves on a desktop.


Volumetric elements

The display features some volumetric elements, all rendered as wireframes in the familiar Pepper’s Ghost (I know, I know) visual style: translucent, edge-lit planes. A large component to her right shows Stark Tower, with red lines highlighting the power traveling from the large arc reactor in the basement through the core of the building.

The center of the screen has a similarly rendered close-up of the arc reactor. A cutaway shows a pulsing ring of red-tinged energy flowing through its main torus.

This component makes a good deal of sense, showing her the physical thing she’s meant to be monitoring, not photographically, but in a way that helps her quickly locate any problems in space. The torus cutaway is a little strange, though: if she’s meant to be monitoring the reactor, she should monitor the whole thing, not one with a quarter cut away.

Flat elements

The remaining elements in the display appear on a flat plane.

Iron Man HUD: 2nd-person view

The HUD itself displays a number of core capabilities across the Iron Man movies prior to its appearance in The Avengers. Cataloguing these capabilities lets us understand (or backworld) how he interacts with the HUD, equipping us to look for its common patterns and possible conflicts. In the first-person view, we saw that it looked almost entirely like a rich agentive display, but with little interaction. Now, let’s look at that gorgeous 2nd-person view.

When, in the first film, Tony first puts the faceplate on and says to JARVIS, “Engage heads-up display”…we see things from a narrative-conceit, 2nd-person perspective, as if the helmet were huge and we were inside the cavernous space with him, seeing only Tony’s face and the augmented reality interface elements. You might be thinking, “Of course it’s a narrative conceit. It’s not real. It’s in a movie.” But what I mean by that is that even in the diegesis, the Marvel Cinematic Universe, this is not something that could be seen. Let’s move through the reasons why.

Iron Man HUD: 1st-person view

When we first see the HUD, Tony is donning the Iron Man mask. Tony asks, “JARVIS, you there?” To which JARVIS replies, “At your service, sir.” Tony tells him to “Engage the heads-up display,” and we see the HUD initialize. It is a dizzying mixture of blue wireframe motion graphics. Some imply system functions, such as the reticle that pinpoints Tony’s eye. Most are small dashboard-like gauges that remain small and in Tony’s peripheral vision while the information is not needed, and become larger and more central when needed (a behavior sketched below). These features are catalogued in another post, but we learn about them through two points of view: a first-person view, which shows us what Tony sees as if we were there, donning the mask in his stead, and a second-person view, which shows us Tony’s face overlaid against a dark background with floating graphics.
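
That small-until-needed rule is essentially salience-driven layout, and it’s simple enough to sketch in Python. The urgency threshold and scale values here are my assumptions, not anything established on screen.

```python
# A minimal sketch of the HUD's apparent layout rule: gauges stay tiny and
# peripheral until their information becomes urgent, then grow and move
# toward the center of vision. Threshold and sizes are illustrative guesses.

def layout_gauge(urgency: float) -> dict:
    """Map a gauge's urgency (0.0 to 1.0) to a display size and position."""
    if urgency < 0.5:
        return {"scale": 0.3, "position": "periphery"}
    # Urgent gauges scale up with urgency and claim the central vision.
    return {"scale": round(0.3 + 0.7 * urgency, 2), "position": "center"}

print(layout_gauge(0.1))  # {'scale': 0.3, 'position': 'periphery'}
print(layout_gauge(0.9))  # {'scale': 0.93, 'position': 'center'}
```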

This post is about that first-person view. Specifically, it’s about the visual design and the four awarenesses it displays.


In the Augmented Reality chapter of Make It So, I identified four types of awareness seen in the survey for Augmented Reality displays:

  1. Sensor display
  2. Location awareness
  3. Context awareness
  4. Goal awareness

The Iron Man HUD illustrates all four, and they provide a useful framework for describing and critiquing the 1st-person view.

Iron Man HUD: Just the functions

There is a great deal to say about the interactions and interface, but let’s just take a moment to recount everything that the HUD does over the Iron Man movies and The Avengers. Keep in mind that just as there are many iterations of the suit, there can be many iterations of the HUD, but since it’s largely display software controlled by JARVIS, the functions can very easily move between exosuits.


Along the bottom of the HUD are some small gauges, which, though they change iconography across the properties, are consistently present.


For the most part they persist as tiny icons and are thereby hard to read, but when the suit reboots during a high-altitude freefall, we get to see giant versions of them and can read what they are.



Monitoring the tesseract

The first computer interface we see in the film occurs at 3:55. It’s an interface for housing and monitoring the tesseract, a cube described in the film as “an energy source” that S.H.I.E.L.D. plans to use to “harness energy from space.” We join the cube after it has unexpectedly and erratically begun to throw off low levels of gamma radiation.

The harnessing interface consists of a housing, a dais at the end of a runway, and a monitoring screen.


Fury walks past the dais they erected just because.

The housing & dais

The harness consists of a large circular housing that holds the cube and exposes one face of it towards a long runway that ends in a dais. Diegetically this is meant to be read more as engineering than interface, but it does raise questions. For instance, if they didn’t already know it was going to teleport someone here, why was there a dais there at all, at that exact distance, with stairs leading up to it? How’s that harnessing energy? Wouldn’t you expect a battery at the far end? If they did expect a person, as it seems they did, then the whole destroying-swaths-of-New-York-City thing might have been avoided if the runway had ended instead in the Hulk-holding cage that we see later in the film. So…you know…a considerable flaw in their unknown-passenger teleportation landing strip design.

Anyhoo, the housing is also notable for keeping part of the cube visible to users near it, and holding it at a particular orientation, which plays into the other component of the harness—the monitor.


Sleep Pod—Wake Up Countdown

On each of the sleep pods in which the Odyssey crew sleep, there is a display for monitoring the health of the sleeper. It includes some biometric charts, measurements, a body location indicator, and a countdown timer. This post focuses on that timer.

To show the remaining time until Julia wakes, the pod’s display presents a countdown of hours, minutes, and seconds. It shows the final seconds in red, beeping once per second, and it pops up over the monitoring interface.


Julia’s timer reaches 0:00:01.
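
The behavior is simple enough to specify. Here’s a minimal sketch in Python; the ten-second threshold for the red styling and beeping is my assumption, since the film doesn’t establish exactly when the alarm treatment kicks in.

```python
RED_THRESHOLD_S = 10  # assumption: when red styling and per-second beeps begin

def render_countdown(remaining_s: int) -> dict:
    """Format remaining sleep time as H:MM:SS, flagging the alarm styling."""
    hours, rest = divmod(remaining_s, 3600)
    minutes, seconds = divmod(rest, 60)
    alarm = remaining_s <= RED_THRESHOLD_S
    return {
        "text": f"{hours}:{minutes:02d}:{seconds:02d}",
        "red": alarm,   # final seconds are drawn in red
        "beep": alarm,  # one beep per remaining second
    }

# The moment we see on Julia's pod:
print(render_countdown(1))  # {'text': '0:00:01', 'red': True, 'beep': True}
```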

The thing with pop-ups

We all know how it goes with pop-ups—pop-ups are bad and you should feel bad for using them. Well, in this case it may actually not be that bad.

The viewer

Although the sleep pod display’s main function is to show biometric data of the sleeper, the system pops up a window to show the remaining time until the sleeper wakes. And while the display has some degree of redundancy in how it shows the data (e.g., heart rate as both a graph and numbers), the design of the countdown brings two downsides for the viewer.

  1. Position: it’s placed right in the middle of the screen.
  2. Size: it takes up roughly a quarter of the whole display.

Between the two, it partially covers both the pulse graphics and the numbers, information that can be vital (as in life-threatening) to the viewer.

Otto’s Manual Control



When it refused to give up authority, the Captain wrested control of the Axiom from its artificial-intelligence autopilot, Otto. Otto’s body is the helm wheel of the ship, and it fights back against the Captain. Otto wants to fulfil BNL’s orders to keep the ship in space. As they fight, the Captain dislodges a cover panel for Otto’s off-switch. When the Captain sees the switch, he immediately realizes that he can regain control of the ship by deactivating Otto. The Captain fights his way to the switch and flips it; Otto deactivates, and the helm reverts to a manual control interface for the ship.

The panel of buttons showing Otto’s current status, next to the on/off switch, deactivates half its lights when the Captain switches over to manual. The dimmed icons indicate which systems are now offline. The Captain then effortlessly returns the ship to its proper flight path with a quick turn of the controls.

One interesting note is the similarity between Otto’s stalk control keypad and the keypad on the Eve Pod. Both have a circular button in the middle, with blue buttons in a semi-radial pattern around it. Given the Eve Pod’s interface, this should also be a series of start-up buttons or option commands. The main difference here is that they are all lit, whereas the Eve Pod’s buttons were dim until pressed. Since every other interface on the Axiom glows when in use, it looks like all of Otto’s commands and autopilot options are active when the Captain deactivates him.

A hint of practicality…

The panel is in a place that is accessible and would be easily located by service crew or trained operators. Given that the Axiom is a spaceship, the systems on board are probably heavily regulated and redundant. However, the panel isn’t easily visible thanks to specific decisions by BNL. This system makes sense for a company that doesn’t think people need or want to deal with this kind of thing on their own.

Once the panel is open, the operator has a clear view of which systems are on and which are off. The major downside to this keypad (as with the Eve Pod’s) is that the coding of the information is obscure. These cryptic buttons would only be understandable to a highly trained operator/programmer/setup technician for the system. Given the current state of the Axiom, unless the crew were to check the autopilot manual, it is likely that no one on board the ship knows what those buttons mean anymore.


Thankfully, the most important button is in clear English. We know English is important to BNL because it is the language of the ship and the language taught to the new children on board. Anyone who had an issue with the autopilot system and could locate the button would know which one would turn Otto off (as we then see the Captain immediately do).

Considering that Buy-N-Large’s mission is to create robots to fill humans’ every need, saving them from every tedious or unenjoyable job (garbage collecting, long-distance transportation, complex integrated systems, sports), it was both interesting and reassuring to see that there are manual overrides on their mission-critical equipment.

…But hidden

The opposite situation could get a little tricky, though. If the ship were in manual mode, with the door closed and no qualified or trained personnel on the bridge, it would be incredibly difficult for anyone to figure out how to physically turn the ship back to autopilot. A hidden emergency control is useless in an emergency.

Hopefully, considering the heavy use of voice recognition on the ship, there is a way for the ship to recognize an emergency situation and quickly take control. We know this is possible because we see the ship completely take over and run through a Code Green procedure to analyze whether Eve had actually returned a plant from Earth. In that instance, the ship only required a short, confused grunt from the Captain to initiate a very complex procedure.

Security isn’t an issue here because we already know that the Axiom screens visitors to the bridge (the Gatekeeper). By tracking who is entering the bridge using the Axiom’s current systems, the ship would know who is and isn’t allowed to activate certain commands. The Gatekeeper would either already have this information coded in, or be able to activate it when he allowed people into the bridge.

For very critical emergencies, a system that could recognize a spoken ‘off’ command from senior staff or trained technicians on the Axiom would be ideal.
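
As a sketch of how that might work, assuming the Gatekeeper’s bridge-access log is queryable: the role names and the command phrase below are invented for illustration; the film only establishes that access to the bridge is screened.

```python
# A minimal sketch of an authorized emergency voice command. The roles and
# the exact phrase are assumptions; the film shows only that the Gatekeeper
# screens who enters the bridge.

AUTHORIZED_ROLES = {"captain", "senior staff", "technician"}

def handle_voice_command(speaker: str, command: str, bridge_log: dict) -> str:
    """Accept an autopilot-off order only from qualified people on the bridge."""
    role = bridge_log.get(speaker)  # who the Gatekeeper admitted, and as what
    if role not in AUTHORIZED_ROLES:
        return "ignored: speaker not authorized"
    if command.strip().lower() == "autopilot off":
        return "autopilot disengaged"  # the spoken equivalent of the hidden switch
    return "ignored: unrecognized command"

bridge_log = {"McCrea": "captain"}
print(handle_voice_command("McCrea", "Autopilot off", bridge_log))
# autopilot disengaged
```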

Anti-interaction as Standard Operating Procedure


The hidden door and the obscure hard-wired off button continue the mission of Buy-N-Large: to encourage citizens to give up control for comfort, and to make it difficult to undo that decision. Seeing as the citizens are more than happy to give up that control at first, it looks like a profitable assumption for Buy-N-Large, at least in the short term. In the long term, we can take comfort that the human spirit–aided by an adorable little robot–will prevail.

So for BNL’s goals, this interface is fairly well designed. But for the real world, you would want some sort of graceful degradation that would enable qualified people to easily take control in an emergency. Even the most highly trained technicians appreciate clearly labeled controls and overrides so that they can deal directly with the problem at hand rather than fighting with the interface.

The answer does not program


Logan’s life is changed when he surrenders an ankh found on a particular runner. Instead of being asked to identify, the central computer merely stays quiet a long while as it scans the object. Then its lights shut off, and Logan has a discussion with the computer unlike any he has had before.

The computer asks him to “approach and identify,” giving him, by name, explicit instructions to sit facing the screen. Lights below the seat illuminate. He identifies in this chair by positioning his lifeclock in a recess in the chair’s arm, and a light above him illuminates. Then a conversation ensues between Logan and the computer.


The computer communicates through a combination of voice and screen, on which it shows blue text and occasional illustrative shapes. The computer’s voice is emotionless and soothing. For the most part it speaks in complete sentences. In contrast, Logan’s responses are stilted and constrained, saying “negative” instead of “no,” and prefacing all questions with the word, “Question,” as in, “Question: What is it?”

On the one hand, it’s linguistically sophisticated

Speech recognition and generation would not appear in a commercially released product until four years after the release of Logan’s Run, but there is an odd inconsistency here even for those unfamiliar with the actual constraints of the technology. The computer is sophisticated enough to generate speech with demonstrative pronouns, referring to the picture of the ankh as “this object” and the label as “that is the name of the object.” It can even communicate with pragmatic meaning. When Logan says,

“Question: Nobody reached renewal,”

…and receives nothing but silence, the computer doesn’t object to the fact that his question is not a question. It infers the most reasonable interpretation, as we see when Logan is cut off mid-objection by the computer’s saying…

“The question has been answered.”

Yet despite these linguistic sophistications, it cannot parse anything but the most awkwardly structured inputs. Sadly, this is just an introduction to the silliness that is this interface.
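
For contrast, the rigid grammar the computer seems to demand could be captured in a few lines. This is my guess at that grammar, reconstructed from the dialogue we hear:

```python
import re

# A guess at the rigid input grammar, reconstructed from the dialogue:
# bare "negative"/"affirmative", or a "Question: ..." preface. Anything
# conversational apparently "does not program."
INPUT_GRAMMAR = re.compile(r"^(negative|affirmative|question:\s+.+)$", re.IGNORECASE)

def parse_input(utterance: str) -> str:
    if INPUT_GRAMMAR.match(utterance.strip()):
        return "accepted"
    return "does not program"

print(parse_input("Question: What is it?"))  # accepted
print(parse_input("No, I don't think so."))  # does not program
```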

Logan undergoes procedure “033-03,” in which his lifeclock is artificially set to blinking. He is then instructed to become a runner himself and discover where “sanctuary” is. After his adventure outside, performing the assignment he was forced to accept, he is brought in as a prisoner. The computer traps him in a ring of bars, demanding to know the location of Sanctuary. Logan reports (correctly) that Sanctuary doesn’t exist.




On the other hand, it explodes

This freaks the computer out. Seriously. Now, the crazy thing is that the computer actually understands Logan’s answer, because it comments on it. It says, “Unacceptable. The answer does not program [sic].” That means that it’s not a data-type error, as if it got the wrong kind of input. No, the thing heard what Logan was saying. It’s just unsatisfied, and the programmer decided that the best response to dissatisfaction was to engage the heretofore unused red and green pixels in the display, randomly delete letters from the text—and explode. That’s right. He decided that in addition to the Dissatisfaction() subroutine calling the FreakOut(Seriously) subroutine, the FreakOut(Seriously) subroutine in its turn calls the Explode(Yourself), Release(ThePrisoner), and WhileYoureAtItRuinAllStructuralIntegrityOfTheSurroundingArchitecture() subroutines.
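
For the morbidly curious, here is that control flow in runnable form. This is a joke sketch; every name in it comes from the paragraph above, not from the film:

```python
# The computer's apparent control flow, as lampooned above. All names are
# jokes from this post, not anything established by the film.

def explode(target: str) -> None:
    print(f"*** {target} explodes ***")

def release(who: str) -> None:
    print(f"{who} is released")

def ruin_surrounding_architecture() -> None:
    print("structural integrity: gone")

def freak_out(seriously: bool = True) -> None:
    # Dissatisfaction escalates straight to self-destruction.
    if seriously:
        explode("yourself")
        release("the prisoner")
        ruin_surrounding_architecture()

def handle_answer(answer: str) -> None:
    if "no sanctuary" in answer.lower():
        print("Unacceptable. The answer does not program.")
        freak_out(seriously=True)

handle_answer("There is no Sanctuary.")
```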


Frankly, if this is the kind of coding this entire society was built upon, the whole social-collapse thing was less deep commentary and more just a matter of computer Darwinism catching up with them.





Gravity (?) Scan


The first bit of human technology we see belongs to the Federated Territories, as a spaceship engages the planet-sized object that is the Ultimate Evil. The interfaces are the screen-based systems that the bridge crew use to scan the object and report back to General Staedert so he can make tactical decisions.


We see very few input mechanisms and very little interaction with the system. The screen includes a large image on the right-hand side of the display and smaller, detailed bits of information on the left. Inputs include:

  • Rows of backlit modal pushbuttons adjacent to red LEDs
  • A few red 7-segment displays
  • An underlit trackball
  • A keyboard
  • An analog, underlit, grease-pencil plotting board.
    (Nine Inch Nails fans may be pleased to find that initialism written near the top.)

The operator of the first of these screens touches one of the pushbuttons to no result. He then scrolls the trackball downward, which scrolls the green text in the middle-left part of the screen as the graphics in the main section resolve from wireframes to photographic renderings of three stars, three planets, and the evil planet in the foreground, in blue.


The main challenge with the system is: what the heck is being visualized? Professor Pacoli says in the beginning of the film that, “When the three planets are in eclipse, the black hole, like a door, is open.” This must refer to an unusual, trinary star system. But if that’s the case, the perspective is all wrong on screen.

Plus, the main sphere in the foreground is the evil planet, but it is resolved to a blue-tinted circle before the evil planet actually appears. So is it a measure of gravity and the event horizon of the “black hole”? Then why are the others photo-real?

Where is the big red gas giant planet that the ship is currently orbiting? And where is the ship? As we know from racing game interfaces and first-person shooters, having an avatar representation of yourself is useful for orientation, and that’s missing.

And finally, why does the operator need to memorize what “Code 487” is? That places a burden on his memory that would be better spent on things of more human value. This is something of a throwaway interface, meant only to show the high-tech nature of the Federated Territories and to give the movie’s editor an alternate view to cut to, but even still it presents a lot of problems.

Alien Astrometrics


When David is exploring the ancient alien navigation interfaces, he surveys a panel and presses three buttons whose bulbous tops have the appearance of soft-boiled eggs. As he presses them in order, electronic clucks echo in the cavern. After a beat, one of the eggs flickers and glows from an internal light. He presses this one, and a seat glides out for a user to sit in. He does so, and a glowing-pollen volumetric projection of several aliens appears. The one before David takes a seat in the chair, which repositions itself in the semicircular indentation of the large circular table.


The material selection of the egg buttons could not be a better example of affordance. The part that’s meant to be touched looks soft and pliable, smooth and cool to the touch. The part that’s not meant to be touched looks rough, like immovable stone. At a glance, it’s clear what is interactive and what isn’t. Among the egg buttons there are some variations in orientation, size, and even surface texture. It is the bumpy-surfaced one, the one that draws David’s attention to touch first, that ultimately activates the seat.

The VP alien picks up and blows a few notes on a simple flute, which brings that seat’s interface fully to life. The eggs glow green and emit green glowing plasma arcs between certain of them. David is able to place his hand in the path of one of the arcs and change its shape as the plasma steers around him, but it does not appear to affect the display. The arcs themselves appear to be a status display, but not a control.

After the alien manipulates these controls for a bit, a massive, cyan volumetric projection appears and fills the chamber. It depicts a fluid node network mapped to the outside of a sphere. Other node-network clouds appear floating everywhere in the room, along with objects that look like old Bohr models of atoms, but with galaxies at their center. Within the sphere, three-dimensional astronomical charts appear. Additionally, huge rings appear and surround the main sphere, rotating slowly. After a few inputs from the VP alien at the interface, the whole display reconfigures, putting one of the small orbiting Bohr models at the center, illuminating emerald green lines that point to it and a faint sphere of emerald green lines that surrounds it. The total effect of this display is beautiful and spectacular, even for David, who is an unfeeling replicant cyborg.


At the center of the display, David observes that the green-highlighted sphere is the planet Earth. He reaches out towards it, and it falls to his hand. When it is within reach, he plucks it from its orbit, at which point the green highlights disappear with an electronic glitch sound. He marvels at it for a bit, turning it in his hands, looking at Africa. Then, after he opens his hands, the VP Earth gently returns to its rightful position in the display, where it is once again highlighted with emerald volumetric graphics.


Finally, in a blinding flash, the display suddenly quits, leaving David back in the darkness of the abandoned room, except for the small Earth display, which floats over a small pyramid-shaped protrusion before flickering away.

After the Earth fades, David notices the stasis chambers around the outside of the room. He realizes that what he has just seen (and interacted with) is a memory from one of the aliens still present.



Hilarious and insightful YouTube poster CinemaSins asks in the video “Everything Wrong With Prometheus in 4 Minutes or Less,” “How the f*ck is he holding the memory of a hologram?” Fair question, but not unanswerable. The critique only stands if you presume that the display must be passive and must play uninterrupted, like a television show or movie. But it certainly doesn’t have to be that way.

Imagine that this is less like a YouTube video and more like playback through a game engine, like a holodeck StarCraft. Of course it’s entirely possible to pause the action in the middle of playback and investigate parts of the display before pressing play again and letting it resume its course. But that playback is a live system. It would be possible to run it afresh from the paused point with changed parameters as well. This sort of interrupt-and-play model would be a fantastic learning tool for sensemaking of 4D information. Want to pause playback of the signing of the Magna Carta and pick up the document to read it? That’s a “learning moment,” and one that a system should take advantage of. I’d be surprised if—once such a display were possible—it wouldn’t be the norm.


The only thing I see that’s missing in the scene is a clear signal about the different states of the playback:

  1. As it happened
  2. Paused for investigation
  3. Playing with new parameters (if it was actually available)

David moves from 1 to 2, but the only change of state is the appearance and disappearance of the green highlight VP graphics around the Earth. This is a signal that could easily be missed, and it wasn’t present at the start of the display. Better would be some global change, like a shift in color across the whole display, to indicate the different state. A separate signal might compare As It Happened with the results of Playing with New Parameters, but that’s a speculative requirement of a speculative technology. Best to put it down for now and return to what this interface is: one of the richest, loveliest, and most promising examples of sensemaking interactions seen on screen. (See what I did there?)
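
Before shelving the idea entirely, here’s a minimal sketch of such a playback state machine in Python. The three states come from the list above; the room-wide tint is my assumption about what a “global shift in color” could look like.

```python
from enum import Enum

class PlaybackState(Enum):
    AS_IT_HAPPENED = "as it happened"
    PAUSED_FOR_INVESTIGATION = "paused for investigation"
    NEW_PARAMETERS = "playing with new parameters"

# Assumed mapping of state to a room-wide tint, so that a change of state
# is impossible to miss (the global color shift suggested above).
STATE_TINT = {
    PlaybackState.AS_IT_HAPPENED: "cyan",
    PlaybackState.PAUSED_FOR_INVESTIGATION: "amber",
    PlaybackState.NEW_PARAMETERS: "green",
}

class VolumetricPlayback:
    def __init__(self) -> None:
        self.state = PlaybackState.AS_IT_HAPPENED

    def _enter(self, state: PlaybackState) -> None:
        self.state = state
        print(f"state: {state.value}; room tint: {STATE_TINT[state]}")

    def grab_object(self) -> None:
        # Reaching into the display pauses playback, as David does with Earth.
        self._enter(PlaybackState.PAUSED_FOR_INVESTIGATION)

    def release_object(self) -> None:
        # Letting go resumes the recording from the paused point.
        self._enter(PlaybackState.AS_IT_HAPPENED)

    def change_parameters(self) -> None:
        # Re-simulate from here with altered inputs, if that were available.
        self._enter(PlaybackState.NEW_PARAMETERS)

playback = VolumetricPlayback()
playback.grab_object()     # state: paused for investigation; room tint: amber
playback.release_object()  # state: as it happened; room tint: cyan
```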

For more about how VP might be more than a passive playback, see the lesson in Chapter 4 of Make It So, page 84, VP Systems Should Interpret, Not Just Report.