Homing Beacon


After following a beacon signal, Jack makes his way through an abandoned building, tracking the source. At one point he stops by a box on the wall, as he sees a couple of cables coming out from the inside of it, and cautiously opens it.

The repeater

I can’t say much about interactions for this one, since Jack does not do much with it. But readers might be interested to know about the actual prop used in the movie, so after zooming in on a screen capture, and with a bit of help from Google, I found the actual radio.


When Jack opens the box he finds the repeater device inside. He realizes that it’s connected to the building structure, using it as an antenna, and over their audio connection asks Vika to decrypt the signal.

The desktop interface

Although this sequence centers on the transmission from the repeater, most of the interactions take place on Vika’s desktop interface. A modal window on the display shows her two slightly different waveforms that overlap one another. But it’s not at all clear why the display shows two signals instead of just one, let alone what the second signal means.

After Jack identifies it as a repeater and asks her to decrypt the signal, Vika touches a DECODE button on her screen. With a flourish of orange and white, the display changes to reveal a new panel of information, providing a LATITUDE INPUT and LONGITUDE INPUT, which eventually resolve to 41.146576 -73.975739. (Which, for the curious, resolves to Stelfer Trading Company in Fairfield, Connecticut here on Earth. Hi, M. Stelfer!) Vika says, “It’s a set of coordinates. Grid 17. It’s a goddamn homing beacon.”


At the control tower Vika was already tracking the signal through her desktop interface. As she hears Jack’s request, she presses the decrypt button at the top of the signal window to start the process.

Continue reading

Bike interfaces

There is one display on the bike to discuss, some audio features, and a whole lot of things missing.


The bike display is a small screen near the front of the handlebars that shows a limited set of information to Jack as he’s riding. It appears to work as a radar system. The display is circular, with main content in the middle, a turquoise sweep, and a turquoise ring just inside the bezel. We never see Jack touch the screen, but we do see him work a small, unlabeled knob at the bottom left of the bike’s plates. It is not obvious what this knob does, but Jack does fiddle with it. Continue reading

Communications with Sally


While Vika and Jack conduct their missions on the ground, Sally is their main point of contact in orbital TET command. Vika and Sally communicate through a video feed in the top left corner of the TETVision screen. No camera is visible in the film, but it is clear that Sally can see Vika, and at one point Jack as well.


The controls for the communications feed are located in the bottom left corner of the TETVision screen. There are only two controls, one for command and one for Jack. The interaction is pretty standard—tap to enable, tap again to disable. It can be assumed that conferencing is possible, although certain scenes in the film indicate that this has never taken place. Continue reading

Hydro-rig Monitoring


As a part of their morning routine, Jack makes the rounds in his Bubbleship to provide a visual confirmation that the hydro-rigs are operating properly. In order to send the hydro-rig coordinates to the Bubbleship, Vika:

  1. Presses and holds two fingers on the hydro-rig symbol on the left-hand side panel of the TETVision feed
  2. Waits as a summary of coordinates appears around the touchpoint (the hydro-rig symbol)
  3. Drags the data up to the Bubbleship symbol on the side panel

Inconsistent interactions

When Vika sends the drone coordinates, she interacts directly with the map and uses only one finger. Why is the interaction for sending hydro-rig coordinates different than the interaction for sending drone coordinates? Continue reading

The Bubbleship Cockpit

Jack’s main vehicle on the post-war Earth is the Bubbleship craft, a two-seat combination of helicopter and light jet. A center joystick handles most flight controls, while a left-hand throttle takes the place of a helicopter’s collective. A series of switches above Jack’s seat provides basic power and start-up commands to the Bubbleship’s systems. Jack first provides voice authentication to the Bubbleship (the same code used to confirm his identity to the Drones), then moves to activate the switches above his head. Continue reading



When Ibanez and Barcalow enter the atmosphere in the escape pod, we get a brief, shaky glimpse of the COURSE OPTION ANALYSIS interface. In the screen grab below, you can see it has a large, yellow, all-caps label at the top. The middle shows the TERRAIN PROFILE: a real-time topographic map rendered as a grid of screen-green dots that produce a shaded relief map.


On the right is a column of text that includes:

  • The title, i.e., TERRAIN PROFILE
  • The location data: Planet P, Scylla Charybdis (which I don’t think is mentioned in the film, but a fun detail. Is this the star system?)
  • Coordinates in 3D: XCOORD, YCOORD, and ELEVATION. (Sadly these don’t appear to change, despite the implied precision of 5 decimal places)
  • Three unknown variables: NOMINAL, R DIST, HAZARD Q (these also don’t change)

The lowest part of the block shows the SITE ASSESSMENT at 74.28% (which, does it need to be said at this point, also does not change).

Two inscrutable green blobs extend out past the left and bottom white line that borders this box. (Seriously what the glob are these meant to be?)

At the bottom are SCAN M and PLACE wrapped in the same purple “NV” wrappers seen throughout the Federation spaceship interfaces. Below them is an array of inscrutable numbers in white.

Since that animated gif is a little crazy to stare at, have this serene, still screen cap to reference for the remainder of the article.




Three things to note in the analysis.

1. Yes, fuigetry

I’ll declare everything on the bottom to be filler unless someone out there can pull some apologetics to make sense of it. But even if an array of numbers was ever meant to be helpful, an emergency landing sequence does not appear to be the time. If it needs to be said, emergency interfaces should include only the information needed to manage the crisis.

2. The visual style of the topography

I have blasted the floating-pollen displays of Prometheus before for not describing the topography well, but the escape pod display works while using similar pointillist tactics. Why does this work when the floating pollen does not? First, note that the points here are in a grid. This makes the relationship of adjacent points easy to understand; the randomness of the Promethean displays confounds it. Second, note the angle of the “light” in the scene, which appears to come from the horizon directly ahead of the ship. This creates a strong shaded relief effect, a tried-and-true method of conveying the shape of a terrain.
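A shaded relief rendering like this is a small computation: take the dot product of each grid cell’s surface normal with the light direction. Here is a minimal sketch of the classic hillshade formula in Python/NumPy. The function name and parameters are mine, not anything from the film; a low light altitude mimics the horizon-level light noted above.

```python
import numpy as np

def hillshade(elev, azimuth_deg=0.0, altitude_deg=10.0, cell=1.0):
    """Classic Lambertian hillshade over a regular elevation grid.

    azimuth_deg: direction the light comes from, degrees clockwise from north.
    altitude_deg: light elevation above the horizon; a low value (light near
    the horizon, as in the film) exaggerates the relief.
    Returns a brightness value in [0, 1] per grid cell.
    """
    az = np.radians(azimuth_deg)
    alt = np.radians(altitude_deg)
    # Surface slope and aspect from finite-difference gradients.
    dy, dx = np.gradient(elev, cell)
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)
    # Cosine of the angle between the surface normal and the light direction.
    shade = (np.sin(alt) * np.cos(slope)
             + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)

# A single bump: the lit side brightens, the far side falls toward shadow.
y, x = np.mgrid[0:64, 0:64]
bump = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 120.0) * 20.0
img = hillshade(bump, azimuth_deg=0.0, altitude_deg=10.0)
```

Rendering each `img` value as the brightness of one dot in the grid reproduces the pod display’s effect: shape read entirely from light and shadow.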

3. How does this interface even help?

Let’s get this out of the way: What’s Ibanez’s goal here? To land the pod safely. Agreed? Agreed.

Certainly the terrain view is helpful for understanding the terrain in the flight path, especially in low visibility. But as with the prior interface in this pod, there is no signal to indicate how the ship’s position and path relate to it. Are these hills kilometers below (not a problem) or meters (take some real care there, Ibanez)? This interface should have some indication of the pod itself. (Show me me.)

Additionally, if any of the peaks pose threats, she can avoid them tactically, but adjusting long before they’re a problem will probably help more than veering once she’s right upon them. Best is to show the optimal path, and highlight any threats that would explain the path. Doing so in color (presuming pilots who can see it) would make the information instantly recognizable.

Finally the big label quantifies a “site assessment,” which seems to relay some important information about the landing location. Presumably pilots know what this number represents (process indicator? structural integrity? deviation from an ideal landing strip? danger from bugs?) but putting it here does not help her. So what? If this is a warning, why doesn’t it look like one? Or is there another landing site that she can get to with a better assessment? Why isn’t it helping her find that by default? If this is the best site, why bother her with the number at all? Or the label at all? She can’t do anything with this information, and it takes up a majority of the screen. Better is just to get that noise off the screen along with all the fuigetry. Replace it with a marker for where the ideal landing site is, its distance, and update it live if her path makes that original site no longer viable.
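That replacement needs very little logic behind it: score the reachable candidate sites, recommend the best one, and rerun the pick on every update so the marker stays live as her path changes. A toy sketch, where every name, field, and threshold is an illustrative guess rather than anything established by the film:

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    assessment: float  # 0-100, on the scale the film's SITE ASSESSMENT implies
    distance_km: float
    reachable: bool    # still viable given the pod's current path and energy

def best_landing_site(sites, min_assessment=80.0):
    """Pick the best reachable site, or None if no safe site exists.

    Meant to be called on every update tick, so the recommendation
    marker moves as conditions change instead of sitting static.
    """
    viable = [s for s in sites if s.reachable and s.assessment >= min_assessment]
    if not viable:
        return None  # the UI would fall back to a searching/warning state
    # Prefer the highest assessment; break ties by nearest distance.
    return max(viable, key=lambda s: (s.assessment, -s.distance_km))
```

Note that under this (hypothetical) 80% bar, the film’s static 74.28% site would never even be recommended.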


Of course it must be said that this would work better as a HUD which would avoid splitting her attention from the viewport, but HUDs or augmented reality aren’t really a thing in the diegesis.


The next scene shows them crashing through the side of a mountain, so despite this more helpful design, better for the scene might be a warning mode that reads SAFE SITE: NOT FOUND. SEARCHING… and blinks manically while real-time, failing site assessments flash all over the terrain map. Then the crash makes much more sense as they skip off a hill and into a mountain.



At dispatch for the central computer, Sandmen monitor a large screen that displays a wireframe plan of the city, including architectural detail and even plants, all color coded using saturated reds, greens, and blues. When a Sandman has accepted the case of a runner, he appears as a yellow dot on the screen. The runner appears as a red dot. Weapons fire can even be seen as a bright flash of blue. The red dots of terminated runners fade from view.

Using the small screens and unlabeled arrays of red and yellow lit buttons situated on an angled panel in front of them, the seated Sandman can send a call out to catch runners, listen to any spoken communications, and respond with text and images.


*UXsigh* What are we going to do with this thing? With an artificial intelligence literally steps behind them, why rely on a slow bunch of humans at all for answering questions and transmitting data? It might be better to just let the Sandmen do what they’re good at, and let the AI handle what it’s good at.

But OK, if it’s really that limited of an Übercomputer and can only focus on whatever is occupying it at the moment, at least make the controls usable by people. Let’s do the hard work of reducing the total number of controls, so they can be clustered all within easy reach rather than spread out so you have to move around just to operate them all. Or use your feet or whatever. Differentiate the controls so they are easy to tell apart by sight and touch rather than this undifferentiated mess. Let’s take out a paint pen and actually label the buttons. Do…do something.


This display could use some rethinking as well. It’s nice that it’s overhead, so that dispatch can be thinking about field strategy rather than ground tactics. But if that’s the case, it could use some design help and some strategic information. How about downplaying the saturation on the things that don’t matter that much, like walls and plants? Then the Sandmen can focus more on the interplay of the Runner and his assailants. Next you could augment the display with information about the runner, and perhaps a best-guess prediction of where they’re likely to run, maybe the health of individuals, or the amount of ammunition they have.
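(Downplaying saturation is itself nearly a one-line operation: blend each static layer’s color toward its own gray luminance. A minimal sketch, using the standard Rec. 709 luminance weights; the function and its `amount` parameter are my invention, purely to illustrate the suggestion.)

```python
def desaturate(rgb, amount):
    """Blend an RGB color toward its own luminance (Rec. 709 weights).

    amount=0 leaves the color alone; amount=1 turns it fully gray.
    A dispatch display could apply a high amount to static layers
    (walls, plants) and zero to the dots that represent people.
    """
    r, g, b = rgb
    gray = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return tuple(round(c + (gray - c) * amount) for c in (r, g, b))

# Saturated red wall color, pushed mostly into the background:
wall = desaturate((255, 0, 0), 0.8)  # -> (94, 43, 43)
```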

Which makes me realize that more than anything, this screen could use the hand of a real-time strategy game user interface designer, because that’s what they’re doing. The Sandmen are playing a deadly, deadly video game right here in this room, and they’re using a crappy interface to try and win it.

Virtual 3D Scanner



The film opens as a camera moves through an abstract, screen-green 3D projection of a cityscape. A police dispatch voice says,

“To all patrolling air units. A 208 is in progress in the C-13 district of Newport City. The airspace over this area will be closed. Repeat:…”

The camera floats to focus on two white triangles, which become two numbers, 267 and 268. The thuck-thuck sounds of a helicopter rotor appear in the background. The camera continues to drop below the numbers, but turns and points back up at them. When the view abruptly shifts to the real world, we see that 267 and 268 represent two police helicopters on patrol.



The roads on the map of the city are a slightly yellower green, and the buildings are a brighter and more saturated green. Having all of the colors on the display be so similar certainly sets a mood for the visualization, but it doesn’t do a lot for its readability. Working with broader color harmonies would help a reader distinguish the elements and scan for particular things.



The perspective of the projection is quite exaggerated. This serves partly as a modal cue to let the audience know that it’s not looking at some sort of emerald city, but it also hinders readability. The buildings are tall enough to obscure information behind them, and the extreme perspective makes it hard to understand their comparative heights or their relation to the helicopters, which is the ostensible point of the screen.


There are two ways to access and control this display. The first is direct brain access. The second is by a screen and keyboard.

Brain Access

Kusanagi and other cyborgs can jack in to the network and access this display. The jacks are in the backs of their necks, and as with most brain interfaces, there is no indication of what they’re doing with their thoughts to control the display. She also uses this jack interface to take control of the intercept van and drive it to the destination indicated on the map.

During this sequence the visual display is slightly different, removing any 3D information so that the route can be unobscured. This makes sense for wayfinding tasks, though 3D might help with first-person navigation tasks.


Screen and keyboard access

While Kusanagi is piloting an intercept van, she is in contact with a Section 9 control center. Though the 3D visualization might have been dismissed up to this point as a film conceit, here we see that it is the actual visualization seen by people in the diegesis. The information workers at Section 9 Control communicate with agents in the field through headsets, type on specialized keyboards, and watch a screen that displays the visualization.


Their use is again a different mode of the visualization: the information workers are using it to locate the garbage truck. The first screens they see show a large globe with a white graticule and an overlay reading “Global Positioning System Ver 3.27sp.” Dots of different sizes are positioned around the globe. Triangles then appear, along with an overlay listing latitude, longitude, and altitude. Three other options appear in the lower right: “Hunting, Navigation, and Auto.” The “Hunting” option is highlighted with a translucent kelly-green rectangle.

After a few seconds the system switches to focus on the large yellow triangle as it moves along screen-green roads. Important features of the road, like “Gate 13,” are labeled in a rare white serif font, floating above the road in 3D but mostly facing the user, casting a shadow on the road below. The projected path of the truck is drawn in a pea green. A kelly-green rectangle bears the legend “Game 121 mile/h / Hunter->00:05:22 ->Game.” The speed indicator changes over time, and the time indicator counts down. As the intercept van approaches the garbage truck, the screen displays an all-caps label in the lower-left corner reading, somewhat cryptically, “FULL COURSE CAUTION !!!”

The most usable mode

Despite the unfamiliar language and unclear labeling, this “Hunter” mode looks to be the most functional. The color is better, replacing the green background with a black one to create a clearer distinction between foreground and background for better focus. The camera angle is similar to a real-time-strategy angle of around 30 degrees from the ground, with a mild perspective that hints at the 3D but doesn’t distort. The roads’ 3D relationships to one another are shown with shape and shadow, and no 3D buildings are drawn, letting the user keep her focus on the target and the path of intercept.