Profiling “CAT” scan


After her escape from the nucleolab, Leeloo ends up on a thin ledge of a building, unsure where to go or what to do. As a police car hovers nearby, the officers use an onboard computer to try to match her identity against their database. One officer taps a few keys into an unseen keyboard, her photograph is taken, and the results display in about 8 seconds. Not surprisingly, it fails to find a match, and the user is told so with an unambiguous, red “NO FILE” banner across the screen.


This interface flies by very quickly, so it’s not meant to be read screen by screen. Still, the wireframes present a clear illustration of what the system is doing, and what the results are.

The system shouldn’t just provide dead ends like this, though. Any such system has to account for human faces changing over the time since the last capture: aging, plastic surgery, makeup, and disfiguring accidents, to name a few. Since Leeloo isn’t inhuman, it could provide some “closest match” results, perhaps with a confidence percentage alongside each one. Even if the confidence numbers were very low, that output would help the officers understand the problem lay with the subject, not with an incomplete database or a weak algorithm.
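To make that suggestion concrete, here’s a minimal sketch in Python (the function names and the 0.90 threshold are invented for illustration) of a matcher that always returns ranked closest matches with confidence scores, instead of a dead-end “NO FILE”:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length face-feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def closest_matches(query, database, top_k=3, match_threshold=0.90):
    """Rank database entries by similarity to the query face embedding.

    Rather than a bare "NO FILE", return the top-k candidates with a
    confidence score, plus a flag saying whether any of them clears the
    confident-match threshold. The officers still learn "no match", but
    they also see how close the misses were.
    """
    ranked = sorted(
        ((name, cosine_similarity(query, vec)) for name, vec in database.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )
    top = ranked[:top_k]
    confident = bool(top) and top[0][1] >= match_threshold
    return top, confident
```

With Leeloo, the `confident` flag would come back `False`, but the low scores on every candidate would tell the officers the subject, not the database, is the anomaly.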

One subtle element is that we don’t see or hear the officer telling the system where the perp is, or pointing a camera. He doesn’t even have to identify her face. It automatically finds her in the camera view, identifies her face, and starts scanning. The sliding green lines tell the officer what it’s finding, giving him confidence in its process, and offering an opportunity to intervene if it’s getting things wrong.

Nucleolab Progress Indicator


As the nucleolab is reconstructing Leeloo, the screen on the control panel provides updates detailing the process. For the most part these updates are a wireframe version of what everyone can see with their eyes.



The only time it describes something we can’t see with our own eyes is when Leeloo’s skin is being “baked” by an ultraviolet light under a metal cover. Of course we know this is a narrative device to heighten the power of the big reveal, but it’s also an opportunity for the interface to actually do something useful. It has a green countdown clock, and visualizes something that’s hidden from view.


As far as a progress indicator goes, it’s mostly useful. Mactilburgh presumably knows roughly how long things take and even the order of operations. All he needs is confirmation that his system is doing what it’s supposed to be doing, and the absence of an error is enough for him. The timer helps, too, since he’s like a kid waiting for an Easy Bake Oven…of science.

But Munro doesn’t know what the heck is going on. Sure he knows some of the basics of biology. There’s going to be a skeleton, some muscle, some nerves. But beyond that, he’s got a job to do, and that’s to take this thing out the minute it goes pear-shaped. So he needs to know: Is everything going OK? Should I pop the top on a tall boy of Big Red Button? It might be that the interface has some kind of Dire Warning mode for when things go off the rails, but that doesn’t help during the good times. Giving Munro some small indicator that things are going well would remove any ambiguity and set him at ease.
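That small all-is-well indicator could be as simple as collapsing every subsystem check into one unambiguous light. A hypothetical sketch (subsystem names invented):

```python
def reassurance_indicator(subsystems):
    """Collapse many subsystem health checks into one unambiguous signal.

    subsystems: dict mapping subsystem name -> bool (True = nominal).
    Returns ("NOMINAL", []) when everything is fine, otherwise
    ("ATTENTION", [failing names]) so a non-expert observer like Munro
    knows whether to reach for the Big Red Button.
    """
    failing = [name for name, ok in subsystems.items() if not ok]
    return ("NOMINAL", failing) if not failing else ("ATTENTION", failing)
```

Mactilburgh reads the detailed wireframes; Munro only needs the one-word summary.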

An argument could be made that you don’t want Munro at ease, but a false positive might kill Leeloo and risk the world. A false negative (or a late negative) just risks her escape. Which happens anyway. Fortunately for us.


Nucleolab Display


The scientist Mactilburgh reconstructs Leeloo from a bit of her remains in his “nucleolab.” We see a few interfaces here.


We never see Mactilburgh interact with the controls on this display: Potentiometers, dials with circular LED readout rings, glowing toggle buttons, and unlit buttons labeled “OFF” and “ESC.” There’s not much to grasp onto for analysis. This is just a “sciencey” set of physical controls. The display is a bit of similar scienciness, meant to vaguely convey that Leeloo is a higher-order being, but beyond that, incomprehensible. Interestingly, the Mondoshawan DNA shows not just a more detailed graphic, but adds color to convey an additional level of complexity.


An odd bit: In the lower right hand corner of the screen you can see the words “FAMILIAL HYPERCHOL TEROLEMIA.” Looking up this term reveals the genetic condition Familial Hypercholesterolemia. It’s only missing the “ES.” What’s this label doing here? This could be the area on the DNA chain where the markers appear for this predisposition to high cholesterol, but wouldn’t you expect that to take up 5000 times less room on a DNA strand of a perfect being, not the same percentage? Also it kind of takes the wind out of the sails of Mactilburgh’s breathless claim that she’s perfect. Anyway, it’s a cautionary lesson for sci-fi interface designers: Watch where you pull your sciencey words from. If it’s a real thing, ask whether the meaning runs counter to your purposes.

Taxi navigation


The taxi has a screen on the passenger’s side dashboard that faces the driver. This display does two things. First, it warns the driver when the taxi is about to be attacked. Second, it helps him navigate the complexities of New York circa 2163.

Warning system

After Korben decides to help Leeloo escape the police, they send a squadron of cop cars to apprehend them. And by apprehend I mean blow to smithereens. The moment Korben’s taxi is in their sights, they don’t try to detain or disable the vehicle, but to blast it to bits with bullets and more bullets. It seems this is a common enough thing to have happen that Korben’s on-board computer can detect it in advance and provide a big, flashing, noisemaking warning to this effect.


In many cases I object to the Big Label, but not here. In fact, for such a life-threatening issue, more of the taxi’s interface should highlight the seriousness. My life’s in danger? Go full red alert, car. Change the lights to crimson. Dim non-essential things. You’ve got an “automatic” button there. Does that include evasive maneuvering? If so, make that thing opt-out rather than opt-in. Help a brother out.

Navigation aid

At other times during the chase scene, Korben can glance at the screen to see a wireframe of the local surroundings. This interface has a lot of problems.

1. This would work much, much more safely and efficiently for Korben if it were a heads-up display on the windshield. Let’s shrink that feedback loop. Every time a driver glances down he risks a crash, and in this case, Korben risks the entire world. If HUD tech isn’t a part of the diegesis, audio cues might be some small help that don’t require him to take his eyes off the “road.”

2. How does the wireframe style help? It’s future-y of course, but it adds a lot of noise to what he’s got to process. He doesn’t need to understand tessellations of surfaces. He needs to understand the shapes and velocities of things around him so he can lose the tail.


(Exercise for the reader: Provide a solid diegetic explanation for why this screen appeared in the film flipped horizontally.)


3. There’s some missing information. If the onboard computer can do some real-time calculations and make a recommendation on the best next step, why not do it? We see above that the police have the same information that Korben does. So even better might be information on what the tail is likely to do so Korben can do the opposite. Or maneuvers that Korben can execute that the cop car can’t. If it’s possible to show places he should definitely not go, like dead ends or right into the path, say, of a firing squad of police cars, that would be useful to know, too.
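The dead-end warning, at least, would be cheap to compute if the car already holds the city as a graph of intersections and passages. A hypothetical sketch:

```python
def dead_ends(adjacency):
    """Flag intersections with a single connection: places a fleeing
    driver should definitely not enter, since the only way out is the
    way he came in.

    adjacency: dict mapping intersection -> set of connected intersections.
    """
    return {node for node, neighbors in adjacency.items() if len(neighbors) <= 1}
```

Rendering these in warning red on the 3D view would cost the system almost nothing and save Korben a wrong turn.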



4. What are those icons in the lower right meant to do? They’re not suggestions, as they appear only after Korben performs his maneuvers, and sometimes appear alongside warnings instead of maneuvers.

Even if they are suggestions, what are they directions to? His original destination? He didn’t have one. Some new destination? When did he provide it? Simple, goal-aware directions to safety? Whatever the information, these icons add a lot of cognitive weight and visual work. Surely there’s some more direct way to provide cues, like superimposing them on the 3D view so he can see the information rather than read and interpret it.

If they’re something other than suggestions, they’re just noise. In a pursuit scenario, you’d want to strip that stuff out of the interface.


5. What is that color gradient on the left meant to tell him? All the walls in this corridor are 350…what? The screen shot above hints that it represents simple height from the ground, but the 2D map has these colors as well, and height cues wouldn’t make sense there. If it is height, this information might help Korben quickly build a 3D mental map of the information he’s seeing. But using arbitrary colors forces him to remember what each color means. Better would be to use something with a natural order to it like the visible spectrum or black-body spectrum. Or, since people already have lots of experience with monocular distance cues and lighting from above, maybe a simple rendering as if the shapes were sunlit would be fastest to process. Taking advantage of any of these perceptual faculties would let him build a 3D model quickly so he can focus on what he’s going to do with the information.

Side note: Density might actually make a great deal more sense for the readout, knowing that Korben has a penchant for ramming his taxi through things. If this were the information being conveyed, varying degrees of transparency might have served him better, letting him know what he can smash through safely, and even what to expect on the other side.
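The height-as-color idea works if the colors have a natural order. A minimal sketch, assuming height is mapped along a black-body-style ramp (the anchor colors here are arbitrary placeholders):

```python
def heat_ramp(value, lo, hi):
    """Map a scalar (e.g. height above ground) to an RGB color along a
    black-body-style ramp: black -> red -> orange -> white.

    Because the ramp has a natural perceptual order, a viewer can read
    "brighter = higher" at a glance instead of memorizing an arbitrary
    color key.
    """
    # Anchor colors from cold/low to hot/high.
    stops = [(0, 0, 0), (200, 30, 0), (255, 160, 0), (255, 255, 255)]
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))  # normalize to [0, 1]
    scaled = t * (len(stops) - 1)
    i = min(int(scaled), len(stops) - 2)              # segment index
    frac = scaled - i                                 # position within segment
    return tuple(round(a + (b - a) * frac) for a, b in zip(stops[i], stops[i + 1]))
```

Whatever that mysterious “350” is, running it through an ordered ramp like this would let Korben build his 3D mental map without a legend.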

6. Having the 2D map helps a bit to understand the current level of the city from a top-down view. Having it be small in the upper right is a sound placement, since that’s a less-important subset of the information he really needs. It has some color coding but as mentioned above it doesn’t seem to relate to what’s colored in the 3D portion, which could make for an interpretation disaster. In any case, Korben shouldn’t have to read this information in the tiny map. It’s a mode, a distraction. While he’s navigating the alleys and tunnels of the city, he’s thinking in a kind of 3D node-graph. Respect that kind of thinking with a HUD that puts information on the “edges” of the graph, i.e., the holes in the surfaces around him that he’s looking at. That’s his locus of attention. That’s where he’s thinking. Augment that.

So, you know…bad

Fortunately, given that the interface has so many problems, Korben only really glances at this once during the chase, and that’s at the warning sound. But if the younger Korben was meant to use this at all, there’s a lot of work needed to make it useful rather than dangerous.

Missile Scan

Despite its defenses, Staedert continues with the attack against the evil planet, and several screens help the crew monitor the attack with the “120” missiles.


First, there is an overhead view of the space between the ship and the planet. The ship is represented as a red dot, the planet as a red wireframe, and the path of the missiles magnified as a large white wireframe column. A small legend in the upper right reads “CODIFY” with some confirmation text. Some large text confirms the missiles are “ACTIVE” and an inscrutable “W 6654” appears in the lower right.

As the missiles launch, their location is tracked along the axis of the column as three white dots. The small paragraph of text in the upper right corner scrolls quickly, displaying tracking information about them. A number in the upper left confirms the number of missiles. Below it, a pair of numeric variables tracks something evidently important. In the lower right, the label has changed to “SY 6654.” A red vertical line tracks with the missiles across the display, and draws the operator’s attention to another small pair of numeric variables that also follow along.

These missiles have no effect, so he sends a larger group of 9 “240” missiles. Operators watch their impact through the same display.




These screens are quite literal in the information they provide, i.e. physical objects in space, but abstract it in a way that helps a tactician keep track of and think about the important parts without the distraction of surface appearance, or, say, first-person perspective. Of all the scanner screens, these function the best, even if General Staedert’s tactics were ultimately futile.


Surface Scan


Later in the scene General Staedert orders a “thermonucleatic imaging.” The planet swallows it up. Then Staedert orders an “upfront loading of a 120-ZR missile” and in response to the order, the planet takes a preparatory defensive stance, armoring up like a pillbug. The scanner screens reflect this with a monitoring display.


In contrast to the prior screen for the Gravity (?) Scan, these screens make some sense. They show:

  • A moving pattern on the surface of a sphere slowing down
  • Clear Big Label indications when those variables hit an important threshold, which in this case is 0
  • A summary assessment, “ZERO SURFACE ACTIVITY”
  • A key on the left identifying what the colors and patterns mean
  • Some sciencey scatter plots on the right

The majority of these would directly help someone monitoring the planet for its key variables.


Though these are useful, it would be even more useful if the system tracked these variables not just when they hit a threshold, but also showed how they are trending. Waveforms like those used in medical monitoring of the “MOVEMENT LOCK,” “DYNAMIC FLOW,” and “DATA S C A T” variables might help the operator see a bit into the future rather than respond after the fact.
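The trend display needn’t be fancy. Even a least-squares line over recent samples would let the system predict when a falling variable will hit zero, before the Big Label fires. A sketch (hypothetical function, plain Python):

```python
def predict_zero_crossing(times, values):
    """Fit a least-squares line to recent samples of a monitored
    variable and extrapolate when it will reach zero.

    Returns the predicted time of the zero crossing, or None if the
    trend is flat or rising (no crossing ahead).
    """
    n = len(times)
    mean_t = sum(times) / n
    mean_v = sum(values) / n
    cov = sum((t - mean_t) * (v - mean_v) for t, v in zip(times, values))
    var = sum((t - mean_t) ** 2 for t in times)
    if var == 0:
        return None            # not enough distinct samples to fit a line
    slope = cov / var
    if slope >= 0:
        return None            # not trending toward zero
    intercept = mean_v - slope * mean_t
    return -intercept / slope  # solve slope * t + intercept = 0
```

An operator watching “SURFACE ACTIVITY” fall from 4000 could then be told “zero in 12 seconds” instead of being surprised by the threshold alarm.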

Human VPs

In the volumetric projection chapter of Make It So, we note that sci-fi makers take pains to distinguish the virtual from the real most often with a set of visual treatments derived from the “Pepper’s Ghost” parlor trick, augmented with additional technology cues: translucency, a blue tint, glowing whites, supersaturated colors for wireframed objects, clear pixels and/or flicker, with optional projection rays.

Prometheus has four types of VPs that adhere to this style in varying degrees. Individual displays (with their interactions) are discussed in other posts. This collection of posts compares their styles. This particular post describes the human VPs.


Blue-box displays

One type of human-technology VPs are the blue-box displays:

  • David’s language program
  • Halloway and Shaw’s mission briefing
  • The display in Shaw’s quarters

These adhere more closely to the Pepper’s Ghost style, being contained in a translucent blue cuboid with saturated surface graphics and a grid pattern on the sides.

Weyland-Yutani VP

The other type of human displays are the Weyland-Yutani VPs. These have translucency and supersaturated wireframes, but they do not have any of the other conventional Pepper’s Ghost cues. Instead they add two new visual cues to signal to the audience their virtualness: scaffolded transitions and edge embers.

When a Weyland-Yutani VP is turned on, it does not simply blink into view. It builds. First, shapes are sketched in space as a tessellated surface, made of yellow-green lines forming large triangles that roughly indicate the forthcoming object or its extents. These triangles have a faint smoky-yellow pattern on their surface. Some of the lines have yellow clouds and bright red segments along their lengths. Additionally, a few new triangles extend to a point in space where another piece of the projection is about to appear. Then the triangles disappear, replaced with a fully refined image of the 3D object. The refined image may flicker once or twice before settling into persistence. The whole scaffolding effect is staggered across time, providing an additional sense of flicker to the transition.


Motion in resolved parts of the VP begins immediately, even as other aspects of the VP are still transitioning on.

When a VP is turned off, this scaffolding happens in reverse, as elements decay into tessellated yellow wireframes before flickering out of existence.

Edge embers

A line of glowing, flickering, sliding, yellow-green points illustrates the extents of the VP area, where a continuous surface like flooring is clipped at the limits of the display. These continue across the duration of the playback.

A growing confidence in audiences

This slightly different strategy to distinguishing VPs from the real world indicates the filmmaker’s confidence that audiences are growing familiar enough with this trope that fewer cues are needed during the display. In this case the translucency and subtle edge embers are the only persistent cues, pushing the major signals of the scaffolding and surface flicker to the transitions.

If this trend continues and sci-fi makers become overconfident, it may confuse some audiences, but at the same time give the designers of the first real-world VPs more freedom with their appearance. They wouldn’t have to look like Star Wars’.

Something new: Projected Reflectance

One interesting detail is that when we see Vickers standing in the projection of Weyland’s office, she casts a slight reflection in the volumetric surface. It implies a technology capable of projecting not just luminance, but reflectivity as well. The ability to project volumetric mirrors hasn’t appeared before in the survey.


Lesson: Transition by importance

Another interesting detail is that when the introduction to the Mission briefing ends, the environment flickers out first, then the 2D background, then Weyland’s dog, then finally Weyland.

This order isn’t by position, brightness, motion, or even surface area (the dog confounds that). It is by narrative importance: Foreground, background, tertiary character, primary character. The fact that the surrounding elements fade first keeps your eyes glued onto the last motion (kind of like watching the last bit of sun at a sunset), which in this order is the most important thing in the feed, i.e. the human in view. If a staggered-element fade-out becomes a norm in the real world for video conferencing (or eventually VP conferencing), this cinematic order is worth remembering.
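That fade-out order is simple to reproduce: rank the scene’s elements by importance and stagger their start times, least important first. A hypothetical sketch:

```python
def fade_out_schedule(elements, interval=0.5):
    """Build a staggered fade-out schedule ordered by narrative importance.

    elements: dict mapping element name -> importance (higher = more
    important, so it fades last). Returns (element, start_time) pairs,
    least important element first.
    """
    ordered = sorted(elements, key=lambda name: elements[name])
    return [(name, i * interval) for i, name in enumerate(ordered)]
```

For the mission briefing, the environment would get time 0.0 and Weyland himself the final slot, exactly the order seen on screen.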